From 625d6501c9080ea231db859af4fee159f141754f Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 22 Feb 2018 10:24:31 +0800 Subject: [PATCH 01/81] =?UTF-8?q?PRF&PUB:20171208=20Sessions=20And=20Cooki?= =?UTF-8?q?es=20=E2=80=93=20How=20Does=20User-Login=20Work.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @lujun9972 --- ... And Cookies – How Does User-Login Work.md | 73 +++++++++++++++++++ ... And Cookies – How Does User-Login Work.md | 72 ------------------ 2 files changed, 73 insertions(+), 72 deletions(-) create mode 100644 published/20171208 Sessions And Cookies – How Does User-Login Work.md delete mode 100644 translated/tech/20171208 Sessions And Cookies – How Does User-Login Work.md diff --git a/published/20171208 Sessions And Cookies – How Does User-Login Work.md b/published/20171208 Sessions And Cookies – How Does User-Login Work.md new file mode 100644 index 0000000000..e034b55a67 --- /dev/null +++ b/published/20171208 Sessions And Cookies – How Does User-Login Work.md @@ -0,0 +1,73 @@ +会话与 Cookie:用户登录的原理是什么? +====== + +Facebook、 Gmail、 Twitter 是我们每天都会用的网站(LCTT 译注:才不是呢)。它们的共同点在于都需要你登录进去后才能做进一步的操作。只有你通过认证并登录后才能在 twitter 发推,在 Facebook 上评论,以及在 Gmail上处理电子邮件。 + +[![gmail, facebook login page](http://www.theitstuff.com/wp-content/uploads/2017/10/Untitled-design-1.jpg)][1] + +那么登录的原理是什么?网站是如何认证的?它怎么知道是哪个用户从哪儿登录进来的?下面我们来对这些问题进行一一解答。 + +### 用户登录的原理是什么? + +每次你在网站的登录页面中输入用户名和密码时,这些信息都会发送到服务器。服务器随后会将你的密码与服务器中的密码进行验证。如果两者不匹配,则你会得到一个错误密码的提示。如果两者匹配,则成功登录。 + +### 登录时发生了什么? + +登录后,web 服务器会初始化一个会话session并在你的浏览器中设置一个 cookie 变量。该 cookie 变量用于作为新建会话的一个引用。搞晕了?让我们说的再简单一点。 + +### 会话的原理是什么? + +服务器在用户名和密码都正确的情况下会初始化一个会话。会话的定义很复杂,你可以把它理解为“关系的开始”。 + +[![session beginning of a relationship or partnership](http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-9.png)][2] + +认证通过后,服务器就开始跟你展开一段关系了。由于服务器不能象我们人类一样看东西,它会在我们的浏览器中设置一个 cookie 来将我们的关系从其他人与服务器的关系标识出来。 + +### 什么是 Cookie? + +cookie 是网站在你的浏览器中存储的一小段数据。你应该已经见过他们了。 + +[![theitstuff official facebook page cookies](http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-1-4.png)][3] + +当你登录后,服务器为你创建一段关系或者说一个会话,然后将唯一标识这个会话的会话 id 以 cookie 的形式存储在你的浏览器中。 + +### 什么意思? + +所有这些东西存在的原因在于识别出你来,这样当你写评论或者发推时,服务器能知道是谁在发评论,是谁在发推。 + +当你登录后,会产生一个包含会话 id 的 cookie。这样,这个会话 id 就被赋予了那个输入正确用户名和密码的人了。 + +[![facebook cookies in web browser](http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-2-3-e1508926255472.png)][4] + +也就是说,会话 id 被赋予给了拥有这个账户的人了。之后,所有在网站上产生的行为,服务器都能通过他们的会话 id 来判断是由谁发起的。 + +### 如何让我保持登录状态? 
+ +会话有一定的时间限制。这一点与现实生活中不一样,现实生活中的关系可以在不见面的情况下持续很长一段时间,而会话具有时间限制。你必须要不断地通过一些动作来告诉服务器你还在线。否则的话,服务器会关掉这个会话,而你会被登出。 + +[![websites keep me logged in option](http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-3-3-e1508926314117.png)][5] + +不过在某些网站上可以启用“保持登录”功能,这样服务器会将另一个唯一变量以 cookie 的形式保存到我们的浏览器中。这个唯一变量会通过与服务器上的变量进行对比来实现自动登录。若有人盗取了这个唯一标识(我们称之为 cookie stealing),他们就能访问你的账户了。 + +### 结论 + +我们讨论了登录系统的工作原理以及网站是如何进行认证的。我们还学到了什么是会话和 cookies,以及它们在登录机制中的作用。 + +我们希望你们以及理解了用户登录的工作原理,如有疑问,欢迎提问。 + +-------------------------------------------------------------------------------- + +via: http://www.theitstuff.com/sessions-cookies-user-login-work + +作者:[Rishabh Kandari][a] +译者:[lujun9972](https://github.com/lujun9972) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.theitstuff.com/author/reevkandari +[1]:http://www.theitstuff.com/wp-content/uploads/2017/10/Untitled-design-1.jpg +[2]:http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-9.png +[3]:http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-1-4.png +[4]:http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-2-3-e1508926255472.png +[5]:http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-3-3-e1508926314117.png diff --git a/translated/tech/20171208 Sessions And Cookies – How Does User-Login Work.md b/translated/tech/20171208 Sessions And Cookies – How Does User-Login Work.md deleted file mode 100644 index 16b04b3e6c..0000000000 --- a/translated/tech/20171208 Sessions And Cookies – How Does User-Login Work.md +++ /dev/null @@ -1,72 +0,0 @@ -Sessions 与 Cookies – 用户登录的原理是什么? -====== -Facebook, Gmail, Twitter 是我们每天都会用的网站. 它们的共同点在于都需要你登录进去后才能做进一步的操作. 只有你通过认证并登录后才能在 twitter 发推, 在 Facebook 上评论,以及在 Gmail上处理电子邮件. - - [![gmail, facebook login page](http://www.theitstuff.com/wp-content/uploads/2017/10/Untitled-design-1.jpg)][1] - -那么登录的原理是什么? 网站是如何认证的? 它怎么知道是哪个用户从哪儿登录进来的? 下面我们来对这些问题进行一一解答. - -### 用户登录的原理是什么? - -每次你在网站的登录页面中输入用户名和密码时, 这些信息都会发送到服务器. 服务器随后会将你的密码与服务器中的密码进行验证. 如果两者不匹配, 则你会得到一个错误密码的提示. 如果两则匹配, 则成功登录. - -### 登陆时发生了什么? - -登录后, web 服务器会初始化一个 session 并在你的浏览器中设置一个 cookie 变量. 该 cookie 变量用于作为新建 session 的一个引用. 搞晕了? 让我们说的再简单一点. - -### 会话的原理是什么? - -服务器在用户名和密码都正确的情况下会初始化一个 session. Sessions 的定义很复杂,你可以把它理解为 `关系的开始`. - - [![session beginning of a relationship or partnership](http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-9.png)][2] - -认证通过后, 服务器就开始跟你展开一段关系了. 由于服务器不能象我们人类一样看东西, 它会在我们的浏览器中设置一个 cookie 来将我们的关系从其他人与服务器的关系标识出来. - -### 什么是 Cookie? - -cookie 是网站在你的浏览器中存储的一小段数据. 你应该已经见过他们了. - - [![theitstuff official facebook page cookies](http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-1-4.png)][3] - -当你登录后,服务器为你创建一段关系或者说一个 session, 然后将唯一标识这个 session 的 session id 以 cookie 的形式存储在你的浏览器中. - -### 什么意思? - -所有这些东西存在的原因在于识别出你来,这样当你写评论或者发推时, 服务器能知道是谁在发评论,是谁在发推. - -当你登录后, 会产生一个包含 session id 的 cookie. 这样, 这个 session id 就被赋予了那个输入正确用户名和密码的人了. - - [![facebook cookies in web browser](http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-2-3-e1508926255472.png)][4] - -也就是说, session id 被赋予给了拥有这个账户的人了. 之后,所有在网站上产生的行为, 服务器都能通过他们的 session id 来判断是由谁发起的. - -### 如何让我保持登录状态? - -session 有一定的时间限制. 这一点与现实生活中不一样,现实生活中的关系可以在不见面的情况下持续很长一段时间, 而 session 具有时间限制. 你必须要不断地通过一些动作来告诉服务器你还在线. 否则的话,服务器会关掉这个 session,而你会被登出. 
- - [![websites keep me logged in option](http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-3-3-e1508926314117.png)][5] - -不过在某些网站上可以启用 `保持登录(Keep me logged in)`, 这样服务器会将另一个唯一变量以 cookie 的形式保存到我们的浏览器中. 这个唯一变量会通过与服务器上的变量进行对比来实现自动登录. 若有人盗取了这个唯一标识(我们称之为 cookie stealing), 他们就能访问你的账户了. - -### 结论 - -我们讨论了登录系统的工作原理以及网站是如何进行认证的. 我们还学到了什么是 sessions 和 cookies,以及它们在登录机制中的作用. - -我们希望你们以及理解了用户登录的工作原理, 如有疑问, 欢迎提问. - --------------------------------------------------------------------------------- - -via: http://www.theitstuff.com/sessions-cookies-user-login-work - -作者:[Rishabh Kandari][a] -译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.theitstuff.com/author/reevkandari -[1]:http://www.theitstuff.com/wp-content/uploads/2017/10/Untitled-design-1.jpg -[2]:http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-9.png -[3]:http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-1-4.png -[4]:http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-2-3-e1508926255472.png -[5]:http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-3-3-e1508926314117.png From 29b605b514e04773786a0e4ba0e9cacc95a695a8 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Thu, 22 Feb 2018 10:28:32 +0800 Subject: [PATCH 02/81] =?UTF-8?q?Rename=2020171208=20Sessions=20And=20Cook?= =?UTF-8?q?ies=20=E2=80=93=20How=20Does=20User-Login=20Work.md=20to=202017?= =?UTF-8?q?1208=20Sessions=20And=20Cookies=20-=20How=20Does=20User-Login?= =?UTF-8?q?=20Work.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... => 20171208 Sessions And Cookies - How Does User-Login Work.md} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename published/{20171208 Sessions And Cookies – How Does User-Login Work.md => 20171208 Sessions And Cookies - How Does User-Login Work.md} (100%) diff --git a/published/20171208 Sessions And Cookies – How Does User-Login Work.md b/published/20171208 Sessions And Cookies - How Does User-Login Work.md similarity index 100% rename from published/20171208 Sessions And Cookies – How Does User-Login Work.md rename to published/20171208 Sessions And Cookies - How Does User-Login Work.md From c11388c838c89f501483d7ff764dbac4663bb0f0 Mon Sep 17 00:00:00 2001 From: Sihua Zheng Date: Thu, 22 Feb 2018 10:28:56 +0800 Subject: [PATCH 03/81] translated --- ... ftp-https download speed on Linux-UNIX.md | 84 ------------------- ... ftp-https download speed on Linux-UNIX.md | 81 ++++++++++++++++++ 2 files changed, 81 insertions(+), 84 deletions(-) delete mode 100644 sources/tech/20070810 How to use lftp to accelerate ftp-https download speed on Linux-UNIX.md create mode 100644 translated/tech/20070810 How to use lftp to accelerate ftp-https download speed on Linux-UNIX.md diff --git a/sources/tech/20070810 How to use lftp to accelerate ftp-https download speed on Linux-UNIX.md b/sources/tech/20070810 How to use lftp to accelerate ftp-https download speed on Linux-UNIX.md deleted file mode 100644 index 52a9c4c89f..0000000000 --- a/sources/tech/20070810 How to use lftp to accelerate ftp-https download speed on Linux-UNIX.md +++ /dev/null @@ -1,84 +0,0 @@ -translating---geekpi - -How to use lftp to accelerate ftp/https download speed on Linux/UNIX -====== -lftp is a file transfer program. It allows sophisticated FTP, HTTP/HTTPS, and other connections. 
If the site URL is specified, then lftp will connect to that site; otherwise, a connection has to be established with the `open` command. It is an essential tool for all Linux/Unix command-line users. I have already written about [ultra-fast command-line download accelerators for Linux][1] such as Axel and prozilla. lftp is another tool for the same job with more features. lftp can handle eight file access methods:

  1. ftp
  2. ftps
  3. http
  4. https
  5. hftp
  6. fish
  7. sftp
  8. file

### So what is unique about lftp?

  * Every operation in lftp is reliable: any non-fatal error is ignored and the operation is retried. So if a download breaks, it will be restarted from that point automatically. Even if the FTP server does not support the REST command, lftp will try to retrieve the file from the very beginning until the file is transferred completely.
  * lftp has a shell-like command syntax allowing you to launch several commands in parallel in the background.
  * lftp has a built-in mirror that can download or update a whole directory tree. There is also a reverse mirror (`mirror -R`) that uploads or updates a directory tree on the server. The mirror can also synchronize directories between two remote servers, using FXP if available.

### How to use lftp as a download accelerator

lftp has a `pget` command that lets you download files in parallel. The syntax is:
`lftp -e 'pget -n NUM -c url; exit'`
For example, download a file using `pget` in 5 parts:
```
$ cd /tmp
$ lftp -e 'pget -n 5 -c http://kernel.org/pub/linux/kernel/v2.6/linux-2.6.22.2.tar.bz2'
```
Sample output:
```
45108964 bytes transferred in 57 seconds (775.3K/s)
lftp :~>quit
```

Where:

  1. pget – download files in parallel
  2. -n 5 – set the maximum number of connections to 5
  3. -c – continue a broken transfer if lfile.lftp-pget-status exists in the current directory

### How to use lftp to accelerate ftp/https downloads on Linux/Unix

Another try, this time adding the exit command:
`$ lftp -e 'pget -n 10 -c https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.15.tar.xz; exit'`

[Linux lftp command demo](https://www.cyberciti.biz/tips/wp-content/uploads/2007/08/Linux-lftp-command-demo.mp4)

### A note about parallel downloading

Please note that by using a download accelerator you are going to put a higher load on the remote host. Also note that lftp may not work with sites that do not support multi-source downloads or that block such requests at the firewall level.

The lftp command offers many other features. Refer to the [lftp][2] man page for more information:
`man lftp`

### About the author

The author is the creator of nixCraft, a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][3], [Facebook][4], [Google+][5]. Get the **latest tutorials on SysAdmin, Linux/Unix and open source topics via [my RSS/XML feed][6]**.
- --------------------------------------------------------------------------------- - -via: https://www.cyberciti.biz/tips/linux-unix-download-accelerator.html - -作者:[Vivek Gite][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.cyberciti.biz -[1]:https://www.cyberciti.biz/tips/download-accelerator-for-linux-command-line-tools.html -[2]:https://lftp.yar.ru/ -[3]:https://twitter.com/nixcraft -[4]:https://facebook.com/nixcraft -[5]:https://plus.google.com/+CybercitiBiz -[6]:https://www.cyberciti.biz/atom/atom.xml diff --git a/translated/tech/20070810 How to use lftp to accelerate ftp-https download speed on Linux-UNIX.md b/translated/tech/20070810 How to use lftp to accelerate ftp-https download speed on Linux-UNIX.md new file mode 100644 index 0000000000..9fa97d794b --- /dev/null +++ b/translated/tech/20070810 How to use lftp to accelerate ftp-https download speed on Linux-UNIX.md @@ -0,0 +1,81 @@ +如何使用 lftp 来加速 Linux/UNIX 上的 ftp/https 下载速度 +====== +lftp 是一个文件传输程序。它可以用复杂的 FTP, HTTP/HTTPS 和其他连接。如果指定了站点 URL,那么 lftp 将连接到该站点,否则会使用 open 命令建立连接。它是所有 Linux/Unix 命令行用户的必备工具。我目前写了一些关于[ Linux 下超快命令行下载加速器][1],比如 Axel 和 prozilla。lftp 是另一个能做相同的事,但有更多功能的工具。lftp 可以处理七种文件访问方式: + + 1. ftp + 2. ftps + 3. http + 4. https + 5. hftp + 6. fish + 7. sftp + 8. file + + + +### 那么 lftp 的独特之处是什么? + + * lftp 中的每个操作都是可靠的,即任何非致命错误都被忽略,并且重复操作。所以如果下载中断,它会自动重新启动。即使 FTP 服务器不支持 REST 命令,lftp 也会尝试从开头检索文件,直到文件传输完成。 +  * lftp 具有类似 shell 的命令语法,允许你在后台并行启动多个命令。 +  * lftp 有一个内置镜像,可以下载或更新整个目录树。还有一个反向镜像(mittor -R),它可以上传或更新服务器上的目录树。镜像也可以在两个远程服务器之间同步目录,如果可用的话会使用 FXP。 + + +### 如何使用 lftp 作为下载加速器 + +lftp 有 pget 命令。它能让你并行下载。语法是: +`lftp -e 'pget -n NUM -c url; exit'` +例如,使用 pget 分 5个部分下载 : +``` +$ cd /tmp +$ lftp -e 'pget -n 5 -c http://kernel.org/pub/linux/kernel/v2.6/linux-2.6.22.2.tar.bz2' +``` +示例输出: +``` +45108964 bytes transferred in 57 seconds (775.3K/s) +lftp :~>quit + +``` + +这里: + + 1. pget - 并行下载文件 +  2. -n 5 - 将最大连接数设置为 5 +  3. 
  3. -c - 如果当前目录存在 lfile.lftp-pget-status,则继续中断的传输

### 如何在 Linux/Unix 中使用 lftp 来加速 ftp/https 下载

再试一次,这回加上 exit 命令:
`$ lftp -e 'pget -n 10 -c https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.15.tar.xz; exit'`

[Linux lftp 命令演示](https://www.cyberciti.biz/tips/wp-content/uploads/2007/08/Linux-lftp-command-demo.mp4)

### 关于并行下载的说明

请注意,使用下载加速器会加重远程服务器的负载。另请注意,lftp 可能无法在不支持多点下载的站点上工作,此类请求也可能被防火墙阻止。

lftp 命令还提供了许多其他功能。有关更多信息,请参考 [lftp][2] 的 man 页面:
`man lftp`

### 关于作者

作者是 nixCraft 的创建者,是一名经验丰富的系统管理员,也是 Linux 操作系统/Unix shell 脚本的培训师。他曾与全球客户合作,并服务于 IT、教育、国防、太空研究以及非营利等多个行业。在 [Twitter][3]、[Facebook][4]、[Google+][5] 上关注他。通过[我的 RSS/XML 订阅][6]获取**最新的系统管理、Linux/Unix 以及开源主题教程**。

--------------------------------------------------------------------------------

via: https://www.cyberciti.biz/tips/linux-unix-download-accelerator.html

作者:[Vivek Gite][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.cyberciti.biz
[1]:https://www.cyberciti.biz/tips/download-accelerator-for-linux-command-line-tools.html
[2]:https://lftp.yar.ru/
[3]:https://twitter.com/nixcraft
[4]:https://facebook.com/nixcraft
[5]:https://plus.google.com/+CybercitiBiz
[6]:https://www.cyberciti.biz/atom/atom.xml

From db039195c21c8cc2719a22443ccb7b89da6ea33e Mon Sep 17 00:00:00 2001
From: wxy
Date: Thu, 22 Feb 2018 10:32:31 +0800
Subject: [PATCH 04/81] PRF&PUB:20171226 Dockerizing Compiled Software - Tianon-s Ramblings .md

@geekpi
---
 ...ng Compiled Software - Tianon-s Ramblings .md | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)
 rename {translated/tech => published}/20171226 Dockerizing Compiled Software - Tianon-s Ramblings .md (77%)

diff --git a/translated/tech/20171226 Dockerizing Compiled Software - Tianon-s Ramblings .md b/published/20171226 Dockerizing Compiled Software - Tianon-s Ramblings .md
index bc5ed80216..47aa34ecf4 100644
--- a/translated/tech/20171226 Dockerizing Compiled Software - Tianon-s Ramblings .md
+++ b/published/20171226 Dockerizing Compiled Software - Tianon-s Ramblings .md
@@ -1,11 +1,11 @@
如何 Docker 化编译的软件
======

我最近在 [docker-library/php][1] 仓库中关闭了大量问题,最老的(并且是最长的)讨论之一是关于安装编译扩展的依赖关系,我写了一个[中等篇幅的评论][2],解释了我通常是如何为我想要的软件进行 Docker 化的。

我要在这里复制大部分的评论内容,或许再扩展一点点,以便有一个更好、更干净的地方可以链接!

我的第一步是编写 `Dockerfile` 的原始版本:下载源码,运行 `./configure && make` 等,然后清理。接着我尝试构建这个原始版本,并希望在构建过程中看到错误消息。(对,真的!)
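下面用几条 shell 命令大致勾勒这种“原始版本”会做的事(其中源码地址、包名和目录名都是虚构的示例;实际使用时,这些步骤会以 `RUN` 指令的形式写进 Dockerfile):

```
# “原始版本”的典型步骤:取源码、解包、配置、编译、安装、清理
curl -fsSL https://example.com/xyz-1.0.tar.gz -o xyz-1.0.tar.gz
tar -xzf xyz-1.0.tar.gz
cd xyz-1.0
./configure && make && make install
cd .. && rm -rf xyz-1.0 xyz-1.0.tar.gz
```

如果构建在 `./configure` 这一步停住,接下来的排错思路就如下文所述。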
错误信息通常以 `error: could not find "xyz.h"` 或 `error: libxyz development headers not found` 的形式出现。 @@ -13,9 +13,9 @@ I'm going to copy most of that comment here and perhaps expand a little bit more 如果我在 Alpine 中构建,我将使用 进行类似的搜索。 -“libxyz development headers” 在某种程度上也是一样的,但是根据我的经验,对于这些 Google 对开发者来说效果更好,因为不同的发行版和项目会以不同的名字来调用这些开发包,所以有时候更难确切的知道哪一个是“正确”的。 +“libxyz development headers” 在某种程度上也是一样的,但是根据我的经验,对于这些用 Google 对开发者来说效果更好,因为不同的发行版和项目会以不同的名字来调用这些开发包,所以有时候更难确切的知道哪一个是“正确”的。 -当我得到包名后,我将这个包名称添加到我的 `Dockerfile` 中,清理之后,然后重复操作。最终通常会构建成功。偶尔我发现某些库不在 Debian 或 Alpine 中,或者是不够新的,由此我必须从源码构建它,但这些情况在我的经验中很少见 - 因人而异。 +当我得到包名后,我将这个包名称添加到我的 `Dockerfile` 中,清理之后,然后重复操作。最终通常会构建成功。偶尔我发现某些库不在 Debian 或 Alpine 中,或者是不够新的,由此我必须从源码构建它,但这些情况在我的经验中很少见 —— 因人而异。 我还会经常查看 Debian(通过 )或 Alpine(通过 )我要编译的软件包源码,特别关注 `Build-Depends`(如 [`php7.0=7.0.26-1` 的 `debian/control` 文件][3])以及/或者 `makedepends` (如 [`php7` 的 `APKBUILD` 文件][4])用于包名线索。 @@ -31,7 +31,7 @@ via: https://tianon.github.io/post/2017/12/26/dockerize-compiled-software.html 作者:[Tianon Gravi][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 58d4aa1b868b255d52693c7a92055158d4400b5f Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 22 Feb 2018 10:35:45 +0800 Subject: [PATCH 05/81] PRF&PUB:20171231 Making Vim Even More Awesome With These Cool Features.md @stevenzdg988 --- ...n More Awesome With These Cool Features.md | 108 +++++++++++++++++ ...n More Awesome With These Cool Features.md | 109 ------------------ 2 files changed, 108 insertions(+), 109 deletions(-) create mode 100644 published/20171231 Making Vim Even More Awesome With These Cool Features.md delete mode 100644 translated/tech/20171231 Making Vim Even More Awesome With These Cool Features.md diff --git a/published/20171231 Making Vim Even More Awesome With These Cool Features.md b/published/20171231 Making Vim Even More Awesome With These Cool Features.md new file mode 100644 index 0000000000..34a435d964 --- /dev/null +++ b/published/20171231 Making Vim Even More Awesome With These Cool Features.md @@ -0,0 +1,108 @@ +用一些超酷的功能使 Vim 变得更强大 +====== + +![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/making-vim-even-more-awesome-with-these-cool-features_orig.jpg) + +Vim 是每个 Linux 发行版]中不可或缺的一部分,也是 Linux 用户最常用的工具(当然是基于终端的)。至少,这个说法对我来说是成立的。人们可能会在利用什么工具进行程序设计更好方面产生争议,的确 Vim 可能不是一个好的选择,因为有很多不同的 IDE 或其它类似于 Sublime Text 3,Atom 等使程序设计变得更加容易的成熟的文本编辑器。 + +### 我的感想 + +但我认为,Vim 应该从一开始就以我们想要的方式运作,而其它编辑器让我们按照已经设计好的方式工作,实际上不是我们想要的工作方式。我不会过多地谈论其它编辑器,因为我没有过多地使用过它们(我对 Vim 情有独钟)。 + +不管怎样,让我们用 Vim 来做一些事情吧,它完全可以胜任。 + +### 利用 Vim 进行程序设计 + +#### 执行代码 + + +考虑一个场景,当我们使用 Vim 设计 C++ 代码并需要编译和运行它时,该怎么做呢。 + +(a). 我们通过 `Ctrl + Z` 返回到终端,或者利用 `:wq` 保存并退出。 + +(b). 但是任务还没有结束,接下来需要在终端上输入类似于 `g++ fileName.cxx` 的命令进行编译。 + +(c). 接下来需要键入 `./a.out` 执行它。 + +为了让我们的 C++ 代码在 shell 中运行,需要做很多事情。但这似乎并不是利用 Vim 操作的方法( Vim 总是倾向于把几乎所有操作方法利用一两个按键实现)。那么,做这些事情的 Vim 的方式究竟是什么? 
+ +#### Vim 方式 + +Vim 不仅仅是一个文本编辑器,它是一种编辑文本的编程语言。这种帮助我们扩展 Vim 功能的编程语言是 “VimScript”(LCTT 译注: Vim 脚本)。 + +因此,在 VimScript 的帮助下,我们可以只需一个按键轻松地将编译和运行代码的任务自动化。 + + [![create functions in vim .vimrc](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_orig.png)][2] + +以上是在我的 `.vimrc` 配置文件里创建的一个名为 `CPP()` 函数的片段。 + +#### 利用 VimScript 创建函数 + +在 VimScript 中创建函数的语法非常简单。它以关键字 `func` 开头,然后是函数名(在 VimScript 中函数名必须以大写字母开头,否则 Vim 将提示错误)。在函数的结尾用关键词 `endfunc`。 + +在函数的主体中,可以看到 `exec` 语句,无论您在 `exec` 关键字之后写什么,都会在 Vim 的命令模式上执行(记住,就是在 Vim 窗口的底部以 `:` 开始的命令)。现在,传递给 `exec` 的字符串是(LCTT 译注:`:!clear && g++ % && ./a.out`) - + +[![vim functions commands & symbols](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_1_orig.png)][3] + + +当这个函数被调用时,它首先清除终端屏幕,因此只能看到输出,接着利用 `g++` 执行正在处理的文件,然后运行由前一步编译而形成的 `a.out` 文件。 + +#### 将 `Ctrl+r` 映射为运行 C++ 代码。 + +我将语句 `call CPP()` 映射到键组合 `Ctrl+r`,以便我现在可以按 `Ctrl+r` 来执行我的 C++ 代码,无需手动输入`:call CPP()` ,然后按回车键。 + +#### 最终结果 + +我们终于找到了 Vim 方式的操作方法。现在,你只需按一个(组合)键,你编写的 C++ 代码就输出在你的屏幕上,你不需要键入所有冗长的命令了。这也节省了你的时间。 + +我们也可以为其他语言实现这类功能。 + + [![create function in vim for python](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_2_orig.png)][4] + +对于Python:您可以按下 `Ctrl+e` 解释执行您的代码。 + + [![create function in vim for java](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_3_orig.png)][5] + + +对于Java:您现在可以按下 `Ctrl+j`,它将首先编译您的 Java 代码,然后执行您的 Java 类文件并显示输出。 + +### 进一步提高 + +所以,这就是如何在 Vim 中操作的方法。现在,我们来看看如何在 Vim 中实现所有这些。我们可以直接在 Vim 中使用这些代码片段,而另一种方法是使用 Vim 中的自动命令 `autocmd`。`autocmd` 的优点是这些命令无需用户调用,它们在用户所提供的任何特定条件下自动执行。 + +我想用 `autocmd` 实现这个,而不是对每种语言使用不同的映射,执行不同程序设计语言编译出的代码。 + + [![autocmd in vimrc](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_4_orig.png)][6] + +在这里做的是,为所有的定义了执行相应文件类型代码的函数编写了自动命令。 + +会发生什么?当我打开任何上述提到的文件类型的缓冲区, Vim 会自动将 `Ctrl + r` 映射到函数调用,而 `` 表示回车键,这样就不需要每完成一个独立的任务就按一次回车键了。 + +为了实现这个功能,您只需将函数片段添加到 `.vimrc` 文件中,然后将所有这些 `autocmd` 也一并添加进去。这样,当您下一次打开 Vim 时,Vim 将拥有所有相应的功能来执行所有具有相同绑定键的代码。 + +### 总结 + +就这些了。希望这些能让你更爱 Vim 。我目前正在探究 Vim 中的一些内容,正阅读文档,补充 `.vimrc` 文件,当我研究出一些成果后我会再次与你分享。 + +如果你想看一下我现在的 `.vimrc` 文件,这是我的 Github 账户的链接: [MyVimrc][7]。 + +期待你的好评。 + +-------------------------------------------------------------------------------- + +via: http://www.linuxandubuntu.com/home/making-vim-even-more-awesome-with-these-cool-features + +作者:[LINUXANDUBUNTU][a] +译者:[stevenzdg988](https://github.com/stevenzdg988) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.linuxandubuntu.com +[1]:http://www.linuxandubuntu.com/home/category/distros +[2]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_orig.png +[3]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_1_orig.png +[4]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_2_orig.png +[5]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_3_orig.png +[6]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_4_orig.png +[7]:https://github.com/phenomenal-ab/VIm-Configurations/blob/master/.vimrc diff --git a/translated/tech/20171231 Making Vim Even More Awesome With These Cool Features.md b/translated/tech/20171231 Making Vim Even More Awesome With These Cool Features.md deleted file mode 100644 index 9a23cbf124..0000000000 --- a/translated/tech/20171231 Making Vim Even More Awesome With These Cool Features.md +++ /dev/null @@ -1,109 +0,0 @@ -用一些超酷的功能使 Vim 变得更强大 -====== - -![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/making-vim-even-more-awesome-with-these-cool-features_orig.jpg) - 
-**Vim** 是每个 [Linux 发行版][1] 中不可或缺的一部分,也是 Linux 用户最常用的工具(当然是基于终端的)。至少,这个说法对我来说是成立的。人们可能会在利用什么工具进行程序设计更好产生争议,的确 Vim 可能不是一个好的选择,因为有很多不同的 IDE 或其他高性能的类似于 `Sublime Text 3`,`Atom` 等使程序设计变得更加容易的文本编辑器。 -#### 我的感想 - -但我认为,Vim 应该从一开始就以我们想要的方式运作,而其他编辑器让我们按照已经设计好的方式工作,实际上不是我们想要的工作方式。我不能过多地谈论其他编辑器,因为我没有过多地使用它们(我对 Vim 有偏见` Linux 中国注:我对 Vim 情有独钟`)。 - -不管怎样,让我们用 Vim 来做一些事情吧,它完全可以胜任。 -### 利用 Vim 进行程序设计 - -#### 执行代码 - - -考虑一个场景,当我们使用 Vim 设计 C++ 代码并需要编译和运行它时,该怎么做呢。 - -(a). 我们通过 `(Ctrl + Z)` 返回到终端,或者利用 `(:wq)` 保存并退出。 - -(b). 但是任务还没有结束,接下来需要在终端上输入类似于 `g++ fileName.cxx` 的命令进行编译。 - -(c). 接下来需要键入 `./a.out` 执行它。 - - -为了让我们的 C++ 代码在 shell 中运行,需要做很多事情。但这似乎并不是利用 Vim 操作的方法( Vim 总是倾向于把几乎所有操作方法利用一个/两个按键实现)。那么,做这些事情的 Vim 的方式究竟是什么? -#### Vim 方式 - - -Vim 不仅仅是一个文本编辑器,它是一种编辑文本的编程语言。这种帮助我们扩展 Vim 功能的编程语言是 `“VimScript”(Linux 中国注: Vim 脚本)`。 - -因此,在 `VimScript` 的帮助下,我们可以只需一个按键轻松地将编译和运行代码的任务自动化。 - [![create functions in vim .vimrc](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_orig.png)][2] - - -以上是在我的 `.vimrc` 配置文件里创建的一个名为 CPP() 函数的片段。 -#### 利用 VimScript 创建函数 - - -在VimScript中创建函数的语法非常简单。它以关键字“ -**func** -”开头,然后是函数名[在 VimScript 中函数名必须以大写字母开头,否则 Vim 将提示错误]。在函数的结尾用关键词 -“**endfunc** -”。 -在函数的主体中,可以看到 -**exec** -声明,无论您在 **exec** 关键字之后写什么,都要在 Vim 的命令模式上执行(记住,在 Vim 窗口的底部以 `:` 开始)。现在,传递给 **exec** 的字符串是(Linux 中国注: ``:!clear && g++ % && ./a.out``) - -[![vim functions commands & symbols](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_1_orig.png)][3] - - -当这个函数被调用时,它首先清除终端屏幕,因此只能看到输出,接着利用 `g++` 执行正在处理的文件,然后运行由前一步编译而形成的 `a.out` 文件。 - -将 `Ctrl+r` 映射为运行 C++ 代码。 -------------------------------------------------------------- - - -映射语句: `call CPP()` 到键组合 `Ctrl+r`,以便我现在可以按 `Ctrl+r` 来执行我的 C++ 代码,无需手动输入: `call CPP()`,然后按 `Enter` 键。 -#### 最终结果 - - -我们终于找到了 Vim Way 操作的方法。现在,你只需点击一个按钮,你编写的 C++ 代码就输出在你的屏幕上,你不需要键入所有冗长的命令了。这也节省了你的时间。 - -我们也可以为其他语言实现这类功能。 - [![create function in vim for python](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_2_orig.png)][4] - - -对于Python:您可以按下映射键执行您的代码。 - [![create function in vim for java](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_3_orig.png)][5] - - -对于Java:您现在可以按下映射健,它将首先编译您的 Java 代码,然后执行您的 Java 类文件并显示输出。 -### 进一步提高 - - -所以,这就是如何在 Vim 中操作的方法。现在,我们来看看如何在 Vim 中实现所有这些。我们可以直接在 Vim 中使用这些代码片段,而另一种方法是使用 Vim 中的自动命令 `autocmd`。`autocmd` 的优点是这些命令无需用户调用,它们在用户所提供的任何特定条件下自动执行。 - -我想用 [autocmd] 实现类似于单一的映射来执行每种语言替代使用不同的映射执行不同程序设计语言编译出的代码,。 - [![autocmd in vimrc](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_4_orig.png)][6] - - -在这里做的是,为所有的定义了执行相应文件类型代码的函数编写了自动命令。 - -会发生什么当我打开任何上述提到的文件类型的缓冲区, Vim 会自动将 `Ctrl + r` 映射到函数调用和表示 Enter Key (Linux 中国注:回车键),这样就不需要每完成一个独立的任务就按一次 “Enter key” 了。 - -为了实现这个功能,您只需将函数片段添加到[dot]vimrc(Linux 中国注: `.vimrc` 文件)文件中,然后将所有这些 `autocmds` 也一并添加进去。这样,当您下一次打开 Vim 时,Vim 将拥有所有相应的功能来执行所有具有相同绑定键的代码。 -### 总结 - -就这些了。希望这些能让你更爱 Vim 。我目前正在探究 Vim 中的一些内容,正阅读文档,补充 [.vimrc] 文件,当我研究出一些成果后我会再次与你分享。 -如果你想看一下我现在的 [.vimrc] 文件,这是我的 Github 账户的链接: [MyVimrc][7]。 - -期待你的好评。 --------------------------------------------------------------------------------- - -via: http://www.linuxandubuntu.com/home/making-vim-even-more-awesome-with-these-cool-features - -作者:[LINUXANDUBUNTU][a] -译者:[stevenzdg988](https://github.com/stevenzdg988) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.linuxandubuntu.com -[1]:http://www.linuxandubuntu.com/home/category/distros -[2]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_orig.png 
-[3]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_1_orig.png -[4]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_2_orig.png -[5]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_3_orig.png -[6]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vim_4_orig.png -[7]:https://github.com/phenomenal-ab/VIm-Configurations/blob/master/.vimrc From 094f6b89c454a7758e2847b3e78ed06512083d1e Mon Sep 17 00:00:00 2001 From: geekpi Date: Thu, 22 Feb 2018 10:37:01 +0800 Subject: [PATCH 06/81] translating --- sources/tech/20180209 Gnome without chrome-gnome-shell.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180209 Gnome without chrome-gnome-shell.md b/sources/tech/20180209 Gnome without chrome-gnome-shell.md index b3158cfa12..f885234bbb 100644 --- a/sources/tech/20180209 Gnome without chrome-gnome-shell.md +++ b/sources/tech/20180209 Gnome without chrome-gnome-shell.md @@ -1,3 +1,5 @@ +translating---geekpi + Gnome without chrome-gnome-shell ====== From 1eb67172471b2fcffab51a99f202d7921c1807fe Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 22 Feb 2018 11:00:45 +0800 Subject: [PATCH 07/81] PRF:20171231 Why You Should Still Love Telnet.md @XYenChi --- ...171231 Why You Should Still Love Telnet.md | 54 +++++++++---------- 1 file changed, 25 insertions(+), 29 deletions(-) diff --git a/translated/tech/20171231 Why You Should Still Love Telnet.md b/translated/tech/20171231 Why You Should Still Love Telnet.md index c08fe6a7eb..0a5b3d604b 100644 --- a/translated/tech/20171231 Why You Should Still Love Telnet.md +++ b/translated/tech/20171231 Why You Should Still Love Telnet.md @@ -1,40 +1,41 @@ Telnet,爱一直在 ====== -Telnet, 是系统管理员登录远程服务器的协议和工具。然而,由于所有的通信都没有加密,包括密码,都是明文发送的。Telnet 在 SSH 被开发出来之后就基本弃用了。 + +Telnet,是系统管理员登录远程服务器的一种协议和工具。然而,由于所有的通信都没有加密,包括密码,都是明文发送的。Telnet 在 SSH 被开发出来之后就基本弃用了。 登录远程服务器,你可能不会也从未考虑过它。但这并不意味着 `telnet` 命令在调试远程连接问题时不是一个实用的工具。 -本教程中,我们将探索使用 `telnet` 解决所有常见问题,“我怎么又连不上啦?” +本教程中,我们将探索使用 `telnet` 解决所有常见问题:“我怎么又连不上啦?” -这种讨厌的问题通常会在安装了像web服务器、邮件服务器、ssh服务器、Samba服务器等诸如此类的事之后遇到,用户无法连接服务器。 +这种讨厌的问题通常会在安装了像 Web服务器、邮件服务器、ssh 服务器、Samba 服务器等诸如此类的事之后遇到,用户无法连接服务器。 `telnet` 不会解决问题但可以很快缩小问题的范围。 `telnet` 用来调试网络问题的简单命令和语法: + ``` telnet - ``` -因为 `telnet` 最初通过端口建立连接不会发送任何数据,适用于任何协议包括加密协议。 +因为 `telnet` 最初通过端口建立连接不会发送任何数据,适用于任何协议,包括加密协议。 -连接问题服务器有四个可能会遇到的主要问题。我们会研究这四个问题,研究他们意味着什么以及如何解决。 +连接问题服务器有四个可能会遇到的主要问题。我们会研究这四个问题,研究它们意味着什么以及如何解决。 本教程默认已经在 `samba.example.com` 安装了 [Samba][1] 服务器而且本地客户无法连上服务器。 ### Error 1 - 连接挂起 首先,我们需要试着用 `telnet` 连接 Samba 服务器。使用下列命令 (Samba 监听端口445): + ``` telnet samba.example.com 445 - ``` 有时连接会莫名停止: + ``` telnet samba.example.com 445 Trying 172.31.25.31... - ``` 这意味着 `telnet` 没有收到任何回应来建立连接。有两个可能的原因: @@ -43,10 +44,10 @@ Trying 172.31.25.31... 2. 
防火墙拦截了你的请求。 +为了排除第 1 点,对服务器上进行一个快速 [`mtr samba.example.com`][2] 。如果服务器是可达的,那么便是防火墙(注意:防火墙总是存在的)。 -为了排除 **1.** 在服务器上运行一个快速 [`mtr samba.example.com`][2] 。如果服务器是可达的那么便是防火墙(注意:防火墙总是存在的)。 +首先用 `iptables -L -v -n` 命令检查服务器本身有没有防火墙,没有的话你能看到以下内容: -首先用 `iptables -L -v -n` 命令检查服务器本身有没有防火墙, 没有的话你能看到以下内容: ``` iptables -L -v -n Chain INPUT (policy ACCEPT 0 packets, 0 bytes) @@ -57,41 +58,38 @@ Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination - ``` 如果你看到其他东西那可能就是问题所在了。为了检验,停止 `iptables` 一下并再次运行 `telnet samba.example.com 445` 看看你是否能连接。如果你还是不能连接看看你的提供商或企业有没有防火墙拦截你。 ### Error 2 - DNS 问题 -DNS问题通常发生在你正使用的主机名没有解析到 IP 地址。错误如下: +DNS 问题通常发生在你正使用的主机名没有解析到 IP 地址。错误如下: + ``` telnet samba.example.com 445 Server lookup failure: samba.example.com:445, Name or service not known - ``` -第一步是把主机名替换成服务器的IP地址。如果你可以连上那么就是主机名的问题。 +第一步是把主机名替换成服务器的 IP 地址。如果你可以连上那么就是主机名的问题。 有很多发生的原因(以下是我见过的): - 1. 域注册了吗?用 `whois` 来检验。 - 2. 域过期了吗?用 `whois` 来检验。 + 1. 域名注册了吗?用 `whois` 来检验。 + 2. 域名过期了吗?用 `whois` 来检验。 3. 是否使用正确的主机名?用 `dig` 或 `host` 来确保你使用的主机名解析到正确的 IP。 4. 你的 **A** 记录正确吗?确保你没有偶然创建类似 `smaba.example.com` 的 **A** 记录。 - - -一定要多检查几次拼写和主机名是否正确(是 `samba.example.com` 还是 `samba1.example.com`)这些经常会困扰你特别是长、难或外来主机名。 +一定要多检查几次拼写和主机名是否正确(是 `samba.example.com` 还是 `samba1.example.com`)?这些经常会困扰你,特别是比较长、难记或其它国家的主机名。 ### Error 3 - 服务器没有侦听端口 这种错误发生在 `telnet` 可达服务器但是指定端口没有监听。就像这样: + ``` telnet samba.example.com 445 Trying 172.31.25.31... telnet: Unable to connect to remote host: Connection refused - ``` 有这些原因: @@ -100,18 +98,16 @@ telnet: Unable to connect to remote host: Connection refused 2. 你的应用服务器没有侦听预期的端口。在服务器上运行 `netstat -plunt` 来查看它究竟在干什么并看哪个端口才是对的,实际正在监听中的。 3. 应用服务器没有运行。这可能突然而又悄悄地发生在你启动应用服务器之后。启动服务器运行 `ps auxf` 或 `systemctl status application.service` 查看运行。 - - ### Error 4 - 连接被服务器关闭 这种错误发生在连接成功建立但是应用服务器建立的安全措施一连上就将其结束。错误如下: + ``` telnet samba.example.com 445 Trying 172.31.25.31... Connected to samba.example.com. Escape character is '^]'. -��Connection closed by foreign host. - +Connection closed by foreign host. ``` 最后一行 `Connection closed by foreign host.` 意味着连接被服务器主动终止。为了修复这个问题,需要看看应用服务器的安全设置确保你的 IP 或用户允许连接。 @@ -119,17 +115,18 @@ Escape character is '^]'. ### 成功连接 成功的 `telnet` 连接如下: + ``` telnet samba.example.com 445 Trying 172.31.25.31... Connected to samba.example.com. Escape character is '^]'. - ``` 连接会保持一段时间只要你连接的应用服务器时限没到。 -输入 `CTRL+]` 中止连接然后当你看到 `telnet>` 提示,输入 "quit" 并点击 ENTER 例: +输入 `CTRL+]` 中止连接,然后当你看到 `telnet>` 提示,输入 `quit` 并按回车: + ``` telnet samba.example.com 445 Trying 172.31.25.31... @@ -138,12 +135,11 @@ Escape character is '^]'. ^] telnet> quit Connection closed. 
- ``` ### 总结 -客户程序连不上服务器的原因有很多。确切原理很难确定特别是当客户是图形用户界面提供很少或没有错误信息。用 `telnet` 并观察输出可以让你很快确定问题所在节约很多时间。 +客户程序连不上服务器的原因有很多。确切原因很难确定,特别是当客户是图形用户界面提供很少或没有错误信息。用 `telnet` 并观察输出可以让你很快确定问题所在节约很多时间。 -------------------------------------------------------------------------------- @@ -151,7 +147,7 @@ via: https://bash-prompt.net/guides/telnet/ 作者:[Elliot Cooper][a] 译者:[XYenChi](https://github.com/XYenChi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 393adb39f59b7a12cb073634a53c1267ba6d7107 Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 22 Feb 2018 11:03:24 +0800 Subject: [PATCH 08/81] PUB:20171231 Why You Should Still Love Telnet.md @XYenChi https://linux.cn/article-9369-1.html --- .../20171231 Why You Should Still Love Telnet.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20171231 Why You Should Still Love Telnet.md (100%) diff --git a/translated/tech/20171231 Why You Should Still Love Telnet.md b/published/20171231 Why You Should Still Love Telnet.md similarity index 100% rename from translated/tech/20171231 Why You Should Still Love Telnet.md rename to published/20171231 Why You Should Still Love Telnet.md From 2e29d3d662eaed20173e0a08e4950dcbe901dfdf Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 22 Feb 2018 12:47:58 +0800 Subject: [PATCH 09/81] PRF:20171211 A tour of containerd 1.0.md @qhwdw --- .../tech/20171211 A tour of containerd 1.0.md | 25 +++++++++++-------- 1 file changed, 14 insertions(+), 11 deletions(-) diff --git a/translated/tech/20171211 A tour of containerd 1.0.md b/translated/tech/20171211 A tour of containerd 1.0.md index 7b30ccd304..56741f9dc7 100644 --- a/translated/tech/20171211 A tour of containerd 1.0.md +++ b/translated/tech/20171211 A tour of containerd 1.0.md @@ -1,28 +1,31 @@ containerd 1.0 探索之旅 ====== + +我们在过去的文章中讨论了一些 containerd 的不同特性,它是如何设计的,以及随着时间推移已经修复的一些问题。containerd 被用于 Docker、Kubernetes CRI、以及一些其它的项目,在这些平台中事实上都使用了 containerd,而许多人并不知道 containerd 存在于这些平台之中,这篇文章就是为这些人所写的。我将来会写更多的关于 containerd 的设计以及特性集方面的文章,但是现在,让我们从它的基础知识开始。 + ![containerd][1] -我们在过去的文章中讨论了一些 containerd 的不同特性,它是如何设计的,以及随着时间推移已经修复的一些问题。Containerd 是被用于 Docker、Kubernetes CRI、以及一些其它的项目,在这些平台中事实上都使用了 containerd,而许多人并不知道 containerd 存在于这些平台之中,这篇文章就是为这些人所写的。我想写更多的关于 containerd 的设计以及特性集方面的文章,但是现在,我们从它的基础知识开始。 +我认为容器生态系统有时候可能很复杂。尤其是我们所使用的术语。它是什么?一个运行时,还是别的?一个运行时 … containerd(它的发音是 “container-dee”)正如它的名字,它是一个容器守护进程,而不是一些人忽悠我的“收集containnerd”。它最初是作为 OCI 运行时(就像 runc 一样)的集成点而构建的,在过去的六个月中它增加了许多特性,使其达到了像 Docker 这样的现代容器平台以及像 Kubernetes 这样的编排平台的需求。 -我认为容器生态系统有时候可能很复杂。尤其是我们所使用的技术。它是什么?一个运行时,还是别的?一个运行时 … containerd(它的发音是 " _container-dee "_)正如它的名字,它是一个容器守护进程,而不是一些人所“传说”的那样。它最初是作为 OCI 运行时(就像 runc 一样)的集成点构建的,在过去的六个月中它增加了许多特性,使其达到了像 Docker 这样的现代容器平台以及像 Kubernetes 这样的编排平台的需求。 - -那么,你使用 containerd 能去做些什么呢?你可以推送或拉取功能以及镜像管理。可以获得容器生命周期 APIs 去创建、运行、以及管理容器和它们的任务。一个完整的专门用于快照管理的 API,以及一个公开管理的项目。如果你需要去构建一个容器平台,基本上你不需要去处理任何底层操作系统细节方面的事情。我认为关于 containerd 中最重要的部分是,它有一个版本化的并且有 bug 修复和安全补丁的稳定 API。 +那么,你使用 containerd 能去做些什么呢?你可以拥有推送或拉取功能以及镜像管理。可以拥有容器生命周期 API 去创建、运行、以及管理容器和它们的任务。一个完整的专门用于快照管理的 API,以及一个其所依赖的开放治理的项目。如果你需要去构建一个容器平台,基本上你不需要去处理任何底层操作系统细节方面的事情。我认为关于 containerd 中最重要的部分是,它有一个版本化的并且有 bug 修复和安全补丁的稳定 API。 ![containerd][2] -由于在内核中并没有太多的用作 Linux 容器的东西,因此容器是多种内核特性捆绑在一起的,当你构建一个大型平台或者分布式系统时,你需要在你的管理代码和系统调用之间构建一个抽象层,然后将这些特性捆绑粘接在一起去运行一个容器。而这个抽象层就是 containerd 的所在之外。它为稳定类型的平台层提供了一个客户端,这样平台可以构建在顶部而无需进入到内核级。因此,可以让使用容器、任务、和快照类型的工作相比通过管理调用去 clone() 或者 mount() 
要友好的多。与灵活性相平衡,直接与运行时或者宿主机交互,这些对象避免了常规的高级抽象所带来的性能牺牲。结果是简单的任务很容易完成,而困难的任务也变得更有可能完成。 +由于在内核中没有一个 Linux 容器这样的东西,因此容器是多种内核特性捆绑在一起而成的,当你构建一个大型平台或者分布式系统时,你需要在你的管理代码和系统调用之间构建一个抽象层,然后将这些特性捆绑粘接在一起去运行一个容器。而这个抽象层就是 containerd 的所在之处。它为稳定类型的平台层提供了一个客户端,这样平台可以构建在顶部而无需进入到内核级。因此,可以让使用容器、任务、和快照类型的工作相比通过管理调用去 clone() 或者 mount() 要友好的多。与灵活性相平衡,直接与运行时或者宿主机交互,这些对象避免了常规的高级抽象所带来的性能牺牲。结果是简单的任务很容易完成,而困难的任务也变得更有可能完成。 -![containerd][3]Containerd 被设计用于 Docker 和 Kubernetes、以及想去抽象出系统调用或者在 Linux、Windows、Solaris、 以及其它的操作系统上特定的功能去运行容器的其它的容器系统。考虑到这些用户的想法,我们希望确保 containerd 只拥有它们所需要的东西,而没有它们不希望的东西。事实上这是不太可能的,但是至少我们想去尝试一下。虽然网络不在 containerd 的范围之内,它并不能做到高级系统完全控制的那些东西。原因是,当你构建一个分布式系统时,网络是非常重要的方面。现在,对于 SDN 和服务发现,在 Linux 上,相比于抽象出 netlink 调用,网络是更特殊的平台。大多数新的网络都是基于路由的,并且每次一个新的容器被创建或者删除时,都会请求更新路由表。服务发现、DNS 等等都需要及时通知到这些改变。如果在 containerd 中添加对网络的管理,为了能够支持不同的网络接口、钩子、以及集成点,将会在 containerd 中增加很大的一块代码。而我们的选择是,在 containerd 中做一个健壮的事件系统,以便于很多的消费者可以去订阅它们所关心的事件。我们也公开发布了一个 [任务 API ][4],它可以让用户去创建一个运行任务,也可以在一个容器的网络命名空间中添加一个接口,以及在一个容器的生命周期中的任何时候,无需复杂的 hooks 来启用容器的进程。 +![containerd][3] -在过去的几个月中另一个添加到 containerd 中的领域是完整的存储,以及支持 OCI 和 Docker 镜像格式的分布式系统。你有一个跨 containerd API 的完整的目录地址存储系统,它不仅适用于镜像,也适用于元数据、检查点、以及附加到容器的任何数据。 +containerd 被设计用于 Docker 和 Kubernetes、以及想去抽象出系统调用或者在 Linux、Windows、Solaris 以及其它的操作系统上特定的功能去运行容器的其它容器系统。考虑到这些用户的想法,我们希望确保 containerd 只拥有它们所需要的东西,而没有它们不希望的东西。事实上这是不太可能的,但是至少我们想去尝试一下。虽然网络不在 containerd 的范围之内,它并不能做成让高级系统可以完全控制的东西。原因是,当你构建一个分布式系统时,网络是非常中心的地方。现在,对于 SDN 和服务发现,相比于在 Linux 上抽象出 netlink 调用,网络是更特殊的平台。大多数新的网络都是基于路由的,并且每次一个新的容器被创建或者删除时,都会请求更新路由表。服务发现、DNS 等等都需要及时被通知到这些改变。如果在 containerd 中添加对网络的管理,为了能够支持不同的网络接口、钩子、以及集成点,将会在 containerd 中增加很大的一块代码。而我们的选择是,在 containerd 中做一个健壮的事件系统,以便于多个消费者可以去订阅它们所关心的事件。我们也公开发布了一个 [任务 API][4],它可以让用户去创建一个运行任务,也可以在一个容器的网络命名空间中添加一个接口,以及在一个容器的生命周期中的任何时候,无需复杂的钩子来启用容器的进程。 -我们也花时间去 [重新考虑如何使用 "图形驱动" 工作][5]。这些是叠加的或者允许镜像分层的块级文件系统,以使你执行的构建更加高效。当我们添加对 devicemapper 的支持时,图形驱动最初是由 Solomon 和我写的。Docker 在那个时候仅支持 AUFS,因此我们在叠加文件系统之后,对图形驱动进行建模。但是,做一个像 devicemapper/lvm 这样的块级文件系统,就如同一个堆叠文件系统一样,从长远来看是非常困难的。这些接口必须基于时间的推移进行扩展,以支持我们最初认为并不需要的那些不同的特性。对于 containerd,我们使用了一个不同的方法,像快照一样做一个堆叠文件系统而不是相反。这样做起来更容易,因为堆叠文件系统比起像 BTRFS、ZFS、以及 devicemapper 这样的文件系统提供了更好的灵活性。因为这些文件系统没有严格的父/子关系。这有助于我们去构建出 [快照的一个小型接口][6],同时还能满足 [构建者][7] 的要求,还能减少了需要的代码数量,从长远来看这样更易于维护。 +在过去的几个月中另一个添加到 containerd 中的领域是完整的存储,以及支持 OCI 和 Docker 镜像格式的分布式系统。有了一个跨 containerd API 的完整的目录地址存储系统,它不仅适用于镜像,也适用于元数据、检查点、以及附加到容器的任何数据。 + +我们也花时间去 [重新考虑如何使用 “图驱动” 工作][5]。这些是叠加的或者允许镜像分层的块级文件系统,可以使你执行的构建更加高效。当我们添加对 devicemapper 的支持时,图驱动graphdrivers最初是由 Solomon 和我写的。Docker 在那个时候仅支持 AUFS,因此我们在叠加文件系统之后,对图驱动进行了建模。但是,做一个像 devicemapper/lvm 这样的块级文件系统,就如同一个堆叠文件系统一样,从长远来看是非常困难的。这些接口必须基于时间的推移进行扩展,以支持我们最初认为并不需要的那些不同的特性。对于 containerd,我们使用了一个不同的方法,像快照一样做一个堆叠文件系统而不是相反。这样做起来更容易,因为堆叠文件系统比起像 BTRFS、ZFS 以及 devicemapper 这样的快照文件系统提供了更好的灵活性。因为这些文件系统没有严格的父/子关系。这有助于我们去构建出 [快照的一个小型接口][6],同时还能满足 [构建者][7] 的要求,还能减少了需要的代码数量,从长远来看这样更易于维护。 ![][8] -你可以在 [Stephen Day's Dec 7th 2017 KubeCon SIG Node presentation][9]上找到更多关于 containerd 的架构方面的详细资料。 +你可以在 [Stephen Day 2017/12/7 在 KubeCon SIG Node 上的演讲][9]找到更多关于 containerd 的架构方面的详细资料。 -除了在 1.0 代码库中的技术和设计上的更改之外,我们也将 [containerd 管理模式从长期 BDFL 转换为技术委员会][10],为社区提供一个独立的可信任的第三方资源。 +除了在 1.0 代码库中的技术和设计上的更改之外,我们也将 [containerd 管理模式从长期 BDFL 模式转换为技术委员会][10],为社区提供一个独立的可信任的第三方资源。 -------------------------------------------------------------------------------- @@ -30,7 +33,7 @@ via: https://blog.docker.com/2017/12/containerd-ga-features-2/ 作者:[Michael Crosby][a] 译者:[qhwdw](https://github.com/qhwdw) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 
[LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From d04524b8a3dd360265deb71199f001934dbb5b5b Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 22 Feb 2018 13:01:40 +0800 Subject: [PATCH 10/81] PUB:20171211 A tour of containerd 1.0.md @qhwdw https://linux.cn/article-9370-1.html --- .../tech => published}/20171211 A tour of containerd 1.0.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20171211 A tour of containerd 1.0.md (100%) diff --git a/translated/tech/20171211 A tour of containerd 1.0.md b/published/20171211 A tour of containerd 1.0.md similarity index 100% rename from translated/tech/20171211 A tour of containerd 1.0.md rename to published/20171211 A tour of containerd 1.0.md From 7f0908d99ba85a48b4320117529909b47ac31ff2 Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 22 Feb 2018 13:48:59 +0800 Subject: [PATCH 11/81] PRF:20180210 How to create AWS ec2 key using Ansible.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @qianghaohao 恭喜你完成了第一篇翻译~ --- ...How to create AWS ec2 key using Ansible.md | 96 +++++++++++++------ 1 file changed, 66 insertions(+), 30 deletions(-) diff --git a/translated/tech/20180210 How to create AWS ec2 key using Ansible.md b/translated/tech/20180210 How to create AWS ec2 key using Ansible.md index f0850d4134..7924b642eb 100644 --- a/translated/tech/20180210 How to create AWS ec2 key using Ansible.md +++ b/translated/tech/20180210 How to create AWS ec2 key using Ansible.md @@ -1,33 +1,43 @@ 如何使用 Ansible 创建 AWS ec2 密钥 ====== -我想使用 Ansible 工具创建 Amazon EC2 密钥对。不想使用 AWS CLI 来创建。可以使用 Ansible 来创建 AWS ec2 密钥吗? -你需要使用 Ansible 的 ec2_key 模块。这个模块依赖于 python-boto 2.5 版本或者更高版本。 boto 只不过是亚马逊 Web 服务的一个 Python API。你可以将 boto 用于 Amazon S3,Amazon EC2 等其他服务。简而言之,你需要安装 ansible 和 boto 模块。我们一起来看下如何安装 boto 并结合 Ansible 使用。 +**我想使用 Ansible 工具创建 Amazon EC2 密钥对。不想使用 AWS CLI 来创建。可以使用 Ansible 来创建 AWS ec2 密钥吗?** + +你需要使用 Ansible 的 ec2_key 模块。这个模块依赖于 python-boto 2.5 版本或者更高版本。 boto 是亚马逊 Web 服务的一个 Python API。你可以将 boto 用于 Amazon S3、Amazon EC2 等其他服务。简而言之,你需要安装 Ansible 和 boto 模块。我们一起来看下如何安装 boto 并结合 Ansible 使用。 + +### 第一步 - 在 Ubuntu 上安装最新版本的 Ansible + +你必须[给你的系统配置 PPA 来安装最新版的 Ansible][2]。为了管理你从各种 PPA(Personal Package Archives)安装软件的仓库,你可以上传 Ubuntu 源码包并编译,然后通过 Launchpad 以 apt 仓库的形式发布。键入如下命令 [apt-get 命令][3]或者 [apt 命令][4]: -### 第一步 - [在 Ubuntu 上安装最新版本的 Ansible][1] -你必须[给你的系统配置 PPA 来安装最新版的 ansible][2]。为了管理你从各种 PPA(Personal Package Archives) 安装软件的仓库,你可以上传 Ubuntu 源码包并编译,然后通过 Launchpad 以 apt 仓库的形式发布。键入如下命令 [apt-get 命令][3]或者 [apt 命令][4]: ``` $ sudo apt update $ sudo apt upgrade $ sudo apt install software-properties-common ``` -接下来给你的系统的软件源中添加 ppa:ansible/ansible + +接下来给你的系统的软件源中添加 `ppa:ansible/ansible`。 + ``` $ sudo apt-add-repository ppa:ansible/ansible ``` -更新你的仓库并安装ansible: + +更新你的仓库并安装 Ansible: + ``` $ sudo apt update $ sudo apt install ansible ``` + 安装 boto: + ``` $ pip3 install boto3 ``` -#### 关于在CentOS/RHEL 7.x上安装Ansible的注意事项 +#### 关于在CentOS/RHEL 7.x上安装 Ansible 的注意事项 你[需要在 CentOS 和 RHEL 7.x 上配置 EPEL 源][5]和 [yum命令][6] + ``` $ cd /tmp $ wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm @@ -35,14 +45,17 @@ $ ls *.rpm $ sudo yum install epel-release-latest-7.noarch.rpm $ sudo yum install ansible ``` + 安装 boto: + ``` $ pip install boto3 ``` ### 第二步 2 – 配置 boto -你需要配置 AWS credentials/API 密钥。参考 “[AWS Security Credentials][7]” 文档如何创建 API key。用 mkdir 命令创建一个名为 ~/.aws 的目录,然后配置 API key: +你需要配置 AWS credentials/API 密钥。参考 “[AWS Security Credentials][7]” 文档如何创建 API key。用 
`mkdir` 命令创建一个名为 `~/.aws` 的目录,然后配置 API key: + ``` $ mkdir -pv ~/.aws/ $ vi ~/.aws/credentials @@ -54,14 +67,20 @@ aws_secret_access_key = YOUR-SECRET-ACCESS-KEY-HERE ``` 还需要配置默认 [AWS 区域][8]: -`$ vi ~/.aws/config` + +``` +$ vi ~/.aws/config +``` + 输出样例如下: + ``` [default] region = us-west-1 ``` -通过创建一个简单的名为 test-boto.py 的 python 程序来测试你的 boto 配置是否正确: +通过创建一个简单的名为 `test-boto.py` 的 Python 程序来测试你的 boto 配置是否正确: + ``` #!/usr/bin/python3 # A simple program to test boto and print s3 bucket names @@ -72,20 +91,25 @@ for b in t.buckets.all(): ``` 按下面方式来运行该程序: -`$ python3 test-boto.py` + +``` +$ python3 test-boto.py +``` + 输出样例: + ``` nixcraft-images nixcraft-backups-cbz nixcraft-backups-forum - ``` 上面输出可以确定 Python-boto 可以使用 AWS API 正常工作。 ### 步骤 3 - 使用 Ansible 创建 AWS ec2 密钥 -创建一个名为 ec2.key.yml 的 playbook,如下所示: +创建一个名为 `ec2.key.yml` 的剧本,如下所示: + ``` --- - hosts: local @@ -106,44 +130,54 @@ nixcraft-backups-forum 其中, - * ec2_key: – ec2 密钥对。 - * name: nixcraft_key – 密钥对的名称。 - * region: us-west-1 – 使用的 AWS 区域。 - * register: ec2_key_result : 保存生成的密钥到 ec2_key_result 变量。 - * copy: content="{{ ec2_key_result.key.private_key }}" dest="./aws.nixcraft.pem" mode=0600 : 将 ec2_key_result.key.private_key 的内容保存到当前目录的一个名为 aws.nixcraft.pem 的文件中。设置该文件的权限为 0600 (unix 文件权限). - * when: ec2_key_result.changed : 仅仅在 ec2_key_result 改变时才保存。我们不想覆盖你的密钥文件。 + * `ec2_key:` – ec2 密钥对。 + * `name: nixcraft_key` – 密钥对的名称。 + * `region: us-west-1` – 使用的 AWS 区域。 + * `register: ec2_key_result` – 保存生成的密钥到 ec2_key_result 变量。 + * `copy: content="{{ ec2_key_result.key.private_key }}" dest="./aws.nixcraft.pem" mode=0600` – 将 `ec2_key_result.key.private_key` 的内容保存到当前目录的一个名为 `aws.nixcraft.pem` 的文件中。设置该文件的权限为 `0600` (unix 文件权限)。 + * `when: ec2_key_result.changed` – 仅仅在 `ec2_key_result` 改变时才保存。我们不想覆盖你的密钥文件。 +你还必须创建如下 `hosts` 文件: -你还必须创建如下主机文件: ``` [local] localhost - ``` -如下运行你的 playbook: -`$ ansible-playbook -i hosts ec2.key.yml` +如下运行你的剧本: + +``` +$ ansible-playbook -i hosts ec2.key.yml +``` + ![](https://www.cyberciti.biz/media/new/faq/2018/02/How-to-create-AWS-ec2-key-using-Ansible.jpg) -最后你应该有一个名为 aws.nixcraft.pem 私钥,该私钥可以和 AWS EC2 一起使用。查看你的密钥 [cat 命令][9]: +最后你应该有一个名为 `aws.nixcraft.pem 私钥,该私钥可以和 AWS EC2 一起使用。使用 [cat 命令][9]查看你的密钥: + ``` $ cat aws.nixcraft.pem ``` + 如果你有 EC2 虚拟机,请按如下方式使用: + ``` $ ssh -i aws.nixcraft.pem user@ec2-vm-dns-name ``` -#### 查看有关 python 数据结构变量名的信息,比如 ec2_key_result.changed 和 ec2_key_result.key.private_key +**查看有关 python 数据结构变量名的信息,比如 ec2_key_result.changed 和 ec2_key_result.key.private_key** -你一定在想我是如何使用变量名的,比如 ec2_key_result.changed 和 ec2_key_result.key.private_key。它们在哪里定义过吗?变量的值是通过 API 调用返回的。简单地使用 -v 选项运行 ansible-playbook 命令来查看这样的信息: -`$ ansible-playbook -v -i hosts ec2.key.yml` +你一定在想我是如何使用变量名的,比如 `ec2_key_result.changed` 和 `ec2_key_result.key.private_key`。它们在哪里定义过吗?变量的值是通过 API 调用返回的。简单地使用 `-v` 选项运行 `ansible-playbook` 命令来查看这样的信息: + +``` +$ ansible-playbook -v -i hosts ec2.key.yml +``` ![](https://www.cyberciti.biz/media/new/faq/2018/02/ansible-verbose-output.jpg) ### 我该如何删除一个密钥? 
-使用如下 ec2-key-delete.yml: +使用如下 `ec2-key-delete.yml`: + ``` --- - hosts: local @@ -160,8 +194,10 @@ $ ssh -i aws.nixcraft.pem user@ec2-vm-dns-name ``` 按照如下方式运行: -`$ ansible-playbook -i hosts ec2-key-delete.yml` +``` +$ ansible-playbook -i hosts ec2-key-delete.yml +``` ### 关于作者 @@ -173,7 +209,7 @@ via: https://www.cyberciti.biz/faq/how-to-create-aws-ec2-key-using-ansible/ 作者:[Vivek Gite][a] 译者:[qianghaohao](https://github.com/qianghaohao) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 7c3a3c1a8a9d573641cdd06b5ad7ffcee27b3ab0 Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 22 Feb 2018 13:49:53 +0800 Subject: [PATCH 12/81] PUB:20180210 How to create AWS ec2 key using Ansible.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @qianghaohao 首发地址: https://linux.cn/article-9371-1.html 您的 LCTT 专页地址: https://linux.cn/lctt/qianghaohao --- .../20180210 How to create AWS ec2 key using Ansible.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180210 How to create AWS ec2 key using Ansible.md (100%) diff --git a/translated/tech/20180210 How to create AWS ec2 key using Ansible.md b/published/20180210 How to create AWS ec2 key using Ansible.md similarity index 100% rename from translated/tech/20180210 How to create AWS ec2 key using Ansible.md rename to published/20180210 How to create AWS ec2 key using Ansible.md From 2f5002dd37ecf380694a9d8aa432ba4b03d49d8b Mon Sep 17 00:00:00 2001 From: darksun Date: Thu, 22 Feb 2018 14:10:25 +0800 Subject: [PATCH 13/81] remove www.linuxjournal.com --- .../20180122 Raspberry Pi Alternatives.md | 58 -- ...ance Problems on Unix and Linux Servers.md | 91 -- ...plete Guide for Using AsciiDoc in Linux.md | 347 -------- ...08 Linux Filesystem Events with inotify.md | 789 ------------------ .../tech/20180117 Avoiding Server Disaster.md | 125 --- ...nture Game in the Terminal with ncurses.md | 325 -------- ...pid, Secure Patching- Tools and Methods.md | 583 ------------- .../20180130 Ansible- Making Things Happen.md | 174 ---- ...l Scripting- Dungeons, Dragons and Dice.md | 191 ----- ...g Your Own Life- Introducing Biogenesis.md | 84 -- 10 files changed, 2767 deletions(-) delete mode 100644 sources/talk/20180122 Raspberry Pi Alternatives.md delete mode 100644 sources/tech/20171020 Four Hidden Costs and Risks of Sudo Can Lead to Cybersecurity Risks and Compliance Problems on Unix and Linux Servers.md delete mode 100644 sources/tech/20171030 Complete Guide for Using AsciiDoc in Linux.md delete mode 100644 sources/tech/20180108 Linux Filesystem Events with inotify.md delete mode 100644 sources/tech/20180117 Avoiding Server Disaster.md delete mode 100644 sources/tech/20180126 Creating an Adventure Game in the Terminal with ncurses.md delete mode 100644 sources/tech/20180129 Rapid, Secure Patching- Tools and Methods.md delete mode 100644 sources/tech/20180130 Ansible- Making Things Happen.md delete mode 100644 sources/tech/20180202 Shell Scripting- Dungeons, Dragons and Dice.md delete mode 100644 sources/tech/20180203 Evolving Your Own Life- Introducing Biogenesis.md diff --git a/sources/talk/20180122 Raspberry Pi Alternatives.md b/sources/talk/20180122 Raspberry Pi Alternatives.md deleted file mode 100644 index bf3bca4f61..0000000000 --- a/sources/talk/20180122 Raspberry Pi Alternatives.md +++ /dev/null @@ -1,58 +0,0 @@ -Raspberry Pi Alternatives -====== -A look at some of the 
many interesting Raspberry Pi competitors. - -The phenomenon behind the Raspberry Pi computer series has been pretty amazing. It's obvious why it has become so popular for Linux projects—it's a low-cost computer that's actually quite capable for the price, and the GPIO pins allow you to use it in a number of electronics projects such that it starts to cross over into Arduino territory in some cases. Its overall popularity has spawned many different add-ons and accessories, not to mention step-by-step guides on how to use the platform. I've personally written about Raspberry Pis often in this space, and in my own home, I use one to control a beer fermentation fridge, one as my media PC, one to control my 3D printer and one as a handheld gaming device. - -The popularity of the Raspberry Pi also has spawned competition, and there are all kinds of other small, low-cost, Linux-powered Raspberry Pi-like computers for sale—many of which even go so far as to add "Pi" to their names. These computers aren't just clones, however. Although some share a similar form factor to the Raspberry Pi, and many also copy the GPIO pinouts, in many cases, these other computers offer features unavailable in a traditional Raspberry Pi. Some boards offer SATA, Wi-Fi or Gigabit networking; others offer USB3, and still others offer higher-performance CPUs or more RAM. When you are choosing a low-power computer for a project or as a home server, it pays to be aware of these Raspberry Pi alternatives, as in many cases, they will perform much better. So in this article, I discuss some alternatives to Raspberry Pis that I've used personally, their pros and cons, and then provide some examples of where they work best. - -### Banana Pi - -I've mentioned the Banana Pi before in past articles (see "Papa's Got a Brand New NAS" in the September 2016 issue and "Banana Backups" in the September 2017 issue), and it's a great choice when you want a board with a similar form factor, similar CPU and RAM specs, and a similar price (~$30) to a Raspberry Pi but need faster I/O. The Raspberry Pi product line is used for a lot of home server projects, but it limits you to 10/100 networking and a USB2 port for additional storage. Where the Banana Pi product line really shines is in the fact that it includes both a Gigabit network port and SATA port, while still having similar GPIO expansion options and running around the same price as a Raspberry Pi. - -Before I settled on an Odroid XU4 for my home NAS (more on that later), I first experimented with a cluster of Banana Pis. The idea was to attach a SATA disk to each Banana Pi and use software like Ceph or GlusterFS to create a storage cluster shared over the network. Even though any individual Banana Pi wasn't necessarily that fast, considering how cheap they are in aggregate, they should be able to perform reasonably well and allow you to expand your storage by adding another disk and another Banana Pi. In the end, I decided to go a more traditional and simpler route with a single server and software RAID, and now I use one Banana Pi as an image gallery server. I attached a 2.5" laptop SATA drive to the other and use it as a local backup server running BackupPC. It's a nice solution that takes up almost no space and little power to run. - -### Orange Pi Zero - -I was really excited when I first heard about the Raspberry Pi Zero project. 
I couldn't believe there was such a capable little computer for only $5, and I started imagining all of the cool projects I could use one for around the house. That initial excitement was dampened a bit by the fact that they sold out quickly, and just about every vendor settled into the same pattern: put standalone Raspberry Pi Zeros on backorder but have special $20 starter kits in stock that include various adapter cables, a micro SD card and a plastic case that I didn't need. More than a year after the release, the situation still remains largely the same. Although I did get one Pi Zero and used it for a cool Adafruit "Pi Grrl Zero" gaming project, I had to put the rest of my ideas on hold, because they just never seemed to be in stock when I wanted them.

The Orange Pi Zero was created by the same company that makes the entire line of Orange Pi computers that compete with the Raspberry Pi. The main thing that makes the Orange Pi Zero shine in my mind is that they have a small, square form factor that is wider than a Raspberry Pi Zero but not as long. It also includes a Wi-Fi card like the more expensive Raspberry Pi Zero W, and it runs between $6 and $9, depending on whether you opt for 256MB of RAM or 512MB of RAM. More important, they are generally in stock, so there's no need to sit on a backorder list when you have a fun project in mind.

The Orange Pi Zero boards themselves are pretty capable. Out of the box, they include a quad-core ARM CPU, Wi-Fi (as I mentioned before), along with a 10/100 network port and USB2. They also include Raspberry-Pi-compatible GPIO pins, but even more interesting is that there is a $9 "NAS" expansion board for it that mounts to its 13-pin header and provides extra USB2 ports, a SATA and an mSATA port, along with IR, audio and video ports, which makes it about as capable as a more expensive Banana Pi board. Even without the expansion board, this would make a nice computer you could sit anywhere within range of your Wi-Fi and run any number of services. The main downside is you are limited to composite video, so this isn't the best choice for gaming or video-based projects.

Although Orange Pi Zeros are capable boards in their own right, what makes them particularly enticing to me is that they are actually available when you want them, unlike some of the other sub-$10 boards out there. There's nothing worse than having a cool idea for a cheap home project and then having to wait for a board to come off backorder.

![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12261f1.jpg)

Figure 1. An Orange Pi Zero (right) and an Espressobin (left)

### Odroid XU4

When I was looking to replace my rack-mounted NAS at home, I first looked at all of the Raspberry Pi options, including Banana Pi and other alternatives, but none of them seemed to have quite enough horsepower for my needs. I needed a machine that not only offered Gigabit networking to act as a NAS, but one that had high-speed disk I/O as well. The Odroid XU4 fit the bill with its eight-core ARM CPU, 2GB RAM, Gigabit network and USB3 ports. Although it was around $75 (almost twice the price of a Raspberry Pi), it was a much more capable computer all while being small and low-power.

The entire Odroid product line is a good one to consider if you want a low-power home server but need more resources than a traditional Raspberry Pi can offer and are willing to spend a little bit extra for the privilege.
In addition to a NAS, the Odroid XU4, with its more powerful CPU and extra RAM, is a good all-around server for the home. The USB3 port means you have a lot of storage options should you need them.

### Espressobin

Although the Odroid XU4 is a great home server, I can still sometimes see it get bogged down in disk and network I/O compared to a traditional higher-powered server. Some of this might be due to the chips that were selected for the board, and perhaps some of it has to do with the fact that I'm using both disk encryption and software RAID over USB3. In either case, I started looking for another option to help take a bit of the storage burden off this server, and I came across the Espressobin board.

The Espressobin is a $50 board that launched as a popular Indiegogo campaign and is now a shipping product that you can pick up in a number of places, including Amazon. Although it costs a bit more than a Raspberry Pi 3, it includes a 64-bit dual-core ARM Cortex A53 at 1.2GHz, 1-2GB of RAM (depending on the configuration), three Gigabit network ports with a built-in switch, a SATA port, a USB3 port, a mini-PCIe port, plus a number of other options, including two sets of GPIO headers and a nice built-in serial console running on the micro-USB port.

The main benefit to the Espressobin is the fact that it was designed by Marvell with chips that actually can use all of the bandwidth that the board touts. On some other boards, you'll often find a SATA2 port hanging off a USB2 interface or similar architectural hacks; although they let you connect a SATA disk or a Gigabit networking port, that doesn't mean you'll get the full bandwidth the spec claims. Although I intend to have my own Espressobin take over home NAS duties, it also would make a great home gateway router, general-purpose server or even a Wi-Fi access point, provided you added the right Wi-Fi card.

### Conclusion

A whole world of alternatives to Raspberry Pis exists—this list covers only some of the ones I've used myself. I hope it has encouraged you to think twice before you default to a Raspberry Pi for your next project. Although there's certainly nothing wrong with Raspberry Pis, there are several small computers that run Linux well and, in many cases, offer better hardware or other expansion options beyond the capabilities of a Raspberry Pi for a similar price.
--------------------------------------------------------------------------------

via: http://www.linuxjournal.com/content/raspberry-pi-alternatives

作者:[Kyle Rankin][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.linuxjournal.com/users/kyle-rankin
diff --git a/sources/tech/20171020 Four Hidden Costs and Risks of Sudo Can Lead to Cybersecurity Risks and Compliance Problems on Unix and Linux Servers.md b/sources/tech/20171020 Four Hidden Costs and Risks of Sudo Can Lead to Cybersecurity Risks and Compliance Problems on Unix and Linux Servers.md
deleted file mode 100644
index 6acb15c7ec..0000000000
--- a/sources/tech/20171020 Four Hidden Costs and Risks of Sudo Can Lead to Cybersecurity Risks and Compliance Problems on Unix and Linux Servers.md
+++ /dev/null
@@ -1,91 +0,0 @@
Four Hidden Costs and Risks of Sudo Can Lead to Cybersecurity Risks and Compliance Problems on Unix and Linux Servers
======

It is always a philosophical debate as to whether to use open source software in a regulated environment. Open source software is crowdsourced, and developers from all over the world contribute to packages that are later included in operating system distributions. In the case of 'sudo', a package included in many Linux distributions and designed to provide privileged access, the debate is whether it meets the requirements of an organization, and to what level it can be relied upon to deliver compliance information to auditors.

There are four hidden costs or risks that must be considered when evaluating whether sudo is meeting your organization's cybersecurity and compliance needs on its Unix and Linux systems: administrative, forensics and audit, business continuity, and vendor support. Although sudo is a low-cost solution, it may come at a high price in a security program, and when an organization is delivering compliance data to satisfy auditors. In this article, we will review these areas while identifying key questions that should be answered to measure acceptable levels of risk. While every organization is different, there are specific risk/cost considerations that make a strong argument for replacing sudo with a commercially supported solution.

### Administrative Costs

There are several hidden administrative costs in using sudo for Unix and Linux privilege management. For example, with sudo, you also need to run a third-party automation management system (like CFEngine or Puppet) plus third-party authentication modules on the box. And, if you plan to externalize the box at all, you're going to have to replace sudo with that supplier's version of sudo. So, you end up maintaining sudo, a third-party management system, and a third-party automation system, and you may have to replace it all if you want to authenticate against something external to the box. A commercial solution would help to consolidate this functionality and simplify the overall management of Unix and Linux servers.

Another complexity with sudo is that everything is local, meaning it can be extremely time-consuming to manage as environments grow. And as we all know, time is money. With sudo, you have to rely on local systems on the server to keep logs locally, rotate them, send them to an archival environment, and ensure that no one is messing with any of the other related subsystems. This can be a complex and time-consuming process; the sketch below shows just the log-rotation piece of what must be hand-maintained on every host.
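A minimal example, assuming sudo has been configured to log to a local file such as /var/log/sudo.log (the path, retention values and archive host below are all hypothetical):

```
# cat /etc/logrotate.d/sudo
/var/log/sudo.log {
    weekly
    rotate 8          # keep roughly two months on the local disk
    compress
    missingok
    notifempty
    postrotate
        # Push the freshly rotated log to a (hypothetical) archive host.
        scp -q /var/log/sudo.log.1.gz archive:/logs/$(hostname)/ || true
    endscript
}
```

Multiply this by every host, every log type and every subsystem that touches privileged activity, and the administrative overhead becomes clear.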
A commercial solution would combine all of this activity together, including binary pushes and retention, upgrades, logs, archival, and more. - -Unix and Linux systems by their very nature are decentralized, so managing each host separately leads to administrative costs and inefficiencies which in turn leads to risks. A commercial solution centralizes management and policy development across all hosts, introducing enterprise level consistency and best practices to a privileged access management program. - -### Forensics & Audit Risks - -Administrative costs aside, let’s look at the risks associated with not being able to produce log data for forensic investigations. Why is this a challenge for sudo? The sudo package is installed locally on individual servers, and configuration files are maintained on each server individually. There are some tools such as Puppet or Chef that can monitor these files for changes, and replace files with known good copies when a change is detected, but those tools only work after a change takes place. These tools usually operate on a schedule, often checking once or twice per day, so if a system is compromised, or authorization files are changed, it may be several hours before the system is restored to a known good state. The question is, what can happen in those hours? - -There is currently no keystroke logging within sudo, and since any logs of sudo activity are stored locally on servers, they can be tampered with by savvy administrators. Event logs are typically collected with normal system logs, but once again, this requires additional configuration and management of these tools. When advanced users are granted administrative access on servers, it is possible that log data can be modified, or deleted, and all evidence of their activities erased with very little indication that events took place. Now, the question is, has this happened, or is it continuing to happen? - -With sudo, there is no log integrity – no chain of custody on logs – meaning logs can’t be non-repudiated and therefore can’t be used in legal proceedings in most jurisdictions. This is a significant risk to organizations, especially in criminal prosecution, termination, or other disciplinary actions. Third-party commercial solutions’ logs are tamper-proof, which is just not possible with sudo. - -Large organizations typically collect a tremendous amount of data, including system logs, access information, and other system information from all their systems. This data is then sent to a SIEM for analytics, and reporting. SIEM tools do not usually deliver real-time alerting when uncharacteristic events happen on systems, and often configuration of events is difficult and time consuming. For this reason, SIEM solutions are rarely relied upon for alerting within an enterprise environment. Here the question is, what is an acceptable delay from the time an event takes place until someone is alerted? - -Correlating log activity with other data to determine a broader pattern of abuse is also impossible with sudo. Commercial solutions gather logs into one place with searchable indices. Some commercial solutions even correlate this log data against other sources to identify uncharacteristic behavior that could be a warning that a serious security issue is afoot. Commercial solutions therefore provide greater forensic benefits than sudo. - -Another gotcha with sudo is that change management processes can’t be verified. 
It is always a best practice to review change records, and to validate that what was performed during the change matches the implementation that was proposed. ITIL and other security frameworks require validation of change management practices. Sudo can't do this. Commercial solutions can, through reviewing session command recording history and file integrity monitoring without revealing the underlying session data.

There is no session recording with sudo. Session logs are one of the best forensic tools available for investigating what happened on servers, and it's human nature that people tend to be more cautious when they know they can be watched.

Finally, there is no segregation of duties with sudo. Most security and compliance frameworks require true separation of duties, and using a tool such as sudo just "skins" over the segregation-of-duties aspect. All of these deficiencies – lack of log integrity, lack of session monitoring, no change management – introduce risk when organizations must prove compliance or investigate anomalies.

### Business Continuity Risks

Sudo is open source. There is no indemnification if there is a critical error. Also, there is no rollback with sudo, so there is always the chance that mistakes will bring an entire system down with no one to call for support. Sure, it is possible to centralize sudo through a third-party tool such as Puppet or CFEngine, but you still end up managing multiple files across multiple groups of systems manually (or managing them as one huge policy). With this approach, there is greater risk that mistakes will break every system at once. A commercial solution would have policy roll-back capability that would limit the damage done.

### Lack of Enterprise Support

Since sudo is an open source package, there is no official service level for when packages must be updated to respond to identified security flaws or vulnerabilities. By mid-2017, there had already been two vulnerabilities identified in sudo with a CVSS score greater than six (CVE Sudo Vulnerabilities). Over the past several years, there have been a number of vulnerabilities discovered in sudo that took as many as three years to patch ([CVE-2013-2776][1], [CVE-2013-2777][2], [CVE-2013-1776][3]). The question here is, what exploits have been used in the past several months or years? A commercial solution that replaces sudo would eliminate this problem.

### Ten Questions to Measure Risk in Your Unix and Linux Environment

Unix and Linux systems present high-value targets for external attackers and malicious insiders. Expect to be breached if you share accounts, provide unfettered root access, or let files and sessions go unmonitored. Gaining root or other privileged credentials makes it easy for attackers to fly under the radar and access sensitive systems and data. And as we have reviewed, sudo isn't going to help.

In balancing costs vs. an acceptable level of risk to your Unix and Linux environment, consider these 10 questions:

1. How much time are Unix/Linux admins spending just trying to keep up? Can your organization benefit from automation?

2. Are you able to keep up with the different platform and version changes to your Unix/Linux systems?

3. As you grow and more hosts are added, how much more time will admins need to keep up with policy? Is adding personnel an option?

4. What about consistency across systems?
Modifying individual sudoers files with multiple admins makes that very difficult. Wouldn't systems become siloed if not consistently managed?

5. What happens when you bring in new or different Linux or Unix platforms? How will that complicate the management of the environment?

6. How critical is it for compliance or legal purposes to know whether a policy file or log has been tampered with?

7. Do you have a way to verify that the sudoers file hasn't been modified without permission? (A minimal sketch of such a check appears after the conclusion below.)

8. How do you know what admins actually did once they became root? Do you have a command history for their activity?

9. What would it cost the business if a mission-critical Unix/Linux host goes down? With sudo, how quickly could the team troubleshoot and fix the problem?

10. Can you demonstrate to the board that you have a backup if there is a significant outage?

### Benefits of Using a Commercial Solution

Although they come at a higher cost than free open source solutions, commercial solutions provide an effective way to mitigate the general issues related to sudo. Solutions that offer centralized management ease the pressure of monitoring and maintaining remote systems, while centralized event logging and keystroke recording are the cornerstones of audit expectations for most enterprises.

Commercial solutions usually have a regular release cycle and can typically deliver patches in response to vulnerabilities within hours or days of the time they're reported. Commercial solutions like PowerBroker for Unix & Linux by BeyondTrust provide event logging on separate infrastructure that is inaccessible to privileged users, and this eliminates the possibility of log tampering. PowerBroker also provides strong, centralized policy controls that are managed within an infrastructure separate from the systems under management; this eliminates the possibility of rogue changes to privileged access policies in server environments. Strong policy control also moves the security posture from 'Respond' to 'Prevent', and advanced features provide the ability to integrate with other enterprise tools and to conditionally alert when privileged access sessions begin or end.

### Conclusion

For organizations that are serious about incorporating a strong privileged access management program into their security program, there is no question that a commercial product delivers much more than an open source offering such as sudo. Eliminating the possibility of malicious behavior using strong controls, centralized log file collection, and centralized policy management is far better than relying on the questionable, difficult-to-manage controls delivered within sudo. In calculating an acceptable level of risk to your tier-1 Unix and Linux systems, all of these costs and benefits must be considered.
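To make question 7 above concrete: with stock sudo, even a basic integrity check on the policy files is something each site must script and schedule itself. A minimal sketch (the baseline location is a hypothetical choice) might look like this:

```
#!/bin/sh
# Verify sudoers syntax, then compare against a stored baseline.
# /root/.sudoers.sha256 is a hypothetical baseline created after the
# last approved change with:
#   sha256sum /etc/sudoers /etc/sudoers.d/* > /root/.sudoers.sha256

visudo -c > /dev/null ||
    echo "sudoers fails syntax check on $(hostname)"

sha256sum --check --quiet /root/.sudoers.sha256 2>/dev/null ||
    echo "sudoers content changed on $(hostname)"
```

Note that such a script only detects a change after the fact, and its own baseline file must itself be protected from tampering, which is precisely the gap the commercial offerings described above claim to close.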
- - --------------------------------------------------------------------------------- - -via: http://www.linuxjournal.com/content/four-hidden-costs-and-risks-sudo-can-lead-cybersecurity-risks-and-compliance-problems-unix-a - -作者:[Chad Erbe][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.linuxjournal.com/users/chad-erbe -[1]:https://www.cvedetails.com/cve/CVE-2013-2776/ -[2]:https://www.cvedetails.com/cve/CVE-2013-2777/ -[3]:https://www.cvedetails.com/cve/CVE-2013-1776/ diff --git a/sources/tech/20171030 Complete Guide for Using AsciiDoc in Linux.md b/sources/tech/20171030 Complete Guide for Using AsciiDoc in Linux.md deleted file mode 100644 index b321f4fb23..0000000000 --- a/sources/tech/20171030 Complete Guide for Using AsciiDoc in Linux.md +++ /dev/null @@ -1,347 +0,0 @@ -Complete Guide for Using AsciiDoc in Linux -====== -**Brief: This detailed guide discusses the advantages of using AsciiDoc and shows you how to install and use AsciiDoc in Linux.** - -Over the years I used many different tools to write articles, reports or documentation. I think all started for me with Luc Barthelet's Epistole on Apple IIc from the French editor Version Soft. Then I switched to GUI tools with the excellent Microsoft Word 5 for Apple Macintosh, then the less convincing (to me) StarOffice on Sparc Solaris, that was already known as OpenOffice when I definitively switched to Linux. All these tools were really [word-processors][1]. - -But I was never really convinced by [WYSIWYG][2] editors. So I investigated many different more-or-less human-readable text formats: [troff][3], [HTML][4], [RTF][5], [TeX][6]/[LaTeX][7], [XML][8] and finally [AsciiDoc][9] which is the tool I use the most today. In fact, I am using it right now to write this article! - -If I made that history, it was because somehow the loop is closed. Epistole was a word-processor of the text-console era. As far as I remember, there were menus and you can use the mouse to select text -- but most of the formatting was done by adding non-intrusive tags into the text. Just like it is done with AsciiDoc. Of course, it was not the first software to do that. But it was the first I used! - - -![Controlling text alignment in Luc Barthelet's Epistole \(1985-Apple II\) by using commands embedded into the text][11] - -### Why AsciiDoc (or any other text file format)? - -I see two advantages in using text formats for writing: first, there is a clear separation between the content and the presentation. This argument is open to discussion since some text formats like TeX or HTML require a good discipline to adhere to that separation. And on the other hand, you can somehow achieve some level of separation by using [templates and stylesheets][12] with WYSIWYG editors. I agree with that. But I still find presentation issues intrusive with GUI tools. Whereas, when using text formats, you can focus on the content only without any font style or widow line disturbing you in your writing. But maybe it's just me? However, I can't count the number of times I stopped my writing just to fix some minor styling issue -- and having lost my inspiration when I came back to the text. If you disagree or have a different experience, don't hesitate to contradict me using the comment section below! - -Anyway, my second argument will be less subject to personal interpretation: documents based on text formats are highly interoperable. 
Not only can you edit them with any text editor on any platform, but you can easily manage text revisions with a tool such as [git][13] or [SVN][14], or automate text modification using common tools such as [sed][15], [AWK][16], [Perl][17] and so on. To give you a concrete example, when using a text-based format like AsciiDoc, I only need one command to produce highly personalized mailings from a master document, whereas the same job using a WYSIWYG editor would have required a clever use of "fields" and going through several wizard screens.

### What is AsciiDoc?

Strictly speaking, AsciiDoc is a file format. It defines syntactic constructs that will help a processor to understand the semantics of the various parts of your text, usually in order to produce nicely formatted output.

Even if that definition could seem abstract, the idea is simple: some keywords or characters in your document have a special meaning that will change the rendering of the document. This is the exact same concept as the tags in HTML. But a key difference with AsciiDoc is the property of the source document to remain easily human-readable.

Check [our GitHub repository][18] to compare how the same output can be produced using a few common text file formats: (coffee manpage idea courtesy of )

* `coffee.man` uses the venerable troff processor (based on the 1964 [RUNOFF][19] program). It's mostly used today to write [man pages][20]. You can try it after having downloaded the `coffee.*` files by typing `man ./coffee.man` at your command prompt.
* `coffee.tex` uses the LaTeX syntax (1985) to achieve mostly the same result but for a PDF output. LaTeX is a typesetting program especially well suited for scientific publications because of its ability to nicely format mathematical formulae and tables. You can produce the PDF from the LaTeX source using `pdflatex coffee.tex`
* `coffee.html` uses the HTML format (1991) to describe the page. You can directly open that file with your favorite web browser to see the result.
* `coffee.adoc`, finally, uses the AsciiDoc syntax (2002). You can produce both HTML and PDF from that file:

```
asciidoc coffee.adoc # HTML output
a2x --format pdf ./coffee.adoc # PDF output (dblatex)
a2x --fop --format pdf ./coffee.adoc # PDF output (Apache FOP)
```

Now that you've seen the result, open those four files using your favorite [text editor][21] (nano, vim, SublimeText, gedit, Atom, … ) and compare the sources: chances are good you will agree the AsciiDoc sources are easier to read -- and probably easier to write too.

![Who is who? Could you guess which of these example files is written using AsciiDoc?][22]

### How to install AsciiDoc in Linux?

AsciiDoc is relatively complex to install because of its many dependencies -- complex, that is, if you want to install it from source. For most of us, using our package manager is probably the best way:
```
apt-get install asciidoc fop
```

or the following command:
```
yum install asciidoc fop
```

(fop is only required if you need the [Apache FOP][23] backend for PDF generation -- this is the PDF backend I use myself)

More details about the installation can be found on [the official AsciiDoc website][24]. For now, all you need is a little bit of patience, since, at least on my minimal Debian system, installing AsciiDoc requires 360MB to be downloaded (mostly because of the LaTeX dependency) -- which, depending on your Internet bandwidth, may give you plenty of time to read the rest of this article.
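Once the packages are installed, a quick check will confirm that the pieces of the toolchain are in place before you start on a real document (the fop check only matters if you installed the FOP backend):

```
asciidoc --version   # the core AsciiDoc processor
a2x --version        # the toolchain driver discussed below
fop -version         # the optional Apache FOP PDF backend
```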
### AsciiDoc Tutorial: How to write in AsciiDoc?

![AsciiDoc tutorial for Linux][25]

As I've said several times, AsciiDoc is a human-readable text file format, so you can write your documents using the text editor of your choice. There are even dedicated text editors. But I will not talk about them here -- simply because I don't use them. But if you are using one of them, don't hesitate to share your feedback using the comment section at the end of this article.

I do not intend to create yet another AsciiDoc syntax tutorial here: there are plenty of them already available on the web. So I will only mention the very basic syntactic constructs you will use in virtually any document. From the simple "coffee" command example quoted above, you may see:

* **titles** in AsciiDoc are identified by underlining them with `===` or `---` (depending on the title level),
* **bold** character spans are written between stars,
* and **italics** between underscores.

Those are pretty common conventions, probably dating back to the pre-HTML email era. In addition, you may need two other common constructs, not illustrated in my previous example: **hyperlinks** and **image** inclusion, whose syntax is pretty self-explanatory.

```
// HyperText links
link:http://dashing-kazoo.flywheelsites.com[ItsFOSS Linux Blog]

// Inline Images
image:https://itsfoss.com/wp-content/uploads/2017/06/itsfoss-text-logo.png[ItsFOSS Text Logo]

// Block Images
image::https://itsfoss.com/wp-content/uploads/2017/06/itsfoss-text-logo.png[ItsFOSS Text Logo]
```

But the AsciiDoc syntax is much richer than that. If you want more, I can point you to that nice AsciiDoc cheatsheet:

### How to render the final output?

I will assume here you have already written some text following the AsciiDoc format. If this is not the case, you can download [here][26] some example files copied straight out of the AsciiDoc documentation:
```
# Download the AsciiDoc User Guide source document
BASE='https://raw.githubusercontent.com/itsfoss/asciidoc-intro/master'
wget "${BASE}"/{asciidoc.txt,customers.csv}
```

Since AsciiDoc is human-readable, you can send the AsciiDoc source text directly to someone by email, and the recipient will be able to read that message without further ado. But you may want to provide some more nicely formatted output: for example, as HTML for web publication (just as I've done for this article), or as PDF for print or display usage.

In all cases, you need a processor. In fact, under the hood, you will need several processors, because your AsciiDoc document will be transformed into various intermediate formats before producing the final output. Since several tools are used, with the output of one being the input of the next, we sometimes speak of a toolchain.

Even if I explain some inner working details here, you have to understand that most of that will be hidden from you -- except maybe when you initially have to install the tools, or if you want to fine-tune some steps of the process.

#### In practice?

For HTML output, you only need the `asciidoc` tool.
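Putting the basic constructs together, a complete little source document barely looks like markup at all. The contents below are invented for the example; saved as `first.adoc`, it renders to `first.html` with a plain `asciidoc first.adoc`:

```
My First Document
=================

A Section
---------

This paragraph mixes a *bold* span, an _italic_ span and a
link:https://en.wikipedia.org/wiki/AsciiDoc[hyperlink].
```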
For more complicated toolchains, I encourage you to use the `a2x` tool (part of the AsciiDoc distribution), which will trigger the necessary processors in order:
```
# All examples are based on the AsciiDoc User Guide source document

# HTML output
asciidoc asciidoc.txt
firefox asciidoc.html

# XHTML output
a2x --format=xhtml asciidoc.txt

# PDF output (LaTeX processor)
a2x --format=pdf asciidoc.txt

# PDF output (FOP processor)
a2x --fop --format=pdf asciidoc.txt
```

Even if it can directly produce an HTML output, the core functionality of the `asciidoc` tool remains transforming the AsciiDoc document to the intermediate [DocBook][27] format. DocBook is an XML-based format commonly used for (but not limited to) technical documentation publishing. DocBook is a semantic format: it describes your document's content, but not its presentation. So formatting will be the next step of the transformation. For that, whatever the output format, the DocBook intermediate document is processed through an [XSLT][28] processor to produce either the output directly (e.g., XHTML) or another intermediate format.

This is the case when you generate a PDF document, where the DocBook document will be (at your choice) converted either to an intermediate LaTeX representation or to [XSL-FO][29] (an XML-based page description language). Finally, a dedicated tool will convert that representation to PDF.

The extra steps for PDF generation are justified mainly by the fact that the toolchain has to handle pagination for the PDF output -- something that is not necessary for a "stream" format like HTML.

#### dblatex or fop?

Since there are two PDF backends, the usual question is: "Which is the best?" That is something I can't answer for you.

Both processors have [pros and cons][30]. And ultimately, the choice will be a compromise between your needs and your tastes. So I encourage you to take the time to try both of them before choosing the backend you will use. If you follow the LaTeX path, [dblatex][31] will be the backend used to produce the PDF, whereas it will be [Apache FOP][32] if you prefer using the XSL-FO intermediate format. So don't forget to take a look at the documentation of these tools to see how easy it will be to customize the output to your needs. Unless, of course, you are satisfied with the default output!

### How to customize the output of AsciiDoc?

#### AsciiDoc to HTML

Out of the box, AsciiDoc produces pretty nice documents. But sooner or later you will want to customize their appearance.

The exact changes will depend on the backend you use. For the HTML output, most changes can be done by changing the [CSS][33] stylesheet associated with the document.

For example, let's say I want to display all section headings in red. I could create the following `custom.css` file:
```
h2 {
    color: red;
}
```

And process the document using the slightly modified command:
```
# Set the 'stylesheet' attribute to
# the absolute path to our custom CSS file
asciidoc -a stylesheet=$PWD/custom.css asciidoc.txt
```

You can also make changes at a finer level by attaching a role attribute to an element. This will translate into a class attribute in the generated HTML.

For example, try to modify our test document to add the role attribute to the first paragraph of the text:
```
[role="summary"]
AsciiDoc is a text document format ....
```

Then add the following rule to the `custom.css` file:
```
.summary {
    font-style: italic;
}
```

Re-generate the document:
```
asciidoc -a stylesheet=$PWD/custom.css asciidoc.txt
```

![AsciiDoc HTML output with custom CSS to display the first paragraph in italics and section headings in color][34]

Et voilà: the first paragraph is now displayed in italics. With a little bit of creativity, some patience and a couple of CSS tutorials, you should be able to customize your document as you wish.

#### AsciiDoc to PDF

Customizing the PDF output is somewhat more complex -- not from the author's perspective, since the source text remains identical, possibly using the same role attribute as above to identify the parts that need special treatment.

But you can no longer use CSS to define the formatting for PDF output. For the most common settings, there are parameters you can set from the command line. Some parameters can be used with both the dblatex and fop backends; others are specific to each backend.

For the list of dblatex supported parameters, see

For the list of DocBook XSL parameters, see

Since margin adjustment is a pretty common requirement, you may also want to take a look at that:

While the parameter names are somewhat consistent between the two backends, the command-line arguments used to pass those values to the backends differ between dblatex and fop. So, double-check your syntax first if something apparently isn't working. But to be honest, while writing this article I wasn't able to make the `body.font.family` parameter work with the dblatex backend. Since I usually use fop, maybe I missed something? If you have more clues about that, I will be more than happy to read your suggestions in the comment section at the end of this article!

It's worth mentioning that using non-standard fonts -- even with fop -- requires some extra work. But it's pretty well documented on the Apache website:
```
# XSL-FO/FOP
a2x -v --format pdf \
    --fop \
    --xsltproc-opts='--stringparam page.margin.inner 10cm' \
    --xsltproc-opts='--stringparam body.font.family Helvetica' \
    --xsltproc-opts='--stringparam body.font.size 8pt' \
    asciidoc.txt

# dblatex
# (body.font.family _should_ work, but, apparently, it doesn't ?!?)
a2x -v --format pdf \
    --dblatex-opts='--param page.margin.inner=10cm' \
    --dblatex-opts='--stringparam body.font.family Helvetica' \
    asciidoc.txt
```

#### Fine-grained settings for PDF generation

Global parameters are nice if you just need to adjust some pre-defined settings. But if you want to fine-tune the document (or completely change the layout), you will need some extra effort.

At the core of the DocBook processing there is [XSLT][28]. XSLT is a computer language, expressed in XML notation, that allows you to write arbitrary transformations from an XML document to … something else, XML or not.

For example, you will need to extend or modify the [DocBook XSL stylesheet][35] to produce the XSL-FO code for the new styles you may want. And if you use the dblatex backend, this may require modifying the corresponding DocBook-to-LaTeX XSLT stylesheet. In that latter case, you may also need to use a custom LaTeX package. But I will not focus on that, since dblatex is not the backend I use myself. I can only point you to the [official documentation][36] if you want to know more. But once again, if you're familiar with that, please share your tips and tricks in the comment section!
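When working at that level, it helps to look first at the intermediate DocBook document the XSLT stylesheets will actually receive, since that is what your match expressions must target. The `asciidoc` tool can produce it directly (the file and attribute names below follow the running example):

```
# Produce asciidoc.xml, the DocBook representation of the document
asciidoc -b docbook asciidoc.txt

# Check how the role attribute from the source was carried over
grep -n 'role="summary"' asciidoc.xml
```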
Even while focusing only on fop, I don't really have the room here to detail the entire procedure. So, I will just show you the changes you could use to obtain a result similar to the one obtained with a few CSS lines in the HTML output above. That is: section titles in red and a summary paragraph in italics.

The trick I use here is to create a new XSLT stylesheet, importing the original DocBook stylesheet, but overriding the attribute sets or template for the elements we want to change:
```
<?xml version="1.0"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:fo="http://www.w3.org/1999/XSL/Format"
                version="1.0">

  <!-- Import the default DocBook-to-XSL-FO stylesheet -->
  <xsl:import href="http://docbook.sourceforge.net/release/xsl/current/fo/docbook.xsl"/>

  <!-- Override the attribute set used for level-1 section titles -->
  <xsl:attribute-set name="section.title.level1.properties">
    <xsl:attribute name="color">#FF0000</xsl:attribute>
  </xsl:attribute-set>

  <!-- Render paragraphs tagged with role="summary" in italics
       (match both para and simpara, depending on what the
       DocBook backend emitted for your paragraph) -->
  <xsl:template match="para[@role='summary']|simpara[@role='summary']">
    <fo:block font-style="italic">
      <xsl:apply-templates/>
    </fo:block>
  </xsl:template>

</xsl:stylesheet>
```

Then, you have to request `a2x` to use that custom XSL stylesheet to produce the output rather than the default one, using the `--xsl-file` option:
```
a2x -v --format pdf \
    --fop \
    --xsl-file=./custom.xsl \
    asciidoc.txt
```

![AsciiDoc PDF output generated from Apache FOP using a custom XSLT to display the first paragraph in italics and section headings in color][37]

With a little bit of familiarity with XSLT, the hints given here and some queries on your favorite search engine, I think you should be able to start customizing the XSL-FO output.

But I will not lie: some apparently simple changes in the document output may require you to spend quite some time searching through the DocBook XML and XSL-FO manuals, examining the stylesheet sources and performing a couple of tests before you finally achieve what you want.

### My opinion

Writing documents using a text format has tremendous advantages. And if you need to publish to HTML, there is not much reason for not using AsciiDoc. The syntax is clean and neat, processing is simple, and changing the presentation, if needed, mostly requires easy-to-acquire CSS skills.

And even if you don't use the HTML output directly, HTML can be used as an interchange format with many WYSIWYG applications today. As an example, this is what I've done here: I copied the HTML output of this article into the WordPress editing area, thus preserving all formatting, without having to type anything directly into WordPress.

If you need to publish to PDF, the advantages remain the same for the writer. Things will certainly be harder if you need to change the default layout in depth, though. In a corporate environment, that probably means hiring a document designer skilled with XSLT to produce the set of stylesheets that will suit your branding or technical requirements -- or having someone on the team acquire those skills. But once that is done, it will be a pleasure to write text with AsciiDoc, and to see those writings automatically converted to beautiful HTML pages or PDF documents!

Finally, if you find AsciiDoc either too simplistic or too complex, you may take a look at some other file formats with similar goals: [Markdown][38], [Textile][39], [reStructuredText][40] or [AsciiDoctor][41], to name a few. Even if based on concepts dating back to the early days of computing, the human-readable text format ecosystem is pretty rich -- probably richer than it was only 20 years ago. As proof, many modern [static web site generators][42] are based on them. Unfortunately, this is out of the scope of this article. So, let us know if you want to hear more about that!
- --------------------------------------------------------------------------------- - -via: https://itsfoss.com/asciidoc-guide/ - -作者:[Sylvain Leroux][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://itsfoss.com/author/sylvain/ -[1]:https://www.computerhope.com/jargon/w/wordssor.htm -[2]:https://en.wikipedia.org/wiki/WYSIWYG -[3]:https://en.wikipedia.org/wiki/Troff -[4]:https://en.wikipedia.org/wiki/HTML -[5]:https://en.wikipedia.org/wiki/Rich_Text_Format -[6]:https://en.wikipedia.org/wiki/TeX -[7]:https://en.wikipedia.org/wiki/LaTeX -[8]:https://en.wikipedia.org/wiki/XML -[9]:https://en.wikipedia.org/wiki/AsciiDoc -[11]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/09//epistole-manual-command-example-version-soft-luc-barthelet-1985.png -[12]:https://wiki.openoffice.org/wiki/Documentation/OOo3_User_Guides/Getting_Started/Templates_and_Styles -[13]:https://en.wikipedia.org/wiki/Git -[14]:https://en.wikipedia.org/wiki/Apache_Subversion -[15]:https://en.wikipedia.org/wiki/Sed -[16]:https://en.wikipedia.org/wiki/AWK -[17]:https://en.wikipedia.org/wiki/Perl -[18]:https://github.com/itsfoss/asciidoc-intro/tree/master/coffee -[19]:https://en.wikipedia.org/wiki/TYPSET_and_RUNOFF -[20]:https://en.wikipedia.org/wiki/Man_page -[21]:https://en.wikipedia.org/wiki/Text_editor -[22]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/09//troff-latex-html-asciidoc-compare-source-code.png -[23]:https://en.wikipedia.org/wiki/Formatting_Objects_Processor -[24]:http://www.methods.co.nz/asciidoc/INSTALL.html -[25]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/asciidoc-tutorial-linux.jpg -[26]:https://raw.githubusercontent.com/itsfoss/asciidoc-intro/master -[27]:https://en.wikipedia.org/wiki/DocBook -[28]:https://en.wikipedia.org/wiki/XSLT -[29]:https://en.wikipedia.org/wiki/XSL_Formatting_Objects -[30]:http://www.methods.co.nz/asciidoc/userguide.html#_pdf_generation -[31]:http://dblatex.sourceforge.net/ -[32]:https://xmlgraphics.apache.org/fop/ -[33]:https://en.wikipedia.org/wiki/Cascading_Style_Sheets -[34]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/09//asciidoc-html-output-custom-role-italic-paragraph-color-heading.png -[35]:http://www.sagehill.net/docbookxsl/ -[36]:http://dblatex.sourceforge.net/doc/manual/sec-custom.html -[37]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/09//asciidoc-fop-output-custom-role-italic-paragraph-color-heading.png -[38]:https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet -[39]:https://txstyle.org/ -[40]:http://docutils.sourceforge.net/docs/user/rst/quickstart.html -[41]:http://asciidoctor.org/ -[42]:https://www.smashingmagazine.com/2015/11/modern-static-website-generators-next-big-thing/ diff --git a/sources/tech/20180108 Linux Filesystem Events with inotify.md b/sources/tech/20180108 Linux Filesystem Events with inotify.md deleted file mode 100644 index 5e35f06ea8..0000000000 --- a/sources/tech/20180108 Linux Filesystem Events with inotify.md +++ /dev/null @@ -1,789 +0,0 @@ -translating by lujun9972 -Linux Filesystem Events with inotify -====== - -Triggering scripts with incron and systemd. - -It is, at times, important to know when things change in the Linux OS. The uses to which systems are placed often include high-priority data that must be processed as soon as it is seen. 
The conventional method of finding and processing new file data is to poll for it, usually with cron. This is inefficient, and it can tax performance unreasonably if too many polling events are forked too often. - -Linux has an efficient method for alerting user-space processes to changes impacting files of interest. The inotify Linux system calls were first discussed here in Linux Journal in a [2005 article by Robert Love][6] who primarily addressed the behavior of the new features from the perspective of C. - -However, there also are stable shell-level utilities and new classes of monitoring dæmons for registering filesystem watches and reporting events. Linux installations using systemd also can access basic inotify functionality with path units. The inotify interface does have limitations—it can't monitor remote, network-mounted filesystems (that is, NFS); it does not report the userid involved in the event; it does not work with /proc or other pseudo-filesystems; and mmap() operations do not trigger it, among other concerns. Even with these limitations, it is a tremendously useful feature. - -This article completes the work begun by Love and gives everyone who can write a Bourne shell script or set a crontab the ability to react to filesystem changes. - -### The inotifywait Utility - -Working under Oracle Linux 7 (or similar versions of Red Hat/CentOS/Scientific Linux), the inotify shell tools are not installed by default, but you can load them with yum: - -``` - - # yum install inotify-tools -Loaded plugins: langpacks, ulninfo -ol7_UEKR4 | 1.2 kB 00:00 -ol7_latest | 1.4 kB 00:00 -Resolving Dependencies ---> Running transaction check ----> Package inotify-tools.x86_64 0:3.14-8.el7 will be installed ---> Finished Dependency Resolution - -Dependencies Resolved - -============================================================== -Package Arch Version Repository Size -============================================================== -Installing: -inotify-tools x86_64 3.14-8.el7 ol7_latest 50 k - -Transaction Summary -============================================================== -Install 1 Package - -Total download size: 50 k -Installed size: 111 k -Is this ok [y/d/N]: y -Downloading packages: -inotify-tools-3.14-8.el7.x86_64.rpm | 50 kB 00:00 -Running transaction check -Running transaction test -Transaction test succeeded -Running transaction -Warning: RPMDB altered outside of yum. - Installing : inotify-tools-3.14-8.el7.x86_64 1/1 - Verifying : inotify-tools-3.14-8.el7.x86_64 1/1 - -Installed: - inotify-tools.x86_64 0:3.14-8.el7 - -Complete! - -``` - -The package will include two utilities (inotifywait and inotifywatch), documentation and a number of libraries. The inotifywait program is of primary interest. - -Some derivatives of Red Hat 7 may not include inotify in their base repositories. If you find it missing, you can obtain it from [Fedora's EPEL repository][7], either by downloading the inotify RPM for manual installation or adding the EPEL repository to yum. - -Any user on the system who can launch a shell may register watches—no special privileges are required to use the interface. This example watches the /tmp directory: - -``` - -$ inotifywait -m /tmp -Setting up watches. -Watches established. 
- -``` - -If another session on the system performs a few operations on the files in /tmp: - -``` - -$ touch /tmp/hello -$ cp /etc/passwd /tmp -$ rm /tmp/passwd -$ touch /tmp/goodbye -$ rm /tmp/hello /tmp/goodbye - -``` - -those changes are immediately visible to the user running inotifywait: - -``` - -/tmp/ CREATE hello -/tmp/ OPEN hello -/tmp/ ATTRIB hello -/tmp/ CLOSE_WRITE,CLOSE hello -/tmp/ CREATE passwd -/tmp/ OPEN passwd -/tmp/ MODIFY passwd -/tmp/ CLOSE_WRITE,CLOSE passwd -/tmp/ DELETE passwd -/tmp/ CREATE goodbye -/tmp/ OPEN goodbye -/tmp/ ATTRIB goodbye -/tmp/ CLOSE_WRITE,CLOSE goodbye -/tmp/ DELETE hello -/tmp/ DELETE goodbye - -``` - -A few relevant sections of the manual page explain what is happening: - -``` - -$ man inotifywait | col -b | sed -n '/diagnostic/,/helpful/p' - inotifywait will output diagnostic information on standard error and - event information on standard output. The event output can be config- - ured, but by default it consists of lines of the following form: - - watched_filename EVENT_NAMES event_filename - - watched_filename - is the name of the file on which the event occurred. If the - file is a directory, a trailing slash is output. - - EVENT_NAMES - are the names of the inotify events which occurred, separated by - commas. - - event_filename - is output only when the event occurred on a directory, and in - this case the name of the file within the directory which caused - this event is output. - - By default, any special characters in filenames are not escaped - in any way. This can make the output of inotifywait difficult - to parse in awk scripts or similar. The --csv and --format - options will be helpful in this case. - -``` - -It also is possible to filter the output by registering particular events of interest with the -e option, the list of which is shown here: - -| access | create | move_self | -|========|========|===========| -| attrib | delete | moved_to | -| close_write | delete_self | moved_from | -| close_nowrite | modify | open | -| close | move | unmount | - -A common application is testing for the arrival of new files. Since inotify must be given the name of an existing filesystem object to watch, the directory containing the new files is provided. A trigger of interest is also easy to provide—new files should be complete and ready for processing when the close_write trigger fires. Below is an example script to watch for these events: - -``` - -#!/bin/sh -unset IFS # default of space, tab and nl - # Wait for filesystem events -inotifywait -m -e close_write \ - /tmp /var/tmp /home/oracle/arch-orcl/ | -while read dir op file -do [[ "${dir}" == '/tmp/' && "${file}" == *.txt ]] && - echo "Import job should start on $file ($dir $op)." - - [[ "${dir}" == '/var/tmp/' && "${file}" == CLOSE_WEEK*.txt ]] && - echo Weekly backup is ready. - - [[ "${dir}" == '/home/oracle/arch-orcl/' && "${file}" == *.ARC ]] -&& - su - oracle -c 'ORACLE_SID=orcl ~oracle/bin/log_shipper' & - - [[ "${dir}" == '/tmp/' && "${file}" == SHUT ]] && break - - ((step+=1)) -done - -echo We processed $step events. - -``` - -There are a few problems with the script as presented—of all the available shells on Linux, only ksh93 (that is, the AT&T Korn shell) will report the "step" variable correctly at the end of the script. All the other shells will report this variable as null. - -The reason for this behavior can be found in a brief explanation on the manual page for Bash: "Each command in a pipeline is executed as a separate process (i.e., in a subshell)." 
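Incidentally, recent versions of bash can mimic the ksh93 behavior here: the `lastpipe` shell option runs the final stage of a pipeline in the current shell rather than in a subshell, although it only takes effect when job control is off (as it is in a non-interactive script). A minimal illustration:

```
#!/bin/bash
shopt -s lastpipe          # run the last pipeline stage in this shell

echo hello | read line     # read now executes in the parent shell
echo "line=[$line]"        # prints line=[hello]
```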
The MirBSD clone of the Korn shell has a slightly longer explanation: - -``` - -# man mksh | col -b | sed -n '/The parts/,/do so/p' - The parts of a pipeline, like below, are executed in subshells. Thus, - variable assignments inside them fail. Use co-processes instead. - - foo | bar | read baz # will not change $baz - foo | bar |& read -p baz # will, however, do so - -``` - -And, the pdksh documentation in Oracle Linux 5 (from which MirBSD mksh emerged) has several more mentions of the subject: - -``` - -General features of at&t ksh88 that are not (yet) in pdksh: - - the last command of a pipeline is not run in the parent shell - - `echo foo | read bar; echo $bar' prints foo in at&t ksh, nothing - in pdksh (ie, the read is done in a separate process in pdksh). - - in pdksh, if the last command of a pipeline is a shell builtin, it - is not executed in the parent shell, so "echo a b | read foo bar" - does not set foo and bar in the parent shell (at&t ksh will). - This may get fixed in the future, but it may take a while. - -$ man pdksh | col -b | sed -n '/BTW, the/,/aware/p' - BTW, the most frequently reported bug is - echo hi | read a; echo $a # Does not print hi - I'm aware of this and there is no need to report it. - -``` - -This behavior is easy enough to demonstrate—running the script above with the default bash shell and providing a sequence of example events: - -``` - -$ cp /etc/passwd /tmp/newdata.txt -$ cp /etc/group /var/tmp/CLOSE_WEEK20170407.txt -$ cp /etc/passwd /tmp/SHUT - -``` - -gives the following script output: - -``` - -# ./inotify.sh -Setting up watches. -Watches established. -Import job should start on newdata.txt (/tmp/ CLOSE_WRITE,CLOSE). -Weekly backup is ready. -We processed events. - -``` - -Examining the process list while the script is running, you'll also see two shells, one forked for the control structure: - -``` - -$ function pps { typeset a IFS=\| ; ps ax | while read a -do case $a in *$1*|+([!0-9])) echo $a;; esac; done } - -$ pps inot - PID TTY STAT TIME COMMAND - 3394 pts/1 S+ 0:00 /bin/sh ./inotify.sh - 3395 pts/1 S+ 0:00 inotifywait -m -e close_write /tmp /var/tmp - 3396 pts/1 S+ 0:00 /bin/sh ./inotify.sh - -``` - -As it was manipulated in a subshell, the "step" variable above was null when control flow reached the echo. Switching this from #/bin/sh to #/bin/ksh93 will correct the problem, and only one shell process will be seen: - -``` - -# ./inotify.ksh93 -Setting up watches. -Watches established. -Import job should start on newdata.txt (/tmp/ CLOSE_WRITE,CLOSE). -Weekly backup is ready. -We processed 2 events. - -$ pps inot - PID TTY STAT TIME COMMAND - 3583 pts/1 S+ 0:00 /bin/ksh93 ./inotify.sh - 3584 pts/1 S+ 0:00 inotifywait -m -e close_write /tmp /var/tmp - -``` - -Although ksh93 behaves properly and in general handles scripts far more gracefully than all of the other Linux shells, it is rather large: - -``` - -$ ll /bin/[bkm]+([aksh93]) /etc/alternatives/ksh --rwxr-xr-x. 1 root root 960456 Dec 6 11:11 /bin/bash -lrwxrwxrwx. 1 root root 21 Apr 3 21:01 /bin/ksh -> - /etc/alternatives/ksh --rwxr-xr-x. 1 root root 1518944 Aug 31 2016 /bin/ksh93 --rwxr-xr-x. 1 root root 296208 May 3 2014 /bin/mksh -lrwxrwxrwx. 1 root root 10 Apr 3 21:01 /etc/alternatives/ksh -> - /bin/ksh93 - -``` - -The mksh binary is the smallest of the Bourne implementations above (some of these shells may be missing on your system, but you can install them with yum). 
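If several of these shells are installed, the difference in pipeline behavior is easy to demonstrate in a single loop, using the same test quoted from the pdksh documentation above:

```
$ for sh in bash mksh ksh93
> do printf '%-6s ' "$sh"
>    "$sh" -c 'echo hi | read a; echo "a=[$a]"'
> done
bash   a=[]
mksh   a=[]
ksh93  a=[hi]
```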
For a long-term monitoring process, mksh is likely the best choice for reducing both processing and memory footprint, and it does not launch multiple copies of itself when idle, assuming that a coprocess is used. Converting the script to use a Korn coprocess that is friendly to mksh is not difficult:

```
#!/bin/mksh
unset IFS                              # default of space, tab and nl
                                       # Wait for filesystem events
inotifywait -m -e close_write \
    /tmp/ /var/tmp/ /home/oracle/arch-orcl/ \
    2>/dev/null |&                     # Launch as a Korn coprocess

while read -p dir op file              # Read from the coprocess
do  [[ "${dir}" == '/tmp/' && "${file}" == *.txt ]] &&
        echo "Import job should start on $file ($dir $op)."

    [[ "${dir}" == '/var/tmp/' && "${file}" == CLOSE_WEEK*.txt ]] &&
        echo Weekly backup is ready.

    [[ "${dir}" == '/home/oracle/arch-orcl/' && "${file}" == *.ARC ]] &&
        su - oracle -c 'ORACLE_SID=orcl ~oracle/bin/log_shipper' &

    [[ "${dir}" == '/tmp/' && "${file}" == SHUT ]] && break

    ((step+=1))
done

echo We processed $step events.
```

The `log_shipper` handler launched above serializes itself with a lock file, so that a burst of arriving archived logs cannot start overlapping copies. A minimal version compresses each new log with lzip, ships it to the standby and records its progress (the standby hostname is site-specific):

```
# cat ~oracle/bin/log_shipper

#!/bin/ksh93

STANDBY=standby-host                    # Site-specific destination.

(
  flock -n 9 || exit 1                  # Critical section-only one process.

  cd ~oracle/arch-$ORACLE_SID

  for x in $(ls -rt *.ARC 2>/dev/null)  # New logs, oldest first.
  do /usr/local/bin/lzip -q "$x"        # Compress; the log becomes $x.lz.

     scp -Bq "$x.lz" $STANDBY:arch-$ORACLE_SID/

     print "$x" > ~oracle/.curlog-$ORACLE_SID  # Record the last log shipped.
  done

) 9>~oracle/.processing_logs-$ORACLE_SID
```

The above script can be executed manually for testing even while the inotify handler is running, as the flock protects it.

A standby server, or a DataGuard server in primitive standby mode, can apply the archived logs at regular intervals. The script below forces a 12-hour delay in log application for the recovery of dropped or damaged objects, so inotify cannot be easily used in this case—cron is a more reasonable approach for delayed file processing, and a run every 20 minutes will keep the standby at the desired recovery point:

```
# cat ~oracle/archutils/delay-lock.sh

#!/bin/ksh93

(
  flock -n 9 || exit 1                 # Critical section-only one process.

  WINDOW=43200                         # 12 hours

  LOG_DEST=~oracle/arch-$ORACLE_SID

  OLDLOG_DEST=$LOG_DEST-applied

  function fage { print $(( $(date +%s) - $(stat -c %Y "$1") ))
  } # File age in seconds - Requires GNU extended date & stat

  cd $LOG_DEST

  of=$(ls -t | tail -1)                # Oldest file in directory

  [[ -z "$of" || $(fage "$of") -lt $WINDOW ]] && exit

  for x in $(ls -rt)                   # Order by ascending file mtime
  do if [[ $(fage "$x") -ge $WINDOW ]]
     then y=$(basename $x .lz)         # lzip compression is optional

          [[ "$y" != "$x" ]] && /usr/local/bin/lzip -dkq "$x"

          $ORACLE_HOME/bin/sqlplus '/ as sysdba' > /dev/null 2>&1 <<-EOF
		recover standby database;
		$LOG_DEST/$y
		cancel
		quit
	EOF

          [[ "$y" != "$x" ]] && rm "$y"

          mv "$x" $OLDLOG_DEST
     fi
  done
) 9> ~oracle/.recovering-$ORACLE_SID
```

I've covered these specific examples here because they introduce tools to control concurrency, which is a common issue when using inotify, and they advance a few features that increase reliability and minimize storage requirements. Hopefully enthusiastic readers will introduce many improvements to these approaches.

### The incron System

Lukas Jelinek is the author of the incron package that allows users to specify tables of inotify events that are executed by the master incrond process. Despite the reference to "cron", the package does not schedule events at regular intervals—it is a tool for filesystem events, and the cron reference is slightly misleading.

The incron package is available from EPEL.
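If the EPEL repository has not yet been configured on the machine, that is one extra step (the package name below is correct for CentOS and similar distributions; Oracle Linux instead ships an EPEL channel that can be enabled in yum's repository configuration):

```
# yum install epel-release
```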
If you have installed the repository, you can load it with yum:

```
# yum install incron
Loaded plugins: langpacks, ulninfo
Resolving Dependencies
--> Running transaction check
---> Package incron.x86_64 0:0.5.10-8.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=================================================================
 Package        Arch        Version             Repository  Size
=================================================================
Installing:
 incron         x86_64      0.5.10-8.el7        epel        92 k

Transaction Summary
=================================================================
Install  1 Package

Total download size: 92 k
Installed size: 249 k
Is this ok [y/d/N]: y
Downloading packages:
incron-0.5.10-8.el7.x86_64.rpm                  |  92 kB  00:01
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : incron-0.5.10-8.el7.x86_64                    1/1
  Verifying  : incron-0.5.10-8.el7.x86_64                    1/1

Installed:
  incron.x86_64 0:0.5.10-8.el7

Complete!
```

On a systemd distribution with the appropriate service units, you can start and enable incron at boot with the following commands:

```
# systemctl start incrond
# systemctl enable incrond
Created symlink from
 /etc/systemd/system/multi-user.target.wants/incrond.service
to /usr/lib/systemd/system/incrond.service.
```

In the default configuration, any user can establish incron schedules. The incrontab format uses three fields:

```
<path> <mask> <command>
```

Below is an example entry that was set with the -e option:

```
$ incrontab -e #vi session follows

$ incrontab -l
/tmp/ IN_ALL_EVENTS /home/luser/myincron.sh $@ $% $#
```

You can record a simple script and mark it with execute permission:

```
$ cat myincron.sh
#!/bin/sh

echo -e "path: $1 op: $2 \t file: $3" >> ~/op

$ chmod 755 myincron.sh
```

Then, if you repeat the original /tmp file manipulations at the start of this article, the script will record the following output:

```
$ cat ~/op

path: /tmp/ op: IN_ATTRIB 	 file: hello
path: /tmp/ op: IN_CREATE 	 file: hello
path: /tmp/ op: IN_OPEN 	 file: hello
path: /tmp/ op: IN_CLOSE_WRITE 	 file: hello
path: /tmp/ op: IN_OPEN 	 file: passwd
path: /tmp/ op: IN_CLOSE_WRITE 	 file: passwd
path: /tmp/ op: IN_MODIFY 	 file: passwd
path: /tmp/ op: IN_CREATE 	 file: passwd
path: /tmp/ op: IN_DELETE 	 file: passwd
path: /tmp/ op: IN_CREATE 	 file: goodbye
path: /tmp/ op: IN_ATTRIB 	 file: goodbye
path: /tmp/ op: IN_OPEN 	 file: goodbye
path: /tmp/ op: IN_CLOSE_WRITE 	 file: goodbye
path: /tmp/ op: IN_DELETE 	 file: hello
path: /tmp/ op: IN_DELETE 	 file: goodbye
```

While the IN_CLOSE_WRITE event on a directory object is usually of greatest interest, most of the standard inotify events are available within incron, which also offers several unique amalgams:

```
$ man 5 incrontab | col -b | sed -n '/EVENT SYMBOLS/,/child process/p'

EVENT SYMBOLS

These basic event mask symbols are defined:

IN_ACCESS           File was accessed (read) (*)
IN_ATTRIB           Metadata changed (permissions, timestamps, extended
                    attributes, etc.)
(*) -IN_CLOSE_WRITE File opened for writing was closed (*) -IN_CLOSE_NOWRITE File not opened for writing was closed (*) -IN_CREATE File/directory created in watched directory (*) -IN_DELETE File/directory deleted from watched directory (*) -IN_DELETE_SELF Watched file/directory was itself deleted -IN_MODIFY File was modified (*) -IN_MOVE_SELF Watched file/directory was itself moved -IN_MOVED_FROM File moved out of watched directory (*) -IN_MOVED_TO File moved into watched directory (*) -IN_OPEN File was opened (*) - -When monitoring a directory, the events marked with an asterisk (*) -above can occur for files in the directory, in which case the name -field in the returned event data identifies the name of the file within -the directory. - -The IN_ALL_EVENTS symbol is defined as a bit mask of all of the above -events. Two additional convenience symbols are IN_MOVE, which is a com- -bination of IN_MOVED_FROM and IN_MOVED_TO, and IN_CLOSE, which combines -IN_CLOSE_WRITE and IN_CLOSE_NOWRITE. - -The following further symbols can be specified in the mask: - -IN_DONT_FOLLOW Don't dereference pathname if it is a symbolic link -IN_ONESHOT Monitor pathname for only one event -IN_ONLYDIR Only watch pathname if it is a directory - -Additionally, there is a symbol which doesn't appear in the inotify sym- -bol set. It is IN_NO_LOOP. This symbol disables monitoring events until -the current one is completely handled (until its child process exits). - -``` - -The incron system likely presents the most comprehensive interface to inotify of all the tools researched and listed here. Additional configuration options can be set in /etc/incron.conf to tweak incron's behavior for those that require a non-standard configuration. - -### Path Units under systemd - -When your Linux installation is running systemd as PID 1, limited inotify functionality is available through "path units" as is discussed in a lighthearted [article by Paul Brown][8] at OCS-Mag. - -The relevant manual page has useful information on the subject: - -``` - -$ man systemd.path | col -b | sed -n '/Internally,/,/systems./p' - -Internally, path units use the inotify(7) API to monitor file systems. -Due to that, it suffers by the same limitations as inotify, and for -example cannot be used to monitor files or directories changed by other -machines on remote NFS file systems. - -``` - -Note that when a systemd path unit spawns a shell script, the $HOME and tilde (~) operator for the owner's home directory may not be defined. Using the tilde operator to reference another user's home directory (for example, ~nobody/) does work, even when applied to the self-same user running the script. The Oracle script above was explicit and did not reference ~ without specifying the target user, so I'm using it as an example here. - -Using inotify triggers with systemd path units requires two files. The first file specifies the filesystem location of interest: - -``` - -$ cat /etc/systemd/system/oralog.path - -[Unit] -Description=Oracle Archivelog Monitoring -Documentation=http://docs.yourserver.com - -[Path] -PathChanged=/home/oracle/arch-orcl/ - -[Install] -WantedBy=multi-user.target - -``` - -The PathChanged parameter above roughly corresponds to the close-write event used in my previous direct inotify calls. The full collection of inotify events is not (currently) supported by systemd—it is limited to PathExists, PathChanged and PathModified, which are described in man systemd.path. - -The second file is a service unit describing a program to be executed. 
It must have the same name, but a different extension, as the path unit: - -``` - -$ cat /etc/systemd/system/oralog.service - -[Unit] -Description=Oracle Archivelog Monitoring -Documentation=http://docs.yourserver.com - -[Service] -Type=oneshot -Environment=ORACLE_SID=orcl -ExecStart=/bin/sh -c '/root/process_logs >> /tmp/plog.txt 2>&1' - -``` - -The oneshot parameter above alerts systemd that the program that it forks is expected to exit and should not be respawned automatically—the restarts are limited to triggers from the path unit. The above service configuration will provide the best options for logging—divert them to /dev/null if they are not needed. - -Use systemctl start on the path unit to begin monitoring—a common error is using it on the service unit, which will directly run the handler only once. Enable the path unit if the monitoring should survive a reboot. - -Although this limited functionality may be enough for some casual uses of inotify, it is a shame that the full functionality of inotifywait and incron are not represented here. Perhaps it will come in time. - -### Conclusion - -Although the inotify tools are powerful, they do have limitations. To repeat them, inotify cannot monitor remote (NFS) filesystems; it cannot report the userid involved in a triggering event; it does not work with /proc or other pseudo-filesystems; mmap() operations do not trigger it; and the inotify queue can overflow resulting in lost events, among other concerns. - -Even with these weaknesses, the efficiency of inotify is superior to most other approaches for immediate notifications of filesystem activity. It also is quite flexible, and although the close-write directory trigger should suffice for most usage, it has ample tools for covering special use cases. - -In any event, it is productive to replace polling activity with inotify watches, and system administrators should be liberal in educating the user community that the classic crontab is not an appropriate place to check for new files. Recalcitrant users should be confined to Ultrix on a VAX until they develop sufficient appreciation for modern tools and approaches, which should result in more efficient Linux systems and happier administrators. - -### Sidenote: Archiving /etc/passwd - -Tracking changes to the password file involves many different types of inotify triggering events. The vipw utility commonly will make changes to a temporary file, then clobber the original with it. This can be seen when the inode number changes: - -``` - -# ll -i /etc/passwd -199720973 -rw-r--r-- 1 root root 3928 Jul 7 12:24 /etc/passwd - -# vipw -[ make changes ] -You are using shadow passwords on this system. -Would you like to edit /etc/shadow now [y/n]? n - -# ll -i /etc/passwd -203784208 -rw-r--r-- 1 root root 3956 Jul 7 12:24 /etc/passwd - -``` - -The destruction and replacement of /etc/passwd even occurs with setuid binaries called by unprivileged users: - -``` - -$ ll -i /etc/passwd -203784196 -rw-r--r-- 1 root root 3928 Jun 29 14:55 /etc/passwd - -$ chsh -Changing shell for fishecj. -Password: -New shell [/bin/bash]: /bin/csh -Shell changed. - -$ ll -i /etc/passwd -199720970 -rw-r--r-- 1 root root 3927 Jul 7 12:23 /etc/passwd - -``` - -For this reason, all inotify triggering events should be considered when tracking this file. If there is concern with an inotify queue overflow (in which events are lost), then the OPEN, ACCESS and CLOSE_NOWRITE,CLOSE triggers likely can be immediately ignored. 
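As a quick sanity check before building a full handler, the inotifywait utility (from the inotify-tools package, where it is available) can display the remaining event set interactively. Because vipw and chsh replace /etc/passwd with a new inode, the sketch below watches the /etc directory and filters on the filename rather than watching the file itself:

```

# A rough sketch, assuming inotify-tools is installed; the echo is a
# placeholder for a real handler, such as the script that follows.
inotifywait -m -q --format '%e %f' \
    -e close_write -e moved_to -e create -e attrib /etc |
while read events name
do
    [ "$name" = "passwd" ] && echo "passwd event: $events"
done

```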
- -All other inotify events on /etc/passwd might run the following script to version the changes into an RCS archive and mail them to an administrator: - -``` - -#!/bin/sh - -# This script tracks changes to the /etc/passwd file from inotify. -# Uses RCS for archiving. Watch for UID zero. - -PWMAILS=Charlie.Root@openbsd.org - -TPDIR=~/track_passwd - -cd $TPDIR - -if diff -q /etc/passwd $TPDIR/passwd -then exit # they are the same -else sleep 5 # let passwd settle - diff /etc/passwd $TPDIR/passwd 2>&1 | # they are DIFFERENT - mail -s "/etc/passwd changes $(hostname -s)" "$PWMAILS" - cp -f /etc/passwd $TPDIR # copy for checkin - -# "SCCS, the source motel! Programs check in and never check out!" -# -- Ken Thompson - - rcs -q -l passwd # lock the archive - ci -q -m_ passwd # check in new ver - co -q passwd # drop the new copy -fi > /dev/null 2>&1 - -``` - -Here is an example email from the script for the above chfn operation: - -``` - ------Original Message----- -From: root [mailto:root@myhost.com] -Sent: Thursday, July 06, 2017 2:35 PM -To: Fisher, Charles J. ; -Subject: /etc/passwd changes myhost - -57c57 -< fishecj:x:123:456:Fisher, Charles J.:/home/fishecj:/bin/bash ---- -> fishecj:x:123:456:Fisher, Charles J.:/home/fishecj:/bin/csh - -``` - -Further processing on the third column of /etc/passwd might detect UID zero (a root user) or other important user classes for emergency action. This might include a rollback of the file from RCS to /etc and/or SMS messages to security contacts. - - --------------------------------------------------------------------------------- - -via: http://www.linuxjournal.com/content/linux-filesystem-events-inotify - -作者:[Charles Fisher][a] -译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: -[1]:http://www.nongnu.org/lzip -[2]:http://www.nongnu.org/lzip/xz_inadequate.html -[3]:http://www.7-zip.org -[4]:http://www.ncsl.org/research/telecommunications-and-information-technology/security-breach-notification-laws.aspx -[5]:http://www.linuxjournal.com/content/flat-file-encryption-openssl-and-gpg -[6]:http://www.linuxjournal.com/article/8478 -[7]:https://fedoraproject.org/wiki/EPEL -[8]:http://www.ocsmag.com/2015/09/02/monitoring-file-access-for-dummies diff --git a/sources/tech/20180117 Avoiding Server Disaster.md b/sources/tech/20180117 Avoiding Server Disaster.md deleted file mode 100644 index cb88fe20d9..0000000000 --- a/sources/tech/20180117 Avoiding Server Disaster.md +++ /dev/null @@ -1,125 +0,0 @@ -Avoiding Server Disaster -====== - -Worried that your server will go down? You should be. Here are some disaster-planning tips for server owners. - -If you own a car or a house, you almost certainly have insurance. Insurance seems like a huge waste of money. You pay it every year and make sure that you get the best possible price for the best possible coverage, and then you hope you never need to use the insurance. Insurance seems like a really bad deal—until you have a disaster and realize that had it not been for the insurance, you might have been in financial ruin. - -Unfortunately, disasters and mishaps are a fact of life in the computer industry. And so, just as you pay insurance and hope never to have to use it, you also need to take time to ensure the safety and reliability of your systems—not because you want disasters to happen, or even expect them to occur, but rather because you have to. 
- -If your website is an online brochure for your company and then goes down for a few hours or even days, it'll be embarrassing and annoying, but not financially painful. But, if your website is your business, when your site goes down, you're losing money. If that's the case, it's crucial to ensure that your server and software are not only unlikely to go down, but also easily recoverable if and when that happens. - -Why am I writing about this subject? Well, let's just say that this particular problem hit close to home for me, just before I started to write this article. After years of helping clients around the world to ensure the reliability of their systems, I made the mistake of not being as thorough with my own. ("The shoemaker's children go barefoot", as the saying goes.) This means that just after launching my new online product for Python developers, a seemingly trivial upgrade turned into a disaster. The precautions I put in place, it turns out, weren't quite enough—and as I write this, I'm still putting my web server together. I'll survive, as will my server and business, but this has been a painful and important lesson—one that I'll do almost anything to avoid repeating in the future. - -So in this article, I describe a number of techniques I've used to keep servers safe and sound through the years, and to reduce the chances of a complete meltdown. You can think of these techniques as insurance for your server, so that even if something does go wrong, you'll be able to recover fairly quickly. - -I should note that most of the advice here assumes no redundancy in your architecture—that is, a single web server and (at most) a single database server. If you can afford to have a bunch of servers of each type, these sorts of problems tend to be much less frequent. However, that doesn't mean they go away entirely. Besides, although people like to talk about heavy-duty web applications that require massive iron in order to run, the fact is that many businesses run on small, one- and two-computer servers. Moreover, those businesses don't need more than that; the ROI (return on investment) they'll get from additional servers cannot be justified. However, the ROI from a good backup and recovery plan is huge, and thus worth the investment. - -### The Parts of a Web Application - -Before I can talk about disaster preparation and recovery, it's important to consider the different parts of a web application and what those various parts mean for your planning. - -For many years, my website was trivially small and simple. Even if it contained some simple programs, those generally were used for sending email or for dynamically displaying different assets to visitors. The entire site consisted of some static HTML, images, JavaScript and CSS. No database or other excitement was necessary. - -At the other end of the spectrum, many people have full-blown web applications, sitting on multiple servers, with one or more databases and caches, as well as HTTP servers with extensively edited configuration files. - -But even when considering those two extremes, you can see that a web application consists of only a few parts: - -* The application software itself. - -* Static assets for that application. - -* Configuration file(s) for the HTTP server(s). - -* Database configuration files. - -* Database schema and contents. - -Assuming that you're using a high-level language, such as Python, Ruby or JavaScript, everything in this list either is a file or can be turned into one. 
(All databases make it possible to "dump" their contents onto disk, into a format that then can be loaded back into the database server.) - -Consider a site containing only application software, static assets and configuration files. (In other words, no database is involved.) In many cases, such a site can be backed up reliably in Git. Indeed, I prefer to keep my sites in Git, backed up on a commercial hosting service, such as GitHub or Bitbucket, and then deployed using a system like Capistrano. - -In other words, you develop the site on your own development machine. Whenever you are happy with a change that you've made, you commit the change to Git (on your local machine) and then do a git push to your central repository. In order to deploy your application, you then use Capistrano to do a cap deploy, which reads the data from the central repository, puts it into the appropriate place on the server's filesystem, and you're good to go. - -This system keeps you safe in a few different ways. The code itself is located in at least three locations: your development machine, the server and the repository. And those central repositories tend to be fairly reliable, if only because it's in the financial interest of the hosting company to ensure that things are reliable. - -I should add that in such a case, you also should include the HTTP server's configuration files in your Git repository. Those files aren't likely to change very often, but I can tell you from experience, if you're recovering from a crisis, the last thing you want to think about is how your Apache configuration files should look. Copying those files into your Git repository will work just fine. - -### Backing Up Databases - -You could argue that the difference between a "website" and a "web application" is a database. Databases long have powered the back ends of many web applications and for good reason—they allow you to store and retrieve data reliably and flexibly. The power that modern open-source databases provides was unthinkable just a decade or two ago, and there's no reason to think that they'll be any less reliable in the future. - -And yet, just because your database is pretty reliable doesn't mean that it won't have problems. This means you're going to want to keep a snapshot ("dump") of the database's contents around, in case the database server corrupts information, and you need to roll back to a previous version. - -My favorite solution for such a problem is to dump the database on a regular basis, preferably hourly. Here's a shell script I've used, in one form or another, for creating such regular database dumps: - -``` - -#!/bin/sh - -BACKUP_ROOT="/home/database-backups/" -YEAR=`/bin/date +'%Y'` -MONTH=`/bin/date +'%m'` -DAY=`/bin/date +'%d'` - -DIRECTORY="$BACKUP_ROOT/$YEAR/$MONTH/$DAY" -USERNAME=dbuser -DATABASE=dbname -HOST=localhost -PORT=3306 - -/bin/mkdir -p $DIRECTORY - -/usr/bin/mysqldump -h $HOST --databases $DATABASE -u $USERNAME - ↪| /bin/gzip --best --verbose > - ↪$DIRECTORY/$DATABASE-dump.gz - -``` - -The above shell script starts off by defining a bunch of variables, from the directory in which I want to store the backups, to the parts of the date (stored in $YEAR, $MONTH and $DAY). This is so I can have a separate directory for each day of the month. I could, of course, go further and have separate directories for each hour, but I've found that I rarely need more than one backup from a day. - -Once I have defined those variables, I then use the mkdir command to create a new directory. 
The -p option tells mkdir that if necessary, it should create all of the directories it needs such that the entire path will exist.
-
-Finally, I run the database's "dump" command. In this particular case, I'm using MySQL, so I'm using the mysqldump command. The output from this command is a stream of SQL that can be used to re-create the database. I thus take the output from mysqldump and pipe it into gzip, which compresses the output file. The resulting dumpfile is then placed, in compressed form, inside the daily backup directory.
-
-Depending on the size of your database and the amount of disk space you have on hand, you'll have to decide just how often you want to run dumps and how often you want to clean out old ones. I know from experience that dumping every hour can cause some load problems. On one virtual machine I've used, the overall administration team was unhappy that I was dumping and compressing every hour, which they saw as an unnecessary use of system resources.
-
-If you're worried your system will run out of disk space, you might well want to run a space-checking program that'll alert you when the filesystem is low on free space. In addition, you can run a cron job that uses find to erase all dumpfiles from before a certain cutoff date. I'm always a bit nervous about programs that automatically erase backups, so I generally prefer not to do this. Rather, I run a program that warns me if the disk usage is going above 85% (which is usually low enough to ensure that I can fix the problem in time, even if I'm on a long flight). Then I can go in and remove the problematic files by hand.
-
-When you back up your database, you should be sure to back up the configuration for that database as well. The database schema and data, which are part of the dumpfile, are certainly important. However, if you find yourself having to re-create your server from scratch, you'll want to know precisely how you configured the database server, with a particular emphasis on the filesystem configuration and memory allocations. I tend to use PostgreSQL for most of my work, and although postgresql.conf is simple to understand and configure, I still like to keep it around with my dumpfiles.
-
-Another crucial thing to do is to check your database dumps occasionally to be sure that they are working the way you want. It turns out that the backups I thought I was making weren't actually happening, in no small part because I had modified the shell script and hadn't double-checked that it was creating useful backups. Occasionally pulling out one of your dumpfiles and restoring it to a separate (and offline!) database to check its integrity is a good practice, both to ensure that the dump is working and that you remember how to restore it in the case of an emergency.
-
-### Storing Backups
-
-But wait. It might be great to have these backups, but what if the server goes down entirely? In the case of the code, I mentioned ensuring that it was located on more than one machine, which protects its integrity. By contrast, your database dumps are now on the server, such that if the server fails, your database dumps will be inaccessible.
-
-This means you'll want to have your database dumps stored elsewhere, preferably automatically. How can you do that?
-
-There are a few relatively easy and inexpensive solutions to this problem. If you have two servers—ideally in separate physical locations—you can use rsync to copy the files from one to the other.
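A minimal sketch of such a transfer, reusing the BACKUP_ROOT directory from the dump script above (the backup account and hostname here are hypothetical), might be run from cron on the database server:

```

# Push the dump directory tree to a second machine; assumes
# passwordless SSH for the dedicated backup account (see below).
rsync -az /home/database-backups/ \
      backup@backuphost.example.com:/home/database-backups/

```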
Don't rsync the database's actual files, since those might get corrupted in transfer and aren't designed to be copied when the server is running. By contrast, the dumpfiles that you have created are more than able to go elsewhere. Setting up a remote server, with a user specifically for handling these backup transfers, shouldn't be too hard and will go a long way toward ensuring the safety of your data.
-
-I should note that using rsync in this way basically requires that you set up passwordless SSH, so that you can transfer without having to be physically present to enter the password.
-
-Another possible solution is Amazon's Simple Storage Service (S3), which offers astonishing amounts of disk space at very low prices. I know that many companies use S3 as a simple (albeit slow) backup system. You can set up a cron job to run a program that copies the contents of a particular database dumpfile directory onto a particular server. The assumption here is that you're not ever going to use these backups, meaning that S3's slow searching and access will not be an issue once you're working on the server.
-
-Similarly, you might consider using Dropbox. Dropbox is best known for its desktop client, but it has a "headless", text-based client that can be used on Linux servers without a GUI connected. One nice advantage of Dropbox is that you can share a folder with any number of people, which means you can have Dropbox distribute your backup databases everywhere automatically, including to a number of people on your team. The backups arrive in their Dropbox folder, and you can be sure that copies of your data exist off-site, beyond the server itself.
-
-Finally, if you're running a WordPress site, you might want to consider VaultPress, a for-pay backup system. I must admit that in the weeks before I took my server down with a database backup error, I kept seeing ads in WordPress for VaultPress. "Who would buy that?", I asked myself, thinking that I'm smart enough to do backups myself. Of course, after disaster occurred and my database was ruined, I realized that $30/year to back up all of my data is cheap, and I should have done it before.
-
-### Conclusion
-
-When it comes to your servers, think less like an optimistic programmer and more like an insurance agent. Perhaps disaster won't strike, but if it does, will you be able to recover? Making sure that even if your server is completely unavailable, you'll be able to bring up your program and any associated database is crucial.
-
-My preferred solution involves combining a Git repository for code and configuration files, distributed across several machines and services. For the databases, however, it's not enough to dump your database; you'll need to get that dump onto a separate machine, and preferably test the backup file on a regular basis. That way, even if things go wrong, you'll be able to get back up in no time.
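As a sketch of that testing step, the most recent dump from the script shown earlier can be loaded into a scratch MySQL instance. Run something like this against an isolated, offline server (never production), because a dump taken with --databases re-creates the original database name on load; the scratch hostname and credentials here are assumptions:

```

# Find the newest dump under /home/database-backups/YYYY/MM/DD/
# and load it into the scratch instance, reporting the result.
LATEST=$(ls -t /home/database-backups/*/*/*/dbname-dump.gz | head -1)
gunzip -c "$LATEST" | mysql -h scratchhost -u dbuser -p &&
    echo "restore test passed: $LATEST" ||
    echo "restore test FAILED: $LATEST"

```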
- --------------------------------------------------------------------------------- - -via: http://www.linuxjournal.com/content/avoiding-server-disaster - -作者:[Reuven M.Lerner][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.linuxjournal.com/user/1000891 diff --git a/sources/tech/20180126 Creating an Adventure Game in the Terminal with ncurses.md b/sources/tech/20180126 Creating an Adventure Game in the Terminal with ncurses.md deleted file mode 100644 index 5b9a731e3d..0000000000 --- a/sources/tech/20180126 Creating an Adventure Game in the Terminal with ncurses.md +++ /dev/null @@ -1,325 +0,0 @@ -leemeans translating -Creating an Adventure Game in the Terminal with ncurses -====== -How to use curses functions to read the keyboard and manipulate the screen. - -My [previous article][1] introduced the ncurses library and provided a simple program that demonstrated a few curses functions to put text on the screen. In this follow-up article, I illustrate how to use a few other curses functions. - -### An Adventure - -When I was growing up, my family had an Apple II computer. It was on this machine that my brother and I taught ourselves how to write programs in AppleSoft BASIC. After writing a few math puzzles, I moved on to creating games. Having grown up in the 1980s, I already was a fan of the Dungeons and Dragons tabletop games, where you role-played as a fighter or wizard on some quest to defeat monsters and plunder loot in strange lands. So it shouldn't be surprising that I also created a rudimentary adventure game. - -The AppleSoft BASIC programming environment supported a neat feature: in standard resolution graphics mode (GR mode), you could probe the color of a particular pixel on the screen. This allowed a shortcut to create an adventure game. Rather than create and update an in-memory map that was transferred to the screen periodically, I could rely on GR mode to maintain the map for me, and my program could query the screen as the player's character moved around the screen. Using this method, I let the computer do most of the hard work. Thus, my top-down adventure game used blocky GR mode graphics to represent my game map. - -My adventure game used a simple map that represented a large field with a mountain range running down the middle and a large lake on the upper-left side. I might crudely draw this map for a tabletop gaming campaign to include a narrow path through the mountains, allowing the player to pass to the far side. - -![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/quest-map.jpg) - -Figure 1. A simple Tabletop Game Map with a Lake and Mountains - -You can draw this map in cursesusing characters to represent grass, mountains and water. Next, I describe how to do just that using curses functions and how to create and play a similar adventure game in the Linux terminal. - -### Constructing the Program - -In my last article, I mentioned that most curses programs start with the same set of instructions to determine the terminal type and set up the curses environment: - -``` -initscr(); -cbreak(); -noecho(); - -``` - -For this program, I add another statement: - -``` -keypad(stdscr, TRUE); - -``` - -The TRUE flag allows curses to read the keypad and function keys from the user's terminal. 
If you want to use the up, down, left and right arrow keys in your program, you need to use keypad(stdscr, TRUE) here. - -Having done that, you now can start drawing to the terminal screen. The curses functions include several ways to draw text on the screen. In my previous article, I demonstrated the addch() and addstr() functions and their associated mvaddch() and mvaddstr() counterparts that first moved to a specific location on the screen before adding text. To create the adventure game map on the terminal, you can use another set of functions: vline() and hline(), and their partner functions mvvline() and mvhline(). These mv functions accept screen coordinates, a character to draw and how many times to repeat that character. For example, mvhline(1, 2, '-', 20) will draw a line of 20 dashes starting at line 1, column 2. - -To draw the map to the terminal screen programmatically, let's define this draw_map() function: - -``` -#define GRASS ' ' -#define EMPTY '.' -#define WATER '~' -#define MOUNTAIN '^' -#define PLAYER '*' - -void draw_map(void) -{ - int y, x; - - /* draw the quest map */ - - /* background */ - - for (y = 0; y < LINES; y++) { - mvhline(y, 0, GRASS, COLS); - } - - /* mountains, and mountain path */ - - for (x = COLS / 2; x < COLS * 3 / 4; x++) { - mvvline(0, x, MOUNTAIN, LINES); - } - - mvhline(LINES / 4, 0, GRASS, COLS); - - /* lake */ - - for (y = 1; y < LINES / 2; y++) { - mvhline(y, 1, WATER, COLS / 3); - } -} - -``` - -In drawing this map, note the use of mvvline() and mvhline() to fill large chunks of characters on the screen. I created the fields of grass by drawing horizontal lines (mvhline) of characters starting at column 0, for the entire height and width of the screen. I added the mountains on top of that by drawing vertical lines (mvvline), starting at row 0, and a mountain path by drawing a single horizontal line (mvhline). And, I created the lake by drawing a series of short horizontal lines (mvhline). It may seem inefficient to draw overlapping rectangles in this way, but remember that curses doesn't actually update the screen until I call the refresh() function later. - -Having drawn the map, all that remains to create the game is to enter a loop where the program waits for the user to press one of the up, down, left or right direction keys and then moves a player icon appropriately. If the space the player wants to move into is unoccupied, it allows the player to go there. - -You can use curses as a shortcut. Rather than having to instantiate a version of the map in the program and replicate this map to the screen, you can let the screen keep track of everything for you. The inch() function, and associated mvinch() function, allow you to probe the contents of the screen. This allows you to query curses to find out whether the space the player wants to move into is already filled with water or blocked by mountains. To do this, you'll need a helper function that you'll use later: - -``` -int is_move_okay(int y, int x) -{ - int testch; - - /* return true if the space is okay to move into */ - - testch = mvinch(y, x); - return ((testch == GRASS) || (testch == EMPTY)); -} - -``` - -As you can see, this function probes the location at column y, row x and returns true if the space is suitably unoccupied, or false if not. - -That makes it really easy to write a navigation loop: get a key from the keyboard and move the user's character around depending on the up, down, left and right arrow keys. 
Here's a simplified version of that loop: - -``` - - do { - ch = getch(); - - /* test inputted key and determine direction */ - - switch (ch) { - case KEY_UP: - if ((y > 0) && is_move_okay(y - 1, x)) { - y = y - 1; - } - break; - case KEY_DOWN: - if ((y < LINES - 1) && is_move_okay(y + 1, x)) { - y = y + 1; - } - break; - case KEY_LEFT: - if ((x > 0) && is_move_okay(y, x - 1)) { - x = x - 1; - } - break; - case KEY_RIGHT - if ((x < COLS - 1) && is_move_okay(y, x + 1)) { - x = x + 1; - } - break; - } - } - while (1); - -``` - -To use this in a game, you'll need to add some code inside the loop to allow other keys (for example, the traditional WASD movement keys), provide a method for the user to quit the game and move the player's character around the screen. Here's the program in full: - -``` - -/* quest.c */ - -#include -#include - -#define GRASS ' ' -#define EMPTY '.' -#define WATER '~' -#define MOUNTAIN '^' -#define PLAYER '*' - -int is_move_okay(int y, int x); -void draw_map(void); - -int main(void) -{ - int y, x; - int ch; - - /* initialize curses */ - - initscr(); - keypad(stdscr, TRUE); - cbreak(); - noecho(); - - clear(); - - /* initialize the quest map */ - - draw_map(); - - /* start player at lower-left */ - - y = LINES - 1; - x = 0; - - do { - /* by default, you get a blinking cursor - use it to indicate player */ - - mvaddch(y, x, PLAYER); - move(y, x); - refresh(); - - ch = getch(); - - /* test inputted key and determine direction */ - - switch (ch) { - case KEY_UP: - case 'w': - case 'W': - if ((y > 0) && is_move_okay(y - 1, x)) { - mvaddch(y, x, EMPTY); - y = y - 1; - } - break; - case KEY_DOWN: - case 's': - case 'S': - if ((y < LINES - 1) && is_move_okay(y + 1, x)) { - mvaddch(y, x, EMPTY); - y = y + 1; - } - break; - case KEY_LEFT: - case 'a': - case 'A': - if ((x > 0) && is_move_okay(y, x - 1)) { - mvaddch(y, x, EMPTY); - x = x - 1; - } - break; - case KEY_RIGHT: - case 'd': - case 'D': - if ((x < COLS - 1) && is_move_okay(y, x + 1)) { - mvaddch(y, x, EMPTY); - x = x + 1; - } - break; - } - } - while ((ch != 'q') && (ch != 'Q')); - - endwin(); - - exit(0); -} - -int is_move_okay(int y, int x) -{ - int testch; - - /* return true if the space is okay to move into */ - - testch = mvinch(y, x); - return ((testch == GRASS) || (testch == EMPTY)); -} - -void draw_map(void) -{ - int y, x; - - /* draw the quest map */ - - /* background */ - - for (y = 0; y < LINES; y++) { - mvhline(y, 0, GRASS, COLS); - } - - /* mountains, and mountain path */ - - for (x = COLS / 2; x < COLS * 3 / 4; x++) { - mvvline(0, x, MOUNTAIN, LINES); - } - - mvhline(LINES / 4, 0, GRASS, COLS); - - /* lake */ - - for (y = 1; y < LINES / 2; y++) { - mvhline(y, 1, WATER, COLS / 3); - } -} - -``` - -In the full program listing, you can see the complete arrangement of curses functions to create the game: - -1) Initialize the curses environment. - -2) Draw the map. - -3) Initialize the player coordinates (lower-left). - -4) Loop: - -* Draw the player's character. - -* Get a key from the keyboard. - -* Adjust the player's coordinates up, down, left or right, accordingly. - -* Repeat. - -5) When done, close the curses environment and exit. - -### Let's Play - -When you run the game, the player's character starts in the lower-left corner. As the player moves around the play area, the program creates a "trail" of dots. This helps show where the player has been before, so the player can avoid crossing the path unnecessarily. 
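To build and play the game yourself, note first that the two #include directives at the top of the listing lost their header names in formatting: quest.c needs curses.h and stdlib.h. With those restored, compiling and linking against the curses library should look something like this (the library may be named -lcurses rather than -lncurses on some systems):

```

$ gcc -o quest quest.c -lncurses
$ ./quest

```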
- -![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/quest-start.png) - -Figure 2\. The player starts the game in the lower-left corner. - -![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/quest-1.png) - -Figure 3\. The player can move around the play area, such as around the lake and through the mountain pass. - -To create a complete adventure game on top of this, you might add random encounters with various monsters as the player navigates his or her character around the play area. You also could include special items the player could discover or loot after defeating enemies, which would enhance the player's abilities further. - -But to start, this is a good program for demonstrating how to use the curses functions to read the keyboard and manipulate the screen. - -### Next Steps - -This program is a simple example of how to use the curses functions to update and read the screen and keyboard. You can do so much more with curses, depending on what you need your program to do. In a follow up article, I plan to show how to update this sample program to use colors. In the meantime, if you are interested in learning more about curses, I encourage you to read Pradeep Padala's [NCURSES Programming HOWTO][2] at the Linux Documentation Project. - - --------------------------------------------------------------------------------- - -via: http://www.linuxjournal.com/content/creating-adventure-game-terminal-ncurses - -作者:[Jim Hall][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.linuxjournal.com/users/jim-hall -[1]:http://www.linuxjournal.com/content/getting-started-ncurses -[2]:http://tldp.org/HOWTO/NCURSES-Programming-HOWTO diff --git a/sources/tech/20180129 Rapid, Secure Patching- Tools and Methods.md b/sources/tech/20180129 Rapid, Secure Patching- Tools and Methods.md deleted file mode 100644 index 9ac7340c14..0000000000 --- a/sources/tech/20180129 Rapid, Secure Patching- Tools and Methods.md +++ /dev/null @@ -1,583 +0,0 @@ -Rapid, Secure Patching: Tools and Methods -====== - -It was with some measure of disbelief that the computer science community greeted the recent [EternalBlue][1]-related exploits that have torn through massive numbers of vulnerable systems. The SMB exploits have kept coming (the most recent being [SMBLoris][2] presented at the last DEF CON, which impacts multiple SMB protocol versions, and for which Microsoft will issue no corrective patch. Attacks with these tools [incapacitated critical infrastructure][3] to the point that patients were even turned away from the British National Health Service. - -It is with considerable sadness that, during this SMB catastrophe, we also have come to understand that the famous Samba server presented an exploitable attack surface on the public internet in sufficient numbers for a worm to propagate successfully. I previously [have discussed SMB security][4] in Linux Journal, and I am no longer of the opinion that SMB server processes should run on Linux. - -In any case, systems administrators of all architectures must be able to down vulnerable network servers and patch them quickly. There is often a need for speed and competence when working with a large collection of Linux servers. 
Whether this is due to security situations or other concerns is immaterial—the hour of greatest need is not the time to begin to build administration tools. Note that in the event of an active intrusion by hostile parties, [forensic analysis][5] may be a legal requirement, and no steps should be taken on the compromised server without a careful plan and documentation. Especially in this new era of the black hats, computer professionals must step up their game and be able to secure vulnerable systems quickly.
-
-### Secure SSH Keypairs
-
-Tight control of a heterogeneous UNIX environment must begin with best-practice use of SSH authentication keys. I'm going to open this section with a simple requirement. SSH private keys must be one of three types: Ed25519, ECDSA using the E-521 curve or RSA keys of 3072 bits. Any key that does not meet those requirements should be retired (in particular, DSA keys must be removed from service immediately).
-
-The [Ed25519][6] key format is associated with Daniel J. Bernstein, who has such a preeminent reputation in modern cryptography that the field is becoming a DJB [monoculture][7]. The Ed25519 format is designed for speed, security and size economy.
-
-[Guidance on creating Ed25519 keys][8] suggests 100 rounds for a work factor in the "-o" secure format. Raising the number of rounds raises the strength of the encrypted key against brute-force attacks (should a file copy of the private key fall into hostile hands), at the cost of more work and time in decrypting the key when ssh-add is executed. Although there always is [controversy and discussion][9] with security advances, I will repeat the guidance here and suggest that the best format for a newly created SSH key is this:
-
-```
-
-ssh-keygen -a 100 -t ed25519
-
-```
-
-Your systems might be too old to support Ed25519—Oracle/CentOS/Red Hat 7 have this problem (the 7.1 release introduced support). If you cannot upgrade your old SSH clients and servers, your next best option is likely E-521, available in the ECDSA key format.
-
-The ECDSA curves came from the US government's National Institute of Standards and Technology (NIST). The best known and most implemented of all of the NIST curves are P-256, P-384 and P-521. All three curves are approved for secret communications by a variety of government entities, but a number of cryptographers have [expressed growing suspicion][10] that the P-256 and P-384 curves are tainted. Well known cryptographer Bruce Schneier [has remarked][11]: "I no longer trust the constants. I believe the NSA has manipulated them through their relationships with industry." However, DJB [has expressed][12] limited praise of the E-521 curve: "To be fair I should mention that there's one standard NIST curve using a nice prime, namely 2^521 – 1; but the sheer size of this prime makes it much slower than NIST P-256." All of the NIST curves have greater issues with "side channel" attacks than Ed25519—P-521 is certainly a step down, and many assert that none of the NIST curves are safe. In summary, there is a slight risk that a powerful adversary exists with an advantage over the P-256 and P-384 curves, so one is slightly inclined to avoid them. Note that even if your OpenSSH (source) release is capable of E-521, it may be [disabled by your vendor][13] due to patent concerns, so E-521 is not an option in this case.
If you cannot use DJB's 2^255 – 19 curve, this command will generate an E-521 key on a capable system:
-
-```
-
-ssh-keygen -o -a 100 -b 521 -t ecdsa
-
-```
-
-And, then there is the unfortunate circumstance with SSH servers that support neither ECDSA nor Ed25519. In this case, you must fall back to RSA with much larger key sizes. An absolute minimum is the modern default of 2048 bits, but 3072 is a wiser choice:
-
-```
-
-ssh-keygen -o -a 100 -b 3072 -t rsa
-
-```
-
-Then in the most lamentable case of all, when you must use old SSH clients that are not able to work with private keys created with the -o option, you can remove the password on id_rsa and create a naked key, then use OpenSSL to encrypt it with AES256 in the PKCS#8 format, as [first documented by Martin Kleppmann][14]. Provide a blank new password for the keygen utility below, then supply a new password when OpenSSL reprocesses the key:
-
-```
-
-$ cd ~/.ssh
-
-$ cp id_rsa id_rsa-orig
-
-$ ssh-keygen -p -t rsa
-Enter file in which the key is (/home/cfisher/.ssh/id_rsa):
-Enter old passphrase:
-Key has comment 'cfisher@localhost.localdomain'
-Enter new passphrase (empty for no passphrase):
-Enter same passphrase again:
-Your identification has been saved with the new passphrase.
-
-$ openssl pkcs8 -topk8 -v2 aes256 -in id_rsa -out id_rsa-strong
-Enter Encryption Password:
-Verifying - Enter Encryption Password:
-
-mv id_rsa-strong id_rsa
-chmod 600 id_rsa
-
-```
-
-After creating all of these keys on a newer system, you can compare the file sizes:
-
-```
-
-$ ll .ssh
-total 32
--rw-------. 1 cfisher cfisher 801 Aug 10 21:30 id_ecdsa
--rw-r--r--. 1 cfisher cfisher 283 Aug 10 21:30 id_ecdsa.pub
--rw-------. 1 cfisher cfisher 464 Aug 10 20:49 id_ed25519
--rw-r--r--. 1 cfisher cfisher 111 Aug 10 20:49 id_ed25519.pub
--rw-------. 1 cfisher cfisher 2638 Aug 10 21:45 id_rsa
--rw-------. 1 cfisher cfisher 2675 Aug 10 21:42 id_rsa-orig
--rw-r--r--. 1 cfisher cfisher 583 Aug 10 21:42 id_rsa.pub
-
-```
-
-Although they are relatively enormous, all versions of OpenSSH that I have used have been compatible with the RSA private key in PKCS#8 format. The Ed25519 public key is now small enough to fit in 80 columns without word wrap, and it is as convenient as it is efficient and secure.
-
-Note that PuTTY may have problems using various versions of these keys, and you may need to remove passwords for a successful import into the PuTTY agent.
-
-These keys represent the most secure formats available for various OpenSSH revisions. They really aren't intended for PuTTY or other general interactive activity. Although one hopes that all users create strong keys for all situations, these are enterprise-class keys for major systems activities. It might be wise, however, to regenerate your system host keys to conform to these guidelines.
-
-These key formats may soon change. Quantum computers are causing increasing concern for their ability to run [Shor's Algorithm][15], which can be used to find prime factors to break these keys in reasonable time. The largest commercially available quantum computer, the [D-Wave 2000Q][16], effectively [presents under 200 qubits][17] for this activity, which is not (yet) powerful enough for a successful attack. NIST [announced a competition][18] for a new quantum-resistant public key system with a deadline of November 2017. In response, a team including DJB has released source code for [NTRU Prime][19].
It does appear that we will likely see a post-quantum public key format for OpenSSH (and potentially TLS 1.3) released within the next two years, so take steps to ease migration now. - -Also, it's important for SSH servers to restrict their allowed ciphers, MACs and key exchange lest strong keys be wasted on broken crypto (3DES, MD5 and arcfour should be long-disabled). My [previous guidance][20] on the subject involved the following (three) lines in the SSH client and server configuration (note that formatting in the sshd_config file requires all parameters on the same line with no spaces in the options; line breaks have been added here for clarity): - -``` - -Ciphers chacha20-poly1305@openssh.com, - aes256-gcm@openssh.com, - aes128-gcm@openssh.com, - aes256-ctr, - aes192-ctr, - aes128-ctr - -MACs hmac-sha2-512-etm@openssh.com, - hmac-sha2-256-etm@openssh.com, - hmac-ripemd160-etm@openssh.com, - umac-128-etm@openssh.com, - hmac-sha2-512, - hmac-sha2-256, - hmac-ripemd160, - umac-128@openssh.com - -KexAlgorithms curve25519-sha256@libssh.org, - diffie-hellman-group-exchange-sha256 - -``` - -Since the previous publication, RIPEMD160 is likely no longer safe and should be removed. Older systems, however, may support only SHA1, MD5 and RIPEMD160\. Certainly remove MD5, but users of PuTTY likely will want to retain SHA1 when newer MACs are not an option. Older servers can present a challenge in finding a reasonable Cipher/MAC/KEX when working with modern systems. - -At this point, you should have strong keys for secure clients and servers. Now let's put them to use. - -### Scripting the SSH Agent - -Modern OpenSSH distributions contain the ssh-copy-id shell script for easy key distribution. Below is an example of installing a specific, named key in a remote account: - -``` - -$ ssh-copy-id -i ~/.ssh/some_key.pub person@yourserver.com -ssh-copy-id: INFO: Source of key(s) to be installed: - "/home/cfisher/.ssh/some_key.pub" -ssh-copy-id: INFO: attempting to log in with the new key(s), - to filter out any that are already installed -ssh-copy-id: INFO: 1 key(s) remain to be installed -- - if you are prompted now it is to install the new keys -person@yourserver.com's password: - -Number of key(s) added: 1 - -Now try logging into the machine, with: - "ssh 'person@yourserver.com'" -and check to make sure that only the key(s) you wanted were added. - -``` - -If you don't have the ssh-copy-id script, you can install a key manually with the following command: - -``` - -$ ssh person@yourserver.com 'cat >> ~/.ssh/authorized_keys' < \ - ~/.ssh/some_key.pub - -``` - -If you have SELinux enabled, you might have to mark a newly created authorized_keys file with a security type; otherwise, the sshd server dæmon will be prevented from reading the key (the syslog may report this issue): - -``` - -$ ssh person@yourserver.com 'chcon -t ssh_home_t - ↪~/.ssh/authorized_keys' - -``` - -Once your key is installed, test it in a one-time use with the -i option (note that you are entering a local key password, not a remote authentication password): - -``` - -$ ssh -i ~/.ssh/some_key person@yourserver.com -Enter passphrase for key '/home/v-fishecj/.ssh/some_key': -Last login: Wed Aug 16 12:20:26 2017 from 10.58.17.14 -yourserver $ - -``` - -General, interactive users likely will cache their keys with an agent. 
In the example below, the same password is used on all three types of keys that were created in the previous section: - -``` - -$ eval $(ssh-agent) -Agent pid 4394 - -$ ssh-add -Enter passphrase for /home/cfisher/.ssh/id_rsa: -Identity added: ~cfisher/.ssh/id_rsa (~cfisher/.ssh/id_rsa) -Identity added: ~cfisher/.ssh/id_ecdsa (cfisher@init.com) -Identity added: ~cfisher/.ssh/id_ed25519 (cfisher@init.com) - -``` - -The first command above launches a user agent process, which injects environment variables (named SSH_AGENT_SOCK and SSH_AGENT_PID) into the parent shell (via eval). The shell becomes aware of the agent and passes these variables to the programs that it runs from that point forward. - -When launched, the ssh-agent has no credentials and is unable to facilitate SSH activity. It must be primed by adding keys, which is done with ssh-add. When called with no arguments, all of the default keys will be read. It also can be called to add a custom key: - -``` - -$ ssh-add ~/.ssh/some_key -Enter passphrase for /home/cfisher/.ssh/some_key: -Identity added: /home/cfisher/.ssh/some_key - ↪(cfisher@localhost.localdomain) - -``` - -Note that the agent will not retain the password on the key. ssh-add uses any and all passwords that you enter while it runs to decrypt keys that it finds, but the passwords are cleared from memory when ssh-add terminates (they are not sent to ssh-agent). This allows you to upgrade to new key formats with minimal inconvenience, while keeping the keys reasonably safe. - -The current cached keys can be listed with ssh-add -l (from, which you can deduce that "some_key" is an Ed25519): - -``` - -$ ssh-add -l -3072 SHA256:cpVFMZ17oO5n/Jfpv2qDNSNcV6ffOVYPV8vVaSm3DDo - /home/cfisher/.ssh/id_rsa (RSA) -521 SHA256:1L9/CglR7cstr54a600zDrBbcxMj/a3RtcsdjuU61VU - cfisher@localhost.localdomain (ECDSA) -256 SHA256:Vd21LEM4lixY4rIg3/Ht/w8aoMT+tRzFUR0R32SZIJc - cfisher@localhost.localdomain (ED25519) -256 SHA256:YsKtUA9Mglas7kqC4RmzO6jd2jxVNCc1OE+usR4bkcc - cfisher@localhost.localdomain (ED25519) - -``` - -While a "primed" agent is running, the SSH clients may use (trusting) remote servers fluidly, with no further prompts for credentials: - -``` - -$ sftp person@yourserver.com -Connected to yourserver.com. -sftp> quit - -$ scp /etc/passwd person@yourserver.com:/tmp -passwd 100% 2269 65.8KB/s 00:00 - -$ ssh person@yourserver.com - (motd for yourserver.com) -$ ls -l /tmp/passwd --rw-r--r-- 1 root wheel 2269 Aug 16 09:07 /tmp/passwd -$ rm /tmp/passwd -$ exit -Connection to yourserver.com closed. - -``` - -The OpenSSH agent can be locked, preventing any further use of the credentials that it holds (this might be appropriate when suspending a laptop): - -``` - -$ ssh-add -x -Enter lock password: -Again: -Agent locked. - -$ ssh yourserver.com -Enter passphrase for key '/home/cfisher/.ssh/id_rsa': ^C - -``` - -It will provide credentials again when it is unlocked: - -``` - -$ ssh-add -X -Enter lock password: -Agent unlocked. - -``` - -You also can set ssh-agent to expire keys after a time limit with the -t option, which may be useful for long-lived agents that must clear keys after a set daily shift. - -General shell users may cache many types of keys with a number of differing agent implementations. In addition to the standard OpenSSH agent, users may rely upon PuTTY's pageant.exe, GNOME keyring or KDE Kwallet, among others (the use of the PUTTY agent could likely fill an article on its own). - -However, the goal here is to create "enterprise" keys for critical server controls. 
You likely do not want long-lived agents in order to limit the risk of exposure. When scripting with "enterprise" keys, you will run an agent only for the duration of the activity, then kill it at completion. - -There are special options for accessing the root account with OpenSSH—the PermitRootLogin parameter can be added to the sshd_config file (usually found in /etc/ssh). It can be set to a simple yes or no, forced-commands-only, which will allow only explicitly-authorized programs to be executed, or the equivalent options prohibit-password or without-password, both of which will allow access to the keys generated here. - -Many hold that root should not be allowed any access. [Michael W. Lucas][21] addresses the question in SSH Mastery: - -> Sometimes, it seems that you need to allow users to SSH in to the system as root. This is a colossally bad idea in almost all environments. When users must log in as a regular user and then change to root, the system logs record the user account, providing accountability. Logging in as root destroys that audit trail....It is possible to override the security precautions and make sshd permit a login directly as root. It's such a bad idea that I'd consider myself guilty of malpractice if I told you how to do it. Logging in as root via SSH almost always means you're solving the wrong problem. Step back and look for other ways to accomplish your goal. - -When root action is required quickly on more than a few servers, the above advice can impose painful delays. Lucas' direct criticism can be addressed by allowing only a limited set of "bastion" servers to issue root commands over SSH. Administrators should be forced to log in to the bastions with unprivileged accounts to establish accountability. - -However, one problem with remotely "changing to root" is the [statistical use of the Viterbi algorithm][22] Short passwords, the su - command and remote SSH calls that use passwords to establish a trinary network configuration are all uniquely vulnerable to timing attacks on a user's keyboard movement. Those with the highest security concerns will need to compensate. - -For the rest of us, I recommend that PermitRootLogin without-password be set for all target machines. - -Finally, you can easily terminate ssh-agent interactively with the -k option: - -``` - -$ eval $(ssh-agent -k) -Agent pid 4394 killed - -``` - -With these tools and the intended use of them in mind, here is a complete script that runs an agent for the duration of a set of commands over a list of servers for a common named user (which is not necessarily root): - -``` - -# cat artano - -#!/bin/sh - -if [[ $# -lt 1 ]]; then echo "$0 - requires commands"; exit; fi - -R="-R5865:127.0.0.1:5865" # set to "-2" if you don't want - ↪port forwarding - -eval $(ssh-agent -s) - -function cleanup { eval $(ssh-agent -s -k); } - -trap cleanup EXIT - -function remsh { typeset F="/tmp/${1}" h="$1" p="$2"; - ↪shift 2; echo "#$h" - if [[ "$ARTANO" == "PARALLEL" ]] - then ssh "$R" -p "$p" "$h" "$@" < /dev/null >>"${F}.out" - ↪2>>"${F}.err" & - else ssh "$R" -p "$p" "$h" "$@" - fi } # HOST PORT CMD - -if ssh-add ~/.ssh/master_key -then remsh yourserver.com 22 "$@" - remsh container.yourserver.com 2200 "$@" - remsh anotherserver.com 22 "$@" - # Add more hosts here. -else echo Bad password - killing agent. Try again. 
-fi - -wait - -####################################################################### -# Examples: # Artano is an epithet of a famous mythical being -# artano 'mount /patchdir' # you will need an fstab entry for this -# artano 'umount /patchdir' -# artano 'yum update -y 2>&1' -# artano 'rpm -Fvh /patchdir/\*.rpm' -####################################################################### - -``` - -This script runs all commands in sequence on a collection of hosts by default. If the ARTANO environment variable is set to PARALLEL, it instead will launch them all as background processes simultaneously and append their STDOUT and STDERR to files in /tmp (this should be no problem when dealing with fewer than a hundred hosts on a reasonable server). The PARALLEL setting is useful not only for pushing changes faster, but also for collecting audit results. - -Below is an example using the yum update agent. The source of this particular invocation had to traverse a firewall and relied on a proxy setting in the /etc/yum.conf file, which used the port-forwarding option (-R) above: - -``` - -# ./artano 'yum update -y 2>&1' -Agent pid 3458 -Enter passphrase for /root/.ssh/master_key: -Identity added: /root/.ssh/master_key (/root/.ssh/master_key) -#yourserver.com -Loaded plugins: langpacks, ulninfo -No packages marked for update -#container.yourserver.com -Loaded plugins: langpacks, ulninfo -No packages marked for update -#anotherserver.com -Loaded plugins: langpacks, ulninfo -No packages marked for update -Agent pid 3458 killed - -``` - -The script can be used for more general maintenance functions. Linux installations running the XFS filesystem should "defrag" periodically. Although this normally would be done with cron, it can be a centralized activity, stored in a separate script that includes only on the appropriate hosts: - -``` - -&1' -Agent pid 7897 -Enter passphrase for /root/.ssh/master_key: -Identity added: /root/.ssh/master_key (/root/.ssh/master_key) -#yourserver.com -#container.yourserver.com -#anotherserver.com -Agent pid 7897 killed - -``` - -An easy method to collect the contents of all authorized_keys files for all users is the following artano script (this is useful for system auditing and is coded to remove file duplicates): - -``` - -artano 'awk -F: {print\$6\"/.ssh/authorized_keys\"} \ - /etc/passwd | sort -u | xargs grep . 2> /dev/null' - -``` - -It is convenient to configure NFS mounts for file distribution to remote nodes. Bear in mind that NFS is clear text, and sensitive content should not traverse untrusted networks while unencrypted. After configuring an NFS server on host 1.2.3.4, I add the following line to the /etc/fstab file on all the clients and create the /patchdir directory. After the change, the artano script can be used to mass-mount the directory if the network configuration is correct: - -``` - -# tail -1 /etc/fstab -1.2.3.4:/var/cache/yum/x86_64/7Server/ol7_latest/packages - ↪/patchdir nfs4 noauto,proto=tcp,port=2049 0 0 - -``` - -Assuming that the NFS server is mounted, RPMs can be upgraded from images stored upon it (note that Oracle Spacewalk or Red Hat Satellite might be a more capable patch method): - -``` - -# ./artano 'rpm -Fvh /patchdir/\*.rpm' -Agent pid 3203 -Enter passphrase for /root/.ssh/master_key: -Identity added: /root/.ssh/master_key (/root/.ssh/master_key) -#yourserver.com -Preparing... ######################## -Updating / installing... 
-xmlsec1-1.2.20-7.el7_4 ######################## -xmlsec1-openssl-1.2.20-7.el7_4 ######################## -Cleaning up / removing... -xmlsec1-openssl-1.2.20-5.el7 ######################## -xmlsec1-1.2.20-5.el7 ######################## -#container.yourserver.com -Preparing... ######################## -Updating / installing... -xmlsec1-1.2.20-7.el7_4 ######################## -xmlsec1-openssl-1.2.20-7.el7_4 ######################## -Cleaning up / removing... -xmlsec1-openssl-1.2.20-5.el7 ######################## -xmlsec1-1.2.20-5.el7 ######################## -#anotherserver.com -Preparing... ######################## -Updating / installing... -xmlsec1-1.2.20-7.el7_4 ######################## -xmlsec1-openssl-1.2.20-7.el7_4 ######################## -Cleaning up / removing... -xmlsec1-openssl-1.2.20-5.el7 ######################## -xmlsec1-1.2.20-5.el7 ######################## -Agent pid 3203 killed - -``` - -I am assuming that my audience is already experienced with package tools for their preferred platforms. However, to avoid criticism that I've included little actual discussion of patch tools, the following is a quick reference of RPM manipulation commands, which is the most common package format on enterprise systems: - -* rpm -Uvh package.i686.rpm — install or upgrade a package file. - -* rpm -Fvh package.i686.rpm — upgrade a package file, if an older version is installed. - -* rpm -e package — remove an installed package. - -* rpm -q package — list installed package name and version. - -* rpm -q --changelog package — print full changelog for installed package (including CVEs). - -* rpm -qa — list all installed packages on the system. - -* rpm -ql package — list all files in an installed package. - -* rpm -qpl package.i686.rpm — list files included in a package file. - -* rpm -qi package — print detailed description of installed package. - -* rpm -qpi package — print detailed description of package file. - -* rpm -qf /path/to/file — list package that installed a particular file. - -* rpm --rebuild package.src.rpm — unpack and build a binary RPM under /usr/src/redhat. - -* rpm2cpio package.src.rpm | cpio -icduv — unpack all package files in the current directory. - -Another important consideration for scripting the SSH agent is limiting the capability of an authorized key. There is a [specific syntax][23] for such limitations Of particular interest is the from="" clause, which will restrict logins on a key to a limited set of hosts. It is likely wise to declare a set of "bastion" servers that will record non-root logins that escalate into controlled users who make use of the enterprise keys. - -An example entry might be the following (note that I've broken this line, which is not allowed syntax but done here for clarity): - -``` - -from="*.c2.security.yourcompany.com,4.3.2.1" ssh-ed25519 - ↪AAAAC3NzaC1lZDI1NTE5AAAAIJSSazJz6A5x6fTcDFIji1X+ -↪svesidBonQvuDKsxo1Mx - -``` - -A number of other useful restraints can be placed upon authorized_keys entries. The command="" will restrict a key to a single program or script and will set the SSH_ORIGINAL_COMMAND environment variable to the client's attempted call—scripts can set alarms if the variable does not contain approved contents. The restrict option also is worth consideration, as it disables a large set of SSH features that can be both superfluous and dangerous. - -Although it is possible to set server identification keys in the known_hosts file to a @revoked status, this cannot be done with the contents of authorized_keys. 
However, a system-wide file for forbidden keys can be set in the sshd_config with RevokedKeys. This file overrides any user's authorized_keys. If set, this file must exist and be readable by the sshd server process; otherwise, no keys will be accepted at all (so use care if you configure it on a machine where there are obstacles to physical access). When this option is set, use the artano script to append forbidden keys to the file quickly when they should be disallowed from the network. A clear and convenient file location would be /etc/ssh/revoked_keys. - -It is also possible to establish a local Certificate Authority (CA) for OpenSSH that will [allow keys to be registered with an authority][24] with expiration dates. These CAs can [become quite elaborate][25] in their control over an enterprise. Although the maintenance of an SSH CA is beyond the scope of this article, keys issued by such CAs should be strong by adhering to the requirements for Ed25519/E-521/RSA-3072. - -### pdsh - -Many higher-level tools for the control of collections of servers exist that are much more sophisticated than the script I've presented here. The most famous is likely [Puppet][26], which is a Ruby-based configuration management system for enterprise control. Puppet has a somewhat short list of supported operating systems. If you are looking for low-level control of Android, Tomato, Linux smart terminals or other "exotic" POSIX, Puppet is likely not the appropriate tool. Another popular Ruby-based tool is [Chef][27], which is known for its complexity. Both Puppet and Chef require Ruby installations on both clients and servers, and they both will catalog any SSH keys that they find, so this key strength discussion is completely applicable to them. - -There are several similar Python-based tools, including [Ansible][28], [Bcfg2][29], [Fabric][30] and [SaltStack][31]. Of these, only Ansible can run "agentless" over a bare SSH connection; the rest will require agents that run on target nodes (and this likely includes a Python runtime). - -Another popular configuration management tool is [CFEngine][32], which is coded in C and claims very high performance. [Rudder][33] has evolved from portions of CFEngine and has a small but growing user community. - -Most of the previously mentioned packages are licensed commercially and some are closed source. - -The closest low-level tool to the activities presented here is the Parallel Distributed Shell (pdsh), which can be found in the [EPEL repository][34]. The pdsh utilities grew out of an IBM-developed package named dsh designed for the control of compute clusters. Install the following packages from the repository to use pdsh: - -``` - -# rpm -qa | grep pdsh -pdsh-2.31-1.el7.x86_64 -pdsh-rcmd-ssh-2.31-1.el7.x86_64 - -``` - -An SSH agent must be running while using pdsh with encrypted keys, and there is no obvious way to control the destination port on a per-host basis as was done with the artano script. 
Below is an example using pdsh to run a command on three remote servers: - -``` - -# eval $(ssh-agent) -Agent pid 17106 - -# ssh-add ~/.ssh/master_key -Enter passphrase for /root/.ssh/master_key: -Identity added: /root/.ssh/master_key (/root/.ssh/master_key) - -# pdsh -w hosta.com,hostb.com,hostc.com uptime -hosta: 13:24:49 up 13 days, 2:13, 6 users, load avg: 0.00, 0.01, 0.05 -hostb: 13:24:49 up 7 days, 21:15, 5 users, load avg: 0.05, 0.04, 0.05 -hostc: 13:24:49 up 9 days, 3:26, 3 users, load avg: 0.00, 0.01, 0.05 - -# eval $(ssh-agent -k) -Agent pid 17106 killed - -``` - -The -w option above defines a host list. It allows for limited arithmetic expansion and can take the list of hosts from standard input if the argument is a dash (-). The PDSH_SSH_ARGS and PDSH_SSH_ARGS_APPEND environment variables can be used to pass custom options to the SSH call. By default, 32 sessions will be launched in parallel, and this "fanout/sliding window" will be maintained by launching new host invocations as existing connections complete and close. You can adjust the size of the "fanout" either with the -f option or the FANOUT environment variable. It's interesting to note that there are two file copy commands: pdcp and rpdcp, which are analogous to scp. - -Even a low-level utility like pdsh lacks some flexibility that is available by scripting OpenSSH, so prepare to feel even greater constraints as more complicated tools are introduced. - -### Conclusion - -Modern Linux touches us in many ways on diverse platforms. When the security of these systems is not maintained, others also may touch our platforms and turn them against us. It is important to realize the maintenance obligations when you add any Linux platform to your environment. This obligation always exists, and there are consequences when it is not met. - -In a security emergency, simple, open and well understood tools are best. As tool complexity increases, platform portability certainly declines, the number of competent administrators also falls, and this likely impacts speed of execution. This may be a reasonable trade in many other aspects, but in a security context, it demands a much more careful analysis. Emergency measures must be documented and understood by a wider audience than is required for normal operations, and using more general tools facilitates that discussion. - -I hope the techniques presented here will prompt that discussion for those who have not yet faced it. - -### Disclaimer - -The views and opinions expressed in this article are those of the author and do not necessarily reflect those of Linux Journal. - -### Note: - -An exploit [compromising Ed25519][35] was recently demonstrated that relies upon custom hardware changes to derive a usable portion of a secret key. Physical hardware security is a basic requirement for encryption integrity, and many common algorithms are further vulnerable to cache timing or other side channel attacks that can be performed by the unprivileged processes of other users. Use caution when granting access to systems that process sensitive data. 
- - --------------------------------------------------------------------------------- - -via: http://www.linuxjournal.com/content/rapid-secure-patching-tools-and-methods - -作者:[Charles Fisher][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.linuxjournal.com/users/charles-fisher -[1]:https://en.wikipedia.org/wiki/EternalBlue -[2]:http://securityaffairs.co/wordpress/61530/hacking/smbloris-smbv1-flaw.html -[3]:http://www.telegraph.co.uk/news/2017/05/13/nhs-cyber-attack-everything-need-know-biggest-ransomware-offensive -[4]:http://www.linuxjournal.com/content/smbclient-security-windows-printing-and-file-transfer -[5]:https://staff.washington.edu/dittrich/misc/forensics -[6]:https://ed25519.cr.yp.to -[7]:http://www.metzdowd.com/pipermail/cryptography/2016-March/028824.html -[8]:https://blog.g3rt.nl/upgrade-your-ssh-keys.html -[9]:https://news.ycombinator.com/item?id=12563899 -[10]:http://safecurves.cr.yp.to/rigid.html -[11]:https://en.wikipedia.org/wiki/Curve25519 -[12]:http://blog.cr.yp.to/20140323-ecdsa.html -[13]:https://lwn.net/Articles/573166 -[14]:http://martin.kleppmann.com/2013/05/24/improving-security-of-ssh-private-keys.html -[15]:https://en.wikipedia.org/wiki/Shor's_algorithm -[16]:https://www.dwavesys.com/d-wave-two-system -[17]:https://crypto.stackexchange.com/questions/40893/can-or-can-not-d-waves-quantum-computers-use-shors-and-grovers-algorithm-to-f -[18]:https://yro.slashdot.org/story/16/12/21/2334220/nist-asks-public-for-help-with-quantum-proof-cryptography -[19]:https://ntruprime.cr.yp.to/index.html -[20]:http://www.linuxjournal.com/content/cipher-security-how-harden-tls-and-ssh -[21]:https://www.michaelwlucas.com/tools/ssh -[22]:https://people.eecs.berkeley.edu/~dawnsong/papers/ssh-timing.pdf -[23]:https://man.openbsd.org/sshd#AUTHORIZED_KEYS_FILE_FORMAT -[24]:https://ef.gy/hardening-ssh -[25]:https://code.facebook.com/posts/365787980419535/scalable-and-secure-access-with-ssh -[26]:https://puppet.com -[27]:https://www.chef.io -[28]:https://www.ansible.com -[29]:http://bcfg2.org -[30]:http://www.fabfile.org -[31]:https://saltstack.com -[32]:https://cfengine.com -[33]:http://www.rudder-project.org/site -[34]:https://fedoraproject.org/wiki/EPEL -[35]:https://research.kudelskisecurity.com/2017/10/04/defeating-eddsa-with-faults diff --git a/sources/tech/20180130 Ansible- Making Things Happen.md b/sources/tech/20180130 Ansible- Making Things Happen.md deleted file mode 100644 index 88210cd20c..0000000000 --- a/sources/tech/20180130 Ansible- Making Things Happen.md +++ /dev/null @@ -1,174 +0,0 @@ -Ansible: Making Things Happen -====== -In my [last article][1], I described how to configure your server and clients so you could connect to each client from the server. Ansible is a push-based automation tool, so the connection is initiated from your "server", which is usually just a workstation or a server you ssh in to from your workstation. In this article, I explain how modules work and how you can use Ansible in ad-hoc mode from the command line. - -Ansible is supposed to make your job easier, so the first thing you need to learn is how to do familiar tasks. For most sysadmins, that means some simple command-line work. Ansible has a few quirks when it comes to command-line utilities, but it's worth learning the nuances, because it makes for a powerful system. 
-
-### Command Module
-
-This is the safest module to execute remote commands on the client machine. As with most Ansible modules, it requires Python to be installed on the client, but that's it. When Ansible executes commands using the Command Module, it does not process those commands through the user's shell. This means some variables like $HOME are not available. It also means stream functions (redirects, pipes) don't work. If you don't need to redirect output or to reference the user's home directory as a shell variable, the Command Module is what you want to use. To invoke the Command Module in ad-hoc mode, do something like this:
-
-```
-
-ansible host_or_groupname -m command -a "whoami"
-
-```
-
-Your output should show SUCCESS for each host referenced and then return the user name that the user used to log in. You'll notice that the user is not root, unless that's the user you used to connect to the client computer.
-
-If you want to see the elevated user, you'll add another argument to the ansible command. You can add -b in order to "become" the elevated user (or the sudo user). So, if you were to run the same command as above with a "-b" flag:
-
-```
-
-ansible host_or_groupname -b -m command -a "whoami"
-
-```
-
-you should see a similar result, but the whoami results should say root instead of the user you used to connect. That flag is important to use, especially if you try to run remote commands that require root access!
-
-### Shell Module
-
-There's nothing wrong with using the Shell Module to execute remote commands. It's just important to know that since it uses the remote user's environment, if there's something goofy with the user's account, it might cause problems that the Command Module avoids. If you use the Shell Module, however, you're able to use redirects and pipes. You can use the whoami example to see the difference. This command:
-
-```
-
-ansible host_or_groupname -m command -a "whoami > myname.txt"
-
-```
-
-should result in an error about > not being a valid argument. Since the Command Module doesn't run inside any shell, it interprets the greater-than character as something you're trying to pass to the whoami command. If you use the Shell Module, however, you have no problems:
-
-```
-
-ansible host_or_groupname -m shell -a "whoami > myname.txt"
-
-```
-
-This should execute and give you a SUCCESS message for each host, but there should be nothing returned as output. On the remote machine, however, there should be a file called myname.txt in the user's home directory that contains the name of the user. My personal policy is to use the Command Module whenever possible and to use the Shell Module if needed.
-
-### The Raw Module
-
-Functionally, the Raw Module works like the Shell Module. The key difference is that Ansible doesn't do any error checking, and STDERR, STDOUT and the return code are returned. Other than that, Ansible has no idea what happens, because it just executes the command over SSH directly. So while the Shell Module will use /bin/sh by default, the Raw Module just uses whatever the user's personal default shell might be.
-
-Why would a person decide to use the Raw Module? It doesn't require Python on the remote computer—at all. Although it's true that most servers have Python installed by default, or easily could have it installed, many embedded devices don't and can't have Python installed. For most configuration management tools, not having an agent program installed means the remote device can't be managed.
With Ansible, if all you have is SSH, you still can execute remote commands using the Raw Module. I've used the Raw Module to manage Bitcoin miners that have a very minimal embedded environment. It's a powerful tool, and when you need it, it's invaluable!
-
-### Copy Module
-
-Although it's certainly possible to do file and folder manipulation with the Command and Shell Modules, Ansible includes a module specifically for copying files to the server. Even though it requires learning a new syntax for copying files, I like to use it because Ansible will check to see whether a file exists, and whether it's the same file. That means it copies the file only if it needs to, saving time and bandwidth. It even will make backups of existing files! I can't tell you how many times I've used scp and sshpass in a Bash FOR loop and dumped files on servers, even if they didn't need them. Ansible makes it easy and doesn't require FOR loops and IP iterations.
-
-The syntax is a little more complicated than with Command, Shell or Raw. Thankfully, as with most things in the Ansible world, it's easy to understand—for example:
-
-```
-
-ansible host_or_groupname -b -m copy \
-    -a "src=./updated.conf dest=/etc/ntp.conf \
-    owner=root group=root mode=0644 backup=yes"
-
-```
-
-This will look in the current directory (on the Ansible server/workstation) for a file called updated.conf and then copy it to each host. On the remote system, the file will be put in /etc/ntp.conf, and if a file already exists, and it's different, the original will be backed up with a date extension. If the files are the same, Ansible won't make any changes.
-
-I tend to use the Copy Module when updating configuration files. It would be perfect for updating configuration files on Bitcoin miners, but unfortunately, the Copy Module does require that the remote machine has Python installed. Nevertheless, it's a great way to update common files on many remote machines with one simple command. It's also important to note that the Copy Module supports copying remote files to other locations on the remote filesystem using the remote_src=true directive.
-
-### File Module
-
-The File Module has a lot in common with the Copy Module, but if you try to use the File Module to copy a file, it doesn't work as expected. The File Module does all its actions on the remote machine, so src and dest are all references to the remote filesystem. The File Module often is used for creating directories, creating links or deleting remote files and folders. The following will simply create a folder named /etc/newfolder on the remote servers and set the mode:
-
-```
-
-ansible host_or_groupname -b -m file \
-    -a "path=/etc/newfolder state=directory mode=0755"
-
-```
-
-You can, of course, set the owner and group, along with a bunch of other options, which you can learn about on the Ansible doc site. I find I most often will either create a folder or symbolically link a file using the File Module. To create a symlink:
-
-```
-
-ansible host_or_groupname -b -m file \
-    -a "src=/etc/ntp.conf dest=/home/user/ntp.conf \
-    owner=user group=user state=link"
-
-```
-
-Notice that the state directive is how you inform Ansible what you actually want to do. There are several state options (a short sketch using two of them follows this list):
-
-* link — create symlink.
-
-* directory — create directory.
-
-* hard — create hardlink.
-
-* touch — create empty file.
-
-* absent — delete file or directory recursively.
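-
-To make those options concrete, here is a minimal ad-hoc sketch. (The "webservers" group name and the flag-file path are hypothetical stand-ins, not taken from a real inventory.)
-
-```
-
-# Create an empty marker file on every host in the group:
-ansible webservers -b -m file \
-    -a "path=/etc/app/maintenance.flag state=touch mode=0644"
-
-# Remove it again; absent also deletes directories recursively:
-ansible webservers -b -m file \
-    -a "path=/etc/app/maintenance.flag state=absent"
-
-```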
-
-This might seem a bit complicated, especially when you easily could do the same with a Command or Shell Module command, but the clarity of using the appropriate module makes it more difficult to make mistakes. Plus, learning these commands in ad-hoc mode will make playbooks, which consist of many commands, easier to understand (I plan to cover this in my next article).
-
-### Package Management
-
-Anyone who manages multiple distributions knows it can be tricky to handle the various package managers. Ansible handles this in a couple ways. There are specific modules for apt and yum, but there's also a generic module called "package" that will install on the remote computer regardless of whether it's Red Hat- or Debian/Ubuntu-based.
-
-Unfortunately, while Ansible usually can detect the type of package manager it needs to use, it doesn't have a way to fix packages with different names. One prime example is Apache. On Red Hat-based systems, the package is "httpd", but on Debian/Ubuntu systems, it's "apache2". That means some more complex things need to happen in order to install the correct package automatically. The individual modules, however, are very easy to use. I find myself just using apt or yum as appropriate, just like when I manually manage servers. Here's an apt example:
-
-```
-
-ansible host_or_groupname -b -m apt \
-    -a "update_cache=yes name=apache2 state=latest"
-
-```
-
-With this one simple line, all the host machines will run apt-get update (that's the update_cache directive at work), then install apache2's latest version including any dependencies required. Much like the File Module, the state directive has a few options:
-
-* latest — get the latest version, upgrading existing if needed.
-
-* absent — remove package if installed.
-
-* present — make sure package is installed, but don't upgrade existing.
-
-The Yum Module works similarly to the Apt Module, but I generally don't bother with the update_cache directive, because yum updates automatically. Although very similar, installing Apache on a Red Hat-based system looks like this:
-
-```
-
-ansible host_or_groupname -b -m yum \
-    -a "name=httpd state=present"
-
-```
-
-The difference with this example is that if Apache is already installed, it won't update, even if an update is available. Sometimes updating to the latest version isn't what you want, so this stops that from accidentally happening.
-
-### Just the Facts, Ma'am
-
-One frustrating thing about using Ansible in ad-hoc mode is that you don't have access to the "facts" about the remote systems. In my next article, where I plan to explore creating playbooks full of various tasks, you'll see how you can reference the facts Ansible learns about the systems. It makes Ansible far more powerful, but again, it can be utilized only in playbook mode. Nevertheless, it's possible to use ad-hoc mode to peek at the sorts of information Ansible gathers. If you run the setup module, it will show you all the details from a remote system:
-
-```
-
-ansible host_or_groupname -b -m setup
-
-```
-
-That command will spew a ton of variables on your screen. You can scroll through them all to see the vast amount of information Ansible pulls from the host machines. In fact, it shows so much information, it can be overwhelming. You can filter the results:
-
-```
-
-ansible host_or_groupname -b -m setup -a "filter=*family*"
-
-```
-
-That should just return a single variable, ansible_os_family, which likely will be Debian or Red Hat.
When you start building more complex Ansible setups with playbooks, it's possible to insert some logic and conditionals in order to use yum where appropriate and apt where the system is Debian-based. Really, the facts variables are incredibly useful and make building playbooks that much more exciting.
-
-But, that's for another article, because you've come to the end of the second installment. Your assignment for now is to get comfortable using Ansible in ad-hoc mode, doing one thing at a time. Most people think ad-hoc mode is just a stepping stone to more complex Ansible setups, but I disagree. The ability to configure hundreds of servers consistently and reliably with a single command is nothing to scoff at. I love making elaborate playbooks, but just as often, I'll use an ad-hoc command in a situation that used to require me to ssh in to a bunch of servers to do simple tasks. Have fun with Ansible; it just gets more interesting from here!
-
-
--------------------------------------------------------------------------------
-
-via: http://www.linuxjournal.com/content/ansible-making-things-happen
-
-作者:[Shawn Powers][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.linuxjournal.com/users/shawn-powers
-[1]:http://www.linuxjournal.com/content/ansible-automation-framework-thinks-sysadmin
diff --git a/sources/tech/20180202 Shell Scripting- Dungeons, Dragons and Dice.md b/sources/tech/20180202 Shell Scripting- Dungeons, Dragons and Dice.md
deleted file mode 100644
index d3794c7da4..0000000000
--- a/sources/tech/20180202 Shell Scripting- Dungeons, Dragons and Dice.md
+++ /dev/null
@@ -1,191 +0,0 @@
-Shell Scripting: Dungeons, Dragons and Dice
-======
-In my [last article][1], I talked about a really simple shell script for a game called Bunco, which is a dice game played in rounds where you roll three dice and compare your values to the round number. Match all three and match the round number, and you just got a bunco for 25 points. Otherwise, any die that matches the round is worth one point. It's simple—a game designed for people who are getting tipsy at the local pub, and it also is easy to program.
-
-The core function in the Bunco program was one that produced a random number between 1 and 6 to simulate rolling a six-sided die. It looked like this:
-
-```
-
-rolldie()
-{
-  local result=$1
-  rolled=$(( ( $RANDOM % 6 ) + 1 ))
-  eval $result=$rolled
-}
-
-```
-
-It's invoked with a variable name as the single argument, and it will load a random number between 1 and 6 into that value—for example:
-
-```
-
-rolldie die1
-
-```
-
-will assign a value 1..6 to $die1. Make sense?
-
-If you can do that, however, what's to stop you from having a second argument that specifies the number of sides of the die you want to "roll" with the function? Something like this:
-
-```
-
-rolldie()
-{
-  local result=$1 sides=$2
-  rolled=$(( ( $RANDOM % $sides ) + 1 ))
-  eval $result=$rolled
-}
-
-```
-
-To test it, let's just write a tiny wrapper that simply asks for a 20-sided die (d20) result:
-
-```
-
-rolldie die 20
-echo resultant roll is $die
-
-```
-
-Easy enough. To make it a bit more useful, let's allow users to specify a sequence of dice rolls, using the standard D&D notation of NdM—that is, N M-sided dice. Bunco would have been done with 3d6, for example (three six-sided dice). Got it?
- -Since you might well have starting flags too, let's build that into the parsing loop using the ever handy getopt: - -``` - -while getopts "h" arg -do - case "$arg" in - * ) echo "dnd-dice NdM {NdM}" - echo "NdM = N M-sided dice"; exit 0 ;; - esac -done -shift $(( $OPTIND - 1 )) -for request in $* ; do - echo "Rolling: $request" -done - -``` - -With a well formed notation like 3d6, it's easy to break up the argument into its component parts, like so: - -``` - -dice=$(echo $request | cut -dd -f1) -sides=$(echo $request | cut -dd -f2) -echo "Rolling $dice $sides-sided dice" - -``` - -To test it, let's give it some arguments and see what the program outputs: - -``` - -$ dnd-dice 3d6 1d20 2d100 4d3 d5 -Rolling 3 6-sided dice -Rolling 1 20-sided dice -Rolling 2 100-sided dice -Rolling 4 3-sided dice -Rolling 5-sided dice - -``` - -Ah, the last one points out a mistake in the script. If there's no number of dice specified, the default should be 1\. You theoretically could default to a six-sided die too, but that's not anywhere near so safe an assumption. - -With that, you're close to a functional program because all you need is a loop to process more than one die in a request. It's easily done with a while loop, but let's add some additional smarts to the script: - -``` - -for request in $* ; do - dice=$(echo $request | cut -dd -f1) - sides=$(echo $request | cut -dd -f2) - echo "Rolling $dice $sides-sided dice" - sum=0 # reset - while [ ${dice:=1} -gt 0 ] ; do - rolldie die $sides - echo " dice roll = $die" - sum=$(( $sum + $die )) - dice=$(( $dice - 1 )) - done - echo " sum total = $sum" -done - -``` - -This is pretty solid actually, and although the output statements need to be cleaned up a bit, the code's basically fully functional: - -``` - -$ dnd-dice 3d6 1d20 2d100 4d3 d5 -Rolling 3 6-sided dice - dice roll = 5 - dice roll = 6 - dice roll = 5 - sum total = 16 -Rolling 1 20-sided dice - dice roll = 16 - sum total = 16 -Rolling 2 100-sided dice - dice roll = 76 - dice roll = 84 - sum total = 160 -Rolling 4 3-sided dice - dice roll = 2 - dice roll = 2 - dice roll = 1 - dice roll = 3 - sum total = 8 -Rolling 5-sided dice - dice roll = 2 - sum total = 2 - -``` - -Did you catch that I fixed the case when $dice has no value? It's tucked into the reference in the while statement. Instead of referring to it as $dice, I'm using the notation ${dice:=1}, which uses the value specified unless it's null or no value, in which case the value 1 is assigned and used. It's a handy and a perfect fix in this case. - -In a game, you generally don't care much about individual die values; you just want to sum everything up and see what the total value is. So if you're rolling 4d20, for example, it's just a single value you calculate and share with the game master or dungeon master. - -A bit of output statement cleanup and you can do that: - -``` - -$ dnd-dice.sh 3d6 1d20 2d100 4d3 d5 -3d6 = 16 -1d20 = 13 -2d100 = 74 -4d3 = 8 -d5 = 2 - -``` - -Let's run it a second time just to ensure you're getting different values too: - -``` - -3d6 = 11 -1d20 = 10 -2d100 = 162 -4d3 = 6 -d5 = 3 - -``` - -There are definitely different values, and it's a pretty useful script, all in all. - -You could create a number of variations with this as a basis, including what some gamers enjoy called "exploding dice". The idea is simple: if you roll the best possible value, you get to roll again and add the second value too. Roll a d20 and get a 20? You can roll again, and your result is then 20 + whatever the second value is. 
Where this gets crazy is that you can do this for multiple cycles, so a d20 could become 30, 40 or even 50. - -And, that's it for this article. There isn't much else you can do with dice at this point. In my next article, I'll look at...well, you'll have to wait and see! Don't forget, if there's a topic you'd like me to tackle, please send me a note! - - --------------------------------------------------------------------------------- - -via: http://www.linuxjournal.com/content/shell-scripting-dungeons-dragons-and-dice - -作者:[Dave Taylor][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.linuxjournal.com/users/dave-taylor -[1]:http://www.linuxjournal.com/content/shell-scripting-bunco-game diff --git a/sources/tech/20180203 Evolving Your Own Life- Introducing Biogenesis.md b/sources/tech/20180203 Evolving Your Own Life- Introducing Biogenesis.md deleted file mode 100644 index 1b7ec47b9d..0000000000 --- a/sources/tech/20180203 Evolving Your Own Life- Introducing Biogenesis.md +++ /dev/null @@ -1,84 +0,0 @@ -Evolving Your Own Life: Introducing Biogenesis -====== - -Biogenesis provides a platform where you can create entire ecosystems of lifeforms and see how they interact and how the system as a whole evolves over time. - -You always can get the latest version from the project's main [website][1], but it also should be available in the package management systems for most distributions. For Debian-based distributions, install Biogenesis with the following command: - -``` - -sudo apt-get install biogenesis - -``` - -If you do download it directly from the project website, you also need to have a Java virtual machine installed in order to run it. - -To start it, you either can find the appropriate entry in the menu of your desktop environment, or you simply can type biogenesis in a terminal window. When it first starts, you will get an empty window within which to create your world. - -![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12224biof1.png) - -Figure 1\. When you first start Biogenesis, you get a blank canvas so you can start creating your world. - -The first step is to create a world. If you have a previous instance that you want to continue with, click the Game→Open menu item and select the appropriate file. If you want to start fresh, click Game→New to get a new world with a random selection of organisms. - -![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12224biof2.png) - -Figure 2\. When you launch a new world, you get a random selection of organisms to start your ecosystem. - -The world starts right away, with organisms moving and potentially interacting immediately. However, you can pause the world by clicking on the icon that is second from the right in the toolbar. Alternatively, you also can just press the p key to pause and resume the evolution of the world. - -At the bottom of the window, you'll find details about the world as it currently exists. There is a display of the frames per second, along with the current time within the world. Next, there is a count of the current population of organisms. And finally, there is a display of the current levels of oxygen and carbon dioxide. 
You can adjust the amount of carbon dioxide within the world either by clicking the relevant icon in the toolbar or selecting the World menu item and then clicking either Increase CO2 or Decrease CO2. - -There also are several parameters that govern how the world works and how your organisms will fare. If you select World→Parameters, you'll see a new window where you can play with those values. - -![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12224biof3.png) - -Figure 3\. The parameter configuration window allows you to set parameters on the physical characteristics of the world, along with parameters that control the evolution of your organisms. - -The General tab sets the amount of time per frame and whether hardware acceleration is used for display purposes. The World tab lets you set the physical characteristics of the world, such as the size and the initial oxygen and carbon dioxide levels. The Organisms tab allows you to set the initial number of organisms and their initial energy levels. You also can set their life span and mutation rate, among other items. The Metabolism tab lets you set the parameters around photosynthetic metabolism. And, the Genes tab allows you to set the probabilities and costs for the various genes that can be used to define your organisms. - -What about the organisms within your world though? If you click on one of the organisms, it will be highlighted and the display will change. - -![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12224biof4.png) - -Figure 4\. You can select individual organisms to find information about them, as well as apply different types of actions. - -The icon toolbar at the top of the window will change to provide actions that apply to organisms. At the bottom of the window is an information bar describing the selected organism. It shows physical characteristics of the organism, such as age, energy and mass. It also describes its relationships to other organisms. It does this by displaying the number of its children and the number of its victims, as well as which generation it is. - -If you want even more detail about an organism, click the Examine genes button in the bottom bar. This pops up a new window called the Genetic Laboratory that allows you to look at and alter the genes making up this organism. You can add or delete genes, as well as change the parameters of existing genes. - -![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12224biof5.png) - -Figure 5\. The Genetic Laboratory allows you to play with the individual genes that make up an organism. - -Right-clicking on a particular organism displays a drop-down menu that provides even more tools to work with. The first one allows you to track the selected organism as the world evolves. The next two entries allow you either to feed your organism extra food or weaken it. Normally, organisms need a certain amount of energy before they can reproduce. Selecting the fourth entry forces the selected organism to reproduce immediately, regardless of the energy level. You also can choose either to rejuvenate or outright kill the selected organism. If you want to increase the population of a particular organism quickly, simply copy and paste a number of a given organism. - -Once you have a particularly interesting organism, you likely will want to be able to save it so you can work with it further. 
When you right-click an organism, one of the options is to export the organism to a file. This pops up a standard save dialog box where you can select the location and filename. The standard file ending for Biogenesis genetic code files is .bgg. Once you start to have a collection of organisms you want to work with, you can use them within a given world by right-clicking a blank location on the canvas and selecting the import option. This allows you to pull those saved organisms back into a world that you are working with. - -Once you have allowed your world to evolve for a while, you probably will want to see how things are going. Clicking World→Statistics will pop up a new window where you can see what's happening within your world. - -![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12224biof6.png) - -Figure 6\. The statistics window gives you a breakdown of what's happening within the world you have created. - -The top of the window gives you the current statistics, including the time, the number of organisms, how many are dead, and the oxygen and carbon dioxide levels. It also provides a bar with the relative proportions of the genes. - -Below this pane is a list of some remarkable organisms within your world. These are organisms that have had the most children, the most victims or those that are the most infected. This way, you can focus on organisms that are good at the traits you're interested in. - -On the right-hand side of the window is a display of the world history to date. The top portion displays the history of the population, and the bottom portion displays the history of the atmosphere. As your world continues evolving, click the update button to get the latest statistics. - -This software package could be a great teaching tool for learning about genetics, the environment and how the two interact. If you find a particularly interesting organism, be sure to share it with the community at the project website. It might be worth a look there for starting organisms too, allowing you to jump-start your explorations. 
- - --------------------------------------------------------------------------------- - -via: http://www.linuxjournal.com/content/evolving-your-own-life-introducing-biogenesis - -作者:[Joey Bernard][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.linuxjournal.com/users/joey-bernard -[1]:http://biogenesis.sourceforge.net From 2ae455bf4f3030b30297fba1e7ffdbc457a7d073 Mon Sep 17 00:00:00 2001 From: darksun Date: Thu, 22 Feb 2018 14:32:46 +0800 Subject: [PATCH 14/81] remove www.codementor.io --- ...0129 Advanced Python Debugging with pdb.md | 363 ------------------ ...hon Hello World and String Manipulation.md | 133 ------- ...nt descent as an optimization technique.md | 51 --- ...5 Locust.io- Load-testing using vagrant.md | 262 ------------- .../20180208 Apache Beam- a Python example.md | 243 ------------ 5 files changed, 1052 deletions(-) delete mode 100644 sources/tech/20180129 Advanced Python Debugging with pdb.md delete mode 100644 sources/tech/20180204 Python Hello World and String Manipulation.md delete mode 100644 sources/tech/20180205 Linear Regression Classifier from scratch using Numpy and Stochastic gradient descent as an optimization technique.md delete mode 100644 sources/tech/20180205 Locust.io- Load-testing using vagrant.md delete mode 100644 sources/tech/20180208 Apache Beam- a Python example.md diff --git a/sources/tech/20180129 Advanced Python Debugging with pdb.md b/sources/tech/20180129 Advanced Python Debugging with pdb.md deleted file mode 100644 index 80f17e23a3..0000000000 --- a/sources/tech/20180129 Advanced Python Debugging with pdb.md +++ /dev/null @@ -1,363 +0,0 @@ -translating by lujun9972 -Advanced Python Debugging with pdb -====== - -![](https://process.filestackapi.com/cache=expiry:max/resize=width:700/compress/nygTCcWMQuyCFaOrlEnh) - -Python's built-in [`pdb`][1] module is extremely useful for interactive debugging, but has a bit of a learning curve. For a long time, I stuck to basic `print`-debugging and used `pdb` on a limited basis, which meant I missed out on a lot of features that would have made debugging faster and easier. - -In this post I will show you a few tips I've picked up over the years to level up my interactive debugging skills. - -## Print debugging vs. interactive debugging - -First, why would you want to use an interactive debugger instead of inserting `print` or `logging` statements into your code? - -With `pdb`, you have a lot more flexibility to run, resume, and alter the execution of your program without touching the underlying source. Once you get good at this, it means more time spent diving into issues and less time context switching back and forth between your editor and the command line. - -Also, by not touching the underlying source code, you will have the ability to step into third party code (e.g. modules installed from PyPI) and the standard library. - -## Post-mortem debugging - -The first workflow I used after moving away from `print` debugging was `pdb`'s "post-mortem debugging" mode. This is where you run your program as usual, but whenever an unhandled exception is thrown, you drop down into the debugger to poke around in the program state. After that, you attempt to make a fix and repeat the process until the problem is resolved. 
- -You can run an existing script with the post-mortem debugger by using Python's `-mpdb` option: -``` -python3 -mpdb path/to/script.py - -``` - -From here, you are dropped into a `(Pdb)` prompt. To start execution, you use the `continue` or `c` command. If the program executes successfully, you will be taken back to the `(Pdb)` prompt where you can restart the execution again. At this point, you can use `quit` / `q` or Ctrl+D to exit the debugger. - -If the program throws an unhandled exception, you'll also see a `(Pdb)` prompt, but with the program execution stopped at the line that threw the exception. From here, you can run Python code and debugger commands at the prompt to inspect the current program state. - -## Testing our basic workflow - -To see how these basic debugging steps work, I'll be using this (buggy) program: -``` -import random - -MAX = 100 - -def main(num_loops=1000): - for i in range(num_loops): - num = random.randint(0, MAX) - denom = random.randint(0, MAX) - result = num / denom - print("{} divided by {} is {:.2f}".format(num, denom, result)) - -if __name__ == "__main__": - import sys - arg = sys.argv[-1] - if arg.isdigit(): - main(arg) - else: - main() - -``` - -We're expecting the program to do some basic math operations on random numbers in a loop and print the result. Try running it normally and you will see one of the bugs: -``` -$ python3 script.py -2 divided by 30 is 0.07 -65 divided by 41 is 1.59 -0 divided by 70 is 0.00 -... -38 divided by 26 is 1.46 -Traceback (most recent call last): - File "script.py", line 16, in - main() - File "script.py", line 7, in main - result = num / denom -ZeroDivisionError: division by zero - -``` - -Let's try post-mortem debugging this error: -``` -$ python3 -mpdb script.py -> ./src/script.py(1)() --> import random -(Pdb) c -49 divided by 46 is 1.07 -... -Traceback (most recent call last): - File "/usr/lib/python3.4/pdb.py", line 1661, in main - pdb._runscript(mainpyfile) - File "/usr/lib/python3.4/pdb.py", line 1542, in _runscript - self.run(statement) - File "/usr/lib/python3.4/bdb.py", line 431, in run - exec(cmd, globals, locals) - File "", line 1, in - File "./src/script.py", line 1, in - import random - File "./src/script.py", line 7, in main - result = num / denom -ZeroDivisionError: division by zero -Uncaught exception. Entering post mortem debugging -Running 'cont' or 'step' will restart the program -> ./src/script.py(7)main() --> result = num / denom -(Pdb) num -76 -(Pdb) denom -0 -(Pdb) random.randint(0, MAX) -56 -(Pdb) random.randint(0, MAX) -79 -(Pdb) random.randint(0, 1) -0 -(Pdb) random.randint(1, 1) -1 - -``` - -Once the post-mortem debugger kicks in, we can inspect all of the variables in the current frame and even run new code to help us figure out what's wrong and attempt to make a fix. - -## Dropping into the debugger from Python code using `pdb.set_trace` - -Another technique that I used early on, after starting to use `pdb`, was forcing the debugger to run at a certain line of code before an error occurred. This is a common next step after learning post-mortem debugging because it feels similar to debugging with `print` statements. 
- -For example, in the above code, if we want to stop execution before the division operation, we could add a `pdb.set_trace` call to our program here: -``` - import pdb; pdb.set_trace() - result = num / denom - -``` - -And then run our program without `-mpdb`: -``` -$ python3 script.py -> ./src/script.py(10)main() --> result = num / denom -(Pdb) num -94 -(Pdb) denom -19 - -``` - -The problem with this method is that you have to constantly drop these statements into your source code, remember to remove them afterwards, and switch between running your code with `python` vs. `python -mpdb`. - -Using `pdb.set_trace` gets the job done, but **breakpoints** are an even more flexible way to stop the debugger at any line (even third party or standard library code), without needing to modify any source code. Let's learn about breakpoints and a few other useful commands. - -## Debugger commands - -There are over 30 commands you can give to the interactive debugger, a list that can be seen by using the `help` command when at the `(Pdb)` prompt: -``` -(Pdb) help - -Documented commands (type help ): -======================================== -EOF c d h list q rv undisplay -a cl debug help ll quit s unt -alias clear disable ignore longlist r source until -args commands display interact n restart step up -b condition down j next return tbreak w -break cont enable jump p retval u whatis -bt continue exit l pp run unalias where - -``` - -You can use `help ` for more information on a given command. - -Instead of walking through each command, I'll list out the ones I've found most useful and what arguments they take. - -**Setting breakpoints** : - - * `l(ist)`: displays the source code of the currently running program, with line numbers, for the 10 lines around the current statement. - * `l 1,999`: displays the source code of lines 1-999. I regularly use this to see the source for the entire program. If your program only has 20 lines, it'll just show all 20 lines. - * `b(reakpoint)`: displays a list of current breakpoints. - * `b 10`: set a breakpoint at line 10. Breakpoints are referred to by a numeric ID, starting at 1. - * `b main`: set a breakpoint at the function named `main`. The function name must be in the current scope. You can also set breakpoints on functions in other modules in the current scope, e.g. `b random.randint`. - * `b script.py:10`: sets a breakpoint at line 10 in `script.py`. This gives you another way to set breakpoints in another module. - * `clear`: clears all breakpoints. - * `clear 1`: clear breakpoint 1. - - - -**Stepping through execution** : - - * `c(ontinue)`: execute until the program finishes, an exception is thrown, or a breakpoint is hit. - * `s(tep)`: execute the next line, whatever it is (your code, stdlib, third party code, etc.). Use this when you want to step down into function calls you're interested in. - * `n(ext)`: execute the next line in the current function (will not step into downstream function calls). Use this when you're only interested in the current function. - * `r(eturn)`: execute the remaining lines in the current function until it returns. Use this to skip over the rest of the function and go up a level. For example, if you've stepped down into a function by mistake. - * `unt(il) [lineno]`: execute until the current line exceeds the current line number. This is useful when you've stepped into a loop but want to let the loop continue executing without having to manually step through every iteration. 
Without any argument, this command behaves like `next` (with the loop skipping behavior, once you've stepped through the loop body once). - - - -**Moving up and down the stack** : - - * `w(here)`: shows an annotated view of the stack trace, with your current frame marked by `>`. - * `u(p)`: move up one frame in the current stack trace. For example, when post-mortem debugging, you'll start off on the lowest level of the stack and typically want to move `up` a few times to help figure out what went wrong. - * `d(own)`: move down one frame in the current stack trace. - - - -**Additional commands and tips** : - - * `pp `: This will "pretty print" the result of the given expression using the [`pprint`][2] module. Example: - - -``` -(Pdb) stuff = "testing the pp command in pdb with a big list of strings" -(Pdb) pp [(i, x) for (i, x) in enumerate(stuff.split())] -[(0, 'testing'), - (1, 'the'), - (2, 'pp'), - (3, 'command'), - (4, 'in'), - (5, 'pdb'), - (6, 'with'), - (7, 'a'), - (8, 'big'), - (9, 'list'), - (10, 'of'), - (11, 'strings')] - -``` - - * `!`: sometimes the Python code you run in the debugger will be confused for a command. For example `c = 1` will trigger the `continue` command. To force the debugger to execute Python code, prefix the line with `!`, e.g. `!c = 1`. - - * Pressing the Enter key at the `(Pdb)` prompt will execute the previous command again. This is most useful after the `s`/`n`/`r`/`unt` commands to quickly step through execution line-by-line. - - * You can run multiple commands on one line by separating them with `;;`, e.g. `b 8 ;; c`. - - * The `pdb` module can take multiple `-c` arguments on the command line to execute commands as soon as the debugger starts. - - - - -Example: -``` -python3 -mpdb -cc script.py # run the program without you having to enter an initial "c" at the prompt -python3 -mpdb -c "b 8" -cc script.py # sets a breakpoint on line 8 and runs the program - -``` - -## Restart behavior - -Another thing that can shave time off debugging is understanding how `pdb`'s restart behavior works. You may have noticed that after execution stops, `pdb` will give a message like, "The program finished and will be restarted," or "The script will be restarted." When I first started using `pdb`, I would always quit and re-run `python -mpdb ...` to make sure that my code changes were getting picked up, which was unnecessary in most cases. - -When `pdb` says it will restart the program, or when you use the `restart` command, code changes to the script you're debugging will be reloaded automatically. Breakpoints will still be set after reloading, but may need to be cleared and re-set due to line numbers shifting. Code changes to other imported modules will not be reloaded -- you will need to `quit` and re-run the `-mpdb` command to pick those up. - -## Watches - -One feature you may miss from other interactive debuggers is the ability to "watch" a variable change throughout the program's execution. `pdb` does not include a watch command by default, but you can get something similar by using `commands`, which lets you run arbitrary Python code whenever a breakpoint is hit. 
- -To watch what happens to the `denom` variable in our example program: -``` -$ python3 -mpdb script.py -> ./src/script.py(1)() --> import random -(Pdb) b 9 -Breakpoint 1 at ./src/script.py:9 -(Pdb) commands -(com) silent -(com) print("DENOM: {}".format(denom)) -(com) c -(Pdb) c -DENOM: 77 -71 divided by 77 is 0.92 -DENOM: 27 -100 divided by 27 is 3.70 -DENOM: 10 -82 divided by 10 is 8.20 -DENOM: 20 -... - -``` - -We first set a breakpoint (which is assigned ID 1), then use `commands` to start entering a block of commands. These commands function as if you had typed them at the `(Pdb)` prompt. They can be either Python code or additional `pdb` commands. - -Once we start the `commands` block, the prompt changes to `(com)`. The `silent` command means the following commands will not be echoed back to the screen every time they're executed, which makes reading the output a little easier. - -After that, we run a `print` statement to inspect the variable, similar to what we might do when `print` debugging. Finally, we end with a `c` to continue execution, which ends the command block. Typing `c` again at the `(Pdb)` prompt starts execution and we see our new `print` statement running. - -If you'd rather stop execution instead of continuing, you can use `end` instead of `c` in the command block. - -## Running pdb from the interpreter - -Another way to run `pdb` is via the interpreter, which is useful when you're experimenting interactively and would like to drop into `pdb` without running a standalone script. - -For post-mortem debugging, all you need is a call to `pdb.pm()` after an exception has occurred: -``` -$ python3 ->>> import script ->>> script.main() -17 divided by 60 is 0.28 -... -56 divided by 94 is 0.60 -Traceback (most recent call last): - File "", line 1, in - File "./src/script.py", line 9, in main - result = num / denom -ZeroDivisionError: division by zero ->>> import pdb ->>> pdb.pm() -> ./src/script.py(9)main() --> result = num / denom -(Pdb) num -4 -(Pdb) denom -0 - -``` - -If you want to step through normal execution instead, use the `pdb.run()` function: -``` -$ python3 ->>> import script ->>> import pdb ->>> pdb.run("script.main()") -> (1)() -(Pdb) b script:6 -Breakpoint 1 at ./src/script.py:6 -(Pdb) c -> ./src/script.py(6)main() --> for i in range(num_loops): -(Pdb) n -> ./src/script.py(7)main() --> num = random.randint(0, MAX) -(Pdb) n -> ./src/script.py(8)main() --> denom = random.randint(0, MAX) -(Pdb) n -> ./src/script.py(9)main() --> result = num / denom -(Pdb) n -> ./src/script.py(10)main() --> print("{} divided by {} is {:.2f}".format(num, denom, result)) -(Pdb) n -66 divided by 70 is 0.94 -> ./src/script.py(6)main() --> for i in range(num_loops): - -``` - -This one is a little trickier than `-mpdb` because you don't have the ability to step through an entire program. Instead, you'll need to manually set a breakpoint, e.g. on the first statement of the function you're trying to execute. - -## Conclusion - -Hopefully these tips have given you a few new ideas on how to use `pdb` more effectively. After getting a handle on these, you should be able to pick up the [other commands][3] and start customizing `pdb` via a `.pdbrc` file ([example][4]). - -You can also look into other front-ends for debugging, like [pdbpp][5], [pudb][6], and [ipdb][7], or GUI debuggers like the one included in PyCharm. Happy debugging! 
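-
-As a starting point, here is a tiny `.pdbrc` sketch (the aliases are only illustrative suggestions; `alias` and `pp` are the same commands listed in the help table above):
-
-```
-
-# ~/.pdbrc -- each line here runs as a pdb command at startup
-# (pdb skips lines beginning with "#" in this file)
-
-# "pl" pretty-prints the local variables of the current frame
-alias pl pp locals()
-
-# "dt obj" pretty-prints an object's attribute dictionary
-alias dt pp %1.__dict__
-
-```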
-
-
--------------------------------------------------------------------------------
-
-via: https://www.codementor.io/stevek/advanced-python-debugging-with-pdb-g56gvmpfa
-
-作者:[Steven Kryskalla][a]
-译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.codementor.io/stevek
-[1]:https://docs.python.org/3/library/pdb.html
-[2]:https://docs.python.org/3/library/pprint.html
-[3]:https://docs.python.org/3/library/pdb.html#debugger-commands
-[4]:https://nedbatchelder.com/blog/200704/my_pdbrc.html
-[5]:https://pypi.python.org/pypi/pdbpp/
-[6]:https://pypi.python.org/pypi/pudb/
-[7]:https://pypi.python.org/pypi/ipdb
diff --git a/sources/tech/20180204 Python Hello World and String Manipulation.md b/sources/tech/20180204 Python Hello World and String Manipulation.md
deleted file mode 100644
index 7a27b8b174..0000000000
--- a/sources/tech/20180204 Python Hello World and String Manipulation.md
+++ /dev/null
@@ -1,133 +0,0 @@
-translating---geekpi
-
-Python Hello World and String Manipulation
-======
-
-![](https://process.filestackapi.com/cache=expiry:max/resize=width:700/compress/eadkmsrBTcWSyCeA4qti)
-
-Before starting, I should mention that the [code][1] used in this blog post and in the [video][2] below is available on my github.
-
-With that, let’s get started! If you get lost, I recommend opening the [video][3] below in a separate tab.
-
-[Hello World and String Manipulation Video using Python][2]
-
-#### Get Started (Prerequisites)
-
-Install Anaconda (Python) on your operating system. You can either download Anaconda from the [official site][4] and install it on your own, or you can follow the Anaconda installation tutorials below.
-
-Install Anaconda on Windows: [Link][5]
-
-Install Anaconda on Mac: [Link][6]
-
-Install Anaconda on Ubuntu (Linux): [Link][7]
-
-#### Open a Jupyter Notebook
-
-Open your terminal (Mac) or command line and type the following ([see 1:16 in the video to follow along][8]) to open a Jupyter Notebook:
-```
-jupyter notebook
-
-```
-
-#### Print Statements/Hello World
-
-Type the following into a cell in Jupyter and press **shift + enter** to execute the code.
-```
-# This is a one line comment
-print('Hello World!')
-
-```
-
-Output of printing ‘Hello World!’
-
-#### Strings and String Manipulation
-
-Strings are a built-in Python class. As objects, strings support calling methods using the .methodName() notation. The string class is available by default in Python, so you do not need an import statement to use the object interface to strings.
-```
-# Create a variable
-# Variables are used to store information to be referenced
-# and manipulated in a computer program.
-firstVariable = 'Hello World'
-print(firstVariable)
-
-```
-
-Output of printing the variable firstVariable
-```
-# Explore various string methods
-print(firstVariable.lower())
-print(firstVariable.upper())
-print(firstVariable.title())
-
-```
-
-Output of using the .lower(), .upper(), and .title() methods
-```
-# Use the split method to convert your string into a list
-print(firstVariable.split(' '))
-
-```
-
-Output of using the split method (in this case, split on space)
-```
-# You can add strings together.
-a = "Fizz" + "Buzz"
-print(a)
-
-```
-
-String concatenation
-
-#### Look up what Methods Do
-
-New programmers often ask how you can know what each method does.
Python provides two ways to do this.
-
- 1. (works in and out of Jupyter Notebook) Use **help** to look up what each method does.
-
-Look up what each method does
-
- 2. (Jupyter Notebook exclusive) You can also look up what a method does by appending a question mark to the method.
-
-```
-# To look up what each method does in Jupyter (doesn't work outside of Jupyter)
-firstVariable.lower?
-
-```
-
-Look up what each method does in Jupyter
-
-#### Closing Remarks
-
-Please let me know if you have any questions, either here or in the comments section of the [youtube video][2]. The code in the post is also available on my [github][1]. Part 2 of the tutorial series is [Simple Math][10].
-
--------------------------------------------------------------------------------
-
-via: https://www.codementor.io/mgalarny/python-hello-world-and-string-manipulation-gdgwd8ymp
-
-作者:[Michael][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.codementor.io/mgalarny
-[1]:https://github.com/mGalarnyk/Python_Tutorials/blob/master/Python_Basics/Intro/Python3Basics_Part1.ipynb
-[2]:https://www.youtube.com/watch?v=JqGjkNzzU4s
-[3]:https://www.youtube.com/watch?v=kApPBm1YsqU
-[4]:https://www.continuum.io/downloads
-[5]:https://medium.com/@GalarnykMichael/install-python-on-windows-anaconda-c63c7c3d1444
-[6]:https://medium.com/@GalarnykMichael/install-python-on-mac-anaconda-ccd9f2014072
-[7]:https://medium.com/@GalarnykMichael/install-python-on-ubuntu-anaconda-65623042cb5a
-[8]:https://youtu.be/JqGjkNzzU4s?t=1m16s
-[10]:https://medium.com/@GalarnykMichael/python-basics-2-simple-math-4ac7cc928738
diff --git a/sources/tech/20180205 Linear Regression Classifier from scratch using Numpy and Stochastic gradient descent as an optimization technique.md b/sources/tech/20180205 Linear Regression Classifier from scratch using Numpy and Stochastic gradient descent as an optimization technique.md
deleted file mode 100644
index 958b7f38b0..0000000000
--- a/sources/tech/20180205 Linear Regression Classifier from scratch using Numpy and Stochastic gradient descent as an optimization technique.md
+++ /dev/null
@@ -1,51 +0,0 @@
-Translating by Torival
-Linear Regression Classifier from scratch using Numpy and Stochastic gradient descent as an optimization technique
-======
-
-![](https://process.filestackapi.com/cache=expiry:max/resize=width:700/compress/cKkX2ryQteXTdZYSR6t7)
-
-In statistics, linear regression is a linear approach for modelling the relationship between a scalar dependent variable y and one or more explanatory variables (or independent variables) denoted X. The case of one explanatory variable is called simple linear regression. For more than one explanatory variable, the process is called multiple linear regression.
-
-As you may know, the equation of a line with slope **m** and intercept **c** is given by **y=mx+c**. Now in our dataset **x** is a feature and **y** is the label, that is, the output.
-
-Now we will start with some random values of m and c, and by using our classifier we will adjust their values so that we obtain a line with the best fit.
-
-Suppose we have a dataset with a single feature given by **X=[1,2,3,4,5,6,7,8,9,10]** and label/output being **Y=[1,4,9,16,25,36,49,64,81,100]**. We start with a random value of **m** being **1** and **c** being **0**.
Now starting with the first data point, which is **x=1**, we will calculate its corresponding output, which is **y=m*x+c**
- > **y=1*1+0**
- > **y=1** .
-
-Now this is our guess for the given input. We will subtract our calculated y (our guess) from the actual output, which is **y(original)=1**, to calculate the error, which is **y(guess)-y(original)**. This can also be termed our cost function when we take the square of its mean, and our aim is to minimize this cost.
-
-After each iteration through the data points we will change our values of **m** and **c** such that the obtained m and c give the line with the best fit. Now how can we do this?
-
-The answer is using the **Gradient Descent Technique**.
-
-![Gd_demystified.png][1]
-
-In gradient descent we look to minimize the cost function, and in order to minimize the cost function we need to minimize the error, which is given by **error=y(guess)-y(original)**.
-
-
-Now the error depends on two values, **m** and **c**. If we take the partial derivative of the error with respect to **m** and **c**, we can get to know the orientation, i.e. whether we need to increase the values of m and c or decrease them in order to obtain the line of best fit.
-
-On taking the partial derivative of the error with respect to **m** we get **x**, and on taking the partial derivative of the error with respect to **c** we get a constant.
-
-So if we apply the two changes **m=m-error*x** and **c=c-error*1** after every iteration, we can adjust the values of m and c to obtain the line with the best fit.
-
-Now the error can be negative as well as positive. When the error is negative it means our **m** and **c** are smaller than the actual **m** and **c**, and hence we would need to increase their values; if the error is positive we would need to decrease their values, and that is what we are doing.
-
-But wait, we also need a constant called the learning_rate so that we don't increase or decrease the values of **m** and **c** at a steep rate. So we multiply the updates by it, **m=m-error * x * learning_rate** and **c=c-error * 1 * learning_rate**, to make the process smooth.
-
-So we need to update **m** to **m=m-error * x * learning_rate** and **c** to **c=c-error * 1 * learning_rate** to obtain the line with the best fit, and this is our linear regression model using stochastic gradient descent, "stochastic" meaning that we update the values of m and c in every iteration.
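-
-To make the update rule above concrete, here is a minimal Python sketch of it. This is illustrative only and not the code from the repository linked below; the variable names, the number of passes, and the learning rate value are our own choices:
-
-```
-import numpy as np
-
-# Sketch of the per-sample update rule described above.
-X = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=float)
-Y = np.array([1, 4, 9, 16, 25, 36, 49, 64, 81, 100], dtype=float)
-
-m, c = 1.0, 0.0        # initial slope and intercept, as in the text
-learning_rate = 0.001  # keeps each per-sample update small
-
-for epoch in range(100):          # several passes over the dataset
-    for x, y in zip(X, Y):        # "stochastic": update after every sample
-        error = (m * x + c) - y   # y(guess) - y(original)
-        m = m - error * x * learning_rate   # partial derivative w.r.t. m ~ x
-        c = c - error * 1 * learning_rate   # partial derivative w.r.t. c ~ 1
-
-print(m, c)  # slope and intercept of the fitted line
-```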
- -You can check the full code in python :[https://github.com/assassinsurvivor/MachineLearning/blob/master/Regression.py][2] - --------------------------------------------------------------------------------- - -via: https://www.codementor.io/prakharthapak/linear-regression-classifier-from-scratch-using-numpy-and-stochastic-gradient-descent-as-an-optimization-technique-gf5gm9yti - -作者:[Prakhar Thapak][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.codementor.io/prakharthapak -[1]:https://process.filestackapi.com/cache=expiry:max/5TXRH28rSo27kTNZLgdN -[2]:https://www.codementor.io/prakharthapak/here diff --git a/sources/tech/20180205 Locust.io- Load-testing using vagrant.md b/sources/tech/20180205 Locust.io- Load-testing using vagrant.md deleted file mode 100644 index ef79362a7c..0000000000 --- a/sources/tech/20180205 Locust.io- Load-testing using vagrant.md +++ /dev/null @@ -1,262 +0,0 @@ -Locust.io: Load-testing using vagrant -====== - -![](https://process.filestackapi.com/cache=expiry:max/resize=width:700/compress/Rm2HlpyYQc6ma5BnUGRO) - -What could possibly go wrong when you release an application to the public domain without testing? You could either wait to find out or you can just find out before releasing the product. - -In this tutorial, we will be considering the art of load-testing, one of the several types of [non-functional test][1] required for a system. - -According to wikipedia - -> [Load testing][2] is the process of putting demand on a software system or computing device and measuring its response. Load testing is performed to determine a system's behavior under both normal and anticipated peak load conditions. It helps to identify the maximum operating capacity of an application as well as any bottlenecks and determine which element is causing degradation. - -### What the heck is locust.io? -[Locust][3] is an opensource load-testing tool that can be used to simulate millions of simultaneous users, it has other cool features that allows you to visualize the data generated from the test plus it has been proven & battle tested ![😃][4] - -### Why Vagrant? -Because [vagrant][5] allows us to build and maintain our near replica production environment with the right parameters for memory, CPU, storage, and disk i/o. - -### Why VirtualBox? -VirtualBox here will act as our hypervisor, the computer software that will create and run our virtual machine(s). - -### So, what is the plan here? - - * Download [Vagrant][6] and [VirtualBox][7] - * Set up a near-production replica environment using ### vagrant** and **virtualbox [SOURCE_CODE_APPLICATION][8] - * Set up locust to run our load test [SOURCE_CODE_LOCUST][9] - * Execute test against our production replica environment and check performance - - - -### Some context -Vagrant uses "Provisioners" and "Providers" as building blocks to manage the development environments. - -> Provisioners are tools that allow users to customize the configuration of virtual environments. Puppet and Chef are the two most widely used provisioners in the Vagrant ecosystem. -> Providers are the services that Vagrant uses to set up and create virtual environments. 
- -Reference can be found [here][10] - -That said for our vagrant configuration we will be making use of the Vagrant Shell provisioner and VirtualBox for our provider, just a simple setup for now ![😉][11] - -One more thing, the Machine, and software requirements are written in a file called "Vagrantfile" to execute necessary steps in order to create a development-ready box, so let's get down to business. - -### A near production environment using Vagrant and Virtualbox -I used a past project of mine, a very minimal Python/Django application I called Bookshelf to create a near-production environment. Here is the link to the [repository][8] - -Let's create our environmnet using a vagrantfile. -Use the command `vagrant init --minimal hashicorp/precise64` to create a vagrant file, where `hashicorp` is the username and `precise64` is the box name. - -More about getting started with vagrant can be found [here][12] -``` -# vagrant file - -# set our environment to use our host private and public key to access the VM -# as vagrant project provides an insecure key pair for SSH Public Key # Authentication so that vagrant ssh works -# https://stackoverflow.com/questions/14715678/vagrant-insecure-by-default - -private_key_path = File.join(Dir.home, ".ssh", "id_rsa") -public_key_path = File.join(Dir.home, ".ssh", "id_rsa.pub") -insecure_key_path = File.join(Dir.home, ".vagrant.d", "insecure_private_key") - -private_key = IO.read(private_key_path) -public_key = IO.read(public_key_path) - -# Set the environment details here -Vagrant.configure("2") do |config| - config.vm.box = "hashicorp/precise64" - config.vm.hostname = "bookshelf-dev" - # using a private network here, so don't forget to update your /etc/host file. - # 192.168.50.4 bookshelf.example - config.vm.network "private_network", ip: "192.168.50.4" - - config.ssh.insert_key = false - config.ssh.private_key_path = [ - private_key_path, - insecure_key_path # to provision the first time - ] - - # reference: https://github.com/hashicorp/vagrant/issues/992 @dwickern - # use host/personal public and private key for security reasons - config.vm.provision :shell, :inline => <<-SCRIPT - set -e - mkdir -p /vagrant/.ssh/ - - echo '#{private_key}' > /vagrant/.ssh/id_rsa - chmod 600 /vagrant/.ssh/id_rsa - - echo '#{public_key}' > /vagrant/.ssh/authorized_keys - chmod 600 /vagrant/.ssh/authorized_keys - SCRIPT - - # Use a shell provisioner here - config.vm.provision "shell" do |s| - s.path = ".provision/setup_env.sh" - s.args = ["set_up_python"] - end - - - config.vm.provision "shell" do |s| - s.path = ".provision/setup_nginx.sh" - s.args = ["set_up_nginx"] - end - - if Vagrant.has_plugin?("vagrant-vbguest") - config.vbguest.auto_update = false - end - - # set your environment parameters here - config.vm.provider 'virtualbox' do |v| - v.memory = 2048 - v.cpus = 2 - end - - config.vm.post_up_message = "At this point use `vagrant ssh` to ssh into the development environment" -end - -``` - -Something to note here, notice the config `config.vm.network "private_network", ip: "192.168.50.4"` where I configured the Virtual machine network to use a private network "192.168.59.4", I edited my `/etc/hosts` file to map that IP address to the fully qualified domain name (FQDN) of the application called `bookshelf.example`. So, don't forget to edit your `/etc/hosts/` as well it should look like this -``` -## -# /etc/host -# Host Database -# -# localhost is used to configure the loopback interface -# when the system is booting. Do not change this entry. 
-## -127.0.0.1 localhost -255.255.255.255 broadcasthost -::1 localhost -192.168.50.4 bookshelf.example - -``` - -The provision scripts can be found in the `.provision` [folder][13] of the repository -![provision_sd.png][14] - -There you would see all the scripts used in the setup, the `start_app.sh` is used to run the application once you are in the virtual machine via ssh. - -To start the process run `vagrant up && vagrant ssh`, this will start the application and take you via ssh into the VM, inside the VM navigate to the `/vagrant/` folder to start the app via the command `./start_app.sh` - -With our application up and running, next would be to create a load testing script to run against our setup. - -### NB: The current application setup here makes use of sqlite3 for the database config, you can change that to Postgres by uncommenting that in the settings file. Also, `setup_env.sh` provisions the environment to use Postgres. - -To set up a more comprehensive and robust production replica environment I would suggest you reference the docs [here][15], you can also check out [vagrant][5] to understand and play with vagrant. - -### Set up locust for load-testing -In other to perform load testing we are going to make use of locust. Source code can be found [here][9] - -First, we create our locust file -``` -# locustfile.py - -# script used against vagrant set up on bookshelf git repo -# url to repo: https://github.com/andela-sjames/bookshelf - -from locust import HttpLocust, TaskSet, task - -class SampleTrafficTask(TaskSet): - - @task(2) - def index(self): - self.client.get("/") - - @task(1) - def search_for_book_that_contains_string_space(self): - self.client.get("/?q=space") - - @task(1) - def search_for_book_that_contains_string_man(self): - self.client.get("/?q=man") - -class WebsiteUser(HttpLocust): - host = "http://bookshelf.example" - task_set = SampleTrafficTask - min_wait = 5000 - max_wait = 9000 - -``` - -Here is a simple locust file called `locustfile.py`, where we define a number of locust task grouped under the `TaskSet class`. Then we have the `HttpLocust class` which represents a user, where we define how long a simulated user should wait between executing tasks, as well as what TaskSet class should define the user’s “behavior”. - -using the filename locustfile.py allows us to start the process by simply running the command `locust`. If you choose to give your file a different name then you just need to reference the path using `locust -f /path/to/the/locust/file` to start the script. - -If you're getting excited and want to know more then the [quick start][16] guide will get up to speed. - -### Execute test and check perfomance - -It's time to see some action ![😮][17] - -Bookshelf app: -Run the application via `vagrant up && vagrant ssh` navigate to the `/vagrant` and run `./start_app.sh` - -Vagrant allows you to shut down the running machine using `vagrant halt` and to destroy the machine and all the resources that were created with it using `vagrant destroy`. Use this [link][18] to know more about the vagrant command line. 
- -![bookshelf_str.png][14] - -Go to your browser and use the private_ip address `192.168.50.4` or preferably `http://bookshelf.example` what we set in our `/etc/host` file of the system -`192.168.50.4 bookshelf.example` - -![bookshelf_app_web.png][14] - -Locust Swarm: -Within your load-testing folder, activate your `virtualenv`, get your dependencies down via `pip install -r requirements.txt` and run `locust` - -![locust_str.png][14] - -We're almost done: -Now got to `http://127.0.0.1:8089/` on your browser -![locust_rate.png][14] - -Enter the number of users you want to simulate and the hatch rate (i.e how many users you want to be generated per second) and start swarming your development environment - -**NB: You can also run locust against a development environment hosted via a cloud service if that is your use case. You don't have to confine yourself to vagrant.** - -With the generated report and metric from the process, you should be able to make a well-informed decision on with regards to your system architecture or at least know the limit of your system and prepare for an anticipated event. - -![locust_1.png][14] - -![locust_a.png][14] - -![locust_b.png][14] - -![locust_error.png][14] - -### Conclusion -Congrats!!! if you made it to the end. As a recap, we were able to talk about what load-testing is, why you would want to perform a load test on your application and how to do it using locust and vagrant with a VirtualBox provider and a Shell provisioner. We also looked at the metrics and data generated from the test. - -**NB: If you want a more concise vagrant production environment you can reference the docs [here][15].** - -Thanks for reading and feel free to like/share this post. - --------------------------------------------------------------------------------- - -via: https://www.codementor.io/samueljames/locust-io-load-testing-using-vagrant-ffwnjger9 - -作者:[Samuel James][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: -[1]:https://en.wikipedia.org/wiki/Non-functional_testing -[2]:https://en.wikipedia.org/wiki/Load_testing -[3]:https://locust.io/ -[4]:https://twemoji.maxcdn.com/2/72x72/1f603.png -[5]:https://www.vagrantup.com/intro/index.html -[6]:https://www.vagrantup.com/downloads.html -[7]:https://www.virtualbox.org/wiki/Downloads -[8]:https://github.com/andela-sjames/bookshelf -[9]:https://github.com/andela-sjames/load-testing -[10]:https://en.wikipedia.org/wiki/Vagrant_(software) -[11]:https://twemoji.maxcdn.com/2/72x72/1f609.png -[12]:https://www.vagrantup.com/intro/getting-started/install.html -[13]:https://github.com/andela-sjames/bookshelf/tree/master/.provision -[14]:data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw== -[15]:http://vagrant-django.readthedocs.io/en/latest/intro.html -[16]:https://docs.locust.io/en/latest/quickstart.html -[17]:https://twemoji.maxcdn.com/2/72x72/1f62e.png -[18]:https://www.vagrantup.com/docs/cli/ diff --git a/sources/tech/20180208 Apache Beam- a Python example.md b/sources/tech/20180208 Apache Beam- a Python example.md deleted file mode 100644 index da3beb5d03..0000000000 --- a/sources/tech/20180208 Apache Beam- a Python example.md +++ /dev/null @@ -1,243 +0,0 @@ -Apache Beam: a Python example -====== - -![](https://process.filestackapi.com/cache=expiry:max/resize=width:700/compress/EOfIfmx0QlDgc6rDnuNq) - -Nowadays, being able to handle huge amounts of data can be an interesting skill: 
analytics, user profiling, statistics; virtually any business that needs to extract information from its data is, in one way or another, using some big data tools or platforms.
-
-One of the most interesting tools is Apache Beam, a framework that gives us the instruments to build procedures that transform, process, aggregate, and manipulate data for our needs.
-
-Let’s try and see how we can use it in a very simple scenario.
-
-### The context
-
-Imagine that we have a database with information about users visiting a website, with each record containing:
-
- * country of the visiting user
- * duration of the visit
- * user name
-
-
-
-We want to create some reports containing:
-
- 1. for each country, the **number of users** visiting the website
- 2. for each country, the **average visit time**
-
-
-
-We will use **Apache Beam**, a Google SDK (previously called Dataflow) representing a **programming model** aimed at simplifying the mechanism of large-scale data processing.
-
-It has been donated to the Apache Foundation, and is called Beam because it is able to process data in whatever form you need: **batches** and **streams** (b-eam). It gives you the chance to define **pipelines** to process real-time data (**streams**) and historical data (**batches**).
-
-The pipeline definition is completely decoupled from the context in which you will run it, so Beam lets you choose one of the supported runners:
-
- * Beam model: local execution of your pipeline
- * Google Cloud Dataflow: dataflow as a service
- * Apache Flink
- * Apache Spark
- * Apache Gearpump
- * Apache Hadoop MapReduce
- * JStorm
- * IBM Streams
-
-
-
-We will be running the Beam model one, which basically executes everything on your local machine.
-
-### The programming model
-
-Though this is not going to be a deep explanation of the DataFlow programming model, it’s necessary to understand what a pipeline is: a set of manipulations being made on an input data set that provides a new set of data. More precisely, a pipeline is made of **transforms** applied to **collections**.
-
-Straight from the [Apache Beam website][1]:
-
-> A pipeline encapsulates your entire data processing task, from start to finish. This includes reading input data, transforming that data, and writing output data.
-
-The pipeline gets data injected from the outside and represents it as **collections** (formally named `PCollection`s), each of them being
-
-> a potentially distributed, multi-element, data set
-
-When one or more `Transform`s are applied to a `PCollection`, a brand new `PCollection` is generated (and for this reason the resulting `PCollection`s are **immutable** objects).
-
-The first and last steps of a pipeline are, of course, the ones that read and write data to and from several kinds of storage; you can find a list [here][2].
-
-### The application
-
-We will have the data in a `csv` file, so the first thing we need to do is to read the contents of the file and provide a structured representation of all of the rows.
-
-A generic row of the `csv` file will look like the following:
-```
-United States Of America, 0.5, John Doe
-
-```
-
-with the columns being the country, the visit time in seconds, and the user name, respectively.
-
-Given the data we want to provide, let’s see what our pipeline will be doing and how.
-
-### Read the input data set
-
-The first step will be to read the input file.
-``` -with apache_beam.Pipeline(options=options) as p: - - rows = ( - p | - ReadFromText(input_filename) | - apache_beam.ParDo(Split()) - ) - -``` - -In the above context, `p` is an instance of `apache_beam.Pipeline` and the first thing that we do is to apply a built-in transform, `apache_beam.io.textio.ReadFromText` that will load the contents of the file into a `PCollection`. After this, we apply a specific logic, `Split`, to process every row in the input file and provide a more convenient representation (a dictionary, specifically). - -Here’s the `Split` function: -``` -class Split(apache_beam.DoFn): - - def process(self, element): - country, duration, user = element.split(",") - - return [{ - 'country': country, - 'duration': float(duration), - 'user': user - }] - -``` - -The `ParDo` transform is a core one, and, as per official Apache Beam documentation: - -`ParDo` is useful for a variety of common data processing operations, including: - - * **Filtering a data set.** You can use `ParDo` to consider each element in a `PCollection` and either output that element to a new collection or discard it. - * **Formatting or type-converting each element in a data set.** If your input `PCollection` contains elements that are of a different type or format than you want, you can use `ParDo` to perform a conversion on each element and output the result to a new `PCollection`. - * **Extracting parts of each element in a data set.** If you have a`PCollection` of records with multiple fields, for example, you can use a `ParDo` to parse out just the fields you want to consider into a new `PCollection`. - * **Performing computations on each element in a data set.** You can use `ParDo` to perform simple or complex computations on every element, or certain elements, of a `PCollection` and output the results as a new `PCollection`. - - - -Please read more of this [here][3]. - -### Grouping relevant information under proper keys - -At this point, we have a list of valid rows, but we need to reorganize the information under keys that are the countries referenced by such rows. For example, if we have three rows like the following: - -> Spain (ES), 2.2, John Doe> Spain (ES), 2.9, John Wayne> United Kingdom (UK), 4.2, Frank Sinatra - -we need to rearrange the information like this: -``` -{ - "Spain (ES)": [2.2, 2.9], - "United kingdom (UK)": [4.2] -} - -``` - -If we do this, we have all the information in good shape to make all the calculations we need. - -Here we go: -``` -timings = ( - rows | - apache_beam.ParDo(CollectTimings()) | - "Grouping timings" >> apache_beam.GroupByKey() | - "Calculating average" >> apache_beam.CombineValues( - apache_beam.combiners.MeanCombineFn() - ) -) - -users = ( - rows | - apache_beam.ParDo(CollectUsers()) | - "Grouping users" >> apache_beam.GroupByKey() | - "Counting users" >> apache_beam.CombineValues( - apache_beam.combiners.CountCombineFn() - ) -) - -``` - -The classes `CollectTimings` and `CollectUsers` basically filter the rows that are of interest for our goal. They also rearrange each of them in the right form, that is something like: - -> (“Spain (ES)”, 2.2) - -At this point, we are able to use the `GroupByKey` transform, that will create a single record that, incredibly, groups all of the info that shares the same keys: - -> (“Spain (ES)”, (2.2, 2.9)) - -Note: the key is always the first element of the tuple. - -The very last missing bit of the logic to apply is the one that has to process the values associated to each key. 
The built-in transform is `apache_beam.CombineValues`, which is pretty much self explanatory. - -The logics that are applied are `apache_beam.combiners.MeanCombineFn` and `apache_beam.combiners.CountCombineFn` respectively: the former calculates the arithmetic mean, the latter counts the element of a set. - -For the sake of completeness, here is the definition of the two classes `CollectTimings` and `CollectUsers`: -``` -class CollectTimings(apache_beam.DoFn): - - def process(self, element): - """ - Returns a list of tuples containing country and duration - """ - - result = [ - (element['country'], element['duration']) - ] - return result - - -class CollectUsers(apache_beam.DoFn): - - def process(self, element): - """ - Returns a list of tuples containing country and user name - """ - result = [ - (element['country'], element['user']) - ] - return result - -``` - -Note: the operation of applying multiple times some transforms to a given `PCollection` generates multiple brand new collections. This is called **collection branching**. It’s very well represented here: - -Source: - -Basically, now we have two sets of information — the average visit time for each country and the number of users for each country. What we're missing is a single structure containing all of the information we want. - -Also, having made a pipeline branching, we need to recompose the data. We can do this by using `CoGroupByKey`, which is nothing less than a **join** made on two or more collections that have the same keys. - -The last two transforms are ones that format the info into `csv` entries while the other writes them to a file. - -After this, the resulting `output.txt` file will contain rows like this one: - -`Italy (IT),36,2.23611111111` - -meaning that 36 people visited the website from Italy, spending, on average, 2.23 seconds on the website. - -### The input data - -The data used for this simulation has been procedurally generated: 10,000 rows, with a maximum of 200 different users, spending between 1 and 5 seconds on the website. This was needed to have a rough estimate on the resulting values we obtained. A new article about **pipeline testing** will probably follow. - -### GitHub repository - -The GitHub repository for this article is [here][4]. - -The README.md file contains everything needed to try it locally.! - --------------------------------------------------------------------------------- - -via: https://www.codementor.io/brunoripa/apache-beam-a-python-example-gapr8smod - -作者:[Bruno Ripa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.codementor.io/brunoripa -[1]:https://href.li/?https://beam.apache.org -[2]:https://href.li/?https://beam.apache.org/documentation/programming-guide/#pipeline-io -[3]:https://beam.apache.org/documentation/programming-guide/#pardo -[4]:https://github.com/brunoripa/beam-example From eb75e7094954c1527ae212a714501f2843fed8f3 Mon Sep 17 00:00:00 2001 From: darksun Date: Thu, 22 Feb 2018 14:37:25 +0800 Subject: [PATCH 15/81] remove www.howtoforge.com --- ...nd Explained for Beginners (5 Examples).md | 103 ---- ...ersonal Backups with Duplicati on Linux.md | 314 ------------- ...ux touch command tutorial for beginners.md | 167 ------- ...nd Explained For Beginners (5 Examples).md | 110 ----- ...Directory changes with Incron on Debian.md | 224 --------- ...T Asset Management Software on Debian 9.md | 374 --------------- ... 
Tutorial for Beginners (with Examples).md | 96 ---- ...tall and Use iostat on Ubuntu 16.04 LTS.md | 225 --------- ...nd Explained for Beginners (8 Examples).md | 188 -------- ...and Tutorial for Beginners (5 Examples).md | 113 ----- ...Monitoring Server and Agent on Debian 9.md | 401 ---------------- ...nd Explained For Beginners (5 Examples).md | 191 -------- ...all and Configure XWiki on Ubuntu 16.04.md | 271 ----------- ...all Gogs Go Git Service on Ubuntu 16.04.md | 441 ------------------ 14 files changed, 3218 deletions(-) delete mode 100644 sources/tech/20170927 Linux Head Command Explained for Beginners (5 Examples).md delete mode 100644 sources/tech/20171212 Personal Backups with Duplicati on Linux.md delete mode 100644 sources/tech/20171222 Linux touch command tutorial for beginners.md delete mode 100644 sources/tech/20180105 Linux paste Command Explained For Beginners (5 Examples).md delete mode 100644 sources/tech/20180110 How to trigger commands on File-Directory changes with Incron on Debian.md delete mode 100644 sources/tech/20180111 How to Install Snipe-IT Asset Management Software on Debian 9.md delete mode 100644 sources/tech/20180112 Linux yes Command Tutorial for Beginners (with Examples).md delete mode 100644 sources/tech/20180116 How to Install and Use iostat on Ubuntu 16.04 LTS.md delete mode 100644 sources/tech/20180119 Linux mv Command Explained for Beginners (8 Examples).md delete mode 100644 sources/tech/20180126 Linux kill Command Tutorial for Beginners (5 Examples).md delete mode 100644 sources/tech/20180129 Install Zabbix Monitoring Server and Agent on Debian 9.md delete mode 100644 sources/tech/20180205 Linux md5sum Command Explained For Beginners (5 Examples).md delete mode 100644 sources/tech/20180208 How to Install and Configure XWiki on Ubuntu 16.04.md delete mode 100644 sources/tech/20180209 How to Install Gogs Go Git Service on Ubuntu 16.04.md diff --git a/sources/tech/20170927 Linux Head Command Explained for Beginners (5 Examples).md b/sources/tech/20170927 Linux Head Command Explained for Beginners (5 Examples).md deleted file mode 100644 index 74d4655bba..0000000000 --- a/sources/tech/20170927 Linux Head Command Explained for Beginners (5 Examples).md +++ /dev/null @@ -1,103 +0,0 @@ -Linux Head Command Explained for Beginners (5 Examples) -====== - -Sometimes, while working on the command line in Linux, you might want to take a quick look at a few initial lines of a file. For example, if a log file is continuously being updated, the requirement could be to view, say, first 10 lines of the log file every time. While viewing the file in an editor (like [vim][1]) is always an option, there exists a command line tool - dubbed **head** \- that lets you view initial few lines of a file very easily. - -In this article, we will discuss the basics of the head command using some easy to understand examples. Please note that all steps/instructions mentioned here have been tested on Ubuntu 16.04LTS. - -### Linux head command - -As already mentioned in the beginning, the head command lets users view the first part of files. Here's its syntax: - -head [OPTION]... [FILE]... - -And following is how the command's man page describes it: -``` -Print the  first  10 lines of each FILE to standard output. With more than one FILE, precede each -with a header giving the file name. -``` - -The following Q&A-type examples should give you a better idea of how the tool works: - -### Q1. How to print the first 10 lines of a file on terminal (stdout)? 
- -This is quite easy using head - in fact, it's the tool's default behavior. - -head [file-name] - -The following screenshot shows the command in action: - -[![How to print the first 10 lines of a file][2]][3] - -### Q2. How to tweak the number of lines head prints? - -While 10 is the default number of lines the head command prints, you can change this number as per your requirement. The **-n** command line option lets you do that. - -head -n [N] [File-name] - -For example, if you want to only print first 5 lines, you can convey this to the tool in the following way: - -head -n 5 file1 - -[![How to tweak number of lines head prints][4]][5] - -### Q3. How to restrict the output to a certain number of bytes? - -Not only number of lines, you can also restrict the head command output to a specific number of bytes. This can be done using the **-c** command line option. - -head -c [N] [File-name] - -For example, if you want head to only display first 25 bytes, here's how you can execute it: - -head -c 25 file1 - -[![restrict the output to a certain number of bytes][6]][7] - -So you can see that the tool displayed only the first 25 bytes in the output. - -Please note that [N] "may have a multiplier suffix: b 512, kB 1000, K 1024, MB 1000*1000, M 1024*1024, GB 1000*1000*1000, G 1024*1024*1024, and so on for T, P, E, Z, Y." - -### Q4. How to have head print filename in output? - -If for some reason, you want the head command to also print the file name in output, you can do that using the **-v** command line option. - -head -v [file-name] - -Here's an example: - -[![How to have head print filename in output][8]][9] - -So as you can see, the filename 'file 1' was displayed in the output. - -### Q5. How to have NUL as line delimiter, instead of newline? - -By default, the head command output is delimited by newline. But there's also an option of using NUL as the delimiter. The option **-z** or **\--zero-terminated** lets you do this. - -head -z [file-name] - -### Conclusion - -As most of you'd agree, head is a simple command to understand and use, meaning there's little learning curve associated with it. The features (in terms of command line options) it offers are also limited, and we've covered almost all of them. So give these options a try, and when you're done, take a look at the command's [man page][10] to know more. 
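-
-As a side note for scripting contexts: if you ever need head-like behavior inside a Python program, the same "first N lines" effect is easy to get with itertools. The following is a rough sketch of an equivalent, not part of coreutils, and the file name is the one used in the examples above:
-
-```
-from itertools import islice
-
-# Rough Python equivalent of `head -n 5 file1` (illustrative sketch only)
-with open("file1") as f:
-    for line in islice(f, 5):   # stops reading after the first 5 lines
-        print(line, end="")
-```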
- - --------------------------------------------------------------------------------- - -via: https://www.howtoforge.com/linux-head-command/ - -作者:[Himanshu Arora][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.howtoforge.com -[1]:https://www.howtoforge.com/vim-basics -[2]:https://www.howtoforge.com/images/linux_head_command/head-basic-usage.png -[3]:https://www.howtoforge.com/images/linux_head_command/big/head-basic-usage.png -[4]:https://www.howtoforge.com/images/linux_head_command/head-n-option.png -[5]:https://www.howtoforge.com/images/linux_head_command/big/head-n-option.png -[6]:https://www.howtoforge.com/images/linux_head_command/head-c-option.png -[7]:https://www.howtoforge.com/images/linux_head_command/big/head-c-option.png -[8]:https://www.howtoforge.com/images/linux_head_command/head-v-option.png -[9]:https://www.howtoforge.com/images/linux_head_command/big/head-v-option.png -[10]:https://linux.die.net/man/1/head diff --git a/sources/tech/20171212 Personal Backups with Duplicati on Linux.md b/sources/tech/20171212 Personal Backups with Duplicati on Linux.md deleted file mode 100644 index b6fcbdbd9e..0000000000 --- a/sources/tech/20171212 Personal Backups with Duplicati on Linux.md +++ /dev/null @@ -1,314 +0,0 @@ -Personal Backups with Duplicati on Linux -====== - -This tutorial is for performing personal backups to local USB hard drives, having encryption, deduplication and compression. - -The procedure was tested using [Duplicati 2.0.2.1][1] on [Debian 9.2][2] - -### Duplicati Installation - -Download the latest version from - -The software requires several libraries to work, mostly mono libraries. The easiest way to install the software is to let it fail the installation through dpkg and then install the missing packages with apt-get: - -sudo dpkg -i duplicati_2.0.2.1-1_all.deb -sudo apt-get --fix-broken install - -Note that the installation of the package fails on the first instance, then we use apt to install the dependencies. - -Start the daemon: - -sudo systemctl start duplicati.service - -And if you wish for it to start automatically with the OS use: - -sudo systemctl enable duplicati.service - -To check that the service is running: - -netstat -ltn | grep 8200 - -And you should receive a response like this one: - -[![][3]][4] - -After these steps you should be able to run the browser and access the local web service at http://localhost:8200 - -[![][5]][6] - -### Create a Backup Job - -Go to "Add backup" to configure a new backup job: - -[![][7]][8] - -Set a name for the job and a passphrase for encryption. You will need the passphrase to restore files, so pick a strong password and make sure you don't forget it: - -[![][9]][10] - -Set the destination: the directory where you are going to store the backup files: - -[![][11]][12] - -Select the source files to backup. I will pick just the Desktop folder for this example: - -[![][13]][14] - -Specify filters and exclusions if necessary: - -[![][15]][16] - -Configure a schedule, or disable automatic backups if you prefer to run them manually: - -[![][17]][18] - -I like to use manual backups when using USB drive destinations, and scheduled if I have a server to send backups through SSH or a Cloud based destination. 
- -Specify the versions to keep, and the Upload volume size (size of each partial file): - -[![][19]][20] - -Finally you should see the job created in a summary like this: - -[![][21]][22] - -### Run the Backup - -In the last seen summary, under Home, click "run now" to start the backup job. A progress bar will be seen by the top of the screen. - -After finishing the backup, you can see in the destination folder, a set of files called something like: -``` -duplicati-20171206T143926Z.dlist.zip.aes -duplicati-bdfad38a0b1f34b5db56c1de166260cd8.dblock.zip.aes -duplicati-i00d8dff418a749aa9d67d0c54b0e4149.dindex.zip.aes -``` - -The size of the blocks will be the one specified in the Upload volume size option. The files are compressed, and encrypted using the previously set passphrase. - -Once finished, you will see in the summary the last backup taken and the size: - -[![][23]][24] - -In this case it is only 1MB because I took a test folder. - -### Restore Files - -To restore files, simply access the web administration in http://localhost:8200, go to the "Restore" menu and select the backup job name. Then select the files to restore and click "continue": - -[![][25]][26] - -Select the restore files or folders and the restoration options: - -[![][27]][28] - -The restoration will start running, showing a progress bar on the top of the user interface. - -### Fixate the backup destination - -If you use a USB drive to perform the backups, it is a good idea to specify in the /etc/fstab the UUID of the drive, so that it always mount automatically in the /mnt/backup directory (or the directory of your choosing). - -To do so, connect your drive and check for the UUID: - -sudo blkid -``` -... -/dev/sdb1: UUID="4d608d85-e138-4546-9f22-4d78bef0b6a7" TYPE="ext4" PARTUUID="983a72cb-01" -... -``` - -And copy the UUID to include an entry in the /etc/fstab file: -``` -... -UUID=4d608d85-e138-4546-9f22-4d78bef0b6a7 /mnt/backup ext4 defaults 0 0 -... -``` - -### Remote Access to the GUI - -By default, Duplicati listens on localhost only, and it's meant to be that way. However it includes the possibility to add a password and to be accessible from the network: - -[![][29]][30] - -This setting is not recommended, as Duplicati has no SSL capabilities yet. What I would recommend if you need to use the backup GUI remotely, is using an SSH tunnel. - -To accomplish this, first enable SSH server in case you don't have it yet, the easiest way is running: - -sudo tasksel - -[![][31]][32] - -Once you have the SSH server running on the Duplicati host. Go to the computer from where you want to connect to the GUI and set the tunnel - -Let's consider that: - - * Duplicati backups and its GUI are running in the remote host 192.168.0.150 (that we call the server). - * The GUI on the server is listening on port 8200. - * jorge is a valid user name in the server. - * I will access the GUI from a host on the local port 12345. - - - -Then to open an SSH tunnel I run on the client: - -ssh -f jorge@192.168.0.150 -L 12345:localhost:8200 -N - -With netstat it can be checked that the port is open for localhost: - -netstat -ltn | grep :12345 -``` -tcp 0 0 127.0.0.1:12345 0.0.0.0:* LISTEN -tcp6 0 0 ::1:12345 :::* LISTEN -``` - -And now I can access the remote GUI by accessing http://127.0.0.1:12345 from the client browser - -[![][34]][35] - -Finally if you want to close the connection to the SSH tunnel you may kill the ssh process. First identify the PID: - -ps x | grep "[s]sh -f" -``` -26348 ? 
Ss 0:00 ssh -f jorge@192.168.0.150 -L 12345:localhost:8200 -N
-```
-
-And kill it:
-
-kill -9 26348
-
-Or you can do it all in one:
-
-kill -9 $(ps x | grep "[s]sh -f" | cut -d" " -f1)
-
-### Other Backup Repository Options
-
-If you prefer to store your backups on a remote server rather than on a local hard drive, Duplicati has several options. Standard protocols such as:
-
- * FTP
- * OpenStack Object Storage / Swift
- * SFTP (SSH)
- * WebDAV
-
-
-
-And a wider list of proprietary protocols, such as:
-
- * Amazon Cloud Drive
- * Amazon S3
- * Azure
- * B2 Cloud Storage
- * Box.com
- * Dropbox
- * Google Cloud Storage
- * Google Drive
- * HubiC
- * Jottacloud
- * mega.nz
- * Microsoft One Drive
- * Microsoft One Drive for Business
- * Microsoft Sharepoint
- * OpenStack Simple Storage
- * Rackspace CloudFiles
-
-
-
-For FTP, SFTP, and WebDAV it is as simple as setting the server hostname or IP address, adding the credentials, and then following the whole previous process. As a result, I don't believe there is any value in describing them here.
-
-However, as I find a cloud-based backup useful for personal matters, I will describe the configuration for Dropbox, which uses the same procedure as Google Drive and Microsoft OneDrive.
-
-#### Dropbox
-
-Let's create a new backup job and set the destination to Dropbox. All the configurations are exactly the same except for the destination, which should be set like this:
-
-[![][36]][37]
-
-Once you have set "Dropbox" in the drop-down menu and configured the destination folder, click on the OAuth link to set up the authentication.
-
-A pop-up will emerge for you to log in to Dropbox (or Google Drive or OneDrive, depending on your choice):
-
-[![][38]][39]
-
-After logging in you will be prompted to grant the Duplicati app access to your cloud storage:
-
-[![][40]][41]
-
-After finishing the last step, the AuthID field will be automatically filled in:
-
-[![][42]][43]
-
-Click on "Test Connection". When testing the connection you will be asked to create the folder in case it does not exist:
-
-[![][44]][45]
-
-And finally it will give you a notification that the connection is successful:
-
-[![][46]][47]
-
-If you access your Dropbox account you will see the files, in the same format that we have seen before, under the defined folder:
-
-[![][48]][49]
-
-### Conclusions
-
-Duplicati is a multi-platform, feature-rich, easy-to-use backup solution for personal computers. It supports a wide variety of backup repositories, which makes it a very versatile tool that can adapt to most personal needs.
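-
-As a closing aside on the remote access section: instead of grepping the process list and using kill -9, the tunnel can also be managed from a small script so that it is terminated cleanly. The following Python sketch is our own illustration (not part of Duplicati), reuses the example user and host from above, and assumes key-based SSH authentication:
-
-```
-import subprocess
-
-# Open the same tunnel as `ssh -f jorge@192.168.0.150 -L 12345:localhost:8200 -N`,
-# but keep ssh as a child process so it can be stopped without kill -9.
-tunnel = subprocess.Popen(
-    ["ssh", "-N", "-L", "12345:localhost:8200", "jorge@192.168.0.150"]
-)
-
-try:
-    input("Tunnel up. Browse http://127.0.0.1:12345, press Enter to close it ")
-finally:
-    tunnel.terminate()  # sends SIGTERM to ssh
-    tunnel.wait()
-```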
- - --------------------------------------------------------------------------------- - -via: https://www.howtoforge.com/tutorial/personal-backups-with-duplicati-on-linux/ - -作者:[][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.howtoforge.com -[1]:https://updates.duplicati.com/beta/duplicati_2.0.2.1-1_all.deb -[2]:https://www.debian.org/releases/stable/ -[3]:https://www.howtoforge.com/images/personal_backups_with_duplicati/installation-netstat.png -[4]:https://www.howtoforge.com/images/personal_backups_with_duplicati/big/installation-netstat.png -[5]:https://www.howtoforge.com/images/personal_backups_with_duplicati/installation-web.png -[6]:https://www.howtoforge.com/images/personal_backups_with_duplicati/big/installation-web.png -[7]:https://www.howtoforge.com/images/personal_backups_with_duplicati/create-1.png -[8]:https://www.howtoforge.com/images/personal_backups_with_duplicati/big/create-1.png -[9]:https://www.howtoforge.com/images/personal_backups_with_duplicati/create-2.png -[10]:https://www.howtoforge.com/images/personal_backups_with_duplicati/big/create-2.png -[11]:https://www.howtoforge.com/images/personal_backups_with_duplicati/create-3.png -[12]:https://www.howtoforge.com/images/personal_backups_with_duplicati/big/create-3.png -[13]:https://www.howtoforge.com/images/personal_backups_with_duplicati/create-4.png -[14]:https://www.howtoforge.com/images/personal_backups_with_duplicati/big/create-4.png -[15]:https://www.howtoforge.com/images/personal_backups_with_duplicati/create-5.png -[16]:https://www.howtoforge.com/images/personal_backups_with_duplicati/big/create-5.png -[17]:https://www.howtoforge.com/images/personal_backups_with_duplicati/create-6.png -[18]:https://www.howtoforge.com/images/personal_backups_with_duplicati/big/create-6.png -[19]:https://www.howtoforge.com/images/personal_backups_with_duplicati/create-7.png -[20]:https://www.howtoforge.com/images/personal_backups_with_duplicati/big/create-7.png -[21]:https://www.howtoforge.com/images/personal_backups_with_duplicati/create-8.png -[22]:https://www.howtoforge.com/images/personal_backups_with_duplicati/big/create-8.png -[23]:https://www.howtoforge.com/images/personal_backups_with_duplicati/run-1.png -[24]:https://www.howtoforge.com/images/personal_backups_with_duplicati/big/run-1.png -[25]:https://www.howtoforge.com/images/personal_backups_with_duplicati/restore-1.png -[26]:https://www.howtoforge.com/images/personal_backups_with_duplicati/big/restore-1.png -[27]:https://www.howtoforge.com/images/personal_backups_with_duplicati/restore-2.png -[28]:https://www.howtoforge.com/images/personal_backups_with_duplicati/big/restore-2.png -[29]:https://www.howtoforge.com/images/personal_backups_with_duplicati/remote-1.png -[30]:https://www.howtoforge.com/images/personal_backups_with_duplicati/big/remote-1.png -[31]:https://www.howtoforge.com/images/personal_backups_with_duplicati/remote-sshd.png -[32]:https://www.howtoforge.com/images/personal_backups_with_duplicati/big/remote-sshd.png -[33]:https://www.howtoforge.com/cdn-cgi/l/email-protection -[34]:https://www.howtoforge.com/images/personal_backups_with_duplicati/remote-sshtun.png -[35]:https://www.howtoforge.com/images/personal_backups_with_duplicati/big/remote-sshtun.png -[36]:https://www.howtoforge.com/images/personal_backups_with_duplicati/db-1.png 
-[37]:https://www.howtoforge.com/images/personal_backups_with_duplicati/big/db-1.png -[38]:https://www.howtoforge.com/images/personal_backups_with_duplicati/db-2.png -[39]:https://www.howtoforge.com/images/personal_backups_with_duplicati/big/db-2.png -[40]:https://www.howtoforge.com/images/personal_backups_with_duplicati/db-4.png -[41]:https://www.howtoforge.com/images/personal_backups_with_duplicati/big/db-4.png -[42]:https://www.howtoforge.com/images/personal_backups_with_duplicati/db-5.png -[43]:https://www.howtoforge.com/images/personal_backups_with_duplicati/big/db-5.png -[44]:https://www.howtoforge.com/images/personal_backups_with_duplicati/db-6.png -[45]:https://www.howtoforge.com/images/personal_backups_with_duplicati/big/db-6.png -[46]:https://www.howtoforge.com/images/personal_backups_with_duplicati/db-7.png -[47]:https://www.howtoforge.com/images/personal_backups_with_duplicati/big/db-7.png -[48]:https://www.howtoforge.com/images/personal_backups_with_duplicati/db-8.png -[49]:https://www.howtoforge.com/images/personal_backups_with_duplicati/big/db-8.png diff --git a/sources/tech/20171222 Linux touch command tutorial for beginners.md b/sources/tech/20171222 Linux touch command tutorial for beginners.md deleted file mode 100644 index 30a6ffe5b5..0000000000 --- a/sources/tech/20171222 Linux touch command tutorial for beginners.md +++ /dev/null @@ -1,167 +0,0 @@ -Linux touch command tutorial for beginners (6 examples) -============================================================ - -### On this page - -1. [Linux Touch command][1] - -2. [1\. How to change access/modification time using touch command][2] - -3. [2\. How to change only access or modification time][3] - -4. [3\. How to make touch use access/modification times of existing file][4] - -5. [4\. How to create a new file using touch][5] - -6. [5\. How to force touch to not create any new file][6] - -7. [6\. How touch works in case of symbolic links][7] - -8. [Conclusion][8] - -Sometimes, while working on the command line in Linux, you might want to create a new file. Or, there may be times when the requirement is to change the timestamps of a file. Well, there exists a utility that can you can use in both these scenarios. The tool in question is **touch**, and in this tutorial, we will understand its basic functionality through easy to understand examples. - -Please note that all examples that we'll be using here have been tested on an Ubuntu 16.04 machine. - -### Linux Touch command - -The touch command is primarily used to change file timestamps, but if the file (whose name is passed as an argument) doesn't exist, then the tool creates it. - -Following is the command's generic syntax: - -``` -touch [OPTION]... FILE... -``` - -And here's how the man page explains this command: - -``` -DESCRIPTION -       Update  the  access  and modification times of each FILE to the current -       time. A FILE argument that does not exist is created empty, unless -c  or  -h -       is supplied. A  FILE  argument  string of - is handled specially and causes touch to -       change the times of the file associated with standard output. -``` - -The following Q&A type examples will give you a better idea of how the tool works. - -### 1\. How to change access/modification time using touch command - -This is simple, and pretty straight forward. Let's take an existing file as an example. The following screenshot shows the access and modification times for a file called 'apl.c.' 
- - [![change access/modification time using touch command](https://www.howtoforge.com/images/linux_hostname_command/touch-exist-file1.png)][9] - -Here's how you can use the touch command to change the file's access and modification times: - -``` -touch apl.c -``` - -The following screenshot confirms the change in these timestamps. - - [![Change file timestamp with touch command](https://www.howtoforge.com/images/linux_hostname_command/touch-exist-file2.png)][10] - -### 2\. How to change only access or modification time - -By default, the touch command changes both access and modification times of the input file. However, if you want, you can limit this behavior to any one of these timestamps. This means that you can either have the access time changed or the modification timestamp. - -In case you want to only change the access time, use the -a command line option. - -``` -touch -a [filename] -``` - -Similarly, if the requirement is to only change the modification time, use the -m command line option. - -``` -touch -m [filename] -``` - -### 3\. How to make touch use access/modification times of existing file - -If you want, you can also force the touch command to copy access and modification timestamps from a reference file. For example, suppose we want to change the timestamps for the file 'apl.c'. Here are the current timestamps for this file: - - [![make touch use access/modification times of existing file](https://www.howtoforge.com/images/linux_hostname_command/touch-exist-file21.png)][11] - -And this is the file which you want touch to use as its reference: - - [![Check file status with stat command](https://www.howtoforge.com/images/linux_hostname_command/touch-ref-file1.png)][12] - -Now, for touch to use the timestamps of 'apl' for 'apl.c', you'll need to use the -r command line option in the following way: - -``` -touch apl.c -r apl -``` - - [![touch to use the timestamps of other files](https://www.howtoforge.com/images/linux_hostname_command/touch-ref-file2.png)][13] - -The above screenshot shows that modification and access timestamps for 'apl.c' are now same as those for 'apl.' - -### 4\. How to create a new file using touch - -Creating a new file is also very easy. In fact, it happens automatically if the file name you pass as argument to the touch command doesn't exist. For example, to create a file named 'newfile', all you have to do is to run the following touch command: - -``` -touch newfile -``` - -### 5\. How to force touch to not create any new file - -Just in case there's a strict requirement that the touch command shouldn't create any new files, then you can use the -c option. - -``` -touch -c [filename] -``` - -The following screenshot shows that since 'newfile12' didn't exist, and we used the -c command line option, the touch command didn't create the file. - - [![force touch to not create a new file](https://www.howtoforge.com/images/linux_hostname_command/touch-c-option.png)][14] - -### 6\. How touch works in case of symbolic links - -By default, if you pass a symbolic link file name to the touch command, the change in access and modification timestamps will be for the original file (one which the symbolic link refers to). However, the tool also offers an option (-h) that lets you override this behavior. 
- -Here's how the man page explains the -h option: - -``` --h, --no-dereference -              affect each symbolic link instead of any referenced file (useful -              only on systems that can change the timestamps of a symlink) -``` - -So when you want to change the modification and access timestamps for the symbolic link (and not the original file), use the touch command in the following way: - -``` -touch -h [sym link file name] -``` - -### Conclusion - -As you'd agree, touch isn't a difficult command to understand and use. The examples/options we discussed in this tutorial should be enough to get you started with the tool. While newbies will mostly find themselves using the utility for creating new files, more experienced users play with it for multiple other purposes as well. For more information on the touch command, head to [its man page][15]. - --------------------------------------------------------------------------------- - -via: https://www.howtoforge.com/tutorial/linux-touch-command/ - -作者:[ Himanshu Arora][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.howtoforge.com/tutorial/linux-touch-command/ -[1]:https://www.howtoforge.com/tutorial/linux-touch-command/#linux-touch-command -[2]:https://www.howtoforge.com/tutorial/linux-touch-command/#-how-to-change-accessmodification-time-using-touch-command -[3]:https://www.howtoforge.com/tutorial/linux-touch-command/#-how-to-change-only-access-or-modification-time -[4]:https://www.howtoforge.com/tutorial/linux-touch-command/#-how-to-make-touch-use-accessmodification-times-of-existing-file -[5]:https://www.howtoforge.com/tutorial/linux-touch-command/#-how-to-create-a-new-file-using-touch -[6]:https://www.howtoforge.com/tutorial/linux-touch-command/#-how-to-force-touch-to-not-create-any-new-file -[7]:https://www.howtoforge.com/tutorial/linux-touch-command/#-how-touch-works-in-case-of-symbolic-links -[8]:https://www.howtoforge.com/tutorial/linux-touch-command/#conclusion -[9]:https://www.howtoforge.com/images/linux_hostname_command/big/touch-exist-file1.png -[10]:https://www.howtoforge.com/images/linux_hostname_command/big/touch-exist-file2.png -[11]:https://www.howtoforge.com/images/linux_hostname_command/big/touch-exist-file21.png -[12]:https://www.howtoforge.com/images/linux_hostname_command/big/touch-ref-file1.png -[13]:https://www.howtoforge.com/images/linux_hostname_command/big/touch-ref-file2.png -[14]:https://www.howtoforge.com/images/linux_hostname_command/big/touch-c-option.png -[15]:https://linux.die.net/man/1/touch diff --git a/sources/tech/20180105 Linux paste Command Explained For Beginners (5 Examples).md b/sources/tech/20180105 Linux paste Command Explained For Beginners (5 Examples).md deleted file mode 100644 index b426279815..0000000000 --- a/sources/tech/20180105 Linux paste Command Explained For Beginners (5 Examples).md +++ /dev/null @@ -1,110 +0,0 @@ -Linux paste Command Explained For Beginners (5 Examples) -====== - -Sometimes, while working on the command line in Linux, there may arise a situation wherein you have to merge lines of multiple files to create more meaningful/useful data. Well, you'll be glad to know there exists a command line utility **paste** that does this for you. In this tutorial, we will discuss the basics of this command as well as the main features it offers using easy to understand examples. 
-
-But before we do that, it's worth mentioning that all the examples in this article have been tested on Ubuntu 16.04 LTS.
-
-### Linux paste command
-
-As already mentioned above, the paste command merges lines of files. Here's the tool's syntax:
-
-```
-paste [OPTION]... [FILE]...
-```
-
-And here's how the man page of paste explains it:
-```
-Write lines consisting of the sequentially corresponding lines from each FILE, separated by TABs,
-to standard output. With no FILE, or when FILE is -, read standard input.
-```
-
-The following Q&A-styled examples should give you a better idea of how paste works.
-
-### Q1. How to join lines of multiple files using paste command?
-
-Suppose we have three files - file1.txt, file2.txt, and file3.txt - with the following contents:
-
-[![How to join lines of multiple files using paste command][1]][2]
-
-The task is to merge the lines of these files so that each row of the final output contains the index, country, and continent. You can do that using paste in the following way:
-
-paste file1.txt file2.txt file3.txt
-
-[![result of merging lines][3]][4]
-
-### Q2. How to apply delimiters when using paste?
-
-Sometimes, there can be a requirement to add a delimiting character between entries of each resulting row. This can be done using the **-d** command line option, which requires you to provide the delimiting character you want to use.
-
-For example, to apply a colon (:) as a delimiting character, use the paste command in the following way:
-
-```
-paste -d : file1.txt file2.txt file3.txt
-```
-
-Here's the output this command produced on our system:
-
-[![How to apply delimiters when using paste][5]][6]
-
-### Q3. How to change the way in which lines are merged?
-
-By default, the paste command merges lines in a way that entries in the first column belong to the first file, those in the second column are for the second file, and so on. However, if you want, you can change this so that the merge operation happens row-wise.
-
-You can do this using the **-s** command line option.
-
-```
-paste -s file1.txt file2.txt file3.txt
-```
-
-Following is the output:
-
-[![How to change the way in which lines are merged][7]][8]
-
-### Q4. How to use multiple delimiters?
-
-Yes, you can use multiple delimiters as well. For example, if you want to use both : and |, you can do that in the following way:
-
-```
-paste -d ':|' file1.txt file2.txt file3.txt
-```
-
-Following is the output:
-
-[![How to use multiple delimiters][9]][10]
-
-### Q5. How to make sure merged lines are NUL terminated?
-
-By default, lines merged through paste end in a newline. However, if you want, you can make them NUL terminated, something which you can do using the **-z** option.
-
-```
-paste -z file1.txt file2.txt file3.txt
-```
-
-### Conclusion
-
-As most of you'd agree, the paste command isn't difficult to understand and use. It may offer a limited set of command line options, but the tool does what it claims. You may not require it on a daily basis, but paste can be a real time-saver in some scenarios. Just in case you need it, [here's the tool's man page][11].
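-
-As a side note for scripting contexts: the column-wise merge that paste performs is easy to reproduce in a Python script with zip(). The following is a rough sketch of an equivalent of `paste -d : file1.txt file2.txt file3.txt`, not part of coreutils, using the example file names from above:
-
-```
-# Rough Python equivalent of `paste -d : file1.txt file2.txt file3.txt`
-files = [open(name) for name in ("file1.txt", "file2.txt", "file3.txt")]
-
-# zip() takes one line from each file per output row; note that, unlike
-# paste, it stops at the shortest file instead of padding with blanks.
-for lines in zip(*files):
-    print(":".join(line.rstrip("\n") for line in lines))
-
-for f in files:
-    f.close()
-```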
-
-
--------------------------------------------------------------------------------
-
-via: https://www.howtoforge.com/linux-paste-command/
-
-作者:[Himanshu Arora][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.howtoforge.com
-[1]:https://www.howtoforge.com/images/command-tutorial/paste-3-files.png
-[2]:https://www.howtoforge.com/images/command-tutorial/big/paste-3-files.png
-[3]:https://www.howtoforge.com/images/command-tutorial/paste-basic-usage.png
-[4]:https://www.howtoforge.com/images/command-tutorial/big/paste-basic-usage.png
-[5]:https://www.howtoforge.com/images/command-tutorial/paste-d-option.png
-[6]:https://www.howtoforge.com/images/command-tutorial/big/paste-d-option.png
-[7]:https://www.howtoforge.com/images/command-tutorial/paste-s-option.png
-[8]:https://www.howtoforge.com/images/command-tutorial/big/paste-s-option.png
-[9]:https://www.howtoforge.com/images/command-tutorial/paste-d-mult1.png
-[10]:https://www.howtoforge.com/images/command-tutorial/big/paste-d-mult1.png
-[11]:https://linux.die.net/man/1/paste
diff --git a/sources/tech/20180110 How to trigger commands on File-Directory changes with Incron on Debian.md b/sources/tech/20180110 How to trigger commands on File-Directory changes with Incron on Debian.md
deleted file mode 100644
index 2c66a40d1e..0000000000
--- a/sources/tech/20180110 How to trigger commands on File-Directory changes with Incron on Debian.md
+++ /dev/null
@@ -1,224 +0,0 @@
-How to trigger commands on File/Directory changes with Incron on Debian
-======
-
-This guide shows how you can install and use **incron** on a Debian 9 (Stretch) system. Incron is similar to cron, but instead of running commands based on time, it can trigger commands when file or directory events occur (e.g. a file modification, changes of permissions, etc.).
-
-### 1 Prerequisites
-
- * System administrator permissions (root login). All commands in this tutorial should be run as the root user on the shell.
- * I will use the editor "nano" to edit files. You may replace nano with an editor of your choice or install nano with "apt-get install nano" if it is not installed on your server.
-
-
-
-### 2 Installing Incron
-
-Incron is available in the Debian repository, so we install incron with the following apt command:
-
-```
-apt-get install incron
-```
-
-The installation process should be similar to the one in this screenshot.
-
-[![Installing Incron on Debian 9][1]][2]
-
-### 3 Using Incron
-
-Incron usage is very much like cron usage. You have the incrontab command that lets you list (-l), edit (-e), and remove (-r) incrontab entries.
-
-To learn more about it, see:
-
-```
-man incrontab
-```
-
-There you also find the following section:
-
-```
-If /etc/incron.allow exists only users listed here may use incron. Otherwise if /etc/incron.deny exists only users NOT listed here may use incron. If none of these files exists everyone is allowed to use incron. (Important note: This behavior is insecure and will be probably changed to be compatible with the style used by ISC Cron.) Location of these files can be changed in the configuration.
-```
-
-This means if we want to use incrontab as root, we must either delete /etc/incron.allow (which is unsafe because then every system user can use incrontab)...
-
-```
-rm -f /etc/incron.allow
-```
-
-... or add root to that file (recommended).
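-If you prefer to do this non-interactively (for example, from a provisioning script), appending the user name is a possible one-liner shortcut; the nano route shown next achieves exactly the same result:
-
-```
-# Append root to the incron allow list (we are already working as root).
-echo 'root' >> /etc/incron.allow
-```
-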
Open the /etc/incron.allow file with nano:
-
-```
-nano /etc/incron.allow
-```
-
-Add the following line, then save the file:
-```
-root
-```
-
-Before you do this, you will get error messages like this one when trying to use incrontab:
-
-```
-server1:~# incrontab -l
-user 'root' is not allowed to use incron
-```
-
-Afterwards it works:
-
-```
-server1:~# incrontab -l
-no table for root
-```
-
-To create incron jobs, we can use the command:
-
-```
-incrontab -e
-```
-
-But before we do this, let's take a look at the incron man page:
-
-```
-man 5 incrontab
-```
-
-The man page explains the format of the incron tables. Basically, the format is as follows...
-
-```
-<path> <mask> <command>
-```
-
-...where <path> can be a directory (meaning the directory and/or the files directly in that directory (not files in subdirectories of that directory!) are watched) or a file.
-
-<mask> can be one of the following:
-
-IN_ACCESS File was accessed (read) (*)
-IN_ATTRIB Metadata changed (permissions, timestamps, extended attributes, etc.) (*)
-IN_CLOSE_WRITE File opened for writing was closed (*)
-IN_CLOSE_NOWRITE File not opened for writing was closed (*)
-IN_CREATE File/directory created in watched directory (*)
-IN_DELETE File/directory deleted from watched directory (*)
-IN_DELETE_SELF Watched file/directory was itself deleted
-IN_MODIFY File was modified (*)
-IN_MOVE_SELF Watched file/directory was itself moved
-IN_MOVED_FROM File moved out of watched directory (*)
-IN_MOVED_TO File moved into watched directory (*)
-IN_OPEN File was opened (*)
-
-When monitoring a directory, the events marked with an asterisk (*) above can occur for files in the directory, in which case the name field in the returned event data identifies the name of the file within the directory.
-
-The IN_ALL_EVENTS symbol is defined as a bit mask of all of the above events. Two additional convenience symbols are IN_MOVE, which is a combination of IN_MOVED_FROM and IN_MOVED_TO, and IN_CLOSE which combines IN_CLOSE_WRITE and IN_CLOSE_NOWRITE.
-
-The following further symbols can be specified in the mask:
-
-IN_DONT_FOLLOW Don't dereference pathname if it is a symbolic link
-IN_ONESHOT Monitor pathname for only one event
-IN_ONLYDIR Only watch pathname if it is a directory
-
-Additionally, there is a symbol which doesn't appear in the inotify symbol set. It is IN_NO_LOOP. This symbol disables monitoring events until the current one is completely handled (until its child process exits).
-
-<command> is the command that should be run when the event occurs. The following wildcards may be used inside the command specification:
-
-```
-$$ dollar sign
-$@ watched filesystem path (see above)
-$# event-related file name
-$% event flags (textually)
-$& event flags (numerically)
-```
-
-If you watch a directory, then $@ holds the directory path and $# the file that triggered the event. If you watch a file, then $@ holds the complete path to the file and $# is empty.
-
-If you need the wildcards but are not sure what they translate to, you can create an incron job like this. Open the incrontab:
-
-```
-incrontab -e
-```
-
-and add the following line:
-
-```
-/tmp/ IN_MODIFY echo "$$ $@ $# $% $&"
-```
-
-Then you create or modify a file in the /tmp directory and take a look at /var/log/syslog - this log shows when an incron job was triggered, if it succeeded or if there were errors, and what the actual command was that it executed (i.e., the wildcards are replaced with their real values).
-
-```
-tail /var/log/syslog
-```
-
-```
-...
-
-Jan 10 13:52:35 server1 incrond[1012]: (root) CMD (echo "$ /tmp .hello.txt.swp IN_MODIFY 2")
-Jan 10 13:52:36 server1 incrond[1012]: (root) CMD (echo "$ /tmp .hello.txt.swp IN_MODIFY 2")
-Jan 10 13:52:39 server1 incrond[1012]: (root) CMD (echo "$ /tmp hello.txt IN_MODIFY 2")
-Jan 10 13:52:39 server1 incrond[1012]: (root) CMD (echo "$ /tmp .hello.txt.swp IN_MODIFY 2")
-```
-
-In this example I've edited the file /tmp/hello.txt; as you can see in the log above, $@ translates to /tmp, $# to _hello.txt_, $% to IN_MODIFY, and $& to 2. I used an editor that created a temporary .hello.txt.swp file, which results in the additional lines in syslog.
-
-Now enough theory. Let's create our first incron jobs. I'd like to monitor the file /etc/apache2/apache2.conf and the directory /etc/apache2/sites-available/, and whenever there are changes, I want incron to restart Apache. This is how we do it:
-
-```
-incrontab -e
-```
-```
-/etc/apache2/apache2.conf IN_MODIFY /usr/sbin/service apache2 restart
-/etc/apache2/sites-available/ IN_MODIFY /usr/sbin/service apache2 restart
-```
-
-That's it. For test purposes, you can modify your Apache configuration and take a look at /var/log/syslog, and you should see that incron restarts Apache.
-
-**NOTE** : To avoid loops, never let an incron job perform actions in the directory that the job itself monitors. **Example:** When you monitor the /tmp directory for changes and each change triggers a script that writes a log file in /tmp, this will cause a loop and might bring your system to high load or even crash it.
-
-To list all defined incron jobs, you can run:
-
-```
-incrontab -l
-```
-
-```
-server1:~# incrontab -l
-/etc/apache2/apache2.conf IN_MODIFY /usr/sbin/service apache2 restart
-/etc/apache2/sites-available/ IN_MODIFY /usr/sbin/service apache2 restart
-```
-
-To delete all incron jobs of the current user, run:
-
-```
-incrontab -r
-```
-
-```
-server1:~# incrontab -r
-removing table for user 'root'
-table for user 'root' successfully removed
-```
-
-### 4 Links
-
-Debian: http://www.debian.org
-Incron Software: http://inotify.aiken.cz/?section=incron&page=about&lang=en
-
--------------------------------------------------------------------------------
-
-via: https://www.howtoforge.com/tutorial/trigger-commands-on-file-or-directory-changes-with-incron-on-debian-9/
-
-作者:[Till Brehm][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.howtoforge.com
-[1]:https://www.howtoforge.com/images/trigger-commands-on-file-or-directory-changes-with-incron-on-debian-8/incron-debian-9.png
-[2]:https://www.howtoforge.com/images/trigger-commands-on-file-or-directory-changes-with-incron-on-debian-8/big/incron-debian-9.png
diff --git a/sources/tech/20180111 How to Install Snipe-IT Asset Management Software on Debian 9.md b/sources/tech/20180111 How to Install Snipe-IT Asset Management Software on Debian 9.md
deleted file mode 100644
index 80412f03f3..0000000000
--- a/sources/tech/20180111 How to Install Snipe-IT Asset Management Software on Debian 9.md
+++ /dev/null
@@ -1,374 +0,0 @@
-How to Install Snipe-IT Asset Management Software on Debian 9
-======
-
-Snipe-IT is a free and open source IT asset management web application that can be used for tracking licenses, accessories, consumables, and components. It is written in PHP and uses MySQL to store its data.
It is a cross-platform application that works on all major operating systems such as Linux, Windows and Mac OS X. It integrates easily with Active Directory and LDAP, and supports two-factor authentication with Google Authenticator.
-
-In this tutorial, we will learn how to install Snipe-IT on a Debian 9 server.
-
-### Requirements
-
- * A server running Debian 9.
- * A non-root user with sudo privileges.
-
-
-
-### Getting Started
-
-Before installing any packages, it is recommended to update the system packages to the latest versions. You can do this by running the following command:
-
-```
-sudo apt-get update -y
-sudo apt-get upgrade -y
-```
-
-Next, restart the system to apply all the updates. Then install the other required packages with the following command:
-
-```
-sudo apt-get install git curl unzip wget -y
-```
-
-Once all the packages are installed, you can proceed to the next step.
-
-### Install LAMP Server
-
-Snipe-IT runs on the Apache web server, so you will need to install a LAMP stack (Apache, MariaDB, PHP) on your system.
-
-First, install Apache, PHP and the other required PHP libraries with the following command:
-
-```
-sudo apt-get install apache2 libapache2-mod-php php php-pdo php-mbstring php-tokenizer php-curl php-mysql php-ldap php-zip php-fileinfo php-gd php-dom php-mcrypt php-bcmath -y
-```
-
-Once all the packages are installed, start the Apache service and enable it to start on boot with the following command:
-
-```
-sudo systemctl start apache2
-sudo systemctl enable apache2
-```
-
-### Install and Configure MariaDB
-
-Snipe-IT uses MariaDB to store its data. The latest version of MariaDB is not available in the default Debian 9 repository, so you will need to add the MariaDB repository to your system.
-
-First, add the APT key with the following command:
-
-```
-sudo apt-get install software-properties-common -y
-sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xcbcb082a1bb943db
-```
-
-Next, add the MariaDB repository using the following command:
-
-```
-sudo add-apt-repository 'deb [arch=amd64,i386,ppc64el] http://nyc2.mirrors.digitalocean.com/mariadb/repo/10.1/debian stretch main'
-```
-
-Next, update the repository with the following command:
-
-```
-sudo apt-get update -y
-```
-
-Once the repository is updated, you can install MariaDB with the following command:
-
-```
-sudo apt-get install mariadb-server mariadb-client -y
-```
-
-Next, start the MariaDB service and enable it to start on boot time with the following command:
-
-```
-sudo systemctl start mysql
-sudo systemctl enable mysql
-```
-
-You can check the status of the MariaDB server with the following command:
-
-```
-sudo systemctl status mysql
-```
-
-If everything is fine, you should see the following output:
-```
-● mariadb.service - MariaDB database server
- Loaded: loaded (/lib/systemd/system/mariadb.service; enabled; vendor preset: enabled)
- Active: active (running) since Mon 2017-12-25 08:41:25 EST; 29min ago
- Process: 618 ExecStartPost=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS)
- Process: 615 ExecStartPost=/etc/mysql/debian-start (code=exited, status=0/SUCCESS)
- Process: 436 ExecStartPre=/bin/sh -c [ ! -e /usr/bin/galera_recovery ] && VAR= || VAR=`/usr/bin/galera_recovery`; [ $?
-eq 0 ] && systemc - Process: 429 ExecStartPre=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS) - Process: 418 ExecStartPre=/usr/bin/install -m 755 -o mysql -g root -d /var/run/mysqld (code=exited, status=0/SUCCESS) - Main PID: 574 (mysqld) - Status: "Taking your SQL requests now..." - Tasks: 27 (limit: 4915) - CGroup: /system.slice/mariadb.service - ??574 /usr/sbin/mysqld - -Dec 25 08:41:07 debian systemd[1]: Starting MariaDB database server... -Dec 25 08:41:14 debian mysqld[574]: 2017-12-25 8:41:14 140488893776448 [Note] /usr/sbin/mysqld (mysqld 10.1.26-MariaDB-0+deb9u1) starting as p -Dec 25 08:41:25 debian systemd[1]: Started MariaDB database server. - -``` - -Next, secure your MariaDB by running the following script: - -``` -sudo mysql_secure_installation -``` - -Answer all the questions as shown below: -``` -Set root password? [Y/n] n -Remove anonymous users? [Y/n] y -Disallow root login remotely? [Y/n] y -Remove test database and access to it? [Y/n] y -Reload privilege tables now? [Y/n] y - -``` - -Once MariaDB is secured, log in to MariaDB shell with the following command: - -``` -mysql -u root -p -``` - -Enter your root password when prompt, then create a database for Snipe-IT with the following command: - -``` -MariaDB [(none)]> create database snipeitdb character set utf8; -``` - -Next, create a user for Snipe-IT and grant all privileges to the Snipe-IT with the following command: - -``` -MariaDB [(none)]> GRANT ALL PRIVILEGES ON snipeitdb.* TO 'snipeit'@'localhost' IDENTIFIED BY 'password'; -``` - -Next, flush the privileges with the following command: - -``` -MariaDB [(none)]> flush privileges; -``` - -Finally, exit from the MariaDB console using the following command: - -``` -MariaDB [(none)]> quit -``` - -### Install Snipe-IT - -You can download the latest version of the Snipe-IT from Git repository with the following command: - -``` -git clone https://github.com/snipe/snipe-it snipe-it -``` - -Next, move the downloaded directory to the apache root directory with the following command: - -``` -sudo mv snipe-it /var/www/ -``` - -Next, you will need to install Composer to your system. You can install it with the following command: - -``` -curl -sS https://getcomposer.org/installer | php -sudo mv composer.phar /usr/local/bin/composer -``` - -Next, change the directory to snipe-it and Install PHP dependencies using Composer with the following command: - -``` -cd /var/www/snipe-it -sudo composer install --no-dev --prefer-source -``` -Next, generate the "APP_Key" with the following command: - -``` -sudo php artisan key:generate -``` - -You should see the following output: -``` -************************************** -* Application In Production! * -************************************** - - Do you really wish to run this command? (yes/no) [no]: - > yes - -Application key [base64:uWh7O0/TOV10asWpzHc0DH1dOxJHprnZw2kSOnbBXww=] set successfully. - -``` - -Next, you will need to populate MySQL with Snipe-IT's default database schema. You can do this by running the following command: - -``` -sudo php artisan migrate -``` - -Type yes, when prompted to confirm that you want to perform the migration: -``` -************************************** -* Application In Production! * -************************************** - - Do you really wish to run this command? (yes/no) [no]: - > yes - -Migration table created successfully. 
-
-
-```
-
-Next, copy the sample .env file and make some changes in it:
-
-```
-sudo cp .env.example .env
-sudo nano .env
-```
-
-Change the following lines:
-```
-APP_URL=http://example.com
-APP_TIMEZONE=US/Eastern
-APP_LOCALE=en
-
-# --------------------------------------------
-# REQUIRED: DATABASE SETTINGS
-# --------------------------------------------
-DB_CONNECTION=mysql
-DB_HOST=localhost
-DB_DATABASE=snipeitdb
-DB_USERNAME=snipeit
-DB_PASSWORD=password
-DB_PREFIX=null
-DB_DUMP_PATH='/usr/bin'
-
-```
-
-Save and close the file when you are finished.
-
-Next, provide the appropriate ownership and file permissions with the following command:
-
-```
-sudo chown -R www-data:www-data storage public/uploads
-sudo chmod -R 755 storage public/uploads
-```
-
-### Configure Apache For Snipe-IT
-
-Next, you will need to create an Apache virtual host directive for Snipe-IT. You can do this by creating a `snipeit.conf` file inside the `/etc/apache2/sites-available` directory:
-
-```
-sudo nano /etc/apache2/sites-available/snipeit.conf
-```
-
-Add the following lines (the VirtualHost and Directory tags were missing from the original and have been restored here to make the vhost valid):
-```
-<VirtualHost *:80>
-    ServerAdmin webmaster@example.com
-    <Directory /var/www/snipe-it/public>
-        Require all granted
-        AllowOverride All
-    </Directory>
-    DocumentRoot /var/www/snipe-it/public
-    ServerName example.com
-    ErrorLog /var/log/apache2/snipeIT.error.log
-    CustomLog /var/log/apache2/access.log combined
-</VirtualHost>
-```
-
-Save and close the file when you are finished. Then, enable the virtual host with the following command:
-
-```
-sudo a2ensite snipeit.conf
-```
-
-Next, enable the PHP mcrypt and mbstring modules and the Apache rewrite module with the following commands:
-
-```
-sudo phpenmod mcrypt
-sudo phpenmod mbstring
-sudo a2enmod rewrite
-```
-
-Finally, restart the Apache web server to apply all the changes:
-
-```
-sudo systemctl restart apache2
-```
-
-### Configure Firewall
-
-By default, Snipe-IT runs on port 80, so you will need to allow port 80 through the firewall. The UFW firewall is not installed in Debian 9 by default, so you will need to install it first. You can install it by running the following command:
-
-```
-sudo apt-get install ufw -y
-```
-
-Once UFW is installed, enable it to start on boot time with the following command:
-
-```
-sudo ufw enable
-```
-
-Next, allow port 80 using the following command:
-
-```
-sudo ufw allow 80
-```
-
-Next, reload the UFW firewall rules with the following command:
-
-```
-sudo ufw reload
-```
-
-Once the UFW firewall is configured, you can proceed to access Snipe-IT.
-
-### Access Snipe-IT
-
-Everything is now installed and configured, so it's time to access the Snipe-IT web interface.
-
-Open your web browser and navigate to your Snipe-IT URL (the ServerName you configured above); you will be redirected to the following page:
-
-[![Snipe-IT Checks the system][2]][3]
-
-The above page does a system check to make sure your configuration looks correct. Next, click on the **Create Database Table** button and you should see the following page:
-
-[![Create database table][4]][5]
-
-Here, click on the **Create User** button and you should see the following page:
-
-[![Create user][6]][7]
-
-Here, provide your site name, domain name, admin username and password, then click on the **Save User** button. You should see the Snipe-IT default dashboard as below:
-
-[![Snipe-IT Dashboard][8]][9]
-
-### Conclusion
-
-In the above tutorial, we learned how to install Snipe-IT on a Debian 9 server and how to configure it through the web interface. I hope you now have enough knowledge to deploy Snipe-IT in your production environment. For more information, you can refer to the Snipe-IT [Documentation Page][10].
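-
-One optional troubleshooting sketch before you go: if the installer page does not come up, checking the Apache configuration syntax and the local HTTP response usually narrows the problem down quickly (example.com stands in for the ServerName you configured above):
-
-```
-# Validate the Apache configuration before digging deeper.
-sudo apache2ctl configtest
-
-# Request only the response headers; expect HTTP 200 or a redirect.
-curl -I http://example.com/
-```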
- - --------------------------------------------------------------------------------- - -via: https://www.howtoforge.com/tutorial/how-to-install-snipe-it-on-debian-9/ - -作者:[Hitesh Jethva][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.howtoforge.com -[1]:/cdn-cgi/l/email-protection -[2]:https://www.howtoforge.com/images/how_to_install_snipe_it_on_debian_9/Screenshot-of-snipeit-page1.png -[3]:https://www.howtoforge.com/images/how_to_install_snipe_it_on_debian_9/big/Screenshot-of-snipeit-page1.png -[4]:https://www.howtoforge.com/images/how_to_install_snipe_it_on_debian_9/Screenshot-of-snipeit-page2.png -[5]:https://www.howtoforge.com/images/how_to_install_snipe_it_on_debian_9/big/Screenshot-of-snipeit-page2.png -[6]:https://www.howtoforge.com/images/how_to_install_snipe_it_on_debian_9/Screenshot-of-snipeit-page3.png -[7]:https://www.howtoforge.com/images/how_to_install_snipe_it_on_debian_9/big/Screenshot-of-snipeit-page3.png -[8]:https://www.howtoforge.com/images/how_to_install_snipe_it_on_debian_9/Screenshot-of-snipeit-page4.png -[9]:https://www.howtoforge.com/images/how_to_install_snipe_it_on_debian_9/big/Screenshot-of-snipeit-page4.png -[10]:https://snipe-it.readme.io/docs diff --git a/sources/tech/20180112 Linux yes Command Tutorial for Beginners (with Examples).md b/sources/tech/20180112 Linux yes Command Tutorial for Beginners (with Examples).md deleted file mode 100644 index a4b4ff385c..0000000000 --- a/sources/tech/20180112 Linux yes Command Tutorial for Beginners (with Examples).md +++ /dev/null @@ -1,96 +0,0 @@ -Linux yes Command Tutorial for Beginners (with Examples) -====== - -Most of the Linux commands you encounter do not depend on other operations for users to unlock their full potential, but there exists a small subset of command line tool which you can say are useless when used independently, but become a must-have or must-know when used with other command line operations. One such tool is **yes** , and in this tutorial, we will discuss this command with some easy to understand examples. - -But before we do that, it's worth mentioning that all examples provided in this tutorial have been tested on Ubuntu 16.04 LTS. - -### Linux yes command - -The yes command in Linux outputs a string repeatedly until killed. Following is the syntax of the command: - -``` -yes [STRING]... -yes OPTION -``` - -And here's what the man page says about this tool: -``` -Repeatedly output a line with all specified STRING(s), or 'y'. -``` - -The following Q&A-type examples should give you a better idea about the usage of yes. - -### Q1. How yes command works? - -As the man page says, the yes command produces continuous output - 'y' by default, or any other string if specified by user. Here's a screenshot that shows the yes command in action: - -[![How yes command works][1]][2] - -I could only capture the last part of the output as the output frequency was so fast, but the screenshot should give you a good idea about what kind of output the tool produces. - -You can also provide a custom string for the yes command to use in output. For example: - -``` -yes HTF -``` - -[![Repeat word with yes command][3]][4] - -### Q2. Where yes command helps the user? - -That's a valid question. Reason being, from what yes does, it's difficult to imagine the usefulness of the tool. 
But you'll be surprised to know that yes can not only save you time, but also automate some mundane tasks.
-
-For example, consider the following scenario:
-
-[![Where yes command helps the user][5]][6]
-
-You can see that the user has to type 'y' for each query. It's in situations like these that yes can help. For the above scenario specifically, you can use yes in the following way:
-
-```
-yes | rm -ri test
-```
-
-[![yes command in action][7]][8]
-
-So the command made sure the user doesn't have to type 'y' each time rm asks for confirmation. Of course, one could argue that we could have simply removed the '-i' option from the rm command. That's right; I chose this example because it's simple enough to make people understand the situations in which yes can be helpful.
-
-Another - and probably more relevant - scenario would be when you're using the fsck command and don't want to enter 'y' each time the system asks for your permission before fixing errors.
-
-### Q3. Is there any use of yes when it's used alone?
-
-Yes, there's at least one use: to tell how well a computer system handles high load. The reason is that the tool utilizes 100% of the processor on systems that have a single processor. In case you want to apply this test on a system with multiple processors, you need to run one yes process for each processor.
-
-### Q4. What command line options does yes offer?
-
-The tool only offers the generic command line options --help and --version. As the names suggest, the former displays help information related to the command, while the latter outputs version information.
-
-[![What command line options yes offers][9]][10]
-
-### Conclusion
-
-So now you'd agree that there could be several scenarios where the yes command would be of help. There are no command line options unique to yes, so effectively, there's no learning curve associated with the tool. Just in case you need it, here's the command's [man page][11].
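-
-One last practical sketch: because yes repeats any string forever, piping it through head is a quick way to fabricate test data of a known size (the file name below is just an example):
-
-```
-# Generate a 1000-line dummy file for testing.
-yes 'sample log line' | head -n 1000 > testdata.txt
-
-# Verify the line count; it should report 1000.
-wc -l testdata.txt
-```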
-
-
--------------------------------------------------------------------------------
-
-via: https://www.howtoforge.com/linux-yes-command/
-
-作者:[Himanshu Arora][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.howtoforge.com
-[1]:https://www.howtoforge.com/images/command-tutorial/yes-def-output.png
-[2]:https://www.howtoforge.com/images/command-tutorial/big/yes-def-output.png
-[3]:https://www.howtoforge.com/images/command-tutorial/yes-custom-string.png
-[4]:https://www.howtoforge.com/images/command-tutorial/big/yes-custom-string.png
-[5]:https://www.howtoforge.com/images/command-tutorial/rm-ri-output.png
-[6]:https://www.howtoforge.com/images/command-tutorial/big/rm-ri-output.png
-[7]:https://www.howtoforge.com/images/command-tutorial/yes-in-action.png
-[8]:https://www.howtoforge.com/images/command-tutorial/big/yes-in-action.png
-[9]:https://www.howtoforge.com/images/command-tutorial/yes-help-version1.png
-[10]:https://www.howtoforge.com/images/command-tutorial/big/yes-help-version1.png
-[11]:https://linux.die.net/man/1/yes
diff --git a/sources/tech/20180116 How to Install and Use iostat on Ubuntu 16.04 LTS.md b/sources/tech/20180116 How to Install and Use iostat on Ubuntu 16.04 LTS.md
deleted file mode 100644
index 7ddb17eb68..0000000000
--- a/sources/tech/20180116 How to Install and Use iostat on Ubuntu 16.04 LTS.md
+++ /dev/null
@@ -1,225 +0,0 @@
-How to Install and Use iostat on Ubuntu 16.04 LTS
-======
-
-iostat, also known as input/output statistics, is a popular Linux system monitoring tool used to collect statistics about input and output devices. It allows users to identify performance issues with local disks, remote disks and the system as a whole. iostat produces three types of reports: the CPU utilization report, the device utilization report and the network filesystem report.
-
-In this tutorial, we will learn how to install iostat on Ubuntu 16.04 and how to use it.
-
-### Prerequisite
-
- * Ubuntu 16.04 desktop installed on your system.
- * Non-root user with sudo privileges set up on your system
-
-
-
-### Install iostat
-
-By default, iostat is included in the sysstat package in Ubuntu 16.04. You can easily install it by just running the following command:
-
-```
-sudo apt-get install sysstat -y
-```
-
-Once sysstat is installed, you can proceed to the next step.
-
-### iostat Basic Example
-
-Let's start by running the iostat command without any argument. This displays information about the CPU usage and I/O statistics of your system:
-
-```
-iostat
-```
-
-You should see the following output:
-```
-Linux 3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU)
-
-avg-cpu: %user %nice %system %iowait %steal %idle
- 22.67 0.52 6.99 1.88 0.00 67.94
-
-Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
-sda 15.15 449.15 119.01 771022 204292
-
-```
-
-In the above output, the first line displays the Linux kernel version and hostname. The next two lines display the CPU statistics: the average CPU usage, the percentage of time the CPU was idle while waiting for an I/O response, the percentage of waiting time of the virtual CPU, and the percentage of time the CPU was idle. The last two lines display the device utilization report: the number of blocks read and written per second, and the total blocks read and written.
-
-By default iostat displays the report with the current date.
If you want to display the current time as well, run the following command:
-
-```
-iostat -t
-```
-
-You should see the following output:
-```
-Linux 3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU)
-
-Saturday 16 December 2017 09:44:55 IST
-avg-cpu: %user %nice %system %iowait %steal %idle
- 21.37 0.31 6.93 1.28 0.00 70.12
-
-Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
-sda 9.48 267.80 79.69 771022 229424
-
-```
-
-To check the version of iostat, run the following command:
-
-```
-iostat -V
-```
-
-Output:
-```
-sysstat version 10.2.0
-(C) Sebastien Godard (sysstat <at> orange.fr)
-
-```
-
-You can list all the options available with the iostat command using the following command:
-
-```
-iostat --help
-```
-
-Output:
-```
-Usage: iostat [ options ] [ <interval> [ <count> ] ]
-Options are:
-[ -c ] [ -d ] [ -h ] [ -k | -m ] [ -N ] [ -t ] [ -V ] [ -x ] [ -y ] [ -z ]
-[ -j { ID | LABEL | PATH | UUID | ... } ]
-[ [ -T ] -g <group_name> ] [ -p [ <device> [,...] | ALL ] ]
-[ <device> [...] | ALL ]
-
-```
-
-### iostat Advanced Usage Examples
-
-If you want to view the device report only once, run the following command:
-
-```
-iostat -d
-```
-
-You should see the following output:
-```
-Linux 3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU)
-
-Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
-sda 12.18 353.66 102.44 771022 223320
-
-```
-
-To view the device report continuously, every 5 seconds, 3 times:
-
-```
-iostat -d 5 3
-```
-
-You should see the following output:
-```
-Linux 3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU)
-
-Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
-sda 11.77 340.71 98.95 771022 223928
-
-Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
-sda 2.00 0.00 8.00 0 40
-
-Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
-sda 0.60 0.00 3.20 0 16
-
-```
-
-If you want to view the statistics of a specific device, run the following command:
-
-```
-iostat -p sda
-```
-
-You should see the following output:
-```
-Linux 3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU)
-
-avg-cpu: %user %nice %system %iowait %steal %idle
- 21.69 0.36 6.98 1.44 0.00 69.53
-
-Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
-sda 11.00 316.91 92.38 771022 224744
-sda1 0.07 0.27 0.00 664 0
-sda2 0.01 0.05 0.00 128 0
-sda3 0.07 0.27 0.00 648 0
-sda4 10.56 315.21 92.35 766877 224692
-sda5 0.12 0.48 0.02 1165 52
-sda6 0.07 0.32 0.00 776 0
-
-```
-
-You can also view the statistics of multiple devices with the following command (note that the device list must be comma-separated, without spaces):
-
-```
-iostat -p sda,sdb,sdc
-```
-
-If you want to display the device I/O statistics in MB/second, run the following command:
-
-```
-iostat -m
-```
-
-You should see the following output:
-```
-Linux 3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU)
-
-avg-cpu: %user %nice %system %iowait %steal %idle
- 21.39 0.31 6.94 1.30 0.00 70.06
-
-Device: tps MB_read/s MB_wrtn/s MB_read MB_wrtn
-sda 9.67 0.27 0.08 752 223
-
-```
-
-If you want to view the extended information for a specific partition (sda4), run the following command:
-
-```
-iostat -x sda4
-```
-
-You should see the following output:
-```
-Linux 3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU)
-
-avg-cpu: %user %nice %system %iowait %steal %idle
- 21.26 0.28 6.87 1.19 0.00 70.39
-
-Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
-sda4 0.79 4.65 5.71 2.68 242.76 73.28 75.32 0.35 41.80 43.66 37.84 4.55 3.82
-
-```
-
-If you want to display only the CPU usage statistics, run the
following command: - -``` -iostat -c -``` - -You should see the following output: -``` -Linux 3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU) - -avg-cpu: %user %nice %system %iowait %steal %idle - 21.45 0.33 6.96 1.34 0.00 69.91 - -``` - --------------------------------------------------------------------------------- - -via: https://www.howtoforge.com/tutorial/how-to-install-and-use-iostat-on-ubuntu-1604/ - -作者:[Hitesh Jethva][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.howtoforge.com diff --git a/sources/tech/20180119 Linux mv Command Explained for Beginners (8 Examples).md b/sources/tech/20180119 Linux mv Command Explained for Beginners (8 Examples).md deleted file mode 100644 index 78cf02f4a9..0000000000 --- a/sources/tech/20180119 Linux mv Command Explained for Beginners (8 Examples).md +++ /dev/null @@ -1,188 +0,0 @@ -translating by cncuckoo - -Linux mv Command Explained for Beginners (8 Examples) -====== - -Just like [cp][1] for copying and rm for deleting, Linux also offers an in-built command for moving and renaming files. It's called **mv**. In this article, we will discuss the basics of this command line tool using easy to understand examples. Please note that all examples used in this tutorial have been tested on Ubuntu 16.04 LTS. - -#### Linux mv command - -As already mentioned, the mv command in Linux is used to move or rename files. Following is the syntax of the command: - -``` -mv [OPTION]... [-T] SOURCE DEST -mv [OPTION]... SOURCE... DIRECTORY -mv [OPTION]... -t DIRECTORY SOURCE... -``` - -And here's what the man page says about it: -``` -Rename SOURCE to DEST, or move SOURCE(s) to DIRECTORY. -``` - -The following Q&A-styled examples will give you a better idea on how this tool works. - -#### Q1. How to use mv command in Linux? - -If you want to just rename a file, you can use the mv command in the following way: - -``` -mv [filename] [new_filename] -``` - -For example: - -``` -mv names.txt fullnames.txt -``` - -[![How to use mv command in Linux][2]][3] - -Similarly, if the requirement is to move a file to a new location, use the mv command in the following way: - -``` -mv [filename] [dest-dir] -``` - -For example: - -``` -mv fullnames.txt /home/himanshu/Downloads -``` - -[![Linux mv command][4]][5] - -#### Q2. How to make sure mv prompts before overwriting? - -By default, the mv command doesn't prompt when the operation involves overwriting an existing file. For example, the following screenshot shows the existing full_names.txt was overwritten by mv without any warning or notification. - -[![How to make sure mv prompts before overwriting][6]][7] - -However, if you want, you can force mv to prompt by using the **-i** command line option. - -``` -mv -i [file_name] [new_file_name] -``` - -[![the -i command option][8]][9] - -So the above screenshots clearly shows that **-i** leads to mv asking for user permission before overwriting an existing file. Please note that in case you want to explicitly specify that you don't want mv to prompt before overwriting, then use the **-f** command line option. - -#### Q3. How to make mv not overwrite an existing file? - -For this, you need to use the **-n** command line option. 
-
-
-```
-mv -n [filename] [new_filename]
-```
-
-The following screenshot shows that the mv operation wasn't successful, as a file named 'full_names.txt' already existed and the command was run with the -n option.
-
-[![How to make mv not overwrite an existing file][10]][11]
-
-Note:
-```
-If you specify more than one of -i, -f, -n, only the final one takes effect.
-```
-
-#### Q4. How to make mv remove trailing slashes (if any) from source argument?
-
-To remove any trailing slashes from source arguments, use the **--strip-trailing-slashes** command line option.
-
-```
-mv --strip-trailing-slashes [source] [dest]
-```
-
-Here's how the official documentation explains the usefulness of this option:
-```
-This is useful when a source argument may have a trailing slash and specify a symbolic link to a directory. This scenario is in fact rather common because some shells can automatically append a trailing slash when performing file name completion on such symbolic links. Without this option, mv, for example, (via the system's rename function) must interpret a trailing slash as a request to dereference the symbolic link and so must rename the indirectly referenced directory and not the symbolic link. Although it may seem surprising that such behavior be the default, it is required by POSIX and is consistent with other parts of that standard.
-```
-
-#### Q5. How to make mv treat the destination as a normal file?
-
-To be absolutely sure that the destination entity is treated as a normal file (and not a directory), use the **-T** command line option.
-
-```
-mv -T [source] [dest]
-```
-
-Here's why this command line option exists:
-```
-This can help avoid race conditions in programs that operate in a shared area. For example, when the command 'mv /tmp/source /tmp/dest' succeeds, there is no guarantee that /tmp/source was renamed to /tmp/dest: it could have been renamed to /tmp/dest/source instead, if some other process created /tmp/dest as a directory. However, if mv -T /tmp/source /tmp/dest succeeds, there is no question that /tmp/source was renamed to /tmp/dest.
-```
-```
-In the opposite situation, where you want the last operand to be treated as a directory and want a diagnostic otherwise, you can use the --target-directory (-t) option.
-```
-
-#### Q6. How to make mv move a file only when it's newer than the destination file?
-
-Suppose there exists a file named fullnames.txt in the Downloads directory of your system, and there's a file with the same name in your home directory. Now, you want to update ~/Downloads/fullnames.txt with ~/fullnames.txt, but only when the latter is newer. In this case, you'll have to use the **-u** command line option.
-
-```
-mv -u ~/fullnames.txt ~/Downloads/fullnames.txt
-```
-
-This option is particularly useful when you need to make such decisions from within a shell script.
-
-#### Q7. How to make mv emit details of what it is doing?
-
-If you want mv to output information explaining what exactly it's doing, use the **-v** command line option.
-
-```
-mv -v [filename] [new_filename]
-```
-
-For example, the following screenshot shows mv emitting some helpful details of what exactly it did.
-
-[![How to make mv emit details of what it is doing][12]][13]
-
-#### Q8. How to force mv to create backups of existing destination files?
-
-You can do this using the **-b** command line option. The backup file created this way has the same name as the destination file, but with a tilde (~) appended to it.
Here's an example: - -[![How to force mv to create backup of existing destination files][14]][15] - -#### Conclusion - -As you'd have guessed by now, mv is as important as cp and rm for the functionality it offers - renaming/moving files around is also one of the basic operations after all. We've discussed a majority of command line options this tool offers. So you can just practice them and start using the command. To know more about mv, head to its [man page][16]. - - --------------------------------------------------------------------------------- - -via: https://www.howtoforge.com/linux-mv-command/ - -作者:[Himanshu Arora][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.howtoforge.com -[1]:https://www.howtoforge.com/linux-cp-command/ -[2]:https://www.howtoforge.com/images/command-tutorial/mv-rename-ex.png -[3]:https://www.howtoforge.com/images/command-tutorial/big/mv-rename-ex.png -[4]:https://www.howtoforge.com/images/command-tutorial/mv-transfer-file.png -[5]:https://www.howtoforge.com/images/command-tutorial/big/mv-transfer-file.png -[6]:https://www.howtoforge.com/images/command-tutorial/mv-overwrite.png -[7]:https://www.howtoforge.com/images/command-tutorial/big/mv-overwrite.png -[8]:https://www.howtoforge.com/images/command-tutorial/mv-prompt-overwrite.png -[9]:https://www.howtoforge.com/images/command-tutorial/big/mv-prompt-overwrite.png -[10]:https://www.howtoforge.com/images/command-tutorial/mv-n-option.png -[11]:https://www.howtoforge.com/images/command-tutorial/big/mv-n-option.png -[12]:https://www.howtoforge.com/images/command-tutorial/mv-v-option.png -[13]:https://www.howtoforge.com/images/command-tutorial/big/mv-v-option.png -[14]:https://www.howtoforge.com/images/command-tutorial/mv-b-option.png -[15]:https://www.howtoforge.com/images/command-tutorial/big/mv-b-option.png -[16]:https://linux.die.net/man/1/mv diff --git a/sources/tech/20180126 Linux kill Command Tutorial for Beginners (5 Examples).md b/sources/tech/20180126 Linux kill Command Tutorial for Beginners (5 Examples).md deleted file mode 100644 index 8fcdedef0e..0000000000 --- a/sources/tech/20180126 Linux kill Command Tutorial for Beginners (5 Examples).md +++ /dev/null @@ -1,113 +0,0 @@ -Linux kill Command Tutorial for Beginners (5 Examples) -====== - -Sometimes, while working on a Linux machine, you'll see that an application or a command line process gets stuck (becomes unresponsive). Then in those cases, terminating it is the only way out. Linux command line offers a utility that you can use in these scenarios. It's called **kill**. - -In this tutorial, we will discuss the basics of kill using some easy to understand examples. But before we do that, it's worth mentioning that all examples in the article have been tested on an Ubuntu 16.04 machine. - -#### Linux kill command - -The kill command is usually used to kill a process. Internally it sends a signal, and depending on what you want to do, there are different signals that you can send using this tool. Following is the command's syntax: - -``` -kill [options] [...] -``` - -And here's how the tool's man page describes it: -``` -The default signal for kill is TERM. Use -l or -L to list available signals. Particularly useful -signals include HUP, INT, KILL, STOP, CONT, and 0. Alternate signals may be specified in three ways: --9, -SIGKILL or -KILL. 
Negative PID values may be used to choose whole process groups; see the PGID -column in ps command output.  A PID of -1 is special; it indicates all processes except the kill -process  itself and init. -``` - -The following Q&A-styled examples should give you a better idea of how the kill command works. - -#### Q1. How to terminate a process using kill command? - -This is very easy - all you need to do is to get the pid of the process you want to kill, and then pass it to the kill command. - -``` -kill [pid] -``` - -For example, I wanted to kill the 'gthumb' process on my system. So i first used the ps command to fetch the application's pid, and then passed it to the kill command to terminate it. Here's the screenshot showing all this: - -[![How to terminate a process using kill command][1]][2] - -#### Q2. How to send a custom signal? - -As already mentioned in the introduction section, TERM is the default signal that kill sends to the application/process in question. However, if you want, you can send any other signal that kill supports using the **-s** command line option. - -``` -kill -s [signal] [pid] -``` - -For example, if a process isn't responding to the TERM signal (which allows the process to do final cleanup before quitting), you can go for the KILL signal (which doesn't let process do any cleanup). Following is the command you need to run in that case. - -``` -kill -s KILL [pid] -``` - -#### Q3. What all signals you can send using kill? - -Of course, the next logical question that'll come to your mind is how to know which all signals you can send using kill. Well, thankfully, there exists a command line option **-l** that lists all supported signals. - -``` -kill -l -``` - -Following is the output the above command produced on our system: - -[![What all signals you can send using kill][3]][4] - -#### Q4. What are the other ways in which signal can be sent? - -In one of the previous examples, we told you if you want to send the KILL signal, you can do it in the following way: - -``` -kill -s KILL [pid] -``` - -However, there are a couple of other alternatives as well: - -``` -kill -s SIGKILL [pid] - -kill -s 9 [pid] -``` - -The corresponding number can be known using the -l option we've already discussed in the previous example. - -#### Q5. How to kill all running process in one go? - -In case a user wants to kill all processes that they can (this depends on their privilege level), then instead of specifying a large number of process IDs, they can simply pass the -1 option to kill. - -For example: - -``` -kill -s KILL -1 -``` - -#### Conclusion - -The kill command is pretty straightforward to understand and use. There's a slight learning curve in terms of the list of signal options it offers, but as we explained in here, there's an option to take a quick look at that list as well. Just practice whatever we've discussed and you should be good to go. For more information, head to the tool's [man page][5]. 
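-
-As a closing sketch built from the options discussed above: a common pattern is to try a polite TERM first and escalate to KILL only if the process is still around. The PID below is a placeholder, and kill -0 merely tests whether the process still exists without sending a real signal:
-
-```
-pid=12345                      # placeholder PID of the stuck process
-kill -s TERM "$pid"            # ask the process to clean up and exit
-sleep 5                        # give it a few seconds to comply
-if kill -0 "$pid" 2>/dev/null; then
-    kill -s KILL "$pid"        # still alive? force-terminate it
-fi
-```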
-
-
--------------------------------------------------------------------------------
-
-via: https://www.howtoforge.com/linux-kill-command/
-
-作者:[Himanshu Arora][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

-[a]:https://www.howtoforge.com
-[1]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/kill-default.png
-[2]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/big/kill-default.png
-[3]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/kill-l-option.png
-[4]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/big/kill-l-option.png
-[5]:https://linux.die.net/man/1/kill
diff --git a/sources/tech/20180129 Install Zabbix Monitoring Server and Agent on Debian 9.md b/sources/tech/20180129 Install Zabbix Monitoring Server and Agent on Debian 9.md
deleted file mode 100644
index 308b6f1341..0000000000
--- a/sources/tech/20180129 Install Zabbix Monitoring Server and Agent on Debian 9.md
+++ /dev/null
@@ -1,401 +0,0 @@
-Install Zabbix Monitoring Server and Agent on Debian 9
-======
-
-Monitoring tools are used to continuously keep track of the status of a system and send out alerts and notifications if anything goes wrong. They also help you ensure that your critical systems, applications and services are always up and running. Monitoring tools supplement your network security by allowing you to detect malicious traffic, see where it's coming from and decide how to counter it.
-
-Zabbix is a free, open source, enterprise-level monitoring tool designed for real-time monitoring of millions of metrics collected from tens of thousands of servers, virtual machines and network devices. Zabbix has been designed to scale from small to very large environments. Its web front-end is written in PHP, its backend is written in C, and it uses MySQL, PostgreSQL, SQLite, Oracle or IBM DB2 to store data. Zabbix provides graphing functionality that allows you to get an overview of the current state of specific nodes and the network.
-
-Some of the major features of Zabbix are listed below:
-
- * Monitors servers, databases, applications, network devices, the VMware hypervisor, virtual machines and much more.
- * Specially designed to support small to large environments, to improve the quality of your services and reduce operating costs by avoiding downtime.
- * Fully open source, so you don't need to pay anything.
- * Provides a user-friendly web interface to do everything from a central location.
- * Comes with SNMP for monitoring network devices and IPMI for monitoring hardware devices.
- * Web-based front end that allows full system control from a browser.
-
-This tutorial will walk you through step-by-step instructions on how to install the Zabbix server and the Zabbix agent on a Debian 9 server. We will also explain how to add the Zabbix agent to the Zabbix server for monitoring.
-
-#### Requirements
-
- * Two systems with Debian 9 installed.
- * A minimum of 1 GB of RAM and 10 GB of disk space. The required amount of RAM and disk space depends on the number of hosts and the parameters that are being monitored.
- * A non-root user with sudo privileges set up on your server.
-
-
-
-#### Getting Started
-
-Before starting, it is necessary to update your server's package repository to the latest stable version.
You can update it by just running the following command on both instances: - -``` -sudo apt-get update -y -sudo apt-get upgrade -y -``` - -Next, restart your system to apply these changes. - -#### Install Apache, PHP and MariaDB - -Zabbix runs on Apache web server, written in PHP and uses MariaDB/MySQL to store their data. So in order to install Zabbix, you will require Apache, MariaDB and PHP to work. First, install Apache, PHP and Other PHP modules by running the following command: - -``` -sudo apt-get install apache2 libapache2-mod-php7.0 php7.0 php7.0-xml php7.0-bcmath php7.0-mbstring -y -``` - -Next, you will need to add MariaDB repository to your system. Because, latest version of the MariaDB is not available in Debian 9 default repository. - -You can add the repository by running the following command: - -``` -sudo apt-get install software-properties-common -y -sudo apt-key adv --recv-keys --keyserver keyserver.ubuntu.com 0xF1656F24C74CD1D8 -sudo add-apt-repository 'deb [arch=amd64] http://www.ftp.saix.net/DB/mariadb/repo/10.1/debian stretch main' -``` - -Next, update the repository by running the following command: - -``` -sudo apt-get update -y -``` - -Finally, install the MariaDB server with the following command: - -``` -sudo apt-get install mariadb-server -y -``` - -By default, MariaDB installation is not secured. So you will need to secure it first. You can do this by running the mysql_secure_installation script. - -``` -sudo mysql_secure_installation -``` - -Answer all the questions as shown below: -``` - -Enter current password for root (enter for none): Enter -Set root password? [Y/n]: Y -New password: -Re-enter new password: -Remove anonymous users? [Y/n]: Y -Disallow root login remotely? [Y/n]: Y -Remove test database and access to it? [Y/n]: Y -Reload privilege tables now? [Y/n]: Y - -``` - -The above script will set the root password, remove test database, remove anonymous user and Disallow root login from a remote location. - -Once the MariaDB installation is secured, start the Apache and MariaDB service and enable them to start on boot time by running the following command: - -``` -sudo systemctl start apache2 -sudo systemctl enable apache2 -sudo systemctl start mysql -sudo systemctl enable mysql -``` - -#### Installing Zabbix Server - -By default, Zabbix is available in the Debian 9 repository, but it might be outdated. So it is recommended to install most recent version from the official Zabbix repositories. You can download and add the latest version of the Zabbix repository with the following command: - -``` -wget http://repo.zabbix.com/zabbix/3.0/debian/pool/main/z/zabbix-release/zabbix-release_3.0-2+stretch_all.deb -``` - -Next, install the downloaded repository with the following command: - -``` -sudo dpkg -i zabbix-release_3.0-2+stretch_all.deb -``` - -Next, update the package cache and install Zabbix server with web front-end and Mysql support by running the following command: - -``` -sudo apt-get update -y -sudo apt-get install zabbix-server-mysql zabbix-frontend-php -y -``` - -You will also need to install the Zabbix agent to collect data about the Zabbix server status itself: - -``` -sudo apt-get install zabbix-agent -y -``` - -After installing Zabbix agent, start the Zabbix agent service and enable it to start on boot time by running the following command: - -``` -sudo systemctl start zabbix-agent -sudo systemctl enable zabbix-agent -``` - -#### Configuring Zabbix Database - -Zabbix uses MariaDB/MySQL as a database backend. 
So, you will need to create a MySQL database and user for the Zabbix installation (the tutorial originally mixed the user names "zabbix" and "zabbixuser"; "zabbixuser" is used consistently below to match the zabbix_server.conf settings).
-
-First, log into the MySQL shell with the following command:
-
-```
-mysql -u root -p
-```
-
-Enter your root password when prompted, then create a database for Zabbix with the following command:
-
-```
-MariaDB [(none)]> CREATE DATABASE zabbixdb character set utf8 collate utf8_bin;
-```
-
-Next, create a user for Zabbix, assign a password and grant all privileges on the Zabbix database with the following command:
-
-```
-MariaDB [(none)]> CREATE USER zabbixuser@localhost IDENTIFIED BY 'password';
-MariaDB [(none)]> GRANT ALL PRIVILEGES on zabbixdb.* to zabbixuser@localhost identified by 'password';
-```
-
-Next, flush the privileges with the following command:
-
-```
-MariaDB [(none)]> FLUSH PRIVILEGES;
-```
-
-Finally, exit from the MySQL shell with the following command:
-
-```
-MariaDB [(none)]> quit;
-```
-
-Next, import the initial schema and data into the newly created database with the following command:
-
-```
-cd /usr/share/doc/zabbix-server-mysql*/
-zcat create.sql.gz | mysql -u zabbixuser -p zabbixdb
-```
-
-#### Configuring Zabbix
-
-Zabbix creates its own Apache configuration file at `/etc/zabbix/apache.conf`. Edit this file and update the timezone and PHP settings as per your needs:
-
-```
-sudo nano /etc/zabbix/apache.conf
-```
-
-Change the file as shown below:
-```
- php_value max_execution_time 300
- php_value memory_limit 128M
- php_value post_max_size 32M
- php_value upload_max_filesize 8M
- php_value max_input_time 300
- php_value always_populate_raw_post_data -1
- php_value date.timezone Asia/Kolkata
-
-```
-
-Save the file when you are finished.
-
-Next, you will need to update the database details for Zabbix. You can do this by editing the `/etc/zabbix/zabbix_server.conf` file:
-
-```
-sudo nano /etc/zabbix/zabbix_server.conf
-```
-
-Change the following lines:
-```
-DBHost=localhost
-DBName=zabbixdb
-DBUser=zabbixuser
-DBPassword=password
-
-```
-
-Save and close the file when you are finished. Then restart all the services with the following command:
-
-```
-sudo systemctl restart apache2
-sudo systemctl restart mysql
-sudo systemctl restart zabbix-server
-```
-
-#### Configuring Firewall
-
-Before proceeding, you will need to configure the UFW firewall to secure the Zabbix server.
-
-First, make sure UFW is installed on your system. Otherwise, you can install it by running the following command:
-
-```
-sudo apt-get install ufw -y
-```
-
-Next, enable the UFW firewall:
-
-```
-sudo ufw enable
-```
-
-Next, allow ports 10050, 10051 and 80 through UFW with the following command:
-
-```
-sudo ufw allow 10050/tcp
-sudo ufw allow 10051/tcp
-sudo ufw allow 80/tcp
-```
-
-Finally, reload the firewall to apply these changes with the following command:
-
-```
-sudo ufw reload
-```
-
-Once the UFW firewall is configured, you can proceed to install the Zabbix server via the web interface.
-
-#### Accessing Zabbix Web Installation Wizard
-
-Once everything is fine, it's time to access the Zabbix web installation wizard.
-
-Open your web browser and navigate to the URL http://your-server-ip/zabbix (the Debian zabbix-frontend-php package serves the front-end under the /zabbix alias); you will be redirected to the following page:
-
-[![Zabbix 3.0][2]][3]
-
-Click on the **Next step** button and you should see the following page:
-
-[![Zabbix Prerequisites][4]][5]
-
-Here, all the Zabbix prerequisites are checked and verified. Click on the **Next step** button and you should see the following page:
-
-[![Database Configuration][6]][7]
-
-Here, provide the Zabbix database name, database user and password, then click on the **Next step** button; you should see the following page:
-
-[![Zabbix Server Details][8]][9]
-
-Here, specify the Zabbix server details and port number, then click on the **Next step** button; you should see the pre-installation summary of the Zabbix server on the following page:
-
-[![Installation summary][10]][11]
-
-Next, click on the **Next step** button to start the Zabbix installation. Once the Zabbix installation is completed successfully, you should see the following page:
-
-[![Zabbix installed successfully][12]][13]
-
-Here, click on the **Finish** button; it will redirect you to the Zabbix login page as shown below:
-
-[![Login to Zabbix][14]][15]
-
-Here, provide the username Admin and the password zabbix, then click on the **Sign in** button. You should see the Zabbix server dashboard as in the following image:
-
-[![Zabbix Dashboard][16]][17]
-
-Your Zabbix web installation is now finished.
-
-#### Install Zabbix Agent
-
-Now your Zabbix server is up and functioning. It's time to add a Zabbix agent node to the Zabbix server for monitoring.
-
-First, log into the Zabbix agent instance and add the Zabbix repository with the following commands:
-
-```
-wget http://repo.zabbix.com/zabbix/3.0/debian/pool/main/z/zabbix-release/zabbix-release_3.0-2+stretch_all.deb
-sudo dpkg -i zabbix-release_3.0-2+stretch_all.deb
-sudo apt-get update -y
-```
-
-Once you have configured the Zabbix repository on your system, install the Zabbix agent by just running the following command:
-
-```
-sudo apt-get install zabbix-agent -y
-```
-
-Once the Zabbix agent is installed, you will need to configure it to communicate with the Zabbix server. You can do this by editing the Zabbix agent configuration file:
-
-```
-sudo nano /etc/zabbix/zabbix_agentd.conf
-```
-
-Change the file as shown below:
-```
- #Zabbix Server IP Address / Hostname
-
- Server=192.168.0.103
-
- #Zabbix Agent Hostname
-
- Hostname=zabbix-agent
-
-
-```
-
-Save and close the file when you are finished, then restart the Zabbix agent service and enable it to start on boot time with the following command:
-
-```
-sudo systemctl restart zabbix-agent
-sudo systemctl enable zabbix-agent
-```
-
-#### Add Zabbix Agent Node to Zabbix Server
-
-Next, you will need to add the Zabbix agent node to the Zabbix server for monitoring. First, log in to the Zabbix server web interface.
-
-[![Zabbix UI][18]][19]
-
-Next, click on **Configuration --> Hosts -> Create Host** and you should see the following page:
-
-[![Create Host in Zabbix][20]][21]
-
-Here, specify the hostname, IP address and group names of the Zabbix agent. Then navigate to the Templates tab; you should see the following page:
-
-[![specify the Hostname, IP address and Group name][22]][23]
-
-Here, search for the appropriate templates and click on the **Add** button; you should see the following page:
-
-[![OS Template][24]][25]
-
-Finally, click on the **Add** button again. You will see your new host with green labels indicating that everything is working fine.
[![Host successfully added to Zabbix][26]][27]

If you have extra servers and network devices that you want to monitor, log in to each host, install the Zabbix agent, and add each host from the Zabbix web interface.

#### Conclusion

Congratulations! You have successfully installed the Zabbix server and Zabbix agent on a Debian 9 server. You have also added a Zabbix agent node to the Zabbix server for monitoring. You can now easily list current problems and past history, get the latest data from hosts, and visualize collected resource statistics such as CPU load, CPU utilization, and memory usage via graphs. I hope you can now easily install and configure Zabbix on a Debian 9 server and deploy it in a production environment. Compared to other monitoring software, Zabbix allows you to build your own maps of different network segments while monitoring many hosts. You can also monitor Windows hosts using the Zabbix Windows agent. For more information, you can refer to the [Zabbix Documentation Page][28]. Feel free to ask me if you have any questions.

--------------------------------------------------------------------------------

via: https://www.howtoforge.com/tutorial/install-zabbix-monitoring-server-and-agent-on-debian-9/

作者:[Hitesh Jethva][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.howtoforge.com
[1]:/cdn-cgi/l/email-protection
[2]:https://www.howtoforge.com/images/install_zabbix_monitoring_server_and_agent_on_debian_9/Screenshot-of-zabbix-welcome-page.png
[3]:https://www.howtoforge.com/images/install_zabbix_monitoring_server_and_agent_on_debian_9/big/Screenshot-of-zabbix-welcome-page.png
[4]:https://www.howtoforge.com/images/install_zabbix_monitoring_server_and_agent_on_debian_9/Screenshot-of-zabbix-pre-requisite-check-page.png
[5]:https://www.howtoforge.com/images/install_zabbix_monitoring_server_and_agent_on_debian_9/big/Screenshot-of-zabbix-pre-requisite-check-page.png
[6]:https://www.howtoforge.com/images/install_zabbix_monitoring_server_and_agent_on_debian_9/Screenshot-of-zabbix-db-config-page.png
[7]:https://www.howtoforge.com/images/install_zabbix_monitoring_server_and_agent_on_debian_9/big/Screenshot-of-zabbix-db-config-page.png
[8]:https://www.howtoforge.com/images/install_zabbix_monitoring_server_and_agent_on_debian_9/Screenshot-of-zabbix-server-details.png
[9]:https://www.howtoforge.com/images/install_zabbix_monitoring_server_and_agent_on_debian_9/big/Screenshot-of-zabbix-server-details.png
[10]:https://www.howtoforge.com/images/install_zabbix_monitoring_server_and_agent_on_debian_9/Screenshot-of-pre-installation-summary.png
[11]:https://www.howtoforge.com/images/install_zabbix_monitoring_server_and_agent_on_debian_9/big/Screenshot-of-pre-installation-summary.png
[12]:https://www.howtoforge.com/images/install_zabbix_monitoring_server_and_agent_on_debian_9/Screenshot-of-zabbix-install-success.png
[13]:https://www.howtoforge.com/images/install_zabbix_monitoring_server_and_agent_on_debian_9/big/Screenshot-of-zabbix-install-success.png
[14]:https://www.howtoforge.com/images/install_zabbix_monitoring_server_and_agent_on_debian_9/Screenshot-of-zabbix-login-page.png
[15]:https://www.howtoforge.com/images/install_zabbix_monitoring_server_and_agent_on_debian_9/big/Screenshot-of-zabbix-login-page.png
-[16]:https://www.howtoforge.com/images/install_zabbix_monitoring_server_and_agent_on_debian_9/Screenshot-of-zabbix-welcome-dashboard.png -[17]:https://www.howtoforge.com/images/install_zabbix_monitoring_server_and_agent_on_debian_9/big/Screenshot-of-zabbix-welcome-dashboard.png -[18]:https://www.howtoforge.com/images/install_zabbix_monitoring_server_and_agent_on_debian_9/Screenshot-of-zabbix-welcome-dashboard1.png -[19]:https://www.howtoforge.com/images/install_zabbix_monitoring_server_and_agent_on_debian_9/big/Screenshot-of-zabbix-welcome-dashboard1.png -[20]:https://www.howtoforge.com/images/install_zabbix_monitoring_server_and_agent_on_debian_9/Screenshot-of-zabbix-agent-host1.png -[21]:https://www.howtoforge.com/images/install_zabbix_monitoring_server_and_agent_on_debian_9/big/Screenshot-of-zabbix-agent-host1.png -[22]:https://www.howtoforge.com/images/install_zabbix_monitoring_server_and_agent_on_debian_9/Screenshot-of-zabbix-agent-add-templates.png -[23]:https://www.howtoforge.com/images/install_zabbix_monitoring_server_and_agent_on_debian_9/big/Screenshot-of-zabbix-agent-add-templates.png -[24]:https://www.howtoforge.com/images/install_zabbix_monitoring_server_and_agent_on_debian_9/Screenshot-of-zabbix-agent-select-templates.png -[25]:https://www.howtoforge.com/images/install_zabbix_monitoring_server_and_agent_on_debian_9/big/Screenshot-of-zabbix-agent-select-templates.png -[26]:https://www.howtoforge.com/images/install_zabbix_monitoring_server_and_agent_on_debian_9/Screenshot-of-zabbix-agent-dashboard.png -[27]:https://www.howtoforge.com/images/install_zabbix_monitoring_server_and_agent_on_debian_9/big/Screenshot-of-zabbix-agent-dashboard.png -[28]:https://www.zabbix.com/documentation/3.2/ diff --git a/sources/tech/20180205 Linux md5sum Command Explained For Beginners (5 Examples).md b/sources/tech/20180205 Linux md5sum Command Explained For Beginners (5 Examples).md deleted file mode 100644 index c9f67453fb..0000000000 --- a/sources/tech/20180205 Linux md5sum Command Explained For Beginners (5 Examples).md +++ /dev/null @@ -1,191 +0,0 @@ -Linux md5sum Command Explained For Beginners (5 Examples) -====== - -When downloading files, particularly installation files from websites, it is a good idea to verify that the download is valid. A website will often display a hash value for each file so that you can make sure the download completed correctly. In this article, we will be discussing the md5sum tool that you can use to validate the download. Two other utilities, sha256sum and sha512sum, work the same way as md5sum. - -### Linux md5sum command - -The md5sum command prints a 32-character (128-bit) checksum of the given file, using the MD5 algorithm. Following is the command syntax of this command line tool: - -``` -md5sum [OPTION]... [FILE]... -``` - -And here's how md5sum's man page explains it: -``` -Print or check MD5 (128-bit) checksums. - -``` - -The following Q&A-styled examples will give you an even better idea of the basic usage of md5sum. - -Note: We'll be using three files named file1.txt, file2.txt, and file3.txt as the input files in our examples. The text in each file is listed below. - -file1.txt: -``` -hi -hello -how are you -thanks. - -``` - -file2.txt: -``` -hi -hello to you -I am fine -Your welcome! - -``` - -file3.txt: -``` -hallo -Guten Tag -Wie geht es dir -Danke. - -``` - -### Q1. How to display the hash value? - -Use the command without any options to display the hash value and the filename. 
```
md5sum file1.txt
```

Here's the output this command produced on our system:

```
[Documents]$ md5sum file1.txt
1ff38cc592c4c5d0c8e3ca38be8f1eb1 file1.txt
[Documents]$
```

The output can also be displayed in a BSD-style format using the --tag option.

```
md5sum --tag file1.txt
```

```
[Documents]$ md5sum --tag file1.txt
MD5 (file1.txt) = 1ff38cc592c4c5d0c8e3ca38be8f1eb1
[Documents]$
```

### Q2. How to validate multiple files?

The md5sum command can validate multiple files at one time. We will add file2.txt and file3.txt to demonstrate the capabilities.

If you write the hashes to a file, you can use that file to check whether any of the files have changed. Here we are writing the hashes of the files to the file hashes, and then using that to validate that none of the files have changed.

```
md5sum file1.txt file2.txt file3.txt > hashes
md5sum --check hashes
```

```
[Documents]$ md5sum file1.txt file2.txt file3.txt > hashes
[Documents]$ md5sum --check hashes
file1.txt: OK
file2.txt: OK
file3.txt: OK
[Documents]$
```

Now we will change file3.txt, adding a single exclamation mark to the end of the file, and rerun the command.

```
echo "!" >> file3.txt
md5sum --check hashes
```

```
[Documents]$ md5sum --check hashes
file1.txt: OK
file2.txt: OK
file3.txt: FAILED
md5sum: WARNING: 1 computed checksum did NOT match
[Documents]$
```

You can see that file3.txt has changed.

### Q3. How to display only modified files?

If you have many files to check, you may want to display only the files that have changed. Using the --quiet option, md5sum will only list the files that have changed.

```
md5sum --quiet --check hashes
```

```
[Documents]$ md5sum --quiet --check hashes
file3.txt: FAILED
md5sum: WARNING: 1 computed checksum did NOT match
[Documents]$
```

### Q4. How to detect changes in a script?

You may want to use md5sum in a script. Using the --status option, md5sum won't print any output. Instead, the status code returns 0 if there are no changes, and 1 if the files don't match. The following script, hashes.sh, will return a 1 in the status code, because the files have changed:

```
#!/bin/bash
md5sum --status --check hashes
Result=$?
echo "File check status is: $Result"
exit $Result
```

Run the script as shown below:

```
sh hashes.sh
```

```
[Documents]$ sh hashes.sh
File check status is: 1
[Documents]$
```

### Q5. How to identify invalid hash values?

md5sum can let you know if you have invalid hashes when you compare files. To warn you if any hash values are incorrect, you can use the --warn option. For this last example, we will use sed to insert an extra character at the beginning of the third line. This will change the hash value in the file hashes, making it invalid.

```
sed -i '3s/.*/a&/' hashes
md5sum --warn --check hashes
```

This shows that the third line has an invalid hash.

```
[Documents]$ sed -i '3s/.*/a&/' hashes
[Documents]$ md5sum --warn --check hashes
file1.txt: OK
file2.txt: OK
md5sum: hashes: 3: improperly formatted MD5 checksum line
md5sum: WARNING: 1 line is improperly formatted
[Documents]$
```

### Conclusion

The md5sum command is a simple tool which can quickly validate one or multiple files to determine whether any of them have changed from the original.
For more information on md5sum, see its [man page][1].

--------------------------------------------------------------------------------

via: https://www.howtoforge.com/linux-md5sum-command/

作者:[David Paige][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.howtoforge.com/
[1]:https://linux.die.net/man/1/md5sum
diff --git a/sources/tech/20180208 How to Install and Configure XWiki on Ubuntu 16.04.md b/sources/tech/20180208 How to Install and Configure XWiki on Ubuntu 16.04.md
deleted file mode 100644
index 1418f9f354..0000000000
--- a/sources/tech/20180208 How to Install and Configure XWiki on Ubuntu 16.04.md
+++ /dev/null
@@ -1,271 +0,0 @@
How to Install and Configure XWiki on Ubuntu 16.04
======

XWiki is a free and open source wiki software written in Java that runs on a servlet container such as Tomcat or JBoss. XWiki uses a database such as MySQL or PostgreSQL to store its information. XWiki allows us to store structured data and execute server-side scripts within the wiki interface. You can host multiple blogs and manage or view your files and folders using XWiki.

XWiki comes with lots of features; some of them are listed below:

 * Supports version control and ACLs.
 * Allows you to search the full wiki using wildcards.
 * Easily export wiki pages to PDF, ODT, RTF, XML and HTML.
 * Content organization and content import.
 * Page editing using a WYSIWYG editor.

### Requirements

 * A server running Ubuntu 16.04.
 * A non-root user with sudo privileges.

Before starting, you will need to update the Ubuntu repository to the latest version. You can do this using the following commands:

```
sudo apt-get update -y
sudo apt-get upgrade -y
```

Once the repository is updated, restart the system to apply all the updates.

### Install Java

XWiki is a Java-based application, so you will need to install Java 8 first. Oracle Java 8 is not available in the default Ubuntu repository. You can install it by adding the webupd8team PPA repository to your system.

First, add the PPA by running the following command:

```
sudo add-apt-repository ppa:webupd8team/java
```

Next, update the repository with the following command:

```
sudo apt-get update -y
```

Once the repository is up to date, you can install Java 8 by running the following command:

```
sudo apt-get install oracle-java8-installer -y
```

After installing Java, you can check the version of Java with the following command:

```
java -version
```

You should see the following output:

```
java version "1.8.0_91"
Java(TM) SE Runtime Environment (build 1.8.0_91-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.91-b14, mixed mode)
```

### Download and Install XWiki

Next, you will need to download the setup file provided by XWiki.
You can download it using the following command: - -``` -wget -``` - -Once the download is completed, you can install the downloaded package file using the java command as shown below: - -``` -sudo java -jar xwiki-enterprise-installer-generic-8.1-standard.jar -``` - -You should see the following output: -``` -28 Jan, 2018 6:57:37 PM INFO: Logging initialized at level 'INFO' -28 Jan, 2018 6:57:37 PM INFO: Commandline arguments: -28 Jan, 2018 6:57:37 PM INFO: Detected platform: ubuntu_linux,version=3.19.0-25-generic,arch=x64,symbolicName=null,javaVersion=1.7.0_151 -28 Jan, 2018 6:57:37 PM WARNING: Failed to determine hostname and IP address -Welcome to the installation of XWiki Enterprise 8.1! -The homepage is at: http://xwiki.org/ - -Press 1 to continue, 2 to quit, 3 to redisplay - -``` - -Now, press **`1`** to continue the installation, you should see the following output: -``` -Please read the following information: - - XWiki Enterprise - Readme - - - XWiki Enterprise Overview -XWiki Enterprise is a second generation Wiki engine, features professional features like - Wiki, Blog, Comments, User Rights, LDAP Authentication, PDF Export, and a lot more. -XWiki Enterprise also includes an advanced form and scripting engine which makes it an ideal - development environment for constructing data-based intranet applications. It has powerful - extensibility features, supports scripting, extensions and is based on a highly modular - architecture. The scripting engine allows to access a powerful API for accessing the XWiki - repository in read and write mode. -XWiki Enterprise is used by major companies around the world and has strong - Support for a professional usage of XWiki. - Pointers -Here are some pointers to get you started with XWiki once you have finished installing it: - -The documentation can be found on the XWiki.org web site -If you notice any issue please file a an issue in our issue tracker -If you wish to talk to XWiki users or developers please use our - Mailing lists & Forum -You can also access XWiki's - source code -If you need commercial support please visit the - Support page - - - -Press 1 to continue, 2 to quit, 3 to redisplay - -``` - -Now, press **`1`** to continue the installation, you should see the following output: -``` -See the NOTICE file distributed with this work for additional -information regarding copyright ownership. -This is free software; you can redistribute it and/or modify it -under the terms of the GNU Lesser General Public License as -published by the Free Software Foundation; either version 2.1 of -the License, or (at your option) any later version. -This software is distributed in the hope that it will be useful, -but WITHOUT ANY WARRANTY; without even the implied warranty of -MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -Lesser General Public License for more details. -You should have received a copy of the GNU Lesser General Public -License along with this software; if not, write to the Free -Software Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA -02110-1301 USA, or see the FSF site: http://www.fsf.org. 

Press 1 to accept, 2 to reject, 3 to redisplay

```

Now, press **`1`** to accept the license agreement; you should see the following output:

```
Select the installation path: [/usr/local/XWiki Enterprise 8.1]

Press 1 to continue, 2 to quit, 3 to redisplay
```

Now, press Enter and then press **1** to accept the default installation path; you should see the following output:

```
 [x] Pack 'Core' required
────────────────────────────────────────────────
 [x] Include optional pack 'Default Wiki'
────────────────────────────────────────────────
Enter Y for Yes, N for No:
Y
Press 1 to continue, 2 to quit, 3 to redisplay
```

Now, press **`Y`** and then press **`1`** to continue the installation; you should see the following output:

```
[ Starting to unpack ]
[ Processing package: Core (1/2) ]
[ Processing package: Default Wiki (2/2) ]
[ Unpacking finished ]
```

Now, you will be asked to create shortcuts for the user; you can press **`Y`** to add them. Next, you will be asked to generate an automatic installation script; just press Enter to select the default value. Once the installation is finished, you should see the following output:

```
────────────────────────────────────────────────
Generate an automatic installation script
────────────────────────────────────────────────
Enter Y for Yes, N for No:
Y
Select the installation script (path must be absolute)[/usr/local/XWiki Enterprise 8.1/auto-install.xml]

Installation was successful
application installed on /usr/local/XWiki Enterprise 8.1
[ Writing the uninstaller data ... ]
[ Console installation done ]
```

Now XWiki is installed on your system; it's time to run the XWiki startup script as shown below. Note that the installation path contains a space, so it must be quoted:

```
cd "/usr/local/XWiki Enterprise 8.1"
sudo bash start_xwiki.sh
```

Please wait for some time while the processes start. You should then see some messages on the terminal as shown below:

```
start_xwiki.sh: 79: start_xwiki.sh:
Starting Jetty on port 8080, please wait...
2018-01-28 19:12:41.842:INFO::main: Logging initialized @1266ms
2018-01-28 19:12:42.905:INFO:oejs.Server:main: jetty-9.2.13.v20150730
2018-01-28 19:12:42.956:INFO:oejs.AbstractNCSARequestLog:main: Opened /usr/local/XWiki Enterprise 8.1/data/logs/2018_01_28.request.log
2018-01-28 19:12:42.965:INFO:oejdp.ScanningAppProvider:main: Deployment monitor [file:/usr/local/XWiki%20Enterprise%208.1/jetty/contexts/] at interval 0
2018-01-28 19:13:31,485 [main] INFO o.x.s.s.i.EmbeddedSolrInstance - Starting embedded Solr server...
2018-01-28 19:13:31,507 [main] INFO o.x.s.s.i.EmbeddedSolrInstance - Using Solr home directory: [data/solr]
2018-01-28 19:13:43,371 [main] INFO o.x.s.s.i.EmbeddedSolrInstance - Started embedded Solr server.
2018-01-28 19:13:46.556:INFO:oejsh.ContextHandler:main: Started [email protected]{/xwiki,file:/usr/local/XWiki%20Enterprise%208.1/webapps/xwiki/,AVAILABLE}{/xwiki}
2018-01-28 19:13:46.697:INFO:oejsh.ContextHandler:main: Started [email protected]{/,file:/usr/local/XWiki%20Enterprise%208.1/webapps/root/,AVAILABLE}{/root}
2018-01-28 19:13:46.776:INFO:oejs.ServerConnector:main: Started [email protected]{HTTP/1.1}{0.0.0.0:8080}
```

XWiki is now up and running; it's time to access the XWiki web interface.
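One practical note before that: `start_xwiki.sh` keeps Jetty attached to your terminal. If you would rather keep XWiki running after you close the session, one simple approach (a plain `nohup` sketch, not an official XWiki mechanism) is:

```
cd "/usr/local/XWiki Enterprise 8.1"
nohup bash start_xwiki.sh > xwiki.log 2>&1 &
```

You can then follow the startup messages with `tail -f xwiki.log` instead of holding the terminal open.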
### Access XWiki

XWiki runs on port **8080**, so you will need to allow port 8080 through the firewall. First, enable the UFW firewall with the following command:

```
sudo ufw enable
```

Next, allow port **8080** through the UFW firewall with the following command:

```
sudo ufw allow 8080/tcp
```

Next, reload the firewall rules to apply all the changes by running the following command:

```
sudo ufw reload
```

You can get the status of the UFW firewall with the following command:

```
sudo ufw status
```

Now, open your web browser and navigate to `http://your-server-ip:8080`; you will be redirected to the XWiki home page as shown below:

[![XWiki Dashboard][1]][2]

You can stop the XWiki server at any time by pressing **`Ctrl + C`** in the terminal.

### Conclusion

Congratulations! You have successfully installed and configured XWiki on an Ubuntu 16.04 server. I hope you can now easily host your own wiki site using XWiki on an Ubuntu 16.04 server. For more information, you can check the official XWiki documentation. Feel free to comment if you have any questions.

--------------------------------------------------------------------------------

via: https://www.howtoforge.com/tutorial/how-to-install-and-configure-xwiki-on-ubuntu-1604/

作者:[Hitesh Jethva][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.howtoforge.com
[1]:https://www.howtoforge.com/images/how_to_install_and_configure_xwiki_on_ubuntu_1604/Screenshot-of-xwiki-dashboard.png
[2]:https://www.howtoforge.com/images/how_to_install_and_configure_xwiki_on_ubuntu_1604/big/Screenshot-of-xwiki-dashboard.png
diff --git a/sources/tech/20180209 How to Install Gogs Go Git Service on Ubuntu 16.04.md b/sources/tech/20180209 How to Install Gogs Go Git Service on Ubuntu 16.04.md
deleted file mode 100644
index b30c6ebf90..0000000000
--- a/sources/tech/20180209 How to Install Gogs Go Git Service on Ubuntu 16.04.md
+++ /dev/null
@@ -1,441 +0,0 @@
translated by cyleft

How to Install Gogs Go Git Service on Ubuntu 16.04
======

Gogs is a free and open source Git service written in the Go language. Gogs is a painless self-hosted Git service that allows you to create and run your own Git server on minimal hardware. The Gogs web UI is very similar to GitHub's, and it offers support for MySQL, PostgreSQL, and SQLite databases.

In this tutorial, we will show you step-by-step how to install and configure your own Git service using Gogs on Ubuntu 16.04. This tutorial will cover details including how to install Go on an Ubuntu system, install PostgreSQL, and install and configure the Nginx web server as a reverse proxy for the Go application.

### Prerequisites

 * Ubuntu 16.04
 * Root privileges

### What we will do

 1. Update and Upgrade System
 2. Install and Configure PostgreSQL
 3. Install Go and Git
 4. Install Gogs
 5. Configure Gogs
 6. Running Gogs as a Service
 7. Install and Configure Nginx as a Reverse Proxy
 8. Testing

### Step 1 - Update and Upgrade System

Before going any further, update all Ubuntu repositories and upgrade all packages.

Run the apt commands below.

```
sudo apt update
sudo apt upgrade
```

### Step 2 - Install and Configure PostgreSQL

Gogs offers support for MySQL, PostgreSQL, SQLite3, MSSQL, and TiDB database systems.

In this guide, we will be using PostgreSQL as the database for our Gogs installation.
Install PostgreSQL using the apt command below.

```
sudo apt install -y postgresql postgresql-client libpq-dev
```

After the installation is complete, start the PostgreSQL service and enable it to launch at system boot.

```
systemctl start postgresql
systemctl enable postgresql
```

The PostgreSQL database has now been installed on the Ubuntu system.

Next, we need to create a new database and user for Gogs.

Log in as the 'postgres' user and run the 'psql' command to get the PostgreSQL shell.

```
su - postgres
psql
```

Create a new user named 'git', and give the user the 'CREATEDB' privilege.

```
CREATE USER git CREATEDB;
\password git
```

Create a database named 'gogs_production', and set the 'git' user as the owner of the database.

```
CREATE DATABASE gogs_production OWNER git;
```

[![Create the Gogs database][1]][2]

The new PostgreSQL database 'gogs_production' and user 'git' for the Gogs installation have been created.

### Step 3 - Install Go and Git

Install Git from the repository using the apt command below.

```
sudo apt install git
```

Now add the new user 'git' to the system.

```
sudo adduser --disabled-login --gecos 'Gogs' git
```

Log in as the 'git' user and create a new 'local' directory.

```
su - git
mkdir -p /home/git/local
```

Go to the 'local' directory and download Go (version 1.9.2 is used below) using the wget command as shown.

```
cd ~/local
wget https://dl.google.com/go/go1.9.2.linux-amd64.tar.gz
```

[![Install Go and Git][3]][4]

Extract the Go archive, then remove it.

```
tar -xf go1.9.2.linux-amd64.tar.gz
rm -f go1.9.2.linux-amd64.tar.gz
```

The Go toolchain has been unpacked into the '~/local/go' directory. Now we need to set up the environment - we need to define the 'GOROOT' and 'GOPATH' directories so we can run the 'go' command on the system as the 'git' user.

Run all of the following commands.

```
cd ~/
echo 'export GOROOT=$HOME/local/go' >> $HOME/.bashrc
echo 'export GOPATH=$HOME/go' >> $HOME/.bashrc
echo 'export PATH=$PATH:$GOROOT/bin:$GOPATH/bin' >> $HOME/.bashrc
```

And reload Bash by running the 'source ~/.bashrc' command as shown below.

```
source ~/.bashrc
```

Make sure you're using Bash as your default shell.

[![Install Go programming language][5]][6]

Now run the 'go' command to check the version.

```
go version
```

And make sure you get the result as shown in the following screenshot.

[![Check the go version][7]][8]

Go is now installed on the system under the 'git' user.

### Step 4 - Install Gogs Go Git Service

Log in as the 'git' user and download Gogs from GitHub using the 'go' command.

```
su - git
go get -u github.com/gogits/gogs
```

The command will download all of the Gogs source code into the '$GOPATH/src' directory.

Go to the '$GOPATH/src/github.com/gogits/gogs' directory and build Gogs using the commands below.

```
cd $GOPATH/src/github.com/gogits/gogs
go build
```

And make sure you get no errors.

Now run the Gogs Go Git Service using the command below.

```
./gogs web
```

The command will run Gogs on the default port 3000.

[![Install Gogs Go Git Service][9]][10]

Open your web browser and visit your server IP address on port 3000, e.g. http://your-server-ip:3000.

And you should get the result as shown below.

[![Gogs web installer][11]][12]

Gogs is installed on the Ubuntu system. Now go back to your terminal and press 'Ctrl + C' to exit.

### Step 5 - Configure Gogs Go Git Service

In this step, we will create a custom configuration for Gogs.
Go to the Gogs installation directory and create a new 'custom/conf' directory.

```
cd $GOPATH/src/github.com/gogits/gogs
mkdir -p custom/conf/
```

Copy the default configuration to the custom directory and edit it using [vim][13].

```
cp conf/app.ini custom/conf/app.ini
vim custom/conf/app.ini
```

In the '**[server]**' section, change the 'HTTP_ADDR' value to '127.0.0.1'.

```
[server]
PROTOCOL = http
DOMAIN = localhost
ROOT_URL = %(PROTOCOL)s://%(DOMAIN)s:%(HTTP_PORT)s/
HTTP_ADDR = 127.0.0.1
HTTP_PORT = 3000
```

In the '**[database]**' section, update the values with your own database info.

```
[database]
DB_TYPE = postgres
HOST = 127.0.0.1:5432
NAME = gogs_production
USER = git
PASSWD = your-db-password
```

Save and exit.

Now verify the configuration by running the command as shown below.

```
./gogs web
```

And make sure you get the result as follows.

[![Configure the service][14]][15]

Gogs is now running with our custom configuration, on 'localhost' port 3000.

### Step 6 - Running Gogs as a Service

In this step, we will configure Gogs as a service on the Ubuntu system. We will create a new service configuration file 'gogs.service' under the '/etc/systemd/system' directory.

Go to the '/etc/systemd/system' directory and create a new service file 'gogs.service' using the [vim][13] editor.

```
cd /etc/systemd/system
vim gogs.service
```

Paste the following gogs service configuration there.

```
[Unit]
Description=Gogs
After=syslog.target
After=network.target
After=mariadb.service mysqld.service postgresql.service memcached.service redis.service

[Service]
# Modify these two values and uncomment them if you have
# repos with lots of files and get an HTTP error 500 because
# of that
###
#LimitMEMLOCK=infinity
#LimitNOFILE=65535
Type=simple
User=git
Group=git
WorkingDirectory=/home/git/go/src/github.com/gogits/gogs
ExecStart=/home/git/go/src/github.com/gogits/gogs/gogs web
Restart=always
Environment=USER=git HOME=/home/git

[Install]
WantedBy=multi-user.target
```

Save and exit.

Now reload the systemd services.

```
systemctl daemon-reload
```

Start the gogs service and enable it to launch at system boot using the systemctl command.

```
systemctl start gogs
systemctl enable gogs
```

[![Run gogs as a service][16]][17]

Gogs is now running as a service on the Ubuntu system.

Check it using the commands below.

```
netstat -plntu
systemctl status gogs
```

And you should get the result as shown below.

[![Gogs is listening on the network interface][18]][19]

### Step 7 - Configure Nginx as a Reverse Proxy for Gogs

In this step, we will configure Nginx as a reverse proxy for Gogs. We will be using the Nginx packages from its own repository.

Add the Nginx repository using the add-apt-repository command.

```
sudo add-apt-repository -y ppa:nginx/stable
```

Now update all Ubuntu repositories and install Nginx using the apt commands below.

```
sudo apt update
sudo apt install nginx -y
```

Next, go to the '/etc/nginx/sites-available' directory and create a new virtual host file 'gogs'.

```
cd /etc/nginx/sites-available
vim gogs
```

Paste the following configuration there.

```
server {
    listen 80;
    server_name git.hakase-labs.co;

    location / {
        proxy_pass http://localhost:3000;
    }
}
```

Save and exit.
**Note:**

Change the 'server_name' line to your own domain name.

Now activate the new virtual host and test the nginx configuration.

```
ln -s /etc/nginx/sites-available/gogs /etc/nginx/sites-enabled/
nginx -t
```

Make sure there is no error, then restart the Nginx service.

```
systemctl restart nginx
```

[![Nginx reverse proxy for gogs][20]][21]

### Step 8 - Testing

Open your web browser and type your Gogs URL; mine is http://git.hakase-labs.co

Now you will get the installation page. At the top of the page, type in all of your PostgreSQL database info.

[![Gogs installer][22]][23]

Now scroll to the bottom, and click the 'Admin account settings' dropdown.

Type in your admin user, password, and email.

[![Type in the gogs install settings][24]][25]

Then click the 'Install Gogs' button.

And you will be redirected to the Gogs user dashboard as shown below.

[![Gogs dashboard][26]][27]

Below is the Gogs 'Admin Dashboard'.

[![Browse the Gogs dashboard][28]][29]

Gogs is now installed with the PostgreSQL database and Nginx web server on an Ubuntu 16.04 server.

--------------------------------------------------------------------------------

via: https://www.howtoforge.com/tutorial/how-to-install-gogs-go-git-service-on-ubuntu-1604/

作者:[Muhammad Arul][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.howtoforge.com/tutorial/server-monitoring-with-shinken-on-ubuntu-16-04/
[1]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/1.png
[2]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/1.png
[3]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/2.png
[4]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/2.png
[5]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/3.png
[6]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/3.png
[7]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/4.png
[8]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/4.png
[9]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/5.png
[10]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/5.png
[11]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/6.png
[12]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/6.png
[13]:https://www.howtoforge.com/vim-basics
[14]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/7.png
[15]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/7.png
[16]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/8.png
[17]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/8.png
[18]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/9.png
[19]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/9.png
[20]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/10.png
[21]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/10.png
-[22]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/11.png -[23]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/11.png -[24]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/12.png -[25]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/12.png -[26]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/13.png -[27]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/13.png -[28]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/14.png -[29]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/14.png From 9fe36f68b5fe34ce88a89714fc4124a893afdd57 Mon Sep 17 00:00:00 2001 From: darksun Date: Thu, 22 Feb 2018 14:38:31 +0800 Subject: [PATCH 16/81] remove jvns.ca --- sources/tech/20160810 How does gdb work.md | 220 --------------- ...20171010 Operating a Kubernetes network.md | 217 --------------- ...1121 Finding Files with mlocate- Part 3.md | 142 ---------- .../20171124 How do groups work on Linux.md | 143 ---------- sources/tech/20171224 My first Rust macro.md | 145 ---------- .../20180104 How does gdb call functions.md | 254 ------------------ ...ures resolving symbol addresses is hard.md | 163 ----------- 7 files changed, 1284 deletions(-) delete mode 100644 sources/tech/20160810 How does gdb work.md delete mode 100644 sources/tech/20171010 Operating a Kubernetes network.md delete mode 100644 sources/tech/20171121 Finding Files with mlocate- Part 3.md delete mode 100644 sources/tech/20171124 How do groups work on Linux.md delete mode 100644 sources/tech/20171224 My first Rust macro.md delete mode 100644 sources/tech/20180104 How does gdb call functions.md delete mode 100644 sources/tech/20180109 Profiler adventures resolving symbol addresses is hard.md diff --git a/sources/tech/20160810 How does gdb work.md b/sources/tech/20160810 How does gdb work.md deleted file mode 100644 index 56b0cfe7bf..0000000000 --- a/sources/tech/20160810 How does gdb work.md +++ /dev/null @@ -1,220 +0,0 @@ -translating by ucasFL - -How does gdb work? -============================================================ - -Hello! Today I was working a bit on my [ruby stacktrace project][1] and I realized that now I know a couple of things about how gdb works internally. - -Lately I’ve been using gdb to look at Ruby programs, so we’re going to be running gdb on a Ruby program. This really means the Ruby interpreter. First, we’re going to print out the address of a global variable: `ruby_current_thread`: - -### getting a global variable - -Here’s how to get the address of the global `ruby_current_thread`: - -``` -$ sudo gdb -p 2983 -(gdb) p & ruby_current_thread -$2 = (rb_thread_t **) 0x5598a9a8f7f0 - -``` - -There are a few places a variable can live: on the heap, the stack, or in your program’s text. Global variables are part of your program! You can think of them as being allocated at compile time, kind of. It turns out we can figure out the address of a global variable pretty easily! Let’s see how `gdb` came up with `0x5598a9a8f7f0`. - -We can find the approximate region this variable lives in by looking at a cool file in `/proc` called `/proc/$pid/maps`. 
- -``` -$ sudo cat /proc/2983/maps | grep bin/ruby -5598a9605000-5598a9886000 r-xp 00000000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby -5598a9a86000-5598a9a8b000 r--p 00281000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby -5598a9a8b000-5598a9a8d000 rw-p 00286000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby - -``` - -So! There’s this starting address `5598a9605000` That’s  _like_  `0x5598a9a8f7f0`, but different. How different? Well, here’s what I get when I subtract them: - -``` -(gdb) p/x 0x5598a9a8f7f0 - 0x5598a9605000 -$4 = 0x48a7f0 - -``` - -“What’s that number?”, you might ask? WELL. Let’s look at the **symbol table**for our program with `nm`. - -``` -sudo nm /proc/2983/exe | grep ruby_current_thread -000000000048a7f0 b ruby_current_thread - -``` - -What’s that we see? Could it be `0x48a7f0`? Yes it is! So!! If we want to find the address of a global variable in our program, all we need to do is look up the name of the variable in the symbol table, and then add that to the start of the range in `/proc/whatever/maps`, and we’re done! - -So now we know how gdb does that. But gdb does so much more!! Let’s skip ahead to… - -### dereferencing pointers - -``` -(gdb) p ruby_current_thread -$1 = (rb_thread_t *) 0x5598ab3235b0 - -``` - -The next thing we’re going to do is **dereference** that `ruby_current_thread`pointer. We want to see what’s in that address! To do that, gdb will run a bunch of system calls like this: - -``` -ptrace(PTRACE_PEEKTEXT, 2983, 0x5598a9a8f7f0, [0x5598ab3235b0]) = 0 - -``` - -You remember this address `0x5598a9a8f7f0`? gdb is asking “hey, what’s in that address exactly”? `2983` is the PID of the process we’re running gdb on. It’s using the `ptrace` system call which is how gdb does everything. - -Awesome! So we can dereference memory and figure out what bytes are at what memory addresses. Some useful gdb commands to know here are `x/40w variable` and `x/40b variable` which will display 40 words / bytes at a given address, respectively. - -### describing structs - -The memory at an address looks like this. A bunch of bytes! - -``` -(gdb) x/40b ruby_current_thread -0x5598ab3235b0: 16 -90 55 -85 -104 85 0 0 -0x5598ab3235b8: 32 47 50 -85 -104 85 0 0 -0x5598ab3235c0: 16 -64 -55 115 -97 127 0 0 -0x5598ab3235c8: 0 0 2 0 0 0 0 0 -0x5598ab3235d0: -96 -83 -39 115 -97 127 0 0 - -``` - -That’s useful, but not that useful! If you are a human like me and want to know what it MEANS, you need more. Like this: - -``` -(gdb) p *(ruby_current_thread) -$8 = {self = 94114195940880, vm = 0x5598ab322f20, stack = 0x7f9f73c9c010, - stack_size = 131072, cfp = 0x7f9f73d9ada0, safe_level = 0, raised_flag = 0, - last_status = 8, state = 0, waiting_fd = -1, passed_block = 0x0, - passed_bmethod_me = 0x0, passed_ci = 0x0, top_self = 94114195612680, - top_wrapper = 0, base_block = 0x0, root_lep = 0x0, root_svar = 8, thread_id = - 140322820187904, - -``` - -GOODNESS. That is a lot more useful. How does gdb know that there are all these cool fields like `stack_size`? Enter DWARF. DWARF is a way to store extra debugging data about your program, so that debuggers like gdb can do their job better! It’s generally stored as part of a binary. 
If I run `dwarfdump` on my Ruby binary, I get some output like this: - -(I’ve redacted it heavily to make it easier to understand) - -``` -DW_AT_name "rb_thread_struct" -DW_AT_byte_size 0x000003e8 -DW_TAG_member - DW_AT_name "self" - DW_AT_type <0x00000579> - DW_AT_data_member_location DW_OP_plus_uconst 0 -DW_TAG_member - DW_AT_name "vm" - DW_AT_type <0x0000270c> - DW_AT_data_member_location DW_OP_plus_uconst 8 -DW_TAG_member - DW_AT_name "stack" - DW_AT_type <0x000006b3> - DW_AT_data_member_location DW_OP_plus_uconst 16 -DW_TAG_member - DW_AT_name "stack_size" - DW_AT_type <0x00000031> - DW_AT_data_member_location DW_OP_plus_uconst 24 -DW_TAG_member - DW_AT_name "cfp" - DW_AT_type <0x00002712> - DW_AT_data_member_location DW_OP_plus_uconst 32 -DW_TAG_member - DW_AT_name "safe_level" - DW_AT_type <0x00000066> - -``` - -So. The name of the type of `ruby_current_thread` is `rb_thread_struct`. It has size `0x3e8` (or 1000 bytes), and it has a bunch of member items. `stack_size` is one of them, at an offset of 24, and it has type 31\. What’s 31? No worries! We can look that up in the DWARF info too! - -``` -< 1><0x00000031> DW_TAG_typedef - DW_AT_name "size_t" - DW_AT_type <0x0000003c> -< 1><0x0000003c> DW_TAG_base_type - DW_AT_byte_size 0x00000008 - DW_AT_encoding DW_ATE_unsigned - DW_AT_name "long unsigned int" - -``` - -So! `stack_size` has type `size_t`, which means `long unsigned int`, and is 8 bytes. That means that we can read the stack size! - -How that would break down, once we have the DWARF debugging data, is: - -1. Read the region of memory that `ruby_current_thread` is pointing to - -2. Add 24 bytes to get to `stack_size` - -3. Read 8 bytes (in little-endian format, since we’re on x86) - -4. Get the answer! - -Which in this case is 131072 or 128 kb. - -To me, this makes it a lot more obvious what debugging info is **for** – if we didn’t have all this extra metadata about what all these variables meant, we would have no idea what the bytes at address `0x5598ab3235b0` meant. - -This is also why you can install debug info for a program separately from your program – gdb doesn’t care where it gets the extra debug info from. - -### DWARF is confusing - -I’ve been reading a bunch of DWARF info recently. Right now I’m using libdwarf which hasn’t been the best experience – the API is confusing, you initialize everything in a weird way, and it’s really slow (it takes 0.3 seconds to read all the debugging data out of my Ruby program which seems ridiculous). I’ve been told that libdw from elfutils is better. - -Also, I casually remarked that you can look at `DW_AT_data_member_location` to get the offset of a struct member! But I looked up on Stack Overflow how to actually do that and I got [this answer][2]. Basically you start with a check like: - -``` -dwarf_whatform(attrs[i], &form, &error); - if (form == DW_FORM_data1 || form == DW_FORM_data2 - form == DW_FORM_data2 || form == DW_FORM_data4 - form == DW_FORM_data8 || form == DW_FORM_udata) { - -``` - -and then it keeps GOING. Why are there 8 million different `DW_FORM_data` things I need to check for? What is happening? I have no idea. - -Anyway my impression is that DWARF is a large and complicated standard (and possibly the libraries people use to generate DWARF are subtly incompatible?), but it’s what we have, so that’s what we work with! - -I think it’s really cool that I can write code that reads DWARF and my code actually mostly works. Except when it crashes. I’m working on that. 
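One thing that keeps me sane while writing that code: you can always cross-check the offset arithmetic by hand in gdb. Here's a tiny hedged example (same Ruby process as above, using the offset-24 `stack_size` member we found in the DWARF output; the casts are mine, not anything gdb does for you):

```
(gdb) p *(unsigned long *)((char *)ruby_current_thread + 24)
$9 = 131072
```

Same 128kb stack size as before, so the DWARF-derived offset checks out.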
- -### unwinding stacktraces - -In an earlier version of this post, I said that gdb unwinds stacktraces using libunwind. It turns out that this isn’t true at all! - -Someone who’s worked on gdb a lot emailed me to say that they actually spent a ton of time figuring out how to unwind stacktraces so that they can do a better job than libunwind does. This means that if you get stopped in the middle of a weird program with less debug info than you might hope for that’s done something strange with its stack, gdb will try to figure out where you are anyway. Thanks <3 - -### other things gdb does - -The few things I’ve described here (reading memory, understanding DWARF to show you structs) aren’t everything gdb does – just looking through Brendan Gregg’s [gdb example from yesterday][3], we see that gdb also knows how to - -* disassemble assembly - -* show you the contents of your registers - -and in terms of manipulating your program, it can - -* set breakpoints and step through a program - -* modify memory (!! danger !!) - -Knowing more about how gdb works makes me feel a lot more confident when using it! I used to get really confused because gdb kind of acts like a C REPL sometimes – you type `ruby_current_thread->cfp->iseq`, and it feels like writing C code! But you’re not really writing C at all, and it was easy for me to run into limitations in gdb and not understand why. - -Knowing that it’s using DWARF to figure out the contents of the structs gives me a better mental model and have more correct expectations! Awesome. - --------------------------------------------------------------------------------- - -via: https://jvns.ca/blog/2016/08/10/how-does-gdb-work/ - -作者:[ Julia Evans][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://jvns.ca/ -[1]:http://jvns.ca/blog/2016/06/12/a-weird-system-call-process-vm-readv/ -[2]:https://stackoverflow.com/questions/25047329/how-to-get-struct-member-offset-from-dwarf-info -[3]:http://www.brendangregg.com/blog/2016-08-09/gdb-example-ncurses.html diff --git a/sources/tech/20171010 Operating a Kubernetes network.md b/sources/tech/20171010 Operating a Kubernetes network.md deleted file mode 100644 index abac12f718..0000000000 --- a/sources/tech/20171010 Operating a Kubernetes network.md +++ /dev/null @@ -1,217 +0,0 @@ -**translating by [erlinux](https://github.com/erlinux)** -Operating a Kubernetes network -============================================================ - -I’ve been working on Kubernetes networking a lot recently. One thing I’ve noticed is, while there’s a reasonable amount written about how to **set up** your Kubernetes network, I haven’t seen much about how to **operate** your network and be confident that it won’t create a lot of production incidents for you down the line. - -In this post I’m going to try to convince you of three things: (all I think pretty reasonable :)) - -* Avoiding networking outages in production is important - -* Operating networking software is hard - -* It’s worth thinking critically about major changes to your networking infrastructure and the impact that will have on your reliability, even if very fancy Googlers say “this is what we do at Google”. (google engineers are doing great work on Kubernetes!! But I think it’s important to still look at the architecture and make sure it makes sense for your organization.) 
- -I’m definitely not a Kubernetes networking expert by any means, but I have run into a few issues while setting things up and definitely know a LOT more about Kubernetes networking than I used to. - -### Operating networking software is hard - -Here I’m not talking about operating physical networks (I don’t know anything about that), but instead about keeping software like DNS servers & load balancers & proxies working correctly. - -I have been working on a team that’s responsible for a lot of networking infrastructure for a year, and I have learned a few things about operating networking infrastructure! (though I still have a lot to learn obviously). 3 overall thoughts before we start: - -* Networking software often relies very heavily on the Linux kernel. So in addition to configuring the software correctly you also need to make sure that a bunch of different sysctls are set correctly, and a misconfigured sysctl can easily be the difference between “everything is 100% fine” and “everything is on fire”. - -* Networking requirements change over time (for example maybe you’re doing 5x more DNS lookups than you were last year! Maybe your DNS server suddenly started returning TCP DNS responses instead of UDP which is a totally different kernel workload!). This means software that was working fine before can suddenly start having issues. - -* To fix a production networking issues you often need a lot of expertise. (for example see this [great post by Sophie Haskins on debugging a kube-dns issue][1]) I’m a lot better at debugging networking issues than I was, but that’s only after spending a huge amount of time investing in my knowledge of Linux networking. - -I am still far from an expert at networking operations but I think it seems important to: - -1. Very rarely make major changes to the production networking infrastructure (because it’s super disruptive) - -2. When you  _are_  making major changes, think really carefully about what the failure modes are for the new network architecture are - -3. Have multiple people who are able to understand your networking setup - -Switching to Kubernetes is obviously a pretty major networking change! So let’s talk about what some of the things that can go wrong are! - -### Kubernetes networking components - -The Kubernetes networking components we’re going to talk about in this post are: - -* Your overlay network backend (like flannel/calico/weave net/romana) - -* `kube-dns` - -* `kube-proxy` - -* Ingress controllers / load balancers - -* The `kubelet` - -If you’re going to set up HTTP services you probably need all of these. I’m not using most of these components yet but I’m trying to understand them, so that’s what this post is about. - -### The simplest way: Use host networking for all your containers - -Let’s start with the simplest possible thing you can do. This won’t let you run HTTP services in Kubernetes. I think it’s pretty safe because there are less moving parts. - -If you use host networking for all your containers I think all you need to do is: - -1. Configure the kubelet to configure DNS correctly inside your containers - -2. That’s it - -If you use host networking for literally every pod you don’t need kube-dns or kube-proxy. You don’t even need a working overlay network. - -In this setup your pods can connect to the outside world (the same way any process on your hosts would talk to the outside world) but the outside world can’t connect to your pods. 
- -This isn’t super important (I think most people want to run HTTP services inside Kubernetes and actually communicate with those services) but I do think it’s interesting to realize that at some level all of this networking complexity isn’t strictly required and sometimes you can get away without using it. Avoiding networking complexity seems like a good idea to me if you can. - -### Operating an overlay network - -The first networking component we’re going to talk about is your overlay network. Kubernetes assumes that every pod has an IP address and that you can communicate with services inside that pod by using that IP address. When I say “overlay network” this is what I mean (“the system that lets you refer to a pod by its IP address”). - -All other Kubernetes networking stuff relies on the overlay networking working correctly. You can read more about the [kubernetes networking model here][10]. - -The way Kelsey Hightower describes in [kubernetes the hard way][11] seems pretty good but it’s not really viable on AWS for clusters more than 50 nodes or so, so I’m not going to talk about that. - -There are a lot of overlay network backends (calico, flannel, weaveworks, romana) and the landscape is pretty confusing. But as far as I’m concerned an overlay network has 2 responsibilities: - -1. Make sure your pods can send network requests outside your cluster - -2. Keep a stable mapping of nodes to subnets and keep every node in your cluster updated with that mapping. Do the right thing when nodes are added & removed. - -Okay! So! What can go wrong with your overlay network? - -* The overlay network is responsible for setting up iptables rules (basically `iptables -A -t nat POSTROUTING -s $SUBNET -j MASQUERADE`) to ensure that containers can make network requests outside Kubernetes. If something goes wrong with this rule then your containers can’t connect to the external network. This isn’t that hard (it’s just a few iptables rules) but it is important. I made a [pull request][2] because I wanted to make sure this was resilient - -* Something can go wrong with adding or deleting nodes. We’re using the flannel hostgw backend and at the time we started using it, node deletion [did not work][3]. - -* Your overlay network is probably dependent on a distributed database (etcd). If that database has an incident, this can cause issues. For example [https://github.com/coreos/flannel/issues/610][4] says that if you have data loss in your flannel etcd cluster it can result in containers losing network connectivity. (this has now been fixed) - -* You upgrade Docker and everything breaks - -* Probably more things! - -I’m mostly talking about past issues in Flannel here but I promise I’m not picking on Flannel – I actually really **like** Flannel because I feel like it’s relatively simple (for instance the [vxlan backend part of it][12] is like 500 lines of code) and I feel like it’s possible for me to reason through any issues with it. And it’s obviously continuously improving. They’ve been great about reviewing pull requests. 
- -My approach to operating an overlay network so far has been: - -* Learn how it works in detail and how to debug it (for example the hostgw network backend for Flannel works by creating routes, so you mostly just need to do `sudo ip route list` to see whether it’s doing the correct thing) - -* Maintain an internal build so it’s easy to patch it if needed - -* When there are issues, contribute patches upstream - -I think it’s actually really useful to go through the list of merged PRs and see bugs that have been fixed in the past – it’s a bit time consuming but is a great way to get a concrete list of kinds of issues other people have run into. - -It’s possible that for other people their overlay networks just work but that hasn’t been my experience and I’ve heard other folks report similar issues. If you have an overlay network setup that is a) on AWS and b) works on a cluster more than 50-100 nodes where you feel more confident about operating it I would like to know. - -### Operating kube-proxy and kube-dns? - -Now that we have some thoughts about operating overlay networks, let’s talk about - -There’s a question mark next to this one because I haven’t done this. Here I have more questions than answers. - -Here’s how Kubernetes services work! A service is a collection of pods, which each have their own IP address (like 10.1.0.3, 10.2.3.5, 10.3.5.6) - -1. Every Kubernetes service gets an IP address (like 10.23.1.2) - -2. `kube-dns` resolves Kubernetes service DNS names to IP addresses (so my-svc.my-namespace.svc.cluster.local might map to 10.23.1.2) - -3. `kube-proxy` sets up iptables rules in order to do random load balancing between them. Kube-proxy also has a userspace round-robin load balancer but my impression is that they don’t recommend using it. - -So when you make a request to `my-svc.my-namespace.svc.cluster.local`, it resolves to 10.23.1.2, and then iptables rules on your local host (generated by kube-proxy) redirect it to one of 10.1.0.3 or 10.2.3.5 or 10.3.5.6 at random. - -Some things that I can imagine going wrong with this: - -* `kube-dns` is misconfigured - -* `kube-proxy` dies and your iptables rules don’t get updated - -* Some issue related to maintaining a large number of iptables rules - -Let’s talk about the iptables rules a bit, since doing load balancing by creating a bajillion iptables rules is something I had never heard of before! - -kube-proxy creates one iptables rule per target host like this: (these rules are from [this github issue][13]) - -``` --A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.20000000019 -j KUBE-SEP-E4QKA7SLJRFZZ2DD[b][c] --A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.25000000000 -j KUBE-SEP-LZ7EGMG4DRXMY26H --A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-RKIFTWKKG3OHTTMI --A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-CGDKBCNM24SZWCMS --A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -j KUBE-SEP-RI4SRNQQXWSTGE2Y - -``` - -So kube-proxy creates a **lot** of iptables rules. What does that mean? What are the implications of that in for my network? 
-There’s a great talk from Huawei called [Scale Kubernetes to Support 50,000 services][14] that says if you have 5,000 services in your kubernetes cluster, it takes **11 minutes** to add a new rule. If that happened to your real cluster I think it would be very bad.
-
-I definitely don’t have 5,000 services in my cluster, but 5,000 isn’t SUCH a big number. The proposal they give to solve this problem is to replace the iptables backend for kube-proxy with IPVS, which is a load balancer that lives in the Linux kernel.
-
-It seems like kube-proxy is going in the direction of various Linux kernel based load balancers. I think this is partly because they support UDP load balancing, and other load balancers (like HAProxy) don’t support UDP load balancing.
-
-But I feel comfortable with HAProxy! Is it possible to replace kube-proxy with HAProxy? I googled this and I found this [thread on kubernetes-sig-network][15] saying:
-
-> kube-proxy is so awesome, we have used in production for almost a year, it works well most of time, but as we have more and more services in our cluster, we found it was getting hard to debug and maintain. There is no iptables expert in our team, we do have HAProxy&LVS experts, as we have used these for several years, so we decided to replace this distributed proxy with a centralized HAProxy. I think this maybe useful for some other people who are considering using HAProxy with kubernetes, so we just update this project and make it open source: [https://github.com/AdoHe/kube2haproxy][5]. If you found it’s useful , please take a look and give a try.
-
-So that’s an interesting option! I definitely don’t have answers here, but, some thoughts:
-
-* Load balancers are complicated
-
-* DNS is also complicated
-
-* If you already have a lot of experience operating one kind of load balancer (like HAProxy), it might make sense to do some extra work to use that instead of starting to use an entirely new kind of load balancer (like kube-proxy)
-
-* I’ve been thinking about whether we want to be using kube-proxy or kube-dns at all – I think instead it might be better to just invest in Envoy and rely entirely on Envoy for all load balancing & service discovery. So then you just need to be good at operating Envoy.
-
-As you can see my thoughts on how to operate your Kubernetes internal proxies are still pretty confused and I’m still not super experienced with them. It’s totally possible that kube-proxy and kube-dns are fine and that they will just work fine, but I still find it helpful to think through what some of the implications of using them are (for example “you can’t have 5,000 Kubernetes services”).
-
-### Ingress
-
-If you’re running a Kubernetes cluster, it’s pretty likely that you actually need HTTP requests to get into your cluster somehow. This blog post is already too long and I don’t know much about ingress yet so we’re not going to talk about that.
-
-### Useful links
-
-A couple of useful links, to summarize:
-
-* [The Kubernetes networking model][6]
-
-* How GKE networking works: [https://www.youtube.com/watch?v=y2bhV81MfKQ][7]
-
-* The aforementioned talk on `kube-proxy` performance: [https://www.youtube.com/watch?v=4-pawkiazEg][8]
-
-### I think networking operations is important
-
-My sense of all this Kubernetes networking software is that it’s all still quite new and I’m not sure we (as a community) really know how to operate all of it well. This makes me worried as an operator because I really want my network to keep working!
:) Also I feel like as an organization running your own Kubernetes cluster you need to make a pretty large investment into making sure you understand all the pieces so that you can fix things when they break. Which isn’t a bad thing, it’s just a thing. - -My plan right now is just to keep learning about how things work and reduce the number of moving parts I need to worry about as much as possible. - -As usual I hope this was helpful and I would very much like to know what I got wrong in this post! - --------------------------------------------------------------------------------- - -via: https://jvns.ca/blog/2017/10/10/operating-a-kubernetes-network/ - -作者:[Julia Evans ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://jvns.ca/about -[1]:http://blog.sophaskins.net/blog/misadventures-with-kube-dns/ -[2]:https://github.com/coreos/flannel/pull/808 -[3]:https://github.com/coreos/flannel/pull/803 -[4]:https://github.com/coreos/flannel/issues/610 -[5]:https://github.com/AdoHe/kube2haproxy -[6]:https://kubernetes.io/docs/concepts/cluster-administration/networking/#kubernetes-model -[7]:https://www.youtube.com/watch?v=y2bhV81MfKQ -[8]:https://www.youtube.com/watch?v=4-pawkiazEg -[9]:https://jvns.ca/categories/kubernetes -[10]:https://kubernetes.io/docs/concepts/cluster-administration/networking/#kubernetes-model -[11]:https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/11-pod-network-routes.md -[12]:https://github.com/coreos/flannel/tree/master/backend/vxlan -[13]:https://github.com/kubernetes/kubernetes/issues/37932 -[14]:https://www.youtube.com/watch?v=4-pawkiazEg -[15]:https://groups.google.com/forum/#!topic/kubernetes-sig-network/3NlBVbTUUU0 diff --git a/sources/tech/20171121 Finding Files with mlocate- Part 3.md b/sources/tech/20171121 Finding Files with mlocate- Part 3.md deleted file mode 100644 index c9eccb2fc7..0000000000 --- a/sources/tech/20171121 Finding Files with mlocate- Part 3.md +++ /dev/null @@ -1,142 +0,0 @@ -Finding Files with mlocate: Part 3 -====== - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/question-mark-2492009_1920.jpg?itok=stJ3GxL2) -In the previous articles in this short series, we [introduced the mlocate][1] (or just locate) command, and then discussed some ways [the updatedb tool][2] can be used to help you find that one particular file in a thousand. - -You are probably also aware of xargs as well as the find command. Our trusty friend locate can also play nicely with the --null option of xargs by outputting all of the results onto one line (without spaces which isn't great if you want to read it yourself) by using the -0 switch like this: -``` -# locate -0 .bash -``` - -An option I like to use (if I remember to use it -- because the locate command rarely needs to be queried twice thanks to its simple syntax) is the -e option. -``` -# locate -e .bash -``` - -For the curious, that -e switch means "existing." And, in this case, you can use -e to ensure that any files returned by the locate command do actually exist at the time of the query on your filesystems. - -It's almost magical, that even on a slow machine, the mastery of the modern locate command allows us to query its file database and then check against the actual existence of many files in seemingly no time whatsoever. 
Let's try a quick test with a file search that's going to return a zillion results and use the time command to see how long it takes both with and without the -e option being enabled. - -I'll choose files with the compressed .gz extension. Starting with a count, you can see there's not quite a zillion but a fair number of files ending in .gz on my machine, note the -c for "count": -``` -# locate -c .gz -7539 -``` - -This time, we'll output the list but time it and see the abbreviated results as follows: -``` -# time locate .gz -real 0m0.091s -user 0m0.025s -sys 0m0.012s -``` - -That's pretty swift, but it's only reading from the overnight-run database. Let's get it to do a check against those 7,539 files, too, to see if they truly exist and haven't been deleted or renamed since last night: -``` -# time locate -e .gz -real 0m0.096s -user 0m0.028s -sys 0m0.055s -``` - -The speed difference is nominal as you can see. There's no point in talking about lightning or blink-and-you-miss-it, because those aren't suitable yardsticks. Relative to the other indexing service I mentioned previously, let's just say that's pretty darned fast. - -If you need to move the efficient database file used by the locate command (in my version it lives here: /var/lib/mlocate/mlocate.db) then that's also easy to do. You may wish to do this, for example, because you've generated a massive database file (it's only 1.1MB in my case so it's really tiny in reality), which needs to be put onto a faster filesystem. - -Incidentally, even the mlocate utility appears to have created an slocate group of users on my machine, so don't be too alarmed if you see something similar, as shown here from a standard file listing: -``` --rw-r-----. 1 root slocate 1.1M Jan 11 11:11 /var/lib/mlocate/mlocate.db -``` - -Back to the matter in hand. If you want to move away from /var/lib/mlocate as your directory being used by the database then you can use this command syntax (and you'll have to become the "root" user with sudo -i or su - for at least the first command to work correctly): -``` -# updatedb -o /home/chrisbinnie/my_new.db -# locate -d /home/chrisbinnie/my_new.db SEARCH_TERM -``` - -Obviously, replace your database name and path. The SEARCH_TERM element is the fragment of the filename that you're looking for (wildcards and all). - -If you remember I mentioned that you need to run updatedb command as the superuser to reach all the areas of your filesystems. - -This next example should cover two useful scenarios in one. According to the manual, you can also create a "private" database for standard users as follows: -``` -# updatedb -l 0 -o DATABASE -U source_directory -``` - -Here the previously seen -o option means that we output our database to a file (obviously called DATABASE). The -l 0 addition apparently means that the "visibility" of the database file is affected. It means (if I'm reading the docs correctly) that my user can read it but, otherwise, without that option, only the locate command can. - -The second useful scenario for this example is that we can create a little database file specifying exactly which path its top-level should be. Have a look at the database-root or -U source_directory option in our example. If you don't specify a new root file path, then the whole filesystem(s) is scanned instead. - -If you want to get clever and chuck a couple of top-level source directories into one command, then you can manage that having created two separate databases. Very useful for scripting methinks. 
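-
-As an aside, before running the combined query shown next, you'd create those two databases first – something along these lines (the -U source directories are made-up examples):
-```
-# updatedb -l 0 -o /home/chrisbinnie/database_one -U /home/chrisbinnie/projects
-# updatedb -l 0 -o /home/chrisbinnie/database_two -U /home/chrisbinnie/backups
-```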
-
-You can achieve that with this command:
-```
-# locate -d /home/chrisbinnie/database_one -d /home/chrisbinnie/database_two SEARCH_TERM
-```
-
-The manual dutifully warns, however, that ALL users that can read the DATABASE file can also get the complete list of files in the subdirectories of the chosen source_directory. So use these commands with some care.
-
-### Priced To Sell
-
-Back to the mind-blowing simplicity of the locate command in use on a day-to-day basis. There are many times when newbies may be confused by case-sensitivity on Unix-type systems. Simply use the conventional -i option to ignore case entirely when using the flexible locate command:
-```
-# locate -i ChrisBinnie.pdf
-```
-
-If you have a file structure that has a number of symlinks holding it together, then there might be occasions when you want to remove broken symlinks from the search results. You can do that with this command:
-```
-# locate -Le chrisbinnie_111111.xml
-```
-
-If you needed to limit the search results, then you could use this functionality, also in a script for example (similar to the -c option for counting), like so:
-```
-# locate -l25 *.gz
-```
-
-This command simply stops after outputting the first 25 files that were found. When piped through the grep command, it's very useful on a super busy system.
-
-### Popular Area
-
-We briefly touched upon performance earlier, and I happened to see this [nicely written blog entry][3], where the author discusses thoughts on the trade-offs between the database size becoming unwieldy and the speed at which results are delivered.
-
-What piqued my interest are the comments on how the original locate command was written and what limiting factors were considered during its creation. Namely, how disk space isn't quite so precious any longer, and nor is the delivery of results even when 700,000 files are involved.
-
-I'm certain that the author(s) of mlocate and its forebears would have something to say in response to that blog post. I suspect that holding onto the file permissions to give us the "secure" and "slocate" functionality in the database might be a fairly big hit in terms of overhead. And, as much as I enjoyed the post, I won't be writing a Bash script to replace mlocate any time soon. I'm more than happy with the locate command and extol its qualities at every opportunity.
-
-### Sold
-
-I hope you've acquired enough insight into the superb locate command to prune, tweak, adjust, and tune it to your unique set of requirements. As we've seen, it's fast, convenient, powerful, and efficient. Additionally, you can ignore the "root" user demands and use it within scripts for very specific tasks.
-
-My favorite aspect, however, is when I'm awakened in the middle of the night because of an emergency. It's not a good look, having to remember the complex find command and typing it slowly with bleary eyes (and managing to add lots of typos):
-```
-# find . -type f -name "*.gz"
-```
-
-Instead of that, I can just use the simple locate command:
-```
-# locate *.gz
-```
-
-As has been said, any fool can create something bigger, bolder, and tougher, but it takes a bit of genius to create something simpler. And, in terms of introducing more people to the venerable Unix-type command line, there's little argument that the locate command welcomes them with open arms.
-
-Learn more about essential sysadmin skills: Download the [Future Proof Your SysAdmin Career][4] ebook now.
-
-Chris Binnie's latest book, Linux Server Security: Hack and Defend, shows how hackers launch sophisticated attacks to compromise servers, steal data, and crack complex passwords, so you can learn how to defend against these attacks. In the book, he also talks you through making your servers invisible, performing penetration testing, and mitigating unwelcome attacks. You can find out more about DevSecOps and Linux security via his website ([http://www.devsecops.cc][5]).
-
---------------------------------------------------------------------------------
-
-via: https://www.linux.com/blog/learn/2017/11/finding-files-mlocate-part-3
-
-作者:[Chris Binnie][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.linux.com/users/chrisbinnie
-[1]:https://www.linux.com/blog/learn/intro-to-linux/2017/11/finding-files-mlocate
-[2]:https://www.linux.com/blog/learn/intro-to-linux/finding-files-mlocate-part-2
-[3]:http://jvns.ca/blog/2015/03/05/how-the-locate-command-works-and-lets-rewrite-it-in-one-minute/
-[4]:https://go.pardot.com/l/6342/2017-07-17/3vwshv?utm_source=linco&utm_medium=blog&utm_campaign=sysadmin&utm_content=promo
-[5]:http://www.devsecops.cc/
diff --git a/sources/tech/20171124 How do groups work on Linux.md
deleted file mode 100644
index 3e9c386e01..0000000000
--- a/sources/tech/20171124 How do groups work on Linux.md
+++ /dev/null
@@ -1,143 +0,0 @@
-HankChow Translating
-
-How do groups work on Linux?
-============================================================
-
-Hello! Last week, I thought I knew how users and groups worked on Linux. Here is what I thought:
-
-1. Every process belongs to a user (like `julia`)
-
-2. When a process tries to read a file owned by a group, Linux a) checks if the user `julia` can access the file, and b) checks which groups `julia` belongs to, and whether any of those groups owns & can access that file
-
-3. If either of those is true (or if the ‘any’ bits are set right) then the process can access the file
-
-So, for example, if a process is owned by the `julia` user and `julia` is in the `awesome` group, then the process would be allowed to read this file.
-
-```
-r--r--r-- 1 root awesome 6872 Sep 24 11:09 file.txt
-
-```
-
-I had not thought carefully about this, but if pressed I would have said that it probably checks the `/etc/group` file at runtime to see what groups you’re in.
-
-### that is not how groups work
-
-I found out at work last week that, no, what I describe above is not how groups work. In particular Linux does **not** check which groups a process’s user belongs to every time that process tries to access a file.
-
-Here is how groups actually work! I learned this by reading Chapter 9 (“Process Credentials”) of [The Linux Programming Interface][1], which is an incredible book. As soon as I realized that I did not understand how users and groups worked, I opened up the table of contents with absolute confidence that it would tell me what’s up, and I was right.
-
-### how user and group checks are done
-
-The key new insight for me was pretty simple! The chapter starts out by saying that user and group IDs are **attributes of the process**:
-
-* real user ID and group ID;
-
-* effective user ID and group ID;
-
-* saved set-user-ID and saved set-group-ID;
-
-* file-system user ID and group ID (Linux-specific); and
-
-* supplementary group IDs.
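-
-A neat way to convince yourself that these really are attributes of the process is to look in /proc – here’s a quick sketch (the IDs are example output, yours will differ):
-
-```
-$ grep -E '^(Uid|Gid|Groups)' /proc/self/status
-Uid:    1000    1000    1000    1000
-Gid:    1000    1000    1000    1000
-Groups: 4 27 1000
-```
-
-The four numbers on the Uid/Gid lines are the real, effective, saved, and file-system IDs, and the Groups line is the supplementary group IDs.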
-
-This means that the way Linux **actually** does group checks to see whether a process can read a file is:
-
-* look at the process’s group IDs & supplementary group IDs (from the attributes on the process, **not** by looking them up in `/etc/group`)
-
-* look at the group on the file
-
-* see if they match
-
-Generally when doing access control checks it uses the **effective** user/group ID, not the real user/group ID. Technically when accessing a file it actually uses the **file-system** ids but those are usually the same as the effective uid/gid.
-
-### Adding a user to a group doesn’t put existing processes in that group
-
-Here’s another fun example that follows from this: if I create a new `panda` group and add myself (bork) to it, then run `groups` to check my group memberships – I’m not in the panda group!
-
-```
-bork@kiwi~> sudo addgroup panda
-Adding group `panda' (GID 1001) ...
-Done.
-bork@kiwi~> sudo adduser bork panda
-Adding user `bork' to group `panda' ...
-Adding user bork to group panda
-Done.
-bork@kiwi~> groups
-bork adm cdrom sudo dip plugdev lpadmin sambashare docker lxd

-```
-
-no `panda` in that list! To double check, let’s try making a file owned by the `panda` group and see if I can access it:
-
-```
-$ touch panda-file.txt
-$ sudo chown root:panda panda-file.txt
-$ sudo chmod 660 panda-file.txt
-$ cat panda-file.txt
-cat: panda-file.txt: Permission denied
-
-```
-
-Sure enough, I can’t access `panda-file.txt`. No big surprise there. My shell didn’t have the `panda` group as a supplementary GID before, and running `adduser bork panda` didn’t do anything to change that.
-
-### how do you get your groups in the first place?
-
-So this raises kind of a confusing question, right – if processes have groups baked into them, how do you get assigned your groups in the first place? Obviously you can’t assign yourself more groups (that would defeat the purpose of access control).
-
-It’s relatively clear how processes I **execute** from my shell (bash/fish) get their groups – my shell runs as me, and it has a bunch of group IDs on it. Processes I execute from my shell are forked from the shell so they get the same groups as the shell had.
-
-So there needs to be some “first” process that has your groups set on it, and all the other processes you start inherit their groups from that. That process is called your **login shell** and it’s run by the `login` program (`/bin/login`) on my laptop. `login` runs as root and calls a C function called `initgroups` to set up your groups (by reading `/etc/group`). It’s allowed to set up your groups because it runs as root.
-
-### let’s try logging in again!
-
-So! Let’s say I am running in a shell, and I want to refresh my groups! From what we’ve learned about how groups are initialized, I should be able to run `login` to refresh my groups and start a new login shell!
-
-Let’s try it:
-
-```
-$ sudo login bork
-$ groups
-bork adm cdrom sudo dip plugdev lpadmin sambashare docker lxd panda
-$ cat panda-file.txt # it works! I can access the file owned by `panda` now!
-
-```
-
-Sure enough, it works! Now the new shell that `login` spawned is part of the `panda` group! Awesome! This won’t affect any other shells I already have running. If I really want the new `panda` group everywhere, I need to restart my login session completely, which means quitting my window manager and logging in again.
-
-### newgrp
-
-Somebody on Twitter told me that if you want to start a new shell with a new group that you’ve been added to, you can use `newgrp`.
Like this: - -``` -sudo addgroup panda -sudo adduser bork panda -newgrp panda # starts a new shell, and you don't have to be root to run it! - -``` - -You can accomplish the same(ish) thing with `sg panda bash` which will start a `bash` shell that runs with the `panda` group. - -### setuid sets the effective user ID - -I’ve also always been a little vague about what it means for a process to run as “setuid root”. It turns out that setuid sets the effective user ID! So if I (`julia`) run a setuid root process (like `passwd`), then the **real** user ID will be set to `julia`, and the **effective** user ID will be set to `root`. - -`passwd` needs to run as root, but it can look at its real user ID to see that `julia`started the process, and prevent `julia` from editing any passwords except for `julia`’s password. - -### that’s all! - -There are a bunch more details about all the edge cases and exactly how everything works in The Linux Programming Interface so I will not get into all the details here. That book is amazing. Everything I talked about in this post is from Chapter 9, which is a 17-page chapter inside a 1300-page book. - -The thing I love most about that book is that reading 17 pages about how users and groups work is really approachable, self-contained, super useful, and I don’t have to tackle all 1300 pages of it at once to learn helpful things :) - --------------------------------------------------------------------------------- - -via: https://jvns.ca/blog/2017/11/20/groups/ - -作者:[Julia Evans ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://jvns.ca/ -[1]:http://man7.org/tlpi/ diff --git a/sources/tech/20171224 My first Rust macro.md b/sources/tech/20171224 My first Rust macro.md deleted file mode 100644 index a8002e050b..0000000000 --- a/sources/tech/20171224 My first Rust macro.md +++ /dev/null @@ -1,145 +0,0 @@ -My first Rust macro -============================================================ - -Last night I wrote a Rust macro for the first time!! The most striking thing to me about this was how **easy** it was – I kind of expected it to be a weird hard finicky thing, and instead I found that I could go from “I don’t know how macros work but I think I could do this with a macro” to “wow I’m done” in less than an hour. - -I used [these examples][2] to figure out how to write my macro. - -### what’s a macro? - -There’s more than one kind of macro in Rust – - -* macros defined using `macro_rules` (they have an exclamation mark and you call them like functions – `my_macro!()`) - -* “syntax extensions” / “procedural macros” like `#[derive(Debug)]` (you put these like annotations on your functions) - -* built-in macros like `println!` - -[Macros in Rust][3] and [Macros in Rust part II][4] seems like a nice overview of the different kinds with examples - -I’m not actually going to try to explain what a macro **is**, instead I will just show you what I used a macro for yesterday and hopefully that will be interesting. I’m going to be talking about `macro_rules!`, I don’t understand syntax extension/procedural macros yet. - -### compiling the `get_stack_trace` function for 30 different Ruby versions - -I’d written some functions that got the stack trace out of a running Ruby program (`get_stack_trace`). But the function I wrote only worked for Ruby 2.2.0 – here’s what it looked like. 
Basically it imported some structs from `bindings::ruby_2_2_0` and then used them. - -``` -use bindings::ruby_2_2_0::{rb_control_frame_struct, rb_thread_t, RString}; -fn get_stack_trace(pid: pid_t) -> Vec { - // some code using rb_control_frame_struct, rb_thread_t, RString -} - -``` - -Let’s say I wanted to instead have a version of `get_stack_trace` that worked for Ruby 2.1.6. `bindings::ruby_2_2_0` and `bindings::ruby_2_1_6` had basically all the same structs in them. But `bindings::ruby_2_1_6::rb_thread_t` wasn’t the **same** as `bindings::ruby_2_2_0::rb_thread_t`, it just had the same name and most of the same struct members. - -So I could implement a working function for Ruby 2.1.6 really easily! I just need to basically replace `2_2_0` for `2_1_6`, and then the compiler would generate different code (because `rb_thread_t` is different). Here’s a sketch of what the Ruby 2.1.6 version would look like: - -``` -use bindings::ruby_2_1_6::{rb_control_frame_struct, rb_thread_t, RString}; -fn get_stack_trace(pid: pid_t) -> Vec { - // some code using rb_control_frame_struct, rb_thread_t, RString -} - -``` - -### what I wanted to do - -I basically wanted to write code like this, to generate a `get_stack_trace` function for every Ruby version. The code inside `get_stack_trace` would be the same in every case, it’s just the `use bindings::ruby_2_1_3` that needed to be different - -``` -pub mod ruby_2_1_3 { - use bindings::ruby_2_1_3::{rb_control_frame_struct, rb_thread_t, RString}; - fn get_stack_trace(pid: pid_t) -> Vec { - // insert code here - } -} -pub mod ruby_2_1_4 { - use bindings::ruby_2_1_4::{rb_control_frame_struct, rb_thread_t, RString}; - fn get_stack_trace(pid: pid_t) -> Vec { - // same code - } -} -pub mod ruby_2_1_5 { - use bindings::ruby_2_1_5::{rb_control_frame_struct, rb_thread_t, RString}; - fn get_stack_trace(pid: pid_t) -> Vec { - // same code - } -} -pub mod ruby_2_1_6 { - use bindings::ruby_2_1_6::{rb_control_frame_struct, rb_thread_t, RString}; - fn get_stack_trace(pid: pid_t) -> Vec { - // same code - } -} - -``` - -### macros to the rescue! - -This really repetitive thing was I wanted to do was a GREAT fit for macros. Here’s what using `macro_rules!` to do this looked like! - -``` -macro_rules! ruby_bindings( - ($ruby_version:ident) => ( - pub mod $ruby_version { - use bindings::$ruby_version::{rb_control_frame_struct, rb_thread_t, RString}; - fn get_stack_trace(pid: pid_t) -> Vec { - // insert code here - } - } -)); - -``` - -I basically just needed to put my code in and insert `$ruby_version` in the places I wanted it to go in. So simple! I literally just looked at an example, tried the first thing I thought would work, and it worked pretty much right away. - -(the [actual code][5] is more lines and messier but the usage of macros is exactly as simple in this example) - -I was SO HAPPY about this because I’d been worried getting this to work would be hard but instead it was so easy!! - -### dispatching to the right code - -Then I wrote some super simple dispatch code to call the right code depending on which Ruby version was running! 
- -``` - let version = get_api_version(pid); - let stack_trace_function = match version.as_ref() { - "2.1.1" => stack_trace::ruby_2_1_1::get_stack_trace, - "2.1.2" => stack_trace::ruby_2_1_2::get_stack_trace, - "2.1.3" => stack_trace::ruby_2_1_3::get_stack_trace, - "2.1.4" => stack_trace::ruby_2_1_4::get_stack_trace, - "2.1.5" => stack_trace::ruby_2_1_5::get_stack_trace, - "2.1.6" => stack_trace::ruby_2_1_6::get_stack_trace, - "2.1.7" => stack_trace::ruby_2_1_7::get_stack_trace, - "2.1.8" => stack_trace::ruby_2_1_8::get_stack_trace, - // and like 20 more versions - _ => panic!("OH NO OH NO OH NO"), - }; - -``` - -### it works! - -I tried out my prototype, and it totally worked! The same program could get stack traces out the running Ruby program for all of the ~10 different Ruby versions I tried – it figured which Ruby version was running, called the right code, and got me stack traces!! - -Previously I’d compile a version for Ruby 2.2.0 but then if I tried to use it for any other Ruby version it would crash, so this was a huge improvement. - -There are still more issues with this approach that I need to sort out. The two main ones right now are: firstly the ruby binary that ships with Debian doesn’t have symbols and I need the address of the current thread, and secondly it’s still possible that `#ifdefs` will ruin my day. - --------------------------------------------------------------------------------- - -via: https://jvns.ca/blog/2017/12/24/my-first-rust-macro/ - -作者:[Julia Evans ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://jvns.ca -[1]:https://jvns.ca/categories/ruby-profiler -[2]:https://gist.github.com/jfager/5936197 -[3]:https://www.ncameron.org/blog/macros-in-rust-pt1/ -[4]:https://www.ncameron.org/blog/macros-in-rust-pt2/ -[5]:https://github.com/jvns/ruby-stacktrace/blob/b0b92863564e54da59ea7f066aff5bb0d92a4968/src/lib.rs#L249-L393 diff --git a/sources/tech/20180104 How does gdb call functions.md b/sources/tech/20180104 How does gdb call functions.md deleted file mode 100644 index c88fae999e..0000000000 --- a/sources/tech/20180104 How does gdb call functions.md +++ /dev/null @@ -1,254 +0,0 @@ -translating by ucasFL - -How does gdb call functions? -============================================================ - -(previous gdb posts: [how does gdb work? (2016)][4] and [three things you can do with gdb (2014)][5]) - -I discovered this week that you can call C functions from gdb! I thought this was cool because I’d previously thought of gdb as mostly a read-only debugging tool. - -I was really surprised by that (how does that WORK??). As I often do, I asked [on Twitter][6] how that even works, and I got a lot of really useful answers! My favorite answer was [Evan Klitzke’s example C code][7] showing a way to do it. Code that  _works_  is very exciting! - -I believe (through some stracing & experiments) that that example C code is different from how gdb actually calls functions, so I’ll talk about what I’ve figured out about what gdb does in this post and how I’ve figured it out. - -There is a lot I still don’t know about how gdb calls functions, and very likely some things in here are wrong. - -### What does it mean to call a C function from gdb? - -Before I get into how this works, let’s talk quickly about why I found it surprising / nonobvious. - -So, you have a running C program (the “target program”). You want to run a function from it. 
To do that, you need to basically: - -* pause the program (because it is already running code!) - -* find the address of the function you want to call (using the symbol table) - -* convince the program (the “target program”) to jump to that address - -* when the function returns, restore the instruction pointer and registers to what they were before - -Using the symbol table to figure out the address of the function you want to call is pretty straightforward – here’s some sketchy (but working!) Rust code that I’ve been using on Linux to do that. This code uses the [elf crate][8]. If I wanted to find the address of the `foo` function in PID 2345, I’d run `elf_symbol_value("/proc/2345/exe", "foo")`. - -``` -fn elf_symbol_value(file_name: &str, symbol_name: &str) -> Result> { - // open the ELF file - let file = elf::File::open_path(file_name).ok().ok_or("parse error")?; - // loop over all the sections & symbols until you find the right one! - let sections = &file.sections; - for s in sections { - for sym in file.get_symbols(&s).ok().ok_or("parse error")? { - if sym.name == symbol_name { - return Ok(sym.value); - } - } - } - None.ok_or("No symbol found")? -} - -``` - -This won’t totally work on its own, you also need to look at the memory maps of the file and add the symbol offset to the start of the place that file is mapped. But finding the memory maps isn’t so hard, they’re in `/proc/PID/maps`. - -Anyway, this is all to say that finding the address of the function to call seemed straightforward to me but that the rest of it (change the instruction pointer? restore the registers? what else?) didn’t seem so obvious! - -### You can’t just jump - -I kind of said this already but – you can’t just find the address of the function you want to run and then jump to that address. I tried that in gdb (`jump foo`) and the program segfaulted. Makes sense! - -### How you can call C functions from gdb - -First, let’s see that this is possible. I wrote a tiny C program that sleeps for 1000 seconds and called it `test.c`: - -``` -#include - -int foo() { - return 3; -} -int main() { - sleep(1000); -} - -``` - -Next, compile and run it: - -``` -$ gcc -o test test.c -$ ./test - -``` - -Finally, let’s attach to the `test` program with gdb: - -``` -$ sudo gdb -p $(pgrep -f test) -(gdb) p foo() -$1 = 3 -(gdb) quit - -``` - -So I ran `p foo()` and it ran the function! That’s fun. - -### Why is this useful? - -a few possible uses for this: - -* it lets you treat gdb a little bit like a C REPL, which is fun and I imagine could be useful for development - -* utility functions to display / navigate complex data structures quickly while debugging in gdb (thanks [@invalidop][1]) - -* [set an arbitrary process’s namespace while it’s running][2] (featuring a not-so-surprising appearance from my colleague [nelhage][3]!) - -* probably more that I don’t know about - -### How it works - -I got a variety of useful answers on Twitter when I asked how calling functions from gdb works! A lot of them were like “well you get the address of the function from the symbol table” but that is not the whole story!! - -One person pointed me to this nice 2 part series on how gdb works that they’d written: [Debugging with the natives, part 1][9] and [Debugging with the natives, part 2][10]. Part 1 explains approximately how calling functions works (or could work – figuring out what gdb **actually** does isn’t trivial, but I’ll try my best!). - -The steps outlined there are: - -1. Stop the process - -2. 
Create a new stack frame (far away from the actual stack) - -3. Save all the registers - -4. Set the registers to the arguments you want to call your function with - -5. Set the stack pointer to the new stack frame - -6. Put a trap instruction somewhere in memory - -7. Set the return address to that trap instruction - -8. Set the instruction pointer register to the address of the function you want to call - -9. Start the process again! - -I’m not going to go through how gdb does all of these (I don’t know!) but here are a few things I’ve learned about the various pieces this evening. - -**Create a stack frame** - -If you’re going to run a C function, most likely it needs a stack to store variables on! You definitely don’t want it to clobber your current stack. Concretely – before gdb calls your function (by setting the instruction pointer to it and letting it go), it needs to set the **stack pointer** to… something. - -There was some speculation on Twitter about how this works: - -> i think it constructs a new stack frame for the call right on top of the stack where you’re sitting! - -and: - -> Are you certain it does that? It could allocate a pseudo stack, then temporarily change sp value to that location. You could try, put a breakpoint there and look at the sp register address, see if it’s contiguous to your current program register? - -I did an experiment where (inside gdb) I ran:` - -``` -(gdb) p $rsp -$7 = (void *) 0x7ffea3d0bca8 -(gdb) break foo -Breakpoint 1 at 0x40052a -(gdb) p foo() -Breakpoint 1, 0x000000000040052a in foo () -(gdb) p $rsp -$8 = (void *) 0x7ffea3d0bc00 - -``` - -This seems in line with the “gdb constructs a new stack frame for the call right on top of the stack where you’re sitting” theory, since the stack pointer (`$rsp`) goes from being `...bca8` to `..bc00` – stack pointers grow downward, so a `bc00`stack pointer is **after** a `bca8` pointer. Interesting! - -So it seems like gdb just creates the new stack frames right where you are. That’s a bit surprising to me! - -**change the instruction pointer** - -Let’s see whether gdb changes the instruction pointer! - -``` -(gdb) p $rip -$1 = (void (*)()) 0x7fae7d29a2f0 <__nanosleep_nocancel+7> -(gdb) b foo -Breakpoint 1 at 0x40052a -(gdb) p foo() -Breakpoint 1, 0x000000000040052a in foo () -(gdb) p $rip -$3 = (void (*)()) 0x40052a - -``` - -It does! The instruction pointer changes from `0x7fae7d29a2f0` to `0x40052a` (the address of the `foo` function). - -I stared at the strace output and I still don’t understand **how** it changes, but that’s okay. - -**aside: how breakpoints are set!!** - -Above I wrote `break foo`. I straced gdb while running all of this and understood almost nothing but I found ONE THING that makes sense to me!! - -Here are some of the system calls that gdb uses to set a breakpoint. It’s really simple! It replaces one instruction with `cc` (which [https://defuse.ca/online-x86-assembler.htm][11] tells me means `int3` which means `send SIGTRAP`), and then once the program is interrupted, it puts the instruction back the way it was. - -I was putting a breakpoint on a function `foo` with the address `0x400528`. - -This `PTRACE_POKEDATA` is how gdb changes the code of running programs. 
-
-```
-// change the 0x400528 instructions
-25622 ptrace(PTRACE_PEEKTEXT, 25618, 0x400528, [0x5d00000003b8e589]) = 0
-25622 ptrace(PTRACE_POKEDATA, 25618, 0x400528, 0x5d00000003cce589) = 0
-// start the program running
-25622 ptrace(PTRACE_CONT, 25618, 0x1, SIG_0) = 0
-// get a signal when it hits the breakpoint
-25622 ptrace(PTRACE_GETSIGINFO, 25618, NULL, {si_signo=SIGTRAP, si_code=SI_KERNEL, si_value={int=-1447215360, ptr=0x7ffda9bd3f00}}) = 0
-// change the 0x400528 instructions back to what they were before
-25622 ptrace(PTRACE_PEEKTEXT, 25618, 0x400528, [0x5d00000003cce589]) = 0
-25622 ptrace(PTRACE_POKEDATA, 25618, 0x400528, 0x5d00000003b8e589) = 0
-
-```
-
-**put a trap instruction somewhere**
-
-When gdb runs a function, it **also** puts trap instructions in a bunch of places! Here’s one of them (per strace). It’s basically replacing one instruction with `cc` (`int3`).
-
-```
-5908 ptrace(PTRACE_PEEKTEXT, 5810, 0x7f6fa7c0b260, [0x48f389fd89485355]) = 0
-5908 ptrace(PTRACE_PEEKTEXT, 5810, 0x7f6fa7c0b260, [0x48f389fd89485355]) = 0
-5908 ptrace(PTRACE_POKEDATA, 5810, 0x7f6fa7c0b260, 0x48f389fd894853cc) = 0
-
-```
-
-What’s `0x7f6fa7c0b260`? Well, I looked in the process’s memory maps, and it turns out it’s somewhere in `/lib/x86_64-linux-gnu/libc-2.23.so`. That’s weird! Why is gdb putting trap instructions in libc?
-
-Well, let’s see what function that’s in. It turns out it’s `__libc_siglongjmp`. The other functions gdb is putting traps in are `__longjmp`, `____longjmp_chk`, `dl_main`, and `_dl_close_worker`.
-
-Why? I don’t know! Maybe for some reason when our function `foo()` returns, it’s calling `longjmp`, and that is how gdb gets control back? I’m not sure.
-
-### how gdb calls functions is complicated!
-
-I’m going to stop there (it’s 1am!), but now I know a little more!
-
-It seems like the answer to “how does gdb call a function?” is definitely not that simple. I found it interesting to try to figure a little bit of it out and hopefully you have too!
-
-I still have a lot of unanswered questions about how exactly gdb does all of these things, but that’s okay. I don’t really need to know the details of how this works and I’m happy to have a slightly improved understanding.
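-
-One parting trick: if you want to try this quickly on your own machine, you don’t even need an interactive session – a sketch, assuming the same `test` program from earlier:
-
-```
-$ sudo gdb -p $(pgrep -f test) -batch -ex 'print foo()'
-$1 = 3
-```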
-
--------------------------------------------------------------------------------
-
-via: https://jvns.ca/blog/2018/01/04/how-does-gdb-call-functions/
-
-作者:[Julia Evans ][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://jvns.ca/
-[1]:https://twitter.com/invalidop/status/949161146526781440
-[2]:https://github.com/baloo/setns/blob/master/setns.c
-[3]:https://github.com/nelhage
-[4]:https://jvns.ca/blog/2016/08/10/how-does-gdb-work/
-[5]:https://jvns.ca/blog/2014/02/10/three-steps-to-learning-gdb/
-[6]:https://twitter.com/b0rk/status/948060808243765248
-[7]:https://github.com/eklitzke/ptrace-call-userspace/blob/master/call_fprintf.c
-[8]:https://cole14.github.io/rust-elf
-[9]:https://www.cl.cam.ac.uk/~srk31/blog/2016/02/25/#native-debugging-part-1
-[10]:https://www.cl.cam.ac.uk/~srk31/blog/2017/01/30/#native-debugging-part-2
-[11]:https://defuse.ca/online-x86-assembler.htm
diff --git a/sources/tech/20180109 Profiler adventures resolving symbol addresses is hard.md
deleted file mode 100644
index 971f575f5f..0000000000
--- a/sources/tech/20180109 Profiler adventures resolving symbol addresses is hard.md
+++ /dev/null
@@ -1,163 +0,0 @@
-Profiler adventures: resolving symbol addresses is hard!
-============================================================
-
-The other day I posted [How does gdb call functions?][1]. In that post I said:
-
-> Using the symbol table to figure out the address of the function you want to call is pretty straightforward
-
-Unsurprisingly, it turns out that figuring out the address in memory corresponding to a given symbol is actually not really that straightforward. This is actually something I’ve been doing in my profiler, and I think it’s interesting, so I thought I’d write about it!
-
-Basically the problem I’ve been trying to solve is – I have a symbol (like `ruby_api_version`), and I want to figure out which address that symbol is mapped to in my target process’s memory (so that I can get the data in it, like the Ruby process’s Ruby version). So far I’ve run into (and fixed!) 3 issues when trying to do this:
-
-1. When binaries are loaded into memory, they’re loaded at a random address (so I can’t just read the symbol table)
-
-2. The symbol I want isn’t necessarily in the “main” binary (`/proc/PID/exe`) – sometimes it’s in some other dynamically linked library
-
-3. I need to look at the ELF program header to adjust which address I look at for the symbol
-
-I’ll start with some background, and then explain these 3 things! (I actually don’t know what gdb does)
-
-### what’s a symbol?
-
-Most binaries have functions and variables in them. For instance, Perl has a global variable called `PL_bincompat_options` and a function called `Perl_sv_catpv_mg`.
-
-Sometimes binaries need to look up functions from another binary (for example, if the binary is a dynamically linked library, you need to look up its functions by name). Also sometimes you’re debugging your code and you want to know what function an address corresponds to.
-
-Symbols are how you look up functions / variables in a binary. They’re in a section called the “symbol table”. The symbol table is basically an index for your binary! Sometimes they’re missing (“stripped”). There are a lot of binary formats, but this post is just about the usual binary format on Linux: ELF.
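-
-To make “symbols” concrete, here’s a tiny sketch – one global variable and one function turn into two symbol table entries (the file name is made up, output abbreviated):
-
-```
-$ cat tiny.c
-int a_global = 42;
-int a_function() { return a_global; }
-$ gcc -c tiny.c && nm tiny.o
-0000000000000000 T a_function
-0000000000000000 D a_global
-```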
- -### how do you get the symbol table of a binary? - -A thing that I learned today (or at least learned and then forgot) is that there are 2 possible sections symbols can live in: `.symtab` and `.dynsym`. `.dynsym` is the “dynamic symbol table”. According to [this page][2], the dynsym is a smaller version of the symtab that only contains global symbols. - -There are at least 3 ways to read the symbol table of a binary on Linux: you can use nm, objdump, or readelf. - -* **read the .symtab**: `nm $FILE`, `objdump --syms $FILE`, `readelf -a $FILE` - -* **read the .dynsym**: `nm -D $FILE`, `objdump --dynamic-syms $FILE`, `readelf -a $FILE` - -`readelf -a` is the same in both cases because `readelf -a` just shows you everything in an ELF file. It’s my favorite because I don’t need to guess where the information I want is, I can just print out everything and then use grep. - -Here’s an example of some of the symbols in `/usr/bin/perl`. You can see that each symbol has a **name**, a **value**, and a **type**. The value is basically the offset of the code/data corresponding to that symbol in the binary. (except some symbols have value 0\. I think that has something to do with dynamic linking but I don’t understand it so we’re not going to get into it) - -``` -$ readelf -a /usr/bin/perl -... - Num: Value Size Type Ndx Name - 523: 00000000004d6590 49 FUNC 14 Perl_sv_catpv_mg - 524: 0000000000543410 7 FUNC 14 Perl_sv_copypv - 525: 00000000005a43e0 202 OBJECT 16 PL_bincompat_options - 526: 00000000004e6d20 2427 FUNC 14 Perl_pp_ucfirst - 527: 000000000044a8c0 1561 FUNC 14 Perl_Gv_AMupdate -... - -``` - -### the question we want to answer: what address is a symbol mapped to? - -That’s enough background! - -Now – suppose I’m a debugger, and I want to know what address the `ruby_api_version` symbol is mapped to. Let’s use readelf to look at the relevant Ruby binary! - -``` -readelf -a ~/.rbenv/versions/2.1.6/bin/ruby | grep ruby_api_version - 365: 00000000001f9180 12 OBJECT GLOBAL DEFAULT 15 ruby_api_version - -``` - -Neat! The offset of `ruby_api_version` is `0x1f9180`. We’re done, right? Of course not! :) - -### Problem 1: ASLR (Address space layout randomization) - -Here’s the first issue: when Linux loads a binary into memory (like `~/.rbenv/versions/2.1.6/bin/ruby`), it doesn’t just load it at the `0` address. Instead, it usually adds a random offset. Wikipedia’s article on ASLR explains why: - -> Address space layout randomization (ASLR) is a memory-protection process for operating systems (OSes) that guards against buffer-overflow attacks by randomizing the location where system executables are loaded into memory. - -We can see this happening in practice: I started `/home/bork/.rbenv/versions/2.1.6/bin/ruby` 3 times and every time the process gets mapped to a different place in memory. (`0x56121c86f000`, `0x55f440b43000`, `0x56163334a000`) - -Here we’re meeting our good friend `/proc/$PID/maps` – this file contains a list of memory maps for a process. The memory maps tell us every address range in the process’s virtual memory (it turns out virtual memory isn’t contiguous! Instead process get a bunch of possibly-disjoint memory maps!). This file is so useful! You can find the address of the stack, the heap, every dynamically loaded library, anonymous memory maps, and probably more. 
- -``` -$ cat /proc/(pgrep -f 2.1.6)/maps | grep 'bin/ruby' -56121c86f000-56121caf0000 r-xp 00000000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby -56121ccf0000-56121ccf5000 r--p 00281000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby -56121ccf5000-56121ccf7000 rw-p 00286000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby -$ cat /proc/(pgrep -f 2.1.6)/maps | grep 'bin/ruby' -55f440b43000-55f440dc4000 r-xp 00000000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby -55f440fc4000-55f440fc9000 r--p 00281000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby -55f440fc9000-55f440fcb000 rw-p 00286000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby -$ cat /proc/(pgrep -f 2.1.6)/maps | grep 'bin/ruby' -56163334a000-5616335cb000 r-xp 00000000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby -5616337cb000-5616337d0000 r--p 00281000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby -5616337d0000-5616337d2000 rw-p 00286000 00:32 323508 /home/bork/.rbenv/versions/2.1.6/bin/ruby - -``` - -Okay, so in the last example we see that our binary is mapped at `0x56163334a000`. If we combine this with the knowledge that `ruby_api_version` is at `0x1f9180`, then that means that we just need to look that the address `0x1f9180 + 0x56163334a000` to find our variable, right? - -Yes! In this case, that works. But in other cases it won’t! So that brings us to problem 2. - -### Problem 2: dynamically loaded libraries - -Next up, I tried running system Ruby: `/usr/bin/ruby`. This binary has basically no symbols at all! Disaster! In particular it does not have a `ruby_api_version`symbol. - -But when I tried to print the `ruby_api_version` variable with gdb, it worked!!! Where was gdb finding my symbol? I found the answer with the help of our good friend: `/proc/PID/maps` - -It turns out that `/usr/bin/ruby` dynamically loads a library called `libruby-2.3`. You can see it in the memory maps here: - -``` -$ cat /proc/(pgrep -f /usr/bin/ruby)/maps | grep libruby -7f2c5d789000-7f2c5d9f1000 r-xp 00000000 00:14 /usr/lib/x86_64-linux-gnu/libruby-2.3.so.2.3.0 -7f2c5d9f1000-7f2c5dbf0000 ---p 00268000 00:14 /usr/lib/x86_64-linux-gnu/libruby-2.3.so.2.3.0 -7f2c5dbf0000-7f2c5dbf6000 r--p 00267000 00:14 /usr/lib/x86_64-linux-gnu/libruby-2.3.so.2.3.0 -7f2c5dbf6000-7f2c5dbf7000 rw-p 0026d000 00:14 /usr/lib/x86_64-linux-gnu/libruby-2.3.so.2.3.0 - -``` - -And if we read it with `readelf`, we find the address of that symbol! - -``` -readelf -a /usr/lib/x86_64-linux-gnu/libruby-2.3.so.2.3.0 | grep ruby_api_version - 374: 00000000001c72f0 12 OBJECT GLOBAL DEFAULT 13 ruby_api_version - -``` - -So in this case the address of the symbol we want is `0x7f2c5d789000` (the start of the libruby-2.3 memory map) plus `0x1c72f0`. Nice! But we’re still not done. There is (at least) one more mystery! - -### Problem 3: the `vaddr` offset in the ELF program header - -This one I just figured out today so it’s the one I have the shakiest understanding of. Here’s what happened. - -I was running system ruby on Ubuntu 14.04: Ruby 1.9.3\. And my usual code (find the libruby map, get its address, get the symbol offset, add them up) wasn’t working!!! I was confused. - -But I’d asked Julian if he knew of any weird stuff I need to worry about a while back and he said “well, you should read the code for `dlsym`, you’re trying to do basically the same thing”. So I decided to, instead of randomly guessing, go read the code for `dlsym`. 
- -The man page for `dlsym` says “dlsym, dlvsym - obtain address of a symbol in a shared object or executable”. Perfect!! - -[Here’s the dlsym code from musl I read][3]. (musl is like glibc, but, different. Maybe easier to read? I don’t understand it that well.) - -The dlsym code says (on line 1468) `return def.dso->base + def.sym->st_value;` That sounds like what I’m doing!! But what’s `dso->base`? It looks like `base = map - addr_min;`, and `addr_min = ph->p_vaddr;`. (there’s also some stuff that makes sure `addr_min` is aligned with the page size which I should maybe pay attention to.) - -So the code I want is something like `map_base - ph->p_vaddr + sym->st_value`. - -I looked up this `vaddr` thing in the ELF program header, subtracted it from my calculation, and voilà! It worked!!! - -### there are probably more problems! - -I imagine I will discover even more ways that I am calculating the symbol address wrong. It’s interesting that such a seemingly simple thing (“what’s the address of this symbol?”) is so complicated! - -It would be nice to be able to just call `dlsym` and have it do all the right calculations for me, but I think I can’t because the symbol is in a different process. Maybe I’m wrong about that though! I would like to be wrong about that. If you know an easier way to do all this I would very much like to know! - --------------------------------------------------------------------------------- - -via: https://jvns.ca/blog/2018/01/09/resolving-symbol-addresses/ - -作者:[Julia Evans ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://jvns.ca -[1]:https://jvns.ca/blog/2018/01/04/how-does-gdb-call-functions/ -[2]:https://blogs.oracle.com/ali/inside-elf-symbol-tables -[3]:https://github.com/esmil/musl/blob/194f9cf93da8ae62491b7386edf481ea8565ae4e/src/ldso/dynlink.c#L1451 From 2b5773f1eaaccf2969831695032b85843b2fcf18 Mon Sep 17 00:00:00 2001 From: darksun Date: Thu, 22 Feb 2018 14:46:38 +0800 Subject: [PATCH 17/81] remove linuxtechlab.com --- ...Ultimate guide to securing SSH sessions.md | 139 --------- ...first Ansible server (automation) setup.md | 164 ----------- ...ful Linux Commands that you should know.md | 265 ------------------ ...ate with Let-s Encrypt on CentOS - RHEL.md | 115 -------- ...alling Awstat for analyzing Apache logs.md | 117 -------- ...ections using IFCONFIG - NMCLI commands.md | 176 ------------ 6 files changed, 976 deletions(-) delete mode 100644 sources/tech/20170428 Ultimate guide to securing SSH sessions.md delete mode 100644 sources/tech/20170505 Create your first Ansible server (automation) setup.md delete mode 100644 sources/tech/20170910 Useful Linux Commands that you should know.md delete mode 100644 sources/tech/20171212 Create a free Apache SSL certificate with Let-s Encrypt on CentOS - RHEL.md delete mode 100644 sources/tech/20180123 Installing Awstat for analyzing Apache logs.md delete mode 100644 sources/tech/20180202 Managing network connections using IFCONFIG - NMCLI commands.md diff --git a/sources/tech/20170428 Ultimate guide to securing SSH sessions.md b/sources/tech/20170428 Ultimate guide to securing SSH sessions.md deleted file mode 100644 index a96c4da6e7..0000000000 --- a/sources/tech/20170428 Ultimate guide to securing SSH sessions.md +++ /dev/null @@ -1,139 +0,0 @@ -Ultimate guide to securing SSH sessions -====== -Hi Linux-fanatics, in this tutorial we will be discussing some ways with which 
we make our ssh server more secure. OpenSSH is currently used by default to work on servers, as physical access to servers is very limited. We use ssh to copy/backup files/folders, to remotely execute commands, etc. But these ssh connections might not be as secure as we believe, & we must make some changes to our default settings to make them more secure.
-
-Here are the steps needed to secure our ssh sessions:
-
-### Use complex username & password
-
-This is the first problem that needs to be addressed; I have known users who have '12345' as their password. It seems they are inviting hackers to get themselves hacked. You should always have a complex password.
-
-It should have at least 8 characters with numbers & alphabets, lower case & upper case letters, and also special characters. A good example would be **vXdrf23#$wd** – it is not a word, so a dictionary attack will be useless, & it has uppercase & lowercase characters, numbers & special characters.
-
-### Limit user logins
-
-Not all the users in an organization are required to have access to ssh, so we should make changes to our configuration file to limit user logins. Let's say only Bob & Susan are authorized to have access to ssh, so open your configuration file
-
-```
- $ vi /etc/ssh/sshd_config
-```
-
-& add the allowed users to the bottom of the file
-
-```
- AllowUsers bob susan
-```
-
-Save the file & restart the service. Now only Bob & Susan will have access to ssh; others won't be able to access it.
-
-### Configure Idle logout time
-
-Once logged into an ssh session, there is a default time before the session logs out on its own. By default the idle logout time is 60 minutes, which according to me is way too much. Consider this: you logged into a session, executed some commands & then went out to get a cup of coffee, but you forgot to log out of the ssh session. Just think what could be done in 60 seconds, let alone in 60 minutes.
-
-So, it's wise to reduce the idle logout time to something around 5 minutes, & it can be done in the config file. Open '/etc/ssh/sshd_config' & change the values
-
-```
-ClientAliveInterval 300
-ClientAliveCountMax 0
-```
-
-The interval is in seconds, so configure it accordingly.
-
-### Disable root logins
-
-As we know, root has access to anything & everything on the server, so we must disable root access through ssh sessions. Even if root is needed to complete a task, we can escalate the privileges of a normal user instead.
-
-To disable root access, open your configuration file & change the following parameter
-
-```
-PermitRootLogin no
-```
-
-This will disable root access to ssh sessions.
-
-### Enable Protocol 2
-
-SSH protocol 1 had man-in-the-middle attack issues & other security issues as well; all these issues were addressed in protocol 2, so protocol 1 must not be used at any cost. To change the protocol, open your sshd_config file & change the following parameter
-
-```
- Protocol 2
-```
-
-### Enable a warning screen
-
-It would be a good idea to enable a warning screen stating a warning about misuse of ssh, just before a user logs into the session. To create a warning screen, create a file named **warning** in the **/etc/** folder (or any other folder) & write something like "We monitor all our sessions continuously. Don't misuse your access or else you will be prosecuted" or whatever you wish to warn. You can also consult your legal team about this warning to make it more official.
After this file is created, open the sshd_config file & point the banner parameter at it

```
 Banner /etc/warning
```

Now your warning message will be displayed each time someone tries to access the session.

### Use non-standard ssh port

By default, ssh uses port 22 & all the brute force scripts out there are written for port 22 only. So to make your sessions even more secure, use a non-standard port like 15000. But before selecting a port, make sure it's not being used by some other service.

To change the port, open sshd_config & change the following parameter

```
 Port 15000
```

Save & restart the service, and you can access ssh only via this new port. To start a session with the custom port, use the following command

```
 $ ssh -p 15000 {server IP}
```

 **Note:-** If you are using a firewall, open the port on it; we must also change the SELinux settings when using a custom port for ssh. Run the following command to update the SELinux label

```
$ semanage port -a -t ssh_port_t -p tcp 15000
```

### Limit IP access

If you have an environment where your server is accessed by only a limited number of IP addresses, you can also allow access to those IP addresses only. Open the sshd_config file & enter the following with your custom port

```
Port 15000
ListenAddress 192.168.1.100
ListenAddress 192.168.1.115
```

Now the ssh session will only be available to these mentioned IPs, with the custom port 15000.

### Disable empty passwords

As mentioned already, you should only use complex usernames & passwords, so using an empty password for remote login is a complete no-no. To disable empty passwords, open the sshd_config file & edit the following parameter

```
PermitEmptyPasswords no
```

### Use public/private key based authentication

Using public/private key based authentication has its advantages, i.e. you no longer need to enter the password when entering a session (unless you are using a passphrase to decrypt the key), & no one can have access to your server unless they have the right authentication key. The process to set up public/private key based authentication is discussed in [**this tutorial here**][1].

So, this completes our tutorial on securing your ssh server. If you have any doubts or issues, please leave a message in the comment box below.

--------------------------------------------------------------------------------

via: http://linuxtechlab.com/ultimate-guide-to-securing-ssh-sessions/

作者:[SHUSAIN][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linuxtechlab.com/author/shsuain/
[1]:http://linuxtechlab.com/configure-ssh-server-publicprivate-key/
[2]:https://www.facebook.com/techlablinux/
[3]:https://twitter.com/LinuxTechLab
[4]:https://plus.google.com/+linuxtechlab
[5]:http://linuxtechlab.com/contact-us-2/
diff --git a/sources/tech/20170505 Create your first Ansible server (automation) setup.md b/sources/tech/20170505 Create your first Ansible server (automation) setup.md
deleted file mode 100644
index 7bcd2997c8..0000000000
--- a/sources/tech/20170505 Create your first Ansible server (automation) setup.md
+++ /dev/null
@@ -1,164 +0,0 @@
Create your first Ansible server (automation) setup
======
Automation/configuration management tools are the new craze in the IT world, and organizations are moving towards adopting them.
There are many tools available in the market, like Puppet, Chef & Ansible, and in this tutorial we are going to learn about Ansible.

Ansible is an open source configuration tool that is used to deploy, configure & manage servers. Ansible is one of the easiest automation tools to learn and master. It does not require you to learn a complicated programming language like Ruby (used in Puppet & Chef); instead it uses YAML, which is a very simple language. Also, it does not require any special agent to be installed on client machines; it only requires the client machines to have Python and ssh installed, both of which are usually available on systems.

## Pre-requisites

Before we move onto the installation part, let's discuss the pre-requisites for Ansible

 1. For the server, we will need a machine with either CentOS or RHEL 7 installed & the EPEL repository enabled

To enable the EPEL repository, use the commands below,

 **RHEL/CentOS 7**

```
 $ rpm -Uvh https://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-10.noarch.rpm
```

 **RHEL/CentOS 6 (64 Bit)**

```
 $ rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
```

 **RHEL/CentOS 6 (32 Bit)**

```
 $ rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
```

 2. For the client machines, OpenSSH & Python should be installed. We also need to configure password-less login for the ssh session (create public-private keys). To create public-private keys & configure password-less login for the ssh session, refer to our article "[Setting up SSH Server for Public/Private keys based Authentication (Password-less login)][1]".

## Installation

Once we have the EPEL repository enabled, we can now install Ansible using yum,

```
 $ yum install ansible
```

## Configuring Ansible hosts

We will now configure the hosts that we want Ansible to manage. To do that we need to edit the file **/etc/ansible/hosts** & add the clients in the following syntax,

```
[group-name]
alias ansible_ssh_host=host_IP_address
```

where 'alias' is the alias name given to the host we are adding (it can be anything), & 'host_IP_address' is where we enter the IP address of the host.

For this tutorial, we are going to add 2 clients/hosts for Ansible to manage, so let's create an entry for these two hosts in the configuration file,

```
 $ vi /etc/ansible/hosts
 [test_clients]
 client1 ansible_ssh_host=192.168.1.101
 client2 ansible_ssh_host=192.168.1.10
```

Save the file & exit. Now, as mentioned in the pre-requisites, we should have password-less login to these clients from the Ansible server. To check if that's the case, ssh into the clients and we should be able to log in without a password,

```
 $ ssh root@192.168.1.101
```

If that's working, then we can move further; otherwise we need to create public/private keys for the ssh session (refer to the article mentioned above in the pre-requisites).

We are using root to log in to the other servers, but we can use other local users as well, & we need to tell Ansible which user it should use. To do so, we will first create a folder named 'group_vars' in '/etc/ansible'

```
 $ cd /etc/ansible
 $ mkdir group_vars
```

Next, we will create a file named after the group we have created in '/etc/ansible/hosts', i.e. test_clients

```
 $ vi test_clients
```

& add the following information about the user,

```
---
ansible_ssh_user: root
```

 **Note:-** The file starts with '---' (three minus symbols), so take note of that.
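
As a quick sanity check, the finished group_vars file could look like this (the commented port line is only an optional illustration, not part of the original setup),

```
---
# /etc/ansible/group_vars/test_clients
ansible_ssh_user: root
# ansible_ssh_port: 2222   # uncomment only if ssh listens on a custom port
```
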
If we want to use the same user for all the groups created, then we can create a single file named 'all' to hold the user details for ssh login, instead of creating a file for every group.

```
 $ vi /etc/ansible/group_vars/all

---
ansible_ssh_user: root
```

Similarly, we can set up files for individual hosts as well.

Now, the setup for the clients is done. We will now push some simple commands to all the clients being managed by Ansible.

## Testing hosts

To check the connectivity of all the hosts, we will issue a command,

```
 $ ansible -m ping all
```

If all the hosts are properly connected, it should return the following output,

```
 client1 | SUCCESS => {
 "changed": false,
 "ping": "pong"
 }
 client2 | SUCCESS => {
 "changed": false,
 "ping": "pong"
 }
```

We can also issue the command to an individual host,

```
 $ ansible -m ping client1
```

or to multiple hosts,

```
 $ ansible -m ping client1:client2
```

or even to a single group,

```
 $ ansible -m ping test_clients
```

This completes our tutorial on setting up an Ansible server; in future posts we will further explore the functionality offered by Ansible. If you have any doubts or queries regarding this post, use the comment box below.


--------------------------------------------------------------------------------

via: http://linuxtechlab.com/create-first-ansible-server-automation-setup/

作者:[SHUSAIN][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linuxtechlab.com/author/shsuain/
[1]:http://linuxtechlab.com/configure-ssh-server-publicprivate-key/
diff --git a/sources/tech/20170910 Useful Linux Commands that you should know.md b/sources/tech/20170910 Useful Linux Commands that you should know.md
deleted file mode 100644
index b3975de6ec..0000000000
--- a/sources/tech/20170910 Useful Linux Commands that you should know.md
+++ /dev/null
@@ -1,265 +0,0 @@
translating by liuxinyu123

Useful Linux Commands that you should know
======
If you are a Linux system administrator or just a Linux enthusiast/lover,
then you love & use the command line, aka the CLI. Until a few years ago,
the majority of Linux work was accomplished using the CLI only, & even
today there are some limitations to the GUI. Though there are plenty of
Linux distributions that can complete tasks with a GUI, learning the CLI
is still a major part of mastering Linux.

To this effect, we present you a list of useful Linux commands that you
should know.

 **Note:-** There is no definite order to all these commands & all of these
commands are equally important to learn & master in order to excel in Linux
administration. One more thing, we have only used some of the options of
each command as an example; you can refer to the 'man pages' for the
complete list of options for each command.

### 1- top command

The 'top' command displays the real-time summary/information of our system.
It also displays the processes and all the threads that are running & being
managed by the system kernel.

Information provided by the top command includes uptime, number of users,
load average, running/sleeping/zombie processes, CPU usage in percentage
based on users/system etc, system memory free & used, swap memory etc.

To use the top command, open a terminal & execute the command,

 **$ top**

To exit the command, either press 'q' or 'ctrl+c'.
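
If you need a one-shot, non-interactive snapshot of the same information
(handy for logging or scripts), top also has a batch mode; for example,

 **$ top -b -n 1**

Here '-b' runs top in batch mode & '-n 1' makes it exit after a single
iteration.
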
- -### 2- free command - -'free' command is used to specifically used to get the information about -system memory or RAM. With this command we can get information regarding -physical memory, swap memory as well as system buffers. It provided amount of -total, free & used memory available on the system. - -To use this utility, execute following command in terminal - - **$ free** - -It will present all the data in kb or kilobytes, for megabytes use options -'-m' & '-g ' for gb. - -#### 3- cp command - -'cp' or copy command is used to copy files among the folders. Syntax for using -'cp' command is, - - **$ cp source destination** - -### 4- cd command - -'cd' command is used for changing directory . We can switch among directories -using cd command. - -To use it, execute - - **$ cd directory_location** - -### 5- ifconfig - -'Ifconfig' is very important utility for viewing & configuring network -information on Linux machine. - -To use it, execute - - **$ ifconfig** - -This will present the network information of all the networking devices on the -system. There are number of options that can be used with 'ifconfig' for -configuration, in fact they are some many options that we have created a -separate article for it ( **Read it here ||[IFCONFIG command : Learn with some -examples][1]** ). - -### 6- crontab command - -'Crontab' is another important utility that is used schedule a job on Linux -system. With crontab, we can make sure that a command or a script is executed -at the pre-defined time. To create a cron job, run - - **$ crontab -e** - -To display all the created jobs, run - - **$ crontab -l** - -You can read our detailed article regarding crontab ( **Read it here ||[ -Scheduling Important Jobs with Crontab][2]** ) - -### 7- cat command - -'cat' command has many uses, most common use is that it's used to display -content of a file, - - **$ cat file.txt** - -But it can also be used to merge two or more file using the syntax below, - - **$ cat file1 file2 file3 file4 > file_new** - -We can also use 'cat' command to clone a whole disk ( **Read it here || -[Cloning Disks using dd & cat commands for Linux systems][3]** ) - -### 8- df command - -'df' command is used to show the disk utilization of our whole Linux file -system. Simply run. - - **$ df** - -& we will be presented with disk complete utilization of all the partitions on -our Linux machine. - -### 9- du command - -'du' command shows the amount of disk that is being utilized by the files & -directories on our Linux machine. To run it, type - - **$ du /directory** - -( **Recommended Read :[Use of du & df commands with examples][4]** ) - -### 10- mv command - -'mv' command is used to move the files or folders from one location to -another. Command syntax for moving the files/folders is, - - **$ mv /source/filename /destination** - -We can also use 'mv' command to rename a file/folder. Syntax for changing name -is, - - **$ mv file_oldname file_newname** - -### 11- rm command - -'rm' command is used to remove files\folders from Linux system. To use it, run - - **$ rm filename** - -We can also use '-rf' option with 'rm' command to completely remove a -file\folder from the system but we must use this with caution. - -### 12- vi/vim command - -VI or VIM is very famous & one of the widely used CLI-based text editor for -Linux. It takes some time to master it but it has a great number of utilities, -which makes it a favorite for Linux users. 
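
If you have never used it before, a handful of keystrokes already cover
basic editing; a minimal session looks like this,

 **$ vi file.txt**

Press 'i' to enter insert mode & start typing, press 'Esc' to get back to
command mode, then type ':wq' to save & quit (or ':q!' to quit without
saving).
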
- -For detailed knowledge of VIM, kindly refer to the articles [**Beginner 's -Guide to LVM (Logical Volume Management)** & **Working with Vi/Vim Editor : -Advanced concepts.**][5] - -### 13- ssh command - -SSH utility is to remotely access another machine from the current Linux -machine. To access a machine, execute - - **$ ssh[[email protected]][6] OR machine_name** - -Once we have remote access to machine, we can work on CLI of that machine as -if we are working on local machine. - -### 14- tar command - -'tar' command is used to compress & extract the files\folders. To compress the -files\folders using tar, execute - - **$ tar -cvf file.tar file_name** - -where file.tar will be the name of compressed folder & 'file_name' is the name -of source file or folders. To extract a compressed folder, - - **$ tar -xvf file.tar** - -For more details on 'tar' command, read [**Tar command : Compress & Decompress -the files\directories**][7] - -### 15- locate command - -'locate' command is used to locate files & folders on your Linux machines. To -use it, run - - **$ locate file_name** - -### 16- grep command - -'grep' command another very important command that a Linux administrator -should know. It comes especially handy when we want to grab a keyword or -multiple keywords from a file. Syntax for using it is, - - **$ grep 'pattern' file.txt** - -It will search for 'pattern' in the file 'file.txt' and produce the output on -the screen. We can also redirect the output to another file, - - **$ grep 'pattern' file.txt > newfile.txt** - -### 17- ps command - -'ps' command is especially used to get the process id of a running process. To -get information of all the processes, run - - **$ ps -ef** - -To get information regarding a single process, executed - - **$ ps -ef | grep java** - -### 18- kill command - -'kill' command is used to kill a running process. To kill a process we will -need its process id, which we can get using above 'ps' command. To kill a -process, run - - **$ kill -9 process_id** - -### 19- ls command - -'ls' command is used list all the files in a directory. To use it, execute - - **$ ls** - -### 20- mkdir command - -To create a directory in Linux machine, we use command 'mkdir'. Syntax for -using 'mkdir' is - - **$ mkdir new_dir** - -These were some of the useful linux commands that every System Admin should -know, we will soon be sharing another list of some more important commands -that you should know being a Linux lover. You can also leave your suggestions -and queries in the comment box below. 
- - --------------------------------------------------------------------------------- - -via: http://linuxtechlab.com/useful-linux-commands-you-should-know/ - -作者:[][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://linuxtechlab.com -[1]:http://linuxtechlab.com/ifconfig-command-learn-examples/ -[2]:http://linuxtechlab.com/scheduling-important-jobs-crontab/ -[3]:http://linuxtechlab.com/linux-disk-cloning-using-dd-cat-commands/ -[4]:http://linuxtechlab.com/du-df-commands-examples/ -[5]:http://linuxtechlab.com/working-vivim-editor-advanced-concepts/ -[6]:/cdn-cgi/l/email-protection#bbcec8dec9d5dad6defbf2ebdadfdfc9dec8c8 -[7]:http://linuxtechlab.com/tar-command-compress-decompress-files -[8]:https://www.facebook.com/linuxtechlab/ -[9]:https://twitter.com/LinuxTechLab -[10]:https://plus.google.com/+linuxtechlab -[11]:http://linuxtechlab.com/contact-us-2/ - diff --git a/sources/tech/20171212 Create a free Apache SSL certificate with Let-s Encrypt on CentOS - RHEL.md b/sources/tech/20171212 Create a free Apache SSL certificate with Let-s Encrypt on CentOS - RHEL.md deleted file mode 100644 index 1d85d60731..0000000000 --- a/sources/tech/20171212 Create a free Apache SSL certificate with Let-s Encrypt on CentOS - RHEL.md +++ /dev/null @@ -1,115 +0,0 @@ -Create a free Apache SSL certificate with Let’s Encrypt on CentOS & RHEL -====== - -Let's Encrypt is a free, automated & open certificate authority that is supported by ISRG, Internet Security Research Group. Let's encrypt provides X.509 certificates for TLS (Transport Layer Security) encryption via automated process which includes creation, validation, signing, installation, and renewal of certificates for secure websites. - -In this tutorial, we are going to discuss how to create an apache SSL certificate with Let's Encrypt certificate on Centos/RHEL 6 & 7\. To automate the Let's encrypt process, we will use Let's encrypt recommended ACME client i.e. CERTBOT, there are other ACME Clients as well but we will be using Certbot only. - -Certbot can automate certificate issuance and installation with no downtime, it automatically enables HTTPS on your website. It also has expert modes for people who don't want auto-configuration. It's easy to use, works on many operating systems, and has great documentation. - - **(Recommended Read:[Complete guide for Apache TOMCAT installation on Linux][1])** - -Let's start with Pre-requisites for creating an Apache SSL certificate with Let's Encrypt on CentOS, RHEL 6 &7….. - - -## Pre-requisites - - **1-** Obviously we will need Apache server to installed on our machine. We can install it with the following command, - - **# yum install httpd** - -For detailed Apache installation procedure, refer to our article[ **Step by Step guide to configure APACHE server.**][2] - - **2-** Mod_ssl should also be installed on the systems. Install it using the following command, - - **# yum install mod_ssl** - - **3-** Epel Repositories should be installed & enables. EPEL repositories are required as not all the dependencies can be resolved with default repos, hence EPEL repos are also required. 
Install them using the following command,

 **RHEL/CentOS 7**

 **# rpm -Uvh https://dl.fedoraproject.org/pub/epel/7/x86_64/packages/e/epel-release-7-11.noarch.rpm**

 **RHEL/CentOS 6 (64 Bit)**

 **# rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm**

 **RHEL/CentOS 6 (32 Bit)**

 **# rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm**

Now let's start with the procedure to install Let's Encrypt on CentOS/RHEL 7.

## Let's Encrypt on CentOS/RHEL 7

Installation on CentOS 7 can easily be performed with yum, with the following command,

 **$ yum install certbot-apache**

Once installed, we can create the SSL certificate with the following command,

 **$ certbot --apache**

Now just follow the on-screen instructions to generate the certificate. During the setup, you will also be asked whether to enforce HTTPS or to keep plain HTTP; select whichever you like. If you enforce HTTPS, then all the changes required to use HTTPS will be made by the certbot setup; otherwise we will have to make the changes on our own.

We can also generate a certificate for multiple websites with a single command,

 **$ certbot --apache -d example.com -d test.com**

We can also opt to create the certificate only, without automatically making any changes to the configuration files, with the following command,

 **$ certbot certonly --apache**

Certbot issues SSL certificates with 90 days validity, so we need to renew the certificates before that period is over. The ideal time to renew a certificate is around the 60-day mark. Run the following command to renew the certificate,

 **$ certbot renew**

We can also automate the renewal process with a crontab job. Open the crontab & create a job,

 **$ crontab -e**

 **0 0 1 * * /usr/bin/certbot renew >> /var/log/letsencrypt.log**

This job will renew your certificate on the 1st of every month at 12 AM.

## Let's Encrypt on CentOS 6

There are no certbot packages for CentOS 6, but that does not mean we can't make use of Let's Encrypt on CentOS/RHEL 6; instead, we can use the certbot-auto script for creating/renewing the certificates. Install the script with the following commands,

 **# wget https://dl.eff.org/certbot-auto**

 **# chmod a+x certbot-auto**

Now we can use it just as we used the commands for CentOS 7, but with the script instead of certbot. To create a new certificate,

 **# sh path/certbot-auto --apache -d example.com**

To create the cert only, use

 **# sh path/certbot-auto certonly --apache**

To renew the certs, use

 **# sh path/certbot-auto renew**

For creating a cron job, use

 **# crontab -e**

 **0 0 1 * * sh path/certbot-auto renew >> /var/log/letsencrypt.log**
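
Before relying on the scheduled job, it can be worth testing the renewal logic once without touching the live certificates; recent Certbot releases support a dry run for this (treat the availability of the flag as an assumption for your installed version),

 **$ certbot renew --dry-run**
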
- --------------------------------------------------------------------------------- - -via: http://linuxtechlab.com/create-free-apache-ssl-certificate-lets-encrypt-on-centos-rhel/ - -作者:[Shusain][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://linuxtechlab.com/author/shsuain/ -[1]:http://linuxtechlab.com/complete-guide-apache-tomcat-installation-linux/ -[2]:http://linuxtechlab.com/beginner-guide-configure-apache/ diff --git a/sources/tech/20180123 Installing Awstat for analyzing Apache logs.md b/sources/tech/20180123 Installing Awstat for analyzing Apache logs.md deleted file mode 100644 index b635417a47..0000000000 --- a/sources/tech/20180123 Installing Awstat for analyzing Apache logs.md +++ /dev/null @@ -1,117 +0,0 @@ -Installing Awstat for analyzing Apache logs -====== -AWSTAT is free an very powerful log analyser tool for apache log files. After analyzing logs from apache, it present them in easy to understand graphical format. Awstat is short for Advanced Web statistics & it works on command line interface or on CGI. - -In this tutorial, we will be installing AWSTAT on our Centos 7 machine for analyzing apache logs. - -( **Recommended read** :[ **Scheduling important jobs with crontab**][1]) - -### Pre-requisites - - **1-** A website hosted on apache web server, to create one read below mentioned tutorials on apache web servers, - -( **Recommended reads** - [**installing Apache**][2], [**Securing apache with SSL cert**][3] & **hardening tips for apache** ) - - **2-** Epel repository enabled on the system, as Awstat packages are not available on default repositories. To enable epel-repo , run - -``` -$ rpm -Uvh https://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-10.noarch.rpm -``` - -### Installing Awstat - -Once the epel-repository has been enabled on the system, awstat can be installed by running, - -``` - $ yum install awstat -``` - -When awstat is installed, it creates a file for apache at '/etc/httpd/conf.d/awstat.conf' with some configurations. These configurations are good to be used incase web server &awstat are configured on the same machine but if awstat is on different machine than the webserver, then some changes are to be made to the file. - -#### Configuring Apache for Awstat - -To configure awstat for a remote web server, open /etc/httpd/conf.d/awstat.conf, & update the parameter 'Allow from' with the IP address of the web server - -``` -$ vi /etc/httpd/conf.d/awstat.conf - - -Options None -AllowOverride None - -# Apache 2.4 -Require local - - -# Apache 2.2 -Order allow,deny -Allow from 127.0.0.1 -Allow from 192.168.1.100 - - -``` - -Save the file & restart the apache services to implement the changes, - -``` - $ systemctl restart httpd -``` - -#### Configuring AWSTAT - -For every website that we add to awstat, a different configuration file needs to be created with the website information . 
#### Configuring AWSTAT

For every website that we add to awstat, a different configuration file needs to be created with the website information. An example file is created in the folder '/etc/awstats' by the name 'awstats.localhost.localdomain.conf'; we can make a copy of it & configure our website with it,

```
$ cd /etc/awstats
$ cp awstats.localhost.localdomain.conf awstats.linuxtechlab.com.conf
```

Now open the file & edit the following three parameters to match your website,

```
$ vi awstats.linuxtechlab.com.conf

LogFile="/var/log/httpd/access.log"
SiteDomain="linuxtechlab.com"
HostAliases=www.linuxtechlab.com localhost 127.0.0.1
```

The last step is to build the statistics from the existing logs, which can be done by executing the command below,

```
/usr/share/awstats/wwwroot/cgi-bin/awstats.pl -config=linuxtechlab.com -update
```

#### Checking the awstat page

To test/check the awstat page, open a web browser & enter the following URL in the address bar,
**https://linuxtechlab.com/awstats/awstats.pl?config=linuxtechlab.com**

![awstat][5]

**Note:-** we can also schedule a cron job to update awstat on a regular basis. An example for the crontab,

```
$ crontab -e
0 1 * * * /usr/share/awstats/wwwroot/cgi-bin/awstats.pl -config=linuxtechlab.com -update
```

We now end our tutorial on installing Awstat for analyzing Apache logs; please leave your comments/queries in the comment box below.


--------------------------------------------------------------------------------

via: http://linuxtechlab.com/installing-awstat-analyzing-apache-logs/

作者:[SHUSAIN][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linuxtechlab.com/author/shsuain/
[1]:http://linuxtechlab.com/scheduling-important-jobs-crontab/
[2]:http://linuxtechlab.com/beginner-guide-configure-apache/
[3]:http://linuxtechlab.com/create-ssl-certificate-apache-server/
[4]:https://i1.wp.com/linuxtechlab.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif?resize=602%2C312
[5]:https://i0.wp.com/linuxtechlab.com/wp-content/uploads/2017/04/awstat.jpg?resize=602%2C312
diff --git a/sources/tech/20180202 Managing network connections using IFCONFIG - NMCLI commands.md b/sources/tech/20180202 Managing network connections using IFCONFIG - NMCLI commands.md
deleted file mode 100644
index 51f6e515d7..0000000000
--- a/sources/tech/20180202 Managing network connections using IFCONFIG - NMCLI commands.md
+++ /dev/null
@@ -1,176 +0,0 @@
Managing network connections using IFCONFIG & NMCLI commands
======
Earlier we discussed how we can configure network connections using three different methods, i.e. by editing the network interface file, by using the GUI & by using the nmtui command ([**READ ARTICLE HERE**][1]). In this tutorial, we are going to use two other methods to configure network connections on our RHEL/CentOS machines.
The first utility that we will be using is 'ifconfig', & we can configure the network on almost any Linux distribution using this method.

### Using Ifconfig

#### View current network settings

To view the network settings for all the active network interfaces, run

```
$ ifconfig
```

To view the network settings for all interfaces, active & inactive, run

```
$ ifconfig -a
```

Or to view the network settings for a particular interface, run

```
$ ifconfig enOs3
```
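
On many newer distributions the net-tools package that provides ifconfig is no longer installed by default; the iproute2 equivalents below are a rough mapping in case ifconfig is missing (they are not part of the original commands in this article),

```
$ ip addr show            # roughly 'ifconfig -a'
$ ip addr show enOs3      # settings for a single interface
```
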
#### Assigning IP address to an interface

To assign network information on an interface, i.e. IP address, netmask & broadcast address, the syntax is
ifconfig enOs3 IP_ADDRESS netmask SUBNET broadcast BROADCAST_ADDRESS
where we pass the information as per our network configuration. An example would be

```
$ ifconfig enOs3 192.168.1.100 netmask 255.255.255.0 broadcast 192.168.1.255
```

This will assign the IP 192.168.1.100 to our network interface enOs3. We can also modify just the IP, subnet or broadcast address by running the above command with only that parameter, like

```
$ ifconfig enOs3 192.168.1.100
$ ifconfig enOs3 netmask 255.255.255.0
$ ifconfig enOs3 broadcast 192.168.1.255
```

#### Enabling or disabling a network interface

To enable a network interface, run

```
$ ifconfig enOs3 up
```

To disable a network interface, run

```
$ ifconfig enOs3 down
```

( **Recommended read:** [**Assigning multiple IP addresses to a single NIC**][2])

**Note:-** When using ifconfig, entries for the gateway address are to be made in the /etc/network file, or use the following 'route' command to add a default gateway,

```
$ route add default gw 192.168.1.1 enOs3
```

For adding DNS, make an entry in /etc/resolv.conf.

### Using NMCLI

NetworkManager is used as the default networking service on RHEL/CentOS 7. It is a very powerful & useful utility for configuring and maintaining network connections, & to control the NetworkManager daemon we can use 'nmcli'.

**Syntax** for using nmcli is,
```
$ nmcli [ OPTIONS ] OBJECT { COMMAND | help }
```

#### Viewing current network settings

To display the status of NetworkManager, run

```
$ nmcli general status
```

to display only the active connections,

```
$ nmcli connection show -a
```

to display all active and inactive connections, run

```
$ nmcli connection show
```

to display a list of devices recognized by NetworkManager and their current status, run

```
$ nmcli device status
```

#### Assigning IP address to an interface

To assign an IP address & default gateway to a network interface, the syntax for the command is as follows,

```
$ nmcli connection add type ethernet con-name CONNECTION_name ifname INTERFACE_name ip4 IP_address gw4 GATEWAY_address
```

Change the fields as per your network information; an example would be

```
$ nmcli connection add type ethernet con-name office ifname enOs3 ip4 192.168.1.100 gw4 192.168.1.1
```

Unlike the ifconfig command, we can set up a DNS address using the nmcli command. To assign a DNS server to an interface, run

```
$ nmcli connection modify office ipv4.dns "8.8.8.8"
```

Lastly, we will bring up the newly added connection,

```
$ nmcli connection up office ifname enOs3
```

#### Enabling or disabling a network interface

For enabling an interface using nmcli, run

```
$ nmcli device connect enOs3
```

To disable an interface, run

```
$ nmcli device disconnect enOs3
```

That's it, guys. There are many other uses for both of these commands, but the examples mentioned here should get you started. If you have any issues/queries, please mention them in the comment box down below.
- - --------------------------------------------------------------------------------- - -via: http://linuxtechlab.com/managing-network-using-ifconfig-nmcli-commands/ - -作者:[SHUSAIN][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://linuxtechlab.com/author/shsuain/ -[1]:http://linuxtechlab.com/configuring-ip-address-rhel-centos/ -[2]:http://linuxtechlab.com/ip-aliasing-multiple-ip-single-nic/ From eb8395c07f81035e25fe3c0b1b094cad20377143 Mon Sep 17 00:00:00 2001 From: darksun Date: Thu, 22 Feb 2018 14:52:07 +0800 Subject: [PATCH 18/81] remove www.linuxandubuntu.com --- ...The Complete Partition Editor For Linux.md | 114 ------------- ...easons Why Linux Is Better Than Windows.md | 146 ----------------- ...ux Music Players To Stream Online Music.md | 134 --------------- ...0119 PlayOnLinux For Easier Use Of Wine.md | 153 ------------------ 4 files changed, 547 deletions(-) delete mode 100644 sources/tech/20171120 GParted The Complete Partition Editor For Linux.md delete mode 100644 sources/tech/20180102 10 Reasons Why Linux Is Better Than Windows.md delete mode 100644 sources/tech/20180102 Best Linux Music Players To Stream Online Music.md delete mode 100644 sources/tech/20180119 PlayOnLinux For Easier Use Of Wine.md diff --git a/sources/tech/20171120 GParted The Complete Partition Editor For Linux.md b/sources/tech/20171120 GParted The Complete Partition Editor For Linux.md deleted file mode 100644 index e6ab0e9c2c..0000000000 --- a/sources/tech/20171120 GParted The Complete Partition Editor For Linux.md +++ /dev/null @@ -1,114 +0,0 @@ -GParted The Complete Partition Editor For Linux -====== - -![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/gparted-the-complete-partition-editor-for-linux_orig.jpg) -**Partition editing**is a task which not only requires carefulness but also a stable environment. Today GParted is one of the leading partition editing tools on Linux environment. - -**GParted**is not only easy but also remains powerful at the same time. Today I am going to list out the installation as well as basics to use GParted which will be helpful to newbies. - -### How to install GParted? - -​Downloading and installing gparted is not a much difficult task. Today GParted is available on almost all distros and can be easily installed from their specific software center. Just go to software center and search “GParted” or use command line package manager to install it. - - [![install gparted from software center](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/install-gparted-from-software-center_orig.jpg)][1] In case you don’t have a software center or GParted isn’t available on software center you can always grab it from its official website.[Download][2] - -#### The Basics - -​Using GParted isn’t difficult. When I opened it for the first time 3 years ago, sure I was confused for a moment but I was quickly able to use it. Launching GParted requires admin privileges so it requires your password for launching. This is normal. - - [![type password to provide for admin privileges](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/type-password-to-provide-for-admin-privileges_orig.png)][3] - -​Below is the screen that GParted will display when you launch it the first time i.e. all your partitions of the hard disk (It will differ PC to PC). 
- - [![gparted user interface in linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/gparted-user-interface-in-linux_orig.png)][4] - -​The screen presented is not only simple but also effective. You will see that from left to right it displays address of the partition, type of partition, the mount point ( “/” indicates root), Label of partition (In case you name your partitions like I do), total size of partition and capacity used and unused as well as flags (never ever touch a partition with a flag unless you know what you are doing). The key sign in front of file systems indicates that the partition is currently mounted means used by the system. Right-click and select “unmount” to unmount it. - -You can see that it displays all sorts of information that you need to know about a particular partition you want to mess with. The bar with filled and unfilled portion indicates your hard disk. Also a simple thing “dev/sda” goes for hard disk while “dev/sdb” goes for your removable drives mostly flash drives (differs). - -You can change working on drives by clicking on the box at top right corner saying “dev/sda”. The tweaks you want are available on different options at the menu bar. - - [![edit usb partition in gparted](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edit-usb-partition-in-gparted_orig.png)][5] - -​This is my flash drive which I switched using the top right corner box as I told above. See it now indicates different details. Also as my drive is based on different format the color of bar changed. This is a really helpful feature as it indicates that partition changes if color differs. Editing external drives is also same as editing internal. - -#### Tweaking The Partitions - -Tweaking partition requires your full attention as this is somewhat risky because if it's done wrong, you will destroy your data. Keeping this point in mind proceed. - -Select a partition you want to work on rather it is on hard drive or flash drive is non-relevant. If the partition is mounted, unmount it. After unmounting the editing options will become available. - - [![gparted menu bar](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/gparted-menu-bar_orig.png)][6] - -This options can be accessed by menu bar or right mouse button too. The first option is for creating a new partition, the second one to delete, third one to resize partition, the fourth one to copy, fifth to paste. The last options are important as the second last is to revert changes and the last one is to apply changes. - -GParted doesn’t make changes in real time but keeps track of changes done by you. So you can easily revert a change if it is caused by mistake. Lastly, you can apply it and save changes. - -​Now let’s come to editing part. - -Let us assume you want to create a new partition by deleting existing one. Select partition of your choice hit the delete key and it will be deleted. Now for creating a new partition select that first option on the menu bar that indicates “new”. - -You will get the following options. - - [![create new partition with gparted](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/create-new-partition-with-gparted_orig.png)][7] - -​Here you can easily resize the partition by either entering values manually or drag the bar. If you want to change alignment do it with align option. You can choose whether to keep partition primary or secondary by option “create as”. Name the partition in Label and choose the appropriate file system. 
In case you want to access this partition on other OS like windows better use “ - -[NTFS][8] - -” format. - -There are times when data partition table is hampered. GParted handles this thing well too. There is the option to create new data partition table under device option. Remember creating data partition will destroy present data. - - [![select table type in gparted](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/select-table-type-in-gparted_orig.jpg)][9] - -​But what to do when you already having a pen drive on which you want to do data rescue? Gparted helps here too. Reach data rescue option under device section from the menu bar. This option requires the installation of additional components that can be accessed from software center. - - [![scan disk partition in gparted](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/scan-disk-partition-in-gparted_orig.jpg)][10] - -​You can also align flags to certain partition by option “flags”. Remember not to mess with flags unless you know what you are doing. There are a lot of other tweaks too to explore and use. Do that but remember mess with something unless you know what you are doing. - -#### Applying Tweaks - - [![apply changes to disk in gparted partition editor](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/apply-changes-to-disk-in-gparted-partition-editor_orig.png)][11] - -​After you have done tweaking you need to apply them. This could be done by using apply option I mentioned above. It will give you a warning. Check out if everything is proper before applying and proceed the warning to apply tweaks and your changes are done. Enjoy!. - -#### Other Features - -​Gparted offers live environment image files to boot and repair partitions in case of something wrong that can be downloaded from the website. GParted also shows tweaks it can do on your system, partition information and many others. Remember options will be differently available as per system. - - [![file system support in gparted](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/file-system-support-in-gparted_orig.png)][12] - -### Conclusion - -​Here we reach the end of my long article. - -**GParted**is a really nice, powerful software that has great capabilities. There is also a nice community on _GParted_ that will surely help you in case you come across a bug or doubt in which we [LinuxAndUbuntu][13]are too included. The power of GParted will help you to do almost all partition related task but you should be careful about what you are doing. - -Remember to always check out in the last what you are applying and is it right or not. In case you run across a problem don’t hesitate to comment and ask as we are always willing to help you. 
- --------------------------------------------------------------------------------- - -via: http://www.linuxandubuntu.com/home/gparted-the-complete-partition-editor-for-linux - -作者:[LinuxAndUbuntu][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.linuxandubuntu.com -[1]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/install-gparted-from-software-center_orig.jpg -[2]:http://gparted.org/download.php -[3]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/type-password-to-provide-for-admin-privileges_orig.png -[4]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/gparted-user-interface-in-linux_orig.png -[5]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edit-usb-partition-in-gparted_orig.png -[6]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/gparted-menu-bar_orig.png -[7]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/create-new-partition-with-gparted_orig.png -[8]:http://www.linuxandubuntu.com/home/fdisk-command-to-manage-disk-partitions-in-linux -[9]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/select-table-type-in-gparted_orig.jpg -[10]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/scan-disk-partition-in-gparted_orig.jpg -[11]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/apply-changes-to-disk-in-gparted-partition-editor_orig.png -[12]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/file-system-support-in-gparted_orig.png -[13]:http://www.linuxandubuntu.com/ diff --git a/sources/tech/20180102 10 Reasons Why Linux Is Better Than Windows.md b/sources/tech/20180102 10 Reasons Why Linux Is Better Than Windows.md deleted file mode 100644 index 65dcd946f0..0000000000 --- a/sources/tech/20180102 10 Reasons Why Linux Is Better Than Windows.md +++ /dev/null @@ -1,146 +0,0 @@ -translate by cyleft -10 Reasons Why Linux Is Better Than Windows -====== -It is often seen that people get confused over choosing Windows or Linux as host operating system in both - -[server][1] - -and desktop spaces. People will focus on aspects of cost, the functionality provided, hardware compatibility, support, reliability, security, pre-built software, cloud-readiness etc. before they finalize. In this regard, this article covers ten reasons of using Linux over Windows. - -## 10 Reasons Why Linux Is Better Than Windows - -### 1. Total cost of ownership - - ![linux less costly than windows](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/published/linux-less-costly-than-windows.jpg?1514905265) - -The most obvious advantage is that Linux is free whereas Windows is not. Windows license cost is different for both desktop and server versions. In case of Linux OS either it can be desktop or server, distro comes with no cost. Not only the OS even the related applications are completely free and open source. - -For personal use, a single Windows OS license fee may appear inexpensive but when considered for business, more employees means more cost. Not only the OS license cost, organization need to be ready to pay for applications like MS Office, Exchange, SharePoint that run on Windows. - -​ - -In Windows world, you cannot modify the OS as its source code is not open source. Same is the case with proprietary applications running on it. However, in case of Linux, a user can download even the source code of a Linux OS, change it and use it spending no money. 
Though some Linux distros charge for support, they are inexpensive when compared to Windows license price. - -### 2. Beginner friendly and easy to use - - [![linux mint easy to use](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/linux-mint-easy-to-use_orig.jpg)][2] - -Windows OS is one of the simplest desktop OS available today. Its graphical user-friendliness is exceptional. Though Windows has a relatively minimal learning curve, Linux distros like Ubuntu, Elementary OS, Linux Mint etc. are striving to improve the user experience that makes transition from Windows to Linux become smooth and easy. - -​ - -Linux distros allow user to choose one of the various desktop environments available: - -[Cinnamon][3] - -, - -[MATE][4] - -, - -[KDE][5] - -, - -[Xfce][6] - -, LXDE, GNOME etc. If a Windows user is looking to migrate to Linux, - -[WINE][7] - -(Wine Is Not an Emulator) can be installed to have a feel of MS Windows on a Linux system. - -### 3. Reliability - -Linux is more reliable when compared to Windows. Linux will rock with its top-notch design, built-in security resulting un-parallel up-time. Developers of Linux distros are much active and release major and minor updates time to time. Traditionally Unix-like systems are known for running for years without a single failure or having a situation which demands a restart. This is an important factor especially choosing a server system. Definitely Linux being a UNIX-like system, it will be a better choice. - -### 4. Hardware - - [![linux better hardware support](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/linux-better-hardware-support_orig.jpg)][8] - -Linux systems are known for consuming fewer system resources (RAM, disk space etc.) when compared to Windows. Hardware vendors already realized the popularity of Linux and started making Linux compliant hardware/drivers. When running the OS on older hardware, Windows is slower. - -​ - -Linux distros like Lubuntu, Knoppix, LXLE, antiX, Puppy Linux are best suitable for aging machines. Old horses like 386 or 486 machines with decent RAM (>= 124/256) can run Linux. - -### 5. Software - -No doubt that Windows has a large set of commercial software available. Linux, on the other hand, makes use of open source software available for free. Linux armed with easy to use package managers which aid in installing and un-installing desired software applications. Linux is armed with decent desktop themes certainly run faster than Windows. - -​ - -For developers, the Linux terminal offers superior environment when compared to Windows. The exhaustive GNU compilers and utilities will be definitely useful for programming. Administrators can make use of package managers to manage software and of course, Linux has the unbeatable CLI interface. - -​ - -Have you heard about Tiny Core Linux? It comes at only 11MB size with the graphical desktop environment. You can choose to install from the hundreds of available Linux distros based on your need. Following table presents a partial list of Linux distros filtered based on need: - - [![linux vast software support](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/linux_1_orig.png)][9] - -### 6. Security - - [![linux is more secure than windows](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/published/linux-is-more-secure-than-windows.jpeg?1514906182)][10] - -Microsoft Windows OS is infamous for being vulnerable to malware, trojans, and viruses. Linux is almost non-vulnerable and more secure due to its inherent design. 
Linux does not require the use of commercial anti-virus/anti-malware packages. - -Linux respects privacy. Unlike windows, it does not generate logs and upload data from your machine. A user should be well aware of Windows privacy policy. - -### 7. Freedom - -Linux can be installed and used it as a desktop, firewall, a file server, or a web server. Linux allows a user to control every aspect of the operating systems. As Linux is an open-source operating system, it allows a user to modify its source (even source code of applications) itself as per the user requirements. Linux allows the user to install only the desired software nothing else (no bloatware). Linux allows full freedom to install open source applications its vast repository. Windows will bore you with its default desktop theme whereas with Linux you can choose from many desktop themes available. - -​You can breathe fresh air after choosing a Linux distro from an available list of Linux distros. - -With USB live-mode option, you can give a try to test a Linux distro before you finalize one for you. Booting via live-mode does not install the OS on a hard disk. Just go and give a try, you will fall in love. - -### 8. Annoying crashes and reboots - -There are times when Windows suddenly shows an annoying message saying that the machine needs to be restarted. Apart from showing “Applying update of 5 of 361.” kind messages, Windows will confuse you with several types of updates critical, security, definition, update rollup, service pack, tool, feature pack. I did not remember how many times the Windows rebooted last time to apply an update. - -When undergoing a software update or installing/uninstalling software on Linux systems, generally it does not need a machine reboot. Most of the system configuration changes can be done while the system is up. - -### 9. Server segment - - [![linux server](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/linux-server_orig.jpg)][11] - -Linux is installed on the majority of servers demonstrating that it is the best choice with the minimal resource footprint. Even rivals are using Linux on their offerings. As software applications are moving to cloud platforms, windows servers are getting phased out to make room for Linux servers. Majority of the supercomputers to run on Linux. - -Though the battle between Linux and Windows continue in desktop-segment when comes to server-segment Linux evolves as a clear winner. Organizations rely on servers because they want their applications to run 24x7x365 with no or limited downtime. Linux already became favorite of most of the datacenters. - -### 10. Linux is everywhere - - [![linux is everywhere](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/linux-is-everywhere_orig.jpg)][12] - -Yes, Linux is everywhere. From smallest device to largest supercomputers, Linux is everywhere. It can be a car, router, phone, medical devices, plane, TV, satellite, watch or school tablet, Linux will be there. - -The inventor Linus Torvalds himself would not have imagined about this kind of success when he was writing the Linux kernel first time. Kudos to Linus and Stallman for their contribution. - -## Conclusion - -​There is a saying - variety is the spice of life. It is true with respect to Linux distros. There are more than 600 active different distros to choose. Each is different on its own and meant for the specific purpose. Linux distros are highly customizable when compared to Windows. The above reasons mentioned are is just the tip of the iceberg. 
There is so much more than you could with Linux. Linux is powerful, flexible, secure, reliable, stable, fun… than Windows. One should always keep in mind that - free is not the best just like expensive is not the best. Linux will undoubtedly emerge as the winner when all aspects are considered. There is no reason why you would not choose Linux instead of Windows. Let us know your thoughts how you feel about Linux. - --------------------------------------------------------------------------------- - -via: http://www.linuxandubuntu.com/home/10-reasons-why-linux-is-better-than-windows - -作者:[Ramakrishna Jujare][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.linuxandubuntu.com -[1]:http://www.linuxandubuntu.com/home/how-to-configure-sftp-server-on-centos -[2]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/linux-mint-easy-to-use_orig.jpg -[3]:http://www.linuxandubuntu.com/home/cinnamon-desktop-the-best-desktop-environment-for-new-linux-user -[4]:http://www.linuxandubuntu.com/home/linuxandubuntu-review-of-ubuntu-mate-1710 -[5]:http://www.linuxandubuntu.com/home/best-kde-linux-distributions-for-your-desktop -[6]:http://www.linuxandubuntu.com/home/xfce-desktop-environment-a-linux-desktop-environment-for-everyone -[7]:http://www.linuxandubuntu.com/home/how-to-install-wine-and-run-windows-apps-in-linux -[8]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/linux-better-hardware-support_orig.jpg -[9]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/linux_1_orig.png -[10]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/linux-is-more-secure-than-windows.jpeg -[11]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/linux-server_orig.jpg -[12]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/linux-is-everywhere_orig.jpg diff --git a/sources/tech/20180102 Best Linux Music Players To Stream Online Music.md b/sources/tech/20180102 Best Linux Music Players To Stream Online Music.md deleted file mode 100644 index 0780c0f397..0000000000 --- a/sources/tech/20180102 Best Linux Music Players To Stream Online Music.md +++ /dev/null @@ -1,134 +0,0 @@ -Best Linux Music Players To Stream Online Music -====== - ![Best Linux Music Players To Stream Online Music](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/best-linux-music-players-to-stream-online-music_orig.jpg) - -​For all the music lovers, what better way to enjoy music and relax than to stream your music online. Below are some of the best Linux music players out there you can use to stream music online and how you can get them running on your machine. It will be worth your while. - -## Best Linux Music Players To Stream Online Music - -### Spotify - - [![spotify stream online music](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/spotify_orig.png)][1] - -Spotify is known to be one of the best apps to stream online music. Spotify allows you to stream music and when you are offline, you can listen to your local files on your machine. Your music is organised into various genres to which the music belongs. Spotify allows you to see what your friends are listening to and also try them out yourself. The app looks great and is well organized, it is easy to search for songs. All you have to do is type into the search box and Wallah! You get the music you are searching for online. - -​The app is cross-platform and allows you to stream your favorite tunes. 
There are some catches though. You will need to create an account with Spotify to use the app. You can do so using Facebook or your email. You also have a choice to upgrade to premium which will be worth your money since you have access to high quality music and you can listen to music from any of your devices. Apparently Spotify is not available in every country. You can install Spotify by typing the following commands in the terminal: - -``` -sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 0DF731E45CE24F27EEEB1450EFDC8610341D9410 - -echo deb http://repository.spotify.com stable non-free | sudo tee /etc/apt/sources.list.d/spotify.list - -sudo apt-get update && sudo apt-get install spotify-client - -Once you have run the commands, you can start Spotify from your list of applications. -``` - -### Amarok - - [![Amarok stream online music in linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/amarok_orig.png)][2] - -Amarok is an open source media player that can play online music as well as local music files that you have on your PC. It has the capability of fetching online lyrics of any song whether online or playing locally, and the best bit about that is that the lyrics can scroll automatically. It is easy to customize to your preference. - -When you close Amarok, you have the option to let the music keep playing and a menu attach to the system tray to let you control your music so that it doesn’t use up all your resources as you enjoy your music. It has various services that you can choose from to stream your music from. It also integrates with the system tray as an icon with a menu that you can control music on. Amarok is available on the ubuntu software center for download and install. You can also use the following command in the terminal to install: - -``` -sudo apt-get update && sudo apt-get install amarok -``` - - [![amarok music player for linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/amarok_1_orig.png)][3] _Amarok in the notification area to control music play._ [![amarok configuration settings](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/amarok_2_orig.png)][4] _Some of the internet services found in amarok._ [![amarok panel widget](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/amarok_3_orig.png)][5] _Amarok integrates with the top bar when you close the main window._ - -### Audacious - - [![audacious music player to stream online music](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/audacious_orig.png)][6] - -Audacious is a simple, easy to use and customisable audio player. It is an open source audio player that you can stream your online music from. Audacious is not resource hungry leaving a lot for other applications, when you use it, you’ll feel that the app is light and has no impact on your system resources. In addition, you have the advantage of changing the theme to the theme you want. The themes are based on GTK and Winamp classic themes. You have the option to record music that you are streaming in case it pleases your ears. The player comes with visualizations that keep you feeling the rhythm. Audacious is available on the ubuntu software center for download and install. 
Alternatively, you can type the following into the Ubuntu terminal to install it:

```
sudo apt-get update && sudo apt-get install audacious
```

[![audacious music player interface](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/amarok_4_orig.png)][7] _Audacious has various themes to choose from._

### Rhythmbox

[![rhythmbox stream online music](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/rhythmbox_orig.png)][8]

Rhythmbox is a built-in app that comes with the GNOME desktop; if you want it on other distros, you will have to install it yourself. It is a lightweight music player you can use to stream your music, easy to use and less complicated than other online streaming players. To install Rhythmbox, type the following commands in the terminal:

```
sudo add-apt-repository ppa:fossfreedom/rhythmbox

sudo apt-get update && sudo apt-get install rhythmbox
```

### VLC Media Player

[![vlc music player stream online music](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vlc-media-player_orig.png)][9]

VLC is one of the most famous open source media players out there. It has options for streaming your music, it is easy to use, and it plays virtually any type of media. It has tons of features to discover, such as recording music streams and updating cover art for your tracks, and it even has an equalizer you can tweak so that your music comes out the way you want it to. You can add your own skins, or download and apply them, and adjust the UI of the app to your preferences. To install it, type the following commands into the terminal:

```
sudo apt-get update && sudo apt-get install vlc
```

### Harmony

[![harmony stream online music](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/harmony_orig.png)][10]

Harmony is an almost-free online music streaming application. It is almost free because there is a free version with no time limit that you can upgrade to a one-time paid version; the free version, however, shows an annoying dialog reminding you to upgrade. It has a great user interface and is easy to use, and it can also play local files. You can install any compatible plugin of your choice to get more out of the player, and it offers multiple streaming sources so you can enjoy the most the Internet has to offer. All you have to do is enable the source you wish to stream from and you are good to go.

To get the application on your machine, you will need to install it using the official .deb file from the official site [here][11]. I wish the creators of this app provided an official PPA that one could use to install it via the terminal.

### Mellow Player

[![mellow player stream online music](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/mellow-player_orig.png)][12]

Mellow Player is a free, open source, cross-platform music player for playing online music. It supports more than 16 online music services you can stream from, and there are plugins you can install to add the service you want. To install it, get it from the official site [here][13]. It comes as an AppImage, which is easy to install and doesn't interfere with other files, since everything it needs is sandboxed inside the image.
When you download the file, you will need to change the file properties by typing the following commands in the terminal: - -``` -chmod +x MellowPlayer-x86_64.AppImage -``` - -Then running it as follows: - -``` -sudo ./MellowPlayer-x86_64.AppImage -``` - -## Conclusion - -There are plenty of Linux music players out there but those are the best you would want to try out. All in all, the music players have their pros and cons, but the most important part is to sit down, open the app, start streaming your music and relax. The music players may or may not meet your expectations, but don’t forget to enjoy the next app incase the first isn’t for you. Enjoy! - --------------------------------------------------------------------------------- - -via: http://www.linuxandubuntu.com/home/best-linux-music-players-to-stream-online-music - -作者:[LINUXANDUBUNTU][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.linuxandubuntu.com -[1]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/spotify_orig.png -[2]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/amarok_orig.png -[3]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/amarok_1_orig.png -[4]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/amarok_2_orig.png -[5]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/amarok_3_orig.png -[6]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/audacious_orig.png -[7]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/amarok_4_orig.png -[8]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/rhythmbox_orig.png -[9]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vlc-media-player_orig.png -[10]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/harmony_orig.png -[11]:https://getharmony.xyz/download -[12]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/mellow-player_orig.png -[13]:https://colinduquesnoy.github.io/MellowPlayer/ diff --git a/sources/tech/20180119 PlayOnLinux For Easier Use Of Wine.md b/sources/tech/20180119 PlayOnLinux For Easier Use Of Wine.md deleted file mode 100644 index 2af3433920..0000000000 --- a/sources/tech/20180119 PlayOnLinux For Easier Use Of Wine.md +++ /dev/null @@ -1,153 +0,0 @@ -PlayOnLinux For Easier Use Of Wine -====== - -![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux-for-easier-use-of-wine_orig.jpg) - -[PlayOnLinux][1] is a free program that helps to install, run, and manage Windows software on Linux. It can also manage virtual C: drives (known as Wine prefixes), and download and install certain Windows libraries for getting some software to run on Wine properly. Creating different drives using different Wine versions is also possible. It is very handy because what runs well in one version may not run as well (if at all) on a newer version. There is [PlayOnMac][2] for macOS and PlayOnBSD for FreeBSD. - -[Wine][3] is the compatibility layer that allows many programs developed for Windows to run under operating systems such as Linux, FreeBSD, macOS and other UNIX systems. The app database ([AppDB][4]) gives users an overview of a multitude of programs that will function on Wine, however successfully. - -Both programs can be obtained using your distribution’s software center or package manager for convenience. - -### Installing Programs Using PlayOnLinux - -Installing software is easy. 
PlayOnLinux has hundreds of scripts to aid in installing different software with which to run the setup. In the sidebar, select “Install Software”. You will find several categories to choose from. - -​ - -Hundreds of games can be installed this way. - - [![install games playonlinux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_orig.png)][5] - -​Office software can be installed as well, including Microsoft Office as shown here. - - [![microsoft office in linux playonlinux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_1_orig.png)][6] - -​Let’s install Notepad++ using the script. You can select the script to read the compatibility rating according to PlayOnLinux, and an overview of the program. To get a better idea of compatibility, refer to the WineHQ App Database and find “Browse Apps” to find a program like Notepad++. - - [![install notepad++ in linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_2_orig.png)][7] - -​Once you press “Install”, if you are using PlayOnLinux for the first time, you will encounter two popups: one to give you tips when installing programs with a script, and the other to not submit bug reports to WineHQ because PlayOnLinux has nothing to do with them. - -​ - -​During the installation, I was given the choice to either download the setup executable, or select one on the computer. I downloaded the file but received a File Mismatch error; however, I continued and it was successful. It’s not perfect, but it is functional. (It is possible to submit bug reports to PlayOnLinux if the option is given.) - -[![bug report on playonlinux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_3_orig.png)][8] - -Nevertheless, I was able to install Notepad++ successfully, run it, and update it to the latest version (at the time of writing 7.5.3) from version 7.4.2. - -​ - -Also during installation, it created a virtual C: drive specifically for Notepad++. As there are no other Wine versions available for PlayOnLinux to use, it defaults to using the version installed on the system. In this case, it is more than adequate for Notepad++ to run smoothly. - -### Installing Non-Listed Programs - -You can also install a program that is not on the list by pressing “Install Non-Listed Program” on the bottom-left corner of the install menu. Bear in mind that there is no script to install certain libraries to make things work properly. You will need to do this yourself. Look at the Wine AppDB for information for your program. Also, if the app isn’t listed, it doesn’t mean that it won’t work with Wine. It just means no one has given any information about it. - -​ - -I’ve installed Graphmatica, a graph plotting program, using this method. First I selected the option to install it on a new virtual drive. - - [![install non listed programs on linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_4_orig.png)][9] - -​Then I selected the option to install additional libraries after creating the drive and select a Wine version to use in doing so. - - [![playonlinux setup wizard](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_5_orig.png)][10] - -​I then proceeded to select Gecko (which encountered an error for some reason), and Mono 2.10 to install. - - [![playonlinux wizard POL_install](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_6_orig.png)][11] - -​Finally, I installed Graphmatica. It’s as simple as that. 
- - [![software installation done playonlinux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_7_orig.png)][12] - -A launcher can be created after installation. A list of executables found in the drive will appear. Search for the app executable (may not always be obvious) which may have its icon, select it and give it a display name. The icon will appear on the desktop. - - [![install graphmatica in linux playonlinux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_8_orig.png)][13] - [![playonlinux install windows software](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_9_orig.png)][14] - -### Multiple “C:” Drives - -Now that we have easily installed a program, let’s have a look at the drive configuration. In the main window, press “Configure” in the toolbar and this window will show. - - [![multiple c: drives in linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/editor/playonlinux_10.png?1516170517)][15] - -On the left are the drives that are found within PlayOnLinux. To the right, the “General” tab allows you to create shortcuts of programs installed on that virtual drive. - -​ - -The “Wine” tab has 8 buttons, including those to launch the Wine configuration program (winecfg), control panel, registry editor, command prompt, etc. - - [![playonlinux configuration wine](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_11_orig.png)][16] - -​“Install Components” allows you to select different Windows libraries like DirectX 9, .NET Framework versions 2 – 4.5, Visual C++ runtime, etc., like [winetricks][17]. - - [![install playonlinux components](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_12_orig.png)][18] - -“Display” allows the user to control advanced graphics settings like GLSL support, video memory size, and more. And “Miscellaneous” is for other actions like running an executable found anywhere on the computer to be run under the selected virtual drive. - -### Creating Virtual Drives Without Installing Programs - -To create a drive without installing software, simply press “New” below the list of drives to launch the virtual drive creator. Drives are created using the same method used in installing programs not found in the install menu. Follow the prompts, select either a 32-bit or 64-bit installation (in this case we only have 32-bit versions so select 32-bit), choose the Wine version, and give the drive a name. Once completed, it will appear in the drive list. - - [![playonlinux sandbox](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_13_orig.png)][19] - -### Managing Wine Versions - -Entire Wine versions can be downloaded using the manager. To access this through the menu bar, press “Tools” and select “Manage Wine versions”. Sometimes different software can behave differently between Wine versions. A Wine update can break something that made your application work in the previous version; thus rendering the application broken or completely unusable. Therefore, this feature is one of the highlights of PlayOnLinux. - -​ - -If you’re still on the configuration window, in the “General” tab, you can also access the version manager by pressing the “+” button next to the Wine version field. - - [![playonlinux select wine version](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_14_orig.png)][20] - -To install a version of Wine (32-bit or 64-bit), simply select the version, and press the “>” button to download and install it. 
After installation, if setup executables for Mono, and/or the Gecko HTML engine have not yet been downloaded by PlayOnLinux, they will be downloaded. - -​ - -I went ahead and installed the 2.21-staging version of Wine afterward. - - [![select wine version playonlinux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_15_orig.png)][21] - -​To remove a version, press the “<” button. - -### Conclusion - -​This article demonstrated how to use PlayOnLinux to easily install Windows software into separate virtual C: drives, create and manage virtual drives, and manage several Wine versions. The software isn’t perfect, but it is still functional and useful. Managing different drives with different Wine versions is one of the key features of PlayOnLinux. It is a lot easier to use a front-end for Wine such as PlayOnLinux than pure Wine. - - --------------------------------------------------------------------------------- - -via: http://www.linuxandubuntu.com/home/playonlinux-for-easier-use-of-wine - -作者:[LinuxAndUbuntu][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.linuxandubuntu.com -[1]:https://www.playonlinux.com/en/ -[2]:https://www.playonmac.com -[3]:https://www.winehq.org/ -[4]:http://appdb.winehq.org/ -[5]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_orig.png -[6]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_1_orig.png -[7]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_2_orig.png -[8]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_3_orig.png -[9]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_4_orig.png -[10]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_5_orig.png -[11]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_6_orig.png -[12]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_7_orig.png -[13]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_8_orig.png -[14]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_9_orig.png -[15]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_10_orig.png -[16]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_11_orig.png -[17]:https://github.com/Winetricks/winetricks -[18]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_12_orig.png -[19]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_13_orig.png -[20]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_14_orig.png -[21]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_15_orig.png From f222d6cc8ff18e4dae15f9b55f85386f14098ba3 Mon Sep 17 00:00:00 2001 From: darksun Date: Thu, 22 Feb 2018 15:05:37 +0800 Subject: [PATCH 19/81] remove ryanmccue.ca --- .../20171128 Your API is missing Swagger.md | 56 --------- ...1 5 Podcasts Every Dev Should Listen to.md | 54 --------- ...print for Simple Scalable Microservices.md | 48 -------- ...ou Contract Out the Backend of Your App.md | 60 ---------- ...225 Where to Get Your App Backend Built.md | 88 -------------- .../20180111 What is the deal with GraphQL.md | 43 ------- .../20180129 5 Real World Uses for Redis.md | 109 ------------------ 7 files changed, 458 deletions(-) delete mode 100644 sources/talk/20171128 Your API is missing Swagger.md delete mode 100644 sources/talk/20171201 5 Podcasts Every Dev Should 
Listen to.md
delete mode 100644 sources/talk/20171215 Blueprint for Simple Scalable Microservices.md
delete mode 100644 sources/talk/20171223 5 Things to Look for When You Contract Out the Backend of Your App.md
delete mode 100644 sources/talk/20171225 Where to Get Your App Backend Built.md
delete mode 100644 sources/tech/20180111 What is the deal with GraphQL.md
delete mode 100644 sources/tech/20180129 5 Real World Uses for Redis.md

diff --git a/sources/talk/20171128 Your API is missing Swagger.md b/sources/talk/20171128 Your API is missing Swagger.md
deleted file mode 100644
index af0106a121..0000000000
--- a/sources/talk/20171128 Your API is missing Swagger.md
+++ /dev/null
@@ -1,56 +0,0 @@
Your API is missing Swagger
======

![](https://ryanmccue.ca/content/images/2017/11/top-20mobileapps--3-.png)

We have all struggled through thrown-together, convoluted API documentation. It is frustrating and, in the worst case, can lead to bad requests. Understanding an API is something most developers have to do on a regular basis, so it is a wonder that the majority of APIs have such horrific documentation.

[Swagger][1] is the solution to this problem. Swagger came out in 2011 and is an open source software framework with many tools that help developers design, build, document, and consume RESTful APIs. Designing an API with Swagger, or documenting it with Swagger afterwards, helps everyone consume your API seamlessly. One of the amazing features many people do not know about is that you can actually **generate** a client from Swagger! That's right: if a service you're consuming has Swagger documentation, you can generate a client to consume it!

All major languages have support for connecting Swagger to your API. Depending on the language you're writing your API in, you can even have the Swagger documentation generated from the actual code. Here are some of the standout Swagger libraries I've seen recently.

### Golang

Golang has a couple of great tools for integrating Swagger into your API. The first is [go-swagger][2], a tool that lets you generate the scaffolding for an API from a Swagger file. This is a fundamentally different way of thinking about APIs: instead of building endpoints and thinking about new ones on the fly, go-swagger gets you to think through your API before you write a single line of code. This can help you visualize what you want the API to do first. Another Golang tool is called [Goa][3]. A quote from their website sums up what Goa is:

> goa provides a novel approach for developing microservices that saves time when working on independent services and helps with keeping the overall system consistent. goa uses code generation to handle both the boilerplate and ancillary artifacts such as documentation, client modules, and client tools.

They take designing the API before implementing it to a new level. Goa has a DSL that lets you programmatically describe your entire API, from endpoints to payloads to responses. From this DSL, Goa generates a Swagger file for anyone who consumes your API, and it will enforce that your endpoints output the correct data, keeping your API and documentation in sync. This feels counter-intuitive when you start, but after actually implementing an API with Goa, you will wonder how you ever did it before.

### Python

[Flask][4] has a great extension for building an API with Swagger called [Flask-RESTPlus][5].

> If you are familiar with Flask, Flask-RESTPlus should be easy to pick up.
It provides a coherent collection of decorators and tools to describe your API and expose its documentation properly using Swagger. - -It uses python decorators to generate swagger documentation and can be used to enforce endpoint output similar to Goa. It can be very powerful and makes generating swagger from an API stupid easy. - -### NodeJS - -Finally, NodeJS has a powerful tool for working with Swagger called [swagger-js-codegen][6]. It can generate both servers and clients from a swagger file. - -> This package generates a nodejs, reactjs or angularjs class from a swagger specification file. The code is generated using mustache templates and is quality checked by jshint and beautified by js-beautify. - -It is not quite as easy to use as Goa and Flask-RESTPlus, but if Node is your thing, this will do the job. It shines when it comes to generating frontend code to interface with your API, which is perfect if you're developing a web app to go along with the API. - -### Conclusion - -Swagger is a simple yet powerful representation of your RESTful API. When used properly it can help flush out your API design and make it easier to consume. Harnessing its full power can save you time by forming and visualizing your API before you write a line of code, then generate the boilerplate surrounding the core logic. And with tools like [Goa][3], [Flask-RESTPlus][5], and [swagger-js-codegen][6] which will make the whole experience of architecting and implementing an API painless, there is no excuse not to have Swagger. - --------------------------------------------------------------------------------- - -via: https://ryanmccue.ca/your-api-is-missing-swagger/ - -作者:[Ryan McCue][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://ryanmccue.ca/author/ryan/ -[1]:http://swagger.io -[2]:https://github.com/go-swagger/go-swagger -[3]:https://goa.design/ -[4]:http://flask.pocoo.org/ -[5]:https://github.com/noirbizarre/flask-restplus -[6]:https://github.com/wcandillon/swagger-js-codegen diff --git a/sources/talk/20171201 5 Podcasts Every Dev Should Listen to.md b/sources/talk/20171201 5 Podcasts Every Dev Should Listen to.md deleted file mode 100644 index 72586c8f35..0000000000 --- a/sources/talk/20171201 5 Podcasts Every Dev Should Listen to.md +++ /dev/null @@ -1,54 +0,0 @@ -5 Podcasts Every Dev Should Listen to -====== - -![](https://ryanmccue.ca/content/images/2017/11/Electric-Love.png) - -Being a developer is a tough job, the landscape is constantly changing, and new frameworks and best practices come out every month. Having a great go-to list of podcasts keeping you up to date on the industry can make a huge difference. I've done some of the hard work and created a list of the top 5 podcasts I personally listen too. - -### This Developer's Life - -Unlike many developer-focused podcasts, there is no talk of code or explanations of software architecture in [This Developer's Life][1]. There are just relatable stories from other developers. This Developer's Life dives into the issues developers face in their daily lives, from a developers point of view. [Rob Conery][2] and [Scott Hanselman][3] host the show and it focuses on all aspects of a developers life. For example, what it feels like to get fired. To hit a home run. To be competitive. It is a very well made podcast and isn't just for developers, but it can also be enjoyed by those that love and live with them. 
- -### Developer Tea - -Don’t have a lot of time? [Developer Tea][4] is "A podcast for developers designed to fit inside your tea break." The podcast exists to help driven developers connect with their purpose and excel at their work so that they can make an impact. Hosted by [Jonathan Cutrell][5], the director of technology at Whiteboard, Developer Tea breaks down the news and gives useful insights into all aspects of a developers life in and out of work. Cutrell explains listener questions mixed in with news, interviews, and career advice during his show, which releases multiple episodes every week. - -### Software Engineering Today - -[Software Engineering Daily][6] is a daily podcast which focuses on heavily technical topics like software development and system architecture. It covering a range of topics from load balancing at scale and serverless event-driven architecture to augmented reality. Hosted by [Jeff Meyerson][7], this podcast is great for developers who have a passion for learning about complicated software topics to expand their knowledge base. - -### Talking Code - -The [Talking Code][8] podcast is from 2015, and contains 24 episodes which have "short expert interviews that help you decode what developers are saying." The hosts, [Josh Smith][9] and [Venkat Dinavahi][10], talk about diverse web development topics like how to become an effective junior developer and how to go from junior to senior developer, to topics like building modern web applications and making the most out of your analytics. This podcast is perfect for those getting into web development and those who look to level up their web development skills. - -### The Laracasts Snippet - -[The Laracasts Snippet][11] is a bite-size podcast where each episode offers a single thought on some aspect of web development. The host, [Jeffrey Way][12], is a prominent character in the Laravel community and runs the site [Laracasts][12]. His insights are broad and are useful for developers of all backgrounds. - -### Conclusion - -Podcasts are on the rise and more and more developers are listening to them. With such a rapidly expanding list of new podcasts coming out it can be tough to pick the top 5, but if you listen to these podcasts, you will have a competitive edge as a developer. 
--------------------------------------------------------------------------------

via: https://ryanmccue.ca/podcasts-every-developer-should-listen-too/

作者:[Ryan McCue][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://ryanmccue.ca/author/ryan/
[1]:http://thisdeveloperslife.com/
[2]:https://rob.conery.io/
[3]:https://www.hanselman.com/
[4]:https://developertea.com/
[5]:http://jonathancutrell.com/
[6]:https://softwareengineeringdaily.com/
[7]:http://jeffmeyerson.com/
[8]:http://talkingcode.com/
[9]:https://twitter.com/joshsmith
[10]:https://twitter.com/venkatdinavahi
[11]:https://laracasts.simplecast.fm/
[12]:https://laracasts.com

diff --git a/sources/talk/20171215 Blueprint for Simple Scalable Microservices.md b/sources/talk/20171215 Blueprint for Simple Scalable Microservices.md
deleted file mode 100644
index 8b79458501..0000000000
--- a/sources/talk/20171215 Blueprint for Simple Scalable Microservices.md
+++ /dev/null
@@ -1,48 +0,0 @@
Blueprint for Simple Scalable Microservices
======

![](https://ryanmccue.ca/content/images/2017/12/Copy-of-Copy-of-Electric-Love--1-.png)

When you're building a microservice, what do you value? A fully managed and scalable system? It's hard to know where to start with AWS; there are so many options for hosting code: EC2, ECS, Elastic Beanstalk, Lambda. Everyone has their own patterns for deploying microservices. Using the pattern below will give you a great structure for a scalable microservice architecture.

### Elastic Beanstalk

The first and most important piece is [Elastic Beanstalk][1]. It is a great, simple way to deploy auto-scaling microservices. All you need to do is upload your code to Elastic Beanstalk via their command line tool or management console. Once your code is in Elastic Beanstalk, deployment, capacity provisioning, load balancing, and auto-scaling are all handled by AWS.

### S3

Another important service is [S3][2]; it is an object store built for storing and retrieving data. S3 has lots of uses, from storing images to backups. Particular use cases include storing sensitive files, such as private keys and environment variable files that will be accessed and used by multiple instances or services, as well as less sensitive, publicly accessible files like configuration files, Dockerfiles, and images.

### Kinesis

[Kinesis][3] is a tool that allows microservices to communicate with each other and with other services like Lambda, which we will discuss further down. Kinesis does this through real-time, persistent data streaming, which enables microservices to emit events. Data can be persisted for up to 7 days for persistent and batch processing.

### RDS

[Amazon RDS][4] is a great, fully managed relational database service hosted by AWS. Using RDS over your own database server is beneficial because AWS manages everything. It makes it easy to set up, operate, and scale relational databases.

### Lambda

Finally, [AWS Lambda][5] lets you run code without provisioning or managing servers. Lambda has many uses; you can even create whole APIs with it. Some great uses for it in a microservice architecture are cron jobs and image manipulation. Crons can be scheduled with [CloudWatch][6].

### Conclusion

With these AWS products you can create fully scalable, stateless microservices that can communicate with each other.
Use Elastic Beanstalk to run the microservices, S3 to store files, Kinesis to emit events with Lambdas subscribing to them for background tasks, and RDS to easily manage and scale your relational databases.

--------------------------------------------------------------------------------

via: https://ryanmccue.ca/blueprint-for-simple-scalable-microservices/

作者:[Ryan McCue][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://ryanmccue.ca/author/ryan/
[1]:https://aws.amazon.com/elasticbeanstalk/?nc2=h_m1
[2]:https://aws.amazon.com/s3/?nc2=h_m1
[3]:https://aws.amazon.com/kinesis/?nc2=h_m1
[4]:https://aws.amazon.com/rds/?nc2=h_m1
[5]:https://aws.amazon.com/lambda/?nc2=h_m1
[6]:https://aws.amazon.com/cloudwatch/?nc2=h_m1

diff --git a/sources/talk/20171223 5 Things to Look for When You Contract Out the Backend of Your App.md b/sources/talk/20171223 5 Things to Look for When You Contract Out the Backend of Your App.md
deleted file mode 100644
index e7319850be..0000000000
--- a/sources/talk/20171223 5 Things to Look for When You Contract Out the Backend of Your App.md
+++ /dev/null
@@ -1,60 +0,0 @@
5 Things to Look for When You Contract Out the Backend of Your App
======

![](https://ryanmccue.ca/content/images/2017/12/Copy-of-Copy-of-Copy-of-Copy-of-Copy-of-Electric-Love.png)

For many app developers, it can be hard to know what to do about the backend of your app. There are a few options: use Firebase, throw together a quick Node API, or contract the work out. I will write a post soon weighing the pros and cons of each of these options, but for now, let's assume you want the API done professionally.

You will want to look for some specific things before you hand the contract to a freelancer or agency.

### 1. Documentation

Documentation is one of the most important pieces here. The API could be amazing, but if it is impossible to understand which endpoints are available, what parameters they accept, and what they respond with, you won't have much luck integrating the API into your app. Surprisingly, this is one of the pieces most contractors get wrong.

So what are you looking for? First, make sure they understand the importance of documentation; this alone makes a huge difference. Second, they should preferably be using an open standard like [Swagger][1] for documentation. If they do both of these things, you should have documentation covered.

### 2. Communication

You know the saying "communication is key"; well, it applies to API development too. This is harder to gauge, but sometimes a developer will get the contract and then disappear. That doesn't necessarily mean they aren't working on it, but it does mean there isn't a good feedback loop to sort out problems before they get too large.

A good way around this is to hold a meeting, weekly or however often you want, to go over progress and make sure the API is shaping up the way you want, even if the meeting is just walking through the endpoints and confirming they return the data you need.

### 3. Error Handling

Error handling is crucial. It means that when there is an error on the backend, whether an invalid request or an unexpected internal server error, it is handled properly and a useful response is given to the client. It's important that errors are handled gracefully. This often gets overlooked in the API development process.
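As a quick illustration, here is roughly what the difference looks like from the client's side. The endpoint and response bodies below are hypothetical, just a sketch of the pattern to look for:

```
# An API with poor error handling answers an invalid request with an opaque error:
curl -i -X POST https://api.example.com/signup -d '{"email": "not-an-email"}'
# HTTP/1.1 500 Internal Server Error

# An API with graceful error handling validates the input and explains what went wrong:
curl -i -X POST https://api.example.com/signup -d '{"email": "not-an-email"}'
# HTTP/1.1 400 Bad Request
# {"error": "validation_failed", "message": "Email address is not valid"}
```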
Good error handling is a tricky thing to look out for, but by letting them know you expect useful error messages, and perhaps putting it into the contract, you should get the error messages you need. It may seem like a small thing, but being able to show the user of your app exactly what they did wrong, like "Passwords must be between 6-64 characters", improves the UX immensely.

### 4. Database

This section may be a bit controversial, but I think 90% of apps really just need a SQL database. I know NoSQL is sexy, but SQL brings so many extra benefits that I feel it is what you should use for the backend of your app. Of course, there are cases where NoSQL is the better option, but broadly speaking you should probably just use a SQL database.

SQL adds a lot of flexibility through the ability to add, modify, and remove columns. The option to aggregate data with a simple query is also immensely useful. And finally, the ability to use transactions and be sure all your data is valid will help you sleep better at night.

The reason I say all of the above is that I recommend looking for someone who is willing to build your API on a SQL database.

### 5. Infrastructure

The last major thing to look for when contracting out your backend is infrastructure. This is essential because you want your app to scale. If 10,000 users join your app in one day for some reason, you want your backend to handle that. Using services like [AWS Elastic Beanstalk][2] or [Heroku][3], you can create APIs which scale up automatically with load. That means if your app takes off overnight, your API will scale with the load rather than buckle under it.

Making sure your contractor is building with scalability in mind is key. I wrote a [post on scalable APIs][4] if you're interested in learning more about a good AWS stack.

### Conclusion

It is important to get a quality backend when you contract it out. You're paying a professional to design and build the backend of your app, so if they're lacking in any of the points above, it will reduce the chances of success not just for the backend, but for your whole app. If you make a checklist of these points and go over them with contractors, you should be able to weed out under-qualified applicants and focus your attention on the contractors who know what they're doing.

--------------------------------------------------------------------------------

via: https://ryanmccue.ca/things-to-look-for-when-you-contract-out-the-backend-your-app/

作者:[Ryan McCue][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://ryanmccue.ca/author/ryan/
[1]:https://swagger.io/
[2]:https://aws.amazon.com/elasticbeanstalk/
[3]:https://www.heroku.com/
[4]:https://ryanmccue.ca/blueprint-for-simple-scalable-microservices/

diff --git a/sources/talk/20171225 Where to Get Your App Backend Built.md b/sources/talk/20171225 Where to Get Your App Backend Built.md
deleted file mode 100644
index 35d07bac18..0000000000
--- a/sources/talk/20171225 Where to Get Your App Backend Built.md
+++ /dev/null
@@ -1,88 +0,0 @@
Where to Get Your App Backend Built
======

![](https://ryanmccue.ca/content/images/2017/12/Copy-of-Copy-of-Copy-of-Copy-of-Copy-of-Copy-of-Electric-Love.png)

Building a great app takes a lot of work, from designing the views to adding the right transitions and images.
One thing which is often overlooked is the backend, connecting your app to the outside world. A backend which is not up to the same quality as your app can wreck even the most perfect user interface. That is why choosing the right option for your backend budget and needs is essential. - -There are three main choices you have when you're getting it built. First, you have agencies, they are a company with salespeople, project managers, and developers. Second, you have market rate freelancers, they are developers who charge market rate for their work and are often in North America or western Europe. Finally, there are budget freelancers, they are inexpensive and usually in parts of Asia and South America. - -I am going to break down the pros and cons of each of these options. - -### Agency - -Agencies are often a safe bet if you're looking for a more hands-off approach agencies are often the way to go, they have project managers who will manage your project and communicate your requirements to developers. This takes some of the work off of your plate and can free it up to work on your app. Agencies also often have a team of developers at their disposal, so if the developer working on your project takes a vacation, they can swap another developer in without much hassle. - -With all these upsides there is a downside. Price. Having a sales team, a project management team, and a developer team isn't cheap. Agencies often cost quite a bit of money compared to freelancers. - -So in summary: - -#### Pros - - * Hands Off - * No Single Point of Failure - - - -#### Cons - - * Very expensive - - - -### Market Rate Freelancer - -Another option you have are market rate freelancers, these are highly skilled developers who often have worked in agencies, but decided to go their own way and get clients themselves. They generally produce high-quality work at a lower cost than agencies. - -The downside to freelancers is since they're only one person they might not be available right away to start your work. Especially high demand freelancers you may have to wait a few weeks or months before they start development. They also are hard to replace, if they get sick or go on vacation, it can often be hard to find someone to continue the work, unless you get a good recommendation from the freelancer. - -#### Pros - - * Cost Effective - * Similar quality to agency - * Great for short term - - - -#### Cons - - * May not be available - * Hard to replace - - - -### Budget Freelancer - -The last option I'm going over is budget freelancers who are often found on job boards such as Fiverr and Upwork. They work for very cheap, but that often comes at the cost of quality and communication. Often you will not get what you're looking for, or it will be very brittle code which buckles under strain. - -If you're on a very tight budget, it may be worth rolling the dice on a highly rated budget freelancer, although you must be okay with the risk of potentially throwing the code away. - -#### Pros - - * Very cheap - - - -#### Cons - - * Often low quality - * May not be what you asked for - - - -### Conclusion - -Getting the right backend for your app is important. It is often a good idea to stick with agencies or market rate freelancers due to the predictability and higher quality code, but if you're on a very tight budget rolling the dice with budget freelancers could pay off. At the end of the day, it doesn't matter where the code is from, as long as it works and does what it's supposed to do. 
--------------------------------------------------------------------------------

via: https://ryanmccue.ca/where-to-get-your-app-backend-built/

作者:[Ryan McCue][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://ryanmccue.ca/author/ryan/

diff --git a/sources/tech/20180111 What is the deal with GraphQL.md b/sources/tech/20180111 What is the deal with GraphQL.md
deleted file mode 100644
index c656769269..0000000000
--- a/sources/tech/20180111 What is the deal with GraphQL.md
+++ /dev/null
@@ -1,43 +0,0 @@
translating---geekpi

What is the deal with GraphQL?
======

![](https://ryanmccue.ca/content/images/2018/01/Copy-of-Copy-of-Copy-of-Copy-of-Copy-of-Copy-of-Electric-Love.png)

There has been a lot of talk lately about this thing called [GraphQL][1]. It is a relatively new technology coming out of Facebook and is starting to be widely adopted by large companies like [Github][2], Facebook, Twitter, Yelp, and many others. Basically, GraphQL is an alternative to REST: it replaces many dumb endpoints, `/user/1`, `/user/1/comments`, with a single `/graphql` endpoint, and you use the post body or query string to request exactly the data you need, like `/graphql?query={user(id:1){id,username,comments{text}}}`. You pick the pieces of data you need and can nest down into relations to avoid multiple calls. This is a different way of thinking about a backend, but in some situations it makes practical sense.

### My Experience with GraphQL

Originally, when I heard about it, I was very skeptical, and after dabbling in [Apollo Server][3] I was not convinced. Why would you use some silly new technology when you can simply build REST endpoints? But after digging deeper and learning more about its use cases, I came around. I still think REST has a place and will be important for the foreseeable future, but with how bad many APIs and their documentation are, this can be a breath of fresh air.

### Why Use GraphQL Over REST?

Although I have used GraphQL and think it is a compelling and exciting technology, I believe it does not replace REST. That being said, there are compelling reasons to pick GraphQL over REST in some situations. GraphQL really shines when you are building mobile apps, or web apps designed with high mobile traffic in mind. The reason is mobile data: REST uses many calls and often returns unused data, whereas with GraphQL you can define precisely what should be returned for minimal data usage.

You can do all of the above with REST by making multiple endpoints available, but that adds complexity to the project. It also means more back and forth between the frontend and backend teams.

### What Should You Use?

GraphQL is a new technology which is now mainstream. Many developers are not aware of it, or choose not to learn it because they think it's a fad. I feel that for most projects you can get away with using either REST or GraphQL. Developing with GraphQL has great benefits, like enforced documentation, which helps teams work better together and provides clear expectations for each query. This will likely speed up development after the initial hurdle of wrapping your head around GraphQL.

Although I have been comparing GraphQL and REST, I think in most cases a mixture of the two will produce the best results. Combine the strengths of both instead of seeing it strictly as using just GraphQL or just REST.
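To make the comparison concrete, here is a minimal sketch of the same lookup in both styles, reusing the user-and-comments query from the introduction. The host and exact field names are hypothetical, purely for illustration:

```
# REST: two round trips, each returning full objects with fields you may not need
curl https://api.example.com/user/1
curl https://api.example.com/user/1/comments

# GraphQL: one round trip that returns exactly the fields requested
curl -X POST https://api.example.com/graphql \
  -H 'Content-Type: application/json' \
  -d '{"query": "{ user(id: 1) { id username comments { text } } }"}'
```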
### Final Thoughts

Both technologies are here to stay, and done right, both can power fast and efficient backends. GraphQL has an edge because it allows the client to query only the data it needs by default, though at a potential sacrifice of endpoint speed. Ultimately, if I were starting a new project, I would go with a mix of both GraphQL and REST.

--------------------------------------------------------------------------------

via: https://ryanmccue.ca/what-is-the-deal-with-graphql/

作者:[Ryan McCue][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://ryanmccue.ca/author/ryan/
[1]:http://graphql.org/
[2]:https://developer.github.com/v4/
[3]:https://github.com/apollographql/apollo-server

diff --git a/sources/tech/20180129 5 Real World Uses for Redis.md b/sources/tech/20180129 5 Real World Uses for Redis.md
deleted file mode 100644
index 61f7c09b3b..0000000000
--- a/sources/tech/20180129 5 Real World Uses for Redis.md
+++ /dev/null
@@ -1,109 +0,0 @@
5 Real World Uses for Redis
============================================================

Redis is a powerful in-memory data structure store with many uses, including as a database, a cache, and a message broker. Most people think of it as a simple key-value store, but it has so much more power. I will go over some real world examples of the many things Redis can do for you.

### 1\. Full Page Cache

The first is full page caching. If you are serving server-side rendered content, you do not want to re-render each page for every single request. Using a cache like Redis, you can cache regularly requested content and drastically decrease latency for your most requested pages, and most frameworks have hooks for caching your pages with Redis.

Simple Commands

```
// Set the page that will last 1 minute
SET key "..." EX 60

// Get the page
GET key
```

### 2\. Leaderboard

One of the places Redis shines is leaderboards. Because Redis is in-memory, it can handle incrementing and decrementing very quickly and efficiently. Compare this to running a SQL query on every request and the performance gains are huge! Combined with Redis's sorted sets, this means you can grab only the highest-rated items in the list in milliseconds, and it is extremely easy to implement.

Simple Commands

```
// Add an item to the sorted set
ZADD sortedSet 1 "one"

// Get all items from the sorted set
ZRANGE sortedSet 0 -1

// Get all items from the sorted set with their score
ZRANGE sortedSet 0 -1 WITHSCORES
```

### 3\. Session Storage

The most common use for Redis I have seen is session storage. Unlike other session stores such as Memcached, Redis can persist data, so if your cache goes down, all the data will still be there when it comes back up. Even though session data isn't mission-critical, this feature can save your users a lot of headaches; no one likes their session being randomly dropped for no reason.

Simple Commands

```
// Set session that will last 1 minute
SET randomHash "{userId}" EX 60

// Get userId
GET randomHash
```

### 4\. Queue

One of the less common but very useful things you can do with Redis is queueing. Whether it's a queue of emails or data to be consumed by another application, you can create an efficient queue in Redis.
Using this functionality is easy and natural for any developer who is familiar with stacks and pushing and popping items.

Simple Commands

```
// Add a message (the original arguments were lost in formatting;
// placeholders like <id>, <message>, and <timestamp> stand for your own values)
HSET messages <id> <message>
ZADD due <timestamp> <id>

// Receive the next due message
ZRANGEBYSCORE due -inf +inf LIMIT 0 1
HGET messages <id>

// Delete a message
ZREM due <id>
HDEL messages <id>
```

### 5\. Pub/Sub

The final real world use for Redis I am going to bring up in this post is pub/sub. This is one of the most powerful features built into Redis, and the possibilities are limitless. You can create a real-time chat system with it, trigger notifications for friend requests on social networks, and so on. It is one of the most underrated features Redis offers, yet it is very powerful and simple to use.

Simple Commands

```
// Add a message to a channel
PUBLISH channel message

// Receive messages from a channel
SUBSCRIBE channel
```

### Conclusion

I hope you enjoyed this list of some of the many real world uses for Redis. It only scratches the surface of what Redis can do for you, but I hope it gave you some ideas about how to use the full potential Redis has to offer.

--------------------------------------------------------------------------------

作者简介:

Hi, my name is Ryan! I am a Software Developer with experience in many web frameworks and libraries including NodeJS, Django, Golang, and Laravel.

-------------------

via: https://ryanmccue.ca/5-real-world-uses-for-redis/

作者:[Ryan McCue][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://ryanmccue.ca/author/ryan/
[1]:https://ryanmccue.ca/author/ryan/
\ No newline at end of file

From 222c144e3244229b7a5b1303d71cb0c125af0918 Mon Sep 17 00:00:00 2001
From: darksun
Date: Thu, 22 Feb 2018 15:09:32 +0800
Subject: [PATCH 20/81] remove www.rosehosting.com

---
 ...List and Delete iptables Firewall Rules.md | 106 --------
 ...rror establishing a database connection.md | 131 ------------------
 ...8 How to Create a Sudo User on CentOS 7.md | 118 ----------------
 3 files changed, 355 deletions(-)
 delete mode 100644 sources/tech/20180118 How To List and Delete iptables Firewall Rules.md
 delete mode 100644 sources/tech/20180201 Error establishing a database connection.md
 delete mode 100644 sources/tech/20180208 How to Create a Sudo User on CentOS 7.md

diff --git a/sources/tech/20180118 How To List and Delete iptables Firewall Rules.md b/sources/tech/20180118 How To List and Delete iptables Firewall Rules.md
deleted file mode 100644
index b6b875ad11..0000000000
--- a/sources/tech/20180118 How To List and Delete iptables Firewall Rules.md
+++ /dev/null
@@ -1,106 +0,0 @@
How To List and Delete iptables Firewall Rules
======
![How To List and Delete iptables Firewall Rules][1]

We'll show you how to list and delete iptables firewall rules. Iptables is a command line utility that allows system administrators to configure the packet filtering rule set on Linux. iptables requires elevated privileges to operate and must be executed by the root user, otherwise it fails to function.

### How to List iptables Firewall Rules

Iptables allows you to list all the rules that have already been added to the packet filtering rule set. In order to check this, you need SSH access to the server.
[Connect to your Linux VPS via SSH][2] and run the following command:
```
sudo iptables -nvL
```

To run the command above, your user needs to have `sudo` privileges. Otherwise, you need to [add a sudo user on your Linux VPS][3] or use the root user.

If there are no rules added to the packet filtering ruleset, the output should be similar to the one below:
```
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination
```

Since NAT (Network Address Translation) can also be configured via iptables, you can use iptables to list the NAT rules:
```
sudo iptables -t nat -n -L -v
```

The output will be similar to the one below if there are no rules added:
```
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination
```

If this is the case, we recommend checking our tutorial on how to [Set Up a Firewall with iptables on Ubuntu and CentOS][4] to make your server more secure.

### How to Delete iptables Firewall Rules

At some point, you may need to remove a specific iptables firewall rule on your server. You can either delete a rule by its number, using the following syntax, or pass the original rule specification to `-D`:
```
iptables [-t table] -D chain rulenum
```

For example, if you have a firewall rule that blocks all connections from 111.111.111.111 to your server on port 22 and you want to remove that rule, you can use the following command:
```
sudo iptables -D INPUT -s 111.111.111.111 -p tcp --dport 22 -j DROP
```

Now that you have removed the iptables firewall rule, you need to save the changes to make them persistent.

In case you are using an [Ubuntu VPS][5], you need to install an additional package for that purpose. To install the required package, use the following command:
```
sudo apt-get install iptables-persistent
```

On **Ubuntu 14.04** you can save and reload the firewall rules using the commands below:
```
sudo /etc/init.d/iptables-persistent save
sudo /etc/init.d/iptables-persistent reload
```

On **Ubuntu 16.04** use the following commands instead:
```
sudo netfilter-persistent save
sudo netfilter-persistent reload
```

If you are using a [CentOS VPS][6], you can save the changes using the command below:
```
service iptables save
```

Of course, you don't have to list and delete iptables firewall rules yourself if you use one of our [Managed VPS Hosting][7] services, in which case you can simply ask our expert Linux admins to list and delete iptables firewall rules on your server for you. They are available 24×7 and will take care of your request immediately.

**PS**. If you liked this post on how to list and delete iptables firewall rules, please share it with your friends on the social networks using the buttons on the left, or simply leave a reply below. Thanks.
--------------------------------------------------------------------------------

via: https://www.rosehosting.com/blog/how-to-list-and-delete-iptables-firewall-rules/

作者:[RoseHosting][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.rosehosting.com
[1]:https://www.rosehosting.com/blog/wp-content/uploads/2018/01/How-To-List-and-Delete-iptables-Firewall-Rules.jpg
[2]:https://www.rosehosting.com/blog/connect-to-your-linux-vps-via-ssh/
[3]:https://www.rosehosting.com/blog/how-to-create-a-sudo-user-on-ubuntu/
[4]:https://www.rosehosting.com/blog/how-to-set-up-a-firewall-with-iptables-on-ubuntu-and-centos/
[5]:https://www.rosehosting.com/ubuntu-vps.html
[6]:https://www.rosehosting.com/centos-vps.html
[7]:https://www.rosehosting.com/managed-vps-hosting.html

diff --git a/sources/tech/20180201 Error establishing a database connection.md b/sources/tech/20180201 Error establishing a database connection.md
deleted file mode 100644
index deaf5c67e8..0000000000
--- a/sources/tech/20180201 Error establishing a database connection.md
+++ /dev/null
@@ -1,131 +0,0 @@
Error establishing a database connection
======
![Error establishing a database connection][1]

"Error establishing a database connection" is a very common error you may see when trying to access your WordPress site. The database stores all the important information for your website, including your posts, comments, site configuration, user accounts, and theme and plugin settings. If the connection to your database cannot be established, your WordPress website will not load and will more than likely give you the error: "Error establishing a database connection". In this tutorial we will show you how to fix the error establishing a database connection in WordPress.

The most common causes of the "Error establishing a database connection" issue are the following:

* Your database has been corrupted
* Incorrect login credentials in your WordPress configuration file (wp-config.php)
* Your MySQL service stopped working due to insufficient memory on the server (caused by heavy traffic) or other server problems

![Error establishing a database connection][2]

### 1. Requirements

In order to troubleshoot the "Error establishing a database connection" issue, a few requirements must be met:

* SSH access to your server
* The database is located on the same server
* You know your database username, user password, and the name of the database

Before you try to fix the "Error establishing a database connection" error, it is also highly recommended that you make a backup of both your website and your database.

### 2. Corrupted database

The first step in troubleshooting the "Error establishing a database connection" problem is to check whether the error is present on both the front-end and the back-end of your site. You can access your back-end via yourdomain.com/wp-admin (replace "yourdomain" with your actual domain name).

If the error remains the same on both your front-end and back-end, you should move on to the next step.

If you are able to access the back-end and you see the following message:
```
"One or more database tables are unavailable. The database may need to be repaired"
```

it means that your database has been corrupted and you need to try to repair it.
- -To do this, you must first enable the repair option in your wp-config.php file, located inside the WordPress site root directory, by adding the following line: -``` -define('WP_ALLOW_REPAIR', true); - -``` - -Now you can navigate to this this page: and click the “Repair and Optimize Database button.” - -For security reasons, remember to turn off the repair option be deleting the line we added before in the wp-config.php file. - -If this does not fix the problem or the database cannot be repaired you will probably need to restore it from a backup if you have one available. - -### 2. Check your wp-config.php file - -Another, probably most common reason, for failed database connection is because of incorrect database information set in your WordPress configuration file. - -The configuration file resides in your WordPress site root directory and it is called wp-config.php . - -Open the file and locate the following lines: -``` -define('DB_NAME', 'database_name'); -define('DB_USER', 'database_username'); -define('DB_PASSWORD', 'database_password'); -define('DB_HOST', 'localhost'); - -``` - -Make sure the correct database name, username, and password are set. Database host should be set to “localhost”. - -If you ever change your database username and password you should always update this file as well. - -If everything is set up properly and you are still getting the “Error establishing a database connection” error then the problem is probably on the server side and you should move on to the next step of this tutorial. - -### 3. Check your server - -Depending on the resources available, during high traffic hours, your server might not be able to handle all the load and it may stop your MySQL server. - -You can either contact your hosting provider about this or you can check it yourself if the MySQL server is properly running. - -To check the status of MySQL, log in to your server via [SSH][3] and use the following command: -``` -systemctl status mysql - -``` - -Or you can check if it is up in your active processes with: -``` -ps aux | grep mysql - -``` - -If your MySQL is not running you can start it with the following commands: -``` -systemctl start mysql - -``` - -You may also need to check the memory usage on your server. - -To check how much RAM you have available you can use the following command: -``` -free -m - -``` - -If your server is running low on memory you may want to consider upgrading your server. - -### 4. Conclusion - -Most of the time. the “Error establishing a database connection” error can be fixed by following one of the steps above. - -![How to Fix the Error Establishing a Database Connection in WordPress][4]Of course, you don’t have to fix, Error establishing a database connection, if you use one of our [WordPress VPS Hosting Services][5], in which case you can simply ask our expert Linux admins to help you fix the Error establishing a database connection in WordPress, for you. They are available 24×7 and will take care of your request immediately. - -**PS**. If you liked this post, on how to fix the Error establishing a database connection in WordPress, please share it with your friends on the social networks using the buttons on the left or simply leave a reply below. Thanks. 
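-
-As a supplement to the wp-config.php check described above: one quick way to confirm whether the stored credentials are the actual problem is to try them directly with the `mysql` command line client. This is a minimal sketch; `database_username` and `database_name` are the placeholder values from the configuration snippet above:
-```
-mysql -u database_username -p database_name -e "SHOW TABLES;"
-```
-
-You will be prompted for the password. If the connection succeeds and your tables are listed, the credentials are correct and the problem is most likely on the server side; if you get an “Access denied” error, the values in wp-config.php need to be corrected.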
-
---------------------------------------------------------------------------------
-
-via: https://www.rosehosting.com/blog/error-establishing-a-database-connection/
-
-作者:[RoseHosting][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.rosehosting.com
-[1]:https://www.rosehosting.com/blog/wp-content/uploads/2018/02/error-establishing-a-database-connection.jpg
-[2]:https://www.rosehosting.com/blog/wp-content/uploads/2018/02/Error-establishing-a-database-connection-e1517474875180.png
-[3]:https://www.rosehosting.com/blog/connect-to-your-linux-vps-via-ssh/
-[4]:https://www.rosehosting.com/blog/wp-content/uploads/2018/02/How-to-Fix-the-Error-Establishing-a-Database-Connection-in-WordPress.jpg
-[5]:https://www.rosehosting.com/wordpress-hosting.html
diff --git a/sources/tech/20180208 How to Create a Sudo User on CentOS 7.md b/sources/tech/20180208 How to Create a Sudo User on CentOS 7.md
deleted file mode 100644
index e309a483dd..0000000000
--- a/sources/tech/20180208 How to Create a Sudo User on CentOS 7.md
+++ /dev/null
@@ -1,118 +0,0 @@
-How to Create a Sudo User on CentOS 7
-======
-![How to create a sudo user on CentOS 7][1]
-
-We’ll guide you through how to create a sudo user on CentOS 7. Sudo is a Linux command line program that allows you to execute commands as the superuser or another system user. The configuration file offers detailed access permissions, including enabling commands only from the invoking terminal; requiring a password per user or group; and requiring re-entry of a password every time, or never requiring a password at all, for a particular command line. It can also be configured to permit passing arguments or multiple commands.
-
-### Steps to Create a New Sudo User on CentOS 7
-
-#### 1. Connect via SSH
-
-First of all, [connect to your server via SSH][2]. Once you are logged in, you need to add a new system user.
-
-#### 2. Add New User in CentOS
-
-You can add a new system user using the following command:
-```
-# adduser newuser
-
-```
-
-You need to replace `newuser` with the name of the user you want to add. Also, you need to set up a password for the newly added user.
-
-#### 3. Create a Strong Password
-
-To set up a password, you can use the following command:
-```
-# passwd newuser
-
-```
-
-Make sure you are using a [strong password][3]; otherwise the password will fail against the dictionary check. You will be asked to enter the password again, and once you enter it, you will be notified that the authentication tokens have been updated successfully:
-```
-# passwd newuser
-Changing password for user newuser.
-New password:
-Retype new password:
-passwd: all authentication tokens updated successfully.
-
-```
-
-#### 4. Add User to the Wheel Group in CentOS
-
-The wheel group is a special user group that allows all members in the group to run all commands. Therefore, you need to add the new user to this group so it can run commands as the superuser. You can do that by using the following command:
-```
-# usermod -aG wheel newuser
-
-```
-
-Again, make sure you are using the name of the actual user instead of `newuser`.
-Now, use `visudo` to open and edit the `/etc/sudoers` file. Make sure that the line that starts with `%wheel` is not commented. It should look exactly like this:
-```
-### Allows people in group wheel to run all commands
-%wheel ALL=(ALL) ALL
-
-```
-
-Now that your new user is set up, you can switch to that user and test if everything is OK.
-
-#### 5. Switch to the sudo User
-
-To switch to the new user, run the following command:
-```
-# su - newuser
-
-```
-
-Now run a command that usually doesn’t work for regular users, like the one below:
-```
-$ ls -la /root/
-
-```
-
-You will get the following error message:
-```
-ls: cannot open directory /root/: Permission denied
-
-```
-
-Try to run the same command, this time using `sudo`:
-```
-$ sudo ls -la /root/
-
-```
-
-You will need to enter the password for the new user to proceed. If everything is OK, the command will list all the content in the `/root` directory. Another way to test this is to run the following command:
-```
-$ sudo whoami
-
-```
-
-The output of the command should be similar to the one below:
-```
-$ sudo whoami
-root
-
-```
-
-Congratulations, now you have a sudo user which you can use to manage your CentOS 7 operating system.
-
-Of course, you don’t have to create a sudo user on CentOS 7 if you use one of our [CentOS 7 Hosting][4] services, in which case you can simply ask our expert Linux admins to create a sudo user on CentOS 7 for you. They are available 24×7 and will take care of your request immediately.
-
-**PS**. If you liked this post on **how to create a sudo user on CentOS 7**, please share it with your friends on the social networks using the buttons on the left or simply leave a reply below. Thanks.
-
---------------------------------------------------------------------------------
-
-via: https://www.rosehosting.com/blog/how-to-create-a-sudo-user-on-centos-7/
-
-作者:[RoseHosting][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.rosehosting.com
-[1]:https://www.rosehosting.com/blog/wp-content/uploads/2018/02/How-to-create-a-sudo-user-on-CentOS-7.jpg
-[2]:https://www.rosehosting.com/blog/connect-to-your-linux-vps-via-ssh/
-[3]:https://www.rosehosting.com/blog/generate-password-linux-command-line/
-[4]:https://www.rosehosting.com/centos-vps.html
From e126f31b17a18efd5c06cf74b809f0f947ec5e26 Mon Sep 17 00:00:00 2001
From: darksun
Date: Thu, 22 Feb 2018 15:17:11 +0800
Subject: [PATCH 21/81] remove realpython.com

---
 ... Distributed Applications - Real Python.md | 239 ---------
 ...Copying of Python Objects - Real Python.md | 262 ---------
 ... With Python and Selenium - Real Python.md | 506 ------------------
 3 files changed, 1007 deletions(-)
 delete mode 100644 sources/tech/20180130 Python - Memcached- Efficient Caching in Distributed Applications - Real Python.md
 delete mode 100644 sources/tech/20180131 Shallow vs Deep Copying of Python Objects - Real Python.md
 delete mode 100644 sources/tech/20180206 Modern Web Automation With Python and Selenium - Real Python.md

diff --git a/sources/tech/20180130 Python - Memcached- Efficient Caching in Distributed Applications - Real Python.md b/sources/tech/20180130 Python - Memcached- Efficient Caching in Distributed Applications - Real Python.md
deleted file mode 100644
index 647a5968e6..0000000000
--- a/sources/tech/20180130 Python - Memcached- Efficient Caching in Distributed Applications - Real Python.md
+++ /dev/null
@@ -1,239 +0,0 @@
-Python + Memcached: Efficient Caching in Distributed Applications – Real Python
-======
-
-When writing Python applications, caching is important. Using a cache to avoid recomputing data or accessing a slow database can provide you with a great performance boost.
-
-Python offers built-in possibilities for caching, from a simple dictionary to a more complete data structure such as [`functools.lru_cache`][2]. The latter can cache any item using a [Least-Recently Used algorithm][3] to limit the cache size.
-
-Those data structures are, however, by definition local to your Python process. When several copies of your application run across a large platform, using an in-memory data structure disallows sharing the cached content. This can be a problem for large-scale and distributed applications.
-
-![](https://files.realpython.com/media/python-memcached.97e1deb2aa17.png)
-
-Therefore, when a system is distributed across a network, it also needs a cache that is distributed across a network. Nowadays, there are plenty of network servers that offer caching capability—we already covered [how to use Redis for caching with Django][4].
-
-As you’re going to see in this tutorial, [memcached][5] is another great option for distributed caching. After a quick introduction to basic memcached usage, you’ll learn about advanced patterns such as “check and set” and using fallback caches to avoid cold cache performance issues.
-
-### Installing memcached
-
-Memcached is [available for many platforms][6]:
-
- * If you run **Linux**, you can install it using `apt-get install memcached` or `yum install memcached`. This will install memcached from a pre-built package, but you can also build memcached from source, [as explained here][6].
 * For **macOS**, using [Homebrew][7] is the simplest option. Just run `brew install memcached` after you’ve installed the Homebrew package manager.
 * On **Windows**, you would have to compile memcached yourself or find [pre-compiled binaries][8].
-
-
-
-Once installed, memcached can simply be launched by calling the `memcached` command:
-```
-$ memcached

-```
-
-Before you can interact with memcached from Python-land, you’ll need to install a memcached client library. You’ll see how to do this in the next section, along with some basic cache access operations.
-
-### Storing and Retrieving Cached Values Using Python
-
-If you’ve never used memcached, it is pretty easy to understand. It basically provides a giant network-available dictionary. This dictionary has a few properties that are different from a classical Python dictionary, mainly:
-
- * Keys and values have to be bytes
 * Keys and values are automatically deleted after an expiration time
-
-
-
-Therefore, the two basic operations for interacting with memcached are `set` and `get`. As you might have guessed, they’re used to assign a value to a key or to get a value from a key, respectively.
-
-My preferred Python library for interacting with memcached is [`pymemcache`][9]—I recommend using it. You can simply [install it using pip][10]:
-```
-$ pip install pymemcache

-```
-
-The following code shows how you can connect to memcached and use it as a network-distributed cache in your Python applications:
-```
->>> from pymemcache.client import base
-
-# Don't forget to run `memcached' before running this next line:
->>> client = base.Client(('localhost', 11211))
-
-# Once the client is instantiated, you can access the cache:
->>> client.set('some_key', 'some value')
-
-# Retrieve previously set data again:
->>> client.get('some_key')
-'some value'

-```
-
-The memcached network protocol is really simple and its implementation extremely fast, which makes it useful for storing data that would otherwise be slow to retrieve from the canonical source of data or to compute again.
-
-While straightforward enough, this example allows storing key/value tuples across the network and accessing them through multiple, distributed, running copies of your application. This is simplistic, yet powerful. And it’s a great first step towards optimizing your application.
-
-### Automatically Expiring Cached Data
-
-When storing data into memcached, you can set an expiration time—a maximum number of seconds for memcached to keep the key and value around. After that delay, memcached automatically removes the key from its cache.
-
-What should you set this cache time to? There is no magic number for this delay, and it will entirely depend on the type of data and application that you are working with. It could be a few seconds, or it might be a few hours.
-
-Cache invalidation, which defines when to remove the cache because it is out of sync with the current data, is also something that your application will have to handle. Especially if presenting data that is too old or stale is to be avoided.
-
-Here again, there is no magical recipe; it depends on the type of application you are building. However, there are several outlying cases that should be handled—which we haven’t yet covered in the above example.
-
-A caching server cannot grow infinitely—memory is a finite resource. Therefore, keys will be flushed out by the caching server as soon as it needs more space to store other things.
-
-Some keys might also be expired because they reached their expiration time (also sometimes called the “time-to-live”, or TTL). In those cases the data is lost, and the canonical data source must be queried again.
-
-This sounds more complicated than it really is. You can generally work with the following pattern when working with memcached in Python:
-```
-from pymemcache.client import base
-
-
-def do_some_query():
-    # Replace with actual querying code to a database,
-    # a remote REST API, etc.
-    return 42
-
-
-# Don't forget to run `memcached' before running this code
-client = base.Client(('localhost', 11211))
-result = client.get('some_key')
-
-if result is None:
-    # The cache is empty, need to get the value
-    # from the canonical source:
-    result = do_some_query()
-
-    # Cache the result for next time:
-    client.set('some_key', result)
-
-# Whether we needed to update the cache or not,
-# at this point you can work with the data
-# stored in the `result` variable:
-print(result)

-```
-
-> **Note:** Handling missing keys is mandatory because of normal flush-out operations. It is also obligatory to handle the cold cache scenario, i.e. when memcached has just been started. In that case, the cache will be entirely empty and it needs to be fully repopulated, one request at a time.
-
-This means you should view any cached data as ephemeral. And you should never expect the cache to contain a value you previously wrote to it.
-
-### Warming Up a Cold Cache
-
-Some of the cold cache scenarios cannot be prevented, for example a memcached crash. But some can, for example migrating to a new memcached server.
-
-When it is possible to predict that a cold cache scenario will happen, it is better to avoid it. A cache that needs to be refilled means that all of a sudden, the canonical storage of the cached data will be massively hit by all the cache users who lack the cached data (also known as the [thundering herd problem][11]).
-
-pymemcache provides a class named `FallbackClient` that helps in implementing this scenario, as demonstrated here:
-```
-from pymemcache.client import base
-from pymemcache import fallback
-
-
-def do_some_query():
-    # Replace with actual querying code to a database,
-    # a remote REST API, etc.
-    return 42
-
-
-# Set `ignore_exc=True` so it is possible to shut down
-# the old cache before removing its usage from
-# the program, if ever necessary.
-old_cache = base.Client(('localhost', 11211), ignore_exc=True)
-new_cache = base.Client(('localhost', 11212))
-
-client = fallback.FallbackClient((new_cache, old_cache))
-
-result = client.get('some_key')
-
-if result is None:
-    # The cache is empty, need to get the value
-    # from the canonical source:
-    result = do_some_query()
-
-    # Cache the result for next time:
-    client.set('some_key', result)
-
-print(result)

-```
-
-The `FallbackClient` queries the caches passed to its constructor, respecting their order. In this case, the new cache server will always be queried first, and in case of a cache miss, the old one will be queried—avoiding a possible return-trip to the primary source of data.
-
-If any key is set, it will only be set to the new cache. After some time, the old cache can be decommissioned and the `FallbackClient` can be replaced directly with the `new_cache` client.
-
-### Check And Set
-
-When communicating with a remote cache, the usual concurrency problem comes back: there might be several clients trying to access the same key at the same time. memcached provides a check and set operation, shortened to CAS, which helps to solve this problem.
-
-The simplest example is an application that wants to count the number of users it has. Each time a visitor connects, a counter is incremented by 1. Using memcached, a simple implementation would be:
-```
-def on_visit(client):
-    result = client.get('visitors')
-    if result is None:
-        result = 1
-    else:
-        result += 1
-    client.set('visitors', result)

-```
-
-However, what happens if two instances of the application try to update this counter at the same time?
-
-The first call `client.get('visitors')` will return the same number of visitors for both of them, let’s say it’s 42. Then both will add 1, compute 43, and set the number of visitors to 43. That number is wrong, and the result should be 44, i.e. 42 + 1 + 1.
-
-To solve this concurrency issue, the CAS operation of memcached is handy. The following snippet implements a correct solution:
-```
-def on_visit(client):
-    while True:
-        result, cas = client.gets('visitors')
-        if result is None:
-            result = 1
-        else:
-            result += 1
-        if client.cas('visitors', result, cas):
-            break

-```
-
-The `gets` method returns the value, just like the `get` method, but it also returns a CAS value.
-
-What is in this value is not relevant, but it is used for the next `cas` method call. This method is equivalent to the `set` operation, except that it fails if the value has changed since the `gets` operation. In case of success, the loop is broken. Otherwise, the operation is restarted from the beginning.
-
-In the scenario where two instances of the application try to update the counter at the same time, only one succeeds in moving the counter from 42 to 43. The second instance gets a `False` value returned by the `client.cas` call, and has to retry the loop. It will retrieve 43 as the value this time, will increment it to 44, and its `cas` call will succeed, thus solving our problem.
-
-Incrementing a counter is interesting as an example to explain how CAS works because it is simplistic. However, memcached also provides the `incr` and `decr` methods to increment or decrement an integer in a single request, rather than doing multiple `gets`/`cas` calls. In real-world applications, `gets` and `cas` are used for more complex data types or operations.
-
-Most remote caching servers and data stores provide such a mechanism to prevent concurrency issues. It is critical to be aware of those cases to make proper use of their features.
-
-### Beyond Caching
-
-The simple techniques illustrated in this article showed you how easy it is to leverage memcached to speed up the performance of your Python application.
-
-Just by using the two basic “set” and “get” operations you can often accelerate data retrieval or avoid recomputing results over and over again. With memcached you can share the cache across a large number of distributed nodes.
-
-Other, more advanced patterns you saw in this tutorial, like the Check And Set (CAS) operation, allow you to update data stored in the cache concurrently across multiple Python threads or processes while avoiding data corruption.
-
-If you are interested in learning more about advanced techniques to write faster and more scalable Python applications, check out [Scaling Python][12]. It covers many advanced topics such as network distribution, queuing systems, distributed hashing, and code profiling.
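-
-As a small supplement to the “Automatically Expiring Cached Data” section above: with pymemcache, you pass the expiration time, in seconds, directly to `set` via its `expire` argument. Here is a minimal sketch; the key name and the 5-second TTL are arbitrary choices for illustration:
-```
->>> from pymemcache.client import base
->>> from time import sleep
-
->>> client = base.Client(('localhost', 11211))
-
-# Keep the value around for at most 5 seconds:
->>> client.set('short_lived_key', 'some value', expire=5)
->>> client.get('short_lived_key')
-'some value'
-
->>> sleep(6)
-
-# After the TTL has passed, memcached has dropped the key:
->>> client.get('short_lived_key') is None
-True
-```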
-
--------------------------------------------------------------------------------

-via: https://realpython.com/blog/python/python-memcache-efficient-caching/
-
-作者:[Julien Danjou][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://realpython.com/team/jdanjou/
-[1]:https://realpython.com/blog/categories/python/
-[2]:https://docs.python.org/3/library/functools.html#functools.lru_cache
-[3]:https://en.wikipedia.org/wiki/Cache_replacement_policies#Least_Recently_Used_(LRU)
-[4]:https://realpython.com/blog/python/caching-in-django-with-redis/
-[5]:http://memcached.org
-[6]:https://github.com/memcached/memcached/wiki/Install
-[7]:https://brew.sh/
-[8]:https://commaster.net/content/installing-memcached-windows
-[9]:https://pypi.python.org/pypi/pymemcache
-[10]:https://realpython.com/learn/python-first-steps/#11-pythons-power-packagesmodules
-[11]:https://en.wikipedia.org/wiki/Thundering_herd_problem
-[12]:https://scaling-python.com
diff --git a/sources/tech/20180131 Shallow vs Deep Copying of Python Objects - Real Python.md b/sources/tech/20180131 Shallow vs Deep Copying of Python Objects - Real Python.md
deleted file mode 100644
index 3dd0cb500c..0000000000
--- a/sources/tech/20180131 Shallow vs Deep Copying of Python Objects - Real Python.md
+++ /dev/null
@@ -1,262 +0,0 @@
-Shallow vs Deep Copying of Python Objects – Real Python
-======
-
-Assignment statements in Python do not create copies of objects; they only bind names to an object. For immutable objects, that usually doesn’t make a difference.
-
-But for working with mutable objects or collections of mutable objects, you might be looking for a way to create “real copies” or “clones” of these objects.
-
-Essentially, you’ll sometimes want copies that you can modify without automatically modifying the original at the same time. In this article I’m going to give you the rundown on how to copy or “clone” objects in Python 3 and some of the caveats involved.
-
-> **Note:** This tutorial was written with Python 3 in mind, but there is little difference between Python 2 and 3 when it comes to copying objects. When there are differences I will point them out in the text.
-
-Let’s start by looking at how to copy Python’s built-in collections. Python’s built-in mutable collections like [lists, dicts, and sets][3] can be copied by calling their factory functions on an existing collection:
-```
-new_list = list(original_list)
-new_dict = dict(original_dict)
-new_set = set(original_set)

-```
-
-However, this method won’t work for custom objects and, on top of that, it only creates shallow copies. For compound objects like lists, dicts, and sets, there’s an important difference between shallow and deep copying:
-
- * A **shallow copy** means constructing a new collection object and then populating it with references to the child objects found in the original. In essence, a shallow copy is only one level deep. The copying process does not recurse and therefore won’t create copies of the child objects themselves.
-
- * A **deep copy** makes the copying process recursive. It means first constructing a new collection object and then recursively populating it with copies of the child objects found in the original. Copying an object this way walks the whole object tree to create a fully independent clone of the original object and all of its children.
-
-
-
-
-I know, that was a bit of a mouthful.
So let’s look at some examples to drive home this difference between deep and shallow copies. - -**Free Bonus:** [Click here to get access to a chapter from Python Tricks: The Book][4] that shows you Python's best practices with simple examples you can apply instantly to write more beautiful + Pythonic code. - -### Making Shallow Copies - -In the example below, we’ll create a new nested list and then shallowly copy it with the `list()` factory function: -``` ->>> xs = [[1, 2, 3], [4, 5, 6], [7, 8, 9]] ->>> ys = list(xs) # Make a shallow copy - -``` - -This means `ys` will now be a new and independent object with the same contents as `xs`. You can verify this by inspecting both objects: -``` ->>> xs -[[1, 2, 3], [4, 5, 6], [7, 8, 9]] ->>> ys -[[1, 2, 3], [4, 5, 6], [7, 8, 9]] - -``` - -To confirm `ys` really is independent from the original, let’s devise a little experiment. You could try and add a new sublist to the original (`xs`) and then check to make sure this modification didn’t affect the copy (`ys`): -``` ->>> xs.append(['new sublist']) ->>> xs -[[1, 2, 3], [4, 5, 6], [7, 8, 9], ['new sublist']] ->>> ys -[[1, 2, 3], [4, 5, 6], [7, 8, 9]] - -``` - -As you can see, this had the expected effect. Modifying the copied list at a “superficial” level was no problem at all. - -However, because we only created a shallow copy of the original list, `ys` still contains references to the original child objects stored in `xs`. - -These children were not copied. They were merely referenced again in the copied list. - -Therefore, when you modify one of the child objects in `xs`, this modification will be reflected in `ys` as well—that’s because both lists share the same child objects. The copy is only a shallow, one level deep copy: -``` ->>> xs[1][0] = 'X' ->>> xs -[[1, 2, 3], ['X', 5, 6], [7, 8, 9], ['new sublist']] ->>> ys -[[1, 2, 3], ['X', 5, 6], [7, 8, 9]] - -``` - -In the above example we (seemingly) only made a change to `xs`. But it turns out that both sublists at index 1 in `xs` and `ys` were modified. Again, this happened because we had only created a shallow copy of the original list. - -Had we created a deep copy of `xs` in the first step, both objects would’ve been fully independent. This is the practical difference between shallow and deep copies of objects. - -Now you know how to create shallow copies of some of the built-in collection classes, and you know the difference between shallow and deep copying. The questions we still want answers for are: - - * How can you create deep copies of built-in collections? - * How can you create copies (shallow and deep) of arbitrary objects, including custom classes? - - - -The answer to these questions lies in the `copy` module in the Python standard library. This module provides a simple interface for creating shallow and deep copies of arbitrary Python objects. - -### Making Deep Copies - -Let’s repeat the previous list-copying example, but with one important difference. 
This time we’re going to create a deep copy using the `deepcopy()` function defined in the `copy` module instead: -``` ->>> import copy ->>> xs = [[1, 2, 3], [4, 5, 6], [7, 8, 9]] ->>> zs = copy.deepcopy(xs) - -``` - -When you inspect `xs` and its clone `zs` that we created with `copy.deepcopy()`, you’ll see that they both look identical again—just like in the previous example: -``` ->>> xs -[[1, 2, 3], [4, 5, 6], [7, 8, 9]] ->>> zs -[[1, 2, 3], [4, 5, 6], [7, 8, 9]] - -``` - -However, if you make a modification to one of the child objects in the original object (`xs`), you’ll see that this modification won’t affect the deep copy (`zs`). - -Both objects, the original and the copy, are fully independent this time. `xs` was cloned recursively, including all of its child objects: -``` ->>> xs[1][0] = 'X' ->>> xs -[[1, 2, 3], ['X', 5, 6], [7, 8, 9]] ->>> zs -[[1, 2, 3], [4, 5, 6], [7, 8, 9]] - -``` - -You might want to take some time to sit down with the Python interpreter and play through these examples right about now. Wrapping your head around copying objects is easier when you get to experience and play with the examples firsthand. - -By the way, you can also create shallow copies using a function in the `copy` module. The `copy.copy()` function creates shallow copies of objects. - -This is useful if you need to clearly communicate that you’re creating a shallow copy somewhere in your code. Using `copy.copy()` lets you indicate this fact. However, for built-in collections it’s considered more Pythonic to simply use the list, dict, and set factory functions to create shallow copies. - -### Copying Arbitrary Python Objects - -The question we still need to answer is how do we create copies (shallow and deep) of arbitrary objects, including custom classes. Let’s take a look at that now. - -Again the `copy` module comes to our rescue. Its `copy.copy()` and `copy.deepcopy()` functions can be used to duplicate any object. - -Once again, the best way to understand how to use these is with a simple experiment. I’m going to base this on the previous list-copying example. Let’s start by defining a simple 2D point class: -``` -class Point: - def __init__(self, x, y): - self.x = x - self.y = y - - def __repr__(self): - return f'Point({self.x!r}, {self.y!r})' - -``` - -I hope you agree that this was pretty straightforward. I added a `__repr__()` implementation so that we can easily inspect objects created from this class in the Python interpreter. - -> **Note:** The above example uses a [Python 3.6 f-string][5] to construct the string returned by `__repr__`. On Python 2 and versions of Python 3 before 3.6 you’d use a different string formatting expression, for example: -``` -> def __repr__(self): -> return 'Point(%r, %r)' % (self.x, self.y) -> -``` - -Next up, we’ll create a `Point` instance and then (shallowly) copy it, using the `copy` module: -``` ->>> a = Point(23, 42) ->>> b = copy.copy(a) - -``` - -If we inspect the contents of the original `Point` object and its (shallow) clone, we see what we’d expect: -``` ->>> a -Point(23, 42) ->>> b -Point(23, 42) ->>> a is b -False - -``` - -Here’s something else to keep in mind. Because our point object uses primitive types (ints) for its coordinates, there’s no difference between a shallow and a deep copy in this case. But I’ll expand the example in a second. - -Let’s move on to a more complex example. I’m going to define another class to represent 2D rectangles. 
I’ll do it in a way that allows us to create a more complex object hierarchy—my rectangles will use `Point` objects to represent their coordinates: -``` -class Rectangle: - def __init__(self, topleft, bottomright): - self.topleft = topleft - self.bottomright = bottomright - - def __repr__(self): - return (f'Rectangle({self.topleft!r}, ' - f'{self.bottomright!r})') - -``` - -Again, first we’re going to attempt to create a shallow copy of a rectangle instance: -``` -rect = Rectangle(Point(0, 1), Point(5, 6)) -srect = copy.copy(rect) - -``` - -If you inspect the original rectangle and its copy, you’ll see how nicely the `__repr__()` override is working out, and that the shallow copy process worked as expected: -``` ->>> rect -Rectangle(Point(0, 1), Point(5, 6)) ->>> srect -Rectangle(Point(0, 1), Point(5, 6)) ->>> rect is srect -False - -``` - -Remember how the previous list example illustrated the difference between deep and shallow copies? I’m going to use the same approach here. I’ll modify an object deeper in the object hierarchy, and then you’ll see this change reflected in the (shallow) copy as well: -``` ->>> rect.topleft.x = 999 ->>> rect -Rectangle(Point(999, 1), Point(5, 6)) ->>> srect -Rectangle(Point(999, 1), Point(5, 6)) - -``` - -I hope this behaved how you expected it to. Next, I’ll create a deep copy of the original rectangle. Then I’ll apply another modification and you’ll see which objects are affected: -``` ->>> drect = copy.deepcopy(srect) ->>> drect.topleft.x = 222 ->>> drect -Rectangle(Point(222, 1), Point(5, 6)) ->>> rect -Rectangle(Point(999, 1), Point(5, 6)) ->>> srect -Rectangle(Point(999, 1), Point(5, 6)) - -``` - -Voila! This time the deep copy (`drect`) is fully independent of the original (`rect`) and the shallow copy (`srect`). - -We’ve covered a lot of ground here, and there are still some finer points to copying objects. - -It pays to go deep (ha!) on this topic, so you may want to study up on the [`copy` module documentation][6]. For example, objects can control how they’re copied by defining the special methods `__copy__()` and `__deepcopy__()` on them. - -### 3 Things to Remember - - * Making a shallow copy of an object won’t clone child objects. Therefore, the copy is not fully independent of the original. - * A deep copy of an object will recursively clone child objects. The clone is fully independent of the original, but creating a deep copy is slower. - * You can copy arbitrary objects (including custom classes) with the `copy` module. - - - -If you’d like to dig deeper into other intermediate-level Python programming techniques, check out this free bonus: - -**Free Bonus:** [Click here to get access to a chapter from Python Tricks: The Book][4] that shows you Python's best practices with simple examples you can apply instantly to write more beautiful + Pythonic code. 
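-
-One last sketch before wrapping up: the `copy` module documentation mentioned above explains that objects can control how they are copied by defining `__copy__()` and `__deepcopy__()`. Here is a minimal, hypothetical example of what that customization can look like (the `Tagged` class is invented for illustration and is not part of the examples above):
-```
-import copy
-
-class Tagged:
-    def __init__(self, data, tag):
-        self.data = data  # a mutable child object, e.g. a list
-        self.tag = tag
-
-    def __copy__(self):
-        # Shallow copy: a new Tagged object that shares the child object
-        return Tagged(self.data, self.tag)
-
-    def __deepcopy__(self, memo):
-        # Deep copy: recursively copy the child, passing `memo` along
-        # so that cyclic references are handled correctly
-        return Tagged(copy.deepcopy(self.data, memo), self.tag)
-```
-
-With these methods defined, `copy.copy()` and `copy.deepcopy()` will call them instead of applying their default behavior.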
-
--------------------------------------------------------------------------------

-via: https://realpython.com/blog/python/copying-python-objects/
-
-作者:[Dan Bader][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://realpython.com/team/dbader/
-[1]:https://realpython.com/blog/categories/fundamentals/
-[2]:https://realpython.com/blog/categories/python/
-[3]:https://realpython.com/learn/python-first-steps/
-[4]:https://realpython.com/blog/python/copying-python-objects/
-[5]:https://dbader.org/blog/python-string-formatting
-[6]:https://docs.python.org/3/library/copy.html
diff --git a/sources/tech/20180206 Modern Web Automation With Python and Selenium - Real Python.md b/sources/tech/20180206 Modern Web Automation With Python and Selenium - Real Python.md
deleted file mode 100644
index 0d22b944ea..0000000000
--- a/sources/tech/20180206 Modern Web Automation With Python and Selenium - Real Python.md
+++ /dev/null
@@ -1,506 +0,0 @@
-Translating by Flowsnow
-
-Modern Web Automation With Python and Selenium – Real Python
-======
-
-In this tutorial you’ll learn advanced Python web automation techniques: Using Selenium with a “headless” browser, exporting the scraped data to CSV files, and wrapping your scraping code in a Python class.
-
-### Motivation: Tracking Listening Habits
-
-Suppose that you have been listening to music on [bandcamp][4] for a while now, and you find yourself wishing you could remember a song you heard a few months back.
-
-Sure you could dig through your browser history and check each song, but that might be a pain… All you remember is that you heard the song a few months ago and that it was in the electronic genre.
-
-“Wouldn’t it be great,” you think to yourself, “if I had a record of my listening history? I could just look up the electronic songs from two months ago and I’d surely find it.”
-
-**Today, you will build a basic Python class, called `BandLeader`, that connects to [bandcamp.com][4], streams music from the “discovery” section of the front page, and keeps track of your listening history.**
-
-The listening history will be saved to disk in a [CSV][5] file. You can then explore that CSV file in your favorite spreadsheet application or even with Python.
-
-If you have had some experience with [web scraping in Python][6], you are familiar with making HTTP requests and using Pythonic APIs to navigate the DOM. You will do more of the same today, except with one difference.
-
-**Today you will use a full-fledged browser running in headless mode to do the HTTP requests for you.**
-
-A [headless browser][7] is just a regular web browser, except that it contains no visible UI element. Just like you’d expect, it can do more than make requests: it can also render HTML (though you cannot see it), keep session information, and even perform asynchronous network communications by running JavaScript code.
-
-If you want to automate the modern web, headless browsers are essential.
-
-**Free Bonus:** [Click here to download a "Python + Selenium" project skeleton with full source code][1] that you can use as a foundation for your own Python web scraping and automation apps.
-
-### Setup
-
-Your first step, before writing a single line of Python, is to install a [Selenium][8]-supported [WebDriver][9] for your favorite web browser. In what follows, you will be working with [Firefox][10], but [Chrome][11] could easily work too.
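-
-If you would rather follow along with Chrome, the setup is almost identical. Here is a rough sketch, assuming you have `chromedriver` installed somewhere on your `PATH` and a Selenium version that accepts the `options` keyword (as the Firefox examples below do); the `--headless` flag is the most portable way to request headless mode:
-```
-from selenium.webdriver import Chrome
-from selenium.webdriver.chrome.options import Options
-
-opts = Options()
-opts.add_argument('--headless')
-browser = Chrome(options=opts)
-```
-
-The rest of the tutorial works the same way regardless of which browser is doing the driving.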
-
-So, assuming that the path `~/.local/bin` is in your execution `PATH`, here’s how you would install the Firefox webdriver, called `geckodriver`, on a Linux machine:
-```
-$ wget https://github.com/mozilla/geckodriver/releases/download/v0.19.1/geckodriver-v0.19.1-linux64.tar.gz
-$ tar xvfz geckodriver-v0.19.1-linux64.tar.gz
-$ mv geckodriver ~/.local/bin

-```
-
-Next, you install the [selenium][12] package, using `pip` or however else you like. If you made a [virtual environment][13] for this project, you just type:
-```
-$ pip install selenium

-```
-
-[ If you ever feel lost during the course of this tutorial, the full code demo can be found [on GitHub][14]. ]
-
-Now it’s time for a test drive:
-
-### Test Driving a Headless Browser
-
-To test that everything is working, you decide to try out a basic web search via [DuckDuckGo][15]. You fire up your preferred Python interpreter and type:
-```
->>> from selenium.webdriver import Firefox
->>> from selenium.webdriver.firefox.options import Options
->>> opts = Options()
->>> opts.set_headless()
->>> assert opts.headless  # operating in headless mode
->>> browser = Firefox(options=opts)
->>> browser.get('https://duckduckgo.com')

-```
-
-So far you have created a headless Firefox browser and navigated to `https://duckduckgo.com`. You made an `Options` instance and used it to activate headless mode when you passed it to the `Firefox` constructor. This is akin to typing `firefox -headless` at the command line.
-
-![](https://files.realpython.com/media/web-scraping-duckduckgo.f7bc7a5e2918.jpg)
-
-Now that a page is loaded, you can query the DOM using methods defined on your newly minted `browser` object. But how do you know what to query? The best way is to open your web browser and use its developer tools to inspect the contents of the page. Right now you want to get ahold of the search form so you can submit a query. By inspecting DuckDuckGo’s home page, you find that the search form `<input>` element has an `id` attribute `"search_form_input_homepage"`. That’s just what you needed:
-```
->>> search_form = browser.find_element_by_id('search_form_input_homepage')
->>> search_form.send_keys('real python')
->>> search_form.submit()

-```
-
-You found the search form, used the `send_keys` method to fill it out, and then the `submit` method to perform your search for `"Real Python"`. You can check out the top result:
-```
->>> results = browser.find_elements_by_class_name('result')
->>> print(results[0].text)
-
-Real Python - Real Python
-Get Real Python and get your hands dirty quickly so you spend more time making real applications. Real Python teaches Python and web development from the ground up ...
-https://realpython.com

-```
-
-Everything seems to be working. In order to prevent invisible headless browser instances from piling up on your machine, you close the browser object before exiting your Python session:
-```
->>> browser.close()
->>> quit()

-```
-
-### Groovin on Tunes
-
-You’ve tested that you can drive a headless browser using Python; now to put it to use.
-
- 1. You want to play music
 2. You want to browse and explore music
 3. You want information about what music is playing.
-
-
-
-To start, you navigate to `https://bandcamp.com` and start to poke around in your browser’s developer tools. You discover a big shiny play button towards the bottom of the screen with a `class` attribute that contains the value `"playbutton"`. You check that it works:
-
-
-```
->>> opts = Options()
->>> opts.set_headless()
->>> browser = Firefox(options=opts)
->>> browser.get('https://bandcamp.com')
->>> browser.find_element_by_class_name('playbutton').click()

-```
-
-You should hear music! Leave it playing and move back to your web browser. Just to the side of the play button is the discovery section. Again, you inspect this section and find that each of the currently visible tracks has a `class` value of `"discover-item"`, and that each item seems to be clickable. In Python, you check this out:
-```
->>> tracks = browser.find_elements_by_class_name('discover-item')
->>> len(tracks)  # 8
->>> tracks[3].click()

-```
-
-A new track should be playing! This is the first step to exploring bandcamp using Python! You spend a few minutes clicking on different tracks in your Python environment but soon grow tired of the meagre library of 8 songs.
-
-### Exploring the Catalogue
-
-Looking back at your browser, you see the buttons for exploring all of the tracks featured in bandcamp’s music discovery section. By now this feels familiar: each button has a `class` value of `"item-page"`. The very last button is the “next” button that will display the next eight tracks in the catalogue. You go to work:
-```
->>> next_button = [e for e in browser.find_elements_by_class_name('item-page')
-                   if e.text.lower().find('next') > -1]
->>> next_button[0].click()

-```
-
-Great! Now you want to look at the new tracks, so you think “I’ll just repopulate my `tracks` variable like I did a few minutes ago”. But this is where things start to get tricky.
-
-First, bandcamp designed their site for humans to enjoy using, not for Python scripts to access programmatically. When you call `next_button[0].click()`, the real web browser responds by executing some JavaScript code. If you try it out in your browser, you see that some time elapses as the catalogue of songs scrolls with a smooth animation effect. If you try to repopulate your `tracks` variable before the animation finishes, you may not get all the tracks and you may get some that you don’t want.
-
-The solution? You can just sleep for a second or, if you are just running all this in a Python shell, you probably won’t even notice; after all, it takes time for you to type too.
-
-Another slight kink is something that can only be discovered through experimentation. You try to run the same code again:
-```
->>> tracks = browser.find_elements_by_class_name('discover-item')
->>> assert(len(tracks) == 8)
-AssertionError
-...

-```
-
-But you notice something strange. `len(tracks)` is not equal to `8` even though only the next batch of `8` should be displayed. Digging a little further, you find that your list contains some tracks that were displayed before. To get only the tracks that are actually visible in the browser, you need to filter the results a little.
-
-After trying a few things, you decide to keep a track only if its `x` coordinate on the page falls within the bounding box of the containing element. The catalogue’s container has a `class` value of `"discover-results"`. Here’s how you proceed:
-```
->>> discover_section = browser.find_element_by_class_name('discover-results')
->>> left_x = discover_section.location['x']
->>> right_x = left_x + discover_section.size['width']
->>> discover_items = browser.find_elements_by_class_name('discover-item')
->>> tracks = [t for t in discover_items
-              if t.location['x'] >= left_x and t.location['x'] < right_x]
->>> assert len(tracks) == 8

-```
-
-### Building a Class
-
-If you are growing weary of retyping the same commands over and over again in your Python environment, you should dump some of it into a module. A basic class for your bandcamp manipulation should do the following:
-
- 1. Initialize a headless browser and navigate to bandcamp
 2. Keep a list of available tracks
 3. Support finding more tracks
 4. Play, pause, and skip tracks
-
-
-
-All in one go, here’s the basic code:
-```
-from selenium.webdriver import Firefox
-from selenium.webdriver.firefox.options import Options
-from time import sleep, ctime
-from collections import namedtuple
-from threading import Thread
-from os.path import isfile
-import csv
-
-
-BANDCAMP_FRONTPAGE = 'https://bandcamp.com/'
-
-class BandLeader():
-    def __init__(self):
-        # create a headless browser
-        opts = Options()
-        opts.set_headless()
-        self.browser = Firefox(options=opts)
-        self.browser.get(BANDCAMP_FRONTPAGE)
-
-        # track list related state
-        self._current_track_number = 1
-        self.track_list = []
-        self.tracks()
-
-    def tracks(self):
-        '''
-        query the page to populate a list of available tracks
-        '''
-
-        # sleep to give the browser time to render and finish any animations
-        sleep(1)
-
-        # get the container for the visible track list
-        discover_section = self.browser.find_element_by_class_name('discover-results')
-        left_x = discover_section.location['x']
-        right_x = left_x + discover_section.size['width']
-
-        # filter the items in the list to include only those we can click
-        discover_items = self.browser.find_elements_by_class_name('discover-item')
-        self.track_list = [t for t in discover_items
-                           if t.location['x'] >= left_x and t.location['x'] < right_x]
-
-        # print the available tracks to the screen
-        for (i, track) in enumerate(self.track_list):
-            print('[{}]'.format(i + 1))
-            lines = track.text.split('\n')
-            print('Album : {}'.format(lines[0]))
-            print('Artist : {}'.format(lines[1]))
-            if len(lines) > 2:
-                print('Genre : {}'.format(lines[2]))
-
-    def catalogue_pages(self):
-        '''
-        print the available pages in the catalogue that are presently
-        accessible
-        '''
-        print('PAGES')
-        for e in self.browser.find_elements_by_class_name('item-page'):
-            print(e.text)
-        print('')
-
-
-    def more_tracks(self, page='next'):
-        '''
-        advances the catalogue and repopulates the track list; we can pass in a number
-        to advance to any of the available pages
-        '''
-
-        next_btn = [e for e in self.browser.find_elements_by_class_name('item-page')
-                    if e.text.lower().strip() == str(page)]
-
-        if next_btn:
-            next_btn[0].click()
-            self.tracks()
-
-    def play(self, track=None):
-        '''
-        play a track. If no track number is supplied, the presently selected track
-        will play
-        '''
-
-        if track is None:
-            self.browser.find_element_by_class_name('playbutton').click()
-        elif type(track) is int and track <= len(self.track_list) and track >= 1:
-            self._current_track_number = track
-            self.track_list[self._current_track_number - 1].click()
-
-
-    def play_next(self):
-        '''
-        plays the next available track
-        '''
-        if self._current_track_number < len(self.track_list):
-            self.play(self._current_track_number + 1)
-        else:
-            self.more_tracks()
-            self.play(1)
-
-
-    def pause(self):
-        '''
-        pauses the playback
-        '''
-        self.play()
-```
-
-Pretty neat. You can import this into your Python environment and run bandcamp programmatically! But wait, didn’t you start this whole thing because you wanted to keep track of information about your listening history?
-
-### Collecting Structured Data
-
-Your final task is to keep track of the songs that you actually listened to. How might you do this? What does it mean to actually listen to something anyway? If you are perusing the catalogue, stopping for a few seconds on each song, does each of those songs count? Probably not. You are going to allow some ‘exploration’ time to factor into your data collection.
-
-Your goals are now to:
-
- 1. Collect structured information about the currently playing track
 2. Keep a “database” of tracks
 3. Save and restore that “database” to and from disk
-
-
-
-You decide to use a [namedtuple][16] to store the information that you track. Named tuples are good for representing bundles of attributes with no functionality tied to them, a bit like a database record.
-```
-TrackRec = namedtuple('TrackRec', [
-    'title',
-    'artist',
-    'artist_url',
-    'album',
-    'album_url',
-    'timestamp'  # When you played it
-])

-```
-
-In order to collect this information, you add a method to the `BandLeader` class. Checking back in with the browser’s developer tools, you find the right HTML elements and attributes to select all the information you need. Also, you only want to get information about the currently playing track if music is actually playing at the time. Luckily, the page player adds a `"playing"` class to the play button whenever music is playing and removes it when the music stops. With these considerations in mind, you write a couple of methods:
-```
-def is_playing(self):
-    '''
-    returns `True` if a track is presently playing
-    '''
-    playbtn = self.browser.find_element_by_class_name('playbutton')
-    return playbtn.get_attribute('class').find('playing') > -1
-
-
-def currently_playing(self):
-    '''
-    Returns the record for the currently playing track,
-    or None if nothing is playing
-    '''
-    try:
-        if self.is_playing():
-            title = self.browser.find_element_by_class_name('title').text
-            album_detail = self.browser.find_element_by_css_selector('.detail-album > a')
-            album_title = album_detail.text
-            album_url = album_detail.get_attribute('href').split('?')[0]
-            artist_detail = self.browser.find_element_by_css_selector('.detail-artist > a')
-            artist = artist_detail.text
-            artist_url = artist_detail.get_attribute('href').split('?')[0]
-            return TrackRec(title, artist, artist_url, album_title, album_url, ctime())
-
-    except Exception as e:
-        print('there was an error: {}'.format(e))
-
-    return None
-```
-
-For good measure, you also modify the `play` method to keep track of the currently playing track:
-```
-def play(self, track=None):
-    '''
-    play a track. If no track number is supplied, the presently selected track
-    will play
-    '''
-
-    if track is None:
-        self.browser.find_element_by_class_name('playbutton').click()
-    elif type(track) is int and track <= len(self.track_list) and track >= 1:
-        self._current_track_number = track
-        self.track_list[self._current_track_number - 1].click()
-
-    sleep(0.5)
-    if self.is_playing():
-        self._current_track_record = self.currently_playing()
-```
-
-Next, you’ve got to keep a database of some kind. Though it may not scale well in the long run, you can go far with a simple list. You add `self.database = []` to `BandLeader`’s `__init__` method. Because you want to allow for time to pass before entering a `TrackRec` object into the database, you decide to use Python’s [threading tools][17] to run a separate thread that maintains the database in the background.
-
-You’ll supply a `_maintain()` method to `BandLeader` instances that will run in a separate thread. The new method will periodically check the value of `self._current_track_record` and add it to the database if it is new.
-
-You will start the thread when the class is instantiated by adding some code to `__init__`.
-```
- # the new init
-def __init__(self):
-    # create a headless browser
-    opts = Options()
-    opts.set_headless()
-    self.browser = Firefox(options=opts)
-    self.browser.get(BANDCAMP_FRONTPAGE)
-
-    # track list related state
-    self._current_track_number = 1
-    self.track_list = []
-    self.tracks()
-
-    # state for the database
-    self.database = []
-    self._current_track_record = None
-
-    # the database maintenance thread
-    self.thread = Thread(target=self._maintain)
-    self.thread.daemon = True    # kill the thread when the main process dies
-    self.thread.start()
-
-
-def _maintain(self):
-    while True:
-        self._update_db()
-        sleep(20)    # check every 20 seconds
-
-
-def _update_db(self):
-    try:
-        check = (self._current_track_record is not None
-                 and (len(self.database) == 0
-                      or self.database[-1] != self._current_track_record)
-                 and self.is_playing())
-        if check:
-            self.database.append(self._current_track_record)
-
-    except Exception as e:
-        print('error while updating the db: {}'.format(e))

-```
-
-If you’ve never worked with multithreaded programming in Python, [you should read up on it!][18] For your present purpose, you can think of a thread as a loop that runs in the background of the main Python process (the one you interact with directly). Every twenty seconds, the loop checks a few things to see if the database needs to be updated, and if it does, appends a new record. Pretty cool.
-
-The very last step is saving the database and restoring from saved states. Using the [csv][19] package you can ensure your database resides in a highly portable format, and remains usable even if you abandon your wonderful `BandLeader` class ;)
-
-The `__init__` method should be yet again altered, this time to accept a file path where you’d like to save the database. You’d like to load this database if it is available, and you’d like to save it periodically, whenever it is updated. The updates look like so:
-```
-def __init__(self, csvpath=None):
-    self.database_path = csvpath
-    self.database = []
-
-    # load database from disk if possible
-    if self.database_path and isfile(self.database_path):
-        with open(self.database_path, newline='') as dbfile:
-            dbreader = csv.reader(dbfile)
-            next(dbreader)   # to ignore the header line
-            self.database = [TrackRec._make(rec) for rec in dbreader]
-
-    # .... the rest of the __init__ method is unchanged ....
-
-
-# a new save_db method
-def save_db(self):
-    with open(self.database_path, 'w', newline='') as dbfile:
-        dbwriter = csv.writer(dbfile)
-        dbwriter.writerow(list(TrackRec._fields))
-        for entry in self.database:
-            dbwriter.writerow(list(entry))
-
-
-# finally add a call to save_db to your database maintenance method
-def _update_db(self):
-    try:
-        check = (self._current_track_record is not None
-                 and (len(self.database) == 0
-                      or self.database[-1] != self._current_track_record)
-                 and self.is_playing())
-        if check:
-            self.database.append(self._current_track_record)
-            self.save_db()
-
-    except Exception as e:
-        print('error while updating the db: {}'.format(e))
-```
-
-And voilà! You can listen to music and keep a record of what you hear! Amazing.
-
-Something interesting about the above is that [using a `namedtuple`][16] really begins to pay off. When converting to and from CSV format, you take advantage of the ordering of the fields in each CSV row to fill in the fields of the `TrackRec` objects. Likewise, you can create the header row of the CSV file by referencing the `TrackRec._fields` attribute. This is one of the reasons using a tuple ends up making sense for columnar data.
-
-### What’s Next and What Have You Learned?
-
-From here you could do loads more! Here are a few quick ideas that would leverage the mild superpower that is Python + Selenium:
-
- * You could extend the `BandLeader` class to navigate to album pages and play the tracks you find there
 * You might decide to create playlists based on your favorite or most frequently heard tracks
 * Perhaps you want to add an autoplay feature
 * Maybe you’d like to query songs by date or title or artist and build playlists that way
-
-
-
-**Free Bonus:** [Click here to download a "Python + Selenium" project skeleton with full source code][1] that you can use as a foundation for your own Python web scraping and automation apps.
-
-You have learned that Python can do everything that a web browser can do, and a bit more. You could easily write scripts to control virtual browser instances that run in the cloud, create bots that interact with real users, or bots that mindlessly fill out forms! Go forth, and automate!
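-
-To wrap up, here is a short, hypothetical REPL session showing how the finished class might be driven. It assumes you saved everything above in a module called `bandleader.py`; the module and CSV file names are illustrative placeholders, not part of the code above:
-```
->>> from bandleader import BandLeader
->>> leader = BandLeader('listening_history.csv')
->>> leader.play(2)           # play the second track in the discovery list
->>> leader.currently_playing()
->>> leader.more_tracks()     # advance the catalogue
->>> leader.play_next()
-```
-
-After a listening session like this, `listening_history.csv` will contain one row per `TrackRec` that the maintenance thread saw playing.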
- --------------------------------------------------------------------------------- - -via: https://realpython.com/blog/python/modern-web-automation-with-python-and-selenium/ - -作者:[Colin OKeefe][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://realpython.com/team/cokeefe/ -[1]:https://realpython.com/blog/python/modern-web-automation-with-python-and-selenium/# -[4]:https://bandcamp.com -[5]:https://en.wikipedia.org/wiki/Comma-separated_values -[6]:https://realpython.com/blog/python/python-web-scraping-practical-introduction/ -[7]:https://en.wikipedia.org/wiki/Headless_browser -[8]:http://www.seleniumhq.org/docs/ -[9]:https://en.wikipedia.org/wiki/Selenium_(software)#Selenium_WebDriver -[10]:https://www.mozilla.org/en-US/firefox/new/ -[11]:https://www.google.com/chrome/index.html -[12]:http://seleniumhq.github.io/selenium/docs/api/py/ -[13]:https://realpython.com/blog/python/python-virtual-environments-a-primer/ -[14]:https://github.com/realpython/python-web-scraping-examples -[15]:https://duckduckgo.com -[16]:https://dbader.org/blog/writing-clean-python-with-namedtuples -[17]:https://docs.python.org/3.6/library/threading.html#threading.Thread -[18]:https://dbader.org/blog/python-parallel-computing-in-60-seconds -[19]:https://docs.python.org/3.6/library/csv.html From c27651f71147a3f97d844c04edd6f622836e13ab Mon Sep 17 00:00:00 2001 From: darksun Date: Thu, 22 Feb 2018 15:19:36 +0800 Subject: [PATCH 22/81] remove www.dataquest.io --- ...Introduction to AWS for Data Scientists.md | 212 ------------------ 1 file changed, 212 deletions(-) delete mode 100644 sources/tech/20180130 Introduction to AWS for Data Scientists.md diff --git a/sources/tech/20180130 Introduction to AWS for Data Scientists.md b/sources/tech/20180130 Introduction to AWS for Data Scientists.md deleted file mode 100644 index ada3585745..0000000000 --- a/sources/tech/20180130 Introduction to AWS for Data Scientists.md +++ /dev/null @@ -1,212 +0,0 @@ -Introduction to AWS for Data Scientists -====== -![sky-690293_1920][1] - -These days, many businesses use cloud based services; as a result various companies have started building and providing such services. Amazon [began the trend][2], with Amazon Web Services (AWS). While AWS began in 2006 as a side business, it now makes [$14.5 billion in revenue each year][3]. - -Other leaders in this area include: - - * Google--Google Cloud Platform (GCP) - * Microsoft--Azure Cloud Services - * IBM--IBM Cloud - - - -Cloud services are useful to businesses of all sizes--small companies benefit from the low cost, as compared to buying servers. Larger companies gain reliability and productivity, with less cost, since the services run on optimum energy and maintenance. - -These services are also powerful tools that you can use to ease your work. Setting up a Hadoop cluster to work with Spark manually could take days if it's your first time, but AWS sets that up for you in minutes. - -We are going to focus on AWS here because it comes with more products relevant to data scientists. In general, we can say familiarity with AWS helps data scientists to: - - 1. Prepare the infrastructure they need for their work (e.g. Hadoop clusters) with ease - 2. Easily set up necessary tools (e.g. Spark) - 3. Decrease expenses significantly--such as by paying for huge Hadoop clusters only when needed - 4. 
Spend less time on maintenance, as there's no need for tasks like manually backing up data
- 5. Develop products and features that are ready to launch without needing help from engineers (or, at least, needing very little help)
-
-
-In this post, I'll give an overview of useful AWS services for data scientists -- what they are, why they're useful, and how much they cost.
-
-### Elastic Compute Cloud (EC2)
-
-Many other AWS services are built around EC2, making it a core piece of AWS. EC2s are in fact (virtual) servers that you can rent from Amazon and set up or run any program/application on them. These servers come with a choice of operating systems, and Amazon charges you based on the computing power and capacity of the server (i.e., hard drive capacity, CPU, memory, etc.) and how long the server has been up.
-
-#### EC2 benefits
-
-For example, you can rent a Linux or Windows server with computation power and storage capacity that fit your specific needs, and Amazon charges you based on these specifications and the duration you use the server. Note that previously AWS charged at least for one hour for each instance you ran, but they recently changed their policy to [per-second billing][4].
-
-One of the good things about EC2 is its scalability--by changing memory, number of vCPUs, bandwidth, and so on, you can easily scale your system up or down. Therefore, if you think a system doesn't have enough power for running a specific task, or a calculation in your project is taking too long, you can scale up to finish your work and later scale down again to reduce the cost. EC2 is also very reliable, since Amazon takes care of the maintenance.
-
-#### EC2 cost
-
-EC2 instances are relatively low-cost, and there are different types of instances for different use cases. For example, there are instances that are optimized for computation, and those have a relatively lower cost on CPU usage. Those optimized for memory have a lower cost on memory usage.
-
-To give you an idea of EC2 cost, a general purpose medium instance with 2 vCPUs and 4 GIG of memory (at the time of writing this article) costs $0.0464 per hour for a Linux server; see [Amazon EC2 Pricing][5] for prices and more information. AWS also now has [spot instance pricing][6], which calculates the price based on supply/demand at the time and provides up to a 90% discount for short-term usage, depending on when you want to use the instance. For example, the same instance above costs $0.0173 per hour on the spot pricing plan.
-
-Note that you have to add storage costs to the above as well. Most EC2 instances use Elastic Block Store (EBS) systems, which cost around $0.1/GIG/month; see the prices [here][7]. [Storage optimized instances][8] use Solid State Drive (SSD) systems, which are more expensive.
-
-![Ec2cost][9]
-
-EBS acts like an external hard drive. You can attach it to an instance, detach it, and re-attach it to another instance. You can also stop or terminate an instance after your work is done and not pay for the instance when it is idle.
-
-If you stop an instance, AWS will still keep the EBS live, and as a result the data you have on the hard drive will remain intact (it's like powering off your computer). Later you can restart stopped instances and get access to the data you generated, or even tools you installed there in previous sessions. However, when you stop an instance instead of terminating it, Amazon will still charge you for the attached EBS (~$0.1/GIG/month).
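-To make the stop/terminate distinction concrete, here is a hedged Boto3 sketch; the instance ID below is a made-up placeholder, and AWS credentials are assumed to be configured:
-
-```
-import boto3
-
-ec2 = boto3.resource('ec2')
-instance = ec2.Instance('i-0123456789abcdef0')   # hypothetical instance ID
-
-# stop: the instance shuts down, but the attached EBS volume (and its data) persists
-instance.stop()
-
-# terminate: the instance is destroyed and, by default, its root EBS volume is deleted
-# instance.terminate()
-```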
-If you terminate the instance, the EBS will get cleaned, so you will lose all the data on that instance, but you no longer need to pay for the EBS.
-
-If you need to keep the data on the EBS for future use (let's say you have custom tools installed on that instance and you don't want to redo your work later), you can make a snapshot of the EBS and can later restore it into a new EBS and attach that to a new instance.
-
-Snapshots get stored on S3 (Amazon's cheap storage system; we will get to it later), so it will cost you less ($0.05 per GB-month) to keep the data that way. However, it takes time (depending on the size of the EBS) to take a snapshot and to restore it. Besides, reattaching a restored EBS to an EC2 instance is not that straightforward, so it only makes sense to use a snapshot like that if you know you are not going to use that EBS for a while.
-
-Note that to scale an instance up or down, you have to first stop the instance and then change the instance specifications. You can't decrease the EBS size, only increase it, and even that is more involved. You have to:
-
- 1. Stop the instance
- 2. Make a snapshot of the EBS
- 3. Restore the snapshot into an EBS of the new size
- 4. Detach the previous EBS
- 5. Attach the new one.
-
-### Simple Storage Service (S3)
-
-S3 is AWS's object (file) storage service. S3 is like Dropbox or Google Drive, but far more scalable and made particularly to work with code and applications.
-
-S3 doesn't provide a user-friendly interface, since it is designed to work with online applications, not the end user. Therefore, working with S3 through APIs is easier than through its web console, and there are many libraries and APIs developed (in various languages) to work with this service. For example, [Boto3][10] is an S3 library written in Python (in fact, Boto3 is suitable for working with many other AWS services as well).
-
-S3 stores files based on `bucket`s and `key`s. Buckets are similar to root folders, and keys are similar to subfolders and files. So if you store a file named `my_file.txt` on S3 as `myproject/mytextfiles/my_file.txt`, then "myproject" is the bucket you are using, and `mytextfiles/my_file.txt` is the key to that file. This is important to know, since APIs will ask for the bucket and the key separately when you want to retrieve your file from S3.
-
-#### S3 benefits
-
-There is no limit on the amount of data you can store on S3--you just have to pay for the storage based on the size you need per month.
-
-S3 is also very reliable, and "[it is designed to deliver 99.999999999% durability][11]". However, the service may not always be up. On February 28th, 2017, some S3 servers went down for a couple of hours, and that disrupted many applications such as Slack, Trello, etc. See [these][12] [articles][13] for more information on this incident.
-
-#### S3 cost
-
-The cost is low, starting at $0.023 per GB per month for standard access, if you access these files regularly. It can go even lower if you don't need to load the data too frequently. See [Amazon S3 Pricing][14] for more information.
-
-AWS may charge you for other S3-related actions, such as requests through APIs, but the cost of those is insignificant (less than $0.05 per 1,000 requests in most cases).
-
-### Relational Database Service (RDS)
-
-AWS RDS is a relational database service in the cloud. RDS currently supports SQL Server, MySQL, PostgreSQL, Oracle, and a couple of other SQL-based engines.
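-As a hedged sketch of what programmatic provisioning looks like with Boto3 (every identifier, password, and size below is an illustrative placeholder, not part of the original article):
-
-```
-import boto3
-
-rds = boto3.client('rds')
-
-# request a small PostgreSQL instance; AWS handles the OS, patching, and backups
-rds.create_db_instance(
-    DBInstanceIdentifier='my-postgres-db',     # hypothetical name
-    DBInstanceClass='db.t2.medium',            # 2 vCPUs, 4 GiB RAM
-    Engine='postgres',
-    MasterUsername='dbadmin',
-    MasterUserPassword='change-me-please',     # placeholder only
-    AllocatedStorage=20,                       # in GB
-)
-```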
-AWS sets up the system you need and configures the parameters, so you can have a relational database up and running in minutes. RDS also handles backup, recovery, software patching, failure detection, and repairs by itself, so you don't need to maintain the system.
-
-#### RDS benefits
-
-RDS is scalable: both the computing power and the storage capacity can be scaled up or down easily. The RDS system runs on EC2 servers (as I mentioned, EC2 servers are the core of most AWS services, including RDS), so by computing power here we mean the computing power of the EC2 server the RDS service is running on. You can scale the computing power of this system up to 32 vCPUs and 244 GiB of RAM, and changing the scale does not take more than a few minutes.
-
-Scaling the storage requirements up or down is also possible. [Amazon Aurora][15] is a version of MySQL and PostgreSQL with some additional features, and it can automatically scale up when more storage space is needed (you can define the maximum). The MySQL, MariaDB, Oracle, and PostgreSQL engines allow you to scale up on the fly without downtime.
-
-#### RDS cost
-
-The [cost of RDS servers][16] is based on three factors: computational power, storage, and data transfer.
-
-![RDSpricing][17]
-
-For example, a PostgreSQL system with medium computational power (2 vCPUs and 8 GiB of memory) costs $0.182 per hour; you can pay less if you go under a one- or three-year contract.
-
-For storage, there are a [variety of options and prices][18]. If you choose single-availability-zone General Purpose SSD Storage (gp2), a good option for data scientists, the cost for a server in northern Virginia at the time of writing this article is $0.115 per GB-month, and you can select from 5 GB to 16 TB of SSD.
-
-For data transfer, the cost varies a little based on the source and destination of the data (one of which is RDS). For example, all data transferred from the internet into RDS is free. The first gigabyte of data transferred from RDS to the internet is free as well; for the next 10 terabytes of data in a month it costs $0.09 per GB, and the cost decreases for transferring more data than that.
-
-### Redshift
-
-Redshift is Amazon's data warehouse service; it is a distributed system (something like the Hadoop framework) which lets you store huge amounts of data and run queries against them. The difference between this service and RDS is its high capacity and ability to work with big data (terabytes and petabytes). You can use simple SQL queries on Redshift as well.
-
-Redshift works on a distributed framework--data is distributed over different nodes (servers) connected in a cluster. Simply put, queries on a distributed system run in parallel on all the nodes, and then the results get collected from each node and summarized.
-
-#### Redshift benefits
-
-Redshift is highly scalable, meaning that in theory (depending on the query, network structure and design, service specification, etc.) the speed of querying 1 terabyte of data and 1 petabyte of data can match, by scaling up (adding more nodes to) the system.
-
-When you create a table on Redshift, you can choose one of three distribution styles: EVEN, KEY, or ALL.
-
-  * EVEN means the table rows will get distributed over all the nodes evenly. Then queries involving that table get distributed over the cluster and run in parallel, summarized at the end. Per Amazon's documentation, "[EVEN distribution is appropriate when a table does not participate in joins][19]".
-
-  * ALL means that on each node there will be a copy of this table, so if you query for a join on that table, the table is already there on all the nodes and there is no need for copying the required data across the network from node to node. The problem is "[ALL distribution multiplies the storage required by the number of nodes in the cluster, and so it takes much longer to load, update, or insert data into multiple tables][19]".
-
-  * In the KEY style, rows of the table are distributed based on the values in one column, in an attempt to keep the rows with the same value of that column on the same node. Physically storing matching values on the same nodes makes joining on that specific column faster in parallel systems; see more information [here][19].
-
-
-
-#### Redshift cost
-
-Redshift has two types of instances: Dense Compute or Dense Storage. Dense Compute is optimized for fast querying, and it is cost-effective for less than 500GB of data in size (~$5,500/TB/year for a three-year contract with partial upfront).
-
-Dense Storage is optimized for large storage (~$1,000/TB/year for a three-year contract with partial upfront) and is cost-effective above 500GB, but it is slower. You can find more general pricing [here][20].
-
-You can also save a large amount of data on S3 and use [Amazon Redshift Spectrum][21] to run SQL queries on that data. For Redshift Spectrum, AWS charges you by the number of bytes scanned per query, at $5 per terabyte of data scanned (10 megabyte minimum per query).
-
-### Elastic MapReduce (EMR)
-
-EMR is suitable for setting up Hadoop clusters with Spark and other distributed-type applications. A Hadoop cluster can be used as a compute engine or a (distributed) storage system. However, if the data is so big that you need a distributed system to handle it, Redshift is more suitable and way cheaper than storing it in EMR.
-
-There are three types of [nodes][22] on a cluster:
-
-  * The master node (you only have one) is responsible for managing the cluster. It distributes the workloads to the core and task nodes, tracks the status of tasks, and monitors the health of the cluster.
-  * Core nodes run tasks and store the data.
-  * Task nodes can only run tasks.
-
-
-
-#### EMR benefits
-
-Since you can set EMR to install Apache Spark, this service is good for cleaning, reformatting, and analyzing big data. You can use EMR on demand, meaning you can set it to grab the code and data from a source (e.g. S3 for the code, and S3 or RDS for the data), run the task on the cluster, store the results somewhere (again S3, RDS, or Redshift), and terminate the cluster.
-
-By using the service in such a way, you can reduce the cost of your cluster significantly. In my opinion, EMR is one of the most useful AWS services for data scientists.
-
-To set up an EMR cluster, you first need to configure the applications you want to have on the cluster. Note that different versions of EMR come with different versions of the applications. For example, if you configure EMR version 5.10.0 to install Spark, the default version of Spark for this release is 2.2.0. So if your code works only on Spark 1.6, you need to run EMR on a 4.x version. EMR will set up the network and configure all the nodes on the cluster along with the needed tools.
-
-An EMR cluster comes with one master instance and a number of core nodes (slave instances).
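-For a feel of how "on-demand" this is, here is a hedged Boto3 sketch of launching a small Spark cluster; the name, roles, and instance sizes are illustrative placeholders:
-
-```
-import boto3
-
-emr = boto3.client('emr')
-
-# one master + two core nodes, with Spark installed
-emr.run_job_flow(
-    Name='my-spark-cluster',                   # hypothetical name
-    ReleaseLabel='emr-5.10.0',
-    Applications=[{'Name': 'Spark'}],
-    Instances={
-        'MasterInstanceType': 'm4.large',
-        'SlaveInstanceType': 'm4.large',
-        'InstanceCount': 3,
-        'KeepJobFlowAliveWhenNoSteps': False,  # shut down after the steps finish
-    },
-    JobFlowRole='EMR_EC2_DefaultRole',
-    ServiceRole='EMR_DefaultRole',
-)
-```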
You can choose the number of core nodes, and can even select to have no core node and only use the master server for your work. Like other services, you can choose the computational power of the servers and the storage size available on each node. You can use autoscale option for your core nodes, meaning you can add rules to the system to add/remove core node (up to a maximum number you choose) if needed while running your code. See [Using Automatic Scaling in Amazon EMR][23] for more information on auto scaling. - -#### EMR pricing - -EMR pricing is based on the computational power you choose for different instances (master, core and task nodes). Basically, it is the cost of the EC2 servers plus the cost of EMR. You can find detailed pricing [here][24]. - -![EMRpricing][25] - -### Conclusion - -I have developed many end-to-end data-driven products (including reporting, machine learning models, and product health checking systems) for our company using Python and Spark on AWS, which later became good sources of income for the company. - -Experience working with cloud services, especially a well-known one like AWS, is a huge plus in your data scientist career. Many companies depend on these services now and use them constantly, so you being familiar with these services will give them the confidence that you need less training to get on board. With more and more people moving into data science, you want your resume to stand out as much as possible. - -Do you have cloud tips to add? [Let us know][26]. - --------------------------------------------------------------------------------- - -via: https://www.dataquest.io/blog/introduction-to-aws-for-data-scientists/ - -作者:[Read More][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.dataquest.io/blog/author/armin/ -[1]:/blog/content/images/2018/01/sky-690293_1920.jpg -[2]:http://www.computerweekly.com/feature/A-history-of-cloud-computing -[3]:https://www.forbes.com/sites/bobevans1/2017/07/28/ibm-beats-amazon-in-12-month-cloud-revenue-15-1-billion-to-14-5-billion/#53c3e14c39d6 -[4]:https://aws.amazon.com/blogs/aws/new-per-second-billing-for-ec2-instances-and-ebs-volumes/ -[5]:https://aws.amazon.com/ec2/pricing/on-demand/ -[6]:https://aws.amazon.com/ec2/spot/pricing/ -[7]:https://aws.amazon.com/ebs/pricing/ -[8]:https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/storage-optimized-instances.html -[9]:/blog/content/images/2018/01/Ec2cost.png -[10]:https://boto3.readthedocs.io -[11]:https://aws.amazon.com/s3/ -[12]:https://aws.amazon.com/message/41926/ -[13]:https://venturebeat.com/2017/02/28/aws-is-investigating-s3-issues-affecting-quora-slack-trello/ -[14]:https://aws.amazon.com/s3/pricing/ -[15]:https://aws.amazon.com/rds/aurora/ -[16]:https://aws.amazon.com/rds/postgresql/pricing/ -[17]:/blog/content/images/2018/01/RDSpricing.png -[18]:https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html -[19]:http://docs.aws.amazon.com/redshift/latest/dg/c_choosing_dist_sort.html -[20]:https://aws.amazon.com/redshift/pricing/ -[21]:https://aws.amazon.com/redshift/spectrum/ -[22]:http://docs.aws.amazon.com/emr/latest/DeveloperGuide/emr-nodes.html -[23]:https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-automatic-scaling.html -[24]:https://aws.amazon.com/emr/pricing/ -[25]:/blog/content/images/2018/01/EMRpricing.png -[26]:https://twitter.com/dataquestio From 1e083a61610621011c2b6d0d9e810f882f8b2771 
Mon Sep 17 00:00:00 2001 From: darksun Date: Thu, 22 Feb 2018 15:42:12 +0800 Subject: [PATCH 23/81] remove blog.simos.info --- ... addresses from your LAN using a bridge.md | 173 -------------- .../20180129 How to use LXD instance types.md | 225 ------------------ ...How to use lxc remote with the LXD snap.md | 106 --------- 3 files changed, 504 deletions(-) delete mode 100644 sources/tech/20180129 How to make your LXD containers get IP addresses from your LAN using a bridge.md delete mode 100644 sources/tech/20180129 How to use LXD instance types.md delete mode 100644 sources/tech/20180201 How to use lxc remote with the LXD snap.md diff --git a/sources/tech/20180129 How to make your LXD containers get IP addresses from your LAN using a bridge.md b/sources/tech/20180129 How to make your LXD containers get IP addresses from your LAN using a bridge.md deleted file mode 100644 index 6f26f182b8..0000000000 --- a/sources/tech/20180129 How to make your LXD containers get IP addresses from your LAN using a bridge.md +++ /dev/null @@ -1,173 +0,0 @@ -How to make your LXD containers get IP addresses from your LAN using a bridge -====== -**Background** : LXD is a hypervisor that manages machine containers on Linux distributions. You install LXD on your Linux distribution and then you can launch machine containers into your distribution running all sort of (other) Linux distributions. - -In the previous post, we saw how to get our LXD container to receive an IP address from the local network (instead of getting the default private IP address), using **macvlan**. - -In this post, we are going to see how to use a **bridge** to make our containers get an IP address from the local network. Specifically, we are going to see how to do this using NetworkManager. If you have several public IP addresses, you can use this method (or the other with the **macvlan** ) in order to expose your LXD containers directly to the Internet. - -### Creating the bridge with NetworkManager - -See this post [How to configure a Linux bridge with Network Manager on Ubuntu][1] on how to create the bridge with NetworkManager. It explains that you - - 1. Use **NetworkManager** to **Add a New Connection** , a **Bridge**. - 2. When configuring the **Bridge** , you specify the real network connection (the device, like **eth0** or **enp3s12** ) that will be **the slave of the bridge**. You can verify the device of the network connection if you run **ip route list 0.0.0.0/0**. - 3. Then, you can remove the old network connection and just keep the slave. The slave device ( **bridge0** ) will now be the device that gets you your LAN IP address. - - - -At this point you would have again network connectivity. Here is the new device, **bridge0**. -``` -$ ifconfig bridge0 -bridge0 Link encap:Ethernet HWaddr 00:e0:4b:e0:a8:c2 - inet addr:192.168.1.64 Bcast:192.168.1.255 Mask:255.255.255.0 - inet6 addr: fe80::d3ca:7a11:f34:fc76/64 Scope:Link - UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 - RX packets:9143 errors:0 dropped:0 overruns:0 frame:0 - TX packets:7711 errors:0 dropped:0 overruns:0 carrier:0 - collisions:0 txqueuelen:1000 - RX bytes:7982653 (7.9 MB) TX bytes:1056263 (1.0 MB) -``` - -### Creating a new profile in LXD for bridge networking - -In LXD, there is a default profile and then you can create additional profile that either are independent from the default (like in the **macvlan** post), or can be chained with the default profile. Now we see the latter. - -First, create a new and empty LXD profile, called **bridgeprofile**. 
-```
-$ lxc profile create bridgeprofile
-```
-
-Here is the fragment to add to the new profile. The **eth0** is the interface name in the container, so for the Ubuntu containers it does not change. Then, **bridge0** is the interface that was created by NetworkManager. If you created that bridge by some other way, add here the appropriate interface name. The **EOF** at the end is just a marker for when we copy and paste into the profile.
-```
-description: Bridged networking LXD profile
-devices:
-  eth0:
-    name: eth0
-    nictype: bridged
-    parent: bridge0
-    type: nic
-**EOF**
-```
-
-Paste the fragment into the new profile.
-```
-$ cat <<EOF | lxc profile edit bridgeprofile
-```
-
-```
-$ lxc launch --help
-Usage: lxc launch [<remote>:]<image> [<remote>:][<name>] [--ephemeral|-e] [--profile|-p <profile>...] [--config|-c <key=value>...] [--type|-t <instance type>]
-
-Create and start containers from images.
-
-Not specifying -p will result in the default profile.
-Specifying "-p" with no argument will result in no profile.
-
-Examples:
-    lxc launch ubuntu:16.04 u1
-
-Options:
-    -c, --config (= map[]) Config key/value to apply to the new container
-        --debug (= false) Enable debug mode
-    -e, --ephemeral (= false) Ephemeral container
-        --force-local (= false) Force using the local unix socket
-        --no-alias (= false) Ignore aliases when determining what command to run
-    -p, --profile (= []) Profile to apply to the new container
-**-t (= "") Instance type**
-        --verbose (= false) Enable verbose mode
-```
-
-What do we put for Instance type? Here is the documentation,
-
-
-
-Simply put, an instance type is just a mnemonic shortcut for a specific pair of CPU cores and RAM memory settings. For CPU you specify the number of cores, and for RAM memory the amount in GB (assuming your own computer has enough cores and RAM so that LXD can allocate them to the newly created container).
-
-You would need an instance type if you want to create a machine container that matches, as closely as possible, the specs of what you will be installing later on AWS (Amazon), Azure (Microsoft) or GCE (Google).
-
-The instance type can have any of the following forms,
-
-  * `<instance type>`, for example: **t2.micro** (LXD figures out that this refers to AWS t2.micro, therefore 1 core, 1GB RAM).
-  * `<cloud>:<instance type>`, for example **aws:t2.micro** (LXD quickly looks into the AWS types, therefore 1 core, 1GB RAM).
-  * `c<CPU>-m<RAM in GB>`, for example **c1-m1** (LXD explicitly allocates one core, and 1GB RAM).
-
-
-
-Where do these mnemonics like **t2.micro** come from? The documentation says from
-
-[![][1]][2]
-
-There are three sets of instance types, **aws**, **azure** and **gce**. Their names are listed in [the LXD instance type index file][3]:
-```
-aws: "aws.yaml"
-gce: "gce.yaml"
-azure: "azure.yaml"
-
-```
-
-Over there, there are YAML configuration files for each of AWS, Azure and GCE, and in them there are settings for CPU cores and RAM memory.
-
-The actual URLs that the LXD client will be using, are
-
-
-
-Sample for AWS:
-```
-t2.large:
-  cpu: 2.0
-  mem: 8.0
-t2.medium:
-  cpu: 2.0
-  mem: 4.0
-t2.micro:
-  cpu: 1.0
-  mem: 1.0
-t2.nano:
-  cpu: 1.0
-  mem: 0.5
-t2.small:
-  cpu: 1.0
-  mem: 2.0
-```
-
-
-
-Sample for Azure:
-```
-ExtraSmall:
-  cpu: 1.0
-  mem: 0.768
-Large:
-  cpu: 4.0
-  mem: 7.0
-Medium:
-  cpu: 2.0
-  mem: 3.5
-Small:
-  cpu: 1.0
-  mem: 1.75
-Standard_A1_v2:
-  cpu: 1.0
-  mem: 2.0
-```
-
-
-
-Sample for GCE:
-```
-f1-micro:
-  cpu: 0.2
-  mem: 0.6
-g1-small:
-  cpu: 0.5
-  mem: 1.7
-n1-highcpu-16:
-  cpu: 16.0
-  mem: 14.4
-n1-highcpu-2:
-  cpu: 2.0
-  mem: 1.8
-n1-highcpu-32:
-  cpu: 32.0
-  mem: 28.8
-```
-
-Let's see an example. Here, all of the following are equivalent!
Just run one of them to get a 1 CPU core/1GB RAM container.
-```
-$ lxc launch ubuntu:x -t t2.micro aws-t2-micro
-
-$ lxc launch ubuntu:x -t aws:t2.micro aws-t2-micro
-
-$ lxc launch ubuntu:x -t c1-m1 aws-t2-micro
-```
-
-Let's verify that the constraints have actually been set for the container.
-```
-$ lxc config get aws-t2-micro limits.cpu
-1
-
-$ lxc config get aws-t2-micro limits.cpu.allowance
-
-
-$ lxc config get aws-t2-micro limits.memory
-1024MB
-
-$ lxc config get aws-t2-micro limits.memory.enforce
-
-
-```
-
-There are generic limits for 1 CPU core and 1024MB/1GB RAM. For more, see [LXD resource control][4].
-
-If you already have a running container and you want to set limits live (no need to restart it), here is how you would do that.
-```
-$ lxc launch ubuntu:x mycontainer
-Creating mycontainer
-Starting mycontainer
-
-$ lxc config set mycontainer limits.cpu 1
-$ lxc config set mycontainer limits.memory 1GB
-```
-
-Let's see the config with the limits,
-```
-$ lxc config show mycontainer
-architecture: x86_64
-config:
-  image.architecture: amd64
-  image.description: ubuntu 16.04 LTS amd64 (release) (20180126)
-  image.label: release
-  image.os: ubuntu
-  image.release: xenial
-  image.serial: "20180126"
-  image.version: "16.04"
-  limits.cpu: "1"
-  limits.memory: 1GB
-...
-```
-
-### Troubleshooting
-
-#### I tried to set the memory limit but I get an error!
-
-I got this error,
-```
-$ lxc config set mycontainer limits.memory 1
-error: Failed to set cgroup memory.limit_in_bytes="1": setting cgroup item for the container failed
-Exit 1
-```
-
-When you set the memory limit (**limits.memory**), you need to append a specifier like **GB** (as in 1GB). This is because the number there is in bytes if no specifier is present, and one byte of memory is not going to work.
-
-#### I cannot set the limits in lxc launch --config!
-
-How do I use **lxc launch --config ConfigurationGoesHere**?
-
-Here is the documentation:
-```
-$ lxc launch --help
-Usage: lxc launch [<remote>:]<image> [<remote>:][<name>] ... [--config|-c <key=value>...]
-```
-
-Here it is,
-```
-$ lxc launch ubuntu:x --config limits.cpu=1 --config limits.memory=1GB mycontainer
-Creating mycontainer
-Starting mycontainer
-```
-
-That is, use multiple **--config** parameters.
-
-
--------------------------------------------------------------------------------
-
-via: https://blog.simos.info/how-to-use-lxd-instance-types/
-
-作者:[Simos Xenitellis][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://blog.simos.info/author/simos/
-[1]:https://i1.wp.com/blog.simos.info/wp-content/uploads/2018/01/lxd-instance-types.png?resize=750%2C277&ssl=1
-[2]:https://i1.wp.com/blog.simos.info/wp-content/uploads/2018/01/lxd-instance-types.png?ssl=1
-[3]:https://uk.images.linuxcontainers.org/meta/instance-types/.yaml
-[4]:https://stgraber.org/2016/03/26/lxd-2-0-resource-control-412/
diff --git a/sources/tech/20180201 How to use lxc remote with the LXD snap.md b/sources/tech/20180201 How to use lxc remote with the LXD snap.md
deleted file mode 100644
index ccc7b40d02..0000000000
--- a/sources/tech/20180201 How to use lxc remote with the LXD snap.md
+++ /dev/null
@@ -1,106 +0,0 @@
-How to use lxc remote with the LXD snap
-======
-**Background** : LXD is a hypervisor that manages machine containers on Linux distributions.
You install LXD on your Linux distribution and then you can launch machine containers into your distribution running all sort of (other) Linux distributions. - -You have installed the LXD snap and you are happy using it. However, you are developing LXD and you would like to use your freshly compiled LXD client (executable: **lxc** ) on the LXD snap. - -Let’s run our compile lxc executable. -``` -$ ./lxc list -LXD socket not found; is LXD installed and running? -Exit 1 - -``` - -By default it cannot access the LXD server from the snap. We need to [set up a remote LXD host][1] and then configure the client to be able to connect to that remote LXD server. - -### Configuring the remote LXD server (snap) - -We run the following on the LXD snap, -``` -$ which lxd -/snap/bin/lxd - -$ sudo lxd init -Do you want to configure a new storage pool (yes/no) [default=yes]? no -Would you like LXD to be available over the network (yes/no) [default=no]? yes -Address to bind LXD to (not including port) [default=all]: press_enter_to_accept_default -Port to bind LXD to [default=8443]: press_enter_to_accept_default -Trust password for new clients: type_a_password -Again: type_the_same_password -Do you want to configure the LXD bridge (yes/no) [default=yes]? no -LXD has been successfully configured. - -$ - -``` - -Now the snap LXD server is configured to accept remote connections, and the clients much be configured with the correct trust password. - -### Configuring the client (compiled lxc) - -Let’s configure now the compiled lxc client. - -First, here is how the unconfigured compiled lxc client would react, -``` -$ ./lxc list -LXD socket not found; is LXD installed and running? -Exit 1 - -``` - -Now we add the remote, given the name **lxd.snap** , which binds on localhost (127.0.0.1). It asks to verify the certificate fingerprint. I am not aware how to view the fingerprint from inside the snap. We type the one-time password that we set earlier and we are good to go. -``` -$ lxc remote add lxd.snap 127.0.0.1 -Certificate fingerprint: 2c5829064cf795e29388b0d6310369fcf693257650b5c90c922a2d10f542831e -ok (y/n)? y -Admin password for lxd.snap: type_that_password -Client certificate stored at server: lxd.snap - -$ lxc remote list -+-----------------|------------------------------------------|---------------|-----------|--------|--------+ -| NAME | URL | PROTOCOL | AUTH TYPE | PUBLIC | STATIC | -+-----------------|------------------------------------------|---------------|-----------|--------|--------+ -| images | https://images.linuxcontainers.org | simplestreams | | YES | NO | -+-----------------|------------------------------------------|---------------|-----------|--------|--------+ -| local (default) | unix:// | lxd | tls | NO | YES | -+-----------------|------------------------------------------|---------------|-----------|--------|--------+ -| lxd.snap | https://127.0.0.1:8443 | lxd | tls | NO | NO | -+-----------------|------------------------------------------|---------------|-----------|--------|--------+ -| ubuntu | https://cloud-images.ubuntu.com/releases | simplestreams | | YES | YES | -+-----------------|------------------------------------------|---------------|-----------|--------|--------+ -| ubuntu-daily | https://cloud-images.ubuntu.com/daily | simplestreams | | YES | YES | -+-----------------|------------------------------------------|---------------|-----------|--------|--------+ - -``` - -Still, the default remote is **local**. That means that **./lxc** will not work yet. 
We need to make **lxd.snap** the default remote. -``` -$ ./lxc list -LXD socket not found; is LXD installed and running? -Exit 1 - -$ ./lxc remote set-default lxd.snap - -$ ./lxc list -... now it works ... - -``` - -### Conclusion - -We saw how to get a client to access a LXD server. A more advanced scenario would be to have two LXD servers, and set them up so that each one can connect to the other. - - --------------------------------------------------------------------------------- - -via: https://blog.simos.info/how-to-use-lxc-remote-with-the-lxd-snap/ - -作者:[Simos Xenitellis][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://blog.simos.info/author/simos/ -[1]:https://stgraber.org/2016/04/12/lxd-2-0-remote-hosts-and-container-migration-612/ From a9a887ea57578790f7767e665f620ef45a249ec2 Mon Sep 17 00:00:00 2001 From: darksun Date: Thu, 22 Feb 2018 16:40:11 +0800 Subject: [PATCH 24/81] =?UTF-8?q?=E9=80=89=E9=A2=98:=203=20reasons=20to=20?= =?UTF-8?q?say=20'no'=20in=20DevOps?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...0180222 3 reasons to say -no- in DevOps.md | 105 ++++++++++++++++++ 1 file changed, 105 insertions(+) create mode 100644 sources/talk/20180222 3 reasons to say -no- in DevOps.md diff --git a/sources/talk/20180222 3 reasons to say -no- in DevOps.md b/sources/talk/20180222 3 reasons to say -no- in DevOps.md new file mode 100644 index 0000000000..5f27fbaf47 --- /dev/null +++ b/sources/talk/20180222 3 reasons to say -no- in DevOps.md @@ -0,0 +1,105 @@ +3 reasons to say 'no' in DevOps +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LIFE_DesirePath.png?itok=N_zLVWlK) + +DevOps, it has often been pointed out, is a culture that emphasizes mutual respect, cooperation, continual improvement, and aligning responsibility with authority. + +Instead of saying no, it may be helpful to take a hint from improv comedy and say, "Yes, and..." or "Yes, but...". This opens the request from the binary nature of "yes" and "no" toward having a nuanced discussion around priority, capacity, and responsibility. + +However, sometimes you have no choice but to give a hard "no." These should be rare and exceptional, but they will occur. + +### Protecting yourself + +Both Agile and DevOps have been touted as ways to improve value to the customer and business, ultimately leading to greater productivity. While reasonable people can understand that the improvements will take time to yield, and the improvements will result in higher quality of work being done, and a better quality of life for those performing it, I think we can all agree that not everyone is reasonable. The less understanding that a person has of the particulars of a given task, the more likely they are to expect that it is a combination of "simple" and "easy." + +"You told me that [Agile/DevOps] is supposed to be all about us getting more productivity. Since we're doing [Agile/DevOps] now, you can take care of my need, right?" + +Like "Agile," some people have tried to use "DevOps" as a stick to coerce people to do more work than they can handle. Whether the person confronting you with this question is asking in earnest or is being manipulative doesn't really matter. 
+ +The biggest areas of concern for me have been **capacity** , **firefighting/maintenance** , **level of quality** , and **" future me."** Many of these ultimately tie back to capacity, but they relate to a long-term effort in different respects. + +#### Capacity + +Capacity is simple: You know what your workload is, and how much flex occurs due to the unexpected. Exceeding your capacity will not only cause undue stress, but it could decrease the quality of your work and can injure your reputation with regards to making commitments. + +There are several avenues of discussion that can happen from here. The simplest is "Your request is reasonable, but I don't have the capacity to work on it." This seldom ends the conversation, and a discussion will often run up the flagpole to clarify priorities or reassign work. + +#### Firefighting/maintenance + +It's possible that the thing that you're being asked for won't take long to do, but it will require maintenance that you'll be expected to perform, including keeping it alive and fulfilling requests for it on behalf of others. + +An example in my mind is the Jenkins server that you're asked to stand up for someone else, but somehow end up being the sole owner and caretaker of. Even if you're careful to scope your level of involvement early on, you might be saddled with responsibility that you did not agree to. Should the service become unavailable, for example, you might be the one who is called. You might be called on to help triage a build that is failing. This is additional firefighting and maintenance work that you did not sign up for and now must fend off. + +This needs to be addressed as soon and publicly as possible. I'm not saying that (again, for example) standing up a Jenkins instance is a "no," but rather a ["Yes, but"][1]—where all parties understand that they take on the long-term care, feeding, and use of the product. Make sure to include all your bosses in this conversation so they can have your back. + +#### Level of quality + +There may be times when you are presented with requirements that include a timeframe that is...problematic. Perhaps you could get a "minimum (cough) viable (cough) product" out in that time. But it wouldn't be resilient or in any way ready for production. It might impact your time and productivity. It could end up hurting your reputation. + +The resulting conversation can get into the weeds, with lots of horse-trading about time and features. Another approach is to ask "What is driving this deadline? Where did that timeframe come from?" Discussing the bigger picture might lead to a better option, or that the timeline doesn't depend on the original date. + +#### Future me + +Ultimately, we are trying to protect "future you." These are lessons learned from the many times that "past me" has knowingly left "current me" to clean up. Sometimes we joke that "that's a problem for 'future me,'" but don't forget that 'future you' will just be 'you' eventually. I've cursed "past me" as a jerk many times. Do your best to keep other people from making "past you" be a jerk to "future you." + +I recognize that I have a significant amount of privilege in this area, but if you are told that you cannot say "no" on behalf of your own welfare, you should consider whether you are respected enough to maintain your autonomy. + +### Protecting the user experience + +Everyone should be an advocate for the user. 
Regardless of whether that user is right next to you, someone down the hall, or someone you have never met and likely never will, you must care for the customer.
+
+Behavior that is actively hostile to the user—whether it's a poor user experience or something more insidious like quietly violating reasonable expectations of privacy—deserves a "no." A common example of this would be automatically enrolling people in a service or feature, forcing them to explicitly opt out.
+
+If a "no" is not welcome, it bears considering, or explicitly asking, what the company's relationship with its customers is, who the company thinks of as its customers, and what it thinks of them.
+
+When bringing up your objections, be clear about what they are. Additionally, remember that your coworkers are people too, and make it clear that you are not attacking their character; you simply find the idea disagreeable.
+
+### Legal, ethical, and moral grounds
+
+There might be situations that don't feel right. A simple test is to ask: "If this were to become public, or come up in a lawsuit deposition, would it be a scandal?"
+
+#### Ethics and morals
+
+If you are asked to lie, that should be a hard no.
+
+Remember, if you will, the Volkswagen emissions scandal that came to light in 2015? The emissions-system software was written such that it recognized when the vehicle was being operated in a manner consistent with an emissions test, and would run in a cleaner, lower-emissions mode than under normal driving conditions.
+
+I don't know what you do in your job, or what your office is like, but I have a hard time imagining an individual-contributor software engineer coming up with that as a solution on their own. In fact, I imagine a comment along the lines of "the engine engineers can't make their product pass the tests, so I need to hack the performance so that it will!"
+
+When the Volkswagen scandal became public, Volkswagen officials blamed the engineers. I find it unlikely that the scheme came from the mind and IDE of an individual software engineer. Rather, it more likely indicates significant systemic problems within the company culture.
+
+If you are asked to lie, get the request in writing, citing that the circumstances are suspect. If you are so privileged, decide whether you may decline the request on the basis that it is fundamentally dishonest and hostile to the customer, and would break the public's trust.
+
+#### Legal
+
+I am not a lawyer. If your work should involve legal matters, including requests from law enforcement, involve your company's legal counsel or speak with a private lawyer.
+
+With that said, if you are asked to provide information for law enforcement, I believe that you are within your rights to see the documentation that justifies the request. There should be a signed warrant. You should be provided with a copy of it, or make a copy of it yourself.
+
+When in doubt, begin recording and request legal counsel.
+
+It has been well documented that, especially in the early years of the U.S. Patriot Act, law enforcement placed so many requests with telecoms that the requests became standard work, and the paperwork started slipping. While tedious and potentially stressful, make sure that the legal requirements for disclosure are met.
+
+If for no other reason, we would not want the good work of law enforcement to be put at risk because key evidence was improperly acquired, making it inadmissible.
+
+### Wrapping up
+
+You are going to be your single biggest advocate. There may be times when you are asked to compromise for the greater good.
However, you should feel that your dignity is preserved, your autonomy is respected, and that your morals remain intact. + +If you don't feel that this is the case, get it on record, doing your best to communicate it calmly and clearly. + +Nobody likes being declined, but if you don't have the ability to say no, there may be a bigger problem than your environment not being DevOps. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/2/3-reasons-say-no-devops + +作者:[H. "Waldo" Grunenwal][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/gwaldo +[1]:http://gwaldo.blogspot.com/2015/12/fear-and-loathing-in-systems.html From 4e4668c2a0602b349adf53e23f540a8f6032b74e Mon Sep 17 00:00:00 2001 From: darksun Date: Thu, 22 Feb 2018 16:40:26 +0800 Subject: [PATCH 25/81] add done: 20180222 3 reasons to say -no- in DevOps.md --- sources/remove_DMCA.sh | 7 +++++++ 1 file changed, 7 insertions(+) create mode 100755 sources/remove_DMCA.sh diff --git a/sources/remove_DMCA.sh b/sources/remove_DMCA.sh new file mode 100755 index 0000000000..2582f4a5b2 --- /dev/null +++ b/sources/remove_DMCA.sh @@ -0,0 +1,7 @@ +#!/bin/bash +domain="$1" +git checkout -b "$domain" +git grep -l "$domain"|while read file; do git rm "$file"; done +git commit -a -m "remove $domain" +git push -u origin "$domain" +git checkout master From 8b0980d1e3f494f3cb24e4ed14648f19098b2dac Mon Sep 17 00:00:00 2001 From: darksun Date: Thu, 22 Feb 2018 16:43:53 +0800 Subject: [PATCH 26/81] add done: 20180222 3 reasons to say -no- in DevOps.md --- sources/remove_DMCA.sh | 7 ------- 1 file changed, 7 deletions(-) delete mode 100755 sources/remove_DMCA.sh diff --git a/sources/remove_DMCA.sh b/sources/remove_DMCA.sh deleted file mode 100755 index 2582f4a5b2..0000000000 --- a/sources/remove_DMCA.sh +++ /dev/null @@ -1,7 +0,0 @@ -#!/bin/bash -domain="$1" -git checkout -b "$domain" -git grep -l "$domain"|while read file; do git rm "$file"; done -git commit -a -m "remove $domain" -git push -u origin "$domain" -git checkout master From 09cf3ef48e3341cb45345dfe871f35a23fa9f1ed Mon Sep 17 00:00:00 2001 From: darksun Date: Thu, 22 Feb 2018 17:16:27 +0800 Subject: [PATCH 27/81] remove www.zdnet.com --- ...hanged programming and business forever.md | 108 --------------- ... best ways to secure your Android phone.md | 124 ------------------ 2 files changed, 232 deletions(-) delete mode 100644 sources/talk/20180202 -Open source is 20- How it changed programming and business forever.md delete mode 100644 sources/tech/20171103 -The 10 best ways to secure your Android phone.md diff --git a/sources/talk/20180202 -Open source is 20- How it changed programming and business forever.md b/sources/talk/20180202 -Open source is 20- How it changed programming and business forever.md deleted file mode 100644 index 1b7e6b6c37..0000000000 --- a/sources/talk/20180202 -Open source is 20- How it changed programming and business forever.md +++ /dev/null @@ -1,108 +0,0 @@ -​Open source is 20: How it changed programming and business forever -====== -![][1] - -Every company in the world now uses open-source software. Microsoft, once its greatest enemy, is [now an enthusiastic open supporter][2]. Even [Windows is now built using open-source techniques][3]. 
And if you ever searched on Google, bought a book from Amazon, watched a movie on Netflix, or looked at your friend's vacation pictures on Facebook, you're an open-source user. Not bad for a technology approach that turns 20 on February 3. - -Now, free software has been around since the first computers, but the philosophy of both free software and open source are both much newer. In the 1970s and 80s, companies rose up which sought to profit by making proprietary software. In the nascent PC world, no one even knew about free software. But, on the Internet, which was dominated by Unix and ITS systems, it was a different story. - -In the late 70s, [Richard M. Stallman][6], also known as RMS, then an MIT programmer, created a free printer utility based on its source code. But then a new laser printer arrived on the campus and he found he could no longer get the source code and so he couldn't recreate the utility. The angry [RMS created the concept of "Free Software."][7] - -RMS's goal was to create a free operating system, [Hurd][8]. To make this happen in September 1983, [he announced the creation of the GNU project][9] (GNU stands for GNU's Not Unix -- a recursive acronym). By January 1984, he was working full-time on the project. To help build it he created the grandfather of all free software/open-source compiler system [GCC][10] and other operating system utilities. Early in 1985, he published "[The GNU Manifesto][11]," which was the founding charter of the free software movement and launched the [Free Software Foundation (FSF)][12]. - -This went well for a few years, but inevitably, [RMS collided with proprietary companies][13]. The company Unipress took the code to a variation of his [EMACS][14] programming editor and turned it into a proprietary program. RMS never wanted that to happen again so he created the [GNU General Public License (GPL)][15] in 1989. This was the first copyleft license. It gave users the right to use, copy, distribute, and modify a program's source code. But if you make source code changes and distribute it to others, you must share the modified code. While there had been earlier free licenses, such as [1980's four-clause BSD license][16], the GPL was the one that sparked the free-software, open-source revolution. - -In 1997, [Eric S. Raymond][17] published his vital essay, "[The Cathedral and the Bazaar][18]." In it, he showed the advantages of the free-software development methodologies using GCC, the Linux kernel, and his experiences with his own [Fetchmail][19] project as examples. This essay did more than show the advantages of free software. The programming principles he described led the way for both [Agile][20] development and [DevOps][21]. Twenty-first century programming owes a large debt to Raymond. - -Like all revolutions, free software quickly divided its supporters. On one side, as John Mark Walker, open-source expert and Strategic Advisor at Glyptodon, recently wrote, "[Free software is a social movement][22], with nary a hint of business interests -- it exists in the realm of religion and philosophy. Free software is a way of life with a strong moral code." - -On the other were numerous people who wanted to bring "free software" to business. They would become the founders of "open source." They argued that such phrases as "Free as in freedom" and "Free speech, not beer," left most people confused about what that really meant for software. 
-
-The [release of the Netscape web browser source code][23] sparked a meeting of free software leaders and experts at [a strategy session held on February 3rd][24], 1998 in Palo Alto, CA. There, Eric S. Raymond, Michael Tiemann, Todd Anderson, Jon "maddog" Hall, Larry Augustin, Sam Ockman, and Christine Peterson hammered out the first steps to open source.
-
-Peterson created the term "open source." She remembered:
-
-> [The introduction of the term "open source software" was a deliberate effort][25] to make this field of endeavor more understandable to newcomers and to business, which was viewed as necessary to its spread to a broader community of users. The problem with the main earlier label, "free software," was not its political connotations, but that -- to newcomers -- its seeming focus on price is distracting. A term was needed that focuses on the key issue of source code and that does not immediately confuse those new to the concept. The first term that came along at the right time and fulfilled these requirements was rapidly adopted: open source.
-
-To help clarify what open source was, and wasn't, Raymond and Bruce Perens founded the [Open Source Initiative (OSI)][26]. Its purpose was, and still is, to define what are real open-source software licenses and what aren't.
-
-Stallman was enraged by open source. He wrote:
-
-> The two terms describe almost the same method/category of software, but they stand for [views based on fundamentally different values][27]. Open source is a development methodology; free software is a social movement. For the free software movement, free software is an ethical imperative, essential respect for the users' freedom. By contrast, the philosophy of open source considers issues in terms of how to make software 'better' -- in a practical sense only. It says that non-free software is an inferior solution to the practical problem at hand. Most discussion of "open source" pays no attention to right and wrong, only to popularity and success.
-
-He saw open source as kowtowing to business and taking the focus away from the personal freedom of being able to have free access to the code. Twenty years later, he's still angry about it.
-
-In a recent e-mail to me, Stallman said a "common error is connecting me or my work or free software in general with the term 'Open Source.' That is the slogan adopted in 1998 by people who reject the philosophy of the Free Software Movement." In another message, he continued, "I rejected 'open source' because it was meant to bury the "free software" ideas of freedom. Open source inspired the release of useful free programs, but what's missing is the idea that users deserve control of their computing. We libre-software activists say, 'Software you can't change and share is unjust, so let's escape to our free replacement.' Open source says only, 'If you let users change your code, they might fix bugs.' What it does say is not wrong, but weak; it avoids saying the deeper point."
-
-Philosophical conflicts aside, open source has indeed become the model for practical software development. Larry Augustin, CEO of [SugarCRM][28], the open-source customer relationship management (CRM) Software-as-a-Service (SaaS) company, was one of the first to practice open source in a commercial software business. Augustin showed that a successful business could be built on open-source software.
-
-Other companies quickly embraced this model.
Besides Linux companies such as [Canonical][29], [Red Hat][30] and [SUSE][31], technology businesses such as [IBM][32] and [Oracle][33] also adopted it. This, in turn, led to open source's commercial success. More recently companies you would never think of for a moment as open-source businesses like [Wal-Mart][34] and [Verizon][35], now rely on open-source programs and have their own open-source projects. - -As Jim Zemlin, director of [The Linux Foundation][36], observed in 2014: - -> A [new business model][37] has emerged in which companies are joining together across industries to share development resources and build common open-source code bases on which they can differentiate their own products and services. - -Today, Hall looked back and said "I look at 'closed source' as a blip in time." Raymond is unsurprised at open-source's success. In an e-mail interview, Raymond said, "Oh, yeah, it *has* been 20 years -- and that's not a big deal because we won most of the fights we needed to quite a while ago, like in the first decade after 1998." - -"Ever since," he continued, "we've been mainly dealing with the problems of success rather than those of failure. And a whole new class of issues, like IoT devices without upgrade paths -- doesn't help so much for the software to be open if you can't patch it." - -In other words, he concludes, "The reward of victory is often another set of battles." - -These are battles that open source is poised to win. Jim Whitehurst, Red Hat's CEO and president told me: - -> The future of open source is bright. We are on the cusp of a new wave of innovation that will come about because information is being separated from physical objects thanks to the Internet of Things. Over the next decade, we will see entire industries based on open-source concepts, like the sharing of information and joint innovation, become mainstream. We'll see this impact every sector, from non-profits, like healthcare, education and government, to global corporations who realize sharing information leads to better outcomes. Open and participative innovation will become a key part of increasing productivity around the world. - -Others see open source extending beyond software development methods. Nick Hopman, Red Hat's senior director of emerging technology practices, said: - -> Open-source is much more than just a process to develop and expose technology. Open-source is a catalyst to drive change in every facet of society -- government, policy, medical diagnostics, process re-engineering, you name it -- and can leverage open principles that have been perfected through the experiences of open-source software development to create communities that drive change and innovation. Looking forward, open-source will continue to drive technology innovation, but I am even more excited to see how it changes the world in ways we have yet to even consider. - -Indeed. Open source has turned twenty, but its influence, and not just on software and business, will continue on for decades to come. - --------------------------------------------------------------------------------- - -via: http://www.zdnet.com/article/open-source-turns-20/ - -作者:[Steven J. 
Vaughan-Nichols][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/ -[1]:https://zdnet1.cbsistatic.com/hub/i/r/2018/01/08/d9527281-2972-4cb7-bd87-6464d8ad50ae/thumbnail/570x322/9d4ef9007b3a3ce34de0cc39d2b15b0c/5a4faac660b22f2aba08fc3f-1280x7201jan082018150043poster.jpg -[2]:http://www.zdnet.com/article/microsoft-the-open-source-company/ -[3]:http://www.zdnet.com/article/microsoft-uses-open-source-software-to-create-windows/ -[4]:https://zdnet1.cbsistatic.com/hub/i/r/2016/11/18/a55b3c0c-7a8e-4143-893f-44900cb2767a/resize/220x165/6cd4e37b1904743ff1f579cb10d9e857/linux-open-source-money-penguin.jpg -[5]:http://www.zdnet.com/article/how-do-linux-and-open-source-companies-make-money-from-free-software/ -[6]:https://stallman.org/ -[7]:https://opensource.com/article/18/2/pivotal-moments-history-open-source -[8]:https://www.gnu.org/software/hurd/hurd.html -[9]:https://groups.google.com/forum/#!original/net.unix-wizards/8twfRPM79u0/1xlglzrWrU0J -[10]:https://gcc.gnu.org/ -[11]:https://www.gnu.org/gnu/manifesto.en.html -[12]:https://www.fsf.org/ -[13]:https://www.free-soft.org/gpl_history/ -[14]:https://www.gnu.org/s/emacs/ -[15]:https://www.gnu.org/licenses/gpl-3.0.en.html -[16]:http://www.linfo.org/bsdlicense.html -[17]:http://www.catb.org/esr/ -[18]:http://www.catb.org/esr/writings/cathedral-bazaar/ -[19]:http://www.fetchmail.info/ -[20]:https://www.agilealliance.org/agile101/ -[21]:https://aws.amazon.com/devops/what-is-devops/ -[22]:https://opensource.com/business/16/11/open-source-not-free-software?sc_cid=70160000001273HAAQ -[23]:http://www.zdnet.com/article/the-beginning-of-the-peoples-web-20-years-of-netscape/ -[24]:https://opensource.org/history -[25]:https://opensource.com/article/18/2/coining-term-open-source-software -[26]:https://opensource.org -[27]:https://www.gnu.org/philosophy/open-source-misses-the-point.html -[28]:https://www.sugarcrm.com/ -[29]:https://www.canonical.com/ -[30]:https://www.redhat.com/en -[31]:https://www.suse.com/ -[32]:https://developer.ibm.com/code/open/ -[33]:http://www.oracle.com/us/technologies/open-source/overview/index.html -[34]:http://www.zdnet.com/article/walmart-relies-on-openstack/ -[35]:https://www.networkworld.com/article/3195490/lan-wan/verizon-taps-into-open-source-white-box-fervor-with-new-cpe-offering.html -[36]:http://www.linuxfoundation.org/ -[37]:http://www.zdnet.com/article/it-takes-an-open-source-village-to-make-commercial-software/ diff --git a/sources/tech/20171103 -The 10 best ways to secure your Android phone.md b/sources/tech/20171103 -The 10 best ways to secure your Android phone.md deleted file mode 100644 index 0114b5f48d..0000000000 --- a/sources/tech/20171103 -The 10 best ways to secure your Android phone.md +++ /dev/null @@ -1,124 +0,0 @@ -​The 10 best ways to secure your Android phone -====== -![][1] - -The [most secure smartphones are Android smartphones][2]. Don't buy that? Apple's latest version of [iOS 11 was cracked a day -- a day! -- after it was released][3]. - -So Android is perfect? Heck no! - -Android is under constant attack and older versions are far more vulnerable than new ones. Way too many smartphone vendors still don't issue [Google's monthly Android security patches][4] in a timely fashion, or at all. And, zero-day attacks still pop up. - -So, what can you do to protect yourself? A lot actually. 
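-
-(One quick, hedged aside before the list: if you want to see which monthly security patch level your phone is actually running, and you have a computer with `adb` set up and USB debugging enabled on the phone, you can read it right off the device. The `ro.build.version.security_patch` property exists on Android 6.0 and later, and the date below is only an example of what you might see.)
-
-```
-$ adb shell getprop ro.build.version.security_patch
-2017-10-05
-```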
-
-Here are my top 10 ways to keep you and your Android device safe from attackers. Many of these are pretty simple, but security is really more about doing safe things every time than about fancy, complicated security tricks.
-
-**1) Only buy smartphones from vendors who release Android patches quickly.**
-
-I recently got a [Google Pixel 2][5]. There were many reasons for this, but number one with a bullet was that Google makes sure its smartphones, such as the Pixel, the Pixel 2, Nexus 5X, and 6P, get the freshest updates. This means they get the newest security patches as they're released.
-
-As for other major vendors, [Android Authority][6], the leading Android publication, found that the [best vendors for keeping their phones up to date][7] were, in order from best to worst: LG, Motorola, HTC, Sony, Xiaomi, OnePlus, and Samsung.
-
-**2) Lock your phone.**
-
-I know, it's so simple. But, people still don't do it. Trust me. You're more likely to get into trouble from a pickpocket snatching your phone and running wild with your credit-card accounts than from malware.
-
-What's the best way to lock your phone? Well, it's not sexy, but the good old [PIN remains the safest way][8]. Fingerprints, patterns, voice-recognition, iris scanning, etc. -- they're all more breakable. Just don't, for the sake of [Android Oreo][9] cookies, use 1-2-3-4 as your PIN. Thank you.
-
-**3) Use two-factor authentication.**
-
-While you're securing your phone, let's lock down your Google services as well. The best way of doing this is with [Google's own two-factor authentication][10].
-
-Here's how to do it: Log in to your [Google account and head to the two-step verification settings page][11]. Once there, choose "Using 2-step verification" from the menu. From there, follow the prompts. You'll be asked for your phone number. You can get verification codes by voice or SMS on your phone. I find texting easier.
-
-In seconds, you'll get a call or text with your verification code. You then enter this code into your web browser's data entry box. Your device will then ask you if you want it to remember the computer you're using. If you answer "yes," that program will be authorized for use for 30 days. Finally, you turn on 2-step verification and you're done.
-
-You can also make this even simpler by using [Google Prompt][12]. With this, you can authorize Google apps by simply entering "yes" when prompted on your phone.
-
-**4) Only use apps from the Google Play Store.**
-
-Seriously. The vast majority of Android malware comes from unreliable third-party application sources. Sure, bogus apps make it into the Google Play Store from time to time, like the [ones which messaged premium-rate text services][13], but they're the exception, not the rule.
-
-Google has also kept working on making the Play Store safer than ever. For example, [Google Play Protect][14] can automatically scan your Android device for malware when you install programs. Make sure it's on by going to Settings > Security > Play Protect. For maximum security, turn on "Full scanning" and "Scan device for security threats."
-
-**5) Use device encryption.**
-
-The next person who wants to [snoop in your phone may not be a crook, but a US Customs and Border Protection (CBP) agent][15]. If that idea creeps you out, you can put a roadblock in their way with encryption. That may land you in hot water with Homeland Security, but it's your call.
-
-To encrypt your device, go to Settings > Security > Encrypt Device and follow the prompts.
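-
-(As a hedged side note: once encryption finishes, you can double-check that it actually took effect. The standard `ro.crypto.state` property reports the device's encryption state over `adb`, though the exact output can vary by device and Android version.)
-
-```
-$ adb shell getprop ro.crypto.state
-encrypted
-```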
-
-By the way, the CBP also states "border searches conducted by CBP do not extend to information that is located solely on remote servers." So, your data may actually be safer in the cloud in this instance.
-
-**6) Use a Virtual Private Network.**
-
-If you're on the road -- whether it's your local coffee shop or the remote office in Singapore -- you're going to want to use free Wi-Fi. We all do. We all take big chances when we do, since these networks tend to be about as secure as a net built out of thread. To [make yourself safer you'll want to use a mobile Virtual Private Network (VPN)][16].
-
-In my experience, the best of these are: [F-Secure Freedome VPN][17], [KeepSolid VPN Unlimited][18], [NordVPN][19], [Private Internet Access][20], and [TorGuard][21]. What you don't want to do, no matter how tempted you may be, is to use a free VPN service. None of them work worth a darn.
-
-**7) Password management.**
-
-When it comes to passwords, you have choices: 1) use the same password for everything, which is really dumb; 2) write down your passwords on paper, which isn't as bad an idea as it sounds, so long as you don't put them on a sticky note on your PC screen; 3) memorize all your passwords, which isn't terribly practical; or 4) use a password management program.
-
-Now, Google comes with one built in, but if you don't want to put all your security eggs in one cloud basket, you can use other mobile password management programs. The best of the bunch are: [LastPass][22], [1Password][23], and [Dashlane][24].
-
-**8) Use anti-virus software.**
-
-While Google Play Protect does a good job of protecting your phone, when it comes to malware protection, I believe in using both a belt and suspenders. For my anti-virus (A/V) suspenders, I use the results from Germany's [AV-TEST][25], an independent malware detection lab, as my guide.
-
-So, the best freeware A/V program today is [Avast Mobile Security & Antivirus][26]. Its other security features, like its phone tracker, don't work that well, but it's good at finding and deleting malware. The best freemium A/V software is [Norton Mobile Security][27]. All its components work well, and if you elect to go for the full package, it's only $25 for 10 devices.
-
-**9) Turn off connections when you don't need them.**
-
-If you're not using Wi-Fi or Bluetooth, turn them off. Besides saving some battery life, network connections can be used to attack you. The [BlueBorne Bluetooth][28] hackers are still alive, well, and ready to wreck your day. Don't give them a chance.
-
-True, [Android was patched to stop this attack in its September 2017 release][29]. Google's device family got the patch and [Samsung deployed it][30]. Has your vendor protected your device yet? Odds are they haven't.
-
-**10) If you don't use an app, uninstall it.**
-
-Every application comes with its own security problems. Most Android software vendors do a good job of updating their programs. Most of them. If you're not using an application, get rid of it. The fewer program doors you have into your smartphone, the fewer chances an attacker has to invade it.
-
-If you follow all these suggestions, your phone will be safer. It won't be perfectly safe -- nothing is in this world. But, you'll be much more secure than you are now, and that's not a small thing.
-
--------------------------------------------------------------------------------
-
-via: http://www.zdnet.com/article/the-ten-best-ways-to-secure-your-android-phone/
-
-作者:[Steven J. 
Vaughan-Nichols][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/ -[1]:https://zdnet1.cbsistatic.com/hub/i/r/2017/10/18/7147d044-cb9a-4e88-abc2-02279b21b74a/thumbnail/570x322/c665fa2b5bca56e1b98ec3a23bb2c90b/59e4fb2460b299f92c13a408-1280x7201oct182017131932poster.jpg -[2]:http://www.zdnet.com/article/the-worlds-most-secure-smartphones-and-why-theyre-all-androids/ -[3]:http://www.zdnet.com/article/ios-11-hacked-by-security-researchers-day-after-release/ -[4]:http://www.zdnet.com/article/googles-october-android-patches-have-landed-theres-a-big-fix-for-dnsmasq-bug/ -[5]:http://www.zdnet.com/product/google-pixel-2-xl/ -[6]:https://www.androidauthority.com/ -[7]:https://www.androidauthority.com/android-oem-update-speed-743073/ -[8]:http://fieldguide.gizmodo.com/whats-the-most-secure-way-to-lock-your-smartphone-1796948710 -[9]:http://www.zdnet.com/article/why-android-oreo-stacks-up-well-as-a-major-update/ -[10]:http://www.zdnet.com/article/how-to-use-google-two-factor-authentication/ -[11]:https://accounts.google.com/SmsAuthConfig -[12]:https://support.google.com/accounts/answer/7026266?co=GENIE.Platform%3DAndroid&hl=en -[13]:http://www.zdnet.com/article/android-malware-in-google-play-racked-up-4-2-million-downloads-so-are-you-a-victim/ -[14]:http://www.zdnet.com/article/google-play-protect-now-rolling-out-to-android-devices/ -[15]:http://www.zdnet.com/article/us-customs-says-border-agents-cant-search-cloud-data-from-travelers-phones/ -[16]:http://www.zdnet.com/article/what-you-must-know-about-mobile-vpns-to-protect-your-privacy/ -[17]:https://www.f-secure.com/en_US/web/home_us/freedome -[18]:https://www.vpnunlimitedapp.com/en -[19]:https://nordvpn.com/special/2ydeal/?utm_source=aff307&utm_medium=affiliate&utm_term=&utm_content=&utm_campaign=off15 -[20]:http://dw.cbsi.com/redir?ttag=vpn&topicbrcrm=virtual-private-network-services<ype=is&merid=50000882&mfgId=50000882&oid=2703-9234_1-0&ontid=9234&edId=3&siteid=1&channelid=6028&rsid=cbsicnetglobalsite&sc=US&sl=en&destUrl=https://www.privateinternetaccess.com/pages/buy-vpn/cnet -[21]:https://torguard.net/ -[22]:https://play.google.com/store/apps/details?id=com.lastpass.lpandroid -[23]:https://play.google.com/store/apps/details?id=com.agilebits.onepassword -[24]:https://play.google.com/store/apps/details?id=com.dashlane -[25]:https://www.av-test.org/ -[26]:https://play.google.com/store/apps/details?id=com.avast.android.mobilesecurity&hl=en -[27]:https://my.norton.com/mobile/home -[28]:http://www.zdnet.com/article/bluetooth-security-flaw-blueborne-iphone-android-windows-devices-at-risk/ -[29]:https://source.android.com/security/bulletin/2017-09-01 -[30]:https://www.sammobile.com/2017/09/25/samsung-rolls-security-patches-fix-blueborne-vulnerability/ From ac74de6f0dd57665fe75040c379fd6ef8958049b Mon Sep 17 00:00:00 2001 From: darksun Date: Thu, 22 Feb 2018 17:19:26 +0800 Subject: [PATCH 28/81] remove www.thelinuxrain.com --- ...blo II with the GLIDE-to-OpenGL Wrapper.md | 101 ------------------ ...180111 BASH drivers, start your engines.md | 90 ---------------- 2 files changed, 191 deletions(-) delete mode 100644 sources/tech/20171120 How to Run Diablo II with the GLIDE-to-OpenGL Wrapper.md delete mode 100644 sources/tech/20180111 BASH drivers, start your engines.md diff --git a/sources/tech/20171120 How to Run Diablo II with the GLIDE-to-OpenGL Wrapper.md 
b/sources/tech/20171120 How to Run Diablo II with the GLIDE-to-OpenGL Wrapper.md
deleted file mode 100644
index 42c08d703c..0000000000
--- a/sources/tech/20171120 How to Run Diablo II with the GLIDE-to-OpenGL Wrapper.md
+++ /dev/null
@@ -1,101 +0,0 @@
-How to Run Diablo II with the GLIDE-to-OpenGL Wrapper
-======
-![](http://www.thelinuxrain.com/content/01-articles/198-how-to-run-diablo-2-glide-opengl-wrapper/headimage.jpg)
-
-**[Diablo II][1] is usually a breeze to run on Linux thanks to WINE, so oftentimes you need no special tricks. However, if you're like me and experience a few glitches and washed out colours in the standard fullscreen mode, you have two options: run the game in windowed mode and go without cinematics, or install a GLIDE-to-OpenGL wrapper and get the game running properly in its fullscreen glory again, without the glitches and colour problems. I detail how to do that in this article.**
-
-Yes, that's right: unless you run Diablo II in fullscreen, the cinematics won't work for some reason! I'm fairly sure this happens even on Windows if the game is in windowed mode, so while it's a curious side effect, it is what it is. And this is a game from 2001 we're talking about!
-
-Old or not though, Diablo II is undoubtedly one of my favourite games of all time. While not exactly Linux related (the game itself has never had a Linux port), I've sunk countless hours into the game in years past. So it's very pleasing to me that the game is very easily playable in Linux using WINE, and generally, from what I've known, the game has needed little to no modification to run properly in WINE. However, it seems that with the patches released in the last couple of years, Blizzard removed DirectDraw as a video rendering option from the game for some reason, leaving the game with just one option - Direct3D. That seems to be the culprit behind the fullscreen issues, which apparently happen even on modern Windows machines, so we're not even necessarily talking about a WINE issue here.
-
-For any users running into the fullscreen glitches and washed out colour palette: as long as you don't care about in-game cinematics and don't mind playing the game in a small 800x600 (the game's maximum resolution) window, you can just run the game in windowed mode (with the `-w` switch) and it will work fine.
-
-Example:
-```
-wine ~/.wine/drive_c/Program\ Files\ \(x86\)/Diablo\ II/Game.exe -w
-```
-
-However, again, no cinematics here. Which may not bother you, but for mine the movies are one of the great and memorable aspects of Diablo II. Thankfully, there is a way to get the game running fullscreen correctly, with working movies, and the technique also gets the game running in its original 4:3 aspect ratio, instead of the weird stretched-out 16:9 state it defaults to. Again, this may not be your preference, but personally I like it! Let's get to it.
-
-### The GLIDE to OpenGL wrapper
-
-Okay, so we said that the game only has one video mode now, that being Direct3D. Well, that's not completely true - the game still has the ancient GLIDE/3DFX mode available, and gamers have known for years that, for whatever reason, Diablo II actually runs better with GLIDE than Direct3D on hardware that supports it.
-
-Problem is... no modern video cards actually support the now defunct GLIDE anymore, and 3DFX (the company) was taken over long ago by NVIDIA, so the whole thing kind of went the way of the dodo. Running the game with the `-3dfx` switch by default will only net you a crash to desktop (sad face).
-
-Thankfully, there is a wrapper available, seemingly made specifically for Diablo II, that actually translates the GLIDE interface to OpenGL. And since we're Linux users, OpenGL certainly suits us.
-
-So, assuming you have the game installed and fully patched (seriously, it's pretty much just click and install with WINE, exactly as you would in Windows. It's easy), you'll want to download the [GLIDE3-to-OpenGL-Wrapper by Sven Labusch][2].
-
-Extract the files from the downloaded archive to your Diablo II game folder (e.g. `~/.wine/drive_c/Program Files (x86)/Diablo II/`).
-
-The following is [from a forum guide][3] originally for Windows users, but it worked fine for me on Linux as well. The first two steps you should have already done, but then follow the instructions to configure the wrapper. You'll obviously have to make sure that glide-init.exe is executed with WINE.
-
-> 1) download GLIDE WRAPPER ( ).
-> 2) extract file in the Diablo 2 folder, where the 'Diablo II.exe' is.
-> 3) launch glide-init.exe.
-> 4) Click on 'English/Deutsch' to change the language to english.
-> 5) Click on 'OpenGL Info', then 'Query OpenGL info', wait for the query to finish.
-> 6) Click on 'Setting':
-> -uncheck 'windows-mode'.
-> -check 'captured mouse'.
-> -uncheck 'keep aspect ratio'.
-> -uncheck 'vertical synchronization'.
-> -select 'no' for 'fps-limit'.
-> -select '1600x1200' for 'static size'.
-> -uncheck 'window extra'.
-> -select 'auto' for 'refreshrate'.
-> -check 'desktopresolution'.
-> 7) Click on 'renderer':
-> -select '64 mb' for 'texture-memory'.
-> -select '2048x2048' for 'buffer texture size'.
-> -uncheck ALL box EXCEPT 'shader-gama'.
-> 8) Click on 'Extension':
-> -check ALL box.
-> 9) Click on 'Quit'.
-
-Make sure to follow that procedure exactly.
-
-Now, you should be able to launch the game with the `-3dfx` switch and be all good!
-```
-wine ~/.wine/drive_c/Program\ Files\ \(x86\)/Diablo\ II/Game.exe -3dfx
-```
-
-![][4]
-
-Yes, the black bars will be unavoidable with the 4:3 aspect ratio (I'm playing on a 27 inch monitor with 1080p resolution), but at least the game looks as it was originally intended. Actually playing the game, I don't even notice the black borders.
-
-### Making the switch persistent
-
-If you want the game to always launch with the `-3dfx` switch, even from the applications menu shortcut, then simply open the .desktop file with your favourite text editor.
-
-Example (with the Lord of Destruction expansion installed):
-```
-gedit .local/share/applications/wine/Programs/Diablo\ II/Diablo\ II\ -\ Lord\ of\ Destruction.desktop
-```
-
-And simply add the `-3dfx` switch to the end of the line beginning with " **Exec=** ". Make sure it's at the very end! And then save and exit.
-
-And that's it! Running the game as standard from your applications menu should start the game up in its GLIDE/OpenGL magical glory.
-
-Happy demon slaying!
-
-### About the author
-
-Andrew Powell is the editor and owner of The Linux Rain, who loves all things Linux, gaming and everything in between.
-
--------------------------------------------------------------------------------
-
-via: http://www.thelinuxrain.com/articles/how-to-run-diablo-2-glide-opengl-wrapper
-
-作者:[Andrew Powell][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.thelinuxrain.com
-[1]:http://us.blizzard.com/en-us/games/d2/
-[2]:http://www.svenswrapper.de/english/downloads.html
-[3]:https://us.battle.net/forums/en/bnet/topic/20752595513
-[4]:http://www.thelinuxrain.com/content/01-articles/198-how-to-run-diablo-2-glide-opengl-wrapper/diablo2-linux.jpg
diff --git a/sources/tech/20180111 BASH drivers, start your engines.md b/sources/tech/20180111 BASH drivers, start your engines.md
deleted file mode 100644
index e5f8631e39..0000000000
--- a/sources/tech/20180111 BASH drivers, start your engines.md
+++ /dev/null
@@ -1,90 +0,0 @@
-Translating by Torival
-BASH drivers, start your engines
-======
-
-![](http://www.thelinuxrain.com/content/01-articles/201-bash-drivers-start-your-engines/headimage.jpg)
-
-There's always more than one way to do a job in the shell, and there may not be One Best Way to do that job, either.
-
-Nevertheless, different commands with the same output can differ in how long they take, how much memory they use and how hard they make the CPU work.
-
-Out of curiosity, I trialled 6 different ways to get the last 5 characters from each line of a text file, which is a simple text-processing task. The 6 commands are explained below and are abbreviated here as awk5, echo5, grep5, rev5, sed5 and tail5. These were also the names of the files generated by the commands.
-
-### Tracking performance
-
-I ran the trial on a 1.6GB UTF-8 text file with 1559391514 characters on 3570866 lines, or an average of 437 characters per line, and no blank lines. The last 5 characters on every line were alphanumeric.
-
-To time the 6 commands I used **time** (the BASH shell built-in, not GNU **time** ) and while the commands were running I checked **top** to follow memory and CPU usage. My system is the Dell OptiPlex 9020 Micro described [here][1] and runs Debian 9.
-
-All 6 commands used between 1 and 1.4GB of memory (VIRT in **top** ), and awk5, echo5, grep5 and sed5 ran at close to 100% CPU usage. Interestingly, rev5 ran at ca 30% CPU and tail5 at ca 15%.
-
-To ensure that all 6 commands had done the same job, I did a **diff** on the 6 output files, each about 21 MB:
-
-![][2]
-
-### And the winner is...
-
-Here are the elapsed times:
-
-![][3]
-
-Well, AWK (GNU AWK 4.1.4) is really fast. Sure, all 6 commands could process a 100-line file zippety-quick, but for big text-processing jobs, fire up your AWK.
-
-### Commands used
-```
-awk '{print substr($0,length($0)-4,5)}' file > awk5
-```
-
-awk5 used AWK's substring function. The function works on the whole line ($0), starts at the 4th character back from the last character (length($0)-4) and returns 5 characters (5).
-```
-#!/bin/bash
-while read line; do echo "${line: -5}"; done < file > echo5
-exit
-```
-
-echo5 was run as a script and uses a **while** loop for processing one line at a time. The BASH string function "${line: -5}" returns the last 5 characters in "$line".
-```
-grep -o '.....$' file > grep5
-```
-
-In grep5, **grep** searches each line for the last 5 characters (.....$) and returns (with the -o option) just that searched-for string.
-
-```
-#!/bin/bash
-while read line; do rev <<<"$line" | cut -c1-5 | rev; done < file > rev5
-exit
-```
-
-The rev5 trick in this script has appeared often in online forums. Each line is first reversed with **rev** , then **cut** is used to return the first 5 characters, then the 5-character string is reversed with **rev**.
-```
-sed 's/.*\(.....\)/\1/' file > sed5
-```
-
-sed5 is a simple use of **sed** (GNU sed 4.4) but was surprisingly slow in the trial. In each line, **sed** replaces zero or more characters leading up to the last 5 with just those last 5 (as a backreference).
-```
-#!/bin/bash
-while read line; do tail -c 6 <<<"$line"; done < file > tail5
-exit
-```
-
-The "-c 6" in the tail5 script means that **tail** captures the last 5 characters in each line plus the newline character at the end.
-
-Actually, the "-c" option captures bytes, not characters, meaning that if the line ends in multi-byte characters the output will be corrupt. But would you really want to use the ultra-slow **tail** for this job in the first place?
-
-### About the Author
-
-Bob Mesibov is Tasmanian, retired and a keen Linux tinkerer.
-
--------------------------------------------------------------------------------
-
-via: http://www.thelinuxrain.com/articles/bash-drivers-start-your-engines
-
-作者:[Bob Mesibov][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.thelinuxrain.com
-[1]:http://www.thelinuxrain.com/articles/debian-9-on-a-dell-optiplex-9020-micro
-[2]:http://www.thelinuxrain.com/content/01-articles/201-bash-drivers-start-your-engines/1.png
-[3]:http://www.thelinuxrain.com/content/01-articles/201-bash-drivers-start-your-engines/2.png
From 3ecbd0f8261414c3230d7e205466db6b03abea47 Mon Sep 17 00:00:00 2001
From: darksun
Date: Thu, 22 Feb 2018 17:25:28 +0800
Subject: [PATCH 29/81] remove www.tecmint.com

---
 ...riaDB Security Best Practices for Linux.md | 187 ------------------
 ...IDS Intrusion Detection System on Linux.md | 102 ----------
 2 files changed, 289 deletions(-)
 delete mode 100644 sources/tech/20171201 12 MySQL MariaDB Security Best Practices for Linux.md
 delete mode 100644 sources/tech/20180119 How to Install Tripwire IDS Intrusion Detection System on Linux.md

diff --git a/sources/tech/20171201 12 MySQL MariaDB Security Best Practices for Linux.md b/sources/tech/20171201 12 MySQL MariaDB Security Best Practices for Linux.md
deleted file mode 100644
index 8897f74f39..0000000000
--- a/sources/tech/20171201 12 MySQL MariaDB Security Best Practices for Linux.md
+++ /dev/null
@@ -1,187 +0,0 @@
-translating by zrszrszr
-12 MySQL/MariaDB Security Best Practices for Linux
-============================================================
-
-MySQL is the world's most popular open source database system and MariaDB (a fork of MySQL) is the world's fastest growing open source database system. After installing MySQL server, it is insecure in its default configuration, and securing it is one of the essential tasks in general database management.
-
-This will contribute to hardening and boosting overall Linux server security, as attackers always scan for vulnerabilities in any part of a system, and databases have in the past been key target areas. A common example is the brute-forcing of the root password for the MySQL database.
-
-In this guide, we will explain useful MySQL/MariaDB security best practices for Linux.
-
-### 1\. 
Secure MySQL Installation
-
-This is the first recommended step after installing MySQL server toward securing the database server. The mysql_secure_installation script helps improve the security of your MySQL server by asking you to:
-
-* set a password for the root account, if you didn't set it during installation.
-
-* disable remote root user login by removing root accounts that are accessible from outside the local host.
-
-* remove anonymous-user accounts and the test database, which by default can be accessed by all users, even anonymous users.
-
-```
-# mysql_secure_installation
-```
-
-After running it, set the root password and answer the series of questions by entering [Yes/Y] and pressing [Enter].
-
- [![Secure MySQL Installation](https://www.tecmint.com/wp-content/uploads/2017/12/Secure-MySQL-Installation.png)][2]
-
-Secure MySQL Installation
-
-### 2\. Bind Database Server To Loopback Address
-
-This configuration restricts access from remote machines: it tells the MySQL server to accept connections only from within the local host. You can set it in the main configuration file.
-
-```
-# vi /etc/my.cnf [RHEL/CentOS]
-# vi /etc/mysql/my.cnf [Debian/Ubuntu]
-OR
-# vi /etc/mysql/mysql.conf.d/mysqld.cnf [Debian/Ubuntu]
-```
-
-Add the following line under the `[mysqld]` section.
-
-```
-bind-address = 127.0.0.1
-```
-
-### 3\. Disable LOCAL INFILE in MySQL
-
-As part of security hardening, you need to disable local_infile to prevent access to the underlying filesystem from within MySQL, using the following directive under the `[mysqld]` section.
-
-```
-local-infile=0
-```
-
-### 4\. Change MySQL Default Port
-
-The port variable sets the MySQL port number that will be used to listen on TCP/IP connections. The default port number is 3306, but you can change it under the `[mysqld]` section as shown.
-
-```
-port=5000
-```
-
-### 5\. Enable MySQL Logging
-
-Logs are one of the best ways to understand what happens on a server; in case of any attack, you can easily see intrusion-related activities in the log files. You can enable MySQL logging by adding the following variable under the `[mysqld]` section.
-
-```
-log=/var/log/mysql.log
-```
-
-### 6\. Set Appropriate Permissions on MySQL Files
-
-Ensure that you have appropriate permissions set for all mysql server files and data directories. The /etc/my.cnf file should only be writeable by root. This blocks other users from changing database server configurations.
-
-```
-# chmod 644 /etc/my.cnf
-```
-
-### 7\. Delete MySQL Shell History
-
-All commands you execute in the MySQL shell are stored by the mysql client in a history file: ~/.mysql_history. This can be dangerous, because for any user accounts that you create, all usernames and passwords typed on the shell will be recorded in the history file.
-
-```
-# cat /dev/null > ~/.mysql_history
-```
-
-### 8\. Don't Run MySQL Commands from the Command Line
-
-As you already know, all commands you type on the terminal are stored in a history file, depending on the shell you are using (for example ~/.bash_history for bash). An attacker who manages to gain access to this history file can easily see any passwords recorded there.
-
-It is strongly recommended not to type passwords on the command line, like this:
-
-```
-# mysql -u root -ppassword_
-```
- [![Connect MySQL with Password](https://www.tecmint.com/wp-content/uploads/2017/12/Connect-MySQL-with-Password.png)][3]
-
-Connect MySQL with Password
-
-When you check the last section of the command history file, you will see the password typed above.
-
-```
-# history
-```
- [![Check Command History](https://www.tecmint.com/wp-content/uploads/2017/12/Check-Command-History.png)][4]
-
-Check Command History
-
-The appropriate way to connect to MySQL is:
-
-```
-# mysql -u root -p
-Enter password:
-```
-
-### 9\. Define Application-Specific Database Users
-
-For each application running on the server, only give access to a user who is in charge of a database for a given application. For example, if you have an Osclass site, create a specific user for the Osclass site database as follows.
-
-```
-# mysql -u root -p
-MariaDB [(none)]> CREATE DATABASE osclass_db;
-MariaDB [(none)]> CREATE USER 'osclassdmin'@'localhost' IDENTIFIED BY 'osclass@dmin%!2';
-MariaDB [(none)]> GRANT ALL PRIVILEGES ON osclass_db.* TO 'osclassdmin'@'localhost';
-MariaDB [(none)]> FLUSH PRIVILEGES;
-MariaDB [(none)]> exit
-```
-
-And remember to always remove user accounts that no longer manage any application database on the server.
-
-### 10\. Use Additional Security Plugins and Libraries
-
-MySQL includes a number of security plugins for authenticating client connection attempts, validating passwords, and securing storage of sensitive information, all of which are available in the free version.
-
-You can find more here: [https://dev.mysql.com/doc/refman/5.7/en/security-plugins.html][5], and there is a short worked example at the end of this article.
-
-### 11\. Change MySQL Passwords Regularly
-
-This is a common piece of information/application/system security advice. How often you do this will entirely depend on your internal security policy. However, it can prevent “snoopers” who might have been tracking your activity over a long period of time from gaining access to your mysql server.
-
-```
-MariaDB [(none)]> USE mysql;
-MariaDB [(none)]> UPDATE user SET password=PASSWORD('YourPasswordHere') WHERE User='root' AND Host = 'localhost';
-MariaDB [(none)]> FLUSH PRIVILEGES;
-```
-
-### 12\. Update MySQL Server Package Regularly
-
-It is highly recommended to upgrade mysql/mariadb packages regularly from the vendor's repository to keep up with security updates and bug fixes. Normally, packages in default operating system repositories are outdated.
-
-```
-# yum update
-# apt update
-```
-
-After making any changes to the mysql/mariadb server, always restart the service.
-
-```
-# systemctl restart mariadb #RHEL/CentOS
-# systemctl restart mysql #Debian/Ubuntu
-```
-
-Read Also: [15 Useful MySQL/MariaDB Performance Tuning and Optimization Tips][6]
-
-That's all! We love to hear from you via the comment form below. Do share with us any MySQL/MariaDB security tips missing in the above list.
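-
-To make the plugin advice in tip 10 concrete, here is one hedged example: on MySQL 5.7 you can load the bundled password-validation plugin and then inspect and tune its settings. The plugin and variable names below are MySQL's; MariaDB ships a rough equivalent called simple_password_check, with different names.
-
-```
-mysql> INSTALL PLUGIN validate_password SONAME 'validate_password.so';
-mysql> SHOW VARIABLES LIKE 'validate_password%';
-mysql> SET GLOBAL validate_password_policy=MEDIUM;
-```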
-
--------------------------------------------------------------------------------
-
-via: https://www.tecmint.com/mysql-mariadb-security-best-practices-for-linux/
-
-作者:[ Aaron Kili ][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.tecmint.com/author/aaronkili/
-[1]:https://www.tecmint.com/learn-mysql-mariadb-for-beginners/
-[2]:https://www.tecmint.com/wp-content/uploads/2017/12/Secure-MySQL-Installation.png
-[3]:https://www.tecmint.com/wp-content/uploads/2017/12/Connect-MySQL-with-Password.png
-[4]:https://www.tecmint.com/wp-content/uploads/2017/12/Check-Command-History.png
-[5]:https://dev.mysql.com/doc/refman/5.7/en/security-plugins.html
-[6]:https://www.tecmint.com/mysql-mariadb-performance-tuning-and-optimization/
-[7]:https://www.tecmint.com/author/aaronkili/
-[8]:https://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
-[9]:https://www.tecmint.com/free-linux-shell-scripting-books/
diff --git a/sources/tech/20180119 How to Install Tripwire IDS Intrusion Detection System on Linux.md b/sources/tech/20180119 How to Install Tripwire IDS Intrusion Detection System on Linux.md
deleted file mode 100644
index fb994b7f54..0000000000
--- a/sources/tech/20180119 How to Install Tripwire IDS Intrusion Detection System on Linux.md
+++ /dev/null
@@ -1,102 +0,0 @@
-How to Install Tripwire IDS (Intrusion Detection System) on Linux
-============================================================
-
-Tripwire is a popular Linux Intrusion Detection System (IDS) that runs on systems in order to detect whether unauthorized filesystem changes have occurred over time.
-
-In CentOS and RHEL distributions, tripwire is not a part of the official repositories. However, the tripwire package can be installed via the [Epel repositories][1].
-
-To begin, first install the Epel repositories on your CentOS or RHEL system by issuing the below command.
-
-```
-# yum install epel-release
-```
-
-After you've installed the Epel repositories, make sure you update the system with the following command.
-
-```
-# yum update
-```
-
-After the update process finishes, install the Tripwire IDS software by executing the below command.
-
-```
-# yum install tripwire
-```
-
-Fortunately, tripwire is a part of the Ubuntu and Debian default repositories and can be installed with the following commands.
-
-```
-$ sudo apt update
-$ sudo apt install tripwire
-```
-
-On Ubuntu and Debian, the tripwire installer will ask you to choose and confirm site key and local key passphrases. These keys are used by tripwire to secure its configuration files.
-
- [![Create Tripwire Site and Local Key](https://www.tecmint.com/wp-content/uploads/2018/01/Create-Site-and-Local-key.png)][2]
-
-Create Tripwire Site and Local Key
-
-On CentOS and RHEL, you need to create the tripwire keys with the below command and supply a passphrase for the site key and local key.
-
-```
-# tripwire-setup-keyfiles
-```
- [![Create Tripwire Keys](https://www.tecmint.com/wp-content/uploads/2018/01/Create-Tripwire-Keys.png)][3]
-
-Create Tripwire Keys
-
-In order to validate your system, you need to initialize the Tripwire database with the following command. Because the database hasn't been initialized yet, tripwire will display a lot of false-positive warnings.
-
-```
-# tripwire --init
-```
- [![Initialize Tripwire Database](https://www.tecmint.com/wp-content/uploads/2018/01/Initialize-Tripwire-Database.png)][4]
-
-Initialize Tripwire Database
-
-Finally, generate a tripwire system report in order to check the configuration by issuing the below command. Use the `--help` switch to list all tripwire check command options.
-
-```
-# tripwire --check --help
-# tripwire --check
-```
-
-After the tripwire check command completes, review the report by opening the `.twr` file from the /var/lib/tripwire/report/ directory in your favorite text editor, but before that you need to convert it to a plain text file.
-
-```
-# twprint --print-report --twrfile /var/lib/tripwire/report/tecmint-20170727-235255.twr > report.txt
-# vi report.txt
-```
- [![Tripwire System Report](https://www.tecmint.com/wp-content/uploads/2018/01/Tripwire-System-Report.png)][5]
-
-Tripwire System Report
-
-That's it! You have successfully installed Tripwire on your Linux server. I hope you can now easily configure your [Tripwire IDS][6].
-
--------------------------------------------------------------------------------
-
-作者简介:
-
-I'm a computer-addicted guy, a fan of open source and Linux-based system software, with about 4 years of experience with Linux distributions on desktops and servers, and with bash scripting.
-
--------
-
-via: https://www.tecmint.com/install-tripwire-ids-intrusion-detection-system-on-linux/
-
-作者:[ Matei Cezar][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.tecmint.com/author/cezarmatei/
-[1]:https://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/
-[2]:https://www.tecmint.com/wp-content/uploads/2018/01/Create-Site-and-Local-key.png
-[3]:https://www.tecmint.com/wp-content/uploads/2018/01/Create-Tripwire-Keys.png
-[4]:https://www.tecmint.com/wp-content/uploads/2018/01/Initialize-Tripwire-Database.png
-[5]:https://www.tecmint.com/wp-content/uploads/2018/01/Tripwire-System-Report.png
-[6]:https://www.tripwire.com/
-[7]:https://www.tecmint.com/author/cezarmatei/
-[8]:https://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
-[9]:https://www.tecmint.com/free-linux-shell-scripting-books/
\ No newline at end of file
From c33dfc97c951722ea2a7e8e1ac3ef7eaa2be1cbb Mon Sep 17 00:00:00 2001
From: darksun
Date: Thu, 22 Feb 2018 17:25:28 +0800
Subject: [PATCH 30/81] remove www.ossblog.org

---
 ...tle with These Open Source Puzzle Games.md |  62 ----------
 ...ighly Addictive Open Source Puzzle Game.md | 106 ------------------
 2 files changed, 168 deletions(-)
 delete mode 100644 sources/tech/20170923 Improve Your Mental Mettle with These Open Source Puzzle Games.md
 delete mode 100644 sources/tech/20171024 Highly Addictive Open Source Puzzle Game.md

diff --git a/sources/tech/20170923 Improve Your Mental Mettle with These Open Source Puzzle Games.md b/sources/tech/20170923 Improve Your Mental Mettle with These Open Source Puzzle Games.md
deleted file mode 100644
index 5650a304bf..0000000000
--- a/sources/tech/20170923 Improve Your Mental Mettle with These Open Source Puzzle Games.md
+++ /dev/null
@@ -1,62 +0,0 @@
-Improve Your Mental Mettle with These Open Source Puzzle Games
-======
-### Tax Your Brain, Not Your Wallet
-
-Puzzle video games are a type of game that focuses on puzzle solving. A puzzle is a problem or set of problems a player has to solve within the confines of the game.
-
The puzzle genre often tests problem-solving skills, enhancing both analytical and critical thinking. Word completion, pattern recognition, logical reasoning, persistence, and sequence solving are some of the skills often required to prosper here. Some games offer unlimited time or attempts to solve a puzzle; others present time-limited exercises, which increase the difficulty of the puzzle. Most puzzle games are basic in graphics but are very addictive.
-
-This genre owes its origins to puzzles and brain teasers. Traditional thinking games such as Hangman, Mastermind, and the mathematical game Nim were among the early computer implementations.
-
-Software developers can shape a gamer's brain in a multitude of directions -- cognitive awareness, logistics, reflexes, memory, to cite a selection -- so puzzle games are appealing for all ages.
-
-Many of the biggest computer games concentrate on explosion-filled genres. But there's still strong demand for compelling puzzle games. It's a neglected genre in the mainstream. Here are our picks of the best games. We only advocate open source games here. And we give preference to games that run on multiple operating systems.
-
-**PUZZLE GAMES**
-
-| Game | Description |
-| --- | --- |
-| **[Trackballs][1]** | Inspired by Marble Madness |
-| **[Fish Fillets - Next Generation][2]** | Port of the Puzzle Game Fish Fillets |
-| **[Frozen Bubble][3]** | A clone of the popular “Puzzle Bobble” game |
-| **[Neverball][4]** | Tilt the Floor to Roll a Ball Game |
-| **[Crack Attack!][5]** | Based on the Super Nintendo classic Tetris Attack |
-| **[Brain Workshop][6]** | Dual N-Back Game |
-| **[Angry, Drunken Dwarves][7]** | “Falling Blocks” Puzzle Game |
-| **[gbrainy][8]** | Brain Teaser Game for GNOME |
-| **[Enigma][9]** | Huge Collection of Puzzle Games |
-| **[Amoebax][10]** | Cute and Addictive Action-Puzzle Game |
-| **[Dinothawr][11]** | Save your frozen friends by pushing them onto lava |
-| **[Pingus][12]** | Lemmings Like Game |
-| **[Kmahjongg][13]** | Remove Matching Mahjongg Tiles to Clear the Board |
-
-For other games, check out our **[Games Portal page][14]**.
-
--------------------------------------------------------------------------------
-
-via: https://www.ossblog.org/improve-your-mental-mettle-with-these-open-source-puzzle-games/
-
-作者:[Steve Emms][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.ossblog.org/author/steve/
-[1]:https://www.ossblog.org/trackballs-inspired-marble-madness/
-[2]:https://www.ossblog.org/fish-fillets-ng-port-puzzle-game-fish-fillets/
-[3]:https://www.ossblog.org/frozen-bubble-puzzle-bobble-style-game/
-[4]:https://www.ossblog.org/neverball-tilt-floor-roll-ball-game/
-[5]:https://www.ossblog.org/crack-attack-based-super-nintendo-classic-tetris-attack/
-[6]:https://www.ossblog.org/brain-workshop-dual-n-back-game/
-[7]:https://www.ossblog.org/angry-drunken-dwarves-falling-blocks-puzzle-game/
-[8]:https://www.ossblog.org/gbrainy-brain-teaser-game-gnome/
-[9]:https://www.ossblog.org/enigma-huge-collection-puzzle-games/
-[10]:https://www.ossblog.org/amoebax-cute-addictive-action-puzzle-game/
-[11]:https://www.ossblog.org/dinothawr-save-frozen-friends/
-[12]:https://www.ossblog.org/pingus-lemmings-like-game/
-[13]:https://www.ossblog.org/kmahjongg-remove-matching-mahjongg-tiles-clear-board/
-[14]:https://www.ossblog.org/free-games/
diff --git a/sources/tech/20171024 Highly Addictive Open Source Puzzle Game.md b/sources/tech/20171024 Highly Addictive Open Source Puzzle Game.md
deleted file mode 100644
index 9a662c835c..0000000000
--- a/sources/tech/20171024 Highly Addictive Open Source Puzzle Game.md
+++ /dev/null
@@ -1,106 +0,0 @@
-Highly Addictive Open Source Puzzle Game
-======
-![](https://i0.wp.com/www.ossblog.org/wp-content/uploads/2017/10/Wizznic-Level4.png?resize=640%2C400&ssl=1)
-
-### About Wizznic!
-
-This is an open source game inspired by the classic Puzznic, a tile-matching puzzle arcade game developed and produced by Taito in 1989. The game is way more than a clone of Puzznic. But like Puzznic, it's a frighteningly addictive game. If you like puzzle games, Wizznic! is definitely a recommended download.
-
-The premise of the game is quite simple, but many of the levels are fiendishly difficult. The objective of each level is to make all the bricks vanish. The bricks disappear when they touch others of the same kind. The bricks are heavy, so you can only push them sideways, but not lift them up. The level has to be cleared of bricks before the time runs out, or you lose a life. With all but the first game pack, you only have 3 lives.
-
-### Installation
-
-I've mostly played Wizznic! on a Beelink S1 mini PC running a vanilla Ubuntu 17.10 installation. The mini PC only has on-board graphics, but this game doesn't require any fancy graphics card. I needed to install three SDL libraries before the game's binary would start. Many Linux users will already have these libraries installed on their PC, but they are trivial to install.
-
-`sudo apt install libsdl-dev`
-`sudo apt-get install libsdl-image1.2`
-`sudo apt-get install libsdl-mixer1.2`
-
-The full source code is available on GitHub under an open source license, so you can compile it yourself if you really want. The Windows binary works 'out of the box'.
-
-### Wizznic! in action
-
-To give a flavour of the game, here's a short YouTube video of Wizznic! in action. Apologies for the poor-quality sound; this is my first video made with the Beelink S1 mini PC (see footnote).
-
-### Screenshots
-
-#### Level 4 from the Wizznic! 1 Official Pack
-
-![Wizznic! Level4][1]
-
-The puzzles in the first pack offer a gentle introduction to the game.
-
-#### Game Editor
-
-![Wizznic! Editor][2]
-
-The game sports its own puzzle creator. With the game editor, it's simple to make your own puzzles and share them with your friends, colleagues, and the rest of the world.
-
-Features of the game include:
-
-  * Atmospheric music - composed by SeanHawk
-  * 2 game modes: Career, Arcade
-  * Many hours of mind-bending puzzles to master
-  * Create your own graphics (background images, tile sets, fonts), sound, levels, and packs
-  * Built-in game editor - create your own puzzles
-  * Play your own music
-  * High Score table for each level
-  * Skip puzzles after two failed attempts to solve them
-  * Game can be controlled with the mouse, no keyboard needed
-  * Level packs:
-    * Wizznic! 1 - Official Pack with 20 levels, 5 lives. A fairly gentle introduction
-    * Wizznic! 2 - Official Pack with 20 levels
-    * Wizznic Silver - Proof of concept with 8 levels
-    * Nes levels - NES Puzznic with 31 levels
-    * Puzznic! S.4 - Stage 4 from Puzznic with 10 levels
-    * Puzznic! S.8 - Stage 8 from Puzznic with 10 levels
-    * Puzznic! S.9 - Stage 9 from Puzznic with 10 levels
-    * Puzznic! S.10 - Stage 10 from Puzznic with 9 levels
-    * Puzznic! S.11 - Stage 11 from Puzznic with 10 levels
-    * Puzznic! S.12 - Stage 12 from Puzznic with 10 levels
-    * Puzznic! S.13 - Stage 13 from Puzznic with 10 levels
-    * Puzznic! S.14 - Stage 14 from Puzznic with 10 levels
-
-### Command-Line Options
-
-![Wizznic Command Options][3]
-
-By default OpenGL is enabled, but it can be disabled. There are options to play the game in full screen mode, or scale to a 640×480 window. There's also Oculus Rift support, and the ability to dump screenshots of the levels.
-
-| OS | Supported |
-| --- | --- |
-| ![][4] Linux | ![][5] |
-| ![][8] Windows | ![][5] |
-| ![][9] macOS | ![][10] |
-
-**Notes:** Besides Linux and Windows, there are official binaries available for Pandora, GP2X Wiz, GCW-Zero. There are also unofficial ports available for Android, Debian, Ubuntu, Gentoo, FreeBSD, Haiku, Amiga OS4, Canoo, Dingux, Motorola ZN5, U9, E8, EM30, VE66, EM35, and Playstation Portable.
-
-Homepage: **[wizznic.org][6]**
-Developer: Jimmy Christensen (Programming, Graphics, Sound Direction), ViperMD (Graphics)
-License: GNU GPL v3
-Written in: **[C][7]**
-
-**Footnote**
-
-The game's audio is way better than the poor sound in my video suggests. I probably should have tried the record facility available from the command line (see the command-line options above); instead I used vokoscreen to make the video.
-
--------------------------------------------------------------------------------
-
-via: https://www.ossblog.org/wizznic-highly-addictive-open-source-puzzle-game/
-
-作者:[Steve Emms][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.ossblog.org/author/steve/
-[1]:https://i0.wp.com/www.ossblog.org/wp-content/uploads/2017/10/Wizznic-Level4.png?resize=640%2C510&ssl=1
-[2]:https://i0.wp.com/www.ossblog.org/wp-content/uploads/2017/10/Wizznic-Editor.png?resize=640%2C510&ssl=1
-[3]:https://i2.wp.com/www.ossblog.org/wp-content/uploads/2017/10/Wizznic-CommandOptions.png?resize=800%2C397&ssl=1
-[4]:https://i1.wp.com/www.ossblog.org/wp-content/uploads/2017/01/linux.png?resize=48%2C48&ssl=1
-[5]:https://i2.wp.com/www.ossblog.org/wp-content/uploads/2017/01/tick.png?resize=49%2C48&ssl=1
-[6]:http://wizznic.org/
-[7]:https://www.ossblog.org/c-programming-language-profile/
-[8]:https://i2.wp.com/www.ossblog.org/wp-content/uploads/2017/01/windows.png?resize=48%2C48&ssl=1
-[9]:https://i0.wp.com/www.ossblog.org/wp-content/uploads/2017/01/apple_green.png?resize=48%2C48&ssl=1
-[10]:https://i1.wp.com/www.ossblog.org/wp-content/uploads/2017/01/cross.png?resize=48%2C48&ssl=1
From 06e65beb52c7afbc1b192b7b33364e3d637b27db Mon Sep 17 00:00:00 2001
From: darksun
Date: Thu, 22 Feb 2018 17:27:48 +0800
Subject: [PATCH 31/81] remove www.ectnews.com

---
 .../20171114 Take Linux and Run With It.md    |  68 ---------
 ...Is a Solid Microsoft Office Alternative.md | 138 ------------------
 2 files changed, 206 deletions(-)
 delete mode 100644 sources/tech/20171114 Take Linux and Run With It.md
 delete mode 100644 sources/tech/20171227 SoftMaker for Linux Is a Solid Microsoft Office Alternative.md

diff --git a/sources/tech/20171114 Take Linux and Run With It.md b/sources/tech/20171114 Take Linux and Run With It.md
deleted file mode 100644
index b7b6cb9663..0000000000
--- a/sources/tech/20171114 Take Linux and Run With It.md
+++ /dev/null
@@ -1,68 +0,0 @@
-Take Linux and Run With It
-============================================================
-
-![](https://www.linuxinsider.com/article_images/story_graphics_xlarge/xl-2016-linux-1.jpg)
-
-![](https://www.linuxinsider.com/images/2015/image-credit-adobe-stock_130x15.gif)
-
-"How do you run an operating system?" may seem like a simple question, since most of us are accustomed to turning on our computers and seeing our system spin up. However, this common model is only one way of running an operating system. As versatility is one of Linux's greatest strengths, Linux offers the most methods and environments for running it.
-
-To unleash the full power of Linux, and maybe even find a use for it you hadn't thought of, consider some less conventional ways of running it -- specifically, ones that don't even require installation on a computer's hard drive.
-
-### We'll Do It Live!
-
-Live-booting is a surprisingly useful and popular way to get the full Linux experience on the fly. While hard drives are where OSes reside most of the time, an OS actually can be installed to most major storage media, including CDs, DVDs and USB flash drives.
-
-When an OS is installed to some device other than a computer's onboard hard drive and subsequently booted instead of that onboard drive, it's called "live-booting" or running a "live session."
-
-At boot time, the user simply selects an external storage source for the hardware to look for boot information.
If found, the computer follows the external device's boot instructions, essentially ignoring the onboard drive until the next time the user boots normally. Optical media are increasingly rare these days, so by far the most typical form that an external OS-carrying device takes is a USB stick.
-
-Most mainstream Linux distributions offer a live session as a way of trying them out. The live session doesn't save any user activity, and the OS resets to the clean default state after every shutdown.
-
-Live Linux sessions can be used for more than testing a distro, though. One application is performing system repair on critically malfunctioning onboard (usually also Linux) systems. If an update or configuration change made the onboard system unbootable, if a full system backup is required, or if the hard drive has sustained serious file corruption, the only recourse is to start up a live system and perform maintenance on the onboard drive.
-
-In these and similar scenarios, the onboard drive cannot be manipulated or corrected while also keeping the system stored on it running, so a live system takes on those burdens instead, leaving all but the problematic files on the onboard drive at rest.
-
-Live sessions also are perfectly suited for handling sensitive information. If you don't want a computer to retain any trace of the operations executed or information handled on it, especially if you are using hardware you can't vouch for -- like a public library or hotel business center computer -- a live session will provide you all the desktop computing functions to complete your task while retaining no trace of your session once you're finished. This is great for doing online banking or password input that you don't want a computer to remember.
-
-### Linux Virtually Anywhere
-
-Another approach for implementing Linux for more on-demand purposes is to run a virtual machine on another host OS. A virtual machine, or VM, is essentially a small computer running inside another computer and contained in a single large file.
-
-To run a VM, users simply install a hypervisor program (a kind of launcher for the VM), select a downloaded Linux OS image file (usually ending with a ".iso" file extension), and walk through the setup process.
-
-Most of the settings can be left at their defaults, but the key ones to configure are the amount of RAM and hard drive storage to lease to the VM. Fortunately, since Linux has a light footprint, you don't have to set these very high: 2 GB of RAM and 16 GB of storage should be plenty for the VM while still letting your host OS thrive.
-
-So what does this offer that a live system doesn't? First, whereas live systems are ephemeral, VMs can retain the data stored on them. This is great if you want to set up your Linux VM for a special use case, like software development or even security.
-
-When used for development, a Linux VM gives you the solid foundation of Linux's programming language suites and coding tools, and it lets you save your projects right in the VM to keep everything organized.
-
-If security is your goal, Linux VMs allow you to impose an extra layer between a potential hazard and your system. If you do your browsing from the VM, a malicious program would have to compromise not only your virtual Linux system, but also the hypervisor -- and _then_ your host OS, a technical feat beyond all but the most skilled and determined adversaries.
-
-Second, you can start up your VM on demand from your host system, without having to power it down and start it up again as you would have to with a live session. When you need it, you can quickly bring up the VM, and when you're finished, you just shut it down and go back to what you were doing before.
-
-Your host system continues running normally while the VM is on, so you can attend to tasks simultaneously in each system.
-
-### Look Ma, No Installation!
-
-Just as there is no one form that Linux takes, there's also no one way to run it. Hopefully, this brief primer on the kinds of systems you can run has given you some ideas to expand your use models.
-
-The best part is that if you're not sure how these can help, it doesn't hurt to try live booting or a virtual machine!
-![](https://www.ectnews.com/images/end-enn.gif)
-
--------------------------------------------------------------------------------
-
-via: https://www.linuxinsider.com/story/Take-Linux-and-Run-With-It-84951.html
-
-作者:[ Jonathan Terrasi ][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.linuxinsider.com/story/Take-Linux-and-Run-With-It-84951.html#searchbyline
-[1]:https://www.linuxinsider.com/story/Take-Linux-and-Run-With-It-84951.html#
-[2]:https://www.linuxinsider.com/perl/mailit/?id=84951
-[3]:https://www.linuxinsider.com/story/Take-Linux-and-Run-With-It-84951.html
-[4]:https://www.linuxinsider.com/story/Take-Linux-and-Run-With-It-84951.html
diff --git a/sources/tech/20171227 SoftMaker for Linux Is a Solid Microsoft Office Alternative.md b/sources/tech/20171227 SoftMaker for Linux Is a Solid Microsoft Office Alternative.md
deleted file mode 100644
index 6d00745e7d..0000000000
--- a/sources/tech/20171227 SoftMaker for Linux Is a Solid Microsoft Office Alternative.md
+++ /dev/null
@@ -1,138 +0,0 @@
-SoftMaker for Linux Is a Solid Microsoft Office Alternative
-======
-![](https://www.linuxinsider.com/article_images/story_graphics_xlarge/xl-2017-softmaker-office-2018-1.jpg)
-
-[SoftMaker Office][6] could be a first-class professional-strength replacement for Microsoft Office on the Linux desktop.
-
-The Linux OS has its share of lightweight word processors and a few nearly worthy standalone spreadsheet apps, but very few high-end integrated office suites exist for Linux users. Generally, Linux office suites lack a really solid slide presentation creation tool.
-
-![PlanMaker Presentations][7]
-
-PlanMaker Presentations is a near-perfect Microsoft PowerPoint clone.
-
-Most Linux users opt for [LibreOffice][9] -- or maybe the withering [OpenOffice][10] -- or online subscriptions to Microsoft Office through Web browser apps.
-
-However, high-performing options for Linux office suites exist. The SoftMaker office suite product line is one such contender to replace Microsoft Office.
-
-The latest beta release of SoftMaker Office 2018 is fully compatible with Microsoft Office. It offers a completely redesigned user interface that allows users to work with either classic or ribbon menus.
-
-![SoftMaker UI Panel][11]
-
-On first use, you choose the user interface you prefer. You can change your default option at any time from the UI Panel in Settings.
-
-SoftMaker offers a complete line of free and paid office suite products that run on Android devices, Linux distros, and Windows or macOS PCs.
-
- -![][12] - -### Rethinking Options - -The beta version of this commercial Linux office suite is free. When the final version is released, two Linux commercial versions will be available. The licenses for both let you run SoftMaker Office on five computers. They will be priced at US$69.95, or $99.95 if you want a few dictionary add-on tools included. - -Check out the free beta of the commercial version. A completely free open source-licensed version called "SoftMaker FreeOffice 2018" will be available soon. Switching is seamless. The FreeOffice line is distributed under the Mozilla Public License. - -The FreeOffice 2018 release will have the same ribbon interface option as SoftMaker Office 2018. The exact feature list is not finalized yet, according to Sales Manager Jordan Popov. Both the free and the paid versions will contain fully functional TextMaker, PlanMaker, and Presentations, just like the paid Linux SoftMaker Office 2018 release. The Linux edition has the Thunderbird email management client. - -When I reviewed SoftMaker FreeOffice 2016 and SoftMaker Office 2016, I found the paid and free versions to be almost identical in functionality and features. So opting for the free versus the paid version of the 2018 office suites might be a no-brainer. - -The value here is that the free open source and both commercial versions of the 2018 releases are true 64-bit products. Previous releases required some 32-bit dependencies to run on 64-bit architecture. - -### First Look Impressive - -The free version (FreeOffice 2018 for Linux) is not yet available for review. SoftMaker expects to release FreeOffice 2018 for Linux at the end of the first quarter of 2018. - -So I took the free beta release of Office 2018 for a spin to check out the performance of the ribbon user interface. Its performance was impressive. - -I regularly use the latest version of LibreOffice and earlier versions of FreeOffice. Their user interfaces mimic standard drop-down menus. - -It took me some time to get used to the ribbon menu, since I was unfamiliar with using it on Microsoft Office, but I came to like it. - -### How It Works - -The process is a bit different from scrolling through drop-down menus. You click on a category in the toolbar row at the top of the application window and then scan across the lateral display of boxed lists of functions. - -The lateral array of clickable menu choices changes with each toolbar category selected. Once I learned what was where, my appreciation for the "ribbon effect" grew. - -![TextMaker screen shot][13] - -The ribbon interface gives users a different look and feel when creating or editing documents on the Linux desktop. - -The labels are: File, Home, Insert, Layout, References, Mailings, Review and View. Click the action you want and instantly see it applied. There are no cascading menus. - -At first, I did not like not having any customizations available for things like often-used actions. Then I right-clicked on an item in the ribbon and discovered a pop-up menu. - -This provides a way to customize a Quick Action Bar, customize the ribbon display choices, and show the Quick Action toolbar as a separate toolbar. That prompted me to sit up straight and dig in with eyes wide open. - -### Great User Experience - -I process a significant number of graphics-heavy documents each week that are produced with Microsoft Office. I edit many of these documents and create many more. - -Much of my work goes to users of Microsoft Office.
LibreOffice and SoftMaker Office applications have little to no trouble handling native Microsoft file formats such as DOCX, XLSX and PPTX. - -LibreOffice's formatting -- both the on-screen and printed versions -- is well done most of the time. SoftMaker's document renderings are even more exact. - -The beta release of SoftMaker Office 2018 for Linux is even better. Especially with SoftMaker Office 2018, I can exchange files directly with Microsoft Office users without conversion. This obviates the need to import or export documents. - -Given the nearly indistinguishable performance between previous paid and free versions of SoftMaker products, it seems safe to expect the same type of performance from FreeOffice 2018 when it arrives. - -### Expanding Office Reach - -SoftMaker products can give you complete cross-platform continuity for your office suite needs. - -Four Android editions are available: - - * SoftMaker Office Mobile is the paid or commercial version for Android phones. You can find it in Google Play as TextMaker Mobile, PlanMaker Mobile and Presentations Mobile. - * SoftMaker FreeOffice Mobile is the free version for Android phones. You can find it in Google Play as the FREE OFFICE version of TextMaker Mobile, PlanMaker Mobile and Presentations Mobile. - * SoftMaker Office HD is the paid or commercial version for Android tablets. You can find it in Google Play as TextMaker HD, PlanMaker HD and Presentations HD. - * SoftMaker Office HD Basic is the free version for Android tablets. You can find it in Google Play as TextMaker HD Basic, PlanMaker HD Basic and Presentations HD Basic. - - - -Also available in Google Play are TextMaker HD Trial, PlanMaker HD Trial and Presentations HD Trial. These apps work only for 30 days but have all the features of the full version (Office HD). - -### Bottom Line - -The easy access to the free download of SoftMaker Office 2018 gives you nothing to lose in checking out its suitability as a Microsoft Office replacement. If you decide to upgrade to the paid Linux release, you will pay $69.95 for a proprietary license. That is the same price as the Home and Student editions of Microsoft Office 365. - -If you opt for the free open source version, FreeOffice 2018, when it is released, you still could have a top-of-the-line alternative to other Linux tools that play well with Microsoft Office. - -Download the [SoftMaker Office 2018 beta][15]. - -### Want to Suggest a Review? - -Is there a Linux software application or distro you'd like to suggest for review? Something you love or would like to get to know? Please [email your ideas to me][16], and I'll consider them for a future Linux Picks and Pans column. And use the Reader Comments feature below to provide your input! ![][17] - -### About the Author - -![][18] **Jack M. Germain** has been an ECT News Network reporter since 2003. His main areas of focus are enterprise IT, Linux and open source technologies. He has written numerous reviews of Linux distros and other open source software. [Email Jack.][19] - - -------------------------------------------------------------------------------- - -via: https://www.linuxinsider.com/story/SoftMaker-for-Linux-Is-a-Solid-Microsoft-Office-Alternative-85018.html - -作者:[Jack M.
Germain][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linuxinsider.com -[1]:https://www.linuxinsider.com/images/2008/atab.gif -[2]:https://www.linuxinsider.com/images/sda/all_ec_134x30.png -[4]:https://www.linuxinsider.com/adsys/count/10019/?nm=1-allec-ci-lin-1&ENN_rnd=15154948085323&ign=0/ign.gif -[5]:https://www.linuxinsider.com/images/article_images/linux5stars_580x24.jpg -[6]:http://www.softmaker.com/en/softmaker-office-linux -[7]:https://www.linuxinsider.com/article_images/2017/85018_620x358-small.jpg -[8]:https://www.linuxinsider.com/article_images/2017/85018_990x572.jpg (::::topclose:true) -[9]:http://www.libreoffice.org/ -[10]:http://www.openoffice.org/ -[11]:https://www.linuxinsider.com/article_images/2017/85018_620x439.jpg -[12]:https://www.linuxinsider.com/adsys/count/10087/?nm=1i-lin_160-1&ENN_rnd=15154948084583&ign=0/ign.gif -[13]:https://www.linuxinsider.com/article_images/2017/85018_620x264-small.jpg -[14]:https://www.linuxinsider.com/article_images/2017/85018_990x421.jpg (::::topclose:true) -[15]:http://www.softmaker.com/en/softmaker-office-linux- -[16]:mailto:jack.germain@ -[17]:https://www.ectnews.com/images/end-enn.gif -[18]:https://www.linuxinsider.com/images/rws572389/Jack%20M.%20Germain.jpg -[19]:mailto:jack.germain@newsroom.ectnews.comm From c904ecf3658b559fb6f001ba4a7d576d766d2b48 Mon Sep 17 00:00:00 2001 From: darksun Date: Thu, 22 Feb 2018 17:28:54 +0800 Subject: [PATCH 32/81] remove www.lieberbiber.de --- .../20171229 Forgotten FOSS Games- Boson.md | 131 ------------------ ...14 What a GNU C Compiler Bug looks like.md | 77 ---------- 2 files changed, 208 deletions(-) delete mode 100644 sources/tech/20171229 Forgotten FOSS Games- Boson.md delete mode 100644 sources/tech/20180114 What a GNU C Compiler Bug looks like.md diff --git a/sources/tech/20171229 Forgotten FOSS Games- Boson.md b/sources/tech/20171229 Forgotten FOSS Games- Boson.md deleted file mode 100644 index 7cbbc231b5..0000000000 --- a/sources/tech/20171229 Forgotten FOSS Games- Boson.md +++ /dev/null @@ -1,131 +0,0 @@ -Forgotten FOSS Games: Boson -====== - -![](http://www.lieberbiber.de/wp-content/uploads/2017/12/0.10-2-800x445.jpg) - -Back in September of 1999, just about a year after the KDE project had shipped its first release ever, Thomas Capricelli announced "our attempt to make a Real Time Strategy game (RTS) for the KDE project" on the [kde-announce][1] mailing list. Boson 0.1, as the attempt was called, was based on Qt 1.4, the KDE 1.x libraries, and described as being "Warcraft-like". - -Development continued at a fast pace over the following year. 3D artists and sound designers were invited to contribute, and basic game play (e.g. collecting oil and minerals) started working. The core engine gained much-needed features. A map editor was already part of the package. Four releases later, on October 30, 2000, the release of version 0.5 was celebrated as a major milestone, also because Boson had been ported to Qt 2.2.1 & KDE 2.0 to match the development of the projects it was based on. Then the project suddenly went into hiatus, as it happens so often with ambitious open source game projects. A new set of developers revived Boson one year later, in 2001, and decided to port the game to Qt 3, the KDE 3 libraries and the recently introduced libkdegames library. 
- -![][2] - -By version 0.6 (released in June of 2002) the project was on a very good path again, having been extended with all the features players were used to from similar RTS titles, e.g. fog of war, path-finding, units defending themselves automatically, the destruction of a radar/satellite station leading to the disappearance of the minimap, and so on. The game came with its own soundtrack (you had the choice between "Jungle" and "Progressive"), although the tracks did sound a bit… similar to each other, and Techno hadn't been a thing in game soundtracks since Command and Conquer: Tiberian Sun. More maps and sound effects tightened the atmosphere, but there was no computer opponent with artificial intelligence, so you absolutely had to play over a local network or online. - -Sadly the old websites are no longer online, and YouTube was not a thing back then, so most of the old artwork, videos and roadmaps are lost. But the [SourceForge page][3] has survived, and the [Subversion repository][4] contains screenshots from version 0.7 on and some older ones from unknown version numbers. - -### From 2D to 3D - -It might be hard to believe nowadays, but Boson was a 2D game until the release of version 0.7 in January of 2003. So it didn't look like Warcraft 3 (released in 2002), but much more like Warcraft 2 or the first five Command & Conquer titles. The engine was extended with OpenGL support and now "just" loaded the existing 3D models instead of forcing the developers to pre-render them into 2D sprites. Why so late? Because your average Linux installation simply didn't have OpenGL support when Boson was created back in 1999. The first XFree86 release to include GLX (OpenGL Extension to the X Window System) was version 4.0, published in 2000. And then it took a while to get OpenGL acceleration working in the major Linux graphics drivers (Matrox G200/G400, NVIDIA Riva TNT, ATI RagePro, S3 ViRGE and Intel 810). I can't say it was trivial to set up a Linux desktop with hardware-accelerated 3D until Ubuntu 7.04 put all the bits together for the first time and made it easy to install the proprietary NVIDIA drivers through the "Additional Drivers" settings dialogue. - -![](http://www.lieberbiber.de/wp-content/uploads/2017/12/gl_boson1.jpg) - -So when Boson switched to OpenGL in January of 2003, that sucked. You now absolutely needed hardware acceleration to be able to play it, and well, it was January of 2003. GPUs still used [AGP][5] slots back then, ATI was still an independent company, the Athlon64 would not be released before September 2003, and you were happy if you even owned a GeForce FX or a Radeon 9000 card. Luckily I did, and when I came across Boson, I immediately downloaded it, built it on my Gentoo machine and ran it on my three-monitor setup (two 15″ TFTs and one 21″ CRT). After debugging 3D hardware acceleration for a day or two, naturally… - -![][6] - -Boson wasn't finished or even really playable back then (still no computer opponent, only a few units working, no good maps etc.), but it showed promise, especially in light of the recent release of Command & Conquer: Generals in February of 2003. The thought of having an actual open source alternative to a recently released AAA video game title was so encouraging that I started to make small contributions, mainly by [reporting bugs][7]. The cursor icon theme I created using Cinema 4D never made it into a release. - -### Development hell - -Boson went through four releases in 2003 alone, all the way up to version 0.10.
Performance was improved, and the engine was extended with support for Python scripts, adaptive level of detail, smoke, lighting, day/night switches, and flying units. The 3D models started to look nice, an elementary Artificial Intelligence opponent was added (!), and the list of dependencies grew longer. Release notices like "Don't crash when using proprietary NVidia drivers and no usable font was found (reported to NVidia nearly a year ago)" are a kind reminder that proprietary graphics drivers already were a pain to work with back then, in case anybody forgot. - -![][8] - -An important task from version 0.10 on was to remove (or at least weaken) the dependencies on Qt and KDE. To be honest I never really got why the whole game, or for that matter any application ever, had to be based on Qt and KDE to begin with. Qt is a very, very intrusive thing. It's not just a library full of cool stuff, it's a framework. It locks you into its concept of what an application is and how it is supposed to work, and what your code should look like and how it is supposed to be structured. You need a whole toolchain with a metacompiler because your code isn't even standard C++. - -Every time the Qt/KDE developers decide to break the ABI, deprecate a component or come up with a new and (supposedly) better solution to an existing solution, you have to follow - and that has happened way too often. Just ask the KDE developers how many times they had to port KDE just because Qt decided to change everything for the umpteenth time, and now imagine you depend on both Qt **and** KDE. Pretty much everything Qt offers can be solved in a less intrusive way nowadays. - -![](http://www.lieberbiber.de/wp-content/uploads/2017/12/buildings0001.png.wm-1024x576.jpg) - -Maybe the original Boson developers just wanted to take advantage of Qt's 2D graphics subsystem to make development easier. Or make sure the game could run on more than one platform (at least one release was known to work on FreeBSD). Or they hoped to become a part of the official KDE family to keep the project visible and attract more developers. Whatever the reason might have been, the cleanup was in full swing. aRts (the audio subsystem used by KDE 2 and 3) was replaced by the more standard OpenAL library. [libUFO][9] (which is one of the very few projects relying on the XUL language Mozilla uses to design UIs for Firefox and other applications, BTW) was used to draw the on-screen menus. - -The release of version 0.11 was delayed for 16 months due to the two main developers being busy with other stuff, but the changelog was very long and exciting. "Realistic" water and environmental objects like trees were added to the maps, and the engine learned how to handle wind. The path-finding algorithm and the artificial intelligence opponent became smarter, and everything seemed to slowly come together. - -![](http://www.lieberbiber.de/wp-content/uploads/2017/12/units0001.png.wm.jpg) - -By the time version 0.12 was released eight months later, Boson had working power plants, animated turrets, a new radar system and much more. Version 0.13, the last one to ever be officially released, again shipped an impressive amount of new features and improvements, but most of the changes were not really visible. - -Version 0.13 had been released in October of 2006, and after December of the same year the commit rate suddenly dropped to zero. There were only two commits in the whole year of 2007, followed by an unsuccessful attempt to revive the project in 2008.
In 2011 the "Help wanted" text was finally removed from the (broken) website and Boson was officially declared dead. - -### Let's Play Boson! - -The game no longer even builds on modern GNU/Linux distributions, mostly due to the unavailability of Qt 3 and the KDE 3 libraries and some other oddities. I managed to install Ubuntu 11.04 LTS in a VirtualBox, which was the last Ubuntu release to have Boson in its repositories. Don't be surprised by how bad the performance is, as far as I can tell it's not the fault of VirtualBox. Boson never ran fast on any kind of hardware and did everything in a single thread, probably losing a lot of performance when synchronizing with various subsystems. - -Here's a video of me trying to play. First I enable the eye candy (the shaders) and start one of the maps in the official "campaign" in which I am immediately attacked by the enemy and don't really have time to concentrate on resource collection, only to have the game crash on me before I loose to the enemy. Then I start a map without an enemy (there is supposed to be one, but my units never find it) so I have more time to actually explore all the options, buildings and units. - -Sound didn't work in this release, so I added some tracks of the official soundtrack to the audio track of the video. - -https://www.youtube.com/embed/18sqwNjlBow?feature=oembed - -You can clearly see that the whole codebase was still in full developer mode back in 2006. There are multiple checkboxes for debugging information at the top of the screen, some debugging text scrolls over the actual text of the game UI. Everything can be configured in the Options dialogue, and you can easily break the game by fiddling with internal settings like the OpenGL update interval. Set it too low (the default is 50 Hz), and the internal logic will get confused. Clearly this is because the OpenGL renderer and the game logic run in the same thread, something one would probably no longer do nowadays. - -![](http://www.lieberbiber.de/wp-content/uploads/2017/12/boson_option_dialogue.png.wm.jpg) - -The game menu has all the standard options. There are three available campaigns, each one with its own missions. The mission overview hints that each map could have different conditions for winning and loosing, e.g. in the "Lost Village" mission you are not supposed to destroy the houses in the occupied village. There are ten available colours for the players and three different species: Human, Neutral and "Smackware". No idea where that Name comes from, judging from the unit models it looks like it was just another human player with different units. - -There is only a single type of artificial intelligence for the computer opponent. Pretty much all other RTS games offer multiple different opponent AIs. These are either following completely different strategies or are based on a few basic types of AIs which are limited by some external influences, e.g. limiting the rate of resources they get/can collect, or limiting them to certain unit types. - -![](http://www.lieberbiber.de/wp-content/uploads/2017/12/boson_game_menu.png.wm-1024x537.jpg) - -The game itself does not look very attractive, even with the "realistic-looking" water. Smoke looks okay, but smoke is easy. There is nothing going on on the maps: The ground has only a single texture, the shoreline is very edgy at many points, there is no vegetation except for some lonely trees. Shadows look "wrong"(but enabling them seemed to cause crashes anyways). 
All the standard mouse and keyboard bindings (assign the selected units to a group, select group, move, attack, etc.) are there and working. - -One of the less common features is that you can zoom out of the map completely and the engine marks all units with a coloured rectangle. This is something Command & Conquer never had, but games like Supreme Commander did. - -![][10] - -![][12] - -The game logic is identical to all the other "traditional" Base Building RTS games. You start with a base (Boson calls it the Command Bunker) and optionally some additional buildings and units. The Command Bunker builds all buildings; factory buildings produce units or electrical power, or can fight enemy units. Some buildings change the game, e.g. the existence of a Radar will show enemy units on the mini-map even if they are currently inside the fog of war. Some units can gather resources (Minerals and Oil in the case of Boson) and bring them to refineries; each unit and building comes at the cost of a defined amount of these resources. Buildings require electrical power. Since war is mostly a game of logistics, finding and securing resources and destroying the opponent before the resources run out is key. There is a "Tech Tree" with dependencies, which prevents you from being able to build everything right from the start. For example, advanced units require the existence of a Tech Center or something similar. - -There are basically two types of user interfaces for RTS games: In the first one, building units and buildings is part of the UI itself. There is a central menu, often at the left or at the right of the screen, which shows all options, and when you click one, production starts regardless of whether your base or factory buildings are visible on the screen right now or not. In the second one you have to select your Command Bunker or each of the factories manually and choose from their individual menus. Boson uses the second type. The menu items are not very clear and not easily visible, but I guess once you're accustomed to them the item locations move to muscle memory. - -![](http://www.lieberbiber.de/wp-content/uploads/2017/12/boson_command_bunker.png.wm-1024x579.jpg) - -![](http://www.lieberbiber.de/wp-content/uploads/2017/12/boson_weapons_factory.png.wm-1024x465.jpg) - -In total the game could probably already look quite nice if somebody made a couple of nice maps and cleaned up the user interface. But there is a long list of annoying bugs. Units often simply stop if they encounter an obstacle. Mineral and Oil Harvesters are supposed to shuttle between the location of the resource and a refinery automatically, but their internal state machine seems to fail a lot. Send the collector to a Mineral Mine, and it doesn't start to collect. Click around a lot, and it suddenly starts to collect. When it is full, it doesn't go back to the refinery, or it goes there and doesn't unload. Sometimes the whole cycle works for a while and then breaks while you're not looking. Frustrating. - -Vehicles also sometimes go through water when they're not supposed to, or even go through the map geometry (!). This points at some major problem with collision detection. - -![](http://www.lieberbiber.de/wp-content/uploads/2017/12/boson_vehicle_through_geometry.png.wm.jpg) - -The win/lose message does look a bit… beta as well 😉 - -[![][14]][14] - -### Why was it never finished? - -I think there were many reasons why Boson died. The engine was completely home-grown and lacking a lot in features, testing and performance.
The less important subsystems, like audio output, were broken more often than not. There was no "real" focus on getting the basic parts (collecting resources, producing units, fighting battles) fully (!) working before time was spent on less important details like water, smoke, wind etc. There were also many technical challenges. Most users wouldn't have been able to enjoy the game even in 2006, due to the missing 3D acceleration on many Linux distributions (Ubuntu pretty much solved that in 2007, not earlier). Qt 4 had been released in 2005, and porting from Qt 3 to Qt 4 was not exactly easy. The KDE project decided to take this as an opportunity to overhaul pretty much every bit of code, leading to the KDE 4 Desktop. Boson didn't really need any of the functionality in either Qt or KDE, but it would have been necessary to port everything anyway, for no good reason. - -Also the competition became much stronger after 2004. The full source code for [Warzone 2100][15], an actual commercial RTS game with much more complicated game play, had been released under an open source license in 2004 and is still being maintained today. Fans of Total Annihilation started to work on 3D viewers for the game, leading to [Total Annihilation 3D][16] and the [Spring RTS][17] engine. - -Boson never had a big community of active players, so there was no pool from which new developers could have been recruited. Obviously it died when the last few developers carrying it along no longer felt it was worth the time, and I think it is clear that the amount of work required to turn the game into something playable would still have been very high. - --------------------------------------------------------------------------------- - -via: http://www.lieberbiber.de/2017/12/29/forgotten-foss-games-boson/ - -作者:[sturmflut][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: -[1]:https://marc.info/?l=kde-announce&r=1&w=2 -[2]:http://www.lieberbiber.de/wp-content/uploads/2017/03/client8.jpg -[3]:http://boson.sourceforge.net -[4]:https://sourceforge.net/p/boson/code/HEAD/tree/ -[5]:https://en.wikipedia.org/wiki/Accelerated_Graphics_Port -[6]:http://www.lieberbiber.de/wp-content/uploads/2017/03/0.8-1-1024x768.jpg -[7]:https://sourceforge.net/p/boson/code/3888/ -[8]:http://www.lieberbiber.de/wp-content/uploads/2017/03/0.9-1.jpg -[9]:http://libufo.sourceforge.net/ -[10]:http://www.lieberbiber.de/wp-content/uploads/2017/12/boson_game_1.png.wm-1024x510.jpg -[11]:http://www.lieberbiber.de/wp-content/uploads/2017/12/boson_game_1.png.wm.jpg -[12]:http://www.lieberbiber.de/wp-content/uploads/2017/12/boson_maximum_zoom_out.png.wm-1024x511.jpg -[13]:http://www.lieberbiber.de/wp-content/uploads/2017/12/boson_maximum_zoom_out.png.wm.jpg -[14]:http://www.lieberbiber.de/wp-content/uploads/2017/12/boson_game_end.png.85.jpg -[15]:http://wz2100.net/ -[16]:https://github.com/zuzuf/TA3D -[17]:https://en.wikipedia.org/wiki/Spring_Engine diff --git a/sources/tech/20180114 What a GNU C Compiler Bug looks like.md b/sources/tech/20180114 What a GNU C Compiler Bug looks like.md deleted file mode 100644 index 3b95d4089b..0000000000 --- a/sources/tech/20180114 What a GNU C Compiler Bug looks like.md +++ /dev/null @@ -1,77 +0,0 @@ -What a GNU C Compiler Bug looks like -====== -Back in December a Linux Mint user sent a [strange bug report][1] to the darktable mailing list.
Apparently the GNU C Compiler (GCC) on his system exited with the following error message, breaking the build process:
```
cc1: error: unrecognized command line option '-Wno-format-truncation' [-Werror]
cc1: all warnings being treated as errors
src/iop/CMakeFiles/colortransfer.dir/build.make:67: recipe for target 'src/iop/CMakeFiles/colortransfer.dir/introspection_colortransfer.c.o' failed
make[2]: *** [src/iop/CMakeFiles/colortransfer.dir/introspection_colortransfer.c.o] Error 1
CMakeFiles/Makefile2:6323: recipe for target 'src/iop/CMakeFiles/colortransfer.dir/all' failed

make[1]: *** [src/iop/CMakeFiles/colortransfer.dir/all] Error 2

```

`-Wformat-truncation` is a rather new GCC feature which instructs the compiler to issue a warning if it can already deduce at compile time that calls to formatted I/O functions like `snprintf()` or `vsnprintf()` might result in truncated output; `-Wno-format-truncation` is its negation. - -That's definitely neat, but Linux Mint 18.3 (just like Ubuntu 16.04 LTS) uses GCC 5.4.0, which doesn't support this feature. And darktable relies on a chain of CMake macros to make sure it doesn't use any flags the compiler doesn't know about:
```
CHECK_COMPILER_FLAG_AND_ENABLE_IT(-Wno-format-truncation)

```

So why did this even happen? I logged into one of my Ubuntu 16.04 installations and tried to reproduce the problem. Which wasn't hard: I just had to check out the git tree in question and build it. Boom, same error. - -### The solution - -It turns out that while `-Wformat-truncation` isn't a valid option for GCC 5.4.0 (it's not documented), this version silently accepts the negation under some circumstances (!):
```

sturmflut@hogsmeade:/tmp$ gcc -Wformat-truncation -o test test.c
gcc: error: unrecognized command line option '-Wformat-truncation'
sturmflut@hogsmeade:/tmp$ gcc -Wno-format-truncation -o test test.c
sturmflut@hogsmeade:/tmp$

```

(test.c just contains an empty main() function). - -Because darktable uses `CHECK_COMPILER_FLAG_AND_ENABLE_IT(-Wno-format-truncation)`, it is fooled into thinking this compiler version actually supports `-Wno-format-truncation` at all times. The simple test case used by the CMake macro doesn't fail, but the compiler later decides to no longer silently accept the invalid command line option for some reason. - -One of the cases which triggered this was when the source file under compilation had already generated some other warnings before. If I forced a serialized build using `make -j1` on a clean darktable checkout on this machine, `./src/iop/colortransfer.c` actually was the first file which caused any
compiler warnings at all, so this is why the process failed exactly there.
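As a side note, one way a build system could guard against this quirk is to probe the positive form of the flag instead, since GCC rejects an unknown `-W<flag>` outright while an unknown `-Wno-<flag>` may be accepted silently. A minimal shell sketch of that idea -- not darktable's actual macro, just an illustration:
```
# Probe -Wformat-truncation itself; only add the -Wno- variant if the
# compiler really knows the warning. An unknown -W<flag> makes GCC fail
# immediately, so this test is reliable even on GCC 5.4.0.
if gcc -Wformat-truncation -Werror -x c -E /dev/null >/dev/null 2>&1; then
    CFLAGS="$CFLAGS -Wno-format-truncation"
fi
```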
- -The minimum test case to trigger this behavior in GCC 5.4.0 is a C file with a `main()` function with a parameter which has the wrong type, like this one: -``` - -int main(int argc, int argv) -{ -} - -``` - -Then add `-Wall` to make sure the compiler will treat this as a warning, and it fails: -``` - -sturmflut@hogsmeade:/tmp$ gcc -Wall -Wno-format-truncation -o test test.c -test.c:1:5: warning: second argument of 'main' should be 'char **' [-Wmain] - int main(int argc, int argv) - ^ -cc1: warning: unrecognized command line option '-Wno-format-truncation' - -``` - -If you omit `-Wall`, the compiler will not generate the first warning and also not complain about `-Wno-format-truncation`. - -I've never run into this before, but I guess Ubuntu 16.04 is going to stay with us for a while since it is the current LTS release until May 2018, and even after that it will still be supported until 2021. So this buggy GCC version will most likely also stay alive for quite a while. Which is why the check for this flag has been removed from the - --------------------------------------------------------------------------------- - -via: http://www.lieberbiber.de/2018/01/14/what-a-gnu-compiler-bug-looks-like/ - -作者:[sturmflut][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.lieberbiber.de/author/sturmflut/ -[1]:https://www.mail-archive.com/darktable-dev@lists.darktable.org/msg02760.html From 74b67a87925a4f47029949d476826b4e73d5fa54 Mon Sep 17 00:00:00 2001 From: darksun Date: Thu, 22 Feb 2018 18:01:09 +0800 Subject: [PATCH 33/81] remove systemoverlord.com --- ...ty Is Not an Absolute - System Overlord.md | 62 ----------------- ...ltiple reverse shells - System Overlord.md | 66 ------------------- 2 files changed, 128 deletions(-) delete mode 100644 sources/talk/20180205 Security Is Not an Absolute - System Overlord.md delete mode 100644 sources/tech/20180120 socat as a handler for multiple reverse shells - System Overlord.md diff --git a/sources/talk/20180205 Security Is Not an Absolute - System Overlord.md b/sources/talk/20180205 Security Is Not an Absolute - System Overlord.md deleted file mode 100644 index d0bd003c8f..0000000000 --- a/sources/talk/20180205 Security Is Not an Absolute - System Overlord.md +++ /dev/null @@ -1,62 +0,0 @@ -Security Is Not an Absolute -====== - -If there’s one thing I wish people from outside the security industry knew when dealing with information security, it’s that **Security is not an absolute**. Most of the time, it’s not even quantifiable. Even in the case of particular threat models, it’s often impossible to make statements about the security of a system with certainty. - -At work, I deal with a lot of very smart people who are not “security people”, but are well-meaning and trying to do the right thing. Online, I sometimes find myself in conversations on [/r/netsec][1], [/r/netsecstudents][2], [/r/asknetsec][3], or [security.stackexchange][4] where someone wants to know something about information security. Either way, it’s quite common that someone asks the fateful question: “Is this secure?”. There are actually only two answers to this question, and neither one is “Yes.” - -The first answer is, fairly obviously, “No.” There are some ideas that are not secure under any reasonable definition of security. Imagine an employer that makes the PIN for your payroll system the day and month on which you started your new job. 
Clearly, all it takes is someone posting "started my new job today!" to social media, and their PIN has been outed. Consider transporting an encrypted hard drive with the password on a sticky note attached to the outside of the drive. Both of these systems have employed some form of "security control" (even if I use the term loosely), and both are clearly insecure to even the most rudimentary of attackers. Consequently, answering "Is this secure?" with a firm "No" seems appropriate. - -The second answer is more nuanced: "It depends." What it depends on, and whether those conditions exist in the system in use, are what many security professionals get paid to evaluate. For example, consider the employer in the previous paragraph. Instead of using a fixed scheme for PINs, they now generate a random 4-digit PIN and mail it to each new employee. Is this secure? That all depends on the threat model being applied to the scenario. If we allow an attacker unlimited attempts to log in as that user, then no 4-digit PIN (random or deterministic) is reasonably secure. On average, an attacker will need no more than 5000 requests to find the valid PIN. That can be done by a very basic script in tens of minutes. If, on the other hand, we lock the account after 10 failed attempts, then we've reduced the attacker to a 0.1% chance of success for a given account. Is this secure? For a single account, this is probably reasonably secure (although most users might be uncomfortable at even a 1 in 1000 chance of an attacker succeeding against their personal account), but what if the attacker has a list of 1000 usernames? The attacker now has roughly a **63%** chance (1 - 0.999^1000 ≈ 0.63) of successfully accessing at least one account. I think most businesses would find those odds very much against their favor. - -So why can't we ever come up with an answer of "Yes, this is a secure system"? Well, there are several factors at play here. The first is that very little in life in general is an absolute: - - * Your doctor cannot tell you with certainty that you will be alive tomorrow. - * A seismologist can't say that there absolutely won't be a 9.0 earthquake that levels a big chunk of the West Coast. - * Your car manufacturer cannot guarantee that the 4 wheels on your car do not fall off on your way to work tomorrow. - - - -However, all of these possibilities are very remote events. Most people are comfortable with these probabilities, largely because they do not think much about them, but even if they did, they would believe that it would not happen to them. (And almost always, they would be correct in that assumption.) - -Unfortunately, in information security, we have three things working against us: - - * The risks are much less understood by those seeking to understand them. - * The reality is that there are enough security threats that are **much** more common than the events above. - * The threats against which security must guard are **adaptive**. - - - -Because most people have a hard time reasoning about the likelihood of attacks and threats against them, they seek absolute reassurance. They don't want to be told "it depends", they just want to hear "yes, you're fine." Many of these individuals are the hypochondriacs of the information security world – they think every possible attack will get them, and they want absolute reassurance they're safe from those attacks. Alternatively, they don't understand that there are degrees of security and threat models, and just want to be reassured that they are perfectly secure.
Either way, the effect is the same – they don't understand, but are afraid, and so want the reassurance of complete security. - -We're in an era where security breaches are unfortunately common, and developers and users alike are hearing about these vulnerabilities and breaches all the time. This causes them to pay far more attention to security than they otherwise would. By itself, this isn't bad – all of us in the industry have been trying to get everyone's attention about security issues for decades. Getting it now is better late than never. But because we're so far behind the curve and breaches are common, everyone is rushing to find out their risk and get reassurance now. Rather than consider the nuances of the situation, they just want a simple answer to "Am I secure?" - -The last of these issues, however, is also the most unique to information security. For decades, we've looked for the formula to make a system perfectly secure. However, each countermeasure or security system is quickly defeated by attackers. We're in a cat-and-mouse game, rather than an engineering discipline. - -This isn't to say that security is not an engineering practice – it certainly is in many ways (and my official title claims that I am an engineer), but just that it differs from other engineering areas. The forces faced by a building do not change in the face of design changes by the structural engineer. Gravity remains a constant, wind forces are predictable for a given design, the seismic nature of an area is approximately known. Making the building have stronger doors does not suddenly increase the wind forces on the windows. In security, however, when we "strengthen the doors", the attackers do turn to the "windows" of our system. Our threats are **adaptive** – for each control we implement, they adapt to attempt to circumvent that control. For this reason, a system that was believed secure against the known threats one year is completely broken the next. - -Another form of security absolutism comes from those who realize there are degrees of security, but want to take it to an almost ridiculous level of paranoia. Nearly always, these seem to be interested in forms of cryptography – perhaps because cryptography offers numbers that can be tweaked, giving an impression of differing levels of security. - - * Generating RSA encryption keys of over 4k bits in length, even though all cryptographers agree this is pointless. - * Asking why AES-512 doesn't exist, even though SHA-512 does. (Because the length of a hash and the length of a key are not equivalent in effective strength against attacks.) - * Setting up bizarre browser settings and then complaining about websites being broken. (Disabling all JavaScript, all cookies, and all ciphers that are shorter than 256 bits or lack perfect forward secrecy, etc.) - - - -So the next time you want to know "Is this secure?", consider the threat model: what are you trying to defend against? Recognize that there are no security absolutes and guarantees, and that good security engineering practice often involves compromise. Sometimes the compromise is one of usability or utility, sometimes the compromise involves working in a less-than-perfect world.
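For the record, the arithmetic behind the "at least one account" figure in the PIN example above is just the complement rule; in LaTeX notation:
```
P(\text{at least one success}) = 1 - (1 - p)^{n}
                               = 1 - (1 - 0.001)^{1000} \approx 1 - e^{-1} \approx 0.63
```
Here p = 10/10^4 = 0.001 is the per-account success probability (10 attempts against a uniformly random 4-digit PIN) and n = 1000 is the number of targeted usernames.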
- -------------------------------------------------------------------------------- - -via: https://systemoverlord.com/2018/02/05/security-is-not-an-absolute.html - -作者:[David][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://systemoverlord.com/about -[1]:https://reddit.com/r/netsec -[2]:https://reddit.com/r/netsecstudents -[3]:https://reddit.com/r/asknetsec -[4]:https://security.stackexchange.com diff --git a/sources/tech/20180120 socat as a handler for multiple reverse shells - System Overlord.md b/sources/tech/20180120 socat as a handler for multiple reverse shells - System Overlord.md deleted file mode 100644 index b57a1e0140..0000000000 --- a/sources/tech/20180120 socat as a handler for multiple reverse shells - System Overlord.md +++ /dev/null @@ -1,66 +0,0 @@ -socat as a handler for multiple reverse shells · System Overlord -====== - -I was looking for a new way to handle multiple incoming reverse shells. My shells needed to be encrypted and I preferred not to use Metasploit in this case. Because of the way I was deploying my implants, I wasn't able to use separate incoming port numbers or other ways of directing the traffic to multiple listeners. - -Obviously, it's important to keep each reverse shell separated, so I couldn't just have a listener redirecting all the connections to STDIN/STDOUT. I also didn't want to wait for sessions serially - obviously I wanted to be connected to all of my implants simultaneously. (And allow them to disconnect/reconnect as needed due to loss of network connectivity.) - -As I was thinking about the problem, I realized that I basically wanted `tmux` for reverse shells. So I began to wonder if there was some way to connect `openssl s_server` or something similar to `tmux`. Given the limitations of `s_server`, I started looking at `socat`. Despite its versatility, I've actually only used it once or twice before this, so I spent a fair bit of time reading the man page and the examples. - -I couldn't find a way to get `socat` to talk directly to `tmux` in a way that would spawn each connection as a new window (file descriptors are not passed to the newly-started process in `tmux new-window`), so I ended up with a strange workaround. I feel a little bit like Rube Goldberg inventing C2 software (and I need to get something more permanent and featureful eventually, but this was a quick and dirty PoC), but I've put together a chain of `socat` processes to get a working solution. - -My implementation works by having a single `socat` process receive the incoming connections (forking on each incoming connection) and execute a script that first starts a `socat` instance within tmux, and then launches another `socat` process to copy from the first to the second over a UNIX domain socket. - -Yes, this is 3 socat processes. It's a little ridiculous, but I couldn't find a better approach. Roughly speaking, the communications flow looks a little like this: -``` -TLS data <--> socat listener <--> script stdio <--> socat <--> unix socket <--> socat in tmux <--> terminal window - -``` - -Getting it started is fairly simple. Begin by generating your SSL certificate. In this case, I'm using a self-signed certificate, but obviously you could go through a commercial CA, Let's Encrypt, etc.
-```
-openssl req -newkey rsa:2048 -nodes -keyout server.key -x509 -days 30 -out server.crt
-cat server.key server.crt > server.pem

```

Now we will create the script that is run on each incoming connection. This script needs to launch a `tmux` window running a `socat` process copying from a UNIX domain socket to `stdio` (in tmux), and then connect another `socat` between the incoming `stdio` and the UNIX domain socket.
```
#!/bin/bash

SOCKDIR=$(mktemp -d)
SOCKF=${SOCKDIR}/usock

# Start tmux, if needed
tmux start
# Create window
tmux new-window "socat UNIX-LISTEN:${SOCKF},umask=0077 STDIO"
# Wait for socket
while test ! -e ${SOCKF} ; do sleep 1 ; done
# Use socat to ship data between the unix socket and STDIO.
exec socat STDIO UNIX-CONNECT:${SOCKF}
```

The while loop is necessary to make sure that the last `socat` process does not attempt to open the UNIX domain socket before it has been created by the new `tmux` child process. - -Finally, we can launch the `socat` process that will accept the incoming requests (handling all the TLS steps) and execute our per-connection script:
```
socat OPENSSL-LISTEN:8443,cert=server.pem,reuseaddr,verify=0,fork EXEC:./socatscript.sh

```

This listens on port 8443, using the certificate and private key contained in `server.pem`, performs a `fork()` on accepting each incoming connection (so they do not block each other) and disables certificate verification (since we're not expecting our clients to provide a certificate). On the other side, it launches our script, providing the data from the TLS connection via STDIO. - -At this point, an incoming TLS connection is accepted and passed through our processes to eventually arrive on the `STDIO` of a new window in the running `tmux` server. Each connection gets its own window, allowing us to easily see and manage the connections for our implants. - --------------------------------------------------------------------------------- - -via: https://systemoverlord.com/2018/01/20/socat-as-a-handler-for-multiple-reverse-shells.html - -作者:[David][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://systemoverlord.com/about From 4b89bd50bb67cc5186ca438efe05d61b9f5ad3f1 Mon Sep 17 00:00:00 2001 From: MjSeven <33125422+MjSeven@users.noreply.github.com> Date: Thu, 22 Feb 2018 19:35:36 +0800 Subject: Update 20170918 Two Easy Ways To Install Bing Desktop Wallpaper Changer On Linux.md --- ...y Ways To Install Bing Desktop Wallpaper Changer On Linux.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20170918 Two Easy Ways To Install Bing Desktop Wallpaper Changer On Linux.md b/sources/tech/20170918 Two Easy Ways To Install Bing Desktop Wallpaper Changer On Linux.md index 9e9dbb814c..ad09efdae3 100644 --- a/sources/tech/20170918 Two Easy Ways To Install Bing Desktop Wallpaper Changer On Linux.md +++ b/sources/tech/20170918 Two Easy Ways To Install Bing Desktop Wallpaper Changer On Linux.md @@ -1,3 +1,5 @@ +Translating by MjSeven + Two Easy Ways To Install Bing Desktop Wallpaper Changer On Linux ====== Are you bored with your Linux desktop background and want to set good-looking wallpapers but don't know where to find them? Don't worry, we are here to help you.
From 5d3907078ca7f255817a5e4f3749df19685643a4 Mon Sep 17 00:00:00 2001 From: Ezio Date: Thu, 22 Feb 2018 20:56:06 +0800 Subject: =?UTF-8?q?=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 修改文件名 --- ...124 Containers the GPL and copyleft-No reason for concern.md} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename sources/talk/{20180124 Containers, the GPL, and copyleft- No reason for concern.md => 20180124 Containers the GPL and copyleft-No reason for concern.md} (100%) diff --git a/sources/talk/20180124 Containers, the GPL, and copyleft- No reason for concern.md b/sources/talk/20180124 Containers the GPL and copyleft-No reason for concern.md similarity index 100% rename from sources/talk/20180124 Containers, the GPL, and copyleft- No reason for concern.md rename to sources/talk/20180124 Containers the GPL and copyleft-No reason for concern.md From 3cab6294c986f0e15593d70f9a830d52cba8a75e Mon Sep 17 00:00:00 2001 From: MjSeven <33125422+MjSeven@users.noreply.github.com> Date: Thu, 22 Feb 2018 21:11:19 +0800 Subject: Delete 20170918 Two Easy Ways To Install Bing Desktop Wallpaper Changer On Linux.md --- ...Bing Desktop Wallpaper Changer On Linux.md | 139 ------------------ 1 file changed, 139 deletions(-) delete mode 100644 sources/tech/20170918 Two Easy Ways To Install Bing Desktop Wallpaper Changer On Linux.md diff --git a/sources/tech/20170918 Two Easy Ways To Install Bing Desktop Wallpaper Changer On Linux.md b/sources/tech/20170918 Two Easy Ways To Install Bing Desktop Wallpaper Changer On Linux.md deleted file mode 100644 index ad09efdae3..0000000000 --- a/sources/tech/20170918 Two Easy Ways To Install Bing Desktop Wallpaper Changer On Linux.md +++ /dev/null @@ -1,139 +0,0 @@ -Translating by MjSeven - -Two Easy Ways To Install Bing Desktop Wallpaper Changer On Linux -====== -Are you bored with your Linux desktop background and want to set good-looking wallpapers but don't know where to find them? Don't worry, we are here to help you. - -We all know about the Bing search engine, but for various reasons most of us don't use it. Still, everyone likes the Bing website's background wallpapers, which are beautiful, stunning, high-resolution images. - -If you would like to have these images as your desktop wallpapers, you can do it manually, but it's very difficult to download a new image daily and then set it as the wallpaper. That's where automatic wallpaper changers come into the picture. - -[Bing Desktop Wallpaper Changer][1] will automatically download and change the desktop wallpaper to the Bing Photo of the Day. All the wallpapers are stored in `/home/[user]/Pictures/BingWallpapers/`. - -### Method-1 : Using Utkarsh Gupta Shell Script - -This small Python script automatically downloads and changes the desktop wallpaper to the Bing Photo of the Day. The script runs automatically at startup and works on GNU/Linux with GNOME or Cinnamon. There is no manual work, and the installer does everything for you. - -From version 2.0+, the installer works like a normal Linux binary command, and it requests sudo permissions for some of the tasks. - -Just download the repository archive, navigate to the project's directory, then run the shell script to install Bing Desktop Wallpaper Changer. -``` -$ wget https://github.com/UtkarshGpta/bing-desktop-wallpaper-changer/archive/master.zip -$ unzip master -$ cd bing-desktop-wallpaper-changer-master - -``` - -Run the `installer.sh` file with the `--install` option to install Bing Desktop Wallpaper Changer.
This will download and set the Bing Photo of the Day for your Linux desktop. -``` -$ ./installer.sh --install - -Bing-Desktop-Wallpaper-Changer -BDWC Installer v3_beta2 - -GitHub: -Contributors: -. -. -[sudo] password for daygeek: ** - -****** - -** -. -Where do you want to install Bing-Desktop-Wallpaper-Changer? - Entering 'opt' or leaving input blank will install in /opt/bing-desktop-wallpaper-changer - Entering 'home' will install in /home/daygeek/bing-desktop-wallpaper-changer - Install Bing-Desktop-Wallpaper-Changer in (opt/home)? : ** - -Press Enter - -** - -Should we create bing-desktop-wallpaper-changer symlink to /usr/bin/bingwallpaper so you could easily execute it? - Create symlink for easy execution, e.g. in Terminal (y/n)? : ** - -y - -** - -Should bing-desktop-wallpaper-changer needs to autostart when you log in? (Add in Startup Application) - Add in Startup Application (y/n)? : ** - -y - -** -. -. -Executing bing-desktop-wallpaper-changer... - - -Finished!! - -``` - -[![][2]![][2]][3] - -To uninstall the script: -``` -$ ./installer.sh --uninstall - -``` - -See the help page for more options for this script: -``` -$ ./installer.sh --help - -``` - -### Method-2 : Using GNOME Shell extension - -This lightweight [GNOME shell extension][4] changes your wallpaper every day to Microsoft Bing's wallpaper. It also shows a notification containing the title and the explanation of the image. - -This extension is based extensively on the NASA APOD extension by Elinvention and inspired by Bing Desktop WallpaperChanger by Utkarsh Gupta. - -### Features - - * Fetches the Bing wallpaper of the day and sets it as both the lock screen and desktop wallpaper (these are both user selectable) - * Optionally force a specific region (i.e. locale) - * Automatically selects the highest resolution (and most appropriate wallpaper) in multiple monitor setups - * Optionally cleans up the wallpaper directory after between 1 and 7 days (deleting the oldest first) - * Only attempts to download wallpapers when they have been updated - * Doesn't poll continuously - only once per day and on startup (a refresh is scheduled when Bing is due to update) - - - -### How to install - -Visit the [extensions.gnome.org][5] website and drag the toggle button to `ON`, then hit the `Install` button to install the Bing wallpaper GNOME extension. -[![][2]![][2]][6] - -After you install the Bing wallpaper GNOME extension, it automatically downloads and sets the Bing Photo of the Day for your Linux desktop, and it also shows a notification about the wallpaper. -[![][2]![][2]][7] - -The tray indicator helps you perform a few operations and also open the settings. -[![][2]![][2]][8] - -Customize the settings based on your requirements.
-[![][2]![][2]][9] - --------------------------------------------------------------------------------- - -via: https://www.2daygeek.com/bing-desktop-wallpaper-changer-linux-bing-photo-of-the-day/ - -作者:[2daygeek][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.2daygeek.com/author/2daygeek/ -[1]:https://github.com/UtkarshGpta/bing-desktop-wallpaper-changer -[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[3]:https://www.2daygeek.com/wp-content/uploads/2017/09/bing-wallpaper-changer-linux-5.png -[4]:https://github.com/neffo/bing-wallpaper-gnome-extension -[5]:https://extensions.gnome.org/extension/1262/bing-wallpaper-changer/ -[6]:https://www.2daygeek.com/wp-content/uploads/2017/09/bing-wallpaper-changer-for-linux-1.png -[7]:https://www.2daygeek.com/wp-content/uploads/2017/09/bing-wallpaper-changer-for-linux-2.png -[8]:https://www.2daygeek.com/wp-content/uploads/2017/09/bing-wallpaper-changer-for-linux-3.png -[9]:https://www.2daygeek.com/wp-content/uploads/2017/09/bing-wallpaper-changer-for-linux-4.png From 348621e2bf8bee258c8cad684286cd7773627dc2 Mon Sep 17 00:00:00 2001 From: MjSeven <33125422+MjSeven@users.noreply.github.com> Date: Thu, 22 Feb 2018 21:12:13 +0800 Subject: [PATCH 37/81] Create 20170918 Two Easy Ways To Install Bing Desktop Wallpaper Changer On Linux.md --- ...Bing Desktop Wallpaper Changer On Linux.md | 126 ++++++++++++++++++ 1 file changed, 126 insertions(+) create mode 100644 translated/tech/20170918 Two Easy Ways To Install Bing Desktop Wallpaper Changer On Linux.md diff --git a/translated/tech/20170918 Two Easy Ways To Install Bing Desktop Wallpaper Changer On Linux.md b/translated/tech/20170918 Two Easy Ways To Install Bing Desktop Wallpaper Changer On Linux.md new file mode 100644 index 0000000000..d4a0b0e3ba --- /dev/null +++ b/translated/tech/20170918 Two Easy Ways To Install Bing Desktop Wallpaper Changer On Linux.md @@ -0,0 +1,126 @@ +两种简单的方式在 Linux 安装必应桌面墙纸更换器 +====== + +你是否厌倦了 Linux 桌面背景,想要设置好看的壁纸,但是不知道在哪里可以找到?别担心,我们在这里会帮助你。 + +我们都知道必应搜索引擎但是由于一些原因很少有人使用它,每个人都喜欢必应网站的背景壁纸,它是非常漂亮和惊人的高分辨率图像。 + +如果你想使用这些图片作为你的桌面壁纸,你可以手动下载它,但是很难去每天下载一个新的图片,然后把它设置为壁纸。这就是自动壁纸改变的地方。 + +[必应桌面墙纸更换器][1]会自动下载并将桌面壁纸更改为当天的必应照片。所有的壁纸都储存在 `/home/[user]/Pictures/BingWallpapers/`。 + +### 方法 1: 使用 Utkarsh Gupta Shell 脚本 + +这个小型 python 脚本会自动下载并将桌面壁纸更改为当天的必应照片。该脚本在机器时自动运行,并在 GNU/Linux 上使用 Gnome 或 Cinnamon 工作。它不需要手动工作,安装程序会为你做所有事情。 + +从 2.0+ 版本开始,安装程序就像普通的 Linux 二进制命令一样工作,它会为某些任务请求 sudo 权限。 + +只需克隆仓库并切换到项目目录,然后运行 shell 脚本即可安装必应桌面墙纸更换器。 + + $ https://github.com/UtkarshGpta/bing-desktop-wallpaper-changer/archive/master.zip + $ unzip master + $ cd bing-desktop-wallpaper-changer-master + +运行 `installer.sh` 使用 `--install` 选项来安装必应桌面墙纸更换器。它会下载并设置必应照片为你的 Linux 桌面。 + + $ ./installer.sh --install + + Bing-Desktop-Wallpaper-Changer + BDWC Installer v3_beta2 + + GitHub: + Contributors: + . + . + [sudo] password for daygeek: ** + + ****** + + ** + . + Where do you want to install Bing-Desktop-Wallpaper-Changer? + Entering 'opt' or leaving input blank will install in /opt/bing-desktop-wallpaper-changer + Entering 'home' will install in /home/daygeek/bing-desktop-wallpaper-changer + Install Bing-Desktop-Wallpaper-Changer in (opt/home)? : ** + + Press Enter + + ** + + Should we create bing-desktop-wallpaper-changer symlink to /usr/bin/bingwallpaper so you could easily execute it? + Create symlink for easy execution, e.g. in Terminal (y/n)? 
+
+### 方法 2: 使用 GNOME Shell 扩展
+
+这个轻量级的 [GNOME shell 扩展][4]可以将你的壁纸每天更改为微软必应的壁纸,它还会显示一个包含图像标题和说明的通知。
+
+该扩展大部分基于 Elinvention 的 NASA APOD 扩展,并受到了 Utkarsh Gupta 的 Bing Desktop Wallpaper Changer 的启发。
+
+### 特点
+
+- 获取当天的必应壁纸并设置为锁屏和桌面墙纸(这两者都是用户可选的)
+- 可强制选择某个特定区域(即地区)
+- 在多显示器环境中自动选择分辨率最高、最合适的墙纸
+- 可以选择在 1 到 7 天之后清理墙纸目录(删除最旧的)
+- 只有当壁纸有更新时,才会尝试下载
+- 不会持续轮询更新:每天只检查一次,启动时也会检查一次(更新时机与必应更新壁纸的时间一致)
+
+### 如何安装
+
+访问 [extensions.gnome.org][5] 网站并将切换按钮拖到 `ON`,然后点击 `Install` 按钮安装必应壁纸 GNOME 扩展。(译者注:页面上并没有发现 ON 按钮,但是有 Download 按钮)
+
+[![][2]![][2]][6]
+
+安装必应壁纸 GNOME 扩展后,它会自动下载并为你的 Linux 桌面设置当天的必应照片,并显示关于壁纸的通知。
+
+[![][2]![][2]][7]
+
+托盘指示器可以帮助你执行一些简单操作,也可以打开设置。
+
+[![][2]![][2]][8]
+
+根据你的要求自定义设置。
+
+[![][2]![][2]][9]
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/bing-desktop-wallpaper-changer-linux-bing-photo-of-the-day/
+
+作者:[2daygeek][a]
+译者:[MjSeven](https://github.com/MjSeven)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.2daygeek.com/author/2daygeek/
+[1]:https://github.com/UtkarshGpta/bing-desktop-wallpaper-changer
+[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[3]:https://www.2daygeek.com/wp-content/uploads/2017/09/bing-wallpaper-changer-linux-5.png
+[4]:https://github.com/neffo/bing-wallpaper-gnome-extension
+[5]:https://extensions.gnome.org/extension/1262/bing-wallpaper-changer/
+[6]:https://www.2daygeek.com/wp-content/uploads/2017/09/bing-wallpaper-changer-for-linux-1.png
+[7]:https://www.2daygeek.com/wp-content/uploads/2017/09/bing-wallpaper-changer-for-linux-2.png
+[8]:https://www.2daygeek.com/wp-content/uploads/2017/09/bing-wallpaper-changer-for-linux-3.png
+[9]:https://www.2daygeek.com/wp-content/uploads/2017/09/bing-wallpaper-changer-for-linux-4.png

From ed9e49836c03709565fa5664aac9eb87ab32e568 Mon Sep 17 00:00:00 2001
From: ChenYi <31087327+cyleft@users.noreply.github.com>
Date: Thu, 22 Feb 2018 23:06:55 +0800
Subject: [PATCH 38/81] Delete 20180205 New Linux User- Try These 8 Great
 Essential Linux Apps.md

---
 ... Try These 8 Great Essential Linux Apps.md | 99 -------------------
 1 file changed, 99 deletions(-)
 delete mode 100644 sources/tech/20180205 New Linux User- Try These 8 Great Essential Linux Apps.md

diff --git a/sources/tech/20180205 New Linux User- Try These 8 Great Essential Linux Apps.md b/sources/tech/20180205 New Linux User- Try These 8 Great Essential Linux Apps.md
deleted file mode 100644
index 733e7e5788..0000000000
--- a/sources/tech/20180205 New Linux User- Try These 8 Great Essential Linux Apps.md
+++ /dev/null
@@ -1,99 +0,0 @@
-translate by cyleft
-
-New Linux User? Try These 8 Great Essential Linux Apps
-======

-![](https://www.maketecheasier.com/assets/uploads/2018/01/Linux-apps-00-Featured.png)

-When you are new to Linux, even if you are not new to computers in general, one of the problems you will face is which apps to use. With millions of Linux apps, the choice is certainly not easy. 
Below you will find eight (out of millions) essential Linux apps to get you settled in quickly. - -Most of these apps are not exclusive to Linux. If you have used Windows/Mac before, chances are you are familiar with some of them. Depending on what your needs and interests are, you might not need all these apps, but in my opinion, most or all of the apps on this list are useful for newbies who are just starting out on Linux. - -**Related** : [11 Portable Apps Every Linux User Should Use][1] - -### 1. Chromium Web Browser - -![linux-apps-01-chromium][2] - -There is hardly a user who doesn’t need a web browser. While you can find good old Firefox for almost any Linux distro, and there is also a bunch of other [Linux browsers][3], a browser you should definitely try is [Chromium][4]. It’s the open source counterpart of Google’s Chrome browser. The main advantages of Chromium is that it is secure and fast. There are also tons of add-ons for it. - -### 2. LibreOffice - -![linux-apps-02-libreoffice][5] - -[LibreOffice][6] is an open source Office suite that comes with word processor (Writer), spreadsheet (Calc), presentation (Impress), database (Base), formula editor (Math), and vector graphics and flowcharts (Draw) applications. It’s compatible with Microsoft Office documents, and there are even [LibreOffice extensions][7] if the default functionality isn’t enough for you. - -LibreOffice is definitely one essential Linux app that you should have on your Linux computer. - -### 3. GIMP - -![linux-apps-03-gimp][8] - -[GIMP][9] is a very powerful open-source image editor. It’s similar to Photoshop. With GIMP you can edit photos and create and edit raster images for the Web and print. It’s true there are simpler image editors for Linux, so if you have no idea about image processing at all, GIMP might look too complicated to you. GIMP goes way beyond simple image crop and resize – it offers layers, filters, masks, paths, etc. - -### 4. VLC Media Player - -![linux-apps-04-vlc][10] - -[VLC][11] is probably the best movie player. It’s cross-platform, so you might know it from Windows. What’s really special about VLC is that it comes with lots of codecs (not all of which are open source, though), so it will play (almost) any music or video file. - -### 5. Jitsy - -![linux-apps-05-jitsi][12] - -[Jitsy][13] is all about communication. You can use it for Google Talk, Facebook chat, Yahoo, ICQ and XMPP. It’s a multi-user tool for audio and video calls (including conference calls), as well as desktop streaming and group chats. Conversations are encrypted. With Jitsy you can also transfer files and record your calls. - -### 6. Synaptic - -![linux-apps-06-synaptic][14] - -[Synaptic][15] is an alternative app installer for Debian-based distros. It comes with some distros but not all, so if you are using a Debian-based Linux, but there is no Synaptic in it, you might want to give it a try. Synaptic is a GUI tool for adding and removing apps from your system, and typically veteran Linux users favor it over the [Software Center package manager][16] that comes with many distros as a default. - -**Related** : [10 Free Linux Productivity Apps You Haven’t Heard Of][17] - -### 7. VirtualBox - -![linux-apps-07-virtualbox][18] - -[VirtualBox][19] allows you to run a virtual machine on your computer. A virtual machine comes in handy when you want to install another Linux distro or operating system from within your current Linux distro. You can use it to run Windows apps as well. 
Performance will be slower, but if you have a powerful computer, it won’t be that bad. - -### 8. AisleRiot Solitaire - -![linux-apps-08-aisleriot][20] - -A solitaire pack is hardly an absolute necessity for a new Linux user, but since it’s so fun. If you are into solitaire games, this is a great solitaire pack. [AisleRiot][21] is one of the emblematic Linux apps, and this is for a reason – it comes with more than eighty solitaire games, including the popular Klondike, Bakers Dozen, Camelot, etc. Just be warned – it’s addictive and you might end up spending long hours playing with it! - -Depending on the distro you are using, the way to install these apps is not the same. However, most, if not all, of these apps will be available for install with a package manager for your distro, or even come pre-installed with your distro. The best thing is, you can install and try them out and easily remove them if you don’t like them. - --------------------------------------------------------------------------------- - -via: https://www.maketecheasier.com/essential-linux-apps/ - -作者:[Ada Ivanova][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.maketecheasier.com/author/adaivanoff/ -[1]:https://www.maketecheasier.com/portable-apps-for-linux/ (11 Portable Apps Every Linux User Should Use) -[2]:https://www.maketecheasier.com/assets/uploads/2018/01/Linux-apps-01-Chromium.jpg (linux-apps-01-chromium) -[3]:https://www.maketecheasier.com/linux-browsers-you-probably-havent-heard-of/ -[4]:http://www.chromium.org/ -[5]:https://www.maketecheasier.com/assets/uploads/2018/01/Linux-apps-02-LibreOffice.jpg (linux-apps-02-libreoffice) -[6]:https://www.libreoffice.org/ -[7]:https://www.maketecheasier.com/best-libreoffice-extensions/ -[8]:https://www.maketecheasier.com/assets/uploads/2018/01/Linux-apps-03-GIMP.jpg (linux-apps-03-gimp) -[9]:https://www.gimp.org/ -[10]:https://www.maketecheasier.com/assets/uploads/2018/01/Linux-apps-04-VLC.jpg (linux-apps-04-vlc) -[11]:http://www.videolan.org/ -[12]:https://www.maketecheasier.com/assets/uploads/2018/01/Linux-apps-05-Jitsi.jpg (linux-apps-05-jitsi) -[13]:https://jitsi.org/ -[14]:https://www.maketecheasier.com/assets/uploads/2018/01/Linux-apps-06-Synaptic.jpg (linux-apps-06-synaptic) -[15]:http://www.nongnu.org/synaptic/ -[16]:https://www.maketecheasier.com/are-linux-gui-software-centers-any-good/ -[17]:https://www.maketecheasier.com/free-linux-productivity-apps-you-havent-heard-of/ (10 Free Linux Productivity Apps You Haven’t Heard Of) -[18]:https://www.maketecheasier.com/assets/uploads/2018/01/Linux-apps-07-VirtualBox.jpg (linux-apps-07-virtualbox) -[19]:https://www.virtualbox.org/ -[20]:https://www.maketecheasier.com/assets/uploads/2018/01/Linux-apps-08-AisleRiot.jpg (linux-apps-08-aisleriot) -[21]:https://wiki.gnome.org/Aisleriot From b7f9947d095ef3451f1b1191736dc0f9f81c06ba Mon Sep 17 00:00:00 2001 From: ChenYi <31087327+cyleft@users.noreply.github.com> Date: Thu, 22 Feb 2018 23:08:28 +0800 Subject: [PATCH 39/81] translated by cyleft 20180205 New Linux User- Try These 8 Great Essential Linux Apps.md --- ... 
Try These 8 Great Essential Linux Apps.md | 97 +++++++++++++++++++
 1 file changed, 97 insertions(+)
 create mode 100644 translated/tech/20180205 New Linux User- Try These 8 Great Essential Linux Apps.md

diff --git a/translated/tech/20180205 New Linux User- Try These 8 Great Essential Linux Apps.md b/translated/tech/20180205 New Linux User- Try These 8 Great Essential Linux Apps.md
new file mode 100644
index 0000000000..69a0817bd2
--- /dev/null
+++ b/translated/tech/20180205 New Linux User- Try These 8 Great Essential Linux Apps.md
@@ -0,0 +1,97 @@
+Linux 新用户?来试试这 8 款重要的软件
+======
+
+![](https://www.maketecheasier.com/assets/uploads/2018/01/Linux-apps-00-Featured.png)
+
+即便您不是计算机的新手,刚接触 Linux 时,通常也会面临选择应用软件的问题。在数百万 Linux 应用程序中,做起选择来并不轻松。本文将介绍八款重要的 Linux 应用,帮助您快速上手。
+
+下面这些应用程序大多不是 Linux 独有的。如果有过使用 Windows/Mac 的经验,您很可能会熟悉其中一些软件。根据兴趣和需求,下面的程序可能不全符合您的要求,但是在我看来,清单里大多数甚至全部的软件,对于新用户开启 Linux 之旅都是有帮助的。
+
+**相关链接** : [每一个 Linux 用户都应该使用的 11 个便携软件][1]
+
+### 1. Chromium 网页浏览器
+
+![linux-apps-01-chromium][2]
+
+很难有不需要使用网页浏览器的用户。几乎所有 Linux 发行版都会附带老牌的 Firefox(火狐浏览器),此外还有许多其他的 [Linux 浏览器][3]。不过说到浏览器,强烈建议您尝试 [Chromium][4]。它是谷歌浏览器的开源版。Chromium 的主要优点是速度和安全性。它同样拥有大量的附加组件。
+
+### 2. LibreOffice
+
+![linux-apps-02-libreoffice][5]
+
+[LibreOffice][6] 是一个开源办公套件,其包括文字处理(Writer)、电子表格(Calc)、演示(Impress)、数据库(Base)、公式编辑器(Math)、矢量图和流程图(Draw)应用程序。它与 Microsoft Office 文档兼容,如果其基本功能不能满足需求,您可以使用 [LibreOffice 扩展][7]。
+
+LibreOffice 当然是 Linux 应用中至关重要的一员,如果您使用 Linux 计算机,安装它是很有必要的。
+
+### 3. GIMP(GNU Image Manipulation Program、GNU 图像处理程序)
+
+![linux-apps-03-gimp][8]
+
+[GIMP][9] 是一款非常强大的开源图片处理程序,它类似于 Photoshop。通过 GIMP,您可以编辑或是创建用于 web 或是打印的光栅图(位图)。诚然,Linux 中也有更简单的图像编辑器,所以如果您对图像处理完全没有概念,GIMP 可能会显得有些复杂。GIMP 并不单纯提供图片裁剪和大小调整,它还覆盖了图层、滤镜、遮罩、路径和其他一些高级功能。
+
+### 4. VLC 媒体播放器
+
+![linux-apps-04-vlc][10]
+
+[VLC][11] 也许就是最好的影音播放器了。它是跨平台的,所以您可能在 Windows 上也用过它。VLC 最特殊的地方是其拥有大量解码器(并不是所有的解码器都开放源代码),所以它几乎可以播放所有的影音文件。
+
+### 5. Jitsy
+
+![linux-apps-05-jitsi][12]
+
+[Jitsy][13] 专注于通讯。您可以借助它使用 Google talk、Facebook chat、Yahoo、ICQ 和 XMPP。它是支持音视频通话(包括电话会议)、桌面串流和群组聊天的多用户工具。会话会被加密。Jitsy 还能帮助您传输文件或录制通话。
+
+### 6. Synaptic
+
+![linux-apps-06-synaptic][14]
+
+[Synaptic][15] 是基于 Debian 的发行版的另一款应用程序安装器。它随部分发行版一起提供,但并非全部;如果您的基于 Debian 的发行版没有预装它,不妨试一试。Synaptic 是一款用于添加或移除系统应用的 GUI 工具,相对于许多发行版默认安装的[软件中心包管理器][16],经验丰富的 Linux 用户通常更青睐 Synaptic。
+
+**相关链接** : [10 款您没听说过的 Linux 生产力应用程序][17]
+
+### 7. VirtualBox
+
+![linux-apps-07-virtualbox][18]
+
+[VirtualBox][19] 能支持您在计算机上运行虚拟机。当您想在当前 Linux 发行版上安装其他发行版或操作系统时,使用虚拟机会方便许多。您同样可以通过它运行 Windows 应用程序,性能可能会稍弱,但是如果您有一台强大的计算机,就不会那么糟。
+
+### 8. AisleRiot Solitaire(纸牌游戏)
+
+![linux-apps-08-aisleriot][20]
+
+对于 Linux 新用户来说,纸牌游戏并不是刚需,但它实在太有趣了。如果您喜欢纸牌游戏,这就是一个极好的纸牌游戏合集。[AisleRiot][21] 是 Linux 标志性的应用程序之一,这是有原因的:它涵盖超过八十种纸牌游戏,包括流行的 Klondike、Bakers Dozen、Camelot 等等。不过要当心,它很容易让人上瘾,您可能会花很长时间沉迷于此!
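+
+在动手之前,这里给出一个批量安装的示例(仅作演示,假设使用 Ubuntu/Debian 系发行版;包名可能因发行版和版本而异,Jitsi 等个别软件可能还需要额外的软件源):
+
+```
+# 更新软件源索引,然后一次性安装文中提到的大部分应用
+sudo apt update
+sudo apt install chromium-browser libreoffice gimp vlc synaptic virtualbox aisleriot
+```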
+ +根据您所使用的发行版,这些软件会有不同的安装方法。但是大多数都可以通过您使用的发行版中的包管理器安装使用,甚至它们可能会预装在您的发行版上。安装并且尝试它们想必是最好的,如果不和您的胃口,您可以轻松地删除它们。 + +-------------------------------------------------------------------------------- + +via: https://www.maketecheasier.com/essential-linux-apps/ + +作者:[Ada Ivanova][a] +译者:[CYLeft](https://github.com/CYLeft) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.maketecheasier.com/author/adaivanoff/ +[1]:https://www.maketecheasier.com/portable-apps-for-linux/ (11 Portable Apps Every Linux User Should Use) +[2]:https://www.maketecheasier.com/assets/uploads/2018/01/Linux-apps-01-Chromium.jpg (linux-apps-01-chromium) +[3]:https://www.maketecheasier.com/linux-browsers-you-probably-havent-heard-of/ +[4]:http://www.chromium.org/ +[5]:https://www.maketecheasier.com/assets/uploads/2018/01/Linux-apps-02-LibreOffice.jpg (linux-apps-02-libreoffice) +[6]:https://www.libreoffice.org/ +[7]:https://www.maketecheasier.com/best-libreoffice-extensions/ +[8]:https://www.maketecheasier.com/assets/uploads/2018/01/Linux-apps-03-GIMP.jpg (linux-apps-03-gimp) +[9]:https://www.gimp.org/ +[10]:https://www.maketecheasier.com/assets/uploads/2018/01/Linux-apps-04-VLC.jpg (linux-apps-04-vlc) +[11]:http://www.videolan.org/ +[12]:https://www.maketecheasier.com/assets/uploads/2018/01/Linux-apps-05-Jitsi.jpg (linux-apps-05-jitsi) +[13]:https://jitsi.org/ +[14]:https://www.maketecheasier.com/assets/uploads/2018/01/Linux-apps-06-Synaptic.jpg (linux-apps-06-synaptic) +[15]:http://www.nongnu.org/synaptic/ +[16]:https://www.maketecheasier.com/are-linux-gui-software-centers-any-good/ +[17]:https://www.maketecheasier.com/free-linux-productivity-apps-you-havent-heard-of/ (10 Free Linux Productivity Apps You Haven’t Heard Of) +[18]:https://www.maketecheasier.com/assets/uploads/2018/01/Linux-apps-07-VirtualBox.jpg (linux-apps-07-virtualbox) +[19]:https://www.virtualbox.org/ +[20]:https://www.maketecheasier.com/assets/uploads/2018/01/Linux-apps-08-AisleRiot.jpg (linux-apps-08-aisleriot) +[21]:https://wiki.gnome.org/Aisleriot From ee77b641e0c1c6f0c91d1053974af4cd778b4bef Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 23 Feb 2018 00:37:40 +0800 Subject: [PATCH 40/81] PRF:20090627 30 Linux System Monitoring Tools Every SysAdmin Should Know.md @jessie-pang --- ...toring Tools Every SysAdmin Should Know.md | 267 +++++++++++------- 1 file changed, 164 insertions(+), 103 deletions(-) diff --git a/translated/tech/20090627 30 Linux System Monitoring Tools Every SysAdmin Should Know.md b/translated/tech/20090627 30 Linux System Monitoring Tools Every SysAdmin Should Know.md index bd57e8a1a3..53ba82bba6 100644 --- a/translated/tech/20090627 30 Linux System Monitoring Tools Every SysAdmin Should Know.md +++ b/translated/tech/20090627 30 Linux System Monitoring Tools Every SysAdmin Should Know.md @@ -8,14 +8,13 @@ 3. CPU 和内存瓶颈 4. 网络瓶颈 - ### 1. 
top - 进程活动监控命令 -top 命令显示 Linux 的进程。它提供了一个系统的实时动态视图,即实际的进程活动。默认情况下,它显示在服务器上运行的 CPU 占用率最高的任务,并且每五秒更新一次。 +`top` 命令会显示 Linux 的进程。它提供了一个运行中系统的实时动态视图,即实际的进程活动。默认情况下,它显示在服务器上运行的 CPU 占用率最高的任务,并且每五秒更新一次。 ![](https://www.cyberciti.biz/tips/wp-content/uploads/2009/06/top-Linux-monitoring-command.jpg) -图 01:Linux top 命令 +*图 01:Linux top 命令* #### top 的常用快捷键 @@ -23,22 +22,24 @@ top 命令显示 Linux 的进程。它提供了一个系统的实时动态视图 | 快捷键 | 用法 | | ---- | -------------------------------------- | -| t | 是否显示总结信息 | -| m | 是否显示内存信息 | -| A | 根据各种系统资源的利用率对进程进行排序,有助于快速识别系统中性能不佳的任务。 | -| f | 进入 top 的交互式配置屏幕,用于根据特定的需求而设置 top 的显示。 | -| o | 交互式地调整 top 每一列的顺序。 | -| r | 调整优先级(renice) | -| k | 杀掉进程(kill) | -| z | 开启或关闭彩色或黑白模式 | +| `t` | 是否显示汇总信息 | +| `m` | 是否显示内存信息 | +| `A` | 根据各种系统资源的利用率对进程进行排序,有助于快速识别系统中性能不佳的任务。 | +| `f` | 进入 `top` 的交互式配置屏幕,用于根据特定的需求而设置 `top` 的显示。 | +| `o` | 交互式地调整 `top` 每一列的顺序。 | +| `r` | 调整优先级(`renice`) | +| `k` | 杀掉进程(`kill`) | +| `z` | 切换彩色或黑白模式 | 相关链接:[Linux 如何查看 CPU 利用率?][1] ### 2. vmstat - 虚拟内存统计 -vmstat 命令报告有关进程、内存、分页、块 IO、陷阱和 cpu 活动等信息。 +`vmstat` 命令报告有关进程、内存、分页、块 IO、中断和 CPU 活动等信息。 -`# vmstat 3` +``` +# vmstat 3 +``` 输出示例: @@ -56,11 +57,15 @@ procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu----- #### 显示 Slab 缓存的利用率 -`# vmstat -m` +``` +# vmstat -m +``` #### 获取有关活动和非活动内存页面的信息 -`# vmstat -a` +``` +# vmstat -a +``` 相关链接:[如何查看 Linux 的资源利用率从而找到系统瓶颈?][2] @@ -84,9 +89,11 @@ root pts/1 10.1.3.145 17:43 0.00s 0.03s 0.00s w ### 4. uptime - Linux 系统运行了多久 -uptime 命令可以用来查看服务器运行了多长时间:当前时间、已运行的时间、当前登录的用户连接数,以及过去 1 分钟、5 分钟和 15 分钟的系统负载平均值。 +`uptime` 命令可以用来查看服务器运行了多长时间:当前时间、已运行的时间、当前登录的用户连接数,以及过去 1 分钟、5 分钟和 15 分钟的系统负载平均值。 -`# uptime` +``` +# uptime +``` 输出示例: @@ -94,13 +101,15 @@ uptime 命令可以用来查看服务器运行了多长时间:当前时间、 18:02:41 up 41 days, 23:42, 1 user, load average: 0.00, 0.00, 0.00 ``` -1 可以被认为是最佳负载值。不同的系统会有不同的负载:对于单核 CPU 系统来说,1 到 3 的负载值是可以接受的;而对于 SMP(对称多处理)系统来说,负载可以是 6 到 10。 +`1` 可以被认为是最佳负载值。不同的系统会有不同的负载:对于单核 CPU 系统来说,`1` 到 `3` 的负载值是可以接受的;而对于 SMP(对称多处理)系统来说,负载可以是 `6` 到 `10`。 ### 5. ps - 显示系统进程 -ps 命令显示当前运行的进程。要显示所有的进程,请使用 -A 或 -e 选项: +`ps` 命令显示当前运行的进程。要显示所有的进程,请使用 `-A` 或 `-e` 选项: -`# ps -A` +``` +# ps -A +``` 输出示例: @@ -132,23 +141,31 @@ ps 命令显示当前运行的进程。要显示所有的进程,请使用 -A 55704 pts/1 00:00:00 ps ``` -ps 与 top 类似,但它提供了更多的信息。 +`ps` 与 `top` 类似,但它提供了更多的信息。 #### 显示长输出格式 -`# ps -Al` +``` +# ps -Al +``` 显示完整输出格式(它将显示传递给进程的命令行参数): -`# ps -AlF` +``` +# ps -AlF +``` #### 显示线程(轻量级进程(LWP)和线程的数量(NLWP)) -`# ps -AlFH` +``` +# ps -AlFH +``` #### 在进程后显示线程 -`# ps -AlLm` +``` +# ps -AlLm +``` #### 显示系统上所有的进程 @@ -162,7 +179,7 @@ ps 与 top 类似,但它提供了更多的信息。 ``` # ps -ejH # ps axjf -# [pstree][4] +# pstree ``` #### 显示进程的安全信息 @@ -192,11 +209,15 @@ ps 与 top 类似,但它提供了更多的信息。 ``` # ps -C lighttpd -o pid= ``` + 或 + ``` # pgrep lighttpd ``` + 或 + ``` # pgrep -u vivek php-cgi ``` @@ -215,15 +236,19 @@ ps 与 top 类似,但它提供了更多的信息。 #### 找出占用 CPU 资源最多的前 10 个进程 -`# ps -auxf | sort -nr -k 3 | head -10` +``` +# ps -auxf | sort -nr -k 3 | head -10 +``` 相关链接:[显示 Linux 上所有运行的进程][5] ### 6. free - 内存使用情况 -free 命令显示了系统的可用和已用的物理内存及交换内存的总量,以及内核用到的缓存空间。 +`free` 命令显示了系统的可用和已用的物理内存及交换内存的总量,以及内核用到的缓存空间。 -`# free ` +``` +# free +``` 输出示例: @@ -242,9 +267,11 @@ Swap: 1052248 0 1052248 ### 7. iostat - CPU 平均负载和磁盘活动 -iostat 命令用于汇报 CPU 的使用情况,以及设备、分区和网络文件系统(NFS)的 IO 统计信息。 +`iostat` 命令用于汇报 CPU 的使用情况,以及设备、分区和网络文件系统(NFS)的 IO 统计信息。 -`# iostat ` +``` +# iostat +``` 输出示例: @@ -265,17 +292,21 @@ sda3 0.00 0.00 0.00 1615 0 ### 8. 
sar - 监控、收集和汇报系统活动 -sar 命令用于收集、汇报和保存系统活动信息。要查看网络统计,请输入: +`sar` 命令用于收集、汇报和保存系统活动信息。要查看网络统计,请输入: -`# sar -n DEV | more` +``` +# sar -n DEV | more +``` 显示 24 日的网络统计: `# sar -n DEV -f /var/log/sa/sa24 | more` -您还可以使用 sar 显示实时使用情况: +您还可以使用 `sar` 显示实时使用情况: -`# sar 4 5` +``` +# sar 4 5 +``` 输出示例: @@ -295,12 +326,13 @@ Average: all 2.02 0.00 0.27 0.01 0.00 97.70 + [如何将 Linux 系统资源利用率的数据写入文件中][53] + [如何使用 kSar 创建 sar 性能图以找出系统瓶颈][54] - ### 9. mpstat - 监控多处理器的使用情况 -mpstat 命令显示每个可用处理器的使用情况,编号从 0 开始。命令 mpstat -P ALL 显示了每个处理器的平均使用率: +`mpstat` 命令显示每个可用处理器的使用情况,编号从 0 开始。命令 `mpstat -P ALL` 显示了每个处理器的平均使用率: -`# mpstat -P ALL` +``` +# mpstat -P ALL +``` 输出示例: @@ -323,13 +355,17 @@ Linux 2.6.18-128.1.14.el5 (www03.nixcraft.in) 06/26/2009 ### 10. pmap - 监控进程的内存使用情况 -pmap 命令用以显示进程的内存映射,使用此命令可以查找内存瓶颈。 +`pmap` 命令用以显示进程的内存映射,使用此命令可以查找内存瓶颈。 -`# pmap -d PID` +``` +# pmap -d PID +``` 显示 PID 为 47394 的进程的内存信息,请输入: -`# pmap -d 47394` +``` +# pmap -d 47394 +``` 输出示例: @@ -362,16 +398,15 @@ mapped: 933712K writeable/private: 4304K shared: 768000K 最后一行非常重要: - * **mapped: 933712K** 映射到文件的内存量 - * **writeable/private: 4304K** 私有地址空间 - * **shared: 768000K** 此进程与其他进程共享的地址空间 - + * `mapped: 933712K` 映射到文件的内存量 + * `writeable/private: 4304K` 私有地址空间 + * `shared: 768000K` 此进程与其他进程共享的地址空间 相关链接:[使用 pmap 命令查看 Linux 上单个程序或进程使用的内存][8] ### 11. netstat - Linux 网络统计监控工具 -netstat 命令显示网络连接、路由表、接口统计、伪装连接和多播连接等信息。 +`netstat` 命令显示网络连接、路由表、接口统计、伪装连接和多播连接等信息。 ``` # netstat -tulpn @@ -380,27 +415,32 @@ netstat 命令显示网络连接、路由表、接口统计、伪装连接和多 ### 12. ss - 网络统计 -ss 命令用于获取套接字统计信息。它可以显示类似于 netstat 的信息。不过 netstat 几乎要过时了,ss 命令更具优势。要显示所有 TCP 或 UDP 套接字: +`ss` 命令用于获取套接字统计信息。它可以显示类似于 `netstat` 的信息。不过 `netstat` 几乎要过时了,`ss` 命令更具优势。要显示所有 TCP 或 UDP 套接字: -`# ss -t -a` +``` +# ss -t -a +``` 或 -`# ss -u -a ` +``` +# ss -u -a +``` -显示所有带有 SELinux 安全上下文(Security Context)的 TCP 套接字: +显示所有带有 SELinux 安全上下文Security Context的 TCP 套接字: -`# ss -t -a -Z ` +``` +# ss -t -a -Z +``` -请参阅以下关于 ss 和 netstat 命令的资料: +请参阅以下关于 `ss` 和 `netstat` 命令的资料: + [ss:显示 Linux TCP / UDP 网络套接字信息][56] + [使用 netstat 命令获取有关特定 IP 地址连接的详细信息][57] - ### 13. iptraf - 获取实时网络统计信息 -iptraf 命令是一个基于 ncurses 的交互式 IP 网络监控工具。它可以生成多种网络统计信息,包括 TCP 信息、UDP 计数、ICMP 和 OSPF 信息、以太网负载信息、节点统计信息、IP 校验错误等。它以简单的格式提供了以下信息: +`iptraf` 命令是一个基于 ncurses 的交互式 IP 网络监控工具。它可以生成多种网络统计信息,包括 TCP 信息、UDP 计数、ICMP 和 OSPF 信息、以太网负载信息、节点统计信息、IP 校验错误等。它以简单的格式提供了以下信息: * 基于 TCP 连接的网络流量统计 * 基于网络接口的 IP 流量统计 @@ -410,41 +450,53 @@ iptraf 命令是一个基于 ncurses 的交互式 IP 网络监控工具。它可 ![Fig.02: General interface statistics: IP traffic statistics by network interface ][9] -图 02:常规接口统计:基于网络接口的 IP 流量统计 +*图 02:常规接口统计:基于网络接口的 IP 流量统计* ![Fig.03 Network traffic statistics by TCP connection][10] -图 03:基于 TCP 连接的网络流量统计 +*图 03:基于 TCP 连接的网络流量统计* 相关链接:[在 Centos / RHEL / Fedora Linux 上安装 IPTraf 以获取网络统计信息][11] ### 14. 
tcpdump - 详细的网络流量分析 -tcpdump 命令是简单的分析网络通信的命令。您需要充分了解 TCP/IP 协议才便于使用此工具。例如,要显示有关 DNS 的流量信息,请输入: +`tcpdump` 命令是简单的分析网络通信的命令。您需要充分了解 TCP/IP 协议才便于使用此工具。例如,要显示有关 DNS 的流量信息,请输入: -`# tcpdump -i eth1 'udp port 53'` +``` +# tcpdump -i eth1 'udp port 53' +``` 查看所有去往和来自端口 80 的 IPv4 HTTP 数据包,仅打印真正包含数据的包,而不是像 SYN、FIN 和仅含 ACK 这类的数据包,请输入: -`# tcpdump 'tcp port 80 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)'` +``` +# tcpdump 'tcp port 80 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)' +``` 显示所有目标地址为 202.54.1.5 的 FTP 会话,请输入: -`# tcpdump -i eth1 'dst 202.54.1.5 and (port 21 or 20'` +``` +# tcpdump -i eth1 'dst 202.54.1.5 and (port 21 or 20' +``` 打印所有目标地址为 192.168.1.5 的 HTTP 会话: -`# tcpdump -ni eth0 'dst 192.168.1.5 and tcp and port http'` +``` +# tcpdump -ni eth0 'dst 192.168.1.5 and tcp and port http' +``` 使用 [wireshark][12] 查看文件的详细内容,请输入: -`# tcpdump -n -i eth1 -s 0 -w output.txt src or dst port 80` +``` +# tcpdump -n -i eth1 -s 0 -w output.txt src or dst port 80 +``` ### 15. iotop - I/O 监控 -iotop 命令利用 Linux 内核监控 I/O 使用情况,它按进程或线程的顺序显示 I/O 使用情况。 +`iotop` 命令利用 Linux 内核监控 I/O 使用情况,它按进程或线程的顺序显示 I/O 使用情况。 -`$ sudo iotop` +``` +$ sudo iotop +``` 输出示例: @@ -454,9 +506,11 @@ iotop 命令利用 Linux 内核监控 I/O 使用情况,它按进程或线程 ### 16. htop - 交互式的进程查看器 -htop 是一款免费并开源的基于 ncurses 的 Linux 进程查看器。它比 top 命令更简单易用。您无需使用 PID、无需离开 htop 界面,便可以杀掉进程或调整其调度优先级。 +`htop` 是一款免费并开源的基于 ncurses 的 Linux 进程查看器。它比 `top` 命令更简单易用。您无需使用 PID、无需离开 `htop` 界面,便可以杀掉进程或调整其调度优先级。 -`$ htop` +``` +$ htop +``` 输出示例: @@ -464,40 +518,40 @@ htop 是一款免费并开源的基于 ncurses 的 Linux 进程查看器。它 相关链接:[CentOS / RHEL:安装 htop——交互式文本模式进程查看器][58] - ### 17. atop - 高级版系统与进程监控工具 -atop 是一个非常强大的交互式 Linux 系统负载监控器,它从性能的角度显示最关键的硬件资源信息。您可以快速查看 CPU、内存、磁盘和网络性能。它还可以从进程的级别显示哪些进程造成了相关 CPU 和内存的负载。 +`atop` 是一个非常强大的交互式 Linux 系统负载监控器,它从性能的角度显示最关键的硬件资源信息。您可以快速查看 CPU、内存、磁盘和网络性能。它还可以从进程的级别显示哪些进程造成了相关 CPU 和内存的负载。 -`$ atop` +``` +$ atop +``` ![atop Command Line Tools to Monitor Linux Performance][16] 相关链接:[CentOS / RHEL:安装 atop 工具——高级系统和进程监控器][59] - ### 18. ac 和 lastcomm -您一定需要监控 Linux 服务器上的进程和登录活动吧。psacct 或 acct 软件包中包含了多个用于监控进程活动的工具,包括: +您一定需要监控 Linux 服务器上的进程和登录活动吧。`psacct` 或 `acct` 软件包中包含了多个用于监控进程活动的工具,包括: - - 1. ac 命令:显示有关用户连接时间的统计信息 + 1. `ac` 命令:显示有关用户连接时间的统计信息 2. [lastcomm 命令][17]:显示已执行过的命令 - 3. accton 命令:打开或关闭进程账号记录功能 - 4. sa 命令:进程账号记录信息的摘要 + 3. `accton` 命令:打开或关闭进程账号记录功能 + 4. `sa` 命令:进程账号记录信息的摘要 相关链接:[如何对 Linux 系统的活动做详细的跟踪记录][18] ### 19. monit - 进程监控器 -Monit 是一个免费且开源的进程监控软件,它可以自动重启停掉的服务。您也可以使用 Systemd、daemontools 或其他类似工具来达到同样的目的。[本教程演示如何在 Debian 或 Ubuntu Linux 上安装和配置 monit 作为进程监控器][19]。 +`monit` 是一个免费且开源的进程监控软件,它可以自动重启停掉的服务。您也可以使用 Systemd、daemontools 或其他类似工具来达到同样的目的。[本教程演示如何在 Debian 或 Ubuntu Linux 上安装和配置 monit 作为进程监控器][19]。 - -### 20. nethogs - 找出占用带宽的进程 +### 20. NetHogs - 找出占用带宽的进程 NetHogs 是一个轻便的网络监控工具,它按照进程名称(如 Firefox、wget 等)对带宽进行分组。如果网络流量突然爆发,启动 NetHogs,您将看到哪个进程(PID)导致了带宽激增。 -`$ sudo nethogs` +``` +$ sudo nethogs +``` ![nethogs linux monitoring tools open source][20] @@ -505,31 +559,37 @@ NetHogs 是一个轻便的网络监控工具,它按照进程名称(如 Firef ### 21. iftop - 显示主机上网络接口的带宽使用情况 -iftop 命令监听指定接口(如 eth0)上的网络通信情况。[它显示了一对主机的带宽使用情况][22]。 +`iftop` 命令监听指定接口(如 eth0)上的网络通信情况。[它显示了一对主机的带宽使用情况][22]。 -`$ sudo iftop` +``` +$ sudo iftop +``` ![iftop in action][23] ### 22. 
vnstat - 基于控制台的网络流量监控工具 -vnstat 是一个简单易用的基于控制台的网络流量监视器,它为指定网络接口保留每小时、每天和每月网络流量日志。 +`vnstat` 是一个简单易用的基于控制台的网络流量监视器,它为指定网络接口保留每小时、每天和每月网络流量日志。 -`$ vnstat ` +``` +$ vnstat +``` ![vnstat linux network traffic monitor][25] 相关链接: + + [为 ADSL 或专用远程 Linux 服务器保留日常网络流量日志][60] + [CentOS / RHEL:安装 vnStat 网络流量监控器以保留日常网络流量日志][61] + [CentOS / RHEL:使用 PHP 网页前端接口查看 Vnstat 图表][62] - ### 23. nmon - Linux 系统管理员的调优和基准测量工具 -nmon 是 Linux 系统管理员用于性能调优的利器,它在命令行显示 CPU、内存、网络、磁盘、文件系统、NFS、消耗资源最多的进程和分区信息。 +`nmon` 是 Linux 系统管理员用于性能调优的利器,它在命令行显示 CPU、内存、网络、磁盘、文件系统、NFS、消耗资源最多的进程和分区信息。 -`$ nmon` +``` +$ nmon +``` ![nmon command][26] @@ -537,9 +597,11 @@ nmon 是 Linux 系统管理员用于性能调优的利器,它在命令行显 ### 24. glances - 密切关注 Linux 系统 -glances 是一款开源的跨平台监控工具。它在小小的屏幕上提供了大量的信息,还可以用作客户端-服务器架构。 +`glances` 是一款开源的跨平台监控工具。它在小小的屏幕上提供了大量的信息,还可以工作于客户端-服务器模式下。 -`$ glances` +``` +$ glances +``` ![Glances][28] @@ -547,11 +609,11 @@ glances 是一款开源的跨平台监控工具。它在小小的屏幕上提供 ### 25. strace - 查看系统调用 -想要跟踪 Linux 系统的调用和信号吗?试试 strace 命令吧。它对于调试网页服务器和其他服务器问题很有用。了解如何利用其 [追踪进程][30] 并查看它在做什么。 +想要跟踪 Linux 系统的调用和信号吗?试试 `strace` 命令吧。它对于调试网页服务器和其他服务器问题很有用。了解如何利用其 [追踪进程][30] 并查看它在做什么。 -### 26. /proc/ 文件系统 - 各种内核信息 +### 26. /proc 文件系统 - 各种内核信息 -/proc 文件系统提供了不同硬件设备和 Linux 内核的详细信息。更多详细信息,请参阅 [Linux 内核 /proc][31] 文档。常见的 /proc 例子: +`/proc` 文件系统提供了不同硬件设备和 Linux 内核的详细信息。更多详细信息,请参阅 [Linux 内核 /proc][31] 文档。常见的 `/proc` 例子: ``` # cat /proc/cpuinfo @@ -562,23 +624,23 @@ glances 是一款开源的跨平台监控工具。它在小小的屏幕上提供 ### 27. Nagios - Linux 服务器和网络监控 -[Nagios][32] 是一款普遍使用的开源系统和网络监控软件。您可以轻松地监控所有主机、网络设备和服务,当状态异常和恢复正常时它都会发出警报通知。[FAN][33] 是“全自动 Nagios”的缩写。FAN 的目标是提供包含由 Nagios 社区提供的大多数工具包的 Nagios 安装。FAN 提供了标准 ISO 格式的 CDRom 镜像,使安装变得更加容易。除此之外,为了改善 Nagios 的用户体验,发行版还包含了大量的工具。 +[Nagios][32] 是一款普遍使用的开源系统和网络监控软件。您可以轻松地监控所有主机、网络设备和服务,当状态异常和恢复正常时它都会发出警报通知。[FAN][33] 是“全自动 Nagios”的缩写。FAN 的目标是提供包含由 Nagios 社区提供的大多数工具包的 Nagios 安装。FAN 提供了标准 ISO 格式的 CD-Rom 镜像,使安装变得更加容易。除此之外,为了改善 Nagios 的用户体验,发行版还包含了大量的工具。 ### 28. Cacti - 基于 Web 的 Linux 监控工具 Cacti 是一个完整的网络图形化解决方案,旨在充分利用 RRDTool 的数据存储和图形功能。Cacti 提供了快速轮询器、高级图形模板、多种数据采集方法和用户管理功能。这些功能被包装在一个直观易用的界面中,确保可以实现从局域网到拥有数百台设备的复杂网络上的安装。它可以提供有关网络、CPU、内存、登录用户、Apache、DNS 服务器等的数据。了解如何在 CentOS / RHEL 下 [安装和配置 Cacti 网络图形化工具][34]。 -### 29. KDE System Guard - 实时系统报告和图形化显示 +### 29. KDE 系统监控器 - 实时系统报告和图形化显示 -KSysguard 是 KDE 桌面的网络化系统监控程序。这个工具可以通过 ssh 会话运行。它提供了许多功能,比如监控本地和远程主机的客户端-服务器架构。前端图形界面使用传感器来检索信息。传感器可以返回简单的值或更复杂的信息,如表格。每种类型的信息都有一个或多个显示界面,并被组织成工作表的形式,这些工作表可以分别保存和加载。所以,KSysguard 不仅是一个简单的任务管理器,还是一个控制大型服务器平台的强大工具。 +KSysguard 是 KDE 桌面的网络化系统监控程序。这个工具可以通过 ssh 会话运行。它提供了许多功能,比如可以监控本地和远程主机的客户端-服务器模式。前端图形界面使用传感器来检索信息。传感器可以返回简单的值或更复杂的信息,如表格。每种类型的信息都有一个或多个显示界面,并被组织成工作表的形式,这些工作表可以分别保存和加载。所以,KSysguard 不仅是一个简单的任务管理器,还是一个控制大型服务器平台的强大工具。 ![Fig.05 KDE System Guard][35] -图 05:KDE System Guard {图片来源:维基百科} +*图 05:KDE System Guard {图片来源:维基百科}* 详细用法,请参阅 [KSysguard 手册][36]。 -### 30. Gnome 系统监控器 +### 30. 
GNOME 系统监控器 系统监控程序能够显示系统基本信息,并监控系统进程、系统资源使用情况和文件系统。您还可以用其修改系统行为。虽然不如 KDE System Guard 强大,但它提供的基本信息对新用户还是有用的: @@ -598,7 +660,7 @@ KSysguard 是 KDE 桌面的网络化系统监控程序。这个工具可以通 ![Fig.06 The Gnome System Monitor application][37] -图 06:Gnome 系统监控程序 +*图 06:Gnome 系统监控程序* ### 福利:其他工具 @@ -606,16 +668,15 @@ KSysguard 是 KDE 桌面的网络化系统监控程序。这个工具可以通 * [nmap][38] - 扫描服务器的开放端口 * [lsof][39] - 列出打开的文件和网络连接等 - * [ntop][40] 网页工具 - ntop 是查看网络使用情况的最佳工具,与 top 命令之于进程的方式类似,即网络流量监控工具。您可以查看网络状态和 UDP、TCP、DNS、HTTP 等协议的流量分发。 - * [Conky][41] - X Window 系统的另一个很好的监控工具。它具有很高的可配置性,能够监视许多系统变量,包括 CPU 状态、内存、交换空间、磁盘存储、温度、进程、网络接口、电池、系统消息和电子邮件等。 + * [ntop][40] 基于网页的工具 - `ntop` 是查看网络使用情况的最佳工具,与 `top` 命令之于进程的方式类似,即网络流量监控工具。您可以查看网络状态和 UDP、TCP、DNS、HTTP 等协议的流量分发。 + * [Conky][41] - X Window 系统下的另一个很好的监控工具。它具有很高的可配置性,能够监视许多系统变量,包括 CPU 状态、内存、交换空间、磁盘存储、温度、进程、网络接口、电池、系统消息和电子邮件等。 * [GKrellM][42] - 它可以用来监控 CPU 状态、主内存、硬盘、网络接口、本地和远程邮箱及其他信息。 - * [mtr][43] - mtr 将 traceroute 和 ping 程序的功能结合在一个网络诊断工具中。 + * [mtr][43] - `mtr` 将 `traceroute` 和 `ping` 程序的功能结合在一个网络诊断工具中。 * [vtop][44] - 图形化活动监控终端 - 如果您有其他推荐的系统监控工具,欢迎在评论区分享。 -#### 关于作者 +### 关于作者 作者 Vivek Gite 是 nixCraft 的创建者,也是经验丰富的系统管理员,以及 Linux 操作系统和 Unix shell 脚本的培训师。他的客户遍布全球,行业涉及 IT、教育、国防航天研究以及非营利部门等。您可以在 [Twitter][45]、[Facebook][46] 和 [Google+][47] 上关注他。 @@ -625,7 +686,7 @@ via: https://www.cyberciti.biz/tips/top-linux-monitoring-tools.html 作者:[Vivek Gite][a] 译者:[jessie-pang](https://github.com/jessie-pang) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 71b01f7f5caa989f9dbab18b9f41fb25633b41ca Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 23 Feb 2018 00:39:53 +0800 Subject: [PATCH 41/81] PUB:20090627 30 Linux System Monitoring Tools Every SysAdmin Should Know.md @jessie-pang https://linux.cn/article-9373-1.html --- ...30 Linux System Monitoring Tools Every SysAdmin Should Know.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20090627 30 Linux System Monitoring Tools Every SysAdmin Should Know.md (100%) diff --git a/translated/tech/20090627 30 Linux System Monitoring Tools Every SysAdmin Should Know.md b/published/20090627 30 Linux System Monitoring Tools Every SysAdmin Should Know.md similarity index 100% rename from translated/tech/20090627 30 Linux System Monitoring Tools Every SysAdmin Should Know.md rename to published/20090627 30 Linux System Monitoring Tools Every SysAdmin Should Know.md From 6386bd0e0daefa6743685ab0d94d769b6c3901d2 Mon Sep 17 00:00:00 2001 From: geekpi Date: Fri, 23 Feb 2018 08:49:26 +0800 Subject: [PATCH 42/81] translated --- ...hon Hello World and String Manipulation.md | 133 ------------------ ...hon Hello World and String Manipulation.md | 132 +++++++++++++++++ 2 files changed, 132 insertions(+), 133 deletions(-) delete mode 100644 sources/tech/20180204 Python Hello World and String Manipulation.md create mode 100644 translated/tech/20180204 Python Hello World and String Manipulation.md diff --git a/sources/tech/20180204 Python Hello World and String Manipulation.md b/sources/tech/20180204 Python Hello World and String Manipulation.md deleted file mode 100644 index 7a27b8b174..0000000000 --- a/sources/tech/20180204 Python Hello World and String Manipulation.md +++ /dev/null @@ -1,133 +0,0 @@ -translating---geekpi - -Python Hello World and String Manipulation -====== - -![](https://process.filestackapi.com/cache=expiry:max/resize=width:700/compress/eadkmsrBTcWSyCeA4qti) - -Before starting, I should mention that the 
[code][1] used in this blog post and in the [video][2] below is available on my github. - -With that, let’s get started! If you get lost, I recommend opening the [video][3] below in a separate tab. - -[Hello World and String Manipulation Video using Python][2] - -#### ** Get Started (Prerequisites) - -Install Anaconda (Python) on your operating system. You can either download anaconda from the [official site][4] and install on your own or you can follow these anaconda installation tutorials below. - -Install Anaconda on Windows: [Link][5] - -Install Anaconda on Mac: [Link][6] - -Install Anaconda on Ubuntu (Linux): [Link][7] - -#### Open a Jupyter Notebook - -Open your terminal (Mac) or command line and type the following ([see 1:16 in the video to follow along][8]) to open a Jupyter Notebook: -``` -jupyter notebook - -``` - -#### Print Statements/Hello World - -Type the following into a cell in Jupyter and type **shift + enter** to execute code. -``` -# This is a one line comment -print('Hello World!') - -``` - -![][9] -Output of printing ‘Hello World!’ - -#### Strings and String Manipulation - -Strings are a special type of a python class. As objects, in a class, you can call methods on string objects using the .methodName() notation. The string class is available by default in python, so you do not need an import statement to use the object interface to strings. -``` -# Create a variable -# Variables are used to store information to be referenced -# and manipulated in a computer program. -firstVariable = 'Hello World' -print(firstVariable) - -``` - -![][9] -Output of printing the variable firstVariable -``` -# Explore what various string methods -print(firstVariable.lower()) -print(firstVariable.upper()) -print(firstVariable.title()) - -``` - -![][9] -Output of using .lower(), .upper() , and title() methods -``` -# Use the split method to convert your string into a list -print(firstVariable.split(' ')) - -``` - -![][9] -Output of using the split method (in this case, split on space) -``` -# You can add strings together. -a = "Fizz" + "Buzz" -print(a) - -``` - -![][9] -string concatenation - -#### Look up what Methods Do - -For new programmers, they often ask how you know what each method does. Python provides two ways to do this. - - 1. (works in and out of Jupyter Notebook) Use **help** to lookup what each method does. - - - -![][9] -Look up what each method does - - 2. (Jupyter Notebook exclusive) You can also look up what a method does by having a question mark after a method. - - -``` -# To look up what each method does in jupyter (doesnt work outside of jupyter) -firstVariable.lower? - -``` - -![][9] -Look up what each method does in Jupyter - -#### Closing Remarks - -Please let me know if you have any questions either here or in the comments section of the [youtube video][2]. The code in the post is also available on my [github][1]. Part 2 of the tutorial series is [Simple Math][10]. 
-
--------------------------------------------------------------------------------
-
-via: https://www.codementor.io/mgalarny/python-hello-world-and-string-manipulation-gdgwd8ymp
-
-作者:[Michael][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.codementor.io/mgalarny
-[1]:https://github.com/mGalarnyk/Python_Tutorials/blob/master/Python_Basics/Intro/Python3Basics_Part1.ipynb
-[2]:https://www.youtube.com/watch?v=JqGjkNzzU4s
-[3]:https://www.youtube.com/watch?v=kApPBm1YsqU
-[4]:https://www.continuum.io/downloads
-[5]:https://medium.com/@GalarnykMichael/install-python-on-windows-anaconda-c63c7c3d1444
-[6]:https://medium.com/@GalarnykMichael/install-python-on-mac-anaconda-ccd9f2014072
-[7]:https://medium.com/@GalarnykMichael/install-python-on-ubuntu-anaconda-65623042cb5a
-[8]:https://youtu.be/JqGjkNzzU4s?t=1m16s
-[9]:data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==
-[10]:https://medium.com/@GalarnykMichael/python-basics-2-simple-math-4ac7cc928738

diff --git a/translated/tech/20180204 Python Hello World and String Manipulation.md b/translated/tech/20180204 Python Hello World and String Manipulation.md
new file mode 100644
index 0000000000..3c5bb2ac04
--- /dev/null
+++ b/translated/tech/20180204 Python Hello World and String Manipulation.md
@@ -0,0 +1,132 @@
+Python 中的 Hello World 和字符串操作
+======
+
+![](https://process.filestackapi.com/cache=expiry:max/resize=width:700/compress/eadkmsrBTcWSyCeA4qti)
+
+开始之前,说一下本文中的[代码][1]和[视频][2]可以在我的 github 上找到。
+
+那么,让我们开始吧!如果你看迷糊了,我建议你在单独的选项卡中打开下面的[视频][3]。
+
+[Python 的 Hello World 和字符串操作视频][2]
+
+#### 开始(先决条件)
+
+在你的操作系统上安装 Anaconda(Python)。你可以从[官方网站][4]下载 anaconda 并自行安装,或者按照以下这些 anaconda 安装教程进行安装。
+
+在 Windows 上安装 Anaconda:[链接][5]
+
+在 Mac 上安装 Anaconda:[链接][6]
+
+在 Ubuntu (Linux) 上安装 Anaconda:[链接][7]
+
+#### 打开一个 Jupyter Notebook
+
+打开你的终端(Mac)或命令行,并输入以下内容([请参考视频中的 1:16 处][8])来打开 Jupyter Notebook:
+```
+jupyter notebook
+
+```
+
+#### 打印语句/Hello World
+
+在 Jupyter 的单元格中输入以下内容并按下 **shift + 回车**来执行代码。
+```
+# This is a one line comment
+print('Hello World!')
+
+```
+
+![][9]
+打印输出 “Hello World!”
+
+#### 字符串和字符串操作
+
+字符串是 python 类的一种特殊类型。作为对象,在类中,你可以使用 .methodName() 来调用字符串对象的方法。字符串类在 python 中默认是可用的,所以你不需要 import 语句就能使用字符串对象接口。
+```
+# Create a variable
+# Variables are used to store information to be referenced
+# and manipulated in a computer program.
+firstVariable = 'Hello World'
+print(firstVariable)
+
+```
+
+![][9]
+打印变量 firstVariable 的输出
+```
+# Explore what various string methods do
+print(firstVariable.lower())
+print(firstVariable.upper())
+print(firstVariable.title())
+
+```
+
+![][9]
+使用 .lower()、.upper() 和 .title() 方法的输出
+```
+# Use the split method to convert your string into a list
+print(firstVariable.split(' '))
+
+```
+
+![][9]
+使用 split 方法的输出(此例中以空格分隔)
+```
+# You can add strings together.
+a = "Fizz" + "Buzz"
+print(a)
+
+```
+
+![][9]
+字符串连接
+
+#### 查询方法的功能
+
+新手程序员经常会问:如何知道每个方法的功能?Python 提供了两种查询方式。
+
+ 1.(无论是否在 Jupyter Notebook 中都可用)使用 **help** 查询每个方法的功能。
+
+
+
+![][9]
+ 查询每个方法的功能
+
+ 2.(Jupyter Notebook 专用)你也可以通过在方法之后添加问号来查询方法的功能。
+
+
+```
+# To look up what each method does in jupyter (doesn't work outside of jupyter)
+firstVariable.lower?
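+
+# 同样的方式也适用于其他字符串方法,例如(补充示例):
+firstVariable.split?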
+ +``` + +![][9] +在 Jupyter 中查找每个方法的功能 + +#### 结束语 + +如果你对本文或在[ YouTube 视频][2]的评论部分有任何疑问,请告诉我们。文章中的代码也可以在我的 [github][1] 上找到。本系列教程的第 2 部分是[简单的数学操作][10]。 + +-------------------------------------------------------------------------------- + +via: https://www.codementor.io/mgalarny/python-hello-world-and-string-manipulation-gdgwd8ymp + +作者:[Michael][a] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.codementor.io/mgalarny +[1]:https://github.com/mGalarnyk/Python_Tutorials/blob/master/Python_Basics/Intro/Python3Basics_Part1.ipynb +[2]:https://www.youtube.com/watch?v=JqGjkNzzU4s +[3]:https://www.youtube.com/watch?v=kApPBm1YsqU +[4]:https://www.continuum.io/downloads +[5]:https://medium.com/@GalarnykMichael/install-python-on-windows-anaconda-c63c7c3d1444 +[6]:https://medium.com/@GalarnykMichael/install-python-on-mac-anaconda-ccd9f2014072 +[7]:https://medium.com/@GalarnykMichael/install-python-on-ubuntu-anaconda-65623042cb5a +[8]:https://youtu.be/JqGjkNzzU4s?t=1m16s +[9]:data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw== +[10]:https://medium.com/@GalarnykMichael/python-basics-2-simple-math-4ac7cc928738 From 86eadea7acbc60663b7d92f4abf27775663d2558 Mon Sep 17 00:00:00 2001 From: geekpi Date: Fri, 23 Feb 2018 08:53:46 +0800 Subject: [PATCH 43/81] translating --- ...Versatile Free Software for Partition Imaging and Cloning.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180116 A Versatile Free Software for Partition Imaging and Cloning.md b/sources/tech/20180116 A Versatile Free Software for Partition Imaging and Cloning.md index d5cf47b45e..592bce9548 100644 --- a/sources/tech/20180116 A Versatile Free Software for Partition Imaging and Cloning.md +++ b/sources/tech/20180116 A Versatile Free Software for Partition Imaging and Cloning.md @@ -1,3 +1,5 @@ +translating---geekpi + Partclone – A Versatile Free Software for Partition Imaging and Cloning ====== From a05d0781b3ff71ff921bc31558b4ebd90633f278 Mon Sep 17 00:00:00 2001 From: Yixun Xu Date: Thu, 22 Feb 2018 20:44:01 -0500 Subject: [PATCH 44/81] Translated: Advanced Dnsmasq Tips and Tricks --- ...180208 Advanced Dnsmasq Tips and Tricks.md | 156 ------------------ ...180208 Advanced Dnsmasq Tips and Tricks.md | 155 +++++++++++++++++ 2 files changed, 155 insertions(+), 156 deletions(-) delete mode 100644 sources/tech/20180208 Advanced Dnsmasq Tips and Tricks.md create mode 100644 translated/tech/20180208 Advanced Dnsmasq Tips and Tricks.md diff --git a/sources/tech/20180208 Advanced Dnsmasq Tips and Tricks.md b/sources/tech/20180208 Advanced Dnsmasq Tips and Tricks.md deleted file mode 100644 index 47a6cf16e6..0000000000 --- a/sources/tech/20180208 Advanced Dnsmasq Tips and Tricks.md +++ /dev/null @@ -1,156 +0,0 @@ -yixunx translating -Advanced Dnsmasq Tips and Tricks -====== - -!](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/banner_3.25.47_pm.png?itok=2YaDe86d) - -Many people know and love Dnsmasq and rely on it for their local name services. Today we look at advanced configuration file management, how to test your configurations, some basic security, DNS wildcards, speedy DNS configuration, and some other tips and tricks. Next week, we'll continue with a detailed look at how to configure DNS and DHCP. 
- -### Testing Configurations - -When you're testing new configurations, you should run Dnsmasq from the command line, rather than as a daemon. This example starts it without launching the daemon, prints command output, and logs all activity: -``` -# dnsmasq --no-daemon --log-queries -dnsmasq: started, version 2.75 cachesize 150 -dnsmasq: compile time options: IPv6 GNU-getopt - DBus i18n IDN DHCP DHCPv6 no-Lua TFTP conntrack - ipset auth DNSSEC loop-detect inotify -dnsmasq: reading /etc/resolv.conf -dnsmasq: using nameserver 192.168.0.1#53 -dnsmasq: read /etc/hosts - 9 addresses - -``` - -You can see tons of useful information in this small example, including version, compiled options, system name service files, and its listening address. Ctrl+c stops it. By default, Dnsmasq does not have its own log file, so entries are dumped into multiple locations in `/var/log`. You can use good old `grep` to find Dnsmasq log entries. This example searches `/var/log` recursively, prints the line numbers after the filenames, and excludes `/var/log/dist-upgrade`: -``` -# grep -ir --exclude-dir=dist-upgrade dnsmasq /var/log/ - -``` - -Note the fun grep gotcha with `--exclude-dir=`: Don't specify the full path, but just the directory name. - -You can give Dnsmasq its own logfile with this command-line option, using whatever file you want: -``` -# dnsmasq --no-daemon --log-queries --log-facility=/var/log/dnsmasq.log - -``` - -Or enter it in your Dnsmasq configuration file as `log-facility=/var/log/dnsmasq.log`. - -### Configuration Files - -Dnsmasq is configured in `/etc/dnsmasq.conf`. Your Linux distribution may also use `/etc/default/dnsmasq`, `/etc/dnsmasq.d/`, and `/etc/dnsmasq.d-available/`. (No, there cannot be a universal method, as that is against the will of the Linux Cat Herd Ruling Cabal.) You have a fair bit of flexibility to organize your Dnsmasq configuration in a way that pleases you. - -`/etc/dnsmasq.conf` is the grandmother as well as the boss. Dnsmasq reads it first at startup. `/etc/dnsmasq.conf` can call other configuration files with the `conf-file=` option, for example `conf-file=/etc/dnsmasqextrastuff.conf`, and directories with the `conf-dir=` option, e.g. `conf-dir=/etc/dnsmasq.d`. - -Whenever you make a change in a configuration file, you must restart Dnsmasq. - -You may include or exclude configuration files by extension. The asterisk means include, and the absence of the asterisk means exclude: -``` -conf-dir=/etc/dnsmasq.d/,*.conf, *.foo -conf-dir=/etc/dnsmasq.d,.old, .bak, .tmp - -``` - -You may store your host configurations in multiple files with the `--addn-hosts=` option. - -Dnsmasq includes a syntax checker: -``` -$ dnsmasq --test -dnsmasq: syntax check OK. - -``` - -### Useful Configurations - -Always include these lines: -``` -domain-needed -bogus-priv - -``` - -These prevent packets with malformed domain names and packets with private IP addresses from leaving your network. - -This limits your name services exclusively to Dnsmasq, and it will not use `/etc/resolv.conf` or any other system name service files: -``` -no-resolv - -``` - -Reference other name servers. The first example is for a local private domain. The second and third examples are OpenDNS public servers: -``` -server=/fooxample.com/192.168.0.1 -server=208.67.222.222 -server=208.67.220.220 - -``` - -Or restrict just local domains while allowing external lookups for other domains. 
These are answered only from `/etc/hosts` or DHCP: -``` -local=/mehxample.com/ -local=/fooxample.com/ - -``` - -Restrict which network interfaces Dnsmasq listens to: -``` -interface=eth0 -interface=wlan1 - -``` - -Dnsmasq, by default, reads and uses `/etc/hosts`. This is a fabulously fast way to configure a lot of hosts, and the `/etc/hosts` file only has to exist on the same computer as Dnsmasq. You can make the process even faster by entering only the hostnames in `/etc/hosts`, and use Dnsmasq to add the domain. `/etc/hosts` looks like this: -``` -127.0.0.1 localhost -192.168.0.1 host2 -192.168.0.2 host3 -192.168.0.3 host4 - -``` - -Then add these lines to `dnsmasq.conf`, using your own domain, of course: -``` -expand-hosts -domain=mehxample.com - -``` - -Dnsmasq will automatically expand the hostnames to fully qualified domain names, for example, host2 to host2.mehxample.com. - -### DNS Wildcards - -In general, DNS wildcards are not a good practice because they invite abuse. But there are times when they are useful, such as inside the nice protected confines of your LAN. For example, Kubernetes clusters are considerably easier to manage with wildcard DNS, unless you enjoy making DNS entries for your hundreds or thousands of applications. Suppose your Kubernetes domain is mehxample.com; in Dnsmasq a wildcard that resolves all requests to mehxample.com looks like this: -``` -address=/mehxample.com/192.168.0.5 - -``` - -The address to use in this case is the public IP address for your cluster. This answers requests for hosts and subdomains in mehxample.com, except for any that are already configured in DHCP or `/etc/hosts`. - -Next week, we'll go into more detail on managing DNS and DHCP, including different options for different subnets, and providing authoritative name services. 
- -### Additional Resources - -* [DNS Spoofing with Dnsmasq][1] - -* [Dnsmasq For Easy LAN Name Services][2] - -* [Dnsmasq][3] - - - --------------------------------------------------------------------------------- - -via: https://www.linux.com/learn/intro-to-linux/2018/2/advanced-dnsmasq-tips-and-tricks - -作者:[CARLA SCHRODER][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/users/cschroder -[1]:https://www.linux.com/learn/intro-to-linux/2017/7/dns-spoofing-dnsmasq -[2]:https://www.linux.com/learn/dnsmasq-easy-lan-name-services -[3]:http://www.thekelleys.org.uk/dnsmasq/doc.html diff --git a/translated/tech/20180208 Advanced Dnsmasq Tips and Tricks.md b/translated/tech/20180208 Advanced Dnsmasq Tips and Tricks.md new file mode 100644 index 0000000000..1c4798dfa0 --- /dev/null +++ b/translated/tech/20180208 Advanced Dnsmasq Tips and Tricks.md @@ -0,0 +1,155 @@ +Dnsmasq 进阶技巧 +====== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/banner_3.25.47_pm.png?itok=2YaDe86d) + +许多人熟知和热爱 Dnsmasq,并在他们的本地域名服务上使用它。今天我们将介绍进阶配置文件管理、如何测试你的配置、一些基础的安全知识、DNS 泛域名、快速 DNS 配置,以及其他一些技巧与窍门。下个星期我们将继续详细讲解如何配置 DNS 和 DHCP。 + +### 测试配置 + +当你测试新的配置的时候,你应该从命令行运行 Dnsmasq,而不是使用守护进程。下面的例子演示了如何不用守护进程运行它,同时显示指令的输出并保留运行日志: +``` +# dnsmasq --no-daemon --log-queries +dnsmasq: started, version 2.75 cachesize 150 +dnsmasq: compile time options: IPv6 GNU-getopt + DBus i18n IDN DHCP DHCPv6 no-Lua TFTP conntrack + ipset auth DNSSEC loop-detect inotify +dnsmasq: reading /etc/resolv.conf +dnsmasq: using nameserver 192.168.0.1#53 +dnsmasq: read /etc/hosts - 9 addresses + +``` + +在这个小例子中你能看到许多有用的信息,包括版本、编译参数、系统域名服务文件、以及它的监听地址。可以使用 Ctrl+C 停止进程。在默认情况下,Dnsmasq 没有自己的日志文件,所以日志会被记录到 `/var/log` 目录下的多个地方。你可以使用经典的 `grep` 来找到 Dnsmasq 的日志文件。下面这条指令会递归式地搜索 `/var/log`、在每个匹配的文件名之后显示匹配的行数,并忽略 `/var/log/dist-upgrade` 里的内容: +``` +# grep -ir --exclude-dir=dist-upgrade dnsmasq /var/log/ + +``` + +使用 `grep --exclude-dir=` 时有一个有趣的小陷阱需要注意:不要使用完整路径,而应该只写目录名称。 + +你可以使用如下的命令行参数来让 Dnsmasq 使用你指定的文件作为它专属的日志文件: +``` +# dnsmasq --no-daemon --log-queries --log-facility=/var/log/dnsmasq.log + +``` + +或者在你的 Dnsmasq 配置文件中加上 `log-facility=/var/log/dnsmasq.log`。 + +### 配置文件 + +Dnsmasq 的配置文件位于 `/etc/dnsmasq.conf`。你的 Linux 发行版也可能会使用 `/etc/default/dnsmasq`、`/etc/dnsmasq.d/`,或者 `/etc/dnsmasq.d-available/`(不,我们不能统一标准,因为这违反了 Linux 七嘴八舌秘密议会的旨意)。你有很多自由来随意安置你的配置文件。 + +`/etc/dnsmasq.conf` 是德高望重的老大。Dnsmasq 在启动时会最先读取它。`/etc/dnsmasq.conf` 可以使用 `conf-file=` 选项来调用其他的配置文件,例如 `conf-file=/etc/dnsmasqextrastuff.conf`,或使用 `conf-dir=` 选项来调用目录下的所有文件,例如 `conf-dir=/etc/dnsmasq.d`。 + +每当你对配置文件进行了修改,你都必须重启 Dnsmasq。 + +你可以根据扩展名来包含或忽略配置文件。星号表示包含,不加星号表示忽略: +``` +conf-dir=/etc/dnsmasq.d/,*.conf, *.foo +conf-dir=/etc/dnsmasq.d,.old, .bak, .tmp + +``` + +你可以用 `--addn-hosts=` 选项来把你的主机配置分布在多个文件中。 + +Dnsmasq 包含了一个语法检查器: +``` +$ dnsmasq --test +dnsmasq: syntax check OK. 
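+
+# 补充示例(假设存在这个配置文件):用 --conf-file(即 -C)可以只检查某一个指定的配置文件
+$ dnsmasq --test --conf-file=/etc/dnsmasq.d/extra.conf
+dnsmasq: syntax check OK.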
+ +``` + +### 实用配置 + +永远加入这几行: +``` +domain-needed +bogus-priv + +``` + +它们可以避免含有格式出错的域名或私人 IP 地址的数据包离开你的网络。 + +让你的域名服务只使用 Dnsmasq,而不去使用 `/etc/resolv.conf` 或任何其他的域名服务文件: +``` +no-resolv + +``` + +使用其他的域名服务器。第一个例子是只对于某一个域名使用不同的域名服务器。第二个和第三个例子是 OpenDNS 公用服务器: +``` +server=/fooxample.com/192.168.0.1 +server=208.67.222.222 +server=208.67.220.220 + +``` + +你也可以将某些域名限制为只能本地解析,但不影响其他域名。这些被限制的域名只能从 `/etc/hosts` 或 DHCP 解析: +``` +local=/mehxample.com/ +local=/fooxample.com/ + +``` + +限制 Dnsmasq 监听的网络接口: +``` +interface=eth0 +interface=wlan1 + +``` + +Dnsmasq 在默认设置下会读取并使用 `/etc/hosts`。这是一个又快又好的配置大量域名的方法,并且 `/etc/hosts` 只需要和 Dnsmasq 在同一台电脑上。你还可以让这个过程再快一些,可以在 `/etc/hosts` 文件中只写主机名,然后用 Dnsmasq 来添加域名。`/etc/hosts` 看上去是这样的: +``` +127.0.0.1 localhost +192.168.0.1 host2 +192.168.0.2 host3 +192.168.0.3 host4 + +``` + +然后把这几行写入 `dnsmasq.conf`(当然,要换成你自己的域名): +``` +expand-hosts +domain=mehxample.com + +``` + +Dnsmasq 会自动把这些主机名扩展为完整的域名,比如 host2 会变为 host2.mehxample.com。 + +### DNS 泛域名 + +一般来说,使用 DNS 泛域名不是一个好习惯,因为它们太容易被误用了。但它们有时会很有用,比如在你的局域网的严密保护之下的时候。一个例子是使用 DNS 泛域名会让 Kubernetes 集群变得容易管理许多,除非你喜欢给你成百上千的应用写 DNS 记录。假设你的 Kubernetes 域名是 mehxample.com,那么下面这行配置可以让 Dnsmasq 解析所有对 mehxample.com 的请求: +``` +address=/mehxample.com/192.168.0.5 + +``` + +这里使用的地址是你的集群的公网 IP 地址。这会响应对 mehxample.com 的所有主机名和子域名的请求,除非请求的目标地址已经在 DHCP 或者 `/etc/hosts` 中配置过。 + +下星期我们将探索更多的管理 DNS 和 DHCP 的细节,包括对不同的子网络使用不同的设置,以及提供权威域名服务器。 + +### 更多参考 + +* [使用 Dnsmasq 进行 DNS 欺骗][1] + +* [使用 Dnsmasq 配置简单的局域网域名服务][2] + +* [Dnsmasq][3] + + + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/learn/intro-to-linux/2018/2/advanced-dnsmasq-tips-and-tricks + +作者:[CARLA SCHRODER][a] +译者:[yixunx](https://github.com/yixunx) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linux.com/users/cschroder +[1]:https://www.linux.com/learn/intro-to-linux/2017/7/dns-spoofing-dnsmasq +[2]:https://www.linux.com/learn/dnsmasq-easy-lan-name-services +[3]:http://www.thekelleys.org.uk/dnsmasq/doc.html From a6a16c708cb2f7acf2a944fea7d7e5880e99b7ba Mon Sep 17 00:00:00 2001 From: qhwdw Date: Fri, 23 Feb 2018 10:31:15 +0800 Subject: [PATCH 45/81] Translated by qhwdw --- ...171221 Mail transfer agent (MTA) basics.md | 269 ------------------ ...171221 Mail transfer agent (MTA) basics.md | 267 +++++++++++++++++ 2 files changed, 267 insertions(+), 269 deletions(-) delete mode 100644 sources/tech/20171221 Mail transfer agent (MTA) basics.md create mode 100644 translated/tech/20171221 Mail transfer agent (MTA) basics.md diff --git a/sources/tech/20171221 Mail transfer agent (MTA) basics.md b/sources/tech/20171221 Mail transfer agent (MTA) basics.md deleted file mode 100644 index f731f919c8..0000000000 --- a/sources/tech/20171221 Mail transfer agent (MTA) basics.md +++ /dev/null @@ -1,269 +0,0 @@ -Translating by qhwdw -Mail transfer agent (MTA) basics -====== - -## Overview - -In this tutorial, learn to: - - * Use the `mail` command. - * Create mail aliases. - * Configure email forwarding. - * Understand common mail transfer agent (MTA) programs such as postfix, sendmail, qmail, and exim. - - - -## Controlling where your mail goes - -Email on a Linux system is delivered using MTAs. Your MTA delivers mail to other users on your system and MTAs communicate with each other to deliver mail all over a group of systems or all over the world. 
- -### Prerequisites - -To get the most from the tutorials in this series, you need a basic knowledge of Linux and a working Linux system on which you can practice the commands covered in this tutorial. You should be familiar with GNU and UNIX commands. Sometimes different versions of a program format output differently, so your results might not always look exactly like the listings shown here. - -In this tutorial, I use Ubuntu 14.04 LTS and sendmail 8.14.4 for the sendmail examples. - -## Mail transfer - -Mail transfer agents such as sendmail deliver mail between users and between systems. Most Internet mail uses the Simple Mail Transfer Protocol (SMTP), but local mail may be transferred through files or sockets among other possibilities. Mail is a store and forward operation, so mail is stored in some kind of file or database until a user collects it or a receiving system or communication link is available. Configuring and securing an MTA is quite a complex task, most of which is beyond the scope of this introductory tutorial. - -## The mail command - -If you use SMTP email, you probably know that there are many, many mail clients that you can use, including `mail`, `mutt`, `alpine`, `notmuch`, and a host of other console and graphical mail clients. The `mail` command is an old standby that can be used to script the sending of mail as well as receive and manage your incoming mail. - -You can use `mail` interactively to send messages by passing a list of addressees, or with no arguments you can use it to look at your incoming mail. Listing 1 shows how to send a message to user steve and user pat on your system with a carbon copy to user bob. When prompted for the cc:user and the subject, enter the body and complete the message by pressing **Ctrl+D** (hold down the Ctrl key and press D). - -##### Listing 1. Using `mail` interactively to send mail -``` -ian@attic4-u14:~$ mail steve,pat -Cc: bob -Subject: Test message 1 -This is a test message - -Ian -``` - -If all is well, your mail is sent. If there is an error, you will see an error message. For example, if you typed an invalid name as a recipient, the mail is not sent. Note that in this example, all users are on your local system and therefore all must be valid users. - -You can also send mail non-interactively using the command line. Listing 2 shows how to send a small message to users steve and pat. This capability is particularly useful in scripts. Different versions of the `mail` command are available in different packages. Some support a `-c` option for cc:, but the version I am using here does not, so I specify only the to: addresses. - -Listing 2. Using `mail` non-interactively -``` -ian@attic4-u14:~$ mail -t steve,pat -s "Test message 2" <<< "Another test.\n\nIan" -``` - -If you use `mail` with no options you will see a list of your incoming mail as shown in Listing 3. You see that user steve has the two messages I sent above, plus an earlier one from me and a later one from user bob. All the mail is marked as 'N' for new mail. - -Listing 3. Using `mail` for incoming mail -``` -steve@attic4-u14:~$ mail -"/var/mail/steve": 4 messages 4 new ->N 1 Ian Shields Tue Dec 12 21:03 16/704 test message - N 2 Ian Shields Tue Dec 12 21:04 18/701 Test message 1 - N 3 Ian Shields Tue Dec 12 21:23 15/661 Test message 2 - N 4 Bob C Tue Dec 12 21:45 17/653 How about lunch tomorrow? -? -``` - -The currently selected message is shown with a '>', which is message number 1 in Listing 3. 
If you press **Enter** , the first page of the next unread message will be displayed. Press the **Space bar** to page through the message. When you finish reading the message and return to the '?' prompt, press **Enter** again to view the next message, and so on. At any '?' prompt you can type 'h' to see the list of message headers again. The ones you have read will now show 'R' in the status as shown in Listing 4. - -Listing 4. Using 'h' to display mail headers -``` -? h - R 1 Ian Shields Tue Dec 12 21:03 16/704 test message - R 2 Ian Shields Tue Dec 12 21:04 18/701 Test message 1 ->R 3 Ian Shields Tue Dec 12 21:23 15/661 Test message 2 - N 4 Bob C Tue Dec 12 21:45 17/653 How about lunch tomorrow? -? -``` - -Here Steve has read the three messages from Ian but has not read the message from Bob. You can select individual messages by number, and you can also delete messages that you don't want by typing 'd', or '3d' to delete the third message. If you type 'q' you will quit the `mail` command. Messages that you have read will be transferred to the mbox file in your home directory and the unread messages will remain in your inbox, by default in /var/mail/$(id -un). See Listing 5. - -Listing 5. Using 'q' to quit `mail` -``` -? h - R 1 Ian Shields Tue Dec 12 21:03 16/704 test message - R 2 Ian Shields Tue Dec 12 21:04 18/701 Test message 1 ->R 3 Ian Shields Tue Dec 12 21:23 15/661 Test message 2 - N 4 Bob C Tue Dec 12 21:45 17/653 How about lunch tomorrow? -? q -Saved 3 messages in /home/steve/mbox -Held 1 message in /var/mail/steve -You have mail in /var/mail/steve -``` - -If you type 'x' to exit instead of 'q' to quit, your mailbox will be left unchanged. Because this is on the /var file system, your system administrator may allow mail to be kept there only for a limited time. To reread or otherwise process mail that has been saved to your local mbox file, use the `-f` option to specify the file you want to read. For example `mail -f mbox`. - -## Mail aliases - -In the previous section you saw how mail can be sent to various users on a system. You can use a fully qualified name, such as ian@myexampledomain.com to send mail to a user on another system. - -Sometimes you might want all the mail for a user to go to some other place. For example, you may have a server farm and want all the root mail to go to a central system administrator. Or you may want to create a mailing list where mail goes to several people. To do this, you use aliases that allow you to define one or more destinations for a given user name. The destinations may be other user mail boxes, files, pipes, or commands that do further processing. You do this by specifying the aliases in /etc/mail/aliases or /etc/aliases. Depending on your system, you may find that one of these is a symbolic link to the other, or you may have only one of them. You need root authority to change the aliases file. - -The general form of an alias is -name: addr_1, addr_2, addr_3, ... -where the name is a local user name to alias or an alias and the addr_1, addr_2, ... are one or more aliases. Aliases can be a local user, a local file name, another alias, a command, an include file, or an external address. - -So how does sendmail distinguish the aliases (the addr-N values)? - - * A local user name is a text string that matches the name of a user on this system. Technically this means it can be found using the `getpwnam` call . - * A local file name is a full path and file name that starts with '/'. It must be writeable by `sendmail`. 
Messages are appended to the file.
 * A command starts with the pipe symbol (|). Messages are sent to the command using standard input.
 * An include file alias starts with :include: and specifies a path and file name. The aliases in the file are added to the aliases for this name.
 * An external address is an email address such as john@somewhere.com.

You should find an example file, such as /usr/share/sendmail/examples/db/aliases, that was installed with your sendmail package. It contains some recommended aliases for postmaster, MAILER-DAEMON, abuse, and spam. In Listing 6, I have combined entries from the example file on my Ubuntu 14.04 LTS system with some rather artificial examples that illustrate several of the possibilities.

Listing 6. Somewhat artificial /etc/mail/aliases example

```
ian@attic4-u14:~$ cat /etc/mail/aliases
# First include some default system aliases from
# /usr/share/sendmail/examples/db/aliases

#
# Mail aliases for sendmail
#
# You must run newaliases(1) after making changes to this file.
#

# Required aliases
postmaster: root
MAILER-DAEMON: postmaster

# Common aliases
abuse: postmaster
spam: postmaster

# Other aliases

# Send steve's mail to bob and pat instead
steve: bob,pat

# Send pat's mail to a file in her home directory and also to her inbox.
# Finally send it to a command that will make another copy.
pat: /home/pat/accumulated-mail,
 \pat,
 |/home/pat/makemailcopy.sh

# Mailing list for system administrators
sysadmins: :include: /etc/aliases-sysadmins
```

Note that pat is both an alias and a user of the system. Alias expansion is recursive, so if an alias is also a name, then it will be expanded. Sendmail does not send mail twice to a given user, so if you just put 'pat' as an alias for 'pat', then it would be ignored since sendmail had already found and processed 'pat'. To avoid this problem, you prefix an alias name with a '\' to indicate that it is a name not subject to further aliasing. This way, pat's mail can be sent to her normal inbox as well as to the file and command.

Lines in the aliases file that start with '#' are comments and are ignored. Lines that start with blanks are treated as continuation lines.

The include file /etc/aliases-sysadmins is shown in Listing 7.

Listing 7. The /etc/aliases-sysadmins include file
```
ian@attic4-u14:~$ cat /etc/aliases-sysadmins

# Mailing list for system administrators
bob,pat
```

## The newaliases command

Most configuration files used by sendmail are compiled into database files. This is also true for mail aliases. You use the `newaliases` command to compile your /etc/mail/aliases and any included files to /etc/mail/aliases.db. Note that `newaliases` is equivalent to `sendmail -bi`. Listing 8 shows an example.

Listing 8. Rebuild the database for the mail aliases file
```
ian@attic4-u14:~$ sudo newaliases
/etc/mail/aliases: 7 aliases, longest 62 bytes, 184 bytes total
ian@attic4-u14:~$ ls -l /etc/mail/aliases*
lrwxrwxrwx 1 root smmsp 10 Dec 8 15:48 /etc/mail/aliases -> ../aliases
-rw-r----- 1 smmta smmsp 12288 Dec 13 23:18 /etc/mail/aliases.db
```

## Examples of using aliases

Listing 9 shows a simple shell script that is used as a command in my alias example.

Listing 9. The makemailcopy.sh script
```
ian@attic4-u14:~$ cat ~pat/makemailcopy.sh
#!/bin/bash

# Note: Target file ~/mail-copy must be writeable by sendmail!
cat >> ~pat/mail-copy
```

Note the script's reminder: sendmail must be able to write the target file. One way to prepare it is shown in the sketch below.
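This setup example is an addition for illustration, not part of the original tutorial: it follows the group 'mail' convention described in the notes after Listing 10, and your distribution's group name or required permissions may differ.

```
ian@attic4-u14:~$ sudo touch ~pat/mail-copy
ian@attic4-u14:~$ sudo chown pat:mail ~pat/mail-copy
ian@attic4-u14:~$ sudo chmod 660 ~pat/mail-copy
```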
Listing 10 shows the files that are updated when you put all this to the test.

Listing 10. Files updated when mail is sent to the sysadmins alias
```
ian@attic4-u14:~$ date
Wed Dec 13 22:54:22 EST 2017
ian@attic4-u14:~$ mail -t sysadmins -s "sysadmin test 1" <<< "Testing mail"
ian@attic4-u14:~$ ls -lrt $(find /var/mail ~pat -type f -mmin -3 2>/dev/null )
-rw-rw---- 1 pat mail 2046 Dec 13 22:54 /home/pat/mail-copy
-rw------- 1 pat mail 13240 Dec 13 22:54 /var/mail/pat
-rw-rw---- 1 pat mail 9442 Dec 13 22:54 /home/pat/accumulated-mail
-rw-rw---- 1 bob mail 12522 Dec 13 22:54 /var/mail/bob
```

Some points to note:

 * There is a user 'mail' with group name 'mail' that is used by sendmail.
 * User mail is stored by sendmail in /var/mail, which is also the home directory of user 'mail'. The inbox for user 'ian' defaults to /var/mail/ian.
 * If you want sendmail to write files in a user directory, the file must be writeable by sendmail. Rather than making it world writeable, it is customary to make it group writeable and make the group 'mail'. You may need a system administrator to do this for you.

## Using a .forward file to forward mail

The aliases file must be managed by a system administrator. Individual users can enable forwarding of their own mail using a .forward file in their own home directory. You can put anything in your .forward file that is allowed on the right side of the aliases file. The file contains plain text and does not need to be compiled. When mail is destined for you, sendmail checks for a .forward file in your home directory and processes the entries the same way it processes aliases.

## Mail queues and the mailq command

Linux mail handling uses a store-and-forward model. You have already seen that your incoming mail is stored in a file in /var/mail until you read it. Outgoing mail is also stored until a receiving server connection is available. You use the `mailq` command to see what mail is queued. Listing 11 shows an example of mail being sent to an external user, ian@attic4-c6, and the result of running the `mailq` command. In this case, there is currently no active link to attic4-c6, so the mail will remain queued until a link becomes active.

Listing 11. Using the `mailq` command
```
ian@attic4-u14:~$ mail -t ian@attic4-c6 -s "External mail" <<< "Testing external mail queues"
ian@attic4-u14:~$ mailq
MSP Queue status...
/var/spool/mqueue-client is empty
 Total requests: 0
MTA Queue status...
 /var/spool/mqueue (1 request)
-----Q-ID----- --Size-- -----Q-Time----- ------------Sender/Recipient-----------
vBE4mdE7025908* 29 Wed Dec 13 23:48

 Total requests: 1
```

## Other mail transfer agents

In response to security issues with sendmail, several other mail transfer agents were developed during the 1990s. Postfix is perhaps the most popular, but qmail and exim are also widely used.

Postfix started life at IBM Research as an alternative to sendmail. It attempts to be fast, easy to administer, and secure. The outside looks somewhat like sendmail, but the inside is completely different.

Qmail is a secure, reliable, efficient, simple message transfer agent developed by Dan Bernstein. However, the core qmail package has not been updated for many years. Qmail and several other packages have now been collected into IndiMail.

Exim is another MTA developed at the University of Cambridge. Originally, the name stood for EXperimental Internet Mailer.

All of these MTAs were designed as sendmail replacements, so they all have some form of sendmail compatibility. Each can handle aliases and .forward files.
Some provide a `sendmail` command as a front end to the particular MTA's own command. Most allow the usual sendmail options, although some options might be ignore silently. The `mailq` command is supported directly or by an alternate command with a similar function. For example, you can use `mailq` or `exim -bp` to display the exim mail queue. Needless to say, output can look different compared to that produced by sendmail's `mailq` command. - -See Related topics where you can find more information on all of these MTAs. - -This concludes your introduction to mail transfer agents on Linux. - - --------------------------------------------------------------------------------- - -via: https://www.ibm.com/developerworks/library/l-lpic1-108-3/index.html - -作者:[Ian Shields][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ibm.com -[1]:http://www.lpi.org -[2]:https://www.ibm.com/developerworks/library/l-lpic1-map/ diff --git a/translated/tech/20171221 Mail transfer agent (MTA) basics.md b/translated/tech/20171221 Mail transfer agent (MTA) basics.md new file mode 100644 index 0000000000..2ebc2ee277 --- /dev/null +++ b/translated/tech/20171221 Mail transfer agent (MTA) basics.md @@ -0,0 +1,267 @@ +邮件传输代理(MTA)基础 +====== + +## 概述 + +本教程中,你将学习: + + * 使用 `mail` 命令。 + * 创建邮件别名。 + * 配置电子邮件转发。 + * 了解常见邮件传输代理(MTA),比如,postfix、sendmail、qmail、以及 exim。 + + + +## 控制邮件去向 + +Linux 系统上的电子邮件是使用 MTAs 投递的。你的 MTA 投递邮件到你的系统上的其他用户,以及系统上的其它系统组或者与全世界的其它 MTAs 通讯以投递邮件。 + +### 前提条件 + +为完成本系列教程的大部分内容,你需要具备 Linux 的基础知识,你需要拥有一个 Linux 系统来实践本教程中的命令。你应该熟悉 GNU 以及 UNIX 命令。有时候不同版本的程序的输出格式可能不同,因此,在你的系统中输出的结果可能与我在下面列出的稍有不同。 + +在本教程中,我使用的是 Ubuntu 14.04 LTS 和 sendmail 8.14.4 来做的演示。 + +## 邮件传输 + +邮件传输代理(比如 sendmail)在用户和系统之间投递邮件。大量的因特网邮件使用简单邮件传输协议(SMTP),但是本地邮件可能是通过文件或者套接字等其它可能的方式来传输的。邮件是一种存储和转发的操作,因此,在用户接收邮件或者接收系统或者通讯联系可用之前,邮件一直是存储在某种文件或者数据库中。配置和确保 MTA 的安全是非常复杂的任务,它们中的大部分内容都已经超出了本教程的范围。 + +## mail 命令 + +如果你使用 SMTP 协议传输电子邮件,你或许知道你可以使用的许多邮件客户端,包括 `mail`、`mutt`、`alpine`、`notmuch`、以及其它基于主机控制台或者图形界面的邮件客户端。`mail` 命令是最老的、可用于脚本中的、发送和接收以及管理收到的邮件的备用命令。 + +你可以使用 `mail` 命令交互式的向列表中的收件人发送信息,或者不使用参数去查看你收到的邮件。Listing 1 展示了如何在你的系统上去发送信息到用户 steve 和 pat,同时抄送拷贝给用户 bob。当提示 cc:和 subject:时,输入相应的抄送用户以及邮件主题,接着输入邮件正文,输入完成后按下 **Ctrl+D** (按下 Ctrl 键并保持再按下 D 之后全部松开)。 + +##### Listing 1. 使用 `mail` 交互式发送邮件 +``` +ian@attic4-u14:~$ mail steve,pat +Cc: bob +Subject: Test message 1 +This is a test message + +Ian +``` + +如果一切顺利,你的邮件已经发出。如果在这里发生错误,你将看到错误信息。例如,如果你在接收者列表中输入一个无效的用户名,邮件将无法发送。注意在本示例中,所有的用户都在本地系统上存在,因此他们都是有效用户。 + +你也可以使用命令行以非交互式发送邮件。Listing 2 展示了如何给用户 steve 和 pat 发送一封邮件。这种方式可以用在脚本中。在不同的包中 `mail` 命令的版本不同。对于抄送(cc:)有些支持一个 `-c` 选项,但是我使用的这个版本不支持这个选项,因此,我仅将邮件发送到收件人。 + +Listing 2. 使用 `mail` 命令非交互式发送邮件 +``` +ian@attic4-u14:~$ mail -t steve,pat -s "Test message 2" <<< "Another test.\n\nIan" +``` + +如果你使用没有选项的 `mail` 命令,你将看到一个如 Listing 3 中所展示的那样一个收到的邮件的列表。你将看到用户 steve 有我上面发送的两个信息,再加上我以前发送的一个信息和后来用户 bob 发送的信息。所有的邮件都用 'N' 标记为新邮件。 + +Listing 3. 使用 `mail` 查看收到的邮件 +``` +steve@attic4-u14:~$ mail +"/var/mail/steve": 4 messages 4 new +>N 1 Ian Shields Tue Dec 12 21:03 16/704 test message + N 2 Ian Shields Tue Dec 12 21:04 18/701 Test message 1 + N 3 Ian Shields Tue Dec 12 21:23 15/661 Test message 2 + N 4 Bob C Tue Dec 12 21:45 17/653 How about lunch tomorrow? +? +``` + +当前选中的信息使用一个 '>' 来标识,它是 Listing 3 中的第一封邮件。如果你按下 **回车键(Enter)**,将显示下一封未读邮件的第一页。按下 **空格楗(Space bar)** 将显示这个信息的下一页。当你读完这个信息并想返回到 '?' 提示时,按下 **回车键** 再次查看下一封邮件,依次类推。在任何 '?' 
提示符下,你可以输入 'h' 再次去查看邮件头。你看过的邮件前面将显示一个 'R' 状态,如 Listing 4 所示。 + +Listing 4. 使用 'h' 去显示邮件头 +``` +? h + R 1 Ian Shields Tue Dec 12 21:03 16/704 test message + R 2 Ian Shields Tue Dec 12 21:04 18/701 Test message 1 +>R 3 Ian Shields Tue Dec 12 21:23 15/661 Test message 2 + N 4 Bob C Tue Dec 12 21:45 17/653 How about lunch tomorrow? +? +``` + +在这个图中,Steve 已经读了三个信息,但是没有读来自 bob 的信息。你可以通过数字来选择单个的信息,你也可以通过输入 ‘d' 删除你不想要的信息,或者输入 '3d' 去删除三封信息。如果你输入 'q' 你将退出 `mail` 命令。已读的信息将被转移到你的 home 目录下的 mbox 文件中,而未读的信息仍然保留在你的收件箱中,默认在 /var/mail/$(id -un)。如 Listing 5 所示。 + +Listing 5. 使用 'q' 退出 `mail` +``` +? h + R 1 Ian Shields Tue Dec 12 21:03 16/704 test message + R 2 Ian Shields Tue Dec 12 21:04 18/701 Test message 1 +>R 3 Ian Shields Tue Dec 12 21:23 15/661 Test message 2 + N 4 Bob C Tue Dec 12 21:45 17/653 How about lunch tomorrow? +? q +Saved 3 messages in /home/steve/mbox +Held 1 message in /var/mail/steve +You have mail in /var/mail/steve +``` + +如果你输入 'x' 而不是使用 'q' 去退出,你的邮箱在退出后将不保留你做的改变。因为这在 /var 文件系统中,你的系统管理员可能仅允许邮件在一个有限的时间范围内去保留三封邮件。去重读或者以其它方式再次处理保存在你的本地邮箱中的邮件,你可以使用 `-f` 选项去指定想要去读的文件。比如,`mail -f mbox`。 + +## 邮件别名 + +在前面的节中,看了如何在系统上给许多用户发送邮件。你可以使用一个全限定名字(比如 ian@myexampledomain.com)给其它系统上的用户发送邮件。 + +有时候你可能希望用户的所有邮件都可以发送到其它地方。比如,你有一个服务器群,你希望所有的 root 用户的邮件都发给核心系统管理员。或者你可能希望去创建一个邮件列表,将邮件发送给一些人。为实现上述目标,你可以使用别名,别名允许你为一个给定的用户名定义一个或者多个目的地。这个目的地或者是其它用户的邮箱、文件、管道、或者是某个进一步处理的命令。你可以在 /etc/mail/aliases 或者 /etc/aliases 中创建别名来实现上述目的。根据你的系统的不同,你可以找到上述其中一个,符号链接到它们、或者其中之一。改变别名文件你需要有 root 权限。 + +别名的格式一般是: +name: addr_1, addr_2, addr_3, ... +name 的位置是一个本地用户名字到别名,或者一个别名和 addr_1,addr_2,... 一个或多个别名。别名可以是一个本地用户,一个本地文件名,另一个别名,一个命令,一个包含文件,或者一个外部地址。 + +因此,发送邮件时如何区分别名呢(addr-N 值)? + + * 一个本地用户名是你机器上系统中的一个用户名字。从技术角度来说,它可以通过调用 `getpwnam` 命令找到它。 + * 一个本地文件名是以 '/' 开始的完全路径和文件名。它必须通过 `sendmail` 来写。信息是追加到这个文件上的。 + * 一个命令是以一个管道符号开始的(|)。信息是通过标准输入的方式发送到命令的。 + * 一个包含文件别名是以 `:include:` 和指定的一个路径和文件名开始的。文件中的别名被添加到别名中。 + * 一个外部地址是一个电子邮件地址,比如 john@somewhere.com。 + + + +你可以在你的系统中找到一个示例文件,它是与你的 sendmail 包一起安装的,它的位置在 /usr/share/sendmail/examples/db/aliases。它包含一些给 postmaster、MAILER-DAEMON、abuse、和 spam的建议别名。在 Listing 6,我把我的 Ubuntu 14.04 LTS 系统上的一些示例文件,和人为的示例结合起来说明一些可能的情况。 + +Listing 6. 人为的 /etc/mail/aliases 示例 + +``` +ian@attic4-u14:~$ cat /etc/mail/aliases +# First include some default system aliases from +# /usr/share/sendmail/examples/db/aliases + +# +# Mail aliases for sendmail +# +# You must run newaliases(1) after making changes to this file. +# + +# Required aliases +postmaster: root +MAILER-DAEMON: postmaster + +# Common aliases +abuse: postmaster +spam: postmaster + +# Other aliases + +# Send steve's mail to bob and pat instead +steve: bob,pat + +# Send pat's mail to a file in her home directory and also to her inbox. +# Finally send it to a command that will make another copy. 
+pat: /home/pat/accumulated-mail, + \pat, + |/home/pat/makemailcopy.sh + +# Mailing list for system administrators +sysadmins: :include: /etc/aliases-sysadmins +``` + +注意那个 pat 既是一个别名也是一个系统中的用户。别名是以递归的方式展开的,因此,如果一个别名也是一个名字,那么它将被展开。Sendmail 并不会给同一个用户发送相同的邮件两遍,因此,如果你正好将 'pat' 作为 'pat' 的别名,那么 sendmail 在已经找到并处理完用户 ’pat‘ 之后,将忽略别名 'pat’。为避免这种问题,你可以在别名前使用一个'\' 做为前缀去指示它是一个不要进一步引起混淆的名字。在这种情况下,pat 的邮件除了文件和命令之外,其余的可能会被发送到他的正常的邮箱中。 + +在 aliases 中以 '$' 开始的行是注释,它会被忽略。以空白开始的行会以延续行来处理。 + +Listing 7 展示了包含文件 /etc/aliases-sysadmins。 + +Listing 7 包含文件 /etc/aliases-sysadmins +``` +ian@attic4-u14:~$ cat /etc/aliases-sysadmins + +# Mailing list for system administrators +bob,pat +``` + +## newaliases 命令 + +sendmail 使用的主要配置文件被编译成数据库文件。邮件别名也是如此。你可以使用 `newaliases` 命令去编译你的 /etc/mail/aliases 和任何包含文件到 /etc/mail/aliases.db 中。注意,那个 `newaliases` 命令等价于 `sendmail -bi`。Listing 8 展示了一个示例。 + +Listing 8. 为邮件别名重建数据库 +``` +ian@attic4-u14:~$ sudo newaliases +/etc/mail/aliases: 7 aliases, longest 62 bytes, 184 bytes total +ian@attic4-u14:~$ ls -l /etc/mail/aliases* +lrwxrwxrwx 1 root smmsp 10 Dec 8 15:48 /etc/mail/aliases -> ../aliases +-rw-r----- 1 smmta smmsp 12288 Dec 13 23:18 /etc/mail/aliases.db +``` + +## 使用别名的示例 + +Listing 9 展示了一个简单的 shell 脚本,它在我的别名示例中以一个命令的方式来使用。 + +Listing 9. makemailcopy.sh 脚本 +``` +ian@attic4-u14:~$ cat ~pat/makemailcopy.sh +#!/bin/bash + +# Note: Target file ~/mail-copy must be writeable by sendmail! +cat >> ~pat/mail-copy +``` + +Listing 10 展示了用于测试时更新的文件。 + +Listing 10. /etc/aliases-sysadmins 包含文件 +``` +ian@attic4-u14:~$ date +Wed Dec 13 22:54:22 EST 2017 +ian@attic4-u14:~$ mail -t sysadmins -s "sysadmin test 1" <<< "Testing mail" +ian@attic4-u14:~$ ls -lrt $(find /var/mail ~pat -type f -mmin -3 2>/dev/null ) +-rw-rw---- 1 pat mail 2046 Dec 13 22:54 /home/pat/mail-copy +-rw------- 1 pat mail 13240 Dec 13 22:54 /var/mail/pat +-rw-rw---- 1 pat mail 9442 Dec 13 22:54 /home/pat/accumulated-mail +-rw-rw---- 1 bob mail 12522 Dec 13 22:54 /var/mail/bob +``` + +需要注意的几点: + + * 有一个用户 'mail' 与 sendmail 使用的组名字 'mail' 是一样的。 + * sendmail 在 /var/mail 保存用户邮件,它也在用户 ‘mail' 的 home 目录下。用户 'ian' 的默认收件箱在 /var/mail/ian 中。 + * 如果你希望 sendmail 在用户目录下写入文件,这个文件必须允许 sendmail 可写入。与其让任何人都可以写入,还不如定义一个组可写入,组名称为 'mail'。这需要系统管理员来帮你完成。 + + + +## 使用一个 `.forward` 文件去转发邮件 + +别名文件是由系统管理员来管理的。个人用户可以使用它们自己的 home 目录下的 `.forward` 文件去转发他们自己的邮件。你可以在你的 `.forward` 文件中放任何东西,它可以放在别名文件的右侧。这个文件的内容是明文的,不需要编译。当你收到邮件时,sendmail 将检查你的 home 目录中的 `.forward` 文件,然后就像处理别名一样处理它。 + +## 邮件队列和 mailq 命令 + +Linux 邮件使用存储-转发的处理模式。你已经看到的已接收邮件,在你读它之前一直保存在文件 /var/mail 中。你发出的邮件在接收服务器连接可用之前也会被保存。你可以使用 `mailq` 命令去查看邮件队列。Listing 11 展示了一个发送给外部用户 ian@attic4-c6 的一个邮件示例,以及运行 `mailq` 命令的结果。在这个案例中,当前服务器没有连接到 attic4-c6,因此邮件在与对方服务器连接可用之前一直保存在队列中。 + +Listing 11. 使用 `mailq` 命令 +``` +ian@attic4-u14:~$ mail -t ian@attic4-c6 -s "External mail" <<< "Testing external mail queues" +ian@attic4-u14:~$ mailq +MSP Queue status... +/var/spool/mqueue-client is empty + Total requests: 0 +MTA Queue status... 
+ /var/spool/mqueue (1 request) +-----Q-ID----- --Size-- -----Q-Time----- ------------Sender/Recipient----------- +vBE4mdE7025908* 29 Wed Dec 13 23:48 + + Total requests: 1 +``` + +## 其它邮件传输代理 + +为解决使用 sendmail 时安全方面的问题,在 上世纪九十年代开发了几个其它的邮件传输代理。Postfix 或许是最流行的一个,但是 qmail 和 exim 也大量使用。 + +Postfix 是 IBM 为代替 sendmail 而研发的。它更快、也易于管理、安全性更好一些。从外表看它非常像 sendmail,但是它的内部完全与 sendmail 不同。 + +Qmail 是一个安全、可靠、高效、简单的邮件传输代理,它由 Dan Bernstein 开发。但是,最近几年以来,它的核心包已经不再更新了。Qmail 和几个其它的包已经被吸收到 IndiMail 中了。 + +Exim 是另外一个 MTA,它由 University of Cambridge 开发。最初,它的名字是 `EXperimental Internet Mailer`。 + +所有的这些 MTAs 都是为代替 sendmail 而设计的,因此,它们它们都兼容 sendmail 的一些格式。它们都能够处理别名和 `.forward` 文件。有些规定了一个 `sendmail` 命令作为一个前端到特定的 MTA 的自有命令。尽管一些选项可能会被静默忽略,但是大多数都允许使用常见的 sendmail 选项。`mailq` 命令是被直接支持的,或者使用一个类似功能的命令来代替。比如,你可以使用 `mailq` 或者 `exim -bp` 去显示 exim 邮件队列。当然,输出可以看到与 sendmail 的 `mailq` 命令的不同之外。 + +查看相关的主题,你可以找到更多的关于这些 MTA 的更多信息。 + +对 Linux 上的邮件传输代理的介绍到此结束。 + +-------------------------------------------------------------------------------- + +via: https://www.ibm.com/developerworks/library/l-lpic1-108-3/index.html + +作者:[Ian Shields][a] +译者:[qhwdw](https://github.com/qhwdw) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ibm.com +[1]:http://www.lpi.org +[2]:https://www.ibm.com/developerworks/library/l-lpic1-map/ From f97c8a32476c68562500bfc43549cc515eb3cc4b Mon Sep 17 00:00:00 2001 From: jessie-pang <35220454+jessie-pang@users.noreply.github.com> Date: Fri, 23 Feb 2018 15:28:48 +0800 Subject: [PATCH 46/81] Delete 20180116 Analyzing the Linux boot process.md --- ...180116 Analyzing the Linux boot process.md | 253 ------------------ 1 file changed, 253 deletions(-) delete mode 100644 sources/tech/20180116 Analyzing the Linux boot process.md diff --git a/sources/tech/20180116 Analyzing the Linux boot process.md b/sources/tech/20180116 Analyzing the Linux boot process.md deleted file mode 100644 index 24a7cb971d..0000000000 --- a/sources/tech/20180116 Analyzing the Linux boot process.md +++ /dev/null @@ -1,253 +0,0 @@ -Translating by jessie-pang - -Analyzing the Linux boot process -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_boot.png?itok=FUesnJQp) - -Image by : Penguin, Boot. Modified by Opensource.com. CC BY-SA 4.0. - -The oldest joke in open source software is the statement that "the code is self-documenting." Experience shows that reading the source is akin to listening to the weather forecast: sensible people still go outside and check the sky. What follows are some tips on how to inspect and observe Linux systems at boot by leveraging knowledge of familiar debugging tools. Analyzing the boot processes of systems that are functioning well prepares users and developers to deal with the inevitable failures. - -In some ways, the boot process is surprisingly simple. The kernel starts up single-threaded and synchronous on a single core and seems almost comprehensible to the pitiful human mind. But how does the kernel itself get started? What functions do [initial ramdisk][1] ) and bootloaders perform? And wait, why is the LED on the Ethernet port always on? - -Read on for answers to these and other questions; the [code for the described demos and exercises][2] is also available on GitHub. - -### The beginning of boot: the OFF state - -#### Wake-on-LAN - -The OFF state means that the system has no power, right? The apparent simplicity is deceptive. 
For example, the Ethernet LED is illuminated because wake-on-LAN (WOL) is enabled on your system. Check whether this is the case by typing:
```
$# sudo ethtool <interface name>
```

where `<interface name>` might be, for example, `eth0`. (`ethtool` is found in Linux packages of the same name.) If "Wake-on" in the output shows `g`, remote hosts can boot the system by sending a [MagicPacket][3]. If you have no intention of waking up your system remotely and do not wish others to do so, turn WOL off either in the system BIOS menu, or via:
```
$# sudo ethtool -s <interface name> wol d
```

The processor that responds to the MagicPacket may be part of the network interface or it may be the [Baseboard Management Controller][4] (BMC).

#### Intel Management Engine, Platform Controller Hub, and Minix

The BMC is not the only microcontroller (MCU) that may be listening when the system is nominally off. x86_64 systems also include the Intel Management Engine (IME) software suite for remote management of systems. A wide variety of devices, from servers to laptops, includes this technology, [which enables functionality][5] such as KVM Remote Control and Intel Capability Licensing Service. The [IME has unpatched vulnerabilities][6], according to [Intel's own detection tool][7]. The bad news is, it's difficult to disable the IME. Trammell Hudson has created an [me_cleaner project][8] that wipes some of the more egregious IME components, like the embedded web server, but could also brick the system on which it is run.

The IME firmware and the System Management Mode (SMM) software that follows it at boot are [based on the Minix operating system][9] and run on the separate Platform Controller Hub processor, not the main system CPU. The SMM then launches the Unified Extensible Firmware Interface (UEFI) software, about which much has [already been written][10], on the main processor. The Coreboot group at Google has started a breathtakingly ambitious [Non-Extensible Reduced Firmware][11] (NERF) project that aims to replace not only UEFI but early Linux userspace components such as systemd. While we await the outcome of these new efforts, Linux users may now purchase laptops from Purism, System76, or Dell [with IME disabled][12], plus we can hope for laptops [with ARM 64-bit processors][13].

#### Bootloaders

Besides starting buggy spyware, what function does early boot firmware serve? The job of a bootloader is to make available to a newly powered processor the resources it needs to run a general-purpose operating system like Linux. At power-on, there not only is no virtual memory, but no DRAM until its controller is brought up. A bootloader then turns on power supplies and scans buses and interfaces in order to locate the kernel image and the root filesystem. Popular bootloaders like U-Boot and GRUB have support for familiar interfaces like USB, PCI, and NFS, as well as more embedded-specific devices like NOR- and NAND-flash. Bootloaders also interact with hardware security devices like [Trusted Platform Modules][14] (TPMs) to establish a chain of trust from earliest boot.

![Running the U-boot bootloader][16]

Running the U-boot bootloader in the sandbox on the build host.

The open source, widely used [U-Boot][17] bootloader is supported on systems ranging from Raspberry Pi to Nintendo devices to automotive boards to Chromebooks. There is no syslog, and when things go sideways, often not even any console output. A manual boot from a board's U-Boot console might look something like the sketch below.
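This sketch is an added illustration, not from the original article: the storage device, file names, and load addresses are examples only (many boards predefine `kernel_addr_r` and `fdt_addr_r`, but yours may not), so consult your board's documentation for the real commands.

```
=> setenv bootargs console=ttyS0,115200 root=/dev/mmcblk0p2 rootwait
=> load mmc 0:1 ${kernel_addr_r} zImage       # copy the kernel from the first SD card partition
=> load mmc 0:1 ${fdt_addr_r} myboard.dtb     # copy the device-tree blob describing the board
=> bootz ${kernel_addr_r} - ${fdt_addr_r}     # jump into the kernel; '-' means no initrd
```

The sequence matches the job description above: power is on, the buses have been scanned, and the bootloader's last act is to hand the kernel its command line and hardware description.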
To facilitate debugging, the U-Boot team offers a sandbox in which patches can be tested on the build-host, or even in a nightly Continuous Integration system. Playing with U-Boot's sandbox is relatively simple on a system where common development tools like Git and the GNU Compiler Collection (GCC) are installed:
```
$# git clone git://git.denx.de/u-boot; cd u-boot
$# make ARCH=sandbox defconfig
$# make; ./u-boot
=> printenv
=> help
```

That's it: you're running U-Boot on x86_64 and can test tricky features like [mock storage device][2] repartitioning, TPM-based secret-key manipulation, and hotplug of USB devices. The U-Boot sandbox can even be single-stepped under the GDB debugger. Development using the sandbox is 10x faster than testing by reflashing the bootloader onto a board, and a "bricked" sandbox can be recovered with Ctrl+C.

### Starting up the kernel

#### Provisioning a booting kernel

Upon completion of its tasks, the bootloader will execute a jump to kernel code that it has loaded into main memory and begin execution, passing along any command-line options that the user has specified. What kind of program is the kernel? `file /boot/vmlinuz` indicates that it is a bzImage, meaning a big compressed one. The Linux source tree contains an [extract-vmlinux tool][18] that can be used to uncompress the file:
```
$# scripts/extract-vmlinux /boot/vmlinuz-$(uname -r) > vmlinux
$# file vmlinux
vmlinux: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, stripped
```

The kernel is an [Executable and Linkable Format][19] (ELF) binary, like Linux userspace programs. That means we can use commands from the `binutils` package like `readelf` to inspect it. Compare the output of, for example:
```
$# readelf -S /bin/date
$# readelf -S vmlinux
```

The list of sections in the binaries is largely the same.

So the kernel must start up something like other Linux ELF binaries ... but how do userspace programs actually start? In the `main()` function, right? Not precisely.

Before the `main()` function can run, programs need an execution context that includes heap and stack memory plus file descriptors for `stdin`, `stdout`, and `stderr`. Userspace programs obtain these resources from the standard library, which is `glibc` on most Linux systems. Consider the following:
```
$# file /bin/date
/bin/date: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.32, BuildID[sha1]=14e8563676febeb06d701dbee35d225c5a8e565a, stripped
```

ELF binaries have an interpreter, just as Bash and Python scripts do, but the interpreter need not be specified with `#!` as in scripts, as ELF is Linux's native format. The ELF interpreter [provisions a binary][20] with the needed resources by calling `_start()`, a function available from the `glibc` source package that can be [inspected via GDB][21]. The kernel obviously has no interpreter and must provision itself, but how?

Inspecting the kernel's startup with GDB gives the answer. First install the debug package for the kernel that contains an unstripped version of `vmlinux`, for example `apt-get install linux-image-amd64-dbg`, or compile and install your own kernel from source, for example, by following instructions in the excellent [Debian Kernel Handbook][22]. `gdb vmlinux` followed by `info files` shows the ELF section `init.text`.
List the start of program execution in `init.text` with `l *(address)`, where `address` is the hexadecimal start of `init.text`. GDB will indicate that the x86_64 kernel starts up in the kernel's file [arch/x86/kernel/head_64.S][23], where we find the assembly function `start_cpu0()` and code that explicitly creates a stack and decompresses the zImage before calling the x86_64 `start_kernel()` function. ARM 32-bit kernels have the similar [arch/arm/kernel/head.S][24]. `start_kernel()` is not architecture-specific, so the function lives in the kernel's [init/main.c][25]. `start_kernel()` is arguably Linux's true `main()` function.

### From start_kernel() to PID 1

#### The kernel's hardware manifest: the device-tree and ACPI tables

At boot, the kernel needs information about the hardware beyond the processor type for which it has been compiled. The instructions in the code are augmented by configuration data that is stored separately. There are two main methods of storing this data: [device-trees][26] and [ACPI tables][27]. The kernel learns what hardware it must manage at each boot by reading these files.

For embedded devices, the device-tree is a manifest of installed hardware. The device-tree is simply a file that is compiled at the same time as kernel source and is typically located in `/boot` alongside `vmlinux`. To see what's in the binary device-tree on an ARM device, just use the `strings` command from the `binutils` package on a file whose name matches `/boot/*.dtb`, as `dtb` refers to a device-tree binary. Clearly the device-tree can be modified simply by editing the JSON-like files that compose it and rerunning the special `dtc` compiler that is provided with the kernel source. While the device-tree is a static file whose file path is typically passed to the kernel by the bootloader on the command line, a [device-tree overlay][28] facility has been added in recent years, where the kernel can dynamically load additional fragments in response to hotplug events after boot.

x86-family and many enterprise-grade ARM64 devices make use of the alternative Advanced Configuration and Power Interface ([ACPI][27]) mechanism. In contrast to the device-tree, the ACPI information is stored in the `/sys/firmware/acpi/tables` virtual filesystem that is created by the kernel at boot by accessing onboard ROM. The easy way to read the ACPI tables is with the `acpidump` command from the `acpica-tools` package. Here's an example:

![ACPI tables on Lenovo laptops][30]

ACPI tables on Lenovo laptops are all set for Windows 2001.

Yes, your Linux system is ready for Windows 2001, should you care to install it. ACPI has both methods and data, unlike the device-tree, which is more of a hardware-description language. ACPI methods continue to be active post-boot. For example, starting the command `acpi_listen` (from package `acpid`) and opening and closing the laptop lid will show that ACPI functionality is running all the time. While temporarily and dynamically [overwriting the ACPI tables][31] is possible, permanently changing them involves interacting with the BIOS menu at boot or reflashing the ROM. If you're going to that much trouble, perhaps you should just [install coreboot][32], the open source firmware replacement.
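If you only want to inspect or prototype a table change, a gentler first step is to pull the published tables apart on the running system. This example is an addition for illustration, using the same `acpica-tools` package mentioned above; the exact `.dat` file names depend on what your firmware publishes.

```
$# acpidump > tables.raw      # dump every table the firmware provided
$# acpixtract -a tables.raw   # split the dump into one binary file per table
$# iasl -d dsdt.dat           # decompile the DSDT into readable source, dsdt.dsl
```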
#### From start_kernel() to userspace

The code in [init/main.c][25] is surprisingly readable and, amusingly, still carries Linus Torvalds' original copyright from 1991-1992. The lines found in `dmesg | head` on a newly booted system originate mostly from this source file. The first CPU is registered with the system, global data structures are initialized, and the scheduler, interrupt handlers (IRQs), timers, and console are brought online one by one, in strict order. Until the function `timekeeping_init()` runs, all timestamps are zero. This part of the kernel initialization is synchronous, meaning that execution occurs in precisely one thread, and no function is executed until the last one completes and returns. As a result, the `dmesg` output will be completely reproducible, even between two systems, as long as they have the same device-tree or ACPI tables. Linux is behaving like one of the RTOS (real-time operating systems) that run on MCUs, for example QNX or VxWorks. The situation persists into the function `rest_init()`, which is called by `start_kernel()` at its termination.

![Summary of early kernel boot process.][34]

Summary of early kernel boot process.

The rather humbly named `rest_init()` spawns a new thread that runs `kernel_init()`, which invokes `do_initcalls()`. Users can spy on `initcalls` in action by appending `initcall_debug` to the kernel command line, resulting in `dmesg` entries every time an `initcall` function runs. `initcalls` pass through eight sequential levels: early, core, postcore, arch, subsys, fs, device, and late. The most user-visible part of the `initcalls` is the probing and setup of all the processors' peripherals: buses, network, storage, displays, etc., accompanied by the loading of their kernel modules. `rest_init()` also spawns a second thread on the boot processor that begins by running `cpu_idle()` while it waits for the scheduler to assign it work.

`kernel_init()` also [sets up symmetric multiprocessing][35] (SMP). With more recent kernels, find this point in `dmesg` output by looking for "Bringing up secondary CPUs..." SMP proceeds by "hotplugging" CPUs, meaning that it manages their lifecycle with a state machine that is notionally similar to that of devices like hotplugged USB sticks. The kernel's power-management system frequently takes individual cores offline, then wakes them as needed, so that the same CPU hotplug code is called over and over on a machine that is not busy. Observe the power-management system's invocation of CPU hotplug with the [BCC tool][36] called `offcputime.py`.

Note that the code in `init/main.c` is nearly finished executing when `smp_init()` runs: The boot processor has completed most of the one-time initialization that the other cores need not repeat. Nonetheless, the per-CPU threads must be spawned for each core to manage interrupts (IRQs), workqueues, timers, and power events on each. For example, see the per-CPU threads that service softirqs and workqueues in action via the `ps -o psr` command.
```
$# ps -o pid,psr,comm $(pgrep ksoftirqd)
 PID PSR COMMAND
   7   0 ksoftirqd/0
  16   1 ksoftirqd/1
  22   2 ksoftirqd/2
  28   3 ksoftirqd/3

$# ps -o pid,psr,comm $(pgrep kworker)
 PID PSR COMMAND
   4   0 kworker/0:0H
  18   1 kworker/1:0H
  24   2 kworker/2:0H
  30   3 kworker/3:0H
[ . . . ]
```

where the PSR field stands for "processor." Each core must also host its own timers and `cpuhp` hotplug handlers.
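You can exercise the same hotplug state machine by hand through sysfs. This is an added aside, not from the original article: it requires root, the core number is arbitrary, and core 0 often cannot be taken offline.

```
$# echo 0 > /sys/devices/system/cpu/cpu3/online   # walk core 3 down through the hotplug states
$# echo 1 > /sys/devices/system/cpu/cpu3/online   # and bring it back up again
$# dmesg | tail                                   # the kernel logs each offline/online transition
```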
How is it, finally, that userspace starts? Near its end, `kernel_init()` looks for an `initrd` that can execute the `init` process on its behalf. If it finds none, the kernel directly executes `init` itself. Why then might one want an `initrd`?

#### Early userspace: who ordered the initrd?

Besides the device-tree, another file path that is optionally provided to the kernel at boot is that of the `initrd`. The `initrd` often lives in `/boot` alongside the bzImage file vmlinuz on x86, or alongside the similar uImage and device-tree for ARM. List the contents of the `initrd` with the `lsinitramfs` tool that is part of the `initramfs-tools-core` package. Distro `initrd` schemes contain minimal `/bin`, `/sbin`, and `/etc` directories along with kernel modules, plus some files in `/scripts`. All of these should look pretty familiar, as the `initrd` for the most part is simply a minimal Linux root filesystem. The apparent similarity is a bit deceptive, as nearly all the executables in `/bin` and `/sbin` inside the ramdisk are symlinks to the [BusyBox binary][37], resulting in `/bin` and `/sbin` directories that are 10x smaller than glibc's.

Why bother to create an `initrd` if all it does is load some modules and then start `init` on the regular root filesystem? Consider an encrypted root filesystem. The decryption may rely on loading a kernel module that is stored in `/lib/modules` on the root filesystem ... and, unsurprisingly, in the `initrd` as well. The crypto module could be statically compiled into the kernel instead of loaded from a file, but there are various reasons for not wanting to do so. For example, statically compiling the kernel with modules could make it too large to fit on the available storage, or static compilation may violate the terms of a software license. Unsurprisingly, storage, network, and human input device (HID) drivers may also be present in the `initrd`--basically any code that is not part of the kernel proper that is needed to mount the root filesystem. The `initrd` is also a place where users can stash their own [custom ACPI][38] table code.

![Rescue shell and a custom initrd.][40]

Having some fun with the rescue shell and a custom `initrd`.

`initrd`s are also great for testing filesystems and data-storage devices themselves. Stash these test tools in the `initrd` and run your tests from memory rather than from the object under test.

At last, when `init` runs, the system is up! Since the secondary processors are now running, the machine has become the asynchronous, preemptible, unpredictable, high-performance creature we know and love. Indeed, `ps -o pid,psr,comm -p 1` is liable to show that userspace's `init` process is no longer running on the boot processor.

### Summary

The Linux boot process sounds forbidding, considering the number of different pieces of software that participate even on simple embedded devices. Looked at differently, the boot process is rather simple, since the bewildering complexity caused by features like preemption, RCU, and race conditions is absent in boot. Focusing on just the kernel and PID 1 overlooks the large amount of work that bootloaders and subsidiary processors may do in preparing the platform for the kernel to run. While the kernel is certainly unique among Linux programs, some insight into its structure can be gleaned by applying to it some of the same tools used to inspect other ELF binaries. Studying the boot process while it's working well arms system maintainers for failures when they come.

To learn more, attend Alison Chaiken's talk, [Linux: The first second][41], at [linux.conf.au][42], which will be held January 22-26 in Sydney.
- -Thanks to [Akkana Peck][43] for originally suggesting this topic and for many corrections. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/1/analyzing-linux-boot-process - -作者:[Alison Chaiken][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/don-watkins -[1]:https://en.wikipedia.org/wiki/Initial_ramdisk -[2]:https://github.com/chaiken/LCA2018-Demo-Code -[3]:https://en.wikipedia.org/wiki/Wake-on-LAN -[4]:https://lwn.net/Articles/630778/ -[5]:https://www.youtube.com/watch?v=iffTJ1vPCSo&index=65&list=PLbzoR-pLrL6pISWAq-1cXP4_UZAyRtesk -[6]:https://security-center.intel.com/advisory.aspx?intelid=INTEL-SA-00086&languageid=en-fr -[7]:https://www.intel.com/content/www/us/en/support/articles/000025619/software.html -[8]:https://github.com/corna/me_cleaner -[9]:https://lwn.net/Articles/738649/ -[10]:https://lwn.net/Articles/699551/ -[11]:https://trmm.net/NERF -[12]:https://www.extremetech.com/computing/259879-dell-now-shipping-laptops-intels-management-engine-disabled -[13]:https://lwn.net/Articles/733837/ -[14]:https://linuxplumbersconf.org/2017/ocw/events/LPC2017/tracks/639 -[15]:/file/383501 -[16]:https://opensource.com/sites/default/files/u128651/linuxboot_1.png (Running the U-boot bootloader) -[17]:http://www.denx.de/wiki/DULG/Manual -[18]:https://github.com/torvalds/linux/blob/master/scripts/extract-vmlinux -[19]:http://man7.org/linux/man-pages/man5/elf.5.html -[20]:https://0xax.gitbooks.io/linux-insides/content/Misc/program_startup.html -[21]:https://github.com/chaiken/LCA2018-Demo-Code/commit/e543d9812058f2dd65f6aed45b09dda886c5fd4e -[22]:http://kernel-handbook.alioth.debian.org/ -[23]:https://github.com/torvalds/linux/blob/master/arch/x86/boot/compressed/head_64.S -[24]:https://github.com/torvalds/linux/blob/master/arch/arm/boot/compressed/head.S -[25]:https://github.com/torvalds/linux/blob/master/init/main.c -[26]:https://www.youtube.com/watch?v=m_NyYEBxfn8 -[27]:http://events.linuxfoundation.org/sites/events/files/slides/x86-platform.pdf -[28]:http://lwn.net/Articles/616859/ -[29]:/file/383506 -[30]:https://opensource.com/sites/default/files/u128651/linuxboot_2.png (ACPI tables on Lenovo laptops) -[31]:https://www.mjmwired.net/kernel/Documentation/acpi/method-customizing.txt -[32]:https://www.coreboot.org/Supported_Motherboards -[33]:/file/383511 -[34]:https://opensource.com/sites/default/files/u128651/linuxboot_3.png (Summary of early kernel boot process.) -[35]:http://free-electrons.com/pub/conferences/2014/elc/clement-smp-bring-up-on-arm-soc -[36]:http://www.brendangregg.com/ebpf.html -[37]:https://www.busybox.net/ -[38]:https://www.mjmwired.net/kernel/Documentation/acpi/initrd_table_override.txt -[39]:/file/383516 -[40]:https://opensource.com/sites/default/files/u128651/linuxboot_4.png (Rescue shell and a custom initrd.) 
-[41]:https://rego.linux.conf.au/schedule/presentation/16/ -[42]:https://linux.conf.au/index.html -[43]:http://shallowsky.com/ From da6d1e80aa96a7bfb5792c7d88761175d1c23596 Mon Sep 17 00:00:00 2001 From: jessie-pang <35220454+jessie-pang@users.noreply.github.com> Date: Fri, 23 Feb 2018 15:29:51 +0800 Subject: [PATCH 47/81] 20180116 Analyzing the Linux boot process.md --- ...180116 Analyzing the Linux boot process.md | 260 ++++++++++++++++++ 1 file changed, 260 insertions(+) create mode 100644 translated/tech/20180116 Analyzing the Linux boot process.md diff --git a/translated/tech/20180116 Analyzing the Linux boot process.md b/translated/tech/20180116 Analyzing the Linux boot process.md new file mode 100644 index 0000000000..35e6201b55 --- /dev/null +++ b/translated/tech/20180116 Analyzing the Linux boot process.md @@ -0,0 +1,260 @@ +Linux 启动过程分析 +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_boot.png?itok=FUesnJQp) + +图片由企鹅和靴子“赞助”,由 Opensource.com 修改。CC BY-SA 4.0。 + +关于开源软件最古老的笑话是:“代码是自文档化的(self-documenting)”。经验表明,阅读源代码就像听天气预报一样:明智的人依然出门会看看室外的天气。本文讲述了如何运用调试工具来观察和分析 Linux 系统的启动。分析一个正常的系统启动过程,有助于用户和开发人员应对不可避免的故障。 + +从某些方面看,启动过程非常简单。内核在单核上启动单线程和同步,似乎可以理解。但内核本身是如何启动的呢?[initrd(initial ramdisk)][1]和引导程序(bootloaders)具有哪些功能?还有,为什么以太网端口上的 LED 灯是常亮的呢? + +请继续阅读寻找答案。GitHub 也提供了 [介绍演示和练习的代码][2]。 + +### 启动的开始:OFF 状态 + +#### 局域网唤醒(Wake-on-LAN) + +OFF 状态表示系统没有上电,没错吧?表面简单,其实不然。例如,如果系统启用连局域网唤醒机制(WOL),以太网指示灯将亮起。通过以下命令来检查是否是这种情况: + +``` + $# sudo ethtool +``` + +其中 `` 是网络接口的名字,比如 `eth0`。(`ethtool` 可以在同名的 Linux 软件包中找到。)如果输出中的 “Wake-on” 显示 “g”,则远程主机可以通过发送 [魔法数据包(MagicPacket)][3] 来启动系统。如果您无意远程唤醒系统,也不希望其他人这样做,请在系统 BIOS 菜单中将 WOL 关闭,或者用以下方式: + +``` +$# sudo ethtool -s wol d +``` + +响应魔法数据包的处理器可能是网络接口的一部分,也可能是 [底板管理控制器(Baseboard Management Controller,BMC)][4]。 + +#### 英特尔管理引擎、平台路径控制器和 Minix + +BMC 不是唯一的在系统关闭时仍在监听的微控制器(MCU)。x86_64 系统还包含了用于远程管理系统的英特尔管理引擎(IME)软件套件。从服务器到笔记本电脑,各种各样的设备都包含了这项技术,开启了如 KVM 远程控制和英特尔功能许可服务等 [功能][5]。根据 [Intel 自己的检测工具][7],[IME 存在尚未修补的漏洞][6]。坏消息是,要禁用 IME 很难。Trammell Hudson 发起了一个 [me_cleaner 项目][8],它可以清除一些相对恶劣的 IME 组件,比如嵌入式 Web 服务器,但也可能会影响运行它的系统。 + +IME 固件和系统管理模式(SMM)软件是 [基于 Minix 操作系统][9] 的,并运行在单独的平台路径控制器上,而不是主 CPU 上。然后,SMM 启动位于主处理器上的通用可扩展固件接口(UEFI)软件,相关内容 [已被提及很多][10]。Google 的 Coreboot 小组已经启动了一个雄心勃勃的 [非扩展性缩减版固件][11](NERF)项目,其目的不仅是要取代 UEFI,还要取代早期的 Linux 用户空间组件,如 systemd。在我们等待这些新成果的同时,Linux 用户现在就可以从 Purism、System76 或 Dell 等处购买 [禁用了 IME][12] 的笔记本电脑,另外 [带有 ARM 64 位处理器笔记本电脑][13] 还是值得期待的。 + +#### +#### 引导程序 + +除了启动问题不断的间谍软件外,早期的引导固件还有什么功能呢?引导程序的作用是为新上电的处理器提供运行像 Linux 之类的通用操作系统所需的资源。在开机时,不但没有虚拟内存,在控制器启动之前连 DRAM 也没有。然后,引导程序打开电源,并扫描总线和接口,以定位到内核镜像和根文件系统的位置。U-Boot 和 GRUB 等常见的引导程序支持 USB、PCI 和 NFS 等接口,以及更多的嵌入式专用设备,如 NOR 和 NAND 闪存。引导程序还与 [可信平台模块][14](TPMs)等硬件安全设备进行交互,在启动最开始建立信任链。 + +![Running the U-boot bootloader][16] + +在构建主机上的沙盒中运行 U-boot 引导程序。 + +包括树莓派、任天堂设备、汽车板和 Chromebook 在内的系统都支持广泛使用的开源引导程序 [U-Boot][17]。它没有系统日志,当发生问题时,甚至没有任何控制台输出。为了便于调试,U-Boot 团队提供了一个沙盒,可以在构建主机甚至是夜间的持续整合(Continuous Integration)系统上测试补丁程序。如果系统上安装了 Git 和 GNU Compiler Collection(GCC)等通用的开发工具,使用 U-Boot 沙盒会相对简单: + +``` + + +$# git clone git://git.denx.de/u-boot; cd u-boot + +$# make ARCH=sandbox defconfig + +$# make; ./u-boot + +=> printenv + +=> help +``` + +在 x86_64 上运行 U-Boot,可以测试一些棘手的功能,如 [模拟存储设备][2] 重新分区、基于 TPM 的密钥操作以及 USB 设备热插拔等。U-Boot 沙盒甚至可以在 GDB 调试器下单步执行。使用沙盒进行开发的速度比将引导程序刷新到电路板上的测试快 10 倍,并且可以使用 Ctrl + C 恢复一个“变砖”的沙盒。 + +### 启动内核 + +#### 配置引导内核 + +完成任务后,引导程序将跳转到已加载到主内存中的内核代码,并开始执行,传递用户指定的任何命令行选项。内核是什么样的程序呢?用命令 `file /boot/vmlinuz` 可以看到它是一个“bzImage”,意思是一个大的压缩的镜像。Linux 
源代码树包含了一个可以解压缩这个文件的工具—— [extract-vmlinux][18]: + +``` + + +$# scripts/extract-vmlinux /boot/vmlinuz-$(uname -r) > vmlinux + +$# file vmlinux + +vmlinux: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically + +linked, stripped +``` + +内核是一个 [可执行与可链接格式][19](ELF)的二进制文件,就像 Linux 的用户空间程序一样。这意味着我们可以使用 `binutils` 包中的命令,如 `readelf` 来检查它。比较一下输出,例如: + +``` + + +$# readelf -S /bin/date + +$# readelf -S vmlinux +``` + +这两个文件中的段内容大致相同。 + +所以内核必须像其他的 Linux ELF 文件一样启动,但用户空间程序是如何启动的呢?在 `main()` 函数中?并不确切。 + +在 `main()` 函数运行之前,程序需要一个执行上下文,包括堆栈内存以及 `stdio`、`stdout` 和 `stderr` 的文件描述符。用户空间程序从标准库(多数 Linux 系统在用“glibc”)中获取这些资源。参照以下输出: + +``` + + +$# file /bin/date + +/bin/date: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically + +linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.32, + +BuildID[sha1]=14e8563676febeb06d701dbee35d225c5a8e565a, + +stripped +``` + +ELF 二进制文件有一个解释器,就像 Bash 和 Python 脚本一样,但是解释器不需要像脚本那样用 `#!` 指定,因为 ELF 是 Linux 的原生格式。ELF 解释器通过调用 `_start()` 函数来用所需资源 [配置一个二进制文件][20],这个函数可以从 glibc 源代码包中找到,可以 [用 GDB 查看][21]。内核显然没有解释器,必须自我配置,这是怎么做到的呢? + +用 GDB 检查内核的启动给出了答案。首先安装内核的调试软件包,内核中包含一个未剥离的(unstripped)vmlinux,例如 `apt-get install linux-image-amd64-dbg`,或者从源代码编译和安装你自己的内核,可以参照 [Debian Kernel Handbook][22] 中的指令。`gdb vmlinux` 后加 `info files` 可显示 ELF 段 `init.text`。在 `init.text` 中用 `l *(address)` 列出程序执行的开头,其中 `address` 是 `init.text` 的十六进制开头。用 GDB 可以看到 x86_64 内核从内核文件 [arch/x86/kernel/head_64.S][23] 开始启动,在这个文件中我们找到了汇编函数 `start_cpu0()`,以及一段明确的代码显示在调用 `x86_64 start_kernel()` 函数之前创建了堆栈并解压了 zImage。ARM 32 位内核也有类似的文件 [arch/arm/kernel/head.S][24]。`start_kernel()` 不针对特定的体系结构,所以这个函数驻留在内核的 [init/main.c][25] 中。`start_kernel()` 可以说是 Linux 真正的 `main()` 函数。 + +### 从 start_kernel() 到 PID 1 + +#### 内核的硬件清单:设备树和 ACPI 表 + +在引导时,内核需要硬件信息,不仅仅是已编译过的处理器类型。代码中的指令通过单独存储的配置数据进行扩充。有两种主要的数据存储方法:[设备树][26] 和 [高级配置和电源接口(ACPI)表][27]。内核通过读取这些文件了解每次启动时需要运行的硬件。 + +对于嵌入式设备,设备树是已安装硬件的清单。设备树只是一个与内核源代码同时编译的文件,通常与 `vmlinux` 一样位于 `/boot` 目录中。要查看 ARM 设备上的设备树的内容,只需对名称与 `/boot/*.dtb` 匹配的文件执行 `binutils` 包中的 `strings` 命令即可,`dtb` 是指一个设备树二进制文件。显然,只需编辑构成它的类 JSON 文件并重新运行随内核源代码提供的特殊 `dtc` 编译器即可修改设备树。虽然设备树是一个静态文件,其文件路径通常由命令行引导程序传递给内核,但近年来增加了一个 [设备树覆盖][28] 的功能,内核在启动后可以动态加载热插拔的附加设备。 + +x86 系列和许多企业级的 ARM64 设备使用 [ACPI][27] 机制。与设备树不同的是,ACPI 信息存储在内核在启动时通过访问板载 ROM 而创建的 `/sys/firmware/acpi/tables` 虚拟文件系统中。读取 ACPI 表的简单方法是使用 `acpica-tools` 包中的 `acpidump` 命令。例如: + +![ACPI tables on Lenovo laptops][30] + + +联想笔记本电脑的 ACPI 表都是为 Windows 2001 设置的。 + +是的,你的 Linux 系统已经准备好用于 Windows 2001 了,你要考虑安装吗?与设备树不同,ACPI 具有方法和数据,而设备树更多地是一种硬件描述语言。ACPI 方法在启动后仍处于活动状态。例如,运行 `acpi_listen` 命令(在 `apcid` 包中),然后打开和关闭笔记本机盖会发现 ACPI 功能一直在运行。暂时地和动态地 [覆盖 ACPI 表][31] 是可能的,而永久地改变它需要在引导时与 BIOS 菜单交互或刷新 ROM。如果你遇到那么多麻烦,也许你应该 [安装 coreboot][32],这是开源固件的替代品。 + +#### 从 start_kernel() 到用户空间 + +[init/main.c][25] 中的代码竟然是可读的,而且有趣的是,它仍然在使用 1991 - 1992 年的 Linus Torvalds 的原始版权。在一个刚启动的系统上运行 `dmesg | head`,其输出主要来源于此文件。第一个 CPU 注册到系统中,全局数据结构被初始化,并且调度程序、中断处理程序(IRQ)、定时器和控制台按照严格的顺序逐一启动。在 `timekeeping_init()` 函数运行之前,所有的时间戳都是零。内核初始化的这部分是同步的,也就是说执行只发生在一个线程中,在最后一个完成并返回之前,没有任何函数会被执行。因此,即使在两个系统之间,`dmesg` 的输出也是完全可重复的,只要它们具有相同的设备树或 ACPI 表。Linux 的行为就像在 MCU 上运行的 RTOS(实时操作系统)一样,如 QNX 或 VxWorks。这种情况持续存在于函数 `rest_init()` 中,该函数在终止时由 `start_kernel()` 调用。 + +![Summary of early kernel boot process.][34] + +早期的内核启动流程 + +函数 `rest_init()` 产生了一个新进程以运行 `kernel_init()`,并调用了 `do_initcalls()`。用户可以通过将 `initcall_debug` 附加到内核命令行来监控 `initcalls`,这样每运行一次 `initcall` 函数就会产生 `dmesg` 条目。`initcalls` 会历经七个连续的级别:early、core、postcore、arch、subsys、fs、device 和 late。`initcalls` 
最为用户可见的部分是所有处理器外围设备的探测和设置:总线、网络、存储和显示器等等,同时加载其内核模块。`rest_init()` 也会在引导处理器上产生第二个线程,它首先运行 `cpu_idle()`,然后等待调度器分配工作。 + +`kernel_init()` 也可以 [设置对称多处理(SMP)结构][35]。在较新的内核中,如果 `dmesg` 的输出中出现“启动第二个 CPU...”等字样,系统便使用了 SMP。SMP 通过“热插拔”CPU 来进行,这意味着它用状态机来管理其生命周期,这种状态机在概念上类似于热插拔的 U 盘一样。内核的电源管理系统经常会使某个核(core)离线,然后根据需要将其唤醒,以便在不忙的机器上反复调用同一段的 CPU 热插拔代码。观察电源管理系统调用 CPU 热插拔代码的 [BCC 工具][36] 称为 `offcputime.py`。 + +请注意,`init/main.c` 中的代码在 `smp_init()` 运行时几乎已执行完毕:引导处理器已经完成了大部分其他核无需重复的一次性初始化操作。尽管如此,跨 CPU 的线程仍然要在每个核上生成,以管理每个核的中断(IRQ)、工作队列、定时器和电源事件。例如,通过 `ps -o psr` 命令可以查看服务 softirqs 和 workqueues 在每个 CPU 上的线程。 + +``` + + +$\# ps -o pid,psr,comm $(pgrep ksoftirqd) + + PID PSR COMMAND + + 7 0 ksoftirqd/0 + + 16 1 ksoftirqd/1 + + 22 2 ksoftirqd/2 + + 28 3 ksoftirqd/3 + + + +$\# ps -o pid,psr,comm $(pgrep kworker) + +PID PSR COMMAND + + 4 0 kworker/0:0H + + 18 1 kworker/1:0H + + 24 2 kworker/2:0H + + 30 3 kworker/3:0H + +[ . . . ] +``` + +其中,PSR 字段代表“处理器”。每个核还必须拥有自己的定时器和 `cpuhp` 热插拔处理程序。 + +那么用户空间是如何启动的呢?在最后,`kernel_init()` 寻找可以代表它执行 `init` 进程的 `initrd`。如果没有找到,内核直接执行 `init` 本身。那么为什么需要 `initrd` 呢? + +#### 早期的用户空间:谁规定要用 initrd? + +除了设备树之外,在启动时可以提供给内核的另一个文件路径是 `initrd` 的路径。`initrd` 通常位于 `/boot` 目录中,与 x86 系统中的 bzImage 文件 vmlinuz 一样,或是与 ARM 系统中的 uImage 和设备树相同。用 `initramfs-tools-core` 软件包中的 `lsinitramfs` 工具可以列出 `initrd` 的内容。发行版的 `initrd` 方案包含了最小化的 `/bin`、`/sbin` 和 `/etc` 目录以及内核模块,还有 `/scripts` 中的一些文件。所有这些看起来都很熟悉,因为 `initrd` 大致上是一个简单的最小化 Linux 根文件系统。看似相似,其实不然,因为位于虚拟内存盘中的 `/bin` 和 `/sbin` 目录下的所有可执行文件几乎都是指向 [BusyBox binary][38] 的符号链接,由此导致 `/bin` 和 `/sbin` 目录比 glibc 的小 10 倍。 + +如果要做的只是加载一些模块,然后在普通的根文件系统上启动 `init`,为什么还要创建一个 `initrd` 呢?想想一个加密的根文件系统,解密可能依赖于加载一个位于根文件系统 `/lib/modules` 的内核模块,当然还有 `initrd` 中的。加密模块可能被静态地编译到内核中,而不是从文件加载,但有多种原因不希望这样做。例如,用模块静态编译内核可能会使其太大而不能适应存储空间,或者静态编译可能会违反软件许可条款。不出所料,存储、网络和人类输入设备(HID)驱动程序也可能存在于 `initrd` 中。`initrd` 基本上包含了任何挂载根文件系统所必需的非内核代码。`initrd` 也是用户存放 [自定义ACPI][38] 表代码的地方。 + +![Rescue shell and a custom initrd.][40] + +救援模式的 shell 和自定义的 `initrd` 还是很有意思的。 + +`initrd` 对测试文件系统和数据存储设备也很有用。将这些测试工具存放在 `initrd` 中,并从内存中运行测试,而不是从被测对象中运行。 + +最后,当 `init` 开始运行时,系统就启动啦!由于辅助处理器正在运行,机器已经成为我们所熟知和喜爱的异步、可抢占、不可预测和高性能的生物。的确,`ps -o pid,psr,comm -p 1` 很容易显示已不在引导处理器上运行的用户空间的 `init` 进程。 + +### Summary +### 总结 + +Linux 引导过程听起来或许令人生畏,即使考虑到简单嵌入式设备上的软件数量。换个角度来看,启动过程相当简单,因为启动中没有抢占、RCU 和竞争条件等扑朔迷离的复杂功能。只关注内核和 PID 1 会忽略了引导程序和辅助处理器为运行内核执行的大量准备工作。虽然内核在 Linux 程序中是独一无二的,但通过一些检查 ELF 文件的工具也可以了解其结构。学习一个正常的启动过程,可以帮助运维人员处理启动的故障。 + +要了解更多信息,请参阅 Alison Chaiken 的演讲——[Linux: The first second][41],将在 1 月 22 日至 26 日在悉尼举行。参见 [linux.conf.au][42]。 + +感谢 [Akkana Peck][43] 的提议和指正。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/1/analyzing-linux-boot-process + +作者:[Alison Chaiken][a] +译者:[jessie-pang](https://github.com/jessie-pang) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/don-watkins +[1]:https://en.wikipedia.org/wiki/Initial_ramdisk +[2]:https://github.com/chaiken/LCA2018-Demo-Code +[3]:https://en.wikipedia.org/wiki/Wake-on-LAN +[4]:https://lwn.net/Articles/630778/ +[5]:https://www.youtube.com/watch?v=iffTJ1vPCSo&amp;amp;amp;amp;amp;index=65&amp;amp;amp;amp;amp;list=PLbzoR-pLrL6pISWAq-1cXP4_UZAyRtesk +[6]:https://security-center.intel.com/advisory.aspx?intelid=INTEL-SA-00086&amp;amp;amp;amp;amp;languageid=en-fr +[7]:https://www.intel.com/content/www/us/en/support/articles/000025619/software.html +[8]:https://github.com/corna/me_cleaner 
+[9]:https://lwn.net/Articles/738649/ +[10]:https://lwn.net/Articles/699551/ +[11]:https://trmm.net/NERF +[12]:https://www.extremetech.com/computing/259879-dell-now-shipping-laptops-intels-management-engine-disabled +[13]:https://lwn.net/Articles/733837/ +[14]:https://linuxplumbersconf.org/2017/ocw/events/LPC2017/tracks/639 +[15]:/file/383501 +[16]:https://opensource.com/sites/default/files/u128651/linuxboot_1.png "Running the U-boot bootloader" +[17]:http://www.denx.de/wiki/DULG/Manual +[18]:https://github.com/torvalds/linux/blob/master/scripts/extract-vmlinux +[19]:http://man7.org/linux/man-pages/man5/elf.5.html +[20]:https://0xax.gitbooks.io/linux-insides/content/Misc/program_startup.html +[21]:https://github.com/chaiken/LCA2018-Demo-Code/commit/e543d9812058f2dd65f6aed45b09dda886c5fd4e +[22]:http://kernel-handbook.alioth.debian.org/ +[23]:https://github.com/torvalds/linux/blob/master/arch/x86/boot/compressed/head_64.S +[24]:https://github.com/torvalds/linux/blob/master/arch/arm/boot/compressed/head.S +[25]:https://github.com/torvalds/linux/blob/master/init/main.c +[26]:https://www.youtube.com/watch?v=m_NyYEBxfn8 +[27]:http://events.linuxfoundation.org/sites/events/files/slides/x86-platform.pdf +[28]:http://lwn.net/Articles/616859/ +[29]:/file/383506 +[30]:https://opensource.com/sites/default/files/u128651/linuxboot_2.png "ACPI tables on Lenovo laptops" +[31]:https://www.mjmwired.net/kernel/Documentation/acpi/method-customizing.txt +[32]:https://www.coreboot.org/Supported_Motherboards +[33]:/file/383511 +[34]:https://opensource.com/sites/default/files/u128651/linuxboot_3.png "Summary of early kernel boot process." +[35]:http://free-electrons.com/pub/conferences/2014/elc/clement-smp-bring-up-on-arm-soc +[36]:http://www.brendangregg.com/ebpf.html +[37]:https://www.busybox.net/ +[38]:https://www.mjmwired.net/kernel/Documentation/acpi/initrd_table_override.txt +[39]:/file/383516 +[40]:https://opensource.com/sites/default/files/u128651/linuxboot_4.png "Rescue shell and a custom initrd." +[41]:https://rego.linux.conf.au/schedule/presentation/16/ +[42]:https://linux.conf.au/index.html +[43]:http://shallowsky.com/ \ No newline at end of file From 3f4cdca8154289e35d84bcaa11af47492531af76 Mon Sep 17 00:00:00 2001 From: jessie-pang <35220454+jessie-pang@users.noreply.github.com> Date: Fri, 23 Feb 2018 15:59:59 +0800 Subject: [PATCH 48/81] Update 20180102 How To Find (Top-10) Largest Files In Linux.md --- .../20180102 How To Find (Top-10) Largest Files In Linux.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180102 How To Find (Top-10) Largest Files In Linux.md b/sources/tech/20180102 How To Find (Top-10) Largest Files In Linux.md index 7e5d8c82a5..77c6238c9c 100644 --- a/sources/tech/20180102 How To Find (Top-10) Largest Files In Linux.md +++ b/sources/tech/20180102 How To Find (Top-10) Largest Files In Linux.md @@ -1,3 +1,5 @@ +Translating by jessie-pang + How To Find (Top-10) Largest Files In Linux ====== When you are running out of disk space in system, you may prefer to check with df command or du command or ncdu command but all these will tell you only current directory files and doesn't shows the system wide files. From aab9d2d9d9e44bd3660fb3595b8315e3781e2043 Mon Sep 17 00:00:00 2001 From: yangjiaqang Date: Fri, 23 Feb 2018 17:36:19 +0800 Subject: [PATCH 49/81] Update 20170829 How To Set Up PF Firewall on FreeBSD to Protect a Web Server.md --- ... 
To Set Up PF Firewall on FreeBSD to Protect a Web Server.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20170829 How To Set Up PF Firewall on FreeBSD to Protect a Web Server.md b/sources/tech/20170829 How To Set Up PF Firewall on FreeBSD to Protect a Web Server.md index 45ce0c0a7a..70709e6426 100644 --- a/sources/tech/20170829 How To Set Up PF Firewall on FreeBSD to Protect a Web Server.md +++ b/sources/tech/20170829 How To Set Up PF Firewall on FreeBSD to Protect a Web Server.md @@ -1,3 +1,5 @@ +yangjiaqiang 翻译中 + How To Set Up PF Firewall on FreeBSD to Protect a Web Server ====== From 408f0dffc95d4198b7631e31e1d78a8271af7337 Mon Sep 17 00:00:00 2001 From: MjSeven <33125422+MjSeven@users.noreply.github.com> Date: Fri, 23 Feb 2018 18:13:52 +0800 Subject: [PATCH 50/81] Update 20180207 Python Global Keyword (With Examples).md --- sources/tech/20180207 Python Global Keyword (With Examples).md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180207 Python Global Keyword (With Examples).md b/sources/tech/20180207 Python Global Keyword (With Examples).md index f4f4043c8d..93a84359b4 100644 --- a/sources/tech/20180207 Python Global Keyword (With Examples).md +++ b/sources/tech/20180207 Python Global Keyword (With Examples).md @@ -1,3 +1,5 @@ +Translating by MjSeven + Python Global Keyword (With Examples) ====== Before reading this article, make sure you have got some basics of [Python Global, Local and Nonlocal Variables][1]. From 84c0fb526180232cc1568a3a6e5af90d12b7403a Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 23 Feb 2018 18:48:51 +0800 Subject: [PATCH 51/81] PRF:20180206 What Is Kali Linux, and Do You Need It.md @qhwdw --- ...0180206 What Is Kali Linux, and Do You Need It.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/translated/tech/20180206 What Is Kali Linux, and Do You Need It.md b/translated/tech/20180206 What Is Kali Linux, and Do You Need It.md index f603d3a3fd..5506807a6b 100644 --- a/translated/tech/20180206 What Is Kali Linux, and Do You Need It.md +++ b/translated/tech/20180206 What Is Kali Linux, and Do You Need It.md @@ -3,19 +3,19 @@ Kali Linux 是什么,你需要它吗? ![](https://www.maketecheasier.com/assets/uploads/2018/01/kl-feat.jpg) -如果你听到一个 13 岁的黑客吹嘘它是多么的牛逼,是有可能的,因为有 Kali Linux 的存在。尽管有可能会被称为“脚本小子”,但是事实上,Kali 仍旧是安全专家手头的重要工具(或工具集)。 +如果你听到一个 13 岁的黑客吹嘘他是多么的牛逼,是有可能的,因为有 Kali Linux 的存在。尽管有可能会被称为“脚本小子”,但是事实上,Kali 仍旧是安全专家手头的重要工具(或工具集)。 -Kali 是一个基于 Debian 的 Linux 发行版。它的目标就是为了简单;在一个实用的工具包里尽可能多的包含渗透和审计工具。Kali 实现了这个目标。大多数做安全测试的开源工具都被囊括在内。 +Kali 是一个基于 Debian 的 Linux 发行版。它的目标就是为了简单:在一个实用的工具包里尽可能多的包含渗透和审计工具。Kali 实现了这个目标。大多数做安全测试的开源工具都被囊括在内。 -**相关** : [4 个极好的为隐私和案例设计的 Linux 发行版][1] +**相关** : [4 个极好的为隐私和安全设计的 Linux 发行版][1] ### 为什么是 Kali? ![Kali Linux Desktop][2] -[Kali][3] 是由 Offensive Security (https://www.offensive-security.com/)公司开发和维护的。它在安全领域是一家知名的、值得信赖的公司,它甚至还有一些受人尊敬的认证,来对安全从业人员做资格认证。 +[Kali][3] 是由 [Offensive Security](https://www.offensive-security.com/) 公司开发和维护的。它在安全领域是一家知名的、值得信赖的公司,它甚至还有一些受人尊敬的认证,来对安全从业人员做资格认证。 -Kali 也是一个简便的安全解决方案。Kali 并不要求你自己去维护一个 Linux,或者收集你自己的软件和依赖。它是一个“交钥匙工程”。所有这些繁杂的工作都不需要你去考虑,因此,你只需要专注于要审计的真实工作上,而不需要去考虑准备测试系统。 +Kali 也是一个简便的安全解决方案。Kali 并不要求你自己去维护一个 Linux 系统,或者你自己去收集软件和依赖项。它是一个“交钥匙工程”。所有这些繁杂的工作都不需要你去考虑,因此,你只需要专注于要审计的真实工作上,而不需要去考虑准备测试系统。 ### 如何使用它? 
@@ -61,7 +61,7 @@
 via: https://www.maketecheasier.com/what-is-kali-linux-and-do-you-need-it/
 
 作者:[Nick Congleton][a]
 译者:[qhwdw](https://github.com/qhwdw)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
 
 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

From c75b2f7eba72b927cca2067d38db06ebacfa4400 Mon Sep 17 00:00:00 2001
From: wxy
Date: Fri, 23 Feb 2018 18:49:52 +0800
Subject: [PATCH 52/81] PUB:20180206 What Is Kali Linux, and Do You Need It.md
 @qhwdw https://linux.cn/article-9375-1.html

---
 .../20180206 What Is Kali Linux, and Do You Need It.md | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 rename {translated/tech => published}/20180206 What Is Kali Linux, and Do You Need It.md (100%)

diff --git a/translated/tech/20180206 What Is Kali Linux, and Do You Need It.md b/published/20180206 What Is Kali Linux, and Do You Need It.md
similarity index 100%
rename from translated/tech/20180206 What Is Kali Linux, and Do You Need It.md
rename to published/20180206 What Is Kali Linux, and Do You Need It.md

From fdcb89f1904d6923df243f1c2d3b230529eff4e1 Mon Sep 17 00:00:00 2001
From: Means Lee
Date: Fri, 23 Feb 2018 19:07:39 +0800
Subject: [PATCH 53/81] translated

---
 ...nture Game in the Terminal with ncurses.md | 324 ++++++++++++++++++
 1 file changed, 324 insertions(+)
 create mode 100644 translated/20180126 Creating an Adventure Game in the Terminal with ncurses.md

diff --git a/translated/20180126 Creating an Adventure Game in the Terminal with ncurses.md b/translated/20180126 Creating an Adventure Game in the Terminal with ncurses.md
new file mode 100644
index 0000000000..ed8f875074
--- /dev/null
+++ b/translated/20180126 Creating an Adventure Game in the Terminal with ncurses.md
@@ -0,0 +1,324 @@
+通过 ncurses 在终端创建一个冒险游戏
+======
+怎样使用 curses 函数读取键盘并操作屏幕。
+
+我[之前的文章][1]介绍了 ncurses 库,并提供了一个简单的程序,展示了几个把文本放到屏幕上的 curses 函数。
+
+### 探险
+
+在我成长的年代,家里有一台 Apple II 电脑。我和我兄弟正是在这台电脑上自学了如何用 AppleSoft BASIC 写程序。在写了一些数学智力游戏之后,我开始创作游戏。作为一个 80 年代的人,我本来就是龙与地下城桌游的粉丝,喜欢在游戏中扮演一个四处讨伐怪物、在陌生土地上劫掠的战士或巫师,所以我做一个基本的冒险游戏也在情理之中。
+
+AppleSoft BASIC 支持一种简洁的特性:在标准分辨率图形模式(GR 模式)下,你可以检测屏幕上特定点的颜色。这为创建冒险游戏提供了一条捷径:我不必在内存里维护一份地图并周期性地把它重画到屏幕上,而是可以让 GR 模式替我保存地图,让程序在玩家角色于屏幕上四处移动时直接查询屏幕。通过这种方式,我让电脑完成了大部分繁重的工作。因此,我的俯视视角冒险游戏使用了块状的 GR 模式图形来展示游戏地图。
+
+我的冒险游戏使用了一张简单的地图:一大片绿地,一条山脉从中间蔓延向下,左上方还有一个大湖。这张地图是我为桌游战役粗略绘制的,山脉中有一条狭窄的通道,允许玩家穿过它到达远处的区域。
+
+![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/quest-map.jpg)
+
+图 1. 一个有湖和山的简单桌游地图
+
+你可以用 curses 绘制这张地图,并用字符代表草地、山脉和水。接下来,我会描述怎样用 curses 做到这一点,以及如何在 Linux 终端中创建并游玩一个类似的冒险游戏。
+
+### 构建程序
+
+在我的上一篇文章中,我提到过大多数 curses 程序都以相同的一组指令开始,用来获取终端类型并设置 curses 环境:
+
+```
+initscr();
+cbreak();
+noecho();
+
+```
+
+在这个程序中,我添加了另外一条语句:
+
+```
+keypad(stdscr, TRUE);
+
+```
+
+这里的 TRUE 标志允许 curses 从用户终端读取小键盘和功能键。如果你想在程序中使用上下左右方向键,就需要这里的 keypad(stdscr, TRUE)。
+
+这样做了之后,你就可以开始在终端屏幕上绘图了。curses 提供了一系列在屏幕上绘制文本的函数。在我之前的文章中,我展示了 addch() 和 addstr() 函数,以及它们对应的、会先移动到指定屏幕位置再添加文本的 mvaddch() 和 mvaddstr() 函数。为了创建这个冒险游戏,你还可以使用另外一组函数:vline() 和 hline(),以及对应的 mvvline() 和 mvhline()。这些 mv 函数接收屏幕坐标、一个要绘制的字符,以及重复该字符的次数。例如,mvhline(1, 2, '-', 20) 会从第 1 行第 2 列开始,绘制一条由 20 个横线字符组成的线段。
+
+为了以编程方式把地图绘制到终端上,让我们先定义这个 draw_map() 函数:
+
+```
+#define GRASS ' '
+#define EMPTY '.' 
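+/* 其余地形字符:水面和山地会挡住去路,玩家自己则显示为星号 */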
+#define WATER '~'
+#define MOUNTAIN '^'
+#define PLAYER '*'
+
+void draw_map(void)
+{
+    int y, x;
+
+    /* 绘制探索地图 */
+
+    /* 背景 */
+
+    for (y = 0; y < LINES; y++) {
+        mvhline(y, 0, GRASS, COLS);
+    }
+
+    /* 山和山道 */
+
+    for (x = COLS / 2; x < COLS * 3 / 4; x++) {
+        mvvline(0, x, MOUNTAIN, LINES);
+    }
+
+    mvhline(LINES / 4, 0, GRASS, COLS);
+
+    /* 湖 */
+
+    for (y = 1; y < LINES / 2; y++) {
+        mvhline(y, 1, WATER, COLS / 3);
+    }
+}
+
+```
+
+在绘制这幅地图时,请留意我是用 mvvline() 和 mvhline() 把大块字符填充到屏幕上的:我用从第 0 列开始、贯穿整个屏幕宽度的多条水平线(mvhline)铺出草地背景;在其上用从第 0 行开始的多条垂直线(mvvline)画出山脉,并用一条单独的水平线(mvhline)凿出山道;湖则由一系列较短的水平线(mvhline)组成。这种重叠绘制矩形块的方式看起来似乎并不高效,但请记住:在我们调用 refresh() 之前,curses 并不会真正更新屏幕。
+
+绘制完地图,创建游戏就只剩下进入一个循环,让程序等待用户按下上下左右方向键中的一个,然后让玩家图标相应移动。如果玩家想要移动到的地方是空的,就允许玩家移动到那里。
+
+这里你可以把 curses 当作捷径来用。与其在程序里维护一份地图副本再复制到屏幕(那样太复杂),不如让屏幕替你记录一切。inch() 函数和与之相关的 mvinch() 函数允许你探测屏幕上的内容,这样你就可以直接询问 curses:玩家想要移动到的位置是不是被水面或山地挡住了。为此,你需要一个之后会用到的辅助函数:
+
+```
+int is_move_okay(int y, int x)
+{
+    int testch;
+
+    /* 若该位置可以进入则返回 true */
+
+    testch = mvinch(y, x);
+    return ((testch == GRASS) || (testch == EMPTY));
+}
+
+```
+
+如你所见,这个函数探测第 y 行、第 x 列的字符,在该位置未被占据(即为草地或足迹)时返回 true,否则返回 false。
+
+这样,编写移动循环就很容易了:从键盘获取一个键值,再根据按下的是上下左右哪个方向键来移动玩家字符。这里是这种循环的一个简单版本:
+
+```
+
+    do {
+        ch = getch();
+
+        /* 测试输入的值并获取方向 */
+
+        switch (ch) {
+        case KEY_UP:
+            if ((y > 0) && is_move_okay(y - 1, x)) {
+                y = y - 1;
+            }
+            break;
+        case KEY_DOWN:
+            if ((y < LINES - 1) && is_move_okay(y + 1, x)) {
+                y = y + 1;
+            }
+            break;
+        case KEY_LEFT:
+            if ((x > 0) && is_move_okay(y, x - 1)) {
+                x = x - 1;
+            }
+            break;
+        case KEY_RIGHT:
+            if ((x < COLS - 1) && is_move_okay(y, x + 1)) {
+                x = x + 1;
+            }
+            break;
+        }
+    }
+    while (1);
+
+```
+
+为了在游戏中真正使用这个循环,你还需要在循环里添加一些代码:启用其它按键(例如传统的 WASD 移动键),并给用户提供退出游戏的方法。这里是完整的程序:
+
+```
+
+/* quest.c */
+
+#include <curses.h>
+#include <stdlib.h>
+
+#define GRASS ' '
+#define EMPTY '.' 
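+/* 只有草地(GRASS)和足迹(EMPTY)允许通行,判断逻辑见下面的 is_move_okay() */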
+#define WATER '~' +#define MOUNTAIN '^' +#define PLAYER '*' + +int is_move_okay(int y, int x); +void draw_map(void); + +int main(void) +{ + int y, x; + int ch; + + /* 初始化curses */ + + initscr(); + keypad(stdscr, TRUE); + cbreak(); + noecho(); + + clear(); + + /* 初始化探索地图 */ + + draw_map(); + + /* 在左下角初始化玩家 */ + + y = LINES - 1; + x = 0; + + do { + /* 默认获得一个闪烁的光标--表示玩家字符 */ + + mvaddch(y, x, PLAYER); + move(y, x); + refresh(); + + ch = getch(); + + /* 测试输入的键并获取方向 */ + + switch (ch) { + case KEY_UP: + case 'w': + case 'W': + if ((y > 0) && is_move_okay(y - 1, x)) { + mvaddch(y, x, EMPTY); + y = y - 1; + } + break; + case KEY_DOWN: + case 's': + case 'S': + if ((y < LINES - 1) && is_move_okay(y + 1, x)) { + mvaddch(y, x, EMPTY); + y = y + 1; + } + break; + case KEY_LEFT: + case 'a': + case 'A': + if ((x > 0) && is_move_okay(y, x - 1)) { + mvaddch(y, x, EMPTY); + x = x - 1; + } + break; + case KEY_RIGHT: + case 'd': + case 'D': + if ((x < COLS - 1) && is_move_okay(y, x + 1)) { + mvaddch(y, x, EMPTY); + x = x + 1; + } + break; + } + } + while ((ch != 'q') && (ch != 'Q')); + + endwin(); + + exit(0); +} + +int is_move_okay(int y, int x) +{ + int testch; + + /* 当空间可以进入时返回true */ + + testch = mvinch(y, x); + return ((testch == GRASS) || (testch == EMPTY)); +} + +void draw_map(void) +{ + int y, x; + + /* 绘制探索地图 */ + + /* 背景 */ + + for (y = 0; y < LINES; y++) { + mvhline(y, 0, GRASS, COLS); + } + + /* 山脉和山道 */ + + for (x = COLS / 2; x < COLS * 3 / 4; x++) { + mvvline(0, x, MOUNTAIN, LINES); + } + + mvhline(LINES / 4, 0, GRASS, COLS); + + /* 湖 */ + + for (y = 1; y < LINES / 2; y++) { + mvhline(y, 1, WATER, COLS / 3); + } +} + +``` + +在完整的程序清单中,你可以看见使用curses函数创建游戏的完整布置: + +1) 初始化curses环境。 + +2) 绘制地图。 + +3) 初始化玩家坐标(左下角) + +4) 循环: + +* 绘制玩家字符。 + +* 从键盘获取键值。 + +* 对应地上下左右调整玩家坐标。 + +* 重复。 + +5) 完成时关闭curses环境并退出。 + +### 开始玩 + +当你运行游戏时,玩家的字符在左下角初始化。当玩家在游戏区域四处移动的时候,程序创建了“一串”点。这样可以展示玩家经过了的点,让玩家避免经过不必要的路径。 + +![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/quest-start.png) + +图2\. 初始化在左下角的玩家 + +![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/quest-1.png) + +图3\. 
玩家可以在游戏区域四处移动,例如湖周围和山的通道 + +为了创建上面这样的完整冒险游戏,你可能需要在他/她的字符在游戏区域四处移动的时候随机创建不同的怪物。你也可以创建玩家可以发现在打败敌人后可以掠夺的特殊道具,这些道具应能提高玩家的能力。 + +但是作为起点,这是一个展示如何使用curses函数读取键盘和操纵屏幕的好程序。 + +### 下一步 + +这是一个如何使用curses函数更新和读取屏幕和键盘的简单例子。按照你的程序需要做什么,curses可以做得更多。在下一篇文章中,我计划展示如何更新这个简单程序以使用颜色。同时,如果你想要学习更多curses,我鼓励你去读位于Linux文档计划的Pradeep Padala之[如何使用NCURSES编程][2]。 + + +-------------------------------------------------------------------------------- + +via: http://www.linuxjournal.com/content/creating-adventure-game-terminal-ncurses + +作者:[Jim Hall][a] +译者:[Leemeans](https://github.com/leemeans) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.linuxjournal.com/users/jim-hall +[1]:http://www.linuxjournal.com/content/getting-started-ncurses +[2]:http://tldp.org/HOWTO/NCURSES-Programming-HOWTO From 21fcb55a441e01b44d000508271e79a8787f3715 Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 23 Feb 2018 19:15:11 +0800 Subject: [PATCH 54/81] PRF&PUB:20180105 How To Display Asterisks When You Type Password In terminal.md @geekpi --- ...isks When You Type Password In terminal.md | 32 ++++++++++--------- 1 file changed, 17 insertions(+), 15 deletions(-) rename {translated/tech => published}/20180105 How To Display Asterisks When You Type Password In terminal.md (62%) diff --git a/translated/tech/20180105 How To Display Asterisks When You Type Password In terminal.md b/published/20180105 How To Display Asterisks When You Type Password In terminal.md similarity index 62% rename from translated/tech/20180105 How To Display Asterisks When You Type Password In terminal.md rename to published/20180105 How To Display Asterisks When You Type Password In terminal.md index 0b764d093f..d189cd30e6 100644 --- a/translated/tech/20180105 How To Display Asterisks When You Type Password In terminal.md +++ b/published/20180105 How To Display Asterisks When You Type Password In terminal.md @@ -3,68 +3,70 @@ ![](https://www.ostechnix.com/wp-content/uploads/2018/01/Display-Asterisks-When-You-Type-Password-In-terminal-1-720x340.png) -当你在 Web 浏览器或任何 GUI 登录中输入密码时,密码会被标记成星号 ******** 或圆形符号 ••••••••••••• 。这是内置的安全机制,以防止你附近的用户看到你的密码。但是当你在终端输入密码来执行任何 **sudo** 或 **su** 的管理任务时,你不会在输入密码的时候看见星号或者圆形符号。它不会有任何输入密码的视觉指示,也不会有任何光标移动,什么也没有。你不知道你是否输入了所有的字符。你只会看到一个空白的屏幕! +当你在 Web 浏览器或任何 GUI 登录中输入密码时,密码会被标记成星号 `********` 或圆点符号 `•••••••••••••` 。这是内置的安全机制,以防止你附近的用户看到你的密码。但是当你在终端输入密码来执行任何 `sudo` 或 `su` 的管理任务时,你不会在输入密码的时候看见星号或者圆点符号。它不会有任何输入密码的视觉指示,也不会有任何光标移动,什么也没有。你不知道你是否输入了所有的字符。你只会看到一个空白的屏幕! 
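+
+例如,运行一条需要提权的命令时(这里的用户名只是示意,以你系统的实际提示为准):
+
+```
+$ sudo ls
+[sudo] password for sk: 
+```
+
+光标会停在提示符之后,无论你输入多少个字符,屏幕上都不会有任何回显。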
看看下面的截图。 ![][2] -正如你在上面的图片中看到的,我已经输入了密码,但没有任何指示(星号或圆形符号)。现在,我不确定我是否输入了所有密码。这个安全机制也可以防止你附近的人猜测密码长度。当然,这种行为可以改变。这是本指南要说的。这并不困难。请继续阅读。 +正如你在上面的图片中看到的,我已经输入了密码,但没有任何指示(星号或圆点符号)。现在,我不确定我是否输入了所有密码。这个安全机制也可以防止你附近的人猜测密码长度。当然,这种行为可以改变。这是本指南要说的。这并不困难。请继续阅读。 #### 当你在终端输入密码时显示星号 -要在终端输入密码时显示星号,我们需要在 **“/etc/sudoers”** 中做一些小修改。在做任何更改之前,最好备份这个文件。为此,只需运行: +要在终端输入密码时显示星号,我们需要在 `/etc/sudoers` 中做一些小修改。在做任何更改之前,最好备份这个文件。为此,只需运行: + ``` sudo cp /etc/sudoers{,.bak} ``` -上述命令将 /etc/sudoers 备份成名为 /etc/sudoers.bak。你可以恢复它,以防万一在编辑文件后出错。 +上述命令将 `/etc/sudoers` 备份成名为 `/etc/sudoers.bak`。你可以恢复它,以防万一在编辑文件后出错。 + +接下来,使用下面的命令编辑 `/etc/sudoers`: -接下来,使用下面的命令编辑 **“/etc/sudoers”**: ``` sudo visudo ``` 找到下面这行: + ``` Defaults env_reset ``` ![][3] -在该行的末尾添加一个额外的单词 **“,pwfeedback”**,如下所示。 +在该行的末尾添加一个额外的单词 `,pwfeedback`,如下所示。 + ``` Defaults env_reset,pwfeedback ``` ![][4] -然后,按下 **“CTRL + x”** 和 **“y”** 保存并关闭文件。重新启动终端以使更改生效。 +然后,按下 `CTRL + x` 和 `y` 保存并关闭文件。重新启动终端以使更改生效。 现在,当你在终端输入密码时,你会看到星号。 ![][5] -如果你对在终端输入密码时看不到密码感到不舒服,那么这个小技巧会有帮助。请注意,当你输入输入密码时其他用户就可以预测你的密码长度。如果你不介意,请按照上述方法进行更改,以使你的密码可见(当然,标记为星号!)。 +如果你对在终端输入密码时看不到密码感到不舒服,那么这个小技巧会有帮助。请注意,当你输入输入密码时其他用户就可以预测你的密码长度。如果你不介意,请按照上述方法进行更改,以使你的密码可见(当然,显示为星号!)。 现在就是这样了。还有更好的东西。敬请关注! 干杯! - - -------------------------------------------------------------------------------- via: https://www.ostechnix.com/display-asterisks-type-password-terminal/ 作者:[SK][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:https://www.ostechnix.com/author/sk/ -[2]:http://www.ostechnix.com/wp-content/uploads/2018/01/password-1.png () -[3]:http://www.ostechnix.com/wp-content/uploads/2018/01/visudo-1.png () -[4]:http://www.ostechnix.com/wp-content/uploads/2018/01/visudo-1-1.png () -[5]:http://www.ostechnix.com/wp-content/uploads/2018/01/visudo-2.png () +[2]:http://www.ostechnix.com/wp-content/uploads/2018/01/password-1.png +[3]:http://www.ostechnix.com/wp-content/uploads/2018/01/visudo-1.png +[4]:http://www.ostechnix.com/wp-content/uploads/2018/01/visudo-1-1.png +[5]:http://www.ostechnix.com/wp-content/uploads/2018/01/visudo-2.png From c2e8d0e7e7276c6d7be66322dd821fcea692f4ea Mon Sep 17 00:00:00 2001 From: MjSeven <33125422+MjSeven@users.noreply.github.com> Date: Fri, 23 Feb 2018 22:56:01 +0800 Subject: [PATCH 55/81] Delete 20180207 Python Global Keyword (With Examples).md --- ...7 Python Global Keyword (With Examples).md | 188 ------------------ 1 file changed, 188 deletions(-) delete mode 100644 sources/tech/20180207 Python Global Keyword (With Examples).md diff --git a/sources/tech/20180207 Python Global Keyword (With Examples).md b/sources/tech/20180207 Python Global Keyword (With Examples).md deleted file mode 100644 index 93a84359b4..0000000000 --- a/sources/tech/20180207 Python Global Keyword (With Examples).md +++ /dev/null @@ -1,188 +0,0 @@ -Translating by MjSeven - -Python Global Keyword (With Examples) -====== -Before reading this article, make sure you have got some basics of [Python Global, Local and Nonlocal Variables][1]. - -### Introduction to global Keyword - -In Python, `global` keyword allows you to modify the variable outside of the current scope. It is used to create a global variable and make changes to the variable in a local context. - -#### Rules of global Keyword - -The basic rules for `global` keyword in Python are: - - * When we create a variable inside a function, it’s local by default. 
- * When we define a variable outside of a function, it’s global by default. You don’t have to use `global` keyword. - * We use `global` keyword to read and write a global variable inside a function. - * Use of `global` keyword outside a function has no effect - - - -#### Use of global Keyword (With Example) - -Let’s take an example. - -##### Example 1: Accessing global Variable From Inside a Function -``` -c = 1 # global variable - -def add(): - print(c) - -add() - -``` - -When we run above program, the output will be: -``` -1 - -``` - -However, we may have some scenarios where we need to modify the global variable from inside a function. - -##### Example 2: Modifying Global Variable From Inside the Function -``` -c = 1 # global variable - -def add(): - c = c + 2 # increment c by 2 - print(c) - -add() - -``` - -When we run above program, the output shows an error: -``` -UnboundLocalError: local variable 'c' referenced before assignment - -``` - -This is because we can only access the global variable but cannot modify it from inside the function. - -The solution for this is to use the `global` keyword. - -##### Example 3: Changing Global Variable From Inside a Function using global -``` -c = 0 # global variable - -def add(): - global c - c = c + 2 # increment by 2 - print("Inside add():", c) - -add() -print("In main:", c) - -``` - -When we run above program, the output will be: -``` -Inside add(): 2 -In main: 2 - -``` - -In the above program, we define c as a global keyword inside the `add()` function. - -Then, we increment the variable c by `1`, i.e `c = c + 2`. After that, we call the `add()` function. Finally, we print global variable c. - -As we can see, change also occured on the global variable outside the function, `c = 2`. - -### Global Variables Across Python Modules - -In Python, we create a single module `config.py` to hold global variables and share information across Python modules within the same program. - -Here is how we can share global variable across the python modules. - -##### Example 4 : Share a global Variable Across Python Modules - -Create a `config.py` file, to store global variables -``` -a = 0 -b = "empty" - -``` - -Create a `update.py` file, to change global variables -``` -import config - -config.a = 10 -config.b = "alphabet" - -``` - -Create a `main.py` file, to test changes in value -``` -import config -import update - -print(config.a) -print(config.b) - -``` - -When we run the `main.py` file, the output will be -``` -10 -alphabet - -``` - -In the above, we create three files: `config.py`, `update.py` and `main.py`. - -The module `config.py` stores global variables of a and b. In `update.py` file, we import the `config.py` module and modify the values of a and b. Similarly, in `main.py` file we import both `config.py` and `update.py` module. Finally, we print and test the values of global variables whether they are changed or not. - -### Global in Nested Functions - -Here is how you can use a global variable in nested function. - -##### Example 5: Using a Global Variable in Nested Function -``` -def foo(): - x = 20 - - def bar(): - global x - x = 25 - - print("Before calling bar: ", x) - print("Calling bar now") - bar() - print("After calling bar: ", x) - -foo() -print("x in main : ", x) - -``` - -The output is : -``` -Before calling bar: 20 -Calling bar now -After calling bar: 20 -x in main : 25 - -``` - -In the above program, we declare global variable inside the nested function `bar()`. Inside `foo()` function, x has no effect of global keyword. 
- -Before and after calling `bar()`, the variable x takes the value of local variable i.e `x = 20`. Outside of the `foo()` function, the variable x will take value defined in the `bar()` function i.e `x = 25`. This is because we have used `global` keyword in x to create global variable inside the `bar()` function (local scope). - -If we make any changes inside the `bar()` function, the changes appears outside the local scope, i.e. `foo()`. - --------------------------------------------------------------------------------- - -via: https://www.programiz.com/python-programming/global-keyword - -作者:[programiz][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.programiz.com -[1]:https://www.programiz.com/python-programming/global-local-nonlocal-variables From 0afa7ac7c0e0a337bbd8e995cfd26a93763224c1 Mon Sep 17 00:00:00 2001 From: MjSeven <33125422+MjSeven@users.noreply.github.com> Date: Fri, 23 Feb 2018 22:57:19 +0800 Subject: [PATCH 56/81] Create 20180207 Python Global Keyword (With Examples).md --- ...7 Python Global Keyword (With Examples).md | 159 ++++++++++++++++++ 1 file changed, 159 insertions(+) create mode 100644 translated/tech/20180207 Python Global Keyword (With Examples).md diff --git a/translated/tech/20180207 Python Global Keyword (With Examples).md b/translated/tech/20180207 Python Global Keyword (With Examples).md new file mode 100644 index 0000000000..981de74983 --- /dev/null +++ b/translated/tech/20180207 Python Global Keyword (With Examples).md @@ -0,0 +1,159 @@ +Python Global 关键字(含示例) +====== +在读这篇文章之前,确保你对 [Python Global,Local 和 Nonlocal 变量][1] 有一定的基础。 + +### global 关键字简介 + +在 Python 中,`global` 关键字允许你修改当前范围之外的变量。它用于创建全局变量并在本地上下文中更改变量。 + +### global 关键字的规则 +在 Python 中,有关 `global` 关键字基本规则如下: + +* 当我们在一个函数中创建一个变量时,默认情况下它是本地变量。 +* 当我们在一个函数之外定义一个变量时,默认情况下它是全局变量。你不必使用 `global` 关键字。 +* 我们使用 `global` 关键字在一个函数中来读写全局变量。 +* 在一个函数外使用 `global` 关键字没有效果。 + +#### 使用 global 关键字(含示例) + +我们来举个例子。 + +##### 示例 1:从函数内部访问全局变量 + + c = 1 # 全局变量 + + def add(): + print(c) + + add() + +运行程序,输出为: + + 1 + +但是我们可能有一些场景需要从函数内部修改全局变量。 + +##### 示例 2:在函数内部修改全局变量 + + c = 1 # 全局变量 + + def add(): + c = c + 2 # 将 c 增加 2 + print(c) + + add() + +运行程序,输出显示错误: + + UnboundLocalError: local variable 'c' referenced before assignment + +这是因为在函数中,我们只能访问全局变量但是不能修改它。 + +解决的办法是使用 `global` 关键字。 + +##### 示例 3:使用 global 在函数中改变全局变量 + + c = 0 # global variable + + def add(): + global c + c = c + 2 # 将 c 增加 2 + print("Inside add():", c) + + add() + print("In main:", c) + +运行程序,输出为: + + Inside add(): 2 + In main: 2 + +在上面的程序中,我们在 `add()` 函数中定义了 c 将其作为 global 关键字。 + +然后,我们给变量 c 增加 `1`,(译注:这里应该是给 c 增加 `2` )即 `c = c + 2`。之后,我们调用了 `add()` 函数。最后,打印全局变量 c。 + +正如我们所看到的,在函数外的全局变量也发生了变化,`c = 2`。 + +### Python 模块中的全局变量 + +在 Python 中,我们创建一个单独的模块 `config.py` 来保存全局变量并在同一个程序中的 Python 模块之间共享信息。 + +以下是如何通过 Python 模块共享全局变量。 + +##### 示例 4:在Python模块中共享全局变量 + +创建 `config.py` 文件来存储全局变量 + + a = 0 + b = "empty" + +创建 `update.py` 文件来改变全局变量 + + import config + + config.a = 10 + config.b = "alphabet" + +创建 `main.py` 文件来测试其值的变化 + + import config + import update + + print(config.a) + print(config.b) + +运行 `main.py`,输出为: + + 10 + alphabet + +在上面,我们创建了三个文件: `config.py`, `update.py` 和 `main.py`。 + +在 `config.py` 模块中保存了全局变量 a 和 b。在 `update.py` 文件中,我们导入了 `config.py` 模块并改变了 a 和 b 的值。同样,在 `main.py` 文件,我们导入了 `config.py` 和 `update.py` 模块。最后,我们打印并测试全局变量的值,无论它们是否被改变。 + +### 在嵌套函数中的全局变量 + +以下是如何在嵌套函数中使用全局变量。 + +##### 示例 
5:在嵌套函数中使用全局变量 + + def foo(): + x = 20 + + def bar(): + global x + x = 25 + + print("Before calling bar: ", x) + print("Calling bar now") + bar() + print("After calling bar: ", x) + + foo() + print("x in main : ", x) + +输出为: + + Before calling bar: 20 + Calling bar now + After calling bar: 20 + x in main : 25 + +在上面的程序中,我们在一个嵌套函数 `bar()` 中声明了全局变量。在 `foo()` 函数中, 变量 x 没有全局关键字的作用。 + +调用 `bar()` 之前和之后, 变量 x 取本地变量的值,即 `x = 20`。在 `foo()` 函数之外,变量 x 会取在函数 `bar()` 中的值,即 `x = 25`。这是因为在 `bar()` 中,我们对 x 使用 `global` 关键字创建了一个全局变量(本地范围)。 + +如果我们在 `bar()` 函数内进行了任何修改,那么这些修改就会出现在本地范围之外,即 `foo()`。 + +-------------------------------------------------------------------------------- + +via: [https://www.programiz.com/python-programming/global-keyword](https://www.programiz.com/python-programming/global-keyword) + +作者:[programiz][a] +译者:[MjSeven](https://github.com/MjSeven) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.programiz.com +[1]:https://www.programiz.com/python-programming/global-local-nonlocal-variables From b72a9e6e70fcf7c498cdc85a1ccfb7f110ca6765 Mon Sep 17 00:00:00 2001 From: geekpi Date: Sat, 24 Feb 2018 08:46:01 +0800 Subject: [PATCH 57/81] translated --- ...rrect Misspelled Bash Commands In Linux.md | 96 ------------------- ...rrect Misspelled Bash Commands In Linux.md | 93 ++++++++++++++++++ 2 files changed, 93 insertions(+), 96 deletions(-) delete mode 100644 sources/tech/20180207 How To Easily Correct Misspelled Bash Commands In Linux.md create mode 100644 translated/tech/20180207 How To Easily Correct Misspelled Bash Commands In Linux.md diff --git a/sources/tech/20180207 How To Easily Correct Misspelled Bash Commands In Linux.md b/sources/tech/20180207 How To Easily Correct Misspelled Bash Commands In Linux.md deleted file mode 100644 index a24bd1fda3..0000000000 --- a/sources/tech/20180207 How To Easily Correct Misspelled Bash Commands In Linux.md +++ /dev/null @@ -1,96 +0,0 @@ -translating---geekpi - -How To Easily Correct Misspelled Bash Commands In Linux -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/02/Correct-Misspelled-Bash-Commands-720x340.png) - -I know, I know! You could just hit the UP arrow to bring up the command you just ran, and navigate to the misspelled word using the LEFT/RIGHT keys, and correct the misspelled word(s), finally hit ENTER key to run it again, right? But, wait. There is another easier way to correct misspelled Bash commands in GNU/Linux. This brief tutorial explains how to do it. Read on. - -### Correct Misspelled Bash Commands In Linux - -Have you run a mistyped command something like below? -``` -$ unme -r -bash: unme: command not found - -``` - -Did you notice? There is a typo in the above command. I missed the letter “a” in the “uname” command. - -I have done this kind of silly mistakes in many occasions. Before I know this trick, I used to hit UP arrow to bring up the command and go to the misspelled word in the command, correct the spelling and typos and hit the ENTER key to run that command again. But believe me. The below trick is super easy to correct any typos and spelling mistakes in a command you just ran. - -To easily correct the above misspelled command, just run: -``` -$ ^nm^nam^ - -``` - -This will replace the characters “nm” with “nam” in the “uname” command. Cool, yeah? It’s not only corrects the typos, but also runs the command. Check the following screenshot. 
- -![][2] - -Use this trick when you made a typo in a command. Please note that it works only in Bash shell. - -**Bonus tip:** - -Have you ever wondered how to automatically correct spelling mistakes and typos when using “cd” command? No? It’s alright! The following trick will explain how to do it. - -This trick will only help to correct the spelling mistakes and typos when using “cd” command. - -Let us say, you want to switch to “Downloads” directory using command: -``` -$ cd Donloads -bash: cd: Donloads: No such file or directory - -``` - -Oops! There is no such file or directory with name “Donloads”. Well, the correct name was “Downloads”. The “w” is missing in the above command. - -To fix this issue and automatically correct the typos while using cd command, edit your **.bashrc** file: -``` -$ vi ~/.bashrc - -``` - -Add the following line at end. -``` -[...] -shopt -s cdspell - -``` - -Type **:wq** to save and exit the file. - -Finally, run the following command to update the changes. -``` -$ source ~/.bashrc - -``` - -Now, if there are any typos or spelling mistakes in the path while using cd command, it will automatically corrects and land you in the correct directory. - -![][3] - -As you see in the above command, I intentionally made a typo (“Donloads” instead of “Downloads”), but Bash automatically detected the correct directory name and cd into it. - -[**Fish**][4] and **Zsh** shells have this feature built-in. So, you don’t need this trick if you use them. - -This trick, however, has some limitations. It works only if you use the correct case. In the above example, if you type “cd donloads” instead of “cd Donloads”, it won’t recognize the correct path. Also, if there were more than one letters missing in the path, it won’t work either. - - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/easily-correct-misspelled-bash-commands-linux/ - -作者:[SK][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ostechnix.com/author/sk/ -[2]:http://www.ostechnix.com/wp-content/uploads/2018/02/misspelled-command.png -[3]:http://www.ostechnix.com/wp-content/uploads/2018/02/cd-command.png -[4]:https://www.ostechnix.com/install-fish-friendly-interactive-shell-linux/ diff --git a/translated/tech/20180207 How To Easily Correct Misspelled Bash Commands In Linux.md b/translated/tech/20180207 How To Easily Correct Misspelled Bash Commands In Linux.md new file mode 100644 index 0000000000..c80d990a60 --- /dev/null +++ b/translated/tech/20180207 How To Easily Correct Misspelled Bash Commands In Linux.md @@ -0,0 +1,93 @@ +如何在 Linux 中轻松修正拼写错误的 Bash 命令 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/02/Correct-Misspelled-Bash-Commands-720x340.png) + +我知道你可以按下向上箭头来调出你运行过的命令,然后使用左/右键移动到拼写错误的单词,并更正拼写错误的单词,最后按回车键再次运行它,对吗?可是等等。还有一种更简单的方法可以纠正 GNU/Linux 中拼写错误的 Bash 命令。这个教程解释了如何做到这一点。请继续阅读。 + +### 在 Linux 中纠正拼写错误的 Bash 命令 + +你有没有运行过类似于下面的错误输入命令? 
+``` +$ unme -r +bash: unme: command not found + +``` + +你注意到了吗?上面的命令中有一个错误。我在 “uname” 命令缺少了字母 “a”。 + +我在很多时候犯过这种愚蠢的错误。在我知道这个技巧之前,我习惯按下向上箭头来调出命令并转到命令中拼写错误的单词,纠正拼写错误,然后按回车键再次运行该命令。但相信我。下面的技巧非常易于纠正你刚刚运行的命令中的任何拼写错误。 + +要轻松更正上述拼写错误的命令,只需运行: +``` +$ ^nm^nam^ + +``` + +这会将 “uname” 命令中将 “nm” 替换为 “nam”。很酷,是吗?它不仅纠正错别字,而且还能运行命令。查看下面的截图。 + +![][2] + +当你在命令中输入错字时使用这个技巧。请注意,它仅适用于 Bash shell。 + +**额外提示:** + +你有没有想过在使用 “cd” 命令时如何自动纠正拼写错误?没有么?没关系!下面的技巧将解释如何做到这一点。 + +这个技巧只能纠正使用 “cd” 命令时的拼写错误。 + +比如说,你想使用命令切换到 “Downloads” 目录: +``` +$ cd Donloads +bash: cd: Donloads: No such file or directory + +``` + +哎呀!没有名称为 “Donloads” 的文件或目录。是的,正确的名称是 “Downloads”。上面的命令中缺少 “w”。 + +要解决此问题并在使用 cd 命令时自动更正错误,请编辑你的 **.bashrc** 文件: +``` +$ vi ~/.bashrc + +``` + +最后添加以下行。 +``` +[...] +shopt -s cdspell + +``` + +输入 **:wq** 保存并退出文件。 + +最后,运行以下命令更新更改。 +``` +$ source ~/.bashrc + +``` + +现在,如果在使用 cd 命令时路径中存在任何拼写错误,它将自动更正并进入正确的目录。 + +![][3] + +正如你在上面的命令中看到的那样,我故意输错(“Donloads” 而不是 “Downloads”),但 Bash 自动检测到正确的目录名并 cd 进入它。 + +[**Fish**][4] 和**Zsh** shell 内置的此功能。所以,如果你使用的是它们,那么你不需要这个技巧。 + +然而,这个技巧有一些局限性。它只适用于使用正确的大小写。在上面的例子中,如果你输入的是 “cd donloads” 而不是 “cd Donloads”,它将无法识别正确的路径。另外,如果路径中缺少多个字母,它也不起作用。 + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/easily-correct-misspelled-bash-commands-linux/ + +作者:[SK][a] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com/author/sk/ +[2]:http://www.ostechnix.com/wp-content/uploads/2018/02/misspelled-command.png +[3]:http://www.ostechnix.com/wp-content/uploads/2018/02/cd-command.png +[4]:https://www.ostechnix.com/install-fish-friendly-interactive-shell-linux/ From d87d7f54989ac1347522a2c1733a9d908407a660 Mon Sep 17 00:00:00 2001 From: geekpi Date: Sat, 24 Feb 2018 08:51:18 +0800 Subject: [PATCH 58/81] translating --- ...Check Your Linux PC for Meltdown or Spectre Vulnerability.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180201 How to Check Your Linux PC for Meltdown or Spectre Vulnerability.md b/sources/tech/20180201 How to Check Your Linux PC for Meltdown or Spectre Vulnerability.md index 38cada57ee..1828625b63 100644 --- a/sources/tech/20180201 How to Check Your Linux PC for Meltdown or Spectre Vulnerability.md +++ b/sources/tech/20180201 How to Check Your Linux PC for Meltdown or Spectre Vulnerability.md @@ -1,3 +1,5 @@ +translating---geekpi + How to Check Your Linux PC for Meltdown or Spectre Vulnerability ====== From f0f90aef36108e5c724d0e2f88ac8bcf9daa1008 Mon Sep 17 00:00:00 2001 From: darksun Date: Sat, 24 Feb 2018 10:24:35 +0800 Subject: [PATCH 59/81] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20configur?= =?UTF-8?q?e=20an=20Apache=20web=20server?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...2 How to configure an Apache web server.md | 233 ++++++++++++++++++ 1 file changed, 233 insertions(+) create mode 100644 sources/tech/20180222 How to configure an Apache web server.md diff --git a/sources/tech/20180222 How to configure an Apache web server.md b/sources/tech/20180222 How to configure an Apache web server.md new file mode 100644 index 0000000000..9846afc98c --- /dev/null +++ b/sources/tech/20180222 How to configure an Apache web server.md @@ -0,0 +1,233 @@ +How to configure an Apache web server +====== + 
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openweb-osdc-lead.png?itok=yjU4KliG) + +I have hosted my own websites for many years now. Since switching from OS/2 to Linux more than 20 years ago, I have used [Apache][1] as my server software. Apache is solid, well-known, and quite easy to configure for a basic installation. It is not really that much more difficult to configure for a more complex setup, such as multiple websites. + +Installation and configuration of the Apache web server must be performed as root. Configuring the firewall also needs to be performed as root. Using a browser to view the results of this work should be done as a non-root user. (I use the useron `student` on my virtual host.) + +### Installation + +Note: I use a virtual machine (VM) using Fedora 27 with Apache 2.4.29. If you have a different distribution or a different release of Fedora, your commands and the locations and content of the configuration files may be different. However, the configuration lines you need to modify are the same. + +The Apache web server is easy to install. On my CentOS 6.x server, it just takes a simple `yum` command. It installs all the necessary dependencies if any are missing. I used the `dnf` command below on one of my Fedora virtual machines. The syntax for `dnf` and `yum` are the same except for the name of the command itself. +``` +dnf -y install httpd + +``` + +The VM is a very basic desktop installation I am using as a testbed for writing a book. Even on this system, only six dependencies were installed in under a minute. + +All the configuration files for Apache are located in `/etc/httpd/conf` and `/etc/httpd/conf.d`. The data for the websites is located in `/var/www` by default, but you can change that if you want. + +### Configuration + +The primary Apache configuration file is `/etc/httpd/conf/httpd.conf`. It contains a lot of configuration statements that don't need to be changed for a basic installation. In fact, only a few changes must be made to this file to get a basic website up and running. The file is very large so, rather than clutter this article with a lot of unnecessary stuff, I will show only those directives that you need to change. + +First, take a bit of time and browse through the `httpd.conf` file to familiarize yourself with it. One of the things I like about Red Hat versions of most configuration files is the number of comments that describe the various sections and configuration directives in the files. The `httpd.conf` file is no exception, as it is quite well commented. Use these comments to understand what the file is configuring. + +The first item to change is the `Listen` statement, which defines the IP address and port on which Apache is to listen for page requests. Right now, you just need to make this website available to the local machine, so use the `localhost` address. The line should look like this when you finish: +``` +Listen 127.0.0.1:80 + +``` + +With this directive set to the IP address of the `localhost`, Apache will listen only for connections from the local host. If you want the web server to listen for connections from remote hosts, you would use the host's external IP address. + +The `DocumentRoot` directive specifies the location of the HTML files that make up the pages of the website. That line does not need to be changed because it already points to the standard location. 
The line should look like this: +``` +DocumentRoot "/var/www/html" + +``` + +The Apache installation RPM creates the `/var/www` directory tree. If you wanted to change the location where the website files are stored, this configuration item is used to do that. For example, you might want to use a different name for the `www` subdirectory to make the identification of the website more explicit. That might look like this: +``` +DocumentRoot "/var/mywebsite/html" + +``` + +These are the only Apache configuration changes needed to create a simple website. For this little exercise, only one change was made to the `httpd.conf` file—the `Listen` directive. Everything else is already configured to produce a working web server. + +One other change is needed, however: opening port 80 in our firewall. I use [iptables][2] as my firewall, so I change `/etc/sysconfig/iptables` to add a statement that allows HTTP protocol. The entire file looks like this: +``` +# sample configuration for iptables service + +# you can edit this manually or use system-config-firewall + +# please do not ask us to add additional ports/services to this default configuration + +*filter + +:INPUT ACCEPT [0:0] + +:FORWARD ACCEPT [0:0] + +:OUTPUT ACCEPT [0:0] + +-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT + +-A INPUT -p icmp -j ACCEPT + +-A INPUT -i lo -j ACCEPT + +-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT + +-A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT + +-A INPUT -j REJECT --reject-with icmp-host-prohibited + +-A FORWARD -j REJECT --reject-with icmp-host-prohibited + +COMMIT + +``` + +The line I added is the third from the bottom, which allows incoming traffic on port 80. Now I reload the altered iptables configuration. +``` +[root@testvm1 ~]# cd /etc/sysconfig/ ; iptables-restore iptables + +``` + +### Create the index.html file + +The `index.html` file is the default file a web server will serve up when you access the website using just the domain name and not a specific HTML file name. In the `/var/www/html` directory, create a file with the name `index.html`. Add the content `Hello World`. You do not need to add any HTML markup to make this work. The sole job of the web server is to serve up a stream of text data, and the server has no idea what the date is or how to render it. It simply transmits the data stream to the requesting host. + +After saving the file, set the ownership to `apache.apache`. +``` +[root@testvm1 html]# chown apache.apache index.html + +``` + +### Start Apache + +Apache is very easy to start. Current versions of Fedora use `systemd`. Run the following commands to start it and then to check the status of the server: +``` +[root@testvm1 ~]# systemctl start httpd + +[root@testvm1 ~]# systemctl status httpd + +● httpd.service - The Apache HTTP Server + +   Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled; vendor preset: disabled) + +   Active: active (running) since Thu 2018-02-08 13:18:54 EST; 5s ago + +     Docs: man:httpd.service(8) + + Main PID: 27107 (httpd) + +   Status: "Processing requests..." + +    Tasks: 213 (limit: 4915) + +   CGroup: /system.slice/httpd.service + +           ├─27107 /usr/sbin/httpd -DFOREGROUND + +           ├─27108 /usr/sbin/httpd -DFOREGROUND + +           ├─27109 /usr/sbin/httpd -DFOREGROUND + +           ├─27110 /usr/sbin/httpd -DFOREGROUND + +           └─27111 /usr/sbin/httpd -DFOREGROUND + + + +Feb 08 13:18:54 testvm1 systemd[1]: Starting The Apache HTTP Server... 
+ +Feb 08 13:18:54 testvm1 systemd[1]: Started The Apache HTTP Server. + +``` + +The commands may be different on your server. On Linux systems that use SystemV start scripts, the commands would be: +``` +[root@testvm1 ~]# service httpd start + +Starting httpd: [Fri Feb 09 08:18:07 2018]          [  OK  ] + +[root@testvm1 ~]# service httpd status + +httpd (pid  14649) is running... + +``` + +If you have a web browser like Firefox or Chrome on your host, you can use the URL `localhost` on the URL line of the browser to display your web page, simple as it is. You could also use a text mode web browser like [Lynx][3] to view the web page. First, install Lynx (if it is not already installed). +``` +[root@testvm1 ~]# dnf -y install lynx + +``` + +Then use the following command to display the web page. +``` +[root@testvm1 ~]# lynx localhost + +``` + +The result looks like this in my terminal session. I have deleted a lot of the empty space on the page. +``` +  Hello World + + + + + + + + + +Commands: Use arrow keys to move, '?' for help, 'q' to quit, '<-' to go back. + +  Arrow keys: Up and Down to move.  Right to follow a link; Left to go back. + + H)elp O)ptions P)rint G)o M)ain screen Q)uit /=search [delete]=history list + +``` + +Next, edit your `index.html` file and add a bit of HTML markup so it looks like this: +``` +
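+<!-- an h1 heading: Lynx centers and colorizes it, while GUI browsers use a large font -->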
+<h1>Hello World</h1>
+ +``` + +Now refresh the browser. For Lynx, use the key combination Ctrl+R. The results look just a bit different. The text is in color, which is how Lynx displays headings if your terminal supports color, and it is now centered. In a GUI browser the text would be in a large font. +``` +                                   Hello World + + + + + + + + + +Commands: Use arrow keys to move, '?' for help, 'q' to quit, '<-' to go back. + +  Arrow keys: Up and Down to move.  Right to follow a link; Left to go back. + + H)elp O)ptions P)rint G)o M)ain screen Q)uit /=search [delete]=history list + +``` + +### Parting thoughts + +As you can see from this little exercise, it is easy to set up an Apache web server. The specifics will vary depending upon your distribution and the version of Apache supplied by that distribution. In my environment, this was a pretty trivial exercise. + +But there is more because Apache is very flexible and powerful. Next month I will discuss hosting multiple websites using a single instance of Apache. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/2/how-configure-apache-web-server + +作者:[David Both][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/dboth +[1]:https://httpd.apache.org/ +[2]:https://en.wikipedia.org/wiki/Iptables +[3]:http://lynx.browser.org/ From 65acf867b5c10366fd75a81eb92c84bb7a340c6f Mon Sep 17 00:00:00 2001 From: darksun Date: Sat, 24 Feb 2018 10:27:45 +0800 Subject: [PATCH 60/81] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20Get=20St?= =?UTF-8?q?arted=20Using=20WSL=20in=20Windows=2010?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... to Get Started Using WSL in Windows 10.md | 119 ++++++++++++++++++ 1 file changed, 119 insertions(+) create mode 100644 sources/tech/20180220 How to Get Started Using WSL in Windows 10.md diff --git a/sources/tech/20180220 How to Get Started Using WSL in Windows 10.md b/sources/tech/20180220 How to Get Started Using WSL in Windows 10.md new file mode 100644 index 0000000000..e20ac04305 --- /dev/null +++ b/sources/tech/20180220 How to Get Started Using WSL in Windows 10.md @@ -0,0 +1,119 @@ +How to Get Started Using WSL in Windows 10 +====== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/wsl-main.png?itok=wJ5WrU9U) + +In the [previous article][1], we talked about the Windows Subsystem for Linux (WSL) and its target audience. In this article, we will walk through the process of getting started with WSL on your Windows 10 machine. + +### Prepare your system for WSL + +You must be running the latest version of Windows 10 with Fall Creator Update installed. Then, check which version of Windows 10 is installed on your system by searching on “About” in the search box of the Start menu. You should be running version 1709 or the latest to use WSL. + +Here is a screenshot from my system. + +![kHFKOvrbG1gXdB9lsbTqXC4N4w0Lbsz1Bul5ey9m][2] + +If an older version is installed, you need to download and install the Windows 10 Fall Creator Update (FCU) from [this][3] page. Once FCU is installed, go to Update Settings (just search for “updates” in the search box of the Start menu) and install any available updates. 
+ +Go to Turn Windows Features On or Off (you know the drill by now) and scroll to the bottom and tick on the box Windows Subsystem for Linux, as shown in the following figure. Click Ok. It will download and install the needed packages. + +![oV1mDqGe3zwQgL0N3rDasHH6ZwHtxaHlyrLzjw7x][4] + +Upon the completion of the installation, the system will offer to restart. Go ahead and reboot your machine. WSL won’t launch without a system reboot, as shown below: + +![GsNOQLJlHeZbkaCsrDIhfVvEoycu3D0upoTdt6aN][5] + +Once your system starts, go back to the Turn features on or off setting to confirm that the box next to Windows Subsystem for Linux is selected. + +### Install Linux in Windows + +There are many ways to install Linux on Windows, but we will choose the easiest way. Open the Windows Store and search for Linux. You will see the following option: + +![YAR4UgZiFAy2cdkG4U7jQ7_m81lrxR6aHSMOdED7][6] + +Click on Get the apps, and Windows Store will provide you with three options: Ubuntu, openSUSE Leap 42, and SUSE Linux Enterprise Server. You can install all three distributions side by side and run all three distributions simultaneously. To be able to use SLE, you need a subscription. + +In this case, I am installing openSUSE Leap 42 and Ubuntu. Select your desired distro and click on the Get button to install it. Once installed, you can launch openSUSE in Windows. It can be pinned to the Start menu for quick access. + +![4LU6eRrzDgBprDuEbSFizRuP1J_zS3rBnoJbU2OA][7] + +### Using Linux in Windows + +When you launch the distro, it will open the Bash shell and install the distro. Once installed, you can go ahead and start using it. Simple. Just bear in mind that there is no user in openSUSE and it runs as root user, whereas Ubuntu will ask you to create a user. On Ubuntu, you can perform administrative tasks as sudo user. + +You can easily create a user on openSUSE: +``` +# useradd [username] + +# passwd [username] + +``` + +Create a new password for the user and you are all set. For example: +``` +# useradd swapnil + +# passwd swapnil + +``` + +You can switch from root to this use by running the su command: +``` +su swapnil + +``` + +You do need non-root use to perform many tasks, like using commands like rsync to move files on your local machine. + +The first thing you need to do is update the distro. For openSUSE: +``` +zypper up + +``` + +For Ubuntu: +``` +sudo apt-get update + +sudo apt-get dist-upgrade + +``` + +![7cRgj1O6J8yfO3L4ol5sP-ZCU7_uwOuEoTzsuVW9][8] + +You now have native Linux Bash shell on Windows. Want to ssh into your server from Windows 10? There’s no need to install puTTY or Cygwin. Just open Bash and then ssh into your server. Easy peasy. + +Want to rsync files to your server? Go ahead and use rsync. It really transforms Windows into a usable machine for those Windows users who want to use native Linux command linux tools on their machines without having to deal with VMs. + +### Where is Fedora? + +You may be wondering about Fedora. Unfortunately, Fedora is not yet available through the store. Matthew Miller, the release manager of Fedora said on Twitter, “We're working on resolving some non-technical issues. I'm afraid I don't have any more than that right now.” + +We don’t know yet what these non-technical issues are. When some users asked why the WSL team could not publish Fedora themselves --- after all it’s an open source project -- Rich Turner, a project manager at Microsoft [responded][9], “We have a policy of not publishing others' IP into the store. 
We believe that the community would MUCH prefer to see a distro published by the distro owner vs. seeing it published by Microsoft or anyone else that isn't the authoritative source.” + +So, Microsoft can’t just go ahead and publish Debian or Arch Linux on Windows Store. The onus is on the official communities to bring their distros to Windows 10 users. + +### What’s next + +In the next article, we will talk about using Windows 10 as a Linux machine and performing most of the tasks that you would perform on your Linux system using the command-line tools. + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/learn/2018/2/how-get-started-using-wsl-windows-10 + +作者:[SWAPNIL BHARTIYA][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linux.com/users/arnieswap +[1]:https://www.linux.com/blog/learn/2018/2/windows-subsystem-linux-bridge-between-two-platforms +[2]:https://lh6.googleusercontent.com/kHFKOvrbG1gXdB9lsbTqXC4N4w0Lbsz1Bul5ey9mr_E255GiiBxf8cRlatrte6z23yvo8lHJG8nQ_WeHhUNYqPp7kHuQTTMueqMshCT71JsbMr2Wih9KFHuHgNg1BclWz-iuBt4O +[3]:https://www.microsoft.com/en-us/software-download/windows10 +[4]:https://lh4.googleusercontent.com/oV1mDqGe3zwQgL0N3rDasHH6ZwHtxaHlyrLzjw7xF9M9_AcHPNSxM18KDWK2ZpVcUOfxVVpNH9LwUJT5EtRE7zUrJC_gWV5f345SZRAgXcJzOE-8rM8-RCPTNtns6vVP37V5Eflp +[5]:https://lh5.googleusercontent.com/GsNOQLJlHeZbkaCsrDIhfVvEoycu3D0upoTdt6aNEozAcQA59Z3hDu_SxT6I4K4gwxLPX0YnmUsCKjaQaaG2PoAgUYMcN0Zv0tBFaoUL3sZryddM4mdRj1E2tE-IK_GLK4PDa4zf +[6]:https://lh3.googleusercontent.com/YAR4UgZiFAy2cdkG4U7jQ7_m81lrxR6aHSMOdED7MKEoYxEsX_yLwyMj9N2edt3GJ2JLx6mUsFEZFILCCSBU2sMOqveFVWZTHcCXhFi5P2Xk-9Ikc3NK9seup5CJObIcYJPORdPW +[7]:https://lh6.googleusercontent.com/4LU6eRrzDgBprDuEbSFizRuP1J_zS3rBnoJbU2OAOH3Mx7nfOROfyf81k1s4YQyLBcu0qSXOoaqbYkXL5Wpp9gNCdKH_WsEcqWzjG6uXzYvCYQ42psOz6Iz3NF7ElsPrdiFI0cYv +[8]:https://lh6.googleusercontent.com/7cRgj1O6J8yfO3L4ol5sP-ZCU7_uwOuEoTzsuVW9cU5xiBWz_cpZ1IBidNT0C1wg9zROIncViUzXD0vPoH5cggQtuwkanRfRdDVXOI48AcKFLt-Iq2CBF4mGRwqqWvSOhb0HFpjm +[9]:https://github.com/Microsoft/WSL/issues/2584 From 63a8cb8c82a3acffa22a35dd151d3f375a8a9cbe Mon Sep 17 00:00:00 2001 From: darksun Date: Sat, 24 Feb 2018 10:30:56 +0800 Subject: [PATCH 61/81] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Create=20a=20wiki?= =?UTF-8?q?=20on=20your=20Linux=20desktop=20with=20Zim?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...e a wiki on your Linux desktop with Zim.md | 115 ++++++++++++++++++ 1 file changed, 115 insertions(+) create mode 100644 sources/tech/20180221 Create a wiki on your Linux desktop with Zim.md diff --git a/sources/tech/20180221 Create a wiki on your Linux desktop with Zim.md b/sources/tech/20180221 Create a wiki on your Linux desktop with Zim.md new file mode 100644 index 0000000000..9929e45536 --- /dev/null +++ b/sources/tech/20180221 Create a wiki on your Linux desktop with Zim.md @@ -0,0 +1,115 @@ +Create a wiki on your Linux desktop with Zim +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_bees_network.png?itok=NFNRQpJi) + +There's no denying the usefulness of a wiki, even to a non-geek. You can do so much with one—write notes and drafts, collaborate on projects, build complete websites. And so much more. 
+ +I've used more than a few wikis over the years, either for my own work or at various contract and full-time gigs I've held. While traditional wikis are fine, I really like the idea of [desktop wikis][1] . They're small, easy to install and maintain, and even easier to use. And, as you've probably guessed, there are a number a desktop wikis available for Linux. + +Let's take a look at one of the better desktop wikis: [Zim][2]. + +### Getting going + +You can either [download][3] and install Zim from the software's website, or do it the easy way and install it through your distro's package manager. + +Once Zim's installed, start it up. + +A key concept in Zim is notebooks. They're like a collection of wiki pages on a single subject. When you first start Zim, it asks you to specify a folder for your notebooks and the name of a notebook. Zim suggests "Notes" for the name, and `~/Notebooks/` for the folder. Change that if you want. I did. + +![](https://opensource.com/sites/default/files/u128651/zim1.png) + +After you set the name and the folder for your notebook, click **OK**. You get what's essentially a container for your wiki pages. + +![](https://opensource.com/sites/default/files/u128651/zim2.png) + +### Adding pages to a notebook + +So you have a container. Now what? You start adding pages to it, of course. To do that, select **File > New Page**. + +![](https://opensource.com/sites/default/files/u128651/zim3.png) + +Enter a name for the page, then click **OK**. From there, you can start typing to add information to that page. + +![](https://opensource.com/sites/default/files/u128651/zim4.png) + +That page can be whatever you want it to be: notes for a course you're taking, the outline for a book or article or essay, or an inventory of your books. It's up to you. + +Zim has a number of formatting options, including: + + * Headings + * Character formatting + * Bullet and numbered lists + * Checklists + + + +You can also add images and attach files to your wiki pages, and even pull in text from a text file. + +### Zim's wiki syntax + +You can add formatting to a page using the toolbar, but that's not the only way to do the deed. If, like me, you're kind of old school, you can use wiki markup for formatting. + +[Zim's markup][4] is based on the markup that's used with [DokuWiki][5]. It's essentially [WikiText][6] with a few minor variations. To create a bullet list, for example, type an asterisk. Surround a word or a phrase with two asterisks to make it bold. + +### Adding links + +If you have a number of pages in a notebook, it's easy to link them. There are two ways to do that. + +The first way is to use [CamelCase][7] to name the pages. Let's say I have a notebook called "Course Notes." I can rename the notebook for the data analysis course I'm taking by typing "AnalysisCourse." When I want to link to it from another page in the notebook, I just type "AnalysisCourse" and press the space bar. Instant hyperlink. + +The second way is to click the **Insert link** button on the toolbar. Type the name of the page you want to link to in the **Link to** field, select it from the displayed list of options, then click **Link**. + +![](https://opensource.com/sites/default/files/u128651/zim5.png) + +I've only been able to link between pages in the same notebook. Whenever I've tried to link to a page in another notebook, the file (which has the extension .txt) always opens in a text editor. 
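+
+Putting the markup rules together, a short note might look like this in Zim's editor (a rough sketch that uses only the syntax described above):
+
+```
+Notes for AnalysisCourse:
+
+* Reading list, with the **key titles** in bold
+* Assignment due dates
+```
+
+The CamelCase word becomes a link to that page, the single asterisks become bullets, and the double asterisks render the phrase in bold.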
+ +### Exporting your wiki pages + +There might come a time when you want to use the information in a notebook elsewhere—say, in a document or on a web page. Instead of copying and pasting (and losing formatting), you can export your notebook pages to any of the following formats: + + * HTML + * LaTeX + * Markdown + * ReStructuredText + + + +To do that, click on the wiki page you want to export. Then, select **File > Export**. Decide whether to export the whole notebook or just a single page, then click **Forward**. + +![](https://opensource.com/sites/default/files/u128651/zim6.png) + +Select the file format you want to use to save the page or notebook. With HTML and LaTeX, you can choose a template. Play around to see what works best for you. For example, if you want to turn your wiki pages into HTML presentation slides, you can choose "SlideShow_s5" from the **Template** list. If you're wondering, that produces slides driven by the [S5 slide framework][8]. + +![](https://opensource.com/sites/default/files/u128651/zim7.png) + +Click **Forward**. If you're exporting a notebook, you can choose to export the pages as individual files or as one file. You can also point to the folder where you want to save the exported file. + +![](https://opensource.com/sites/default/files/u128651/zim8.png) + +### Is that all Zim can do? + +Not even close. Zim also has a number of [plugins][9] that expand its capabilities. It even packs a built-in web server that lets you view your notebooks as static HTML files. This is useful for sharing your pages and notebooks on an internal network. + +All in all, Zim is a powerful, yet compact tool for managing your information. It's easily the best desktop wiki I've used, and it's one that I keep going back to. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/2/create-wiki-your-linux-desktop-zim + +作者:[Scott Nesbitt][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/scottnesbitt +[1]:https://opensource.com/article/17/2/3-desktop-wikis +[2]:http://zim-wiki.org/ +[3]:http://zim-wiki.org/downloads.html +[4]:http://zim-wiki.org/manual/Help/Wiki_Syntax.html +[5]:https://www.dokuwiki.org/wiki:syntax +[6]:http://en.wikipedia.org/wiki/Wikilink +[7]:https://en.wikipedia.org/wiki/Camel_case +[8]:https://meyerweb.com/eric/tools/s5/ +[9]:http://zim-wiki.org/manual/Plugins.html From 1e889ae4d33c3b1d9e30c984577ec849daf116f5 Mon Sep 17 00:00:00 2001 From: darksun Date: Sat, 24 Feb 2018 10:33:32 +0800 Subject: [PATCH 62/81] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Getting=20started?= =?UTF-8?q?=20with=20SQL?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../tech/20180221 Getting started with SQL.md | 250 ++++++++++++++++++ 1 file changed, 250 insertions(+) create mode 100644 sources/tech/20180221 Getting started with SQL.md diff --git a/sources/tech/20180221 Getting started with SQL.md b/sources/tech/20180221 Getting started with SQL.md new file mode 100644 index 0000000000..469716e478 --- /dev/null +++ b/sources/tech/20180221 Getting started with SQL.md @@ -0,0 +1,250 @@ +Getting started with SQL +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brain_data.png?itok=RH6NA32X) + +Building a database using SQL is simpler than most people think. 
In fact, you don't even need to be an experienced programmer to use SQL to create a database. In this article, I'll explain how to create a simple relational database management system (RDMS) using MySQL 5.6. Before I get started, I want to quickly thank [SQL Fiddle][1], which I used to run my script. It provides a useful sandbox for testing simple scripts. + + +In this tutorial, I'll build a database that uses the simple schema shown in the entity relationship diagram (ERD) below. The database lists students and the course each is studying. I used two entities (i.e., tables) to keep things simple, with only a single relationship and dependency. The entities are called `dbo_students` and `dbo_courses`. + +![](https://opensource.com/sites/default/files/u128651/erd.png) + +The multiplicity of the database is 1-to-many, as each course can contain many students, but each student can study only one course. + +A quick note on terminology: + + 1. A table is called an entity. + 2. A field is called an attribute. + 3. A record is called a tuple. + 4. The script used to construct the database is called a schema. + + + +### Constructing the schema + +To construct the database, use the `CREATE TABLE ` command, then define each field name and data type. This database uses `VARCHAR(n)` (string) and `INT(n)` (integer), where n refers to the number of values that can be stored. For example `INT(2)` could be 01. + +This is the code used to create the two tables: +``` +CREATE TABLE dbo_students + +( + +  student_id INT(2) AUTO_INCREMENT NOT NULL, + +  student_name VARCHAR(50), + +  course_studied INT(2), + +  PRIMARY KEY (student_id) + +); + + + +CREATE TABLE dbo_courses + +( + +  course_id INT(2) AUTO_INCREMENT NOT NULL, + +  course_name VARCHAR(30), + +  PRIMARY KEY (course_id) + +); + +``` + +`NOT NULL` means that the field cannot be empty, and `AUTO_INCREMENT` means that when a new tuple is added, the ID number will be auto-generated with 1 added to the previously stored ID number in order to enforce referential integrity across entities. `PRIMARY KEY` is the unique identifier attribute for each table. This means each tuple has its own distinct identity. + +### Relationships as a constraint + +As it stands, the two tables exist on their own with no connections or relationships. To connect them, a foreign key must be identified. In `dbo_students`, the foreign key is `course_studied`, the source of which is within `dbo_courses`, meaning that the field is referenced. The specific command within SQL is called a `CONSTRAINT`, and this relationship will be added using another command called `ALTER TABLE`, which allows tables to be edited even after the schema has been constructed. + +The following code adds the relationship to the database construction script: +``` +ALTER TABLE dbo_students + +ADD CONSTRAINT FK_course_studied + +FOREIGN KEY (course_studied) REFERENCES dbo_courses(course_id); + +``` + +Using the `CONSTRAINT` command is not actually necessary, but it's good practice because it means the constraint can be named and it makes maintenance easier. Now that the database is complete, it's time to add some data. + +### Adding data to the database + +`INSERT INTO
+Now that the database is complete, it's time to add some data.
+
+### Adding data to the database
+
+`INSERT INTO` is the command used to add data, choosing which attributes (i.e., fields) the values map to. The entity name is defined first, then the attributes. Underneath this command is the data that will be added to that entity, creating a tuple. If `NOT NULL` has been specified, the attribute cannot be left blank. The following code shows how to add records to the table:
+```
+INSERT INTO dbo_courses(course_id,course_name)
+VALUES(001,'Software Engineering');
+INSERT INTO dbo_courses(course_id,course_name)
+VALUES(002,'Computer Science');
+INSERT INTO dbo_courses(course_id,course_name)
+VALUES(003,'Computing');
+
+INSERT INTO dbo_students(student_id,student_name,course_studied)
+VALUES(001,'student1',001);
+INSERT INTO dbo_students(student_id,student_name,course_studied)
+VALUES(002,'student2',002);
+INSERT INTO dbo_students(student_id,student_name,course_studied)
+VALUES(003,'student3',002);
+INSERT INTO dbo_students(student_id,student_name,course_studied)
+VALUES(004,'student4',003);
+```
+
+Now that the database schema is complete and data is added, it's time to run queries on the database.
+
+### Queries
+
+Queries follow a set structure using these commands:
+```
+SELECT
+FROM
+WHERE
+```
+
+To display all records within the `dbo_courses` entity, including the course code and course name, use an asterisk. This is a wildcard that eliminates the need to type all attribute names. (Its use is not recommended in production databases.) The code for this query is:
+```
+SELECT *
+FROM dbo_courses
+```
+
+The output of this query shows all tuples in the table, so all available courses can be displayed:
+```
+| course_id |          course_name |
+|-----------|----------------------|
+|         1 | Software Engineering |
+|         2 |     Computer Science |
+|         3 |            Computing |
+```
+
+In a future article, I'll explain more complicated queries using one of the three types of joins: Inner, Outer, or Cross.
+
+Here is the completed script:
+```
+CREATE TABLE dbo_students
+(
+  student_id INT(2) AUTO_INCREMENT NOT NULL,
+  student_name VARCHAR(50),
+  course_studied INT(2),
+  PRIMARY KEY (student_id)
+);
+
+CREATE TABLE dbo_courses
+(
+  course_id INT(2) AUTO_INCREMENT NOT NULL,
+  course_name VARCHAR(30),
+  PRIMARY KEY (course_id)
+);
+
+ALTER TABLE dbo_students
+ADD CONSTRAINT FK_course_studied
+FOREIGN KEY (course_studied) REFERENCES dbo_courses(course_id);
+
+INSERT INTO dbo_courses(course_id,course_name)
+VALUES(001,'Software Engineering');
+INSERT INTO dbo_courses(course_id,course_name)
+VALUES(002,'Computer Science');
+INSERT INTO dbo_courses(course_id,course_name)
+VALUES(003,'Computing');
+
+INSERT INTO dbo_students(student_id,student_name,course_studied)
+VALUES(001,'student1',001);
+INSERT INTO dbo_students(student_id,student_name,course_studied)
+VALUES(002,'student2',002);
+INSERT INTO dbo_students(student_id,student_name,course_studied)
+VALUES(003,'student3',002);
+INSERT INTO dbo_students(student_id,student_name,course_studied)
+VALUES(004,'student4',003);
+
+SELECT *
+FROM dbo_courses
+```
+
+### Learning more
+
+SQL isn't difficult; I think it is simpler than general-purpose programming, and the language is broadly portable across database systems. Note that `dbo.` is not a required entity-naming convention; I used it simply because it is the standard in Microsoft SQL Server.
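+
+As a small practice query before moving on, the `WHERE` clause from the query structure shown earlier filters tuples by a condition. With the sample data inserted above, this sketch should return student2 and student3, the two students on course 2:
+```
+SELECT student_name
+FROM dbo_students
+WHERE course_studied = 2;
+```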
+
+If you'd like to learn more, the best guide this side of the internet is [W3Schools.com][2]'s comprehensive guide to SQL for all database platforms.
+
+Please feel free to play around with my database. Also, if you have suggestions or questions, please respond in the comments.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/2/getting-started-sql
+
+作者:[Aaron Cocker][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/aaroncocker
+[1]:http://sqlfiddle.com
+[2]:https://www.w3schools.com/sql/default.asp

From 7d32a437b674a0275018f18c76faf58cacc32b99 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 24 Feb 2018 10:45:00 +0800
Subject: =?UTF-8?q?=E9=80=89=E9=A2=98:=2012=20useful=20zyppe?=
 =?UTF-8?q?r=20command=20examples?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...80221 12 useful zypper command examples.md | 434 ++++++++++++++++++
 1 file changed, 434 insertions(+)
 create mode 100644 sources/tech/20180221 12 useful zypper command examples.md

diff --git a/sources/tech/20180221 12 useful zypper command examples.md b/sources/tech/20180221 12 useful zypper command examples.md
new file mode 100644
index 0000000000..2e5e2c59a9
--- /dev/null
+++ b/sources/tech/20180221 12 useful zypper command examples.md
@@ -0,0 +1,434 @@
+12 useful zypper command examples
+======
+Learn the zypper command with 12 useful examples, along with sample outputs. zypper is used for package and patch management on Suse Linux systems.
+
+![zypper command examples][1]
+
+zypper is a package management system powered by the [ZYpp package manager engine][2]. Suse Linux uses zypper for package management. In this article we will be sharing 12 useful zypper commands, along with examples, which are helpful for your day-to-day sysadmin tasks.
+
+Run without any argument, the `zypper` command lists all the available switches, which is handier than referring to the man page, which is quite detailed.
+
+```
+root@kerneltalks # zypper
+  Usage:
+    zypper [--global-options]  [--command-options] [arguments]
+    zypper  [--command-options] [arguments]
+
+  Global Options:
+    --help, -h                     Help.
+    --version, -V                  Output the version number.
+    --promptids                    Output a list of zypper's user prompts.
+    --config, -c                   Use specified config file instead of the default .
+    --userdata                     User defined transaction id used in history and plugins.
+    --quiet, -q                    Suppress normal output, print only error
+                                   messages.
+    --verbose, -v                  Increase verbosity.
+    --color
+    --no-color                     Whether to use colors in output if tty supports it.
+    --no-abbrev, -A                Do not abbreviate text in tables.
+    --table-style, -s              Table style (integer).
+    --non-interactive, -n          Do not ask anything, use default answers
+                                   automatically.
+    --non-interactive-include-reboot-patches
+                                   Do not treat patches as interactive, which have
+                                   the rebootSuggested-flag set.
+    --xmlout, -x                   Switch to XML output.
+    --ignore-unknown, -i           Ignore unknown packages.
+
+    --reposd-dir, -D               Use alternative repository definition file
+                                   directory.
+    --cache-dir, -C                Use alternative directory for all caches.
+    --raw-cache-dir                Use alternative raw meta-data cache directory.
+    --solv-cache-dir               Use alternative solv file cache directory.
+    --pkg-cache-dir                Use alternative package cache directory.
+
+  Repository Options:
+    --no-gpg-checks                Ignore GPG check failures and continue.
+    --gpg-auto-import-keys         Automatically trust and import new repository
+                                   signing keys.
+    --plus-repo, -p                Use an additional repository.
+    --plus-content                 Additionally use disabled repositories providing a specific keyword.
+                                   Try '--plus-content debug' to enable repos indicating to provide debug packages.
+    --disable-repositories         Do not read meta-data from repositories.
+    --no-refresh                   Do not refresh the repositories.
+    --no-cd                        Ignore CD/DVD repositories.
+    --no-remote                    Ignore remote repositories.
+    --releasever                   Set the value of $releasever in all .repo files (default: distribution version)
+
+  Target Options:
+    --root, -R                     Operate on a different root directory.
+    --disable-system-resolvables
+                                   Do not read installed packages.
+
+  Commands:
+    help, ?                        Print help.
+    shell, sh                      Accept multiple commands at once.
+
+  Repository Management:
+    repos, lr                      List all defined repositories.
+    addrepo, ar                    Add a new repository.
+    removerepo, rr                 Remove specified repository.
+    renamerepo, nr                 Rename specified repository.
+    modifyrepo, mr                 Modify specified repository.
+    refresh, ref                   Refresh all repositories.
+    clean                          Clean local caches.
+
+  Service Management:
+    services, ls                   List all defined services.
+    addservice, as                 Add a new service.
+    modifyservice, ms              Modify specified service.
+    removeservice, rs              Remove specified service.
+    refresh-services, refs         Refresh all services.
+
+  Software Management:
+    install, in                    Install packages.
+    remove, rm                     Remove packages.
+    verify, ve                     Verify integrity of package dependencies.
+    source-install, si             Install source packages and their build
+                                   dependencies.
+    install-new-recommends, inr
+                                   Install newly added packages recommended
+                                   by installed packages.
+
+  Update Management:
+    update, up                     Update installed packages with newer versions.
+    list-updates, lu               List available updates.
+    patch                          Install needed patches.
+    list-patches, lp               List needed patches.
+    dist-upgrade, dup              Perform a distribution upgrade.
+    patch-check, pchk              Check for patches.
+
+  Querying:
+    search, se                     Search for packages matching a pattern.
+    info, if                       Show full information for specified packages.
+    patch-info                     Show full information for specified patches.
+    pattern-info                   Show full information for specified patterns.
+    product-info                   Show full information for specified products.
+    patches, pch                   List all available patches.
+    packages, pa                   List all available packages.
+    patterns, pt                   List all available patterns.
+    products, pd                   List all available products.
+    what-provides, wp              List packages providing specified capability.
+
+  Package Locks:
+    addlock, al                    Add a package lock.
+    removelock, rl                 Remove a package lock.
+    locks, ll                      List current package locks.
+    cleanlocks, cl                 Remove unused locks.
+
+  Other Commands:
+    versioncmp, vcmp               Compare two version strings.
+    targetos, tos                  Print the target operating system ID string.
+    licenses                       Print report about licenses and EULAs of
+                                   installed packages.
+    download                       Download rpms specified on the commandline to a local directory.
+    source-download                Download source rpms for all installed packages
+                                   to a local directory.
+
+  Subcommands:
+    subcommand                     Lists available subcommands.
+
+Type 'zypper help ' to get command-specific help.
+```
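+
+Before working through the examples below, it is usually worth making sure the repository metadata is current. The help output above lists `refresh, ref` for exactly this; a quick sketch (the per-repository progress output is omitted here, so run it on your own system to see it):
+```
+root@kerneltalks # zypper refresh
+```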
+##### How to install package using zypper
+
+`zypper` takes the `in` or `install` switch to install a package on your system. It's the same as [yum package installation][3]: supply the package name as an argument, and the package manager (zypper here) will resolve all dependencies and install them along with your required package.
+
+```
+# zypper install telnet
+Refreshing service 'SMT-http_smt-ec2_susecloud_net'.
+Refreshing service 'cloud_update'.
+Loading repository data...
+Reading installed packages...
+Resolving package dependencies...
+
+The following NEW package is going to be installed:
+  telnet
+
+1 new package to install.
+Overall download size: 51.8 KiB. Already cached: 0 B. After the operation, additional 113.3 KiB will be used.
+Continue? [y/n/...? shows all options] (y): y
+Retrieving package telnet-1.2-165.63.x86_64 (1/1), 51.8 KiB (113.3 KiB unpacked)
+Retrieving: telnet-1.2-165.63.x86_64.rpm .........................................................................................................................[done]
+Checking for file conflicts: .....................................................................................................................................[done]
+(1/1) Installing: telnet-1.2-165.63.x86_64 .......................................................................................................................[done]
+```
+
+The above output, for your reference, shows the installation of the `telnet` package.
+
+Suggested read : [Install packages in YUM and APT systems][3]
+
+##### How to remove package using zypper
+
+For erasing or removing packages in Suse Linux, use `zypper` with the `remove` or `rm` switch.
+
+```
+root@kerneltalks # zypper rm telnet
+Loading repository data...
+Reading installed packages...
+Resolving package dependencies...
+
+The following package is going to be REMOVED:
+  telnet
+
+1 package to remove.
+After the operation, 113.3 KiB will be freed.
+Continue? [y/n/...? shows all options] (y): y
+(1/1) Removing telnet-1.2-165.63.x86_64 ..........................................................................................................................[done]
+```
+
+We removed the previously installed telnet package here.
+
+##### Check dependencies and verify integrity of installed packages using zypper
+
+There are times when packages get installed by force, ignoring dependencies. `zypper` gives you the power to scan all installed packages and check their dependencies, too. If any dependency is missing, it offers to install or remove it, hence maintaining the integrity of your installed packages.
+
+Use the `verify` or `ve` switch with `zypper` to check the integrity of installed packages.
+
+```
+root@kerneltalks # zypper ve
+Refreshing service 'SMT-http_smt-ec2_susecloud_net'.
+Refreshing service 'cloud_update'.
+Loading repository data...
+Reading installed packages...
+
+Dependencies of all installed packages are satisfied.
+```
+
+In the above output, the last line confirms that all dependencies of installed packages are satisfied and no action is required.
+
+##### How to download package using zypper in Suse Linux
+
+`zypper` offers a way to download a package to a local directory without installing it. You can use this downloaded package on another system with the same configuration. Packages are downloaded to the `/var/cache/zypp/packages/<repo>/<arch>/` directory, as the listing further below shows.
+
+```
+root@kerneltalks # zypper download telnet
+Refreshing service 'SMT-http_smt-ec2_susecloud_net'.
+Refreshing service 'cloud_update'.
+Loading repository data...
+Reading installed packages...
+Retrieving package telnet-1.2-165.63.x86_64 (1/1), 51.8 KiB (113.3 KiB unpacked) +(1/1) /var/cache/zypp/packages/SMT-http_smt-ec2_susecloud_net:SLES12-SP3-Pool/x86_64/telnet-1.2-165.63.x86_64.rpm ................................................[done] + +download: Done. + +# ls -lrt /var/cache/zypp/packages/SMT-http_smt-ec2_susecloud_net:SLES12-SP3-Pool/x86_64/ +total 52 +-rw-r--r-- 1 root root 53025 Feb 21 03:17 telnet-1.2-165.63.x86_64.rpm + +``` +You can see we have downloaded telnet package locally using `zypper` + +Suggested read : [Download packages in YUM and APT systems without installing][4] + +##### How to list available package update in zypper + +`zypper` allows you to view all available updates for your installed packages so that you can plan update activity in advance. Use `list-updates` or `lu` switch to show you list of all available updates for installed packages. + +``` +root@kerneltalks # zypper lu +Refreshing service 'SMT-http_smt-ec2_susecloud_net'. +Refreshing service 'cloud_update'. +Loading repository data... +Reading installed packages... +S | Repository | Name | Current Version | Available Version | Arch +--|-----------------------------------|----------------------------|-------------------------------|------------------------------------|------- +v | SLES12-SP3-Updates | at-spi2-core | 2.20.2-12.3 | 2.20.2-14.3.1 | x86_64 +v | SLES12-SP3-Updates | bash | 4.3-82.1 | 4.3-83.5.2 | x86_64 +v | SLES12-SP3-Updates | ca-certificates-mozilla | 2.7-11.1 | 2.22-12.3.1 | noarch +v | SLE-Module-Containers12-Updates | containerd | 0.2.5+gitr639_422e31c-20.2 | 0.2.9+gitr706_06b9cb351610-16.8.1 | x86_64 +v | SLES12-SP3-Updates | crash | 7.1.8-4.3.1 | 7.1.8-4.6.2 | x86_64 +v | SLES12-SP3-Updates | rsync | 3.1.0-12.1 | 3.1.0-13.10.1 | x86_64 +``` +Output is properly formatted for easy reading. Column wise it shows name of repo where package belongs, package name, installed version, new updated available version & architecture. + +##### List and install patches in Suse linux + +Use `list-patches` or `lp` switch to display all available patches for your Suse Linux system which needs to be applied. + +``` +root@kerneltalks # zypper lp +Refreshing service 'SMT-http_smt-ec2_susecloud_net'. +Refreshing service 'cloud_update'. +Loading repository data... +Reading installed packages... 
+ +Repository | Name | Category | Severity | Interactive | Status | Summary +----------------------------------|------------------------------------------|-------------|-----------|-------------|--------|------------------------------------------------------------------------------------ +SLE-Module-Containers12-Updates | SUSE-SLE-Module-Containers-12-2018-273 | security | important | --- | needed | Version update for docker, docker-runc, containerd, golang-github-docker-libnetwork +SLE-Module-Containers12-Updates | SUSE-SLE-Module-Containers-12-2018-62 | recommended | low | --- | needed | Recommended update for sle2docker +SLE-Module-Public-Cloud12-Updates | SUSE-SLE-Module-Public-Cloud-12-2018-268 | recommended | low | --- | needed | Recommended update for python-ecdsa +SLES12-SP3-Updates | SUSE-SLE-SERVER-12-SP3-2018-116 | security | moderate | --- | needed | Security update for rsync +---- output clipped ---- +SLES12-SP3-Updates | SUSE-SLE-SERVER-12-SP3-2018-89 | security | moderate | --- | needed | Security update for perl-XML-LibXML +SLES12-SP3-Updates | SUSE-SLE-SERVER-12-SP3-2018-90 | recommended | low | --- | needed | Recommended update for lvm2 + +Found 37 applicable patches: +37 patches needed (18 security patches) +``` + +Output is pretty much nicely organised with respective headers. You can easily figure out and plan your patch update accordingly. We can see out of 37 patches available on our system 18 are security ones and needs to be applied on high priority! + +You can install all needed patches by issuing `zypper patch` command. + +##### How to update package using zypper + +To update package using zypper, use `update` or `up` switch followed by package name. In above list updates command we learned that `rsync` package update is available on our server. Let update it now – + +``` +root@kerneltalks # zypper update rsync +Refreshing service 'SMT-http_smt-ec2_susecloud_net'. +Refreshing service 'cloud_update'. +Loading repository data... +Reading installed packages... +Resolving package dependencies... + +The following package is going to be upgraded: + rsync + +1 package to upgrade. +Overall download size: 325.2 KiB. Already cached: 0 B. After the operation, additional 64.0 B will be used. +Continue? [y/n/...? shows all options] (y): y +Retrieving package rsync-3.1.0-13.10.1.x86_64 (1/1), 325.2 KiB (625.5 KiB unpacked) +Retrieving: rsync-3.1.0-13.10.1.x86_64.rpm .......................................................................................................................[done] +Checking for file conflicts: .....................................................................................................................................[done] +(1/1) Installing: rsync-3.1.0-13.10.1.x86_64 .....................................................................................................................[done] +``` + +##### Search package using zypper in Suse Linux + +If you are not sure about full package name, no worries. You can search packages in zypper by supplying search string with `se` or `search` switch + +``` +root@kerneltalks # zypper se lvm +Refreshing service 'SMT-http_smt-ec2_susecloud_net'. +Refreshing service 'cloud_update'. +Loading repository data... +Reading installed packages... 
+ +S | Name | Summary | Type +---|---------------|------------------------------|----------- + | libLLVM | Libraries for LLVM | package + | libLLVM-32bit | Libraries for LLVM | package + | llvm | Low Level Virtual Machine | package + | llvm-devel | Header Files for LLVM | package + | lvm2 | Logical Volume Manager Tools | srcpackage +i+ | lvm2 | Logical Volume Manager Tools | package + | lvm2-devel | Development files for LVM2 | package + +``` +In above example we searched `lvm` string and came up with the list shown above. You can use `Name` in zypper install/remove/update commands. + +##### Check installed package information using zypper + +You can check installed packages details using zypper. `info` or `if` switch will list out information of installed package. It can also displays package details which is not installed. In that case, `Installed` parameter will reflect `No` value. +``` +root@kerneltalks # zypper info rsync +Refreshing service 'SMT-http_smt-ec2_susecloud_net'. +Refreshing service 'cloud_update'. +Loading repository data... +Reading installed packages... + + +Information for package rsync: +------------------------------ +Repository : SLES12-SP3-Updates +Name : rsync +Version : 3.1.0-13.10.1 +Arch : x86_64 +Vendor : SUSE LLC +Support Level : Level 3 +Installed Size : 625.5 KiB +Installed : Yes +Status : up-to-date +Source package : rsync-3.1.0-13.10.1.src +Summary : Versatile tool for fast incremental file transfer +Description : + Rsync is a fast and extraordinarily versatile file copying tool. It can copy + locally, to/from another host over any remote shell, or to/from a remote rsync + daemon. It offers a large number of options that control every aspect of its + behavior and permit very flexible specification of the set of files to be + copied. It is famous for its delta-transfer algorithm, which reduces the amount + of data sent over the network by sending only the differences between the + source files and the existing files in the destination. Rsync is widely used + for backups and mirroring and as an improved copy command for everyday use. +``` + +##### List repositories using zypper + +To list repo use `lr` or `repos` switch with zypper command. It will list all available repos which includes enabled and not-enabled both repos. + +``` +root@kerneltalks # zypper lr +Refreshing service 'cloud_update'. +Repository priorities are without effect. All enabled repositories share the same priority. 
+ +# | Alias | Name | Enabled | GPG Check | Refresh +---|--------------------------------------------------------------------------------------|-------------------------------------------------------|---------|-----------|-------- + 1 | SMT-http_smt-ec2_susecloud_net:SLE-Module-Adv-Systems-Management12-Debuginfo-Pool | SLE-Module-Adv-Systems-Management12-Debuginfo-Pool | No | ---- | ---- + 2 | SMT-http_smt-ec2_susecloud_net:SLE-Module-Adv-Systems-Management12-Debuginfo-Updates | SLE-Module-Adv-Systems-Management12-Debuginfo-Updates | No | ---- | ---- + 3 | SMT-http_smt-ec2_susecloud_net:SLE-Module-Adv-Systems-Management12-Pool | SLE-Module-Adv-Systems-Management12-Pool | Yes | (r ) Yes | No + 4 | SMT-http_smt-ec2_susecloud_net:SLE-Module-Adv-Systems-Management12-Updates | SLE-Module-Adv-Systems-Management12-Updates | Yes | (r ) Yes | Yes + 5 | SMT-http_smt-ec2_susecloud_net:SLE-Module-Containers12-Debuginfo-Pool | SLE-Module-Containers12-Debuginfo-Pool | No | ---- | ---- + 6 | SMT-http_smt-ec2_susecloud_net:SLE-Module-Containers12-Debuginfo-Updates | SLE-Module-Containers12-Debuginfo-Updates | No | ---- | ---- +``` + +here you need to check enabled column to check which repos are enabled and which are not. + +##### Add and remove repo in Suse Linux using zypper + +To add repo you will need URI of repo/.repo file or else you end up in below error. + +``` +root@kerneltalks # zypper addrepo -c SLES12-SP3-Updates +If only one argument is used, it must be a URI pointing to a .repo file. +``` + + +With URI, you can add repo like below : + +``` +root@kerneltalks # zypper addrepo -c http://smt-ec2.susecloud.net/repo/SUSE/Products/SLE-SDK/12-SP3/x86_64/product?credentials=SMT-http_smt-ec2_susecloud_net SLE-SDK12-SP3-Pool +Adding repository 'SLE-SDK12-SP3-Pool' ...........................................................................................................................[done] +Repository 'SLE-SDK12-SP3-Pool' successfully added + +URI : http://smt-ec2.susecloud.net/repo/SUSE/Products/SLE-SDK/12-SP3/x86_64/product?credentials=SMT-http_smt-ec2_susecloud_net +Enabled : Yes +GPG Check : Yes +Autorefresh : No +Priority : 99 (default priority) + +Repository priorities are without effect. All enabled repositories share the same priority. +``` + +Use `addrepo` or `ar` switch with `zypper` to add repo in Suse. Followed by URI and lastly you need to provide alias as well. + +To remove repo in Suse, use `removerepo` or `rr` switch with `zypper`. +``` +root@kerneltalks # zypper removerepo nVidia-Driver-SLE12-SP3 +Removing repository 'nVidia-Driver-SLE12-SP3' ....................................................................................................................[done] +Repository 'nVidia-Driver-SLE12-SP3' has been removed. +``` + +##### Clean local zypper cache + +Cleaning up local zypper caches with `zypper clean` command – + +``` +root@kerneltalks # zypper clean +All repositories have been cleaned up. 
+```
+
+--------------------------------------------------------------------------------

+via: https://kerneltalks.com/commands/12-useful-zypper-command-examples/
+
+作者:[KernelTalks][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://kerneltalks.com
+[1]:https://a2.kerneltalks.com/wp-content/uploads/2018/02/zypper-command-examples.png
+[2]:https://en.wikipedia.org/wiki/ZYpp
+[3]:https://kerneltalks.com/tools/package-installation-linux-yum-apt/
+[4]:https://kerneltalks.com/howto/download-package-using-yum-apt/

From a9ce7b8fda977ce98403fc2592103735568d8ac6 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 24 Feb 2018 10:48:29 +0800
Subject: =?UTF-8?q?=E9=80=89=E9=A2=98:=20cTop=20-=20A=20CLI?=
 =?UTF-8?q?=20Tool=20For=20Container=20Monitoring?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...p - A CLI Tool For Container Monitoring.md | 120 ++++++++++++++++++
 1 file changed, 120 insertions(+)
 create mode 100644 sources/tech/20180221 cTop - A CLI Tool For Container Monitoring.md

diff --git a/sources/tech/20180221 cTop - A CLI Tool For Container Monitoring.md b/sources/tech/20180221 cTop - A CLI Tool For Container Monitoring.md
new file mode 100644
index 0000000000..9a25f29436
--- /dev/null
+++ b/sources/tech/20180221 cTop - A CLI Tool For Container Monitoring.md
@@ -0,0 +1,120 @@
+cTop - A CLI Tool For Container Monitoring
+======
+Linux containers are quite famous these days; most of us already work with them, and the rest of us are starting to learn about them.
+
+We have already covered articles about famous GUI (Graphical User Interface) tools such as Portainer & Rancher, which help us manage containers through a GUI.
+
+This tutorial will help us understand and monitor Linux containers through the cTop command. It's a command-line tool, like the top command.
+
+### What's cTop
+
+[ctop][1] provides a concise and condensed overview of real-time metrics for multiple containers. It's a top-like interface for container metrics.
+
+It displays container metrics such as CPU utilization, memory utilization, disk I/O read & write, process ID (PID), and network transmit (TX – transmitted FROM this server) and receive (RX – received TO this server).
+
+ctop comes with built-in support for Docker and runC; connectors for other container and cluster systems are planned for future releases.
+It doesn't require any arguments and uses Docker host variables by default.
+
+**Suggested Read :**
+**(#)** [Portainer – A Simple Docker Management GUI][2]
+**(#)** [Rancher – A Complete Container Management Platform For Production Environment][3]
+
+### How To Install cTop
+
+The developer offers a prebuilt ctop binary, which lets us use ctop instantly. All we have to do is download the ctop binary to the `/usr/local/bin` directory for global access, then assign it execute permission.
+
+Download the ctop binary to the `/usr/local/bin` directory.
+```
+$ sudo wget https://github.com/bcicen/ctop/releases/download/v0.7/ctop-0.7-linux-amd64 -O /usr/local/bin/ctop
+```
+
+Set execute permission on the ctop binary.
+```
+$ sudo chmod +x /usr/local/bin/ctop
+```
+
+Alternatively, you can install and run ctop through Docker. Make sure you have installed Docker as a prerequisite for this. To install Docker, refer to the following links.
+
+**Suggested Read :**
+**(#)** [How to install Docker in Linux][4]
+**(#)** [How to play with Docker images on Linux][5]
+**(#)** [How to play with Docker containers on Linux][6]
+**(#)** [How to Install, Run Applications inside Docker Containers][7]
+```
+$ docker run --rm -ti \
+  --name=ctop \
+  -v /var/run/docker.sock:/var/run/docker.sock \
+  quay.io/vektorlab/ctop:latest
+```
+
+### How To Use cTop
+
+Just launch the ctop utility without any arguments. By default, the `a` key toggles the display of all containers (running and non-running). The ctop header shows your system time and the total number of containers.
+```
+$ ctop
+```
+
+You might get output similar to the one below.
+![][9]
+
+### How To Manage Containers
+
+You can administrate the containers using ctop. Select a container that you want to manage, then hit the `Enter` key and choose the required option, like start, stop, remove, etc.
+![][10]
+
+### How To Sort Containers
+
+By default ctop sorts the containers using the state field. Hit the `s` key to sort the containers by a different field.
+![][11]
+
+### How To View the Containers Metrics
+
+If you want to view more details & metrics about a container, just select the corresponding container, then hit the `o` key.
+![][12]
+
+### How To View Container Logs
+
+Select the corresponding container whose logs you want to view, then hit the `l` key.
+![][13]
+
+### Display Only Active Containers
+
+Run the ctop command with the `-a` option to show active containers only.
+![][14]
+
+### Open Help Dialog Box
+
+Run ctop, then just hit the `h` key to open the help section.
+![][15]
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/ctop-a-command-line-tool-for-container-monitoring-and-management-in-linux/
+
+作者:[2DAYGEEK][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.2daygeek.com/author/2daygeek/
+[1]:https://github.com/bcicen/ctop
+[2]:https://www.2daygeek.com/portainer-a-simple-docker-management-gui/
+[3]:https://www.2daygeek.com/rancher-a-complete-container-management-platform-for-production-environment/
+[4]:https://www.2daygeek.com/install-docker-on-centos-rhel-fedora-ubuntu-debian-oracle-archi-scentific-linux-mint-opensuse/
+[5]:https://www.2daygeek.com/list-search-pull-download-remove-docker-images-on-linux/
+[6]:https://www.2daygeek.com/create-run-list-start-stop-attach-delete-interactive-daemonized-docker-containers-on-linux/
+[7]:https://www.2daygeek.com/install-run-applications-inside-docker-containers/
+[8]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[9]:https://www.2daygeek.com/wp-content/uploads/2018/02/ctop-a-command-line-tool-for-container-monitoring-and-management-in-linux-1.png
+[10]:https://www.2daygeek.com/wp-content/uploads/2018/02/ctop-a-command-line-tool-for-container-monitoring-and-management-in-linux-2.png
+[11]:https://www.2daygeek.com/wp-content/uploads/2018/02/ctop-a-command-line-tool-for-container-monitoring-and-management-in-linux-3.png
+[12]:https://www.2daygeek.com/wp-content/uploads/2018/02/ctop-a-command-line-tool-for-container-monitoring-and-management-in-linux-4a.png
+[13]:https://www.2daygeek.com/wp-content/uploads/2018/02/ctop-a-command-line-tool-for-container-monitoring-and-management-in-linux-7.png
+[14]:https://www.2daygeek.com/wp-content/uploads/2018/02/ctop-a-command-line-tool-for-container-monitoring-and-management-in-linux-5.png +[15]:https://www.2daygeek.com/wp-content/uploads/2018/02/ctop-a-command-line-tool-for-container-monitoring-and-management-in-linux-6.png From 0a244b1a27eb0617344cd625863b42aa2d9fd46f Mon Sep 17 00:00:00 2001 From: Flowsnow Date: Sat, 24 Feb 2018 10:49:16 +0800 Subject: [PATCH 65/81] =?UTF-8?q?[=E7=94=B3=E8=AF=B7=E7=BF=BB=E8=AF=91]-20?= =?UTF-8?q?180129=20Parsing=20HTML=20with=20Python.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- sources/tech/20180129 Parsing HTML with Python.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180129 Parsing HTML with Python.md b/sources/tech/20180129 Parsing HTML with Python.md index d0dbee596f..bc6e4ff2e6 100644 --- a/sources/tech/20180129 Parsing HTML with Python.md +++ b/sources/tech/20180129 Parsing HTML with Python.md @@ -1,3 +1,5 @@ +translating by Flowsnow + Parsing HTML with Python ====== From e6bff4dcd73b0c58606d20acdc497e21559b9043 Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 24 Feb 2018 11:09:33 +0800 Subject: [PATCH 66/81] PRF:20171214 How to install and use encryptpad on ubuntu 16.04.md @singledo --- ...tall and Use Encryptpad on Ubuntu 16.04.md | 0 ...tall and use encryptpad on ubuntu 16.04.md | 151 ++++++++++-------- 2 files changed, 84 insertions(+), 67 deletions(-) delete mode 100644 sources/tech/20171214 How to Install and Use Encryptpad on Ubuntu 16.04.md diff --git a/sources/tech/20171214 How to Install and Use Encryptpad on Ubuntu 16.04.md b/sources/tech/20171214 How to Install and Use Encryptpad on Ubuntu 16.04.md deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/translated/tech/20171214 How to install and use encryptpad on ubuntu 16.04.md b/translated/tech/20171214 How to install and use encryptpad on ubuntu 16.04.md index 83e8f78645..baf29e563b 100644 --- a/translated/tech/20171214 How to install and use encryptpad on ubuntu 16.04.md +++ b/translated/tech/20171214 How to install and use encryptpad on ubuntu 16.04.md @@ -1,89 +1,106 @@ -# How To Install and Use Encryptpad on Ubuntu 16.04 -``` -EncryptPad 是一个免费的开源软件 ,它通过简单的图片转换和命令行接口来查看和修改加密的文件文件 ,它使用 OpenPGP RFC 4880 文件格式 。通过 EncryptPad ,你可以很容易的加密或者解密文件 。你能够像保存密码 ,信用卡信息 ,密码或者密钥文件这类的私人信息 。 -``` -## 特性 -- 支持 windows ,Linux ,和 Max OS 。 -- 可定制的密码生成器 ,足够健壮的密码 。 -- 随机密钥文件和密码生成器 。 -- 至此 GPG 和 EPD 文件格式 。 -- 通过 CURL 自动从远程远程仓库下载密钥 。 -- 密钥文件能够存储在加密文件中 。如果生效 ,你不需要每次打开文件都指定密钥文件 。 -- 提供只读模式来保护文件不被修改 。 -- 可加密二进制文件 。例如 图片 ,视屏 ,档案 。 +如何在 Ubuntu 16.04 上安装和使用 Encryptpad +============== + +EncryptPad 是一个自由开源软件,它通过简单方便的图形界面和命令行接口来查看和修改加密的文本,它使用 OpenPGP RFC 4880 文件格式。通过 EncryptPad,你可以很容易的加密或者解密文件。你能够像保存密码、信用卡信息等私人信息,并使用密码或者密钥文件来访问。 + +### 特性 + +- 支持 windows、Linux 和 Max OS。 +- 可定制的密码生成器,可生成健壮的密码。 +- 随机的密钥文件和密码生成器。 +- 支持 GPG 和 EPD 文件格式。 +- 能够通过 CURL 自动从远程远程仓库下载密钥。 +- 密钥文件的路径能够存储在加密的文件中。如果这样做的话,你不需要每次打开文件都指定密钥文件。 +- 提供只读模式来防止文件被修改。 +- 可加密二进制文件,例如图片、视频、归档等。 + + +在这份教程中,我们将学习如何在 Ubuntu 16.04 中安装和使用 EncryptPad。 + +### 环境要求 + +- 在系统上安装了 Ubuntu 16.04 桌面版本。 +- 在系统上有 `sudo` 的权限的普通用户。 + +### 安装 EncryptPad + +在默认情况下,EncryPad 在 Ubuntu 16.04 的默认仓库是不存在的。你需要安装一个额外的仓库。你能够通过下面的命令来添加它 : ``` -在这份引导说明中 ,我们将学习如何在 Ubuntu 16.04 中安装和使用 EncryptPad 。 +sudo apt-add-repository ppa:nilaimogard/webupd8 ``` -## 环境要求 -- 在系统上安装了 Ubuntu 16.04 桌面版本 。 -- 用户在系统上有 sudo 的权限 。 -## 安装 EncryptPad -在默认情况下 ,EncryPad 在 Ubuntu 16.04 的默认仓库是不存在的 。你需要安装一个额外的仓库 。你能够通过下面的命令来添加它 : -- **sudo apt-add-repository 
ppa:nilaimogard/webupd8** +下一步,用下面的命令来更新仓库: - 下一步 ,用下面的命令来更新仓库 : -- **sudo apt-get update -y** - - 最后一步 ,通过下面命令安装 EncryptPAd : -- **sudo apt-get install encryptpad encryptcli -y** - -当 EncryptPad 安装完成 ,你需要将它固定到 Ubuntu 的仪表板上 。 - -## 使用 EncryptPad 生成密钥和密码 ``` -现在 ,去 Ubunntu Dash 上输入 encryptpad ,你能够在你的屏幕上看到下面的图片 : +sudo apt-get update -y ``` + +最后一步,通过下面命令安装 EncryptPad: + +``` +sudo apt-get install encryptpad encryptcli -y +``` + +当 EncryptPad 安装完成后,你可以在 Ubuntu 的 Dash 上找到它。 + +### 使用 EncryptPad 生成密钥和密码 + +现在,在 Ubunntu Dash 上输入 `encryptpad`,你能够在你的屏幕上看到下面的图片 : + [![Ubuntu DeskTop][1]][2] -``` -下一步 ,点击 EncryptPad 的图标 。你能够看到 EncryptPad 的界面 ,有一个简单的文本编辑器以及顶部菜单栏 。 -``` +下一步,点击 EncryptPad 的图标。你能够看到 EncryptPad 的界面,它是一个简单的文本编辑器,带有顶部菜单栏。 + [![EncryptPad screen][3]][4] -``` -首先 ,你需要产生一个密钥和密码来给将来加密/解密任务使用 。点击顶部菜单栏中的 Encryption->Generate Key ,你会看见下面的界面 : -``` -[![Generate key][5]][6] -``` -选择文件保存的路径 ,点击 OK 按钮 ,你将看到下面的界面 。 -``` -[![select path][7]][8] -``` -输入密钥文件的密码 ,点击 OK 按钮 ,你将看到下面的界面 : -``` -[![last step][9]][10] -``` -点击 yes 按钮来完成进程 。 -``` -## 加密和解密文件 -``` -现在 ,密钥文件和密码都已经生成了 。现在可以执行加密和解密操作了 。在这个文件编辑器中打开一个文件文件 ,点击加密图标 ,你会看见下面的界面 : -``` -[![Encry operation][11]][12] -``` -提供需要加密的文件和指定输出的文件 ,提供密码和前面产生的密钥文件 。点击 Start 按钮来开始加密的进程 。当文件被成功的加密 ,会出现下面的界面 : -```` -[![Success Encrypt][13]][14] -``` -文件已经被密码和密钥加密了 。 -``` +首先,你需要生成一个密钥文件和密码用于加密/解密任务。点击顶部菜单栏中的 “Encryption->Generate Key”,你会看见下面的界面: + +[![Generate key][5]][6] + +选择文件保存的路径,点击 “OK” 按钮,你将看到下面的界面: + +[![select path][7]][8] + +输入密钥文件的密码,点击 “OK” 按钮 ,你将看到下面的界面: + +[![last step][9]][10] + +点击 “yes” 按钮来完成该过程。 + +### 加密和解密文件 + +现在,密钥文件和密码都已经生成了。可以执行加密和解密操作了。在这个文件编辑器中打开一个文件文件,点击 “encryption” 图标 ,你会看见下面的界面: + +[![Encry operation][11]][12] + +提供需要加密的文件和指定输出的文件,提供密码和前面产生的密钥文件。点击 “Start” 按钮来开始加密的进程。当文件被成功的加密,会出现下面的界面: + +[![Success Encrypt][13]][14] + +文件已经被该密码和密钥文件加密了。 + +如果你想解密被加密后的文件,打开 EncryptPad ,点击 “File Encryption” ,选择 “Decryption” 操作,提供加密文件的位置和你要保存输出的解密文件的位置,然后提供密钥文件地址,点击 “Start” 按钮,它将要求你输入密码,输入你先前加密使用的密码,点击 “OK” 按钮开始解密过程。当该过程成功完成,你会看到 “File has been decrypted successfully” 的消息 。 + -``` -如果你想解密被加密后的文件 ,打开 EncryptPad ,点击 File Encryption ,选择 Decryptio 操作 ,提供加密文件的地址和输出解密文件的地址 ,提供密钥文件地址 ,点击 Start 按钮 ,如果请求输入密码 ,输入你先前加密使用的密码 ,点击 OK 按钮开始解密过程 。当过程成功完成 ,你会看到 “ File has been decrypted successfully message ” 。 -``` [![decrypt ][16]][17] [![][18]][18] [![][13]] -**注意** -``` -如果你遗忘了你的密码或者丢失了密钥文件 ,没有其他的方法打开你的加密信息 。对于 EncrypePad 支持的格式是没有后门的 。 -``` +**注意:** + +如果你遗忘了你的密码或者丢失了密钥文件,就没有其他的方法可以打开你的加密信息了。对于 EncrypePad 所支持的格式是没有后门的。 -------------------------------------------------------------------------------- +via: https://www.howtoforge.com/tutorial/how-to-install-and-use-encryptpad-on-ubuntu-1604/ + +作者:[Hitesh Jethva][a] +译者:[singledo](https://github.com/singledo) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + [a]:https://www.howtoforge.com [1]:https://www.howtoforge.com/images/how_to_install_and_use_encryptpad_on_ubuntu_1604/Screenshot-of-encryptpad-dash.png From df2d97318394687aff101f9ec6bff0142a220962 Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 24 Feb 2018 11:16:27 +0800 Subject: [PATCH 67/81] PUB:20171214 How to install and use encryptpad on ubuntu 16.04.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @singledo https://linux.cn/article-9377-1.html 这篇翻译不够认真,望继续努力。 --- .../20171214 How to install and use encryptpad on ubuntu 16.04.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20171214 How to 
install and use encryptpad on ubuntu 16.04.md (100%) diff --git a/translated/tech/20171214 How to install and use encryptpad on ubuntu 16.04.md b/published/20171214 How to install and use encryptpad on ubuntu 16.04.md similarity index 100% rename from translated/tech/20171214 How to install and use encryptpad on ubuntu 16.04.md rename to published/20171214 How to install and use encryptpad on ubuntu 16.04.md From 8b4b118966820ebe73a802372af0c3cd2dcbcec9 Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 24 Feb 2018 11:36:40 +0800 Subject: [PATCH 68/81] PRF&PUB:20171203 Increase Torrent Speed - Here Is Why It Will Never Work.md @lujun9972 --- ... Speed - Here Is Why It Will Never Work.md | 24 +++++++++---------- 1 file changed, 11 insertions(+), 13 deletions(-) rename {translated/tech => published}/20171203 Increase Torrent Speed - Here Is Why It Will Never Work.md (80%) diff --git a/translated/tech/20171203 Increase Torrent Speed - Here Is Why It Will Never Work.md b/published/20171203 Increase Torrent Speed - Here Is Why It Will Never Work.md similarity index 80% rename from translated/tech/20171203 Increase Torrent Speed - Here Is Why It Will Never Work.md rename to published/20171203 Increase Torrent Speed - Here Is Why It Will Never Work.md index 127bb21066..cbb2dda3e1 100644 --- a/translated/tech/20171203 Increase Torrent Speed - Here Is Why It Will Never Work.md +++ b/published/20171203 Increase Torrent Speed - Here Is Why It Will Never Work.md @@ -1,23 +1,22 @@ -Torrent 提速 - 为什么总是无济于事 +Torrent 提速为什么总是无济于事 ====== -![](http://www.theitstuff.com/wp-content/uploads/2017/11/increase-torrent-speed.jpg) +![](http://www.theitstuff.com/wp-content/uploads/2017/11/increase-torrent-speed.jpg) + 是不是总是想要 **更快的 torrent 速度**?不管现在的速度有多块,但总是无法对此满足。我们对 torrent 速度的痴迷使我们经常从包括 YouTube 视频在内的许多网站上寻找并应用各种所谓的技巧。但是相信我,从小到大我就没发现哪个技巧有用过。因此本文我们就就来看看,为什么尝试提高 torrent 速度是行不通的。 -## 影响速度的因素 +### 影响速度的因素 -### 本地因素 +#### 本地因素 -从下图中可以看到 3 台电脑分别对应的 A,B,C 三个用户。A 和 B 本地相连,而 C 的位置则比较远,它与本地之间有 1,2,3 三个连接点。 +从下图中可以看到 3 台电脑分别对应的 A、B、C 三个用户。A 和 B 本地相连,而 C 的位置则比较远,它与本地之间有 1、2、3 三个连接点。 [![][1]][2] 若用户 A 和用户 B 之间要分享文件,他们之间直接分享就能达到最大速度了而无需使用 torrent。这个速度跟互联网什么的都没有关系。 + 网线的性能 - + 网卡的性能 - + 路由器的性能 当谈到 torrent 的时候,人们都是在说一些很复杂的东西,但是却总是不得要点。 @@ -30,7 +29,7 @@ Torrent 提速 - 为什么总是无济于事 即使你把目标降到 30 Megabytes,然而你连接到路由器的电缆/网线的性能最多只有 100 megabits 也就是 10 MegaBytes。这是一个纯粹的瓶颈问题,由一个薄弱的环节影响到了其他强健部分,也就是说这个传输速率只能达到 10 Megabytes,即电缆的极限速度。现在想象有一个 torrent 即使能够用最大速度进行下载,那也会由于你的硬件不够强大而导致瓶颈。 -### 外部因素 +#### 外部因素 现在再来看一下这幅图。用户 C 在很遥远的某个地方。甚至可能在另一个国家。 @@ -40,24 +39,23 @@ Torrent 提速 - 为什么总是无济于事 第二,由于 C 与本地之间多个有连接点,其中一个点就有可能成为瓶颈所在,可能由于繁重的流量和相对薄弱的硬件导致了缓慢的速度。 -### Seeders( 译者注:做种者) 与 Leechers( 译者注:只下载不做种的人) +#### 做种者与吸血者 -关于此已经有了太多的讨论,总的想法就是搜索更多的种子,但要注意上面的那些因素,一个很好的种子提供者但是跟我之间的连接不好的话那也是无济于事的。通常,这不可能发生,因为我们也不是唯一下载这个资源的人,一般都会有一些在本地的人已经下载好了这个文件并已经在做种了。 +关于此已经有了太多的讨论,总的想法就是搜索更多的种子,但要注意上面的那些因素,有一个很好的种子提供者,但是跟我之间的连接不好的话那也是无济于事的。通常,这不可能发生,因为我们也不是唯一下载这个资源的人,一般都会有一些在本地的人已经下载好了这个文件并已经在做种了。 -## 结论 +### 结论 我们尝试搞清楚哪些因素影响了 torrent 速度的好坏。不管我们如何用软件进行优化,大多数时候是这是由于物理瓶颈导致的。我从来不关心那些软件,使用默认配置对我来说就够了。 希望你会喜欢这篇文章,有什么想法敬请留言。 - -------------------------------------------------------------------------------- via: http://www.theitstuff.com/increase-torrent-speed-will-never-work 作者:[Rishabh Kandari][a] 译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 47f9a5d9d1f79dbd9198ecbbc6b4d2be7e7b257d Mon Sep 17 00:00:00 2001 From: ChenYi 
<31087327+cyleft@users.noreply.github.com> Date: Sat, 24 Feb 2018 12:03:48 +0800 Subject: [PATCH 69/81] translated by cyleft --- ...nstall Gogs Go Git Service on Ubuntu 16.04 | 405 ++++++++++++++++++ 1 file changed, 405 insertions(+) create mode 100644 translated/tech/20180209 How to Install Gogs Go Git Service on Ubuntu 16.04 diff --git a/translated/tech/20180209 How to Install Gogs Go Git Service on Ubuntu 16.04 b/translated/tech/20180209 How to Install Gogs Go Git Service on Ubuntu 16.04 new file mode 100644 index 0000000000..6923c0e332 --- /dev/null +++ b/translated/tech/20180209 How to Install Gogs Go Git Service on Ubuntu 16.04 @@ -0,0 +1,405 @@ +如何在 Ubuntu 16.04 上使用 Gogs 安装 Go 语言编写的 Git 服务器 +====== + +Gogs 是由 Go 语言编写,提供开源且免费的 Git 服务。Gogs 是一款无痛式自托管的 Git 服务器,能在尽可能小的硬件资源开销上搭建并运行您的私有 Git 服务器。Gogs 的网页界面和 GitHub 十分相近,且提供 MySQL、PostgreSQL 和 SQLite 数据库支持。 + +在本教程中,我们将使用 Gogs 在 Ununtu 16.04 上按步骤,指导您安装和配置您的私有 Git 服务器。这篇教程中涵盖了如何在 Ubuntu 上安装 Go 语言、PostgreSQL 和安装并且配置 Nginx 网页服务器作为 Go 应用的反向代理的细节内容。 + +### 搭建环境 + + * Ubuntu 16.04 + * Root 权限 + +### 我们将会接触到的事物 + + 1. 更新和升级系统 + 2. 安装和配置 PostgreSQL + 3. 安装 Go 和 Git + 4. 安装 Gogs + 5. 配置 Gogs + 6. 运行 Gogs 服务器 + 7. 安装和配置 Nginx 反向代理 + 8. 测试 + +### 步骤 1 - 更新和升级系统 +继续之前,更新 Ubuntu 所有的库,升级所有包。 + +运行下面的 apt 命令 +``` +sudo apt update +sudo apt upgrade +``` + +### 步骤 2 - 安装和配置 PostgreSQL + +Gogs 提供 MySQL、PostgreSQL、SQLite 和 TiDB 数据库系统支持。 + +此步骤中,我们将使用 PostgreSQL 作为 Gogs 程序的数据库。 + +使用下面的 apt 命令安装 PostgreSQL。 +``` +sudo apt install -y postgresql postgresql-client libpq-dev +``` + +安装完成之后,启动 PostgreSQL 服务并设置为开机启动。 +``` +systemctl start postgresql +systemctl enable postgresql +``` + +此时 PostgreSQL 数据库在 Ubuntu 系统上完成安装了。 + +之后,我们需要为 Gogs 创建数据库和用户。 + +使用 'postgres' 用户登陆并运行 ‘psql’ 命令获取 PostgreSQL 操作界面. +``` +su - postgres +psql +``` + +创建一个名为 ‘git’ 的新用户,给予此用户 ‘CREATEDB’ 权限。 +``` +CREATE USER git CREATEDB; +\password git +``` + +创建名为 ‘gogs_production’ 的数据库,设置 ‘git’ 用户作为其所有者。 +``` +CREATE DATABASE gogs_production OWNER git; +``` + +[![创建 Gogs 数据库][1]][2] + +作为 Gogs 安装时的 ‘gogs_production’ PostgreSQL 数据库和 ‘git’ 用户已经创建完毕。 + +### 步骤 3 - 安装 Go 和 Git + +使用下面的 apt 命令从库中安装 Git。 +``` +sudo apt install git +``` + +此时,为系统创建名为 ‘git’ 的新用户。 +``` +sudo adduser --disabled-login --gecos 'Gogs' git +``` + +登陆 ‘git’ 账户并且创建名为 ‘local’ 的目录。 +``` +su - git +mkdir -p /home/git/local +``` + +切换到 ‘local’ 目录,依照下方所展示的内容,使用 wget 命令下载 ‘Go’(最新版)。 +``` +cd ~/local +wget +``` + +[![安装 Go 和 Git][3]][4] + +解压并且删除 go 的压缩文件。 +``` +tar -xf go1.9.2.linux-amd64.tar.gz +rm -f go1.9.2.linux-amd64.tar.gz +``` + +‘Go’ 二进制文件已经被下载到 ‘~/local/go’ 目录。此时我们需要设置环境变量 - 设置 ‘GOROOT’ 和 ‘GOPATH’ 目录到系统环境,这样,我们就可以在 ‘git’ 用户下执行 ‘go’ 命令。 + +执行下方的命令。 +``` +cd ~/ +echo 'export GOROOT=$HOME/local/go' >> $HOME/.bashrc +echo 'export GOPATH=$HOME/go' >> $HOME/.bashrc +echo 'export PATH=$PATH:$GOROOT/bin:$GOPATH/bin' >> $HOME/.bashrc +``` + +之后通过运行 'source ~/.bashrc' 重载 Bash,如下: +``` +source ~/.bashrc +``` + +确定您使用的 Bash 是默认的 shell。 + +[![安装 Go 编程语言][5]][6] + +现在运行 'go' 的版本查看命令。 +``` +go version +``` + +之后确保您得到下图所示的结果。 + +[![检查 go 版本][7]][8] + +现在,Go 已经安装在系统的 ‘git’ 用户下了。 + +### 步骤 4 - 使用 Gogs 安装 Git 服务 + +使用 ‘git’ 用户登陆并且使用 ‘go’ 命令从 GitHub 下载 ‘Gogs’。 +``` +su - git +go get -u github.com/gogits/gogs +``` + +此命令将在 ‘GOPATH/src’ 目录下载 Gogs 的所有源代码。 + +切换至 '$GOPATH/src/github.com/gogits/gogs' 目录,并且使用下列命令搭建 gogs。 +``` +cd $GOPATH/src/github.com/gogits/gogs +go build +``` + +确保您没有捕获到错误。 + +现在使用下面的命令运行 Gogs Go Git 服务器。 +``` +./gogs web +``` + +此命令将会默认运行 Gogs 在 3000 端口上。 + +[![安装 Gogs Go Git 服务][9]][10] + +打开网页浏览器,键入您的 IP 地址和端口号,我的是 + 
+您应该会得到于下方一致的反馈。 + +[![Gogs 网页服务器][11]][12] + +Gogs 已经在您的 Ubuntu 系统上安装完毕。现在返回到您的终端,并且键入 'Ctrl + c' 中止服务。 + +### 步骤 5 - 配置 Gogs Go Git 服务器 + +本步骤中,我们将为 Gogs 创建惯例配置。 + +进入 Gogs 安装目录并新建 ‘custom/conf’ 目录。 +``` +cd $GOPATH/src/github.com/gogits/gogs +mkdir -p custom/conf/ +``` + +复制默认的配置文件到 custom 目录,并使用 [vim][13] 修改。 +``` +cp conf/app.ini custom/conf/app.ini +vim custom/conf/app.ini +``` + +在 ‘ **[server]** ’ 选项中,修改 ‘HOST_ADDR’ 为 ‘127.0.0.1’. +``` +[server] + PROTOCOL = http + DOMAIN = localhost + ROOT_URL = %(PROTOCOL)s://%(DOMAIN)s:%(HTTP_PORT)s/ + HTTP_ADDR = 127.0.0.1 + HTTP_PORT = 3000 + +``` + +在 ‘ **[database]** ’ 选项中,按照您的数据库信息修改。 +``` +[database] + DB_TYPE = postgres + HOST = 127.0.0.1:5432 + NAME = gogs_production + USER = git + PASSWD = [email protected]# + +``` + +保存并退出。 + +运行下面的命令验证配置项。 +``` +./gogs web +``` + +并且确保您得到如下的结果。 + +[![配置服务器][14]][15] + +Gogs 现在已经按照自定义配置下运行在 ‘localhost’ 的 3000 端口上了。 + +### 步骤 6 - 运行 Gogs 服务器 + +这一步,我们将在 Ubuntu 系统上配置 Gogs 服务器。我们会在 ‘/etc/systemd/system’ 目录下创建一个新的服务器配置文件 ‘gogs.service’。 + +切换到 ‘/etc/systemd/system’ 目录,使用 [vim][13] 创建服务器配置文件 ‘gogs.service’。 +``` +cd /etc/systemd/system +vim gogs.service +``` + +粘贴下面的代码到 gogs 服务器配置文件中。 +``` +[Unit] + Description=Gogs + After=syslog.target + After=network.target + After=mariadb.service mysqld.service postgresql.service memcached.service redis.service + + [Service] + # Modify these two values and uncomment them if you have + # repos with lots of files and get an HTTP error 500 because + # of that + ### + #LimitMEMLOCK=infinity + #LimitNOFILE=65535 + Type=simple + User=git + Group=git + WorkingDirectory=/home/git/go/src/github.com/gogits/gogs + ExecStart=/home/git/go/src/github.com/gogits/gogs/gogs web + Restart=always + Environment=USER=git HOME=/home/git + + [Install] + WantedBy=multi-user.target + +``` + +之后保存并且退出。 + +现在可以重载系统服务器。 +``` +systemctl daemon-reload +``` + +使用下面的命令开启 gogs 服务器并设置为开机启动。 +``` +systemctl start gogs +systemctl enable gogs +``` + +[![运行 Gogs 服务器][16]][17] + +Gogs 服务器现在已经运行在 Ubuntu 系统上了。 + +使用下面的命令检测: +``` +netstat -plntu +systemctl status gogs +``` + +您应该会得到下图所示的结果。 + +[![Gogs is listening on the network interface][18]][19] + +### 步骤 7 - 为 Gogs 安装和配置 Nginx 反向代理 + +在本步中,我们将为 Gogs 安装和配置 Nginx 反向代理。我们会在自己的库中调用 Nginx 包。 + +使用下面的命令添加 Nginx 库。 +``` +sudo add-apt-repository -y ppa:nginx/stable +``` + +此时更新所有的库并且使用下面的命令安装 Nginx。 +``` +sudo apt update +sudo apt install nginx -y +``` + +之后,进入 ‘/etc/nginx/sites-available’ 目录并且创建虚拟主机文件 ‘gogs’。 +``` +cd /etc/nginx/sites-available +vim gogs +``` + +粘贴下面的代码到配置项。 +``` +server { +     listen 80; +     server_name git.hakase-labs.co; + +     location / { +         proxy_pass http://localhost:3000; +     } + } + +``` + +保存退出。 + +**注意:** +使用您的域名修改 ‘server_name’ 项。 + +现在激活虚拟主机并且测试 nginx 配置。 +``` +ln -s /etc/nginx/sites-available/gogs /etc/nginx/sites-enabled/ +nginx -t +``` + +确保没有抛错,重启 Nginx 服务器。 +``` +systemctl restart nginx +``` + +[![安装和配置 Nginx 反向代理][20]][21] + +### 步骤 8 - 测试 + +打开您的网页浏览器并且输入您的 gogs URL,我的是 + +现在您将进入安装界面。在页面的顶部,输入您所有的 PostgreSQL 数据库信息。 + +[![Gogs 安装][22]][23] + +之后,滚动到底部,点击 ‘Admin account settings’ 下拉选项。 + +输入您的管理者用户名和邮箱。 + +[![键入 gogs 安装设置][24]][25] + +之后点击 ‘Install Gogs’ 按钮。 + +然后您将会被重定向到下图显示的 Gogs 用户面板。 + +[![Gogs 面板][26]][27] + +下面是 Gogs ‘Admin Dashboard(管理员面板)’。 + +[![浏览 Gogs 面板][28]][29] + +现在,Gogs 已经通过 PostgreSQL 数据库和 Nginx 网页服务器在您的 Ubuntu 16.04 上完成安装。 + +-------------------------------------------------------------------------------- + +via: https://www.howtoforge.com/tutorial/how-to-install-gogs-go-git-service-on-ubuntu-1604/ 
+ +作者:[Muhammad Arul][a] +译者:[CYLeft](https://github.com/CYLeft) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.howtoforge.com/tutorial/server-monitoring-with-shinken-on-ubuntu-16-04/ +[1]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/1.png +[2]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/1.png +[3]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/2.png +[4]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/2.png +[5]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/3.png +[6]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/3.png +[7]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/4.png +[8]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/4.png +[9]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/5.png +[10]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/5.png +[11]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/6.png +[12]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/6.png +[13]:https://www.howtoforge.com/vim-basics +[14]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/7.png +[15]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/7.png +[16]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/8.png +[17]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/8.png +[18]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/9.png +[19]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/9.png +[20]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/10.png +[21]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/10.png +[22]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/11.png +[23]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/11.png +[24]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/12.png +[25]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/12.png +[26]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/13.png +[27]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/13.png +[28]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/14.png +[29]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/14.png From 885daf886f4281ef88d5668241cc80683dc7ec35 Mon Sep 17 00:00:00 2001 From: darksun Date: Sat, 24 Feb 2018 14:10:26 +0800 Subject: [PATCH 70/81] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20slowing=20dow?= =?UTF-8?q?n=20made=20me=20a=20better=20leader?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ow slowing down made me a better leader.md | 53 +++++++++++++++++++ 1 file changed, 53 insertions(+) create mode 100644 
sources/talk/20180220 How slowing down made me a better leader.md

diff --git a/sources/talk/20180220 How slowing down made me a better leader.md b/sources/talk/20180220 How slowing down made me a better leader.md
new file mode 100644
index 0000000000..bd1b9c0749
--- /dev/null
+++ b/sources/talk/20180220 How slowing down made me a better leader.md
@@ -0,0 +1,53 @@
+How slowing down made me a better leader
+======
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_leadership_brand.png?itok=YW1Syk4S)
+
+Early in my career, I thought the most important thing I could do was act. If my boss said jump, my reply was "how high?"
+
+But as I've grown as a leader and manager, I've realized that the most important traits I can offer are [patience][1] and listening. This patience and listening mean I'm focusing on what's really important. I'm decisive, so I do not hesitate to act. Yet I've learned that my actions are more impactful when I consider input from multiple sources and offer advice on what we should be doing—not simply reacting to an immediate request.
+
+Practicing open leadership involves cultivating the patience and listening skills I need to collaborate on the [best plan of action, not just the quickest one][2]. It also gives me the tools I need to explain [why I'm saying "no"][3] (or, perhaps, "not now") to someone, so I can lead with transparency and confidence.
+
+If you're in software development and practice scrum, then the following argument might resonate with you: The patience and listening a manager displays are as important as her skills in sprint planning and running the sprint demo. Forget about them, and you'll lessen the impact you're able to have.
+
+### A focus on patience
+
+Focus and patience do not always come easily. Often, I find myself sitting in meetings and filling my notebook with action items. My default action can be to think: "We can simply do x and y will improve!" Then I remember that things are not so linear.
+
+I need to think about the other factors that can influence a situation. Pausing to take in data from multiple people and resources helps me flesh out a strategy that our organization needs for long-term success. It also helps me identify those shorter-term milestones that should lead us to deliver the business results I'm responsible for producing.
+
+Here's a great example from a time when patience wasn't something I valued as I should have—and how that hurt my performance. When I was based in North Carolina, I worked with someone based in Arizona. We didn't use video conferencing technologies, so I didn't get to observe her body language when we talked. While I was responsible for delivering the results for the project I led, she was one of the two people tasked with making sure I had adequate support.
+
+For whatever reason, when I talked with this person and she asked me to do something, I did it. She would be providing input on my performance evaluation, so I wanted to make sure she was happy. At the time, I didn't possess the maturity to know I didn't need to make her happy; my focus should have been on other performance indicators. I should have spent more time listening and collaborating with her instead of picking up the first "action item" and working on it while she was still talking.
+
+After six months on the job, this person gave me some tough feedback. I was angry and sad. Didn't I do everything she'd asked? I had worked long hours, nearly seven days a week for six months. How dare she criticize my performance?
+
+Then, after I had my moment of anger followed by sadness, I thought about what she said. Her feedback was on point.
+
+She had concerns about the project, and she held me accountable because I was responsible. We worked through the issues, and I learned that vital lesson about how to lead: Leadership does not mean "get it done right now." Leadership means putting together a strategy, then communicating and implementing plans in support of the strategy. It also means making mistakes and learning from these hiccups.
+
+### Lesson learned
+
+In hindsight, I realize I could have asked more questions to better understand the intent of her feedback. I also could have pushed back if the guidance from her did not align with other input I was receiving. By having the patience to listen to the various sources giving me input about the project, synthesizing what I learned, and creating a coherent plan for action, I would have been a better leader. I also would have had more purpose driving the work I was doing. Instead of reacting to a single data point, I would have been implementing a strategic plan. I also would have had a better performance evaluation.
+
+I eventually had some feedback for her. Next time we worked together, I didn't want to hear the feedback after six months. I wanted to hear the feedback earlier and more often so I could learn from the mistakes sooner. An ongoing discussion about the work is what should happen on any team.
+
+As I mature as a manager and leader, I hold myself to the same standards I ask my team to meet: Plan, work the plan, and reflect. Repeat. Don't let a fire drill created by an external force distract you from the plan you need to implement. Breaking work into small increments builds in space for reflections and adjustments to the plan. As Daniel Goleman writes, "Directing attention toward where it needs to go is a primal task of leadership." Don't be afraid of meeting this challenge.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/open-organization/18/2/open-leadership-patience-listening
+
+作者:[Angela Robertson][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/arobertson98
+[1]:https://opensource.com/open-organization/16/3/my-most-difficult-leadership-lesson
+[2]:https://opensource.com/open-organization/16/3/fastest-result-isnt-always-best-result
+[3]:https://opensource.com/open-organization/17/5/saying-no-open-organization

From 3938b3b830d570e08a396c635fd8ec74d10ec3fc Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 24 Feb 2018 14:12:18 +0800
Subject: =?UTF-8?q?=E9=80=89=E9=A2=98:=204=20considerations?=
 =?UTF-8?q?=20when=20naming=20software=20development=20projects?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...en naming software development projects.md | 91 +++++++++++++++++++
 1 file changed, 91 insertions(+)
 create mode 100644 sources/talk/20180220 4 considerations when naming software development projects.md

diff --git a/sources/talk/20180220 4 considerations when naming software development projects.md b/sources/talk/20180220 4 considerations when naming software development projects.md
new file mode 100644
index 0000000000..1e1add0b68
--- /dev/null
+++ b/sources/talk/20180220 4 considerations when naming software development projects.md
@@ -0,0 +1,91 @@
+4 considerations when naming software development projects
+======
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/hello-name-sticker-badge-tag.png?itok=fAgbMgBb)
+
+Working on a new open source project, you're focused on the code—getting that great new idea released so you can share it with the world. And you'll want to attract new contributors, so you need a terrific **name** for your project.
+
+We've all read guides for creating names, but how do you go about choosing the right one? Keeping that cool science fiction reference you're using internally might feel fun, but it won't mean much to new users you're trying to attract. A better approach is to choose a name that's memorable to new users and developers searching for your project.
+
+Names set expectations. Your project's name should showcase its functionality in the ecosystem and explain to users what your story is. In the crowded open source software world, it's important not to get entangled with other projects out there. Taking a little extra time now, before sending out that big announcement, will pay off later.
+
+Here are four factors to keep in mind when choosing a name for your project.
+
+### What does your project's code do?
+
+Start with your project: What does it do? You know the code intimately—but can you explain what it does to a new developer? Can you explain it to a CTO or non-developer at another company? What kinds of problems does your project solve for users?
+
+Your project's name needs to reflect what it does in a way that makes sense to newcomers who want to use or contribute to your project. That means considering the ecosystem for your technology and understanding if there are any naming styles or conventions used for similar kinds of projects. Imagine that you're trying to evaluate someone else's project: Would the name be appealing to you?
+
+Any distribution channels you push to are also part of the ecosystem. If your code will be in a Linux distribution, [npm][1], [CPAN][2], [Maven][3], or in a Ruby Gem, you need to review any naming standards or common practices for that package manager. Review any similar existing names in that distribution channel, and get a feel for the naming styles of other programs there. (A quick way to script this kind of check appears at the end of this article.)
+
+### Who are the users and developers you want to attract?
+
+The hardest aspect of choosing a new name is putting yourself in the shoes of new users. You built this project; you already know how powerful it is, so while your cool name may sound great, it might not draw in new people. You need a name that is interesting to someone new, and that tells the world what problems your project solves.
+
+Great names depend on what kind of users you want to attract. Are you building an [Eclipse][4] plugin or npm module that's focused on developers? Or an analytics toolkit that brings visualizations to the average user? Understanding your user base and the kinds of open source contributors you want to attract is critical.
+
+Take the time to think this through. Who does your project most appeal to, and how can it help them do their job? What kinds of problems does your code solve for end users? Understanding the target user helps you focus on what users need, and what kind of names or brands they respond to.
+
+When you're open source, this equation changes a bit—your target is not just users; it's also developers who will want to contribute code back to your project. You're probably a developer, too: What kinds of names and brands excite you, and what images would entice you to try out someone else's new project?
+
+Once you have a better feel for what users and potential contributors expect, use that knowledge to refine your names. Remember, you need to step outside your project and think about how the name would appeal to someone who doesn't know how amazing your code is—yet. Once someone gets to your website, does the name match what your product does? If so, move to the next step.
+
+### Who else is using similar names for software?
+
+Now that you've tried on a user's shoes to evaluate potential names, what's next? Figuring out if anyone else is already using a similar name. It sometimes feels like all the best names are taken—but if you search carefully, you'll find that's not true.
+
+The first step is to do a few web searches using your proposed name. Search for the name, plus "software", "open source", and a few keywords for the functionality that your code provides. Look through several pages of results for each search to see what's out there in the software world.
+
+Unless you're using a completely made-up word, you'll likely get a lot of hits. The trick is understanding which search results might be a problem. Again, put on the shoes of a new user to your project. If you were searching for this great new product and saw the other search results along with your project's homepage, would you confuse them? Are the other search results even software products? If your product solves a similar problem to other search results, that's a problem: Users may gravitate to an existing product instead of a new one.
+
+Similar non-software product names are rarely an issue unless they are famous trademarks—like Nike or Red Bull, for example—where the companies behind them won't look kindly on anyone using a similar name. Using the same name as a less famous non-software product might be OK, depending on how big your project gets.
+
+### How big do you plan to grow your project?
+
+Are you building a new node module or command-line utility, but not planning a career around it? Is your new project a million-dollar business idea, and you're thinking startup? Or is it something in between?
+
+If your project is a basic developer utility—something useful that developers will integrate into their workflow—then you have enough data to choose a name. Think through the ecosystem and how a new user would see your potential names, and pick one. You don't need perfection, just a name you're happy with that seems right for your project.
+
+If you're planning to build a business around your project, use these tips to develop a shortlist of names, but do more vetting before announcing the winner. Using a name for a business or major project requires some level of registered trademark search, which is usually performed by a law firm.
+
+### Common pitfalls
+
+Finally, when choosing a name, avoid these common pitfalls:
+
+  * Using an esoteric acronym. If new users don't understand the name, they'll have a hard time finding you.
+
+  * Using current pop-culture references. If you want your project's appeal to last, pick a name that will last.
+
+  * Failing to consider non-English speakers. Does the name have a specific meaning in another language that might be confusing?
+
+  * Using off-color jokes or potentially unsavory references. Even if it seems funny to developers, it may fall flat for newcomers and turn away contributors.
+
+Good luck—and remember to take the time to step out of your shoes and consider how a newcomer to your project will think of the name.
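+
+One last, practical note: most of the package registries mentioned above let you check a candidate name from the command line before you commit to it. The sketch below is only illustrative: `coolname` is a placeholder project name, and it assumes the `npm`, `gem`, and `apt` client tools are installed.
+```
+# Is "coolname" already taken on npm? An E404 error here means the name is free.
+npm view coolname
+
+# Look for similar existing names on npm.
+npm search coolname
+
+# Check RubyGems (the -r flag searches the remote index).
+gem search -r coolname
+
+# Check the Debian/Ubuntu package namespace.
+apt-cache search coolname
+```
+A minute spent on checks like these won't replace the registered trademark search a business needs, but it catches the most common collisions early.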
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/2/choosing-project-names-four-key-considerations
+
+作者:[Shane Curcuru][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]:https://opensource.com/users/shane-curcuru
+[1]:https://www.npmjs.com/
+[2]:https://www.cpan.org/
+[3]:https://maven.apache.org/
+[4]:https://www.eclipse.org/

From 9bba890ede08d46f938e81ce8063618af3b2c06b Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 24 Feb 2018 14:13:43 +0800
Subject: =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20format?=
 =?UTF-8?q?=20academic=20papers=20on=20Linux=20with=20groff=20-me?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...academic papers on Linux with groff -me.md | 265 ++++++++++++++++++
 1 file changed, 265 insertions(+)
 create mode 100644 sources/tech/20180220 How to format academic papers on Linux with groff -me.md

diff --git a/sources/tech/20180220 How to format academic papers on Linux with groff -me.md b/sources/tech/20180220 How to format academic papers on Linux with groff -me.md
new file mode 100644
index 0000000000..5131cad7f5
--- /dev/null
+++ b/sources/tech/20180220 How to format academic papers on Linux with groff -me.md
@@ -0,0 +1,265 @@
+How to format academic papers on Linux with groff -me
+======
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life_paperclips.png?itok=j48op49T)
+
+I was an undergraduate student when I discovered Linux in 1993. I was so excited to have the power of a Unix system right in my dorm room, but despite its many capabilities, Linux lacked applications. Word processors like LibreOffice and OpenOffice were years away. If you wanted to use a word processor, you likely booted your system into MS-DOS and used WordPerfect, the shareware GalaxyWrite, or a similar program.
+
+That was my method, since I needed to write papers for my classes, but I preferred staying in Linux. I knew from our "big Unix" campus computer lab that Unix systems provided a set of text-formatting programs called `nroff` and `troff`. They are different interfaces to the same system: `nroff` generates plaintext output, suitable for screens or line printers, and `troff` generates very pretty output, usually for printing on a laser printer.
+
+On Linux, `nroff` and `troff` are combined as GNU troff, more commonly known as [groff][1]. I was happy to see a version of groff included in my early Linux distribution, so I set out to learn how to use it to write class papers. The first macro set I learned was the `-me` macro package, a straightforward, easy-to-learn macro set.
+
+The first thing to know about `groff` is that it processes and formats text according to a set of macros. A macro is usually a two-character command, set on a line by itself, with a leading dot. A macro might carry one or more options. When `groff` encounters one of these macros while processing a document, it will automatically format the text appropriately.
+
+Below, I'll share the basics of using `groff -me` to write simple documents like class papers. I won't go deep into the details, like how to create nested lists, keeps and displays, tables, and figures.
+
+### Paragraphs
+
+Let's start with an easy example you see in almost every type of document: paragraphs. Paragraphs can be formatted with the first line either indented or not (i.e., flush against the left margin). Many printed documents, including academic papers, magazines, journals, and books, use a combination of the two types, with the first (leading) paragraph in a document or chapter flush left and all other (regular) paragraphs indented. In `groff -me`, you can use both paragraph types: leading paragraphs (`.lp`) and regular paragraphs (`.pp`).
+```
+.lp
+This is the first paragraph.
+.pp
+This is a standard paragraph.
+```
+
+### Text formatting
+
+The macro to format text in bold is `.b` and to format in italics is `.i`. If you put `.b` or `.i` on a line by itself, then all text that comes after it will be in bold or italics. But it's more likely you just want to put one or a few words in bold or italics. To make one word bold or italic, put that word on the same line as `.b` or `.i`, as an option. To format multiple words in **bold** or italics, enclose your text in quotes.
+```
+.pp
+You can do basic formatting such as
+.i italics
+or
+.b "bold text."
+```
+
+In the above example, the period at the end of **bold text** will also be in bold type. In most cases, that's not what you want. It's more correct to only have the words **bold text** in bold, but not the trailing period. To get the effect you want, you can add a second argument to `.b` or `.i` to indicate any text that should trail the bolded or italicized text, but in normal type. For example, you might do this to ensure that the trailing period doesn't show up in bold type.
+```
+.pp
+You can do basic formatting such as
+.i italics
+or
+.b "bold text" .
+```
+
+### Lists
+
+With `groff -me`, you can create two types of lists: bullet lists (`.bu`) and numbered lists (`.np`).
+```
+.pp
+Bullet lists are easy to make:
+.bu
+Apple
+.bu
+Banana
+.bu
+Pineapple
+.pp
+Numbered lists are as easy as:
+.np
+One
+.np
+Two
+.np
+Three
+.pp
+Note that numbered lists will reset at the next pp or lp.
+```
+
+### Subheads
+
+If you're writing a long paper, you might want to divide your content into sections. With `groff -me`, you can create numbered headings (`.sh`) and unnumbered headings (`.uh`). In either, enclose the section title in quotes as an argument. For numbered headings, you also need to provide the heading level: `1` will give a first-level heading (e.g., 1.). Similarly, `2` and `3` will give second- and third-level headings, such as 2.1 or 3.1.1.
+```
+.uh Introduction
+.pp
+Provide one or two paragraphs to describe the work
+and why it is important.
+.sh 1 "Method and Tools"
+.pp
+Provide a few paragraphs to describe how you
+did the research, including what equipment you used
+```
+
+### Smart quotes and block quotes
+
+It's standard in any academic paper to cite other people's work as evidence. If you're citing a brief quote to highlight a key message, you can just type quotes around your text. But groff won't automatically convert your quotes into the "smart" or "curly" quotes used by modern word processing systems. To create them in `groff -me`, insert the inline macros for the left quote (`\*(lq`) and the right quote (`\*(rq`).
+```
+.pp
+Christine Peterson coined the phrase \*(lqopen source.\*(rq
+```
+
+There's also a shortcut in `groff -me` to create these quotes (`.q`) that I find easier to use.
+```
+.pp
+Christine Peterson coined the phrase
+.q "open source."
+```
+
+If you're citing a longer quote that spans several lines, you'll want to use a block quote. To do this, wrap the quote in the blockquote macros (`.(q` to open and `.)q` to close).
+```
+.pp
+Christine Peterson recently wrote about open source:
+.(q
+On April 7, 1998, Tim O'Reilly held a meeting of key
+leaders in the field. Announced in advance as the first
+.q "Freeware Summit,"
+by April 14 it was referred to as the first
+.q "Open Source Summit."
+.)q
+```
+
+### Footnotes
+
+To insert a footnote, wrap the footnote text in the footnote macros (`.(f` to open and `.)f` to close), and use an inline macro (`\**`) to add the footnote mark. The footnote mark should appear both in the text and in the footnote itself.
+```
+.pp
+Christine Peterson recently wrote about open source:\**
+.(f
+\**Christine Peterson.
+.q "How I coined the term open source."
+.i "OpenSource.com."
+1 Feb 2018.
+.)f
+.(q
+On April 7, 1998, Tim O'Reilly held a meeting of key
+leaders in the field. Announced in advance as the first
+.q "Freeware Summit,"
+by April 14 it was referred to as the first
+.q "Open Source Summit."
+.)q
+```
+
+### Cover page
+
+Most class papers require a cover page containing the paper's title, your name, and the date. Creating a cover page in `groff -me` requires some assembly. I find the easiest way is to use centered blocks of text and add extra lines between the title, name, and date. (I prefer to use two blank lines between each.) At the top of your paper, start with the title page (`.tp`) macro, insert five blank lines (`.sp 5`), then add the centered text (`.(c`), and extra blank lines (`.sp 2`).
+```
+.tp
+.sp 5
+.(c
+.b "Writing Class Papers with groff -me"
+.)c
+.sp 2
+.(c
+Jim Hall
+.)c
+.sp 2
+.(c
+February XX, 2018
+.)c
+.bp
+```
+
+The last macro (`.bp`) tells groff to add a page break after the title page.
+
+### Learning more
+
+Those are the essentials of writing a professional-looking paper in `groff -me`: leading and indented paragraphs, bold and italics text, bullet and numbered lists, numbered and unnumbered section headings, block quotes, and footnotes.
+
+I've included a sample groff file to demonstrate all of this formatting. Save the `lorem-ipsum.me` file to your system and run it through groff. The `-Tps` option sets the output type to PostScript so you can send the document to a printer or convert it to a PDF file using the `ps2pdf` program.
+```
+groff -Tps -me lorem-ipsum.me > lorem-ipsum.me.ps
+ps2pdf lorem-ipsum.me.ps lorem-ipsum.me.pdf
+```
+
+If you'd like to use more advanced functions in `groff -me`, refer to Eric Allman's "Writing Papers with Groff using `−me`," which you should find on your system as `meintro.me` in groff's `doc` directory. It's a great reference document that explains other ways to format papers using the `groff -me` macros.
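+
+As a quick end-to-end test of the toolchain, you can generate a tiny `-me` document from the shell and render it to PDF. This is only a sketch: the file name `sample.me` is arbitrary, and it assumes `groff` and `ps2pdf` (part of Ghostscript) are installed, as used above.
+```
+# Create a minimal -me document: one unnumbered heading and one paragraph.
+cat > sample.me << 'EOF'
+.uh "A Quick Test"
+.pp
+If this renders with a heading and an indented paragraph,
+the groff -me toolchain is working.
+EOF
+
+# Render to PostScript, then convert to PDF.
+groff -Tps -me sample.me > sample.ps
+ps2pdf sample.ps sample.pdf
+```
+If the resulting PDF shows the heading followed by an indented paragraph, you're ready to try the longer macros covered above on a real paper.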
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/2/how-format-academic-papers-linux-groff-me
+
+作者:[Jim Hall][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]:https://opensource.com/users/jim-hall
+[1]:https://www.gnu.org/software/groff/

From a1e57f69aa80d964095a584e9d930f7a9ada16e2 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 24 Feb 2018 14:22:30 +0800
Subject: =?UTF-8?q?=E9=80=89=E9=A2=98:=20The=20List=20Of=20U?=
 =?UTF-8?q?seful=20Bash=20Keyboard=20Shortcuts?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ... List Of Useful Bash Keyboard Shortcuts.md | 161 ++++++++++++++++++
 1 file changed, 161 insertions(+)
 create mode 100644 sources/tech/20180217 The List Of Useful Bash Keyboard Shortcuts.md

diff --git a/sources/tech/20180217 The List Of Useful Bash Keyboard Shortcuts.md b/sources/tech/20180217 The List Of Useful Bash Keyboard Shortcuts.md
new file mode 100644
index 0000000000..beba179fee
--- /dev/null
+++ b/sources/tech/20180217 The List Of Useful Bash Keyboard Shortcuts.md
@@ -0,0 +1,161 @@
+The List Of Useful Bash Keyboard Shortcuts
+======
+![](https://www.ostechnix.com/wp-content/uploads/2018/02/Bash-720x340.jpg)
+
+Nowadays, I spend more time in the Terminal, trying to accomplish more in the CLI than in the GUI. I have learned many BASH tricks over time, and here is a list of useful BASH shortcuts that every Linux user should know to get things done faster in the BASH shell. I won't claim that this list covers every BASH shortcut, but it is just enough to move around your BASH shell faster than before. Learning how to navigate faster in the BASH shell not only saves time, but also makes you proud of yourself for learning something worthwhile. Well, let's get started.
+
+### List Of Useful Bash Keyboard Shortcuts
+
+#### ALT key shortcuts
+
+1\. **ALT+A** – Go to the beginning of a line.
+
+2\. **ALT+B** – Move one character before the cursor.
+
+3\. **ALT+C** – Suspends the running command/process. Same as CTRL+C.
+
+4\. **ALT+D** – Closes the empty Terminal (i.e., it closes the Terminal when there is nothing typed). Also deletes all characters after the cursor.
+
+5\. **ALT+F** – Move forward one character.
+
+6\. **ALT+T** – Swaps the last two words.
+
+7\. **ALT+U** – Capitalize all characters in a word after the cursor.
+
+8\. **ALT+L** – Uncapitalize all characters in a word after the cursor.
+
+9\. **ALT+R** – Undo any changes to a command that you have pulled from the history, if you've edited it.
+
+For example, you can pull a command from the history using reverse search, change its last characters, and then revert the changes with **ALT+R**.
+
+10\. **ALT+.** (note the dot at the end) – Use the last word of the previous command.
+
+If you want to use the same options for multiple commands, you can use this shortcut to bring back the last word of the previous command. For instance, say I need to sort the contents of a directory using the "ls -r" command, and I also want to view my Kernel version using "uname -r". In both commands, the common word is "-r". This is where the ALT+. shortcut comes in handy. First, run ls -r to do a reverse sort, then reuse its last word "-r" in the next command, i.e., uname.
+
+#### CTRL key shortcuts
+
+1\. **CTRL+A** – Quickly move to the beginning of the line.
+
+Let us say you're typing a command something like the one below. While you're at the end of the line, you notice there is a typo in the first character.
+```
+$ gind . -mtime -1 -type
+```
+
+Did you notice? I typed "gind" instead of "find" in the above command. You could correct this error by pressing the left arrow all the way to the first letter and replacing "g" with "f". Alternatively, just hit the **CTRL+A** or **Home** key to instantly go to the beginning of the line and replace the misspelled character. This will save you a few seconds.
+
+2\. **CTRL+B** – Move backward one character.
+
+This shortcut key moves the cursor backward one character, i.e., one character before the cursor. Alternatively, you can use the LEFT arrow to move backward one character.
+
+3\. **CTRL+C** – Stop the currently running command.
+
+If a command takes too long to complete or if you ran it by mistake, you can forcibly stop or quit the command by using **CTRL+C**.
+
+4\. **CTRL+D** – Delete one character backward.
+
+If you have a system where the BACKSPACE key isn't working, you can use **CTRL+D** to delete one character backward. This shortcut also lets you log out of the current session, similar to exit.
+
+5\. **CTRL+E** – Move to the end of the line.
+
+After you have corrected a misspelled word at the start of a command or line, just hit **CTRL+E** to quickly move to the end of the line. Alternatively, you can use the END key on your keyboard.
+
+6\. **CTRL+F** – Move forward one character.
+
+If you want to move the cursor forward one character at a time, just press **CTRL+F** instead of the RIGHT arrow key.
+
+7\. **CTRL+G** – Leave history searching mode without running the command.
+
+You can start a reverse search, then press **CTRL+G** to leave the search mode without executing the command.
+
+8\. **CTRL+H** – Delete the character before the cursor, same as BACKSPACE.
+
+9\. **CTRL+J** – Same as the ENTER/RETURN key.
+
+ENTER key not working? No problem! **CTRL+J** or **CTRL+M** can be used as an alternative to the ENTER key.
+
+10\. **CTRL+K** – Delete all characters after the cursor.
+
+You don't have to keep hitting the DELETE key to delete the characters after the cursor. Just press **CTRL+K** to delete them all at once.
+
+11\. **CTRL+L** – Clears the screen and redisplays the line.
+
+Don't type "clear" to clear the screen. Just press CTRL+L to clear the screen and redisplay the currently typed line.
+
+12\. **CTRL+M** – Same as CTRL+J or RETURN.
+
+13\. **CTRL+N** – Display the next line in command history.
+
+You can also use the DOWN arrow.
+
+14\. **CTRL+O** – Run the command that you found using reverse search, i.e., CTRL+R.
+
+15\. **CTRL+P** – Display the previous line in command history.
+
+You can also use the UP arrow.
+
+16\. **CTRL+R** – Search the history backward (reverse search).
+
+17\. **CTRL+S** – Search the history forward.
+
+18\. **CTRL+T** – Swap the last two characters.
+
+This is one of my favorite shortcuts. Let us say you typed "sl" instead of "ls". No problem! This shortcut transposes the characters, as shown in the screenshot below.
+
+![][2]
+
+19\. **CTRL+U** – Delete all characters before the cursor (kills backward from point to the beginning of the line).
+
+This shortcut deletes all typed characters backward at once.
+
+20\. **CTRL+V** – Make the next character typed verbatim.
+
+21\. **CTRL+W** – Delete the word before the cursor.
+
+Don't confuse it with CTRL+U. CTRL+W won't delete everything behind the cursor, just a single word.
+
+![][3]
+
+22\. **CTRL+X** – List the possible filename completions of the current word.
+
+23\. **CTRL+XX** – Move between the start of the command line and the current cursor position (and back again).
+
+24\. **CTRL+Y** – Retrieve the last item that you deleted or cut.
+
+Remember, we deleted the word "-al" using CTRL+W in tip 21. You can retrieve that word instantly using CTRL+Y.
+
+![][4]
+
+See? I didn't type "-al". Instead, I pressed CTRL+Y to retrieve it.
+
+25\. **CTRL+Z** – Stop the current command.
+
+You may very well know this shortcut. It suspends the currently running command. You can resume it with **fg** in the foreground or **bg** in the background.
+
+26\. **CTRL+[** – Equivalent to the ESC key.
+
+#### Miscellaneous
+
+1\. **!!** – Repeats the last command.
+
+2\. **ESC+t** – Swaps the last two words.
+
+That's all I have in mind now. I will keep adding more if I come across any other Bash shortcuts in the future. If you think there is a mistake in this article, please notify me in the comments section below. I will update it as soon as possible.
+
+Cheers!
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/list-useful-bash-keyboard-shortcuts/
+
+作者:[SK][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]:https://www.ostechnix.com/author/sk/
+[2]:http://www.ostechnix.com/wp-content/uploads/2018/02/CTRLT-1.gif
+[3]:http://www.ostechnix.com/wp-content/uploads/2018/02/CTRLW-1.gif
+[4]:http://www.ostechnix.com/wp-content/uploads/2018/02/CTRLY-1.gif

From fb10104b40494f4a73609332156c44cabc68aca1 Mon Sep 17 00:00:00 2001
From: yizhuyan
Date: Sat, 24 Feb 2018 14:31:01 +0800
Subject: Create 20180131 10 things I love about Vue.md

---
 .../20180131 10 things I love about Vue.md    | 138 ++++++++++++++++++
 1 file changed, 138 insertions(+)
 create mode 100644 translated/tech/20180131 10 things I love about Vue.md

diff --git a/translated/tech/20180131 10 things I love about Vue.md b/translated/tech/20180131 10 things I love about Vue.md
new file mode 100644
index 0000000000..16fae2d64f
--- /dev/null
+++ b/translated/tech/20180131 10 things I love about Vue.md
@@ -0,0 +1,138 @@
+# 我喜欢Vue的10个方面
+============================================================
+
+![](https://cdn-images-1.medium.com/max/1600/1*X4ipeKVYzmY2M3UPYgUYuA.png)
+
+我喜欢Vue。当我在2016年第一次接触它时,也许那时我已经有了JavaScript框架疲劳,因为我已经具有Backbone、Angular、React等框架的经验,所以没有过度的热情去尝试一个新的框架。直到我在hacker news上读到一份评论,其描述Vue是类似于“新jquery”的JavaScript框架,从而激发了我的好奇心。在那之前,我已经相当满意React这个框架,它是一个很好的框架,基于可靠的设计原则,围绕着视图模板、虚拟DOM和状态响应等技术,而Vue也提供了这些重要的内容。在这篇文章中,我旨在解释为什么Vue适合我,为什么在上文中那些我尝试过的框架中选择它。也许你将同意我的一些观点,但至少我希望能够给大家带来一些使用Vue开发现代JavaScript应用的灵感。
+
+## 1\. 极少的模板语法
+
+Vue默认提供的视图模板语法是极小的、简洁的和可扩展的。像其他Vue部分一样,可以很简单地使用类似JSX的语法而不使用标准的模板语法(甚至有官方文档说明如何这样做),但是我觉得没必要这么做。关于JSX有好的方面,也有一些有依据的批评,如混淆了JavaScript和HTML,使得很容易在模板中编写出复杂的代码,而这些代码本来应该分开写在不同的地方。
+
+Vue没有使用标准的HTML来编写视图模板,而是使用极少的模板语法来处理简单的事情,如基于视图数据迭代创建元素。
+```
+<template>
+  <div id="app">
+    <ul>
+      <li v-for='number in numbers' :key='number'>{{ number }}</li>
+    </ul>
+    <form @submit.prevent='addNumber'>
+      <input type='text' v-model='newNumber' class='form-control' />
+      <button type='submit' class='btn btn-primary'>Add another number</button>
+    </form>
+  </div>
+</template>
+```
+
+我也喜欢Vue提供的简短绑定语法,“:”用于在模板中绑定数据变量,“@”用于绑定事件。这是一个细节,但写起来很爽,而且能够让你的组件代码简洁。
+
+## 2\. 单文件组件
+
+大多数人使用Vue,都使用“单文件组件”。本质上就是一个.vue文件对应一个组件,其中包含三部分(CSS、HTML和JavaScript)。
+
+这种技术结合是对的。它让人很容易在一个单独的地方理解每个组件,同时也非常好地鼓励了大家保持每个组件代码的简短。如果你的组件中JavaScript、CSS和HTML代码占了很多行,那么就到了进一步模块化的时刻了。
+
+在使用Vue组件中的
-```
-
-
-I also like the short-bindings provided by Vue, ‘:’ for binding data variables into your template and ‘@’ for binding to events.
It’s a small thing, but it feels nice to type and keeps your components succinct. - -2\. Single File Components - -When most people write Vue, they do so using ‘single file components’. Essentially it is a file with the suffix .vue containing up to 3 parts (the css, html and javascript) for each component. - -This coupling of technologies feels right. It makes it easy to understand each component in a single place. It also has the nice side effect of encouraging you to keep your code short for each component. If the JavaScript, CSS and HTML for your component is taking up too many lines then it might be time to modularise further. - -When it comes to the