From ded956b87bf655b934b3c05aee702ba42c7a39be Mon Sep 17 00:00:00 2001 From: Octopus <15391606236@163.com> Date: Sun, 14 Jan 2018 09:19:10 +0800 Subject: [PATCH 001/226] Singledo. Apply for task --- .../tech/20171023 Processors - Everything You Need to Know.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20171023 Processors - Everything You Need to Know.md b/sources/tech/20171023 Processors - Everything You Need to Know.md index e3ee2e5998..6ebfd18d5a 100644 --- a/sources/tech/20171023 Processors - Everything You Need to Know.md +++ b/sources/tech/20171023 Processors - Everything You Need to Know.md @@ -1,3 +1,4 @@ + translateing by singledo Processors - Everything You Need to Know ====== ![](http://www.theitstuff.com/wp-content/uploads/2017/10/processors-all-you-need-to-know.jpg) From b4cfa9f65d9006a113286784adaddf325e543fa4 Mon Sep 17 00:00:00 2001 From: Ocputs <15391606236@163.com> Date: Sun, 14 Jan 2018 23:24:47 +0800 Subject: [PATCH 002/226] Process --- translateing --- .../tech/20171023 Processors - Everything You Need to Know.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20171023 Processors - Everything You Need to Know.md b/sources/tech/20171023 Processors - Everything You Need to Know.md index e3ee2e5998..fea63f43ca 100644 --- a/sources/tech/20171023 Processors - Everything You Need to Know.md +++ b/sources/tech/20171023 Processors - Everything You Need to Know.md @@ -1,3 +1,4 @@ + translateing --- singledo Processors - Everything You Need to Know ====== ![](http://www.theitstuff.com/wp-content/uploads/2017/10/processors-all-you-need-to-know.jpg) From 03dc1df41df80742b90116d8ad575356eee9d999 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 15 Jan 2018 14:32:37 +0800 Subject: [PATCH 003/226] translate done: 20180104 How to Change Your Linux Console Fonts.md --- ... How to Change Your Linux Console Fonts.md | 88 ------------------- ... 
How to Change Your Linux Console Fonts.md | 88 +++++++++++++++++++ 2 files changed, 88 insertions(+), 88 deletions(-) delete mode 100644 sources/tech/20180104 How to Change Your Linux Console Fonts.md create mode 100644 translated/tech/20180104 How to Change Your Linux Console Fonts.md diff --git a/sources/tech/20180104 How to Change Your Linux Console Fonts.md b/sources/tech/20180104 How to Change Your Linux Console Fonts.md deleted file mode 100644 index 302f8459b4..0000000000 --- a/sources/tech/20180104 How to Change Your Linux Console Fonts.md +++ /dev/null @@ -1,88 +0,0 @@ -translating by lujun9972 -How to Change Your Linux Console Fonts -====== -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/font-size_0.png?itok=d97vmyYa) - -I try to be a peaceful soul, but some things make that difficult, like tiny console fonts. Mark my words, friends, someday your eyes will be decrepit and you won't be able to read those tiny fonts you coded into everything, and then you'll be sorry, and I will laugh. - -Fortunately, Linux fans, you can change your console fonts. As always, the ever-changing Linux landscape makes this less than straightforward, and font management on Linux is non-existent, so we'll muddle along as best we can. In this article, I'll show what I've found to be the easiest approach. - -### What is the Linux Console? - -Let us first clarify what we're talking about. When I say Linux console, I mean TTY1-6, the virtual terminals that you access from your graphical desktop with Ctrl+Alt+F1 through F6. To get back to your graphical environment, press Alt+F7. (This is no longer universal, however, and your Linux distribution may have it mapped differently. You may have more or fewer TTYs, and your graphical session may not be at F7. For example, Fedora puts the default graphical session at F2, and an extra one at F1.) I think it is amazingly cool that we can have both X and console sessions running at the same time. 
- -The Linux console is part of the kernel, and does not run in an X session. This is the same console you use on headless servers that have no graphical environments. I call the terminals in a graphical session X terminals, and terminal emulators is my catch-all name for both console and X terminals. - -But that's not all. The Linux console has come a long way from the early ANSI days, and thanks to the Linux framebuffer, it has Unicode and limited graphics support. There are also a number of console multimedia applications that we will talk about in a future article. - -### Console Screenshots - -The easy way to get console screenshots is from inside a virtual machine. Then you can use your favorite graphical screen capture program from the host system. You may also make screen captures from your console with [fbcat][1] or [fbgrab][2]. `fbcat` creates a portable pixmap format (PPM) image; this is a highly portable uncompressed image format that should be readable on any operating system, and of course you can convert it to whatever format you want. `fbgrab` is a wrapper script to `fbcat` that creates a PNG file. There are multiple versions of `fbgrab` written by different people floating around. Both have limited options and make only a full-screen capture. - -`fbcat` needs root permissions, and must redirect to a file. Do not specify a file extension, but only the filename: -``` -$ sudo fbcat > Pictures/myfile - -``` - -After cropping in GIMP, I get Figure 1. - -It would be nice to have a little padding on the left margin, so if any of you excellent readers know how to do this, please tell us in the comments. - -`fbgrab` has a few more options that you can read about in `man fbgrab`, such as capturing a different console, and time delay. 
This example makes a screen grab just like `fbcat`, except you don't have to explicitly redirect: -``` -$ sudo fbgrab Pictures/myOtherfile - -``` - -### Finding Fonts - -As far as I know, there is no way to list your installed kernel fonts other than looking in the directories they are stored in: `/usr/share/consolefonts/` (Debian/etc.), `/lib/kbd/consolefonts/` (Fedora), `/usr/share/kbd/consolefonts` (openSUSE)...you get the idea. - -### Changing Fonts - -Readable fonts are not a new concept. Embrace the old! Readability matters. And so does configurability, which sometimes gets lost in the rush to the new-shiny. - -On Debian/Ubuntu/etc. systems you can run `sudo dpkg-reconfigure console-setup` to set your console font, then run the `setupcon` command in your console to activate the changes. `setupcon` is part of the `console-setup` package. If your Linux distribution doesn't include it, there might be a package for you at [openSUSE][3]. - -You can also edit `/etc/default/console-setup` directly. This example sets the Terminus Bold font at 32 points, which is my favorite, and restricts the width to 80 columns. -``` -ACTIVE_CONSOLES="/dev/tty[1-6]" -CHARMAP="UTF-8" -CODESET="guess" -FONTFACE="TerminusBold" -FONTSIZE="16x32" -SCREEN_WIDTH="80" - -``` - -The FONTFACE and FONTSIZE values come from the font's filename, `TerminusBold32x16.psf.gz`. Yes, you have to know to reverse the order for FONTSIZE. Computers are so much fun. Run `setupcon` to apply the new configuration. You can see the whole character set for your active font with `showconsolefont`. Refer to `man console-setup` for complete options. - -### Systemd - -Systemd is different from `console-setup`, and you don't need to install anything, except maybe some extra font packages. All you do is edit `/etc/vconsole.conf` and then reboot. On my Fedora and openSUSE systems I had to install some extra Terminus packages to get the larger sizes as the installed fonts only went up to 16 points, and I wanted 32. 
This is the contents of `/etc/vconsole.conf` on both systems: -``` -KEYMAP="us" -FONT="ter-v32b" - -``` - -Come back next week to learn some more cool console hacks, and some multimedia console applications. - -Learn more about Linux through the free ["Introduction to Linux" ][4]course from The Linux Foundation and edX. - --------------------------------------------------------------------------------- - -via: https://www.linux.com/learn/intro-to-linux/2018/1/how-change-your-linux-console-fonts - -作者:[Carla Schroder][a] -译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/users/cschroder -[1]:http://jwilk.net/software/fbcat -[2]:https://github.com/jwilk/fbcat/blob/master/fbgrab -[3]:https://software.opensuse.org/package/console-setup -[4]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/translated/tech/20180104 How to Change Your Linux Console Fonts.md b/translated/tech/20180104 How to Change Your Linux Console Fonts.md new file mode 100644 index 0000000000..245f15924e --- /dev/null +++ b/translated/tech/20180104 How to Change Your Linux Console Fonts.md @@ -0,0 +1,88 @@ +如何更改 Linux 控制台上的字体 +====== +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/font-size_0.png?itok=d97vmyYa) + +我尝试尽可能的保持心灵祥和,然而总有一些事情让我意难平,比如控制台字体太小了。记住我的话,朋友,有一天你的眼睛会退化,无法再看清你编码时用的那些细小字体,到那时你就后悔莫及了。 + +幸好,Linux 死忠们,你可以更改控制台的字体。按照 Linux 一贯的尿性,不断变化的 Linux 环境使得这个问题变得不太简单明了,而 Linux 上也没有字体管理这么个东西,这使得我们很容易就被搞晕了。本文,我将会向你展示,我找到的更改字体的最简方法。 + +### Linux 控制台是个什么鬼? 
+ +首先让我们来澄清一下我们说的到底是个什么东西。当我提到 Linux 控制台,我指的是 TTY1-6,即你从图形环境用 `Ctrl-Alt-F1` 到 `F6` 切换到的虚拟终端。按下 `Ctrl+Alt+F7` 会切回图形环境。(不过这些热键已经不再通用,你的 Linux 发行版可能有不同的键映射。你的 TTY 的数量也可能不同,你图形环境会话也可能不在 `F7`。比如,Fedora 的默认图形会话是 `F2`,它只有一个额外的终端在 `F1`。) 我觉得能同时拥有 X 会话和终端绘画实在是太酷了。 + +Linux 控制台是内核的一部分,而且并不运行在 X 会话中。它和你在没有图形环境的无头服务器中用的控制台是一样的。我称呼在图形会话中的 X 终端为终端,而将控制台和 X 终端统称为终端模拟器。 + +但这还没完。Linux 终端从早期的 ANSI 时代开始已经经历了长久的发展,多亏了 Linux framebuffer,它现在支持 Unicode 并且对图形也有了有限的一些支持。而且出现了很多在控制台下运行的多媒体应用,这些我们在以后的文章中会提到。 + +### 控制台截屏 + +获取控制台截屏的最简单方法是让控制台跑在虚拟机内部。然后你可以在宿主系统上使用中意的截屏软件来抓取。不过借助 [fbcat][1] 和 [fbgrab][2] 你也可以直接在控制台上截屏。`fbcat` 会创建一个可移植的像素映射格式 (PPM) 图像; 这是一个高度可移植的未压缩图像格式,可以在所有的操作系统上读取,当然你也可以把它转换成任何喜欢的其他格式。`fbgrab` 则是 `fbcat` 的一个封装脚本,用来生成一个 PNG 文件。不同的人写过多个版本的 `fbgrab`。每个版本的选项都有限而且只能创建截取全屏。 + +`fbcat` 的执行需要 root 权限,而且它的输出需要重定向到文件中。你无需指定文件扩展名,只需要输入文件名就行了: +``` +$ sudo fbcat > Pictures/myfile + +``` + +在 GIMP 中裁剪后,就得到了图 1。 + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fig-1_10.png?itok=bHOxrZk9) +Figure 1:View after cropping。 + +如果能在左边空白处有一点填充就好了,如果有读者知道如何实现请在留言框中告诉我。 + +`fbgrab` 还有一些选项,你可以通过 `man fbgrab` 来查看,这些选项包括对另一个控制台进行截屏,以及延时截屏。在下面的例子中可以看到,`fbgrab` 截屏跟 `fbcat` 截屏类似,只是你无需明确进行输出重定性了: +``` +$ sudo fbgrab Pictures/myOtherfile + +``` + +### 查找字体 + +就我所知,除了查看字体存储目录 `/usr/share/consolefonts/`(Debian/etc。),`/lib/kbd/consolefonts/` (Fedora),`/usr/share/kbd/consolefonts` (openSUSE),外没有其他方法可以列出已安装的字体了。 + +### 更改字体 + +可读字体不是什么新概念。我们应该尊重以前的经验!可读性是很重要的。可配置性也很重要,然而现如今却不怎么看重了。 + +在 Debian/Ubuntu/ 等系统上,可以运行 `sudo dpkg-reconfigure console-setup` 来设置控制台字体,然后在控制台运行 `setupcon` 命令来让变更生效。`setupcon` 属于 `console-setup` 软件包中的一部分。若你的 Linux 发行版中不包含该工具,可以在 [openSUSE][3] 中下载到它。 + +你也可以直接编辑 `/etc/default/console-setup` 文件。下面这个例子中设置字体为 32 点大小的 Terminus Bold 字体,这是我的最爱,并且严格限制控制台宽度为 80 列。 +``` +ACTIVE_CONSOLES="/dev/tty[1-6]" +CHARMAP="UTF-8" +CODESET="guess" +FONTFACE="TerminusBold" +FONTSIZE="16x32" +SCREEN_WIDTH="80" + +``` + +这里的 FONTFACE 和 FONTSIZE 
的值来自于字体的文件名,`TerminusBold32x16.psf.gz`。是的,你需要反转 FONTSIZE 中值的顺序。计算机就是这么搞笑。然后再运行 `setupcon` 来让新配置生效。可以使用 `showconsolefont` 来查看当前所用字体的所有字符集。要查看完整的选项说明请参考 `man console-setup`。 + +### Systemd + +Systemd 与 `console-setup` 不太一样,除了字体之外,你无需安装任何东西。你只需要编辑 `/etc/vconsole.conf` 然后重启就行了。我在 Fedora 和 openSUSE 系统中安装了一些额外的大型号的 Terminus 字体包,因为默认安装的字体最大只有 16 点而我想要的是 32 点。然后将 `/etc/vconsole.conf` 的内容修改为: +``` +KEYMAP="us" +FONT="ter-v32b" + +``` + +下周我们还将学习一些更加酷的控制台小技巧,以及一些在控制台上运行的多媒体应用。 + + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/learn/intro-to-linux/2018/1/how-change-your-linux-console-fonts + +作者:[Carla Schroder][a] +译者:[lujun9972](https://github.com/lujun9972) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linux.com/users/cschroder +[1]:http://jwilk.net/software/fbcat +[2]:https://github.com/jwilk/fbcat/blob/master/fbgrab +[3]:https://software.opensuse.org/package/console-setup From b1e0869043be80269b2bb30f534adbc49f8bcda0 Mon Sep 17 00:00:00 2001 From: qhwdw Date: Mon, 15 Jan 2018 14:35:50 +0800 Subject: [PATCH 004/226] Translating by qhwdw --- sources/tech/20170628 Notes on BPF and eBPF.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20170628 Notes on BPF and eBPF.md b/sources/tech/20170628 Notes on BPF and eBPF.md index 264319bf97..25a7456649 100644 --- a/sources/tech/20170628 Notes on BPF and eBPF.md +++ b/sources/tech/20170628 Notes on BPF and eBPF.md @@ -1,4 +1,4 @@ -Notes on BPF & eBPF +translating by qhwdw Notes on BPF & eBPF ============================================================ Today it was Papers We Love, my favorite meetup! Today [Suchakra Sharma][6]([@tuxology][7] on twitter/github) gave a GREAT talk about the original BPF paper and recent work in Linux on eBPF. It really made me want to go write eBPF programs! 
From f6ff967beebd1ebeac2a22a8ed674ed17654a964 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 15 Jan 2018 14:57:31 +0800 Subject: [PATCH 005/226] PRF:20170919 What Are Bitcoins.md @Flowsnow --- translated/tech/20170919 What Are Bitcoins.md | 23 ++++++++++--------- 1 file changed, 12 insertions(+), 11 deletions(-) diff --git a/translated/tech/20170919 What Are Bitcoins.md b/translated/tech/20170919 What Are Bitcoins.md index f3d93db089..49a58ef9d1 100644 --- a/translated/tech/20170919 What Are Bitcoins.md +++ b/translated/tech/20170919 What Are Bitcoins.md @@ -3,37 +3,38 @@ ![what are bitcoins](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/what-are-bitcoins_orig.jpg) -**[比特币][1]** 是一种数字货币或者说是电子现金,依靠点对点技术来完成交易。 由于使用点对点技术作为主要网络,比特币提供了一个类似于管理型经济的社区。 这就是说,比特币消除了货币管理的集中式管理方式,促进了货币的社区管理。 大部分比特币数字现金的挖掘和管理软件也是开源的。 +[比特币][1]Bitcoin 是一种数字货币或者说是电子现金,依靠点对点技术来完成交易。 由于使用点对点技术作为主要网络,比特币提供了一个类似于管制经济managed economy的社区。 这就是说,比特币消除了货币管理的集中式管理方式,促进了货币的社区管理。 大部分比特币数字现金的挖掘和管理软件也是开源的。 -第一个比特币软件是由Satoshi Nakamoto开发的,基于开源的密码协议。 比特币最小单位被称为Satoshi,它基本上是单比特币(0.00000001 BTC)的百万分之一。 +第一个比特币软件是由中本聪Satoshi Nakamoto开发的,基于开源的密码协议。 比特币最小单位被称为Satoshi,它基本上是一个比特币的百万分之一(0.00000001 BTC)。 -人们不能低估BITCOINS在数字经济中消除的界限。 例如,BITCOIN消除了由中央机构对货币进行的管理控制,并向整个社区提供控制和管理。 此外,BITCOIN基于开放源代码密码协议的事实使其成为一个开放的领域,其中存在价值波动,通货紧缩和通货膨胀等严格的活动。 当许多互联网用户正在意识到他们在网上完成交易的隐私性时,但是比特币正在变得比以往更受欢迎。 但是,对于那些了解暗网及其工作原理的人们,可以确认有些人早就开始使用它了。 +人们不能低估比特币在数字经济中消除的界限。 例如,比特币消除了由中央机构对货币进行的管理控制,并将控制和管理提供给整个社区。 此外,比特币基于开放源代码密码协议的事实使其成为一个开放的领域,其中存在价值波动、通货紧缩和通货膨胀等严格的活动。 当许多互联网用户正在意识到他们在网上完成交易的隐私性时,比特币正在变得比以往更受欢迎。 但是,对于那些了解暗网及其工作原理的人们,可以确认有些人早就开始使用它了。 -不利的一面是,比特币在匿名支付方面也非常安全,可能会对安全或个人健康构成威胁。 例如,暗网市场是进口药物甚至武器的主要供应商和零售商。 在暗网中使用BITCOINs有助于这种犯罪活动。 尽管如此,如果使用得当,比特币有许多的好处,可以消除一些由于集中的货币代理管理导致的经济上的谬误。 另外,比特币允许在世界任何地方交换现金。 比特币的使用也可以减少假冒,印刷或贬值。 同时,依托对等网络作为骨干网络,促进交易记录的分布式权限,交易会更加安全。 +不利的一面是,比特币在匿名支付方面也非常安全,可能会对安全或个人健康构成威胁。 例如,暗网市场是进口药物甚至武器的主要供应商和零售商。 在暗网中使用比特币有助于这种犯罪活动。 尽管如此,如果使用得当,比特币有许多的好处,可以消除一些由于集中的货币代理管理导致的经济上的谬误。 另外,比特币允许在世界任何地方交换现金。 
比特币的使用也可以减少货币假冒、印刷或贬值。 同时,依托对等网络作为骨干网络,促进交易记录的分布式权限,交易会更加安全。 比特币的其他优点包括: - 在网上商业世界里,比特币促进资金安全和完全控制。这是因为买家受到保护,以免商家可能想要为较低成本的服务额外收取钱财。买家也可以选择在交易后不分享个人信息。此外,由于隐藏了个人信息,也就保护了身份不被盗窃。 -- 对于主要的共同货币灾难,比如如丢失,冻结或损坏,比特币是一种替代品。但是,始终都建议对比特币进行备份并使用密码加密。 +- 对于主要的常见货币灾难,比如如丢失、冻结或损坏,比特币是一种替代品。但是,始终都建议对比特币进行备份并使用密码加密。 - 使用比特币进行网上购物和付款时,收取的费用少或者不收取。这就提高了使用时的可承受性。 - 与其他电子货币不同,商家也面临较少的欺诈风险,因为比特币交易是无法逆转的。即使在高犯罪率和高欺诈的时刻,比特币也是有用的,因为在公开的公共总账(区块链)上难以对付某个人。 - 比特币货币也很难被操纵,因为它是开源的,密码协议是非常安全的。 - 交易也可以随时随地进行验证和批准。这是数字货币提供的灵活性水准。 -还可以阅读 - [Bitkey:专用于比特币交易的Linux发行版][2] +还可以阅读 - [Bitkey:专用于比特币交易的 Linux 发行版][2] ### 如何挖掘比特币和完成必要的比特币管理任务的应用程序 -在数字货币中,BITCOIN挖矿和管理需要额外的软件。有许多开源的比特币管理软件,便于进行支付,接收付款,加密和备份比特币,还有很多的比特币挖掘软件。有些网站,比如:通过查看广告赚取免费比特币的[Freebitcoin][4],MoonBitcoin是另一个可以免费注册并获得比特币的网站。但是,如果有空闲时间和相当多的人脉圈参与,会很方便。有很多提供比特币挖矿的网站,可以轻松注册然后开始挖矿。其中一个主要秘诀就是尽可能引入更多的人构建成一个大型的网络。 +在数字货币中,比特币挖矿和管理需要额外的软件。有许多开源的比特币管理软件,便于进行支付,接收付款,加密和备份比特币,还有很多的比特币挖掘软件。有些网站,比如:通过查看广告赚取免费比特币的 [Freebitcoin][4],MoonBitcoin 是另一个可以免费注册并获得比特币的网站。但是,如果有空闲时间和相当多的人脉圈参与,会很方便。有很多提供比特币挖矿的网站,可以轻松注册然后开始挖矿。其中一个主要秘诀就是尽可能引入更多的人构建成一个大型的网络。 -与比特币一起使用时需要的应用程序包括比特币钱包,使得人们可以安全的持有比特币。这就像使用实物钱包来保存硬通货币一样,而这里是以数字形式存在d的。钱包可以在这里下载 - [比特币-钱包][6]。其他类似的应用包括:与比特币钱包类似的[区块链][7]。 +与比特币一起使用时需要的应用程序包括比特币钱包,使得人们可以安全的持有比特币。这就像使用实物钱包来保存硬通货币一样,而这里是以数字形式存在的。钱包可以在这里下载 —— [比特币-钱包][6]。其他类似的应用包括:与比特币钱包类似的[区块链][7]。 -下面的屏幕截图分别显示了Freebitco和MoonBitco这两个挖矿网站。 +下面的屏幕截图分别显示了 Freebitco 和 MoonBitco 这两个挖矿网站。 [![freebitco bitcoin mining site](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/freebitco-bitcoin-mining-site_orig.jpg)][8] + [![moonbitcoin bitcoin mining site](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/moonbitcoin-bitcoin-mining-site_orig.png)][9] -获得比特币的方式多种多样。其中一些包括比特币挖矿机的使用,比特币在交易市场的购买以及免费的比特币在线采矿。比特币可以在[MtGox][10],[bitNZ][11],[Bitstamp][12],[BTC-E][13],[VertEx][14]等等这些网站买到,这些网站都提供了开源开源应用程序。这些应用包括:Bitminter,[5OMiner][15],[BFG 
Miner][16]等等。这些应用程序使用一些图形卡和处理器功能来生成比特币。在个人电脑上开采比特币的效率在很大程度上取决于显卡的类型和采矿设备的处理器。此外,还有很多安全的在线存储用于备份比特币。这些网站免费提供比特币存储服务。比特币管理网站的例子包括:[xapo][17] , [BlockChain][18] 等。在这些网站上注册需要有效的电子邮件和电话号码进行验证。 Xapo通过电话应用程序提供额外的安全性,无论何时进行新的登录都需要做请求验证。 +获得比特币的方式多种多样。其中一些包括比特币挖矿机的使用,比特币在交易市场的购买以及免费的比特币在线采矿。比特币可以在 [MtGox][10](LCTT 译注:本文比较陈旧,此交易所已经倒闭),[bitNZ][11],[Bitstamp][12],[BTC-E][13],[VertEx][14] 等等这些网站买到,这些网站都提供了开源开源应用程序。这些应用包括:Bitminter、[5OMiner][15],[BFG Miner][16] 等等。这些应用程序使用一些图形卡和处理器功能来生成比特币。在个人电脑上开采比特币的效率在很大程度上取决于显卡的类型和采矿设备的处理器。(LCTT 译注:目前个人挖矿已经几乎毫无意义了)此外,还有很多安全的在线存储用于备份比特币。这些网站免费提供比特币存储服务。比特币管理网站的例子包括:[xapo][17] , [BlockChain][18] 等。在这些网站上注册需要有效的电子邮件和电话号码进行验证。 Xapo 通过电话应用程序提供额外的安全性,无论何时进行新的登录都需要做请求验证。 ### 比特币的缺点 @@ -49,7 +50,7 @@ via: http://www.linuxandubuntu.com/home/things-you-need-to-know-about-bitcoins 作者:[LINUXANDUBUNTU][a] 译者:[Flowsnow](https://github.com/Flowsnow) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From dab346fe64c50587bb194c1439f1554ce3329e6d Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 15 Jan 2018 14:57:55 +0800 Subject: [PATCH 006/226] PUB:20170919 What Are Bitcoins.md @Flowsnow https://linux.cn/article-9241-1.html --- {translated/tech => published}/20170919 What Are Bitcoins.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20170919 What Are Bitcoins.md (100%) diff --git a/translated/tech/20170919 What Are Bitcoins.md b/published/20170919 What Are Bitcoins.md similarity index 100% rename from translated/tech/20170919 What Are Bitcoins.md rename to published/20170919 What Are Bitcoins.md From 35d7e08b7b787d563ef7079019a8ae0b7a8f9bc6 Mon Sep 17 00:00:00 2001 From: Yinr Date: Mon, 15 Jan 2018 15:09:01 +0800 Subject: [PATCH 007/226] Translating by Yinr --- sources/tech/20180111 Multimedia Apps for the Linux Console.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180111 Multimedia 
Apps for the Linux Console.md b/sources/tech/20180111 Multimedia Apps for the Linux Console.md index 1b9171a795..6cdd3ef857 100644 --- a/sources/tech/20180111 Multimedia Apps for the Linux Console.md +++ b/sources/tech/20180111 Multimedia Apps for the Linux Console.md @@ -1,3 +1,5 @@ +Translating by Yinr + Multimedia Apps for the Linux Console ====== From 00002e74a11c72fe8c166919d89b279c9058779e Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 15 Jan 2018 15:11:52 +0800 Subject: [PATCH 008/226] =?UTF-8?q?=E5=88=A0=E9=99=A4=E9=94=99=E8=AF=AF?= =?UTF-8?q?=E7=9A=84=E9=87=8D=E5=A4=8D=E6=8F=90=E4=BA=A4?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @Valoniakim --- ... Language engineering for great justice.md | 59 ------------------- 1 file changed, 59 deletions(-) delete mode 100644 translated/tech/20171118 Language engineering for great justice.md diff --git a/translated/tech/20171118 Language engineering for great justice.md b/translated/tech/20171118 Language engineering for great justice.md deleted file mode 100644 index 301337b11c..0000000000 --- a/translated/tech/20171118 Language engineering for great justice.md +++ /dev/null @@ -1,59 +0,0 @@ - -最合理的语言工程模式 - -============================================================ - - - -当你熟练掌握一体化工程技术时,你就会发现它逐渐超过了技术优化的层面。我们制作的每件手工艺品都在一个大环境背景下,在这个环境中,人类的行为逐渐突破了经济意义,社会学意义,达到了奥地利经济学家所称的“人类行为学”,这是目的明确的人类行为所能达到的最大范围。 - - - -对我来说这并不只是抽象理论。当我在开源发展项目中编写时,我的行为就十分符合人类行为学的理论,这行为不是针对任何特定的软件技术或某个客观事物,它指的是在开发科技的过程中人类行为的背景环境。从人类行为学角度对科技进行的解读不断增加,大量的这种解读可以重塑科技框架,带来人类生产力和满足感的极大幅度增长,而这并不是由于我们换了工具,而是在于我们改变了掌握它们的方式。 - - - -在这个背景下,我在第三篇额外的文章中谈到了 C 语言的衰退和正在到来的巨大改变,而我们也确实能够感受到系统编程的新时代的到来,在这个时刻,我决定把我之前有的大体的预感具象化为更加具体的,更实用的点子,它们主要是关于计算机语言设计的分析,例如为什么他们会成功,或为什么他们会失败。 - - - -在我最近的一篇文章中,我写道:所有计算机语言都是对机器资源的成本和程序员工作成本的相对权衡的结果,和对其相对价值的体现。这些都是在一个计算能力成本不断下降但程序员工作成本不减反增的背景下产生的。我还强调了转化成本在使原有交易主张适用于当下环境中的新增角色。在文中我将编程人员描述为一个寻找今后最适方案的探索者。 - - - -现在我要讲一讲最后一点。以现有水平为起点,一个语言工程师有极大可能通过多种方式推动语言设计的发展。通过什么系统呢? 
GC 还是人工分配?使用何种配置,命令式语言,函数程式语言或是面向对象语言?但是从人类行为学的角度来说,我认为它的形式会更简洁,也许只是选择解决长期问题还是短期问题? - - - -所谓的“远”“近”之分,是指硬件成本的逐渐降低,软件复杂程度的上升和由现有语言向其他语言转化的成本的增加,根据它们的变化曲线所做出的判断。短期问题指编程人员眼下发现的问题,长期问题指可预见的一系列情况,但它们一段时间内不会到来。针对近期问题所做出的部署需要非常及时且有效,但随着情况的变化,短期解决方案有可能很快就不适用了。而长期的解决方案可能因其过于超前而夭折,或因其代价过高无法被接受。 - - - -在计算机刚刚面世的时候, FORTRAN 是近期亟待解决的问题, LISP 是远期问题。汇编语言是短期解决方案,图解说明非通用语言的分类应用,还有关门电阻不断上涨的成本。随着计算机技术的发展,PHP 和 Javascript逐渐应用于游戏中。至于长期的解决方案? Oberon , Ocaml , ML , XML-Docbook 都可以。 他们形成的激励机制带来了大量具有突破性和原创性的想法,事态蓬勃但未形成体系,那个时候距离专业语言的面世还很远,(值得注意的是这些想法的出现都是人类行为学中的因果,并非由于某种技术)。专业语言会失败,这是显而易见的,它的转入成本高昂,让大部分人望而却步,因此不能没能达到能够让主流群体接受的水平,被孤立,被搁置。这也是 LISP 不为人知的的过去,作为前 LISP 管理层人员,出于对它深深的爱,我为你们讲述了这段历史。 - - - -如果短期解决方案出现故障,它的后果更加惨不忍睹,最好的结果是期待一个相对体面的失败,好转换到另一个设计方案。(通常在转化成本较高时)如果他们执意继续,通常造成众多方案相互之间藕断丝连,形成一个不断扩张的复合体,一直维持到不能运转下去,变成一堆摇摇欲坠的杂物。是的,我说的就是 C++ 语言,还有 Java 描述语言,(唉)还有 Perl,虽然 Larry Wall 的好品味成功地让他维持了很多年,问题一直没有爆发,但在 Perl 6 发行时,他的好品味最终引爆了整个问题。 - - - -这种思考角度激励了编程人员向着两个不同的目的重新塑造语言设计: ①以远近为轴,在自身和预计的未来之间选取一个最适点,然后 ②降低由一种或多种语言转化为自身语言的转入成本,这样你就可以吸纳他们的用户群。接下来我会讲讲 C 语言是怎样占领全世界的。 - - - -在整个计算机发展史中,没有谁能比 C 语言完美地把握最适点的选取了,我要做的只是证明这一点,作为一种实用的主流语言, C 语言有着更长的寿命,它目睹了无数个竞争者的兴衰,但它的地位仍旧不可取代。从淘汰它的第一个竞争者到现在已经过了 35 年,但看起来C语言的终结仍旧不会到来。 - - - -当然,如果你愿意的话,可以把 C 语言的持久存在归功于人类的文化惰性,但那是对“文化惰性”这个词的曲解, C 语言一直得以延续的真正原因是没有人提供足够的转化费用! 
- - - -相反的, C 语言低廉的内部转化费用未得到应有的重视,C 语言是如此的千变万化,从它漫长统治时期的初期开始,它就可以适用于多种语言如 FORTRAN , Pascal , 汇编语言和 LISP 的编程习惯。在二十世纪八十年代我就注意到,我可以根据编程人员的编码风格判断出他的母语是什么,这也从另一方面证明了C 语言的魅力能够吸引全世界的人使用它。 - - - -C++ 语言同样胜在它低廉的转化费用。很快,大部分新兴的语言为了降低自身转化费用,纷纷参考 C 语言语法。请注意这给未来的语言设计环境带来了什么影响:它尽可能地提高了 C-like 语言的价值,以此来降低其他语言转化为 C 语言的转化成本。 - - - -另一种降低转入成本的方法十分简单,即使没接触过编程的人都能学会,但这种方法很难完成。我认为唯一使用了这种方法的 Python就是靠这种方法进入了职业比赛。对这个方法我一带而过,是因为它并不是我希望看到的,顺利执行的系统语言战略,虽然我很希望它不是那样的。 - - - -今天我们在2017年年底聚集在这里,下一项我们应该为某些暴躁的团体发声,如 Go 团队,但事实并非如此。 Go 这个项目漏洞百出,我甚至可以想象出它失败的各种可能,Go 团队太过固执独断,即使几乎整个用户群体都认为 Go 需要做出改变了,Go 团队也无动于衷,这是个大问题。 一旦发生故障, GC 发生延迟或者用牺牲生产量来弥补延迟,但无论如何,它都会严重影响到这种语言的应用,大幅缩小这种语言的适用范围。 - - - -即便如此,在 Go 的设计中,还是有一个我颇为认同的远大战略目标,想要理解这个目标,我们需要回想一下如果想要取代 C 语言,要面临的短期问题是什么。同我之前提到的,随着项目计划的不断扩张,故障率也在持续上升,这其中内存管理方面的故障尤其多,而内存管理一直是崩溃漏洞和安全漏洞的高发领域。 - - - -我们现在已经知道了两件十分中重要的紧急任务,要想取代 C 语言,首先要先做到这两点:(1)解决内存管理问题;(2)降低由 C 语言向本语言转化时所需的转入成本。纵观编程语言的历史——从人类行为学的角度来看,作为 C 语言的准替代者,如果不能有效解决转入成本过高这个问题,那他们所做的其他部分做得再好都不算数。相反的,如果他们把转入成本过高这个问题解决地很好,即使他们其他部分做的不是最好的,人们也不会对他们吹毛求疵。 - - - -这正是 Go 的做法,但这个理论并不是完美无瑕的,它也有局限性。目前 GC 延迟限制了它的发展,但 Go 现在选择照搬 Unix 下 C 语言的传染战略,让自身语言变成易于转入,便于传播的语言,其繁殖速度甚至快于替代品。但从长远角度看,这并不是个好办法。 - - - -当然, Rust 语言的不足是个十分明显的问题,我们不应当回避它。而它,正将自己定位为适用于长远计划的选择。在之前的部分中我已经谈到了为什么我觉得它还不完美,Rust 语言在 TIBOE 和PYPL 指数上的成就也证明了我的说法,在 TIBOE 上 Rust 从来没有进过前20名,在 PYPL 指数上它的成就也比 Go 差很多。 - - - -五年后 Rust 能发展的怎样还是个问题,如果他们愿意改变,我建议他们重视转入成本问题。以我个人经历来说,由 C 语言转入 Rust 语言的能量壁垒使人望而却步。如果编码提升工具比如 Corrode 只能把 C 语言映射为不稳定的 Rust 语言,但不能解决能量壁垒的问题;或者如果有更简单的方法能够自动注释所有权或试用期,人们也不再需要它们了——这些问题编译器就能够解决。目前我不知道怎样解决这个问题,但我觉得他们最好找出解决方案。 - - - -在最后我想强调一下,虽然在 Ken Thompson 的设计经历中,他看起来很少解决短期问题,但他对未来有着极大的包容性,并且这种包容性还在不断提升。当然 Unix 也是这样的, 它让我不禁暗自揣测,让我认为 Go 语言中令人不快的地方都其实是他们未来事业的基石(例如缺乏泛型)。如果要确认这件事是真假,我需要比 Ken 还要聪明,但这并不是一件容易让人相信的事情。 - - - --------------------------------------------------------------------------------- - - - -via: http://esr.ibiblio.org/?p=7745 - - - -作者:[Eric Raymond ][a] - -译者:[Valoniakim](https://github.com/Valoniakim) - -校对:[校对者ID](https://github.com/校对者ID) - - 
- -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - - - -[a]:http://esr.ibiblio.org/?author=2 - -[1]:http://esr.ibiblio.org/?author=2 - -[2]:http://esr.ibiblio.org/?p=7711&cpage=1#comment-1913931 - -[3]:http://esr.ibiblio.org/?p=7745 From 197d339b599d3bf4b2909c25f65ebeadecf52d10 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 15 Jan 2018 16:35:16 +0800 Subject: [PATCH 009/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=2010=20Tools=20To?= =?UTF-8?q?=20Add=20Some=20Spice=20To=20Your=20UNIX/Linux=20Shell=20Script?= =?UTF-8?q?s?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... Spice To Your UNIX-Linux Shell Scripts.md | 383 ++++++++++++++++++ 1 file changed, 383 insertions(+) create mode 100644 sources/tech/20100419 10 Tools To Add Some Spice To Your UNIX-Linux Shell Scripts.md diff --git a/sources/tech/20100419 10 Tools To Add Some Spice To Your UNIX-Linux Shell Scripts.md b/sources/tech/20100419 10 Tools To Add Some Spice To Your UNIX-Linux Shell Scripts.md new file mode 100644 index 0000000000..d350bd07b8 --- /dev/null +++ b/sources/tech/20100419 10 Tools To Add Some Spice To Your UNIX-Linux Shell Scripts.md @@ -0,0 +1,383 @@ +10 Tools To Add Some Spice To Your UNIX/Linux Shell Scripts +====== +There are some misconceptions that shell scripts are only for a CLI environment. You can efficiently use various tools to write GUI and network (socket) scripts under KDE or Gnome desktops. Shell scripts can make use of some of the GUI widget (menus, warning boxes, progress bars, etc.). You can always control the final output, cursor position on the screen, various output effects, and more. With the following tools, you can build powerful, interactive, user-friendly UNIX / Linux bash shell scripts. + +Creating GUI application is not an expensive task but a task that takes time and patience. Luckily, both UNIX and Linux ships with plenty of tools to write beautiful GUI scripts. 
The following tools are tested on FreeBSD and Linux operating systems but should work under other UNIX-like operating systems.

### #1: notify-send Command

The notify-send command allows you to send desktop notifications to the user via a notification daemon from the command line. This is useful to inform the desktop user about an event or display some form of information without getting in the user's way. You need to install the following package on Debian/Ubuntu Linux using [apt command][1]/[apt-get command][2]:
`$ sudo apt-get install libnotify-bin`
CentOS/RHEL users can try the following [yum command][3]:
`$ sudo yum install libnotify`
Fedora Linux users can type the following dnf command:
`$ sudo dnf install libnotify`
In this example, send a simple desktop notification from the command line:
```
### send some notification ##
notify-send "rsnapshot done :)"
```

Sample outputs:
![Fig:01: notify-send in action ][4]
Here is another snippet with additional options:
```
....
alert=18000
live=$(lynx --dump http://money.rediff.com/ | grep 'BSE LIVE' | awk '{ print $5}' | sed 's/,//g;s/\.[0-9]*//g')
[ $notify_counter -eq 0 ] && [ $live -ge $alert ] && { notify-send -t 5000 -u low -i gtk-dialog-info "BSE Sensex touched 18k"; notify_counter=1; }
...
```

Sample outputs:
![Fig.02: notify-send with timeouts and other options][5]
Where,

  * -t 5000 : Specifies the timeout in milliseconds (5000 milliseconds = 5 seconds).
  * -u low : Set the urgency level (i.e. low, normal, or critical).
  * -i gtk-dialog-info : Set an icon filename or stock icon to display (you can set a path as -i /path/to/your-icon.png).



For more information on the notify-send utility, please refer to the notify-send man page, viewable by typing man notify-send from the command line:
```
man notify-send
```

### #2: tput Command

The tput command is used to set terminal features. With tput you can:

  * Move the cursor around the screen. 
  * Get information about the terminal.
  * Set colors (background and foreground).
  * Set bold mode.
  * Set reverse mode and much more.



Here is a sample code:
```
#!/bin/bash

# clear the screen
tput clear

# Move cursor to screen location X,Y (top left is 0,0)
tput cup 3 15

# Set a foreground colour using ANSI escape
tput setaf 3
echo "XYX Corp LTD."
tput sgr0

tput cup 5 17
# Set reverse video mode
tput rev
echo "M A I N - M E N U"
tput sgr0

tput cup 7 15
echo "1. User Management"

tput cup 8 15
echo "2. Service Management"

tput cup 9 15
echo "3. Process Management"

tput cup 10 15
echo "4. Backup"

# Set bold mode
tput bold
tput cup 12 15
read -p "Enter your choice [1-4] " choice

tput clear
tput sgr0
tput rc
```


Sample outputs:
![Fig.03: tput in action][6]
For more detail concerning the tput command, see the following man pages:
```
man 5 terminfo
man tput
```

### #3: setleds Command

The setleds command allows you to set the keyboard LEDs. In this example, set NumLock on:
```
setleds -D +num
```

To turn NumLock off, enter:
```
setleds -D -num
```

  * -caps : Clear CapsLock.
  * +caps : Set CapsLock.
  * -scroll : Clear ScrollLock.
  * +scroll : Set ScrollLock.



See the setleds command man page for more information and options:
`man setleds`

### #4: zenity Command

The [zenity command will display GTK+ dialog boxes][7], and return the user's input. This allows you to present information, and ask for information from the user, from all manner of shell scripts. Here is a sample GUI client for the whois directory service for a given domain name:

```shell
#!/bin/bash
# Get domain name
_zenity="/usr/bin/zenity"
_out="/tmp/whois.output.$$"
domain=$(${_zenity} --title "Enter domain" \
 --entry --text "Enter the domain you would like to see whois info" )

if [ $? 
-eq 0 ] +then + # Display a progress dialog while searching whois database + whois $domain | tee >(${_zenity} --width=200 --height=100 \ + --title="whois" --progress \ + --pulsate --text="Searching domain info..." \ + --auto-kill --auto-close \ + --percentage=10) >${_out} + + # Display back output + ${_zenity} --width=800 --height=600 \ + --title "Whois info for $domain" \ + --text-info --filename="${_out}" +else + ${_zenity} --error \ + --text="No input provided" +fi +``` + +Sample outputs: +![Fig.04: zenity in Action][8] +See the zenity man page for more information and all other supports GTK+ widgets: +``` +zenity --help +man zenity +``` + +### #5: kdialog Command + +kdialog is just like zenity but it is designed for KDE desktop / qt apps. You can display dialogs using kdialog. The following will display message on screen: +``` +kdialog --dontagain myscript:nofilemsg --msgbox "File: '~/.backup/config' not found." +``` + +Sample outputs: +![Fig.05: Suppressing the display of a dialog ][9] + +See [shell scripting with KDE Dialogs][10] tutorial for more information. + +### #6: Dialog + +[Dialog is an application used in shell scripts][11] which displays text user interface widgets. It uses the curses or ncurses library. Here is a sample code: +``` +#!/bin/bash +dialog --title "Delete file" \ +--backtitle "Linux Shell Script Tutorial Example" \ +--yesno "Are you sure you want to permanently delete \"/tmp/foo.txt\"?" 7 60 + +# Get exit status +# 0 means user hit [yes] button. +# 1 means user hit [no] button. +# 255 means user hit [Esc] key. +response=$? 
+case $response in + 0) echo "File deleted.";; + 1) echo "File not deleted.";; + 255) echo "[ESC] key pressed.";; +esac +``` + +See the dialog man page for details: +`man dialog` + +#### A Note About Other User Interface Widgets Tools + +UNIX and Linux comes with lots of other tools to display and control apps from the command line, and shell scripts can make use of some of the KDE / Gnome / X widget set: + + * **gmessage** - a GTK-based xmessage clone. + * **xmessage** - display a message or query in a window (X-based /bin/echo) + * **whiptail** - display dialog boxes from shell scripts + * **python-dialog** - Python module for making simple Text/Console-mode user interfaces + + + +### #7: logger command + +The logger command writes entries in the system log file such as /var/log/messages. It provides a shell command interface to the syslog system log module: +``` +logger "MySQL database backup failed." +tail -f /var/log/messages +logger -t mysqld -p daemon.error "Database Server failed" +tail -f /var/log/syslog +``` + +Sample outputs: +``` +Apr 20 00:11:45 vivek-desktop kernel: [38600.515354] CPU0: Temperature/speed normal +Apr 20 00:12:20 vivek-desktop mysqld: Database Server failed +``` + +See howto [write message to a syslog / log file][12] for more information. Alternatively, you can see the logger man page for details: +`man logger` + +### #8: setterm Command + +The setterm command can set various terminal attributes. In this example, force screen to turn black in 15 minutes. 
Monitor standby will occur at 60 minutes: +``` +setterm -blank 15 -powersave powerdown -powerdown 60 +``` + +This example shows underlined text in an xterm window: +``` +setterm -underline on; +echo "Add Your Important Message Here" +setterm -underline off +``` + +Another useful option is to turn the cursor on or off: +``` +setterm -cursor off +``` + +Turn it on: +``` +setterm -cursor on +``` + +See the setterm command man page for details: +`man setterm` + +### #9: smbclient: Sending Messages To MS-Windows Workstations + +The smbclient command can talk to an SMB/CIFS server. It can send a message to selected users or all users on MS-Windows systems: +``` +smbclient -M WinXPPro <<eof +Server going down for maintenance in 5 minutes. +eof +``` + +See [sending a message to a Windows workstation][13] for more information. + +### #10: Bash Sockets: /dev/tcp And /dev/udp + +Bash can open TCP (and UDP) connections through the special /dev/tcp/host/port device paths. For example, the following checks whether TCP port 25 is open on localhost: +``` +(echo >/dev/tcp/localhost/25) &>/dev/null && echo "TCP port 25 open" || echo "TCP port 25 close" +``` + +You can use a [bash loop to find out open ports][14] with the following snippet: +``` +echo "Scanning TCP ports..." +for p in {1..1023} +do + (echo >/dev/tcp/localhost/$p) >/dev/null 2>&1 && echo "$p open" +done +``` + + +Sample outputs: +``` +Scanning TCP ports... +22 open +53 open +80 open +139 open +445 open +631 open +``` + +In this example, your bash script acts as an HTTP client: +``` +#!/bin/bash +exec 3<> /dev/tcp/${1:-www.cyberciti.biz}/80 + +printf "GET / HTTP/1.0\r\n" >&3 +printf "Accept: text/html, text/plain\r\n" >&3 +printf "Accept-Language: en\r\n" >&3 +printf "User-Agent: nixCraft_BashScript v.%s\r\n" "${BASH_VERSION}" >&3 +printf "\r\n" >&3 + +while read LINE <&3 +do + # do something on $LINE + # or send $LINE to grep or awk for grabbing data + # or simply display back data with echo command + echo $LINE +done +``` + +See the bash man page for more information: +`man bash` + +### A Note About GUI Tools and Cronjob + +You need to request the local display/input service using the export DISPLAY=[user's machine]:0 command if you are [using cronjob][15] to call your scripts.
For example, call /home/vivek/scripts/monitor.stock.sh as follows which uses zenity tool: +`@hourly DISPLAY=:0.0 /home/vivek/scripts/monitor.stock.sh` + +Have a favorite UNIX tool to spice up shell script? Share it in the comments below. + +### about the author + +The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][16], [Facebook][17], [Google+][18]. + +-------------------------------------------------------------------------------- + +via: https://www.cyberciti.biz/tips/spice-up-your-unix-linux-shell-scripts.html + +作者:[Vivek Gite][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.cyberciti.biz +[1]:https://www.cyberciti.biz/faq/ubuntu-lts-debian-linux-apt-command-examples/ (See Linux/Unix apt command examples for more info) +[2]:https://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html (See Linux/Unix apt-get command examples for more info) +[3]:https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/ (See Linux/Unix yum command examples for more info) +[4]:https://www.cyberciti.biz/media/new/tips/2010/04/notify-send.png (notify-send: Shell Script Get Or Send Desktop Notifications ) +[5]:https://www.cyberciti.biz/media/new/tips/2010/04/notify-send-with-icons-timeout.png (Linux / UNIX: Display Notifications From Your Shell Scripts With notify-send) +[6]:https://www.cyberciti.biz/media/new/tips/2010/04/tput-options.png (Linux / UNIX Script Colours and Cursor Movement With tput) +[7]:https://bash.cyberciti.biz/guide/Zenity:_Shell_Scripting_with_Gnome +[8]:https://www.cyberciti.biz/media/new/tips/2010/04/zenity-outputs.png (zenity: Linux / UNIX display 
Dialogs Boxes From The Shell Scripts) +[9]:https://www.cyberciti.biz/media/new/tips/2010/04/KDialog.png (Kdialog: Suppressing the display of a dialog ) +[10]:http://techbase.kde.org/Development/Tutorials/Shell_Scripting_with_KDE_Dialogs +[11]:https://bash.cyberciti.biz/guide/Bash_display_dialog_boxes +[12]:https://www.cyberciti.biz/tips/howto-linux-unix-write-to-syslog.html +[13]:https://www.cyberciti.biz/tips/freebsd-sending-a-message-to-windows-workstation.html +[14]:https://www.cyberciti.biz/faq/bash-for-loop/ +[15]:https://www.cyberciti.biz/faq/how-do-i-add-jobs-to-cron-under-linux-or-unix-oses/ +[16]:https://twitter.com/nixcraft +[17]:https://facebook.com/nixcraft +[18]:https://plus.google.com/+CybercitiBiz From 6be090feaba54731c6b029d811e46f1f4c66d24b Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 15 Jan 2018 16:42:18 +0800 Subject: [PATCH 010/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20The=20open=20orga?= =?UTF-8?q?nization=20and=20inner=20sourcing=20movements=20can=20share=20k?= =?UTF-8?q?nowledge?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 
sourcing movements can share knowledge.md | 121 ++++++++++++++++++ 1 file changed, 121 insertions(+) create mode 100644 sources/tech/20180111 The open organization and inner sourcing movements can share knowledge.md diff --git a/sources/tech/20180111 The open organization and inner sourcing movements can share knowledge.md b/sources/tech/20180111 The open organization and inner sourcing movements can share knowledge.md new file mode 100644 index 0000000000..272c1b03ae --- /dev/null +++ b/sources/tech/20180111 The open organization and inner sourcing movements can share knowledge.md @@ -0,0 +1,121 @@ +The open organization and inner sourcing movements can share knowledge +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/gov_collaborative_risk.png?itok=we8DKHuL) +Image by : opensource.com + +Red Hat is a company with roughly 11,000 employees. The IT department consists of roughly 500 members. Though it makes up just a fraction of the entire organization, the IT department is still sufficiently staffed to have many application service, infrastructure, and operational teams within it. Our purpose is "to enable Red Hatters in all functions to be effective, productive, innovative, and collaborative, so that they feel they can make a difference,"--and, more specifically, to do that by providing technologies and related services in a fashion that is as open as possible. + +Being open like this takes time, attention, and effort. While we always strive to be as open as possible, it can be difficult. For a variety of reasons, we don't always succeed. + +In this story, I'll explain a time when, in the rush to innovate, the Red Hat IT organization lost sight of its open ideals. But I'll also explore how returning to those ideals--and using the collaborative tactics of "inner source"--helped us to recover and greatly improve the way we deliver services. 
+ +### About inner source + +Before I explain how inner source helped our team, let me offer some background on the concept. + +Inner source is the adoption of open source development practices between teams within an organization to promote better and faster delivery without requiring project resources be exposed to the world or openly licensed. It allows an organization to receive many of the benefits of open source development methods within its own walls. + +In this way, inner source aligns well with open organization strategies and principles; it provides a path for open, collaborative development. While the open organization defines its principles of openness broadly as transparency, inclusivity, adaptability, collaboration, and community--and covers how to use these open principles for communication, decision making, and many other topics--inner source is about the adoption of specific and tactical practices, processes, and patterns from open source communities to improve delivery. + +For instance, [the Open Organization Maturity Model][1] suggests that in order to be transparent, teams should, at minimum, share all project resources with the project team (though it suggests that it's generally better to share these resources with the entire organization). The common pattern in both inner source and open source development is to host all resources in a publicly available version control system, for source control management, which achieves the open organization goal of high transparency. + +Inner source aligns well with open organization strategies and principles. + +Another example of value alignment appears in the way open source communities accept contributions. In open source communities, source code is transparently available. Community contributions in the form of patches or merge requests are commonly accepted practices (even expected ones). 
This provides one example of how to meet the open organization's goal of promoting inclusivity and collaboration. + +### The challenge + +Early in 2014, Red Hat IT began its first steps toward making Amazon Web Services (AWS) a standard hosting offering for business critical systems. While teams within Red Hat IT had built several systems and services in AWS by this time, these were bespoke creations, and we desired to make deploying services to IT standards in AWS both simple and standardized. + +In order to make AWS cloud hosting meet our operational standards (while being scalable), the Cloud Enablement team within Red Hat IT decided that all infrastructure in AWS would be configured through code, rather than manually, and that everyone would use a standard set of tools. The Cloud Enablement team designed and built these standard tools; a separate group, the Platform Operations team, was responsible for provisioning and hosting systems and services in AWS using the tools. + +The Cloud Enablement team built a toolset, obtusely named "Template Util," based on AWS Cloud Formations configurations wrapped in a management layer to enforce certain configuration requirements and make stamping out multiple copies of services across environments easier. While the Template Util toolset technically met all our initial requirements, and we eventually provisioned the infrastructure for more than a dozen services with it, engineers in every team working with the tool found using it to be painful. Michael Johnson, one engineer using the tool, said "It made doing something relatively straightforward really complicated." + +Among the issues Template Util exhibited were: + + * Underlying cloud formations technologies implied constraints on application stack management at odds with how we managed our application systems. + * The tooling was needlessly complex and brittle in places, using multiple layered templating technologies and languages making syntax issues hard to debug. 
+ * The code for the tool--and some of the data users needed to manipulate the tool--were kept in a repository that was difficult for most users to access. + * There was no standard process to contributing or accepting changes. + * The documentation was poor. + + + +As more engineers attempted to use the Template Util toolset, they found even more issues and limitations with the tools. Unhappiness continued to grow. To make matters worse, the Cloud Enablement team then shifted priorities to other deliverables without relinquishing ownership of the tool, so bug fixes and improvements to the tools were further delayed. + +The real, core issues here were our inability to build an inclusive community to collaboratively build shared tooling that met everyone's needs. Fear of losing "ownership," fear of changing requirements, and fear of seeing hard work abandoned all contributed to chronic conflict, which in turn led to poorer outcomes. + +### Crisis point + +By September 2015, more than a year after launching our first major service in AWS with the Template Util tool, we hit a crisis point. + +Many engineers refused to use the tools. That forced all of the related service provisioning work on a small set of engineers, further fracturing the community and disrupting service delivery roadmaps as these engineers struggled to deal with unexpected work. We called an emergency meeting and invited all the teams involved to find a solution. + +During the emergency meeting, we found that people generally thought we needed immediate change and should start the tooling effort over, but even the decision to start over wasn't unanimous. Many solutions emerged--sometimes multiple solutions from within a single team--all of which would require significant work to implement. 
While we couldn't reach a consensus on which solution to use during this meeting, we did reach an agreement to give proponents of different technologies two weeks to work together, across teams, to build their case with a prototype, which the community could then review. + +While we didn't reach a final and definitive decision, this agreement was the first point where we started to return to the open source ideals that guide our mission. By inviting all involved parties, we were able to be transparent and inclusive, and we could begin rebuilding our internal community. By making clear that we wanted to improve things and were open to new options, we showed our commitment to adaptability and meritocracy. Most importantly, the plan for building prototypes gave people a clear, return path to collaboration. + +When the community reviewed the prototypes, it determined that the clear leader was an Ansible-based toolset that would eventually become known, internally, as Ansicloud. (At the time, no one involved with this work had any idea that Red Hat would acquire Ansible the following month. It should also be noted that other teams within Red Hat have found tools based on Cloud Formation extremely useful, even when our specific Template Util tool did not find success.) + +This prototyping and testing phase didn't fix things overnight, though. While we had consensus on the general direction we needed to head, we still needed to improve the new prototype to the point at which engineers could use it reliably for production services. + +So over the next several months, a handful of engineers worked to further build and extend the Ansicloud toolset. We built three new production services. While we were sharing code, that sharing activity occurred at a low level of maturity. Some engineers had trouble getting access due to older processes. Other engineers headed in slightly different directions, with each engineer having to rediscover some of the core design issues themselves. 
+ +### Returning to openness + +This led to a turning point: Building on top of the previous agreement, we focused on developing a unified vision and providing easier access. To do this, we: + + 1. created a list of specific goals for the project (both "must-haves" and "nice-to-haves"), + 2. created an open issue log for the project to avoid solving the same problem repeatedly, + 3. opened our code base so anyone in Red Hat could read or clone it, and + 4. made it easy for engineers to get trusted committer access + + + +Our agreement to collaborate, our finally unified vision, and our improved tool development methods spurred the growth of our community. Ansicloud adoption spread throughout the involved organizations, but this led to a new problem: The tool started changing more quickly than users could adapt to it, and improvements that different groups submitted were beginning to affect other groups in unanticipated ways. + +These issues resulted in our recent turn to inner source practices. While every open source project operates differently, we focused on adopting some best practices that seemed common to many of them. In particular: + + * We identified the business owner of the project and the core-contributor group of developers who would govern the development of the tools and decide what contributions to accept. While we want to keep things open, we can't have people working against each other or breaking each other's functionality. + * We developed a project README clarifying the purpose of the tool and specifying how to use it. We also created a CONTRIBUTING document explaining how to contribute, what sort of contributions would be useful, and what sort of tests a contribution would need to pass to be accepted. + * We began building continuous integration and testing services for the Ansicloud tool itself. This helped us ensure we could quickly and efficiently validate contributions technically, before the project accepted and merged them. 
+ + + +With these basic agreements, documents, and tools available, we were back onto the path of open collaboration and successful inner sourcing. + +### Why it matters + +Why does inner source matter? + +From a developer community point of view, shifting from a traditional siloed development model to the inner source model has produced significant, quantifiable improvements: + + * Contributions to our tooling have grown 72% per week (by number of commits). + * The percentage of contributions from non-core committers has grown from 27% to 78%; the users of the toolset are driving its development. + * The contributor list has grown by 15%, primarily from new users of the tool set, rather than core committers, increasing our internal community. + + + +And the tools we've delivered through this project have allowed us to see dramatic improvements in our business outcomes. Using the Ansicloud tools, 54 new multi-environment application service deployments were created in 385 days (compared to 20 services in 1,013 days with the Template Util tools). We've gone from one new service deployment in a 50-day period to one every week--a seven-fold increase in the velocity of our delivery. + +What really matters here is that the improvements we saw were not aberrations. Inner source provides common, easily understood patterns that organizations can adopt to effectively promote collaboration (not to mention other open organization principles). By mirroring open source production practices, inner source can also mirror the benefits of open source code, which have been seen time and time again: higher quality code, faster development, and more engaged communities. + +This article is part of the [Open Organization Workbook project][2]. + +### about the author +Tom Benninger - Tom Benninger is a Solutions Architect, Systems Engineer, and continual tinkerer at Red Hat, Inc. 
Having worked with startups, small businesses, and larger enterprises, he has experience within a broad set of IT disciplines. His current area of focus is improving Application Lifecycle Management in the enterprise. He has a particular interest in how open source, inner source, and collaboration can help support modern application development practices and the adoption of DevOps, CI/CD, Agile,... + +-------------------------------------------------------------------------------- + +via: https://opensource.com/open-organization/18/1/open-orgs-and-inner-source-it + +作者:[Tom Benninger][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/tomben +[1]:https://opensource.com/open-organization/resources/open-org-maturity-model +[2]:https://opensource.com/open-organization/17/8/workbook-project-announcement From c1cf3839559dfbd30de712c1e7a5cea1018a0676 Mon Sep 17 00:00:00 2001 From: zjon Date: Mon, 15 Jan 2018 16:45:01 +0800 Subject: [PATCH 011/226] Translating zjon --- sources/tech/20180102 Best open source tutorials in 2017.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20180102 Best open source tutorials in 2017.md b/sources/tech/20180102 Best open source tutorials in 2017.md index e9d9d7b9ad..7612772b49 100644 --- a/sources/tech/20180102 Best open source tutorials in 2017.md +++ b/sources/tech/20180102 Best open source tutorials in 2017.md @@ -1,3 +1,4 @@ +Translating zjon Best open source tutorials in 2017 ====== ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead-teacher-learner.png?itok=rMJqBN5G) From 8d70372bfe344bcaffc66a4ec69dc4ab9927213f Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 15 Jan 2018 16:46:34 +0800 Subject: [PATCH 012/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Playing=20Quake?= =?UTF-8?q?=204=20on=20Linux=20in=202018?= MIME-Version: 
1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...180114 Playing Quake 4 on Linux in 2018.md | 80 +++++++++++++++++++ 1 file changed, 80 insertions(+) create mode 100644 sources/tech/20180114 Playing Quake 4 on Linux in 2018.md diff --git a/sources/tech/20180114 Playing Quake 4 on Linux in 2018.md b/sources/tech/20180114 Playing Quake 4 on Linux in 2018.md new file mode 100644 index 0000000000..26dd305a4a --- /dev/null +++ b/sources/tech/20180114 Playing Quake 4 on Linux in 2018.md @@ -0,0 +1,80 @@ +Playing Quake 4 on Linux in 2018 +====== +A few months back [I wrote an article][1] outlining the various options Linux users now have for playing Doom 3, as well as stating which of the three contenders I felt to be the best option in 2017. Having already gone to the trouble of getting the original Doom 3 binary working on my modern Arch Linux system, it made me wonder just how much effort it would take to get the closed source Quake 4 port up and running again as well. + +### Getting it running + +[![][2]][3] [![][4]][5] + +Quake 4 was ported to Linux by Timothee Besset in 2005, although the binaries themselves were later taken down along with the rest of the id Software FTP server by ZeniMax. The original [Linux FAQ page][6] is still online though, and mirrors hosting the Linux installer still exist, such as [this one][7] run by the fan website [Quaddicted][8]. Once downloaded this will give you a graphical installer which will install the game binary without any of the game assets. + +These will need to be taken from either the game discs of a retail Windows version as I did, or taken from an already installed Windows version of the game such as from [Steam][9]. Follow the steps in the Linux FAQ to the letter for best results. Please note that the [GOG.com][10] release of Quake 4 is unique in not supplying a valid CD key, something which is still required for the Linux port to launch.
There are [ways to get around this][11], but we only condone these methods for legitimate purchasers. + +Like with Doom 3 I had to remove the libgcc_s.so.1, libSDL-1.2.id.so.0, and libstdc++.so.6 libraries that the game came with in the install directory in order to get it to run. I also ran into the same sound issue I had with Doom 3, meaning I had to modify the Quake4Config.cfg file located in the hidden ~/.quake4/q4base directory in the same fashion as before. However, this time I ran into a whole host of other issues that made me have to modify the configuration file as well. + +First off the language the game wanted to use would always default to Spanish, meaning I had to manually tell the game to use English instead. I also ran into a known issue on all platforms wherein the game would not properly recognize the available VRAM on modern graphics cards, and as such would force the game to use lower image quality settings. Quake 4 will also not render see-through surfaces unless anti-aliasing is enabled, although going beyond 8x caused the game not to load for me. + +Appending the following to the end of the Quake4Config.cfg file resolved all of my issues: + +``` +seta image_downSize "0" +seta image_downSizeBump "0" +seta image_downSizeSpecular "0" +seta image_filter "GL_LINEAR_MIPMAP_LINEAR" +seta image_ignoreHighQuality "0" +seta image_roundDown "0" +seta image_useCompression "0" +seta image_useNormalCompression "0" +seta image_anisotropy "16" +seta image_lodbias "0" +seta r_renderer "best" +seta r_multiSamples "8" +seta sys_lang "english" +seta s_alsa_pcm "hw:0,0" +seta com_allowConsole "1" +``` + +Please note that this will also set the game to use 8x anti-aliasing and restore the drop down console to how it worked in all of the previous Quake games. Similar to the Linux port of Doom 3 the Linux version of Quake 4 also does not support Creative EAX ADVANCED HD audio technology. 
Unlike Doom 3, though, Quake 4 does seem to also feature an alternate method for surround sound, and widescreen support was thankfully patched into the game soon after its release. + +### Playing the game + +[![][12]][13] [![][14]][15] + +Over the years Quake 4 has gained something of a reputation as the black sheep of the Quake family, with many people complaining that the game's vehicle sections, squad mechanics, and general aesthetic made it feel too close to contemporary military shooters of the time. In the game's heart of hearts though it really does feel like a concerted sequel to Quake II, with some of developer Raven Software's own Star Trek: Voyager - Elite Force title thrown in for good measure. + +To me at least Quake 4 does stand as being one of the "Last of the Romans" in terms of being a first person shooter that embraced classic design ideals at a time when similar titles were not getting the support of major publishers. Most of the game still features the player moving between levels featuring fixed enemy placements, a wide variety of available weapons, traditional health packs, and an array of enemies each sporting unique attributes and skills. + +Quake 4 also offers a well-made campaign that I found myself going back to on a higher skill level not long after I had already finished my first try at the game. Certain aspects like the vehicle sections do indeed drag the game down a bit, and the multiplayer aspect pales in comparison to its predecessor Quake III Arena, but overall I am quite pleased with what Raven Software was able to accomplish with the Doom 3 engine, especially when so few others tried. + +### Final thoughts + +If anyone ever needed a reason to be reminded of the value of video game source code releases, this is it.
Most of the problems I encountered could have been easily sidestepped if Quake 4 source ports were available, but with the likes of John Carmack and Timothee Besset gone from id Software and the current climate at ZeniMax not looking too promising, it is doubtful that any such creations will ever materialize. Doom 3 source ports look to be the end of the road. + +Instead we are stuck using this cranky 32 bit binary with an obstructive CD Key check and a graphics system that freaks out at the sight of any modern video card sporting more than 512 MB of VRAM. The game itself has aged well, with graphics that still look great and dynamic lighting that is better than what is included with many modern titles. It is just a shame that it is now such a pain to get running, not just on Linux, but on any platform. + +-------------------------------------------------------------------------------- + +via: https://www.gamingonlinux.com/articles/playing-quake-4-on-linux-in-2018.11017 + +作者:[Hamish][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.gamingonlinux.com/profiles/6 +[1]:https://www.gamingonlinux.com/articles/playing-doom-3-on-linux-in-2017.10561 +[2]:https://www.gamingonlinux.com/uploads/articles/article_images/thumbs/20458196191515697921gol6.jpg +[3]:https://www.gamingonlinux.com/uploads/articles/article_images/20458196191515697921gol6.jpg +[4]:https://www.gamingonlinux.com/uploads/articles/article_images/thumbs/9405540721515697921gol6.jpg +[5]:https://www.gamingonlinux.com/uploads/articles/article_images/9405540721515697921gol6.jpg +[6]:http://zerowing.idsoftware.com/linux/quake4/Quake4FrontPage/ +[7]:https://www.quaddicted.com/files/idgames2/idstuff/quake4/linux/ +[8]:https://www.quaddicted.com/ +[9]:http://store.steampowered.com/app/2210/Quake_IV/ +[10]:https://www.gog.com/game/quake_4 
+[11]:https://www.gog.com/forum/quake_series/quake_4_on_linux_no_cd_key/post31 +[12]:https://www.gamingonlinux.com/uploads/articles/article_images/thumbs/5043571471515951537gol6.jpg +[13]:https://www.gamingonlinux.com/uploads/articles/article_images/5043571471515951537gol6.jpg +[14]:https://www.gamingonlinux.com/uploads/articles/article_images/thumbs/6922853731515697921gol6.jpg +[15]:https://www.gamingonlinux.com/uploads/articles/article_images/6922853731515697921gol6.jpg From 134519f7761f3598b130785650b0ca570bbe7e80 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 15 Jan 2018 17:22:08 +0800 Subject: [PATCH 013/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20What=20a=20GNU=20?= =?UTF-8?q?C=20Compiler=20Bug=20looks=20like?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...14 What a GNU C Compiler Bug looks like.md | 77 +++++++++++++++++++ 1 file changed, 77 insertions(+) create mode 100644 sources/tech/20180114 What a GNU C Compiler Bug looks like.md diff --git a/sources/tech/20180114 What a GNU C Compiler Bug looks like.md b/sources/tech/20180114 What a GNU C Compiler Bug looks like.md new file mode 100644 index 0000000000..3b95d4089b --- /dev/null +++ b/sources/tech/20180114 What a GNU C Compiler Bug looks like.md @@ -0,0 +1,77 @@ +What a GNU C Compiler Bug looks like +====== +Back in December a Linux Mint user sent a [strange bug report][1] to the darktable mailing list. 
Apparently the GNU C Compiler (GCC) on his system exited with the following error message, breaking the build process: +``` +cc1: error: unrecognized command line option '-Wno-format-truncation' [-Werror] +cc1: all warnings being treated as errors +src/iop/CMakeFiles/colortransfer.dir/build.make:67: recipe for target 'src/iop/CMakeFiles/colortransfer.dir/introspection_colortransfer.c.o' failed make[2]: *** [src/iop/CMakeFiles/colortransfer.dir/introspection_colortransfer.c.o] Error 1 CMakeFiles/Makefile2:6323: recipe for target 'src/iop/CMakeFiles/colortransfer.dir/all' failed + +make[1]: *** [src/iop/CMakeFiles/colortransfer.dir/all] Error 2 + +``` + +`-Wno-format-truncation` is a rather new GCC feature which instructs the compiler to issue a warning if it can already deduce at compile time that calls to formatted I/O functions like `snprintf()` or `vsnprintf()` might result in truncated output. + +That's definitely neat, but Linux Mint 18.3 (just like Ubuntu 16.04 LTS) uses GCC 5.4.0, which doesn't support this feature. And darktable relies on a chain of CMake macros to make sure it doesn't use any flags the compiler doesn't know about: +``` +CHECK_COMPILER_FLAG_AND_ENABLE_IT(-Wno-format-truncation) + +``` + +So why did this even happen? I logged into one of my Ubuntu 16.04 installations and tried to reproduce the problem. Which wasn't hard: I just had to check out the git tree in question and build it. Boom, same error.
+ +### The solution + +It turns out that while `-Wformat-truncation` isn't a valid option for GCC 5.4.0 (it's not documented), this version silently accepts the negation under some circumstances (!): +``` + +sturmflut@hogsmeade:/tmp$ gcc -Wformat-truncation -o test test.c +gcc: error: unrecognized command line option '-Wformat-truncation' +sturmflut@hogsmeade:/tmp$ gcc -Wno-format-truncation -o test test.c +sturmflut@hogsmeade:/tmp$ + +``` + +(test.c just contains an empty main() method). + +Because darktable uses `CHECK_COMPILER_FLAG_AND_ENABLE_IT(-Wno-format-truncation)`, it is fooled into thinking this compiler version actually supports `-Wno-format-truncation` at all times. The simple test case used by the CMake macro doesn't fail, but the compiler later decides to no longer silently accept the invalid command line option for some reason. + +One of the cases which triggered this was when the source file under compilation had already generated some other warnings before. If I forced a serialized build using `make -j1` on a clean darktable checkout on this machine, `./src/iop/colortransfer.c` actually was the first file which caused any +compiler warnings at all, so this is why the process failed exactly there. + +The minimum test case to trigger this behavior in GCC 5.4.0 is a C file with a `main()` function with a parameter which has the wrong type, like this one: +``` + +int main(int argc, int argv) +{ +} + +``` + +Then add `-Wall` to make sure the compiler will treat this as a warning, and it fails: +``` + +sturmflut@hogsmeade:/tmp$ gcc -Wall -Wno-format-truncation -o test test.c +test.c:1:5: warning: second argument of 'main' should be 'char **' [-Wmain] + int main(int argc, int argv) + ^ +cc1: warning: unrecognized command line option '-Wno-format-truncation' + +``` + +If you omit `-Wall`, the compiler will not generate the first warning and also not complain about `-Wno-format-truncation`. 
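Since GCC only rejects an unknown `-Wno-*` option when other diagnostics are being emitted, a more robust feature check probes the positive form of the flag instead of the negation. A small bash sketch of that string transformation (the function name is illustrative, not darktable's actual code):

```shell
#!/bin/bash
# Given a warning flag, return the form that should be fed to a compiler
# feature test: for -Wno-foo, probe -Wfoo instead, because GCC silently
# accepts unknown -Wno-* options unless other diagnostics are produced.
probe_form() {
    local flag=$1
    case $flag in
        -Wno-*) printf '%s\n' "-W${flag#-Wno-}" ;;
        *)      printf '%s\n' "$flag" ;;
    esac
}

probe_form -Wno-format-truncation   # -> -Wformat-truncation
probe_form -Wextra                  # unchanged
```

The build system can then compile a test file with the positive form (e.g. `gcc $(probe_form -Wno-format-truncation) -c test.c`) and only add `-Wno-format-truncation` to the real build if that probe succeeds.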
+ +I've never run into this before, but I guess Ubuntu 16.04 is going to stay with us for a while since it is the current LTS release until May 2018, and even after that it will still be supported until 2021. So this buggy GCC version will most likely also stay alive for quite a while. Which is why the check for this flag has been removed from the + +-------------------------------------------------------------------------------- + +via: http://www.lieberbiber.de/2018/01/14/what-a-gnu-compiler-bug-looks-like/ + +作者:[sturmflut][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.lieberbiber.de/author/sturmflut/ +[1]:https://www.mail-archive.com/darktable-dev@lists.darktable.org/msg02760.html From 0b43a30a87887cc95f620ed9fc241c1d8927d000 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 15 Jan 2018 18:28:19 +0800 Subject: [PATCH 014/226] rename --- ...0 Why isn-t open source hot among computer science students.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename sources/tech/20190110 Why isn-t open source hot among computer science students.md => 20180110 Why isn-t open source hot among computer science students.md (100%) diff --git a/sources/tech/20190110 Why isn-t open source hot among computer science students.md b/20180110 Why isn-t open source hot among computer science students.md similarity index 100% rename from sources/tech/20190110 Why isn-t open source hot among computer science students.md rename to 20180110 Why isn-t open source hot among computer science students.md From f8d43ca658032577a9f6b921c628f85b16bbd2cd Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 15 Jan 2018 18:38:49 +0800 Subject: [PATCH 015/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20To=20Set=20?= =?UTF-8?q?Up=20PF=20Firewall=20on=20FreeBSD=20to=20Protect=20a=20Web=20Se?= =?UTF-8?q?rver?= MIME-Version: 1.0 Content-Type: text/plain; 
charset=UTF-8
Content-Transfer-Encoding: 8bit
---
 ...wall on FreeBSD to Protect a Web Server.md | 333 ++++++++++++++++++
 1 file changed, 333 insertions(+)
 create mode 100644 sources/tech/20170829 How To Set Up PF Firewall on FreeBSD to Protect a Web Server.md

diff --git a/sources/tech/20170829 How To Set Up PF Firewall on FreeBSD to Protect a Web Server.md b/sources/tech/20170829 How To Set Up PF Firewall on FreeBSD to Protect a Web Server.md
new file mode 100644
index 0000000000..45ce0c0a7a
--- /dev/null
+++ b/sources/tech/20170829 How To Set Up PF Firewall on FreeBSD to Protect a Web Server.md
@@ -0,0 +1,333 @@
+How To Set Up PF Firewall on FreeBSD to Protect a Web Server
+======
+
+I am a new FreeBSD server user and moved from netfilter on Linux. How do I set up a firewall with PF on a FreeBSD server to protect a web server with a single public IP address and interface?
+
+
+PF is an acronym for packet filter. It was created for OpenBSD but has been ported to FreeBSD and other operating systems. It is a stateful packet filtering engine. This tutorial will show you how to set up a firewall with PF on a FreeBSD 10.x or 11.x server to protect your web server.
+
+
+## Step 1 - Turn on PF firewall
+
+You need to add the following four lines to the /etc/rc.conf file:
+```
+# echo 'pf_enable="YES"' >> /etc/rc.conf
+# echo 'pf_rules="/usr/local/etc/pf.conf"' >> /etc/rc.conf
+# echo 'pflog_enable="YES"' >> /etc/rc.conf
+# echo 'pflog_logfile="/var/log/pflog"' >> /etc/rc.conf
+```
+Where,
+
+ 1. **pf_enable="YES"** - Turn on PF service.
+ 2. **pf_rules="/usr/local/etc/pf.conf"** - Read PF rules from this file.
+ 3. **pflog_enable="YES"** - Turn on logging support for PF.
+ 4. **pflog_logfile="/var/log/pflog"** - File where pflogd should store the logfile, i.e. store logs in the /var/log/pflog file.
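Note that the `echo ... >> /etc/rc.conf` commands above append unconditionally, so running them twice leaves duplicate entries. The POSIX `sh` sketch below (the `add_setting` helper is my own, not from the article, and it writes to a scratch file unless you point `RC` at the real `/etc/rc.conf`) appends each setting only when it is not already present:

```shell
#!/bin/sh
# Append each PF-related setting to an rc.conf-style file exactly once.
RC=${RC:-/tmp/rc.conf.demo}    # point this at /etc/rc.conf on a real system
: > "$RC"                      # start from an empty scratch file for the demo

add_setting() {
    # $1 is a complete line such as pf_enable="YES"; -x matches whole
    # lines only and -F treats the pattern as a fixed string.
    grep -qxF "$1" "$RC" || printf '%s\n' "$1" >> "$RC"
}

add_setting 'pf_enable="YES"'
add_setting 'pf_rules="/usr/local/etc/pf.conf"'
add_setting 'pflog_enable="YES"'
add_setting 'pflog_logfile="/var/log/pflog"'

# Running it again is harmless; no duplicates are added:
add_setting 'pf_enable="YES"'

cat "$RC"
```

On an actual FreeBSD system, `sysrc pf_enable=YES` and friends give you the same idempotence natively.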
+ + + +[![How To Set Up a Firewall with PF on FreeBSD to Protect a Web Server][1]][1] + +## Step 2 - Creating firewall rules in /usr/local/etc/pf.conf + +Type the following command: +``` +# vi /usr/local/etc/pf.conf +``` +Append the following PF rulesets : +``` +# vim: set ft=pf +# /usr/local/etc/pf.conf + +## Set your public interface ## +ext_if="vtnet0" + +## Set your server public IP address ## +ext_if_ip="172.xxx.yyy.zzz" + +## Set and drop these IP ranges on public interface ## +martians = "{ 127.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12, \ + 10.0.0.0/8, 169.254.0.0/16, 192.0.2.0/24, \ + 0.0.0.0/8, 240.0.0.0/4 }" + +## Set http(80)/https (443) port here ## +webports = "{http, https}" + +## enable these services ## +int_tcp_services = "{domain, ntp, smtp, www, https, ftp, ssh}" +int_udp_services = "{domain, ntp}" + +## Skip loop back interface - Skip all PF processing on interface ## +set skip on lo + +## Sets the interface for which PF should gather statistics such as bytes in/out and packets passed/blocked ## +set loginterface $ext_if + +## Set default policy ## +block return in log all +block out all + +# Deal with attacks based on incorrect handling of packet fragments +scrub in all + +# Drop all Non-Routable Addresses +block drop in quick on $ext_if from $martians to any +block drop out quick on $ext_if from any to $martians + +## Blocking spoofed packets +antispoof quick for $ext_if + +# Open SSH port which is listening on port 22 from VPN 139.xx.yy.zz Ip only +# I do not allow or accept ssh traffic from ALL for security reasons +pass in quick on $ext_if inet proto tcp from 139.xxx.yyy.zzz to $ext_if_ip port = ssh flags S/SA keep state label "USER_RULE: Allow SSH from 139.xxx.yyy.zzz" +## Use the following rule to enable ssh for ALL users from any IP address # +## pass in inet proto tcp to $ext_if port ssh +### [ OR ] ### +## pass in inet proto tcp to $ext_if port 22 + +# Allow Ping-Pong stuff. 
Be a good sysadmin +pass inet proto icmp icmp-type echoreq + +# All access to our Nginx/Apache/Lighttpd Webserver ports +pass proto tcp from any to $ext_if port $webports + +# Allow essential outgoing traffic +pass out quick on $ext_if proto tcp to any port $int_tcp_services +pass out quick on $ext_if proto udp to any port $int_udp_services + +# Add custom rules below +``` + +Save and close the file. PR [welcome here to improve rulesets][2]. To check for syntax error, run: +`# service pf check` +OR +`/etc/rc.d/pf check` +OR +`# pfctl -n -f /usr/local/etc/pf.conf ` + +## Step 3 - Start PF firewall + +The commands are as follows. Be careful you might be disconnected from your server over ssh based session: + +### Start PF + +`# service pf start` + +### Stop PF + +`# service pf stop` + +### Check PF for syntax error + +`# service pf check` + +### Restart PF + +`# service pf restart` + +### See PF status + +`# service pf status` +Sample outputs: +``` +Status: Enabled for 0 days 00:02:18 Debug: Urgent + +Interface Stats for vtnet0 IPv4 IPv6 + Bytes In 19463 0 + Bytes Out 18541 0 + Packets In + Passed 244 0 + Blocked 3 0 + Packets Out + Passed 136 0 + Blocked 12 0 + +State Table Total Rate + current entries 1 + searches 395 2.9/s + inserts 4 0.0/s + removals 3 0.0/s +Counters + match 19 0.1/s + bad-offset 0 0.0/s + fragment 0 0.0/s + short 0 0.0/s + normalize 0 0.0/s + memory 0 0.0/s + bad-timestamp 0 0.0/s + congestion 0 0.0/s + ip-option 0 0.0/s + proto-cksum 0 0.0/s + state-mismatch 0 0.0/s + state-insert 0 0.0/s + state-limit 0 0.0/s + src-limit 0 0.0/s + synproxy 0 0.0/s + map-failed 0 0.0/s +``` + + +### Command to start/stop/restart pflog service + +Type the following commands: +``` +# service pflog start +# service pflog stop +# service pflog restart +``` + +## Step 4 - A quick introduction to pfctl command + +You need to use the pfctl command to see PF ruleset and parameter configuration including status information from the packet filter. 
Let us see all common commands: + +### Show PF rules information + +`# pfctl -s rules` +Sample outputs: +``` +block return in log all +block drop out all +block drop in quick on ! vtnet0 inet from 172.xxx.yyy.zzz/24 to any +block drop in quick inet from 172.xxx.yyy.zzz/24 to any +pass in quick on vtnet0 inet proto tcp from 139.aaa.ccc.ddd to 172.xxx.yyy.zzz/24 port = ssh flags S/SA keep state label "USER_RULE: Allow SSH from 139.aaa.ccc.ddd" +pass inet proto icmp all icmp-type echoreq keep state +pass out quick on vtnet0 proto tcp from any to any port = domain flags S/SA keep state +pass out quick on vtnet0 proto tcp from any to any port = ntp flags S/SA keep state +pass out quick on vtnet0 proto tcp from any to any port = smtp flags S/SA keep state +pass out quick on vtnet0 proto tcp from any to any port = http flags S/SA keep state +pass out quick on vtnet0 proto tcp from any to any port = https flags S/SA keep state +pass out quick on vtnet0 proto tcp from any to any port = ftp flags S/SA keep state +pass out quick on vtnet0 proto tcp from any to any port = ssh flags S/SA keep state +pass out quick on vtnet0 proto udp from any to any port = domain keep state +pass out quick on vtnet0 proto udp from any to any port = ntp keep state +``` + +#### Show verbose output for each rule + +`# pfctl -v -s rules` + +#### Add rule numbers with verbose output for each rule + +`# pfctl -vvsr show` + +#### Show state + +``` +# pfctl -s state +# pfctl -s state | more +# pfctl -s state | grep 'something' +``` + +### How to disable PF from the CLI + +`# pfctl -d ` + +### How to enable PF from the CLI + +`# pfctl -e ` + +### How to flush ALL PF rules/nat/tables from the CLI + +`# pfctl -F all` +Sample outputs: +``` +rules cleared +nat cleared +0 tables deleted. 
+2 states cleared +source tracking entries cleared +pf: statistics cleared +pf: interface flags reset +``` + +#### How to flush only the PF RULES from the CLI + +`# pfctl -F rules ` + +#### How to flush only queue's from the CLI + +`# pfctl -F queue ` + +#### How to flush all stats that are not part of any rule from the CLI + +`# pfctl -F info` + +#### How to clear all counters from the CLI + +`# pfctl -z clear ` + +## Step 5 - See PF log + +PF logs are in binary format. To see them type: +`# tcpdump -n -e -ttt -r /var/log/pflog` +Sample outputs: +``` +Aug 29 15:41:11.757829 rule 0/(match) block in on vio0: 86.47.225.151.55806 > 45.FOO.BAR.IP.23: S 757158343:757158343(0) win 52206 [tos 0x28] +Aug 29 15:41:44.193309 rule 0/(match) block in on vio0: 5.196.83.88.25461 > 45.FOO.BAR.IP.26941: S 2224505792:2224505792(0) ack 4252565505 win 17520 (DF) [tos 0x24] +Aug 29 15:41:54.628027 rule 0/(match) block in on vio0: 45.55.13.94.50217 > 45.FOO.BAR.IP.465: S 3941123632:3941123632(0) win 65535 +Aug 29 15:42:11.126427 rule 0/(match) block in on vio0: 87.250.224.127.59862 > 45.FOO.BAR.IP.80: S 248176545:248176545(0) win 28200 (DF) +Aug 29 15:43:04.953537 rule 0/(match) block in on vio0: 77.72.82.22.47218 > 45.FOO.BAR.IP.7475: S 1164335542:1164335542(0) win 1024 +Aug 29 15:43:05.122156 rule 0/(match) block in on vio0: 77.72.82.22.47218 > 45.FOO.BAR.IP.7475: R 1164335543:1164335543(0) win 1200 +Aug 29 15:43:37.302410 rule 0/(match) block in on vio0: 94.130.12.27.18080 > 45.FOO.BAR.IP.64857: S 683904905:683904905(0) ack 4000841729 win 16384 +Aug 29 15:44:46.574863 rule 0/(match) block in on vio0: 77.72.82.22.47218 > 45.FOO.BAR.IP.7677: S 3451987887:3451987887(0) win 1024 +Aug 29 15:44:46.819754 rule 0/(match) block in on vio0: 77.72.82.22.47218 > 45.FOO.BAR.IP.7677: R 3451987888:3451987888(0) win 1200 +Aug 29 15:45:21.194752 rule 0/(match) block in on vio0: 185.40.4.130.55910 > 45.FOO.BAR.IP.80: S 3106068642:3106068642(0) win 1024 +Aug 29 15:45:32.999219 rule 0/(match) block in 
on vio0: 185.40.4.130.55910 > 45.FOO.BAR.IP.808: S 322591763:322591763(0) win 1024 +Aug 29 15:46:30.157884 rule 0/(match) block in on vio0: 77.72.82.22.47218 > 45.FOO.BAR.IP.6511: S 2412580953:2412580953(0) win 1024 [tos 0x28] +Aug 29 15:46:30.252023 rule 0/(match) block in on vio0: 77.72.82.22.47218 > 45.FOO.BAR.IP.6511: R 2412580954:2412580954(0) win 1200 [tos 0x28] +Aug 29 15:49:44.337015 rule 0/(match) block in on vio0: 189.219.226.213.22640 > 45.FOO.BAR.IP.23: S 14807:14807(0) win 14600 [tos 0x28] +Aug 29 15:49:55.161572 rule 0/(match) block in on vio0: 5.196.83.88.25461 > 45.FOO.BAR.IP.40321: S 1297217585:1297217585(0) ack 1051525121 win 17520 (DF) [tos 0x24] +Aug 29 15:49:59.735391 rule 0/(match) block in on vio0: 36.7.147.209.2545 > 45.FOO.BAR.IP.3389: SWE 3577047469:3577047469(0) win 8192 (DF) [tos 0x2 (E)] +Aug 29 15:50:00.703229 rule 0/(match) block in on vio0: 36.7.147.209.2546 > 45.FOO.BAR.IP.3389: SWE 1539382950:1539382950(0) win 8192 (DF) [tos 0x2 (E)] +Aug 29 15:51:33.880334 rule 0/(match) block in on vio0: 45.55.22.21.53510 > 45.FOO.BAR.IP.2362: udp 14 +Aug 29 15:51:34.006656 rule 0/(match) block in on vio0: 77.72.82.22.47218 > 45.FOO.BAR.IP.6491: S 151489102:151489102(0) win 1024 [tos 0x28] +Aug 29 15:51:34.274654 rule 0/(match) block in on vio0: 77.72.82.22.47218 > 45.FOO.BAR.IP.6491: R 151489103:151489103(0) win 1200 [tos 0x28] +Aug 29 15:51:36.393019 rule 0/(match) block in on vio0: 60.191.38.78.4249 > 45.FOO.BAR.IP.8000: S 3746478095:3746478095(0) win 29200 (DF) +Aug 29 15:51:57.213051 rule 0/(match) block in on vio0: 24.137.245.138.7343 > 45.FOO.BAR.IP.5358: S 14134:14134(0) win 14600 +Aug 29 15:52:37.852219 rule 0/(match) block in on vio0: 122.226.185.125.51128 > 45.FOO.BAR.IP.23: S 1715745381:1715745381(0) win 5840 (DF) +Aug 29 15:53:31.309325 rule 0/(match) block in on vio0: 189.218.148.69.377 > 45.FOO.BAR.IP5358: S 65340:65340(0) win 14600 [tos 0x28] +Aug 29 15:53:31.809570 rule 0/(match) block in on vio0: 13.93.104.140.53184 > 
45.FOO.BAR.IP.1433: S 39854048:39854048(0) win 1024 +Aug 29 15:53:32.138231 rule 0/(match) block in on vio0: 13.93.104.140.53184 > 45.FOO.BAR.IP.1433: R 39854049:39854049(0) win 1200 +Aug 29 15:53:41.459088 rule 0/(match) block in on vio0: 77.72.82.22.47218 > 45.FOO.BAR.IP.6028: S 168338703:168338703(0) win 1024 +Aug 29 15:53:41.789732 rule 0/(match) block in on vio0: 77.72.82.22.47218 > 45.FOO.BAR.IP.6028: R 168338704:168338704(0) win 1200 +Aug 29 15:54:34.993594 rule 0/(match) block in on vio0: 212.47.234.50.5102 > 45.FOO.BAR.IP.5060: udp 408 (DF) [tos 0x28] +Aug 29 15:54:57.987449 rule 0/(match) block in on vio0: 51.15.69.145.5100 > 45.FOO.BAR.IP.5060: udp 406 (DF) [tos 0x28] +Aug 29 15:55:07.001743 rule 0/(match) block in on vio0: 190.83.174.214.58863 > 45.FOO.BAR.IP.23: S 757158343:757158343(0) win 27420 +Aug 29 15:55:51.269549 rule 0/(match) block in on vio0: 142.217.201.69.26112 > 45.FOO.BAR.IP.22: S 757158343:757158343(0) win 22840 +Aug 29 15:58:41.346028 rule 0/(match) block in on vio0: 169.1.29.111.29765 > 45.FOO.BAR.IP.23: S 757158343:757158343(0) win 28509 +Aug 29 15:59:11.575927 rule 0/(match) block in on vio0: 187.160.235.162.32427 > 45.FOO.BAR.IP.5358: S 22445:22445(0) win 14600 [tos 0x28] +Aug 29 15:59:37.826598 rule 0/(match) block in on vio0: 94.74.81.97.54656 > 45.FOO.BAR.IP.3128: S 2720157526:2720157526(0) win 1024 [tos 0x28] +Aug 29 15:59:37.991171 rule 0/(match) block in on vio0: 94.74.81.97.54656 > 45.FOO.BAR.IP.3128: R 2720157527:2720157527(0) win 1200 [tos 0x28] +Aug 29 16:01:36.990050 rule 0/(match) block in on vio0: 182.18.8.28.23299 > 45.FOO.BAR.IP.445: S 1510146048:1510146048(0) win 16384 +``` + +To see live log run: +`# tcpdump -n -e -ttt -i pflog0` +For more info the [PF FAQ][3], [FreeBSD HANDBOOK][4] and the following man pages: +``` +# man tcpdump +# man pfctl +# man pf +``` + +## about the author: + +The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. 
He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][5], [Facebook][6], [Google+][7]. + +-------------------------------------------------------------------------------- + +via: https://www.cyberciti.biz/faq/how-to-set-up-a-firewall-with-pf-on-freebsd-to-protect-a-web-server/ + +作者:[Vivek Gite][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.cyberciti.biz +[1]:https://www.cyberciti.biz/media/new/faq/2017/08/howto-setup-a-firewall-with-pf-on-freebsd.001.jpeg +[2]:https://github.com/nixcraft/pf.conf/blob/master/pf.conf +[3]:https://www.openbsd.org/faq/pf/ +[4]:https://www.freebsd.org/doc/handbook/firewalls.html +[5]:https://twitter.com/nixcraft +[6]:https://facebook.com/nixcraft +[7]:https://plus.google.com/+CybercitiBiz From fd4b0f7947018cd102d6b9bd9b413f7627b0e341 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 15 Jan 2018 18:45:46 +0800 Subject: [PATCH 016/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Best=20Linux=20Sc?= =?UTF-8?q?reenshot=20and=20Screencasting=20Tools?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...inux Screenshot and Screencasting Tools.md | 147 ++++++++++++++++++ 1 file changed, 147 insertions(+) create mode 100644 sources/tech/20180110 Best Linux Screenshot and Screencasting Tools.md diff --git a/sources/tech/20180110 Best Linux Screenshot and Screencasting Tools.md b/sources/tech/20180110 Best Linux Screenshot and Screencasting Tools.md new file mode 100644 index 0000000000..fbd10d2194 --- /dev/null +++ b/sources/tech/20180110 Best Linux Screenshot and Screencasting Tools.md @@ -0,0 +1,147 @@ +Best Linux Screenshot and Screencasting Tools +====== 
+![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/best-linux-screenshot-and-screencasting-tools_orig.jpg)
+
+There comes a time when you want to capture an error on your screen and send it to the developers, or when you want help from _Stack Overflow_; you need the right tools to take that screenshot and save or send it. There are tools in the form of programs and others as shell extensions for GNOME. Not to worry, here are the best Linux screenshot-taking tools that you can use to take those screenshots or make a screencast.
+
+## Best Linux Screenshot Or Screencasting Tools
+
+### 1\. Shutter
+
+ [![shutter linux screenshot taking tools](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/shutter-linux-screenshot-taking-tools_orig.jpg)][2]
+
+[Shutter][3] is one of the best Linux screenshot-taking tools. It has the advantage of taking different screenshots depending on what you want to capture on your screen. After you take a screenshot, it allows you to preview it before saving. It also includes an extension menu that shows up on your top panel for GNOME. That makes accessing the app much easier and more convenient for anyone to use.
+
+​You can take screenshots of a selection, a window, desktop, window under cursor, section, menu, tooltip or web. Shutter allows you to upload the screenshots directly to the cloud using the preferred cloud services provider. This Linux tool also allows you to edit your screenshots before you save them. It also comes with plugins that you can add or remove.
+
+To install it, you will have to type the following in the terminal:
+
+```
+sudo add-apt-repository -y ppa:shutter/ppa
+sudo apt-get update && sudo apt-get install shutter
+```
+
+### 2. 
Vokoscreen
+
+ [![vokoscreen screencasting tool for linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vokoscreen-screencasting-tool-for-linux_orig.jpg)][4]
+
+
+[Vokoscreen][5] is an app that allows you to record your screen as you show around and narrate what you are doing on the screen. It is easy to use, has a simple interface and includes a top panel menu for easy access while you are recording your screen.
+
+​
+
+You can choose to record the whole screen, a window or just a selection of an area. Customizing the recording is easy, so you can get the type of screen recording you want to achieve. Vokoscreen even allows you to create a gif as a screen recording. You can also record yourself using the webcam in case you are narrating a tutorial, so that you can engage the learners. Once you are done, you can play back the recording right from the application so that you don’t have to keep navigating to find the recording.
+
+ [![vokoscreen preferences](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vokoscreen-preferences_orig.jpg)][6]
+
+You can install Vokoscreen from your distro repository, or download the package from [pkgs.org][7] and select the Linux distro you are using.
+
+```
+sudo dpkg -i vokoscreen_2.5.0-1_amd64.deb
+```
+
+### 3. OBS
+
+ [![obs linux screencasting tool](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/obs-linux-screencasting-tool_orig.jpg)][8]
+
+[OBS][9] can be used to record your screen as well as record streams from the internet. It allows you to see whatever you are recording as you stream or as you narrate your screen recording. It allows you to choose the quality of your recording according to your preferences. It also allows you to choose the type of file you want your recording to be saved to. In addition to the recording feature, you can switch to Studio mode, allowing you to edit your recording to make a complete video without having to use any other external editing software.
To install OBS in your Linux distribution, you must have FFmpeg installed on your machine. To install FFmpeg type the following in the terminal for ubuntu 14.04 and earlier: + +``` +sudo add-apt-repository ppa:kirillshkrogalev/ffmpeg-next + +sudo apt-get update && sudo apt-get install ffmpeg +``` + +​For ubuntu 15.04 and later you can just type the following in the terminal to install FFmpeg: + +``` +sudo apt-get install ffmpeg +``` + +​If you have already installed FFmpeg, type the following in the terminal to install OBS: + +``` +sudo add-apt-repository ppa:obsproject/obs-studio + +sudo apt-get update + +sudo apt-get install obs-studio +``` + +### 4. Green Recorder + + [![green recording linux tool](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/green-recording-linux-tool_orig.jpg)][10] + +[Green recorder][11] is a simple interface based program that allows you to record the screen. You can choose what to record including video or just audio and allow you to show the mouse pointer and even follow it as you record your screen. You can record a window or just a selected area on your screen so that only what you want to record shows up in your recording. You can customize the number of frames to record in your final video. In case you want to start recording after a delay, you have the option to configure the delay you wish to set. You have the option to run a command after the recording is done that will run on your machine immediately after you stop recording. + +​ + +To install green recorder, type the following in the terminal: + +``` +sudo add-apt-repository ppa:fossproject/ppa + +sudo apt update && sudo apt install green-recorder +``` + +### 5. Kazam + + [![kazam screencasting tool for linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/kazam-screencasting-tool-for-linux_orig.jpg)][12] + +[Kazam][13] Linux screenshot tool is very popular amongst Linux users. 
It is an intuitive, simple-to-use app that allows you to take a screencast or a screenshot, and it lets you customise the delay before the capture is taken. It allows you to select the area, window or fullscreen you want to capture. Kazam’s interface is well laid out and not as complicated as other apps. Its features will leave you happy about taking your screenshots. Kazam also includes a system tray icon and menu that allows you to take the screenshot without going to the application itself.
+
+​​
+
+To install Kazam, type the following in the terminal:
+
+```
+sudo apt-get install kazam
+```
+
+​If the PPA is not found, you can install it manually using the following commands:
+
+```
+sudo add-apt-repository ppa:kazam-team/stable-series
+
+sudo apt-get update && sudo apt-get install kazam
+```
+
+### 6. Screenshot tool GNOME extension
+
+ [![gnome screenshot extension](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/gnome-screenshot-extension-compressed_orig.jpg)][1]
+
+There is a GNOME extension just named screenshot tool that always shows up on the system panel until you disable it. It is convenient since it just sits on the system panel until you trigger it to take a screenshot. The main advantage of this tool is that it is the quickest to access, since it is always in your system panel unless you deactivate it in the tweak utility tool. The tool also has a preferences window allowing you to tweak it to your preferences. To install it on your GNOME desktop, head to extensions.gnome.org and search for "_Screenshot Tool"._
+
+You must have the GNOME Shell integration browser extension installed, as well as the GNOME Tweaks tool, to use this extension.
+
+ [![gnome screenshot extension preferences](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/gnome-screenshot-extension-preferences_orig.jpg)][14]
+
+The **Linux screenshot tools** are quite helpful, especially when you come across a problem, don’t know what to do, and want to share the error with [the Linux community][15] or the developers of a program that you are using. Developers, programmers, learners and anyone else who needs them will find these tools useful for sharing screenshots. Youtubers and tutorial makers will find the screencasting tools even more useful when they use them to record their tutorials and post them.​
+
+
+--------------------------------------------------------------------------------
+
+via: http://www.linuxandubuntu.com/home/best-linux-screenshot-screencasting-tools
+
+作者:[linuxandubuntu][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.linuxandubuntu.com
+[1]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/gnome-screenshot-extension-compressed_orig.jpg
+[2]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/shutter-linux-screenshot-taking-tools_orig.jpg
+[3]:http://shutter-project.org/
+[4]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vokoscreen-screencasting-tool-for-linux_orig.jpg
+[5]:https://github.com/vkohaupt/vokoscreen
+[6]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vokoscreen-preferences_orig.jpg
+[7]:https://pkgs.org/download/vokoscreen
+[8]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/obs-linux-screencasting-tool_orig.jpg
+[9]:https://obsproject.com/
+[10]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/green-recording-linux-tool_orig.jpg
+[11]:https://github.com/foss-project/green-recorder
+[12]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/kazam-screencasting-tool-for-linux_orig.jpg
+[13]:https://launchpad.net/kazam
+[14]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/gnome-screenshot-extension-preferences_orig.jpg +[15]:http://www.linuxandubuntu.com/home/top-10-communities-to-help-you-learn-linux From 46936bbba491802759f6839f72a981090dfa2fad Mon Sep 17 00:00:00 2001 From: Flowsnow Date: Mon, 15 Jan 2018 19:17:16 +0800 Subject: [PATCH 017/226] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90-20?= =?UTF-8?q?180111=20The=20Fold=20Command=20Tutorial=20With=20Examples=20Fo?= =?UTF-8?q?r=20Beginners.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...nd Tutorial With Examples For Beginners.md | 114 ----------------- ...nd Tutorial With Examples For Beginners.md | 118 ++++++++++++++++++ 2 files changed, 118 insertions(+), 114 deletions(-) delete mode 100644 sources/tech/20180111 The Fold Command Tutorial With Examples For Beginners.md create mode 100644 translated/tech/20180111 The Fold Command Tutorial With Examples For Beginners.md diff --git a/sources/tech/20180111 The Fold Command Tutorial With Examples For Beginners.md b/sources/tech/20180111 The Fold Command Tutorial With Examples For Beginners.md deleted file mode 100644 index 0d0623bb7a..0000000000 --- a/sources/tech/20180111 The Fold Command Tutorial With Examples For Beginners.md +++ /dev/null @@ -1,114 +0,0 @@ -translating by Flowsnow - -The Fold Command Tutorial With Examples For Beginners -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/01/Fold-Command-2-720x340.png) - -Have you ever found yourself in a situation where you want to fold or break the output of a command to fit within a specific width? I have find myself in this situation few times while running VMs, especially the servers with no GUI. Just in case, if you ever wanted to limit the output of a command to a particular width, look nowhere! Here is where **fold** command comes in handy! 
The fold command wraps each line in an input file to fit a specified width and prints it to the standard output. - -In this brief tutorial, we are going to see the usage of fold command with practical examples. - -### The Fold Command Tutorial With Examples - -Fold command is the part of GNU coreutils package, so let us not bother about installation. - -The typical syntax of fold command: -``` -fold [OPTION]... [FILE]... -``` - -Allow me to show you some examples, so you can get a better idea about fold command. I have a file named **linux.txt** with some random lines. - -[![][1]][2] - -To wrap each line in the above file to default width, run: -``` -fold linux.txt -``` - -**80** columns per line is the default width. Here is the output of above command: - -[![][1]][3] - -As you can see in the above output, fold command has limited the output to a width of 80 characters. - -Of course, we can specify your preferred width, for example 50, like below: -``` -fold -w50 linux.txt -``` - -Sample output would be: - -[![][1]][4] - -Instead of just displaying output, we can also write the output to a new file as shown below: -``` -fold -w50 linux.txt > linux1.txt -``` - -The above command will wrap the lines of **linux.txt** to a width of 50 characters, and writes the output to new file named **linux1.txt**. - -Let us check the contents of the new file: -``` -cat linux1.txt -``` - -[![][1]][5] - -Did you closely notice the output of the previous commands? Some words are broken between lines. To overcome this issue, we can use -s flag to break the lines at spaces. - -The following command wraps each line in a given file to width "50" and breaks the line at spaces: -``` -fold -w50 -s linux.txt -``` - -Sample output: - -[![][1]][6] - -See? Now, the output is much clear. This command puts each space separated word in a new line and words with length > 50 are wrapped. - -In all above examples, we limited the output width by columns. 
However, we can enforce the width of the output to the number of bytes specified using **-b** option. The following command breaks the output at 20 bytes. -``` -fold -b20 linux.txt -``` - -Sample output: - -[![][1]][7] - -**Also read:** - -+ [The Uniq Command Tutorial With Examples For Beginners][8] - -For more details, refer the man pages. -``` -man fold -``` - -And, that's for now folks. You know now how to use fold command to limit the output of a command to fit in a specific width. I hope this was useful. We will be posting more useful guides everyday. Stay tuned! - -Cheers! - - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/fold-command-tutorial-examples-beginners/ - -作者:[SK][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ostechnix.com/author/sk/ -[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[2]:http://www.ostechnix.com/wp-content/uploads/2018/01/fold-command-1.png -[3]:http://www.ostechnix.com/wp-content/uploads/2018/01/fold-command-2.png -[4]:http://www.ostechnix.com/wp-content/uploads/2018/01/fold-command-3-1.png -[5]:http://www.ostechnix.com/wp-content/uploads/2018/01/fold-command-4.png -[6]:http://www.ostechnix.com/wp-content/uploads/2018/01/fold-command-5-1.png -[7]:http://www.ostechnix.com/wp-content/uploads/2018/01/fold-command-6-1.png -[8]:https://www.ostechnix.com/uniq-command-tutorial-examples-beginners/ diff --git a/translated/tech/20180111 The Fold Command Tutorial With Examples For Beginners.md b/translated/tech/20180111 The Fold Command Tutorial With Examples For Beginners.md new file mode 100644 index 0000000000..9cc63eb46a --- /dev/null +++ b/translated/tech/20180111 The Fold Command Tutorial With Examples For Beginners.md @@ -0,0 +1,118 @@ +Fold命令入门级示例教程 +====== + 
+![](https://www.ostechnix.com/wp-content/uploads/2018/01/Fold-Command-2-720x340.png)
+
+你有没有在某种情况下想要把命令的输出折行以适应特定的宽度? 在运行虚拟机的时候,我遇到过几次这样的情况,特别是在没有 GUI 的服务器上。 如果你想把一个命令的输出限制为特定的宽度,那就来看看这里!**fold** 命令在这里就能派得上用场了! fold 命令会把输入文件中的每一行调整到指定的宽度,并将其打印到标准输出。
+
+在这个简短的教程中,我们将通过实例来了解 fold 命令的用法。
+
+### fold命令示例教程
+
+fold命令是GNU coreutils包的一部分,所以我们不用为安装的事情烦恼。
+
+fold命令的典型语法:
+```
+fold [OPTION]... [FILE]...
+```
+
+请允许我向您展示一些示例,以便您更好地了解fold命令。 我有一个名为 **linux.txt** 的文件,里面有一些随机的行。
+
+![][2]
+
+要将上述文件中的每一行换行为默认宽度,请运行:
+
+```
+fold linux.txt
+```
+
+每行**80**列是默认的宽度。 这里是上述命令的输出:
+
+![][3]
+
+正如你在上面的输出中看到的,fold命令已经将输出限制为80个字符的宽度。
+
+当然,我们也可以指定自己的首选宽度,例如50,如下所示:
+
+```
+fold -w50 linux.txt
+```
+
+示例输出:
+
+![][4]
+
+我们也可以将输出写入一个新的文件,如下所示:
+
+```
+fold -w50 linux.txt > linux1.txt
+```
+
+以上命令将把**linux.txt**的行宽度改为50个字符,并将输出写入到名为**linux1.txt**的新文件中。
+
+让我们检查一下新文件的内容:
+
+```
+cat linux1.txt
+```
+
+![][5]
+
+你有没有仔细观察前面这些命令的输出? 有些单词在换行处被拆开了。 为了解决这个问题,我们可以使用-s标志来在空格处换行。
+
+以下命令将给定文件中的每行调整为宽度“50”,并在空格处换到新行:
+
+```
+fold -w50 -s linux.txt
+```
+
+示例输出:
+
+![][6]
+
+看清楚了吗? 现在的输出清晰多了。 这条命令会在空格处断行,使单词保持完整;只有长度超过50的单词才会被拆开。
+
+在所有上面的例子中,我们用列来限制输出宽度。 但是,我们可以使用**-b**选项将输出的宽度强制为指定的字节数。 以下命令以20个字节中断输出。
+
+```
+fold -b20 linux.txt
+```
+
+示例输出:
+
+![][7]
+
+**另请阅读:**
+
++ [Uniq 命令入门级示例教程][8]
+
+有关更多详细信息,请参阅 man 手册页。
+```
+man fold
+```
+
+好了,以上就是全部内容了。 您现在知道如何使用fold命令以适应特定的宽度来限制命令的输出。 我希望这对您有用。 我们将每天发布更多有用的指南。 敬请关注!
+
+干杯!
+ +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/fold-command-tutorial-examples-beginners/ + +作者:[SK][a] +译者:[Flowsnow](https://github.com/Flowsnow) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com/author/sk/ +[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[2]:http://www.ostechnix.com/wp-content/uploads/2018/01/fold-command-1.png +[3]:http://www.ostechnix.com/wp-content/uploads/2018/01/fold-command-2.png +[4]:http://www.ostechnix.com/wp-content/uploads/2018/01/fold-command-3-1.png +[5]:http://www.ostechnix.com/wp-content/uploads/2018/01/fold-command-4.png +[6]:http://www.ostechnix.com/wp-content/uploads/2018/01/fold-command-5-1.png +[7]:http://www.ostechnix.com/wp-content/uploads/2018/01/fold-command-6-1.png +[8]:https://www.ostechnix.com/uniq-command-tutorial-examples-beginners/ From 06ea73b599bab45c2824d98299401a2a9bfb9aa2 Mon Sep 17 00:00:00 2001 From: Flowsnow Date: Mon, 15 Jan 2018 19:20:28 +0800 Subject: [PATCH 018/226] =?UTF-8?q?=E7=94=B3=E8=AF=B7=E7=BF=BB=E8=AF=91-20?= =?UTF-8?q?171212=20How=20To=20Count=20The=20Number=20Of=20Files=20And=20F?= =?UTF-8?q?olders-Directories=20In=20Linux.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...nt The Number Of Files And Folders-Directories In Linux.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20171212 How To Count The Number Of Files And Folders-Directories In Linux.md b/sources/tech/20171212 How To Count The Number Of Files And Folders-Directories In Linux.md index 9e8de9c467..eca8dbc17b 100644 --- a/sources/tech/20171212 How To Count The Number Of Files And Folders-Directories In Linux.md +++ b/sources/tech/20171212 How To Count The Number Of Files And Folders-Directories In Linux.md @@ -1,5 +1,8 @@ 
+translating by Flowsnow
+
 How To Count The Number Of Files And Folders/Directories In Linux
 ======
+
 Hi folks, today again we came up with a set of tricky commands that will help you in many ways. These are manipulation commands that help you count files and directories in the current directory, do recursive counts, list files created by a particular user, etc.
 
 In this tutorial, we are going to show you how to use more than one command together to perform some advanced actions using the ls, egrep, wc and find commands. The below set of commands can be used in many situations.
@@ -164,7 +167,6 @@ To experiment with this, I'm going to create a total of 7 files and 2 folders (5 regular
 ```
 
-
 --------------------------------------------------------------------------------
 
 via: https://www.2daygeek.com/how-to-count-the-number-of-files-and-folders-directories-in-linux/
From 0702d8e7ba7f188efd5f971ea516f9b544c527d3 Mon Sep 17 00:00:00 2001
From: ChenYi <31087327+cyleft@users.noreply.github.com>
Date: Mon, 15 Jan 2018 20:56:01 +0800
Subject: [PATCH 019/226] Delete 20171207 How To Find Files Based On their
 Permissions.md

20171207 How To Find Files Based On their Permissions.md
---
 ...o Find Files Based On their Permissions.md | 172 ------------------
 1 file changed, 172 deletions(-)
 delete mode 100644 sources/tech/20171207 How To Find Files Based On their Permissions.md

diff --git a/sources/tech/20171207 How To Find Files Based On their Permissions.md b/sources/tech/20171207 How To Find Files Based On their Permissions.md
deleted file mode 100644
index d9e6ecc95a..0000000000
--- a/sources/tech/20171207 How To Find Files Based On their Permissions.md
+++ /dev/null
@@ -1,172 +0,0 @@
-translated by cyleft
-How To Find Files Based On their Permissions
-======
-Finding files in Linux is not a big deal. There are plenty of free and open source graphical utilities available on the market. In my opinion, finding files from the command line is much easier and faster.
We already knew how to [**find and sort files based on access and modification date and time**][1]. Today, we will see how to find files based on their permissions in Unix-like operating systems.
-
-For the purpose of this guide, I am going to create three files, namely **file1**, **file2** and **file3**, with permissions **777**, **766**, **655** respectively, in a folder named **ostechnix**.
-```
-mkdir ostechnix && cd ostechnix/
-```
-```
-install -b -m 777 /dev/null file1
-```
-```
-install -b -m 766 /dev/null file2
-```
-```
-install -b -m 655 /dev/null file3
-```
-
-[![][2]][3]
-
-Now let us find the files based on their permissions.
-
-### Find Files Based On their Permissions
-
-The typical syntax to find files based on their permissions is:
-```
-find -perm mode
-```
-
-The MODE can be either a numeric (octal) permission (like 777, 666, etc.) or a symbolic permission (like u=x, a=r+x).
-
-Before going further, note that we can specify the MODE in three different ways.
-
- 1. If we specify the mode without any prefix, it will find files with the **exact** permissions.
- 2. If we use the **"-"** prefix with the mode, the files should have at least the given permissions, not necessarily the exact permissions.
- 3. If we use the **"/"** prefix, either the owner, the group, or others should have the permission on the file.
-
-
-
-Allow me to explain with some examples, so you can understand better.
-
-First, we will see how to find files based on numeric permissions.
-
-### Find Files Based On their Numeric (octal) Permissions
-
-Now let me run the following command:
-```
-find -perm 777
-```
-
-This command will find the files with **exactly 777** permissions in the current directory.
-
-[![][2]][4]
-
-As you see in the above output, file1 is the only one that has **exactly 777 permissions**.
-
-Now, let us use the "-" prefix and see what happens.
-```
-find -perm -766
-```
-
-[![][2]][5]
-
-As you see, the above command displays two files.
We have set 766 permissions on file2, but this command displays two files. Why? Because here we have used the "-" prefix. It means that this command will find all files where the file owner has read/write/execute permissions, the file group members have read/write permissions, and everyone else also has read/write permissions. In our case, file1 and file2 meet this criterion. In other words, the files need not have exactly 766 permissions; any file whose permissions include those of 766 will be displayed.
-
-Next, we will use the "/" prefix and see what happens.
-```
-find -perm /222
-```
-
-[![][2]][6]
-
-The above command will find files which are writable by somebody (either their owner, or their group, or anybody else). Here is another example.
-```
-find -perm /220
-```
-
-This command will find files which are writable by either their owner or their group. That means the files **don't have to be writable** by **both the owner and the group** to be matched; **either** will do.
-
-But if you run the same command with the "-" prefix, you will see only the files which are writable by both the owner and the group.
-```
-find -perm -220
-```
-
-The following screenshot will show you the difference between these two prefixes.
-
-[![][2]][7]
-
-Like I already said, we can also use symbolic notation to represent the file permissions.
-
-Also read:
-
-### Find Files Based On their Permissions using symbolic notation
-
-In the following examples, we use symbolic notation such as **u** (for user), **g** (group), and **o** (others). We can also use the letter **a** to represent all three of these categories. The permissions can be specified using the letters **r** (read), **w** (write), and **x** (execute).
-
-For instance, to find any file with group **write** permission, run:
-```
-find -perm -g=w
-```
-
-[![][2]][8]
-
-As you see in the above example, file1 and file2 have group **write** permission. Please note that you can use either "=" or "+" for symbolic notation. It doesn't matter.
For example, the following two commands do the same thing.
-```
-find -perm -g=w
-find -perm -g+w
-```
-
-To find any files which are writable by the file owner, run:
-```
-find -perm -u=w
-```
-
-To find any files which are writable by all (the file owner, the group, and everyone else), run:
-```
-find -perm -a=w
-```
-
-To find files which are writable by **both** their **owner** and their **group**, use this command:
-```
-find -perm -g+w,u+w
-```
-
-The above command is equivalent to the "find -perm -220" command.
-
-To find files which are writable by **either** their **owner** or their **group**, run:
-```
-find -perm /u+w,g+w
-```
-
-Or,
-```
-find -perm /u=w,g=w
-```
-
-These two commands do the same job as the "find -perm /220" command.
-
-For more details, refer to the man pages.
-```
-man find
-```
-
-Also, check the [**man pages alternatives**][9] to learn more simplified examples of any Linux command.
-
-And that's all for now, folks. I hope this guide was useful. More good stuff to come. Stay tuned.
-
-Cheers!
- - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/find-files-based-permissions/ - -作者:[][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ostechnix.com -[1] https://www.ostechnix.com/find-sort-files-based-access-modification-date-time-linux/ -[2] data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[3] http://www.ostechnix.com/wp-content/uploads/2017/12/find-files-1-1.png () -[4] http://www.ostechnix.com/wp-content/uploads/2017/12/find-files-2.png () -[5] http://www.ostechnix.com/wp-content/uploads/2017/12/find-files-3.png () -[6] http://www.ostechnix.com/wp-content/uploads/2017/12/find-files-6.png () -[7] http://www.ostechnix.com/wp-content/uploads/2017/12/find-files-7.png () -[8] http://www.ostechnix.com/wp-content/uploads/2017/12/find-files-8.png () -[9] https://www.ostechnix.com/3-good-alternatives-man-pages-every-linux-user-know/ From 0633e68ce346b9d93037e7ef11c00eac414b1142 Mon Sep 17 00:00:00 2001 From: ChenYi <31087327+cyleft@users.noreply.github.com> Date: Mon, 15 Jan 2018 20:56:33 +0800 Subject: [PATCH 020/226] translated --- ...o Find Files Based On their Permissions.md | 170 ++++++++++++++++++ 1 file changed, 170 insertions(+) create mode 100644 translated/tech/20171207 How To Find Files Based On their Permissions.md diff --git a/translated/tech/20171207 How To Find Files Based On their Permissions.md b/translated/tech/20171207 How To Find Files Based On their Permissions.md new file mode 100644 index 0000000000..7cfee6c3cf --- /dev/null +++ b/translated/tech/20171207 How To Find Files Based On their Permissions.md @@ -0,0 +1,170 @@ +根据权限查找文件 +====== + +在 Linux 中查找文件并不是什么大问题。市面上也有很多可靠的免费开源可视化的查询工具。但对我而言,查询文件,用命令行的方式会更快更简单。我们已经知道 [ 如何根据访问、修改文件时间寻找或整理文件 ][1]。今天,在基于 Unix 的操作系统中,我们将见识如何通过权限查询文件。 + +本段教程中,我将创建三个文件名为 
**file1**,**file2** 和 **file3**,分别赋予 **777**,**766** 和 **655** 文件权限,并置于名为 **ostechnix** 的文件夹中。
+```
+mkdir ostechnix && cd ostechnix/
+```
+```
+install -b -m 777 /dev/null file1
+```
+```
+install -b -m 766 /dev/null file2
+```
+```
+install -b -m 655 /dev/null file3
+```
+
+![][3]
+
+现在,让我们通过权限来查询一下文件。
+
+### 根据权限查询文件
+
+根据权限查询文件最具代表性的语法:
+```
+find -perm mode
+```
+
+MODE 可以是代表权限的八进制数字(777,666…),也可以是权限符号(u=x,a=r+x)。
+
+在深入之前,我们就以下三点详细说明 MODE 参数。
+
+ 1. 如果不指定任何参数前缀,它将会寻找权限 **确切** 匹配的文件。
+ 2. 如果使用 **“-”** 参数前缀,寻找到的文件至少拥有 mode 所述的权限,而不必是确切的权限(权限不低于此的文件都会被查找出来)。
+ 3. 如果使用 **“/”** 参数前缀,那么所有者、用户组或者其他人中任意一方拥有此权限即可。
+
+为了让你更好地理解,让我举些例子。
+
+首先,我们来看如何基于数字权限查询文件。
+
+### 基于数字(八进制)权限查询文件
+
+让我们运行下列命令:
+```
+find -perm 777
+```
+
+这条命令将会查询到当前目录下权限 **确切为 777** 的文件。
+
+![][4]
+
+正如你在上面的输出中看到的,file1 是唯一一个拥有 **确切 777 权限** 的文件。
+
+现在,让我们使用 “-” 参数前缀,看看会发生什么。
+```
+find -perm -766
+```
+
+![][5]
+
+如你所见,命令行上显示了两个文件。我们只给 file2 设置了 766 权限,为什么会显示两个文件呢?因为我们使用了 “-” 参数前缀。它意味着这条命令会查找这样的文件:文件所有者拥有 读/写/执行 权限,文件用户组拥有 读/写 权限,其他用户也拥有 读/写 权限。本例中,file1 和 file2 都符合要求。换句话说,文件并不一定要求是确切的 766 权限,任何权限不低于此的文件都会被显示出来。
+
+然后,让我们使用 “/” 参数前缀,看看会发生什么。
+```
+find -perm /222
+```
+
+![][6]
+
+上述命令将会查询所有者、用户组或其他人拥有写权限的文件。这里有另外一个例子:
+
+```
+find -perm /220
+```
+
+这条命令会查询所有者或用户组拥有写权限的文件。这意味着文件 **不必** 同时被 **所有者和用户组** 都可写才能匹配,**任意一方** 可写即可。
+
+如果你使用 “-” 前缀运行相同的命令,你只会看到所有者和用户组都拥有写权限的文件。
+```
+find -perm -220
+```
+
+下面的截图会告诉你这两个参数前缀的不同。
+
+![][7]
+
+如我之前所说,我们也可以使用符号来表示文件权限。
+
+请阅读:
+
+### 基于符号的文件权限查询文件
+
+在下面的例子中,我们使用例如 **u**(所有者)、**g**(用户组)和 **o**(其他人)的符号表示法。我们也可以使用字母 **a** 代表上述三种类型。权限则可以用字母 **r**(读)、**w**(写)、**x**(执行)来指定。
+
+例如,寻找用户组拥有 **写** 权限的文件,执行:
+```
+find -perm -g=w
+```
+
+![][8]
+
+上面的例子中,file1 和 file2 的用户组都拥有 **写** 权限。请注意,“=” 和 “+” 两种符号可以等效使用。例如,下列两条命令的效果相同:
+```
+find -perm -g=w
+find -perm -g+w
+```
+
+查询文件所有者拥有写权限的文件,执行:
+```
+find -perm -u=w
+```
+
+查询所有人(所有者、用户组和其他人)都拥有写权限的文件,执行:
+```
+find -perm -a=w
+```
+
+查询 **所有者** 和 **用户组** 同时拥有写权限的文件,执行:
+```
+find -perm -g+w,u+w
+```
+
+上述命令等效于 “find -perm -220”。
+
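上面这些写法可以在一个临时目录中快速验证(演示脚本,文件名沿用本文的 file1、file2、file3,临时目录仅用于示例):

```shell
# 在临时目录中重建本文的三个测试文件(权限分别为 777、766、655)
tmp=$(mktemp -d)
cd "$tmp"
install -m 777 /dev/null file1
install -m 766 /dev/null file2
install -m 655 /dev/null file3

# 所有者和用户组同时可写:匹配 file1 和 file2(file3 的用户组不可写)
find . -type f -perm -g+w,u+w

# 清理
cd / && rm -rf "$tmp"
```

这里额外加了 `-type f`,以排除目录本身,只显示普通文件。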
查询 **所有者** 或 **用户组** 拥有写权限的文件,执行:
+```
+find -perm /u+w,g+w
+```
+
+或者,
+```
+find -perm /u=w,g=w
+```
+
+上述命令等效于 “find -perm /220”。
+更多详情,请参阅 man 手册:
+```
+man find
+```
+
+另外,可以查看 [**man 手册的几个替代品**][9],了解任何 Linux 命令更简明的示例。
+
+好了,以上就是全部内容。希望这个教程对你有用。更多干货,敬请关注。
+
+干杯!
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/find-files-based-permissions/
+
+作者:[][a]
+译者:[CYLeft](https://github.com/CYLeft)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com
+[1]:https://www.ostechnix.com/find-sort-files-based-access-modification-date-time-linux/
+[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7/
+[3]:https://www.ostechnix.com/wp-content/uploads/2017/12/find-files-1-1.png
+[4]:https://www.ostechnix.com/wp-content/uploads/2017/12/find-files-2.png
+[5]:https://www.ostechnix.com/wp-content/uploads/2017/12/find-files-3.png
+[6]:https://www.ostechnix.com/wp-content/uploads/2017/12/find-files-6.png
+[7]:https://www.ostechnix.com/wp-content/uploads/2017/12/find-files-7.png
+[8]:https://www.ostechnix.com/wp-content/uploads/2017/12/find-files-8.png
+[9]:https://www.ostechnix.com/3-good-alternatives-man-pages-every-linux-user-know/
From 3cce5803e4736bce9595bf6e51903136657eb82a Mon Sep 17 00:00:00 2001
From: cmn <2545489745@qq.com>
Date: Mon, 15 Jan 2018 21:30:21 +0800
Subject: [PATCH 021/226] translated
---
 ...to examine network connections on Linux.md | 57 ++++++++++---------
 1 file changed, 29 insertions(+), 28 deletions(-)
 rename {sources => translated}/tech/20171019 More ways to examine network
connections on Linux.md rename to translated/tech/20171019 More ways to examine network connections on Linux.md index 41e19559bf..8afd276c88 100644 --- a/sources/tech/20171019 More ways to examine network connections on Linux.md +++ b/translated/tech/20171019 More ways to examine network connections on Linux.md @@ -1,13 +1,12 @@ -translating by kimii -More ways to examine network connections on Linux +检查 linux 上网络连接的更多方法 ====== -The ifconfig and netstat commands are incredibly useful, but there are many other commands that can help you see what's up with you network on Linux systems. Today's post explores some very handy commands for examining network connections. +ifconfig 和 netstat 命令当然非常有用,但还有很多其他命令能帮你查看 linux 系统上的网络状况。本文探索了一些检查网络连接的非常简便的命令。 -### ip command +### ip 命令 -The **ip** command shows a lot of the same kind of information that you'll get when you use **ifconfig**. Some of the information is in a different format - e.g., "192.168.0.6/24" instead of "inet addr:192.168.0.6 Bcast:192.168.0.255" and ifconfig is better for packet counts, but the ip command has many useful options. +**ip** 命令显示了许多与你使用 **ifconfig** 命令时的一样信息。其中一些信息以不同的格式呈现,比如使用“192.168.0.6/24”,而不是“inet addr:192.168.0.6 Bcast:192.168.0.255”,尽管 ifconfig 更适合数据包计数,但 ip 命令有许多有用的选项。 -First, here's the **ip a** command listing information on all network interfaces. +首先,这里是 **ip a** 命令列出的所有网络接口的信息。 ``` $ ip a 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1 @@ -25,7 +24,7 @@ $ ip a ``` -If you want only to see a simple list of network interfaces, you can limit its output with **grep**. 
+如果你只想看到简单的网络接口列表,你可以用 **grep** 限制它的输出。 ``` $ ip a | grep inet inet 127.0.0.1/8 scope host lo @@ -35,7 +34,7 @@ $ ip a | grep inet ``` -You can get a glimpse of your default route using a command like this: +使用如下面的命令,你可以看到你的默认路由: ``` $ ip route show default via 192.168.0.1 dev eth0 @@ -43,18 +42,18 @@ default via 192.168.0.1 dev eth0 ``` -In this output, you can see that the default gateway is 192.168.0.1 through eth0 and that the local network is the fairly standard 192.168.0.0/24. +在这个输出中,你可以看到通过 eth0 的默认网关是 192.168.0.1,并且本地网络是相当标准的 192.168.0.0/24。 -You can also use the **ip** command to bring network interfaces up and shut them down. +你也可以使用 **ip** 命令来启用和禁用网络接口。 ``` $ sudo ip link set eth1 up $ sudo ip link set eth1 down ``` -### ethtool command +### ethtool 命令 -Another very useful tool for examining networks is **ethtool**. This command provides a lot of descriptive data on network interfaces. +另一个检查网络非常有用的工具是 **ethtool**。这个命令提供了网络接口上的许多描述性的数据。 ``` $ ethtool eth0 Settings for eth0: @@ -83,7 +82,7 @@ Cannot get wake-on-lan settings: Operation not permitted ``` -You can also use the **ethtool** command to examine ethernet driver settings. +你也可以使用 **ethtool** 命令来检查以太网驱动设置。 ``` $ ethtool -i eth0 driver: e1000e @@ -99,7 +98,7 @@ supports-priv-flags: no ``` -The autonegotiation details can be displayed with a command like this: +自动协商的详细信息可以用这样的命令来显示: ``` $ ethtool -a eth0 Pause parameters for eth0: @@ -109,9 +108,10 @@ TX: on ``` -### traceroute command +### traceroute 命令 -The **traceroute** command displays routing pathways. It works by using the TTL (time to live) field in the packet header in a series of packets to capture the path that packets take and how long they take to get from one hop to the next. Traceroute's output helps to gauge the health of network connections, since some routes might take much longer to reach the eventual destination. 
+ +**traceroute** 命令显示路由路径。它通过在一系列数据包中设置数据包头的TTL(生存时间)字段来捕获数据包所经过的路径,以及数据包从一跳到下一跳需要的时间。Traceroute 的输出有助于评估网络连接的健康状况,因为某些路由可能需要花费更长的时间才能到达最终的目的地。 ``` $ sudo traceroute world.std.com traceroute to world.std.com (192.74.137.5), 30 hops max, 60 byte packets @@ -133,13 +133,13 @@ traceroute to world.std.com (192.74.137.5), 30 hops max, 60 byte packets ``` -### tcptraceroute command +### tcptraceroute 命令 -The **tcptraceroute** command does basically the same thing as traceroute except that it is able to bypass the most common firewall filters. As the command's man page explains, tcptraceroute sends out TCP SYN packets instead of UDP or ICMP ECHO packets, thus making it less susceptible to being blocked. +**tcptraceroute** 命令与 traceroute 基本上是一样的,只是它能够绕过最常见的防火墙的过滤。正如该命令的手册页所述,tcptraceroute 发送 TCP SYN 数据包而不是 UDP 或 ICMP ECHO 数据包,所以其不易被阻塞。 -### tcpdump command +### tcpdump 命令 -The **tcpdump** command allows you to capture network packets for later analysis. With the -D option, it lists available interfaces. +**tcpdump** 命令允许你捕获网络数据包来进一步分析。使用 -D 选项列出可用的网络接口。 ``` $ tcpdump -D 1.eth0 [Up, Running] @@ -157,7 +157,7 @@ $ tcpdump -D ``` -The -v (verbose) option controls how much detail you will see -- more v's, more details, but more than three v's doesn't add anything more. +-v(verbose)选项控制你看到的细节程度--越多的 v,越详细,但超过 3 个 v 不会有更多意义。 ``` $ sudo tcpdump -vv host 192.168.0.32 tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes @@ -172,9 +172,10 @@ tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 byt ``` -Expect to see a _lot_ of output when you run commands like this one. +当你运行像这样的命令时,会看到非常多的输出。 + +这个命令捕获来自特定主机和 eth0 上的 11 个数据包。-w 选项标识保存捕获包的文件。在这个示例命令中,我们只要求捕获 11 个数据包。 -This command captures 11 packets from a specific host and over eth0. The -w option identifies the file that will contain the capture packets. In this example command, we've only asked to capture 11 packets. 
``` $ sudo tcpdump -c 11 -i eth0 src 192.168.0.32 -w packets.pcap tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes @@ -184,9 +185,10 @@ tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 byt ``` -### arp command +### arp 命令 + +arp 命令将 IPv4 地址映射到硬件地址。它所提供的信息也可以在一定程度上用于识别系统,因为网络适配器可以告诉你使用它们的系统的一些信息。下面的第二个MAC 地址,从 f8:8e:85 开始,很容易被识别为 Comtrend 路由器。 -The arp command maps IPv4 addresses to hardware addresses. The information provided can also be used to identify the systems to some extent, since the network adaptors in use can tell you something about the systems using them. The second MAC address below, starting with f8:8e:85, is easily identified as a Comtrend router. ``` $ arp -a ? (192.168.0.12) at b0:c0:90:3f:10:15 [ether] on eth0 @@ -194,15 +196,14 @@ $ arp -a ``` -The first line above shows the MAC address for the network adaptor on the system itself. This network adaptor appears to have been manufactured by Chicony Electronics in Taiwan. 
You can look up MAC address associations fairly easily on the web with tools such as this one from Wireshark -- https://www.wireshark.org/tools/oui-lookup.html - +上面的第一行显示了系统本身的网络适配器的 MAC 地址。该网络适配器似乎已由台湾 Chicony 电子公司制造。你可以很容易地在网上查找 MAC 地址关联,例如来自 Wireshark 的这个工具 -- https://www.wireshark.org/tools/oui-lookup.html -------------------------------------------------------------------------------- via: https://www.networkworld.com/article/3233306/linux/more-ways-to-examine-network-connections-on-linux.html 作者:[Sandra Henry-Stocker][a] -译者:[译者ID](https://github.com/译者ID) +译者:[kimii](https://github.com/kimii) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 3985f4504546dc130b3c95dd0dde7d0d85c49163 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 15 Jan 2018 22:03:29 +0800 Subject: [PATCH 022/226] PRF&PUB:20120611 30 Handy Bash Shell Aliases For Linux - Unix - Mac OS X.md @lujun9972 https://linux.cn/article-9242-1.html --- ...ell Aliases For Linux - Unix - Mac OS X.md | 196 +++++++++++------- 1 file changed, 121 insertions(+), 75 deletions(-) rename {translated/tech => published}/20120611 30 Handy Bash Shell Aliases For Linux - Unix - Mac OS X.md (71%) diff --git a/translated/tech/20120611 30 Handy Bash Shell Aliases For Linux - Unix - Mac OS X.md b/published/20120611 30 Handy Bash Shell Aliases For Linux - Unix - Mac OS X.md similarity index 71% rename from translated/tech/20120611 30 Handy Bash Shell Aliases For Linux - Unix - Mac OS X.md rename to published/20120611 30 Handy Bash Shell Aliases For Linux - Unix - Mac OS X.md index d637c92858..236e0defa6 100644 --- a/translated/tech/20120611 30 Handy Bash Shell Aliases For Linux - Unix - Mac OS X.md +++ b/published/20120611 30 Handy Bash Shell Aliases For Linux - Unix - Mac OS X.md @@ -1,35 +1,43 @@ -Linux / Unix / Mac OS X 中的 30 个方便的 Bash shell 别名 +30 个方便的 Bash shell 别名 ====== -bash 别名不是把别的,只不过是指向命令的快捷方式而已。`alias` 
命令允许用户只输入一个单词就运行任意一个命令或一组命令(包括命令选项和文件名)。执行 `alias` 命令会显示一个所有已定义别名的列表。你可以在 [~/.bashrc][1] 文件中自定义别名。使用别名可以在命令行中减少输入的时间,使工作更流畅,同时增加生产率。 + +bash 别名alias只不过是指向命令的快捷方式而已。`alias` 命令允许用户只输入一个单词就运行任意一个命令或一组命令(包括命令选项和文件名)。执行 `alias` 命令会显示一个所有已定义别名的列表。你可以在 [~/.bashrc][1] 文件中自定义别名。使用别名可以在命令行中减少输入的时间,使工作更流畅,同时增加生产率。 本文通过 30 个 bash shell 别名的实际案例演示了如何创建和使用别名。 ![30 Useful Bash Shell Aliase For Linux/Unix Users][2] -## bash alias 的那些事 +### bash alias 的那些事 bash shell 中的 alias 命令的语法是这样的: -### 如何列出 bash 别名 +``` +alias [alias-name[=string]...] +``` + +#### 如何列出 bash 别名 + +输入下面的 [alias 命令][3]: -输入下面的 [alias 命令 ][3]: ``` alias ``` + 结果为: + ``` alias ..='cd ..' alias amazonbackup='s3backup' alias apt-get='sudo apt-get' ... - ``` -默认 alias 命令会列出当前用户定义好的别名。 +`alias` 命令默认会列出当前用户定义好的别名。 -### 如何定义或者说创建一个 bash shell 别名 +#### 如何定义或者创建一个 bash shell 别名 + +使用下面语法 [创建别名][4]: -使用下面语法 [创建别名 ][4]: ``` alias name =value alias name = 'command' @@ -38,19 +46,22 @@ alias name = '/path/to/script' alias name = '/path/to/script.pl arg1' ``` -举个例子,输入下面命令并回车就会为常用的 `clear`( 清除屏幕)命令创建一个别名 **c**: +举个例子,输入下面命令并回车就会为常用的 `clear`(清除屏幕)命令创建一个别名 `c`: + ``` alias c = 'clear' ``` 然后输入字母 `c` 而不是 `clear` 后回车就会清除屏幕了: + ``` c ``` -### 如何临时性地禁用 bash 别名 +#### 如何临时性地禁用 bash 别名 + +下面语法可以[临时性地禁用别名][5]: -下面语法可以[临时性地禁用别名 ][5]: ``` ## path/to/full/command /usr/bin/clear @@ -60,37 +71,43 @@ c command ls ``` -### 如何删除 bash 别名 +#### 如何删除 bash 别名 + +使用 [unalias 命令来删除别名][6]。其语法为: -使用 [unalias 命令来删除别名 ][6]。其语法为: ``` unalias aliasname unalias foo ``` 例如,删除我们之前创建的别名 `c`: + ``` unalias c ``` -你还需要用文本编辑器删掉 [~/.bashrc 文件 ][1] 中的别名定义(参见下一部分内容)。 +你还需要用文本编辑器删掉 [~/.bashrc 文件][1] 中的别名定义(参见下一部分内容)。 -### 如何让 bash shell 别名永久生效 +#### 如何让 bash shell 别名永久生效 别名 `c` 在当前登录会话中依然有效。但当你登出或重启系统后,别名 `c` 就没有了。为了防止出现这个问题,将别名定义写入 [~/.bashrc file][1] 中,输入: + ``` vi ~/.bashrc ``` + 输入下行内容让别名 `c` 对当前用户永久有效: + ``` alias c = 'clear' ``` -保存并关闭文件就行了。系统级的别名(也就是对所有用户都生效的别名) 可以放在 `/etc/bashrc` 文件中。请注意,alias 命令内建于各种 shell 中,包括 ksh,tcsh/csh,ash,bash 以及其他 shell。 
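顺带提醒一下:在 bash 中定义别名时,等号两侧不能有空格,否则 shell 会把它当成多个参数而报错。下面是一个可以直接运行的小演示(别名 c 仅作示例):

```shell
# 正确写法:等号两侧没有空格;"alias c = 'clear'" 这种写法是无效的
alias c='clear'

# 查看刚定义的别名
alias c

# 删除该别名
unalias c

# 若要永久生效,可把定义追加到 ~/.bashrc,例如:
#   echo "alias c='clear'" >> ~/.bashrc
```

非交互式脚本默认不会展开别名(bash 需要 `shopt -s expand_aliases`),因此别名主要用于交互式 shell。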
+保存并关闭文件就行了。系统级的别名(也就是对所有用户都生效的别名)可以放在 `/etc/bashrc` 文件中。请注意,`alias` 命令内建于各种 shell 中,包括 ksh,tcsh/csh,ash,bash 以及其他 shell。 -### 关于特权权限判断 +#### 关于特权权限判断 可以将下面代码加入 `~/.bashrc`: + ``` # if user is not root, pass all commands via sudo # if [ $UID -ne 0 ]; then @@ -99,9 +116,10 @@ if [ $UID -ne 0 ]; then fi ``` -### 定义与操作系统类型相关的别名 +#### 定义与操作系统类型相关的别名 + +可以将下面代码加入 `~/.bashrc` [使用 case 语句][7]: -可以将下面代码加入 `~/.bashrc` [使用 case 语句 ][7]: ``` ### Get os name via uname ### _myos="$(uname)" @@ -115,13 +133,14 @@ case $_myos in esac ``` -## 30 个 bash shell 别名的案例 +### 30 个 bash shell 别名的案例 你可以定义各种类型的别名来节省时间并提高生产率。 -### #1:控制 ls 命令的输出 +#### #1:控制 ls 命令的输出 + +[ls 命令列出目录中的内容][8] 而你可以对输出进行着色: -[ls 命令列出目录中的内容 ][8] 而你可以对输出进行着色: ``` ## Colorize the ls output ## alias ls = 'ls --color=auto' @@ -133,7 +152,8 @@ alias ll = 'ls -la' alias l.= 'ls -d . .. .git .gitignore .gitmodules .travis.yml --color=auto' ``` -### #2:控制 cd 命令的行为 +#### #2:控制 cd 命令的行为 + ``` ## get rid of command not found ## alias cd..= 'cd ..' @@ -147,9 +167,10 @@ alias .4= 'cd ../../../../' alias .5= 'cd ../../../../..' 
``` -### #3:控制 grep 命令的输出 +#### #3:控制 grep 命令的输出 + +[grep 命令是一个用于在纯文本文件中搜索匹配正则表达式的行的命令行工具][9]: -[grep 命令是一个用于在纯文本文件中搜索匹配正则表达式的行的命令行工具 ][9]: ``` ## Colorize the grep command output for ease of use (good for log files)## alias grep = 'grep --color=auto' @@ -157,44 +178,51 @@ alias egrep = 'egrep --color=auto' alias fgrep = 'fgrep --color=auto' ``` -### #4:让计算器默认开启 math 库 +#### #4:让计算器默认开启 math 库 + ``` alias bc = 'bc -l' ``` -### #4:生成 sha1 数字签名 +#### #4:生成 sha1 数字签名 + ``` alias sha1 = 'openssl sha1' ``` -### #5:自动创建父目录 +#### #5:自动创建父目录 + +[mkdir 命令][10] 用于创建目录: -[mkdir 命令 ][10] 用于创建目录: ``` alias mkdir = 'mkdir -pv' ``` -### #6:为 diff 输出着色 +#### #6:为 diff 输出着色 + +你可以[使用 diff 来一行行第比较文件][11] 而一个名为 `colordiff` 的工具可以为 diff 输出着色: -你可以[使用 diff 来一行行第比较文件 ][11] 而一个名为 colordiff 的工具可以为 diff 输出着色: ``` # install colordiff package :) alias diff = 'colordiff' ``` -### #7:让 mount 命令的输出更漂亮,更方便人类阅读 +#### #7:让 mount 命令的输出更漂亮,更方便人类阅读 + ``` alias mount = 'mount |column -t' ``` -### #8:简化命令以节省时间 +#### #8:简化命令以节省时间 + ``` # handy short cuts # alias h = 'history' alias j = 'jobs -l' ``` -### #9:创建一系列新命令 +#### #9:创建一系列新命令 + ``` alias path = 'echo -e ${PATH//:/\\n}' alias now = 'date +"%T"' @@ -202,7 +230,8 @@ alias nowtime =now alias nowdate = 'date +"%d-%m-%Y"' ``` -### #10:设置 vim 为默认编辑器 +#### #10:设置 vim 为默认编辑器 + ``` alias vi = vim alias svi = 'sudo vi' @@ -210,7 +239,8 @@ alias vis = 'vim "+set si"' alias edit = 'vim' ``` -### #11:控制网络工具 ping 的输出 +#### #11:控制网络工具 ping 的输出 + ``` # Stop after sending count ECHO_REQUEST packets # alias ping = 'ping -c 5' @@ -219,16 +249,18 @@ alias ping = 'ping -c 5' alias fastping = 'ping -c 100 -s.2' ``` -### #12:显示打开的端口 +#### #12:显示打开的端口 + +使用 [netstat 命令][12] 可以快速列出服务区中所有的 TCP/UDP 端口: -使用 [netstat 命令 ][12] 可以快速列出服务区中所有的 TCP/UDP 端口: ``` alias ports = 'netstat -tulanp' ``` -### #13:唤醒休眠额服务器 +#### #13:唤醒休眠的服务器 + +[Wake-on-LAN (WOL) 是一个以太网标准][13],可以通过网络消息来开启服务器。你可以使用下面别名来[快速激活 nas 设备][14] 以及服务器: -[Wake-on-LAN (WOL) 是一个以太网标准 ][13],可以通过网络消息来开启服务器。你可以使用下面别名来[快速激活 
nas 设备 ][14] 以及服务器: ``` ## replace mac with your actual server mac address # alias wakeupnas01 = '/usr/bin/wakeonlan 00:11:32:11:15:FC' @@ -236,9 +268,10 @@ alias wakeupnas02 = '/usr/bin/wakeonlan 00:11:32:11:15:FD' alias wakeupnas03 = '/usr/bin/wakeonlan 00:11:32:11:15:FE' ``` -### #14:控制防火墙 (iptables) 的输出 +#### #14:控制防火墙 (iptables) 的输出 + +[Netfilter 是一款 Linux 操作系统上的主机防火墙][15]。它是 Linux 发行版中的一部分,且默认情况下是激活状态。[这里列出了大多数 Liux 新手防护入侵者最常用的 iptables 方法][16]。 -[Netfilter 是一款 Linux 操作系统上的主机防火墙 ][15]。它是 Linux 发行版中的一部分,且默认情况下是激活状态。[这里列出了大多数 Liux 新手防护入侵者最常用的 iptables 方法 ][16]。 ``` ## shortcut for iptables and pass it via sudo# alias ipt = 'sudo /sbin/iptables' @@ -251,7 +284,8 @@ alias iptlistfw = 'sudo /sbin/iptables -L FORWARD -n -v --line-numbers' alias firewall =iptlist ``` -### #15:使用 curl 调试 web 服务器 /cdn 上的问题 +#### #15:使用 curl 调试 web 服务器 / CDN 上的问题 + ``` # get web server headers # alias header = 'curl -I' @@ -260,7 +294,8 @@ alias header = 'curl -I' alias headerc = 'curl -I --compress' ``` -### #16:增加安全性 +#### #16:增加安全性 + ``` # do not delete / or prompt if deleting more than 3 files at a time # alias rm = 'rm -I --preserve-root' @@ -276,9 +311,10 @@ alias chmod = 'chmod --preserve-root' alias chgrp = 'chgrp --preserve-root' ``` -### #17:更新 Debian Linux 服务器 +#### #17:更新 Debian Linux 服务器 + +[apt-get 命令][17] 用于通过因特网安装软件包 (ftp 或 http)。你也可以一次性升级所有软件包: -[apt-get 命令 ][17] 用于通过因特网安装软件包 (ftp 或 http)。你也可以一次性升级所有软件包: ``` # distro specific - Debian / Ubuntu and friends # # install with apt-get @@ -289,25 +325,27 @@ alias updatey = "sudo apt-get --yes" alias update = 'sudo apt-get update && sudo apt-get upgrade' ``` -### #18:更新 RHEL / CentOS / Fedora Linux 服务器 +#### #18:更新 RHEL / CentOS / Fedora Linux 服务器 + +[yum 命令][18] 是 RHEL / CentOS / Fedora Linux 以及其他基于这些发行版的 Linux 上的软件包管理工具: -[yum 命令 ][18] 是 RHEL / CentOS / Fedora Linux 以及其他基于这些发行版的 Linux 上的软件包管理工具: ``` ## distrp specifc RHEL/CentOS ## alias update = 'yum update' alias updatey = 'yum -y update' ``` -### #19:优化 sudo 和 su 命令 
+#### #19:优化 sudo 和 su 命令 + ``` # become root # alias root = 'sudo -i' alias su = 'sudo -i' ``` -### #20:使用 sudo 执行 halt/reboot 命令 +#### #20:使用 sudo 执行 halt/reboot 命令 -[shutdown 命令 ][19] 会让 Linux / Unix 系统关机: +[shutdown 命令][19] 会让 Linux / Unix 系统关机: ``` # reboot / halt / poweroff alias reboot = 'sudo /sbin/reboot' @@ -316,7 +354,8 @@ alias halt = 'sudo /sbin/halt' alias shutdown = 'sudo /sbin/shutdown' ``` -### #21:控制 web 服务器 +#### #21:控制 web 服务器 + ``` # also pass it via sudo so whoever is admin can reload it without calling you # alias nginxreload = 'sudo /usr/local/nginx/sbin/nginx -s reload' @@ -327,7 +366,8 @@ alias httpdreload = 'sudo /usr/sbin/apachectl -k graceful' alias httpdtest = 'sudo /usr/sbin/apachectl -t && /usr/sbin/apachectl -t -D DUMP_VHOSTS' ``` -### #22:与备份相关的别名 +#### #22:与备份相关的别名 + ``` # if cron fails or if you want backup on demand just run these commands # # again pass it via sudo so whoever is in admin group can start the job # @@ -342,7 +382,8 @@ alias rsnapshotmonthly = 'sudo /home/scripts/admin/scripts/backup/wrapper.rsnaps alias amazonbackup =s3backup ``` -### #23:桌面应用相关的别名 - 按需播放的 avi/mp3 文件 +#### #23:桌面应用相关的别名 - 按需播放的 avi/mp3 文件 + ``` ## play video files in a current directory ## # cd ~/Download/movie-name @@ -364,10 +405,10 @@ alias nplaymp3 = 'for i in /nas/multimedia/mp3/*.mp3; do mplayer "$i"; done' alias music = 'mplayer --shuffle *' ``` +#### #24:设置系统管理相关命令的默认网卡 -### #24:设置系统管理相关命令的默认网卡 +[vnstat 一款基于终端的网络流量检测器][20]。[dnstop 是一款分析 DNS 流量的终端工具][21]。[tcptrack 和 iftop 命令显示][22] TCP/UDP 连接方面的信息,它监控网卡并显示其消耗的带宽。 -[vnstat 一款基于终端的网络流量检测器 ][20]。[dnstop 是一款分析 DNS 流量的终端工具 ][21]。[tcptrack 和 iftop 命令显示 ][22] TCP/UDP 连接方面的信息,它监控网卡并显示其消耗的带宽。 ``` ## All of our servers eth1 is connected to the Internets via vlan / router etc ## alias dnstop = 'dnstop -l 5 eth1' @@ -381,7 +422,8 @@ alias ethtool = 'ethtool eth1' alias iwconfig = 'iwconfig wlan0' ``` -### #25:快速获取系统内存,cpu 使用,和 gpu 内存相关信息 +#### #25:快速获取系统内存,cpu 使用,和 gpu 内存相关信息 + ``` ## pass options to 
free ## alias meminfo = 'free -m -l -t' @@ -404,9 +446,10 @@ alias cpuinfo = 'lscpu' alias gpumeminfo = 'grep -i --color memory /var/log/Xorg.0.log' ``` -### #26:控制家用路由器 +#### #26:控制家用路由器 + +`curl` 命令可以用来 [重启 Linksys 路由器][23]。 -curl 命令可以用来 [重启 Linksys 路由器 ][23]。 ``` # Reboot my home Linksys WAG160N / WAG54 / WAG320 / WAG120N Router / Gateway from *nix. alias rebootlinksys = "curl -u 'admin:my-super-password' 'http://192.168.1.2/setup.cgi?todo=reboot'" @@ -415,15 +458,17 @@ alias rebootlinksys = "curl -u 'admin:my-super-password' 'http://192.168.1.2/set alias reboottomato = "ssh admin@192.168.1.1 /sbin/reboot" ``` -### #27 wget 默认断点续传 +#### #27 wget 默认断点续传 + +[GNU wget 是一款用来从 web 下载文件的自由软件][25]。它支持 HTTP,HTTPS,以及 FTP 协议,而且它也支持断点续传: -[GNU Wget 是一款用来从 web 下载文件的自由软件 ][25]。它支持 HTTP,HTTPS,以及 FTP 协议,而且它页支持断点续传: ``` ## this one saved by butt so many times ## alias wget = 'wget -c' ``` -### #28 使用不同浏览器来测试网站 +#### #28 使用不同浏览器来测试网站 + ``` ## this one saved by butt so many times ## alias ff4 = '/opt/firefox4/firefox' @@ -438,9 +483,10 @@ alias ff =ff13 alias browser =chrome ``` -### #29:关于 ssh 别名的注意事项 +#### #29:关于 ssh 别名的注意事项 不要创建 ssh 别名,代之以 `~/.ssh/config` 这个 OpenSSH SSH 客户端配置文件。它的选项更加丰富。下面是一个例子: + ``` Host server10 Hostname 1.2.3.4 @@ -451,12 +497,13 @@ Host server10 TCPKeepAlive yes ``` -然后你就可以使用下面语句连接 peer1 了: +然后你就可以使用下面语句连接 server10 了: + ``` $ ssh server10 ``` -### #30:现在该分享你的别名了 +#### #30:现在该分享你的别名了 ``` ## set some other defaults ## @@ -486,27 +533,26 @@ alias cdnmdel = '/home/scripts/admin/cdn/purge_cdn_cache --profile akamai --stdi alias amzcdnmdel = '/home/scripts/admin/cdn/purge_cdn_cache --profile amazon --stdin' ``` -## 结论 +### 总结 本文总结了 *nix bash 别名的多种用法: - 1。为命令设置默认的参数(例如通过 `alias ethtool='ethtool eth0'` 设置 ethtool 命令的默认参数为 eth0)。 - 2。修正错误的拼写(通过 `alias cd。.='cd .。'`让 `cd。.` 变成 `cd .。`)。 - 3。缩减输入。 - 4。设置系统中多版本命令的默认路径(例如 GNU/grep 位于 /usr/local/bin/grep 中而 Unix grep 位于 /bin/grep 中。若想默认使用 GNU grep 则设置别名 `grep='/usr/local/bin/grep'` )。 - 5。通过默认开启命令(例如 rm,mv 
等其他命令)的交互参数来增加 Unix 的安全性。 - 6。为老旧的操作系统(比如 MS-DOS 或者其他类似 Unix 的操作系统)创建命令以增加兼容性(比如 `alias del=rm` )。 +1. 为命令设置默认的参数(例如通过 `alias ethtool='ethtool eth0'` 设置 ethtool 命令的默认参数为 eth0)。 +2. 修正错误的拼写(通过 `alias cd..='cd ..'`让 `cd..` 变成 `cd ..`)。 +3. 缩减输入。 +4. 设置系统中多版本命令的默认路径(例如 GNU/grep 位于 `/usr/local/bin/grep` 中而 Unix grep 位于 `/bin/grep` 中。若想默认使用 GNU grep 则设置别名 `grep='/usr/local/bin/grep'` )。 +5. 通过默认开启命令(例如 `rm`,`mv` 等其他命令)的交互参数来增加 Unix 的安全性。 +6. 为老旧的操作系统(比如 MS-DOS 或者其他类似 Unix 的操作系统)创建命令以增加兼容性(比如 `alias del=rm`)。 我已经分享了多年来为了减少重复输入命令而使用的别名。若你知道或使用的哪些 bash/ksh/csh 别名能够减少输入,请在留言框中分享。 - -------------------------------------------------------------------------------- via: https://www.cyberciti.biz/tips/bash-aliases-mac-centos-linux-unix.html 作者:[nixCraft][a] 译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 53e16c7b3652d4b129f1738e15b7725206efa821 Mon Sep 17 00:00:00 2001 From: Flowsnow Date: Mon, 15 Jan 2018 22:08:19 +0800 Subject: [PATCH 023/226] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90-20?= =?UTF-8?q?171212=20How=20To=20Count=20The=20Number=20Of=20Files=20And=20F?= =?UTF-8?q?olders-Directories=20In=20Linux.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... Files And Folders-Directories In Linux.md | 181 ------------------ ... 
Files And Folders-Directories In Linux.md | 163 ++++++++++++++++ 2 files changed, 163 insertions(+), 181 deletions(-) delete mode 100644 sources/tech/20171212 How To Count The Number Of Files And Folders-Directories In Linux.md create mode 100644 translated/tech/20171212 How To Count The Number Of Files And Folders-Directories In Linux.md diff --git a/sources/tech/20171212 How To Count The Number Of Files And Folders-Directories In Linux.md b/sources/tech/20171212 How To Count The Number Of Files And Folders-Directories In Linux.md deleted file mode 100644 index eca8dbc17b..0000000000 --- a/sources/tech/20171212 How To Count The Number Of Files And Folders-Directories In Linux.md +++ /dev/null @@ -1,181 +0,0 @@ -translating by Flowsnow - -How To Count The Number Of Files And Folders/Directories In Linux -====== - -Hi folks, today again we came with set of tricky commands that help you in many ways. It's kind of manipulation commands which help you to count files and directories in the current directory, recursive count, list of files created by particular user, etc,. - -In this tutorial, we are going to show you, how to use more than one command like, all together to perform some advanced actions using ls, egrep, wc and find command. The below set of commands which helps you in many ways. - -To experiment this, i'm going to create totally 7 files and 2 folders (5 regular files & 2 hidden files). See the below tree command output which clearly shows the files and folder lists. - -**Suggested Read :** [File Manipulation Commands][1] -``` -# tree -a /opt -/opt -├── magi -│   └── 2g -│   ├── test5.txt -│   └── .test6.txt -├── test1.txt -├── test2.txt -├── test3.txt -├── .test4.txt -└── test.txt - -2 directories, 7 files - -``` - -**Example-1 :** To count current directory files (excluded hidden files). Run the following command to determine how many files there are in the current directory and it doesn't count dotfiles. -``` -# ls -l . 
| egrep -c '^-' -4 - -``` - -**Details :** - - * `ls` : list directory contents - * `-l` : Use a long listing format - * `.` : List information about the FILEs (the current directory by default). - * `|` : control operator that send the output of one program to another program for further processing. - * `egrep` : print lines matching a pattern - * `-c` : General Output Control - * `'^-'` : This respectively match the empty string at the beginning and end of a line. - - - -**Example-2 :** To count current directory files which includes hidden files. This will include dotfiles as well in the current directory. -``` -# ls -la . | egrep -c '^-' -5 - -``` - -**Example-3 :** Run the following command to count current directory files & folders. It will count all together at once. -``` -# ls -1 | wc -l -5 - -``` - -**Details :** - - * `ls` : list directory contents - * `-l` : Use a long listing format - * `|` : control operator that send the output of one program to another program for further processing. - * `wc` : It's a command to print newline, word, and byte counts for each file - * `-l` : print the newline counts - - - -**Example-4 :** To count current directory files & folders which includes hidden files & directory. -``` -# ls -1a | wc -l -8 - -``` - -**Example-5 :** To count current directory files recursively which includes hidden files. -``` -# find . -type f | wc -l -7 - -``` - -**Details :** - - * `find` : search for files in a directory hierarchy - * `-type` : File is of type - * `f` : regular file - * `wc` : It's a command to print newline, word, and byte counts for each file - * `-l` : print the newline counts - - - -**Example-6 :** To print directories & files count using tree command (excluded hidden files). -``` -# tree | tail -1 -2 directories, 5 files - -``` - -**Example-7 :** To print directories & files count using tree command which includes hidden files. 
-``` -# tree -a | tail -1 -2 directories, 7 files - -``` - -**Example-8 :** Run the below command to count directory recursively which includes hidden directory. -``` -# find . -type d | wc -l -3 - -``` - -**Example-9 :** To count the number of files based on file extension. Here we are going to count `.txt` files. -``` -# find . -name "*.txt" | wc -l -7 - -``` - -**Example-10 :** Count all files in the current directory by using the echo command in combination with the wc command. `4` indicates the amount of files in the current directory. -``` -# echo * | wc -1 4 39 - -``` - -**Example-11 :** Count all directories in the current directory by using the echo command in combination with the wc command. `1` indicates the amount of directories in the current directory. -``` -# echo comic/ published/ sources/ translated/ | wc -1 1 6 - -``` - -**Example-12 :** Count all files and directories in the current directory by using the echo command in combination with the wc command. `5` indicates the amount of directories and files in the current directory. 
-``` -# echo * | wc -1 5 44 - -``` - -**Example-13 :** To count number of files in the system (Entire system) -``` -# find / -type f | wc -l -69769 - -``` - -**Example-14 :** To count number of folders in the system (Entire system) -``` -# find / -type d | wc -l -8819 - -``` - -**Example-15 :** Run the following command to count number of files, folders, hardlinks, and symlinks in the system (Entire system) -``` -# find / -type d -exec echo dirs \; -o -type l -exec echo symlinks \; -o -type f -links +1 -exec echo hardlinks \; -o -type f -exec echo files \; | sort | uniq -c - 8779 dirs - 69343 files - 20 hardlinks - 11646 symlinks - -``` - --------------------------------------------------------------------------------- - -via: https://www.2daygeek.com/how-to-count-the-number-of-files-and-folders-directories-in-linux/ - -作者:[Magesh Maruthamuthu][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.2daygeek.com/author/magesh/ -[1]:https://www.2daygeek.com/empty-a-file-delete-contents-lines-from-a-file-remove-matching-string-from-a-file-remove-empty-blank-lines-from-a-file/ diff --git a/translated/tech/20171212 How To Count The Number Of Files And Folders-Directories In Linux.md b/translated/tech/20171212 How To Count The Number Of Files And Folders-Directories In Linux.md new file mode 100644 index 0000000000..5b8fe7f215 --- /dev/null +++ b/translated/tech/20171212 How To Count The Number Of Files And Folders-Directories In Linux.md @@ -0,0 +1,163 @@ +如何统计Linux中文件和文件夹/目录的数量 +====== +嗨,伙计们,今天我们又来了一系列棘手的命令,会多方面帮助你。 这是一种操作命令,它可以帮助您计算当前目录中的文件和目录,递归计数,特定用户创建的文件列表等。 + +在本教程中,我们将向您展示如何使用多个命令,并使用ls,egrep,wc和find命令执行一些高级操作。 下面的命令很有帮助。 + +为了实验,我打算总共创建7个文件和2个文件夹(5个常规文件和2个隐藏文件)。 看到下面的tree命令的输出清楚的展示文件和文件夹列表。 + +**推荐阅读** [文件操作命令][1] +``` +# tree -a /opt +/opt +├── magi +│   └── 2g +│   ├── test5.txt +│   └── .test6.txt +├── test1.txt +├── 
test2.txt
+├── test3.txt
+├── .test4.txt
+└── test.txt
+
+2 directories, 7 files
+
+```
+
+**示例-1 :** 统计当前目录文件(排除隐藏文件)。 运行以下命令以确定当前目录中有多少个文件,并且不计算点文件(LCTT译者注:点文件即以 `.` 开头的隐藏文件)。
+```
+# ls -l . | egrep -c '^-'
+4
+```
+
+**细节:**
+
+ * `ls` : 列出目录内容
+ * `-l` : 使用长列表格式
+ * `.` : 列出有关文件的信息(默认为当前目录)
+ * `|` : 管道操作符,将一个程序的输出发送给另一个程序做进一步处理
+ * `egrep` : 打印符合模式的行
+ * `-c` : 只输出匹配行的数量
+ * `'^-'` : 匹配以 `-` 开头的行,即长列表中普通文件对应的行
+
+
+
+**示例-2 :** 统计包含隐藏文件的当前目录文件。 包括当前目录中的点文件。
+```
+# ls -la . | egrep -c '^-'
+5
+```
+
+**示例-3 :** 运行以下命令来计算当前目录文件和文件夹。 它会把文件和文件夹一起统计。
+```
+# ls -1 | wc -l
+5
+```
+
+**细节:**
+
+ * `ls` : 列出目录内容
+ * `-l` : 使用长列表格式
+ * `|` : 管道操作符,将一个程序的输出发送给另一个程序做进一步处理
+ * `wc` : 用于统计文件的行数、单词数和字节数的命令
+ * `-l` : 打印行数
+
+
+
+**示例-4 :** 统计包含隐藏文件和目录的当前目录文件和文件夹。
+```
+# ls -1a | wc -l
+8
+```
+
+**示例-5 :** 递归计算当前目录文件,其中包括隐藏文件。
+```
+# find . -type f | wc -l
+7
+```
+
+**细节 :**
+
+ * `find` : 搜索目录层次结构中的文件
+ * `-type` : 文件类型
+ * `f` : 常规文件
+ * `wc` : 用于统计文件的行数、单词数和字节数的命令
+ * `-l` : 打印行数
+
+
+
+**示例-6 :** 使用tree命令打印目录和文件数(排除隐藏文件)。
+```
+# tree | tail -1
+2 directories, 5 files
+```
+
+**示例-7 :** 使用tree命令打印包含隐藏文件的目录和文件数。
+```
+# tree -a | tail -1
+2 directories, 7 files
+```
+
+**示例-8 :** 运行下面的命令,递归统计目录数(包括隐藏目录)。
+```
+# find . -type d | wc -l
+3
+```
+
+**示例-9 :** 根据文件扩展名计算文件数量。 这里我们要计算 `.txt` 文件。
+```
+# find .
-name "*.txt" | wc -l +7 +``` + +**示例-10 :** 使用echo命令和wc命令统计当前目录中的所有文件。 `4`表示当前目录中的文件数量。 +``` +# echo * | wc +1 4 39 +``` + +**示例-11 :** 通过使用echo命令和wc命令来统计当前目录中的所有目录。 `1`表示当前目录中的目录数量。 +``` +# echo comic/ published/ sources/ translated/ | wc +1 1 6 +``` + +**示例-12 :** 通过使用echo命令和wc命令来统计当前目录中的所有文件和目录。 `5`表示当前目录中的目录和文件的数量。 +``` +# echo * | wc +1 5 44 +``` + +**示例-13 :** 统计系统(整个系统)中的文件数。 +``` +# find / -type f | wc -l +69769 +``` + +**示例-14 :** 统计系统(整个系统)中的文件夹数。 +``` +# find / -type d | wc -l +8819 +``` + +**示例-15 :** 运行以下命令来计算系统(整个系统)中的文件,文件夹,硬链接和符号链接数。 +``` +# find / -type d -exec echo dirs \; -o -type l -exec echo symlinks \; -o -type f -links +1 -exec echo hardlinks \; -o -type f -exec echo files \; | sort | uniq -c + 8779 dirs + 69343 files + 20 hardlinks + 11646 symlinks +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/how-to-count-the-number-of-files-and-folders-directories-in-linux/ + +作者:[Magesh Maruthamuthu][a] +译者:[Flowsnow](https://github.com/Flowsnow) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.2daygeek.com/author/magesh/ +[1]:https://www.2daygeek.com/empty-a-file-delete-contents-lines-from-a-file-remove-matching-string-from-a-file-remove-empty-blank-lines-from-a-file/ From 9b388b5d28d4b87d86d00491bc139de2b7a39032 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 15 Jan 2018 22:46:52 +0800 Subject: [PATCH 024/226] PRF&PUB:20170916 How To Auto Logout Inactive Users After A Period Of Time In Linux.md @lujun9972 --- ...e Users After A Period Of Time In Linux.md | 46 +++++++++++++------ 1 file changed, 31 insertions(+), 15 deletions(-) rename {translated/tech => published}/20170916 How To Auto Logout Inactive Users After A Period Of Time In Linux.md (78%) diff --git a/translated/tech/20170916 How To Auto Logout Inactive Users After A Period Of Time In Linux.md b/published/20170916 How 
To Auto Logout Inactive Users After A Period Of Time In Linux.md similarity index 78% rename from translated/tech/20170916 How To Auto Logout Inactive Users After A Period Of Time In Linux.md rename to published/20170916 How To Auto Logout Inactive Users After A Period Of Time In Linux.md index 10decaada3..94bc84b462 100644 --- a/translated/tech/20170916 How To Auto Logout Inactive Users After A Period Of Time In Linux.md +++ b/published/20170916 How To Auto Logout Inactive Users After A Period Of Time In Linux.md @@ -1,7 +1,7 @@ 如何在 Linux 上让一段时间不活动的用户自动登出 ====== -![](https://www.ostechnix.com/wp-content/uploads/2017/09/logout-720x340.jpg) +![](https://www.ostechnix.com/wp-content/uploads/2017/09/logout-720x340.jpg) 让我们想象这么一个场景。你有一台服务器经常被网络中各系统的很多个用户访问。有可能出现某些用户忘记登出会话让会话保持会话处于连接状态。我们都知道留下一个处于连接状态的用户会话是一件多么危险的事情。有些用户可能会借此故意做一些损坏系统的事情。而你,作为一名系统管理员,会去每个系统上都检查一遍用户是否有登出吗?其实这完全没必要的。而且若网络中有成百上千台机器,这也太耗时了。不过,你可以让用户在本机或 SSH 会话上超过一定时间不活跃的情况下自动登出。本教程就将教你如何在类 Unix 系统上实现这一点。一点都不难。跟我做。 @@ -11,32 +11,40 @@ #### 方法 1: -编辑 **~/.bashrc** 或 **~/.bash_profile** 文件: +编辑 `~/.bashrc` 或 `~/.bash_profile` 文件: + ``` $ vi ~/.bashrc ``` + 或, + ``` $ vi ~/.bash_profile ``` -将下面行加入其中。 +将下面行加入其中: + ``` TMOUT=100 ``` -这回让用户在停止动作 100 秒后自动登出。你可以根据需要定义这个值。保存并关闭文件。 +这会让用户在停止动作 100 秒后自动登出。你可以根据需要定义这个值。保存并关闭文件。 运行下面命令让更改生效: + ``` $ source ~/.bashrc ``` + 或, + ``` $ source ~/.bash_profile ``` 现在让会话闲置 100 秒。100 秒不活动后,你会看到下面这段信息,并且用户会自动退出会话。 + ``` timed out waiting for input: auto-logout Connection to 192.168.43.2 closed. @@ -44,13 +52,16 @@ Connection to 192.168.43.2 closed. 
该设置可以轻易地被用户所修改。因为,`~/.bashrc` 文件被用户自己所拥有。 -要修改或者删除超时设置,只需要删掉上面添加的行然后执行 "source ~/.bashrc" 命令让修改生效。 +要修改或者删除超时设置,只需要删掉上面添加的行然后执行 `source ~/.bashrc` 命令让修改生效。 + +此外,用户也可以运行下面命令来禁止超时: -此啊玩 i,用户也可以运行下面命令来禁止超时: ``` $ export TMOUT=0 ``` + 或, + ``` $ unset TMOUT ``` @@ -59,14 +70,16 @@ $ unset TMOUT #### 方法 2: -以 root 用户登陆 +以 root 用户登录。 创建一个名为 `autologout.sh` 的新文件。 + ``` # vi /etc/profile.d/autologout.sh ``` 加入下面内容: + ``` TMOUT=100 readonly TMOUT @@ -76,55 +89,58 @@ export TMOUT 保存并退出该文件。 为它添加可执行权限: + ``` # chmod +x /etc/profile.d/autologout.sh ``` 现在,登出或者重启系统。非活动用户就会在 100 秒后自动登出了。普通用户即使想保留会话连接但也无法修改该配置了。他们会在 100 秒后强制退出。 -这两种方法对本地会话和远程会话都适用(即本地登陆的用户和远程系统上通过 SSH 登陆的用户)。下面让我们来看看如何实现只自动登出非活动的 SSH 会话,而不自动登出本地会话。 +这两种方法对本地会话和远程会话都适用(即本地登录的用户和远程系统上通过 SSH 登录的用户)。下面让我们来看看如何实现只自动登出非活动的 SSH 会话,而不自动登出本地会话。 #### 方法 3: -这种方法,我们智慧让 SSH 会话用户在一段时间不活动后自动登出。 +这种方法,我们只会让 SSH 会话用户在一段时间不活动后自动登出。 编辑 `/etc/ssh/sshd_config` 文件: + ``` $ sudo vi /etc/ssh/sshd_config ``` 添加/修改下面行: + ``` ClientAliveInterval 100 ClientAliveCountMax 0 ``` 保存并退出该文件。重启 sshd 服务让改动生效。 + ``` $ sudo systemctl restart sshd ``` -现在,在远程系统通过 ssh 登陆该系统。100 秒后,ssh 会话就会自动关闭了,你也会看到下面消息: +现在,在远程系统通过 ssh 登录该系统。100 秒后,ssh 会话就会自动关闭了,你也会看到下面消息: + ``` $ Connection to 192.168.43.2 closed by remote host. Connection to 192.168.43.2 closed. ``` -现在,任何人从远程系统通过 SSH 登陆本系统,都会在 100 秒不活动后自动登出了。 +现在,任何人从远程系统通过 SSH 登录本系统,都会在 100 秒不活动后自动登出了。 -希望本文能对你有所帮助。我马上还会写另一篇实用指南。如果你觉得我们的指南有用,请在您的社交网络上分享,支持 OSTechNix! +希望本文能对你有所帮助。我马上还会写另一篇实用指南。如果你觉得我们的指南有用,请在您的社交网络上分享,支持 我们! 祝您好运! 
- - -------------------------------------------------------------------------------- via: https://www.ostechnix.com/auto-logout-inactive-users-period-time-linux/ 作者:[SK][a] 译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 2636b5c3ab92c8ab5f804c4ff96625d69332ce37 Mon Sep 17 00:00:00 2001 From: Flowsnow Date: Mon, 15 Jan 2018 23:09:25 +0800 Subject: [PATCH 025/226] =?UTF-8?q?=E7=94=B3=E8=AF=B7=E7=BF=BB=E8=AF=91-20?= =?UTF-8?q?180105=20Ansible-=20the=20Automation=20Framework=20That=20Think?= =?UTF-8?q?s=20Like=20a=20Sysadmin.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...le- the Automation Framework That Thinks Like a Sysadmin.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20180105 Ansible- the Automation Framework That Thinks Like a Sysadmin.md b/sources/tech/20180105 Ansible- the Automation Framework That Thinks Like a Sysadmin.md index 8e0a970f7e..c6ed399cfd 100644 --- a/sources/tech/20180105 Ansible- the Automation Framework That Thinks Like a Sysadmin.md +++ b/sources/tech/20180105 Ansible- the Automation Framework That Thinks Like a Sysadmin.md @@ -1,3 +1,5 @@ +translating by Flowsnow + Ansible: the Automation Framework That Thinks Like a Sysadmin ====== @@ -185,7 +187,6 @@ You should see the results of the uptime command for each host in the webservers In a future article, I plan start to dig in to Ansible's ability to manage the remote computers. I'll look at various modules and how you can use the ad-hoc mode to accomplish in a few keystrokes what would take a long time to handle individually on the command line. If you didn't get the results you expected from the sample Ansible commands above, take this time to make sure authentication is working. 
Check out [the Ansible docs][1] for more help if you get stuck. - -------------------------------------------------------------------------------- via: http://www.linuxjournal.com/content/ansible-automation-framework-thinks-sysadmin From d1d2c717b0e4ab55f3c3a9aca12b27eabca48841 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 15 Jan 2018 23:11:25 +0800 Subject: [PATCH 026/226] PRF&PUB:20171012 Install and Use YouTube-DL on Ubuntu 16.04.md @lujun9972 --- ...tall and Use YouTube-DL on Ubuntu 16.04.md | 35 +++++++++---------- 1 file changed, 17 insertions(+), 18 deletions(-) rename {translated/tech => published}/20171012 Install and Use YouTube-DL on Ubuntu 16.04.md (76%) diff --git a/translated/tech/20171012 Install and Use YouTube-DL on Ubuntu 16.04.md b/published/20171012 Install and Use YouTube-DL on Ubuntu 16.04.md similarity index 76% rename from translated/tech/20171012 Install and Use YouTube-DL on Ubuntu 16.04.md rename to published/20171012 Install and Use YouTube-DL on Ubuntu 16.04.md index a40a1194d4..13c4dc78da 100644 --- a/translated/tech/20171012 Install and Use YouTube-DL on Ubuntu 16.04.md +++ b/published/20171012 Install and Use YouTube-DL on Ubuntu 16.04.md @@ -1,14 +1,14 @@ 在 Ubuntu 16.04 上安装并使用 YouTube-DL ====== -Youtube-dl 是一个免费而开源的命令行视频下载工具,可以用来从 Youtube 等类似的网站上下载视频,目前它支持的网站除了 Youtube 还有 Facebook,Dailymotion,Google Video,Yahoo 等等。它构架于 pygtk 之上,需要 Python 的支持来运行。它支持很多操作系统,包括 Windows,Mac 以及 Unix。Youtube-dl 还有断点续传,下载整个频道或者整个播放清单中的视频,添加自定义的标题,代理,等等其他功能。 +Youtube-dl 是一个自由开源的命令行视频下载工具,可以用来从 Youtube 等类似的网站上下载视频,目前它支持的网站除了 Youtube 还有 Facebook、Dailymotion、Google Video、Yahoo 等等。它构架于 pygtk 之上,需要 Python 的支持来运行。它支持很多操作系统,包括 Windows、Mac 以及 Unix。Youtube-dl 还有断点续传、下载整个频道或者整个播放清单中的视频、添加自定义的标题、代理等等其他功能。 -本文中,我们将来学习如何在 Ubuntu16.04 上安装并使用 Youtube-dl 和 Youtube-dlg。我们还会学习如何以不同质量,不同格式来下载 Youtube 中的视频。 +本文中,我们将来学习如何在 Ubuntu 16.04 上安装并使用 Youtube-dl 和 Youtube-dlg。我们还会学习如何以不同质量,不同格式来下载 Youtube 中的视频。 ### 前置需求 - * 一台运行 Ubuntu 16.04 的服务器。 - * 非 root 用户但拥有 sudo 特权。 +* 一台运行 Ubuntu 
16.04 的服务器。 +* 非 root 用户但拥有 sudo 特权。 让我们首先用下面命令升级系统到最新版: @@ -21,37 +21,37 @@ sudo apt-get upgrade -y ### 安装 Youtube-dl -默认情况下,Youtube-dl 并不在 Ubuntu-16.04 仓库中。你需要从官网上来下载它。使用 curl 命令可以进行下载: +默认情况下,Youtube-dl 并不在 Ubuntu-16.04 仓库中。你需要从官网上来下载它。使用 `curl` 命令可以进行下载: -首先,使用下面命令安装 curl: +首先,使用下面命令安装 `curl`: ``` sudo apt-get install curl -y ``` -然后,下载 youtube-dl 的二进制包: +然后,下载 `youtube-dl` 的二进制包: ``` curl -L https://yt-dl.org/latest/youtube-dl -o /usr/bin/youtube-dl ``` -接着,用下面命令更改 youtube-dl 二进制包的权限: +接着,用下面命令更改 `youtube-dl` 二进制包的权限: ``` sudo chmod 755 /usr/bin/youtube-dl ``` -youtube-dl 有算是安装好了,现在可以进行下一步了。 +`youtube-dl` 算是安装好了,现在可以进行下一步了。 ### 使用 Youtube-dl -运行下面命令会列出 youtube-dl 的所有可选项: +运行下面命令会列出 `youtube-dl` 的所有可选项: ``` youtube-dl --h ``` -Youtube-dl 支持多种视频格式,像 Mp4,WebM,3gp,以及 FLV 都支持。你可以使用下面命令列出指定视频所支持的所有格式: +`youtube-dl` 支持多种视频格式,像 Mp4,WebM,3gp,以及 FLV 都支持。你可以使用下面命令列出指定视频所支持的所有格式: ``` youtube-dl -F https://www.youtube.com/watch?v=j_JgXJ-apXs @@ -94,6 +94,7 @@ youtube-dl -f 18 https://www.youtube.com/watch?v=j_JgXJ-apXs ``` 该命令会下载 640x360 分辨率的 mp4 格式的视频: + ``` [youtube] j_JgXJ-apXs: Downloading webpage [youtube] j_JgXJ-apXs: Downloading video info webpage @@ -101,7 +102,6 @@ youtube-dl -f 18 https://www.youtube.com/watch?v=j_JgXJ-apXs [youtube] j_JgXJ-apXs: Downloading MPD manifest [download] Destination: B.A. 
PASS 2 Trailer no 2 _ Filmybox-j_JgXJ-apXs.mp4 [download] 100% of 6.90MiB in 00:47 - ``` 如果你想以 mp3 音频的格式下载 Youtube 视频,也可以做到: @@ -122,7 +122,7 @@ youtube-dl -citw https://www.youtube.com/channel/UCatfiM69M9ZnNhOzy0jZ41A youtube-dl --proxy http://proxy-ip:port https://www.youtube.com/watch?v=j_JgXJ-apXs ``` -若想一条命令下载多个 Youtube 视频,那么首先把所有要下载的 Youtube 视频 URL 存在一个文件中(假设这个文件叫 youtube-list.txt),然后运行下面命令: +若想一条命令下载多个 Youtube 视频,那么首先把所有要下载的 Youtube 视频 URL 存在一个文件中(假设这个文件叫 `youtube-list.txt`),然后运行下面命令: ``` youtube-dl -a youtube-list.txt @@ -130,7 +130,7 @@ youtube-dl -a youtube-list.txt ### 安装 Youtube-dl GUI -若你想要图形化的界面,那么 youtube-dlg 是你最好的选择。youtube-dlg 是一款由 wxPython 所写的免费而开源的 youtube-dl 界面。 +若你想要图形化的界面,那么 `youtube-dlg` 是你最好的选择。`youtube-dlg` 是一款由 wxPython 所写的免费而开源的 `youtube-dl` 界面。 该工具默认也不在 Ubuntu 16.04 仓库中。因此你需要为它添加 PPA。 @@ -138,14 +138,14 @@ youtube-dl -a youtube-list.txt sudo add-apt-repository ppa:nilarimogard/webupd8 ``` -下一步,更新软件包仓库并安装 youtube-dlg: +下一步,更新软件包仓库并安装 `youtube-dlg`: ``` sudo apt-get update -y sudo apt-get install youtube-dlg -y ``` -安装好 Youtube-dl 后,就能在 `Unity Dash` 中启动它了: +安装好 Youtube-dl 后,就能在 Unity Dash 中启动它了: [![][2]][3] @@ -157,14 +157,13 @@ sudo apt-get install youtube-dlg -y 恭喜你!你已经成功地在 Ubuntu 16.04 服务器上安装好了 youtube-dl 和 youtube-dlg。你可以很方便地从 Youtube 及任何 youtube-dl 支持的网站上以任何格式和任何大小下载视频了。 - -------------------------------------------------------------------------------- via: https://www.howtoforge.com/tutorial/install-and-use-youtube-dl-on-ubuntu-1604/ 作者:[Hitesh Jethva][a] 译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 1f179ce0ea189e7523f1f3d66440c357a5cea0b5 Mon Sep 17 00:00:00 2001 From: wenwensnow <963555237@qq.com> Date: Tue, 16 Jan 2018 00:03:43 +0800 Subject: [PATCH 027/226] Create 20180102 HTTP errors in WordPress.md --- .../tech/20180102 HTTP errors in WordPress.md | 189 
++++++++++++++++++
 1 file changed, 189 insertions(+)
 create mode 100644 translated/tech/20180102 HTTP errors in WordPress.md

diff --git a/translated/tech/20180102 HTTP errors in WordPress.md b/translated/tech/20180102 HTTP errors in WordPress.md
new file mode 100644
index 0000000000..5acb3613be
--- /dev/null
+++ b/translated/tech/20180102 HTTP errors in WordPress.md
@@ -0,0 +1,189 @@
+WordPress 中的 HTTP 错误
+======
+![http error wordpress][1]
+
+我们会向你介绍如何在 Linux VPS 上修复 WordPress 中的 HTTP 错误。 下面列出了 WordPress 用户遇到的最常见的 HTTP 错误,我们的建议侧重于如何发现错误原因以及解决方法。
+
+
+
+
+### 1\. 修复在上传图像时出现的 HTTP 错误
+
+如果你在基于 WordPress 的网页中上传图像时出现错误,这也许是服务器上的 PHP 配置造成的,例如内存限制(memory limit)过低或者其他配置问题。
+
+
+用如下命令查找 PHP 配置文件:
+
+
+```
+#php -i | grep php.ini
+Configuration File (php.ini) Path => /etc
+Loaded Configuration File => /etc/php.ini
+```
+
+根据输出结果,PHP 配置文件位于 '/etc' 文件夹下。编辑 '/etc/php.ini' 文件,找出下列行,并按照下面的例子修改其中相对应的值:
+
+
+```
+vi /etc/php.ini
+```
+```
+upload_max_filesize = 64M
+post_max_size = 32M
+max_execution_time = 300
+max_input_time = 300
+memory_limit = 128M
+```
+
+当然,如果你不习惯使用 vi 文本编辑器,你可以选用自己喜欢的。
+
+
+不要忘记重启你的网页服务器来让改动生效。
+
+
+如果你安装的网页服务器是 Apache,你也可以使用 .htaccess 文件。首先,找到 .htaccess 文件。它位于 WordPress 安装路径的根文件夹下。如果没有找到 .htaccess 文件,需要自己手动创建一个,然后加入如下内容:
+
+
+```
+vi /www/html/path_to_wordpress/.htaccess
+```
+```
+php_value upload_max_filesize 64M
+php_value post_max_size 32M
+php_value max_execution_time 180
+php_value max_input_time 180
+
+# BEGIN WordPress
+
+RewriteEngine On
+RewriteBase /
+RewriteRule ^index\.php$ - [L]
+RewriteCond %{REQUEST_FILENAME} !-f
+RewriteCond %{REQUEST_FILENAME} !-d
+RewriteRule .
/index.php [L]
+
+# END WordPress
+```
+如果你使用的网页服务器是 nginx,请在 WordPress 实例对应的 nginx 服务端配置块(server block)中进行配置。详细配置和下面的例子相似:
+
+```
+server {
+
+listen 80;
+client_max_body_size 128m;
+client_body_timeout 300;
+
+server_name your-domain.com www.your-domain.com;
+
+root /var/www/html/wordpress;
+index index.php;
+
+location = /favicon.ico {
+log_not_found off;
+access_log off;
+}
+
+location = /robots.txt {
+allow all;
+log_not_found off;
+access_log off;
+}
+
+location / {
+try_files $uri $uri/ /index.php?$args;
+}
+
+location ~ \.php$ {
+include fastcgi_params;
+fastcgi_pass 127.0.0.1:9000;
+fastcgi_index index.php;
+fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
+}
+
+location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
+expires max;
+log_not_found off;
+}
+}
+```
+
+根据自己的 PHP 配置,你可能需要将 'fastcgi_pass 127.0.0.1:9000;' 替换成类似 'fastcgi_pass unix:/var/run/php7-fpm.sock;' 的形式(依照实际连接方式)。
+
+
+重启 nginx 服务来使改动生效。
+
+
+
+### 2\. 修复因为不恰当的文件权限而产生的 HTTP 错误
+
+如果你在 WordPress 中遇到意外错误,也许是因为文件属主/权限不正确导致的,需要给 WordPress 文件和文件夹设置正确的属主:
+
+```
+chown www-data:www-data -R /var/www/html/path_to_wordpress/
+```
+
+将 'www-data' 替换成实际的网页服务器用户,将 '/var/www/html/path_to_wordpress' 换成 WordPress 的实际安装路径。
+
+
+### 3\. 修复因为内存不足而产生的 HTTP 错误
+
+你可以通过在 wp-config.php 中添加如下内容来设置 PHP 的最大内存限制:
+
+```
+ define('WP_MEMORY_LIMIT', '128MB');
+```
+
+### 4\. 修复因为 PHP.INI 文件错误配置而产生的 HTTP 错误
+
+编辑 PHP 主配置文件,然后找到 'cgi.fix_pathinfo' 这一行。 这一行内容默认情况下是被注释掉的,默认值为 1。取消这一行的注释(删掉这一行最前面的分号),然后将 1 改为 0。同时需要修改 'date.timezone' 这一 PHP 设置,再次编辑 PHP 配置文件并将这一选项改成 'date.timezone = US/Central'(或者将等号后内容改为你所在的时区):
+
+```
+ vi /etc/php.ini
+```
+```
+ cgi.fix_pathinfo=0
+ date.timezone = America/New_York
+```
+
+### 5. 修复因为 Apache mod_security 模块而产生的 HTTP 错误
+
+如果你在使用 Apache mod_security 模块,它可能也会引起问题。试着在 .htaccess 文件中加入如下内容来禁用它,以确认问题是否由它引起:
+
+```
+
+SecFilterEngine Off
+SecFilterScanPOST Off
+
+```
+
+### 6.
修复因为有问题的插件/主题而产生的 HTTP 错误
+
+一些插件或主题也会导致 HTTP 错误以及其他问题。你可以先禁用有问题的插件/主题,或暂时禁用所有 WordPress 插件。如果你有 phpMyAdmin,可以使用它来禁用所有插件:在其中找到 wp_options 这一表,在 option_name 这一列中找到 'active_plugins' 这一行,然后将 option_value 的值改为 a:0:{}
+
+
+或者用以下命令通过 SSH 重命名插件所在文件夹:
+
+```
+ mv /www/html/path_to_wordpress/wp-content/plugins /www/html/path_to_wordpress/wp-content/plugins.old
+```
+
+通常情况下,HTTP 错误会被记录在网页服务器的日志文件中,所以排查错误时一个很好的切入点就是查看服务器日志。
+
+
+如果你在使用 WordPress VPS 主机服务的话,你不需要自己去修复 WordPress 中出现的 HTTP 错误。你只要让你的 Linux 管理员来处理它们,他们 24×7 全天在线,并且会立刻开始着手解决你的问题。
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.rosehosting.com/blog/http-error-wordpress/
+
+作者:[rosehosting][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.rosehosting.com
+[1]:https://www.rosehosting.com/blog/wp-content/uploads/2018/01/http-error-wordpress.jpg
+[2]:https://www.rosehosting.com/wordpress-hosting.html

From 6355d4d86f0c5341db883d11bf0c9df4a3b7dd59 Mon Sep 17 00:00:00 2001
From: wenwensnow <963555237@qq.com>
Date: Tue, 16 Jan 2018 00:06:29 +0800
Subject: [PATCH 028/226] Delete 20180102 HTTP errors in WordPress.md

---
 .../tech/20180102 HTTP errors in WordPress.md | 166 ------------------
 1 file changed, 166 deletions(-)
 delete mode 100644 sources/tech/20180102 HTTP errors in WordPress.md

diff --git a/sources/tech/20180102 HTTP errors in WordPress.md b/sources/tech/20180102 HTTP errors in WordPress.md
deleted file mode 100644
index 79c92c24b2..0000000000
--- a/sources/tech/20180102 HTTP errors in WordPress.md
+++ /dev/null
@@ -1,166 +0,0 @@
-translating by wenwensnow
-HTTP errors in WordPress
-======
-![http error wordpress][1]
-
-We'll show you, how to fix HTTP errors in WordPress, on a Linux VPS. Listed below are the most common HTTP errors in WordPress, experienced by WordPress users, and our suggestions on how to investigate and fix them.
- -### 1\. Fix HTTP error in WordPress when uploading images - -If you get an error when uploading an image to your WordPress based site, it may be due to PHP configuration settings on your server, like insufficient memory limit or so. - -Locate the php configuration file using the following command: -``` -#php -i | grep php.ini -Configuration File (php.ini) Path => /etc -Loaded Configuration File => /etc/php.ini -``` - -According to the output, the PHP configuration file is located in the '/etc' directory, so edit the '/etc/php.ini' file, find the lines below and modify them with these values: -``` -vi /etc/php.ini -``` -``` -upload_max_filesize = 64M -post_max_size = 32M -max_execution_time = 300 -max_input_time 300 -memory_limit = 128M -``` - -Of course if you are unfamiliar with the vi text editor, use your favorite one. - -Do not forget to restart your web server for the changes to take effect. - -If the web server installed on your server is Apache, you may use .htaccess. First, locate the .htaccess file. It should be in the document root directory of the WordPress installation. If there is no .htaccess file, create one, then add the following content: -``` -vi /www/html/path_to_wordpress/.htaccess -``` -``` -php_value upload_max_filesize 64M -php_value post_max_size 32M -php_value max_execution_time 180 -php_value max_input_time 180 - -# BEGIN WordPress - -RewriteEngine On -RewriteBase / -RewriteRule ^index\.php$ - [L] -RewriteCond %{REQUEST_FILENAME} !-f -RewriteCond %{REQUEST_FILENAME} !-d -RewriteRule . /index.php [L] - -# END WordPress -``` - -If you are using nginx, configure the nginx server block about your WordPress instance. 
It should look something like the example below: -``` -server { - -listen 80; -client_max_body_size 128m; -client_body_timeout 300; - -server_name your-domain.com www.your-domain.com; - -root /var/www/html/wordpress; -index index.php; - -location = /favicon.ico { -log_not_found off; -access_log off; -} - -location = /robots.txt { -allow all; -log_not_found off; -access_log off; -} - -location / { -try_files $uri $uri/ /index.php?$args; -} - -location ~ \.php$ { -include fastcgi_params; -fastcgi_pass 127.0.0.1:9000; -fastcgi_index index.php; -fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; -} - -location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ { -expires max; -log_not_found off; -} -} -``` - -Depending on the PHP configuration, you may need to replace 'fastcgi_pass 127.0.0.1:9000;' with 'fastcgi_pass unix:/var/run/php7-fpm.sock;' or so. - -Restart nginx service for the changes to take effect. - -### 2\. Fix HTTP error in WordPress due to incorrect file permissions - -If you get an unexpected HTTP error in WordPress, it may be due to incorrect file permissions, so set a proper ownership of your WordPress files and directories: -``` -chown www-data:www-data -R /var/www/html/path_to_wordpress/ -``` - -Replace 'www-data' with the actual web server user, and '/var/www/html/path_to_wordpress' with the actual path of the WordPress installation. - -### 3\. Fix HTTP error in WordPress due to memory limit - -The PHP memory_limit value can be set by adding this to your wp-config.php file: -``` - define('WP_MEMORY_LIMIT', '128MB'); -``` - -### 4\. Fix HTTP error in WordPress due to misconfiguration of PHP.INI - -Edit the main PHP configuration file and locate the line with the content 'cgi.fix_pathinfo' . This will be commented by default and set to 1. Uncomment the line (remove the semi-colon) and change the value from 1 to 0. 
You may also want to change the 'date.timezone' PHP setting, so edit the PHP configuration file and modify this setting to 'date.timezone = America/New_York' (or whatever your timezone is). -``` - vi /etc/php.ini -``` -``` - cgi.fix_pathinfo=0 - date.timezone = America/New_York -``` - -### 5. Fix HTTP error in WordPress due to Apache mod_security module - -If you are using the Apache mod_security module, it might be causing problems. Try disabling it to see if that is the problem by adding the following lines to .htaccess: -``` -<IfModule mod_security.c> -SecFilterEngine Off -SecFilterScanPOST Off -</IfModule> -``` - -### 6. Fix HTTP error in WordPress due to problematic plugin or theme - -Some plugins and/or themes may cause HTTP errors and other problems in WordPress. You can try to disable the problematic plugins/themes, or temporarily disable all the plugins. If you have phpMyAdmin, use it to deactivate all plugins: -Locate the wp_options table, find the 'active_plugins' row under the option_name column (field), and change the option_value field to: a:0:{} - -Or, temporarily rename your plugins directory via SSH using the following command: -``` - mv /www/html/path_to_wordpress/wp-content/plugins /www/html/path_to_wordpress/wp-content/plugins.old -``` - -In general, HTTP errors are logged in the web server log files, so a good starting point is to check the web server error log on your server. - -You don't have to fix HTTP errors in WordPress yourself if you use one of our [WordPress VPS Hosting][2] services, in which case you can simply ask our expert Linux admins to **fix HTTP errors in WordPress** for you. They are available 24×7 and will take care of your request immediately.
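One detail worth sanity-checking in the values used throughout this article: PHP rejects any upload larger than post_max_size no matter how high upload_max_filesize is, so the three limits should nest (upload_max_filesize ≤ post_max_size ≤ memory_limit). A small Python sketch (the helper name and the hard-coded values are illustrative, not part of WordPress or PHP) can catch a mismatch:

```python
def php_size_to_bytes(value):
    """Convert a php.ini shorthand size such as '64M' or '300' to bytes."""
    value = value.strip()
    units = {"k": 1024, "m": 1024 ** 2, "g": 1024 ** 3}
    if value and value[-1].lower() in units:
        return int(value[:-1]) * units[value[-1].lower()]
    return int(value)

# Values from the php.ini example earlier in this article.
limits = {
    "upload_max_filesize": "64M",
    "post_max_size": "32M",
    "memory_limit": "128M",
}

upload = php_size_to_bytes(limits["upload_max_filesize"])
post = php_size_to_bytes(limits["post_max_size"])
memory = php_size_to_bytes(limits["memory_limit"])

if not (upload <= post <= memory):
    print("warning: these limits do not nest; large uploads may still fail")
```

Run against the article's own values this prints the warning: a 64M upload cannot pass through a 32M post_max_size, so if large uploads still fail after applying the fix, raise post_max_size as well.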
- -------------------------------------------------------------------------------- - -via: https://www.rosehosting.com/blog/http-error-wordpress/ - -作者:[rosehosting][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.rosehosting.com -[1]:https://www.rosehosting.com/blog/wp-content/uploads/2018/01/http-error-wordpress.jpg -[2]:https://www.rosehosting.com/wordpress-hosting.html From 1d02f79e2a8ce465c3743b85b7425957ce407272 Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 16 Jan 2018 09:00:28 +0800 Subject: [PATCH 029/226] translated --- ... Default Settings With A Single Command.md | 61 ------------------- ... Default Settings With A Single Command.md | 59 ++++++++++++++++++ 2 files changed, 59 insertions(+), 61 deletions(-) delete mode 100644 sources/tech/20171002 Reset Linux Desktop To Default Settings With A Single Command.md create mode 100644 translated/tech/20171002 Reset Linux Desktop To Default Settings With A Single Command.md diff --git a/sources/tech/20171002 Reset Linux Desktop To Default Settings With A Single Command.md b/sources/tech/20171002 Reset Linux Desktop To Default Settings With A Single Command.md deleted file mode 100644 index a5f819da51..0000000000 --- a/sources/tech/20171002 Reset Linux Desktop To Default Settings With A Single Command.md +++ /dev/null @@ -1,61 +0,0 @@ -translating---geekpi - -Reset Linux Desktop To Default Settings With A Single Command -====== -![](https://www.ostechnix.com/wp-content/uploads/2017/10/Reset-Linux-Desktop-To-Default-Settings-720x340.jpg) - -A while ago, we shared an article about [**Resetter**][1] - a useful piece of software which is used to reset Ubuntu to factory defaults within a few minutes. Using Resetter, anyone can easily reset their Ubuntu system to the state it was in when it was first installed. Today, I stumbled upon a similar thing.
No, it's not an application, but a single-line command that resets your Linux desktop settings, tweaks, and customizations to their default state. - -### Reset Linux Desktop To Default Settings - -This command will reset Ubuntu Unity, Gnome and MATE desktops to the default state. I tested this command on both my **Arch Linux MATE** desktop and **Ubuntu 16.04 Unity** desktop. It worked on both systems. I hope it will work on other desktops as well. I don't have any Linux desktop with GNOME as of this writing, so I couldn't confirm it. But, I believe it will work on the Gnome DE as well. - -**A word of caution:** Please be mindful that this command will reset all customizations and tweaks you made in your system, including the pinned applications in the Unity launcher or Dock, desktop panel applets, desktop indicators, your system fonts, GTK themes, icon themes, monitor resolution, keyboard shortcuts, window button placement, menu and launcher behaviour etc. - -The good thing is that it will only reset the desktop settings. It won't affect other applications that don't use dconf. Also, it won't delete your personal data. - -Now, let us do this. To reset Ubuntu Unity or any other Linux desktop with GNOME/MATE DEs to its default settings, run: -``` -dconf reset -f / -``` - -This is my Ubuntu 16.04 LTS desktop before running the above command: - -[![][2]][3] - -As you see, I have changed the desktop wallpaper and themes. - -This is how my Ubuntu 16.04 LTS desktop looks after running that command: - -[![][2]][4] - -See? Now, my Ubuntu desktop has gone back to the factory settings. - -For more details about the "dconf" command, refer to its man page. -``` -man dconf -``` - -I personally prefer to use "Resetter" over the "dconf" command for this purpose, because Resetter provides more options to the users. The users can decide which applications to remove, which applications to keep, whether to keep the existing user account or create a new user, and more.
If you're too lazy to install Resetter, you can just use this "dconf" command to reset your Linux system to default settings within few minutes. - -And, that's all. Hope this helps. I will be soon here with another useful guide. Stay tuned! - -Cheers! - - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/reset-linux-desktop-default-settings-single-command/ - -作者:[Edwin Arteaga][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ostechnix.com -[1]:https://www.ostechnix.com/reset-ubuntu-factory-defaults/ -[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[3]:http://www.ostechnix.com/wp-content/uploads/2017/10/Before-resetting-Ubuntu-to-default-1.png () -[4]:http://www.ostechnix.com/wp-content/uploads/2017/10/After-resetting-Ubuntu-to-default-1.png () diff --git a/translated/tech/20171002 Reset Linux Desktop To Default Settings With A Single Command.md b/translated/tech/20171002 Reset Linux Desktop To Default Settings With A Single Command.md new file mode 100644 index 0000000000..d486a777de --- /dev/null +++ b/translated/tech/20171002 Reset Linux Desktop To Default Settings With A Single Command.md @@ -0,0 +1,59 @@ +使用一个命令重置 Linux 桌面到默认设置 +====== +![](https://www.ostechnix.com/wp-content/uploads/2017/10/Reset-Linux-Desktop-To-Default-Settings-720x340.jpg) + +前段时间,我们分享了一篇关于 [**Resetter**][1] 的文章 - 这是一个有用的软件,可以在几分钟内将 Ubuntu 重置为出厂默认设置。使用 Resetter,任何人都可以轻松地将 Ubuntu 重置为第一次安装时的状态。今天,我偶然发现了一个类似的东西。不,它不是一个应用程序,而是一个单行的命令来重置你的 Linux 桌面设置、调整和定制到默认状态。 + +### 将 Linux 桌面重置为默认设置 + +这个命令会将 Ubuntu Unity、Gnome 和 MATE 桌面重置为默认状态。我在我的 **Arch Linux MATE** 和 **Ubuntu 16.04 Unity** 上测试了这个命令。它可以在两个系统上工作。我希望它也能在其他桌面上运行。在写这篇文章的时候,我还没有安装 GNOME 的 Linux 桌面,因此我无法确认。但是,我相信它也可以在 Gnome 桌面环境中使用。 + +**一句忠告:**请注意,此命令将重置你在系统中所做的所有定制和调整,包括 Unity 启动器或 Dock 
中的固定应用程序、桌面小程序、桌面指示器、系统字体、GTK主题、图标主题、显示器分辨率、键盘快捷键、窗口按钮位置、菜单和启动器行为等。 + +好的是它只会重置桌面设置。它不会影响其他不使用 dconf 的程序。此外,它不会删除你的个人资料。 + +现在,让我们开始。要将 Ubuntu Unity 或其他带有 GNOME/MATE 环境的 Linux 桌面重置,运行下面的命令: +``` +dconf reset -f / +``` + +在运行上述命令之前,这是我的 Ubuntu 16.04 LTS 桌面: + +[![][2]][3] + +如你所见,我已经改变了桌面壁纸和主题。 + +这是运行该命令后,我的 Ubuntu 16.04 LTS 桌面的样子: + +[![][2]][4] + +看见了么?现在,我的 Ubuntu 桌面已经回到了出厂设置。 + +有关 “dconf” 命令的更多详细信息,请参阅手册页。 +``` +man dconf +``` + +在重置桌面上我个人更喜欢 “Resetter” 而不是 “dconf” 命令。因为,Resetter 给用户提供了更多的选择。用户可以决定删除哪些应用程序、保留哪些应用程序、是保留现有用户帐户还是创建新用户等等。如果你懒得安装 Resetter,你可以使用这个 “dconf” 命令在几分钟内将你的 Linux 系统重置为默认设置。 + +就是这样了。希望这个有帮助。我将很快发布另一篇有用的指导。敬请关注! + +干杯! + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/reset-linux-desktop-default-settings-single-command/ + +作者:[Edwin Arteaga][a] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com +[1]:https://www.ostechnix.com/reset-ubuntu-factory-defaults/ +[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[3]:http://www.ostechnix.com/wp-content/uploads/2017/10/Before-resetting-Ubuntu-to-default-1.png () +[4]:http://www.ostechnix.com/wp-content/uploads/2017/10/After-resetting-Ubuntu-to-default-1.png () From dc662e7b2971a0e948d326257a2bca6d0cf8735b Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 16 Jan 2018 09:11:07 +0800 Subject: [PATCH 030/226] translating --- .../20171004 How To Create A Video From PDF Files In Linux.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20171004 How To Create A Video From PDF Files In Linux.md b/sources/tech/20171004 How To Create A Video From PDF Files In Linux.md index 27aa32dc77..5ecf5da24e 100644 --- a/sources/tech/20171004 How To Create A Video From PDF Files In Linux.md +++ b/sources/tech/20171004 How To Create A Video From PDF 
Files In Linux.md @@ -1,3 +1,5 @@ +translating---geekpi + How To Create A Video From PDF Files In Linux ====== ![](https://www.ostechnix.com/wp-content/uploads/2017/10/Video-1-720x340.jpg) From 6f19905558eeeb24c121932cb5055d89622e93bd Mon Sep 17 00:00:00 2001 From: qhwdw Date: Tue, 16 Jan 2018 11:39:12 +0800 Subject: [PATCH 031/226] Translating by qhwdw --- ...p guide for creating Master Slave replication in MariaDB.md | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/sources/tech/20171112 Step by Step guide for creating Master Slave replication in MariaDB.md b/sources/tech/20171112 Step by Step guide for creating Master Slave replication in MariaDB.md index 922ef18040..98474cbe78 100644 --- a/sources/tech/20171112 Step by Step guide for creating Master Slave replication in MariaDB.md +++ b/sources/tech/20171112 Step by Step guide for creating Master Slave replication in MariaDB.md @@ -1,4 +1,4 @@ -Step by Step guide for creating Master Slave replication in MariaDB +Translating by qhwdw Step by Step guide for creating Master Slave replication in MariaDB ====== In our earlier tutorials,we have already learned [**to install & configure MariaDB**][1] & also [**learned some basic administration commands for managing MariaDB**][2]. We are now going to learn to setup a MASTER SLAVE replication for MariaDB server. @@ -169,7 +169,6 @@ You will see that the output shows the same value that we inserted on the master This concludes our tutorial, please send your queries/questions through the comment box below. 
- -------------------------------------------------------------------------------- via: http://linuxtechlab.com/creating-master-slave-replication-mariadb/ From 3b8c56a260607648244148466a578c9f7e15a049 Mon Sep 17 00:00:00 2001 From: qhwdw Date: Tue, 16 Jan 2018 11:57:01 +0800 Subject: [PATCH 032/226] Translated by qhwdw --- .../tech/20170628 Notes on BPF and eBPF.md | 152 ------------------ .../tech/20170628 Notes on BPF and eBPF.md | 152 ++++++++++++++++++ 2 files changed, 152 insertions(+), 152 deletions(-) delete mode 100644 sources/tech/20170628 Notes on BPF and eBPF.md create mode 100644 translated/tech/20170628 Notes on BPF and eBPF.md diff --git a/sources/tech/20170628 Notes on BPF and eBPF.md b/sources/tech/20170628 Notes on BPF and eBPF.md deleted file mode 100644 index 25a7456649..0000000000 --- a/sources/tech/20170628 Notes on BPF and eBPF.md +++ /dev/null @@ -1,152 +0,0 @@ -translating by qhwdw Notes on BPF & eBPF -============================================================ - -Today it was Papers We Love, my favorite meetup! Today [Suchakra Sharma][6]([@tuxology][7] on twitter/github) gave a GREAT talk about the original BPF paper and recent work in Linux on eBPF. It really made me want to go write eBPF programs! - -The paper is [The BSD Packet Filter: A New Architecture for User-level Packet Capture][8] - -I wanted to write some notes on the talk here because I thought it was super super good. - -To start, here are the [slides][9] and a [pdf][10]. The pdf is good because there are links at the end and in the PDF you can click the links. - -### what’s BPF? - -Before BPF, if you wanted to do packet filtering you had to copy all the packets into userspace and then filter them there (with “tap”). - -this had 2 problems: - -1. if you filter in userspace, it means you have to copy all the packets into userspace, copying data is expensive - -2. 
the filtering algorithms people were using were inefficient - -The solution to problem #1 seems sort of obvious, move the filtering logic into the kernel somehow. Okay. (though the details of how that's done aren't obvious, we'll talk about that in a second) - -But why were the filtering algorithms inefficient! Well!! - -If you run `tcpdump host foo` it actually runs a relatively complicated query, which you could represent with this tree: - -![](https://jvns.ca/images/bpf-1.png) - -Evaluating this tree is kind of expensive. So the first insight is that you can actually represent this tree in a simpler way, like this: - -![](https://jvns.ca/images/bpf-2.png) - -Then if you have `ether.type = IP` and `ip.src = foo` you automatically know that the packet matches `host foo`, you don't need to check anything else. So this data structure (they call it a "control flow graph" or "CFG") is a way better representation of the program you actually want to execute to check matches than the tree we started with. - -### How BPF works in the kernel - -The main important thing here is that packets are just arrays of bytes. BPF programs run on these arrays of bytes. They're not allowed to have loops but they  _can_  have smart stuff to figure out the length of the IP header (IPv6 & IPv4 are different lengths!) and then find the TCP port based on that length - -``` -x = ip_header_length -port = *(packet_start + x + port_offset) - -``` - -(it looks different from that but it's basically the same). There's a nice description of the virtual machine in the paper/slides so I won't explain it. - -When you run `tcpdump host foo` this is what happens, as far as I understand - -1. convert `host foo` into an efficient DAG of the rules - -2. convert that DAG into a BPF program (in BPF bytecode) for the BPF virtual machine - -3. Send the BPF bytecode to the Linux kernel, which verifies it - -4. compile the BPF bytecode program into native code.
For example [here's the JIT code for ARM][1] and for [x86][2] - -5. when packets come in, Linux runs the native code to decide if that packet should be filtered or not. It'll often run only 100-200 CPU instructions for each packet that needs to be processed, which is super fast! - -### the present: eBPF - -But BPF has been around for a long time! Now we live in the EXCITING FUTURE which is eBPF. I'd heard about eBPF a bunch before but I felt like this helped me put the pieces together a little better. (i wrote this [XDP & eBPF post][11] back in April when I was at netdev) - -some facts about eBPF: - -* eBPF programs have their own bytecode language, and are compiled from that bytecode language into native code in the kernel, just like BPF programs - -* eBPF programs run in the kernel - -* eBPF programs can't access arbitrary kernel memory. Instead the kernel provides functions to get at some restricted subset of things. - -* they  _can_  communicate with userspace programs through BPF maps - -* there's a `bpf` syscall as of Linux 3.18 - -### kprobes & eBPF - -You can pick a function (any function!) in the Linux kernel and execute a program that you write every time that function happens. This seems really amazing and magical. - -For example! There's this [BPF program called disksnoop][12] which tracks when you start/finish writing a block to disk. Here's a snippet from the code: - -``` -BPF_HASH(start, struct request *); -void trace_start(struct pt_regs *ctx, struct request *req) { - // stash start timestamp by request ptr - u64 ts = bpf_ktime_get_ns(); - start.update(&req, &ts); -} -...
-b.attach_kprobe(event="blk_start_request", fn_name="trace_start") -b.attach_kprobe(event="blk_mq_start_request", fn_name="trace_start") - -``` - -This basically declares a BPF hash (which the program uses to keep track of when the request starts / finishes), a function called `trace_start` which is going to be compiled into BPF bytecode, and attaches `trace_start` to the `blk_start_request` kernel function. - -This is all using the `bcc` framework which lets you write Python-ish programs that generate BPF code. You can find it (it has tons of example programs) at[https://github.com/iovisor/bcc][13] - -### uprobes & eBPF - -So I sort of knew you could attach eBPF programs to kernel functions, but I didn’t realize you could attach eBPF programs to userspace functions! That’s really exciting. Here’s [an example of counting malloc calls in Python using an eBPF program][14]. - -### things you can attach eBPF programs to - -* network cards, with XDP (which I wrote about a while back) - -* tc egress/ingress (in the network stack) - -* kprobes (any kernel function) - -* uprobes (any userspace function apparently ?? like in any C program with symbols.) - -* probes that were built for dtrace called “USDT probes” (like [these mysql probes][3]). Here’s an [example program using dtrace probes][4] - -* [the JVM][5] - -* tracepoints (not sure what that is yet) - -* seccomp / landlock security things - -* a bunch more things - -### this talk was super cool - -There are a bunch of great links in the slides and in [LINKS.md][15] in the iovisor repository. It is late now but soon I want to actually write my first eBPF program! 
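The start/finish bookkeeping that disksnoop does with its BPF hash can be sketched in plain Python to show the pattern (a sketch only; the real program is compiled to BPF bytecode and runs inside the kernel, and the request ids here are made up):

```python
import time

# userspace stand-in for BPF_HASH(start, struct request *):
# start timestamps keyed by request
start = {}

def trace_start(req):
    # stash start timestamp by request
    start[req] = time.monotonic_ns()

def trace_completion(req):
    # look up the stashed timestamp and compute how long the request took
    ts = start.pop(req, None)
    if ts is None:
        return None  # completion seen without a matching start
    return time.monotonic_ns() - ts

trace_start("req-1")
latency_ns = trace_completion("req-1")
```

The kernel version does roughly this, with `trace_start` attached via `attach_kprobe` so it fires on every real `blk_start_request` call.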
- --------------------------------------------------------------------------------- - -via: https://jvns.ca/blog/2017/06/28/notes-on-bpf---ebpf/ - -作者:[Julia Evans ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://jvns.ca/ -[1]:https://github.com/torvalds/linux/blob/v4.10/arch/arm/net/bpf_jit_32.c#L512 -[2]:https://github.com/torvalds/linux/blob/v3.18/arch/x86/net/bpf_jit_comp.c#L189 -[3]:https://dev.mysql.com/doc/refman/5.7/en/dba-dtrace-ref-query.html -[4]:https://github.com/iovisor/bcc/blob/master/examples/tracing/mysqld_query.py -[5]:http://blogs.microsoft.co.il/sasha/2016/03/31/probing-the-jvm-with-bpfbcc/ -[6]:http://suchakra.in/ -[7]:https://twitter.com/tuxology -[8]:http://www.vodun.org/papers/net-papers/van_jacobson_the_bpf_packet_filter.pdf -[9]:https://speakerdeck.com/tuxology/the-bsd-packet-filter -[10]:http://step.polymtl.ca/~suchakra/PWL-Jun28-MTL.pdf -[11]:https://jvns.ca/blog/2017/04/07/xdp-bpf-tutorial/ -[12]:https://github.com/iovisor/bcc/blob/0c8c179fc1283600887efa46fe428022efc4151b/examples/tracing/disksnoop.py -[13]:https://github.com/iovisor/bcc -[14]:https://github.com/iovisor/bcc/blob/00f662dbea87a071714913e5c7382687fef6a508/tests/lua/test_uprobes.lua -[15]:https://github.com/iovisor/bcc/blob/master/LINKS.md diff --git a/translated/tech/20170628 Notes on BPF and eBPF.md b/translated/tech/20170628 Notes on BPF and eBPF.md new file mode 100644 index 0000000000..b7fad29ba1 --- /dev/null +++ b/translated/tech/20170628 Notes on BPF and eBPF.md @@ -0,0 +1,152 @@ +关于 BPF 和 eBPF 的笔记 +============================================================ + +今天,我喜欢的 meetup 网站上有一篇我超爱的文章![Suchakra Sharma][6]([@tuxology][7] 在 twitter/github)的一篇非常棒的关于传统 BPF 和在 Linux 中最新加入的 eBPF 的讨论文章,正是它促使我想去写一个 eBPF 的程序! + +这篇文章就是 —— [BSD 包过滤器:一个新的用户级包捕获架构][8] + +我想在讨论的基础上去写一些笔记,因为,我觉得它超级棒! 
+ +这是 [幻灯片][9] 和一个 [pdf][10]。这个 pdf 非常好,结束的位置有一些链接,在 PDF 中你可以直接点击这个链接。 + +### 什么是 BPF? + +在 BPF 出现之前,如果你想去做包过滤,你必须拷贝所有进入用户空间的包,然后才能去过滤它们(使用 “tap”)。 + +这样做存在两个问题: + +1. 如果你在用户空间中过滤,意味着你将拷贝所有进入用户空间的包,拷贝数据的代价是很昂贵的。 + +2. 使用的过滤算法很低效 + +问题 #1 的解决方法似乎很明显,就是将过滤逻辑移到内核中。(虽然具体实现的细节并没有明确,我们将在稍后讨论) + +但是,为什么过滤算法会很低效? + +如果你运行 `tcpdump host foo`,它实际上运行了一个相当复杂的查询,用下图的这个树来描述它: + +![](https://jvns.ca/images/bpf-1.png) + +评估这个树有点复杂。因此,可以用一种更简单的方式来表示这个树,像这样: + +![](https://jvns.ca/images/bpf-2.png) + +然后,如果你设置 `ether.type = IP` 和  `ip.src = foo`,你必然明白匹配的包是 `host foo`,你也不用去检查任何其它的东西了。因此,这个数据结构(它们称为“控制流图” ,或者 “CFG”)是表示你真实希望去执行匹配检查的程序的最佳方法,而不是用前面的树。 + +### 为什么 BPF 要工作在内核中 + +这里的关键点是,包仅仅是个字节的数组。BPF 程序是运行在这些字节的数组上。它们不允许有循环(loops),但是,它们 _可以_  有聪明的办法知道 IP 包头(IPv6 和 IPv4 长度是不同的)以及基于它们的长度来找到 TCP 端口 + +``` +x = ip_header_length +port = *(packet_start + x + port_offset) + +``` + +(看起来不一样,其实它们基本上都相同)。在这个论文/幻灯片上有一个非常详细的虚拟机的描述,因此,我不打算解释它。 + +当你运行 `tcpdump host foo` 后,这时发生了什么?就我的理解,应该是如下的过程。 + +1. 转换 `host foo` 为一个高效的 DAG 规则 + +2. 转换那个 DAG 规则为 BPF 虚拟机的一个 BPF 程序(BPF 字节码) + +3. 发送 BPF 字节码到 Linux 内核,由 Linux 内核验证它 + +4. 编译这个 BPF 字节码程序为一个原生(native)代码。例如, [在 ARM 上是 JIT 代码][1] 以及为 [x86][2] 的机器码 + +5. 当包进入时,Linux 运行原生代码去决定是否过滤这个包。对于每个需要去处理的包,它通常仅需运行 100 - 200 个 CPU 指令就可以完成,这个速度是非常快的! 
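前面 `x = ip_header_length` 那段伪代码的思路,可以用普通 Python 写一个最小示意(仅为演示这个解析过程,不是真正的 BPF 程序,数据包也是手工构造的):

```python
def tcp_dest_port(packet):
    # IPv4 头部长度由第一个字节的低 4 位(IHL)给出,单位是 4 字节
    ip_header_length = (packet[0] & 0x0F) * 4
    # TCP 头部里,目的端口紧跟在 2 字节的源端口之后
    port_offset = 2
    hi, lo = packet[ip_header_length + port_offset : ip_header_length + port_offset + 2]
    return (hi << 8) | lo

# 手工构造一个最小的“包”:20 字节 IPv4 头(IHL=5)+ 4 字节 TCP 端口字段
ip_header = bytes([0x45]) + bytes(19)        # 版本 4,IHL=5,其余字段置零
tcp_ports = bytes([0x30, 0x39, 0x00, 0x50])  # 源端口 12345,目的端口 80
packet = ip_header + tcp_ports

print(tcp_dest_port(packet))  # 80
```

真正的 BPF 程序做的就是这类事情,只是它以编译后的字节码形式在内核里对每个到达的包执行。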
+ +### 现状:eBPF + +毕竟 BPF 出现已经有很长的时间了!现在,我们可以拥有一个更加令人激动的东西,它就是 eBPF。我以前听说过 eBPF,但是,我觉得像这样把这些片断拼在一起更好(我在 4 月份的 netdev 上我写了这篇 [XDP & eBPF 的文章][11]回复) + +关于 eBPF 的一些事实是: + +* eBPF 程序有它们自己的字节码语言,并且从那个字节码语言编译成内核原生代码,就像 BPF 程序 + +* eBPF 运行在内核中 + +* eBPF 程序不能随心所欲的访问内核内存。而是通过内核提供的函数去取得一些受严格限制的所需要的内容的子集。 + +* 它们  _可以_  与用户空间的程序通过 BPF 映射进行通讯 + +* 这是 Linux 3.18 的 `bpf` 系统调用 + +### kprobes 和 eBPF + +你可以在 Linux 内核中挑选一个函数(任意函数),然后运行一个你写的每次函数被调用时都运行的程序。这样看起来是不是很神奇。 + +例如:这里有一个 [名为 disksnoop 的 BPF 程序][12],它的功能是当你开始/完成写入一个块到磁盘时,触发它执行跟踪。下图是它的代码片断: + +``` +BPF_HASH(start, struct request *); +void trace_start(struct pt_regs *ctx, struct request *req) { + // stash start timestamp by request ptr + u64 ts = bpf_ktime_get_ns(); + start.update(&req, &ts); +} +... +b.attach_kprobe(event="blk_start_request", fn_name="trace_start") +b.attach_kprobe(event="blk_mq_start_request", fn_name="trace_start") + +``` + +从根本上来说,它声明一个 BPF 哈希(它的作用是当请求开始/完成时,这个程序去触发跟踪),一个名为 `trace_start` 的函数将被编译进 BPF 字节码,然后附加 `trace_start` 到内核函数 `blk_start_request` 上。 + +这里使用的是 `bcc` 框架,它可以使你写的 Python 化的程序去生成 BPF 代码。你可以在 [https://github.com/iovisor/bcc][13] 找到它(那里有非常多的示例程序)。 + +### uprobes 和 eBPF + +因为我知道你可以附加 eBPF 程序到内核函数上,但是,我不知道你能否将 eBPF 程序附加到用户空间函数上!那会有更多令人激动的事情。这是 [在 Python 中使用一个 eBPF 程序去计数 malloc 调用的示例][14]。 + +### 附加 eBPF 程序时应该考虑的事情 + +* 带 XDP 的网卡(我之前写过关于这方面的文章) + +* tc egress/ingress (在网络栈上) + +* kprobes(任意内核函数) + +* uprobes(很明显,任意用户空间函数??像带符号的任意 C 程序) + +* probes 是为 dtrace 构建的名为 “USDT probes” 的探针(像 [这些 mysql 探针][3])。这是一个 [使用 dtrace 探针的示例程序][4] + +* [JVM][5] + +* 跟踪点 + +* seccomp / landlock 安全相关的事情 + +* 更多的事情 + +### 这个讨论超级棒 + +在幻灯片里有很多非常好的链接,并且在  iovisor 仓库里有个 [LINKS.md][15]。现在已经很晚了,但是,很快我将写我的第一个 eBPF 程序了! 
+ +-------------------------------------------------------------------------------- + +via: https://jvns.ca/blog/2017/06/28/notes-on-bpf---ebpf/ + +作者:[Julia Evans ][a] +译者:[qhwdw](https://github.com/qhwdw) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://jvns.ca/ +[1]:https://github.com/torvalds/linux/blob/v4.10/arch/arm/net/bpf_jit_32.c#L512 +[2]:https://github.com/torvalds/linux/blob/v3.18/arch/x86/net/bpf_jit_comp.c#L189 +[3]:https://dev.mysql.com/doc/refman/5.7/en/dba-dtrace-ref-query.html +[4]:https://github.com/iovisor/bcc/blob/master/examples/tracing/mysqld_query.py +[5]:http://blogs.microsoft.co.il/sasha/2016/03/31/probing-the-jvm-with-bpfbcc/ +[6]:http://suchakra.in/ +[7]:https://twitter.com/tuxology +[8]:http://www.vodun.org/papers/net-papers/van_jacobson_the_bpf_packet_filter.pdf +[9]:https://speakerdeck.com/tuxology/the-bsd-packet-filter +[10]:http://step.polymtl.ca/~suchakra/PWL-Jun28-MTL.pdf +[11]:https://jvns.ca/blog/2017/04/07/xdp-bpf-tutorial/ +[12]:https://github.com/iovisor/bcc/blob/0c8c179fc1283600887efa46fe428022efc4151b/examples/tracing/disksnoop.py +[13]:https://github.com/iovisor/bcc +[14]:https://github.com/iovisor/bcc/blob/00f662dbea87a071714913e5c7382687fef6a508/tests/lua/test_uprobes.lua +[15]:https://github.com/iovisor/bcc/blob/master/LINKS.md From b2eafee9bc51d9efd284b9b8fdd9502cb66c6411 Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 16 Jan 2018 13:07:31 +0800 Subject: [PATCH 033/226] PRF&PUB:20171219 Surf anonymously- Learn to install TOR network on Linux.md @geekpi --- ...- Learn to install TOR network on Linux.md | 75 ++++++++++++------- 1 file changed, 47 insertions(+), 28 deletions(-) rename {translated/tech => published}/20171219 Surf anonymously- Learn to install TOR network on Linux.md (57%) diff --git a/translated/tech/20171219 Surf anonymously- Learn to install TOR network on Linux.md b/published/20171219 Surf anonymously- Learn 
to install TOR network on Linux.md similarity index 57% rename from translated/tech/20171219 Surf anonymously- Learn to install TOR network on Linux.md rename to published/20171219 Surf anonymously- Learn to install TOR network on Linux.md index 97dc71c641..b2b763a1fb 100644 --- a/translated/tech/20171219 Surf anonymously- Learn to install TOR network on Linux.md +++ b/published/20171219 Surf anonymously- Learn to install TOR network on Linux.md @@ -1,70 +1,89 @@ 匿名上网:学习在 Linux 上安装 TOR 网络 ====== -Tor 网络是一个匿名网络来保护你的互联网以及隐私。Tor 网络是一组志愿者运营的服务器。Tor 通过在由志愿者运营的分布式中继系统之间跳转来保护互联网通信。这避免了人们窥探我们的网络,他们无法了解我们访问的网站或者用户身在何处,并且也可以让我们访问被屏蔽的网站。 + +Tor 网络是一个用来保护你的互联网以及隐私的匿名网络。Tor 网络是一组志愿者运营的服务器。Tor 通过在由志愿者运营的分布式中继系统之间跳转来保护互联网通信。这避免了人们窥探我们的网络,他们无法了解我们访问的网站或者用户身在何处,并且也可以让我们访问被屏蔽的网站。 在本教程中,我们将学习在各种 Linux 操作系统上安装 Tor 网络,以及如何使用它来配置我们的程序来保护通信。 - **(推荐阅读:[如何在 Linux 上安装 Tor 浏览器(Ubuntu、Mint、RHEL、Fedora、CentOS)][1])** + 推荐阅读:[如何在 Linux 上安装 Tor 浏览器(Ubuntu、Mint、RHEL、Fedora、CentOS)][1] ### CentOS/RHEL/Fedora -Tor 包是 EPEL 仓库的一部分,所以如果我们安装了 EPEL 仓库,我们可以直接使用 yum 来安装 Tor。如果你需要在您的系统上安装 EPEL 仓库,请使用下列适当的命令(基于操作系统和体系结构): +Tor 包是 EPEL 仓库的一部分,所以如果我们安装了 EPEL 仓库,我们可以直接使用 `yum` 来安装 Tor。如果你需要在您的系统上安装 EPEL 仓库,请使用下列适当的命令(基于操作系统和体系结构): - **RHEL/CentOS 7** +RHEL/CentOS 7: - **$ sudo rpm -Uvh https://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-11.noarch.rpm** +``` +$ sudo rpm -Uvh https://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-11.noarch.rpm +``` - **RHEL/CentOS 6 (64 位)** +RHEL/CentOS 6 (64 位): - **$ sudo rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm** +``` +$ sudo rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm +``` - **RHEL/CentOS 6 (32 位)** +RHEL/CentOS 6 (32 位): - **$ sudo rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm** +``` +$ sudo rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm +``` 安装完成后,我们可以用下面的命令安装 Tor 浏览器: - **$ sudo yum install 
tor** +``` +$ sudo yum install tor +``` ### Ubuntu -为了在 Ubuntu 机器上安装 Tor 网络,我们需要添加官方 Tor 仓库。我们需要将仓库信息添加到 “/etc/apt/sources.list” 中。 +为了在 Ubuntu 机器上安装 Tor 网络,我们需要添加官方 Tor 仓库。我们需要将仓库信息添加到 `/etc/apt/sources.list` 中。 - **$ sudo nano /etc/apt/sources.list** +``` +$ sudo nano /etc/apt/sources.list +``` 现在根据你的操作系统添加下面的仓库信息: - **Ubuntu 16.04** +Ubuntu 16.04: - **deb http://deb.torproject.org/torproject.org xenial main** -**deb-src http://deb.torproject.org/torproject.org xenial main** +``` +deb http://deb.torproject.org/torproject.org xenial main +deb-src http://deb.torproject.org/torproject.org xenial main +``` - **Ubuntu 14.04** +Ubuntu 14.04 - **deb http://deb.torproject.org/torproject.org trusty main** -**deb-src http://deb.torproject.org/torproject.org trusty main** +``` +deb http://deb.torproject.org/torproject.org trusty main +deb-src http://deb.torproject.org/torproject.org trusty main +``` 接下来打开终端并执行以下两个命令添加用于签名软件包的 gpg 密钥: - **$ gpg -keyserver keys.gnupg.net -recv A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89** -**$ gpg -export A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89 | sudo apt-key add -** +``` +$ gpg -keyserver keys.gnupg.net -recv A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89 +$ gpg -export A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89 | sudo apt-key add - +``` 现在运行更新并安装 Tor 网络: - **$ sudo apt-get update** -**$ sudo apt-get install tor deb.torproject.org-keyring** +``` +$ sudo apt-get update +$ sudo apt-get install tor deb.torproject.org-keyring +``` ### Debian 我们可以无需添加任何仓库在 Debian 上安装 Tor 网络。只要打开终端并以 root 身份执行以下命令: - **$ apt install tor** - -### +``` +$ apt install tor +``` ### Tor 配置 -如果你最终目的只是为了保护互联网浏览,而没有其他要求,直接使用 Tor 更好,但是如果你需要保护即时通信、IRC、Jabber 等程序,则需要配置这些应用程序进行安全通信。但在做之前,让我们先看看**[Tor 网站上提到的警告][2]**。 +如果你最终目的只是为了保护互联网浏览,而没有其他要求,直接使用 Tor 更好,但是如果你需要保护即时通信、IRC、Jabber 等程序,则需要配置这些应用程序进行安全通信。但在做之前,让我们先看看[Tor 网站上提到的警告][2]。 - 不要大流量使用 Tor - 不要在 Tor 中使用任何浏览器插件 @@ -72,7 +91,7 @@ Tor 包是 EPEL 仓库的一部分,所以如果我们安装了 EPEL 仓库, - 不要在线打开通过 Tor 下载的任何文档。 - 尽可能使用 Tor 桥 -现在配置程序来使用 Tor,例如 
jabber。首先选择 “SOCKS代理” 而不是使用 HTTP 代理,并使用端口号 9050,或者也可以使用端口 9150(Tor 浏览器使用)。 +现在配置程序来使用 Tor,例如 jabber。首先选择 “SOCKS代理” 而不是使用 HTTP 代理,并使用端口号 `9050`,或者也可以使用端口 9150(Tor 浏览器使用)。 ![install tor network][4] @@ -90,7 +109,7 @@ via: http://linuxtechlab.com/learn-install-tor-network-linux/ 作者:[Shusain][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 30c44274eb9aa2b3c0ef5affc88c1505ca04f3c3 Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 16 Jan 2018 13:19:11 +0800 Subject: [PATCH 034/226] PRF&PUB:20171212 How to Search PDF Files from the Terminal with pdfgrep.md @geekpi --- ...DF Files from the Terminal with pdfgrep.md | 71 +++++++++++++++++++ ...DF Files from the Terminal with pdfgrep.md | 64 ----------------- 2 files changed, 71 insertions(+), 64 deletions(-) create mode 100644 published/20171212 How to Search PDF Files from the Terminal with pdfgrep.md delete mode 100644 translated/tech/20171212 How to Search PDF Files from the Terminal with pdfgrep.md diff --git a/published/20171212 How to Search PDF Files from the Terminal with pdfgrep.md b/published/20171212 How to Search PDF Files from the Terminal with pdfgrep.md new file mode 100644 index 0000000000..495817a531 --- /dev/null +++ b/published/20171212 How to Search PDF Files from the Terminal with pdfgrep.md @@ -0,0 +1,71 @@ +如何使用 pdfgrep 从终端搜索 PDF 文件 +====== + +![](https://www.maketecheasier.com/assets/uploads/2017/12/search-pdf-terminal.jpg) + +诸如 [grep][1] 和 [ack-grep][2] 之类的命令行工具对于搜索匹配指定[正则表达式][3]的纯文本非常有用。但是你有没有试过使用这些工具在 PDF 中搜索?不要这么做!由于这些工具无法读取PDF文件,因此你不会得到任何结果。它们只能读取纯文本文件。 + +顾名思义,[pdfgrep][4] 是一个可以在不打开文件的情况下搜索 PDF 中的文本的小命令行程序。它非常快速 —— 比几乎所有 PDF 浏览器提供的搜索更快。`grep` 和 `pdfgrep` 的最大区别在于 `pdfgrep` 对页进行操作,而 `grep` 对行操作。`grep` 如果在一行上找到多个匹配项,它也会多次打印单行。让我们看看如何使用该工具。 + +### 安装 + +对于 Ubuntu 和其他基于 Ubuntu 的 Linux 发行版来说,这非常简单: + +``` +sudo apt install pdfgrep +``` + 
+对于其他发行版,只要在[包管理器][5]里输入 “pdfgrep” 查找,它就应该能够安装它。万一你想浏览其代码,你也可以查看项目的 [GitLab 页面][6]。 + +### 测试运行 + +现在你已经安装了这个工具,让我们去测试一下。`pdfgrep` 命令采用以下格式: + +``` +pdfgrep [OPTION...] PATTERN [FILE...] +``` + +- `OPTION` 是一个额外的属性列表,给出诸如 `-i` 或 `--ignore-case` 这样的命令,这两者都会忽略匹配正则中的大小写。 +- `PATTERN` 是一个扩展正则表达式。 + +- `FILE` 如果它在相同的工作目录就是文件的名称,或文件的路径。 + +我对 Python 3.6 官方文档运行该命令。下图是结果。 + +![pdfgrep search][7] + +红色高亮显示所有遇到单词 “queue” 的地方。在命令中加入 `-i` 选项将会匹配单词 “Queue”。请记住,当加入 `-i` 时,大小写并不重要。 + +### 其它 + +`pdfgrep` 有相当多的有趣的选项。不过,我只会在这里介绍几个。 + +* `-c` 或者 `--count`:这会抑制匹配的正常输出。它只显示在文件中遇到该单词的次数,而不是显示匹配的长输出。 +* `-p` 或者 `--page-count`:这个选项打印页面上匹配的页码和页面上的该匹配模式出现次数。 +* `-m` 或者 `--max-count` [number]:指定匹配的最大数目。这意味着当达到匹配次数时,该命令停止读取文件。 + +所支持的选项的完整列表可以在 man 页面或者 `pdfgrep` 在线[文档][8]中找到。如果你在批量处理一些文件,不要忘记,`pdfgrep` 可以同时搜索多个文件。可以通过更改 `GREP_COLORS` 环境变量来更改默认的匹配高亮颜色。 + +### 总结 + +下一次你想在 PDF 中搜索一些东西。请考虑使用 `pdfgrep`。该工具会派上用场,并且节省你的时间。 + +-------------------------------------------------------------------------------- + +via: https://www.maketecheasier.com/search-pdf-files-pdfgrep/ + +作者:[Bruno Edoh][a] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.maketecheasier.com +[1]:https://www.maketecheasier.com/what-is-grep-and-uses/ +[2]: https://www.maketecheasier.com/ack-a-better-grep/ +[3]: https://www.maketecheasier.com/the-beginner-guide-to-regular-expressions/ +[4]: https://pdfgrep.org/ +[5]: https://www.maketecheasier.com/install-software-in-various-linux-distros/ +[6]: https://gitlab.com/pdfgrep/pdfgrep +[7]: https://www.maketecheasier.com/assets/uploads/2017/11/pdfgrep-screenshot.png (pdfgrep search) +[8]: https://pdfgrep.org/doc.html diff --git a/translated/tech/20171212 How to Search PDF Files from the Terminal with pdfgrep.md b/translated/tech/20171212 How to Search PDF Files from the Terminal with pdfgrep.md deleted file mode 100644 index 75aae3b97e..0000000000 --- 
a/translated/tech/20171212 How to Search PDF Files from the Terminal with pdfgrep.md +++ /dev/null @@ -1,64 +0,0 @@ -如何使用 pdfgrep 从终端搜索 PDF 文件 -====== -诸如 [grep][1] 和 [ack-grep][2] 之类的命令行工具对于搜索匹配指定[正则表达式][3]的纯文本非常有用。但是你有没有试过使用这些工具在 PDF 中搜索模板?不要这么做!由于这些工具无法读取PDF文件,因此你不会得到任何结果。他们只能读取纯文本文件。 - -顾名思义,[pdfgrep][4] 是一个小的命令行程序,可以在不打开文件的情况下搜索 PDF 中的文本。它非常快速 - 比几乎所有 PDF 浏览器提供的搜索更快。grep 和 pdfgrep 的区别在于 pdfgrep 对页进行操作,而 grep 对行操作。grep 如果在一行上找到多个匹配项,它也会多次打印单行。让我们看看如何使用该工具。 - -对于 Ubuntu 和其他基于 Ubuntu 的 Linux 发行版来说,这非常简单: -``` -sudo apt install pdfgrep -``` - -对于其他发行版,只要将 `pdfgrep` 作为[包管理器][5]的输入,它就应该能够安装。万一你想浏览代码,你也可以查看项目的[ GitLab 页面][6]。 - -现在你已经安装了这个工具,让我们去测试一下。pdfgrep 命令采用以下格式: -``` -pdfgrep [OPTION...] PATTERN [FILE...] -``` - - **OPTION** 是一个额外的属性列表,给出诸如 `-i` 或 `--ignore-case` 这样的命令,这两者都会忽略匹配正则中的大小写。 - - **PATTERN** 是一个扩展的正则表达式。 - - **FILE** 如果它在相同的工作目录或文件的路径,这是文件的名称。 - -我根据官方文档用 Python 3.6 运行命令。下图是结果。 - -![pdfgrep search][7] - -![pdfgrep search][7] - -红色高亮显示所有遇到单词 “queue” 的地方。在命令中加入 `-i` 选项将会匹配单词 “Queue”。请记住,当加入 `-i` 时,大小写并不重要。 - -pdfgrep 有相当多的有趣的选项。不过,我只会在这里介绍几个。 - - - * `-c` 或者 `--count`:这会抑制匹配的正常输出。它只显示在文件中遇到该单词的次数,而不是显示匹配的长输出, -  * `-p` 或者 `--page-count`:这个选项打印页面上匹配的页码和页面上的模式出现次数 -  * `-m` 或者 `--max-count` [number]:指定匹配的最大数目。这意味着当达到匹配次数时,该命令停止读取文件。 - - - -支持的选项的完整列表可以在 man 页面或者 pdfgrep 在线[文档][8]中找到。以防你在处理一些批量文件,不要忘记,pdfgrep 可以同时搜索多个文件。可以通过更改 GREP_COLORS 环境变量来更改默认的匹配高亮颜色。 - -下一次你想在 PDF 中搜索一些东西。请考虑使用 pdfgrep。该工具会派上用场,并且节省你的时间。 - --------------------------------------------------------------------------------- - -via: https://www.maketecheasier.com/search-pdf-files-pdfgrep/ - -作者:[Bruno Edoh][a] -译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.maketecheasier.com -[1] https://www.maketecheasier.com/what-is-grep-and-uses/ -[2] https://www.maketecheasier.com/ack-a-better-grep/ -[3] 
https://www.maketecheasier.com/the-beginner-guide-to-regular-expressions/ -[4] https://pdfgrep.org/ -[5] https://www.maketecheasier.com/install-software-in-various-linux-distros/ -[6] https://gitlab.com/pdfgrep/pdfgrep -[7] https://www.maketecheasier.com/assets/uploads/2017/11/pdfgrep-screenshot.png (pdfgrep search) -[8] https://pdfgrep.org/doc.html From 00365b8ce813173b5135b8939a45b23a4f66d02b Mon Sep 17 00:00:00 2001 From: qhwdw Date: Tue, 16 Jan 2018 13:24:20 +0800 Subject: [PATCH 035/226] Translated by qhwdw --- ...ing Master Slave replication in MariaDB.md | 184 ------------------ ...ing Master Slave replication in MariaDB.md | 184 ++++++++++++++++++ 2 files changed, 184 insertions(+), 184 deletions(-) delete mode 100644 sources/tech/20171112 Step by Step guide for creating Master Slave replication in MariaDB.md create mode 100644 translated/tech/20171112 Step by Step guide for creating Master Slave replication in MariaDB.md diff --git a/sources/tech/20171112 Step by Step guide for creating Master Slave replication in MariaDB.md b/sources/tech/20171112 Step by Step guide for creating Master Slave replication in MariaDB.md deleted file mode 100644 index 98474cbe78..0000000000 --- a/sources/tech/20171112 Step by Step guide for creating Master Slave replication in MariaDB.md +++ /dev/null @@ -1,184 +0,0 @@ -Translating by qhwdw Step by Step guide for creating Master Slave replication in MariaDB -====== -In our earlier tutorials,we have already learned [**to install & configure MariaDB**][1] & also [**learned some basic administration commands for managing MariaDB**][2]. We are now going to learn to setup a MASTER SLAVE replication for MariaDB server. 
- -Replication is used to create multiple copies of our database & these copies then can either be used as another database to run our queries on, queries that might otherwise affect performance of master server like running some heavy analytics queries or we can just use them for data redundancy purposes or for both. We can automate the whole process i.e. data replication occurs automatically from master to slave. Backups are be done without affecting the write operations of the master - -So we will now setup our **master-slave** replication, for this we need two machines with Mariadb installed. IP addresses for the both the machines are mentioned below, - - **Master -** 192.168.1.120 **Hostname-** master.ltechlab.com - - **Slave -** 192.168.1.130 **Hostname -** slave.ltechlab.com - -Once MariaDB has been installed in those machines, we will move on with the tutorial. If you need help installing and configuring maridb, have a[ **look at our tutorial HERE.**][1] - - -### **Step 1- Master Server Configuration** - -We are going to take a database named ' **important '** in MariaDB, that will be replicated to our slave server. To start the process, we will edit the files ' **/etc/my.cnf** ' , it's the configuration file for mariadb, - -``` -$ vi /etc/my.cnf -``` - -& look for section with [mysqld] & then enter the following details, - -``` -[mysqld] -log-bin -server_id=1 -replicate-do-db=important -bind-address=192.168.1.120 -``` - -Save & exit the file. 
Once done, restart the mariadb services, - -``` -$ systemctl restart mariadb -``` - -Next, we will login to our mariadb instance on master server, - -``` -$ mysql -u root -p -``` - -& then will create a new user for slave named 'slaveuser' & assign it necessary privileges by running the following command - -``` -STOP SLAVE; -GRANT REPLICATION SLAVE ON *.* TO 'slaveuser'@'%' IDENTIFIED BY 'iamslave'; -FLUSH PRIVILEGES; -FLUSH TABLES WITH READ LOCK; -SHOW MASTER STATUS; -``` - -**Note:- ** We need values from **MASTER_LOG_FILE and MASTER_LOG_POS ** from out of 'show master status' for configuring replication, so make sure that you have those. - -Once these commands run successfully, exit from the session by typing 'exit'. - -### Step2 - Create a backup of the database & move it slave - -Now we need to create backup of our database 'important' , which can be done using 'mysqldump' command, - -``` -$ mysqldump -u root -p important > important_backup.sql -``` - -Once the backup is complete, we need to log back into the mariadb & unlock our tables, - -``` -$ mysql -u root -p -$ UNLOCK TABLES; -``` - -& exit the session. Now we will move the database backup to our slave server which has a IPaddress of 192.168.1.130, - -This completes our configuration on Master server, we will now move onto configuring our slave server. 
- -### Step 3 Configuring Slave server - -We will again start with editing '/etc/my.cnf' file & look for section [mysqld] & enter the following details, - -``` -[mysqld] -server-id = 2 -replicate-do-db=important -[ …] -``` - -We will now restore our database to mariadb, by running - -``` -$ mysql -u root -p < /data/ important_backup.sql -``` - -When the process completes, we will provide the privileges to 'slaveuser' on db 'important' by logging into mariadb on slave server, - -``` -$ mysql -u root -p -``` - -``` -GRANT ALL PRIVILEGES ON important.* TO 'slaveuser'@'localhost' WITH GRANT OPTION; -FLUSH PRIVILEGES; -``` - -Next restart mariadb for implementing the changes. - -``` -$ systemctl restart mariadb -``` - -### **Step 4 Start the replication** - -Remember, we need **MASTER_LOG_FILE and MASTER_LOG_POS** variables which we got from running 'SHOW MASTER STATUS' on mariadb on master server. Now login to mariadb on slave server & we will tell our slave server where to look for the master by running the following commands, - -``` -STOP SLAVE; -CHANGE MASTER TO MASTER_HOST= '192.168.1.110′, MASTER_USER='slaveuser', MASTER_PASSWORD='iamslave', MASTER_LOG_FILE='mariadb-bin.000001′, MASTER_LOG_POS=460; -SLAVE START; -SHOW SLAVE STATUS\G; -``` - -**Note:-** Change details of your master as necessary. - -### Step 5 Testing the replication - -We will now create a new tables in our database on master to make sure if the replication is working or not. So, login to mariadb on master server, - -``` -$ mysql -u root -p -``` - -select the database 'important', - -``` -use important; -``` - -and create a table named test in the db, - -``` -create table test (c int); -``` - -then insert some value into it, - -``` -insert into test (c) value (1); -``` - -To check the added value, - -``` -select * from test; -``` - -& you will find that your db has a table has the value you inserted. 
- -Now let's login to our slave database to make sure if our data replication is working, - -``` -$ mysql -u root -p -$ use important; -$ select * from test; -``` - -You will see that the output shows the same value that we inserted on the master server, hence our replication is working fine without any issues. - -This concludes our tutorial, please send your queries/questions through the comment box below. - --------------------------------------------------------------------------------- - -via: http://linuxtechlab.com/creating-master-slave-replication-mariadb/ - -作者:[Shusain][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://linuxtechlab.com/author/shsuain/ -[1]:http://linuxtechlab.com/installing-configuring-mariadb-rhelcentos/ -[2]:http://linuxtechlab.com/mariadb-administration-commands-beginners/ diff --git a/translated/tech/20171112 Step by Step guide for creating Master Slave replication in MariaDB.md b/translated/tech/20171112 Step by Step guide for creating Master Slave replication in MariaDB.md new file mode 100644 index 0000000000..397843785e --- /dev/null +++ b/translated/tech/20171112 Step by Step guide for creating Master Slave replication in MariaDB.md @@ -0,0 +1,184 @@ +一步一步学习如何在 MariaDB 中配置主从复制 +====== +在我们前面的教程中,我们已经学习了 [**如何安装和配置 MariaDB**][1],也学习了 [**管理 MariaDB 的一些基础命令**][2]。现在我们来学习,如何在 MariaDB 服务器上配置一个主从复制。 + +复制是用于为我们的数据库去创建多个副本,这些副本可以在其它数据库上用于运行查询,像一些非常繁重的查询可能会影响主数据库服务器的性能,或者我们可以使用它来做数据冗余,或者兼具以上两个目的。我们可以将这个过程自动化,即主服务器到从服务器的复制过程自动进行。执行备份而不影响在主服务器上的写操作。 + +因此,我们现在去配置我们的主-从复制,它需要两台安装了 MariaDB 的机器。它们的 IP 地址如下: + + **主服务器 -** 192.168.1.120 **主机名** master.ltechlab.com + + **从服务器 -** 192.168.1.130 **主机名 -** slave.ltechlab.com + +MariaDB 安装到这些机器上之后,我们继续进行本教程。如果你需要安装和配置 MariaDB 的教程,请查看[ **这个教程**][1]。 + + +### **第 1 步 - 主服务器配置** + +我们现在进入到 MariaDB 中的一个命名为 ' **important '** 的数据库,它将被复制到我们的从服务器。为开始这个过程,我们编辑名为 ' 
**/etc/my.cnf** ' 的文件。
+
+```
+$ vi /etc/my.cnf
+```
+
+在这个文件中找到 [mysqld] 节,然后输入如下内容:
+
+```
+[mysqld]
+log-bin
+server_id=1
+replicate-do-db=important
+bind-address=192.168.1.120
+```
+
+保存并退出这个文件。完成之后,需要重启 MariaDB 服务。
+
+```
+$ systemctl restart mariadb
+```
+
+接下来,我们登入我们的主服务器上的 MariaDB 实例。
+
+```
+$ mysql -u root -p
+```
+
+在它上面创建一个命名为 'slaveuser' 的为主从复制使用的新用户,然后运行如下的命令为它分配所需要的权限:
+
+```
+STOP SLAVE;
+GRANT REPLICATION SLAVE ON *.* TO 'slaveuser'@'%' IDENTIFIED BY 'iamslave';
+FLUSH PRIVILEGES;
+FLUSH TABLES WITH READ LOCK;
+SHOW MASTER STATUS;
+```
+
+**注意:** 我们配置主从复制需要 **MASTER_LOG_FILE 和 MASTER_LOG_POS** 的值,它可以通过 'show master status' 来获得,因此,你一定要确保你记下了它们的值。
+
+这些命令运行完成之后,输入 'exit' 退出这个会话。
+
+### 第 2 步 - 创建一个数据库备份,并将它移动到从服务器上
+
+现在,我们需要去为我们的数据库 'important' 创建一个备份,可以使用 'mysqldump' 命令去备份。
+
+```
+$ mysqldump -u root -p important > important_backup.sql
+```
+
+备份完成后,我们需要重新登陆到 MariaDB 数据库,并解锁我们的表。
+
+```
+$ mysql -u root -p
+$ UNLOCK TABLES;
+```
+
+然后退出这个会话。现在,我们移动我们刚才的备份到从服务器上,它的 IP 地址是:192.168.1.130。
+
+在主服务器上的配置已经完成了,现在,我们开始配置从服务器。
+
+### 第 3 步:配置从服务器
+
+我们再次去编辑 '/etc/my.cnf' 文件,找到配置文件中的 [mysqld] 节,然后输入如下内容:
+
+```
+[mysqld]
+server-id = 2
+replicate-do-db=important
+[…]
+```
+
+现在,我们恢复我们主数据库的备份到从服务器的 MariaDB 上,运行如下命令:
+
+```
+$ mysql -u root -p < /data/important_backup.sql
+```
+
+当这个恢复过程结束之后,我们将通过登入到从服务器上的 MariaDB,为数据库 'important' 上的用户 'slaveuser' 授权。
+
+```
+$ mysql -u root -p
+```
+
+```
+GRANT ALL PRIVILEGES ON important.* TO 'slaveuser'@'localhost' WITH GRANT OPTION;
+FLUSH PRIVILEGES;
+```
+
+接下来,为了这个变化生效,重启 MariaDB。
+
+```
+$ systemctl restart mariadb
+```
+
+### **第 4 步:启动复制**
+
+记住,我们需要 **MASTER_LOG_FILE 和 MASTER_LOG_POS** 变量的值,它可以通过在主服务器上运行 'SHOW MASTER STATUS' 获得。现在登入到从服务器上的 MariaDB,然后通过运行下列命令,告诉我们的从服务器它应该去哪里找主服务器。
+
+```
+STOP SLAVE;
+CHANGE MASTER TO MASTER_HOST='192.168.1.120', MASTER_USER='slaveuser', MASTER_PASSWORD='iamslave', MASTER_LOG_FILE='mariadb-bin.000001', MASTER_LOG_POS=460;
+START SLAVE;
+SHOW SLAVE STATUS\G;
+``` + +**注意:** 请根据你的机器的具体情况来改变主服务器的配置。 + +### 第 5 步:测试复制 + +我们将在我们的主服务器上创建一个新表来测试主从复制是否正常工作。因此,登入到主服务器上的 MariaDB。 + +``` +$ mysql -u root -p +``` + +选择数据库为 'important': + +``` +use important; +``` + +在这个数据库上创建一个名为 ‘test’ 的表: + +``` +create table test (c int); +``` + +然后在这个表中插入一些数据: + +``` +insert into test (c) value (1); +``` + +检索刚才插入的值是否存在: + +``` +select * from test; +``` + +你将会看到刚才你插入的值已经在这个新建的表中了。 + +现在,我们登入到从服务器的数据库中,查看主从复制是否正常工作。 + +``` +$ mysql -u root -p +$ use important; +$ select * from test; +``` + +你可以看到与前面在主服务器上的命令输出是一样的。因此,说明我们的主从服务工作正常,没有发生任何问题。 + +我们的教程结束了,请在下面的评论框中留下你的查询/问题。 + +-------------------------------------------------------------------------------- + +via: http://linuxtechlab.com/creating-master-slave-replication-mariadb/ + +作者:[Shusain][a] +译者:[qhwdw](https://github.com/qhwdw) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linuxtechlab.com/author/shsuain/ +[1]:http://linuxtechlab.com/installing-configuring-mariadb-rhelcentos/ +[2]:http://linuxtechlab.com/mariadb-administration-commands-beginners/ From fc3731c90dfbbbda62c797ce18fb412f950e0dc4 Mon Sep 17 00:00:00 2001 From: qhwdw Date: Tue, 16 Jan 2018 15:09:56 +0800 Subject: [PATCH 036/226] translating by qhwdw --- ...06 System Calls Make the World Go Round.md | 153 ++++++++++++++++++ ...0319 ftrace trace your kernel functions.md | 2 +- ...nux container security - Opensource.com.md | 2 +- 3 files changed, 155 insertions(+), 2 deletions(-) create mode 100644 sources/tech/20141106 System Calls Make the World Go Round.md diff --git a/sources/tech/20141106 System Calls Make the World Go Round.md b/sources/tech/20141106 System Calls Make the World Go Round.md new file mode 100644 index 0000000000..0d0201e471 --- /dev/null +++ b/sources/tech/20141106 System Calls Make the World Go Round.md @@ -0,0 +1,153 @@ +# Translating by qhwdw System Calls Make the World Go Round + +I hate to break it to 
you, but a user application is a helpless brain in a vat:
+
+![](https://manybutfinite.com/img/os/appInVat.png)
+
+Every interaction with the outside world is mediated by the kernel through system calls. If an app saves a file, writes to the terminal, or opens a TCP connection, the kernel is involved. Apps are regarded as highly suspicious: at best a bug-ridden mess, at worst the malicious brain of an evil genius.
+
+These system calls are function calls from an app into the kernel. They use a specific mechanism for safety reasons, but really you're just calling the kernel's API. The term "system call" can refer to a specific function offered by the kernel (e.g., the open() system call) or to the calling mechanism. You can also say syscall for short.
+
+This post looks at system calls, how they differ from calls to a library, and tools to poke at this OS/app interface. A solid understanding of what happens within an app versus what happens through the OS can turn an impossible-to-fix problem into a quick, fun puzzle.
+
+So here's a running program, a user process:
+
+![](https://manybutfinite.com/img/os/sandbox.png)
+
+It has a private [virtual address space][2], its very own memory sandbox. The vat, if you will. In its address space, the program's binary file plus the libraries it uses are all [memory mapped][3]. Part of the address space maps the kernel itself.
+
+Below is the code for our program, pid, which simply retrieves its process id via [getpid(2)][4]:
+
+pid.c [download][1]
+
+```c
+#include <sys/types.h>
+#include <unistd.h>
+#include <stdio.h>
+
+int main()
+{
+	pid_t p = getpid();
+	printf("%d\n", p);
+}
+```
+
+In Linux, a process isn't born knowing its PID. It must ask the kernel, so this requires a system call:
+
+![](https://manybutfinite.com/img/os/syscallEnter.png)
+
+It all starts with a call to the C library's [getpid()][5], which is a wrapper for the system call. When you call functions like open(2), read(2), and friends, you're calling these wrappers.
This is true for many languages where the native methods ultimately end up in libc. + +Wrappers offer convenience atop the bare-bones OS API, helping keep the kernel lean. Lines of code is where bugs live, and all kernel code runs in privileged mode, where mistakes can be disastrous. Anything that can be done in user mode should be done in user mode. Let the libraries offer friendly methods and fancy argument processing a la printf(3). + +Compared to web APIs, this is analogous to building the simplest possible HTTP interface to a service and then offering language-specific libraries with helper methods. Or maybe some caching, which is what libc's getpid() does: when first called it actually performs a system call, but the PID is then cached to avoid the syscall overhead in subsequent invocations. + +Once the wrapper has done its initial work it's time to jump into hyperspace the kernel. The mechanics of this transition vary by processor architecture. In Intel processors, arguments and the [syscall number][6] are [loaded into registers][7], then an [instruction][8] is executed to put the CPU in [privileged mode][9] and immediately transfer control to a global syscall [entry point][10] within the kernel. If you're interested in details, David Drysdale has two great articles in LWN ([first][11], [second][12]). + +The kernel then uses the syscall number as an [index][13] into [sys_call_table][14], an array of function pointers to each syscall implementation. Here, [sys_getpid][15] is called: + +![](https://manybutfinite.com/img/os/syscallExit.png) + +In Linux, syscall implementations are mostly arch-independent C functions, sometimes [trivial][16], insulated from the syscall mechanism by the kernel's excellent design. They are regular code working on general data structures. Well, apart from being completely paranoid about argument validation. 
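The table-driven dispatch described above is easy to model in ordinary user-space C. The sketch below is a toy illustration only — the syscall numbers, handler names, and return values are invented for this demo, not the kernel's real table:

```c
#include <stddef.h>

/* Toy model of sys_call_table: an array of function pointers indexed
 * by syscall number. Everything here is made up for illustration; the
 * real table lives inside the kernel, behind the syscall entry point. */
typedef long (*syscall_fn)(void);

static long demo_sys_getpid(void) { return 14678; }      /* pretend PID  */
static long demo_sys_time(void)   { return 1415059200; } /* pretend time */

static const syscall_fn demo_sys_call_table[] = {
    demo_sys_getpid,  /* nr 0 in this toy table */
    demo_sys_time,    /* nr 1 */
};

long demo_dispatch(size_t nr) {
    size_t count = sizeof demo_sys_call_table / sizeof demo_sys_call_table[0];
    if (nr >= count)
        return -38;   /* like the kernel: return -ENOSYS for unknown numbers */
    return demo_sys_call_table[nr]();
}
```

The kernel's version differs in the details (argument passing, validation, per-arch glue), but the core move — index into an array of function pointers and call through it — is the same.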
+
+Once their work is done they return normally, and the arch-specific code takes care of transitioning back into user mode where the wrapper does some post processing. In our example, [getpid(2)][17] now caches the PID returned by the kernel. Other wrappers might set the global errno variable if the kernel returns an error. Small things to let you know GNU cares.
+
+If you want to be raw, glibc offers the [syscall(2)][18] function, which makes a system call without a wrapper. You can also do so yourself in assembly. There's nothing magical or privileged about a C library.
+
+This syscall design has far-reaching consequences. Let's start with the incredibly useful [strace(1)][19], a tool you can use to spy on system calls made by Linux processes (in Macs, see [dtruss(1m)][20] and the amazing [dtrace][21]; in Windows, see [sysinternals][22]). Here's strace on pid:
+
+```
+~/code/x86-os$ strace ./pid
+execve("./pid", ["./pid"], [/* 20 vars */]) = 0
+brk(0)                                  = 0x9aa0000
+access("/etc/ld.so.nohwcap", F_OK)      = -1 ENOENT (No such file or directory)
+mmap2(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7767000
+access("/etc/ld.so.preload", R_OK)      = -1 ENOENT (No such file or directory)
+open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
+fstat64(3, {st_mode=S_IFREG|0644, st_size=18056, ...}) = 0
+mmap2(NULL, 18056, PROT_READ, MAP_PRIVATE, 3, 0) = 0xb7762000
+close(3)                                = 0
+[...snip...]
+getpid()                                = 14678
+fstat64(1, {st_mode=S_IFCHR|0600, st_rdev=makedev(136, 1), ...}) = 0
+mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7766000
+write(1, "14678\n", 614678
+)                                       = 6
+exit_group(6)                           = ?
+```
+
+Each line of output shows a system call, its arguments, and a return value. If you put getpid(2) in a loop running 1000 times, you would still have only one getpid() syscall because of the PID caching. We can also see that printf(3) calls write(2) after formatting the output string.
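The syscall(2) escape hatch mentioned above is easy to try. Here's a minimal, Linux-specific sketch (glibc assumed; `pids_match` is just a name chosen for this demo) comparing the friendly wrapper with a raw, wrapper-less call:

```c
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/types.h>
#include <sys/syscall.h>

/* getpid() goes through the glibc wrapper (which may answer from its
 * PID cache); syscall(SYS_getpid) crosses into the kernel with no
 * wrapper at all. Both should report the same PID. */
int pids_match(void) {
    pid_t wrapped = getpid();
    pid_t raw = (pid_t) syscall(SYS_getpid);
    return wrapped == raw;
}
```

Strace the two variants and you'll see the raw call always shows up, while the wrapped one may not — the cache at work.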
+
+strace can start a new process and also attach to an already running one. You can learn a lot by looking at the syscalls made by different programs. For example, what does the sshd daemon do all day?
+
+```
+~/code/x86-os$ ps ax | grep sshd
+12218 ?        Ss     0:00 /usr/sbin/sshd -D
+
+~/code/x86-os$ sudo strace -p 12218
+Process 12218 attached - interrupt to quit
+select(7, [3 4], NULL, NULL, NULL
+
+[ ... nothing happens ...
+  No fun, it's just waiting for a connection using select(2)
+  If we wait long enough, we might see new keys being generated and
+  so on, but let's attach again, tell strace to follow forks (-f),
+  and connect via SSH ]
+
+~/code/x86-os$ sudo strace -p 12218 -f
+
+[lots of calls happen during an SSH login, only a few shown]
+
+[pid 14692] read(3, "-----BEGIN RSA PRIVATE KEY-----\n"..., 1024) = 1024
+[pid 14692] open("/usr/share/ssh/blacklist.RSA-2048", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
+[pid 14692] open("/etc/ssh/blacklist.RSA-2048", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
+[pid 14692] open("/etc/ssh/ssh_host_dsa_key", O_RDONLY|O_LARGEFILE) = 3
+[pid 14692] open("/etc/protocols", O_RDONLY|O_CLOEXEC) = 4
+[pid 14692] read(4, "# Internet (IP) protocols\n#\n# Up"..., 4096) = 2933
+[pid 14692] open("/etc/hosts.allow", O_RDONLY) = 4
+[pid 14692] open("/lib/i386-linux-gnu/libnss_dns.so.2", O_RDONLY|O_CLOEXEC) = 4
+[pid 14692] stat64("/etc/pam.d", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
+[pid 14692] open("/etc/pam.d/common-password", O_RDONLY|O_LARGEFILE) = 8
+[pid 14692] open("/etc/pam.d/other", O_RDONLY|O_LARGEFILE) = 4
+```
+
+SSH is a large chunk to bite off, but it gives a feel for strace usage. Being able to see which files an app opens can be useful ("where the hell is this config coming from?"). If you have a process that appears stuck, you can strace it and see what it might be doing via system calls.
When some app is quitting unexpectedly without a proper error message, check if a syscall failure explains it. You can also use filters, time each call, and so on:
+
+```
+~/code/x86-os$ strace -T -e trace=recv curl --silent www.google.com. > /dev/null
+recv(3, "HTTP/1.1 200 OK\r\nDate: Wed, 05 N"..., 16384, 0) = 4164 <0.000007>
+recv(3, "fl a{color:#36c}a:visited{color:"..., 16384, 0) = 2776 <0.000005>
+recv(3, "adient(top,#4d90fe,#4787ed);filt"..., 16384, 0) = 4164 <0.000007>
+recv(3, "gbar.up.spd(b,d,1,!0);break;case"..., 16384, 0) = 2776 <0.000006>
+recv(3, "$),a.i.G(!0)),window.gbar.up.sl("..., 16384, 0) = 1388 <0.000004>
+recv(3, "margin:0;padding:5px 8px 0 6px;v"..., 16384, 0) = 1388 <0.000007>
+recv(3, "){window.setTimeout(function(){v"..., 16384, 0) = 1484 <0.000006>
+```
+
+I encourage you to explore these tools in your OS. Using them well is like having a super power.
+
+But enough useful stuff, let's go back to design. We've seen that a userland app is trapped in its virtual address space running in ring 3 (unprivileged). In general, tasks that involve only computation and memory accesses do not require syscalls. For example, C library functions like [strlen(3)][23] and [memcpy(3)][24] have nothing to do with the kernel. Those happen within the app.
+
+The man page sections for a C library function (the 2 and 3 in parenthesis) also offer clues. Section 2 is used for system call wrappers, while section 3 contains other C library functions. However, as we saw with printf(3), a library function might ultimately make one or more syscalls.
+
+If you're curious, here are full syscall listings for [Linux][25] (also [Filippo's list][26]) and [Windows][27]. They have ~310 and ~460 system calls, respectively. It's fun to look at those because, in a way, they represent all that software can do on a modern computer. Plus, you might find gems to help with things like interprocess communication and performance.
This is an area where "Those who do not understand Unix are condemned to reinvent it, poorly." + +Many syscalls perform tasks that take [eons][28] compared to CPU cycles, for example reading from a hard drive. In those situations the calling process is often put to sleep until the underlying work is completed. Because CPUs are so fast, your average program is I/O bound and spends most of its life sleeping, waiting on syscalls. By contrast, if you strace a program busy with a computational task, you often see no syscalls being invoked. In such a case, [top(1)][29] would show intense CPU usage. + +The overhead involved in a system call can be a problem. For example, SSDs are so fast that general OS overhead can be [more expensive][30] than the I/O operation itself. Programs doing large numbers of reads and writes can also have OS overhead as their bottleneck. [Vectored I/O][31] can help some. So can [memory mapped files][32], which allow a program to read and write from disk using only memory access. Analogous mappings exist for things like video card memory. Eventually, the economics of cloud computing might lead us to kernels that eliminate or minimize user/kernel mode switches. + +Finally, syscalls have interesting security implications. One is that no matter how obfuscated a binary, you can still examine its behavior by looking at the system calls it makes. This can be used to detect malware, for example. We can also record profiles of a known program's syscall usage and alert on deviations, or perhaps whitelist specific syscalls for programs so that exploiting vulnerabilities becomes harder. We have a ton of research in this area, a number of tools, but not a killer solution yet. + +And that's it for system calls. I'm sorry for the length of this post, I hope it was helpful. More (and shorter) next week, [RSS][33] and [Twitter][34]. Also, last night I made a promise to the universe. This post is dedicated to the glorious Clube Atlético Mineiro. 
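As a parting illustration of the memory-mapped-files point above, here's a hedged sketch (Linux/POSIX assumed; the scratch path is whatever the caller picks, and `mmap_update_demo` is a name invented for this demo) of updating a file through a plain memory store instead of write(2):

```c
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Write a file, then change its contents through a MAP_SHARED mapping:
 * after the initial write(2), the update is an ordinary memory store.
 * Returns 1 on success, 0 on any failure. */
int mmap_update_demo(const char *path) {
    const char msg[] = "hello syscalls";
    int ok = 0;

    int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd < 0)
        return 0;
    if (write(fd, msg, sizeof msg) == (ssize_t) sizeof msg) {
        char *m = mmap(NULL, sizeof msg, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        if (m != MAP_FAILED) {
            memcpy(m, "HELLO", 5);   /* no write(2) involved here */
            munmap(m, sizeof msg);

            char buf[sizeof msg];
            ok = lseek(fd, 0, SEEK_SET) == 0
                 && read(fd, buf, sizeof msg) == (ssize_t) sizeof msg
                 && memcmp(buf, "HELLO syscalls", sizeof msg) == 0;
        }
    }
    close(fd);
    unlink(path);
    return ok;
}
```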
+ +-------------------------------------------------------------------------------- + +via:https://manybutfinite.com/post/system-calls/ + +作者:[Gustavo Duarte][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://duartes.org/gustavo/blog/about/ +[1]:https://manybutfinite.com/code/x86-os/pid.c +[2]:https://manybutfinite.com/post/anatomy-of-a-program-in-memory +[3]:https://manybutfinite.com/post/page-cache-the-affair-between-memory-and-files/ +[4]:http://linux.die.net/man/2/getpid +[5]:https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/unix/sysv/linux/getpid.c;h=937b1d4e113b1cff4a5c698f83d662e130d596af;hb=4c6da7da9fb1f0f94e668e6d2966a4f50a7f0d85#l49 +[6]:https://github.com/torvalds/linux/blob/v3.17/arch/x86/syscalls/syscall_64.tbl#L48 +[7]:https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/unix/sysv/linux/x86_64/sysdep.h;h=4a619dafebd180426bf32ab6b6cb0e5e560b718a;hb=4c6da7da9fb1f0f94e668e6d2966a4f50a7f0d85#l139 +[8]:https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/unix/sysv/linux/x86_64/sysdep.h;h=4a619dafebd180426bf32ab6b6cb0e5e560b718a;hb=4c6da7da9fb1f0f94e668e6d2966a4f50a7f0d85#l179 +[9]:https://manybutfinite.com/post/cpu-rings-privilege-and-protection +[10]:https://github.com/torvalds/linux/blob/v3.17/arch/x86/kernel/entry_64.S#L354-L386 +[11]:http://lwn.net/Articles/604287/ +[12]:http://lwn.net/Articles/604515/ +[13]:https://github.com/torvalds/linux/blob/v3.17/arch/x86/kernel/entry_64.S#L422 +[14]:https://github.com/torvalds/linux/blob/v3.17/arch/x86/kernel/syscall_64.c#L25 +[15]:https://github.com/torvalds/linux/blob/v3.17/kernel/sys.c#L800-L809 +[16]:https://github.com/torvalds/linux/blob/v3.17/kernel/sys.c#L800-L859 +[17]:http://linux.die.net/man/2/getpid +[18]:http://linux.die.net/man/2/syscall +[19]:http://linux.die.net/man/1/strace 
+[20]:https://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man1/dtruss.1m.html +[21]:http://dtrace.org/blogs/brendan/2011/10/10/top-10-dtrace-scripts-for-mac-os-x/ +[22]:http://technet.microsoft.com/en-us/sysinternals/bb842062.aspx +[23]:http://linux.die.net/man/3/strlen +[24]:http://linux.die.net/man/3/memcpy +[25]:https://github.com/torvalds/linux/blob/v3.17/arch/x86/syscalls/syscall_64.tbl +[26]:https://filippo.io/linux-syscall-table/ +[27]:http://j00ru.vexillium.org/ntapi/ +[28]:https://manybutfinite.com/post/what-your-computer-does-while-you-wait/ +[29]:http://linux.die.net/man/1/top +[30]:http://danluu.com/clwb-pcommit/ +[31]:http://en.wikipedia.org/wiki/Vectored_I/O +[32]:https://manybutfinite.com/post/page-cache-the-affair-between-memory-and-files/ +[33]:http://feeds.feedburner.com/GustavoDuarte +[34]:http://twitter.com/food4hackers \ No newline at end of file diff --git a/sources/tech/20170319 ftrace trace your kernel functions.md b/sources/tech/20170319 ftrace trace your kernel functions.md index 0ff3fd6416..3ca42ab1a3 100644 --- a/sources/tech/20170319 ftrace trace your kernel functions.md +++ b/sources/tech/20170319 ftrace trace your kernel functions.md @@ -1,4 +1,4 @@ -ftrace: trace your kernel functions! +Translating by qhwdw ftrace: trace your kernel functions! ============================================================ Hello! Today we’re going to talk about a debugging tool we haven’t talked about much before on this blog: ftrace. What could be more exciting than a new debugging tool?! 
diff --git a/sources/tech/20171009 10 layers of Linux container security - Opensource.com.md b/sources/tech/20171009 10 layers of Linux container security - Opensource.com.md index b992cac2c3..6bb722f516 100644 --- a/sources/tech/20171009 10 layers of Linux container security - Opensource.com.md +++ b/sources/tech/20171009 10 layers of Linux container security - Opensource.com.md @@ -1,4 +1,4 @@ -10 layers of Linux container security | Opensource.com +Translating by qhwdw 10 layers of Linux container security | Opensource.com ====== ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDU_UnspokenBlockers_1110_A.png?itok=x8A9mqVA) From ea33991ab49852635ebb1367a35e405f3ec087de Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 16 Jan 2018 22:15:32 +0800 Subject: [PATCH 037/226] PRF&PUB:20171119 10 Best LaTeX Editors For Linux.md @FSSlc https://linux.cn/article-9247-1.html --- ...0171119 10 Best LaTeX Editors For Linux.md | 144 ++++++++++++++ ...0171119 10 Best LaTeX Editors For Linux.md | 184 ------------------ 2 files changed, 144 insertions(+), 184 deletions(-) create mode 100644 published/20171119 10 Best LaTeX Editors For Linux.md delete mode 100644 translated/tech/20171119 10 Best LaTeX Editors For Linux.md diff --git a/published/20171119 10 Best LaTeX Editors For Linux.md b/published/20171119 10 Best LaTeX Editors For Linux.md new file mode 100644 index 0000000000..0493502624 --- /dev/null +++ b/published/20171119 10 Best LaTeX Editors For Linux.md @@ -0,0 +1,144 @@ +10 款 Linux 平台上最好的 LaTeX 编辑器 +====== + +**简介:一旦你克服了 LaTeX 的学习曲线,就没有什么比 LaTeX 更棒了。下面介绍的是针对 Linux 和其他平台的最好的 LaTeX 编辑器。** + +### LaTeX 是什么? + +[LaTeX][1] 是一个文档制作系统。与纯文本编辑器不同,在 LaTeX 编辑器中你不能只写纯文本,为了组织文档的内容,你还必须使用一些 LaTeX 命令。 + +![LaTeX 示例][3] + +LaTeX 编辑器一般用在出于学术目的的科学研究文档或书籍的出版,最重要的是,当你需要处理包含众多复杂数学符号的文档时,它能够为你带来方便。当然,使用 LaTeX 编辑器是很有趣的,但它也并非总是很有用,除非你对所要编写的文档有一些特别的需求。 + +### 为什么你应当使用 LaTeX? 
+ +好吧,正如我前面所提到的那样,使用 LaTeX 编辑器便意味着你有着特定的需求。为了捣腾 LaTeX 编辑器,并不需要你有一颗极客的头脑。但对于那些使用一般文本编辑器的用户来说,它并不是一个很有效率的解决方法。 + +假如你正在寻找一款工具来精心制作一篇文档,同时你对花费时间在格式化文本上没有任何兴趣,那么 LaTeX 编辑器或许正是你所寻找的那款工具。在 LaTeX 编辑器中,你只需要指定文档的类型,它便会相应地为你设置好文档的字体种类和大小尺寸。正是基于这个原因,难怪它会被认为是 [给作家的最好开源工具][4] 之一。 + +但请务必注意: LaTeX 编辑器并不是自动化的工具,你必须首先学会一些 LaTeX 命令来让它能够精确地处理文本的格式。 + +### 针对 Linux 平台的 10 款最好 LaTeX 编辑器 + +事先说明一下,以下列表并没有一个明确的先后顺序,序号为 3 的编辑器并不一定比序号为 7 的编辑器优秀。 + +#### 1、 LyX + +![][5] + +[LyX][6] 是一个开源的 LaTeX 编辑器,即是说它是网络上可获取到的最好的文档处理引擎之一。LyX 帮助你集中于你的文章,并忘记对单词的格式化,而这些正是每个 LaTeX 编辑器应当做的。LyX 能够让你根据文档的不同,管理不同的文档内容。一旦安装了它,你就可以控制文档中的很多东西了,例如页边距、页眉、页脚、空白、缩进、表格等等。 + +假如你正忙着精心撰写科学类文档、研究论文或类似的文档,你将会很高兴能够体验到 LyX 的公式编辑器,这也是其特色之一。 LyX 还包括一系列的教程来入门,使得入门没有那么多的麻烦。 + +#### 2、 Texmaker + +![][7] + +[Texmaker][8] 被认为是 GNOME 桌面环境下最好的 LaTeX 编辑器之一。它呈现出一个非常好的用户界面,带来了极好的用户体验。它也被称之为最实用的 LaTeX 编辑器之一。假如你经常进行 PDF 的转换,你将发现 TeXmaker 相比其他编辑器更加快速。在你书写的同时,你也可以预览你的文档最终将是什么样子的。同时,你也可以观察到可以很容易地找到所需要的符号。 + +Texmaker 也提供一个扩展的快捷键支持。你有什么理由不试着使用它呢? + +#### 3、 TeXstudio + +![][9] + +假如你想要一个这样的 LaTeX 编辑器:它既能为你提供相当不错的自定义功能,又带有一个易用的界面,那么 [TeXstudio][10] 便是一个完美的选择。它的 UI 确实很简单,但是不粗糙。 TeXstudio 带有语法高亮,自带一个集成的阅读器,可以让你检查参考文献,同时还带有一些其他的辅助工具。 + +它同时还支持某些酷炫的功能,例如自动补全,链接覆盖,书签,多游标等等,这使得书写 LaTeX 文档变得比以前更加简单。 + +TeXstudio 的维护很活跃,对于新手或者高级写作者来说,这使得它成为一个引人注目的选择。 + +#### 4、 Gummi + +![][11] + +[Gummi][12] 是一个非常简单的 LaTeX 编辑器,它基于 GTK+ 工具箱。当然,在这个编辑器中你找不到许多华丽的选项,但如果你只想能够立刻着手写作, 那么 Gummi 便是我们给你的推荐。它支持将文档输出为 PDF 格式,支持语法高亮,并帮助你进行某些基础的错误检查。尽管在 GitHub 上它已经不再被活跃地维护,但它仍然工作地很好。 + +#### 5、 TeXpen + +![][13] + +[TeXpen][14] 是另一个简洁的 LaTeX 编辑器。它为你提供了自动补全功能。但其用户界面或许不会让你感到印象深刻。假如你对用户界面不在意,又想要一个超级容易的 LaTeX 编辑器,那么 TeXpen 将满足你的需求。同时 TeXpen 还能为你校正或提高在文档中使用的英语语法和表达式。 + +#### 6、 ShareLaTeX + +![][15] + +[ShareLaTeX][16] 是一款在线 LaTeX 编辑器。假如你想与某人或某组朋友一同协作进行文档的书写,那么这便是你所需要的。 + +它提供一个免费方案和几种付费方案。甚至来自哈佛大学和牛津大学的学生也都使用它来进行个人的项目。其免费方案还允许你添加一位协作者。 + +其付费方案允许你与 GitHub 和 Dropbox 进行同步,并且能够记录完整的文档修改历史。你可以为你的每个方案选择多个协作者。对于学生,它还提供单独的计费方案。 + +#### 7、 Overleaf + +![][17] + +[Overleaf][18] 是另一款在线的 LaTeX 
编辑器。它与 ShareLaTeX 类似,它为专家和学生提供了不同的计费方案。它也提供了一个免费方案,使用它你可以与 GitHub 同步,检查你的修订历史,或添加多个合作者。 + +在每个项目中,它对文件的数目有所限制。所以在大多数情况下如果你对 LaTeX 文件非常熟悉,这并不会为你带来不便。 + +#### 8、 Authorea + +![][19] + +[Authorea][20] 是一个美妙的在线 LaTeX 编辑器。当然,如果考虑到价格,它可能不是最好的一款。对于免费方案,它有 100 MB 的数据上传限制和每次只能创建一个私有文档。而付费方案则提供更多的额外好处,但如果考虑到价格,它可能不是最便宜的。你应该选择 Authorea 的唯一原因应该是因为其用户界面。假如你喜爱使用一款提供令人印象深刻的用户界面的工具,那就不要错过它。 + +#### 9、 Papeeria + +![][21] + +[Papeeria][22] 是在网络上你能够找到的最为便宜的 LaTeX 在线编辑器,如果考虑到它和其他的编辑器一样可信赖的话。假如你想免费地使用它,则你不能使用它开展私有项目。但是,如果你更偏爱公共项目,它允许你创建不限数目的项目,添加不限数目的协作者。它的特色功能是有一个非常简便的画图构造器,并且在无需额外费用的情况下使用 Git 同步。假如你偏爱付费方案,它赋予你创建 10 个私有项目的能力。 + +#### 10、 Kile + +![Kile LaTeX 编辑器][23] + +位于我们最好 LaTeX 编辑器清单的最后一位是 [Kile][24] 编辑器。有些朋友对 Kile 推崇备至,很大程度上是因为其提供某些特色功能。 + +Kile 不仅仅是一款编辑器,它还是一款类似 Eclipse 的 IDE 工具,提供了针对文档和项目的一整套环境。除了快速编译和预览功能,你还可以使用诸如命令的自动补全 、插入引用,按照章节来组织文档等功能。你真的应该使用 Kile 来见识其潜力。 + +Kile 在 Linux 和 Windows 平台下都可获取到。 + +### 总结 + +所以上面便是我们推荐的 LaTeX 编辑器,你可以在 Ubuntu 或其他 Linux 发行版本中使用它们。 + +当然,我们可能还遗漏了某些可以在 Linux 上使用并且有趣的 LaTeX 编辑器。如若你正好知道它们,请在下面的评论中让我们知晓。 + + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/LaTeX-editors-linux/ + +作者:[Ankush Das][a] +译者:[FSSlc](https://github.com/FSSlc) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://itsfoss.com/author/ankush/ +[1]:https://www.LaTeX-project.org/ +[3]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/11/latex-sample-example.jpeg +[4]:https://itsfoss.com/open-source-tools-writers/ +[5]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/lyx_latex_editor.jpg +[6]:https://www.LyX.org/ +[7]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/texmaker_latex_editor.jpg +[8]:http://www.xm1math.net/texmaker/ +[9]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/tex_studio_latex_editor.jpg +[10]:https://www.texstudio.org/ 
+[11]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/gummi_latex_editor.jpg +[12]:https://github.com/alexandervdm/gummi +[13]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/texpen_latex_editor.jpg +[14]:https://sourceforge.net/projects/texpen/ +[15]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/sharelatex.jpg +[16]:https://www.shareLaTeX.com/ +[17]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/overleaf.jpg +[18]:https://www.overleaf.com/ +[19]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/authorea.jpg +[20]:https://www.authorea.com/ +[21]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/papeeria_latex_editor.jpg +[22]:https://www.papeeria.com/ +[23]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/11/kile-latex-800x621.png +[24]:https://kile.sourceforge.io/ diff --git a/translated/tech/20171119 10 Best LaTeX Editors For Linux.md b/translated/tech/20171119 10 Best LaTeX Editors For Linux.md deleted file mode 100644 index 9b4650ac97..0000000000 --- a/translated/tech/20171119 10 Best LaTeX Editors For Linux.md +++ /dev/null @@ -1,184 +0,0 @@ -针对 Linux 平台的 10 款最好 LaTeX 编辑器 -====== -**简介:一旦你克服了 LaTeX 的学习曲线,就没有什么比得上 LaTeX 了。下面介绍的是针对 Linux 和其他平台的最好的 LaTeX 编辑器。** - -## LaTeX 是什么? - -[LaTeX][1] 是一个文档制作系统。与纯文本编辑器不同,在 LaTeX 编辑器中你不能只写纯文本,为了组织文档的内容,你还必须使用一些 LaTeX 命令。 - -![LaTeX 示例][2]![LaTeX 示例][3] - -LaTeX 编辑器一般用在出于学术目的的科学研究文档或书籍的出版,最重要的是,当你需要处理包含众多复杂数学符号的文档时,它能够为你带来方便。当然,使用 LaTeX 编辑器是很有趣的,但它也并非总是很有用,除非你对所要编写的文档有一些特别的需求。 - -## 为什么你应当使用 LaTeX? 
- -好吧,正如我前面所提到的那样,使用 LaTeX 编辑器便意味着你有着特定的需求。为了捣腾 LaTeX 编辑器,并不需要你有一颗极客的头脑。但对于那些使用一般文本编辑器的用户来说,它并不是一个很有效率的解决方法。 - -假如你正在寻找一款工具来精心制作一篇文档,同时你对花费时间在格式化文本上没有任何兴趣,那么 LaTeX 编辑器或许正是你所寻找的那款工具。在 LaTeX 编辑器中,你只需要指定文档的类型,它便会相应地为你设置好文档的字体种类和大小尺寸。正是基于这个原因,难怪它会被认为是 [给作家的最好开源工具][4] 之一。 - -但请务必注意: LaTeX 编辑器并不是自动化的工具,你必须首先学会一些 LaTeX 命令来让它能够精确地处理文本的格式。 - -## 针对 Linux 平台的 10 款最好 LaTeX 编辑器 - -事先说明一下,以下列表并没有一个明确的先后顺序,序号为 3 的编辑器并不一定比序号为 7 的编辑器优秀。 - -### 1\. LyX - -![][2] - -![][5] - -LyX 是一个开源的 LaTeX 编辑器,即是说它是网络上可获取到的最好的文档处理引擎之一。LyX 帮助你集中于你的文章,并忘记对单词的格式化,而这些正是每个 LaTeX 编辑器应当做的。LyX 能够让你根据文档的不同,管理不同的文档内容。一旦安装了它,你就可以控制文档中的很多东西了,例如页边距,页眉,页脚,空白,缩进,表格等等。 - -假如你正忙着精心撰写科学性的文档,研究论文或类似的文档,你将会很高兴能够体验到 LyX 的公式编辑器,这也是其特色之一。 LyX 还包括一系列的教程来入门,使得入门没有那么多的麻烦。 - -[LyX][6] - -### 2\. Texmaker - -![][2] - -![][7] - -Texmaker 被认为是 GNOME 桌面环境下最好的 LaTeX 编辑器之一。它呈现出一个非常好的用户界面,带来了极好的用户体验。它也被冠以最实用的 LaTeX 编辑器之一。假如你经常进行 PDF 的转换,你将发现 TeXmaker 相比其他编辑器更加快速。在你书写的同时,你也可以预览你的文档最终将是什么样子的。同时,你也可以观察到可以很容易地找到所需要的符号。 - -Texmaker 也提供一个扩展的快捷键支持。你有什么理由不试着使用它呢? - -[Texmaker][8] - -### 3\. TeXstudio - -![][2] - -![][9] - -假如你想要一个这样的 LaTeX 编辑器:它既能为你提供相当不错的自定义功能,又带有一个易用的界面,那么 TeXstudio 便是一个完美的选择。它的 UI 确实很简单,但是不粗糙。 TeXstudio 带有语法高亮,自带一个集成的阅读器,可以让你检查参考文献,同时还带有一些其他的辅助工具。 - -它同时还支持某些酷炫的功能,例如自动补全,链接覆盖,书签,多游标等等,这使得书写 LaTeX 文档变得比以前更加简单。 - -TeXstudio 的维护很活跃,对于新手或者高级写作者来说,这使得它成为一个引人注目的选择。 - -[TeXstudio][10] - -### 4\. Gummi - -![][2] - -![][11] - -Gummi 是一个非常简单的 LaTeX 编辑器,它基于 GTK+ 工具箱。当然,在这个编辑器中你找不到许多华丽的选项,但如果你只想能够立刻着手写作, 那么 Gummi 便是我们给你的推荐。它支持将文档输出为 PDF 格式,支持语法高亮,并帮助你进行某些基础的错误检查。尽管在 GitHub 上它已经不再被活跃地维护,但它仍然工作地很好。 - -[Gummi][12] - -### 5\. TeXpen - -![][2] - -![][13] - -TeXpen 是另一个简洁的 LaTeX 编辑器。它为你提供了自动补全功能。但其用户界面或许不会让你感到印象深刻。假如你对用户界面不在意,又想要一个超级容易的 LaTeX 编辑器,那么 TeXpen 将满足你的需求。同时 TeXpen 还能为你校正或提高在文档中使用的英语语法和表达式。 - -[TeXpen][14] - -### 6\. 
ShareLaTeX - -![][2] - -![][15] - -ShareLaTeX 是一款在线 LaTeX 编辑器。假如你想与某人或某组朋友一同协作进行文档的书写,那么这便是你所需要的。 - -它提供一个免费方案和几种付费方案。甚至来自哈佛大学和牛津大学的学生也都使用它来进行个人的项目。其免费方案还允许你添加一位协作者。 - -其付费方案允许你与 GitHub 和 Dropbox 进行同步,并且能够记录完整的文档修改历史。你可以为你的每个方案选择多个协作者。对于学生,它还提供单独的计费方案。 - -[ShareLaTeX][16] - -### 7\. Overleaf - -![][2] - -![][17] - -Overleaf 是另一款在线的 LaTeX 编辑器。它与 ShareLaTeX 类似,它为专家和学生提供了不同的计费方案。它也提供了一个免费方案,使用它你可以与 GitHub 同步,检查你的修订历史,或添加多个合作者。 - -在每个项目中,它对文件的数目有所限制。所以在大多数情况下如果你对 LaTeX 文件非常熟悉,这并不会为你带来不便。 - -[Overleaf][18] - -### 8\. Authorea - -![][2] - -![][19] - -Authorea 是一个美妙的在线 LaTeX 编辑器。当然,如果考虑到价格,它可能不是最好的一款。对于免费方案,它有 100 MB 的数据上传限制和每次只能创建一个私有文档。而付费方案则提供更多的额外好处,但如果考虑到价格,它可能不是最便宜的。你应该选择 Authorea 的唯一原因应该是因为其用户界面。假如你喜爱使用一款提供令人印象深刻的用户界面的工具,那就不要错过它。 - -[Authorea][20] - -### 9\. Papeeria - -![][2] - -![][21] - -Papeeria 是在网络上你能够找到的最为便宜的 LaTeX 在线编辑器,如果考虑到它和其他的编辑器一样可信赖的话。假如你想免费地使用它,则你不能使用它开展私有项目。但是,如果你更偏爱公共项目,它允许你创建不限数目的项目,添加不限数目的协作者。它的特色功能是有一个非常简便的画图构造器,并且在无需额外费用的情况下使用 Git 同步。假如你偏爱付费方案,它赋予你创建 10 个私有项目的能力。 - -[Papeeria][22] - -### 10\. 
Kile - -![Kile LaTeX 编辑器][2] - -![Kile LaTeX 编辑器][23] - -位于我们最好 LaTeX 编辑器清单的最后一位是 Kile 编辑器。有些朋友对 Kile 推崇备至,很大程度上是因为其提供某些特色功能。 - -Kile 不仅仅是一款编辑器,它还是一款类似 Eclipse 的 IDE 工具,提供了针对文档和项目的一整套环境。除了快速编译和预览功能,你还可以使用诸如命令的自动补全,插入引用,按照章节来组织文档等功能。你真的应该使用 Kile 来见识其潜力。 - -Kile 在 Linux 和 Windows 平台下都可获取到。 - -[Kile][24] - -### 总结 - -所以上面便是我们推荐的 LaTeX 编辑器,你可以在 Ubuntu 或其他 Linux 发行版本中使用它们。 - -当然,我们可能还遗漏了某些可以在 Linux 上使用并且有趣的 LaTeX 编辑器。如若你正好知道它们,请在下面的评论中让我们知晓。 - - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/LaTeX-editors-linux/ - -作者:[Ankush Das][a] -译者:[FSSlc](https://github.com/FSSlc) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://itsfoss.com/author/ankush/ -[1]:https://www.LaTeX-project.org/ -[2]:data:image/gif;base64,R0lGODdhAQABAPAAAP///wAAACwAAAAAAQABAEACAkQBADs= -[3]:https://itsfoss.com/wp-content/uploads/2017/11/LaTeX-sample-example.jpeg -[4]:https://itsfoss.com/open-source-tools-writers/ -[5]:https://itsfoss.com/wp-content/uploads/2017/10/LyX_LaTeX_editor.jpg -[6]:https://www.LyX.org/ -[7]:https://itsfoss.com/wp-content/uploads/2017/10/texmaker_LaTeX_editor.jpg -[8]:http://www.xm1math.net/texmaker/ -[9]:https://itsfoss.com/wp-content/uploads/2017/10/tex_studio_LaTeX_editor.jpg -[10]:https://www.texstudio.org/ -[11]:https://itsfoss.com/wp-content/uploads/2017/10/gummi_LaTeX_editor.jpg -[12]:https://github.com/alexandervdm/gummi -[13]:https://itsfoss.com/wp-content/uploads/2017/10/texpen_LaTeX_editor.jpg -[14]:https://sourceforge.net/projects/texpen/ -[15]:https://itsfoss.com/wp-content/uploads/2017/10/shareLaTeX.jpg -[16]:https://www.shareLaTeX.com/ -[17]:https://itsfoss.com/wp-content/uploads/2017/10/overleaf.jpg -[18]:https://www.overleaf.com/ -[19]:https://itsfoss.com/wp-content/uploads/2017/10/authorea.jpg -[20]:https://www.authorea.com/ 
-[21]:https://itsfoss.com/wp-content/uploads/2017/10/papeeria_LaTeX_editor.jpg -[22]:https://www.papeeria.com/ -[23]:https://itsfoss.com/wp-content/uploads/2017/11/kile-LaTeX-800x621.png -[24]:https://kile.sourceforge.io/ From 1105a52012530e91d9145504b7bdaf4098b84657 Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 16 Jan 2018 23:43:09 +0800 Subject: [PATCH 038/226] PRF:20090127 Anatomy of a Program in Memory.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @qhwdw 请复核有无问题。 --- ...20090127 Anatomy of a Program in Memory.md | 38 +++++++++---------- 1 file changed, 19 insertions(+), 19 deletions(-) diff --git a/translated/tech/20090127 Anatomy of a Program in Memory.md b/translated/tech/20090127 Anatomy of a Program in Memory.md index aa478535f4..e185881262 100644 --- a/translated/tech/20090127 Anatomy of a Program in Memory.md +++ b/translated/tech/20090127 Anatomy of a Program in Memory.md @@ -1,47 +1,47 @@ -剖析内存中的程序 +剖析内存中的程序之秘 ============================================================ -内存管理是一个操作系统的核心任务;它对程序员和系统管理员来说也是至关重要的。在接下来的几篇文章中,我将从实践出发着眼于内存管理,并深入到它的内部结构。尽管这些概念很普通,示例也大都来自于 32 位 x86 架构的 Linux 和 Windows 上。第一篇文章描述了在内存中程序如何分布。 +内存管理是操作系统的核心任务;它对程序员和系统管理员来说也是至关重要的。在接下来的几篇文章中,我将从实践出发着眼于内存管理,并深入到它的内部结构。虽然这些概念很通用,但示例大都来自于 32 位 x86 架构的 Linux 和 Windows 上。这第一篇文章描述了在内存中程序如何分布。 -在一个多任务操作系统中的每个进程都运行在它自己的内存“沙箱”中。这个沙箱是一个虚拟地址空间,它在 32 位的模式中它总共有 4GB 的内存地址块。这些虚拟地址是通过内核页表映射到物理地址的,并且这些虚拟地址是由操作系统内核来维护,进而被进程所消费的。每个进程都有它自己的一组页表,但是在它这里仅是一个钩子。一旦虚拟地址被启用,这些虚拟地址将被应用到这台电脑上的 _所有软件_,_包括内核本身_。因此,一部分虚拟地址空间必须保留给内核使用: +在一个多任务操作系统中的每个进程都运行在它自己的内存“沙箱”中。这个沙箱是一个虚拟地址空间virtual address space,在 32 位的模式中它总共有 4GB 的内存地址块。这些虚拟地址是通过内核页表page table映射到物理地址的,并且这些虚拟地址是由操作系统内核来维护,进而被进程所消费的。每个进程都有它自己的一组页表,但是这里有点玄机。一旦虚拟地址被启用,这些虚拟地址将被应用到这台电脑上的 _所有软件_,_包括内核本身_。因此,一部分虚拟地址空间必须保留给内核使用: ![Kernel/User Memory Split](http://static.duartes.org/img/blogPosts/kernelUserMemorySplit.png) -但是,这并不说内核就使用了很多的物理内存,恰恰相反,它只使用了很少一部分用于去做地址映射。内核空间在内核页表中被标记为仅 [特权代码][1] (ring 2 
或更低)独占使用,因此,如果一个用户模式的程序尝试去访问它,将触发一个页面故障错误。在 Linux 中,内核空间是始终存在的,并且在所有进程中都映射相同的物理内存。内核代码和数据总是可寻址的,准备随时去处理中断或者系统调用。相比之下,用户模式中的地址空间,在每次进程切换时都会发生变化: +但是,这并**不是**说内核就使用了很多的物理内存,恰恰相反,它只使用了很少一部分可用的地址空间映射到其所需要的物理内存。内核空间在内核页表中被标记为独占使用于 [特权代码][1] (ring 2 或更低),因此,如果一个用户模式的程序尝试去访问它,将触发一个页面故障错误。在 Linux 中,内核空间是始终存在的,并且在所有进程中都映射相同的物理内存。内核代码和数据总是可寻址的,准备随时去处理中断或者系统调用。相比之下,用户模式中的地址空间,在每次进程切换时都会发生变化: ![Process Switch Effects on Virtual Memory](http://static.duartes.org/img/blogPosts/virtualMemoryInProcessSwitch.png) -蓝色的区域代表映射到物理地址的虚拟地址空间,白色的区域是尚未映射的部分。在上面的示例中,Firefox 因它令人惊奇的“狂吃”内存而使用了大量的虚拟内存空间。在地址空间中不同的组合对应了不同的内存段,像堆、栈、等等。请注意,这些段只是一系列内存地址的简化表示,它与 [Intel 类型的段][2] _并没有任何关系_ 。不过,这是一个在 Linux 中的标准的段布局: +蓝色的区域代表映射到物理地址的虚拟地址空间,白色的区域是尚未映射的部分。在上面的示例中,众所周知的内存“饕餮” Firefox 使用了大量的虚拟内存空间。在地址空间中不同的条带对应了不同的内存段,像heapstack等等。请注意,这些段只是一系列内存地址的简化表示,它与 [Intel 类型的段][2] _并没有任何关系_ 。不过,这是一个在 Linux 进程的标准段布局: ![Flexible Process Address Space Layout In Linux](http://static.duartes.org/img/blogPosts/linuxFlexibleAddressSpaceLayout.png) -当计算是快乐、安全、讨人喜欢的时候,在机器中的几乎每个进程上,它们的起始虚拟地址段都是完全相同的。这将使远程挖掘安全漏洞变得容易。一个漏洞利用经常需要去引用绝对内存位置:在栈中的一个地址,这个地址可能是一个库的函数,等等。远程攻击必须要“盲选”这个地址,因为地址空间都是相同的。当攻击者们这样做的时候,人们就会受到伤害。因此,地址空间随机化开始流行起来。Linux 随机化栈、内存映射段、以及在堆上增加起始地址偏移量。不幸的是,32 位的地址空间是非常拥挤的,为地址空间随机化留下的空间不多,因此 [妨碍了地址空间随机化的效果][6]。 +当计算机还是快乐、安全的时代时,在机器中的几乎每个进程上,那些段的起始虚拟地址都是**完全相同**的。这将使远程挖掘安全漏洞变得容易。漏洞利用经常需要去引用绝对内存位置:比如在栈中的一个地址,一个库函数的地址,等等。远程攻击闭着眼睛也会选择这个地址,因为地址空间都是相同的。当攻击者们这样做的时候,人们就会受到伤害。因此,地址空间随机化开始流行起来。Linux 会通过在其起始地址上增加偏移量来随机化[栈][3]、[内存映射段][4]、以及[堆][5]。不幸的是,32 位的地址空间是非常拥挤的,为地址空间随机化留下的空间不多,因此 [妨碍了地址空间随机化的效果][6]。 -在进程地址空间中最高的段是栈,在大多数编程语言中它存储本地变量和函数参数。调用一个方法或者函数将推送一个新的栈帧到这个栈。当函数返回时这个栈帧被删除。这个简单的设计,可能是因为数据严格遵循 [后进先出(LIFO)][7] 的次序,这意味着跟踪栈内容时不需要复杂的数据结构 – 一个指向栈顶的简单指针就可以做到。推送和弹出也因此而非常快且准确。也可能是,持续的栈区重用倾向于在 [CPU 缓存][8] 中保持活跃的栈内存,这样可以加快访问速度。进程中的每个线程都有它自己的栈。 +在进程地址空间中最高的段是栈,在大多数编程语言中它存储本地变量和函数参数。调用一个方法或者函数将推送一个新的栈帧stack frame到这个栈。当函数返回时这个栈帧被删除。这个简单的设计,可能是因为数据严格遵循 [后进先出(LIFO)][7] 的次序,这意味着跟踪栈内容时不需要复杂的数据结构 —— 
一个指向栈顶的简单指针就可以做到。推入和弹出也因此而非常快且准确。也可能是,持续的栈区重用往往会在 [CPU 缓存][8] 中保持活跃的栈内存,这样可以加快访问速度。进程中的每个线程都有它自己的栈。 -向栈中推送更多的而不是刚合适的数据可能会耗尽栈的映射区域。这将触发一个页面故障,在 Linux 中它是通过 [expand_stack()][9] 来处理的,它会去调用 [acct_stack_growth()][10] 来检查栈的增长是否正常。如果栈的大小低于 RLIMIT_STACK 的值(一般是 8MB 大小),那么这是一个正常的栈增长和程序的合理使用,否则可能是发生了未知问题。这是一个栈大小按需调节的常见机制。但是,栈的大小达到了上述限制,将会发生一个栈溢出,并且,程序将会收到一个段故障错误。当映射的栈为满足需要而扩展后,在栈缩小时,映射区域并不会收缩。就像美国联邦政府的预算一样,它只会扩张。 +向栈中推送更多的而不是刚合适的数据可能会耗尽栈的映射区域。这将触发一个页面故障,在 Linux 中它是通过 [`expand_stack()`][9] 来处理的,它会去调用 [`acct_stack_growth()`][10] 来检查栈的增长是否正常。如果栈的大小低于 `RLIMIT_STACK` 的值(一般是 8MB 大小),那么这是一个正常的栈增长和程序的合理使用,否则可能是发生了未知问题。这是一个栈大小按需调节的常见机制。但是,栈的大小达到了上述限制,将会发生一个栈溢出,并且,程序将会收到一个段故障Segmentation Fault错误。当映射的栈区为满足需要而扩展后,在栈缩小时,映射区域并不会收缩。就像美国联邦政府的预算一样,它只会扩张。 -动态栈增长是 [唯一例外的情况][11] ,当它去访问一个未映射的内存区域,如上图中白色部分,是允许的。除此之外的任何其它访问未映射的内存区域将在段故障中触发一个页面故障。一些映射区域是只读的,因此,尝试去写入到这些区域也将触发一个段故障。 +动态栈增长是 [唯一例外的情况][11] ,当它去访问一个未映射的内存区域,如上图中白色部分,是允许的。除此之外的任何其它访问未映射的内存区域将触发一个页面故障,导致段故障。一些映射区域是只读的,因此,尝试去写入到这些区域也将触发一个段故障。 -在栈的下面,有内存映射段。在这里,内核将文件内容直接映射到内存。任何应用程序都可以通过 Linux 的 [mmap()][12] 系统调用( [实现][13])或者 Windows 的 [CreateFileMapping()][14] / [MapViewOfFile()][15] 来请求一个映射。内存映射是实现文件 I/O 的方便高效的方式。因此,它经常被用于加载动态库。有时候,也被用于去创建一个并不匹配任何文件的匿名内存映射,这种映射经常被用做程序数据的替代。在 Linux 中,如果你通过 [malloc()][16] 去请求一个大的内存块,C 库将会创建这样一个匿名映射而不是使用堆内存。这里的‘大’ 表示是超过了MMAP_THRESHOLD 设置的字节数,它的缺省值是 128 kB,可以通过 [mallopt()][17] 去调整这个设置值。 +在栈的下面,有内存映射段。在这里,内核将文件内容直接映射到内存。任何应用程序都可以通过 Linux 的 [`mmap()`][12] 系统调用( [代码实现][13])或者 Windows 的 [`CreateFileMapping()`][14] / [`MapViewOfFile()`][15] 来请求一个映射。内存映射是实现文件 I/O 的方便高效的方式。因此,它经常被用于加载动态库。有时候,也被用于去创建一个并不匹配任何文件的匿名内存映射,这种映射经常被用做程序数据的替代。在 Linux 中,如果你通过 [`malloc()`][16] 去请求一个大的内存块,C 库将会创建这样一个匿名映射而不是使用堆内存。这里所谓的“大”表示是超过了`MMAP_THRESHOLD` 设置的字节数,它的缺省值是 128 kB,可以通过 [`mallopt()`][17] 去调整这个设置值。 -接下来讲的是“堆”,就在我们接下来的地址空间中,堆提供运行时内存分配,像栈一样,但又不同于栈的是,它分配的数据生存期要长于分配它的函数。大多数编程语言都为程序去提供堆管理支持。因此,满足内存需要是编程语言运行时和内核共同来做的事情。在 C 中,堆分配的接口是 [malloc()][18] ,它是个用户友好的接口,然而在编程语言的垃圾回收中,像 C# 中,这个接口使用 new 关键字。 
+接下来讲的是“堆”,就在我们接下来的地址空间中,堆提供运行时内存分配,像栈一样,但又不同于栈的是,它分配的数据生存期要长于分配它的函数。大多数编程语言都为程序提供了堆管理支持。因此,满足内存需要是编程语言运行时和内核共同来做的事情。在 C 中,堆分配的接口是 [`malloc()`][18] 一族,然而在垃圾回收式编程语言中,像 C#,这个接口使用 `new` 关键字。 -如果在堆中有足够的空间去满足内存请求,它可以由编程语言运行时来处理内存分配请求,而无需内核参与。否则将通过 [brk()][19] 系统调用([实现][20])来扩大堆以满足内存请求所需的大小。堆的管理是比较 [复杂的][21],在面对我们程序的混乱分配模式时,它通过复杂的算法,努力在速度和内存使用效率之间取得一种平衡。服务一个堆请求所需要的时间可能是非常可观的。实时系统有一个 [特定用途的分配器][22] 去处理这个问题。堆也会出现  _碎片化_ ,如下图所示: +如果在堆中有足够的空间可以满足内存请求,它可以由编程语言运行时来处理内存分配请求,而无需内核参与。否则将通过 [`brk()`][19] 系统调用([代码实现][20])来扩大堆以满足内存请求所需的大小。堆管理是比较 [复杂的][21],在面对我们程序的混乱分配模式时,它通过复杂的算法,努力在速度和内存使用效率之间取得一种平衡。服务一个堆请求所需要的时间可能是非常可观的。实时系统有一个 [特定用途的分配器][22] 去处理这个问题。堆也会出现  _碎片化_ ,如下图所示: ![Fragmented Heap](http://static.duartes.org/img/blogPosts/fragmentedHeap.png) -最后,我们取得了内存的低位段:BSS、数据、以及程序文本。在 C 中,静态(全局)变量的内容都保存在 BSS 和数据中。它们之间的不同之处在于,BSS 保存 _未初始化的_  静态变量的内容,它的值在源代码中并没有被程序员设置。BSS 内存区域是_匿名_的:它没有映射到任何文件上。如果你在程序中写这样的语句 static int cntActiveUserscntActiveUsers 的内容就保存在 BSS 中。 +最后,我们抵达了内存的低位段:BSS、数据、以及程序文本。在 C 中,静态(全局)变量的内容都保存在 BSS 和数据中。它们之间的不同之处在于,BSS 保存 _未初始化的_  静态变量的内容,它的值在源代码中并没有被程序员设置。BSS 内存区域是 _匿名_ 的:它没有映射到任何文件上。如果你在程序中写这样的语句 `static int cntActiveUsers`,`cntActiveUsers` 的内容就保存在 BSS 中。 -反过来,数据段,用于保存在源代码中静态变量_初始化后_的内容。这个内存区域是_非匿名_的。它映射到程序的二进值镜像上的一部分,这个二进制镜像包含在源代码中给定初始化值的静态变量内容。因此,如果你在程序中写这样的语句 static int cntWorkerBees = 10,那么,cntWorkerBees 的内容就保存在数据段中,并且初始值为 10。尽管可以通过数据段映射到一个文件,但是这是一个私有内存映射,意味着,如果在内存中这个文件发生了变化,它并不会将这种变化反映到底层的文件上。必须是这样的,否则,分配的全局变量将会改变你磁盘上的二进制文件镜像,这种做法就太不可思议了! +反过来,数据段,用于保存在源代码中静态变量 _初始化后_ 的内容。这个内存区域是 _非匿名_ 的。它映射了程序的二进值镜像上的一部分,包含了在源代码中给定初始化值的静态变量内容。因此,如果你在程序中写这样的语句 `static int cntWorkerBees = 10`,那么,`cntWorkerBees` 的内容就保存在数据段中,并且初始值为 `10`。尽管可以通过数据段映射到一个文件,但是这是一个私有内存映射,意味着,如果改变内存,它并不会将这种变化反映到底层的文件上。必须是这样的,否则,分配的全局变量将会改变你磁盘上的二进制文件镜像,这种做法就太不可思议了! 
-用图去展示一个数据段是很困难的,因为它使用一个指针。在那种情况下,指针 gonzo 的_内容_ – 保存在数据段上的一个 4 字节的内存地址。它并没有指向一个真实的字符串。而这个字符串存在于文本段中,文本段是只读的,它用于保存你的代码中的类似于字符串常量这样的内容。文本段也映射你的内存中的库,但是,如果你的程序写入到这个区域,将会触发一个段故障错误。尽管在 C 中,它比不上从一开始就避免这种指针错误那么有效,但是,这种机制也有助于避免指针错误。这里有一个展示这些段和示例变量的图: +用图去展示一个数据段是很困难的,因为它使用一个指针。在那种情况下,指针 `gonzo` 的_内容_(一个 4 字节的内存地址)保存在数据段上。然而,它并没有指向一个真实的字符串。而这个字符串存在于文本段中,文本段是只读的,它用于保存你的代码中的类似于字符串常量这样的内容。文本段也会在内存中映射你的二进制文件,但是,如果你的程序写入到这个区域,将会触发一个段故障错误。尽管在 C 中,它比不上从一开始就避免这种指针错误那么有效,但是,这种机制也有助于避免指针错误。这里有一个展示这些段和示例变量的图: ![ELF Binary Image Mapped Into Memory](http://static.duartes.org/img/blogPosts/mappingBinaryImage.png) -你可以通过读取 /proc/pid_of_process/maps 文件来检查 Linux 进程中的内存区域。请记住,一个段可以包含很多的区域。例如,每个内存映射的文件一般都在 mmap 段中的它自己的区域中,而动态库有类似于BSS 和数据一样的额外的区域。下一篇文章中我们将详细说明“区域(area)”的真正含义是什么。此外,有时候人们所说的“数据段(data segment)”是指“数据 + BSS + 堆”。 +你可以通过读取 `/proc/pid_of_process/maps` 文件来检查 Linux 进程中的内存区域。请记住,一个段可以包含很多的区域。例如,每个内存映射的文件一般都在 mmap 段中的它自己的区域中,而动态库有类似于 BSS 和数据一样的额外的区域。下一篇文章中我们将详细说明“区域area”的真正含义是什么。此外,有时候人们所说的“数据段data segment”是指“数据data + BSS + 堆”。 -你可以使用 [nm][23] 和 [objdump][24] 命令去检查二进制镜像,去显示它们的符号、地址、段、等等。最终,在 Linux 中上面描述的虚拟地址布局是一个“弹性的”布局,这就是这几年来的缺省情况。它假设 RLIMIT_STACK 有一个值。如果没有值的话,Linux 将恢复到如下所示的“经典” 布局: +你可以使用 [nm][23] 和 [objdump][24] 命令去检查二进制镜像,去显示它们的符号、地址、段等等。最终,在 Linux 中上面描述的虚拟地址布局是一个“弹性的”布局,这就是这几年来的缺省情况。它假设 `RLIMIT_STACK` 有一个值。如果没有值的话,Linux 将恢复到如下所示的“经典” 布局: ![Classic Process Address Space Layout In Linux](http://static.duartes.org/img/blogPosts/linuxClassicAddressSpaceLayout.png) @@ -51,9 +51,9 @@ via: http://duartes.org/gustavo/blog/post/anatomy-of-a-program-in-memory/ -作者:[gustavo ][a] +作者:[gustavo][a] 译者:[qhwdw](https://github.com/qhwdw) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 7d5f73161cb998ced96be124bced89f0925e6d81 Mon Sep 17 00:00:00 2001 From: geekpi Date: Wed, 17 Jan 2018 08:53:26 +0800 Subject: [PATCH 039/226] translated --- ...isks When You Type Password In 
terminal.md | 72 ------------------- ...isks When You Type Password In terminal.md | 70 ++++++++++++++++++ 2 files changed, 70 insertions(+), 72 deletions(-) delete mode 100644 sources/tech/20180105 How To Display Asterisks When You Type Password In terminal.md create mode 100644 translated/tech/20180105 How To Display Asterisks When You Type Password In terminal.md diff --git a/sources/tech/20180105 How To Display Asterisks When You Type Password In terminal.md b/sources/tech/20180105 How To Display Asterisks When You Type Password In terminal.md deleted file mode 100644 index 7a49972103..0000000000 --- a/sources/tech/20180105 How To Display Asterisks When You Type Password In terminal.md +++ /dev/null @@ -1,72 +0,0 @@ -translating---geekpi - -How To Display Asterisks When You Type Password In terminal -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/01/Display-Asterisks-When-You-Type-Password-In-terminal-1-720x340.png) - -When you type passwords in a web browser login or any GUI login, the passwords will be masked as asterisks like ******** or bullets like •••••••••••••. This is the built-in security mechanism to prevent the users near you from viewing your password. But when you type the password in Terminal to perform any administrative task with **sudo** or **su** , you won't even see the asterisks or bullets as you type the password. There won't be any visual indication of entering passwords, there won't be any cursor movement, nothing at all. You will not know whether you entered all characters or not. All you will see is just a blank screen! - -Look at the following screenshot.
- -![][2] - -As you see in the above image, I've already entered the password, but there was no indication (either asterisks or bullets). Now, I am not sure whether I entered all characters in my password or not. This security mechanism also prevents the person near you to guess the password length. Of course, this behavior can be changed. This is what this guide all about. It is not that difficult. Read on! - -#### Display Asterisks When You Type Password In terminal - -To display asterisks as you type password in Terminal, we need to make a small modification in **" /etc/sudoers"** file. Before making any changes, it is better to backup this file. To do so, just run: -``` -sudo cp /etc/sudoers{,.bak} -``` - -The above command will backup /etc/sudoers file to a new file named /etc/sudoers.bak. You can restore it, just in case something went wrong after editing the file. - -Next, edit **" /etc/sudoers"** file using command: -``` -sudo visudo -``` - -Find the following line: -``` -Defaults env_reset -``` - -![][3] - -Add an extra word **" ,pwfeedback"** to the end of that line as shown below. -``` -Defaults env_reset,pwfeedback -``` - -![][4] - -Then, press **" CTRL+x"** and **" y"** to save and close the file. Restart your Terminal to take effect the changes. - -Now, you will see asterisks when you enter password in Terminal. - -![][5] - -If you're not comfortable to see a blank screen when you type passwords in Terminal, the small tweak will help. Please be aware that the other users can predict the password length if they see the password when you type it. If you don't mind it, go ahead make the changes as described above to make your password visible (masked as asterisks, of course!). - -And, that's all for now. More good stuffs to come. Stay tuned! - -Cheers! 
- - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/display-asterisks-type-password-terminal/ - -作者:[SK][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ostechnix.com/author/sk/ -[2]:http://www.ostechnix.com/wp-content/uploads/2018/01/password-1.png () -[3]:http://www.ostechnix.com/wp-content/uploads/2018/01/visudo-1.png () -[4]:http://www.ostechnix.com/wp-content/uploads/2018/01/visudo-1-1.png () -[5]:http://www.ostechnix.com/wp-content/uploads/2018/01/visudo-2.png () diff --git a/translated/tech/20180105 How To Display Asterisks When You Type Password In terminal.md b/translated/tech/20180105 How To Display Asterisks When You Type Password In terminal.md new file mode 100644 index 0000000000..0b764d093f --- /dev/null +++ b/translated/tech/20180105 How To Display Asterisks When You Type Password In terminal.md @@ -0,0 +1,70 @@ +如何在终端输入密码时显示星号 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/01/Display-Asterisks-When-You-Type-Password-In-terminal-1-720x340.png) + +当你在 Web 浏览器或任何 GUI 登录中输入密码时,密码会被标记成星号 ******** 或圆形符号 ••••••••••••• 。这是内置的安全机制,以防止你附近的用户看到你的密码。但是当你在终端输入密码来执行任何 **sudo** 或 **su** 的管理任务时,你不会在输入密码的时候看见星号或者圆形符号。它不会有任何输入密码的视觉指示,也不会有任何光标移动,什么也没有。你不知道你是否输入了所有的字符。你只会看到一个空白的屏幕! 
+ +看看下面的截图。 + +![][2] + +正如你在上面的图片中看到的,我已经输入了密码,但没有任何指示(星号或圆形符号)。现在,我不确定我是否输入了所有密码。这个安全机制也可以防止你附近的人猜测密码长度。当然,这种行为可以改变。这是本指南要说的。这并不困难。请继续阅读。 + +#### 当你在终端输入密码时显示星号 + +要在终端输入密码时显示星号,我们需要在 **“/etc/sudoers”** 中做一些小修改。在做任何更改之前,最好备份这个文件。为此,只需运行: +``` +sudo cp /etc/sudoers{,.bak} +``` + +上述命令将 /etc/sudoers 备份成名为 /etc/sudoers.bak。你可以恢复它,以防万一在编辑文件后出错。 + +接下来,使用下面的命令编辑 **“/etc/sudoers”**: +``` +sudo visudo +``` + +找到下面这行: +``` +Defaults env_reset +``` + +![][3] + +在该行的末尾添加一个额外的单词 **“,pwfeedback”**,如下所示。 +``` +Defaults env_reset,pwfeedback +``` + +![][4] + +然后,按下 **“CTRL + x”** 和 **“y”** 保存并关闭文件。重新启动终端以使更改生效。 + +现在,当你在终端输入密码时,你会看到星号。 + +![][5] + +如果你对在终端输入密码时看不到密码感到不舒服,那么这个小技巧会有帮助。请注意,当你输入密码时,其他用户就可以预测你的密码长度。如果你不介意,请按照上述方法进行更改,以使你的密码可见(当然,标记为星号!)。 + +现在就是这样了。还有更好的东西。敬请关注! + +干杯! + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/display-asterisks-type-password-terminal/ + +作者:[SK][a] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com/author/sk/ +[2]:http://www.ostechnix.com/wp-content/uploads/2018/01/password-1.png () +[3]:http://www.ostechnix.com/wp-content/uploads/2018/01/visudo-1.png () +[4]:http://www.ostechnix.com/wp-content/uploads/2018/01/visudo-1-1.png () +[5]:http://www.ostechnix.com/wp-content/uploads/2018/01/visudo-2.png () From 925181e67bb8d1af8775865eb47a2bd27be40ca0 Mon Sep 17 00:00:00 2001 From: geekpi Date: Wed, 17 Jan 2018 08:57:53 +0800 Subject: [PATCH 040/226] translating --- .../tech/20170920 Easy APT Repository - Iain R. Learmonth.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20170920 Easy APT Repository - Iain R.
Learmonth.md +++ b/sources/tech/20170920 Easy APT Repository - Iain R. Learmonth.md @@ -1,3 +1,5 @@ +translating---geekpi + Easy APT Repository · Iain R. Learmonth ====== From 43c33d93ea93338cef7eb0c4f5f520124d7fb065 Mon Sep 17 00:00:00 2001 From: BriFuture <752736341@qq.com> Date: Wed, 17 Jan 2018 10:54:39 +0800 Subject: [PATCH 041/226] BriFuture is translating this article --- .../tech/20150703 Let-s Build A Simple Interpreter. Part 2..md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20150703 Let-s Build A Simple Interpreter. Part 2..md b/sources/tech/20150703 Let-s Build A Simple Interpreter. Part 2..md index b9f923e048..7b6cde8c30 100644 --- a/sources/tech/20150703 Let-s Build A Simple Interpreter. Part 2..md +++ b/sources/tech/20150703 Let-s Build A Simple Interpreter. Part 2..md @@ -1,3 +1,5 @@ +BriFuture is translating this article. + Let’s Build A Simple Interpreter. Part 2. ====== From e3201483d9660114c97ff840f0e0ea820c20f87e Mon Sep 17 00:00:00 2001 From: qhwdw Date: Wed, 17 Jan 2018 13:55:19 +0800 Subject: [PATCH 042/226] Translated by qhwdw --- ...06 System Calls Make the World Go Round.md | 153 ---------------- ...06 System Calls Make the World Go Round.md | 164 ++++++++++++++++++ 2 files changed, 164 insertions(+), 153 deletions(-) delete mode 100644 sources/tech/20141106 System Calls Make the World Go Round.md create mode 100644 translated/tech/20141106 System Calls Make the World Go Round.md diff --git a/sources/tech/20141106 System Calls Make the World Go Round.md b/sources/tech/20141106 System Calls Make the World Go Round.md deleted file mode 100644 index 0d0201e471..0000000000 --- a/sources/tech/20141106 System Calls Make the World Go Round.md +++ /dev/null @@ -1,153 +0,0 @@ -# Translating by qhwdw System Calls Make the World Go Round - -I hate to break it to you, but a user application is a helpless brain in a vat: - -![](https://manybutfinite.com/img/os/appInVat.png) - -Every interaction with the outside world is mediated by 
the kernel through system calls. If an app saves a file, writes to the terminal, or opens a TCP connection, the kernel is involved. Apps are regarded as highly suspicious: at best a bug-ridden mess, at worst the malicious brain of an evil genius.

These system calls are function calls from an app into the kernel. They use a specific mechanism for safety reasons, but really you're just calling the kernel's API. The term "system call" can refer to a specific function offered by the kernel (e.g., the open() system call) or to the calling mechanism. You can also say syscall for short.

This post looks at system calls, how they differ from calls to a library, and tools to poke at this OS/app interface. A solid understanding of what happens within an app versus what happens through the OS can turn an impossible-to-fix problem into a quick, fun puzzle.

So here's a running program, a user process:

![](https://manybutfinite.com/img/os/sandbox.png)

It has a private [virtual address space][2], its very own memory sandbox. The vat, if you will. In its address space, the program's binary file plus the libraries it uses are all [memory mapped][3]. Part of the address space maps the kernel itself.

Below is the code for our program, pid, which simply retrieves its process id via [getpid(2)][4]:

pid.c [download][1]

```c
#include <sys/types.h>
#include <unistd.h>
#include <stdio.h>

int main()
{
    pid_t p = getpid();
    printf("%d\n", p);
}
```

In Linux, a process isn't born knowing its PID. It must ask the kernel, so this requires a system call:

![](https://manybutfinite.com/img/os/syscallEnter.png)

It all starts with a call to the C library's [getpid()][5], which is a wrapper for the system call. When you call functions like open(2), read(2), and friends, you're calling these wrappers. This is true for many languages where the native methods ultimately end up in libc.

Wrappers offer convenience atop the bare-bones OS API, helping keep the kernel lean.
Lines of code is where bugs live, and all kernel code runs in privileged mode, where mistakes can be disastrous. Anything that can be done in user mode should be done in user mode. Let the libraries offer friendly methods and fancy argument processing a la printf(3). - -Compared to web APIs, this is analogous to building the simplest possible HTTP interface to a service and then offering language-specific libraries with helper methods. Or maybe some caching, which is what libc's getpid() does: when first called it actually performs a system call, but the PID is then cached to avoid the syscall overhead in subsequent invocations. - -Once the wrapper has done its initial work it's time to jump into hyperspace the kernel. The mechanics of this transition vary by processor architecture. In Intel processors, arguments and the [syscall number][6] are [loaded into registers][7], then an [instruction][8] is executed to put the CPU in [privileged mode][9] and immediately transfer control to a global syscall [entry point][10] within the kernel. If you're interested in details, David Drysdale has two great articles in LWN ([first][11], [second][12]). - -The kernel then uses the syscall number as an [index][13] into [sys_call_table][14], an array of function pointers to each syscall implementation. Here, [sys_getpid][15] is called: - -![](https://manybutfinite.com/img/os/syscallExit.png) - -In Linux, syscall implementations are mostly arch-independent C functions, sometimes [trivial][16], insulated from the syscall mechanism by the kernel's excellent design. They are regular code working on general data structures. Well, apart from being completely paranoid about argument validation. - -Once their work is done they return normally, and the arch-specific code takes care of transitioning back into user mode where the wrapper does some post processing. In our example, [getpid(2)][17] now caches the PID returned by the kernel. 
Other wrappers might set the global errno variable if the kernel returns an error. Small things to let you know GNU cares.

If you want to be raw, glibc offers the [syscall(2)][18] function, which makes a system call without a wrapper. You can also do so yourself in assembly. There's nothing magical or privileged about a C library.

This syscall design has far-reaching consequences. Let's start with the incredibly useful [strace(1)][19], a tool you can use to spy on system calls made by Linux processes (in Macs, see [dtruss(1m)][20] and the amazing [dtrace][21]; in Windows, see [sysinternals][22]). Here's strace on pid:

```
~/code/x86-os$ strace ./pid

execve("./pid", ["./pid"], [/* 20 vars */]) = 0
brk(0)                                  = 0x9aa0000
access("/etc/ld.so.nohwcap", F_OK)      = -1 ENOENT (No such file or directory)
mmap2(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7767000
access("/etc/ld.so.preload", R_OK)      = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
fstat64(3, {st_mode=S_IFREG|0644, st_size=18056, ...}) = 0
mmap2(NULL, 18056, PROT_READ, MAP_PRIVATE, 3, 0) = 0xb7762000
close(3)                                = 0

[...snip...]

getpid()                                = 14678
fstat64(1, {st_mode=S_IFCHR|0600, st_rdev=makedev(136, 1), ...}) = 0
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7766000
write(1, "14678\n", 614678
)                                       = 6
exit_group(6)                           = ?
```

Each line of output shows a system call, its arguments, and a return value. If you put getpid(2) in a loop running 1000 times, you would still have only one getpid() syscall because of the PID caching. We can also see that printf(3) calls write(2) after formatting the output string.

strace can start a new process and also attach to an already running one. You can learn a lot by looking at the syscalls made by different programs. For example, what does the sshd daemon do all day?
-
-```
-~/code/x86-os$ ps ax | grep sshd
-12218 ? Ss 0:00 /usr/sbin/sshd -D
-~/code/x86-os$ sudo strace -p 12218
-Process 12218 attached - interrupt to quit
-select(7, [3 4], NULL, NULL, NULL
-[ ... nothing happens ...
-  No fun, it's just waiting for a connection using select(2)
-  If we wait long enough, we might see new keys being generated and so on,
-  but let's attach again, tell strace to follow forks (-f), and connect via SSH]
-~/code/x86-os$ sudo strace -p 12218 -f
-[lots of calls happen during an SSH login, only a few shown]
-[pid 14692] read(3, "-----BEGIN RSA PRIVATE KEY-----\n"..., 1024) = 1024
-[pid 14692] open("/usr/share/ssh/blacklist.RSA-2048", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
-[pid 14692] open("/etc/ssh/blacklist.RSA-2048", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
-[pid 14692] open("/etc/ssh/ssh_host_dsa_key", O_RDONLY|O_LARGEFILE) = 3
-[pid 14692] open("/etc/protocols", O_RDONLY|O_CLOEXEC) = 4
-[pid 14692] read(4, "# Internet (IP) protocols\n#\n# Up"..., 4096) = 2933
-[pid 14692] open("/etc/hosts.allow", O_RDONLY) = 4
-[pid 14692] open("/lib/i386-linux-gnu/libnss_dns.so.2", O_RDONLY|O_CLOEXEC) = 4
-[pid 14692] stat64("/etc/pam.d", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
-[pid 14692] open("/etc/pam.d/common-password", O_RDONLY|O_LARGEFILE) = 8
-[pid 14692] open("/etc/pam.d/other", O_RDONLY|O_LARGEFILE) = 4
-```
-
-SSH is a large chunk to bite off, but it gives a feel for strace usage. Being able to see which files an app opens can be useful ("where the hell is this config coming from?"). If you have a process that appears stuck, you can strace it and see what it might be doing via system calls. When some app is quitting unexpectedly without a proper error message, check if a syscall failure explains it. You can also use filters, time each call, and so on:
-
-```
-~/code/x86-os$ strace -T -e trace=recv curl --silent www.google.com. > /dev/null
-recv(3, "HTTP/1.1 200 OK\r\nDate: Wed, 05 N"..., 16384, 0) = 4164 <0.000007>
-recv(3, "fl a{color:#36c}a:visited{color:"..., 16384, 0) = 2776 <0.000005>
-recv(3, "adient(top,#4d90fe,#4787ed);filt"..., 16384, 0) = 4164 <0.000007>
-recv(3, "gbar.up.spd(b,d,1,!0);break;case"..., 16384, 0) = 2776 <0.000006>
-recv(3, "$),a.i.G(!0)),window.gbar.up.sl("..., 16384, 0) = 1388 <0.000004>
-recv(3, "margin:0;padding:5px 8px 0 6px;v"..., 16384, 0) = 1388 <0.000007>
-recv(3, "){window.setTimeout(function(){v"..., 16384, 0) = 1484 <0.000006>
-```
-
-I encourage you to explore these tools in your OS. Using them well is like having a super power.
-
-But enough useful stuff, let's go back to design. We've seen that a userland app is trapped in its virtual address space running in ring 3 (unprivileged). In general, tasks that involve only computation and memory accesses do not require syscalls. For example, C library functions like [strlen(3)][23] and [memcpy(3)][24] have nothing to do with the kernel. Those happen within the app.
-
-The man page sections for a C library function (the 2 and 3 in parentheses) also offer clues. Section 2 is used for system call wrappers, while section 3 contains other C library functions. However, as we saw with printf(3), a library function might ultimately make one or more syscalls.
-
-If you're curious, here are full syscall listings for [Linux][25] (also [Filippo's list][26]) and [Windows][27]. They have ~310 and ~460 system calls, respectively. It's fun to look at those because, in a way, they represent all that software can do on a modern computer. Plus, you might find gems to help with things like interprocess communication and performance. This is an area where "Those who do not understand Unix are condemned to reinvent it, poorly."
-
-Many syscalls perform tasks that take [eons][28] compared to CPU cycles, for example reading from a hard drive. 
In those situations the calling process is often put to sleep until the underlying work is completed. Because CPUs are so fast, your average program is I/O bound and spends most of its life sleeping, waiting on syscalls. By contrast, if you strace a program busy with a computational task, you often see no syscalls being invoked. In such a case, [top(1)][29] would show intense CPU usage. - -The overhead involved in a system call can be a problem. For example, SSDs are so fast that general OS overhead can be [more expensive][30] than the I/O operation itself. Programs doing large numbers of reads and writes can also have OS overhead as their bottleneck. [Vectored I/O][31] can help some. So can [memory mapped files][32], which allow a program to read and write from disk using only memory access. Analogous mappings exist for things like video card memory. Eventually, the economics of cloud computing might lead us to kernels that eliminate or minimize user/kernel mode switches. - -Finally, syscalls have interesting security implications. One is that no matter how obfuscated a binary, you can still examine its behavior by looking at the system calls it makes. This can be used to detect malware, for example. We can also record profiles of a known program's syscall usage and alert on deviations, or perhaps whitelist specific syscalls for programs so that exploiting vulnerabilities becomes harder. We have a ton of research in this area, a number of tools, but not a killer solution yet. - -And that's it for system calls. I'm sorry for the length of this post, I hope it was helpful. More (and shorter) next week, [RSS][33] and [Twitter][34]. Also, last night I made a promise to the universe. This post is dedicated to the glorious Clube Atlético Mineiro. 
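
As a self-contained recap of the wrapper-versus-raw distinction discussed above, here is a small C sketch. It is my own illustration, not code from the original post: it fetches the PID once through the glibc wrapper and once through the raw syscall(2) interface, assuming Linux with glibc, where SYS_getpid (from <sys/syscall.h>) is the syscall number mentioned earlier. The function names pid_via_wrapper and pid_via_raw_syscall are invented for this example.

```c
/* Sketch, not from the original article: two roads to the same
 * sys_getpid implementation inside the kernel. Assumes Linux + glibc. */
#define _GNU_SOURCE
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>

pid_t pid_via_wrapper(void)
{
    /* The friendly libc wrapper; older glibc versions cached the
     * result after the first real system call. */
    return getpid();
}

pid_t pid_via_raw_syscall(void)
{
    /* No wrapper: syscall(2) loads SYS_getpid and traps straight
     * into the kernel's syscall entry point. */
    return (pid_t)syscall(SYS_getpid);
}
```

Running either path under strace shows the same getpid() line; only the wrapper's (historical) caching behavior differs.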
- --------------------------------------------------------------------------------- - -via:https://manybutfinite.com/post/system-calls/ - -作者:[Gustavo Duarte][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://duartes.org/gustavo/blog/about/ -[1]:https://manybutfinite.com/code/x86-os/pid.c -[2]:https://manybutfinite.com/post/anatomy-of-a-program-in-memory -[3]:https://manybutfinite.com/post/page-cache-the-affair-between-memory-and-files/ -[4]:http://linux.die.net/man/2/getpid -[5]:https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/unix/sysv/linux/getpid.c;h=937b1d4e113b1cff4a5c698f83d662e130d596af;hb=4c6da7da9fb1f0f94e668e6d2966a4f50a7f0d85#l49 -[6]:https://github.com/torvalds/linux/blob/v3.17/arch/x86/syscalls/syscall_64.tbl#L48 -[7]:https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/unix/sysv/linux/x86_64/sysdep.h;h=4a619dafebd180426bf32ab6b6cb0e5e560b718a;hb=4c6da7da9fb1f0f94e668e6d2966a4f50a7f0d85#l139 -[8]:https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/unix/sysv/linux/x86_64/sysdep.h;h=4a619dafebd180426bf32ab6b6cb0e5e560b718a;hb=4c6da7da9fb1f0f94e668e6d2966a4f50a7f0d85#l179 -[9]:https://manybutfinite.com/post/cpu-rings-privilege-and-protection -[10]:https://github.com/torvalds/linux/blob/v3.17/arch/x86/kernel/entry_64.S#L354-L386 -[11]:http://lwn.net/Articles/604287/ -[12]:http://lwn.net/Articles/604515/ -[13]:https://github.com/torvalds/linux/blob/v3.17/arch/x86/kernel/entry_64.S#L422 -[14]:https://github.com/torvalds/linux/blob/v3.17/arch/x86/kernel/syscall_64.c#L25 -[15]:https://github.com/torvalds/linux/blob/v3.17/kernel/sys.c#L800-L809 -[16]:https://github.com/torvalds/linux/blob/v3.17/kernel/sys.c#L800-L859 -[17]:http://linux.die.net/man/2/getpid -[18]:http://linux.die.net/man/2/syscall -[19]:http://linux.die.net/man/1/strace 
-[20]:https://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man1/dtruss.1m.html -[21]:http://dtrace.org/blogs/brendan/2011/10/10/top-10-dtrace-scripts-for-mac-os-x/ -[22]:http://technet.microsoft.com/en-us/sysinternals/bb842062.aspx -[23]:http://linux.die.net/man/3/strlen -[24]:http://linux.die.net/man/3/memcpy -[25]:https://github.com/torvalds/linux/blob/v3.17/arch/x86/syscalls/syscall_64.tbl -[26]:https://filippo.io/linux-syscall-table/ -[27]:http://j00ru.vexillium.org/ntapi/ -[28]:https://manybutfinite.com/post/what-your-computer-does-while-you-wait/ -[29]:http://linux.die.net/man/1/top -[30]:http://danluu.com/clwb-pcommit/ -[31]:http://en.wikipedia.org/wiki/Vectored_I/O -[32]:https://manybutfinite.com/post/page-cache-the-affair-between-memory-and-files/ -[33]:http://feeds.feedburner.com/GustavoDuarte -[34]:http://twitter.com/food4hackers \ No newline at end of file diff --git a/translated/tech/20141106 System Calls Make the World Go Round.md b/translated/tech/20141106 System Calls Make the World Go Round.md new file mode 100644 index 0000000000..e2841b6d4b --- /dev/null +++ b/translated/tech/20141106 System Calls Make the World Go Round.md @@ -0,0 +1,164 @@ +# 系统调用,让世界转起来! 
+
+很遗憾地告诉你,一个用户应用程序在整个系统中就像一个可怜的孤儿一样无依无靠:
+
+![](https://manybutfinite.com/img/os/appInVat.png)
+
+它与外部世界的每次交流都要在内核的帮助下通过系统调用才能完成。一个应用程序要想保存一个文件、写到终端、或者打开一个 TCP 连接,内核都要参与。应用程序是被内核高度怀疑的:认为它到处充斥着 bug,而最糟糕的是那些充满邪恶想法的天才大脑(写的恶意程序)。
+
+这些系统调用是从一个应用程序到内核的函数调用。出于安全考虑,它们使用了一个特定的机制,实际上你只是调用了内核的 API。“系统调用”这个术语指的是调用由内核提供的特定功能(比如,系统调用 open())或者是这条调用途径。你也可以简称为:syscall。
+
+这篇文章讲解系统调用,系统调用与调用一个库有何区别,以及在操作系统/应用程序接口上的探查工具。彻底了解应用程序借助操作系统都发生了哪些事情,就可以将一个看起来不可能解决的问题转变成一个快速而有趣的排查过程。
+
+因此,下图是一个运行着的应用程序,一个用户进程:
+
+![](https://manybutfinite.com/img/os/sandbox.png)
+
+它有一个私有的 [虚拟地址空间][2]—— 它自己的内存沙箱。整个系统都在它的地址空间中,程序的二进制文件加上它所需要的库全部都 [被映射到内存中][3]。内核自身也映射为地址空间的一部分。
+
+下面是我们程序的代码,进程的 PID 可以通过 [getpid(2)][4] 获取:
+
+pid.c [download][1]
+
+```
+#include <sys/types.h>
+#include <unistd.h>
+#include <stdio.h>
+
+int main()
+{
+    pid_t p = getpid();
+    printf("%d\n", p);
+}
+```
+
+在 Linux 中,一个进程并不是一出生就知道它的 PID。要想知道它的 PID,它必须去询问内核,因此,这个询问请求也是一个系统调用:
+
+![](https://manybutfinite.com/img/os/syscallEnter.png)
+
+它的第一步是调用 C 库的 [getpid()][5],它是这个系统调用的一个封装。当你调用 open(2)、read(2) 之类的功能时,你调用的就是这些封装。其实,对于大多数编程语言,这一块的原生方法最终都是在 libc 中完成的。
+
+极简设计的操作系统都提供了方便的 API 封装,这样可以保持内核的简洁。所有的内核代码都运行在特权模式下,有 bug 的内核代码行将会产生致命的后果。能在用户模式下做的任何事情都应该在用户模式中完成。由库来提供友好的方法和便利的参数处理,像 printf(3) 这样。
+
+我们拿 web API 进行比较,内核的封装方式与为一个服务构建尽可能简单的 HTTP 接口、然后提供带有辅助方法的特定语言的库是类似的。或者再加上一些缓存,这就是 libc 的 getpid() 所做的:首次调用时,它真实地去执行了一个系统调用,然后,它缓存了 PID,这样就可以避免后续调用时的系统调用开销。
+
+一旦封装完成了它的初始工作,它就会跳进“超空间”,进入内核(译者注:此处的“超空间(hyperspace)”是借用科幻作品的比喻,指跳转进入内核空间)。这种转换机制因处理器架构设计不同而不同。在 Intel 处理器中,参数和 [系统调用号][6] 是 [加载到寄存器中的][7],然后,运行一个 [指令][8] 将 CPU 置于 [特权模式][9] 中,并立即将控制权转移到内核中的全局系统调用 [入口][10]。如果你对这些细节感兴趣,David Drysdale 在 LWN 上有两篇非常好的文章([第一篇][11],[第二篇][12])。
+
+内核然后使用这个系统调用号作为进入 [sys_call_table][14] 的一个 [索引][13],它是一个由函数指针组成的数组,指向每个系统调用的实现。在这里,调用了 [sys_getpid][15]:
+
+![](https://manybutfinite.com/img/os/syscallExit.png)
+
+在 Linux 中,系统调用大多数都实现为与具体体系结构无关的 C 函数,有时候它们 
[很琐碎][16],但是通过内核优秀的设计,系统调用被严格隔离。它们是工作在一般数据结构中的普通代码。当然,它们对参数的校验是非常偏执、非常严格的。
+
+一旦它们的工作完成,它们就会正常返回,然后,由特定于体系结构的代码负责转回到用户模式,封装将在那里继续做一些后续处理工作。在我们的例子中,[getpid(2)][17] 现在缓存了由内核返回的 PID。如果内核返回了一个错误,另外的封装可以去设置全局 errno 变量。让你知道 GNU 所关心的一些小事。
+
+如果你想要原生的调用,glibc 提供了 [syscall(2)][18] 函数,它可以不通过封装来产生一个系统调用。你也可以用汇编自己去发起系统调用。对一个 C 库来说,这里并没有什么神奇的或者特权的地方。
+
+这种系统调用的设计影响是很深远的。我们从一个非常有用的 [strace(1)][19] 开始,这个工具可以用来监视 Linux 进程的系统调用(在 Mac 上,看 [dtruss(1m)][20] 和神奇的 [dtrace][21];在 Windows 中,看 [sysinternals][22])。下面是对 pid 程序的跟踪:
+
+```
+~/code/x86-os$ strace ./pid
+execve("./pid", ["./pid"], [/* 20 vars */]) = 0
+brk(0) = 0x9aa0000
+access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
+mmap2(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7767000
+access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
+open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
+fstat64(3, {st_mode=S_IFREG|0644, st_size=18056, ...}) = 0
+mmap2(NULL, 18056, PROT_READ, MAP_PRIVATE, 3, 0) = 0xb7762000
+close(3) = 0
+[...snip...]
+getpid() = 14678
+fstat64(1, {st_mode=S_IFCHR|0600, st_rdev=makedev(136, 1), ...}) = 0
+mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7766000
+write(1, "14678\n", 614678
+) = 6
+exit_group(6) = ?
+```
+
+输出的每一行都显示了一个系统调用、它的参数、以及返回值。如果你在一个循环中将 getpid(2) 运行 1000 次,你就会发现始终只有一个 getpid() 系统调用,因为,它的 PID 已经被缓存了。我们也可以看到在格式化输出字符串之后,printf(3) 调用了 write(2)。
+
+strace 可以开始一个新进程,也可以附加到一个已经运行的进程上。你可以通过不同程序的系统调用学到很多的东西。例如,sshd 守护进程一天都干了什么?
+
+```
+~/code/x86-os$ ps ax | grep sshd
+12218 ? Ss 0:00 /usr/sbin/sshd -D
+~/code/x86-os$ sudo strace -p 12218
+Process 12218 attached - interrupt to quit
+select(7, [3 4], NULL, NULL, NULL
+[ ... nothing happens ...
+  No fun, it's just waiting for a connection using select(2)
+  If we wait long enough, we might see new keys being generated and so on,
+  but let's attach again, tell strace to follow forks (-f), and connect via SSH]
+~/code/x86-os$ sudo strace -p 12218 -f
+[lots of calls happen during an SSH login, only a few shown]
+[pid 14692] read(3, "-----BEGIN RSA PRIVATE KEY-----\n"..., 1024) = 1024
+[pid 14692] open("/usr/share/ssh/blacklist.RSA-2048", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
+[pid 14692] open("/etc/ssh/blacklist.RSA-2048", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
+[pid 14692] open("/etc/ssh/ssh_host_dsa_key", O_RDONLY|O_LARGEFILE) = 3
+[pid 14692] open("/etc/protocols", O_RDONLY|O_CLOEXEC) = 4
+[pid 14692] read(4, "# Internet (IP) protocols\n#\n# Up"..., 4096) = 2933
+[pid 14692] open("/etc/hosts.allow", O_RDONLY) = 4
+[pid 14692] open("/lib/i386-linux-gnu/libnss_dns.so.2", O_RDONLY|O_CLOEXEC) = 4
+[pid 14692] stat64("/etc/pam.d", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
+[pid 14692] open("/etc/pam.d/common-password", O_RDONLY|O_LARGEFILE) = 8
+[pid 14692] open("/etc/pam.d/other", O_RDONLY|O_LARGEFILE) = 4
+```
+
+看懂 SSH 的调用是块难啃的骨头,但是,如果搞懂它你就学会了跟踪。能够看到一个应用程序打开了哪些文件是很有用的(“这个配置是从哪里来的?”)。如果你有一个卡住不动的进程,你可以跟踪它,然后看它正在通过系统调用做什么。当一些应用程序没有给出适当的错误信息就意外退出时,你可以检查是否是某个系统调用失败了。你也可以使用过滤器、为每个调用计时,等等:
+
+```
+~/code/x86-os$ strace -T -e trace=recv curl --silent www.google.com. > /dev/null
+recv(3, "HTTP/1.1 200 OK\r\nDate: Wed, 05 N"..., 16384, 0) = 4164 <0.000007>
+recv(3, "fl a{color:#36c}a:visited{color:"..., 16384, 0) = 2776 <0.000005>
+recv(3, "adient(top,#4d90fe,#4787ed);filt"..., 16384, 0) = 4164 <0.000007>
+recv(3, "gbar.up.spd(b,d,1,!0);break;case"..., 16384, 0) = 2776 <0.000006>
+recv(3, "$),a.i.G(!0)),window.gbar.up.sl("..., 16384, 0) = 1388 <0.000004>
+recv(3, "margin:0;padding:5px 8px 0 6px;v"..., 16384, 0) = 1388 <0.000007>
+recv(3, "){window.setTimeout(function(){v"..., 16384, 0) = 1484 <0.000006>
+```
+
+我鼓励你去探索你的操作系统中的这些工具。用好它们就像拥有了超能力。
+
+有用的部分就说到这里,让我们回到设计上来。我们已经看到,一个用户空间应用程序被困在它自己的虚拟地址空间中,运行在 Ring 3(非特权模式)。一般来说,只涉及到计算和内存访问的任务是不需要请求系统调用的。例如,像 [strlen(3)][23] 和 [memcpy(3)][24] 这样的 C 库函数并不需要内核去做什么。这些都是在应用程序内部发生的事。
+
+一个 C 库函数的 man 页面所在的节(圆括号中的 2 和 3)也提供了线索。节 2 用于系统调用的封装,而节 3 包含了其它 C 库函数。但是,正如我们在 printf(3) 中所看到的,一个库函数最终可能会产生一个或者多个系统调用。
+
+如果你对此感到好奇,这里是 [Linux][25](还有 [Filippo 整理的列表][26])和 [Windows][27] 的全部系统调用列表。它们分别有约 310 个和约 460 个系统调用。看这些系统调用是非常有趣的,因为,它们代表了软件在现代的计算机上能够做的所有事情。另外,你还可能在这里找到与进程间通讯和性能相关的“宝藏”。这是一个“不懂 Unix 的人注定最终还要重新发明一个蹩脚的 Unix”的地方。(译者注:“Those who do not understand Unix are condemned to reinvent it, poorly.”这句话是 [Henry Spencer][35] 的名言,反映了 Unix 的设计哲学,它的一些理念和文化是技术发展的必然结果,看似糟糕却无法超越。)
+
+与 CPU 周期相比,许多系统调用要花很长的时间去执行任务,例如,从一个硬盘驱动器中读取内容。在这种情况下,调用进程在底层的工作完成之前往往一直处于休眠状态。因为 CPU 运行得非常快,一般的程序都受限于 I/O,在它的生命周期的大部分时间里都在休眠,等待系统调用返回。相反,如果你跟踪一个计算密集型任务,你经常会看到没有任何的系统调用参与其中。在这种情况下,[top(1)][29] 将显示大量的 CPU 使用。
+
+一个系统调用中的开销可能会是一个问题。例如,固态硬盘比普通硬盘要快很多,但是,操作系统的开销可能比 I/O 操作本身的开销 [更加昂贵][30]。执行大量读写操作的程序的瓶颈也可能在于操作系统的开销。[向量化 I/O][31] 对此有一些帮助,[内存映射文件][32] 也一样,它允许一个程序仅通过访问内存就可以读写磁盘文件。类似的映射也存在于像显卡内存这样的地方。最终,云计算的经济性可能会促使内核消除或者最小化用户模式/内核模式的切换。
+
+最后,系统调用还有着有趣的安全意义。其一,无论一个二进制程序被混淆得多么厉害,你仍然可以通过观察它产生的系统调用来检查它的行为。这种方式可以用于检测恶意程序。例如,我们可以记录一个已知程序的系统调用使用情况,并对偏差进行报警,或者为程序的系统调用设置白名单,这样可以让漏洞利用变得更加困难。在这个领域,我们有大量的研究和许多工具,但是还没有“杀手级”的解决方案。
+
+这就是系统调用。很抱歉这篇文章有点长,我希望它对你有用。接下来的时间,我将写更多(更短的)文章,也可以在 [RSS][33] 和 [Twitter][34] 关注我。这篇文章献给 glorious Clube 
Atlético Mineiro。 + +-------------------------------------------------------------------------------- + +via:https://manybutfinite.com/post/system-calls/ + +作者:[Gustavo Duarte][a] +译者:[qhwdw](https://github.com/qhwdw) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://duartes.org/gustavo/blog/about/ +[1]:https://manybutfinite.com/code/x86-os/pid.c +[2]:https://manybutfinite.com/post/anatomy-of-a-program-in-memory +[3]:https://manybutfinite.com/post/page-cache-the-affair-between-memory-and-files/ +[4]:http://linux.die.net/man/2/getpid +[5]:https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/unix/sysv/linux/getpid.c;h=937b1d4e113b1cff4a5c698f83d662e130d596af;hb=4c6da7da9fb1f0f94e668e6d2966a4f50a7f0d85#l49 +[6]:https://github.com/torvalds/linux/blob/v3.17/arch/x86/syscalls/syscall_64.tbl#L48 +[7]:https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/unix/sysv/linux/x86_64/sysdep.h;h=4a619dafebd180426bf32ab6b6cb0e5e560b718a;hb=4c6da7da9fb1f0f94e668e6d2966a4f50a7f0d85#l139 +[8]:https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/unix/sysv/linux/x86_64/sysdep.h;h=4a619dafebd180426bf32ab6b6cb0e5e560b718a;hb=4c6da7da9fb1f0f94e668e6d2966a4f50a7f0d85#l179 +[9]:https://manybutfinite.com/post/cpu-rings-privilege-and-protection +[10]:https://github.com/torvalds/linux/blob/v3.17/arch/x86/kernel/entry_64.S#L354-L386 +[11]:http://lwn.net/Articles/604287/ +[12]:http://lwn.net/Articles/604515/ +[13]:https://github.com/torvalds/linux/blob/v3.17/arch/x86/kernel/entry_64.S#L422 +[14]:https://github.com/torvalds/linux/blob/v3.17/arch/x86/kernel/syscall_64.c#L25 +[15]:https://github.com/torvalds/linux/blob/v3.17/kernel/sys.c#L800-L809 +[16]:https://github.com/torvalds/linux/blob/v3.17/kernel/sys.c#L800-L859 +[17]:http://linux.die.net/man/2/getpid +[18]:http://linux.die.net/man/2/syscall +[19]:http://linux.die.net/man/1/strace 
+[20]:https://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man1/dtruss.1m.html +[21]:http://dtrace.org/blogs/brendan/2011/10/10/top-10-dtrace-scripts-for-mac-os-x/ +[22]:http://technet.microsoft.com/en-us/sysinternals/bb842062.aspx +[23]:http://linux.die.net/man/3/strlen +[24]:http://linux.die.net/man/3/memcpy +[25]:https://github.com/torvalds/linux/blob/v3.17/arch/x86/syscalls/syscall_64.tbl +[26]:https://filippo.io/linux-syscall-table/ +[27]:http://j00ru.vexillium.org/ntapi/ +[28]:https://manybutfinite.com/post/what-your-computer-does-while-you-wait/ +[29]:http://linux.die.net/man/1/top +[30]:http://danluu.com/clwb-pcommit/ +[31]:http://en.wikipedia.org/wiki/Vectored_I/O +[32]:https://manybutfinite.com/post/page-cache-the-affair-between-memory-and-files/ +[33]:http://feeds.feedburner.com/GustavoDuarte +[34]:http://twitter.com/food4hackers +[35]:https://en.wikipedia.org/wiki/Henry_Spencer \ No newline at end of file From b37a68352f5e6ac695e630a83682da77b10b13b1 Mon Sep 17 00:00:00 2001 From: zjon Date: Wed, 17 Jan 2018 15:05:49 +0800 Subject: [PATCH 043/226] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90=20?= =?UTF-8?q?=E9=A2=98=E7=9B=AE=EF=BC=9A20180102=20Best=20open=20source=20tu?= =?UTF-8?q?torials=20in=202017.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...0102 Best open source tutorials in 2017.md | 83 ------------------ ...0102 Best open source tutorials in 2017.md | 85 +++++++++++++++++++ 2 files changed, 85 insertions(+), 83 deletions(-) delete mode 100644 sources/tech/20180102 Best open source tutorials in 2017.md create mode 100644 translated/tech/20180102 Best open source tutorials in 2017.md diff --git a/sources/tech/20180102 Best open source tutorials in 2017.md b/sources/tech/20180102 Best open source tutorials in 2017.md deleted file mode 100644 index 7612772b49..0000000000 --- a/sources/tech/20180102 Best open source tutorials in 2017.md +++ /dev/null @@ -1,83 +0,0 @@ 
-Translating zjon
-Best open source tutorials in 2017
-======
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead-teacher-learner.png?itok=rMJqBN5G)
-
-A well-written tutorial is a great supplement to any software's official documentation. It can also be an effective alternative if that official documentation is poorly written, incomplete, or non-existent.
-
-In 2017, Opensource.com published a number of excellent tutorials on a variety of topics. Those tutorials weren't just for experts. We aimed them at users of all levels of skill and experience.
-
-Let's take a look at the best of those tutorials.
-
-### It's all about the code
-
-For many, their first foray into open source involved contributing code to one project or another. Where do you go to learn to code or program? The following two articles are great starting points.
-
-While not a tutorial in the strictest sense of the word, VM Brasseur's [How to get started learning to program][1] is a good starting point for the neophyte coder. It doesn't merely point out some excellent resources that will help you get started, but also offers important advice about understanding your learning style and how to pick a language.
-
-If you've logged more than a few hours in an [IDE][2] or a text editor, you'll probably want to learn a bit more about different approaches to coding. Fraser Tweedale's [Introduction to functional programming][3] does a fine job of introducing a paradigm that you can apply to many widely used programming languages.
-
-### Going Linux
-
-Linux is arguably the poster child of open source. It runs a good chunk of the web and powers the world's top supercomputers. And it gives anyone an alternative to proprietary operating systems on their desktops.
-
-If you're interested in diving deeper into Linux, here are a trio of tutorials for you.
-
-Jason Baker looks at [setting the Linux $PATH variable][4]. 
He guides you through this "important skill for any beginning Linux user," which enables you to point the system to directories containing programs and scripts. - -Embrace your inner techie with David Both's guide to [building a DNS name server][5]. He documents, in considerable detail, how to set up and run the server, including what configuration files to edit and how to edit them. - -Want to go a bit more retro in your computing? Jim Hall shows you how to [run DOS programs in Linux][6] using [FreeDOS][7] and [QEMU][8]. Hall's article focuses on running DOS productivity tools, but it's not all serious--he talks about running his favorite DOS games, too. - -### Three slices of Pi - -It's no secret that inexpensive single-board computers have made hardware hacking fun again. Not only that, but they've made it more accessible to more people, regardless of their age or their level of technical proficiency. - -The [Raspberry Pi][9] is probably the most widely used single-board computer out there. Ben Nuttall walks us through how to install and set up [a Postgres database on a Raspberry Pi][10]. From there, you're ready to use it in whatever project you have in mind. - -If your tastes include both the literary and technical, you might be interested in Don Watkins' [How to turn a Raspberry Pi into an eBook server][11]. With a little work and a copy of the [Calibre eBook management software][12], you'll be able to get to your favorite eBooks anywhere you are. - -Raspberry isn't the only flavor of Pi out there. There's also the [Orange Pi Pc Plus][13], an open-source single-board computer. David Egts looks at [getting started with this hackable mini-computer][14]. - -### Day-to-day computing - -Open source isn't just for techies. Mere mortals use it to do their daily work and be more productive. Here are a trio of articles for those of us who have 10 thumbs when it comes to anything technical (and for those who don't). 
- -When you think of microblogging, you probably think Twitter. But Twitter has more than its share of problems. [Mastodon][15] is an open alternative to Twitter that debuted in 2016. Since then, Mastodon has gained a sizeable base of users. Seth Kenlon explains [how to join and use Mastodon][16], and even shows you how to cross-post between Mastodon and Twitter. - -Do you need a little help staying on top of your expenses? All you need is a spreadsheet and the right template. My article on [getting control of your finances][17] shows you how to create a simple, attractive finance-tracking spreadsheet with [LibreOffice Calc][18] (or any other spreadsheet editor). - -ImageMagick is a powerful tool for manipulating graphics. It's one, though, that many people don't use as often as they should. That means they forget the commands just when they need them the most. If that's you, then keep Greg Pittman's [introductory tutorial to ImageMagick][19] handy for those times you need some help. - -Do you have a favorite tutorial published by Opensource.com in 2017? Feel free to share it with the community by leaving a comment. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/1/best-tutorials - -作者:[Scott Nesbitt][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/scottnesbitt -[1]:https://opensource.com/article/17/4/how-get-started-learning-program -[2]:https://en.wikipedia.org/wiki/Integrated_development_environment -[3]:https://opensource.com/article/17/4/introduction-functional-programming -[4]:https://opensource.com/article/17/6/set-path-linux -[5]:https://opensource.com/article/17/4/build-your-own-name-server -[6]:https://opensource.com/article/17/10/run-dos-applications-linux -[7]:http://www.freedos.org/ -[8]:https://www.qemu.org -[9]:https://en.wikipedia.org/wiki/Raspberry_Pi -[10]:https://opensource.com/article/17/10/set-postgres-database-your-raspberry-pi -[11]:https://opensource.com/article/17/6/raspberrypi-ebook-server -[12]:https://calibre-ebook.com/ -[13]:http://www.orangepi.org/ -[14]:https://opensource.com/article/17/1/how-to-orange-pi -[15]:https://joinmastodon.org/ -[16]:https://opensource.com/article/17/4/guide-to-mastodon -[17]:https://opensource.com/article/17/8/budget-libreoffice-calc -[18]:https://www.libreoffice.org/discover/calc/ -[19]:https://opensource.com/article/17/8/imagemagick diff --git a/translated/tech/20180102 Best open source tutorials in 2017.md b/translated/tech/20180102 Best open source tutorials in 2017.md new file mode 100644 index 0000000000..892c7d7a8e --- /dev/null +++ b/translated/tech/20180102 Best open source tutorials in 2017.md @@ -0,0 +1,85 @@ +Translating zjon +2017最佳开源教程 +====== +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead-teacher-learner.png?itok=rMJqBN5G) + +一个精心编写的教程是任何软件的官方文档的一个很好的补充。 如果官方文件写得不好,不完整或不存在,它也可能是一个有效的选择。 + +2017、Opensource.com 
发布了一些有关各种主题的优秀教程。这些教程不只是针对专家们的,我们面向的是各种技能水平和经验的用户。
+
+让我们来看看这些最好的教程。
+
+### 关于代码
+
+对许多人来说,他们第一次涉足开源,是为某个项目贡献代码。你在哪里学习编码或编程?以下两篇文章是很好的起点。
+
+严格来说,VM Brasseur 的[如何开始学习编程][1]并不算一篇教程,但它是新手程序员的一个很好的起点。它不仅指出了一些有助于你起步的优秀资源,还提供了关于了解你的学习方式、如何选择语言的重要建议。
+
+如果你已经在 [IDE][2] 或文本编辑器中花费了不少时间,那么你可能想学习更多不同的编码方法。Fraser Tweedale 的[函数式编程简介][3]很好地介绍了一种可以应用到许多广泛使用的编程语言中的编程范式。
+
+### 走进 Linux
+
+Linux 是开源的典范。它运行着互联网的很大一部分,为世界顶级超级计算机提供动力。它还让任何人在台式机上都有了专有操作系统之外的选择。
+
+如果你有兴趣深入 Linux,这里有三篇教程供你参考。
+
+Jason Baker 讲了如何[设置 Linux 的 $PATH 变量][4]。他带你了解这一“对任何 Linux 初学者都很重要的技巧”,使你能够把系统指向包含程序和脚本的目录。
+
+David Both 的指南[构建一个 DNS 域名服务器][5]能唤起你内心的技术人。他非常详细地记录了如何设置和运行服务器,包括要编辑哪些配置文件以及如何编辑它们。
+
+想在你的电脑上更复古一点吗?Jim Hall 告诉你如何使用 [FreeDOS][7] 和 [QEMU][8] [在 Linux 下运行 DOS 程序][6]。Hall 的文章着重于运行 DOS 生产力工具,但也不全是严肃的内容,他还谈到了运行他最喜欢的 DOS 游戏。
+
+### 3 个 Pi
+
+廉价的单板机让硬件再次变得有趣,这不是什么秘密。不仅如此,它们还让更多的人更容易接触到硬件,无论他们的年龄或技术水平如何。
+
+其中,[树莓派][9]可能是使用最广泛的单板计算机。Ben Nuttall 带我们学习了如何[在树莓派上安装和设置 Postgres 数据库][10]。之后,你就可以在任何你想要的项目中使用它了。
+
+如果你的品味兼及文学和技术,你可能会对 Don Watkins 的[如何将树莓派变成电子书服务器][11]感兴趣。只需一点工作和一份 [Calibre 电子书管理软件][12],你就可以随时随地阅读你最喜欢的电子书。
+
+树莓派并不是唯一的 Pi。还有 [Orange Pi PC Plus][13],一种开源的单板机。David Egts 介绍了[这款可玩的迷你电脑的入门][14]。
+
+### 日常计算
+
+开源并不仅仅面向技术专家,普通人也用它来完成日常工作,并且更有效率。这里有三篇文章,适合我们这些在技术上笨手笨脚的人(也适合不笨拙的人)。
+
+当你想到微博的时候,你可能会想到 Twitter,但是 Twitter 的问题可不少。[Mastodon][15] 是 Twitter 的开放的替代方案,它于 2016 年首次亮相。从那时起,Mastodon 就获得了相当大的用户群。Seth Kenlon 说明了[如何加入和使用 Mastodon][16],甚至告诉你如何在 Mastodon 和 Twitter 间交叉发布。
+
+你需要一点帮助来管理你的开支吗?你所需要的只是一个电子表格和正确的模板。我的文章[控制你的财务状况][17]向你展示了如何用 [LibreOffice Calc][18](或任何其他电子表格编辑器)创建一个简单而美观的财务跟踪表。
+
+ImageMagick 是强大的图形处理工具,但是很多人并不经常使用它,这意味着他们在最需要这些命令的时候会忘记它们。如果你也是这样,Greg Pittman 的 [ImageMagick 入门教程][19]能在你需要帮助的时候派上用场。
+
+你有最喜欢的 2017 年 Opensource.com 发布的教程吗?请随意留言与社区分享。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/1/best-tutorials
+
+作者:[Scott Nesbitt][a]
+译者:[zjon](https://github.com/zjon)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 
[LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/scottnesbitt +[1]:https://opensource.com/article/17/4/how-get-started-learning-program +[2]:https://en.wikipedia.org/wiki/Integrated_development_environment +[3]:https://opensource.com/article/17/4/introduction-functional-programming +[4]:https://opensource.com/article/17/6/set-path-linux +[5]:https://opensource.com/article/17/4/build-your-own-name-server +[6]:https://opensource.com/article/17/10/run-dos-applications-linux +[7]:http://www.freedos.org/ +[8]:https://www.qemu.org +[9]:https://en.wikipedia.org/wiki/Raspberry_Pi +[10]:https://opensource.com/article/17/10/set-postgres-database-your-raspberry-pi +[11]:https://opensource.com/article/17/6/raspberrypi-ebook-server +[12]:https://calibre-ebook.com/ +[13]:http://www.orangepi.org/ +[14]:https://opensource.com/article/17/1/how-to-orange-pi +[15]:https://joinmastodon.org/ +[16]:https://opensource.com/article/17/4/guide-to-mastodon +[17]:https://opensource.com/article/17/8/budget-libreoffice-calc +[18]:https://www.libreoffice.org/discover/calc/ +[19]:https://opensource.com/article/17/8/imagemagick + + From 00a6d3687e130090c27a3e417887750dda6c6148 Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 17 Jan 2018 21:32:32 +0800 Subject: [PATCH 044/226] PRF:20170502 A beginner-s guide to Raspberry Pi 3.md @qhwdw --- ...02 A beginner-s guide to Raspberry Pi 3.md | 84 +++++++++++-------- 1 file changed, 48 insertions(+), 36 deletions(-) diff --git a/translated/tech/20170502 A beginner-s guide to Raspberry Pi 3.md b/translated/tech/20170502 A beginner-s guide to Raspberry Pi 3.md index b53c397aed..38b892e0ec 100644 --- a/translated/tech/20170502 A beginner-s guide to Raspberry Pi 3.md +++ b/translated/tech/20170502 A beginner-s guide to Raspberry Pi 3.md @@ -1,103 +1,115 @@ -一个树莓派 3 的新手指南 +树莓派 3 的新手指南 ====== +> 这个教程将帮助你入门树莓派 3Raspberry Pi 3。 + 
![](https://images.techhive.com/images/article/2017/03/raspberry2-100711632-large.jpeg) -这篇文章是我的使用树莓派 3 创建新项目的每周系列文章的一部分。该系列的第一篇文章专注于入门,它主要讲使用 PIXEL 桌面去安装树莓派、设置网络以及其它的基本组件。 +这篇文章是我的使用树莓派 3 创建新项目的每周系列文章的一部分。该系列的这个第一篇文章专注于入门,它主要讲安装 Raspbian 和 PIXEL 桌面,以及设置网络和其它的基本组件。 ### 你需要: - * 一台树莓派 3 - * 一个 5v 2mAh 带 USB 接口的电源适配器 - * 至少 8GB 容量的 Micro SD 卡 - * Wi-Fi 或者以太网线 - * 散热片 - * 键盘和鼠标 - * 一台 PC 显示器 - * 一台用于准备 microSD 卡的 Mac 或者 PC +* 一台树莓派 3 +* 一个 5v 2mAh 带 USB 接口的电源适配器 +* 至少 8GB 容量的 Micro SD 卡 +* Wi-Fi 或者以太网线 +* 散热片 +* 键盘和鼠标 +* 一台 PC 显示器 +* 一台用于准备 microSD 卡的 Mac 或者 PC - - -现在市面上有很多基于 Linux 操作系统的树莓派,这种树莓派你可以直接安装它,但是,如果你是第一次接触树莓派,我推荐使用 NOOBS,它是树莓派官方的操作系统安装器,它安装操作系统到设备的过程非常简单。 +现在有很多基于 Linux 操作系统可用于树莓派,你可以直接安装它,但是,如果你是第一次接触树莓派,我推荐使用 NOOBS,它是树莓派官方的操作系统安装器,它安装操作系统到该设备的过程非常简单。 在你的电脑上从 [这个链接][1] 下载 NOOBS。它是一个 zip 压缩文件。如果你使用的是 MacOS,可以直接双击它,MacOS 会自动解压这个文件。如果你使用的是 Windows,右键单击它,选择“解压到这里”。 -如果你运行的是 Linux,如何去解压 zip 文件取决于你的桌面环境,因为,不同的桌面环境下解压文件的方法不一样,但是,使用命令行可以很容易地完成解压工作。 +如果你运行的是 Linux 桌面,如何去解压 zip 文件取决于你的桌面环境,因为,不同的桌面环境下解压文件的方法不一样,但是,使用命令行可以很容易地完成解压工作。 -`$ unzip NOOBS.zip` +``` +$ unzip NOOBS.zip +``` 不管它是什么操作系统,打开解压后的文件,你看到的应该是如下图所示的样子: -![content][3] Swapnil Bhartiya +![content][3] 现在,在你的 PC 上插入 Micro SD 卡,将它格式化成 FAT32 格式的文件系统。在 MacOS 上,使用磁盘实用工具去格式化 Micro SD 卡: -![format][4] Swapnil Bhartiya +![format][4] -在 Windows 上,只需要右键单击这个卡,然后选择“格式化”选项。如果是在 Linux 上,不同的桌面环境使用不同的工具,就不一一去讲解了。在这里我写了一个教程,[在 Linux 上使用命令行接口][5] 去格式化 SD 卡为 Fat32 文件系统。 +在 Windows 上,只需要右键单击这个卡,然后选择“格式化”选项。如果是在 Linux 上,不同的桌面环境使用不同的工具,就不一一去讲解了。在这里我写了一个教程,[在 Linux 上使用命令行界面][5] 去格式化 SD 卡为 Fat32 文件系统。 -在你拥有了 FAT32 格式的文件系统后,就可以去拷贝下载的 NOOBS 目录的内容到这个卡的根目录下。如果你使用的是 MacOS 或者 Linux,可以使用 rsync 将 NOOBS 的内容传到 SD 卡的根目录中。在 MacOS 或者 Linux 中打开终端应用,然后运行如下的 rsync 命令: +在你的卡格式成了 FAT32 格式的文件系统后,就可以去拷贝下载的 NOOBS 目录的内容到这个卡的根目录下。如果你使用的是 MacOS 或者 Linux,可以使用 `rsync` 将 NOOBS 的内容传到 SD 卡的根目录中。在 MacOS 或者 Linux 中打开终端应用,然后运行如下的 rsync 命令: -`rsync -avzP /path_of_NOOBS /path_of_sdcard` +``` +rsync -avzP /path_of_NOOBS /path_of_sdcard +``` 一定要确保选择了 SD 
卡的根目录,在我的案例中(在 MacOS 上),它是: -`rsync -avzP /Users/swapnil/Downloads/NOOBS_v2_2_0/ /Volumes/U/` +``` +rsync -avzP /Users/swapnil/Downloads/NOOBS_v2_2_0/ /Volumes/U/ +``` 或者你也可以拷贝粘贴 NOOBS 目录中的内容。一定要确保将 NOOBS 目录中的内容全部拷贝到 Micro SD 卡的根目录下,千万不能放到任何的子目录中。 -现在可以插入这张 Micro SD 卡到树莓派 3 中,连接好显示器、键盘鼠标和电源适配器。如果你拥有有线网络,我建议你使用它,因为有线网络下载和安装操作系统更快。树莓派将引导到 NOOBS,它将提供一个供你去选择安装的分发版列表。从第一个选项中选择树莓派,紧接着会出现如下图的画面。 +现在可以插入这张 MicroSD 卡到树莓派 3 中,连接好显示器、键盘鼠标和电源适配器。如果你拥有有线网络,我建议你使用它,因为有线网络下载和安装操作系统更快。树莓派将引导到 NOOBS,它将提供一个供你去选择安装的分发版列表。从第一个选项中选择 Raspbian,紧接着会出现如下图的画面。 -![raspi config][6] Swapnil Bhartiya +![raspi config][6] -在你安装完成后,树莓派将重新启动,你将会看到一个欢迎使用树莓派的画面。现在可以去配置它,并且去运行系统更新。大多数情况下,我们都是在没有外设的情况下使用树莓派的,都是使用 SSH 基于网络远程去管理它。这意味着你不需要为了管理树莓派而去为它接上鼠标键盘和显示器。 +在你安装完成后,树莓派将重新启动,你将会看到一个欢迎使用树莓派的画面。现在可以去配置它,并且去运行系统更新。大多数情况下,我们都是在没有外设的情况下使用树莓派的,都是使用 SSH 基于网络远程去管理它。这意味着你不需要为了管理树莓派而去为它接上鼠标、键盘和显示器。 开始使用它的第一步是,配置网络(假如你使用的是 Wi-Fi)。点击顶部面板上的网络图标,然后在出现的网络列表中,选择你要配置的网络并为它输入正确的密码。 -![wireless][7] Swapnil Bhartiya +![wireless][7] 恭喜您,无线网络的连接配置完成了。在进入下一步的配置之前,你需要找到你的网络为树莓派分配的 IP 地址,因为远程管理会用到它。 打开一个终端,运行如下的命令: -`ifconfig` +``` +ifconfig +``` -现在,记下这个设备的 wlan0 部分的 IP 地址。它一般显示为 “inet addr” +现在,记下这个设备的 `wlan0` 部分的 IP 地址。它一般显示为 “inet addr”。 -现在,可以去启用 SSH 了,在树莓派上打开一个终端,然后打开 raspi-config 工具。 +现在,可以去启用 SSH 了,在树莓派上打开一个终端,然后打开 `raspi-config` 工具。 -`sudo raspi-config` +``` +sudo raspi-config +``` 树莓派的默认用户名和密码分别是 “pi” 和 “raspberry”。在上面的命令中你会被要求输入密码。树莓派配置工具的第一个选项是去修改默认密码,我强烈推荐你修改默认密码,尤其是你基于网络去使用它的时候。 第二个选项是去修改主机名,如果在你的网络中有多个树莓派时,主机名用于区分它们。一个有意义的主机名可以很容易在网络上识别每个设备。 -然后进入到接口选项,去启用摄像头、SSH、以及 VNC。如果你在树莓派上使用了一个涉及到多媒体的应用程序,比如,家庭影院系统或者 PC,你也可以去改变音频输出选项。缺省情况下,它的默认输出到 HDMI 接口,但是,如果你使用外部音响,你需要去改变音频输出设置。转到树莓派配置工具的高级配置选项,选择音频,然后选择 3.5mm 作为默认输出。 +然后进入到接口选项,去启用摄像头、SSH、以及 VNC。如果你在树莓派上使用了一个涉及到多媒体的应用程序,比如,家庭影院系统或者 PC,你也可以去改变音频输出选项。缺省情况下,它的默认输出到 HDMI 接口,但是,如果你使用外部音响,你需要去改变音频输出设置。转到树莓派配置工具的高级配置选项,选择音频,然后选择 “3.5mm” 作为默认输出。 [小提示:使用箭头键去导航,使用回车键去选择] -一旦所有的改变被应用, 树莓派将要求重新启动。你可以从树莓派上拔出显示器、鼠标键盘,以后可以通过网络来管理它。现在可以在你的本地电脑上打开终端。如果你使用的是 Windows,你可以使用 Putty 
或者去读我的文章 - 怎么在 Windows 10 上安装 Ubuntu Bash。 +一旦应用了所有的改变, 树莓派将要求重新启动。你可以从树莓派上拔出显示器、鼠标键盘,以后可以通过网络来管理它。现在可以在你的本地电脑上打开终端。如果你使用的是 Windows,你可以使用 Putty 或者去读我的文章 - 怎么在 Windows 10 上安装 Ubuntu Bash。 在你的本地电脑上输入如下的 SSH 命令: -`ssh pi@IP_ADDRESS_OF_Pi` +``` +ssh pi@IP_ADDRESS_OF_Pi +``` 在我的电脑上,这个命令是这样的: -`ssh pi@10.0.0.161` +``` +ssh pi@10.0.0.161 +``` 输入它的密码,你登入到树莓派了!现在你可以从一台远程电脑上去管理你的树莓派。如果你希望通过因特网去管理树莓派,可以去阅读我的文章 - [如何在你的计算机上启用 RealVNC][8]。 在该系列的下一篇文章中,我将讲解使用你的树莓派去远程管理你的 3D 打印机。 -**这篇文章是作为 IDG 投稿网络的一部分发表的。[想加入吗?][9]** - -------------------------------------------------------------------------------- via: https://www.infoworld.com/article/3176488/linux/a-beginner-s-guide-to-raspberry-pi-3.html 作者:[Swapnil Bhartiya][a] 译者:[qhwdw](https://github.com/qhwdw) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 9cf061f973b1b0287f2333f07e576f541d8104b9 Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 17 Jan 2018 21:32:55 +0800 Subject: [PATCH 045/226] PUB:20170502 A beginner-s guide to Raspberry Pi 3.md @qhwdw https://linux.cn/article-9249-1.html --- .../20170502 A beginner-s guide to Raspberry Pi 3.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20170502 A beginner-s guide to Raspberry Pi 3.md (100%) diff --git a/translated/tech/20170502 A beginner-s guide to Raspberry Pi 3.md b/published/20170502 A beginner-s guide to Raspberry Pi 3.md similarity index 100% rename from translated/tech/20170502 A beginner-s guide to Raspberry Pi 3.md rename to published/20170502 A beginner-s guide to Raspberry Pi 3.md From 40b337710f3d9b0a62129dd9998f751dec5a48f2 Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 17 Jan 2018 21:49:32 +0800 Subject: [PATCH 046/226] PRF&PUB:20170828 How To Use YUM History Command To Rollback An Updates In RHEL-CentOS Systems.md @lujun9972 --- ...lback An Updates In RHEL-CentOS Systems.md | 69 ++++++++++--------- 1 file 
changed, 37 insertions(+), 32 deletions(-) rename {translated/tech => published}/20170828 How To Use YUM History Command To Rollback An Updates In RHEL-CentOS Systems.md (83%) diff --git a/translated/tech/20170828 How To Use YUM History Command To Rollback An Updates In RHEL-CentOS Systems.md b/published/20170828 How To Use YUM History Command To Rollback An Updates In RHEL-CentOS Systems.md similarity index 83% rename from translated/tech/20170828 How To Use YUM History Command To Rollback An Updates In RHEL-CentOS Systems.md rename to published/20170828 How To Use YUM History Command To Rollback An Updates In RHEL-CentOS Systems.md index bb721e0042..f9bc03e3e1 100644 --- a/translated/tech/20170828 How To Use YUM History Command To Rollback An Updates In RHEL-CentOS Systems.md +++ b/published/20170828 How To Use YUM History Command To Rollback An Updates In RHEL-CentOS Systems.md @@ -1,23 +1,26 @@ -在 RHEL/CentOS 系统上使用 YUM History 命令回滚升级操作 +在 RHEL/CentOS 系统上使用 YUM history 命令回滚升级操作 ====== + 为服务器打补丁是 Linux 系统管理员的一项重要任务,为的是让系统更加稳定,性能更加优化。厂商经常会发布一些安全/高危的补丁包,相关软件需要升级以防范潜在的安全风险。 -Yum (Yellowdog Update Modified) 是 CentOS 和 RedHat 系统上用的 RPM 包管理工具,Yum history 命令允许系统管理员将系统回滚到上一个状态,但由于某些限制,回滚不是在所有情况下都能成功,有时 yum 命令可能什么都不做,有时可能会删掉一些其他的包。 +Yum (Yellowdog Update Modified) 是 CentOS 和 RedHat 系统上用的 RPM 包管理工具,`yum history` 命令允许系统管理员将系统回滚到上一个状态,但由于某些限制,回滚不是在所有情况下都能成功,有时 `yum` 命令可能什么都不做,有时可能会删掉一些其他的包。 -我建议你在升级之前还是要做一个完整的系统备份,而 yum history 并不能用来替代系统备份的。系统备份能让你将系统还原到任意时候的节点状态。 +我建议你在升级之前还是要做一个完整的系统备份,而 `yum history` 并不能用来替代系统备份的。系统备份能让你将系统还原到任意时候的节点状态。 **推荐阅读:** -**(#)** [在 RHEL/CentOS 系统上使用 YUM 命令管理软件包 ][1] -**(#)** [在 Fedora 系统上使用 DNF (YUM 的一个分支) 命令管理软件包 ][2] -**(#)** [如何让 History 命令显示日期和时间 ][3] -某些情况下,安装的应用程序在升级了补丁之后不能正常工作或者出现一些错误(可能是由于库不兼容或者软件包升级导致的),那该怎么办呢? +- [在 RHEL/CentOS 系统上使用 YUM 命令管理软件包][1] +- [在 Fedora 系统上使用 DNF (YUM 的一个分支)命令管理软件包 ][2] +- [如何让 history 命令显示日期和时间][3] + +某些情况下,安装的应用程序在升级了补丁之后不能正常工作或者出现一些错误(可能是由于库不兼容或者软件包升级导致的),那该怎么办呢? 
+ +与应用开发团队沟通,并找出导致库和软件包的问题所在,然后使用 `yum history` 命令进行回滚。 -与应用开发团队沟通,并找出导致库和软件包的问题所在,然后使用 yum history 命令进行回滚。 **注意:** - * 它不支持回滚 selinux,selinux-policy-*,kernel,glibc (以及依赖 glibc 的包,比如 gcc)。 - * 不建议将系统降级到更低的版本(比如 CentOS 6.9 降到 CentOS 6.8),这回导致系统处于不稳定的状态 +* 它不支持回滚 selinux,selinux-policy-*,kernel,glibc (以及依赖 glibc 的包,比如 gcc)。 +* 不建议将系统降级到更低的版本(比如 CentOS 6.9 降到 CentOS 6.8),这会导致系统处于不稳定的状态 让我们先来看看系统上有哪些包可以升级,然后挑选出一些包来做实验。 @@ -66,10 +69,10 @@ Upgrade 4 Package(s) Total download size: 5.5 M Is this ok [y/N]: n - ``` -你会发现 `git` 包可以被升级,那我们就用它来实验吧。运行下面命令获得软件包的版本信息(当前安装的版本和可以升级的版本)。 +你会发现 `git` 包可以被升级,那我们就用它来实验吧。运行下面命令获得软件包的版本信息(当前安装的版本和可以升级的版本)。 + ``` # yum list git Loaded plugins: fastestmirror, security @@ -80,10 +83,10 @@ Installed Packages git.x86_64 1.7.1-8.el6 @base Available Packages git.x86_64 1.7.1-9.el6_9 updates - ``` 运行下面命令来将 `git` 从 `1.7.1-8` 升级到 `1.7.1-9`。 + ``` # yum update git Loaded plugins: fastestmirror, presto @@ -147,27 +150,29 @@ Dependency Updated: perl-Git.noarch 0:1.7.1-9.el6_9 Complete! - ``` 验证升级后的 `git` 版本. + ``` # yum list git Installed Packages git.x86_64 1.7.1-9.el6_9 @updates -or +或 # rpm -q git git-1.7.1-9.el6_9.x86_64 - ``` -现在我们成功升级这个软件包,可以对它进行回滚了. 步骤如下. +现在我们成功升级这个软件包,可以对它进行回滚了。步骤如下。 + +### 使用 YUM history 命令回滚升级操作 + +首先,使用下面命令获取 yum 操作的 id。下面的输出很清晰地列出了所有需要的信息,例如操作 id、谁做的这个操作(用户名)、操作日期和时间、操作的动作(安装还是升级)、操作影响的包数量。 -首先,使用下面命令获取yum操作id. 下面的输出很清晰地列出了所有需要的信息,例如操作 id, 谁做的这个操作(用户名), 操作日期和时间, 操作的动作(安装还是升级), 操作影响的包数量. ``` # yum history -or +或 # yum history list all Loaded plugins: fastestmirror, presto ID | Login user | Date and time | Action(s) | Altered @@ -185,10 +190,10 @@ ID | Login user | Date and time | Action(s) | Altered 3 | root | 2016-10-18 12:53 | Install | 1 2 | root | 2016-09-30 10:28 | E, I, U | 31 EE 1 | root | 2016-07-26 11:40 | E, I, U | 160 EE - ``` -上面命令现实有两个包受到了影响,因为 git 还升级了它的依赖包 **perl-Git**. 运行下面命令来查看关于操作的详细信息. 
+上面命令显示有两个包受到了影响,因为 `git` 还升级了它的依赖包 `perl-Git`。 运行下面命令来查看关于操作的详细信息。 + ``` # yum history info 13 Loaded plugins: fastestmirror, presto @@ -214,7 +219,8 @@ history info ``` -运行下面命令来回滚 `git` 包到上一个版本. +运行下面命令来回滚 `git` 包到上一个版本。 + ``` # yum history undo 13 Loaded plugins: fastestmirror, presto @@ -279,21 +285,21 @@ Installed: git.x86_64 0:1.7.1-8.el6 perl-Git.noarch 0:1.7.1-8.el6 Complete! - ``` -回滚后, 使用下面命令来检查降级包的版本. +回滚后,使用下面命令来检查降级包的版本。 + ``` # yum list git -or +或 # rpm -q git git-1.7.1-8.el6.x86_64 - ``` ### 使用YUM downgrade 命令回滚升级 -此外,我们也可以使用 YUM downgrade 命令回滚升级. +此外,我们也可以使用 YUM `downgrade` 命令回滚升级。 + ``` # yum downgrade git-1.7.1-8.el6 perl-Git-1.7.1-8.el6 Loaded plugins: search-disabled-repos, security, ulninfo @@ -346,14 +352,14 @@ Installed: git.x86_64 0:1.7.1-8.el6 perl-Git.noarch 0:1.7.1-8.el6 Complete! - ``` -**注意 :** 你也需要降级依赖包, 否则它会删掉当前版本的依赖包而不是对依赖包做降级,因为downgrade命令无法处理依赖关系. +注意: 你也需要降级依赖包,否则它会删掉当前版本的依赖包而不是对依赖包做降级,因为 `downgrade` 命令无法处理依赖关系。 ### 至于 Fedora 用户 -命令是一样的,只需要将包管理器名称从YUM改成DNF就行了. +命令是一样的,只需要将包管理器名称从 `yum` 改成 `dnf` 就行了。 + ``` # dnf list git # dnf history @@ -361,7 +367,6 @@ Complete! # dnf history undo # dnf list git # dnf downgrade git-1.7.1-8.el6 perl-Git-1.7.1-8.el6 - ``` -------------------------------------------------------------------------------- @@ -370,7 +375,7 @@ via: https://www.2daygeek.com/rollback-fallback-updates-downgrade-packages-cento 作者:[2daygeek][a] 译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 072fcf98c1f7c2b6b0960e3f8b760f9cc2f4daeb Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 17 Jan 2018 21:55:43 +0800 Subject: [PATCH 047/226] PRF&PUB:20171230 How To Sync Time Between Linux And Windows Dual Boot.md @lujun9972 --- ... 
Time Between Linux And Windows Dual Boot.md | 17 +++++++++-------- 1 file changed, 9 insertions(+), 8 deletions(-) rename {translated/tech => published}/20171230 How To Sync Time Between Linux And Windows Dual Boot.md (73%) diff --git a/translated/tech/20171230 How To Sync Time Between Linux And Windows Dual Boot.md b/published/20171230 How To Sync Time Between Linux And Windows Dual Boot.md similarity index 73% rename from translated/tech/20171230 How To Sync Time Between Linux And Windows Dual Boot.md rename to published/20171230 How To Sync Time Between Linux And Windows Dual Boot.md index 2738213365..1c152f8ba5 100644 --- a/translated/tech/20171230 How To Sync Time Between Linux And Windows Dual Boot.md +++ b/published/20171230 How To Sync Time Between Linux And Windows Dual Boot.md @@ -1,12 +1,15 @@ 解决 Linux 和 Windows 双启动带来的时间同步问题 ====== -想在保留 windows 系统的前提下尝试其他 Linux 发行版,双启动是个常用的做法。这种方法如此风行是因为实现双启动是一件很容易的事情。然而这也带来了一个大问题,那就是 **时间**。 + +![](http://www.theitstuff.com/wp-content/uploads/2017/12/How-To-Sync-Time-Between-Linux-And-Windows-Dual-Boot.jpg) + +想在保留 Windows 系统的前提下尝试其他 Linux 发行版,双启动是个常用的做法。这种方法如此风行是因为实现双启动是一件很容易的事情。然而这也带来了一个大问题,那就是 **时间**。 是的,你没有看错。若你只是用一个操作系统,时间同步不会有什么问题。但若有 Windows 和 Linux 两个系统,则可能出现时间同步上的问题。Linux 使用的是格林威治时间而 Windows 使用的是本地时间。当你从 Linux 切换到 Windows 或者从 Windows 切换到 Linux 时,就可能显示错误的时间了。 不过不要担心,这个问题很好解决。 -点击 windows 系统中的开始菜单,然后搜索 regedit。 +点击 Windows 系统中的开始菜单,然后搜索 regedit。 [![open regedit in windows 10][1]][1] @@ -14,15 +17,13 @@ [![windows 10 registry editor][2]][2] -在左边的导航菜单,导航到 - +在左边的导航菜单,导航到 `HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\TimeZoneInformation`。 - **`HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\TimeZoneInformation`** - -在右边窗口,右键点击空白位置,然后选择 **`New>> DWORD(32 bit) Value`**。 +在右边窗口,右键点击空白位置,然后选择 `New >> DWORD(32 bit) Value`。 [![change time format utc from windows registry][3]][3] -之后,会有新生成一个条目,而且这个条目默认是高亮的。将这个条目重命名为 `**RealTimeIsUniversal**` 并设置值为 **1。** +之后,你会新生成一个条目,而且这个条目默认是高亮的。将这个条目重命名为 
`RealTimeIsUniversal` 并设置值为 `1`。 [![set universal time utc in windows][4]][4] @@ -34,7 +35,7 @@ via: http://www.theitstuff.com/how-to-sync-time-between-linux-and-windows-dual-b 作者:[Rishabh Kandari][a] 译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From c1ba624d393dac3db3bb0f9040d5ed7e79c1491d Mon Sep 17 00:00:00 2001 From: Ezio Date: Wed, 17 Jan 2018 22:06:50 +0800 Subject: [PATCH 048/226] =?UTF-8?q?20180117-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...7 Some thoughts on Spectre and Meltdown.md | 104 ++++++++++++++++++ 1 file changed, 104 insertions(+) create mode 100644 sources/talk/20180117 Some thoughts on Spectre and Meltdown.md diff --git a/sources/talk/20180117 Some thoughts on Spectre and Meltdown.md b/sources/talk/20180117 Some thoughts on Spectre and Meltdown.md new file mode 100644 index 0000000000..ae8ce0d204 --- /dev/null +++ b/sources/talk/20180117 Some thoughts on Spectre and Meltdown.md @@ -0,0 +1,104 @@ +### Some thoughts on Spectre and Meltdown + +By now I imagine that all of my regular readers, and a large proportion of the rest of the world, have heard of the security issues dubbed "Spectre" and "Meltdown". While there have been some excellent technical explanations of these issues from several sources — I particularly recommend the [Project Zero][3] blog post — I have yet to see anyone really put these into a broader perspective; nor have I seen anyone make a serious attempt to explain these at a level suited for a wide audience. While I have not been involved with handling these issues directly, I think it's time for me to step up and provide both a wider context and a more broadly understandable explanation. + +The story of these attacks starts in late 2004\. 
I had submitted my doctoral thesis and had a few months before flying back to Oxford for my defense, so I turned to some light reading: Intel's latest "Optimization Manual", full of tips on how to write faster code. (Eking out every last nanosecond of performance has long been an interest of mine.) Here I found an interesting piece of advice: On Intel CPUs with "Hyper-Threading", a common design choice (aligning the top of thread stacks on page boundaries) should be avoided because it would result in some resources being overused and others being underused, with a resulting drop in performance. This started me thinking: If two programs can hurt each others' performance by accident, one should be able to  _measure_  whether its performance is being hurt by the other; if it can measure whether its performance is being hurt by people not following Intel's optimization guidelines, it should be able to measure whether its performance is being hurt by other patterns of resource usage; and if it can measure that, it should be able to make deductions about what the other program is doing. + +It took me a few days to convince myself that information could be stolen in this manner, but within a few weeks I was able to steal an [RSA][4] private key from [OpenSSL][5]. Then started the lengthy process of quietly notifying Intel and all the major operating system vendors; and on Friday the 13th of May 2005 I presented [my paper][6] describing this new attack at [BSDCan][7] 2005 — the first attack of this type exploiting how a running program causes changes to the microarchitectural state of a CPU. Three months later, the team of Osvik, Shamir, and Tromer published [their work][8], which showed how the same problem could be exploited to steal [AES][9] keys. 
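The mechanism behind that first attack, observing how a victim's use of a shared resource changes your own timings, can be sketched as a toy model. The following Python is purely illustrative: the `ToyCache` class and its pretend cycle counts are invented for this sketch, and a real attack measures actual hardware cache latencies rather than membership in a Python set.

```python
# Toy model of a cache-timing side channel (illustration only; a real
# attack times loads against the CPU's data cache, not a Python set).

class ToyCache:
    """A tiny shared 'cache': lookups are fast for resident lines, slow otherwise."""
    def __init__(self):
        self.lines = set()

    def flush(self):
        self.lines.clear()

    def access(self, line):
        hit = line in self.lines
        self.lines.add(line)          # the access pulls the line into the cache
        return 1 if hit else 100      # pretend cycle counts: hit vs. miss

def victim(cache, secret_byte):
    # The victim's memory accesses depend on secret data,
    # e.g. a table lookup indexed by a key byte.
    cache.access(("table", secret_byte))

def attacker_recover(cache, run_victim):
    cache.flush()                     # FLUSH: empty the shared cache
    run_victim(cache)                 # victim runs and touches one table line
    # RELOAD: probe every possible line and see which one is now fast
    timings = {i: cache.access(("table", i)) for i in range(256)}
    return min(timings, key=timings.get)

cache = ToyCache()
recovered = attacker_recover(cache, lambda c: victim(c, secret_byte=42))
print(recovered)                      # -> 42
```

Flushing the shared state, letting the victim run, and then timing probes of every candidate line recovers the secret index; the hyper-threading and cache-collision attacks mentioned above all follow this basic measure-the-shared-state shape.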
+ +Over the years there have been many attacks which exploit different aspects of CPU design — exploiting L1 data cache collisions, exploiting L1 code cache collisions, exploiting L2 cache collisions, exploiting the TLB, exploiting branch prediction, etc. — but they have all followed the same basic mechanism: A program does something which interacts with the internal state of a CPU, and either we can measure that internal state (the more common case) or we can set up that internal state before the program runs in a way which makes the program faster or slower. These new attacks use the same basic mechanism, but exploit an entirely new angle. But before I go into details, let me go back to basics for a moment. + +#### Understanding the attacks + +These attacks exploit something called a "side channel". What's a side channel? It's when information is revealed as an inadvertent side effect of what you're doing. For example, in the movie [2001][10], Bowman and Poole enter a pod to ensure that the HAL 9000 computer cannot hear their conversation — but fail to block the  _optical_  channel which allows Hal to read their lips. Side channels are related to a concept called "covert channels": Where side channels are about stealing information which was not intended to be conveyed, covert channels are about conveying information which someone is trying to prevent you from sending. The famous case of a [Prisoner of War][11] blinking the word "TORTURE" in Morse code is an example of using a covert channel to convey information. + +Another example of a side channel — and I'll be elaborating on this example later, so please bear with me if it seems odd — is as follows: I want to know when my girlfriend's passport expires, but she won't show me her passport (she complains that it has a horrible photo) and refuses to tell me the expiry date. 
I tell her that I'm going to take her to Europe on vacation in August and watch what happens: If she runs out to renew her passport, I know that it will expire before August; while if she doesn't get her passport renewed, I know that it will remain valid beyond that date. Her desire to ensure that her passport would be valid inadvertently revealed to me some information: Whether its expiry date was before or after August. + +Over the past 12 years, people have gotten reasonably good at writing programs which avoid leaking information via side channels; but as the saying goes, if you make something idiot-proof, the world will come up with a better idiot; in this case, the better idiot is newer and faster CPUs. The Spectre and Meltdown attacks make use of something called "speculative execution". This is a mechanism whereby, if a CPU isn't sure what you want it to do next, it will  _speculatively_  perform some action. The idea here is that if it guessed right, it will save time later — and if it guessed wrong, it can throw away the work it did and go back to doing what you asked for. As long as it sometimes guesses right, this saves time compared to waiting until it's absolutely certain about what it should be doing next. Unfortunately, as several researchers recently discovered, it can accidentally leak some information during this speculative execution. + +Going back to my analogy: I tell my girlfriend that I'm going to take her on vacation in June, but I don't tell her where yet; however, she knows that it will either be somewhere within Canada (for which she doesn't need a passport, since we live in Vancouver) or somewhere in Europe. She knows that it takes time to get a passport renewed, so she checks her passport and (if it was about to expire) gets it renewed just in case I later reveal that I'm going to take her to Europe. 
If I tell her later that I'm only taking her to Ottawa — well, she didn't need to renew her passport after all, but in the mean time her behaviour has already revealed to me whether her passport was about to expire. This is what Google refers to "variant 1" of the Spectre vulnerability: Even though she didn't need her passport, she made sure it was still valid  _just in case_  she was going to need it. + +"Variant 2" of the Spectre vulnerability also relies on speculative execution but in a more subtle way. Here, instead of the CPU knowing that there are two possible execution paths and choosing one (or potentially both!) to speculatively execute, the CPU has no idea what code it will need to execute next. However, it has been keeping track and knows what it did the last few times it was in the same position, and it makes a guess — after all, there's no harm in guessing since if it guesses wrong it can just throw away the unneeded work. Continuing our analogy, a "Spectre version 2" attack on my girlfriend would be as follows: I spend a week talking about how Oxford is a wonderful place to visit and I really enjoyed the years I spent there, and then I tell her that I want to take her on vacation. She very reasonably assumes that — since I've been talking about Oxford so much — I must be planning on taking her to England, and runs off to check her passport and potentially renew it... but in fact I tricked her and I'm only planning on taking her to Ottawa. + +This "version 2" attack is far more powerful than "version 1" because it can be used to exploit side channels present in many different locations; but it is also much harder to exploit and depends intimately on details of CPU design, since the attacker needs to make the CPU guess the correct (wrong) location to anticipate that it will be visiting next. + +Now we get to the third attack, dubbed "Meltdown". 
This one is a bit weird, so I'm going to start with the analogy here: I tell my girlfriend that I want to take her to the Korean peninsula. She knows that her passport is valid for long enough; but she immediately runs off to check that her North Korean visa hasn't expired. Why does she have a North Korean visa, you ask? Good question. She doesn't — but she runs off to check its expiry date anyway! Because she doesn't have a North Korean visa, she (somehow) checks the expiry date on  _someone else's_  North Korean visa, and then (if it is about to expire) runs out to renew it — and so by telling her that I want to take her to Korea for a vacation  _I find out something she couldn't have told me even if she wanted to_ . If this sounds like we're falling down a [Dodgsonian][12] rabbit hole... well, we are. The most common reaction I've heard from security people about this is "Intel CPUs are doing  _what???_ ", and it's not by coincidence that one of the names suggested for an early Linux patch was Forcefully Unmap Complete Kernel With Interrupt Trampolines (FUCKWIT). (For the technically-inclined: Intel CPUs continue speculative execution through faults, so the fact that a page of memory cannot be accessed does not prevent it from, well, being accessed.) + +#### How users can protect themselves + +So that's what these vulnerabilities are all about; but what can regular users do to protect themselves? To start with, apply the damn patches. For the next few months there are going to be patches to operating systems; patches to individual applications; patches to phones; patches to routers; patches to smart televisions... if you see a notification saying "there are updates which need to be installed", **install the updates**. (However, this doesn't mean that you should be stupid: If you get an email saying "click here to update your system", it's probably malware.) 
These attacks are complicated, and need to be fixed in many ways in many different places, so  _each individual piece of software_  may have many patches as the authors work their way through from fixing the most easily exploited vulnerabilities to the more obscure theoretical weaknesses. + +What else can you do? Understand the implications of these vulnerabilities. Intel caught some undeserved flak for stating that they believe "these exploits do not have the potential to corrupt, modify or delete data"; in fact, they're quite correct in a direct sense, and this distinction is very relevant. A side channel attack inherently  _reveals information_ , but it does not by itself allow someone to take control of a system. (In some cases side channels may make it easier to take advantage of other bugs, however.) As such, it's important to consider what information could be revealed: Even if you're not working on top secret plans for responding to a ballistic missile attack, you've probably accessed password-protected websites (Facebook, Twitter, Gmail, perhaps your online banking...) and possibly entered your credit card details somewhere today. Those passwords and credit card numbers are what you should worry about. + +Now, in order for you to be attacked, some code needs to run on your computer. The most likely vector for such an attack is through a website — and the more shady the website the more likely you'll be attacked. (Why? Because if the owners of a website are already doing something which is illegal — say, selling fake prescription drugs — they're far more likely to agree if someone offers to pay them to add some "harmless" extra code to their site.) You're not likely to get attacked by visiting your bank's website; but if you make a practice of visiting the less reputable parts of the World Wide Web, it's probably best to not log in to your bank's website at the same time. 
Remember, this attack won't allow someone to take over your computer — all they can do is get access to information which is in your computer's memory  _at the time they carry out the attack_ . + +For greater paranoia, avoid accessing suspicious websites  _after_  you handle any sensitive information (including accessing password-protected websites or entering your credit card details). It's possible for this information to linger in your computer's memory even after it isn't needed — it will stay there until it's overwritten, usually because the memory is needed for something else — so if you want to be safe you should reboot your computer in between. + +For maximum paranoia: Don't connect to the internet from systems you care about. In the industry we refer to "airgapped" systems; this is a reference back to the days when connecting to a network required wires, so if there was a literal gap with just air between two systems, there was no way they could communicate. These days, with ubiquitous wifi (and in many devices, access to mobile phone networks) the terminology is in need of updating; but if you place devices into "airplane" mode it's unlikely that they'll be at any risk. Mind you, they won't be nearly as useful — there's almost always a tradeoff between security and usability, but if you're handling something really sensitive, you may want to consider this option. (For my [Tarsnap online backup service][13] I compile and cryptographically sign the packages on a system which has never been connected to the Internet. Before I turned it on for the first time, I opened up the case and pulled out the wifi card; and I copy files on and off the system on a USB stick. Tarsnap's slogan, by the way, is "Online backups  _for the truly paranoid_ ".) 
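Before moving on to what developers can do, the "variant 1" pattern described earlier can be made concrete with a small software simulation. Python performs no speculative execution, so the mispredicted branch is modeled explicitly here; every name and value in this sketch is invented for illustration, and the point is only the shape of the gadget: a bypassed bounds check whose side effect survives the rollback.

```python
# Software simulation of a Spectre "variant 1" gadget. The CPU's
# misprediction is modeled by the explicit `mispredict` flag.

array1 = [10, 20, 30, 40]              # the 4 elements the code may legally read
secret = [ord(c) for c in "KEY"]       # "out-of-bounds" memory just past array1
memory = array1 + secret               # flat toy address space
touched = set()                        # stand-in for which cache line got loaded

def gadget(x, mispredict=False):
    if x < len(array1) or mispredict:  # bounds check, bypassed under misprediction
        touched.add(memory[x])         # the load's cache footprint survives rollback

# "Train" with in-bounds accesses, then trigger one mispredicted access:
for i in range(len(array1)):
    gadget(i)
touched.clear()
gadget(4, mispredict=True)             # speculatively reads memory[4] == ord('K')
leaked = chr(touched.pop())
print(leaked)                          # -> K
```

In the real attack the rolled-back read can never be observed directly; its value is instead encoded into which cache line was touched, and the attacker recovers it with exactly the kind of timing measurement a cache side channel provides.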
+ +#### How developers can protect everyone + +The patches being developed and distributed by operating systems — including microcode updates from Intel — will help a lot, but there are still steps individual developers can take to reduce the risk of their code being exploited. + +First, practice good "cryptographic hygiene": Information which isn't in memory can't be stolen this way. If you have a set of cryptographic keys, load only the keys you need for the operations you will be performing. If you take a password, use it as quickly as possible and then immediately wipe it from memory. This [isn't always possible][14], especially if you're using a high level language which doesn't give you access to low level details of pointers and memory allocation; but there's at least a chance that it will help. + +Second, offload sensitive operations — especially cryptographic operations — to other processes. The security community has become more aware of [privilege separation][15] over the past two decades; but we need to go further than this, to separation of  _information_  — even if two processes need exactly the same operating system permissions, it can be valuable to keep them separate in order to avoid information from one process leaking via a side channel attack against the other. + +One common design paradigm I've seen recently is to "[TLS][16] all the things", with a wide range of applications gaining understanding of the TLS protocol layer. This is something I've objected to in the past as it results in unnecessary exposure of applications to vulnerabilities in the TLS stacks they use; side channel attacks provide another reason, namely the unnecessary exposure of the TLS stack to side channels in the application. 
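The "cryptographic hygiene" point above, load secrets as late as possible and wipe them as early as possible, can be sketched in Python, with the caveat (also noted above) that a high-level language cannot scrub every copy the runtime makes. The salt and iteration count below are placeholders, not recommendations.

```python
# Best-effort secret wiping in a high-level language. A mutable bytearray
# can be overwritten in place; immutable copies cannot, which is exactly
# the "isn't always possible" limitation described above.

import hashlib

def use_password(password: bytearray) -> bytes:
    """Derive a key from the password, then zero the password in place."""
    try:
        # NOTE: bytes(password) creates an immutable copy the program
        # cannot scrub; it lingers until garbage-collected and overwritten.
        return hashlib.pbkdf2_hmac("sha256", bytes(password),
                                   b"example-salt", 100_000)
    finally:
        for i in range(len(password)):   # overwrite every byte we control
            password[i] = 0

pw = bytearray(b"hunter2")
key = use_password(pw)
print(all(b == 0 for b in pw))           # -> True: the buffer is zeroed
print(len(key))                          # -> 32
```

Even this best effort only shrinks the window during which the secret sits in memory; in C one would reach for `explicit_bzero` or equivalent so the compiler cannot optimize the wipe away.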
If you want to add TLS to your application, don't add it to the application itself; rather, use a separate process to wrap and unwrap connections with TLS, and have your application take unencrypted connections over a local (unix) socket or a loopback TCP/IP connection. + +Separating code into multiple processes isn't always practical, however, for reasons of both performance and practical matters of code design. I've been considering (since long before these issues became public) another form of mitigation: Userland page unmapping. In many cases programs have data structures which are "private" to a small number of source files; for example, a random number generator will have internal state which is only accessed from within a single file (with appropriate functions for inputting entropy and outputting random numbers), and a hash table library would have a data structure which is allocated, modified, accessed, and finally freed only by that library via appropriate accessor functions. If these memory allocations can be corralled into a subset of the system address space, and the pages in question only mapped upon entering those specific routines, it could dramatically reduce the risk of information being revealed as a result of vulnerabilities which — like these side channel attacks — are limited to leaking information but cannot be (directly) used to execute arbitrary code. + +Finally, developers need to get better at providing patches: Not just to get patches out promptly, but also to get them into users' hands  _and to convince users to install them_ . That last part requires building up trust; as I wrote last year, one of the worst problems facing the industry is the [mixing of security and non-security updates][17]. 
If users are worried that they'll lose features (or gain "features" they don't want), they won't install the updates you recommend; it's essential to give users the option of getting security patches without worrying about whether anything else they rely upon will change. + +#### What's next? + +So far we've seen three attacks demonstrated: Two variants of Spectre and one form of Meltdown. Get ready to see more over the coming months and years. Off the top of my head, there are four vulnerability classes I expect to see demonstrated before long: + +* Attacks on [p-code][1] interpreters. Google's "Variant 1" demonstrated an attack where a conditional branch was mispredicted resulting in a bounds check being bypassed; but the same problem could easily occur with mispredicted branches in a switch statement resulting in the wrong  _operation_  being performed on a valid address. On p-code machines which have an opcode for "jump to this address, which contains machine code" (not entirely unlikely in the case of bytecode machines which automatically transpile "hot spots" into host machine code), this could very easily be exploited as a "speculatively execute attacker-provided code" mechanism. + +* Structure deserializing. This sort of code handles attacker-provided inputs which often include the lengths or numbers of fields in a structure, along with bounds checks to ensure the validity of the serialized structure. This is prime territory for a CPU to speculatively reach past the end of the input provided if it mispredicts the layout of the structure. + +* Decompressors, especially in HTTP(S) stacks. Data decompression inherently involves a large number of steps of "look up X in a table to get the length of a symbol, then adjust pointers and perform more memory accesses" — exactly the sort of behaviour which can leak information via cache side channels if a branch mispredict results in X being speculatively looked up in the wrong table. 
Add attacker-controlled inputs to HTTP stacks and the fact that services speaking HTTP are often required to perform request authentication and/or include TLS stacks, and you have all the conditions needed for sensitive information to be leaked. + +* Remote attacks. As far as I'm aware, all of the microarchitectural side channels demonstrated over the past 14 years have made use of "attack code" running on the system in question to observe the state of the caches or other microarchitectural details in order to extract the desired data. This makes attacks far easier, but should not be considered to be a prerequisite! Remote timing attacks are feasible, and I am confident that we will see a demonstration of "innocent" code being used for the task of extracting the microarchitectural state information before long. (Indeed, I think it is very likely that [certain people][2] are already making use of such remote microarchitectural side channel attacks.) + +#### Final thoughts on vulnerability disclosure + +The way these issues were handled was a mess; frankly, I expected better of Google, I expected better of Intel, and I expected better of the Linux community. When I found that Hyper-Threading was easily exploitable, I spent five months notifying the security community and preparing everyone for my announcement of the vulnerability; but when the embargo ended at midnight UTC and FreeBSD published its advisory a few minutes later, the broader world was taken entirely by surprise. Nobody knew what was coming aside from the people who needed to know; and the people who needed to know had months of warning. + +Contrast that with what happened this time around. Google discovered a problem and reported it to Intel, AMD, and ARM on June 1st. Did they then go around contacting all of the operating systems which would need to work on fixes for this? Not even close. FreeBSD was notified  _the week before Christmas_ , over six months after the vulnerabilities were discovered. 
Now, FreeBSD can occasionally respond very quickly to security vulnerabilities, even when they arise at inconvenient times — on November 30th 2009 a [vulnerability was reported][18] at 22:12 UTC, and on December 1st I [provided a patch][19] at 01:20 UTC, barely over 3 hours later — but that was an extremely simple bug which needed only a few lines of code to fix; the Spectre and Meltdown issues are orders of magnitude more complex. + +To make things worse, the Linux community was notified  _and couldn't keep their mouths shut_ . Standard practice for multi-vendor advisories like this is that an embargo date is set, and **nobody does anything publicly prior to that date**. People don't publish advisories; they don't commit patches into their public source code repositories; and they  _definitely_  don't engage in arguments on public mailing lists about whether the patches are needed for different CPUs. As a result, despite an embargo date being set for January 9th, by January 4th anyone who cared knew about the issues and there was code being passed around on Twitter for exploiting them. + +This is not the first time I've seen people get sloppy with embargoes recently, but it's by far the worst case. As an industry we pride ourselves on the concept of responsible disclosure — ensuring that people are notified in time to prepare fixes before an issue is disclosed publicly — but in this case there was far too much disclosure and nowhere near enough responsibility. We can do better, and I sincerely hope that next time we do. 
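To make the branch-misprediction hazard concrete, here is a rough C sketch of the "Variant 1" bounds-check-bypass pattern discussed above. The function and array names (and sizes) are purely illustrative assumptions — this shows the vulnerable code shape, not code from any published exploit:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative victim code; names and sizes are hypothetical.
 * Architecturally, the bounds check makes an out-of-bounds read of
 * array1 impossible. Speculatively, a mispredicted branch can still
 * execute the body with an out-of-bounds x, and the secret-dependent
 * load from array2 leaves a measurable footprint in the data cache. */
size_t  array1_size = 16;
uint8_t array1[16];
uint8_t array2[256 * 512];

uint8_t victim_function(size_t x)
{
    if (x < array1_size)                  /* branch the attacker trains   */
        return array2[array1[x] * 512];   /* cache line depends on data   */
    return 0;
}
```

An attacker would first call this repeatedly with in-bounds values of x to train the branch predictor, then pass an out-of-bounds x; even though the architectural result is 0, timing which line of array2 is subsequently cached can reveal the byte at array1 + x.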
+ +-------------------------------------------------------------------------------- + +via: http://www.daemonology.net/blog/2018-01-17-some-thoughts-on-spectre-and-meltdown.html + +作者:[ Daemonic Dispatches][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.daemonology.net/blog/ +[1]:https://en.wikipedia.org/wiki/P-code_machine +[2]:https://en.wikipedia.org/wiki/National_Security_Agency +[3]:https://googleprojectzero.blogspot.ca/2018/01/reading-privileged-memory-with-side.html +[4]:https://en.wikipedia.org/wiki/RSA_(cryptosystem) +[5]:https://www.openssl.org/ +[6]:http://www.daemonology.net/papers/cachemissing.pdf +[7]:http://www.bsdcan.org/ +[8]:https://eprint.iacr.org/2005/271.pdf +[9]:https://en.wikipedia.org/wiki/Advanced_Encryption_Standard +[10]:https://en.wikipedia.org/wiki/2001:_A_Space_Odyssey_(film) +[11]:https://en.wikipedia.org/wiki/Jeremiah_Denton +[12]:https://en.wikipedia.org/wiki/Lewis_Carroll +[13]:https://www.tarsnap.com/ +[14]:http://www.daemonology.net/blog/2014-09-06-zeroing-buffers-is-insufficient.html +[15]:https://en.wikipedia.org/wiki/Privilege_separation +[16]:https://en.wikipedia.org/wiki/Transport_Layer_Security +[17]:http://www.daemonology.net/blog/2017-06-14-oil-changes-safety-recalls-software-patches.html +[18]:http://seclists.org/fulldisclosure/2009/Nov/371 +[19]:https://lists.freebsd.org/pipermail/freebsd-security/2009-December/005369.html From b0ed5c447c9885427a6d47026df8f25fea2b6d94 Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 17 Jan 2018 22:16:23 +0800 Subject: [PATCH 049/226] PRF&PUB:20171214 A step-by-step guide to building open culture.md @lujun9972 --- ...-by-step guide to building open culture.md | 48 +++++++++++++++++++ ...-by-step guide to building open culture.md | 43 ----------------- 2 files changed, 48 insertions(+), 43 deletions(-) create mode 100644 published/20171214 A 
step-by-step guide to building open culture.md delete mode 100644 translated/tech/20171214 A step-by-step guide to building open culture.md diff --git a/published/20171214 A step-by-step guide to building open culture.md b/published/20171214 A step-by-step guide to building open culture.md new file mode 100644 index 0000000000..9bfea16c69 --- /dev/null +++ b/published/20171214 A step-by-step guide to building open culture.md @@ -0,0 +1,48 @@ +手把手教你构建开放式文化 +====== + +> 这本开放式组织的最新著作是大规模体验开放的手册。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/red_shoes_whitehurst_lead.jpeg?itok=jKL6AKeW) + +我们于 2015 年发表开放组织Open Organization 后,很多不同类型、不同规模的公司都对“开放式”文化究竟意味着什么感到好奇。甚至当我跟别的公司谈论我们产品和服务的优势时,也总是很快就从谈论技术转移到人和文化上去了。几乎所有对推动创新和保持行业竞争优势有兴趣的人都在思考这个问题。 + +不是只有高层领导团队senior leadership teams才对开放式工作感兴趣。[红帽公司最近一次调查 ][1] 发现 [81% 的受访者 ][2] 同意这样一种说法:“拥有开放式的组织文化对我们公司非常重要。” + +然而要注意的是,同时只有 [67% 的受访者 ][3] 认为:“我们的组织有足够的资源来构建开放式文化。” + +这个结果与我和其他公司交流时所听到的相吻合:人们希望在开放式文化中工作,他们只是不知道该怎么做。对此我表示同情,因为组织的行事风格是很难捕捉、评估和理解的。在 [Catalyst-In-Chief][4] 中,我将其称之为“组织中最神秘莫测的部分。” + +《开放式组织》认为,在数字转型有望改变我们工作的许多传统方式的时代,拥抱开放文化是创造持续创新的最可靠途径。当我们在书写这本书的时候,我们所关注的是描述在红帽公司中兴起的那种文化--而不是编写一本如何操作的书。我们并不会制定出一步步的流程来让其他组织采用。 + +这也是为什么与其他领导者和高管谈论他们是如何开始构建开放式文化的会那么有趣。在创建开放组织时,很多高管会说我们要“改变我们的文化”。但是文化并不是一项输入,它是一项输出——它是人们互动和日常行为的副产品。 + +告诉组织成员“更加透明地工作”、“更多地合作”以及“更加包容地行动”并没有什么作用。因为像“透明”、“合作”和“包容”这一类的文化特质并不是行动,它们只是组织内指导行为的价值观而已。 + +要如何才能构建开放式文化呢? 
+ +在过去的两年里,Opensource.com 社区收集了各种以开放的精神来进行工作、管理和领导的最佳实践方法。现在我们在新书 《[The Open Organization Workbook][5]》 中将之分享出来,这是一本更加规范的引发文化变革的指引。 + +要记住,任何改变,尤其是巨大的改变,都需要承诺、耐心,以及努力的工作。我推荐你在通往伟大成功的大道上先使用这本工作手册来实现一些微小的,有意义的成果。 + +通过阅读这本书,你将能够构建一个开放而又富有创新的文化氛围,使你们的人能够茁壮成长。我已經迫不及待想听听你的故事了。 + +本文摘自 《[Open Organization Workbook project][6]》。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/open-organization/17/12/whitehurst-workbook-introduction + +作者:[Jim Whitehurst][a] +译者:[lujun9972](https://github.com/lujun9972) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/jwhitehurst +[1]:https://www.redhat.com/en/blog/red-hat-releases-2017-open-source-culture-survey-results +[2]:https://www.techvalidate.com/tvid/923-06D-74C +[3]:https://www.techvalidate.com/tvid/D30-09E-B52 +[4]:https://opensource.com/open-organization/resources/catalyst-in-chief +[5]:https://opensource.com/open-organization/resources/workbook +[6]:https://opensource.com/open-organization/17/8/workbook-project-announcement diff --git a/translated/tech/20171214 A step-by-step guide to building open culture.md b/translated/tech/20171214 A step-by-step guide to building open culture.md deleted file mode 100644 index d6674c4286..0000000000 --- a/translated/tech/20171214 A step-by-step guide to building open culture.md +++ /dev/null @@ -1,43 +0,0 @@ -手把手教你构建开放式文化 -====== -我们于 2015 年发表 `开放组织 (Open Organization)` 后,很对各种类型不同大小的公司都对“开放式”文化究竟意味着什么感到好奇。甚至当我跟别的公司谈论我们产品和服务的优势时,也总是很快就从谈论技术转移到人和文化上去了。几乎所有对推动创新和保持行业竞争优势有兴趣的人都在思考这个问题。 - -不是只有高级领导团队 (Senior leadership teams) 才对开放式工作感兴趣。[红帽公司最近一次调查 ][1] 发现 [81% 的受访者 ][2] 同意这样一种说法:"拥有开放式的组织文化对我们公司非常重要。" - -然而要注意的是。同时只有 [67% 的受访者 ][3] 认为:"我们的组织有足够的资源来构建开放式文化。" - -这个结果与我从其他公司那交流所听到的相吻合:人们希望在开放式文化中工作,他们只是不知道该怎么做。对此我表示同情,因为组织的行事风格是很难捕捉,评估,和理解的。在 [Catalyst-In-Chief][4] 中,我将其称之为 "组织中最神秘莫测的部分。" - 
-开放式组织之所以让人神往是因为在这个数字化转型有望改变传统工作方式的时代,拥抱开放文化是保持持续创新的最可靠的途径。当我们在书写本文的时候,我们所关注的是描述在红帽公司中兴起的那种文化--而不是编写一本如何操作的书。我们并不会制定出一步步的流程来让其他组织采用。 - -这也是为什么与其他领导者和高管谈论他们是如何开始构建开放式文化的会那么有趣。在创建开发组织时,很多高管会说我们要"改变我们的文化"。但是文化并不是一项输入。它是一项输出--它是人们互动和日常行为的副产品。 - -告诉组织成员"更加透明地工作","更多地合作",以及 "更加包容地行动" 并没有什么作用。因为像 "透明," "合作," and "包容" 这一类的文化特质并不是行动。他们只是组织内指导行为的价值观而已。 - -纳入要如何才能构建开放式文化呢? - -在过去的两年里,Opensource.com 设计收集了各种以开放的精神来进行工作,管理和领导的最佳实践方法。现在我们在新书 [The Open Organization Workbook][5] 中将之分享出来,这是一本更加规范的引发文化变革的指引。 - -要记住,任何改变,尤其是巨大的改变,都需要许诺 (commitment),耐心,以及努力的工作。我推荐你在通往伟大成功的大道上先使用这本工作手册来实现一些微小的,有意义的成果。 - -通过阅读这本书,你将能够构建一个开放而又富有创新的文化氛围,使你们的人能够茁壮成长。我已經迫不及待想听听你的故事了。 - -本文摘自 [Open Organization Workbook project][6]。 - --------------------------------------------------------------------------------- - -via: https://opensource.com/open-organization/17/12/whitehurst-workbook-introduction - -作者:[Jim Whitehurst][a] -译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/jwhitehurst -[1]:https://www.redhat.com/en/blog/red-hat-releases-2017-open-source-culture-survey-results -[2]:https://www.techvalidate.com/tvid/923-06D-74C -[3]:https://www.techvalidate.com/tvid/D30-09E-B52 -[4]:https://opensource.com/open-organization/resources/catalyst-in-chief -[5]:https://opensource.com/open-organization/resources/workbook -[6]:https://opensource.com/open-organization/17/8/workbook-project-announcement From 6f529a7a6ddfd091453fe7c08f2ed237a2583d5b Mon Sep 17 00:00:00 2001 From: Shucheng <741932183@qq.com> Date: Wed, 17 Jan 2018 22:34:02 +0800 Subject: [PATCH 050/226] Translating --- ... 
How to bind ntpd to specific IP addresses on Linux-Unix.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/sources/tech/20171030 How to bind ntpd to specific IP addresses on Linux-Unix.md b/sources/tech/20171030 How to bind ntpd to specific IP addresses on Linux-Unix.md index be091e91a2..93afb77e85 100644 --- a/sources/tech/20171030 How to bind ntpd to specific IP addresses on Linux-Unix.md +++ b/sources/tech/20171030 How to bind ntpd to specific IP addresses on Linux-Unix.md @@ -1,5 +1,8 @@ +Translating by Drshu + How to bind ntpd to specific IP addresses on Linux/Unix ====== + By default, my ntpd/NTP server listens on all interfaces or IP address i.e 0.0.0.0:123. How do I make sure ntpd only listen on a specific IP address such as localhost or 192.168.1.1:123 on a Linux or FreeBSD Unix server? NTP is an acronym for Network Time Protocol. It is used for clock synchronization between computers. The ntpd program is an operating system daemon which sets and maintains the system time of day in synchronism with Internet standard time servers. 
From 684d98a543219b0f983f1d639cbdd321857a61bd Mon Sep 17 00:00:00 2001 From: ChenYi <31087327+cyleft@users.noreply.github.com> Date: Wed, 17 Jan 2018 22:56:57 +0800 Subject: [PATCH 051/226] apply for translation --- sources/tech/20171226 How to Configure Linux for Children.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20171226 How to Configure Linux for Children.md b/sources/tech/20171226 How to Configure Linux for Children.md index 318e4126a7..a0b8bb4394 100644 --- a/sources/tech/20171226 How to Configure Linux for Children.md +++ b/sources/tech/20171226 How to Configure Linux for Children.md @@ -1,3 +1,4 @@ +translate by cyleft How to Configure Linux for Children ====== From 86e77fbcb458203334c79d9b1e90fe389d7df76b Mon Sep 17 00:00:00 2001 From: ChenYi <31087327+cyleft@users.noreply.github.com> Date: Wed, 17 Jan 2018 23:01:31 +0800 Subject: [PATCH 052/226] apply for translation --- .../20170918 3 text editor alternatives to Emacs and Vim.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20170918 3 text editor alternatives to Emacs and Vim.md b/sources/tech/20170918 3 text editor alternatives to Emacs and Vim.md index 742e1d9f92..835db13b2f 100644 --- a/sources/tech/20170918 3 text editor alternatives to Emacs and Vim.md +++ b/sources/tech/20170918 3 text editor alternatives to Emacs and Vim.md @@ -1,3 +1,5 @@ +## translate by cyleft + 3 text editor alternatives to Emacs and Vim ====== From 4665f70088826ff5780189a8a34e06d3b78e8155 Mon Sep 17 00:00:00 2001 From: Shucheng <741932183@qq.com> Date: Wed, 17 Jan 2018 23:30:30 +0800 Subject: [PATCH 053/226] Translated and fix some errors --- ... to specific IP addresses on Linux-Unix.md | 97 -------------- ... 
to specific IP addresses on Linux-Unix.md | 123 ++++++++++++++++++ 2 files changed, 123 insertions(+), 97 deletions(-) delete mode 100644 sources/tech/20171030 How to bind ntpd to specific IP addresses on Linux-Unix.md create mode 100644 translated/tech/20171030 How to bind ntpd to specific IP addresses on Linux-Unix.md diff --git a/sources/tech/20171030 How to bind ntpd to specific IP addresses on Linux-Unix.md b/sources/tech/20171030 How to bind ntpd to specific IP addresses on Linux-Unix.md deleted file mode 100644 index 93afb77e85..0000000000 --- a/sources/tech/20171030 How to bind ntpd to specific IP addresses on Linux-Unix.md +++ /dev/null @@ -1,97 +0,0 @@ -Translating by Drshu - -How to bind ntpd to specific IP addresses on Linux/Unix -====== - -By default, my ntpd/NTP server listens on all interfaces or IP address i.e 0.0.0.0:123. How do I make sure ntpd only listen on a specific IP address such as localhost or 192.168.1.1:123 on a Linux or FreeBSD Unix server? - -NTP is an acronym for Network Time Protocol. It is used for clock synchronization between computers. The ntpd program is an operating system daemon which sets and maintains the system time of day in synchronism with Internet standard time servers. -[![How to prevent NTPD from listening on 0.0.0.0:123 and binding to specific IP addresses on a Linux/Unix server][1]][1] -The NTP is configured using ntp.conf located in /etc/ directory. - -## interface directive in /etc/ntp.conf - - -You can prevent ntpd to listen on 0.0.0.0:123 by setting the interface command. The syntax is: -`interface listen IPv4|IPv6|all -interface ignore IPv4|IPv6|all -interface drop IPv4|IPv6|all` -The above configures which network addresses ntpd listens or dropped without processing any requests. The ignore prevents opening matching addresses, drop causes ntpd to open the address and drop all received packets without examination. 
For example to ignore listing on all interfaces, add the following in /etc/ntp.conf: -`interface ignore wildcard` -To listen to only 127.0.0.1 and 192.168.1.1 addresses: -`interface listen 127.0.0.1 -interface listen 192.168.1.1` -Here is my sample /etc/ntp.conf file from FreeBSD cloud server: -`$ egrep -v '^#|$^' /etc/ntp.conf` -Sample outputs: -``` -tos minclock 3 maxclock 6 -pool 0.freebsd.pool.ntp.org iburst -restrict default limited kod nomodify notrap noquery nopeer -restrict -6 default limited kod nomodify notrap noquery nopeer -restrict source limited kod nomodify notrap noquery -restrict 127.0.0.1 -restrict -6 ::1 -leapfile "/var/db/ntpd.leap-seconds.list" -interface ignore wildcard -interface listen 172.16.3.1 -interface listen 10.105.28.1 -``` - - -## Restart ntpd - -Reload/restart the ntpd on a FreeBSD unix: -`$ sudo /etc/rc.d/ntpd restart` -OR [use the following command on a Debian/Ubuntu Linux][2]: -`$ sudo systemctl restart ntp` -OR [use the following on a CentOS/RHEL 7/Fedora Linux][2]: -`$ sudo systemctl restart ntpd` - -## Verification - -Use the netstat command/ss command for verification or to make sure ntpd bind to the specific IP address only: -`$ netstat -tulpn | grep :123` -OR -`$ ss -tulpn | grep :123` -Sample outputs: -``` -udp 0 0 10.105.28.1:123 0.0.0.0:* - -udp 0 0 172.16.3.1:123 0.0.0.0:* - -``` - -udp 0 0 10.105.28.1:123 0.0.0.0:* - udp 0 0 172.16.3.1:123 0.0.0.0:* - - -Use [the sockstat command on a FreeBSD Unix server][3]: -`$ sudo sockstat -$ sudo sockstat -4 -$ sudo sockstat -4 | grep :123` -Sample outputs: -``` -root ntpd 59914 22 udp4 127.0.0.1:123 *:* -root ntpd 59914 24 udp4 127.0.1.1:123 *:* -``` - -root ntpd 59914 22 udp4 127.0.0.1:123 *:* root ntpd 59914 24 udp4 127.0.1.1:123 *:* - -## Posted by:Vivek Gite - -The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. 
He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][4], [Facebook][5], [Google+][6]. - --------------------------------------------------------------------------------- - -via: https://www.cyberciti.biz/faq/how-to-bind-ntpd-to-specific-ip-addresses-on-linuxunix/ - -作者:[Vivek Gite][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.cyberciti.biz -[1]:https://www.cyberciti.biz/media/new/faq/2017/10/how-to-prevent-ntpd-to-listen-on-all-interfaces-on-linux-unix-box.jpg -[2]:https://www.cyberciti.biz/faq/restarting-ntp-service-on-linux/ -[3]:https://www.cyberciti.biz/faq/freebsd-unix-find-the-process-pid-listening-on-a-certain-port-commands/ -[4]:https://twitter.com/nixcraft -[5]:https://facebook.com/nixcraft -[6]:https://plus.google.com/+CybercitiBiz diff --git a/translated/tech/20171030 How to bind ntpd to specific IP addresses on Linux-Unix.md b/translated/tech/20171030 How to bind ntpd to specific IP addresses on Linux-Unix.md new file mode 100644 index 0000000000..6fd4ee93a3 --- /dev/null +++ b/translated/tech/20171030 How to bind ntpd to specific IP addresses on Linux-Unix.md @@ -0,0 +1,123 @@ +如何在 Linux/Unix 之上绑定 ntpd 到特定的 IP 地址 +====== + +默认的情况下,我们的 ntpd/NTP 服务器会监听所有的端口或者 IP 地址,也就是:0.0.0.0:123。 怎么才可以在一个 Linux 或是 FreeBSD Unix 服务器上,确保只监听特定的 IP 地址,比如 localhost 或者是 192.168.1.1:123 ? 
+ +NTP 是网络时间协议的首字母简写,这是一个用来同步两台电脑之间时间的协议。ntpd 是一个操作系统守护进程,可以设置并且保证系统的时间与互联网标准时间服务器同步。 + +[![如何在Linux和Unix服务器,防止 NTPD 监听0.0.0.0:123 并将其绑定到特定的 IP 地址][1]][1] + +NTP 使用 `/etc/` 目录之下的 `ntp.conf` 作为配置文件。 + +## /etc/ntp.conf 之中的 interface 指令 + +你可以通过设置 interface 指令来防止 ntpd 监听 0.0.0.0:123,语法如下: + +``` +interface listen IPv4|IPv6|all +interface ignore IPv4|IPv6|all +interface drop IPv4|IPv6|all +``` + +上面的配置用来指定 ntpd 监听或忽略哪些网络地址,而不处理任何请求。**其中 ignore 会阻止 ntpd 打开匹配的地址;drop 则会让 ntpd 打开该地址,但不加检查地丢弃所有收到的数据包。** 举个例子,如果要忽略在所有接口上的监听,加入下面的语句到`/etc/ntp.conf`: + +`interface ignore wildcard` + +如果只监听 127.0.0.1 和 192.168.1.1 则是这样: + +``` +interface listen 127.0.0.1 +interface listen 192.168.1.1 +``` + +这是我 FreeBSD 云服务器上的样例 /etc/ntp.conf 文件: + +`$ egrep -v '^#|$^' /etc/ntp.conf` + +样例输出为: + +``` +tos minclock 3 maxclock 6 +pool 0.freebsd.pool.ntp.org iburst +restrict default limited kod nomodify notrap noquery nopeer +restrict -6 default limited kod nomodify notrap noquery nopeer +restrict source limited kod nomodify notrap noquery +restrict 127.0.0.1 +restrict -6 ::1 +leapfile "/var/db/ntpd.leap-seconds.list" +interface ignore wildcard +interface listen 172.16.3.1 +interface listen 10.105.28.1 +``` + +## 重启 ntpd + +在 FreeBSD Unix 之上重新加载/重启 ntpd: + +`$ sudo /etc/rc.d/ntpd restart` +或者 [在 Debian 和 Ubuntu Linux 之上使用下面的命令][2]: +`$ sudo systemctl restart ntp` +或者 [在 CentOS/RHEL 7/Fedora Linux 之上使用下面的命令][2]: +`$ sudo systemctl restart ntpd` + +## 校验 + +使用 `netstat` 和 `ss` 命令来检查 ntpd 只绑定到了特定的 IP 地址: + +`$ netstat -tulpn | grep :123` +或是 +`$ ss -tulpn | grep :123` +样例输出: + +``` +udp 0 0 10.105.28.1:123 0.0.0.0:* - +udp 0 0 172.16.3.1:123 0.0.0.0:* - +``` + +使用 [sockstat 命令(FreeBSD Unix 服务器)][3]: + +``` +$ sudo sockstat +$ sudo sockstat -4 
+$ sudo sockstat -4 | grep :123 +``` + + +样例输出: + +``` +root ntpd 59914 22 udp4 127.0.0.1:123 *:* +root ntpd 59914 24 udp4 127.0.1.1:123 *:* +``` + + + +## Vivek Gite 投稿 + +这个作者是 nixCraft 的作者并且是一位经验丰富的系统管理员,也是一名 Linux 操作系统和 Unix shell 脚本的训练师。他为全球不同行业,包括 IT、教育业、安全防护、空间研究和非营利性组织的客户工作。关注他的 [Twitter][4], [Facebook][5], [Google+][6]。 + + + +-------------------------------------------------------------------------------- + +via: https://www.cyberciti.biz/faq/how-to-bind-ntpd-to-specific-ip-addresses-on-linuxunix/ + +作者:[Vivek Gite][a] +译者:[Drshu](https://github.com/Drshu) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.cyberciti.biz +[1]:https://www.cyberciti.biz/media/new/faq/2017/10/how-to-prevent-ntpd-to-listen-on-all-interfaces-on-linux-unix-box.jpg +[2]:https://www.cyberciti.biz/faq/restarting-ntp-service-on-linux/ +[3]:https://www.cyberciti.biz/faq/freebsd-unix-find-the-process-pid-listening-on-a-certain-port-commands/ +[4]:https://twitter.com/nixcraft +[5]:https://facebook.com/nixcraft +[6]:https://plus.google.com/+CybercitiBiz From 18412f69c554145d199e8a469b9dfd018ad81c17 Mon Sep 17 00:00:00 2001 From: jessie-pang <35220454+jessie-pang@users.noreply.github.com> Date: Thu, 18 Jan 2018 08:51:21 +0800 Subject: [PATCH 054/226] Delete 20170921 Mastering file searches on Linux.md --- ...170921 Mastering file searches on Linux.md | 224 ------------------ 1 file changed, 224 deletions(-) delete mode 100644 sources/tech/20170921 Mastering file searches on Linux.md diff --git a/sources/tech/20170921 Mastering file searches on Linux.md b/sources/tech/20170921 Mastering file searches on Linux.md deleted file mode 100644 index 524585003c..0000000000 --- a/sources/tech/20170921 Mastering file searches on Linux.md +++ /dev/null @@ -1,224 +0,0 @@ -Translating by jessie-pang - -Mastering file searches on Linux -====== - 
-![](https://images.idgesg.net/images/article/2017/09/telescope-100736548-large.jpg) - -There are many ways to search for files on Linux systems and the commands can be very easy or very specific -- narrowing down your search criteria to find what just you're looking for and nothing else. In today's post, we're going to examine some of the most useful commands and options for your file searches. We're going to look into: - - * Quick finds - * More complex search criteria - * Combining conditions - * Reversing criteria - * Simple vs. detailed responses - * Looking for duplicate files - - - -There are actually several useful commands for searching for files. The **find** command may be the most obvious, but it's not the only command or always the fastest way to find what you're looking for. - -### Quick file search commands: which and locate - -The simplest commands for searching for files are probably **which** and **locate**. Both have some constraints that you should be aware of. The **which** command is only going to search through directories on your search path looking for files that are executable. It is generally used to identify commands. If you are curious about what command will be run when you type "which", for example, you can use the command "which which" and it will point you to the executable. -``` -$ which which -/usr/bin/which - -``` - -The **which** command will display the first executable that it finds with the name you supply (i.e., the one you would run if you use that command) and then stop. - -The **locate** command is a bit more generous. However, it has a constraint, as well. It will find any number of files, but only if the file names are contained in a database prepared by the **updatedb** command. That file will likely be stored in some location like /var/lib/mlocate/mlocate.db, but is not intended to be read by anything other than the locate command. Updates to this file are generally made by updatedb running daily through cron. 
- -Simple **find** commands don't require a lot more effort, but they do require a starting point for the search and some kind of search criteria. The simplest find command -- one that searches for files by name -- might look like this: -``` -$ find . -name runme -./bin/runme - -``` - -Searching from the current position in the file system by file name as shown will also involve searching all subdirectories unless a search depth is specified. - -### More than just file names - -The **find** command allows you to search on a number of criteria beyond just file names. These include file owner, group, permissions, size, modification time, lack of an active owner or group and file type. And you can do things beyond just locating the files. You can delete them, rename them, change ownership, change permissions, or run nearly any command against the located files. - -These two commands would find 1) files owned by root within the current directory and 2) files _not_ owned by the specified user (in this case, shs). In this case, both responses are the same, but they won't always be. -``` -$ find . -user root -ls - 396926 0 lrwxrwxrwx 1 root root 21 Sep 21 09:03 ./xyz -> /home/peanut/xyz -$ find . ! -user shs -ls - 396926 0 lrwxrwxrwx 1 root root 21 Sep 21 09:03 ./xyz -> /home/peanut/xyz - -``` - -The ! character represents "not" -- reversing the condition that follows it. - -The command below finds files that have a particular set of permissions. -``` -$ find . -perm 750 -ls - 397176 4 -rwxr-x--- 1 shs shs 115 Sep 14 13:52 ./ll - 398209 4 -rwxr-x--- 1 shs shs 117 Sep 21 08:55 ./get-updates - 397145 4 drwxr-x--- 2 shs shs 4096 Sep 14 15:42 ./newdir - -``` - -This command displays files with 777 permissions that are _not_ symbolic links. -``` -$ sudo find /home -perm 777 ! 
-type l -ls - 397132 4 -rwxrwxrwx 1 shs shs 18 Sep 15 16:06 /home/shs/bin/runme - 396949 4 -rwxrwxrwx 1 root root 558 Sep 21 11:21 /home/oops - -``` - -The following command looks for files that are larger than a gigabyte in size. And notice that we've located a very interesting file. It represents the physical memory of this system in the ELF core file format. -``` -$ sudo find / -size +1G -ls -4026531994 0 -r-------- 1 root root 140737477881856 Sep 21 11:23 /proc/kcore - 1444722 15332 -rw-rw-r-- 1 shs shs 1609039872 Sep 13 15:55 /home/shs/Downloads/ubuntu-17.04-desktop-amd64.iso - -``` - -Finding files by file type is easy as long as you know how the file types are described for the find command. -``` -b = block special file -c = character special file -d = directory -p = named pipe -f = regular file -l = symbolic link -s = socket -D = door (Solaris only) - -``` - -In the commands below, we are looking for symbolic links and sockets. -``` -$ find . -type l -ls - 396926 0 lrwxrwxrwx 1 root root 21 Sep 21 09:03 ./whatever -> /home/peanut/whatever -$ find . -type s -ls - 395256 0 srwxrwxr-x 1 shs shs 0 Sep 21 08:50 ./.gnupg/S.gpg-agent - -``` - -You can also search for files by inode number. -``` -$ find . -inum 397132 -ls - 397132 4 -rwx------ 1 shs shs 18 Sep 15 16:06 ./bin/runme - -``` - -Another way to search for files by inode involves using the **debugfs** command. On a large file system, this command might be considerably faster than using find. You may need to install icheck. -``` -$ sudo debugfs -R 'ncheck 397132' /dev/sda1 -debugfs 1.42.13 (17-May-2015) -Inode Pathname -397132 /home/shs/bin/runme - -``` - -In the following command, we're starting in our home directory (~), limiting the depth of our search (how deeply we'll search subdirectories) and looking only for files that have been created or modified within the last day (mtime setting). 
-``` -$ find ~ -maxdepth 2 -mtime -1 -ls - 407928 4 drwxr-xr-x 21 shs shs 4096 Sep 21 12:03 /home/shs - 394006 8 -rw------- 1 shs shs 5909 Sep 21 08:18 /home/shs/.bash_history - 399612 4 -rw------- 1 shs shs 53 Sep 21 08:50 /home/shs/.Xauthority - 399615 4 drwxr-xr-x 2 shs shs 4096 Sep 21 09:32 /home/shs/Downloads - -``` - -### More than just listing files - -With an **-exec** option, the find command allows you to change files in some way once you've found them. You simply need to follow the -exec option with the command you want to run. -``` -$ find . -name runme -exec chmod 700 {} \; -$ find . -name runme -ls - 397132 4 -rwx------ 1 shs shs 18 Sep 15 16:06 ./bin/runme - -``` - -In this command, {} represents the name of the file. This command would change permissions on any files named "runme" in the current directory and subdirectories. - -Put whatever command you want to run following the -exec option and using a syntax similar to what you see above. - -### Other search criteria - -As shown in one of the examples above, you can also search by other criteria -- file age, owner, permissions, etc. Here are some examples. - -#### Finding by user -``` -$ sudo find /home -user peanut -/home/peanut -/home/peanut/.bashrc -/home/peanut/.bash_logout -/home/peanut/.profile -/home/peanut/examples.desktop - -``` - -#### Finding by file permissions -``` -$ sudo find /home -perm 777 -/home/shs/whatever -/home/oops - -``` - -#### Finding by age -``` -$ sudo find /home -mtime +100 -/home/shs/.mozilla/firefox/krsw3giq.default/gmp-gmpopenh264/1.6/gmpopenh264.info -/home/shs/.mozilla/firefox/krsw3giq.default/gmp-gmpopenh264/1.6/libgmpopenh264.so - -``` - -#### Finding by age comparison - -Commands like this allow you to find files newer than some other file. -``` -$ sudo find /var/log -newer /var/log/syslog -/var/log/auth.log - -``` - -### Finding duplicate files - -If you're looking to clean up disk space, you might want to remove large duplicate files. 
The best way to determine whether files are truly duplicates is to use the **fdupes** command. This command uses md5 checksums to determine if files have the same content. With the -r (recursive) option, fdupes will run through a directory and find files that have the same checksum and are thus identical in content. - -If you run a command like this as root, you will likely find a lot of duplicate files, but many will be startup files that were added to home directories when they were created. -``` -# fdupes -rn /home > /tmp/dups.txt -# more /tmp/dups.txt -/home/jdoe/.profile -/home/tsmith/.profile -/home/peanut/.profile -/home/rocket/.profile - -/home/jdoe/.bashrc -/home/tsmith/.bashrc -/home/peanut/.bashrc -/home/rocket/.bashrc - -``` - -Similarly, you might find a lot of duplicate configuration files in /usr that you shouldn't remove. So, be careful with the fdupes output. - -The fdupes command isn't always speedy, but keeping in mind that it's running checksum queries over a lot of files to compare them, you'll probably appreciate how efficient it is. - -### Wrap-up - -There are lots of way to locate files on Linux systems. If you can describe what you're looking for, one of the commands above will help you find it. 
- - --------------------------------------------------------------------------------- - -via: https://www.networkworld.com/article/3227075/linux/mastering-file-searches-on-linux.html - -作者:[Sandra Henry-Stocker][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/ From 74ede909d8dcb494316abcf802f867acfd4af305 Mon Sep 17 00:00:00 2001 From: jessie-pang <35220454+jessie-pang@users.noreply.github.com> Date: Thu, 18 Jan 2018 08:52:37 +0800 Subject: [PATCH 055/226] 20170921 Mastering file searches on Linux.md --- ...170921 Mastering file searches on Linux.md | 234 ++++++++++++++++++ 1 file changed, 234 insertions(+) create mode 100644 translated/tech/20170921 Mastering file searches on Linux.md diff --git a/translated/tech/20170921 Mastering file searches on Linux.md b/translated/tech/20170921 Mastering file searches on Linux.md new file mode 100644 index 0000000000..ec3dae4acc --- /dev/null +++ b/translated/tech/20170921 Mastering file searches on Linux.md @@ -0,0 +1,234 @@ +精通 Linux 上的文件搜索 +====== + +![](https://images.idgesg.net/images/article/2017/09/telescope-100736548-large.jpg) + +在 Linux 系统上搜索文件的方法有很多,有的命令很简单,有的很详细。我们的目标是:缩小搜索范围,找到您正在寻找的文件,又不受其他文件的干扰。在今天的文章中,我们将研究一些对文件搜索最有用的命令和选项。我们将涉及: + + * 快速搜索 + * 更复杂的搜索条件 + * 合并条件 + * 反转条件 + * 简单和详细的回应 + * 寻找重复的文件 + +有很多有用的命令可以搜索文件,**find** 命令可能是其中最有名的,但它不是唯一的命令,也不一定总是找到目标文件的最快方法。 + +### 快速搜索命令:which 和 locate + +搜索文件的最简单的命令可能就是 **which** 和 **locate** 了,但二者都有一些局限性。**which** 命令只会在系统定义的搜索路径中,查找可执行的文件,通常用于识别命令。如果您对输入 which 时会运行的命令感到好奇,您可以使用命令 which which,它会指向对应的可执行文件。 + +``` +$ which which +/usr/bin/which + +``` + +**which** 命令会显示它找到的第一个以相应名称命名的可执行文件(也就是使用该命令时将运行的那个文件),然后停止。 + +**locate** 命令更厉害一点,它可以查找任意数量的文件,但它也有一个限制:仅当文件名被包含在由 **updatedb** 命令准备的数据库时才有效。该文件可能会存储在某个位置,如 /var/lib/mlocate/mlocate.db,但不能用 locate 以外的任何命令读取。这个文件的更新通常是通过每天通过 cron 运行的 
updatedb 进行的。 + +简单的 **find** 命令不需要太多限制,不过它需要搜索的起点和指定搜索条件。最简单的 find 命令:按文件名搜索文件。如下所示: + +``` +$ find . -name runme +./bin/runme + +``` + +如上所示,通过文件名搜索文件系统的当前位置将会搜索所有子目录,除非您指定了搜索深度。 + +### 不仅仅是文件名 + +**find** 命令允许您搜索除文件名以外的多种条件,包括文件所有者、组、权限、大小、修改时间、缺少所有者或组和文件类型等。除了查找文件外,您还可以删除文件、对其进行重命名、更改所有者、更改权限和对文件运行几乎任何命令。 + +下面两条命令会查找:在当前目录中 root 用户拥有的文件,以及非指定用户(在本例中为 shs)拥有的文件。在这个例子中,两个输出是一样的,但并不总是如此。 + +``` +$ find . -user root -ls + 396926 0 lrwxrwxrwx 1 root root 21 Sep 21 09:03 ./xyz -> /home/peanut/xyz +$ find . ! -user shs -ls + 396926 0 lrwxrwxrwx 1 root root 21 Sep 21 09:03 ./xyz -> /home/peanut/xyz + +``` + +感叹号“!”字符代表“非”:反转跟随其后的条件。 + +下面的命令将查找具有特定权限的文件: + +``` +$ find . -perm 750 -ls + 397176 4 -rwxr-x--- 1 shs shs 115 Sep 14 13:52 ./ll + 398209 4 -rwxr-x--- 1 shs shs 117 Sep 21 08:55 ./get-updates + 397145 4 drwxr-x--- 2 shs shs 4096 Sep 14 15:42 ./newdir + +``` + +接下来的命令显示具有 777 权限的非符号链接文件: + +``` +$ sudo find /home -perm 777 ! -type l -ls + 397132 4 -rwxrwxrwx 1 shs shs 18 Sep 15 16:06 /home/shs/bin/runme + 396949 4 -rwxrwxrwx 1 root root 558 Sep 21 11:21 /home/oops + +``` + +以下命令将查找大小超过千兆字节的文件。请注意,我们找到了一个非常有趣的文件。它在 ELF 核心文件格式中代表该系统的物理内存。 + +``` +$ sudo find / -size +1G -ls + 4026531994 0 -r-------- 1 root root 140737477881856 Sep 21 11:23 /proc/kcore + 1444722 15332 -rw-rw-r-- 1 shs shs 1609039872 Sep 13 15:55 /home/shs/Downloads/ubuntu-17.04-desktop-amd64.iso + +``` + +只要您知道 find 命令是如何描述文件类型的,就可以通过文件类型来查找文件。 + +``` +b = 块设备文件 +c = 字符设备文件 +d = 目录 +p = 命名管道 +f = 常规文件 +l = 符号链接 +s = 套接字 +D = 门(仅限 Solaris) + +``` + +在下面的命令中,我们要寻找符号链接和套接字: + +``` +$ find . -type l -ls + 396926 0 lrwxrwxrwx 1 root root 21 Sep 21 09:03 ./whatever -> /home/peanut/whatever +$ find . -type s -ls + 395256 0 srwxrwxr-x 1 shs shs 0 Sep 21 08:50 ./.gnupg/S.gpg-agent + +``` + +您还可以根据 inode 数字来搜索文件: + +``` +$ find . 
-inum 397132 -ls + 397132 4 -rwx------ 1 shs shs 18 Sep 15 16:06 ./bin/runme + +``` + +另一种通过 inode 搜索文件的方法是使用 **debugfs** 命令。在大的文件系统上,这个命令可能比 find 快得多,您可能需要安装 icheck。 + +``` +$ sudo debugfs -R 'ncheck 397132' /dev/sda1 +debugfs 1.42.13 (17-May-2015) +Inode Pathname +397132 /home/shs/bin/runme + +``` + +在下面的命令中,我们从主目录(〜)开始,限制搜索的深度(是我们将搜索子目录的层数),并只查看在最近一天内创建或修改的文件(mtime 设置)。 + +``` +$ find ~ -maxdepth 2 -mtime -1 -ls + 407928 4 drwxr-xr-x 21 shs shs 4096 Sep 21 12:03 /home/shs + 394006 8 -rw------- 1 shs shs 5909 Sep 21 08:18 /home/shs/.bash_history + 399612 4 -rw------- 1 shs shs 53 Sep 21 08:50 /home/shs/.Xauthority + 399615 4 drwxr-xr-x 2 shs shs 4096 Sep 21 09:32 /home/shs/Downloads + +``` + +### 不仅仅是罗列文件 + +使用 **-exec** 选项,在您使用 find 命令找到文件后可以以某种方式更改文件。您只需参照 -exec 选项即可运行相应的命令。 + +``` +$ find . -name runme -exec chmod 700 {} \; +$ find . -name runme -ls + 397132 4 -rwx------ 1 shs shs 18 Sep 15 16:06 ./bin/runme + +``` + +在这条命令中,“{}”代表文件名。此命令将更改当前目录和子目录中任何名为“runme”的文件的权限。 + +把您想运行的任何命令放在 -exec 选项之后,并使用类似于上面命令的语法即可。 + +### 其他搜索条件 + +如上面的例子所示,您还可以通过其他条件进行搜索:文件的修改时间、所有者、权限等。以下是一些示例。 + +#### 根据用户查找文件 +``` +$ sudo find /home -user peanut +/home/peanut +/home/peanut/.bashrc +/home/peanut/.bash_logout +/home/peanut/.profile +/home/peanut/examples.desktop + +``` + +#### 根据权限查找文件 +``` +$ sudo find /home -perm 777 +/home/shs/whatever +/home/oops + +``` + +#### 根据修改时间查找文件 +``` +$ sudo find /home -mtime +100 +/home/shs/.mozilla/firefox/krsw3giq.default/gmp-gmpopenh264/1.6/gmpopenh264.info +/home/shs/.mozilla/firefox/krsw3giq.default/gmp-gmpopenh264/1.6/libgmpopenh264.so + +``` + +#### 通过比较修改时间查找文件 + +像这样的命令可以让您找到修改时间较近的文件。 + +``` +$ sudo find /var/log -newer /var/log/syslog +/var/log/auth.log + +``` + +### 寻找重复的文件 + +如果您正在清理磁盘空间,则可能需要删除较大的重复文件。确定文件是否真正重复的最好方法是使用 **fdupes** 命令。此命令使用 md5 校验和来确定文件是否具有相同的内容。使用 -r(递归)选项,fdupes 将在一个目录下并查找具有相同校验和而被确定为内容相同的文件。 + +如果以 root 身份运行这样的命令,您可能会发现很多重复的文件,但是很多文件都是创建时被添加到主目录的启动文件。 + +``` +# fdupes -rn /home > /tmp/dups.txt +# more /tmp/dups.txt 
+/home/jdoe/.profile +/home/tsmith/.profile +/home/peanut/.profile +/home/rocket/.profile + +/home/jdoe/.bashrc +/home/tsmith/.bashrc +/home/peanut/.bashrc +/home/rocket/.bashrc + +``` + +同样,您可能会在 /usr 中发现很多重复的但不该删除的配置文件。所以,请谨慎利用 fdupes 的输出。 + +fdupes 命令并不总是很快,但是要记住,它正在对许多文件运行校验和来做比较,你可能会意识到它的有效性。 + +### 总结 + +有很多方法可以在 Linux 系统上查找文件。如果您可以描述清楚您正在寻找什么,上面的命令将帮助您找到目标。 + + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3227075/linux/mastering-file-searches-on-linux.html + +作者:[Sandra Henry-Stocker][a] +译者:[jessie-pang](https://github.com/jessie-pang) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/ \ No newline at end of file From 5f3263e57987d4766136856fe9f5ef9d04308438 Mon Sep 17 00:00:00 2001 From: geekpi Date: Thu, 18 Jan 2018 09:00:26 +0800 Subject: [PATCH 056/226] translated --- ...ing the Linux find command with caution.md | 95 ------------------- ...ing the Linux find command with caution.md | 93 ++++++++++++++++++ 2 files changed, 93 insertions(+), 95 deletions(-) delete mode 100644 sources/tech/20171016 Using the Linux find command with caution.md create mode 100644 translated/tech/20171016 Using the Linux find command with caution.md diff --git a/sources/tech/20171016 Using the Linux find command with caution.md b/sources/tech/20171016 Using the Linux find command with caution.md deleted file mode 100644 index bb43f2cd76..0000000000 --- a/sources/tech/20171016 Using the Linux find command with caution.md +++ /dev/null @@ -1,95 +0,0 @@ -translating---geekpi - -Using the Linux find command with caution -====== -![](https://images.idgesg.net/images/article/2017/10/caution-sign-100738884-large.jpg) -A friend recently reminded me of a useful option that can add a little caution to the commands that I run with the Linux find command. 
It's called -ok and it works like the -exec option except for one important difference -- it makes the find command ask for permission before taking the specified action.
-
-Here's an example. If you were looking for files that you intended to remove from the system using find, you might run a command like this:
-```
-$ find . -name runme -exec rm {} \;
-
-```
-
-Anywhere within the current directory and its subdirectories, any files named "runme" would be summarily removed -- provided, of course, you have permission to remove them. Use the -ok option instead, and you'll see something like this. The find command will ask for approval before removing the files. Answering **y** for "yes" would allow the find command to go ahead and remove the files one by one.
-```
-$ find . -name runme -ok rm {} \;
-< rm ... ./bin/runme > ?
-
-```
-
-### The -execdir command is also an option
-
-Another option that can be used to modify the behavior of the find command and potentially make it more controllable is the -execdir command. Where -exec runs whatever command is specified, -execdir runs the specified command from the directory in which the located file resides rather than from the directory in which the find command is run. Here's an example of how it works:
-```
-$ pwd
-/home/shs
-$ find . -name runme -execdir pwd \;
-/home/shs/bin
-
-```
-```
-$ find . -name runme -execdir ls \;
-ls rm runme
-
-```
-
-So far, so good. One important thing to keep in mind, however, is that the -execdir option will also run commands from the directories in which the located files reside. If you run the command shown below and the directory contains a file named "ls", it will run that file, and it will run it even if the file does _not_ have execute permissions set. Using **-exec** or **-execdir** is similar to running a command by sourcing it.
-```
-$ find . -name runme -execdir ls \;
-Running the /home/shs/bin/ls file
-
-```
-```
-$ find . 
-name runme -execdir rm {} \; -This is an imposter rm command - -``` -``` -$ ls -l bin -total 12 --r-x------ 1 shs shs 25 Oct 13 18:12 ls --rwxr-x--- 1 shs shs 36 Oct 13 18:29 rm --rw-rw-r-- 1 shs shs 28 Oct 13 18:55 runme - -``` -``` -$ cat bin/ls -echo Running the $0 file -$ cat bin/rm -echo This is an imposter rm command - -``` - -### The -okdir option also asks for permission - -To be more cautious, you can use the **-okdir** option. Like **-ok** , this option will prompt for permission to run the command. -``` -$ find . -name runme -okdir rm {} \; -< rm ... ./bin/runme > ? - -``` - -You can also be careful to specify the commands you want to run with full paths to avoid any problems with imposter commands like those shown above. -``` -$ find . -name runme -execdir /bin/rm {} \; - -``` - -The find command has a lot of options besides the default print. Some can make your file searching more precise, but a little caution is always a good idea. - -Join the Network World communities on [Facebook][1] and [LinkedIn][2] to comment on topics that are top of mind. 
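As a concrete illustration of the full-path advice, here is a small self-contained sketch. The directory layout and the fake rm script are invented for the demonstration, and it only deletes a file it creates itself:

```shell
# An imposter rm-like script sits first in the search path, but calling
# /bin/rm by its absolute path leaves it no chance to run instead.
tmp=$(mktemp -d)
mkdir "$tmp/bin"
printf '#!/bin/sh\necho "This is an imposter rm command"\n' > "$tmp/bin/rm"
chmod +x "$tmp/bin/rm"
touch "$tmp/bin/runme"

# Even with the imposter directory prepended to PATH, the absolute path wins.
PATH="$tmp/bin:$PATH" find "$tmp" -name runme -exec /bin/rm {} \;

ls "$tmp/bin"    # runme has been removed; only the imposter rm remains
```

The same idea applies to -ok and -okdir: spelling out /bin/rm leaves nothing for a look-alike script to hijack.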
- --------------------------------------------------------------------------------- - -via: https://www.networkworld.com/article/3233305/linux/using-the-linux-find-command-with-caution.html - -作者:[Sandra Henry-Stocker][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/ -[1]:https://www.facebook.com/NetworkWorld/ -[2]:https://www.linkedin.com/company/network-world diff --git a/translated/tech/20171016 Using the Linux find command with caution.md b/translated/tech/20171016 Using the Linux find command with caution.md new file mode 100644 index 0000000000..552d1738f7 --- /dev/null +++ b/translated/tech/20171016 Using the Linux find command with caution.md @@ -0,0 +1,93 @@ +谨慎使用 Linux find 命令 +====== +![](https://images.idgesg.net/images/article/2017/10/caution-sign-100738884-large.jpg) +最近有朋友提醒我在运行 find 命令的时候可以添加一个有用的选项来增加一些谨慎。它是 -ok,除了一个重要的区别之外,它的工作方式与 -exec 相似,它使 find 命令在执行指定的操作之前请求权限。 + +这有一个例子。如果你使用 find 命令查找文件并删除它们,则可以运行下面的命令: +``` +$ find . -name runme -exec rm {} \; + +``` + +在当前目录及其子目录中中任何名为 “runme” 的文件都将被立即删除 - 当然,你要有权删除它们。改用 -ok 选项,你会看到类似这样的东西。find 命令将在删除文件之前会请求权限。回答 **y** 代表 “yes” 将允许 find 命令继续并逐个删除文件。 +``` +$ find . -name runme -ok rm {} \; +< rm ... ./bin/runme > ? + +``` + +### -exedir 命令也是一个选项 + +另一个可以用来修改 find​​ 命令行为并可能使其更可控的选项是 -execdir 命令。其中 -exec 运行指定的任何命令,-execdir 从文件所在的目录运行指定的命令,而不是运行 find 命令所在的目录。这是一个它的例子: +``` +$ pwd +/home/shs +$ find . -name runme -execdir pwd \; +/home/shs/bin + +``` +``` +$ find . -name runme -execdir ls \; +ls rm runme + +``` + +到现在为止还挺好。但要记住的是,-execdir 也会在匹配文件的目录中执行命令。如果运行下面的命令,并且目录包含一个名为 “ls” 的文件,那么即使该文件_没有_执行权限,它也将运行该文件。使用 **-exec** 或 **-execdir** 类似于通过 source 来运行命令。 +``` +$ find . -name runme -execdir ls \; +Running the /home/shs/bin/ls file + +``` +``` +$ find . 
-name runme -execdir rm {} \; +This is an imposter rm command + +``` +``` +$ ls -l bin +total 12 +-r-x------ 1 shs shs 25 Oct 13 18:12 ls +-rwxr-x--- 1 shs shs 36 Oct 13 18:29 rm +-rw-rw-r-- 1 shs shs 28 Oct 13 18:55 runme + +``` +``` +$ cat bin/ls +echo Running the $0 file +$ cat bin/rm +echo This is an imposter rm command + +``` + +### -okdir 选项也会请求权限 + +要更谨慎,可以使用 **-okdir** 选项。类似 **-ok**,该选项将要求权限来运行该命令。 +``` +$ find . -name runme -okdir rm {} \; +< rm ... ./bin/runme > ? + +``` + +你也可以小心地指定你想用的命令的完整路径,以避免像上面那样的冒牌命令出现的任何问题。 +``` +$ find . -name runme -execdir /bin/rm {} \; + +``` + +find 命令除了默认打印之外还有很多选项。有些可以使你的文件搜索更精确,但一点小心总是一个好主意。 + +在 [Facebook][1] 和 [LinkedIn][2] 上加入网络世界社区来进行评论。 + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3233305/linux/using-the-linux-find-command-with-caution.html + +作者:[Sandra Henry-Stocker][a] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/ +[1]:https://www.facebook.com/NetworkWorld/ +[2]:https://www.linkedin.com/company/network-world From 792f3b370a60f0bf6910373e2c090598127a4858 Mon Sep 17 00:00:00 2001 From: jessie-pang <35220454+jessie-pang@users.noreply.github.com> Date: Thu, 18 Jan 2018 09:00:43 +0800 Subject: [PATCH 057/226] Update 20170921 Mastering file searches on Linux.md --- .../tech/20170921 Mastering file searches on Linux.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/translated/tech/20170921 Mastering file searches on Linux.md b/translated/tech/20170921 Mastering file searches on Linux.md index ec3dae4acc..29cb7da963 100644 --- a/translated/tech/20170921 Mastering file searches on Linux.md +++ b/translated/tech/20170921 Mastering file searches on Linux.md @@ -7,7 +7,7 @@ * 快速搜索 * 更复杂的搜索条件 - * 合并条件 +  * 连接条件 * 反转条件 * 简单和详细的回应 
* 寻找重复的文件 @@ -26,7 +26,7 @@ $ which which **which** 命令会显示它找到的第一个以相应名称命名的可执行文件(也就是使用该命令时将运行的那个文件),然后停止。 -**locate** 命令更厉害一点,它可以查找任意数量的文件,但它也有一个限制:仅当文件名被包含在由 **updatedb** 命令准备的数据库时才有效。该文件可能会存储在某个位置,如 /var/lib/mlocate/mlocate.db,但不能用 locate 以外的任何命令读取。这个文件的更新通常是通过每天通过 cron 运行的 updatedb 进行的。 +**locate** 命令更大方一点,它可以查找任意数量的文件,但它也有一个限制:仅当文件名被包含在由 **updatedb** 命令准备的数据库时才有效。该文件可能会存储在某个位置,如 /var/lib/mlocate/mlocate.db,但不能用 locate 以外的任何命令读取。这个文件的更新通常是通过每天通过 cron 运行的 updatedb 进行的。 简单的 **find** 命令不需要太多限制,不过它需要搜索的起点和指定搜索条件。最简单的 find 命令:按文件名搜索文件。如下所示: @@ -135,7 +135,7 @@ $ find ~ -maxdepth 2 -mtime -1 -ls ``` -### 不仅仅是罗列文件 +### 不仅仅是列出文件 使用 **-exec** 选项,在您使用 find 命令找到文件后可以以某种方式更改文件。您只需参照 -exec 选项即可运行相应的命令。 @@ -231,4 +231,4 @@ via: https://www.networkworld.com/article/3227075/linux/mastering-file-searches- 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 -[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/ \ No newline at end of file +[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/ From 802a3f4bff94750f2c0c2c6abd5b372c31419be0 Mon Sep 17 00:00:00 2001 From: jessie-pang <35220454+jessie-pang@users.noreply.github.com> Date: Thu, 18 Jan 2018 09:02:43 +0800 Subject: [PATCH 058/226] Update 20170921 Mastering file searches on Linux.md --- translated/tech/20170921 Mastering file searches on Linux.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/translated/tech/20170921 Mastering file searches on Linux.md b/translated/tech/20170921 Mastering file searches on Linux.md index 29cb7da963..e964a35a64 100644 --- a/translated/tech/20170921 Mastering file searches on Linux.md +++ b/translated/tech/20170921 Mastering file searches on Linux.md @@ -7,7 +7,7 @@ * 快速搜索 * 更复杂的搜索条件 -  * 连接条件 + * 连接条件 * 反转条件 * 简单和详细的回应 * 寻找重复的文件 From 5f0b51243cf8dda950843bbd167ae4f33ee57abf Mon Sep 17 00:00:00 2001 From: geekpi Date: Thu, 18 Jan 2018 09:06:10 +0800 Subject: [PATCH 059/226] translating --- sources/tech/20171002 Bash 
Bypass Alias Linux-Unix Command.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20171002 Bash Bypass Alias Linux-Unix Command.md b/sources/tech/20171002 Bash Bypass Alias Linux-Unix Command.md index 87f99fcbd2..ba2d9cdb4c 100644 --- a/sources/tech/20171002 Bash Bypass Alias Linux-Unix Command.md +++ b/sources/tech/20171002 Bash Bypass Alias Linux-Unix Command.md @@ -1,3 +1,5 @@ +translating---geekpi + Bash Bypass Alias Linux/Unix Command ====== I defined mount bash shell alias as follows on my Linux system: From a64c7f3f4cd7f28e5406b8a1ca339602692b2c9e Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 18 Jan 2018 11:11:42 +0800 Subject: [PATCH 060/226] =?UTF-8?q?=E6=94=BE=E9=94=99=E4=BD=8D=E7=BD=AE?= =?UTF-8?q?=E4=BA=86?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @lujun9972 --- ...0 Why isn-t open source hot among computer science students.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename 20180110 Why isn-t open source hot among computer science students.md => sources/talk/20180110 Why isn-t open source hot among computer science students.md (100%) diff --git a/20180110 Why isn-t open source hot among computer science students.md b/sources/talk/20180110 Why isn-t open source hot among computer science students.md similarity index 100% rename from 20180110 Why isn-t open source hot among computer science students.md rename to sources/talk/20180110 Why isn-t open source hot among computer science students.md From aed1cb80e675b024069cc62d512482205ded5206 Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 18 Jan 2018 11:20:46 +0800 Subject: [PATCH 061/226] PRF&PUB:20170820 How To Display Date And Time In History Command.md @lujun9972 https://linux.cn/article-9253-1.html --- ...isplay Date And Time In History Command.md | 56 ++++++++++--------- 1 file changed, 29 insertions(+), 27 deletions(-) rename {translated/tech => published}/20170820 How To Display Date And Time In History Command.md (69%) diff 
--git a/translated/tech/20170820 How To Display Date And Time In History Command.md b/published/20170820 How To Display Date And Time In History Command.md similarity index 69% rename from translated/tech/20170820 How To Display Date And Time In History Command.md rename to published/20170820 How To Display Date And Time In History Command.md index 402b471d92..b3fd163009 100644 --- a/translated/tech/20170820 How To Display Date And Time In History Command.md +++ b/published/20170820 How To Display Date And Time In History Command.md @@ -1,19 +1,21 @@ -让 History 命令显示日期和时间 +让 history 命令显示日期和时间 ====== -我们都对 History 命令很熟悉。它将终端上 bash 执行过的所有命令存储到 `.bash_history` 文件中,来帮助我们复查用户之前执行过的命令。 -默认情况下 history 命令直接显示用户执行的命令而不会输出运行命令时的日期和时间,即使 history 命令记录了这个时间。 +我们都对 `history` 命令很熟悉。它将终端上 bash 执行过的所有命令存储到 `.bash_history` 文件中,来帮助我们复查用户之前执行过的命令。 -运行 history 命令时,它会检查一个叫做 `HISTTIMEFORMAT` 的环境变量,这个环境变量指明了如何格式化输出 history 命令中记录的这个时间。 +默认情况下 `history` 命令直接显示用户执行的命令而不会输出运行命令时的日期和时间,即使 `history` 命令记录了这个时间。 -若该值为 null 或者根本没有设置,则它跟大多数系统默认显示的一样,不会现实日期和时间。 +运行 `history` 命令时,它会检查一个叫做 `HISTTIMEFORMAT` 的环境变量,这个环境变量指明了如何格式化输出 `history` 命令中记录的这个时间。 -`HISTTIMEFORMAT` 使用 strftime 来格式化显示时间 (strftime - 将日期和时间转换为字符串)。history 命令输出日期和时间能够帮你更容易地追踪问题。 +若该值为 null 或者根本没有设置,则它跟大多数系统默认显示的一样,不会显示日期和时间。 - * **%T:** 替换为时间 ( %H:%M:%S )。 - * **%F:** 等同于 %Y-%m-%d (ISO 8601:2000 标准日期格式)。 +`HISTTIMEFORMAT` 使用 `strftime` 来格式化显示时间(`strftime` - 将日期和时间转换为字符串)。`history` 命令输出日期和时间能够帮你更容易地追踪问题。 + +* `%T`: 替换为时间(`%H:%M:%S`)。 +* `%F`: 等同于 `%Y-%m-%d` (ISO 8601:2000 标准日期格式)。 + +下面是 `history` 命令默认的输出。 -下面是 history 命令默认的输出。 ``` # history 1 yum install -y mysql-server mysql-client @@ -46,36 +48,36 @@ 28 sysdig 29 yum install httpd mysql 30 service httpd start - ``` -根据需求,有三种不同的方法设置环境变量。 +根据需求,有三种不同的设置环境变量的方法。 - * 临时设置当前用户的环境变量 - * 永久设置当前/其他用户的环境变量 - * 永久设置所有用户的环境变量 +* 临时设置当前用户的环境变量 +* 永久设置当前/其他用户的环境变量 +* 永久设置所有用户的环境变量 **注意:** 不要忘了在最后那个单引号前加上空格,否则输出会很混乱的。 -### 方法 -1: +### 方法 1: + +运行下面命令为为当前用户临时设置 `HISTTIMEFORMAT` 变量。这会一直生效到下次重启。 
-运行下面命令为为当前用户临时设置 HISTTIMEFORMAT 变量。这会一直生效到下次重启。 ``` # export HISTTIMEFORMAT='%F %T ' - ``` -### 方法 -2: +### 方法 2: + +将 `HISTTIMEFORMAT` 变量加到 `.bashrc` 或 `.bash_profile` 文件中,让它永久生效。 -将 HISTTIMEFORMAT 变量加到 `.bashrc` 或 `.bash_profile` 文件中,让它永久生效。 ``` # echo 'HISTTIMEFORMAT="%F %T "' >> ~/.bashrc 或 # echo 'HISTTIMEFORMAT="%F %T "' >> ~/.bash_profile - ``` 运行下面命令来让文件中的修改生效。 + ``` # source ~/.bashrc 或 @@ -83,21 +85,22 @@ ``` -### 方法 -3: +### 方法 3: + +将 `HISTTIMEFORMAT` 变量加入 `/etc/profile` 文件中,让它对所有用户永久生效。 -将 HISTTIMEFORMAT 变量加入 `/etc/profile` 文件中,让它对所有用户永久生效。 ``` # echo 'HISTTIMEFORMAT="%F %T "' >> /etc/profile - ``` 运行下面命令来让文件中的修改生效。 + ``` # source /etc/profile - ``` -输出结果为。 +输出结果为: + ``` # history 1 2017-08-16 15:30:15 yum install -y mysql-server mysql-client @@ -130,7 +133,6 @@ 28 2017-08-16 15:30:15 sysdig 29 2017-08-16 15:30:15 yum install httpd mysql 30 2017-08-16 15:30:15 service httpd start - ``` -------------------------------------------------------------------------------- @@ -138,7 +140,7 @@ via: https://www.2daygeek.com/display-date-time-linux-bash-history-command/ 作者:[2daygeek][a] 译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 392eba4a4cd17b335e2bee89a1896e7e6d4fc774 Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 18 Jan 2018 11:40:02 +0800 Subject: [PATCH 062/226] PRF:20160625 Trying out LXD containers on our Ubuntu.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @lujun9972 你的译文经常出现半角符号被替换为中文句号、冒号的情况。 --- ...Trying out LXD containers on our Ubuntu.md | 188 ++++++++++-------- 1 file changed, 102 insertions(+), 86 deletions(-) diff --git a/translated/tech/20160625 Trying out LXD containers on our Ubuntu.md b/translated/tech/20160625 Trying out LXD containers on our Ubuntu.md index 29a19792fa..78257d02d0 100644 --- a/translated/tech/20160625 Trying out 
LXD containers on our Ubuntu.md +++ b/translated/tech/20160625 Trying out LXD containers on our Ubuntu.md @@ -1,90 +1,101 @@ -在 Ubuntu 上玩玩 LXD 容器 +在 Ubuntu 上体验 LXD 容器 ====== -本文的主角是容器,一种类似虚拟机但更轻量级的构造。你可以轻易地在你的 Ubuntu 桌面系统中创建一堆个容器! -虚拟机会虚拟出正太电脑让你来安装客户机操作系统。**相比之下**,容器**复用**了主机 Linux 内核,只是简单地 **包容** 了我们选择的根文件系统(也就是运行时环境)。Linux 内核有很多功能可以将运行的 Linux 容器与我们的主机分割开(也就是我们的 Ubuntu 桌面)。 +本文的主角是容器,一种类似虚拟机但更轻量级的构造。你可以轻易地在你的 Ubuntu 桌面系统中创建一堆容器! -Linux 本身需要一些手工操作来直接管理他们。好在,有 LXD( 读音为 Lex-deeh),一款为我们管理 Linux 容器的服务。 +虚拟机会虚拟出整个电脑让你来安装客户机操作系统。**相比之下**,容器**复用**了主机的 Linux 内核,只是简单地 **包容** 了我们选择的根文件系统(也就是运行时环境)。Linux 内核有很多功能可以将运行的 Linux 容器与我们的主机分割开(也就是我们的 Ubuntu 桌面)。 -我们将会看到如何 +Linux 本身需要一些手工操作来直接管理他们。好在,有 LXD(读音为 Lex-deeh),这是一款为我们管理 Linux 容器的服务。 - 1。在我们的 Ubuntu 桌面上配置容器, - 2。创建容器, - 3。安装一台 web 服务器, - 4。测试一下这台 web 服务器,以及 - 5。清理所有的东西。 +我们将会看到如何: + +1. 在我们的 Ubuntu 桌面上配置容器, +2. 创建容器, +3. 安装一台 web 服务器, +4. 测试一下这台 web 服务器,以及 +5. 清理所有的东西。 ### 设置 Ubuntu 容器 -如果你安装的是 Ubuntu 16.04,那么你什么都不用做。只要安装下面所列出的一些额外的包就行了。若你安装的是 Ubuntu 14.04.x 或 Ubuntu 15.10,那么按照 [LXD 2.0:Installing and configuring LXD [2/12]][1] 来进行一些操作,然后再回来。 +如果你安装的是 Ubuntu 16.04,那么你什么都不用做。只要安装下面所列出的一些额外的包就行了。若你安装的是 Ubuntu 14.04.x 或 Ubuntu 15.10,那么按照 [LXD 2.0 系列(二):安装与配置][1] 来进行一些操作,然后再回来。 确保已经更新了包列表: + ``` sudo apt update sudo apt upgrade ``` -安装 **lxd** 包: +安装 `lxd` 包: + ``` sudo apt install lxd ``` 若你安装的是 Ubuntu 16.04,那么还可以让你的容器文件以 ZFS 文件系统的格式进行存储。Ubuntu 16.04 的 Linux kernel 包含了支持 ZFS 必要的内核模块。若要让 LXD 使用 ZFS 进行存储,我们只需要安装 ZFS 工具包。没有 ZFS,容器会在主机文件系统中以单独的文件形式进行存储。通过 ZFS,我们就有了写入时拷贝等功能,可以让任务完成更快一些。 -安装 **zfsutils-linux** 包 (若你安装的是 Ubuntu 16.04.x): +安装 `zfsutils-linux` 包(若你安装的是 Ubuntu 16.04.x): + ``` sudo apt install zfsutils-linux ``` -安装好 LXD 后,包安装脚本应该会将你加入 **lxd** 组。该组成员可以使你无需通过 sudo 就能直接使用 LXD 管理容器。根据 Linux 的尿性,**你需要先登出桌面会话然后再登陆** 才能应用 **lxd** 的组成员关系。(若你是高手,也可以通过在当前 shell 中执行 newgrp lxd 命令,就不用重登陆了)。 +安装好 LXD 后,包安装脚本应该会将你加入 `lxd` 组。该组成员可以使你无需通过 `sudo` 就能直接使用 LXD 管理容器。根据 Linux 的习惯,**你需要先登出桌面会话然后再登录** 才能应用 `lxd` 的组成员关系。(若你是高手,也可以通过在当前 shell 中执行 
`newgrp lxd` 命令,就不用重登录了)。 在开始使用前,LXD 需要初始化存储和网络参数。 运行下面命令: + ``` -$ **sudo  lxd init** -Name of the storage backend to use (dir or zfs):**zfs** -Create a new ZFS pool (yes/no)?**yes** -Name of the new ZFS pool:**lxd-pool** -Would you like to use an existing block device (yes/no)?**no** -Size in GB of the new loop device (1GB minimum):**30** -Would you like LXD to be available over the network (yes/no)?**no** -Do you want to configure the LXD bridge (yes/no)?**yes** -**> You will be asked about the network bridge configuration。Accept all defaults and continue。** -Warning:Stopping lxd.service,but it can still be activated by: +$ sudo lxd init +Name of the storage backend to use (dir or zfs): zfs +Create a new ZFS pool (yes/no)? yes +Name of the new ZFS pool: lxd-pool +Would you like to use an existing block device (yes/no)? no +Size in GB of the new loop device (1GB minimum): 30 +Would you like LXD to be available over the network (yes/no)? no +Do you want to configure the LXD bridge (yes/no)? yes +> You will be asked about the network bridge configuration. Accept all defaults and continue. +Warning: Stopping lxd.service, but it can still be activated by: lxd.socket - LXD has been successfully configured。 + LXD has been successfully configured. 
$ _ ``` -我们在一个(独立)的文件而不是块设备(即分区)中构建了一个文件系统来作为 ZFS 池,因此我们无需进行额外的分区操作。在本例中我指定了 30GB 大小,这个空间取之于根(/) 文件系统中。这个文件就是 `/var/lib/lxd/zfs.img`。 +我们在一个(单独)的文件而不是块设备(即分区)中构建了一个文件系统来作为 ZFS 池,因此我们无需进行额外的分区操作。在本例中我指定了 30GB 大小,这个空间取之于根(`/`) 文件系统中。这个文件就是 `/var/lib/lxd/zfs.img`。 -行了!最初的配置完成了。若有问题,或者想了解其他信息,请阅读 https://www.stgraber.org/2016/03/15/lxd-2-0-installing-and-configuring-lxd-212/ +行了!最初的配置完成了。若有问题,或者想了解其他信息,请阅读 https://www.stgraber.org/2016/03/15/lxd-2-0-installing-and-configuring-lxd-212/ 。 ### 创建第一个容器 -所有 LXD 的管理操作都可以通过 **lxc** 命令来进行。我们通过给 **lxc** 不同参数来管理容器。 +所有 LXD 的管理操作都可以通过 `lxc` 命令来进行。我们通过给 `lxc` 不同参数来管理容器。 + ``` lxc list ``` + 可以列出所有已经安装的容器。很明显,这个列表现在是空的,但这表示我们的安装是没问题的。 ``` lxc image list ``` -列出可以用来启动容器的(已经缓存)镜像列表。很明显这个列表也是空的,但这也说明我们的安装是没问题的。 + +列出可以用来启动容器的(已经缓存的)镜像列表。很明显这个列表也是空的,但这也说明我们的安装是没问题的。 ``` lxc image list ubuntu: ``` -列出可以下载并启动容器的远程镜像。而且指定了是显示 Ubuntu 镜像。 + +列出可以下载并启动容器的远程镜像。而且指定了显示 Ubuntu 镜像。 ``` lxc image list images: ``` -列出可以用来启动容器的(已经缓存)各种发行版的镜像列表。这会列出各种发行版的镜像比如 Alpine,Debian,Gentoo,Opensuse 以及 Fedora。 -让我们启动一个 Ubuntu 16.04 容器,并称之为 c1: +列出可以用来启动容器的(已经缓存的)各种发行版的镜像列表。这会列出各种发行版的镜像比如 Alpine、Debian、Gentoo、Opensuse 以及 Fedora。 + +让我们启动一个 Ubuntu 16.04 容器,并称之为 `c1`: + ``` $ lxc launch ubuntu:x c1 Creating c1 @@ -92,9 +103,10 @@ Starting c1 $ ``` -我们使用 launch 动作,然后选择镜像 **ubuntu:x** (x 表示 Xenial/16.04 镜像),最后我们使用名字 `c1` 作为容器的名称。 +我们使用 `launch` 动作,然后选择镜像 `ubuntu:x` (`x` 表示 Xenial/16.04 镜像),最后我们使用名字 `c1` 作为容器的名称。 让我们来看看安装好的首个容器, + ``` $ lxc list @@ -105,56 +117,60 @@ $ lxc list +---------|---------|----------------------|------|------------|-----------+ ``` -我们的首个容器 c1 已经运行起来了,它还有自己的 IP 地址(可以本地访问)。我们可以开始用它了! +我们的首个容器 c1 已经运行起来了,它还有自己的 IP 地址(可以本地访问)。我们可以开始用它了! 
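顺带补充一个小技巧(非原文内容,仅供参考):若想在脚本中拿到容器的 IP 地址,可以解析 `lxc list` 的表格输出。下面的示例假设输出格式与上表一致,并用内嵌文本代替真实的 `lxc list` 调用,因此无需安装 LXD 也能运行:

```shell
# 从 lxc list 风格的表格中提取容器 c1 的 IPv4 地址。
# 真实环境中可把 printf 的那一段替换为:lxc list
table='+------+---------+----------------------+------+------------+-----------+
| NAME | STATE   | IPV4                 | IPV6 | TYPE       | SNAPSHOTS |
+------+---------+----------------------+------+------------+-----------+
| c1   | RUNNING | 10.173.82.158 (eth0) |      | PERSISTENT | 0         |
+------+---------+----------------------+------+------------+-----------+'

# 以 | 分列:第 2 列是容器名,第 4 列形如 "IP (网卡)";取其中的 IP 部分。
ip=$(printf '%s\n' "$table" | awk -F'|' '$2 ~ /c1/ {split($4, a, " "); print a[1]}')
echo "$ip"   # 10.173.82.158
```

若容器名之间互为前缀,应把 `/c1/` 收紧为精确比较(例如先用 `gsub` 去掉第 2 列的空白再用 `==` 判断),以免多行同时匹配。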
### 安装 web 服务器 -我们可以在容器中运行命令。运行命令的动作为 **exec**。 +我们可以在容器中运行命令。运行命令的动作为 `exec`。 + ``` $ lxc exec c1 -- uptime 11:47:25 up 2 min,0 users,load average:0.07,0.05,0.04 $ _ ``` -在 exec 后面,我们指定容器,最后输入要在容器中运行的命令。运行时间只有 2 分钟,这是个新出炉的容器:-)。 +在 `exec` 后面,我们指定容器、最后输入要在容器中运行的命令。该容器的运行时间只有 2 分钟,这是个新出炉的容器:-)。 + +命令行中的 `--` 跟我们 shell 的参数处理过程有关。若我们的命令没有任何参数,则完全可以省略 `-`。 -命令行中的`--`跟我们 shell 的参数处理过程有关是告诉。若我们的命令没有任何参数,则完全可以省略`-`。 ``` $ lxc exec c1 -- df -h ``` -这是一个必须要`-`的例子,由于我们的命令使用了参数 -h。若省略了 -,会报错。 +这是一个必须要 `-` 的例子,由于我们的命令使用了参数 `-h`。若省略了 `-`,会报错。 + +然后我们运行容器中的 shell 来更新包列表。 -然我们运行容器中的 shell 来新包列表。 ``` $ lxc exec c1 bash -root@c1:~# apt update -Ign http://archive.ubuntu.com trusty InRelease -Get:1 http://archive.ubuntu.com trusty-updates InRelease [65.9 kB] -Get:2 http://security.ubuntu.com trusty-security InRelease [65.9 kB] -.。。 -Hit http://archive.ubuntu.com trusty/universe Translation-en -Fetched 11.2 MB in 9s (1228 kB/s) -Reading package lists。.. Done -root@c1:~# **apt upgrade** -Reading package lists。.. Done -Building dependency tree -.。。 -Processing triggers for man-db (2.6.7.1-1ubuntu1) .。。 -Setting up dpkg (1.17.5ubuntu5.7) .。。 -root@c1:~# _ +root@c1:~# apt update +Ign http://archive.ubuntu.com trusty InRelease +Get:1 http://archive.ubuntu.com trusty-updates InRelease [65.9 kB] +Get:2 http://security.ubuntu.com trusty-security InRelease [65.9 kB] +... +Hit http://archive.ubuntu.com trusty/universe Translation-en +Fetched 11.2 MB in 9s (1228 kB/s) +Reading package lists... Done +root@c1:~# apt upgrade +Reading package lists... Done +Building dependency tree +... +Processing triggers for man-db (2.6.7.1-1ubuntu1) ... +Setting up dpkg (1.17.5ubuntu5.7) ... +root@c1:~# _ ``` -我们使用 **nginx** 来做 web 服务器。nginx 在某些方面要比 Apache web 服务器更酷一些。 +我们使用 nginx 来做 web 服务器。nginx 在某些方面要比 Apache web 服务器更酷一些。 + ``` -root@c1:~# apt install nginx -Reading package lists。.. Done +root@c1:~# apt install nginx +Reading package lists... 
Done Building dependency tree -.。。 -Setting up nginx-core (1.4.6-1ubuntu3.5) .。。 -Setting up nginx (1.4.6-1ubuntu3.5) .。。 -Processing triggers for libc-bin (2.19-0ubuntu6.9) .。。 -root@c1:~# _ +... +Setting up nginx-core (1.4.6-1ubuntu3.5) ... +Setting up nginx (1.4.6-1ubuntu3.5) ... +Processing triggers for libc-bin (2.19-0ubuntu6.9) ... +root@c1:~# _ ``` 让我们用浏览器访问一下这个 web 服务器。记住 IP 地址为 10.173.82.158,因此你需要在浏览器中输入这个 IP。 @@ -162,59 +178,59 @@ root@c1:~# _ [![lxd-nginx][2]][3] 让我们对页面文字做一些小改动。回到容器中,进入默认 HTML 页面的目录中。 + ``` -root@c1:~# **cd /var/www/html/** -root@c1:/var/www/html# **ls -l** +root@c1:~# cd /var/www/html/ +root@c1:/var/www/html# ls -l total 2 --rw-r--r-- 1 root root 612 Jun 25 12:15 index.nginx-debian.html -root@c1:/var/www/html# +-rw-r--r-- 1 root root 612 Jun 25 12:15 index.nginx-debian.html +root@c1:/var/www/html# ``` -使用 nano 编辑文件,然后保存 +使用 nano 编辑文件,然后保存: [![lxd-nginx-nano][4]][5] -子后,再刷一下页面看看, +之后,再刷一下页面看看, [![lxd-nginx-modified][6]][7] ### 清理 让我们清理一下这个容器,也就是删掉它。当需要的时候我们可以很方便地创建一个新容器出来。 -``` -$ **lxc list** -+---------|---------|----------------------|------|------------|-----------+ -| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | -+---------|---------|----------------------|------|------------|-----------+ -| c1 | RUNNING | 10.173.82.169 (eth0) | | PERSISTENT | 0 | -+---------|---------|----------------------|------|------------|-----------+ -$ **lxc stop c1** -$ **lxc delete c1** -$ **lxc list** -+---------|---------|----------------------|------|------------|-----------+ -| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | -+---------|---------|----------------------|------|------------|-----------+ -+---------|---------|----------------------|------|------------|-----------+ +``` +$ lxc list ++---------+---------+----------------------+------+------------+-----------+ +| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | ++---------+---------+----------------------+------+------------+-----------+ +| c1 | RUNNING | 10.173.82.169 (eth0) | | 
PERSISTENT | 0 | ++---------+---------+----------------------+------+------------+-----------+ +$ lxc stop c1 +$ lxc delete c1 +$ lxc list ++---------+---------+----------------------+------+------------+-----------+ +| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | ++---------+---------+----------------------+------+------------+-----------+ ++---------+---------+----------------------+------+------------+-----------+ ``` -我们停止(关闭)这个容器,然后删掉它了。 +我们停止(关闭)这个容器,然后删掉它了。 本文至此就结束了。关于容器有很多玩法。而这只是配置 Ubuntu 并尝试使用容器的第一步而已。 - -------------------------------------------------------------------------------- via: https://blog.simos.info/trying-out-lxd-containers-on-our-ubuntu/ 作者:[Simos Xenitellis][a] 译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:https://blog.simos.info/author/simos/ -[1]:https://www.stgraber.org/2016/03/15/lxd-2-0-installing-and-configuring-lxd-212/ +[1]:https://linux.cn/article-7687-1.html [2]:https://i2.wp.com/blog.simos.info/wp-content/uploads/2016/06/lxd-nginx.png?resize=564%2C269&ssl=1 [3]:https://i2.wp.com/blog.simos.info/wp-content/uploads/2016/06/lxd-nginx.png?ssl=1 [4]:https://i2.wp.com/blog.simos.info/wp-content/uploads/2016/06/lxd-nginx-nano.png?resize=750%2C424&ssl=1 From 6bd1b262290d89fae82704944f473e36ede12a03 Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 18 Jan 2018 11:40:17 +0800 Subject: [PATCH 063/226] PUB:20160625 Trying out LXD containers on our Ubuntu.md @lujun9972 --- .../20160625 Trying out LXD containers on our Ubuntu.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20160625 Trying out LXD containers on our Ubuntu.md (100%) diff --git a/translated/tech/20160625 Trying out LXD containers on our Ubuntu.md b/published/20160625 Trying out LXD containers on our Ubuntu.md similarity index 100% rename from translated/tech/20160625 
Trying out LXD containers on our Ubuntu.md rename to published/20160625 Trying out LXD containers on our Ubuntu.md From eca4eff04a7076a19f3e40d6079b2829110f178d Mon Sep 17 00:00:00 2001 From: qhwdw Date: Thu, 18 Jan 2018 12:08:35 +0800 Subject: [PATCH 064/226] Translated by qhwdw --- ...0319 ftrace trace your kernel functions.md | 284 ------------------ ...0319 ftrace trace your kernel functions.md | 284 ++++++++++++++++++ 2 files changed, 284 insertions(+), 284 deletions(-) delete mode 100644 sources/tech/20170319 ftrace trace your kernel functions.md create mode 100644 translated/tech/20170319 ftrace trace your kernel functions.md diff --git a/sources/tech/20170319 ftrace trace your kernel functions.md b/sources/tech/20170319 ftrace trace your kernel functions.md deleted file mode 100644 index 3ca42ab1a3..0000000000 --- a/sources/tech/20170319 ftrace trace your kernel functions.md +++ /dev/null @@ -1,284 +0,0 @@ -Translating by qhwdw ftrace: trace your kernel functions! -============================================================ - -Hello! Today we’re going to talk about a debugging tool we haven’t talked about much before on this blog: ftrace. What could be more exciting than a new debugging tool?! - -Better yet, ftrace isn’t new! It’s been around since Linux kernel 2.6, or about 2008. [here’s the earliest documentation I found with some quick Gooogling][10]. So you might be able to use it even if you’re debugging an older system! - -I’ve known that ftrace exists for about 2.5 years now, but hadn’t gotten around to really learning it yet. I’m supposed to run a workshop tomorrow where I talk about ftrace, so today is the day we talk about it! - -### what’s ftrace? - -ftrace is a Linux kernel feature that lets you trace Linux kernel function calls. Why would you want to do that? Well, suppose you’re debugging a weird problem, and you’ve gotten to the point where you’re staring at the source code for your kernel version and wondering what **exactly** is going on. 
- -I don’t read the kernel source code very often when debugging, but occasionally I do! For example this week at work we had a program that was frozen and stuck spinning inside the kernel. Looking at what functions were being called helped us understand better what was happening in the kernel and what systems were involved (in that case, it was the virtual memory system)! - -I think ftrace is a bit of a niche tool (it’s definitely less broadly useful and harder to use than strace) but that it’s worth knowing about. So let’s learn about it! - -### first steps with ftrace - -Unlike strace and perf, ftrace isn’t a **program** exactly – you don’t just run `ftrace my_cool_function`. That would be too easy! - -If you read [Debugging the kernel using Ftrace][11] it starts out by telling you to `cd /sys/kernel/debug/tracing` and then do various filesystem manipulations. - -For me this is way too annoying – a simple example of using ftrace this way is something like - -``` -cd /sys/kernel/debug/tracing -echo function > current_tracer -echo do_page_fault > set_ftrace_filter -cat trace - -``` - -This filesystem interface to the tracing system (“put values in these magic files and things will happen”) seems theoretically possible to use but really not my preference. - -Luckily, team ftrace also thought this interface wasn’t that user friendly and so there is an easier-to-use interface called **trace-cmd**!!! trace-cmd is a normal program with command line arguments. We’ll use that! I found an intro to trace-cmd on LWN at [trace-cmd: A front-end for Ftrace][12]. - -### getting started with trace-cmd: let’s trace just one function - -First, I needed to install `trace-cmd` with `sudo apt-get install trace-cmd`. Easy enough. - -For this first ftrace demo, I decided I wanted to know when my kernel was handling a page fault. When Linux allocates memory, it often does it lazily (“you weren’t  _really_  planning to use that memory, right?“). 
This means that when an application tries to actually write to memory that it allocated, there’s a page fault and the kernel needs to give the application physical memory to use. - -Let’s start `trace-cmd` and make it trace the `do_page_fault` function! - -``` -$ sudo trace-cmd record -p function -l do_page_fault - plugin 'function' -Hit Ctrl^C to stop recording - -``` - -I ran it for a few seconds and then hit `Ctrl+C`. Awesome! It created a 2.5MB file called `trace.dat`. Let’s see what’s that file! - -``` -$ sudo trace-cmd report - chrome-15144 [000] 11446.466121: function: do_page_fault - chrome-15144 [000] 11446.467910: function: do_page_fault - chrome-15144 [000] 11446.469174: function: do_page_fault - chrome-15144 [000] 11446.474225: function: do_page_fault - chrome-15144 [000] 11446.474386: function: do_page_fault - chrome-15144 [000] 11446.478768: function: do_page_fault - CompositorTileW-15154 [001] 11446.480172: function: do_page_fault - chrome-1830 [003] 11446.486696: function: do_page_fault - CompositorTileW-15154 [001] 11446.488983: function: do_page_fault - CompositorTileW-15154 [001] 11446.489034: function: do_page_fault - CompositorTileW-15154 [001] 11446.489045: function: do_page_fault - -``` - -This is neat – it shows me the process name (chrome), process ID (15144), CPU (000), and function that got traced. - -By looking at the whole report, (`sudo trace-cmd report | grep chrome`) I can see that we traced for about 1.5 seconds and in that time Chrome had about 500 page faults. Cool! We have done our first ftrace! - -### next ftrace trick: let’s trace a process! - -Okay, but just seeing one function is kind of boring! Let’s say I want to know everything that’s happening for one program. I use a static site generator called Hugo. What’s the kernel doing for Hugo? - -Hugo’s PID on my computer right now is 25314, so I recorded all the kernel functions with: - -``` -sudo trace-cmd record --help # I read the help! 
-sudo trace-cmd record -p function -P 25314 # record for PID 25314 - -``` - -`sudo trace-cmd report` printed out 18,000 lines of output. If you’re interested, you can see [all 18,000 lines here][13]. - -18,000 lines is a lot so here are some interesting excerpts. - -This looks like what happens when the `clock_gettime` system call runs. Neat! - -``` - compat_SyS_clock_gettime - SyS_clock_gettime - clockid_to_kclock - posix_clock_realtime_get - getnstimeofday64 - __getnstimeofday64 - arch_counter_read - __compat_put_timespec - -``` - -This is something related to process scheduling: - -``` - cpufreq_sched_irq_work - wake_up_process - try_to_wake_up - _raw_spin_lock_irqsave - do_raw_spin_lock - _raw_spin_lock - do_raw_spin_lock - walt_ktime_clock - ktime_get - arch_counter_read - walt_update_task_ravg - exiting_task - -``` - -Being able to see all these function calls is pretty cool, even if I don’t quite understand them. - -### “function graph” tracing - -There’s another tracing mode called `function_graph`. This is the same as the function tracer except that it instruments both entering  _and_  exiting a function. [Here’s the output of that tracer][14] - -``` -sudo trace-cmd record -p function_graph -P 25314 - -``` - -Again, here’s a snipped (this time from the futex code) - -``` - | futex_wake() { - | get_futex_key() { - | get_user_pages_fast() { - 1.458 us | __get_user_pages_fast(); - 4.375 us | } - | __might_sleep() { - 0.292 us | ___might_sleep(); - 2.333 us | } - 0.584 us | get_futex_key_refs(); - | unlock_page() { - 0.291 us | page_waitqueue(); - 0.583 us | __wake_up_bit(); - 5.250 us | } - 0.583 us | put_page(); -+ 24.208 us | } - -``` - -We see in this example that `get_futex_key` gets called right after `futex_wake`. Is that what really happens in the source code? We can check!! [Here’s the definition of futex_wake in Linux 4.4][15] (my kernel version). 
- -I’ll save you a click: it looks like this: - -``` -static int -futex_wake(u32 __user *uaddr, unsigned int flags, int nr_wake, u32 bitset) -{ - struct futex_hash_bucket *hb; - struct futex_q *this, *next; - union futex_key key = FUTEX_KEY_INIT; - int ret; - WAKE_Q(wake_q); - - if (!bitset) - return -EINVAL; - - ret = get_futex_key(uaddr, flags & FLAGS_SHARED, &key, VERIFY_READ); - -``` - -So the first function called in `futex_wake` really is `get_futex_key`! Neat! Reading the function trace was definitely an easier way to find that out than by reading the kernel code, and it’s nice to see how long all of the functions took. - -### How to know what functions you can trace - -If you run `sudo trace-cmd list -f` you’ll get a list of all the functions you can trace. That’s pretty simple but it’s important. - -### one last thing: events! - -So, now we know how to trace functions in the kernel! That’s really cool! - -There’s one more class of thing we can trace though! Some events don’t correspond super well to function calls. For example, you might want to knowwhen a program is scheduled on or off the CPU! You might be able to figure that out by peering at function calls, but I sure can’t. - -So the kernel also gives you a few events so you can see when a few important things happen. You can see a list of all these events with `sudo cat /sys/kernel/debug/tracing/available_events` - -I looked at all the sched_switch events. I’m not exactly sure what sched_switch is but it’s something to do with scheduling I guess. 
- -``` -sudo cat /sys/kernel/debug/tracing/available_events -sudo trace-cmd record -e sched:sched_switch -sudo trace-cmd report - -``` - -The output looks like this: - -``` - 16169.624862: Chrome_ChildIOT:24817 [112] S ==> chrome:15144 [120] - 16169.624992: chrome:15144 [120] S ==> swapper/3:0 [120] - 16169.625202: swapper/3:0 [120] R ==> Chrome_ChildIOT:24817 [112] - 16169.625251: Chrome_ChildIOT:24817 [112] R ==> chrome:1561 [112] - 16169.625437: chrome:1561 [112] S ==> chrome:15144 [120] - -``` - -so you can see it switching from PID 24817 -> 15144 -> kernel -> 24817 -> 1561 -> 15114\. (all of these events are on the same CPU) - -### how does ftrace work? - -ftrace is a dynamic tracing system. This means that when I start ftracing a kernel function, the **function’s code gets changed**. So – let’s suppose that I’m tracing that `do_page_fault` function from before. The kernel will insert some extra instructions in the assembly for that function to notify the tracing system every time that function gets called. The reason it can add extra instructions is that Linux compiles in a few extra NOP instructions into every function, so there’s space to add tracing code when needed. - -This is awesome because it means that when I’m not using ftrace to trace my kernel, it doesn’t affect performance at all. When I do start tracing, the more functions I trace, the more overhead it’ll have. - -(probably some of this is wrong, but this is how I think ftrace works anyway) - -### use ftrace more easily: brendan gregg’s tools & kernelshark - -As we’ve seen in this post, you need to think quite a lot about what individual kernel functions / events do to use ftrace directly. This is cool, but it’s also a lot of work! - -Brendan Gregg (our linux debugging tools hero) has repository of tools that use ftrace to give you information about various things like IO latency. They’re all in his [perf-tools][16] repository on GitHub. 
- -The tradeoff here is that they’re easier to use, but you’re limited to things that Brendan Gregg thought of & decided to make a tool for. Which is a lot of things! :) - -Another tool for visualizing the output of ftrace better is [kernelshark][17]. I haven’t played with it much yet but it looks useful. You can install it with `sudo apt-get install kernelshark`. - -### a new superpower - -I’m really happy I took the time to learn a little more about ftrace today! Like any kernel tool, it’ll work differently between different kernel versions, but I hope that you find it useful one day. - -### an index of ftrace articles - -Finally, here’s a list of a bunch of ftrace articles I found. Many of them are on LWN (Linux Weekly News), which is a pretty great source of writing on Linux. (you can buy a [subscription][18]!) - -* [Debugging the kernel using Ftrace - part 1][1] (Dec 2009, Steven Rostedt) - -* [Debugging the kernel using Ftrace - part 2][2] (Dec 2009, Steven Rostedt) - -* [Secrets of the Linux function tracer][3] (Jan 2010, Steven Rostedt) - -* [trace-cmd: A front-end for Ftrace][4] (Oct 2010, Steven Rostedt) - -* [Using KernelShark to analyze the real-time scheduler][5] (2011, Steven Rostedt) - -* [Ftrace: The hidden light switch][6] (2014, Brendan Gregg) - -* the kernel documentation: (which is quite useful) [Documentation/ftrace.txt][7] - -* documentation on events you can trace [Documentation/events.txt][8] - -* some docs on ftrace design for linux kernel devs (not as useful, but interesting) [Documentation/ftrace-design.txt][9] - --------------------------------------------------------------------------------- - -via: https://jvns.ca/blog/2017/03/19/getting-started-with-ftrace/ - -作者:[Julia Evans ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://jvns.ca -[1]:https://lwn.net/Articles/365835/ 
-[2]:https://lwn.net/Articles/366796/ -[3]:https://lwn.net/Articles/370423/ -[4]:https://lwn.net/Articles/410200/ -[5]:https://lwn.net/Articles/425583/ -[6]:https://lwn.net/Articles/608497/ -[7]:https://raw.githubusercontent.com/torvalds/linux/v4.4/Documentation/trace/ftrace.txt -[8]:https://raw.githubusercontent.com/torvalds/linux/v4.4/Documentation/trace/events.txt -[9]:https://raw.githubusercontent.com/torvalds/linux/v4.4/Documentation/trace/ftrace-design.txt -[10]:https://lwn.net/Articles/290277/ -[11]:https://lwn.net/Articles/365835/ -[12]:https://lwn.net/Articles/410200/ -[13]:https://gist.githubusercontent.com/jvns/e5c2d640f7ec76ed9ed579be1de3312e/raw/78b8425436dc4bb5bb4fa76a4f85d5809f7d1ef2/trace-cmd-report.txt -[14]:https://gist.githubusercontent.com/jvns/f32e9b06bcd2f1f30998afdd93e4aaa5/raw/8154d9828bb895fd6c9b0ee062275055b3775101/function_graph.txt -[15]:https://github.com/torvalds/linux/blob/v4.4/kernel/futex.c#L1313-L1324 -[16]:https://github.com/brendangregg/perf-tools -[17]:https://lwn.net/Articles/425583/ -[18]:https://lwn.net/subscribe/Info diff --git a/translated/tech/20170319 ftrace trace your kernel functions.md b/translated/tech/20170319 ftrace trace your kernel functions.md new file mode 100644 index 0000000000..ccb5b76256 --- /dev/null +++ b/translated/tech/20170319 ftrace trace your kernel functions.md @@ -0,0 +1,284 @@ +ftrace:跟踪你的内核函数! +============================================================ + +大家好!今天我们将去讨论一个调试工具:ftrace,之前我的博客上还没有讨论过它。还有什么能比一个新的调试工具更让人激动呢? + +这个非常棒的 ftrace 并不是个新的工具!它大约在 Linux 的 2.6 内核版本中就有了,时间大约是在 2008 年。[这里是我用谷歌随手搜到的最早的文档][10]。因此,即使你要调试的是一个比较老的系统,或许也能用上它! + +我知道有 ftrace 这个工具已经大约两年半了,但是一直没有真正去学习它。我明天要主持一个会讲到 ftrace 的研讨会,所以今天正是好好讨论它的日子! + +### 什么是 ftrace? + +ftrace 是一个 Linux 内核特性,它可以让你去跟踪 Linux 内核的函数调用。为什么要这么做呢?好吧,假设你正在调试一个奇怪的问题,已经到了盯着你所用内核版本的源代码、想知道里面**到底**发生了什么的地步。
+ +每次在调试的时候,我并不会经常去读内核源代码,但是偶尔也会去读它!例如,本周在工作中,我有一个程序在内核中卡死了。查看到底是调用了什么函数、哪些系统涉及其中,能够帮我更好地理解在内核中发生了什么!(在我的那个案例中,它是虚拟内存系统) + +我认为 ftrace 是一个有点小众的工具(它的用途肯定没有 strace 那么广,使用难度也比 strace 高),但是它仍然值得了解。因此,让我们开始吧! + +### 使用 ftrace 的第一步 + +不像 strace 和 perf,ftrace 并不是真正的 **程序** – 你不能只运行 `ftrace my_cool_function`。那样太容易了! + +如果你去读 [使用 Ftrace 调试内核][11],它会告诉你从 `cd /sys/kernel/debug/tracing` 开始,然后做很多文件系统的操作。 + +对于我来说,这种办法太麻烦了 – 用这种方式使用 ftrace 的一个简单例子大概像这样: + +``` +cd /sys/kernel/debug/tracing +echo function > current_tracer +echo do_page_fault > set_ftrace_filter +cat trace + +``` + +这个文件系统到跟踪系统的接口(“给这些神奇的文件赋值,然后该发生的事情就会发生”)理论上看起来似乎可用,但是它不是我的首选方式。 + +幸运的是,ftrace 团队也考虑到这个并不友好的用户界面,因此,它有了一个更易于使用的界面,它就是 **trace-cmd**!!!trace-cmd 是一个带命令行参数的普通程序。我们后面将使用它!我在 LWN 上找到了一个 trace-cmd 的使用介绍:[trace-cmd: Ftrace 的一个前端][12]。 + +### 开始使用 trace-cmd:让 trace 仅跟踪一个函数 + +首先,我需要去使用 `sudo apt-get install trace-cmd` 安装 `trace-cmd`,这一步很容易。 + +对于第一个 ftrace 的演示,我决定去了解我的内核如何去处理一个页面故障。当 Linux 分配内存时,它经常偷懒(“你并不是  _真的_  打算使用那些内存,对吗?”)。这意味着,当一个应用程序尝试去对分配给它的内存进行写入时,就会发生一个页面故障,而这个时候,内核才会真正的为应用程序去分配物理内存。 + +我们开始使用 `trace-cmd` 并让它跟踪 `do_page_fault` 函数! + +``` +$ sudo trace-cmd record -p function -l do_page_fault + plugin 'function' +Hit Ctrl^C to stop recording + +``` + +我将它运行了几秒钟,然后按下了 `Ctrl+C`。 让我大吃一惊的是,它竟然产生了一个 2.5MB 大小的名为 `trace.dat` 的跟踪文件。我们来看一下这个文件的内容!
+ +``` +$ sudo trace-cmd report + chrome-15144 [000] 11446.466121: function: do_page_fault + chrome-15144 [000] 11446.467910: function: do_page_fault + chrome-15144 [000] 11446.469174: function: do_page_fault + chrome-15144 [000] 11446.474225: function: do_page_fault + chrome-15144 [000] 11446.474386: function: do_page_fault + chrome-15144 [000] 11446.478768: function: do_page_fault + CompositorTileW-15154 [001] 11446.480172: function: do_page_fault + chrome-1830 [003] 11446.486696: function: do_page_fault + CompositorTileW-15154 [001] 11446.488983: function: do_page_fault + CompositorTileW-15154 [001] 11446.489034: function: do_page_fault + CompositorTileW-15154 [001] 11446.489045: function: do_page_fault + +``` + +看起来很整洁 – 它展示了进程名(chrome)、进程 ID (15144)、CPU(000)、以及它跟踪的函数。 + +通过察看整个文件,(`sudo trace-cmd report | grep chrome`)可以看到,我们跟踪了大约 1.5 秒,在这 1.5 秒的时间段内,Chrome 发生了大约 500 个页面故障。真是太酷了!这就是我们做的第一个 ftrace! + +### 下一个 ftrace 技巧:我们来跟踪一个进程! + +好吧,只看一个函数是有点无聊!假如我想知道一个程序中都发生了什么事情。我使用一个名为 Hugo 的静态站点生成器。看看内核为 Hugo 都做了些什么事情? + +在我的电脑上 Hugo 的 PID 现在是 25314,因此,我使用如下的命令去记录所有的内核函数: + +``` +sudo trace-cmd record --help # I read the help! 
+sudo trace-cmd record -p function -P 25314 # record for PID 25314 + +``` + +`sudo trace-cmd report` 输出了 18,000 行。如果你对这些感兴趣,你可以看 [这里是所有的 18,000 行的输出][13]。 + +18,000 行太多了,因此,在这里仅摘录其中几行。 + +这看起来像是系统调用 `clock_gettime` 运行时所发生的事情。真整洁! + +``` + compat_SyS_clock_gettime + SyS_clock_gettime + clockid_to_kclock + posix_clock_realtime_get + getnstimeofday64 + __getnstimeofday64 + arch_counter_read + __compat_put_timespec + +``` + +这是与进程调度相关的一些东西: + +``` + cpufreq_sched_irq_work + wake_up_process + try_to_wake_up + _raw_spin_lock_irqsave + do_raw_spin_lock + _raw_spin_lock + do_raw_spin_lock + walt_ktime_clock + ktime_get + arch_counter_read + walt_update_task_ravg + exiting_task + +``` + +虽然你可能还不理解它们是做什么的,但是,能够看到所有的这些函数调用也是件很酷的事情。 + +### “function graph” 跟踪 + +这里有另外一个模式,称为 `function_graph`。它和函数跟踪器是一样的,只是它会同时记录函数的进入和退出。[这里是那个跟踪器的输出][14] + +``` +sudo trace-cmd record -p function_graph -P 25314 + +``` + +同样,这里只是一个片断(这次来自 futex 代码) + +``` + | futex_wake() { + | get_futex_key() { + | get_user_pages_fast() { + 1.458 us | __get_user_pages_fast(); + 4.375 us | } + | __might_sleep() { + 0.292 us | ___might_sleep(); + 2.333 us | } + 0.584 us | get_futex_key_refs(); + | unlock_page() { + 0.291 us | page_waitqueue(); + 0.583 us | __wake_up_bit(); + 5.250 us | } + 0.583 us | put_page(); ++ 24.208 us | } + +``` + +我们看到在这个示例中,在 `futex_wake` 后面调用了 `get_futex_key`。这是在源代码中真实发生的事情吗?我们可以检查一下!![这里是在 Linux 4.4 中 futex_wake 的定义][15] (我的内核版本是 4.4)。 + +为节省时间我直接贴出来,它的内容如下: + +``` +static int +futex_wake(u32 __user *uaddr, unsigned int flags, int nr_wake, u32 bitset) +{ + struct futex_hash_bucket *hb; + struct futex_q *this, *next; + union futex_key key = FUTEX_KEY_INIT; + int ret; + WAKE_Q(wake_q); + + if (!bitset) + return -EINVAL; + + ret = get_futex_key(uaddr, flags & FLAGS_SHARED, &key, VERIFY_READ); + +``` + +如你所见,在 `futex_wake` 中的第一个函数调用真的是 `get_futex_key`!
太棒了!相比阅读内核代码,通过阅读函数跟踪来找出这一点显然更容易,而且还能看到每个函数花了多长时间,这一点也让人很高兴。 + +### 如何知道哪些函数可以被跟踪 + +如果你去运行 `sudo trace-cmd list -f`,你将得到一个你可以跟踪的函数的列表。它很简单但是也很重要。 + +### 最后一件事:事件! + +现在,我们已经知道了怎么去跟踪内核中的函数,真是太酷了! + +还有一类我们可以跟踪的东西!有些事情和函数调用并不能很好地对应。例如,你可能想知道一个程序什么时候被调度到 CPU 上、什么时候又离开 CPU!你也许能通过“盯着”函数调用把它弄清楚,但是我反正做不到! + +因此,内核还为你提供了一些事件,让你可以看到一些重要的事情发生的时刻。你可以使用 `sudo cat /sys/kernel/debug/tracing/available_events` 来查看这些事件的一个列表。 + +我查看了全部的 sched_switch 事件。我并不完全知道 sched_switch 是什么,但是,我猜测它与调度有关。 + +``` +sudo cat /sys/kernel/debug/tracing/available_events +sudo trace-cmd record -e sched:sched_switch +sudo trace-cmd report + +``` + +输出如下: + +``` + 16169.624862: Chrome_ChildIOT:24817 [112] S ==> chrome:15144 [120] + 16169.624992: chrome:15144 [120] S ==> swapper/3:0 [120] + 16169.625202: swapper/3:0 [120] R ==> Chrome_ChildIOT:24817 [112] + 16169.625251: Chrome_ChildIOT:24817 [112] R ==> chrome:1561 [112] + 16169.625437: chrome:1561 [112] S ==> chrome:15144 [120] + +``` + +现在,可以很清楚地看到这些切换,从 PID 24817 -> 15144 -> kernel -> 24817 -> 1561 -> 15144。(所有的这些事件都发生在同一个 CPU 上) + +### ftrace 是如何工作的? + +ftrace 是一个动态跟踪系统。当启动 ftrace 去跟踪内核函数时,**函数的代码会被改变**。因此 – 我们假设去跟踪 `do_page_fault` 函数。内核将在那个函数的汇编代码中插入一些额外的指令,以便每次该函数被调用时去提示跟踪系统。内核之所以能够添加额外的指令的原因是,Linux 将额外的几个 NOP 指令编译进每个函数中,因此,当需要的时候,这里有添加跟踪代码的地方。 + +这一点非常棒,因为这意味着当我不用 ftrace 去跟踪我的内核时,它根本就不影响性能。而当我开始跟踪时,跟踪的函数越多,产生的开销就越大。 + +(或许有些地方说的不对,但我理解的 ftrace 就是这样工作的) + +### 更容易地使用 ftrace:brendan gregg 的工具 & kernelshark + +正如我们在本文中所看到的,要直接使用 ftrace,你需要仔细考虑各个内核函数/事件分别是做什么的。这很酷,但是也需要做大量的工作! + +Brendan Gregg (我们的 linux 调试工具“大神”)有个工具仓库,它使用 ftrace 去提供关于像 I/O 延迟这样的各种事情的信息。这是它在 GitHub 上全部的 [perf-tools][16] 仓库。 + +这里有一个权衡(tradeoff),那就是这些工具易于使用,但是你只能使用 Brendan Gregg 想到并且决定为之做一个工具的那些功能。好在这样的功能已经非常多了!:) + +另一个工具是将 ftrace 的输出可视化,做的比较好的是 [kernelshark][17]。我还没有用过它,但是看起来似乎很有用。你可以使用 `sudo apt-get install kernelshark` 来安装它。 + +### 一个新的超能力 + +我很高兴能够花一些时间去学习 ftrace!像任何内核工具一样,它在不同的内核版本上会有不同的表现,但我希望有一天你能发现它很有用!
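上面我们已经用 `sudo trace-cmd report | grep chrome` 粗略统计过 Chrome 的缺页次数。下面是一段可以直接运行的小练习(假设性的示例,并非原文内容):它用 awk 把前文 `do_page_fault` 示例输出按进程汇总。这里的 `report` 变量只是文中示例数据的一个副本;在真实机器上,可以把 `printf` 那一步换成 `sudo trace-cmd report`。

```shell
# 用文中的示例行代替真实的 trace-cmd 输出(示例数据)
report=' chrome-15144 [000] 11446.466121: function: do_page_fault
 chrome-15144 [000] 11446.467910: function: do_page_fault
 chrome-15144 [000] 11446.469174: function: do_page_fault
 chrome-15144 [000] 11446.474225: function: do_page_fault
 chrome-15144 [000] 11446.474386: function: do_page_fault
 chrome-15144 [000] 11446.478768: function: do_page_fault
 CompositorTileW-15154 [001] 11446.480172: function: do_page_fault
 chrome-1830 [003] 11446.486696: function: do_page_fault
 CompositorTileW-15154 [001] 11446.488983: function: do_page_fault
 CompositorTileW-15154 [001] 11446.489034: function: do_page_fault
 CompositorTileW-15154 [001] 11446.489045: function: do_page_fault'

# 第 1 列是“进程名-PID”,按它分组计数,看看哪个进程触发的缺页最多
summary=$(printf '%s\n' "$report" | awk '{count[$1]++} END {for (p in count) print p, count[p]}' | sort)
printf '%s\n' "$summary"
```

对真实的 `trace.dat`,同样的管道可以很快回答“这段时间里每个进程各发生了多少次缺页”这类问题。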
+ +### ftrace 系列文章的一个索引 + +最后,这里是我找到的一些 ftrace 方面的文章。它们大部分在 LWN (Linux 新闻周刊)上,它是 Linux 的一个极好的资源(你可以购买一个 [订阅][18]!) + +* [使用 Ftrace 调试内核 - part 1][1] (Dec 2009, Steven Rostedt) + +* [使用 Ftrace 调试内核 - part 2][2] (Dec 2009, Steven Rostedt) + +* [Linux 函数跟踪器的秘密][3] (Jan 2010, Steven Rostedt) + +* [trace-cmd:Ftrace 的一个前端][4] (Oct 2010, Steven Rostedt) + +* [使用 KernelShark 去分析实时调度器][5] (2011, Steven Rostedt) + +* [Ftrace:隐藏的开关][6] (2014, Brendan Gregg) + +* 内核文档:(它十分有用) [Documentation/ftrace.txt][7] + +* 你能跟踪的事件的文档 [Documentation/events.txt][8] + +* 面向 linux 内核开发者的一些 ftrace 设计文档 (不太有用,但是很有趣!) [Documentation/ftrace-design.txt][9] + +-------------------------------------------------------------------------------- + +via: https://jvns.ca/blog/2017/03/19/getting-started-with-ftrace/ + +作者:[Julia Evans ][a] +译者:[qhwdw](https://github.com/qhwdw) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://jvns.ca +[1]:https://lwn.net/Articles/365835/ +[2]:https://lwn.net/Articles/366796/ +[3]:https://lwn.net/Articles/370423/ +[4]:https://lwn.net/Articles/410200/ +[5]:https://lwn.net/Articles/425583/ +[6]:https://lwn.net/Articles/608497/ +[7]:https://raw.githubusercontent.com/torvalds/linux/v4.4/Documentation/trace/ftrace.txt +[8]:https://raw.githubusercontent.com/torvalds/linux/v4.4/Documentation/trace/events.txt +[9]:https://raw.githubusercontent.com/torvalds/linux/v4.4/Documentation/trace/ftrace-design.txt +[10]:https://lwn.net/Articles/290277/ +[11]:https://lwn.net/Articles/365835/ +[12]:https://lwn.net/Articles/410200/ +[13]:https://gist.githubusercontent.com/jvns/e5c2d640f7ec76ed9ed579be1de3312e/raw/78b8425436dc4bb5bb4fa76a4f85d5809f7d1ef2/trace-cmd-report.txt +[14]:https://gist.githubusercontent.com/jvns/f32e9b06bcd2f1f30998afdd93e4aaa5/raw/8154d9828bb895fd6c9b0ee062275055b3775101/function_graph.txt +[15]:https://github.com/torvalds/linux/blob/v4.4/kernel/futex.c#L1313-L1324
+[16]:https://github.com/brendangregg/perf-tools +[17]:https://lwn.net/Articles/425583/ +[18]:https://lwn.net/subscribe/Info From 5cc3ef066c394b08778e2e63a847d16012536d90 Mon Sep 17 00:00:00 2001 From: jessie-pang <35220454+jessie-pang@users.noreply.github.com> Date: Thu, 18 Jan 2018 13:42:11 +0800 Subject: [PATCH 065/226] 20171127 Protecting Your Website From Application Layer DOS Attacks With mod.md --- ... Your Website From Application Layer DOS Attacks With mod.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20171127 Protecting Your Website From Application Layer DOS Attacks With mod.md b/sources/tech/20171127 Protecting Your Website From Application Layer DOS Attacks With mod.md index 2bb34b90ef..c640d776c1 100644 --- a/sources/tech/20171127 Protecting Your Website From Application Layer DOS Attacks With mod.md +++ b/sources/tech/20171127 Protecting Your Website From Application Layer DOS Attacks With mod.md @@ -1,3 +1,5 @@ +Translating by jessie-pang + Protecting Your Website From Application Layer DOS Attacks With mod ====== There exist many ways of maliciously taking a website offline. The more complicated methods involve technical knowledge of databases and programming. A far simpler method is known as a "Denial Of Service", or "DOS" attack. This attack derives its name from its goal which is to deny your regular clients or site visitors normal website service. 
From fe20c972139000b5f9d1d48ae00533febeeca6da Mon Sep 17 00:00:00 2001 From: jessie-pang <35220454+jessie-pang@users.noreply.github.com> Date: Thu, 18 Jan 2018 13:59:08 +0800 Subject: [PATCH 066/226] 20171120 How to use special permissions- the setuid, setgid and sticky bits.md --- ...e special permissions- the setuid, setgid and sticky bits.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20171120 How to use special permissions- the setuid, setgid and sticky bits.md b/sources/tech/20171120 How to use special permissions- the setuid, setgid and sticky bits.md index ab38f8856a..e221a0cbbf 100644 --- a/sources/tech/20171120 How to use special permissions- the setuid, setgid and sticky bits.md +++ b/sources/tech/20171120 How to use special permissions- the setuid, setgid and sticky bits.md @@ -1,3 +1,5 @@ +Translating by jessie-pang + How to use special permissions: the setuid, setgid and sticky bits ====== From e2a632597379de5f5d99b106a128f91e4a26f084 Mon Sep 17 00:00:00 2001 From: darksun Date: Thu, 18 Jan 2018 14:58:09 +0800 Subject: [PATCH 067/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=208=20KDE=20Plasma?= =?UTF-8?q?=20Tips=20and=20Tricks=20to=20Improve=20Your=20Productivity?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...and Tricks to Improve Your Productivity.md | 96 +++++++++++++++++++ 1 file changed, 96 insertions(+) create mode 100644 sources/tech/20180112 8 KDE Plasma Tips and Tricks to Improve Your Productivity.md diff --git a/sources/tech/20180112 8 KDE Plasma Tips and Tricks to Improve Your Productivity.md b/sources/tech/20180112 8 KDE Plasma Tips and Tricks to Improve Your Productivity.md new file mode 100644 index 0000000000..66e96549c7 --- /dev/null +++ b/sources/tech/20180112 8 KDE Plasma Tips and Tricks to Improve Your Productivity.md @@ -0,0 +1,96 @@ +8 KDE Plasma Tips and Tricks to Improve Your Productivity +====== + 
+![](https://www.maketecheasier.com/assets/uploads/2018/01/kde-plasma-desktop-featured.jpg) + +KDE's Plasma is easily one of the most powerful desktop environments available for Linux. It's highly configurable, and it looks pretty good, too. That doesn't amount to a whole lot unless you can actually get things done. + +You can easily configure Plasma and make use of a lot of its convenient and time-saving features to boost your productivity and have a desktop that empowers you, rather than getting in your way. + +These tips aren't in any particular order, so you don't need to prioritize. Pick the ones that best fit your workflow. + + **Related** : [10 of the Best KDE Plasma Applications You Should Try][1] + +### 1. Multimedia Controls + +This isn't so much of a tip as it is something that's good to keep in mind. Plasma keeps multimedia controls everywhere. You don't need to open your media player every time you need to pause, resume, or skip a song; you can mouse over the minimized window or even control it via the lock screen. There's no need to scramble to log in just to change a song or to pause one you forgot about. + +### 2. KRunner + +![KDE Plasma KRunner][2] + +KRunner is an often under-appreciated feature of the Plasma desktop. Most people are used to digging through the application launcher menu to find the program that they're looking to launch. That's not necessary with KRunner. + +To use KRunner, make sure that your focus is on the desktop itself. (Click on it instead of a window.) Then, start typing the name of the program that you want. KRunner will automatically drop down from the top of your screen with suggestions. Click or press Enter on the one you're looking for. It's much faster than remembering which category your program is under. + +### 3. Jump Lists + +![KDE Plasma Jump Lists][3] + +Jump lists are a fairly recent addition to the Plasma desktop. They allow you to launch an application directly to a specific section or feature.
+ +So if you have a launcher on a menu bar, you can right-click and get a list of places to jump to. Select where you want to go, and you're off. + +### 4. KDE Connect + +![KDE Connect Menu Android][4] + +[KDE Connect][5] is a massive help if you have an Android phone. It connects the phone to your desktop so you can share things seamlessly between the devices. + +With KDE Connect, you can see your [Android device's notification][6] on your desktop in real time. It also enables you to send and receive text messages from Plasma without ever picking up your phone. + +KDE Connect also lets you send files and share web pages between your phone and your computer. You can easily move from one device to the other without a lot of hassle or losing your train of thought. + +### 5. Plasma Vaults + +![KDE Plasma Vault][7] + +Plasma Vaults are another new addition to the Plasma desktop. They are KDE's simple solution to encrypted files and folders. If you don't work with encrypted files, this one won't really save you any time. If you do, though, vaults are a much simpler approach. + +Plasma Vaults let you create encrypted directories as a regular user without root and manage them from your task bar. You can mount and unmount the directories on the fly without the need for external programs or additional privileges. + +### 6. Pager Widget + +![KDE Plasma Pager][8] + +Configure your desktop with the pager widget. It allows you to easily access three additional workspaces for even more screen room. + +Add the widget to your menu bar, and you can slide between multiple workspaces. These are all the size of your screen, so you gain multiple times the total screen space. That lets you lay out more windows without getting confused by a minimized mess or disorganization. + +### 7. Create a Dock + +![KDE Plasma Dock][9] + +Plasma is known for its flexibility and the room it allows for configuration. Use that to your advantage. 
If you have programs that you're always using, consider setting up an OS X style dock with your most used applications. You'll be able to get them with a single click rather than going through a menu or typing in their name. + +### 8. Add a File Tree to Dolphin + +![Plasma Dolphin Directory][10] + +It's much easier to navigate folders in a directory tree. Dolphin, Plasma's default file manager, has built-in functionality to display a directory listing in the form of a tree on the side of the folder window. + +To enable the directory tree, click on the "Control" tab, then "Configure Dolphin," "View Modes," and "Details." Finally, select "Expandable Folders." + +Remember that these tips are just tips. Don't try to force yourself to do something that's getting in your way. You may hate using file trees in Dolphin. You may never use Pager. That's alright. There may even be something that you personally like that's not listed here. Do what works for you. That said, at least a few of these should shave some serious time out of your work day. 
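A side note on the KDE Connect section above: KDE Connect also ships a small command-line client, so the same phone actions can be scripted. The sketch below is an assumption-laden example, not from the article; the flag names come from typical `kdeconnect-cli` builds and may differ between versions, and the device name "Phone" is a made-up placeholder.

```shell
# Hypothetical sketch: drive KDE Connect from a shell script.
# Assumes the kdeconnect package provides kdeconnect-cli; "Phone" is a
# placeholder device name -- check `kdeconnect-cli --help` on your install.
if command -v kdeconnect-cli >/dev/null 2>&1; then
    kdeconnect-cli --list-devices || true         # show paired devices
    kdeconnect-cli --name "Phone" --ping || true  # ping a device by name
    msg="kdeconnect-cli ran"
else
    msg="kdeconnect-cli not found; install the kdeconnect package"
fi
echo "$msg"
```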
+ +-------------------------------------------------------------------------------- + +via: https://www.maketecheasier.com/kde-plasma-tips-tricks-improve-productivity/ + +作者:[Nick Congleton][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.maketecheasier.com/author/nickcongleton/ +[1]:https://www.maketecheasier.com/10-best-kde-plasma-applications/ (10 of the Best KDE Plasma Applications You Should Try) +[2]:https://www.maketecheasier.com/assets/uploads/2017/10/pe-krunner.jpg (KDE Plasma KRunner) +[3]:https://www.maketecheasier.com/assets/uploads/2017/10/pe-jumplist.jpg (KDE Plasma Jump Lists) +[4]:https://www.maketecheasier.com/assets/uploads/2017/05/kde-connect-menu-e1494899929112.jpg (KDE Connect Menu Android) +[5]:https://www.maketecheasier.com/send-receive-sms-linux-kde-connect/ +[6]:https://www.maketecheasier.com/android-notifications-ubuntu-kde-connect/ +[7]:https://www.maketecheasier.com/assets/uploads/2017/10/pe-vault.jpg (KDE Plasma Vault) +[8]:https://www.maketecheasier.com/assets/uploads/2017/10/pe-pager.jpg (KDE Plasma Pager) +[9]:https://www.maketecheasier.com/assets/uploads/2017/10/pe-dock.jpg (KDE Plasma Dock) +[10]:https://www.maketecheasier.com/assets/uploads/2017/10/pe-dolphin.jpg (Plasma Dolphin Directory) From 8c33003d4f15ceee31f9b9f8ebd14ddfce621a81 Mon Sep 17 00:00:00 2001 From: ChenYi <31087327+cyleft@users.noreply.github.com> Date: Thu, 18 Jan 2018 17:02:43 +0800 Subject: [PATCH 068/226] Delete 20160808 Top 10 Command Line Games For Linux.md --- ...808 Top 10 Command Line Games For Linux.md | 242 ------------------ 1 file changed, 242 deletions(-) delete mode 100644 sources/tech/20160808 Top 10 Command Line Games For Linux.md diff --git a/sources/tech/20160808 Top 10 Command Line Games For Linux.md b/sources/tech/20160808 Top 10 Command Line Games For Linux.md deleted file mode 100644 index 
1dbe6030f3..0000000000 --- a/sources/tech/20160808 Top 10 Command Line Games For Linux.md +++ /dev/null @@ -1,242 +0,0 @@ - translated by cyleft - -Top 10 Command Line Games For Linux -====== -Brief: This article lists the **best command line games for Linux**. - -Linux has never been the preferred operating system for gaming. Though [gaming on Linux][1] has improved a lot lately. You can [download Linux games][2] from a number of resources. - -There are dedicated [Linux distributions for gaming][3]. Yes, they do exist. But, we are not going to see the Linux gaming distributions today. - -Linux has one added advantage over its Windows counterpart. It has got the mighty Linux terminal. You can do a hell lot of things in terminal including playing **command line games**. - -Yeah, hardcore terminal lovers, gather around. Terminal games are light, fast and hell lotta fun to play. And the best thing of all, you've got a lot of classic retro games in Linux terminal. - -[Suggested read: Gaming On Linux:All You Need To Know][20] - -### Best Linux terminal games - -So let's crack this list and see what are some of the best Linux terminal games. - -### 1. Bastet - -Who hasn't spent hours together playing [Tetris][4]? Simple, but totally addictive. Bastet is the Tetris of Linux. - -![Bastet Linux terminal game][5] - -Use the command below to get Bastet: -``` -sudo apt install bastet -``` - -To play the game, run the below command in terminal: -``` -bastet -``` - -Use spacebar to rotate the bricks and arrow keys to guide. - -### 2. Ninvaders - -Space Invaders. I remember tussling for high score with my brother on this. One of the best arcade games out there. - -![nInvaders command line game in Linux][6] - -Copy paste the command to install Ninvaders. -``` -sudo apt-get install ninvaders -``` - -To play this game, use the command below: -``` -ninvaders -``` - -Arrow keys to move the spaceship. Space bar to shoot at the aliens. 
- -[Suggested read: Top 10 Best Linux Games Released in 2016 That You Can Play Today][21] - - -### 3. Pacman4console - -Yes, the King of the Arcade is here. Pacman4console is the terminal version of the popular arcade hit, Pacman. - -![Pacman4console is a command line Pacman game in Linux][7] - -Use the command below to get pacman4console: -``` -sudo apt-get install pacman4console -``` - -Open a terminal, and I suggest you maximize it. Type the command below to launch the game: -``` -pacman4console -``` - -Use the arrow keys to control the movement. - -### 4. nSnake - -Remember the snake game in old Nokia phones? - -That game kept me hooked to the phone for a really long time. I used to devise various coiling patterns to manage the grown-up snake. - -![nsnake : Snake game in Linux terminal][8] - -We have the [snake game in Linux terminal][9] thanks to [nSnake][9]. Use the command below to install it. -``` -sudo apt-get install nsnake -``` - -To play the game, type in the command below to launch the game. -``` -nsnake -``` - -Use arrow keys to move the snake and feed it. - -### 5. Greed - -Greed is a little like Tron, minus the speed and adrenaline. - -Your location is denoted by a blinking '@'. You are surrounded by numbers and you can choose to move in any of the 4 directions. - -The direction you choose has a number and you move exactly that number of steps. And you repeat the step again. You cannot revisit the visited spot again and the game ends when you cannot make a move. - -I made it sound more complicated than it really is. - -![Greed : Tron game in Linux command line][10] - -Grab greed with the command below: -``` -sudo apt-get install greed -``` - -To launch the game use the command below. Then use the arrow keys to play the game. -``` -greed -``` - -### 6. Air Traffic Controller - -What's better than being a pilot? An air traffic controller. You can simulate an entire air traffic system in your terminal.
To be honest, managing air traffic from a terminal kinda feels, real. - -![Air Traffic Controller game in Linux][11] - -Install the game using the command below: -``` -sudo apt-get install bsdgames -``` - -Type in the command below to launch the game: -``` -atc -``` - -ATC is not a child's play. So read the man page using the command below. - -### 7. Backgammon - -Whether You have played [Backgammon][12] before or not, You should check this out. The instructions and control manuals are all so friendly. Play it against computer or your friend if you prefer. - -![Backgammon terminal game in Linux][13] - -Install Backgammon using this command: -``` -sudo apt-get install bsdgames -``` - -Type in the below command to launch the game: -``` -backgammon -``` - -Press 'y' when prompted for rules of the game. - -### 8. Moon Buggy - -Jump. Fire. Hours of fun. No more words. - -![Moon buggy][14] - -Install the game using the command below: -``` -sudo apt-get install moon-buggy -``` - -Use the below command to start the game: -``` -moon-buggy -``` - -Press space to jump, 'a' or 'l' to shoot. Enjoy - -### 9. 2048 - -Here's something to make your brain flex. [2048][15] is a strategic as well as a highly addictive game. The goal is to get a score of 2048. - -![2048 game in Linux terminal][16] - -Copy paste the commands below one by one to install the game. -``` -wget https://raw.githubusercontent.com/mevdschee/2048.c/master/2048.c - -gcc -o 2048 2048.c -``` - -Type the below command to launch the game and use the arrow keys to play. -``` -./2048 -``` - -### 10. Tron - -How can this list be complete without a brisk action game? - -![Tron Linux terminal game][17] - -Yes, the snappy Tron is available on Linux terminal. Get ready for some serious nimble action. No installation hassle nor setup hassle. One command will launch the game. All You need is an internet connection. 
-``` -ssh sshtron.zachlatta.com -``` - -You can even play this game in multiplayer if there are other gamers online. Read more about [Tron game in Linux][18]. - -### Your pick? - -There you have it, people. Top 10 Linux terminal games. I guess it's ctrl+alt+T now. What is Your favorite among the list? Or got some other fun stuff for the terminal? Do share. - -With inputs from [Abhishek Prakash][19]. - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/best-command-line-games-linux/ - -作者:[Aquil Roshan][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://itsfoss.com/author/aquil/ -[1]:https://itsfoss.com/linux-gaming-guide/ -[2]:https://itsfoss.com/download-linux-games/ -[3]:https://itsfoss.com/manjaro-gaming-linux/ -[4]:https://en.wikipedia.org/wiki/Tetris -[5]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/bastet.jpg -[6]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/ninvaders.jpg -[7]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/pacman.jpg -[8]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/nsnake.jpg -[9]:https://itsfoss.com/nsnake-play-classic-snake-game-linux-terminal/ -[10]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/greed.jpg -[11]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/atc.jpg -[12]:https://en.wikipedia.org/wiki/Backgammon -[13]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/backgammon.jpg -[14]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/moon-buggy.jpg -[15]:https://itsfoss.com/2048-offline-play-ubuntu/ -[16]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/2048.jpg 
-[17]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/tron.jpg -[18]:https://itsfoss.com/play-tron-game-linux-terminal/ -[19]:https://twitter.com/abhishek_pc -[20]:https://itsfoss.com/linux-gaming-guide/ -[21]:https://itsfoss.com/best-linux-games/ From 3dc8c1ca29a17d7ccac562d27654af8ced2da4ef Mon Sep 17 00:00:00 2001 From: ChenYi <31087327+cyleft@users.noreply.github.com> Date: Thu, 18 Jan 2018 17:03:36 +0800 Subject: [PATCH 069/226] translated by cyleft Top 10 Command Line Games For Linux --- ...808 Top 10 Command Line Games For Linux.md | 237 ++++++++++++++++++ 1 file changed, 237 insertions(+) create mode 100644 translated/tech/20160808 Top 10 Command Line Games For Linux.md diff --git a/translated/tech/20160808 Top 10 Command Line Games For Linux.md b/translated/tech/20160808 Top 10 Command Line Games For Linux.md new file mode 100644 index 0000000000..86d5e6fcf7 --- /dev/null +++ b/translated/tech/20160808 Top 10 Command Line Games For Linux.md @@ -0,0 +1,237 @@ +Linux 命令行游戏 Top 10 +====== +概要: 本文列举了 **Linux 中最好的命令行游戏**。 + +Linux 从来都不是游戏的首选操作系统。尽管近日来 [Linux 的游戏][1] 提供了很多。你可以在 [下载 Linux 游戏][2] 得到许多资源。 + +这有专门的 [游戏版 Linux][3]。它确实存在。但是今天,我们并不是要欣赏游戏版 Linux。 + +Linux 有一个超过 Windows 的优势。它拥有一个强大的 Linux 终端。在 Linux 终端上,你可以做很多事情,包括玩 **命令行游戏**。 + +当然,毕竟是 Linux 终端的核心爱好者、拥护者。终端游戏轻便,快速,有地狱般的魔力。而这最有意思的事情是,你可以在 Linux 终端上重温大量经典游戏。 + +[推荐阅读:Linux 上游戏,你所需要了解的全部][20] + +### 最好的 Linux 终端游戏 + +来揭秘这张榜单,找出 Linux 终端最好的游戏。 + +### 1. Bastet + +谁还没花上几个小时玩 [俄罗斯方块][4] ?简单而且容易上瘾。 Bastet 就是 Linux 版的俄罗斯方块。 + +![Linux 终端游戏 Bastet][5] + +使用下面的命令获取 Bastet: +``` +sudo apt install bastet +``` + +运行下列命令,在终端上开始这个游戏: +``` +bastet +``` + +使用空格键旋转方块,方向键控制方块移动 + +### 2. Ninvaders + +Space Invaders(太空侵略者)。我任记得这个游戏里,和我弟弟(哥哥)在高分之路上扭打。这是最好的街机游戏之一。 + +![Linux 终端游戏 nInvaders][6] + +复制粘贴这段代码安装 Ninvaders。 +``` +sudo apt-get install ninvaders +``` + +使用下面的命令开始游戏: +``` +ninvaders +``` + +方向键移动太空飞船。空格键设计外星人。 + +[推荐阅读:2016 你可以开始的 Linux 游戏 Top 10][21] + +### 3. 
Pacman4console + +是的,这个就是街机之王。Pacman4console 是最受欢迎的街机游戏 Pacman(吃豆豆)终端版。 + +![Linux 命令行吃豆豆游戏 Pacman4console][7] + +使用以下命令获取 pacman4console: +``` +sudo apt-get install pacman4console +``` + +打开终端,建议使用最大的终端界面(29x32)。键入以下命令启动游戏: +``` +pacman4console +``` + +使用方向键控制移动。 + +### 4. nSnake + +记得在老式诺基亚手机里玩的贪吃蛇游戏吗? + +这个游戏让我保持对手机着迷很长时间。我曾经设计过各种姿态去获得更长的蛇身。 + +![nsnake : Linux 终端上的贪吃蛇游戏][8] + +我们拥有 [Linux 终端上的贪吃蛇游戏][9] 得感谢 [nSnake][9]。使用下面的命令安装它: +``` +sudo apt-get install nsnake +``` + +键入下面的命令开始游戏: +``` +nsnake +``` + +使用方向键控制蛇身,获取豆豆。 + +### 5. Greed + +Greed 有点像精简调加速和肾上腺素的 Tron(类似贪吃蛇的进化版)。 + +你当前的位置由‘@’表示。你被数字包围了,你可以在四个方向任意移动。你选择的移动方向上标识的数字,就是你能移动的步数。走过的路不能再走,如果你无路可走,游戏结束。 + +听起来,似乎我让它变得更复杂了。 + +![Greed : 命令行上的 Tron][10] + +通过下列命令获取 Greed: +``` +sudo apt-get install greed +``` + +通过下列命令启动游戏,使用方向键控制游戏。 +``` +greed +``` + +### 6. Air Traffic Controller + +还有什么比做飞行员更有意思的?空中交通管制员。在你的终端中,你可以模拟一个空中要塞。说实话,在终端里管理空中交通蛮有意思的。 + +![Linux 空中交通管理员][11] + +使用下列命令安装游戏: +``` +sudo apt-get install bsdgames +``` + +键入下列命令启动游戏: +``` +atc +``` + +ATC 不是孩子玩的游戏。建议查看官方文档。 + +### 7. Backgammon(双陆棋) + +无论之前你有没有玩过 [双陆棋][12],你都应该看看这个。 它的说明书和控制手册都非常友好。如果你喜欢,可以挑战你的电脑或者你的朋友。 + +![Linux 终端上的双陆棋][13] + +使用下列命令安装双陆棋: +``` +sudo apt-get install bsdgames +``` + +键入下列命令启动游戏: +``` +backgammon +``` + +当你需要提示游戏规则时,回复 ‘y’。 + +### 8. Moon Buggy + +跳跃。疯狂。欢乐时光不必多言。 + +![Moon buggy][14] + +使用下列命令安装游戏: +``` +sudo apt-get install moon-buggy +``` + +使用下列命令启动游戏: +``` +moon-buggy +``` + +空格跳跃,‘a’或者‘l’射击。尽情享受吧。 + +### 9. 2048 + +2048 可以活跃你的大脑。[2048][15] 是一个策咯游戏,很容易上瘾。以获取 2048 分为目标。 + +![Linux 终端上的 2048][16] + +复制粘贴下面的命令安装游戏: +``` +wget https://raw.githubusercontent.com/mevdschee/2048.c/master/2048.c + +gcc -o 2048 2048.c +``` + +键入下列命令启动游戏: +``` +./2048 +``` + +### 10. Tron + +没有动作类游戏,这张榜单怎么可能结束? + +![Linux 终端游戏 Tron][17] + +是的,Linux 终端可以实现这种精力充沛的游戏 Tron。为接下来迅捷的反应做准备吧。无需被下载和安装困扰。一个命令即可启动游戏,你只需要一个网络连接 +``` +ssh sshtron.zachlatta.com +``` + +如果由别的在线游戏者,你可以多人游戏。了解更多:[Linux 终端游戏 Tron][18]. + +### 你看上了哪一款? 
+ +朋友,Linux 终端游戏 Top 10,都分享给你了。我猜你现在正准备键入 ctrl+alt+T(终端快捷键) 了。榜单中,那个是你最喜欢的游戏?或者为终端提供其他的有趣的事物?尽情分享吧! + +在 [Abhishek Prakash][19] 回复。 + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/best-command-line-games-linux/ + +作者:[Aquil Roshan][a] +译者:[CYLeft](https://github.com/CYleft) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://itsfoss.com/author/aquil/ +[1]:https://itsfoss.com/linux-gaming-guide/ +[2]:https://itsfoss.com/download-linux-games/ +[3]:https://itsfoss.com/manjaro-gaming-linux/ +[4]:https://en.wikipedia.org/wiki/Tetris +[5]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/bastet.jpg +[6]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/ninvaders.jpg +[7]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/pacman.jpg +[8]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/nsnake.jpg +[9]:https://itsfoss.com/nsnake-play-classic-snake-game-linux-terminal/ +[10]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/greed.jpg +[11]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/atc.jpg +[12]:https://en.wikipedia.org/wiki/Backgammon +[13]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/backgammon.jpg +[14]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/moon-buggy.jpg +[15]:https://itsfoss.com/2048-offline-play-ubuntu/ +[16]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/2048.jpg +[17]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/08/tron.jpg +[18]:https://itsfoss.com/play-tron-game-linux-terminal/ +[19]:https://twitter.com/abhishek_pc +[20]:https://itsfoss.com/linux-gaming-guide/ +[21]:https://itsfoss.com/best-linux-games/ From 51e7225e44a6f5b7c4134264baa2b40cb2a23f21 Mon Sep 17 00:00:00 2001 From: 
darksun Date: Thu, 18 Jan 2018 17:53:42 +0800 Subject: [PATCH 070/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=202=20scientific=20?= =?UTF-8?q?calculators=20for=20the=20Linux=20desktop?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...tific calculators for the Linux desktop.md | 111 ++++++++++++++++++ 1 file changed, 111 insertions(+) create mode 100644 sources/tech/20180115 2 scientific calculators for the Linux desktop.md diff --git a/sources/tech/20180115 2 scientific calculators for the Linux desktop.md b/sources/tech/20180115 2 scientific calculators for the Linux desktop.md new file mode 100644 index 0000000000..f91450b383 --- /dev/null +++ b/sources/tech/20180115 2 scientific calculators for the Linux desktop.md @@ -0,0 +1,111 @@ +2 scientific calculators for the Linux desktop +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_OpenData_CityNumbers.png?itok=lC03ce76) + +Image by : opensource.com + +Every Linux desktop environment comes with at least a simple desktop calculator, but most of those simple calculators are just that: a simple tool for simple calculations. + +Fortunately, there are exceptions; programs that go far beyond square roots and a couple of trigonometric functions, yet are still easy to use. Here are two powerful calculator tools for Linux, plus a couple of bonus options. + +### SpeedCrunch + +[SpeedCrunch][1] is a high-precision scientific calculator with a simple Qt5 graphical interface and strong focus on the keyboard. + +![SpeedCrunch graphical interface][3] + + +SpeedCrunch at work + +It supports working with units and comes loaded with all kinds of functions. + +For example, by writing: +`2 * 10^6 newton / (meter^2)` + +you get: +`= 2000000 pascal` + +By default, SpeedCrunch delivers its results in the international unit system, but units can be transformed with the "in" instruction. 
+ +For example: +`3*10^8 meter / second in kilo meter / hour` + +produces: +`= 1080000000 kilo meter / hour` + +With the `F5` key, all results will turn into scientific notation (`1.08e9 kilo meter / hour`), while with `F2` only numbers that are small enough or big enough will change. More options are available on the Configuration menu. + +The list of available functions is really impressive. It works on Linux, Windows, and MacOS, and it's licensed under GPLv2; you can access its source code on [Bitbucket][4]. + +### Qalculate! + +[Qalculate!][5] (with the exclamation point) has a long and complex history. + +The project offers a powerful library that can be used by other programs (the Plasma desktop can use it to perform calculations from krunner) and a graphical interface built on GTK3. It allows you to work with units, handle physical constants, create graphics, use complex numbers, matrices, and vectors, choose arbitrary precision, and more. + + +![Qalculate! Interface][7] + + +Looking for some physical constants on Qalculate! + +Its use of units is far more intuitive than SpeedCrunch's and it understands common prefixes without problem. Have you heard of an exapascal pressure? I hadn't (the Sun's core stops at `~26 PPa`), but Qalculate! has no problem understanding the meaning of `1 EPa`. Also, Qalculate! is more flexible with syntax errors, so you don't need to worry about closing all those parentheses: if there is no ambiguity, Qalculate! will give you the right answer. + +After a long period on which the project seemed orphaned, it came back to life in 2016 and has been going strong since, with more than 10 versions in just one year. It's licensed under GPLv2 (with source code on [GitHub][8]) and offers versions for Linux and Windows, as well as a MacOS port. + +### Bonus calculators + +#### ConvertAll + +OK, it's not a "calculator," yet this simple application is incredibly useful. 
+ +Most unit converters stop at a long list of basic units and a bunch of common combinations, but not [ConvertAll][9]. Trying to convert from astronomical units per year into inches per second? It doesn't matter if it makes sense or not, if you need to transform a unit of any kind, ConvertAll is the tool for you. + +Just write the starting unit and the final unit in the corresponding boxes; if the units are compatible, you'll get the transformation without protest. + +The main application is written in PyQt5, but there is also an [online version written in JavaScript][10]. + +#### (wx)Maxima with the units package + +Sometimes (OK, many times) a desktop calculator is not enough and you need more raw power. + +[Maxima][11] is a computer algebra system (CAS) with which you can do derivatives, integrals, series, equations, eigenvectors and eigenvalues, Taylor series, Laplace and Fourier transformations, as well as numerical calculations with arbitrary precision, graph on two and three dimensions… we could fill several pages just listing its capabilities. + +[wxMaxima][12] is a well-designed graphical frontend for Maxima that simplifies the use of many Maxima options without compromising others. On top of the full power of Maxima, wxMaxima allows you to create "notebooks" on which you write comments, keep your graphics with your math, etc. One of the (wx)Maxima combo's most impressive features is that it works with dimension units. + +On the prompt, just type: +`load("unit")` + +press Shift+Enter, wait a few seconds, and you'll be ready to work. + +By default, the unit package works with the basic MKS units, but if you prefer, for instance, to get `N` instead of `kg*m/s2`, you just need to type: +`setunits(N)` + +Maxima's help (which is also available from wxMaxima's help menu) will give you more information. + +Do you use these programs? Do you know another great desktop calculator for scientists and engineers or another related tool? 
Tell us about them in the comments! + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/1/scientific-calculators-linux + +作者:[Ricardo Berlasso][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/rgb-es +[1]:http://speedcrunch.org/index.html +[2]:/file/382511 +[3]:https://opensource.com/sites/default/files/u128651/speedcrunch.png (SpeedCrunch graphical interface) +[4]:https://bitbucket.org/heldercorreia/speedcrunch +[5]:https://qalculate.github.io/ +[6]:/file/382506 +[7]:https://opensource.com/sites/default/files/u128651/qalculate-600.png (Qalculate! Interface) +[8]:https://github.com/Qalculate +[9]:http://convertall.bellz.org/ +[10]:http://convertall.bellz.org/js/ +[11]:http://maxima.sourceforge.net/ +[12]:https://andrejv.github.io/wxmaxima/ From 51f8159ea6266019b6325fbe31a68b13fcb7622d Mon Sep 17 00:00:00 2001 From: XYenChi <466530436@qq.com> Date: Thu, 18 Jan 2018 18:02:10 +0800 Subject: [PATCH 071/226] XYenChi is translating --- sources/tech/20171231 Why You Should Still Love Telnet.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20171231 Why You Should Still Love Telnet.md b/sources/tech/20171231 Why You Should Still Love Telnet.md index 6e6976fda4..201ee91bd4 100644 --- a/sources/tech/20171231 Why You Should Still Love Telnet.md +++ b/sources/tech/20171231 Why You Should Still Love Telnet.md @@ -1,3 +1,4 @@ +XYenChi is translating Why You Should Still Love Telnet ====== Telnet, the protocol and the command line tool, were how system administrators used to log into remote servers. However, due to the fact that there is no encryption all communication, including passwords, are sent in plaintext meant that Telnet was abandoned in favour of SSH almost as soon as SSH was created. 
From 2e21cd06892ab6397b31826e14e22a467d5efe44 Mon Sep 17 00:00:00 2001 From: darksun Date: Thu, 18 Jan 2018 18:02:23 +0800 Subject: [PATCH 072/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20To=20Create?= =?UTF-8?q?=20A=20Bootable=20Zorin=20OS=20USB=20Drive?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...To Create A Bootable Zorin OS USB Drive.md | 315 ++++++++++++++++++ 1 file changed, 315 insertions(+) create mode 100644 sources/tech/20180116 How To Create A Bootable Zorin OS USB Drive.md diff --git a/sources/tech/20180116 How To Create A Bootable Zorin OS USB Drive.md b/sources/tech/20180116 How To Create A Bootable Zorin OS USB Drive.md new file mode 100644 index 0000000000..4ab7fea3f6 --- /dev/null +++ b/sources/tech/20180116 How To Create A Bootable Zorin OS USB Drive.md @@ -0,0 +1,315 @@ +How To Create A Bootable Zorin OS USB Drive +====== +![Zorin OS][17] + +### Introduction + +In this guide I will show you how to create a bootable Zorin OS USB Drive. + +To be able to follow this guide you will need the following: + + * A blank USB drive + * An internet connection + + + +### What Is Zorin OS? + +Zorin OS is a Linux based operating system. + +If you are a Windows user you might wonder why you would bother with Zorin OS. If you are a Linux user then you might also wonder why you would use Zorin OS over other distributions such as Linux Mint or Ubuntu. + +If you are using an older version of Windows and you can't afford to upgrade to Windows 10 or your computer doesn't have the right specifications for running Windows 10 then Zorin OS provides a free (or cheap, depending how much you choose to donate) upgrade path allowing you to continue to use your computer in a much more secure environment. + +If your current operating system is Windows XP or Windows Vista then you might consider using Zorin OS Lite as opposed to Zorin OS Core. 
+ +The features of Zorin OS Lite are generally the same as the Zorin OS Core product but some of the applications installed and the desktop environment used for displaying menus and icons and other Windowsy features take up much less memory and processing power. + +If you are running Windows 7 then your operating system is coming towards the end of its life. You could probably upgrade to Windows 10 but at a hefty price. + +Not everybody has the finances to pay for a new Windows license and not everybody has the money to buy a brand new computer. + +Zorin OS will help you extend the life of your computer and you will still feel you are using a premium product and that is because you will be. The product with the highest price doesn't always provide the best value. + +Whilst we are talking about value for money, Zorin OS allows you to install the best free and open source software available and comes with a good selection of packages pre-installed. + +For the home user, using Zorin OS doesn't have to feel any different to running Windows. You can browse the web using the browser of your choice, you can listen to music and watch videos. There are mail clients and other productivity tools. + +Talking of productivity there is LibreOffice. LibreOffice has everything the average home user requires from an office suite with a word processor, spreadsheet and presentations package. + +If you want to run Windows software then you can use the pre-installed PlayOnLinux and WINE packages to install and run all manner of packages including Microsoft Office. + +By running Zorin OS you will get the extra security benefits of running a Linux based operating system. + +Are you fed up with Windows updates stalling your productivity? When Windows wants to install updates it requires a reboot and then a long wait whilst it proceeds to install update after update. Sometimes it even forces a reboot whilst you are busy working. + +Zorin OS is different. 
Updates download and install themselves whilst you are using the computer. You won't even need to know it is happening. + +Why Zorin over Mint or Ubuntu? Zorin is the happy stepping stone between Windows and Linux. It is Linux but you don't need to care that it is Linux. If you decide later on to move to something different then so be it but there really is no need. + +### The Zorin OS Website + +![](http://dailylinuxuser.com/wp-content/uploads/2018/01/zorinwebsite1-678x381.png) + +You can visit the Zorin OS website by visiting [www.zorinos.com][18]. + +The homepage of the Zorin OS website tells you everything you need to know. + +"Zorin OS is an alternative to Windows and macOX, designed to make your computer faster, more powerful and secure". + +There is nothing that tells you that Zorin OS is based on Linux. There is no need for Zorin to tell you that because even though Windows used to be heavily based on DOS you didn't need to know DOS commands to use it. Likewise you don't necessarily need to know Linux commands to use Zorin. + +If you scroll down the page you will see a slide show highlighting the way the desktop looks and feels under Zorin. + +The good thing is that you can customise the user interface so that if you prefer a Windows layout you can use a Windows style layout but if you prefer a Mac style layout you can go for that as well. + +Zorin OS is based on Ubuntu Linux and the website uses this fact to highlight that underneath it has a stable base and it highlights the security benefits provided by Linux. + +If you want to see what applications are available for Zorin then there is a link to do that and Zorin never sells your data and protects your privacy. + +### What Are The Different Versions Of Zorin OS + +#### Zorin OS Ultimate + +The ultimate edition takes the core edition and adds other features such as different layouts, more applications pre-installed and extra games. 
+ +The ultimate edition comes at a price of 19 euros which is a bargain compared to other operating systems. + +#### Zorin OS Core + +The core version is the standard edition and comes with everything the average person could need from the outset. + +This is the version I will show you how to download and install in this guide. + +#### Zorin OS Lite + +Zorin OS Lite also has an ultimate version available and a core version. Zorin OS Lite is perfect for older computers and the main difference is the desktop environments used to display menus and handle screen elements such as icons and panels. + +Zorin OS Lite is less memory intensive than Zorin OS. + +#### Zorin OS Business + +Zorin OS Business comes with business applications installed as standard such as finance applications and office applications. + +### How To Get Zorin OS + +To download Zorin OS visit . + +To get the core version scroll past the Zorin Ultimate section until you get to the Zorin Core section. + +You will see a small pay panel which allows you to choose how much you wish to pay for Zorin Core with a purchase now button underneath. + +#### How To Pay For Zorin OS + +![](http://dailylinuxuser.com/wp-content/uploads/2018/01/zorinwebsite1-678x381.png) + +You can choose from the three preset amounts or enter an amount of your choice in the "Custom" box. + +When you click "Purchase Zorin OS Core" the following window will appear: + +![](http://dailylinuxuser.com/wp-content/uploads/2018/01/payforzorin.png) + +You can now enter your email and credit card information. + +When you click the "pay" button a window will appear with a download link. + +#### How To Get Zorin OS For Free + +If you don't wish to pay anything at all you can enter zero (0) into the custom box. The button will change and will show the words "Download Zorin OS Core". 
+ +#### How To Download Zorin OS + +![](http://dailylinuxuser.com/wp-content/uploads/2018/01/downloadzorin.png) + +Whether you have bought Zorin or have chosen to download for free, a window will appear with the option to download a 64 bit or 32 bit version of Zorin. + +Most modern computers are capable of running 64 bit operating systems but in order to check within Windows click the "start" button and type "system information". + +![](http://dailylinuxuser.com/wp-content/uploads/2018/01/systeminfo.png) + +Click on the "System Information" desktop app and halfway down the right panel you will see the words "system type". If you see the words "x64 based PC" then the system is capable of running 64-bit operating systems. + +If your computer is capable of running 64-bit operating systems click on the "Download 64 bit" button otherwise click on "Download 32 bit". + +The ISO image file for Zorin will now start to download to your computer. + +### How To Verify If The Zorin OS Download Is Valid + +It is important to check whether the download is valid for many reasons. + +If the file has only partially downloaded or there were interruptions whilst downloading and you had to resume then the image might not be perfect and it should be downloaded again. + +More importantly you should check the validity to make sure the version you downloaded is genuine and wasn't uploaded by a hacker. + +In order to check the validity of the ISO image you should download a piece of software called QuickHash for Windows from . + +Click the "download" link and when the file has downloaded double click on it. + +Click on the relevant application file within the zip file. If you have a 32-bit system click "Quickhash-v2.8.4-32bit" or for a 64-bit system click "Quickhash-v2.8.4-64bit". + +Click on the "Run" button. + +![](http://dailylinuxuser.com/wp-content/uploads/2018/01/zorinhash.png) + +Click the SHA256 radio button on the left side of the screen and then click on the file tab. 
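As an aside for Linux users: the check that QuickHash performs can also be sketched from a terminal with the sha256sum tool. The ISO filename below is a placeholder, and a tiny stand-in file is created first so the commands run as written; substitute the Zorin image you actually downloaded:

```shell
# Create a stand-in file; replace this with the real downloaded ISO.
printf 'stand-in for the ISO contents' > Zorin-OS-Core-64.iso

# Print the SHA-256 hash of the file. Compare the printed value with
# the checksum published on the Zorin OS website; if they differ,
# download the image again.
sha256sum Zorin-OS-Core-64.iso
```

The printed value should match the published checksum character for character; if it does not, the download is not valid.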
+ +Click "Select File" and navigate to the downloads folder. + +Choose the Zorin ISO image downloaded previously. + +A progress bar will now work out the hash value for the ISO image. + +To compare this with the valid keys available for Zorin visit and scroll down until you see the list of checksums as follows: + +![](http://dailylinuxuser.com/wp-content/uploads/2018/01/zorinhashcodes.png) + +Select the long list of scrambled characters next to the version of Zorin OS that you downloaded and press CTRL and C to copy. + +Go back to the Quickhash screen and paste the value into the "Expected hash value" box by pressing CTRL and V. + +You should see the words "Expected hash matches the computed file hash, OK". + +If the values do not match you will see the words "Expected hash DOES NOT match the computed file hash" and you should download the ISO image again. + +### How To Create A Bootable Zorin OS USB Drive + +In order to be able to install Zorin you will need to install a piece of software called Etcher. You will also need a blank USB drive. + +You can download Etcher from . + +![](http://dailylinuxuser.com/wp-content/uploads/2018/01/downloadetcher.png) + +If you are using a 64 bit computer click on the "Download for Windows x64" link otherwise click on the little arrow and choose "Etcher for Windows x86 (32-bit) (Installer)". + +Insert the USB drive into your computer and double click on the "Etcher" setup executable file. + +![](http://dailylinuxuser.com/wp-content/uploads/2018/01/etcherlicense.png) + +When the license screen appears click "I Agree". + +Etcher should start automatically after the installation completes but if it doesn't you can press the Windows key or click the start button and search for "Etcher". + +![](http://dailylinuxuser.com/wp-content/uploads/2018/01/etcherscreen.png) + +Click on "Select Image" and select the "Zorin" ISO image downloaded previously. + +Click "Flash". + +Windows will ask for your permission to continue. 
Click "Yes" to accept. + +After a while a window will appear with the words "Flash Complete". + +### How To Buy A Zorin OS USB Drive + +If the above instructions seem too much like hard work then you can order a Zorin USB Drive by clicking one of the following links: + +* [Zorin OS Core – 32-bit DVD][1] + +* [Zorin OS Core – 64-bit DVD][2] + +* [Zorin OS Core – 16 gigabyte USB drive (32-bit)][3] + +* [Zorin OS Core – 32 gigabyte USB drive (32-bit)][4] + +* [Zorin OS Core – 64 gigabyte USB drive (32-bit)][5] + +* [Zorin OS Core – 16 gigabyte USB drive (64-bit)][6] + +* [Zorin OS Core – 32 gigabyte USB drive (64-bit)][7] + +* [Zorin OS Core – 64 gigabyte USB drive (64-bit)][8] + +* [Zorin OS Lite – 32-bit DVD][9] + +* [Zorin OS Lite – 64-bit DVD][10] + +* [Zorin OS Lite – 16 gigabyte USB drive (32-bit)][11] + +* [Zorin OS Lite – 32 gigabyte USB drive (32-bit)][12] + +* [Zorin OS Lite – 64 gigabyte USB drive (32-bit)][13] + +* [Zorin OS Lite – 16 gigabyte USB drive (64-bit)][14] + +* [Zorin OS Lite – 32 gigabyte USB drive (64-bit)][15] + +* [Zorin OS Lite – 64 gigabyte USB drive (64-bit)][16] + + +### How To Boot Into Zorin OS Live + +On older computers simply insert the USB drive and restart the computer. The boot menu for Zorin should appear straight away. + +On modern computers insert the USB drive, restart the computer and before Windows loads press the appropriate function key to bring up the boot menu. + +The following list shows the key or keys you can press for the most popular computer manufacturers. + + * Acer - Escape, F12, F9 + * Asus - Escape, F8 + * Compaq - Escape, F9 + * Dell - F12 + * Emachines - F12 + * HP - Escape, F9 + * Intel - F10 + * Lenovo - F8, F10, F12 + * Packard Bell - F8 + * Samsung - Escape, F12 + * Sony - F10, F11 + * Toshiba - F12 + + + +Check the manufacturer's website to find the key for your computer if it isn't listed or keep trying different function keys or the escape key. 
+ +A screen will appear with the following three options: + + 1. Try Zorin OS without Installing + 2. Install Zorin OS + 3. Check disc for defects + + + +Choose "Try Zorin OS without Installing" by pressing enter with that option selected. + +### Summary + +You can now try Zorin OS without damaging your current operating system. + +To get back to your original operating system reboot and remove the USB drive. + +### How To Remove Zorin OS From The USB Drive + +If you have decided that Zorin OS is not for you and you want to get the USB drive back into its pre-Zorin state follow this guide: + +[How To Fix A USB Drive After Linux Has Been Installed On It][19] + +-------------------------------------------------------------------------------- + +via: http://dailylinuxuser.com/2018/01/how-to-create-a-bootable-zorin-os-usb-drive.html + +作者:[admin][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: +[1]:https://www.osdisc.com/products/zorinos/zorin-os-122-core-install-live-dvd-32bit.html?affiliate=everydaylinuxuser +[2]:https://www.osdisc.com/products/zorinos/zorin-os-122-core-install-live-dvd-64bit.html?affiliate=everydaylinuxuser +[3]:https://www.osdisc.com/products/zorinos/zorin-os-122-core-16gb-usb-flash-drive-32bit.html?affiliate=everydaylinuxuser +[4]:https://www.osdisc.com/products/zorinos/zorin-os-122-core-32gb-usb-flash-drive-32bit.html?affiliate=everydaylinuxuser +[5]:https://www.osdisc.com/products/zorinos/zorin-os-122-core-64gb-usb-flash-drive-32bit.html?affiliate=everydaylinuxuser +[6]:https://www.osdisc.com/products/zorinos/zorin-os-122-core-16gb-usb-flash-drive-64bit.html?affiliate=everydaylinuxuser +[7]:https://www.osdisc.com/products/zorinos/zorin-os-122-core-32gb-usb-flash-drive-64bit.html?affiliate=everydaylinuxuser 
+[8]:https://www.osdisc.com/products/zorinos/zorin-os-122-core-64gb-usb-flash-drive-64bit.html?affiliate=everydaylinuxuser +[9]:https://www.osdisc.com/products/zorinos/zorin-os-122-lite-install-live-dvd-32bit.html?affiliate=everydaylinuxuser +[10]:https://www.osdisc.com/products/zorinos/zorin-os-122-lite-install-live-dvd-64bit.html?affiliate=everydaylinuxuser +[11]:https://www.osdisc.com/products/zorinos/zorin-os-122-lite-16gb-usb-flash-drive-32bit.html?affiliate=everydaylinuxuser +[12]:https://www.osdisc.com/products/zorinos/zorin-os-122-lite-32gb-usb-flash-drive-32bit.html?affiliate=everydaylinuxuser +[13]:https://www.osdisc.com/products/zorinos/zorin-os-122-lite-64gb-usb-flash-drive-32bit.html?affiliate=everydaylinuxuser +[14]:https://www.osdisc.com/products/zorinos/zorin-os-122-lite-16gb-usb-flash-drive-64bit.html?affiliate=everydaylinuxuser +[15]:https://www.osdisc.com/products/zorinos/zorin-os-122-lite-32gb-usb-flash-drive-64bit.html?affiliate=everydaylinuxuser +[16]:https://www.osdisc.com/products/zorinos/zorin-os-122-lite-64gb-usb-flash-drive-64bit.html?affiliate=everydaylinuxuser +[17]:http://dailylinuxuser.com/wp-content/uploads/2018/01/zorindesktop-678x381.png (Zorin OS) +[18]:http://www.zorinos.com +[19]:http://dailylinuxuser.com/2016/04/how-to-fix-usb-drive-after-linux-has.html From b418f5933868634dab980fdc0b23709e87c55c56 Mon Sep 17 00:00:00 2001 From: darksun Date: Thu, 18 Jan 2018 18:10:14 +0800 Subject: [PATCH 073/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Why=20building=20?= =?UTF-8?q?a=20community=20is=20worth=20the=20extra=20effort?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...g a community is worth the extra effort.md | 66 +++++++++++++++++++ 1 file changed, 66 insertions(+) create mode 100644 sources/tech/20180116 Why building a community is worth the extra effort.md diff --git a/sources/tech/20180116 Why building a community is worth the extra effort.md b/sources/tech/20180116 Why building a 
community is worth the extra effort.md new file mode 100644 index 0000000000..ec971e84eb --- /dev/null +++ b/sources/tech/20180116 Why building a community is worth the extra effort.md @@ -0,0 +1,66 @@ +Why building a community is worth the extra effort +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_brandbalance.png?itok=XSQ1OU16) + +When we launched [Nethesis][1] in 2003, we were just system integrators. We only used existing open source projects. Our business model was clear: Add multiple forms of value to those projects: know-how, documentation for the Italian market, extra modules, professional support, and training courses. We gave back to upstream projects as well, through upstream code contributions and by participating in their communities. + +Times were different then. We couldn't use the term "open source" too loudly. People associated it with words like: "nerdy," "no value" and, worst of all, "free." Not too good for a business. + +On a Saturday in 2010, with pastries and espresso in hand, the Nethesis staff were discussing how to move things forward (hey, we like to eat and drink while we innovate!). In spite of the momentum working against us, we decided not to change course. In fact, we decided to push harder--to make open source, and an open way of working, a successful model for running a business. + +Over the years, we've proven that model's potential. And one thing has been key to our success: community. + +In this three-part series, I'll explain the important role community plays in an open organization's existence. I'll explore why an organization would want to build a community, and discuss how to build one--because I really do believe it's the best way to generate new innovations today.
+ +### The crazy idea + +Together with the Nethesis guys, we decided to build our own open source project: our own operating system, built on top of CentOS (because we didn't want to reinvent the wheel). We assumed that we had the experience, know-how, and workforce to achieve it. We felt brave. + +And we very much wanted to build an operating system called [NethServer][2] with one mission: making a sysadmin's life easier with open source. We knew we could create a Linux distribution for a server that would be more accessible, easier to adopt, and simpler to understand than anything currently offered. + +Above all, though, we decided to create a real, 100% open project with three primary rules: + + * completely free to download, + * openly developed, and + * community-driven + + + +That last one is important. We were a company; we were able to develop it by ourselves. We would have been more effective (and made quicker decisions) if we'd done the work in-house. It would have been so simple, like any other company in Italy. + +But we were so deeply into open source culture that we chose a different path. + +We really wanted as many people as possible around us, around the product, and around the company. We wanted as many perspectives on the work as possible. We realized: Alone, you can go fast--but if you want to go far, you need to go together. + +So we decided to build a community instead. + +### What next? + +We realized that creating a community has so many benefits. For example, if the people who use your product are really involved in the project, they will provide feedback and use cases, write documentation, catch bugs, compare it with other products, suggest features, and contribute to development. All of this generates innovations, attracts contributors and customers, and expands your product's user base. + +But quickly the question arose: How can we build a community? We didn't know how to achieve that.
We'd participated in many communities, but we'd never built one. + +We were good at code--not with people. And we were a company, an organization with very specific priorities. So how were we going to build a community and foster good relationships between the company and the community itself? + +We did the first thing you had to do: study. We learned from experts, blogs, and lots of books. We experimented. We failed many times, collected data from the outcomes, and tested them again. + +Eventually we learned the golden rule of community management: There is no golden rule of community management. + +People are too complex and communities are too different to have one rule "to rule them all." + +One thing I can say, however, is that a healthy relationship between a community and a company is always a process of give and take. In my next article, I'll discuss what your organization should expect to give if it wants a flourishing and innovative community. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/open-organization/18/1/why-build-community-1 + +作者:[Alessio Fattorini][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/alefattorini +[1]:http://www.nethesis.it/ +[2]:http://www.nethserver.org/ From d487bb0197f2c7618e0cfa17d20761bc2db8876b Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 18 Jan 2018 18:50:27 +0800 Subject: [PATCH 074/226] PRF:20090127 Anatomy of a Program in Memory.md @qhwdw --- translated/tech/20090127 Anatomy of a Program in Memory.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/translated/tech/20090127 Anatomy of a Program in Memory.md b/translated/tech/20090127 Anatomy of a Program in Memory.md index e185881262..4a08caa4be 100644 --- a/translated/tech/20090127 Anatomy of a Program in Memory.md
+++ b/translated/tech/20090127 Anatomy of a Program in Memory.md @@ -15,7 +15,7 @@ ![Flexible Process Address Space Layout In Linux](http://static.duartes.org/img/blogPosts/linuxFlexibleAddressSpaceLayout.png) -当计算机还是快乐、安全的时代时,在机器中的几乎每个进程上,那些段的起始虚拟地址都是**完全相同**的。这将使远程挖掘安全漏洞变得容易。漏洞利用经常需要去引用绝对内存位置:比如在栈中的一个地址,一个库函数的地址,等等。远程攻击闭着眼睛也会选择这个地址,因为地址空间都是相同的。当攻击者们这样做的时候,人们就会受到伤害。因此,地址空间随机化开始流行起来。Linux 会通过在其起始地址上增加偏移量来随机化[栈][3]、[内存映射段][4]、以及[堆][5]。不幸的是,32 位的地址空间是非常拥挤的,为地址空间随机化留下的空间不多,因此 [妨碍了地址空间随机化的效果][6]。 +当计算机还是快乐、安全的时代时,在机器中的几乎每个进程上,那些段的起始虚拟地址都是**完全相同**的。这将使远程挖掘安全漏洞变得容易。漏洞利用经常需要去引用绝对内存位置:比如在栈中的一个地址,一个库函数的地址,等等。远程攻击可以闭着眼睛选择这个地址,因为地址空间都是相同的。当攻击者们这样做的时候,人们就会受到伤害。因此,地址空间随机化开始流行起来。Linux 会通过在其起始地址上增加偏移量来随机化[栈][3]、[内存映射段][4]、以及[堆][5]。不幸的是,32 位的地址空间是非常拥挤的,为地址空间随机化留下的空间不多,因此 [妨碍了地址空间随机化的效果][6]。 在进程地址空间中最高的段是栈,在大多数编程语言中它存储本地变量和函数参数。调用一个方法或者函数将推送一个新的栈帧stack frame到这个栈。当函数返回时这个栈帧被删除。这个简单的设计,可能是因为数据严格遵循 [后进先出(LIFO)][7] 的次序,这意味着跟踪栈内容时不需要复杂的数据结构 —— 一个指向栈顶的简单指针就可以做到。推入和弹出也因此而非常快且准确。也可能是,持续的栈区重用往往会在 [CPU 缓存][8] 中保持活跃的栈内存,这样可以加快访问速度。进程中的每个线程都有它自己的栈。 @@ -25,7 +25,7 @@ 在栈的下面,有内存映射段。在这里,内核将文件内容直接映射到内存。任何应用程序都可以通过 Linux 的 [`mmap()`][12] 系统调用( [代码实现][13])或者 Windows 的 [`CreateFileMapping()`][14] / [`MapViewOfFile()`][15] 来请求一个映射。内存映射是实现文件 I/O 的方便高效的方式。因此,它经常被用于加载动态库。有时候,也被用于去创建一个并不匹配任何文件的匿名内存映射,这种映射经常被用做程序数据的替代。在 Linux 中,如果你通过 [`malloc()`][16] 去请求一个大的内存块,C 库将会创建这样一个匿名映射而不是使用堆内存。这里所谓的“大”表示是超过了`MMAP_THRESHOLD` 设置的字节数,它的缺省值是 128 kB,可以通过 [`mallopt()`][17] 去调整这个设置值。 -接下来讲的是“堆”,就在我们接下来的地址空间中,堆提供运行时内存分配,像栈一样,但又不同于栈的是,它分配的数据生存期要长于分配它的函数。大多数编程语言都为程序提供了堆管理支持。因此,满足内存需要是编程语言运行时和内核共同来做的事情。在 C 中,堆分配的接口是 [`malloc()`][18] 一族,然而在垃圾回收式编程语言中,像 C#,这个接口使用 `new` 关键字。 +接下来讲的是“堆”,就在我们接下来的地址空间中,堆提供运行时内存分配,像栈一样,但又不同于栈的是,它分配的数据生存期要长于分配它的函数。大多数编程语言都为程序提供了堆管理支持。因此,满足内存需要是编程语言运行时和内核共同来做的事情。在 C 中,堆分配的接口是 [`malloc()`][18] 一族,然而在支持垃圾回收的编程语言中,像 C#,这个接口使用 `new` 关键字。 如果在堆中有足够的空间可以满足内存请求,它可以由编程语言运行时来处理内存分配请求,而无需内核参与。否则将通过 [`brk()`][19] 系统调用([代码实现][20])来扩大堆以满足内存请求所需的大小。堆管理是比较 
[复杂的][21],在面对我们程序的混乱分配模式时,它通过复杂的算法,努力在速度和内存使用效率之间取得一种平衡。服务一个堆请求所需要的时间可能是非常可观的。实时系统有一个 [特定用途的分配器][22] 去处理这个问题。堆也会出现  _碎片化_ ,如下图所示: @@ -51,7 +51,7 @@ via: http://duartes.org/gustavo/blog/post/anatomy-of-a-program-in-memory/ -作者:[gustavo][a] +作者:[Gustavo Duarte][a] 译者:[qhwdw](https://github.com/qhwdw) 校对:[wxy](https://github.com/wxy) From d5eec05b521f820487477dff10805416f87665c2 Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 18 Jan 2018 18:50:47 +0800 Subject: [PATCH 075/226] PUB:20090127 Anatomy of a Program in Memory.md @qhwdw https://linux.cn/article-9255-1.html --- .../tech => published}/20090127 Anatomy of a Program in Memory.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20090127 Anatomy of a Program in Memory.md (100%) diff --git a/translated/tech/20090127 Anatomy of a Program in Memory.md b/published/20090127 Anatomy of a Program in Memory.md similarity index 100% rename from translated/tech/20090127 Anatomy of a Program in Memory.md rename to published/20090127 Anatomy of a Program in Memory.md From 3d36e3d24dc80b033b845de6eab6d52e665392db Mon Sep 17 00:00:00 2001 From: darksun Date: Thu, 18 Jan 2018 20:41:27 +0800 Subject: [PATCH 076/226] translate done: 20180109 Linux size Command Tutorial for Beginners (6 Examples).md --- ...and Tutorial for Beginners (6 Examples).md | 143 ------------------ ...and Tutorial for Beginners (6 Examples).md | 137 +++++++++++++++++ 2 files changed, 137 insertions(+), 143 deletions(-) delete mode 100644 sources/tech/20180109 Linux size Command Tutorial for Beginners (6 Examples).md create mode 100644 translated/tech/20180109 Linux size Command Tutorial for Beginners (6 Examples).md diff --git a/sources/tech/20180109 Linux size Command Tutorial for Beginners (6 Examples).md b/sources/tech/20180109 Linux size Command Tutorial for Beginners (6 Examples).md deleted file mode 100644 index 4467e442c5..0000000000 --- a/sources/tech/20180109 Linux size Command Tutorial for Beginners (6 
Examples).md +++ /dev/null @@ -1,143 +0,0 @@ -translating by lujun9972 -Linux size Command Tutorial for Beginners (6 Examples) -====== - -As some of you might already know, an object or executable file in Linux consists of several sections (like txt and data). In case you want to know the size of each section, there exists a command line utility - dubbed **size** \- that provides you with this information. In this tutorial, we will discuss the basics of this tool using some easy to understand examples. - -But before we do that, it's worth mentioning that all examples mentioned in this article have been tested on Ubuntu 16.04LTS. - -## Linux size command - -The size command basically lists section sizes as well as the total size for the input object file(s). Here's the syntax for the command: -``` -size [-A|-B|--format=compatibility] -            [--help] -            [-d|-o|-x|--radix=number] -            [--common] -            [-t|--totals] -            [--target=bfdname] [-V|--version] -            [objfile...] -``` - -And here's how the man page describes this utility: -``` -The GNU size utility lists the section sizes---and the total size---for each of the object or archive files objfile in its argument list. By default, one line of output is generated for each object file or each module in an archive. - -objfile... are the object files to be examined. If none are specified, the file "a.out" will be used. -``` - -Following are some Q&A-styled examples that'll give you a better idea about how the size command works. - -## Q1. How to use size command? - -Basic usage of size is very simple. All you have to do is to pass the object/executable file name as input to the tool. Following is an example: - -``` -size apl -``` - -Following is the output the above command produced on our system: - -[![How to use size command][1]][2] - -The first three entries are for text, data, and bss sections, with their corresponding sizes.
 Then comes the total in decimal and hexadecimal formats. And finally, the last entry is for the filename. - -## Q2. How to switch between different output formats? - -The default output format, the man page for size says, is similar to the Berkeley format. However, if you want, you can go for the System V convention as well. For this, you'll have to use the **\--format** option with SysV as the value. - -``` -size apl --format=SysV -``` - -Here's the output in this case: - -[![How to switch between different output formats][3]][4] - -## Q3. How to switch between different size units? - -By default, the size of sections is displayed in decimal. However, if you want, you can have this information in octal as well as hexadecimal. For this, use the **-o** and **-x** command line options. - -[![How to switch between different size units][5]][6] - -Here's what the man page says about these options: -``` --d --o --x ---radix=number - -Using one of these options, you can control whether the size of each section is given in decimal -(-d, or --radix=10); octal (-o, or --radix=8); or hexadecimal (-x, or --radix=16).  In ---radix=number, only the three values (8, 10, 16) are supported. The total size is always given in -two radices; decimal and hexadecimal for -d or -x output, or octal and hexadecimal if you're using --o. -``` - -## Q4. How to make size command show totals of all object files? - -If you are using size to find out section sizes for multiple files in one go, then if you want, you can also have the tool provide totals of all column values. You can enable this feature using the **-t** command line option. - -``` -size -t [file1] [file2] ... -``` - -The following screenshot shows this command line option in action: - -[![How to make size command show totals of all object files][7]][8] - -The last row in the output has been added by the **-t** command line option. - -## Q5. How to make size print total size of common symbols in each file?
 - -If you are running the size command with multiple input files, and want the command to display common symbols in each file, then you can do this with the **\--common** command line option. - -``` -size --common [file1] [file2] ... -``` - -It's also worth mentioning that when using the Berkeley format these are included in the bss size. - -## Q6. What are the other available command line options? - -Aside from the ones discussed until now, size also offers some generic command line options like **-v** (for version info) and **-h** (for a summary of eligible arguments and options). - -In addition, you can also make size read command-line options from a file. This you can do using the **@file** option. Following are some details related to this option: -``` -The options read are inserted in place of the original @file option. If file does not exist, or - cannot be read, then the option will be treated literally, and not removed. Options in file are -separated by whitespace. A whitespace character may be included in an option by surrounding the -entire option in either single or double quotes. Any character (including a backslash) may be -included by prefixing the character to be included with a backslash. The file may itself contain -additional @file options; any such options will be processed recursively. -``` - -## Conclusion - -One thing is clear: the size command isn't for everybody. It's aimed at only those who deal with the structure of object/executable files in Linux. So if you are among the target audience, practice the options we've discussed here, and you should be ready to use the tool on a daily basis. For more information on size, head to its [man page][11].
- - -------------------------------------------------------------------------------- - -via: https://www.howtoforge.com/linux-size-command/ - -作者:[Himanshu Arora][a] -译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.howtoforge.com -[1]:https://www.howtoforge.com/images/command-tutorial/size-basic-usage.png -[2]:https://www.howtoforge.com/images/command-tutorial/big/size-basic-usage.png -[3]:https://www.howtoforge.com/images/command-tutorial/size-format-option.png -[4]:https://www.howtoforge.com/images/command-tutorial/big/size-format-option.png -[5]:https://www.howtoforge.com/images/command-tutorial/size-o-x-options.png -[6]:https://www.howtoforge.com/images/command-tutorial/big/size-o-x-options.png -[7]:https://www.howtoforge.com/images/command-tutorial/size-t-option.png -[8]:https://www.howtoforge.com/images/command-tutorial/big/size-t-option.png -[9]:https://www.howtoforge.com/images/command-tutorial/size-v-x1.png -[10]:https://www.howtoforge.com/images/command-tutorial/big/size-v-x1.png -[11]:https://linux.die.net/man/1/size diff --git a/translated/tech/20180109 Linux size Command Tutorial for Beginners (6 Examples).md b/translated/tech/20180109 Linux size Command Tutorial for Beginners (6 Examples).md new file mode 100644 index 0000000000..3681dfa3c6 --- /dev/null +++ b/translated/tech/20180109 Linux size Command Tutorial for Beginners (6 Examples).md @@ -0,0 +1,137 @@ +六个例子带你入门 size 命令 +====== + +正如你所知道的那样,Linux 中的目标文件或者说可执行文件由多个段组成(比如 txt 和 data)。若你想知道每个段的大小,那么确实存在这么一个命令行工具 - 那就是 `size`。在本教程中,我们将会用几个简单易懂的案例来讲解该工具的基本用法。 + +在我们开始前,有必要先声明一下,本文的所有案例都在 Ubuntu 16.04LTS 中测试过了。 + +## Linux size 命令 + +size 命令基本上就是输出指定目标文件各段及其总和的大小。下面是该命令的语法: +``` +size [-A|-B|--format=compatibility] +            [--help] +            [-d|-o|-x|--radix=number] +            [--common] +            [-t|--totals] +            
[--target=bfdname] [-V|--version] +            [objfile...] +``` + +man 页是这样描述它的: +``` +GNU的size程序列出参数列表objfile中,各目标文件(object)或存档库文件(archive)的段节(section)大小 — 以及总大小.默认情况下,对每目标文件或存档库中的每个模块都会产生一行输出. + +objfile... 是待检查的目标文件(object). 如果没有指定, 则默认为文件 "a.out". +``` + +下面是一些问答方式的案例,希望能让你对 size 命令有所了解。 + +## Q1。如何使用 size 命令? + +size 的基本用法很简单。你只需要将目标文件/可执行文件名称作为输入就行了。下面是一个例子: + +``` +size apl +``` + +该命令在我的系统中的输出如下: + +[![How to use size command][1]][2] + +前三部分的内容是 text、data 和 bss 段及其相应的大小。然后是十进制格式和十六进制格式的总大小。最后是文件名。 + +## Q2。如何切换不同的输出格式? + +根据 man 页的说法,size 的默认输出格式类似于 Berkeley 的格式。然而,如果你想的话,你也可以使用 System V 规范。要做到这一点,你可以使用 `--format` 选项加上 `SysV` 值。 + +``` +size apl --format=SysV +``` + +下面是它的输出: + +[![How to switch between different output formats][3]][4] + +## Q3。如何切换使用其他的单位? + +默认情况下,段的大小是以十进制的方式来展示。然而,如果你想的话,也可以使用八进制或十六进制来表示。对应的命令行参数分别为 `-o` 和 `-x`。 + +[![How to switch between different size units][5]][6] + +关于这些参数,man 页是这么说的: +``` +-d +-o +-x +--radix=number + +使用这几个选项,你可以让各个段节的大小以十进制(`-d',或`--radix 10');八进制(`-o',或`--radix 8');或十六进制(`-x',或`--radix 16')数字的格式显示.`--radix number' 只支持三个数值参数 (8, 10, 16).总共大小以两种进制给出; `-d'或`-x'的十进制和十六进制输出,或`-o'的 八进制和 十六进制 输出. +``` + +## Q4。如何让 size 命令显示所有对象文件的总大小? + +如果你用 size 一次性查找多个文件的段大小,则通过使用 `-t` 选项还可以让它显示各列值的总和。 + +``` +size -t [file1] [file2] ... +``` + +下面是该命令执行的截屏: + +[![How to make size command show totals of all object files][7]][8] + +`-t` 选项让它多加了最后那一行。 + +## Q5。如何让 size 输出每个文件中公共符号的总大小? + +若你为 size 提供多个输入文件作为参数,而且想让它显示每个文件中公共符号(指 common segment 中的 symbol) 的大小,则你可以带上 `--common` 选项。 + +``` +size --common [file1] [file2] ... +``` + +另外需要指出的是,当使用 Berkeley 格式时,这些公共符号的大小会被纳入 bss 大小中。 + +## Q6。还有什么其他的选项?
+ +除了刚才提到的那些选项外,size 还有一些一般性的命令行选项,比如 `-v` (显示版本信息) 和 `-h` (显示可用参数和选项的摘要) + +[![What are the other available command line options][9]][10] + +除此之外,你也可以使用 `@file` 选项来让 size 从文件中读取命令行选项。下面是详细的相关说明: +``` +读出来的选项会插入并替代原来的@file选项。若文件不存在或者无法读取,则该选项不会被删除,而是会以字面意义来解释该选项。 + +文件中的选项以空格分隔。当选项中要包含空格时需要用单引号或双引号将整个选项包起来。 +通过在字符前面添加一个反斜杠可以将任何字符(包括反斜杠本身)纳入到选项中。 +文件本身也能包含其他的@file选项;任何这样的选项都会被递归处理。 +``` + +## 结论 + +很明显,size 命令并不适用于所有人。它的目标群体是那些需要处理 Linux 中目标文件/可执行文件结构的人。因此,如果你刚好是目标受众,那么多试试我们这里提到的那些选项,你应该做好每天都使用这个工具的准备。想了解关于 size 的更多信息,请阅读它的 [man 页 ][11]。 + + +-------------------------------------------------------------------------------- + +via: https://www.howtoforge.com/linux-size-command/ + +作者:[Himanshu Arora][a] +译者:[lujun9972](https://github.com/lujun9972) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.howtoforge.com +[1]:https://www.howtoforge.com/images/command-tutorial/size-basic-usage.png +[2]:https://www.howtoforge.com/images/command-tutorial/big/size-basic-usage.png +[3]:https://www.howtoforge.com/images/command-tutorial/size-format-option.png +[4]:https://www.howtoforge.com/images/command-tutorial/big/size-format-option.png +[5]:https://www.howtoforge.com/images/command-tutorial/size-o-x-options.png +[6]:https://www.howtoforge.com/images/command-tutorial/big/size-o-x-options.png +[7]:https://www.howtoforge.com/images/command-tutorial/size-t-option.png +[8]:https://www.howtoforge.com/images/command-tutorial/big/size-t-option.png +[9]:https://www.howtoforge.com/images/command-tutorial/size-v-x1.png +[10]:https://www.howtoforge.com/images/command-tutorial/big/size-v-x1.png +[11]:https://linux.die.net/man/1/size From 9f3cd21f7d683210edd50b8fd7cc0ffdd85e2c9e Mon Sep 17 00:00:00 2001 From: darksun Date: Thu, 18 Jan 2018 21:02:12 +0800 Subject: [PATCH 077/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Linux/Unix=20App?=
=?UTF-8?q?=For=20Prevention=20Of=20RSI=20(Repetitive=20Strain=20Injury)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ntion Of RSI (Repetitive Strain Injury).md | 140 ++++++++++++++++++ 1 file changed, 140 insertions(+) create mode 100644 sources/tech/20091104 Linux-Unix App For Prevention Of RSI (Repetitive Strain Injury).md diff --git a/sources/tech/20091104 Linux-Unix App For Prevention Of RSI (Repetitive Strain Injury).md b/sources/tech/20091104 Linux-Unix App For Prevention Of RSI (Repetitive Strain Injury).md new file mode 100644 index 0000000000..0adea8a54c --- /dev/null +++ b/sources/tech/20091104 Linux-Unix App For Prevention Of RSI (Repetitive Strain Injury).md @@ -0,0 +1,140 @@ +Linux/Unix App For Prevention Of RSI (Repetitive Strain Injury) +====== +![workrave-image][1] + +[A repetitive strain injury][2] (RSI) is occupational overuse syndrome, non-specific arm pain, or work-related upper limb disorder. RSI is caused by overusing the hands to perform a repetitive task, such as typing, writing, or clicking a mouse. Unfortunately, most people do not understand what RSI is or how dangerous it can be. You can easily prevent RSI using open source software called Workrave. + + +### What are the symptoms of RSI? + +I'm quoting from this [page][3]. Do you experience: + + 1. Fatigue or lack of endurance? + 2. Weakness in the hands or forearms? + 3. Tingling, numbness, or loss of sensation? + 4. Heaviness: Do your hands feel like dead weight? + 5. Clumsiness: Do you keep dropping things? + 6. Lack of strength in your hands? Is it harder to open jars? Cut vegetables? + 7. Lack of control or coordination? + 8. Chronically cold hands? + 9. Heightened awareness? Just being slightly more aware of a body part can be a clue that something is wrong. + 10. Hypersensitivity? + 11. Frequent self-massage (subconsciously)? + 12. Sympathy pains? Do your hands hurt when someone else talks about their hand pain?
+ + +### How to reduce your risk of developing RSI + + * Take breaks, when using your computer, every 30 minutes or so. Use software such as workrave to prevent RSI. + * Regular exercise can prevent all sorts of injuries, including RSI. + * Use good posture. Adjust your computer desk and chair to support muscles necessary for good posture. + + + +### Workrave + +Workrave is a free open source software application intended to prevent computer users from developing RSI or myopia. The software periodically locks the screen while an animated character, "Miss Workrave," walks the user through various stretching exercises and urges them to take a coffee break. The program frequently alerts you to take micro-pauses and rest breaks, and restricts you to your daily limit. The program works under MS-Windows, Linux, and UNIX-like operating systems. + +#### Install workrave + +Type the following [apt command][4]/[apt-get command][5] on Debian / Ubuntu Linux: +`$ sudo apt-get install workrave` +Fedora Linux users should type the following dnf command: +`$ sudo dnf install workrave` +RHEL/CentOS Linux users should enable the EPEL repo and install it using the [yum command][6]: +``` +### [ **tested on a CentOS/RHEL 7.x and clones** ] ### +$ sudo yum install epel-release +$ sudo yum install https://rpms.remirepo.net/enterprise/remi-release-7.rpm +$ sudo yum install workrave +``` +Arch Linux users can type the following pacman command to install it: +`$ sudo pacman -S workrave` +FreeBSD users can install it using the following pkg command: +`# pkg install workrave` +OpenBSD users can install it using the following pkg_add command: +``` +$ doas pkg_add workrave +``` + +#### How to configure workrave + +Workrave works as an applet, a small application whose user interface resides within a panel. You need to add workrave to a panel to control the behavior and appearance of the software.
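Under the hood, all of workrave's prompting comes down to timer bookkeeping: activity accumulates until a micro-pause or a rest break falls due. The shell function below is a toy model of that idea only — the interval values and the logic are invented for this sketch and are not workrave's actual implementation:

```shell
# Toy break scheduler: report which break is due after a given number
# of seconds of continuous activity. Intervals are illustrative only.
due_break() {
    s=$1
    if [ "$s" -gt 0 ] && [ $((s % 2700)) -eq 0 ]; then
        echo "rest break"      # e.g. every 45 minutes
    elif [ "$s" -gt 0 ] && [ $((s % 180)) -eq 0 ]; then
        echo "micro-pause"     # e.g. every 3 minutes
    else
        echo "keep working"
    fi
}

due_break 180    # -> micro-pause
due_break 2700   # -> rest break
```

A real tool additionally has to detect idle time and reset its counters — exactly the bookkeeping the workrave applet displays on the panel.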
+ +##### Adding a New Workrave Object To Panel + + * Right-click on a vacant space on a panel to open the panel popup menu. + * Choose Add to Panel. + * The Add to Panel dialog opens. The available panel objects are listed alphabetically, with launchers at the top. Select the workrave applet and click the Add button. + +![Fig.01: Adding an Object \(Workrave\) to a Panel][7] +Fig.01: Adding an Object (Workrave) to a Panel + +##### How Do I Modify Properties Of Workrave Software? + +To modify the properties of the workrave object, perform the following steps: + + * Right-click on the workrave object to open the panel object popup. + * Choose Preference. Use the Properties dialog to modify the properties as required. + +![](https://www.cyberciti.biz/media/new/tips/2009/11/linux-gnome-workwave-preferences-.png) +Fig.02: Modifying the Properties of The Workrave Software + +#### Workrave in Action + +The main window shows the time remaining until it suggests a pause. The window can be closed and you will see the time remaining on the panel itself: +![Fig.03: Time remaining counter ][8] +Fig.03: Time remaining counter + +![Fig.04: Miss Workrave - an animated character walks you through various stretching exercises][9] +Fig.04: Miss Workrave - an animated character walks you through various stretching exercises + +The break prelude window, bugging you to take a micro-pause: +![Fig.05: Time for a micro-pause reminder ][10] +Fig.05: Time for a micro-pause reminder + +![Fig.06: You can skip Micro-break ][11] +Fig.06: You can skip Micro-break + +##### References: + + 1. [Workrave project][12] home page. + 2. [pokoy][13] lightweight daemon that helps prevent RSI and other computer-related stress. + 3. [A Pomodoro][14] timer for GNOME 3. + 4. [RSI][2] from Wikipedia. + + + +### About the author + +The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting.
He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][15], [Facebook][16], [Google+][17]. + +-------------------------------------------------------------------------------- + +via: https://www.cyberciti.biz/tips/repetitive-strain-injury-prevention-software.html + +作者:[Vivek Gite][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.cyberciti.biz/ +[1]:https://www.cyberciti.biz/media/new/tips/2009/11/workrave-image.jpg (workrave-image) +[2]:https://en.wikipedia.org/wiki/Repetitive_strain_injury +[3]:https://web.eecs.umich.edu/~cscott/rsi.html##symptoms +[4]:https://www.cyberciti.biz/faq/ubuntu-lts-debian-linux-apt-command-examples/ (See Linux/Unix apt command examples for more info) +[5]:https://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html (See Linux/Unix apt-get command examples for more info) +[6]:https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/ (See Linux/Unix yum command examples for more info) +[7]:https://www.cyberciti.biz/media/new/tips/2009/11/add-workwave-to-panel.png (Adding an Object (Workrave) to a Gnome Panel) +[8]:https://www.cyberciti.biz/media/new/tips/2009/11/screenshot-workrave.png (Workrave main window shows the time remaining until it suggests a pause.) 
+[9]:https://www.cyberciti.biz/media/new/tips/2009/11/miss-workrave.png (Miss Workrave Sofrware character walks you through various RSI stretching exercises ) +[10]:https://www.cyberciti.biz/media/new/tips/2009/11/time-for-micro-pause.gif (Workrave RSI Software Time for a micro-pause remainder ) +[11]:https://www.cyberciti.biz/media/new/tips/2009/11/Micro-break.png (Workrave RSI Software Micro-break ) +[12]:http://www.workrave.org/ +[13]:https://github.com/ttygde/pokoy +[14]:http://gnomepomodoro.org +[15]:https://twitter.com/nixcraft +[16]:https://facebook.com/nixcraft +[17]:https://plus.google.com/+CybercitiBiz From a1fd7ca41ef8446fccd7c23f8b42a3974d305ba9 Mon Sep 17 00:00:00 2001 From: Locez Date: Fri, 19 Jan 2018 00:51:42 +0800 Subject: [PATCH 078/226] Reviewed by Locez --- ...1016 Using the Linux find command with caution.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/translated/tech/20171016 Using the Linux find command with caution.md b/translated/tech/20171016 Using the Linux find command with caution.md index 552d1738f7..e0b8b49763 100644 --- a/translated/tech/20171016 Using the Linux find command with caution.md +++ b/translated/tech/20171016 Using the Linux find command with caution.md @@ -1,7 +1,7 @@ 谨慎使用 Linux find 命令 ====== ![](https://images.idgesg.net/images/article/2017/10/caution-sign-100738884-large.jpg) -最近有朋友提醒我在运行 find 命令的时候可以添加一个有用的选项来增加一些谨慎。它是 -ok,除了一个重要的区别之外,它的工作方式与 -exec 相似,它使 find 命令在执行指定的操作之前请求权限。 +最近有朋友提醒我可以添加一个有用的选项来更加谨慎地运行 find 命令,它是 -ok。除了一个重要的区别之外,它的工作方式与 -exec 相似,它使 find 命令在执行指定的操作之前请求权限。 这有一个例子。如果你使用 find 命令查找文件并删除它们,则可以运行下面的命令: ``` @@ -9,7 +9,7 @@ $ find . -name runme -exec rm {} \; ``` -在当前目录及其子目录中中任何名为 “runme” 的文件都将被立即删除 - 当然,你要有权删除它们。改用 -ok 选项,你会看到类似这样的东西。find 命令将在删除文件之前会请求权限。回答 **y** 代表 “yes” 将允许 find 命令继续并逐个删除文件。 +在当前目录及其子目录中中任何名为 “runme” 的文件都将被立即删除 - 当然,你要有权删除它们。改用 -ok 选项,你会看到类似这样的东西,find 命令将在删除文件之前会请求权限。回答 **y** 代表 “yes” 将允许 find 命令继续并逐个删除文件。 ``` $ find . -name runme -ok rm {} \; < rm ... 
./bin/runme > ? @@ -18,7 +18,7 @@ $ find . -name runme -ok rm {} \; ### -execdir 命令也是一个选项 -另一个可以用来修改 find 命令行为并可能使其更可控的选项是 -execdir 命令。其中 -exec 运行指定的任何命令,-execdir 从文件所在的目录运行指定的命令,而不是运行 find 命令所在的目录。这是一个它的例子: +另一个可以用来修改 find 命令行为并可能使其更可控的选项是 -execdir 。其中 -exec 运行指定的任何命令,-execdir 从文件所在的目录运行指定的命令,而不是在运行 find 命令的目录运行。这是一个它的例子: ``` $ pwd /home/shs @@ -32,7 +32,7 @@ ls rm runme ``` -到现在为止还挺好。但要记住的是,-execdir 也会在匹配文件的目录中执行命令。如果运行下面的命令,并且目录包含一个名为 “ls” 的文件,那么即使该文件_没有_执行权限,它也将运行该文件。使用 **-exec** 或 **-execdir** 类似于通过 source 来运行命令。 +到现在为止还挺好。但要记住的是,-execdir 也会在匹配文件的目录中执行命令。如果运行下面的命令,并且目录包含一个名为 “ls” 的文件,那么即使该文件没有执行权限,它也将运行该文件。使用 **-exec** 或 **-execdir** 类似于通过 source 来运行命令。 ``` $ find . -name runme -execdir ls \; Running the /home/shs/bin/ls file @@ -61,7 +61,7 @@ echo This is an imposter rm command ### -okdir 选项也会请求权限 -要更谨慎,可以使用 **-okdir** 选项。类似 **-ok**,该选项将要求权限来运行该命令。 +要更谨慎,可以使用 **-okdir** 选项。类似 **-ok**,该选项将请求权限来运行该命令。 ``` $ find . -name runme -okdir rm {} \; < rm ... ./bin/runme > ? @@ -74,7 +74,7 @@ $ find . 
-name runme -execdir /bin/rm {} \; ``` -find 命令除了默认打印之外还有很多选项。有些可以使你的文件搜索更精确,但一点小心总是一个好主意。 +find 命令除了默认打印之外还有很多选项,有些可以使你的文件搜索更精确,但谨慎一点总是好的。 在 [Facebook][1] 和 [LinkedIn][2] 上加入网络世界社区来进行评论。 From 65e4969a9e89a3eac7b0244e9992902c225031d5 Mon Sep 17 00:00:00 2001 From: Locez Date: Fri, 19 Jan 2018 00:52:46 +0800 Subject: [PATCH 079/226] Reviewed by Locez --- .../tech/20171016 Using the Linux find command with caution.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/translated/tech/20171016 Using the Linux find command with caution.md b/translated/tech/20171016 Using the Linux find command with caution.md index e0b8b49763..a72ff48c11 100644 --- a/translated/tech/20171016 Using the Linux find command with caution.md +++ b/translated/tech/20171016 Using the Linux find command with caution.md @@ -84,7 +84,7 @@ via: https://www.networkworld.com/article/3233305/linux/using-the-linux-find-com 作者:[Sandra Henry-Stocker][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[Locez](https://github.com/locez) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From bd93ad3285ee1e4d61a83fa5e71661548c930f7e Mon Sep 17 00:00:00 2001 From: Locez Date: Fri, 19 Jan 2018 01:33:17 +0800 Subject: [PATCH 080/226] Reviewed by Locez --- ...up Japanese Language Environment In Arch Linux.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/translated/tech/20171108 How To Setup Japanese Language Environment In Arch Linux.md b/translated/tech/20171108 How To Setup Japanese Language Environment In Arch Linux.md index e924dcbf28..97bbfe6fb6 100644 --- a/translated/tech/20171108 How To Setup Japanese Language Environment In Arch Linux.md +++ b/translated/tech/20171108 How To Setup Japanese Language Environment In Arch Linux.md @@ -7,7 +7,7 @@ ### 在Arch Linux中设置日语环境 -首先,安装必要的日语字体,以正确查看日语 ASCII 格式: +首先,为了正确查看日语 ASCII 格式,先安装必要的日语字体: ``` sudo pacman -S adobe-source-han-sans-jp-fonts 
otf-ipafont ``` @@ -27,7 +27,7 @@ pacaur -S ttf-monapo sudo pacman -S ibus ibus-anthy ``` -在 **~/.xprofile** 中添加以下行(如果不存在,创建一个): +在 **~/.xprofile** 中添加以下几行(如果不存在,创建一个): ``` # Settings for Japanese input export GTK_IM_MODULE='ibus' @@ -38,7 +38,7 @@ export XMODIFIERS=@im='ibus' ibus-daemon -drx ``` -~/.xprofile 允许我们在窗口管理器启动之前在 X 用户会话开始时执行命令。 +~/.xprofile 允许我们在 X 用户会话开始时且在窗口管理器启动之前执行命令。 保存并关闭文件。重启 Arch Linux 系统以使更改生效。 @@ -72,9 +72,9 @@ ibus-setup [![][2]][8] -你还可以在键盘绑定中编辑默认的快捷键。完成所有更改后,单击应用并确定。就是这样。从任务栏中的 iBus 图标中选择日语,或者按下**Command/Window 键+空格键**来在日语和英语(或者系统中的其他默认语言)之间切换。你可以从 iBus 首选项窗口更改键盘快捷键。 +你还可以在键盘绑定中编辑默认的快捷键。完成所有更改后,点击应用并确定。就是这样。从任务栏中的 iBus 图标中选择日语,或者按下**SUPER 键+空格键**(LCTT译注:SUPER KEY 通常为 Command/Window KEY)来在日语和英语(或者系统中的其他默认语言)之间切换。你可以从 iBus 首选项窗口更改键盘快捷键。 -你现在知道如何在 Arch Linux 及其衍生版中使用日语了。如果你发现我们的指南很有用,那么请您在社交、专业网络上分享,并支持 OSTechNix。 +现在你知道如何在 Arch Linux 及其衍生版中使用日语了。如果你发现我们的指南很有用,那么请您在社交、专业网络上分享,并支持 OSTechNix。 @@ -84,7 +84,7 @@ via: https://www.ostechnix.com/setup-japanese-language-environment-arch-linux/ 作者:[][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[Locez](https://github.com/locez) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 6df56b99aa629f42e752ccf34a040242e64911e8 Mon Sep 17 00:00:00 2001 From: geekpi Date: Fri, 19 Jan 2018 09:00:01 +0800 Subject: [PATCH 081/226] translated --- ... with Vi-Vim Editor - Advanced concepts.md | 119 ------------------ ... 
with Vi-Vim Editor - Advanced concepts.md | 116 +++++++++++++++++ 2 files changed, 116 insertions(+), 119 deletions(-) delete mode 100644 sources/tech/20170524 Working with Vi-Vim Editor - Advanced concepts.md create mode 100644 translated/tech/20170524 Working with Vi-Vim Editor - Advanced concepts.md diff --git a/sources/tech/20170524 Working with Vi-Vim Editor - Advanced concepts.md b/sources/tech/20170524 Working with Vi-Vim Editor - Advanced concepts.md deleted file mode 100644 index a12c95e409..0000000000 --- a/sources/tech/20170524 Working with Vi-Vim Editor - Advanced concepts.md +++ /dev/null @@ -1,119 +0,0 @@ -translating---geekpi - - -Working with Vi/Vim Editor : Advanced concepts -====== -Earlier we have discussed some basics about VI/VIM editor but VI & VIM are both very powerful editors and there are many other functionalities that can be used with these editors. In this tutorial, we are going to learn some advanced uses of VI/VIM editor. - -( **Recommended Read** : [Working with VI editor : The Basics ][1]) - -## Opening multiple files with VI/VIM editor - -To open multiple files, command would be same as is for a single file; we just add the file name for second file as well. - -``` - $ vi file1 file2 file 3 -``` - -Now to browse to next file, we can use - -``` -$ :n -``` - -or we can also use - -``` -$ :e filename -``` - -## Run external commands inside the editor - -We can run external Linux/Unix commands from inside the vi editor, i.e. without exiting the editor. To issue a command from editor, go back to Command Mode if in Insert mode & we use the BANG i.e. '!' followed by the command that needs to be used. Syntax for running a command is, - -``` -$ :! command -``` - -An example for this would be - -``` -$ :! df -H -``` - -## Searching for a pattern - -To search for a word or pattern in the text file, we use following two commands in command mode, - - * command '/' searches the pattern in forward direction - - * command '?' 
searched the pattern in backward direction - - -Both of these commands are used for same purpose, only difference being the direction they search in. An example would be, - - `$ :/ search pattern` (If at beginning of the file) - - `$ :/ search pattern` (If at the end of the file) - -## Searching & replacing a pattern - -We might be required to search & replace a word or a pattern from our text files. So rather than finding the occurrence of word from whole text file & replace it, we can issue a command from the command mode to replace the word automatically. Syntax for using search & replacement is, - -``` -$ :s/pattern_to_be_found/New_pattern/g -``` - -Suppose we want to find word "alpha" & replace it with word "beta", the command would be - -``` -$ :s/alpha/beta/g -``` - -If we want to only replace the first occurrence of word "alpha", then the command would be - -``` -$ :s/alpha/beta/ -``` - -## Using Set commands - -We can also customize the behaviour, the and feel of the vi/vim editor by using the set command. Here is a list of some options that can be use set command to modify the behaviour of vi/vim editor, - - `$ :set ic ` ignores cases while searching - - `$ :set smartcase ` enforce case sensitive search - - `$ :set nu` display line number at the begining of the line - - `$ :set hlsearch ` highlights the matching words - - `$ : set ro ` change the file type to read only - - `$ : set term ` prints the terminal type - - `$ : set ai ` sets auto-indent - - `$ :set noai ` unsets the auto-indent - -Some other commands to modify vi editors are, - - `$ :colorscheme ` its used to change the color scheme for the editor. (for VIM editor only) - - `$ :syntax on ` will turn on the color syntax for .xml, .html files etc. (for VIM editor only) - -This complete our tutorial, do mention your queries/questions or suggestions in the comment box below. 
- - -------------------------------------------------------------------------------- - -via: http://linuxtechlab.com/working-vivim-editor-advanced-concepts/ - -作者:[Shusain][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://linuxtechlab.com/author/shsuain/ -[1]:http://linuxtechlab.com/working-vi-editor-basics/ diff --git a/translated/tech/20170524 Working with Vi-Vim Editor - Advanced concepts.md b/translated/tech/20170524 Working with Vi-Vim Editor - Advanced concepts.md new file mode 100644 index 0000000000..d31527b055 --- /dev/null +++ b/translated/tech/20170524 Working with Vi-Vim Editor - Advanced concepts.md @@ -0,0 +1,116 @@ +使用 Vi/Vim 编辑器:高级概念 +====== +早些时候我们已经讨论了一些关于 VI/VIM 编辑器的基础知识,但是 VI 和 VIM 都是非常强大的编辑器,还有很多其他的功能可以和编辑器一起使用。在本教程中,我们将学习 VI/VIM 编辑器的一些高级用法。 + +(**推荐阅读**:[使用 VI 编辑器:基础知识] [1]) + +## 使用 VI/VIM 编辑器打开多个文件 + +要打开多个文件,命令将与打开单个文件相同。我们只要添加第二个文件的名称。 + +``` + $ vi file1 file2 file 3 +``` + +要浏览到下一个文件,我们可以使用 + +``` +$ :n +``` + +或者我们也可以使用 + +``` +$ :e filename +``` + +## 在编辑器中运行外部命令 + +我们可以在 vi 编辑器内部运行外部的 Linux/Unix 命令,也就是说不需要退出编辑器。要在编辑器中运行命令,如果在插入模式下,先返回到命令模式,我们使用 BANG 也就是 “!” 接着是需要使用的命令。运行命令的语法是: + +``` +$ :! command +``` + +这是一个例子 + +``` +$ :! df -H +``` + +## 根据模板搜索 + +要在文本文件中搜索一个单词或模板,我们在命令模式下使用以下两个命令: + + * 命令 “/” 代表正向搜索模板 + + * 命令 “?” 代表反向搜索模板 + + +这两个命令都用于相同的目的,唯一不同的是它们搜索的方向。一个例子是: + + `$ :/ search pattern` (如果在文件的开头) + + `$ :? 
search pattern` (如果在文件末尾) + +## 搜索并替换一个模板 + +我们可能需要搜索和替换我们的文本中的单词或模板。我们不是从整个文本中找到单词的出现的地方并替换它,我们可以在命令模式中使用命令来自动替换单词。使用搜索和替换的语法是: + +``` +$ :s/pattern_to_be_found/New_pattern/g +``` + +假设我们想要将单词 “alpha” 用单词 “beta” 代替,命令就是这样: + +``` +$ :s/alpha/beta/g +``` + +如果我们只想替换第一个出现的 “alpha”,那么命令就是: + +``` +$ :s/alpha/beta/ +``` + +## 使用 set 命令 + +我们也可以使用 set 命令自定义 vi/vim 编辑器的行为和外观。下面是一些可以使用 set 命令修改 vi/vim 编辑器行为的选项列表: + + `$ :set ic ` 在搜索时忽略大小写 + + `$ :set smartcase ` 搜索强制区分大小写 + + `$ :set nu` 在每行开始显示行号 + + `$ :set hlsearch ` 高亮显示匹配的单词 + + `$ : set ro ` 将文件类型更改为只读 + + `$ : set term ` 打印终端类型 + + `$ : set ai ` 设置自动缩进 + + `$ :set noai ` 取消自动缩进 + +其他一些修改 vi 编辑器的命令是: + + `$ :colorscheme ` 用来改变编辑器的配色方案 。(仅适用于 VIM 编辑器) + + `$ :syntax on ` 为 .xml、.html 等文件打开颜色方案。(仅适用于VIM编辑器) + +这篇结束了本系列教程,请在下面的评论栏中提出你的疑问/问题或建议。 + + +-------------------------------------------------------------------------------- + +via: http://linuxtechlab.com/working-vivim-editor-advanced-concepts/ + +作者:[Shusain][a] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linuxtechlab.com/author/shsuain/ +[1]:http://linuxtechlab.com/working-vi-editor-basics/ From 11785596f67f9e5a60561bed02691441981e5f9c Mon Sep 17 00:00:00 2001 From: geekpi Date: Fri, 19 Jan 2018 09:16:23 +0800 Subject: [PATCH 082/226] translating --- ...71027 Easy guide to secure VNC server with TLS encryption.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20171027 Easy guide to secure VNC server with TLS encryption.md b/sources/tech/20171027 Easy guide to secure VNC server with TLS encryption.md index 189e57535f..7548991798 100644 --- a/sources/tech/20171027 Easy guide to secure VNC server with TLS encryption.md +++ b/sources/tech/20171027 Easy guide to secure VNC server with TLS encryption.md @@ -1,3 +1,5 @@ +translating---geekpi + Easy guide to secure VNC server with TLS encryption ====== In this 
tutorial, we will learn to install VNC server & secure VNC server sessions with TLS encryption. From a78ef7c5023ded21775f823a530b4f47c53169a4 Mon Sep 17 00:00:00 2001 From: darksun Date: Fri, 19 Jan 2018 11:22:37 +0800 Subject: [PATCH 083/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20debuggers?= =?UTF-8?q?=20really=20work?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../20180115 How debuggers really work.md | 99 +++++++++++++++++++ 1 file changed, 99 insertions(+) create mode 100644 sources/tech/20180115 How debuggers really work.md diff --git a/sources/tech/20180115 How debuggers really work.md b/sources/tech/20180115 How debuggers really work.md new file mode 100644 index 0000000000..452bc67823 --- /dev/null +++ b/sources/tech/20180115 How debuggers really work.md @@ -0,0 +1,99 @@ +How debuggers really work +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/annoyingbugs.png?itok=ywFZ99Gs) + +Image by : opensource.com + +A debugger is one of those pieces of software that most, if not every, developer uses at least once during their software engineering career, but how many of you know how they actually work? During my talk at [linux.conf.au 2018][1] in Sydney, I will be talking about writing a debugger from scratch... in [Rust][2]! + +In this article, the terms debugger and tracer are used interchangeably. "Tracee" refers to the process being traced by the tracer. + +### The ptrace system call + +Most debuggers heavily rely on a system call known as `ptrace(2)`, which has the prototype: +``` + + +long ptrace(enum __ptrace_request request, pid_t pid, void *addr, void *data); +``` + +This is a system call that can manipulate almost all aspects of a process; however, before the debugger can attach to a process, the "tracee" has to call `ptrace` with the request `PTRACE_TRACEME`. This tells Linux that it is legitimate for the parent to attach via `ptrace` to this process. But... 
how do we coerce a process into calling `ptrace`? Easy-peasy! `fork/execve` provides an easy way of calling `ptrace` after `fork` but before the tracee really starts using `execve`. Conveniently, `fork` will also return the `pid` of the tracee, which is required for using `ptrace` later. + +Now that the tracee can be traced by the debugger, important changes take place: + + * Every time a signal is delivered to the tracee, it stops and a wait-event is delivered to the tracer that can be captured by the `wait` family of system calls. + * Each `execve` system call will cause a `SIGTRAP` to be delivered to the tracee. (Combined with the previous item, this means the tracee is stopped before an `execve` can fully take place.) + + + +This means that, once we issue the `PTRACE_TRACEME` request and call the `execve` system call to actually start the program in the tracee, the tracee will immediately stop, since `execve` delivers a `SIGTRAP`, and that is caught by a wait-event in the tracer. How do we continue? As one would expect, `ptrace` has a number of requests that can be used for telling the tracee it's fine to continue: + + * `PTRACE_CONT`: This is the simplest. The tracee runs until it receives a signal, at which point a wait-event is delivered to the tracer. This is most commonly used to implement "continue-until-breakpoint" and "continue-forever" options of a real-world debugger. Breakpoints will be covered below. + * `PTRACE_SYSCALL`: Very similar to `PTRACE_CONT`, but stops before a system call is entered and also before a system call returns to userspace. It can be used in combination with other requests (which we will cover later in this article) to monitor and modify a system call's arguments or return value. `strace`, the system call tracer, uses this request heavily to figure out what system calls are made by a process. + * `PTRACE_SINGLESTEP`: This one is pretty self-explanatory. 
If you used a debugger before, this request executes the next instruction, but stops immediately after. + + + +We can stop the process with a variety of requests, but how do we get the state of the tracee? The state of a process is mostly captured by its registers, so of course `ptrace` has a request to get (or modify!) the registers: + + * `PTRACE_GETREGS`: This request will give the registers' state as it was when a tracee was stopped. + * `PTRACE_SETREGS`: If the tracer has the values of registers from a previous call to `PTRACE_GETREGS`, it can modify the values in that structure and set the registers to the new values via this request. + * `PTRACE_PEEKUSER` and `PTRACE_POKEUSER`: These allow reading from the tracee's `USER` area, which holds the registers and other useful information. This can be used to modify a single register, without the more heavyweight `PTRACE_{GET,SET}REGS`. + + + +Modifying the registers isn't always sufficient in a debugger. A debugger will sometimes need to read some parts of the memory or even modify it. The GNU Project Debugger (GDB) can use `print` to get the value of a memory location or a variable. `ptrace` has the functionality to implement this: + + * `PTRACE_PEEKTEXT` and `PTRACE_POKETEXT`: These allow reading and writing a word in the address space of the tracee. Of course, the tracee has to be stopped for this to work. + + + +Real-world debuggers also have features like breakpoints and watchpoints. In the next section, I'll dive into the architectural details of debugging support. For the purposes of clarity and conciseness, this article will consider x86 only. + +### Architectural support + +`ptrace` is all cool, but how does it work? In the previous section, we've seen that `ptrace` has quite a bit to do with signals: `SIGTRAP` can be delivered during single-stepping, before `execve` and before or after system calls. 
Signals can be generated a number of ways, but we will look at two specific examples that can be used by debuggers to stop a program (effectively creating a breakpoint!) at a given location: + + * **Undefined instructions:** When a process tries to execute an undefined instruction, an exception is raised by the CPU. This exception is handled via a CPU interrupt, and a handler corresponding to the interrupt in the kernel is called. This will result in a `SIGILL` being sent to the process. This, in turn, causes the process to stop, and the tracer is notified via a wait-event. It can then decide what to do. On x86, an instruction `ud2` is guaranteed to be always undefined. + + * **Debugging interrupt:** The problem with the previous approach is that the `ud2` instruction takes two bytes of machine code. A special instruction exists that takes one byte and raises an interrupt. It's `int $3` and the machine code is `0xCC`. When this interrupt is raised, the kernel sends a `SIGTRAP` to the process and, just as before, the tracer is notified. + + + + +This is fine, but how do we coerce the tracee to execute these instructions? Easy: `ptrace` has `PTRACE_POKETEXT`, which can override a word at a memory location. A debugger would read the original word at the location using `PTRACE_PEEKTEXT` and replace it with `0xCC`, remembering the original byte and the fact that it is a breakpoint in its internal state. The next time the tracee executes at the location, it is automatically stopped by the virtue of a `SIGTRAP`. The debugger's end user can then decide how to continue (for instance, inspect the registers). + +Okay, we've covered breakpoints, but what about watchpoints? How does a debugger stop a program when a certain memory location is read or written? Surely you wouldn't just overwrite every instruction with `int $3` that could read or write some memory location. 
Meet debug registers, a set of registers designed to fulfill this goal more efficiently: + + * `DR0` to `DR3`: Each of these registers contains an address (a memory location), where the debugger wants the tracee to stop for some reason. The reason is specified as a bitmask in `DR7`. + * `DR4` and `DR5`: These obsolete aliases to `DR6` and `DR7`, respectively. + * `DR6`: Debug status. Contains information about which `DR0` to `DR3` caused the debugging exception to be raised. This is used by Linux to figure out the information passed along with the `SIGTRAP` to the tracee. + * `DR7`: Debug control. Using the bits in these registers, the debugger can control how the addresses specified in `DR0` to `DR3` are interpreted. A bitmask controls the size of the watchpoint (whether 1, 2, 4, or 8 bytes are monitored) and whether to raise an exception on execution, reading, writing, or either of reading and writing. + + + +Because the debug registers form part of the `USER` area of a process, the debugger can use `PTRACE_POKEUSER` to write values into the debug registers. The debug registers are only relevant to a specific process and are thus restored to the value at preemption before the process regains control of the CPU. + +### Tip of the iceberg + +We've glanced at the iceberg a debugger is: we've covered `ptrace`, went over some of its functionality, then we had a look at how `ptrace` is implemented. Some parts of `ptrace` can be implemented in software, but other parts have to be implemented in hardware, otherwise they'd be very expensive or even impossible. + +There's plenty that we didn't cover, of course. Questions, like "how does a debugger know where a variable is in memory?" remain open due to space and time constraints, but I hope you've learned something from this article; if it piqued your interest, there are plenty of resources available online to learn more. 
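To make the `ptrace` requests above concrete, here is a minimal tracer in C. It is a sketch rather than a real debugger: it forks, has the child opt in with `PTRACE_TRACEME`, catches the `SIGTRAP` stop generated on `execve`, and then resumes the tracee with `PTRACE_CONT`. The `run_traced` helper name is mine, not part of any library, and error handling is kept to a minimum.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork, make the child a tracee via PTRACE_TRACEME, exec `path`,
 * observe the SIGTRAP delivered on execve, then let the tracee run
 * to completion. Returns the tracee's exit status, or -1 on error. */
int run_traced(const char *path)
{
    pid_t pid = fork();
    if (pid == -1)
        return -1;

    if (pid == 0) {
        /* Tracee: opt in to being traced, then get our brain eaten. */
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);
        execl(path, path, (char *)NULL);
        _exit(127); /* only reached if execl failed */
    }

    int status;
    /* First wait-event: the SIGTRAP generated by execve. */
    if (waitpid(pid, &status, 0) == -1 || !WIFSTOPPED(status))
        return -1;
    printf("tracee stopped after execve (signal %d)\n", WSTOPSIG(status));

    /* A real debugger would PEEKTEXT/GETREGS here; we just resume. */
    ptrace(PTRACE_CONT, pid, NULL, NULL);

    if (waitpid(pid, &status, 0) == -1 || !WIFEXITED(status))
        return -1;
    return WEXITSTATUS(status);
}
```

A breakpoint-capable debugger would, during that first stop, use `PTRACE_PEEKTEXT` and `PTRACE_POKETEXT` to plant `0xCC` bytes before continuing.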
+ +For more, attend Levente Kurusa's talk, [Let's Write a Debugger!][3], at [linux.conf.au][1], which will be held January 22-26 in Sydney. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/1/how-debuggers-really-work + +作者:[Levente Kurusa][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/lkurusa +[1]:https://linux.conf.au/index.html +[2]:https://www.rust-lang.org +[3]:https://rego.linux.conf.au/schedule/presentation/91/ From 75808cb1e84a08ebe2f14dad1b0886c184ec7870 Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 19 Jan 2018 11:25:42 +0800 Subject: [PATCH 084/226] PRF:20161004 What happens when you start a process on Linux.md @jessie-pang --- ...ppens when you start a process on Linux.md | 56 +++++++------------ 1 file changed, 19 insertions(+), 37 deletions(-) diff --git a/translated/tech/20161004 What happens when you start a process on Linux.md b/translated/tech/20161004 What happens when you start a process on Linux.md index 5c97fe7dc4..952405c3e1 100644 --- a/translated/tech/20161004 What happens when you start a process on Linux.md +++ b/translated/tech/20161004 What happens when you start a process on Linux.md @@ -1,52 +1,47 @@ 当你在 Linux 上启动一个进程时会发生什么? 
=========================================================== - 本文是关于 fork 和 exec 是如何在 Unix 上工作的。你或许已经知道,也有人还不知道。几年前当我了解到这些时,我惊叹不已。 我们要做的是启动一个进程。我们已经在博客上讨论了很多关于**系统调用**的问题,每当你启动一个进程或者打开一个文件,这都是一个系统调用。所以你可能会认为有这样的系统调用: ``` start_process(["ls", "-l", "my_cool_directory"]) - ``` 这是一个合理的想法,显然这是它在 DOS 或 Windows 中的工作原理。我想说的是,这并不是 Linux 上的工作原理。但是,我查阅了文档,确实有一个 [posix_spawn][2] 的系统调用基本上是这样做的,不过这不在本文的讨论范围内。 ### fork 和 exec -Linux 上的 `posix_spawn` 是通过两个系统调用实现的,分别是 `fork` 和 `exec`(实际上是 execve),这些都是人们常常使用的。尽管在 OS X 上,人们使用 `posix_spawn`,而 fork 和 exec 是不提倡的,但我们将讨论的是 Linux。 +Linux 上的 `posix_spawn` 是通过两个系统调用实现的,分别是 `fork` 和 `exec`(实际上是 `execve`),这些都是人们常常使用的。尽管在 OS X 上,人们使用 `posix_spawn`,而 `fork` 和 `exec` 是不提倡的,但我们将讨论的是 Linux。 -Linux 中的每个进程都存在于“进程树”中。你可以通过运行 `pstree` 命令查看进程树。树的根是 `init`,进程号是 1。每个进程(init 除外)都有一个父进程,一个进程都可以有很多子进程。 +Linux 中的每个进程都存在于“进程树”中。你可以通过运行 `pstree` 命令查看进程树。树的根是 `init`,进程号是 1。每个进程(`init` 除外)都有一个父进程,一个进程都可以有很多子进程。 所以,假设我要启动一个名为 `ls` 的进程来列出一个目录。我是不是只要发起一个进程 `ls` 就好了呢?不是的。 -我要做的是,创建一个子进程,这个子进程是我本身的一个克隆,然后这个子进程的“大脑”被替代,变成 `ls`。 +我要做的是,创建一个子进程,这个子进程是我(`me`)本身的一个克隆,然后这个子进程的“脑子”被吃掉了,变成 `ls`。 开始是这样的: ``` my parent |- me - ``` -然后运行 `fork()`,生成一个子进程,是我自己的一份克隆: +然后运行 `fork()`,生成一个子进程,是我(`me`)自己的一份克隆: ``` my parent |- me |-- clone of me - ``` -然后我让子进程运行 `exec("ls")`,变成这样: +然后我让该子进程运行 `exec("ls")`,变成这样: ``` my parent |- me |-- ls - ``` 当 ls 命令结束后,我几乎又变回了我自己: @@ -55,24 +50,22 @@ my parent my parent |- me |-- ls (zombie) - ``` -在这时 ls 其实是一个僵尸进程。这意味着它已经死了,但它还在等我,以防我需要检查它的返回值(使用 `wait` 系统调用)。一旦我获得了它的返回值,我将再次恢复独自一人的状态。 +在这时 `ls` 其实是一个僵尸进程。这意味着它已经死了,但它还在等我,以防我需要检查它的返回值(使用 `wait` 系统调用)。一旦我获得了它的返回值,我将再次恢复独自一人的状态。 ``` my parent |- me - ``` ### fork 和 exec 的代码实现 -如果你要编写一个 shell,这是你必须做的一个练习(这是一个非常有趣和有启发性的项目。Kamal 在 Github 上有一个很棒的研讨会:[https://github.com/kamalmarhubi/shell-workshop][3]) +如果你要编写一个 shell,这是你必须做的一个练习(这是一个非常有趣和有启发性的项目。Kamal 在 Github 上有一个很棒的研讨会:[https://github.com/kamalmarhubi/shell-workshop][3])。 -事实证明,有了 C 或 Python 的技能,你可以在几个小时内编写一个非常简单的 shell,例如 
bash。(至少如果你旁边能有个人多少懂一点,如果没有的话用时会久一点。)我已经完成啦,真的很棒。 +事实证明,有了 C 或 Python 的技能,你可以在几个小时内编写一个非常简单的 shell,像 bash 一样。(至少如果你旁边能有个人多少懂一点,如果没有的话用时会久一点。)我已经完成啦,真的很棒。 -这就是 fork 和 exec 在程序中的实现。我写了一段 C 的伪代码。请记住,[fork 也可能会失败哦。][4] +这就是 `fork` 和 `exec` 在程序中的实现。我写了一段 C 的伪代码。请记住,[fork 也可能会失败哦。][4] ``` int pid = fork(); @@ -80,7 +73,7 @@ int pid = fork(); // “我”是谁呢?可能是子进程也可能是父进程 if (pid == 0) { // 我现在是子进程 - // 我的大脑将被替代,然后变成一个完全不一样的进程“ls” + // “ls” 吃掉了我脑子,然后变成一个完全不一样的进程 exec(["ls"]) } else if (pid == -1) { // 天啊,fork 失败了,简直是灾难! @@ -89,59 +82,48 @@ if (pid == 0) { // 继续做一个酷酷的美男子吧 // 需要的话,我可以等待子进程结束 } - ``` -### 上文提到的“大脑被替代“是什么意思呢? +### 上文提到的“脑子被吃掉”是什么意思呢? 进程有很多属性: * 打开的文件(包括打开的网络连接) - * 环境变量 - * 信号处理程序(在程序上运行 Ctrl + C 时会发生什么?) - * 内存(你的“地址空间”) - * 寄存器 - -* 可执行文件(/proc/$pid/exe) - +* 可执行文件(`/proc/$pid/exe`) * cgroups 和命名空间(与 Linux 容器相关) - * 当前的工作目录 - * 运行程序的用户 - * 其他我还没想到的 -当你运行 `execve` 并让另一个程序替代你的时候,实际上几乎所有东西都是相同的! 你们有相同的环境变量、信号处理程序和打开的文件等等。 +当你运行 `execve` 并让另一个程序吃掉你的脑子的时候,实际上几乎所有东西都是相同的! 你们有相同的环境变量、信号处理程序和打开的文件等等。 唯一改变的是,内存、寄存器以及正在运行的程序,这可是件大事。 ### 为何 fork 并非那么耗费资源(写入时复制) -你可能会问:“如果我有一个使用了 2 GB 内存的进程,这是否意味着每次我启动一个子进程,所有 2 GB 的内存都要被复制一次?这听起来要耗费很多资源!“ +你可能会问:“如果我有一个使用了 2GB 内存的进程,这是否意味着每次我启动一个子进程,所有 2 GB 的内存都要被复制一次?这听起来要耗费很多资源!” -事实上,Linux 为 fork() 调用实现了写入时复制(copy on write),对于新进程的 2 GB 内存来说,就像是“看看旧的进程就好了,是一样的!”。然后,当如果任一进程试图写入内存,此时系统才真正地复制一个内存的副本给该进程。如果两个进程的内存是相同的,就不需要复制了。 +事实上,Linux 为 `fork()` 调用实现了写时复制copy on write,对于新进程的 2GB 内存来说,就像是“看看旧的进程就好了,是一样的!”。然后,当如果任一进程试图写入内存,此时系统才真正地复制一个内存的副本给该进程。如果两个进程的内存是相同的,就不需要复制了。 ### 为什么你需要知道这么多 -你可能会说,好吧,这些琐事听起来很厉害,但为什么这么重要?关于信号处理程序或环境变量的细节会被继承吗?这对我的日常编程有什么实际影响呢? +你可能会说,好吧,这些细节听起来很厉害,但为什么这么重要?关于信号处理程序或环境变量的细节会被继承吗?这对我的日常编程有什么实际影响呢? 
-有可能哦!比如说,在 Kamal 的博客上有一个很有意思的 [bug][5]。它讨论了 Python 如何使信号处理程序忽略了 SIGPIPE。也就是说,如果你从 Python 里运行一个程序,默认情况下它会忽略 SIGPIPE!这意味着,程序从 Python 脚本和从 shell 启动的表现会**有所不同**。在这种情况下,它会造成一个奇怪的问题。 +有可能哦!比如说,在 Kamal 的博客上有一个很有意思的 [bug][5]。它讨论了 Python 如何使信号处理程序忽略了 `SIGPIPE`。也就是说,如果你从 Python 里运行一个程序,默认情况下它会忽略 `SIGPIPE`!这意味着,程序从 Python 脚本和从 shell 启动的表现会**有所不同**。在这种情况下,它会造成一个奇怪的问题。 所以,你的程序的环境(环境变量、信号处理程序等)可能很重要,都是从父进程继承来的。知道这些,在调试时是很有用的。 - -------------------------------------------------------------------------------- via: https://jvns.ca/blog/2016/10/04/exec-will-eat-your-brain/ -作者:[ Julia Evans][a] +作者:[Julia Evans][a] 译者:[jessie-pang](https://github.com/jessie-pang) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From b28957e19a947e9062ad076d9051805f3ea7e427 Mon Sep 17 00:00:00 2001 From: darksun Date: Fri, 19 Jan 2018 11:31:47 +0800 Subject: [PATCH 085/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20Instal?= =?UTF-8?q?l=20and=20Optimize=20Apache=20on=20Ubuntu=20=E2=80=93=20ThisHos?= =?UTF-8?q?ting.Rocks?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ze Apache on Ubuntu - ThisHosting.Rocks.md | 267 ++++++++++++++++++ 1 file changed, 267 insertions(+) create mode 100644 sources/tech/20180116 How to Install and Optimize Apache on Ubuntu - ThisHosting.Rocks.md diff --git a/sources/tech/20180116 How to Install and Optimize Apache on Ubuntu - ThisHosting.Rocks.md b/sources/tech/20180116 How to Install and Optimize Apache on Ubuntu - ThisHosting.Rocks.md new file mode 100644 index 0000000000..eba7ce9c54 --- /dev/null +++ b/sources/tech/20180116 How to Install and Optimize Apache on Ubuntu - ThisHosting.Rocks.md @@ -0,0 +1,267 @@ +How to Install and Optimize Apache on Ubuntu +====== + +This is the beginning of our LAMP tutorial series: how to install the Apache web server on Ubuntu. 
+ +These instructions should work on any Ubuntu-based distro, including Ubuntu 14.04, Ubuntu 16.04, [Ubuntu 18.04][1], and even non-LTS Ubuntu releases like 17.10. They were tested and written for Ubuntu 16.04. + +Apache (aka httpd) is the most popular and most widely used web server, so this should be useful for everyone. + +### Before we begin installing Apache + +Some requirements and notes before we begin: + + * Apache may already be installed on your server, so check if it is first. You can do so with the "apachectl -V" command, which outputs the Apache version you're using and some other information. + * You'll need an Ubuntu server. You can buy one from [Vultr][2]; they're one of the [best and cheapest cloud hosting providers][3]. Their servers start from $2.5 per month. + * You'll need the root user or a user with sudo access. All commands below are executed by the root user, so we didn't have to append 'sudo' to each command. + * You'll need [SSH enabled][4] if you use Ubuntu, or an SSH client like [MobaXterm][5] if you use Windows. + + + +That's most of it. Let's move on to the installation. + + + + + +### Install Apache on Ubuntu + +The first thing you always need to do is update Ubuntu before you do anything else. You can do so by running: +``` +apt-get update && apt-get upgrade +``` + +Next, to install Apache, run the following command: +``` +apt-get install apache2 +``` + +If you want to, you can also install the Apache documentation and some Apache utilities. You'll need the Apache utilities for some of the modules we'll install later. +``` +apt-get install apache2-doc apache2-utils +``` + +**And that's it. You've successfully installed Apache.** + +You'll still need to configure it. + +### Configure and Optimize Apache on Ubuntu + +There are various configs you can do on Apache, but the main and most common ones are explained below. 
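Once installed, it can help to verify the setup in one pass. The short script below is a sketch that assumes a systemd-based Ubuntu with `curl` available; the `probe` helper is just a local convenience function, not an Apache tool. Each check prints PASS or FAIL instead of aborting, so you get a full report in one run:

```shell
#!/bin/sh
# Post-install sanity checks for Apache.
# Each probe prints PASS or FAIL rather than stopping at the first failure.

probe() {
    desc=$1; shift
    if "$@" >/dev/null 2>&1; then
        printf 'PASS: %s\n' "$desc"
    else
        printf 'FAIL: %s\n' "$desc"
    fi
}

probe "apache2 service is active"   systemctl is-active --quiet apache2
probe "apachectl reports a version" apachectl -V
probe "port 80 answers locally"     curl -fsS http://localhost/
```

Run it as root (or via sudo); all three probes should report PASS on a healthy server.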
+ +#### Check if Apache is running + +By default, Apache is configured to start automatically on boot, so you don't have to enable it. You can check if it's running and other relevant information with the following command: +``` +systemctl status apache2 +``` + +[![check if apache is running][6]][6] + +And you can check what version you're using with +``` +apachectl -V +``` + +A simpler way of checking this is by visiting your server's IP address. If you get the default Apache page, then everything's working fine. + +#### Update your firewall + +If you use a firewall (which you should), you'll probably need to update your firewall rules and allow access to the default ports. The most common firewall used on Ubuntu is UFW, so the instructions below are for UFW. + +To allow traffic through both the 80 (http) and 443 (https) ports, run the following command: +``` +ufw allow 'Apache Full' +``` + +#### Install common Apache modules + +Some modules are frequently recommended and you should install them. We'll include instructions for the most common ones: + +##### Speed up your website with the PageSpeed module + +The PageSpeed module will optimize and speed up your Apache server automatically. + +First, go to the [PageSpeed download page][7] and choose the file you need. We're using a 64-bit Ubuntu server and we'll install the latest stable version. Download it using wget: +``` +wget https://dl-ssl.google.com/dl/linux/direct/mod-pagespeed-stable_current_amd64.deb +``` + +Then, install it with the following commands: +``` +dpkg -i mod-pagespeed-stable_current_amd64.deb +apt-get -f install +``` + +Restart Apache for the changes to take effect: +``` +systemctl restart apache2 +``` + +##### Enable rewrites/redirects using the mod_rewrite module + +This module is used for rewrites (redirects), as the name suggests. You'll need it if you use WordPress or any other CMS for that matter. To install it, just run: +``` +a2enmod rewrite +``` + +And restart Apache again. 
You may need some extra configurations depending on what CMS you're using, if any. Google it for specific instructions for your setup. + +##### Secure your Apache with the ModSecurity module + +ModSecurity is a module used for security, again, as the name suggests. It basically acts as a firewall, and it monitors your traffic. To install it, run the following command: +``` +apt-get install libapache2-modsecurity +``` + +And restart Apache again: +``` +systemctl restart apache2 +``` + +ModSecurity comes with a default setup that's enough by itself, but if you want to extend it, you can use the [OWASP rule set][8]. + +##### Block DDoS attacks using the mod_evasive module + +You can use the mod_evasive module to block and prevent DDoS attacks on your server, though it's debatable how useful it is in preventing attacks. To install it, use the following command: +``` +apt-get install libapache2-mod-evasive +``` + +By default, mod_evasive is disabled, to enable it, edit the following file: +``` +nano /etc/apache2/mods-enabled/evasive.conf +``` + +And uncomment all the lines (remove #) and configure it per your requirements. You can leave everything as-is if you don't know what to edit. + +[![mod_evasive][9]][9] + +And create a log file: +``` +mkdir /var/log/mod_evasive +chown -R www-data:www-data /var/log/mod_evasive +``` + +That's it. Now restart Apache for the changes to take effect: +``` +systemctl restart apache2 +``` + +There are [additional modules][10] you can install and configure, but it's all up to you and the software you're using. They're usually not required. Even the 4 modules we included are not required. If a module is required for a specific application, then they'll probably note that. + +#### Optimize Apache with the Apache2Buddy script + +Apache2Buddy is a script that will automatically fine-tune your Apache configuration. 
The only thing you need to do is run the following command and the script does the rest automatically:
+```
+curl -sL https://raw.githubusercontent.com/richardforth/apache2buddy/master/apache2buddy.pl | perl
+```
+
+You may need to install curl if you don't have it already installed. Use the following command to install curl:
+```
+apt-get install curl
+```
+
+#### Additional configurations
+
+There's some extra stuff you can do with Apache, but we'll leave it for other tutorials: things like enabling HTTP/2 support, turning KeepAlive off (or on), and tuning Apache even further. You don't have to do any of this, but you can find tutorials online and do it if you can't wait for our tutorials.
+
+### Create your first website with Apache
+
+Now that we're done with all the tuning, let's move on to creating an actual website. Follow our instructions to create a simple HTML page and a virtual host that's going to run on Apache.
+
+The first thing you need to do is create a new directory for your website. Run the following command to do so:
+```
+mkdir -p /var/www/example.com/public_html
+```
+
+Of course, replace example.com with your desired domain. You can get a cheap domain name from [Namecheap][11].
+
+Don't forget to replace example.com in all of the commands below.
+
+Next, create a simple, static web page. Create the HTML file:
+```
+nano /var/www/example.com/public_html/index.html
+```
+
+And paste this:
+```
+<html>
+     <head>
+       <title>Simple Page</title>
+     </head>
+     <body>
+       <p>If you're seeing this in your browser then everything works.</p>
+     </body>
+</html>
+```
+
+Save and close the file.
+
+Configure the permissions of the directory:
+```
+chown -R www-data:www-data /var/www/example.com
+chmod -R og-r /var/www/example.com
+```
+
+Create a new virtual host for your site:
+```
+nano /etc/apache2/sites-available/example.com.conf
+```
+
+And paste the following:
+```
+<VirtualHost *:80>
+     ServerAdmin admin@example.com
+     ServerName example.com
+     ServerAlias www.example.com
+
+     DocumentRoot /var/www/example.com/public_html
+
+     ErrorLog ${APACHE_LOG_DIR}/error.log
+     CustomLog ${APACHE_LOG_DIR}/access.log combined
+</VirtualHost>
+```
+
+This is a basic virtual host. You may need a more advanced .conf file depending on your setup.
+
+Save and close the file after updating everything accordingly.
+
+Now, enable the virtual host with the following command:
+```
+a2ensite example.com.conf
+```
+
+And finally, restart Apache for the changes to take effect:
+```
+systemctl restart apache2
+```
+
+That's it. You're done. Now you can visit example.com and view your page.
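+Later on, when you add a TLS certificate (for example from Let's Encrypt), the same site gets a second virtual host on port 443. A rough sketch; the certificate paths here are placeholders for wherever your real certificate and key live:
+```
+<VirtualHost *:443>
+     ServerAdmin admin@example.com
+     ServerName example.com
+     ServerAlias www.example.com
+
+     DocumentRoot /var/www/example.com/public_html
+
+     SSLEngine on
+     # Placeholder paths; point these at your actual certificate and key
+     SSLCertificateFile /etc/ssl/certs/example.com.crt
+     SSLCertificateKeyFile /etc/ssl/private/example.com.key
+
+     ErrorLog ${APACHE_LOG_DIR}/error.log
+     CustomLog ${APACHE_LOG_DIR}/access.log combined
+</VirtualHost>
+```
+
+You'd also need `a2enmod ssl` and another `systemctl restart apache2` before Apache will serve it.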
+ + + +-------------------------------------------------------------------------------- + +via: https://thishosting.rocks/how-to-install-optimize-apache-ubuntu/ + +作者:[ThisHosting][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://thishosting.rocks +[1]:https://thishosting.rocks/ubuntu-18-04-new-features-release-date/ +[2]:https://thishosting.rocks/go/vultr/ +[3]:https://thishosting.rocks/cheap-cloud-hosting-providers-comparison/ +[4]:https://thishosting.rocks/how-to-enable-ssh-on-ubuntu/ +[5]:https://mobaxterm.mobatek.net/ +[6]:https://thishosting.rocks/wp-content/uploads/2018/01/apache-running.jpg +[7]:https://www.modpagespeed.com/doc/download +[8]:https://www.owasp.org/index.php/Category:OWASP_ModSecurity_Core_Rule_Set_Project +[9]:https://thishosting.rocks/wp-content/uploads/2018/01/mod_evasive.jpg +[10]:https://httpd.apache.org/docs/2.4/mod/ +[11]:https://thishosting.rocks/neamcheap-review-cheap-domains-cool-names +[12]:https://thishosting.rocks/wp-content/plugins/patron-button-and-widgets-by-codebard/images/become_a_patron_button.png +[13]:https://www.patreon.com/thishostingrocks From b31876664610f1764d7bdf4108632acc7ae217f0 Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 19 Jan 2018 11:33:22 +0800 Subject: [PATCH 086/226] PUB:20161004 What happens when you start a process on Linux.md @jessie-pang https://linux.cn/article-9256-1.html --- .../20161004 What happens when you start a process on Linux.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20161004 What happens when you start a process on Linux.md (100%) diff --git a/translated/tech/20161004 What happens when you start a process on Linux.md b/published/20161004 What happens when you start a process on Linux.md similarity index 100% rename from translated/tech/20161004 What happens when you start a process on Linux.md rename to 
published/20161004 What happens when you start a process on Linux.md

From d858f7b1634002f429148c99e27a34c3a28ccea2 Mon Sep 17 00:00:00 2001
From: darksun
Date: Fri, 19 Jan 2018 11:42:16 +0800
Subject: [PATCH 087/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20A=20Versatile=20F?=
 =?UTF-8?q?ree=20Software=20for=20Partition=20Imaging=20and=20Cloning?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
 ...tware for Partition Imaging and Cloning.md | 97 +++++++++++++++++++
 1 file changed, 97 insertions(+)
 create mode 100644 sources/tech/20180116 A Versatile Free Software for Partition Imaging and Cloning.md

diff --git a/sources/tech/20180116 A Versatile Free Software for Partition Imaging and Cloning.md b/sources/tech/20180116 A Versatile Free Software for Partition Imaging and Cloning.md
new file mode 100644
index 0000000000..d5cf47b45e
--- /dev/null
+++ b/sources/tech/20180116 A Versatile Free Software for Partition Imaging and Cloning.md
@@ -0,0 +1,97 @@
+Partclone – A Versatile Free Software for Partition Imaging and Cloning
+======
+
+![](https://www.fossmint.com/wp-content/uploads/2018/01/Partclone-Backup-Tool-For-Linux.png)
+
+**[Partclone][1]** is a free and open-source tool for creating and cloning partition images, brought to you by the developers of **Clonezilla**. In fact, **Partclone** is one of the tools that **Clonezilla** is based on.
+
+It provides users with the tools required to back up and restore used partition blocks, along with high compatibility with several file systems thanks to its ability to use existing libraries like **e2fslibs** to read and write partitions, e.g. **ext2**.
+
+Its greatest strength is the variety of formats it supports, including ext2, ext3, ext4, hfs+, reiserfs, reiser4, btrfs, vmfs3, vmfs5, xfs, jfs, ufs, ntfs, fat(12/16/32), exfat, f2fs, and nilfs.
+ +It also has a plethora of available programs including **partclone.ext2** (ext3 & ext4), partclone.ntfs, partclone.exfat, partclone.hfsp, and partclone.vmfs (v3 and v5), among others. + +### Features in Partclone + + * **Freeware:** **Partclone** is free for everyone to download and use. + * **Open Source:** **Partclone** is released under the GNU GPL license and is open to contribution on [GitHub][2]. + * **Cross-Platform** : Available on Linux, Windows, MAC, ESX file system backup/restore, and FreeBSD. + * An online [Documentation page][3] from where you can view help docs and track its GitHub issues. + * An online [user manual][4] for beginners and pros alike. + * Rescue support. + * Clone partitions to image files. + * Restore image files to partitions. + * Duplicate partitions quickly. + * Support for raw clone. + * Displays transfer rate and elapsed time. + * Supports piping. + * Support for crc32. + * Supports vmfs for ESX vmware server and ufs for FreeBSD file system. + + + +There are a lot more features bundled in **Partclone** and you can see the rest of them [here][5]. + +[__Download Partclone for Linux][6] + +### How to Install and Use Partclone + +To install Partclone on Linux. +``` +$ sudo apt install partclone [On Debian/Ubuntu] +$ sudo yum install partclone [On CentOS/RHEL/Fedora] + +``` + +Clone partition to image. +``` +# partclone.ext4 -d -c -s /dev/sda1 -o sda1.img + +``` + +Restore image to partition. +``` +# partclone.ext4 -d -r -s sda1.img -o /dev/sda1 + +``` + +Partition to partition clone. +``` +# partclone.ext4 -d -b -s /dev/sda1 -o /dev/sdb1 + +``` + +Display image information. +``` +# partclone.info -s sda1.img + +``` + +Check image. +``` +# partclone.chkimg -s sda1.img + +``` + +Are you a **Partclone** user? I wrote on [**Deepin Clone**][7] just recently and apparently, there are certain tasks Partclone is better at handling. What has been your experience with other backup and restore utility tools? 
+ +Do share your thoughts and suggestions with us in the comments section below. + +-------------------------------------------------------------------------------- + +via: https://www.fossmint.com/partclone-linux-backup-clone-tool/ + +作者:[Martins D. Okoi;View All Posts;Peter Beck;Martins Divine Okoi][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: +[1]:https://partclone.org/ +[2]:https://github.com/Thomas-Tsai/partclone +[3]:https://partclone.org/help/ +[4]:https://partclone.org/usage/ +[5]:https://partclone.org/features/ +[6]:https://partclone.org/download/ +[7]:https://www.fossmint.com/deepin-clone-system-backup-restore-for-deepin-users/ From efa3610c0bd56eac4f3a9623fe9b5d7b7a10bc84 Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 19 Jan 2018 11:46:56 +0800 Subject: [PATCH 088/226] PRF&PUB:20170927 How To Easily Find Awesome Projects And Resources Hosted In GitHub.md @lujun9972 https://linux.cn/article-9257-1.html --- ...Projects And Resources Hosted In GitHub.md | 83 ++++++++++--------- 1 file changed, 44 insertions(+), 39 deletions(-) rename {translated/tech => published}/20170927 How To Easily Find Awesome Projects And Resources Hosted In GitHub.md (58%) diff --git a/translated/tech/20170927 How To Easily Find Awesome Projects And Resources Hosted In GitHub.md b/published/20170927 How To Easily Find Awesome Projects And Resources Hosted In GitHub.md similarity index 58% rename from translated/tech/20170927 How To Easily Find Awesome Projects And Resources Hosted In GitHub.md rename to published/20170927 How To Easily Find Awesome Projects And Resources Hosted In GitHub.md index 80566a8ae0..c8129ee61e 100644 --- a/translated/tech/20170927 How To Easily Find Awesome Projects And Resources Hosted In GitHub.md +++ b/published/20170927 How To Easily Find Awesome Projects And Resources Hosted In GitHub.md @@ -1,8 +1,9 @@ -如何方便地寻找 GitHub 
上超棒的项目和资源 +如何轻松地寻找 GitHub 上超棒的项目和资源 ====== -![](https://www.ostechnix.com/wp-content/uploads/2017/09/Awesome-finder-Find-Awesome-Projects-720x340.png) -在 **GitHub** 网站上每天都会新增上百个项目。由于 GitHub 上有成千上万的项目,要在上面搜索好的项目简直要累死人。好在,有那么一伙人已经创建了一些这样的列表。其中包含的类别五花八门,如编程,数据库,编辑器,游戏,娱乐等。这使得我们寻找在 GitHub 上托管的项目,软件,资源,裤,书籍等其他东西变得容易了很多。有一个 GitHub 用户更进了一步,创建了一个名叫 `Awesome-finder` 的命令行工具,用来在 awesome 系列的仓库中寻找超棒的项目和资源。该工具帮助我们不需要离开终端(当然也就不需要使用浏览器了)的情况下浏览 awesome 列表。 +![](https://www.ostechnix.com/wp-content/uploads/2017/09/Awesome-finder-Find-Awesome-Projects-720x340.png) + +在 GitHub 网站上每天都会新增上百个项目。由于 GitHub 上有成千上万的项目,要在上面搜索好的项目简直要累死人。好在,有那么一伙人已经创建了一些这样的列表。其中包含的类别五花八门,如编程、数据库、编辑器、游戏、娱乐等。这使得我们寻找在 GitHub 上托管的项目、软件、资源、库、书籍等其他东西变得容易了很多。有一个 GitHub 用户更进了一步,创建了一个名叫 `Awesome-finder` 的命令行工具,用来在 awesome 系列的仓库中寻找超棒的项目和资源。该工具可以让我们不需要离开终端(当然也就不需要使用浏览器了)的情况下浏览 awesome 列表。 在这篇简单的说明中,我会向你演示如何方便地在类 Unix 系统中浏览 awesome 列表。 @@ -12,12 +13,14 @@ 使用 `pip` 可以很方便地安装该工具,`pip` 是一个用来安装使用 Python 编程语言开发的程序的包管理器。 -在 **Arch Linux** 一起衍生发行版中(比如 **Antergos**,**Manjaro Linux**),你可以使用下面命令安装 `pip`: +在 Arch Linux 及其衍生发行版中(比如 Antergos,Manjaro Linux),你可以使用下面命令安装 `pip`: + ``` sudo pacman -S python-pip ``` -在 **RHEL**,**CentOS** 中: +在 RHEL,CentOS 中: + ``` sudo yum install epel-release ``` @@ -25,32 +28,33 @@ sudo yum install epel-release sudo yum install python-pip ``` -在 **Fedora** 上: +在 Fedora 上: + ``` sudo dnf install epel-release -``` -``` sudo dnf install python-pip ``` -在 **Debian**,**Ubuntu**,**Linux Mint** 上: +在 Debian,Ubuntu,Linux Mint 上: + ``` sudo apt-get install python-pip ``` -在 **SUSE**,**openSUSE** 上: +在 SUSE,openSUSE 上: ``` sudo zypper install python-pip ``` -PIP 安装好后,用下面命令来安装 'Awesome-finder'。 +`pip` 安装好后,用下面命令来安装 'Awesome-finder'。 + ``` sudo pip install awesome-finder ``` #### 用法 -Awesome-finder 会列出 GitHub 网站中如下这些主题(其实就是仓库)的内容: +Awesome-finder 会列出 GitHub 网站中如下这些主题(其实就是仓库)的内容: * awesome * awesome-android @@ -66,83 +70,84 @@ Awesome-finder 会列出 GitHub 网站中如下这些主题(其实就是仓库) * awesome-scala * awesome-swift - 该列表会定期更新。 
比如,要查看 `awesome-go` 仓库中的列表,只需要输入: + ``` awesome go ``` 你就能看到用 “Go” 写的所有流行的东西了,而且这些东西按字母顺序进行了排列。 -[![][1]][2] +![][2] -你可以通过 **上/下** 箭头在列表中导航。一旦找到所需要的东西,只需要选中它,然后按下 **回车** 键就会用你默认的 web 浏览器打开相应的链接了。 +你可以通过 上/下 箭头在列表中导航。一旦找到所需要的东西,只需要选中它,然后按下回车键就会用你默认的 web 浏览器打开相应的链接了。 类似的, - * "awesome android" 命令会搜索 **awesome-android** 仓库。 - * "awesome awesome" 命令会搜索 **awesome** 仓库。 - * "awesome elixir" 命令会搜索 **awesome-elixir**。 - * "awesome go" 命令会搜索 **awesome-go**。 - * "awesome ios" 命令会搜索 **awesome-ios**。 - * "awesome java" 命令会搜索 **awesome-java**。 - * "awesome javascript" 命令会搜索 **awesome-javascript**。 - * "awesome php" 命令会搜索 **awesome-php**。 - * "awesome python" 命令会搜索 **awesome-python**。 - * "awesome ruby" 命令会搜索 **awesome-ruby**。 - * "awesome rust" 命令会搜索 **awesome-rust**。 - * "awesome scala" 命令会搜索 **awesome-scala**。 - * "awesome swift" 命令会搜索 **awesome-swift**。 + * `awesome android` 命令会搜索 awesome-android 仓库。 + * `awesome awesome` 命令会搜索 awesome 仓库。 + * `awesome elixir` 命令会搜索 awesome-elixir。 + * `awesome go` 命令会搜索 awesome-go。 + * `awesome ios` 命令会搜索 awesome-ios。 + * `awesome java` 命令会搜索 awesome-java。 + * `awesome javascript` 命令会搜索 awesome-javascript。 + * `awesome php` 命令会搜索 awesome-php。 + * `awesome python` 命令会搜索 awesome-python。 + * `awesome ruby` 命令会搜索 awesome-ruby。 + * `awesome rust` 命令会搜索 awesome-rust。 + * `awesome scala` 命令会搜索 awesome-scala。 + * `awesome swift` 命令会搜索 awesome-swift。 -而且,它还会随着你在提示符中输入的内容而自动进行筛选。比如,当我输入 "dj" 后,他会显示与 Django 相关的内容。 +而且,它还会随着你在提示符中输入的内容而自动进行筛选。比如,当我输入 `dj` 后,他会显示与 Django 相关的内容。 -[![][1]][3] +![][3] 若你想从最新的 `awesome-`( 而不是用缓存中的数据) 中搜索,使用 `-f` 或 `-force` 标志: + ``` awesome -f (--force) - ``` -**像这样:** +像这样: + ``` awesome python -f ``` 或, + ``` awesome python --force ``` -上面命令会显示 **awesome-python** GitHub 仓库中的列表。 +上面命令会显示 awesome-python GitHub 仓库中的列表。 很棒,对吧? -要退出这个工具的话,按下 **ESC** 键。要显示帮助信息,输入: +要退出这个工具的话,按下 ESC 键。要显示帮助信息,输入: + ``` awesome -h ``` 本文至此就结束了。希望本文能对你产生帮助。如果你觉得我们的文章对你有帮助,请将他们分享到你的社交网络中去,造福大众。我们马上还有其他好东西要来了。敬请期待! 
- - -------------------------------------------------------------------------------- via: https://www.ostechnix.com/easily-find-awesome-projects-resources-hosted-github/ 作者:[SK][a] 译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:https://www.ostechnix.com/author/sk/ [1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[2]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_008-1.png () -[3]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_009.png () +[2]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_008-1.png +[3]:http://www.ostechnix.com/wp-content/uploads/2017/09/sk@sk_009.png [4]:https://www.ostechnix.com/easily-find-awesome-projects-resources-hosted-github/?share=reddit (Click to share on Reddit) [5]:https://www.ostechnix.com/easily-find-awesome-projects-resources-hosted-github/?share=twitter (Click to share on Twitter) [6]:https://www.ostechnix.com/easily-find-awesome-projects-resources-hosted-github/?share=facebook (Click to share on Facebook) From b7b1e736e7f8e40af1f1942ff586f71bb6cc8119 Mon Sep 17 00:00:00 2001 From: darksun Date: Fri, 19 Jan 2018 11:57:31 +0800 Subject: [PATCH 089/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Monitor=20your=20?= =?UTF-8?q?Kubernetes=20Cluster?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...0180116 Monitor your Kubernetes Cluster.md | 264 ++++++++++++++++++ 1 file changed, 264 insertions(+) create mode 100644 sources/tech/20180116 Monitor your Kubernetes Cluster.md diff --git a/sources/tech/20180116 Monitor your Kubernetes Cluster.md b/sources/tech/20180116 Monitor your Kubernetes Cluster.md new file mode 100644 index 0000000000..f0ac585f6f --- /dev/null +++ b/sources/tech/20180116 Monitor your Kubernetes Cluster.md @@ -0,0 +1,264 @@ +Monitor your Kubernetes 
Cluster +====== +This article originally appeared on [Kevin Monroe's blog][1] + +Keeping an eye on logs and metrics is a necessary evil for cluster admins. The benefits are clear: metrics help you set reasonable performance goals, while log analysis can uncover issues that impact your workloads. The hard part, however, is getting a slew of applications to work together in a useful monitoring solution. + +In this post, I'll cover monitoring a Kubernetes cluster with [Graylog][2] (for logging) and [Prometheus][3] (for metrics). Of course that's not just wiring 3 things together. In fact, it'll end up looking like this: + +![][4] + +As you know, Kubernetes isn't just one thing -- it's a system of masters, workers, networking bits, etc(d). Similarly, Graylog comes with a supporting cast (apache2, mongodb, etc), as does Prometheus (telegraf, grafana, etc). Connecting the dots in a deployment like this may seem daunting, but the right tools can make all the difference. + +I'll walk through this using [conjure-up][5] and the [Canonical Distribution of Kubernetes][6] (CDK). I find the conjure-up interface really helpful for deploying big software, but I know some of you hate GUIs and TUIs and probably other UIs too. For those folks, I'll do the same deployment again from the command line. + +Before we jump in, note that Graylog and Prometheus will be deployed alongside Kubernetes and not in the cluster itself. Things like the Kubernetes Dashboard and Heapster are excellent sources of information from within a running cluster, but my objective is to provide a mechanism for log/metric analysis whether the cluster is running or not. + +### The Walk Through + +First things first, install conjure-up if you don't already have it. 
On Linux, that's simply: +``` +sudo snap install conjure-up --classic +``` + +There's also a brew package for macOS users: +``` +brew install conjure-up +``` + +You'll need at least version 2.5.2 to take advantage of the recent CDK spell additions, so be sure to `sudo snap refresh conjure-up` or `brew update && brew upgrade conjure-up` if you have an older version installed. + +Once installed, run it: +``` +conjure-up +``` + +![][7] + +You'll be presented with a list of various spells. Select CDK and press `Enter`. + +![][8] + +At this point, you'll see additional components that are available for the CDK spell. We're interested in Graylog and Prometheus, so check both of those and hit `Continue`. + +You'll be guided through various cloud choices to determine where you want your cluster to live. After that, you'll see options for post-deployment steps, followed by a review screen that lets you see what is about to be deployed: + +![][9] + +In addition to the typical K8s-related applications (etcd, flannel, load-balancer, master, and workers), you'll see additional applications related to our logging and metric selections. + +The Graylog stack includes the following: + + * apache2: reverse proxy for the graylog web interface + * elasticsearch: document database for the logs + * filebeat: forwards logs from K8s master/workers to graylog + * graylog: provides an api for log collection and an interface for analysis + * mongodb: database for graylog metadata + + + +The Prometheus stack includes the following: + + * grafana: web interface for metric-related dashboards + * prometheus: metric collector and time series database + * telegraf: sends host metrics to prometheus + + + +You can fine tune the deployment from this review screen, but the defaults will suite our needs. Click `Deploy all Remaining Applications` to get things going. + +The deployment will take a few minutes to settle as machines are brought online and applications are configured in your cloud. 
Once complete, conjure-up will show a summary screen that includes links to various interesting endpoints for you to browse:

![][10]

#### Exploring Logs

Now that Graylog has been deployed and configured, let's take a look at some of the data we're gathering. By default, the filebeat application will send both syslog and container log events to graylog (that's `/var/log/*.log` and `/var/log/containers/*.log` from the kubernetes master and workers).

Grab the apache2 address and graylog admin password as follows:
```
juju status --format yaml apache2/0 | grep public-address
 public-address: <apache2-ip>
juju run-action --wait graylog/0 show-admin-password
 admin-password: <graylog-admin-password>
```

Browse to `http://<apache2-ip>` and log in with admin as the username and <graylog-admin-password> as the password. **Note:** if the interface is not immediately available, please wait as the reverse proxy configuration may take up to 5 minutes to complete.

Once logged in, head to the `Sources` tab to get an overview of the logs collected from our K8s master and workers:

![][11]

Drill into those logs by clicking the `System / Inputs` tab and selecting `Show received messages` for the filebeat input:

![][12]

From here, you may want to play around with various filters or set up Graylog dashboards to help identify the events that are most important to you. Check out the [Graylog Dashboard][13] docs for details on customizing your view.

#### Exploring Metrics

Our deployment exposes two types of metrics through our grafana dashboards: system metrics include things like cpu/memory/disk utilization for the K8s master and worker machines, and cluster metrics include container-level data scraped from the K8s cAdvisor endpoints.

Grab the grafana address and admin password as follows:
```
juju status --format yaml grafana/0 | grep public-address
 public-address: <grafana-ip>
juju run-action --wait grafana/0 get-admin-password
 password: <grafana-password>
```

Browse to `http://<grafana-ip>:3000` and log in with admin as the username and <grafana-password> as the password. Once logged in, check out the cluster metric dashboard by clicking the `Home` drop-down box and selecting `Kubernetes Metrics (via Prometheus)`:

![][14]

We can also check out the system metrics of our K8s host machines by switching the drop-down box to `Node Metrics (via Telegraf)`:

![][15]

### The Other Way

As alluded to in the intro, I prefer the wizard-y feel of conjure-up to guide me through complex software deployments like Kubernetes. Now that we've seen the conjure-up way, some of you may want to see a command line approach to achieve the same results. Still others may have deployed CDK previously and want to extend it with the Graylog/Prometheus components described above. Regardless of why you've read this far, I've got you covered.

The tool that underpins conjure-up is [Juju][16]. Everything that the CDK spell did behind the scenes can be done on the command line with Juju. Let's step through how that works.

**Starting From Scratch**

If you're on Linux, install Juju like this:
```
sudo snap install juju --classic
```

For macOS, Juju is available from brew:
```
brew install juju
```

Now set up a controller for your preferred cloud.
You may be prompted for any required cloud credentials:
```
juju bootstrap <cloud>
```

We then need to deploy the base CDK bundle:
```
juju deploy canonical-kubernetes
```

**Starting From CDK**

With our Kubernetes cluster deployed, we need to add all the applications required for Graylog and Prometheus:
```
## deploy graylog-related applications
juju deploy xenial/apache2
juju deploy xenial/elasticsearch
juju deploy xenial/filebeat
juju deploy xenial/graylog
juju deploy xenial/mongodb
```
```
## deploy prometheus-related applications
juju deploy xenial/grafana
juju deploy xenial/prometheus
juju deploy xenial/telegraf
```

Now that the software is deployed, connect them together so they can communicate:
```
## relate graylog applications
juju relate apache2:reverseproxy graylog:website
juju relate graylog:elasticsearch elasticsearch:client
juju relate graylog:mongodb mongodb:database
juju relate filebeat:beats-host kubernetes-master:juju-info
juju relate filebeat:beats-host kubernetes-worker:juju-info
```
```
## relate prometheus applications
juju relate prometheus:grafana-source grafana:grafana-source
juju relate telegraf:prometheus-client prometheus:target
juju relate kubernetes-master:juju-info telegraf:juju-info
juju relate kubernetes-worker:juju-info telegraf:juju-info
```

At this point, all the applications can communicate with each other, but we have a bit more configuration to do (e.g., setting up the apache2 reverse proxy, telling prometheus how to scrape k8s, importing our grafana dashboards, etc):
```
## configure graylog applications
juju config apache2 enable_modules="headers proxy_html proxy_http"
juju config apache2 vhost_http_template="$(base64 <vhost-template>)"
juju config elasticsearch firewall_enabled="false"
juju config filebeat \
 logpath="/var/log/*.log /var/log/containers/*.log"
juju config filebeat logstash_hosts="<graylog-ip>:5044"
juju config graylog elasticsearch_cluster_name="<es-cluster-name>"
```
```
## configure prometheus applications
juju config prometheus scrape-jobs="<k8s-scrape-config>"
juju run-action --wait grafana/0 import-dashboard \
 dashboard="$(base64 <dashboard-json>)"
```

Some of the above steps need values specific to your deployment. You can get these in the same way that conjure-up does:

 * `<vhost-template>`: fetch our sample [template][17] from github
 * `<graylog-ip>`: `juju run --unit graylog/0 'unit-get private-address'`
 * `<es-cluster-name>`: `juju config elasticsearch cluster-name`
 * `<k8s-scrape-config>`: fetch our sample [scraper][18] from github; [substitute][19] appropriate values for `[K8S_PASSWORD][20]` and `[K8S_API_ENDPOINT][21]`
 * `<dashboard-json>`: fetch our [host][22] and [k8s][23] dashboards from github

Finally, you'll want to expose the apache2 and grafana applications to make their web interfaces accessible:
```
## expose relevant endpoints
juju expose apache2
juju expose grafana
```

Now that we have everything deployed, related, configured, and exposed, you can log in and poke around using the same steps from the **Exploring Logs** and **Exploring Metrics** sections above.

### The Wrap Up

My goal here was to show you how to deploy a Kubernetes cluster with rich monitoring capabilities for logs and metrics. Whether you prefer a guided approach or command line steps, I hope it's clear that monitoring complex deployments doesn't have to be a pipe dream. The trick is to figure out how all the moving parts work, make them work together repeatably, and then break/fix/repeat for a while until everyone can use it.

This is where tools like conjure-up and Juju really shine. Leveraging the expertise of contributors to this ecosystem makes it easy to manage big software. Start with a solid set of apps, customize as needed, and get back to work!

Give these bits a try and let me know how it goes. You can find enthusiasts like me on Freenode IRC in **#conjure-up** and **#juju**. Thanks for reading!

### About the author

Kevin joined Canonical in 2014 with his focus set on modeling complex software.
He found his niche on the Juju Big Software team where his mission is to capture operational knowledge of Big Data and Machine Learning applications into repeatable (and reliable!) solutions. + +-------------------------------------------------------------------------------- + +via: https://insights.ubuntu.com/2018/01/16/monitor-your-kubernetes-cluster/ + +作者:[Kevin Monroe][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://insights.ubuntu.com/author/kwmonroe/ +[1]:https://medium.com/@kwmonroe/monitor-your-kubernetes-cluster-a856d2603ec3 +[2]:https://www.graylog.org/ +[3]:https://prometheus.io/ +[4]:https://insights.ubuntu.com/wp-content/uploads/706b/1_TAA57DGVDpe9KHIzOirrBA.png +[5]:https://conjure-up.io/ +[6]:https://jujucharms.com/canonical-kubernetes +[7]:https://insights.ubuntu.com/wp-content/uploads/98fd/1_o0UmYzYkFiHIs2sBgj7G9A.png +[8]:https://insights.ubuntu.com/wp-content/uploads/0351/1_pgVaO_ZlalrjvYd5pOMJMA.png +[9]:https://insights.ubuntu.com/wp-content/uploads/9977/1_WXKxMlml2DWA5Kj6wW9oXQ.png +[10]:https://insights.ubuntu.com/wp-content/uploads/8588/1_NWq7u6g6UAzyFxtbM-ipqg.png +[11]:https://insights.ubuntu.com/wp-content/uploads/a1c3/1_hHK5mSrRJQi6A6u0yPSGOA.png +[12]:https://insights.ubuntu.com/wp-content/uploads/937f/1_cP36lpmSwlsPXJyDUpFluQ.png +[13]:http://docs.graylog.org/en/2.3/pages/dashboards.html +[14]:https://insights.ubuntu.com/wp-content/uploads/9256/1_kskust3AOImIh18QxQPgRw.png +[15]:https://insights.ubuntu.com/wp-content/uploads/2037/1_qJpjPOTGMQbjFY5-cZsYrQ.png +[16]:https://jujucharms.com/ +[17]:https://raw.githubusercontent.com/conjure-up/spells/master/canonical-kubernetes/addons/graylog/steps/01_install-graylog/graylog-vhost.tmpl +[18]:https://raw.githubusercontent.com/conjure-up/spells/master/canonical-kubernetes/addons/prometheus/steps/01_install-prometheus/prometheus-scrape-k8s.yaml 
+[19]:https://github.com/conjure-up/spells/blob/master/canonical-kubernetes/addons/prometheus/steps/01_install-prometheus/after-deploy#L25 +[20]:https://github.com/conjure-up/spells/blob/master/canonical-kubernetes/addons/prometheus/steps/01_install-prometheus/after-deploy#L10 +[21]:https://github.com/conjure-up/spells/blob/master/canonical-kubernetes/addons/prometheus/steps/01_install-prometheus/after-deploy#L11 +[22]:https://raw.githubusercontent.com/conjure-up/spells/master/canonical-kubernetes/addons/prometheus/steps/01_install-prometheus/grafana-telegraf.json +[23]:https://raw.githubusercontent.com/conjure-up/spells/master/canonical-kubernetes/addons/prometheus/steps/01_install-prometheus/grafana-k8s.json From 7114a15e12262914d3e3ff16a32290b993dac567 Mon Sep 17 00:00:00 2001 From: darksun Date: Fri, 19 Jan 2018 12:57:26 +0800 Subject: [PATCH 090/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Analyzing=20the?= =?UTF-8?q?=20Linux=20boot=20process?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...180116 Analyzing the Linux boot process.md | 251 ++++++++++++++++++ 1 file changed, 251 insertions(+) create mode 100644 sources/tech/20180116 Analyzing the Linux boot process.md diff --git a/sources/tech/20180116 Analyzing the Linux boot process.md b/sources/tech/20180116 Analyzing the Linux boot process.md new file mode 100644 index 0000000000..0bf807c6bb --- /dev/null +++ b/sources/tech/20180116 Analyzing the Linux boot process.md @@ -0,0 +1,251 @@ +Analyzing the Linux boot process +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_boot.png?itok=FUesnJQp) + +Image by : Penguin, Boot. Modified by Opensource.com. CC BY-SA 4.0. + +The oldest joke in open source software is the statement that "the code is self-documenting." Experience shows that reading the source is akin to listening to the weather forecast: sensible people still go outside and check the sky. 
What follows are some tips on how to inspect and observe Linux systems at boot by leveraging knowledge of familiar debugging tools. Analyzing the boot processes of systems that are functioning well prepares users and developers to deal with the inevitable failures.
+
+In some ways, the boot process is surprisingly simple. The kernel starts up single-threaded and synchronous on a single core and seems almost comprehensible to the pitiful human mind. But how does the kernel itself get started? What functions do the [initial ramdisk][1] and bootloaders perform? And wait, why is the LED on the Ethernet port always on?
+
+Read on for answers to these and other questions; the [code for the described demos and exercises][2] is also available on GitHub.
+
+### The beginning of boot: the OFF state
+
+#### Wake-on-LAN
+
+The OFF state means that the system has no power, right? The apparent simplicity is deceptive. For example, the Ethernet LED is illuminated because wake-on-LAN (WOL) is enabled on your system. Check whether this is the case by typing:
+```
+ $# sudo ethtool
+```
+
+where `` might be, for example, `eth0`. (`ethtool` is found in Linux packages of the same name.) If "Wake-on" in the output shows `g`, remote hosts can boot the system by sending a [MagicPacket][3]. If you have no intention of waking up your system remotely and do not wish others to do so, turn WOL off either in the system BIOS menu, or via:
+```
+$# sudo ethtool -s wol d
+```
+
+The processor that responds to the MagicPacket may be part of the network interface or it may be the [Baseboard Management Controller][4] (BMC).
+
+#### Intel Management Engine, Platform Controller Hub, and Minix
+
+The BMC is not the only microcontroller (MCU) that may be listening when the system is nominally off. x86_64 systems also include the Intel Management Engine (IME) software suite for remote management of systems.
A wide variety of devices, from servers to laptops, includes this technology, [which enables functionality][5] such as KVM Remote Control and Intel Capability Licensing Service. The [IME has unpatched vulnerabilities][6], according to [Intel's own detection tool][7]. The bad news is, it's difficult to disable the IME. Trammell Hudson has created an [me_cleaner project][8] that wipes some of the more egregious IME components, like the embedded web server, but could also brick the system on which it is run. + +The IME firmware and the System Management Mode (SMM) software that follows it at boot are [based on the Minix operating system][9] and run on the separate Platform Controller Hub processor, not the main system CPU. The SMM then launches the Universal Extensible Firmware Interface (UEFI) software, about which much has [already been written][10], on the main processor. The Coreboot group at Google has started a breathtakingly ambitious [Non-Extensible Reduced Firmware][11] (NERF) project that aims to replace not only UEFI but early Linux userspace components such as systemd. While we await the outcome of these new efforts, Linux users may now purchase laptops from Purism, System76, or Dell [with IME disabled][12], plus we can hope for laptops [with ARM 64-bit processors][13]. + +#### Bootloaders + +Besides starting buggy spyware, what function does early boot firmware serve? The job of a bootloader is to make available to a newly powered processor the resources it needs to run a general-purpose operating system like Linux. At power-on, there not only is no virtual memory, but no DRAM until its controller is brought up. A bootloader then turns on power supplies and scans buses and interfaces in order to locate the kernel image and the root filesystem. Popular bootloaders like U-Boot and GRUB have support for familiar interfaces like USB, PCI, and NFS, as well as more embedded-specific devices like NOR- and NAND-flash. 
Bootloaders also interact with hardware security devices like [Trusted Platform Modules][14] (TPMs) to establish a chain of trust from earliest boot.
+
+![Running the U-boot bootloader][16]
+
+Running the U-boot bootloader in the sandbox on the build host.
+
+The open source, widely used [U-Boot][17] bootloader is supported on systems ranging from Raspberry Pi to Nintendo devices to automotive boards to Chromebooks. There is no syslog, and when things go sideways, often not even any console output. To facilitate debugging, the U-Boot team offers a sandbox in which patches can be tested on the build-host, or even in a nightly Continuous Integration system. Playing with U-Boot's sandbox is relatively simple on a system where common development tools like Git and the GNU Compiler Collection (GCC) are installed:
+```
+
+
+$# git clone git://git.denx.de/u-boot; cd u-boot
+
+$# make ARCH=sandbox defconfig
+
+$# make; ./u-boot
+
+=> printenv
+
+=> help
+```
+
+That's it: you're running U-Boot on x86_64 and can test tricky features like [mock storage device][2] repartitioning, TPM-based secret-key manipulation, and hotplug of USB devices. The U-Boot sandbox can even be single-stepped under the GDB debugger. Development using the sandbox is 10x faster than testing by reflashing the bootloader onto a board, and a "bricked" sandbox can be recovered with Ctrl+C.
+
+### Starting up the kernel
+
+#### Provisioning a booting kernel
+
+Upon completion of its tasks, the bootloader will execute a jump to kernel code that it has loaded into main memory and begin execution, passing along any command-line options that the user has specified. What kind of program is the kernel? `file /boot/vmlinuz` indicates that it is a bzImage, meaning a big compressed one.
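The trick behind unpacking such an image is simple: scan it for a compression magic number and decompress from that offset onward. Here is a toy sketch of the idea against a fabricated image (a real `vmlinuz` path and compression format vary by distro):

```shell
# Build a fake "compressed kernel": a boot stub followed by a gzipped
# payload, then locate the gzip magic (1f 8b 08) and unpack from there.
printf 'FAKEBOOTSTUBPADDING.....' > image.bin
printf 'Linux version 9.99 (demo)' | gzip >> image.bin

# Find the offset of the magic by searching a hex dump of the image.
hexoff=$(od -An -tx1 -v image.bin | tr -d ' \n' | grep -bo '1f8b08' | head -n 1 | cut -d: -f1)
off=$((hexoff / 2))     # two hex characters per byte

tail -c +$((off + 1)) image.bin | gunzip > payload.txt
cat payload.txt         # Linux version 9.99 (demo)
```

The real extract-vmlinux script works the same way, trying several compressors (gzip, xz, lzma, and so on) until one of them succeeds.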
The Linux source tree contains an [extract-vmlinux tool][18] that can be used to uncompress the file:
+```
+
+
+$# scripts/extract-vmlinux /boot/vmlinuz-$(uname -r) > vmlinux
+
+$# file vmlinux
+
+vmlinux: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically
+
+linked, stripped
+```
+
+The kernel is an [Executable and Linking Format][19] (ELF) binary, like Linux userspace programs. That means we can use commands from the `binutils` package like `readelf` to inspect it. Compare the output of, for example:
+```
+
+
+$# readelf -S /bin/date
+
+$# readelf -S vmlinux
+```
+
+The list of sections in the binaries is largely the same.
+
+So the kernel must start up something like other Linux ELF binaries ... but how do userspace programs actually start? In the `main()` function, right? Not precisely.
+
+Before the `main()` function can run, programs need an execution context that includes heap and stack memory plus file descriptors for `stdin`, `stdout`, and `stderr`. Userspace programs obtain these resources from the standard library, which is `glibc` on most Linux systems. Consider the following:
+```
+
+
+$# file /bin/date
+
+/bin/date: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically
+
+linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.32,
+
+BuildID[sha1]=14e8563676febeb06d701dbee35d225c5a8e565a,
+
+stripped
+```
+
+ELF binaries have an interpreter, just as Bash and Python scripts do, but the interpreter need not be specified with `#!` as in scripts, as ELF is Linux's native format. The ELF interpreter [provisions a binary][20] with the needed resources by calling `_start()`, a function available from the `glibc` source package that can be [inspected via GDB][21]. The kernel obviously has no interpreter and must provision itself, but how?
+
+Inspecting the kernel's startup with GDB gives the answer.
First install the debug package for the kernel that contains an unstripped version of `vmlinux`, for example `apt-get install linux-image-amd64-dbg`, or compile and install your own kernel from source, for example, by following instructions in the excellent [Debian Kernel Handbook][22]. `gdb vmlinux` followed by `info files` shows the ELF section `init.text`. List the start of program execution in `init.text` with `l *(address)`, where `address` is the hexadecimal start of `init.text`. GDB will indicate that the x86_64 kernel starts up in the kernel's file [arch/x86/kernel/head_64.S][23], where we find the assembly function `start_cpu0()` and code that explicitly creates a stack and decompresses the zImage before calling the `x86_64 start_kernel()` function. ARM 32-bit kernels have the similar [arch/arm/kernel/head.S][24]. `start_kernel()` is not architecture-specific, so the function lives in the kernel's [init/main.c][25]. `start_kernel()` is arguably Linux's true `main()` function. + +### From start_kernel() to PID 1 + +#### The kernel's hardware manifest: the device-tree and ACPI tables + +At boot, the kernel needs information about the hardware beyond the processor type for which it has been compiled. The instructions in the code are augmented by configuration data that is stored separately. There are two main methods of storing this data: [device-trees][26] and [ACPI tables][27]. The kernel learns what hardware it must run at each boot by reading these files. + +For embedded devices, the device-tree is a manifest of installed hardware. The device-tree is simply a file that is compiled at the same time as kernel source and is typically located in `/boot` alongside `vmlinux`. To see what's in the binary device-tree on an ARM device, just use the `strings` command from the `binutils` package on a file whose name matches `/boot/*.dtb`, as `dtb` refers to a device-tree binary. 
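If `strings` happens not to be installed, the same quick scan can be approximated with POSIX `tr`. The blob below is fabricated for illustration, since real `.dtb` contents are board-specific:

```shell
# Stand-in for a real /boot/*.dtb: printable property names separated
# by NULs and other non-printable bytes.
printf 'model\000linux,demo-board\000\001\002compatible\000' > fake.dtb

# Replace non-printable bytes with newlines; keep runs of 4+ characters.
tr -c '[:print:]' '\n' < fake.dtb | grep -E '.{4,}'
```

This prints the three embedded names and drops the binary noise, which is essentially what `strings` does.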
Clearly the device-tree can be modified simply by editing the JSON-like files that compose it and rerunning the special `dtc` compiler that is provided with the kernel source. While the device-tree is a static file whose file path is typically passed to the kernel by the bootloader on the command line, a [device-tree overlay][28] facility has been added in recent years, where the kernel can dynamically load additional fragments in response to hotplug events after boot.
+
+x86-family and many enterprise-grade ARM64 devices make use of the alternative Advanced Configuration and Power Interface ([ACPI][27]) mechanism. In contrast to the device-tree, the ACPI information is stored in the `/sys/firmware/acpi/tables` virtual filesystem that is created by the kernel at boot by accessing onboard ROM. The easy way to read the ACPI tables is with the `acpidump` command from the `acpica-tools` package. Here's an example:
+
+![ACPI tables on Lenovo laptops][30]
+
+
+ACPI tables on Lenovo laptops are all set for Windows 2001.
+
+Yes, your Linux system is ready for Windows 2001, should you care to install it. ACPI has both methods and data, unlike the device-tree, which is more of a hardware-description language. ACPI methods continue to be active post-boot. For example, starting the command `acpi_listen` (from package `acpid`) and opening and closing the laptop lid will show that ACPI functionality is running all the time. While temporarily and dynamically [overwriting the ACPI tables][31] is possible, permanently changing them involves interacting with the BIOS menu at boot or reflashing the ROM. If you're going to that much trouble, perhaps you should just [install coreboot][32], the open source firmware replacement.
+
+#### From start_kernel() to userspace
+
+The code in [init/main.c][25] is surprisingly readable and, amusingly, still carries Linus Torvalds' original copyright from 1991-1992.
The lines found in `dmesg | head` on a newly booted system originate mostly from this source file. The first CPU is registered with the system, global data structures are initialized, and the scheduler, interrupt handlers (IRQs), timers, and console are brought online one by one, in strict order. Until the function `timekeeping_init()` runs, all timestamps are zero. This part of the kernel initialization is synchronous, meaning that execution occurs in precisely one thread, and no function is executed until the last one completes and returns. As a result, the `dmesg` output will be completely reproducible, even between two systems, as long as they have the same device-tree or ACPI tables. Linux is behaving like one of the RTOS (real-time operating systems) that runs on MCUs, for example QNX or VxWorks. The situation persists into the function `rest_init()`, which is called by `start_kernel()` at its termination.
+
+![Summary of early kernel boot process.][34]
+
+Summary of early kernel boot process.
+
+The rather humbly named `rest_init()` spawns a new thread that runs `kernel_init()`, which invokes `do_initcalls()`. Users can spy on `initcalls` in action by appending `initcall_debug` to the kernel command line, resulting in `dmesg` entries every time an `initcall` function runs. `initcalls` pass through eight sequential levels: early, core, postcore, arch, subsys, fs, device, and late. The most user-visible part of the `initcalls` is the probing and setup of all the processors' peripherals: buses, network, storage, displays, etc., accompanied by the loading of their kernel modules. `rest_init()` also spawns a second thread on the boot processor that begins by running `cpu_idle()` while it waits for the scheduler to assign it work.
+
+`kernel_init()` also [sets up symmetric multiprocessing][35] (SMP). With more recent kernels, find this point in `dmesg` output by looking for "Bringing up secondary CPUs..."
SMP proceeds by "hotplugging" CPUs, meaning that it manages their lifecycle with a state machine that is notionally similar to that of devices like hotplugged USB sticks. The kernel's power-management system frequently takes individual cores offline, then wakes them as needed, so that the same CPU hotplug code is called over and over on a machine that is not busy. Observe the power-management system's invocation of CPU hotplug with the [BCC tool][36] called `offcputime.py`.
+
+Note that the code in `init/main.c` is nearly finished executing when `smp_init()` runs: The boot processor has completed most of the one-time initialization that the other cores need not repeat. Nonetheless, the per-CPU threads must be spawned for each core to manage interrupts (IRQs), workqueues, timers, and power events on each. For example, see the per-CPU threads that service softirqs and workqueues in action via the `ps -o psr` command.
+```
+
+
+$# ps -o pid,psr,comm $(pgrep ksoftirqd)
+
+ PID PSR COMMAND
+
+   7   0 ksoftirqd/0
+
+  16   1 ksoftirqd/1
+
+  22   2 ksoftirqd/2
+
+  28   3 ksoftirqd/3
+
+
+
+$# ps -o pid,psr,comm $(pgrep kworker)
+
+PID  PSR COMMAND
+
+   4   0 kworker/0:0H
+
+  18   1 kworker/1:0H
+
+  24   2 kworker/2:0H
+
+  30   3 kworker/3:0H
+
+[ . .  . ]
+```
+
+where the PSR field stands for "processor." Each core must also host its own timers and `cpuhp` hotplug handlers.
+
+How is it, finally, that userspace starts? Near its end, `kernel_init()` looks for an `initrd` that can execute the `init` process on its behalf. If it finds none, the kernel directly executes `init` itself. Why then might one want an `initrd`?
+
+#### Early userspace: who ordered the initrd?
+
+Besides the device-tree, another file path that is optionally provided to the kernel at boot is that of the `initrd`. The `initrd` often lives in `/boot` alongside the bzImage file vmlinuz on x86, or alongside the similar uImage and device-tree for ARM.
List the contents of the `initrd` with the `lsinitramfs` tool that is part of the `initramfs-tools-core` package. Distro `initrd` schemes contain minimal `/bin`, `/sbin`, and `/etc` directories along with kernel modules, plus some files in `/scripts`. All of these should look pretty familiar, as the `initrd` for the most part is simply a minimal Linux root filesystem. The apparent similarity is a bit deceptive, as nearly all the executables in `/bin` and `/sbin` inside the ramdisk are symlinks to the [BusyBox binary][37], resulting in `/bin` and `/sbin` directories that are 10x smaller than glibc's. + +Why bother to create an `initrd` if all it does is load some modules and then start `init` on the regular root filesystem? Consider an encrypted root filesystem. The decryption may rely on loading a kernel module that is stored in `/lib/modules` on the root filesystem ... and, unsurprisingly, in the `initrd` as well. The crypto module could be statically compiled into the kernel instead of loaded from a file, but there are various reasons for not wanting to do so. For example, statically compiling the kernel with modules could make it too large to fit on the available storage, or static compilation may violate the terms of a software license. Unsurprisingly, storage, network, and human input device (HID) drivers may also be present in the `initrd`--basically any code that is not part of the kernel proper that is needed to mount the root filesystem. The `initrd` is also a place where users can stash their own [custom ACPI][38] table code. + +![Rescue shell and a custom initrd.][40] + +Having some fun with the rescue shell and a custom `initrd`. + +`initrd`'s are also great for testing filesystems and data-storage devices themselves. Stash these test tools in the `initrd` and run your tests from memory rather than from the object under test. + +At last, when `init` runs, the system is up! 
Since the secondary processors are now running, the machine has become the asynchronous, preemptible, unpredictable, high-performance creature we know and love. Indeed, `ps -o pid,psr,comm -p 1` is liable to show that userspace's `init` process is no longer running on the boot processor.
+
+### Summary
+
+The Linux boot process sounds forbidding, considering the number of different pieces of software that participate even on simple embedded devices. Looked at differently, the boot process is rather simple, since the bewildering complexity caused by features like preemption, RCU, and race conditions is absent in boot. Focusing on just the kernel and PID 1 overlooks the large amount of work that bootloaders and subsidiary processors may do in preparing the platform for the kernel to run. While the kernel is certainly unique among Linux programs, some insight into its structure can be gleaned by applying to it some of the same tools used to inspect other ELF binaries. Studying the boot process while it's working well arms system maintainers for failures when they come.
+
+To learn more, attend Alison Chaiken's talk, [Linux: The first second][41], at [linux.conf.au][42], which will be held January 22-26 in Sydney.
+
+Thanks to [Akkana Peck][43] for originally suggesting this topic and for many corrections.
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/1/analyzing-linux-boot-process + +作者:[Alison Chaiken][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/don-watkins +[1]:https://en.wikipedia.org/wiki/Initial_ramdisk +[2]:https://github.com/chaiken/LCA2018-Demo-Code +[3]:https://en.wikipedia.org/wiki/Wake-on-LAN +[4]:https://lwn.net/Articles/630778/ +[5]:https://www.youtube.com/watch?v=iffTJ1vPCSo&index=65&list=PLbzoR-pLrL6pISWAq-1cXP4_UZAyRtesk +[6]:https://security-center.intel.com/advisory.aspx?intelid=INTEL-SA-00086&languageid=en-fr +[7]:https://www.intel.com/content/www/us/en/support/articles/000025619/software.html +[8]:https://github.com/corna/me_cleaner +[9]:https://lwn.net/Articles/738649/ +[10]:https://lwn.net/Articles/699551/ +[11]:https://trmm.net/NERF +[12]:https://www.extremetech.com/computing/259879-dell-now-shipping-laptops-intels-management-engine-disabled +[13]:https://lwn.net/Articles/733837/ +[14]:https://linuxplumbersconf.org/2017/ocw/events/LPC2017/tracks/639 +[15]:/file/383501 +[16]:https://opensource.com/sites/default/files/u128651/linuxboot_1.png (Running the U-boot bootloader) +[17]:http://www.denx.de/wiki/DULG/Manual +[18]:https://github.com/torvalds/linux/blob/master/scripts/extract-vmlinux +[19]:http://man7.org/linux/man-pages/man5/elf.5.html +[20]:https://0xax.gitbooks.io/linux-insides/content/Misc/program_startup.html +[21]:https://github.com/chaiken/LCA2018-Demo-Code/commit/e543d9812058f2dd65f6aed45b09dda886c5fd4e +[22]:http://kernel-handbook.alioth.debian.org/ +[23]:https://github.com/torvalds/linux/blob/master/arch/x86/boot/compressed/head_64.S +[24]:https://github.com/torvalds/linux/blob/master/arch/arm/boot/compressed/head.S +[25]:https://github.com/torvalds/linux/blob/master/init/main.c 
+[26]:https://www.youtube.com/watch?v=m_NyYEBxfn8 +[27]:http://events.linuxfoundation.org/sites/events/files/slides/x86-platform.pdf +[28]:http://lwn.net/Articles/616859/ +[29]:/file/383506 +[30]:https://opensource.com/sites/default/files/u128651/linuxboot_2.png (ACPI tables on Lenovo laptops) +[31]:https://www.mjmwired.net/kernel/Documentation/acpi/method-customizing.txt +[32]:https://www.coreboot.org/Supported_Motherboards +[33]:/file/383511 +[34]:https://opensource.com/sites/default/files/u128651/linuxboot_3.png (Summary of early kernel boot process.) +[35]:http://free-electrons.com/pub/conferences/2014/elc/clement-smp-bring-up-on-arm-soc +[36]:http://www.brendangregg.com/ebpf.html +[37]:https://www.busybox.net/ +[38]:https://www.mjmwired.net/kernel/Documentation/acpi/initrd_table_override.txt +[39]:/file/383516 +[40]:https://opensource.com/sites/default/files/u128651/linuxboot_4.png (Rescue shell and a custom initrd.) +[41]:https://rego.linux.conf.au/schedule/presentation/16/ +[42]:https://linux.conf.au/index.html +[43]:http://shallowsky.com/ From ab54388ea83daeb37b4d267d8104ba88022ba222 Mon Sep 17 00:00:00 2001 From: darksun Date: Fri, 19 Jan 2018 13:02:08 +0800 Subject: [PATCH 091/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20Instal?= =?UTF-8?q?l=20and=20Use=20iostat=20on=20Ubuntu=2016.04=20LTS?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...tall and Use iostat on Ubuntu 16.04 LTS.md | 225 ++++++++++++++++++ 1 file changed, 225 insertions(+) create mode 100644 sources/tech/20180116 How to Install and Use iostat on Ubuntu 16.04 LTS.md diff --git a/sources/tech/20180116 How to Install and Use iostat on Ubuntu 16.04 LTS.md b/sources/tech/20180116 How to Install and Use iostat on Ubuntu 16.04 LTS.md new file mode 100644 index 0000000000..7ddb17eb68 --- /dev/null +++ b/sources/tech/20180116 How to Install and Use iostat on Ubuntu 16.04 LTS.md @@ -0,0 +1,225 @@ +How to Install and Use iostat on Ubuntu 16.04 LTS 
+======
+
+iostat, short for input/output statistics, is a popular Linux system monitoring tool that collects statistics from input and output devices. It helps users identify performance issues with local disks, remote disks, and the system as a whole. iostat produces three reports: the CPU Utilization report, the Device Utilization report, and the Network Filesystem report.
+
+In this tutorial, we will learn how to install iostat on Ubuntu 16.04 and how to use it.
+
+### Prerequisite
+
+ * Ubuntu 16.04 desktop installed on your system.
+ * Non-root user with sudo privileges set up on your system
+
+
+
+### Install iostat
+
+By default, iostat is included in the sysstat package on Ubuntu 16.04. You can easily install it by running the following command:
+
+```
+sudo apt-get install sysstat -y
+```
+
+Once sysstat is installed, you can proceed to the next step.
+
+### iostat Basic Example
+
+Let's start by running the iostat command without any argument. This displays information about the CPU usage and I/O statistics of your system:
+
+```
+iostat
+```
+
+You should see the following output:
+```
+Linux 3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU)
+
+avg-cpu: %user %nice %system %iowait %steal %idle
+ 22.67 0.52 6.99 1.88 0.00 67.94
+
+Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
+sda 15.15 449.15 119.01 771022 204292
+
+```
+
+In the output above, the first line displays the Linux kernel version and hostname. The next two lines display CPU statistics: average CPU usage, the percentage of time the CPU was idle or waiting for an I/O response, the percentage of time a virtual CPU was kept waiting, and the percentage of time the CPU was idle. The next two lines display the device utilization report: the number of blocks read and written per second, and the total blocks read and written.
+
+By default, iostat displays the report with the current date.
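Because the report layout is fixed, it is also easy to post-process. As a quick sketch, here is how the `%idle` figure could be pulled out with awk; the sample report is inlined below for illustration, while in practice you would pipe in `iostat -c` instead:

```shell
# Extract %idle: it is the last field on the line that follows the
# "avg-cpu:" header in the CPU utilization report.
report='avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          22.67    0.52    6.99    1.88    0.00   67.94'

idle=$(printf '%s\n' "$report" | awk '/avg-cpu/ { getline; print $NF }')
echo "CPU idle: ${idle}%"      # CPU idle: 67.94%
```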
If you want to display the current time as well, run the following command:
+
+```
+iostat -t
+```
+
+You should see the following output:
+```
+Linux 3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU)
+
+Saturday 16 December 2017 09:44:55 IST
+avg-cpu: %user %nice %system %iowait %steal %idle
+ 21.37 0.31 6.93 1.28 0.00 70.12
+
+Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
+sda 9.48 267.80 79.69 771022 229424
+
+```
+
+To check the version of iostat, run the following command:
+
+```
+iostat -V
+```
+
+Output:
+```
+sysstat version 10.2.0
+(C) Sebastien Godard (sysstat orange.fr)
+
+```
+
+You can list all the options available for the iostat command using the following command:
+
+```
+iostat --help
+```
+
+Output:
+```
+Usage: iostat [ options ] [ [ ] ]
+Options are:
+[ -c ] [ -d ] [ -h ] [ -k | -m ] [ -N ] [ -t ] [ -V ] [ -x ] [ -y ] [ -z ]
+[ -j { ID | LABEL | PATH | UUID | ... } ]
+[ [ -T ] -g ] [ -p [ [,...] | ALL ] ]
+[ [...] | ALL ]
+
+```
+
+### iostat Advanced Usage Example
+
+If you want to view only the device report, once, run the following command:
+
+```
+iostat -d
+```
+
+You should see the following output:
+```
+Linux 3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU)
+
+Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
+sda 12.18 353.66 102.44 771022 223320
+
+```
+
+To view the device report continuously, refreshed every 5 seconds, 3 times in total:
+
+```
+iostat -d 5 3
+```
+
+You should see the following output:
+```
+Linux 3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU)
+
+Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
+sda 11.77 340.71 98.95 771022 223928
+
+Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
+sda 2.00 0.00 8.00 0 40
+
+Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
+sda 0.60 0.00 3.20 0 16
+
+```
+
+If you want to view the statistics of specific devices, run the following command:
+
+```
+iostat -p sda
+```
+
+You should see the following output:
+```
+Linux
3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU)
+
+avg-cpu: %user %nice %system %iowait %steal %idle
+ 21.69 0.36 6.98 1.44 0.00 69.53
+
+Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
+sda 11.00 316.91 92.38 771022 224744
+sda1 0.07 0.27 0.00 664 0
+sda2 0.01 0.05 0.00 128 0
+sda3 0.07 0.27 0.00 648 0
+sda4 10.56 315.21 92.35 766877 224692
+sda5 0.12 0.48 0.02 1165 52
+sda6 0.07 0.32 0.00 776 0
+
+```
+
+You can also view the statistics of multiple devices with the following command:
+
+```
+iostat -p sda,sdb,sdc
+```
+
+If you want to display the device I/O statistics in MB/second, run the following command:
+
+```
+iostat -m
+```
+
+You should see the following output:
+```
+Linux 3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU)
+
+avg-cpu: %user %nice %system %iowait %steal %idle
+ 21.39 0.31 6.94 1.30 0.00 70.06
+
+Device: tps MB_read/s MB_wrtn/s MB_read MB_wrtn
+sda 9.67 0.27 0.08 752 223
+
+```
+
+If you want to view the extended information for a specific partition (sda4), run the following command:
+
+```
+iostat -x sda4
+```
+
+You should see the following output:
+```
+Linux 3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU)
+
+avg-cpu: %user %nice %system %iowait %steal %idle
+ 21.26 0.28 6.87 1.19 0.00 70.39
+
+Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
+sda4 0.79 4.65 5.71 2.68 242.76 73.28 75.32 0.35 41.80 43.66 37.84 4.55 3.82
+
+```
+
+If you want to display only the CPU usage statistics, run the following command:
+
+```
+iostat -c
+```
+
+You should see the following output:
+```
+Linux 3.19.0-25-generic (Ubuntu-PC) Saturday 16 December 2017 _x86_64_ (4 CPU)
+
+avg-cpu: %user %nice %system %iowait %steal %idle
+ 21.45 0.33 6.96 1.34 0.00 69.91
+
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.howtoforge.com/tutorial/how-to-install-and-use-iostat-on-ubuntu-1604/
+ +作者:[Hitesh Jethva][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.howtoforge.com From ae2f3f3f5657177a93ba28db20943e3aab184c37 Mon Sep 17 00:00:00 2001 From: ChenYi <31087327+cyleft@users.noreply.github.com> Date: Fri, 19 Jan 2018 13:50:53 +0800 Subject: [PATCH 092/226] Delete 20170918 3 text editor alternatives to Emacs and Vim.md --- ...xt editor alternatives to Emacs and Vim.md | 104 ------------------ 1 file changed, 104 deletions(-) delete mode 100644 sources/tech/20170918 3 text editor alternatives to Emacs and Vim.md diff --git a/sources/tech/20170918 3 text editor alternatives to Emacs and Vim.md b/sources/tech/20170918 3 text editor alternatives to Emacs and Vim.md deleted file mode 100644 index 835db13b2f..0000000000 --- a/sources/tech/20170918 3 text editor alternatives to Emacs and Vim.md +++ /dev/null @@ -1,104 +0,0 @@ -## translate by cyleft - -3 text editor alternatives to Emacs and Vim -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_keyboard_laptop_development_blue.png?itok=IfckxN48) - -Before you start reaching for those implements of mayhem, Emacs and Vim fans, understand that this article isn't about putting the boot to your favorite editor. I'm a professed Emacs guy, but one who also likes Vim. A lot. - -That said, I realize that Emacs and Vim aren't for everyone. It might be that the silliness of the so-called [Editor war][1] has turned some people off. Or maybe they just want an editor that is less demanding and has a more modern sheen. - -If you're looking for an alternative to Emacs or Vim, keep reading. Here are three that might interest you. 
- -### Geany - - -![Editing a LaTeX document with Geany][3] - - -Editing a LaTeX document with Geany - -[Geany][4] is an old favorite from the days when I computed on older hardware running lightweight Linux distributions. Geany started out as my [LaTeX][5] editor, but quickly became the app in which I did all of my text editing. - -Although Geany is billed as a small and fast [IDE][6] (integrated development environment), it's definitely not just a techie's tool. Geany is small and it is fast, even on older hardware or a [Chromebook running Linux][7]. You can use Geany for everything from editing configuration files to maintaining a task list or journal, from writing an article or a book to doing some coding and scripting. - -[Plugins][8] give Geany a bit of extra oomph. Those plugins expand the editor's capabilities, letting you code or work with markup languages more effectively, manipulate text, and even check your spelling. - -### Atom - - -![Editing a webpage with Atom][10] - - -Editing a webpage with Atom - -[Atom][11] is a new-ish kid in the text editing neighborhood. In the short time it's been on the scene, though, Atom has gained a dedicated following. - -What makes Atom attractive is that you can customize it. If you're of a more technical bent, you can fiddle with the editor's configuration. If you aren't all that technical, Atom has [a number of themes][12] you can use to change how the editor looks. - -And don't discount Atom's thousands of [packages][13]. They extend the editor in many different ways, enabling you to turn it into the text editing or development environment that's right for you. Atom isn't just for coders. It's a very good [text editor for writers][14], too. - -### Xed - -![Writing this article in Xed][16] - - -Writing this article in Xed - -Maybe Atom and Geany are a bit heavy for your tastes. Maybe you want a lighter editor, something that's not bare bones but also doesn't have features you'll rarely (if ever) use. 
In that case, [Xed][17] might be what you're looking for. - -If Xed looks familiar, it's a fork of the Pluma text editor for the MATE desktop environment. I've found that Xed is a bit faster and a bit more responsive than Pluma--your mileage may vary, though. - -Although Xed isn't as rich in features as other editors, it doesn't do too badly. It has solid syntax highlighting, a better-than-average search and replace function, a spelling checker, and a tabbed interface for editing multiple files in a single window. - -### Other editors worth exploring - -I'm not a KDE guy, but when I worked in that environment, [KDevelop][18] was my go-to editor for heavy-duty work. It's a lot like Geany in that KDevelop is powerful and flexible without a lot of bulk. - -Although I've never really felt the love, more than a couple of people I know swear by [Brackets][19]. It is powerful, and I have to admit its [extensions][20] look useful. - -Billed as a "text editor for developers," [Notepadqq][21] is an editor that's reminiscent of [Notepad++][22]. It's in the early stages of development, but Notepadqq does look promising. - -[Gedit][23] and [Kate][24] are excellent for anyone whose text editing needs are simple. They're definitely not bare bones--they pack enough features to do heavy text editing. Both Gedit and Kate balance that by being speedy and easy to use. - -Do you have another favorite text editor that's not Emacs or Vim? Feel free to share by leaving a comment. - -### About The Author -Scott Nesbitt: I'm a long-time user of free and open source software, and I write various things for both fun and profit. I don't take myself too seriously, and I do all of my own stunts.
You Can Find Me At These Fine Establishments On The Web - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/17/9/3-alternatives-emacs-and-vim - -作者:[Scott Nesbitt][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/scottnesbitt -[1]:https://en.wikipedia.org/wiki/Editor_war -[2]:/file/370196 -[3]:https://opensource.com/sites/default/files/u128651/geany.png (Editing a LaTeX document with Geany) -[4]:https://www.geany.org/ -[5]:https://opensource.com/article/17/6/introduction-latex -[6]:https://en.wikipedia.org/wiki/Integrated_development_environment -[7]:https://opensource.com/article/17/4/linux-chromebook-gallium-os -[8]:http://plugins.geany.org/ -[9]:/file/370191 -[10]:https://opensource.com/sites/default/files/u128651/atom.png (Editing a webpage with Atom) -[11]:https://atom.io -[12]:https://atom.io/themes -[13]:https://atom.io/packages -[14]:https://opensource.com/article/17/5/atom-text-editor-packages-writers -[15]:/file/370201 -[16]:https://opensource.com/sites/default/files/u128651/xed.png (Writing this article in Xed) -[17]:https://github.com/linuxmint/xed -[18]:https://www.kdevelop.org/ -[19]:http://brackets.io/ -[20]:https://registry.brackets.io/ -[21]:http://notepadqq.altervista.org/s/ -[22]:https://opensource.com/article/16/12/notepad-text-editor -[23]:https://wiki.gnome.org/Apps/Gedit -[24]:https://kate-editor.org/ From c5392887eda827c09d243c3b83737feef12284b1 Mon Sep 17 00:00:00 2001 From: ChenYi <31087327+cyleft@users.noreply.github.com> Date: Fri, 19 Jan 2018 13:51:59 +0800 Subject: [PATCH 093/226] translated by cyleft 20170918 3 text editor alternatives to Emacs and Vim.md --- ...xt editor alternatives to Emacs and Vim.md | 102 ++++++++++++++++++ 1 file changed, 102 insertions(+) create mode 100644 translated/tech/20170918 3 
text editor alternatives to Emacs and Vim.md diff --git a/translated/tech/20170918 3 text editor alternatives to Emacs and Vim.md b/translated/tech/20170918 3 text editor alternatives to Emacs and Vim.md new file mode 100644 index 0000000000..136214ce33 --- /dev/null +++ b/translated/tech/20170918 3 text editor alternatives to Emacs and Vim.md @@ -0,0 +1,102 @@ +3 个替代 Emacs 和 Vim 的文本编辑器 +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_keyboard_laptop_development_blue.png?itok=IfckxN48) + +Emacs 和 Vim 的粉丝们,在你们开始编辑器之争之前,请你们理解,这篇文章并不是要贬低诸位最喜欢的编辑器。我是一个 Emacs 爱好者,但是也很喜欢 Vim。 + +就是说,我已经意识到 Emacs 和 Vim 并不适合所有人。也许所谓的 [编辑器之争][1] 太过无聊,让一些人对它们失去了兴趣。也许他们只是想要一个不太苛刻、更有现代气息的编辑器。 + +如果你正寻找可以替代 Emacs 或者 Vim 的编辑器,请继续阅读下去。这里有三个可能会让你感兴趣的编辑器。 + +### Geany + + +![用 Geany 编辑一个 LaTeX 文档][3] + + +你可以用 Geany 编辑 LaTeX 文档 + +[Geany][4] 是一个古老的编辑器,当我还在过时的硬件上运行轻量级 Linux 发行版的时候,[Geany][4] 就是一个优秀的编辑器。Geany 最初是我的 [LaTeX][5] 编辑器,但是很快就成为我进行所有文本编辑工作的应用了。 + +尽管 Geany 号称是轻量且高速的 [IDE][6](集成开发环境),但是它绝不仅仅是一个技术工具。Geany 轻便快捷,即便是在一个过时的机器或是 [运行 Linux 的 Chromebook][7] 上也能轻松运行起来。无论是编辑配置文件、维护任务列表、写文章、代码还是脚本,Geany 都能轻松胜任。 + +[插件][8] 给 Geany 带来一些额外的魅力。这些插件拓展了 Geany 的功能,让你编码或是处理一些标记语言变得更高效,帮助你处理文本,甚至做拼写检查。 + +### Atom + + +![使用 Atom 编辑网页][10] + + +使用 Atom 编辑网页 + +在文本编辑器领域,[Atom][11] 后来居上。很短的时间内,Atom 就获得了一批忠实的追随者。 + +Atom 吸引人的地方在于它的可定制性。如果你偏爱折腾技术,你完全可以在这个编辑器上随意设置。如果你不那么擅长技术,Atom 也有 [一些主题][12] ,你可以用来更改编辑器外观。 + +千万不要低估 Atom 数以千计的 [拓展包][13]。它们能在不同功能上拓展 Atom,能根据你的爱好把 Atom 转化成合适的文本编辑器或是开发环境。Atom 不仅为程序员提供服务,它同样是一款很好的 [作家的文本编辑器][14]。 + +### Xed + +![使用 Xed 编辑文章][16] + + +使用 Xed 编辑文章 + +也许对你的口味来说,Atom 和 Geany 略显臃肿。也许你只想要一个轻量级的编辑器,既不过于简陋,也没有太多你几乎用不到的特性,如此看来,[Xed][17] 正是你所期待的。 + +如果你觉得 Xed 看着眼熟,那是因为它是 MATE 桌面环境中 Pluma 编辑器的一个分支。我发现相比于 Pluma,Xed 可能速度更快一点,响应更灵敏一点--不过,因人而异吧。 + +虽然 Xed 没有那么多的功能,但也不至于太糟。它有扎实的语法高亮,略强于一般的搜索替换和拼写检查功能,以及单窗口编辑多文件的选项卡式界面。 + +### 其他值得发掘的编辑器 + +我不是 KDE 迷,但当我工作在 KDE 环境下时, [KDevelop][18] 就是我深度工作时的首选了。它很强大而且灵活,又没有过大的体积,很像 Geany。 + +虽然我还没感受过爱,但是我发誓我和我了解的几个人都在
[Brackets][19] 上感受到了它的魅力。它很强大,而且不得不承认它的 [拓展][20] 确实很实用。 + +被称为 "开发者的编辑器" 的 [Notepadqq][21] ,总让人联想到 [Notepad++][22]。虽然它的发展仍处于早期阶段,但至少它看起来还是很有前景的。 + +对于那些文本编辑需求比较简单的人来说,[Gedit][23] 和 [Kate][24] 都是极好的选择。它们绝不简陋--都有足够丰富的功能来完成繁重的文本编辑工作,而且都以快速、易用见长。 + +你有其他 Emacs 和 Vim 之外的挚爱编辑器么?欢迎留言分享。 + +### 关于作者 +Scott Nesbitt:我是自由开源软件的长期用户,写各种东西既是为了乐趣,也是为了谋生。我不把自己看得太重,所有的"特技"都是我自己完成的。你可以在网络上的这些地方找到我。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/17/9/3-alternatives-emacs-and-vim + +作者:[Scott Nesbitt][a] +译者:[CYLeft](https://github.com/CYLeft) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/scottnesbitt +[1]:https://en.wikipedia.org/wiki/Editor_war +[2]:/file/370196 +[3]:https://opensource.com/sites/default/files/u128651/geany.png (Editing a LaTeX document with Geany) +[4]:https://www.geany.org/ +[5]:https://opensource.com/article/17/6/introduction-latex +[6]:https://en.wikipedia.org/wiki/Integrated_development_environment +[7]:https://opensource.com/article/17/4/linux-chromebook-gallium-os +[8]:http://plugins.geany.org/ +[9]:/file/370191 +[10]:https://opensource.com/sites/default/files/u128651/atom.png (Editing a webpage with Atom) +[11]:https://atom.io +[12]:https://atom.io/themes +[13]:https://atom.io/packages +[14]:https://opensource.com/article/17/5/atom-text-editor-packages-writers +[15]:/file/370201 +[16]:https://opensource.com/sites/default/files/u128651/xed.png (Writing this article in Xed) +[17]:https://github.com/linuxmint/xed +[18]:https://www.kdevelop.org/ +[19]:http://brackets.io/ +[20]:https://registry.brackets.io/ +[21]:http://notepadqq.altervista.org/s/ +[22]:https://opensource.com/article/16/12/notepad-text-editor +[23]:https://wiki.gnome.org/Apps/Gedit +[24]:https://kate-editor.org/ From 6e6900cdf3a07b0365f2be1eaa49b348eb5f6920 Mon Sep 17 00:00:00 2001 From: Locez Date:
Fri, 19 Jan 2018 14:23:23 +0800 Subject: [PATCH 094/226] Reviewed by Locez --- ...r passwords using pass password manager.md | 26 +++++++++---------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/translated/tech/20171121 How to organize your passwords using pass password manager.md b/translated/tech/20171121 How to organize your passwords using pass password manager.md index b129a5daf9..be460cc720 100644 --- a/translated/tech/20171121 How to organize your passwords using pass password manager.md +++ b/translated/tech/20171121 How to organize your passwords using pass password manager.md @@ -3,9 +3,9 @@ ### 目标 -学习使用 "pass" 密码管理器来组织你的密码 +学习在 Linux 上使用 "pass" 密码管理器来管理你的密码 -### 需求 +### 条件 * 需要 root 权限来安装需要的包 @@ -16,15 +16,15 @@ ### 约定 * **#** - 执行指定命令需要 root 权限,可以是直接使用 root 用户来执行或者使用 `sudo` 命令来执行 - * **$** - 使用非特权普通用户执行指定命令 + * **$** - 使用普通的非特权用户执行指定命令 ### 介绍 -如果你有根据目的不同设置不同密码的好习惯,你可能已经感受到要一个密码管理器的必要性了。在 Linux 上有很多选择,可以是专利软件(如果你敢的话)也可以是开源软件。如果你跟我一样喜欢简洁的话,你可能会对 `pass` 感兴趣。 +如果你有根据不同的意图设置不同密码的好习惯,你可能已经感受到需要一个密码管理器的必要性了。在 Linux 上有很多选择,可以是专利软件(如果你敢的话)也可以是开源软件。如果你跟我一样喜欢简洁的话,你可能会对 `pass` 感兴趣。 ### First steps -Pass 作为一个密码管理器,其实际上是对类似 `gpg` 和 `git` 等可信赖的实用工具的一种封装。虽然它也有图形界面,但它专门设计能成在命令行下工作的:因此它也可以在 headless machines 上工作 (LCTT 注:根据 wikipedia 的说法,所谓 headless machines 是指没有显示器、键盘和鼠标的机器,一般通过网络链接来控制)。 +Pass 作为一个密码管理器,其实际上是一些你可能早已每天使用的、可信赖且实用的工具的一种封装,比如 `gpg` 和 `git` 。虽然它也有图形界面,但它专门设计能成在命令行下工作的:因此它也可以在 headless machines 上工作 (LCTT 注:根据 wikipedia 的说法,所谓 headless machines 是指没有显示器、键盘和鼠标的机器,一般通过网络链接来控制)。 ### 步骤 1 - 安装 @@ -42,7 +42,7 @@ Pass 不在官方仓库中,但你可以从 `epel` 中获取道它。要在 Cen # yum install epel-release ``` -然而在 Red Hat 企业版的 Linux 上,这个额外的源是不可用的;你需要从官方的 EPEL 网站上下载它。 +然而在 Red Hat 企业版的 Linux 上,这个额外的源是不可用的;你需要从 EPEL 官方网站上下载它。 #### Debian and Ubuntu ``` @@ -95,12 +95,12 @@ Password Store pass mysite ``` -然而更好的方法是使用 `-c` 选项让 pass 将密码直接拷贝道粘帖板上: +然而更好的方法是使用 `-c` 选项让 pass 将密码直接拷贝到剪切板上: ``` pass -c mysite ``` -这种情况下粘帖板中的内容会在 `45` 秒后自动清除。两种方法都会要求你输入 gpg 密码。 +这种情况下剪切板中的内容会在 `45` 
秒后自动清除。两种方法都会要求你输入 gpg 密码。 ### 生成密码 @@ -109,11 +109,11 @@ Pass 也可以为我们自动生成(并自动存储)安全密码。假设我们 pass generate mysite 15 ``` -若希望密码只包含字母和数字则可以是使用 `--no-symbols` 选项。生成的密码会显示在屏幕上。也可以通过 `--clip` 或 `-c` 选项让 pass 吧密码直接拷贝到粘帖板中。通过使用 `-q` 或 `--qrcode` 选项来生成二维码: +若希望密码只包含字母和数字则可以使用 `--no-symbols` 选项。生成的密码会显示在屏幕上。也可以通过 `--clip` 或 `-c` 选项让 pass 把密码直接拷贝到剪切板中。通过使用 `-q` 或 `--qrcode` 选项来生成二维码: ![qrcode][1] -从上面的截屏中尅看出,生成了一个二维码,不过由于 `mysite` 的密码以及存在了,pass 会提示我们确认是否要覆盖原密码。 +从上面的截屏中可以看出,生成了一个二维码,不过由于 `mysite` 的密码已经存在了,pass 会提示我们确认是否要覆盖原密码。 Pass 使用 `/dev/urandom` 设备作为(伪)随机数据生成器来生成密码,同时它使用 `xclip` 工具来将密码拷贝到粘帖板中,同时使用 `qrencode` 来将密码以二维码的形式显示出来。在我看来,这种模块化的设计正是它最大的优势:它并不重复造轮子,而只是将常用的工具包装起来完成任务。 @@ -131,9 +131,9 @@ pass git init pass git remote add -我们可以把这个仓库当成普通密码仓库来用。唯一的不同点在于每次我们新增或修改一个密码,`pass` 都会自动将该文件加入索引并创建一个提交。 +我们可以把这个密码仓库当成普通仓库来用。唯一的不同点在于每次我们新增或修改一个密码,`pass` 都会自动将该文件加入索引并创建一个提交。 -`pass` 有一个叫做 `qtpass` 的图形界面,而且 `pass` 也支持 Windows 和 MacOs。通过使用 `PassFF` 插件,它还能获取 firefox 中存储的密码。在它的项目网站上可以查看更多详细信息。试一下 `pass` 吧,你不会失望的! +`pass` 有一个叫做 `qtpass` 的图形界面,而且也支持 Windows 和 MacOs。通过使用 `PassFF` 插件,它还能获取 firefox 中存储的密码。在它的项目网站上可以查看更多详细信息。试一下 `pass` 吧,你不会失望的!
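上文提到 `pass` 借助 `/dev/urandom` 来生成随机密码。如果想在自己的脚本中得到类似 `pass generate --no-symbols mysite 15` 的效果,可以参考下面这个 Python 示意片段(仅用于演示随机密码的生成思路,并非 `pass` 的实际实现;Python 的 `secrets` 模块同样使用操作系统提供的随机源):

```python
import secrets
import string

def generate_password(length=15, symbols=False):
    """生成指定长度的随机密码;symbols=False 时只包含字母和数字,
    效果类似 pass generate 的 --no-symbols 选项。"""
    alphabet = string.ascii_letters + string.digits
    if symbols:
        alphabet += string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())   # 每次运行输出都不同
```

当然,生成之后的密码仍需要妥善保存——这正是 `pass` 用 gpg 加密存储所解决的问题。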
-------------------------------------------------------------------------------- @@ -142,7 +142,7 @@ via: https://linuxconfig.org/how-to-organize-your-passwords-using-pass-password- 作者:[Egidio Docile][a] 译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) +校对:[Locez](https://github.com/locez) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From ac3835d416f21cdff3f00391c51e3c6951045f8f Mon Sep 17 00:00:00 2001 From: darksun Date: Fri, 19 Jan 2018 14:58:52 +0800 Subject: [PATCH 095/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20get=20?= =?UTF-8?q?into=20DevOps?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../talk/20180117 How to get into DevOps.md | 143 ++++++++++++++++++ 1 file changed, 143 insertions(+) create mode 100644 sources/talk/20180117 How to get into DevOps.md diff --git a/sources/talk/20180117 How to get into DevOps.md b/sources/talk/20180117 How to get into DevOps.md new file mode 100644 index 0000000000..09e50ae4f2 --- /dev/null +++ b/sources/talk/20180117 How to get into DevOps.md @@ -0,0 +1,143 @@ +How to get into DevOps +====== +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003784_02_os.comcareers_resume_rh1x.png?itok=S3HGxi6E) + +I've observed a sharp uptick of developers and systems administrators interested in "getting into DevOps" within the past year or so. This pattern makes sense: In an age in which a single developer can spin up a globally distributed infrastructure for an application with a few dollars and a few API calls, the gap between development and systems administration is narrower than ever. Although I've seen plenty of blog posts and articles about cool DevOps tools and the ideas behind them, I've seen less content offering pointers and suggestions for people looking to get into this work. + +My goal with this article is to draw what that path looks like.
My thoughts are based upon several interviews, chats, late-night discussions on [reddit.com/r/devops][1], and random conversations, likely over beer and delicious food. I'm also interested in hearing feedback from those who have made the jump; if you have, please reach out through [my blog][2], [Twitter][3], or in the comments below. I'd love to hear your thoughts and stories. + +### Olde world IT + +Understanding history is key to understanding the future, and DevOps is no exception. To understand the pervasiveness and popularity of the DevOps movement, understanding what IT was like in the late '90s and most of the '00s is helpful. This was my experience. + +I started my career in late 2006 as a Windows systems administrator in a large, multi-national financial services firm. In those days, adding new compute involved calling Dell (or, in our case, CDW) and placing a multi-hundred-thousand-dollar order of servers, networking equipment, cables, and software, all destined for your on- and offsite datacenters. Although VMware was still convincing companies that using virtual machines was, indeed, a cost-effective way of hosting its "performance-sensitive" application, many companies, including mine, pledged allegiance to running applications on their physical hardware. + +Our technology department had an entire group dedicated to datacenter engineering and operations, and its job was to negotiate our leasing rates down to some slightly less absurd monthly rate and ensure that our systems were being cooled properly (an exponentially difficult problem if you have enough equipment). If the group was lucky/wealthy enough, the offshore datacenter crew knew enough about all of our server models to not accidentally pull the wrong thing during after-hours trading. Amazon Web Services and Rackspace were slowly beginning to pick up steam, but were far from critical mass. 
+ +In those days, we also had teams dedicated to ensuring that the operating systems and software running on top of that hardware worked when they were supposed to. The engineers were responsible for designing reliable architectures for patching, monitoring, and alerting these systems as well as defining what the "gold image" looked like. Most of this work was done with a lot of manual experimentation, and the extent of most tests was writing a runbook describing what you did, and ensuring that what you did actually did what you expected it to do after following said runbook. This was important in a large organization like ours, since most of the level 1 and 2 support was offshore, and the extent of their training ended with those runbooks. + +(This is the world that your author lived in for the first three years of his career. My dream back then was to be the one who made the gold standard!) + +Software releases were another beast altogether. Admittedly, I didn't gain a lot of experience working on this side of the fence. However, from stories that I've gathered (and recent experience), much of the daily grind for software development during this time went something like this: + + * Developers wrote code as specified by the technical and functional requirements laid out by business analysts from meetings they weren't invited to. + * Optionally, developers wrote unit tests for their code to ensure that it didn't do anything obviously crazy, like try to divide by zero without throwing an exception. + * When done, developers would mark their code as "Ready for QA." A quality assurance person would pick up the code and run it in their own environment, which might or might not be like production or even the environment the developer used to test their own code against. + * Failures would get sent back to the developers within "a few days or weeks" depending on other business activities and where priorities fell.
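As a concrete sketch of the kind of minimal unit test described in the second bullet (purely illustrative — the `safe_divide` function and its behavior are hypothetical, not taken from any real codebase):

```python
def safe_divide(a, b):
    """Hypothetical business function: divide, refusing b == 0 with a clear exception."""
    if b == 0:
        raise ZeroDivisionError("refusing to divide by zero")
    return a / b

def test_safe_divide():
    # The happy path...
    assert safe_divide(10, 2) == 5.0
    # ...and the "obviously crazy" case: dividing by zero must raise, not return garbage.
    try:
        safe_divide(1, 0)
    except ZeroDivisionError:
        pass
    else:
        raise AssertionError("expected ZeroDivisionError")

test_safe_divide()
print("all tests passed")
```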
+ + + +Although sysadmins and developers didn't often see eye to eye, the one thing they shared a common hatred for was "change management." This was a set of highly regulated (and, in the case of my employer at the time, highly necessary) rules and procedures governing when and how technical changes happened in a company. Most companies followed [ITIL][4] practices, which, in a nutshell, asked a lot of questions around why, when, where, and how things happened and provided a process for establishing an audit trail of the decisions that led up to those answers. + +As you could probably gather from my short history lesson, many, many things were done manually in IT. This led to a lot of mistakes. Lots of mistakes led up to lots of lost revenue. Change management's job was to minimize those lost revenues; this usually came in the form of releases only every two weeks and changes to servers, regardless of their impact or size, queued up to occur between Friday at 4 p.m. and Monday at 5:59 a.m. (Ironically, this batching of work led to even more mistakes, usually more serious ones.) + +### DevOps isn't a Tiger Team + +You might be thinking "What is Carlos going on about, and when is he going to talk about Ansible playbooks?" I love Ansible tons, but hang on; this is important. + +Have you ever been assigned to a project where you had to interact with the "DevOps" team? Or did you have to rely on a "configuration management" or "CI/CD" team to ensure your pipeline was set up properly? Have you had to attend meetings about your release and what it pertains to--weeks after the work was marked "code complete"? + +If so, then you're reliving history. All of that comes from all of the above. + +[Silos form][5] out of an instinctual draw to working with people like ourselves. Naturally, it's no surprise that this human trait also manifests in the workplace. I even saw this play out at a 250-person startup where I used to work.
When I started, developers all worked in common pods and collaborated heavily with each other. As the codebase grew in complexity, developers who worked on common features naturally aligned with each other to try and tackle the complexity within their own feature. Soon afterwards, feature teams were officially formed. + +Sysadmins and developers at many of the companies I worked at not only formed natural silos like this, but also fiercely competed with each other. Developers were mad at sysadmins when their environments were broken. Developers were mad at sysadmins when their environments were too locked down. Sysadmins were mad that developers were breaking their environments in arbitrary ways all of the time. Sysadmins were mad at developers for asking for way more computing power than they needed. Neither side understood each other, and worse yet, neither side wanted to. + +Most developers were uninterested in the basics of operating systems, kernels, or, in some cases, computer hardware. As well, most sysadmins, even Linux sysadmins, took a 10-foot pole approach to learning how to code. They tried a bit of C in college, hated it and never wanted to touch an IDE again. Consequently, developers threw their environment problems over the wall to sysadmins, sysadmins prioritized them with the hundreds of other things that were thrown over the wall to them, and everyone busy-waited angrily while hating each other. The purpose of DevOps was to put an end to this. + +DevOps isn't a team. CI/CD isn't a group in Jira. DevOps is a way of thinking. According to the movement, in an ideal world, developers, sysadmins, and business stakeholders would be working as one team. While they might not know everything about each other's worlds, not only do they all know enough to understand each other and their backlogs, but they can, for the most part, speak the same language. 
+ +This is the basis behind having all infrastructure and business logic be in code and subject to the same deployment pipelines as the software that sits on top of it. Everybody is winning because everyone understands each other. This is also the basis behind the rise of other tools like chatbots and easily accessible monitoring and graphing. + +[Adam Jacob said][6] it best: "DevOps is the word we will use to describe the operational side of the transition to enterprises being software led." + +### What do I need to know to get into DevOps? + +I'm commonly asked this question, and the answer, like most open-ended questions like this, is: It depends. + +At the moment, the "DevOps engineer" varies from company to company. Smaller companies that have plenty of software developers but fewer folks that understand infrastructure will likely look for people with more experience administrating systems. Other, usually larger and/or older companies that have a solid sysadmin organization will likely optimize for something closer to a [Google site reliability engineer][7], i.e. "a software engineer to design an operations function." This isn't written in stone, however, as, like any technology job, the decision largely depends on the hiring manager sponsoring it. 
+ +That said, we typically look for engineers who are interested in learning more about: + + * How to administrate and architect secure and scalable cloud platforms (usually on AWS, but Azure, Google Cloud Platform, and PaaS providers like DigitalOcean and Heroku are popular too); + * How to build and optimize deployment pipelines and deployment strategies on popular [CI/CD][8] tools like Jenkins, Go continuous delivery, and cloud-based ones like Travis CI or CircleCI; + * How to monitor, log, and alert on changes in your system with timeseries-based tools like Kibana, Grafana, Splunk, Loggly, or Logstash; and + * How to maintain infrastructure as code with configuration management tools like Chef, Puppet, or Ansible, as well as deploy said infrastructure with tools like Terraform or CloudFormation. + + + +Containers are becoming increasingly popular as well. Despite the [beef against the status quo][9] surrounding Docker at scale, containers are quickly becoming a great way of achieving an extremely high density of services and applications running on fewer systems while increasing their reliability. (Orchestration tools like Kubernetes or Mesos can spin up new containers in seconds if the host they're being served by fails.) Given this, having knowledge of Docker or rkt and an orchestration platform will go a long way. + +If you're a systems administrator that's looking to get into DevOps, you will also need to know how to write code. Python and Ruby are popular languages for this purpose, as they are portable (i.e., can be used on any operating system), fast, and easy to read and learn. They also form the underpinnings of the industry's most popular configuration management tools (Python for Ansible, Ruby for Chef and Puppet) and cloud API clients (Python and Ruby are commonly used for AWS, Azure, and Google Cloud Platform clients). 
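One idea that underpins all of the configuration management tools named above is idempotency: declare the desired state, and change the system only when it drifts from that state. Here is a toy Python sketch of the principle (illustrative only — Ansible, Chef, and Puppet implement this far more robustly):

```python
import json
import os
import tempfile
from pathlib import Path

def ensure_config(path, desired):
    """Write `desired` (a dict) as JSON to `path`, but only when the file is
    missing or differs from the desired state. Returns True when a change was
    made ("changed", in Ansible-speak) and False when already converged."""
    path = Path(path)
    rendered = json.dumps(desired, indent=2, sort_keys=True)
    if path.exists() and path.read_text() == rendered:
        return False          # already in the desired state: do nothing
    path.write_text(rendered)
    return True

# The first run converges the system; the second is a no-op.
cfg = os.path.join(tempfile.mkdtemp(), "app.json")
print(ensure_config(cfg, {"port": 8080, "workers": 4}))   # True  (changed)
print(ensure_config(cfg, {"port": 8080, "workers": 4}))   # False (already converged)
```

Running it twice converges on the first pass and reports no change on the second — the same contract Ansible expresses with its changed/ok task status.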
+ +If you're a developer looking to make this change, I highly recommend learning more about Unix, Windows, and networking fundamentals. Even though the cloud abstracts away many of the complications of administrating a system, debugging slow application performance is aided greatly by knowing how these things work. I've included a few books on this topic in the next section. + +If this sounds overwhelming, you aren't alone. Fortunately, there are plenty of small projects to dip your feet into. One such toy project is Gary Stafford's Voter Service, a simple Java-based voting platform. We ask our candidates to take the service from GitHub to production infrastructure through a pipeline. One can combine that with Rob Mile's awesome DevOps Tutorial repository to learn about ways of doing this. + +Another great way of becoming familiar with these tools is taking popular services and setting up an infrastructure for them using nothing but AWS and configuration management. Set it up manually first to get a good idea of what to do, then replicate what you just did using nothing but CloudFormation (or Terraform) and Ansible. Surprisingly, this is a large part of the work that we infrastructure devs do for our clients on a daily basis. Our clients find this work to be highly valuable! + +### Books to read + +If you're looking for other resources on DevOps, here are some theory and technical books that are worth a read. + +#### Theory books + + * [The Phoenix Project][10] by Gene Kim. This is a great book that covers much of the history I explained earlier (with much more color) and describes the journey to a lean company running on agile and DevOps. + * [Driving Technical Change][11] by Terrance Ryan. Awesome little book on common personalities within most technology organizations and how to deal with them. This helped me out more than I expected. + * [Peopleware][12] by Tom DeMarco and Tim Lister. A classic on managing engineering organizations. 
A bit dated, but still relevant. + * [Time Management for System Administrators][13] by Tom Limoncelli. While this is heavily geared towards sysadmins, it provides great insight into the life of a systems administrator at most large organizations. If you want to learn more about the war between sysadmins and developers, this book might explain more. + * [The Lean Startup][14] by Eric Ries. Describes how Eric's 3D avatar company, IMVU, discovered how to work lean, fail fast, and find profit faster. + * [Lean Enterprise][15] by Jez Humble and friends. This book is an adaption of The Lean Startup for the enterprise. Both are great reads and do a good job of explaining the business motivation behind DevOps. + * [Infrastructure As Code][16] by Kief Morris. Awesome primer on, well, infrastructure as code! It does a great job of describing why it's essential for any business to adopt this for their infrastructure. + * [Site Reliability Engineering][17] by Betsy Beyer, Chris Jones, Jennifer Petoff, and Niall Richard Murphy. A book explaining how Google does SRE, or also known as "DevOps before DevOps was a thing." Provides interesting opinions on how to handle uptime, latency, and keeping engineers happy. + + + +#### Technical books + +If you're looking for books that'll take you straight to code, you've come to the right section. + + * [TCP/IP Illustrated][18] by the late W. Richard Stevens. This is the classic (and, arguably, complete) tome on the fundamental networking protocols, with special emphasis on TCP/IP. If you've heard of Layers 1, 2, 3, and 4 and are interested in learning more, you'll need this book. + * [UNIX and Linux System Administration Handbook][19] by Evi Nemeth, Trent Hein, and Ben Whaley. A great primer into how Linux and Unix work and how to navigate around them. + * [Learn Windows Powershell In A Month of Lunches][20] by Don Jones and Jeffrey Hicks. If you're doing anything automated with Windows, you will need to learn how to use Powershell. 
This is the book that will help you do that. Don Jones is a well-known MVP in this space. + * Practically anything by [James Turnbull][21]. He puts out great technical primers on popular DevOps-related tools. + + + +From companies deploying everything to bare metal (there are plenty that still do, for good reasons) to trailblazers doing everything serverless, DevOps is likely here to stay for a while. The work is interesting, the results are impactful, and, most important, it helps bridge the gap between technology and business. It's a wonderful thing to see. + +Originally published at [Neurons Firing on a Keyboard][22], CC-BY-SA. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/1/getting-devops + +作者:[Carlos Nunez][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/carlosonunez +[1]:https://www.reddit.com/r/devops/ +[2]:https://carlosonunez.wordpress.com/ +[3]:https://twitter.com/easiestnameever +[4]:https://en.wikipedia.org/wiki/ITIL +[5]:https://www.psychologytoday.com/blog/time-out/201401/getting-out-your-silo +[6]:https://twitter.com/adamhjk/status/572832185461428224 +[7]:https://landing.google.com/sre/interview/ben-treynor.html +[8]:https://en.wikipedia.org/wiki/CI/CD +[9]:https://thehftguy.com/2016/11/01/docker-in-production-an-history-of-failure/ +[10]:https://itrevolution.com/book/the-phoenix-project/ +[11]:https://pragprog.com/book/trevan/driving-technical-change +[12]:https://en.wikipedia.org/wiki/Peopleware:_Productive_Projects_and_Teams +[13]:http://shop.oreilly.com/product/9780596007836.do +[14]:http://theleanstartup.com/ +[15]:https://info.thoughtworks.com/lean-enterprise-book.html +[16]:http://infrastructure-as-code.com/book/ +[17]:https://landing.google.com/sre/book.html 
+[18]:https://en.wikipedia.org/wiki/TCP/IP_Illustrated +[19]:http://www.admin.com/ +[20]:https://www.manning.com/books/learn-windows-powershell-in-a-month-of-lunches-third-edition +[21]:https://jamesturnbull.net/ +[22]:https://carlosonunez.wordpress.com/2017/03/02/getting-into-devops/ From b508a5eec456eff2e3a71bb5098cda45c1ff5174 Mon Sep 17 00:00:00 2001 From: darksun Date: Fri, 19 Jan 2018 15:16:37 +0800 Subject: [PATCH 096/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Avoiding=20Server?= =?UTF-8?q?=20Disaster?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../tech/20180117 Avoiding Server Disaster.md | 125 ++++++++++++++++++ 1 file changed, 125 insertions(+) create mode 100644 sources/tech/20180117 Avoiding Server Disaster.md diff --git a/sources/tech/20180117 Avoiding Server Disaster.md b/sources/tech/20180117 Avoiding Server Disaster.md new file mode 100644 index 0000000000..cb88fe20d9 --- /dev/null +++ b/sources/tech/20180117 Avoiding Server Disaster.md @@ -0,0 +1,125 @@ +Avoiding Server Disaster +====== + +Worried that your server will go down? You should be. Here are some disaster-planning tips for server owners. + +If you own a car or a house, you almost certainly have insurance. Insurance seems like a huge waste of money. You pay it every year and make sure that you get the best possible price for the best possible coverage, and then you hope you never need to use the insurance. Insurance seems like a really bad deal—until you have a disaster and realize that had it not been for the insurance, you might have been in financial ruin. + +Unfortunately, disasters and mishaps are a fact of life in the computer industry. And so, just as you pay insurance and hope never to have to use it, you also need to take time to ensure the safety and reliability of your systems—not because you want disasters to happen, or even expect them to occur, but rather because you have to. 
+ +If your website is an online brochure for your company and it goes down for a few hours or even days, it'll be embarrassing and annoying, but not financially painful. But, if your website is your business, when your site goes down, you're losing money. If that's the case, it's crucial to ensure that your server and software are not only unlikely to go down, but also easily recoverable if and when that happens. + +Why am I writing about this subject? Well, let's just say that this particular problem hit close to home for me, just before I started to write this article. After years of helping clients around the world to ensure the reliability of their systems, I made the mistake of not being as thorough with my own. ("The shoemaker's children go barefoot", as the saying goes.) This means that just after launching my new online product for Python developers, a seemingly trivial upgrade turned into a disaster. The precautions I put in place, it turns out, weren't quite enough—and as I write this, I'm still putting my web server together. I'll survive, as will my server and business, but this has been a painful and important lesson—one that I'll do almost anything to avoid repeating in the future. + +So in this article, I describe a number of techniques I've used to keep servers safe and sound through the years, and to reduce the chances of a complete meltdown. You can think of these techniques as insurance for your server, so that even if something does go wrong, you'll be able to recover fairly quickly. + +I should note that most of the advice here assumes no redundancy in your architecture—that is, a single web server and (at most) a single database server. If you can afford to have a bunch of servers of each type, these sorts of problems tend to be much less frequent. However, that doesn't mean they go away entirely. 
Besides, although people like to talk about heavy-duty web applications that require massive iron in order to run, the fact is that many businesses run on small, one- and two-computer servers. Moreover, those businesses don't need more than that; the ROI (return on investment) they'll get from additional servers cannot be justified. However, the ROI from a good backup and recovery plan is huge, and thus worth the investment. + +### The Parts of a Web Application + +Before I can talk about disaster preparation and recovery, it's important to consider the different parts of a web application and what those various parts mean for your planning. + +For many years, my website was trivially small and simple. Even if it contained some simple programs, those generally were used for sending email or for dynamically displaying different assets to visitors. The entire site consisted of some static HTML, images, JavaScript and CSS. No database or other excitement was necessary. + +At the other end of the spectrum, many people have full-blown web applications, sitting on multiple servers, with one or more databases and caches, as well as HTTP servers with extensively edited configuration files. + +But even when considering those two extremes, you can see that a web application consists of only a few parts: + +* The application software itself. + +* Static assets for that application. + +* Configuration file(s) for the HTTP server(s). + +* Database configuration files. + +* Database schema and contents. + +Assuming that you're using a high-level language, such as Python, Ruby or JavaScript, everything in this list either is a file or can be turned into one. (All databases make it possible to "dump" their contents onto disk, into a format that then can be loaded back into the database server.) + +Consider a site containing only application software, static assets and configuration files. (In other words, no database is involved.) 
In many cases, such a site can be backed up reliably in Git. Indeed, I prefer to keep my sites in Git, backed up on a commercial hosting service, such as GitHub or Bitbucket, and then deployed using a system like Capistrano. + +In other words, you develop the site on your own development machine. Whenever you are happy with a change that you've made, you commit the change to Git (on your local machine) and then do a git push to your central repository. In order to deploy your application, you then use Capistrano to do a cap deploy, which reads the data from the central repository, puts it into the appropriate place on the server's filesystem, and you're good to go. + +This system keeps you safe in a few different ways. The code itself is located in at least three locations: your development machine, the server and the repository. And those central repositories tend to be fairly reliable, if only because it's in the financial interest of the hosting company to ensure that things are reliable. + +I should add that in such a case, you also should include the HTTP server's configuration files in your Git repository. Those files aren't likely to change very often, but I can tell you from experience, if you're recovering from a crisis, the last thing you want to think about is how your Apache configuration files should look. Copying those files into your Git repository will work just fine. + +### Backing Up Databases + +You could argue that the difference between a "website" and a "web application" is a database. Databases long have powered the back ends of many web applications and for good reason—they allow you to store and retrieve data reliably and flexibly. The power that modern open-source databases provide was unthinkable just a decade or two ago, and there's no reason to think that they'll be any less reliable in the future. + +And yet, just because your database is pretty reliable doesn't mean that it won't have problems. 
This means you're going to want to keep a snapshot ("dump") of the database's contents around, in case the database server corrupts information, and you need to roll back to a previous version. + +My favorite solution for such a problem is to dump the database on a regular basis, preferably hourly. Here's a shell script I've used, in one form or another, for creating such regular database dumps: + +``` + +#!/bin/sh + +BACKUP_ROOT="/home/database-backups/" +YEAR=`/bin/date +'%Y'` +MONTH=`/bin/date +'%m'` +DAY=`/bin/date +'%d'` + +DIRECTORY="$BACKUP_ROOT/$YEAR/$MONTH/$DAY" +USERNAME=dbuser +DATABASE=dbname +HOST=localhost +PORT=3306 + +/bin/mkdir -p $DIRECTORY + +/usr/bin/mysqldump -h $HOST -P $PORT --databases $DATABASE -u $USERNAME \ +    | /bin/gzip --best --verbose \ +    > $DIRECTORY/$DATABASE-dump.gz + +``` + +The above shell script starts off by defining a bunch of variables, from the directory in which I want to store the backups, to the parts of the date (stored in $YEAR, $MONTH and $DAY). This is so I can have a separate directory for each day of the month. I could, of course, go further and have separate directories for each hour, but I've found that I rarely need more than one backup from a day. + +Once I have defined those variables, I then use the mkdir command to create a new directory. The -p option tells mkdir that if necessary, it should create all of the directories it needs such that the entire path will exist. + +Finally, I then run the database's "dump" command. In this particular case, I'm using MySQL, so I'm using the mysqldump command. The output from this command is a stream of SQL that can be used to re-create the database. I thus take the output from mysqldump and pipe it into gzip, which compresses the output file. Finally, the resulting dumpfile is placed, in compressed form, inside the daily backup directory. 
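As a quick sanity check of the date-based layout described above, the same directory scheme can be exercised with a stand-in for the mysqldump output. This is a minimal sketch, not part of the original script: the temporary root, the `dbname` label and the one-line SQL placeholder are illustrative only.

```shell
#!/bin/sh

# Throwaway root so the sketch is self-contained; the real script
# uses a fixed BACKUP_ROOT such as /home/database-backups/
BACKUP_ROOT=$(mktemp -d)

# Same YEAR/MONTH/DAY layout as the backup script above
DIRECTORY="$BACKUP_ROOT/$(date +'%Y')/$(date +'%m')/$(date +'%d')"
mkdir -p "$DIRECTORY"

# Stand-in for the mysqldump | gzip pipeline
printf 'CREATE TABLE t (id INT);\n' | gzip --best > "$DIRECTORY/dbname-dump.gz"

# Read the dump back without uncompressing it to disk
gunzip -c "$DIRECTORY/dbname-dump.gz"
```

Running it prints the stand-in SQL back out, which is the same round trip a real restore would begin with.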
+ +Depending on the size of your database and the amount of disk space you have on hand, you'll have to decide just how often you want to run dumps and how often you want to clean out old ones. I know from experience that dumping every hour can cause some load problems. On one virtual machine I've used, the overall administration team was unhappy that I was dumping and compressing every hour, which they saw as an unnecessary use of system resources. + +If you're worried your system will run out of disk space, you might well want to run a space-checking program that'll alert you when the filesystem is low on free space. In addition, you can run a cron job that uses find to erase all dumpfiles from before a certain cutoff date. I'm always a bit nervous about programs that automatically erase backups, so I generally prefer not to do this. Rather, I run a program that warns me if the disk usage is going above 85% (which is usually low enough to ensure that I can fix the problem in time, even if I'm on a long flight). Then I can go in and remove the problematic files by hand. + +When you back up your database, you should be sure to back up the configuration for that database as well. The database schema and data, which are part of the dumpfile, are certainly important. However, if you find yourself having to re-create your server from scratch, you'll want to know precisely how you configured the database server, with a particular emphasis on the filesystem configuration and memory allocations. I tend to use PostgreSQL for most of my work, and although postgresql.conf is simple to understand and configure, I still like to keep it around with my dumpfiles. + +Another crucial thing to do is to check your database dumps occasionally to be sure that they are working the way you want. 
It turns out that the backups I thought I was making weren't actually happening, in no small part because I had modified the shell script and hadn't double-checked that it was creating useful backups. Occasionally pulling out one of your dumpfiles and restoring it to a separate (and offline!) database to check its integrity is a good practice, both to ensure that the dump is working and that you remember how to restore it in the case of an emergency. + +### Storing Backups + +But wait. It might be great to have these backups, but what if the server goes down entirely? In the case of the code, I mentioned keeping it on more than one machine to ensure its integrity. By contrast, your database dumps are now on the server, such that if the server fails, your database dumps will be inaccessible. + +This means you'll want to have your database dumps stored elsewhere, preferably automatically. How can you do that? + +There are a few relatively easy and inexpensive solutions to this problem. If you have two servers—ideally in separate physical locations—you can use rsync to copy the files from one to the other. Don't rsync the database's actual files, since those might get corrupted in transfer and aren't designed to be copied when the server is running. By contrast, the dumpfiles that you have created are more than able to go elsewhere. Setting up a remote server, with a user specifically for handling these backup transfers, shouldn't be too hard and will go a long way toward ensuring the safety of your data. + +I should note that using rsync in this way basically requires that you set up passwordless SSH, so that you can transfer without having to be physically present to enter the password. + +Another possible solution is Amazon's Simple Storage Service (S3), which offers astonishing amounts of disk space at very low prices. I know that many companies use S3 as a simple (albeit slow) backup system. 
You can set up a cron job to run a program that copies the contents of a particular database dumpfile directory onto a particular server. The assumption here is that you're not ever going to use these backups, meaning that S3's slow searching and access will not be an issue once you're working on the server. + +Similarly, you might consider using Dropbox. Dropbox is best known for its desktop client, but it has a "headless", text-based client that can be used on Linux servers without a GUI connected. One nice advantage of Dropbox is that you can share a folder with any number of people, which means you can have Dropbox distribute your backup databases everywhere automatically, including to a number of people on your team. The backups arrive in their Dropbox folder, and each of those people then has an up-to-date, off-site copy of your data. + +Finally, if you're running a WordPress site, you might want to consider VaultPress, a for-pay backup system. I must admit that in the weeks before I took my server down with a database backup error, I kept seeing ads in WordPress for VaultPress. "Who would buy that?", I asked myself, thinking that I'm smart enough to do backups myself. Of course, after disaster occurred and my database was ruined, I realized that $30/year to back up all of my data is cheap, and I should have done it before. + +### Conclusion + +When it comes to your servers, think less like an optimistic programmer and more like an insurance agent. Perhaps disaster won't strike, but if it does, will you be able to recover? Making sure that even if your server is completely unavailable, you'll be able to bring up your program and any associated database is crucial. + +My preferred solution involves combining a Git repository for code and configuration files, distributed across several machines and services. For the databases, however, it's not enough to dump your database; you'll need to get that dump onto a separate machine, and preferably test the backup file on a regular basis. 
That way, even if things go wrong, you'll be able to get back up in no time. + +-------------------------------------------------------------------------------- + +via: http://www.linuxjournal.com/content/avoiding-server-disaster + +作者:[Reuven M. Lerner][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.linuxjournal.com/user/1000891 From b86f6c36a905ee4212cd49e823271525456e82da Mon Sep 17 00:00:00 2001 From: darksun Date: Fri, 19 Jan 2018 15:16:56 +0800 Subject: [PATCH 097/226] add done: 20180117 Avoiding Server Disaster.md --- "sources/```\n```\ntech" | 127 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 127 insertions(+) create mode 100644 "sources/```\n```\ntech" From ef8655e09d63872aed515be45f45060f76dda777 Mon Sep 17 00:00:00 2001 From: darksun Date: Fri, 19 Jan 2018 15:20:04 +0800 Subject: [PATCH 098/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=204=20artificial=20?= =?UTF-8?q?intelligence=20trends=20to=20watch?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...artificial intelligence trends to watch.md | 56 +++++++++++++++++++ 1 file changed, 56 insertions(+) create mode 100644 sources/talk/20180104 4 artificial intelligence trends to watch.md diff --git a/sources/talk/20180104 4 artificial intelligence trends to watch.md b/sources/talk/20180104 4 artificial intelligence trends to watch.md new file mode 100644 index 0000000000..9c84bba147 --- /dev/null +++ b/sources/talk/20180104 4 artificial intelligence trends to watch.md @@ -0,0 +1,56 @@ +4 artificial intelligence trends to watch +====== + +![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/CIO%20Mentor.png?itok=K-6s_q2C) + +However much your IT operation is using [artificial intelligence][1] today, expect to be doing more with it in 2018. Even if you have never dabbled in AI projects, this may be the year talk turns into action, says David Schatsky, managing director at [Deloitte][2]. "The number of companies doing something with AI is on track to rise," he says. + +Check out his AI predictions for the coming year: + +### 1. 
Expect more enterprise AI pilot projects + +Many of today's off-the-shelf applications and platforms that companies already routinely use incorporate AI. "But besides that, a growing number of companies are experimenting with machine learning or natural language processing to solve particular problems or help understand their data, or automate internal processes, or improve their own products and services," Schatsky says. + +**[ What IT jobs will be hot in the AI age? See our related article, [8 emerging AI jobs for IT pros][3]. ]** + +"Beyond that, the intensity with which companies are working with AI will rise," he says. "Companies that are early adopters already mostly have five or fewer projects underway, but we think that number will rise to having 10 or more pilots underway." One reason for this prediction, he says, is that AI technologies are getting better and easier to use. + +### 2. AI will help with data science talent crunch + +Talent is a huge problem in data science, where most large companies are struggling to hire the data scientists they need. AI can take up some of the load, Schatsky says. "The practice of data science is increasingly automatable with tools offered both by startups and large, established technology vendors," he says. A lot of data science work is repetitive and tedious, and ripe for automation, he explains. "Data scientists aren't going away, but they're going to get much more productive. So a company that can only do a few data science projects without automation will be able to do much more with automation, even if it can't hire any more data scientists." + +### 3. Synthetic data models will ease bottlenecks + +Before you can train a machine learning model, you have to get the data to train it on, Schatsky notes. That's not always easy. "That's often a business bottleneck, not a production bottleneck," he says. 
In some cases you can't get the data because of regulations governing things like health records and financial information. + +Synthetic data models can take a smaller set of data and use it to generate the larger set that may be needed, he says. "If you used to need 10,000 data points to train a model but could only get 2,000, you can now generate the missing 8,000 and go ahead and train your model." + +### 4. AI decision-making will become more transparent + +One of the business problems with AI is that it often operates as a black box. That is, once you train a model, it will spit out answers that you can't necessarily explain. "Machine learning can automatically discover patterns in data that a human can't see because it's too much data or too complex," Schatsky says. "Having discovered these patterns, it can make predictions about new data it hasn't seen." + +The problem is that sometimes you really do need to know the reasons behind an AI finding or prediction. "You feed in a medical image and the model says, based on the data you've given me, there's a 90 percent chance that there's a tumor in this image," Schatsky says. "You say, 'Why do you think so?' and the model says, 'I don't know, that's what the data would suggest.'" + +If you follow that data, you're going to have to do exploratory surgery on a patient, Schatsky says. That's a tough call to make when you can't explain why. "There are a lot of situations where even though the model produces very accurate results, if it can't explain how it got there, nobody wants to trust it." + +There are also situations where because of regulations, you literally can't use data that you can't explain. "If a bank declines a loan application, it needs to be able to explain why," Schatsky says. "That's a regulation, at least in the U.S. Traditionally, a human underwriter makes that call. A machine learning model could be more accurate, but if it can't explain its answer, it can't be used." 
+ +Most algorithms were not designed to explain their reasoning. "So researchers are finding clever ways to get AI to spill its secrets and explain what variables make it more likely that this patient has a tumor," he says. "Once they do that, a human can look at the answers and see why it came to that conclusion." + +That means AI findings and decisions can be used in many areas where they can't be today, he says. "That will make these models more trustworthy and more usable in the business world." + + +-------------------------------------------------------------------------------- + +via: https://enterprisersproject.com/article/2018/1/4-ai-trends-watch + +作者:[Minda Zetlin][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://enterprisersproject.com/user/minda-zetlin +[1]:https://enterprisersproject.com/tags/artificial-intelligence +[2]:https://www2.deloitte.com/us/en.html +[3]:https://enterprisersproject.com/article/2017/12/8-emerging-ai-jobs-it-pros?sc_cid=70160000000h0aXAAQ From 8395542d4e3225593928009ec55803de108a1a34 Mon Sep 17 00:00:00 2001 From: darksun Date: Fri, 19 Jan 2018 15:22:21 +0800 Subject: [PATCH 099/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Why=20DevSecOps?= =?UTF-8?q?=20matters=20to=20IT=20leaders?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...115 Why DevSecOps matters to IT leaders.md | 86 +++++++++++++++++++ 1 file changed, 86 insertions(+) create mode 100644 sources/talk/20180115 Why DevSecOps matters to IT leaders.md diff --git a/sources/talk/20180115 Why DevSecOps matters to IT leaders.md b/sources/talk/20180115 Why DevSecOps matters to IT leaders.md new file mode 100644 index 0000000000..e731013e2b --- /dev/null +++ b/sources/talk/20180115 Why DevSecOps matters to IT leaders.md @@ -0,0 +1,86 @@ +Why DevSecOps matters to IT leaders +====== + 
+![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/TEP_SecurityTraining1_620x414_1014.png?itok=zqxqJGDG) + +If [DevOps][1] is ultimately about building better software, that means better-secured software, too. + +Enter the term "DevSecOps." Like any IT term, DevSecOps - a descendant of the better-established DevOps - could be susceptible to hype and misappropriation. But the term has real meaning for IT leaders who've embraced a culture of DevOps and the practices and tools that help deliver on its promise. + +Speaking of which: What does "DevSecOps" mean? + +"DevSecOps is a portmanteau of development, security, and operations," says Robert Reeves, CTO and co-founder at [Datical][2]. "It reminds us that security is just as important to our applications as creating them and deploying them to production." + +**[ Want DevOps advice from other CIOs? See our comprehensive resource, [DevOps: The IT Leader's Guide][3]. ]** + +One easy way to explain DevSecOps to non-technical people: It bakes security into the development process intentionally and earlier. + +"Security teams have historically been isolated from development teams - and each team has developed deep expertise in different areas of IT," [Red Hat][4] security strategist Kirsten Newcomer [told us][5] recently. "It doesn't need to be this way. Enterprises that care deeply about security and also care deeply about their ability to quickly deliver business value through software are finding ways to move security left in their application development lifecycles. They're adopting DevSecOps by integrating security practices, tooling, and automation throughout the CI/CD pipeline." + +"To do this well, they're integrating their teams - security professionals are embedded with application development teams from inception (design) through to production deployment," she says. 
"Both sides are seeing the value - each team expands their skill sets and knowledge base, making them more valuable technologists. DevOps done right - or DevSecOps - improves IT security." + +IT teams are tasked with delivering services faster and more frequently than ever before. DevOps can be a great enabler of this, in part because it can remove some of the traditional friction between development and operations teams that commonly surfaced when Ops was left out of the process until deployment time and Dev tossed its code over an invisible wall, never to manage it again, much less have any infrastructure responsibility. That kind of siloed approach causes problems, to put it mildly, in the digital age. According to Reeves, the same holds true if security exists in a silo. + +"We have adopted DevOps because it's proven to improve our IT performance by removing the barriers between development and operations," Reeves says. "Much like we shouldn't wait until the end of the deployment cycle to involve operations, we shouldn't wait until the end to involve security." + +### Why DevSecOps is here to stay + +It may be tempting to see DevSecOps as just another buzzword, but for security-conscious IT leaders, it's a substantive term: Security must be a first-class citizen in the software development pipeline, not something that gets bolted on as a final step before a deploy, or worse, as a team that gets scrambled only after an actual incident occurs. + +"DevSecOps is not just a buzzword - it is the current and future state of IT for multiple reasons," says George Gerchow, VP of security and compliance at [Sumo Logic][6]. "The most important benefit is the ability to bake security into development and operational processes to provide guardrails - not barriers - to achieve agility and innovation." + +Moreover, the appearance of the DevSecOps on the scene might be another sign that DevOps itself is maturing and digging deep roots inside IT. 
+ +"The culture of DevOps in the enterprise is here to stay, and that means that developers are delivering features and updates to the production environment at an increasingly higher velocity, especially as the self-organizing teams become more comfortable with both collaboration and measurement of results," says Mike Kail, CTO and co-founder at [CYBRIC][7]. + +Teams and companies that have kept their old security practices in place while embracing DevOps are likely experiencing an increasing amount of pain managing security risks as they continue to deploy faster and more frequently. + +"The current, manual testing approaches of security continue to fall further and further behind." + +"The current, manual testing approaches of security continue to fall further and further behind, and leveraging both automation and collaboration to shift security testing left into the software development life cycle, thus driving the culture of DevSecOps, is the only way for IT leaders to increase overall resiliency and delivery security assurance," Kail says. + +Shifting security testing left (earlier) benefits developers, too: Rather than finding out about a glaring hole in their code right before a new or updated service is set to deploy, they can identify and resolve potential issues during much earlier stages of development - often with little or no intervention from security personnel. + +"Done right, DevSecOps can ingrain security into the development lifecycle, empowering developers to more quickly and easily secure their applications without security disruptions," says Brian Wilson, chief information security officer at [SAS][8]. + +Wilson points to static (SAST) and source composition analysis (SCA) tools, integrated into a team's continuous delivery pipelines, as useful technologies that help make this possible by giving developers feedback about potential issues in their own code as well as vulnerabilities in third-party dependencies. 
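The shift-left gate that SAST and SCA tools provide can be sketched in a few lines of shell. Everything below is invented for illustration - the lock-file format, the package names, and the advisory list are placeholders, and a real pipeline would invoke an actual scanner at this stage rather than a grep:

```
#!/bin/sh
# Sketch of an SCA-style dependency gate for a CI pipeline.
# The lock-file format and the advisory list are hypothetical;
# a real pipeline would run a proper SAST/SCA scanner here.

LOCKFILE=$(mktemp)

# Hypothetical lock file: one "name version" pair per line.
cat > "$LOCKFILE" <<'EOF'
examplelib 1.0.1
otherlib 2.4.0
EOF

# Hypothetical advisories: dependency versions with known flaws.
ADVISORIES="examplelib 1.0.1
badlib 0.9.0"

# Any lock-file line that exactly matches an advisory line is a finding.
FINDINGS=$(echo "$ADVISORIES" | grep -Fxf - "$LOCKFILE")

if [ -n "$FINDINGS" ]; then
    echo "FAIL: known-vulnerable dependencies:"
    echo "$FINDINGS"
    # A real CI stage would 'exit 1' here to fail the build.
else
    echo "OK: no known-vulnerable dependencies"
fi

rm -f "$LOCKFILE"
```

Because a check like this runs on every commit, the feedback lands with the developer who just changed the dependency, not with a security reviewer weeks later.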
+ +"As a result, developers can proactively and iteratively mitigate appsec issues and rerun security scans without the need to involve security personnel," Wilson says. He notes, too, that DevSecOps can also help the Dev team streamline updates and patching. + +DevSecOps doesn't mean you no longer need security pros, just as DevOps doesn't mean you no longer need infrastructure experts; it just helps reduce the likelihood of flaws finding their way into production, or from slowing down deployments because they're caught late in the pipeline. + +"We're here if they have questions or need help, but having given developers the tools they need to secure their apps, we're less likely to find a showstopper issue during a penetration test," Wilson says. + +### DevSecOps meets Meltdown + +Sumo Logic's Gerchow shares a timely example of the DevSecOps culture in action: When the recent [Meltdown and Spectre][9] news hit, the team's DevSecOps approach enabled a rapid response to mitigate its risks without any noticeable disruption to internal or external customers, which Gerchow said was particularly important for the cloud-native, highly regulated company. + +The first step: Gerchow's small security team, which he notes also has development skills, was able to work with one of its main cloud vendors via Slack to ensure its infrastructure was completely patched within 24 hours. + +"My team then began OS-level fixes immediately with zero downtime to end users without having to open tickets and requests with engineering that would have meant waiting on a long change management process. All the changes were accounted for via automated Jira tickets opened via Slack and monitored through our logs and analytics solution," Gerchow explains. + +In essence, it sounds a whole lot like the culture of DevOps, matched with the right mix of people, processes, and tools, but it explicitly includes security as part of that culture and mix. 
+ +"In traditional environments, it would have taken weeks or months to do this with downtime because all three development, operations, and security functions were siloed," Gerchow says. "With a DevSecOps process and mindset, end users get a seamless experience with easy communication and same-day fixes." + + +-------------------------------------------------------------------------------- + +via: https://enterprisersproject.com/article/2018/1/why-devsecops-matters-it-leaders + +作者:[Kevin Casey][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://enterprisersproject.com/user/kevin-casey +[1]:https://enterprisersproject.com/tags/devops +[2]:https://www.datical.com/ +[3]:https://enterprisersproject.com/devops?sc_cid=70160000000h0aXAAQ +[4]:https://www.redhat.com/en?intcmp=701f2000000tjyaAAA +[5]:https://enterprisersproject.com/article/2017/10/what-s-next-devops-5-trends-watch +[6]:https://www.sumologic.com/ +[7]:https://www.cybric.io/ +[8]:https://www.sas.com/en_us/home.html +[9]:https://www.redhat.com/en/blog/what-are-meltdown-and-spectre-heres-what-you-need-know?intcmp=701f2000000tjyaAAA From db4e42204b86579494f69738761668bd3b169a32 Mon Sep 17 00:00:00 2001 From: darksun Date: Fri, 19 Jan 2018 15:24:26 +0800 Subject: [PATCH 100/226] remove useless file --- "sources/```\n```\ntech" | 127 --------------------------------------- 1 file changed, 127 deletions(-) delete mode 100644 "sources/```\n```\ntech" diff --git "a/sources/```\n```\ntech" "b/sources/```\n```\ntech" deleted file mode 100644 index 8dbc7d2a28..0000000000 --- "a/sources/```\n```\ntech" +++ /dev/null @@ -1,127 +0,0 @@ -Avoiding Server Disaster -====== - -Worried that your server will go down? You should be. Here are some disaster-planning tips for server owners. - -If you own a car or a house, you almost certainly have insurance. 
Insurance seems like a huge waste of money. You pay it every year and make sure that you get the best possible price for the best possible coverage, and then you hope you never need to use the insurance. Insurance seems like a really bad deal—until you have a disaster and realize that had it not been for the insurance, you might have been in financial ruin. - -Unfortunately, disasters and mishaps are a fact of life in the computer industry. And so, just as you pay insurance and hope never to have to use it, you also need to take time to ensure the safety and reliability of your systems—not because you want disasters to happen, or even expect them to occur, but rather because you have to. - -If your website is an online brochure for your company and then goes down for a few hours or even days, it'll be embarrassing and annoying, but not financially painful. But, if your website is your business, when your site goes down, you're losing money. If that's the case, it's crucial to ensure that your server and software are not only unlikely to go down, but also easily recoverable if and when that happens. - -Why am I writing about this subject? Well, let's just say that this particular problem hit close to home for me, just before I started to write this article. After years of helping clients around the world to ensure the reliability of their systems, I made the mistake of not being as thorough with my own. ("The shoemaker's children go barefoot", as the saying goes.) This means that just after launching my new online product for Python developers, a seemingly trivial upgrade turned into a disaster. The precautions I put in place, it turns out, weren't quite enough—and as I write this, I'm still putting my web server together. I'll survive, as will my server and business, but this has been a painful and important lesson—one that I'll do almost anything to avoid repeating in the future. 
- -So in this article, I describe a number of techniques I've used to keep servers safe and sound through the years, and to reduce the chances of a complete meltdown. You can think of these techniques as insurance for your server, so that even if something does go wrong, you'll be able to recover fairly quickly. - -I should note that most of the advice here assumes no redundancy in your architecture—that is, a single web server and (at most) a single database server. If you can afford to have a bunch of servers of each type, these sorts of problems tend to be much less frequent. However, that doesn't mean they go away entirely. Besides, although people like to talk about heavy-duty web applications that require massive iron in order to run, the fact is that many businesses run on small, one- and two-computer servers. Moreover, those businesses don't need more than that; the ROI (return on investment) they'll get from additional servers cannot be justified. However, the ROI from a good backup and recovery plan is huge, and thus worth the investment. - -### The Parts of a Web Application - -Before I can talk about disaster preparation and recovery, it's important to consider the different parts of a web application and what those various parts mean for your planning. - -For many years, my website was trivially small and simple. Even if it contained some simple programs, those generally were used for sending email or for dynamically displaying different assets to visitors. The entire site consisted of some static HTML, images, JavaScript and CSS. No database or other excitement was necessary. - -At the other end of the spectrum, many people have full-blown web applications, sitting on multiple servers, with one or more databases and caches, as well as HTTP servers with extensively edited configuration files. - -But even when considering those two extremes, you can see that a web application consists of only a few parts: - -* The application software itself. 
- -* Static assets for that application. - -* Configuration file(s) for the HTTP server(s). - -* Database configuration files. - -* Database schema and contents. - -Assuming that you're using a high-level language, such as Python, Ruby or JavaScript, everything in this list either is a file or can be turned into one. (All databases make it possible to "dump" their contents onto disk, into a format that then can be loaded back into the database server.) - -Consider a site containing only application software, static assets and configuration files. (In other words, no database is involved.) In many cases, such a site can be backed up reliably in Git. Indeed, I prefer to keep my sites in Git, backed up on a commercial hosting service, such as GitHub or Bitbucket, and then deployed using a system like Capistrano. - -In other words, you develop the site on your own development machine. Whenever you are happy with a change that you've made, you commit the change to Git (on your local machine) and then do a git push to your central repository. In order to deploy your application, you then use Capistrano to do a cap deploy, which reads the data from the central repository, puts it into the appropriate place on the server's filesystem, and you're good to go. - -This system keeps you safe in a few different ways. The code itself is located in at least three locations: your development machine, the server and the repository. And those central repositories tend to be fairly reliable, if only because it's in the financial interest of the hosting company to ensure that things are reliable. - -I should add that in such a case, you also should include the HTTP server's configuration files in your Git repository. Those files aren't likely to change very often, but I can tell you from experience, if you're recovering from a crisis, the last thing you want to think about is how your Apache configuration files should look. 
Copying those files into your Git repository will work just fine. - -### Backing Up Databases - -You could argue that the difference between a "website" and a "web application" is a database. Databases long have powered the back ends of many web applications and for good reason—they allow you to store and retrieve data reliably and flexibly. The power that modern open-source databases provide was unthinkable just a decade or two ago, and there's no reason to think that they'll be any less reliable in the future. - -And yet, just because your database is pretty reliable doesn't mean that it won't have problems. This means you're going to want to keep a snapshot ("dump") of the database's contents around, in case the database server corrupts information, and you need to roll back to a previous version. - -My favorite solution for such a problem is to dump the database on a regular basis, preferably hourly. Here's a shell script I've used, in one form or another, for creating such regular database dumps: - -``` - -#!/bin/sh - -BACKUP_ROOT="/home/database-backups/" -YEAR=`/bin/date +'%Y'` -MONTH=`/bin/date +'%m'` -DAY=`/bin/date +'%d'` - -DIRECTORY="$BACKUP_ROOT/$YEAR/$MONTH/$DAY" -USERNAME=dbuser -DATABASE=dbname -HOST=localhost -PORT=3306 - -/bin/mkdir -p $DIRECTORY - -/usr/bin/mysqldump -h $HOST --databases $DATABASE -u $USERNAME \ -    | /bin/gzip --best --verbose \ -    > $DIRECTORY/$DATABASE-dump.gz - -``` - -The above shell script starts off by defining a bunch of variables, from the directory in which I want to store the backups, to the parts of the date (stored in $YEAR, $MONTH and $DAY). This is so I can have a separate directory for each day of the month. I could, of course, go further and have separate directories for each hour, but I've found that I rarely need more than one backup from a day. - -Once I have defined those variables, I then use the mkdir command to create a new directory.
The -p option tells mkdir that if necessary, it should create all of the directories it needs such that the entire path will exist. - -Finally, I then run the database's "dump" command. In this particular case, I'm using MySQL, so I'm using the mysqldump command. The output from this command is a stream of SQL that can be used to re-create the database. I thus take the output from mysqldump and pipe it into gzip, which compresses the output file. Finally, the resulting dumpfile is placed, in compressed form, inside the daily backup directory. - -Depending on the size of your database and the amount of disk space you have on hand, you'll have to decide just how often you want to run dumps and how often you want to clean out old ones. I know from experience that dumping every hour can cause some load problems. On one virtual machine I've used, the overall administration team was unhappy that I was dumping and compressing every hour, which they saw as an unnecessary use of system resources. - -If you're worried your system will run out of disk space, you might well want to run a space-checking program that'll alert you when the filesystem is low on free space. In addition, you can run a cron job that uses find to erase all dumpfiles from before a certain cutoff date. I'm always a bit nervous about programs that automatically erase backups, so I generally prefer not to do this. Rather, I run a program that warns me if the disk usage is going above 85% (which is usually low enough to ensure that I can fix the problem in time, even if I'm on a long flight). Then I can go in and remove the problematic files by hand. - -When you back up your database, you should be sure to back up the configuration for that database as well. The database schema and data, which are part of the dumpfile, are certainly important. 
However, if you find yourself having to re-create your server from scratch, you'll want to know precisely how you configured the database server, with a particular emphasis on the filesystem configuration and memory allocations. I tend to use PostgreSQL for most of my work, and although postgresql.conf is simple to understand and configure, I still like to keep it around with my dumpfiles. - -Another crucial thing to do is to check your database dumps occasionally to be sure that they are working the way you want. It turns out that the backups I thought I was making weren't actually happening, in no small part because I had modified the shell script and hadn't double-checked that it was creating useful backups. Occasionally pulling out one of your dumpfiles and restoring it to a separate (and offline!) database to check its integrity is a good practice, both to ensure that the dump is working and that you remember how to restore it in the case of an emergency. - -### Storing Backups - -But wait. It might be great to have these backups, but what if the server goes down entirely? In the case of the code, I mentioned to ensure that it was located on more than one machine, ensuring its integrity. By contrast, your database dumps are now on the server, such that if the server fails, your database dumps will be inaccessible. - -This means you'll want to have your database dumps stored elsewhere, preferably automatically. How can you do that? - -There are a few relatively easy and inexpensive solutions to this problem. If you have two servers—ideally in separate physical locations—you can use rsync to copy the files from one to the other. Don't rsync the database's actual files, since those might get corrupted in transfer and aren't designed to be copied when the server is running. By contrast, the dumpfiles that you have created are more than able to go elsewhere. 
Setting up a remote server, with a user specifically for handling these backup transfers, shouldn't be too hard and will go a long way toward ensuring the safety of your data. - -I should note that using rsync in this way basically requires that you set up passwordless SSH, so that you can transfer without having to be physically present to enter the password. - -Another possible solution is Amazon's Simple Storage Service (S3), which offers astonishing amounts of disk space at very low prices. I know that many companies use S3 as a simple (albeit slow) backup system. You can set up a cron job to run a program that copies the contents of a particular database dumpfile directory onto a particular server. The assumption here is that you're not ever going to use these backups, meaning that S3's slow searching and access will not be an issue once you're working on the server. - -Similarly, you might consider using Dropbox. Dropbox is best known for its desktop client, but it has a "headless", text-based client that can be used on Linux servers without a GUI connected. One nice advantage of Dropbox is that you can share a folder with any number of people, which means you can have Dropbox distribute your backup databases everywhere automatically, including to a number of people on your team. The backups arrive in their Dropbox folder, and you can be sure that copies of your data exist beyond your own server. - -Finally, if you're running a WordPress site, you might want to consider VaultPress, a for-pay backup system. I must admit that in the weeks before I took my server down with a database backup error, I kept seeing ads in WordPress for VaultPress. "Who would buy that?", I asked myself, thinking that I'm smart enough to do backups myself. Of course, after disaster occurred and my database was ruined, I realized that $30/year to back up all of my data is cheap, and I should have done it before.
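The rsync-over-SSH approach described earlier in this section fits in a single crontab entry. The user name, host and destination path below are placeholders, and this assumes the passwordless SSH setup already discussed:

```

# Push the day's dumpfiles to a backup host hourly (at :17).
17 * * * * rsync -az -e ssh /home/database-backups/ backup@backuphost:/srv/database-backups/

```

Run hourly like this, a fresh dump is rarely more than an hour away from being safely off the machine, whichever of the above storage destinations you choose.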
- -### Conclusion - -When it comes to your servers, think less like an optimistic programmer and more like an insurance agent. Perhaps disaster won't strike, but if it does, will you be able to recover? Making sure that even if your server is completely unavailable, you'll be able to bring up your program and any associated database is crucial. - -My preferred solution involves combining a Git repository for code and configuration files, distributed across several machines and services. For the databases, however, it's not enough to dump your database; you'll need to get that dump onto a separate machine, and preferably test the backup file on a regular basis. That way, even if things go wrong, you'll be able to get back up in no time. - - - --------------------------------------------------------------------------------- - -via: http://www.linuxjournal.com/content/avoiding-server-disaster - -作者:[Reuven M. Lerner][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.linuxjournal.com/user/1000891 From 8b8d14f647d1362bdbfd814e41b34f518c8d0568 Mon Sep 17 00:00:00 2001 From: qhwdw Date: Fri, 19 Jan 2018 15:25:24 +0800 Subject: [PATCH 101/226] Translated by qhwdw --- ...nux container security - Opensource.com.md | 131 ------------------ ...nux container security - Opensource.com.md | 131 ++++++++++++++++++ 2 files changed, 131 insertions(+), 131 deletions(-) delete mode 100644 sources/tech/20171009 10 layers of Linux container security - Opensource.com.md create mode 100644 translated/tech/20171009 10 layers of Linux container security - Opensource.com.md diff --git a/sources/tech/20171009 10 layers of Linux container security - Opensource.com.md b/sources/tech/20171009 10 layers of Linux container security - Opensource.com.md deleted file mode 100644 index 6bb722f516..0000000000 --- a/sources/tech/20171009 10 layers of Linux container 
security - Opensource.com.md +++ /dev/null @@ -1,131 +0,0 @@ -Translating by qhwdw 10 layers of Linux container security | Opensource.com -====== -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDU_UnspokenBlockers_1110_A.png?itok=x8A9mqVA) - -Containers provide an easy way to package applications and deliver them seamlessly from development to test to production. This helps ensure consistency across a variety of environments, including physical servers, virtual machines (VMs), or private or public clouds. These benefits are leading organizations to rapidly adopt containers in order to easily develop and manage the applications that add business value. - -Enterprises require strong security, and anyone running essential services in containers will ask, "Are containers secure?" and "Can we trust containers with our applications?" - -Securing containers is a lot like securing any running process. You need to think about security throughout the layers of the solution stack before you deploy and run your container. You also need to think about security throughout the application and container lifecycle. - -Try these 10 key elements to secure different layers of the container solution stack and different stages of the container lifecycle. - -### 1. The container host operating system and multi-tenancy - -Containers make it easier for developers to build and promote an application and its dependencies as a unit and to get the most use of servers by enabling multi-tenant application deployments on a shared host. It's easy to deploy multiple applications on a single host, spinning up and shutting down individual containers as needed. To take full advantage of this packaging and deployment technology, the operations team needs the right environment for running containers. 
Operations needs an operating system that can secure containers at the boundaries, securing the host kernel from container escapes and securing containers from each other. - -### 2. Container content (use trusted sources) - -Containers are Linux processes with isolation and resource confinement that enable you to run sandboxed applications on a shared host kernel. Your approach to securing containers should be the same as your approach to securing any running process on Linux. Dropping privileges is important and still the best practice. Even better is to create containers with the least privilege possible. Containers should run as user, not root. Next, make use of the multiple levels of security available in Linux. Linux namespaces, Security-Enhanced Linux ( [SELinux][1] ), [cgroups][2] , capabilities, and secure computing mode ( [seccomp][3] ) are five of the security features available for securing containers. - -When it comes to security, what's inside your container matters. For some time now, applications and infrastructures have been composed from readily available components. Many of these are open source packages, such as the Linux operating system, Apache Web Server, Red Hat JBoss Enterprise Application Platform, PostgreSQL, and Node.js. Containerized versions of these packages are now also readily available, so you don't have to build your own. But, as with any code you download from an external source, you need to know where the packages originated, who built them, and whether there's any malicious code inside them. - -### 3. Container registries (secure access to container images) - -Your teams are building containers that layer content on top of downloaded public container images, so it's critical to manage access to and promotion of the downloaded container images and the internally built images in the same way other types of binaries are managed. Many private registries support storage of container images. 
Select a private registry that helps to automate policies for the use of container images stored in the registry. - -### 4. Security and the build process - -In a containerized environment, the software-build process is the stage in the lifecycle where application code is integrated with needed runtime libraries. Managing this build process is key to securing the software stack. Adhering to a "build once, deploy everywhere" philosophy ensures that the product of the build process is exactly what is deployed in production. It's also important to maintain the immutability of your containers--in other words, do not patch running containers; rebuild and redeploy them instead. - -Whether you work in a highly regulated industry or simply want to optimize your team's efforts, design your container image management and build process to take advantage of container layers to implement separation of control, so that the: - - * Operations team manages base images - * Architects manage middleware, runtimes, databases, and other such solutions - * Developers focus on application layers and just write code - - - -Finally, sign your custom-built containers so that you can be sure they are not tampered with between build and deployment. - -### 5. Control what can be deployed within a cluster - -In case anything falls through during the build process, or for situations where a vulnerability is discovered after an image has been deployed, add yet another layer of security in the form of tools for automated, policy-based deployment. - -Let's look at an application that's built using three container image layers: core, middleware, and the application layer. An issue is discovered in the core image and that image is rebuilt. Once the build is complete, the image is pushed to the container platform registry. The platform can detect that the image has changed. 
For builds that are dependent on this image and have triggers defined, the platform will automatically rebuild the application image, incorporating the fixed libraries. - -Once the build is complete, the image is pushed to container platform's internal registry. It immediately detects changes to images in its internal registry and, for applications where triggers are defined, automatically deploys the updated image, ensuring that the code running in production is always identical to the most recently updated image. All these capabilities work together to integrate security capabilities into your continuous integration and continuous deployment (CI/CD) process and pipeline. - -### 6. Container orchestration: Securing the container platform - -Of course, applications are rarely delivered in a single container. Even simple applications typically have a frontend, a backend, and a database. And deploying modern microservices applications in containers means deploying multiple containers, sometimes on the same host and sometimes distributed across multiple hosts or nodes, as shown in this diagram. - -When managing container deployment at scale, you need to consider: - - * Which containers should be deployed to which hosts? - * Which host has more capacity? - * Which containers need access to each other? How will they discover each other?
- * How will you control access to--and management of--shared resources, like network and storage? - * How will you monitor container health? - * How will you automatically scale application capacity to meet demand? - * How will you enable developer self-service while also meeting security requirements? - - - -Given the wealth of capabilities for both developers and operators, strong role-based access control is a critical element of the container platform. For example, the orchestration management servers are a central point of access and should receive the highest level of security scrutiny. APIs are key to automating container management at scale and are used to validate and configure the data for pods, services, and replication controllers; perform project validation on incoming requests; and invoke triggers on other major system components. - -### 7. Network isolation - -Deploying modern microservices applications in containers often means deploying multiple containers distributed across multiple nodes. With network defense in mind, you need a way to isolate applications from one another within a cluster. Typical public cloud container services, like Google Container Engine (GKE), Azure Container Services, or Amazon Web Services (AWS) Container Service, are single-tenant services. They let you run your containers on the VM cluster that you initiate. For secure container multi-tenancy, you want a container platform that allows you to take a single cluster and segment the traffic to isolate different users, teams, applications, and environments within that cluster. - -With network namespaces, each collection of containers (known as a "pod") gets its own IP and port range to bind to, thereby isolating pod networks from each other on the node. Pods from different namespaces (projects) cannot send packets to or receive packets from pods and services of a different project by default, with the exception of options noted below.
You can use these features to isolate developer, test, and production environments within a cluster; however, this proliferation of IP addresses and ports makes networking more complicated. In addition, containers are designed to come and go. Invest in tools that handle this complexity for you. The preferred tool is a container platform that uses [software-defined networking][4] (SDN) to provide a unified cluster network that enables communication between containers across the cluster. - -### 8. Storage - -Containers are useful for both stateless and stateful applications. Protecting attached storage is a key element of securing stateful services. Container platforms should provide plugins for multiple flavors of storage, including network file systems (NFS), AWS Elastic Block Stores (EBS), GCE Persistent Disks, GlusterFS, iSCSI, RADOS (Ceph), Cinder, etc. - -A persistent volume (PV) can be mounted on a host in any way supported by the resource provider. Providers will have different capabilities, and each PV's access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read/write clients, but a specific NFS PV might be exported on the server as read only. Each PV gets its own set of access modes describing that specific PV's capabilities, such as ReadWriteOnce, ReadOnlyMany, and ReadWriteMany. - -### 9. API management, endpoint security, and single sign-on (SSO) - -Securing your applications includes managing application and API authentication and authorization. - -Web SSO capabilities are a key part of modern applications. Container platforms can come with various containerized services for developers to use when building their applications. - -APIs are key to applications composed of microservices. These applications have multiple independent API services, leading to proliferation of service endpoints, which require additional tools for governance. An API management tool is also recommended. 
All API platforms should offer a variety of standard options for API authentication and security, which can be used alone or in combination, to issue credentials and control access. - -These options include standard API keys, application ID and key pairs, and OAuth 2.0. - -### 10. Roles and access management in a cluster federation - -In July 2016, Kubernetes 1.3 introduced [Kubernetes Federated Clusters][5]. This is one of the exciting new features evolving in the Kubernetes upstream, currently in beta in Kubernetes 1.6. Federation is useful for deploying and accessing application services that span multiple clusters running in the public cloud or enterprise datacenters. Multiple clusters can be useful to enable application high availability across multiple availability zones or to enable common management of deployments or migrations across multiple cloud providers, such as AWS, Google Cloud, and Azure. - -When managing federated clusters, you must be sure that your orchestration tools provide the security you need across the different deployment platform instances. As always, authentication and authorization are key--as well as the ability to securely pass data to your applications, wherever they run, and manage application multi-tenancy across clusters. Kubernetes is extending Cluster Federation to include support for Federated Secrets, Federated Namespaces, and Ingress objects. - -### Choosing a container platform - -Of course, it is not just about security. Your container platform needs to provide an experience that works for your developers and your operations team.
It needs to offer a secure, enterprise-grade container-based application platform that enables both developers and operators, without compromising the functions needed by each team, while also improving operational efficiency and infrastructure utilization. - -Learn more in Daniel's talk, [Ten Layers of Container Security][6], at [Open Source Summit EU][7], which will be held October 23-26 in Prague. - -### About The Author -Daniel Oh;Microservives;Agile;Devops;Java Ee;Container;Openshift;Jboss;Evangelism - - - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/17/10/10-layers-container-security - -作者:[Daniel Oh][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/daniel-oh -[1]:https://en.wikipedia.org/wiki/Security-Enhanced_Linux -[2]:https://en.wikipedia.org/wiki/Cgroups -[3]:https://en.wikipedia.org/wiki/Seccomp -[4]:https://en.wikipedia.org/wiki/Software-defined_networking -[5]:https://kubernetes.io/docs/concepts/cluster-administration/federation/ -[6]:https://osseu17.sched.com/mobile/#session:f2deeabfc1640d002c1d55101ce81223 -[7]:http://events.linuxfoundation.org/events/open-source-summit-europe diff --git a/translated/tech/20171009 10 layers of Linux container security - Opensource.com.md b/translated/tech/20171009 10 layers of Linux container security - Opensource.com.md new file mode 100644 index 0000000000..1c3425d008 --- /dev/null +++ b/translated/tech/20171009 10 layers of Linux container security - Opensource.com.md @@ -0,0 +1,131 @@ +Linux 容器安全的 10 个层面 +====== +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDU_UnspokenBlockers_1110_A.png?itok=x8A9mqVA) + +容器提供了打包应用程序的一种简单方法,它实现了从开发到测试到投入生产系统的无缝传递。它也有助于确保跨不同环境的连贯性,包括物理服务器、虚拟机、以及公有云或私有云。这些好处使得一些组织为了更方便地部署和管理为他们提升业务价值的应用程序,而快速部署容器。 + 
+企业要求存储安全,在容器中运行基础服务的任何人都会问,“容器安全吗?”以及“怎么相信运行在容器中的我的应用程序是安全的?” + +安全的容器就像是许多安全运行的进程。在你部署和运行你的容器之前,你需要去考虑整个解决方案栈~~(致校对,容器是由不同的层堆叠而成,英文原文中使用的stack,可以直译为“解决方案栈”,但是似乎没有这一习惯说法,也可以翻译为解决方案的不同层级,哪个更合适?)~~各个层面的安全。你也需要去考虑应用程序和容器整个生命周期的安全。 + +尝试从这十个关键的因素去确保容器解决方案栈不同层面、以及容器生命周期的不同阶段的安全。 + +### 1. 容器宿主机操作系统和多租户环境 + +由于容器将应用程序和它的依赖作为一个单元来处理,使得开发者构建和升级应用程序变得更加容易,并且,容器可以启用多租户技术将许多应用程序和服务部署到一台共享主机上。在一台单独的主机上以容器方式部署多个应用程序、按需启动和关闭单个容器都是很容易的。为完全实现这种打包和部署技术的优势,运营团队需要运行容器的合适环境。运营者需要一个安全的操作系统,它能够在边界上保护容器安全、从容器中保护主机内核、以及保护容器彼此之间的安全。 + +### 2. 容器内容(使用可信来源) + +容器是隔离的 Linux 进程,并且在一个共享主机的内核中,容器内使用的资源被限制在仅允许你运行着应用程序的沙箱中。保护容器的方法与保护你的 Linux 中运行的任何进程的方法是一样的。降低权限是非常重要的,也是保护容器安全的最佳实践。甚至是使用尽可能小的权限去创建容器。容器应该以一个普通用户的权限来运行,而不是 root 权限的用户。在 Linux 中可以使用多级安全,Linux 命名空间、安全强化 Linux( [SELinux][1])、[cgroups][2] 、capabilities(译者注:Linux 内核的一个安全特性,它打破了传统的普通用户与 root 用户的概念,在进程级提供更好的安全控制)、以及安全计算模式( [seccomp][3] ),Linux 的这五种安全特性可以用于保护容器的安全。 + +在谈到安全时,首先要考虑你的容器里面有什么?例如 ,有些时候,应用程序和基础设施是由很多可用的组件所构成。它们中的一些是开源的包,比如,Linux 操作系统、Apache Web 服务器、Red Hat JBoss 企业应用平台、PostgreSQL、以及Node.js。这些包的容器化版本已经可以使用了,因此,你没有必要自己去构建它们。但是,对于你从一些外部来源下载的任何代码,你需要知道这些包的原始来源,是谁构建的它,以及这些包里面是否包含恶意代码。 + +### 3. 容器注册(安全访问容器镜像) + +你的团队所构建的容器的最顶层的内容是下载的公共容器镜像,因此,管理和下载容器镜像以及内部构建镜像,与管理和下载其它类型的二进制文件的方式是相同的,这一点至关重要。许多私有的注册者支持容器镜像的保存。选择一个私有的注册者,它可以帮你将存储在它的注册中的容器镜像实现策略自动化。 + +### 4. 安全性与构建过程 + +在一个容器化环境中,构建过程是软件生命周期的一个阶段,它将所需的运行时库和应用程序代码集成到一起。管理这个构建过程对于软件栈安全来说是很关键的。遵守“一次构建,到处部署”的原则,可以确保构建过程的结果正是生产系统中需要的。保持容器的恒定不变也很重要 — 换句话说就是,不要对正在运行的容器打补丁,而是,重新构建和部署它们。 + +不论是因为你处于一个高强度监管的行业中,还是只希望简单地优化你的团队的成果,去设计你的容器镜像管理以及构建过程,可以使用容器层的优势来实现控制分离,因此,你应该去这么做: + + * 运营团队管理基础镜像 + * 设计者管理中间件、运行时、数据库、以及其它解决方案 + * 开发者专注于应用程序层面,并且只写代码 + + + +最后,标记好你的定制构建容器,这样可以确保在构建和部署时不会搞混乱。 + +### 5. 
控制好在同一个集群内部署应用 + +如果是在构建过程中出现的任何问题,或者在镜像被部署之后发现的任何漏洞,那么,请在基于策略的、自动化工具上添加另外的安全层。 + +我们来看一下,一个应用程序的构建使用了三个容器镜像层:内核、中间件、以及应用程序。如果在内核镜像中发现了问题,那么只能重新构建镜像。一旦构建完成,镜像就会被发布到容器平台注册中。这个平台可以自动检测到发生变化的镜像。对于基于这个镜像的其它构建将被触发一个预定义的动作,平台将自己重新构建应用镜像,合并进修复库。 + +在基于策略的、自动化工具上添加另外的安全层。 + +一旦构建完成,镜像将被发布到容器平台的内部注册中。在它的内部注册中,会立即检测到镜像发生变化,应用程序在这里将会被触发一个预定义的动作,自动部署更新镜像,确保运行在生产系统中的代码总是使用更新后的最新的镜像。所有的这些功能协同工作,将安全功能集成到你的持续集成和持续部署(CI/CD)过程和管道中。 + +### 6. 容器编配:保护容器平台 + +一旦构建完成,镜像被发布到容器平台的内部注册中。内部注册会立即检测到镜像的变化,应用程序在这里会被触发一个预定义的动作,自己部署更新,确保运行在生产系统中的代码总是使用更新后的最新的镜像。所有的功能协同工作,将安全功能集成到你的持续集成和持续部署(CI/CD)过程和管道中。~~(致校对:这一段和上一段是重复的,请确认,应该是选题工具造成的重复!!)~~ + +当然了,应用程序很少会部署在单一的容器中。甚至,单个应用程序一般情况下都有一个前端、一个后端、以及一个数据库。而在容器中以微服务模式部署的应用程序,意味着应用程序将部署在多个容器中,有时它们在同一台宿主机上,有时它们是分布在多个宿主机或者节点上,如下面的图所示:~~(致校对:图去哪里了???应该是选题问题的问题!)~~ + +在大规模的容器部署时,你应该考虑: + + * 哪个容器应该被部署在哪个宿主机上? + * 那个宿主机应该有什么样的性能? + * 哪个容器需要访问其它容器?它们之间如何发现彼此? + * 你如何控制和管理对共享资源的访问,像网络和存储? + * 如何监视容器健康状况? + * 如何去自动扩展性能以满足应用程序的需要? + * 如何在满足安全需求的同时启用开发者的自助服务? + + + +考虑到开发者和运营者的能力,提供基于角色的访问控制是容器平台的关键要素。例如,编配管理服务器是中心访问点,应该接受最高级别的安全检查。APIs 是规模化的自动容器平台管理的关键,可以用于为 pods、服务、以及复制控制器去验证和配置数据;在入站请求上执行项目验证;以及调用其它主要系统组件上的触发器。 + +### 7. 网络隔离 + +在容器中部署现代微服务应用,经常意味着跨多个节点在多个容器上部署。考虑到网络防御,你需要一种在一个集群中的应用之间的相互隔离的方法。一个典型的公有云容器服务,像 Google 容器引擎(GKE)、Azure 容器服务、或者 Amazon Web 服务(AWS)容器服务,是单租户服务。他们让你在你加入的虚拟机集群上运行你的容器。对于多租户容器的安全,你需要容器平台为你启用一个单一集群,并且分割通讯以隔离不同的用户、团队、应用、以及在这个集群中的环境。 + +使用网络命名空间,容器内的每个集合(即大家熟知的“pod”)得到它自己的 IP 和绑定的端口范围,以此来从一个节点上隔离每个 pod 网络。除使用下文所述的选项之外,~~(选项在哪里???,请查看原文,是否是选题丢失???)~~默认情况下,来自不同命名空间(项目)的Pods 并不能发送或者接收其它 Pods 上的包和不同项目的服务。你可以使用这些特性在同一个集群内,去隔离开发者环境、测试环境、以及生产环境。但是,这样会导致 IP 地址和端口数量的激增,使得网络管理更加复杂。另外,容器是被反复设计的,你应该在处理这种复杂性的工具上进行投入。在容器平台上比较受欢迎的工具是使用 [软件定义网络][4] (SDN) 去提供一个定义的网络集群,它允许跨不同集群的容器进行通讯。 + +### 8. 
存储 + +容器即可被用于无状态应用,也可被用于有状态应用。保护附加存储是保护有状态服务的一个关键要素。容器平台对多个受欢迎的存储提供了插件,包括网络文件系统(NFS)、AWS 弹性块存储(EBS)、GCE 持久磁盘、GlusterFS、iSCSI、 RADOS(Ceph)、Cinder、等等。 + +一个持久卷(PV)可以通过资源提供者支持的任何方式装载到一个主机上。提供者有不同的性能,而每个 PV 的访问模式是设置为被特定的卷支持的特定模式。例如,NFS 能够支持多路客户端同时读/写,但是,一个特定的 NFS 的 PV 可以在服务器上被发布为只读模式。每个 PV 得到它自己的一组反应特定 PV 性能的访问模式的描述,比如,ReadWriteOnce、ReadOnlyMany、以及 ReadWriteMany。 + +### 9. API 管理、终端安全、以及单点登陆(SSO) + +保护你的应用包括管理应用、以及 API 的认证和授权。 + +Web SSO 能力是现代应用程序的一个关键部分。在构建它们的应用时,容器平台带来了开发者可以使用的多种容器化服务。 + +APIs 是微服务构成的应用程序的关键所在。这些应用程序有多个独立的 API 服务,这导致了终端服务数量的激增,它就需要额外的管理工具。推荐使用 API 管理工具。所有的 API 平台应该提供多种 API 认证和安全所需要的标准选项,这些选项既可以单独使用,也可以组合使用,以用于发布证书或者控制访问。 + +保护你的应用包括管理应用以及 API 的认证和授权。~~(致校对:这一句话和本节的第一句话重复)~~ + +这些选项包括标准的 API keys、应用 ID 和密钥对、 以及 OAuth 2.0。 + +### 10. 在一个联合集群中的角色和访问管理 + +这些选项包括标准的 API keys、应用 ID 和密钥对、 以及 OAuth 2.0。~~(致校对:这一句和上一节最后一句重复)~~ + +在 2016 年 7 月份,Kubernetes 1.3 引入了 [Kubernetes 联合集群][5]。这是一个令人兴奋的新特性之一,它是在 Kubernetes 上游、当前的 Kubernetes 1.6 beta 中引用的。联合是用于部署和访问跨多集群运行在公有云或企业数据中心的应用程序服务的。多个集群能够用于去实现应用程序的高可用性,应用程序可以跨多个可用区域、或者去启用部署公共管理、或者跨不同的供应商进行迁移,比如,AWS、Google Cloud、以及 Azure。 + +当管理联合集群时,你必须确保你的编配工具能够提供,你所需要的跨不同部署平台的实例的安全性。一般来说,认证和授权是很关键的 — 不论你的应用程序运行在什么地方,将数据安全可靠地传递给它们,以及管理跨集群的多租户应用程序。Kubernetes 扩展了联合集群,包括对联合的秘密数据、联合的命名空间、以及 Ingress objects 的支持。 + +### 选择一个容器平台 + +当然,它并不仅关乎安全。你需要提供一个你的开发者团队和运营团队有相关经验的容器平台。他们需要一个安全的、企业级的基于容器的应用平台,它能够同时满足开发者和运营者的需要,而且还能够提高操作效率和基础设施利用率。 + +想从 Daniel 在 [欧盟开源峰会][7] 上的 [容器安全的十个层面][6] 的演讲中学习更多知识吗?这个峰会将于10 月 23 - 26 日在 Prague 举行。 + +### 关于作者 +Daniel Oh;Microservives;Agile;Devops;Java Ee;Container;Openshift;Jboss;Evangelism + + + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/17/10/10-layers-container-security + +作者:[Daniel Oh][a] +译者:[qhwdw](https://github.com/qhwdw) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/daniel-oh 
+[1]:https://en.wikipedia.org/wiki/Security-Enhanced_Linux +[2]:https://en.wikipedia.org/wiki/Cgroups +[3]:https://en.wikipedia.org/wiki/Seccomp +[4]:https://en.wikipedia.org/wiki/Software-defined_networking +[5]:https://kubernetes.io/docs/concepts/cluster-administration/federation/ +[6]:https://osseu17.sched.com/mobile/#session:f2deeabfc1640d002c1d55101ce81223 +[7]:http://events.linuxfoundation.org/events/open-source-summit-europe From 9ebf1ef657dbf4848d02ca3238f1cda96d535e80 Mon Sep 17 00:00:00 2001 From: darksun Date: Fri, 19 Jan 2018 15:29:39 +0800 Subject: [PATCH 102/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Linux=20tee=20Com?= =?UTF-8?q?mand=20Explained=20for=20Beginners=20(6=20Examples)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...nd Explained for Beginners (6 Examples).md | 130 ++++++++++++++++++ 1 file changed, 130 insertions(+) create mode 100644 sources/tech/20180117 Linux tee Command Explained for Beginners (6 Examples).md diff --git a/sources/tech/20180117 Linux tee Command Explained for Beginners (6 Examples).md b/sources/tech/20180117 Linux tee Command Explained for Beginners (6 Examples).md new file mode 100644 index 0000000000..e1be9e3da2 --- /dev/null +++ b/sources/tech/20180117 Linux tee Command Explained for Beginners (6 Examples).md @@ -0,0 +1,130 @@ +Linux tee Command Explained for Beginners (6 Examples) +====== + +There are times when you want to manually track output of a command and also simultaneously make sure the output is being written to a file so that you can refer to it later. If you are looking for a Linux tool which can do this for you, you'll be glad to know there exists a command **tee** that's built for this purpose. + +In this tutorial, we will discuss the basics of the tee command using some easy to understand examples. But before we do that, it's worth mentioning that all examples used in this article have been tested on Ubuntu 16.04 LTS. 
+ +### Linux tee command + +The tee command basically reads from the standard input and writes to standard output and files. Following is the syntax of the command: + +``` +tee [OPTION]... [FILE]... +``` + +And here's how the man page explains it: +``` +Copy standard input to each FILE, and also to standard output. +``` + +The following Q&A-styled examples should give you a better idea of how the command works. + +### Q1. How to use tee command in Linux? + +Suppose you are using the ping command for some reason. + +``` +ping google.com +``` + +[![How to use tee command in Linux][1]][2] + +And what you want is that the output should also get written to a file in parallel. That's where you can use the tee command. + +``` +ping google.com | tee output.txt +``` + +The following screenshot shows that the output was written to the 'output.txt' file as well as to stdout. + +[![tee command output][3]][4] + +That should make the basic usage of tee clear. + +### Q2. How to make sure tee appends information to files? + +By default, the tee command overwrites information in a file when used again. However, if you want, you can change this behavior by using the -a command line option. + +``` +[command] | tee -a [file] +``` + +So basically, the -a option forces tee to append information to the file. + +### Q3. How to make tee write to multiple files? + +That's pretty easy. You just have to mention their names. + +``` +[command] | tee [file1] [file2] [file3] +``` + +For example: + +``` +ping google.com | tee output1.txt output2.txt output3.txt +``` + +[![How to make tee write to multiple files][5]][6] + +### Q4. How to make tee redirect output of one command to another? + +You can not only use tee to simultaneously write output to files, but also to pass on the output as input to other commands. For example, the following command will not only store the filenames in 'output.txt' but also let you know - through wc - the number of entries in the output.txt file.
+ +``` +ls file* | tee output.txt | wc -l +``` + +[![How to make tee redirect output of one command to another][7]][8] + +### Q5. How to write to a file with elevated privileges using tee? + +Suppose you opened a file in the [Vim editor][9], made a lot of changes, and then when you tried saving those changes, you got an error that made you realize that it's a root-owned file, meaning you need to have sudo privileges to save these changes. + +[![How to write to a file with elevated privileges using tee][10]][11] + +In scenarios like these, you can use tee to elevate privileges on the go. + +``` +:w !sudo tee % +``` + +The aforementioned command will ask you for the root password, and then let you save the changes. + +### Q6. How to make tee ignore interrupt? + +The -i command line option enables tee to ignore the interrupt signal (`SIGINT`), which is usually issued when you press the ctrl+c key combination. + +``` +[command] | tee -i [file] +``` + +This is useful when you want to kill the command with ctrl+c but want tee to exit gracefully. + +### Conclusion + +You'll likely agree now that tee is an extremely useful command. We've discussed its basic usage as well as the majority of its command line options here. The tool doesn't have a steep learning curve, so just practice all these examples, and you should be good to go. For more information, head to the tool's [man page][12].
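As a final bit of practice, here is a small, self-contained sketch that demonstrates the overwrite-versus-append behavior from Q2; the file name `demo.txt` is just an arbitrary example:

```shell
# Plain tee truncates the target file on every run...
echo "first run"  | tee demo.txt > /dev/null
echo "second run" | tee demo.txt > /dev/null

# ...while tee -a appends to it instead.
echo "third run"  | tee -a demo.txt > /dev/null

cat demo.txt   # prints "second run" followed by "third run"
```

The `> /dev/null` redirections are only there to silence the copy tee sends to stdout; drop them if you want to watch the output as it is written.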
+ + +-------------------------------------------------------------------------------- + +via: https://www.howtoforge.com/linux-tee-command/ + +作者:[Himanshu Arora][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.howtoforge.com +[1]:https://www.howtoforge.com/images/command-tutorial/ping-example.png +[2]:https://www.howtoforge.com/images/command-tutorial/big/ping-example.png +[3]:https://www.howtoforge.com/images/command-tutorial/ping-with-tee.png +[4]:https://www.howtoforge.com/images/command-tutorial/big/ping-with-tee.png +[5]:https://www.howtoforge.com/images/command-tutorial/tee-mult-files1.png +[6]:https://www.howtoforge.com/images/command-tutorial/big/tee-mult-files1.png +[7]:https://www.howtoforge.com/images/command-tutorial/tee-redirect-output.png +[8]:https://www.howtoforge.com/images/command-tutorial/big/tee-redirect-output.png +[9]:https://www.howtoforge.com/vim-basics +[10]:https://www.howtoforge.com/images/command-tutorial/vim-write-error.png +[11]:https://www.howtoforge.com/images/command-tutorial/big/vim-write-error.png +[12]:https://linux.die.net/man/1/tee From 6556a122786b2788b6d96b956e3bb5b16ae7d6ce Mon Sep 17 00:00:00 2001 From: darksun Date: Fri, 19 Jan 2018 15:32:30 +0800 Subject: [PATCH 103/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20use=20?= =?UTF-8?q?Fio=20(Flexible=20I/O=20Tester)=20to=20Measure=20Disk=20Perform?= =?UTF-8?q?ance=20in=20Linux?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...r) to Measure Disk Performance in Linux.md | 254 ++++++++++++++++++ 1 file changed, 254 insertions(+) create mode 100644 sources/tech/20170805 How to use Fio (Flexible I-O Tester) to Measure Disk Performance in Linux.md diff --git a/sources/tech/20170805 How to use Fio (Flexible I-O Tester) to Measure Disk Performance in Linux.md b/sources/tech/20170805 How to use 
Fio (Flexible I-O Tester) to Measure Disk Performance in Linux.md new file mode 100644 index 0000000000..c2659f3664 --- /dev/null +++ b/sources/tech/20170805 How to use Fio (Flexible I-O Tester) to Measure Disk Performance in Linux.md @@ -0,0 +1,254 @@ +How to use Fio (Flexible I/O Tester) to Measure Disk Performance in Linux +====== +![](https://wpmojo.com/wp-content/uploads/2017/08/wpmojo.com-how-to-use-fio-to-measure-disk-performance-in-linux-dotlayer.com-how-to-use-fio-to-measure-disk-performance-in-linux-816x457.jpeg) + +Fio, which stands for Flexible I/O Tester, [is a free and open source][1] disk I/O tool used for both benchmarking and stress/hardware verification, developed by Jens Axboe. + +It has support for 19 different types of I/O engines (sync, mmap, libaio, posixaio, SG v3, splice, null, network, syslet, guasi, solarisaio, and more), I/O priorities (for newer Linux kernels), rate I/O, forked or threaded jobs, and much more. It can work on block devices as well as files. + +Fio accepts job descriptions in a simple-to-understand text format. Several example job files are included. Fio displays all sorts of I/O performance information, including complete IO latencies and percentiles. + +It is in wide use in many places, for benchmarking, QA, and verification purposes. It supports Linux, FreeBSD, NetBSD, OpenBSD, OS X, OpenSolaris, AIX, HP-UX, Android, and Windows. + +In this tutorial, we will be using Ubuntu 16, and you will need sudo or root privileges on the computer. We will go over the installation and use of fio. + +### Installing fio from Source + +We are going to clone the repo from GitHub, install the prerequisites, and then build the package from the source code. Let's start by making sure we have git installed.
+``` + +sudo apt-get install git + + +``` + +For CentOS users you can use: +``` + +sudo yum install git + + +``` + +Now we change directory to /opt and clone the repo from GitHub: +``` + +cd /opt +git clone https://github.com/axboe/fio + + +``` + +You should see the output below: +``` + +Cloning into 'fio'... +remote: Counting objects: 24819, done. +remote: Compressing objects: 100% (44/44), done. +remote: Total 24819 (delta 39), reused 62 (delta 32), pack-reused 24743 +Receiving objects: 100% (24819/24819), 16.07 MiB | 0 bytes/s, done. +Resolving deltas: 100% (16251/16251), done. +Checking connectivity... done. + + +``` + +Now, we change directory into the fio codebase by typing the command below inside the opt folder: +``` + +cd fio + + +``` + +We can finally build fio from source using the `make` build utility by running the commands below: +``` + +# ./configure +# make +# make install + + +``` + +### Installing fio on Ubuntu + +For Ubuntu and Debian, fio is available on the main repository. You can easily install fio using the standard package managers such as yum and apt-get. + +For Ubuntu and Debian you can simply use: +``` + +sudo apt-get install fio + + +``` + +For CentOS/RedHat, you might need to install the EPEL repository to your system before you can have access to fio. You can install it by running the following command: +``` + +sudo yum install epel-release -y + + +``` + +You can then install fio using the command below: +``` + +sudo yum install fio -y + + +``` + +### Disk Performance testing with Fio + +With fio installed on your system, it's time to see how to use it with some examples below. We are going to perform a random write test, a random read test, and a mixed read/write test. + +### Performing a Random Write Test + +Let's start by running the following command.
This command will write a total of 1GB [2 jobs x 512 MB = 1GB], running 2 processes at a time: +``` + +sudo fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --direct=0 --size=512M --numjobs=2 --runtime=240 --group_reporting + + +``` +``` + +... +fio-2.2.10 +Starting 2 processes + +randwrite: (groupid=0, jobs=2): err= 0: pid=7271: Sat Aug 5 13:28:44 2017 + write: io=1024.0MB, bw=2485.5MB/s, iops=636271, runt= 412msec + slat (usec): min=1, max=268, avg= 1.79, stdev= 1.01 + clat (usec): min=0, max=13, avg= 0.20, stdev= 0.40 + lat (usec): min=1, max=268, avg= 2.03, stdev= 1.01 + clat percentiles (usec): + | 1.00th=[ 0], 5.00th=[ 0], 10.00th=[ 0], 20.00th=[ 0], + | 30.00th=[ 0], 40.00th=[ 0], 50.00th=[ 0], 60.00th=[ 0], + | 70.00th=[ 0], 80.00th=[ 1], 90.00th=[ 1], 95.00th=[ 1], + | 99.00th=[ 1], 99.50th=[ 1], 99.90th=[ 1], 99.95th=[ 1], + | 99.99th=[ 1] + lat (usec) : 2=99.99%, 4=0.01%, 10=0.01%, 20=0.01% + cpu : usr=15.14%, sys=84.00%, ctx=8, majf=0, minf=26 + IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% + submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% + complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% + issued : total=r=0/w=262144/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0 + latency : target=0, window=0, percentile=100.00%, depth=1 + +Run status group 0 (all jobs): + WRITE: io=1024.0MB, aggrb=2485.5MB/s, minb=2485.5MB/s, maxb=2485.5MB/s, mint=412msec, maxt=412msec + +Disk stats (read/write): + sda: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00% + + +``` + +### Performing a Random Read Test + +We are going to perform a random read test now. We will be trying to read a total of 2GB [4 jobs x 512 MB = 2GB]: +``` + +sudo fio --name=randread --ioengine=libaio --iodepth=16 --rw=randread --bs=4k --direct=0 --size=512M --numjobs=4 --runtime=240 --group_reporting + + +``` + +You should see the output below: +``` + +...
+fio-2.2.10 +Starting 4 processes +randread: Laying out IO file(s) (1 file(s) / 512MB) +randread: Laying out IO file(s) (1 file(s) / 512MB) +randread: Laying out IO file(s) (1 file(s) / 512MB) +randread: Laying out IO file(s) (1 file(s) / 512MB) +Jobs: 4 (f=4): [r(4)] [100.0% done] [71800KB/0KB/0KB /s] [17.1K/0/0 iops] [eta 00m:00s] +randread: (groupid=0, jobs=4): err= 0: pid=7586: Sat Aug 5 13:30:52 2017 + read : io=2048.0MB, bw=80719KB/s, iops=20179, runt= 25981msec + slat (usec): min=72, max=10008, avg=195.79, stdev=94.72 + clat (usec): min=2, max=28811, avg=2971.96, stdev=760.33 + lat (usec): min=185, max=29080, avg=3167.96, stdev=798.91 + clat percentiles (usec): + | 1.00th=[ 2192], 5.00th=[ 2448], 10.00th=[ 2576], 20.00th=[ 2736], + | 30.00th=[ 2800], 40.00th=[ 2832], 50.00th=[ 2928], 60.00th=[ 3024], + | 70.00th=[ 3120], 80.00th=[ 3184], 90.00th=[ 3248], 95.00th=[ 3312], + | 99.00th=[ 3536], 99.50th=[ 6304], 99.90th=[15168], 99.95th=[18816], + | 99.99th=[22912] + bw (KB /s): min=17360, max=25144, per=25.05%, avg=20216.90, stdev=1605.65 + lat (usec) : 4=0.01%, 10=0.01%, 250=0.01%, 500=0.01%, 750=0.01% + lat (usec) : 1000=0.01% + lat (msec) : 2=0.01%, 4=99.27%, 10=0.44%, 20=0.24%, 50=0.04% + cpu : usr=1.35%, sys=5.18%, ctx=524309, majf=0, minf=98 + IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0% + submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% + complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% + issued : total=r=524288/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0 + latency : target=0, window=0, percentile=100.00%, depth=16 + +Run status group 0 (all jobs): + READ: io=2048.0MB, aggrb=80718KB/s, minb=80718KB/s, maxb=80718KB/s, mint=25981msec, maxt=25981msec + +Disk stats (read/write): + sda: ios=521587/871, merge=0/1142, ticks=96664/612, in_queue=97284, util=99.85% + + +``` + +Finally, we want to show a sample read-write test to see the kind of output that fio returns.
+ +### Read Write Performance Test + +The command below will measure random read/write performance of a USB pen drive (/dev/sdc1); note that, as written, fio creates its test file random_read_write.fio in the current directory, so run it from a directory on the drive you want to test: +``` + +sudo fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=random_read_write.fio --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75 + + +``` + +Below is the output we get from the command above. +``` + +fio-2.2.10 +Starting 1 process +Jobs: 1 (f=1): [m(1)] [100.0% done] [217.8MB/74452KB/0KB /s] [55.8K/18.7K/0 iops] [eta 00m:00s] +test: (groupid=0, jobs=1): err= 0: pid=8475: Sat Aug 5 13:36:04 2017 + read : io=3071.7MB, bw=219374KB/s, iops=54843, runt= 14338msec + write: io=1024.4MB, bw=73156KB/s, iops=18289, runt= 14338msec + cpu : usr=6.78%, sys=20.81%, ctx=1007218, majf=0, minf=9 + IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% + submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% + complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% + issued : total=r=786347/w=262229/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0 + latency : target=0, window=0, percentile=100.00%, depth=64 + +Run status group 0 (all jobs): + READ: io=3071.7MB, aggrb=219374KB/s, minb=219374KB/s, maxb=219374KB/s, mint=14338msec, maxt=14338msec + WRITE: io=1024.4MB, aggrb=73156KB/s, minb=73156KB/s, maxb=73156KB/s, mint=14338msec, maxt=14338msec + +Disk stats (read/write): + sda: ios=774141/258944, merge=1463/899, ticks=748800/150316, in_queue=900720, util=99.35% + + +``` + +We hope you enjoyed this tutorial and enjoyed following along. Fio is a very useful tool, and we hope you can use it in your next debugging activity. If you enjoyed reading this post, feel free to leave a comment or questions. Go ahead and clone the repo and play around with the code.
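If you end up running several of these tests, it can be handy to pull just the aggregate READ/WRITE summary lines out of saved fio logs so different runs can be compared side by side. The helper below is our own sketch, not part of fio itself; the `fio_summary` name and the `run1.log` file are arbitrary, and the parsing assumes the fio 2.x summary format shown in the outputs above:

```shell
# Hypothetical helper: extract the aggregate "READ:"/"WRITE:" lines
# from the "Run status group" section of a saved fio log.
fio_summary() {
    grep -E '^[[:space:]]*(READ|WRITE):' "$1" | sed -E 's/^[[:space:]]+//'
}

# A sample summary, captured from a run like the one above.
cat > run1.log <<'EOF'
Run status group 0 (all jobs):
   READ: io=3071.7MB, aggrb=219374KB/s, minb=219374KB/s, maxb=219374KB/s, mint=14338msec, maxt=14338msec
  WRITE: io=1024.4MB, aggrb=73156KB/s, minb=73156KB/s, maxb=73156KB/s, mint=14338msec, maxt=14338msec
EOF

fio_summary run1.log
```

Comparing the `aggrb` figures across a few such logs gives a quick sense of relative throughput without wading through the full per-job statistics.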
+ + +-------------------------------------------------------------------------------- + +via: https://wpmojo.com/how-to-use-fio-to-measure-disk-performance-in-linux/ + +作者:[Alex Pearson][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://wpmojo.com/author/wpmojo/ +[1]:https://github.com/axboe/fio From d0f380b07871b04e8679c676dd3b3777cbe37bbf Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 19 Jan 2018 15:33:38 +0800 Subject: [PATCH 104/226] PRF:20170927 Microservices and containers- 5 pitfalls to avoid.md @qhwdw --- ...ces and containers- 5 pitfalls to avoid.md | 40 ++++++++++--------- 1 file changed, 22 insertions(+), 18 deletions(-) diff --git a/translated/tech/20170927 Microservices and containers- 5 pitfalls to avoid.md b/translated/tech/20170927 Microservices and containers- 5 pitfalls to avoid.md index eb556dd301..b8d8c8d410 100644 --- a/translated/tech/20170927 Microservices and containers- 5 pitfalls to avoid.md +++ b/translated/tech/20170927 Microservices and containers- 5 pitfalls to avoid.md @@ -1,57 +1,61 @@ 微服务和容器:需要去防范的 5 个“坑” ====== +> 微服务与容器天生匹配,但是你需要避开一些常见的陷阱。 + ![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/CIO%20Containers%20Ecosystem.png?itok=lDTaYXzk) 因为微服务和容器是 [天生的“一对”][1],所以一起来使用它们,似乎也就不会有什么问题。当我们将这对“天作之合”投入到生产系统后,你就会发现,随着你的 IT 基础的提升,等待你的将是大幅上升的成本。是不是这样的? 
+(让我们等一下,等人们笑声过去)
+
 是的,很遗憾,这并不是你所希望的结果。虽然这两种技术的组合是非常强大的,但是,如果没有很好的规划和适配,它们并不能发挥出强大的性能来。在前面的文章中,我们整理了如果你想 [使用它们你应该掌握的知识][2]。但是,那些都是组织在容器中使用微服务时所遇到的常见问题。
 
-事先了解这些可能出现的问题,可以为你的成功奠定更坚实的基础。
+事先了解这些可能出现的问题,能够帮你避免这些问题,为你的成功奠定更坚实的基础。
 
-微服务和容器技术的出现是基于组织的需要、知识、资源等等更多的现实的要求。Mac Browning 说,“他们最常犯的一个 [错误] 是试图一次就想“搞定”一切”,他是 [DigitalOcean][3] 的工程部经理。“而真正需要面对的问题是,你的公司应该采用什么样的容器和微服务。”
+微服务和容器技术的出现是基于组织的需要、知识、资源等等更多的现实的要求。Mac Browning 说,“他们最常犯的一个 [错误] 是试图一次就想‘搞定’一切”,他是 [DigitalOcean][3] 的工程部经理。“而真正需要面对的问题是,你的公司应该采用什么样的容器和微服务。”
 
 **[ 努力向你的老板和同事去解释什么是微服务?阅读我们的入门读本[如何简单明了地解释微服务][4]。]**
 
 Browning 和其他的 IT 专业人员分享了他们遇到的,在组织中使用容器化微服务时的五个陷阱,特别是在他们的生产系统生命周期的早期时候。在你的组织中需要去部署微服务和容器时,了解这些知识,将有助于你去评估微服务和容器化的部署策略。
 
-### 1. 在部署微服务和容器化上,试图同时从零开始
+### 1、 在部署微服务和容器化上,试图同时从零开始
 
-如果你刚开始从完全的实体服务器上开始改变,或者如果你的组织在微服务和容器化上还没有足够的知识储备,那么,请记住:微服务和容器化并不是拴在一起,不可分别部署的。这就意味着,你可以发挥你公司内部专家的技术特长,先从部署其中的一个开始。Kevin McGrath,CTO, [Sungard 服务可用性][5] 资深设计师,他建议,通过首先使用容器化来为你的团队建立知识和技能储备,通过对现有应用或者新应用进行容器化部署,接着再将它们迁移到微服务架构,这样才能在最后的阶段感受到它们的优势所在。
+如果你刚开始从完全的单例应用开始改变,或者如果你的组织在微服务和容器化上还没有足够的知识储备,那么,请记住:微服务和容器化并不是拴在一起、不可分别部署的。这就意味着,你可以发挥你公司内部专家的技术特长,先从部署其中的一个开始。[Sungard Availability Services][5] 的资深 CTO 架构师 Kevin McGrath 建议,通过首先使用容器化来为你的团队建立知识和技能储备,通过对现有应用或者新应用进行容器化部署,接着再将它们迁移到微服务架构,这样才能最终感受到它们的优势所在。
 
-McGrath 说,“微服务要想运行的很好,需要公司经过多年的反复迭代,这样才能实现快速部署和迁移”,“如果组织不能实现快速迁移,那么支持微服务将很困难。实现快速迁移,容器化可以帮助你,这样就不用担心业务整体停机”
+McGrath 说,“微服务要想运行的很好,需要公司经过多年的反复迭代,这样才能实现快速部署和迁移”,“如果组织不能实现快速迁移,那么支持微服务将很困难。实现快速迁移,容器化可以帮助你,这样就不用担心业务整体停机”。
 
-### 2. 从一个面向客户的或者关键的业务应用开始
+### 2、 从一个面向客户的或者关键的业务应用开始
 
-对组织来说,一个相关陷阱恰恰就是引入容器、微服务、或者同时两者都引入的这个开端:在尝试征服一片丛林中的雄狮之前,你应该先去征服处于食物链底端的一些小动物,以取得一些实践经验。
+对组织来说,一个相关陷阱恰恰就是从容器、微服务、或者两者同时起步:在尝试征服一片丛林中的雄狮之前,你应该先去征服处于食物链底端的一些小动物,以取得一些实践经验。
 
-在你的学习过程中预期会有一些错误出现 - 你是希望这些错误发生在面向客户的关键业务应用上,还是,仅对 IT 或者其他内部团队可见的低风险应用上?
+在你的学习过程中可以预期会有一些错误出现 —— 你是希望这些错误发生在面向客户的关键业务应用上,还是,仅对 IT 或者其他内部团队可见的低风险应用上?
DigitalOcean 的 Browning 说,“如果整个生态系统都是新的,为了获取一些微服务和容器方面的操作经验,那么,将它们先应用到影响面较低的区域,比如像你的持续集成系统或者内部工具,可能是一个低风险的做法。”你获得这方面的经验以后,当然会将这些技术应用到为客户提供服务的生产系统上。而现实情况是,不论你准备的如何周全,都不可避免会遇到问题,因此,需要提前为可能出现的问题制定应对之策。 -### 3. 在没有合适的团队之前引入了太多的复杂性 +### 3、 在没有合适的团队之前引入了太多的复杂性 由于微服务架构的弹性,它可能会产生复杂的管理需求。 -作为 [Red Hat][6] 技术的狂热拥护者,[Gordon Haff][7] 最近写道,“一个符合 OCI 标准的容器运行时本身管理单个容器是很擅长的,但是,当你开始使用多个容器和容器化应用时,并将它们分解为成百上千个节点后,管理和编配它们将变得极为复杂。最终,你将回过头来需要将容器分组来提供服务 - 比如,跨容器的网络、安全、测控” +作为 [Red Hat][6] 技术的狂热拥护者,[Gordon Haff][7] 最近写道,“一个符合 OCI 标准的容器运行时本身管理单个容器是很擅长的,但是,当你开始使用多个容器和容器化应用时,并将它们分解为成百上千个节点后,管理和编配它们将变得极为复杂。最终,你将需要回过头来将容器分组来提供服务 —— 比如,跨容器的网络、安全、测控”。 -Haff 提示说,“幸运的是,由于容器是可移植的,并且,与之相关的管理栈也是可移植的”。“这时出现的编配技术,比如像 [Kubernetes][8] ,使得这种 IT 需求变得简单化了”(更多内容请查阅 Haff 的文章:[容器化为编写应用带来的 5 个优势][1]) +Haff 提示说,“幸运的是,由于容器是可移植的,并且,与之相关的管理栈也是可移植的”。“这时出现的编配技术,比如像 [Kubernetes][8] ,使得这种 IT 需求变得简单化了”(更多内容请查阅 Haff 的文章:[容器化为编写应用带来的 5 个优势][1])。 另外,你需要合适的团队去做这些事情。如果你已经有 [DevOps shop][9],那么,你可能比较适合做这种转换。因为,从一开始你已经聚集了相关技能的人才。 -Mike Kavis 说,“随着时间的推移,会有越来越多的服务得以部署,管理起来会变得很不方便”,他是 [Cloud Technology Partners][10] 的副总裁兼首席云架构设计师。他说,“在 DevOps 的关键过程中,确保各个领域的专家 - 开发、测试、安全、运营等等 - 全部者参与进来,并且在基于容器的微服务中,在构建、部署、运行、安全方面实现协作。” +Mike Kavis 说,“随着时间的推移,部署了越来越多的服务,管理起来会变得很不方便”,他是 [Cloud Technology Partners][10] 的副总裁兼首席云架构设计师。他说,“在 DevOps 的关键过程中,确保各个领域的专家 —— 开发、测试、安全、运营等等 —— 全部都参与进来,并且在基于容器的微服务中,在构建、部署、运行、安全方面实现协作。” -### 4. 忽视重要的需求:自动化 +### 4、 忽视重要的需求:自动化 除了具有一个合适的团队之外,那些在基于容器化的微服务部署比较成功的组织都倾向于以“实现尽可能多的自动化”来解决固有的复杂性。 -Carlos Sanchez 说,“实现分布式架构并不容易,一些常见的挑战,像数据持久性、日志、排错等等,在微服务架构中都会变得很复杂”,他是 [CloudBees][11] 的资深软件工程师。根据定义,Sanchez 提到的分布式架构,随着业务的增长,将变成一个巨大无比的繁重的运营任务。“服务和组件的增殖,将使得运营自动化变成一项非常强烈的需求”。Sanchez 警告说。“手动管理将限制服务的规模” +Carlos Sanchez 说,“实现分布式架构并不容易,一些常见的挑战,像数据持久性、日志、排错等等,在微服务架构中都会变得很复杂”,他是 [CloudBees][11] 的资深软件工程师。根据定义,Sanchez 提到的分布式架构,随着业务的增长,将变成一个巨大无比的繁重的运营任务。“服务和组件的增殖,将使得运营自动化变成一项非常强烈的需求”。Sanchez 警告说。“手动管理将限制服务的规模”。 -### 5. 
随着时间的推移,微服务变得越来越臃肿 +### 5、 随着时间的推移,微服务变得越来越臃肿 -在一个容器中运行一个服务或者软件组件并不神奇。但是,这样做并不能证明你就一定在使用微服务。Manual Nedbal, [ShieldX Networks][12] 的 CTO,它警告说,IT 专业人员要确保,随着时间的推移,微服务仍然是微服务。 +在一个容器中运行一个服务或者软件组件并不神奇。但是,这样做并不能证明你就一定在使用微服务。Manual Nedbal, [ShieldX Networks][12] 的 CTO,他警告说,IT 专业人员要确保,随着时间的推移,微服务仍然是微服务。 -Nedbal 说,“随着时间的推移,一些软件组件积累了大量的代码和特性,将它们将在一个容器中将会产生并不需要的微服务,也不会带来相同的优势”,也就是说,“随着组件的变大,工程师需要找到合适的时机将它们再次分解” +Nedbal 说,“随着时间的推移,一些软件组件积累了大量的代码和特性,将它们放在一个容器中将会产生并不需要的微服务,也不会带来相同的优势”,也就是说,“随着组件的变大,工程师需要找到合适的时机将它们再次分解”。 -------------------------------------------------------------------------------- @@ -59,7 +63,7 @@ via: https://enterprisersproject.com/article/2017/9/using-microservices-containe 作者:[Kevin Casey][a] 译者:[qhwdw](https://github.com/qhwdw) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 1fa0fd9544369ed12b39556a62a44b87e06f2ab2 Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 19 Jan 2018 15:33:57 +0800 Subject: [PATCH 105/226] PUB:20170927 Microservices and containers- 5 pitfalls to avoid.md @qhwdw https://linux.cn/article-9258-1.html --- .../20170927 Microservices and containers- 5 pitfalls to avoid.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20170927 Microservices and containers- 5 pitfalls to avoid.md (100%) diff --git a/translated/tech/20170927 Microservices and containers- 5 pitfalls to avoid.md b/published/20170927 Microservices and containers- 5 pitfalls to avoid.md similarity index 100% rename from translated/tech/20170927 Microservices and containers- 5 pitfalls to avoid.md rename to published/20170927 Microservices and containers- 5 pitfalls to avoid.md From 1db7e4c59e9cd90bb1836ec659c274116f5e1f3b Mon Sep 17 00:00:00 2001 From: darksun Date: Fri, 19 Jan 2018 15:44:51 +0800 Subject: [PATCH 106/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20To=20Boot?= =?UTF-8?q?=20Into=20Linux=20Command=20Line?= 
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...115 How To Boot Into Linux Command Line.md | 61 +++++++++++++++++++
 1 file changed, 61 insertions(+)
 create mode 100644 sources/tech/20180115 How To Boot Into Linux Command Line.md

diff --git a/sources/tech/20180115 How To Boot Into Linux Command Line.md b/sources/tech/20180115 How To Boot Into Linux Command Line.md
new file mode 100644
index 0000000000..7a63f47f90
--- /dev/null
+++ b/sources/tech/20180115 How To Boot Into Linux Command Line.md
@@ -0,0 +1,61 @@
+How To Boot Into Linux Command Line
+======
+![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/how-to-boot-into-linux-command-line_orig.jpg)
+
+There may be times when you need or want to boot up a [Linux][1] system without using a GUI, that is, with no X, and instead use the command line. Whatever the reason, fortunately, booting straight into the Linux **command line** is very simple. It requires a simple change to the boot parameters after the other kernel options. This change specifies the runlevel to boot the system into.
+
+### Why Do This?
+
+If your system does not run Xorg because the configuration is invalid, or if the display manager is broken, or whatever may prevent the GUI from starting properly, booting into the command line will allow you to troubleshoot by logging into a terminal (assuming you know what you're doing to start with) and doing whatever you need to do. Booting into the command line is also a great way to become more familiar with the terminal; otherwise, you can do it just for fun.
+
+### Accessing the GRUB Menu
+
+On startup, you will need access to the GRUB boot menu. You may need to hold the SHIFT key down before the system boots if the menu isn't set to display every time the computer is started. In the menu, the [Linux distribution][2] entry must be selected. Once highlighted, press 'e' to edit the boot parameters.
+
+ [![zorin os grub menu](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/gnu-grub_orig.png)][3]
+
+ Older GRUB versions follow a similar mechanism. The boot manager should provide instructions on how to edit the boot parameters.
+
+### Specify the Runlevel
+
+An editor will appear, and you will see the options that GRUB passes to the kernel. Navigate to the line that starts with 'linux' (older GRUB versions may use 'kernel'; select that and follow the instructions). This line specifies the parameters to pass to the kernel. At the end of that line (it may appear to span multiple lines, depending on resolution), you simply specify the runlevel to boot into, which is 3 (multi-user mode, text-only).
+
+ [![customize grub menu](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/runlevel_orig.png)][4]
+
+Pressing Ctrl-X or F10 will boot the system using those parameters. Boot-up will continue as normal. The only thing that has changed is the runlevel to boot into.
+
+
+
+This is what was started up:
+
+ [![boot linux in command line](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/runlevel_1_orig.png)][5]
+
+### Runlevels
+
+You can specify different runlevels to boot into, with runlevel 5 being the default. Runlevel 1 boots into "single-user" mode, which drops you into a root shell. Runlevel 3 provides a multi-user, command-line-only system.
+
+### Switch From the Command Line
+
+At some point, you may want to run the display manager again to use a GUI, and the quickest way to do that is running this:
+```
+$ sudo init 5
+```
+
+And it is as simple as that. Personally, I find the command line much more exciting and hands-on than using GUI tools; however, that's just my preference.
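One caveat worth adding: on modern systemd-based distributions, runlevels are only emulated, and the native concept is a systemd "target." The helper function below is our own illustrative sketch, written for this article; the target names themselves are the standard systemd ones.

```shell
# Conventional SysV runlevel -> systemd target mapping (illustrative helper).
# On a systemd machine, the boot-time equivalent of runlevel 3 is adding
# "systemd.unit=multi-user.target" to the kernel line, and the runtime
# equivalent of "sudo init 5" is "sudo systemctl isolate graphical.target".
runlevel_to_target() {
  case "$1" in
    1) echo rescue.target ;;
    3) echo multi-user.target ;;
    5) echo graphical.target ;;
    *) echo "no standard target for runlevel $1" ;;
  esac
}
```

So if editing the GRUB line with a bare runlevel number does not work on your distribution, try the `systemd.unit=` form instead.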
+ +-------------------------------------------------------------------------------- + +via: http://www.linuxandubuntu.com/home/how-to-boot-into-linux-command-line + +作者:[LinuxAndUbuntu][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.linuxandubuntu.com +[1]:http://www.linuxandubuntu.com/home/category/linux +[2]:http://www.linuxandubuntu.com/home/category/distros +[3]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/gnu-grub_orig.png +[4]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/runlevel_orig.png +[5]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/runlevel_1_orig.png From a4227beccb5964861743e736494571cbb0d4c993 Mon Sep 17 00:00:00 2001 From: darksun Date: Fri, 19 Jan 2018 15:49:22 +0800 Subject: [PATCH 107/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20instal?= =?UTF-8?q?l=20software=20applications=20on=20Linux?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... install software applications on Linux.md | 261 ++++++++++++++++++ 1 file changed, 261 insertions(+) create mode 100644 sources/tech/20180111 How to install software applications on Linux.md diff --git a/sources/tech/20180111 How to install software applications on Linux.md b/sources/tech/20180111 How to install software applications on Linux.md new file mode 100644 index 0000000000..6414bd19be --- /dev/null +++ b/sources/tech/20180111 How to install software applications on Linux.md @@ -0,0 +1,261 @@ +How to install software applications on Linux +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux-penguins.png?itok=yKOpaJM_) + +Image by : Internet Archive Book Images. Modified by Opensource.com. CC BY-SA 4.0 + +How do you install an application on Linux? As with many operating systems, there isn't just one answer to that question. 
Applications can come from so many sources--it's nearly impossible to count--and each development team may deliver their software whatever way they feel is best. Knowing how to install what you're given is part of being a true power user of your OS. + +### Repositories + +For well over a decade, Linux has used software repositories to distribute software. A "repository" in this context is a public server hosting installable software packages. A Linux distribution provides a command, and usually a graphical interface to that command, that pulls the software from the server and installs it onto your computer. It's such a simple concept that it has served as the model for all major cellphone operating systems and, more recently, the "app stores" of the two major closed source computer operating systems. + + +![Linux repository][2] + +Not an app store + +Installing from a software repository is the primary method of installing apps on Linux. It should be the first place you look for any application you intend to install. + +To install from a software repository, there's usually a command: +``` + + +$ sudo dnf install inkscape +``` + +The actual command you use depends on what distribution of Linux you use. Fedora uses `dnf`, OpenSUSE uses `zypper`, Debian and Ubuntu use `apt`, Slackware uses `sbopkg`, FreeBSD uses `pkg_add`, and Illumos-based OpenIndiana uses `pkg`. 
Whatever you use, the incantation usually involves searching for the proper name of what you want to install, because sometimes what you call software is not its official or solitary designation: +``` + + +$ sudo dnf search pyqt + +PyQt.x86_64 : Python bindings for Qt3 + +PyQt4.x86_64 : Python bindings for Qt4 + +python-qt5.x86_64 : PyQt5 is Python bindings for Qt5 +``` + +Once you have located the name of the package you want to install, use the `install` subcommand to perform the actual download and automated install: +``` + + +$ sudo dnf install python-qt5 +``` + +For specifics on installing from a software repository, see your distribution's documentation. + +The same generally holds true with the graphical tools. Search for what you think you want, and then install it. + +![](https://opensource.com/sites/default/files/u128651/apper.png) + +Like the underlying command, the name of the graphical installer depends on what distribution you are running. The relevant application is usually tagged with the software or package keywords, so search your launcher or menu for those terms, and you'll find what you need. Since open source is all about user choice, if you don't like the graphical user interface (GUI) that your distribution provides, there may be an alternative that you can install. And now you know how to do that. + +#### Extra repositories + +Your distribution has its standard repository for software that it packages for you, and there are usually extra repositories common to your distribution. For example, [EPEL][3] serves Red Hat Enterprise Linux and CentOS, [RPMFusion][4] serves Fedora, Ubuntu has various levels of support as well as a Personal Package Archive (PPA) network, [Packman][5] provides extra software for OpenSUSE, and [SlackBuilds.org][6] provides community build scripts for Slackware. 
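As a concrete sketch of what "adding a repository" amounts to on an RPM-based system, here is a small helper that writes the kind of INI-style `.repo` definition these collections provide. The function and all values shown are illustrative placeholders of our own; real repositories publish their own baseurl and GPG key, and you should prefer their official install instructions.

```shell
# write_repo: create a minimal yum/dnf-style .repo file (illustrative only).
# Real-world definitions should set gpgcheck=1 and a gpgkey= URL.
write_repo() {
  repoid="$1" baseurl="$2" dest="${3:-/etc/yum.repos.d}"
  cat > "$dest/$repoid.repo" <<EOF
[$repoid]
name=$repoid
baseurl=$baseurl
enabled=1
gpgcheck=0
EOF
}

# write_repo example http://example.com/pub/centos/7
```

After a file like this lands in the configuration directory, the package manager treats the new repository like any other source.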
+
+By default, your Linux OS is set to look at just its official repositories, so if you want to use additional software collections, you must add extra repositories yourself. You can usually install a repository as though it were a software package. In fact, when you install certain software, such as [GNU Ring][7] video chat, the [Vivaldi][8] web browser, Google Chrome, and many others, what you are actually installing is access to their private repositories, from which the latest version of their application is installed to your machine.
+
+
+![Installing a repo][10]
+
+Installing a repo
+
+You can also add the repository manually by editing a text file and adding it to your package manager's configuration directory, or by running a command to install the repository. As usual, the exact command you use depends on the distribution you are running; for example, here is a `dnf` command that adds a repository to the system:
+```
+
+
+$ sudo dnf config-manager --add-repo=http://example.com/pub/centos/7
+```
+
+### Installing apps without repositories
+
+The repository model is so popular because it provides a link between the user (you) and the developer. When important updates are released, your system kindly prompts you to accept the updates, and you can accept them all from one centralized location.
+
+Sometimes, though, a package is made available with no repository attached. These installable packages come in several forms.
+
+#### Linux packages
+
+Sometimes, a developer distributes software in a common Linux packaging format, such as RPM, DEB, or the newer but very popular FlatPak or Snap formats. You may not get access to a repository with this download; you might just get the package.
+
+The video editor [Lightworks][11], for example, provides a `.deb` file for APT users and an `.rpm` file for RPM users. When you want to update, you return to the website and download the latest appropriate file.
+ +These one-off packages can be installed with all the same tools used when installing from a repository. If you double-click the package you download, a graphical installer launches and steps you through the install process. + +Alternately, you can install from a terminal. The difference here is that a lone package file you've downloaded from the internet isn't coming from a repository. It's a "local" install, meaning your package management software doesn't need to download it to install it. Most package managers handle this transparently: +``` + + +$ sudo dnf install ~/Downloads/lwks-14.0.0-amd64.rpm +``` + +In some cases, you need to take additional steps to get the application to run, so carefully read the documentation about the software you're installing. + +#### Generic install scripts + +Some developers release their packages in one of several generic formats. Common extensions include `.run` and `.sh`. NVIDIA graphic card drivers, Foundry visual FX packages like Nuke and Mari, and many DRM-free games from [GOG][12] use this style of installer. + +This model of installation relies on the developer to deliver an installation "wizard." Some of the installers are graphical, while others just run in a terminal. + +There are two ways to run these types of installers. + + 1. You can run the installer directly from a terminal: + + +``` + + +$ sh ./game/gog_warsow_x.y.z.sh +``` + + 2. Alternately, you can run it from your desktop by marking it as executable. To mark an installer executable, right-click on its icon and select **Properties**. + +![Giving an installer executable permission][14] + + +Giving an installer executable permission + +Once you've given permission for it to run, double-click the icon to start the install. + +![GOG installer][16] + +GOG installer + +For the rest of the install, just follow the instructions on the screen. 
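One extra precaution that applies to all of these generic installers: since you are about to execute a program downloaded from the internet, verify the file against the vendor's published checksum first, whenever one is available. Here is a small sketch; the function is our own illustration, and the file name and hash in the usage comment are placeholders, not real values.

```shell
# verify_download: check a downloaded installer against a published SHA-256
# hash before running it. Written for illustration; exits non-zero on mismatch.
verify_download() {
  file="$1" expected="$2"
  echo "$expected  $file" | sha256sum -c --quiet -
}

# Example (placeholder values):
# verify_download gog_warsow_x.y.z.sh "<published-sha256-hash>"
```

If the check fails, `sha256sum` reports the file as FAILED and returns a non-zero exit code, and you should not run the installer.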
+ +#### AppImage portable apps + +The AppImage format is relatively new to Linux, although its concept is based on both NeXT and Rox. The idea is simple: everything required to run an application is placed into one directory, and then that directory is treated as an "app." To run the application, you just double-click the icon, and it runs. There's no need or expectation that the application is installed in the traditional sense; it just runs from wherever you have it lying around on your hard drive. + +Despite its ability to run as a self-contained app, an AppImage usually offers to do some soft system integration. + +![AppImage system integration][18] + +AppImage system integration + +If you accept this offer, a local `.desktop` file is installed to your home directory. A `.desktop` file is a small configuration file used by the Applications menu and mimetype system of a Linux desktop. Essentially, placing the desktop config file in your home directory's application list "installs" the application without actually installing it. You get all the benefits of having installed something, and the benefits of being able to run something locally, as a "portable app." + +#### Application directory + +Sometimes, a developer just compiles an application and posts the result as a download, with no install script and no packaging. Usually, this means that you download a TAR file, [extract it][19], and then double-click the executable file (it's usually the one with the name of the software you downloaded). + +![Twine downloaded for Linux][21] + + +Twine downloaded for Linux + +When presented with this style of software delivery, you can either leave it where you downloaded it and launch it manually when you need it, or you can do a quick and dirty install yourself. This involves two simple steps: + + 1. Save the directory to a standard location and launch it manually when you need it. + 2. 
Save the directory to a standard location and create a `.desktop` file to integrate it into your system. + + + +If you're just installing applications for yourself, it's traditional to keep a `bin` directory (short for "binary") in your home directory as a storage location for locally installed applications and scripts. If you have other users on your system who need access to the applications, it's traditional to place the binaries in `/opt`. Ultimately, it's up to you where you store the application. + +Downloads often come in directories with versioned names, such as `twine_2.13` or `pcgen-v6.07.04`. Since it's reasonable to assume you'll update the application at some point, it's a good idea to either remove the version number or to create a symlink to the directory. This way, the launcher that you create for the application can remain the same, even though you update the application itself. + +To create a `.desktop` launcher file, open a text editor and create a file called `twine.desktop`. The [Desktop Entry Specification][22] is defined by [FreeDesktop.org][23]. Here is a simple launcher for a game development IDE called Twine, installed to the system-wide `/opt` directory: +``` + + +[Desktop Entry] + +Encoding=UTF-8 + +Name=Twine + +GenericName=Twine + +Comment=Twine + +Exec=/opt/twine/Twine + +Icon=/usr/share/icons/oxygen/64x64/categories/applications-games.png + +Terminal=false + +Type=Application + +Categories=Development;IDE; +``` + +The tricky line is the `Exec` line. It must contain a valid command to start the application. Usually, it's just the full path to the thing you downloaded, but in some cases, it's something more complex. 
For example, a Java application might need to be launched as an argument to Java itself: +``` + + +Exec=java -jar /path/to/foo.jar +``` + +Sometimes, a project includes a wrapper script that you can run so you don't have to figure out the right command: +``` + + +Exec=/opt/foo/foo-launcher.sh +``` + +In the Twine example, there's no icon bundled with the download, so the example `.desktop` file assigns a generic gaming icon that shipped with the KDE desktop. You can use workarounds like that, but if you're more artistic, you can just create your own icon, or you can search the Internet for a good icon. As long as the `Icon` line points to a valid PNG or SVG file, your application will inherit the icon. + +The example script also sets the application category primarily to Development, so in KDE, GNOME, and most other Application menus, Twine appears under the Development category. + +To get this example to appear in an Application menu, place the `twine.desktop` file into one of two places: + + * Place it in `~/.local/share/applications` if you're storing the application in your own home directory. + * Place it in `/usr/share/applications` if you're storing the application in `/opt` or another system-wide location and want it to appear in all your users' Application menus. + + + +And now the application is installed as it needs to be and integrated with the rest of your system. + +### Compiling from source + +Finally, there's the truly universal install format: source code. Compiling an application from source code is a great way to learn how applications are structured, how they interact with your system, and how they can be customized. It's by no means a push-button process, though. It requires a build environment, it usually involves installing dependency libraries and header files, and sometimes a little bit of debugging. + +To learn more about compiling from source code, [read my article][24] on the topic. 
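Incidentally, the `.desktop` steps from the application-directory section above are easy to wrap in a small script. The helper below is a minimal sketch of that idea, written for this article rather than taken from any standard tool; the Name, Exec, and Categories values are whatever you pass in, and optional fields like `Icon=` are left out for brevity.

```shell
# install_launcher: write a minimal .desktop file so a locally stored app
# shows up in the Applications menu. Illustrative only; extend the entry
# with Icon=, GenericName=, and a better Categories= value as described above.
install_launcher() {
  name="$1" exec_path="$2"
  appdir="${3:-$HOME/.local/share/applications}"   # per-user location
  mkdir -p "$appdir"
  cat > "$appdir/$name.desktop" <<EOF
[Desktop Entry]
Encoding=UTF-8
Name=$name
Exec=$exec_path
Terminal=false
Type=Application
Categories=Utility;
EOF
}

# install_launcher Twine /opt/twine/Twine
```

Pass `/usr/share/applications` as the third argument instead if the application lives in `/opt` and should appear for all users.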
+ +### Now you know + +Some people think installing software is a magical process that only developers understand, or they think it "activates" an application, as if the binary executable file isn't valid until it has been "installed." Hopefully, learning about the many different methods of installing has shown you that install is really just shorthand for "copying files from one place to the appropriate places on your system." There's nothing mysterious about it. As long as you approach each install without expectations of how it's supposed to happen, and instead look for what the developer has set up as the install process, it's generally easy, even if it is different from what you're used to. + +The important thing is that an installer is honest with you. If you come across an installer that attempts to install additional software without your consent (or maybe it asks for consent, but in a confusing or misleading way), or that attempts to run checks on your system for no apparent reason, then don't continue an install. + +Good software is flexible, honest, and open. And now you know how to get good software onto your computer. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/1/how-install-apps-linux + +作者:[Seth Kenlon][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[1]:/file/382591 +[2]:https://opensource.com/sites/default/files/u128651/repo.png (Linux repository) +[3]:https://fedoraproject.org/wiki/EPEL +[4]:http://rpmfusion.org +[5]:http://packman.links2linux.org/ +[6]:http://slackbuilds.org +[7]:https://ring.cx/en/download/gnu-linux +[8]:http://vivaldi.com +[9]:/file/382566 +[10]:https://opensource.com/sites/default/files/u128651/access.png (Installing a repo) +[11]:https://www.lwks.com/ +[12]:http://gog.com +[13]:/file/382581 +[14]:https://opensource.com/sites/default/files/u128651/exec.jpg (Giving an installer executable permission) +[15]:/file/382586 +[16]:https://opensource.com/sites/default/files/u128651/gog.jpg (GOG installer) +[17]:/file/382576 +[18]:https://opensource.com/sites/default/files/u128651/appimage.png (AppImage system integration) +[19]:https://opensource.com/article/17/7/how-unzip-targz-file +[20]:/file/382596 +[21]:https://opensource.com/sites/default/files/u128651/twine.jpg (Twine downloaded for Linux) +[22]:https://specifications.freedesktop.org/desktop-entry-spec/desktop-entry-spec-latest.html +[23]:http://freedesktop.org +[24]:https://opensource.com/article/17/10/open-source-cats From 1ef3d44ec9b3ed4d6da79ae70c51cd4e825bec01 Mon Sep 17 00:00:00 2001 From: darksun Date: Fri, 19 Jan 2018 15:56:25 +0800 Subject: [PATCH 108/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Ansible=20Tutoria?= =?UTF-8?q?l:=20Intorduction=20to=20simple=20Ansible=20commands?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Intorduction to simple Ansible commands.md | 156 ++++++++++++++++++ 1 file changed, 
156 insertions(+)
 create mode 100644 sources/tech/20170508 Ansible Tutorial- Intorduction to simple Ansible commands.md

diff --git a/sources/tech/20170508 Ansible Tutorial- Intorduction to simple Ansible commands.md b/sources/tech/20170508 Ansible Tutorial- Intorduction to simple Ansible commands.md
new file mode 100644
index 0000000000..e72d90301c
--- /dev/null
+++ b/sources/tech/20170508 Ansible Tutorial- Intorduction to simple Ansible commands.md
@@ -0,0 +1,156 @@
+Ansible Tutorial: Introduction to simple Ansible commands
+======
+In our earlier Ansible tutorial, we discussed [**the installation & configuration of Ansible**][1]. Now in this ansible tutorial, we will learn some basic examples of ansible commands that we will use to manage our infrastructure. So let us start by looking at the syntax of a complete ansible command,
+
+```
+$ ansible <group> -m <module> -a <arguments>
+```
+
+Here, we can also use a single host or `all` in place of `<group>`; `<module>` & `<arguments>` are optional to provide. Now let's look at some basic commands to use with ansible,
+
+### Check connectivity of hosts
+
+We have used this command in our previous tutorial also. The command to check connectivity of hosts is
+
+```
+$ ansible <group> -m ping
+```
+
+### Rebooting hosts
+
+```
+$ ansible <group> -a "/sbin/reboot"
+```
+
+### Checking a host's system information
+
+Ansible collects the system's information for all the hosts connected to it.
To display the information of hosts, run
+
+```
+$ ansible <group> -m setup | less
+```
+
+Secondly, to check a particular piece of info from the collected information by passing an argument,
+
+```
+$ ansible <group> -m setup -a "filter=ansible_distribution"
+```
+
+### Transferring files
+
+For transferring files we use the module 'copy' & the complete command that is used is
+
+```
+$ ansible <group> -m copy -a "src=/home/dan dest=/tmp/home"
+```
+
+### Managing users
+
+So to manage the users on the connected hosts, we use a module named 'user' & the commands to use it are as follows,
+
+#### Creating a new user
+
+```
+ $ ansible <group> -m user -a "name=testuser password=<encrypted-password>"
+```
+
+#### Deleting a user
+
+```
+$ ansible <group> -m user -a "name=testuser state=absent"
+```
+
+ **Note:-** To create an encrypted password, use the 'mkpasswd --method=sha-512' command.
+
+### Changing permissions & ownership
+
+So for changing the permissions or ownership of files on connected hosts, we use the module named 'file' & the commands used are
+
+#### Changing permission of a file
+
+```
+$ ansible <group> -m file -a "dest=/home/dan/file1.txt mode=777"
+```
+
+#### Changing ownership of a file
+
+```
+ $ ansible <group> -m file -a "dest=/home/dan/file1.txt mode=777 owner=dan group=dan"
+```
+
+### Managing Packages
+
+So, we can manage the packages installed on all the hosts connected to ansible by using the 'yum' & 'apt' modules, & the complete commands used are
+
+#### Check if package is installed & update it
+
+```
+$ ansible <group> -m yum -a "name=ntp state=latest"
+```
+
+#### Check if package is installed & don't update it
+
+```
+$ ansible <group> -m yum -a "name=ntp state=present"
+```
+
+#### Check if package is at a specific version
+
+```
+$ ansible <group> -m yum -a "name=ntp-1.8 state=present"
+```
+
+#### Check if package is not installed
+
+```
+$ ansible <group> -m yum -a "name=ntp state=absent"
+```
+
+### Managing services
+
+So to manage services with ansible, we use the module 'service' & the complete commands that are used are,
+
+#### Starting a service
+
+```
+$ ansible <group> -m service -a
"name=httpd state=started"
+```
+
+#### Stopping a service
+
+```
+$ ansible <group> -m service -a "name=httpd state=stopped"
+```
+
+#### Restarting a service
+
+```
+$ ansible <group> -m service -a "name=httpd state=restarted"
+```
+
+So this completes our tutorial of some simple, one-line commands that can be used with ansible. Also, in our future tutorials, we will learn to create plays & playbooks that help us manage our hosts more easily & efficiently.
+
+If you think we have helped you or just want to support us, please consider these:
+
+Connect to us: [Facebook][2] | [Twitter][3] | [Google Plus][4]
+
+Become a Supporter - [Make a contribution via PayPal][5]
+
+Linux TechLab is thankful for your continued support.
+
+--------------------------------------------------------------------------------
+
+via: http://linuxtechlab.com/ansible-tutorial-simple-commands/
+
+作者:[SHUSAIN][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linuxtechlab.com/author/shsuain/
+[1]:http://linuxtechlab.com/create-first-ansible-server-automation-setup/
+[2]:https://www.facebook.com/linuxtechlab/
+[3]:https://twitter.com/LinuxTechLab
+[4]:https://plus.google.com/+linuxtechlab
+[5]:http://linuxtechlab.com/contact-us-2/

From e07099a7984a0280bd22528eea0f653588479ae1 Mon Sep 17 00:00:00 2001
From: ypingcn <1344632698@qq.com>
Date: Fri, 19 Jan 2018 16:28:53 +0800
Subject: [PATCH 109/226] Claim:20180112 Top 5 Firefox extensions to install now.md

---
 .../tech/20180112 Top 5 Firefox extensions to install now.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/sources/tech/20180112 Top 5 Firefox extensions to install now.md b/sources/tech/20180112 Top 5 Firefox extensions to install now.md
index 6e11993b45..3717b7c96d 100644
--- a/sources/tech/20180112 Top 5 Firefox extensions to install now.md
+++ b/sources/tech/20180112 Top 5 Firefox extensions to install now.md
@@ -1,3 +1,5 @@
+translating by ypingcn
+
 Top 5 Firefox extensions to install now
 ======

From 5ffa2440012df42af6b628895bdce90b3cf5ab71 Mon Sep 17 00:00:00 2001
From: darksun
Date: Fri, 19 Jan 2018 16:30:11 +0800
Subject: [PATCH 110/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20resolv?=
 =?UTF-8?q?e=20mount.nfs:=20Stale=20file=20handle=20error?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...olve mount.nfs- Stale file handle error.md | 96 +++++++++++++++++++
 1 file changed, 96 insertions(+)
 create mode 100644 sources/tech/20170101 How to resolve mount.nfs- Stale file handle error.md

diff --git a/sources/tech/20170101 How to resolve mount.nfs- Stale file handle error.md b/sources/tech/20170101 How to resolve mount.nfs- Stale file handle error.md
new file mode 100644
index 0000000000..d57280df28
--- /dev/null
+++ b/sources/tech/20170101 How to resolve mount.nfs- Stale file handle error.md
@@ -0,0 +1,96 @@
+How to resolve mount.nfs: Stale file handle error
+====== 
+Learn how to resolve the mount.nfs: Stale file handle error on the Linux platform. This Network File System error can be resolved from the client or server end.
+
+ _![][1]_
+
+When you are using Network File System in your environment, you must have seen the `mount.nfs: Stale file handle` error at times. This error denotes that the NFS share is unable to mount since something has changed since the last known good configuration.
+
+Whenever you reboot the NFS server, or some of the NFS processes are not running on the client or server, or the share is not properly exported at the server, these can be reasons for this error. Moreover, it's irritating when this error comes up on a previously mounted NFS share, because that means the configuration part is correct since it was previously mounted. In such cases, one can try the following commands:
+
+Make sure NFS services are running fine on both the client and the server.
+
+```
+# service nfs status
+rpc.svcgssd is stopped
+rpc.mountd (pid 11993) is running...
+nfsd (pid 12009 12008 12007 12006 12005 12004 12003 12002) is running...
+rpc.rquotad (pid 11988) is running...
+```
+
+> Stay connected to your favorite windows applications from anywhere on any device with [ windows 7 cloud desktop ][2] from CloudDesktopOnline.com. Get Office 365 with expert support and free migration from [ Apps4Rent.com ][3].
+
+If the NFS share is currently mounted on the client, then un-mount it forcefully and try to remount it on the NFS client. Check if it's properly mounted with the `df` command and by changing directory inside it.
+
+```
+# umount -f /mydata_nfs
+
+# mount -t nfs server:/nfs_share /mydata_nfs
+
+#df -k
+------ output clipped -----
+server:/nfs_share 41943040 892928 41050112 3% /mydata_nfs
+```
+
+In the above mount command, server can be the IP or [hostname][4] of the NFS server.
+
+If you are getting an error while forcefully un-mounting, like below:
+
+```
+# umount -f /mydata_nfs
+umount2: Device or resource busy
+umount: /mydata_nfs: device is busy
+umount2: Device or resource busy
+umount: /mydata_nfs: device is busy
+```
+Then you can check which processes or users are using that mount point with the `lsof` command like below:
+
+```
+# lsof |grep mydata_nfs
+lsof: WARNING: can't stat() nfs file system /mydata_nfs
+ Output information may be incomplete.
+su 3327 root cwd unknown /mydata_nfs/dir (stat: Stale NFS file handle)
+bash 3484 grid cwd unknown /mydata_nfs/MYDB (stat: Stale NFS file handle)
+bash 20092 oracle11 cwd unknown /mydata_nfs/MPRP (stat: Stale NFS file handle)
+bash 25040 oracle11 cwd unknown /mydata_nfs/MUYR (stat: Stale NFS file handle)
+```
+
+In the above example you can see that 4 PIDs are using some files on the said mount point. Try killing them off to free the mount point. Once done, you will be able to un-mount it properly.
+
+Sometimes the mount command still gives the same error. In that case, try mounting after restarting the NFS service on the client using the command below.
+
+```
+# service nfs restart
+Shutting down NFS daemon: [ OK ]
+Shutting down NFS mountd: [ OK ]
+Shutting down NFS quotas: [ OK ]
+Shutting down RPC idmapd: [ OK ]
+Starting NFS services: [ OK ]
+Starting NFS quotas: [ OK ]
+Starting NFS mountd: [ OK ]
+Starting NFS daemon: [ OK ]
+```
+
+Also read : [How to restart NFS step by step in HPUX][5]
+
+Even if this didn't solve your issue, the final step is to restart the services on the NFS server. Caution! This will disconnect all NFS shares which are exported from the NFS server. All clients will see their mount points disconnect. This is the step where your issue will get resolved 99% of the time. If not, then the [NFS configurations][6] must be checked, provided you have changed the configuration and started seeing this error after that.
+
+The outputs in the above post are from a RHEL 6.3 server. Drop us your comments related to this post.
+
+--------------------------------------------------------------------------------
+
+via: https://kerneltalks.com/troubleshooting/resolve-mount-nfs-stale-file-handle-error/
+
+作者:[KernelTalks][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://kerneltalks.com
+[1]:http://kerneltalks.com/wp-content/uploads/2017/01/nfs_error-2-150x150.png
+[2]:https://www.clouddesktoponline.com/
+[3]:http://www.apps4rent.com
+[4]:https://kerneltalks.com/linux/all-you-need-to-know-about-hostname-in-linux/
+[5]:http://kerneltalks.com/hpux/restart-nfs-in-hpux/
+[6]:http://kerneltalks.com/linux/nfs-configuration-linux-hpux/
From 529573d5a54bed1c44c4cf3d61517b765ee5dcf6 Mon Sep 17 00:00:00 2001
From: ChenYi <31087327+cyleft@users.noreply.github.com>
Date: Fri, 19 Jan 2018 16:55:47 +0800
Subject: [PATCH 111/226] Update 20170927 Linux directory structure- -lib
 explained.md

---
 .../tech/20170927 Linux directory structure- -lib explained.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/sources/tech/20170927 Linux
directory structure- -lib explained.md b/sources/tech/20170927 Linux directory structure- -lib explained.md index 3f8322d630..ff9ec9b72f 100644 --- a/sources/tech/20170927 Linux directory structure- -lib explained.md +++ b/sources/tech/20170927 Linux directory structure- -lib explained.md @@ -1,3 +1,5 @@ +translate by cy + Linux directory structure: /lib explained ====== [![lib folder linux][1]][1] From 18d7e217d5b515634eed94e7dd5486fef03d459b Mon Sep 17 00:00:00 2001 From: ChenYi <31087327+cyleft@users.noreply.github.com> Date: Fri, 19 Jan 2018 17:02:50 +0800 Subject: [PATCH 112/226] apply for translation 20171117 Command line fun- Insult the user when typing wrong bash command.md --- ... line fun- Insult the user when typing wrong bash command.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20171117 Command line fun- Insult the user when typing wrong bash command.md b/sources/tech/20171117 Command line fun- Insult the user when typing wrong bash command.md index 123dca59cb..7e1ab30faa 100644 --- a/sources/tech/20171117 Command line fun- Insult the user when typing wrong bash command.md +++ b/sources/tech/20171117 Command line fun- Insult the user when typing wrong bash command.md @@ -1,3 +1,5 @@ +translate by cyleft + Command line fun: Insult the user when typing wrong bash command ====== You can configure sudo command to insult user when they type the wrong password. Now, it is possible to abuse insult the user when they enter the wrong command at the shell prompt. From f004d6b18e36311b500cc3053edfa2cfea85dc8b Mon Sep 17 00:00:00 2001 From: BriFuture <752736341@qq.com> Date: Fri, 19 Jan 2018 20:23:24 +0800 Subject: [PATCH 113/226] Translation Finished --- ...t-s Build A Simple Interpreter. Part 2..md | 246 ------------------ ...t-s Build A Simple Interpreter. Part 2..md | 234 +++++++++++++++++ 2 files changed, 234 insertions(+), 246 deletions(-) delete mode 100644 sources/tech/20150703 Let-s Build A Simple Interpreter. 
Part 2..md create mode 100644 translated/tech/20150703 Let-s Build A Simple Interpreter. Part 2..md diff --git a/sources/tech/20150703 Let-s Build A Simple Interpreter. Part 2..md b/sources/tech/20150703 Let-s Build A Simple Interpreter. Part 2..md deleted file mode 100644 index 7b6cde8c30..0000000000 --- a/sources/tech/20150703 Let-s Build A Simple Interpreter. Part 2..md +++ /dev/null @@ -1,246 +0,0 @@ -BriFuture is translating this article. - -Let’s Build A Simple Interpreter. Part 2. -====== - -In their amazing book "The 5 Elements of Effective Thinking" the authors Burger and Starbird share a story about how they observed Tony Plog, an internationally acclaimed trumpet virtuoso, conduct a master class for accomplished trumpet players. The students first played complex music phrases, which they played perfectly well. But then they were asked to play very basic, simple notes. When they played the notes, the notes sounded childish compared to the previously played complex phrases. After they finished playing, the master teacher also played the same notes, but when he played them, they did not sound childish. The difference was stunning. Tony explained that mastering the performance of simple notes allows one to play complex pieces with greater control. The lesson was clear - to build true virtuosity one must focus on mastering simple, basic ideas. - -The lesson in the story clearly applies not only to music but also to software development. The story is a good reminder to all of us to not lose sight of the importance of deep work on simple, basic ideas even if it sometimes feels like a step back. While it is important to be proficient with a tool or framework you use, it is also extremely important to know the principles behind them. As Ralph Waldo Emerson said: - -> "If you learn only methods, you'll be tied to your methods. But if you learn principles, you can devise your own methods." - -On that note, let's dive into interpreters and compilers again. 
- -Today I will show you a new version of the calculator from [Part 1][1] that will be able to: - - 1. Handle whitespace characters anywhere in the input string - 2. Consume multi-digit integers from the input - 3. Subtract two integers (currently it can only add integers) - - - -Here is the source code for your new version of the calculator that can do all of the above: -``` -# Token types -# EOF (end-of-file) token is used to indicate that -# there is no more input left for lexical analysis -INTEGER, PLUS, MINUS, EOF = 'INTEGER', 'PLUS', 'MINUS', 'EOF' - - -class Token(object): - def __init__(self, type, value): - # token type: INTEGER, PLUS, MINUS, or EOF - self.type = type - # token value: non-negative integer value, '+', '-', or None - self.value = value - - def __str__(self): - """String representation of the class instance. - - Examples: - Token(INTEGER, 3) - Token(PLUS '+') - """ - return 'Token({type}, {value})'.format( - type=self.type, - value=repr(self.value) - ) - - def __repr__(self): - return self.__str__() - - -class Interpreter(object): - def __init__(self, text): - # client string input, e.g. 
"3 + 5", "12 - 5", etc - self.text = text - # self.pos is an index into self.text - self.pos = 0 - # current token instance - self.current_token = None - self.current_char = self.text[self.pos] - - def error(self): - raise Exception('Error parsing input') - - def advance(self): - """Advance the 'pos' pointer and set the 'current_char' variable.""" - self.pos += 1 - if self.pos > len(self.text) - 1: - self.current_char = None # Indicates end of input - else: - self.current_char = self.text[self.pos] - - def skip_whitespace(self): - while self.current_char is not None and self.current_char.isspace(): - self.advance() - - def integer(self): - """Return a (multidigit) integer consumed from the input.""" - result = '' - while self.current_char is not None and self.current_char.isdigit(): - result += self.current_char - self.advance() - return int(result) - - def get_next_token(self): - """Lexical analyzer (also known as scanner or tokenizer) - - This method is responsible for breaking a sentence - apart into tokens. - """ - while self.current_char is not None: - - if self.current_char.isspace(): - self.skip_whitespace() - continue - - if self.current_char.isdigit(): - return Token(INTEGER, self.integer()) - - if self.current_char == '+': - self.advance() - return Token(PLUS, '+') - - if self.current_char == '-': - self.advance() - return Token(MINUS, '-') - - self.error() - - return Token(EOF, None) - - def eat(self, token_type): - # compare the current token type with the passed token - # type and if they match then "eat" the current token - # and assign the next token to the self.current_token, - # otherwise raise an exception. 
- if self.current_token.type == token_type: - self.current_token = self.get_next_token() - else: - self.error() - - def expr(self): - """Parser / Interpreter - - expr -> INTEGER PLUS INTEGER - expr -> INTEGER MINUS INTEGER - """ - # set current token to the first token taken from the input - self.current_token = self.get_next_token() - - # we expect the current token to be an integer - left = self.current_token - self.eat(INTEGER) - - # we expect the current token to be either a '+' or '-' - op = self.current_token - if op.type == PLUS: - self.eat(PLUS) - else: - self.eat(MINUS) - - # we expect the current token to be an integer - right = self.current_token - self.eat(INTEGER) - # after the above call the self.current_token is set to - # EOF token - - # at this point either the INTEGER PLUS INTEGER or - # the INTEGER MINUS INTEGER sequence of tokens - # has been successfully found and the method can just - # return the result of adding or subtracting two integers, - # thus effectively interpreting client input - if op.type == PLUS: - result = left.value + right.value - else: - result = left.value - right.value - return result - - -def main(): - while True: - try: - # To run under Python3 replace 'raw_input' call - # with 'input' - text = raw_input('calc> ') - except EOFError: - break - if not text: - continue - interpreter = Interpreter(text) - result = interpreter.expr() - print(result) - - -if __name__ == '__main__': - main() -``` - -Save the above code into the calc2.py file or download it directly from [GitHub][2]. Try it out. See for yourself that it works as expected: it can handle whitespace characters anywhere in the input; it can accept multi-digit integers, and it can also subtract two integers as well as add two integers. - -Here is a sample session that I ran on my laptop: -``` -$ python calc2.py -calc> 27 + 3 -30 -calc> 27 - 7 -20 -calc> -``` - -The major code changes compared with the version from [Part 1][1] are: - - 1. 
The get_next_token method was refactored a bit. The logic to increment the pos pointer was factored into a separate method advance. - 2. Two more methods were added: skip_whitespace to ignore whitespace characters and integer to handle multi-digit integers in the input. - 3. The expr method was modified to recognize INTEGER -> MINUS -> INTEGER phrase in addition to INTEGER -> PLUS -> INTEGER phrase. The method now also interprets both addition and subtraction after having successfully recognized the corresponding phrase. - -In [Part 1][1] you learned two important concepts, namely that of a **token** and a **lexical analyzer**. Today I would like to talk a little bit about **lexemes** , **parsing** , and **parsers**. - -You already know about tokens. But in order for me to round out the discussion of tokens I need to mention lexemes. What is a lexeme? A **lexeme** is a sequence of characters that form a token. In the following picture you can see some examples of tokens and sample lexemes and hopefully it will make the relationship between them clear: - -![][3] - -Now, remember our friend, the expr method? I said before that that's where the interpretation of an arithmetic expression actually happens. But before you can interpret an expression you first need to recognize what kind of phrase it is, whether it is addition or subtraction, for example. That's what the expr method essentially does: it finds the structure in the stream of tokens it gets from the get_next_token method and then it interprets the phrase that is has recognized, generating the result of the arithmetic expression. - -The process of finding the structure in the stream of tokens, or put differently, the process of recognizing a phrase in the stream of tokens is called **parsing**. The part of an interpreter or compiler that performs that job is called a **parser**. 
- -So now you know that the expr method is the part of your interpreter where both **parsing** and **interpreting** happens - the expr method first tries to recognize ( **parse** ) the INTEGER -> PLUS -> INTEGER or the INTEGER -> MINUS -> INTEGER phrase in the stream of tokens and after it has successfully recognized ( **parsed** ) one of those phrases, the method interprets it and returns the result of either addition or subtraction of two integers to the caller. - -And now it's time for exercises again. - -![][4] - - 1. Extend the calculator to handle multiplication of two integers - 2. Extend the calculator to handle division of two integers - 3. Modify the code to interpret expressions containing an arbitrary number of additions and subtractions, for example "9 - 5 + 3 + 11" - - - -**Check your understanding.** - - 1. What is a lexeme? - 2. What is the name of the process that finds the structure in the stream of tokens, or put differently, what is the name of the process that recognizes a certain phrase in that stream of tokens? - 3. What is the name of the part of the interpreter (compiler) that does parsing? - - - - -I hope you liked today's material. In the next article of the series you will extend your calculator to handle more complex arithmetic expressions. Stay tuned. - - --------------------------------------------------------------------------------- - -via: https://ruslanspivak.com/lsbasi-part2/ - -作者:[Ruslan Spivak][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://ruslanspivak.com -[1]:http://ruslanspivak.com/lsbasi-part1/ (Part 1) -[2]:https://github.com/rspivak/lsbasi/blob/master/part2/calc2.py -[3]:https://ruslanspivak.com/lsbasi-part2/lsbasi_part2_lexemes.png -[4]:https://ruslanspivak.com/lsbasi-part2/lsbasi_part2_exercises.png diff --git a/translated/tech/20150703 Let-s Build A Simple Interpreter. 
Part 2..md b/translated/tech/20150703 Let-s Build A Simple Interpreter. Part 2..md
new file mode 100644
index 0000000000..a22a94bae0
--- /dev/null
+++ b/translated/tech/20150703 Let-s Build A Simple Interpreter. Part 2..md
@@ -0,0 +1,234 @@
+让我们做个简单的解释器(2)
+======
+
+在一本叫做《高效思考的 5 要素》的书中,作者 Burger 和 Starbird 讲述了一个故事:他们观察了举世闻名的小号演奏名家 Tony Plog 为一些有才华的演奏者开设大师班。这些学生一开始演奏复杂的乐段,他们演奏得非常好。然后他们被要求演奏非常基础简单的音符。当他们演奏这些音符时,与之前所演奏的相比,听起来非常幼稚。在他们结束演奏后,老师也演奏了同样的音符,但是听上去一点也不幼稚。差别令人震惊。Tony 解释道,精通简单音符的演奏可以让人更好地掌控复杂的乐段。这个例子很清晰 - 要成为真正的名家,必须要掌握简单基础的思想。
+
+故事中的例子明显不仅仅适用于音乐,而且适用于软件开发。这个故事告诉我们不要忽视深入钻研简单基础概念的重要性,哪怕有时候这让人感觉是一种倒退。尽管熟练掌握一门工具或者框架非常重要,了解它们背后的原理也是极其重要的。正如 Ralph Waldo Emerson 所说:
+
+> “如果你只学习方法,你就会被方法束缚。但如果你知道原理,就可以发明自己的方法。”
+
+有鉴于此,让我们再次深入了解解释器和编译器。
+
+今天我会向你们展示一个全新的计算器,与 [第一部分][1] 相比,它可以做到:
+
+ 1. 处理输入字符串任意位置的空白符
+ 2. 识别输入字符串中的多位整数
+ 3. 做两个整数之间的减法(此前它只能做整数加法)
+
+
+新版本计算器的源代码在这里,它可以做到上述的所有事情:
+```
+# 标记类型
+# EOF (end-of-file 文件末尾) 标记是用来表示所有输入都解析完成
+INTEGER, PLUS, MINUS, EOF = 'INTEGER', 'PLUS', 'MINUS', 'EOF'
+
+
+class Token(object):
+    def __init__(self, type, value):
+        # token 类型: INTEGER, PLUS, MINUS, or EOF
+        self.type = type
+        # token 值: 非负整数值, '+', '-', 或无
+        self.value = value
+
+    def __str__(self):
+        """String representation of the class instance.
+
+        Examples:
+            Token(INTEGER, 3)
+            Token(PLUS '+')
+        """
+        return 'Token({type}, {value})'.format(
+            type=self.type,
+            value=repr(self.value)
+        )
+
+    def __repr__(self):
+        return self.__str__()
+
+
+class Interpreter(object):
+    def __init__(self, text):
+        # 客户端字符输入, 例如.
"3 + 5", "12 - 5", + self.text = text + # self.pos 是 self.text 的索引 + self.pos = 0 + # 当前标记实例 + self.current_token = None + self.current_char = self.text[self.pos] + + def error(self): + raise Exception('Error parsing input') + + def advance(self): + """Advance the 'pos' pointer and set the 'current_char' variable.""" + self.pos += 1 + if self.pos > len(self.text) - 1: + self.current_char = None # Indicates end of input + else: + self.current_char = self.text[self.pos] + + def skip_whitespace(self): + while self.current_char is not None and self.current_char.isspace(): + self.advance() + + def integer(self): + """Return a (multidigit) integer consumed from the input.""" + result = '' + while self.current_char is not None and self.current_char.isdigit(): + result += self.current_char + self.advance() + return int(result) + + def get_next_token(self): + """Lexical analyzer (also known as scanner or tokenizer) + + This method is responsible for breaking a sentence + apart into tokens. + """ + while self.current_char is not None: + + if self.current_char.isspace(): + self.skip_whitespace() + continue + + if self.current_char.isdigit(): + return Token(INTEGER, self.integer()) + + if self.current_char == '+': + self.advance() + return Token(PLUS, '+') + + if self.current_char == '-': + self.advance() + return Token(MINUS, '-') + + self.error() + + return Token(EOF, None) + + def eat(self, token_type): + # 将当前的标记类型与传入的标记类型作比较,如果他们相匹配,就 + # “eat” 掉当前的标记并将下一个标记赋给 self.current_token, + # 否则抛出一个异常 + if self.current_token.type == token_type: + self.current_token = self.get_next_token() + else: + self.error() + + def expr(self): + """Parser / Interpreter + + expr -> INTEGER PLUS INTEGER + expr -> INTEGER MINUS INTEGER + """ + # 将输入中的第一个标记设置成当前标记 + self.current_token = self.get_next_token() + + # 当前标记应该是一个整数 + left = self.current_token + self.eat(INTEGER) + + # 当前标记应该是 ‘+’ 或 ‘-’ + op = self.current_token + if op.type == PLUS: + self.eat(PLUS) + else: + self.eat(MINUS) + + # 
当前标记应该是一个整数
+        right = self.current_token
+        self.eat(INTEGER)
+        # 在上述函数调用后,self.current_token 就被设为 EOF 标记
+
+        # 这时要么是成功地找到 INTEGER PLUS INTEGER,要么是 INTEGER MINUS INTEGER
+        # 序列的标记,这个方法只需返回两个整数相加或相减的结果,
+        # 就能高效解释客户端的输入
+        if op.type == PLUS:
+            result = left.value + right.value
+        else:
+            result = left.value - right.value
+        return result
+
+
+def main():
+    while True:
+        try:
+            # To run under Python3 replace 'raw_input' call
+            # with 'input'
+            text = raw_input('calc> ')
+        except EOFError:
+            break
+        if not text:
+            continue
+        interpreter = Interpreter(text)
+        result = interpreter.expr()
+        print(result)
+
+
+if __name__ == '__main__':
+    main()
+```
+
+把上面的代码保存到 calc2.py 文件中,或者直接从 [GitHub][2] 上下载。试着运行它。看看它是不是正常工作:它应该能够处理输入中任意位置的空白符;能够接受多位的整数,并且能够对两个整数做减法和加法。
+
+这是我在自己的笔记本上运行的示例:
+```
+$ python calc2.py
+calc> 27 + 3
+30
+calc> 27 - 7
+20
+calc>
+```
+
+与 [第一部分][1] 的版本相比,主要的代码改动有:
+
+ 1. get_next_token 方法重构了不少。增加 pos 指针位置的逻辑被抽取到了一个单独的方法 advance 中。
+ 2. 增加了两个方法:skip_whitespace 用于忽略空白字符,integer 用于处理输入中的多位整数。
+ 3. expr 方法修改成了除 “整数 -> 加号 -> 整数” 词组外,还能识别 “整数 -> 减号 -> 整数” 词组。在成功识别相应的词组后,这个方法现在既能解释加法也能解释减法。
+
+[第一部分][1] 中你学到了两个重要的概念,叫做 **标记** 和 **词法分析器**。现在我想谈一谈 **词素**、**解析** 和 **解析器**。
+
+你已经知道标记了。但是为了完整地讨论标记,我需要提一下词素。词素是什么?**词素** 是组成一个标记的字符序列。在下图中你可以看到一些标记和词素的例子,希望这可以让它们之间的关系变得清晰:
+
+![][3]
+
+现在还记得我们的朋友 expr 方法吗?我之前说过,这是算术表达式实际被解释的地方。但是在解释一个表达式之前,你要先识别出它是哪种词组,比如它是加法还是减法。这就是 expr 方法本质上做的事情:它在从 get_next_token 方法得到的标记流中找出结构,然后解释已经识别出的词组,产生算术表达式的结果。
+
+在标记流中找出结构的过程,或者换种说法,识别标记流中的词组的过程就叫 **解析**。解释器或者编译器中执行这个任务的部分就叫做 **解析器**。
+
+现在你知道 expr 方法就是你的解释器中 **解析** 和 **解释** 都发生的地方 - expr 方法首先尝试识别(**解析**)标记流里的 “整数 -> 加号 -> 整数” 或者 “整数 -> 减号 -> 整数” 词组,在成功识别(**解析**)出其中一个词组后,这个方法就开始解释它,把两个整数的和或差返回给调用者。
+
+又到了练习的时间。
+
+![][4]
+
+ 1. 扩展这个计算器,让它能够计算两个整数的乘法
+ 2. 扩展这个计算器,让它能够计算两个整数的除法
+ 3. 修改代码,让它能够解释包含了任意数量的加法和减法的表达式,比如 “9 - 5 + 3 + 11”
+
+
+
+**检验你的理解:**
+
+ 1. 词素是什么?
+ 2. 找出标记流结构的过程叫什么,或者换种说法,识别标记流中一个词组的过程叫什么?
+ 3. 解释器(编译器)执行解析的部分叫什么?
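对于上面的第 3 个练习(解释 “9 - 5 + 3 + 11” 这类表达式),下面给出一个极简的 Python 3 示意草图。注意:这只是一个假设性的参考思路,并非原文作者给出的实现,其中的 evaluate 等名字也只是为演示而取的:

```python
# 假设性示意代码(非原文实现):从左到右解释任意数量的加减法
def evaluate(text):
    """先读一个整数,再循环消费 (运算符, 整数) 对,累计结果。"""
    pos = 0

    def skip_whitespace():
        nonlocal pos
        while pos < len(text) and text[pos].isspace():
            pos += 1

    def integer():
        # 读取一个多位整数
        nonlocal pos
        start = pos
        while pos < len(text) and text[pos].isdigit():
            pos += 1
        return int(text[start:pos])

    skip_whitespace()
    result = integer()
    skip_whitespace()
    while pos < len(text) and text[pos] in '+-':
        op = text[pos]
        pos += 1
        skip_whitespace()
        operand = integer()
        result = result + operand if op == '+' else result - operand
        skip_whitespace()
    return result

print(evaluate("9 - 5 + 3 + 11"))  # 18
```

与正文中一次只消费三个标记的 expr 不同,这里用一个循环来消费任意多的 (运算符, 整数) 对,正文里的乘除法练习也可以按同样的思路扩展。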
+ + +希望你喜欢今天的内容。在该系列的下一篇文章里你就能扩展计算器从而处理更多复杂的算术表达式。敬请期待。 + +-------------------------------------------------------------------------------- + +via: https://ruslanspivak.com/lsbasi-part2/ + +作者:[Ruslan Spivak][a] +译者:[BriFuture](https://github.com/BriFuture) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://ruslanspivak.com +[1]:http://ruslanspivak.com/lsbasi-part1/ (Part 1) +[2]:https://github.com/rspivak/lsbasi/blob/master/part2/calc2.py +[3]:https://ruslanspivak.com/lsbasi-part2/lsbasi_part2_lexemes.png +[4]:https://ruslanspivak.com/lsbasi-part2/lsbasi_part2_exercises.png From ea06a5ef2fdabfd3a929b9c726a5933e2f266f75 Mon Sep 17 00:00:00 2001 From: Torival Date: Fri, 19 Jan 2018 20:44:26 +0800 Subject: [PATCH 114/226] Delete 20140210 Three steps to learning GDB.md --- .../20140210 Three steps to learning GDB.md | 113 ------------------ 1 file changed, 113 deletions(-) delete mode 100644 sources/tech/20140210 Three steps to learning GDB.md diff --git a/sources/tech/20140210 Three steps to learning GDB.md b/sources/tech/20140210 Three steps to learning GDB.md deleted file mode 100644 index 3e94e3d77f..0000000000 --- a/sources/tech/20140210 Three steps to learning GDB.md +++ /dev/null @@ -1,113 +0,0 @@ -Translating by Torival Three steps to learning GDB -============================================================ - -Debugging C programs used to scare me a lot. Then I was writing my [operating system][2] and I had so many bugs to debug! I was extremely fortunate to be using the emulator qemu, which lets me attach a debugger to my operating system. The debugger is called `gdb`. - -I’m going to explain a couple of small things you can do with `gdb`, because I found it really confusing to get started. We’re going to set a breakpoint and examine some memory in a tiny program. - -### 1\. 
Set breakpoints - -If you’ve ever used a debugger before, you’ve probably set a breakpoint. - -Here’s the program that we’re going to be “debugging” (though there aren’t any bugs): - -``` -#include -void do_thing() { - printf("Hi!\n"); -} -int main() { - do_thing(); -} - -``` - -Save this as `hello.c`. We can debug it with gdb like this: - -``` -bork@kiwi ~> gcc -g hello.c -o hello -bork@kiwi ~> cat -bork@kiwi ~> gdb ./hello -``` - -This compiles `hello.c` with debugging symbols (so that gdb can do better work), and gives us kind of scary prompt that just says - -`(gdb)` - -We can then set a breakpoint using the `break` command, and then `run` the program. - -``` -(gdb) break do_thing -Breakpoint 1 at 0x4004f8 -(gdb) run -Starting program: /home/bork/hello - -Breakpoint 1, 0x00000000004004f8 in do_thing () -``` - -This stops the program at the beginning of `do_thing`. - -We can find out where we are in the call stack with `where`: (thanks to [@mgedmin][3] for the tip) - -``` -(gdb) where -#0 do_thing () at hello.c:3 -#1 0x08050cdb in main () at hello.c:6 -(gdb) -``` - -### 2\. Look at some assembly code - -We can look at the assembly code for our function using the `disassemble`command! This is cool. This is x86 assembly. I don’t understand it very well, but the line that says `callq` is what does the `printf` function call. - -``` -(gdb) disassemble do_thing -Dump of assembler code for function do_thing: - 0x00000000004004f4 <+0>: push %rbp - 0x00000000004004f5 <+1>: mov %rsp,%rbp -=> 0x00000000004004f8 <+4>: mov $0x40060c,%edi - 0x00000000004004fd <+9>: callq 0x4003f0 - 0x0000000000400502 <+14>: pop %rbp - 0x0000000000400503 <+15>: retq -``` - -You can also shorten `disassemble` to `disas` - -### 3\. Examine some memory! - -The main thing I used `gdb` for when I was debugging my kernel was to examine regions of memory to make sure they were what I thought they were. The command for examining memory is `examine`, or `x` for short. We’re going to use `x`. 
- -From looking at that assembly above, it seems like `0x40060c` might be the address of the string we’re printing. Let’s check! - -``` -(gdb) x/s 0x40060c -0x40060c: "Hi!" -``` - -It is! Neat! Look at that. The `/s` part of `x/s` means “show it to me like it’s a string”. I could also have said “show me 10 characters” like this: - -``` -(gdb) x/10c 0x40060c -0x40060c: 72 'H' 105 'i' 33 '!' 0 '\000' 1 '\001' 27 '\033' 3 '\003' 59 ';' -0x400614: 52 '4' 0 '\000' -``` - -You can see that the first four characters are ‘H’, ‘i’, and ‘!’, and ‘\0’ and then after that there’s more unrelated stuff. - -I know that gdb does lots of other stuff, but I still don’t know it very well and `x`and `break` got me pretty far. You can read the [documentation for examining memory][4]. - --------------------------------------------------------------------------------- - -via: https://jvns.ca/blog/2014/02/10/three-steps-to-learning-gdb/ - -作者:[Julia Evans ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://jvns.ca -[1]:https://jvns.ca/categories/spytools -[2]:http://jvns.ca/blog/categories/kernel -[3]:https://twitter.com/mgedmin -[4]:https://ftp.gnu.org/old-gnu/Manuals/gdb-5.1.1/html_chapter/gdb_9.html#SEC56 From 033647c67c6354f0ad1f4703b258b831e3486645 Mon Sep 17 00:00:00 2001 From: Torival Date: Fri, 19 Jan 2018 20:46:48 +0800 Subject: [PATCH 115/226] Create 20140210 Three steps to learning GDB.md --- .../20140210 Three steps to learning GDB.md | 108 ++++++++++++++++++ 1 file changed, 108 insertions(+) create mode 100644 translated/tech/20140210 Three steps to learning GDB.md diff --git a/translated/tech/20140210 Three steps to learning GDB.md b/translated/tech/20140210 Three steps to learning GDB.md new file mode 100644 index 0000000000..5139321ac2 --- /dev/null +++ b/translated/tech/20140210 Three steps to learning GDB.md @@ -0,0 +1,108 @@ +# 三步上手GDB 
+
+调试C程序,曾让我很困扰。后来当我在写我的[操作系统][2]时,我有很多的 Bug 需要调试。我很幸运地用上了 qemu 模拟器,它允许我将调试器附加到我的操作系统上。这个调试器就是`gdb`。
+
+我要解释一下用`gdb`能做的几件小事,因为我发现初学它的时候真的很让人困惑。我们接下来会在一个小程序中设置断点,查看内存。
+
+### 1. 设置断点
+
+如果你曾经使用过调试器,那你可能已经会设置断点了。
+
+下面是一个我们要调试的程序(虽然没有任何Bug):
+
+```
+#include <stdio.h>
+void do_thing() {
+ printf("Hi!\n");
+}
+int main() {
+ do_thing();
+}
+
+```
+
+另存为 `hello.c`。我们可以使用 gdb 调试它,像这样:
+
+```
+bork@kiwi ~> gcc -g hello.c -o hello
+bork@kiwi ~> gdb ./hello
+```
+
+以上命令带着调试信息编译 `hello.c`(以便 gdb 可以更好地工作),并且它会给我们一个看上去有点吓人的提示符,就像这样:
+`(gdb)`
+
+我们可以使用`break`命令设置断点,然后使用`run`开始调试程序。
+
+```
+(gdb) break do_thing
+Breakpoint 1 at 0x4004f8
+(gdb) run
+Starting program: /home/bork/hello
+
+Breakpoint 1, 0x00000000004004f8 in do_thing ()
+```
+程序暂停在了`do_thing`开始的地方。
+
+我们可以通过`where`查看我们所在的调用栈。
+```
+(gdb) where
+#0 do_thing () at hello.c:3
+#1 0x08050cdb in main () at hello.c:6
+(gdb)
+```
+
+### 2. 阅读汇编代码
+
+使用`disassemble`命令,我们可以看到这个函数的汇编代码。棒极了。这是 x86 汇编代码。虽然我不是很懂它,但是`callq`这一行是`printf`函数调用。
+
+```
+(gdb) disassemble do_thing
+Dump of assembler code for function do_thing:
+ 0x00000000004004f4 <+0>: push %rbp
+ 0x00000000004004f5 <+1>: mov %rsp,%rbp
+=> 0x00000000004004f8 <+4>: mov $0x40060c,%edi
+ 0x00000000004004fd <+9>: callq 0x4003f0
+ 0x0000000000400502 <+14>: pop %rbp
+ 0x0000000000400503 <+15>: retq
+```
+
+你也可以使用`disassemble`的缩写`disas`。
+
+### 3. 查看内存
+
+当调试我的内核时,我使用`gdb`的主要用途就是检查内存区域,以确保它们是如我所想的那样。检查内存的命令是`examine`,或者使用缩写`x`。我们将使用`x`。
+
+通过阅读上面的汇编代码,似乎`0x40060c`可能是我们所要打印的字符串地址。我们来试一下。
+
+```
+(gdb) x/s 0x40060c
+0x40060c: "Hi!"
+```
+
+的确是这样。`x/s`中的`/s`部分,意思是“把它作为字符串展示”。我也可以“展示10个字符”,像这样:
+
+```
+(gdb) x/10c 0x40060c
+0x40060c: 72 'H' 105 'i' 33 '!'
0 '\000' 1 '\001' 27 '\033' 3 '\003' 59 ';'
+0x400614: 52 '4' 0 '\000'
+```
+
+你可以看到前四个字符是 'H'、'i'、'!' 和 '\0',在它们之后是一些不相关的东西。
+
+我知道 gdb 还能做很多其他的事情,但我仍然不是很了解它,其中`x`和`break`这两个命令已经让我受益良多。你还可以阅读[查看内存的文档][4]。
+
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/2014/02/10/three-steps-to-learning-gdb/
+
+作者:[Julia Evans ][a]
+译者:[Torival](https://github.com/Torival)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://jvns.ca
+[1]:https://jvns.ca/categories/spytools
+[2]:http://jvns.ca/blog/categories/kernel
+[3]:https://twitter.com/mgedmin
+[4]:https://ftp.gnu.org/old-gnu/Manuals/gdb-5.1.1/html_chapter/gdb_9.html#SEC56
From abe27046ade20fb0669e8c80e7c90849897d5a17 Mon Sep 17 00:00:00 2001
From: darksun
Date: Sat, 20 Jan 2018 14:19:46 +0800
Subject: [PATCH 116/226] translate done: 20171024 Run Linux On Android
 Devices, No Rooting Required.md

---
 ...On Android Devices, No Rooting Required.md | 68 -------------------
 ...On Android Devices, No Rooting Required.md | 65 ++++++++++++++++++
 2 files changed, 65 insertions(+), 68 deletions(-)
 delete mode 100644 sources/tech/20171024 Run Linux On Android Devices, No Rooting Required.md
 create mode 100644 translated/tech/20171024 Run Linux On Android Devices, No Rooting Required.md

diff --git a/sources/tech/20171024 Run Linux On Android Devices, No Rooting Required.md b/sources/tech/20171024 Run Linux On Android Devices, No Rooting Required.md
deleted file mode 100644
index e93ea4638a..0000000000
--- a/sources/tech/20171024 Run Linux On Android Devices, No Rooting Required.md
+++ /dev/null
@@ -1,68 +0,0 @@
-translating by lujun9972
-Run Linux On Android Devices, No Rooting Required!
-======
-![](https://www.ostechnix.com/wp-content/uploads/2017/10/Termux-720x340.jpg)
-
-The other day I was searching for a simple and easy way to run Linux on Android.
My only intention was to just use Linux with some basic applications like SSH, Git, awk etc. Not much! I don't want to root the Android device. I have a Tablet PC that I mostly use for reading EBooks, news, and few Linux blogs. I don't use it much for other activities. So, I decided to use it for some Linux activities. After spending few minutes on Google Play Store, one app immediately caught my attention and I wanted to give it a try. If you're ever wondered how to run Linux on Android devices, this one might help. - -### Termux - An Android terminal emulator to run Linux on Android and Chrome OS - -**Termux** is an Android terminal emulator and Linux environment app. Unlike many other apps, you don 't need to root your device or no setup required. It just works out of the box! A minimal base Linux system will be installed automatically, and of course you can install other packages with APT package manager. In short, you can use your Android device like a pocket Linux computer. It's not just for Android, you can install it on your Chrome OS too. - -Termux offers many significant features than you would think. - - * It allows you to SSH to your remote server via openSSH. - * You can also SSH into your Android devices from any remote system. - * Sync your smart phone contacts to a remote system using rsync and curl. - * You could choose any shells such as BASH, ZSH, and FISH etc. - * You can choose different text editors such as Emacs, Nano, and Vim to edit/view files. - * Install any packages of your choice in your Android devices using APT package manager. Up-to-date versions of Git, Perl, Python, Ruby and Node.js are all available. - * Connect your Android device with a bluetooth Keyboard, mouse and external display and use it like a convergence device. Termux supports keyboard shortcuts . - * Termux allows you to run almost all GNU/Linux commands. - - - -It also has some extra features. You can enable them by installing the addons. 
For instance, **Termux:API** addon will allow you to Access Android and Chrome hardware features. The other useful addons are: - - * Termux:Boot - Run script(s) when your device boots. - * Termux:Float - Run Termux in a floating window. - * Termux:Styling - Provides color schemes and powerline-ready fonts to customize the appearance of the Termux terminal. - * Termux:Task - Provides an easy way to call Termux executables from Tasker and compatible apps. - * Termux:Widget - Provides an easy way to start small scriptlets from the home screen. - - - -To know more about termux, open the built-in help section by long-pressing anywhere on the terminal and selecting the Help menu option. The only drawback is it **requires Android 5.0 and higher versions**. It could be more useful for many users if it supports Android 4.x and older versions. Termux is available in **Google Play Store** and **F-Droid**. - -To install Termux from Google Play Store, click the following button. - -[![termux][1]][2] - -To install it from F-Droid, click the following button. - -[![][1]][3] - -You know now how to try Linux on your android devices using Termux. Do you use any other better apps worth trying? Please mention them in the comment section below. I'd love to try them too! - -Cheers! 
- -Resource: - -+[Termux website][4] - - -------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/termux-run-linux-android-devices-no-rooting-required/ - -作者:[SK][a] -译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ostechnix.com/author/sk/ -[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[2]:https://play.google.com/store/apps/details?id=com.termux -[3]:https://f-droid.org/packages/com.termux/ -[4]:https://termux.com/ diff --git a/translated/tech/20171024 Run Linux On Android Devices, No Rooting Required.md b/translated/tech/20171024 Run Linux On Android Devices, No Rooting Required.md new file mode 100644 index 0000000000..929c3ecdf8 --- /dev/null +++ b/translated/tech/20171024 Run Linux On Android Devices, No Rooting Required.md @@ -0,0 +1,65 @@ +无需 Root 实现在 Android 设备上运行 Linux +====== +![](https://www.ostechnix.com/wp-content/uploads/2017/10/Termux-720x340.jpg) + +曾经,我尝试过搜索一种简单的可以在 Android 上运行 Linux 的方法。我当时唯一的意图只是想使用 Linux 以及一些基本的应用程序,比如 SSH,Git,awk 等。要求的并不多!我不想 root Android 设备。我有一台平板电脑,主要用于阅读电子书,新闻和少数 Linux 博客。除此之外也不怎么用它了。因此我决定用它来实现一些 Linux 的功能。在 Google Play 商店上浏览了几分钟后,一个应用程序瞬间引起了我的注意,勾起了我实验的欲望。如果你也想在 Android 设备上运行 Linux,这个应用可能会有所帮助。 + +### Termux - 在 Android 和 Chrome OS 上运行的 Android 终端模拟器 + +**Termux** 是一个 Android 终端模拟器以及提供 Linux 环境的应用程序。跟许多其他应用程序不同,你无需 root 设备也无需进行设置。它是开箱即用的!它会自动安装好一个最基本的 Linux 系统,当然你也可以使用 APT 软件包管理器来安装其他软件包。总之,你可以让你的 Android 设备变成一台袖珍的 Linux 电脑。它不仅适用于 Android,你还能在 Chrome OS 上安装它。 + +![](http://www.ostechnix.com/wp-content/uploads/2017/10/termux.png) + +Termux 提供了许多重要的功能,比您想象的要多。 + + * 它允许你通过 openSSH 登录远程服务器。 + * 你还能够从远程系统 SSH 到 Android 设备中。 + * 使用 rsync 和 curl 将您的智能手机通讯录同步到远程系统。 + * 支持不同的 shell,比如 BASH,ZSH,以及 FISH 等等。 + * 可以选择不同的文本编辑器来编辑/查看文件,支持 Emacs,Nano 和 Vim。 + * 使用 APT 软件包管理器在 Android
设备上安装你想要的软件包。支持 Git,Perl,Python,Ruby 和 Node.js 的最新版本。 + * 可以将 Android 设备与蓝牙键盘,鼠标和外置显示器连接起来,就像是整合在一起的设备一样。Termux 支持键盘快捷键。 + * Termux 支持几乎所有 GNU/Linux 命令。 + +此外通过安装插件可以启用其他一些功能。例如,**Termux:API** 插件允许你访问 Android 和 Chrome 的硬件功能。其他有用的插件包括: + + * Termux:Boot - 设备启动时运行脚本 + * Termux:Float - 在浮动窗口中运行 Termux + * Termux:Styling - 提供配色方案和支持 powerline 的字体来定制 Termux 终端的外观。 + * Termux:Task - 提供一种从任务栏类的应用中调用 Termux 可执行文件的简易方法。 + * Termux:Widget - 提供一种从主屏幕启动小脚本的简易方法。 + +要了解更多有关 termux 的信息,请长按终端上的任意位置并选择“帮助”菜单选项来打开内置的帮助部分。它唯一的缺点就是**需要 Android 5.0 及更高版本**。如果它支持 Android 4.x 和旧版本的话,将会有用得多。你可以在 **Google Play 商店** 和 **F-Droid** 中找到并安装 Termux。 + +要在 Google Play 商店中安装 Termux,点击下面按钮。 + +[![termux][1]][2] + +若要在 F-Droid 中安装,则点击下面按钮。 + +[![][1]][3] + +你现在知道如何使用 Termux 在 Android 设备上使用 Linux 了。你有用过其他更好的应用吗?请在下面留言框中留言。我很乐意也去尝试它们! + +此致敬礼! + +相关资源: + ++[Termux 官网][4] + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/termux-run-linux-android-devices-no-rooting-required/ + +作者:[SK][a] +译者:[lujun9972](https://github.com/lujun9972) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com/author/sk/ +[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[2]:https://play.google.com/store/apps/details?id=com.termux +[3]:https://f-droid.org/packages/com.termux/ +[4]:https://termux.com/ From f97126272d4bece1d698a3518b4ab86e573f5bce Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 20 Jan 2018 15:20:04 +0800 Subject: [PATCH 117/226] PRF&PUB:20171028 Let Us Play Piano In Terminal Using Our PC Keyboard.md @lujun9972 https://linux.cn/article-9259-1.html --- ...Piano In Terminal Using Our PC Keyboard.md | 42 ++++++++++++------- 1 file changed, 26 insertions(+), 16 deletions(-) rename {translated/tech => published}/20171028 Let Us Play Piano In Terminal Using Our PC Keyboard.md (73%) diff --git
a/translated/tech/20171028 Let Us Play Piano In Terminal Using Our PC Keyboard.md b/published/20171028 Let Us Play Piano In Terminal Using Our PC Keyboard.md similarity index 73% rename from translated/tech/20171028 Let Us Play Piano In Terminal Using Our PC Keyboard.md rename to published/20171028 Let Us Play Piano In Terminal Using Our PC Keyboard.md index 9da14a545e..0e1ff54829 100644 --- a/translated/tech/20171028 Let Us Play Piano In Terminal Using Our PC Keyboard.md +++ b/published/20171028 Let Us Play Piano In Terminal Using Our PC Keyboard.md @@ -1,17 +1,23 @@ 让我们使用 PC 键盘在终端演奏钢琴 ====== -厌倦了工作?那么来吧,让我们弹弹钢琴!是的,你没有看错。谁需要真的钢琴啊?我们可以用 PC 键盘在命令行下就能弹钢琴。向你们介绍一下 **Piano-rs** - 这是一款用 Rust 语言编写的,可以让你用 PC 键盘在终端弹钢琴的简单工具。它免费,开源,而且基于 MIT 协议。你可以在任何支持 Rust 的操作系统中使用它。 +![](https://www.ostechnix.com/wp-content/uploads/2017/10/Play-Piano-In-Terminal-720x340.jpg) -### Piano-rs:使用 PC 键盘在终端弹钢琴 +厌倦了工作?那么来吧,让我们弹弹钢琴!是的,你没有看错,根本不需要真的钢琴。我们可以用 PC 键盘在命令行下就能弹钢琴。向你们介绍一下 `piano-rs` —— 这是一款用 Rust 语言编写的,可以让你用 PC 键盘在终端弹钢琴的简单工具。它自由开源,基于 MIT 协议。你可以在任何支持 Rust 的操作系统中使用它。 + +### piano-rs:使用 PC 键盘在终端弹钢琴 #### 安装 确保系统已经安装了 Rust 编程语言。若还未安装,运行下面命令来安装它。 + ``` curl https://sh.rustup.rs -sSf | sh ``` -安装程序会问你是否默认安装还是自定义安装还是取消安装。我希望默认安装,因此输入 **1** (数字一)。 +(LCTT 译注:这种直接通过 curl 执行远程 shell 脚本是一种非常危险和不成熟的做法。) + +安装程序会问你是否默认安装还是自定义安装还是取消安装。我希望默认安装,因此输入 `1` (数字一)。 + ``` info: downloading installer @@ -43,7 +49,7 @@ default host triple: x86_64-unknown-linux-gnu 1) Proceed with installation (default) 2) Customize installation 3) Cancel installation -**1** +1 info: syncing channel updates for 'stable-x86_64-unknown-linux-gnu' 223.6 KiB / 223.6 KiB (100 %) 215.1 KiB/s ETA: 0 s @@ -72,9 +78,10 @@ environment variable. Next time you log in this will be done automatically. 
To configure your current shell run source $HOME/.cargo/env ``` -登出然后重启系统来将 cargo 的 bin 目录纳入 PATH 变量中。 +登出然后重启系统来将 cargo 的 bin 目录纳入 `PATH` 变量中。 校验 Rust 是否正确安装: + ``` $ rustc --version rustc 1.21.0 (3b72af97e 2017-10-09) @@ -83,40 +90,44 @@ rustc 1.21.0 (3b72af97e 2017-10-09) 太棒了!Rust 成功安装了。是时候构建 piano-rs 应用了。 使用下面命令克隆 Piano-rs 仓库: + ``` git clone https://github.com/ritiek/piano-rs ``` -上面命令会在当前工作目录创建一个名为 "piano-rs" 的目录并下载所有内容到其中。进入该目录: +上面命令会在当前工作目录创建一个名为 `piano-rs` 的目录并下载所有内容到其中。进入该目录: + ``` cd piano-rs ``` 最后,运行下面命令来构建 Piano-rs: + ``` cargo build --release ``` 编译过程要花上一阵子。 -#### Usage +#### 用法 + +编译完成后,在 `piano-rs` 目录中运行下面命令: -编译完成后,在 **piano-rs** 目录中运行下面命令: ``` ./target/release/piano-rs ``` -这就我们在终端上的钢琴键盘了!可以开始弹指一些音符了。按下按键可以弹奏相应音符。使用 **左/右** 方向键可以在弹奏时调整音频。而,使用 **上/下** 方向键可以在弹奏时调整音长。 +这就是我们在终端上的钢琴键盘了!可以开始弹奏一些音符了。按下按键可以弹奏相应音符。使用 **左/右** 方向键可以在弹奏时调整音频。而使用 **上/下** 方向键可以在弹奏时调整音长。 -[![][1]][2] +![][2] -Piano-rs 使用与 [**multiplayerpiano.com**][3] 一样的音符和按键。另外,你可以使用[**这些音符 **][4] 来学习弹指各种流行歌曲。 +Piano-rs 使用与 [multiplayerpiano.com][3] 一样的音符和按键。另外,你可以使用[这些音符][4] 来学习弹奏各种流行歌曲。 要查看帮助。输入: + ``` $ ./target/release/piano-rs -h -``` -``` + piano-rs 0.1.0 Ritiek Malhotra Play piano in the terminal using PC keyboard. @@ -141,19 +152,18 @@ OPTIONS: 此致敬礼!
- -------------------------------------------------------------------------------- via: https://www.ostechnix.com/let-us-play-piano-terminal-using-pc-keyboard/ 作者:[SK][a] 译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:https://www.ostechnix.com/author/sk/ [1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[2]:http://www.ostechnix.com/wp-content/uploads/2017/10/Piano.png () +[2]:http://www.ostechnix.com/wp-content/uploads/2017/10/Piano.png [3]:http://www.multiplayerpiano.com/ [4]:https://pastebin.com/CX1ew0uB From a340c2fab4251a8931007e12cfa4da369b49d0cd Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 20 Jan 2018 15:33:15 +0800 Subject: [PATCH 118/226] PRF&PUB:20171106 Autorandr- automatically adjust screen layout.md @geekpi --- ...ndr- automatically adjust screen layout.md | 51 +++++++++++++++++++ ...ndr- automatically adjust screen layout.md | 50 ------------------ 2 files changed, 51 insertions(+), 50 deletions(-) create mode 100644 published/20171106 Autorandr- automatically adjust screen layout.md delete mode 100644 translated/tech/20171106 Autorandr- automatically adjust screen layout.md diff --git a/published/20171106 Autorandr- automatically adjust screen layout.md b/published/20171106 Autorandr- automatically adjust screen layout.md new file mode 100644 index 0000000000..3e87fce587 --- /dev/null +++ b/published/20171106 Autorandr- automatically adjust screen layout.md @@ -0,0 +1,51 @@ +autorandr:自动调整屏幕布局 +====== + +像许多笔记本用户一样,我经常将笔记本插入到不同的显示器上(桌面上有多台显示器,演示时有投影机等)。运行 `xrandr` 命令或点击界面非常繁琐,编写脚本也不是很好。 + +最近,我遇到了 [autorandr][1],它使用 EDID(和其他设置)检测连接的显示器,保存 `xrandr` 配置并恢复它们。它也可以在加载特定配置时运行任意脚本。我已经打包了它,目前仍在 NEW 状态。如果你不能等待,[这是 deb][2],[这是 git 仓库][3]。 + +要使用它,只需安装软件包,并创建你的初始配置(我这里用的名字是 `undocked`): + +``` +autorandr --save undocked +``` + +然后,连接你的笔记本(或者插入你的外部显示器),使用 
`xrandr`(或其他任何)更改配置,然后保存你的新配置(我这里用的名字是 workstation): + +``` +autorandr --save workstation +``` + +对你额外的配置(或当你有新的配置)进行重复操作。 + +`autorandr` 有 `udev`、`systemd` 和 `pm-utils` 钩子,当新的显示器出现时 `autorandr --change` 应该会立即运行。如果需要,也可以手动运行 `autorandr --change` 或 `autorandr - load workstation`。你也可以在加载配置后在 `~/.config/autorandr/$PROFILE/postswitch` 添加自己的脚本来运行。由于我运行 i3,我的工作站配置如下所示: + +``` +#!/bin/bash + +xrandr --dpi 92 +xrandr --output DP2-2 --primary +i3-msg '[workspace="^(1|4|6)"] move workspace to output DP2-2;' +i3-msg '[workspace="^(2|5|9)"] move workspace to output DP2-3;' +i3-msg '[workspace="^(3|8)"] move workspace to output DP2-1;' +``` + +它适当地修正了 dpi,设置主屏幕(可能不需要?),并移动 i3 工作区。你可以通过在配置文件目录中添加一个 `block` 钩子来安排配置永远不会运行。 + +如果你定期更换显示器,请看一下! + +-------------------------------------------------------------------------------- + +via: https://www.donarmstrong.com/posts/autorandr/ + +作者:[Don Armstrong][a] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.donarmstrong.com +[1]:https://github.com/phillipberndt/autorandr +[2]:https://www.donarmstrong.com/autorandr_1.2-1_all.deb +[3]:https://git.donarmstrong.com/deb_pkgs/autorandr.git diff --git a/translated/tech/20171106 Autorandr- automatically adjust screen layout.md b/translated/tech/20171106 Autorandr- automatically adjust screen layout.md deleted file mode 100644 index 4dc8095669..0000000000 --- a/translated/tech/20171106 Autorandr- automatically adjust screen layout.md +++ /dev/null @@ -1,50 +0,0 @@ -Autorandr:自动调整屏幕布局 -====== -像许多笔记本用户一样,我经常将笔记本插入到不同的显示器上(桌面上有多台显示器,演示时有投影机等)。运行 xrandr 命令或点击界面非常繁琐,编写脚本也不是很好。 - -最近,我遇到了 [autorandr][1],它使用 EDID(和其他设置)检测连接的显示器,保存 xrandr 配置并恢复它们。它也可以在加载特定配置时运行任意脚本。我已经打包了它,目前仍在 NEW 状态。如果你不能等待,[这是 deb][2],[这是 git 仓库][3]。 - -要使用它,只需安装软件包,并创建你的初始配置(我这里是 undocked): -``` - autorandr --save undocked - -``` - -然后,连接你的笔记本(或者插入你的外部显示器),使用 xrandr(或其他任何)更改配置,然后保存你的新配置(我这里是 
workstation): -``` -autorandr --save workstation - -``` - -对你额外的配置(或当你有新的配置)进行重复操作。 - -Autorandr 有 `udev`、`systemd` 和 `pm-utils` 钩子,当新的显示器出现时 `autorandr --change` 应该会立即运行。如果需要,也可以手动运行 `autorandr --change` 或 `autorandr - load workstation`。你也可以在加载配置后在 `~/.config/autorandr/$PROFILE/postswitch` 添加自己的脚本来运行。由于我运行 i3,我的工作站配置如下所示: -``` - #!/bin/bash - - xrandr --dpi 92 - xrandr --output DP2-2 --primary - i3-msg '[workspace="^(1|4|6)"] move workspace to output DP2-2;' - i3-msg '[workspace="^(2|5|9)"] move workspace to output DP2-3;' - i3-msg '[workspace="^(3|8)"] move workspace to output DP2-1;' - -``` - -它适当地修正了 dpi,设置主屏幕(可能不需要?),并移动 i3 工作区。你可以通过在配置文件目录中添加一个 `block` 钩子来安排配置永远不会运行。 - -如果你定期更换显示器,请看一下! - --------------------------------------------------------------------------------- - -via: https://www.donarmstrong.com/posts/autorandr/ - -作者:[Don Armstrong][a] -译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.donarmstrong.com -[1]:https://github.com/phillipberndt/autorandr -[2]:https://www.donarmstrong.com/autorandr_1.2-1_all.deb -[3]:https://git.donarmstrong.com/deb_pkgs/autorandr.git From 177845842776322a4fbb96220d4445467dcd860d Mon Sep 17 00:00:00 2001 From: ChenYi <31087327+cyleft@users.noreply.github.com> Date: Sat, 20 Jan 2018 19:43:13 +0800 Subject: [PATCH 119/226] Delete 20171226 How to Configure Linux for Children.md --- ...226 How to Configure Linux for Children.md | 144 ------------------ 1 file changed, 144 deletions(-) delete mode 100644 sources/tech/20171226 How to Configure Linux for Children.md diff --git a/sources/tech/20171226 How to Configure Linux for Children.md b/sources/tech/20171226 How to Configure Linux for Children.md deleted file mode 100644 index a0b8bb4394..0000000000 --- a/sources/tech/20171226 How to Configure Linux for Children.md +++ /dev/null @@ -1,144 +0,0 @@ -translate by cyleft -How to Configure 
Linux for Children -====== - -![](https://www.maketecheasier.com/assets/uploads/2017/12/keep-kids-safe-online-hero.jpg) - -If you've been around computers for a while, you might associate Linux with a certain stereotype of computer user. How do you know someone uses Linux? Don't worry, they'll tell you. - -But Linux is an exceptionally customizable operating system. This allows users an unprecedented degree of control. In fact, parents can set up a specialized distro of Linux for children, ensuring children don't stumble across dangerous content accidentally. While the process is more prolonged than using Windows, it's also more powerful and durable. Linux is also free, which can make it well-suited for classroom or computer lab deployment. - -## Linux Distros for Children - -These Linux distros for children are built with simplified, kid-friendly interfaces. An adult will need to install and set up the operating system at first, but kids can run the computer entirely alone. You'll find large colorful interfaces, plenty of pictures and simple language. - -Unfortunately, none of these distros are regularly updated, and some are no longer in active development. That doesn't mean they won't work, but it does make malfunctions more likely. - -![qimo-gcompris][1] - - -### 1. Edubuntu - -[Edubuntu][2] is an education-specific fork of the popular Ubuntu operating system. It has a rich graphical environment and ships with a lot of educational software that's easy to update and maintain. It's designed for children in middle and high school. - -### 2. Ubermix - -[Ubermix][3] is designed from the ground up with the needs of education in mind. Ubermix takes all the complexity out of student devices by making them as reliable and easy-to-use as a cell phone without sacrificing the power and capabilities of a full operating system. 
With a turn-key, five-minute installation, twenty-second quick recovery mechanism, and more than sixty free applications pre-installed, ubermix turns whatever hardware you have into a powerful device for learning. - -### 3. Sugar - -[Sugar][4] is the operating system built for the One Laptop Per Child initiative. Sugar is pretty different from normal desktop Linux, with a heavy bias towards classroom use and teaching programming skills. - - **Note** : do note that there are several more Linux distros for kids that we didn't include in the list above because they have not been actively developed or were abandoned a long time ago. - -## Content Filtering Linux for Children - -The best tool for protecting children from accessing inappropriate content is you, but you can't be there all the time. Content filtering via proxy filtering sets up certain URLs as "off limits." There are two main tools you can use. - -![linux-for-children-content-filtering][5] - -### 1. DansGuardian - -[DansGuardian][6], an open-source content filter that works on virtually every Linux distro, is flexible and powerful, requiring command-line setup with a proxy of your choice. If you don't mind digging into proxy settings, this is the most powerful choice. - -Setting up DansGuardian is not an easy task, and you can follow the installation instructions on its main page. But once it is set up, it is a very effective tool to filter out unwanted content. - -### 2. Parental Control: Family Friendly Filter - -[Parental Control: Family Friendly Filter][7] is an extension for Firefox that allows parents to block sites containing pornography and any other kind of inappropriate material. You can blacklist particular domains so that bad websites are always blocked. - -![firefox-content-filter-addon][8] - -If you are still using an older version of Firefox that doesn't support [web extensions][9], then you can check out [ProCon Latte Content Filter][10]. 
Parents add domains to a pre-loaded blacklist and set a password to keep the extension from being modified. - -### 3. Blocksi Web Filter - -[Blocksi Web Filter][11] is an extension for Chrome and is useful for Web and Youtube filtering. It also comes with a time-access control so that you can limit the hours your kids can access the Web. - -## Fun Stuff - -![linux-for-children-tux-kart][12] - -Any computer for children better have some games on it, educational or otherwise. While Linux isn't as gaming-friendly as Windows, it's getting closer all the time. Here are several suggestions for constructive games you might load on to Linux for children: - -* [Super Tux Kart][21] (kart racing game) - -* [GCompris][22] (educational game suite) - -* [Secret Maryo Chronicles][23] (Super Mario clone) - -* [Childsplay][24] (educational/memory games) - -* [EToys][25] (programming for kids) - -* [TuxTyping][26], (typing game) - -* [Kalzium][27] (periodic table guide) - -* [Tux of Math Command][28] (math arcade games) - -* [Pink Pony][29] (Tron-like racing game) - -* [KTuberling][30] (constructor game) - -* [TuxPaint][31] (painting) - -* [Blinken][32] ([memory][33] game) - -* [KTurtle][34] (educational programming environment) - -* [KStars][35] (desktop planetarium) - -* [Marble][36] (virtual globe) - -* [KHangman][37] (hangman guessing game) - -## Conclusion: Why Linux for Children? - -Linux has a reputation for being needlessly complex. So why use Linux for children? It's about setting kids up to learn. Working with Linux provides many opportunities to learn how the operating system works. As children get older, they'll have opportunities to explore, driven by their own interests and curiosity. Because the Linux platform is so open to users, it's an excellent venue for children to discover a life-long love of computers. - -This article was first published in July 2010 and was updated in December 2017. 
- -Image by [Children at school][13] - --------------------------------------------------------------------------------- - -via: https://www.maketecheasier.com/configure-linux-for-children/ - -作者:[Alexander Fox][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.maketecheasier.com/author/alexfox/ -[1]:https://www.maketecheasier.com/assets/uploads/2010/08/qimo-gcompris.jpg (qimo-gcompris) -[2]:http://www.edubuntu.org -[3]:http://www.ubermix.org/ -[4]:http://wiki.sugarlabs.org/go/Downloads -[5]:https://www.maketecheasier.com/assets/uploads/2017/12/linux-for-children-content-filtering.png (linux-for-children-content-filtering) -[6]:https://help.ubuntu.com/community/DansGuardian -[7]:https://addons.mozilla.org/en-US/firefox/addon/family-friendly-filter/ -[8]:https://www.maketecheasier.com/assets/uploads/2017/12/firefox-content-filter-addon.png (firefox-content-filter-addon) -[9]:https://www.maketecheasier.com/best-firefox-web-extensions/ -[10]:https://addons.mozilla.org/en-US/firefox/addon/procon-latte/ -[11]:https://chrome.google.com/webstore/detail/blocksi-web-filter/pgmjaihnmedpcdkjcgigocogcbffgkbn?hl=en -[12]:https://www.maketecheasier.com/assets/uploads/2017/12/linux-for-children-tux-kart-e1513389774535.jpg (linux-for-children-tux-kart) -[13]:https://www.flickr.com/photos/lupuca/8720604364 -[21]:http://supertuxkart.sourceforge.net/ -[22]:http://gcompris.net/ -[23]:http://www.secretmaryo.org/ -[24]:http://www.schoolsplay.org/ -[25]:http://www.squeakland.org/about/intro/ -[26]:http://tux4kids.alioth.debian.org/tuxtype/index.php -[27]:http://edu.kde.org/kalzium/ -[28]:http://tux4kids.alioth.debian.org/tuxmath/index.php -[29]:http://code.google.com/p/pink-pony/ -[30]:http://games.kde.org/game.php?game=ktuberling -[31]:http://www.tuxpaint.org/ -[32]:https://www.kde.org/applications/education/blinken/ 
-[33]:https://www.ebay.com/sch/i.html?_nkw=memory -[34]:https://www.kde.org/applications/education/kturtle/ -[35]:https://www.kde.org/applications/education/kstars/ -[36]:https://www.kde.org/applications/education/marble/ -[37]:https://www.kde.org/applications/education/khangman/ From 1b8a462c8bf9d7439745427f2d2d3800aeb6c191 Mon Sep 17 00:00:00 2001 From: ChenYi <31087327+cyleft@users.noreply.github.com> Date: Sat, 20 Jan 2018 19:44:30 +0800 Subject: [PATCH 120/226] translated by cyleft 20171226 How to Configure Linux for Children.md --- ...226 How to Configure Linux for Children.md | 144 ++++++++++++++++++ 1 file changed, 144 insertions(+) create mode 100644 translated/tech/20171226 How to Configure Linux for Children.md diff --git a/translated/tech/20171226 How to Configure Linux for Children.md b/translated/tech/20171226 How to Configure Linux for Children.md new file mode 100644 index 0000000000..75f69ed53e --- /dev/null +++ b/translated/tech/20171226 How to Configure Linux for Children.md @@ -0,0 +1,144 @@ +如何配置一个小朋友使用的 Linux +====== + +![](https://www.maketecheasier.com/assets/uploads/2017/12/keep-kids-safe-online-hero.jpg) + +如果你在电脑边工作有一段时间,提到 Linux,你应该会联想到一些特定的人群。然而,怎么知道一个人是否在使用 Linux?别担心,他们自己会告诉你的。 + +Linux 是一个可以深度定制的操作系统。这就赋予了用户高度控制权。事实上,家长们可以针对小朋友设置出一个 Linux 专用发行版,确保让孩子不会在不经意间接触那些高危地带。虽然相比 Windows 这些设置显得更费时,但是一劳永逸。Linux 的开源免费,让教室或计算机实验室系统部署变得容易。 + +## 小朋友的 Linux 发行版 + +这些为儿童而简化的 Linux 发行版,界面对儿童十分友好。家长只需要先安装和设置,孩子就可以完全独立地使用计算机了。你将看见多彩的图形界面,丰富的图画,简明的语言。 + +不过,不幸的是,这类发行版不会经常更新,甚至有些已经不再积极开发了。但也不意味着不能使用,只是故障发生率可能会高一点。 + +![qimo-gcompris][1] + + +### 1. Edubuntu + +[Edubuntu][2] 是 Ubuntu 的一个分支系统,专用于教育事业。它拥有丰富的图形环境,持续维护并更新的教育软件。它被设计成初高中学生专用的操作系统。 + +### 2. Ubermix + +[Ubermix][3] 是根据教育需求而被设计出来的。Ubermix 消除了学生设备的复杂性,让它们像手机一样可靠易用,同时不牺牲完整操作系统的强大性能。一站式服务,五分钟安装,二十秒快速还原机制,超过 60 个免费预装软件,ubermix 就可以让你的硬件变成功能强大的学习设备。 + +### 3.
Sugar + +[Sugar][4] 是为每一个孩子的笔记本而设计的操作系统。Sugar 和普通桌面 Linux 大不相同,它非常侧重于课堂使用和编程技能的教学。 + + **注意** :很多为儿童开发的 Linux 发行版我并没有列举,因为它们大都不再积极维护或是被长时间遗弃。 + +## 为小朋友过滤内容的 Linux + +最能保护孩子、让他们拒绝访问少儿不宜内容的人是你自己,但是你不可能每分每秒都在孩子身边。但是你可以设置“限制访问”的 URL 到内容过滤代理服务器(通过软件)。这里有两个主要的软件可以帮助你。 + +![儿童内容过滤 Linux][5] + +### 1. DansGuardian + +[DansGuardian][6],一个开源内容过滤软件,几乎可以工作在任何 Linux 发行版上,需要你在命令行选择设置一个代理。如果你不介意深究代理服务器的设置,它会是最强大的选择。 + +配置 DansGuardian 可不是轻松活儿,但是你可以跟着安装说明按步骤完成。一旦设置完成,它将是过滤不良内容的高效工具。 + +### 2. 家长可控:方便家长的过滤器 + +[家长可控:方便家长的过滤器][7] 是 Firefox 的插件,允许家长屏蔽包含色情在内的任何少儿不宜的网站。你可以设置不良网站黑名单,屏蔽之。 + +![firefox 内容过滤插件][8] + +如果你还在使用不支持[网页扩展][9]的旧版 Firefox,那么你可以试试 [ProCon Latte 内容过滤][10]。家长们添加网址到预设的黑名单内,然后设置密码,防止设置被篡改。 + +### 3. Blocksi 网页过滤 + +[Blocksi 网页过滤][11] 是 Chrome 浏览器的插件,能有效过滤网页和 Youtube。它还提供时间访问控制,这样你可以限制家里小朋友的上网时间。 + +## 闲趣 + +![Linux 儿童游戏:tux kart][12] + +一台青少年使用的计算机,不管是否是用作教育,最好都要有一些游戏。虽然 Linux 没有 Windows 那么好的游戏性,但也在奋力追赶。这里有几个有益的游戏,建议你为孩子的计算机安装。 + +* [Super Tux Kart][21](竞速卡丁车) + +* [GCompris][22](适合教育的游戏) + +* [Secret Maryo Chronicles][23](超级马里奥) + +* [Childsplay][24](教育/记忆力游戏) + +* [EToys][25](儿童编程) + +* [TuxTyping][26](打字游戏) + +* [Kalzium][27](元素周期表) + +* [Tux of Math Command][28](数学游戏) + +* [Pink Pony][29](Tron 风格竞速游戏) + +* [KTuberling][30](创造游戏) + +* [TuxPaint][31](绘画) + +* [Blinken][32]([记忆力][33] 游戏) + +* [KTurtle][34](编程指导环境) + +* [KStars][35](天文馆) + +* [Marble][36](虚拟地球) + +* [KHangman][37](猜单词) + +## 结论:为什么给孩子使用 Linux?
+ +Linux 以复杂著称。那为什么给孩子使用 Linux?这是为了让孩子为学习做好准备。使用 Linux 为了解操作系统如何运行提供了很多机会。当孩子长大,他们就有随自己兴趣探索的机会。得益于 Linux 如此开放的平台,孩子们才能得到这么一个极佳的场所发现自己对计算机的毕生之恋。 + + +本文于 2010 年 7 月首发,2017 年 12 月更新。 + +图片来自 [在校学生][13] + +-------------------------------------------------------------------------------- + +via: https://www.maketecheasier.com/configure-linux-for-children/ + +作者:[Alexander Fox][a] +译者:[CYLeft](https://github.com/CYLeft) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.maketecheasier.com/author/alexfox/ +[1]:https://www.maketecheasier.com/assets/uploads/2010/08/qimo-gcompris.jpg (qimo-gcompris) +[2]:http://www.edubuntu.org +[3]:http://www.ubermix.org/ +[4]:http://wiki.sugarlabs.org/go/Downloads +[5]:https://www.maketecheasier.com/assets/uploads/2017/12/linux-for-children-content-filtering.png (linux-for-children-content-filtering) +[6]:https://help.ubuntu.com/community/DansGuardian +[7]:https://addons.mozilla.org/en-US/firefox/addon/family-friendly-filter/ +[8]:https://www.maketecheasier.com/assets/uploads/2017/12/firefox-content-filter-addon.png (firefox-content-filter-addon) +[9]:https://www.maketecheasier.com/best-firefox-web-extensions/ +[10]:https://addons.mozilla.org/en-US/firefox/addon/procon-latte/ +[11]:https://chrome.google.com/webstore/detail/blocksi-web-filter/pgmjaihnmedpcdkjcgigocogcbffgkbn?hl=en +[12]:https://www.maketecheasier.com/assets/uploads/2017/12/linux-for-children-tux-kart-e1513389774535.jpg (linux-for-children-tux-kart) +[13]:https://www.flickr.com/photos/lupuca/8720604364 +[21]:http://supertuxkart.sourceforge.net/ +[22]:http://gcompris.net/ +[23]:http://www.secretmaryo.org/ +[24]:http://www.schoolsplay.org/ +[25]:http://www.squeakland.org/about/intro/ +[26]:http://tux4kids.alioth.debian.org/tuxtype/index.php +[27]:http://edu.kde.org/kalzium/ +[28]:http://tux4kids.alioth.debian.org/tuxmath/index.php +[29]:http://code.google.com/p/pink-pony/
+[30]:http://games.kde.org/game.php?game=ktuberling +[31]:http://www.tuxpaint.org/ +[32]:https://www.kde.org/applications/education/blinken/ +[33]:https://www.ebay.com/sch/i.html?_nkw=memory +[34]:https://www.kde.org/applications/education/kturtle/ +[35]:https://www.kde.org/applications/education/kstars/ +[36]:https://www.kde.org/applications/education/marble/ +[37]:https://www.kde.org/applications/education/khangman/ From cbe8e260bd3d058bcf99e778fa88573ff8adc0fa Mon Sep 17 00:00:00 2001 From: BriFuture <752736341@qq.com> Date: Sat, 20 Jan 2018 19:59:46 +0800 Subject: [PATCH 121/226] BriFuture is translating this article --- .../tech/20150812 Let-s Build A Simple Interpreter. Part 3..md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20150812 Let-s Build A Simple Interpreter. Part 3..md b/sources/tech/20150812 Let-s Build A Simple Interpreter. Part 3..md index 2502d2624a..d9deb9f50e 100644 --- a/sources/tech/20150812 Let-s Build A Simple Interpreter. Part 3..md +++ b/sources/tech/20150812 Let-s Build A Simple Interpreter. Part 3..md @@ -1,3 +1,5 @@ +BriFuture is Translating this article + Let’s Build A Simple Interpreter. Part 3. 
====== From 6a368b661b8a1015c29e596a9f7123fec61e871c Mon Sep 17 00:00:00 2001 From: darksun Date: Sat, 20 Jan 2018 22:34:48 +0800 Subject: [PATCH 122/226] translate done: 20171102 What is huge pages in Linux.md --- .../20171102 What is huge pages in Linux.md | 138 ------------------ .../20171102 What is huge pages in Linux.md | 137 +++++++++++++++++ 2 files changed, 137 insertions(+), 138 deletions(-) delete mode 100644 sources/tech/20171102 What is huge pages in Linux.md create mode 100644 translated/tech/20171102 What is huge pages in Linux.md diff --git a/sources/tech/20171102 What is huge pages in Linux.md b/sources/tech/20171102 What is huge pages in Linux.md deleted file mode 100644 index 448280643f..0000000000 --- a/sources/tech/20171102 What is huge pages in Linux.md +++ /dev/null @@ -1,138 +0,0 @@ -translating by lujun9972 -What is huge pages in Linux? -====== -Learn about huge pages in Linux. Understand what is hugepages, how to configure it, how to check current state and how to disable it. - -![Huge Pages in Linux][1] - -In this article, we will walk you though details about huge pages so that you will be able to answer : what is huge pages in Linux? How to enable/disable huge pages? How to determine huge page value? in Linux like RHEL6, RHEL7, Ubuntu etc. - -Lets start with Huge pages basics. - -### What is Huge page in Linux? - -Huge pages are helpful in virtual memory management in Linux system. As name suggests, they help is managing huge size pages in memory in addition to standard 4KB page size. You can define as huge as 1GB page size using huge pages. - -During system boot, you reserve your memory portion with huge pages for your application. This memory portion i.e. these memory occupied by huge pages is never swapped out of memory. It will stick there until you change your configuration. This increases application performance to great extent like Oracle database with pretty large memory requirement. - -### Why use huge page? 
- -In virtual memory management, kernel maintains table in which it has mapping of virtual memory address to physical address. For every page transaction, kernel needs to load related mapping. If you have small size pages then you need to load more numbers of pages resulting kernel to load more mapping tables. This decreases performance. - -Using huge pages, means you will need fewer pages. This decreases number of mapping tables to load by kernel to great extent. This increases your kernel level performance which ultimately benefits your application. - -In short, by enabling huge pages, system has fewer page tables to deal with and hence less overhead to access / maintain them! - -### How to configure huge pages? - -Run below command to check current huge pages details. - -``` -root@kerneltalks # grep Huge /proc/meminfo -AnonHugePages: 0 kB -HugePages_Total: 0 -HugePages_Free: 0 -HugePages_Rsvd: 0 -HugePages_Surp: 0 -Hugepagesize: 2048 kB -``` - -In above output you can see one page size is 2MB `Hugepagesize` and total of 0 pages on system `HugePages_Total`. This huge page size can be increased from 2MB to max 1GB. - -Run below script to get how much huge pages your system needs currently . Script is from Oracle and can be found. - -``` -#!/bin/bash -# -# hugepages_settings.sh -# -# Linux bash script to compute values for the -# recommended HugePages/HugeTLB configuration -# -# Note: This script does calculation for all shared memory -# segments available when the script is run, no matter it -# is an Oracle RDBMS shared memory segment or not. -# Check for the kernel version -KERN=`uname -r | awk -F. 
'{ printf("%d.%d\n",$1,$2); }'` -# Find out the HugePage size -HPG_SZ=`grep Hugepagesize /proc/meminfo | awk {'print $2'}` -# Start from 1 pages to be on the safe side and guarantee 1 free HugePage -NUM_PG=1 -# Cumulative number of pages required to handle the running shared memory segments -for SEG_BYTES in `ipcs -m | awk {'print $5'} | grep "[0-9][0-9]*"` -do - MIN_PG=`echo "$SEG_BYTES/($HPG_SZ*1024)" | bc -q` - if [ $MIN_PG -gt 0 ]; then - NUM_PG=`echo "$NUM_PG+$MIN_PG+1" | bc -q` - fi -done -# Finish with results -case $KERN in - '2.4') HUGETLB_POOL=`echo "$NUM_PG*$HPG_SZ/1024" | bc -q`; - echo "Recommended setting: vm.hugetlb_pool = $HUGETLB_POOL" ;; - '2.6' | '3.8' | '3.10' | '4.1' ) echo "Recommended setting: vm.nr_hugepages = $NUM_PG" ;; - *) echo "Unrecognized kernel version $KERN. Exiting." ;; -esac -# End -``` -You can save it in `/tmp` as `hugepages_settings.sh` and then run it like below : -``` -root@kerneltalks # sh /tmp/hugepages_settings.sh -Recommended setting: vm.nr_hugepages = 124 -``` - -Output will be similar to some number as shown in above sample output. - -This means your system needs 124 huge pages of 2MB each! If you have set 4MB as page size then output would have been 62. You got the point, right? - -### Configure hugepages in kernel - -Now last part is to configure above stated [kernel parameter][2] and reload it. Add below value in `/etc/sysctl.conf` and reload configuration by issuing `sysctl -p` command. - -``` -vm .nr_hugepages=126 -``` - -Notice that we added 2 extra pages in kernel since we want to keep couple of pages spare than actual required number. - -Now, huge pages has been configured in kernel but to allow your application to use them you need to increase memory limits as well. New memory limit should be 126 pages x 2 MB each = 252 MB i.e. 258048 KB. 
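The arithmetic in that last step is worth making explicit. Below is a minimal shell sketch of the memlock calculation — the `126` page count and `2048` KB page size are the example values used above, not values read from your system:

```shell
# Derive the memlock limit (in KB) from a huge page count and a page size.
# Assumed example values: 126 pages of 2048 KB (2 MB) each, as above.
pages=126
hpg_kb=2048                       # Hugepagesize from /proc/meminfo, in KB
memlock_kb=$((pages * hpg_kb))
echo "memlock ${memlock_kb} KB"   # 126 * 2048 = 258048 KB
```

On a live system you would read the page size instead of hard-coding it, e.g. `hpg_kb=$(awk '/Hugepagesize/ {print $2}' /proc/meminfo)`.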
- -You need to edit below settings in `/etc/security/limits.conf` - -``` -soft memlock 258048 -hard memlock 258048 -``` - -Sometimes these settings are configured in app specific files like for Oracle DB its in `/etc/security/limits.d/99-grid-oracle-limits.conf` - -Thats it! You might want to restart your application to make use of these new huge pages. - -### How to disable hugepages? - -HugePages are generally enabled by default. Use below command to check current state of hugepages. - -``` -root@kerneltalks# cat /sys/kernel/mm/transparent_hugepage/enabled -[always] madvise never -``` - -`[always]` flag in output shows that hugepages are enabled on system. - -For RedHat base systems file path is `/sys/kernel/mm/redhat_transparent_hugepage/enabled` - -If you want to disable huge pages then add `transparent_hugepage=never` at the end of `kernel` line in `/etc/grub.conf` and reboot the system. - --------------------------------------------------------------------------------- - -via: https://kerneltalks.com/services/what-is-huge-pages-in-linux/ - -作者:[Shrikant Lavhate][a] -译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://kerneltalks.com -[1]:https://c1.kerneltalks.com/wp-content/uploads/2017/11/hugepages-in-linux.png -[2]:https://kerneltalks.com/linux/how-to-tune-kernel-parameters-in-linux/ diff --git a/translated/tech/20171102 What is huge pages in Linux.md b/translated/tech/20171102 What is huge pages in Linux.md new file mode 100644 index 0000000000..ee261956ad --- /dev/null +++ b/translated/tech/20171102 What is huge pages in Linux.md @@ -0,0 +1,137 @@ +Linux 中的 huge pages 是个什么玩意? +====== +学习 Linux 中的 huge pages( 巨大页)。理解什么是 hugepages,如何进行配置,如何查看当前状态以及如何禁用它。 + +![Huge Pages in Linux][1] + +本文,我们会详细介绍 huge page,让你能够回答:Linux 中的 huge page 是什么玩意?在 RHEL6,RHEL7,Ubuntu 等 Linux 中,如何启用/禁用 huge pages?如何查看 huge page 的当前值? 
+ +首先让我们从 Huge page 的基础知识开始讲起。 + +### Linux 中的 Huge page 是个什么玩意? + +Huge pages 有助于 Linux 系统进行虚拟内存管理。顾名思义,除了标准的 4KB 大小的页面外,它们还能帮助管理内存中的巨大页面。使用 huge pages,你最大可以定义 1GB 的页面大小。 + +在系统启动期间,huge pages 会为应用程序预留一部分内存。这部分内存,即被 huge pages 占用的这些存储器,永远不会被交换出内存。它会一直保留在那里,除非你修改了配置。这会极大地提高像 Oracle 数据库这样需要海量内存的应用程序的性能。 + +### 为什么使用巨大的页? + +在虚拟内存管理中,内核维护一个将虚拟内存地址映射到物理地址的表,对于每个页面操作,内核都需要加载相关的映射表。如果你的内存页很小,那么你需要加载的页就会很多,导致内核加载更多的映射表。而这会降低性能。 + +使用巨大的页,意味着所需要的页变少了,从而大大减少由内核加载的映射表的数量。这提高了内核级别的性能,最终有利于应用程序的性能。 + +简而言之,通过启用 huge pages,系统只需要处理较少的页面映射表,从而减少访问/维护它们的开销! + +### 如何配置 huge pages? + +运行下面命令来查看当前 huge pages 的详细内容。 + +``` +root@kerneltalks # grep Huge /proc/meminfo +AnonHugePages: 0 kB +HugePages_Total: 0 +HugePages_Free: 0 +HugePages_Rsvd: 0 +HugePages_Surp: 0 +Hugepagesize: 2048 kB +``` + +从上面输出可以看到,每个页的大小为 2MB(`Hugepagesize`),并且系统中目前有 0 个页(`HugePages_Total`)。这里巨大页的大小可以从 2MB 增加到 1GB。 + +运行下面的脚本可以获取系统当前需要多少个巨大页。该脚本取自 Oracle。 + +``` +#!/bin/bash +# +# hugepages_settings.sh +# +# Linux bash script to compute values for the +# recommended HugePages/HugeTLB configuration +# +# Note: This script does calculation for all shared memory +# segments available when the script is run, no matter it +# is an Oracle RDBMS shared memory segment or not. +# Check for the kernel version +KERN=`uname -r | awk -F.
'{ printf("%d.%d\n",$1,$2); }'` +# Find out the HugePage size +HPG_SZ=`grep Hugepagesize /proc/meminfo | awk {'print $2'}` +# Start from 1 pages to be on the safe side and guarantee 1 free HugePage +NUM_PG=1 +# Cumulative number of pages required to handle the running shared memory segments +for SEG_BYTES in `ipcs -m | awk {'print $5'} | grep "[0-9][0-9]*"` +do + MIN_PG=`echo "$SEG_BYTES/($HPG_SZ*1024)" | bc -q` + if [ $MIN_PG -gt 0 ]; then + NUM_PG=`echo "$NUM_PG+$MIN_PG+1" | bc -q` + fi +done +# Finish with results +case $KERN in + '2.4') HUGETLB_POOL=`echo "$NUM_PG*$HPG_SZ/1024" | bc -q`; + echo "Recommended setting: vm.hugetlb_pool = $HUGETLB_POOL" ;; + '2.6' | '3.8' | '3.10' | '4.1' ) echo "Recommended setting: vm.nr_hugepages = $NUM_PG" ;; + *) echo "Unrecognized kernel version $KERN. Exiting." ;; +esac +# End +``` +将它以 `hugepages_settings.sh` 为名保存到 `/tmp` 中,然后运行之: +``` +root@kerneltalks # sh /tmp/hugepages_settings.sh +Recommended setting: vm.nr_hugepages = 124 +``` + +输出如上结果,只是数字会有一些出入。 + +这意味着,你的系统需要 124 个 2MB 的巨大页!若你设置页面大小为 4MB,则结果就变成了 62。你明白了吧? + +### 配置内核中的 hugepages + +本文最后一部分内容是配置上面提到的 [内核参数][2],然后重新加载配置。将下面内容添加到 `/etc/sysctl.conf` 中,然后输入 `sysctl -p` 命令重新加载配置。 + +``` +vm.nr_hugepages=126 +``` + +注意我们这里多加了两个额外的页,因为我们希望在实际需要的页面数量外多留一些额外的空闲页。 + +现在,内核已经配置好了,但是要让应用能够使用这些巨大页,还需要提高内存的使用阈值。新的内存阈值应该为 126 个页 x 每个页 2 MB = 252 MB,也就是 258048 KB。 + +你需要编辑 `/etc/security/limits.conf` 中的如下配置: + +``` +soft memlock 258048 +hard memlock 258048 +``` + +某些情况下,这些设置是在指定应用的文件中配置的,比如 Oracle DB 就是在 `/etc/security/limits.d/99-grid-oracle-limits.conf` 中配置的。 + +这就完成了!你可能还需要重启应用,以便使用这些新的巨大页。 + +### 如何禁用 hugepages?
+ +HugePages 默认是开启的。使用下面命令来查看 hugepages 的当前状态。 + +``` +root@kerneltalks# cat /sys/kernel/mm/transparent_hugepage/enabled +[always] madvise never +``` + +输出中的 `[always]` 标志说明系统启用了 hugepages。 + +若使用的是基于 RedHat 的系统,则应该要查看的文件路径为 `/sys/kernel/mm/redhat_transparent_hugepage/enabled`。 + +若想禁用巨大页,则在 `/etc/grub.conf` 中的 `kernel` 行后面加上 `transparent_hugepage=never`,然后重启系统。 + +-------------------------------------------------------------------------------- + +via: https://kerneltalks.com/services/what-is-huge-pages-in-linux/ + +作者:[Shrikant Lavhate][a] +译者:[lujun9972](https://github.com/lujun9972) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://kerneltalks.com +[1]:https://c1.kerneltalks.com/wp-content/uploads/2017/11/hugepages-in-linux.png +[2]:https://kerneltalks.com/linux/how-to-tune-kernel-parameters-in-linux/ From 512caebbed0ff1945081c33bd04be2b2e11f85bc Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 20 Jan 2018 22:54:40 +0800 Subject: [PATCH 123/226] PRF:20171226 How to Configure Linux for Children.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @CYLeft 这篇翻译的比较随意,应该在尽量保持原意的基础上调整语言习惯。 --- ...226 How to Configure Linux for Children.md | 65 +++++++------------ 1 file changed, 24 insertions(+), 41 deletions(-) diff --git a/translated/tech/20171226 How to Configure Linux for Children.md b/translated/tech/20171226 How to Configure Linux for Children.md index 75f69ed53e..8889150238 100644 --- a/translated/tech/20171226 How to Configure Linux for Children.md +++ b/translated/tech/20171226 How to Configure Linux for Children.md @@ -3,99 +3,82 @@ ![](https://www.maketecheasier.com/assets/uploads/2017/12/keep-kids-safe-online-hero.jpg) -如果你在电脑边工作有一段时间,提到 Linux,你应该会联想到一些特定的人群。然而,你怎么会知道都有些什么样的人在使用 Linux?别担心,这就告诉你。 +如果你接触电脑有一段时间了,提到 Linux,你应该会联想到一些特定的人群。你觉得哪些人在使用 Linux?别担心,这就告诉你。 -Linux 是一个可以深度定制的操作系统。这就赋予了用户高度控制权。事实上,家长们可以针对小朋友设置出一个 Linux 
专业发行版,确保让孩子不会在不经意间接触那些高危地带。但是相比 Windows,这些设置显得更费时,但是一劳永逸。Linux 的开源免费,让教室或计算机实验室系统部署变得容易。 +Linux 是一个可以深度定制的操作系统。这就赋予了用户高度控制权。事实上,家长们可以针对小朋友设置出一个专门的 Linux 发行版,确保让孩子不会在不经意间接触那些高危地带。但是相比 Windows,这些设置显得更费时,但是一劳永逸。Linux 的开源免费,让教室或计算机实验室系统部署变得容易。 -## 小朋友的 Linux 发行版 +### 小朋友的 Linux 发行版 这些为儿童而简化的 Linux 发行版,界面对儿童十分友好。家长只需要先安装和设置,孩子就可以完全独立地使用计算机了。你将看见多彩的图形界面,丰富的图画,简明的语言。 -不过,不幸的是,这类发行版不会经常更新,甚至有些已经不再积极发展了。但也不意味着不能使用,只是故障发生率可能会高一点。 +不过,不幸的是,这类发行版不会经常更新,甚至有些已经不再积极开发了。但也不意味着不能使用,只是故障发生率可能会高一点。 ![qimo-gcompris][1] +#### 1. Edubuntu -### 1. Edubuntu +[Edubuntu][2] 是 Ubuntu 的一个分支版本,专用于教育事业。它拥有丰富的图形环境和大量教育软件,易于更新维护。它被设计成初高中学生专用的操作系统。 -[Edubuntu][2] 是 Ubuntu 的一个分支系统,专用于教育事业。它拥有丰富的图形环境,持续维护并更新的教育软件。它被设计成初高中学生专用的操作系统。 +#### 2. Ubermix -### 2. Ubermix +[Ubermix][3] 是根据教育需求而被设计出来的。Ubermix 将学生从复杂的计算机设备中解脱出来,就像手机一样简单易用,而不会牺牲性能和操作系统的全部能力。一键开机、五分钟安装、二十秒钟快速还原机制,以及超过 60 个的免费预装软件,ubermix 就可以让你的硬件变成功能强大的学习设备。 -[Ubermix][3] 是根据教育需求而被设计出来的。Ubermix 将学生从复杂的计算机设备中解脱出来,通过一部简单而可靠手机,不牺牲性能,把完整的操作系统呈现出来。一站式服务,五分钟安装,二十分钟快速还原机制,超过 60 个免费预装软件,ubermix 就可以让你的硬件变成功能强大的学习设备。 +#### 3. Sugar -### 3. Sugar +[Sugar][4] 是为“每个孩子一台笔记本(OLPC)计划”而设计的操作系统。Sugar 和普通桌面 Linux 大不相同,它更专注于学生课堂使用和教授编程能力。 -[Sugar][4] 是为每一个孩子的笔记本而设计的操作系统。Sugar 和普通桌面 Linux 大不相同,它更专注于学生课堂上的教学软件。 +**注意** :很多为儿童开发的 Linux 发行版我并没有列举,因为它们大都不再积极维护或是被长时间遗弃。 - **注意** :很多为儿童开发的 Linux 发行版我并没有列举,因为它们大都不再积极维护或是被长时间遗弃。 - -## 为小朋友过筛选内容的 Linux +### 为小朋友过筛选内容的 Linux 只有你,最能保护孩子拒绝访问少儿不宜的内容,但是你不可能每分每秒都在孩子身边。但是你可以设置“限制访问”的 URL 到内容过滤代理服务器(通过软件)。这里有两个主要的软件可以帮助你。 ![儿童内容过滤 Linux][5] -### 1. DansGuardian +#### 1、 DansGuardian -[DansGuardian][6],一个开源内容过滤软件,几乎可以工作在任何 Linux 发行版上,需要你在命令行选择设置一个代理。如果你不深究代理服务器的设置,这可能是最强力的选择。 +[DansGuardian][6],一个开源内容过滤软件,几乎可以工作在任何 Linux 发行版上,灵活而强大,需要你通过命令行设置你的代理。如果你不深究代理服务器的设置,这可能是最强力的选择。 配置 DansGuardian 可不是轻松活儿,但是你可以跟着安装说明按步骤完成。一旦设置完成,它将是过滤不良内容的高效工具。 -### 2. 
家长可控:方便家长的过滤器 +#### 2、 Parental Control: Family Friendly Filter -[家长可控:方便家长的过滤器][7] 是 Firefoxis 的插件,允许家长屏蔽包含色情在内的任何少儿不宜的网站。你可以设置不良网站黑名单,屏蔽之。 +[Parental Control: Family Friendly Filter][7] 是 Firefox 的插件,允许家长屏蔽包含色情内容在内的任何少儿不宜的网站。你也可以设置不良网站黑名单,将其一直屏蔽。 ![firefox 内容过滤插件][8] -你使用的 Firefox 可能不支持 [网页插件][9],如果版本过旧的话。安装该插件,你需要检索 [ProCon Latte 内容过滤][10]。家长们添加网址到预设的黑名单内,然后设置密码,防止设置被篡改。 +你使用的老版本的 Firefox 可能不支持 [网页插件][9],那么你可以使用 [ProCon Latte 内容过滤器][10]。家长们添加网址到预设的黑名单内,然后设置密码,防止设置被篡改。 -### 3. Blocksi 网页过滤 +#### 3、 Blocksi 网页过滤 -[Blocksi 网页过滤][11] 是 Chrome 浏览器的插件,能有效过滤网页和 Youtube。它也提供限时服务,这样你可以限制家里小朋友的上网时间。 +[Blocksi 网页过滤][11] 是 Chrome 浏览器插件,能有效过滤网页和 Youtube。它也提供限时服务,这样你可以限制家里小朋友的上网时间。 -## 闲趣 +### 闲趣 ![Linux 儿童游戏:tux kart][12] -一台青少年使用的计算机,不管是否是用作教育,最好都要有一些游戏。虽然 Linux 没有 Windows 那么好的游戏性,但也在奋力追赶。这有几个具有建树性的游戏建议你为孩子的计算机安装。 +给孩子们使用的计算机,不管是否是用作教育,最好都要有一些游戏。虽然 Linux 没有 Windows 那么好的游戏性,但也在奋力追赶。这有建议几个有益的游戏,你可以安装到孩子们的计算机上。 * [Super Tux Kart][21](竞速卡丁车) - * [GCompris][22](适合教育的游戏) - * [Secret Maryo Chronicles][23](超级马里奥) - * [Childsplay][24](教育/记忆力游戏) - * [EToys][25](儿童编程) - * [TuxTyping][26](打字游戏) - * [Kalzium][27](元素周期表) - * [Tux of Math Command][28](数学游戏) - * [Pink Pony][29](Tron 风格竞速游戏) - * [KTuberling][30](创造游戏) - * [TuxPaint][31](绘画) - * [Blinken][32]([记忆力][33] 游戏) - * [KTurtle][34](编程指导环境) - * [KStars][35](天文馆) - * [Marble][36](虚拟地球) - * [KHangman][37](猜单词) -## 结论:为什么给孩子使用 Linux? - -Linux 以复杂著称。那为什么给孩子使用 Linux?这是为了让孩子适应 Linux。在 Linux 上工作给了解系统运行提供了很多机会。当孩子长大,它们就有随自己兴趣探索的机会。得益于 Linux 如此开放的平台,孩子们才能得到这么一个极佳的场所发现自己对计算机的毕生之恋。 +### 结论:为什么给孩子使用 Linux? 
+Linux 以复杂著称。那为什么给孩子使用 Linux?这是为了让孩子适应 Linux。在 Linux 上工作给了解系统运行提供了很多机会。当孩子长大,他们就有随自己兴趣探索的机会。得益于 Linux 如此开放的平台,孩子们才能得到这么一个极佳的场所发现自己对计算机的毕生之恋。 本文于 2010 年 7 月首发,2017 年 12 月更新。 @@ -107,7 +90,7 @@ via: https://www.maketecheasier.com/configure-linux-for-children/ 作者:[Alexander Fox][a] 译者:[CYLeft](https://github.com/CYLeft) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From df948e9056a99c6b721840466709070e694f0217 Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 20 Jan 2018 22:55:05 +0800 Subject: [PATCH 124/226] PUB:20171226 How to Configure Linux for Children.md @CYLeft https://linux.cn/article-9261-1.html --- .../20171226 How to Configure Linux for Children.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20171226 How to Configure Linux for Children.md (100%) diff --git a/translated/tech/20171226 How to Configure Linux for Children.md b/published/20171226 How to Configure Linux for Children.md similarity index 100% rename from translated/tech/20171226 How to Configure Linux for Children.md rename to published/20171226 How to Configure Linux for Children.md From c0678c831bb6db1feed63ae0e066a88840509771 Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 20 Jan 2018 23:05:20 +0800 Subject: [PATCH 125/226] PRF&PUB:20171215 How to find and tar files into a tar ball.md @geekpi --- ...w to find and tar files into a tar ball.md | 135 ++++++++++++++++++ ...w to find and tar files into a tar ball.md | 120 ---------------- 2 files changed, 135 insertions(+), 120 deletions(-) create mode 100644 published/20171215 How to find and tar files into a tar ball.md delete mode 100644 translated/tech/20171215 How to find and tar files into a tar ball.md diff --git a/published/20171215 How to find and tar files into a tar ball.md b/published/20171215 How to find and tar files into a tar ball.md new file mode 100644 index 
0000000000..3dc34e7ab6 --- /dev/null +++ b/published/20171215 How to find and tar files into a tar ball.md @@ -0,0 +1,135 @@ +如何找出并打包文件成 tar 包 +====== + +Q:我想找出所有的 *.doc 文件并将它们创建成一个 tar 包,然后存储在 `/nfs/backups/docs/file.tar` 中。是否可以在 Linux 或者类 Unix 系统上查找并 tar 打包文件? + +`find` 命令用于按照给定条件在目录层次结构中搜索文件。`tar` 命令是用于 Linux 和类 Unix 系统创建 tar 包的归档工具。 + +[![How to find and tar files on linux unix][1]][1] + +让我们看看如何将 `tar` 命令与 `find` 命令结合在一个命令行中创建一个 tar 包。 + +### Find 命令 + +语法是: + +``` +find /path/to/search -name "file-to-search" -options +## 找出所有 Perl(*.pl)文件 ## +find $HOME -name "*.pl" -print +## 找出所有 *.doc 文件 ## +find $HOME -name "*.doc" -print +## 找出所有 *.sh(shell 脚本)并运行 ls -l 命令 ## +find . -iname "*.sh" -exec ls -l {} + +``` + +最后一个命令的输出示例: + +``` +-rw-r--r-- 1 vivek vivek 1169 Apr 4 2017 ./backups/ansible/cluster/nginx.build.sh +-rwxr-xr-x 1 vivek vivek 1500 Dec 6 14:36 ./bin/cloudflare.pure.url.sh +lrwxrwxrwx 1 vivek vivek 13 Dec 31 2013 ./bin/cmspostupload.sh -> postupload.sh +lrwxrwxrwx 1 vivek vivek 12 Dec 31 2013 ./bin/cmspreupload.sh -> preupload.sh +lrwxrwxrwx 1 vivek vivek 14 Dec 31 2013 ./bin/cmssuploadimage.sh -> uploadimage.sh +lrwxrwxrwx 1 vivek vivek 13 Dec 31 2013 ./bin/faqpostupload.sh -> postupload.sh +lrwxrwxrwx 1 vivek vivek 12 Dec 31 2013 ./bin/faqpreupload.sh -> preupload.sh +lrwxrwxrwx 1 vivek vivek 14 Dec 31 2013 ./bin/faquploadimage.sh -> uploadimage.sh +-rw-r--r-- 1 vivek vivek 778 Nov 6 14:44 ./bin/mirror.sh +-rwxr-xr-x 1 vivek vivek 136 Apr 25 2015 ./bin/nixcraft.com.301.sh +-rwxr-xr-x 1 vivek vivek 547 Jan 30 2017 ./bin/paypal.sh +-rwxr-xr-x 1 vivek vivek 531 Dec 31 2013 ./bin/postupload.sh +-rwxr-xr-x 1 vivek vivek 437 Dec 31 2013 ./bin/preupload.sh +-rwxr-xr-x 1 vivek vivek 1046 May 18 2017 ./bin/purge.all.cloudflare.domain.sh +lrwxrwxrwx 1 vivek vivek 13 Dec 31 2013 ./bin/tipspostupload.sh -> postupload.sh +lrwxrwxrwx 1 vivek vivek 12 Dec 31 2013 ./bin/tipspreupload.sh -> preupload.sh +lrwxrwxrwx 1 vivek vivek 14 Dec 31 2013 
./bin/tipsuploadimage.sh -> uploadimage.sh +-rwxr-xr-x 1 vivek vivek 1193 Oct 18 2013 ./bin/uploadimage.sh +-rwxr-xr-x 1 vivek vivek 29 Nov 6 14:33 ./.vim/plugged/neomake/tests/fixtures/errors.sh +-rwxr-xr-x 1 vivek vivek 215 Nov 6 14:33 ./.vim/plugged/neomake/tests/helpers/trap.sh +``` + +### Tar 命令 + +要[创建 /home/vivek/projects 目录的 tar 包][2],运行: + +``` +$ tar -cvf /home/vivek/projects.tar /home/vivek/projects +``` + +### 结合 find 和 tar 命令 + +语法是: + +``` +find /dir/to/search/ -name "*.doc" -exec tar -rvf out.tar {} \; +``` + +或者 + +``` +find /dir/to/search/ -name "*.doc" -exec tar -rvf out.tar {} + +``` + +例子: + +``` +find $HOME -name "*.doc" -exec tar -rvf /tmp/all-doc-files.tar "{}" \; +``` + +或者 + +``` +find $HOME -name "*.doc" -exec tar -rvf /tmp/all-doc-files.tar "{}" + +``` + +这里,find 命令的选项: + +* `-name "*.doc"`:按照给定的模式/标准查找文件。在这里,在 $HOME 中查找所有 *.doc 文件。 +* `-exec tar ...` :对 `find` 命令找到的所有文件执行 `tar` 命令。 + +这里,`tar` 命令的选项: + +* `-r`:将文件追加到归档末尾。参数与 `-c` 选项具有相同的含义。 +* `-v`:详细输出。 +* `-f out.tar` : 将所有文件追加到 out.tar 中。 + +也可以像下面这样将 `find` 命令的输出通过管道输入到 `tar` 命令中: + +``` +find $HOME -name "*.doc" -print0 | tar -cvf /tmp/file.tar --null -T - +``` + +传递给 `find` 命令的 `-print0` 选项处理特殊的文件名。`--null` 和 `-T` 选项告诉 `tar` 命令从标准输入/管道读取输入。也可以使用 `xargs` 命令: + +``` +find $HOME -type f -name "*.sh" | xargs tar cfvz /nfs/x230/my-shell-scripts.tgz +``` + +有关更多信息,请参阅下面的 man 页面: + +``` +$ man tar +$ man find +$ man xargs +$ man bash +``` + +------------------------------ + +作者简介: + +作者是 nixCraft 的创造者,是一名经验丰富的系统管理员,也是 Linux 操作系统/Unix shell 脚本培训师。他曾与全球客户以及 IT、教育、国防和太空研究以及非营利部门等多个行业合作。在 Twitter、Facebook 和 Google+ 上关注他。 + +-------------------------------------------------------------------------------- + +via: https://www.cyberciti.biz/faq/linux-unix-find-tar-files-into-tarball-command/ + +作者:[Vivek Gite][a] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + 
+[a]:https://www.cyberciti.biz +[1]:https://www.cyberciti.biz/media/new/faq/2017/12/How-to-find-and-tar-files-on-linux-unix.jpg +[2]:https://www.cyberciti.biz/faq/creating-a-tar-file-linux-command-line/ diff --git a/translated/tech/20171215 How to find and tar files into a tar ball.md b/translated/tech/20171215 How to find and tar files into a tar ball.md deleted file mode 100644 index b1cc728635..0000000000 --- a/translated/tech/20171215 How to find and tar files into a tar ball.md +++ /dev/null @@ -1,120 +0,0 @@ -如何找出并打包文件成 tar 包 -====== - -我想找出所有的 \*.doc 文件并将它们创建成一个 tar 包,然后存储在 /nfs/backups/docs/file.tar 中。是否可以在 Linux 或者类 Unix 系统上查找并 tar 打包文件? - -find 命令用于按照给定条件在目录层次结构中搜索文件。tar 命令是用于 Linux 和类 Unix 系统创建 tar 包的归档工具。 - -[![How to find and tar files on linux unix][1]][1] - -让我们看看如何将 tar 命令与 find 命令结合在一个命令行中创建一个 tar 包。 - -## Find 命令 - -语法是: -``` -find /path/to/search -name "file-to-search" -options -## 找出所有 Perl(*.pl)文件 ## -find $HOME -name "*.pl" -print -## 找出所有 \*.doc 文件 ## -find $HOME -name "*.doc" -print -## 找出所有 *.sh(shell 脚本)并运行 ls -l 命令 ## -find . 
-iname "*.sh" -exec ls -l {} + -``` -最后一个命令的输出示例: -``` --rw-r--r-- 1 vivek vivek 1169 Apr 4 2017 ./backups/ansible/cluster/nginx.build.sh --rwxr-xr-x 1 vivek vivek 1500 Dec 6 14:36 ./bin/cloudflare.pure.url.sh -lrwxrwxrwx 1 vivek vivek 13 Dec 31 2013 ./bin/cmspostupload.sh -> postupload.sh -lrwxrwxrwx 1 vivek vivek 12 Dec 31 2013 ./bin/cmspreupload.sh -> preupload.sh -lrwxrwxrwx 1 vivek vivek 14 Dec 31 2013 ./bin/cmssuploadimage.sh -> uploadimage.sh -lrwxrwxrwx 1 vivek vivek 13 Dec 31 2013 ./bin/faqpostupload.sh -> postupload.sh -lrwxrwxrwx 1 vivek vivek 12 Dec 31 2013 ./bin/faqpreupload.sh -> preupload.sh -lrwxrwxrwx 1 vivek vivek 14 Dec 31 2013 ./bin/faquploadimage.sh -> uploadimage.sh --rw-r--r-- 1 vivek vivek 778 Nov 6 14:44 ./bin/mirror.sh --rwxr-xr-x 1 vivek vivek 136 Apr 25 2015 ./bin/nixcraft.com.301.sh --rwxr-xr-x 1 vivek vivek 547 Jan 30 2017 ./bin/paypal.sh --rwxr-xr-x 1 vivek vivek 531 Dec 31 2013 ./bin/postupload.sh --rwxr-xr-x 1 vivek vivek 437 Dec 31 2013 ./bin/preupload.sh --rwxr-xr-x 1 vivek vivek 1046 May 18 2017 ./bin/purge.all.cloudflare.domain.sh -lrwxrwxrwx 1 vivek vivek 13 Dec 31 2013 ./bin/tipspostupload.sh -> postupload.sh -lrwxrwxrwx 1 vivek vivek 12 Dec 31 2013 ./bin/tipspreupload.sh -> preupload.sh -lrwxrwxrwx 1 vivek vivek 14 Dec 31 2013 ./bin/tipsuploadimage.sh -> uploadimage.sh --rwxr-xr-x 1 vivek vivek 1193 Oct 18 2013 ./bin/uploadimage.sh --rwxr-xr-x 1 vivek vivek 29 Nov 6 14:33 ./.vim/plugged/neomake/tests/fixtures/errors.sh --rwxr-xr-x 1 vivek vivek 215 Nov 6 14:33 ./.vim/plugged/neomake/tests/helpers/trap.sh -``` - -## Tar 命令 - -要[创建 /home/vivek/projects 目录的 tar 包][2],运行: -``` -$ tar -cvf /home/vivek/projects.tar /home/vivek/projects -``` - -## 结合 find 和 tar 命令 - -语法是: -``` -find /dir/to/search/ -name "*.doc" -exec tar -rvf out.tar {} \; -``` -或者 -``` -find /dir/to/search/ -name "*.doc" -exec tar -rvf out.tar {} + -``` -例子: -``` -find $HOME -name "*.doc" -exec tar -rvf /tmp/all-doc-files.tar "{}" \; -``` -或者 -``` -find $HOME 
-name "*.doc" -exec tar -rvf /tmp/all-doc-files.tar "{}" + -``` -这里,find 命令的选项: - - * **-name "*.doc"** : 按照给定的模式/标准查找文件。在这里,在 $HOME 中查找所有 \*.doc 文件。 - * **-exec tar ...** : 对 find 命令找到的所有文件执行 tar 命令。 - -这里,tar 命令的选项: - - * **-r** : 将文件追加到归档末尾。参数与 -c 选项具有相同的含义。 - * **-v** : 详细输出。 - * **-f** : out.tar : 将所有文件追加到 out.tar 中。 - - - -也可以像下面这样将 find 命令的输出通过管道输入到 tar 命令中: -``` -find $HOME -name "*.doc" -print0 | tar -cvf /tmp/file.tar --null -T - -``` -传递给 find 命令的 -print0 选项处理特殊的文件名。-null 和 -T 选项告诉 tar 命令从标准输入/管道读取输入。也可以使用 xargs 命令: -``` -find $HOME -type f -name "*.sh" | xargs tar cfvz /nfs/x230/my-shell-scripts.tgz -``` -有关更多信息,请参阅下面的 man 页面: -``` -$ man tar -$ man find -$ man xargs -$ man bash -``` - ------------------------------- - -作者简介: - -作者是 nixCraft 的创造者,是一名经验丰富的系统管理员,也是 Linux 操作系统/Unix shell 脚本培训师。他曾与全球客户以及 IT、教育、国防和太空研究以及非营利部门等多个行业合作。在 Twitter、Facebook 和 Google+ 上关注他。 - --------------------------------------------------------------------------------- - -via: https://www.cyberciti.biz/faq/linux-unix-find-tar-files-into-tarball-command/ - -作者:[Vivek Gite][a] -译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.cyberciti.biz -[1]:https://www.cyberciti.biz/media/new/faq/2017/12/How-to-find-and-tar-files-on-linux-unix.jpg -[2]:https://www.cyberciti.biz/faq/creating-a-tar-file-linux-command-line/ From 9985916a59afd1d29f53a9e21ae0d496d02be5f9 Mon Sep 17 00:00:00 2001 From: jessie-pang <35220454+jessie-pang@users.noreply.github.com> Date: Sun, 21 Jan 2018 09:34:07 +0800 Subject: [PATCH 126/226] Update 20180116 Analyzing the Linux boot process.md --- sources/tech/20180116 Analyzing the Linux boot process.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180116 Analyzing the Linux boot process.md b/sources/tech/20180116 Analyzing the Linux boot process.md index 0bf807c6bb..24a7cb971d 100644 --- 
a/sources/tech/20180116 Analyzing the Linux boot process.md +++ b/sources/tech/20180116 Analyzing the Linux boot process.md @@ -1,3 +1,5 @@ +Translating by jessie-pang + Analyzing the Linux boot process ====== From 2d83d99817fe21b82a31cb3646d28a4d629fe8b4 Mon Sep 17 00:00:00 2001 From: jessie-pang <35220454+jessie-pang@users.noreply.github.com> Date: Sun, 21 Jan 2018 10:42:59 +0800 Subject: [PATCH 127/226] Delete 20171120 How to use special permissions- the setuid, setgid and sticky bits.md --- ...ons- the setuid, setgid and sticky bits.md | 105 ------------------ 1 file changed, 105 deletions(-) delete mode 100644 sources/tech/20171120 How to use special permissions- the setuid, setgid and sticky bits.md diff --git a/sources/tech/20171120 How to use special permissions- the setuid, setgid and sticky bits.md b/sources/tech/20171120 How to use special permissions- the setuid, setgid and sticky bits.md deleted file mode 100644 index e221a0cbbf..0000000000 --- a/sources/tech/20171120 How to use special permissions- the setuid, setgid and sticky bits.md +++ /dev/null @@ -1,105 +0,0 @@ -Translating by jessie-pang - -How to use special permissions: the setuid, setgid and sticky bits -====== - -### Objective - -Getting to know how special permissions works, how to identify and set them. - -### Requirements - - * Knowledge of the standard unix/linux permissions system - -### Difficulty - -EASY - -### Conventions - - * **#** \- requires given command to be executed with root privileges either directly as a root user or by use of `sudo` command - * **$** \- given command to be executed as a regular non-privileged user - - - -### Introduction - -Normally, on a unix-like operating system, the ownership of files and directories is based on the default `uid` (user-id) and `gid` (group-id) of the user who created them. 
The same thing happens when a process is launched: it runs with the effective user-id and group-id of the user who started it, and with the corresponding privileges. This behavior can be modified by using special permissions. - -### The setuid bit - -When the `setuid` bit is used, the behavior described above it's modified so that when an executable is launched, it does not run with the privileges of the user who launched it, but with that of the file owner instead. So, for example, if an executable has the `setuid` bit set on it, and it's owned by root, when launched by a normal user, it will run with root privileges. It should be clear why this represents a potential security risk, if not used correctly. - -An example of an executable with the setuid permission set is `passwd`, the utility we can use to change our login password. We can verify that by using the `ls` command: -``` - -ls -l /bin/passwd --rwsr-xr-x. 1 root root 27768 Feb 11 2017 /bin/passwd - -``` - -How to identify the `setuid` bit? As you surely have noticed looking at the output of the command above, the `setuid` bit is represented by an `s` in place of the `x` of the executable bit. The `s` implies that the executable bit is set, otherwise you would see a capital `S`. This happens when the `setuid` or `setgid` bits are set, but the executable bit is not, showing the user an inconsistency: the `setuid` and `setgit` bits have no effect if the executable bit is not set. The setuid bit has no effect on directories. - -### The setgid bit - -Unlike the `setuid` bit, the `setgid` bit has effect on both files and directories. In the first case, the file which has the `setgid` bit set, when executed, instead of running with the privileges of the group of the user who started it, runs with those of the group which owns the file: in other words, the group ID of the process will be the same of that of the file. 
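To make the `s` notation concrete, here is a small throwaway-file sketch of setting the setgid bit and reading back the mode; it assumes GNU coreutils `stat` for the `%a` octal format (`2755` is the setgid value `2` in front of plain `rwxr-xr-x`, i.e. `755`):

```shell
# Create a scratch file, set the setgid bit on it, and read back the octal mode.
tmp=$(mktemp -d)
touch "$tmp/demo"
chmod 0755 "$tmp/demo"             # plain rwxr-xr-x
chmod g+s "$tmp/demo"              # numeric equivalent: chmod 2755
mode=$(stat -c '%a' "$tmp/demo")   # GNU stat prints the mode in octal
echo "$mode"                       # 2755
rm -r "$tmp"
```

Listing the same file with `ls -l` would show `-rwxr-sr-x`, the lowercase `s` in the group triplet described above.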
- -When used on a directory, instead, the `setgid` bit alters the standard behavior so that the group of the files created inside said directory, will not be that of the user who created them, but that of the parent directory itself. This is often used to ease the sharing of files (files will be modifiable by all the users that are part of said group). Just like the setuid, the setgid bit can easily be spotted (in this case on a test directory): -``` - -ls -ld test -drwxrwsr-x. 2 egdoc egdoc 4096 Nov 1 17:25 test - -``` - -This time the `s` is present in place of the executable bit on the group sector. - -### The sticky bit - -The sticky bit works in a different way: while it has no effect on files, when used on a directory, all the files in said directory will be modifiable only by their owners. A typical case in which it is used, involves the `/tmp` directory. Typically this directory is writable by all users on the system, so to make impossible for one user to delete the files of another one, the sticky bit is set: -``` - -$ ls -ld /tmp -drwxrwxrwt. 14 root root 300 Nov 1 16:48 /tmp - -``` - -In this case the owner, the group, and all other users, have full permissions on the directory (read, write and execute). The sticky bit is identifiable by a `t` which is reported where normally the executable `x` bit is shown, in the "other" section. Again, a lowercase `t` implies that the executable bit is also present, otherwise you would see a capital `T`. - -### How to set special bits - -Just like normal permissions, the special bits can be assigned with the `chmod` command, using the numeric or the `ugo/rwx` format. In the former case the `setuid`, `setgid`, and `sticky` bits are represented respectively by a value of 4, 2 and 1. 
So for example if we want to set the `setgid` bit on a directory we would execute: -``` -$ chmod 2775 test -``` - -With this command we set the `setgid` bit on the directory, (identified by the first of the four numbers), and gave full privileges on it to it's owner and to the user that are members of the group the directory belongs to, plus read and execute permission for all the other users (remember the execute bit on a directory means that a user is able to `cd` into it or use `ls` to list its content). - -The other way we can set the special permissions bits is to use the ugo/rwx syntax: -``` -$ chmod g+s test -``` - -To apply the `setuid` bit to a file, we would have run: -``` -$ chmod u+s file -``` - -While to apply the sticky bit: -``` -$ chmod o+t test -``` - -The use of special permissions can be very useful in some situations, but if not used correctly the can introduce serious vulnerabilities, so think twice before using them. - --------------------------------------------------------------------------------- - -via: https://linuxconfig.org/how-to-use-special-permissions-the-setuid-setgid-and-sticky-bits - -作者:[Egidio Docile][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://linuxconfig.org From 7c02e4685f08b1c606e15fb59a7953952d421eff Mon Sep 17 00:00:00 2001 From: jessie-pang <35220454+jessie-pang@users.noreply.github.com> Date: Sun, 21 Jan 2018 10:44:30 +0800 Subject: [PATCH 128/226] 20171120 How to use special permissions- the setuid, setgid and sticky bits.md --- ...ons- the setuid, setgid and sticky bits.md | 108 ++++++++++++++++++ 1 file changed, 108 insertions(+) create mode 100644 translated/tech/20171120 How to use special permissions- the setuid, setgid and sticky bits.md diff --git a/translated/tech/20171120 How to use special permissions- the setuid, setgid and sticky bits.md b/translated/tech/20171120 How 
to use special permissions- the setuid, setgid and sticky bits.md new file mode 100644 index 0000000000..80805b0d30 --- /dev/null +++ b/translated/tech/20171120 How to use special permissions- the setuid, setgid and sticky bits.md @@ -0,0 +1,108 @@ +如何使用特殊权限:setuid、setgid 和 sticky 位 +====== + +### 目标 + +了解特殊权限的工作原理,以及如何识别和设置它们。 + +### 要求 + + * 了解标准的 Unix / Linux 权限系统 + +### 难度 + +简单 + +### 约定 + + * **#** \- 要求直接以 root 用户或使用 `sudo` 命令执行指定的命令 + * **$** \- 用普通的非特权用户来执行指定的命令 + +### 介绍 + +通常,在类 Unix 操作系统上,文件和目录的所有权是基于文件创建者的默认 `uid`(user-id)和 `gid`(group-id)的。启动一个进程时也是同样的情况:它以启动它的用户的 uid 和 gid 运行,并具有相应的权限。这种行为可以通过使用特殊的权限进行改变。 + +### setuid 位 + +当使用 setuid 位时,之前描述的行为会有所变化,所以当一个可执行文件启动时,它不会以启动它的用户的权限运行,而是以该文件所有者的权限运行。所以,如果在一个可执行文件上设置了 setuid 位,并且该文件由 root 拥有,当一个普通用户启动它时,它将以 root 权限运行。显然,如果 setuid 位使用不当的话,会带来潜在的安全风险。 + +使用 setuid 权限的可执行文件的例子是 `passwd`,我们可以使用该程序更改登录密码。我们可以通过使用 `ls` 命令来验证: + +``` + +ls -l /bin/passwd +-rwsr-xr-x. 1 root root 27768 Feb 11 2017 /bin/passwd + +``` + +如何识别 `setuid` 位呢?相信您在上面命令的输出已经注意到,`setuid` 位是用 `s` 来表示的,代替了可执行位的 `x`。小写的 `s` 意味着可执行位已经被设置,否则你会看到一个大写的 `S`。大写的 `S` 发生于当设置了 `setuid` 或 `setgid` 位、但没有设置可执行位 `x` 时。它用于提醒用户这个矛盾的设置:如果可执行位未设置,则 `setuid` 和 `setgid` 位均不起作用。setuid 位对目录没有影响。 + +### setgid 位 + +与 `setuid` 位不同,`setgid` 位对文件和目录都有影响。在第一个例子中,具有 `setgid` 位设置的文件在执行时,不是以启动它的用户所属组的权限运行,而是以拥有该文件的组运行。换句话说,进程的 gid 与文件的 gid 相同。 + +当在一个目录上使用时,`setgid` 位与一般的行为不同,它使得在所述目录内创建的文件,不属于创建者所属的组,而是属于父目录所属的组。这个功能通常用于文件共享(目录所属组中的所有用户都可以修改文件)。就像 setuid 一样,setgid 位很容易识别(我们用 test 目录举例): + +``` + +ls -ld test +drwxrwsr-x. 2 egdoc egdoc 4096 Nov 1 17:25 test + +``` + +这次 `s` 出现在组权限的可执行位上。 + +### sticky 位 + +Sticky 位的工作方式有所不同:它对文件没有影响,但当它在目录上使用时,所述目录中的所有文件只能由其所有者删除或移动。一个典型的例子是 `/tmp` 目录,通常系统中的所有用户都对这个目录有写权限。所以,设置 sticky 位使用户不能删除其他用户的文件: + +``` + +$ ls -ld /tmp +drwxrwxrwt. 
14 root root 300 Nov 1 16:48 /tmp + +``` + +在上面的例子中,目录所有者、组和其他用户对该目录具有完全的权限(读、写和执行)。Sticky 位在可执行位上用 `t` 来标识。同样,小写的 `t` 表示可执行权限 `x`也被设置了,否则你会看到一个大写字母 `T`。 + +### 如何设置特殊权限位 + +就像普通的权限一样,特殊权限位可以用 `chmod` 命令设置,使用数字或者 `ugo/rwx` 格式。在前一种情况下,`setuid`、`setgid` 和 `sticky` 位分别由数值 4、2 和 1 表示。例如,如果我们要在目录上设置 `setgid` 位,我们可以运行: + +``` +$ chmod 2775 test +``` + +通过这个命令,我们在目录上设置了 `setgid` 位(由四个数字中的第一个数字标识),并给它的所有者和该目录所属组的所有用户赋予全部权限,对其他用户赋予读和执行的权限(目录上的执行位意味着用户可以 `cd` 进入该目录或使用 `ls` 列出其内容)。 + +另一种设置特殊权限位的方法是使用 `ugo/rwx` 语法: + +``` +$ chmod g+s test +``` + +要将 `setuid` 位应用于一个文件,我们可以运行: + +``` +$ chmod u+s file +``` + +要设置 Sticky 位,可运行: + +``` +$ chmod o+t test +``` + +在某些情况下,使用特殊权限会非常有用。但如果使用不当,可能会引入严重的漏洞,因此使用之前请三思。 + +-------------------------------------------------------------------------------- + +via: https://linuxconfig.org/how-to-use-special-permissions-the-setuid-setgid-and-sticky-bits + +作者:[Egidio Docile][a] +译者:[jessie-pang](https://github.com/jessie-pang) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://linuxconfig.org \ No newline at end of file From 86643403789e84de2e6af63513db73afc187f39d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E5=BC=A0=E5=AE=88=E6=B0=B8?= Date: Sun, 21 Jan 2018 13:53:48 +0800 Subject: [PATCH 129/226] Create 20180121 Two great uses for the cp command: Bash shortcuts.md --- ...uses for the cp command: Bash shortcuts.md | 163 ++++++++++++++++++ 1 file changed, 163 insertions(+) create mode 100644 sources/tech/20180121 Two great uses for the cp command: Bash shortcuts.md diff --git a/sources/tech/20180121 Two great uses for the cp command: Bash shortcuts.md b/sources/tech/20180121 Two great uses for the cp command: Bash shortcuts.md new file mode 100644 index 0000000000..baf9549636 --- /dev/null +++ b/sources/tech/20180121 Two great uses for the cp command: Bash shortcuts.md @@ -0,0 +1,163 @@ +Two great uses for the cp command: Bash shortcuts 
+============================================================ + +### Here's how to streamline the backup and synchronize functions of the cp command. + + [![](https://opensource.com/sites/default/files/styles/byline_thumbnail/public/clh_portrait2.jpg?itok=w2fRuoKj)][1]  19 Jan 2018 [Chris Hermansen][2] [Feed][3]  + +8[up][4] + + [4 comments][5] +![Two great uses for the cp command: Bash shortcuts ](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC) + +Image by :  + +[Internet Archive Book Images][6]. Modified by Opensource.com. CC BY-SA 4.0 + +Last July, I wrote about [two great uses for the cp command][7]: making a backup of a file, and synchronizing a secondary copy of a folder. + +Having discovered these great utilities, I find that they are more verbose than necessary, so I created shortcuts to them in my Bash shell startup script. I thought I’d share these shortcuts in case they are useful to others or could offer inspiration to Bash users who haven’t quite taken on aliases or shell functions. + +### Updating a second copy of a folder – Bash alias + +The general pattern for updating a second copy of a folder with cp is: + +``` +cp -r -u -v SOURCE-FOLDER DESTINATION-DIRECTORY +``` + +I can easily remember the -r option because I use it often when copying folders around. I can probably, with some more effort, remember -v, and with even more effort, -u (is it “update” or “synchronize” or…). + +Or I can just use the [alias capability in Bash][8] to convert the cp command and options to something more memorable, like this: + +``` +alias sync='cp -r -u -v' +``` + +``` +sync Pictures /media/me/4388-E5FE +``` + +Not sure if you already have a sync alias defined? You can list all your currently defined aliases by typing the word alias at the command prompt in your terminal window. + +Like this so much you just want to start using it right away? 
Open a terminal window and type: + +``` +echo "alias sync='cp -r -u -v'" >> ~/.bash_aliases +``` + +``` +me@mymachine~$ alias + +alias alert='notify-send --urgency=low -i "$([ $? = 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"' + +alias egrep='egrep --color=auto' + +alias fgrep='fgrep --color=auto' + +alias grep='grep --color=auto' + +alias gvm='sdk' + +alias l='ls -CF' + +alias la='ls -A' + +alias ll='ls -alF' + +alias ls='ls --color=auto' + +alias sync='cp -r -u -v' + +me@mymachine:~$ +``` + +### Making versioned backups – Bash function + +The general pattern for making a backup of a file with cp is: + +``` +cp --force --backup=numbered WORKING-FILE BACKED-UP-FILE +``` + +Besides remembering the options to the cp command, we also need to remember to repeat the WORKING-FILE name a second time. But why repeat ourselves when [a Bash function][9] can take care of that overhead for us, like this: + +Again, you can save this to your .bash_aliases file in your home directory. + +``` +function backup { + +    if [ $# -ne 1 ]; then + +        echo "Usage: $0 filename" + +    elif [ -f $1 ] ; then + +        echo "cp --force --backup=numbered $1 $1" + +        cp --force --backup=numbered $1 $1 + +    else + +        echo "$0: $1 is not a file" + +    fi + +} +``` + +The first if statement checks to make sure that only one argument is provided to the function, otherwise printing the correct usage with the echo command. + +The elif statement checks to make sure the argument provided is a file, and if so, it (verbosely) uses the second echo to print the cp command to be used and then executes it. + +If the single argument is not a file, the third echo prints an error message to that effect. + +In my home directory, if I execute the backup command so defined on the file checkCounts.sql, I see that backup creates a file called checkCounts.sql.~1~. If I execute it once more, I see a new file checkCounts.sql.~2~. 
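The numbered-backup behavior that the function relies on is easy to sanity-check in a throwaway directory. The sketch below assumes GNU cp (numbered backups are a GNU extension; BSD/macOS cp lacks `--backup`), and the file name `notes.txt` is an illustrative example, not from the article:

```shell
# Sandbox check of GNU cp's numbered backups; everything happens in a temp dir.
cd "$(mktemp -d)"

echo "v1" > notes.txt
# --force together with --backup is the documented special case that lets
# cp "copy" a file onto itself: the current contents are saved as a backup.
cp --force --backup=numbered notes.txt notes.txt   # creates notes.txt.~1~

echo "v2" > notes.txt
cp --force --backup=numbered notes.txt notes.txt   # creates notes.txt.~2~

ls notes.txt*
```

Each run leaves the working file untouched and adds one more `~N~` snapshot, which is exactly what the `backup` function wraps.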
+ +Success! As planned, I can go on editing checkCounts.sql, but if I take a snapshot of it every so often with backup, I can return to the most recent snapshot should I run into trouble. + +At some point, it’s better to start using git for version control, but backup as defined above is a nice cheap tool when you need to create snapshots but you’re not ready for git. + +### Conclusion + +In my last article, I promised you that repetitive tasks can often be easily streamlined through the use of shell scripts, shell functions, and shell aliases. + +Here I’ve shown concrete examples of the use of shell aliases and shell functions to streamline the synchronize and backup functionality of the cp command. If you’d like to learn more about this, check out the two articles cited above: [How to save keystrokes at the command line with alias][10] and [Shell scripting: An introduction to the shift method and custom functions][11], written by my colleagues Greg and Seth, respectively. + +### Topics + + [Linux][12] + +### About the author + + [![](https://opensource.com/sites/default/files/styles/profile_pictures/public/clh_portrait2.jpg?itok=V1V-YAtY)][13] Chris Hermansen  + +- + + Engaged in computing since graduating from the University of British Columbia in 1978, I have been a full-time Linux user since 2005 and a full-time Solaris, SunOS and UNIX System V user before that. On the technical side of things, I have spent a great deal of my career doing data analysis; especially spatial data analysis. I have a substantial amount of programming experience in relation to data analysis, using awk, Python, PostgreSQL, PostGIS and lately Groovy. I have also built a few... 
[more about Chris Hermansen][14] + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/1/two-great-uses-cp-command-update + +作者:[ ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: +[1]:https://opensource.com/users/clhermansen +[2]:https://opensource.com/users/clhermansen +[3]:https://opensource.com/user/37806/feed +[4]:https://opensource.com/article/18/1/two-great-uses-cp-command-update?rate=J_7R7wSPbukG9y8jrqZt3EqANfYtVAwZzzpopYiH3C8 +[5]:https://opensource.com/article/18/1/two-great-uses-cp-command-update#comments +[6]:https://www.flickr.com/photos/internetarchivebookimages/14803082483/in/photolist-oy6EG4-pZR3NZ-i6r3NW-e1tJSX-boBtf7-oeYc7U-o6jFKK-9jNtc3-idt2G9-i7NG1m-ouKjXe-owqviF-92xFBg-ow9e4s-gVVXJN-i1K8Pw-4jybMo-i1rsBr-ouo58Y-ouPRzz-8cGJHK-85Evdk-cru4Ly-rcDWiP-gnaC5B-pAFsuf-hRFPcZ-odvBMz-hRCE7b-mZN3Kt-odHU5a-73dpPp-hUaaAi-owvUMK-otbp7Q-ouySkB-hYAgmJ-owo4UZ-giHgqu-giHpNc-idd9uQ-osAhcf-7vxk63-7vwN65-fQejmk-pTcLgA-otZcmj-fj1aSX-hRzHQk-oyeZfR +[7]:https://opensource.com/article/17/7/two-great-uses-cp-command +[8]:https://opensource.com/article/17/5/introduction-alias-command-line-tool +[9]:https://opensource.com/article/17/1/shell-scripting-shift-method-custom-functions +[10]:https://opensource.com/article/17/5/introduction-alias-command-line-tool +[11]:https://opensource.com/article/17/1/shell-scripting-shift-method-custom-functions +[12]:https://opensource.com/tags/linux +[13]:https://opensource.com/users/clhermansen +[14]:https://opensource.com/users/clhermansen From 3bf27dbb76bb3bd93678c7ffdea6cbe1ab12a725 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E5=BC=A0=E5=AE=88=E6=B0=B8?= Date: Sun, 21 Jan 2018 14:00:26 +0800 Subject: [PATCH 130/226] Delete 20180121 Two great uses for the cp command: Bash shortcuts.md --- ...uses for the cp command: Bash shortcuts.md | 163 
------------------ 1 file changed, 163 deletions(-) delete mode 100644 sources/tech/20180121 Two great uses for the cp command: Bash shortcuts.md diff --git a/sources/tech/20180121 Two great uses for the cp command: Bash shortcuts.md b/sources/tech/20180121 Two great uses for the cp command: Bash shortcuts.md deleted file mode 100644 index baf9549636..0000000000 --- a/sources/tech/20180121 Two great uses for the cp command: Bash shortcuts.md +++ /dev/null @@ -1,163 +0,0 @@ -Two great uses for the cp command: Bash shortcuts -============================================================ - -### Here's how to streamline the backup and synchronize functions of the cp command. - - [![](https://opensource.com/sites/default/files/styles/byline_thumbnail/public/clh_portrait2.jpg?itok=w2fRuoKj)][1]  19 Jan 2018 [Chris Hermansen][2] [Feed][3]  - -8[up][4] - - [4 comments][5] -![Two great uses for the cp command: Bash shortcuts ](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC) - -Image by :  - -[Internet Archive Book Images][6]. Modified by Opensource.com. CC BY-SA 4.0 - -Last July, I wrote about [two great uses for the cp command][7]: making a backup of a file, and synchronizing a secondary copy of a folder. - -Having discovered these great utilities, I find that they are more verbose than necessary, so I created shortcuts to them in my Bash shell startup script. I thought I’d share these shortcuts in case they are useful to others or could offer inspiration to Bash users who haven’t quite taken on aliases or shell functions. - -### Updating a second copy of a folder – Bash alias - -The general pattern for updating a second copy of a folder with cp is: - -``` -cp -r -u -v SOURCE-FOLDER DESTINATION-DIRECTORY -``` - -I can easily remember the -r option because I use it often when copying folders around. 
I can probably, with some more effort, remember -v, and with even more effort, -u (is it “update” or “synchronize” or…). - -Or I can just use the [alias capability in Bash][8] to convert the cp command and options to something more memorable, like this: - -``` -alias sync='cp -r -u -v' -``` - -``` -sync Pictures /media/me/4388-E5FE -``` - -Not sure if you already have a sync alias defined? You can list all your currently defined aliases by typing the word alias at the command prompt in your terminal window. - -Like this so much you just want to start using it right away? Open a terminal window and type: - -``` -echo "alias sync='cp -r -u -v'" >> ~/.bash_aliases -``` - -``` -me@mymachine~$ alias - -alias alert='notify-send --urgency=low -i "$([ $? = 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"' - -alias egrep='egrep --color=auto' - -alias fgrep='fgrep --color=auto' - -alias grep='grep --color=auto' - -alias gvm='sdk' - -alias l='ls -CF' - -alias la='ls -A' - -alias ll='ls -alF' - -alias ls='ls --color=auto' - -alias sync='cp -r -u -v' - -me@mymachine:~$ -``` - -### Making versioned backups – Bash function - -The general pattern for making a backup of a file with cp is: - -``` -cp --force --backup=numbered WORKING-FILE BACKED-UP-FILE -``` - -Besides remembering the options to the cp command, we also need to remember to repeat the WORKING-FILE name a second time. But why repeat ourselves when [a Bash function][9] can take care of that overhead for us, like this: - -Again, you can save this to your .bash_aliases file in your home directory. 
- -``` -function backup { - -    if [ $# -ne 1 ]; then - -        echo "Usage: $0 filename" - -    elif [ -f $1 ] ; then - -        echo "cp --force --backup=numbered $1 $1" - -        cp --force --backup=numbered $1 $1 - -    else - -        echo "$0: $1 is not a file" - -    fi - -} -``` - -The first if statement checks to make sure that only one argument is provided to the function, otherwise printing the correct usage with the echo command. - -The elif statement checks to make sure the argument provided is a file, and if so, it (verbosely) uses the second echo to print the cp command to be used and then executes it. - -If the single argument is not a file, the third echo prints an error message to that effect. - -In my home directory, if I execute the backup command so defined on the file checkCounts.sql, I see that backup creates a file called checkCounts.sql.~1~. If I execute it once more, I see a new file checkCounts.sql.~2~. - -Success! As planned, I can go on editing checkCounts.sql, but if I take a snapshot of it every so often with backup, I can return to the most recent snapshot should I run into trouble. - -At some point, it’s better to start using git for version control, but backup as defined above is a nice cheap tool when you need to create snapshots but you’re not ready for git. - -### Conclusion - -In my last article, I promised you that repetitive tasks can often be easily streamlined through the use of shell scripts, shell functions, and shell aliases. - -Here I’ve shown concrete examples of the use of shell aliases and shell functions to streamline the synchronize and backup functionality of the cp command. If you’d like to learn more about this, check out the two articles cited above: [How to save keystrokes at the command line with alias][10] and [Shell scripting: An introduction to the shift method and custom functions][11], written by my colleagues Greg and Seth, respectively. 
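As a recap, the synchronize pattern is also easy to sanity-check in a scratch directory before trusting it with real data. The paths below are throwaway examples (not from the article), and the behavior assumed is GNU cp's `-u`, which copies only when the source has a newer modification time or the destination file is missing:

```shell
# Demonstrate what cp -r -u -v does on repeated runs, in a temp directory.
demo=$(mktemp -d)
mkdir -p "$demo/src" "$demo/dst"
echo "one" > "$demo/src/a.txt"

cp -r -u -v "$demo/src/." "$demo/dst/"   # first run: a.txt is copied
cp -r -u -v "$demo/src/." "$demo/dst/"   # second run: nothing to copy, dst is current

echo "two" > "$demo/src/a.txt"           # source modified, so its mtime is newer
cp -r -u -v "$demo/src/." "$demo/dst/"   # -u notices and copies a.txt again
```

The `-v` output on each run shows exactly which files were considered out of date, which is what makes the `sync` alias pleasant for periodic backups to a USB stick.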
- -### Topics - - [Linux][12] - -### About the author - - [![](https://opensource.com/sites/default/files/styles/profile_pictures/public/clh_portrait2.jpg?itok=V1V-YAtY)][13] Chris Hermansen  - -- - - Engaged in computing since graduating from the University of British Columbia in 1978, I have been a full-time Linux user since 2005 and a full-time Solaris, SunOS and UNIX System V user before that. On the technical side of things, I have spent a great deal of my career doing data analysis; especially spatial data analysis. I have a substantial amount of programming experience in relation to data analysis, using awk, Python, PostgreSQL, PostGIS and lately Groovy. I have also built a few... [more about Chris Hermansen][14] - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/1/two-great-uses-cp-command-update - -作者:[ ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: -[1]:https://opensource.com/users/clhermansen -[2]:https://opensource.com/users/clhermansen -[3]:https://opensource.com/user/37806/feed -[4]:https://opensource.com/article/18/1/two-great-uses-cp-command-update?rate=J_7R7wSPbukG9y8jrqZt3EqANfYtVAwZzzpopYiH3C8 -[5]:https://opensource.com/article/18/1/two-great-uses-cp-command-update#comments -[6]:https://www.flickr.com/photos/internetarchivebookimages/14803082483/in/photolist-oy6EG4-pZR3NZ-i6r3NW-e1tJSX-boBtf7-oeYc7U-o6jFKK-9jNtc3-idt2G9-i7NG1m-ouKjXe-owqviF-92xFBg-ow9e4s-gVVXJN-i1K8Pw-4jybMo-i1rsBr-ouo58Y-ouPRzz-8cGJHK-85Evdk-cru4Ly-rcDWiP-gnaC5B-pAFsuf-hRFPcZ-odvBMz-hRCE7b-mZN3Kt-odHU5a-73dpPp-hUaaAi-owvUMK-otbp7Q-ouySkB-hYAgmJ-owo4UZ-giHgqu-giHpNc-idd9uQ-osAhcf-7vxk63-7vwN65-fQejmk-pTcLgA-otZcmj-fj1aSX-hRzHQk-oyeZfR -[7]:https://opensource.com/article/17/7/two-great-uses-cp-command 
-[8]:https://opensource.com/article/17/5/introduction-alias-command-line-tool -[9]:https://opensource.com/article/17/1/shell-scripting-shift-method-custom-functions -[10]:https://opensource.com/article/17/5/introduction-alias-command-line-tool -[11]:https://opensource.com/article/17/1/shell-scripting-shift-method-custom-functions -[12]:https://opensource.com/tags/linux -[13]:https://opensource.com/users/clhermansen -[14]:https://opensource.com/users/clhermansen From 5aea97c8f4d6cd9da4361d1547a85d3cbd47e46c Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E5=BC=A0=E5=AE=88=E6=B0=B8?= Date: Sun, 21 Jan 2018 14:09:56 +0800 Subject: [PATCH 131/226] Create Two great uses for the cp command Bash shortcuts.md --- ... uses for the cp command Bash shortcuts.md | 163 ++++++++++++++++++ 1 file changed, 163 insertions(+) create mode 100644 sources/tech/Two great uses for the cp command Bash shortcuts.md diff --git a/sources/tech/Two great uses for the cp command Bash shortcuts.md b/sources/tech/Two great uses for the cp command Bash shortcuts.md new file mode 100644 index 0000000000..baf9549636 --- /dev/null +++ b/sources/tech/Two great uses for the cp command Bash shortcuts.md @@ -0,0 +1,163 @@ +Two great uses for the cp command: Bash shortcuts +============================================================ + +### Here's how to streamline the backup and synchronize functions of the cp command. + + [![](https://opensource.com/sites/default/files/styles/byline_thumbnail/public/clh_portrait2.jpg?itok=w2fRuoKj)][1]  19 Jan 2018 [Chris Hermansen][2] [Feed][3]  + +8[up][4] + + [4 comments][5] +![Two great uses for the cp command: Bash shortcuts ](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC) + +Image by :  + +[Internet Archive Book Images][6]. Modified by Opensource.com. 
CC BY-SA 4.0 + +Last July, I wrote about [two great uses for the cp command][7]: making a backup of a file, and synchronizing a secondary copy of a folder. + +Having discovered these great utilities, I find that they are more verbose than necessary, so I created shortcuts to them in my Bash shell startup script. I thought I’d share these shortcuts in case they are useful to others or could offer inspiration to Bash users who haven’t quite taken on aliases or shell functions. + +### Updating a second copy of a folder – Bash alias + +The general pattern for updating a second copy of a folder with cp is: + +``` +cp -r -u -v SOURCE-FOLDER DESTINATION-DIRECTORY +``` + +I can easily remember the -r option because I use it often when copying folders around. I can probably, with some more effort, remember -v, and with even more effort, -u (is it “update” or “synchronize” or…). + +Or I can just use the [alias capability in Bash][8] to convert the cp command and options to something more memorable, like this: + +``` +alias sync='cp -r -u -v' +``` + +``` +sync Pictures /media/me/4388-E5FE +``` + +Not sure if you already have a sync alias defined? You can list all your currently defined aliases by typing the word alias at the command prompt in your terminal window. + +Like this so much you just want to start using it right away? Open a terminal window and type: + +``` +echo "alias sync='cp -r -u -v'" >> ~/.bash_aliases +``` + +``` +me@mymachine~$ alias + +alias alert='notify-send --urgency=low -i "$([ $? 
= 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"' + +alias egrep='egrep --color=auto' + +alias fgrep='fgrep --color=auto' + +alias grep='grep --color=auto' + +alias gvm='sdk' + +alias l='ls -CF' + +alias la='ls -A' + +alias ll='ls -alF' + +alias ls='ls --color=auto' + +alias sync='cp -r -u -v' + +me@mymachine:~$ +``` + +### Making versioned backups – Bash function + +The general pattern for making a backup of a file with cp is: + +``` +cp --force --backup=numbered WORKING-FILE BACKED-UP-FILE +``` + +Besides remembering the options to the cp command, we also need to remember to repeat the WORKING-FILE name a second time. But why repeat ourselves when [a Bash function][9] can take care of that overhead for us, like this: + +Again, you can save this to your .bash_aliases file in your home directory. + +``` +function backup { + +    if [ $# -ne 1 ]; then + +        echo "Usage: $0 filename" + +    elif [ -f $1 ] ; then + +        echo "cp --force --backup=numbered $1 $1" + +        cp --force --backup=numbered $1 $1 + +    else + +        echo "$0: $1 is not a file" + +    fi + +} +``` + +The first if statement checks to make sure that only one argument is provided to the function, otherwise printing the correct usage with the echo command. + +The elif statement checks to make sure the argument provided is a file, and if so, it (verbosely) uses the second echo to print the cp command to be used and then executes it. + +If the single argument is not a file, the third echo prints an error message to that effect. + +In my home directory, if I execute the backup command so defined on the file checkCounts.sql, I see that backup creates a file called checkCounts.sql.~1~. If I execute it once more, I see a new file checkCounts.sql.~2~. + +Success! 
As planned, I can go on editing checkCounts.sql, but if I take a snapshot of it every so often with backup, I can return to the most recent snapshot should I run into trouble. + +At some point, it’s better to start using git for version control, but backup as defined above is a nice cheap tool when you need to create snapshots but you’re not ready for git. + +### Conclusion + +In my last article, I promised you that repetitive tasks can often be easily streamlined through the use of shell scripts, shell functions, and shell aliases. + +Here I’ve shown concrete examples of the use of shell aliases and shell functions to streamline the synchronize and backup functionality of the cp command. If you’d like to learn more about this, check out the two articles cited above: [How to save keystrokes at the command line with alias][10] and [Shell scripting: An introduction to the shift method and custom functions][11], written by my colleagues Greg and Seth, respectively. + +### Topics + + [Linux][12] + +### About the author + + [![](https://opensource.com/sites/default/files/styles/profile_pictures/public/clh_portrait2.jpg?itok=V1V-YAtY)][13] Chris Hermansen  + +- + + Engaged in computing since graduating from the University of British Columbia in 1978, I have been a full-time Linux user since 2005 and a full-time Solaris, SunOS and UNIX System V user before that. On the technical side of things, I have spent a great deal of my career doing data analysis; especially spatial data analysis. I have a substantial amount of programming experience in relation to data analysis, using awk, Python, PostgreSQL, PostGIS and lately Groovy. I have also built a few... 
[more about Chris Hermansen][14] + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/1/two-great-uses-cp-command-update + +作者:[ ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: +[1]:https://opensource.com/users/clhermansen +[2]:https://opensource.com/users/clhermansen +[3]:https://opensource.com/user/37806/feed +[4]:https://opensource.com/article/18/1/two-great-uses-cp-command-update?rate=J_7R7wSPbukG9y8jrqZt3EqANfYtVAwZzzpopYiH3C8 +[5]:https://opensource.com/article/18/1/two-great-uses-cp-command-update#comments +[6]:https://www.flickr.com/photos/internetarchivebookimages/14803082483/in/photolist-oy6EG4-pZR3NZ-i6r3NW-e1tJSX-boBtf7-oeYc7U-o6jFKK-9jNtc3-idt2G9-i7NG1m-ouKjXe-owqviF-92xFBg-ow9e4s-gVVXJN-i1K8Pw-4jybMo-i1rsBr-ouo58Y-ouPRzz-8cGJHK-85Evdk-cru4Ly-rcDWiP-gnaC5B-pAFsuf-hRFPcZ-odvBMz-hRCE7b-mZN3Kt-odHU5a-73dpPp-hUaaAi-owvUMK-otbp7Q-ouySkB-hYAgmJ-owo4UZ-giHgqu-giHpNc-idd9uQ-osAhcf-7vxk63-7vwN65-fQejmk-pTcLgA-otZcmj-fj1aSX-hRzHQk-oyeZfR +[7]:https://opensource.com/article/17/7/two-great-uses-cp-command +[8]:https://opensource.com/article/17/5/introduction-alias-command-line-tool +[9]:https://opensource.com/article/17/1/shell-scripting-shift-method-custom-functions +[10]:https://opensource.com/article/17/5/introduction-alias-command-line-tool +[11]:https://opensource.com/article/17/1/shell-scripting-shift-method-custom-functions +[12]:https://opensource.com/tags/linux +[13]:https://opensource.com/users/clhermansen +[14]:https://opensource.com/users/clhermansen From 04e0b220d684656255b24eb20086c909ba750f10 Mon Sep 17 00:00:00 2001 From: ChenYi <31087327+cyleft@users.noreply.github.com> Date: Sun, 21 Jan 2018 14:56:00 +0800 Subject: [PATCH 132/226] Delete 20171117 Command line fun- Insult the user when typing wrong bash command.md --- ...the user when typing 
wrong bash command.md | 176 ------------------ 1 file changed, 176 deletions(-) delete mode 100644 sources/tech/20171117 Command line fun- Insult the user when typing wrong bash command.md diff --git a/sources/tech/20171117 Command line fun- Insult the user when typing wrong bash command.md b/sources/tech/20171117 Command line fun- Insult the user when typing wrong bash command.md deleted file mode 100644 index 7e1ab30faa..0000000000 --- a/sources/tech/20171117 Command line fun- Insult the user when typing wrong bash command.md +++ /dev/null @@ -1,176 +0,0 @@ -translate by cyleft - -Command line fun: Insult the user when typing wrong bash command -====== -You can configure the sudo command to insult users when they type the wrong password. Now, it is also possible to insult the user when they enter the wrong command at the shell prompt. - - -## Say hello to bash-insulter - -From the Github page: - -> Randomly insults the user when typing wrong command. It uses a new builtin error-handling function named command_not_found_handle in bash 4.x. - -## Installation - -Type the following git command to clone the repo: -`git clone https://github.com/hkbakke/bash-insulter.git bash-insulter` -Sample outputs: -``` -Cloning into 'bash-insulter'... -remote: Counting objects: 52, done. -remote: Compressing objects: 100% (49/49), done. -remote: Total 52 (delta 12), reused 12 (delta 2), pack-reused 0 -Unpacking objects: 100% (52/52), done. - -``` - -Edit your ~/.bashrc or /etc/bash.bashrc using a text editor such as the vi command: -`$ vi ~/.bashrc` -Append the following lines (see [if..else..fi statement][1] and [source command][2]): -``` -if [ -f $HOME/bash-insulter/src/bash.command-not-found ]; then - source $HOME/bash-insulter/src/bash.command-not-found -fi -``` - -Save and close the file. Log in again, or just source it manually if you do not want to log out: -``` -$ . $HOME/bash-insulter/src/bash.command-not-found -``` - -## How do I use it?
- -Just type some invalid commands: -``` -$ ifconfigs -$ dates -``` -Sample outputs: -[![An interesting bash hook feature to insult you when you type an invalid command. ][3]][3] - -## Customization - -You need to edit $HOME/bash-insulter/src/bash.command-not-found: -`$ vi $HOME/bash-insulter/src/bash.command-not-found` -Sample code: -``` -command_not_found_handle () { - local INSULTS=( - "Boooo!" - "Don't you know anything?" - "RTFM!" - "Hahaha, n00b!" - "Wow! That was impressively wrong!" - "What are you doing??" - "Pathetic" - "...and this is the best you can do??" - "The worst one today!" - "n00b alert!" - "Your application for reduced salary has been sent!" - "lol" - "u suk" - "lol... plz" - "plz uninstall" - "And the Darwin Award goes to.... ${USER}!" - "ERROR_INCOMPETENT_USER" - "Incompetence is also competence" - "Bad." - "Fake it till you make it!" - "What is this...? Amateur hour!?" - "Come on! You can do it!" - "Nice try." - "What if... you type an actual command the next time!" - "What if I told you... it is possible to type valid commands." - "Y u no speak computer???" - "This is not Windows" - "Perhaps you should leave the command line alone..." - "Please step away from the keyboard!" - "error code: 1D10T" - "ACHTUNG! ALLES TURISTEN UND NONTEKNISCHEN LOOKENPEEPERS! DAS KOMPUTERMASCHINE IST NICHT FÜR DER GEFINGERPOKEN UND MITTENGRABEN! ODERWISE IST EASY TO SCHNAPPEN DER SPRINGENWERK, BLOWENFUSEN UND POPPENCORKEN MIT SPITZENSPARKEN. IST NICHT FÜR GEWERKEN BEI DUMMKOPFEN. DER RUBBERNECKEN SIGHTSEEREN KEEPEN DAS COTTONPICKEN HÄNDER IN DAS POCKETS MUSS. ZO RELAXEN UND WATSCHEN DER BLINKENLICHTEN." - "Pro tip: type a valid command!" 
- ) - - # Seed "random" generator - RANDOM=$(date +%s%N) - VALUE=$((${RANDOM}%2)) - - if [[ ${VALUE} -lt 1 ]]; then - printf "\n $(tput bold)$(tput setaf 1)$(shuf -n 1 -e "${INSULTS[@]}")$(tput sgr0)\n\n" - fi - - echo "-bash: $1: command not found" - - # Return the exit code normally returned on invalid command - return 127 -} -``` - -## sudo insults - -Edit the sudoers file: -`$ sudo visudo` -Append the following line: -`Defaults insults` -Or update as follows i.e. add insults at the end of line: -`Defaults !lecture,tty_tickets,!fqdn,insults` -Here is my file: -``` -Defaults env_reset -Defaults mail_badpass -Defaults secure_path = "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin" -## If set, sudo will insult users when they enter an incorrect password. ## -Defaults insults - -# Host alias specification - -# User alias specification - -# Cmnd alias specification - -# User privilege specification -root ALL = (ALL:ALL) ALL - -# Members of the admin group may gain root privileges -% admin ALL = (ALL) ALL   - -# Allow members of group sudo to execute any command -% sudo ALL = (ALL:ALL) ALL   - -# See sudoers(5) for more information on "#include" directives: - -#includedir /etc/sudoers.d -``` - -Try it out: -``` -$ sudo -k # clear old stuff so that we get a fresh prompt -$ sudo ls /root/ -$ sudo -i -``` -Sample session: -[![An interesting sudo feature to insult you when you type an invalid password.][4]][4] - -## Say hello to sl - -[sl is a joke software or classic UNIX][5] game. It is a steam locomotive runs across your screen if you type "sl" (Steam Locomotive) instead of "ls" by mistake. 
-`$ sl`
-[![Linux / UNIX Desktop Fun: Steam Locomotive][6]][5]
-
---------------------------------------------------------------------------------
-
-via: https://www.cyberciti.biz/howto/insult-linux-unix-bash-user-when-typing-wrong-command/
-
-作者:[Vivek Gite][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.cyberciti.biz
-[1]:https://bash.cyberciti.biz/guide/If..else..fi
-[2]:https://bash.cyberciti.biz/guide/Source_command
-[3]:https://www.cyberciti.biz/media/new/cms/2017/11/bash-insulter-Insults-the-user-when-typing-wrong-command.jpg
-[4]:https://www.cyberciti.biz/media/new/cms/2017/11/sudo-insults.jpg
-[5]:https://www.cyberciti.biz/tips/displays-animations-when-accidentally-you-type-sl-instead-of-ls.html
-[6]:https://www.cyberciti.biz/media/new/tips/2011/05/sl_command_steam_locomotive.png
From 9a3f156775c819a81b2d1d2745b5e937e8b8747b Mon Sep 17 00:00:00 2001
From: ChenYi <31087327+cyleft@users.noreply.github.com>
Date: Sun, 21 Jan 2018 14:57:04 +0800
Subject: [PATCH 133/226] translated by cyleft 20171117 Command line fun- Insult the user when typing wrong bash command.md
---
 ...the user when typing wrong bash command.md | 174 ++++++++++++++++++
 1 file changed, 174 insertions(+)
 create mode 100644 translated/tech/20171117 Command line fun- Insult the user when typing wrong bash command.md

diff --git a/translated/tech/20171117 Command line fun- Insult the user when typing wrong bash command.md b/translated/tech/20171117 Command line fun- Insult the user when typing wrong bash command.md
new file mode 100644
index 0000000000..3f1cacfaab
--- /dev/null
+++ b/translated/tech/20171117 Command line fun- Insult the user when typing wrong bash command.md
@@ -0,0 +1,174 @@
+命令行乐趣:恶搞输错 Bash 命令的用户
+======
+你可以通过配置 sudo 命令,在用户输错密码时嘲讽他们;同样地,也可以让 shell 在用户输错命令时输出嘲讽语。
+
+
+## 你好 bash-insulter
+
+来自 GitHub 页面:
+
+> 当用户键入错误命令,随机嘲讽。它使用了一个
bash4.x. 版本的全新内置错误处理函数,叫 command_not_found_handle。 + +## 安装 + +键入下列 git 命令克隆一个仓库: +`git clone https://github.com/hkbakke/bash-insulter.git bash-insulter` +示例输出: +``` +Cloning into 'bash-insulter'... +remote: Counting objects: 52, done. +remote: Compressing objects: 100% (49/49), done. +remote: Total 52 (delta 12), reused 12 (delta 2), pack-reused 0 +Unpacking objects: 100% (52/52), done. + +``` + +用文本编辑器,编辑你的 ~/.bashrc 或者 /etc/bash.bashrc 文件,比如说使用 vi: +`$ vi ~/.bashrc` +在其后追加这一行(具体了解请查看 [if..else..fi 声明][1] 和 [命令源码][2]): +``` +if [ -f $HOME/bash-insulter/src/bash.command-not-found ]; then + source $HOME/bash-insulter/src/bash.command-not-found +fi +``` + +保存并关闭文件。重新登陆,如果不想退出账号也可以手动运行它: +``` +$ . $HOME/bash-insulter/src/bash.command-not-found +``` + +## 如何使用它? + +尝试键入一些无效命令: +``` +$ ifconfigs +$ dates +``` +示例输出: +[![一个有趣的 bash 钩子功能,嘲讽输入了错误命令的你。][3]][3] + +## 自定义 + +你需要编辑 $HOME/bash-insulter/src/bash.command-not-found: +`$ vi $HOME/bash-insulter/src/bash.command-not-found` +示例代码: +``` +command_not_found_handle () { + local INSULTS=( + "Boooo!" + "Don't you know anything?" + "RTFM!" + "Hahaha, n00b!" + "Wow! That was impressively wrong!" + "What are you doing??" + "Pathetic" + "...and this is the best you can do??" + "The worst one today!" + "n00b alert!" + "Your application for reduced salary has been sent!" + "lol" + "u suk" + "lol... plz" + "plz uninstall" + "And the Darwin Award goes to.... ${USER}!" + "ERROR_INCOMPETENT_USER" + "Incompetence is also competence" + "Bad." + "Fake it till you make it!" + "What is this...? Amateur hour!?" + "Come on! You can do it!" + "Nice try." + "What if... you type an actual command the next time!" + "What if I told you... it is possible to type valid commands." + "Y u no speak computer???" + "This is not Windows" + "Perhaps you should leave the command line alone..." + "Please step away from the keyboard!" + "error code: 1D10T" + "ACHTUNG! ALLES TURISTEN UND NONTEKNISCHEN LOOKENPEEPERS! 
DAS KOMPUTERMASCHINE IST NICHT FÜR DER GEFINGERPOKEN UND MITTENGRABEN! ODERWISE IST EASY TO SCHNAPPEN DER SPRINGENWERK, BLOWENFUSEN UND POPPENCORKEN MIT SPITZENSPARKEN. IST NICHT FÜR GEWERKEN BEI DUMMKOPFEN. DER RUBBERNECKEN SIGHTSEEREN KEEPEN DAS COTTONPICKEN HÄNDER IN DAS POCKETS MUSS. ZO RELAXEN UND WATSCHEN DER BLINKENLICHTEN." + "Pro tip: type a valid command!" + ) + + # 设置“随机”种子发生器 + RANDOM=$(date +%s%N) + VALUE=$((${RANDOM}%2)) + + if [[ ${VALUE} -lt 1 ]]; then + printf "\n $(tput bold)$(tput setaf 1)$(shuf -n 1 -e "${INSULTS[@]}")$(tput sgr0)\n\n" + fi + + echo "-bash: $1: command not found" + + # 无效命令,常规返回已存在的代码 + return 127 +} +``` + +## sudo 嘲讽 + +编辑 sudoers 文件: +`$ sudo visudo` +追加下面这一行: +`Defaults insults` +或者像下面尾行增加一句嘲讽语: +`Defaults !lecture,tty_tickets,!fqdn,insults` +这是我的文件: +``` +Defaults env_reset +Defaults mail_badpass +Defaults secure_path = "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin" +## If set, sudo will insult users when they enter an incorrect password. 
##
+Defaults insults
+
+# Host alias specification
+
+# User alias specification
+
+# Cmnd alias specification
+
+# User privilege specification
+root ALL = (ALL:ALL) ALL
+
+# Members of the admin group may gain root privileges
+% admin ALL = (ALL) ALL
+
+# Allow members of group sudo to execute any command
+% sudo ALL = (ALL:ALL) ALL
+
+# See sudoers(5) for more information on "#include" directives:
+
+#includedir /etc/sudoers.d
+```
+
+试一试:
+```
+$ sudo -k # clear old stuff so that we get a fresh prompt
+$ sudo ls /root/
+$ sudo -i
+```
+样例对话:
+[![当输入错误密码时,你会被一个有趣的 sudo 嘲讽语戏弄。][4]][4]
+
+## 你好 sl
+
+[sl 是一款恶搞软件,也是经典的 UNIX][5] 游戏。当你把 “ls” 误输入成 “sl” 时,将会有一辆蒸汽机车穿过你的屏幕。
+`$ sl`
+[![Linux / UNIX 桌面乐趣: 蒸汽机车][6]][5]
+
+--------------------------------------------------------------------------------
+
+via: https://www.cyberciti.biz/howto/insult-linux-unix-bash-user-when-typing-wrong-command/
+
+作者:[Vivek Gite][a]
+译者:[CYLeft](https://github.com/CYLeft)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.cyberciti.biz
+[1]:https://bash.cyberciti.biz/guide/If..else..fi
+[2]:https://bash.cyberciti.biz/guide/Source_command
+[3]:https://www.cyberciti.biz/media/new/cms/2017/11/bash-insulter-Insults-the-user-when-typing-wrong-command.jpg
+[4]:https://www.cyberciti.biz/media/new/cms/2017/11/sudo-insults.jpg
+[5]:https://www.cyberciti.biz/tips/displays-animations-when-accidentally-you-type-sl-instead-of-ls.html
+[6]:https://www.cyberciti.biz/media/new/tips/2011/05/sl_command_steam_locomotive.png
From a398b1614d419c355b6732c4d60c6d1ff70f3c19 Mon Sep 17 00:00:00 2001
From: fan Li <15201710458@163.com>
Date: Sun, 21 Jan 2018 15:38:33 +0800
Subject: [PATCH 134/226] Delete 20111124 How to find hidden processes and ports on Linux-Unix-Windows.md
---
 ...ocesses and ports on Linux-Unix-Windows.md | 208 ------------------
 1 file changed, 208 deletions(-)
 delete mode 100644
sources/tech/20111124 How to find hidden processes and ports on Linux-Unix-Windows.md diff --git a/sources/tech/20111124 How to find hidden processes and ports on Linux-Unix-Windows.md b/sources/tech/20111124 How to find hidden processes and ports on Linux-Unix-Windows.md deleted file mode 100644 index 15b667f3d2..0000000000 --- a/sources/tech/20111124 How to find hidden processes and ports on Linux-Unix-Windows.md +++ /dev/null @@ -1,208 +0,0 @@ -Translating by ljgibbslf - -How to find hidden processes and ports on Linux/Unix/Windows -====== -Unhide is a little handy forensic tool to find hidden processes and TCP/UDP ports by rootkits / LKMs or by another hidden technique. This tool works under Linux, Unix-like system, and MS-Windows operating systems. From the man page: - -> It detects hidden processes using three techniques: -> -> 1. The proc technique consists of comparing /proc with the output of [/bin/ps][1]. -> 2. The sys technique consists of comparing information gathered from [/bin/ps][1] with information gathered from system calls. -> 3. The brute technique consists of bruteforcing the all process IDs. This technique is only available on Linux 2.6 kernels. -> - - - -Most rootkits/malware use the power of the kernel to hide, they are only visible from within the kernel. You can use unhide or tool such as [rkhunter to scan for rootkits, backdoors, and possible][2] local exploits. -[![How to find hidden process and ports on Linux, Unix, FreeBSD and Windows][3]][3] -This page describes how to install unhide and search for hidden process and TCP/UDP ports. - -### How do I Install Unhide? - -It is recommended that you run this tool from read-only media. To install the same under a Debian or Ubuntu Linux, type the following [apt-get command][4]/[apt command][5]: -`$ sudo apt-get install unhide` -Sample outputs: -``` -[sudo] password for vivek: -Reading package lists... Done -Building dependency tree -Reading state information... 
Done -Suggested packages: - rkhunter -The following NEW packages will be installed: - unhide -0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded. -Need to get 46.6 kB of archives. -After this operation, 136 kB of additional disk space will be used. -Get:1 http://in.archive.ubuntu.com/ubuntu artful/universe amd64 unhide amd64 20130526-1 [46.6 kB] -Fetched 46.6 kB in 0s (49.0 kB/s) -Selecting previously unselected package unhide. -(Reading database ... 205367 files and directories currently installed.) -Preparing to unpack .../unhide_20130526-1_amd64.deb ... -Unpacking unhide (20130526-1) ... -Setting up unhide (20130526-1) ... -Processing triggers for man-db (2.7.6.1-2) ... -``` - -### How to install unhide on a RHEL/CentOS/Oracle/Scientific/Fedora Linux - -Type the following [yum command][6] (first turn on [EPLE repo on a CentOS/RHEL version 6.x][7] or [version 7.x][8]): -`$ sudo yum install unhide` -If you are using a Fedora Linux, type the following dnf command: -`$ sudo dnf install unhide` - -### How to install unhide on an Arch Linux - -Type the following pacman command: -`$ sudo pacman -S unhide` - -### FreeBSD : Install unhide - -Type the following command to install unhide using the port, enter: -``` -# cd /usr/ports/security/unhide/ -# make install clean -``` -OR, you can install the same using the binary package with help of pkg command: -`# pkg install unhide` -**unhide-tcp** is a forensic tool that identifies TCP/UDP ports that are listening but are not listed in [/bin/netstat][9] or [/bin/ss command][10] through brute forcing of all TCP/UDP ports available. - -### How do I use unhide tool? - -The syntax is: -` unhide [options] test_list` -Test_list is one or more of the following standard tests: - - 1. brute - 2. proc - 3. procall - 4. procfs - 5. quick - 6. reverse - 7. sys - - - -Elementary tests: - - 1. checkbrute - 2. checkchdir - 3. checkgetaffinity - 4. checkgetparam - 5. checkgetpgid - 6. checkgetprio - 7. checkRRgetinterval - 8. 
checkgetsched - 9. checkgetsid - 10. checkkill - 11. checknoprocps - 12. checkopendir - 13. checkproc - 14. checkquick - 15. checkreaddir - 16. checkreverse - 17. checksysinfo - 18. checksysinfo2 - 19. checksysinfo3 - - - -You can use it as follows: -``` -# unhide proc -# unhide sys -# unhide quick -``` -Sample outputs: -``` -Unhide 20130526 -Copyright © 2013 Yago Jesus & Patrick Gouin -License GPLv3+ : GNU GPL version 3 or later -http://www.unhide-forensics.info - -NOTE : This version of unhide is for systems using Linux >= 2.6 - -Used options: -[*]Searching for Hidden processes through comparison of results of system calls, proc, dir and ps -``` - -### How to use unhide-tcp forensic tool that identifies TCP/UDP ports - -From the man page: - -> unhide-tcp is a forensic tool that identifies TCP/UDP ports that are listening but are not listed by /sbin/ss (or alternatively by /bin/netstat) through brute forcing of all TCP/UDP ports available. -> Note1 : On FreeBSD ans OpenBSD, netstat is allways used as iproute2 doesn't exist on these OS. In addition, on FreeBSD, sockstat is used instead of fuser. -> Note2 : If iproute2 is not available on the system, option -n or -s SHOULD be given on the command line. 
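
Stepping back to the three detection techniques listed at the top of this article: the "proc" technique (comparing /proc against the output of ps) is simple enough to approximate in a few lines of shell. This is an illustrative sketch only, not unhide's actual implementation; a real checker re-verifies each candidate to filter out processes that simply exited between the two snapshots:

```
#!/usr/bin/env bash
# PIDs visible in /proc but absent from ps output are candidates
# for hidden processes (or for processes that died in between).
proc_snapshot=$(ls /proc | grep -E '^[0-9]+$' | sort -n)
ps_snapshot=$(ps -e -o pid= 2>/dev/null | tr -d ' ' | sort -n)

tmp=$(mktemp -d)
printf '%s\n' "$proc_snapshot" > "$tmp/proc"
printf '%s\n' "$ps_snapshot" > "$tmp/ps"

# comm -23 prints lines unique to the first file (the /proc view).
candidates=$(comm -23 "$tmp/proc" "$tmp/ps")
rm -rf "$tmp"

printf 'candidates: %s\n' "${candidates:-none}"
```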
- -``` -# unhide-tcp -``` -Sample outputs: -``` -Unhide 20100201 -http://www.security-projects.com/?Unhide - -Starting TCP checking - -Starting UDP checking -``` - -(Fig.02: No hidden ports found using the unhide-tcp command) -However, I found something interesting: -`# unhide-tcp ` -Sample outputs: -``` -Unhide 20100201 -http://www.security-projects.com/?Unhide - - -Starting TCP checking - -Found Hidden port that not appears in netstat: 1048 -Found Hidden port that not appears in netstat: 1049 -Found Hidden port that not appears in netstat: 1050 -Starting UDP checking - -``` - -The [netstat -tulpn][11] or [ss commands][12] displayed nothing about the hidden TCP ports # 1048, 1049, and 1050: -``` -# netstat -tulpn | grep 1048 -# ss -lp -# ss -l | grep 1048 -``` -For more info read man pages by typing the following command: -``` -$ man unhide -$ man unhide-tcp -``` - -### A note about Windows users - -You can grab the WinUnhide/WinUnhide-TCP by [visiting this page][13]. - - --------------------------------------------------------------------------------- - -via: https://www.cyberciti.biz/tips/linux-unix-windows-find-hidden-processes-tcp-udp-ports.html - -作者:[Vivek Gite][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.cyberciti.biz -[1]:https://www.cyberciti.biz/faq/show-all-running-processes-in-linux/ (Linux / Unix ps command) -[2]:https://www.cyberciti.biz/faq/howto-check-linux-rootkist-with-detectors-software/ -[3]:https://www.cyberciti.biz/tips/wp-content/uploads/2011/11/Linux-FreeBSD-Unix-Windows-Find-Hidden-Process-Ports.jpg -[4]:https://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html (See Linux/Unix apt-get command examples for more info) -[5]://www.cyberciti.biz/faq/ubuntu-lts-debian-linux-apt-command-examples/ (See Linux/Unix apt command examples for more info) 
-[6]:https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/ (See Linux/Unix yum command examples for more info) -[7]:https://www.cyberciti.biz/faq/fedora-sl-centos-redhat6-enable-epel-repo/ -[8]:https://www.cyberciti.biz/faq/installing-rhel-epel-repo-on-centos-redhat-7-x/ -[9]:https://www.cyberciti.biz/tips/linux-display-open-ports-owner.html (Linux netstat command) -[10]:https://www.cyberciti.biz/tips/linux-investigate-sockets-network-connections.html -[11]:https://www.cyberciti.biz/tips/netstat-command-tutorial-examples.html -[12]:https://www.cyberciti.biz/tips/linux-investigate-sockets-network-connections.html -[13]:http://www.unhide-forensics.info/?Windows:Download From 841f1c94d8850afd57a863357b5d368e830e3f7e Mon Sep 17 00:00:00 2001 From: fan Li <15201710458@163.com> Date: Sun, 21 Jan 2018 15:42:55 +0800 Subject: [PATCH 135/226] Create 20111124 How to find hidden processes and ports on Linux-Unix-Windows.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 翻译完成 之前有冲突的问题 现在重新fork一下后再pr 应该没问题了 --- ...ocesses and ports on Linux-Unix-Windows.md | 183 ++++++++++++++++++ 1 file changed, 183 insertions(+) create mode 100644 translated/tech/20111124 How to find hidden processes and ports on Linux-Unix-Windows.md diff --git a/translated/tech/20111124 How to find hidden processes and ports on Linux-Unix-Windows.md b/translated/tech/20111124 How to find hidden processes and ports on Linux-Unix-Windows.md new file mode 100644 index 0000000000..dd834e3a53 --- /dev/null +++ b/translated/tech/20111124 How to find hidden processes and ports on Linux-Unix-Windows.md @@ -0,0 +1,183 @@ +# 如何在 Linux/Unix/Windows 中发现隐藏的进程和端口 + + +unhide 是一个小巧的网络取证工具,能够发现那些借助 rootkits,LKM 等其他技术隐藏的进程和 TCP/UDP 端口。这个工具在 Linux,unix-like,Windows 等操作系统下都可以工作。根据其 man 页面的说明: + +> Unhide 通过下述三项技术来发现隐藏的进程。 +> 1. 进程相关的技术,包括将 /proc 目录与 /bin/ps 命令的输出进行比较。 +> 2. 系统相关的技术,包括将 ps 命令的输出结果同从系统调用方面得到的信息进行比较。 +> 3. 
穷举法相关的技术,包括对所有的进程 ID 进行暴力求解,该技术仅限于在基于 Linux 2.6 内核的系统中使用。
+
+绝大多数的 Rootkits 工具或者恶意软件借助内核来实现进程隐藏,这些进程只在内核内部可见。你可以使用 unhide 或者诸如 rkhunter 等工具,扫描 rootkit 程序、后门程序以及一些可能存在的本地漏洞。
+
+![本文讲解如何在多个操作系统下安装和使用 unhide][1]
+如何安装 unhide
+-----------
+
+这里首先建议你在只读介质上运行这个工具。如果使用的是 Ubuntu 或者 Debian 发行版,输入下述的 apt-get/apt 命令以安装 unhide:`$ sudo apt-get install unhide` 一切顺利的话你的命令行会输出以下内容:
+
+    [sudo] password for vivek:
+    Reading package lists... Done
+    Building dependency tree
+    Reading state information... Done
+    Suggested packages:
+      rkhunter
+    The following NEW packages will be installed:
+      unhide
+    0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
+    Need to get 46.6 kB of archives.
+    After this operation, 136 kB of additional disk space will be used.
+    Get:1 http://in.archive.ubuntu.com/ubuntu artful/universe amd64 unhide amd64 20130526-1 [46.6 kB]
+    Fetched 46.6 kB in 0s (49.0 kB/s)
+    Selecting previously unselected package unhide.
+    (Reading database ... 205367 files and directories currently installed.)
+    Preparing to unpack .../unhide_20130526-1_amd64.deb ...
+    Unpacking unhide (20130526-1) ...
+    Setting up unhide (20130526-1) ...
+    Processing triggers for man-db (2.7.6.1-2) ...
+
+如何在 RHEL/CentOS/Oracle/Scientific/Fedora 上安装 unhide
+------------------------------------------------------------------
+
+你可以使用以下的 yum 命令:
+
+    $ sudo yum install unhide
+
+在 Fedora 上则使用以下 dnf 命令:
+
+    $ sudo dnf install unhide
+
+如何在 Arch 上安装 unhide
+-------------------
+
+键入以下 pacman 命令安装:`$ sudo pacman -S unhide`
+
+如何在 FreeBSD 上安装 unhide
+----------------------
+
+可以通过以下的命令使用 port 来安装 unhide:
+
+    # cd /usr/ports/security/unhide/
+    # make install clean
+
+或者可以通过二进制包安装 unhide,使用 pkg 命令安装:
+
+    # pkg install unhide
+
+unhide-tcp 取证工具通过对所有可用的 TCP/UDP 端口进行暴力求解的方式,找出正在监听、却没有被 /bin/netstat 或者 /bin/ss 命令列出的 TCP/UDP 端口。
+
+如何使用 unhide 工具?
+---------------
+
+unhide 的语法是 `unhide [options] test_list`,test_list 参数可以是以下测试列表中的一个或者多个标准测试:
+
+
+ 1. Brute
+ 2. proc
+ 3. procall
+ 4. procfs
+ 5. quick
+ 6.
reverse + 7. sys + +基本测试: + + 1. checkbrute + 2. checkchdir + 3. checkgetaffinity + 4. checkgetparam + 5. checkgetpgid + 6. checkgetprio + 7. checkRRgetinterval + 8. checkgetsched + 9. checkgetsid + 10. checkkill + 11. checknoprocps + 12. checkopendir + 13. checkproc + 14. checkquick + 15. checkreaddir + 16. checkreverse + 17. checksysinfo + 18. checksysinfo2 + 19. checksysinfo3 + +你可以通过以下示例命令使用 unhide: + + # unhide proc + # unhide sys + # unhide quick + +示例输出: + + Unhide 20130526 + Copyright © 2013 Yago Jesus & Patrick Gouin + License GPLv3+ : GNU GPL version 3 or later + http://www.unhide-forensics.info + + NOTE : This version of unhide is for systems using Linux >= 2.6 + + Used options: + [*]Searching for Hidden processes through comparison of results of system calls, proc, dir and ps + +如何使用 unhide-tcp 工具辨明 TCP/UDP 端口的身份 +---------------------------------- + +以下是来自 man 页面的介绍 + +> unhide-tcp is a forensic tool that identifies TCP/UDP ports that are +> listening but are not listed by /sbin/ss (or alternatively by +> /bin/netstat) through brute forcing of all TCP/UDP ports available. +> Note1 : On FreeBSD ans OpenBSD, netstat is allways used as iproute2 +> doesn't exist on these OS. In addition, on FreeBSD, sockstat is used +> instead of fuser. Note2 : If iproute2 is not available on the system, +> option -n or -s SHOULD be given on the command line. 
+
+unhide-tcp 取证工具,通过对所有可用的 TCP/UDP 端口进行暴力求解的方式,找出正在监听、却没有被 /bin/netstat 或者 /bin/ss 命令列出的 TCP/UDP 端口。请注意 1:在 FreeBSD 和 OpenBSD 上总是使用 netstat,因为这些系统上没有 iproute2;此外,在 FreeBSD 上会用 sockstat 来代替 fuser。请注意 2:如果系统上没有 iproute2,在使用 unhide 时应当在命令行上加上 -n 或者 -s 选项。
+
+    # unhide-tcp
+
+示例输出:
+
+    Unhide 20100201
+    http://www.security-projects.com/?Unhide
+    Starting TCP checking
+    Starting UDP checking
+
+上述操作中,没有发现隐藏的端口。但在下述示例中,我展示了一些有趣的事。
+
+    # unhide-tcp
+
+示例输出:
+
+    Unhide 20100201
+    http://www.security-projects.com/?Unhide
+    Starting TCP checking
+    Found Hidden port that not appears in netstat: 1048
+    Found Hidden port that not appears in netstat: 1049
+    Found Hidden port that not appears in netstat: 1050
+    Starting UDP checking
+
+可以看到,netstat -tulpn 和 ss 命令确实没有反映出这三个隐藏的端口:
+
+    # netstat -tulpn | grep 1048
+    # ss -lp
+    # ss -l | grep 1048
+
+通过下述的 man 命令可以更多地了解 unhide:
+
+    $ man unhide
+    $ man unhide-tcp
+
+Windows 用户如何安装使用 unhide
+---------------------
+你可以通过这个[页面][2]获取 Windows 版本的 unhide。
+
+via: https://www.cyberciti.biz/tips/linux-unix-windows-find-hidden-processes-tcp-udp-ports.html
+作者:Vivek Gite 译者:[ljgibbslf][3] 校对:校对者ID
+本文由 LCTT 原创编译,Linux中国 荣誉推出!
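
unhide-tcp 之所以能发现被 netstat/ss 漏报的端口,关键在于它不依赖这些工具的输出。在 Linux 上,内核自己维护的套接字表 /proc/net/tcp 也可以作为一种交叉验证的数据源。下面是一个假设性的小片段(并非 unhide 的源码,仅作演示),展示如何把该文件中十六进制的 local_address 字段解码成十进制端口号:

```
#!/usr/bin/env bash
# /proc/net/tcp 的 local_address 字段形如 0100007F:0016(IP:端口,均为十六进制)。
decode_port() {
    printf '%d' "0x${1##*:}"   # 取冒号后的十六进制端口并转为十进制
}

decode_port "0100007F:0016"; echo   # -> 22

# 列出当前所有处于 LISTEN 状态(状态码 0A)的本地端口:
awk '$4 == "0A" { print $2 }' /proc/net/tcp 2>/dev/null |
while read -r addr; do
    decode_port "$addr"; echo
done
```

把这样得到的端口列表与 ss/netstat 的输出对比,就是一种最朴素的交叉检查思路。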
+ + + [1]: https://camo.githubusercontent.com/51ee31c20a799512dcd09d88cacbe8dd04731529/68747470733a2f2f7777772e6379626572636974692e62697a2f746970732f77702d636f6e74656e742f75706c6f6164732f323031312f31312f4c696e75782d467265654253442d556e69782d57696e646f77732d46696e642d48696464656e2d50726f636573732d506f7274732e6a7067 + [2]: http://www.unhide-forensics.info/?Windows:Download + [3]: https://github.com/ljgibbslf From 0cd757965437dea51da6359c1f88271e7a378884 Mon Sep 17 00:00:00 2001 From: ChenYi <31087327+cyleft@users.noreply.github.com> Date: Sun, 21 Jan 2018 19:01:45 +0800 Subject: [PATCH 136/226] Update 20180110 Best Linux Screenshot and Screencasting Tools.md --- .../20180110 Best Linux Screenshot and Screencasting Tools.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180110 Best Linux Screenshot and Screencasting Tools.md b/sources/tech/20180110 Best Linux Screenshot and Screencasting Tools.md index fbd10d2194..aebbb62466 100644 --- a/sources/tech/20180110 Best Linux Screenshot and Screencasting Tools.md +++ b/sources/tech/20180110 Best Linux Screenshot and Screencasting Tools.md @@ -1,3 +1,5 @@ +translated by cyleft + Best Linux Screenshot and Screencasting Tools ====== ![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/best-linux-screenshot-and-screencasting-tools_orig.jpg) From 8b429eb47ab18ecd4f3a1cd6d196f492f397ada6 Mon Sep 17 00:00:00 2001 From: ChenYi <31087327+cyleft@users.noreply.github.com> Date: Sun, 21 Jan 2018 19:02:40 +0800 Subject: [PATCH 137/226] apply for translation --- .../20180110 Best Linux Screenshot and Screencasting Tools.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20180110 Best Linux Screenshot and Screencasting Tools.md b/sources/tech/20180110 Best Linux Screenshot and Screencasting Tools.md index aebbb62466..90ab5189f9 100644 --- a/sources/tech/20180110 Best Linux Screenshot and Screencasting Tools.md +++ b/sources/tech/20180110 Best Linux Screenshot and Screencasting 
Tools.md @@ -1,4 +1,4 @@ -translated by cyleft +translated by cyleft. Best Linux Screenshot and Screencasting Tools ====== From 1e60811b5611e11ff4bbf93b3c887f6ff7a7bcd7 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 21 Jan 2018 19:35:21 +0800 Subject: [PATCH 138/226] PRF:20171226 How to use-run bash aliases over ssh based session.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @lujun9972 “[[email protected]][3]” 是怎么来的? --- ...run bash aliases over ssh based session.md | 72 +++++++++++-------- 1 file changed, 44 insertions(+), 28 deletions(-) diff --git a/translated/tech/20171226 How to use-run bash aliases over ssh based session.md b/translated/tech/20171226 How to use-run bash aliases over ssh based session.md index e93f9be95e..03c60be3d7 100644 --- a/translated/tech/20171226 How to use-run bash aliases over ssh based session.md +++ b/translated/tech/20171226 How to use-run bash aliases over ssh based session.md @@ -1,7 +1,8 @@ 通过 ssh 会话执行 bash 别名 ====== -我在远程主机上[上设置过一个叫做 file_repl 的 bash 别名 ][1] . 当我使用 ssh 命令登陆远程主机后,可以很正常的使用这个别名。然而这个 bash 别名却无法通过 ssh 来运行,像这样: +我在远程主机上[上设置过一个叫做 file_repl 的 bash 别名 ][1]。当我使用 ssh 命令登录远程主机后,可以很正常的使用这个别名。然而这个 bash 别名却无法通过 ssh 来运行,像这样: + ``` $ ssh vivek@server1.cyberciti.biz file_repl bash:file_repl:command not found @@ -9,38 +10,48 @@ bash:file_repl:command not found 我要怎样做才能通过 ssh 命令运行 bash 别名呢? 
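
顺带一提,上面的报错不需要 ssh 也能在本地复现:bash 默认只在交互式 shell 中展开别名,而 `ssh 主机 命令` 启动的正是非交互式 shell,这也是后文用 `bash -ic` 解决问题的原因。下面是一个演示用的假设示例(别名名称和内容均为虚构,仅作说明):

```
#!/usr/bin/env bash
# 用一个临时 rc 文件模拟远程主机上的 ~/.bashrc。
rc=$(mktemp)
echo "alias file_repl_demo='echo hello from alias'" > "$rc"

# 非交互式 shell:既不读取 rc 文件,也不展开别名,得到退出码 127。
noninteractive_status=0
bash -c 'file_repl_demo' 2>/dev/null || noninteractive_status=$?
echo "non-interactive status: $noninteractive_status"

# 用 -i 强制交互式 shell:rc 文件被读取,别名得以展开。
# (没有终端时 bash -i 会在 stderr 上输出 job control 警告,这里忽略。)
interactive_out=$(bash --rcfile "$rc" -ic 'file_repl_demo' 2>/dev/null)
echo "interactive output: $interactive_out"

rm -f "$rc"
```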
-SSH 客户端 (ssh) 是一个登陆远程服务器并在远程系统上执行 shell 命令的 Linux/Unix 命令。它被设计用来在两个非信任的机器上通过不安全的网络(比如互联网)提供安全的加密通讯。 +SSH 客户端 (ssh) 是一个登录远程服务器并在远程系统上执行 shell 命令的 Linux/Unix 命令。它被设计用来在两个非信任的机器上通过不安全的网络(比如互联网)提供安全的加密通讯。 -## 如何用 ssh 客户端执行命令 +### 如何用 ssh 客户端执行命令 + +通过 ssh 运行 `free` 命令或 [date 命令][2] 可以这样做: + +``` +$ ssh vivek@server1.cyberciti.biz date +``` -通过 ssh 运行 free 命令或 [date 命令 ][2] 可以这样做: -`$ ssh vivek@server1.cyberciti.biz date` 结果为: + ``` Tue Dec 26 09:02:50 UTC 2017 ``` -或者 -`$ ssh vivek@server1.cyberciti.biz free -h` -结果为: +或者: + +``` +$ ssh vivek@server1.cyberciti.biz free -h +``` + +结果为: + ``` -  total used free shared buff/cache available Mem:2.0G 428M 138M 145M 1.4G 1.1G Swap:0B 0B 0B ``` -## 理解 bash shell 以及命令的类型 +### 理解 bash shell 以及命令的类型 [bash shell][4] 共有下面几类命令: - 1。别名,比如 ll - 2。关键字,比如 if - 3。函数(用户自定义函数,比如 genpasswd) - 4。内置命令,比如 pwd - 5。外部文件,比如 /bin/date +1. 别名,比如 `ll` +2. 关键字,比如 `if` +3. 函数 (用户自定义函数,比如 `genpasswd`) +4. 内置命令,比如 `pwd` +5. 外部文件,比如 `/bin/date` + +[type 命令][5] 和 [command 命令][6] 可以用来查看命令类型: -The [type 命令 ][5] 和 [command 命令 ][6] 可以用来查看命令类型: ``` $ type -a date date is /bin/date @@ -51,33 +62,38 @@ pwd is a shell builtin $ type -a file_repl is aliased to `sudo -i /shared/takes/master.replication' ``` -date 和 free 都是外部命令而 file_repl 是 `sudo -i /shared/takes/master.replication` 的别名。你不能直接执行像 file_repl 这样的别名: +`date` 和 `free` 都是外部命令,而 `file_repl` 是 `sudo -i /shared/takes/master.replication` 的别名。你不能直接执行像 `file_repl` 这样的别名: ``` $ ssh user@remote file_repl ``` -## 在 Unix 系统上无法直接通过 ssh 客户端执行 bash 别名 +### 在 Unix 系统上无法直接通过 ssh 客户端执行 bash 别名 要解决这个问题可以用下面方法运行 ssh 命令: + ``` $ ssh -t user@remote /bin/bash -ic 'your-alias-here' $ ssh -t user@remote /bin/bash -ic 'file_repl' ``` -ssh 命令选项: - - 1。**-t**:[强制分配伪终端。可以用来在远程机器上执行任意的 ][7] 基于屏幕的程序,有时这非常有用。当使用 `-t` 时你可能会收到一个类似" bash:cannot set terminal process group (-1):Inappropriate ioctl for device。bash:no job control in this shell ." 
的错误。 +`ssh` 命令选项: +- `-t`:[强制分配伪终端。可以用来在远程机器上执行任意的][7] 基于屏幕的程序,有时这非常有用。当使用 `-t` 时你可能会收到一个类似“bash: cannot set terminal process group (-1): Inappropriate ioctl for device. bash: no job control in this shell .”的错误。 bash shell 的选项: - 1。**-i**:运行交互 shell,这样 shell 才能运行 bash 别名 - 2。**-c**:要执行的命令取之于第一个非选项参数的命令字符串。若在命令字符串后面还有其他参数,这些参会会作为位置参数传递给命令,参数从 $0 开始。 +- `-i`:运行交互 shell,这样 shell 才能运行 bash 别名。 +- `-c`:要执行的命令取之于第一个非选项参数的命令字符串。若在命令字符串后面还有其他参数,这些参数会作为位置参数传递给命令,参数从 `$0` 开始。 总之,要运行一个名叫 `ll` 的 bash 别名,可以运行下面命令: -`$ ssh -t [[email protected]][3] -ic 'll'` + +``` +$ ssh -t vivek@server1.cyberciti.biz -ic 'll' +``` + 结果为: + [![Running bash aliases over ssh based session when using Unix or Linux ssh cli][8]][8] 下面是我的一个 shell 脚本的例子: @@ -100,9 +116,10 @@ ssh ${box} /usr/bin/lxc file push /tmp/https.www.cyberciti.biz.410.url.conf ngin ssh -t ${box} /bin/bash -ic 'push_config_job' ``` -## 相关资料 +### 相关资料 + +更多信息请输入下面命令查看 [OpenSSH 客户端][9] 和 [bash 的 man 帮助 ][10]: -更多信息请输入下面命令查看 [OpenSSH client][9] 和 [bash 的 man 帮助 ][10]: ``` $ man ssh $ man bash @@ -110,14 +127,13 @@ $ help type $ help command ``` - -------------------------------------------------------------------------------- via: https://www.cyberciti.biz/faq/use-bash-aliases-ssh-based-session/ 作者:[Vivek Gite][a] 译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From e1b5bf2c464462a4108b26261c068d3a1925a9aa Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 21 Jan 2018 19:35:55 +0800 Subject: [PATCH 139/226] PUB:20171226 How to use-run bash aliases over ssh based session.md @lujun9972 https://linux.cn/article-9263-1.html --- ...20171226 How to use-run bash aliases over ssh based session.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20171226 How to use-run bash aliases over ssh based session.md (100%) diff --git 
a/translated/tech/20171226 How to use-run bash aliases over ssh based session.md b/published/20171226 How to use-run bash aliases over ssh based session.md similarity index 100% rename from translated/tech/20171226 How to use-run bash aliases over ssh based session.md rename to published/20171226 How to use-run bash aliases over ssh based session.md From d719fafbf1a70c3925f9d19a8a5aebbd3b9f7b07 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E5=BC=A0=E5=AE=88=E6=B0=B8?= Date: Sun, 21 Jan 2018 20:03:49 +0800 Subject: [PATCH 140/226] Rename Two great uses for the cp command Bash shortcuts.md to 20180121 Two great uses for the cp command Bash shortcuts.md --- ... 20180121 Two great uses for the cp command Bash shortcuts.md} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename sources/tech/{Two great uses for the cp command Bash shortcuts.md => 20180121 Two great uses for the cp command Bash shortcuts.md} (100%) diff --git a/sources/tech/Two great uses for the cp command Bash shortcuts.md b/sources/tech/20180121 Two great uses for the cp command Bash shortcuts.md similarity index 100% rename from sources/tech/Two great uses for the cp command Bash shortcuts.md rename to sources/tech/20180121 Two great uses for the cp command Bash shortcuts.md From 780b798330e0c7ae75cbd3c45278ef63ece8b99b Mon Sep 17 00:00:00 2001 From: ypingcn <1344632698@qq.com> Date: Sun, 21 Jan 2018 20:13:20 +0800 Subject: [PATCH 141/226] Create 20180112 Top 5 Firefox extensions to install now.md --- ...Top 5 Firefox extensions to install now.md | 79 +++++++++++++++++++ 1 file changed, 79 insertions(+) create mode 100644 translated/tech/20180112 Top 5 Firefox extensions to install now.md diff --git a/translated/tech/20180112 Top 5 Firefox extensions to install now.md b/translated/tech/20180112 Top 5 Firefox extensions to install now.md new file mode 100644 index 0000000000..9f4698aea7 --- /dev/null +++ b/translated/tech/20180112 Top 5 Firefox extensions to install now.md @@ -0,0 +1,79 @@ 
+五个值得现在安装的火狐插件 +====== + +合适的插件能大大增强你浏览器的功能,但仔细挑选插件很重要。本文有五个值得一看的插件。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/firefox_blue_lead.jpg) + +对于很多用户来说,网页浏览器已经成为电脑使用体验的重要环节。现代浏览器已经发展成强大、可拓展的平台。作为平台的一部分,_插件_能添加或修改浏览器的功能。火狐插件的构建使用了 WebExtensions API ,一个跨浏览器的开发系统。 + +你得安装哪一个插件?一般而言,这个问题的答案取决于你如何使用你的浏览器、你对于隐私的看法、你信任插件开发者多少以及其他个人喜好。 + +首先,我想指出浏览器插件通常需要读取和(或者)修改你浏览的网页上的每项内容。你应该_非常_仔细地考虑这件事的后果。如果一个插件有修改所有你访问过的网页的权限,那么它可能记录你的按键、拦截信用卡信息、在线跟踪你、插入广告,以及其他各种各样邪恶的行为。 + +并不是每个插件都偷偷摸摸地做这些事,但是在你安装任何插件之前,你要慎重考虑下插件安装来源、涉及的权限、你的风险数据和其他因素。记住,你可以从个人数据的角度来管理一个插件如何影响你的攻击面( LCTT 译者注:攻击面是指入侵者能尝试获取或提取数据的途径总和)——例如使用特定的配置、不使用插件来完成例如网上银行的操作。 + +考虑到这一点,这里有你或许想要考虑的五个火狐插件 + +### uBlock Origin + +![ublock origin ad blocker screenshot][2] + +ublock Origin 可以拦截广告和恶意网页,还允许用户定义自己的内容过滤器。 + +[uBlock Origin][3] 是一款快速、内存占用低、适用范围广的拦截器,它不仅能屏蔽广告,还能让你执行你自己的内容过滤。uBlock Origin 默认使用多份预定义好的过滤名单来拦截广告、跟踪器和恶意网页。它允许你任意地添加列表和规则,或者锁定在一个默认拒绝的模式。除了强大之外,这个插件已被证明是效率高、性能好。 + +### Privacy Badger + +![privacy badger ad blocker][5] + +Privacy Badger 运用了算法来无缝地屏蔽侵犯用户准则的广告和跟踪器。 + +正如它名字所表明,[Privacy Badger][6] 是一款专注于隐私的插件,它屏蔽广告和第三方跟踪器。EFF (LCTT 译者注:EFF全称是电子前哨基金会(Electronic Frontier Foundation),旨在宣传互联网版权和监督执法机构 )说:“我们想要推荐一款能自动分析并屏蔽任何侵犯用户准则的跟踪器和广告,而 Privacy Badger 诞生于此目的;它不用任何设置、知识或者用户的配置,就能运行得很好;它是由一个明显为用户服务而不是为广告主服务的组织出品;它使用算法来绝定什么正在跟踪,什么没有在跟踪” + +为什么 Privacy Badger 出现在这列表上的原因跟 uBlock Origin 如此相似?其中一个原因是Privacy Badger 从根本上跟 uBlock Origin 的工作不同。另一个原因是纵深防御的做法是个可以跟随的合理策略。 + +### LastPass + +![lastpass password manager screenshot][8] + +LastPass 是一款用户友好的密码管理插件,支持双重授权。 + +这个插件对于很多人来说是个有争议的补充。你是否应该使用密码管理器——如果你用了,你是否应该选择一个浏览器插件——这都是个热议的话题,而答案取决于你的风险资料。我想说大部分不关心的电脑用户应该用一个,因为这比起常见的选择:每一处使用相同的弱密码,都好太多了。 + +[LastPass][9] 对于用户很友好,支持双重授权,相当安全。这家公司过去出过点安全事故,但是都处理得当,而且资金充足。记住使用密码管理器不是非此即彼的命题。很多用户选择使用密码管理器管理绝大部分密码,但是保持了一点复杂性,为例如银行这样重要的网页精心设计了密码和使用多重认证。 + +### Xmarks Sync + +[Xmarks Sync][10] 是一款方便的插件,能跨实例同步你的书签、打开的标签页、配置项和浏览器历史。如果你有多台机器,想要在桌面设备和移动设备之间同步、或者在同一台设备使用不同的浏览器,那来看看 Xmarks Sync 。(注意这款插件最近被 
LastPass 收购) + +### Awesome Screenshot Plus + +[Awesome Screenshot Plus][11] 允许你很容易捕获任意网页的全部或部分区域,也能添加注释、评论、使敏感信息模糊等。你还能用一个可选的在线服务来分享图片。我发现这工具在网页调试时截图、讨论设计和分享信息上很棒。这是一款比你预期中发现自己使用得多的工具。 + +我发现这五款插件有用,我把它们推荐给其他人。这就是说,还有很多浏览器插件。我好奇其他的哪一款是 Opensource.com 社区用户正在使用并推荐的。让评论中让我知道。(LCTT 译者注:本文引用自 Opensource.com ,这两句话意在引导用户留言,推荐自己使用的插件) + +![Awesome Screenshot Plus screenshot][13] + +Awesome Screenshot Plus 允许你容易地截下任何网页的部分或全部内容。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/1/top-5-firefox-extensions + +作者:[Jeremy Garcia][a] +译者:[ypingcn](https://github.com/ypingcn) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jeremy-garcia +[2]: https://opensource.com/sites/default/files/ublock.png "ublock origin ad blocker screenshot" +[3]: https://addons.mozilla.org/en-US/firefox/addon/ublock-origin/ +[5]: https://opensource.com/sites/default/files/images/life-uploads/privacy_badger_1.0.1.png "privacy badger ad blocker screenshot" +[6]: https://www.eff.org/privacybadger +[8]: https://opensource.com/sites/default/files/images/life-uploads/lastpass4.jpg "lastpass password manager screenshot" +[9]: https://addons.mozilla.org/en-US/firefox/addon/lastpass-password-manager/ +[10]: https://addons.mozilla.org/en-US/firefox/addon/xmarks-sync/ +[11]: https://addons.mozilla.org/en-US/firefox/addon/screenshot-capture-annotate/ +[13]: https://opensource.com/sites/default/files/screenshot_from_2018-01-04_17-11-32.png "Awesome Screenshot Plus screenshot" From 21464e7b49c6cdcbdd26a19f10afc447301a8460 Mon Sep 17 00:00:00 2001 From: ypingcn <1344632698@qq.com> Date: Sun, 21 Jan 2018 20:13:41 +0800 Subject: [PATCH 142/226] Delete 20180112 Top 5 Firefox extensions to install now.md --- ...Top 5 Firefox extensions to install now.md | 85 ------------------- 1 file changed, 85 deletions(-) delete mode 100644 
sources/tech/20180112 Top 5 Firefox extensions to install now.md diff --git a/sources/tech/20180112 Top 5 Firefox extensions to install now.md b/sources/tech/20180112 Top 5 Firefox extensions to install now.md deleted file mode 100644 index 3717b7c96d..0000000000 --- a/sources/tech/20180112 Top 5 Firefox extensions to install now.md +++ /dev/null @@ -1,85 +0,0 @@ -translating by ypingcn - -Top 5 Firefox extensions to install now -====== - -The right extensions can greatly enhance your browser's capabilities, but it's important to choose carefully. Here are five that are worth a look. - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/firefox_blue_lead.jpg?itok=gYaubJUv) - -The web browser has become a critical component of the computing experience for many users. Modern browsers have evolved into powerful and extensible platforms. As part of this, _extensions_ can add or modify their functionality. Extensions for Firefox are built using the WebExtensions API, a cross-browser development system. - -Which extensions should you install? Generally, that decision comes down to how you use your browser, your views on privacy, how much you trust extension developers, and other personal preferences. - -First, I'd like to point out that browser extensions often require the ability to read and/or change everything on the web pages you visit. You should consider the ramifications of this _very_ carefully. If an extension has modify access to all the web pages you visit, it could act as a key logger, intercept credit card information, track you online, insert advertisements, and perform a variety of other nefarious activities. - -That doesn't mean every extension will surreptitiously do these things, but you should carefully consider the installation source, the permissions involved, your risk profile, and other factors before you install any extension. 
Keep in mind you can use profiles to manage how an extension impacts your attack surface--for example, using a dedicated profile with no extensions to perform tasks such as online banking. - -With that in mind, here are five Firefox extensions that you may want to consider. - -### uBlock Origin - -![ublock origin ad blocker screenshot][2] - - -Ublock Origin blocks ads and malware while enabling users to define their own content filters. - -[uBlock Origin][3] is a fast, low-memory, wide-spectrum blocker that not only blocks ads but also lets you enforce your own content filtering. The default behavior of uBlock Origin is to block ads, trackers, and malware sites using multiple predefined filter lists. From there it allows you to arbitrarily add lists and rules, or even lock down to a default-deny mode. In addition to being powerful, this extension has proven to be efficient and performant. - -### Privacy Badger - -![privacy badger ad blocker][5] - - -Privacy Badger uses algorithms to seamlessly block ads and trackers that violate the principles of user consent. - -As its name indicates, [Privacy Badger][6] is a privacy-focused extension that blocks ads and third-party trackers. From the EFF: "Privacy Badger was born out of our desire to be able to recommend a single extension that would automatically analyze and block any tracker or ad that violated the principle of user consent; which could function well without any settings, knowledge, or configuration by the user; which is produced by an organization that is unambiguously working for its users rather than for advertisers; and which uses algorithmic methods to decide what is and isn't tracking." - -Why is Privacy Badger on this list when it may seem so similar to uBlock Origin? One reason is that it fundamentally works differently than uBlock Origin. Another is that a practice of defense in depth is a sound policy to follow. 
- -### LastPass - -![lastpass password manager screenshot][8] - - -LastPass is a user-friendly password manager plugin that supports two-factor authorization. - -This is likely a controversial addition for many. Whether you should use a password manager at all--and if you do, whether you should choose one that has a browser plugin--is a hotly debated topic, and the answer very much depends on your personal risk profile. I'd assert that most casual computer users should use one, because it's much better than the most common alternative: using the same weak password everywhere. - -[LastPass][9] is user-friendly, supports two-factor authentication, and is reasonably secure. The company has had a few security incidents in the past, but it responded well and is well-funded moving forward. Keep in mind that using a password manager isn't an all-or-nothing proposition. Many users choose to use it for the majority of their passwords, while keeping a few complicated, well-constructed passwords for important sites such as banking and multi-factor authentication in their head. - -### Xmarks Sync - -[Xmarks Sync][10] is a convenient extension that will sync your bookmarks, open tabs, profiles, and browser history across instances. If you have multiple machines, want to sync across desktop and mobile, or use multiple different browsers on the same machine, take a look at Xmarks Sync. (Note that this extension was recently acquired by LastPass.) - -### Awesome Screenshot Plus - -[Awesome Screenshot Plus][11] allows you to easily capture all or part of any web page, as well as add annotations and comments, blur sensitive information, and more. You can also share images using an optional online service. I've found this tool great for capturing parts of sites for debugging issues, discussing design, and sharing information. It's one of those tools you'll find yourself using more than you might have expected. 
- -I've found all five of these extensions useful, and I recommend them to others. That said, there are many browser extensions out there. I'm curious about which ones other Opensource.com community members currently use and recommend. Let me know in the comments. - -![Awesome Screenshot Plus screenshot][13] - - -Awesome Screenshot Plus allows you to easily capture all or part of any web page. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/1/top-5-firefox-extensions - -作者:[Jeremy Garcia][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/jeremy-garcia -[2]:https://opensource.com/sites/default/files/ublock.png (ublock origin ad blocker screenshot) -[3]:https://addons.mozilla.org/en-US/firefox/addon/ublock-origin/ -[5]:https://opensource.com/sites/default/files/images/life-uploads/privacy_badger_1.0.1.png (privacy badger ad blocker screenshot) -[6]:https://www.eff.org/privacybadger -[8]:https://opensource.com/sites/default/files/images/life-uploads/lastpass4.jpg (lastpass password manager screenshot) -[9]:https://addons.mozilla.org/en-US/firefox/addon/lastpass-password-manager/ -[10]:https://addons.mozilla.org/en-US/firefox/addon/xmarks-sync/ -[11]:https://addons.mozilla.org/en-US/firefox/addon/screenshot-capture-annotate/ -[13]:https://opensource.com/sites/default/files/screenshot_from_2018-01-04_17-11-32.png (Awesome Screenshot Plus screenshot) From d0acc11feec5670a0c327a2dcf024ea0f06ea4ac Mon Sep 17 00:00:00 2001 From: Kane Gong Date: Sun, 21 Jan 2018 20:37:20 +0800 Subject: [PATCH 143/226] kaneg is translating. 
--- ...0180103 How to preconfigure LXD containers with cloud-init.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20180103 How to preconfigure LXD containers with cloud-init.md b/sources/tech/20180103 How to preconfigure LXD containers with cloud-init.md index ed6eacd2fb..d94b5fa2b8 100644 --- a/sources/tech/20180103 How to preconfigure LXD containers with cloud-init.md +++ b/sources/tech/20180103 How to preconfigure LXD containers with cloud-init.md @@ -1,3 +1,4 @@ +kaneg is translating. How to preconfigure LXD containers with cloud-init ====== You are creating containers and you want them to be somewhat preconfigured. For example, you want them to run automatically **apt update** as soon as they are launched. Or, get some packages pre-installed, or run a few commands. Here is how to perform this early initialization with [**cloud-init**][1] through [LXD to container images that support **cloud-init**][2]. From f53cc82cb02cc3250c168c12c39943a923c676c2 Mon Sep 17 00:00:00 2001 From: Ezio Date: Sun, 21 Jan 2018 21:00:52 +0800 Subject: [PATCH 144/226] =?UTF-8?q?=E4=BF=AE=E6=AD=A3?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... great uses for the cp command Bash shortcuts.md | 13 +++---------- 1 file changed, 3 insertions(+), 10 deletions(-) diff --git a/sources/tech/20180121 Two great uses for the cp command Bash shortcuts.md b/sources/tech/20180121 Two great uses for the cp command Bash shortcuts.md index baf9549636..d4ad7c7140 100644 --- a/sources/tech/20180121 Two great uses for the cp command Bash shortcuts.md +++ b/sources/tech/20180121 Two great uses for the cp command Bash shortcuts.md @@ -3,16 +3,9 @@ Two great uses for the cp command: Bash shortcuts ### Here's how to streamline the backup and synchronize functions of the cp command. 
- [![](https://opensource.com/sites/default/files/styles/byline_thumbnail/public/clh_portrait2.jpg?itok=w2fRuoKj)][1]  19 Jan 2018 [Chris Hermansen][2] [Feed][3]  - -8[up][4] - - [4 comments][5] ![Two great uses for the cp command: Bash shortcuts ](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC) -Image by :  - -[Internet Archive Book Images][6]. Modified by Opensource.com. CC BY-SA 4.0 +>Image by : [Internet Archive Book Images][6]. Modified by Opensource.com. CC BY-SA 4.0 Last July, I wrote about [two great uses for the cp command][7]: making a backup of a file, and synchronizing a secondary copy of a folder. @@ -140,13 +133,13 @@ Here I’ve shown concrete examples of the use of shell aliases and shell functi via: https://opensource.com/article/18/1/two-great-uses-cp-command-update -作者:[ ][a] +作者:[Chris Hermansen][a] 译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 -[a]: +[a]:https://opensource.com/users/clhermansen [1]:https://opensource.com/users/clhermansen [2]:https://opensource.com/users/clhermansen [3]:https://opensource.com/user/37806/feed From f41e1204be0847cb4aea24bc64bf890ffa014915 Mon Sep 17 00:00:00 2001 From: fan Li <15201710458@163.com> Date: Sun, 21 Jan 2018 21:14:02 +0800 Subject: [PATCH 145/226] Update 20170511 Working with VI editor - The Basics.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 申请翻译 --- sources/tech/20170511 Working with VI editor - The Basics.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20170511 Working with VI editor - The Basics.md b/sources/tech/20170511 Working with VI editor - The Basics.md index 4056c3c9ec..6653a1b2cc 100644 --- a/sources/tech/20170511 Working with VI editor - The Basics.md +++ b/sources/tech/20170511 Working with VI editor - The Basics.md 
@@ -1,3 +1,5 @@ +translating by ljgibbslf + Working with VI editor : The Basics ====== VI editor is a powerful command line based text editor that was originally created for Unix but has since been ported to various Unix & Linux distributions. In Linux there exists another, advanced version of VI editor called VIM (also known as VI IMproved ). VIM only adds functionalities to the already powerful VI editor, some of the added functionalities a From 2c52fe6bca53405ed1eed220dda829d7ac6ae6bb Mon Sep 17 00:00:00 2001 From: Ezio Date: Sun, 21 Jan 2018 21:43:26 +0800 Subject: [PATCH 146/226] =?UTF-8?q?20180121-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...For Googles In-house Linux Distribution.md | 103 ++++++++++++++++++ 1 file changed, 103 insertions(+) create mode 100644 sources/tech/20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md diff --git a/sources/tech/20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md b/sources/tech/20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md new file mode 100644 index 0000000000..1da1dcf64d --- /dev/null +++ b/sources/tech/20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md @@ -0,0 +1,103 @@ +No More Ubuntu! Debian is the New Choice For Google’s In-house Linux Distribution +============================================================ + +_Brief: For years Google used Goobuntu, an in-house, Ubuntu-based operating system. Goobuntu is now being replaced by gLinux, which is based on Debian Testing._ + +If you have read [Ubuntu facts][18], you probably already know that Google uses a Linux distribution called [Goobuntu][19] as the development platform. It is a custom Linux distribution based on…(easy to guess)… Ubuntu. + +Goobuntu is basically a “[light skin over standard Ubuntu][20]”.
It is based on the LTS releases of Ubuntu. If you think that Google contributes to the testing or development of Ubuntu, you are wrong. Google is simply a paying customer for Canonical’s [Ubuntu Advantage Program][21]. [Canonical][22] is the parent company behind Ubuntu. + +### Meet gLinux: Google’s new Linux distribution based on Debian Buster + +![gLinux from Goobuntu](https://itsfoss.com/wp-content/uploads/2018/01/glinux-announcement-800x450.jpg) + +After more than five years with Ubuntu, Google is replacing Goobuntu with gLinux, a Linux distribution based on the Debian Testing release. + +As [MuyLinux reports][23], gLinux is being built from the source code of the packages and Google introduces its own changes to it. The changes will also be contributed to the upstream. + +This ‘news’ is not really new. It was announced at Debconf’17 in August last year. Somehow the story did not get the attention it deserves. + +You can watch the presentation in the Debconf video [here][24]. The gLinux presentation starts around 12:00. + +[Suggested read: City of Barcelona Kicks Out Microsoft in Favor of Linux and Open Source][25] + +### Moving from Ubuntu 14.04 LTS to Debian 10 Buster + +Google once opted for Ubuntu LTS for its stability. Now it is moving to the Debian Testing branch for more timely package updates. But it is not clear why Google decided to switch to Debian from Ubuntu. + +How does Google plan to move to Debian Testing? The current Debian Testing release is the upcoming Debian 10 Buster. Google has developed an internal tool to migrate the existing systems from Ubuntu 14.04 LTS to Debian 10 Buster. Project leader Margarita claimed in the Debconf talk that the tool was tested to be working fine. + +Google also plans to send the changes to Debian upstream and hence contribute to its development. + +![gLinux testing plan from Google](https://itsfoss.com/wp-content/uploads/2018/01/glinux-testing-plan.jpg) +Development plan for gLinux + +### Ubuntu loses a big customer!
+ +Back in 2012, Canonical had clarified that Google is not their largest business desktop customer. However, it is safe to say that Google was a big customer for them. As Google prepares to switch to Debian, this will surely result in revenue loss for Canonical. + +[Suggested read: Mandrake Linux Creator Launches a New Open Source Mobile OS][26] + +### What do you think? + +Do keep in mind that Google doesn’t restrict its developers from using any operating system. However, use of Linux is encouraged. + +If you are thinking that you can get your hands on either of Goobuntu or gLinux, you’ll have to get a job at Google. It is an internal project of Google and is not accessible to the general public. + +Overall, it is good news for Debian, especially if they get changes to upstream. I cannot say the same for Ubuntu, though. I have contacted Canonical for a comment but have got no response so far. + +Update: Canonical responded that they “don’t share details of relationships with individual customers” and hence they cannot provide details about revenue and any other such details. + +What are your views on Google ditching Ubuntu for Debian? +
+ +Filed Under: [News][15] Tagged With: [glinux][16], [goobuntu][17] +
+ +![](https://secure.gravatar.com/avatar/20749c268f5d3e4d2c785499eb6a17c0?s=125&d=mm&r=g) + +#### About Abhishek Prakash + +I am a professional software developer, and founder of It’s FOSS. I am an avid Linux lover and Open Source enthusiast. I use Ubuntu and believe in sharing knowledge. Apart from Linux, I love classic detective mysteries. I’m a huge fan of Agatha Christie’s work. + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/goobuntu-glinux-google/ + +作者:[Abhishek Prakash ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://itsfoss.com/author/abhishek/ +[1]:https://itsfoss.com/author/abhishek/ +[2]:https://itsfoss.com/goobuntu-glinux-google/#comments +[3]:https://www.facebook.com/share.php?u=https%3A%2F%2Fitsfoss.com%2Fgoobuntu-glinux-google%2F%3Futm_source%3Dfacebook%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare +[4]:https://twitter.com/share?original_referer=/&text=No+More+Ubuntu%21+Debian+is+the+New+Choice+For+Google%E2%80%99s+In-house+Linux+Distribution&url=https://itsfoss.com/goobuntu-glinux-google/%3Futm_source%3Dtwitter%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare&via=abhishek_foss +[5]:https://plus.google.com/share?url=https%3A%2F%2Fitsfoss.com%2Fgoobuntu-glinux-google%2F%3Futm_source%3DgooglePlus%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare +[6]:https://www.linkedin.com/cws/share?url=https%3A%2F%2Fitsfoss.com%2Fgoobuntu-glinux-google%2F%3Futm_source%3DlinkedIn%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare +[7]:http://www.stumbleupon.com/submit?url=https://itsfoss.com/goobuntu-glinux-google/&title=No+More+Ubuntu%21+Debian+is+the+New+Choice+For+Google%26%238217%3Bs+In-house+Linux+Distribution 
+[8]:https://www.reddit.com/submit?url=https://itsfoss.com/goobuntu-glinux-google/&title=No+More+Ubuntu%21+Debian+is+the+New+Choice+For+Google%26%238217%3Bs+In-house+Linux+Distribution +[9]:https://www.facebook.com/share.php?u=https%3A%2F%2Fitsfoss.com%2Fgoobuntu-glinux-google%2F%3Futm_source%3Dfacebook%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare +[10]:https://twitter.com/share?original_referer=/&text=No+More+Ubuntu%21+Debian+is+the+New+Choice+For+Google%E2%80%99s+In-house+Linux+Distribution&url=https://itsfoss.com/goobuntu-glinux-google/%3Futm_source%3Dtwitter%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare&via=abhishek_foss +[11]:https://plus.google.com/share?url=https%3A%2F%2Fitsfoss.com%2Fgoobuntu-glinux-google%2F%3Futm_source%3DgooglePlus%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare +[12]:https://www.linkedin.com/cws/share?url=https%3A%2F%2Fitsfoss.com%2Fgoobuntu-glinux-google%2F%3Futm_source%3DlinkedIn%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare +[13]:http://www.stumbleupon.com/submit?url=https://itsfoss.com/goobuntu-glinux-google/&title=No+More+Ubuntu%21+Debian+is+the+New+Choice+For+Google%26%238217%3Bs+In-house+Linux+Distribution +[14]:https://www.reddit.com/submit?url=https://itsfoss.com/goobuntu-glinux-google/&title=No+More+Ubuntu%21+Debian+is+the+New+Choice+For+Google%26%238217%3Bs+In-house+Linux+Distribution +[15]:https://itsfoss.com/category/news/ +[16]:https://itsfoss.com/tag/glinux/ +[17]:https://itsfoss.com/tag/goobuntu/ +[18]:https://itsfoss.com/facts-about-ubuntu/ +[19]:https://en.wikipedia.org/wiki/Goobuntu +[20]:http://www.zdnet.com/article/the-truth-about-goobuntu-googles-in-house-desktop-ubuntu-linux/ +[21]:https://www.ubuntu.com/support +[22]:https://www.canonical.com/ +[23]:https://www.muylinux.com/2018/01/15/goobuntu-glinux-google/ +[24]:https://debconf17.debconf.org/talks/44/ +[25]:https://itsfoss.com/barcelona-open-source/ +[26]:https://itsfoss.com/eelo-mobile-os/ From 
5eb9566813f82ef69bd896fa0decdfe64d303425 Mon Sep 17 00:00:00 2001 From: Ezio Date: Sun, 21 Jan 2018 21:45:39 +0800 Subject: [PATCH 147/226] =?UTF-8?q?20180121=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @ljgibbslf 文章标题是文章发布时间,不是选题时间 --- ...80119 Two great uses for the cp command Bash shortcuts.md} | 4 ---- 1 file changed, 4 deletions(-) rename sources/tech/{20180121 Two great uses for the cp command Bash shortcuts.md => 20180119 Two great uses for the cp command Bash shortcuts.md} (99%) diff --git a/sources/tech/20180121 Two great uses for the cp command Bash shortcuts.md b/sources/tech/20180119 Two great uses for the cp command Bash shortcuts.md similarity index 99% rename from sources/tech/20180121 Two great uses for the cp command Bash shortcuts.md rename to sources/tech/20180119 Two great uses for the cp command Bash shortcuts.md index d4ad7c7140..b3a5200278 100644 --- a/sources/tech/20180121 Two great uses for the cp command Bash shortcuts.md +++ b/sources/tech/20180119 Two great uses for the cp command Bash shortcuts.md @@ -117,15 +117,11 @@ In my last article, I promised you that repetitive tasks can often be easily str Here I’ve shown concrete examples of the use of shell aliases and shell functions to streamline the synchronize and backup functionality of the cp command. If you’d like to learn more about this, check out the two articles cited above: [How to save keystrokes at the command line with alias][10] and [Shell scripting: An introduction to the shift method and custom functions][11], written by my colleagues Greg and Seth, respectively. 
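As a concrete illustration of the alias/function approach the cp article describes, here is a minimal, hypothetical sketch — the names (`cpb`, `cps`) and the exact flags are mine, not quoted from the article:

```shell
#!/bin/sh
# Hypothetical helpers in the spirit of the article's examples.

# cpb: the "backup" use of cp -- copy, but keep a numbered backup
# (dst.~1~, dst.~2~, ...) of any file that would be overwritten.
# Relies on GNU cp's --backup option.
cpb() {
    cp --force --backup=numbered "$@"
}

# cps: the "synchronize" use of cp -- recursively copy a folder,
# only overwriting files that are newer in the source.
alias cps='cp --recursive --update --verbose'
```

For example, `cpb report.txt backups/` keeps the previous revision as `backups/report.txt.~1~` instead of silently clobbering it.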
-### Topics - - [Linux][12] ### About the author [![](https://opensource.com/sites/default/files/styles/profile_pictures/public/clh_portrait2.jpg?itok=V1V-YAtY)][13] Chris Hermansen  --  Engaged in computing since graduating from the University of British Columbia in 1978, I have been a full-time Linux user since 2005 and a full-time Solaris, SunOS and UNIX System V user before that. On the technical side of things, I have spent a great deal of my career doing data analysis; especially spatial data analysis. I have a substantial amount of programming experience in relation to data analysis, using awk, Python, PostgreSQL, PostGIS and lately Groovy. I have also built a few... [more about Chris Hermansen][14] From 5911af98f2d6b6cbe0afcacdb2335438a0d20301 Mon Sep 17 00:00:00 2001 From: jessie-pang <35220454+jessie-pang@users.noreply.github.com> Date: Sun, 21 Jan 2018 21:45:51 +0800 Subject: [PATCH 148/226] Update 20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md --- ...is the New Choice For Googles In-house Linux Distribution.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md b/sources/tech/20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md index 1da1dcf64d..5a63106d2f 100644 --- a/sources/tech/20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md +++ b/sources/tech/20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md @@ -1,3 +1,5 @@ +Translating by jessie-pang + No More Ubuntu! 
Debian is the New Choice For Google’s In-house Linux Distribution ============================================================ From 33811b476bd50133f57a7d82f44910ce845639f1 Mon Sep 17 00:00:00 2001 From: darksun Date: Sun, 21 Jan 2018 22:01:17 +0800 Subject: [PATCH 149/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20instal?= =?UTF-8?q?l=20Spotify=20application=20on=20Linux?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...to install Spotify application on Linux.md | 101 ++++++++++++++++++ 1 file changed, 101 insertions(+) create mode 100644 sources/tech/20180119 How to install Spotify application on Linux.md diff --git a/sources/tech/20180119 How to install Spotify application on Linux.md b/sources/tech/20180119 How to install Spotify application on Linux.md new file mode 100644 index 0000000000..e5b6f94a74 --- /dev/null +++ b/sources/tech/20180119 How to install Spotify application on Linux.md @@ -0,0 +1,101 @@ +How to install Spotify application on Linux +====== + +How do I install the Spotify app on an Ubuntu Linux desktop to stream music? + +Spotify is a digital music streaming service that provides you access to tons of songs. You can stream for free or buy a subscription. Creating a playlist is possible. A subscriber can listen to music ad-free. You get better sound quality. This page **shows how to install Spotify on Linux using the snap package manager, which works on Ubuntu, Mint, Debian, Fedora, Arch and many other distros**. + +### Installing the spotify application on Linux + +The procedure to install spotify on Linux is as follows: + +1. Install snapd +2. Turn on snapd +3. Find the Spotify snap: +``` +snap find spotify +``` +4. Install the spotify music app: +``` +sudo snap install spotify +``` +5. Run it: +``` +spotify & +``` + +Let us see all the steps and examples in detail. + +### Step 1 - Install Snapd + +You need to install the snapd package. It is the daemon (service) and tooling that enables snap packages on the Linux operating system.
+ +#### Snapd on a Debian/Ubuntu/Mint Linux + +Type the following [apt command][1]/[apt-get command][2]: +`$ sudo apt install snapd` + +#### Install snapd on an Arch Linux + +snapd is available in the Arch User Repository (AUR) only. Run the yaourt command (see [how to install yaourt on Archlinux][3]): +``` +$ sudo yaourt -S snapd +$ sudo systemctl enable --now snapd.socket +``` + +#### Get snapd on a Fedora Linux + +Run the following commands to install snapd and create the snap symlink: +``` +sudo dnf install snapd +sudo ln -s /var/lib/snapd/snap /snap +``` + +#### OpenSUSE install snapd + +Execute the snap command: +`$ snap find spotify` +[![snap search for spotify app command][4]][4] +Install it: +`$ sudo snap install spotify` +[![How to install Spotify application on Linux using snap command][5]][5] + +### Step 3 - Run spotify and enjoy it (LCTT translator's note: the original blog post jumps straight to Step 3 like this) + +Run it from the GUI or simply type: +`$ spotify` +Automatically sign in to your account on startup: +``` +$ spotify --username vivek@nixcraft.com +$ spotify --username vivek@nixcraft.com --password 'myPasswordHere' +``` +Start the spotify client with a given URI when initialized: +`$ spotify --uri=` +Start with the specified URL: +`$ spotify --url=` +[![Spotify client app running on my Ubuntu Linux desktop][6]][6] + +### About the author + +The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][7], [Facebook][8], [Google+][9].
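The distro-specific snapd setup steps above can be collapsed into one helper. A minimal dry-run sketch — the function name and the dry-run approach are mine, not from the article — that only *prints* the appropriate command(s) for a given distro ID, without running anything:

```shell
#!/bin/sh
# Dry-run sketch: echo the snapd install command for a distro ID,
# mirroring the per-distro steps listed above. Nothing is executed.
snapd_install_cmd() {
    case "$1" in
        debian|ubuntu|mint)
            echo "sudo apt install snapd" ;;
        arch)
            echo "sudo yaourt -S snapd && sudo systemctl enable --now snapd.socket" ;;
        fedora)
            echo "sudo dnf install snapd && sudo ln -s /var/lib/snapd/snap /snap" ;;
        *)
            echo "no snapd recipe listed for: $1" >&2
            return 1 ;;
    esac
}

snapd_install_cmd "${1:-ubuntu}"   # prints: sudo apt install snapd
```

Copy-pasting the printed command (or piping it to a shell you trust) keeps the destructive step explicit and reviewable.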
+ +-------------------------------------------------------------------------------- + +via: https://www.cyberciti.biz/faq/how-to-install-spotify-application-on-linux/ + +作者:[Vivek Gite][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.cyberciti.biz +[1]:https://www.cyberciti.biz/faq/ubuntu-lts-debian-linux-apt-command-examples/ (See Linux/Unix apt command examples for more info) +[2]:https://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html (See Linux/Unix apt-get command examples for more info) +[3]:https://www.cyberciti.biz/faq/how-to-install-yaourt-in-arch-linux/ +[4]:https://www.cyberciti.biz/media/new/faq/2018/01/snap-search-for-spotify-app-command.jpg +[5]:https://www.cyberciti.biz/media/new/faq/2018/01/How-to-install-Spotify-application-on-Linux-using-snap-command.jpg +[6]:https://www.cyberciti.biz/media/new/faq/2018/01/Spotify-client-app-running-on-my-Ubuntu-Linux-desktop.jpg +[7]:https://twitter.com/nixcraft +[8]:https://facebook.com/nixcraft +[9]:https://plus.google.com/+CybercitiBiz From bdbc8caf3e69779708551c30ad37d499b66d9708 Mon Sep 17 00:00:00 2001 From: darksun Date: Sun, 21 Jan 2018 22:04:43 +0800 Subject: [PATCH 150/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20technology?= =?UTF-8?q?=20changes=20the=20rules=20for=20doing=20agile?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ology changes the rules for doing agile.md | 95 +++++++++++++++++++ 1 file changed, 95 insertions(+) create mode 100644 sources/talk/20180117 How technology changes the rules for doing agile.md diff --git a/sources/talk/20180117 How technology changes the rules for doing agile.md b/sources/talk/20180117 How technology changes the rules for doing agile.md new file mode 100644 index 0000000000..1b67935509 --- /dev/null +++ b/sources/talk/20180117 How technology changes 
the rules for doing agile.md @@ -0,0 +1,95 @@ +How technology changes the rules for doing agile +====== + +![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/CIO%20Containers%20Ecosystem.png?itok=lDTaYXzk) + +More companies are trying agile and [DevOps][1] for a clear reason: Businesses want more speed and more experiments - which lead to innovations and competitive advantage. DevOps helps you gain that speed. But doing DevOps in a small group or startup and doing it at scale are two very different things. Any of us who've worked in a cross-functional group of 10 people, come up with a great solution to a problem, and then tried to apply the same patterns across a team of 100 people know the truth: It often doesn't work. This path has been so hard, in fact, that it has been easy for IT leaders to put off agile methodology for another year. + +But that time is over. If you've tried and stalled, it's time to jump back in. + +Until now, DevOps required customized answers for many organizations - lots of tweaks and elbow grease. But today, [Linux containers ][2]and Kubernetes are fueling standardization of DevOps tools and processes. That standardization will only accelerate. The technology we are using to practice the DevOps way of working has finally caught up with our desire to move faster. + +Linux containers and [Kubernetes][3] are changing the way teams interact. Moreover, on the Kubernetes platform, you can run any application you now run on Linux. What does that mean? You can run a tremendous number of enterprise apps (and handle even previously vexing coordination issues between Windows and Linux.) Finally, containers and Kubernetes will handle almost all of what you'll run tomorrow. They're being future-proofed to handle machine learning, AI, and analytics workloads - the next wave of problem-solving tools. + +**[ See our related article,[4 container adoption patterns: What you need to know. 
] ][4]** + +Think about machine learning, for example. Today, people still find the patterns in much of an enterprise's data. When machines find the patterns (think machine learning), your people will be able to act on them faster. With the addition of AI, machines can not only find but also act on patterns. Today, with people doing everything, three weeks is an aggressive software development sprint cycle. With AI, machines can change code multiple times per second. Startups will use that capability - to disrupt you. + +Consider how fast you have to be to compete. If you can't make a leap of faith now to DevOps and a one week cycle, think of what will happen when that startup points its AI-fueled process at you. It's time to move to the DevOps way of working now, or get left behind as your competitors do. + +### How are containers changing how teams work? + +DevOps has frustrated many groups trying to scale this way of working to a bigger group. Many IT (and business) people are suspicious of agile: They've heard it all before - languages, frameworks, and now models (like DevOps), all promising to revolutionize application development and IT process. + +**[ Want DevOps advice from other CIOs? See our comprehensive resource, [DevOps: The IT Leader's Guide][5]. ]** + +It's not easy to "sell" quick development sprints to your stakeholders, either. Imagine if you bought a house this way. You're not going to pay a fixed amount to your builder anymore. Instead, you get something like: "We'll pour the foundation in 4 weeks and it will cost x. Then we'll frame. Then we'll do electrical. But we only know the timing on the foundation right now." People are used to buying homes with a price up front and a schedule. + +The challenge is that building software is not like building a house. The same builder builds thousands of houses that are all the same. Software projects are never the same. This is your first hurdle to get past. 
+ +Dev and operations teams really do work differently: I know because I've worked on both sides. We incent them differently. Developers are rewarded for changing and creating, while operations pros are rewarded for reducing cost and ensuring security. We put them in different groups and generally minimize interaction. And the roles typically attract technical people who think quite differently. This situation sets IT up to fail. You have to be willing to break down these barriers. + +Think of what has traditionally happened. You throw pieces over the wall, then the business throws requirements over the wall because they are operating in "house-buying" mode: "We'll see you in 9 months." Developers build to those requirements and make changes as needed for technical constraints. Then they throw it over the wall to operations to "figure out how to run this." Operations then works diligently to make a slew of changes to align the software with their infrastructure. And what's the end result? + +More often than not, the end result isn't even recognizable to the business when they see it in its final glory. We've watched this pattern play out time and time again in our industry for the better part of two decades. It's time for a change. + +It's Linux containers that truly crack the problem - because containers close the gap between development and operations. They allow both teams to understand and design to all of the critical requirements, but still uniquely fulfill their team's responsibilities. Basically, we take out the telephone game between developers and operations. With containers, we can have smaller operations teams, even teams responsible for millions of applications, but development teams that can change software as quickly as needed. (In larger organizations, the desired pace may be faster than humans can respond on the operations side.) + +With containers, you're separating what is delivered from where it runs. 
Your operations teams are responsible for the host that will run the containers and the security footprint, and that's all. What does this mean? + +First, it means you can get going on DevOps now, with the team you have. That's right. Keep teams focused on the expertise they already have: With containers, just teach them the bare minimum of the required integration dependencies. + +If you try and retrain everyone, no one will be that good at anything. Containers let teams interact, but alongside a strong boundary, built around each team's strengths. Your devs know what needs to be consumed, but don't need to know how to make it run at scale. Ops teams know the core infrastructure, but don't need to know the minutiae of the app. Also, Ops teams can update apps to address new security implications, before you become the next trending data breach story. + +Teaching a large IT organization of say 30,000 people both ops and devs skills? It would take you a decade. You don't have that kind of time. + +When people talk about "building new, cloud-native apps will get us out of this problem," think critically. You can build cloud-native apps in 10-person teams, but that doesn't scale for a Fortune 1000 company. You can't just build new microservices one by one until you're somehow not reliant on your existing team: You'll end up with a siloed organization. It's an alluring idea, but you can't count on these apps to redefine your business. I haven't met a company that could fund parallel development at this scale and succeed. IT budgets are already constrained; doubling or tripling them for an extended period of time just isn't realistic. + +### When the remarkable happens: Hello, velocity + +Linux containers were made to scale. Once you start to do so, [orchestration tools like Kubernetes come into play][6] - because you'll need to run thousands of containers. 
Applications won't consist of just a single container, they will depend on many different pieces, all running on containers, all running as a unit. If they don't, your apps won't run well in production. + +Think of how many small gears and levers come together to run your business: The same is true for any application. Developers are responsible for all the pulleys and levers in the application. (You could have an integration nightmare if developers don't own those pieces.) At the same time, your operations team is responsible for all the pulleys and levers that make up your infrastructure, whether on-premises or in the cloud. With Kubernetes as an abstraction, your operations team can give the application the fuel it needs to run - without being experts on all those pieces. + +Developers get to experiment. The operations team keeps infrastructure secure and reliable. This combination opens up the business to take small risks that lead to innovation. Instead of having to make only a couple of bet-the-farm size bets, real experimentation happens inside the company, incrementally and quickly. + +In my experience, this is where the remarkable happens inside organizations: Because people say "How do we change planning to actually take advantage of this ability to experiment?" It forces agile planning. + +For example, KeyBank, which uses a DevOps model, containers, and Kubernetes, now deploys code every day. (Watch this [video][7] in which John Rzeszotarski, director of Continuous Delivery and Feedback at KeyBank, explains the change.) Similarly, Macquarie Bank uses DevOps and containers to put something in production every day. + +Once you push software every day, it changes every aspect of how you plan - and [accelerates the rate of change to the business][8]. "An idea can get to a customer in a day," says Luis Uguina, CDO of Macquarie's banking and financial services group. (See this [case study][9] on Red Hat's work with Macquarie Bank). 
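To ground the Kubernetes abstraction described above in something concrete: the "unit" of pulleys and levers is usually declared in a manifest like the sketch below. The names and image are illustrative placeholders, not from any system mentioned here. The platform keeps the declared number of containers running; nobody places them by hand:

```yaml
# Hypothetical Deployment manifest: Kubernetes keeps three copies of the
# application container running, restarting or rescheduling them as needed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app                  # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: registry.example.com/example-app:1.0   # placeholder image
        ports:
        - containerPort: 8080
```

The operations team decides where and how such manifests run; developers change the image tag as often as they ship.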
+ +### The right time to build something great + +The Macquarie example demonstrates the power of velocity. How would that change your approach to your business? Remember, Macquarie is not a startup. This is the type of disruptive power that CIOs face, not only from new market entrants but also from established peers. + +The developer freedom also changes the talent equation for CIOs running agile shops. Suddenly, individuals within huge companies (even those not in the hottest industries or geographies) can have great impact. Macquarie uses this dynamic as a recruiting tool, promising developers that all new hires will push something live within the first week. + +At the same time, in this day of cloud-based compute and storage power, we have more infrastructure available than ever. That's fortunate, considering the [leaps that machine learning and AI tools will soon enable][10]. + +This all adds up to this being the right time to build something great. Given the pace of innovation in the market, you need to keep building great things to keep customers loyal. So if you've been waiting to place your bet on DevOps, now is the right time. Containers and Kubernetes have changed the rules - in your favor. + +**Want more wisdom like this, IT leaders? 
[Sign up for our weekly email newsletter][11].** + +-------------------------------------------------------------------------------- + +via: https://enterprisersproject.com/article/2018/1/how-technology-changes-rules-doing-agile + +作者:[Matt Hicks][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://enterprisersproject.com/user/matt-hicks +[1]:https://enterprisersproject.com/tags/devops +[2]:https://www.redhat.com/en/topics/containers?intcmp=701f2000000tjyaAAA +[3]:https://www.redhat.com/en/topics/containers/what-is-kubernetes?intcmp=701f2000000tjyaAAA +[4]:https://enterprisersproject.com/article/2017/8/4-container-adoption-patterns-what-you-need-know?sc_cid=70160000000h0aXAAQ +[5]:https://enterprisersproject.com/devops?sc_cid=70160000000h0aXAAQ +[6]:https://enterprisersproject.com/article/2017/11/how-enterprise-it-uses-kubernetes-tame-container-complexity +[7]:https://www.redhat.com/en/about/videos/john-rzeszotarski-keybank-red-hat-summit-2017?intcmp=701f2000000tjyaAAA +[8]:https://enterprisersproject.com/article/2017/11/dear-cios-stop-beating-yourselves-being-behind-transformation +[9]:https://www.redhat.com/en/resources/macquarie-bank-case-study?intcmp=701f2000000tjyaAAA +[10]:https://enterprisersproject.com/article/2018/1/4-ai-trends-watch +[11]:https://enterprisersproject.com/email-newsletter?intcmp=701f2000000tsjPAAQ From c0517266d96116bead0536c4bf03084d98cd799d Mon Sep 17 00:00:00 2001 From: darksun Date: Sun, 21 Jan 2018 22:07:21 +0800 Subject: [PATCH 151/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Configuring=20MSM?= =?UTF-8?q?TP=20On=20Ubuntu=2016.04=20(Again)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...nfiguring MSMTP On Ubuntu 16.04 (Again).md | 82 +++++++++++++++++++ 1 file changed, 82 insertions(+) create mode 100644 sources/tech/20180118 Configuring MSMTP On Ubuntu 16.04 
(Again).md diff --git a/sources/tech/20180118 Configuring MSMTP On Ubuntu 16.04 (Again).md b/sources/tech/20180118 Configuring MSMTP On Ubuntu 16.04 (Again).md new file mode 100644 index 0000000000..9ddb25b40b --- /dev/null +++ b/sources/tech/20180118 Configuring MSMTP On Ubuntu 16.04 (Again).md @@ -0,0 +1,82 @@ +Configuring MSMTP On Ubuntu 16.04 (Again) +====== +This post exists as a copy of what I had on my previous blog about configuring MSMTP on Ubuntu 16.04; I'm posting it as-is for posterity, and have no idea if it'll work on later versions. As I'm not hosting my own Ubuntu/MSMTP server anymore I can't see any updates being made to this, but if I ever do have to set this up again I'll create an updated post! Anyway, here's what I had… + +I previously wrote an article around configuring msmtp on Ubuntu 12.04, but as I hinted at in a previous post that sort of got lost when the upgrade of my host to Ubuntu 16.04 went somewhat awry. What follows is essentially the same post, with some slight updates for 16.04. As before, this assumes that you're using Apache as the web server, but I'm sure it shouldn't be too different if your web server of choice is something else. + +I use [msmtp][1] for sending emails from this blog to notify me of comments and upgrades etc. Here I'm going to document how I configured it to send emails via a Google Apps account, although this should also work with a standard Gmail account too. + +To begin, we need to install 3 packages: +`sudo apt-get install msmtp msmtp-mta ca-certificates` +Once these are installed, a default config is required. By default msmtp will look at `/etc/msmtprc`, so I created that using vim, though any text editor will do the trick. This file looked something like this: +``` +# Set defaults. +defaults +# Enable or disable TLS/SSL encryption. +tls on +tls_starttls on +tls_trust_file /etc/ssl/certs/ca-certificates.crt +# Setup WP account's settings. 
+account <ACCOUNT>
+host smtp.gmail.com
+port 587
+auth login
+user <USERNAME>
+password <PASSWORD>
+from <FROM_ADDRESS>
+logfile /var/log/msmtp/msmtp.log
+
+account default : <ACCOUNT>
+
+```
+
+Any of the uppercase items (i.e. `<ACCOUNT>`) are placeholders for things that need replacing specific to your configuration. The exception to that is the log file, which can of course be placed wherever you wish to log any msmtp activity/warnings/errors to.
+
+Once that file is saved, we'll update the permissions on the above configuration file -- msmtp won't run if the permissions on that file are too open -- and create the directory for the log file.
+```
+sudo mkdir /var/log/msmtp
+sudo chown -R www-data:adm /var/log/msmtp
+sudo chmod 0600 /etc/msmtprc
+
+```
+
+Next I chose to configure logrotate for the msmtp logs, to make sure that the log files don't get too large as well as keeping the log directory a little tidier. To do this, we create `/etc/logrotate.d/msmtp` and configure it with the following file. Note that this is optional, you may choose to not do this, or you may choose to configure the logs differently.
+```
+/var/log/msmtp/*.log {
+rotate 12
+monthly
+compress
+missingok
+notifempty
+}
+
+```
+
+Now that the logging is configured, we need to tell PHP to use msmtp by editing `/etc/php/7.0/apache2/php.ini` and updating the sendmail path from
+`sendmail_path =`
+to
+`sendmail_path = "/usr/bin/msmtp -C /etc/msmtprc -a <ACCOUNT> -t"`
+Here I did run into an issue where even though I specified the account name it wasn't sending emails correctly when I tested it. This is why the line `account default : <ACCOUNT>` was placed at the end of the msmtp configuration file. To test the configuration, ensure that the PHP file has been saved and run `sudo service apache2 restart`, then run `php -a` and execute the following
+```
+mail ('personal@email.com', 'Test Subject', 'Test body text');
+exit();
+
+```
+
+Any errors that occur at this point will be displayed in the output so should make diagnosing any errors after the test relatively easy.
If all is successful, you should now be able to use PHPs sendmail (which at the very least WordPress uses) to send emails from your Ubuntu server using Gmail (or Google Apps). + +I make no claims that this is the most secure configuration, so if you come across this and realise it's grossly insecure or something is drastically wrong please let me know and I'll update it accordingly. + + +-------------------------------------------------------------------------------- + +via: https://codingproductivity.wordpress.com/2018/01/18/configuring-msmtp-on-ubuntu-16-04-again/ + +作者:[JOE][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://codingproductivity.wordpress.com/author/joeb454/ +[1]:http://msmtp.sourceforge.net/ From 5f60131a0b88fd5a08ee565d40f5e6a8535be8d5 Mon Sep 17 00:00:00 2001 From: darksun Date: Sun, 21 Jan 2018 22:13:44 +0800 Subject: [PATCH 152/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20Play?= =?UTF-8?q?=20Sound=20Through=20Two=20or=20More=20Output=20Devices=20in=20?= =?UTF-8?q?Linux?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ugh Two or More Output Devices in Linux.md | 62 +++++++++++++++++++ 1 file changed, 62 insertions(+) create mode 100644 sources/tech/20180118 How to Play Sound Through Two or More Output Devices in Linux.md diff --git a/sources/tech/20180118 How to Play Sound Through Two or More Output Devices in Linux.md b/sources/tech/20180118 How to Play Sound Through Two or More Output Devices in Linux.md new file mode 100644 index 0000000000..2f35b15ac7 --- /dev/null +++ b/sources/tech/20180118 How to Play Sound Through Two or More Output Devices in Linux.md @@ -0,0 +1,62 @@ +translating by lujun9972 +How to Play Sound Through Two or More Output Devices in Linux +====== + 
+![](https://www.maketecheasier.com/assets/uploads/2018/01/output-audio-multiple-devices-featured.jpg) + +Handling audio in Linux can be a pain. Pulseaudio has made it both better and worse. While some things work better than they did before, other things have become more complicated. Handling audio output is one of those things. + +If you want to enable multiple audio outputs from your Linux PC, you can use a simple utility to enable your other sound devices on a virtual interface. It's a lot easier than it sounds. + +In case you're wondering why you'd want to do this, a pretty common instance is playing video from your computer on a TV and using both the PC and TV speakers. + +### Install Paprefs + +The easiest way to enable audio playback from multiple sources is to use a simple graphical utility called "paprefs." It's short for PulseAudio Preferences. + +It's available through the Ubuntu repositories, so just install it with Apt. +``` +sudo apt install paprefs +``` + +When the install finishes, you can just launch the program. + +### Enable Dual Audio Playback + +Even though the utility is graphical, it's still probably easier to launch it by typing `paprefs` in the command line as a regular user. + +The window that opens has a few tabs with settings that you can tweak. The tab that you're looking for is the last one, "Simultaneous Output." + +![Paprefs on Ubuntu][1] + +There isn't a whole lot on the tab, just a checkbox to enable the setting. + +Next, open up the regular sound preferences. It's in different places on different distributions. On Ubuntu it'll be under the GNOME system settings. + +![Enable Simultaneous Audio][2] + +Once you have your sound preferences open, select the "Output" tab. Select the "Simultaneous output" radio button. It's now your default output. + +### Test It + +To test it, you can use anything you like, but music always works. If you are using a video, like suggested earlier, you can certainly test it with that as well. 
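Under the hood, that "Simultaneous Output" checkbox works by loading PulseAudio's combine-sink module. If you ever need the same effect without the GUI, the module can be loaded from PulseAudio's startup configuration — a sketch, assuming a stock PulseAudio setup:

```
# Fragment for /etc/pulse/default.pa (or ~/.config/pulse/default.pa):
# create a virtual sink that mirrors audio to every other sink
load-module module-combine-sink
```

Restart PulseAudio (or log out and back in) for the change to take effect; the combined device then shows up in the same "Output" tab.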
+ +If everything is working well, you should hear audio out of all connected devices. + +That's all there really is to do. This works best when there are multiple devices, like the HDMI port and the standard analog output. You can certainly try it with other configurations, too. You should also keep in mind that there will only be a single volume control, so adjust the physical output devices accordingly. + + +-------------------------------------------------------------------------------- + +via: https://www.maketecheasier.com/play-sound-through-multiple-devices-linux/ + +作者:[Nick Congleton][a] +译者:[lujun9972](https://github.com/lujun9972) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.maketecheasier.com/author/nickcongleton/ +[1]:https://www.maketecheasier.com/assets/uploads/2018/01/sa-paprefs.jpg (Paprefs on Ubuntu) +[2]:https://www.maketecheasier.com/assets/uploads/2018/01/sa-enable.jpg (Enable Simultaneous Audio) +[3]:https://depositphotos.com/89314442/stock-photo-headphones-on-speakers.html From 33631e83ae877408383f9c9dc1bf493ab1f50069 Mon Sep 17 00:00:00 2001 From: ChenYi <31087327+cyleft@users.noreply.github.com> Date: Sun, 21 Jan 2018 22:14:05 +0800 Subject: [PATCH 153/226] Delete 20170927 Linux directory structure- -lib explained.md --- ...nux directory structure- -lib explained.md | 79 ------------------- 1 file changed, 79 deletions(-) delete mode 100644 sources/tech/20170927 Linux directory structure- -lib explained.md diff --git a/sources/tech/20170927 Linux directory structure- -lib explained.md b/sources/tech/20170927 Linux directory structure- -lib explained.md deleted file mode 100644 index ff9ec9b72f..0000000000 --- a/sources/tech/20170927 Linux directory structure- -lib explained.md +++ /dev/null @@ -1,79 +0,0 @@ -translate by cy - -Linux directory structure: /lib explained -====== -[![lib folder linux][1]][1] - -We already explained other 
important system folders like /bin, /boot, /dev and /etc in our previous posts. Please check the links below for more information on any of those that interest you. In this post, we will see what the /lib folder is all about.
-
-[**Linux Directory Structure explained: /bin folder**][2]
-
-[**Linux Directory Structure explained: /boot folder**][3]
-
-[**Linux Directory Structure explained: /dev folder**][4]
-
-[**Linux Directory Structure explained: /etc folder**][5]
-
-[**Linux Directory Structure explained: /lost+found folder**][6]
-
-[**Linux Directory Structure explained: /home folder**][7]
-
-### What is /lib folder in Linux?
-
-The lib folder is a **library files directory** which contains all of the helpful library files used by the system. In simple terms, these are files that an application, command or process needs for its proper execution. The dynamic libraries for the commands in /bin and /sbin are located in this directory. The kernel modules are also located here.
-
-Take the example of executing the pwd command. It requires some library files to execute properly. Let us see what happens when the pwd command executes. We will use [the strace command][8] to figure out which library files are used.
-
-Example:
-
-If you observe the output, the pwd command just used the open kernel call; to execute properly, it requires two lib files.
-
-Contents of /lib folder in Linux
-
-As said earlier, this folder contains object files and libraries, so it's good to know some important subfolders within this directory. The content below is for my system; you may see some variation on yours.
-
-**/lib/firmware** - This is a folder which contains hardware firmware code.
-
-### What is the difference between firmware and drivers?
-
-The software for many devices consists of two pieces that make the hardware work properly. The piece of code that is loaded into the actual hardware is the firmware, and the software that communicates between this firmware and the kernel is called the driver. This way the kernel communicates directly with the hardware and makes sure the hardware is doing the work assigned to it.
-
-**/lib/modprobe.d** - Configuration directory for the modprobe command.
-
-**/lib/modules** - All loadable kernel modules are stored in this directory. If you have multiple kernels, you will see one folder within this directory for each kernel.
-
-**/lib/hdparm** - Contains SATA/IDE parameters for disks to run properly.
-
-**/lib/udev** - Userspace /dev (udev) is the device manager for the Linux kernel. This folder contains all udev-related files/folders, like the rules.d folder, which contains udev-specific rules.
-
-### The /lib folder sister folders: /lib32 and /lib64
-
-These folders contain libraries for their specific architectures. They are almost identical to the /lib folder except for the architecture-level differences.
-
-### Other library folders in Linux
-
-**/usr/lib** - All software libraries are installed here. This does not contain system default or kernel libraries.
-
-**/usr/local/lib** - A place for extra system library files. These library files can be used by different applications.
-
-**/var/lib** - Holds dynamic data libraries/files, like the rpm/dpkg databases and game scores.
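To see this dependency for yourself, the ldd command prints the shared libraries a binary pulls in at run time — for example (exact paths vary by distribution and architecture):

```shell
# List the dynamic libraries the pwd binary loads
ldd /bin/pwd
```

On a typical glibc system you should see libc.so.6 and the dynamic loader resolved from a /lib path, matching the two library files mentioned above.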
- --------------------------------------------------------------------------------- - -via: https://www.linuxnix.com/linux-directory-structure-lib-explained/ - -作者:[Surendra Anne][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linuxnix.com/author/surendra/ -[1]:https://www.linuxnix.com/wp-content/uploads/2017/09/The-lib-folder-explained.png -[2]:https://www.linuxnix.com/linux-directory-structure-explained-bin-folder/ -[3]:https://www.linuxnix.com/linux-directory-structure-explained-boot-folder/ -[4]:https://www.linuxnix.com/linux-directory-structure-explained-dev-folder/ -[5]:https://www.linuxnix.com/linux-directory-structure-explainedetc-folder/ -[6]:https://www.linuxnix.com/lostfound-directory-linuxunix/ -[7]:https://www.linuxnix.com/linux-directory-structure-home-root-folders/ -[8]:https://www.linuxnix.com/10-strace-command-examples-linuxunix/ From 9855b825822d1c514b8324117d26ff36d58aaa36 Mon Sep 17 00:00:00 2001 From: ChenYi <31087327+cyleft@users.noreply.github.com> Date: Sun, 21 Jan 2018 22:16:05 +0800 Subject: [PATCH 154/226] translated by cyleft 20170927 Linux directory structure- -lib explained.md --- ...nux directory structure- -lib explained.md | 77 +++++++++++++++++++ 1 file changed, 77 insertions(+) create mode 100644 translated/tech/20170927 Linux directory structure- -lib explained.md diff --git a/translated/tech/20170927 Linux directory structure- -lib explained.md b/translated/tech/20170927 Linux directory structure- -lib explained.md new file mode 100644 index 0000000000..3472981eb9 --- /dev/null +++ b/translated/tech/20170927 Linux directory structure- -lib explained.md @@ -0,0 +1,77 @@ +Linux 目录结构:/lib 分析 +====== +[![linux 目录 lib][1]][1] + +我们在之前的文章中已经分析了其他重要系统目录,比如 bin、/boot、/dev、 /etc 等。可以根据自己的兴趣进入下列链接了解更多信息。本文中,让我们来看看 /lib 目录都有些什么。 + +[**目录结构分析:/bin 文件夹**][2] + +[**目录结构分析:/boot 文件夹**][3] + 
+[**目录结构分析:/dev 文件夹**][4] + +[**目录结构分析:/etc 文件夹**][5] + +[**目录结构分析:/lost+found 文件夹**][6] + +[**目录结构分析:/home 文件夹**][7] + +### Linux 中,/lib 文件夹是什么? + +lib 文件夹是 **库文件目录** ,包含了所有对系统有用的库文件。简单来说,它是应用程序、命令或进程正确执行所需要的文件。指令在 /bin 或 /sbin 目录,而动态库文件正是在此目录中。内核模块同样也在这里。 + +以 pwd 命令执行为例。正确执行,需要调用一些库文件。让我们来探索一下 pwd 命令执行时都发生了什么。我们需要使用 [strace 命令][8] 找出调用的库文件。 + +示例: + +如果你在观察的话,会发现我们使用的 pwd 命令仅进行了内核调用,命令正确执行需要调用两个库文件。 + +Linux 中 /lib 文件夹内部信息 + +正如之前所说,这个文件夹包含了目标文件和一些库文件,如果能了解这个文件夹的一些重要子文件,想必是极好的。下面列举的内容是基于我自己的系统,对于你的来说,可能会有所不同。 + +**/lib/firmware** - 这个文件夹包含了一些硬件、固件(Firmware)代码。 + +### 硬件和固件(Firmware)之间有什么不同? + +为了使硬件合法运行,很多设备软件有两部分软件组成。加载了一个代码片段的切实硬件就是固件,固件与内核交流的软件,被称为驱动。这样一来,确保被指派工作的硬件完成内核直接与硬件交流的工作。 + +**/lib/modprobe.d** - 自动处理可载入模块命令配置目录 + +**/lib/modules** - 所有可加载的内核模块都存储在这个目录下。如果你有多个内核,那这个目录下有且不仅有一个文件夹,其中每一个都代表一个内核。 + +**/lib/hdparm** - 包含 SATA/IDE 硬盘正确运行的参数。 + +**/lib/udev** - Userspace /dev,是 Linux 内核设备管理器。这个文件夹包含了所有的 udev,类似 rules.d 这样描述特殊规则的相关文件/文件夹。 + +### /lib 的姊妹文件夹:/lib32 和 /lib64 + +这两个文件夹包含了特殊结构的库文件。它们几乎和 /lib 文件夹一样,除了架构级别的差异。 + +### Linux 其他的库文件 + +**/usr/lib** - 所有软件的库都安装在这里。但是不包含系统默认库文件和内核库文件。 + +**/usr/local/lib** - 放置额外的系统文件。不同应用都可以调用。 + +**/var/lib** - rpm/dpkg 数据和游戏缓存类似的动态库/文件都存储在这里。 + +-------------------------------------------------------------------------------- + +via: https://www.linuxnix.com/linux-directory-structure-lib-explained/ + +作者:[Surendra Anne][a] +译者:[CYLeft](https://github.com/CYLeft) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linuxnix.com/author/surendra/ +[1]:https://www.linuxnix.com/wp-content/uploads/2017/09/The-lib-folder-explained.png +[2]:https://www.linuxnix.com/linux-directory-structure-explained-bin-folder/ +[3]:https://www.linuxnix.com/linux-directory-structure-explained-boot-folder/ +[4]:https://www.linuxnix.com/linux-directory-structure-explained-dev-folder/ 
+[5]:https://www.linuxnix.com/linux-directory-structure-explainedetc-folder/ +[6]:https://www.linuxnix.com/lostfound-directory-linuxunix/ +[7]:https://www.linuxnix.com/linux-directory-structure-home-root-folders/ +[8]:https://www.linuxnix.com/10-strace-command-examples-linuxunix/ From 4a819ee7a9c2dab19fe2037bacb1285d3dea9c65 Mon Sep 17 00:00:00 2001 From: darksun Date: Sun, 21 Jan 2018 22:17:45 +0800 Subject: [PATCH 155/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Getting=20Started?= =?UTF-8?q?=20with=20ncurses?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../20180118 Getting Started with ncurses.md | 213 ++++++++++++++++++ 1 file changed, 213 insertions(+) create mode 100644 sources/tech/20180118 Getting Started with ncurses.md diff --git a/sources/tech/20180118 Getting Started with ncurses.md b/sources/tech/20180118 Getting Started with ncurses.md new file mode 100644 index 0000000000..d02ad61785 --- /dev/null +++ b/sources/tech/20180118 Getting Started with ncurses.md @@ -0,0 +1,213 @@ +Getting Started with ncurses +====== +How to use curses to draw to the terminal screen. + +While graphical user interfaces are very cool, not every program needs to run with a point-and-click interface. For example, the venerable vi editor ran in plain-text terminals long before the first GUI. + +The vi editor is one example of a screen-oriented program that draws in "text" mode, using a library called curses, which provides a set of programming interfaces to manipulate the terminal screen. The curses library originated in BSD UNIX, but Linux systems provide this functionality through the ncurses library. + +[For a "blast from the past" on ncurses, see ["ncurses: Portable Screen-Handling for Linux"][1], September 1, 1995, by Eric S. Raymond.] + +Creating programs that use curses is actually quite simple. In this article, I show an example program that leverages curses to draw to the terminal screen. 
+
+### Sierpinski's Triangle
+
+One simple way to demonstrate a few curses functions is by generating Sierpinski's Triangle. If you aren't familiar with this method to generate Sierpinski's Triangle, here are the rules:
+
+1. Set three points that define a triangle.
+
+2. Randomly select a point anywhere (x,y).
+
+Then:
+
+1. Randomly select one of the triangle's points.
+
+2. Set the new x,y to be the midpoint between the previous x,y and the triangle point.
+
+3. Repeat.
+
+So with those instructions, I wrote this program to draw Sierpinski's Triangle to the terminal screen using the curses functions:
+
+```
+
+ 1 /* triangle.c */
+ 2
+ 3 #include <curses.h>
+ 4 #include <stdlib.h>
+ 5
+ 6 #include "getrandom_int.h"
+ 7
+ 8 #define ITERMAX 10000
+ 9
+ 10 int main(void)
+ 11 {
+ 12 long iter;
+ 13 int yi, xi;
+ 14 int y[3], x[3];
+ 15 int index;
+ 16 int maxlines, maxcols;
+ 17
+ 18 /* initialize curses */
+ 19
+ 20 initscr();
+ 21 cbreak();
+ 22 noecho();
+ 23
+ 24 clear();
+ 25
+ 26 /* initialize triangle */
+ 27
+ 28 maxlines = LINES - 1;
+ 29 maxcols = COLS - 1;
+ 30
+ 31 y[0] = 0;
+ 32 x[0] = 0;
+ 33
+ 34 y[1] = maxlines;
+ 35 x[1] = maxcols / 2;
+ 36
+ 37 y[2] = 0;
+ 38 x[2] = maxcols;
+ 39
+ 40 mvaddch(y[0], x[0], '0');
+ 41 mvaddch(y[1], x[1], '1');
+ 42 mvaddch(y[2], x[2], '2');
+ 43
+ 44 /* initialize yi,xi with random values */
+ 45
+ 46 yi = getrandom_int() % maxlines;
+ 47 xi = getrandom_int() % maxcols;
+ 48
+ 49 mvaddch(yi, xi, '.');
+ 50
+ 51 /* iterate the triangle */
+ 52
+ 53 for (iter = 0; iter < ITERMAX; iter++) {
+ 54 index = getrandom_int() % 3;
+ 55
+ 56 yi = (yi + y[index]) / 2;
+ 57 xi = (xi + x[index]) / 2;
+ 58
+ 59 mvaddch(yi, xi, '*');
+ 60 refresh();
+ 61 }
+ 62
+ 63 /* done */
+ 64
+ 65 mvaddstr(maxlines, 0, "Press any key to quit");
+ 66
+ 67 refresh();
+ 68
+ 69 getch();
+ 70 endwin();
+ 71
+ 72 exit(0);
+ 73 }
+
+```
+
+Let me walk through that program by way of explanation.
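The listing also depends on a second file, getrandom_int.c, which isn't reproduced in the article. A minimal sketch of what such a wrapper could look like — an illustrative assumption, not the author's actual file — is:

```c
/* getrandom_int.c -- hypothetical sketch of the wrapper triangle.c assumes */
#include <limits.h>
#include <stdlib.h>
#include <sys/random.h>

int getrandom_int(void)
{
    int r;

    /* ask the kernel for sizeof(int) random bytes */
    if (getrandom(&r, sizeof(r), 0) != (ssize_t) sizeof(r))
        r = rand();                /* fallback if the syscall is unavailable */

    /* clear the sign bit so the result is never negative */
    return r & INT_MAX;
}
```

Any implementation works as long as it never returns a negative value, since the program uses the result with the % operator to pick screen coordinates.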
First, the getrandom_int() is my own wrapper to the Linux getrandom() system call, but it's guaranteed to return a positive integer value. Otherwise, you should be able to identify the code lines that initialize and then iterate Sierpinski's Triangle, based on the above rules. Aside from that, let's look at the curses functions I used to draw the triangle on a terminal.
+
+Most curses programs will start with these four instructions. 1) The initscr() function determines the terminal type, including its size and features, and sets up the curses environment based on what the terminal can support. 2) The cbreak() function disables line buffering and sets curses to take one character at a time. 3) The noecho() function tells curses not to echo the input back to the screen, and 4) the clear() function clears the screen:
+
+```
+
+ 20 initscr();
+ 21 cbreak();
+ 22 noecho();
+ 23
+ 24 clear();
+
+```
+
+The program then sets a few variables to define the three points that define a triangle. Note the use of LINES and COLS here, which were set by initscr(). These values tell the program how many lines and columns exist on the terminal. Screen coordinates start at zero, so the top-left of the screen is row 0, column 0. The bottom-right of the screen is row LINES - 1, column COLS - 1. To make this easy to remember, my program sets these values in the variables maxlines and maxcols, respectively.
+
+Two simple methods to draw text on the screen are the addch() and addstr() functions. To put text at a specific screen location, use the related mvaddch() and mvaddstr() functions. My program uses these functions in several places.
First, the program draws the three points that define the triangle, labeled "0", "1" and "2": + +``` + + 40 mvaddch(y[0], x[0], '0'); + 41 mvaddch(y[1], x[1], '1'); + 42 mvaddch(y[2], x[2], '2'); + +``` + +To draw the random starting point, the program makes a similar call: + +``` + + 49 mvaddch(yi, xi, '.'); + +``` + +And to draw each successive point in Sierpinski's Triangle iteration: + +``` + + 59 mvaddch(yi, xi, '*'); + +``` + +When the program is done, it displays a helpful message at the lower-left corner of the screen (at row maxlines, column 0): + +``` + + 65 mvaddstr(maxlines, 0, "Press any key to quit"); + +``` + +It's important to note that curses maintains a version of the screen in memory and updates the screen only when you ask it to. This provides greater performance, especially if you want to display a lot of text to the screen. This is because curses can update only those parts of the screen that changed since the last update. To cause curses to update the terminal screen, use the refresh() function. + +In my example program, I've chosen to update the screen after "drawing" each successive point in Sierpinski's Triangle. By doing so, users should be able to observe each iteration in the triangle. + +Before exiting, I use the getch() function to wait for the user to press a key. Then I call endwin() to exit the curses environment and return the terminal screen to normal control: + +``` + + 69 getch(); + 70 endwin(); + +``` + +### Compiling and Sample Output + +Now that you have your first sample curses program, it's time to compile and run it. Remember that Linux systems implement the curses functionality via the ncurses library, so you need to link with -lncurses when you compile—for example: + +``` + +$ ls +getrandom_int.c getrandom_int.h triangle.c + +$ gcc -Wall -lncurses -o triangle triangle.c getrandom_int.c + +``` + +Running the triangle program on a standard 80x24 terminal is not very interesting. 
You just can't see much detail in Sierpinski's Triangle at that resolution. If you run a terminal window and set a very small font size, you can see the fractal nature of Sierpinski's Triangle more easily. On my system, the output looks like Figure 1.
+
+![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/triangle.png)
+
+Figure 1. Output of the triangle Program
+
+Despite the random nature of the iteration, every run of Sierpinski's Triangle will look pretty much the same. The only difference will be where the first few points are drawn to the screen. In this example, you can see the single dot that starts the triangle, near point 1. It looks like the program picked point 2 next, and you can see the asterisk halfway between the dot and the "2". And it looks like the program randomly picked point 2 again for the next random number, because you can see the asterisk halfway between the first asterisk and the "2". From there, it's impossible to tell how the triangle was drawn, because all of the successive dots fall within the triangle area.
+
+### Starting to Learn ncurses
+
+This program is a simple example of how to use the curses functions to draw characters to the screen. You can do so much more with curses, depending on what you need your program to do. In a follow-up article, I will show how to use curses to allow the user to interact with the screen. If you are interested in getting a head start with curses, I encourage you to read Pradeep Padala's ["NCURSES Programming HOWTO"][2], at the Linux Documentation Project.
+
+### About the author
+
+Jim Hall is an advocate for free and open-source software, best known for his work on the FreeDOS Project, and he also focuses on the usability of open-source software. Jim is the Chief Information Officer at Ramsey County, Minn. 
+ +-------------------------------------------------------------------------------- + +via: http://www.linuxjournal.com/content/getting-started-ncurses + +作者:[Jim Hall][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.linuxjournal.com/users/jim-hall +[1]:http://www.linuxjournal.com/article/1124 +[2]:http://tldp.org/HOWTO/NCURSES-Programming-HOWTO From 178b3f8ad35cabb4d8f66b0b890650864eb5cfcc Mon Sep 17 00:00:00 2001 From: darksun Date: Sun, 21 Jan 2018 22:21:57 +0800 Subject: [PATCH 156/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Linux=20mv=20Comm?= =?UTF-8?q?and=20Explained=20for=20Beginners=20(8=20Examples)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...nd Explained for Beginners (8 Examples).md | 186 ++++++++++++++++++ 1 file changed, 186 insertions(+) create mode 100644 sources/tech/20180119 Linux mv Command Explained for Beginners (8 Examples).md diff --git a/sources/tech/20180119 Linux mv Command Explained for Beginners (8 Examples).md b/sources/tech/20180119 Linux mv Command Explained for Beginners (8 Examples).md new file mode 100644 index 0000000000..786528137f --- /dev/null +++ b/sources/tech/20180119 Linux mv Command Explained for Beginners (8 Examples).md @@ -0,0 +1,186 @@ +Linux mv Command Explained for Beginners (8 Examples) +====== + +Just like [cp][1] for copying and rm for deleting, Linux also offers an in-built command for moving and renaming files. It's called **mv**. In this article, we will discuss the basics of this command line tool using easy to understand examples. Please note that all examples used in this tutorial have been tested on Ubuntu 16.04 LTS. + +#### Linux mv command + +As already mentioned, the mv command in Linux is used to move or rename files. Following is the syntax of the command: + +``` +mv [OPTION]... [-T] SOURCE DEST +mv [OPTION]... 
SOURCE... DIRECTORY
+mv [OPTION]... -t DIRECTORY SOURCE...
+```
+
+And here's what the man page says about it:
+```
+Rename SOURCE to DEST, or move SOURCE(s) to DIRECTORY.
+```
+
+The following Q&A-styled examples will give you a better idea of how this tool works.
+
+#### Q1. How to use mv command in Linux?
+
+If you want to just rename a file, you can use the mv command in the following way:
+
+```
+mv [filename] [new_filename]
+```
+
+For example:
+
+```
+mv names.txt fullnames.txt
+```
+
+[![How to use mv command in Linux][2]][3]
+
+Similarly, if the requirement is to move a file to a new location, use the mv command in the following way:
+
+```
+mv [filename] [dest-dir]
+```
+
+For example:
+
+```
+mv fullnames.txt /home/himanshu/Downloads
+```
+
+[![Linux mv command][4]][5]
+
+#### Q2. How to make sure mv prompts before overwriting?
+
+By default, the mv command doesn't prompt when the operation involves overwriting an existing file. For example, the following screenshot shows the existing full_names.txt was overwritten by mv without any warning or notification.
+
+[![How to make sure mv prompts before overwriting][6]][7]
+
+However, if you want, you can force mv to prompt by using the **-i** command line option.
+
+```
+mv -i [file_name] [new_file_name]
+```
+
+[![the -i command option][8]][9]
+
+So the above screenshot clearly shows that **-i** leads to mv asking for user permission before overwriting an existing file. Please note that if you want to explicitly specify that mv should not prompt before overwriting, use the **-f** command line option.
+
+#### Q3. How to make mv not overwrite an existing file?
+
+For this, you need to use the **-n** command line option.
+
+```
+mv -n [filename] [new_filename]
+```
+
+The following screenshot shows the mv operation wasn't successful, as a file named 'full_names.txt' already existed and the command had the -n option in it. 
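If you want to reproduce this without the screenshots, a scratch directory makes the `-n` behavior easy to verify (the file names and paths below are just placeholders):

```shell
# Set up two files in a scratch directory.
mkdir -p /tmp/mv-demo && cd /tmp/mv-demo
echo "old content" > full_names.txt
echo "new content" > names.txt

# -n: the destination already exists, so the move is silently skipped.
mv -n names.txt full_names.txt

cat full_names.txt   # still "old content" -- the destination was not overwritten
ls names.txt         # the source file is still in place
```

Both files are untouched afterwards, which is exactly the point of `-n`.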
+
+[![How to make mv not overwrite an existing file][10]][11]
+
+Note:
+```
+If you specify more than one of -i, -f, -n, only the final one takes effect.
+```
+
+#### Q4. How to make mv remove trailing slashes (if any) from source argument?
+
+To remove any trailing slashes from source arguments, use the **\--strip-trailing-slashes** command line option.
+
+```
+mv --strip-trailing-slashes [source] [dest]
+```
+
+Here's how the official documentation explains the usefulness of this option:
+```
+This is useful when a source argument may have a trailing slash and specify a symbolic link to a directory. This scenario is in fact rather common because some shells can automatically append a trailing slash when performing file name completion on such symbolic links. Without this option, mv, for example, (via the system's rename function) must interpret a trailing slash as a request to dereference the symbolic link and so must rename the indirectly referenced directory and not the symbolic link. Although it may seem surprising that such behavior be the default, it is required by POSIX and is consistent with other parts of that standard.
+```
+
+#### Q5. How to make mv treat destination as normal file?
+
+To be absolutely sure that the destination entity is treated as a normal file (and not a directory), use the **-T** command line option.
+
+```
+mv -T [source] [dest]
+```
+
+Here's why this command line option exists:
+```
+This can help avoid race conditions in programs that operate in a shared area. For example, when the command 'mv /tmp/source /tmp/dest' succeeds, there is no guarantee that /tmp/source was renamed to /tmp/dest: it could have been renamed to /tmp/dest/source instead, if some other process created /tmp/dest as a directory. However, if mv -T /tmp/source /tmp/dest succeeds, there is no question that /tmp/source was renamed to /tmp/dest. 
+```
+```
+In the opposite situation, where you want the last operand to be treated as a directory and want a diagnostic otherwise, you can use the --target-directory (-t) option.
+```
+
+#### Q6. How to make mv move a file only when it's newer than the destination file?
+
+Suppose there exists a file named fullnames.txt in the Downloads directory of your system, and there's a file with the same name in your home directory. Now, you want to update ~/Downloads/fullnames.txt with ~/fullnames.txt, but only when the latter is newer. In this case, you'll have to use the **-u** command line option.
+
+```
+mv -u ~/fullnames.txt ~/Downloads/fullnames.txt
+```
+
+This option is particularly useful when you need to make such decisions from within a shell script.
+
+#### Q7. How to make mv emit details of what it is doing?
+
+If you want mv to output information explaining what exactly it's doing, then use the **-v** command line option.
+
+```
+mv -v [filename] [new_filename]
+```
+
+For example, the following screenshot shows mv emitting some helpful details of what exactly it did.
+
+[![How to make mv emit details of what it is doing][12]][13]
+
+#### Q8. How to force mv to create a backup of existing destination files?
+
+You can do this using the **-b** command line option. The backup file created this way will have the same name as the destination file, but with a tilde (~) appended to it. Here's an example: 
+ + +-------------------------------------------------------------------------------- + +via: https://www.howtoforge.com/linux-mv-command/ + +作者:[Himanshu Arora][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.howtoforge.com +[1]:https://www.howtoforge.com/linux-cp-command/ +[2]:https://www.howtoforge.com/images/command-tutorial/mv-rename-ex.png +[3]:https://www.howtoforge.com/images/command-tutorial/big/mv-rename-ex.png +[4]:https://www.howtoforge.com/images/command-tutorial/mv-transfer-file.png +[5]:https://www.howtoforge.com/images/command-tutorial/big/mv-transfer-file.png +[6]:https://www.howtoforge.com/images/command-tutorial/mv-overwrite.png +[7]:https://www.howtoforge.com/images/command-tutorial/big/mv-overwrite.png +[8]:https://www.howtoforge.com/images/command-tutorial/mv-prompt-overwrite.png +[9]:https://www.howtoforge.com/images/command-tutorial/big/mv-prompt-overwrite.png +[10]:https://www.howtoforge.com/images/command-tutorial/mv-n-option.png +[11]:https://www.howtoforge.com/images/command-tutorial/big/mv-n-option.png +[12]:https://www.howtoforge.com/images/command-tutorial/mv-v-option.png +[13]:https://www.howtoforge.com/images/command-tutorial/big/mv-v-option.png +[14]:https://www.howtoforge.com/images/command-tutorial/mv-b-option.png +[15]:https://www.howtoforge.com/images/command-tutorial/big/mv-b-option.png +[16]:https://linux.die.net/man/1/mv From 72344b8ae6cd2125c69a0dc6ab293b7ef5320654 Mon Sep 17 00:00:00 2001 From: darksun Date: Sun, 21 Jan 2018 22:25:31 +0800 Subject: [PATCH 157/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Rediscovering=20m?= =?UTF-8?q?ake:=20the=20power=20behind=20rules?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...iscovering make- the power behind rules.md | 100 ++++++++++++++++++ 1 file changed, 100 insertions(+) create mode 100644 
sources/tech/20180118 Rediscovering make- the power behind rules.md

diff --git a/sources/tech/20180118 Rediscovering make- the power behind rules.md b/sources/tech/20180118 Rediscovering make- the power behind rules.md
new file mode 100644
index 0000000000..2dbddb8949
--- /dev/null
+++ b/sources/tech/20180118 Rediscovering make- the power behind rules.md
@@ -0,0 +1,100 @@
+Rediscovering make: the power behind rules
+======
+
+![](https://user-images.githubusercontent.com/4419992/35015638-0529f1c0-faf4-11e7-9801-4995fc4b54f0.jpg)
+
+I used to think makefiles were just a convenient way to list groups of shell commands; over time I've learned how powerful, flexible, and full-featured they are. This post brings to light some of those features related to rules.
+
+### Rules
+
+Rules are instructions that tell `make` how and when a file, called the target, should be built. The target can depend on other files, called prerequisites.
+
+You instruct `make` how to build the target in the recipe, which is no more than a set of shell commands to be executed, one at a time, in the order they appear. The syntax looks like this:
+```
+target_name : prerequisites
+ recipe
+```
+
+Once you have defined a rule, you can build the target from the command line by executing:
+```
+$ make target_name
+```
+
+Once the target is built, `make` is smart enough to not run the recipe again unless at least one of the prerequisites has changed.
+
+### More on prerequisites
+
+Prerequisites indicate two things:
+
+ * When the target should be built: if a prerequisite is newer than the target, `make` assumes that the target should be built.
+ * An order of execution: since prerequisites can, in turn, be built by another rule in the makefile, they also implicitly set an order in which rules are executed. 
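Both points are easy to see with a tiny, throwaway makefile (this sketch assumes GNU make is installed; the file names are arbitrary):

```shell
# Scratch setup: one prerequisite file and a one-rule Makefile.
mkdir -p /tmp/make-demo && cd /tmp/make-demo
echo "hello" > in.txt
# out.txt is the target, in.txt is its prerequisite (note the tab before the recipe).
printf 'out.txt: in.txt\n\tcp in.txt out.txt\n' > Makefile

make out.txt   # first run: the recipe executes and out.txt is created
make out.txt   # second run: nothing to do, out.txt is up to date
touch in.txt   # make the prerequisite newer than the target
make out.txt   # the recipe executes again
```

The second invocation is the "smart enough" behavior in action: the target exists and is newer than its prerequisite, so the recipe is skipped.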
+
+
+
+If you want to define an order, but you don't want to rebuild the target if the prerequisite changes, you can use a special kind of prerequisite called order-only, which can be placed after the normal prerequisites, separated by a pipe (`|`).
+
+### Patterns
+
+For convenience, `make` accepts patterns for targets and prerequisites. A pattern is defined by including the `%` character, a wildcard that matches any number of literal characters or an empty string. Here are some examples:
+
+ * `%`: match any file
+ * `%.md`: match all files with the `.md` extension
+ * `prefix%.go`: match all files that start with `prefix` and have the `.go` extension
+
+
+
+### Special targets
+
+There's a set of target names that have special meaning for `make`, called special targets.
+
+You can find the full list of special targets in the [documentation][1]. As a rule of thumb, special targets start with a dot followed by uppercase letters.
+
+Here are a few useful ones:
+
+**.PHONY** : Tells `make` that the prerequisites of this target are considered to be phony targets, which means that `make` will always run its recipe regardless of whether a file with that name exists or what its last-modification time is.
+
+**.DEFAULT** : Used for any target for which no rules are found.
+
+**.IGNORE** : If you specify prerequisites for `.IGNORE`, `make` will ignore errors in the execution of their recipes.
+
+### Substitutions
+
+Substitutions are useful when you need to modify the value of a variable with alterations that you specify.
+
+A substitution has the form `$(var:a=b)` and its meaning is to take the value of the variable `var`, replace every `a` at the end of a word with `b` in that value, and substitute the resulting string. For example:
+```
+foo := a.o
+bar := $(foo:.o=.c) # sets bar to a.c
+```
+
+Note: special thanks to [Luis Lavena][2] for letting me know about the existence of substitutions. 
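The substitution above can be checked from the shell with a throwaway makefile that just echoes the substituted variable (assuming GNU make):

```shell
mkdir -p /tmp/subst-demo && cd /tmp/subst-demo

# bar becomes foo with every trailing ".o" replaced by ".c".
printf 'foo := a.o b.o\nbar := $(foo:.o=.c)\nall:\n\t@echo $(bar)\n' > Makefile

make -s   # prints: a.c b.c
```

Single quotes keep the shell from expanding `$(...)`, so the references reach make intact.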
+
+### Archive Files
+
+Archive files are used to collect multiple data files together into a single file (the same concept as a zip file); they are built with the `ar` Unix utility. `ar` can be used to create archives for any purpose, but it has been largely replaced by `tar` for purposes other than [static libraries][3].
+
+In `make`, you can use an individual member of an archive file as a target or prerequisite as follows:
+```
+archive(member) : prerequisite
+ recipe
+```
+
+### Final Thoughts
+
+There's a lot more to discover about make, but at least this counts as a start. I strongly encourage you to check the [documentation][4], create a dumb makefile, and just play with it.
+
+--------------------------------------------------------------------------------
+
+via: https://monades.roperzh.com/rediscovering-make-power-behind-rules/

+作者:[Roberto Dip][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://monades.roperzh.com
+[1]:https://www.gnu.org/software/make/manual/make.html#Special-Targets
+[2]:https://twitter.com/luislavena/
+[3]:http://tldp.org/HOWTO/Program-Library-HOWTO/static-libraries.html
+[4]:https://www.gnu.org/software/make/manual/make.html
From 60d118ea04ec7d75d7c17deaa55b6151f7dd8f1a Mon Sep 17 00:00:00 2001
From: darksun
Date: Sun, 21 Jan 2018 22:31:47 +0800
Subject: [PATCH 158/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20To=20List?=
 =?UTF-8?q?=20and=20Delete=20iptables=20Firewall=20Rules?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...List and Delete iptables Firewall Rules.md | 106 ++++++++++++++++++
 1 file changed, 106 insertions(+)
 create mode 100644 sources/tech/20180118 How To List and Delete iptables Firewall Rules.md

diff --git a/sources/tech/20180118 How To List and Delete iptables Firewall Rules.md b/sources/tech/20180118 How To List and Delete iptables 
Firewall Rules.md
new file mode 100644
index 0000000000..b6b875ad11
--- /dev/null
+++ b/sources/tech/20180118 How To List and Delete iptables Firewall Rules.md
@@ -0,0 +1,106 @@
+How To List and Delete iptables Firewall Rules
+======
+![How To List and Delete iptables Firewall Rules][1]
+
+We'll show you how to list and delete iptables firewall rules. Iptables is a command line utility that allows system administrators to configure the packet filtering rule set on Linux. iptables requires elevated privileges to operate and must be executed by the root user; otherwise, it fails to function.
+
+### How to List iptables Firewall Rules
+
+Iptables allows you to list all the rules which are already added to the packet filtering rule set. In order to be able to check this you need to have SSH access to the server. [Connect to your Linux VPS via SSH][2] and run the following command:
+```
+sudo iptables -nvL
+```
+
+To run the command above, your user needs to have `sudo` privileges. Otherwise, you need to [add a sudo user on your Linux VPS][3] or use the root user. 
+
+If there are no rules added to the packet filtering ruleset, the output should be similar to the one below:
+```
+Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
+ pkts bytes target prot opt in out source destination
+
+Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
+ pkts bytes target prot opt in out source destination
+
+Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
+ pkts bytes target prot opt in out source destination
+
+```
+
+Since NAT (Network Address Translation) can also be configured via iptables, you can use iptables to list the NAT rules:
+```
+sudo iptables -t nat -n -L -v
+```
+
+The output will be similar to the one below if there are no rules added:
+```
+Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
+ pkts bytes target prot opt in out source destination
+
+Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
+ pkts bytes target prot opt in out source destination
+
+Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
+ pkts bytes target prot opt in out source destination
+
+```
+
+If this is the case, we recommend you check our tutorial on how to [Set Up a Firewall with iptables on Ubuntu and CentOS][4] to make your server more secure.
+
+### How to Delete iptables Firewall Rules
+
+At some point, you may need to remove a specific iptables firewall rule on your server. For that purpose, you need to use the following syntax:
+```
+iptables [-t table] -D chain rulenum
+```
+
+You can display the rule numbers to use with this syntax by adding the `--line-numbers` option to the listing command, e.g. `sudo iptables -L --line-numbers`.
+
+For example, if you have a firewall rule to block all connections from 111.111.111.111 to your server on port 22 and you want to remove that rule, you can use the following command:
+```
+sudo iptables -D INPUT -s 111.111.111.111 -p tcp --dport 22 -j DROP
+```
+
+Now that you have removed the iptables firewall rule, you need to save the changes to make them persistent.
+
+In case you are using an [Ubuntu VPS][5], you need to install an additional package for that purpose. 
To install the required package, use the following command:
+```
+sudo apt-get install iptables-persistent
+```
+
+On **Ubuntu 14.04** you can save and reload the firewall rules using the commands below:
+```
+sudo /etc/init.d/iptables-persistent save
+sudo /etc/init.d/iptables-persistent reload
+```
+
+On **Ubuntu 16.04** use the following commands instead:
+```
+sudo netfilter-persistent save
+sudo netfilter-persistent reload
+```
+
+If you are using a [CentOS VPS][6], you can save the changes using the command below:
+```
+service iptables save
+```
+
+Of course, you don't have to list and delete iptables firewall rules if you use one of our [Managed VPS Hosting][7] services, in which case you can simply ask our expert Linux admins to help you list and delete iptables firewall rules on your server. They are available 24×7 and will take care of your request immediately.
+
+**PS**. If you liked this post on how to list and delete iptables firewall rules, please share it with your friends on the social networks using the buttons on the left or simply leave a reply below. Thanks. 
+ +-------------------------------------------------------------------------------- + +via: https://www.rosehosting.com/blog/how-to-list-and-delete-iptables-firewall-rules/ + +作者:[RoseHosting][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.rosehosting.com +[1]:https://www.rosehosting.com/blog/wp-content/uploads/2018/01/How-To-List-and-Delete-iptables-Firewall-Rules.jpg +[2]:https://www.rosehosting.com/blog/connect-to-your-linux-vps-via-ssh/ +[3]:https://www.rosehosting.com/blog/how-to-create-a-sudo-user-on-ubuntu/ +[4]:https://www.rosehosting.com/blog/how-to-set-up-a-firewall-with-iptables-on-ubuntu-and-centos/ +[5]:https://www.rosehosting.com/ubuntu-vps.html +[6]:https://www.rosehosting.com/centos-vps.html +[7]:https://www.rosehosting.com/managed-vps-hosting.html From 5ca657ab22a7dfd485b1e024ddd4b0db65ffea92 Mon Sep 17 00:00:00 2001 From: darksun Date: Sun, 21 Jan 2018 22:34:11 +0800 Subject: [PATCH 159/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=205=20of=20the=20Be?= =?UTF-8?q?st=20Linux=20Dark=20Themes=20that=20Are=20Easy=20on=20the=20Eye?= =?UTF-8?q?s?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...x Dark Themes that Are Easy on the Eyes.md | 73 +++++++++++++++++++ 1 file changed, 73 insertions(+) create mode 100644 sources/talk/20180119 5 of the Best Linux Dark Themes that Are Easy on the Eyes.md diff --git a/sources/talk/20180119 5 of the Best Linux Dark Themes that Are Easy on the Eyes.md b/sources/talk/20180119 5 of the Best Linux Dark Themes that Are Easy on the Eyes.md new file mode 100644 index 0000000000..db70cd8732 --- /dev/null +++ b/sources/talk/20180119 5 of the Best Linux Dark Themes that Are Easy on the Eyes.md @@ -0,0 +1,73 @@ +5 of the Best Linux Dark Themes that Are Easy on the Eyes +====== + 
+![](https://www.maketecheasier.com/assets/uploads/2017/12/linux-themes.png)
+
+There are several reasons people opt for dark themes on their computers. Some find them easy on the eyes, while others prefer them because of a medical condition. Programmers, especially, like dark themes because they reduce glare on the eyes.
+
+If you are a Linux user and a dark theme lover, you are in luck. Here are five of the best dark themes for Linux. Check them out!
+
+### 1. OSX-Arc-Shadow
+
+![OSX-Arc-Shadow Theme][1]
+
+As its name implies, this theme is inspired by OS X. It is a flat theme based on Arc. The theme supports GTK 3 and GTK 2 desktop environments, so Gnome, Cinnamon, Unity, Manjaro, Mate, and XFCE users can install and use the theme. [OSX-Arc-Shadow][2] is part of the OSX-Arc theme collection. The collection has several other themes (dark and light) included. You can download the whole collection and just use the dark variants.
+
+Debian- and Ubuntu-based distro users have the option of installing the stable release using the .deb files found on this [page][3]. The compressed source files are also on the same page. Arch Linux users, check out this [AUR link][4]. Finally, to install the theme manually, extract the zip content to the "~/.themes" folder and set it as your current theme, controls, and window borders.
+
+### 2. Kiss-Kool-Red version 2
+
+![Kiss-Kool-Red version 2 ][5]
+
+The theme is only a few days old. It has a darker look compared to OSX-Arc-Shadow, and red selection outlines. It is especially appealing to those who want more contrast and less glare from the computer screen. Hence, it reduces distraction when used at night or in places with low light. It supports GTK 3 and GTK 2.
+
+Head to [gnome-looks][6] to download the theme under the "Files" menu. The installation procedure is simple: extract the theme into the "~/.themes" folder and set it as your current theme, controls, and window borders.
+
+### 3. 
Equilux
+
+![Equilux][7]
+
+Equilux is another simple dark theme based on Materia Theme. It has a neutral dark color tone and is not overly fancy. The contrast between the selection outlines is also minimal and not as sharp as the red color in Kiss-Kool-Red. The theme is truly made with reduction of eye strain in mind.
+
+[Download the compressed file][8] and unzip it into your "~/.themes" folder. Then, you can set it as your theme. You can check [its GitHub page][9] for the latest additions.
+
+### 4. Deepin Dark
+
+![Deepin Dark][10]
+
+Deepin Dark is a completely dark theme. For those who like a little more darkness, this theme is definitely one to consider. Moreover, it also reduces the amount of glare from the computer screen. Additionally, it supports Unity. [Download Deepin Dark here][11].
+
+### 5. Ambiance DS BlueSB12
+
+![Ambiance DS BlueSB12 ][12]
+
+Ambiance DS BlueSB12 is a simple dark theme that makes the important details stand out. It helps with focus, as it is not unnecessarily fancy. It is very similar to Deepin Dark. Especially relevant to Ubuntu users, it is compatible with Ubuntu 17.04. You can download and try it from [here][13].
+
+### Conclusion
+
+If you use a computer for a very long time, dark themes are a great way to reduce the strain on your eyes. Even if you don't, dark themes can help you in many other ways, like improving your focus. Let us know which is your favorite. 
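For reference, the manual installation that several of these themes call for boils down to a couple of commands. The sketch below fakes a downloaded archive with a made-up theme name just to show the steps end to end; with a real theme, extract the zip or tarball you actually downloaded instead:

```shell
# Stand-in for a downloaded theme archive (theme name is hypothetical).
mkdir -p /tmp/theme-demo/My-Dark-Theme/gtk-3.0
touch /tmp/theme-demo/My-Dark-Theme/gtk-3.0/gtk.css
tar -C /tmp/theme-demo -cf /tmp/theme-demo/theme.tar My-Dark-Theme

# The actual install step: extract the archive into ~/.themes.
mkdir -p "$HOME/.themes"
tar -xf /tmp/theme-demo/theme.tar -C "$HOME/.themes"
ls "$HOME/.themes"
```

Once the theme directory sits in `~/.themes`, it shows up in GNOME Tweaks (or your desktop's appearance settings) for selection.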
+ +-------------------------------------------------------------------------------- + +via: https://www.maketecheasier.com/best-linux-dark-themes/ + +作者:[Bruno Edoh][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.maketecheasier.com +[1]:https://www.maketecheasier.com/assets/uploads/2017/12/osx-arc-shadow.png (OSX-Arc-Shadow Theme) +[2]:https://github.com/LinxGem33/OSX-Arc-Shadow/ +[3]:https://github.com/LinxGem33/OSX-Arc-Shadow/releases +[4]:https://aur.archlinux.org/packages/osx-arc-shadow/ +[5]:https://www.maketecheasier.com/assets/uploads/2017/12/Kiss-Kool-Red.png (Kiss-Kool-Red version 2 ) +[6]:https://www.gnome-look.org/p/1207964/ +[7]:https://www.maketecheasier.com/assets/uploads/2017/12/equilux.png (Equilux) +[8]:https://www.gnome-look.org/p/1182169/ +[9]:https://github.com/ddnexus/equilux-theme +[10]:https://www.maketecheasier.com/assets/uploads/2017/12/deepin-dark.png (Deepin Dark ) +[11]:https://www.gnome-look.org/p/1190867/ +[12]:https://www.maketecheasier.com/assets/uploads/2017/12/ambience.png (Ambiance DS BlueSB12 ) +[13]:https://www.gnome-look.org/p/1013664/ From 2c79404f18657d294d5707f518ee7a7f41de2833 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 21 Jan 2018 22:37:00 +0800 Subject: [PATCH 160/226] PRF:20171030 How To Create Custom Ubuntu Live CD Image.md @stevenzdg988 --- ...w To Create Custom Ubuntu Live CD Image.md | 129 +++++++++--------- 1 file changed, 64 insertions(+), 65 deletions(-) diff --git a/translated/tech/20171030 How To Create Custom Ubuntu Live CD Image.md b/translated/tech/20171030 How To Create Custom Ubuntu Live CD Image.md index 2a6dad8027..97ede8771c 100644 --- a/translated/tech/20171030 How To Create Custom Ubuntu Live CD Image.md +++ b/translated/tech/20171030 How To Create Custom Ubuntu Live CD Image.md @@ -1,128 +1,127 @@ -如何创建 Ubuntu Live CD (Linux 中国注:Ubuntu 原生光盘)的定制镜像 +如何创建定制的 
Ubuntu Live CD 镜像 ====== + ![](https://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-720x340.png) -今天让我们来讨论一下如何创建 Ubuntu Live CD 的定制镜像(ISO)。我们已经使用[* *Pinguy Builder* *][1]完成了这项工作。但是,现在似乎停止了。最近 Pinguy Builder 的官方网站似乎没有任何更新。幸运的是,我找到了另一种创建 Ubuntu Live CD 镜像的工具。使用 **Cubic** 即 **C**ustom **Ub**untu **I**SO **C**reator (Linux 中国注:Ubuntu 镜像定制器)的首字母所写,一个 GUI (图形用户界面)应用程序用来创建一个可定制的可启动的 Ubuntu Live CD(ISO)镜像。 +今天让我们来讨论一下如何创建 Ubuntu Live CD 的定制镜像(ISO)。我们以前可以使用 [Pinguy Builder][1] 完成这项工作。但是,现在它似乎停止维护了。最近 Pinguy Builder 的官方网站似乎没有任何更新。幸运的是,我找到了另一种创建 Ubuntu Live CD 镜像的工具。使用 Cubic 即 **C**ustom **Ub**untu **I**SO **C**reator 的首字母缩写,这是一个用来创建定制的可启动的 Ubuntu Live CD(ISO)镜像的 GUI 应用程序。 + +Cubic 正在积极开发,它提供了许多选项来轻松地创建一个定制的 Ubuntu Live CD ,它有一个集成的 chroot 命令行环境(LCTT 译注:chroot —— Change Root,也就是改变程序执行时所参考的根目录位置),在那里你可以定制各种方面,比如安装新的软件包、内核,添加更多的背景壁纸,添加更多的文件和文件夹。它有一个直观的 GUI 界面,在 live 镜像创建过程中可以轻松的利用导航(可以利用点击鼠标来回切换)。您可以创建一个新的自定义镜像或修改现有的项目。因为它可以用来制作 Ubuntu live 镜像,所以我相信它可以用在制作其他 Ubuntu 的发行版和衍生版镜像中,比如 Linux Mint。 -Cubic 正在积极开发,它提供了许多选项来轻松地创建一个定制的 Ubuntu Live CD ,它有一个集成的命令行环境``chroot``(Linux 中国注:Change Root,也就是改变程序执行时所参考的根目录位置),在那里你可以定制所有,比如安装新的软件包,内核,添加更多的背景壁纸,添加更多的文件和文件夹。它有一个直观的 GUI 界面,在实时镜像创建过程中可以轻松的利用导航(可以利用点击鼠标来回切换)。您可以创建一个新的自定义镜像或修改现有的项目。因为它可以用来实时制作 Ubuntu 镜像,所以我相信它可以被利用在制作其他 Ubuntu 的发行版和衍生版镜像中使用,比如 Linux Mint。 ### 安装 Cubic -Cubic 的开发人员已经开发出了一个 PPA (Linux 中国注:Personal Package Archives 首字母简写,私有的软件包档案) 来简化安装过程。要在 Ubuntu 系统上安装 Cubic ,在你的终端上运行以下命令: +Cubic 的开发人员已经做出了一个 PPA 来简化安装过程。要在 Ubuntu 系统上安装 Cubic ,在你的终端上运行以下命令: + ``` sudo apt-add-repository ppa:cubic-wizard/release -``` -``` sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 6494C6D6997C215E -``` -``` sudo apt update -``` -``` sudo apt install cubic ``` ### 利用 Cubic 创建 Ubuntu Live CD 的定制镜像 - -安装完成后,从应用程序菜单或坞站启动 Cubic。这是在我在 Ubuntu 16.04 LTS 桌面系统中 Cubic 的样子。 +安装完成后,从应用程序菜单或 dock 启动 Cubic。这是在我在 Ubuntu 16.04 LTS 桌面系统中 Cubic 的样子。 为新项目选择一个目录。它是保存镜像文件的目录。 -[![][2]][3] -请注意,Cubic 不是创建您系统的 Live CD 镜像。而它只是利用 Ubuntu 安装 CD 来创建一个定制的 Live 
CD,因此,你应该有一个最新的 ISO 镜像。 +![][3] + +请注意,Cubic 不是创建您当前系统的 Live CD 镜像,而是利用 Ubuntu 的安装 CD 来创建一个定制的 Live CD,因此,你应该有一个最新的 ISO 镜像。 + 选择您存储 Ubuntu 安装 ISO 镜像的路径。Cubic 将自动填写您定制操作系统的所有细节。如果你愿意,你可以改变细节。单击 Next 继续。 -[![][2]][4] +![][4] -接下来,从压缩的源安装介质中的 Linux 文件系统将被提取到项目的目录(在我们的例子中目录的位置是 **/home/ostechnix/custom_ubuntu**)。 -[![][2]][5] +接下来,来自源安装介质中的压缩的 Linux 文件系统将被提取到项目的目录(在我们的例子中目录的位置是 `/home/ostechnix/custom_ubuntu`)。 +![][5] -一旦文件系统被提取出来,将自动加载到``chroot``环境。如果你没有看到终端提示,按下回车键几次。 -[![][2]][6] +一旦文件系统被提取出来,将自动加载到 chroot 环境。如果你没有看到终端提示符,请按几次回车键。 +![][6] 在这里可以安装任何额外的软件包,添加背景图片,添加软件源列表,添加最新的 Linux 内核和所有其他定制到你的 Live CD 。 例如,我希望 `vim` 安装在我的 Live CD 中,所以现在就要安装它。 -[![][2]][7] +![][7] -我们不需要使用 ``sudo``,因为我们已经在具有最高权限(root)的环境中了。 +我们不需要使用 `sudo`,因为我们已经在具有最高权限(root)的环境中了。 + +类似地,如果需要,可以安装更多的任何版本 Linux 内核。 -类似地,如果需要,可以安装添加的任何版本 Linux Kernel 。 ``` apt install linux-image-extra-4.10.0-24-generic ``` -此外,您还可以更新软件源列表(添加或删除软件存储库列表): -[![][2]][8] +此外,您还可以更新软件源列表(添加或删除软件存储库列表): + +![][8] + +修改源列表后,不要忘记运行 `apt update` 命令来更新源列表: -修改源列表后,不要忘记运行 ``apt update`` 命令来更新源列表: ``` apt update ``` +另外,您还可以向 Live CD 中添加文件或文件夹。复制文件或文件夹(右击它们并选择复制或者利用 `CTRL+C`),在终端右键单击(在 Cubic 窗口内),选择 “Paste file(s)”,最后点击 Cubic 向导底部的 “Copy”。 -另外,您还可以向 Live CD 中添加文件或文件夹。复制文件/文件夹(右击它们并选择复制或者利用 `CTRL+C`),在终端右键单击(在 Cubic 窗口内),选择**Paste file(s)**,最后点击它将其复制进 Cubic 向导的底部。 -[![][2]][9] +![][9] -**Ubuntu 17.10 用户注意事项: ** +**Ubuntu 17.10 用户注意事项** +> 在 Ubuntu 17.10 系统中,DNS 查询可能无法在 chroot 环境中工作。如果您正在制作一个定制的 Ubuntu 17.10 Live 镜像,您需要指向正确的 `resolve.conf` 配置文件: -在 Ubuntu 17.10 系统中,DNS 查询可能无法在 ``chroot``环境中工作。如果您正在制作一个定制的 Ubuntu 17.10 原生镜像,您需要指向正确的 `resolve.conf` 配置文件: -``` +>``` ln -sr /run/systemd/resolve/resolv.conf /run/systemd/resolve/stub-resolv.conf - ``` -验证 DNS 解析工作,运行: -``` +> 要验证 DNS 解析工作,运行: + +> ``` cat /etc/resolv.conf ping google.com ``` +如果你想的话,可以添加你自己的壁纸。要做到这一点,请切换到 `/usr/share/backgrounds/` 目录, -如果你想的话,可以添加你自己的壁纸。要做到这一点,请切换到 **/usr/share/backgrounds/** 目录, ``` cd /usr/share/backgrounds ``` +并将图像拖放到 Cubic 窗口中。或复制图像,右键单击 Cubic 
终端窗口并选择 “Paste file(s)” 选项。此外,确保你在 `/usr/share/gnome-backproperties` 的XML文件中添加了新的壁纸,这样你可以在桌面上右键单击新添加的图像选择 “Change Desktop Background” 进行交互。完成所有更改后,在 Cubic 向导中单击 “Next”。 -并将图像拖放到 Cubic 窗口中。或复制图像,右键单击 Cubic 终端窗口,选择 **Paste file(s)** 选项。此外,确保你在**/usr/share/gnome-backproperties** 的XML文件中添加了新的壁纸,这样你可以在桌面上右键单击新添加的图像选择**Change Desktop Background** 进行交互。完成所有更改后,在 Cubic 向导中单击 ``Next``。 +接下来,选择引导到新的 Live ISO 镜像时使用的 Linux 内核版本。如果已经安装了其他版本内核,它们也将在这部分中被列出。然后选择您想在 Live CD 中使用的内核。 -接下来,选择引导到新的原生 ISO 镜像时使用的 Linux 内核版本。如果已经安装了其他版本内核,它们也将在这部分中被列出。然后选择您想在 Live CD 中使用的内核。 -[![][2]][10] +![][10] +在下一节中,选择要从您的 Live 映像中删除的软件包。在使用定制的 Live 映像安装完 Ubuntu 操作系统后,所选的软件包将自动删除。在选择要删除的软件包时,要格外小心,您可能在不知不觉中删除了一个软件包,而此软件包又是另外一个软件包的依赖包。 -在下一节中,选择要从您的原生映像中删除的软件包。在使用定制的原生映像安装完 Ubuntu 操作系统后,所选的软件包将自动删除。在选择要删除的软件包时,要格外小心,您可能在不知不觉中删除了一个软件包,而此软件包又是另外一个软件包的依赖包。 -[![][2]][11] +![][11] +接下来, Live 镜像创建过程将开始。这里所要花费的时间取决于你定制的系统规格。 -接下来,原生镜像创建过程将开始。这里所要花费的时间取决于你定制的系统规格。 -[![][2]][12] +![][12] +镜像创建完成后后,单击 “Finish”。Cubic 将显示新创建的自定义镜像的细节。 -镜像创建完成后后,单击 ``Finish``。Cubic 将显示新创建的自定义镜像的细节。 +如果你想在将来修改刚刚创建的自定义 Live 镜像,不要选择“ Delete all project files, except the generated disk image and the corresponding MD5 checksum file”(除了生成的磁盘映像和相应的 MD5 校验和文件之外,删除所有的项目文件**) ,Cubic 将在项目的工作目录中保留自定义图像,您可以在将来进行任何更改。而不用从头再来一遍。 -如果你想在将来修改刚刚创建的自定义原生镜像,**uncheck** 选项解释说**" Delete all project files, except the generated disk image and the corresponding MD5 checksum file"** (**除了生成的磁盘映像和相应的MD5校验和文件之外,删除所有的项目文件**) Cubic 将在项目的工作目录中保留自定义图像,您可以在将来进行任何更改。而不用从头再来一遍。 +要为不同的 Ubuntu 版本创建新的 Live 镜像,最好使用不同的项目目录。 -要为不同的 Ubuntu 版本创建新的原生镜像,最好使用不同的项目目录。 ### 利用 Cubic 修改 Ubuntu Live CD 的定制镜像 -从菜单中启动 Cubic ,并选择一个现有的项目目录。单击 Next 按钮,您将看到以下三个选项: - 1. 从现有项目创建一个磁盘映像。 - 2. 继续定制现有项目。 - 3. 删除当前项目。 +从菜单中启动 Cubic ,并选择一个现有的项目目录。单击 “Next” 按钮,您将看到以下三个选项: +1. Create a disk image from the existing project. (从现有项目创建一个磁盘映像。) +2. Continue customizing the existing project.(继续定制现有项目。) +3. 
Delete the existing project.(删除当前项目。) +![][13] -[![][2]][13] +第一个选项将允许您从现有项目中使用之前所做的自定义设置创建一个新的 Live ISO 镜像。如果您丢失了 ISO 镜像,您可以使用第一个选项来创建一个新的。 - -第一个选项将允许您使用之前所做的自定义在现有项目中创建一个新的原生 ISO 镜像。如果您丢失了 ISO 镜像,您可以使用第一个选项来创建一个新的。 - -第二个选项允许您在现有项目中进行任何其他更改。如果您选择此选项,您将再次进入 ``chroot``环境。您可以添加新的文件或文件夹,安装任何新的软件,删除任何软件,添加其他的 Linux 内核,添加桌面背景等等。 +第二个选项允许您在现有项目中进行任何其他更改。如果您选择此选项,您将再次进入 chroot 环境。您可以添加新的文件或文件夹,安装任何新的软件,删除任何软件,添加其他的 Linux 内核,添加桌面背景等等。 第三个选项将删除现有的项目,所以您可以从头开始。选择此选项将删除所有文件,包括新生成的 ISO 镜像文件。 @@ -137,21 +136,21 @@ via: https://www.ostechnix.com/create-custom-ubuntu-live-cd-image/ 作者:[SK][a] 译者:[stevenzdg988](https://github.com/stevenzdg988) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:https://www.ostechnix.com/author/sk/ [1]:https://www.ostechnix.com/pinguy-builder-build-custom-ubuntu-os/ [2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[3]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-1.png () -[4]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-2.png () -[5]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-3.png () -[6]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-4.png () -[7]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-6.png () -[8]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-5.png () -[9]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-7.png () -[10]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-8.png () -[11]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-10-1.png () -[12]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-12-1.png () -[13]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-13.png () +[3]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-1.png +[4]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-2.png 
+[5]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-3.png +[6]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-4.png +[7]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-6.png +[8]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-5.png +[9]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-7.png +[10]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-8.png +[11]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-10-1.png +[12]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-12-1.png +[13]:http://www.ostechnix.com/wp-content/uploads/2017/10/Cubic-13.png From 69fdff710300613d84b81fd7cf2bd39296fbf857 Mon Sep 17 00:00:00 2001 From: jessie-pang <35220454+jessie-pang@users.noreply.github.com> Date: Sun, 21 Jan 2018 22:37:08 +0800 Subject: [PATCH 161/226] Delete 20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md --- ...For Googles In-house Linux Distribution.md | 105 ------------------ 1 file changed, 105 deletions(-) delete mode 100644 sources/tech/20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md diff --git a/sources/tech/20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md b/sources/tech/20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md deleted file mode 100644 index 5a63106d2f..0000000000 --- a/sources/tech/20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md +++ /dev/null @@ -1,105 +0,0 @@ -Translating by jessie-pang - -No More Ubuntu! Debian is the New Choice For Google’s In-house Linux Distribution -============================================================ - -_Brief: For years Google used Goobuntu, an in-house, Ubuntu-based operating system. 
Goobuntu is now being replaced by gLinux, which is based on Debian Testing._ - -If you have read [Ubuntu facts][18], you probably already know that Google uses a Linux distribution called [Goobuntu][19] as the development platform. It is a custom Linux distribution based on…(easy to guess)… Ubuntu. - -Goobuntu is basically a “[light skin over standard Ubuntu][20]“. It is based on the LTS releases of Ubuntu. If you think that Google contributes to the testing or development of Ubuntu, you are wrong. Google is simply a paying customer for Canonical’s [Ubuntu Advantage Program][21]. [Canonical][22] is the parent company behind Ubuntu. - -### Meet gLinux: Google’s new Linux distribution based on Debian Buster - -![gLinux from Goobuntu](https://itsfoss.com/wp-content/uploads/2018/01/glinux-announcement-800x450.jpg) - -After more than five years with Ubuntu, Google is replacing Goobuntu with gLinux, a Linux distribution based on Debian Testing release. - -As [MuyLinux reports][23], gLinux is being built from the source code of the packages and Google introduces its own changes to it. The changes will also be contributed to the upstream. - -This ‘news’ is not really new. It was announced in Debconf’17 in August last year. Somehow the story did not get the attention it deserves. - -You can watch the presentation in Debconf video [here][24]. The gLinux presentation starts around 12:00. - -[Suggested readCity of Barcelona Kicks Out Microsoft in Favor of Linux and Open Source][25] - -### Moving from Ubuntu 14.04 LTS to Debian 10 Buster - -Once Google opted Ubuntu LTS for stability. Now it is moving to Debian testing branch for timely testing the packages. But it is not clear why Google decided to switch to Debian from Ubuntu. - -How does Google plan to move to Debian Testing? The current Debian Testing release is upcoming Debian 10 Buster. Google has developed an internal tool to migrate the existing systems from Ubuntu 14.04 LTS to Debian 10 Buster. 
Project leader Margarita claimed in the Debconf talk that tool was tested to be working fine. - -Google also plans to send the changes to Debian Upstream and hence contributing to its development. - -![gLinux testing plan from Google](https://itsfoss.com/wp-content/uploads/2018/01/glinux-testing-plan.jpg) -Development plan for gLinux - -### Ubuntu loses a big customer! - -Back in 2012, Canonical had clarified that Google is not their largest business desktop customer. However, it is safe to say that Google was a big customer for them. As Google prepares to switch to Debian, this will surely result in revenue loss for Canonical. - -[Suggested readMandrake Linux Creator Launches a New Open Source Mobile OS][26] - -### What do you think? - -Do keep in mind that Google doesn’t restrict its developers from using any operating system. However, use of Linux is encouraged. - -If you are thinking that you can get your hands on either of Goobuntu or gLinux, you’ll have to get a job at Google. It is an internal project of Google and is not accessible to the general public. - -Overall, it is a good news for Debian, especially if they get changes to upstream. Cannot say the same for Ubuntu though. I have contacted Canonical for a comment but have got no response so far. - -Update: Canonical responded that they “don’t share details of relationships with individual customers” and hence they cannot provide details about revenue and any other such details. - -What are your views on Google ditching Ubuntu for Debian? - -[Share3K][9][Tweet][10][+1][11][Share161][12][Stumble][13][Reddit644][14]SHARES3K - -
- -Filed Under: [News][15]Tagged With: [glinux][16], [goobuntu][17] - -
- -![](https://secure.gravatar.com/avatar/20749c268f5d3e4d2c785499eb6a17c0?s=125&d=mm&r=g) - -#### About Abhishek Prakash - -I am a professional software developer, and founder of It’s FOSS. I am an avid Linux lover and Open Source enthusiast. I use Ubuntu and believe in sharing knowledge. Apart from Linux, I love classic detective mysteries. I’m a huge fan of Agatha Christie’s work. - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/goobuntu-glinux-google/ - -作者:[Abhishek Prakash ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://itsfoss.com/author/abhishek/ -[1]:https://itsfoss.com/author/abhishek/ -[2]:https://itsfoss.com/goobuntu-glinux-google/#comments -[3]:https://www.facebook.com/share.php?u=https%3A%2F%2Fitsfoss.com%2Fgoobuntu-glinux-google%2F%3Futm_source%3Dfacebook%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare -[4]:https://twitter.com/share?original_referer=/&text=No+More+Ubuntu%21+Debian+is+the+New+Choice+For+Google%E2%80%99s+In-house+Linux+Distribution&url=https://itsfoss.com/goobuntu-glinux-google/%3Futm_source%3Dtwitter%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare&via=abhishek_foss -[5]:https://plus.google.com/share?url=https%3A%2F%2Fitsfoss.com%2Fgoobuntu-glinux-google%2F%3Futm_source%3DgooglePlus%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare -[6]:https://www.linkedin.com/cws/share?url=https%3A%2F%2Fitsfoss.com%2Fgoobuntu-glinux-google%2F%3Futm_source%3DlinkedIn%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare -[7]:http://www.stumbleupon.com/submit?url=https://itsfoss.com/goobuntu-glinux-google/&title=No+More+Ubuntu%21+Debian+is+the+New+Choice+For+Google%26%238217%3Bs+In-house+Linux+Distribution 
-[8]:https://www.reddit.com/submit?url=https://itsfoss.com/goobuntu-glinux-google/&title=No+More+Ubuntu%21+Debian+is+the+New+Choice+For+Google%26%238217%3Bs+In-house+Linux+Distribution -[9]:https://www.facebook.com/share.php?u=https%3A%2F%2Fitsfoss.com%2Fgoobuntu-glinux-google%2F%3Futm_source%3Dfacebook%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare -[10]:https://twitter.com/share?original_referer=/&text=No+More+Ubuntu%21+Debian+is+the+New+Choice+For+Google%E2%80%99s+In-house+Linux+Distribution&url=https://itsfoss.com/goobuntu-glinux-google/%3Futm_source%3Dtwitter%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare&via=abhishek_foss -[11]:https://plus.google.com/share?url=https%3A%2F%2Fitsfoss.com%2Fgoobuntu-glinux-google%2F%3Futm_source%3DgooglePlus%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare -[12]:https://www.linkedin.com/cws/share?url=https%3A%2F%2Fitsfoss.com%2Fgoobuntu-glinux-google%2F%3Futm_source%3DlinkedIn%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare -[13]:http://www.stumbleupon.com/submit?url=https://itsfoss.com/goobuntu-glinux-google/&title=No+More+Ubuntu%21+Debian+is+the+New+Choice+For+Google%26%238217%3Bs+In-house+Linux+Distribution -[14]:https://www.reddit.com/submit?url=https://itsfoss.com/goobuntu-glinux-google/&title=No+More+Ubuntu%21+Debian+is+the+New+Choice+For+Google%26%238217%3Bs+In-house+Linux+Distribution -[15]:https://itsfoss.com/category/news/ -[16]:https://itsfoss.com/tag/glinux/ -[17]:https://itsfoss.com/tag/goobuntu/ -[18]:https://itsfoss.com/facts-about-ubuntu/ -[19]:https://en.wikipedia.org/wiki/Goobuntu -[20]:http://www.zdnet.com/article/the-truth-about-goobuntu-googles-in-house-desktop-ubuntu-linux/ -[21]:https://www.ubuntu.com/support -[22]:https://www.canonical.com/ -[23]:https://www.muylinux.com/2018/01/15/goobuntu-glinux-google/ -[24]:https://debconf17.debconf.org/talks/44/ -[25]:https://itsfoss.com/barcelona-open-source/ -[26]:https://itsfoss.com/eelo-mobile-os/ From 
803efaaa4b39956d71f74252c67067e2556eaf47 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 21 Jan 2018 22:37:32 +0800 Subject: [PATCH 162/226] PUB:20171030 How To Create Custom Ubuntu Live CD Image.md @stevenzdg988 https://linux.cn/article-9264-1.html --- .../20171030 How To Create Custom Ubuntu Live CD Image.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20171030 How To Create Custom Ubuntu Live CD Image.md (100%) diff --git a/translated/tech/20171030 How To Create Custom Ubuntu Live CD Image.md b/published/20171030 How To Create Custom Ubuntu Live CD Image.md similarity index 100% rename from translated/tech/20171030 How To Create Custom Ubuntu Live CD Image.md rename to published/20171030 How To Create Custom Ubuntu Live CD Image.md From 0b546c0164c9f22ea272da5c78d38156da21ea1d Mon Sep 17 00:00:00 2001 From: jessie-pang <35220454+jessie-pang@users.noreply.github.com> Date: Sun, 21 Jan 2018 22:38:55 +0800 Subject: [PATCH 163/226] 20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md --- ...For Googles In-house Linux Distribution.md | 102 ++++++++++++++++++ 1 file changed, 102 insertions(+) create mode 100644 translated/tech/20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md diff --git a/translated/tech/20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md b/translated/tech/20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md new file mode 100644 index 0000000000..674dff1928 --- /dev/null +++ b/translated/tech/20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md @@ -0,0 +1,102 @@ +Debian 取代 Ubuntu 成为 Google 内部 Linux 发行版的新选择 +============================================================ + +_摘要:Google 多年来一直使用基于 Ubuntu 的内部操作系统 Goobuntu。如今,Goobuntu 正在被基于 Debian Testing 的 gLinux 所取代。 + +如果你读过 [Ubuntu facts][18],你可能知道 Google 使用了一个名为 
[Goobuntu][19] 的 Linux 发行版作为开发平台。这是一个定制化的 Linux 发行版,不难猜到,它是基于 Ubuntu 的。 + +Goobuntu 基本上是一个 [采用轻量级的界面的 Ubuntu][20],它是基于 Ubuntu LTS 版本的。如果你认为 Google 对 Ubuntu 的测试或开发做出了贡献,那么你就错了。Google 只是 Canonical 公司的 [Ubuntu Advantage Program][21] 计划的付费客户而已。[Canonical][22] 是 Ubuntu 的母公司。 + +### 遇见 gLinux:Google 基于 Debian Buster 的新 Linux 发行版 +![gLinux from Goobuntu](https://itsfoss.com/wp-content/uploads/2018/01/glinux-announcement-800x450.jpg) + +在使用 Ubuntu 五年多以后,Google 正在用一个基于 Debian Testing 版本的 Linux 发行版—— gLinux 取代 Goobuntu。 + +正如 [MuyLinux][23] 所报道的,gLinux 是从软件包的源代码中构建出来的,然后 Google 对其进行了修改,这些改动也将为上游做出贡献。 + +这个“新闻”并不是什么新鲜事,它早在去年八月就在 Debconf'17 开发者大会上宣布了。但不知为何,这件事并没有引起应有的关注。 + +请点击 [这里][24] 观看 Debconf 视频中的演示。gLinux 的演示从 12:00 开始。 + +[推荐阅读:微软出局,巴塞罗那青睐 Linux 系统和开源软件][25] + +### 从 Ubuntu 14.04 LTS 转移到 Debian 10 Buster + +Google 曾经看重 Ubuntu LTS 的稳定性,现在为了及时测试软件而转移到 Debian Testing 上。但目前尚不清楚 Google 为什么决定从 Ubuntu 切换到 Debian。 + +Google 计划如何转移到 Debian Testing?目前的 Debian Testing 版本是即将发布的 Debian 10 Buster。Google 开发了一个内部工具,用于将现有系统从 Ubuntu 14.04 LTS 迁移到 Debian 10 Buster。项目负责人 Margarita 在 Debconf 中声称,经过测试,该工具工作正常。 + +Google 还计划将这些改动发到 Debian 的上游项目中,从而为其发展做出贡献。 + +![gLinux testing plan from Google](https://itsfoss.com/wp-content/uploads/2018/01/glinux-testing-plan.jpg) +gLinux 的开发计划 + +### Ubuntu 丢失了一个大客户! + +回溯到 2012 年,Canonical 公司澄清说 Google 不是他们最大的商业桌面客户。但至少可以说,Google 是他们的大客户。当 Google 准备切换到 Debian 时,必然会使 Canonical 蒙受损失。 + +[推荐阅读:Mandrake Linux Creator 推出新的开源移动操作系统][26] + +### 你怎么看? + +请记住,Google 不会限制其开发者使用任何操作系统,但鼓励使用 Linux。 + +如果你想使用 Goobuntu 或 gLinux,那得成为 Google 公司的雇员才行。因为这是 Google 的内部项目,不对公众开放。 + +总的来说,这对 Debian 来说是一个好消息,尤其是如果他们改变了上游的话。对 Ubuntu 来说可就不同了。我已经联系了 Canonical 公司征求意见,但至今没有回应。 + +更新:Canonical 公司回应称,他们“不共享与单个客户关系的细节”,因此他们不能提供有关收入和任何其他的细节。 + +你对 Google 抛弃 Ubuntu 而选择 Debian 有什么看法? + +
+ +发表于:[新闻][15] +标签:[glinux][16]、[goobuntu][17] + +
+ +![](https://secure.gravatar.com/avatar/20749c268f5d3e4d2c785499eb6a17c0?s=125&d=mm&r=g) + +#### 关于作者 Abhishek Prakash + +I am a professional software developer, and founder of It’s FOSS. I am an avid Linux lover and Open Source enthusiast. I use Ubuntu and believe in sharing knowledge. Apart from Linux, I love classic detective mysteries. I’m a huge fan of Agatha Christie’s work. +我是一名专业的软件开发人员,也是 FOSS 的创始人。我是一个狂热的 Linux 爱好者和开源爱好者。我使用Ubuntu并相信知识共享。除了 Linux 之外,我还喜欢经典侦探的奥秘。我是阿加莎·克里斯蒂(Agatha Christie)作品的忠实粉丝。 + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/goobuntu-glinux-google/ + +作者:[Abhishek Prakash ][a] +译者:[jessie-pang](https://github.com/jessie-pang) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://itsfoss.com/author/abhishek/ +[1]:https://itsfoss.com/author/abhishek/ +[2]:https://itsfoss.com/goobuntu-glinux-google/#comments +[3]:https://www.facebook.com/share.php?u=https%3A%2F%2Fitsfoss.com%2Fgoobuntu-glinux-google%2F%3Futm_source%3Dfacebook%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare +[4]:https://twitter.com/share?original_referer=/&text=No+More+Ubuntu%21+Debian+is+the+New+Choice+For+Google%E2%80%99s+In-house+Linux+Distribution&url=https://itsfoss.com/goobuntu-glinux-google/%3Futm_source%3Dtwitter%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare&via=abhishek_foss +[5]:https://plus.google.com/share?url=https%3A%2F%2Fitsfoss.com%2Fgoobuntu-glinux-google%2F%3Futm_source%3DgooglePlus%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare +[6]:https://www.linkedin.com/cws/share?url=https%3A%2F%2Fitsfoss.com%2Fgoobuntu-glinux-google%2F%3Futm_source%3DlinkedIn%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare 
+[7]:http://www.stumbleupon.com/submit?url=https://itsfoss.com/goobuntu-glinux-google/&title=No+More+Ubuntu%21+Debian+is+the+New+Choice+For+Google%26%238217%3Bs+In-house+Linux+Distribution +[8]:https://www.reddit.com/submit?url=https://itsfoss.com/goobuntu-glinux-google/&title=No+More+Ubuntu%21+Debian+is+the+New+Choice+For+Google%26%238217%3Bs+In-house+Linux+Distribution +[9]:https://www.facebook.com/share.php?u=https%3A%2F%2Fitsfoss.com%2Fgoobuntu-glinux-google%2F%3Futm_source%3Dfacebook%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare +[10]:https://twitter.com/share?original_referer=/&text=No+More+Ubuntu%21+Debian+is+the+New+Choice+For+Google%E2%80%99s+In-house+Linux+Distribution&url=https://itsfoss.com/goobuntu-glinux-google/%3Futm_source%3Dtwitter%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare&via=abhishek_foss +[11]:https://plus.google.com/share?url=https%3A%2F%2Fitsfoss.com%2Fgoobuntu-glinux-google%2F%3Futm_source%3DgooglePlus%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare +[12]:https://www.linkedin.com/cws/share?url=https%3A%2F%2Fitsfoss.com%2Fgoobuntu-glinux-google%2F%3Futm_source%3DlinkedIn%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare +[13]:http://www.stumbleupon.com/submit?url=https://itsfoss.com/goobuntu-glinux-google/&title=No+More+Ubuntu%21+Debian+is+the+New+Choice+For+Google%26%238217%3Bs+In-house+Linux+Distribution +[14]:https://www.reddit.com/submit?url=https://itsfoss.com/goobuntu-glinux-google/&title=No+More+Ubuntu%21+Debian+is+the+New+Choice+For+Google%26%238217%3Bs+In-house+Linux+Distribution +[15]:https://itsfoss.com/category/news/ +[16]:https://itsfoss.com/tag/glinux/ +[17]:https://itsfoss.com/tag/goobuntu/ +[18]:https://itsfoss.com/facts-about-ubuntu/ +[19]:https://en.wikipedia.org/wiki/Goobuntu +[20]:http://www.zdnet.com/article/the-truth-about-goobuntu-googles-in-house-desktop-ubuntu-linux/ +[21]:https://www.ubuntu.com/support +[22]:https://www.canonical.com/ 
+[23]:https://www.muylinux.com/2018/01/15/goobuntu-glinux-google/ +[24]:https://debconf17.debconf.org/talks/44/ +[25]:https://itsfoss.com/barcelona-open-source/ +[26]:https://itsfoss.com/eelo-mobile-os/ \ No newline at end of file From 1121d2aec15548eda89fa1d81e8868298eed819b Mon Sep 17 00:00:00 2001 From: jessie-pang <35220454+jessie-pang@users.noreply.github.com> Date: Sun, 21 Jan 2018 22:42:40 +0800 Subject: [PATCH 164/226] Update 20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md --- ...he New Choice For Googles In-house Linux Distribution.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/translated/tech/20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md b/translated/tech/20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md index 674dff1928..acfccc63e3 100644 --- a/translated/tech/20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md +++ b/translated/tech/20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md @@ -1,7 +1,7 @@ Debian 取代 Ubuntu 成为 Google 内部 Linux 发行版的新选择 ============================================================ -_摘要:Google 多年来一直使用基于 Ubuntu 的内部操作系统 Goobuntu。如今,Goobuntu 正在被基于 Debian Testing 的 gLinux 所取代。 +_摘要_:Google 多年来一直使用基于 Ubuntu 的内部操作系统 Goobuntu。如今,Goobuntu 正在被基于 Debian Testing 的 gLinux 所取代。 如果你读过 [Ubuntu facts][18],你可能知道 Google 使用了一个名为 [Goobuntu][19] 的 Linux 发行版作为开发平台。这是一个定制化的 Linux 发行版,不难猜到,它是基于 Ubuntu 的。 @@ -61,7 +61,7 @@ gLinux 的开发计划 #### 关于作者 Abhishek Prakash I am a professional software developer, and founder of It’s FOSS. I am an avid Linux lover and Open Source enthusiast. I use Ubuntu and believe in sharing knowledge. Apart from Linux, I love classic detective mysteries. I’m a huge fan of Agatha Christie’s work. 
-我是一名专业的软件开发人员,也是 FOSS 的创始人。我是一个狂热的 Linux 爱好者和开源爱好者。我使用Ubuntu并相信知识共享。除了 Linux 之外,我还喜欢经典侦探的奥秘。我是阿加莎·克里斯蒂(Agatha Christie)作品的忠实粉丝。 +我是一名专业的软件开发人员,也是 FOSS 的创始人。我是一个狂热的 Linux 爱好者和开源爱好者。我使用 Ubuntu 并相信知识共享。除了 Linux 之外,我还喜欢经典侦探的奥秘。我是阿加莎·克里斯蒂(Agatha Christie)作品的忠实粉丝。 -------------------------------------------------------------------------------- @@ -99,4 +99,4 @@ via: https://itsfoss.com/goobuntu-glinux-google/ [23]:https://www.muylinux.com/2018/01/15/goobuntu-glinux-google/ [24]:https://debconf17.debconf.org/talks/44/ [25]:https://itsfoss.com/barcelona-open-source/ -[26]:https://itsfoss.com/eelo-mobile-os/ \ No newline at end of file +[26]:https://itsfoss.com/eelo-mobile-os/ From 3a571a200dd325836c52e413188d84655bc3fe1f Mon Sep 17 00:00:00 2001 From: jessie-pang <35220454+jessie-pang@users.noreply.github.com> Date: Sun, 21 Jan 2018 22:50:39 +0800 Subject: [PATCH 165/226] Update 20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md --- ... the New Choice For Googles In-house Linux Distribution.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/translated/tech/20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md b/translated/tech/20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md index acfccc63e3..b585a739e9 100644 --- a/translated/tech/20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md +++ b/translated/tech/20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md @@ -60,8 +60,8 @@ gLinux 的开发计划 #### 关于作者 Abhishek Prakash -I am a professional software developer, and founder of It’s FOSS. I am an avid Linux lover and Open Source enthusiast. I use Ubuntu and believe in sharing knowledge. Apart from Linux, I love classic detective mysteries. I’m a huge fan of Agatha Christie’s work. 
-我是一名专业的软件开发人员,也是 FOSS 的创始人。我是一个狂热的 Linux 爱好者和开源爱好者。我使用 Ubuntu 并相信知识共享。除了 Linux 之外,我还喜欢经典侦探的奥秘。我是阿加莎·克里斯蒂(Agatha Christie)作品的忠实粉丝。 +我是一名专业的软件开发人员,也是 FOSS 的创始人。我是一个狂热的 Linux 爱好者和开源爱好者。我使用 Ubuntu 并相信知识共享。除了 Linux 之外,我还喜欢经典的侦探推理故事。我是阿加莎·克里斯蒂(Agatha Christie)作品的忠实粉丝。 + -------------------------------------------------------------------------------- From 0cd0711d69e0fee3083af8cba5c0601f67a75d39 Mon Sep 17 00:00:00 2001 From: darksun Date: Sun, 21 Jan 2018 23:14:18 +0800 Subject: [PATCH 166/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20EU=20GDPR=20and?= =?UTF-8?q?=20personal=20data=20in=20web=20server=20logs?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit From c82d2a18b464c14e895bb72f3d8c85f0753ab3db Mon Sep 17 00:00:00 2001 From: darksun Date: Sun, 21 Jan 2018 23:18:36 +0800 Subject: [PATCH 167/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20SPARTA=20?= =?UTF-8?q?=E2=80=93=20Network=20Penetration=20Testing=20GUI=20Toolkit?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Network Penetration Testing GUI Toolkit.md | 107 ++++++++++++++++++ 1 file changed, 107 insertions(+) create mode 100644 sources/tech/20180116 SPARTA - Network Penetration Testing GUI Toolkit.md diff --git a/sources/tech/20180116 SPARTA - Network Penetration Testing GUI Toolkit.md b/sources/tech/20180116 SPARTA - Network Penetration Testing GUI Toolkit.md new file mode 100644 index 0000000000..06427c101d --- /dev/null +++ b/sources/tech/20180116 SPARTA - Network Penetration Testing GUI Toolkit.md @@ -0,0 +1,107 @@ +SPARTA – Network Penetration Testing GUI Toolkit +====== + +![](https://i0.wp.com/gbhackers.com/wp-content/uploads/2018/01/GjWDZ1516079830.png?resize=696%2C379&ssl=1) + +SPARTA is GUI application developed with python and inbuild Network Penetration Testing Kali Linux tool. It simplifies scanning and enumeration phase with faster results. 
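
The time saving comes from SPARTA chaining staged nmap runs for you instead of making you type each one. As a rough sketch of that staged-scan idea (this is not SPARTA's actual code, and the per-stage port groupings below are invented purely for illustration):

```shell
# Hypothetical helper mirroring SPARTA's staged-scan idea: build (but do
# not run) the nmap command line for a given stage, so the stages can be
# reviewed first or piped to sh. The port groupings are made up for this sketch.
stage_scan() {
  target=$1
  stage=$2
  case $stage in
    1) ports="T:80,443,8080,U:53,161" ;;  # quick pass over a few common ports
    2) ports="1-1023" ;;                  # privileged port range
    *) ports="1024-65535" ;;              # everything else, last
  esac
  printf 'nmap -sV -p %s -oX stage%s.xml %s\n' "$ports" "$stage" "$target"
}

stage_scan 192.168.1.10 2   # -> nmap -sV -p 1-1023 -oX stage2.xml 192.168.1.10
```

Here `-sV` asks nmap for service/version detection and `-oX` writes XML — the kind of machine-readable output a GUI front end can parse to decide which enumeration tool to launch next.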
+ +Best thing of SPARTA GUI Toolkit it scans detects the service running on the target port. + +Also, it provides Bruteforce attack for scanned open ports and services as a part of enumeration phase. + + +Also Read: Network Pentesting Checklist][1] + +## Installation + +Please clone the latest version of SPARTA from github: + +``` +git clone https://github.com/secforce/sparta.git +``` + +Alternatively, download the latest zip file [here][2]. +``` +cd /usr/share/ +git clone https://github.com/secforce/sparta.git +``` +Place the "sparta" file in /usr/bin/ and make it executable. +Type 'sparta' in any terminal to launch the application. + + +## The scope of Network Penetration Testing Work: + + * Organizations security weaknesses in their network infrastructures are identified by a list of host or targeted host and add them to the scope. + * Select menu bar - File > Add host(s) to scope + + + +[![Network Penetration Testing][3]][4] + +[![Network Penetration Testing][5]][6] + + * Above figures show target Ip is added to the scope.According to your network can add the range of IPs to scan. + * After adding Nmap scan will begin and results will be very faster.now scanning phase is done. + + + +## Open Ports & Services: + + * Nmap results will provide target open ports and services. + + + +[![Network Penetration Testing][7]][8] + + * Above figure shows that target operating system, Open ports and services are discovered as scan results. + + + +## Brute Force Attack on Open ports: + + * Let us Brute force Server Message Block (SMB) via port 445 to enumerate the list of users and their valid passwords. + + + +[![Network Penetration Testing][9]][10] + + * Right-click and Select option Send to Brute.Also, select discovered Open ports and service on target. + * Browse and add dictionary files for Username and password fields. 
+ + + +[![Network Penetration Testing][11]][12] + + * Click Run to start the Brute force attack on the target.Above Figure shows Brute force attack is successfully completed on the target IP and the valid password is Found! + * Always think failed login attempts will be logged as Event logs in Windows. + * Password changing policy should be 15 to 30 days will be a good practice. + * Always recommended to use a strong password as per policy.Password lockout policy is a good one to stop brute force attacks (After 5 failure attempts account will be locked) + * The integration of business-critical asset to SIEM( security incident & Event Management) will detect these kinds of attacks as soon as possible. + + + +SPARTA is timing saving GUI Toolkit for pentesters for scanning and enumeration phase.SPARTA Scans and Bruteforce various protocols.It has many more features! Happy Hacking. + +-------------------------------------------------------------------------------- + +via: https://gbhackers.com/sparta-network-penetration-testing-gui-toolkit/ + +作者:[Balaganesh][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://gbhackers.com/author/balaganesh/ +[1]:https://gbhackers.com/network-penetration-testing-checklist-examples/ +[2]:https://github.com/SECFORCE/sparta/archive/master.zip +[3]:https://i0.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-526.png?resize=696%2C495&ssl=1 +[4]:https://i0.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-526.png?ssl=1 +[5]:https://i2.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-527.png?resize=696%2C516&ssl=1 +[6]:https://i2.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-527.png?ssl=1 +[7]:https://i2.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-528.png?resize=696%2C519&ssl=1 
+[8]:https://i2.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-528.png?ssl=1 +[9]:https://i1.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-529.png?resize=696%2C525&ssl=1 +[10]:https://i1.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-529.png?ssl=1 +[11]:https://i2.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-531.png?resize=696%2C523&ssl=1 +[12]:https://i2.wp.com/gbhackers.com/wp-content/uploads/2018/01/Screenshot-531.png?ssl=1 From d177376e3aa7a015ef1d3458927028a57a10c2d7 Mon Sep 17 00:00:00 2001 From: darksun Date: Sun, 21 Jan 2018 23:21:52 +0800 Subject: [PATCH 168/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=204=20Tools=20for?= =?UTF-8?q?=20Network=20Snooping=20on=20Linux?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...4 4 Tools for Network Snooping on Linux.md | 197 ++++++++++++++++++ 1 file changed, 197 insertions(+) create mode 100644 sources/tech/20180104 4 Tools for Network Snooping on Linux.md diff --git a/sources/tech/20180104 4 Tools for Network Snooping on Linux.md b/sources/tech/20180104 4 Tools for Network Snooping on Linux.md new file mode 100644 index 0000000000..0ba60006ee --- /dev/null +++ b/sources/tech/20180104 4 Tools for Network Snooping on Linux.md @@ -0,0 +1,197 @@ +4 Tools for Network Snooping on Linux +====== +Computer networking data has to be exposed, because packets can't travel blindfolded, so join us as we use `whois`, `dig`, `nmcli`, and `nmap` to snoop networks. + +Do be polite and don't run `nmap` on any network but your own, because probing other people's networks can be interpreted as a hostile act. + +### Thin and Thick whois + +You may have noticed that our beloved old `whois` command doesn't seem to give the level of detail that it used to. 
Check out this example for Linux.com: +``` +$ whois linux.com +Domain Name: LINUX.COM +Registry Domain ID: 4245540_DOMAIN_COM-VRSN +Registrar WHOIS Server: whois.namecheap.com +Registrar URL: http://www.namecheap.com +Updated Date: 2018-01-10T12:26:50Z +Creation Date: 1994-06-02T04:00:00Z +Registry Expiry Date: 2018-06-01T04:00:00Z +Registrar: NameCheap Inc. +Registrar IANA ID: 1068 +Registrar Abuse Contact Email: abuse@namecheap.com +Registrar Abuse Contact Phone: +1.6613102107 +Domain Status: ok https://icann.org/epp#ok +Name Server: NS5.DNSMADEEASY.COM +Name Server: NS6.DNSMADEEASY.COM +Name Server: NS7.DNSMADEEASY.COM +DNSSEC: unsigned +[...] + +``` + +There is quite a bit more, mainly annoying legalese. But where is the contact information? It is sitting on whois.namecheap.com (see the third line of output above): +``` +$ whois -h whois.namecheap.com linux.com + +``` + +I won't print the output here, as it is very long, containing the Registrant, Admin, and Tech contact information. So what's the deal, Lucille? Some registries, such as .com and .net are "thin" registries, storing a limited subset of domain data. To get complete information use the `-h`, or `--host` option, to get the complete dump from the domain's `Registrar WHOIS Server`. + +Most of the other top-level domains are thick registries, such as .info. Try `whois blockchain.info` to see an example. + +Want to get rid of the obnoxious legalese? Use the `-H` option. + +### Digging DNS + +Use the `dig` command to compare the results from different name servers to check for stale entries. DNS records are cached all over the place, and different servers have different refresh intervals. 
This is the simplest usage: +``` +$ dig linux.com +<<>> DiG 9.10.3-P4-Ubuntu <<>> linux.com +;; global options: +cmd +;; Got answer: +;; ->>HEADER<<<- opcode: QUERY, status: NOERROR, id: 13694 +;; flags: qr rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1 + +;; OPT PSEUDOSECTION: +; EDNS: version: 0, flags:; udp: 1440 +;; QUESTION SECTION: +;linux.com. IN A + +;; ANSWER SECTION: +linux.com. 10800 IN A 151.101.129.5 +linux.com. 10800 IN A 151.101.65.5 +linux.com. 10800 IN A 151.101.1.5 +linux.com. 10800 IN A 151.101.193.5 + +;; Query time: 92 msec +;; SERVER: 127.0.1.1#53(127.0.1.1) +;; WHEN: Tue Jan 16 15:17:04 PST 2018 +;; MSG SIZE rcvd: 102 + +``` + +Take notice of the SERVER: 127.0.1.1#53(127.0.1.1) line near the end of the output. This is your default caching resolver. When the address is localhost, that means there is a DNS server installed on your machine. In my case that is Dnsmasq, which is being used by Network Manager: +``` +$ ps ax|grep dnsmasq +2842 ? S 0:00 /usr/sbin/dnsmasq --no-resolv --keep-in-foreground +--no-hosts --bind-interfaces --pid-file=/var/run/NetworkManager/dnsmasq.pid +--listen-address=127.0.1.1 + +``` + +The `dig` default is to return A records, which define the domain name. IPv6 has AAAA records: +``` +$ $ dig linux.com AAAA +[...] +;; ANSWER SECTION: +linux.com. 60 IN AAAA 64:ff9b::9765:105 +linux.com. 60 IN AAAA 64:ff9b::9765:4105 +linux.com. 60 IN AAAA 64:ff9b::9765:8105 +linux.com. 60 IN AAAA 64:ff9b::9765:c105 +[...] + +``` + +Checkitout, Linux.com has IPv6 addresses. Very good! If your Internet service provider supports IPv6 then you can connect over IPv6. (Sadly, my overpriced mobile broadband does not.) + +Suppose you make some DNS changes to your domain, or you're seeing `dig` results that don't look right. Try querying with a public DNS service, like OpenNIC: +``` +$ dig @69.195.152.204 linux.com +[...] 
+;; Query time: 231 msec +;; SERVER: 69.195.152.204#53(69.195.152.204) + +``` + +`dig` confirms that you're getting your lookup from 69.195.152.204. You can query all kinds of servers and compare results. + +### Upstream Name Servers + +I want to know what my upstream name servers are. To find this, I first look in `/etc/resolv/conf`: +``` +$ cat /etc/resolv.conf +# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8) +# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN +nameserver 127.0.1.1 + +``` + +Thanks, but I already knew that. Your Linux distribution may be configured differently, and you'll see your upstream servers. Let's try `nmcli`, the Network Manager command-line tool: +``` +$ nmcli dev show | grep DNS +IP4.DNS[1]: 192.168.1.1 + +``` + +Now we're getting somewhere, as that is the address of my mobile hotspot, and I should have thought of that myself. I can log in to its weird little Web admin panel to see its upstream servers. A lot of consumer Internet gateways don't let you view or change these settings, so try an external service such as [What's my DNS server?][1] + +### List IPv4 Addresses on your Network + +Which IPv4 addresses are up and in use on your network? +``` +$ nmap -sn 192.168.1.0/24 +Starting Nmap 7.01 ( https://nmap.org ) at 2018-01-14 14:03 PST +Nmap scan report for Mobile.Hotspot (192.168.1.1) +Host is up (0.011s latency). +Nmap scan report for studio (192.168.1.2) +Host is up (0.000071s latency). +Nmap scan report for nellybly (192.168.1.3) +Host is up (0.015s latency) +Nmap done: 256 IP addresses (2 hosts up) scanned in 2.23 seconds + +``` + +Everyone wants to scan their network for open ports. This example looks for services and their versions: +``` +$ nmap -sV 192.168.1.1/24 + +Starting Nmap 7.01 ( https://nmap.org ) at 2018-01-14 16:46 PST +Nmap scan report for Mobile.Hotspot (192.168.1.1) +Host is up (0.0071s latency). 
+Not shown: 997 closed ports +PORT STATE SERVICE VERSION +22/tcp filtered ssh +53/tcp open domain dnsmasq 2.55 +80/tcp open http GoAhead WebServer 2.5.0 + +Nmap scan report for studio (192.168.1.102) +Host is up (0.000087s latency). +Not shown: 998 closed ports +PORT STATE SERVICE VERSION +22/tcp open ssh OpenSSH 7.2p2 Ubuntu 4ubuntu2.2 (Ubuntu Linux; protocol 2.0) +631/tcp open ipp CUPS 2.1 +Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel + +Service detection performed. Please report any incorrect results at https://nmap.org/submit/ . +Nmap done: 256 IP addresses (2 hosts up) scanned in 11.65 seconds + +``` + +These are interesting results. Let's try the same run from a different Internet account, to see if any of these services are exposed to big bad Internet. You have a second network if you have a smartphone. There are probably apps you can download, or use your phone as a hotspot to your faithful Linux computer. Fetch the WAN IP address from the hotspot control panel and try again: +``` +$ nmap -sV 12.34.56.78 + +Starting Nmap 7.01 ( https://nmap.org ) at 2018-01-14 17:05 PST +Nmap scan report for 12.34.56.78 +Host is up (0.0061s latency). +All 1000 scanned ports on 12.34.56.78 are closed + +``` + +That's what I like to see. Consult the fine man pages for these commands to learn more fun snooping techniques. + +Learn more about Linux through the free ["Introduction to Linux" ][2]course from The Linux Foundation and edX. 
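These tools also play well together in scripts. As a sketch, here is a small helper that pulls just the host list out of a saved `nmap -sn` report (the sample input mirrors the scan output shown earlier; the `live_hosts` function name is my own invention):

```shell
#!/bin/sh
# Pull just the "host (address)" pairs out of a saved `nmap -sn` report.
# Reads the report on stdin and prints one live host per line.
live_hosts() {
    awk '/^Nmap scan report for/ {
        # Strip the fixed "Nmap scan report for" prefix, keep the rest.
        $1 = $2 = $3 = $4 = ""
        sub(/^ +/, "")
        print
    }'
}

# Demo against a captured report shaped like the scan shown above.
live_hosts <<'EOF'
Starting Nmap 7.01 ( https://nmap.org ) at 2018-01-14 14:03 PST
Nmap scan report for Mobile.Hotspot (192.168.1.1)
Host is up (0.011s latency).
Nmap scan report for studio (192.168.1.2)
Host is up (0.000071s latency).
Nmap done: 256 IP addresses (2 hosts up) scanned in 2.23 seconds
EOF
```

Run as-is it prints the two live hosts from the sample; feed it `nmap -sn 192.168.1.0/24 | live_hosts` to get the list for your own network.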
+ +-------------------------------------------------------------------------------- + +via: https://www.linux.com/learn/intro-to-linux/2018/1/4-tools-network-snooping-linux + +作者:[Carla Schroder][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linux.com/users/cschroder +[1]:http://www.whatsmydnsserver.com/ +[2]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux From 0e4f96f32c50592f7d0a4b546cc6ae649feb21ba Mon Sep 17 00:00:00 2001 From: darksun Date: Sun, 21 Jan 2018 23:24:59 +0800 Subject: [PATCH 169/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20To=20Manage?= =?UTF-8?q?=20Vim=20Plugins=20Using=20Vundle=20On=20Linux?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...anage Vim Plugins Using Vundle On Linux.md | 250 ++++++++++++++++++ 1 file changed, 250 insertions(+) create mode 100644 sources/tech/20180117 How To Manage Vim Plugins Using Vundle On Linux.md diff --git a/sources/tech/20180117 How To Manage Vim Plugins Using Vundle On Linux.md b/sources/tech/20180117 How To Manage Vim Plugins Using Vundle On Linux.md new file mode 100644 index 0000000000..4d4d388ed7 --- /dev/null +++ b/sources/tech/20180117 How To Manage Vim Plugins Using Vundle On Linux.md @@ -0,0 +1,250 @@ +How To Manage Vim Plugins Using Vundle On Linux +====== +![](https://www.ostechnix.com/wp-content/uploads/2018/01/Vundle-4-720x340.png) + +**Vim** , undoubtedly, is one of the powerful and versatile tool to manipulate text files, manage the system configuration files and writing code. The functionality of Vim can be extended to different levels using plugins. Usually, all plugins and additional configuration files will be stored in **~/.vim** directory. 
Since all plugin files are stored in a single directory, the files from different plugins are mixed up together as you install more plugins. Hence, it is going to be a daunting task to track and manage all of them. This is where Vundle comes in help. Vundle, acronym of **V** im B **undle** , is an extremely useful plug-in to manage Vim plugins. + +Vundle creates a separate directory tree for each plugin you install and stores the additional configuration files in the respective plugin directory. Therefore, there is no mix up files with one another. In a nutshell, Vundle allows you to install new plugins, configure existing plugins, update configured plugins, search for installed plugins and clean up unused plugins. All actions can be done in a single keypress with interactive mode. In this brief tutorial, let me show you how to install Vundle and how to manage Vim plugins using Vundle in GNU/Linux. + +### Installing Vundle + +If you need Vundle, I assume you have already installed **vim** on your system. If not, install vim and **git** (to download vundle). Both packages are available in the official repositories of most GNU/Linux distributions.For instance, you can use the following command to install these packages on Debian based systems. +``` +sudo apt-get install vim git +``` + +**Download Vundle** + +Clone Vundle GitHub repository: +``` +git clone https://github.com/VundleVim/Vundle.vim.git ~/.vim/bundle/Vundle.vim +``` + +**Configure Vundle** + +To tell vim to use the new plugin manager, we need to create **~/.vimrc** file. This file is required to install, update, configure and remove plugins. 
+``` +vim ~/.vimrc +``` + +Put the following lines on the top of this file: +``` +set nocompatible " be iMproved, required +filetype off " required + +" set the runtime path to include Vundle and initialize +set rtp+=~/.vim/bundle/Vundle.vim +call vundle#begin() +" alternatively, pass a path where Vundle should install plugins +"call vundle#begin('~/some/path/here') + +" let Vundle manage Vundle, required +Plugin 'VundleVim/Vundle.vim' + +" The following are examples of different formats supported. +" Keep Plugin commands between vundle#begin/end. +" plugin on GitHub repo +Plugin 'tpope/vim-fugitive' +" plugin from http://vim-scripts.org/vim/scripts.html +" Plugin 'L9' +" Git plugin not hosted on GitHub +Plugin 'git://git.wincent.com/command-t.git' +" git repos on your local machine (i.e. when working on your own plugin) +Plugin 'file:///home/gmarik/path/to/plugin' +" The sparkup vim script is in a subdirectory of this repo called vim. +" Pass the path to set the runtimepath properly. +Plugin 'rstacruz/sparkup', {'rtp': 'vim/'} +" Install L9 and avoid a Naming conflict if you've already installed a +" different version somewhere else. +" Plugin 'ascenator/L9', {'name': 'newL9'} + +" All of your Plugins must be added before the following line +call vundle#end() " required +filetype plugin indent on " required +" To ignore plugin indent changes, instead use: +"filetype plugin on +" +" Brief help +" :PluginList - lists configured plugins +" :PluginInstall - installs plugins; append `!` to update or just :PluginUpdate +" :PluginSearch foo - searches for foo; append `!` to refresh local cache +" :PluginClean - confirms removal of unused plugins; append `!` to auto-approve removal +" +" see :h vundle for more details or wiki for FAQ +" Put your non-Plugin stuff after this line +``` + +The lines which are marked as "required" are Vundle's requirement. The rest of the lines are just examples. You can remove those lines if you don't want to install that specified plugins. 
Once you finished, type **:wq** to save and close file. + +Finally, open vim: +``` +vim +``` + +And type the following to install the plugins. +``` +:PluginInstall +``` + +[![][1]][2] + +A new split window will open and all the plugins which we added in the .vimrc file will be installed automatically. + +[![][1]][3] + +When the installation is completed, you can delete the buffer cache and close the split window by typing the following command: +``` +:bdelete +``` + +You can also install the plugins without opening vim using the following command from the Terminal: +``` +vim +PluginInstall +qall +``` + +For those using the [**fish shell**][4], add the following line to your **.vimrc** file.`` +``` +set shell=/bin/bash +``` + +### Manage Vim Plugins Using Vundle + +**Add New Plugins** + +First, search for the available plugins using command: +``` +:PluginSearch +``` + +To refresh the local list from the from the vimscripts site, add **"! "** at the end. +``` +:PluginSearch! +``` + +A new split window will open list all available plugins. + +[![][1]][5] + +You can also narrow down your search by using directly specifying the name of the plugin like below. +``` +:PluginSearch vim +``` + +This will list the plugin(s) that contains the words "vim" + +You can, of course, specify the exact plugin name like below. +``` +:PluginSearch vim-dasm +``` + +To install a plugin, move the cursor to the correct line and hit **" i"**. Now, the selected plugin will be installed. + +[![][1]][6] + +Similarly, install all plugins you wanted to have in your system. Once installed, delete the Vundle buffer cache using command: +``` +:bdelete +``` + +Now the plugin is installed. To make it autoload correctly, we need to add the installed plugin name to .vimrc file. + +To do so, type: +``` +:e ~/.vimrc +``` + +Add the following line. +``` +[...] +Plugin 'vim-dasm' +[...] +``` + +Replace vim-dasm with your plugin name. Then, hit ESC key and type **:wq** to save the changes and close the file. 
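If you keep your dotfiles under script control, the same edit can be made without opening vim at all. A minimal sketch using GNU sed (the `add_plugin` helper is made up for this example, and the demo works on a scratch copy — point it at your real `~/.vimrc` in practice):

```shell
#!/bin/sh
# Sketch: add a Plugin entry to a vimrc non-interactively.
# Vundle requires entries to sit before `call vundle#end()`, so we
# insert the new line just above it (GNU sed's `i` one-liner form).
add_plugin() {
    # $1 = plugin name, $2 = path to the vimrc to edit
    sed -i "/^call vundle#end()/i Plugin '$1'" "$2"
}

# Demo on a scratch copy.
vimrc=$(mktemp)
printf "call vundle#begin()\nPlugin 'VundleVim/Vundle.vim'\ncall vundle#end()\n" > "$vimrc"
add_plugin vim-dasm "$vimrc"
cat "$vimrc"
rm -f "$vimrc"
```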
+ +Please note that all of your Plugins must be added before the following line in your .vimrc file. +``` +[...] +filetype plugin indent on +``` + +**List installed Plugins** + +To list installed plugins, type the following from the vim editor: +``` +:PluginList +``` + +[![][1]][7] + +**Update plugins** + +To update the all installed plugins, type: +``` +:PluginUpdate +``` + +To reinstall all plugins, type: +``` +:PluginInstall! +``` + +**Uninstall plugins** + +First, list out all installed plugins: +``` +:PluginList +``` + +Then place the cursor to the correct line, and press **" SHITF+d"**. + +[![][1]][8] + +Then, edit your .vimrc file: +``` +:e ~/.vimrc +``` + +And delete the Plugin entry. Finally, type **:wq** to save the changes and exit from vim editor. + +Alternatively, you can uninstall a plugin by removing its line from .vimrc file and run: +``` +:PluginClean +``` + +This command will remove all plugins which are no longer present in your .vimrc but still present the bundle directory. + +At this point, you should have learned the basic usage about managing plugins using Vundle. For details, refer the help section by typing the following in your vim editor. +``` +:h vundle +``` + +**Also Read:** + +And, that's all for now. I will be soon here with another useful guide. Until then, stay tuned with OSTechNix! + +Cheers! 
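One more convenience before you go: the plugins a `.vimrc` declares can also be listed straight from the shell. A sketch (reads the file on stdin, so it is easy to try on a copy):

```shell
#!/bin/sh
# List the plugin names a vimrc declares (its Plugin '...' lines).
# Reads the file on stdin; commented-out entries are skipped because
# they no longer start the line with the word Plugin.
declared_plugins() {
    sed -n "s/^Plugin '\([^']*\)'.*/\1/p"
}

# Demo with a fragment like the .vimrc built in this guide.
declared_plugins <<'EOF'
set nocompatible
Plugin 'VundleVim/Vundle.vim'
Plugin 'tpope/vim-fugitive'
" Plugin 'L9'
EOF
```

`declared_plugins < ~/.vimrc` gives roughly the same list that `:PluginList` shows inside vim.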
+ +**Resource:** + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/manage-vim-plugins-using-vundle-linux/ + +作者:[SK][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com/author/sk/ +[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[2]:http://www.ostechnix.com/wp-content/uploads/2018/01/Vundle-1.png () +[3]:http://www.ostechnix.com/wp-content/uploads/2018/01/Vundle-2.png () +[4]:https://www.ostechnix.com/install-fish-friendly-interactive-shell-linux/ +[5]:http://www.ostechnix.com/wp-content/uploads/2018/01/Vundle-3.png () +[6]:http://www.ostechnix.com/wp-content/uploads/2018/01/Vundle-4-2.png () +[7]:http://www.ostechnix.com/wp-content/uploads/2018/01/Vundle-5-1.png () +[8]:http://www.ostechnix.com/wp-content/uploads/2018/01/Vundle-6.png () From 8a05fc6f1b6be223ebdd8d1ab1d8e9cbcb9cd423 Mon Sep 17 00:00:00 2001 From: darksun Date: Sun, 21 Jan 2018 23:49:38 +0800 Subject: [PATCH 170/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Securing=20the=20?= =?UTF-8?q?Linux=20filesystem=20with=20Tripwire?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ring the Linux filesystem with Tripwire.md | 112 ++++++++++++++++++ 1 file changed, 112 insertions(+) create mode 100644 sources/tech/20180118 Securing the Linux filesystem with Tripwire.md diff --git a/sources/tech/20180118 Securing the Linux filesystem with Tripwire.md b/sources/tech/20180118 Securing the Linux filesystem with Tripwire.md new file mode 100644 index 0000000000..a359e3a422 --- /dev/null +++ b/sources/tech/20180118 Securing the Linux filesystem with Tripwire.md @@ -0,0 +1,112 @@ +Securing the Linux filesystem with Tripwire +====== + 
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/file_system.jpg?itok=pzCrX1Kc) + +While Linux is considered to be the most secure operating system (ahead of Windows and MacOS), it is still vulnerable to rootkits and other variants of malware. Thus, Linux users need to know how to protect their servers or personal computers from destruction, and the first step they need to take is to protect the filesystem. + +In this article, we'll look at [Tripwire][1], an excellent tool for protecting Linux filesystems. Tripwire is an integrity checking tool that enables system administrators, security engineers, and others to detect alterations to system files. Although it's not the only option available ([AIDE][2] and [Samhain][3] offer similar features), Tripwire is arguably the most commonly used integrity checker for Linux system files, and it is available as open source under GPLv2. + +### How Tripwire works + +It's helpful to know how Tripwire operates in order to understand what it does once it's installed. Tripwire is made up of two major components: policy and database. Policy lists all the files and directories that the integrity checker should take a snapshot of, in addition to creating rules for identifying violations of changes to directories and files. Database consists of the snapshot taken by Tripwire. + +Tripwire also has a configuration file, which specifies the locations of the database, policy file, and Tripwire executable. It also provides two cryptographic keys--site key and local key--to protect important files against tampering. The site key protects the policy and configuration files, while the local key protects the database and generated reports. + +Tripwire works by periodically comparing the directories and files against the snapshot in the database and reporting any changes. + +### Installing Tripwire + +In order to use Tripwire, we need to download and install it first. 
Tripwire works on almost all Linux distributions; you can download an open source version from [Sourceforge][4] and install it as follows, depending on your version of Linux. + +Debian and Ubuntu users can install Tripwire directly from the repository using `apt-get`. Non-root users should type the `sudo` command to install Tripwire via `apt-get`. +``` + + +sudo apt-get update + +sudo  apt-get install tripwire   +``` + +CentOS and other rpm-based distributions use a similar process. For the sake of best practice, update your repository before installing a new package such as Tripwire. The command `yum install epel-release` simply means we want to install extra repositories. (`epel` stands for Extra Packages for Enterprise Linux.) +``` + + +yum update + +yum install epel-release + +yum install tripwire   +``` + +This command causes the installation to run a configuration of packages that are required for Tripwire to function effectively. In addition, it will ask if you want to select passphrases during installation. You can select "Yes" to both prompts. + +Also, select or choose "Yes" if it's required to build the configuration file. Choose and confirm a passphrase for a site key and for a local key. (A complex passphrase such as `Il0ve0pens0urce` is recommended.) + +### Build and initialize Tripwire's database + +Next, initialize the Tripwire database as follows: +``` + + +tripwire --init +``` + +You'll need to provide your local key passphrase to run the commands. + +### Basic integrity checking using Tripwire + +You can use the following command to instruct Tripwire to check whether your files or directories have been modified. Tripwire's ability to compare files and directories against the initial snapshot in the database is based on the rules you created in the active policy. 
+``` + + +tripwire  --check   +``` + +You can also limit the `-check` command to specific files or directories, such as in this example: +``` + + +tripwire   --check   /usr/tmp   +``` + +In addition, if you need extended help on using Tripwire's `-check` command, this command allows you to consult Tripwire's manual: +``` + + +tripwire  --check  --help   +``` + +### Generating reports using Tripwire + +To easily generate a daily system integrity report, create a `crontab` with this command: +``` + + +crontab -e +``` + +Afterward, you can edit this file (with the text editor of your choice) to introduce tasks to be run by cron. For instance, you can set up a cron job to send Tripwire reports to your email daily at 5:40 a.m. by using this command: +``` + + +40 5  *  *  *  usr/sbin/tripwire   --check +``` + +Whether you decide to use Tripwire or another integrity checker with similar features, the key issue is making sure you have a solution to protect the security of your Linux filesystem. 
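To see why this approach works, the snapshot-and-compare cycle that Tripwire automates can be imitated in a few lines of plain shell. This toy is strictly an illustration — it has none of Tripwire's policy language, signed database, or protected keys:

```shell
#!/bin/sh
# Toy illustration of the snapshot-and-compare cycle Tripwire automates.
# NOT a Tripwire replacement: no policy rules, no signed database, no keys.
set -e
dir=$(mktemp -d)
echo "original" > "$dir/file.conf"

snapshot() {
    # Hash every file under $1, sorted so two runs are comparable.
    ( cd "$1" && find . -type f -exec sha256sum {} + | sort )
}

snapshot "$dir" > "$dir.db"      # like `tripwire --init`
echo "tampered" > "$dir/file.conf"
snapshot "$dir" > "$dir.now"     # like `tripwire --check`

if ! diff -u "$dir.db" "$dir.now" > /dev/null; then
    echo "integrity violation detected"
fi
rm -rf "$dir" "$dir.db" "$dir.now"
```

The real tool adds what this sketch lacks: rules about which paths matter, cryptographic protection of the baseline itself, and readable violation reports.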
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/1/securing-linux-filesystem-tripwire + +作者:[Michael Kwaku Aboagye][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/revoks +[1]:https://www.tripwire.com/ +[2]:http://aide.sourceforge.net/ +[3]:http://www.la-samhna.de/samhain/ +[4]:http://sourceforge.net/projects/tripwire From b3d62741f7ce5731e90323b46adbb0c88d9cecfc Mon Sep 17 00:00:00 2001 From: yunfengHe Date: Mon, 22 Jan 2018 00:30:35 +0800 Subject: [PATCH 171/226] =?UTF-8?q?20171107=20The=20long=20goodbye=20to=20?= =?UTF-8?q?C=20=E6=A0=A1=E5=AF=B9=E5=AE=8C=E6=AF=95?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../talk/20171107 The long goodbye to C.md | 57 ++++++++++--------- 1 file changed, 29 insertions(+), 28 deletions(-) diff --git a/translated/talk/20171107 The long goodbye to C.md b/translated/talk/20171107 The long goodbye to C.md index 4b19be074a..253cf95071 100644 --- a/translated/talk/20171107 The long goodbye to C.md +++ b/translated/talk/20171107 The long goodbye to C.md @@ -1,36 +1,37 @@ 对 C 的漫长的告别 ========================================== +这几天来,我在思考那些正在挑战 C 语言作为系统编程语言堆中的根节点的地位的新潮语言,尤其是 Go 和 Rust。思考的过程中,我意识到了一个让我震惊的事实 —— 我有着 35 年的 C 语言经验。每周我都要写很多 C 代码,但是我已经记不清楚上一次我 _创建一个新的 C 语言项目_ 是在什么时候了。 -这几天来,我就在思考那些能够挑战 C 语言作为系统编程语言堆中的根节点的地位的新潮语言,尤其是 Go 和 Rust。我发现了一个让我震惊的事实 —— 我有着 35 年的 C 语言经验。每周我都要写很多 C 代码,但是我已经忘了我上一次是在什么时候 _创建新的 C 语言项目_ 了。 +如果你完全不认为这种情况令人震惊,那你很可能不是一个系统程序员。我知道有很多程序员使用更高级的语言工作。但是我把大部分时间都花在了深入打磨像 NTPsec , GPSD 以及 giflib 这些东西上。熟练使用 C 语言在这几十年里一直就是我的专长。但是,现在我不仅是不再使用 C 语言写新的项目,而且我都记不清我什么时候开始这样做的了。而且...回望历史,我不认为这是本世纪发生的事情。 -如果你认为这件事情不够震惊,那你可能不是一个系统程序员。我知道有很多程序员使用更高级的语言工作。但是我把大部分时间都花在了深入打磨像 NTPsec , GPSD 以及 giflib 这些东西上。熟练使用 C 语言在这几十年里一直就是我的专长。但是,现在我不仅是不再使用 C 
语言写新的项目,而且我都记不清我什么时候开始这样做的了。而且...回望历史,我不认为这是本世纪发生的事情。 - -当你问到我我的五个核心软件开发技能,“C 语言专家” 一定是你最有可能听到的,这件事情对我来说很好。这也激起了我的思考。C 的未来会怎样 ?C 是否正像当年的 COBOL 一样,在辉煌之后,走向落幕? + 我很惊讶的意识到,如果你问到我我的五个最核心软件开发技能是什么,“C 语言专家” 一定是你最有可能听到的。这也激起了我的思考。C 的未来会怎样 ?C 是否正像当年的 COBOL 一样,在辉煌之后,走向落幕? 我恰好是在 C 语言迅猛发展并把汇编语言以及其他许多编译型语言挤出主流存在的前几年开始编程的。那场过渡大约是在 1982 到 1985 年之间。在那之前,有很多编译型语言来争相吸引程序员的注意力,那些语言中还没有明确的领导者;但是在那之后,小众的语言直接毫无声息的退出舞台。主流的(FORTRAN,Pascal,COBOL)语言则要么只限于老代码,要么就是固守单一领域,再就是在 C 语言的边缘领域顶着愈来愈大的压力苟延残喘。 -在那以后,这种情形持续了近 30 年。尽管在应用程序开发上出现了新的动向: Java, Perl, Python, 以及许许多多不是很成功的竞争者。起初我很少关注这些语言,这很大一部是是因为在它们的运行时的开销对于当时的实际硬件来说太大。因此,这就使得 C 的成功无可撼动;为了使用之前存在的 C 语言代码,你得使用 C 语言写新代码(一部分脚本语言尝试过大伯这个限制,但是只有 Python 做到了) +在那以后,这种情形持续了近 30 年。尽管在应用程序开发上出现了新的动向: Java, Perl, Python, 以及许许多多不是很成功的竞争者。起初我很少关注这些语言,这很大一部是是因为在它们的运行时的开销对于当时的实际硬件来说太大。因此,这就使得 C 的成功无可撼动;为了使用和对接大量已存在的 C 语言代码,你得使用 C 语言写新代码(一部分脚本语言尝试过打破这种壁垒,但是只有 Python 有可能成功) -回想起来,我在 1997 年使用脚本语言写应用时本应该注意到这些语言的更重要的意义的。当时我写的是一个帮助图书管理员使用一款叫做 SunSITE 的源码分发式软件,我使用的那个语言,叫做 Perl。 +回想起来,我在 1997 年使用脚本语言写应用时本应该注意到这些语言的更重要的意义的。当时我写的是一个帮助图书管理员使用一个叫做SunSITE的源码分发站点的辅助软件,当时使用的是 Perl 语言。 -这个应用完全是基于文本的,而且只需要以人类能反应过来的速度运行(大概 0.1 秒),因此使用 C 或者别的没有动态内存分配以及字符串类型的语言来写就会显得很傻。但是在当时,我仅仅是把其视为一个试验,我在那时没想到我几乎再也不会在一个新项目的第一个文件里敲下 “int main(int argc, char **argv)” 了。 +这个应用完全是用来处理文本输入的,而且只需要能够应对人类的反应速度即可(大概 0.1 秒),因此使用 C 或者别的没有动态内存分配以及字符串类型的语言来写就会显得很傻。但是在当时,我仅仅是把其视为一个试验,而完全没有想到我几乎再也不会在一个新项目的第一个文件里敲下 “int main(int argc, char **argv)” 这样的 C 语言代码了。 -我说“几乎”,主要是因为 1999 年的 [SNG][3].我像那是我最后一个从头开始写的项目。在那之后我的所有新的 C 代码都是为我贡献代码,或者成为维护者的项目而写 —— 比如 GPSD 以及 NTPsec。 +我说“几乎”,主要是因为 1999 年的 [SNG][3]。 我想那是我最后一个用 C 从头开始写的项目。 -当年我本不应该使用 C 语言写 SNG 的。因为在那个年代,摩尔定律的快速循环使得硬件愈加便宜,像 Perl 这样的语言的运行也不再是问题。仅仅三年以后,我可能就会毫不犹豫地使用 Python 而不是 C 语言来写 SNG。 +在那之后我写的所有的 C 代码都是在为那些上世纪已经存在的老项目添砖加瓦,或者是在维护诸如 GPSD 以及 NTPsec 一类的项目。 -在 1997 年学习了 Python 这件事对我来说是一道分水岭。这个语言很完美 —— 就像我早年使用的 Lisp 一样,而且 Python 还有很酷的库!还完全绑定了 POSIX!还有一个绝不完犊子的对象系统!Python 没有把 C 语言挤出我的工具箱,但是我很快就习惯了在只要能用 Python 时就写 Python ,而只在必须使用 C 时写 C . 
+当年我本不应该使用 C 语言写 SNG 的。因为在那个年代,摩尔定律的快速迭代使得硬件愈加便宜,使得像 Perl 这样的语言的执行效率也不再是问题。仅仅三年以后,我可能就会毫不犹豫地使用 Python 而不是 C 语言来写 SNG。 -(在此之后,我开始在我的访谈中指出我所谓的 “Perl 的教训” ,也就是任何一个没有和 C 语言语义等价的 POSIX 绑定的语言_都得失败_。在计算机科学的发展史上,作者没有意识到这一点的学术语言的骨骸俯拾皆是。) +在 1997 年我学习了 Python, 这对我来说是一道分水岭。这个语言很美妙 —— 就像我早年使用的 Lisp 一样,而且 Python 还有很酷的库!甚至还完全绑定了 POSIX!还有一个蛮好用的对象系统!Python 没有把 C 语言挤出我的工具箱,但是我很快就习惯了在只要能用 Python 时就写 Python ,而只在必须使用 C 时写 C . -显然,对我来说,,Python 的主要优势之一就是它很简单,当我写 Python 时,我不再需要担心内存管理问题或者会导致吐核的程序崩溃 —— 对于 C 程序员来说,处理这些问题烦的要命。而不那么明显的优势恰好在我更改语言时显现,我在 90 年代末写应用程序和非核心系统服务的代码时为了平衡成本与风险都会倾向于选择具有自动内存管理但是开销更大的语言,以抵消之前提到的 C 语言的缺陷。而在仅仅几年之前(甚至是 1990 年),那些语言的开销还是大到无法承受的;那时摩尔定律还没让硬件产业迅猛发展。 +( 在此之后,我开始在我的访谈中指出我所谓的 “ Perl 的教训” ,也就是任何一个没能实现和 C 语言语义等价的绑定了 POSIX 的语言_都注定要失败_。在计算机科学的发展史上,很多学术语言的骨骸俯拾皆是,原因是这些语言的设计者没有意识到这个重要的问题。) -与 C 相比更喜欢 Python —— 然后只要是能的话我就会从 C 语言转移到 Python ,这让我的工作的复杂程度降了不少。我开始在 GPSD 以及 NTPsec 里面加入 Python。这就是我们能把 NTP 的代码库大小削减四分之一的原因。 +显然,对我来说,Python 的主要优势之一就是它很简单,当我写 Python 时,我不再需要担心内存管理问题或者会导致吐核的程序崩溃 —— 对于 C 程序员来说,处理这些问题烦的要命。而不那么明显的优势恰好在我更改语言时显现,我在 90 年代末写应用程序和非核心系统服务的代码时为了平衡成本与风险都会倾向于选择具有自动内存管理但是开销更大的语言,以抵消之前提到的 C 语言的缺陷。而在仅仅几年之前(甚至是 1990 年),那些语言的开销还是大到无法承受的;那时硬件产业的发展还在早期阶段,没有给摩尔定律足够的时间来发挥威力。 -但是今天我不是来讲 Python 的。尽管我觉得它在竞争中脱颖而出,Python 也不是在 2000 年之前彻底结束我在新项目上使用 C 语言的原因,在当时任何一个新的学院派的动态语言都可以让我不写 C 语言代码。那件事可能是在我写了很多 Java 之后发生的,这就是另一段时间线了。 +尽量地在 C 和 Python 之间选择 C —— 只要是能的话我就会从 C 语言转移到 Python 。这是一种降低工程复杂程度的有效策略。我将这种策略应用在了 GPSD 中,而针对 NTPsec , 我对这个策略的采用则更加系统。这就是我们能把 NTP 的代码库大小削减四分之一的原因。 -我写这个回忆录部分原因是我觉得我不特殊,我像在世纪之交,同样的事件也改变了不少 C 语言老手的编码习惯。他们也会和我之前一样,没有发现这一转变。 +但是今天我不是来讲 Python 的。尽管我觉得它在竞争中脱颖而出,Python 也未必真的是在 2000 年之前彻底结束我在新项目上使用 C 语言的原因,因为在当时任何一个新的学院派的动态语言都可以让我不再选择使用 C 语言。也有可能是在某段时间里在我写了很多 Java 之后,我才慢慢远离了 C 语言。 + +我写这个回忆录是因为我觉得我并非特例,在世纪之交,同样的发展和转变也改变了不少 C 语言老手的编码习惯。像我一样,他们在当时也并没有意识到这种转变正在发生。 在 2000 年以后,尽管我还在使用 C/C++ 写之前的项目,比如 GPSD ,游戏韦诺之战以及 NTPsec,但是我的所有新项目都是使用 Python 的。 @@ -38,31 +39,31 @@ 甚至是对于更小的项目 —— 那些可以在 C 中实现的东西 —— 我也使用 Python 写,因为我不想花不必要的时间以及精力去处理内核转储问题。这种情况一直持续到去年年底,持续到我创建我的第一个 Rust 
项目,以及成功写出第一个[使用 Go 语言的项目][6]。 -如前文所述,尽管我是在讨论我的个人经历,但是我想我的经历体现了时代的趋势。我期待新潮流的出现,而不是仅仅跟随潮流。在 98 年,我是 Python 的早期使用者。来自 [TIOBE][7] 的数据让我在 Go 语言脱胎于公司的实验项目从小众语言火爆的几个月内开始写自己的第一个 Go 语言项目。 +如前文所述,尽管我是在讨论我的个人经历,但是我想我的经历体现了时代的趋势。我期待新潮流的出现,而不是仅仅跟随潮流。在 98 年的时候,我就是 Python 的早期使用者。来自 [TIOBE][7] 的数据则表明在 Go 语言脱胎于公司的实验项目并刚刚从小众语言中脱颖而出的几个月内,我就在开始实现自己的第一个 Go 语言项目了。 -总而言之:直到现在第一批有可能挑战 C 语言的传统地位的语言才出现。我判断这个的标砖很简单 —— 只要这个语言能让我等 C 语言老手接受不再写 C 的 事实,这个语言才 “有可能” 挑战到 C 语言的地位 —— 来看啊,这有个新编译器,能把 C 转换到新语言,现在你可以让他完成你的_全部工作_了 —— 这样 C 语言的老手就会开心起来。 +总而言之:直到现在第一批有可能挑战 C 语言的传统地位的语言才出现。我判断这个的标准很简单 —— 只要这个语言能让我等 C 语言老手接受不再写 C 的事实,这个语言才 “有可能” 挑战到 C 语言的地位 —— 来看啊,这有个新编译器,能把 C 转换到新语言,现在你可以让他完成你的_全部工作_了 —— 这样 C 语言的老手就会开心起来。 -Python 以及和其类似的语言对此做的并不够好。使用 Python 实现 NTPsec(以此举例)可能是个灾难,最终会由于过高的运行时开销以及由于垃圾回收机制导致的延迟变化而烂尾。当写单用户且只需要以人类能接受的速度运行的程序时,使用 Python 很好,但是对于以 _机器的速度_ 运行的程序来说就不总是如此了 —— 尤其是在很高的多用户负载之下。这不只是我自己的判断,起初 Go 存在的主要原因就是 Google ,然后 Python 的众多支持者也来支持这款语言 ——— 他们遭遇了同样的痛点。 +Python 以及和其类似的语言对此做的并不够好。使用 Python 实现 NTPsec(以此举例)可能是个灾难,最终会由于过高的运行时开销以及由于垃圾回收机制导致的延迟变化而烂尾。如果需求是针对单个用户且只需要以人类能接受的速度运行,使用 Python 当然是很好的,但是对于以 _机器的速度_ 运行的程序来说就不总是如此了 —— 尤其是在很高的多用户负载之下。这不只是我自己的判断 -- 因为拿 Go 语言来说,它的存在主要就是因为当时作为 Python 语言主要支持者的 Google 在使用 Python 实现一些工程的时候也遭遇了同样的效能痛点。 -Go 语言就是为了处理 Python 处理不了的类 C 语言工作而设计的。尽管没有一个全自动语言转换软件让我很是不爽,但是使用 Go 语言来写系统程序对我来说不算麻烦,我发现我写 Go 写的还挺开心的。我的 很多 C 编码技能还可以继续使用,我还收获了垃圾回收机制以及并发编程机制,这何乐而不为? +Go 语言就是为了处理 Python 搞不定的那些多由 C 语言来实现的任务而设计的。尽管没有一个全自动语言转换软件让我很是不爽,但是使用 Go 语言来写系统程序对我来说不算麻烦,我发现我写 Go 写的还挺开心的。我的很多 C 编码技能还可以继续使用,我还收获了垃圾回收机制以及并发编程机制,这何乐而不为? 
([这里][8]有关于我第一次写 Go 的经验的更多信息) -本来我像把 Rust 也视为 “C 语言要过时了” 的例子,但是在学习这们语言并尝试使用这门语言编程之后,我觉得[这语言现在还不行][9]。也许 5 年以后,它才会成为 C 语言的对手。 +本来我像把 Rust 也视为 “C 语言要过时了” 的例证,但是在学习这们语言并尝试使用这门语言编程之后,我觉得[这种语言现在还没有做好准备][9]。也许 5 年以后,它才会成为 C 语言的对手。 -随着 2017 的临近,我们已经发现了一个相对成熟的语言,其和 C 类似,能够胜任 C 语言的大部分工作场景(我在下面会准确描述),在几年以后,这个语言届的新星可能就会取得成功。 +随着 2017 的尾声来临,我们已经发现了一个相对成熟的语言,其和 C 类似,能够胜任 C 语言的大部分工作场景(我在下面会准确描述),在几年以后,这个语言届的新星可能就会取得成功。 -这件事意义重大。如果你不长远地回顾历史,你可能看不出来这件事情的伟大性。_三十年了_ —— 这几乎就是我写代码的时间,我们都没有等到 C 语言的继任者。也无法体验在前 C 语言时代的系统编程是什么模样。但是现在我们可以使用两种视角来看待系统编程... +这件事意义重大。如果你不长远地回顾历史,你可能看不出来这件事情的伟大性。_三十年了_ —— 这几乎就是我作为一个程序员的全部生涯,我们都没有等到一个 C 语言的继任者,也无法遥望 C 之后的系统编程会是什么样子的。而现在,我们面前突然有了后 C 时代的两种不同的展望和未来... -...另一个视角就是下面这个语言。我的一个朋友正在开发一个他称之为 "Cx" 的语言,这个语言在 C 语言上做了很少的改动,使得其能够支持类型安全;他的项目的目的就是要创建一个能够在最少人力参与的情况下把古典 C 语言修改为新语言的程序。我不会指出这位朋友的名字,免得给他太多压力,让他给我做出不切实际的保证,他的实现方法真的很是有意思,我会尽量给他募集资金。 +...另一种展望则是下面这个语言留给我们的。我的一个朋友正在开发一个他称之为 "Cx" 的语言,这个语言在 C 语言上做了很少的改动,使得其能够支持类型安全;他的项目的目的就是要创建一个能够在最少人力参与的情况下把古典 C 语言修改为新语言的程序。我不会指出这位朋友的名字,免得给他太多压力,让他做出太多不切实际的保证。但是他的实现方法真的很是有意思,我会尽量给他募集资金。 -现在,除了 C 语言之外,我看到了三种不同的道路。在两年之前,我一种都不会发现。我重复一遍:这件事情意义重大。 +现在,我们看到了可以替代 C 语言实现系统编程的三种不同的可能的道路。而就在两年之前,我们的眼前还是一篇漆黑。我重复一遍:这件事情意义重大。 -我是说 C 语言将要灭绝吗?没有,在可预见的未来里,C 语言还会在操作系统的内核以及设备固件的编程的主流语言,在那里,尽力压榨硬件性能的古老命令还在奏效,尽管它可能不是那么安全。 +我是在说 C 语言将要灭绝吗?不是这样的,在可预见的未来里,C 语言还会是操作系统的内核编程以及设备固件编程的主流语言,在这些场景下,尽力压榨硬件性能的古老命令还在奏效,尽管它可能不是那么安全。 -现在被攻破的领域就是我之前提到的我经常出没的领域 —— 比如 GPSD 以及 NTPsec ,系统服务以及那些因为历史原因而使用 C 语言写的进程。还有就是以 DNS 服务器以及邮箱 —— 那些得以机器而不是人类的速度运行的系统程序。 +现在那些将要被 C 的继任者攻破的领域就是我之前提到的我经常涉及的领域 —— 比如 GPSD 以及 NTPsec ,系统服务以及那些因为历史原因而使用 C 语言写的进程。还有就是以 DNS 服务器以及邮箱 —— 那些需要以机器速度而不是人类的速度运行的系统程序。 -现在我们可以预见,未来大多数代码都是由具有强大内存安全特性的 C 语言的替代者实现。Go , Rust 或者 Cx ,无论是哪个, C 的存在都将被弱化。如果我现在来实现 NTP ,我可能就会毫不犹豫的使用 Go 语言来实现。 +现在我们可以对后 C 时代的未来窥见一斑,即上述这类领域的代码都可以使用那些具有强大内存安全特性的 C 语言的替代者实现。Go , Rust 或者 Cx ,无论是哪个,都可能使 C 的存在被弱化。比如,如果我现在再来重新实现一遍 NTP ,我可能就会毫不犹豫的使用 Go 语言去完成。 -------------------------------------------------------------------------------- @@ -70,7 +71,7 @@ via: 
http://esr.ibiblio.org/?p=7711 作者:[Eric Raymond][a] 译者:[name1e5s](https://github.com/name1e5s) -校对:[校对者ID](https://github.com/校对者ID) +校对:[yunfengHe](https://github.com/yunfengHe) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 660ad615bb0a59919de06615fbc82b16d0de9e4a Mon Sep 17 00:00:00 2001 From: geekpi Date: Mon, 22 Jan 2018 09:11:53 +0800 Subject: [PATCH 172/226] translated --- ...a YUM repository from ISO - Online repo.md | 118 ------------------ ...a YUM repository from ISO - Online repo.md | 116 +++++++++++++++++ 2 files changed, 116 insertions(+), 118 deletions(-) delete mode 100644 sources/tech/20170526 Creating a YUM repository from ISO - Online repo.md create mode 100644 translated/tech/20170526 Creating a YUM repository from ISO - Online repo.md diff --git a/sources/tech/20170526 Creating a YUM repository from ISO - Online repo.md b/sources/tech/20170526 Creating a YUM repository from ISO - Online repo.md deleted file mode 100644 index ac11acc6e5..0000000000 --- a/sources/tech/20170526 Creating a YUM repository from ISO - Online repo.md +++ /dev/null @@ -1,118 +0,0 @@ -translating---geekpi - -Creating a YUM repository from ISO & Online repo -====== - -YUM tool is one of the most important tool for Centos/RHEL/Fedora. Though in latest builds of fedora, it has been replaced with DNF but that not at all means that it has ran its course. It is still used widely for installing rpm packages, we have already discussed YUM with examples in our earlier tutorial ([ **READ HERE**][1]). - -In this tutorial, we are going to learn to create a Local YUM repository, first by using ISO image of OS & then by creating a mirror image of an online yum repository. - -### Creating YUM with DVD ISO - -We are using a Centos 7 dvd for this tutorial & same process should work on RHEL 7 as well. 
- -Firstly create a directory named YUM in root folder - -``` -$ mkdir /YUM- -``` - -then mount Centos 7 ISO , - -``` -$ mount -t iso9660 -o loop /home/dan/Centos-7-x86_x64-DVD.iso /mnt/iso/ -``` - -Next, copy the packages from mounted ISO to /YUM folder. Once all the packages have been copied to the system, we will install the required packages for creating YUM. Open /YUM & install the following RPM packages, - -``` -$ rpm -ivh deltarpm -$ rpm -ivh python-deltarpm -$ rpm -ivh createrepo -``` - -Once these packages have been installed, we will create a file named " **local.repo "** in **/etc/yum.repos.d** folder with all the yum information - -``` -$ vi /etc/yum.repos.d/local.repo -``` - -``` -LOCAL REPO] -Name=Local YUM -baseurl=file:///YUM -gpgcheck=0 -enabled=1 -``` - -Save & exit the file. Next we will create repo-data by running the following command - -``` -$ createrepo -v /YUM -``` - -It will take some time to create the repo data. Once the process finishes, run - -``` -$ yum clean all -``` - -to clean cache & then run - -``` -$ yum repolist -``` - -to check the list of all repositories. You should see repo "local.repo" in the list. - - -### Creating mirror YUM repository with online repository - -Process involved in creating a yum is similar to creating a yum with an ISO image with one exception that we will fetch our rpm packages from an online repository instead of an ISO. - -Firstly, we need to find an online repository to get the latest packages . It is advised to find an online yum that is closest to your location , in order to optimize the download speeds. 
We will be using below mentioned , you can select one nearest to yours location from [CENTOS MIRROR LIST][2] - -After selecting a mirror, we will sync that mirror with our system using rsync but before you do that, make sure that you plenty of space on your server - -``` -$ rsync -avz rsync://mirror.fibergrid.in/centos/7.2/os/x86_64/Packages/s/ /YUM -``` - -Sync will take quite a while (maybe an hour) depending on your internet speed. After the syncing is completed, we will update our repo-data - -``` -$ createrepo - v /YUM -``` - -Our Yum is now ready to used . We can create a cron job for our repo to be updated automatically at a determined time daily or weekly as per you needs. - -To create a cron job for syncing the repository, run - -``` -$ crontab -e -``` - -& add the following line - -``` -30 12 * * * rsync -avz http://mirror.centos.org/centos/7/os/x86_64/Packages/ /YUM -``` - -This will enable the syncing of yum every night at 12:30 AM. Also remember to create repository configuration file in /etc/yum.repos.d , as we did above. - -That's it guys, you now have your own yum repository to use. Please share this article if you like it & leave your comments/queries in the comment box down below. 
-
-
---------------------------------------------------------------------------------
-
-via: http://linuxtechlab.com/creating-yum-repository-iso-online-repo/
-
-作者:[Shusain][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://linuxtechlab.com/author/shsuain/
-[1]:http://linuxtechlab.com/using-yum-command-examples/
-[2]:http://mirror.centos.org/centos/
diff --git a/translated/tech/20170526 Creating a YUM repository from ISO - Online repo.md b/translated/tech/20170526 Creating a YUM repository from ISO - Online repo.md
new file mode 100644
index 0000000000..a483766ddf
--- /dev/null
+++ b/translated/tech/20170526 Creating a YUM repository from ISO - Online repo.md
@@ -0,0 +1,116 @@
+从 ISO 和在线仓库创建一个 YUM 仓库
+======
+
+YUM 是 Centos/RHEL/Fedora 中最重要的工具之一。尽管在 Fedora 的最新版本中,它已经被 DNF 所取代,但这并不意味着它已经寿终正寝了。它仍然被广泛用于安装 rpm 包,我们已经在前面的教程([**在这里阅读**][1])中用示例讨论了 YUM。
+
+在本教程中,我们将学习创建一个本地 YUM 仓库,首先使用系统的 ISO 镜像,然后创建一个在线 yum 仓库的镜像。
+
+### 用 DVD ISO 创建 YUM
+
+我们在本教程中使用 Centos 7 dvd,同样的过程也应该可以用在 RHEL 7 上。
+
+首先在根文件夹中创建一个名为 YUM 的目录
+
+```
+$ mkdir /YUM
+```
+
+然后挂载 Centos 7 ISO:
+
+```
+$ mount -t iso9660 -o loop /home/dan/Centos-7-x86_x64-DVD.iso /mnt/iso/
+```
+
+接下来,从挂载的 ISO 中复制软件包到 /YUM 中。当所有的软件包都被复制到系统中后,我们将安装创建 YUM 所需的软件包。打开 /YUM 并安装以下 RPM 包:
+
+```
+$ rpm -ivh deltarpm
+$ rpm -ivh python-deltarpm
+$ rpm -ivh createrepo
+```
+
+安装完成后,我们将在 **/etc/yum.repos.d** 中创建一个名为 **“local.repo”** 的文件,其中包含所有的 yum 信息。
+
+```
+$ vi /etc/yum.repos.d/local.repo
+```
+
+```
+[LOCAL REPO]
+Name=Local YUM
+baseurl=file:///YUM
+gpgcheck=0
+enabled=1
+```
+
+保存并退出文件。接下来,我们将通过运行以下命令来创建仓库数据。
+
+```
+$ createrepo -v /YUM
+```
+
+创建仓库数据需要一些时间。一切完成后,请运行
+
+```
+$ yum clean all
+```
+
+清理缓存,然后运行
+
+```
+$ yum repolist
+```
+
+检查所有仓库列表。你应该在列表中看到 “local.repo”。
+
+
+### 使用在线仓库创建镜像 YUM 仓库
+
+创建在线 yum 的过程与使用 ISO 镜像创建 yum 类似,只是我们将从在线仓库而不是 ISO 中获取 rpm 软件包。
+
+首先,我们需要找到一个在线仓库来获取最新的软件包。建议你找一个离你位置最近的在线 yum 仓库,以优化下载速度。我们将使用下面的镜像,你可以从 [CENTOS 镜像列表][2]中选择一个离你最近的镜像。
+
+选择镜像之后,我们将使用 rsync 将该镜像与我们的系统同步,但在此之前,请确保你服务器上有足够的空间。
+
+```
+$ rsync -avz rsync://mirror.fibergrid.in/centos/7.2/os/x86_64/Packages/s/ /YUM
+```
+
+同步将需要相当长一段时间(也许一个小时),这取决于你互联网的速度。同步完成后,我们将更新我们的仓库数据。
+
+```
+$ createrepo -v /YUM
+```
+
+我们的 Yum 已经可以使用了。我们可以创建一个 cron 任务来根据你的需求每天或每周定时地自动更新仓库数据。
+
+要创建一个用于同步仓库的 cron 任务,请运行:
+
+```
+$ crontab -e
+```
+
+并添加以下行
+
+```
+30 12 * * * rsync -avz http://mirror.centos.org/centos/7/os/x86_64/Packages/ /YUM
+```
+
+这会在每天中午 12:30 同步 yum。还请记住在 /etc/yum.repos.d 中创建仓库配置文件,就像我们上面所做的一样。
+
+就是这样,你现在有你自己的 yum 仓库来使用。如果你喜欢它,请分享这篇文章,并在下面的评论栏留下你的意见/疑问。
+
+
+--------------------------------------------------------------------------------
+
+via: http://linuxtechlab.com/creating-yum-repository-iso-online-repo/
+
+作者:[Shusain][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linuxtechlab.com/author/shsuain/
+[1]:http://linuxtechlab.com/using-yum-command-examples/
+[2]:http://mirror.centos.org/centos/

From 1133dfa2ce425b05e10b4995014799eaf228bbe7 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Mon, 22 Jan 2018 09:19:39 +0800
Subject: [PATCH 173/226] translating

---
 .../20180119 How to install Spotify application on Linux.md   | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/sources/tech/20180119 How to install Spotify application on Linux.md b/sources/tech/20180119 How to install Spotify application on Linux.md
index e5b6f94a74..3050e36199 100644
--- a/sources/tech/20180119 How to install Spotify application on Linux.md
+++ b/sources/tech/20180119 How to install Spotify application on Linux.md
@@ -1,3 +1,5 @@
+translating---geekpi
+
 How to install Spotify application on Linux
 ======
 
From 9c307174273e44d496260b31466c7ea0e8e58e91 Mon Sep 17 00:00:00 2001
From: wxy
Date: Mon, 22 Jan 2018 09:42:22 +0800
Subject: [PATCH 174/226] PRF&PUB:20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @jessie-pang 辛苦!https://linux.cn/article-9265-1.html --- ...For Googles In-house Linux Distribution.md | 28 ++++++++----------- 1 file changed, 12 insertions(+), 16 deletions(-) rename {translated/tech => published}/20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md (88%) diff --git a/translated/tech/20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md b/published/20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md similarity index 88% rename from translated/tech/20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md rename to published/20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md index b585a739e9..b5d1b8637f 100644 --- a/translated/tech/20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md +++ b/published/20180119 No More Ubuntu Debian is the New Choice For Googles In-house Linux Distribution.md @@ -1,16 +1,17 @@ Debian 取代 Ubuntu 成为 Google 内部 Linux 发行版的新选择 ============================================================ -_摘要_:Google 多年来一直使用基于 Ubuntu 的内部操作系统 Goobuntu。如今,Goobuntu 正在被基于 Debian Testing 的 gLinux 所取代。 +> 摘要:Google 多年来一直使用基于 Ubuntu 的内部操作系统 Goobuntu。如今,Goobuntu 正在被基于 Debian Testing 的 gLinux 所取代。 -如果你读过 [Ubuntu facts][18],你可能知道 Google 使用了一个名为 [Goobuntu][19] 的 Linux 发行版作为开发平台。这是一个定制化的 Linux 发行版,不难猜到,它是基于 Ubuntu 的。 +如果你读过那篇《[Ubuntu 十个令人惊奇的事实][18]》,你可能知道 Google 使用了一个名为 [Goobuntu][19] 的 Linux 发行版作为开发平台。这是一个定制化的 Linux 发行版,不难猜到,它是基于 Ubuntu 的。 Goobuntu 基本上是一个 [采用轻量级的界面的 Ubuntu][20],它是基于 Ubuntu LTS 版本的。如果你认为 Google 对 Ubuntu 的测试或开发做出了贡献,那么你就错了。Google 只是 Canonical 公司的 [Ubuntu Advantage Program][21] 
计划的付费客户而已。[Canonical][22] 是 Ubuntu 的母公司。 -### 遇见 gLinux:Google 基于 Debian Buster 的新 Linux 发行版 +### 遇见 gLinux:Google 基于 Debian Buster 的新 Linux 发行版 + ![gLinux from Goobuntu](https://itsfoss.com/wp-content/uploads/2018/01/glinux-announcement-800x450.jpg) -在使用 Ubuntu 五年多以后,Google 正在用一个基于 Debian Testing 版本的 Linux 发行版—— gLinux 取代 Goobuntu。 +在使用 Ubuntu 五年多以后,Google 正在用一个基于 Debian Testing 版本的 Linux 发行版 —— gLinux 取代 Goobuntu。 正如 [MuyLinux][23] 所报道的,gLinux 是从软件包的源代码中构建出来的,然后 Google 对其进行了修改,这些改动也将为上游做出贡献。 @@ -28,8 +29,9 @@ Google 计划如何转移到 Debian Testing?目前的 Debian Testing 版本是 Google 还计划将这些改动发到 Debian 的上游项目中,从而为其发展做出贡献。 -![gLinux testing plan from Google](https://itsfoss.com/wp-content/uploads/2018/01/glinux-testing-plan.jpg) -gLinux 的开发计划 +![gLinux testing plan from Google](https://itsfoss.com/wp-content/uploads/2018/01/glinux-testing-plan.jpg) + +*gLinux 的开发计划* ### Ubuntu 丢失了一个大客户! @@ -43,18 +45,12 @@ gLinux 的开发计划 如果你想使用 Goobuntu 或 gLinux,那得成为 Google 公司的雇员才行。因为这是 Google 的内部项目,不对公众开放。 -总的来说,这对 Debian 来说是一个好消息,尤其是如果他们改变了上游的话。对 Ubuntu 来说可就不同了。我已经联系了 Canonical 公司征求意见,但至今没有回应。 +总的来说,这对 Debian 来说是一个好消息,尤其是他们成为了上游发行版的话。对 Ubuntu 来说可就不同了。我已经联系了 Canonical 公司征求意见,但至今没有回应。 更新:Canonical 公司回应称,他们“不共享与单个客户关系的细节”,因此他们不能提供有关收入和任何其他的细节。 你对 Google 抛弃 Ubuntu 而选择 Debian 有什么看法? -
- -发表于:[新闻][15] -标签:[glinux][16]、[goobuntu][17] - -
![](https://secure.gravatar.com/avatar/20749c268f5d3e4d2c785499eb6a17c0?s=125&d=mm&r=g) @@ -67,9 +63,9 @@ gLinux 的开发计划 via: https://itsfoss.com/goobuntu-glinux-google/ -作者:[Abhishek Prakash ][a] +作者:[Abhishek Prakash][a] 译者:[jessie-pang](https://github.com/jessie-pang) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -98,5 +94,5 @@ via: https://itsfoss.com/goobuntu-glinux-google/ [22]:https://www.canonical.com/ [23]:https://www.muylinux.com/2018/01/15/goobuntu-glinux-google/ [24]:https://debconf17.debconf.org/talks/44/ -[25]:https://itsfoss.com/barcelona-open-source/ +[25]:https://linux.cn/article-9236-1.html [26]:https://itsfoss.com/eelo-mobile-os/ From cc68f1d0b208f1597796195933d53b95b5095c03 Mon Sep 17 00:00:00 2001 From: qhwdw Date: Mon, 22 Jan 2018 10:19:13 +0800 Subject: [PATCH 175/226] Translating by qhwdw --- .../20140510 Journey to the Stack Part I.md | 105 ++++++++++++++++++ 1 file changed, 105 insertions(+) create mode 100644 sources/tech/20140510 Journey to the Stack Part I.md diff --git a/sources/tech/20140510 Journey to the Stack Part I.md b/sources/tech/20140510 Journey to the Stack Part I.md new file mode 100644 index 0000000000..a29bb909e2 --- /dev/null +++ b/sources/tech/20140510 Journey to the Stack Part I.md @@ -0,0 +1,105 @@ +#Translating by qhwdw [Journey to the Stack, Part I][1] + +Earlier we've explored the [anatomy of a program in memory][2], the landscape of how our programs run in a computer. Now we turn to the call stack, the work horse in most programming languages and virtual machines. Along the way we'll meet fantastic creatures like closures, recursion, and buffer overflows. But the first step is a precise picture of how the stack operates. + +The stack is so important because it keeps track of the functions running in a program, and functions are in turn the building blocks of software. 
In fact, the internal operation of programs is normally very simple. It consists mostly of functions pushing data onto and popping data off the stack as they call each other, while allocating memory on the heap for data that must survive across function calls. This is true for both low-level C software and VM-based languages like JavaScript and C#. A solid grasp of this reality is invaluable for debugging, performance tuning and generally knowing what the hell is going on. + +When a function is called, a stack frame is created to support the function's execution. The stack frame contains the function's local variables and the arguments passed to the function by its caller. The frame also contains housekeeping information that allows the called function (the callee) to return to the caller safely. The exact contents and layout of the stack vary by processor architecture and function call convention. In this post we look at Intel x86 stacks using C-style function calls (cdecl). Here's a single stack frame sitting live on top of the stack: + +![](https://manybutfinite.com/img/stack/stackIntro.png) + +Right away, three CPU registers burst into the scene. The stack pointer, esp, points to the top of the stack. The top is always occupied by the last item that was pushed onto the stack but has not yet been popped off, just as in a real-world stack of plates or $100 bills. + +The address stored in esp constantly changes as stack items are pushed and popped, such that it always points to the last item. Many CPU instructions automatically update esp as a side effect, and it's impractical to use the stack without this register. + +In the Intel architecture, as in most, the stack grows towards lower memory addresses. So the "top" is the lowest memory address in the stack containing live data: local_buffer in this case. Notice there's nothing vague about the arrow from esp to local_buffer. 
This arrow means business: it points specifically to the first byte occupied by local_buffer because that is the exact address stored in esp.
+
+The second register tracking the stack is ebp, the base pointer or frame pointer. It points to a fixed location within the stack frame of the function currently running and provides a stable reference point (base) for access to arguments and local variables. ebp changes only when a function call begins or ends. Thus we can easily address each item in the stack as an offset from ebp, as shown in the diagram.
+
+Unlike esp, ebp is mostly maintained by program code with little CPU interference. Sometimes there are performance benefits in ditching ebp altogether, which can be done via [compiler flags][3]. The Linux kernel is one example where this is done.
+
+Finally, the eax register is used by convention to transfer return values back to the caller for most C data types.
+
+Now let's inspect the data in our stack frame. This diagram shows the precise byte-for-byte contents as you'd see in a debugger, with memory growing left-to-right, top-to-bottom. Here it is:
+
+![](https://manybutfinite.com/img/stack/frameContents.png)
+
+The local variable local_buffer is a byte array containing a null-terminated ascii string, a staple of C programs. The string was likely read from somewhere, for example keyboard input or a file, and it is 7 bytes long. Since local_buffer can hold 8 bytes, there's 1 free byte left. The content of this byte is unknown because in the stack's infinite dance of pushes and pops, you never know what memory holds unless you write to it. Since the C compiler does not initialize the memory for a stack frame, contents are undetermined - and somewhat random - until written to. This has driven some into madness.
+
+Moving on, local1 is a 4-byte integer and you can see the contents of each byte. It looks like a big number, with all those zeros following the 8, but here your intuition leads you astray.
+ +Intel processors are little endian machines, meaning that numbers in memory start with the little end first. So the least significant byte of a multi-byte number is in the lowest memory address. Since that is normally shown leftmost, this departs from our usual representation of numbers. It helps to know that this endian talk is borrowed from Gulliver's Travels: just as folks in Lilliput eat their eggs starting from the little end, Intel processors eat their numbers starting from the little byte. + +So local1 in fact holds the number 8, as in the legs of an octopus. param1, however, has a value of 2 in the second byte position, so its mathematical value is 2 * 256 = 512 (we multiply by 256 because each place value ranges from 0 to 255). Meanwhile, param2 is carrying weight at 1 * 256 * 256 = 65536. + +The housekeeping data in this stack frame consists of two crucial pieces: the address of the previous stack frame (saved ebp) and the address of the instruction to be executed upon the function's exit (the return address). Together, they make it possible for the function to return sanely and for the program to keep running along. + +Now let's see the birth of a stack frame to build a clear mental picture of how this all works together. Stack growth is puzzling at first because it happens in the opposite direction you'd expect. For example, to allocate 8 bytes on the stack one subtracts 8 from esp, and subtraction is an odd way to grow something. + +Let's take a simple C program: + +``` +Simple Add Program - add.c + +int add(int a, int b) +{ + int result = a + b; + return result; +} + +int main(int argc) +{ + int answer; + answer = add(40, 2); +} +``` + +Suppose we run this in Linux without command-line parameters. When you run a C program, the first code to actually execute is in the C runtime library, which then calls our main function. The diagrams below show step-by-step what happens as the program runs. 
Each diagram links to GDB output showing the state of memory and registers. You may also see the [GDB commands][4] used and the whole [GDB output][5]. Here we go: + +![](https://manybutfinite.com/img/stack/mainProlog.png) + +Steps 2 and 3, along with 4 below, are the function prologue, which is common to nearly all functions: the current value of ebp is saved to the top of the stack, and then esp is copied to ebp, establishing a new frame. main's prologue is like any other, but with the peculiarity that ebp is zeroed out when the program starts. + +If you were to inspect the stack below argc (to the right) you'd find more data, including pointers to the program name and command-line parameters (the traditional C argv), plus pointers to Unix environment variables and their actual contents. But that's not important here, so the ball keeps rolling towards the add() call: + +![](https://manybutfinite.com/img/stack/callAdd.png) + +After main subtracts 12 from esp to get the stack space it needs, it sets the values for a and b. Values in memory are shown in hex and little-endian format, as you'd see in a debugger. Once parameter values are set, main calls add and it starts running: + +![](https://manybutfinite.com/img/stack/addProlog.png) + +Now there's some excitement! We get another prologue, but this time you can see clearly how the stack frames form a linked list, starting at ebp and going down the stack. This is how debuggers and Exception objects in higher-level languages get their stack traces. You can also see the much more typical catching up of ebp to esp when a new frame is born. And again, we subtract from esp to get more stack space. + +There's also the slightly weird reversal of bytes when the ebp register value is copied to memory. What's happening here is that registers don't really have endianness: there are no "growing addresses" inside a register as there are for memory. 
Thus by convention debuggers show register values in the most natural format to humans: most significant to least significant digits. So the results of the copy in a little-endian machine appear reversed in the usual left-to-right notation for memory. I want the diagrams to provide an accurate picture of what you find in the trenches, so there you have it. + +With the hard part behind us, we add: + +![](https://manybutfinite.com/img/stack/doAdd.png) + +There are guest register appearances to help out with the addition, but otherwise no alarms and no surprises. add did its job, and at this point the stack action would go in reverse, but we'll save that for next time. + +Anybody who's read this far deserves a souvenir, so I've made a large diagram showing [all the steps combined][6] in a fit of nerd pride. + +It looks tame once it's all laid out. Those little boxes help a lot. In fact, little boxes are the chief tool of computer science. I hope the pictures and register movements provide an intuitive mental picture that integrates stack growth and memory contents. Up close, our software doesn't look too far from a simple Turing machine. + +This concludes the first part of our stack tour. There's some more byte spelunking ahead, and then it's on to see higher level programming concepts built on this foundation. See you next week. 
+ +-------------------------------------------------------------------------------- + +via:https://manybutfinite.com/post/journey-to-the-stack/ + +作者:[Gustavo Duarte][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://duartes.org/gustavo/blog/about/ +[1]:https://manybutfinite.com/post/journey-to-the-stack/ +[2]:https://manybutfinite.com/post/anatomy-of-a-program-in-memory +[3]:http://stackoverflow.com/questions/14666665/trying-to-understand-gcc-option-fomit-frame-pointer +[4]:https://github.com/gduarte/blog/blob/master/code/x86-stack/add-gdb-commands.txt +[5]:https://github.com/gduarte/blog/blob/master/code/x86-stack/add-gdb-output.txt +[6]:https://manybutfinite.com/img/stack/callSequence.png \ No newline at end of file From 6116f96f235b9ec0fcdc31e23eb261f9ded16fc4 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 22 Jan 2018 12:43:17 +0800 Subject: [PATCH 176/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Top=2020=20OpenSS?= =?UTF-8?q?H=20Server=20Best=20Security=20Practices?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... OpenSSH Server Best Security Practices.md | 474 ++++++++++++++++++ 1 file changed, 474 insertions(+) create mode 100644 sources/tech/20090724 Top 20 OpenSSH Server Best Security Practices.md diff --git a/sources/tech/20090724 Top 20 OpenSSH Server Best Security Practices.md b/sources/tech/20090724 Top 20 OpenSSH Server Best Security Practices.md new file mode 100644 index 0000000000..a7ad346af4 --- /dev/null +++ b/sources/tech/20090724 Top 20 OpenSSH Server Best Security Practices.md @@ -0,0 +1,474 @@ +Top 20 OpenSSH Server Best Security Practices +====== +![OpenSSH Security Tips][1] + +OpenSSH is the implementation of the SSH protocol. OpenSSH is recommended for remote login, making backups, remote file transfer via scp or sftp, and much more. 
SSH is perfect to keep confidentiality and integrity for data exchanged between two networks and systems. However, the main advantage is server authentication, through the use of public key cryptography. From time to time there are [rumors][2] about an OpenSSH zero-day exploit. This **page shows how to secure your OpenSSH server running on a Linux or Unix-like system to improve sshd security**.
+
+
+#### OpenSSH defaults
+
+ * TCP port - 22
+ * OpenSSH server config file - sshd_config (located in /etc/ssh/)
+
+
+
+#### 1. Use SSH public key based login
+
+OpenSSH server supports various authentication methods. It is recommended that you use public key based authentication. First, create the key pair using the following ssh-keygen command on your local desktop/laptop:
+
+DSA and RSA 1024 bit or lower ssh keys are considered weak. Avoid them. RSA keys are chosen over ECDSA keys when backward compatibility is a concern with ssh clients. All ssh keys should be either ED25519 or RSA. Do not use any other type.
+
+```
+$ ssh-keygen -t key_type -b bits -C "comment"
+$ ssh-keygen -t ed25519 -C "Login to production cluster at xyz corp"
+$ ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_aws_$(date +%Y-%m-%d) -C "AWS key for abc corp clients"
+```
+Next, install the public key using the ssh-copy-id command:
+```
+$ ssh-copy-id -i /path/to/public-key-file user@host
+$ ssh-copy-id user@remote-server-ip-or-dns-name
+$ ssh-copy-id vivek@rhel7-aws-server
+```
+When prompted, supply the user password. Verify that ssh key based login is working for you:
+`$ ssh vivek@rhel7-aws-server`
+[![OpenSSH server security best practices][3]][3]
+For more info on ssh public key auth see:
+
+* [keychain: Set Up Secure Passwordless SSH Access For Backup Scripts][48]
+
+* [sshpass: Login To SSH Server / Provide SSH Password Using A Shell Script][49]
+
+* [How To Setup SSH Keys on a Linux / Unix System][50]
+
+* [How to upload ssh public key as authorized_key using Ansible DevOPS tool][51]
+
+
+#### 2. 
Disable root user login
+
+Before we disable root user login, make sure a regular user can become root. For example, allow the vivek user to run commands as root using the sudo command.
+
+##### How to add vivek user to sudo group on a Debian/Ubuntu
+
+Allow members of group sudo to execute any command. [Add user vivek to sudo group][4]:
+`$ sudo adduser vivek sudo`
+Verify group membership with the [id command][5]:
+`$ id vivek`
+
+##### How to add vivek user to sudo group on a CentOS/RHEL server
+
+Allow people in group wheel to run all commands on a CentOS/RHEL and Fedora Linux server. Use the usermod command to add the user named vivek to the wheel group:
+```
+$ sudo usermod -aG wheel vivek
+$ id vivek
+```
+
+##### Test sudo access and disable root login for ssh
+
+Test it and make sure user vivek can become root or run commands as root:
+```
+$ sudo -i
+$ sudo /etc/init.d/sshd status
+$ sudo systemctl status httpd
+```
+Once confirmed, disable root login by adding the following lines to sshd_config:
+```
+PermitRootLogin no
+ChallengeResponseAuthentication no
+PasswordAuthentication no
+UsePAM no
+```
+See "[How to disable ssh password login on Linux to increase security][6]" for more info.
+
+#### 3. Disable password based login
+
+All password-based logins must be disabled; only public key based logins are allowed. Add the following in your sshd_config file:
+```
+AuthenticationMethods publickey
+PubkeyAuthentication yes
+```
+Users of an older version of SSHD on CentOS 6.x/RHEL 6.x should use the following setting:
+```
+PubkeyAuthentication yes
+```
+
+#### 4. Limit Users' ssh access
+
+By default, all system users can login via SSH using their password or public key. Sometimes you create UNIX / Linux user accounts for FTP or email purposes. However, those users can log in to the system using ssh. They will have full access to system tools including compilers and scripting languages such as Perl and Python, which can open network ports and do many other fancy things.
To allow only the vivek and jerry users to use the system via SSH, add the following to sshd_config:
+`AllowUsers vivek jerry`
+Alternatively, you can allow all users to login via SSH but deny only a few users, with the following line in sshd_config:
+`DenyUsers root saroj anjali foo`
+You can also [configure Linux PAM][7] to allow or deny login via the sshd server. You can allow a [list of group names][8] to access or deny access to the ssh.
+
+#### 5. Disable Empty Passwords
+
+You need to explicitly disallow remote login from accounts with empty passwords; update sshd_config with the following line:
+`PermitEmptyPasswords no`
+
+#### 6. Use strong passwords and passphrase for ssh users/keys
+
+It cannot be stressed enough how important it is to use strong user passwords and passphrases for your keys. Brute force attacks work because users pick dictionary based passwords. You can force users to avoid [passwords against a dictionary][9] attack and use the [john the ripper tool][10] to find out existing weak passwords. Here is a sample random password generator (put in your ~/.bashrc):
+```
+genpasswd() {
+	local l=$1
+	[ "$l" == "" ] && l=20
+	tr -dc A-Za-z0-9_ < /dev/urandom | head -c ${l} | xargs
+}
+```
+
+Run it:
+`genpasswd 16`
+Output:
+```
+uw8CnDVMwC6vOKgW
+```
+* [Generating Random Password With mkpasswd / makepasswd / pwgen][52]
+
+* [Linux / UNIX: Generate Passwords][53]
+
+* [Linux Random Password Generator Command][54]
+
+--------------------------------------------------------------------------------
+
+#### 7. Firewall SSH TCP port # 22
+
+You need to firewall ssh TCP port # 22 by updating iptables/ufw/firewall-cmd or pf firewall configurations. Usually, the OpenSSH server must accept connections only from your LAN or other remote WAN sites.
+
+##### Netfilter (Iptables) Configuration
+
+Update [/etc/sysconfig/iptables (Redhat and friends specific file) to accept connection][11] only from 192.168.1.0/24 and 202.54.1.5/29, enter:
+```
+-A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p tcp --dport 22 -j ACCEPT
+-A RH-Firewall-1-INPUT -s 202.54.1.5/29 -m state --state NEW -p tcp --dport 22 -j ACCEPT
+```
+
+If you've dual stacked sshd with IPv6, edit /etc/sysconfig/ip6tables (Redhat and friends specific file), enter:
+```
+-A RH-Firewall-1-INPUT -s ipv6network::/ipv6mask -m tcp -p tcp --dport 22 -j ACCEPT
+```
+
+Replace ipv6network::/ipv6mask with actual IPv6 ranges.
+
+##### UFW for Debian/Ubuntu Linux
+
+[UFW is an acronym for uncomplicated firewall. It is used for managing a Linux firewall][12] and aims to provide an easy to use interface for the user. Use the [following command to accept port 22 from 202.54.1.5/29][13] only:
+`$ sudo ufw allow from 202.54.1.5/29 to any port 22`
+Read "[Linux: 25 Iptables Netfilter Firewall Examples For New SysAdmins][14]" for more info.
+
+##### *BSD PF Firewall Configuration
+
+If you are using the PF firewall, update [/etc/pf.conf][15] as follows:
+```
+pass in on $ext_if inet proto tcp from {192.168.1.0/24, 202.54.1.5/29} to $ssh_server_ip port ssh flags S/SA synproxy state
+```
+
+#### 8. Change SSH Port and limit IP binding
+
+By default, SSH listens on all available interfaces and IP addresses on the system. Limit ssh port binding and change the ssh port (many brute forcing scripts only try to connect to TCP port # 22). To bind to the 192.168.1.5 and 202.54.1.5 IPs and port 300, add or correct the following lines in sshd_config:
+```
+Port 300
+ListenAddress 192.168.1.5
+ListenAddress 202.54.1.5
+```
+
+A better approach is to use proactive scripts such as fail2ban or denyhosts when you need to accept connections from dynamic WAN IP addresses.
+
+#### 9. 
Use TCP wrappers (optional) + +TCP Wrapper is a host-based Networking ACL system, used to filter network access to the Internet. OpenSSH does support TCP wrappers. Just update your /etc/hosts.allow file as follows to allow SSH only from 192.168.1.2 and 172.16.23.12 IP address: +``` +sshd : 192.168.1.2 172.16.23.12 +``` + +See this [FAQ about setting and using TCP wrappers][16] under Linux / Mac OS X and UNIX like operating systems. + +#### 10. Thwart SSH crackers/brute force attacks + +Brute force is a method of defeating a cryptographic scheme by trying a large number of possibilities (combination of users and passwords) using a single or distributed computer network. To prevents brute force attacks against SSH, use the following software: + + * [DenyHosts][17] is a Python based security tool for SSH servers. It is intended to prevent brute force attacks on SSH servers by monitoring invalid login attempts in the authentication log and blocking the originating IP addresses. + * Explains how to setup [DenyHosts][18] under RHEL / Fedora and CentOS Linux. + * [Fail2ban][19] is a similar program that prevents brute force attacks against SSH. + * [sshguard][20] protect hosts from brute force attacks against ssh and other services using pf. + * [security/sshblock][21] block abusive SSH login attempts. + * [ IPQ BDB filter][22] May be considered as a fail2ban lite. + + + +#### 11. Rate-limit incoming traffic at TCP port # 22 (optional) + +Both netfilter and pf provides rate-limit option to perform simple throttling on incoming connections on port # 22. 
+
+##### Iptables Example
+
+The following example will drop incoming connections which make more than 5 connection attempts on port 22 within 60 seconds:
+```
+#!/bin/bash
+inet_if=eth1
+ssh_port=22
+IPT=/sbin/iptables
+$IPT -I INPUT -p tcp --dport ${ssh_port} -i ${inet_if} -m state --state NEW -m recent --set
+$IPT -I INPUT -p tcp --dport ${ssh_port} -i ${inet_if} -m state --state NEW -m recent --update --seconds 60 --hitcount 5 -j DROP
+```
+
+Call the above script from your iptables scripts. Another config option:
+```
+$IPT -A INPUT -i ${inet_if} -p tcp --dport ${ssh_port} -m state --state NEW -m limit --limit 3/min --limit-burst 3 -j ACCEPT
+$IPT -A INPUT -i ${inet_if} -p tcp --dport ${ssh_port} -m state --state ESTABLISHED -j ACCEPT
+$IPT -A OUTPUT -o ${inet_if} -p tcp --sport ${ssh_port} -m state --state ESTABLISHED -j ACCEPT
+# another one line example
+# $IPT -A INPUT -i ${inet_if} -m state --state NEW,ESTABLISHED,RELATED -p tcp --dport 22 -m limit --limit 5/minute --limit-burst 5 -j ACCEPT
+```
+
+See the iptables man page for more details.
+
+##### *BSD PF Example
+
+The following limits the maximum number of connections per source to 20 and rate-limits the number of connections to 15 in a 5 second span. If anyone breaks our rules, add them to our abusive_ips table and block them from making any further connections. Finally, the flush keyword kills all states created by the matching rule which originate from the host which exceeds these limits.
+```
+sshd_server_ip = "202.54.1.5"
+table <abusive_ips> persist
+block in quick from <abusive_ips>
+pass in on $ext_if proto tcp to $sshd_server_ip port ssh flags S/SA keep state (max-src-conn 20, max-src-conn-rate 15/5, overload <abusive_ips> flush)
+```
+
+#### 12. Use port knocking (optional)
+
+[Port knocking][23] is a method of externally opening ports on a firewall by generating a connection attempt on a set of prespecified closed ports. Once a correct sequence of connection attempts is received, the firewall rules are dynamically modified to allow the host which sent the connection attempts to connect to the specific port(s). A sample port knocking example for ssh using iptables:
+```
+$IPT -N stage1
+$IPT -A stage1 -m recent --remove --name knock
+$IPT -A stage1 -p tcp --dport 3456 -m recent --set --name knock2
+
+$IPT -N stage2
+$IPT -A stage2 -m recent --remove --name knock2
+$IPT -A stage2 -p tcp --dport 2345 -m recent --set --name heaven
+
+$IPT -N door
+$IPT -A door -m recent --rcheck --seconds 5 --name knock2 -j stage2
+$IPT -A door -m recent --rcheck --seconds 5 --name knock -j stage1
+$IPT -A door -p tcp --dport 1234 -m recent --set --name knock
+
+$IPT -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
+$IPT -A INPUT -p tcp --dport 22 -m recent --rcheck --seconds 5 --name heaven -j ACCEPT
+$IPT -A INPUT -p tcp --syn -j door
+```
+
+For more info see:
+[Debian / Ubuntu: Set Port Knocking With Knockd and Iptables][55]
+
+#### 13. Configure idle log out timeout interval
+
+A user can log in to the server via ssh, and you can set an idle timeout interval to avoid unattended ssh sessions. Open sshd_config and make sure the following values are configured:
+```
+ClientAliveInterval 300
+ClientAliveCountMax 0
+```
+You are setting an idle timeout interval in seconds (300 secs == 5 minutes). After this interval has passed, the idle user will be automatically kicked out (read as logged out). See [how to automatically log BASH / TCSH / SSH users][24] out after a period of inactivity for more details.
+
+#### 14. Enable a warning banner for ssh users
+
+Set a warning banner by updating sshd_config with the following line:
+`Banner /etc/issue`
+Sample /etc/issue file:
+```
+----------------------------------------------------------------------------------------------
+You are accessing a XYZ Government (XYZG) Information System (IS) that is provided for authorized use only.
+By using this IS (which includes any device attached to this IS), you consent to the following conditions:
+
++ The XYZG routinely intercepts and monitors communications on this IS for purposes including, but not limited to,
+penetration testing, COMSEC monitoring, network operations and defense, personnel misconduct (PM),
+law enforcement (LE), and counterintelligence (CI) investigations.
+
++ At any time, the XYZG may inspect and seize data stored on this IS.
+
++ Communications using, or data stored on, this IS are not private, are subject to routine monitoring,
+interception, and search, and may be disclosed or used for any XYZG authorized purpose.
+
++ This IS includes security measures (e.g., authentication and access controls) to protect XYZG interests--not
+for your personal benefit or privacy.
+
++ Notwithstanding the above, using this IS does not constitute consent to PM, LE or CI investigative searching
+or monitoring of the content of privileged communications, or work product, related to personal representation
+or services by attorneys, psychotherapists, or clergy, and their assistants. Such communications and work
+product are private and confidential. See User Agreement for details.
+----------------------------------------------------------------------------------------------
+
+```
+
+Above is a standard sample; consult your legal team for specific user agreement and legal notice details.
+
+#### 15. Disable .rhosts files (verification)
+
+Don't read the user's ~/.rhosts and ~/.shosts files. Update sshd_config with the following settings:
+`IgnoreRhosts yes`
+SSH can emulate the behavior of the obsolete rsh command; just disable insecure access via RSH.
+
+#### 16. Disable host-based authentication (verification)
+
+To disable host-based authentication, update sshd_config with the following option:
+`HostbasedAuthentication no`
+
+#### 17. Patch OpenSSH and operating systems
+
+It is recommended that you use tools such as [yum][25], [apt-get][26], [freebsd-update][27] and others to keep systems up to date with the latest security patches.
+
+#### 18. Chroot OpenSSH (Lock down users to their home directories)
+
+By default, users are allowed to browse the server directories such as /etc/, /bin and so on. You can protect ssh using an OS-based chroot or [special tools such as rssh][28]. With the release of OpenSSH 4.8p1 or 4.9p1, you no longer have to rely on third-party hacks such as rssh or complicated chroot(1) setups to lock users to their home directories. See [this blog post][29] about the new ChrootDirectory directive to lock down users to their home directories.
+
+#### 19. Disable OpenSSH server on client computer
+
+Workstations and laptops can work without the OpenSSH server. If you do not provide the remote login and file transfer capabilities of SSH, disable and remove the SSHD server. CentOS / RHEL users can disable and remove openssh-server with the [yum command][30]:
+`$ sudo yum erase openssh-server`
+Debian / Ubuntu Linux users can disable and remove the same with the [apt command][31]/[apt-get command][32]:
+`$ sudo apt-get remove openssh-server`
+You may need to update your iptables script to remove the ssh exception rule. Under CentOS / RHEL / Fedora edit the files /etc/sysconfig/iptables and /etc/sysconfig/ip6tables. Once done, [restart the iptables][33] service:
+```
+# service iptables restart
+# service ip6tables restart
+```
+
+#### 20. Bonus tips from Mozilla
+
+If you are using OpenSSH version 6.7+ or newer, try the [following][34] settings:
+```
+#################[ WARNING ]########################
+# Do not use any setting blindly. Read sshd_config #
+# man page. You must understand cryptography to #
+# tweak following settings. Otherwise use defaults #
+####################################################
+
+# Supported HostKey algorithms by order of preference.
+HostKey /etc/ssh/ssh_host_ed25519_key
+HostKey /etc/ssh/ssh_host_rsa_key
+HostKey /etc/ssh/ssh_host_ecdsa_key
+
+# Specifies the available KEX (Key Exchange) algorithms.
+KexAlgorithms curve25519-sha256@libssh.org,ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256
+
+# Specifies the ciphers allowed
+Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
+
+# Specifies the available MAC (message authentication code) algorithms
+MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com
+
+# LogLevel VERBOSE logs user's key fingerprint on login. Needed to have a clear audit track of which key was used to log in.
+LogLevel VERBOSE
+
+# Log sftp level file access (read/write/etc.) that would not be easily logged otherwise.
+Subsystem sftp /usr/lib/ssh/sftp-server -f AUTHPRIV -l INFO
+```
+
+You can grab the list of ciphers and algorithms supported by your OpenSSH server using the following commands:
+```
+$ ssh -Q cipher
+$ ssh -Q cipher-auth
+$ ssh -Q mac
+$ ssh -Q kex
+$ ssh -Q key
+```
+[![OpenSSH Security Tutorial Query Ciphers and algorithms choice][35]][35]
+
+#### How do I test the sshd_config file and restart/reload my SSH server?
+
+To [check the validity of the configuration file and sanity of the keys][36] for any errors before restarting sshd, run:
+`$ sudo sshd -t`
+Extended test mode:
+`$ sudo sshd -T`
+Finally, [restart sshd on a Linux or Unix-like system][37] as per your distro version:
+```
+$ [sudo systemctl restart ssh][38] ## Debian/Ubuntu Linux ##
+$ [sudo systemctl restart sshd.service][39] ## CentOS/RHEL/Fedora Linux ##
+$ doas /etc/rc.d/sshd restart ## OpenBSD ##
+$ sudo service sshd restart ## FreeBSD ##
+```
+
+#### Other suggestions
+
+ 1. [Tighter SSH security with 2FA][40] - Multi-Factor authentication can be enabled with [OATH Toolkit][41] or [DuoSecurity][42].
+ 2. [Use keychain based authentication][43] - keychain is a special bash script designed to make key-based authentication incredibly convenient and flexible. It offers various security benefits over passphrase-free keys.
+
+#### See also:
+
+  * The [official OpenSSH][44] project.
+  * Man pages: sshd(8), ssh(1), ssh-add(1), ssh-agent(1)
+
+If you have a technique or handy software not mentioned here, please share in the comments below to help your fellow readers keep their OpenSSH based servers secure.
+
+#### About the author
+
+The author is the creator of nixCraft, a seasoned sysadmin, and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][45], [Facebook][46], [Google+][47].
+
+--------------------------------------------------------------------------------
+
+via: https://www.cyberciti.biz/tips/linux-unix-bsd-openssh-server-best-practices.html
+
+作者:[Vivek Gite][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]:https://www.cyberciti.biz
+[1]:https://www.cyberciti.biz/media/new/tips/2009/07/openSSH_logo.png
+[2]:https://isc.sans.edu/diary/OpenSSH+Rumors/6742
+[3]:https://www.cyberciti.biz/tips/wp-content/uploads/2009/07/OpenSSH-server-security-best-practices.png
+[4]:https://www.cyberciti.biz/faq/how-to-create-a-sudo-user-on-ubuntu-linux-server/
+[5]:https://www.cyberciti.biz/faq/unix-linux-id-command-examples-usage-syntax/ (See Linux/Unix id command examples for more info)
+[6]:https://www.cyberciti.biz/faq/how-to-disable-ssh-password-login-on-linux/
+[7]:https://www.cyberciti.biz/tips/linux-pam-configuration-that-allows-or-deny-login-via-the-sshd-server.html
+[8]:https://www.cyberciti.biz/tips/openssh-deny-or-restrict-access-to-users-and-groups.html
+[9]:https://www.cyberciti.biz/tips/linux-check-passwords-against-a-dictionary-attack.html +[10]:https://www.cyberciti.biz/faq/unix-linux-password-cracking-john-the-ripper/ +[11]:https://www.cyberciti.biz/faq/rhel-fedorta-linux-iptables-firewall-configuration-tutorial/ +[12]:https://www.cyberciti.biz/faq/howto-configure-setup-firewall-with-ufw-on-ubuntu-linux/ +[13]:https://www.cyberciti.biz/faq/ufw-allow-incoming-ssh-connections-from-a-specific-ip-address-subnet-on-ubuntu-debian/ +[14]:https://www.cyberciti.biz/tips/linux-iptables-examples.html +[15]:https://bash.cyberciti.biz/firewall/pf-firewall-script/ +[16]:https://www.cyberciti.biz/faq/tcp-wrappers-hosts-allow-deny-tutorial/ +[17]:https://www.cyberciti.biz/faq/block-ssh-attacks-with-denyhosts/ +[18]:https://www.cyberciti.biz/faq/rhel-linux-block-ssh-dictionary-brute-force-attacks/ +[19]:https://www.fail2ban.org +[20]:https://sshguard.sourceforge.net/ +[21]:http://www.bsdconsulting.no/tools/ +[22]:https://savannah.nongnu.org/projects/ipqbdb/ +[23]:https://en.wikipedia.org/wiki/Port_knocking +[24]:https://www.cyberciti.biz/faq/linux-unix-login-bash-shell-force-time-outs/ +[25]:https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/ +[26]:https://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html +[27]:https://www.cyberciti.biz/tips/howto-keep-freebsd-system-upto-date.html +[28]:https://www.cyberciti.biz/tips/rhel-centos-linux-install-configure-rssh-shell.html +[29]:https://www.debian-administration.org/articles/590 +[30]:https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/ (See Linux/Unix yum command examples for more info) +[31]:https://www.cyberciti.biz/faq/ubuntu-lts-debian-linux-apt-command-examples/ (See Linux/Unix apt command examples for more info) +[32]:https://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html (See Linux/Unix apt-get command examples for more info) 
+[33]:https://www.cyberciti.biz/faq/howto-rhel-linux-open-port-using-iptables/ +[34]:https://wiki.mozilla.org/Security/Guidelines/OpenSSH +[35]:https://www.cyberciti.biz/tips/wp-content/uploads/2009/07/OpenSSH-Security-Tutorial-Query-Ciphers-and-algorithms-choice.jpg +[36]:https://www.cyberciti.biz/tips/checking-openssh-sshd-configuration-syntax-errors.html +[37]:https://www.cyberciti.biz/faq/howto-restart-ssh/ +[38]:https://www.cyberciti.biz/faq/howto-start-stop-ssh-server/ (Restart sshd on a Debian/Ubuntu Linux) +[39]:https://www.cyberciti.biz/faq/centos-stop-start-restart-sshd-command/ (Restart sshd on a CentOS/RHEL/Fedora Linux) +[40]:https://www.cyberciti.biz/open-source/howto-protect-linux-ssh-login-with-google-authenticator/ +[41]:http://www.nongnu.org/oath-toolkit/ +[42]:https://duo.com +[43]:https://www.cyberciti.biz/faq/ssh-passwordless-login-with-keychain-for-scripts/ +[44]:https://www.openssh.com/ +[45]:https://twitter.com/nixcraft +[46]:https://facebook.com/nixcraft +[47]:https://plus.google.com/+CybercitiBiz +[48]:https://www.cyberciti.biz/faq/ssh-passwordless-login-with-keychain-for-scripts/ +[49]:https://www.cyberciti.biz/faq/noninteractive-shell-script-ssh-password-provider/ +[50]:https://www.cyberciti.biz/faq/how-to-set-up-ssh-keys-on-linux-unix/ +[51]:https://www.cyberciti.biz/faq/how-to-upload-ssh-public-key-to-as-authorized_key-using-ansible/ +[52]:https://www.cyberciti.biz/faq/generating-random-password/ +[53]:https://www.cyberciti.biz/faq/linux-unix-generating-passwords-command/ +[54]:https://www.cyberciti.biz/faq/linux-random-password-generator/ +[55]:https://www.cyberciti.biz/faq/debian-ubuntu-linux-iptables-knockd-port-knocking-tutorial/ From b4d6fb694b90acf9a1bd206bb22a1ecb8800662a Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E6=9D=8E=E6=9D=BE=E5=B3=B0?= Date: Mon, 22 Jan 2018 14:00:10 +0800 Subject: [PATCH 177/226] Update 20180119 Two great uses for the cp command Bash shortcuts.md --- ...20180119 Two great uses for the cp command Bash 
shortcuts.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180119 Two great uses for the cp command Bash shortcuts.md b/sources/tech/20180119 Two great uses for the cp command Bash shortcuts.md index b3a5200278..9a45c26e7a 100644 --- a/sources/tech/20180119 Two great uses for the cp command Bash shortcuts.md +++ b/sources/tech/20180119 Two great uses for the cp command Bash shortcuts.md @@ -1,3 +1,5 @@ +Translating by cncuckoo + Two great uses for the cp command: Bash shortcuts ============================================================ From 94d2bf8f3b2c11692865d8b22001b0f8c12f06b1 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 22 Jan 2018 14:31:58 +0800 Subject: [PATCH 178/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20socat=20as=20a=20?= =?UTF-8?q?handler=20for=20multiple=20reverse=20shells=20=C2=B7=20System?= =?UTF-8?q?=20Overlord?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ltiple reverse shells - System Overlord.md | 66 +++++++++++++++++++ 1 file changed, 66 insertions(+) create mode 100644 sources/tech/20180120 socat as a handler for multiple reverse shells - System Overlord.md diff --git a/sources/tech/20180120 socat as a handler for multiple reverse shells - System Overlord.md b/sources/tech/20180120 socat as a handler for multiple reverse shells - System Overlord.md new file mode 100644 index 0000000000..b57a1e0140 --- /dev/null +++ b/sources/tech/20180120 socat as a handler for multiple reverse shells - System Overlord.md @@ -0,0 +1,66 @@ +socat as a handler for multiple reverse shells · System Overlord +====== + +I was looking for a new way to handle multiple incoming reverse shells. My shells needed to be encrypted and I preferred not to use Metasploit in this case. Because of the way I was deploying my implants, I wasn't able to use separate incoming port numbers or other ways of directing the traffic to multiple listeners. 
+
+Obviously, it's important to keep each reverse shell separated, so I couldn't just have a listener redirecting all the connections to STDIN/STDOUT. I also didn't want to wait for sessions serially - obviously I wanted to be connected to all of my implants simultaneously. (And allow them to disconnect/reconnect as needed due to loss of network connectivity.)
+
+As I was thinking about the problem, I realized that I basically wanted `tmux` for reverse shells. So I began to wonder if there was some way to connect `openssl s_server` or something similar to `tmux`. Given the limitations of `s_server`, I started looking at `socat`. Despite its versatility, I've actually only used it once or twice before this, so I spent a fair bit of time reading the man page and the examples.
+
+I couldn't find a way to get `socat` to talk directly to `tmux` in a way that would spawn each connection as a new window (file descriptors are not passed to the newly-started process in `tmux new-window`), so I ended up with a strange workaround. I feel a little bit like Rube Goldberg inventing C2 software (and I need to get something more permanent and featureful eventually, but this was a quick and dirty PoC), but I've put together a chain of `socat` to get a working solution.
+
+My implementation works by having a single `socat` process receive the incoming connections (forking on each incoming connection), and executing a script that first starts a `socat` instance within tmux, and then another `socat` process to copy from the first to the second over a UNIX domain socket.
+
+Yes, this is 3 socat processes. It's a little ridiculous, but I couldn't find a better approach. Roughly speaking, the communications flow looks a little like this:
+```
+TLS data <--> socat listener <--> script stdio <--> socat <--> unix socket <--> socat in tmux <--> terminal window
+```
+
+Getting it started is fairly simple. Begin by generating your SSL certificate. In this case, I'm using a self-signed certificate, but obviously you could go through a commercial CA, Let's Encrypt, etc.
+```
+openssl req -newkey rsa:2048 -nodes -keyout server.key -x509 -days 30 -out server.crt
+cat server.key server.crt > server.pem
+```
+
+Now we will create the script that is run on each incoming connection. This script needs to launch a `tmux` window running a `socat` process copying from a UNIX domain socket to `stdio` (in tmux), and then connect another `socat` between the incoming `stdio` and the UNIX domain socket.
+```
+#!/bin/bash
+
+SOCKDIR=$(mktemp -d)
+SOCKF=${SOCKDIR}/usock
+
+# Start tmux, if needed
+tmux start
+# Create window
+tmux new-window "socat UNIX-LISTEN:${SOCKF},umask=0077 STDIO"
+# Wait for socket
+while test ! -e ${SOCKF} ; do sleep 1 ; done
+# Use socat to ship data between the unix socket and STDIO.
+exec socat STDIO UNIX-CONNECT:${SOCKF}
+```
+
+The while loop is necessary to make sure that the last `socat` process does not attempt to open the UNIX domain socket before it has been created by the new `tmux` child process.
+
+Finally, we can launch the `socat` process that will accept the incoming requests (handling all the TLS steps) and execute our per-connection script:
+```
+socat OPENSSL-LISTEN:8443,cert=server.pem,reuseaddr,verify=0,fork EXEC:./socatscript.sh
+```
+
+This listens on port 8443, using the certificate and private key contained in `server.pem`, performs a `fork()` on accepting each incoming connection (so they do not block each other) and disables certificate verification (since we're not expecting our clients to provide a certificate). On the other side, it launches our script, providing the data from the TLS connection via STDIO.
+
+At this point, an incoming TLS connection connects, and is passed through our processes to eventually arrive on the `STDIO` of a new window in the running `tmux` server. Each connection gets its own window, allowing us to easily see and manage the connections for our implants.
+
+--------------------------------------------------------------------------------
+
+via: https://systemoverlord.com/2018/01/20/socat-as-a-handler-for-multiple-reverse-shells.html
+
+作者:[David][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://systemoverlord.com/about

From b7d3fe42bdbced682e84d1b21a6d659c1c2c09b1 Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 22 Jan 2018 14:47:41 +0800
Subject: [PATCH 179/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20PlayOnLinux=20For?=
 =?UTF-8?q?=20Easier=20Use=20Of=20Wine?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...0119 PlayOnLinux For Easier Use Of Wine.md | 153 ++++++++++++++++++
 1 file changed, 153 insertions(+)
 create mode 100644 sources/talk/20180119 PlayOnLinux For Easier Use Of Wine.md

diff --git a/sources/talk/20180119 PlayOnLinux For Easier Use Of Wine.md b/sources/talk/20180119 PlayOnLinux For Easier Use Of Wine.md
new file mode 100644
index 0000000000..2af3433920
--- /dev/null
+++ b/sources/talk/20180119 PlayOnLinux For Easier Use Of Wine.md
@@ -0,0 +1,153 @@
+PlayOnLinux For Easier Use Of Wine
+======
+
+![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux-for-easier-use-of-wine_orig.jpg)
+
+[PlayOnLinux][1] is a free program that helps to install, run, and manage Windows software on Linux. It can also manage virtual C: drives (known as Wine prefixes), and download and install certain Windows libraries for getting some software to run on Wine properly. Creating different drives using different Wine versions is also possible. It is very handy because what runs well in one version may not run as well (if at all) on a newer version. There is [PlayOnMac][2] for macOS and PlayOnBSD for FreeBSD.
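As a side note, each virtual C: drive that PlayOnLinux manages is an ordinary Wine prefix on disk. A quick sketch for inspecting them from a shell (the storage path is an assumption based on PlayOnLinux 4.x defaults; check your own install):

```shell
# List PlayOnLinux-managed Wine prefixes; each entry is a self-contained
# prefix with its own drive_c/ directory tree.
ls ~/.PlayOnLinux/wineprefix/ 2>/dev/null || echo "no PlayOnLinux prefixes found"
```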
+ +[Wine][3] is the compatibility layer that allows many programs developed for Windows to run under operating systems such as Linux, FreeBSD, macOS and other UNIX systems. The app database ([AppDB][4]) gives users an overview of a multitude of programs that will function on Wine, however successfully. + +Both programs can be obtained using your distribution’s software center or package manager for convenience. + +### Installing Programs Using PlayOnLinux + +Installing software is easy. PlayOnLinux has hundreds of scripts to aid in installing different software with which to run the setup. In the sidebar, select “Install Software”. You will find several categories to choose from. + +​ + +Hundreds of games can be installed this way. + + [![install games playonlinux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_orig.png)][5] + +​Office software can be installed as well, including Microsoft Office as shown here. + + [![microsoft office in linux playonlinux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_1_orig.png)][6] + +​Let’s install Notepad++ using the script. You can select the script to read the compatibility rating according to PlayOnLinux, and an overview of the program. To get a better idea of compatibility, refer to the WineHQ App Database and find “Browse Apps” to find a program like Notepad++. + + [![install notepad++ in linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_2_orig.png)][7] + +​Once you press “Install”, if you are using PlayOnLinux for the first time, you will encounter two popups: one to give you tips when installing programs with a script, and the other to not submit bug reports to WineHQ because PlayOnLinux has nothing to do with them. + +​ + +​During the installation, I was given the choice to either download the setup executable, or select one on the computer. I downloaded the file but received a File Mismatch error; however, I continued and it was successful. 
It’s not perfect, but it is functional. (It is possible to submit bug reports to PlayOnLinux if the option is given.) + +[![bug report on playonlinux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_3_orig.png)][8] + +Nevertheless, I was able to install Notepad++ successfully, run it, and update it to the latest version (at the time of writing 7.5.3) from version 7.4.2. + +​ + +Also during installation, it created a virtual C: drive specifically for Notepad++. As there are no other Wine versions available for PlayOnLinux to use, it defaults to using the version installed on the system. In this case, it is more than adequate for Notepad++ to run smoothly. + +### Installing Non-Listed Programs + +You can also install a program that is not on the list by pressing “Install Non-Listed Program” on the bottom-left corner of the install menu. Bear in mind that there is no script to install certain libraries to make things work properly. You will need to do this yourself. Look at the Wine AppDB for information for your program. Also, if the app isn’t listed, it doesn’t mean that it won’t work with Wine. It just means no one has given any information about it. + +​ + +I’ve installed Graphmatica, a graph plotting program, using this method. First I selected the option to install it on a new virtual drive. + + [![install non listed programs on linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_4_orig.png)][9] + +​Then I selected the option to install additional libraries after creating the drive and select a Wine version to use in doing so. + + [![playonlinux setup wizard](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_5_orig.png)][10] + +​I then proceeded to select Gecko (which encountered an error for some reason), and Mono 2.10 to install. + + [![playonlinux wizard POL_install](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_6_orig.png)][11] + +​Finally, I installed Graphmatica. 
It’s as simple as that. + + [![software installation done playonlinux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_7_orig.png)][12] + +A launcher can be created after installation. A list of executables found in the drive will appear. Search for the app executable (may not always be obvious) which may have its icon, select it and give it a display name. The icon will appear on the desktop. + + [![install graphmatica in linux playonlinux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_8_orig.png)][13] + [![playonlinux install windows software](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_9_orig.png)][14] + +### Multiple “C:” Drives + +Now that we have easily installed a program, let’s have a look at the drive configuration. In the main window, press “Configure” in the toolbar and this window will show. + + [![multiple c: drives in linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/editor/playonlinux_10.png?1516170517)][15] + +On the left are the drives that are found within PlayOnLinux. To the right, the “General” tab allows you to create shortcuts of programs installed on that virtual drive. + +​ + +The “Wine” tab has 8 buttons, including those to launch the Wine configuration program (winecfg), control panel, registry editor, command prompt, etc. + + [![playonlinux configuration wine](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_11_orig.png)][16] + +​“Install Components” allows you to select different Windows libraries like DirectX 9, .NET Framework versions 2 – 4.5, Visual C++ runtime, etc., like [winetricks][17]. + + [![install playonlinux components](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_12_orig.png)][18] + +“Display” allows the user to control advanced graphics settings like GLSL support, video memory size, and more. 
And “Miscellaneous” is for other actions like running an executable found anywhere on the computer to be run under the selected virtual drive. + +### Creating Virtual Drives Without Installing Programs + +To create a drive without installing software, simply press “New” below the list of drives to launch the virtual drive creator. Drives are created using the same method used in installing programs not found in the install menu. Follow the prompts, select either a 32-bit or 64-bit installation (in this case we only have 32-bit versions so select 32-bit), choose the Wine version, and give the drive a name. Once completed, it will appear in the drive list. + + [![playonlinux sandbox](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_13_orig.png)][19] + +### Managing Wine Versions + +Entire Wine versions can be downloaded using the manager. To access this through the menu bar, press “Tools” and select “Manage Wine versions”. Sometimes different software can behave differently between Wine versions. A Wine update can break something that made your application work in the previous version; thus rendering the application broken or completely unusable. Therefore, this feature is one of the highlights of PlayOnLinux. + +​ + +If you’re still on the configuration window, in the “General” tab, you can also access the version manager by pressing the “+” button next to the Wine version field. + + [![playonlinux select wine version](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_14_orig.png)][20] + +To install a version of Wine (32-bit or 64-bit), simply select the version, and press the “>” button to download and install it. After installation, if setup executables for Mono, and/or the Gecko HTML engine have not yet been downloaded by PlayOnLinux, they will be downloaded. + +​ + +I went ahead and installed the 2.21-staging version of Wine afterward. 
+ + [![select wine version playonlinux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_15_orig.png)][21] + +​To remove a version, press the “<” button. + +### Conclusion + +​This article demonstrated how to use PlayOnLinux to easily install Windows software into separate virtual C: drives, create and manage virtual drives, and manage several Wine versions. The software isn’t perfect, but it is still functional and useful. Managing different drives with different Wine versions is one of the key features of PlayOnLinux. It is a lot easier to use a front-end for Wine such as PlayOnLinux than pure Wine. + + +-------------------------------------------------------------------------------- + +via: http://www.linuxandubuntu.com/home/playonlinux-for-easier-use-of-wine + +作者:[LinuxAndUbuntu][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.linuxandubuntu.com +[1]:https://www.playonlinux.com/en/ +[2]:https://www.playonmac.com +[3]:https://www.winehq.org/ +[4]:http://appdb.winehq.org/ +[5]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_orig.png +[6]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_1_orig.png +[7]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_2_orig.png +[8]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_3_orig.png +[9]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_4_orig.png +[10]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_5_orig.png +[11]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_6_orig.png +[12]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_7_orig.png +[13]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_8_orig.png +[14]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_9_orig.png 
+[15]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_10_orig.png
+[16]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_11_orig.png
+[17]:https://github.com/Winetricks/winetricks
+[18]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_12_orig.png
+[19]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_13_orig.png
+[20]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_14_orig.png
+[21]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/playonlinux_15_orig.png

From 748de950123414664bdba43e8c3ad9bcc39b245c Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 22 Jan 2018 15:00:41 +0800
Subject: [PATCH 180/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20To=20Debug?=
 =?UTF-8?q?=20a=20Bash=20Shell=20Script=20Under=20Linux=20or=20UNIX?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...a Bash Shell Script Under Linux or UNIX.md | 249 ++++++++++++++++++
 1 file changed, 249 insertions(+)
 create mode 100644 sources/tech/20070129 How To Debug a Bash Shell Script Under Linux or UNIX.md

diff --git a/sources/tech/20070129 How To Debug a Bash Shell Script Under Linux or UNIX.md b/sources/tech/20070129 How To Debug a Bash Shell Script Under Linux or UNIX.md
new file mode 100644
index 0000000000..1527014d2f
--- /dev/null
+++ b/sources/tech/20070129 How To Debug a Bash Shell Script Under Linux or UNIX.md
@@ -0,0 +1,249 @@
+How To Debug a Bash Shell Script Under Linux or UNIX
+======
+From my mailbag:
+**I wrote a small hello world script. How can I debug a bash shell script running on a Linux or Unix-like system?**
+It is the most common question asked by new sysadmins and Linux/UNIX users. Shell script debugging can be a tedious job (read: not easy). There are various ways to debug a shell script.
+
+You need to pass the -x or -v argument to the bash shell to walk through each line in the script.
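
Before going through each option, here is a quick throwaway sketch of what that tracing actually looks like (the temp-file handling is incidental; only the `bash -x` invocation matters):

```shell
#!/bin/bash
# Write a two-line script to a temp file, run it under `bash -x`, and show
# the trace: each command is echoed (with a leading +) before it executes.
script=$(mktemp)
cat > "$script" <<'EOF'
greeting="Hello, world"
echo "$greeting"
EOF

trace=$(bash -x "$script" 2>&1 >/dev/null)   # the trace goes to stderr
printf '%s\n' "$trace"
rm -f "$script"
```

The trace shows lines such as `+ greeting='Hello, world'` followed by `+ echo 'Hello, world'`, which is exactly the walk-through behavior described above.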
+
+[![How to debug a bash shell script on Linux or Unix][1]][1]
+
+Let us see how to debug a bash script running on Linux and Unix using various methods.
+
+### -x option to debug a bash shell script
+
+Run a shell script with the -x option.
+```
+$ bash -x script-name
+$ bash -x domains.sh
+```
+
+### Use of set builtin command
+
+The bash shell offers debugging options which can be turned on or off using the [set command][2]:
+
+ * **set -x** : Display commands and their arguments as they are executed.
+ * **set -v** : Display shell input lines as they are read.
+
+
+
+You can use the above two commands in the shell script itself:
+```
+#!/bin/bash
+clear
+
+# turn on debug mode
+set -x
+for f in *
+do
+ file $f
+done
+# turn OFF debug mode
+set +x
+ls
+# more commands
+```
+
+You can replace the [standard Shebang][3] line:
+`#!/bin/bash`
+with the following (for debugging) code:
+`#!/bin/bash -xv`
+
+### Use of intelligent DEBUG function
+
+First, add a special variable called _DEBUG. Set _DEBUG to 'on' when you need to debug a script:
+`_DEBUG="on"`
+
+Put the following function at the beginning of the script:
+```
+function DEBUG()
+{
+ [ "$_DEBUG" == "on" ] && $@
+}
+```
+
+Now wherever you need debugging simply use the DEBUG function as follows:
+`DEBUG echo "File is $filename"`
+OR
+```
+DEBUG set -x
+Cmd1
+Cmd2
+DEBUG set +x
+```
+
+When done with debugging (and before moving your script to production) set _DEBUG to 'off'. No need to delete debug lines.
+`_DEBUG="off" # set to anything but not to 'on'`
+
+Sample script:
+```
+#!/bin/bash
+_DEBUG="on"
+function DEBUG()
+{
+ [ "$_DEBUG" == "on" ] && $@
+}
+
+DEBUG echo 'Reading files'
+for i in *
+do
+ grep 'something' $i > /dev/null
+ [ $? -eq 0 ] && echo "Found in $i file"
+done
+DEBUG set -x
+a=2
+b=3
+c=$(( $a + $b ))
+DEBUG set +x
+echo "$a + $b = $c"
+```
+
+Save and close the file.
Run the script as follows:
+`$ ./script.sh`
+Output:
+```
+Reading files
+Found in xyz.txt file
++ a=2
++ b=3
++ c=5
++ DEBUG set +x
++ '[' on == on ']'
++ set +x
+2 + 3 = 5
+
+```
+
+Now set DEBUG to off (you need to edit the file):
+`_DEBUG="off"`
+Run script:
+`$ ./script.sh`
+Output:
+```
+Found in xyz.txt file
+2 + 3 = 5
+
+```
+
+The above is a simple but quite effective technique. You can also try to use DEBUG as an alias instead of a function.
+
+### Debugging Common Bash Shell Scripting Errors
+
+Bash or sh or ksh gives various error messages on screen, and in many cases the error message may not provide detailed information.
+
+#### Forgetting to apply execute permission to the file
+
+When you [write your first hello world bash shell script][4], you might end up getting an error that reads as follows:
+`bash: ./hello.sh: Permission denied`
+Set the permission using the chmod command:
+```
+$ chmod +x hello.sh
+$ ./hello.sh
+$ bash hello.sh
+```
+
+#### End of file unexpected Error
+
+If you are getting an End of file unexpected error message, open your script file and make sure it has both opening and closing quotes. In this example, the echo statement has an opening quote but no closing quote:
+```
+#!/bin/bash
+
+
+...
+....
+
+
+echo 'Error: File not found
+      ^^^^^^^
+      missing quote
+```
+
+Also make sure you check for missing parentheses and braces ({}):
+```
+#!/bin/bash
+.....
+[ ! -d $DIRNAME ] && { echo "Error: Chroot dir not found"; exit 1;
+                                                           ^^^^^^^^^^^^^
+                                                           missing brace }
+...
+```
+
+#### Missing Keywords Such As fi, esac, ;;, etc.
+
+If you miss an ending keyword such as fi or ;;, you will get an error such as "xxx unexpected". So make sure all nested if and case statements end with the proper keywords. See the bash man page for syntax requirements. In this example, fi is missing:
+```
+#!/bin/bash
+echo "Starting..."
+....
+if [ $1 -eq 10 ]
+then
+ if [ $2 -eq 100 ]
+ then
+ echo "Do something"
+fi
+
+for f in $files
+do
+ echo $f
+done
+
+# note fi is missing
+```
+
+#### Moving or editing shell script on Windows or Unix boxes
+
+Do not create a script on Linux/Unix and then move it to Windows for editing and back again; similarly, avoid writing a bash shell script on Windows 10 and then uploading it to a Unix server. Either workflow will result in errors like "command not found" due to the carriage returns (DOS CR-LF) that Windows editors insert. You [can convert DOS newlines CR-LF to Unix/Linux format using][5] the following syntax:
+`dos2unix my-script.sh`
+
+### Tip 1 - Send Debug Message To stderr
+
+[Standard error][6] is the default error output device, which is used to write all system error messages. So it is a good idea to send messages to the default error device:
+```
+# Write error to stdout
+echo "Error: $1 file not found"
+#
+# Write error to stderr (note 1>&2 at the end of echo command)
+#
+echo "Error: $1 file not found" 1>&2
+```
+
+### Tip 2 - Turn On Syntax Highlighting when using vim text editor
+
+Most modern text editors allow you to turn on syntax highlighting. This is useful for spotting syntax problems and preventing common errors such as a missing opening or closing quote. Your bash script is displayed in different colors, which makes shell script structures easier to write, and syntax errors are visually distinct. Highlighting does not affect the meaning of the text itself; it's made only for you. In this example, vim syntax highlighting is used for my bash script:
+[![How To Debug a Bash Shell Script Under Linux or UNIX Using Vim Syntax Highlighting Feature][7]][7]
+
+### Tip 3 - Use shellcheck to lint script
+
+[ShellCheck is a static analysis tool for shell scripts][8]. One can use it to find bugs in your shell scripts. It is written in Haskell. You can find warnings and suggestions for bash/sh shell scripts with this tool.
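
As a taste of the kind of problem ShellCheck flags, its SC2086 warning ("Double quote to prevent globbing and word splitting") catches the classic unquoted-variable bug, which you can reproduce directly. A minimal sketch (the filename is arbitrary):

```shell
#!/bin/bash
# A filename containing a space: unquoted expansion splits it into two words.
f="my file.txt"

echo "unquoted expands to $(printf '%s\n' $f | wc -l) words"    # the bug
echo "quoted expands to $(printf '%s\n' "$f" | wc -l) word"     # intended
# ShellCheck would flag the unquoted $f above with SC2086.
```

Running a linter that spots this class of mistake before deployment is much cheaper than debugging the word-splitting failure in production.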
Let us see how to install and use ShellCheck on a Linux or Unix-like system to enhance your shell scripts, avoid errors and productivity. + + +### About the author + +Posted by: + +The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][9], [Facebook][10], [Google+][11]. + +-------------------------------------------------------------------------------- + +via: https://www.cyberciti.biz/tips/debugging-shell-script.html + +作者:[Vivek Gite][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.cyberciti.biz +[1]:https://www.cyberciti.biz/tips/wp-content/uploads/2007/01/How-to-debug-a-bash-shell-script-on-Linux-or-Unix.jpg +[2]:https://bash.cyberciti.biz/guide/Set_command +[3]:https://bash.cyberciti.biz/guide/Shebang +[4]:https://www.cyberciti.biz/faq/hello-world-bash-shell-script/ +[5]:https://www.cyberciti.biz/faq/howto-unix-linux-convert-dos-newlines-cr-lf-unix-text-format/ +[6]:https://bash.cyberciti.biz/guide/Standard_error +[7]:https://www.cyberciti.biz/media/new/tips/2007/01/bash-vim-debug-syntax-highlighting.png +[8]:https://www.cyberciti.biz/programming/improve-your-bashsh-shell-script-with-shellcheck-lint-script-analysis-tool/ +[9]:https://twitter.com/nixcraft +[10]:https://facebook.com/nixcraft +[11]:https://plus.google.com/+CybercitiBiz From 92d34173e4584f7b64e97cd2f82ddd152024063a Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 22 Jan 2018 15:04:42 +0800 Subject: [PATCH 181/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Shell=20Scripting?= =?UTF-8?q?=20a=20Bunco=20Game?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../20180121 Shell Scripting a 
Bunco Game.md | 234 ++++++++++++++++++
 1 file changed, 234 insertions(+)
 create mode 100644 sources/tech/20180121 Shell Scripting a Bunco Game.md

diff --git a/sources/tech/20180121 Shell Scripting a Bunco Game.md b/sources/tech/20180121 Shell Scripting a Bunco Game.md
new file mode 100644
index 0000000000..4483cae92b
--- /dev/null
+++ b/sources/tech/20180121 Shell Scripting a Bunco Game.md
@@ -0,0 +1,234 @@
+Shell Scripting a Bunco Game
+======
+I haven't dug into any game programming for a while, so I thought it was high time to do something in that realm. At first, I thought "Halo as a shell script?", but then I came to my senses. Instead, let's look at a simple dice game called Bunco. You may not have heard of it, but I bet your Mom has—it's quite a popular game for groups of gals at a local pub or tavern.
+
+Played in six rounds with three dice, the game is simple. You roll all three dice and have to match the current round number. If all three dice match the current round number (for example, three 3s in round three), you score 25. If all three match but aren't the current round number, it's a Mini Bunco and worth five points. Failing both of those, each die with the same value as the round number is worth one point.
+
+Played properly, the game also involves teams, multiple tables including a winner's table, and usually cash prizes funded by everyone paying $5 or so to play, awarded for specific winning scenarios like "most Buncos" or "most points". I'll skip that part here, however, and just focus on the dice part.
+
+### Let's Do the Math
+
+Before I go too far into the programming side of things, let me talk briefly about the math behind the game. Dice are easy to work with because on a properly weighted die, the chance of a particular value coming up is 1 in 6.
+
+Random tip: not sure whether your dice are balanced? Toss them in salty water and spin them. There are some really interesting YouTube videos from the D&D world showing how to do this test.
+
+So what are the odds of three dice having the same value? The first die has a 100% chance of having a value (no leaners here), so that's easy. The second die has a 16.66% chance of being any particular value, and then the third die has the same chance of being that value, but of course, they multiply, so three dice have about a 2.7% chance of all having the same value.
+
+Then, it's a 16.66% chance that those three dice would be the current round's number—or, in mathematical terms: 0.166 * 0.166 * 0.166 = 0.00462.
+
+In other words, you have a 0.46% chance of rolling a Bunco, which is a bit less than once out of every 200 rolls of three dice.
+
+It could be tougher though. If you were playing with five dice, the chance of rolling a Mini Bunco (or Yahtzee) is 0.077%, and if you were trying to accomplish a specific value, say just sixes, then it's 0.013% likely on any given roll—which is to say, not bloody likely!
+
+### And So into the Coding
+
+As with every game, the hardest part is really having a good random number generator that generates truly random values. That's actually hard to achieve in a shell script though, so I'm going to sidestep this entire issue and assume that the shell's built-in random number generator will be sufficient.
+
+What's nice is that it's super easy to work with. Just reference $RANDOM, and you'll have a random value between 0 and MAXINT (32767):
+
+```
+
+$ echo $RANDOM $RANDOM $RANDOM
+10252 22142 14863
+
+```
+
+To constrain that to the range 1–6, use the modulus function:
+
+```
+
+$ echo $(( $RANDOM % 6 ))
+3
+$ echo $(( $RANDOM % 6 ))
+0
+
+```
+
+Oops! I forgot to shift it one. Here's another try:
+
+```
+
+$ echo $(( ( $RANDOM % 6 ) + 1 ))
+6
+
+```
+
+That's the dice-rolling feature.
Let's make it a function where you can specify the variable you'd like to have the generated value as part of the invocation: + +``` + +rolldie() +{ + local result=$1 + rolled=$(( ( $RANDOM % 6 ) + 1 )) + eval $result=$rolled +} + +``` + +The use of the eval is to ensure that the variable specified in the invocation is actually assigned the calculated value. It's easy to work with: + +``` + +rolldie die1 + +``` + +That will load a random value between 1–6 into the variable die1. To roll your three dice, it's straightforward: + +``` + +rolldie die1 ; rolldie die2 ; rolldie die3 + +``` + +Now to test the values. First, let's test for a Bunco where all three dice have the same value, and it's the value of the current round too: + +``` + +if [ $die1 -eq $die2 ] && [ $die2 -eq $die3 ] ; then + if [ $die1 -eq $round ] ; then + echo "BUNCO!" + score=25 + else + echo "Mini Bunco!" + score=5 + fi + +``` + +That's probably the hardest of the tests, and notice the unusual use of test in the first conditional: [ cond1 ] && [ cond2 ]. If you're thinking that you could also write it as cond1 -a cond2, you're right. As with so much in the shell, there's more than one way to get to the solution. + +The remainder of the code is straightforward; you just need to test for whether the die matches the current round value: + +``` + +if [ $die1 -eq $round ] ; then + score=1 +fi +if [ $die2 -eq $round ] ; then + score=$(( $score + 1 )) +fi +if [ $die3 -eq $round ] ; then + score=$(( $score + 1 )) +fi + +``` + +The only thing to consider here is that you don't want to score die value vs. round if you've also scored a Bunco or Mini Bunco, so the entire second set of tests needs to be within the else clause of the first conditional (to see if all three dice have the same value). 
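
Before assembling the full game, it doesn't hurt to sanity-check that rolldie only ever produces values in 1–6. A throwaway test harness, not part of the game (bash-specific, since $RANDOM is a bashism):

```shell
#!/bin/bash
# Hammer rolldie a thousand times and track the extremes; anything outside
# 1-6 would indicate a bug in the modulus-and-shift arithmetic.
rolldie()
{
    local result=$1
    rolled=$(( ( $RANDOM % 6 ) + 1 ))
    eval $result=$rolled
}

min=7 ; max=0
for i in {1..1000} ; do
    rolldie d
    [ $d -lt $min ] && min=$d
    [ $d -gt $max ] && max=$d
done
echo "1000 rolls: min=$min max=$max"
```

With a thousand rolls you should see min=1 and max=6 essentially every time, confirming the shift-by-one is right.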
+ +Put it together and specify the round number on the command line, and here's what you have at this point: + +``` + +$ sh bunco.sh 5 +You rolled: 1 1 5 +score = 1 +$ sh bunco.sh 2 +You rolled: 6 4 3 +score = 0 +$ sh bunco.sh 1 +You rolled: 1 1 1 +BUNCO! +score = 25 + +``` + +A Bunco so quickly? Well, as I said, there might be a slight issue with the randomness of the random number generator in the shell. + +You can test it once you have the script working by running it a few hundred times and then checking to see what percentage are Bunco or Mini Bunco, but I'll leave that as an exercise for you, dear reader. Well, maybe I'll come back to it another time. + +Let's finish up this script by having it accumulate score and run for all six rounds instead of specifying a round on the command line. That's easily done, because it's just a wrapper around the entire script, or, better, the big conditional statement becomes a function all its own: + +``` + +BuncoRound() +{ + # roll, display, and score a round of bunco! + # round is specified when invoked, score added to totalscore + + local score=0 ; local round=$1 ; local hidescore=0 + + rolldie die1 ; rolldie die2 ; rolldie die3 + echo Round $round. You rolled: $die1 $die2 $die3 + + if [ $die1 -eq $die2 ] && [ $die2 -eq $die3 ] ; then + if [ $die1 -eq $round ] ; then + echo " BUNCO!" + score=25 + hidescore=1 + else + echo " Mini Bunco!" + score=5 + hidescore=1 + fi + else + if [ $die1 -eq $round ] ; then + score=1 + fi + if [ $die2 -eq $round ] ; then + score=$(( $score + 1 )) + fi + if [ $die3 -eq $round ] ; then + score=$(( $score + 1 )) + fi + fi + + if [ $hidescore -eq 0 ] ; then + echo " score this round: $score" + fi + + totalscore=$(( $totalscore + $score )) +} + +``` + +I admit, I couldn't resist a few improvements as I went along, including the addition of it showing either Bunco, Mini Bunco or a score value (that's what $hidescore does). 
+
+Invoking it is a breeze, and you'll use a for loop:
+
+```
+
+for round in {1..6} ; do
+ BuncoRound $round
+done
+
+```
+
+That's about the entire program at this point. Let's run it once and see what happens:
+
+```
+
+$ sh bunco.sh 1
+Round 1. You rolled: 2 3 3
+ score this round: 0
+Round 2. You rolled: 2 6 6
+ score this round: 1
+Round 3. You rolled: 1 2 4
+ score this round: 0
+Round 4. You rolled: 2 1 4
+ score this round: 1
+Round 5. You rolled: 5 5 6
+ score this round: 2
+Round 6. You rolled: 2 1 3
+ score this round: 0
+Game over. Your total score was 4
+
+```
+
+Ugh. Not too impressive, but it's probably a typical round. Again, you can run it a few hundred—or thousand—times, just save the "Game over" line, then do some quick statistical analysis to see how often you score more than 3 points in six rounds. (With three dice to roll a given value, you should hit that 50% of the time.)
+
+It's not a complicated game by any means, but it makes for an interesting little programming project. Now, what if they used a 20-sided die and let you re-roll one die per round and had a dozen rounds?
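
As a postscript, the statistical check suggested above doesn't even need the full script: a stripped-down loop can estimate the Bunco probability directly. A sketch (the trial count is arbitrary; theory predicts roughly 0.46%, i.e. on the order of 90 hits in 20,000 rolls):

```shell
#!/bin/bash
# Empirically estimate the chance that three $RANDOM-based dice all equal a
# given round number. Theory says (1/6)^3, about 0.46%.
round=3 ; trials=20000 ; hits=0
for (( i = 0; i < trials; i++ )) ; do
    d1=$(( ( RANDOM % 6 ) + 1 ))
    d2=$(( ( RANDOM % 6 ) + 1 ))
    d3=$(( ( RANDOM % 6 ) + 1 ))
    if [ $d1 -eq $round ] && [ $d2 -eq $round ] && [ $d3 -eq $round ] ; then
        hits=$(( hits + 1 ))
    fi
done
echo "Buncos: $hits out of $trials rolls"
```

If the hit count lands anywhere near the theoretical rate, the shell's built-in generator is at least not wildly biased for this purpose.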
+ + +-------------------------------------------------------------------------------- + +via: http://www.linuxjournal.com/content/shell-scripting-bunco-game + +作者:[Dave Taylor][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.linuxjournal.com/users/dave-taylor From 33f7f351c32eb8be202c1e26d436d8f4ddd690c4 Mon Sep 17 00:00:00 2001 From: qhwdw Date: Mon, 22 Jan 2018 15:06:28 +0800 Subject: [PATCH 182/226] Translated by qhwdw --- .../20140510 Journey to the Stack Part I.md | 105 ------------------ .../20140510 Journey to the Stack Part I.md | 103 +++++++++++++++++ 2 files changed, 103 insertions(+), 105 deletions(-) delete mode 100644 sources/tech/20140510 Journey to the Stack Part I.md create mode 100644 translated/tech/20140510 Journey to the Stack Part I.md diff --git a/sources/tech/20140510 Journey to the Stack Part I.md b/sources/tech/20140510 Journey to the Stack Part I.md deleted file mode 100644 index a29bb909e2..0000000000 --- a/sources/tech/20140510 Journey to the Stack Part I.md +++ /dev/null @@ -1,105 +0,0 @@ -#Translating by qhwdw [Journey to the Stack, Part I][1] - -Earlier we've explored the [anatomy of a program in memory][2], the landscape of how our programs run in a computer. Now we turn to the call stack, the work horse in most programming languages and virtual machines. Along the way we'll meet fantastic creatures like closures, recursion, and buffer overflows. But the first step is a precise picture of how the stack operates. - -The stack is so important because it keeps track of the functions running in a program, and functions are in turn the building blocks of software. In fact, the internal operation of programs is normally very simple. 
It consists mostly of functions pushing data onto and popping data off the stack as they call each other, while allocating memory on the heap for data that must survive across function calls. This is true for both low-level C software and VM-based languages like JavaScript and C#. A solid grasp of this reality is invaluable for debugging, performance tuning and generally knowing what the hell is going on. - -When a function is called, a stack frame is created to support the function's execution. The stack frame contains the function's local variables and the arguments passed to the function by its caller. The frame also contains housekeeping information that allows the called function (the callee) to return to the caller safely. The exact contents and layout of the stack vary by processor architecture and function call convention. In this post we look at Intel x86 stacks using C-style function calls (cdecl). Here's a single stack frame sitting live on top of the stack: - -![](https://manybutfinite.com/img/stack/stackIntro.png) - -Right away, three CPU registers burst into the scene. The stack pointer, esp, points to the top of the stack. The top is always occupied by the last item that was pushed onto the stack but has not yet been popped off, just as in a real-world stack of plates or $100 bills. - -The address stored in esp constantly changes as stack items are pushed and popped, such that it always points to the last item. Many CPU instructions automatically update esp as a side effect, and it's impractical to use the stack without this register. - -In the Intel architecture, as in most, the stack grows towards lower memory addresses. So the "top" is the lowest memory address in the stack containing live data: local_buffer in this case. Notice there's nothing vague about the arrow from esp to local_buffer. This arrow means business: it points specifically to the first byte occupied by local_buffer because that is the exact address stored in esp. 
- -The second register tracking the stack is ebp, the base pointer or frame pointer. It points to a fixed location within the stack frame of the function currently running and provides a stable reference point (base) for access to arguments and local variables. ebp changes only when a function call begins or ends. Thus we can easily address each item in the stack as an offset from ebp, as shown in the diagram. - -Unlike esp, ebp is mostly maintained by program code with little CPU interference. Sometimes there are performance benefits in ditching ebp altogether, which can be done via [compiler flags][3]. The Linux kernel is one example where this is done. - -Finally, the eax register is used by convention to transfer return values back to the caller for most C data types. - -Now let's inspect the data in our stack frame. These diagram shows precise byte-for-byte contents as you'd see in a debugger, with memory growing left-to-right, top-to-bottom. Here it is: - -![](https://manybutfinite.com/img/stack/frameContents.png) - -The local variable local_buffer is a byte array containing a null-terminated ascii string, a staple of C programs. The string was likely read from somewhere, for example keyboard input or a file, and it is 7 bytes long. Since local_buffer can hold 8 bytes, there's 1 free byte left. The content of this byte is unknown because in the stack's infinite dance of pushes and pops, you never know what memory holds unless you write to it. Since the C compiler does not initialize the memory for a stack frame, contents are undetermined - -* and somewhat random - until written to. This has driven some into madness. - -Moving on, local1 is a 4-byte integer and you can see the contents of each byte. It looks like a big number, with all those zeros following the 8, but here your intuition leads you astray. - -Intel processors are little endian machines, meaning that numbers in memory start with the little end first. 
So the least significant byte of a multi-byte number is in the lowest memory address. Since that is normally shown leftmost, this departs from our usual representation of numbers. It helps to know that this endian talk is borrowed from Gulliver's Travels: just as folks in Lilliput eat their eggs starting from the little end, Intel processors eat their numbers starting from the little byte. - -So local1 in fact holds the number 8, as in the legs of an octopus. param1, however, has a value of 2 in the second byte position, so its mathematical value is 2 * 256 = 512 (we multiply by 256 because each place value ranges from 0 to 255). Meanwhile, param2 is carrying weight at 1 * 256 * 256 = 65536. - -The housekeeping data in this stack frame consists of two crucial pieces: the address of the previous stack frame (saved ebp) and the address of the instruction to be executed upon the function's exit (the return address). Together, they make it possible for the function to return sanely and for the program to keep running along. - -Now let's see the birth of a stack frame to build a clear mental picture of how this all works together. Stack growth is puzzling at first because it happens in the opposite direction you'd expect. For example, to allocate 8 bytes on the stack one subtracts 8 from esp, and subtraction is an odd way to grow something. - -Let's take a simple C program: - -``` -Simple Add Program - add.c - -int add(int a, int b) -{ - int result = a + b; - return result; -} - -int main(int argc) -{ - int answer; - answer = add(40, 2); -} -``` - -Suppose we run this in Linux without command-line parameters. When you run a C program, the first code to actually execute is in the C runtime library, which then calls our main function. The diagrams below show step-by-step what happens as the program runs. Each diagram links to GDB output showing the state of memory and registers. You may also see the [GDB commands][4] used and the whole [GDB output][5]. 
Here we go: - -![](https://manybutfinite.com/img/stack/mainProlog.png) - -Steps 2 and 3, along with 4 below, are the function prologue, which is common to nearly all functions: the current value of ebp is saved to the top of the stack, and then esp is copied to ebp, establishing a new frame. main's prologue is like any other, but with the peculiarity that ebp is zeroed out when the program starts. - -If you were to inspect the stack below argc (to the right) you'd find more data, including pointers to the program name and command-line parameters (the traditional C argv), plus pointers to Unix environment variables and their actual contents. But that's not important here, so the ball keeps rolling towards the add() call: - -![](https://manybutfinite.com/img/stack/callAdd.png) - -After main subtracts 12 from esp to get the stack space it needs, it sets the values for a and b. Values in memory are shown in hex and little-endian format, as you'd see in a debugger. Once parameter values are set, main calls add and it starts running: - -![](https://manybutfinite.com/img/stack/addProlog.png) - -Now there's some excitement! We get another prologue, but this time you can see clearly how the stack frames form a linked list, starting at ebp and going down the stack. This is how debuggers and Exception objects in higher-level languages get their stack traces. You can also see the much more typical catching up of ebp to esp when a new frame is born. And again, we subtract from esp to get more stack space. - -There's also the slightly weird reversal of bytes when the ebp register value is copied to memory. What's happening here is that registers don't really have endianness: there are no "growing addresses" inside a register as there are for memory. Thus by convention debuggers show register values in the most natural format to humans: most significant to least significant digits. 
So the results of the copy in a little-endian machine appear reversed in the usual left-to-right notation for memory. I want the diagrams to provide an accurate picture of what you find in the trenches, so there you have it. - -With the hard part behind us, we add: - -![](https://manybutfinite.com/img/stack/doAdd.png) - -There are guest register appearances to help out with the addition, but otherwise no alarms and no surprises. add did its job, and at this point the stack action would go in reverse, but we'll save that for next time. - -Anybody who's read this far deserves a souvenir, so I've made a large diagram showing [all the steps combined][6] in a fit of nerd pride. - -It looks tame once it's all laid out. Those little boxes help a lot. In fact, little boxes are the chief tool of computer science. I hope the pictures and register movements provide an intuitive mental picture that integrates stack growth and memory contents. Up close, our software doesn't look too far from a simple Turing machine. - -This concludes the first part of our stack tour. There's some more byte spelunking ahead, and then it's on to see higher level programming concepts built on this foundation. See you next week. 
- --------------------------------------------------------------------------------- - -via:https://manybutfinite.com/post/journey-to-the-stack/ - -作者:[Gustavo Duarte][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://duartes.org/gustavo/blog/about/ -[1]:https://manybutfinite.com/post/journey-to-the-stack/ -[2]:https://manybutfinite.com/post/anatomy-of-a-program-in-memory -[3]:http://stackoverflow.com/questions/14666665/trying-to-understand-gcc-option-fomit-frame-pointer -[4]:https://github.com/gduarte/blog/blob/master/code/x86-stack/add-gdb-commands.txt -[5]:https://github.com/gduarte/blog/blob/master/code/x86-stack/add-gdb-output.txt -[6]:https://manybutfinite.com/img/stack/callSequence.png \ No newline at end of file diff --git a/translated/tech/20140510 Journey to the Stack Part I.md b/translated/tech/20140510 Journey to the Stack Part I.md new file mode 100644 index 0000000000..b18c7d32f5 --- /dev/null +++ b/translated/tech/20140510 Journey to the Stack Part I.md @@ -0,0 +1,103 @@ +#[探秘“栈”之旅(I)][1] + +早些时候,我们讲解了 [“剖析内存中的程序之秘”][2],我们欣赏了在一台电脑中是如何运行我们的程序的。今天,我们去探索栈的调用,它在大多数编程语言和虚拟机中都默默地存在。在此过程中,我们将接触到一些平时很难见到的东西,像闭包(closures)、递归、以及缓冲溢出等等。但是,我们首先要作的事情是,描绘出栈是如何运作的。 + +栈非常重要,因为它持有着在一个程序中运行的函数,而函数又是一个软件的重要组成部分。事实上,程序的内部操作都是非常简单的。它大部分是由函数向栈中推入数据或者从栈中弹出数据的相互调用组成的,虽然为数据分配内存是在堆上,但是,在跨函数的调用中数据必须要保存下来,不论是低级(low-leverl)的 C 软件还是像 JavaScript 和 C# 这样的基于虚拟机的语言,它们都是这样的。而对这些行为的深刻理解,对排错、性能调优以及大概了解究竟发生了什么是非常重要的。 + +当一个函数被调用时,将会创建一个栈帧(stack frame)去支持函数的运行。这个栈帧包含函数的本地变量和调用者传递给它的参数。这个栈帧也包含了允许被调用的函数安全返回给调用者的内部事务信息。栈帧的精确内容和结构因处理器架构和函数调用规则而不同。在本文中我们以 Intel x86 架构和使用 C 风格的函数调用(cdecl)的栈为例。下图是一个处于栈顶部的一个单个栈帧: + +![](https://manybutfinite.com/img/stack/stackIntro.png) + +在图上的场景中,有三个 CPU 寄存器进入栈。栈指针 `esp`(译者注:扩展栈指针寄存器) 指向到栈的顶部。栈的顶部总是被最后一个推入到栈且还没有弹出的东西所占据,就像现实世界中堆在一起的一叠板子或者面值 $100 的钞票。 + +保存在 `esp` 中的地址始终在变化着,因为栈中的东西不停被推入和弹出,而它总是指向栈中的最后一个推入的东西。许多 CPU 
指令的一个副作用就是自动更新 `esp`,离开寄存器而使用栈是行不通的。 + +在 Intel 的架构中,绝大多数情况下,栈的增长是向着低位内存地址的方向。因此,这个“顶部” 在包含数据(在这种情况下,包含的数据是 `local_buffer`)的栈中是处于低位的内存地址。注意,关于从 `esp` 到 `local_buffer` 的箭头,这里并没有模糊的地方。这个箭头代表着事务:它专门指向到由 `local_buffer` 所拥有的第一个字节,因为,那是一个保存在 `esp` 中的精确地址。 + +第二个寄存器跟踪的栈是 `ebp`(译者注:扩展基址指针寄存器),它包含一个基指针或者称为帧指针。它指向到一个当前运行的函数的栈帧内的固定的位置,并且它为参数和本地变量的访问提供一个稳定的参考点(基址)。仅当开始或者结束调用一个函数时,`ebp` 的内容才会发生变化。因此,我们可以很容易地处理每个在栈中的从 `ebp` 开始偏移后的一个东西。如下图所示。 + +不像 `esp`, `ebp` 大多数情况下是在程序代码中通过花费很少的 CPU 来进行维护的。有时候,完成抛弃 `ebp` 有一些性能优势,可以通过 [编译标志][3] 来做到这一点。Linux 内核中有一个实现的示例。 + +最后,`eax`(译者注:扩展的 32 位通用数据寄存器)寄存器是被调用规则所使用的寄存器,对于大多数 C 数据类型来说,它的作用是转换一个返回值给调用者。 + +现在,我们来看一下在我们的栈帧中的数据。下图清晰地按字节展示了字节的内容,就像你在一个调试器中所看到的内容一样,内存是从左到右、从底部到顶部增长的,如下图所示: + +![](https://manybutfinite.com/img/stack/frameContents.png) + +本地变量 `local_buffer` 是一个字节数组,它包含一个空终止(null-terminated)的 ascii 字符串,这是一个 C 程序中的基本元素。这个字符串可以从任意位置读取,例如,从键盘输入或者来自一个文件,它只有 7 个字节的长度。因为,`local_buffer` 只能保存 8 字节,在它的左侧保留了 1 个未使用的字节。这个字节的内容是未知的,因为栈的推入和弹出是极其活跃的,除了你写入的之外,你从不知道内存中保存了什么。因为 C 编译器并不为栈帧初始化内存,所以它的内容是未知的并且是随机的 - 除非是你自己写入。这使得一些人对此很困惑。 + +再往上走,`local1` 是一个 4 字节的整数,并且你可以看到每个字节的内容。它似乎是一个很大的数字,所有的零都在 8 后面,在这里可能会让你误入歧途。 + +Intel 处理器是按从小到大的机制来处理的,这表示在内存中的数字也是首先从小的位置开始的。因此,在一个多字节数字中,最小的标志字节在内存中处于低端地址。因为一般情况下是从左边开始显示的,这背离了我们一般意义上对数字的认识。我们讨论的这种从小到大的机制,使我想起《Gulliver 游记》:就像 Lilliput 吃鸡蛋是从小头开始的一样,Intel 处理器处理它们的数字也是从字节的小端开始的。 + +因此,`local1` 事实上只保存了一个数字 8,就像一个章鱼的腿。然而,`param1` 在第二个字节的位置有一个值 2,因此,它的数学上的值是 2 * 256 = 512(我们与 256 相乘是因为,每个位置值的范围都是从 0 到 255)。同时,`param2` 承载的数量是 1 * 256 * 256 = 65536。 + +这个栈帧的内部数据是由两个重要的部分组成:前一个栈帧的地址和函数的出口(返回地址)上运行的指令的地址。它们一起确保了函数能够正常返回,从而使程序可以继续正常运行。 + +现在,我们来看一下栈帧是如何产生的,以及去建立一个它们如何共同工作的内部蓝图。在刚开始的时候,栈的增长是非常令人困惑的,因为它发生的一切都不是你所期望的东西。例如,在栈上从 `esp` 减去 8,去分配一个 8 字节,而减法是以一种奇怪的方式去开始的。 + +我们来看一个简单的 C 程序: + +``` +Simple Add Program - add.c + +int add(int a, int b) +{ + int result = a + b; + return result; +} + +int main(int argc) +{ + int answer; + answer = add(40, 2); +} +``` + +假设我们在 Linux 中不使用命令行参数去运行它。当你运行一个 C 程序时,去真实运行的第一个代码是 C 运行时库,由它来调用我们的 
`main` 函数。下图展示了程序运行时每一步都发生了什么。每个图链接的 GDB 输出展示了内存的状态和寄存器。你也可以看到所使用的 [GDB 命令][4],以及整个 [GDB 输出][5]。如下: + +![](https://manybutfinite.com/img/stack/mainProlog.png) + +第 2 步和第 3 步,以及下面的第 4 步,都只是函数的开端,几乎所有的函数都是这样的:`ebp` 的当前值保存着栈的顶部,然后,将 `esp` 的内容拷贝到 `ebp`,维护一个新帧。`main` 的开端和任何一个其它函数都是一样,但是,不同之处在于,当程序启动时 `ebp` 被清零。 + +如果你去检查栈下面的整形变量(argc),你将找到更多的数据,包括指向到程序名和命令行参数(传统的 C 参数数组)、Unix 环境变量以及它们真实的内容的指针。但是,在这里这些并不是重点,因此,继续向前调用 add(): + +![](https://manybutfinite.com/img/stack/callAdd.png) + +在 `main` 从 `esp` 减去 12 之后得到它所需的栈空间,它为 a 和 b 设置值。在内存中值展示为十六进制,并且是从小到大的格式。与你从调试器中看到的一样。一旦设置了参数值,`main` 将调用 `add` ,并且它开始运行: + +![](https://manybutfinite.com/img/stack/addProlog.png) + +现在,有一点小激动!我们进入了另一个开端,在这时你可以明确看到栈帧是如何从 `ebp` 的一个链表开始进入到栈的。这就是在高级语言中调试器和异常对象如何对它们的栈进行跟踪的。当一个新帧产生时,你也可以看到更多这种从 `ebp` 到 `esp` 的典型的捕获。我们再次从 `esp` 中做减法得到更多的栈空间。 + +当 `ebp` 寄存器的值拷贝到内存时,这里也有一个稍微有些怪异的地方。在这里发生的奇怪事情是,寄存器并没有真的按字节顺序拷贝:因为对于内存,没有像寄存器那样的“增长的地址”。因此,通过调试器的规则以最自然的格式给人展示了寄存器的值:从最重要的到最不重要的数字。因此,这个在从小到大的机制中拷贝的结果,与内存中常用的从左到右的标记法正好相反。我想用图去展示你将会看到的东西,因此有了下面的图。 + +在比较难懂的部分,我们增加了注释: + +![](https://manybutfinite.com/img/stack/doAdd.png) + +这是一个临时寄存器,用于帮你做加法,因此没有什么警报或者惊喜。对于加法这样的作业,栈的动作正好相反,我们留到下次再讲。 + +对于任何读到这篇文章的人都应该有一个小礼物,因此,我做了一个大的图表展示了 [组合到一起的所有步骤][6]。 + +一旦把它们全部布置好了,看上起似乎很乏味。这些小方框给我们提供了很多帮助。事实上,在计算机科学中,这些小方框是主要的展示工具。我希望这些图片和寄存器的移动能够提供一种更直观的构想图,将栈的增长和内存的内容整合到一起。从软件的底层运作来看,我们的软件与一个简单的图灵机器差不多。 + +这就是我们栈探秘的第一部分,再讲一些内容之后,我们将看到构建在这个基础上的高级编程的概念。下周见! 
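(译注:下面补充一小段 Python 脚本,用来验证上文中小端序的算术。它与原文无关,仅作辅助说明,在任意 Python 3 环境中均可运行:)

```python
import struct

# "<I" 表示按小端序(little-endian)解释一个 32 位无符号整数,
# 即最低有效字节存放在最低的内存地址上
local1 = struct.pack("<I", 8)
print(local1)  # b'\x08\x00\x00\x00':数字 8 位于第一个字节

# param1 的第二个字节是 2,因此它的值是 2 * 256 = 512
param1 = struct.unpack("<I", bytes([0, 2, 0, 0]))[0]
print(param1)  # 512

# param2 的第三个字节是 1,因此它的值是 1 * 256 * 256 = 65536
param2 = struct.unpack("<I", bytes([0, 0, 1, 0]))[0]
print(param2)  # 65536
```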
+ +-------------------------------------------------------------------------------- + +via:https://manybutfinite.com/post/journey-to-the-stack/ + +作者:[Gustavo Duarte][a] +译者:[qhwdw](https://github.com/qhwdw) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://duartes.org/gustavo/blog/about/ +[1]:https://manybutfinite.com/post/journey-to-the-stack/ +[2]:https://manybutfinite.com/post/anatomy-of-a-program-in-memory +[3]:http://stackoverflow.com/questions/14666665/trying-to-understand-gcc-option-fomit-frame-pointer +[4]:https://github.com/gduarte/blog/blob/master/code/x86-stack/add-gdb-commands.txt +[5]:https://github.com/gduarte/blog/blob/master/code/x86-stack/add-gdb-output.txt +[6]:https://manybutfinite.com/img/stack/callSequence.png \ No newline at end of file From 2bfa1e99fef47ac58b4e6bf10914ee45bf497413 Mon Sep 17 00:00:00 2001 From: Kane Date: Mon, 22 Jan 2018 23:18:58 +0800 Subject: [PATCH 183/226] =?UTF-8?q?=E5=AE=8C=E6=88=90=E7=BF=BB=E8=AF=91?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...onfigure LXD containers with cloud-init.md | 198 ----------------- ...onfigure LXD containers with cloud-init.md | 204 ++++++++++++++++++ 2 files changed, 204 insertions(+), 198 deletions(-) delete mode 100644 sources/tech/20180103 How to preconfigure LXD containers with cloud-init.md create mode 100644 translated/tech/20180103 How to preconfigure LXD containers with cloud-init.md diff --git a/sources/tech/20180103 How to preconfigure LXD containers with cloud-init.md b/sources/tech/20180103 How to preconfigure LXD containers with cloud-init.md deleted file mode 100644 index d94b5fa2b8..0000000000 --- a/sources/tech/20180103 How to preconfigure LXD containers with cloud-init.md +++ /dev/null @@ -1,198 +0,0 @@ -kaneg is translating. 
-How to preconfigure LXD containers with cloud-init -====== -You are creating containers and you want them to be somewhat preconfigured. For example, you want them to run automatically **apt update** as soon as they are launched. Or, get some packages pre-installed, or run a few commands. Here is how to perform this early initialization with [**cloud-init**][1] through [LXD to container images that support **cloud-init**][2]. - -In the following, we are creating a separate LXD profile with some cloud-init instructions, then launch a container using that profile. - -### How to create a new LXD profile - -Let's see the existing profiles. -``` -$ **lxc profile list** -+---------|---------+ -| NAME | USED BY | -+---------|---------+ -| default | 11 | -+---------|---------+ -``` - -There is one profile, **default**. We copy it to a new name, so that we can start adding our instructions on that profile. -``` -$ **lxc profile copy default devprofile** - -$ **lxc profile list** -+------------|---------+ -| NAME | USED BY | -+------------|---------+ -| default | 11 | -+------------|---------+ -| devprofile | 0 | -+------------|---------+ -``` - -We have a new profile to work on, **devprofile**. Here is how it looks, -``` -$ **lxc profile show devprofile** -config: - environment.TZ: "" -description: Default LXD profile -devices: - eth0: - nictype: bridged - parent: lxdbr0 - type: nic - root: - path: / - pool: default - type: disk -name: devprofile -used_by: [] -``` - -Note the main sections, **config:** , **description:** , **devices:** , **name:** , and **used_by:**. There is careful indentation in the profile, and when you make edits, you need to take care of the indentation. - -### How to add cloud-init to an LXD profile - -In the **config:** section of a LXD profile, we can insert [cloud-init][1] instructions. Those[ cloud-init][1] instructions will be passed to the container and will be used when it is first launched. 
- -Here are those that we are going to use in the example, -``` - package_upgrade: true - packages: - - build-essential - locale: es_ES.UTF-8 - timezone: Europe/Madrid - runcmd: - - [touch, /tmp/simos_was_here] -``` - -**package_upgrade: true** means that we want **cloud-init** to run **sudo apt upgrade** when the container is first launched. Under **packages:** we list the packages that we want to get automatically installed. Then we set the **locale** and **timezone**. In the Ubuntu container images, the default locale for **root** is **C.UTF-8** , for the **ubuntu** account it 's **en_US.UTF-8**. The timezone is **Etc/UTC**. Finally, we show [how to run a Unix command with **runcmd**][3]. - -The part that needs a bit of attention is how to insert the **cloud-init** instructions into the LXD profile. My preferred way is -``` -$ **lxc profile edit devprofile** -``` - -This opens up a text editor and allows to paste the instructions. Here is [how the result should look like][4], -``` -$ **lxc profile show devprofile** -config: - environment.TZ: "" - - - user.user-data: | - #cloud-config - package_upgrade: true - packages: - - build-essential - locale: es_ES.UTF-8 - timezone: Europe/Madrid - runcmd: - - [touch, /tmp/simos_was_here] - - -description: Default LXD profile -devices: - eth0: - nictype: bridged - parent: lxdbr0 - type: nic - root: - path: / - pool: default - type: disk -name: devprofile -used_by: [] -``` - -WordPress can get a bit messed with indentation when you copy/paste, therefore, you may use [this pastebin][4] instead. - -### How to launch a container using a profile - -Let's launch a new container using the profile **devprofile**. -``` -$ **lxc launch --profile devprofile ubuntu:x mydev** -``` - -Let's get into the container and figure out whether our instructions took effect. -``` -$ **lxc exec mydev bash** -root@mydev:~# **ps ax** - PID TTY STAT TIME COMMAND - 1 ? Ss 0:00 /sbin/init - ... - 427 ? 
Ss 0:00 /usr/bin/python3 /usr/bin/cloud-init modules --mode=f - 430 ? S 0:00 /bin/sh -c tee -a /var/log/cloud-init-output.log - 431 ? S 0:00 tee -a /var/log/cloud-init-output.log - 432 ? S 0:00 /usr/bin/apt-get --option=Dpkg::Options::=--force-con - 437 ? S 0:00 /usr/lib/apt/methods/http - 438 ? S 0:00 /usr/lib/apt/methods/http - 440 ? S 0:00 /usr/lib/apt/methods/gpgv - 570 ? Ss 0:00 bash - 624 ? S 0:00 /usr/lib/apt/methods/store - 625 ? R+ 0:00 ps ax -root@mydev:~# -``` - -We connected quite quickly, and **ps ax** shows that the package update is indeed taking place! We can get the full output at /var/log/cloud-init-output.log and in there, -``` -Generating locales (this might take a while)... - es_ES.UTF-8... done -Generation complete. -``` - -The locale got set. The **root** user keeps having the **C.UTF-8** default locale. It is only the non-root account **ubuntu** that gets the new locale. -``` -Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease -Get:2 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB] -Get:3 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB] -``` - -Here is **apt update** that is required before installing packages. -``` -The following packages will be upgraded: - libdrm2 libseccomp2 squashfs-tools unattended-upgrades -4 upgraded, 1 newly installed, 0 to remove and 0 not upgraded. -Need to get 211 kB of archives. -``` - -Here is runs **package_upgrade: true** and installs any available packages. -``` -The following NEW packages will be installed: - binutils build-essential cpp cpp-5 dpkg-dev fakeroot g++ g++-5 gcc gcc-5 - libalgorithm-diff-perl libalgorithm-diff-xs-perl libalgorithm-merge-perl -``` - -This is from our instruction to install the **build-essential** meta-package. - -What about the **runcmd** instruction? -``` -root@mydev:~# **ls -l /tmp/** -total 1 --rw-r--r-- 1 root root 0 Jan 3 15:23 simos_was_here -root@mydev:~# -``` - -It worked as well! 
- 
-### Conclusion 
- 
-When we launch LXD containers, we often need some configuration to be enabled by default and avoid repeated actions. The way to solve this, is to create LXD profiles. Each profile captures those configurations. Finally, when we launch the new container, we specify which LXD profile to use. 
- 
- 
-------------------------------------------------------------------------------- 
- 
-via: https://blog.simos.info/how-to-preconfigure-lxd-containers-with-cloud-init/ 
- 
-作者:[Simos Xenitellis][a] 
-译者:[译者ID](https://github.com/译者ID) 
-校对:[校对者ID](https://github.com/校对者ID) 
- 
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 
- 
-[a]:https://blog.simos.info/author/simos/ 
-[1]:http://cloudinit.readthedocs.io/en/latest/index.html 
-[2]:https://github.com/lxc/lxd/blob/master/doc/cloud-init.md 
-[3]:http://cloudinit.readthedocs.io/en/latest/topics/modules.html#runcmd 
-[4]:https://paste.ubuntu.com/26313399/ 
diff --git a/translated/tech/20180103 How to preconfigure LXD containers with cloud-init.md b/translated/tech/20180103 How to preconfigure LXD containers with cloud-init.md 
new file mode 100644 
index 0000000000..919efe4a26 
--- /dev/null 
+++ b/translated/tech/20180103 How to preconfigure LXD containers with cloud-init.md 
@@ -0,0 +1,204 @@ 
+如何使用cloud-init来预配置LXD容器 
+====== 
+当你正在创建LXD容器的时候,你希望它们能被预先配置好。例如在容器一启动就自动执行 **apt update** 来安装一些软件包,或者运行一些命令。 
+这篇文章将讲述如何用[**cloud-init**][1]来对[LXD容器进行早期初始化][2]。 
+接下来,我们将创建一个包含cloud-init指令的LXD profile,然后启动一个新的容器来使用这个profile。 
+ 
+### 如何创建一个新的LXD profile 
+ 
+查看已经存在的profile: 
+ 
+```shell 
+$ lxc profile list 
++---------|---------+ 
+| NAME | USED BY | 
++---------|---------+ 
+| default | 11 | 
++---------|---------+ 
+``` 
+ 
+我们把名叫default的profile复制一份,然后在其内添加新的指令: 
+ 
+```shell 
+$ lxc profile copy default devprofile 
+ 
+$ lxc profile list 
++------------|---------+ 
+| NAME | USED BY | 
++------------|---------+ 
+| default | 11 | 
++------------|---------+ 
+| devprofile | 0 | 
++------------|---------+ 
+``` 
+ 
+我们就得到了一个新的profile: **devprofile**。下面是它的详情: + +```yaml +$ lxc profile show devprofile +config: + environment.TZ: "" +description: Default LXD profile +devices: + eth0: + nictype: bridged + parent: lxdbr0 + type: nic + root: + path: / + pool: default + type: disk +name: devprofile +used_by: [] +``` + +注意这几个部分: **config:** , **description:** , **devices:** , **name:** 和 **used_by:**,当你修改这些内容的时候注意不要搞错缩进。(译者注:因为这些内容是YAML格式的,缩进是语法的一部分) + +### 如何把cloud-init添加到LXD profile里 + +[cloud-init][1]可以添加到LXD profile的 **config** 里。当这些指令将被传递给容器后,会在容器第一次启动的时候执行。 +下面是用在示例中的指令: + +```yaml + package_upgrade: true + packages: + - build-essential + locale: es_ES.UTF-8 + timezone: Europe/Madrid + runcmd: + - [touch, /tmp/simos_was_here] +``` + +**package_upgrade: true** 是指当容器第一次被启动时,我们想要**cloud-init** 运行 **sudo apt upgrade**。 +**packages:** 列出了我们想要自动安装的软件。然后我们设置了**locale** and **timezone**。在Ubuntu容器的镜像里,root用户默认的 locale 是**C.UTF-8**,而**ubuntu** 用户则是 **en_US.UTF-8**。此外,我们把时区设置为**Etc/UTC**。 +最后,我们展示了[如何使用**runcmd**来运行一个Unix命令][3]。 + +我们需要关注如何将**cloud-init**指令插入LXD profile。 + +我首选的方法是: + +``` +$ lxc profile edit devprofile +``` + +它会打开一个文本编辑器,以便你将指令粘贴进去。[结果应该是这样的][4]: + +```yaml +$ lxc profile show devprofile +config: + environment.TZ: "" + user.user-data: | + #cloud-config + package_upgrade: true + packages: + - build-essential + locale: es_ES.UTF-8 + timezone: Europe/Madrid + runcmd: + - [touch, /tmp/simos_was_here] +description: Default LXD profile +devices: + eth0: + nictype: bridged + parent: lxdbr0 + type: nic + root: + path: / + pool: default + type: disk +name: devprofile +used_by: [] +``` + +### 如何使用LXD profile启动一个容器 + +使用profile **devprofile**来启动一个新容器: + +``` +$ lxc launch --profile devprofile ubuntu:x mydev +``` + +然后访问该容器来查看我们的的指令是否生效: + +```shell +$ lxc exec mydev bash +root@mydev:~# ps ax + PID TTY STAT TIME COMMAND + 1 ? Ss 0:00 /sbin/init + ... + 427 ? Ss 0:00 /usr/bin/python3 /usr/bin/cloud-init modules --mode=f + 430 ? 
S 0:00 /bin/sh -c tee -a /var/log/cloud-init-output.log + 431 ? S 0:00 tee -a /var/log/cloud-init-output.log + 432 ? S 0:00 /usr/bin/apt-get --option=Dpkg::Options::=--force-con + 437 ? S 0:00 /usr/lib/apt/methods/http + 438 ? S 0:00 /usr/lib/apt/methods/http + 440 ? S 0:00 /usr/lib/apt/methods/gpgv + 570 ? Ss 0:00 bash + 624 ? S 0:00 /usr/lib/apt/methods/store + 625 ? R+ 0:00 ps ax +root@mydev:~# +``` + +如果我们连接得够快,通过**ps ax**将能够看到系统正在更新软件。我们可以从/var/log/cloud-init-output.log看到完整的日志: + +``` +Generating locales (this might take a while)... + es_ES.UTF-8... done +Generation complete. +``` + +以上可以看出locale已经被更改了。root 用户还是保持默认的**C.UTF-8**,只有非root用户**ubuntu**使用了新的locale。 + +``` +Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease +Get:2 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB] +Get:3 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB] +``` + +以上是安装软件包之前执行的**apt update**。 + +``` +The following packages will be upgraded: + libdrm2 libseccomp2 squashfs-tools unattended-upgrades +4 upgraded, 1 newly installed, 0 to remove and 0 not upgraded. +Need to get 211 kB of archives. +``` +以上是在执行**package_upgrade: true**和安装软件包。 + +``` +The following NEW packages will be installed: + binutils build-essential cpp cpp-5 dpkg-dev fakeroot g++ g++-5 gcc gcc-5 + libalgorithm-diff-perl libalgorithm-diff-xs-perl libalgorithm-merge-perl +``` +以上是我们安装**build-essential**软件包的指令。 + +**runcmd** 执行的结果如何? + +``` +root@mydev:~# ls -l /tmp/ +total 1 +-rw-r--r-- 1 root root 0 Jan 3 15:23 simos_was_here +root@mydev:~# +``` + +可见它已经生效了! 
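(译注:cloud-config 类型的 user-data 必须以 `#cloud-config` 开头,cloud-init 正是靠这一行来识别数据格式;缺了它,后面的指令会被忽略。下面这段 Python 示意代码(与正文无关,函数名为杜撰)演示了一个粗略的检查:)

```python
user_data = """#cloud-config
package_upgrade: true
packages:
  - build-essential
locale: es_ES.UTF-8
timezone: Europe/Madrid
runcmd:
  - [touch, /tmp/simos_was_here]
"""

def looks_like_cloud_config(text):
    """粗略检查 user-data 的第一行是否为 #cloud-config"""
    lines = text.splitlines()
    return bool(lines) and lines[0].strip() == "#cloud-config"

print(looks_like_cloud_config(user_data))  # True
```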
+ +### 结论 + +当我们启动LXD容器的时候,我们常常需要默认启用一些配置,并且希望能够避免重复工作。通常解决这个问题的方法是创建LXD profile,然后把需要的配置添加进去。最后,当我们启动新的容器时,只需要应用该LXD profile即可。 + +-------------------------------------------------------------------------------- + +via: https://blog.simos.info/how-to-preconfigure-lxd-containers-with-cloud-init/ + +作者:[Simos Xenitellis][a] +译者:[kaneg](https://github.com/kaneg) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://blog.simos.info/author/simos/ +[1]:http://cloudinit.readthedocs.io/en/latest/index.html +[2]:https://github.com/lxc/lxd/blob/master/doc/cloud-init.md +[3]:http://cloudinit.readthedocs.io/en/latest/topics/modules.html#runcmd +[4]:https://paste.ubuntu.com/26313399/ \ No newline at end of file From f9ea9b89d81d8e150e35cb57fb8dc02f06d87761 Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 23 Jan 2018 09:12:39 +0800 Subject: [PATCH 184/226] translated --- ... Create A Video From PDF Files In Linux.md | 91 ------------------- ... Create A Video From PDF Files In Linux.md | 89 ++++++++++++++++++ 2 files changed, 89 insertions(+), 91 deletions(-) delete mode 100644 sources/tech/20171004 How To Create A Video From PDF Files In Linux.md create mode 100644 translated/tech/20171004 How To Create A Video From PDF Files In Linux.md diff --git a/sources/tech/20171004 How To Create A Video From PDF Files In Linux.md b/sources/tech/20171004 How To Create A Video From PDF Files In Linux.md deleted file mode 100644 index 5ecf5da24e..0000000000 --- a/sources/tech/20171004 How To Create A Video From PDF Files In Linux.md +++ /dev/null @@ -1,91 +0,0 @@ -translating---geekpi - -How To Create A Video From PDF Files In Linux -====== -![](https://www.ostechnix.com/wp-content/uploads/2017/10/Video-1-720x340.jpg) - -I have a huge collection of PDF files, mostly Linux tutorials, in my tablet PC. Sometimes I feel too lazy to read them from the tablet. 
I thought It would be better If I can be able to create a video from PDF files and watch it in a big screen devices like a TV or a Computer. Though I have a little working experience with [**FFMpeg**][1], I am not aware of how to create a movie file using it. After a bit of Google searches, I came up with a good solution. For those who wanted to make a movie file from a set of PDF files, read on. It is not that difficult. - -### Create A Video From PDF Files In Linux - -For this purpose, you need to install **" FFMpeg"** and **" ImageMagick"** software in your system. - -To install FFMpeg, refer the following link. - -Imagemagick is available in the official repositories of most Linux distributions. - -On **Arch Linux** and derivatives such as **Antergos** , **Manjaro Linux** , run the following command to install it. -``` -sudo pacman -S imagemagick -``` - -**Debian, Ubuntu, Linux Mint:** -``` -sudo apt-get install imagemagick -``` - -**Fedora:** -``` -sudo dnf install imagemagick -``` - -**RHEL, CentOS, Scientific Linux:** -``` -sudo yum install imagemagick -``` - -**SUSE, openSUSE:** -``` -sudo zypper install imagemagick -``` - -After installing ffmpeg and imagemagick, convert your PDF file image format such as PNG or JPG like below. -``` -convert -density 400 input.pdf picture.png -``` - -Here, **-density 400** specifies the horizontal resolution of the output image file(s). - -The above command will convert all pages in the given PDF file to PNG format. Each page in the PDF file will be converted into a PNG file and saved in the current directory with file name **picture-1.png** , **picture-2.png** … and so on. It will take a while depending on the number of pages in the input PDF file. - -Once all pages in the PDF converted into PNG format, run the following command to create a video file from the PNG files. 
-``` -ffmpeg -r 1/10 -i picture-%01d.png -c:v libx264 -r 30 -pix_fmt yuv420p video.mp4 -``` - -Here, - - * **-r 1/10** : Display each image for 10 seconds. - * **-i picture-%01d.png** : Reads all pictures that starts with name **" picture-"**, following with 1 digit (%01d) and ending with **.png**. If the images name comes with 2 digits (I.e picture-10.png, picture11.png etc), use (%02d) in the above command. - * **-c:v libx264** : Output video codec (i.e h264). - * **-r 30** : framerate of output video - * **-pix_fmt yuv420p** : Output video resolution - * **video.mp4** : Output video file with .mp4 format. - - - -Hurrah! The movie file is ready!! You can play it on any devices that supports .mp4 format. Next, I need to find a way to insert a cool music to my video. I hope it won't be difficult either. - -If you wanted it in higher pixel resolution, you don't have to start all over again. Just convert the output video file to any other higher/lower resolution of your choice, say 720p, as shown below. -``` -ffmpeg -i video.mp4 -vf scale=-1:720 video_720p.mp4 -``` - -Please note that creating a video using ffmpeg requires a good configuration PC. While converting videos, ffmpeg will consume most of your system resources. I recommend to do this in high-end system. - -And, that's all for now folks. Hope you find this useful. More good stuffs to come. Stay tuned! 
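As a side note, the %01d-versus-%02d choice above simply tracks the number of digits in the largest page index. A small Python sketch (illustrative only — the helper name is invented) captures that rule:

```python
def ffmpeg_pattern(page_count, prefix="picture-", suffix=".png"):
    """Pick a %0Nd sequence pattern whose width matches the page count."""
    width = len(str(page_count))  # 9 pages -> 1 digit, 15 pages -> 2 digits
    return f"{prefix}%0{width}d{suffix}"

print(ffmpeg_pattern(9))    # picture-%01d.png
print(ffmpeg_pattern(15))   # picture-%02d.png
```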
- - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/create-video-pdf-files-linux/ - -作者:[SK][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ostechnix.com/author/sk/ -[1]:https://www.ostechnix.com/20-ffmpeg-commands-beginners/ diff --git a/translated/tech/20171004 How To Create A Video From PDF Files In Linux.md b/translated/tech/20171004 How To Create A Video From PDF Files In Linux.md new file mode 100644 index 0000000000..4242c993af --- /dev/null +++ b/translated/tech/20171004 How To Create A Video From PDF Files In Linux.md @@ -0,0 +1,89 @@ +如何在 Linux 中从 PDF 创建视频 +====== +![](https://www.ostechnix.com/wp-content/uploads/2017/10/Video-1-720x340.jpg) + +我在我的平板电脑中收集了大量的 PDF 文件,其中主要是 Linux 教程。有时候我懒得在平板电脑上看。我认为如果我能够从 PDF 创建视频,并在大屏幕设备(如电视机或计算机)中观看会更好。虽然我对 [**FFMpeg**][1] 有一些经验,但我不知道如何使用它来创建视频。经过一番 Google 搜索,我想出了一个很好的解决方案。对于那些想从一组 PDF 文件制作视频文件的人,请继续阅读。这并不困难。 + +### 在 Linux 中从 PDF 创建视频 + +为此,你需要在系统中安装 **“FFMpeg”** 和 **“ImageMagick”** 。 + +要安装 FFMpeg,请参考以下链接。 + +Imagemagick 可在大多数 Linux 发行版的官方仓库中找到。 + +在 **Arch Linux** 以及 **Antergos** 、**Manjaro Linux** 等衍生产品上,运行以下命令进行安装。 +``` +sudo pacman -S imagemagick +``` + +**Debian、Ubuntu、Linux Mint:** +``` +sudo apt-get install imagemagick +``` + +**Fedora:** +``` +sudo dnf install imagemagick +``` + +**RHEL、CentOS、Scientific Linux:** +``` +sudo yum install imagemagick +``` + +**SUSE、 openSUSE:** +``` +sudo zypper install imagemagick +``` + +在安装 ffmpeg 和 imagemagick 之后,将你的 PDF 文件转换成图像格式,如 PNG 或 JPG,如下所示。 +``` +convert -density 400 input.pdf picture.png +``` + +这里,**-density 400** 指定输出图像的水平分辨率。 + +上面的命令会将指定 PDF 的所有页面转换为 PNG 格式。PDF 中的每个页面都将被转换成 PNG 文件,并保存在当前目录中,文件名为: **picture-1.png**、 **picture-2.png** 等。根据选择的 PDF 的页数,这将需要一些时间。 + +将 PDF 中的所有页面转换为 PNG 格式后,运行以下命令以从 PNG 创建视频文件。 +``` +ffmpeg -r 1/10 -i picture-%01d.png -c:v 
libx264 -r 30 -pix_fmt yuv420p video.mp4 +``` + +这里: + + * **-r 1/10** :每张图像显示 10 秒。 + * **-i picture-%01d.png** :读取以 **“picture-”** 开头,接着是一位数字(%01d),最后以 **.png** 结尾的所有图片。如果图片名称带有2位数字(也就是 picture-10.png、picture11.png 等),在上面的命令中使用(%02d)。 + * **-c:v libx264**:输出的视频编码器(即 h264)。 + * **-r 30** :输出视频的帧率 + * **-pix_fmt yuv420p**:输出的视频分辨率 + * **video.mp4**:以 .mp4 格式输出视频文件。 + + + +好了,视频文件完成了!你可以在任何支持 .mp4 格式的设备上播放它。接下来,我需要找到一种方法来为我的视频插入一个很酷的音乐。我希望这也不难。 + +如果你想要更高的分辨率,你不必重新开始。只要将输出的视频文件转换为你选择的任何其他更高/更低的分辨率,比如说 720p,如下所示。 +``` +ffmpeg -i video.mp4 -vf scale=-1:720 video_720p.mp4 +``` + +请注意,使用 ffmpeg 创建视频需要一台配置好的 PC。在转换视频时,ffmpeg 会消耗大量系统资源。我建议在高端系统中这样做。 + +就是这些了。希望你觉得这个有帮助。还会有更好的东西。敬请关注! + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/create-video-pdf-files-linux/ + +作者:[SK][a] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com/author/sk/ +[1]:https://www.ostechnix.com/20-ffmpeg-commands-beginners/ From eec25b0563b2569d24d1b59ffa26e144ad79b2d4 Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 23 Jan 2018 09:18:07 +0800 Subject: [PATCH 185/226] translating --- sources/tech/20180115 How To Boot Into Linux Command Line.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180115 How To Boot Into Linux Command Line.md b/sources/tech/20180115 How To Boot Into Linux Command Line.md index 7a63f47f90..00649cc678 100644 --- a/sources/tech/20180115 How To Boot Into Linux Command Line.md +++ b/sources/tech/20180115 How To Boot Into Linux Command Line.md @@ -1,3 +1,5 @@ +translating---geekpi + How To Boot Into Linux Command Line ====== ![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/how-to-boot-into-linux-command-line_orig.jpg) From fe4b3a1c4782930af51bfb0873fe55dc89cc6eb2 Mon Sep 17 00:00:00 2001 From: qhwdw Date: Tue, 23 Jan 2018 
09:32:41 +0800 Subject: [PATCH 186/226] Translating by qhwdw --- ...Epilogues Canaries and Buffer Overflows.md | 101 ++++++++++++++++++ 1 file changed, 101 insertions(+) create mode 100644 sources/tech/20140519 Epilogues Canaries and Buffer Overflows.md diff --git a/sources/tech/20140519 Epilogues Canaries and Buffer Overflows.md b/sources/tech/20140519 Epilogues Canaries and Buffer Overflows.md new file mode 100644 index 0000000000..5f3ddca532 --- /dev/null +++ b/sources/tech/20140519 Epilogues Canaries and Buffer Overflows.md @@ -0,0 +1,101 @@ +Translating by qhwdw [Epilogues, Canaries, and Buffer Overflows][1] +============================================================ + +Last week we looked at [how the stack works][2] and how stack frames are built during function prologues. Now it's time to look at the inverse process as stack frames are destroyed in function epilogues. Let's bring back our friend add.c: + +Simple Add Program - add.c + +``` +int add(int a, int b) +{ + int result = a + b; + return result; +} + +int main(int argc) +{ + int answer; + answer = add(40, 2); +} +``` + + +We're executing line 4, right after the assignment of a + b into result. This is what happens: + +![](https://manybutfinite.com/img/stack/returnFromAdd.png) + +The first instruction is redundant and a little silly because we know eax is already equal to result, but this is what you get with optimization turned off. The leave instruction then runs, doing two tasks for the price of one: it resets esp to point to the start of the current stack frame, and then restores the saved ebp value. These two operations are logically distinct and thus are broken up in the diagram, but they happen atomically if you're tracing with a debugger. + +After leave runs the previous stack frame is restored. The only vestige of the call to add is the return address on top of the stack. It contains the address of the instruction in main that must run after add is done. 
The ret instruction takes care of it: it pops the return address into the eip register, which points to the next instruction to be executed. The program has now returned to main, which resumes: 
+ 
+![](https://manybutfinite.com/img/stack/returnFromMain.png) 
+ 
+main copies the return value from add into local variable answer and then runs its own epilogue, which is identical to any other. Again the only peculiarity in main is that the saved ebp is null, since it is the first stack frame in our code. In the last step, execution has been returned to the C runtime (libc), which will exit to the operating system. Here's a diagram with the [full return sequence][3] for those who need it. 
+ 
+You now have an excellent grasp of how the stack operates, so let's have some fun and look at one of the most infamous hacks of all time: exploiting the stack buffer overflow. Here is a vulnerable program: 
+ 
+Vulnerable Program - buffer.c 
+ 
+``` 
+void doRead() 
+{ 
+ char buffer[28]; 
+ gets(buffer); 
+} 
+ 
+int main(int argc) 
+{ 
+ doRead(); 
+} 
+``` 
+ 
+The code above uses [gets][4] to read from standard input. gets keeps reading until it encounters a newline or end of file. Here's what the stack looks like after a string has been read: 
+ 
+![](https://manybutfinite.com/img/stack/bufferCopy.png) 
+ 
+The problem here is that gets is unaware of buffer's size: it will blithely keep reading input and stuffing data into the stack beyond buffer, obliterating the saved ebp value, return address, and whatever else is below. To exploit this behavior, attackers craft a precise payload and feed it into the program. This is what the stack looks like during an attack, after the call to gets: 
+ 
+![](https://manybutfinite.com/img/stack/bufferOverflowExploit.png) 
+ 
+The basic idea is to provide malicious assembly code to be executed and overwrite the return address on the stack to point to that code. It is a bit like a virus invading a cell, subverting it, and introducing some RNA to further its goals. 
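As a rough sketch of that payload layout (Python; every concrete value here — the nop count, the placeholder shellcode bytes, the guessed address — is invented for illustration):

```python
import struct

NOP = b"\x90"                 # the one-byte x86 no-op instruction
shellcode = b"\xcc" * 24      # stand-in bytes for real exec-/bin/sh instructions
guessed_return = 0xBFFFF7A8   # a made-up guess at where the payload sits on the stack

payload = NOP * 64            # nop sled: landing anywhere in it still reaches the code
payload += shellcode
payload += NOP * (-len(payload) % 4)              # pad to a 4-byte boundary
payload += struct.pack("<I", guessed_return) * 8  # little-endian address, repeated

print(len(payload))  # 120
```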
+ +And like a virus, the exploit's payload has many notable features. It starts with several nop instructions to increase the odds of successful exploitation. This is because the return address is absolute and must be guessed, since attackers don't know exactly where in the stack their code will be stored. But as long as they land on a nop, the exploit works: the processor will execute the nops until it hits the instructions that do work. + +The exec /bin/sh symbolizes raw assembly instructions that execute a shell (imagine for example that the vulnerability is in a networked program, so the exploit might provide shell access to the system). The idea of feeding raw assembly to a program expecting a command or user input is shocking at first, but that's part of what makes security research so fun and mind-expanding. To give you an idea of how weird things get, sometimes the vulnerable program calls tolower or toupper on its inputs, forcing attackers to write assembly instructions whose bytes do not fall into the range of upper- or lower-case ascii letters. + +Finally, attackers repeat the guessed return address several times, again to tip the odds ever in their favor. By starting on a 4-byte boundary and providing multiple repeats, they are more likely to overwrite the original return address on the stack. + +Thankfully, modern operating systems have a host of [protections against buffer overflows][5], including non-executable stacks and stack canaries. The "canary" name comes from the [canary in a coal mine][6] expression, an addition to computer science's rich vocabulary. In the words of Steve McConnell: + +> Computer science has some of the most colorful language of any field. In what other field can you walk into a sterile room, carefully controlled at 68°F, and find viruses, Trojan horses, worms, bugs, bombs, crashes, flames, twisted sex changers, and fatal errors? 
Steve McConnellCode Complete 2 + +At any rate, here's what a stack canary looks like: + +![](https://manybutfinite.com/img/stack/bufferCanary.png) + +Canaries are implemented by the compiler. For example, GCC's [stack-protector][7] option causes canaries to be used in any function that is potentially vulnerable. The function prologue loads a magic value into the canary location, and the epilogue makes sure the value is intact. If it's not, a buffer overflow (or bug) likely happened and the program is aborted via [__stack_chk_fail][8]. Due to their strategic location on the stack, canaries make the exploitation of stack buffer overflows much harder. + +This finishes our journey within the depths of the stack. We don't want to delve too greedily and too deep. Next week we'll go up a notch in abstraction to take a good look at recursion, tail calls and other tidbits, probably using Google's V8\. To end this epilogue and prologue talk, I'll close with a cherished quote inscribed on a monument in the American National Archives: + +![](https://manybutfinite.com/img/stack/past-is-prologue.jpg) + +-------------------------------------------------------------------------------- + +via:https://manybutfinite.com/post/epilogues-canaries-buffer-overflows/ + +作者:[Gustavo Duarte][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://duartes.org/gustavo/blog/about/ +[1]:https://manybutfinite.com/post/epilogues-canaries-buffer-overflows/ +[2]:https://manybutfinite.com/post/journey-to-the-stack +[3]:https://manybutfinite.com/img/stack/returnSequence.png +[4]:http://linux.die.net/man/3/gets +[5]:http://paulmakowski.wordpress.com/2011/01/25/smashing-the-stack-in-2011/ +[6]:http://en.wiktionary.org/wiki/canary_in_a_coal_mine +[7]:http://gcc.gnu.org/onlinedocs/gcc-4.2.3/gcc/Optimize-Options.html 
+[8]:http://refspecs.linux-foundation.org/LSB_4.0.0/LSB-Core-generic/LSB-Core-generic/libc---stack-chk-fail-1.html 
\ No newline at end of file 
From c897acf98b98f94e6c73298735f427258480ba0c Mon Sep 17 00:00:00 2001 
From: darksun 
Date: Tue, 23 Jan 2018 13:03:17 +0800 
Subject: [PATCH 187/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20An=20overview=20o?= =?UTF-8?q?f=20the=20Perl=205=20engine?= 
MIME-Version: 1.0 
Content-Type: text/plain; charset=UTF-8 
Content-Transfer-Encoding: 8bit 
--- 
...180122 An overview of the Perl 5 engine.md | 130 ++++++++++++++++++ 
1 file changed, 130 insertions(+) 
create mode 100644 sources/talk/20180122 An overview of the Perl 5 engine.md 

diff --git a/sources/talk/20180122 An overview of the Perl 5 engine.md b/sources/talk/20180122 An overview of the Perl 5 engine.md 
new file mode 100644 
index 0000000000..a26266a39a 
--- /dev/null 
+++ b/sources/talk/20180122 An overview of the Perl 5 engine.md 
@@ -0,0 +1,130 @@ 
+An overview of the Perl 5 engine 
+====== 
+ 
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/camel-perl-lead.png?itok=VyEv-C5o) 
+ 
+As I described in "[My DeLorean runs Perl][1]," switching to Perl has vastly improved my development speed and possibilities. Here I'll dive deeper into the design of Perl 5 to discuss aspects important to systems programming. 
+ 
+Some years ago, I wrote "OpenGL bindings for Bash" as sort of a joke. The implementation was simply an X11 program written in C that read OpenGL calls on [stdin][2] (yes, as text) and emitted user input on [stdout][3]. Then I had a little file that would declare all the OpenGL functions as Bash functions, which echoed the name of the function into a pipe, starting the GL interpreter process if it wasn't already running. The point of the exercise was to show that OpenGL (the 1.4 API, not the newer shader stuff) could render a lot of graphics with just a few calls per frame by using GL display lists.
The OpenGL library did all the heavy lifting, and Bash just printed a few dozen lines of text per frame. + +In the end though, Bash is a really horrible [glue language][4], both from high overhead and limited available operations and syntax. [Perl][5], on the other hand, is a great glue language. + +### Syntax aside... + +If you're not a regular Perl user, the first thing you probably notice is the syntax. + +Perl 5 is built on a long legacy of awkward syntax, but more recent versions have removed the need for much of the punctuation. The remaining warts can mostly be avoided by choosing modules that give you domain-specific "syntactic sugar," which even alter the Perl syntax as it is parsed. This is in stark contrast to most other languages, where you are stuck with the syntax you're given, and infinitely more flexible than C's macros. Combined with Perl's powerful sparse-syntax operators, like `map`, `grep`, `sort`, and similar user-defined operators, I can almost always write complex algorithms more legibly and with less typing using Perl than with JavaScript, PHP, or any compiled language. + +So, because syntax is what you make of it, I think the underlying machine is the most important aspect of the language to consider. Perl 5 has a very capable engine, and it differs in interesting and useful ways from other languages. + +### A layer above C + +I don't recommend anyone start working with Perl by looking at the interpreter's internal API, but a quick description is useful. One of the main problems we deal with in the world of C is acquiring and releasing memory while also supporting control flow through a chain of function calls. C has a rough ability to throw exceptions using `longjmp`, but it doesn't do any cleanup for you, so it is almost useless without a framework to manage resources. The Perl interpreter is exactly this sort of framework. 
+ +Perl provides a stack of variables independent from C's stack of function calls on which you can mark the logical boundaries of a Perl scope. There are also API calls you can use to allocate memory, Perl variables, etc., and tell Perl to automatically free them at the end of the Perl scope. Now you can make whatever C calls you like, "die" out of the middle of them, and let Perl clean everything up for you. + +Although this is a really unconventional perspective, I bring it up to emphasize that Perl sits on top of C and allows you to use as much or as little interpreted overhead as you like. Perl's internal API is certainly not as nice as C++ for general programming, but C++ doesn't give you an interpreted language on top of your work when you're done. I've lost track of the number of times that I wanted reflective capability to inspect or alter my C++ objects, and following that rabbit hole has derailed more than one of my personal projects. + +### Lisp-like functions + +Perl functions take a list of arguments. The downside is that you have to do argument count and type checking at runtime. The upside is you don't end up doing that much, because you can just let the interpreter's own runtime check catch those mistakes. You can also create the effect of C++'s overloaded functions by inspecting the arguments you were given and behaving accordingly. + +Because arguments are a list, and return values are a list, this encourages [Lisp-style programming][6], where you use a series of functions to filter a list of data elements. This "piping" or "streaming" effect can result in some really complicated loops turning into a single line of code. + +Every function is available to the language as a `coderef` that can be passed around in variables, including anonymous closure functions. Also, I find `sub {}` more convenient to type than JavaScript's `function(){}` or C++11's `[&](){}`. 
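+To make the list-in, list-out style concrete, here is a small sketch of my own (not from the article): a `grep`/`map` pipeline plus an anonymous `sub {}` stored in a scalar as a coderef. It assumes only the stock `perl` binary found on virtually every Linux system, with no modules:
+``` +# Illustrative only: chain grep (filter) and map (transform) over a list, +# then hand the result to an anonymous sub held in an ordinary scalar. +result=$(perl -e ' +    my @nums = (1 .. 10); +    my @squares = map { $_ * $_ } grep { $_ % 2 == 0 } @nums; +    my $sum = sub { my $t = 0; $t += $_ for @_; return $t; }; +    print $sum->(@squares); +') +echo "$result"    # 4 + 16 + 36 + 64 + 100 = 220 +``` +The same `$sum` coderef could just as well be passed to another function or stored in a data structure, which is what makes the Lisp comparison apt.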
+ +### Generic data structures + +The variables in Perl are either "scalars," references, arrays, or "hashes" ... or some other stuff that I'll skip. + +Scalars act as a string/integer/float hybrid and are automatically typecast as needed for the purpose you are using them. In other words, instead of determining the operation by the type of variable, the type of operator determines how the variable should be interpreted. This is less efficient than if the language knows the type in advance, but not as inefficient as, for example, shell scripting because Perl caches the type conversions. + +Perl scalars may contain null characters, so they are fully usable as buffers for binary data. The scalars are mutable and copied by value, but optimized with copy-on-write, and substring operations are also optimized. Strings support unicode characters but are stored efficiently as normal bytes until you append a codepoint above 255. + +References (which are considered scalars as well) hold a reference to any other variable; `hashrefs` and `arrayrefs` are most common, along with the `coderefs` described above. + +Arrays are simply a dynamic-length array of scalars (or references). + +Hashes (i.e., dictionaries, maps, or whatever you want to call them) are a performance-tuned hash table implementation where every key is a string and every value is a scalar (or reference). Hashes are used in Perl in the same way structs are used in C. Clearly a hash is less efficient than a struct, but it keeps things generic so tasks that require dozens of lines of code in other languages can become one-liners in Perl. For instance, you can dump the contents of a hash into a list of (key, value) pairs or reconstruct a hash from such a list as a natural part of the Perl syntax. + +### Object model + +Any reference can be "blessed" to make it into an object, granting it a multiple-inheritance method-dispatch table. 
The blessing is simply the name of a package (namespace), and any function in that namespace becomes an available method of the object. The inheritance tree is defined by variables in the package. As a result, you can make modifications to classes or class hierarchies or create new classes on the fly with simple data edits, rather than special keywords or built-in reflection APIs. By combining this with Perl's `local` keyword (where changes to a global are automatically undone at the end of the current scope), you can even make temporary changes to class methods or inheritance! + +Perl objects only have methods, so attributes are accessed via accessors like the canonical Java `get_` and `set_` methods. Perl authors usually combine them into a single method of just the attribute name and differentiate `get` from `set` by whether a parameter was given. + +You can also "re-bless" objects from one class to another, which enables interesting tricks not available in most other languages. Consider state machines, where each method would normally start by checking the object's current state; you can avoid that in Perl by swapping the method table to one that matches the object's state. + +### Visibility + +While other languages spend a bunch of effort on access rules between classes, Perl adopted a simple "if the name begins with underscore, don't touch it unless it's yours" convention. Although I can see how this could be a problem with an undisciplined software team, it has worked great in my experience. The only thing C++'s `private` keyword ever did for me was impair my debugging efforts, yet it felt dirty to make everything `public`. Perl removes my guilt. + +Likewise, an object provides methods, but you can ignore them and just access the underlying Perl data structure. This is another huge boost for debugging. 
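+As a rough sketch of the object model and visibility conventions described above (the `Point` package and its accessor are invented for illustration), the snippet below blesses a hashref into a package, uses a single combined get/set accessor, and then reads the underlying hash directly, the "just data" escape hatch that helps so much when debugging:
+``` +# Hypothetical Point class: bless a hashref, combine get/set in one method, +# and note that nothing stops you from reading the raw hash directly. +out=$(perl -e ' +    package Point; +    sub new { my ($class, %args) = @_; return bless { %args }, $class; } +    sub x   { my $self = shift; $self->{x} = shift if @_; return $self->{x}; } + +    package main; +    my $p = Point->new(x => 3, y => 4); +    $p->x(7);                     # set through the accessor +    print $p->x, ",", $p->{y};    # get through the accessor, then raw access +') +echo "$out"    # 7,4 +```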
+ +### Garbage collection via reference counting + +Although [reference counting][7] is a rather leak-prone form of memory management (it doesn't detect cycles), it has a few upsides. It gives you deterministic destruction of your objects, like in C++, and never interrupts your program with a surprise garbage collection. It strongly encourages module authors to use a tree-of-objects pattern, which I much prefer vs. the tangle-of-objects pattern often seen in Java and JavaScript. (I've found trees to be much more easily tested with unit tests.) But, if you need a tangle of objects, Perl does offer "weak" references, which won't be considered when deciding if it's time to garbage-collect something. + +On the whole, the only time this ever bites me is when making heavy use of closures for event-driven callbacks. It's easy to have an object hold a reference to an event handle holding a reference to a callback that references the containing object. Again, weak references solve this, but it's an extra thing to be aware of that JavaScript or Python don't make you worry about. + +### Parallelism + +The Perl interpreter is a single thread, although modules written in C can use threads of their own internally, and Perl often includes support for multiple interpreters within the same process. + +Although this is a large limitation, knowing that a data structure will only ever be touched by one thread is nice, and it means you don't need locks when accessing them from C code. Even in Java, where locking is built into the syntax in convenient ways, it can be a real time sink to reason through all the ways that threads can interact (and especially annoying that they force you to deal with that in every GUI program you write). + +There are several event libraries available to assist in writing event-driven callback programs in the style of Node.js to avoid the need for threads. 
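+The closure-cycle caveat from the garbage-collection section can be sketched in a few lines (the `Widget` class is made up for illustration; `Scalar::Util` ships with core Perl). A callback that captured `$w` itself would form a cycle and leak; capturing a weakened copy lets `DESTROY` run when the object leaves scope:
+``` +# Sketch: store a callback on an object without creating a strong cycle. +out=$(perl -e ' +    use Scalar::Util qw(weaken); +    my $destroyed = 0; +    { package Widget; +      sub new     { return bless {}, shift; } +      sub DESTROY { $destroyed = 1; } } + +    { +        my $w = Widget->new; +        my $weak = $w; +        weaken($weak);                          # $weak no longer counts +        $w->{on_event} = sub { return $weak };  # closure holds the weak copy +    }                                           # $w leaves scope here +    print $destroyed; +') +echo "$out"    # 1 -- the object was reclaimed despite the stored callback +``` +Capturing `$w` directly inside the callback would print `0` instead: the cycle would keep the reference count above zero forever.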
+ +### Access to C libraries + +Aside from directly writing your own C extensions via Perl's [XS][8] system, there are already lots of common C libraries wrapped for you and available on Perl's [CPAN][9] repository. There is also a great module, [Inline::C][10], that takes most of the pain out of bridging between Perl and C, to the point where you just paste C code into the middle of a Perl module. (It compiles the first time you run it and caches the .so shared object file for subsequent runs.) You still need to learn some of the Perl interpreter API if you want to manipulate the Perl stack or pack/unpack Perl's variables other than your C function arguments and return value. + +### Memory usage + +Perl can use a surprising amount of memory, especially if you make use of heavyweight libraries and create thousands of objects, but with the size of today's systems it usually doesn't matter. It also isn't much worse than other interpreted systems. My personal preference is to only use lightweight libraries, which also generally improve performance. + +### Startup speed + +The Perl interpreter starts in under five milliseconds on modern hardware. If you take care to use only lightweight modules, you can use Perl for anything you might have used Bash for, like `hotplug` scripts. + +### Regex implementation + +Perl provides the mother of all regex implementations... but you probably already knew that. Regular expressions are built into Perl's syntax rather than being an object-oriented or function-based API; this helps encourage their use for any text processing you might need to do. + +### Ubiquity and stability + +Perl 5 is installed on just about every modern Unix system, and the CPAN module collection is extensive and easy to install. There's a production-quality module for almost any task, with solid test coverage and good documentation. + +Perl 5 has nearly complete backward compatibility across two decades of releases. 
The community has embraced this as well, so most of CPAN is pretty stable. There's even a crew of testers who run unit tests on all of CPAN on a regular basis to help detect breakage. + +The toolchain is also pretty solid. The documentation syntax (POD) is a little more verbose than I'd like, but it yields much more useful results than [doxygen][11] or [Javadoc][12]. You can run `perldoc FILENAME` to instantly see the documentation of the module you're writing. `perldoc Module::Name` shows you the specific documentation for the version of the module that you would load from your `include` path and can likewise show you the source code of that module without needing to browse deep into your filesystem. + +The testcase system (the `prove` command and Test Anything Protocol, or TAP) isn't specific to Perl and is extremely simple to work with (as opposed to unit testing based around language-specific object-oriented structure, or XML). Modules like `Test::More` make writing the test cases so easy that you can write a test suite in about the same time it would take to test your module once by hand. The testing effort barrier is so low that I've started using TAP and the POD documentation style for my non-Perl projects as well. + +### In summary + +Perl 5 still has a lot to offer despite the large number of newer languages competing with it. The frontend syntax hasn't stopped evolving, and you can improve it however you like with custom modules. The Perl 5 engine is capable of handling most programming problems you can throw at it, and it is even suitable for low-level work as a "glue" layer on top of C libraries. Once you get really familiar with it, it can even be an environment for developing C code. 
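+Since TAP is plain text, the low testing barrier mentioned above is easy to demonstrate; this sketch is mine, not the author's (a real suite would use `Test::More` and run under `prove`), and emits a valid TAP stream with nothing but `print`:
+``` +# Hand-rolled TAP: a plan line followed by one ok/not ok line per test. +tap=$(perl -e ' +    print "1..2\n"; +    print 1 + 1 == 2         ? "ok 1 - addition\n"     : "not ok 1 - addition\n"; +    print lc("TAP") eq "tap" ? "ok 2 - case folding\n" : "not ok 2 - case folding\n"; +') +echo "$tap" +``` +Any TAP consumer, `prove` included, can summarize a file of such lines, which is why the protocol travels so well beyond Perl.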
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/1/why-i-love-perl-5 + +作者:[Michael Conrad][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/nerdvana +[1]:https://opensource.com/article/17/12/my-delorean-runs-perl +[2]:https://en.wikipedia.org/wiki/Standard_streams#Standard_input_(stdin) +[3]:https://en.wikipedia.org/wiki/Standard_streams#Standard_output_(stdout) +[4]:https://www.techopedia.com/definition/19608/glue-language +[5]:https://www.perl.org/ +[6]:https://en.wikipedia.org/wiki/Lisp_(programming_language) +[7]:https://en.wikipedia.org/wiki/Reference_counting +[8]:https://en.wikipedia.org/wiki/XS_(Perl) +[9]:https://www.cpan.org/ +[10]:https://metacpan.org/pod/distribution/Inline-C/lib/Inline/C.pod +[11]:http://www.stack.nl/~dimitri/doxygen/ +[12]:http://www.oracle.com/technetwork/java/javase/documentation/index-jsp-135444.html From 08400a7254630ef5b41fcee45f0b11d2eb7758ab Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 23 Jan 2018 13:10:02 +0800 Subject: [PATCH 188/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20Create?= =?UTF-8?q?=20a=20Docker=20Image?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../20180122 How to Create a Docker Image.md | 197 ++++++++++++++++++ 1 file changed, 197 insertions(+) create mode 100644 sources/tech/20180122 How to Create a Docker Image.md diff --git a/sources/tech/20180122 How to Create a Docker Image.md b/sources/tech/20180122 How to Create a Docker Image.md new file mode 100644 index 0000000000..4894085a8f --- /dev/null +++ b/sources/tech/20180122 How to Create a Docker Image.md @@ -0,0 +1,197 @@ +How to Create a Docker Image +====== + 
+![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/container-image_0.jpg?itok=G_Gz80R9) + +In the previous [article][1], we learned how to get started with Docker on Linux, macOS, and Windows. In this article, we will get a basic understanding of creating Docker images. There are prebuilt images available on DockerHub that you can use for your own project, and you can publish your own image there. + +We are going to use prebuilt images to get the base Linux subsystem, as it's a lot of work to build one from scratch. You can get Alpine (the official distro used by Docker Editions), Ubuntu, BusyBox, or scratch. In this example, I will use Ubuntu. + +Before we start building our images, let's "containerize" them! By this I just mean creating directories for all of your Docker images so that you can maintain different projects and stages isolated from each other. +``` +$ mkdir dockerprojects + +$ cd dockerprojects + +``` + +Now create a Dockerfile inside the dockerprojects directory using your favorite text editor; I prefer nano, which is also easy for new users. +``` +$ nano Dockerfile + +``` + +And add this line: +``` +FROM ubuntu + +``` + +![m7_f7No0pmZr2iQmEOH5_ID6MDG2oEnODpQZkUL7][2] + +Save it with Ctrl+X, then Y. + +Now create your new image and provide it with a name (run these commands within the same directory): +``` +$ docker build -t dockp . + +``` + +(Note the dot at the end of the command.) This should build successfully, so you'll see: +``` +Sending build context to Docker daemon 2.048kB + +Step 1/1 : FROM ubuntu + +---> 2a4cca5ac898 + +Successfully built 2a4cca5ac898 + +Successfully tagged dockp:latest + +``` + +It's time to run and test your image: +``` +$ docker run -it ubuntu + +``` + +You should see the root prompt: +``` +root@c06fcd6af0e8:/# + +``` + +This means you are literally running a bare-minimum Ubuntu inside Linux, Windows, or macOS. You can run all native Ubuntu commands and CLI utilities.
+ +![vpZ8ts9oq3uk--z4n6KP3DD3uD_P4EpG7fX06MC3][3] + +Let's check all the Docker images you have in your directory: +``` +$ docker images + + +REPOSITORY TAG IMAGE ID CREATED SIZE + +dockp latest 2a4cca5ac898 1 hour ago 111MB + +ubuntu latest 2a4cca5ac898 1 hour ago 111MB + +hello-world latest f2a91732366c 8 weeks ago 1.85kB + +``` + +You can see all three images: dockp, Ubuntu, and hello-world, which I created a few weeks ago when working on the previous articles of this series. Building a whole LAMP stack can be challenging, so we are going to create a simple Apache server image with a Dockerfile. + +A Dockerfile is basically a set of instructions to install all the needed packages, configure things, and copy files. In this case, it's Apache. + +You may also want to create an account on DockerHub and log into your account before building images, in case you are pulling something from DockerHub. To log into DockerHub from the command line, just run: +``` +$ docker login + +``` + +Enter your username and password and you are logged in. + +Next, create a directory for Apache inside the dockerprojects directory: +``` +$ mkdir apache + +``` + +Create a Dockerfile inside the apache folder: +``` +$ nano Dockerfile + +``` + +And paste these lines: +``` +FROM ubuntu + +MAINTAINER Kimbro Staken version: 0.1 + +RUN apt-get update && apt-get install -y apache2 && apt-get clean && rm -rf /var/lib/apt/lists/* + + +ENV APACHE_RUN_USER www-data + +ENV APACHE_RUN_GROUP www-data + +ENV APACHE_LOG_DIR /var/log/apache2 + + +EXPOSE 80 + + +CMD ["/usr/sbin/apache2", "-D", "FOREGROUND"] + +``` + +Then, build the image: +``` +$ docker build -t apache . + +``` + +(Note the dot after a space at the end.)
+ +It will take some time; then you should see a successful build, like this: +``` +Successfully built e7083fd898c7 + +Successfully tagged ng:latest + +Swapnil:apache swapnil$ + +``` + +Now let's run the server: +``` +$ docker run -d apache + +a189a4db0f7c245dd6c934ef7164f3ddde09e1f3018b5b90350df8be85c8dc98 + +``` + +Eureka! Your container is running. Check all the running containers: +``` +$ docker ps + +CONTAINER ID IMAGE COMMAND CREATED + +a189a4db0f7 apache "/usr/sbin/apache2ctl" 10 seconds ago + +``` + +You can kill the container with the docker kill command: +``` +$ docker kill a189a4db0f7 + +``` + +So, you see that the "image" itself is persistent and stays in your directory, but the containers created from it run and go away. Now you can create as many images as you want, and spin up and tear down as many containers as you need from those images. + +That's how to create an image and run containers. + +To learn more, you can open your web browser and check out the documentation about how to build more complicated Docker images like the whole LAMP stack. Here is a [Dockerfile][4] for you to play with. In the next article, I'll show how to push images to DockerHub. + +Learn more about Linux through the free ["Introduction to Linux"][5] course from The Linux Foundation and edX.
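+One hedged addendum to the walkthrough above: the Dockerfile `EXPOSE`s port 80, but `EXPOSE` only documents the port; you still have to publish it on the host with `-p` before a browser can reach Apache. A sketch, assuming the `apache` image built above exists locally (host port 8080 and the container name `web` are arbitrary choices for this example):
+``` +# EXPOSE 80 documents the port; -p actually publishes it on the host. +if command -v docker >/dev/null 2>&1; then +  docker run -d -p 8080:80 --name web apache || true   # host 8080 -> container 80 +  # curl -s http://localhost:8080                      # should return Apache's default page +  docker rm -f web >/dev/null 2>&1 || true             # clean up the demo container +  ran="with-docker" +else +  ran="dry-run"    # no docker on this machine; the commands above are the point +fi +echo "$ran" +```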
+ +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/learn/intro-to-linux/2018/1/how-create-docker-image + +作者:[SWAPNIL BHARTIYA][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linux.com/users/arnieswap +[1]:https://www.linux.com/blog/learn/intro-to-linux/how-install-docker-ce-your-desktop +[2]:https://lh6.googleusercontent.com/m7_f7No0pmZr2iQmEOH5_ID6MDG2oEnODpQZkUL7q3GYRB9f1-lvMYLE5f3GBpzIk-ev5VlcB0FHYSxn6NNQjxY4jJGqcgdFWaeQ-027qX_g-SVtbCCMybJeD6QIXjzM2ga8M4l4 +[3]:https://lh3.googleusercontent.com/vpZ8ts9oq3uk--z4n6KP3DD3uD_P4EpG7fX06MC3uFvj2-WaI1DfOfec9ZXuN7XUNObQ2SCc4Nbiqp-CM7ozUcQmtuzmOdtUHTF4Jq8YxkC49o2k7y5snZqTXsueITZyaLiHq8bT +[4]:https://github.com/fauria/docker-lamp/blob/master/Dockerfile +[5]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux From 9cd7d9b334d5290f0988315ebc0a3442a87a5ccf Mon Sep 17 00:00:00 2001 From: yyyfor Date: Tue, 23 Jan 2018 14:02:50 +0800 Subject: [PATCH 189/226] Update 20170216 25 Free Books To Learn Linux For Free.md --- sources/tech/20170216 25 Free Books To Learn Linux For Free.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20170216 25 Free Books To Learn Linux For Free.md b/sources/tech/20170216 25 Free Books To Learn Linux For Free.md index e549f50ea3..0540be4d67 100644 --- a/sources/tech/20170216 25 Free Books To Learn Linux For Free.md +++ b/sources/tech/20170216 25 Free Books To Learn Linux For Free.md @@ -1,3 +1,5 @@ +Translating by yyyfor + 25 Free Books To Learn Linux For Free ====== Brief: In this article, I'll share with you the best resource to **learn Linux for free**. This is a collection of websites, online video courses and free eBooks. 
From c4a2f3b90d4e0e9506a73a29013eae24c7191701 Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 23 Jan 2018 14:05:23 +0800 Subject: [PATCH 190/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Ick:=20a=20contin?= =?UTF-8?q?uous=20integration=20system?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...22 Ick- a continuous integration system.md | 75 +++++++++++++++++++ 1 file changed, 75 insertions(+) create mode 100644 sources/talk/20180122 Ick- a continuous integration system.md diff --git a/sources/talk/20180122 Ick- a continuous integration system.md b/sources/talk/20180122 Ick- a continuous integration system.md new file mode 100644 index 0000000000..4620e2c036 --- /dev/null +++ b/sources/talk/20180122 Ick- a continuous integration system.md @@ -0,0 +1,75 @@ +Ick: a continuous integration system +====== +**TL;DR:** Ick is a continuous integration or CI system. See for more information. + +More verbose version follows. + +### First public version released + +The world may not need yet another continuous integration system (CI), but I do. I've been unsatisfied with the ones I've tried or looked at. More importantly, I am interested in a few things that are more powerful than what I've ever even heard of. So I've started writing my own. + +My new personal hobby project is called ick. It is a CI system, which means it can run automated steps for building and testing software. The home page is at , and the [download][1] page has links to the source code and .deb packages and an Ansible playbook for installing it. + +I have now made the first publicly advertised release, dubbed ALPHA-1, version number 0.23. It is of alpha quality, and that means it doesn't have all the intended features and if any of the features it does have work, you should consider yourself lucky. + +### Invitation to contribute + +Ick has so far been my personal project. I am hoping to make it more than that, and invite contributions. 
See the [governance][2] page for the constitution, the [getting started][3] page for tips on how to start contributing, and the [contact][4] page for how to get in touch. + +### Architecture + +Ick has an architecture consisting of several components that communicate over HTTPS using RESTful APIs and JSON for structured data. See the [architecture][5] page for details. + +### Manifesto + +Continuous integration (CI) is a powerful tool for software development. It should not be tedious, fragile, or annoying. It should be quick and simple to set up, and work quietly in the background unless there's a problem in the code being built and tested. + +A CI system should be simple, easy, clear, clean, scalable, fast, comprehensible, transparent, reliable, and boost your productivity to get things done. It should not be a lot of effort to set up, require a lot of hardware just for the CI, need frequent attention for it to keep working, and developers should never have to wonder why something isn't working. + +A CI system should be flexible to suit your build and test needs. It should support multiple types of workers, as far as CPU architecture and operating system version are concerned. + +Also, like all software, CI should be fully and completely free software and your instance should be under your control. + +(Ick is little of this yet, but it will try to become all of it. In the best possible taste.) + +### Dreams of the future + +In the long run, I would like ick to have features like the ones described below. It may take a while to get all of them implemented. + + * A build may be triggered by a variety of events. Time is an obvious event, as is the project's source code repository changing.
More powerfully, any build dependency changing, regardless of whether the dependency comes from another project built by ick, or a package from, say, Debian: ick should keep track of all the packages that get installed into the build environment of a project, and if any of their versions change, it should trigger the project build and tests again. + + * Ick should support building in (or against) any reasonable target, including any Linux distribution, any free operating system, and any non-free operating system that isn't brain-dead. + + * Ick should manage the build environment itself, and be able to do builds that are isolated from the build host or the network. This partially works: one can ask ick to build a container and run a build in the container. The container is implemented using systemd-nspawn. This can be improved upon, however. (If you think Docker is the only way to go, please contribute support for that.) + + * Ick should support any workers that it can control over ssh or a serial port or other such neutral communication channel, without having to install an agent of any kind on them. Ick won't assume that it can have, say, a full Java run time, so that the worker can be, say, a micro controller. + + * Ick should be able to effortlessly handle very large numbers of projects. I'm thinking here that it should be able to keep up with building everything in Debian, whenever a new Debian source package is uploaded. (Obviously whether that is feasible depends on whether there are enough resources to actually build things, but ick itself should not be the bottleneck.) + + * Ick should optionally provision workers as needed. If all workers of a certain type are busy, and ick's been configured to allow using more resources, it should do so. This seems like it would be easy to do with virtual machines, containers, cloud providers, etc. + + * Ick should be flexible in how it can notify interested parties, particularly about failures. 
It should allow an interested party to ask to be notified over IRC, Matrix, Mastodon, Twitter, email, SMS, or even by a phone call and speech synthesiser. "Hello, interested party. It is 04:00 and you wanted to be told when the hello package has been built for RISC-V." + + + + +### Please give feedback + +If you try ick, or even if you've just read this far, please share your thoughts on it. See the [contact][4] page for where to send it. Public feedback is preferred over private, but if you prefer private, that's OK too. + +-------------------------------------------------------------------------------- + +via: https://blog.liw.fi/posts/2018/01/22/ick_a_continuous_integration_system/ + +作者:[Lars Wirzenius][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://blog.liw.fi/ +[1]:http://ick.liw.fi/download/ +[2]:http://ick.liw.fi/governance/ +[3]:http://ick.liw.fi/getting-started/ +[4]:http://ick.liw.fi/contact/ +[5]:http://ick.liw.fi/architecture/ From e0342cf41924973e41352b441d5f9c955997df81 Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 23 Jan 2018 14:11:25 +0800 Subject: [PATCH 191/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20The=20World=20Map?= =?UTF-8?q?=20In=20Your=20Terminal?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...20180120 The World Map In Your Terminal.md | 110 ++++++++++++++++++ 1 file changed, 110 insertions(+) create mode 100644 sources/tech/20180120 The World Map In Your Terminal.md diff --git a/sources/tech/20180120 The World Map In Your Terminal.md b/sources/tech/20180120 The World Map In Your Terminal.md new file mode 100644 index 0000000000..4ce4bd7542 --- /dev/null +++ b/sources/tech/20180120 The World Map In Your Terminal.md @@ -0,0 +1,110 @@ +The World Map In Your Terminal +====== +I just stumbled upon an interesting utility. The World map in the Terminal!
Yes, it is really cool. Say hello to **MapSCII**, a Braille and ASCII world map renderer for your xterm-compatible terminals. It supports GNU/Linux, Mac OS, and Windows. I thought it was just another project hosted on GitHub. But I was wrong! What they have done there is really impressive. We can use the mouse pointer to drag the map and to zoom in and out on any location in the world. The other notable features are: + + * Discover Points of Interest around any given location + * Highly customizable layer styling with [Mapbox Styles][1] support + * Connect to any public or private vector tile server + * Or just use the supplied and optimized [OSM2VectorTiles][2] based one + * Work offline and discover local [VectorTile][3]/[MBTiles][4] + * Compatible with most Linux and OSX terminals + * Highly optimized algorithms for a smooth experience + + + +### Displaying the World Map in your Terminal using MapSCII + +To open the map, just run the following command from your Terminal: +``` +telnet mapscii.me +``` + +Here is the World map from my Terminal. + +[![][5]][6] + +Cool, yeah? + +To switch to Braille view, press **c**. + +[![][5]][7] + +Press **c** again to switch back to the previous view. + +To scroll around the map, use the **up**, **down**, **left**, and **right** arrow keys. To zoom in or out on a location, use the **a** and **z** keys. Also, you can use the scroll wheel of your mouse to zoom in or out. To quit the map, press **q**. + +Like I already said, don't think it is a simple project. Click on any location on the map and press **a** to zoom in. + +Here are some sample screenshots taken as I zoomed in. + +[![][5]][8] + +I was able to zoom in to view the states of my country (India). + +[![][5]][9] + +And the districts in a state (Tamilnadu): + +[![][5]][10] + +Even the [Taluks][11] and the towns in a district: + +[![][5]][12] + +And the place where I completed my schooling: + +[![][5]][13] + +Even though it is just a small town, MapSCII displayed it accurately.
MapSCII uses [**OpenStreetMap**][14] to collect the data. + +### Install MapSCII locally + +Liked it? Great! You can host it on your own system. + +Make sure you have installed Node.js on your system. If not, refer the following link. + +[Install NodeJS on Linux][15] + +Then, run the following command to install it. +``` +sudo npm install -g mapscii + +``` + +To launch MapSCII, run: +``` +mapscii +``` + +Have fun! More good stuffs to come. Stay tuned! + +Cheers! + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/mapscii-world-map-terminal/ + +作者:[SK][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com/author/sk/ +[1]:https://www.mapbox.com/mapbox-gl-style-spec/ +[2]:https://github.com/osm2vectortiles +[3]:https://github.com/mapbox/vector-tile-spec +[4]:https://github.com/mapbox/mbtiles-spec +[5]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[6]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-1-2.png () +[7]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-2.png () +[8]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-3.png () +[9]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-4.png () +[10]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-5.png () +[11]:https://en.wikipedia.org/wiki/Tehsils_of_India +[12]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-6.png () +[13]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-7.png () +[14]:https://www.openstreetmap.org/ +[15]:https://www.ostechnix.com/install-node-js-linux/ From 8c04e36d0b0ad9d425b91b241bcf1d70e7ce095e Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 23 Jan 2018 14:16:36 +0800 Subject: [PATCH 192/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Raspberry=20Pi=20?= 
=?UTF-8?q?Alternatives?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../20180122 Raspberry Pi Alternatives.md | 58 +++++++++++++++++++ 1 file changed, 58 insertions(+) create mode 100644 sources/talk/20180122 Raspberry Pi Alternatives.md diff --git a/sources/talk/20180122 Raspberry Pi Alternatives.md b/sources/talk/20180122 Raspberry Pi Alternatives.md new file mode 100644 index 0000000000..bf3bca4f61 --- /dev/null +++ b/sources/talk/20180122 Raspberry Pi Alternatives.md @@ -0,0 +1,58 @@ +Raspberry Pi Alternatives +====== +A look at some of the many interesting Raspberry Pi competitors. + +The phenomenon behind the Raspberry Pi computer series has been pretty amazing. It's obvious why it has become so popular for Linux projects—it's a low-cost computer that's actually quite capable for the price, and the GPIO pins allow you to use it in a number of electronics projects such that it starts to cross over into Arduino territory in some cases. Its overall popularity has spawned many different add-ons and accessories, not to mention step-by-step guides on how to use the platform. I've personally written about Raspberry Pis often in this space, and in my own home, I use one to control a beer fermentation fridge, one as my media PC, one to control my 3D printer and one as a handheld gaming device. + +The popularity of the Raspberry Pi also has spawned competition, and there are all kinds of other small, low-cost, Linux-powered Raspberry Pi-like computers for sale—many of which even go so far as to add "Pi" to their names. These computers aren't just clones, however. Although some share a similar form factor to the Raspberry Pi, and many also copy the GPIO pinouts, in many cases, these other computers offer features unavailable in a traditional Raspberry Pi. Some boards offer SATA, Wi-Fi or Gigabit networking; others offer USB3, and still others offer higher-performance CPUs or more RAM. 
When you are choosing a low-power computer for a project or as a home server, it pays to be aware of these Raspberry Pi alternatives, as in many cases, they will perform much better. So in this article, I discuss some alternatives to Raspberry Pis that I've used personally, their pros and cons, and then provide some examples of where they work best. + +### Banana Pi + +I've mentioned the Banana Pi before in past articles (see "Papa's Got a Brand New NAS" in the September 2016 issue and "Banana Backups" in the September 2017 issue), and it's a great choice when you want a board with a similar form factor, similar CPU and RAM specs, and a similar price (~$30) to a Raspberry Pi but need faster I/O. The Raspberry Pi product line is used for a lot of home server projects, but it limits you to 10/100 networking and a USB2 port for additional storage. Where the Banana Pi product line really shines is in the fact that it includes both a Gigabit network port and SATA port, while still having similar GPIO expansion options and running around the same price as a Raspberry Pi. + +Before I settled on an Odroid XU4 for my home NAS (more on that later), I first experimented with a cluster of Banana Pis. The idea was to attach a SATA disk to each Banana Pi and use software like Ceph or GlusterFS to create a storage cluster shared over the network. Even though any individual Banana Pi wasn't necessarily that fast, considering how cheap they are in aggregate, they should be able to perform reasonably well and allow you to expand your storage by adding another disk and another Banana Pi. In the end, I decided to go a more traditional and simpler route with a single server and software RAID, and now I use one Banana Pi as an image gallery server. I attached a 2.5" laptop SATA drive to the other and use it as a local backup server running BackupPC. It's a nice solution that takes up almost no space and little power to run. 
+ +### Orange Pi Zero + +I was really excited when I first heard about the Raspberry Pi Zero project. I couldn't believe there was such a capable little computer for only $5, and I started imagining all of the cool projects I could use one for around the house. That initial excitement was dampened a bit by the fact that they sold out quickly, and just about every vendor settled into the same pattern: put standalone Raspberry Pi Zeros on backorder but have special $20 starter kits in stock that include various adapter cables, a micro SD card and a plastic case that I didn't need. More than a year after the release, the situation still remains largely the same. Although I did get one Pi Zero and used it for a cool Adafruit "Pi Grrl Zero" gaming project, I had to put the rest of my ideas on hold, because they just never seemed to be in stock when I wanted them. + +The Orange Pi Zero was created by the same company that makes the entire line of Orange Pi computers that compete with the Raspberry Pi. The main thing that makes the Orange Pi Zero shine in my mind is that they have a small, square form factor that is wider than a Raspberry Pi Zero but not as long. It also includes a Wi-Fi card like the more expensive Raspberry Pi Zero W, and it runs between $6 and $9, depending on whether you opt for 256MB of RAM or 512MB of RAM. More important, they are generally in stock, so there's no need to sit on a backorder list when you have a fun project in mind. + +The Orange Pi Zero boards themselves are pretty capable. Out of the box, they include a quad-core ARM CPU, Wi-Fi (as I mentioned before), along with a 10/100 network port and USB2\. They also include Raspberry-Pi-compatible GPIO pins, but even more interesting is that there is a $9 "NAS" expansion board for it that mounts to its 13-pin header and provides extra USB2 ports, a SATA and mSATA port, along with an IR and audio and video ports, which makes it about as capable as a more expensive Banana Pi board. 
Even without the expansion board, this would make a nice computer you could sit anywhere within range of your Wi-Fi and run any number of services. The main downside is you are limited to composite video, so this isn't the best choice for gaming or video-based projects. + +Although Orange Pi Zeros are capable boards in their own right, what makes them particularly enticing to me is that they are actually available when you want them, unlike some of the other sub-$10 boards out there. There's nothing worse than having a cool idea for a cheap home project and then having to wait for a board to come off backorder. + +![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12261f1.jpg) + +Figure 1\. An Orange Pi Zero (right) and an Espressobin (left) + +### Odroid XU4 + +When I was looking to replace my rack-mounted NAS at home, I first looked at all of the Raspberry Pi options, including Banana Pi and other alternatives, but none of them seemed to have quite enough horsepower for my needs. I needed a machine that not only offered Gigabit networking to act as a NAS, but one that had high-speed disk I/O as well. The Odroid XU4 fit the bill with its eight-core ARM CPU, 2GB RAM, Gigabit network and USB3 ports. Although it was around $75 (almost twice the price of a Raspberry Pi), it was a much more capable computer all while being small and low-power. + +The entire Odroid product line is a good one to consider if you want a low-power home server but need more resources than a traditional Raspberry Pi can offer and are willing to spend a little bit extra for the privilege. In addition to a NAS, the Odroid XU4, with its more powerful CPU and extra RAM, is a good all-around server for the home. The USB3 port means you have a lot of storage options should you need them. 
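Spec sheets for these boards don't always match real-world performance, so before settling on one for NAS duty it can be worth measuring disk throughput yourself. Here is a minimal sketch of such a check using `dd`; the scratch file name is arbitrary, and 64 MB is just large enough to get a stable number:

```shell
# Write a 64 MB scratch file; conv=fdatasync forces the data out to the
# device before dd exits, so the reported rate reflects the actual storage
# rather than the kernel's page cache.
dd if=/dev/zero of=dd-scratch.bin bs=1M count=64 conv=fdatasync

# Remove the scratch file once you've noted the throughput dd reports.
rm -f dd-scratch.bin
```

For the network side, running `iperf3 -s` on another host and `iperf3 -c <server-ip>` on the board gives a similar reality check of the Gigabit port.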
+ +### Espressobin + +Although the Odroid XU4 is a great home server, I still sometimes can see that it gets bogged down in disk and network I/O compared to a traditional higher-powered server. Some of this might be due to the chips that were selected for the board, and perhaps some of it has to do with the fact that I'm using both disk encryption and software RAID over USB3\. In either case, I started looking for another option to help take a bit of the storage burden off this server, and I came across the Espressobin board. + +The Espressobin is a $50 board that launched as a popular Indiegogo campaign and is now a shipping product that you can pick up in a number of places, including Amazon. Although it costs a bit more than a Raspberry Pi 3, it includes a 64-bit dual-core ARM Cortex A53 at 1.2GHz, 1–2Gb of RAM (depending on the configuration), three Gigabit network ports with a built-in switch, a SATA port, a USB3 port, a mini-PCIe port, plus a number of other options, including two sets of GPIO headers and a nice built-in serial console running on the micro-USB port. + +The main benefit to the Espressobin is the fact that it was designed by Marvell with chips that actually can use all of the bandwidth that the board touts. In some other boards, often you'll find a SATA2 port that's hanging off a USB2 interface or other architectural hacks that, although they will let you connect a SATA disk or Gigabit networking port, it doesn't mean you'll get the full bandwidth the spec claims. Although I intend to have my own Espressobin take over home NAS duties, it also would make a great home gateway router, general-purpose server or even a Wi-Fi access point, provided you added the right Wi-Fi card. + +### Conclusion + +A whole world of alternatives to Raspberry Pis exists—this list covers only some of the ones I've used myself. I hope it has encouraged you to think twice before you default to a Raspberry Pi for your next project. 
Although there's certainly nothing wrong with Raspberry Pis, there are several small computers that run Linux well and, in many cases, offer better hardware or other expansion options beyond the capabilities of a Raspberry Pi for a similar price. + + +-------------------------------------------------------------------------------- + +via: http://www.linuxjournal.com/content/raspberry-pi-alternatives + +作者:[Kyle Rankin][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.linuxjournal.com/users/kyle-rankin From 646f0478fe24a9cabf2eddce88a8bf40b398ea75 Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 23 Jan 2018 14:20:16 +0800 Subject: [PATCH 193/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Linux=20rm=20Comm?= =?UTF-8?q?and=20Explained=20for=20Beginners=20(8=20Examples)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...nd Explained for Beginners (8 Examples).md | 172 ++++++++++++++++++ 1 file changed, 172 insertions(+) create mode 100644 sources/tech/20180122 Linux rm Command Explained for Beginners (8 Examples).md diff --git a/sources/tech/20180122 Linux rm Command Explained for Beginners (8 Examples).md b/sources/tech/20180122 Linux rm Command Explained for Beginners (8 Examples).md new file mode 100644 index 0000000000..5ba87a1b7e --- /dev/null +++ b/sources/tech/20180122 Linux rm Command Explained for Beginners (8 Examples).md @@ -0,0 +1,172 @@ +Linux rm Command Explained for Beginners (8 Examples) +====== + +Deleting files is a fundamental operation, just like copying files or renaming/moving them. In Linux, there's a dedicated command - dubbed **rm** \- that lets you perform all deletion-related operations. In this tutorial, we will discuss the basics of this tool along with some easy to understand examples. 
+ +But before we do that, it's worth mentioning that all examples mentioned in the article have been tested on Ubuntu 16.04 LTS. + +#### Linux rm command + +So in layman's terms, we can simply say the rm command is used for removing/deleting files and directories. Following is the syntax of the command: + +``` +rm [OPTION]... [FILE]... +``` + +And here's how the tool's man page describes it: +``` +This manual page documents the GNU version of rm. rm removes each specified file. By default, it +does not remove directories. + +If  the  -I or --interactive=once option is given, and there are more than three files or the -r, +-R, or --recursive are given, then rm prompts the user for whether to proceed with the entire +operation. If the response is not affirmative, the entire command is aborted. + +Otherwise, if a file is unwritable, standard input is a terminal, and the -f or --force option is +not given, or the -i or --interactive=always option is given, rm prompts the user for whether to +remove the file. If the response is not affirmative, the file is skipped. +``` + +The following Q&A-styled examples will give you a better idea on how the tool works. + +#### Q1. How to remove files using rm command? + +That's pretty easy and straightforward. All you have to do is to pass the name of the files (along with paths if they are not in the current working directory) as input to the rm command. + +``` +rm [filename] +``` + +For example: + +``` +rm testfile.txt +``` + +[![How to remove files using rm command][1]][2] + +#### Q2. How to remove directories using rm command? + +If you are trying to remove a directory, then you need to use the **-r** command line option. Otherwise, rm will throw an error saying what you are trying to delete is a directory. + +``` +rm -r [dir name] +``` + +For example: + +``` +rm -r testdir +``` + +[![How to remove directories using rm command][3]][4] + +#### Q3. How to make rm prompt before every removal? 
+
+If you want rm to prompt before each delete action it performs, then use the **-i** command line option.
+
+```
+rm -i [file or dir]
+```
+
+For example, suppose you want to delete a directory 'testdir' and all its contents, but want rm to prompt before every deletion; here's how you can do that:
+
+```
+rm -r -i testdir
+```
+
+[![How to make rm prompt before every removal][5]][6]
+
+#### Q4. How to force rm to ignore nonexistent files?
+
+The rm command lets you know through an error message if you try deleting a non-existent file or directory.
+
+[![Linux rm command example][7]][8]
+
+However, if you want, you can make rm suppress such errors/notifications - all you have to do is use the **-f** command line option.
+
+```
+rm -f [filename]
+```
+
+[![How to force rm to ignore nonexistent files][9]][10]
+
+#### Q5. How to make rm prompt only in some scenarios?
+
+There exists a command line option, **-I** , which, when used, makes sure the command prompts only once before removing more than three files, or when removing recursively.
+
+For example, the following screenshot shows this option in action - there was no prompt when two files were deleted, but the command prompted when more than three files were deleted.
+
+[![How to make rm prompt only in some scenarios][11]][12]
+
+#### Q6. How does rm work when dealing with the root directory?
+
+Of course, deleting the root directory is the last thing a Linux user would want. That's why the rm command doesn't let you perform a recursive delete operation on this directory by default.
+
+[![How rm works when dealing with root directory][13]][14]
+
+However, if you want to go ahead with this operation for whatever reason, then you need to tell this to rm by using the **\--no-preserve-root** option. When this option is enabled, rm doesn't treat the root directory (/) specially.
+
+In case you want to know the scenarios in which a user might want to delete the root directory of their system, head [here][15].
+
+#### Q7. How to make rm only remove empty directories?
+
+In case you want to restrict rm's directory deletion ability to only empty directories, you can use the -d command line option.
+
+```
+rm -d [dir]
+```
+
+The following screenshot shows the -d command line option in action - only the empty directory was deleted.
+
+[![How to make rm only remove empty directories][16]][17]
+
+#### Q8. How to force rm to emit details of the operation it is performing?
+
+If you want rm to display detailed information about the operation being performed, this can be done by using the **-v** command line option.
+
+```
+rm -v [file or directory name]
+```
+
+For example:
+
+[![How to force rm to emit details of operation it is performing][18]][19]
+
+#### Conclusion
+
+Given the kind of functionality it offers, rm is one of the most frequently used commands in Linux (like [cp][20] and mv). Here, in this tutorial, we have covered almost all the major command line options this tool provides. rm has a bit of a learning curve associated with it, so you'll have to spend some time practicing its options before you start using the tool in your day-to-day work. For more information, head to the command's [man page][21].
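As a parting example, the options covered above combine naturally. The following is a small self-contained session you can try in a throwaway directory — the file and directory names are invented purely for illustration:

```shell
# Create some throwaway files and directories to practice on.
mkdir -p scratch/emptydir scratch/fulldir
touch scratch/file.txt scratch/fulldir/data.txt

rm -v scratch/file.txt    # -v announces each removal it performs
rm -d scratch/emptydir    # -d works here only because the directory is empty
rm -r scratch             # -r is required once a directory has contents
rm -f no-such-file.txt    # -f suppresses the error for a missing operand
```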
+ + +-------------------------------------------------------------------------------- + +via: https://www.howtoforge.com/linux-rm-command/ + +作者:[Himanshu Arora][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.howtoforge.com +[1]:https://www.howtoforge.com/images/command-tutorial/rm-basic-usage.png +[2]:https://www.howtoforge.com/images/command-tutorial/big/rm-basic-usage.png +[3]:https://www.howtoforge.com/images/command-tutorial/rm-r.png +[4]:https://www.howtoforge.com/images/command-tutorial/big/rm-r.png +[5]:https://www.howtoforge.com/images/command-tutorial/rm-i-option.png +[6]:https://www.howtoforge.com/images/command-tutorial/big/rm-i-option.png +[7]:https://www.howtoforge.com/images/command-tutorial/rm-non-ext-error.png +[8]:https://www.howtoforge.com/images/command-tutorial/big/rm-non-ext-error.png +[9]:https://www.howtoforge.com/images/command-tutorial/rm-f-option.png +[10]:https://www.howtoforge.com/images/command-tutorial/big/rm-f-option.png +[11]:https://www.howtoforge.com/images/command-tutorial/rm-I-option.png +[12]:https://www.howtoforge.com/images/command-tutorial/big/rm-I-option.png +[13]:https://www.howtoforge.com/images/command-tutorial/rm-root-default.png +[14]:https://www.howtoforge.com/images/command-tutorial/big/rm-root-default.png +[15]:https://superuser.com/questions/742334/is-there-a-scenario-where-rm-rf-no-preserve-root-is-needed +[16]:https://www.howtoforge.com/images/command-tutorial/rm-d-option.png +[17]:https://www.howtoforge.com/images/command-tutorial/big/rm-d-option.png +[18]:https://www.howtoforge.com/images/command-tutorial/rm-v-option.png +[19]:https://www.howtoforge.com/images/command-tutorial/big/rm-v-option.png +[20]:https://www.howtoforge.com/linux-cp-command/ +[21]:https://linux.die.net/man/1/rm From 4a27d1a4e29b0aa9b64681ad034a3479493cf6df Mon Sep 17 00:00:00 2001 From: qhwdw Date: 
Tue, 23 Jan 2018 14:41:19 +0800 Subject: [PATCH 194/226] Translated by qhwdw --- ...Epilogues Canaries and Buffer Overflows.md | 101 ------------------ ...Epilogues Canaries and Buffer Overflows.md | 100 +++++++++++++++++ 2 files changed, 100 insertions(+), 101 deletions(-) delete mode 100644 sources/tech/20140519 Epilogues Canaries and Buffer Overflows.md create mode 100644 translated/tech/20140519 Epilogues Canaries and Buffer Overflows.md diff --git a/sources/tech/20140519 Epilogues Canaries and Buffer Overflows.md b/sources/tech/20140519 Epilogues Canaries and Buffer Overflows.md deleted file mode 100644 index 5f3ddca532..0000000000 --- a/sources/tech/20140519 Epilogues Canaries and Buffer Overflows.md +++ /dev/null @@ -1,101 +0,0 @@ -Translating by qhwdw [Epilogues, Canaries, and Buffer Overflows][1] -============================================================ - -Last week we looked at [how the stack works][2] and how stack frames are built during function prologues. Now it's time to look at the inverse process as stack frames are destroyed in function epilogues. Let's bring back our friend add.c: - -Simple Add Program - add.c - -``` -int add(int a, int b) -{ - int result = a + b; - return result; -} - -int main(int argc) -{ - int answer; - answer = add(40, 2); -} -``` - - -We're executing line 4, right after the assignment of a + b into result. This is what happens: - -![](https://manybutfinite.com/img/stack/returnFromAdd.png) - -The first instruction is redundant and a little silly because we know eax is already equal to result, but this is what you get with optimization turned off. The leave instruction then runs, doing two tasks for the price of one: it resets esp to point to the start of the current stack frame, and then restores the saved ebp value. These two operations are logically distinct and thus are broken up in the diagram, but they happen atomically if you're tracing with a debugger. - -After leave runs the previous stack frame is restored. 
The only vestige of the call to add is the return address on top of the stack. It contains the address of the instruction in main that must run after add is done. The ret instruction takes care of it: it pops the return address into the eip register, which points to the next instruction to be executed. The program has now returned to main, which resumes: - -![](https://manybutfinite.com/img/stack/returnFromMain.png) - -main copies the return value from add into local variable answer and then runs its own epilogue, which is identical to any other. Again the only peculiarity in main is that the saved ebp is null, since it is the first stack frame in our code. In the last step, execution has been returned to the C runtime (libc), which will exit to the operating system. Here's a diagram with the [full return sequence][3] for those who need it. - -You now have an excellent grasp of how the stack operates, so let's have some fun and look at one of the most infamous hacks of all time: exploiting the stack buffer overflow. Here is a vulnerable program: - -Vulnerable Program - buffer.c - -| -``` -void doRead() -{ - char buffer[28]; - gets(buffer); -} - -int main(int argc) -{ - doRead(); -} -``` - -The code above uses [gets][4] to read from standard input. gets keeps reading until it encounters a newline or end of file. Here's what the stack looks like after a string has been read: - -![](https://manybutfinite.com/img/stack/bufferCopy.png) - -The problem here is that gets is unaware of buffer's size: it will blithely keep reading input and stuffing data into the stack beyond buffer, obliterating the saved ebp value, return address, and whatever else is below. To exploit this behavior, attackers craft a precise payload and feed it into the program. 
This is what the stack looks like during an attack, after the call to gets: - -![](https://manybutfinite.com/img/stack/bufferOverflowExploit.png) - -The basic idea is to provide malicious assembly code to be executed and overwrite the return address on the stack to point to that code. It is a bit like a virus invading a cell, subverting it, and introducing some RNA to further its goals. - -And like a virus, the exploit's payload has many notable features. It starts with several nop instructions to increase the odds of successful exploitation. This is because the return address is absolute and must be guessed, since attackers don't know exactly where in the stack their code will be stored. But as long as they land on a nop, the exploit works: the processor will execute the nops until it hits the instructions that do work. - -The exec /bin/sh symbolizes raw assembly instructions that execute a shell (imagine for example that the vulnerability is in a networked program, so the exploit might provide shell access to the system). The idea of feeding raw assembly to a program expecting a command or user input is shocking at first, but that's part of what makes security research so fun and mind-expanding. To give you an idea of how weird things get, sometimes the vulnerable program calls tolower or toupper on its inputs, forcing attackers to write assembly instructions whose bytes do not fall into the range of upper- or lower-case ascii letters. - -Finally, attackers repeat the guessed return address several times, again to tip the odds ever in their favor. By starting on a 4-byte boundary and providing multiple repeats, they are more likely to overwrite the original return address on the stack. - -Thankfully, modern operating systems have a host of [protections against buffer overflows][5], including non-executable stacks and stack canaries. The "canary" name comes from the [canary in a coal mine][6] expression, an addition to computer science's rich vocabulary. 
In the words of Steve McConnell: - -> Computer science has some of the most colorful language of any field. In what other field can you walk into a sterile room, carefully controlled at 68°F, and find viruses, Trojan horses, worms, bugs, bombs, crashes, flames, twisted sex changers, and fatal errors? Steve McConnellCode Complete 2 - -At any rate, here's what a stack canary looks like: - -![](https://manybutfinite.com/img/stack/bufferCanary.png) - -Canaries are implemented by the compiler. For example, GCC's [stack-protector][7] option causes canaries to be used in any function that is potentially vulnerable. The function prologue loads a magic value into the canary location, and the epilogue makes sure the value is intact. If it's not, a buffer overflow (or bug) likely happened and the program is aborted via [__stack_chk_fail][8]. Due to their strategic location on the stack, canaries make the exploitation of stack buffer overflows much harder. - -This finishes our journey within the depths of the stack. We don't want to delve too greedily and too deep. Next week we'll go up a notch in abstraction to take a good look at recursion, tail calls and other tidbits, probably using Google's V8\. 
To end this epilogue and prologue talk, I'll close with a cherished quote inscribed on a monument in the American National Archives: - -![](https://manybutfinite.com/img/stack/past-is-prologue.jpg) - --------------------------------------------------------------------------------- - -via:https://manybutfinite.com/post/epilogues-canaries-buffer-overflows/ - -作者:[Gustavo Duarte][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://duartes.org/gustavo/blog/about/ -[1]:https://manybutfinite.com/post/epilogues-canaries-buffer-overflows/ -[2]:https://manybutfinite.com/post/journey-to-the-stack -[3]:https://manybutfinite.com/img/stack/returnSequence.png -[4]:http://linux.die.net/man/3/gets -[5]:http://paulmakowski.wordpress.com/2011/01/25/smashing-the-stack-in-2011/ -[6]:http://en.wiktionary.org/wiki/canary_in_a_coal_mine -[7]:http://gcc.gnu.org/onlinedocs/gcc-4.2.3/gcc/Optimize-Options.html -[8]:http://refspecs.linux-foundation.org/LSB_4.0.0/LSB-Core-generic/LSB-Core-generic/libc---stack-chk-fail-1.html \ No newline at end of file diff --git a/translated/tech/20140519 Epilogues Canaries and Buffer Overflows.md b/translated/tech/20140519 Epilogues Canaries and Buffer Overflows.md new file mode 100644 index 0000000000..b74400a68b --- /dev/null +++ b/translated/tech/20140519 Epilogues Canaries and Buffer Overflows.md @@ -0,0 +1,100 @@ +[探秘“栈”之旅(II)—— 谢幕,金丝雀,和缓冲区溢出][1] +============================================================ + +上一周我们讲解了 [栈是如何工作的][2] 以及在函数的开端上栈帧是如何被构建的。今天,我们来看一下它的相反的过程,在函数结束时,栈帧是如何被销毁的。重新回到我们的 add.c 上: + +简单的一个做加法的程序 - add.c + +``` +int add(int a, int b) +{ + int result = a + b; + return result; +} + +int main(int argc) +{ + int answer; + answer = add(40, 2); +} +``` + + +在运行到第 4 行时,在把 `a + b` 值赋给 `result` 后,这时发生了什么: + +![](https://manybutfinite.com/img/stack/returnFromAdd.png) + +第一个指令是有些多余而且有点傻的,因为我们知道 `eax` 已经等于了 
`result` ,但这就是关闭优化时得到的结果。剩余的指令接着运行,这一小段做了两个任务:重置 `esp` 并将它指向到当前栈帧开始的地方,另一个是恢复在 `ebp` 中保存的值。这两个操作在逻辑上是独立的,因此,在图中将它们分开来说,但是,如果你使用一个调试器去跟踪,你就会发现它们都是自动发生的。 + +在运行完毕后,恢复了前一个栈帧。`add` 调用唯一留下的东西就是在栈顶部的返回地址。它包含了运行完 `add` 之后在 `main` 中的指令的地址。它带来的是 `ret` 指令:它弹出返回地址到 `eip` 寄存器(译者注:32位的指令寄存器),这个寄存器指向下一个要执行的指令。现在程序将返回到 `main` ,主要部分如下: + +![](https://manybutfinite.com/img/stack/returnFromMain.png) + +`main` 从 `add` 中拷贝返回值到本地变量 `answer`,然后,运行它的“谢幕仪式”,这一点和其它的函数是一样的。在 `main` 中唯一的怪异之处是,它在 `ebp` 中保存了 `null` 值,因为,在我们的代码中它是第一个栈帧。最后一步执行的是,返回到 C 运行时库(libc),它将退回到操作系统中。这里为需要的人提供了一个 [完整的返回顺序][3] 的图。 + +现在,你已经理解了栈是如何运作的,所以我们现在可以来看一下,一直以来最著名的黑客行为:挖掘缓冲区溢出。这是一个有漏洞的程序: + +有漏洞的程序 - buffer.c + +``` +void doRead() +{ + char buffer[28]; + gets(buffer); +} + +int main(int argc) +{ + doRead(); +} +``` + +上面的代码中使用了 [gets][4] 从标准输入中去读取内容。`gets` 持续读取直到一个新行或者文件结束。下图是读取一个字符串之后栈的示意图: + +![](https://manybutfinite.com/img/stack/bufferCopy.png) + +在这里存在的问题是,`gets` 并不知道缓冲区大小:它毫无查觉地持续读取输入内容,并将读取的内容填入到栈那边的缓冲区,清除保存在 `ebp` 中的值,返回地址,下面的其它内容也是如此。对于挖掘行为,攻击者制作一个载荷片段并将它“喂”给程序。在这个时候,栈应该是下图所示的样子,然后去调用 `gets`: + +![](https://manybutfinite.com/img/stack/bufferOverflowExploit.png) + +基本的想法是提供一个恶意的汇编代码去运行,通过覆写栈上的返回地址指向到那个代码。这有点像病毒侵入一个细胞,颠覆它,然后引入一些 RNA 去达到它的目的。 + +和病毒一样,挖掘者的载荷有许多特别的功能。它从使用几个 `nop` 指令开始,以提升成功挖掘漏洞的可能性。这是因为返回的地址是一个靠猜测的且不受约束的地址,因此,攻击者并不知道保存它的代码的栈的准确位置。但是,只要它们进入一个 `nop`,这个漏洞挖掘工作就会进行:处理器将运行 `nops`,直到击中它希望去运行的指令。 + +exec /bin/sh 表示运行一个 shell(假设漏洞是在一个网络程序中,因此,这个漏洞可能提供一个访问系统的 shell)的原生汇编指令。将原生汇编指令嵌入到一个程序中,使程序产生一个命令窗口或者用户输入的想法是很可怕的,但是,那只是让安全研究如此有趣且“脑洞大开”的一部分而已。对于防范这个怪异的 `get`, 给你提供一个思路,有时候,在有漏洞的程序上,让它的输入转换为小写或者大写,将迫使攻击者写的汇编指令的完整字节不属于小写或者大写的 ascii 字母的范围内。 + +最后,攻击者重放几次猜测的返回地址,这将再次提升他们的胜算。通过从一个 4 字节的边界上多次重放,它们可能会覆写栈上的原始返回地址。 + +幸亏,现代操作系统有了 [防止缓冲区溢出][5] 的一系列保护措施,包括不可执行的栈和栈金丝雀(stack canaries)。这个 “金丝雀(canary)” 名字来自 [煤矿中的金丝雀(canary in a coal mine)][6] 中的表述(译者注:指在煤矿工人下井时,带一只金丝雀,因为金丝雀对煤矿中的瓦斯气体非常敏感,如果进入煤矿后,金丝雀死亡,说明瓦斯超标,矿工会立即撤出煤矿。金丝雀做为煤矿中瓦斯预警器来使用),是对丰富的计算机科学词汇的补充,用 Steve McConnell 的话解释如下: + +> 
计算机科学拥有比其它任何领域都丰富多彩的语言,在其它的领域中你进入一个无菌室,小心地将温度控制在 68°F,然后,能找到病毒、特洛伊木马、蠕虫、臭虫、炸弹、崩溃、爆发、扭曲的变性者、以及致命错误吗? Steve McConnell 代码大全 2 + +不管怎么说,这里所谓的“栈金丝雀”应该看起来是这个样子的: + +![](https://manybutfinite.com/img/stack/bufferCanary.png) + +金丝雀是通过汇编来实现的。例如,由于 GCC 的 [栈保护器][7] 选项的原因使金丝雀被用于任何可能有漏洞的函数上。函数开端加载一个神奇的值到金丝雀的位置,并且在函数结束调用时确保这个值完好无损。如果这个值发生了变化,那就表示发生了一个缓冲区溢出(或者 bug),这时,程序通过 [__stack_chk_fail][8] 被终止运行。由于金丝雀处于栈的关键位置上,它使得栈缓冲区溢出的漏洞挖掘变得非常困难。 + +深入栈的探秘之旅结束了。我并不想过于深入。下一周我将深入递归、尾调用以及其它相关内容。或许要用到谷歌的 V8 引擎。为总结函数的开端和结束的讨论,我引述了美国国家档案馆纪念雕像上的一句名言:(what is past is prologue) + +![](https://manybutfinite.com/img/stack/past-is-prologue.jpg) + +-------------------------------------------------------------------------------- + +via:https://manybutfinite.com/post/epilogues-canaries-buffer-overflows/ + +作者:[Gustavo Duarte][a] +译者:[qhwdw](https://github.com/qhwdw) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://duartes.org/gustavo/blog/about/ +[1]:https://manybutfinite.com/post/epilogues-canaries-buffer-overflows/ +[2]:https://manybutfinite.com/post/journey-to-the-stack +[3]:https://manybutfinite.com/img/stack/returnSequence.png +[4]:http://linux.die.net/man/3/gets +[5]:http://paulmakowski.wordpress.com/2011/01/25/smashing-the-stack-in-2011/ +[6]:http://en.wiktionary.org/wiki/canary_in_a_coal_mine +[7]:http://gcc.gnu.org/onlinedocs/gcc-4.2.3/gcc/Optimize-Options.html +[8]:http://refspecs.linux-foundation.org/LSB_4.0.0/LSB-Core-generic/LSB-Core-generic/libc---stack-chk-fail-1.html \ No newline at end of file From ac9832e3c8fcd1f0695c3411be0038172ace9091 Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 23 Jan 2018 17:40:03 +0800 Subject: [PATCH 195/226] PRF:20171107 The long goodbye to C.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @name1e5s @yunfengHe 二校 --- .../talk/20171107 The long goodbye to C.md | 52 +++++++++---------- 1 file changed, 26 insertions(+), 26 
deletions(-) diff --git a/translated/talk/20171107 The long goodbye to C.md b/translated/talk/20171107 The long goodbye to C.md index 253cf95071..436c01021f 100644 --- a/translated/talk/20171107 The long goodbye to C.md +++ b/translated/talk/20171107 The long goodbye to C.md @@ -1,33 +1,33 @@ -对 C 的漫长的告别 +与 C 语言长别离 ========================================== -这几天来,我在思考那些正在挑战 C 语言作为系统编程语言堆中的根节点的地位的新潮语言,尤其是 Go 和 Rust。思考的过程中,我意识到了一个让我震惊的事实 —— 我有着 35 年的 C 语言经验。每周我都要写很多 C 代码,但是我已经记不清楚上一次我 _创建一个新的 C 语言项目_ 是在什么时候了。 +这几天来,我在思考那些正在挑战 C 语言的系统编程语言领袖地位的新潮语言,尤其是 Go 和 Rust。思考的过程中,我意识到了一个让我震惊的事实 —— 我有着 35 年的 C 语言经验。每周我都要写很多 C 代码,但是我已经记不清楚上一次我 _创建一个新的 C 语言项目_ 是在什么时候了。 -如果你完全不认为这种情况令人震惊,那你很可能不是一个系统程序员。我知道有很多程序员使用更高级的语言工作。但是我把大部分时间都花在了深入打磨像 NTPsec , GPSD 以及 giflib 这些东西上。熟练使用 C 语言在这几十年里一直就是我的专长。但是,现在我不仅是不再使用 C 语言写新的项目,而且我都记不清我什么时候开始这样做的了。而且...回望历史,我不认为这是本世纪发生的事情。 +如果你完全不认为这种情况令人震惊,那你很可能不是一个系统程序员。我知道有很多程序员使用更高级的语言工作。但是我把大部分时间都花在了深入打磨像 NTPsec、 GPSD 以及 giflib 这些东西上。熟练使用 C 语言在这几十年里一直就是我的专长。但是,现在我不仅是不再使用 C 语言写新的项目,甚至我都记不清我是什么时候开始这样做的了,而且……回头想想,我觉得这都不是本世纪发生的事情。 - 我很惊讶的意识到,如果你问到我我的五个最核心软件开发技能是什么,“C 语言专家” 一定是你最有可能听到的。这也激起了我的思考。C 的未来会怎样 ?C 是否正像当年的 COBOL 一样,在辉煌之后,走向落幕? +这个对于我来说是件大事,因为如果你问我,我的五个最核心软件开发技能是什么,“C 语言专家” 一定是你最有可能听到的之一。这也激起了我的思考。C 语言的未来会怎样 ?C 语言是否正像当年的 COBOL 语言一样,在辉煌之后,走向落幕? 
-我恰好是在 C 语言迅猛发展并把汇编语言以及其他许多编译型语言挤出主流存在的前几年开始编程的。那场过渡大约是在 1982 到 1985 年之间。在那之前,有很多编译型语言来争相吸引程序员的注意力,那些语言中还没有明确的领导者;但是在那之后,小众的语言直接毫无声息的退出舞台。主流的(FORTRAN,Pascal,COBOL)语言则要么只限于老代码,要么就是固守单一领域,再就是在 C 语言的边缘领域顶着愈来愈大的压力苟延残喘。 +我恰好是在 C 语言迅猛发展,并把汇编语言以及其它许多编译型语言挤出主流存在的前几年开始编程的。那场过渡大约是在 1982 到 1985 年之间。在那之前,有很多编译型语言争相吸引程序员的注意力,那些语言中还没有明确的领导者;但是在那之后,小众的语言就直接毫无声息的退出了舞台。主流的语言(FORTRAN、Pascal、COBOL)则要么只限于老代码,要么就是固守单一领域,再就是在 C 语言的边缘领域顶着愈来愈大的压力苟延残喘。 -在那以后,这种情形持续了近 30 年。尽管在应用程序开发上出现了新的动向: Java, Perl, Python, 以及许许多多不是很成功的竞争者。起初我很少关注这些语言,这很大一部是是因为在它们的运行时的开销对于当时的实际硬件来说太大。因此,这就使得 C 的成功无可撼动;为了使用和对接大量已存在的 C 语言代码,你得使用 C 语言写新代码(一部分脚本语言尝试过打破这种壁垒,但是只有 Python 有可能成功) +而在那以后,这种情形持续了近 30 年。尽管在应用程序开发上出现了新的动向: Java、 Perl、 Python, 以及许许多多不是很成功的竞争者。起初我很少关注这些语言,这很大一部分是因为在它们的运行时的开销对于当时的实际硬件来说太大。因此,这就使得 C 的成功无可撼动;为了使用和对接大量已有的 C 语言代码,你得使用 C 语言写新代码(一部分脚本语言尝试过打破这种壁垒,但是只有 Python 有可能取得成功)。 -回想起来,我在 1997 年使用脚本语言写应用时本应该注意到这些语言的更重要的意义的。当时我写的是一个帮助图书管理员使用一个叫做SunSITE的源码分发站点的辅助软件,当时使用的是 Perl 语言。 +回想起来,我在 1997 年使用脚本语言写应用时本应该注意到这些语言的更重要的意义的。当时我写的是一个名为 SunSITE 的帮助图书管理员做源码分发的辅助软件,当时使用的是 Perl 语言。 -这个应用完全是用来处理文本输入的,而且只需要能够应对人类的反应速度即可(大概 0.1 秒),因此使用 C 或者别的没有动态内存分配以及字符串类型的语言来写就会显得很傻。但是在当时,我仅仅是把其视为一个试验,而完全没有想到我几乎再也不会在一个新项目的第一个文件里敲下 “int main(int argc, char **argv)” 这样的 C 语言代码了。 +这个应用完全是用来处理文本输入的,而且只需要能够应对人类的反应速度即可(大概 0.1 秒),因此使用 C 或者别的没有动态内存分配以及字符串类型的语言来写就会显得很傻。但是在当时,我仅仅是把其视为一个试验,而完全没有想到我几乎再也不会在一个新项目的第一个文件里敲下 `int main(int argc, char **argv)` 这样的 C 语言代码了。 -我说“几乎”,主要是因为 1999 年的 [SNG][3]。 我想那是我最后一个用 C 从头开始写的项目。 +我说“几乎”,主要是因为 1999 年的 [SNG][3]。 我想那是我最后一个用 C 从头开始写的项目了。 在那之后我写的所有的 C 代码都是在为那些上世纪已经存在的老项目添砖加瓦,或者是在维护诸如 GPSD 以及 NTPsec 一类的项目。 当年我本不应该使用 C 语言写 SNG 的。因为在那个年代,摩尔定律的快速迭代使得硬件愈加便宜,使得像 Perl 这样的语言的执行效率也不再是问题。仅仅三年以后,我可能就会毫不犹豫地使用 Python 而不是 C 语言来写 SNG。 -在 1997 年我学习了 Python, 这对我来说是一道分水岭。这个语言很美妙 —— 就像我早年使用的 Lisp 一样,而且 Python 还有很酷的库!甚至还完全绑定了 POSIX!还有一个蛮好用的对象系统!Python 没有把 C 语言挤出我的工具箱,但是我很快就习惯了在只要能用 Python 时就写 Python ,而只在必须使用 C 时写 C . 
+在 1997 年我学习了 Python, 这对我来说是一道分水岭。这个语言很美妙 —— 就像我早年使用的 Lisp 一样,而且 Python 还有很酷的库!甚至还完全遵循了 POSIX!还有一个蛮好用的对象系统!Python 没有把 C 语言挤出我的工具箱,但是我很快就习惯了在只要能用 Python 时就写 Python ,而只在必须使用 C 语言时写 C。 -( 在此之后,我开始在我的访谈中指出我所谓的 “ Perl 的教训” ,也就是任何一个没能实现和 C 语言语义等价的绑定了 POSIX 的语言_都注定要失败_。在计算机科学的发展史上,很多学术语言的骨骸俯拾皆是,原因是这些语言的设计者没有意识到这个重要的问题。) +(在此之后,我开始在我的访谈中指出我所谓的 “Perl 的教训” ,也就是任何一个没能实现和 C 语言语义等价的遵循 POSIX 的语言_都注定要失败_。在计算机科学的发展史上,很多学术语言的骨骸俯拾皆是,原因是这些语言的设计者没有意识到这个重要的问题。) -显然,对我来说,Python 的主要优势之一就是它很简单,当我写 Python 时,我不再需要担心内存管理问题或者会导致吐核的程序崩溃 —— 对于 C 程序员来说,处理这些问题烦的要命。而不那么明显的优势恰好在我更改语言时显现,我在 90 年代末写应用程序和非核心系统服务的代码时为了平衡成本与风险都会倾向于选择具有自动内存管理但是开销更大的语言,以抵消之前提到的 C 语言的缺陷。而在仅仅几年之前(甚至是 1990 年),那些语言的开销还是大到无法承受的;那时硬件产业的发展还在早期阶段,没有给摩尔定律足够的时间来发挥威力。 +显然,对我来说,Python 的主要优势之一就是它很简单,当我写 Python 时,我不再需要担心内存管理问题或者会导致核心转储的程序崩溃 —— 对于 C 程序员来说,处理这些问题烦的要命。而不那么明显的优势恰好在我更改语言时显现,我在 90 年代末写应用程序和非核心系统服务的代码时,为了平衡成本与风险都会倾向于选择具有自动内存管理但是开销更大的语言,以抵消之前提到的 C 语言的缺陷。而在仅仅几年之前(甚至是 1990 年),那些语言的开销还是大到无法承受的;那时硬件产业的发展还在早期阶段,没有给摩尔定律足够的时间来发挥威力。 -尽量地在 C 和 Python 之间选择 C —— 只要是能的话我就会从 C 语言转移到 Python 。这是一种降低工程复杂程度的有效策略。我将这种策略应用在了 GPSD 中,而针对 NTPsec , 我对这个策略的采用则更加系统。这就是我们能把 NTP 的代码库大小削减四分之一的原因。 +尽量地在 C 语言和 Python 之间选择 C —— 只要是能的话我就会从 C 语言转移到 Python 。这是一种降低工程复杂程度的有效策略。我将这种策略应用在了 GPSD 中,而针对 NTPsec , 我对这个策略的采用则更加系统化。这就是我们能把 NTP 的代码库大小削减四分之一的原因。 但是今天我不是来讲 Python 的。尽管我觉得它在竞争中脱颖而出,Python 也未必真的是在 2000 年之前彻底结束我在新项目上使用 C 语言的原因,因为在当时任何一个新的学院派的动态语言都可以让我不再选择使用 C 语言。也有可能是在某段时间里在我写了很多 Java 之后,我才慢慢远离了 C 语言。 @@ -35,35 +35,35 @@ 在 2000 年以后,尽管我还在使用 C/C++ 写之前的项目,比如 GPSD ,游戏韦诺之战以及 NTPsec,但是我的所有新项目都是使用 Python 的。 -有很多程序是在完全无法在 C 语言下写出来的,尤其是 [reposurgeon][4] 以及 [doclifter][5] 这样的项目。由于 C 语言的有限的数据本体以及其脆弱的底层管理,尝试用 C 写的话可能会很恐怖,并注定失败。 +有很多程序是在完全无法在 C 语言下写出来的,尤其是 [reposurgeon][4] 以及 [doclifter][5] 这样的项目。由于 C 语言受限的数据类型本体论以及其脆弱的底层数据管理问题,尝试用 C 写的话可能会很恐怖,并注定失败。 甚至是对于更小的项目 —— 那些可以在 C 中实现的东西 —— 我也使用 Python 写,因为我不想花不必要的时间以及精力去处理内核转储问题。这种情况一直持续到去年年底,持续到我创建我的第一个 Rust 项目,以及成功写出第一个[使用 Go 语言的项目][6]。 -如前文所述,尽管我是在讨论我的个人经历,但是我想我的经历体现了时代的趋势。我期待新潮流的出现,而不是仅仅跟随潮流。在 98 年的时候,我就是 Python 的早期使用者。来自 
[TIOBE][7] 的数据则表明在 Go 语言脱胎于公司的实验项目并刚刚从小众语言中脱颖而出的几个月内,我就在开始实现自己的第一个 Go 语言项目了。 +如前文所述,尽管我是在讨论我的个人经历,但是我想我的经历体现了时代的趋势。我期待新潮流的出现,而不是仅仅跟随潮流。在 98 年的时候,我就是 Python 的早期使用者。来自 [TIOBE][7] 的数据则表明,在 Go 语言脱胎于公司的实验项目并刚刚从小众语言中脱颖而出的几个月内,我就开始实现自己的第一个 Go 语言项目了。 总而言之:直到现在第一批有可能挑战 C 语言的传统地位的语言才出现。我判断这个的标准很简单 —— 只要这个语言能让我等 C 语言老手接受不再写 C 的事实,这个语言才 “有可能” 挑战到 C 语言的地位 —— 来看啊,这有个新编译器,能把 C 转换到新语言,现在你可以让他完成你的_全部工作_了 —— 这样 C 语言的老手就会开心起来。 -Python 以及和其类似的语言对此做的并不够好。使用 Python 实现 NTPsec(以此举例)可能是个灾难,最终会由于过高的运行时开销以及由于垃圾回收机制导致的延迟变化而烂尾。如果需求是针对单个用户且只需要以人类能接受的速度运行,使用 Python 当然是很好的,但是对于以 _机器的速度_ 运行的程序来说就不总是如此了 —— 尤其是在很高的多用户负载之下。这不只是我自己的判断 -- 因为拿 Go 语言来说,它的存在主要就是因为当时作为 Python 语言主要支持者的 Google 在使用 Python 实现一些工程的时候也遭遇了同样的效能痛点。 +Python 以及和其类似的语言对此做的并不够好。使用 Python 实现 NTPsec(以此举例)可能是个灾难,最终会由于过高的运行时开销以及由于垃圾回收机制导致的延迟变化而烂尾。如果需求是针对单个用户且只需要以人类能接受的速度运行,使用 Python 当然是很好的,但是对于以 _机器的速度_ 运行的程序来说就不总是如此了 —— 尤其是在很高的多用户负载之下。这不只是我自己的判断 —— 因为拿 Go 语言来说,它的存在主要就是因为当时作为 Python 语言主要支持者的 Google 在使用 Python 实现一些工程的时候也遭遇了同样的效能痛点。 -Go 语言就是为了处理 Python 搞不定的那些多由 C 语言来实现的任务而设计的。尽管没有一个全自动语言转换软件让我很是不爽,但是使用 Go 语言来写系统程序对我来说不算麻烦,我发现我写 Go 写的还挺开心的。我的很多 C 编码技能还可以继续使用,我还收获了垃圾回收机制以及并发编程机制,这何乐而不为? +Go 语言就是为了解决 Python 搞不定的那些大多由 C 语言来实现的任务而设计的。尽管没有一个全自动语言转换软件让我很是不爽,但是使用 Go 语言来写系统程序对我来说不算麻烦,我发现我写 Go 写的还挺开心的。我的很多 C 编码技能还可以继续使用,我还收获了垃圾回收机制以及并发编程机制,这何乐而不为? ([这里][8]有关于我第一次写 Go 的经验的更多信息) -本来我像把 Rust 也视为 “C 语言要过时了” 的例证,但是在学习这们语言并尝试使用这门语言编程之后,我觉得[这种语言现在还没有做好准备][9]。也许 5 年以后,它才会成为 C 语言的对手。 +本来我想把 Rust 也视为 “C 语言要过时了” 的例证,但是在学习并尝试使用了这门语言编程之后,我觉得[这种语言现在还没有做好准备][9]。也许 5 年以后,它才会成为 C 语言的对手。 -随着 2017 的尾声来临,我们已经发现了一个相对成熟的语言,其和 C 类似,能够胜任 C 语言的大部分工作场景(我在下面会准确描述),在几年以后,这个语言届的新星可能就会取得成功。 +随着 2017 的尾声来临,我们已经发现了一个相对成熟的语言,其和 C 类似,能够胜任 C 语言的大部分工作场景(我在下面会准确描述),在几年以后,这个语言界的新星可能就会取得成功。 -这件事意义重大。如果你不长远地回顾历史,你可能看不出来这件事情的伟大性。_三十年了_ —— 这几乎就是我作为一个程序员的全部生涯,我们都没有等到一个 C 语言的继任者,也无法遥望 C 之后的系统编程会是什么样子的。而现在,我们面前突然有了后 C 时代的两种不同的展望和未来... 
+这件事意义重大。如果你不长远地回顾历史,你可能看不出来这件事情的伟大性。_三十年了_ —— 这几乎就是我作为一个程序员的全部生涯,我们都没有等到一个 C 语言的继任者,也无法遥望 C 之后的系统编程会是什么样子的。而现在,我们面前突然有了后 C 时代的两种不同的展望和未来…… -...另一种展望则是下面这个语言留给我们的。我的一个朋友正在开发一个他称之为 "Cx" 的语言,这个语言在 C 语言上做了很少的改动,使得其能够支持类型安全;他的项目的目的就是要创建一个能够在最少人力参与的情况下把古典 C 语言修改为新语言的程序。我不会指出这位朋友的名字,免得给他太多压力,让他做出太多不切实际的保证。但是他的实现方法真的很是有意思,我会尽量给他募集资金。 +……另一种展望则是下面这个语言留给我们的。我的一个朋友正在开发一个他称之为 “Cx” 的语言,这个语言在 C 语言上做了很少的改动,使得其能够支持类型安全;他的项目的目的就是要创建一个能够在最少人力参与的情况下把古典 C 语言修改为新语言的程序。我不会指出这位朋友的名字,免得给他太多压力,让他做出太多不切实际的保证。但是他的实现方法真的很是有意思,我会尽量给他募集资金。 -现在,我们看到了可以替代 C 语言实现系统编程的三种不同的可能的道路。而就在两年之前,我们的眼前还是一篇漆黑。我重复一遍:这件事情意义重大。 +现在,我们看到了可以替代 C 语言实现系统编程的三种不同的可能的道路。而就在两年之前,我们的眼前还是一片漆黑。我重复一遍:这件事情意义重大。 -我是在说 C 语言将要灭绝吗?不是这样的,在可预见的未来里,C 语言还会是操作系统的内核编程以及设备固件编程的主流语言,在这些场景下,尽力压榨硬件性能的古老命令还在奏效,尽管它可能不是那么安全。 +我是在说 C 语言将要灭绝吗?不是这样的,在可预见的未来里,C 语言还会是操作系统的内核编程以及设备固件编程的主流语言,在这些场景下,尽力压榨硬件性能的古老规则还在奏效,尽管它可能不是那么安全。 -现在那些将要被 C 的继任者攻破的领域就是我之前提到的我经常涉及的领域 —— 比如 GPSD 以及 NTPsec ,系统服务以及那些因为历史原因而使用 C 语言写的进程。还有就是以 DNS 服务器以及邮箱 —— 那些需要以机器速度而不是人类的速度运行的系统程序。 +现在那些将要被 C 的继任者攻破的领域就是我之前提到的我经常涉及的领域 —— 比如 GPSD 以及 NTPsec、系统服务以及那些因为历史原因而使用 C 语言写的进程。还有就是以 DNS 服务器以及邮件传输代理 —— 那些需要以机器速度而不是人类的速度运行的系统程序。 -现在我们可以对后 C 时代的未来窥见一斑,即上述这类领域的代码都可以使用那些具有强大内存安全特性的 C 语言的替代者实现。Go , Rust 或者 Cx ,无论是哪个,都可能使 C 的存在被弱化。比如,如果我现在再来重新实现一遍 NTP ,我可能就会毫不犹豫的使用 Go 语言去完成。 +现在我们可以对后 C 时代的未来窥见一斑,即上述这类领域的代码都可以使用那些具有强大内存安全特性的 C 语言的替代者实现。Go 、Rust 或者 Cx ,无论是哪个,都可能使 C 的存在被弱化。比如,如果我现在再来重新实现一遍 NTP ,我可能就会毫不犹豫的使用 Go 语言去完成。 -------------------------------------------------------------------------------- @@ -71,7 +71,7 @@ via: http://esr.ibiblio.org/?p=7711 作者:[Eric Raymond][a] 译者:[name1e5s](https://github.com/name1e5s) -校对:[yunfengHe](https://github.com/yunfengHe) +校对:[yunfengHe](https://github.com/yunfengHe), [wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 378389bc1cbb627a062502f068dc33a8ad2c6ac7 Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 23 Jan 2018 17:45:02 +0800 Subject: [PATCH 196/226] PUB:20171107 The long 
goodbye to C.md @name1e5s @yunfengHe --- ... The long goodbye to C.md => 20171107 The long goodbye to C.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename translated/talk/20171107 The long goodbye to C.md => 20171107 The long goodbye to C.md (100%) diff --git a/translated/talk/20171107 The long goodbye to C.md b/20171107 The long goodbye to C.md similarity index 100% rename from translated/talk/20171107 The long goodbye to C.md rename to 20171107 The long goodbye to C.md From 959ed9c5c8399c26a8fcb7115ba9636a004e6a66 Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 23 Jan 2018 17:48:41 +0800 Subject: [PATCH 197/226] PUB:20171107 The long goodbye to C.md @name1e5s @yunfengHe --- .../20171107 The long goodbye to C.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename 20171107 The long goodbye to C.md => published/20171107 The long goodbye to C.md (100%) diff --git a/20171107 The long goodbye to C.md b/published/20171107 The long goodbye to C.md similarity index 100% rename from 20171107 The long goodbye to C.md rename to published/20171107 The long goodbye to C.md From 0b268461a0b3cc85ba77b7c6e360d164257bc2d4 Mon Sep 17 00:00:00 2001 From: ChenYi <31087327+cyleft@users.noreply.github.com> Date: Tue, 23 Jan 2018 20:43:06 +0800 Subject: [PATCH 198/226] Delete 20180110 Best Linux Screenshot and Screencasting Tools.md --- ...inux Screenshot and Screencasting Tools.md | 149 ------------------ 1 file changed, 149 deletions(-) delete mode 100644 sources/tech/20180110 Best Linux Screenshot and Screencasting Tools.md diff --git a/sources/tech/20180110 Best Linux Screenshot and Screencasting Tools.md b/sources/tech/20180110 Best Linux Screenshot and Screencasting Tools.md deleted file mode 100644 index 90ab5189f9..0000000000 --- a/sources/tech/20180110 Best Linux Screenshot and Screencasting Tools.md +++ /dev/null @@ -1,149 +0,0 @@ -translated by cyleft. 
- -Best Linux Screenshot and Screencasting Tools -====== -![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/best-linux-screenshot-and-screencasting-tools_orig.jpg) - -There comes a time you want to capture an error on your screen and send it to the developers or want help from _Stack Overflow,_ you need the right tools to take that screenshot and save it or send it. There are tools in the form of programs and others as shell extensions for GNOME. Not to worry, here are the best Linux Screenshot taking tools that you can use to take those screenshots or make a screencast. - -## Best Linux Screenshot Or Screencasting Tools - -### 1\. Shutter - - [![shutter linux screenshot taking tools](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/shutter-linux-screenshot-taking-tools_orig.jpg)][2] - -[Shutter][3] is one of the best Linux screenshot taking tools. It has the advantage of taking different screenshots depending on what you want to take on your screen. After you take the screenshot, it allows you to see the screenshot before saving it after you take the screenshot. It also includes an extension menu that shows up on your top panel for GNOME. That makes accessing the app much easier and much convenient for anyone to use. - -​You can take screenshots of a selection, a window, desktop, window under cursor, section, menu, tooltip or web. Shutter allows you to upload the screenshots directly to the cloud using the preferred cloud services provider. This Linux tool also allows you to edit your screenshots before you save them. It also comes with plugins that you can add or remove. - -To install it, you will have to type the following in the terminal: - -``` -sudo add-apt-repository -y ppa:shutter/ppa -sudo apt-get update && sudo apt-get install shutter -``` - -### 2. 
Vokoscreen - - [![vokoscreen screencasting tool for linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vokoscreen-screencasting-tool-for-linux_orig.jpg)][4] - - -[Vokoscreen][5] is an app that allows you to record your screen as you show around and narrate what you are doing on the screen. It is easy to use, has a simple interface and includes a top panel menu for easy access when you are recording your screen. - -​ - -You can choose to record the whole screen, a window or just a selection of an area. Customizing the recording is easy to get the type of screen recording you want to achieve. Vokoscreen even allows you to create a gif as a screen recording. You can also record yourself using the webcam in case you were narrating as tutorials so that you can engage the learners. Once you are done, you can playback the recording right from the application so that you don’t have to keep navigating to find the recording. - - [![vokoscreen preferences](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vokoscreen-preferences_orig.jpg)][6] - -You can install Vocoscreen from your distro repository. Or download the package from [pkgs.org][7] , select the Linux distro you are using. - -``` -sudo dpkg -i vokoscreen_2.5.0-1_amd64.deb -``` - -### 3. OBS - - [![obs linux screencasting tool](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/obs-linux-screencasting-tool_orig.jpg)][8] - -[OBS][9] can be used to record your screen as well as record streams from the internet. It allows you to see whatever you are recording as you stream or as you narrate your screen recording. It allows you to choose the quality of your recording according to your preferences. It also allows you to choose the type of file you want your recording to save to. In addition to the feature of recording, you can switch to Studio mode allowing you to edit your recording to make a complete video without having to use any other external editing software. 
To install OBS in your Linux distribution, you must have FFmpeg installed on your machine. To install FFmpeg type the following in the terminal for ubuntu 14.04 and earlier: - -``` -sudo add-apt-repository ppa:kirillshkrogalev/ffmpeg-next - -sudo apt-get update && sudo apt-get install ffmpeg -``` - -​For ubuntu 15.04 and later you can just type the following in the terminal to install FFmpeg: - -``` -sudo apt-get install ffmpeg -``` - -​If you have already installed FFmpeg, type the following in the terminal to install OBS: - -``` -sudo add-apt-repository ppa:obsproject/obs-studio - -sudo apt-get update - -sudo apt-get install obs-studio -``` - -### 4. Green Recorder - - [![green recording linux tool](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/green-recording-linux-tool_orig.jpg)][10] - -[Green recorder][11] is a simple interface based program that allows you to record the screen. You can choose what to record including video or just audio and allow you to show the mouse pointer and even follow it as you record your screen. You can record a window or just a selected area on your screen so that only what you want to record shows up in your recording. You can customize the number of frames to record in your final video. In case you want to start recording after a delay, you have the option to configure the delay you wish to set. You have the option to run a command after the recording is done that will run on your machine immediately after you stop recording. - -​ - -To install green recorder, type the following in the terminal: - -``` -sudo add-apt-repository ppa:fossproject/ppa - -sudo apt update && sudo apt install green-recorder -``` - -### 5. Kazam - - [![kazam screencasting tool for linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/kazam-screencasting-tool-for-linux_orig.jpg)][12] - -[Kazam][13] Linux screenshot tool is very popular amongst Linux users. 
It is an intuitive simple to use app that allows you to take a screencast or a screenshot allowing you to customise the delay before taking a screencast or screenshot. It allows you to select the area, window or fullscreen you want to capture. Kazam’s interface is well laid out and not as complicated as other apps. Its features will leave you happy about taking your screenshots. Kazam also includes a system tray icon and menu that allows you to take the screenshot without going to the application itself. - -​​ - -To install Kazam, type the following in the terminal: - -``` -sudo apt-get install kazam -``` - -​If the PPA is not found, you can install it manually using the following commands: - -``` -sudo add-apt-repository ppa:kazam-team/stable-series - -sudo apt-get update && sudo apt-get install kazam -``` - -### 6. Screenshot tool GNOME extension - - [![gnome screenshot extension](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/gnome-screenshot-extension-compressed_orig.jpg)][1] - -There is a GNOME extension just named screenshot tool that always shows up on the system panel until you disable it. It is convenient since it just sits on the system panel until you will trigger it to take a screenshot. The main advantage of this tool is that it is the quickest to access since it is always in your system panel unless you deactivate it in the tweak utility tool. The tool also has a preferences window allowing you to tweak it to your preferences. To install it on your GNOME desktop, head to extensions.gnome.org and search for “_Screenshot Tool”._ - -You must have the gnome extensions chrome extension installed as well as GNOME tweaks tool installed to use the tool. 
- - [![gnome screenshot extension preferences](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/gnome-screenshot-extension-preferences_orig.jpg)][14] - -The **Linux screenshot tools** are quite helpful especially when you don’t know what to do when you come across a problem and want to share the error with [the Linux community][15] or the developers of a program that you are using. Learning developers or programmers or anyone else need it will find these tools useful to share your screenshots. Youtubers and tutorial makers will find the screencasting tools even more useful when they use them to record their tutorials and post them.​ - - --------------------------------------------------------------------------------- - -via: http://www.linuxandubuntu.com/home/best-linux-screenshot-screencasting-tools - -作者:[linuxandubuntu][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.linuxandubuntu.com -[1]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/gnome-screenshot-extension-compressed_orig.jpg -[2]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/shutter-linux-screenshot-taking-tools_orig.jpg -[3]:http://shutter-project.org/ -[4]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vokoscreen-screencasting-tool-for-linux_orig.jpg -[5]:https://github.com/vkohaupt/vokoscreen -[6]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vokoscreen-preferences_orig.jpg -[7]:https://pkgs.org/download/vokoscreen -[8]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/obs-linux-screencasting-tool_orig.jpg -[9]:https://obsproject.com/ -[10]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/green-recording-linux-tool_orig.jpg -[11]:https://github.com/foss-project/green-recorder -[12]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/kazam-screencasting-tool-for-linux_orig.jpg -[13]:https://launchpad.net/kazam 
-[14]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/gnome-screenshot-extension-preferences_orig.jpg -[15]:http://www.linuxandubuntu.com/home/top-10-communities-to-help-you-learn-linux From 6b9baa80b60eba0b6d3966c9d88466f52e8ed2aa Mon Sep 17 00:00:00 2001 From: ChenYi <31087327+cyleft@users.noreply.github.com> Date: Tue, 23 Jan 2018 20:43:44 +0800 Subject: [PATCH 199/226] translated by cyleft 20180110 Best Linux Screenshot and Screencasting Tools.md --- ...inux Screenshot and Screencasting Tools.md | 140 ++++++++++++++++++ 1 file changed, 140 insertions(+) create mode 100644 translated/tech/20180110 Best Linux Screenshot and Screencasting Tools.md diff --git a/translated/tech/20180110 Best Linux Screenshot and Screencasting Tools.md b/translated/tech/20180110 Best Linux Screenshot and Screencasting Tools.md new file mode 100644 index 0000000000..277ded9f69 --- /dev/null +++ b/translated/tech/20180110 Best Linux Screenshot and Screencasting Tools.md @@ -0,0 +1,140 @@ +Linux 最好的图片截取和视频截录工具 +====== +![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/best-linux-screenshot-and-screencasting-tools_orig.jpg) + +这里可能有一个困扰你多时的问题,当你想要获取一张屏幕截图向开发者反馈问题,或是在 _Stack Overflow_ 寻求帮助时,你可能缺乏一个可靠的屏幕截图工具去保存和发送集截图。GNOME 有一些形如程序和 shell 拓展的工具。不必担心,这里有 Linux 最好的屏幕截图工具,供你截取图片或截录视频。 + +## Linux 最好的图片截取和视频截录工具 + +### 1. Shutter + + [![shutter Linux 截图工具](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/shutter-linux-screenshot-taking-tools_orig.jpg)][2] + +[Shutter][3] 可以截取任意你想截取的屏幕,是 Linux 最好的截屏工具之一。得到截屏之后,它还可以在保存截屏之前预览图片。GNOME 面板顶部有一个 Shutter 拓展菜单,使得用户进入软件变得更人性化。 + +你可以选择性的截取窗口、桌面、光标下的面板、自由内容、菜单、提示框或网页。Shutter 允许用户直接上传屏幕截图到设置内首选的云服务器中。它同样允许用户在保存截图之前编辑器图片;同样提供可自由添加或移除的插件。 + +终端内键入下列命令安装此工具: + +``` +sudo add-apt-repository -y ppa:shutter/ppa +sudo apt-get update && sudo apt-get install shutter +``` + +### 2. 
Vokoscreen + + [![vokoscreen Linux 屏幕录制工具](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vokoscreen-screencasting-tool-for-linux_orig.jpg)][4] + + +[Vokoscreen][5] 是一款允许你记录和叙述屏幕活动的软件。它有一个简洁的界面,界面的顶端包含有一个简明的菜单栏,方便用户开始录制视频。 + +你可以选择记录整个屏幕,或是记录一个窗口,抑或是记录一个自由区域,并且自定义保存类型;你甚至可以将屏幕录制记录保存为 gif 文件。当然,你也可以使用网络摄像头录制自己,以便在制作教程解说时增强与学习者的互动。一旦你这么做了,你就可以在应用程序中回放视频记录。 + + [![vokoscreen preferences](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vokoscreen-preferences_orig.jpg)][6] + +你可以从自己发行版的软件仓库中安装 Vokoscreen,或者也可以在 [pkgs.org][7] 选择下载适用于你的发行版的软件包。 + +``` +sudo dpkg -i vokoscreen_2.5.0-1_amd64.deb +``` + +### 3. OBS + + [![obs Linux 视频截录](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/obs-linux-screencasting-tool_orig.jpg)][8] + +[OBS][9] 可以用来录制自己的屏幕,亦可用来录制互联网上的数据流。它允许你看到自己所录制的内容或者当你叙述时的屏幕录制。它允许你根据喜好选择录制视频的品质;它也允许你选择文件的保存类型。除了视频录制功能之外,你还可以切换到 Studio 模式,不借助其他软件编辑视频。要在你的 Linux 系统中安装 OBS,你必须确保你的电脑已安装 FFmpeg。Ubuntu 14.04 或更早的版本安装 FFmpeg 可以使用如下命令: + +``` +sudo add-apt-repository ppa:kirillshkrogalev/ffmpeg-next + +sudo apt-get update && sudo apt-get install ffmpeg +``` + +Ubuntu 15.04 以及之后的版本,你可以在终端中键入如下命令安装 FFmpeg: + +``` +sudo apt-get install ffmpeg +``` + +​如果已经安装好 FFmpeg,在终端中键入如下命令安装 OBS: + +``` +sudo add-apt-repository ppa:obsproject/obs-studio + +sudo apt-get update + +sudo apt-get install obs-studio +``` + +### 4. Green Recorder + + [![屏幕录制工具](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/green-recording-linux-tool_orig.jpg)][10] + +[Green recorder][11] 是一款界面简单的程序,它可以让你记录屏幕。你可以选择包括视频和单纯的音频在内的录制内容,也可以显示鼠标指针,甚至可以跟随鼠标录制视频。同样,你可以选择记录窗口或是自由区域,以便于在自己的记录中保留需要的内容;你还可以自定义保存视频的帧数。如果你想要延迟录制,它提供给你一个选项可以设置出你想要的延迟时间。它还提供一个录制结束的命令运行选项,这样,就可以在视频录制结束后立即运行。​ + +在终端中键入如下命令来安装 green recorder: + +``` +sudo add-apt-repository ppa:fossproject/ppa + +sudo apt update && sudo apt install green-recorder +``` + +### 5.
Kazam + + [![kazam screencasting tool for linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/kazam-screencasting-tool-for-linux_orig.jpg)][12] + +[Kazam][13] 在几乎所有使用截图工具的 Linux 用户中,都十分流行。这是一款简单直观的软件,它可以让你进行屏幕截图或视频录制,也同样允许在截图或录制之前设置延时。它可以让你选择想要抓取的区域、窗口或是整个屏幕。Kazam 的界面布局得非常好,和其他软件相比毫无复杂感。它的特点,就是让你优雅地截图。Kazam 在系统托盘和菜单中都有图标,无需打开应用本身,你就可以开始屏幕截图。​​ + +终端中键入如下命令来安装 Kazam: + +``` +sudo apt-get install kazam +``` + +​如果没有找到 PPA,你需要使用下面的命令安装它: + +``` +sudo add-apt-repository ppa:kazam-team/stable-series + +sudo apt-get update && sudo apt-get install kazam +``` + +### 6. GNOME 拓展截屏工具 + + [![gnome screenshot extension](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/gnome-screenshot-extension-compressed_orig.jpg)][1] + +GNOME 有一个就叫做 screenshot tool 的拓展,只要你没有禁用它,它就会常驻系统面板。由于它常驻系统面板,它会一直等待你的调用来获取截图,方便和易于获取是它最主要的特点;除非你在调整工具中禁用,否则它将一直在你的系统面板中。这个工具也有用来设置首选项的选项窗口。在 extensions.gnome.org 中搜索“_Screenshot Tool_”,在你的 GNOME 中安装它。 + +你需要安装 gnome 拓展的 chrome 拓展以及 GNOME 调整工具才能使用这个工具。 + + [![gnome screenshot 拓展选项](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/gnome-screenshot-extension-preferences_orig.jpg)][14] + +当你碰到一个问题,不知道怎么处理,想要在 [the Linux community][15] 或者其他开发社区分享、寻求帮助的时候, **Linux 截图工具** 尤其合适。学习开发、编程或者其他任何事物的人,都会发现这些工具在分享截图的时候真的很实用。YouTube 用户和教程制作爱好者会发现视频截录工具真的很适合录制可以发表的教程。 + +-------------------------------------------------------------------------------- + +via: http://www.linuxandubuntu.com/home/best-linux-screenshot-screencasting-tools + +作者:[linuxandubuntu][a] +译者:[CYLeft](https://github.com/CYLeft) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.linuxandubuntu.com +[1]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/gnome-screenshot-extension-compressed_orig.jpg +[2]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/shutter-linux-screenshot-taking-tools_orig.jpg +[3]:http://shutter-project.org/
+[4]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vokoscreen-screencasting-tool-for-linux_orig.jpg +[5]:https://github.com/vkohaupt/vokoscreen +[6]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/vokoscreen-preferences_orig.jpg +[7]:https://pkgs.org/download/vokoscreen +[8]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/obs-linux-screencasting-tool_orig.jpg +[9]:https://obsproject.com/ +[10]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/green-recording-linux-tool_orig.jpg +[11]:https://github.com/foss-project/green-recorder +[12]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/kazam-screencasting-tool-for-linux_orig.jpg +[13]:https://launchpad.net/kazam +[14]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/gnome-screenshot-extension-preferences_orig.jpg +[15]:http://www.linuxandubuntu.com/home/top-10-communities-to-help-you-learn-linux From 6aa257abb76a01dc4350f9db5f8e91af0c234c27 Mon Sep 17 00:00:00 2001 From: ChenYi <31087327+cyleft@users.noreply.github.com> Date: Tue, 23 Jan 2018 20:55:40 +0800 Subject: [PATCH 200/226] apply for translation --- ...17 Linux tee Command Explained for Beginners (6 Examples).md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180117 Linux tee Command Explained for Beginners (6 Examples).md b/sources/tech/20180117 Linux tee Command Explained for Beginners (6 Examples).md index e1be9e3da2..4d6ad5442d 100644 --- a/sources/tech/20180117 Linux tee Command Explained for Beginners (6 Examples).md +++ b/sources/tech/20180117 Linux tee Command Explained for Beginners (6 Examples).md @@ -1,3 +1,5 @@ +translated by cyleft + Linux tee Command Explained for Beginners (6 Examples) ====== From fd44d0acc1eb59232ba7a6e10658ddd3f4b2743b Mon Sep 17 00:00:00 2001 From: jessie-pang <35220454+jessie-pang@users.noreply.github.com> Date: Tue, 23 Jan 2018 20:55:43 +0800 Subject: [PATCH 201/226] Delete 20171127 Protecting Your Website From Application Layer DOS Attacks With mod.md --- ... 
Application Layer DOS Attacks With mod.md | 223 ------------------ 1 file changed, 223 deletions(-) delete mode 100644 sources/tech/20171127 Protecting Your Website From Application Layer DOS Attacks With mod.md diff --git a/sources/tech/20171127 Protecting Your Website From Application Layer DOS Attacks With mod.md b/sources/tech/20171127 Protecting Your Website From Application Layer DOS Attacks With mod.md deleted file mode 100644 index c640d776c1..0000000000 --- a/sources/tech/20171127 Protecting Your Website From Application Layer DOS Attacks With mod.md +++ /dev/null @@ -1,223 +0,0 @@ -Translating by jessie-pang - -Protecting Your Website From Application Layer DOS Attacks With mod -====== -There exist many ways of maliciously taking a website offline. The more complicated methods involve technical knowledge of databases and programming. A far simpler method is known as a "Denial Of Service", or "DOS" attack. This attack derives its name from its goal which is to deny your regular clients or site visitors normal website service. - -There are, generally speaking, two forms of DOS attack; - - 1. Layer 3,4 or Network-Layer attacks. - 2. Layer 7 or Application-Layer attacks. - - - -The first type of DOS attack, network-layer, is when a huge quantity of junk traffic is directed at the web server. When the quantity of junk traffic exceeds the capacity of the network infrastructure the website is taken offline. - -The second type of DOS attack, application-layer, is where instead of junk traffic legitimate looking page requests are made. When the number of page requests exceeds the capacity of the web server to serve pages legitimate visitors will not be able to use the site. - -This guide will look at mitigating application-layer attacks. This is because mitigating networking-layer attacks requires huge quantities of available bandwidth and the co-operation of upstream providers. 
This is usually not something that can be protected against through configuration of the web server.
-
-An application-layer attack, at least a modest one, can be protected against through the configuration of a normal web server. Protecting against this form of attack is important because [Cloudflare][1] has [recently reported][2] that the number of network-layer attacks is diminishing while the number of application-layer attacks is increasing.
-
-This guide will explain how to use the Apache2 module [mod_evasive][3] by [zdziarski][4].
-
-In addition, mod_evasive will stop an attacker trying to guess a username/password combination by attempting hundreds of combinations, i.e. a brute-force attack.
-
-Mod_evasive works by keeping a record of the number of requests arriving from each IP address. When this number exceeds one of several thresholds, that IP is served an error page instead. Error pages require far fewer resources than a site page, keeping the site online for legitimate visitors.
-
-### Installing mod_evasive on Ubuntu 16.04
-
-Mod_evasive is contained in the default Ubuntu 16.04 repositories with the package name "libapache2-mod-evasive". A simple `apt-get` will get it installed:
-```
-apt-get update
-apt-get upgrade
-apt-get install libapache2-mod-evasive
-```
-
-We now need to configure mod_evasive.
-
-Its configuration file is located at `/etc/apache2/mods-available/evasive.conf`. By default, all of the module's settings are commented out after installation. Therefore, the module won't interfere with site traffic until the configuration file has been edited.
-```
- #DOSHashTableSize 3097
- #DOSPageCount 2
- #DOSSiteCount 50
- #DOSPageInterval 1
- #DOSSiteInterval 1
- #DOSBlockingPeriod 10
-
- #DOSEmailNotify you@yourdomain.com
- #DOSSystemCommand "su - someuser -c '/sbin/... %s ...'"
- #DOSLogDir "/var/log/mod_evasive"
-```
-
-The first block of directives is as follows:
-
- * **DOSHashTableSize** - The size of the hash table that holds the list of accessing IPs and their request counts.
- * **DOSPageCount** - The threshold number of requests for the same page per DOSPageInterval.
- * **DOSPageInterval** - The amount of time in which mod_evasive counts up the page requests.
- * **DOSSiteCount** - The same as DOSPageCount, but it counts requests from the same IP for any page on the site.
- * **DOSSiteInterval** - The amount of time in which mod_evasive counts up the site requests.
- * **DOSBlockingPeriod** - The amount of time, in seconds, that an IP is blocked for.
-
-If the default configuration shown above is used, then an IP will be blocked if it:
-
- * Requests a single page more than twice a second.
- * Requests more than 50 different pages per second.
-
-If an IP exceeds these thresholds, it is blocked for 10 seconds.
-
-This may not seem like long; however, mod_evasive continues monitoring the page requests even for blocked IPs, and it resets their blocking period each time a new request arrives. As long as an IP is attempting to DOS the site, it will remain blocked.
-
-The remaining directives are:
-
- * **DOSEmailNotify** - An email address that will receive notifications of DOS attacks and of IPs being blocked.
- * **DOSSystemCommand** - A command to run in the event of a DOS.
- * **DOSLogDir** - The directory where mod_evasive keeps some temporary files.
-
-### Configuring mod_evasive
-
-The default configuration is a good place to start, as it should not block any legitimate users. The configuration file with all directives (apart from DOSSystemCommand) uncommented looks like the following:
-```
- DOSHashTableSize 3097
- DOSPageCount 2
- DOSSiteCount 50
- DOSPageInterval 1
- DOSSiteInterval 1
- DOSBlockingPeriod 10
-
- DOSEmailNotify JohnW@example.com
- #DOSSystemCommand "su - someuser -c '/sbin/... %s ...'"
- DOSLogDir "/var/log/mod_evasive"
-```
-
-The log directory must be created and given the same owner as the Apache process. Here it is created at `/var/log/mod_evasive` and given the owner and group of the Apache web server on Ubuntu, `www-data`:
-```
-mkdir /var/log/mod_evasive
-chown www-data:www-data /var/log/mod_evasive
-```
-
-After editing Apache's configuration, especially on a live website, it is always a good idea to check the syntax of the edits before restarting or reloading. This is because a syntax error will stop Apache from restarting, taking your site offline.
-
-Apache comes packaged with a helper command that includes a configuration syntax checker. Simply run the following command to check your edits:
-```
-apachectl configtest
-```
-
-If your configuration is correct, you will get the response:
-```
-Syntax OK
-```
-
-However, if there is a problem, you will be told where it occurred and what it was, e.g.:
-```
-AH00526: Syntax error on line 6 of /etc/apache2/mods-enabled/evasive.conf:
-DOSSiteInterval takes one argument, Set site interval
-Action 'configtest' failed.
-The Apache error log may have more information.
-```
-
-If your configuration passes the configtest, then the module can be safely enabled and Apache reloaded:
-```
-a2enmod evasive
-systemctl reload apache2.service
-```
-
-Mod_evasive is now configured and running.
-
-### Testing
-
-In order to test mod_evasive, we simply need to make enough web requests to the server that we exceed the thresholds, and record the response codes from Apache.
-
-A normal, successful page request will receive the response:
-```
-HTTP/1.1 200 OK
-```
-
-However, one that has been denied by mod_evasive will return the following:
-```
-HTTP/1.1 403 Forbidden
-```
-
-The following script will make HTTP requests to `127.0.0.1:80`, that is, localhost on port 80, as rapidly as possible and print out the response code of every request.
-
-All you need to do is to copy the following bash script into a file, e.g. `mod_evasive_test.sh`:
-```
-#!/bin/bash
-set -e
-
-for i in {1..50}; do
-    curl -s -I 127.0.0.1 | head -n 1
-done
-```
-
-The parts of this script are as follows:
-
- * curl - This is a command to make web requests.
- * -s - Hide the progress meter.
- * -I - Only display the response header information.
- * head - Print the first part of a file.
- * -n 1 - Only display the first line.
-
-Then make it executable:
-```
-chmod 755 mod_evasive_test.sh
-```
-
-When the script is run **before** mod_evasive is enabled, you will see 50 lines of `HTTP/1.1 200 OK` returned.
-
-However, after mod_evasive is enabled you will see the following:
-```
-HTTP/1.1 200 OK
-HTTP/1.1 200 OK
-HTTP/1.1 403 Forbidden
-HTTP/1.1 403 Forbidden
-HTTP/1.1 403 Forbidden
-HTTP/1.1 403 Forbidden
-HTTP/1.1 403 Forbidden
-...
-```
-
-The first two requests were allowed, but once a third was made within the same second, mod_evasive denied any further requests. You will also receive an email, at the address you set with the `DOSEmailNotify` option, letting you know that a DOS attempt was detected.
-
-Mod_evasive is now protecting your site!
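-
-### Whitelisting trusted IPs
-
-One thing the test above makes clear is that your own monitoring probes or load balancers could just as easily trip these thresholds. mod_evasive provides a `DOSWhitelist` directive that excludes trusted addresses from the request counting. The sketch below uses hypothetical placeholder addresses - substitute your own, and check the README shipped with your version of the module to confirm that the directive is supported:
-```
- # Hypothetical example addresses; a trailing wildcard matches a whole range
- DOSWhitelist 127.0.0.1
- DOSWhitelist 192.168.1.*
-```
-
-As with any other change, run `apachectl configtest` and reload Apache afterwards for the whitelist to take effect.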
- --------------------------------------------------------------------------------- - -via: https://bash-prompt.net/guides/mod_proxy/ - -作者:[Elliot Cooper][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://bash-prompt.net/about/ -[1]:https://www.cloudflare.com -[2]:https://blog.cloudflare.com/the-new-ddos-landscape/ -[3]:https://github.com/jzdziarski/mod_evasive -[4]:https://www.zdziarski.com/blog/ From d0f86089cf1707bb29ae86cd8ee9fb5b933f0f11 Mon Sep 17 00:00:00 2001 From: jessie-pang <35220454+jessie-pang@users.noreply.github.com> Date: Tue, 23 Jan 2018 20:56:38 +0800 Subject: [PATCH 202/226] 20171127 Protecting Your Website From Application Layer DOS Attacks With mod.md --- ... Application Layer DOS Attacks With mod.md | 216 ++++++++++++++++++ 1 file changed, 216 insertions(+) create mode 100644 translated/tech/20171127 Protecting Your Website From Application Layer DOS Attacks With mod.md diff --git a/translated/tech/20171127 Protecting Your Website From Application Layer DOS Attacks With mod.md b/translated/tech/20171127 Protecting Your Website From Application Layer DOS Attacks With mod.md new file mode 100644 index 0000000000..7913acd02c --- /dev/null +++ b/translated/tech/20171127 Protecting Your Website From Application Layer DOS Attacks With mod.md @@ -0,0 +1,216 @@ +用 mod 保护您的网站免受应用层 DOS 攻击 +====== + +有多种恶意攻击网站的方法,比较复杂的方法要涉及数据库和编程方面的技术知识。一个更简单的方法被称为“拒绝服务”或“DOS”攻击。这个攻击方法的名字来源于它的意图:使普通客户或网站访问者的正常服务请求被拒绝。 + +一般来说,有两种形式的 DOS 攻击: + + 1. OSI 模型的三、四层,即网络层攻击 + 2. 
OSI 模型的七层,即应用层攻击
+
+第一种类型的 DOS 攻击,即网络层攻击,发生于大量的垃圾流量流向网页服务器时。当垃圾流量超过网络的处理能力时,网站就会宕机。
+
+第二种类型的 DOS 攻击是在应用层,是利用合法的服务请求,而不是垃圾流量。当页面请求数量超过网页服务器能承受的容量时,即使是合法访问者也将无法使用该网站。
+
+本文将着眼于缓解应用层攻击,因为减轻网络层攻击需要大量的可用带宽和上游提供商的合作,这通常不是通过配置网页服务器就可以做到的。
+
+通过配置普通的网页服务器,可以保护网页免受应用层攻击,至少是适度的防护。防止这种形式的攻击是非常重要的,因为 [Cloudflare][1] 最近[报道][2]了网络层攻击的数量正在减少,而应用层攻击的数量则在增加。
+
+本文将解释如何使用由 [zdziarski][4] 开发的 Apache2 模块 [mod_evasive][3]。
+
+另外,mod_evasive 还会阻止攻击者通过尝试数百个用户名/密码组合来猜测登录信息,即暴力破解攻击。
+
+Mod_evasive 会记录来自每个 IP 地址的请求的数量。当这个数字超过几个阈值之一时,就会向该 IP 地址返回一个错误页面。错误页面所需的资源远比正常的网站页面少,这样就能为合法访问者保持网站在线。
+
+### 在 Ubuntu 16.04 上安装 mod_evasive
+
+Ubuntu 16.04 默认的软件库中包含了 mod_evasive,软件包名称为“libapache2-mod-evasive”。您可以使用 `apt-get` 来完成安装:
+```
+apt-get update
+apt-get upgrade
+apt-get install libapache2-mod-evasive
+```
+
+现在我们需要配置 mod_evasive。
+
+它的配置文件位于 `/etc/apache2/mods-available/evasive.conf`。默认情况下,所有模块的设置在安装后都会被注释掉。因此,在修改配置文件之前,模块不会干扰到网站流量。
+```
+ #DOSHashTableSize 3097
+ #DOSPageCount 2
+ #DOSSiteCount 50
+ #DOSPageInterval 1
+ #DOSSiteInterval 1
+ #DOSBlockingPeriod 10
+
+ #DOSEmailNotify you@yourdomain.com
+ #DOSSystemCommand "su - someuser -c '/sbin/... %s ...'"
+ #DOSLogDir "/var/log/mod_evasive"
+```
+
+第一部分参数的含义如下:
+
+ * **DOSHashTableSize** - 用于存放正在访问网站的 IP 地址及其请求数的哈希表的大小。
+ * **DOSPageCount** - 在一个时间间隔内,对同一页面的请求次数阈值。时间间隔由 DOSPageInterval 定义。
+ * **DOSPageInterval** - mod_evasive 统计页面请求次数的时间间隔。
+ * **DOSSiteCount** - 与 DOSPageCount 相同,但统计的是来自同一 IP 地址对网站内任何页面的请求数量。
+ * **DOSSiteInterval** - mod_evasive 统计网站请求次数的时间间隔。
+ * **DOSBlockingPeriod** - 某个 IP 地址被加入黑名单的时长(以秒为单位)。
+
+如果使用上面显示的默认配置,则在如下情况下,一个 IP 地址会被加入黑名单:
+
+ * 每秒请求同一页面超过两次。
+ * 每秒请求 50 个以上不同页面。
+
+如果某个 IP 地址超过了这些阈值,则会被加入黑名单 10 秒钟。
+
+这看起来可能不算久,但是,mod_evasive 将一直监视页面请求,包括在黑名单中的 IP 地址,并在其再次发起请求时重置其封禁计时。只要一个 IP 地址一直尝试对该网站进行 DOS 攻击,它就会始终在黑名单中。
+
+其余的参数是:
+
+ * **DOSEmailNotify** - 用于接收 DOS 攻击通知和被封禁 IP 地址的电子邮件地址。
+ * **DOSSystemCommand** - 检测到 DOS 攻击时运行的命令。
+ * **DOSLogDir** - 用于存放 mod_evasive 临时文件的目录。
+
+### 配置 mod_evasive
+
+默认的配置是一个很好的起点,因为它不该封禁任何合法的用户。取消配置文件中除 DOSSystemCommand 之外所有参数的注释,如下所示:
+```
+ DOSHashTableSize 3097
+ DOSPageCount 2
+ DOSSiteCount 50
+ DOSPageInterval 1
+ DOSSiteInterval 1
+ DOSBlockingPeriod 10
+
+ DOSEmailNotify JohnW@example.com
+ #DOSSystemCommand "su - someuser -c '/sbin/... %s ...'"
+ DOSLogDir "/var/log/mod_evasive"
+```
+
+必须创建日志目录,并且要赋予其与 Apache 进程相同的所有者。这里创建的目录是 `/var/log/mod_evasive`,并且在 Ubuntu 上将该目录的所有者和组设置为 Apache 网页服务器所使用的 `www-data`:
+```
+mkdir /var/log/mod_evasive
+chown www-data:www-data /var/log/mod_evasive
+```
+
+在编辑了 Apache 的配置之后,特别是在正在运行的网站上,在重新启动或重新加载之前,最好检查一下语法。这是因为语法错误会使 Apache 无法启动,从而使网站宕机。
+
+Apache 自带了一个辅助命令,其中包含配置语法检查器。只需运行以下命令来检查您的编辑:
+```
+apachectl configtest
+```
+
+如果您的配置是正确的,会得到如下结果:
+```
+Syntax OK
+```
+
+但是,如果出现问题,您会被告知在哪里发生了什么错误,例如:
+```
+AH00526: Syntax error on line 6 of /etc/apache2/mods-enabled/evasive.conf:
+DOSSiteInterval takes one argument, Set site interval
+Action 'configtest' failed.
+The Apache error log may have more information.
+```
+
+如果您的配置通过了 configtest 测试,那么就可以安全地启用该模块并重新加载 Apache 了:
+```
+a2enmod evasive
+systemctl reload apache2.service
+```
+
+Mod_evasive 现在已配置好并正在运行了。
+
+### 测试
+
+为了测试 mod_evasive,我们只需要向服务器发出足够多的网页请求,使其超出阈值,并记录来自 Apache 的响应代码。
+
+一个正常并成功的页面请求将收到如下响应:
+```
+HTTP/1.1 200 OK
+```
+
+但是,被 mod_evasive 拒绝的请求将返回以下内容:
+```
+HTTP/1.1 403 Forbidden
+```
+
+以下脚本会尽可能迅速地向本地主机(127.0.0.1,localhost)的 80 端口发送 HTTP 请求,并打印出每个请求的响应代码。
+
+你所要做的就是把下面的 bash 脚本复制到一个文件中,例如 `mod_evasive_test.sh`:
+```
+#!/bin/bash
+set -e
+
+for i in {1..50}; do
+    curl -s -I 127.0.0.1 | head -n 1
+done
+```
+
+这个脚本各部分的含义如下:
+
+ * curl - 这是一个发出网络请求的命令。
+ * -s - 隐藏进度表。
+ * -I - 仅显示响应头部信息。
+ * head - 打印文件的第一部分。
+ * -n 1 - 只显示第一行。
+
+然后赋予其执行权限:
+```
+chmod 755 mod_evasive_test.sh
+```
+
+在启用 mod_evasive **之前**运行该脚本,将会看到 50 行 “HTTP/1.1 200 OK” 的返回值。
+
+但是,启用 mod_evasive 后,您将看到以下内容:
+```
+HTTP/1.1 200 OK
+HTTP/1.1 200 OK
+HTTP/1.1 403 Forbidden
+HTTP/1.1 403 Forbidden
+HTTP/1.1 403 Forbidden
+HTTP/1.1 403 Forbidden
+HTTP/1.1 403 Forbidden
+...
+```
+
+前两个请求被允许,但是当同一秒内发出第三个请求时,mod_evasive 拒绝了任何进一步的请求。您还将收到一封发送到您在 `DOSEmailNotify` 选项中设置的地址的电子邮件,通知您检测到一次 DOS 攻击。
+
+Mod_evasive 现在已经在保护您的网站啦!
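+
+### 将可信 IP 加入白名单
+
+上面的测试也说明了一个问题:您自己的监控探针或负载均衡器也很容易触发这些阈值而被封禁。mod_evasive 提供了一个 `DOSWhitelist` 参数,可以将可信的 IP 地址排除在请求计数之外。下面是一个示例片段,其中的 IP 地址只是假设的占位值,请换成您自己的地址,并查阅您所安装版本自带的文档,确认该参数可用:
+```
+ # 假设的示例地址;最后一段使用通配符可以匹配整个网段
+ DOSWhitelist 127.0.0.1
+ DOSWhitelist 192.168.1.*
+```
+
+与其它修改一样,修改后需要运行 `apachectl configtest` 并重新加载 Apache 才能生效。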
+ + +-------------------------------------------------------------------------------- + +via: https://bash-prompt.net/guides/mod_proxy/ + +作者:[Elliot Cooper][a] +译者:[jessie-pang](https://github.com/jessie-pang) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://bash-prompt.net/about/ +[1]:https://www.cloudflare.com +[2]:https://blog.cloudflare.com/the-new-ddos-landscape/ +[3]:https://github.com/jzdziarski/mod_evasive +[4]:https://www.zdziarski.com/blog/ \ No newline at end of file From 7a453bcb8a3964f92bd5a6ff05f40959de989a0b Mon Sep 17 00:00:00 2001 From: ChenYi <31087327+cyleft@users.noreply.github.com> Date: Tue, 23 Jan 2018 21:03:07 +0800 Subject: [PATCH 203/226] =?UTF-8?q?apply=20=E4=BD=9B=E5=A6=82?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../20180117 How To Manage Vim Plugins Using Vundle On Linux.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180117 How To Manage Vim Plugins Using Vundle On Linux.md b/sources/tech/20180117 How To Manage Vim Plugins Using Vundle On Linux.md index 4d4d388ed7..40f6c926f1 100644 --- a/sources/tech/20180117 How To Manage Vim Plugins Using Vundle On Linux.md +++ b/sources/tech/20180117 How To Manage Vim Plugins Using Vundle On Linux.md @@ -1,3 +1,5 @@ +translated by cyleft + How To Manage Vim Plugins Using Vundle On Linux ====== ![](https://www.ostechnix.com/wp-content/uploads/2018/01/Vundle-4-720x340.png) From 0c2c0e96295d6b7a25c3c345bf843c3443df908e Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 23 Jan 2018 22:46:03 +0800 Subject: [PATCH 204/226] PRF&PUB:20171002 Connect To Wifi From The Linux Command Line.md @lujun9972 --- ...ect To Wifi From The Linux Command Line.md | 23 ++++++++----------- 1 file changed, 10 insertions(+), 13 deletions(-) rename {translated/tech => published}/20171002 Connect To Wifi From The Linux Command Line.md (92%) diff --git 
a/translated/tech/20171002 Connect To Wifi From The Linux Command Line.md b/published/20171002 Connect To Wifi From The Linux Command Line.md similarity index 92% rename from translated/tech/20171002 Connect To Wifi From The Linux Command Line.md rename to published/20171002 Connect To Wifi From The Linux Command Line.md index 50c25bd839..c866c10590 100644 --- a/translated/tech/20171002 Connect To Wifi From The Linux Command Line.md +++ b/published/20171002 Connect To Wifi From The Linux Command Line.md @@ -28,22 +28,20 @@ wpa_supplicant 可以作为命令行工具来用。使用一个简单的配置 wpa_supplicant 中有一个工具叫做 `wpa_cli`,它提供了一个命令行接口来管理你的 WiFi 连接。事实上你可以用它来设置任何东西,但是设置一个配置文件看起来要更容易一些。 使用 root 权限运行 `wpa_cli`,然后扫描网络。 -``` +``` # wpa_cli > scan - ``` 扫描过程要花上一点时间,并且会显示所在区域的那些网络。记住你想要连接的那个网络。然后输入 `quit` 退出。 ### 生成配置块并且加密你的密码 -还有更方便的工具可以用来设置配置文件。它接受网络名称和密码作为参数,然后生成一个包含该网路配置块(其中的密码被加密处理了)的配置文件。 +还有更方便的工具可以用来设置配置文件。它接受网络名称和密码作为参数,然后生成一个包含该网路配置块(其中的密码被加密处理了)的配置文件。 + ``` - # wpa_passphrase networkname password > /etc/wpa_supplicant/wpa_supplicant.conf - ``` ### 裁剪你的配置 @@ -51,9 +49,9 @@ wpa_supplicant 中有一个工具叫做 `wpa_cli`,它提供了一个命令行 现在你已经有了一个配置文件了,这个配置文件就是 `/etc/wpa_supplicant/wpa_supplicant.conf`。其中的内容并不多,只有一个网络块,其中有网络名称和密码,不过你可以在此基础上对它进行修改。 用喜欢的编辑器打开该文件,首先删掉说明密码的那行注释。然后,将下面行加到配置最上方。 + ``` ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=wheel - ``` 这一行只是让 `wheel` 组中的用户可以管理 wpa_supplicant。这会方便很多。 @@ -61,29 +59,29 @@ ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=wheel 其他的内容则添加到网络块中。 如果你要连接到一个隐藏网络,你可以添加下面行来通知 wpa_supplicant 先扫描该网络。 + ``` scan_ssid=1 - ``` 下一步,设置协议以及密钥管理方面的配置。下面这些是 WPA2 相关的配置。 + ``` proto=RSN key_mgmt=WPA-PSK - ``` -group 和 pairwise 配置告诉 wpa_supplicant 你是否使用了 CCMP,TKIP,或者两者都用到了。为了安全考虑,你应该只用 CCMP。 +`group` 和 `pairwise` 配置告诉 wpa_supplicant 你是否使用了 CCMP、TKIP,或者两者都用到了。为了安全考虑,你应该只用 CCMP。 + ``` group=CCMP pairwise=CCMP - ``` 最后,设置网络优先级。越高的值越会优先连接。 + ``` priority=10 - ``` ![Complete WPA_Supplicant Settings][1] @@ -94,14 +92,13 @@ priority=10 当然,该方法并不是用于即时配置无线网络的最好方法,但对于定期连接的网络来说,这种方法非常有效。 - 
-------------------------------------------------------------------------------- via: https://linuxconfig.org/connect-to-wifi-from-the-linux-command-line 作者:[Nick Congleton][a] 译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 58cc800431997f1a561c1833bdb129adc2b23f79 Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 23 Jan 2018 23:20:21 +0800 Subject: [PATCH 205/226] PRF:20160808 Top 10 Command Line Games For Linux.md @CYLeft --- ...808 Top 10 Command Line Games For Linux.md | 102 ++++++++++-------- 1 file changed, 60 insertions(+), 42 deletions(-) diff --git a/translated/tech/20160808 Top 10 Command Line Games For Linux.md b/translated/tech/20160808 Top 10 Command Line Games For Linux.md index 86d5e6fcf7..0368635e73 100644 --- a/translated/tech/20160808 Top 10 Command Line Games For Linux.md +++ b/translated/tech/20160808 Top 10 Command Line Games For Linux.md @@ -1,178 +1,196 @@ -Linux 命令行游戏 Top 10 +十大 Linux 命令行游戏 ====== -概要: 本文列举了 **Linux 中最好的命令行游戏**。 -Linux 从来都不是游戏的首选操作系统。尽管近日来 [Linux 的游戏][1] 提供了很多。你可以在 [下载 Linux 游戏][2] 得到许多资源。 +概要: 本文列举了 Linux 中最好的命令行游戏。 -这有专门的 [游戏版 Linux][3]。它确实存在。但是今天,我们并不是要欣赏游戏版 Linux。 +Linux 从来都不是游戏的首选操作系统,尽管近日来 [Linux 的游戏][1]提供了很多,你也可以从许多资源[下载到 Linux 游戏][2]。 + +也有专门的 [游戏版 Linux][3]。没错,确实有。但是今天,我们并不是要欣赏游戏版 Linux。 Linux 有一个超过 Windows 的优势。它拥有一个强大的 Linux 终端。在 Linux 终端上,你可以做很多事情,包括玩 **命令行游戏**。 -当然,毕竟是 Linux 终端的核心爱好者、拥护者。终端游戏轻便,快速,有地狱般的魔力。而这最有意思的事情是,你可以在 Linux 终端上重温大量经典游戏。 - -[推荐阅读:Linux 上游戏,你所需要了解的全部][20] +当然,我们都是 Linux 终端的骨灰粉。终端游戏轻便、快速、有地狱般的魔力。而这最有意思的事情是,你可以在 Linux 终端上重温大量经典游戏。 ### 最好的 Linux 终端游戏 来揭秘这张榜单,找出 Linux 终端最好的游戏。 -### 1. Bastet +#### 1. 
Bastet -谁还没花上几个小时玩 [俄罗斯方块][4] ?简单而且容易上瘾。 Bastet 就是 Linux 版的俄罗斯方块。 +谁还没花上几个小时玩[俄罗斯方块][4]?它简单而且容易上瘾。 Bastet 就是 Linux 版的俄罗斯方块。 ![Linux 终端游戏 Bastet][5] -使用下面的命令获取 Bastet: +使用下面的命令获取 Bastet: + ``` sudo apt install bastet ``` -运行下列命令,在终端上开始这个游戏: +运行下列命令,在终端上开始这个游戏: + ``` bastet ``` -使用空格键旋转方块,方向键控制方块移动 +使用空格键旋转方块,方向键控制方块移动。 -### 2. Ninvaders +#### 2. Ninvaders -Space Invaders(太空侵略者)。我任记得这个游戏里,和我弟弟(哥哥)在高分之路上扭打。这是最好的街机游戏之一。 +Space Invaders(太空侵略者)。我仍记得这个游戏里,和我兄弟为了最高分而比拼。这是最好的街机游戏之一。 ![Linux 终端游戏 nInvaders][6] 复制粘贴这段代码安装 Ninvaders。 + ``` sudo apt-get install ninvaders ``` -使用下面的命令开始游戏: +使用下面的命令开始游戏: + ``` ninvaders ``` -方向键移动太空飞船。空格键设计外星人。 +方向键移动太空飞船。空格键射击外星人。 [推荐阅读:2016 你可以开始的 Linux 游戏 Top 10][21] -### 3. Pacman4console +#### 3. Pacman4console -是的,这个就是街机之王。Pacman4console 是最受欢迎的街机游戏 Pacman(吃豆豆)终端版。 +是的,这个就是街机之王。Pacman4console 是最受欢迎的街机游戏 Pacman(吃豆人)的终端版。 ![Linux 命令行吃豆豆游戏 Pacman4console][7] 使用以下命令获取 pacman4console: + ``` sudo apt-get install pacman4console ``` -打开终端,建议使用最大的终端界面(29x32)。键入以下命令启动游戏: +打开终端,建议使用最大的终端界面。键入以下命令启动游戏: + ``` pacman4console ``` 使用方向键控制移动。 -### 4. nSnake +#### 4. nSnake 记得在老式诺基亚手机里玩的贪吃蛇游戏吗? -这个游戏让我保持对手机着迷很长时间。我曾经设计过各种姿态去获得更长的蛇身。 +这个游戏让我在很长时间内着迷于手机。我曾经设计过各种姿态去获得更长的蛇身。 ![nsnake : Linux 终端上的贪吃蛇游戏][8] 我们拥有 [Linux 终端上的贪吃蛇游戏][9] 得感谢 [nSnake][9]。使用下面的命令安装它: + ``` sudo apt-get install nsnake ``` 键入下面的命令开始游戏: + ``` nsnake ``` -使用方向键控制蛇身,获取豆豆。 +使用方向键控制蛇身并喂它。 -### 5. Greed +#### 5. Greed -Greed 有点像精简调加速和肾上腺素的 Tron(类似贪吃蛇的进化版)。 +Greed 有点像 Tron(类似贪吃蛇的进化版),但是减少了速度,也没那么刺激。 -你当前的位置由‘@’表示。你被数字包围了,你可以在四个方向任意移动。你选择的移动方向上标识的数字,就是你能移动的步数。走过的路不能再走,如果你无路可走,游戏结束。 +你当前的位置由闪烁的 ‘@’ 表示。你被数字所环绕,你可以在四个方向任意移动。 -听起来,似乎我让它变得更复杂了。 +你选择的移动方向上标识的数字,就是你能移动的步数。你将重复这个步骤。走过的路不能再走,如果你无路可走,游戏结束。 + +似乎我让它听起来变得更复杂了。 ![Greed : 命令行上的 Tron][10] 通过下列命令获取 Greed: + ``` sudo apt-get install greed ``` 通过下列命令启动游戏,使用方向键控制游戏。 + ``` greed ``` -### 6. Air Traffic Controller +#### 6. 
Air Traffic Controller -还有什么比做飞行员更有意思的?空中交通管制员。在你的终端中,你可以模拟一个空中要塞。说实话,在终端里管理空中交通蛮有意思的。 +还有什么比做飞行员更有意思的?那就是空中交通管制员。在你的终端中,你可以模拟一个空中交通系统。说实话,在终端里管理空中交通蛮有意思的。 ![Linux 空中交通管理员][11] 使用下列命令安装游戏: + ``` sudo apt-get install bsdgames ``` 键入下列命令启动游戏: + ``` atc ``` ATC 不是孩子玩的游戏。建议查看官方文档。 -### 7. Backgammon(双陆棋) +#### 7. Backgammon(双陆棋) 无论之前你有没有玩过 [双陆棋][12],你都应该看看这个。 它的说明书和控制手册都非常友好。如果你喜欢,可以挑战你的电脑或者你的朋友。 ![Linux 终端上的双陆棋][13] 使用下列命令安装双陆棋: + ``` sudo apt-get install bsdgames ``` 键入下列命令启动游戏: + ``` backgammon ``` -当你需要提示游戏规则时,回复 ‘y’。 +当你提示游戏规则时,回复 ‘y’ 即可。 -### 8. Moon Buggy +#### 8. Moon Buggy -跳跃。疯狂。欢乐时光不必多言。 +跳跃、开火。欢乐时光不必多言。 ![Moon buggy][14] 使用下列命令安装游戏: + ``` sudo apt-get install moon-buggy ``` 使用下列命令启动游戏: + ``` moon-buggy ``` -空格跳跃,‘a’或者‘l’射击。尽情享受吧。 +空格跳跃,‘a’ 或者 ‘l’射击。尽情享受吧。 -### 9. 2048 +#### 9. 2048 2048 可以活跃你的大脑。[2048][15] 是一个策咯游戏,很容易上瘾。以获取 2048 分为目标。 ![Linux 终端上的 2048][16] 复制粘贴下面的命令安装游戏: + ``` wget https://raw.githubusercontent.com/mevdschee/2048.c/master/2048.c @@ -180,28 +198,28 @@ gcc -o 2048 2048.c ``` 键入下列命令启动游戏: + ``` ./2048 ``` -### 10. Tron +#### 10. Tron 没有动作类游戏,这张榜单怎么可能结束? ![Linux 终端游戏 Tron][17] -是的,Linux 终端可以实现这种精力充沛的游戏 Tron。为接下来迅捷的反应做准备吧。无需被下载和安装困扰。一个命令即可启动游戏,你只需要一个网络连接 +是的,Linux 终端可以实现这种精力充沛的游戏 Tron。为接下来迅捷的反应做准备吧。无需被下载和安装困扰。一个命令即可启动游戏,你只需要一个网络连接: + ``` ssh sshtron.zachlatta.com ``` -如果由别的在线游戏者,你可以多人游戏。了解更多:[Linux 终端游戏 Tron][18]. +如果有别的在线游戏者,你可以多人游戏。了解更多:[Linux 终端游戏 Tron][18]。 ### 你看上了哪一款? -朋友,Linux 终端游戏 Top 10,都分享给你了。我猜你现在正准备键入 ctrl+alt+T(终端快捷键) 了。榜单中,那个是你最喜欢的游戏?或者为终端提供其他的有趣的事物?尽情分享吧! - -在 [Abhishek Prakash][19] 回复。 +伙计,十大 Linux 终端游戏都分享给你了。我猜你现在正准备键入 `ctrl+alt+T`(终端快捷键) 了。榜单中那个是你最喜欢的游戏?或者你有其它的终端游戏么?尽情分享吧! 
-------------------------------------------------------------------------------- @@ -209,12 +227,12 @@ via: https://itsfoss.com/best-command-line-games-linux/ 作者:[Aquil Roshan][a] 译者:[CYLeft](https://github.com/CYleft) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:https://itsfoss.com/author/aquil/ -[1]:https://itsfoss.com/linux-gaming-guide/ +[1]:https://linux.cn/article-7316-1.html [2]:https://itsfoss.com/download-linux-games/ [3]:https://itsfoss.com/manjaro-gaming-linux/ [4]:https://en.wikipedia.org/wiki/Tetris From 7f6e919d8d9756055ff53a3619705f55c1194494 Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 23 Jan 2018 23:20:48 +0800 Subject: [PATCH 206/226] PUB:20160808 Top 10 Command Line Games For Linux.md @CYLeft https://linux.cn/article-9270-1.html --- .../20160808 Top 10 Command Line Games For Linux.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20160808 Top 10 Command Line Games For Linux.md (100%) diff --git a/translated/tech/20160808 Top 10 Command Line Games For Linux.md b/published/20160808 Top 10 Command Line Games For Linux.md similarity index 100% rename from translated/tech/20160808 Top 10 Command Line Games For Linux.md rename to published/20160808 Top 10 Command Line Games For Linux.md From 4919d96028b4c98dd12c5ed16af8bc11d57a58f7 Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 23 Jan 2018 23:28:35 +0800 Subject: [PATCH 207/226] PRF&PUB:20171016 Fixing vim in Debian - There and back again.md @geekpi --- ...xing vim in Debian - There and back again.md | 17 +++++++++-------- 1 file changed, 9 insertions(+), 8 deletions(-) rename {translated/tech => published}/20171016 Fixing vim in Debian - There and back again.md (83%) diff --git a/translated/tech/20171016 Fixing vim in Debian - There and back again.md b/published/20171016 Fixing vim in Debian - There and back again.md similarity index 83% 
rename from translated/tech/20171016 Fixing vim in Debian - There and back again.md rename to published/20171016 Fixing vim in Debian - There and back again.md index 36dd92d36a..ebe765c4be 100644 --- a/translated/tech/20171016 Fixing vim in Debian - There and back again.md +++ b/published/20171016 Fixing vim in Debian - There and back again.md @@ -1,18 +1,20 @@ -在 Debian 中修复 vim - 去而复得 +修复 Debian 中的 vim 奇怪行为 ====== -I was wondering for quite some time why on my server vim behaves so stupid with respect to the mouse: Jumping around, copy and paste wasn't possible the usual way. All this despite having -我一直在想,为什么我服务器上 vim 为什么在鼠标方面表现得如此愚蠢:不能像平时那样跳转、复制、粘贴。尽管在 `/etc/vim/vimrc.local` 中已经设置了 + +我一直在想,为什么我服务器上 vim 为什么在鼠标方面表现得如此愚蠢:不能像平时那样跳转、复制、粘贴。尽管在 `/etc/vim/vimrc.local` 中已经设置了。 + ``` - set mouse= +set mouse= ``` 最后我终于知道为什么了,多谢 bug [#864074][1] 并且修复了它。 ![][2] -原因是,当没有 `~/.vimrc` 的时候,vim在 `vimrc.local` **之后**加载 `defaults.vim`,从而覆盖了几个设置。 +原因是,当没有 `~/.vimrc` 的时候,vim 在 `vimrc.local` **之后**加载 `defaults.vim`,从而覆盖了几个设置。 在 `/etc/vim/vimrc` 中有一个注释(虽然我没有看到)解释了这一点: + ``` " Vim will load $VIMRUNTIME/defaults.vim if the user does not have a vimrc. 
" This happens after /etc/vim/vimrc(.local) are loaded, so it will override @@ -22,12 +24,12 @@ I was wondering for quite some time why on my server vim behaves so stupid with " let g:skip_defaults_vim = 1 ``` - 我同意这是在正常安装 vim 后设置 vim 的好方法,但 Debian 包可以做得更好。在错误报告中清楚地说明了这个问题:如果没有 `~/.vimrc`,`/etc/vim/vimrc.local` 中的设置被覆盖。 这在Debian中是违反直觉的 - 而且我也不知道其他包中是否采用类似的方法。 由于 `defaults.vim` 中的设置非常合理,所以我希望使用它,但只修改了一些我不同意的项目,比如鼠标。最后,我在 `/etc/vim/vimrc.local` 中做了以下操作: + ``` if filereadable("/usr/share/vim/vim80/defaults.vim") source /usr/share/vim/vim80/defaults.vim @@ -40,7 +42,6 @@ set mouse= " other override settings go here ``` - 可能有更好的方式来获得一个不依赖于 vim 版本的通用加载语句, 但现在我对此很满意。 -------------------------------------------------------------------------------- @@ -49,7 +50,7 @@ via: https://www.preining.info/blog/2017/10/fixing-vim-in-debian/ 作者:[Norbert Preining][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From cea35edb159b761de50a07573ea8272f431c386c Mon Sep 17 00:00:00 2001 From: Yixun Xu Date: Tue, 23 Jan 2018 10:44:18 -0500 Subject: [PATCH 208/226] Translating: Internet Chemotherapy --- sources/tech/20171218 Internet Chemotherapy.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20171218 Internet Chemotherapy.md b/sources/tech/20171218 Internet Chemotherapy.md index ffe15fb5c1..2d2b950db5 100644 --- a/sources/tech/20171218 Internet Chemotherapy.md +++ b/sources/tech/20171218 Internet Chemotherapy.md @@ -1,3 +1,4 @@ +(yixunx translating) Internet Chemotherapy ====== From 22be4ea11ed698736cf9fedf40a6d7a5709db6cc Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 23 Jan 2018 23:45:14 +0800 Subject: [PATCH 209/226] PRF&PUB:20171008 The most important Firefox command line options.md @lujun9972 --- ... important Firefox command line options.md | 55 ++++++++++++++++++ ... 
important Firefox command line options.md | 58 ------------------- 2 files changed, 55 insertions(+), 58 deletions(-) create mode 100644 published/20171008 The most important Firefox command line options.md delete mode 100644 translated/tech/20171008 The most important Firefox command line options.md diff --git a/published/20171008 The most important Firefox command line options.md b/published/20171008 The most important Firefox command line options.md new file mode 100644 index 0000000000..1f9383906c --- /dev/null +++ b/published/20171008 The most important Firefox command line options.md @@ -0,0 +1,55 @@ +最重要的 Firefox 命令行选项 +====== + +Firefox web 浏览器支持很多命令行选项,可以定制它启动的方式。 + +你可能已经接触过一些了,比如 `-P "配置文件名"` 指定浏览器启动加载时的配置文件,`-private` 开启一个私有会话。 + +本指南会列出对 FIrefox 来说比较重要的那些命令行选项。它并不包含所有的可选项,因为很多选项只用于特定的目的,对一般用户来说没什么价值。 + +你可以在 Firefox 开发者网站上看到[完整][1] 的命令行选项列表。需要注意的是,很多命令行选项对其它基于 Mozilla 的产品一样有效,甚至对某些第三方的程序也有效。 + +### 重要的 Firefox 命令行选项 + +![firefox command line][2] + +#### 配置文件相关选项 + + - `-CreateProfile 配置文件名` -- 创建新的用户配置信息,但并不立即使用它。 + - `-CreateProfile "配置文件名 存放配置文件的目录"` -- 跟上面一样,只是指定了存放配置文件的目录。 + - `-ProfileManager`,或 `-P` -- 打开内置的配置文件管理器。 + - `-P "配置文件名"` -- 使用指定的配置文件启动 Firefox。若指定的配置文件不存在则会打开配置文件管理器。只有在没有其他 Firefox 实例运行时才有用。 + - `-no-remote` -- 与 `-P` 连用来创建新的浏览器实例。它允许你在同一时间运行多个配置文件。 + +#### 浏览器相关选项 + + - `-headless` -- 以无头模式(LCTT 译注:无显示界面)启动 Firefox。Linux 上需要 Firefox 55 才支持,Windows 和 Mac OS X 上需要 Firefox 56 才支持。 + - `-new-tab URL` -- 在 Firefox 的新标签页中加载指定 URL。 + - `-new-window URL` -- 在 Firefox 的新窗口中加载指定 URL。 + - `-private` -- 以隐私浏览模式启动 Firefox。可以用来让 Firefox 始终运行在隐私浏览模式下。 + - `-private-window` -- 打开一个隐私窗口。 + - `-private-window URL` -- 在新的隐私窗口中打开 URL。若已经打开了一个隐私浏览窗口,则在那个窗口中打开 URL。 + - `-search 单词` -- 使用 FIrefox 默认的搜索引擎进行搜索。 + - - `url URL` -- 在新的标签页或窗口中加载 URL。可以省略这里的 `-url`,而且支持打开多个 URL,每个 URL 之间用空格分离。 + +#### 其他选项 + + - `-safe-mode` -- 在安全模式下启动 Firefox。在启动 Firefox 时一直按住 Shift 键也能进入安全模式。 + - `-devtools` -- 启动 Firefox,同时加载并打开开发者工具。 + - `-inspector URL` -- 使用 DOM Inspector 
查看指定的 URL + - `-jsconsole` -- 启动 Firefox,同时打开浏览器终端。 + - `-tray` -- 启动 Firefox,但保持最小化。 + +-------------------------------------------------------------------------------- + +via: https://www.ghacks.net/2017/10/08/the-most-important-firefox-command-line-options/ + +作者:[Martin Brinkmann][a] +译者:[lujun9972](https://github.com/lujun9972) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ghacks.net/author/martin/ +[1]:https://developer.mozilla.org/en-US/docs/Mozilla/Command_Line_Options +[2]:https://cdn.ghacks.net/wp-content/uploads/2017/10/firefox-command-line.png \ No newline at end of file diff --git a/translated/tech/20171008 The most important Firefox command line options.md b/translated/tech/20171008 The most important Firefox command line options.md deleted file mode 100644 index 14daac06cb..0000000000 --- a/translated/tech/20171008 The most important Firefox command line options.md +++ /dev/null @@ -1,58 +0,0 @@ -最重要的 Firefox 命令行选项 -====== -Firefox web 浏览器支持很多命令行选项,可以定制它启动的方式。 - -你可能已经接触过一些了,比如 `-P "profile name"` 指定浏览器启动加载时的配置文件,`-private` 开启一个私有会话。 - -本指南会列出对 FIrefox 来说比较重要的那些命令行选项。它并不包含所有的可选项,因为很多选项只用于特定的目的,对一般用户来说没什么价值。 - -你可以在 Firefox 开发者网站上看到[完整 ][1] 的命令行选项。需要注意的是,很多命令行选项对其他基于 Mozilla 的产品一样有效,甚至对某些第三方的程序也有效。 - -### 重要的 Firefox 命令行选项 - -![firefox command line][2] - -#### Profile 相关选项 - - + **-CreateProfile profile 名称** -- 创建新的用户配置信息,但并不立即使用它。 - + **-CreateProfile "profile 名 存放 profile 的目录"** -- 跟上面一样,只是指定了存放 profile 的目录。 - + **-ProfileManager**,或 **-P** -- 打开内置的 profile 管理器。 - + - **P "profile 名"** -- 使用 n 指定的 profile 启动 Firefox。若指定的 profile 不存在则会打开 profile 管理器。只有在没有其他 Firefox 实例运行时才有用。 - + **-no-remote** -- 与 `-P` 连用来创建新的浏览器实例。它允许你在同一时间运行多个 profile。 - -#### 浏览器相关选项 - - + **-headless** -- 以无头模式启动 Firefox。Linux 上需要 Firefox 55 才支持,Windows 和 Mac OS X 上需要 Firefox 56 才支持。 - + **-new-tab URL** -- 在 Firefox 的新标签页中加载指定 URL。 - + **-new-window URL** -- 在 Firefox 的新窗口中加载指定 
URL。 - + **-private** -- 以私隐私浏览模式启动 Firefox。可以用来让 Firefox 始终运行在隐私浏览模式下。 - + **-private-window** -- 打开一个隐私窗口 - + **-private-window URL** -- 在新的隐私窗口中打开 URL。若已经打开了一个隐私浏览窗口,则在那个窗口中打开 URL。 - + **-search 单词** -- 使用 FIrefox 默认的搜索引擎进行搜索。 - + - **url URL** -- 在新的标签也或窗口中加载 URL。可以省略这里的 `-url`,而且支持打开多个 URL,每个 URL 之间用空格分离。 - - - -#### 其他 options - - + **-safe-mode** -- 在安全模式下启动 Firefox。在启动 Firefox 时一直按住 Shift 键也能进入安全模式。 - + **-devtools** -- 启动 Firefox,同时加载并打开 Developer Tools。 - + **-inspector URL** -- 使用 DOM Inspector 查看指定的 URL - + **-jsconsole** -- 启动 Firefox,同时打开 Browser Console。 - + **-tray** -- 启动 Firefox,但保持最小化。 - - - - --------------------------------------------------------------------------------- - -via: https://www.ghacks.net/2017/10/08/the-most-important-firefox-command-line-options/ - -作者:[Martin Brinkmann][a] -译者:[lujun9972](https://github.com/lujun9972) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ghacks.net/author/martin/ -[1]:https://developer.mozilla.org/en-US/docs/Mozilla/Command_Line_Options From 148029639544725c3164ba66d94d2a0bf7180e62 Mon Sep 17 00:00:00 2001 From: geekpi Date: Wed, 24 Jan 2018 08:59:38 +0800 Subject: [PATCH 210/226] translated --- ...Easy APT Repository - Iain R. Learmonth.md | 85 ------------------- ...Easy APT Repository - Iain R. Learmonth.md | 83 ++++++++++++++++++ 2 files changed, 83 insertions(+), 85 deletions(-) delete mode 100644 sources/tech/20170920 Easy APT Repository - Iain R. Learmonth.md create mode 100644 translated/tech/20170920 Easy APT Repository - Iain R. Learmonth.md diff --git a/sources/tech/20170920 Easy APT Repository - Iain R. Learmonth.md b/sources/tech/20170920 Easy APT Repository - Iain R. Learmonth.md deleted file mode 100644 index 144cff3d7b..0000000000 --- a/sources/tech/20170920 Easy APT Repository - Iain R. 
Learmonth.md +++ /dev/null @@ -1,85 +0,0 @@ -translating---geekpi - -Easy APT Repository · Iain R. Learmonth -====== - -The [PATHspider][5] software I maintain as part of my work depends on some features in [cURL][6] and in [PycURL][7] that have [only][8] [just][9] been mereged or are still [awaiting][10] merge. I need to build a docker container that includes these as Debian packages, so I need to quickly build an APT repository. - -A Debian repository can essentially be seen as a static website and the contents are GPG signed so it doesn't necessarily need to be hosted somewhere trusted (unless availability is critical for your application). I host my blog with [Netlify][11], a static website host, and I figured they would be perfect for this use case. They also [support open source projects][12]. - -There is a CLI tool for netlify which you can install with: -``` -sudo apt install npm -sudo npm install -g netlify-cli - -``` - -The basic steps for setting up a repository are: -``` -mkdir repository -cp /path/to/*.deb repository/ - - -cd - - repository -apt-ftparchive packages . > Packages -apt-ftparchive release . > Release -gpg --clearsign -o InRelease Release -netlify deploy - -``` - -Once you've followed these steps, and created a new site on Netlify, you'll be able to manage this site also through the web interface. A few things you might want to do are set up a custom domain name for your repository, or enable HTTPS with Let's Encrypt. (Make sure you have `apt-transport-https` if you're going to enable HTTPS though.) - -To add this repository to your apt sources: -``` -gpg --export -a YOURKEYID | sudo apt-key add - - - -echo - - - -"deb https://SUBDOMAIN.netlify.com/ /" - - | sudo tee -a /etc/apt/sources.list -sudo apt update - -``` - -You'll now find that those packages are installable. Beware of [APT pinning][13] as you may find that the newer versions on your repository are not actually the preferred versions according to your policy. 
- -**Update** : If you're wanting a solution that would be more suitable for regular use, take a look at [repropro][14]. If you're wanting to have end-users add your apt repository as a third-party repository to their system, please take a look at [this page on the Debian wiki][15] which contains advice on how to instruct users to use your repository. - -**Update 2** : Another commenter has pointed out [aptly][16], which offers a greater feature set and removes some of the restrictions imposed by repropro. I've never use aptly myself so can't comment on specifics, but from the website it looks like it might be a nicely polished tool. - - - --------------------------------------------------------------------------------- - -via: https://iain.learmonth.me/blog/2017/2017w383/ - -作者:[Iain R. Learmonth][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://iain.learmonth.me -[1]:https://iain.learmonth.me/tags/netlify/ -[2]:https://iain.learmonth.me/tags/debian/ -[3]:https://iain.learmonth.me/tags/apt/ -[4]:https://iain.learmonth.me/tags/foss/ -[5]:https://pathspider.net -[6]:http://curl.haxx.se/ -[7]:http://pycurl.io/ -[8]:https://github.com/pycurl/pycurl/pull/456 -[9]:https://github.com/pycurl/pycurl/pull/458 -[10]:https://github.com/curl/curl/pull/1847 -[11]:http://netlify.com/ -[12]:https://www.netlify.com/open-source/ -[13]:https://wiki.debian.org/AptPreferences -[14]:https://mirrorer.alioth.debian.org/ -[15]:https://wiki.debian.org/DebianRepository/UseThirdParty -[16]:https://www.aptly.info/ diff --git a/translated/tech/20170920 Easy APT Repository - Iain R. Learmonth.md b/translated/tech/20170920 Easy APT Repository - Iain R. Learmonth.md new file mode 100644 index 0000000000..8ebb6a2cfd --- /dev/null +++ b/translated/tech/20170920 Easy APT Repository - Iain R. 
Learmonth.md
@@ -0,0 +1,83 @@
+简化 APT 仓库
+======
+
+作为我工作的一部分,我所维护的 [PATHspider][5] 依赖于 [cURL][6] 和 [PycURL][7] 中的一些[刚刚][8][被][9]合并或仍在[等待][10]被合并的功能。我需要构建一个包含这些 Debian 包的 Docker 容器,所以我需要快速构建一个 APT 仓库。
+
+Debian 仓库本质上可以看作是一个静态的网站,而且内容是经过 GPG 签名的,所以它不一定需要托管在某个可信任的地方(除非可用性对你的程序来说是至关重要的)。我在 [Netlify][11] 上托管我的博客,这是一个静态网站主机,我认为用它处理这种用例很完美。他们也[支持开源项目][12]。
+
+你可以用下面的命令安装 netlify 的 CLI 工具:
+```
+sudo apt install npm
+sudo npm install -g netlify-cli
+```
+
+设置仓库的基本步骤是:
+```
+mkdir repository
+cp /path/to/*.deb repository/
+cd repository
+apt-ftparchive packages . > Packages
+apt-ftparchive release . > Release
+gpg --clearsign -o InRelease Release
+netlify deploy
+```
+
+当你完成这些步骤,并在 Netlify 上创建了一个新的网站后,你也可以通过网页来管理这个网站。你可能想要做的一些事情是为你的仓库设置自定义域名,或者使用 Let's Encrypt 启用 HTTPS。(如果你打算启用 HTTPS,请确保你已安装 “apt-transport-https”。)
+
+要将这个仓库添加到你的 apt 源:
+```
+gpg --export -a YOURKEYID | sudo apt-key add -
+echo "deb https://SUBDOMAIN.netlify.com/ /" | sudo tee -a /etc/apt/sources.list
+sudo apt update
+```
+
+你会发现这些软件包是可以安装的。注意 [APT pinning][13],因为你可能会发现,根据你的策略,仓库上的较新版本实际上并不是首选版本。
+
+**更新**:如果你想要一个更适合平时使用的解决方案,请参考 [reprepro][14]。如果你想让最终用户将你的 apt 仓库作为第三方仓库添加到他们的系统中,请查看 [Debian wiki 上的这个页面][15],其中包含关于如何指导用户使用你的仓库的建议。
+
+**更新 2**:有一位评论者指出可以用 [aptly][16],它提供了更多的功能,并消除了 reprepro 的一些限制。我从来没有用过 aptly,所以不能评论具体细节,但从网站看来,这是一个很好的工具。
+
+
+--------------------------------------------------------------------------------
+
+via: https://iain.learmonth.me/blog/2017/2017w383/
+
+作者:[Iain R.
Learmonth][a] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://iain.learmonth.me +[1]:https://iain.learmonth.me/tags/netlify/ +[2]:https://iain.learmonth.me/tags/debian/ +[3]:https://iain.learmonth.me/tags/apt/ +[4]:https://iain.learmonth.me/tags/foss/ +[5]:https://pathspider.net +[6]:http://curl.haxx.se/ +[7]:http://pycurl.io/ +[8]:https://github.com/pycurl/pycurl/pull/456 +[9]:https://github.com/pycurl/pycurl/pull/458 +[10]:https://github.com/curl/curl/pull/1847 +[11]:http://netlify.com/ +[12]:https://www.netlify.com/open-source/ +[13]:https://wiki.debian.org/AptPreferences +[14]:https://mirrorer.alioth.debian.org/ +[15]:https://wiki.debian.org/DebianRepository/UseThirdParty +[16]:https://www.aptly.info/ From 7a360aa113e1919bdce6c4ce7dc3e66e6ca102d3 Mon Sep 17 00:00:00 2001 From: geekpi Date: Wed, 24 Jan 2018 09:06:30 +0800 Subject: [PATCH 211/226] translating --- sources/tech/20180120 The World Map In Your Terminal.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180120 The World Map In Your Terminal.md b/sources/tech/20180120 The World Map In Your Terminal.md index 4ce4bd7542..edc23edf12 100644 --- a/sources/tech/20180120 The World Map In Your Terminal.md +++ b/sources/tech/20180120 The World Map In Your Terminal.md @@ -1,3 +1,5 @@ +translating---geekpi + The World Map In Your Terminal ====== I just stumbled upon an interesting utility. The World map in the Terminal! Yes, It is so cool. Say hello to **MapSCII** , a Braille and ASCII world map renderer for your xterm-compatible terminals. It supports GNU/Linux, Mac OS, and Windows. I thought it is a just another project hosted on GitHub. But I was wrong! It is really impressive what they did there. We can use our mouse pointer to drag and zoom in and out a location anywhere in the world map. 
The other notable features are; From 50e2de9113c02d75e661940fce7e63cd88991772 Mon Sep 17 00:00:00 2001 From: darksun Date: Wed, 24 Jan 2018 10:11:44 +0800 Subject: [PATCH 212/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Best=20Websites?= =?UTF-8?q?=20to=20Download=20Linux=20Games?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...3 Best Websites to Download Linux Games.md | 139 ++++++++++++++++++ 1 file changed, 139 insertions(+) create mode 100644 sources/talk/20170523 Best Websites to Download Linux Games.md diff --git a/sources/talk/20170523 Best Websites to Download Linux Games.md b/sources/talk/20170523 Best Websites to Download Linux Games.md new file mode 100644 index 0000000000..e6d4636fdb --- /dev/null +++ b/sources/talk/20170523 Best Websites to Download Linux Games.md @@ -0,0 +1,139 @@ +Best Websites to Download Linux Games +====== +Brief: New to Linux gaming and wondering where to **download Linux games** from? We list the best resources from where you can **download free Linux games** as well as buy premium Linux games. + +Linux and Games? Once upon a time, it was hard to imagine these two going together. Then time passed and a lot of things happened. Fast-forward to the present, there are thousands and thousands of games available for Linux and more are being developed by both big game companies and independent developers. + +[Gaming on Linux][1] is real now and today we are going to see where you can find games for Linux platform and hunt down the games that you like. + +### Where to download Linux games? + +![Websites to download Linux games][2] + +First and foremost, look into your Linux distribution's software center (if it has one). You should find plenty of games there already. + +But that doesn't mean you should restrict yourself to the software center. Let me list you some websites to download Linux games. + +#### 1. Steam + +If you are a seasoned gamer, you have heard about Steam. 
Yes, if you don't know it already, Steam is available for Linux. Steam recommends Ubuntu but it should run on other major distributions too. And if you are really psyched up about Steam, there is even a dedicated operating system for playing Steam games - [SteamOS][3]. We covered it last year in the [Best Linux Gaming Distribution][4] article. + +![Steam Store][5] + +Steam has the largest games store for Linux. While writing this article, it has exactly 3487 games on Linux platform and that's really huge. You can find games from wide range of genre. As for [Digital Rights Management][6], most of the Steam games have some kind of DRM. + +For using Steam either you will have to install the [Steam client][7] on your Linux distribution or use SteamOS. One of the advantages of Steam is that, after your initial setup, for most of the games you wouldn't need to worry about dependencies and complex installation process. Steam client will do the heavy tasks for you. + +[Steam Store][8] + +#### 2. GOG + +If you are solely interested in DRM-free games, GOG has a pretty large collection of it. At this moment, GOG has 1978 DRM-free games in their library. GOG is kind of famous for its vast collection of DRM-free games. + +![GOG Store][9] + +Officially, GOG games support Ubuntu LTS versions and Linux Mint. So, Ubuntu and its derivatives will have no problem installing them. Installing them on other distributions might need some extra works, such as - installing correct dependencies. + +You will not need any extra clients for downloading games from GOG. All the purchased games will be available in your accounts section. You can download them directly with your favorite download manager. + +[GOG Store][10] + +#### 3. Humble Store + +The Humble Store is another place where you can find various games for Linux. There are both DRM-free and non-DRM-free games available on Humble Store. The non-DRM-free games are generally from the Steam. 
Currently there are about 1826 games for Linux in the Humble Store. + +![The Humble Store][11] + +Humble Store is famous for another reason though. They have a program called [**Humble Indie Bundle**][12] where they offer a bunch of games together with a compelling discount for a limited time period. Another thing about Humble is that when you make a purchase, 10% of the revenue from your purchase goes to charities. + +Humble doesn't have any extra clients for downloading their games. + +[The Humble Store][13] + +#### 4. itch.io + +itch.io is an open marketplace for independent digital creators with a focus on independent video games. itch.io has some of the most interesting and unique games that you can find. Most games available on itch.io are DRM-free. + +![itch.io Store][14] + +Right now, itch.io has 9514 games available in their store for Linux platform. + +itch.io has their own [client][15] for effortlessly downloading, installing, updating and playing their games. + +[itch.io Store][16] + +#### 5. LGDB + +LGDB is an abbreviation for Linux Game Database. Though technically not a game store, it has a large collection of games for Linux along with various information about them. Every game is documented with links of where you can find them. + +![Linux Game Database][17] + +As of now, there are 2046 games entries in the database. They also have very long lists for [Emulators][18], [Tools][19] and [Game Engines][20]. + +[LGDB][21] + +[Annoying Experiences Every Linux Gamer Never Wanted!][27] + +#### 6. Game Jolt + +Game Jolt has a very impressive collection with about 5000 indie games for Linux under their belt. + +![GameJolt Store][22] + +Game Jolt has an (pre-release) [client][23] for downloading, installing, updating and playing games with ease. + +[Game Jolt Store][24] + +### Others + +There are many other stores that sells Linux Games. Also there are many places you can find free games too. 
Here are a couple of them: + + * [**Bundle Stars**][25]: Bundle Stars currently has 814 Linux games and 31 games bundles. + * [**GamersGate**][26]: GamersGate has 595 Linux games as for now. There are both DRM-free and non-DRM-free games. + + + +#### App Stores, Software Center & Repositories + +Linux distribution has their own application stores or repositories. Though not many, but there you can find various games too. + +That's all for today. Did you know there are this many games available for Linux? How do you feel about this? Do you use some other websites to download Linux games? Do share your favorites with us. + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/download-linux-games/ + +作者:[Munif Tanjim][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/munif/ +[1]:https://itsfoss.com/linux-gaming-guide/ +[2]:https://itsfoss.com/wp-content/uploads/2017/05/download-linux-games-800x450.jpg +[3]:http://store.steampowered.com/steamos/ +[4]:https://itsfoss.com/linux-gaming-distributions/ +[5]:https://itsfoss.com/wp-content/uploads/2017/05/Steam-Store-800x382.jpg +[6]:https://www.wikiwand.com/en/Digital_rights_management +[7]:http://store.steampowered.com/about/ +[8]:http://store.steampowered.com/linux +[9]:https://itsfoss.com/wp-content/uploads/2017/05/GOG-Store-800x366.jpg +[10]:https://www.gog.com/games?system=lin_mint,lin_ubuntu +[11]:https://itsfoss.com/wp-content/uploads/2017/05/The-Humble-Store-800x393.jpg +[12]:https://www.humblebundle.com/?partner=itsfoss +[13]:https://www.humblebundle.com/store?partner=itsfoss +[14]:https://itsfoss.com/wp-content/uploads/2017/05/itch.io-Store-800x485.jpg +[15]:https://itch.io/app +[16]:https://itch.io/games/platform-linux +[17]:https://itsfoss.com/wp-content/uploads/2017/05/LGDB-800x304.jpg 
+[18]:https://lgdb.org/emulators +[19]:https://lgdb.org/tools +[20]:https://lgdb.org/engines +[21]:https://lgdb.org/games +[22]:https://itsfoss.com/wp-content/uploads/2017/05/GameJolt-Store-800x357.jpg +[23]:http://gamejolt.com/client +[24]:http://gamejolt.com/games/best?os=linux +[25]:https://www.bundlestars.com/en/games?page=1&platforms=Linux +[26]:https://www.gamersgate.com/games?state=available +[27]:https://itsfoss.com/linux-gaming-problems/ From b64e4915ba1281990c422fb9784dfcba657b16be Mon Sep 17 00:00:00 2001 From: darksun Date: Wed, 24 Jan 2018 10:14:00 +0800 Subject: [PATCH 213/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Manjaro=20Gaming:?= =?UTF-8?q?=20Gaming=20on=20Linux=20Meets=20Manjaro=E2=80=99s=20Awesomenes?= =?UTF-8?q?s?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ng on Linux Meets Manjaro-s Awesomeness.md | 115 ++++++++++++++++++ 1 file changed, 115 insertions(+) create mode 100644 sources/talk/20160605 Manjaro Gaming- Gaming on Linux Meets Manjaro-s Awesomeness.md diff --git a/sources/talk/20160605 Manjaro Gaming- Gaming on Linux Meets Manjaro-s Awesomeness.md b/sources/talk/20160605 Manjaro Gaming- Gaming on Linux Meets Manjaro-s Awesomeness.md new file mode 100644 index 0000000000..78e700de26 --- /dev/null +++ b/sources/talk/20160605 Manjaro Gaming- Gaming on Linux Meets Manjaro-s Awesomeness.md @@ -0,0 +1,115 @@ +Manjaro Gaming: Gaming on Linux Meets Manjaro’s Awesomeness +====== +[![Meet Manjaro Gaming, a Linux distro designed for gamers with the power of Manjaro][1]][1] + +[Gaming on Linux][2]? Yes, that's very much possible and we have a dedicated new Linux distribution aiming for gamers. + +Manjaro Gaming is a Linux distro designed for gamers with the power of Manjaro. Those who have used Manjaro Linux before, know exactly why it is a such a good news for gamers. + +[Manjaro][3] is a Linux distro based on one of the most popular distro - [Arch Linux][4]. 
Arch Linux is widely known for its bleeding-edge nature offering a lightweight, powerful, extensively customizable and up-to-date experience. And while all those are absolutely great, the main drawback is that Arch Linux embraces the DIY (do it yourself) approach where users need to possess a certain level of technical expertise to get along with it. + +Manjaro strips that requirement and makes Arch accessible to newcomers, and at the same time provides all the advanced and powerful features of Arch for the experienced users as well. In short, Manjaro is an user-friendly Linux distro that works straight out of the box. + +The reasons why Manjaro makes a great and extremely suitable distro for gaming are: + + * Manjaro automatically detects computer's hardware (e.g. Graphics cards) + * Automatically installs the necessary drivers and software (e.g. Graphics drivers) + * Various codecs for media files playback comes pre-installed with it + * Has dedicated repositories that deliver fully tested and stable packages + + + +Manjaro Gaming is packed with all of Manjaro's awesomeness with the addition of various tweaks and software packages dedicated to make gaming on Linux smooth and enjoyable. + +![Inside Manjaro Gaming][5] + +#### Tweaks + +Some of the tweaks made on Manjaro Gaming are: + + * Manjaro Gaming uses highly customizable XFCE desktop environment with an overall dark theme. + * Sleep mode is disabled for preventing computers from sleeping while playing games with GamePad or watching long cutscenes. + + + +#### Softwares + +Maintaining Manjaro's tradition of working straight out of the box, Manjaro Gaming comes bundled with various Open Source software to provide often needed functionalities for gamers. 
Some of the software included are: + + * [**KdenLIVE**][6]: Videos editing software for editing gaming videos + * [**Mumble**][7]: Voice chatting software for gamers + * [**OBS Studio**][8]: Software for video recording and live streaming games videos on [Twitch][9] + * **[OpenShot][10]** : Powerful video editor for Linux + * [**PlayOnLinux**][11]: For running Windows games on Linux with [Wine][12] backend + * [**Shutter**][13]: Feature-rich screenshot tool + + + +#### Emulators + +Manjaro Gaming comes with a long list of gaming emulators: + + * **[DeSmuME][14]** : Nintendo DS emulator + * **[Dolphin Emulator][15]** : GameCube and Wii emulator + * [**DOSBox**][16]: DOS Games emulator + * **[FCEUX][17]** : Nintendo Entertainment System (NES), Famicom, and Famicom Disk System (FDS) emulator + * **Gens/GS** : Sega Mega Drive emulator + * **[PCSXR][18]** : PlayStation Emulator + * [**PCSX2**][19]: Playstation 2 emulator + * [**PPSSPP**][20]: PSP emulator + * **[Stella][21]** : Atari 2600 VCS emulator + * [**VBA-M**][22]: Gameboy and GameboyAdvance emulator + * [**Yabause**][23]: Sega Saturn Emulator + * **[ZSNES][24]** : Super Nintendo emulator + + + +#### Others + +There are some terminal add-ons - Color, ILoveCandy and Screenfetch. [Conky Manager][25] with Retro Conky theme is also included. + +**Point to be noted: Not all the features mentioned are included in the current release of Manjaro Gaming (which is 16.03). Some of them are scheduled to be included in the next release - Manjaro Gaming 16.06.** + +### Downloads + +Manjaro Gaming 16.06 is going to be the first proper release of Manjaro Gaming. But if you are interested enough to try it now, Manjaro Gaming 16.03 is available for downloading on the Sourceforge [project page][26]. Go there and grab the ISO. + +How do you feel about this new Gaming Linux distro? Are you thinking of giving it a try? Let us know! 
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/manjaro-gaming-linux/ + +作者:[Munif Tanjim][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/munif/ +[1]:https://itsfoss.com/wp-content/uploads/2016/06/Manjaro-Gaming.jpg +[2]:https://itsfoss.com/linux-gaming-guide/ +[3]:https://manjaro.github.io/ +[4]:https://www.archlinux.org/ +[5]:https://itsfoss.com/wp-content/uploads/2016/06/Manjaro-Gaming-Inside-1024x576.png +[6]:https://kdenlive.org/ +[7]:https://www.mumble.info +[8]:https://obsproject.com/ +[9]:https://www.twitch.tv/ +[10]:http://www.openshot.org/ +[11]:https://www.playonlinux.com +[12]:https://www.winehq.org/ +[13]:http://shutter-project.org/ +[14]:http://desmume.org/ +[15]:https://dolphin-emu.org +[16]:https://www.dosbox.com/ +[17]:http://www.fceux.com/ +[18]:https://pcsxr.codeplex.com +[19]:http://pcsx2.net/ +[20]:http://www.ppsspp.org/ +[21]:http://stella.sourceforge.net/ +[22]:http://vba-m.com/ +[23]:https://yabause.org/ +[24]:http://www.zsnes.com/ +[25]:https://itsfoss.com/conky-gui-ubuntu-1304/ +[26]:https://sourceforge.net/projects/mgame/ From 16a2d811ee2f1e3cbde8424f54b410fd2494c3d1 Mon Sep 17 00:00:00 2001 From: Wuod3n <33994335+Wuod3n@users.noreply.github.com> Date: Wed, 24 Jan 2018 10:35:14 +0800 Subject: [PATCH 214/226] 4 artificial intelligence trends to watch MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 四个需要注意的人工智能趋势 --- .../talk/20180104 4 artificial intelligence trends to watch.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/talk/20180104 4 artificial intelligence trends to watch.md b/sources/talk/20180104 4 artificial intelligence trends to watch.md index 9c84bba147..de791c299b 100644 --- a/sources/talk/20180104 4 artificial intelligence trends to watch.md +++ 
b/sources/talk/20180104 4 artificial intelligence trends to watch.md
@@ -1,4 +1,5 @@
 4 artificial intelligence trends to watch
+Translating by Wuod3n
 ======
 ![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/CIO%20Mentor.png?itok=K-6s_q2C)

From 7055e3fc0c9a12fd163c044d2b737c6cdcb12f24 Mon Sep 17 00:00:00 2001
From: qhwdw
Date: Wed, 24 Jan 2018 11:09:53 +0800
Subject: [PATCH 215/226] Translating by qhwdw

---
 ...0140410 Recursion- dream within a dream.md | 122 ++++++++++++++++++
 1 file changed, 122 insertions(+)
 create mode 100644 sources/tech/20140410 Recursion- dream within a dream.md

diff --git a/sources/tech/20140410 Recursion- dream within a dream.md b/sources/tech/20140410 Recursion- dream within a dream.md
new file mode 100644
index 0000000000..b4e0b25fab
--- /dev/null
+++ b/sources/tech/20140410 Recursion- dream within a dream.md
@@ -0,0 +1,122 @@
+#Translating by qhwdw [Recursion: dream within a dream][1]
+Recursion is magic, but it suffers from the most awkward introduction in programming books. They'll show you a recursive factorial implementation, then warn you that while it sort of works it's terribly slow and might crash due to stack overflows. "You could always dry your hair by sticking your head into the microwave, but watch out for intracranial pressure and head explosions. Or you can use a towel." No wonder people are suspicious of it. Which is too bad, because recursion is the single most powerful idea in algorithms.
+
+Let's take a look at the classic recursive factorial:
+
+Recursive Factorial - factorial.c
+
+```
+#include <stdio.h>
+
+int factorial(int n)
+{
+    int previous = 0xdeadbeef;
+
+    if (n == 0 || n == 1) {
+        return 1;
+    }
+
+    previous = factorial(n-1);
+    return n * previous;
+}
+
+int main(void)
+{
+    int answer = factorial(5);
+    printf("%d\n", answer);
+    return 0;
+}
+```
+
+The idea of a function calling itself is mystifying at first.
To make it concrete, here is exactly what is [on the stack][3] when factorial(5) is called and reaches n == 1: + +![](https://manybutfinite.com/img/stack/factorial.png) + +Each call to factorial generates a new [stack frame][4]. The creation and [destruction][5] of these stack frames is what makes the recursive factorial slower than its iterative counterpart. The accumulation of these frames before the calls start returning is what can potentially exhaust stack space and crash your program. + +These concerns are often theoretical. For example, the stack frames for factorial take 16 bytes each (this can vary depending on stack alignment and other factors). If you are running a modern x86 Linux kernel on a computer, you normally have 8 megabytes of stack space, so factorial could handle n up to ~512,000\. This is a [monstrously large result][6] that takes 8,971,833 bits to represent, so stack space is the least of our problems: a puny integer - even a 64-bit one - will overflow tens of thousands of times over before we run out of stack space. + +We'll look at CPU usage in a moment, but for now let's take a step back from the bits and bytes and look at recursion as a general technique. Our factorial algorithm boils down to pushing integers N, N-1, ... 1 onto a stack, then multiplying them in reverse order. The fact we're using the program's call stack to do this is an implementation detail: we could allocate a stack on the heap and use that instead. While the call stack does have special properties, it's just another data structure at your disposal. I hope the diagram makes that clear. + +Once you see the call stack as a data structure, something else becomes clear: piling up all those integers to multiply them afterwards is one dumbass idea. That is the real lameness of this implementation: it's using a screwdriver to hammer a nail. It's far more sensible to use an iterative process to calculate factorials. 
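The same screwdriver-versus-hammer contrast fits in a few lines of shell. Each level of the recursive version below pays for a subshell forked by the command substitution — the shell analogue of piling up stack frames — while the loop keeps constant state. `fact_rec` and `fact_iter` are names invented for this sketch:

```shell
# Recursive factorial: every level of recursion costs a forked subshell
# via $( ) command substitution, mirroring the stack frames in the C code.
fact_rec() {
    if [ "$1" -le 1 ]; then
        echo 1
    else
        echo $(( $1 * $(fact_rec $(( $1 - 1 ))) ))
    fi
}

# Iterative factorial: one loop, two variables, no growing stack.
fact_iter() {
    n=$1
    acc=1
    while [ "$n" -gt 1 ]; do
        acc=$(( acc * n ))
        n=$(( n - 1 ))
    done
    echo "$acc"
}

fact_rec 5    # prints 120
fact_iter 5   # prints 120
```

For a factorial, the recursion buys nothing but overhead, which is exactly the article's point.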
+
+But there are plenty of screws out there, so let's pick one. There is a traditional interview question where you're given a mouse in a maze, and you must help the mouse search for cheese. Suppose the mouse can turn either left or right in the maze. How would you model and solve this problem?
+
+Like most problems in life, you can reduce this rodent quest to a graph, in particular a binary tree where the nodes represent positions in the maze. You could then have the mouse attempt left turns whenever possible, and backtrack to turn right when it reaches a dead end. Here's the mouse walk in an [example maze][7]:
+
+![](https://manybutfinite.com/img/stack/mazeGraph.png)
+
+Each edge (line) is a left or right turn taking our mouse to a new position. If either turn is blocked, the corresponding edge does not exist. Now we're talking! This process is inherently recursive whether you use the call stack or another data structure. But using the call stack is just so easy:
+
+Recursive Maze Solver [download][2]
+
+```
+#include <stdio.h>
+#include "maze.h"
+
+int explore(maze_t *node)
+{
+    int found = 0;
+
+    if (node == NULL) {
+        return 0;
+    }
+    if (node->hasCheese) {
+        return 1; // found cheese
+    }
+
+    found = explore(node->left) || explore(node->right);
+    return found;
+}
+
+int main(void)
+{
+    int found = explore(&maze);
+    return found ? 0 : 1; // exit status reflects whether cheese was found
+}
+```
+Below is the stack when we find the cheese in maze.c:13. You can also see the detailed [GDB output][8] and [commands][9] used to gather data.
+
+![](https://manybutfinite.com/img/stack/mazeCallStack.png)
+
+This shows recursion in a much better light because it's a suitable problem. And that's no oddity: when it comes to algorithms, recursion is the rule, not the exception. It comes up when we search, when we traverse trees and other data structures, when we parse, when we sort: it's everywhere. You know how pi or e come up in math all the time because they're in the foundations of the universe?
Recursion is like that: it's in the fabric of computation. + +Steven Skienna's excellent [Algorithm Design Manual][10] is a great place to see that in action as he works through his "war stories" and shows the reasoning behind algorithmic solutions to real-world problems. It's the best resource I know of to develop your intuition for algorithms. Another good read is McCarthy's [original paper on LISP][11]. Recursion is both in its title and in the foundations of the language. The paper is readable and fun, it's always a pleasure to see a master at work. + +Back to the maze. While it's hard to get away from recursion here, it doesn't mean it must be done via the call stack. You could for example use a string like RRLL to keep track of the turns, and rely on the string to decide on the mouse's next move. Or you can allocate something else to record the state of the cheese hunt. You'd still be implementing a recursive process, but rolling your own data structure. + +That's likely to be more complex because the call stack fits like a glove. Each stack frame records not only the current node, but also the state of computation in that node (in this case, whether we've taken only the left, or are already attempting the right). Hence the code becomes trivial. Yet we sometimes give up this sweetness for fear of overflows and hopes of performance. That can be foolish. + +As we've seen, the stack is large and frequently other constraints kick in before stack space does. One can also check the problem size and ensure it can be handled safely. The CPU worry is instilled chiefly by two widespread pathological examples: the dumb factorial and the hideous O(2n) [recursive Fibonacci][12] without memoization. These are not indicative of sane stack-recursive algorithms. + +The reality is that stack operations are fast. Often the offsets to data are known exactly, the stack is hot in the [caches][13], and there are dedicated instructions to get things done. 
Meanwhile, there is substantial overhead involved in using your own heap-allocated data structures. It's not uncommon to see people write something that ends up more complex and less performant than call-stack recursion. Finally, modern CPUs are [pretty good][14] and often not the bottleneck. Be careful about sacrificing simplicity and as always with performance, [measure][15]. + +The next post is the last in this stack series, and we'll look at Tail Calls, Closures, and Other Fauna. Then it'll be time to visit our old friend, the Linux kernel. Thanks for reading! + +![](https://manybutfinite.com/img/stack/1000px-Sierpinski-build.png) + +-------------------------------------------------------------------------------- + +via:https://manybutfinite.com/post/recursion/ + +作者:[Gustavo Duarte][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://duartes.org/gustavo/blog/about/ +[1]:https://manybutfinite.com/post/recursion/ +[2]:https://manybutfinite.com/code/x86-stack/maze.c +[3]:https://github.com/gduarte/blog/blob/master/code/x86-stack/factorial-gdb-output.txt +[4]:https://manybutfinite.com/post/journey-to-the-stack +[5]:https://manybutfinite.com/post/epilogues-canaries-buffer-overflows/ +[6]:https://gist.github.com/gduarte/9944878 +[7]:https://github.com/gduarte/blog/blob/master/code/x86-stack/maze.h +[8]:https://github.com/gduarte/blog/blob/master/code/x86-stack/maze-gdb-output.txt +[9]:https://github.com/gduarte/blog/blob/master/code/x86-stack/maze-gdb-commands.txt +[10]:http://www.amazon.com/Algorithm-Design-Manual-Steven-Skiena/dp/1848000693/ +[11]:https://github.com/papers-we-love/papers-we-love/blob/master/comp_sci_fundamentals_and_history/recursive-functions-of-symbolic-expressions-and-their-computation-by-machine-parti.pdf +[12]:http://stackoverflow.com/questions/360748/computational-complexity-of-fibonacci-sequence 
+[13]:https://manybutfinite.com/post/intel-cpu-caches/ +[14]:https://manybutfinite.com/post/what-your-computer-does-while-you-wait/ +[15]:https://manybutfinite.com/post/performance-is-a-science \ No newline at end of file From 64037b89ad5c434ad3a4825355093956164edad5 Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 24 Jan 2018 11:46:45 +0800 Subject: [PATCH 216/226] PRF:20170319 ftrace trace your kernel functions.md @qhwdw --- ...0319 ftrace trace your kernel functions.md | 60 +++++++------------ 1 file changed, 22 insertions(+), 38 deletions(-) diff --git a/translated/tech/20170319 ftrace trace your kernel functions.md b/translated/tech/20170319 ftrace trace your kernel functions.md index ccb5b76256..c166a3c513 100644 --- a/translated/tech/20170319 ftrace trace your kernel functions.md +++ b/translated/tech/20170319 ftrace trace your kernel functions.md @@ -3,43 +3,42 @@ ftrace:跟踪你的内核函数! 大家好!今天我们将去讨论一个调试工具:ftrace,之前我的博客上还没有讨论过它。还有什么能比一个新的调试工具更让人激动呢? -这个非常棒的 ftrace 并不是个新的工具!它大约在 Linux 的 2.6 内核版本中就有了,时间大约是在 2008 年。[这里是我用谷歌能找到的一些文档][10]。因此,如果你是一个调试系统的“老手”,可能早就已经使用它了! +这个非常棒的 ftrace 并不是个新的工具!它大约在 Linux 的 2.6 内核版本中就有了,时间大约是在 2008 年。[这一篇是我用谷歌能找到的最早的文档][10]。因此,如果你是一个调试系统的“老手”,可能早就已经使用它了! -我知道,ftrace 已经存在了大约 2.5 年了,但是还没有真正的去学习它。假设我明天要召开一个专题研究会,那么,关于 ftrace 应该讨论些什么?因此,今天是时间去讨论一下它了! +我知道,ftrace 已经存在了大约 2.5 年了(LCTT 译注:距本文初次写作时),但是还没有真正的去学习它。假设我明天要召开一个专题研究会,那么,关于 ftrace 应该讨论些什么?因此,今天是时间去讨论一下它了! ### 什么是 ftrace? ftrace 是一个 Linux 内核特性,它可以让你去跟踪 Linux 内核的函数调用。为什么要这么做呢?好吧,假设你调试一个奇怪的问题,而你已经得到了你的内核版本中这个问题在源代码中的开始的位置,而你想知道这里到底发生了什么? -每次在调试的时候,我并不会经常去读内核源代码,但是,极个别的情况下会去读它!例如,本周在工作中,我有一个程序在内核中卡死了。查看到底是调用了什么函数、哪些系统涉及其中,能够帮我更好的理解在内核中发生了什么!(在我的那个案例中,它是虚拟内存系统) +每次在调试的时候,我并不会经常去读内核源代码,但是,极个别的情况下会去读它!例如,本周在工作中,我有一个程序在内核中卡死了。查看到底是调用了什么函数,能够帮我更好的理解在内核中发生了什么,哪些系统涉及其中!(在我的那个案例中,它是虚拟内存系统)。 -我认为 ftrace 是一个十分好用的工具(它肯定没有 strace 那样广泛被使用,使用难度也低于它),但是它还是值得你去学习。因此,让我们开始吧! +我认为 ftrace 是一个十分好用的工具(它肯定没有 `strace` 那样使用广泛,也比它难以使用),但是它还是值得你去学习。因此,让我们开始吧! 
### 使用 ftrace 的第一步 -不像 strace 和 perf,ftrace 并不是真正的 **程序** – 你不能只运行 `ftrace my_cool_function`。那样太容易了! +不像 `strace` 和 `perf`,ftrace 并不是真正的 **程序** – 你不能只运行 `ftrace my_cool_function`。那样太容易了! -如果你去读 [使用 Ftrace 调试内核][11],它会告诉你从 `cd /sys/kernel/debug/tracing` 开始,然后做很多文件系统的操作。 +如果你去读 [使用 ftrace 调试内核][11],它会告诉你从 `cd /sys/kernel/debug/tracing` 开始,然后做很多文件系统的操作。 -对于我来说,这种办法太麻烦 – 使用 ftrace 的一个简单例子应该像这样: +对于我来说,这种办法太麻烦——一个使用 ftrace 的简单例子像是这样: ``` cd /sys/kernel/debug/tracing echo function > current_tracer echo do_page_fault > set_ftrace_filter cat trace - ``` -这个文件系统到跟踪系统的接口(“给这些神奇的文件赋值,然后该发生的事情就会发生”)理论上看起来似乎可用,但是它不是我的首选方式。 +这个文件系统是跟踪系统的接口(“给这些神奇的文件赋值,然后该发生的事情就会发生”)理论上看起来似乎可用,但是它不是我的首选方式。 -幸运的是,ftrace 团队也考虑到这个并不友好的用户界面,因此,它有了一个更易于使用的界面,它就是 **trace-cmd**!!!trace-cmd 是一个带命令行参数的普通程序。我们后面将使用它!我在 LWN 上找到了一个 trace-cmd 的使用介绍:[trace-cmd: Ftrace 的一个前端][12]。 +幸运的是,ftrace 团队也考虑到这个并不友好的用户界面,因此,它有了一个更易于使用的界面,它就是 `trace-cmd`!!!`trace-cmd` 是一个带命令行参数的普通程序。我们后面将使用它!我在 LWN 上找到了一个 `trace-cmd` 的使用介绍:[trace-cmd: Ftrace 的一个前端][12]。 -### 开始使用 trace-cmd:让 trace 仅跟踪一个函数 +### 开始使用 trace-cmd:让我们仅跟踪一个函数 首先,我需要去使用 `sudo apt-get install trace-cmd` 安装 `trace-cmd`,这一步很容易。 -对于第一个 ftrace 的演示,我决定去了解我的内核如何去处理一个页面故障。当 Linux 分配内存时,它经常偷懒,(“你并不是  _真的_  计划去使用内存,对吗?”)。这意味着,当一个应用程序尝试去对分配给它的内存进行写入时,就会发生一个页面故障,而这个时候,内核才会真正的为应用程序去分配物理内存。 +对于第一个 ftrace 的演示,我决定去了解我的内核如何去处理一个页面故障。当 Linux 分配内存时,它经常偷懒,(“你并不是_真的_计划去使用内存,对吗?”)。这意味着,当一个应用程序尝试去对分配给它的内存进行写入时,就会发生一个页面故障,而这个时候,内核才会真正的为应用程序去分配物理内存。 我们开始使用 `trace-cmd` 并让它跟踪 `do_page_fault` 函数! @@ -47,7 +46,6 @@ cat trace $ sudo trace-cmd record -p function -l do_page_fault plugin 'function' Hit Ctrl^C to stop recording - ``` 我将它运行了几秒钟,然后按下了 `Ctrl+C`。 让我大吃一惊的是,它竟然产生了一个 2.5MB 大小的名为 `trace.dat` 的跟踪文件。我们来看一下这个文件的内容! 
@@ -68,7 +66,7 @@ $ sudo trace-cmd report ``` -看起来很整洁 – 它展示了进程名(chrome)、进程 ID (15144)、CPU(000)、以及它跟踪的函数。 +看起来很整洁 – 它展示了进程名(chrome)、进程 ID(15144)、CPU ID(000),以及它跟踪的函数。 通过察看整个文件,(`sudo trace-cmd report | grep chrome`)可以看到,我们跟踪了大约 1.5 秒,在这 1.5 秒的时间段内,Chrome 发生了大约 500 个页面故障。真是太酷了!这就是我们做的第一个 ftrace! @@ -81,14 +79,13 @@ $ sudo trace-cmd report ``` sudo trace-cmd record --help # I read the help! sudo trace-cmd record -p function -P 25314 # record for PID 25314 - ``` `sudo trace-cmd report` 输出了 18,000 行。如果你对这些感兴趣,你可以看 [这里是所有的 18,000 行的输出][13]。 18,000 行太多了,因此,在这里仅摘录其中几行。 -当系统调用 `clock_gettime` 运行时,都发生了什么。 +当系统调用 `clock_gettime` 运行的时候,都发生了什么: ``` compat_SyS_clock_gettime @@ -99,7 +96,6 @@ sudo trace-cmd record -p function -P 25314 # record for PID 25314 __getnstimeofday64 arch_counter_read __compat_put_timespec - ``` 这是与进程调试相关的一些东西: @@ -128,10 +124,9 @@ sudo trace-cmd record -p function -P 25314 # record for PID 25314 ``` sudo trace-cmd record -p function_graph -P 25314 - ``` -同样,这里只是一个片断(这次来自 futex 代码) +同样,这里只是一个片断(这次来自 futex 代码): ``` | futex_wake() { @@ -149,7 +144,6 @@ sudo trace-cmd record -p function_graph -P 25314 5.250 us | } 0.583 us | put_page(); + 24.208 us | } - ``` 我们看到在这个示例中,在 `futex_wake` 后面调用了 `get_futex_key`。这是在源代码中真实发生的事情吗?我们可以检查一下!![这里是在 Linux 4.4 中 futex_wake 的定义][15] (我的内核版本是 4.4)。 @@ -170,7 +164,6 @@ futex_wake(u32 __user *uaddr, unsigned int flags, int nr_wake, u32 bitset) return -EINVAL; ret = get_futex_key(uaddr, flags & FLAGS_SHARED, &key, VERIFY_READ); - ``` 如你所见,在 `futex_wake` 中的第一个函数调用真的是 `get_futex_key`! 太棒了!相比阅读内核代码,阅读函数跟踪肯定是更容易的找到结果的办法,并且让人高兴的是,还能看到所有的函数用了多长时间。 @@ -183,7 +176,7 @@ futex_wake(u32 __user *uaddr, unsigned int flags, int nr_wake, u32 bitset) 现在,我们已经知道了怎么去跟踪内核中的函数,真是太酷了! -还有一类我们可以跟踪的东西!有些事件与我们的函数调用并不相符。例如,你可能想去知道当一个程序被调度进入或者离开 CPU 时,都发生了什么事件!你可能想通过“盯着”函数调用计算出来,但是,我告诉你,不可行! +还有一类我们可以跟踪的东西!有些事件与我们的函数调用并不相符。例如,你可能想知道当一个程序被调度进入或者离开 CPU 时,都发生了什么事件!你可能想通过“盯着”函数调用计算出来,但是,我告诉你,不可行! 
由于函数也为你提供了几种事件,因此,你可以看到当重要的事件发生时,都发生了什么事情。你可以使用 `sudo cat /sys/kernel/debug/tracing/available_events` 来查看这些事件的一个列表。  @@ -193,7 +186,6 @@ futex_wake(u32 __user *uaddr, unsigned int flags, int nr_wake, u32 bitset) sudo cat /sys/kernel/debug/tracing/available_events sudo trace-cmd record -e sched:sched_switch sudo trace-cmd report - ``` 输出如下: @@ -207,23 +199,23 @@ sudo trace-cmd report ``` -现在,可以很清楚地看到这些切换,从 PID 24817 -> 15144 -> kernel -> 24817 -> 1561 -> 15114\。(所有的这些事件都发生在同一个 CPU 上) +现在,可以很清楚地看到这些切换,从 PID 24817 -> 15144 -> kernel -> 24817 -> 1561 -> 15114。(所有的这些事件都发生在同一个 CPU 上)。 ### ftrace 是如何工作的? -ftrace 是一个动态跟踪系统。当启动 ftracing 去跟踪内核函数时,**函数的代码会被改变**。因此 – 我们假设去跟踪 `do_page_fault` 函数。内核将在那个函数的汇编代码中插入一些额外的指令,以便每次该函数被调用时去提示跟踪系统。内核之所以能够添加额外的指令的原因是,Linux 将额外的几个 NOP 指令编译进每个函数中,因此,当需要的时候,这里有添加跟踪代码的地方。 +ftrace 是一个动态跟踪系统。当我们开始 ftrace 内核函数时,**函数的代码会被改变**。让我们假设去跟踪 `do_page_fault` 函数。内核将在那个函数的汇编代码中插入一些额外的指令,以便每次该函数被调用时去提示跟踪系统。内核之所以能够添加额外的指令的原因是,Linux 将额外的几个 NOP 指令编译进每个函数中,因此,当需要的时候,这里有添加跟踪代码的地方。 这是一个十分复杂的问题,因为,当不需要使用 ftrace 去跟踪我的内核时,它根本就不影响性能。而当我需要跟踪时,跟踪的函数越多,产生的开销就越大。 (或许有些是不对的,但是,我认为的 ftrace 就是这样工作的) -### 更容易地使用 ftrace:brendan gregg 的工具 & kernelshark +### 更容易地使用 ftrace:brendan gregg 的工具及 kernelshark 正如我们在文件中所讨论的,你需要去考虑很多的关于单个的内核函数/事件直接使用 ftrace 都做了些什么。能够做到这一点很酷!但是也需要做大量的工作! -Brendan Gregg (我们的 linux 调试工具“大神”)有个工具仓库,它使用 ftrace 去提供关于像 I/O 延迟这样的各种事情的信息。这是它在 GitHub 上全部的 [perf-tools][16] 仓库。 +Brendan Gregg (我们的 Linux 调试工具“大神”)有个工具仓库,它使用 ftrace 去提供关于像 I/O 延迟这样的各种事情的信息。这是它在 GitHub 上全部的 [perf-tools][16] 仓库。 -这里有一个权衡(tradeoff),那就是这些工具易于使用,但是被限制仅用于 Brendan Gregg 认可的事情。决定将它做成一个工具,那需要做很多的事情!:) +这里有一个权衡,那就是这些工具易于使用,但是你被限制仅能用于 Brendan Gregg 认可并做到工具里面的方面。它包括了很多方面!:) 另一个工具是将 ftrace 的输出可视化,做的比较好的是 [kernelshark][17]。我还没有用过它,但是看起来似乎很有用。你可以使用 `sudo apt-get install kernelshark` 来安装它。 @@ -236,30 +228,22 @@ Brendan Gregg (我们的 linux 调试工具“大神”)有个工具仓库 最后,这里是我找到的一些 ftrace 方面的文章。它们大部分在 LWN (Linux 新闻周刊)上,它是 Linux 的一个极好的资源(你可以购买一个 [订阅][18]!) 
* [使用 Ftrace 调试内核 - part 1][1] (Dec 2009, Steven Rostedt) - * [使用 Ftrace 调试内核 - part 2][2] (Dec 2009, Steven Rostedt) - * [Linux 函数跟踪器的秘密][3] (Jan 2010, Steven Rostedt) - * [trace-cmd:Ftrace 的一个前端][4] (Oct 2010, Steven Rostedt) - * [使用 KernelShark 去分析实时调试器][5] (2011, Steven Rostedt) - * [Ftrace: 神秘的开关][6] (2014, Brendan Gregg) - * 内核文档:(它十分有用) [Documentation/ftrace.txt][7] - * 你能跟踪的事件的文档 [Documentation/events.txt][8] - * linux 内核开发上的一些 ftrace 设计文档 (不是有用,而是有趣!) [Documentation/ftrace-design.txt][9] -------------------------------------------------------------------------------- via: https://jvns.ca/blog/2017/03/19/getting-started-with-ftrace/ -作者:[Julia Evans ][a] +作者:[Julia Evans][a] 译者:[qhwdw](https://github.com/qhwdw) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 5dbe396fa57073073ffa424c60e9a68d999e7d9a Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 24 Jan 2018 11:47:35 +0800 Subject: [PATCH 217/226] PUB:20170319 ftrace trace your kernel functions.md @qhwdw https://linux.cn/article-9273-1.html --- .../20170319 ftrace trace your kernel functions.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20170319 ftrace trace your kernel functions.md (100%) diff --git a/translated/tech/20170319 ftrace trace your kernel functions.md b/published/20170319 ftrace trace your kernel functions.md similarity index 100% rename from translated/tech/20170319 ftrace trace your kernel functions.md rename to published/20170319 ftrace trace your kernel functions.md From 7dca68e8dfc1e9127a5aea60f963a57c0d098556 Mon Sep 17 00:00:00 2001 From: darksun Date: Wed, 24 Jan 2018 12:44:20 +0800 Subject: [PATCH 218/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20price?= =?UTF-8?q?=20cryptocurrencies?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../20180122 How to price 
cryptocurrencies.md | 73 +++++++++++++++++++ 1 file changed, 73 insertions(+) create mode 100644 sources/talk/20180122 How to price cryptocurrencies.md diff --git a/sources/talk/20180122 How to price cryptocurrencies.md b/sources/talk/20180122 How to price cryptocurrencies.md new file mode 100644 index 0000000000..061090db5a --- /dev/null +++ b/sources/talk/20180122 How to price cryptocurrencies.md @@ -0,0 +1,73 @@ +How to price cryptocurrencies +====== + +![](https://tctechcrunch2011.files.wordpress.com/2018/01/fabian-blank-78637.jpg?w=1279&h=727&crop=1) + +Predicting cryptocurrency prices is a fool's game, yet this fool is about to try. The drivers of a single cryptocurrency's value are currently too varied and vague to make assessments based on any one point. News is trending up on Bitcoin? Maybe there's a hack or an API failure that is driving it down at the same time. Ethereum looking sluggish? Who knows: Maybe someone will build a new smarter DAO tomorrow that will draw in the big spenders. + +So how do you invest? Or, more correctly, on which currency should you bet? + +The key to understanding what to buy or sell and when to hold is to use the tools associated with assessing the value of open-source projects. This has been said again and again, but to understand the current crypto boom you have to go back to the quiet rise of Linux. + +Linux appeared on most radars during the dot-com bubble. At that time, if you wanted to set up a web server, you had to physically ship a Windows server or Sun Sparc Station to a server farm where it would do the hard work of delivering Pets.com HTML. At the same time, Linux, like a freight train running on a parallel path to Microsoft and Sun, would consistently allow developers to build one-off projects very quickly and easily using an OS and toolset that were improving daily. 
In comparison, then, the massive hardware and software expenditures associated with the status quo solution providers were deeply inefficient, and very quickly all of the tech giants that made their money on software now made their money on services or, like Sun, folded. + +From the acorn of Linux an open-source forest bloomed. But there was one clear problem: You couldn't make money from open source. You could consult and you could sell products that used open-source components, but early builders built primarily for the betterment of humanity and not the betterment of their bank accounts. + +Cryptocurrencies have followed the Linux model almost exactly, but cryptocurrencies have cash value. Therefore, when you're working on a crypto project you're not doing it for the common good or for the joy of writing free software. You're writing it with the expectation of a big payout. This, therefore, clouds the value judgements of many programmers. The same folks that brought you Python, PHP, Django and Node.js are back… and now they're programming money. + +### Check the codebase + +This year will be the year of great reckoning in the token sale and cryptocurrency space. While many companies have been able to get away with poor or unusable codebases, I doubt developers will let future companies get away with so much smoke and mirrors. It's safe to say we can [expect posts like this one detailing Storj's anemic codebase to become the norm][1] and, more importantly, that these commentaries will sink many so-called ICOs. Though massive, the money trough that is flowing from ICO to ICO is finite and at some point there will be greater scrutiny paid to incomplete work. + +What does this mean? It means to understand cryptocurrency you have to treat it like a startup. Does it have a good team? Does it have a good product? Does the product work? Would someone want to use it? 
It's far too early to assess the value of cryptocurrency as a whole, but if we assume that tokens or coins will become the way computers pay each other in the future, this lets us hand wave away a lot of doubt. After all, not many people knew in 2000 that Apache was going to beat nearly every other web server in a crowded market or that Ubuntu instances would be so common that you'd spin them up and destroy them in an instant. + +The key to understanding cryptocurrency pricing is to ignore the froth, hype and FUD and instead focus on true utility. Do you think that some day your phone will pay another phone for, say, an in-game perk? Do you expect the credit card system to fold in the face of an Internet of Value? Do you expect that one day you'll move through life splashing out small bits of value in order to make yourself more comfortable? Then by all means, buy and hold or speculate on things that you think will make your life better. If you don't expect the Internet of Value to improve your life the way the TCP/IP internet did (or you do not understand enough to hold an opinion), then you're probably not cut out for this. NASDAQ is always open, at least during banker's hours. + +Still with us? Good, here are my predictions. + +### The rundown + +Here is my assessment of what you should look at when considering an "investment" in cryptocurrencies. There are a number of caveats we must address before we begin: + + * Crypto is not a monetary investment in a real currency, but an investment in a pie-in-the-sky technofuture. That's right: When you buy crypto you're basically assuming that we'll all be on the deck of the Starship Enterprise exchanging them like Galactic Credits one day. This is the only inevitable future for crypto bulls. While you can force crypto into various economic models and hope for the best, the entire platform is techno-utopianist and assumes all sorts of exciting and unlikely things will come to pass in the next few years.
If you have spare cash lying around and you like Star Wars, then you're golden. If you bought bitcoin on a credit card because your cousin told you to, then you're probably going to have a bad time. + * Don't trust anyone. There is no guarantee and, in addition to offering the disclaimer that this is not investment advice and that this is in no way an endorsement of any particular cryptocurrency or even the concept in general, we must understand that everything I write here could be wrong. In fact, everything ever written about crypto could be wrong, and anyone who is trying to sell you a token with exciting upside is almost certainly wrong. In short, everyone is wrong and everyone is out to get you, so be very, very careful. + * You might as well hold. If you bought when BTC was $18,000 you'd best just hold on. Right now you're in Pascal's Wager territory. Yes, maybe you're angry at crypto for screwing you, but maybe you were just stupid and you got in too high and now you might as well keep believing because nothing is certain, or you can admit that you were a bit overeager and now you're being punished for it but that there is some sort of bitcoin god out there watching over you. Ultimately you need to take a deep breath, agree that all of this is pretty freaking weird, and hold on. + + + +Now on with the assessments. + +**Bitcoin** - Expect a rise over the next year that will surpass the current low. Also expect [bumps as the SEC and other federal agencies][2] around the world begin regulating the buying and selling of cryptocurrencies in very real ways. Now that banks are in on the joke they're going to want to reduce risk. Therefore, the bitcoin will become digital gold, a staid, boring and volatility proof safe haven for speculators. Although all but unusable as a real currency, it's good enough for what we need it to do and we also can expect quantum computing hardware to change the face of the oldest and most familiar cryptocurrency. 
+ +**Ethereum** - Ethereum could sustain another few thousand dollars on its price as long as Vitalik Buterin, the creator, doesn't throw too much cold water on it. Like a remorseful Victor Frankenstein, Buterin tends to make amazing things and then denigrate them online, a sort of self-flagellation that is actually quite useful in a space full of froth and outright lies. Ethereum is the closest we've come to a useful cryptocurrency, but it is still the Raspberry Pi of distributed computing -- it's a useful and clever hack that makes it easy to experiment but no one has quite replaced the old systems with new distributed data stores or applications. In short, it's a really exciting technology, but nobody knows what to do with it. + +![][3] + +Where will the price go? It will hover around $1,000 and possibly go as high as $1,500 this year, but this is a principled tech project and not a store of value. + +**Altcoins** - One of the signs of a bubble is when average people make statements like "I couldn't afford a Bitcoin so I bought a Litecoin." This is exactly what I've heard multiple times from multiple people and it's akin to saying "I couldn't buy hamburger so I bought a pound of sawdust instead. I think the kids will eat it, right?" Play at your own risk. Altcoins are a very useful low-risk play for many, and if you create an algorithm -- say to sell when the asset hits a certain level -- then you could make a nice profit. Further, most altcoins will not disappear overnight. I would honestly recommend playing with Ethereum instead of altcoins, but if you're dead set on it, then by all means, enjoy. + +**Tokens** - This is where cryptocurrency gets interesting. Tokens require research, education and a deep understanding of technology to truly assess. Many of the tokens I've seen are true crapshoots and are used primarily as pump and dump vehicles. 
I won't name names, but the rule of thumb is that if you're buying a token on an open market then you've probably already missed out. The value of the token sale as of January 2018 is to allow crypto whales to turn a few-cents-per-token investment into a 100X return. While many founders talk about the magic of their product and the power of their team, token sales are quite simply vehicles to turn 4 cents into 20 cents into a dollar. Multiply that by millions of tokens and you see the draw. + +The answer is simple: find a few projects you like and lurk in their message boards. Assess if the team is competent and figure out how to get in very, very early. Also expect your money to disappear into a rat hole in a few months or years. There are no sure things, and tokens are far too bleeding-edge a technology to assess sanely. + +You are reading this post because you are looking to maintain confirmation bias in a confusing space. That's fine. I've spoken to enough crypto-heads to know that nobody knows anything right now and that collusion and dirty dealings are the rule of the day. Therefore, it's up to folks like us to slowly but surely begin to understand just what's going on and, perhaps, profit from it. At the very least we'll all get a new Linux of Value when we're all done.
+ + + +-------------------------------------------------------------------------------- + +via: https://techcrunch.com/2018/01/22/how-to-price-cryptocurrencies/ + +作者:[John Biggs][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://techcrunch.com/author/john-biggs/ +[1]:https://shitcoin.com/storj-not-a-dropbox-killer-1a9f27983d70 +[2]:http://www.businessinsider.com/bitcoin-price-cryptocurrency-warning-from-sec-cftc-2018-1 +[3]:https://tctechcrunch2011.files.wordpress.com/2018/01/vitalik-twitter-1312.png?w=525&h=615 +[4]:https://unsplash.com/photos/pElSkGRA2NU?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText +[5]:https://unsplash.com/search/photos/cash?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText From bf2d48e1e383f6f881d0a20f90c87915dce49475 Mon Sep 17 00:00:00 2001 From: darksun Date: Wed, 24 Jan 2018 12:47:25 +0800 Subject: [PATCH 219/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Linux=20mkdir=20C?= =?UTF-8?q?ommand=20Explained=20for=20Beginners=20(with=20examples)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Explained for Beginners (with examples).md | 95 +++++++++++++++++++ 1 file changed, 95 insertions(+) create mode 100644 sources/tech/20180123 Linux mkdir Command Explained for Beginners (with examples).md diff --git a/sources/tech/20180123 Linux mkdir Command Explained for Beginners (with examples).md b/sources/tech/20180123 Linux mkdir Command Explained for Beginners (with examples).md new file mode 100644 index 0000000000..a437b20395 --- /dev/null +++ b/sources/tech/20180123 Linux mkdir Command Explained for Beginners (with examples).md @@ -0,0 +1,95 @@ +Linux mkdir Command Explained for Beginners (with examples) +====== + +At any given time on the command line, you are in a directory. 
So it speaks for itself how integral directories are to the command line. In Linux, while the rm command lets you delete directories, it's the **mkdir** command that allows you to create them in the first place. In this tutorial, we will discuss the basics of this tool using some easy-to-understand examples. + +But before we do that, it's worth mentioning that all examples in this tutorial have been tested on Ubuntu 16.04 LTS. + +### Linux mkdir command + +As already mentioned, the mkdir command allows the user to create directories. Following is its syntax: + +``` +mkdir [OPTION]... DIRECTORY... +``` + +And here's how the tool's man page describes it: +``` +Create the DIRECTORY(ies), if they do not already exist. +``` + +The following Q&A-styled examples should give you a better idea of how mkdir works. + +### Q1. How to create directories using mkdir? + +Creating directories is pretty simple: all you need to do is pass the name of the directory you want to create to the mkdir command. + +``` +mkdir [dir-name] +``` + +Following is an example: + +``` +mkdir test-dir +``` + +### Q2. How to make sure parent directories (if non-existent) are created in process? + +Sometimes the requirement is to create a complete directory structure with a single mkdir command. This is possible, but you'll have to use the **-p** command line option. + +For example, if you want to create dir1/dir2/dir3 when none of these directories already exist, you can do this in the following way: + +``` +mkdir -p dir1/dir2/dir3 +``` + +[![How to make sure parent directories \(if non-existent\) are created][1]][2] + +### Q3. How to set permissions for directory being created? + +By default, the mkdir command sets rwx, rwx, and r-x permissions for the directories created through it. + +[![How to set permissions for directory being created][3]][4] + +However, if you want, you can set custom permissions using the **-m** command line option. + +[![mkdir -m command option][5]][6] + +### Q4.
How to make mkdir emit details of operation? + +In case you want mkdir to display complete details of the operation it's performing, then this can be done through the **-v** command line option. + +``` +mkdir -v [dir] +``` + +Here's an example: + +[![How to make mkdir emit details of operation][7]][8] + +### Conclusion + +So you can see mkdir is a pretty simple command to understand and use. It doesn't have any learning curve associated with it. We have covered almost all of its command line options here. Just practice them and you can start using the command in your day-to-day work. In case you want to know more about the tool, head to its [man page][9]. + + +-------------------------------------------------------------------------------- + +via: https://www.howtoforge.com/linux-mkdir-command/ + +作者:[Himanshu Arora][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.howtoforge.com +[1]:https://www.howtoforge.com/images/command-tutorial/mkdir-p.png +[2]:https://www.howtoforge.com/images/command-tutorial/big/mkdir-p.png +[3]:https://www.howtoforge.com/images/command-tutorial/mkdir-def-perm.png +[4]:https://www.howtoforge.com/images/command-tutorial/big/mkdir-def-perm.png +[5]:https://www.howtoforge.com/images/command-tutorial/mkdir-custom-perm.png +[6]:https://www.howtoforge.com/images/command-tutorial/big/mkdir-custom-perm.png +[7]:https://www.howtoforge.com/images/command-tutorial/mkdir-verbose.png +[8]:https://www.howtoforge.com/images/command-tutorial/big/mkdir-verbose.png +[9]:https://linux.die.net/man/1/mkdir From 1917f38a9a4173515e3f0800fbe265fb7d605b35 Mon Sep 17 00:00:00 2001 From: darksun Date: Wed, 24 Jan 2018 12:54:12 +0800 Subject: [PATCH 220/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Linux=20/=20Unix?= =?UTF-8?q?=20Bash=20Shell=20List=20All=20Builtin=20Commands?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 
Content-Transfer-Encoding: 8bit --- ...ix Bash Shell List All Builtin Commands.md | 170 ++++++++++++++++++ 1 file changed, 170 insertions(+) create mode 100644 sources/tech/20130319 Linux - Unix Bash Shell List All Builtin Commands.md diff --git a/sources/tech/20130319 Linux - Unix Bash Shell List All Builtin Commands.md b/sources/tech/20130319 Linux - Unix Bash Shell List All Builtin Commands.md new file mode 100644 index 0000000000..230ca95cba --- /dev/null +++ b/sources/tech/20130319 Linux - Unix Bash Shell List All Builtin Commands.md @@ -0,0 +1,170 @@ +Linux / Unix Bash Shell List All Builtin Commands +====== + +Builtin commands contained within the bash shell itself. How do I list all built-in bash commands on Linux / Apple OS X / *BSD / Unix like operating systems without reading large size bash man page? + +A shell builtin is nothing but command or a function, called from a shell, that is executed directly in the shell itself. The bash shell executes the command directly, without invoking another program. You can view information for Bash built-ins with help command. There are different types of built-in commands. + + +### built-in command types + + 1. Bourne Shell Builtins: Builtin commands inherited from the Bourne Shell. + 2. Bash Builtins: Table of builtins specific to Bash. + 3. Modifying Shell Behavior: Builtins to modify shell attributes and optional behavior. + 4. Special Builtins: Builtin commands classified specially by POSIX. + + + +### How to see all bash builtins + +Type the following command: +``` +$ help +$ help | less +$ help | grep read +``` + +Sample outputs: +``` +GNU bash, version 4.1.5(1)-release (x86_64-pc-linux-gnu) +These shell commands are defined internally. Type `help' to see this list. +Type `help name' to find out more about the function `name'. +Use `info bash' to find out more about the shell in general. +Use `man -k' or `info' to find out more about commands not in this list. 
+ +A star (*) next to a name means that the command is disabled. + + job_spec [&] history [-c] [-d offset] [n] or hist> + (( expression )) if COMMANDS; then COMMANDS; [ elif C> + . filename [arguments] jobs [-lnprs] [jobspec ...] or jobs > + : kill [-s sigspec | -n signum | -sigs> + [ arg... ] let arg [arg ...] + [[ expression ]] local [option] name[=value] ... + alias [-p] [name[=value] ... ] logout [n] + bg [job_spec ...] mapfile [-n count] [-O origin] [-s c> + bind [-lpvsPVS] [-m keymap] [-f filen> popd [-n] [+N | -N] + break [n] printf [-v var] format [arguments] + builtin [shell-builtin [arg ...]] pushd [-n] [+N | -N | dir] + caller [expr] pwd [-LP] + case WORD in [PATTERN [| PATTERN]...)> read [-ers] [-a array] [-d delim] [-> + cd [-L|-P] [dir] readarray [-n count] [-O origin] [-s> + command [-pVv] command [arg ...] readonly [-af] [name[=value] ...] or> + compgen [-abcdefgjksuv] [-o option] > return [n] + complete [-abcdefgjksuv] [-pr] [-DE] > select NAME [in WORDS ... ;] do COMM> + compopt [-o|+o option] [-DE] [name ..> set [--abefhkmnptuvxBCHP] [-o option> + continue [n] shift [n] + coproc [NAME] command [redirections] shopt [-pqsu] [-o] [optname ...] + declare [-aAfFilrtux] [-p] [name[=val> source filename [arguments] + dirs [-clpv] [+N] [-N] suspend [-f] + disown [-h] [-ar] [jobspec ...] test [expr] + echo [-neE] [arg ...] time [-p] pipeline + enable [-a] [-dnps] [-f filename] [na> times + eval [arg ...] trap [-lp] [[arg] signal_spec ...] + exec [-cl] [-a name] [command [argume> true + exit [n] type [-afptP] name [name ...] + export [-fn] [name[=value] ...] or ex> typeset [-aAfFilrtux] [-p] name[=val> + false ulimit [-SHacdefilmnpqrstuvx] [limit> + fc [-e ename] [-lnr] [first] [last] o> umask [-p] [-S] [mode] + fg [job_spec] unalias [-a] name [name ...] + for NAME [in WORDS ... ] ; do COMMAND> unset [-f] [-v] [name ...] 
+ for (( exp1; exp2; exp3 )); do COMMAN> until COMMANDS; do COMMANDS; done + function name { COMMANDS ; } or name > variables - Names and meanings of so> + getopts optstring name [arg] wait [id] + hash [-lr] [-p pathname] [-dt] [name > while COMMANDS; do COMMANDS; done + help [-dms] [pattern ...] { COMMANDS ; } +``` + +### Viewing information for Bash built-ins + +To get detailed info run: +``` +help command +help read +``` +To just get a list of all built-ins with a short description, execute: + +`$ help -d` + +### Find syntax and other options for builtins + +Use the following syntax ' to find out more about the builtins commands: +``` +help name +help cd +help fg +help for +help read +help : +``` + +Sample outputs: +``` +:: : + Null command. +  + No effect; the command does nothing. +  + Exit Status: + Always succeeds +``` + +### Find out if a command is internal (builtin) or external + +Use the type command or command command: +``` +type -a command-name-here +type -a cd +type -a uname +type -a : +type -a ls +``` + + +OR +``` +type -a cd uname : ls uname +``` + +Sample outputs: +``` +cd is a shell builtin +uname is /bin/uname +: is a shell builtin +ls is aliased to `ls --color=auto' +ls is /bin/ls +l is a function +l () +{ + ls --color=auto +} + +``` + +OR +``` +command -V ls +command -V cd +command -V foo +``` + +[![View list bash built-ins command info on Linux or Unix][1]][1] + +### about the author + +The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][2], [Facebook][3], [Google+][4]. 
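The `type -t` check described above also lends itself to scripting when you have a long list of names to audit at once. Below is a small Python sketch of that idea; the `classify` helper and the sample names are our own, not part of the original article. It simply shells out to bash's `type -t` builtin, which prints `builtin`, `file`, `keyword`, `function` or `alias` depending on how bash would resolve the name:

```python
import subprocess

def classify(names):
    """Ask bash how it resolves each name, via its `type -t` builtin.

    Maps each name to bash's answer ('builtin', 'file', 'keyword',
    'function', 'alias') or '' when the name is not found at all.
    """
    kinds = {}
    for name in names:
        proc = subprocess.run(
            ["bash", "-c", f"type -t {name}"],
            capture_output=True,
            text=True,
        )
        kinds[name] = proc.stdout.strip()
    return kinds

if __name__ == "__main__":
    # 'cd' is a builtin, 'if' a shell keyword, 'sh' an external file.
    for name, kind in classify(["cd", "if", "sh", "nosuchcmd"]).items():
        print(f"{name}: {kind or 'not found'}")
```

Note that non-interactive bash does not load aliases, so results can differ from an interactive shell, which is exactly the caveat illustrated by the aliased `ls` in the sample output above.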
+ +-------------------------------------------------------------------------------- + +via: https://www.cyberciti.biz/faq/linux-unix-bash-shell-list-all-builtin-commands/ + +作者:[Vivek Gite][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.cyberciti.biz +[1]:https://www.cyberciti.biz/media/new/faq/2013/03/View-list-bash-built-ins-command-info-on-Linux-or-Unix.jpg +[2]:https://twitter.com/nixcraft +[3]:https://facebook.com/nixcraft +[4]:https://plus.google.com/+CybercitiBiz From 12bc3f8950024757444405f25111f036ae472f7e Mon Sep 17 00:00:00 2001 From: darksun Date: Wed, 24 Jan 2018 13:00:35 +0800 Subject: [PATCH 221/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Never=20miss=20a?= =?UTF-8?q?=20Magazine's=20article,=20build=20your=20own=20RSS=20notificat?= =?UTF-8?q?ion=20system?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... build your own RSS notification system.md | 170 ++++++++++++++++++ 1 file changed, 170 insertions(+) create mode 100644 sources/tech/20180123 Never miss a Magazine-s article, build your own RSS notification system.md diff --git a/sources/tech/20180123 Never miss a Magazine-s article, build your own RSS notification system.md b/sources/tech/20180123 Never miss a Magazine-s article, build your own RSS notification system.md new file mode 100644 index 0000000000..8794ca611a --- /dev/null +++ b/sources/tech/20180123 Never miss a Magazine-s article, build your own RSS notification system.md @@ -0,0 +1,170 @@ +Never miss a Magazine's article, build your own RSS notification system +====== + +![](https://fedoramagazine.org/wp-content/uploads/2018/01/learn-python-rss-notifier.png-945x400.jpg) + +Python is a great programming language to quickly build applications that make our life easier. 
In this article we will learn how to use Python to build an RSS notification system, the goal being to have fun learning Python using Fedora. If you are looking for a complete RSS notifier application, there are a few already packaged in Fedora. + +### Fedora and Python - getting started + +Python 3.6 is available by default in Fedora, and it includes Python's extensive standard library. The standard library provides a collection of modules which make some tasks simpler for us. For example, in our case we will use the [**sqlite3**][1] module to create, add and read data from a database. In the case where a particular problem we are trying to solve is not covered by the standard library, chances are that someone has already developed a module for everyone to use. The best place to search for such modules is the Python Package Index known as [PyPI][2]. In our example we are going to use [**feedparser**][3] to parse an RSS feed. + +Since **feedparser** is not in the standard library, we have to install it in our system. Luckily for us there is an rpm package in Fedora, so the installation of **feedparser** is as simple as: +``` +$ sudo dnf install python3-feedparser +``` + +We now have everything we need to start coding our application. + +### Storing the feed data + +We need to store data from the articles that have already been published so that we send a notification only for new articles. The data we want to store will give us a unique way to identify an article. Therefore we will store the **title** and the **publication date** of the article. + +So let's create our database using the Python **sqlite3** module and a simple SQL query. We are also adding the modules we are going to use later (**feedparser**, **smtplib** and **email**).
+ +#### Creating the Database +``` +#!/usr/bin/python3 +import sqlite3 +import smtplib +from email.mime.text import MIMEText + +import feedparser + +db_connection = sqlite3.connect('/var/tmp/magazine_rss.sqlite') +db = db_connection.cursor() +db.execute(' CREATE TABLE IF NOT EXISTS magazine (title TEXT, date TEXT)') + +``` + +These few lines of code create a new sqlite database stored in a file called 'magazine_rss.sqlite', and then create a new table within the database called 'magazine'. This table has two columns - 'title' and 'date' - that can store data of the type TEXT, which means that the value of each column will be a text string. + +#### Checking the Database for old articles + +Since we only want to add new articles to our database we need a function that will check if the article we get from the RSS feed is already in our database or not. We will use it to decide if we should send an email notification (new article) or not (old article). Ok let's code this function. +``` +def article_is_not_db(article_title, article_date): + """ Check if a given pair of article title and date + is in the database. + Args: + article_title (str): The title of an article + article_date (str): The publication date of an article + Return: + True if the article is not in the database + False if the article is already present in the database + """ + db.execute("SELECT * from magazine WHERE title=? AND date=?", (article_title, article_date)) + if not db.fetchall(): + return True + else: + return False +``` + +The main part of this function is the SQL query we execute to search through the database. We are using a SELECT instruction to define which column of our magazine table we will run the query on. 
We are using the `*` symbol to select all columns (title and date). Then we ask to select only the rows of the table WHERE the article_title and article_date strings are equal to the values of the title and date columns.
+
+To finish, we have some simple logic that will return True if the query did not return any results and False if the query found an article in the database matching our (title, date) pair.
+
+#### Adding a new article to the Database
+
+Now we can code the function to add a new article to the database.
+```
+def add_article_to_db(article_title, article_date):
+    """ Add a new article title and date to the database
+    Args:
+        article_title (str): The title of an article
+        article_date (str): The publication date of an article
+    """
+    db.execute("INSERT INTO magazine VALUES (?,?)", (article_title, article_date))
+    db_connection.commit()
+```
+
+This function is straightforward: we are using a SQL query to INSERT a new row INTO the magazine table with the VALUES of the article_title and article_date. Then we commit the change to make it persistent.
+
+That's all we need from the database's point of view. Let's look at the notification system and how we can use Python to send emails.
+
+### Sending an email notification
+
+Let's create a function to send an email using the Python standard library module **smtplib**. We are also using the **email** module from the standard library to format our email message.
+```
+def send_notification(article_title, article_url):
+    """ Send an email notification about a new article
+
+    Args:
+        article_title (str): The title of an article
+        article_url (str): The url to access the article
+    """
+
+    smtp_server = smtplib.SMTP('smtp.gmail.com', 587)
+    smtp_server.ehlo()
+    smtp_server.starttls()
+    smtp_server.login('your_email@gmail.com', '123your_password')
+    msg = MIMEText(f'\nHi there is a new Fedora Magazine article : {article_title}. \nYou can read it here {article_url}')
+    msg['Subject'] = 'New Fedora Magazine Article Available'
+    msg['From'] = 'your_email@gmail.com'
+    msg['To'] = 'destination_email@gmail.com'
+    smtp_server.send_message(msg)
+    smtp_server.quit()
+```
+
+In this example I am using the Google Mail SMTP server to send an email, but this will work with any email service that provides you with an SMTP server. Most of this function is boilerplate needed to configure access to the SMTP server. You will need to update the code with your email address and credentials.
+
+If you are using two-factor authentication with your Gmail account, you can set up an app password that will give you a unique password to use for this application. Check out this help [page][4].
+
+### Reading the Fedora Magazine RSS feed
+
+We now have functions to store an article in the database and send an email notification; let's create a function that parses the Fedora Magazine RSS feed and extracts the article data.
+```
+def read_article_feed():
+    """ Get articles from RSS feed """
+    feed = feedparser.parse('https://fedoramagazine.org/feed/')
+    for article in feed['entries']:
+        if article_is_not_db(article['title'], article['published']):
+            send_notification(article['title'], article['link'])
+            add_article_to_db(article['title'], article['published'])
+
+if __name__ == '__main__':
+    read_article_feed()
+    db_connection.close()
+```
+
+Here we are making use of the **feedparser.parse** function.
The function returns a dictionary representation of the RSS feed; for the full reference of the representation, you can consult **feedparser**'s [documentation][5].
+
+The RSS feed parser will return the last 10 articles as entries, and then we extract the following information: the title, the link and the date the article was published. As a result, we can now use the functions we have previously defined to check if the article is not in the database, then send a notification email and finally, add the article to our database.
+
+The last if statement is used to execute our read_article_feed function and then close the database connection when we execute our script.
+
+### Running our script
+
+Finally, to run our script we need to give the correct permissions to the file. Next, we make use of the **cron** utility to automatically execute our script every hour (1 minute past the hour). **cron** is a job scheduler that we can use to run a task at a fixed time.
+```
+$ chmod a+x my_rss_notifier.py
+$ sudo cp my_rss_notifier.py /etc/cron.hourly
+```
+
+To keep this tutorial simple, we are using the cron.hourly directory to execute the script every hour. If you wish to learn more about **cron** and how to configure the **crontab**, please read **cron**'s Wikipedia [page][6].
+
+### Conclusion
+
+In this tutorial we have learned how to use Python to create a simple sqlite database, parse an RSS feed and send emails. I hope that this showed you how easily you can build your own application using Python and Fedora.
+
+The script is available on GitHub [here][7].
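The check-then-insert pattern at the heart of the notifier can also be exercised in isolation. Below is a minimal, hypothetical sketch using an in-memory SQLite database and a made-up article title; the two helper functions mirror the ones defined earlier in this tutorial:

```python
import sqlite3

# In-memory database so this sketch leaves no file behind.
db_connection = sqlite3.connect(':memory:')
db = db_connection.cursor()
db.execute('CREATE TABLE IF NOT EXISTS magazine (title TEXT, date TEXT)')

def article_is_not_db(article_title, article_date):
    """ True when no stored row matches this (title, date) pair. """
    db.execute("SELECT * from magazine WHERE title=? AND date=?",
               (article_title, article_date))
    return not db.fetchall()

def add_article_to_db(article_title, article_date):
    """ Record an article so it is never notified about twice. """
    db.execute("INSERT INTO magazine VALUES (?,?)", (article_title, article_date))
    db_connection.commit()

# A hypothetical article is "new" the first time and "old" afterwards.
title, date = 'A made-up article', 'Mon, 29 Jan 2018 08:00:00 +0000'
print(article_is_not_db(title, date))  # True - not seen yet
add_article_to_db(title, date)
print(article_is_not_db(title, date))  # False - already stored
```

Running the sketch prints True then False, which is exactly why a second cron run will not email the same article twice.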
+ + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/never-miss-magazines-article-build-rss-notification-system/ + +作者:[Clément Verna][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://fedoramagazine.org +[1]:https://docs.python.org/3/library/sqlite3.html +[2]:https://pypi.python.org/pypi +[3]:https://pypi.python.org/pypi/feedparser/5.2.1 +[4]:https://support.google.com/accounts/answer/185833?hl=en +[5]:https://pythonhosted.org/feedparser/reference.html +[6]:https://en.wikipedia.org/wiki/Cron +[7]:https://github.com/cverna/rss_feed_notifier From 444f5d9b4b2af9439d02e768675f2629bb1f39ca Mon Sep 17 00:00:00 2001 From: darksun Date: Wed, 24 Jan 2018 13:06:50 +0800 Subject: [PATCH 222/226] =?UTF-8?q?=E9=80=89=E9=A2=98:=20A=20Simple=20Comm?= =?UTF-8?q?and-line=20Snippet=20Manager?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...2 A Simple Command-line Snippet Manager.md | 319 ++++++++++++++++++ 1 file changed, 319 insertions(+) create mode 100644 sources/tech/20180122 A Simple Command-line Snippet Manager.md diff --git a/sources/tech/20180122 A Simple Command-line Snippet Manager.md b/sources/tech/20180122 A Simple Command-line Snippet Manager.md new file mode 100644 index 0000000000..1c8ef14fb6 --- /dev/null +++ b/sources/tech/20180122 A Simple Command-line Snippet Manager.md @@ -0,0 +1,319 @@ +A Simple Command-line Snippet Manager +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/01/pet-6-720x340.png) + +We can't remember all the commands, right? Yes. Except the frequently used commands, it is nearly impossible to remember some long commands that we rarely use. That's why we need to some external tools to help us to find the commands when we need them. 
In the past, we have reviewed two useful utilities named [**"Bashpast"**][1] and [**"Keep"**][2]. Using Bashpast, we can easily bookmark Linux commands for easier repeated invocation. And the Keep utility can be used to keep some important and lengthy commands in your Terminal, so you can use them on demand. Today, we are going to see yet another tool in the series to help you remember commands. Say hello to **"Pet"**, a simple command-line snippet manager written in the **Go** language.
+
+Using Pet, you can:
+
+ * Register/add your important, long and complex command snippets.
+ * Search the saved command snippets interactively.
+ * Run snippets directly without having to type them over and over.
+ * Edit the saved command snippets easily.
+ * Sync the snippets via Gist.
+ * Use variables in snippets.
+ * And more yet to come.
+
+
+
+#### Installing Pet CLI Snippet Manager
+
+Since it is written in the Go language, make sure you have installed Go on your system.
+
+After installing Go, grab the latest binaries from [**the releases page**][3].
+```
+wget https://github.com/knqyf263/pet/releases/download/v0.2.4/pet_0.2.4_linux_amd64.zip
+```
+
+For 32 bit:
+```
+wget https://github.com/knqyf263/pet/releases/download/v0.2.4/pet_0.2.4_linux_386.zip
+```
+
+Extract the downloaded archive:
+```
+unzip pet_0.2.4_linux_amd64.zip
+```
+
+32 bit:
+```
+unzip pet_0.2.4_linux_386.zip
+```
+
+Copy the pet binary file to your PATH (i.e. **/usr/local/bin** or the like).
+```
+sudo cp pet /usr/local/bin/
+```
+
+Finally, make it executable:
+```
+sudo chmod +x /usr/local/bin/pet
+```
+
+If you're using Arch-based systems, you can install it from the AUR using any AUR helper tool.
+ +Using [**Pacaur**][4]: +``` +pacaur -S pet-git +``` + +Using [**Packer**][5]: +``` +packer -S pet-git +``` + +Using [**Yaourt**][6]: +``` +yaourt -S pet-git +``` + +Using [**Yay** :][7] +``` +yay -S pet-git +``` + +Also, you need to install **[fzf][8]** or [**peco**][9] tools to enable interactive search. Refer the official GitHub links to know how to install these tools. + +#### Usage + +Run 'pet' without any arguments to view the list of available commands and general options. +``` +$ pet +pet - Simple command-line snippet manager. + +Usage: + pet [command] + +Available Commands: + configure Edit config file + edit Edit snippet file + exec Run the selected commands + help Help about any command + list Show all snippets + new Create a new snippet + search Search snippets + sync Sync snippets + version Print the version number + +Flags: + --config string config file (default is $HOME/.config/pet/config.toml) + --debug debug mode + -h, --help help for pet + +Use "pet [command] --help" for more information about a command. +``` + +To view the help section of a specific command, run: +``` +$ pet [command] --help +``` + +**Configure Pet** + +It just works fine with default values. However, you can change the default directory to save snippets, choose the selector (fzf or peco) to use, the default text editor to edit snippets, add GIST id details etc. + +To configure Pet, run: +``` +$ pet configure +``` + +This command will open the default configuration in the default text editor (for example **vim** in my case). Change/edit the values as per your requirements. +``` +[General] + snippetfile = "/home/sk/.config/pet/snippet.toml" + editor = "vim" + column = 40 + selectcmd = "fzf" + +[Gist] + file_name = "pet-snippet.toml" + access_token = "" + gist_id = "" + public = false +~ +``` + +**Creating Snippets** + +To create a new snippet, run: +``` +$ pet new +``` + +Add the command and the description and hit ENTER to save it. 
+```
+Command> echo 'Hell1o, Welcome1 2to OSTechNix4' | tr -d '1-9'
+Description> Remove numbers from output.
+```
+
+[![][10]][11]
+
+This is a simple command to remove all numbers from the echo command output. You can easily remember it. But, if you rarely use it, you may forget it completely after a few days. Of course we can search the history using "CTRL+r", but "Pet" is much easier. Also, Pet can help you add any number of entries.
+
+Another cool feature is that we can easily add the previous command. To do so, add the following lines to your **.bashrc** or **.zshrc** file.
+```
+function prev() {
+  PREV=$(fc -lrn | head -n 1)
+  sh -c "pet new `printf %q "$PREV"`"
+}
+```
+
+Run the following command for the saved changes to take effect.
+```
+source .bashrc
+```
+
+Or,
+```
+source .zshrc
+```
+
+Now, run any command, for example:
+```
+$ cat Documents/ostechnix.txt | tr '|' '\n' | sort | tr '\n' '|' | sed "s/.$/\\n/g"
+```
+
+To add the above command, you don't have to use the "pet new" command. Just run:
+```
+$ prev
+```
+
+Add the description to the command snippet and hit ENTER to save.
+
+[![][10]][12]
+
+**List snippets**
+
+To view the saved snippets, run:
+```
+$ pet list
+```
+
+[![][10]][13]
+
+**Edit Snippets**
+
+If you want to edit the description or the command of a snippet, run:
+```
+$ pet edit
+```
+
+This will open all saved snippets in your default text editor. You can edit or change the snippets as you wish.
+```
+[[snippets]]
+  description = "Remove numbers from output."
+  command = "echo 'Hell1o, Welcome1 2to OSTechNix4' | tr -d '1-9'"
+  output = ""
+
+[[snippets]]
+  description = "Alphabetically sort one line of text"
+  command = "\t prev"
+  output = ""
+```
+
+**Use Tags in snippets**
+
+To add tags to a snippet, use the **-t** flag as shown below.
+```
+$ pet new -t
+Command> echo 'Hell1o, Welcome1 2to OSTechNix4' | tr -d '1-9'
+Description> Remove numbers from output.
+Tag> tr command examples + +``` + +**Execute Snippets** + +To execute a saved snippet, run: +``` +$ pet exec +``` + +Choose the snippet you want to run from the list and hit ENTER to run it. + +[![][10]][14] + +Remember you need to install fzf or peco to use this feature. + +**Search Snippets** + +If you have plenty of saved snippets, you can easily search them using a string or key word like below. +``` +$ pet search +``` + +Enter the search term or keyword to narrow down the search results. + +[![][10]][15] + +**Sync Snippets** + +First, you need to obtain the access token. Go to this link and create access token (only need "gist" scope). + +Configure Pet using command: +``` +$ pet configure +``` + +Set that token to **access_token** in **[Gist]** field. + +After setting, you can upload snippets to Gist like below. +``` +$ pet sync -u +Gist ID: 2dfeeeg5f17e1170bf0c5612fb31a869 +Upload success + +``` + +You can also download snippets on another PC. To do so, edit configuration file and set **Gist ID** to **gist_id** in **[Gist]**. + +Then, download the snippets using command: +``` +$ pet sync +Download success + +``` + +For more details, refer the help section: +``` +pet -h +``` + +Or, +``` +pet [command] -h +``` + +And, that's all. Hope this helps. As you can see, Pet usage is fairly simple and easy to use! If you're having hard time remembering lengthy commands, Pet utility can definitely be useful. + +Cheers! 
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/pet-simple-command-line-snippet-manager/ + +作者:[SK][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com/author/sk/ +[1]:https://www.ostechnix.com/bookmark-linux-commands-easier-repeated-invocation/ +[2]:https://www.ostechnix.com/save-commands-terminal-use-demand/ +[3]:https://github.com/knqyf263/pet/releases +[4]:https://www.ostechnix.com/install-pacaur-arch-linux/ +[5]:https://www.ostechnix.com/install-packer-arch-linux-2/ +[6]:https://www.ostechnix.com/install-yaourt-arch-linux/ +[7]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ +[8]:https://github.com/junegunn/fzf +[9]:https://github.com/peco/peco +[10]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[11]:http://www.ostechnix.com/wp-content/uploads/2018/01/pet-1.png () +[12]:http://www.ostechnix.com/wp-content/uploads/2018/01/pet-2.png () +[13]:http://www.ostechnix.com/wp-content/uploads/2018/01/pet-3.png () +[14]:http://www.ostechnix.com/wp-content/uploads/2018/01/pet-4.png () +[15]:http://www.ostechnix.com/wp-content/uploads/2018/01/pet-5.png () From bf04a71a9489eafd5fbb02f6da477ba542bef88d Mon Sep 17 00:00:00 2001 From: qhwdw Date: Wed, 24 Jan 2018 14:58:59 +0800 Subject: [PATCH 223/226] Translated by qhwdw --- ...0140410 Recursion- dream within a dream.md | 122 ------------------ ...0140410 Recursion- dream within a dream.md | 122 ++++++++++++++++++ 2 files changed, 122 insertions(+), 122 deletions(-) delete mode 100644 sources/tech/20140410 Recursion- dream within a dream.md create mode 100644 translated/tech/20140410 Recursion- dream within a dream.md diff --git a/sources/tech/20140410 Recursion- dream within a dream.md b/sources/tech/20140410 Recursion- dream within a dream.md 
deleted file mode 100644 index b4e0b25fab..0000000000 --- a/sources/tech/20140410 Recursion- dream within a dream.md +++ /dev/null @@ -1,122 +0,0 @@ -#Translating by qhwdw [Recursion: dream within a dream][1] -Recursion is magic, but it suffers from the most awkward introduction in programming books. They'll show you a recursive factorial implementation, then warn you that while it sort of works it's terribly slow and might crash due to stack overflows. "You could always dry your hair by sticking your head into the microwave, but watch out for intracranial pressure and head explosions. Or you can use a towel." No wonder people are suspicious of it. Which is too bad, because recursion is the single most powerful idea in algorithms. - -Let's take a look at the classic recursive factorial: - -Recursive Factorial - factorial.c - -``` -#include - -int factorial(int n) -{ - int previous = 0xdeadbeef; - - if (n == 0 || n == 1) { - return 1; - } - - previous = factorial(n-1); - return n * previous; -} - -int main(int argc) -{ - int answer = factorial(5); - printf("%d\n", answer); -} -``` - -The idea of a function calling itself is mystifying at first. To make it concrete, here is exactly what is [on the stack][3] when factorial(5) is called and reaches n == 1: - -![](https://manybutfinite.com/img/stack/factorial.png) - -Each call to factorial generates a new [stack frame][4]. The creation and [destruction][5] of these stack frames is what makes the recursive factorial slower than its iterative counterpart. The accumulation of these frames before the calls start returning is what can potentially exhaust stack space and crash your program. - -These concerns are often theoretical. For example, the stack frames for factorial take 16 bytes each (this can vary depending on stack alignment and other factors). If you are running a modern x86 Linux kernel on a computer, you normally have 8 megabytes of stack space, so factorial could handle n up to ~512,000\. 
This is a [monstrously large result][6] that takes 8,971,833 bits to represent, so stack space is the least of our problems: a puny integer - even a 64-bit one - will overflow tens of thousands of times over before we run out of stack space. - -We'll look at CPU usage in a moment, but for now let's take a step back from the bits and bytes and look at recursion as a general technique. Our factorial algorithm boils down to pushing integers N, N-1, ... 1 onto a stack, then multiplying them in reverse order. The fact we're using the program's call stack to do this is an implementation detail: we could allocate a stack on the heap and use that instead. While the call stack does have special properties, it's just another data structure at your disposal. I hope the diagram makes that clear. - -Once you see the call stack as a data structure, something else becomes clear: piling up all those integers to multiply them afterwards is one dumbass idea. That is the real lameness of this implementation: it's using a screwdriver to hammer a nail. It's far more sensible to use an iterative process to calculate factorials. - -But there are plenty of screws out there, so let's pick one. There is a traditional interview question where you're given a mouse in a maze, and you must help the mouse search for cheese. Suppose the mouse can turn either left or right in the maze. How would you model and solve this problem? - -Like most problems in life, you can reduce this rodent quest to a graph, in particular a binary tree where the nodes represent positions in the maze. You could then have the mouse attempt left turns whenever possible, and backtrack to turn right when it reaches a dead end. Here's the mouse walk in an [example maze][7]: - -![](https://manybutfinite.com/img/stack/mazeGraph.png) - -Each edge (line) is a left or right turn taking our mouse to a new position. If either turn is blocked, the corresponding edge does not exist. Now we're talking! 
This process is inherently recursive whether you use the call stack or another data structure. But using the call stack is just so easy: - -Recursive Maze Solver[download][2] - -``` -#include -#include "maze.h" - -int explore(maze_t *node) -{ - int found = 0; - - if (node == NULL) - { - return 0; - } - if (node->hasCheese){ - return 1;// found cheese - } - - found = explore(node->left) || explore(node->right); - return found; - } - - int main(int argc) - { - int found = explore(&maze); - } -``` -Below is the stack when we find the cheese in maze.c:13\. You can also see the detailed [GDB output][8] and [commands][9] used to gather data. - -![](https://manybutfinite.com/img/stack/mazeCallStack.png) - -This shows recursion in a much better light because it's a suitable problem. And that's no oddity: when it comes to algorithms, recursion is the rule, not the exception. It comes up when we search, when we traverse trees and other data structures, when we parse, when we sort: it's everywhere. You know how pi or e come up in math all the time because they're in the foundations of the universe? Recursion is like that: it's in the fabric of computation. - -Steven Skienna's excellent [Algorithm Design Manual][10] is a great place to see that in action as he works through his "war stories" and shows the reasoning behind algorithmic solutions to real-world problems. It's the best resource I know of to develop your intuition for algorithms. Another good read is McCarthy's [original paper on LISP][11]. Recursion is both in its title and in the foundations of the language. The paper is readable and fun, it's always a pleasure to see a master at work. - -Back to the maze. While it's hard to get away from recursion here, it doesn't mean it must be done via the call stack. You could for example use a string like RRLL to keep track of the turns, and rely on the string to decide on the mouse's next move. Or you can allocate something else to record the state of the cheese hunt. 
You'd still be implementing a recursive process, but rolling your own data structure. - -That's likely to be more complex because the call stack fits like a glove. Each stack frame records not only the current node, but also the state of computation in that node (in this case, whether we've taken only the left, or are already attempting the right). Hence the code becomes trivial. Yet we sometimes give up this sweetness for fear of overflows and hopes of performance. That can be foolish. - -As we've seen, the stack is large and frequently other constraints kick in before stack space does. One can also check the problem size and ensure it can be handled safely. The CPU worry is instilled chiefly by two widespread pathological examples: the dumb factorial and the hideous O(2n) [recursive Fibonacci][12] without memoization. These are not indicative of sane stack-recursive algorithms. - -The reality is that stack operations are fast. Often the offsets to data are known exactly, the stack is hot in the [caches][13], and there are dedicated instructions to get things done. Meanwhile, there is substantial overhead involved in using your own heap-allocated data structures. It's not uncommon to see people write something that ends up more complex and less performant than call-stack recursion. Finally, modern CPUs are [pretty good][14] and often not the bottleneck. Be careful about sacrificing simplicity and as always with performance, [measure][15]. - -The next post is the last in this stack series, and we'll look at Tail Calls, Closures, and Other Fauna. Then it'll be time to visit our old friend, the Linux kernel. Thanks for reading! 
- -![](https://manybutfinite.com/img/stack/1000px-Sierpinski-build.png) - --------------------------------------------------------------------------------- - -via:https://manybutfinite.com/post/recursion/ - -作者:[Gustavo Duarte][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://duartes.org/gustavo/blog/about/ -[1]:https://manybutfinite.com/post/recursion/ -[2]:https://manybutfinite.com/code/x86-stack/maze.c -[3]:https://github.com/gduarte/blog/blob/master/code/x86-stack/factorial-gdb-output.txt -[4]:https://manybutfinite.com/post/journey-to-the-stack -[5]:https://manybutfinite.com/post/epilogues-canaries-buffer-overflows/ -[6]:https://gist.github.com/gduarte/9944878 -[7]:https://github.com/gduarte/blog/blob/master/code/x86-stack/maze.h -[8]:https://github.com/gduarte/blog/blob/master/code/x86-stack/maze-gdb-output.txt -[9]:https://github.com/gduarte/blog/blob/master/code/x86-stack/maze-gdb-commands.txt -[10]:http://www.amazon.com/Algorithm-Design-Manual-Steven-Skiena/dp/1848000693/ -[11]:https://github.com/papers-we-love/papers-we-love/blob/master/comp_sci_fundamentals_and_history/recursive-functions-of-symbolic-expressions-and-their-computation-by-machine-parti.pdf -[12]:http://stackoverflow.com/questions/360748/computational-complexity-of-fibonacci-sequence -[13]:https://manybutfinite.com/post/intel-cpu-caches/ -[14]:https://manybutfinite.com/post/what-your-computer-does-while-you-wait/ -[15]:https://manybutfinite.com/post/performance-is-a-science \ No newline at end of file diff --git a/translated/tech/20140410 Recursion- dream within a dream.md b/translated/tech/20140410 Recursion- dream within a dream.md new file mode 100644 index 0000000000..3becf75ebd --- /dev/null +++ b/translated/tech/20140410 Recursion- dream within a dream.md @@ -0,0 +1,122 @@ +#[递归:梦中梦][1] 
+递归是很神奇的,但是在大多数的编程类书藉中对递归讲解的并不好。它们只是给你展示一个递归阶乘的实现,然后警告你递归运行的很慢,并且还有可能因为栈缓冲区溢出而崩溃。“你可以将头伸进微波炉中去烘干你的头发,但是需要警惕颅内高压以及让你的头发生爆炸,或者你可以使用毛巾来擦干头发。”这就是人们不愿意使用递归的原因。这是很糟糕的,因为在算法中,递归是最强大的。 + +我们来看一下这个经典的递归阶乘: + +递归阶乘 - factorial.c + +``` +#include + +int factorial(int n) +{ + int previous = 0xdeadbeef; + + if (n == 0 || n == 1) { + return 1; + } + + previous = factorial(n-1); + return n * previous; +} + +int main(int argc) +{ + int answer = factorial(5); + printf("%d\n", answer); +} +``` + +函数的目的是调用它自己,这在一开始是让人很难理解的。为了解具体的内容,当调用 `factorial(5)` 并且达到 `n == 1` 时,[在栈上][3] 究竟发生了什么? + +![](https://manybutfinite.com/img/stack/factorial.png) + +每次调用 `factorial` 都生成一个新的 [栈帧][4]。这些栈帧的创建和 [销毁][5] 是递归慢于迭代的原因。在调用返回之前,累积的这些栈帧可能会耗尽栈空间,进而使你的程序崩溃。 + +而这些担心经常是存在于理论上的。例如,对于每个 `factorial` 的栈帧取 16 字节(这可能取决于栈排列以及其它因素)。如果在你的电脑上运行着现代的 x86 的 Linux 内核,一般情况下你拥有 8 GB 的栈空间,因此,`factorial` 最多可以被运行 ~512,000 次。这是一个 [巨大无比的结果][6],它相当于 8,971,833 比特,因此,栈空间根本就不是什么问题:一个极小的整数 - 甚至是一个 64 位的整数 - 在我们的栈空间被耗尽之前就早已经溢出了成千上万次了。 + +过一会儿我们再去看 CPU 的使用,现在,我们先从比特和字节回退一步,把递归看作一种通用技术。我们的阶乘算法总结为将整数 N、N-1、 … 1 推入到一个栈,然后将它们按相反的顺序相乘。实际上我们使用了程序调用栈来实现这一点,这是它的细节:我们在堆上分配一个栈并使用它。虽然调用栈具有特殊的特性,但是,你只是把它用作一种另外的数据结构。我希望示意图可以让你明白这一点。 + +当你看到栈调用作为一种数据结构使用,有些事情将变得更加清晰明了:将那些整数堆积起来,然后再将它们相乘,这并不是一个好的想法。那是一种有缺陷的实现:就像你拿螺丝刀去钉钉子一样。相对更合理的是使用一个迭代过程去计算阶乘。 + +但是,螺丝钉太多了,我们只能挑一个。有一个经典的面试题,在迷宫里有一只老鼠,你必须帮助这只老鼠找到一个奶酪。假设老鼠能够在迷宫中向左或者向右转弯。你该怎么去建模来解决这个问题? 
+ +就像现实生活中的很多问题一样,你可以将这个老鼠找奶酪的问题简化为一个图,一个二叉树的每个结点代表在迷宫中的一个位置。然后你可以让老鼠在任何可能的地方都左转,而当它进入一个死胡同时,再返回来右转。这是一个老鼠行走的 [迷宫示例][7]: + +![](https://manybutfinite.com/img/stack/mazeGraph.png) + +每到边缘(线)都让老鼠左转或者右转来到达一个新的位置。如果向哪边转都被拦住,说明相关的边缘不存在。现在,我们来讨论一下!这个过程无论你是调用栈还是其它数据结构,它都离不开一个递归的过程。而使用调用栈是非常容易的: + +递归迷宫求解 [下载][2] + +``` +#include +#include "maze.h" + +int explore(maze_t *node) +{ + int found = 0; + + if (node == NULL) + { + return 0; + } + if (node->hasCheese){ + return 1;// found cheese + } + + found = explore(node->left) || explore(node->right); + return found; + } + + int main(int argc) + { + int found = explore(&maze); + } +``` +当我们在 `maze.c:13` 中找到奶酪时,栈的情况如下图所示。你也可以在 [GDB 输出][8] 中看到更详细的数据,它是使用 [命令][9] 采集的数据。 + +![](https://manybutfinite.com/img/stack/mazeCallStack.png) + +它展示了递归的良好表现,因为这是一个适合使用递归的问题。而且这并不奇怪:当涉及到算法时,递归是一种使用较多的算法,而不是被排除在外的。当进行搜索时、当进行遍历树和其它数据结构时、当进行解析时、当需要排序时:它的用途无处不在。正如众所周知的 pi 或者 e,它们在数学中像“神”一样的存在,因为它们是宇宙万物的基础,而递归也和它们一样:只是它在计算的结构中。 + +Steven Skienna 的优秀著作 [算法设计指南][10] 的精彩之处在于,他通过“战争故事” 作为手段来诠释工作,以此来展示解决现实世界中的问题背后的算法。这是我所知道的拓展你的算法知识的最佳资源。另一个较好的做法是,去读 McCarthy 的 [LISP 上的原创论文][11]。递归在语言中既是它的名字也是它的基本原理。这篇论文既可读又有趣,在工作中能看到大师的作品是件让人兴奋的事情。 + +回到迷宫问题上。虽然它在这里很难离开递归,但是并不意味着必须通过调用栈的方式来实现。你可以使用像 “RRLL” 这样的字符串去跟踪转向,然后,依据这个字符串去决定老鼠下一步的动作。或者你可以分配一些其它的东西来记录奶酪的状态。你仍然是去实现一个递归的过程,但是需要你实现一个自己的数据结构。 + +那样似乎更复杂一些,因为栈调用更合适。每个栈帧记录的不仅是当前节点,也记录那个节点上的计算状态(在这个案例中,我们是否只让它走左边,或者已经尝试向右)。因此,代码已经变得不重要了。然而,有时候我们因为害怕溢出和期望中的性能而放弃这种优秀的算法。那是很愚蠢的! + +正如我们所见,栈空间是非常大的,在耗尽栈空间之前往往会遇到其它的限制。一方面可以通过检查问题大小来确保它能够被安全地处理。而对 CPU 的担心是由两个广为流传的有问题的示例所导致的:哑阶乘(dumb factorial)和可怕的无记忆的 O(2n) [Fibonacci 递归][12]。它们并不是栈递归算法的正确代表。 + +事实上栈操作是非常快的。通常,栈对数据的偏移是非常准确的,它在 [缓存][13] 中是热点,并且是由专门的指令来操作它。同时,使用你自己定义的堆上分配的数据结构的相关开销是很大的。经常能看到人们写的一些比栈调用递归更复杂、性能更差的实现方法。最后,现代的 CPU 的性能都是 [非常好的][14] ,并且一般 CPU 不会是性能瓶颈所在。要注意牺牲简单性与保持性能的关系。[测量][15]。 + +下一篇文章将是探秘栈系列的最后一篇了,我们将了解尾调用、闭包、以及其它相关概念。然后,我们就该深入我们的老朋友—— Linux 内核了。感谢你的阅读! 
+ +![](https://manybutfinite.com/img/stack/1000px-Sierpinski-build.png) + +-------------------------------------------------------------------------------- + +via:https://manybutfinite.com/post/recursion/ + +作者:[Gustavo Duarte][a] +译者:[qhwdw](https://github.com/qhwdw) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://duartes.org/gustavo/blog/about/ +[1]:https://manybutfinite.com/post/recursion/ +[2]:https://manybutfinite.com/code/x86-stack/maze.c +[3]:https://github.com/gduarte/blog/blob/master/code/x86-stack/factorial-gdb-output.txt +[4]:https://manybutfinite.com/post/journey-to-the-stack +[5]:https://manybutfinite.com/post/epilogues-canaries-buffer-overflows/ +[6]:https://gist.github.com/gduarte/9944878 +[7]:https://github.com/gduarte/blog/blob/master/code/x86-stack/maze.h +[8]:https://github.com/gduarte/blog/blob/master/code/x86-stack/maze-gdb-output.txt +[9]:https://github.com/gduarte/blog/blob/master/code/x86-stack/maze-gdb-commands.txt +[10]:http://www.amazon.com/Algorithm-Design-Manual-Steven-Skiena/dp/1848000693/ +[11]:https://github.com/papers-we-love/papers-we-love/blob/master/comp_sci_fundamentals_and_history/recursive-functions-of-symbolic-expressions-and-their-computation-by-machine-parti.pdf +[12]:http://stackoverflow.com/questions/360748/computational-complexity-of-fibonacci-sequence +[13]:https://manybutfinite.com/post/intel-cpu-caches/ +[14]:https://manybutfinite.com/post/what-your-computer-does-while-you-wait/ +[15]:https://manybutfinite.com/post/performance-is-a-science \ No newline at end of file From 9448ee9db936f34b6624ef55e0dea956200ef33c Mon Sep 17 00:00:00 2001 From: WangYue <815420852@qq.com> Date: Wed, 24 Jan 2018 19:26:25 +0800 Subject: [PATCH 224/226] =?UTF-8?q?=E7=94=B3=E8=AF=B7=E7=BF=BB=E8=AF=91=20?= =?UTF-8?q?=20=2020170523=20Best=20Websites=20to=20Download=20Linux=20Game?= =?UTF-8?q?s.md?= MIME-Version: 1.0 Content-Type: text/plain; 
charset=UTF-8 Content-Transfer-Encoding: 8bit 申请翻译 20170523 Best Websites to Download Linux Games.md --- sources/talk/20170523 Best Websites to Download Linux Games.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/talk/20170523 Best Websites to Download Linux Games.md b/sources/talk/20170523 Best Websites to Download Linux Games.md index e6d4636fdb..d3b2870738 100644 --- a/sources/talk/20170523 Best Websites to Download Linux Games.md +++ b/sources/talk/20170523 Best Websites to Download Linux Games.md @@ -1,3 +1,5 @@ +申请翻译  WangYueScream +================================ Best Websites to Download Linux Games ====== Brief: New to Linux gaming and wondering where to **download Linux games** from? We list the best resources from where you can **download free Linux games** as well as buy premium Linux games. From a3c3af4dcb93ac80c04441e5241e6a4d802e6d7b Mon Sep 17 00:00:00 2001 From: geekpi Date: Thu, 25 Jan 2018 09:02:59 +0800 Subject: [PATCH 225/226] translated --- ...02 Bash Bypass Alias Linux-Unix Command.md | 51 +++++++++---------- 1 file changed, 24 insertions(+), 27 deletions(-) rename {sources => translated}/tech/20171002 Bash Bypass Alias Linux-Unix Command.md (57%) diff --git a/sources/tech/20171002 Bash Bypass Alias Linux-Unix Command.md b/translated/tech/20171002 Bash Bypass Alias Linux-Unix Command.md similarity index 57% rename from sources/tech/20171002 Bash Bypass Alias Linux-Unix Command.md rename to translated/tech/20171002 Bash Bypass Alias Linux-Unix Command.md index ba2d9cdb4c..e4dec43782 100644 --- a/sources/tech/20171002 Bash Bypass Alias Linux-Unix Command.md +++ b/translated/tech/20171002 Bash Bypass Alias Linux-Unix Command.md @@ -1,26 +1,23 @@ -translating---geekpi - -Bash Bypass Alias Linux/Unix Command +绕过 Linux/Unix 命令别名 ====== -I defined mount bash shell alias as follows on my Linux system: +我在我的 Linux 系统上定义了如下 mount 别名: ``` alias mount='mount | column -t' ``` -However, I need to bash bypass alias for mounting the file 
system and another usage. How can I disable or bypass my bash shell aliases temporarily on a Linux, *BSD, macOS or Unix-like system? +但是我需要在挂载文件系统和其他用途时绕过 bash 别名。我如何在 Linux、\*BSD、macOS 或者类 Unix 系统上临时禁用或者绕过 bash shell 别名呢? - -You can define or display bash shell aliases with alias command. Once bash shell aliases created, they take precedence over external or internal commands. This page shows how to bypass bash aliases temporarily so that you can run actual internal or external command. +你可以使用 alias 命令定义或显示 bash shell 别名。一旦创建了 bash shell 别名,它们将优先于外部或内部命令。本文将展示如何暂时绕过 bash 别名,以便你可以运行实际的内部或外部命令。 [![Bash Bypass Alias Linux BSD macOS Unix Command][1]][1] -## Four ways to bash bypass alias +## 4 种绕过 bash 别名的方法 -Try any one of the following ways to run a command that is shadowed by a bash shell alias. Let us [define an alias as follows][2]: +尝试以下任意一种方法来运行被 bash shell 别名覆盖的命令。让我们[如下定义一个别名][2]: `alias mount='mount | column -t'` -Run it as follows: +运行如下: `mount ` -Sample outputs: +示例输出: ``` sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime) proc on /proc type proc (rw,nosuid,nodev,noexec,relatime) @@ -33,16 +30,16 @@ binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_m lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other) ``` -### Method 1 - Use \command +### 方法1 - 使用 \command -Type the following command to temporarily bypass a bash alias called mount: +输入以下命令暂时绕过名为 mount 的 bash 别名: `\mount` -### Method 2 - Use "command" or 'command' +### 方法2 - 使用 "command" 或 'command' -Quote the mount command as follows to call actual /bin/mount: +如下引用 mount 命令调用实际的 /bin/mount: `"mount"` -OR +或者 `'mount'` ### Method 3 - Use full command path -Use full binary path such as /bin/mount: `/bin/mount /bin/mount /dev/sda1 /mnt/sda` -### Method 4 - Use internal command +### 方法4 - 使用内部命令 -The syntax is: +语法是: `command cmd command cmd arg1 arg2` -To override alias set in .bash_aliases such as mount: +要覆盖 .bash_aliases 
中设置的别名,例如 mount: `command mount command mount /dev/sdc /mnt/pendrive/` -[The 'command' run a simple command or display][3] information about commands. It runs COMMAND with ARGS suppressing shell function lookup or aliases, or display information about the given COMMANDs. +[“command” 运行一个简单命令或显示][3]命令的相关信息。它以参数 ARGS 运行 COMMAND,并抑制 shell 函数查找或别名,或者显示给定 COMMAND 的信息。 -## A note about unalias command +## 关于 unalias 命令的说明 -To remove each alias from the list of defined aliases from the current session use unalias command: +要从当前会话的已定义别名列表中移除别名,请使用 unalias 命令: `unalias mount` -To remove all alias definitions from the current bash session: +要从当前 bash 会话中删除所有别名定义: `unalias -a` -Make sure you update your ~/.bashrc or $HOME/.bash_aliases file. You must remove defined aliases if you want to remove them permanently: +确保同时更新 ~/.bashrc 或 $HOME/.bash_aliases 文件。如果想永久删除某个别名,则必须从这些文件中删除其定义: `vi ~/.bashrc` -OR +或者 `joe $HOME/.bash_aliases` -For more information see bash command man page online [here][4] or read it by typing the following command: +想了解更多信息,参考[这里][4]的在线手册,或者输入下面的命令查看: ``` man bash help command @@ -85,7 +82,7 @@ help alias via: https://www.cyberciti.biz/faq/bash-bypass-alias-command-on-linux-macos-unix/ 作者:[Vivek Gite][a] -译者:[译者ID](https://github.com/译者ID) +译者:[geekpi](https://github.com/geekpi) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From a5921101990da9df13e49bf0c248859bb40665c4 Mon Sep 17 00:00:00 2001 From: geekpi Date: Thu, 25 Jan 2018 09:06:56 +0800 Subject: [PATCH 226/226] translating --- ...Ansible Tutorial- Intorduction to simple Ansible commands.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20170508 Ansible Tutorial- Intorduction to simple Ansible commands.md b/sources/tech/20170508 Ansible Tutorial- Intorduction to simple Ansible commands.md index e72d90301c..d0300fe6e3 100644 --- a/sources/tech/20170508 Ansible Tutorial- Intorduction to simple Ansible commands.md +++ 
b/sources/tech/20170508 Ansible Tutorial- Intorduction to simple Ansible commands.md @@ -1,3 +1,5 @@ +translating---geekpi + Ansible Tutorial: Intorduction to simple Ansible commands ====== In our earlier Ansible tutorial, we discussed [**the installation & configuration of Ansible**][1]. Now in this ansible tutorial, we will learn some basic examples of ansible commands that we will use to manage our infrastructure. So let us start by looking at the syntax of a complete ansible command,