diff --git a/published/20091215 How To Create sar Graphs With kSar To Identifying Linux Bottlenecks.md b/published/20091215 How To Create sar Graphs With kSar To Identifying Linux Bottlenecks.md new file mode 100644 index 0000000000..fe53159cec --- /dev/null +++ b/published/20091215 How To Create sar Graphs With kSar To Identifying Linux Bottlenecks.md @@ -0,0 +1,435 @@ +使用 sar 和 kSar 来发现 Linux 性能瓶颈 +====== + +`sar` 命令用于收集、报告或者保存 UNIX / Linux 系统的活动信息。它会把所选择的计数器保存到操作系统的 `/var/log/sa/sadd` 文件中。从收集的数据中,你可以得到许多关于你的服务器的信息: + +1. CPU 使用率 +2. 内存页面和使用率 +3. 网络 I/O 和传输统计 +4. 进程创建活动 +5. 所有的块设备活动 +6. 每秒中断数等等 + +`sar` 命令的输出能够用于识别服务器瓶颈。但是,分析 `sar` 命令提供的信息可能比较困难,所以要使用 kSar 工具。kSar 工具可以将 `sar` 命令的输出绘制成基于时间周期的、易于理解的图表。 + +### sysstat 包 + +`sar`、`sa1` 和 `sa2` 命令都是 sysstat 包的一部分。它是 Linux 包含的性能监视工具集合。 + +1. `sar`:显示数据 +2. `sa1` 和 `sa2`:收集和保存数据用于以后分析。`sa2` shell 脚本在 `/var/log/sa` 目录中每日写入一个报告。`sa1` shell 脚本将每日的系统活动信息以二进制数据的形式写入到文件中。 +3. `sadc`:系统活动数据收集器。你可以通过修改 `sa1` 和 `sa2` 脚本去配置各种选项。它们位于以下的目录: + * `/usr/lib64/sa/sa1`(64 位)或者 `/usr/lib/sa/sa1`(32 位)—— 它调用 `sadc`,将报告记录为 `/var/log/sa/sadX` 格式的文件。 + * `/usr/lib64/sa/sa2`(64 位)或者 `/usr/lib/sa/sa2`(32 位)—— 它调用 `sar`,将报告记录为 `/var/log/sa/sarX` 格式的文件。 + +#### 如何在我的系统上安装 sar?
+ +在一个基于 CentOS/RHEL 的系统上,输入如下的 [yum 命令][1] 去安装 sysstat: + +``` +# yum install sysstat +``` + +示例输出如下: + +``` +Loaded plugins: downloadonly, fastestmirror, priorities, + : protectbase, security +Loading mirror speeds from cached hostfile + * addons: mirror.cs.vt.edu + * base: mirror.ash.fastserv.com + * epel: serverbeach1.fedoraproject.org + * extras: mirror.cogentco.com + * updates: centos.mirror.nac.net +0 packages excluded due to repository protections +Setting up Install Process +Resolving Dependencies +--> Running transaction check +---> Package sysstat.x86_64 0:7.0.2-3.el5 set to be updated +--> Finished Dependency Resolution + +Dependencies Resolved + +==================================================================== + Package Arch Version Repository Size +==================================================================== +Installing: + sysstat x86_64 7.0.2-3.el5 base 173 k + +Transaction Summary +==================================================================== +Install 1 Package(s) +Update 0 Package(s) +Remove 0 Package(s) + +Total download size: 173 k +Is this ok [y/N]: y +Downloading Packages: +sysstat-7.0.2-3.el5.x86_64.rpm | 173 kB 00:00 +Running rpm_check_debug +Running Transaction Test +Finished Transaction Test +Transaction Test Succeeded +Running Transaction + Installing : sysstat 1/1 + +Installed: + sysstat.x86_64 0:7.0.2-3.el5 + +Complete! 
+``` + +#### 为 sysstat 配置文件 + +编辑 `/etc/sysconfig/sysstat` 文件去指定日志文件保存多少天(最长为一个月): + +``` +# vi /etc/sysconfig/sysstat +``` + +示例输出如下 : + +``` +# keep log for 28 days +# the default is 7 +HISTORY=28 +``` + +保存并关闭这个文件。 + +### 找到 sar 默认的 cron 作业 + +[默认的 cron 作业位于][2] `/etc/cron.d/sysstat`: + +``` +# cat /etc/cron.d/sysstat +``` + +示例输出如下: + +``` +# run system activity accounting tool every 10 minutes +*/10 * * * * root /usr/lib64/sa/sa1 1 1 +# generate a daily summary of process accounting at 23:53 +53 23 * * * root /usr/lib64/sa/sa2 -A +``` + +#### 告诉 sadc 去报告磁盘的统计数据 + +使用一个文本编辑器去编辑 `/etc/cron.d/sysstat` 文件,比如使用 `vim` 命令,输入如下: + +``` +# vi /etc/cron.d/sysstat +``` + +像下面的示例那样更新这个文件,以记录所有的硬盘统计数据(`-d` 选项强制记录每个块设备的统计数据,而 `-I` 选项强制记录所有系统中断的统计数据): + +``` +# run system activity accounting tool every 10 minutes +*/10 * * * * root /usr/lib64/sa/sa1 -I -d 1 1 +# generate a daily summary of process accounting at 23:53 +53 23 * * * root /usr/lib64/sa/sa2 -A +``` + +在 CentOS/RHEL 7.x 系统上你需要传递 `-S DISK` 选项去收集块设备的数据。传递 `-S XALL` 选项去采集如下所列的数据: + +1. 磁盘 +2. 分区 +3. 系统中断 +4. SNMP +5. 
IPv6 + +``` +# Run system activity accounting tool every 10 minutes +*/10 * * * * root /usr/lib64/sa/sa1 -S DISK 1 1 +# 0 * * * * root /usr/lib64/sa/sa1 600 6 & +# Generate a daily summary of process accounting at 23:53 +53 23 * * * root /usr/lib64/sa/sa2 -A +# Run system activity accounting tool every 10 minutes +``` + +保存并关闭这个文件。 + +#### 打开 CentOS/RHEL 版本 5.x/6.x 的服务 + +输入如下命令: + +``` +chkconfig sysstat on +service sysstat start +``` + +示例输出如下: + +``` +Calling the system activity data collector (sadc): +``` + +对于 CentOS/RHEL 7.x,运行如下的命令: + +``` +# systemctl enable sysstat +# systemctl start sysstat.service +# systemctl status sysstat.service +``` + +示例输出: + +``` +● sysstat.service - Resets System Activity Logs + Loaded: loaded (/usr/lib/systemd/system/sysstat.service; enabled; vendor preset: enabled) + Active: active (exited) since Sat 2018-01-06 16:33:19 IST; 3s ago + Process: 28297 ExecStart=/usr/lib64/sa/sa1 --boot (code=exited, status=0/SUCCESS) + Main PID: 28297 (code=exited, status=0/SUCCESS) + +Jan 06 16:33:19 centos7-box systemd[1]: Starting Resets System Activity Logs... +Jan 06 16:33:19 centos7-box systemd[1]: Started Resets System Activity Logs. +``` + +### 如何使用 sar?如何查看统计数据? 
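上面的 cron 任务与服务都配置好之后,`sadc` 会按计划把二进制数据写入 `/var/log/sa/sadd` 文件(`dd` 为当月的日期)。下面是一个简单的检查脚本示意(假设数据目录为 `/var/log/sa`,在某些发行版上也可能是 `/var/log/sysstat`),用来确认当天的数据文件确实已经生成:

```shell
# check_sa_file:检查指定目录下当天的 sar 二进制数据文件是否存在
# 用法:check_sa_file [目录],目录默认为 /var/log/sa
check_sa_file() {
    sa_dir="${1:-/var/log/sa}"
    day=$(date +%d)
    f="$sa_dir/sa$day"
    if [ -f "$f" ]; then
        echo "OK: $f"
    else
        echo "MISSING: $f"
    fi
}

check_sa_file
```

如果输出 MISSING,说明 `sa1` 还没有运行过,可以检查 cron 任务是否生效,或者等待下一个 10 分钟的采集周期。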
+ +使用 `sar` 命令去显示操作系统中选定的累积活动计数器输出。在这个示例中,运行 `sar` 命令行,去实时获得 CPU 使用率的报告: + +``` +# sar -u 3 10 +``` + +示例输出: + +``` +Linux 2.6.18-164.2.1.el5 (www-03.nixcraft.in) 12/14/2009 + +09:49:47 PM CPU %user %nice %system %iowait %steal %idle +09:49:50 PM all 5.66 0.00 1.22 0.04 0.00 93.08 +09:49:53 PM all 12.29 0.00 1.93 0.04 0.00 85.74 +09:49:56 PM all 9.30 0.00 1.61 0.00 0.00 89.10 +09:49:59 PM all 10.86 0.00 1.51 0.04 0.00 87.58 +09:50:02 PM all 14.21 0.00 3.27 0.04 0.00 82.47 +09:50:05 PM all 13.98 0.00 4.04 0.04 0.00 81.93 +09:50:08 PM all 6.60 6.89 1.26 0.00 0.00 85.25 +09:50:11 PM all 7.25 0.00 1.55 0.04 0.00 91.15 +09:50:14 PM all 6.61 0.00 1.09 0.00 0.00 92.31 +09:50:17 PM all 5.71 0.00 0.96 0.00 0.00 93.33 +Average: all 9.24 0.69 1.84 0.03 0.00 88.20 +``` + +其中: + + * 3 表示间隔时间 + * 10 表示次数 + +查看进程创建的统计数据,输入: + +``` +# sar -c 3 10 +``` + +查看 I/O 和传输率统计数据,输入: + +``` +# sar -b 3 10 +``` + +查看内存页面统计数据,输入: + +``` +# sar -B 3 10 +``` + +查看块设备统计数据,输入: + +``` +# sar -d 3 10 +``` + +查看所有中断的统计数据,输入: + +``` +# sar -I XALL 3 10 +``` + +查看网络设备特定的统计数据,输入: + +``` +# sar -n DEV 3 10 +# sar -n EDEV 3 10 +``` + +查看 CPU 特定的统计数据,输入: + +``` +# sar -P ALL +# Only 1st CPU stats +# sar -P 1 3 10 +``` + +查看队列长度和平均负载的统计数据,输入: + +``` +# sar -q 3 10 +``` + +查看内存和交换空间的使用统计数据,输入: + +``` +# sar -r 3 10 +# sar -R 3 10 +``` + +查看 inode、文件、和其它内核表统计数据状态,输入: + +``` +# sar -v 3 10 +``` + +查看系统切换活动统计数据,输入: + +``` +# sar -w 3 10 +``` + +查看交换统计数据,输入: + +``` +# sar -W 3 10 +``` + +查看一个 PID 为 3256 的 Apache 进程,输入: + +``` +# sar -x 3256 3 10 +``` + +### kSar 介绍 + +`sar` 和 `sadf` 提供了基于命令行界面的输出。这种输出可能会使新手用户/系统管理员感到无从下手。因此,你需要使用 kSar,它是一个图形化显示你的 `sar` 数据的 Java 应用程序。它也允许你以 PDF/JPG/PNG/CSV 格式导出数据。你可以用三种方式去加载数据:本地文件、运行本地命令、以及通过 SSH 远程运行的命令。kSar 可以处理下列操作系统的 `sar` 输出: + +1. Solaris 8, 9 和 10 +2. Mac OS/X 10.4+ +3. Linux (Systat Version >= 5.0.5) +4. AIX (4.3 & 5.3) +5. 
HPUX 11.00+ + +#### 下载和安装 kSar + +访问 [官方][3] 网站去获得最新版本的源代码。使用 [wget][4] 去下载源代码,输入: + +``` +$ wget https://github.com/vlsi/ksar/releases/download/v5.2.4-snapshot-652bf16/ksar-5.2.4-SNAPSHOT-all.jar +``` + +#### 如何运行 kSar? + +首先要确保你的机器上 [JAVA jdk][5] 已安装并能够正常工作。输入下列命令去启动 kSar: + +``` +$ java -jar ksar-5.2.4-SNAPSHOT-all.jar +``` + +![Fig.01: kSar welcome screen][6] + +接下来你将看到 kSar 的主窗口,和有两个菜单的面板。 + +![Fig.02: kSar - the main window][7] + +左侧有一个列表,是 kSar 根据数据已经解析出的可用图表的列表。右侧窗口将展示你选定的图表。 + +#### 如何使用 kSar 去生成 sar 图表? + +首先,你需要从命名为 server1 的服务器上采集 `sar` 命令的统计数据。输入如下的命令: + +``` +[ server1 ]# LC_ALL=C sar -A > /tmp/sar.data.txt +``` + +接下来,使用 `scp` 命令从本地桌面拷贝到远程电脑上: + +``` +[ desktop ]$ scp user@server1.nixcraft.com:/tmp/sar.data.txt /tmp/ +``` + +切换到 kSar 窗口,点击 “Data” > “Load data from text file” > 从 `/tmp/` 中选择 `sar.data.txt` > 点击 “Open” 按钮。 + +现在,图表类型树已经出现在左侧面板中并选定了一个图形: + +![Fig.03: Processes for server1][8] + +![Fig.03: Disk stats (blok device) stats for server1][9] + +![Fig.05: Memory stats for server1][10] + +##### 放大和缩小 + +通过移动你可以交互式缩放图像的一部分。在要缩放的图像的左上角点击并按下鼠标,移动到要缩放区域的右下角,可以选定要缩放的区域。返回到未缩放状态,点击并拖动鼠标到除了右下角外的任意位置,你也可以点击并选择 zoom 选项。 + +##### 了解 kSar 图像和 sar 数据 + +我强烈建议你去阅读 `sar` 和 `sadf` 命令的 man 页面: + +``` +$ man sar +$ man sadf +``` + +### 案例学习:识别 Linux 服务器的 CPU 瓶颈 + +使用 `sar` 命令和 kSar 工具,可以得到内存、CPU、以及其它子系统的详细快照。例如,如果 CPU 使用率在一个很长的时间内持续高于 80%,有可能就是出现了一个 CPU 瓶颈。使用 `sar -x ALL` 你可以找到大量消耗 CPU 的进程。 + +[mpstat 命令][11] 的输出(sysstat 包的一部分)也会帮你去了解 CPU 的使用率。但你可以使用 kSar 很容易地去分析这些信息。 + +#### 找出 CPU 瓶颈后 … + +对 CPU 执行如下的调整: + +1. 确保没有不需要的进程在后台运行。关闭 [Linux 上所有不需要的服务][12]。 +2. 使用 [cron][13] 在一个非高峰时刻运行任务(比如,备份)。 +3. 使用 [top 和 ps 命令][14] 去找出所有非关键的后台作业/服务。使用 [renice 命令][15] 去调整低优先级作业。 +4. 使用 [taskset 命令去设置进程使用的 CPU ][16] (卸载所使用的 CPU),即,绑定进程到不同的 CPU 上。例如,在 2# CPU 上运行 MySQL 数据库,而在 3# CPU 上运行 Apache。 +5. 确保你的系统使用了最新的驱动程序和固件。 +6. 如有可能在系统上增加额外的 CPU。 +7. 为单线程应用程序使用更快的 CPU(比如,Lighttpd web 服务器应用程序)。 +8. 为多线程应用程序使用多个 CPU(比如,MySQL 数据库服务器应用程序)。 +9. 
为一个 web 应用程序使用多个计算节点并设置一个 [负载均衡器][17]。 + +### isag —— 交互式系统活动记录器(替代工具) + +`isag` 命令图形化显示了以前运行 `sar` 命令时存储在二进制文件中的系统活动数据。`isag` 命令引用 `sar` 并提取出它的数据来绘制图形。与 kSar 相比,`isag` 的选项比较少。 + +![Fig.06: isag CPU utilization graphs][18] + +### 关于作者 + +本文作者是 nixCraft 的创始人和一位经验丰富的 Linux 操作系统/Unix shell 脚本培训师。他与包括 IT、教育、国防和空间研究、以及非营利组织等全球各行业客户一起合作。可以在 [Twitter][19]、[Facebook][20]、[Google+][21] 上关注他。 + +-------------------------------------------------------------------------------- + +via: https://www.cyberciti.biz/tips/identifying-linux-bottlenecks-sar-graphs-with-ksar.html + +作者:[Vivek Gite][a] +译者:[qhwdw](https://github.com/qhwdw) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.cyberciti.biz +[1]:https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/ "See Linux/Unix yum command examples for more info" +[2]:https://www.cyberciti.biz/faq/how-do-i-add-jobs-to-cron-under-linux-or-unix-oses/ +[3]:https://github.com/vlsi/ksar +[4]:https://www.cyberciti.biz/tips/linux-wget-your-ultimate-command-line-downloader.html +[5]:https://www.cyberciti.biz/faq/howto-ubuntu-linux-install-configure-jdk-jre/ +[6]:https://www.cyberciti.biz/media/new/tips/2009/12/sar-welcome.png "kSar welcome screen" +[7]:https://www.cyberciti.biz/media/new/tips/2009/12/screenshot-kSar-a-sar-grapher-01.png "kSar - the main window" +[8]:https://www.cyberciti.biz/media/new/tips/2009/12/cpu-ksar.png "Linux kSar Processes for server1 " +[9]:https://www.cyberciti.biz/media/new/tips/2009/12/disk-stats-ksar.png "Linux Disk I/O Stats Using kSar" +[10]:https://www.cyberciti.biz/media/new/tips/2009/12/memory-ksar.png "Linux Memory paging and its utilization stats" +[11]:https://www.cyberciti.biz/tips/how-do-i-find-out-linux-cpu-utilization.html +[12]:https://www.cyberciti.biz/faq/check-running-services-in-rhel-redhat-fedora-centoslinux/ 
+[13]:https://www.cyberciti.biz/faq/how-do-i-add-jobs-to-cron-under-linux-or-unix-oses/ +[14]:https://www.cyberciti.biz/faq/show-all-running-processes-in-linux/ +[15]:https://www.cyberciti.biz/faq/howto-change-unix-linux-process-priority/ +[16]:https://www.cyberciti.biz/faq/taskset-cpu-affinity-command/ +[17]:https://www.cyberciti.biz/tips/load-balancer-open-source-software.html +[18]:https://www.cyberciti.biz/media/new/tips/2009/12/isag.cpu_.png "Fig.06: isag CPU utilization graphs" +[19]:https://twitter.com/nixcraft +[20]:https://facebook.com/nixcraft +[21]:https://plus.google.com/+CybercitiBiz diff --git a/translated/tech/20150615 Let-s Build A Simple Interpreter. Part 1..md b/published/20150615 Let-s Build A Simple Interpreter. Part 1..md similarity index 55% rename from translated/tech/20150615 Let-s Build A Simple Interpreter. Part 1..md rename to published/20150615 Let-s Build A Simple Interpreter. Part 1..md index 3a62934f42..ed2e0f0a0e 100644 --- a/translated/tech/20150615 Let-s Build A Simple Interpreter. Part 1..md +++ b/published/20150615 Let-s Build A Simple Interpreter. Part 1..md @@ -1,26 +1,27 @@ -让我们做个简单的解释器(1) +让我们做个简单的解释器(一) ====== +> “如果你不知道编译器是怎么工作的,那你就不知道电脑是怎么工作的。如果你不能百分百确定,那就是不知道它们是如何工作的。” --Steve Yegge -> **" If you don't know how compilers work, then you don't know how computers work. 
If you're not 100% sure whether you know how compilers work, then you don't know how they work."** -- Steve Yegge -> **“如果你不知道编译器是怎么工作的,那你就不知道电脑是怎么工作的。如果你不能百分百确定,那就是不知道他们是如何工作的。”** --Steve Yegge +就是这样。想一想。你是萌新还是一个资深的软件开发者实际上都无关紧要:如果你不知道编译器compiler解释器interpreter是怎么工作的,那么你就不知道电脑是怎么工作的。就这么简单。 -就是这样。想一想。你是萌新还是一个资深的软件开发者实际上都无关紧要:如果你不知道编译器和解释器是怎么工作的,那么你就不知道电脑是怎么工作的。就这么简单。 +所以,你知道编译器和解释器是怎么工作的吗?我是说,你百分百确定自己知道他们怎么工作吗?如果不知道。 -所以,你知道编译器和解释器是怎么工作的吗?我是说,你百分百确定自己知道他们怎么工作吗?如果不知道。![][1] +![][1] -或者如果你不知道但你非常想要了解它。 ![][2] +或者如果你不知道但你非常想要了解它。 -不用担心。如果你能坚持跟着这个系列做下去,和我一起构建一个解释器和编译器,最后你将会知道他们是怎么工作的。并且你会变成一个自信满满的快乐的人。至少我希望如此。![][3]。 +![][2] + +不用担心。如果你能坚持跟着这个系列做下去,和我一起构建一个解释器和编译器,最后你将会知道他们是怎么工作的。并且你会变成一个自信满满的快乐的人。至少我希望如此。 + +![][3] 为什么要学习编译器和解释器?有三点理由。 - 1. 要写出一个解释器或编译器,你需要有很多的专业知识,并能融会贯通。写一个解释器或编译器能帮你加强这些能力,成为一个更厉害的软件开发者。而且,你要学的技能对写软件非常有用,而不是仅仅局限于解释器或编译器。 - 2. 你确实想要了解电脑是怎么工作的。一般解释器和编译器看上去很魔幻。你或许不习惯这种魔力。你会想去揭开构建解释器和编译器那层神秘的面纱,了解他们的原理,把事情做好。 - 3. 你想要创建自己的编程语言或者特定领域的语言。如果你创建了一个,你还要为它创建一个解释器或者编译器。最近,兴起了对新的编程语言的兴趣。你能看到几乎每天都有一门新的编程语言横空出世:Elixir,Go,Rust,还有很多。 - - - +1. 要写出一个解释器或编译器,你需要有很多的专业知识,并能融会贯通。写一个解释器或编译器能帮你加强这些能力,成为一个更厉害的软件开发者。而且,你要学的技能对编写软件非常有用,而不是仅仅局限于解释器或编译器。 +2. 你确实想要了解电脑是怎么工作的。通常解释器和编译器看上去很魔幻。你或许不习惯这种魔力。你会想去揭开构建解释器和编译器那层神秘的面纱,了解它们的原理,把事情做好。 +3. 你想要创建自己的编程语言或者特定领域的语言。如果你创建了一个,你还要为它创建一个解释器或者编译器。最近,兴起了对新的编程语言的兴趣。你能看到几乎每天都有一门新的编程语言横空出世:Elixir,Go,Rust,还有很多。 好,但什么是解释器和编译器? @@ -32,11 +33,12 @@ 我希望你现在确信你很想学习构建一个编译器和解释器。你期望在这个教程里学习解释器的哪些知识呢? 
-你看这样如何。你和我一起做一个简单的解释器当作 [Pascal][5] 语言的子集。在这个系列结束的时候你能做出一个可以运行的 Pascal 解释器和一个像 Python 的 [pdb][6] 那样的源代码级别的调试器。 +你看这样如何。你和我一起为 [Pascal][5] 语言的一个大子集做一个简单的解释器。在这个系列结束的时候你能做出一个可以运行的 Pascal 解释器和一个像 Python 的 [pdb][6] 那样的源代码级别的调试器。 -你或许会问,为什么是 Pascal?有一点,它不是我为了这个系列而提出的一个虚构的语言:它是真实存在的一门编程语言,有很多重要的语言结构。有些陈旧但有用的计算机书籍使用 Pascal 编程语言作为示例(我知道对于选择一门语言来构建解释器,这个理由并不令人信服,但我认为学一门非主流的语言也不错:)。 +你或许会问,为什么是 Pascal?一方面,它不是我为了这个系列而提出的一个虚构的语言:它是真实存在的一门编程语言,有很多重要的语言结构。有些陈旧但有用的计算机书籍使用 Pascal 编程语言作为示例(我知道对于选择一门语言来构建解释器,这个理由并不令人信服,但我认为学一门非主流的语言也不错 :))。 + +这有个 Pascal 中的阶乘函数示例,你将能用自己的解释器解释代码,还能够用可交互的源码级调试器进行调试,你可以这样创造: -这有个 Pascal 中的阶乘函数示例,你能用自己的解释器解释代码,还能够用可交互的源码级调试器进行调试,你可以这样创造: ``` program factorial; @@ -57,15 +59,14 @@ begin end. ``` -这个 Pascal 解释器的实现语言会用 Python,但你也可以用其他任何语言,因为这里展示的思想不依赖任何特殊的实现语言。好,让我们开始干活。准备好了,出发! - -你会从编写一个简单的算术表达式解析器,也就是常说的计算器,开始学习解释器和编译器。今天的目标非常简单:让你的计算器能处理两个个位数相加,比如 **3+5**。这是你的计算器的源代码,不好意思,是解释器: +这个 Pascal 解释器的实现语言会使用 Python,但你也可以用其他任何语言,因为这里展示的思想不依赖任何特殊的实现语言。好,让我们开始干活。准备好了,出发! +你会从编写一个简单的算术表达式解析器,也就是常说的计算器,开始学习解释器和编译器。今天的目标非常简单:让你的计算器能处理两个个位数相加,比如 `3+5`。下面是你的计算器的源代码——不好意思,是解释器: ``` # 标记类型 # -# EOF (end-of-file 文件末尾) 标记是用来表示所有输入都解析完成 +# EOF (end-of-file 文件末尾)标记是用来表示所有输入都解析完成 INTEGER, PLUS, EOF = 'INTEGER', 'PLUS', 'EOF' @@ -73,7 +74,7 @@ class Token(object): def __init__(self, type, value): # token 类型: INTEGER, PLUS, MINUS, or EOF self.type = type - # token 值: 0, 1, 2. 
3, 4, 5, 6, 7, 8, 9, '+', 或 None + # token 值: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, '+', 或 None self.value = value def __str__(self): @@ -187,7 +188,8 @@ if __name__ == '__main__': ``` -把上面的代码保存到 calc1.py 文件,或者直接从 [GitHub][7] 上下载。在你深入研究代码前,在命令行里面运行它看看效果。试一试!这是我笔记本上的示例会话(如果你想在 Python3 下运行,你要把 raw_input 换成 input): +把上面的代码保存到 `calc1.py` 文件,或者直接从 [GitHub][7] 上下载。在你深入研究代码前,在命令行里面运行它看看效果。试一试!这是我笔记本上的示例会话(如果你想在 Python3 下运行,你要把 `raw_input` 换成 `input`): + ``` $ python calc1.py calc> 3+4 @@ -205,31 +207,32 @@ calc> * 此时支持的唯一一个运算符是加法 * 输入中不允许有任何的空格符号 - - 要让计算器变得简单,这些限制非常必要。不用担心,你很快就会让它变得很复杂。 好,现在让我们深入它,看看解释器是怎么工作,它是怎么评估出算术表达式的。 -当你在命令行中输入一个表达式 3+5,解释器就获得了字符串 “3+5”。为了让解释器能够真正理解要用这个字符串做什么,它首先要把输入 “3+5” 分到叫做 **token(标记)** 的容器里。**标记** 是一个拥有类型和值的对象。比如说,对字符 “3” 而言,标记的类型是 INTEGER 整数,对应的值是 3。 +当你在命令行中输入一个表达式 `3+5`,解释器就获得了字符串 “3+5”。为了让解释器能够真正理解要用这个字符串做什么,它首先要把输入 “3+5” 分到叫做 `token`(标记)的容器里。标记token 是一个拥有类型和值的对象。比如说,对字符 “3” 而言,标记的类型是 INTEGER 整数,对应的值是 3。 -把输入字符串分成标记的过程叫 **词法分析**。因此解释器的需要做的第一步是读取输入字符,并将其转换成标记流。解释器中的这一部分叫做 **词法分析器**,或者简短点叫 **lexer**。你也可以给它起别的名字,诸如 **扫描器** 或者 **标记器**。他们指的都是同一个东西:解释器或编译器中将输入字符转换成标记流的那部分。 +把输入字符串分成标记的过程叫词法分析lexical analysis。因此解释器的需要做的第一步是读取输入字符,并将其转换成标记流。解释器中的这一部分叫做词法分析器lexical analyzer,或者简短点叫 **lexer**。你也可以给它起别的名字,诸如扫描器scanner或者标记器tokenizer。它们指的都是同一个东西:解释器或编译器中将输入字符转换成标记流的那部分。 -Interpreter 类中的 get_next_token 方法就是词法分析器。每次调用它的时候,你都能从传入解释器的输入字符中获得创建的下一个标记。仔细看看这个方法,看看它是如何完成把字符转换成标记的任务的。输入被存在可变文本中,它保存了输入的字符串和关于该字符串的索引(把字符串想象成字符数组)。pos 开始时设为 0,指向 ‘3’.这个方法一开始检查字符是不是数字,如果是,就将 pos 加 1,并返回一个 INTEGER 类型的标记实例,并把字符 ‘3’ 的值设为整数,也就是整数 3: +`Interpreter` 类中的 `get_next_token` 方法就是词法分析器。每次调用它的时候,你都能从传入解释器的输入字符中获得创建的下一个标记。仔细看看这个方法,看看它是如何完成把字符转换成标记的任务的。输入被存在可变文本中,它保存了输入的字符串和关于该字符串的索引(把字符串想象成字符数组)。`pos` 开始时设为 0,指向字符 ‘3’。这个方法一开始检查字符是不是数字,如果是,就将 `pos` 加 1,并返回一个 INTEGER 类型的标记实例,并把字符 ‘3’ 的值设为整数,也就是整数 3: ![][8] -现在 pos 指向文本中的 ‘+’ 号。下次调用这个方法的时候,它会测试 pos 位置的字符是不是个数字,然后检测下一个字符是不是个加号,就是这样。结果这个方法把 pos 加一,返回一个新创建的标记,类型是 PLUS,值为 ‘+’。 +现在 `pos` 指向文本中的 ‘+’ 号。下次调用这个方法的时候,它会测试 `pos` 
位置的字符是不是个数字,然后检测下一个字符是不是个加号,就是这样。结果这个方法把 `pos` 加 1,返回一个新创建的标记,类型是 PLUS,值为 ‘+’。 ![][9] -pos 现在指向字符 ‘5’。当你再调用 get_next_token 方法时,该方法会检查这是不是个数字,就是这样,然后它把 pos 加一,返回一个新的 INTEGER 标记,该标记的值被设为 5: +`pos` 现在指向字符 ‘5’。当你再调用 `get_next_token` 方法时,该方法会检查这是不是个数字,就是这样,然后它把 `pos` 加 1,返回一个新的 INTEGER 标记,该标记的值被设为整数 5: + ![][10] -因为 pos 索引现在到了字符串 “3+5” 的末尾,你每次调用 get_next_token 方法时,它将会返回 EOF 标记: +因为 `pos` 索引现在到了字符串 “3+5” 的末尾,你每次调用 `get_next_token` 方法时,它将会返回 EOF 标记: + ![][11] 自己试一试,看看计算器里的词法分析器的运行: + ``` >>> from calc1 import Interpreter >>> @@ -248,17 +251,16 @@ Token(EOF, None) >>> ``` -既然你的解释器能够从输入字符中获取标记流,解释器需要做点什么:它需要在词法分析器 get_next_token 中获取的标记流中找出相应的结构。你的解释器应该能够找到流中的结构:INTEGER -> PLUS -> INTEGER。就是这样,它尝试找出标记的序列:整数后面要跟着加号,加号后面要跟着整数。 +既然你的解释器能够从输入字符中获取标记流,解释器需要对它做点什么:它需要在词法分析器 `get_next_token` 中获取的标记流中找出相应的结构。你的解释器应该能够找到流中的结构:INTEGER -> PLUS -> INTEGER。就是这样,它尝试找出标记的序列:整数后面要跟着加号,加号后面要跟着整数。 -负责找出并解释结构的方法就是 expr。该方法检验标记序列确实与期望的标记序列是对应的,比如 INTEGER -> PLUS -> INTEGER。成功确认了这个结构后,就会生成加号左右两边的标记的值相加的结果,这样就成功解释你输入到解释器中的算术表达式了。 +负责找出并解释结构的方法就是 `expr`。该方法检验标记序列确实与期望的标记序列是对应的,比如 INTEGER -> PLUS -> INTEGER。成功确认了这个结构后,就会生成加号左右两边的标记的值相加的结果,这样就成功解释你输入到解释器中的算术表达式了。 -expr 方法用了一个助手方法 eat 来检验传入的标记类型是否与当前的标记类型相匹配。在匹配到传入的标记类型后,eat 方法获取下一个标记,并将其赋给 current_token 变量,然后高效地 “吃掉” 当前匹配的标记,并将标记流的虚拟指针向后移动。如果标记流的结构与期望的 INTEGER PLUS INTEGER 标记序列不对应,eat 方法就抛出一个异常。 +`expr` 方法用了一个助手方法 `eat` 来检验传入的标记类型是否与当前的标记类型相匹配。在匹配到传入的标记类型后,`eat` 方法会获取下一个标记,并将其赋给 `current_token` 变量,然后高效地 “吃掉” 当前匹配的标记,并将标记流的虚拟指针向后移动。如果标记流的结构与期望的 INTEGER -> PLUS -> INTEGER 标记序列不对应,`eat` 方法就抛出一个异常。 让我们回顾下解释器做了什么来对算术表达式进行评估的: - * 解释器接受输入字符串,就把它当作 “3+5” - * 解释器调用 expr 方法,在词法分析器 get_next_token 返回的标记流中找出结构。这个结构就是 INTEGER PLUS INTEGER 这样的格式。在确认了格式后,它就通过把两个整型标记相加解释输入,因为此时对于解释器来说很清楚,他要做的就是把两个整数 3 和 5 进行相加。 - +* 解释器接受输入字符串,比如说 “3+5” +* 解释器调用 `expr` 方法,在词法分析器 `get_next_token` 返回的标记流中找出结构。这个结构就是 INTEGER -> PLUS -> INTEGER 这样的格式。在确认了格式后,它就通过把两个整型标记相加来解释输入,因为此时对于解释器来说很清楚,它要做的就是把两个整数 3 和 5 进行相加。 恭喜。你刚刚学习了怎么构建自己的第一个解释器! 
@@ -268,42 +270,38 @@ expr 方法用了一个助手方法 eat 来检验传入的标记类型是否与 看了这篇文章,你肯定觉得不够,是吗?好,准备好做这些练习: - 1. 修改代码,允许输入多位数,比如 “12+3” - 2. 添加一个方法忽略空格符,让你的计算器能够处理带有空白的输入,比如“12 + 3” - 3. 修改代码,用 ‘-’ 号而非 ‘+’ 号去执行减法比如 “7-5” - +1. 修改代码,允许输入多位数,比如 “12+3” +2. 添加一个方法忽略空格符,让你的计算器能够处理带有空白的输入,比如 “12 + 3” +3. 修改代码,用 ‘-’ 号而非 ‘+’ 号去执行减法比如 “7-5” **检验你的理解** - 1. 什么是解释器? - 2. 什么是编译器 - 3. 解释器和编译器有什么差别? - 4. 什么是标记? - 5. 将输入分隔成若干个标记的过程叫什么? - 6. 解释器中进行词法分析的部分叫什么? - 7. 解释器或编译器中进行词法分析的部分有哪些其他的常见名字? - - +1. 什么是解释器? +2. 什么是编译器 +3. 解释器和编译器有什么差别? +4. 什么是标记? +5. 将输入分隔成若干个标记的过程叫什么? +6. 解释器中进行词法分析的部分叫什么? +7. 解释器或编译器中进行词法分析的部分有哪些其他的常见名字? 在结束本文前,我衷心希望你能留下学习解释器和编译器的承诺。并且现在就开始做。不要把它留到以后。不要拖延。如果你已经看完了本文,就开始吧。如果已经仔细看完了但是还没做什么练习 —— 现在就开始做吧。如果已经开始做练习了,那就把剩下的做完。你懂得。而且你知道吗?签下承诺书,今天就开始学习解释器和编译器! +> 本人, ______,身体健全,思想正常,在此承诺从今天开始学习解释器和编译器,直到我百分百了解它们是怎么工作的! -_本人, ______,身体健全,思想正常,在此承诺从今天开始学习解释器和编译器,直到我百分百了解它们是怎么工作的!_ +> -签字人: +> 签字人: -日期: +> 日期: ![][13] 签字,写上日期,把它放在你每天都能看到的地方,确保你能坚守承诺。谨记你的承诺: -> "Commitment is doing the thing you said you were going to do long after the mood you said it in has left you." -- Darren Hardy > “承诺就是,你说自己会去做的事,在你说完就一直陪着你的东西。” —— Darren Hardy 好,今天的就结束了。这个系列的下一篇文章里,你将会扩展自己的计算器,让它能够处理更复杂的算术表达式。敬请期待。 - -------------------------------------------------------------------------------- via: https://ruslanspivak.com/lsbasi-part1/ @@ -311,7 +309,7 @@ via: https://ruslanspivak.com/lsbasi-part1/ 作者:[Ruslan Spivak][a] 译者:[BriFuture](https://github.com/BriFuture) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20150708 Choosing a Linux Tracer (2015).md b/published/20150708 Choosing a Linux Tracer (2015).md new file mode 100644 index 0000000000..2d04d8594f --- /dev/null +++ b/published/20150708 Choosing a Linux Tracer (2015).md @@ -0,0 +1,189 @@ +Linux 跟踪器之选 +====== + +[![][1]][2] + +> Linux 跟踪很神奇! + +跟踪器tracer是一个高级的性能分析和调试工具,如果你使用过 `strace(1)` 或者 `tcpdump(8)`,你不应该被它吓到 ... 
你使用的就是跟踪器。系统跟踪器能让你看到很多的东西,而不仅是系统调用或者数据包,因为常见的跟踪器都可以跟踪内核或者应用程序的任何东西。 + +有大量的 Linux 跟踪器可供你选择。由于它们中的每个都有一个官方的(或者非官方的)吉祥物,我们有足够多的选择给孩子们展示。 + +你喜欢使用哪一个呢? + +我从两类读者的角度来回答这个问题:大多数人和性能/内核工程师。当然,随着时间的推移,这也可能会发生变化,因此,我需要及时去更新本文内容,或许是每年一次,或者更频繁。(LCTT 译注:本文最后更新于 2015 年) + +### 对于大多数人 + +大多数人(开发者、系统管理员、运维人员、站点可靠性工程师(SRE)…)是不需要去学习系统跟踪器的底层细节的。以下是你需要去了解和做的事情: + +#### 1. 使用 perf_events 进行 CPU 剖析 + +可以使用 perf_events 进行 CPU 剖析profiling。它可以用一个 [火焰图][3] 来形象地表示。比如: + +``` +git clone --depth 1 https://github.com/brendangregg/FlameGraph +perf record -F 99 -a -g -- sleep 30 +perf script | ./FlameGraph/stackcollapse-perf.pl | ./FlameGraph/flamegraph.pl > perf.svg +``` + +![](http://www.brendangregg.com/blog/images/2015/cpu-bash-flamegraph-500.png) + +Linux 的 perf_events(即 `perf`,后者是它的命令)是官方为 Linux 用户准备的跟踪器/分析器。它位于内核源码中,并且维护得非常好(而且现在它的功能还在快速变强)。它一般是通过 linux-tools-common 这个包来添加的。 + +`perf` 可以做的事情很多,但是,如果我只能建议你学习其中的一个功能的话,那就是 CPU 剖析。虽然从技术角度来说,这并不是事件“跟踪”,而是采样sampling。最难的部分是获得完整的栈和符号,这部分在我的 [Linux Profiling at Netflix][4] 中针对 Java 和 Node.js 讨论过。 + +#### 2. 知道它能干什么 + +正如一位朋友所说的:“你不需要知道 X 光机是如何工作的,但你需要明白的是,如果你吞下了一个硬币,X 光机是你的一个选择!”你需要知道使用跟踪器能够做什么,因此,如果你在业务上确实需要它,你可以以后再去学习它,或者请会使用它的人来做。 + +简单地说:几乎任何事情都可以通过跟踪来了解它。内部文件系统、TCP/IP 处理过程、设备驱动、应用程序内部情况。阅读我在 lwn.net 上的 [ftrace][5] 的文章,也可以去浏览 [perf_events 页面][6],那里有一些跟踪(和剖析)能力的示例。 + +#### 3. 需要一个前端工具 + +如果你要购买一个性能分析工具(有许多公司销售这类产品),请要求它支持 Linux 跟踪。你会想要一个直观的“点击”界面去探查内核的内部,以及一个展示不同堆栈位置延迟的热力图。就像我在 [Monitorama 演讲][7] 中描述的那样。 + +我创建并开源了我自己的一些前端工具,虽然它们是基于 CLI 的(不是图形界面的)。这样可以使其它人使用跟踪器更快更容易。比如,我的 [perf-tools][8],跟踪新进程是这样的: + +``` +# ./execsnoop +Tracing exec()s. Ctrl-C to end. + PID PPID ARGS + 22898 22004 man ls + 22905 22898 preconv -e UTF-8 + 22908 22898 pager -s + 22907 22898 nroff -mandoc -rLL=164n -rLT=164n -Tutf8 +[...]
+``` + +在 Netflix 公司,我正在开发 [Vector][9],它是一个实例分析工具,实际上它也是一个 Linux 跟踪器的前端。 + +### 对于性能或者内核工程师 + +一般来说,我们的工作都非常难,因为大多数人或许要求我们去搞清楚如何去跟踪某个事件,以及因此需要选择使用哪个跟踪器。为完全理解一个跟踪器,你通常需要花至少一百多个小时去使用它。理解所有的 Linux 跟踪器并能在它们之间做出正确的选择是件很难的事情。(我或许是唯一接近完成这件事的人) + +在这里我建议选择如下,要么: + +A)选择一个全能的跟踪器,并以它为标准。这需要在一个测试环境中花大量的时间来搞清楚它的细微差别和安全性。我现在的建议是 SystemTap 的最新版本(例如,从 [源代码][10] 构建)。我知道有的公司选择的是 LTTng ,尽管它并不是很强大(但是它很安全),但他们也用的很好。如果在 `sysdig` 中添加了跟踪点或者是 kprobes,它也是另外的一个候选者。 + +B)按我的 [Velocity 教程中][11] 的流程图。这意味着尽可能使用 ftrace 或者 perf_events,eBPF 已经集成到内核中了,然后用其它的跟踪器,如 SystemTap/LTTng 作为对 eBPF 的补充。我目前在 Netflix 的工作中就是这么做的。 + +![](http://www.brendangregg.com/blog/images/2015/choosing_a_tracer.png) + +以下是我对各个跟踪器的评价: + +#### 1. ftrace + +我爱 [ftrace][12],它是内核黑客最好的朋友。它被构建进内核中,它能够利用跟踪点、kprobes、以及 uprobes,以提供一些功能:使用可选的过滤器和参数进行事件跟踪;事件计数和计时,内核概览;函数流步进function-flow walking。关于它的示例可以查看内核源代码树中的 [ftrace.txt][13]。它通过 `/sys` 来管理,是面向单一的 root 用户的(虽然你可以使用缓冲实例以让其支持多用户),它的界面有时很繁琐,但是它比较容易调校hackable,并且有个前端:ftrace 的主要创建者 Steven Rostedt 设计了一个 trace-cmd,而且我也创建了 perf-tools 集合。我最诟病的就是它不是可编程的programmable,因此,举个例子说,你不能保存和获取时间戳、计算延迟,以及将其保存为直方图。你需要转储事件到用户级以便于进行后期处理,这需要花费一些成本。它也许可以通过 eBPF 实现可编程。 + +#### 2. perf_events + +[perf_events][14] 是 Linux 用户的主要跟踪工具,它的源代码位于 Linux 内核中,一般是通过 linux-tools-common 包来添加的。它又称为 `perf`,后者指的是它的前端,它相当高效(动态缓存),一般用于跟踪并转储到一个文件中(perf.data),然后可以在之后进行后期处理。它可以做大部分 ftrace 能做的事情。它不能进行函数流步进,并且不太容易调校(而它的安全/错误检查做的更好一些)。但它可以做剖析(采样)、CPU 性能计数、用户级的栈转换、以及使用本地变量利用调试信息debuginfo进行行级跟踪line tracing。它也支持多个并发用户。与 ftrace 一样,它也不是内核可编程的,除非 eBPF 支持(补丁已经在计划中)。如果只学习一个跟踪器,我建议大家去学习 perf,它可以解决大量的问题,并且它也相当安全。 + +#### 3. eBPF + +扩展的伯克利包过滤器extended Berkeley Packet Filter(eBPF)是一个内核内in-kernel的虚拟机,可以在事件上运行程序,它非常高效(JIT)。它可能最终为 ftrace 和 perf_events 提供内核内编程in-kernel programming,并可以去增强其它跟踪器。它现在是由 Alexei Starovoitov 开发的,还没有实现完全的整合,但是对于一些令人印象深刻的工具,有些内核版本(比如,4.1)已经支持了:比如,块设备 I/O 的延迟热力图latency heat map。更多参考资料,请查阅 Alexei 的 [BPF 演示][15],和它的 [eBPF 示例][16]。 + +#### 4. 
SystemTap + +[SystemTap][17] 是一个非常强大的跟踪器。它可以做任何事情:剖析、跟踪点、kprobes、uprobes(它就来自 SystemTap)、USDT、内核内编程等等。它将程序编译成内核模块并加载它们 —— 这是一种很难保证安全的方法。它的开发是在内核代码树之外进行的,并且在过去出现过很多问题(内核崩溃或冻结)。许多并不是 SystemTap 的过错 —— 它通常是首次对内核使用某些跟踪功能,并率先遇到 bug。最新版本的 SystemTap 是非常好的(你需要从它的源代码编译),但是,许多人仍然没有从早期版本的问题阴影中走出来。如果你想去使用它,先在测试环境中花一些时间测试,然后,在 irc.freenode.net 的 #systemtap 频道与开发者进行讨论。(Netflix 有一个容错架构,我们使用了 SystemTap,但是我们或许比起你来说,更少担心它的安全性)我最诟病的事情是,它似乎假设你有办法得到内核调试信息,而我并没有这些信息。没有它我实际上可以做很多事情,但是缺少相关的文档和示例(我现在自己开始帮着做这些了)。 + +#### 5. LTTng + +[LTTng][18] 对事件收集进行了优化,性能要好于其它的跟踪器,也支持许多的事件类型,包括 USDT。它的开发是在内核代码树之外进行的。它的核心部分非常简单:通过一个很小的固定指令集写入事件到跟踪缓冲区。这样让它既安全又快速。缺点是做内核内编程不太容易。我觉得那不是个大问题,由于它优化得很好,可以充分地扩展,尽管需要后期处理。它也探索了一种不同的分析技术:像“黑匣子”一样记录所有感兴趣的事件,以便之后在 GUI 中分析。我担心这种记录方式会错失之前没有预料到的事件,我真的需要花一些时间去看看它在实践中是如何工作的。这个跟踪器上我花的时间最少(没有特别的原因)。 + +#### 6. ktap + +[ktap][19] 是一个很有前途的跟踪器,它在内核中使用了一个 lua 虚拟机,不需要调试信息,并且在嵌入式设备上可以工作得很好。这使得它进入了人们的视野,在某个时候似乎要成为 Linux 上最好的跟踪器。然而,由于 eBPF 开始集成到了内核,ktap 的集成工作被推迟了,直到它能够使用 eBPF 而不是它自己的虚拟机。由于 eBPF 在几个月过去之后仍然在集成过程中,ktap 的开发者已经等待了很长的时间。我希望在今年晚些时候它能够重启开发。 + +#### 7. dtrace4linux + +[dtrace4linux][20] 主要是由一个人(Paul Fox)利用业余时间将 Sun DTrace 移植到 Linux 的成果。它令人印象深刻,一些供应器provider可以工作,但还不是很完美,它最多应该算是实验性的工具(不安全)。我认为对于许可证的担心,使人们对它保持谨慎:它可能永远也进入不了 Linux 内核,因为 Sun 是基于 CDDL 许可证发布的 DTrace;Paul 的方法是将它作为一个插件。我非常希望看到 Linux 上的 DTrace,并且希望这个项目能够完成,我想我加入 Netflix 时将花一些时间来帮它完成。但是,我一直在使用内置的跟踪器 ftrace 和 perf_events。 + +#### 8. OL DTrace + +[Oracle Linux DTrace][21] 是将 DTrace 移植到 Linux(尤其是 Oracle Linux)的重大努力。过去这些年的许多发布版本都在稳定地进步,开发者甚至谈到了改善 DTrace 测试套件,这显示出这个项目很有前途。许多有用的功能已经完成:系统调用、剖析、sdt、proc、sched、以及 USDT。我一直在等待着 fbt(函数边界跟踪,对内核的动态跟踪),它将成为 Linux 内核上非常强大的功能。它最终能否成功取决于能否吸引足够多的人去使用 Oracle Linux(并为支持付费)。另一个羁绊是它并非完全开源的:内核组件是开源的,但用户级代码我没有看到。 + +#### 9. 
sysdig + +[sysdig][22] 是一个很新的跟踪器,它可以使用类似 `tcpdump` 的语法来处理系统调用syscall事件,并用 lua 做后期处理。它也是令人印象深刻的,并且很高兴能看到在系统跟踪领域的创新。它的局限性是,它的系统调用只能是在当时,并且,它转储所有事件到用户级进行后期处理。你可以使用系统调用来做许多事情,虽然我希望能看到它去支持跟踪点、kprobes、以及 uprobes。我也希望看到它支持 eBPF 以查看内核内概览。sysdig 的开发者现在正在增加对容器的支持。可以关注它的进一步发展。 + +### 深入阅读 + +我自己的工作中使用到的跟踪器包括: + +- **ftrace** : 我的 [perf-tools][8] 集合(查看示例目录);我的 lwn.net 的 [ftrace 跟踪器的文章][5]; 一个 [LISA14][8] 演讲;以及帖子: [函数计数][23]、 [iosnoop][24]、 [opensnoop][25]、 [execsnoop][26]、 [TCP retransmits][27]、 [uprobes][28] 和 [USDT][29]。 +- **perf_events** : 我的 [perf_events 示例][6] 页面;在 SCALE 的一个 [Linux Profiling at Netflix][4] 演讲;和帖子:[CPU 采样][30]、[静态跟踪点][31]、[热力图][32]、[计数][33]、[内核行级跟踪][34]、[off-CPU 时间火焰图][35]。 +- **eBPF** : 帖子 [eBPF:一个小的进步][36],和一些 [BPF-tools][37] (我需要发布更多)。 +- **SystemTap** : 很久以前,我写了一篇 [使用 SystemTap][38] 的文章,它有点过时了。最近我发布了一些 [systemtap-lwtools][39],展示了在没有内核调试信息的情况下,SystemTap 是如何使用的。 +- **LTTng** : 我使用它的时间很短,不足以发布什么文章。 +- **ktap** : 我的 [ktap 示例][40] 页面包括一行程序和脚本,虽然它是早期的版本。 +- **dtrace4linux** : 在我的 [系统性能][41] 书中包含了一些示例,并且在过去我为了某些事情开发了一些小的修补,比如, [timestamps][42]。 +- **OL DTrace** : 因为它是对 DTrace 的直接移植,我早期 DTrace 的工作大多与之相关(链接太多了,可以去 [我的主页][43] 上搜索)。一旦它更加完美,我可以开发很多专用工具。 +- **sysdig** : 我贡献了 [fileslower][44] 和 [subsecond offset spectrogram][45] 的 chisel。 +- **其它** : 关于 [strace][46],我写了一些告诫文章。 + +不好意思,没有更多的跟踪器了! 
… 如果你想知道为什么 Linux 中的跟踪器不止一个,或者关于 DTrace 的内容,在我的 [从 DTrace 到 Linux][47] 的演讲中有答案,从 [第 28 张幻灯片][48] 开始。 + +感谢 [Deirdre Straughan][49] 的编辑,以及跟踪小马的创建(General Zoi 是小马的创建者)。 + +-------------------------------------------------------------------------------- + +via: http://www.brendangregg.com/blog/2015-07-08/choosing-a-linux-tracer.html + +作者:[Brendan Gregg][a] +译者:[qhwdw](https://github.com/qhwdw) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.brendangregg.com +[1]:http://www.brendangregg.com/blog/images/2015/tracing_ponies.png +[2]:http://www.slideshare.net/brendangregg/velocity-2015-linux-perf-tools/105 +[3]:http://www.brendangregg.com/FlameGraphs/cpuflamegraphs.html +[4]:http://www.brendangregg.com/blog/2015-02-27/linux-profiling-at-netflix.html +[5]:http://lwn.net/Articles/608497/ +[6]:http://www.brendangregg.com/perf.html +[7]:http://www.brendangregg.com/blog/2015-06-23/netflix-instance-analysis-requirements.html +[8]:http://www.brendangregg.com/blog/2015-03-17/linux-performance-analysis-perf-tools.html +[9]:http://techblog.netflix.com/2015/04/introducing-vector-netflixs-on-host.html +[10]:https://sourceware.org/git/?p=systemtap.git;a=blob_plain;f=README;hb=HEAD +[11]:http://www.slideshare.net/brendangregg/velocity-2015-linux-perf-tools +[12]:http://lwn.net/Articles/370423/ +[13]:https://www.kernel.org/doc/Documentation/trace/ftrace.txt +[14]:https://perf.wiki.kernel.org/index.php/Main_Page +[15]:http://www.phoronix.com/scan.php?page=news_item&px=BPF-Understanding-Kernel-VM +[16]:https://github.com/torvalds/linux/tree/master/samples/bpf +[17]:https://sourceware.org/systemtap/wiki +[18]:http://lttng.org/ +[19]:http://ktap.org/ +[20]:https://github.com/dtrace4linux/linux +[21]:http://docs.oracle.com/cd/E37670_01/E38608/html/index.html +[22]:http://www.sysdig.org/ +[23]:http://www.brendangregg.com/blog/2014-07-13/linux-ftrace-function-counting.html 
+[24]:http://www.brendangregg.com/blog/2014-07-16/iosnoop-for-linux.html +[25]:http://www.brendangregg.com/blog/2014-07-25/opensnoop-for-linux.html +[26]:http://www.brendangregg.com/blog/2014-07-28/execsnoop-for-linux.html +[27]:http://www.brendangregg.com/blog/2014-09-06/linux-ftrace-tcp-retransmit-tracing.html +[28]:http://www.brendangregg.com/blog/2015-06-28/linux-ftrace-uprobe.html +[29]:http://www.brendangregg.com/blog/2015-07-03/hacking-linux-usdt-ftrace.html +[30]:http://www.brendangregg.com/blog/2014-06-22/perf-cpu-sample.html +[31]:http://www.brendangregg.com/blog/2014-06-29/perf-static-tracepoints.html +[32]:http://www.brendangregg.com/blog/2014-07-01/perf-heat-maps.html +[33]:http://www.brendangregg.com/blog/2014-07-03/perf-counting.html +[34]:http://www.brendangregg.com/blog/2014-09-11/perf-kernel-line-tracing.html +[35]:http://www.brendangregg.com/blog/2015-02-26/linux-perf-off-cpu-flame-graph.html +[36]:http://www.brendangregg.com/blog/2015-05-15/ebpf-one-small-step.html +[37]:https://github.com/brendangregg/BPF-tools +[38]:http://dtrace.org/blogs/brendan/2011/10/15/using-systemtap/ +[39]:https://github.com/brendangregg/systemtap-lwtools +[40]:http://www.brendangregg.com/ktap.html +[41]:http://www.brendangregg.com/sysperfbook.html +[42]:https://github.com/dtrace4linux/linux/issues/55 +[43]:http://www.brendangregg.com +[44]:https://github.com/brendangregg/sysdig/commit/d0eeac1a32d6749dab24d1dc3fffb2ef0f9d7151 +[45]:https://github.com/brendangregg/sysdig/commit/2f21604dce0b561407accb9dba869aa19c365952 +[46]:http://www.brendangregg.com/blog/2014-05-11/strace-wow-much-syscall.html +[47]:http://www.brendangregg.com/blog/2015-02-28/from-dtrace-to-linux.html +[48]:http://www.slideshare.net/brendangregg/from-dtrace-to-linux/28 +[49]:http://www.beginningwithi.com/ diff --git a/published/20170310 9 Lightweight Linux Applications to Speed Up Your System.md b/published/20170310 9 Lightweight Linux Applications to Speed Up Your System.md new file mode 100644 index 
0000000000..bf2e2d972a --- /dev/null +++ b/published/20170310 9 Lightweight Linux Applications to Speed Up Your System.md @@ -0,0 +1,210 @@ +9 个提高系统运行速度的轻量级 Linux 应用 +====== + +**简介:** [加速 Ubuntu 系统][1]有很多方法,办法之一是使用轻量级应用来替代一些常用应用程序。我们之前发布过一篇 [Linux 必备的应用程序][2],如今将分享这些应用程序在 Ubuntu 或其他 Linux 发行版上的轻量级替代方案。 + +![在 Ubuntu 上使用轻量级应用程序替代方案][4] + +### 9 个常用 Linux 应用程序的轻量级替代方案 + +你的 Linux 系统很慢吗?应用程序是不是很久才能打开?你最好的选择是使用[轻量级的 Linux 系统][5]。但是重装系统并非总是可行,不是吗? + +所以如果你想坚持使用你现在用的 Linux 发行版,但是想要提高性能,你应该使用更轻量级的应用来替代你的一些常用应用。这篇文章会列出各种 Linux 应用程序的轻量级替代方案。 + +由于我使用的是 Ubuntu,因此我只提供了基于 Ubuntu 的 Linux 发行版的安装说明。但是这些应用程序可以用于几乎所有其他 Linux 发行版。你只需去找这些轻量级应用在你的 Linux 发行版中的安装方法就可以了。 + +### 1. Midori:Web 浏览器 + +[Midori][8] 是与现代互联网环境具有良好兼容性的最轻量级网页浏览器之一。它是开源的,使用与 Google Chrome 最初所基于的相同的渲染引擎 —— WebKit。它超级快速、极简,但高度可定制。 + +![Midori Browser][6] + +Midori 浏览器有很多可以定制的扩展和选项。如果你有较高的定制需求,使用这个浏览器也是一个不错的选择。如果在浏览网页的时候遇到了某些问题,请查看其网站上[常见问题][7]部分 —— 这包含了你可能遇到的常见问题及其解决方案。 + +#### 在基于 Ubuntu 的发行版上安装 Midori + +在 Ubuntu 上,可通过官方源找到 Midori。运行以下指令即可安装它: + +``` +sudo apt install midori +``` + +### 2. Trojita:电子邮件客户端 + +[Trojita][11] 是一款开源的、强大的 IMAP 电子邮件客户端。它速度快,资源利用率高。我可以肯定地称它是 [Linux 最好的电子邮件客户端之一][9]。如果你只需要电子邮件客户端提供 IMAP 支持,那么也许你不用再进一步考虑了。 + +![Trojitá][10] + +Trojita 使用各种技术 —— 按需电子邮件加载、离线缓存、带宽节省模式等 —— 以实现其令人印象深刻的性能。 + +#### 在基于 Ubuntu 的发行版上安装 Trojita + +Trojita 目前没有针对 Ubuntu 的官方 PPA。但这应该不成问题。您可以使用以下命令轻松安装它: + +``` +sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/jkt-gentoo:/trojita/xUbuntu_16.04/ /' > /etc/apt/sources.list.d/trojita.list" +wget http://download.opensuse.org/repositories/home:jkt-gentoo:trojita/xUbuntu_16.04/Release.key +sudo apt-key add - < Release.key +sudo apt update +sudo apt install trojita +``` + +### 3. 
GDebi:包安装程序 + +有时您需要快速安装 DEB 软件包。Ubuntu 软件中心是一个消耗资源严重的应用程序,仅用于安装 .deb 文件并不明智。 + +Gdebi 无疑是一款可以完成同样目的的漂亮工具,而它只有个极简的图形界面。 + +![GDebi][12] + +GDebi 是完全轻量级的,完美无缺地完成了它的工作。你甚至应该[让 Gdebi 成为 DEB 文件的默认安装程序][13]。 + +#### 在基于 Ubuntu 的发行版上安装 GDebi + +只需一行指令,你便可以在 Ubuntu 上安装 GDebi: + +``` +sudo apt install gdebi +``` + +### 4. App Grid:软件中心 + +如果您经常在 Ubuntu 上使用软件中心搜索、安装和管理应用程序,则 [App Grid][15] 是必备的应用程序。它是默认的 Ubuntu 软件中心最具视觉吸引力且速度最快的替代方案。 + +![App Grid][14] + +App Grid 支持应用程序的评分、评论和屏幕截图。 + +#### 在基于 Ubuntu 的发行版上安装 App Grid + +App Grid 拥有 Ubuntu 的官方 PPA。使用以下指令安装 App Grid: + +``` +sudo add-apt-repository ppa:appgrid/stable +sudo apt update +sudo apt install appgrid +``` + +### 5. Yarock:音乐播放器 + +[Yarock][17] 是一个优雅的音乐播放器,拥有现代而最轻量级的用户界面。尽管在设计上是轻量级的,但 Yarock 有一个全面的高级功能列表。 + +![Yarock][16] + +Yarock 的主要功能包括多种音乐收藏、评级、智能播放列表、多种后端选项、桌面通知、音乐剪辑、上下文获取等。 + +### 在基于 Ubuntu 的发行版上安装 Yarock + +您得通过 PPA 使用以下指令在 Ubuntu 上安装 Yarock: + +``` +sudo add-apt-repository ppa:nilarimogard/webupd8 +sudo apt update +sudo apt install yarock +``` + +### 6. VLC:视频播放器 + +谁不需要视频播放器?谁还从未听说过 [VLC][19]?我想并不需要对它做任何介绍。 + +![VLC][18] + +VLC 能满足你在 Ubuntu 上播放各种媒体文件的全部需求,而且它非常轻便。它甚至可以在非常旧的 PC 上完美运行。 + +#### 在基于 Ubuntu 的发行版上安装 VLC + +VLC 为 Ubuntu 提供官方 PPA。可以输入以下命令来安装它: + +``` +sudo apt install vlc +``` + +### 7. PCManFM:文件管理器 + +PCManFM 是 LXDE 的标准文件管理器。与 LXDE 的其他应用程序一样,它也是轻量级的。如果您正在为文件管理器寻找更轻量级的替代品,可以尝试使用这个应用。 + +![PCManFM][20] + +尽管来自 LXDE,PCManFM 也同样适用于其他桌面环境。 + +#### 在基于 Ubuntu 的发行版上安装 PCManFM + +在 Ubuntu 上安装 PCManFM 只需要一条简单的指令: + +``` +sudo apt install pcmanfm +``` + +### 8. Mousepad:文本编辑器 + +在轻量级方面,没有什么可以击败像 nano、vim 等命令行文本编辑器。但是,如果你想要一个图形界面,你可以尝试一下 Mousepad -- 一个最轻量级的文本编辑器。它非常轻巧,速度非常快。带有简单的可定制的用户界面和多个主题。 + +![Mousepad][21] + +Mousepad 支持语法高亮显示。所以,你也可以使用它作为基础的代码编辑器。 + +#### 在基于 Ubuntu 的发行版上安装 Mousepad + +想要安装 Mousepad ,可以使用以下指令: + +``` +sudo apt install mousepad +``` + +### 9. 
GNOME Office:办公软件 + +许多人需要经常使用办公应用程序。通常,大多数办公应用程序体积庞大且很耗资源。Gnome Office 在这方面非常轻便。Gnome Office 在技术上不是一个完整的办公套件。它由不同的独立应用程序组成,在这之中 AbiWord&Gnumeric 脱颖而出。 + +**AbiWord** 是文字处理器。它比其他替代品轻巧并且快得多。但是这样做是有代价的 —— 你可能会失去宏、语法检查等一些功能。AdiWord 并不完美,但它可以满足你基本的需求。 + +![AbiWord][22] + +**Gnumeric** 是电子表格编辑器。就像 AbiWord 一样,Gnumeric 也非常快速,提供了精确的计算功能。如果你正在寻找一个简单轻便的电子表格编辑器,Gnumeric 已经能满足你的需求了。 + +![Gnumeric][23] + +在 [Gnome Office][24] 下面还有一些其它应用程序。你可以在官方页面找到它们。 + +#### 在基于 Ubuntu 的发行版上安装 AbiWord&Gnumeric + +要安装 AbiWord&Gnumeric,只需在终端中输入以下指令: + +``` +sudo apt install abiword gnumeric +``` + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/lightweight-alternative-applications-ubuntu/ + +作者:[Munif Tanjim][a] +译者:[imquanquan](https://github.com/imquanquan) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://itsfoss.com/author/munif/ +[1]:https://itsfoss.com/speed-up-ubuntu-1310/ +[2]:https://itsfoss.com/essential-linux-applications/ +[4]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/Lightweight-alternative-applications-for-Linux-800x450.jpg +[5]:https://itsfoss.com/lightweight-linux-beginners/ +[6]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/Midori-800x497.png +[7]:http://midori-browser.org/faqs/ +[8]:http://midori-browser.org/ +[9]:https://itsfoss.com/best-email-clients-linux/ +[10]:http://trojita.flaska.net/img/2016-03-22-trojita-home.png +[11]:http://trojita.flaska.net/ +[12]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/GDebi.png +[13]:https://itsfoss.com/gdebi-default-ubuntu-software-center/ +[14]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/AppGrid-800x553.png +[15]:http://www.appgrid.org/ +[16]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/Yarock-800x529.png +[17]:https://seb-apps.github.io/yarock/ 
+[18]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/VLC-800x526.png +[19]:http://www.videolan.org/index.html +[20]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/PCManFM.png +[21]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/Mousepad.png +[22]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/AbiWord-800x626.png +[23]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/Gnumeric-800x470.png +[24]:https://gnome.org/gnome-office/ diff --git a/translated/tech/20170921 How to answer questions in a helpful way.md b/published/20170921 How to answer questions in a helpful way.md similarity index 57% rename from translated/tech/20170921 How to answer questions in a helpful way.md rename to published/20170921 How to answer questions in a helpful way.md index acc67fd10c..41436b0a90 100644 --- a/translated/tech/20170921 How to answer questions in a helpful way.md +++ b/published/20170921 How to answer questions in a helpful way.md @@ -1,28 +1,21 @@ - 如何提供有帮助的回答 ============================= -如果你的同事问你一个不太清晰的问题,你会怎么回答?我认为提问题是一种技巧(可以看 [如何提出有意义的问题][1]) 同时,合理地回答问题也是一种技巧。他们都是非常实用的。 +如果你的同事问你一个不太清晰的问题,你会怎么回答?我认为提问题是一种技巧(可以看 [如何提出有意义的问题][1]) 同时,合理地回答问题也是一种技巧,它们都是非常实用的。 -一开始 - 有时向你提问的人不尊重你的时间,这很糟糕。 - -理想情况下,我们假设问你问题的人是一个理性的人并且正在尽力解决问题而你想帮助他们。和我一起工作的人是这样,我所生活的世界也是这样。当然,现实生活并不是这样。 +一开始 —— 有时向你提问的人不尊重你的时间,这很糟糕。理想情况下,我们假设问你问题的人是一个理性的人并且正在尽力解决问题,而你想帮助他们。和我一起工作的人是这样,我所生活的世界也是这样。当然,现实生活并不是这样。 下面是有助于回答问题的一些方法! 
- -### 如果他们提问不清楚,帮他们澄清 +### 如果他们的提问不清楚,帮他们澄清 通常初学者不会提出很清晰的问题,或者问一些对回答问题没有必要信息的问题。你可以尝试以下方法 澄清问题: -* ** 重述为一个更明确的问题 ** 来回复他们(”你是想问 X 吗?“) - -* ** 向他们了解更具体的他们并没有提供的信息 ** (”你使用 IPv6 ?”) - -* ** 问是什么导致了他们的问题 ** 例如,有时有些人会进入我的团队频道,询问我们的服务发现(service discovery )如何工作的。这通常是因为他们试图设置/重新配置服务。在这种情况下,如果问“你正在使用哪种服务?可以给我看看你正在处理的 pull requests 吗?”是有帮助的。 - -这些方法很多来自 [如何提出有意义的问题][2]中的要点。(尽管我永远不会对某人说“噢,你得先看完 “如何提出有意义的问题”这篇文章后再来像我提问) +* **重述为一个更明确的问题**来回复他们(“你是想问 X 吗?”) +* **向他们了解更具体的他们并没有提供的信息** (“你使用 IPv6 ?”) +* **问是什么导致了他们的问题**。例如,有时有些人会进入我的团队频道,询问我们的服务发现service discovery如何工作的。这通常是因为他们试图设置/重新配置服务。在这种情况下,如果问“你正在使用哪种服务?可以给我看看你正在处理的‘拉取请求’吗?”是有帮助的。 +这些方法很多来自[如何提出有意义的问题][2]中的要点。(尽管我永远不会对某人说“噢,你得先看完《如何提出有意义的问题》这篇文章后再来向我提问) ### 弄清楚他们已经知道了什么 @@ -30,66 +23,54 @@ Harold Treen 给了我一个很好的例子: -> 前几天,有人请我解释“ Redux-Sagas ”。与其深入解释不如说“ 他们就像 worker threads 监听行为(actions),让你更新 Redux store 。 +> 前几天,有人请我解释 “Redux-Sagas”。与其深入解释,不如说 “它们就像监听 action 的工人线程,并可以让你更新 Redux store。 -> 我开始搞清楚他们对 Redux 、行为(actions)、store 以及其他基本概念了解多少。将这些概念都联系在一起再来解释会容易得多。 +> 我开始搞清楚他们对 Redux、action、store 以及其他基本概念了解多少。将这些概念都联系在一起再来解释会容易得多。 -弄清楚问你问题的人已经知道什么是非常重要的。因为有时他们可能会对基础概念感到疑惑(“ Redux 是什么?“),或者他们可能是专家但是恰巧遇到了微妙的极端情况(corner case)。如果答案建立在他们不知道的概念上会令他们困惑,但如果重述他们已经知道的的又会是乏味的。 +弄清楚问你问题的人已经知道什么是非常重要的。因为有时他们可能会对基础概念感到疑惑(“Redux 是什么?”),或者他们可能是专家,但是恰巧遇到了微妙的极端情况corner case。如果答案建立在他们不知道的概念上会令他们困惑,但如果重述他们已经知道的的又会是乏味的。 这里有一个很实用的技巧来了解他们已经知道什么 - 比如可以尝试用“你对 X 了解多少?”而不是问“你知道 X 吗?”。 - ### 给他们一个文档 -“RTFM” (“去读那些他妈的手册”(Read The Fucking Manual))是一个典型的无用的回答,但事实上如果向他们指明一个特定的文档会是非常有用的!当我提问题的时候,我当然很乐意翻看那些能实际解决我的问题的文档,因为它也可能解决其他我想问的问题。 +“RTFM” (“去读那些他妈的手册”Read The Fucking Manual)是一个典型的无用的回答,但事实上如果向他们指明一个特定的文档会是非常有用的!当我提问题的时候,我当然很乐意翻看那些能实际解决我的问题的文档,因为它也可能解决其他我想问的问题。 我认为明确你所给的文档的确能够解决问题是非常重要的,或者至少经过查阅后确认它对解决问题有帮助。否则,你可能将以下面这种情形结束对话(非常常见): * Ali:我应该如何处理 X ? +* Jada:\<文档链接> +* Ali: 这个没有实际解释如何处理 X ,它仅仅解释了如何处理 Y ! -* Jada:<文档链接> - -* Ali: 这个并有实际解释如何处理 X ,它仅仅解释了如何处理 Y ! 
- -如果我所给的文档特别长,我会指明文档中那个我将会谈及的特定部分。[bash 手册][3] 有44000个字(真的!),所以如果只说“它在 bash 手册中有说明”是没有帮助的:) - +如果我所给的文档特别长,我会指明文档中那个我将会谈及的特定部分。[bash 手册][3] 有 44000 个字(真的!),所以如果只说“它在 bash 手册中有说明”是没有帮助的 :) ### 告诉他们一个有用的搜索 -在工作中,我经常发现我可以利用我所知道的关键字进行搜索找到能够解决我的问题的答案。对于初学者来说,这些关键字往往不是那么明显。所以说“这是我用来寻找这个答案的搜索”可能有用些。再次说明,回答时请经检查后以确保搜索能够得到他们所需要的答案:) - +在工作中,我经常发现我可以利用我所知道的关键字进行搜索来找到能够解决我的问题的答案。对于初学者来说,这些关键字往往不是那么明显。所以说“这是我用来寻找这个答案的搜索”可能有用些。再次说明,回答时请经检查后以确保搜索能够得到他们所需要的答案 :) ### 写新文档 -人们经常一次又一次地问我的团队同样的问题。很显然这并不是他们的错(他们怎么能够知道在他们之前已经有10个人问了这个问题,且知道答案是什么呢?)因此,我们会尝试写新文档,而不是直接回答回答问题。 +人们经常一次又一次地问我的团队同样的问题。很显然这并不是他们的错(他们怎么能够知道在他们之前已经有 10 个人问了这个问题,且知道答案是什么呢?)因此,我们会尝试写新文档,而不是直接回答回答问题。 1. 马上写新文档 - 2. 给他们我们刚刚写好的新文档 - 3. 公示 写文档有时往往比回答问题需要花很多时间,但这是值得的。写文档尤其重要,如果: a. 这个问题被问了一遍又一遍 - b. 随着时间的推移,这个答案不会变化太大(如果这个答案每一个星期或者一个月就会变化,文档就会过时并且令人受挫) - ### 解释你做了什么 对于一个话题,作为初学者来说,这样的交流会真让人沮丧: * 新人:“嗨!你如何处理 X ?” - * 有经验的人:“我已经处理过了,而且它已经完美解决了” - * 新人:”...... 但是你做了什么?!“ 如果问你问题的人想知道事情是如何进行的,这样是有帮助的: * 让他们去完成任务而不是自己做 - * 告诉他们你是如何得到你给他们的答案的。 这可能比你自己做的时间还要长,但对于被问的人来说这是一个学习机会,因为那样做使得他们将来能够更好地解决问题。 @@ -97,88 +78,74 @@ b. 随着时间的推移,这个答案不会变化太大(如果这个答案 这样,你可以进行更好的交流,像这: * 新人:“这个网站出现了错误,发生了什么?” - -* 有经验的人:(2分钟后)”oh 这是因为发生了数据库故障转移“ - -* 新人: ”你是怎么知道的??!?!?“ - -* 有经验的人:“以下是我所做的!“: - +* 有经验的人:(2分钟后)“oh 这是因为发生了数据库故障转移” +* 新人: “你是怎么知道的??!?!?” +* 有经验的人:“以下是我所做的!”: 1. 通常这些错误是因为服务器 Y 被关闭了。我查看了一下 `$PLACE` 但它表明服务器 Y 开着。所以,并不是这个原因导致的。 - 2. 然后我查看 X 的仪表盘 ,仪表盘的这个部分显示这里发生了数据库故障转移。 - 3. 
然后我在日志中找到了相应服务器,并且它显示连接数据库错误,看起来错误就是这里。 如果你正在解释你是如何调试一个问题,解释你是如何发现问题,以及如何找出问题的。尽管看起来你好像已经得到正确答案,但感觉更好的是能够帮助他们提高学习和诊断能力,并了解可用的资源。 - ### 解决根本问题 -这一点有点棘手。有时候人们认为他们依旧找到了解决问题的正确途径,且他们只再多一点信息就可以解决问题。但他们可能并不是走在正确的道路上!比如: +这一点有点棘手。有时候人们认为他们依旧找到了解决问题的正确途径,且他们只要再多一点信息就可以解决问题。但他们可能并不是走在正确的道路上!比如: -* George:”我在处理 X 的时候遇到了错误,我该如何修复它?“ - -* Jasminda:”你是正在尝试解决 Y 吗?如果是这样,你不应该处理 X ,反而你应该处理 Z 。“ - -* George:“噢,你是对的!!!谢谢你!我回反过来处理 Z 的。“ +* George:“我在处理 X 的时候遇到了错误,我该如何修复它?” +* Jasminda:“你是正在尝试解决 Y 吗?如果是这样,你不应该处理 X ,反而你应该处理 Z 。” +* George:“噢,你是对的!!!谢谢你!我回反过来处理 Z 的。” Jasminda 一点都没有回答 George 的问题!反而,她猜测 George 并不想处理 X ,并且她是猜对了。这是非常有用的! 如果你这样做可能会产生高高在上的感觉: -* George:”我在处理 X 的时候遇到了错误,我该如何修复它?“ +* George:“我在处理 X 的时候遇到了错误,我该如何修复它?” +* Jasminda:“不要这样做,如果你想处理 Y ,你应该反过来完成 Z 。” +* George:“好吧,我并不是想处理 Y 。实际上我想处理 X 因为某些原因(REASONS)。所以我该如何处理 X 。” -* Jasminda:不要这样做,如果你想处理 Y ,你应该反过来完成 Z 。 - -* George:“好吧,我并不是想处理 Y 。实际上我想处理 X 因为某些原因(REASONS)。所以我该如何处理 X 。 - -所以不要高高在上,且要记住有时有些提问者可能已经偏离根本问题很远了。同时回答提问者提出的问题以及他们本该提出的问题都是合理的:“嗯,如果你想处理 X ,那么你可能需要这么做,但如果你想用这个解决 Y 问题,可能通过处理其他事情你可以更好地解决这个问题,这就是为什么可以做得更好的原因。 +所以不要高高在上,且要记住有时有些提问者可能已经偏离根本问题很远了。同时回答提问者提出的问题以及他们本该提出的问题都是合理的:“嗯,如果你想处理 X ,那么你可能需要这么做,但如果你想用这个解决 Y 问题,可能通过处理其他事情你可以更好地解决这个问题,这就是为什么可以做得更好的原因。” -### 询问”那个回答可以解决您的问题吗?” +### 询问“那个回答可以解决您的问题吗?” -我总是喜欢在我回答了问题之后核实是否真的已经解决了问题:”这个回答解决了您的问题吗?您还有其他问题吗?“在问完这个之后最好等待一会,因为人们通常需要一两分钟来知道他们是否已经找到了答案。 +我总是喜欢在我回答了问题之后核实是否真的已经解决了问题:“这个回答解决了您的问题吗?您还有其他问题吗?”在问完这个之后最好等待一会,因为人们通常需要一两分钟来知道他们是否已经找到了答案。 我发现尤其是问“这个回答解决了您的问题吗”这个额外的步骤在写完文档后是非常有用的。通常,在写关于我熟悉的东西的文档时,我会忽略掉重要的东西而不会意识到它。 - ### 结对编程和面对面交谈 我是远程工作的,所以我的很多对话都是基于文本的。我认为这是沟通的默认方式。 今天,我们生活在一个方便进行小视频会议和屏幕共享的世界!在工作时候,在任何时间我都可以点击一个按钮并快速加入与他人的视频对话或者屏幕共享的对话中! 
-例如,最近有人问如何自动调节他们的服务容量规划。我告诉他们我们有几样东西需要清理,但我还不太确定他们要清理的是什么。然后我们进行了一个简短的视屏会话并在5分钟后,我们解决了他们问题。 +例如,最近有人问如何自动调节他们的服务容量规划。我告诉他们我们有几样东西需要清理,但我还不太确定他们要清理的是什么。然后我们进行了一个简短的视频会话并在 5 分钟后,我们解决了他们问题。 我认为,特别是如果有人真的被困在该如何开始一项任务时,开启视频进行结对编程几分钟真的比电子邮件或者一些即时通信更有效。 - ### 不要表现得过于惊讶 这是源自 Recurse Center 的一则法则:[不要故作惊讶][4]。这里有一个常见的情景: -* 某人1:“什么是 Linux 内核” +* 某甲:“什么是 Linux 内核” +* 某乙:“你竟然不知道什么是 Linux 内核?!!!!?!!!????” -* 某人2:“你竟然不知道什么是 Linux 内核(LINUX KERNEL)?!!!!?!!!????” +某乙的表现(无论他们是否真的如此惊讶)是没有帮助的。这大部分只会让某甲不好受,因为他们确实不知道什么是 Linux 内核。 -某人2表现(无论他们是否真的如此惊讶)是没有帮助的。这大部分只会让某人1不好受,因为他们确实不知道什么是 Linux 内核。 +我一直在假装不惊讶,即使我事实上确实有点惊讶那个人不知道这种东西。 -我一直在假装不惊讶即使我事实上确实有点惊讶那个人不知道这种东西但它是令人敬畏的。 - -### 回答问题是令人敬畏的 +### 回答问题真的很棒 显然并不是所有方法都是合适的,但希望你能够发现这里有些是有帮助的!我发现花时间去回答问题并教导人们是其实是很有收获的。 -特别感谢 Josh Triplett 的一些建议并做了很多有益的补充,以及感谢 Harold Treen、Vaibhav Sagar、Peter Bhat Hatkins、Wesley Aptekar Cassels 和 Paul Gowder的阅读或评论。 +特别感谢 Josh Triplett 的一些建议并做了很多有益的补充,以及感谢 Harold Treen、Vaibhav Sagar、Peter Bhat Hatkins、Wesley Aptekar Cassels 和 Paul Gowder 的阅读或评论。 -------------------------------------------------------------------------------- via: https://jvns.ca/blog/answer-questions-well/ -作者:[ Julia Evans][a] +作者:[Julia Evans][a] 译者:[HardworkFish](https://github.com/HardworkFish) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20171002 Bash Bypass Alias Linux-Unix Command.md b/published/20171002 Bash Bypass Alias Linux-Unix Command.md similarity index 59% rename from translated/tech/20171002 Bash Bypass Alias Linux-Unix Command.md rename to published/20171002 Bash Bypass Alias Linux-Unix Command.md index e4dec43782..e055c1f519 100644 --- a/translated/tech/20171002 Bash Bypass Alias Linux-Unix Command.md +++ b/published/20171002 Bash Bypass Alias Linux-Unix Command.md @@ -1,23 +1,34 @@ -绕过 Linux/Unix 命令别名 +4 种绕过 Linux/Unix 命令别名的方法 ====== + 我在我的 Linux 系统上定义了如下 mount 别名: + ``` alias 
mount='mount | column -t' ``` -但是我需要在挂载文件系统和其他用途时绕过 bash 别名。我如何在 Linux、\*BSD、macOS 或者类 Unix 系统上临时禁用或者绕过 bash shell 呢? +但是我需要在挂载文件系统和其他用途时绕过这个 bash 别名。我如何在 Linux、*BSD、macOS 或者类 Unix 系统上临时禁用或者绕过 bash shell 呢? + +你可以使用 `alias` 命令定义或显示 bash shell 别名。一旦创建了 bash shell 别名,它们将优先于外部或内部命令。本文将展示如何暂时绕过 bash 别名,以便你可以运行实际的内部或外部命令。 -你可以使用 alias 命令定义或显示 bash shell 别名。一旦创建了 bash shell 别名,它们将优先于外部或内部命令。本文将展示如何暂时绕过 bash 别名,以便你可以运行实际的内部或外部命令。 [![Bash Bypass Alias Linux BSD macOS Unix Command][1]][1] -## 4 种绕过 bash 别名的方法 - +### 4 种绕过 bash 别名的方法 尝试以下任意一种方法来运行被 bash shell 别名绕过的命令。让我们[如下定义一个别名][2]: -`alias mount='mount | column -t'` + +``` +alias mount='mount | column -t' +``` + 运行如下: -`mount ` + +``` +mount +``` + 示例输出: + ``` sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime) proc on /proc type proc (rw,nosuid,nodev,noexec,relatime) @@ -30,45 +41,83 @@ binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_m lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other) ``` -### 方法1 - 使用 \command +#### 方法 1 - 使用 `\command` -输入以下命令暂时绕过名为 mount 的 bash 别名: -`\mount` +输入以下命令暂时绕过名为 `mount` 的 bash 别名: -### 方法2 - 使用 "command" 或 'command' +``` +\mount +``` + +#### 方法 2 - 使用 `"command"` 或 `'command'` + +如下引用 `mount` 命令调用实际的 `/bin/mount`: + +``` +"mount" +``` -如下引用 mount 命令调用实际的 /bin/mount: -`"mount"` 或者 -`'mount'` -### Method 3 - Use full command path +``` +'mount' +``` -Use full binary path such as /bin/mount: -`/bin/mount -/bin/mount /dev/sda1 /mnt/sda` +#### 方法 3 - 使用命令的完全路径 -### 方法3 - 使用完整的命令路径 +使用完整的二进制路径,如 `/bin/mount`: + +``` +/bin/mount +/bin/mount /dev/sda1 /mnt/sda +``` + +#### 方法 4 - 使用内部命令 `command` 语法是: -`command cmd -command cmd arg1 arg2` -要覆盖 .bash_aliases 中设置的别名,例如 mount: -`command mount -command mount /dev/sdc /mnt/pendrive/` -[”command“ 运行命令或显示][3]关于命令的信息。它带参数运行命令会抑制 shell 函数查询或者别名,或者显示有关给定命令的信息。 -## 关于 unalias 命令的说明 +``` +command cmd +command cmd arg1 arg2 +``` + +要覆盖 `.bash_aliases` 中设置的别名,例如 `mount`: + +``` +command mount +command 
mount /dev/sdc /mnt/pendrive/ +``` + +[“command” 直接运行命令或显示][3]关于命令的信息。它带参数运行命令会抑制 shell 函数查询或者别名,或者显示有关给定命令的信息。 + +### 关于 unalias 命令的说明 + +要从当前会话的已定义别名列表中移除别名,请使用 `unalias` 命令: + +``` +unalias mount +``` -要从当前会话的已定义别名列表中移除别名,请使用 unalias 命令: -`unalias mount` 要从当前 bash 会话中删除所有别名定义: -`unalias -a` -确保你更新你的 ~/.bashrc 或 $HOME/.bash_aliases。如果要永久删除定义的别名,则必须删除定义的别名: -`vi ~/.bashrc` + +``` +unalias -a +``` + +确保你更新你的 `~/.bashrc` 或 `$HOME/.bash_aliases`。如果要永久删除定义的别名,则必须删除定义的别名: + +``` +vi ~/.bashrc +``` + 或者 -`joe $HOME/.bash_aliases` + +``` +joe $HOME/.bash_aliases +``` + 想了解更多信息,参考[这里][4]的在线手册,或者输入下面的命令查看: + ``` man bash help command @@ -76,14 +125,13 @@ help unalias help alias ``` - -------------------------------------------------------------------------------- via: https://www.cyberciti.biz/faq/bash-bypass-alias-command-on-linux-macos-unix/ 作者:[Vivek Gite][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20171016 Make -rm- Command To Move The Files To -Trash Can- Instead Of Removing Them Completely.md b/published/20171016 Make -rm- Command To Move The Files To -Trash Can- Instead Of Removing Them Completely.md similarity index 55% rename from translated/tech/20171016 Make -rm- Command To Move The Files To -Trash Can- Instead Of Removing Them Completely.md rename to published/20171016 Make -rm- Command To Move The Files To -Trash Can- Instead Of Removing Them Completely.md index 6d01bec236..3d4478ece2 100644 --- a/translated/tech/20171016 Make -rm- Command To Move The Files To -Trash Can- Instead Of Removing Them Completely.md +++ b/published/20171016 Make -rm- Command To Move The Files To -Trash Can- Instead Of Removing Them Completely.md @@ -1,45 +1,44 @@ -# 让 “rm” 命令将文件移动到“垃圾桶”,而不是完全删除它们 +给 “rm” 命令添加个“垃圾桶” +============ -人类犯错误是因为我们不是一个可编程设备,所以,在使用 `rm` 命令时要额外注意,不要在任何时候使用 `rm -rf * 
`。当你使用 rm 命令时,它会永久删除文件,不会像文件管理器那样将这些文件移动到 `垃圾箱`。 +人类犯错误是因为我们不是一个可编程设备,所以,在使用 `rm` 命令时要额外注意,不要在任何时候使用 `rm -rf *`。当你使用 `rm` 命令时,它会永久删除文件,不会像文件管理器那样将这些文件移动到 “垃圾箱”。 -有时我们会将不应该删除的文件删除掉,所以当错误的删除文件时该怎么办? 你必须看看恢复工具(Linux 中有很多数据恢复工具),但我们不知道是否能将它百分之百恢复,所以要如何解决这个问题? +有时我们会将不应该删除的文件删除掉,所以当错误地删除了文件时该怎么办? 你必须看看恢复工具(Linux 中有很多数据恢复工具),但我们不知道是否能将它百分之百恢复,所以要如何解决这个问题? 我们最近发表了一篇关于 [Trash-Cli][1] 的文章,在评论部分,我们从用户 Eemil Lgz 那里获得了一个关于 [saferm.sh][2] 脚本的更新,它可以帮助我们将文件移动到“垃圾箱”而不是永久删除它们。 -将文件移动到“垃圾桶”是一个好主意,当你无意中运行 rm 命令时,可以节省你的时间,但是很少有人会说这是一个坏习惯,如果你不注意“垃圾桶”,它可能会在一定的时间内被文件和文件夹堆积起来。在这种情况下,我建议你按照你的意愿去做一个定时任务。 +将文件移动到“垃圾桶”是一个好主意,当你无意中运行 `rm` 命令时,可以拯救你;但是很少有人会说这是一个坏习惯,如果你不注意“垃圾桶”,它可能会在一定的时间内被文件和文件夹堆积起来。在这种情况下,我建议你按照你的意愿去做一个定时任务。 -这适用于服务器和桌面两种环境。 如果脚本检测到 **GNOME 、KDE、Unity 或 LXDE** 桌面环境(DE),则它将文件或文件夹安全地移动到默认垃圾箱 **\$HOME/.local/share/Trash/files**,否则会在您的主目录中创建垃圾箱文件夹 **$HOME/Trash**。 +这适用于服务器和桌面两种环境。 如果脚本检测到 GNOME 、KDE、Unity 或 LXDE 桌面环境(DE),则它将文件或文件夹安全地移动到默认垃圾箱 `$HOME/.local/share/Trash/files`,否则会在您的主目录中创建垃圾箱文件夹 `$HOME/Trash`。 + +`saferm.sh` 脚本托管在 Github 中,可以从仓库中克隆,也可以创建一个名为 `saferm.sh` 的文件并复制其上的代码。 -saferm.sh 脚本托管在 Github 中,可以从 repository 中克隆,也可以创建一个名为 saferm.sh 的文件并复制其上的代码。 ``` $ git clone https://github.com/lagerspetz/linux-stuff $ sudo mv linux-stuff/scripts/saferm.sh /bin $ rm -Rf linux-stuff - ``` -在 `bashrc` 文件中设置别名, +在 `.bashrc` 文件中设置别名, ``` alias rm=saferm.sh - ``` 执行下面的命令使其生效, ``` $ source ~/.bashrc - ``` -一切就绪,现在你可以执行 rm 命令,自动将文件移动到”垃圾桶”,而不是永久删除它们。 +一切就绪,现在你可以执行 `rm` 命令,自动将文件移动到”垃圾桶”,而不是永久删除它们。 + +测试一下,我们将删除一个名为 `magi.txt` 的文件,命令行明确的提醒了 `Moving magi.txt to $HOME/.local/share/Trash/file`。 -测试一下,我们将删除一个名为 `magi.txt` 的文件,命令行显式的说明了 `Moving magi.txt to $HOME/.local/share/Trash/file` ``` $ rm -rf magi.txt Moving magi.txt to /home/magi/.local/share/Trash/files - ``` 也可以通过 `ls` 命令或 `trash-cli` 进行验证。 @@ -47,47 +46,16 @@ Moving magi.txt to /home/magi/.local/share/Trash/files ``` $ ls -lh /home/magi/.local/share/Trash/files Permissions Size User Date Modified Name -.rw-r--r-- 32 magi 11 Oct 16:24 
magi.txt - +.rw-r--r-- 32 magi 11 Oct 16:24 magi.txt ``` 或者我们可以通过文件管理器界面中查看相同的内容。 ![![][3]][4] -创建一个定时任务,每天清理一次“垃圾桶”,( LCTT 注:原文为每周一次,但根据下面的代码,应该是每天一次) +(LCTT 译注:原文此处混淆了部分 trash-cli 的内容,考虑到文章衔接和逻辑,此处略。) -``` -$ 1 1 * * * trash-empty - -``` - -`注意` 对于服务器环境,我们需要使用 rm 命令手动删除。 - -``` -$ rm -rf /root/Trash/ -/root/Trash/magi1.txt is on . Unsafe delete (y/n)? y -Deleting /root/Trash/magi1.txt - -``` - -对于桌面环境,trash-put 命令也可以做到这一点。 - -在 `bashrc` 文件中创建别名, - -``` -alias rm=trash-put - -``` - -执行下面的命令使其生效。 - -``` -$ source ~/.bashrc - -``` - -要了解 saferm.sh 的其他选项,请查看帮助。 +要了解 `saferm.sh` 的其他选项,请查看帮助。 ``` $ saferm.sh -h @@ -112,7 +80,7 @@ via: https://www.2daygeek.com/rm-command-to-move-files-to-trash-can-rm-alias/ 作者:[2DAYGEEK][a] 译者:[amwps290](https://github.com/amwps290) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20171017 What Are the Hidden Files in my Linux Home Directory For.md b/published/20171017 What Are the Hidden Files in my Linux Home Directory For.md new file mode 100644 index 0000000000..c221094e63 --- /dev/null +++ b/published/20171017 What Are the Hidden Files in my Linux Home Directory For.md @@ -0,0 +1,59 @@ +我的 Linux 主目录中的隐藏文件是干什么用的? +====== + +![](https://www.maketecheasier.com/assets/uploads/2017/06/hidden-files-linux-hero.png) + +在 Linux 系统中,你可能会在主目录中存储了大量文件和文件夹。但在这些文件之外,你知道你的主目录还附带了很多隐藏的文件和文件夹吗?如果你在主目录中运行 `ls -a`,你会发现一堆带有点前缀的隐藏文件和目录。这些隐藏的文件到底做了什么? + +### 在主目录中隐藏的文件是干什么用的? + +![hidden-files-liunux-2][1] + +通常,主目录中的隐藏文件和目录包含该用户程序访问的设置或数据。它们不打算让用户编辑,只需要应用程序进行编辑。这就是为什么它们被隐藏在用户的正常视图之外。 + +通常,删除和修改自己主目录中的文件不会损坏操作系统。然而,依赖这些隐藏文件的应用程序可能不那么灵活。从主目录中删除隐藏文件时,通常会丢失与其关联的应用程序的设置。 + +依赖该隐藏文件的程序通常会重新创建它。 但是,你将从“开箱即用”设置开始,如全新用户一般。如果你在使用应用程序时遇到问题,那实际上可能是一个巨大的帮助。它可以让你删除可能造成麻烦的自定义设置。但如果你不这样做,这意味着你需要把所有的东西都设置成原来的样子。 + +### 主目录中某些隐藏文件的特定用途是什么? 
+ +![hidden-files-linux-3][2] + +每个人在他们的主目录中都会有不同的隐藏文件。每个人都有一些。但是,无论应用程序如何,这些文件都有类似的用途。 + +#### 系统设置 + +系统设置包括桌面环境和 shell 的配置。 + +* shell 和命令行程序的**配置文件**:根据你使用的特定 shell 和类似命令的应用程序,特定的文件名称会变化。你会看到 `.bashrc`、`.vimrc` 和 `.zshrc`。这些文件包含你已经更改的有关 shell 的操作环境的任何设置,或者对 `vim` 等命令行实用工具的设置进行的调整。删除这些文件将使关联的应用程序返回到其默认状态。考虑到许多 Linux 用户多年来建立了一系列微妙的调整和设置,删除这个文件可能是一个非常头疼的问题。 +* **用户配置文件**:像上面的配置文件一样,这些文件(通常是 `.profile` 或 `.bash_profile`)保存 shell 的用户设置。该文件通常包含你的 `PATH` 环境变量。它还包含你设置的[别名][3]。用户也可以在 `.bashrc` 或其他位置放置别名。`PATH` 环境变量控制着 shell 寻找可执行命令的位置。通过添加或修改 `PATH`,可以更改 shell 的命令查找位置。别名更改了原有命令的名称。例如:一个别名可能将 `ls -l` 设置为 `ll`。这为经常使用的命令提供基于文本的快捷方式。如果删除 `.profile` 文件,通常可以在 `/etc/skel` 目录中找到默认版本。 +* **桌面环境设置**:这里保存你的桌面环境的任何定制。其中包括桌面背景、屏幕保护程序、快捷键、菜单栏和任务栏图标以及用户针对其桌面环境设置的其他任何内容。当你删除这个文件时,用户的环境会在下一次登录时恢复到新的用户环境。 + +#### 应用配置文件 + +你会在 Ubuntu 的 `.config` 文件夹中找到它们。 这些是针对特定应用程序的设置。 它们将包含喜好列表和设置等内容。 + +* **应用程序的配置文件**:这包括应用程序首选项菜单中的设置、工作区配置等。 你在这里找到的具体取决于应用程序。 +* **Web 浏览器数据**:这可能包括书签和浏览历史记录等内容。这些文件大部分是缓存。这是 Web 浏览器临时存储下载文件(如图片)的地方。删除这些内容可能会降低你首次访问某些媒体网站的速度。 +* **缓存**:如果用户应用程序缓存仅与该用户相关的数据(如 [Spotify 应用程序存储播放列表的缓存][4]),则主目录是存储该目录的默认地点。 这些缓存可能包含大量数据或仅包含几行代码:这取决于应用程序需要什么。 如果你删除这些文件,则应用程序会根据需要重新创建它们。 +* **日志**:一些用户应用程序也可能在这里存储日志。根据开发人员设置应用程序的方式,你可能会发现存储在你的主目录中的日志文件。然而,这不是一个常见的选择。 + +### 结论 + +在大多数情况下,你的 Linux 主目录中的隐藏文件用于存储用户设置。 这包括命令行程序以及基于 GUI 的应用程序的设置。删除它们将删除用户设置。 通常情况下,它不会导致程序被破坏。 + +-------------------------------------------------------------------------------- + +via: https://www.maketecheasier.com/hidden-files-linux-home-directory/ + +作者:[Alexander Fox][a] +译者:[MjSeven](https://github.com/MjSeven) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.maketecheasier.com/author/alexfox/ +[1]:https://www.maketecheasier.com/assets/uploads/2017/06/hidden-files-liunux-2.png (hidden-files-liunux-2) +[2]:https://www.maketecheasier.com/assets/uploads/2017/06/hidden-files-linux-3.png (hidden-files-linux-3) 
+[3]:https://www.maketecheasier.com/making-the-linux-command-line-a-little-friendlier/#aliases +[4]:https://www.maketecheasier.com/clear-spotify-cache/ diff --git a/translated/tech/20171109 Concurrent Servers- Part 4 - libuv.md b/published/20171109 Concurrent Servers- Part 4 - libuv.md similarity index 64% rename from translated/tech/20171109 Concurrent Servers- Part 4 - libuv.md rename to published/20171109 Concurrent Servers- Part 4 - libuv.md index e819027b7d..2e714c630f 100644 --- a/translated/tech/20171109 Concurrent Servers- Part 4 - libuv.md +++ b/published/20171109 Concurrent Servers- Part 4 - libuv.md @@ -12,17 +12,17 @@ ### 使用 libuv 抽象出事件驱动循环 -在 [第三节][11] 中,我们看到了基于 `select` 和 `epoll` 的服务器的相似之处,并且,我说过,在它们之间抽象出细微的差别是件很有吸引力的事。许多库已经做到了这些,所以在这一部分中我将去选一个并使用它。我选的这个库是 [libuv][12],它最初设计用于 Node.js 底层的可移植平台层,并且,后来发现在其它的项目中已有使用。libuv 是用 C 写的,因此,它具有很高的可移植性,非常适用嵌入到像 JavaScript 和 Python 这样的高级语言中。 +在 [第三节][11] 中,我们看到了基于 `select` 和 `epoll` 的服务器的相似之处,并且,我说过,在它们之间抽象出细微的差别是件很有吸引力的事。许多库已经做到了这些,所以在这一部分中我将去选一个并使用它。我选的这个库是 [libuv][12],它最初设计用于 Node.js 底层的可移植平台层,并且,后来发现在其它的项目中也有使用。libuv 是用 C 写的,因此,它具有很高的可移植性,非常适用嵌入到像 JavaScript 和 Python 这样的高级语言中。 -虽然 libuv 为抽象出底层平台细节已经变成了一个相当大的框架,但它仍然是以 _事件循环_ 思想为中心的。在我们第三部分的事件驱动服务器中,事件循环在 `main` 函数中是很明确的;当使用 libuv 时,该循环通常隐藏在库自身中,而用户代码仅需要注册事件句柄(作为一个回调函数)和运行这个循环。此外,libuv 会在给定的平台上使用更快的事件循环实现,对于 Linux 它是 epoll,等等。 +虽然 libuv 为了抽象出底层平台细节已经变成了一个相当大的框架,但它仍然是以 _事件循环_ 思想为中心的。在我们第三部分的事件驱动服务器中,事件循环是显式定义在 `main` 函数中的;当使用 libuv 时,该循环通常隐藏在库自身中,而用户代码仅需要注册事件句柄(作为一个回调函数)和运行这个循环。此外,libuv 会在给定的平台上使用更快的事件循环实现,对于 Linux 它是 `epoll`,等等。 ![libuv loop](https://eli.thegreenplace.net/images/2017/libuvloop.png) -libuv 支持多路事件循环,并且,因此事件循环在库中是非常重要的;它有一个句柄 —— `uv_loop_t`,和创建/杀死/启动/停止循环的函数。也就是说,在这篇文章中,我将仅需要使用 “默认的” 循环,libuv 可通过 `uv_default_loop()` 提供它;多路循环大多用于多线程事件驱动的服务器,这是一个更高级别的话题,我将留在这一系列文章的以后部分。 +libuv 支持多路事件循环,因此事件循环在库中是非常重要的;它有一个句柄 —— `uv_loop_t`,以及创建/杀死/启动/停止循环的函数。也就是说,在这篇文章中,我将仅需要使用 “默认的” 循环,libuv 可通过 `uv_default_loop()` 提供它;多路循环大多用于多线程事件驱动的服务器,这是一个更高级别的话题,我将留在这一系列文章的以后部分。 
### 使用 libuv 的并发服务器 -为了对 libuv 有一个更深的印象,让我们跳转到我们的可靠协议的服务器,它通过我们的这个系列已经有了一个强大的重新实现。这个服务器的结构与第三部分中的基于 select 和 epoll 的服务器有一些相似之处,因为,它也依赖回调。完整的 [示例代码在这里][13];我们开始设置这个服务器的套接字绑定到一个本地端口: +为了对 libuv 有一个更深的印象,让我们跳转到我们的可靠协议的服务器,它通过我们的这个系列已经有了一个强大的重新实现。这个服务器的结构与第三部分中的基于 `select` 和 `epoll` 的服务器有一些相似之处,因为,它也依赖回调。完整的 [示例代码在这里][13];我们开始设置这个服务器的套接字绑定到一个本地端口: ``` int portnum = 9090; @@ -47,9 +47,9 @@ if ((rc = uv_tcp_bind(&server_stream, (const struct sockaddr*)&server_address, 0 } ``` -除了它被封装进 libuv API 中之外,你看到的是一个相当标准的套接字。在它的返回中,我们取得一个可工作于任何 libuv 支持的平台上的可移植接口。 +除了它被封装进 libuv API 中之外,你看到的是一个相当标准的套接字。在它的返回中,我们取得了一个可工作于任何 libuv 支持的平台上的可移植接口。 -这些代码也展示了很认真负责的错误处理;多数的 libuv 函数返回一个整数状态,返回一个负数意味着出现了一个错误。在我们的服务器中,我们把这些错误看做致命问题进行处理,但也可以设想为一个更优雅的错误恢复。 +这些代码也展示了很认真负责的错误处理;多数的 libuv 函数返回一个整数状态,返回一个负数意味着出现了一个错误。在我们的服务器中,我们把这些错误看做致命问题进行处理,但也可以设想一个更优雅的错误恢复。 现在,那个套接字已经绑定,是时候去监听它了。这里我们运行首个回调注册: @@ -73,7 +73,7 @@ uv_run(uv_default_loop(), UV_RUN_DEFAULT); return uv_loop_close(uv_default_loop()); ``` -注意,在运行事件循环之前,只有一个回调是通过 main 注册的;我们稍后将看到怎么去添加更多的回调。在事件循环的整个运行过程中,添加和删除回调并不是一个问题 —— 事实上,大多数服务器就是这么写的。 +注意,在运行事件循环之前,只有一个回调是通过 `main` 注册的;我们稍后将看到怎么去添加更多的回调。在事件循环的整个运行过程中,添加和删除回调并不是一个问题 —— 事实上,大多数服务器就是这么写的。 这是一个 `on_peer_connected`,它处理到服务器的新的客户端连接: @@ -132,8 +132,8 @@ void on_peer_connected(uv_stream_t* server_stream, int status) { 这些代码都有很好的注释,但是,这里有一些重要的 libuv 语法我想去强调一下: -* 传入自定义数据到回调中:因为 C 还没有闭包,这可能是个挑战,libuv 在它的所有的处理类型中有一个 `void* data` 字段;这些字段可以被用于传递用户数据。例如,注意 `client->data` 是如何指向到一个 `peer_state_t` 结构上,以便于 `uv_write` 和 `uv_read_start` 注册的回调可以知道它们正在处理的是哪个客户端的数据。 -* 内存管理:在带有垃圾回收的语言中进行事件驱动编程是非常容易的,因为,回调通常运行在一个它们注册的完全不同的栈帧中,使得基于栈的内存管理很困难。它总是需要传递堆分配的数据到 libuv 回调中(当所有回调运行时,除了 main,其它的都运行在栈上),并且,为了避免泄漏,许多情况下都要求这些数据去安全释放。这些都是些需要实践的内容 [[1]][6]。 +* 传入自定义数据到回调中:因为 C 语言还没有闭包,这可能是个挑战,libuv 在它的所有的处理类型中有一个 `void* data` 字段;这些字段可以被用于传递用户数据。例如,注意 `client->data` 是如何指向到一个 `peer_state_t` 结构上,以便于 `uv_write` 和 `uv_read_start` 注册的回调可以知道它们正在处理的是哪个客户端的数据。 +* 
内存管理:在带有垃圾回收的语言中进行事件驱动编程是非常容易的,因为,回调通常运行在一个与它们注册的地方完全不同的栈帧中,使得基于栈的内存管理很困难。它总是需要传递堆分配的数据到 libuv 回调中(当所有回调运行时,除了 `main`,其它的都运行在栈上),并且,为了避免泄漏,许多情况下都要求这些数据去安全释放(`free()`)。这些都是些需要实践的内容 ^注1 。 这个服务器上对端的状态如下: @@ -146,7 +146,7 @@ typedef struct { } peer_state_t; ``` -它与第三部分中的状态非常类似;我们不再需要 sendptr,因为,在调用 "done writing" 回调之前,`uv_write` 将确保去发送它提供的整个缓冲。我们也为其它的回调使用保持了一个到客户端的指针。这里是 `on_wrote_init_ack`: +它与第三部分中的状态非常类似;我们不再需要 `sendptr`,因为,在调用 “done writing” 回调之前,`uv_write` 将确保发送它提供的整个缓冲。我们也为其它的回调使用保持了一个到客户端的指针。这里是 `on_wrote_init_ack`: ``` void on_wrote_init_ack(uv_write_t* req, int status) { @@ -171,7 +171,7 @@ void on_wrote_init_ack(uv_write_t* req, int status) { } ``` -然后,我们确信知道了这个初始的 '*' 已经被发送到对端,我们通过调用 `uv_read_start` 去监听从这个对端来的入站数据,它注册一个回调(`on_peer_read`)去被调用,不论什么时候,事件循环都在套接字上接收来自客户端的调用: +然后,我们确信知道了这个初始的 `'*'` 已经被发送到对端,我们通过调用 `uv_read_start` 去监听从这个对端来的入站数据,它注册一个将被事件循环调用的回调(`on_peer_read`),不论什么时候,事件循环都在套接字上接收来自客户端的调用: ``` void on_peer_read(uv_stream_t* client, ssize_t nread, const uv_buf_t* buf) { @@ -236,11 +236,11 @@ void on_peer_read(uv_stream_t* client, ssize_t nread, const uv_buf_t* buf) { } ``` -这个服务器的运行时行为非常类似于第三部分的事件驱动服务器:所有的客户端都在一个单个的线程中并发处理。并且一些行为被维护在服务器代码中:服务器的逻辑实现为一个集成的回调,并且长周期运行是禁止的,因为它会阻塞事件循环。这一点也很类似。让我们进一步探索这个问题。 +这个服务器的运行时行为非常类似于第三部分的事件驱动服务器:所有的客户端都在一个单个的线程中并发处理。并且类似的,一些特定的行为必须在服务器代码中维护:服务器的逻辑实现为一个集成的回调,并且长周期运行是禁止的,因为它会阻塞事件循环。这一点也很类似。让我们进一步探索这个问题。 ### 在事件驱动循环中的长周期运行的操作 -单线程的事件驱动代码使它先天地对一些常见问题非常敏感:整个循环中的长周期运行的代码块。参见如下的程序: +单线程的事件驱动代码使它先天就容易受到一些常见问题的影响:长周期运行的代码会阻塞整个循环。参见如下的程序: ``` void on_timer(uv_timer_t* timer) { @@ -280,23 +280,21 @@ on_timer [18850 ms] ... 
``` -`on_timer` 忠实地每秒执行一次,直到随机出现的睡眠为止。在那个时间点,`on_timer` 不再被调用,直到睡眠时间结束;事实上,_没有其它的回调_  在这个时间帧中被调用。这个睡眠调用阻塞当前线程,它正是被调用的线程,并且也是事件循环使用的线程。当这个线程被阻塞后,事件循环也被阻塞。 +`on_timer` 忠实地每秒执行一次,直到随机出现的睡眠为止。在那个时间点,`on_timer` 不再被调用,直到睡眠时间结束;事实上,_没有其它的回调_  会在这个时间帧中被调用。这个睡眠调用阻塞了当前线程,它正是被调用的线程,并且也是事件循环使用的线程。当这个线程被阻塞后,事件循环也被阻塞。 这个示例演示了在事件驱动的调用中为什么回调不能被阻塞是多少的重要。并且,同样适用于 Node.js 服务器、客户端侧的 Javascript、大多数的 GUI 编程框架、以及许多其它的异步编程模型。 -但是,有时候运行耗时的任务是不可避免的。并不是所有任务都有一个异步 APIs;例如,我们可能使用一些仅有同步 API 的库去处理,或者,正在执行一个可能的长周期计算。我们如何用事件驱动编程去结合这些代码?线程可以帮到你! +但是,有时候运行耗时的任务是不可避免的。并不是所有任务都有一个异步 API;例如,我们可能使用一些仅有同步 API 的库去处理,或者,正在执行一个可能的长周期计算。我们如何用事件驱动编程去结合这些代码?线程可以帮到你! -### “转换” 阻塞调用到异步调用的线程 +### “转换” 阻塞调用为异步调用的线程 -一个线程池可以被用于去转换阻塞调用到异步调用,通过与事件循环并行运行,并且当任务完成时去由它去公布事件。一个给定的阻塞函数 `do_work()`,这里介绍了它是怎么运行的: +一个线程池可以用于转换阻塞调用为异步调用,通过与事件循环并行运行,并且当任务完成时去由它去公布事件。以阻塞函数 `do_work()` 为例,这里介绍了它是怎么运行的: -1. 在一个回调中,用 `do_work()` 代表直接调用,我们将它打包进一个 “任务”,并且请求线程池去运行这个任务。当任务完成时,我们也为循环去调用它注册一个回调;我们称它为 `on_work_done()`。 +1. 不在一个回调中直接调用 `do_work()` ,而是将它打包进一个 “任务”,让线程池去运行这个任务。当任务完成时,我们也为循环去调用它注册一个回调;我们称它为 `on_work_done()`。 +2. 在这个时间点,我们的回调就可以返回了,而事件循环保持运行;在同一时间点,线程池中的有一个线程运行这个任务。 +3. 一旦任务运行完成,通知主线程(指正在运行事件循环的线程),并且事件循环调用 `on_work_done()`。 -2. 在这个时间点,我们的回调可以返回并且事件循环保持运行;在同一时间点,线程池中的一个线程运行这个任务。 - -3. 
一旦任务运行完成,通知主线程(指正在运行事件循环的线程),并且,通过事件循环调用 `on_work_done()`。 - -让我们看一下,使用 libuv 的工作调度 API,是怎么去解决我们前面的 timer/sleep 示例中展示的问题的: +让我们看一下,使用 libuv 的工作调度 API,是怎么去解决我们前面的计时器/睡眠示例中展示的问题的: ``` void on_after_work(uv_work_t* req, int status) { @@ -327,7 +325,7 @@ int main(int argc, const char** argv) { } ``` -通过一个 work_req [[2]][14] 类型的句柄,我们进入一个任务队列,代替在 `on_timer` 上直接调用 sleep,这个函数在任务中(`on_work`)运行,并且,一旦任务完成(`on_after_work`),这个函数被调用一次。`on_work` 在这里是指发生的 “work”(阻塞中的/耗时的操作)。在这两个回调传递到 `uv_queue_work` 时,注意一个关键的区别:`on_work` 运行在线程池中,而 `on_after_work` 运行在事件循环中的主线程上 - 就好像是其它的回调一样。 +通过一个 `work_req` ^注2 类型的句柄,我们进入一个任务队列,代替在 `on_timer` 上直接调用 sleep,这个函数在任务中(`on_work`)运行,并且,一旦任务完成(`on_after_work`),这个函数被调用一次。`on_work` 是指 “work”(阻塞中的/耗时的操作)进行的地方。注意在这两个回调传递到 `uv_queue_work` 时的一个关键区别:`on_work` 运行在线程池中,而 `on_after_work` 运行在事件循环中的主线程上 —— 就好像是其它的回调一样。 让我们看一下这种方式的运行: @@ -347,25 +345,25 @@ on_timer [97578 ms] ... ``` -即便在 sleep 函数被调用时,定时器也每秒钟滴答一下,睡眠(sleeping)现在运行在一个单独的线程中,并且不会阻塞事件循环。 +即便在 sleep 函数被调用时,定时器也每秒钟滴答一下,睡眠现在运行在一个单独的线程中,并且不会阻塞事件循环。 ### 一个用于练习的素数测试服务器 -因为通过睡眼去模拟工作并不是件让人兴奋的事,我有一个事先准备好的更综合的一个示例 - 一个基于套接字接受来自客户端的数字的服务器,检查这个数字是否是素数,然后去返回一个 “prime" 或者 “composite”。完整的 [服务器代码在这里][15] - 我不在这里粘贴了,因为它太长了,更希望读者在一些自己的练习中去体会它。 +因为通过睡眠去模拟工作并不是件让人兴奋的事,我有一个事先准备好的更综合的一个示例 —— 一个基于套接字接受来自客户端的数字的服务器,检查这个数字是否是素数,然后去返回一个 “prime" 或者 “composite”。完整的 [服务器代码在这里][15] —— 我不在这里粘贴了,因为它太长了,更希望读者在一些自己的练习中去体会它。 这个服务器使用了一个原生的素数测试算法,因此,对于大的素数可能花很长时间才返回一个回答。在我的机器中,对于 2305843009213693951,它花了 ~5 秒钟去计算,但是,你的方法可能不同。 -练习 1:服务器有一个设置(通过一个名为 MODE 的环境变量)要么去在套接字回调(意味着在主线程上)中运行素数测试,要么在 libuv 工作队列中。当多个客户端同时连接时,使用这个设置来观察服务器的行为。当它计算一个大的任务时,在阻塞模式中,服务器将不回复其它客户端,而在非阻塞模式中,它会回复。 +练习 1:服务器有一个设置(通过一个名为 `MODE` 的环境变量)要么在套接字回调(意味着在主线程上)中运行素数测试,要么在 libuv 工作队列中。当多个客户端同时连接时,使用这个设置来观察服务器的行为。当它计算一个大的任务时,在阻塞模式中,服务器将不回复其它客户端,而在非阻塞模式中,它会回复。 -练习 2;libuv 有一个缺省大小的线程池,并且线程池的大小可以通过环境变量配置。你可以通过使用多个客户端去实验找出它的缺省值是多少?找到线程池缺省值后,使用不同的设置去看一下,在重负载下怎么去影响服务器的响应能力。 +练习 2:libuv 有一个缺省大小的线程池,并且线程池的大小可以通过环境变量配置。你可以通过使用多个客户端去实验找出它的缺省值是多少?找到线程池缺省值后,使用不同的设置去看一下,在重负载下怎么去影响服务器的响应能力。 
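就练习 2 而言,libuv 的线程池大小由环境变量 `UV_THREADPOOL_SIZE` 控制(默认为 4 个线程)。下面是一种可能的实验方式——其中 `uv-prime-server` 这个可执行文件名和 `MODE` 的取值只是示意,请以你自己编译出来的服务器及其实现为准:

```shell
# 阻塞模式:素数测试直接在事件循环线程中运行
MODE=blocking ./uv-prime-server 8070

# 非阻塞模式,并把 libuv 线程池从默认的 4 个线程扩大到 8 个
UV_THREADPOOL_SIZE=8 MODE=queue ./uv-prime-server 8070
```

这样就可以对比不同线程池大小下,多个客户端同时请求大素数时服务器的响应情况。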
### 在非阻塞文件系统中使用工作队列 -对于仅傻傻的演示和 CPU 密集型的计算来说,将可能的阻塞操作委托给一个线程池并不是明智的;libuv 在它的文件系统 APIs 中本身就大量使用了这种性能。通过这种方式,libuv 使用一个异步 API,在一个轻便的方式中,显示出它强大的文件系统的处理能力。 +对于只是呆板的演示和 CPU 密集型的计算来说,将可能的阻塞操作委托给一个线程池并不是明智的;libuv 在它的文件系统 API 中本身就大量使用了这种能力。通过这种方式,libuv 使用一个异步 API,以一个轻便的方式显示出它强大的文件系统的处理能力。 -让我们使用 `uv_fs_read()`,例如,这个函数从一个文件中(以一个 `uv_fs_t` 句柄为代表)读取一个文件到一个缓冲中 [[3]][16],并且当读取完成后调用一个回调。换句话说,`uv_fs_read()` 总是立即返回,甚至如果文件在一个类似 NFS 的系统上,并且,数据到达缓冲区可能需要一些时间。换句话说,这个 API 与这种方式中其它的 libuv APIs 是异步的。这是怎么工作的呢? +让我们使用 `uv_fs_read()`,例如,这个函数从一个文件中(表示为一个 `uv_fs_t` 句柄)读取一个文件到一个缓冲中 ^注3,并且当读取完成后调用一个回调。换句话说,`uv_fs_read()` 总是立即返回,即使是文件在一个类似 NFS 的系统上,而数据到达缓冲区可能需要一些时间。换句话说,这个 API 与这种方式中其它的 libuv API 是异步的。这是怎么工作的呢? -在这一点上,我们看一下 libuv 的底层;内部实际上非常简单,并且它是一个很好的练习。作为一个便携的库,libuv 对于 Windows 和 Unix 系统在它的许多函数上有不同的实现。我们去看一下在 libuv 源树中的 src/unix/fs.c。 +在这一点上,我们看一下 libuv 的底层;内部实际上非常简单,并且它是一个很好的练习。作为一个可移植的库,libuv 对于 Windows 和 Unix 系统在它的许多函数上有不同的实现。我们去看一下在 libuv 源树中的 `src/unix/fs.c`。 这是 `uv_fs_read` 的代码: @@ -400,9 +398,9 @@ int uv_fs_read(uv_loop_t* loop, uv_fs_t* req, } ``` -第一次看可能觉得很困难,因为它延缓真实的工作到 INIT 和 POST 宏中,在 POST 中与一些本地变量一起设置。这样做可以避免了文件中的许多重复代码。 +第一次看可能觉得很困难,因为它延缓真实的工作到 `INIT` 和 `POST` 宏中,以及为 `POST` 设置了一些本地变量。这样做可以避免了文件中的许多重复代码。 -这是 INIT 宏: +这是 `INIT` 宏: ``` #define INIT(subtype) \ @@ -421,9 +419,9 @@ int uv_fs_read(uv_loop_t* loop, uv_fs_t* req, while (0) ``` -它设置了请求,并且更重要的是,设置 `req->fs_type` 域为真实的 FS 请求类型。因为 `uv_fs_read` 调用 invokes INIT(READ),它意味着 `req->fs_type` 被分配一个常数 `UV_FS_READ`。 +它设置了请求,并且更重要的是,设置 `req->fs_type` 域为真实的 FS 请求类型。因为 `uv_fs_read` 调用 `INIT(READ)`,它意味着 `req->fs_type` 被分配一个常数 `UV_FS_READ`。 -这是 POST 宏: +这是 `POST` 宏: ``` #define POST \ @@ -440,31 +438,25 @@ int uv_fs_read(uv_loop_t* loop, uv_fs_t* req, while (0) ``` -它做什么取决于回调是否为 NULL。在 libuv 文件系统 APIs 中,一个 NULL 回调意味着我们真实地希望去执行一个 _同步_ 操作。在这种情况下,POST 直接调用 `uv__fs_work`(我们需要了解一下这个函数的功能),而对于一个 non-NULL 回调,它提交 `uv__fs_work` 作为一个工作事项到工作队列(指的是线程池),然后,注册 `uv__fs_done` 作为回调;该函数执行一些登记并调用用户提供的回调。 +它做什么取决于回调是否为 `NULL`。在 libuv 文件系统 API 中,一个 `NULL` 
回调意味着我们真实地希望去执行一个 _同步_ 操作。在这种情况下,`POST` 直接调用 `uv__fs_work`(我们需要了解一下这个函数的功能),而对于一个非 `NULL` 回调,它把 `uv__fs_work` 作为一个工作项提交到工作队列(指的是线程池),然后,注册 `uv__fs_done` 作为回调;该函数执行一些登记并调用用户提供的回调。 -如果我们去看 `uv__fs_work` 的代码,我们将看到它使用很多宏去按需路由工作到真实的文件系统调用。在我们的案例中,对于 `UV_FS_READ` 这个调用将被 `uv__fs_read` 生成,它(最终)使用普通的 POSIX APIs 去读取。这个函数可以在一个 _阻塞_ 方式中很安全地实现。因为,它通过异步 API 调用时被置于一个线程池中。 +如果我们去看 `uv__fs_work` 的代码,我们将看到它使用很多宏按照需求将工作分发到实际的文件系统调用。在我们的案例中,对于 `UV_FS_READ` 这个调用将被 `uv__fs_read` 生成,它(最终)使用普通的 POSIX API 去读取。这个函数可以安全地以 _阻塞_ 方式实现,因为它通过异步 API 调用时会被置于一个线程池中。 -在 Node.js 中,fs.readFile 函数是映射到 `uv_fs_read` 上。因此,可以在一个非阻塞模式中读取文件,甚至是当底层文件系统 API 是阻塞方式时。 +在 Node.js 中,`fs.readFile` 函数是映射到 `uv_fs_read` 上。因此,可以在一个非阻塞模式中读取文件,甚至是当底层文件系统 API 是阻塞方式时。 * * * - -[[1]][1] 为确保服务器不泄露内存,我在一个启用泄露检查的 Valgrind 中运行它。因为服务器经常是被设计为永久运行,这是一个挑战;为克服这个问题,我在服务器上添加了一个 “kill 开关” - 一个从客户端接收的特定序列,以使它可以停止事件循环并退出。这个代码在 `theon_wrote_buf` 句柄中。 - - [[2]][2] 在这里我们不过多地使用 `work_req`;讨论的素数测试服务器接下来将展示怎么被用于去传递上下文信息到回调中。 - - [[3]][3] `uv_fs_read()` 提供了一个类似于 preadv Linux 系统调用的通用 API:它使用多缓冲区用于排序,并且支持一个到文件中的偏移。基于我们讨论的目的可以忽略这些特性。 - +- 注1: 为确保服务器不泄露内存,我在一个启用泄露检查的 Valgrind 中运行它。因为服务器经常是被设计为永久运行,这是一个挑战;为克服这个问题,我在服务器上添加了一个 “kill 开关” —— 一个从客户端接收的特定序列,以使它可以停止事件循环并退出。这个代码在 `on_wrote_buf` 句柄中。 +- 注2: 在这里我们不过多地使用 `work_req`;接下来讨论的素数测试服务器将展示它是怎么被用于传递上下文信息到回调中的。 +- 注3: `uv_fs_read()` 提供了一个类似于 `preadv` Linux 系统调用的通用 API:它使用多个缓冲区按顺序填充数据,并且支持一个到文件中的偏移。基于我们讨论的目的可以忽略这些特性。 -------------------------------------------------------------------------------- via: https://eli.thegreenplace.net/2017/concurrent-servers-part-4-libuv/ -作者:[Eli Bendersky ][a] +作者:[Eli Bendersky][a] 译者:[qhwdw](https://github.com/qhwdw) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20171110 How to configure login banners in Linux (RedHat, Ubuntu, CentOS,
Fedora).md new file mode 100644 index 0000000000..bd0959ea64 --- /dev/null +++ b/published/20171110 How to configure login banners in Linux (RedHat, Ubuntu, CentOS, Fedora).md @@ -0,0 +1,94 @@ +如何在 Linux 中配置 ssh 登录导语 +====== + +> 了解如何在 Linux 中创建登录导语,来向要登录或登录后的用户显示不同的警告或消息。 + +![Login banners in Linux][1] + +无论何时登录公司的某些生产系统,你都会看到一些登录消息、警告或关于你将登录或已登录的服务器的信息,如下所示。这些就是登录导语(login banner)。 + +![Login welcome messages in Linux][2] + +在本文中,我们将引导你配置它们。 + +你可以配置两种类型的导语。 + +1. 用户登录前显示的导语信息(在你选择的文件中配置,例如 `/etc/login.warn`) +2. 用户成功登录后显示的导语信息(在 `/etc/motd` 中配置) + +### 如何在用户登录前连接系统时显示消息 + +当用户连接到服务器并且在登录之前,这个消息将被显示给他。这意味着当他输入用户名时,该消息将在密码提示之前显示。 + +你可以使用任何文件名并在其中输入信息。在这里我们使用 `/etc/login.warn` 并且把我们的消息放在里面。 + +``` +# cat /etc/login.warn + !!!! Welcome to KernelTalks test server !!!! +This server is meant for testing Linux commands and tools. If you are +not associated with kerneltalks.com and not authorized please dis-connect +immediately. +``` + +现在,需要将此文件和路径告诉 `sshd` 守护进程,以便它可以为每个用户登录请求获取此导语。为此,打开 `/etc/ssh/sshd_config` 文件并搜索 `#Banner none`。 + +这里你需要编辑该配置文件,并写下你的文件名并删除注释标记(`#`)。它应该看起来像:`Banner /etc/login.warn`。 + +保存文件并重启 `sshd` 守护进程。为避免断开现有的已连接用户,请使用 HUP 信号重启 sshd。 + +``` +root@kerneltalks # ps -ef | grep -i sshd +root 14255 1 0 18:42 ? 00:00:00 /usr/sbin/sshd -D +root 19074 14255 0 18:46 ? 00:00:00 sshd: ec2-user [priv] +root 19177 19127 0 18:54 pts/0 00:00:00 grep -i sshd + +root@kerneltalks # kill -HUP 14255 +``` + +就是这样了!打开新的会话并尝试登录。你将看到你在上述步骤中配置的消息。 + +![Login banner in Linux][3] + +你可以在用户输入密码登录系统之前看到此消息。 + +### 如何在用户登录后显示消息 + +用户在成功登录系统后看到的消息是当天消息(Message Of The Day,MOTD),它由 `/etc/motd` 控制。编辑这个文件,输入成功登录后用来欢迎用户的消息。 + +``` +root@kerneltalks # cat /etc/motd + W E L C O M E +Welcome to the testing environment of kerneltalks. +Feel free to use this system for testing your Linux +skills. In case of any issues reach out to admin at +info@kerneltalks.com. Thank you.
+ +``` + +你不需要重启 `sshd` 守护进程来使更改生效。只要保存该文件,`sshd` 守护进程就会在下一次登录请求时读取并显示它。 + +![motd in linux][4] + +你可以在上面的截图中看到:黄色框是由 `/etc/motd` 控制的 MOTD,绿色框就是我们之前看到的登录导语。 + +你可以使用 [cowsay][5]、[banner][6]、[figlet][7]、[lolcat][8] 等工具创建出色、引人注目的登录消息。此方法适用于几乎所有 Linux 发行版,如 RedHat、CentOS、Ubuntu、Fedora 等。 + +-------------------------------------------------------------------------------- + +via: https://kerneltalks.com/tips-tricks/how-to-configure-login-banners-in-linux/ + +作者:[kerneltalks][a] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://kerneltalks.com +[1]:https://a3.kerneltalks.com/wp-content/uploads/2017/11/login-banner-message-in-linux.png +[2]:https://a3.kerneltalks.com/wp-content/uploads/2017/11/Login-message-in-linux.png +[3]:https://a1.kerneltalks.com/wp-content/uploads/2017/11/login-banner.png +[4]:https://a3.kerneltalks.com/wp-content/uploads/2017/11/motd-message-in-linux.png +[5]:https://kerneltalks.com/tips-tricks/cowsay-fun-in-linux-terminal/ +[6]:https://kerneltalks.com/howto/create-nice-text-banner-hpux/ +[7]:https://kerneltalks.com/tips-tricks/create-beautiful-ascii-text-banners-linux/ +[8]:https://kerneltalks.com/linux/lolcat-tool-to-rainbow-color-linux-terminal/ diff --git a/translated/tech/20171121 How to organize your passwords using pass password manager.md b/published/20171121 How to organize your passwords using pass password manager.md similarity index 52% rename from translated/tech/20171121 How to organize your passwords using pass password manager.md rename to published/20171121 How to organize your passwords using pass password manager.md index be460cc720..8f8f183a02 100644 --- a/translated/tech/20171121 How to organize your passwords using pass password manager.md +++ b/published/20171121 How to organize your passwords using pass password manager.md @@ -3,11 +3,11 @@ ### 目标 -学习在 Linux 上使用 "pass" 密码管理器来管理你的密码 +学习在 Linux
上使用 pass 密码管理器来管理你的密码 ### 条件 - * 需要 root 权限来安装需要的包 + * 需要 root 权限来安装需要的包 ### 难度 @@ -15,75 +15,81 @@ ### 约定 - * **#** - 执行指定命令需要 root 权限,可以是直接使用 root 用户来执行或者使用 `sudo` 命令来执行 - * **$** - 使用普通的非特权用户执行指定命令 + * `#` - 执行指定命令需要 root 权限,可以是直接使用 root 用户来执行或者使用 `sudo` 命令来执行 + * `$` - 使用普通的非特权用户执行指定命令 ### 介绍 -如果你有根据不同的意图设置不同密码的好习惯,你可能已经感受到需要一个密码管理器的必要性了。在 Linux 上有很多选择,可以是专利软件(如果你敢的话)也可以是开源软件。如果你跟我一样喜欢简洁的话,你可能会对 `pass` 感兴趣。 +如果你有根据不同的意图设置不同密码的好习惯,你可能已经感受到需要一个密码管理器的必要性了。在 Linux 上有很多选择,可以是专有软件(如果你敢用的话)也可以是开源软件。如果你跟我一样喜欢简洁的话,你可能会对 `pass` 感兴趣。 -### First steps -Pass 作为一个密码管理器,其实际上是一些你可能早已每天使用的、可信赖且实用的工具的一种封装,比如 `gpg` 和 `git` 。虽然它也有图形界面,但它专门设计能成在命令行下工作的:因此它也可以在 headless machines 上工作 (LCTT 注:根据 wikipedia 的说法,所谓 headless machines 是指没有显示器、键盘和鼠标的机器,一般通过网络链接来控制)。 +### 第一步 +pass 作为一个密码管理器,其实际上是一些你可能早已每天使用的、可信赖且实用的工具的一种封装,比如 `gpg` 和 `git`。虽然它也有图形界面,但它专门设计成能在命令行下工作:因此它也可以在 headless 机器上工作(LCTT 译注:根据 wikipedia 的说法,所谓 headless 是指没有显示器、键盘和鼠标的机器,一般通过网络链接来控制)。 -### 步骤 1 - 安装 +### 安装 -Pass 在主流的 linux 发行版中都是可用的,你可以通过包管理器安装: +pass 在主流的 Linux 发行版中都是可用的,你可以通过包管理器安装: #### Fedora + ``` # dnf install pass ``` -#### RHEL and CentOS +#### RHEL 和 CentOS + +pass 不在官方仓库中,但你可以从 `epel` 中获取到它。要在 CentOS 7 上启用后面这个源,只需要执行: -Pass 不在官方仓库中,但你可以从 `epel` 中获取道它。要在 CentOS7 上启用后面这个源,只需要执行: ``` # yum install epel-release ``` -然而在 Red Hat 企业版的 Linux 上,这个额外的源是不可用的;你需要从 EPEL 官方网站上下载它。 +然而在 Red Hat 企业版的 Linux 上,这个额外的源是不可用的;你需要从 EPEL 官方网站上下载它。 + +#### Debian 和 Ubuntu -#### Debian and Ubuntu ``` # apt-get install pass ``` #### Arch Linux + ``` # pacman -S pass ``` ### 初始化密码仓库 -安装好 `pass` 后,就可以开始使用和配置它了。首先,由于 pass 依赖 `gpg` 来对我们的密码进行加密并以安全的方式进行存储,我们必须准备好一个 `gpg 密钥对`。 +安装好 `pass` 后,就可以开始使用和配置它了。首先,由于 `pass` 依赖于 `gpg` 来对我们的密码进行加密并以安全的方式进行存储,我们必须准备好一个 gpg 密钥对。 + +首先我们要初始化密码仓库:这就是一个用来存放 gpg 加密后的密码的目录。默认情况下它会在你的 `$HOME` 创建一个隐藏目录,不过你也可以通过使用 `PASSWORD_STORE_DIR` 这一环境变量来指定另一个路径。让我们运行: -首先我们要初始化 `密码仓库`:这就是一个用来存放 gpg 加密后的密码的目录。默认情况下它会在你的 `$HOME` 创建一个隐藏目录,不过你也可以通过使用 `PASSWORD_STORE_DIR` 这一环境变量来指定另一个路径。让我们运行: ``` $ pass init ```
-然后`密码仓库`目录就创建好了。现在,让我们来存储我们第一个密码: +然后 `password-store` 目录就创建好了。现在,让我们来存储我们第一个密码: + ``` $ pass edit mysite ``` 这会打开默认文本编辑器,我们只需要输入密码就可以了。输入的内容会用 gpg 加密并存储为密码仓库目录中的 `mysite.gpg` 文件。 -Pass 以目录树的形式存储加密后的文件,也就是说我们可以在逻辑上将多个文件放在子目录中以实现更好的组织形式,我们只需要在创建文件时指定存在哪个目录下就行了,像这样: +`pass` 以目录树的形式存储加密后的文件,也就是说我们可以在逻辑上将多个文件放在子目录中以实现更好的组织形式,我们只需要在创建文件时指定存放在哪个目录下就行了,像这样: + ``` $ pass edit foo/bar ``` 跟上面的命令一样,它也会让你输入密码,但是创建的文件是放在密码仓库目录下的 `foo` 子目录中的。要查看文件组织结构,只需要不带任何参数运行 `pass` 命令即可: -``` +``` $ pass Password Store ├── foo │   └── bar └── mysite - ``` 若想修改密码,只需要重复创建密码的操作就行了。 @@ -91,11 +97,13 @@ Password Store ### 获取密码 有两种方法可以获取密码:第一种会显示密码到终端上,方法是运行: + ``` pass mysite ``` -然而更好的方法是使用 `-c` 选项让 pass 将密码直接拷贝到剪切板上: +然而更好的方法是使用 `-c` 选项让 `pass` 将密码直接拷贝到剪切板上: + ``` pass -c mysite ``` @@ -104,29 +112,32 @@ pass -c mysite ### 生成密码 -Pass 也可以为我们自动生成(并自动存储)安全密码。假设我们想要生成一个由 15 个字符组成的密码:包含字母,数字和特殊符号,其命令如下: +`pass` 也可以为我们自动生成(并自动存储)安全密码。假设我们想要生成一个由 15 个字符组成的密码:包含字母、数字和特殊符号,其命令如下: + ``` pass generate mysite 15 ``` -若希望密码只包含字母和数字则可以是使用 `--no-symbols` 选项。生成的密码会显示在屏幕上。也可以通过 `--clip` 或 `-c` 选项让 pass 把密码直接拷贝到剪切板中。通过使用 `-q` 或 `--qrcode` 选项来生成二维码: +若希望密码只包含字母和数字则可以使用 `--no-symbols` 选项。生成的密码会显示在屏幕上。也可以通过 `--clip` 或 `-c` 选项让 `pass` 把密码直接拷贝到剪切板中。通过使用 `-q` 或 `--qrcode` 选项来生成二维码: ![qrcode][1] -从上面的截屏中尅看出,生成了一个二维码,不过由于 `mysite` 的密码已经存在了,pass 会提示我们确认是否要覆盖原密码。 +从上面的截屏中可看出,生成了一个二维码,不过由于运行该命令时 `mysite` 的密码已经存在了,`pass` 会提示我们确认是否要覆盖原密码。 -Pass 使用 `/dev/urandom` 设备作为(伪)随机数据生成器来生成密码,同时它使用 `xclip` 工具来将密码拷贝到粘帖板中,同时使用 `qrencode` 来将密码以二维码的形式显示出来。在我看来,这种模块化的设计正是它最大的优势:它并不重复造轮子,而只是将常用的工具包装起来完成任务。 +`pass` 使用 `/dev/urandom` 设备作为(伪)随机数据生成器来生成密码,同时它使用 `xclip` 工具来将密码拷贝到剪切板中,而使用 `qrencode` 来将密码以二维码的形式显示出来。在我看来,这种模块化的设计正是它最大的优势:它并不重复造轮子,而只是将常用的工具包装起来完成任务。 -你也可以使用 `pass mv`,`pass cp`,和 `pass rm` 来重命名,拷贝和删除密码仓库中的文件。 +你也可以使用 `pass mv`、`pass cp` 和 `pass rm` 来重命名、拷贝和删除密码仓库中的文件。 ### 将密码仓库变成 git 仓库 `pass` 另一个很棒的功能就是可以将密码仓库当成 git 仓库来用:通过版本管理系统能让我们管理密码更方便。 + ``` pass git init ``` 这会创建 git 仓库,并自动提交所有已存在的文件。下一步就是指定跟踪的远程仓库了: + ``` pass git remote
add ``` @@ -135,7 +146,6 @@ pass git remote add `pass` 有一个叫做 `qtpass` 的图形界面,而且也支持 Windows 和 macOS。通过使用 `PassFF` 插件,它还能获取 Firefox 中存储的密码。在它的项目网站上可以查看更多详细信息。试一下 `pass` 吧,你不会失望的! - -------------------------------------------------------------------------------- via: https://linuxconfig.org/how-to-organize-your-passwords-using-pass-password-manager @@ -147,4 +157,4 @@ via: https://linuxconfig.org/how-to-organize-your-passwords-using-pass-password- 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:https://linuxconfig.org -[1]:/https://linuxconfig.org/images/pass-manager-qrcode.png +[1]:https://linuxconfig.org/images/pass-manager-qrcode.png diff --git a/translated/tech/20171203 3 Essential Questions to Ask at Your Next Tech Interview.md b/published/20171203 3 Essential Questions to Ask at Your Next Tech Interview.md similarity index 67% rename from translated/tech/20171203 3 Essential Questions to Ask at Your Next Tech Interview.md rename to published/20171203 3 Essential Questions to Ask at Your Next Tech Interview.md index e772f5cf85..b464b8c0e1 100644 --- a/translated/tech/20171203 3 Essential Questions to Ask at Your Next Tech Interview.md +++ b/published/20171203 3 Essential Questions to Ask at Your Next Tech Interview.md @@ -1,32 +1,31 @@ -在你下一次技术面试的时候要提的 3 个基本问题 +下一次技术面试时要问的 3 个重要问题 ====== + ![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/os-jobs_0.jpg?itok=nDf5j7xC) -面试可能会有压力,但 58% 的公司告诉 Dice 和 Linux 基金会,他们需要在未来几个月内聘请开源人才。学习如何提出正确的问题。 - -Linux 基金会 +> 面试可能会有压力,但 58% 的公司告诉 Dice 和 Linux 基金会,他们需要在未来几个月内聘请开源人才。学习如何提出正确的问题。 Dice 和 Linux 基金会的年度[开源工作报告][1]揭示了开源专业人士的前景以及未来一年的招聘活动。在今年的报告中,86% 的科技专业人士表示,了解开源推动了他们的职业生涯。然而,当要在他们自己的组织内晋升或在别处申请新职位的时候,这些经历会起到什么作用呢?
-面试新工作绝非易事。除了在准备新职位时还要应付复杂的工作,当面试官问“你对我有什么问题吗?”时适当的回答更增添了压力。 +面试新工作绝非易事。除了在准备新职位时还要应付复杂的工作,当面试官问“你有什么问题要问吗?”时,适当的回答更增添了压力。 -在 Dice,我们从事职业、建议,并将技术专家与雇主连接起来。但是我们也在公司雇佣技术人才来开发开源项目。实际上,Dice 平台基于许多 Linux 发行版,我们利用开源数据库作为我们搜索功能的基础。总之,如果没有开源软件,我们就无法运行 Dice,因此聘请了解和热爱开源软件的专业人士至关重要。 +在 Dice,我们从事职业服务和建议,并将技术专家与雇主连接起来。但是我们也在公司里雇佣技术人才来开发开源项目。实际上,Dice 平台基于许多 Linux 发行版,我们利用开源数据库作为我们搜索功能的基础。总之,如果没有开源软件,我们就无法运行 Dice,因此聘请了解和热爱开源软件的专业人士至关重要。 多年来,我在面试中了解到提出好问题的重要性。这是一个了解你的潜在新雇主的机会,以及更好地了解他们是否与你的技能相匹配。 -这里有三个重要的问题需要以及其重要的原因: +这里有三个要问的重要问题,以及其重要的原因: -**1\. 公司对员工在空闲时间致力于开源项目或编写代码的立场是什么?** +### 1、 公司对员工在空闲时间致力于开源项目或编写代码的立场是什么? 这个问题的答案会告诉你很多关于正在面试的公司的信息。一般来说,只要它与你在该公司所从事的工作没有冲突,公司会希望技术专家为网站或项目做出贡献。在公司之外允许这种情况,也会在技术组织中培养出一种创业精神,并教授技术技能,否则在正常的日常工作中你可能无法获得这些技能。 -**2\. 项目在这如何分优先级?** +### 2、 项目如何区分优先级? 由于所有的公司都成为了科技公司,所以在面向客户的创新技术项目与改进平台本身之间往往存在着分歧。你会努力保持现有的平台最新么?还是致力于为公众开发新产品?根据你的兴趣,答案可以决定公司是否适合你。 -**3\. 谁主要决定新产品,开发者在决策过程中有多少投入?** +### 3、 谁主要决定新产品,开发者在决策过程中有多少投入? -这个问题是了解谁负责公司创新(以及与他/她有多少联系),还有一个是了解你在公司的职业道路。在开发新产品之前,一个好的公司会和开发人员和开源人才交流。这看起来没有困难,但有时会错过这步,意味着在新产品发布之前是协作环境或者混乱的过程。 +这个问题一是了解谁负责公司创新(以及与他/她有多少联系),另一个是了解你在公司的职业道路。在开发新产品之前,一个好的公司会和开发人员和开源人才交流。这看起来不用多想,但这一步有时会被忽略,而它意味着在新产品发布之前是协作的环境还是混乱的过程。 面试可能会有压力,但是 58% 的公司告诉 Dice 和 Linux 基金会他们需要在未来几个月内聘用开源人才,所以请记住,这样的高需求会让像你这样的专业人士处于有利位置。以你想要的方向引导你的事业。 @@ -38,7 +37,7 @@ via: https://www.linux.com/blog/os-jobs/2017/12/3-essential-questions-ask-your-n 作者:[Brian Hostetter][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20171207 Concurrent Servers- Part 5 - Redis case study.md b/published/20171207 Concurrent Servers- Part 5 - Redis case study.md new file mode 100644 index 0000000000..d706242226 --- /dev/null +++ b/published/20171207 Concurrent Servers- Part 5 - Redis case study.md @@ -0,0 +1,109 @@ +并发服务器(五):Redis 案例研究 +====== +
+这是我写的并发网络服务器系列文章的第五部分。在前四部分中我们讨论了并发服务器的结构,这篇文章我们将去研究一个在生产系统中大量使用的服务器的案例—— [Redis][10]。 + +![Redis logo](https://eli.thegreenplace.net/images/2017/redis_logo.png) + +Redis 是一个非常有魅力的项目,我关注它很久了。它最让我着迷的一点就是它的 C 源代码非常清晰。它也是一个高性能、大并发的内存数据库服务器的非常好的例子,它是研究网络并发服务器的一个非常好的案例,因此,我们不能错过这个好机会。 + +我们来看看前四部分讨论的概念在真实世界中的应用程序。 + +本系列的所有文章有: + +* [第一节 - 简介][3] +* [第二节 - 线程][4] +* [第三节 - 事件驱动][5] +* [第四节 - libuv][6] +* [第五节 - Redis 案例研究][7] + +### 事件处理库 + +Redis 最初发布于 2009 年,它最牛逼的一件事情大概就是它的速度 —— 它能够处理大量的并发客户端连接。需要特别指出的是,它是用*一个单线程*来完成的,而且还不对保存在内存中的数据使用任何复杂的锁或者同步机制。 + +Redis 之所以如此牛逼是因为,它在给定的系统上使用了其可用的最快的事件循环,并将它们封装成由它实现的事件循环库(在 Linux 上是 epoll,在 BSD 上是 kqueue,等等)。这个库的名字叫做 [ae][11]。ae 使得编写一个快速服务器变得很容易,只要在它内部没有阻塞即可,而 Redis 则保证 ^注1 了这一点。 + +在这里,我们的兴趣点主要是它对*文件事件*的支持 —— 当文件描述符(如网络套接字)有一些有趣的未决事情时将调用注册的回调函数。与 libuv 类似,ae 支持多路事件循环(参阅本系列的[第三节][5]和[第四节][6]),以及一个并不会让人感到意外的 `aeCreateFileEvent` 函数签名: + +``` +int aeCreateFileEvent(aeEventLoop *eventLoop, int fd, int mask, + aeFileProc *proc, void *clientData); +``` + +它在给定的事件循环中,为 `fd` 上新的文件事件注册一个回调(`proc`)函数。当使用的是 epoll 时,它将调用 `epoll_ctl` 在文件描述符上添加一个事件(可能是 `EPOLLIN`、`EPOLLOUT`、也或许两者都有,取决于 `mask` 参数)。ae 的 `aeProcessEvents` 函数的功能是“运行事件循环并分发回调函数”,它在底层调用了 `epoll_wait`。 + +### 处理客户端请求 + +我们通过跟踪 Redis 服务器代码来看一下,ae 如何为客户端事件注册回调函数的。`initServer` 启动时,会调用 `aeCreateFileEvent`,以 `acceptTcpHandler` 作为回调函数,为正在监听的套接字上的读取事件注册回调。当新的连接可用时,这个回调函数被调用。它调用 `accept` ^注2 ,接下来是 `acceptCommonHandler`,它转而去调用 `createClient` 以初始化新客户端连接所需要的数据结构。 + +`createClient` 的工作是去监听来自客户端的入站数据。它将套接字设置为非阻塞模式(一个异步事件循环中的关键因素)并使用 `aeCreateFileEvent` 去注册另外一个文件事件回调函数以读取事件 —— `readQueryFromClient`。每当客户端发送数据,这个函数将被事件循环调用。 + +`readQueryFromClient` 就如我们期望的那样 —— 解析客户端命令并执行动作,通过查询和/或操作数据来回复。因为客户端套接字是非阻塞的,所以这个函数必须能够处理 `EAGAIN`,以及部分数据;从客户端中读取的数据是累积在客户端专用的缓冲区中,而完整的查询可能被分割在回调函数的多个调用当中。 + +### 将数据发送回客户端 + +在前面的内容中,我说到了 `readQueryFromClient` 最终会发送回复给客户端。这在逻辑上是正确的,因为 `readQueryFromClient` *准备*要发送回复,但它不真正去做实质的发送 —— 因为这里并不能保证客户端套接字已经准备好写入/发送数据。我们必须为此使用事件循环机制。 + +Redis 是这样做的,它注册一个 `beforeSleep`
函数,每次事件循环即将进入休眠时,调用它去等待套接字变得可以读取/写入。`beforeSleep` 做的其中一件事情就是调用 `handleClientsWithPendingWrites`。它的作用是通过调用 `writeToClient` 去尝试立即发送所有可用的回复;如果某些套接字还不可用,它将注册一个事件,以便*当*套接字可用时,由事件循环去调用 `sendReplyToClient`。这可以被看作是一种优化 —— 如果套接字可用于立即发送数据(一般是 TCP 套接字),这时并不需要注册事件 —— 直接发送数据即可。因为套接字是非阻塞的,它从不会去阻塞循环。 + +### 为什么 Redis 要实现它自己的事件库? + +在 [第四节][14] 中我们讨论了使用 libuv 来构建一个异步并发服务器。需要注意的是,Redis 并没有使用 libuv,或者任何类似的事件库,而是去实现它自己的事件库 —— ae,用 ae 来封装 epoll、kqueue 和 select。事实上,Antirez(Redis 的创建者)恰好在 [2011 年的一篇文章][15] 中回答了这个问题。他的回答的要点是:ae 只有大约 770 行他理解的非常透彻的代码;而 libuv 代码量非常巨大,也没有提供 Redis 所需的额外功能。 + +现在,ae 的代码大约增长到 1300 多行,比起 libuv 的 26000 行(这是在没有 Windows、测试、示例、文档的情况下的数据)来说那是小巫见大巫了。libuv 是一个非常综合的库,这使它更复杂,并且很难去适应其它项目的特殊需求;另一方面,ae 是专门为 Redis 设计的,与 Redis 共同演进,只包含 Redis 所需要的东西。 + +这是我 [前些年在一篇文章中][16] 提到的软件项目依赖关系的另一个很好的示例: + +> 依赖的优势与在软件项目上花费的工作量成反比。 + +在某种程度上,Antirez 在他的文章中也提到了这一点。他提到,提供大量附加价值(在我的文章中的“基础”依赖)的依赖比像 libuv 这样的依赖更有意义(它的例子是 jemalloc 和 Lua),对于 Redis 特定需求,其功能的实现相当容易。 + +### Redis 中的多线程 + +[在 Redis 的绝大多数历史中][17],它都是一个不折不扣的单线程的东西。一些人觉得这太不可思议了,有这种想法完全可以理解。Redis 本质上是受网络束缚的 —— 只要数据库大小合理,对于任何给定的客户端请求,其大部分延时都是浪费在网络等待上,而不是在 Redis 的数据结构上。 + +然而,现在事情已经不再那么简单了。Redis 现在有几个新功能都用到了线程: + +1. “惰性” [内存释放][8]。 +2. 在后台线程中使用 fsync 调用写一个 [持久化日志][9]。 +3.
运行需要执行一个长周期运行的操作的用户定义模块。 + +对于前两个特性,Redis 使用它自己的一个简单的 bio(它是 “Background I/O” 的首字母缩写)库。这个库是根据 Redis 的需要进行硬编码的,不能用到其它的地方 —— 它运行预设数量的线程,每个 Redis 后台作业类型需要一个线程。 + +而对于第三个特性,[Redis 模块][18] 可以定义新的 Redis 命令,并且遵循与普通 Redis 命令相同的标准,包括不阻塞主线程。如果在模块中自定义的一个 Redis 命令希望去执行一个长周期运行的操作,它可以创建一个线程在后台去运行它。在 Redis 源码树中的 `src/modules/helloblock.c` 提供了这样的一个示例。 + +有了这些特性,Redis 将线程和一个事件循环结合了起来,在一般的案例中,Redis 具有了更快的速度和弹性,这有点类似于在本系列文章中 [第四节][19] 讨论的工作队列。 + +- 注1: Redis 的一个核心部分是:它是一个 _内存中_ 数据库;因此,查询从不会运行太长的时间。当然了,这将会带来各种各样的其它问题。在使用分区的情况下,服务器可能最终路由一个请求到另一个实例上;在这种情况下,将使用异步 I/O 来避免阻塞其它客户端。 +- 注2: 使用 `anetAccept`;`anet` 是 Redis 对 TCP 套接字代码的封装。 + +-------------------------------------------------------------------------------- + +via: https://eli.thegreenplace.net/2017/concurrent-servers-part-5-redis-case-study/ + +作者:[Eli Bendersky][a] +译者:[qhwdw](https://github.com/qhwdw) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://eli.thegreenplace.net/pages/about +[1]:https://eli.thegreenplace.net/2017/concurrent-servers-part-5-redis-case-study/#id1 +[2]:https://eli.thegreenplace.net/2017/concurrent-servers-part-5-redis-case-study/#id2 +[3]:https://linux.cn/article-8993-1.html +[4]:https://linux.cn/article-9002-1.html +[5]:https://linux.cn/article-9117-1.html +[6]:https://linux.cn/article-9397-1.html +[7]:http://eli.thegreenplace.net/2017/concurrent-servers-part-5-redis-case-study/ +[8]:http://antirez.com/news/93 +[9]:https://redis.io/topics/persistence +[10]:https://redis.io/ +[11]:https://redis.io/topics/internals-rediseventlib +[12]:https://eli.thegreenplace.net/2017/concurrent-servers-part-5-redis-case-study/#id4 +[13]:https://eli.thegreenplace.net/2017/concurrent-servers-part-5-redis-case-study/#id5 +[14]:https://linux.cn/article-9397-1.html +[15]:http://oldblog.antirez.com/post/redis-win32-msft-patch.html
+[16]:http://eli.thegreenplace.net/2017/benefits-of-dependencies-in-software-projects-as-a-function-of-effort/ +[17]:http://antirez.com/news/93 +[18]:https://redis.io/topics/modules-intro +[19]:https://linux.cn/article-9397-1.html diff --git a/published/20171214 6 open source home automation tools.md b/published/20171214 6 open source home automation tools.md new file mode 100644 index 0000000000..017e0fd447 --- /dev/null +++ b/published/20171214 6 open source home automation tools.md @@ -0,0 +1,115 @@ +6 个开源的家庭自动化工具 +====== + +> 用这些开源软件解决方案构建一个更智能的家庭。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_520x292_openlightbulbs.png?itok=nrv9hgnH) + +[物联网][13] 不仅是一个时髦词,在现实中,自 2016 年我们发布了一篇关于家庭自动化工具的评论文章以来,它也在迅速占领着我们的生活。在 2017 年,[26.5% 的美国家庭][14] 已经使用了一些智能家居技术;预计五年内,这一数字还将翻倍。 + +随着可以帮助你实现家庭的自动化管理、安保和监视的各种设备的数量持续增加,尝试家庭自动化从来没有像现在这样容易和吸引人过。不论你是要远程控制你的 HVAC 系统,集成一个家庭影院,保护你的家免受盗窃、火灾、或是其它威胁,还是节省能源或只是控制几盏灯,现在都有无数的设备可以帮到你。 + +但同时,还有许多用户担心安装在他们家庭中的新设备带来的安全和隐私问题 —— 这是一个很现实也很 [严肃的问题][15]。他们想要去控制有谁可以接触到这个重要的系统,这个系统管理着他们的应用程序,记录了他们生活中的点点滴滴。这种想法是可以理解的:毕竟在一个连你的冰箱都是智能设备的今天,你不想要一个基本的保证吗?甚至是,如果你授权了设备与外界通讯,它是否仅能被授权的人访问呢?
[对安全的担心][16] 是为什么开源对我们将来使用的互联设备至关重要的众多理由之一。由于源代码运行在你自己的设备上,你完全可以去搞明白控制你的家庭的程序,也就是说你可以查看它的代码,如果必要的话甚至可以去修改它。 + +虽然联网设备通常都包含它们专有的组件,但是将开源引入家庭自动化的第一步是确保将你的设备联系在一起的那个设备 —— 它为你提供一个操作接口 —— 是开源的。幸运的是,现在有许多解决方案可供选择,从 PC 到树莓派,你可以在它们上做任何事情。 + +这里有几个我比较喜欢的。 + +### Calaos + +[Calaos][17] 是一个设计为全栈的家庭自动化平台,包含一个服务器应用程序、触摸屏界面、Web 应用程序、支持 iOS 和 Android 的原生移动应用、以及一个运行在底层的预配置好的 Linux 操作系统。Calaos 项目出自一个法国公司,因此它的支持论坛以法语为主,不过大量的介绍资料和文档都已经翻译为英语了。 + +Calaos 使用的是 [GPL][18] v3 的许可证,你可以在 [GitHub][19] 上查看它的源代码。 + +### Domoticz + +[Domoticz][20] 是一个有大量设备库支持的家庭自动化系统,在它的项目网站上有大量的文档,从气象站到远程控制的烟雾探测器,以及大量的第三方 [集成软件][21]。它使用一个 HTML5 前端,可以从桌面浏览器或者大多数现代的智能手机上访问它,它是一个轻量级的应用,可以运行在像树莓派这样的低功耗设备上。 + +Domoticz 是用 C++ 写的,使用 [GPLv3][22] 许可证。它的 [源代码][23] 在 GitHub 上。 + +### Home Assistant + +[Home Assistant][24] 是一个开源的家庭自动化平台,它可以轻松部署在任何能运行 Python 3 的机器上,从树莓派到网络存储(NAS),甚至可以使用 Docker 容器轻松地部署到其它系统上。它集成了大量的开源和商业的产品,允许你去连接它们,比如,IFTTT、天气信息、或者你的 Amazon Echo 设备,去控制从锁到灯的各种硬件。 + +Home Assistant 以 [MIT 许可证][25] 发布,它的源代码可以从 [GitHub][26] 上下载。 + +### MisterHouse + +自从 2016 年我们把它作为“另一个可以考虑的选择”列在这个清单上以来,[MisterHouse][27] 取得了很多的进展。它使用 Perl 脚本去监视任何东西,它可以通过一台计算机来查询或者控制任何可以远程控制的东西。它可以响应语音命令,查询当前时间、天气、位置、以及其它事件,比如去打开灯、唤醒你、记下你喜欢的电视节目、通报呼入的来电、开门报警、记录你儿子上了多长时间的网、如果你女儿汽车超速它也可以告诉你等等。它可以运行在 Linux、macOS、以及 Windows 计算机上,它可以读/写很多的设备,包括安全系统、气象站、来电显示、路由器、机动车位置系统等等。 + +MisterHouse 使用 [GPLv2][28] 许可证,你可以在 [GitHub][29] 上查看它的源代码。 + +### OpenHAB + +[OpenHAB][30](开放家庭自动化总线的简称)是在开源爱好者中所熟知的家庭自动化工具,它拥有大量用户的社区以及支持和集成了大量的设备。它是用 Java 写的,OpenHAB 非常轻便,可以跨大多数主流操作系统使用,它甚至在树莓派上也运行得很好。支持成百上千的设备,OpenHAB 被设计为与设备无关的,这使开发者在系统中添加他们的设备或者插件很容易。OpenHAB 也支持通过 iOS 和 Android 应用来控制设备以及设计工具,因此,你可以为你的家庭系统创建你自己的 UI。 + +你可以在 GitHub 上找到 OpenHAB 的 [源代码][31],它使用 [Eclipse 公共许可证][32]。 + +### OpenMotics + +[OpenMotics][33] 是一个开源的硬件和软件家庭自动化系统。它的设计目标是为控制设备提供一个综合的系统,而不是从不同的供应商处将各种设备拼接在一起。不像其它的系统主要是为了方便改装而设计的,OpenMotics 专注于硬件解决方案。更多资料请查阅来自 OpenMotics 的后端开发者 Frederick Ryckbosch 的 [完整文章][34]。 + +OpenMotics 使用 [GPLv2][35] 许可证,它的源代码可以从 [GitHub][36] 上下载。 +
+当然了,我们的选择不仅有这些。许多家庭自动化爱好者使用不同的解决方案,甚至是他们自己动手做。其它用户选择使用单独的智能家庭设备而无需集成它们到一个单一的综合系统中。 + +如果上面的解决方案并不能满足你的需求,下面还有一些潜在的替代者可以去考虑: + +* [EventGhost][1] 是一个开源的([GPL v2][2])家庭影院自动化工具,它只能运行在 Microsoft Windows PC 上。它允许用户去控制多媒体电脑和连接的硬件,它通过触发宏指令的插件或者定制的 Python 脚本来使用。 +* [ioBroker][3] 是一个基于 JavaScript 的物联网平台,它能够控制灯、锁、空调、多媒体、网络摄像头等等。它可以运行在任何可以运行 Node.js 的硬件上,包括 Windows、Linux、以及 macOS,它使用 [MIT 许可证][4]。 +* [Jeedom][5] 是一个由开源软件([GPL v2][6])构成的家庭自动化平台,它可以控制灯、锁、多媒体等等。它包含一个移动应用程序(Android 和 iOS),并且可以运行在 Linux PC 上;该公司也销售 hub,它为配置家庭自动化提供一个现成的解决方案。 +* [LinuxMCE][7] 标称它是你的多媒体与电子设备之间的“数字粘合剂”。它运行在 Linux(包括树莓派)上,它基于 Pluto 开源 [许可证][8] 发布,它可以用于家庭安全、电话(VoIP 和语音信箱)、A/V 设备、家庭自动化、以及玩视频游戏。 +* [OpenNetHome][9],和这一类中的其它解决方案一样,是一个控制灯、报警、应用程序等等的一个开源软件。它基于 Java 和 Apache Maven,可以运行在 Windows、macOS、以及 Linux —— 包括树莓派,它以 [GPLv3][10] 许可证发布。 +* [Smarthomatic][11] 是一个专注于硬件设备和软件的开源家庭自动化框架,而不仅是用户界面。它基于 [GPLv3][12] 许可证,它可用于控制灯、电器、以及空调、检测温度、提醒给植物浇水。 + +现在该轮到你了:你已经准备好家庭自动化系统了吗?或者正在研究去设计一个。你对家庭自动化的新手有什么建议,你会推荐什么样的系统? + +-------------------------------------------------------------------------------- + +via: https://opensource.com/life/17/12/home-automation-tools + +作者:[Jason Baker][a] +译者:[qhwdw](https://github.com/qhwdw) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/jason-baker +[1]:http://www.eventghost.net/ +[2]:http://www.gnu.org/licenses/old-licenses/gpl-2.0.html +[3]:http://iobroker.net/ +[4]:https://github.com/ioBroker/ioBroker#license +[5]:https://www.jeedom.com/site/en/index.html +[6]:http://www.gnu.org/licenses/old-licenses/gpl-2.0.html +[7]:http://www.linuxmce.com/ +[8]:http://wiki.linuxmce.org/index.php/License +[9]:http://opennethome.org/ +[10]:https://github.com/NetHome/NetHomeServer/blob/master/LICENSE +[11]:https://www.smarthomatic.org/ +[12]:https://github.com/breaker27/smarthomatic/blob/develop/GPL3.txt +[13]:https://opensource.com/resources/internet-of-things 
+[14]:https://www.statista.com/outlook/279/109/smart-home/united-states +[15]:http://www.crn.com/slide-shows/internet-of-things/300089496/black-hat-2017-9-iot-security-threats-to-watch.htm +[16]:https://opensource.com/business/15/5/why-open-source-means-stronger-security +[17]:https://calaos.fr/en/ +[18]:https://github.com/calaos/calaos-os/blob/master/LICENSE +[19]:https://github.com/calaos +[20]:https://domoticz.com/ +[21]:https://www.domoticz.com/wiki/Integrations_and_Protocols +[22]:https://github.com/domoticz/domoticz/blob/master/License.txt +[23]:https://github.com/domoticz/domoticz +[24]:https://home-assistant.io/ +[25]:https://github.com/home-assistant/home-assistant/blob/dev/LICENSE.md +[26]:https://github.com/balloob/home-assistant +[27]:http://misterhouse.sourceforge.net/ +[28]:http://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html +[29]:https://github.com/hollie/misterhouse +[30]:http://www.openhab.org/ +[31]:https://github.com/openhab/openhab +[32]:https://github.com/openhab/openhab/blob/master/LICENSE.TXT +[33]:https://www.openmotics.com/ +[34]:https://opensource.com/life/14/12/open-source-home-automation-system-opemmotics +[35]:http://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html +[36]:https://github.com/openmotics diff --git a/translated/tech/20171218 Whats CGManager.md b/published/20171218 Whats CGManager.md similarity index 68% rename from translated/tech/20171218 Whats CGManager.md rename to published/20171218 Whats CGManager.md index a21776eea7..41c9fd71be 100644 --- a/translated/tech/20171218 Whats CGManager.md +++ b/published/20171218 Whats CGManager.md @@ -1,47 +1,47 @@ -什么是 CGManager?[][1] +什么是 CGManager? 
============================================================ CGManager 是一个核心的特权守护进程,通过一个简单的 D-Bus API 管理你所有的 cgroup。它被设计用来处理嵌套的 LXC 容器以及接受无特权的请求,包括解析用户名称空间的 UID/GID。 -# 组件[][2] +### 组件 -### cgmanager[][3] +#### cgmanager -这个守护进程在主机上运行,将 cgroupfs 挂载到一个独立的挂载名称空间(所以它不能从主机上看到),绑定 /sys/fs/cgroup/cgmanager/sock 用于传入的 D-Bus 查询,并通常处理主机上直接运行的所有客户端。 +这个守护进程在宿主机上运行,将 cgroupfs 挂载到一个独立的挂载名称空间(所以它不能从宿主机上看到),绑定 `/sys/fs/cgroup/cgmanager/sock` 用于传入的 D-Bus 查询,并通常处理宿主机上直接运行的所有客户端。 -cgmanager 同时接受使用 D-Bus + SCM 凭证的身份验证请求,用于在命名空间之间转换 uid、gid 和 pid,或者使用简单的 “unauthenticated”(只是初始的 ucred)D-Bus 来查询来自主机级别的查询。 +cgmanager 既接受使用 D-Bus + SCM 凭证的身份验证请求,用于在命名空间之间转换 uid、gid 和 pid,也可以使用简单的 “unauthenticated”(只是初始的 ucred)D-Bus 来查询来自宿主机级别的查询。 -### cgproxy[][4] +#### cgproxy -你可能会在两种情况下看到这个守护进程运行。在主机上,如果你的内核小于 3.8(没有 pidns 连接支持)或在容器中(只有 cgproxy 运行)。 +你可能会在两种情况下看到这个守护进程运行:在内核老于 3.8(没有 pidns 连接支持)的宿主机上,或者在容器中(此时只有 cgproxy 运行)。 cgproxy 本身并不做任何 cgroup 配置更改,而是如其名称所示,代理请求给主 cgmanager 进程。 -这是必要的,所以一个进程可以直接使用 D-Bus(例如使用 dbus-send)与 /sys/fs/cgroup/cgmanager/sock 进行通信。 +这是必要的,这样一个进程就可以直接使用 D-Bus(例如使用 dbus-send)与 `/sys/fs/cgroup/cgmanager/sock` 进行通信。 -之后 cgproxy 将从该查询中得到 ucred,并对真正的 cgmanager 套接字进行经过身份验证的 SCM 查询,并通过 ucred 结构体传递参数,使它们能够正确地转换为 cgmanager 可以理解的主机命名空间 。 +之后 cgproxy 将从该查询中得到 ucred,并对真正的 cgmanager 套接字进行经过身份验证的 SCM 查询,并通过 ucred 结构体传递参数,使它们能够正确地转换为 cgmanager 可以理解的宿主机命名空间。 -### cgm[][5] +#### cgm 一个简单的命令行工具,与 D-Bus 服务通信,并允许你从命令行执行所有常见的 cgroup 操作。 -# 通信协议[][6] +### 通信协议 如上所述,cgmanager 和 cgproxy 使用 D-Bus。建议外部客户端(即不是 cgproxy)使用标准的 D-Bus API,不要试图实现 SCM creds 协议,因为它是不必要的,并且容易出错。 -相反,只要简单假设与 /sys/fs/cgroup/cgmanager/sock 的通信总是正确的。 +相反,只要简单假设与 `/sys/fs/cgroup/cgmanager/sock` 的通信总是正确的。 cgmanager API 仅在独立的 D-Bus 套接字上可用,cgmanager 本身不连接到系统总线,所以 cgmanager/cgproxy 不要求有运行中的 dbus 守护进程。 你可以在[这里][7]阅读更多关于 D-Bus API 的内容。 -# Licensing[][8] +### 许可证 CGManager 是自由软件,大部分代码是根据 GNU LGPLv2.1+ 许可条款发布的,一些二进制文件是在 GNU GPLv2 许可下发布的。 该项目的默认许可证是 GNU LGPLv2.1+ -# Support[][9] +### 支持 CGManager 的稳定版本支持依赖于 Linux 发行版以及它们自己承诺推出稳定修复和安全更新。 @@
-51,9 +51,9 @@ CGManager 的稳定版本支持依赖于 Linux 发行版以及它们自己承诺 via: https://linuxcontainers.org/cgmanager/introduction/ -作者:[Canonical Ltd. ][a] +作者:[Canonical Ltd.][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20180102 HTTP errors in WordPress.md b/published/20180102 HTTP errors in WordPress.md new file mode 100644 index 0000000000..ee2c5c7390 --- /dev/null +++ b/published/20180102 HTTP errors in WordPress.md @@ -0,0 +1,176 @@ +如何修复 WordPress 中的 HTTP 错误 +====== + +![http error wordpress][1] + +我们会向你介绍,如何在 Linux VPS 上修复 WordPress 中的 HTTP 错误。下面列出了 WordPress 用户遇到的最常见的 HTTP 错误,我们的建议侧重于如何发现错误原因以及解决方法。 + + +### 1、 修复在上传图像时出现的 HTTP 错误 + +如果你在基于 WordPress 的网页中上传图像时出现错误,这也许是因为服务器上 PHP 的配置,例如存储空间不足或者其他配置问题造成的。 + +用如下命令查找 php 配置文件: + +``` +php -i | grep php.ini +Configuration File (php.ini) Path => /etc +Loaded Configuration File => /etc/php.ini +``` + +根据输出结果,php 配置文件位于 `/etc` 文件夹下。编辑 `/etc/php.ini` 文件,找出下列行,并按照下面的例子修改其中相对应的值: + +``` +vi /etc/php.ini +``` + +``` +upload_max_filesize = 64M +post_max_size = 32M +max_execution_time = 300 +max_input_time = 300 +memory_limit = 128M +``` + +当然,如果你不习惯使用 vi 文本编辑器,你可以选用自己喜欢的。 + +不要忘记重启你的网页服务器来让改动生效。 + +如果你安装的网页服务器是 Apache,你也可以使用 `.htaccess` 文件。首先,找到 `.htaccess` 文件。它位于 WordPress 安装路径的根文件夹下。如果没有找到 `.htaccess` 文件,需要自己手动创建一个,然后加入如下内容: + + +``` +vi /www/html/path_to_wordpress/.htaccess +``` + +``` +php_value upload_max_filesize 64M +php_value post_max_size 32M +php_value max_execution_time 180 +php_value max_input_time 180 + +# BEGIN WordPress +<IfModule mod_rewrite.c> + RewriteEngine On + RewriteBase / + RewriteRule ^index\.php$ - [L] + RewriteCond %{REQUEST_FILENAME} !-f + RewriteCond %{REQUEST_FILENAME} !-d + RewriteRule .
/index.php [L] +</IfModule> +# END WordPress +``` + +如果你使用的网页服务器是 nginx,在 nginx 的 `server` 配置块中配置你的 WordPress 实例。详细配置和下面的例子相似: + +``` +server { + + listen 80; + client_max_body_size 128m; + client_body_timeout 300; + + server_name your-domain.com www.your-domain.com; + + root /var/www/html/wordpress; + index index.php; + + location = /favicon.ico { + log_not_found off; + access_log off; + } + + location = /robots.txt { + allow all; + log_not_found off; + access_log off; + } + + location / { + try_files $uri $uri/ /index.php?$args; + } + + location ~ \.php$ { + include fastcgi_params; + fastcgi_pass 127.0.0.1:9000; + fastcgi_index index.php; + fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; + } + + location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ { + expires max; + log_not_found off; + } +} +``` + +根据自己的 PHP 配置,你需要将 `fastcgi_pass 127.0.0.1:9000;` 用类似于 `fastcgi_pass unix:/var/run/php7-fpm.sock;` 替换掉(依照实际连接方式)。 + +重启 nginx 服务来使改动生效。 + +### 2、 修复因为不恰当的文件权限而产生的 HTTP 错误 + +如果你在 WordPress 中遇到一个意外错误,也许是因为不恰当的文件权限导致的,所以需要给 WordPress 文件和文件夹设置一个正确的权限: + +``` +chown www-data:www-data -R /var/www/html/path_to_wordpress/ +``` + +将 `www-data` 替换成实际的网页服务器用户,将 `/var/www/html/path_to_wordpress` 换成 WordPress 的实际安装路径。 + +### 3、 修复因为内存不足而产生的 HTTP 错误 + +你可以通过在 `wp-config.php` 中添加如下内容来设置 PHP 的最大内存限制: + +``` +define('WP_MEMORY_LIMIT', '128M'); +``` + +### 4、 修复因为 php.ini 文件错误配置而产生的 HTTP 错误 + +编辑 PHP 配置主文件,然后找到 `cgi.fix_pathinfo` 这一行。 这一行内容默认情况下是被注释掉的,默认值为 `1`。取消这一行的注释(删掉这一行最前面的分号),然后将 `1` 改为 `0`。同时需要修改 `date.timezone` 这一 PHP 设置,再次编辑 PHP 配置文件并将这一选项改成 `date.timezone = Asia/Shanghai` (或者将等号后内容改为你所在的时区)。 + +``` +vi /etc/php.ini +``` +``` +cgi.fix_pathinfo=0 +date.timezone = Asia/Shanghai +``` + +### 5、 修复因为 Apache mod_security 模块而产生的 HTTP 错误 + +如果你在使用 Apache mod_security 模块,这可能也会引起问题。试着在 `.htaccess` 文件中加入如下内容来禁用这一模块,确认问题是否由它引起: + +``` +<IfModule mod_security.c> + SecFilterEngine Off + SecFilterScanPOST Off +</IfModule> +``` + +### 6、 修复因为有问题的插件/主题而产生的 HTTP 错误 + +一些插件或主题也会导致 HTTP 错误以及其他问题。你可以首先禁用有问题的插件/主题,或暂时禁用所有 WordPress
插件。如果你有 phpMyAdmin,使用它来禁用所有插件:在其中找到 `wp_options` 数据表,在 `option_name` 这一列中找到 `active_plugins` 这一记录,然后将 `option_value` 改为:`a:0:{}`。 + +或者用以下命令通过 SSH 重命名插件所在文件夹: + +``` +mv /www/html/path_to_wordpress/wp-content/plugins /www/html/path_to_wordpress/wp-content/plugins.old +``` + +通常情况下,HTTP 错误会被记录在网页服务器的日志文件中,所以寻找错误时一个很好的切入点就是查看服务器日志。 + +-------------------------------------------------------------------------------- + +via: https://www.rosehosting.com/blog/http-error-wordpress/ + +作者:[rosehosting][a] +译者:[wenwensnow](https://github.com/wenwensnow) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.rosehosting.com +[1]:https://www.rosehosting.com/blog/wp-content/uploads/2018/01/http-error-wordpress.jpg +[2]:https://www.rosehosting.com/wordpress-hosting.html diff --git a/translated/tech/20180109 How to use syslog-ng to collect logs from remote Linux machines.md b/published/20180109 How to use syslog-ng to collect logs from remote Linux machines.md similarity index 67% rename from translated/tech/20180109 How to use syslog-ng to collect logs from remote Linux machines.md rename to published/20180109 How to use syslog-ng to collect logs from remote Linux machines.md index d101fae2a2..88da311ef4 100644 --- a/translated/tech/20180109 How to use syslog-ng to collect logs from remote Linux machines.md +++ b/published/20180109 How to use syslog-ng to collect logs from remote Linux machines.md @@ -1,83 +1,85 @@ 如何使用 syslog-ng 从远程 Linux 机器上收集日志 ====== -![linuxhero.jpg][1] -Image: Jack Wallen +![linuxhero.jpg][1] 如果你的数据中心全是 Linux 服务器,而你就是系统管理员,那么你的其中一项工作内容就是查看服务器的日志文件。但是,如果你在大量的机器上去查看日志文件,那么意味着你需要挨个去登入到机器中来阅读日志文件。如果你管理的机器很多,仅这项工作就可以花费你一天的时间。 另外的选择是,你可以配置一台单独的 Linux 机器去收集这些日志。这将使你的每日工作更加高效。要实现这个目的,有很多的不同系统可供你选择,而 syslog-ng 就是其中之一。 -使用 syslog-ng 的问题是文档并不容易梳理。但是,我已经解决了这个问题,我可以通过这种方法马上进行安装和配置 syslog-ng。下面我将在 Ubuntu Server 16.04 上示范这两种方法: - - * UBUNTUSERVERVM 的 IP 地址是 192.168.1.118 将配置为日志收集器
- * UBUNTUSERVERVM2 将配置为一个客户端,发送日志文件到收集器
-
+syslog-ng 的不足是文档并不容易梳理。但是,我已经解决了这个问题,我可以通过这种方法马上进行安装和配置 syslog-ng。下面我将在 Ubuntu Server 16.04 上示范这两种方法:
+* UBUNTUSERVERVM 的 IP 地址是 192.168.1.118 ,将配置为日志收集器
+* UBUNTUSERVERVM2 将配置为一个客户端,发送日志文件到收集器
现在我们来开始安装和配置。
-## 安装
+### 安装
安装很简单。为了尽可能容易,我将从标准仓库安装。打开一个终端窗口,运行如下命令:
+
```
sudo apt install syslog-ng
```
-在作为收集器和客户端的机器上都要运行上面的命令。安装完成之后,你将开始配置。
+你必须在收集器和客户端的机器上都运行上面的命令。安装完成之后,你将开始配置。
-## 配置收集器
+### 配置收集器
现在,我们开始日志收集器的配置。它的配置文件是 `/etc/syslog-ng/syslog-ng.conf`。syslog-ng 安装完成时就已经包含了一个配置文件。我们不使用这个默认的配置文件,可以使用 `mv /etc/syslog-ng/syslog-ng.conf /etc/syslog-ng/syslog-ng.conf.BAK` 将这个自带的默认配置文件重命名。现在使用 `sudo nano /etc/syslog-ng/syslog-ng.conf` 命令创建一个新的配置文件。在这个文件中添加如下的行:
+
```
@version: 3.5
@include "scl.conf"
@include "`scl-root`/system/tty10.conf"
- options {
- time-reap(30);
- mark-freq(10);
- keep-hostname(yes);
- };
- source s_local { system(); internal(); };
- source s_network {
- syslog(transport(tcp) port(514));
- };
- destination d_local {
- file("/var/log/syslog-ng/messages_${HOST}"); };
- destination d_logs {
- file(
- "/var/log/syslog-ng/logs.txt"
- owner("root")
- group("root")
- perm(0777)
- ); };
- log { source(s_local); source(s_network); destination(d_logs); };
+ options {
+ time-reap(30);
+ mark-freq(10);
+ keep-hostname(yes);
+ };
+ source s_local { system(); internal(); };
+ source s_network {
+ syslog(transport(tcp) port(514));
+ };
+ destination d_local {
+ file("/var/log/syslog-ng/messages_${HOST}"); };
+ destination d_logs {
+ file(
+ "/var/log/syslog-ng/logs.txt"
+ owner("root")
+ group("root")
+ perm(0777)
+ ); };
+ log { source(s_local); source(s_network); destination(d_logs); };
```
-需要注意的是,syslog-ng 使用 514 端口,你需要确保你的网络上它可以被访问。
+需要注意的是,syslog-ng 使用 514 端口,你需要确保在你的网络上它可以被访问。
+
+保存并关闭这个文件。上面的配置将转存期望的日志文件(由 `system()` 和 `internal()` 指出)到 `/var/log/syslog-ng/logs.txt` 中。因此,你需要使用如下的命令去创建所需的目录和文件:
-保存和关闭这个文件。上面的配置将转存期望的日志文件(使用 system() and internal())到 `/var/log/syslog-ng/logs.txt` 中。因此,你需要使用如下的命令去创建所需的目录和文件:
```
sudo mkdir /var/log/syslog-ng sudo touch /var/log/syslog-ng/logs.txt ``` 使用如下的命令启动和启用 syslog-ng: + ``` sudo systemctl start syslog-ng sudo systemctl enable syslog-ng ``` -## 配置为客户端 +### 配置客户端 我们将在客户端上做同样的事情(移动默认配置文件并创建新配置文件)。拷贝下列文本到新的客户端配置文件中: + ``` @version: 3.5 @include "scl.conf" @include "`scl-root`/system/tty10.conf" source s_local { system(); internal(); }; destination d_syslog_tcp { - syslog("192.168.1.118" transport("tcp") port(514)); }; + syslog("192.168.1.118" transport("tcp") port(514)); }; log { source(s_local);destination(d_syslog_tcp); }; ``` @@ -87,11 +89,9 @@ log { source(s_local);destination(d_syslog_tcp); }; ## 查看日志文件 -回到你的配置为收集器的服务器上,运行这个命令 `sudo tail -f /var/log/syslog-ng/logs.txt`。你将看到包含了收集器和客户端的日志条目的输出 ( **Figure A** )。 +回到你的配置为收集器的服务器上,运行这个命令 `sudo tail -f /var/log/syslog-ng/logs.txt`。你将看到包含了收集器和客户端的日志条目的输出(图 A)。 - **Figure A** - -![Figure A][3] +![图 A][3] 恭喜你!syslog-ng 已经正常工作了。你现在可以登入到你的收集器上查看本地机器和远程客户端的日志了。如果你的数据中心有很多 Linux 服务器,在每台服务器上都安装上 syslog-ng 并配置它们作为客户端发送日志到收集器,这样你就不需要登入到每个机器去查看它们的日志了。 @@ -101,7 +101,7 @@ via: https://www.techrepublic.com/article/how-to-use-syslog-ng-to-collect-logs-f 作者:[Jack Wallen][a] 译者:[qhwdw](https://github.com/qhwdw) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20180110 How to install-update Intel microcode firmware on Linux.md b/published/20180110 How to install-update Intel microcode firmware on Linux.md similarity index 51% rename from translated/tech/20180110 How to install-update Intel microcode firmware on Linux.md rename to published/20180110 How to install-update Intel microcode firmware on Linux.md index b383311dc6..a3b0f53420 100644 --- a/translated/tech/20180110 How to install-update Intel microcode firmware on Linux.md +++ b/published/20180110 How to install-update Intel microcode firmware on Linux.md @@ -1,56 +1,56 @@ 如何在 Linux 上安装/更新 Intel 微码固件 ====== 
+如果你是一个 Linux 系统管理方面的新手,如何在 Linux 上使用命令行方式去安装或者更新 Intel/AMD CPU 的微码固件呢?
-如果你是一个 Linux 系统管理方面的新手,如何在 Linux 上使用命令行选项去安装或者更新 Intel/AMD CPU 的微码固件?
-
-
-微码只是由 Intel/AMD 提供的 CPU 固件而已。Linux 的内核可以在系统引导时不需要升级 BIOS 的情况下更新 CPU 的固件。处理器微码保存在内存中,在每次启动系统时,内核可以更新这个微码。这些来自 Intel/AMD 的升级微码可以去修复 bug 或者使用补丁来防范 bugs。这篇文章演示了如何使用包管理器去安装 AMD 或者 Intel 微码更新,或者由 lntel 提供的 Linux 上的处理器微码更新。
-
-## 如何查看当前的微码状态
+微码（microcode）就是由 Intel/AMD 提供的 CPU 固件。Linux 的内核可以在引导时更新 CPU 固件,而无需 BIOS 更新。处理器的微码保存在内存中,在每次启动系统时,内核可以更新这个微码。这些来自 Intel/AMD 的微码更新可以去修复 bug 或者使用补丁来防范 bug。这篇文章演示了如何使用包管理器或由 Intel 提供的 Linux 处理器微码更新来安装 AMD 或 Intel 的微码更新。
+### 如何查看当前的微码状态
以 root 用户运行下列命令:
-
-`# dmesg | grep microcode`
+
+```
+# dmesg | grep microcode
+```
+
输出如下:
[![Verify microcode update on a CentOS RHEL Fedora Ubuntu Debian Linux][1]][1]
-请注意,你的 CPU 在这里完全有可能出现没有可用的微码更新的情况。如果是这种情况,它的输出可能是如下图这样的:
+请注意,你的 CPU 在这里完全有可能出现没有可用的微码更新的情况。如果是这种情况,它的输出可能是如下这样的:
+
```
[ 0.952699] microcode: sig=0x306a9, pf=0x10, revision=0x1c
[ 0.952773] microcode: Microcode Update Driver: v2.2.
```
-## 如何在 Linux 上使用包管理器去安装微码固件更新
-
-对于运行在 Linux 系统的 x86/amd64 架构的 CPU 上,Linux 自带了工具去更改或者部署微码固件。在 Linux 上安装 AMD 或者 Intel 的微码固件的过程如下:
-
- 1. 打开终端应用程序
- 2. Debian/Ubuntu Linux 用户推输入:**sudo apt install intel-microcode**
- 3. CentOS/RHEL Linux 用户输入:**sudo yum install microcode_ctl**
+### 如何在 Linux 上使用包管理器去安装微码固件更新
+对于运行在 x86/amd64 架构的 CPU 上的 Linux 系统,Linux 自带了工具去更改或者部署微码固件。在 Linux 上安装 AMD 或者 Intel 的微码固件的过程如下:
+1. 打开终端应用程序
+2. Debian/Ubuntu Linux 用户请输入:`sudo apt install intel-microcode`
+3. 
CentOS/RHEL Linux 用户输入:`sudo yum install microcode_ctl` 对于流行的 Linux 发行版,这个包的名字一般如下 : - * microcode_ctl 和 linux-firmware —— CentOS/RHEL 微码更新包 - * intel-microcode —— Debian/Ubuntu 和 clones 发行版适用于 Intel CPU 的微码更新包 - * amd64-microcode —— Debian/Ubuntu 和 clones 发行版适用于 AMD CPU 的微码固件 - * linux-firmware —— 适用于 AMD CPU 的 Arch Linux 发行版微码固件(你不用做任何操作,它是默认安装的) - * intel-ucode —— 适用于 Intel CPU 的 Arch Linux 发行版微码固件 - * microcode_ctl 和 ucode-intel —— Suse/OpenSUSE Linux 微码更新包 +* `microcode_ctl` 和 `linux-firmware` —— CentOS/RHEL 微码更新包 +* `intel-microcode` —— Debian/Ubuntu 和衍生发行版的适用于 Intel CPU 的微码更新包 +* `amd64-microcode` —— Debian/Ubuntu 和衍生发行版的适用于 AMD CPU 的微码固件 +* `linux-firmware` —— 适用于 AMD CPU 的 Arch Linux 发行版的微码固件(你不用做任何操作,它是默认安装的) +* `intel-ucode` —— 适用于 Intel CPU 的 Arch Linux 发行版微码固件 +* `microcode_ctl` 、`linux-firmware` 和 `ucode-intel` —— Suse/OpenSUSE Linux 微码更新包 +**警告 :在某些情况下,微码更新可能会导致引导问题,比如,服务器在引导时被挂起或者自动重置。以下的步骤是在我的机器上运行过的,并且我是一个经验丰富的系统管理员。对于由此引发的任何硬件故障,我不承担任何责任。在做固件更新之前,请充分评估操作风险!** - -**警告 :在某些情况下,更新微码可能会导致引导问题,比如,服务器在引导时被挂起或者自动重置。以下的步骤是在我的机器上运行过的,并且我是一个经验丰富的系统管理员。对于由此引发的任何硬件故障,我不承担任何责任。在做固件更新之前,请充分评估操作风险!** - -### 示例 +#### 示例 在使用 Intel CPU 的 Debian/Ubuntu Linux 系统上,输入如下的 [apt 命令][2]/[apt-get 命令][3]: -`$ sudo apt-get install intel-microcode` +``` +$ sudo apt-get install intel-microcode +``` 示例输出如下: @@ -58,11 +58,15 @@ 你 [必须重启服务器以激活微码][5] 更新: -`$ sudo reboot` +``` +$ sudo reboot +``` 重启后检查微码状态: -`# dmesg | grep 'microcode'` +``` +# dmesg | grep 'microcode' +``` 示例输出如下: @@ -70,7 +74,6 @@ [ 0.000000] microcode: microcode updated early to revision 0x1c, date = 2015-02-26 [ 1.604672] microcode: sig=0x306a9, pf=0x10, revision=0x1c [ 1.604976] microcode: Microcode Update Driver: v2.01 , Peter Oruba - ``` 如果你使用的是 RHEL/CentOS 系统,使用 [yum 命令][6] 尝试去安装或者更新以下两个包: @@ -81,13 +84,14 @@ $ sudo reboot $ sudo dmesg | grep 'microcode' ``` -## 如何去更新/安装从 Intel 网站上下载的微码 +### 如何更新/安装从 Intel 网站上下载的微码 -仅当你的 CPU 制造商建议这么做的时候,才可以使用下列的方法去更新/安装微码,除此之外,都应该使用上面的方法去更新。大多数 Linux 
发行版都可以通过包管理器来维护更新微码。使用包管理器的方法是经过测试的,对大多数用户来说是最安全的方式。 +只有在你的 CPU 制造商建议这么做的时候,才可以使用下列的方法去更新/安装微码,除此之外,都应该使用上面的方法去更新。大多数 Linux 发行版都可以通过包管理器来维护、更新微码。使用包管理器的方法是经过测试的,对大多数用户来说是最安全的方式。 -### 如何为 Linux 安装 Intel 处理器微码块(20180108 发布) +#### 如何为 Linux 安装 Intel 处理器微码块(20180108 发布) + +首先通过 AMD 或 [Intel 网站][7] 去获取最新的微码固件。在本示例中,我有一个名称为 `~/Downloads/microcode-20180108.tgz` 的文件(不要忘了去验证它的检验和),它的用途是去防范 `meltdown/Spectre` bug。先使用 `tar` 命令去提取它: -首先通过 AMD 或 [Intel 网站][7] 去获取最新的微码固件。在本示例中,我有一个名称为 ~/Downloads/microcode-20180108.tgz(不要忘了去验证它的检验和),它的用途是去防范 meltdown/Spectre bugs。先使用 tar 命令去提取它: ``` $ mkdir firmware $ cd firmware @@ -101,33 +105,44 @@ $ ls -l drwxr-xr-x 2 vivek vivek 4096 Jan 8 12:41 intel-ucode -rw-r--r-- 1 vivek vivek 4847056 Jan 8 12:39 microcode.dat -rw-r--r-- 1 vivek vivek 1907 Jan 9 07:03 releasenote - ``` -检查一下,确保存在 /sys/devices/system/cpu/microcode/reload 目录: +> 我只在 CentOS 7.x/RHEL、 7.x/Debian 9.x 和 Ubuntu 17.10 上测试了如下操作。如果你没有找到 `/sys/devices/system/cpu/microcode/reload` 文件的话,更老的发行版所带的更老的内核也许不能使用此方法。参见下面的讨论。请注意,在应用了固件更新之后,有一些客户遇到了系统重启现象。特别是对于[那些运行 Intel Broadwell 和 Haswell CPU][12] 的用于客户机和数据中心服务器上的系统。不要在 Intel Broadwell 和 Haswell CPU 上应用 20180108 版本。尽可能使用软件包管理器方式。 -`$ ls -l /sys/devices/system/cpu/microcode/reload` +检查一下,确保存在 `/sys/devices/system/cpu/microcode/reload`: -你必须使用 [cp 命令][8] 拷贝 intel-ucode 目录下的所有文件到 /lib/firmware/intel-ucode/ 下面: +``` +$ ls -l /sys/devices/system/cpu/microcode/reload +``` -`$ sudo cp -v intel-ucode/* /lib/firmware/intel-ucode/` +你必须使用 [cp 命令][8] 拷贝 `intel-ucode` 目录下的所有文件到 `/lib/firmware/intel-ucode/` 下面: -你只需要将 intel-ucode 这个目录整个拷贝到 /lib/firmware/ 目录下即可。然后在重新加载接口中写入 1 去重新加载微码文件: +``` +$ sudo cp -v intel-ucode/* /lib/firmware/intel-ucode/ +``` -`# echo 1 > /sys/devices/system/cpu/microcode/reload` +你只需要将 `intel-ucode` 这个目录整个拷贝到 `/lib/firmware/` 目录下即可。然后在重新加载接口中写入 `1` 去重新加载微码文件: -更新现有的 initramfs,以便于下次启动时通过内核来加载: +``` +# echo 1 > /sys/devices/system/cpu/microcode/reload +``` + +更新现有的 initramfs,以便于下次启动时它能通过内核来加载: ``` $ sudo update-initramfs -u $ 
sudo reboot
```
+
重启后通过以下的命令验证微码是否已经更新:
+
+```
+# dmesg | grep microcode
+```
到此为止,就是更新处理器微码的全部步骤。如果一切顺利的话,你的 Intel CPU 的固件将已经是最新的版本了。
-## 关于作者
+### 关于作者
作者是 nixCraft 的创始人、一位经验丰富的系统管理员、Linux/Unix 操作系统 shell 脚本培训师。他与全球的包括 IT、教育、国防和空间研究、以及非盈利组织等各行业的客户一起工作。可以在 [Twitter][9]、[Facebook][10]、[Google+][11] 上关注他。
@@ -137,7 +152,7 @@ via: https://www.cyberciti.biz/faq/install-update-intel-microcode-firmware-linux
作者:[Vivek Gite][a]
译者:[qhwdw](https://github.com/qhwdw)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@@ -153,3 +168,4 @@ via: https://www.cyberciti.biz/faq/install-update-intel-microcode-firmware-linux
[9]:https://twitter.com/nixcraft
[10]:https://facebook.com/nixcraft
[11]:https://plus.google.com/+CybercitiBiz
+[12]:https://newsroom.intel.com/news/intel-security-issue-update-addressing-reboot-issues/
\ No newline at end of file
diff --git a/published/20180111 The Fold Command Tutorial With Examples For Beginners.md b/published/20180111 The Fold Command Tutorial With Examples For Beginners.md
new file mode 100644
index 0000000000..c5ebfdc98f
--- /dev/null
+++ b/published/20180111 The Fold Command Tutorial With Examples For Beginners.md
@@ -0,0 +1,119 @@
+fold 命令入门示例教程
+======
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/01/Fold-Command-2-720x340.png)
+
+你有没有发现自己在某种情况下想要折叠或中断命令的输出,以适应特定的宽度?在运行虚拟机的时候,我遇到了几次这种情况,特别是没有 GUI 的服务器。如果你想把一个命令的输出限制为特定的宽度,那就来看看这里吧!`fold` 命令在这里就能派得上用场了!`fold` 命令会以适合指定的宽度调整输入文件中的每一行,并将其打印到标准输出。
+
+在这个简短的教程中,我们将看到 `fold` 命令的用法,并附有实例。
+
+### fold 命令示例教程
+
+`fold` 命令是 GNU coreutils 包的一部分,所以我们不用为安装的事情烦恼。
+
+`fold` 命令的典型语法:
+
+```
+fold [OPTION]... [FILE]...
+``` + +请允许我向您展示一些示例,以便您更好地了解 `fold` 命令。 我有一个名为 `linux.txt` 文件,内容是随机的。 + + +![][2] + +要将上述文件中的每一行换行为默认宽度,请运行: + +``` +fold linux.txt +``` + +每行 80 列是默认的宽度。 这里是上述命令的输出: + +![][3] + +正如你在上面的输出中看到的,`fold` 命令已经将输出限制为 80 个字符的宽度。 + +当然,我们可以指定您的首选宽度,例如 50,如下所示: + +``` +fold -w50 linux.txt +``` + +示例输出: + +![][4] + +我们也可以将输出写入一个新的文件,如下所示: + +``` +fold -w50 linux.txt > linux1.txt +``` + +以上命令将把 `linux.txt` 的行宽度改为 50 个字符,并将输出写入到名为 `linux1.txt` 的新文件中。 + +让我们检查一下新文件的内容: + +``` +cat linux1.txt +``` + +![][5] + +你有没有注意到前面的命令的输出? 有些词在行之间被中断。 为了解决这个问题,我们可以使用 `-s` 标志来在空格处换行。 + +以下命令将给定文件中的每行调整为宽度 50,并在空格处换到新行: + +``` +fold -w50 -s linux.txt +``` + +示例输出: + +![][6] + +看清楚了吗? 现在,输出很清楚。 换到新行中的单词都是用空格隔开的,所在行单词的长度大于 50 的时候就会被调整到下一行。 + +在所有上面的例子中,我们用列来限制输出宽度。 但是,我们可以使用 `-b` 选项将输出的宽度强制为指定的字节数。 以下命令以 20 个字节中断输出。 + +``` +fold -b20 linux.txt +``` + +示例输出: + +![][7] + +另请阅读: + +- [Uniq 命令入门级示例教程][8] + +有关更多详细信息,请参阅 man 手册页。 + +``` +man fold +``` + +这些就是所有的内容了。 您现在知道如何使用 `fold` 命令以适应特定的宽度来限制命令的输出。 我希望这是有用的。 我们将每天发布更多有用的指南。 敬请关注! + +干杯! 
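作为一点补充(这个管道示例并非出自上面的正文,其中的文本内容只是演示用途):`fold` 除了处理文件,也可以读取标准输入,因此可以在管道中用来限制其他命令输出的宽度,用到的仍然是前面介绍过的 `-w` 和 `-s` 选项。

```shell
# 把一行较长的文本交给 fold,以 20 列宽度换行;
# -s 保证只在空格处断行,避免把单词从中间截断
printf 'the quick brown fox jumps over the lazy dog\n' | fold -w 20 -s
```

上面的命令会把这一行文本折成三行,每行都不超过 20 个字符。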
+ +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/fold-command-tutorial-examples-beginners/ + +作者:[SK][a] +译者:[Flowsnow](https://github.com/Flowsnow) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com/author/sk/ +[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[2]:http://www.ostechnix.com/wp-content/uploads/2018/01/fold-command-1.png +[3]:http://www.ostechnix.com/wp-content/uploads/2018/01/fold-command-2.png +[4]:http://www.ostechnix.com/wp-content/uploads/2018/01/fold-command-3-1.png +[5]:http://www.ostechnix.com/wp-content/uploads/2018/01/fold-command-4.png +[6]:http://www.ostechnix.com/wp-content/uploads/2018/01/fold-command-5-1.png +[7]:http://www.ostechnix.com/wp-content/uploads/2018/01/fold-command-6-1.png +[8]:https://www.ostechnix.com/uniq-command-tutorial-examples-beginners/ diff --git a/translated/tech/20180117 How To Manage Vim Plugins Using Vundle On Linux.md b/published/20180117 How To Manage Vim Plugins Using Vundle On Linux.md similarity index 52% rename from translated/tech/20180117 How To Manage Vim Plugins Using Vundle On Linux.md rename to published/20180117 How To Manage Vim Plugins Using Vundle On Linux.md index d3f79742ec..67c83e5049 100644 --- a/translated/tech/20180117 How To Manage Vim Plugins Using Vundle On Linux.md +++ b/published/20180117 How To Manage Vim Plugins Using Vundle On Linux.md @@ -1,35 +1,38 @@ 如何在 Linux 上使用 Vundle 管理 Vim 插件 ====== + ![](https://www.ostechnix.com/wp-content/uploads/2018/01/Vundle-4-720x340.png) -毋庸置疑,**Vim** 是一款强大的文本文件处理的通用工具,能够管理系统配置文件,编写代码。通过插件,vim 可以被拓展出不同层次的功能。通常,所有的插件和附属的配置文件都会存放在 **~/.vim** 目录中。由于所有的插件文件都被存储在同一个目录下,所以当你安装更多插件时,不同的插件文件之间相互混淆。因而,跟踪和管理它们将是一个恐怖的任务。然而,这正是 Vundle 所能处理的。Vundle,分别是 **V** im 和 B **undle** 的缩写,它是一款能够管理 Vim 插件的极其实用的工具。 +毋庸置疑,Vim 
是一款强大的文本文件处理的通用工具,能够管理系统配置文件和编写代码。通过插件,Vim 可以被拓展出不同层次的功能。通常,所有的插件和附属的配置文件都会存放在 `~/.vim` 目录中。由于所有的插件文件都被存储在同一个目录下,所以当你安装更多插件时,不同的插件文件之间相互混淆。因而,跟踪和管理它们将是一个恐怖的任务。然而,这正是 Vundle 所能处理的。Vundle 是 **V**im 和 B**undle** 的缩写,它是一款能够管理 Vim 插件的极其实用的工具。
-Vundle 为每一个你安装和存储的拓展配置文件创建各自独立的目录树。因此,相互之间没有混淆的文件。简言之,Vundle 允许你安装新的插件、配置已存在的插件、更新插件配置、搜索安装插件和清理不使用的插件。所有的操作都可以在单一按键的交互模式下完成。在这个简易的教程中,让我告诉你如何安装 Vundle,如何在 GNU/Linux 中使用它来管理 Vim 插件。
+Vundle 为每一个你安装的插件创建一个独立的目录树,并在相应的插件目录中存储附加的配置文件。因此,相互之间没有混淆的文件。简言之,Vundle 允许你安装新的插件、配置已有的插件、更新插件配置、搜索安装的插件和清理不使用的插件。所有的操作都可以在一键交互模式下完成。在这个简易的教程中,让我告诉你如何安装 Vundle,如何在 GNU/Linux 中使用它来管理 Vim 插件。
### Vundle 安装
-如果你需要 Vundle,那我就当作你的系统中,已将安装好了 **vim**。如果没有,安装 vim,尽情 **git**(下载 vundle)去吧。在大部分 GNU/Linux 发行版中的官方仓库中都可以获取到这两个包。比如,在 Debian 系列系统中,你可以使用下面的命令安装这两个包。
+如果你需要 Vundle,那我就当作你的系统中已经安装好了 Vim。如果没有,请安装 Vim 和 git(以下载 Vundle)。在大部分 GNU/Linux 发行版中的官方仓库中都可以获取到这两个包。比如,在 Debian 系列系统中,你可以使用下面的命令安装这两个包。
```
sudo apt-get install vim git
```
-**下载 Vundle**
+#### 下载 Vundle
复制 Vundle 的 GitHub 仓库地址:
+
```
git clone https://github.com/VundleVim/Vundle.vim.git ~/.vim/bundle/Vundle.vim
```
-**配置 Vundle**
+#### 配置 Vundle
-创建 **~/.vimrc** 文件,通知 vim 使用新的插件管理器。这个文件获得有安装、更新、配置和移除插件的权限。
+创建 `~/.vimrc` 文件,以通知 Vim 使用新的插件管理器。安装、更新、配置和移除插件需要这个文件。
```
vim ~/.vimrc
```
在此文件顶部,加入如下若干行内容:
+
```
set nocompatible " be iMproved, required
filetype off " required
@@ -76,35 +79,39 @@ filetype plugin indent on " required
" Put your non-Plugin stuff after this line
```
-被标记的行中,是 Vundle 的请求项。其余行仅是一些例子。如果你不想安装那些特定的插件,可以移除它们。一旦你安装过,键入 **:wq** 保存退出。
+被标记为 “required” 的行是 Vundle 的所需配置。其余行仅是一些例子。如果你不想安装那些特定的插件,可以移除它们。完成后,键入 `:wq` 保存退出。
+
+最后,打开 Vim:
-最后,打开 vim
```
vim
+PluginInstall +qall ``` -使用 [**fish shell**][4] 的朋友,添加下面这行到你的 **.vimrc** 文件中。 +使用 [fish shell][4] 的朋友,添加下面这行到你的 `.vimrc` 文件中。 ``` set shell=/bin/bash @@ -112,123 +119,138 @@ set shell=/bin/bash ### 使用 Vundle 管理 Vim 插件 -**添加新的插件** +#### 添加新的插件 + +首先,使用下面的命令搜索可以使用的插件: -首先,使用下面的命令搜索可以使用的插件。 ``` :PluginSearch ``` -命令之后添加 **"! "**,刷新 vimscripts 网站内容到本地。 +要从 vimscripts 网站刷新本地的列表,请在命令之后添加 `!`。 + ``` :PluginSearch! ``` -一个陈列可用插件列表的新分窗口将会被弹出。 +会弹出一个列出可用插件列表的新分窗口: -[![][1]][5] +![][5] 你还可以通过直接指定插件名的方式,缩小搜索范围。 + ``` :PluginSearch vim ``` -这样将会列出包含关键词“vim”的插件。 +这样将会列出包含关键词 “vim” 的插件。 当然你也可以指定确切的插件名,比如: + ``` :PluginSearch vim-dasm ``` -移动焦点到正确的一行上,点击 **" i"** 来安装插件。现在,被选择的插件将会被安装。 +移动焦点到正确的一行上,按下 `i` 键来安装插件。现在,被选择的插件将会被安装。 -[![][1]][6] +![][6] + +类似的,在你的系统中安装所有想要的插件。一旦安装成功,使用下列命令删除 Vundle 缓存: -在你的系统中,所有想要的的插件都以类似的方式安装。一旦安装成功,使用下列命令删除 Vundle 缓存: ``` :bdelete ``` -现在,插件已经安装完成。在 .vimrc 文件中添加安装好的插件名,让插件正确加载。 +现在,插件已经安装完成。为了让插件正确的自动加载,我们需要在 `.vimrc` 文件中添加安装好的插件名。 这样做: + ``` :e ~/.vimrc ``` 添加这一行: + ``` [...] Plugin 'vim-dasm' [...] ``` -用自己的插件名替换 vim-dasm。然后,敲击 ESC,键入 **:wq** 保存退出。 +用自己的插件名替换 vim-dasm。然后,敲击 `ESC`,键入 `:wq` 保存退出。 + +请注意,所有插件都必须在 `.vimrc` 文件中追加如下内容。 -请注意,所有插件都必须在 .vimrc 文件中追加如下内容。 ``` [...] filetype plugin indent on ``` -**列出已安装的插件** +#### 列出已安装的插件 键入下面命令列出所有已安装的插件: + ``` :PluginList ``` -[![][1]][7] +![][7] -**更新插件** +#### 更新插件 键入下列命令更新插件: + ``` :PluginUpdate ``` -键入下列命令重新安装所有插件 +键入下列命令重新安装所有插件: + ``` :PluginInstall! 
``` -**卸载插件** +#### 卸载插件 首先,列出所有已安装的插件: + ``` :PluginList ``` -之后将焦点置于正确的一行上,敲 **" SHITF+d"** 组合键。 +之后将焦点置于正确的一行上,按下 `SHITF+d` 组合键。 -[![][1]][8] +![][8] + +然后编辑你的 `.vimrc` 文件: -然后编辑你的 .vimrc 文件: ``` :e ~/.vimrc ``` -再然后删除插件入口。最后,键入 **:wq** 保存退出。 +删除插件入口。最后,键入 `:wq` 保存退出。 + +或者,你可以通过移除插件所在 `.vimrc` 文件行,并且执行下列命令,卸载插件: -或者,你可以通过移除插件所在 .vimrc 文件行,并且执行下列命令,卸载插件: ``` :PluginClean ``` -这个命令将会移除所有不在你的 .vimrc 文件中但是存在于 bundle 目录中的插件。 +这个命令将会移除所有不在你的 `.vimrc` 文件中但是存在于 bundle 目录中的插件。 + +你应该已经掌握了 Vundle 管理插件的基本方法了。在 Vim 中使用下列命令,查询帮助文档,获取更多细节。 -你应该已经掌握了 Vundle 管理插件的基本方法了。在 vim 中使用下列命令,查询帮助文档,获取更多细节。 ``` :h vundle ``` -**捎带看看:** - -现在我已经把所有内容都告诉你了。很快,我就会出下一篇教程。保持关注 OSTechNix! +现在我已经把所有内容都告诉你了。很快,我就会出下一篇教程。保持关注! 干杯! -**来源:** +### 资源 + +[Vundle GitHub 仓库][9] -------------------------------------------------------------------------------- @@ -236,16 +258,17 @@ via: https://www.ostechnix.com/manage-vim-plugins-using-vundle-linux/ 作者:[SK][a] 译者:[CYLeft](https://github.com/CYLeft) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:https://www.ostechnix.com/author/sk/ [1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[2]:http://www.ostechnix.com/wp-content/uploads/2018/01/Vundle-1.png () -[3]:http://www.ostechnix.com/wp-content/uploads/2018/01/Vundle-2.png () +[2]:http://www.ostechnix.com/wp-content/uploads/2018/01/Vundle-1.png +[3]:http://www.ostechnix.com/wp-content/uploads/2018/01/Vundle-2.png [4]:https://www.ostechnix.com/install-fish-friendly-interactive-shell-linux/ -[5]:http://www.ostechnix.com/wp-content/uploads/2018/01/Vundle-3.png () -[6]:http://www.ostechnix.com/wp-content/uploads/2018/01/Vundle-4-2.png () -[7]:http://www.ostechnix.com/wp-content/uploads/2018/01/Vundle-5-1.png () -[8]:http://www.ostechnix.com/wp-content/uploads/2018/01/Vundle-6.png () +[5]:http://www.ostechnix.com/wp-content/uploads/2018/01/Vundle-3.png 
+[6]:http://www.ostechnix.com/wp-content/uploads/2018/01/Vundle-4-2.png +[7]:http://www.ostechnix.com/wp-content/uploads/2018/01/Vundle-5-1.png +[8]:http://www.ostechnix.com/wp-content/uploads/2018/01/Vundle-6.png +[9]:https://github.com/VundleVim/Vundle.vim \ No newline at end of file diff --git a/translated/tech/20180119 How to install Spotify application on Linux.md b/published/20180119 How to install Spotify application on Linux.md similarity index 63% rename from translated/tech/20180119 How to install Spotify application on Linux.md rename to published/20180119 How to install Spotify application on Linux.md index 9b348401c1..8e6547e5da 100644 --- a/translated/tech/20180119 How to install Spotify application on Linux.md +++ b/published/20180119 How to install Spotify application on Linux.md @@ -1,9 +1,9 @@ -如何在 Linux 上安装 Spotify +如何在 Linux 上使用 snap 安装 Spotify(声破天) ====== -如何在 Ubuntu Linux 桌面上安装 Spotify 来在线听音乐? +如何在 Ubuntu Linux 桌面上安装 spotify 来在线听音乐? -Spotify 是一个可让你访问大量歌曲的数字音乐流服务。你可以免费收听或者购买订阅。可以创建播放列表。订阅用户可以免广告收听音乐。你会得到更好的音质。本教程**展示如何使用在 Ubuntu、Mint、Debian、Fedora、Arch 和其他更多发行版**上的 snap 包管理器安装 Spotify。 +Spotify 是一个可让你访问大量歌曲的数字音乐流服务。你可以免费收听或者购买订阅,可以创建播放列表。订阅用户可以免广告收听音乐,你会得到更好的音质。本教程展示如何使用在 Ubuntu、Mint、Debian、Fedora、Arch 和其他更多发行版上的 snap 包管理器安装 Spotify。 ### 在 Linux 上安装 spotify @@ -11,33 +11,28 @@ Spotify 是一个可让你访问大量歌曲的数字音乐流服务。你可以 1. 安装 snapd 2. 打开 snapd -3. 找到 Spotify snap: -``` -snap find spotify -``` -4. 安装 spotify: -``` -do snap install spotify -``` -5. 运行: -``` -spotify & -``` +3. 找到 Spotify snap:`snap find spotify` +4. 安装 spotify:`sudo snap install spotify` +5. 
运行:`spotify &` 让我们详细看看所有的步骤和例子。 -### 步骤 1 - 安装 Snapd +### 步骤 1 - 安装 snapd 你需要安装 snapd 包。它是一个守护进程(服务),并能在 Linux 系统上启用 snap 包管理。 -#### Debian/Ubuntu/Mint Linux 上的 Snapd +#### Debian/Ubuntu/Mint Linux 上的 snapd -输入以下[ apt 命令][1]/ [apt-get 命令][2]: -`$ sudo apt install snapd` +输入以下 [apt 命令][1]/ [apt-get 命令][2]: + +``` +$ sudo apt install snapd +``` #### 在 Arch Linux 上安装 snapd -snapd 只包含在 Arch User Repository(AUR)中。运行 yaourt 命令(参见[如何在 Archlinux 上安装 yaourt][3]): +snapd 只包含在 Arch User Repository(AUR)中。运行 `yaourt` 命令(参见[如何在 Archlinux 上安装 yaourt][3]): + ``` $ sudo yaourt -S snapd $ sudo systemctl enable --now snapd.socket @@ -45,7 +40,8 @@ $ sudo systemctl enable --now snapd.socket #### 在 Fedora 上获取 snapd -运行 snapd 命令 +运行 snapd 命令: + ``` sudo dnf install snapd sudo ln -s /var/lib/snapd/snap /snap @@ -53,26 +49,67 @@ sudo ln -s /var/lib/snapd/snap /snap #### OpenSUSE 安装 snapd +执行如下的 `zypper` 命令: + +``` +### Tumbleweed verson ### +$ sudo zypper addrepo http://download.opensuse.org/repositories/system:/snappy/openSUSE_Tumbleweed/ snappy +### Leap version ## +$ sudo zypper addrepo http://download.opensuse.org/repositories/system:/snappy/openSUSE_Leap_42.3/ snappy +``` + +安装: + +``` +$ sudo zypper install snapd +$ sudo systemctl enable --now snapd.socket +``` + +### 步骤 2 - 在 Linux 上使用 snap 安装 spofity + 执行 snap 命令: -`$ snap find spotify` + +``` +$ snap find spotify +``` + [![snap search for spotify app command][4]][4] + 安装它: -`$ sudo snap install spotify` + +``` +$ sudo snap install spotify +``` + [![How to install Spotify application on Linux using snap command][5]][5] -### 步骤 3 - 运行 spotify 并享受它(译注:原博客中就是这么直接跳到 step3 的) +### 步骤 3 - 运行 spotify 并享受它 从 GUI 运行它,或者只需输入: -`$ spotify` + +``` +$ spotify +``` + 在启动时自动登录你的帐户: + ``` $ spotify --username vivek@nixcraft.com $ spotify --username vivek@nixcraft.com --password 'myPasswordHere' ``` + 在初始化时使用给定的 URI 启动 Spotify 客户端: -`$ spotify--uri=` + +``` +$ spotify --uri= +``` + 以指定的网址启动: -`$ spotify--url=` + +``` +$ spotify --url= +``` + 
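顺带补充一个小脚本示例(函数名 `make_spotify_cmd` 是为演示自拟的,用户名沿用上文的示例值):如果经常带不同参数启动 Spotify,可以用一个简单的 shell 函数把上面提到的启动命令拼接出来,再写进桌面启动器或脚本里。

```shell
# 演示用函数:根据是否提供用户名,拼出相应的 spotify 启动命令。
# 这里只是打印出命令字符串,并不会真正启动客户端。
make_spotify_cmd() {
    if [ -n "$1" ]; then
        printf 'spotify --username %s &\n' "$1"
    else
        printf 'spotify &\n'
    fi
}

make_spotify_cmd 'vivek@nixcraft.com'   # 输出: spotify --username vivek@nixcraft.com &
make_spotify_cmd ''                     # 输出: spotify &
```

把打印出来的命令粘贴到终端执行,效果与上文手动输入相同。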
[![Spotify client app running on my Ubuntu Linux desktop][6]][6] ### 关于作者 @@ -85,7 +122,7 @@ via: https://www.cyberciti.biz/faq/how-to-install-spotify-application-on-linux/ 作者:[Vivek Gite][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20180129 Create your own personal Cloud- Install OwnCloud.md b/published/20180129 Create your own personal Cloud- Install OwnCloud.md new file mode 100644 index 0000000000..0b01586c92 --- /dev/null +++ b/published/20180129 Create your own personal Cloud- Install OwnCloud.md @@ -0,0 +1,113 @@ +搭建私有云:OwnCloud +====== + +所有人都在讨论云。尽管市面上有很多为我们提供云存储和其他云服务的主要服务商,但是我们还是可以为自己搭建一个私有云。 + +在本教程中,我们将讨论如何利用 OwnCloud 搭建私有云。OwnCloud 是一个可以安装在我们 Linux 设备上的 web 应用程序,能够存储和用我们的数据提供服务。OwnCloud 可以分享日历、联系人和书签,共享音/视频流等等。 + +本教程中,我们使用的是 CentOS 7 系统,但是本教程同样适用于其他 Linux 发行版中安装 OwnCloud。让我们开始安装 OwnCloud 并且做一些准备工作, + +- 推荐阅读:[如何在 CentOS & RHEL 上使用 Apache 作为反向代理服务器][1] +- 同时推荐:[实时 Linux 服务器监测和 GLANCES 监测工具][2] + +### 预备 + +* 我们需要在机器上配置 LAMP。参照阅读我们的文章《[在 CentOS/RHEL 上配置 LAMP 服务器最简单的教程][3]》 & 《[在 Ubuntu 搭建 LAMP][4]》。 +* 我们需要在自己的设备里安装这些包,`php-mysql`、 `php-json`、 `php-xml`、 `php-mbstring`、 `php-zip`、 `php-gd`、 `curl、 `php-curl` 、`php-pdo`。使用包管理器安装它们。 + +``` +$ sudo yum install php-mysql php-json php-xml php-mbstring php-zip php-gd curl php-curl php-pdo +``` + +### 安装 + +安装 OwnCloud,我们现在需要在服务器上下载 OwnCloud 安装包。使用下面的命令从官方网站下载最新的安装包(10.0.4-1): + +``` +$ wget https://download.owncloud.org/community/owncloud-10.0.4.tar.bz2 +``` + +使用下面的命令解压: + +``` +$ tar -xvf owncloud-10.0.4.tar.bz2 +``` + +现在,将所有解压后的文件移动至 `/var/www/html`: + +``` +$ mv owncloud/* /var/www/html +``` + +下一步,我们需要在 Apache 的配置文件 `httpd.conf` 上做些修改: + +``` +$ sudo vim /etc/httpd/conf/httpd.conf +``` + +更改下面的选项: + +``` +AllowOverride All +``` + +保存该文件,并修改 OwnCloud 文件夹的文件权限: + +``` +$ sudo chown -R apache:apache /var/www/html/ +$ sudo 
chmod 777 /var/www/html/config/ +``` + +然后重启 Apache 服务器执行修改: + +``` +$ sudo systemctl restart httpd +``` + +现在,我们需要在 MariaDB 上创建一个数据库,保存来自 OwnCloud 的数据。使用下面的命令创建数据库和数据库用户: + +``` +$ mysql -u root -p +MariaDB [(none)] > create database owncloud; +MariaDB [(none)] > GRANT ALL ON owncloud.* TO ocuser@localhost IDENTIFIED BY 'owncloud'; +MariaDB [(none)] > flush privileges; +MariaDB [(none)] > exit +``` + +服务器配置部分完成后,现在我们可以在网页浏览器上访问 OwnCloud。打开浏览器,输入您的服务器 IP 地址,我这边的服务器是 10.20.30.100: + +![安装 owncloud][7] + +一旦 URL 加载完毕,我们将呈现上述页面。这里,我们将创建管理员用户同时提供数据库信息。当所有信息提供完毕,点击“Finish setup”。 + +我们将被重定向到登录页面,在这里,我们需要输入先前创建的凭据: + +![安装 owncloud][9] + +认证成功之后,我们将进入 OwnCloud 面板: + +![安装 owncloud][11] + +我们可以使用手机应用程序,同样也可以使用网页界面更新我们的数据。现在,我们已经有自己的私有云了,同时,关于如何安装 OwnCloud 创建私有云的教程也进入尾声。请在评论区留下自己的问题或建议。 + +-------------------------------------------------------------------------------- + +via: http://linuxtechlab.com/create-personal-cloud-install-owncloud/ + +作者:[SHUSAIN][a] +译者:[CYLeft](https://github.com/CYLeft) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linuxtechlab.com/author/shsuain/ +[1]:http://linuxtechlab.com/apache-as-reverse-proxy-centos-rhel/ +[2]:http://linuxtechlab.com/linux-server-glances-monitoring-tool/ +[3]:http://linuxtechlab.com/easiest-guide-creating-lamp-server/ +[4]:http://linuxtechlab.com/install-lamp-stack-on-ubuntu/ +[6]:https://i1.wp.com/linuxtechlab.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif?resize=400%2C647 +[7]:https://i1.wp.com/linuxtechlab.com/wp-content/uploads/2018/01/owncloud1-compressor.jpg?resize=400%2C647 +[8]:https://i1.wp.com/linuxtechlab.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif?resize=876%2C541 +[9]:https://i1.wp.com/linuxtechlab.com/wp-content/uploads/2018/01/owncloud2-compressor1.jpg?resize=876%2C541 
+[10]:https://i1.wp.com/linuxtechlab.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif?resize=981%2C474 +[11]:https://i0.wp.com/linuxtechlab.com/wp-content/uploads/2018/01/owncloud3-compressor1.jpg?resize=981%2C474 diff --git a/published/20180129 How programmers learn to code.md b/published/20180129 How programmers learn to code.md new file mode 100644 index 0000000000..4d533d5469 --- /dev/null +++ b/published/20180129 How programmers learn to code.md @@ -0,0 +1,62 @@ +程序员如何学习编码 +============================================================ + + [![How programmers learn to code](https://mybroadband.co.za/news/wp-content/uploads/2016/01/Programmer-working-computer-code.jpg)][8] + +HackerRank 最近公布了 2018 年开发者技能报告的结果,其中向程序员询问了他们何时开始编码。 + +39,441 名专业人员和学生开发者于 2016 年 10 月 16 日至 11 月 1 日完成了在线调查,超过 25% 的被调查的开发者在 16 岁前编写了他们的第一段代码。(LCTT 译注:日期恐有误) + +### 程序员是如何学习的 + +报告称,就程序员如何学习编码而言,自学是所有年龄段开发者的常态。 + +“尽管 67% 的开发者拥有计算机科学学位,但大约 74% 的人表示他们至少一部分是自学的。” + +开发者平均了解四种语言,但他们想学习更多语言。 + +对学习的渴望因人而异 —— 18 至 24 岁的开发者计划学习 6 种语言,而 35 岁以上的开发者只计划学习 3 种语言。 + + [![HackerRank 2018 how did you learn to code](https://mybroadband.co.za/news/wp-content/uploads/2018/01/HackerRank-2018-how-did-you-learn-to-code.jpg)][5] + +### 程序员想要什么 + +HackerRank 还研究了开发者最想从雇主那里得到什么。 + +平均而言,良好的工作与生活平衡,紧随其后的是专业成长与学习,是最理想的要求。 + +按地区划分的数据显示,美国人比亚洲和欧洲的开发者更渴望工作与生活的平衡。 + +学生倾向于将成长和学习列在工作与生活的平衡之上,而专业人员对薪酬的排名比学生高得多。 + +在小公司工作的人倾向于降低工作与生活的平衡,但仍处于前三名。 + +年龄也制造了不同,25 岁以上的开发者将工作与生活的平衡评为最重要的,而 18 岁至 24 岁的人们则认为其重要性较低。 + +HackerRank 说:“在某些方面,我们发现了一个小矛盾。开发人员需要工作与生活的平衡,但他们也渴望学习“。 + +它建议,专注于做你喜欢的事情,而不是试图学习一切,这可以帮助实现更好的工作与生活的平衡。 + + [![HackerRank 2018 what do developers want most](https://mybroadband.co.za/news/wp-content/uploads/2018/01/HackerRank-2018-what-do-developers-want-most-640x342.jpg)][6] + + [![HackerRank 2018 how to improve work-life balance](https://mybroadband.co.za/news/wp-content/uploads/2018/01/HackerRank-2018-how-to-improve-work-life-balance-378x430.jpg)][7] + 
+-------------------------------------------------------------------------------- + +via: https://mybroadband.co.za/news/smartphones/246583-how-programmers-learn-to-code.html + +作者:[Staff Writer][a] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://mybroadband.co.za/news/author/staff-writer +[1]:https://mybroadband.co.za/news/author/staff-writer +[2]:https://twitter.com/intent/tweet/?text=How+programmers+learn+to+code%20https://mybroadband.co.za/news/smartphones/246583-how-programmers-learn-to-code.html&via=mybroadband +[3]:mailto:?subject=How%20programmers%20learn%20to%20code&body=HackerRank%20recently%20published%20the%20results%20of%20its%202018%20Developer%20Skills%20Report.%0A%0Ahttps%3A%2F%2Fmybroadband.co.za%2Fnews%2Fsmartphones%2F246583-how-programmers-learn-to-code.html +[4]:https://mybroadband.co.za/news/smartphones/246583-how-programmers-learn-to-code.html#disqus_thread +[5]:https://mybroadband.co.za/news/wp-content/uploads/2018/01/HackerRank-2018-how-did-you-learn-to-code.jpg +[6]:https://mybroadband.co.za/news/wp-content/uploads/2018/01/HackerRank-2018-what-do-developers-want-most.jpg +[7]:https://mybroadband.co.za/news/wp-content/uploads/2018/01/HackerRank-2018-how-to-improve-work-life-balance.jpg +[8]:https://mybroadband.co.za/news/smartphones/246583-how-programmers-learn-to-code.html \ No newline at end of file diff --git a/published/20060430 Linux Find Out Last System Reboot Time and Date Command.md b/published/201802/20060430 Linux Find Out Last System Reboot Time and Date Command.md similarity index 100% rename from published/20060430 Linux Find Out Last System Reboot Time and Date Command.md rename to published/201802/20060430 Linux Find Out Last System Reboot Time and Date Command.md diff --git a/published/20061104 How To Turn On-Off Colors For ls Command In Bash On a Linux-Unix.md b/published/201802/20061104 How 
To Turn On-Off Colors For ls Command In Bash On a Linux-Unix.md similarity index 100% rename from published/20061104 How To Turn On-Off Colors For ls Command In Bash On a Linux-Unix.md rename to published/201802/20061104 How To Turn On-Off Colors For ls Command In Bash On a Linux-Unix.md diff --git a/published/20070129 How To Debug a Bash Shell Script Under Linux or UNIX.md b/published/201802/20070129 How To Debug a Bash Shell Script Under Linux or UNIX.md similarity index 100% rename from published/20070129 How To Debug a Bash Shell Script Under Linux or UNIX.md rename to published/201802/20070129 How To Debug a Bash Shell Script Under Linux or UNIX.md diff --git a/published/20070810 How to use lftp to accelerate ftp-https download speed on Linux-UNIX.md b/published/201802/20070810 How to use lftp to accelerate ftp-https download speed on Linux-UNIX.md similarity index 100% rename from published/20070810 How to use lftp to accelerate ftp-https download speed on Linux-UNIX.md rename to published/201802/20070810 How to use lftp to accelerate ftp-https download speed on Linux-UNIX.md diff --git a/published/20071007 Linux Check IDE - SATA SSD Hard Disk Transfer Speed.md b/published/201802/20071007 Linux Check IDE - SATA SSD Hard Disk Transfer Speed.md similarity index 100% rename from published/20071007 Linux Check IDE - SATA SSD Hard Disk Transfer Speed.md rename to published/201802/20071007 Linux Check IDE - SATA SSD Hard Disk Transfer Speed.md diff --git a/published/20090627 30 Linux System Monitoring Tools Every SysAdmin Should Know.md b/published/201802/20090627 30 Linux System Monitoring Tools Every SysAdmin Should Know.md similarity index 100% rename from published/20090627 30 Linux System Monitoring Tools Every SysAdmin Should Know.md rename to published/201802/20090627 30 Linux System Monitoring Tools Every SysAdmin Should Know.md diff --git a/translated/tech/20090724 Top 20 OpenSSH Server Best Security Practices.md b/published/201802/20090724 Top 20 OpenSSH 
Server Best Security Practices.md similarity index 58% rename from translated/tech/20090724 Top 20 OpenSSH Server Best Security Practices.md rename to published/201802/20090724 Top 20 OpenSSH Server Best Security Practices.md index b0fb1bcccf..53be8e2057 100644 --- a/translated/tech/20090724 Top 20 OpenSSH Server Best Security Practices.md +++ b/published/201802/20090724 Top 20 OpenSSH Server Best Security Practices.md @@ -1,223 +1,272 @@ -Translated by shipsw +20 个 OpenSSH 最佳安全实践 +====== -20 个 OpenSSH 安全实践 -====== ![OpenSSH 安全提示][1] -OpenSSH 是 SSH 协议的一个实现。一般被 scp 或 sftp 用在远程登录、备份、远程文件传输等功能上。SSH能够完美保障两个网络或系统间数据传输的保密性和完整性。尽管如此,他主要用在使用公匙加密的服务器验证上。不时出现关于 OpenSSH 零日漏洞的[谣言][2]。本文描述**如何设置你的 Linux 或类 Unix 系统以提高 sshd 的安全性**。 +OpenSSH 是 SSH 协议的一个实现。一般通过 `scp` 或 `sftp` 用于远程登录、备份、远程文件传输等功能。SSH能够完美保障两个网络或系统间数据传输的保密性和完整性。尽管如此,它最大的优势是使用公匙加密来进行服务器验证。时不时会出现关于 OpenSSH 零日漏洞的[传言][2]。本文将描述如何设置你的 Linux 或类 Unix 系统以提高 sshd 的安全性。 -#### OpenSSH 默认设置 +### OpenSSH 默认设置 - * TCP 端口 - 22 - * OpenSSH 服务配置文件 - sshd_config (位于 /etc/ssh/) +* TCP 端口 - 22 +* OpenSSH 服务配置文件 - `sshd_config` (位于 `/etc/ssh/`) +### 1、 基于公匙的登录 +OpenSSH 服务支持各种验证方式。推荐使用公匙加密验证。首先,使用以下 `ssh-keygen` 命令在本地电脑上创建密匙对: -#### 1. 
基于公匙的登录 - -OpenSSH 服务支持各种验证方式。推荐使用公匙加密验证。首先,使用以下 ssh-keygen 命令在本地电脑上创建密匙对: - -低于 1024 位的 DSA 和 RSA 加密是很弱的,请不要使用。RSA 密匙主要是在考虑 ssh 客户端兼容性的时候代替 ECDSA 密匙使用的。 +> 1024 位或低于它的 DSA 和 RSA 加密是很弱的,请不要使用。当考虑 ssh 客户端向后兼容性的时候,请使用 RSA 密匙代替 ECDSA 密匙。所有的 ssh 密匙要么使用 ED25519,要么使用 RSA,不要使用其它类型。 ``` $ ssh-keygen -t key_type -b bits -C "comment" +``` + +示例: + +``` $ ssh-keygen -t ed25519 -C "Login to production cluster at xyz corp" +或 $ ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_aws_$(date +%Y-%m-%d) -C "AWS key for abc corp clients" ``` + -下一步,使用 ssh-copy-id 命令安装公匙: +下一步,使用 `ssh-copy-id` 命令安装公匙: + ``` $ ssh-copy-id -i /path/to/public-key-file user@host +或 $ ssh-copy-id user@remote-server-ip-or-dns-name +``` + +示例: + +``` $ ssh-copy-id vivek@rhel7-aws-server ``` -提示输入用户名和密码的时候,使用你自己的 ssh 公匙: -`$ ssh vivek@rhel7-aws-server` -[![OpenSSH 服务安全最佳实践][3]][3] + +提示输入用户名和密码的时候,确认基于 ssh 公匙的登录是否工作: + +``` +$ ssh vivek@rhel7-aws-server +``` + +[![OpenSSH 服务安全最佳实践][3]][3] + 更多有关 ssh 公匙的信息,参照以下文章: -* [为备份脚本设置无密码安全登录][48] - -* [sshpass: 使用脚本密码登录SSH服务器][49] - -* [如何为一个 Linux/类Unix 系统设置 SSH 登录密匙][50] - -* [如何使用 Ansible 工具上传 ssh 登录授权公匙][51] +* [为备份脚本设置无密码安全登录][48] +* [sshpass:使用脚本密码登录 SSH 服务器][49] +* [如何为一个 Linux/类 Unix 系统设置 SSH 登录密匙][50] +* [如何使用 Ansible 工具上传 ssh 登录授权公匙][51] -#### 2.
禁用 root 用户登录 +### 2、 禁用 root 用户登录 -禁用 root 用户登录前,确认普通用户可以以 root 身份登录。例如,允许用户 vivek 使用 sudo 命令以 root 身份登录。 +禁用 root 用户登录前,确认普通用户可以以 root 身份登录。例如,允许用户 vivek 使用 `sudo` 命令以 root 身份登录。 -##### 在 Debian/Ubuntu 系统中如何将用户 vivek 添加到 sudo 组中 +#### 在 Debian/Ubuntu 系统中如何将用户 vivek 添加到 sudo 组中 -允许 sudo 组中的用户执行任何命令。 [将用户 vivek 添加到 sudo 组中][4]: -`$ sudo adduser vivek sudo` -使用 [id 命令][5] 验证用户组。 -`$ id vivek` +允许 sudo 组中的用户执行任何命令。 [将用户 vivek 添加到 sudo 组中][4]: -##### 在 CentOS/RHEL 系统中如何将用户 vivek 添加到 sudo 组中 +``` +$ sudo adduser vivek sudo +``` + +使用 [id 命令][5] 验证用户组。 + +``` +$ id vivek +``` + +#### 在 CentOS/RHEL 系统中如何将用户 vivek 添加到 sudo 组中 + +在 CentOS/RHEL 和 Fedora 系统中允许 wheel 组中的用户执行所有的命令。使用 `usermod` 命令将用户 vivek 添加到 wheel 组中: -在 CentOS/RHEL 和 Fedora 系统中允许 wheel 组中的用户执行所有的命令。使用 uermod 命令将用户 vivek 添加到 wheel 组中: ``` $ sudo usermod -aG wheel vivek $ id vivek ``` -##### 测试 sudo 权限并禁用 ssh root 登录 +#### 测试 sudo 权限并禁用 ssh root 登录 测试并确保用户 vivek 可以以 root 身份登录执行以下命令: + ``` $ sudo -i $ sudo /etc/init.d/sshd status $ sudo systemctl status httpd ``` -添加以下内容到 sshd_config 文件中来禁用 root 登录。 + +添加以下内容到 `sshd_config` 文件中来禁用 root 登录: + ``` PermitRootLogin no ChallengeResponseAuthentication no PasswordAuthentication no UsePAM no ``` + 更多信息参见“[如何通过禁用 Linux 的 ssh 密码登录来增强系统安全][6]” 。 -#### 3. 禁用密码登录 +### 3、 禁用密码登录 + +所有的密码登录都应该禁用,仅留下公匙登录。添加以下内容到 `sshd_config` 文件中: -所有的密码登录都应该禁用,仅留下公匙登录。添加以下内容到 sshd_config 文件中: ``` AuthenticationMethods publickey PubkeyAuthentication yes ``` -CentOS 6.x/RHEL 6.x 系统中老版本的 SSHD 用户可以使用以下设置: + +CentOS 6.x/RHEL 6.x 系统中老版本的 sshd 用户可以使用以下设置: + ``` PubkeyAuthentication yes ``` -#### 4. 
限制用户的 ssh 权限 +### 4、 限制用户的 ssh 访问 + +默认状态下,所有的系统用户都可以使用密码或公匙登录。但是有些时候需要为 FTP 或者 email 服务创建 UNIX/Linux 用户。然而,这些用户也可以使用 ssh 登录系统。他们将获得访问系统工具的完整权限,包括编译器和诸如 Perl、Python(可以打开网络端口干很多疯狂的事情)等的脚本语言。通过添加以下内容到 `sshd_config` 文件中来仅允许用户 root、vivek 和 jerry 通过 SSH 登录系统: + +``` +AllowUsers vivek jerry +``` + +当然,你也可以添加以下内容到 `sshd_config` 文件中来达到仅拒绝一部分用户通过 SSH 登录系统的效果。 + +``` +DenyUsers root saroj anjali foo +``` -默认状态下,所有的系统用户都可以使用密码或公匙登录。但是有些时候需要为 FTP 或者 email 服务创建 UNIX/Linux 用户。所以,这些用户也可以使用 ssh 登录系统。他们将获得访问系统工具的完整权限,包括编译器和诸如 Perl、Python(可以打开网络端口干很多疯狂的事情) 等的脚本语言。通过添加以下内容到 sshd_config 文件中来仅允许用户 root、vivek 和 jerry 通过 SSH 登录系统: -`AllowUsers vivek jerry` -当然,你也可以添加以下内容到 sshd_config 文件中来达到仅拒绝一部分用户通过 SSH 登录系统的效果。 -`DenyUsers root saroj anjali foo` 你也可以通过[配置 Linux PAM][7] 来禁用或允许用户通过 sshd 登录。也可以允许或禁止一个[用户组列表][8]通过 ssh 登录系统。 -#### 5. 禁用空密码 +### 5、 禁用空密码 -你需要明确禁止空密码账户远程登录系统,更新 sshd_config 文件的以下内容: -`PermitEmptyPasswords no` +你需要明确禁止空密码账户远程登录系统,更新 `sshd_config` 文件的以下内容: -#### 6. 为 ssh 用户或者密匙使用强密码 +``` +PermitEmptyPasswords no +``` + +### 6、 为 ssh 用户或者密匙使用强密码 + +为密匙使用强密码和短语的重要性再怎么强调都不过分。暴力破解可以起作用就是因为用户使用了基于字典的密码。你可以强制用户避开[字典密码][9]并使用[约翰的开膛手工具][10]来检测弱密码。以下是一个随机密码生成器(放到你的 `~/.bashrc` 下): -为密匙使用强密码和短语的重要性再怎么强调都不过分。暴力破解可以起作用就是因为用户使用了基于字典的密码。你可以强制用户避开字典密码并使用[约翰的开膛手工具][10]来检测弱密码。以下是一个随机密码生成器(放到你的 ~/.bashrc 下): ``` genpasswd() { local l=$1 - [ "$l" == "" ] && l=20 - tr -dc A-Za-z0-9_ < /dev/urandom | head -c ${l} | xargs + [ "$l" == "" ] && l=20 + tr -dc A-Za-z0-9_ < /dev/urandom | head -c ${l} | xargs } ``` -运行: -`genpasswd 16` -输出: +运行: + +``` +genpasswd 16 +``` + +输出: + ``` uw8CnDVMwC6vOKgW ``` -* [使用 mkpasswd / makepasswd / pwgen 生成随机密码][52] -* [Linux / UNIX: 生成密码][53] +* [使用 mkpasswd / makepasswd / pwgen 生成随机密码][52] +* [Linux / UNIX: 生成密码][53] +* [Linux 随机密码生成命令][54] -* [Linux 随机密码生成命令][54] +### 7、 为 SSH 的 22端口配置防火墙 --------------------------------------------------------------------------------- +你需要更新 `iptables`/`ufw`/`firewall-cmd` 或 pf 防火墙配置来为 ssh 的 TCP 端口 22 配置防火墙。一般来说,OpenSSH 
服务应该仅允许本地或者其他的远端地址访问。 -#### 7. 为 SSH 端口 # 22 配置防火墙 +#### Netfilter(Iptables) 配置 -你需要更新 iptables/ufw/firewall-cmd 或 pf firewall 来为 ssh TCP 端口 # 22 配置防火墙。一般来说,OpenSSH 服务应该仅允许本地或者其他的远端地址访问。 +更新 [/etc/sysconfig/iptables (Redhat 和其派生系统特有文件) ][11] 实现仅接受来自于 192.168.1.0/24 和 202.54.1.5/29 的连接,输入: -##### Netfilter (Iptables) 配置 - -更新 [/etc/sysconfig/iptables (Redhat和其派生系统特有文件) ][11] 实现仅接受来自于 192.168.1.0/24 和 202.54.1.5/29 的连接, 输入: ``` -A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p tcp --dport 22 -j ACCEPT -A RH-Firewall-1-INPUT -s 202.54.1.5/29 -m state --state NEW -p tcp --dport 22 -j ACCEPT ``` -如果同时使用 IPv6 的话,可以编辑/etc/sysconfig/ip6tables(Redhat 和其派生系统特有文件),输入: +如果同时使用 IPv6 的话,可以编辑 `/etc/sysconfig/ip6tables` (Redhat 和其派生系统特有文件),输入: + ``` -A RH-Firewall-1-INPUT -s ipv6network::/ipv6mask -m tcp -p tcp --dport 22 -j ACCEPT - ``` -将 ipv6network::/ipv6mask 替换为实际的 IPv6 网段。 +将 `ipv6network::/ipv6mask` 替换为实际的 IPv6 网段。 -##### Debian/Ubuntu Linux 下的 UFW +#### Debian/Ubuntu Linux 下的 UFW -[UFW 是 uncomplicated firewall 的首字母缩写,主要用来管理 Linux 防火墙][12],目的是提供一种用户友好的界面。输入[以下命令使得系统进允许网段 202.54.1.5/29 接入端口 22][13]: -`$ sudo ufw allow from 202.54.1.5/29 to any port 22` -更多信息请参见 "[Linux: 菜鸟管理员的 25 个 Iptables Netfilter 命令][14]"。 +[UFW 是 Uncomplicated FireWall 的首字母缩写,主要用来管理 Linux 防火墙][12],目的是提供一种用户友好的界面。输入[以下命令使得系统仅允许网段 202.54.1.5/29 接入端口 22][13]: -##### *BSD PF 防火墙配置 +``` +$ sudo ufw allow from 202.54.1.5/29 to any port 22 +``` + +更多信息请参见 “[Linux:菜鸟管理员的 25 个 Iptables Netfilter 命令][14]”。 + +#### *BSD PF 防火墙配置 如果使用 PF 防火墙 [/etc/pf.conf][15] 配置如下: + ``` pass in on $ext_if inet proto tcp from {192.168.1.0/24, 202.54.1.5/29} to $ssh_server_ip port ssh flags S/SA synproxy state ``` -#### 8. 
修改 SSH 端口和绑定 IP +### 8、 修改 SSH 端口和绑定 IP + +ssh 默认监听系统中所有可用的网卡。修改并绑定 ssh 端口有助于避免暴力脚本的连接(许多暴力脚本只尝试端口 22)。更新文件 `sshd_config` 的以下内容来绑定端口 300 到 IP 192.168.1.5 和 202.54.1.5: -SSH 默认监听系统中所有可用的网卡。修改并绑定 ssh 端口有助于避免暴力脚本的连接(许多暴力脚本只尝试端口 22)。更新文件 sshd_config 的以下内容来绑定端口 300 到 IP 192.168.1.5 和 202.54.1.5: ``` Port 300 ListenAddress 192.168.1.5 ListenAddress 202.54.1.5 ``` -端口 300 监听地址 192.168.1.5 监听地址 202.54.1.5 - 当需要接受动态广域网地址的连接时,使用主动脚本是个不错的选择,比如 fail2ban 或 denyhosts。 -#### 9. 使用 TCP wrappers (可选的) +### 9、 使用 TCP wrappers (可选的) + +TCP wrapper 是一个基于主机的访问控制系统,用来过滤来自互联网的网络访问。OpenSSH 支持 TCP wrappers。只需要更新文件 `/etc/hosts.allow` 中的以下内容就可以使得 SSH 只接受来自于 192.168.1.2 和 172.16.23.12 的连接: -TCP wrapper 是一个基于主机的访问控制系统,用来过滤来自互联网的网络访问。OpenSSH 支持 TCP wrappers。只需要更新文件 /etc/hosts.allow 中的以下内容就可以使得 SSH 只接受来自于 192.168.1.2 和 172.16.23.12 的连接: ``` sshd : 192.168.1.2 172.16.23.12 ``` 在 Linux/Mac OS X 和类 UNIX 系统中参见 [TCP wrappers 设置和使用的常见问题][16]。 -#### 10. 阻止 SSH 破解或暴力攻击 +### 10、 阻止 SSH 破解或暴力攻击 -暴力破解是一种在单一或者分布式网络中使用大量组合(用户名和密码的组合)来尝试连接一个加密系统的方法。可以使用以下软件来应对暴力攻击: +暴力破解是一种在单一或者分布式网络中使用大量(用户名和密码的)组合来尝试连接一个加密系统的方法。可以使用以下软件来应对暴力攻击: - * [DenyHosts][17] 是一个基于 Python SSH 安全工具。该工具通过监控授权日志中的非法登录日志并封禁原始IP的方式来应对暴力攻击。 - * RHEL / Fedora 和 CentOS Linux 下如何设置 [DenyHosts][18]。 - * [Fail2ban][19] 是另一个类似的用来预防针对 SSH 攻击的工具。 - * [sshguard][20] 是一个使用 pf 来预防针对 SSH 和其他服务攻击的工具。 - * [security/sshblock][21] 阻止滥用 SSH 尝试登录。 - * [IPQ BDB filter][22] 可以看做是 fail2ban 的一个简化版。 +* [DenyHosts][17] 是一个基于 Python SSH 安全工具。该工具通过监控授权日志中的非法登录日志并封禁原始 IP 的方式来应对暴力攻击。 + * RHEL / Fedora 和 CentOS Linux 下如何设置 [DenyHosts][18]。 +* [Fail2ban][19] 是另一个类似的用来预防针对 SSH 攻击的工具。 +* [sshguard][20] 是一个使用 pf 来预防针对 SSH 和其他服务攻击的工具。 +* [security/sshblock][21] 阻止滥用 SSH 尝试登录。 +* [IPQ BDB filter][22] 可以看做是 fail2ban 的一个简化版。 +### 11、 限制 TCP 端口 22 的传入速率(可选的) +netfilter 和 pf 都提供速率限制选项可以对端口 22 的传入速率进行简单的限制。 -#### 11. 
限制 TCP 端口 # 22 的传入速率 (可选的) - -netfilter 和 pf 都提供速率限制选项可以对端口 # 22 的传入速率进行简单的限制。 - -##### Iptables 示例 +#### Iptables 示例 以下脚本将会阻止 60 秒内尝试登录 5 次以上的客户端的连入。 + ``` #!/bin/bash inet_if=eth1 ssh_port=22 -$IPT -I INPUT -p tcp --dport ${ssh_port} -i ${inet_if} -m state --state NEW -m recent --set -$IPT -I INPUT -p tcp --dport ${ssh_port} -i ${inet_if} -m state --state NEW -m recent --update --seconds 60 --hitcount 5 +$IPT -I INPUT -p tcp --dport ${ssh_port} -i ${inet_if} -m state --state NEW -m recent --set +$IPT -I INPUT -p tcp --dport ${ssh_port} -i ${inet_if} -m state --state NEW -m recent --update --seconds 60 --hitcount 5 ``` 在你的 iptables 脚本中调用以上脚本。其他配置选项: + ``` -$IPT -A INPUT -i ${inet_if} -p tcp --dport ${ssh_port} -m state --state NEW -m limit --limit 3/min --limit-burst 3 -j ACCEPT -$IPT -A INPUT -i ${inet_if} -p tcp --dport ${ssh_port} -m state --state ESTABLISHED -j ACCEPT +$IPT -A INPUT -i ${inet_if} -p tcp --dport ${ssh_port} -m state --state NEW -m limit --limit 3/min --limit-burst 3 -j ACCEPT +$IPT -A INPUT -i ${inet_if} -p tcp --dport ${ssh_port} -m state --state ESTABLISHED -j ACCEPT $IPT -A OUTPUT -o ${inet_if} -p tcp --sport ${ssh_port} -m state --state ESTABLISHED -j ACCEPT # another one line example # $IPT -A INPUT -i ${inet_if} -m state --state NEW,ESTABLISHED,RELATED -p tcp --dport 22 -m limit --limit 5/minute --limit-burst 5-j ACCEPT @@ -225,9 +274,10 @@ $IPT -A OUTPUT -o ${inet_if} -p tcp --sport ${ssh_port} -m state --state ESTABLI 其他细节参见 iptables 用户手册。 -##### *BSD PF 示例 +#### *BSD PF 示例 + +以下脚本将限制每个客户端的连入数量为 20,并且 5 秒内的连接不超过 15 个。如果客户端触发此规则,则将其加入 abusive_ips 表并限制该客户端连入。最后 flush 关键词杀死所有触发规则的客户端的连接。 -以下脚本将限制每个客户端的连入数量为 20,并且 5 秒范围的连接不超过 15 个。如果客户端触发此规则则将其加入 abusive_ips 表并限制该客户端连入。最后 flush 关键词杀死所有触发规则的客户端的状态。 ``` sshd_server_ip = "202.54.1.5" table persist @@ -235,9 +285,10 @@ block in quick from pass in on $ext_if proto tcp to $sshd_server_ip port ssh flags S/SA keep state (max-src-conn 20, max-src-conn-rate 15/5, overload flush) ``` -#### 12. 
使用端口敲门 (可选的) +### 12、 使用端口敲门(可选的) + +[端口敲门][23]是通过在一组预先指定的封闭端口上生成连接尝试,以便从外部打开防火墙上的端口的方法。一旦指定的端口连接顺序被触发,防火墙规则就被动态修改以允许发送连接的主机连入指定的端口。以下是一个使用 iptables 实现的端口敲门的示例: -[端口敲门][23]是通过在一组预先指定的封闭端口上生成连接尝试来从外部打开防火墙上的端口的方法。一旦指定的端口连接顺序被触发,防火墙规则就被动态修改以允许发送连接的主机连入指定的端口。以下是一个使用 iptables 实现的端口敲门的示例: ``` $IPT -N stage1 $IPT -A stage1 -m recent --remove --name knock @@ -257,24 +308,31 @@ $IPT -A INPUT -p tcp --dport 22 -m recent --rcheck --seconds 5 --name heaven -j $IPT -A INPUT -p tcp --syn -j door ``` +更多信息请参见: -更多信息请参见: [Debian / Ubuntu: 使用 Knockd and Iptables 设置端口敲门][55] -#### 13. 配置空闲超时注销时长 +### 13、 配置空闲超时注销时长 + +用户可以通过 ssh 连入服务器,可以配置一个超时时间间隔来避免无人值守的 ssh 会话。 打开 `sshd_config` 并确保配置以下值: -用户可以通过 ssh 连入服务器,可以配置一个超时时间间隔来避免无人值守的 ssh 会话。 打开 sshd_config 并确保配置以下值: ``` ClientAliveInterval 300 ClientAliveCountMax 0 ``` -以秒为单位设置一个空闲超时时间(300秒 = 5分钟)。一旦空闲时间超过这个值,空闲用户就会被踢出会话。更多细节参见[如何自动注销空闲超时的 BASH / TCSH / SSH 用户][24]。 -#### 14. 为 ssh 用户启用警示标语 +以秒为单位设置一个空闲超时时间(300秒 = 5分钟)。一旦空闲时间超过这个值,空闲用户就会被踢出会话。更多细节参见[如何自动注销空闲超时的 BASH / TCSH / SSH 用户][24]。 + +### 14、 为 ssh 用户启用警示标语 + +更新 `sshd_config` 文件的如下行来设置用户的警示标语: + +``` +Banner /etc/issue +``` + +`/etc/issue` 示例文件: -更新 sshd_config 文件如下来设置用户的警示标语 -`Banner /etc/issue` -/etc/issue 示例文件: ``` ---------------------------------------------------------------------------------------------- You are accessing a XYZ Government (XYZG) Information System (IS) that is provided for authorized use only. @@ -297,45 +355,61 @@ or monitoring of the content of privileged communications, or work product, rela or services by attorneys, psychotherapists, or clergy, and their assistants. Such communications and work product are private and confidential. See User Agreement for details. ---------------------------------------------------------------------------------------------- - ``` 以上是一个标准的示例,更多的用户协议和法律细节请咨询你的律师团队。 -#### 15.
禁用 .rhosts 文件 (核实) +### 15、 禁用 .rhosts 文件(需核实) + +禁止读取用户的 `~/.rhosts` 和 `~/.shosts` 文件。更新 `sshd_config` 文件中的以下内容: + +``` +IgnoreRhosts yes +``` -禁止读取用户的 ~/.rhosts 和 ~/.shosts 文件。更新 sshd_config 文件中的以下内容: -`IgnoreRhosts yes` SSH 可以模拟过时的 rsh 命令,所以应该禁用不安全的 RSH 连接。 -#### 16. 禁用 host-based 授权 (核实) +### 16、 禁用基于主机的授权(需核实) -禁用 host-based 授权,更新 sshd_config 文件的以下选项: -`HostbasedAuthentication no` +禁用基于主机的授权,更新 `sshd_config` 文件的以下选项: -#### 17. 为 OpenSSH 和 操作系统打补丁 +``` +HostbasedAuthentication no +``` + +### 17、 为 OpenSSH 和操作系统打补丁 推荐你使用类似 [yum][25]、[apt-get][26] 和 [freebsd-update][27] 等工具保持系统安装了最新的安全补丁。 -#### 18. Chroot OpenSSH (将用户锁定在主目录) +### 18、 Chroot OpenSSH (将用户锁定在主目录) -默认设置下用户可以浏览诸如 /etc/、/bin 等目录。可以使用 chroot 或者其他专有工具如 [rssh][28] 来保护ssh连接。从版本 4.8p1 或 4.9p1 起,OpenSSH 不再需要依赖诸如 rssh 或复杂的 chroot(1) 等第三方工具来将用户锁定在主目录中。可以使用新的 ChrootDirectory 指令将用户锁定在其主目录,参见[这篇博文][29]。 +默认设置下用户可以浏览诸如 `/etc`、`/bin` 等目录。可以使用 chroot 或者其他专有工具如 [rssh][28] 来保护 ssh 连接。从版本 4.8p1 或 4.9p1 起,OpenSSH 不再需要依赖诸如 rssh 或复杂的 chroot(1) 等第三方工具来将用户锁定在主目录中。可以使用新的 `ChrootDirectory` 指令将用户锁定在其主目录,参见[这篇博文][29]。 -#### 19. 禁用客户端的 OpenSSH 服务 +### 19、
禁用客户端的 OpenSSH 服务 + +工作站和笔记本不需要 OpenSSH 服务。如果不需要提供 ssh 远程登录和文件传输功能的话,可以禁用 sshd 服务。CentOS / RHEL 用户可以使用 [yum 命令][30] 禁用或删除 openssh-server: + +``` +$ sudo yum erase openssh-server +``` + +Debian / Ubuntu 用户可以使用 [apt 命令][31]/[apt-get 命令][32] 删除 openssh-server: + +``` +$ sudo apt-get remove openssh-server +``` + +有可能需要更新 iptables 脚本来移除 ssh 的例外规则。CentOS / RHEL / Fedora 系统可以编辑文件 `/etc/sysconfig/iptables` 和 `/etc/sysconfig/ip6tables`。最后[重启 iptables][33] 服务: -工作站和笔记本不需要 OpenSSH 服务。如果不需要提供 SSH 远程登录和文件传输功能的话,可以禁用 SSHD 服务。CentOS / RHEL 用户可以使用 [yum 命令][30] 禁用或删除openssh-server: -`$ sudo yum erase openssh-server` -Debian / Ubuntu 用户可以使用 [apt 命令][31]/[apt-get 命令][32] 删除 openssh-server: -`$ sudo apt-get remove openssh-server` -有可能需要更新 iptables 脚本来移除 ssh 例外规则。CentOS / RHEL / Fedora 系统可以编辑文件 /etc/sysconfig/iptables 和 /etc/sysconfig/ip6tables。最后[重启 iptables][33] 服务: ``` # service iptables restart # service ip6tables restart ``` -#### 20. 来自 Mozilla 的额外提示 +### 20、 来自 Mozilla 的额外提示 + +如果使用 6.7+ 版本的 OpenSSH,可以尝试[以下设置][34]: -如果使用 6.7+ 版本的 OpenSSH,可以尝试下以下设置: ``` #################[ WARNING ]######################## # Do not use any setting blindly. Read sshd_config # @@ -361,10 +435,11 @@ MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@op LogLevel VERBOSE # Log sftp level file access (read/write/etc.) that would not be easily logged otherwise. -Subsystem sftp /usr/lib/ssh/sftp-server -f AUTHPRIV -l INFO +Subsystem sftp /usr/lib/ssh/sftp-server -f AUTHPRIV -l INFO ``` 使用以下命令获取 OpenSSH 支持的加密方法: + ``` $ ssh -Q cipher $ ssh -Q cipher-auth @@ -372,15 +447,25 @@ $ ssh -Q mac $ ssh -Q kex $ ssh -Q key ``` -[![OpenSSH安全教程查询密码和算法选择][35]][35] -#### 如何测试 sshd_config 文件并重启/重新加载 SSH 服务? +[![OpenSSH安全教程查询密码和算法选择][35]][35] + +### 如何测试 sshd_config 文件并重启/重新加载 SSH 服务?
+ +在重启 sshd 前检查配置文件的有效性和密匙的完整性,运行: + +``` +$ sudo sshd -t +``` + +扩展测试模式: + +``` +$ sudo sshd -T +``` -在重启 sshd 前检查配置文件的有效性和密匙的完整性,运行: -`$ sudo sshd -t` -扩展测试模式: -`$ sudo sshd -T` 最后,根据系统的的版本[重启 Linux 或类 Unix 系统中的 sshd 服务][37]: + ``` $ [sudo systemctl start ssh][38] ## Debian/Ubunt Linux## $ [sudo systemctl restart sshd.service][39] ## CentOS/RHEL/Fedora Linux## @@ -388,25 +473,21 @@ $ doas /etc/rc.d/sshd restart ## OpenBSD## $ sudo service sshd restart ## FreeBSD## ``` -#### 其他建议 +### 其他建议 - 1. [使用 2FA 加强 SSH 的安全性][40] - 可以使用[OATH Toolkit][41] 或 [DuoSecurity][42] 启用多重身份验证。 - 2. [基于密匙链的身份验证][43] - 密匙链是一个 bash 脚本,可以使得基于密匙的验证非常的灵活方便。相对于无密码密匙,它提供更好的安全性。 +1. [使用 2FA 加强 SSH 的安全性][40] - 可以使用 [OATH Toolkit][41] 或 [DuoSecurity][42] 启用多重身份验证。 +2. [基于密匙链的身份验证][43] - 密匙链是一个 bash 脚本,可以使得基于密匙的验证非常的灵活方便。相对于无密码密匙,它提供更好的安全性。 +### 更多信息: +* [OpenSSH 官方][44] 项目。 +* 用户手册: sshd(8)、ssh(1)、ssh-add(1)、ssh-agent(1)。 -#### 更多信息: +如果知道这里没用提及的方便的软件或者技术,请在下面的评论中分享,以帮助读者保持 OpenSSH 的安全。 - * [OpenSSH 官方][44] 项目. - * 用户手册: sshd(8),ssh(1),ssh-add(1),ssh-agent(1) +### 关于作者 - - -如果你发现一个方便的软件或者技术,请在下面的评论中分享,以帮助读者保持 OpenSSH 的安全。 - -#### 关于作者 - -作者是 nixCraft 的创始人,一个经验丰富的系统管理员和 Linux/Unix 脚本培训师。他曾与全球客户合作,领域涉及IT,教育,国防和空间研究以及非营利部门等多个行业。请在 [Twitter][45]、[Facebook][46]、[Google+][47] 上关注他。 +作者是 nixCraft 的创始人,一个经验丰富的系统管理员和 Linux/Unix 脚本培训师。他曾与全球客户合作,领域涉及 IT,教育,国防和空间研究以及非营利部门等多个行业。请在 [Twitter][45]、[Facebook][46]、[Google+][47] 上关注他。 -------------------------------------------------------------------------------- @@ -414,7 +495,7 @@ via: https://www.cyberciti.biz/tips/linux-unix-bsd-openssh-server-best-practices 作者:[Vivek Gite][a] 译者:[shipsw](https://github.com/shipsw) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -467,7 +548,7 @@ via: https://www.cyberciti.biz/tips/linux-unix-bsd-openssh-server-best-practices [46]:https://facebook.com/nixcraft [47]:https://plus.google.com/+CybercitiBiz 
[48]:https://www.cyberciti.biz/faq/ssh-passwordless-login-with-keychain-for-scripts/ -[49]:https://www.cyberciti.biz/faq/noninteractive-shell-script-ssh-password-provider/ +[49]:https://linux.cn/article-8086-1.html [50]:https://www.cyberciti.biz/faq/how-to-set-up-ssh-keys-on-linux-unix/ [51]:https://www.cyberciti.biz/faq/how-to-upload-ssh-public-key-to-as-authorized_key-using-ansible/ [52]:https://www.cyberciti.biz/faq/generating-random-password/ diff --git a/published/20110917 Have I been hacked by root-notty- - Sysadmin World.md b/published/201802/20110917 Have I been hacked by root-notty- - Sysadmin World.md similarity index 100% rename from published/20110917 Have I been hacked by root-notty- - Sysadmin World.md rename to published/201802/20110917 Have I been hacked by root-notty- - Sysadmin World.md diff --git a/published/20120624 6 Best Open Source Alternatives to Microsoft Office for Linux.md b/published/201802/20120624 6 Best Open Source Alternatives to Microsoft Office for Linux.md similarity index 100% rename from published/20120624 6 Best Open Source Alternatives to Microsoft Office for Linux.md rename to published/201802/20120624 6 Best Open Source Alternatives to Microsoft Office for Linux.md diff --git a/published/20130319 Linux - Unix Bash Shell List All Builtin Commands.md b/published/201802/20130319 Linux - Unix Bash Shell List All Builtin Commands.md similarity index 100% rename from published/20130319 Linux - Unix Bash Shell List All Builtin Commands.md rename to published/201802/20130319 Linux - Unix Bash Shell List All Builtin Commands.md diff --git a/published/20141029 What does an idle CPU do.md b/published/201802/20141029 What does an idle CPU do.md similarity index 100% rename from published/20141029 What does an idle CPU do.md rename to published/201802/20141029 What does an idle CPU do.md diff --git a/published/20160605 Manjaro Gaming- Gaming on Linux Meets Manjaro-s Awesomeness.md b/published/201802/20160605 Manjaro Gaming- Gaming on 
Linux Meets Manjaro-s Awesomeness.md similarity index 100% rename from published/20160605 Manjaro Gaming- Gaming on Linux Meets Manjaro-s Awesomeness.md rename to published/201802/20160605 Manjaro Gaming- Gaming on Linux Meets Manjaro-s Awesomeness.md diff --git a/published/20160606 Learn your tools Navigating your Git History.md b/published/201802/20160606 Learn your tools Navigating your Git History.md similarity index 100% rename from published/20160606 Learn your tools Navigating your Git History.md rename to published/201802/20160606 Learn your tools Navigating your Git History.md diff --git a/published/20170216 25 Free Books To Learn Linux For Free.md b/published/201802/20170216 25 Free Books To Learn Linux For Free.md similarity index 100% rename from published/20170216 25 Free Books To Learn Linux For Free.md rename to published/201802/20170216 25 Free Books To Learn Linux For Free.md diff --git a/published/20170429 Monitoring network bandwidth with iftop command.md b/published/201802/20170429 Monitoring network bandwidth with iftop command.md similarity index 100% rename from published/20170429 Monitoring network bandwidth with iftop command.md rename to published/201802/20170429 Monitoring network bandwidth with iftop command.md diff --git a/published/20170511 Working with VI editor - The Basics.md b/published/201802/20170511 Working with VI editor - The Basics.md similarity index 100% rename from published/20170511 Working with VI editor - The Basics.md rename to published/201802/20170511 Working with VI editor - The Basics.md diff --git a/published/20170707 Lessons from my first year of live coding on Twitch.md b/published/201802/20170707 Lessons from my first year of live coding on Twitch.md similarity index 100% rename from published/20170707 Lessons from my first year of live coding on Twitch.md rename to published/201802/20170707 Lessons from my first year of live coding on Twitch.md diff --git a/published/20170724 How to automate your system 
administration tasks with Ansible.md b/published/201802/20170724 How to automate your system administration tasks with Ansible.md similarity index 100% rename from published/20170724 How to automate your system administration tasks with Ansible.md rename to published/201802/20170724 How to automate your system administration tasks with Ansible.md diff --git a/published/20170915 How To Install And Setup Vagrant.md b/published/201802/20170915 How To Install And Setup Vagrant.md similarity index 100% rename from published/20170915 How To Install And Setup Vagrant.md rename to published/201802/20170915 How To Install And Setup Vagrant.md diff --git a/published/201802/20170918 Two Easy Ways To Install Bing Desktop Wallpaper Changer On Linux.md b/published/201802/20170918 Two Easy Ways To Install Bing Desktop Wallpaper Changer On Linux.md new file mode 100644 index 0000000000..243b622b3e --- /dev/null +++ b/published/201802/20170918 Two Easy Ways To Install Bing Desktop Wallpaper Changer On Linux.md @@ -0,0 +1,124 @@ +在 Linux 上安装必应桌面墙纸更换器 +====== + +你是否厌倦了 Linux 桌面背景,想要设置好看的壁纸,但是不知道在哪里可以找到?别担心,我们在这里会帮助你。 + +我们都知道必应搜索引擎,但是由于一些原因很少有人使用它,每个人都喜欢必应网站的背景壁纸,它是非常漂亮和惊人的高分辨率图像。 + +如果你想使用这些图片作为你的桌面壁纸,你可以手动下载它,但是很难去每天下载一个新的图片,然后把它设置为壁纸。这时就需要自动壁纸更换器了。 + +[必应桌面墙纸更换器][1]会自动下载并将桌面壁纸更改为当天的必应照片。所有的壁纸都储存在 `/home/[user]/Pictures/BingWallpapers/`。 + +### 方法 1: 使用 Utkarsh Gupta Shell 脚本 + +这个小型 Python 脚本会自动下载并将桌面壁纸更改为当天的必应照片。该脚本在机器启动时自动运行,并工作于 GNU/Linux 上的 Gnome 或 Cinnamon 环境。它不需要手动工作,安装程序会为你做所有事情。 + +从 2.0+ 版本开始,该脚本的安装程序就可以像普通的 Linux 二进制命令一样工作,它会为某些任务请求 sudo 权限。 + +只需下载仓库归档并解压,切换到项目目录,然后运行 shell 脚本即可安装必应桌面墙纸更换器。 + +``` +$ wget https://github.com/UtkarshGpta/bing-desktop-wallpaper-changer/archive/master.zip +$ unzip master +$ cd bing-desktop-wallpaper-changer-master +``` + +运行 `installer.sh` 使用 `--install` 选项来安装必应桌面墙纸更换器。它会下载并设置必应照片为你的 Linux 桌面。 + +``` +$ ./installer.sh --install + +Bing-Desktop-Wallpaper-Changer +BDWC Installer v3_beta2 + +GitHub: +Contributors: +. +.
+[sudo] password for daygeek: ****** +. +Where do you want to install Bing-Desktop-Wallpaper-Changer? + Entering 'opt' or leaving input blank will install in /opt/bing-desktop-wallpaper-changer + Entering 'home' will install in /home/daygeek/bing-desktop-wallpaper-changer + Install Bing-Desktop-Wallpaper-Changer in (opt/home)? :Press Enter + +Should we create bing-desktop-wallpaper-changer symlink to /usr/bin/bingwallpaper so you could easily execute it? + Create symlink for easy execution, e.g. in Terminal (y/n)? : y + +Should bing-desktop-wallpaper-changer needs to autostart when you log in? (Add in Startup Application) + Add in Startup Application (y/n)? : y +. +. +Executing bing-desktop-wallpaper-changer... + + +Finished!! +``` + +![][3] + +要卸载该脚本: + +``` +$ ./installer.sh --uninstall +``` + +使用帮助页面了解更多关于此脚本的选项。 + +``` +$ ./installer.sh --help +``` + +### 方法 2: 使用 GNOME Shell 扩展 + +这个轻量级 [GNOME shell 扩展][4],可将你的壁纸每天更改为微软必应的壁纸。它还会显示一个包含图像标题和解释的通知。 + +该扩展大部分基于 Elinvention 的 NASA APOD 扩展,受到了 Utkarsh Gupta 的 Bing Desktop Wallpaper Changer 启发。 + +#### 特点 + +- 获取当天的必应壁纸并设置为锁屏和桌面墙纸(这两者都是用户可选的) +- 可强制选择某个特定区域(即地区) +- 为多个显示器自动选择最高分辨率(和最合适的墙纸) +- 可以选择在 1 到 7 天之后清理墙纸目录(删除最旧的) +- 只有当它们被更新时,才会尝试下载壁纸 +- 不会持续进行更新 - 每天只进行一次,启动时也会进行一次(更新是在必应更新时进行的) + +#### 如何安装 + +访问 [extensions.gnome.org][5] 网站并将切换按钮拖到 “ON”,然后点击 “Install” 按钮安装必应壁纸 GNOME 扩展。(LCTT 译注:页面上并没有发现 ON 按钮,但是有 Download 按钮) + +![][6] + +安装必应壁纸 GNOME 扩展后,它会自动下载并为你的 Linux 桌面设置当天的必应照片,并显示关于壁纸的通知。 + +![][7] + +托盘指示器将帮助你执行少量操作,也可以打开设置。 + +![][8] + +根据你的要求自定义设置。 + +![][9] + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/bing-desktop-wallpaper-changer-linux-bing-photo-of-the-day/ + +作者:[2daygeek][a] +译者:[MjSeven](https://github.com/MjSeven) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.2daygeek.com/author/2daygeek/
+[1]:https://github.com/UtkarshGpta/bing-desktop-wallpaper-changer +[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[3]:https://www.2daygeek.com/wp-content/uploads/2017/09/bing-wallpaper-changer-linux-5.png +[4]:https://github.com/neffo/bing-wallpaper-gnome-extension +[5]:https://extensions.gnome.org/extension/1262/bing-wallpaper-changer/ +[6]:https://www.2daygeek.com/wp-content/uploads/2017/09/bing-wallpaper-changer-for-linux-1.png +[7]:https://www.2daygeek.com/wp-content/uploads/2017/09/bing-wallpaper-changer-for-linux-2.png +[8]:https://www.2daygeek.com/wp-content/uploads/2017/09/bing-wallpaper-changer-for-linux-3.png +[9]:https://www.2daygeek.com/wp-content/uploads/2017/09/bing-wallpaper-changer-for-linux-4.png diff --git a/published/20170928 How to Use the ZFS Filesystem on Ubuntu Linux.md b/published/201802/20170928 How to Use the ZFS Filesystem on Ubuntu Linux.md similarity index 100% rename from published/20170928 How to Use the ZFS Filesystem on Ubuntu Linux.md rename to published/201802/20170928 How to Use the ZFS Filesystem on Ubuntu Linux.md diff --git a/published/20171005 Reasons Kubernetes is cool.md b/published/201802/20171005 Reasons Kubernetes is cool.md similarity index 100% rename from published/20171005 Reasons Kubernetes is cool.md rename to published/201802/20171005 Reasons Kubernetes is cool.md diff --git a/published/20171016 Using the Linux find command with caution.md b/published/201802/20171016 Using the Linux find command with caution.md similarity index 100% rename from published/20171016 Using the Linux find command with caution.md rename to published/201802/20171016 Using the Linux find command with caution.md diff --git a/published/20171019 More ways to examine network connections on Linux.md b/published/201802/20171019 More ways to examine network connections on Linux.md similarity index 100% rename from published/20171019 More ways to examine network connections on Linux.md rename to 
published/201802/20171019 More ways to examine network connections on Linux.md diff --git a/published/20171023 Processors-Everything You Need to Know.md b/published/201802/20171023 Processors-Everything You Need to Know.md similarity index 100% rename from published/20171023 Processors-Everything You Need to Know.md rename to published/201802/20171023 Processors-Everything You Need to Know.md diff --git a/published/20171027 Easy guide to secure VNC server with TLS encryption.md b/published/201802/20171027 Easy guide to secure VNC server with TLS encryption.md similarity index 100% rename from published/20171027 Easy guide to secure VNC server with TLS encryption.md rename to published/201802/20171027 Easy guide to secure VNC server with TLS encryption.md diff --git a/published/20171030 How to bind ntpd to specific IP addresses on Linux-Unix.md b/published/201802/20171030 How to bind ntpd to specific IP addresses on Linux-Unix.md similarity index 100% rename from published/20171030 How to bind ntpd to specific IP addresses on Linux-Unix.md rename to published/201802/20171030 How to bind ntpd to specific IP addresses on Linux-Unix.md diff --git a/published/20171103 How To Fully Update And Upgrade Offline Debian-based Systems.md b/published/201802/20171103 How To Fully Update And Upgrade Offline Debian-based Systems.md similarity index 100% rename from published/20171103 How To Fully Update And Upgrade Offline Debian-based Systems.md rename to published/201802/20171103 How To Fully Update And Upgrade Offline Debian-based Systems.md diff --git a/published/20171108 How To Setup Japanese Language Environment In Arch Linux.md b/published/201802/20171108 How To Setup Japanese Language Environment In Arch Linux.md similarity index 100% rename from published/20171108 How To Setup Japanese Language Environment In Arch Linux.md rename to published/201802/20171108 How To Setup Japanese Language Environment In Arch Linux.md diff --git a/published/20171112 Step by Step guide for 
creating Master Slave replication in MariaDB.md b/published/201802/20171112 Step by Step guide for creating Master Slave replication in MariaDB.md similarity index 100% rename from published/20171112 Step by Step guide for creating Master Slave replication in MariaDB.md rename to published/201802/20171112 Step by Step guide for creating Master Slave replication in MariaDB.md diff --git a/published/20171117 How to Install and Use Docker on Linux.md b/published/201802/20171117 How to Install and Use Docker on Linux.md similarity index 100% rename from published/20171117 How to Install and Use Docker on Linux.md rename to published/201802/20171117 How to Install and Use Docker on Linux.md diff --git a/published/20171120 How to use special permissions- the setuid, setgid and sticky bits.md b/published/201802/20171120 How to use special permissions- the setuid, setgid and sticky bits.md similarity index 100% rename from published/20171120 How to use special permissions- the setuid, setgid and sticky bits.md rename to published/201802/20171120 How to use special permissions- the setuid, setgid and sticky bits.md diff --git a/published/20171127 Protecting Your Website From Application Layer DOS Attacks With mod.md b/published/201802/20171127 Protecting Your Website From Application Layer DOS Attacks With mod.md similarity index 100% rename from published/20171127 Protecting Your Website From Application Layer DOS Attacks With mod.md rename to published/201802/20171127 Protecting Your Website From Application Layer DOS Attacks With mod.md diff --git a/published/20171128 Why Python and Pygame are a great pair for beginning programmers.md b/published/201802/20171128 Why Python and Pygame are a great pair for beginning programmers.md similarity index 100% rename from published/20171128 Why Python and Pygame are a great pair for beginning programmers.md rename to published/201802/20171128 Why Python and Pygame are a great pair for beginning programmers.md diff --git 
a/published/20171202 MariaDB administration commands for beginners.md b/published/201802/20171202 MariaDB administration commands for beginners.md similarity index 100% rename from published/20171202 MariaDB administration commands for beginners.md rename to published/201802/20171202 MariaDB administration commands for beginners.md diff --git a/published/20171203 Increase Torrent Speed - Here Is Why It Will Never Work.md b/published/201802/20171203 Increase Torrent Speed - Here Is Why It Will Never Work.md similarity index 100% rename from published/20171203 Increase Torrent Speed - Here Is Why It Will Never Work.md rename to published/201802/20171203 Increase Torrent Speed - Here Is Why It Will Never Work.md diff --git a/published/20171204 Tutorial on how to write basic udev rules in Linux.md b/published/201802/20171204 Tutorial on how to write basic udev rules in Linux.md similarity index 100% rename from published/20171204 Tutorial on how to write basic udev rules in Linux.md rename to published/201802/20171204 Tutorial on how to write basic udev rules in Linux.md diff --git a/published/20171208 Sessions And Cookies - How Does User-Login Work.md b/published/201802/20171208 Sessions And Cookies - How Does User-Login Work.md similarity index 100% rename from published/20171208 Sessions And Cookies - How Does User-Login Work.md rename to published/201802/20171208 Sessions And Cookies - How Does User-Login Work.md diff --git a/published/20171210 The Best Linux Laptop 2017-2018- A Buyers Guide with Picks from an RHCE.md b/published/201802/20171210 The Best Linux Laptop 2017-2018- A Buyers Guide with Picks from an RHCE.md similarity index 100% rename from published/20171210 The Best Linux Laptop 2017-2018- A Buyers Guide with Picks from an RHCE.md rename to published/201802/20171210 The Best Linux Laptop 2017-2018- A Buyers Guide with Picks from an RHCE.md diff --git a/published/20171211 A tour of containerd 1.0.md b/published/201802/20171211 A tour of containerd 
1.0.md similarity index 100% rename from published/20171211 A tour of containerd 1.0.md rename to published/201802/20171211 A tour of containerd 1.0.md diff --git a/published/20171212 How To Count The Number Of Files And Folders-Directories In Linux.md b/published/201802/20171212 How To Count The Number Of Files And Folders-Directories In Linux.md similarity index 100% rename from published/20171212 How To Count The Number Of Files And Folders-Directories In Linux.md rename to published/201802/20171212 How To Count The Number Of Files And Folders-Directories In Linux.md diff --git a/published/20171214 How to install and use encryptpad on ubuntu 16.04.md b/published/201802/20171214 How to install and use encryptpad on ubuntu 16.04.md similarity index 100% rename from published/20171214 How to install and use encryptpad on ubuntu 16.04.md rename to published/201802/20171214 How to install and use encryptpad on ubuntu 16.04.md diff --git a/published/20171215 Linux Vs Unix.md b/published/201802/20171215 Linux Vs Unix.md similarity index 100% rename from published/20171215 Linux Vs Unix.md rename to published/201802/20171215 Linux Vs Unix.md diff --git a/published/20171218 Internet Chemotherapy.md b/published/201802/20171218 Internet Chemotherapy.md similarity index 100% rename from published/20171218 Internet Chemotherapy.md rename to published/201802/20171218 Internet Chemotherapy.md diff --git a/published/20171219 4 Easiest Ways To Find Out Process ID (PID) In Linux.md b/published/201802/20171219 4 Easiest Ways To Find Out Process ID (PID) In Linux.md similarity index 100% rename from published/20171219 4 Easiest Ways To Find Out Process ID (PID) In Linux.md rename to published/201802/20171219 4 Easiest Ways To Find Out Process ID (PID) In Linux.md diff --git a/published/20171226 Dockerizing Compiled Software - Tianon-s Ramblings .md b/published/201802/20171226 Dockerizing Compiled Software - Tianon-s Ramblings .md similarity index 100% rename from published/20171226 
Dockerizing Compiled Software - Tianon-s Ramblings .md rename to published/201802/20171226 Dockerizing Compiled Software - Tianon-s Ramblings .md diff --git a/published/20171228 Dual Boot Ubuntu And Arch Linux.md b/published/201802/20171228 Dual Boot Ubuntu And Arch Linux.md similarity index 100% rename from published/20171228 Dual Boot Ubuntu And Arch Linux.md rename to published/201802/20171228 Dual Boot Ubuntu And Arch Linux.md diff --git a/published/20171228 Linux wc Command Explained for Beginners (6 Examples).md b/published/201802/20171228 Linux wc Command Explained for Beginners (6 Examples).md similarity index 100% rename from published/20171228 Linux wc Command Explained for Beginners (6 Examples).md rename to published/201802/20171228 Linux wc Command Explained for Beginners (6 Examples).md diff --git a/published/20171230 What Is A Web Crawler- How Web Crawlers work.md b/published/201802/20171230 What Is A Web Crawler- How Web Crawlers work.md similarity index 100% rename from published/20171230 What Is A Web Crawler- How Web Crawlers work.md rename to published/201802/20171230 What Is A Web Crawler- How Web Crawlers work.md diff --git a/published/20171231 Making Vim Even More Awesome With These Cool Features.md b/published/201802/20171231 Making Vim Even More Awesome With These Cool Features.md similarity index 100% rename from published/20171231 Making Vim Even More Awesome With These Cool Features.md rename to published/201802/20171231 Making Vim Even More Awesome With These Cool Features.md diff --git a/published/20171231 Why You Should Still Love Telnet.md b/published/201802/20171231 Why You Should Still Love Telnet.md similarity index 100% rename from published/20171231 Why You Should Still Love Telnet.md rename to published/201802/20171231 Why You Should Still Love Telnet.md diff --git a/published/20180102 Best open source tutorials in 2017.md b/published/201802/20180102 Best open source tutorials in 2017.md similarity index 100% rename from 
published/20180102 Best open source tutorials in 2017.md rename to published/201802/20180102 Best open source tutorials in 2017.md diff --git a/published/20180102 Linux uptime Command Explained for Beginners with Examples.md b/published/201802/20180102 Linux uptime Command Explained for Beginners with Examples.md similarity index 100% rename from published/20180102 Linux uptime Command Explained for Beginners with Examples.md rename to published/201802/20180102 Linux uptime Command Explained for Beginners with Examples.md diff --git a/published/20180102 cURL vs. wget- Their Differences, Usage and Which One You Should Use.md b/published/201802/20180102 cURL vs. wget- Their Differences, Usage and Which One You Should Use.md similarity index 100% rename from published/20180102 cURL vs. wget- Their Differences, Usage and Which One You Should Use.md rename to published/201802/20180102 cURL vs. wget- Their Differences, Usage and Which One You Should Use.md diff --git a/published/20180102 xfs file system commands with examples.md b/published/201802/20180102 xfs file system commands with examples.md similarity index 100% rename from published/20180102 xfs file system commands with examples.md rename to published/201802/20180102 xfs file system commands with examples.md diff --git a/published/20180103 Creating an Offline YUM repository for LAN.md b/published/201802/20180103 Creating an Offline YUM repository for LAN.md similarity index 100% rename from published/20180103 Creating an Offline YUM repository for LAN.md rename to published/201802/20180103 Creating an Offline YUM repository for LAN.md diff --git a/published/20180103 How to preconfigure LXD containers with cloud-init.md b/published/201802/20180103 How to preconfigure LXD containers with cloud-init.md similarity index 100% rename from published/20180103 How to preconfigure LXD containers with cloud-init.md rename to published/201802/20180103 How to preconfigure LXD containers with cloud-init.md diff --git 
a/published/20180104 How to Change Your Linux Console Fonts.md b/published/201802/20180104 How to Change Your Linux Console Fonts.md similarity index 100% rename from published/20180104 How to Change Your Linux Console Fonts.md rename to published/201802/20180104 How to Change Your Linux Console Fonts.md diff --git a/published/20180104 Whats behind the Intel design flaw forcing numerous patches.md b/published/201802/20180104 Whats behind the Intel design flaw forcing numerous patches.md similarity index 100% rename from published/20180104 Whats behind the Intel design flaw forcing numerous patches.md rename to published/201802/20180104 Whats behind the Intel design flaw forcing numerous patches.md diff --git a/published/20180105 How To Display Asterisks When You Type Password In terminal.md b/published/201802/20180105 How To Display Asterisks When You Type Password In terminal.md similarity index 100% rename from published/20180105 How To Display Asterisks When You Type Password In terminal.md rename to published/201802/20180105 How To Display Asterisks When You Type Password In terminal.md diff --git a/published/20180106 Meltdown and Spectre Linux Kernel Status.md b/published/201802/20180106 Meltdown and Spectre Linux Kernel Status.md similarity index 100% rename from published/20180106 Meltdown and Spectre Linux Kernel Status.md rename to published/201802/20180106 Meltdown and Spectre Linux Kernel Status.md diff --git a/published/20180111 Multimedia Apps for the Linux Console.md b/published/201802/20180111 Multimedia Apps for the Linux Console.md similarity index 100% rename from published/20180111 Multimedia Apps for the Linux Console.md rename to published/201802/20180111 Multimedia Apps for the Linux Console.md diff --git a/published/20180112 Top 5 Firefox extensions to install now.md b/published/201802/20180112 Top 5 Firefox extensions to install now.md similarity index 100% rename from published/20180112 Top 5 Firefox extensions to install now.md rename to 
published/201802/20180112 Top 5 Firefox extensions to install now.md diff --git a/published/20180115 How To Boot Into Linux Command Line.md b/published/201802/20180115 How To Boot Into Linux Command Line.md similarity index 100% rename from published/20180115 How To Boot Into Linux Command Line.md rename to published/201802/20180115 How To Boot Into Linux Command Line.md diff --git a/published/20180118 Getting Started with ncurses.md b/published/201802/20180118 Getting Started with ncurses.md similarity index 100% rename from published/20180118 Getting Started with ncurses.md rename to published/201802/20180118 Getting Started with ncurses.md diff --git a/published/20180120 The World Map In Your Terminal.md b/published/201802/20180120 The World Map In Your Terminal.md similarity index 100% rename from published/20180120 The World Map In Your Terminal.md rename to published/201802/20180120 The World Map In Your Terminal.md diff --git a/published/20180121 Shell Scripting a Bunco Game.md b/published/201802/20180121 Shell Scripting a Bunco Game.md similarity index 100% rename from published/20180121 Shell Scripting a Bunco Game.md rename to published/201802/20180121 Shell Scripting a Bunco Game.md diff --git a/published/20180122 How to price cryptocurrencies.md b/published/201802/20180122 How to price cryptocurrencies.md similarity index 100% rename from published/20180122 How to price cryptocurrencies.md rename to published/201802/20180122 How to price cryptocurrencies.md diff --git a/published/20180122 Linux rm Command Explained for Beginners (8 Examples).md b/published/201802/20180122 Linux rm Command Explained for Beginners (8 Examples).md similarity index 100% rename from published/20180122 Linux rm Command Explained for Beginners (8 Examples).md rename to published/201802/20180122 Linux rm Command Explained for Beginners (8 Examples).md diff --git a/published/20180123 Linux mkdir Command Explained for Beginners (with examples).md b/published/201802/20180123 Linux 
mkdir Command Explained for Beginners (with examples).md similarity index 100% rename from published/20180123 Linux mkdir Command Explained for Beginners (with examples).md rename to published/201802/20180123 Linux mkdir Command Explained for Beginners (with examples).md diff --git a/published/20180124 8 ways to generate random password in Linux.md b/published/201802/20180124 8 ways to generate random password in Linux.md similarity index 100% rename from published/20180124 8 ways to generate random password in Linux.md rename to published/201802/20180124 8 ways to generate random password in Linux.md diff --git a/published/20180125 A step-by-step guide to Git.md b/published/201802/20180125 A step-by-step guide to Git.md similarity index 100% rename from published/20180125 A step-by-step guide to Git.md rename to published/201802/20180125 A step-by-step guide to Git.md diff --git a/published/20180125 Linux whereis Command Explained for Beginners (5 Examples).md b/published/201802/20180125 Linux whereis Command Explained for Beginners (5 Examples).md similarity index 100% rename from published/20180125 Linux whereis Command Explained for Beginners (5 Examples).md rename to published/201802/20180125 Linux whereis Command Explained for Beginners (5 Examples).md diff --git a/published/20180126 Creating an Adventure Game in the Terminal with ncurses.md b/published/201802/20180126 Creating an Adventure Game in the Terminal with ncurses.md similarity index 100% rename from published/20180126 Creating an Adventure Game in the Terminal with ncurses.md rename to published/201802/20180126 Creating an Adventure Game in the Terminal with ncurses.md diff --git a/published/20180130 Linux Kernel 4.15 An Unusual Release Cycle.md b/published/201802/20180130 Linux Kernel 4.15 An Unusual Release Cycle.md similarity index 100% rename from published/20180130 Linux Kernel 4.15 An Unusual Release Cycle.md rename to published/201802/20180130 Linux Kernel 4.15 An Unusual Release Cycle.md 
diff --git a/published/20180131 Fastest way to unzip a zip file in Python.md b/published/201802/20180131 Fastest way to unzip a zip file in Python.md similarity index 100% rename from published/20180131 Fastest way to unzip a zip file in Python.md rename to published/201802/20180131 Fastest way to unzip a zip file in Python.md diff --git a/published/20180131 Why you should use named pipes on Linux.md b/published/201802/20180131 Why you should use named pipes on Linux.md similarity index 100% rename from published/20180131 Why you should use named pipes on Linux.md rename to published/201802/20180131 Why you should use named pipes on Linux.md diff --git a/published/20180201 Custom Embedded Linux Distributions.md b/published/201802/20180201 Custom Embedded Linux Distributions.md similarity index 100% rename from published/20180201 Custom Embedded Linux Distributions.md rename to published/201802/20180201 Custom Embedded Linux Distributions.md diff --git a/published/20180201 How to reload .vimrc file without restarting vim on Linux-Unix.md b/published/201802/20180201 How to reload .vimrc file without restarting vim on Linux-Unix.md similarity index 100% rename from published/20180201 How to reload .vimrc file without restarting vim on Linux-Unix.md rename to published/201802/20180201 How to reload .vimrc file without restarting vim on Linux-Unix.md diff --git a/published/20180202 Tuning MySQL 3 Simple Tweaks.md b/published/201802/20180202 Tuning MySQL 3 Simple Tweaks.md similarity index 100% rename from published/20180202 Tuning MySQL 3 Simple Tweaks.md rename to published/201802/20180202 Tuning MySQL 3 Simple Tweaks.md diff --git a/published/20180202 Which Linux Kernel Version Is Stable.md b/published/201802/20180202 Which Linux Kernel Version Is Stable.md similarity index 100% rename from published/20180202 Which Linux Kernel Version Is Stable.md rename to published/201802/20180202 Which Linux Kernel Version Is Stable.md diff --git a/published/20180203 Open source 
software 20 years and counting.md b/published/201802/20180203 Open source software 20 years and counting.md similarity index 100% rename from published/20180203 Open source software 20 years and counting.md rename to published/201802/20180203 Open source software 20 years and counting.md diff --git a/published/20180206 Save Some Battery On Our Linux Machines With TLP.md b/published/201802/20180206 Save Some Battery On Our Linux Machines With TLP.md similarity index 100% rename from published/20180206 Save Some Battery On Our Linux Machines With TLP.md rename to published/201802/20180206 Save Some Battery On Our Linux Machines With TLP.md diff --git a/published/20180206 Simple TensorFlow Examples.md b/published/201802/20180206 Simple TensorFlow Examples.md similarity index 100% rename from published/20180206 Simple TensorFlow Examples.md rename to published/201802/20180206 Simple TensorFlow Examples.md diff --git a/published/20180206 What Is Kali Linux, and Do You Need It.md b/published/201802/20180206 What Is Kali Linux, and Do You Need It.md similarity index 100% rename from published/20180206 What Is Kali Linux, and Do You Need It.md rename to published/201802/20180206 What Is Kali Linux, and Do You Need It.md diff --git a/published/20180209 How to Install Gogs Go Git Service on Ubuntu 16.04.md b/published/201802/20180209 How to Install Gogs Go Git Service on Ubuntu 16.04.md similarity index 100% rename from published/20180209 How to Install Gogs Go Git Service on Ubuntu 16.04.md rename to published/201802/20180209 How to Install Gogs Go Git Service on Ubuntu 16.04.md diff --git a/published/20180209 Linux rmdir Command for Beginners (with Examples).md b/published/201802/20180209 Linux rmdir Command for Beginners (with Examples).md similarity index 100% rename from published/20180209 Linux rmdir Command for Beginners (with Examples).md rename to published/201802/20180209 Linux rmdir Command for Beginners (with Examples).md diff --git a/published/20180210 How to 
create AWS ec2 key using Ansible.md b/published/201802/20180210 How to create AWS ec2 key using Ansible.md similarity index 100% rename from published/20180210 How to create AWS ec2 key using Ansible.md rename to published/201802/20180210 How to create AWS ec2 key using Ansible.md diff --git a/published/20180214 How to Encrypt Files with Tomb on Ubuntu 16.04 LTS.md b/published/201802/20180214 How to Encrypt Files with Tomb on Ubuntu 16.04 LTS.md similarity index 100% rename from published/20180214 How to Encrypt Files with Tomb on Ubuntu 16.04 LTS.md rename to published/201802/20180214 How to Encrypt Files with Tomb on Ubuntu 16.04 LTS.md diff --git a/translated/tech/20180207 How To Easily Correct Misspelled Bash Commands In Linux.md b/published/20180207 How To Easily Correct Misspelled Bash Commands In Linux.md similarity index 56% rename from translated/tech/20180207 How To Easily Correct Misspelled Bash Commands In Linux.md rename to published/20180207 How To Easily Correct Misspelled Bash Commands In Linux.md index c80d990a60..da22ca4f4c 100644 --- a/translated/tech/20180207 How To Easily Correct Misspelled Bash Commands In Linux.md +++ b/published/20180207 How To Easily Correct Misspelled Bash Commands In Linux.md @@ -8,23 +8,23 @@ ### 在 Linux 中纠正拼写错误的 Bash 命令 你有没有运行过类似于下面的错误输入命令? 
+ ``` $ unme -r bash: unme: command not found - ``` -你注意到了吗?上面的命令中有一个错误。我在 “uname” 命令缺少了字母 “a”。 +你注意到了吗?上面的命令中有一个错误。我在 `uname` 命令中缺少了字母 `a`。 -我在很多时候犯过这种愚蠢的错误。在我知道这个技巧之前,我习惯按下向上箭头来调出命令并转到命令中拼写错误的单词,纠正拼写错误,然后按回车键再次运行该命令。但相信我。下面的技巧非常易于纠正你刚刚运行的命令中的任何拼写错误。 +我在很多时候犯过这种愚蠢的错误。在我知道这个技巧之前,我习惯按下向上箭头来调出命令,并转到命令中拼写错误的单词,纠正拼写错误,然后按回车键再次运行该命令。但相信我,下面的技巧非常易于纠正你刚刚运行的命令中的任何拼写错误。 要轻松更正上述拼写错误的命令,只需运行: + ``` $ ^nm^nam^ - ``` -这会将 “uname” 命令中将 “nm” 替换为 “nam”。很酷,是吗?它不仅纠正错别字,而且还能运行命令。查看下面的截图。 +这会将 `uname` 命令中的 `nm` 替换为 `nam`。很酷,是吗?它不仅纠正错别字,而且还能运行命令。查看下面的截图。 ![][2] @@ -32,49 +32,49 @@ $ ^nm^nam^ **额外提示:** -你有没有想过在使用 “cd” 命令时如何自动纠正拼写错误?没有么?没关系!下面的技巧将解释如何做到这一点。 +你有没有想过在使用 `cd` 命令时如何自动纠正拼写错误?没有么?没关系!下面的技巧将解释如何做到这一点。 -这个技巧只能纠正使用 “cd” 命令时的拼写错误。 +这个技巧只能纠正使用 `cd` 命令时的拼写错误。 + +比如说,你想使用命令切换到 `Downloads` 目录: -比如说,你想使用命令切换到 “Downloads” 目录: ``` $ cd Donloads bash: cd: Donloads: No such file or directory - ``` -哎呀!没有名称为 “Donloads” 的文件或目录。是的,正确的名称是 “Downloads”。上面的命令中缺少 “w”。 +哎呀!没有名称为 `Donloads` 的文件或目录。是的,正确的名称是 `Downloads`。上面的命令中缺少 `w`。 + +要解决此问题并在使用 `cd` 命令时自动更正错误,请编辑你的 `.bashrc` 文件: -要解决此问题并在使用 cd 命令时自动更正错误,请编辑你的 **.bashrc** 文件: ``` $ vi ~/.bashrc - ``` 最后添加以下行。 + ``` [...]
shopt -s cdspell - ``` -输入 **:wq** 保存并退出文件。 +输入 `:wq` 保存并退出文件。 最后,运行以下命令更新更改。 + ``` $ source ~/.bashrc - ``` -现在,如果在使用 cd 命令时路径中存在任何拼写错误,它将自动更正并进入正确的目录。 +现在,如果在使用 `cd` 命令时路径中存在任何拼写错误,它将自动更正并进入正确的目录。 ![][3] -正如你在上面的命令中看到的那样,我故意输错(“Donloads” 而不是 “Downloads”),但 Bash 自动检测到正确的目录名并 cd 进入它。 +正如你在上面的命令中看到的那样,我故意输错(`Donloads` 而不是 `Downloads`),但 Bash 自动检测到正确的目录名并 `cd` 进入它。 -[**Fish**][4] 和**Zsh** shell 内置的此功能。所以,如果你使用的是它们,那么你不需要这个技巧。 +[Fish][4] 和 Zsh shell 内置了此功能。所以,如果你使用的是它们,那么你不需要这个技巧。 -然而,这个技巧有一些局限性。它只适用于使用正确的大小写。在上面的例子中,如果你输入的是 “cd donloads” 而不是 “cd Donloads”,它将无法识别正确的路径。另外,如果路径中缺少多个字母,它也不起作用。 +然而,这个技巧有一些局限性。它只适用于使用正确的大小写。在上面的例子中,如果你输入的是 `cd donloads` 而不是 `cd Donloads`,它将无法识别正确的路径。另外,如果路径中缺少多个字母,它也不起作用。 -------------------------------------------------------------------------------- @@ -83,7 +83,7 @@ via: https://www.ostechnix.com/easily-correct-misspelled-bash-commands-linux/ 作者:[SK][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20180209 Gnome without chrome-gnome-shell.md b/published/20180209 Gnome without chrome-gnome-shell.md similarity index 60% rename from translated/tech/20180209 Gnome without chrome-gnome-shell.md rename to published/20180209 Gnome without chrome-gnome-shell.md index 4dd2e0e4c7..856d840ba7 100644 --- a/translated/tech/20180209 Gnome without chrome-gnome-shell.md +++ b/published/20180209 Gnome without chrome-gnome-shell.md @@ -1,43 +1,42 @@ -没有 chrome-gnome-shell 的 Gnome +去掉了 chrome-gnome-shell 的 Gnome ====== -新的笔记本有触摸屏,它可以折叠成平板电脑,我听说 gnome-shell 将是桌面环境的一个很好的选择,我设法调整它足以按照现有的习惯重用。 +新的笔记本有触摸屏,它可以折叠成平板电脑,我听说 gnome-shell 将是桌面环境的一个很好的选择,我设法调整它以按照现有的习惯使用。 -然而,我有一个很大的问题,它怎么会鼓励人们从互联网上下载随机扩展,并将它们作为整个桌面环境的一部分运行。 一个更大的问题是,[gnome-core][1] 对 [chrome-gnome-shell] [2] 有强制依赖,插件不用 root 用户编辑 `/etc` 下的文件则无法禁用,这会给网站暴露我的桌面环境。 +然而,我发现一个很大的问题,它怎么会鼓励人们从互联网上下载随机扩展,并将它们作为整个桌面环境的一部分运行呢?
一个更大的问题是,[gnome-core][1] 对 [chrome-gnome-shell][2] 有强制依赖,这个插件如果不用 root 用户编辑 `/etc` 下的文件则无法禁用,这会将我的桌面环境暴露给网站。 访问[这个网站][3],它会知道你已经安装了哪些扩展,并且能够安装更多。我不信任它,我不需要那样,我不想那样。我为此感到震惊。 -[我想出了一个临时解决方法][4]。 +[我想出了一个临时解决方法][4]。(LCTT 译注:作者做了一个空的依赖包来满足依赖,而不会做任何可能危害你的隐私和安全的操作。) 人们会在 firefox 中如何做呢? ### 描述 -chrome-gnome-shell 是 gnome-core 的一个强制依赖项,它安装了一个可能不需要的浏览器插件,并强制它使用系统范围的 chrome 策略。 +chrome-gnome-shell 是 gnome-core 的一个强制依赖项,它安装了一个你可能不需要的浏览器插件,并强制它使用系统级的 chrome 策略。 我认为使用 chrome-gnome-shell 会不必要地增加系统的攻击面,它还以我这个主要用户的身份,获得了下载和执行随机的未经审查代码的可疑特权。 -这个包满足了 chrome-gnome-shell 的依赖,但不会安装任何东西。 +(我做的)这个包满足了 chrome-gnome-shell 的依赖,但不会安装任何东西。 -请注意,在安装此包之后,如果先前安装了 chrome-gnome-shell,则需要清除 chrome-gnome-shell,以使其在 /etc/chromium 中删除 chromium 策略文件 +请注意,在安装此包之后,如果先前安装了 chrome-gnome-shell,则需要清除 chrome-gnome-shell,以使其在 `/etc/chromium` 中删除 chromium 策略文件。 ### 说明 + ``` apt install equivs equivs-build contain-gnome-shell sudo dpkg -i contain-gnome-shell_1.0_all.deb sudo dpkg --purge chrome-gnome-shell - ``` - -------------------------------------------------------------------------------- via: http://www.enricozini.org/blog/2018/debian/gnome-without-chrome-gnome-shell/ 作者:[Enrico Zini][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/sources/talk/20170210 Evolutional Steps of Computer Systems.md b/sources/talk/20170210 Evolutional Steps of Computer Systems.md new file mode 100644 index 0000000000..16c3f0480a --- /dev/null +++ b/sources/talk/20170210 Evolutional Steps of Computer Systems.md @@ -0,0 +1,112 @@ +Evolutional Steps of Computer Systems +====== +Throughout the history of the modern computer, there were several evolutional steps related to the way we interact with the system. I tend to categorize those steps as follows: + + 1. Numeric Systems + 2. Application-Specific Systems + 3. Application-Centric Systems + 4.
Information-Centric Systems + 5. Application-Less Systems + + + +The following sections describe how I see those categories. + +### Numeric Systems + +[Early computers][1] were designed with numbers in mind. They could add, subtract, multiply, divide. Some of them were able to perform more complex mathematical operations such as differentiation or integration. + +If you map characters to numbers, they were able to «compute» [strings][2] as well, but this is somewhat a «creative use of numbers» instead of meaningfully processing arbitrary information. + +### Application-Specific Systems + +For higher-level problems, pure numeric systems are not sufficient. Application-specific systems were developed to do one single task. They were very similar to numeric systems. However, with sufficiently complex number calculations, systems were able to accomplish very well-defined higher level tasks such as calculations related to scheduling problems or other optimization problems. + +Systems of this category were built for one single purpose, one distinct problem they solved. + +### Application-Centric Systems + +Systems that are application-centric are the first real general purpose systems. Their main usage style is still mostly application-specific but with multiple applications working either time-sliced (one app after another) or in multi-tasking mode (multiple apps at the same time). + +Early personal computers [from the 70s][3] of the previous century were the first application-centric systems that became popular for a wide group of people. + +Yet modern operating systems - Windows, macOS, most GNU/Linux desktop environments - still follow the same principles. + +Of course, there are sub-categories as well: + + 1. Strict Application-Centric Systems + 2. Loose Application-Centric Systems + + + +Strict application-centric systems such as [Windows 3.1][4] (Program Manager and File Manager) or even the initial version of [Windows 95][5] had no pre-defined folder hierarchy.
The user started text processing software like [WinWord][6] and saved the files in the program folder of WinWord. When working with a spreadsheet program, its files were saved in the application folder of the spreadsheet tool. And so on. Users did not create their own hierarchy of folders mostly because of convenience, laziness, or because they did not see any necessity. The number of files per user was still within dozens, up to a few hundred. + +For accessing information, the user typically opened an application and within the application, the files containing the generated data were retrieved using file/open. + +It was [Windows 95][5] SP2 that introduced «[My Documents][7]» for the Windows platform. With this file hierarchy template, application designers began switching to «My Documents» as a default file save/open location instead of using the software product installation path. This made the users embrace this pattern and start to maintain folder hierarchies on their own. + +This resulted in loose application-centric systems: typical file retrieval is done via a file manager. When a file is opened, the associated application is started by the operating system. It is a small or subtle but very important usage shift. Application-centric systems are still the dominant usage pattern for personal computers. + +Nevertheless, this pattern comes with many disadvantages. For example, in order to prevent data retrieval problems, there is the need to maintain a strict hierarchy of folders that contain all related files of a given project. Unfortunately, nature does not fit well into a strict hierarchy of folders. Furthermore, [this does not scale well][8]. Desktop search engines and advanced data organizing tools like [tagstore][9] are able to smooth the edges a bit. As studies show, only a minority of users are using such advanced retrieval tools. Most users still navigate through the file system without using any alternative or supplemental retrieval techniques.
+ +### Information-Centric Systems + +One possible way of dealing with the issue that a certain topic needs to have a folder that holds all related files is to switch from an application-centric system to an information-centric system. + +Instead of opening a spreadsheet application to work with the project budget, opening a word processor application to write the project report, and opening another tool to work with image files, an information-centric system combines all the information on the project in one place, in one application. + +The calculations for the previous month are right beneath notes from a client meeting which is right beneath a photograph of the whiteboard notes which is right beneath some todo tasks. Without any application or file border in between. + +Early attempts to create such an environment were IBM [OS/2][10], Microsoft [OLE][11] or [NeXT][12]. None of them were a major success for a variety of reasons. A very interesting information-centric environment is [Acme][13] from [Plan 9][14]. It combines [a wide variety of applications][15] within one application but it never reached a notable distribution even with its ports to Windows or GNU/Linux. + +Modern approaches for an information-centric system are advanced [personal wikis][16] like [TheBrain][17] or [Microsoft OneNote][18]. + +My personal tool of choice is the [GNU/Emacs][19] platform with its [Org-mode][19] extension. I hardly leave Org-mode when I work with my computer. For accessing external data sources, I created [Memacs][20] which brings a broad variety of data into Org-mode. I love to do spreadsheet calculations right beneath scheduled tasks, in-line images, internal and external links, and so forth. It is truly an information-centric system where the user doesn't have to deal with application borders or strictly hierarchical file-system folders. Multi-classification is possible using simple or advanced tagging. All kinds of views can be derived with a single command.
One of those views is my calendar, the agenda. Another derived view is the list of borrowed things. And so on. There are no limits for Org-mode users. If you can think of it, it is most likely possible within Org-mode. + +Is this the end of the evolution? Certainly not. + +### Application-Less Systems + +I can think of a class of systems which I refer to as application-less systems. As the next logical step, there is no need to have single-domain applications even when they are as capable as Org-mode. The computer offers a nice-to-use interface to information and features, not files and applications. Even a classical operating system is not accessible. + +Application-less systems might as well be combined with [artificial intelligence][21]. Think of it as some kind of [HAL 9000][22] from [A Space Odyssey][23]. Or [LCARS][24] from Star Trek. + +It is hard to imagine the transition between our application-based, vendor-based software culture and application-less systems. Maybe the open source movement with its slow but constant development will be able to form a truly application-less environment to which all kinds of organizations and people contribute. + +Information and features to retrieve and manipulate information, this is all it takes. This is all we need. Everything else is just limiting distraction.
+ +-------------------------------------------------------------------------------- + +via: http://karl-voit.at/2017/02/10/evolution-of-systems/ + +作者:[Karl Voit][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://karl-voit.at +[1]:https://en.wikipedia.org/wiki/History_of_computing_hardware +[2]:https://en.wikipedia.org/wiki/String_%28computer_science%29 +[3]:https://en.wikipedia.org/wiki/Xerox_Alto +[4]:https://en.wikipedia.org/wiki/Windows_3.1x +[5]:https://en.wikipedia.org/wiki/Windows_95 +[6]:https://en.wikipedia.org/wiki/Microsoft_Word +[7]:https://en.wikipedia.org/wiki/My_Documents +[8]:http://karl-voit.at/tagstore/downloads/Voit2012b.pdf +[9]:http://karl-voit.at/tagstore/ +[10]:https://en.wikipedia.org/wiki/OS/2 +[11]:https://en.wikipedia.org/wiki/Object_Linking_and_Embedding +[12]:https://en.wikipedia.org/wiki/NeXT +[13]:https://en.wikipedia.org/wiki/Acme_%28text_editor%29 +[14]:https://en.wikipedia.org/wiki/Plan_9_from_Bell_Labs +[15]:https://en.wikipedia.org/wiki/List_of_Plan_9_applications +[16]:https://en.wikipedia.org/wiki/Personal_wiki +[17]:https://en.wikipedia.org/wiki/TheBrain +[18]:https://en.wikipedia.org/wiki/Microsoft_OneNote +[19]:../../../../tags/emacs +[20]:https://github.com/novoid/Memacs +[21]:https://en.wikipedia.org/wiki/Artificial_intelligence +[22]:https://en.wikipedia.org/wiki/HAL_9000 +[23]:https://en.wikipedia.org/wiki/2001:_A_Space_Odyssey +[24]:https://en.wikipedia.org/wiki/LCARS diff --git a/sources/talk/20180131 An old DOS BBS in a Docker container.md b/sources/talk/20180131 An old DOS BBS in a Docker container.md index 769b5db1bd..3ef9a79b3e 100644 --- a/sources/talk/20180131 An old DOS BBS in a Docker container.md +++ b/sources/talk/20180131 An old DOS BBS in a Docker container.md @@ -1,3 +1,5 @@ +translating---geekpi + An old DOS BBS in a Docker container ====== Awhile back, I wrote
about [my Debian Docker base images][1]. I decided to extend this concept a bit further: to running DOS applications in Docker. diff --git a/sources/talk/20180214 11 awesome vi tips and tricks.md b/sources/talk/20180214 11 awesome vi tips and tricks.md new file mode 100644 index 0000000000..8e75cf0503 --- /dev/null +++ b/sources/talk/20180214 11 awesome vi tips and tricks.md @@ -0,0 +1,98 @@ +11 awesome vi tips and tricks +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/keyboaord_enter_writing_documentation.jpg?itok=kKrnXc5h) + +The [vi editor][1] is one of the most popular text editors on Unix and Unix-like systems, such as Linux. Whether you're new to vi or just looking for a refresher, these 11 tips will enhance how you use it. + +### Editing + +Editing a long script can be tedious, especially when you need to edit a line so far down that it would take hours to scroll to it. Here's a faster way. + + 1. The command `:set number` numbers each line down the left side. + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/setnum.png?itok=sFVA97mG) + +You can directly reach line number 26 by opening the file and entering this command on the CLI: `vi +26 sample.txt`. To edit line 26 (for example), the command `:26` will take you directly to it. + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/number.png?itok=d7FE0LL3) + +### Fast navigation + + 2. `i` changes your mode from "command" to "insert" and starts inserting text at the current cursor position. + 3. `a` does the same, except it starts just after the current cursor position. + 4. `o` opens a new line below the current cursor position and starts inserting text there. + + + +### Delete + +If you notice an error or typo, being able to make a quick fix is important. Good thing vi has it all figured out.
+ +It's critical to understand vi's delete functions, so you don't accidentally press a key and permanently remove a line, a paragraph, or more. + + 5. `x` deletes the character under the cursor. + 6. `dd` deletes the current line. (Yes, the whole line!) + + + +Here's the scary part: `30dd` would delete 30 lines starting with the current line! Proceed with caution when using this command. + +### Search + +You can search for keywords from the "command" mode rather than manually navigating and looking for a specific word in a plethora of text. + + 7. `/<word>` searches for the word typed after the slash and takes your cursor to the first match. + 8. To navigate to the next instance of that word, type `n`, and keep pressing it until you get to the match you're looking for. + + + +For example, in the image below I searched for `ssh`, and vi highlighted the beginning of the first result. + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/ssh-search.png?itok=tJ-7FujH) + +After I pressed `n`, vi highlighted the next instance. + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/n-search.png?itok=wU-u3LiI) + +### Save and exit + +Developers (and others) will probably find this next command useful. + + 9. `:x` saves your work and exits vi. + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/x.png?itok=kfoHx84m) + + 10. If you think every nanosecond is worth saving, here's a faster way to reach vi's command line. Instead of pressing `Shift+:` on the keyboard, you can press `Shift+q` (or Q, in caps) to access [Ex mode][2], but this doesn't really make any difference if you just want to save and quit by typing `x` (as shown above). + + + +### Substitution + +Here is a neat trick if you want to substitute every occurrence of one word with another.
For example, if you want to substitute "desktop" with "laptop" in a large file, it would be monotonous and waste time to search for each occurrence of "desktop," delete it, and type "laptop." + + 11. The command `:%s/desktop/laptop/g` would replace each occurrence of "desktop" with "laptop" throughout the file; it works just like the Linux `sed` command. + + + +In this example, I replaced "root" with "user": + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/subs-command.png?itok=M8MN72sp) + +![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/subs-result.png?itok=34zzVdUt) + +These tricks should help anyone get started using vi. Are there other neat tips I missed? Share them in the comments. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/1/top-11-vi-tips-and-tricks + +作者:[Archit Modi][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/architmodi +[1]:http://ex-vi.sourceforge.net/ +[2]:https://en.wikibooks.org/wiki/Learning_the_vi_Editor/Vim/Modes#Ex-mode diff --git a/sources/talk/20180219 How Linux became my job.md b/sources/talk/20180219 How Linux became my job.md index 83bf420691..40bd9bb8b9 100644 --- a/sources/talk/20180219 How Linux became my job.md +++ b/sources/talk/20180219 How Linux became my job.md @@ -1,4 +1,4 @@ -How Linux became my job +How Linux became my job translation by ranchong ====== ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22) diff --git a/sources/talk/20180221 3 warning flags of DevOps metrics.md b/sources/talk/20180221 3 warning flags of DevOps metrics.md new file mode 100644 index 0000000000..a103a2bbca --- /dev/null +++ 
b/sources/talk/20180221 3 warning flags of DevOps metrics.md @@ -0,0 +1,42 @@ +3 warning flags of DevOps metrics +====== +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_graph_stats_blue.png?itok=OKCc_60D) + +Metrics. Measurements. Data. Monitoring. Alerting. These are all big topics for DevOps and for cloud-native infrastructure and application development more broadly. In fact, ACM Queue, a magazine published by the Association for Computing Machinery, recently devoted an [entire issue][1] to the topic. + +I've argued before that we conflate a lot of things under the "metrics" term, from key performance indicators to critical failure alerts to data that may be vaguely useful someday for something or other. But that's a topic for another day. What I want to discuss here is how metrics affect behavior. + +In 2008, Daniel Ariely published [Predictably Irrational][2], one of a number of books written around that time that introduced behavioral psychology and behavioral economics to the general public. One memorable quote from that book is the following: "Human beings adjust behavior based on the metrics they're held against. Anything you measure will impel a person to optimize his score on that metric. What you measure is what you'll get. Period." + +This shouldn't be surprising. It's a finding that's been repeatedly confirmed by research. It should also be familiar to just about anyone with business experience. It's certainly not news to anyone in sales management, for example. Base sales reps' (or their managers'!) bonuses solely on revenue, and they'll discount whatever it takes to maximize revenue even if it puts margin in the toilet. Conversely, want the sales force to push a new product line—which will probably take extra effort—but skip the [spiffs][3]? Probably not happening.
+ +And lest you think I'm unfairly picking on sales, this behavior is pervasive, all the way up to the CEO, as Ariely describes in [a 2010 Harvard Business Review article][4]. "CEOs care about stock value because that's how we measure them. If we want to change what they care about, we should change what we measure," writes Ariely. + +Think developers and operations folks are immune from such behaviors? Think again. Let's consider some problematic measurements. They're not all bad or wrong but, if you rely too much on them, warning flags should go up. + +### Three warning signs for DevOps metrics + +First, there are the quantity metrics. Lines of code or bugs fixed are perhaps self-evidently absurd. But there are also the deployments per week or per month that are so widely quoted to illustrate DevOps velocity relative to more traditional development and deployment practices. Speed is good. It's one of the reasons you're probably doing DevOps—but don't reward people on it excessively relative to quality and other measures. + +Second, it's obvious that you want to reward individuals who do their work quickly and well. Yes. But. Whether it's your local pro sports team or some project team you've been on, you can probably name someone who was really a talent, but was just so toxic and such a distraction for everyone else that they were a net negative for the team. Moral: Don't provide incentives that solely encourage individual behaviors. You may also want to put in place programs, such as peer rewards, that explicitly value collaboration. 
[As Red Hat's Jen Krieger told me][5] in a podcast last year: "Having those automated pots of awards, or some sort of system that's tracked for that, can only help teams feel a little more cooperative with one another as in, 'Hey, we're all working together to get something done.'" + +The third red flag area is incentives that don't actually incent because neither the individual nor the team has a meaningful ability to influence the outcome. It's often a good thing when DevOps metrics connect to business goals and outcomes. For example, customer ticket volume relates to perceived shortcomings in applications and infrastructure. And it's also a reasonable proxy for overall customer satisfaction, which certainly should be of interest to the executive suite. The best reward systems to drive DevOps behaviors should be tied to specific individual and team actions as opposed to just company success generally. + +You've probably noticed a common theme. That theme is balance. Velocity is good but so is quality. Individual achievement is good but not when it damages the effectiveness of the team. The overall success of the business is certainly important, but the best reward systems also tie back to actions and behaviors within development and operations. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/2/three-warning-flags-devops-metrics + +作者:[Gordon Haff][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/ghaff +[1]:https://queue.acm.org/issuedetail.cfm?issue=3178368 +[2]:https://en.wikipedia.org/wiki/Predictably_Irrational +[3]:https://en.wikipedia.org/wiki/Spiff +[4]:https://hbr.org/2010/06/column-you-are-what-you-measure +[5]:http://bitmason.blogspot.com/2015/09/podcast-making-devops-succeed-with-red.html diff --git a/sources/talk/20180223 Why culture is the most important issue in a DevOps transformation.md b/sources/talk/20180223 Why culture is the most important issue in a DevOps transformation.md new file mode 100644 index 0000000000..8fe1b6f273 --- /dev/null +++ b/sources/talk/20180223 Why culture is the most important issue in a DevOps transformation.md @@ -0,0 +1,91 @@ +Why culture is the most important issue in a DevOps transformation +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_community2.png?itok=1blC7-NY) + +You've been appointed the DevOps champion in your organisation: congratulations. So, what's the most important issue that you need to address? + +It's the technology—tools and the toolchain—right? Everybody knows that unless you get the right tools for the job, you're never going to make things work. You need integration with your existing stack (though whether you go with tight or loose integration will be an interesting question), a support plan (vendor, third party, or internal), and a bug-tracking system to go with your source code management system. And that's just the start. + +No! Don't be ridiculous: It's clearly the process that's most important. 
If the team doesn't agree on how stand-ups are run, who participates, the frequency and length of the meetings, and how many people are required for a quorum, then you'll never be able to institute a consistent, repeatable working pattern. + +In fact, although both the technology and the process are important, there's a third component that is equally important, but typically even harder to get right: culture. Yup, it's that touchy-feely thing we techies tend to struggle with.1 + +### Culture + +I was visiting a midsized government institution a few months ago (not in the UK, as it happens), and we arrived a little early to meet the CEO and CTO. We were ushered into the CEO's office and waited for a while as the two of them finished participating in the daily stand-up. They apologised for being a minute or two late, but far from being offended, I was impressed. Here was an organisation where the culture of participation was clearly infused all the way up to the top. + +Not that culture can be imposed from the top—nor can you rely on it percolating up from the bottom3—but these two C-level execs were not only modelling the behaviour they expected from the rest of their team, but also seemed, from the brief discussion we had about the process afterwards, to be truly invested in it. If you can get management to buy into the process—and be seen buying in—you are at least less likely to have problems with other groups finding plausible excuses to keep their distance and get away with it. + +So let's assume management believes you should give DevOps a go. Where do you start? + +Developers may well be your easiest target group. They are often keen to try new things and find ways to move things along faster, so they are often the group that can be expected to adopt new technologies and methodologies. DevOps arguably has been driven mainly by the development community. + +But you shouldn't assume all developers will be keen to embrace this change.
For some, the way things have always been done—your Rick Parfitts of dev, if you will7—is fine. Finding ways to help them work efficiently in the new world is part of your job, not just theirs. If you have superstar developers who aren't happy with change, you risk alienating and losing them if you try to force them into your brave new world. What's worse, if they dig their heels in, you risk the adoption of your DevSecOps vision being compromised when they explain to their managers that things aren't going to change if it makes their lives more difficult and reduces their productivity. + +Maybe you're not going to be able to move all the systems and people to DevOps immediately. Maybe you're going to need to choose which apps to start with and who will be your first DevOps champions. Maybe it's time to move slowly. + +### Not maybe: definitely + +No—I lied. You're definitely going to need to move slowly. Trying to change everything at once is a recipe for disaster. + +This goes for all elements of the change—which people to choose, which technologies to choose, which applications to choose, which user base to choose, which use cases to choose—bar one. For those elements, if you try to move everything in one go, you will fail. You'll fail for a number of reasons. You'll fail for reasons I can't imagine and, more importantly, for reasons you can't imagine. But some of the reasons will include: + + * People—most people—don't like change. + * Technologies don't like change (you can't just switch and expect everything to still work). + * Applications don't like change (things worked before, or at least failed in known ways). You want to change everything in one go? Well, they'll all fail in new and exciting9 ways. + * Users don't like change. + * Use cases don't like change. + + + +### The one exception + +You noticed I wrote "bar one" when discussing which elements you shouldn't choose to change all in one go? Well done. + +What's that exception? It's the initial team.
When you choose your initial application to change and you're thinking about choosing the team to make that change, select the members carefully and select a complete set. This is important. If you choose just developers, just test folks, just security folks, just ops folks, or just management—if you leave out one functional group from your list—you won't have proved anything at all. Well, you might have proved to a small section of your community that it kind of works, but you'll have missed out on a trick. And that trick is: If you choose keen people from across your functional groups, it's much harder to fail. + +Say your first attempt goes brilliantly. How are you going to convince other people to replicate your success and adopt DevOps? Well, the company newsletter, of course. And that will convince how many people, exactly? Yes, that number.12 If, on the other hand, you have team members from across the functional parts of the organisation, when you succeed, they'll tell their colleagues and you'll get more buy-in next time. + +If it fails, if you've chosen your team wisely—if they're all enthusiastic and know that "fail often, fail fast" is good—they'll be ready to go again. + +Therefore, you need to choose enthusiasts from across your functional groups. They can work on the technologies and the process, and once that's working, it's the people who will create that cultural change. You can just sit back and enjoy. Until the next crisis, of course. + +1\. OK, you're right. It should be "with which we techies tend to struggle."2 + +2\. You thought I was going to qualify that bit about techies struggling with touchy-feely stuff, didn't you? Read it again: I put "tend to." That's the best you're getting. + +3\. Is percolating a bottom-up process? I don't drink coffee,4 so I wouldn't know. + +4\. Do people even use percolators to make coffee anymore? Feel free to let me know in the comments. I may pretend interest if you're lucky. + +5\. For U.S.
readers (and some other countries, maybe?), please substitute "check" for "tick" here.6 + +6\. For U.S. techie readers, feel free to perform `s/tick/check/;`. + +7\. This is a Status Quo8 reference for which I'm extremely sorry. + +8\. For millennial readers, please consult your favourite online reference engine or just roll your eyes and move on. + +9\. For people who say, "but I love excitement," try being on call at 2 a.m. on a Sunday at the end of the quarter when your chief financial officer calls you up to ask why all of last month's sales figures have been corrupted with the letters "DEADBEEF."10 + +10\. For people not in the know, this is a string often used by techies as test data because a) it's non-numerical; b) it's numerical (in hexadecimal); c) it's easy to search for in debug files; and d) it's funny.11 + +11\. Though see.9 + +12\. It's a low number, is all I'm saying. + +This article originally appeared on [Alice, Eve, and Bob – a security blog][1] and is republished with permission. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/2/most-important-issue-devops-transformation + +作者:[Mike Bursell][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/mikecamel +[1]:https://aliceevebob.com/2018/02/06/moving-to-devops-whats-most-important/ diff --git a/sources/talk/20180226 5 keys to building open hardware.md b/sources/talk/20180226 5 keys to building open hardware.md new file mode 100644 index 0000000000..7819149996 --- /dev/null +++ b/sources/talk/20180226 5 keys to building open hardware.md @@ -0,0 +1,85 @@ +5 keys to building open hardware +====== +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openhardwaretools.png?itok=DC1RC_1f) + +The science community is increasingly embracing free and open source hardware ([FOSH][1]). Researchers have been busy [hacking their own equipment][2] and creating hundreds of devices based on the distributed digital manufacturing model to advance their scientific experiments. + +A major reason for all this interest in distributed digital manufacturing of scientific FOSH is money: Research indicates that FOSH [slashes costs by 90% to 99%][3] compared to proprietary tools. Commercializing scientific FOSH with [open hardware business models][4] has supported the rapid growth of an engineering subfield to develop FOSH for science, which comes together annually at the [Gathering for Open Science Hardware][5]. + +Remarkably, not one, but [two new academic journals][6] are devoted to the topic: the [Journal of Open Hardware][7] (from Ubiquity Press, a new open access publisher that also publishes the [Journal of Open Research Software][8] ) and [HardwareX][9] (an [open access journal][10] from Elsevier, one of the world's largest academic publishers). 
+ +Because of the academic community's support, scientific FOSH developers can get academic credit while having fun designing open hardware and pushing science forward faster. + +### 5 steps for scientific FOSH + +Shane Oberloier and I co-authored a new [article][11] published in Designs, an open access engineering design journal, about the principles of designing FOSH scientific equipment. We used the example of a slide dryer, fabricated for under $20, which costs up to 300 times less than proprietary equivalents. [Scientific][1] and [medical][12] equipment tends to be complex with huge payoffs for developing FOSH alternatives. + +I've summarized the five steps (including six design principles) that Shane and I detail in our Designs article. These design principles can be generalized to non-scientific devices, although the more complex the design or equipment, the larger the potential savings. + +If you are interested in designing open hardware for scientific projects, these steps will maximize your project's impact. + + 1. Evaluate similar existing tools for their functions but base your FOSH design on replicating their physical effects, not pre-existing designs. If necessary, evaluate a proof of concept. + + + 2. Use the following design principles: + + + * Use only free and open source software toolchains (e.g., open source CAD packages such as [OpenSCAD][13], [FreeCAD][14], or [Blender][15]) and open hardware for device fabrication. + * Attempt to minimize the number and type of parts and the complexity of the tools. + * Minimize the amount of material and the cost of production. + * Maximize the use of components that can be distributed or digitally manufactured by using widespread and accessible tools such as the open source [RepRap 3D printer][16]. + * Create [parametric designs][17] with predesigned components, which enable others to customize your design. 
By making parametric designs rather than solving a specific case, all future cases can also be solved while enabling future users to alter the core variables to make the device useful for them. + * All components that are not easily and economically fabricated with existing open hardware equipment in a distributed fashion should be chosen from off-the-shelf parts that are readily available throughout the world. + + + 3. Validate the design for the targeted function(s). + + + 4. Meticulously document the design, manufacture, assembly, calibration, and operation of the device. This should include the raw source of the design, not just the files used for production. The Open Source Hardware Association has extensive [guidelines][18] for properly documenting and releasing open source designs, which can be summarized as follows: + + + * Share design files in a universal type. + * Include a fully detailed bill of materials, including prices and sourcing information. + * If software is involved, make sure the code is clear and understandable to the general public. + * Include many photos so that nothing is obscured, and they can be used as a reference while manufacturing. + * In the methods section, the entire manufacturing process must be detailed to act as instructions for users to replicate the design. + * Share online and specify a license. This gives users information on what constitutes fair use of the design. + + + 5. Share aggressively! For FOSH to proliferate, designs must be shared widely, frequently, and noticeably to raise awareness of their existence. All documentation should be published in the open access literature and shared with appropriate communities. One nice universal repository to consider is the [Open Science Framework][19], hosted by the Center for Open Science, which is set up to take any type of file and handle large datasets. 
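The parametric-design principle in step 2 can be made concrete with a short sketch. The following is a hypothetical illustration, not code from the article: a small Python helper that emits OpenSCAD source for an open-top box, with every dimension exposed as a top-level variable in the generated file (the box geometry and all names are invented for this example).

```python
def parametric_box_scad(length=40, width=30, height=15, wall=2):
    """Return OpenSCAD source for a simple open-top box.

    Every dimension is written into the generated file as a top-level
    variable, so anyone can edit the variables and re-render without
    touching the geometry itself -- the core idea of a parametric design.
    """
    return (
        f"// Parametric open-top box (illustrative example)\n"
        f"length = {length};\n"
        f"width  = {width};\n"
        f"height = {height};\n"
        f"wall   = {wall};\n"
        "\n"
        "difference() {\n"
        "    cube([length, width, height]);\n"
        "    translate([wall, wall, wall])\n"
        "        cube([length - 2 * wall, width - 2 * wall, height]);\n"
        "}\n"
    )

# Generating a 60 mm long variant only means changing one argument.
scad_source = parametric_box_scad(length=60)
print(scad_source)
```

Because the design solves the general case rather than a specific one, a future user who needs a different size only edits the variables at the top of the generated `.scad` file and re-renders, instead of redrawing the geometry.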
+ + + +This article was supported by [Fulbright Finland][20], which is sponsoring Joshua Pearce's research in open source scientific hardware in Finland as the Fulbright-Aalto University Distinguished Chair. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/2/5-steps-creating-successful-open-hardware + +作者:[Joshua Pearce][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/jmpearce +[1]:https://opensource.com/business/16/4/how-calculate-open-source-hardware-return-investment +[2]:https://opensource.com/node/16840 +[3]:http://www.appropedia.org/Open-source_Lab +[4]:https://www.academia.edu/32004903/Emerging_Business_Models_for_Open_Source_Hardware +[5]:http://openhardware.science/ +[6]:https://opensource.com/life/16/7/hardwarex-open-access-journal +[7]:https://openhardware.metajnl.com/ +[8]:https://openresearchsoftware.metajnl.com/ +[9]:https://www.journals.elsevier.com/hardwarex +[10]:https://opensource.com/node/30041 +[11]:https://www.academia.edu/35603319/General_Design_Procedure_for_Free_and_Open-Source_Hardware_for_Scientific_Equipment +[12]:https://www.academia.edu/35382852/Maximizing_Returns_for_Public_Funding_of_Medical_Research_with_Open_source_Hardware +[13]:http://www.openscad.org/ +[14]:https://www.freecadweb.org/ +[15]:https://www.blender.org/ +[16]:http://reprap.org/ +[17]:https://en.wikipedia.org/wiki/Parametric_design +[18]:https://www.oshwa.org/sharing-best-practices/ +[19]:https://osf.io/ +[20]:http://www.fulbright.fi/en diff --git a/sources/talk/20180227 Emacs -1- Ditching a bunch of stuff and moving to Emacs and org-mode.md b/sources/talk/20180227 Emacs -1- Ditching a bunch of stuff and moving to Emacs and org-mode.md new file mode 100644 index 0000000000..501ba538b6 --- /dev/null +++ b/sources/talk/20180227 Emacs -1- 
Ditching a bunch of stuff and moving to Emacs and org-mode.md @@ -0,0 +1,73 @@ +Emacs #1: Ditching a bunch of stuff and moving to Emacs and org-mode +====== +I’ll admit it. After over a decade of vim, I’m hooked on [Emacs][1]. + +I’ve long had this frustration over how to organize things. I’ve followed approaches like [GTD][2] and [ZTD][3], but things like email or large files are really hard to organize. + +I had been using Asana for tasks, Evernote for notes, Thunderbird for email, a combination of ikiwiki and some other items for a personal knowledge base, and various files in an archive directory on my PC. When my new job added Slack to the mix, that was finally the last straw. + +A lot of todo-management tools integrate with email — poorly. When you want to do something like “remind me to reply to this in a week”, a lot of times that’s impossible because the tool doesn’t store the email in a fashion you can easily reply to. And that problem is even worse with Slack. + +It was right around then that I stumbled onto [Carsten Dominik’s Google Talk on org-mode][4]. Carsten was the author of org-mode, and although the talk is 10 years old, it is still highly relevant. + +I’d stumbled across [org-mode][5] before, but each time I didn’t really dig in because I had the reaction of “an outliner? But I need a todo list.” Turns out I was missing out. org-mode is all that. + +### Just what IS Emacs? And org-mode? + +Emacs grew up as a text editor. It still is, and that heritage is definitely present throughout. But to say Emacs is an editor would be rather unfair. + +Emacs is something more like a platform or a toolkit. Not only do you have source code to it, but the very configuration is a program, and there are hooks all over the place. It’s as if it was super easy to write a Firefox plugin. A couple lines, and boom, behavior changed. + +org-mode is very similar. Yes, it’s an outliner, but that’s not really what it is. It’s an information organization platform. 
Its website says “Your life in plain text: Org mode is for keeping notes, maintaining TODO lists, planning projects, and authoring documents with a fast and effective plain-text system.” + +### Capturing + +If you’ve ever read productivity guides based on GTD, one of the things they stress is effortless capture of items. The idea is that when something pops into your head, get it down into a trusted system quickly so you can get on with what you were doing. org-mode has a capture system for just this. I can press `C-c c` from anywhere in Emacs, and up pops a spot to type my note. But, critically, automatically embedded in that note is a link back to what I was doing when I pressed `C-c c`. If I was editing a file, it’ll have a link back to that file and the line I was on. If I was viewing an email, it’ll link back to that email (by Message-Id, no less, so it finds it in any folder). Same for participating in a chat, or even viewing another org-mode entry. + +So I can make a note that will remind me in a week to reply to a certain email, and when I click the link in that note, it’ll bring up the email in my mail reader — even if I subsequently archived it out of my inbox. + +YES, this is what I was looking for! + +### The tool suite + +Once you’re using org-mode, pretty soon you want to integrate everything with it. There are browser plugins for capturing things from the web. Multiple Emacs mail or news readers integrate with it. ERC (IRC client) does as well. So I found myself switching from Thunderbird and mairix+mutt (for the mail archives) to mu4e, and from xchat+slack to ERC. + +And wouldn’t you know it, I liked each of those Emacs-based tools **better** than the standalone they replaced. + +A small side tidbit: I’m using OfflineIMAP again! I even used it with GNUS way back when. + +### One Emacs process to rule them + +I used to use Emacs extensively, way back. Back then, Emacs was a “large” program. 
(Now my battery status applet literally uses more RAM than Emacs). There was this problem of startup time back then, so there was a way to connect to a running Emacs process. + +I like to spawn programs with Mod-p (an xmonad shortcut to a dzen menubar, but Alt-F2 in more traditional DEs would do the trick). It’s convenient to not run several emacsen with this setup, so you don’t run into issues with trying to capture to a file that’s open in another one. The solution is very simple: I created a script, named it `em`, and put it on my path. All it does is this: + +``` +#!/bin/bash +exec emacsclient -c -a "" "$@" +``` + +It creates a new emacs process if one doesn’t already exist; otherwise, it uses what you’ve got. A bonus here: parameters such as `-nw` work just fine, so it really acts just as if you’d typed `emacs` at the shell prompt. It’s a suitable setting for `EDITOR`. + +### Up next… + +I’ll be talking about my use of, and showing off configurations for: + + * org-mode, including syncing between computers, capturing, agenda and todos, files, linking, keywords and tags, various exporting (slideshows), etc.
+ * mu4e for email, including multiple accounts, bbdb integration + * ERC for IRC and IM + + +-------------------------------------------------------------------------------- + +via: http://changelog.complete.org/archives/9861-emacs-1-ditching-a-bunch-of-stuff-and-moving-to-emacs-and-org-mode + +作者:[John Goerzen][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://changelog.complete.org/archives/author/jgoerzen +[1]:https://www.gnu.org/software/emacs/ +[2]:https://gettingthingsdone.com/ +[3]:https://zenhabits.net/zen-to-done-the-simple-productivity-e-book/ +[4]:https://www.youtube.com/watch?v=oJTwQvgfgMM +[5]:https://orgmode.org/ diff --git a/sources/talk/20180227 Top 10 open source legal stories that shook 2017.md b/sources/talk/20180227 Top 10 open source legal stories that shook 2017.md new file mode 100644 index 0000000000..53052c7526 --- /dev/null +++ b/sources/talk/20180227 Top 10 open source legal stories that shook 2017.md @@ -0,0 +1,192 @@ +LightonXue翻译中 + +Top 10 open source legal stories that shook 2017 +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/law_legal_gavel_court.jpg?itok=tc27pzjI) + +Like every year, legal issues were a hot topic in the open source world in 2017. While we're deep into the first quarter of the year, it's still worthwhile to look back at the top legal news in open source last year. + +### 1. GitHub revises ToS + +In February 2017, GitHub [announced][1] it was revising its terms of service and invited comments on the changes, several of which concerned rights in the user-uploaded content. The [earlier GitHub terms][2] included an agreement by the user to allow others to "view and fork" public repositories, as well as an indemnification provision protecting GitHub against third-party claims. 
The new terms added a license from the user to GitHub to allow it to store and serve content, a default ["inbound=outbound"][3] contributor license, and an agreement by the user to comply with third-party licenses covering uploaded content. While keeping the "view and fork" language, the new terms state that further rights can be granted by adopting an open source license. The terms also add a waiver of [moral rights][4] with a two-level fallback license, the second license granting GitHub permission to use content without attribution and to make reasonable adaptations "as necessary to render the website and provide the service." + +In March, after the new terms became effective, concerns were raised by several developers, notably [Thorsten Glaser][5] and [Joey][6] [Hess][7], who said they would be removing their repositories from GitHub. As Glaser and Hess read the new terms, they seemed to require users to grant rights to GitHub and other users that were broader than third-party licenses would permit, particularly copyleft licenses like the GPL and licenses requiring attribution. Moreover, the license to GitHub could be read as giving it a more favorable license in users' own content than ordinary users would receive under the nominal license. Donald Robertson of the Free Software Foundation (FSF) [wrote][8] that, while GitHub's terms were confusing, they were not in conflict with copyleft: "Because it's highly unlikely that GitHub intended to destroy their business model and user base, we don't read the ambiguity in the terms as granting or requiring overly broad permissions outside those already granted by the GPL." + +GitHub eventually added a sentence addressing the issue; it can be seen in the [current version][9] of the terms: "If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required." + +### 2. 
Kernel enforcement statement + +[Section 4][10] of GPLv2 speaks of automatic termination of rights for those who violate the terms of the license. By [2006][11] the FSF had come to see this provision as unduly harsh in the case of inadvertent violations. GPLv3 modifies the GPLv2 approach to termination by providing a 30-day cure opportunity for first-time violators as well as a 60-day period of repose. Many GPL projects like the Linux kernel continue to be licensed under GPLv2. + +As I [wrote][12] last year, 2016 saw public condemnation of the GPL enforcement tactics of former Netfilter contributor Patrick McHardy. In a further reaction to McHardy's conduct, the Linux Foundation [Technical Advisory Board][13] (TAB), elected by the individual kernel developers, drafted a Linux Kernel Enforcement Statement, which was [announced by Greg Kroah-Hartman][14] on the Linux kernel mailing list (LKML) on October 16, 2017. The [statement][15], now part of the kernel's Documentation directory, incorporates the GPLv3 cure and repose language verbatim as a "commitment to users of the Linux kernel on behalf of ourselves and any successors to our copyright interests." The commitment, described as a grant of additional permission under GPLv2, applies to non-defensive assertions of GPLv2 rights. The kernel statement in effect adopts a recommendation in the [Principles of Community-Oriented GPL Enforcement][16]. To date, the statement has been signed by over 100 kernel developers. Kroah-Hartman published an [FAQ][17] on the statement and a detailed [explanation][18] authored by several TAB members. + +### 3. 
Red Hat, Facebook, Google, and IBM announce GPLv2/LGPLv2.x cure commitment + +A month after the announcement of the kernel enforcement statement on LKML, a coalition of companies led by Red Hat and including Facebook, Google, and IBM [announced][19] their own commitment to [extend the GPLv3 cure][20] and repose opportunities to all code covered by their copyrights and licensed under GPLv2, LGPLv2, and LGPLv2.1. (The termination provision in LGPLv2.x is essentially identical to that in GPLv2.) As with the kernel statement, the commitment does not apply to defensive proceedings or claims brought in response to some prior proceeding or claim (for example, a GPL violation counterclaim in a patent infringement lawsuit, as occurred in [Twin Peaks Software v. Red Hat][21]). + +### 4. EPL 2.0 released + +The [Eclipse Public License version 1.0][22], a weak copyleft license that descends from the [Common Public License][23] and indirectly the [IBM Public License][24], has been the primary license of Eclipse Foundation projects. It sees significant use outside of Eclipse as well; for example, EPL is the license of the [Clojure][25] language implementation and the preferred open source license of the Clojure community, and it is the main license of [OpenDaylight][26]. + +Following a quiet two-year community review process, in August 2017 the Eclipse Foundation [announced][27] that a new version 2 of the EPL had been approved by the Eclipse Foundation board and by the OSI. The Eclipse Foundation intends EPL 2.0 to be the default license for Eclipse community projects. + +EPL 2.0 is a fairly conservative license revision. Perhaps the most notable change concerns GPL compatibility. EPL 1.0 is regarded as GPL-incompatible by both the [FSF][28] and the [Eclipse Foundation][29]. 
The FSF has suggested that this is at least because of the conflicting copyleft requirements in the two licenses, and (rather more dubiously) the choice of law clause in EPL 1.0, which has been removed in EPL 2.0. As a weak copyleft license, EPL normally requires at least some subset of derivative works to be licensed under EPL if distributed in source code form. [FSF][30] and [Eclipse][31] published opinions about the use of GPL for Eclipse IDE plugins several years ago. Apart from the issue of license compatibility, the Eclipse Foundation generally prohibits projects from distributing third-party code under GPL and LGPL. + +While EPL 2.0 remains GPL-incompatible by default, it enables the initial "Contributor" to authorize the licensing of EPL-covered source code under a "Secondary License"—GPLv2, GPLv3, or a later version of the GPL, which may include identified GPL exceptions or additional permissions like the [Classpath Exception][32]—if the EPL-covered code is combined with GPL-licensed code contained in a separate file. Some Eclipse projects have already relicensed to EPL 2.0 and are making use of this "Secondary License" feature, including [OMR][33] and [OpenJ9][34]. As the FSF [observes][35], invocation of the Secondary License feature is roughly equivalent to dual-licensing the code under EPL / GPL. + +### 5. Java EE migration to Eclipse + +The [Java Community Process][36] (JCP) facilitates development of Java technology specifications (Java Specification Requests, aka JSRs), including those defining the [Java Enterprise Edition][37] platform (Java EE). The JCP rests on a complex legal architecture centered around the Java Specification Participation Agreement (JSPA). While JCP governance is shared among multiple organizational and individual participants, the JCP is in no way vendor-neutral. Oracle owns the Java trademark and has special controls over the JCP. 
Some JCP [reforms][38] were adopted several years ago, including measures to mandate open source licensing and open source project development practices for JSR reference implementations (RIs), but efforts to modernize the JSPA stalled during the pendency of the Oracle v. Google litigation. + +In August 2017, Oracle announced it would explore [moving Java EE][39] to an open source foundation. Following consultation with IBM and Red Hat, the two other major contributors to Java EE, Oracle announced in September that it had [selected the Eclipse Foundation][40] to house the successor to Java EE. + +The migration to Eclipse has been underway since then. The Eclipse board approved a new top-level Eclipse project, [EE4J][41] (Eclipse Enterprise for Java), to serve as the umbrella project for development of RIs and technology compatibility kits (TCKs) for the successor platform. The [GlassFish][42] project, consisting of source code of the RIs for the majority of Java EE JSRs for which Oracle has served as specification lead, has mostly been under a dual license of CDDL and GPLv2 plus the Classpath Exception. Oracle is in the process of [relicensing this code][43] to EPL 2.0 with GPLv2 plus the Classpath Exception as the Secondary License (see EPL 2.0 topic). In addition, Oracle is expected to relicense proprietary Java EE TCKs so they can be developed as Eclipse open source projects. Still to be determined are the name of an Eclipse-owned certification mark to succeed Java EE and the development of a new specification process in place of the one defined in the JSPA. + +### 6. React licensing controversy + +Open source licenses that specifically address patent licensing often couple the patent license grant with a "patent defense" clause, terminating the patent license upon certain acts of litigation brought by the licensee, an approach borrowed from standards agreements. 
The early period of corporate experimentation with open source licensing was characterized by enthusiasm for patent defense clauses that were broad (in the sense that a relatively wide range of conduct would trigger termination). The arrival of the Apache License 2.0 and Eclipse Public License 1.0 in 2004 marked an end to that era; their patent termination criteria are basically limited to patent lawsuits in which the user accuses the licensed software itself of infringement. + +In May 2013 Facebook released the [React][44] JavaScript library under the Apache License 2.0, but the 0.12.0 release (October 2014) switched to the 3-clause BSD license along with a patent license grant in a separate `PATENTS` file. The idea of using a simple, standard permissive open source license with a bespoke patent license in a separate file has some precedent in projects maintained by [Google][45] and [Microsoft][46]. However, the patent defense clauses in those cases take the narrow Apache/EPL approach. The React `PATENTS` language terminated the patent license in cases where the licensee brought a patent infringement action against Facebook, or against any party "arising from" any Facebook product, even where the claim was unrelated to React, as well as where the licensee alleged that a Facebook patent was invalid or unenforceable. In response to criticism from the community, Facebook [revised][47] the patent license language in April 2015, but the revised version continued to include as termination criteria patent litigation against Facebook and patent litigation "arising from" Facebook products. + +Facebook came to apply the React license to many of its community projects. In April 2017 an [issue][48] was opened in the Apache Software Foundation (ASF) "Legal Discuss" issue tracker concerning whether Apache Cassandra could use [RocksDB][49], another Facebook project using the React license, as a dependency. 
In addition to the several other ASF projects that were apparently already using RocksDB, a large number of ASF projects used React itself. In June, Chris Mattmann, VP of legal affairs for the ASF, [ruled][50] that the React license was relegated to the forbidden Category X (see my discussion of the [JSON license][12] last year)—despite the fact that the ASF has long placed open source licenses with similarly broad patent defense clauses (MPL 1.1, IBM-PL, CPL) in its semi-favored Category B. In response, Facebook relicensed RocksDB under [GPLv2][51] and the [Apache License][52] 2.0, and a few months later announced it was [relicensing React][53] and three other identically licensed projects under the MIT license. More recent Facebook project license changes from the React approach to conventional open source licenses include [osquery][54] (GPLv2 / Apache License 2.0) and [React Native][55] (MIT). + +Much of the community criticism of the React license was rather misinformed and often seemed to be little more than ad hominem attack against Facebook. One of the few examples of sober, well-reasoned analysis of the topic is [Heather Meeker's article][56] on Opensource.com. Whatever actual merits the React license may have, Facebook's decision to use it without making it licensor-neutral and without seeking OSI approval were tactical mistakes, as [Simon Phipps points out][57]. + +### 7. OpenSSL relicensing effort + +The [license][58] covering most of OpenSSL is a conjunction of two 1990s-vintage BSD-derivative licenses. The first closely resembles an early license of the Apache web server. The second is the bespoke license of OpenSSL's predecessor project SSLeay. Both licenses contain an advertising clause like that in the 4-clause BSD license. The closing sentence of the SSLeay license, a gratuitous snipe at the GPL, supports an interpretation, endorsed by the FSF but no doubt unintended, that the license is copyleft. 
If only because of the advertising clauses, the OpenSSL license has long been understood to be GPL-incompatible, as Mark McLoughlin explained in a now-classic [essay][59]. + +In 2015, a year after the disclosure of the Heartbleed vulnerability and the Linux Foundation's subsequent formation of the [Core Infrastructure Initiative][60], Rich Salz said in a [blog post][61] that OpenSSL planned to relicense to the Apache License 2.0. The OpenSSL team followed up in March 2017 with a [press release][62] announcing the relicensing initiative and set up a website to collect agreements to the license change from the project's several hundred past contributors. + +A form email sent to identified individual contributors, asking for permission to relicense, soon drew criticism, mainly because of its closing sentence: "If we do not hear from you, we will assume that you have no objection." Some raised policy and legal concerns over what Theo de Raadt called a "[manufacturing consent in volume][63]" approach. De Raadt mocked the effort by [posting][64] a facetious attempt to relicense GCC to the [ISC license][65]. + +Salz posted an [update][66] on the relicensing effort in June. At that point, 40% of contacted contributors had responded, with the vast majority in favor of the license change and fewer than a dozen objections, amounting to 86 commits, with half of them surviving in the master branch. Salz described in detail the reasonable steps the project had taken to review those objections, resulting in a determination that at most 10 commits required removal and rewriting. + +### 8. Open Source Security v. Perens + +Open Source Security, Inc. (OSS) is the commercial vehicle through which Brad Spengler maintains the out-of-tree [grsecurity][67] patchset to the Linux kernel. In 2015, citing concerns about GPL noncompliance by users and misuse of the grsecurity trademark, OSS began [limiting access][68] to the stable patchset to paying customers. 
In 2017 OSS [ceased][69] releasing any public branches of grsecurity. The [Grsecurity Stable Patch Access Agreement][70] affirms that grsecurity is licensed under GPLv2 and that the user has all GPLv2 "rights and obligations," but states a policy of terminating access to future updates if a user redistributes patchsets or changelogs "outside of the explicit obligations under the GPL to User's customers." + +In June 2017, Bruce Perens published a [blog post][71] contending that the grsecurity agreement violated the GPL. OSS sued Perens in the Northern District of California, with claims for defamation, false light, and tortious interference with prospective advantage. In December the court [granted][72] Perens' motion to dismiss, denied without prejudice Perens' motion to strike under the California [anti-SLAPP][73] statute, and denied OSS's motion for partial summary judgment. In essence, the court said that as statements of opinion by a non-lawyer, Perens' blog posts were not defamatory. OSS has said it intends to appeal. + +### 9. Artifex Software v. Hancom + +Artifex Software licenses [Ghostscript][74] gratis under the [GPL][75] (more recently AGPL) and for revenue under proprietary licenses. In December 2016 Artifex sued Hancom, a South Korean vendor of office suite software, in the Northern District of California. Artifex alleged that Hancom had incorporated Ghostscript into its Hangul word processing program and Hancom Office product without obtaining a proprietary license or complying with the GPL. The [complaint][76] includes claims for breach of contract as well as copyright infringement. In addition to monetary damages, Artifex requested injunctive relief, including an order compelling Hancom to distribute the source code of Hangul and Hancom Office to Hancom's customers. + +In April 2017 the court [denied][77] Hancom's motion to dismiss. 
One of Hancom's arguments was that Artifex did not plead the existence of a contract because there was no demonstration of mutual assent. The court disagreed, stating that the allegations of Hancom's use of Ghostscript, failure to obtain a proprietary license, and public representation that its use of Ghostscript was licensed under the GPL were sufficient to plead the existence of a contract. In addition, Artifex's allegations regarding its dual-licensing scheme were deemed sufficient to plead damages for breach of contract. The denial of the motion to dismiss was widely misreported and sensationalized as a ruling that the GPL itself was "an enforceable contract." + +In September the court [denied][78] Hancom's motion for summary judgment on the breach of contract claim. Hancom first argued that as a matter of law Artifex was not entitled to money damages, essentially because GPL compliance required no payment to Artifex. The court rejected this argument, as the value of a royalty-bearing license and an unjust enrichment theory could serve as the measure of Artifex's damages. Second, Hancom argued in essence that any damages for contract breach could not be based on continuing GPL-noncompliant activity after Hancom first began shipping Ghostscript in violation of the GPL, because at that moment Hancom's GPL license was automatically terminated. In rejecting this argument, the court noted that GPLv3's language suggested Hancom's GPL obligations persisted beyond the termination of its GPL rights. The parties reached a settlement in December. + +Special thanks to Chris Gillespie for his research and analysis of the Artifex case. + +### 10. SFLC/Conservancy trademark dispute + +In 2006 the Software Freedom Law Center formed a [separate nonprofit organization][79], which it named the Software Freedom Conservancy. By July 2011, the two organizations no longer had any board members, officers, or employees in common, and SFLC ceased providing legal services to Conservancy. 
SFLC obtained a registration from the USPTO for the service mark SOFTWARE FREEDOM LAW CENTER in early 2011. In November 2011 Conservancy applied to register the mark SOFTWARE FREEDOM CONSERVANCY; the registration issued in September 2012. SFLC continues to be run by its founder Eben Moglen, while Conservancy is managed by former SFLC employees Karen Sandler and Bradley Kuhn. The two organizations are known to have opposing positions on a number of significant legal and policy matters (see, for example, my discussion of the [ZFS-on-Linux][12] issue last year). + +In September 2017, SFLC filed a [petition][80] with the [Trademark Trial and Appeal Board][81] to cancel Conservancy's trademark registration under Section 14 of the Lanham Trademark Act of 1946, [15 U.S.C.][82][§][82][1064][82], claiming that Conservancy's mark is confusingly similar to SFLC's. In November, Conservancy submitted its [answer][83] listing its affirmative defenses, and in December Conservancy filed a [summary judgment motion][84] on those defenses. The TTAB in effect [denied the summary judgment motion][85] on the basis that the affirmative defenses in Conservancy's answer were insufficiently pleaded. + +Moglen publicly [proposed a mutual release][86] of all claims "in return for an iron-clad agreement for mutual non-disparagement," including "a perpetual, royalty-free trademark license for Conservancy to keep and use its current name." [Conservancy responded][87] in a blog post that it could not "accept any settlement offer that includes a trademark license we don't need. Furthermore, any trademark license necessarily gives SFLC perpetual control over how we pursue our charitable mission." + +SFLC [moved][88] for leave to amend its petition to add a second ground for cancellation, that Conservancy's trademark registration was obtained by fraud. Conservancy's [response][89] argues that the proposed amendment does not state a claim for fraud. 
Meanwhile, Conservancy has submitted [applications for trademarks][90] for "THE SOFTWARE CONSERVANCY." + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/2/top-10-open-source-legal-stories-shook-2017 + +作者:[Richard Fontana][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/fontana +[1]:https://github.com/blog/2314-new-github-terms-of-service +[2]:https://web.archive.org/web/20170131092801/https:/help.github.com/articles/github-terms-of-service/ +[3]:https://opensource.com/law/11/7/trouble-harmony-part-1 +[4]:https://en.wikipedia.org/wiki/Moral_rights +[5]:https://www.mirbsd.org/permalinks/wlog-10_e20170301-tg.htm#e20170301-tg_wlog-10 +[6]:https://joeyh.name/blog/entry/removing_everything_from_github/ +[7]:https://joeyh.name/blog/entry/what_I_would_ask_my_lawyers_about_the_new_Github_TOS/ +[8]:https://www.fsf.org/blogs/licensing/do-githubs-updated-terms-of-service-conflict-with-copyleft +[9]:https://help.github.com/articles/github-terms-of-service/ +[10]:https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html#section4 +[11]:http://gplv3.fsf.org/gpl-rationale-2006-01-16.html#SECTION00390000000000000000 +[12]:https://opensource.com/article/17/1/yearbook-7-notable-legal-developments-2016 +[13]:https://www.linuxfoundation.org/about/technical-advisory-board/ +[14]:https://lkml.org/lkml/2017/10/16/122 +[15]:https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/process/kernel-enforcement-statement.rst?h=v4.15 +[16]:https://sfconservancy.org/copyleft-compliance/principles.html +[17]:http://kroah.com/log/blog/2017/10/16/linux-kernel-community-enforcement-statement-faq/ +[18]:http://kroah.com/log/blog/2017/10/16/linux-kernel-community-enforcement-statement/ 
+[19]:https://www.redhat.com/en/about/press-releases/technology-industry-leaders-join-forces-increase-predictability-open-source-licensing +[20]:https://www.redhat.com/en/about/gplv3-enforcement-statement +[21]:https://lwn.net/Articles/516735/ +[22]:https://www.eclipse.org/legal/epl-v10.html +[23]:https://opensource.org/licenses/cpl1.0.php +[24]:https://opensource.org/licenses/IPL-1.0 +[25]:https://clojure.org/ +[26]:https://www.opendaylight.org/ +[27]:https://www.eclipse.org/org/press-release/20170829eplv2.php +[28]:https://www.gnu.org/licenses/license-list.en.html#EPL +[29]:http://www.eclipse.org/legal/eplfaq.php#GPLCOMPATIBLE +[30]:https://www.fsf.org/blogs/licensing/using-the-gpl-for-eclipse-plug-ins +[31]:https://mmilinkov.wordpress.com/2010/04/06/epl-gpl-commentary/ +[32]:https://www.gnu.org/software/classpath/license.html +[33]:https://github.com/eclipse/omr/blob/master/LICENSE +[34]:https://github.com/eclipse/openj9/blob/master/LICENSE +[35]:https://www.gnu.org/licenses/license-list.en.html#EPL2 +[36]:https://jcp.org/en/home/index +[37]:http://www.oracle.com/technetwork/java/javaee/overview/index.html +[38]:https://jcp.org/en/jsr/detail?id=348 +[39]:https://blogs.oracle.com/theaquarium/opening-up-java-ee +[40]:https://blogs.oracle.com/theaquarium/opening-up-ee-update +[41]:https://projects.eclipse.org/projects/ee4j/charter +[42]:https://javaee.github.io/glassfish/ +[43]:https://mmilinkov.wordpress.com/2018/01/23/ee4j-current-status-and-whats-next/ +[44]:https://reactjs.org/ +[45]:https://www.webmproject.org/license/additional/ +[46]:https://github.com/dotnet/coreclr/blob/master/PATENTS.TXT +[47]:https://github.com/facebook/react/blob/v0.13.3/PATENTS +[48]:https://issues.apache.org/jira/browse/LEGAL-303 +[49]:http://rocksdb.org/ +[50]:https://issues.apache.org/jira/browse/LEGAL-303?focusedCommentId=16052957&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16052957 +[51]:https://github.com/facebook/rocksdb/pull/2226 
+[52]:https://github.com/facebook/rocksdb/pull/2589 +[53]:https://code.facebook.com/posts/300798627056246/relicensing-react-jest-flow-and-immutable-js/ +[54]:https://github.com/facebook/osquery/pull/4007 +[55]:https://github.com/facebook/react-native/commit/26684cf3adf4094eb6c405d345a75bf8c7c0bf88 +[56]:https://opensource.com/article/17/9/facebook-patents-license +[57]:https://opensource.com/article/17/9/5-reasons-facebooks-react-license-was-mistake +[58]:https://www.openssl.org/source/license.html +[59]:https://people.gnome.org/~markmc/openssl-and-the-gpl.html +[60]:https://www.coreinfrastructure.org/ +[61]:https://www.openssl.org/blog/blog/2015/08/01/cla/ +[62]:https://www.coreinfrastructure.org/news/announcements/2017/03/openssl-re-licensing-apache-license-v-20-encourage-broader-use-other-foss +[63]:https://marc.info/?l=openbsd-tech&m=149028829020600&w=2 +[64]:https://marc.info/?l=openbsd-tech&m=149032069130072&w=2 +[65]:https://opensource.org/licenses/ISC +[66]:https://www.openssl.org/blog/blog/2017/06/17/code-removal/ +[67]:https://grsecurity.net/ +[68]:https://grsecurity.net/announce.php +[69]:https://grsecurity.net/passing_the_baton.php +[70]:https://web.archive.org/web/20170805231029/https:/grsecurity.net/agree/agreement.php +[71]:https://perens.com/2017/06/28/warning-grsecurity-potential-contributory-infringement-risk-for-customers/ +[72]:https://www.courtlistener.com/docket/6132658/53/open-source-security-inc-v-perens/ +[73]:https://en.wikipedia.org/wiki/Strategic_lawsuit_against_public_participation +[74]:https://www.ghostscript.com/ +[75]:https://www.gnu.org/licenses/licenses.en.html +[76]:https://www.courtlistener.com/recap/gov.uscourts.cand.305835.1.0.pdf +[77]:https://ia801909.us.archive.org/13/items/gov.uscourts.cand.305835/gov.uscourts.cand.305835.32.0.pdf +[78]:https://ia801909.us.archive.org/13/items/gov.uscourts.cand.305835/gov.uscourts.cand.305835.54.0.pdf +[79]:https://www.softwarefreedom.org/news/2006/apr/03/conservancy-launch/ 
+[80]:http://ttabvue.uspto.gov/ttabvue/v?pno=92066968&pty=CAN&eno=1 +[81]:https://www.uspto.gov/trademarks-application-process/trademark-trial-and-appeal-board +[82]:https://www.law.cornell.edu/uscode/text/15/1064 +[83]:http://ttabvue.uspto.gov/ttabvue/v?pno=92066968&pty=CAN&eno=5 +[84]:http://ttabvue.uspto.gov/ttabvue/v?pno=92066968&pty=CAN&eno=6 +[85]:http://ttabvue.uspto.gov/ttabvue/v?pno=92066968&pty=CAN&eno=8 +[86]:https://www.softwarefreedom.org/blog/2017/dec/22/conservancy/ +[87]:https://sfconservancy.org/blog/2017/dec/22/sflc-escalation/ +[88]:http://ttabvue.uspto.gov/ttabvue/v?pno=92066968&pty=CAN&eno=7 +[89]:http://ttabvue.uspto.gov/ttabvue/v?pno=92066968&pty=CAN&eno=9 +[90]:http://tsdr.uspto.gov/documentviewer?caseId=sn87670034&docId=FTK20171106083425#docIndex=0&page=1 diff --git a/sources/talk/20180301 How to hire the right DevOps talent.md b/sources/talk/20180301 How to hire the right DevOps talent.md new file mode 100644 index 0000000000..bcf9bb3d20 --- /dev/null +++ b/sources/talk/20180301 How to hire the right DevOps talent.md @@ -0,0 +1,48 @@ +How to hire the right DevOps talent +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/desk_clock_job_work.jpg?itok=Nj4fuhl6) + +DevOps culture is quickly gaining ground, and demand for top-notch DevOps talent is greater than ever at companies all over the world. With the [annual base salary for a junior DevOps engineer][1] now topping $100,000, IT professionals are hurrying to [make the transition into DevOps.][2] + +But how do you choose the right candidate to fill your DevOps role? + +### Overview + +Most teams are looking for candidates with a background in operations and infrastructure, software engineering, or development. This is in conjunction with skills that relate to configuration management, continuous integration, and deployment (CI/CD), as well as cloud infrastructure. Knowledge of container orchestration is also in high demand. 
+
+In a perfect world, the two backgrounds would meet somewhere in the middle to form Dev and Ops, but in most cases, candidates lean toward one side or the other. Yet they must possess the skills necessary to understand the needs of their counterparts so they can work effectively as a team to achieve continuous delivery and deployment. Since every company is different, there is no single right or wrong answer; much depends on a company’s tech stack and infrastructure, as well as the goals and skills of other team members. So how do you focus your search?
+
+### Decide on the background
+
+Begin by assessing the strength of your current team. Do you have rock-star software engineers but lack infrastructure knowledge? Focus on closing the skill gaps. Just because you have the budget to hire a DevOps engineer doesn’t mean you should spend weeks, or even months, trying to find the best software engineer who also happens to use Kubernetes and Docker because they are currently the trend. Instead, look for someone who will provide the most value in your environment, and see how things go from there.
+
+### There is no “Ctrl + F” solution
+
+Instead of concentrating on specific tools, concentrate on a candidate's understanding of DevOps and CI/CD-related processes. You'll be better off with someone who understands methodologies over tools. It is more important to ensure that candidates comprehend the concept of CI/CD than to ask if they prefer Jenkins, Bamboo, or TeamCity. Don’t get too caught up in the exact toolchain—rather, focus on problem-solving skills and the ability to increase efficiency, save time, and automate manual processes. You don't want to miss out on the right candidate just because the word “Puppet” was not on their resume.
+
+### Check your ego
+
+As mentioned above, DevOps is a rapidly growing field, and DevOps engineers are in hot demand. That means candidates have great buying power.
You may have an amazing company or product, but hiring top talent is no longer as simple as putting up a “Help Wanted” sign and waiting for top-quality applicants to rush in. I'm not suggesting that maintaining a reputation as a great place to work is unimportant, but in today's environment, you need to make an effort to sell your position. Flaws or glitches in the hiring process, such as abruptly canceling interviews or not offering feedback after interviews, can lead to negative reviews spreading across the industry. Remember, it takes just a couple of minutes to leave a negative review on Glassdoor. + +### Contractor or permanent employee? + +Most recruiters and hiring managers immediately start searching for a full-time employee, even though they may have other options. If you’re looking to design, build, and implement a new DevOps environment, why not hire a senior person who has done this in the past? Consider hiring a senior contractor, along with a junior full-time hire. That way, you can tap the knowledge and experience of the contractor by having them work with the junior employee. Contractors can be expensive, but they bring invaluable knowledge—especially if the work can be done within a short timeframe. + +### Cultivate from within + +With so many companies competing for talent, it is difficult to find the right DevOps engineer. Not only will you need to pay top dollar to hire this person, but you must also consider that the search can take several months. However, since few companies are lucky enough to find the ideal DevOps engineer, consider searching for a candidate internally. You might be surprised at the talent you can cultivate from within your own organization. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/3/how-hire-right-des-talentvop + +作者:[Stanislav Ivaschenko][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/ilyadudkin +[1]:https://www.glassdoor.com/Salaries/junior-devops-engineer-salary-SRCH_KO0,22.htm +[2]:https://squadex.com/insights/system-administrator-making-leap-devops/ diff --git a/sources/talk/20180302 Beyond metrics- How to operate as team on today-s open source project.md b/sources/talk/20180302 Beyond metrics- How to operate as team on today-s open source project.md new file mode 100644 index 0000000000..fb5454bbe4 --- /dev/null +++ b/sources/talk/20180302 Beyond metrics- How to operate as team on today-s open source project.md @@ -0,0 +1,53 @@ +Beyond metrics: How to operate as a team on today's open source project +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/diversity-women-meeting-team.png?itok=BdDKxT1w) + +How do we traditionally think about community health and vibrancy? + +We might quickly zero in on metrics related primarily to code contributions: How many companies are contributing? How many individuals? How many lines of code? Collectively, these speak to both the level of development activity and the breadth of the contributor base. The former speaks to whether the project continues to be enhanced and expanded; the latter to whether it has attracted a diverse group of developers or is controlled primarily by a single organization. + +The [Linux Kernel Development Report][1] tracks these kinds of statistics and, unsurprisingly, it appears extremely healthy on all counts. 
+ +However, while development cadence and code contributions are still clearly important, other aspects of open source communities are also coming to the forefront. This is in part because, increasingly, open source is about more than a development model. It’s also about making it easier for users and other interested parties to interact in ways that go beyond being passive recipients of code. Of course, there have long been user groups. But open source streamlines the involvement of users, just as it does software development. + +This was the topic of my discussion with Diane Mueller, the director of community development for OpenShift. + +When OpenShift became a container platform based in part on Kubernetes in version 3, Mueller saw a need to broaden the community beyond the core code contributors. In part, this was because OpenShift was increasingly touching a broad range of open source projects and organizations such as those associated with the [Open Container Initiative (OCI)][2] and the [Cloud Native Computing Foundation (CNCF)][3]. In addition to users, cloud service providers who were offering managed services also wanted ways to get involved in the project. + +“What we tried to do was open up our minds about what the community constituted,” Mueller explained, adding, “We called it the [Commons][4] because Red Hat's near Boston, and I'm from that area. Boston Common is a shared resource, the grass where you bring your cows to graze, and you have your farmer's hipster market or whatever it is today that they do on Boston Common.” + +This new model, she said, was really “a new ecosystem that incorporated all of those different parties and different perspectives. We used a lot of virtual tools, a lot of new tools like Slack. We stepped up beyond the mailing list. We do weekly briefings. We went very virtual because, one, I don't scale. The Evangelist and Dev Advocate team didn't scale. 
We need to be able to get all that word out there, all this new information out there, so we went very virtual. We worked with a lot of people to create online learning stuff, a lot of really good tooling, and we had a lot of community help and support in doing that.” + +![diane mueller open shift][6] + +Diane Mueller, director of community development at OpenShift, discusses the role of strong user communities in open source software development. (Credit: Gordon Haff, CC BY-SA 4.0) + +However, one interesting aspect of the Commons model is that it isn’t just virtual. We see the same pattern elsewhere in many successful open source communities, such as the Linux kernel. Lots of day-to-day activities happen on mailing lists, IRC, and other collaboration tools. But this doesn’t eliminate the benefits of face-to-face time that allows for richer and more informal discussions and exchanges. + +This interview with Mueller took place in London the day after the [OpenShift Commons Gathering][7]. Gatherings are full-day events, held a number of times a year, which are typically attended by a few hundred people. Much of the focus is on users and user stories. In fact, Mueller notes, “Here in London, one of the Commons members, Secnix, was really the major reason we actually hosted the gathering here. Justin Cook did an amazing job organizing the venue and helping us pull this whole thing together in less than 50 days. A lot of the community gatherings and things are driven by the Commons members.” + +Mueller wants to focus on users more and more. “The OpenShift Commons gathering at [Red Hat] Summit will be almost entirely case studies,” she noted. “Users talking about what's in their stack. What lessons did they learn? What are the best practices? 
Sharing those ideas that they've done just like we did here in London.” + +Although the Commons model grew out of some specific OpenShift needs at the time it was created, Mueller believes it’s an approach that can be applied more broadly. “I think if you abstract what we've done, you can apply it to any existing open source community,” she said. “The foundations still, in some ways, play a nice role in giving you some structure around governance, and helping incubate stuff, and helping create standards. I really love what OCI is doing to create standards around containers. There's still a role for that in some ways. I think the lesson that we can learn from the experience and we can apply to other projects is to open up the community so that it includes feedback mechanisms and gives the podium away.” + +The evolution of the community model through approaches like the OpenShift Commons mirrors the healthy evolution of open source more broadly. Certainly, some users have been involved in the development of open source software for a long time. What’s striking today is how widespread and pervasive direct user participation has become. Sure, open source remains central to much of modern software development. But it’s also becoming increasingly central to how users learn from each other and work together with their partners and developers. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/3/how-communities-are-evolving + +作者:[Gordon Haff][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/ghaff +[1]:https://www.linuxfoundation.org/2017-linux-kernel-report-landing-page/ +[2]:https://www.opencontainers.org/ +[3]:https://www.cncf.io/ +[4]:https://commons.openshift.org/ +[5]:/file/388586 +[6]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/39369010275_7df2c3c260_z.jpg?itok=gIhnBl6F (diane mueller open shift) +[7]:https://www.meetup.com/London-OpenShift-User-Group/events/246498196/ diff --git a/sources/talk/20180303 4 meetup ideas- Make your data open.md b/sources/talk/20180303 4 meetup ideas- Make your data open.md new file mode 100644 index 0000000000..a431b8376a --- /dev/null +++ b/sources/talk/20180303 4 meetup ideas- Make your data open.md @@ -0,0 +1,75 @@ +4 meetup ideas: Make your data open +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/people_team_community_group.png?itok=Nc_lTsUK) + +[Open Data Day][1] (ODD) is an annual, worldwide celebration of open data and an opportunity to show the importance of open data in improving our communities. + +Not many individuals and organizations know about the meaningfulness of open data or why they might want to liberate their data from the restrictions of copyright, patents, and more. They also don't know how to make their data open—that is, publicly available for anyone to use, share, or republish with modifications. + +This year ODD falls on Saturday, March 3, and there are [events planned][2] in every continent except Antarctica. 
While it might be too late to organize an event for this year, it's never too early to plan for next year. Also, since open data is important every day of the year, there's no reason to wait until ODD 2019 to host an event in your community. + +There are many ways to build local awareness of open data. Here are four ideas to help plan an excellent open data event any time of year. + +### 1. Organize an entry-level event + +You can host an educational event at a local library, college, or another public venue about how open data can be used and why it matters for all of us. If possible, invite a [local speaker][3] or have someone present remotely. You could also have a roundtable discussion with several knowledgeable people in your community. + +Consider offering resources such as the [Open Data Handbook][4], which not only provides a guide to the philosophy and rationale behind adopting open data, but also offers case studies, use cases, how-to guides, and other material to support making data open. + +### 2. Organize an advanced-level event + +For a deeper experience, organize a hands-on training event for open data newbies. Ideas for good topics include [training teachers on open science][5], [creating audiovisual expressions from open data][6], and using [open government data][7] in meaningful ways. + +The options are endless. To choose a topic, think about what is locally relevant, identify issues that open data might be able to address, and find people who can do the training. + +### 3. Organize a hackathon + +Open data hackathons can be a great way to bring open data advocates, developers, and enthusiasts together under one roof. Hackathons are more than just training sessions, though; the idea is to build prototypes or solve real-life challenges that are tied to open data. 
In a hackathon, people in various groups can contribute to the entire assembly line in multiple ways, such as identifying issues by working collaboratively through [Etherpad][8] or creating focus groups. + +Once the hackathon is over, make sure to upload all the useful data that is produced to the internet with an open license. + +### 4. Release or relicense data as open + +Open data is about making meaningful data publicly available under open licenses while protecting any data that might put people's private information at risk. (Learn [how to protect private data][9].) Try to find existing, interesting, and useful data that is privately owned by individuals or organizations and negotiate with them to relicense or release the data online under any of the [recommended open data licenses][10]. The widely popular [Creative Commons licenses][11] (particularly the CC0 license and the 4.0 licenses) are quite compatible with relicensing public data. (See this FAQ from Creative Commons for more information on [openly licensing data][12].) + +Open data can be published on multiple platforms—your website, [GitHub][13], [GitLab][14], [DataHub.io][15], or anywhere else that supports open standards. + +### Tips for event success + +No matter what type of event you decide to do, here are some general planning tips to improve your chances of success. + + * Find a venue that's accessible to the people you want to reach, such as a library, a school, or a community center. + * Create a curriculum that will engage the participants. + * Invite your target audience—make sure to distribute information through social media, community events calendars, Meetup, and the like. + + + +Have you attended or hosted a successful open data event? If so, please share your ideas in the comments. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/3/celebrate-open-data-day + +作者:[Subhashish Panigraphi][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/psubhashish +[1]:http://www.opendataday.org/ +[2]:http://opendataday.org/#map +[3]:https://openspeakers.org/ +[4]:http://opendatahandbook.org/ +[5]:https://docs.google.com/forms/d/1BRsyzlbn8KEMP8OkvjyttGgIKuTSgETZW9NHRtCbT1s/viewform?edit_requested=true +[6]:http://dattack.lv/en/ +[7]:https://www.eventbrite.co.nz/e/open-data-open-potential-event-friday-2-march-2018-tickets-42733708673 +[8]:http://etherpad.org/ +[9]:https://ssd.eff.org/en/module/keeping-your-data-safe +[10]:https://opendatacommons.org/licenses/ +[11]:https://creativecommons.org/share-your-work/licensing-types-examples/ +[12]:https://wiki.creativecommons.org/wiki/Data#Frequently_asked_questions_about_data_and_CC_licenses +[13]:https://github.com/MartinBriza/MediaWriter +[14]:https://about.gitlab.com/ +[15]:https://datahub.io/ diff --git a/sources/talk/20180305 What-s next in IT automation- 6 trends to watch.md b/sources/talk/20180305 What-s next in IT automation- 6 trends to watch.md new file mode 100644 index 0000000000..abdfc0df55 --- /dev/null +++ b/sources/talk/20180305 What-s next in IT automation- 6 trends to watch.md @@ -0,0 +1,130 @@ +What’s next in IT automation: 6 trends to watch +====== + +![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/cio_ai_artificial_intelligence.png?itok=o0csm9l2) + +We’ve recently covered the [factors fueling IT automation][1], the [current trends][2] to watch as adoption grows, and [helpful tips][3] for those organizations just beginning to automate certain processes. 
+ +Oh, and we also shared expert advice on [how to make the case for automation][4] in your company, as well as [keys for long-term success][5]. + +Now, there’s just one question: What’s next? We asked a range of experts to share a peek into the not-so-distant future of [automation][6]. Here are six trends they advise IT leaders to monitor closely. + +### 1. Machine learning matures + +For all of the buzz around [machine learning][7] (and the overlapping phrase “self-learning systems”), it’s still very early days for most organizations in terms of actual implementations. Expect that to change, and for machine learning to play a significant role in the next waves of IT automation. + +Mehul Amin, director of engineering for [Advanced Systems Concepts, Inc.][8], points to machine learning as one of the next key growth areas for IT automation. + +“With the data that is developed, automation software can make decisions that otherwise might be the responsibility of the developer,” Amin says. “For example, the developer builds what needs to be executed, but identifying the best system to execute the processes might be [done] by software using analytics from within the system.” + +That extends elsewhere in this same hypothetical system; Amin notes that machine learning can enable automated systems to provision additional resources when necessary to meet timelines or SLAs, as well as retire those resources when they’re no longer needed, and other possibilities. + +Amin is certainly not alone. + +“IT automation is moving towards self-learning,” says Kiran Chitturi, CTO architect at [Sungard Availability Services][9]. 
“Systems will be able to test and monitor themselves, enhancing business processes and software delivery.” + +Chitturi points to automated testing as an example; test scripts are already widely adopted, but soon those automated testing processes may be more likely to learn as they go, developing, for example, a wider recognition of how new code or code changes will impact production environments. + +### 2. Artificial intelligence spawns automation opportunities + +The same principles above hold true for the related (but separate) field of [artificial intelligence][10]. Depending on your definition of AI, it seems likely that machine learning will have the more significant IT impact in the near term (and we’re likely to see a lot of overlapping definitions and understandings of the two fields). Assume that emerging AI technologies will spawn new automation opportunities, too. + +“The integration of artificial intelligence (AI) and machine learning capabilities is widely perceived as critical for business success in the coming years,” says Patrick Hubbard, head geek at [SolarWinds][11]. + +### 3. That doesn’t mean people are obsolete + +Let’s try to calm those among us who are now hyperventilating into a paper bag: The first two trends don’t necessarily mean we’re all going to be out of a job. + +It is likely to mean changes to various roles – and the creation of [new roles][12] altogether. + +But in the foreseeable future, at least, you don’t need to practice bowing to your robot overlords. + +“A machine can only consider the environment variables that it is given – it can’t choose to include new variables, only a human can do this today,” Hubbard explains. 
“However, for IT professionals this will necessitate the cultivation of AI- and automation-era skills such as programming, coding, a basic understanding of the algorithms that govern AI and machine learning functionality, and a strong security posture in the face of more sophisticated cyberattacks.” + +Hubbard shares the example of new tools or capabilities such as AI-enabled security software or machine-learning applications that remotely spot maintenance needs in an oil pipeline. Both might improve efficiency and effectiveness; neither automatically replaces the people necessary for information security or pipeline maintenance. + +“Many new functionalities still require human oversight,” Hubbard says. “In order for a machine to determine if something ‘predictive’ could become ‘prescriptive,’ for example, human management is needed.” + +The same principle holds true even if you set machine learning and AI aside for a moment and look at IT automation more generally, especially in the software development lifecycle. + +Matthew Oswalt, lead architect for automation at [Juniper Networks][13], points out that the fundamental reason IT automation is growing is that it is creating immediate value by reducing the amount of manual effort required to operate infrastructure. + +Rather than responding to an infrastructure issue at 3 a.m. themselves, operations engineers can use event-driven automation to define their workflows ahead of time, as code. + +“It also sets the stage for treating their operations workflows as code rather than easily outdated documentation or tribal knowledge,” Oswalt explains. “Operations staff are still required to play an active role in how [automation] tooling responds to events. The next phase of adopting automation is to put in place a system that is able to recognize interesting events that take place across the IT spectrum and respond in an autonomous fashion. Rather than responding to an infrastructure issue at 3 a.m. 
themselves, operations engineers can use event-driven automation to define their workflows ahead of time, as code. They can rely on this system to respond in the same way they would, at any time.” + +### 4. Automation anxiety will decrease + +Hubbard of SolarWinds notes that the term “automation” itself tends to spawn a lot of uncertainty and concern, not just in IT but across professional disciplines, and he says that concern is legitimate. But some of the attendant fears may be overblown, and even perpetuated by the tech industry itself. Reality might actually be the calming force on this front: When the actual implementation and practice of automation helps people realize #3 on this list, then we’ll see #4 occur. + +“This year we’ll likely see a decrease in automation anxiety and more organizations begin to embrace AI and machine learning as a way to augment their existing human resources,” Hubbard says. “Automation has historically created room for more jobs by lowering the cost and time required to accomplish smaller tasks and refocusing the workforce on things that cannot be automated and require human labor. The same will be true of AI and machine learning.” + +Automation will also decrease some anxiety around the topic most likely to increase an IT leader’s blood pressure: Security. As Matt Smith, chief architect, [Red Hat][14], recently [noted][15], automation will increasingly help IT groups reduce the security risks associated with maintenance tasks. + +His advice: “Start by documenting and automating the interactions between IT assets during maintenance activities. By relying on automation, not only will you eliminate tasks that historically required much manual effort and surgical skill, you will also be reducing the risks of human error and demonstrating what’s possible when your IT organization embraces change and new methods of work. Ultimately, this will reduce resistance to promptly applying security patches. 
And it could also help keep your business out of the headlines during the next major security event.” + +**[ Read the full article: [12 bad enterprise security habits to break][16]. ] ** + +### 5. Continued evolution of scripting and automation tools + +Many organizations see the first steps toward increasing automation – usually in the form of scripting or automation tools (sometimes referred to as configuration management tools) – as "early days" work. + +But views of those tools are evolving as the use of various automation technologies grows. + +“There are many processes in the data center environment that are repetitive and subject to human error, and technologies such as [Ansible][17] help to ameliorate those issues,” says Mark Abolafia, chief operating officer at [DataVision][18]. “With Ansible, one can write a specific playbook for a set of actions and input different variables such as addresses, etc., to automate long chains of process that were previously subject to human touch and longer lead times.” + +**[ Want to learn more about this aspect of Ansible? Read the related article:[Tips for success when getting started with Ansible][19]. ]** + +Another factor: The tools themselves will continue to become more advanced. + +“With advanced IT automation tools, developers will be able to build and automate workflows in less time, reducing error-prone coding,” says Amin of ASCI. “These tools include pre-built, pre-tested drag-and-drop integrations, API jobs, the rich use of variables, reference functionality, and object revision history.” + +### 6. Automation opens new metrics opportunities + +As we’ve said previously in this space, automation isn’t IT snake oil. It won’t fix busted processes or otherwise serve as some catch-all elixir for what ails your organization. That’s true on an ongoing basis, too: Automation doesn’t eliminate the need to measure performance. 
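As a minimal illustration of the kind of measurement this section describes, the sketch below computes two common delivery metrics, deployment frequency and change failure rate, from a list of deployment events. The event format and field names here are assumptions for illustration, not any particular CI/CD platform's actual API:

```python
from datetime import date

# Hypothetical deployment events, as they might be pulled from an
# API-driven CI/CD platform (the format is an assumption, not a real API).
deployments = [
    {"day": date(2018, 3, 1), "failed": False},
    {"day": date(2018, 3, 2), "failed": True},
    {"day": date(2018, 3, 2), "failed": False},
    {"day": date(2018, 3, 5), "failed": False},
]

def deployment_frequency(events, days_in_period):
    """Average number of deployments per day over the reporting period."""
    return len(events) / days_in_period

def change_failure_rate(events):
    """Fraction of deployments that failed in production."""
    return sum(1 for e in events if e["failed"]) / len(events)

print(deployment_frequency(deployments, days_in_period=5))  # 0.8
print(change_failure_rate(deployments))                     # 0.25
```

Numbers like these are most useful for spotting high-level trends over time; as noted below, they should always be balanced with context rather than used to grade individuals.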
+ +**[ See our related article[DevOps metrics: Are you measuring what matters?][20] ]** + +In fact, automation should open up new opportunities here. + +“As more and more development activities – source control, DevOps pipelines, work item tracking – move to the API-driven platforms – the opportunity and temptation to stitch these pieces of raw data together to paint the picture of your organization's efficiency increases,” says Josh Collins, VP of architecture at [Janeiro Digital][21]. + +Collins thinks of this as a possible new “development organization metrics-in-a-box.” But don’t mistake that to mean machines and algorithms can suddenly measure everything IT does. + +“Whether measuring individual resources or the team in aggregate, these metrics can be powerful – but should be balanced with a heavy dose of context,” Collins says. “Use this data for high-level trends and to affirm qualitative observations – not to clinically grade your team.” + +**Want more wisdom like this, IT leaders?[Sign up for our weekly email newsletter][22].** + +-------------------------------------------------------------------------------- + +via: https://enterprisersproject.com/article/2018/3/what-s-next-it-automation-6-trends-watch + +作者:[Kevin Casey][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://enterprisersproject.com/user/kevin-casey +[1]:https://enterprisersproject.com/article/2017/12/5-factors-fueling-automation-it-now +[2]:https://enterprisersproject.com/article/2017/12/4-trends-watch-it-automation-expands +[3]:https://enterprisersproject.com/article/2018/1/getting-started-automation-6-tips +[4]:https://enterprisersproject.com/article/2018/1/how-make-case-it-automation +[5]:https://enterprisersproject.com/article/2018/1/it-automation-best-practices-7-keys-long-term-success +[6]:https://enterprisersproject.com/tags/automation 
+[7]:https://enterprisersproject.com/article/2018/2/how-spot-machine-learning-opportunity +[8]:https://www.advsyscon.com/en-us/ +[9]:https://www.sungardas.com/en/ +[10]:https://enterprisersproject.com/tags/artificial-intelligence +[11]:https://www.solarwinds.com/ +[12]:https://enterprisersproject.com/article/2017/12/8-emerging-ai-jobs-it-pros +[13]:https://www.juniper.net/ +[14]:https://www.redhat.com/en?intcmp=701f2000000tjyaAAA +[15]:https://enterprisersproject.com/article/2018/2/12-bad-enterprise-security-habits-break +[16]:https://enterprisersproject.com/article/2018/2/12-bad-enterprise-security-habits-break?sc_cid=70160000000h0aXAAQ +[17]:https://opensource.com/tags/ansible +[18]:https://datavision.com/ +[19]:https://opensource.com/article/18/2/tips-success-when-getting-started-ansible?intcmp=701f2000000tjyaAAA +[20]:https://enterprisersproject.com/article/2017/7/devops-metrics-are-you-measuring-what-matters?sc_cid=70160000000h0aXAAQ +[21]:https://www.janeirodigital.com/ +[22]:https://enterprisersproject.com/email-newsletter?intcmp=701f2000000tsjPAAQ diff --git a/sources/talk/20180306 Try, learn, modify- The new IT leader-s code.md b/sources/talk/20180306 Try, learn, modify- The new IT leader-s code.md new file mode 100644 index 0000000000..bcc0729f77 --- /dev/null +++ b/sources/talk/20180306 Try, learn, modify- The new IT leader-s code.md @@ -0,0 +1,59 @@ +Try, learn, modify: The new IT leader's code +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ship_wheel_gear_devops_kubernetes.png?itok=xm4a74Kv) + +Just about every day, new technological developments threaten to destabilize even the most intricate and best-laid business plans. Organizations often find themselves scrambling to adapt to new conditions, and that's created a shift in how they plan for the future. + +According to a 2017 [study][1] by CompTIA, only 34% of companies are currently developing IT architecture plans that extend beyond 12 months. 
One reason for that shift away from a longer-term plan is that business contexts are changing so quickly that planning any further into the future is nearly impossible. "If your company is trying to set a plan that will last five to 10 years down the road," [CIO.com writes][1], "forget it." + +I've heard similar statements from countless customers and partners around the world. Technological innovations are occurring at an unprecedented pace. + +The result is that long-term planning is dead. We need to be thinking differently about the way we run our organizations if we're going to succeed in this new world. + +### How planning died + +As I wrote in The Open Organization, traditionally-run organizations are optimized for industrial economies. They embrace hierarchical structures and rigidly prescribed processes as they work to achieve positional competitive advantage. To be successful, they have to define the strategic positions they want to achieve. Then they have to formulate and dictate plans for getting there, and execute on those plans in the most efficient ways possible—by coordinating activities and driving compliance. + +Management's role is to optimize this process: plan, prescribe, execute. It consists of saying: Let's think of a competitively advantaged position; let's configure our organization to ultimately get there; and then let's drive execution by making sure all aspects of the organization comply. It's what I'll call "mechanical management," and it's a brilliant solution for a different time. + +In today's volatile and uncertain world, our ability to predict and define strategic positions is diminishing—because the pace of change, the rate of introduction of new variables, is accelerating. Classic, long-term, strategic planning and execution isn't as effective as it used to be. + +If long-term planning has become so difficult, then prescribing necessary behaviors is even more challenging. 
And measuring compliance against a plan is next to impossible. + +All this dramatically affects the way people work. Unlike workers in the traditionally-run organizations of the past—who prided themselves on being able to act repetitively, with little variation and comfortable certainty—today's workers operate in contexts of abundant ambiguity. Their work requires greater creativity, intuition, and critical judgment—there is a greater demand to deviate from yesterday's "normal" and adjust to today's new conditions. + +In today's volatile and uncertain world, our ability to predict and define strategic positions is diminishing—because the pace of change, the rate of introduction of new variables, is accelerating. + +Working in this new way has become more critical to value creation. Our management systems must focus on building structures, systems, and processes that help create engaged, motivated workers—people who are enabled to innovate and act with speed and agility. + +We need to come up with a different solution for optimizing organizations for a very different economic era, one that works from the bottom up rather than the top down. We need to replace that old three-step formula for success—plan, prescribe, execute—with one much better suited to today's tumultuous climate: try, learn, modify. + +### Try, learn, modify + +Because conditions can change so rapidly and with so little warning—and because the steps we need to take next are no longer planned in advance—we need to cultivate environments that encourage creative trial and error, not unyielding allegiance to a five-year schedule. Here are just a few implications of beginning to work this way: + + * **Shorter planning cycles (try).** Rather than agonize over long-term strategic directions, managers need to be thinking of short-term experiments they can try quickly. 
They should be seeking ways to help their teams take calculated risks and leverage the data at their disposal to make best guesses about the most beneficial paths forward. They can do this by lowering overhead and giving teams the freedom to try new approaches quickly.
+  * **Higher tolerance for failure (learn).** Greater frequency of experimentation means greater opportunity for failure. Creative and resilient organizations have a [significantly higher tolerance for failure][2] than traditional organizations do. Managers should treat failures as learning opportunities—moments to gather feedback on the tests their teams are running.
+  * **More adaptable structures (modify).** An ability to easily modify organizational structures and strategic directions—and the willingness to do it when conditions necessitate—is the key to ensuring that organizations can evolve in line with rapidly changing environmental conditions. Managers can't be wedded to any idea any longer than that idea proves itself to be useful for accomplishing a short-term goal.
+
+
+
+If long-term planning is dead, then long live shorter-term experimentation. Try, learn, and modify—that's the best path forward during uncertain times.
+
+[Subscribe to our weekly newsletter][3] to learn more about open organizations.
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/open-organization/18/3/try-learn-modify + +作者:[Jim Whitehurst][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/remyd +[1]:https://www.cio.com/article/3246027/enterprise-architecture/the-death-of-long-term-it-planning.html?upd=1515780110970 +[2]:https://opensource.com/open-organization/16/12/building-culture-innovation-your-organization +[3]:https://opensource.com/open-organization/resources/newsletter diff --git a/sources/tech/20090518 How to use yum-cron to automatically update RHEL-CentOS Linux.md b/sources/tech/20090518 How to use yum-cron to automatically update RHEL-CentOS Linux.md index b0ca149c3e..4f96731332 100644 --- a/sources/tech/20090518 How to use yum-cron to automatically update RHEL-CentOS Linux.md +++ b/sources/tech/20090518 How to use yum-cron to automatically update RHEL-CentOS Linux.md @@ -1,3 +1,5 @@ +translating by shipsw + How to use yum-cron to automatically update RHEL/CentOS Linux ====== The yum command line tool is used to install and update software packages under RHEL / CentOS Linux server. I know how to apply updates using [yum update command line][1], but I would like to use cron to update packages where appropriate manually. How do I configure yum to install software patches/updates [automatically with cron][2]? 
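The question above can be answered in a few commands. Here is a hedged sketch (package and service names as commonly shipped on CentOS/RHEL; configuration paths differ between versions 6 and 7):

```shell
# Install the yum-cron helper (assumes a CentOS/RHEL box with yum configured)
yum -y install yum-cron

# On CentOS/RHEL 7, tell yum-cron to actually apply the updates it
# downloads (the shipped default is download-only):
sed -i 's/^apply_updates = no/apply_updates = yes/' /etc/yum/yum-cron.conf

# Enable the service at boot and start it now
systemctl enable yum-cron
systemctl start yum-cron
# CentOS/RHEL 6 equivalent: chkconfig yum-cron on && service yum-cron start
```

These commands require root; the rest of the article walks through the configuration details.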
diff --git a/sources/tech/20091215 How To Create sar Graphs With kSar To Identifying Linux Bottlenecks.md b/sources/tech/20091215 How To Create sar Graphs With kSar To Identifying Linux Bottlenecks.md
deleted file mode 100644
index e386b29fd5..0000000000
--- a/sources/tech/20091215 How To Create sar Graphs With kSar To Identifying Linux Bottlenecks.md
+++ /dev/null
@@ -1,341 +0,0 @@
-Translating by qhwdw
-How To Create sar Graphs With kSar To Identifying Linux Bottlenecks
-======
-The sar command collects, reports, or saves UNIX / Linux system activity information. It saves selected counters in the operating system to the /var/log/sa/sadd file. From the collected data, you get lots of information about your server:
-
- 1. CPU utilization
- 2. Memory paging and its utilization
- 3. Network I/O, and transfer statistics
- 4. Process creation activity
- 5. All block devices activity
- 6. Interrupts/sec etc.
-
-
-
-The sar command output can be used for identifying server bottlenecks. However, analyzing the information provided by sar can be difficult, so use the kSar tool. kSar takes sar command output and plots nice, easy-to-understand graphs over a period of time.
-
-
-## sysstat Package
-
-The sar, sa1, and sa2 commands are part of the sysstat package, a collection of performance monitoring tools for Linux that includes:
-
- 1. sar : Displays the data.
- 2. sa1 and sa2: Collect and store the data for later analysis. The sa2 shell script writes a daily report in the /var/log/sa directory. The sa1 shell script collects and stores binary data in the system activity daily data file.
- 3. sadc - System activity data collector. You can configure various options by modifying the sa1 and sa2 scripts. They are located at the following locations:
- * /usr/lib64/sa/sa1 (64bit) or /usr/lib/sa/sa1 (32bit) - This calls sadc to log reports to /var/log/sa/sadX format.
- * /usr/lib64/sa/sa2 (64bit) or /usr/lib/sa/sa2 (32bit) - This calls sar to log reports to /var/log/sa/sarX format.
- - - -### How do I install sar on my system? - -Type the following [yum command][1] to install sysstat on a CentOS/RHEL based system: -`# yum install sysstat` -Sample outputs: -``` -Loaded plugins: downloadonly, fastestmirror, priorities, - : protectbase, security -Loading mirror speeds from cached hostfile - * addons: mirror.cs.vt.edu - * base: mirror.ash.fastserv.com - * epel: serverbeach1.fedoraproject.org - * extras: mirror.cogentco.com - * updates: centos.mirror.nac.net -0 packages excluded due to repository protections -Setting up Install Process -Resolving Dependencies ---> Running transaction check ----> Package sysstat.x86_64 0:7.0.2-3.el5 set to be updated ---> Finished Dependency Resolution - -Dependencies Resolved - -==================================================================== - Package Arch Version Repository Size -==================================================================== -Installing: - sysstat x86_64 7.0.2-3.el5 base 173 k - -Transaction Summary -==================================================================== -Install 1 Package(s) -Update 0 Package(s) -Remove 0 Package(s) - -Total download size: 173 k -Is this ok [y/N]: y -Downloading Packages: -sysstat-7.0.2-3.el5.x86_64.rpm | 173 kB 00:00 -Running rpm_check_debug -Running Transaction Test -Finished Transaction Test -Transaction Test Succeeded -Running Transaction - Installing : sysstat 1/1 - -Installed: - sysstat.x86_64 0:7.0.2-3.el5 - -Complete! -``` - - -### Configuration files for sysstat - -Edit /etc/sysconfig/sysstat file specify how long to keep log files in days, maximum is a month: -`# vi /etc/sysconfig/sysstat` -Sample outputs: -``` -# keep log for 28 days -# the default is 7 -HISTORY=28 -``` - -Save and close the file. 
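Once `sa1` has been collecting for a few days, any day inside that HISTORY window can be replayed from its daily binary file. A small sketch (the `saDD` file naming follows the convention mentioned above):

```shell
# Replay the CPU counters recorded on the 10th of the current month
sar -u -f /var/log/sa/sa10

# Today's data file can be addressed generically
sar -u -f "/var/log/sa/sa$(date +%d)"
```

Files older than HISTORY days are rotated away, so requests for those dates will simply fail with "No such file or directory".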
- -### Find the default cron job for sar - -[The default cron job is located][2] at /etc/cron.d/sysstat: -`# cat /etc/cron.d/sysstat` -Sample outputs: -``` -# run system activity accounting tool every 10 minutes -*/10 * * * * root /usr/lib64/sa/sa1 1 1 -# generate a daily summary of process accounting at 23:53 -53 23 * * * root /usr/lib64/sa/sa2 -A -``` - -### Tell sadc to report statistics for disks - -Edit the /etc/cron.d/sysstat file using a text editor such as NA command or vim command, enter: -`# vi /etc/cron.d/sysstat` -Update it as follows to log all disk stats (the -d option force to log stats for each block device and the -I option force report statistics for all system interrupts): -``` -# run system activity accounting tool every 10 minutes -*/10 * * * * root /usr/lib64/sa/sa1 -I -d 1 1 -# generate a daily summary of process accounting at 23:53 -53 23 * * * root /usr/lib64/sa/sa2 -A -``` - -On a CentOS/RHEL 7.x you need to pass the -S DISK option to collect data for block devices. Pass the -S XALL to collect data about: - - 1. Disk - 2. Partition - 3. System interrupts - 4. SNMP - 5. IPv6 - - -``` -# Run system activity accounting tool every 10 minutes -*/10 * * * * root /usr/lib64/sa/sa1 -S DISK 1 1 -# 0 * * * * root /usr/lib64/sa/sa1 600 6 & -# Generate a daily summary of process accounting at 23:53 -53 23 * * * root /usr/lib64/sa/sa2 -A -# Run system activity accounting tool every 10 minutes -``` - -Save and close the file. 
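Rather than waiting for the next 10-minute cron run, you can trigger one collection by hand to confirm the new disk options take effect. A sketch, assuming the 64-bit path shown above (on older sysstat versions use `-d` instead of `-S DISK`, as discussed earlier):

```shell
# One manual collection round including block-device data (run as root)
/usr/lib64/sa/sa1 -S DISK 1 1

# The fresh sample should show up in today's daily file
sar -d -f "/var/log/sa/sa$(date +%d)" | tail
```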
Turn on the service for a CentOS/RHEL version 5.x/6.x, enter: -`# chkconfig sysstat on -# service sysstat start` -Sample outputs: -``` -Calling the system activity data collector (sadc): -``` - -For a CentOS/RHEL 7.x, run the following commands: -``` -# systemctl enable sysstat -# systemctl start sysstat.service -# systemctl status sysstat.service -``` -Sample outputs: -``` -● sysstat.service - Resets System Activity Logs - Loaded: loaded (/usr/lib/systemd/system/sysstat.service; enabled; vendor preset: enabled) - Active: active (exited) since Sat 2018-01-06 16:33:19 IST; 3s ago - Process: 28297 ExecStart=/usr/lib64/sa/sa1 --boot (code=exited, status=0/SUCCESS) - Main PID: 28297 (code=exited, status=0/SUCCESS) - -Jan 06 16:33:19 centos7-box systemd[1]: Starting Resets System Activity Logs... -Jan 06 16:33:19 centos7-box systemd[1]: Started Resets System Activity Logs. -``` - -## How Do I Use sar? How do I View Stats? - -Use the sar command to display output the contents of selected cumulative activity counters in the operating system. 
In this example, sar is run to get real-time reporting from the command line about CPU utilization: -`# sar -u 3 10` -Sample outputs: -``` -Linux 2.6.18-164.2.1.el5 (www-03.nixcraft.in) 12/14/2009 - -09:49:47 PM CPU %user %nice %system %iowait %steal %idle -09:49:50 PM all 5.66 0.00 1.22 0.04 0.00 93.08 -09:49:53 PM all 12.29 0.00 1.93 0.04 0.00 85.74 -09:49:56 PM all 9.30 0.00 1.61 0.00 0.00 89.10 -09:49:59 PM all 10.86 0.00 1.51 0.04 0.00 87.58 -09:50:02 PM all 14.21 0.00 3.27 0.04 0.00 82.47 -09:50:05 PM all 13.98 0.00 4.04 0.04 0.00 81.93 -09:50:08 PM all 6.60 6.89 1.26 0.00 0.00 85.25 -09:50:11 PM all 7.25 0.00 1.55 0.04 0.00 91.15 -09:50:14 PM all 6.61 0.00 1.09 0.00 0.00 92.31 -09:50:17 PM all 5.71 0.00 0.96 0.00 0.00 93.33 -Average: all 9.24 0.69 1.84 0.03 0.00 88.20 -``` - -Where, - - * 3 = interval - * 10 = count - - - -To view process creation statistics, enter: -`# sar -c 3 10` -To view I/O and transfer rate statistics, enter: -`# sar -b 3 10` -To view paging statistics, enter: -`# sar -B 3 10` -To view block device statistics, enter: -`# sar -d 3 10` -To view statistics for all interrupt statistics, enter: -`# sar -I XALL 3 10` -To view device specific network statistics, enter: -``` -# sar -n DEV 3 10 -# sar -n EDEV 3 10 -``` -To view CPU specific statistics, enter: -``` -# sar -P ALL -# Only 1st CPU stats -# sar -P 1 3 10 -``` -To view queue length and load averages statistics, enter: -`# sar -q 3 10` -To view memory and swap space utilization statistics, enter: -``` -# sar -r 3 10 -# sar -R 3 10 -``` -To view status of inode, file and other kernel tables statistics, enter: -`# sar -v 3 10` -To view system switching activity statistics, enter: -`# sar -w 3 10` -To view swapping statistics, enter: -`# sar -W 3 10` -To view statistics for a given process called Apache with PID # 3256, enter: -`# sar -x 3256 3 10` - -## Say Hello To kSar - -sar and sadf provides CLI based output. The output may confuse all new users / sys admin. 
So you need to use kSar which is a java application that graph your sar data. It also permit to export data to PDF/JPG/PNG/CSV. You can load data from three method : local file, local command execution, and remote command execution via SSH. kSar supports the sar output of the following OS: - - 1. Solaris 8, 9 and 10 - 2. Mac OS/X 10.4+ - 3. Linux (Systat Version >= 5.0.5) - 4. AIX (4.3 & 5.3) - 5. HPUX 11.00+ - - - -### Download And Install kSar - -Visit the [official][3] website and grab the latest source code. Use [wget to][4] download the source code, enter: -`$ wget https://github.com/vlsi/ksar/releases/download/v5.2.4-snapshot-652bf16/ksar-5.2.4-SNAPSHOT-all.jar` - -#### How Do I Run kSar? - -Make sure [JAVA jdk][5] is installed and working correctly. Type the following command to start kSar, run: -`$ java -jar ksar-5.2.4-SNAPSHOT-all.jar` - -![Fig.01: kSar welcome screen][6] -Next you will see main kSar window, and menus with two panels. -![Fig.02: kSar - the main window][7] -The left one will have a list of graphs available depending on the data kSar has parsed. The right window will show you the graph you have selected. - -## How Do I Generate sar Graphs Using kSar? - -First, you need to grab sar command statistics from the server named server1. Type the following command to get stats, run: -`[ **server1** ]# LC_ALL=C sar -A > /tmp/sar.data.txt` -Next copy file to local desktop from a remote box using the scp command: -`[ **desktop** ]$ scp user@server1.nixcraft.com:/tmp/sar.data.txt /tmp/` -Switch to kSar Windows. Click on **Data** > **Load data from text file** > Select sar.data.txt from /tmp/ > Click the **Open** button. -Now, the graph type tree is deployed in left pane and a graph has been selected: -![Fig.03: Processes for server1][8] - -![Fig.03: Disk stats \(blok device\) stats for server1][9]![Fig.05: Memory stats for server1][10] - -#### Zoom in and out - -Using the move, you can interactively zoom onto up a part of a graph. 
To select a zone to zoom into, click on its upper-left corner and, while still holding the mouse button, move to the lower-right of the zone you want to zoom. To come back to the unzoomed view, click and drag the mouse to any corner location except a lower-right one. You can also right-click and select zoom options.
-
-#### Understanding kSar Graphs And sar Data
-
-I strongly recommend reading the sar and sadf man pages:
-`$ man sar
-$ man sadf`
-
-## Case Study: Identifying Linux Server CPU Bottlenecks
-
-With the sar command and the kSar tool, one can get a detailed snapshot of memory, CPU, and other subsystems. For example, if CPU utilization is more than 80% for a long period, a CPU bottleneck is most likely occurring. Using **sar -x ALL** you can find out which processes are eating the CPU. The output of the [mpstat command][11] (part of the sysstat package itself) will also help you understand CPU utilization. You can easily analyze this information with kSar.
-
-### I Found CPU Bottlenecks…
-
-Performance tuning options for the CPU are as follows:
-
- 1. Make sure that no unnecessary programs are running in the background. Turn off [all unnecessary services on Linux][12].
- 2. Use [cron to schedule][13] jobs (e.g., backup) to run at off-peak hours.
- 3. Use [top and ps command][14] to find out all non-critical background jobs / services. Make sure you lower their priority using the [renice command][15].
- 4. Use the [taskset command to set a process's][16] CPU affinity, i.e. bind processes to different CPUs. For example, run the MySQL database on CPU #2 and Apache on CPU #3.
- 5. Make sure you are using the latest drivers and firmware for your server.
- 6. If possible, add additional CPUs to the system.
- 7. Use faster CPUs for a single-threaded application (e.g. the Lighttpd web server app).
- 8. Use more CPUs for a multi-threaded application (e.g. the MySQL database server app).
- 9. Use more computer nodes and set up a [load balancer][17] for a web app.
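Point 4 of the tuning list above can be scripted. A small sketch (the process names and CPU numbers are only examples, and `pidof` may return several PIDs for forking daemons such as Apache):

```shell
# Pin a running MySQL server to CPU #2 and Apache workers to CPU #3
taskset -cp 2 "$(pidof mysqld)"
for pid in $(pidof httpd); do taskset -cp 3 "$pid"; done

# Show the resulting affinity mask for MySQL
taskset -cp "$(pidof mysqld)"
```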
- - - -## isag - Interactive System Activity Grapher (alternate tool) - -The isag command graphically displays the system activity data stored in a binary data file by a previous sar run. The isag command invokes sar to extract the data to be plotted. isag has limited set of options as compare to kSar. - -![Fig.06: isag CPU utilization graphs][18] - - -### about the author - -The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][19], [Facebook][20], [Google+][21]. - --------------------------------------------------------------------------------- - -via: https://www.cyberciti.biz/tips/identifying-linux-bottlenecks-sar-graphs-with-ksar.html - -作者:[Vivek Gite][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.cyberciti.biz -[1]:https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/ (See Linux/Unix yum command examples for more info) -[2]:https://www.cyberciti.biz/faq/how-do-i-add-jobs-to-cron-under-linux-or-unix-oses/ -[3]:https://github.com/vlsi/ksar -[4]:https://www.cyberciti.biz/tips/linux-wget-your-ultimate-command-line-downloader.html -[5]:https://www.cyberciti.biz/faq/howto-ubuntu-linux-install-configure-jdk-jre/ -[6]:https://www.cyberciti.biz/media/new/tips/2009/12/sar-welcome.png (kSar welcome screen) -[7]:https://www.cyberciti.biz/media/new/tips/2009/12/screenshot-kSar-a-sar-grapher-01.png (kSar - the main window) -[8]:https://www.cyberciti.biz/media/new/tips/2009/12/cpu-ksar.png (Linux kSar Processes for server1 ) -[9]:https://www.cyberciti.biz/media/new/tips/2009/12/disk-stats-ksar.png (Linux Disk I/O Stats Using kSar) 
-[10]:https://www.cyberciti.biz/media/new/tips/2009/12/memory-ksar.png (Linux Memory paging and its utilization stats) -[11]:https://www.cyberciti.biz/tips/how-do-i-find-out-linux-cpu-utilization.html -[12]:https://www.cyberciti.biz/faq/check-running-services-in-rhel-redhat-fedora-centoslinux/ -[13]:https://www.cyberciti.biz/faq/how-do-i-add-jobs-to-cron-under-linux-or-unix-oses/ -[14]:https://www.cyberciti.biz/faq/show-all-running-processes-in-linux/ -[15]:https://www.cyberciti.biz/faq/howto-change-unix-linux-process-priority/ -[16]:https://www.cyberciti.biz/faq/taskset-cpu-affinity-command/ -[17]:https://www.cyberciti.biz/tips/load-balancer-open-source-software.html -[18]:https://www.cyberciti.biz/media/new/tips/2009/12/isag.cpu_.png (Fig.06: isag CPU utilization graphs) -[19]:https://twitter.com/nixcraft -[20]:https://facebook.com/nixcraft -[21]:https://plus.google.com/+CybercitiBiz diff --git a/sources/tech/20120424 How To Set Readonly File Permissions On Linux - Unix Web Server DocumentRoot.md b/sources/tech/20120424 How To Set Readonly File Permissions On Linux - Unix Web Server DocumentRoot.md index 6f46d154a1..d5b31e5120 100644 --- a/sources/tech/20120424 How To Set Readonly File Permissions On Linux - Unix Web Server DocumentRoot.md +++ b/sources/tech/20120424 How To Set Readonly File Permissions On Linux - Unix Web Server DocumentRoot.md @@ -1,3 +1,5 @@ +translating by yizhuoyan + How To Set Readonly File Permissions On Linux / Unix Web Server DocumentRoot ====== diff --git a/sources/tech/20140510 Managing Digital Files (e.g., Photographs) in Files and Folders.md b/sources/tech/20140510 Managing Digital Files (e.g., Photographs) in Files and Folders.md new file mode 100644 index 0000000000..d1fd01ef0c --- /dev/null +++ b/sources/tech/20140510 Managing Digital Files (e.g., Photographs) in Files and Folders.md @@ -0,0 +1,610 @@ +Managing Digital Files (e.g., Photographs) in Files and Folders +====== +Update 2014-05-14: added real world example + +Update 
2015-03-16: filtering photographs according to their GPS coordinates + +Update 2016-08-29: replaced outdated `show-sel.sh` method with new `filetags --filter` method + +Update 2017-08-28: Email comment on geeqie video thumbnails + +I am a passionate photographer when being on vacation or whenever I see something beautiful. This way, I collected many [JPEG][1] files over the past years. Here, I describe how I manage my digital photographs while avoiding any [vendor lock-in][2] which binds me to a temporary solution and leads to loss of data. Instead, I prefer solutions where I am able to **invest my time and effort for a long-term relationship**. + +This (very long) entry is **not about image files only** : I am going to explain further things like my folder hierarchy, file name convention, and so forth. Therefore, this information applies to all kind of files I process. + +Before I start explaining my method, we should come to an agreement whether or not we do have the same set of requirements I am trying to match with my method. If you are into [raw image formats][3], storing your photographs somewhere in the cloud or anything else very special to you (and not to me), you might not get satisfied with the things described here. Decide yourself. + +### My requirements + +For **getting the photographs (and movies) from my digital camera to my computer** , I just want to put the SD card into my computer and invoke the fetch-workflow. This thing also has to **pre-process the files** to meet my file name convention (described further down) and to rotate images that are in portrait orientation (and not in landscape). + +Those files are written to my photography inbox folder `$HOME/tmp/digicam/`. In this folder, I want to **look through my image files and play movies** to **sort out/delete, rename, add/remove tags, and put sets of related files into separate destination folders**. 
+ +After that, I want to **navigate through my set of folders** containing the sets of image/movie files. In rare occasions, I want to **open an image file in an independent image processing tool** like [the GIMP][4]. Just for **rotating JPEG files** , I want to have a quick method which does not require an image processing tool and which is rotating JPEG images [in a loss-less way][5]. + +My digital camera has now support for tagging images with [GPS][6] coordinates. Therefore, I need a method to **visualize GPS coordinates for single files as well as for a set of files** showing the path I was walking. + +There is another nice feature I want to use: imagine a beautiful vacation in Venice where you took hundreds of photographs. Each of them is so beautiful so that you do not want to delete some of them. On the other side, you might want to get a smaller set of photographs for presenting to your friends at home. And they are only expecting maybe two dozens of files before being too jealous. Therefore, I want to be able to **define and show a certain sub-set of photos**. + +In terms of being independent and **avoid lock-in effects** , I do not want to use a tool I am not able to use when a company discontinues a product or service. For the very same reason and because I am a privacy-aware person, **I do not want to use any cloud-based service**. In order to keep myself open for new possibilities, I do not want to invest any effort in something which is only available on one specific operating system platform. **Basic stuff has to be available on any platform** (viewing, navigation, ...). But the **full set of requirements have to work on GNU/Linux** , in my case Debian GNU/Linux. + +Before I describe my current solutions to this fairly large set of requirements mentioned above, I have to explain my general folder structure and file naming convention I also use for digital photographs. 
But first, there is an important fact you have to consider:
+
+#### iPhoto, Picasa, and such considered harmful
+
+Software tools which manage collections of photographs do provide pretty cool features. They offer a nice user interface and try to give you cozy work-flows for all kinds of requirements.
+
+My issues with them are numerous. They mostly use proprietary storage formats for almost everything: image files, meta-data, and so forth. This is a huge problem when you are going to change to different software in a couple of years. Trust me: **you are going to switch**, some day, in any case, for multiple reasons.
+
+If you are in the position where you need to switch your tool, you are going to realize that iPhoto or Picasa store original image files and everything you did to them separately. Rotation of images, descriptions added to image files, tags, cropping, and so forth: **everything will be lost forever** if you are not able to export it and re-import it to the new tool. Chances are very high that you are not going to do this without loss of information or data.
+
+I do not want to invest any effort in a tool which locks away my work. **I refuse to lock myself in to any proprietary tool.** Been there, done that. Learned my lessons.
+
+This is the reason why I keep time-stamps, image descriptions, and tags in the file name itself. File names are permanent unless I manually change them. They do not get lost when I back up my photographs or when I copy them to USB memory sticks or other operating systems. Everybody is able to read them. Any future system is able to process them.
+
+### My file name convention
+
+All my files which have a relation to a specific day or time start with a **date-stamp** or a **time-stamp** according to an adopted [ISO 8601][7].
+
+Example file name with a date-stamp and two tags: `2014-05-09 Budget export for project 42 -- finance company.csv`
+
+Example file name with a time-stamp (even including optional seconds) and two tags: `2014-05-09T22.19.58 Susan presenting her new shoes -- family clothing.jpg`
+
+I have to use adopted ISO time-stamps because colons are not suitable for the Windows [file system NTFS][8]. Therefore, I replaced the colons with dots for separating hours from minutes from the optional seconds.
+
+In case of a **time or date duration**, I separate the two date- or time-stamps with two minus signs: "`2014-05-09--2014-05-13 Jazz festival Graz -- folder tourism music.pdf`".
+
+Time/date-stamps in file names have the advantage that they remain unchanged until I manually change them. Meta-data which is included in the file content itself (like [Exif][9]) tends to get lost when files are processed by tools that do not take care of that meta-data. Additionally, starting a file name with such a date/time-stamp ensures that files are displayed in temporal order instead of alphabetic order. The alphabet is a [totally artificial sort order][10] and it is typically less practical for locating files by the user.
+
+When I want to associate **tags** with a file name, I place them between the original file name and the [file name extension][11], separated by a space, two minus signs and an additional space: "`--`". My tags are lower-case English words which do not contain spaces or special characters. Sometimes, I might use concatenated words like `quantifiedself` or `usergenerated`. I [tend to prefer general categories][12] instead of (too) specific describing tags. I re-use my tags for Twitter [hashtags][13], file names, folder names, bookmarks, blog entries like this one, and so forth.
+
+Tags as part of the file name have several advantages. You are able to locate files with the help of tags by using your usual desktop search engine.
Tags in file names cannot be lost when copying to different storage media. That usually happens whenever a system stores meta-data somewhere other than the file name: a meta-data database, [dot-files][14], [alternate data streams][15], and so forth.
+
+Of course, please do **avoid special characters**, umlauts, colons, and so forth in file and folder names in general. Especially when you synchronize files between different operating system platforms.
+
+My **file name convention for folders** is the same as for files.
+
+Note: Because of the clever [filenametimestamps][16]-module of [Memacs][17], all files and folders with a date/time-stamp appear on the very same time/day on my Org-mode calendar (agenda). This way, I get a very cool overview on what happened when on which day, including all photographs I took.
+
+### My general folder structure
+
+In this section, I will describe my most important folders within my home folder. NOTE: this might get moved to an independent page somewhere in the future. Or not. Time will tell. :-)
+
+Lots of stuff is only of interest for a certain period of time. These are things like downloads whose content I quickly skim through, ZIP files I unpack to examine the files contained, minor interesting stuff, and so forth. For **temporary stuff** I have the `$HOME/tmp/` sub-hierarchy. New photographs are placed in `$HOME/tmp/digicam/`. Stuff I temporarily copy from a CD, DVD, or USB memory stick is put in `$HOME/tmp/fromcd/`. Whenever a software tool needs temporary data within my user folder hierarchy, I use `$HOME/tmp/Tools/` as a starting point. A very frequent folder for me is `$HOME/tmp/2del/`: "2del" means "ready for being deleted any time". All my browsers use this folder as the default download folder, for example. In case I need to free space on a machine, I first look at this `2del`-folder for stuff to delete.
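Because the file name convention described earlier is purely textual, it can be handled with stock shell tools. A minimal sketch using only POSIX parameter expansion (the example file name is the one from the convention section above):

```shell
#!/bin/sh
# Split a "<stamp> <description> -- <tags>.<ext>" file name into parts.
fname="2014-05-09T22.19.58 Susan presenting her new shoes -- family clothing.jpg"

base="${fname%.*}"      # drop the file name extension
tags="${base##* -- }"   # everything after the " -- " separator
stamp="${fname%% *}"    # the leading ISO-style date/time-stamp

echo "stamp: $stamp"
echo "tags:  $tags"
```

Running this prints `2014-05-09T22.19.58` as the stamp and `family clothing` as the tag list; the same expansions work in any POSIX shell, so no external tools are required.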
+
+In contrast to the temporary stuff described above, I certainly want to keep files **for a longer period of time** as well. Those files get moved to my `$HOME/archive/` sub-hierarchy. It has several sub-folders for backups, web/download-stuff I want to keep, binary files I want to archive, index files of removable media (CD, DVD, memory sticks, external hard drives), and a folder where I place stuff I want to archive (and find a decent destination folder for) in the near future. Sometimes, I am too busy or impatient at the moment to file things properly. Yes, that's me: I even have a "do not bug me now"-folder. Is this weird to you? :-)
+
+The most important sub-hierarchy within my archive is `$HOME/archive/events_memories/` and its sub-folders `2014/`, `2013/`, `2012/`, and so forth. As you might have guessed already, there is one **sub-folder per year**. Within each of them, there are single files and folders. The files are named according to my file name convention described in the previous section. Folder names start with an [ISO 8601][7] datestamp in the form "YYYY-MM-DD", followed by a hopefully descriptive name like `$HOME/archive/events_memories/2014/2014-05-08 Business marathon with colleagues/`. Within those date-related folders I keep all kinds of files which are related to a certain event: photographs, (scanned) PDF-files, text files, and so forth.
+
+For **sharing data**, I maintain a `$HOME/share/` sub-hierarchy. There is my Dropbox folder and folders for important people with whom I share data using all kinds of methods (like [unison][18]). I also share data among my set of devices: Mac Mini at home, GNU/Linux notebook at home, Android phone, root-server (my personal cloud), and a Windows notebook at work. I don't want to elaborate on my synchronization set-up here. There might be another blog entry for this if you ask nicely.
:-)
+
+Within my `$HOME/templates_labels/` sub-hierarchy, I keep all kinds of **template files** ([LaTeX][19], scripts, ...), cliparts, **logos**, and so forth.
+
+My **Org-mode** files I mostly keep within `$HOME/org/`. I practice retentiveness and will not explain this time how much I love [Emacs/Org-mode][20] and how much I get out of it. You probably have read or heard me elaborating on the awesome things I do with it. Just look out for [my `emacs` tag][21] on my blog and its [hashtag `#orgmode`][22] on twitter.
+
+So far about my most important folder sub-hierarchies.
+
+### My workflows
+
+Tataaaa, after you learned about my folder structure and file name convention, here are my current workflows and the tools I use for the requirements I described further up.
+
+Please note that **you have to know what you are doing**. My examples here contain folder paths and more that **only work on my machine or my set-up**. **You have to adapt stuff** like paths, file names, and so forth to meet your requirements!
+
+#### Workflow: Moving files from my SD card to the laptop, rotating portrait images, and renaming files
+
+When I want to move data from my digital camera to my GNU/Linux notebook, I take out its Mini-SD storage card and put it in my notebook. Then it gets mounted on `/media/digicam` automatically.
+
+Then, I invoke [getdigicamdata.sh][23] which does several things: it moves the files from the SD card to a temporary folder for processing. The original file names are converted to lower-case characters. All portrait photographs are rotated using [jhead][24]. Also with jhead, I generate file-name time-stamps from the Exif header time-stamps. Using [date2name][25] I add time-stamps to the movie files as well. After processing all those files, they get moved to the destination folder for new digicam files: `$HOME/tmp/digicam/tmp/`.
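The script itself is linked above rather than reproduced here, but its core steps can be sketched roughly like this (hypothetical paths and options; check jhead(1) and date2name's help for the exact format-string syntax on your versions):

```shell
#!/bin/sh
# Rough sketch of a digicam fetch workflow: copy, lower-case, rotate, rename.
SRC=/media/digicam/DCIM
DEST="$HOME/tmp/digicam"

mkdir -p "$DEST"
cp "$SRC"/*/* "$DEST"/ && cd "$DEST" || exit 1

# 1. convert the original file names to lower-case characters
for f in *.JPG *.MOV; do
  [ -e "$f" ] || continue
  mv "$f" "$(printf '%s' "$f" | tr '[:upper:]' '[:lower:]')"
done

# 2. rotate portrait images losslessly according to their Exif flag
jhead -autorot ./*.jpg

# 3. rename images to their Exif date/time-stamp (jhead's -n takes a
#    strftime-style format string and replaces the whole name)
jhead -n%Y-%m-%dT%H.%M.%S ./*.jpg

# 4. movies carry no Exif header, so derive the stamp from the file mtime
date2name --withtime ./*.mov
```

This is a sketch under the assumption that jhead and date2name are installed; it deliberately skips the error handling a real script needs.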
+
+#### Workflow: Folder navigation, viewing, renaming, deleting image files
+
+For skimming through my image and movie files, I prefer to use [geeqie][26] on GNU/Linux. It is a fairly lightweight image browser with one big advantage other file browsers are missing: I can add external scripts/tools that can be invoked by a keyboard shortcut. This way, I am able to extend the feature-set of the image browser with arbitrary external commands.
+
+Basic image management functionality is built into geeqie: navigating my folder hierarchy, viewing image files in window mode and in full-screen mode (shortcut `f`), renaming file names, deleting files, and showing Exif meta-data (shortcut `Ctrl-e`).
+
+On OS X, I use [Xee][27]. Unlike geeqie, it is not extendable by external commands. However, the basic navigation, viewing, and renaming functions are available as well.
+
+#### Workflow: Adding and removing tags
+
+I created a Python script called [filetags][28] which I use for adding tags to, and removing tags from, single files as well as sets of files.
+
+For digital photographs, I use tags such as `specialL` for landscape images that I consider suitable as desktop backgrounds, `specialP` for portrait photographs I would like to show to others, `sel` for a selection, and many more.
+
+##### Initial set-up of filetags with geeqie
+
+Adding filetags to geeqie is a manual step: `Edit > Preferences > Configure Editors ...`. Then create an additional entry with `New`.
There, you can define a new desktop-file which looks like this: + +add-tags.desktop +``` +[Desktop Entry] +Name=filetags +GenericName=filetags +Comment= +Exec=/home/vk/src/misc/vk-filetags-interactive-adding-wrapper-with-gnome-terminal.sh %F +Icon= +Terminal=true +Type=Application +Categories=Application;Graphics; +hidden=false +MimeType=image/*;video/*;image/mpo;image/thm +Categories=X-Geeqie; + +``` + +The wrapper-script `vk-filetags-interactive-adding-wrapper-with-gnome-terminal.sh` is necessary because I want a new terminal window to pop-up in order to add tags to my files: + +vk-filetags-interactive-adding-wrapper-with-gnome-terminal.sh +``` +#!/bin/sh + +/usr/bin/gnome-terminal \ + --geometry=85x15+330+5 \ + --tab-with-profile=big \ + --hide-menubar \ + -x /home/vk/src/filetags/filetags.py --interactive "${@}" + +#end + +``` + +In geeqie, you can add a keyboard shortcut in `Edit > Preferences > Preferences ... > Keyboard`. I associated `t` with the `filetags` command. + +The filetags script is also able to remove tags from a single file or a set of files. It basically uses the same method as described above. The only difference is the additional `--remove` parameter for the filetags script: + +remove-tags.desktop +``` +[Desktop Entry] +Name=filetags-remove +GenericName=filetags-remove +Comment= +Exec=/home/vk/src/misc/vk-filetags-interactive-removing-wrapper-with-gnome-terminal.sh %F +Icon= +Terminal=true +Type=Application +Categories=Application;Graphics; +hidden=false +MimeType=image/*;video/*;image/mpo;image/thm +Categories=X-Geeqie; + +``` + +vk-filetags-interactive-removing-wrapper-with-gnome-terminal.sh +``` +#!/bin/sh + +/usr/bin/gnome-terminal \ + --geometry=85x15+330+5 \ + --tab-with-profile=big \ + --hide-menubar \ + -x /home/vk/src/filetags/filetags.py --interactive --remove "${@}" + +#end + +``` + +For removing tags, I created a keyboard shortcut for `T`. 
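+The effect of `t` and `T` follows my file name convention: tags live after a ` -- ` separator, right before the extension. The following Python sketch illustrates only that naming convention — it is not the real filetags implementation, and the function names are made up for this example:

```python
# Illustrative sketch of the "name -- tag1 tag2.ext" convention that
# filetags maintains; not the real filetags code.
import os

TAG_SEPARATOR = " -- "

def add_tags(filename, tags):
    """Return filename with the given tags merged into its tag section."""
    base, ext = os.path.splitext(filename)
    if TAG_SEPARATOR in base:
        name, tagpart = base.split(TAG_SEPARATOR, 1)
        current = tagpart.split()
    else:
        name, current = base, []
    merged = current + [t for t in tags if t not in current]
    return name + TAG_SEPARATOR + " ".join(merged) + ext

def remove_tags(filename, tags):
    """Return filename with the given tags removed from its tag section."""
    base, ext = os.path.splitext(filename)
    if TAG_SEPARATOR not in base:
        return filename
    name, tagpart = base.split(TAG_SEPARATOR, 1)
    remaining = [t for t in tagpart.split() if t not in tags]
    return name + (TAG_SEPARATOR + " ".join(remaining) if remaining else "") + ext

print(add_tags("2014-05-09 holidays.jpg", ["sel"]))
# → 2014-05-09 holidays -- sel.jpg
```

+Removing the last remaining tag also drops the separator, so the file name falls back to its untagged form.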
+
+##### Using filetags within geeqie
+
+When I skim through image files in the geeqie file browser, I select the files I want to tag (one to many) and press `t`. Then, a small window pops up and asks me for one or more tags. After confirming with `Return`, these tags get added to the file names.
+
+The same goes for removing tags: selecting multiple files, pressing `T`, entering the tags to be removed, and confirming with `Return`. That's it. There is [almost no simpler way to add tags to or remove tags from files][29].
+
+#### Workflow: Advanced file renaming with appendfilename
+
+##### Without appendfilename
+
+Renaming a large set of files can be a tedious process. With original file names like `2014-04-20T17.09.11_p1100386.jpg`, the process of adding a description to a file name is quite annoying. You press `Ctrl-r` (rename) in geeqie, which opens the file rename dialog. The base name (the file name without the file extension) is marked by default. So if you do not want to delete/overwrite the file name (but append to it), you have to press the right-arrow cursor key. Then, the cursor is placed between the base name and the extension. Type in your description (don't forget the initial space character) and confirm with `Return`.
+
+##### Using appendfilename with geeqie
+
+With [appendfilename][30], the process of appending text to file names is much simpler: when I press `a` (append) in geeqie, a dialog window pops up, asking for a text. After confirming with `Return`, the entered text gets placed between the time-stamp and the optional tags.
+
+For example, when I press `a` on `2014-04-20T17.09.11_p1100386.jpg` and type `Pick-nick in Graz`, the file name gets changed to `2014-04-20T17.09.11_p1100386 Pick-nick in Graz.jpg`. When I press `a` once again and enter `with Susan`, the file name gets changed to `2014-04-20T17.09.11_p1100386 Pick-nick in Graz with Susan.jpg`.
If the file name already contains tags, the appended text is placed before the tag-separator.
+
+This way, I do not have to be afraid of overwriting time-stamps or tags. Renaming becomes much more enjoyable for me!
+
+And the best part: when I want to add the same text to multiple selected files, this also works with appendfilename.
+
+##### Initial set-up of appendfilename with geeqie
+
+Add an additional editor to geeqie: `Edit > Preferences > Configure Editors ... > New`. Then enter the desktop file definition:
+
+appendfilename.desktop
+```
+[Desktop Entry]
+Name=appendfilename
+GenericName=appendfilename
+Comment=
+Exec=/home/vk/src/misc/vk-appendfilename-interactive-wrapper-with-gnome-terminal.sh %F
+Icon=
+Terminal=true
+Type=Application
+Categories=Application;Graphics;
+hidden=false
+MimeType=image/*;video/*;image/mpo;image/thm
+Categories=X-Geeqie;
+
+```
+
+Once again, I use a wrapper-script that provides the terminal window:
+
+vk-appendfilename-interactive-wrapper-with-gnome-terminal.sh
+```
+#!/bin/sh
+
+/usr/bin/gnome-terminal \
+    --geometry=90x5+330+5 \
+    --tab-with-profile=big \
+    --hide-menubar \
+    -x /home/vk/src/appendfilename/appendfilename.py "${@}"
+
+#end
+
+```
+
+#### Workflow: Play movie files
+
+On GNU/Linux, I use [mplayer][31] to play back video files. Since geeqie does not play movie files by itself, I have to create a set-up where I can open a movie file in mplayer.
+
+##### Initial set-up of mplayer with geeqie
+
+I had already associated movie file extensions with mplayer using [xdg-open][32]. Therefore, I only had to create a general "open" command in geeqie which uses xdg-open to open any file with its associated application.
+
+Once again, visit `Edit > Preferences > Configure Editors ...` in geeqie and add an entry for `open`:
+
+open.desktop
+```
+[Desktop Entry]
+Name=open
+GenericName=open
+Comment=
+Exec=/usr/bin/xdg-open %F
+Icon=
+Terminal=true
+Type=Application
+hidden=false
+NOMimeType=*;
+MimeType=image/*;video/*
+Categories=X-Geeqie;
+
+```
+
+When you also associate a shortcut such as `o` (see above) with this editor in geeqie, you are able to open video files (and other files) with their associated application.
+
+##### Opening movie files (and others) with xdg-open
+
+After the set-up process from above, you just have to press `o` when your geeqie cursor is on the file. That's it.
+
+#### Workflow: Open in an external image editor
+
+On rare occasions, I want to quickly edit an image file in the GIMP. Therefore, I added a shortcut `g` and associated it with the external editor "GNU Image Manipulation Program" (GIMP), which geeqie had already created by default.
+
+This way, simply pressing `g` opens the current image file in the GIMP.
+
+#### Workflow: Move to archive folder
+
+Now that I have added comments to my file names, I want to move single files to `$HOME/archive/events_memories/2014/`, or sets of files to new folders within this folder, like `$HOME/archive/events_memories/2014/2014-05-08 Business-Marathon After-Show-Party`.
+
+The usual way is to select one or multiple files and move them to a folder with the shortcut `Ctrl-m`.
+
+So booooring.
+
+Therefore, I (again) wrote a Python script which does this job for me: [move2archive][33] (in short: `m2a`) expects one or more files as command line parameters. Then, a dialog appears where I am able to enter an optional folder name. When I do not enter anything at all but press `Return`, the files get moved to the folder of the corresponding year.
When I enter a folder name like `Business-Marathon After-Show-Party`, the date-stamp of the first image file is prepended to the folder name (`$HOME/archive/events_memories/2014/2014-05-08 Business-Marathon After-Show-Party`), the resulting folder gets created, and the files get moved.
+
+Once again: I am in geeqie, select one or more files, press `m` (move) and either press only `Return` (no special sub-folder) or enter a descriptive text which becomes the name of the sub-folder to be created (optionally without a date-stamp).
+
+**No image managing tool is as quick and as fun to use as my geeqie with appendfilename and move2archive via shortcuts.**
+
+##### Initial set-up of m2a with geeqie
+
+Once again, adding `m2a` to geeqie is a manual step: `Edit > Preferences > Configure Editors ...`. Then create an additional entry with `New`. There, you can define a new desktop-file which looks like this:
+
+m2a.desktop
+```
+[Desktop Entry]
+Name=move2archive
+GenericName=move2archive
+Comment=Moving one or more files to my archive folder
+Exec=/home/vk/src/misc/vk-m2a-interactive-wrapper-with-gnome-terminal.sh %F
+Icon=
+Terminal=true
+Type=Application
+Categories=Application;Graphics;
+hidden=false
+MimeType=image/*;video/*;image/mpo;image/thm
+Categories=X-Geeqie;
+
+```
+
+The wrapper-script `vk-m2a-interactive-wrapper-with-gnome-terminal.sh` is necessary because I want a new terminal window to pop up in order to enter my desired destination folder for my files:
+
+vk-m2a-interactive-wrapper-with-gnome-terminal.sh
+```
+#!/bin/sh
+
+/usr/bin/gnome-terminal \
+    --geometry=157x56+330+5 \
+    --tab-with-profile=big \
+    --hide-menubar \
+    -x /home/vk/src/m2a/m2a.py --pauseonexit "${@}"
+
+#end
+
+```
+
+In geeqie, you can add a keyboard shortcut in `Edit > Preferences > Preferences ... > Keyboard`. I associated `m` with the `m2a` command.
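+How m2a derives its destination folder can be sketched in a few lines. This is only an illustration of the naming logic under assumed paths — `destination` is a made-up helper for this example, not part of the real move2archive:

```python
# Illustrative sketch of move2archive's destination-folder logic:
# the ISO datestamp of the first file selects the yearly folder and,
# if a description is given, prefixes the event sub-folder.
import os.path

ARCHIVE = os.path.expanduser("~/archive/events_memories")  # example path

def destination(first_filename, description=None):
    datestamp = first_filename[:10]          # "YYYY-MM-DD" from the convention
    year = datestamp[:4]
    if description:
        # event sub-folder: "<datestamp> <description>" inside the year folder
        return os.path.join(ARCHIVE, year, datestamp + " " + description)
    return os.path.join(ARCHIVE, year)       # plain yearly folder

print(destination("2014-05-08T20.15.00 party.jpg",
                  "Business-Marathon After-Show-Party"))
```

+With no description, the files end up directly in the yearly folder; with one, an event folder named after the first file's datestamp is created.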
+
+#### Workflow: Rotate images (loss-less)
+
+Usually, portrait photographs are automatically marked as such by my digital camera. However, there are certain situations (like taking a photograph from above the subject) where my camera gets it wrong. In those **rare cases**, I have to manually fix the orientation.
+
+You have to know that the JPEG file format is a lossy format which should be used only for photographs and not for computer-generated stuff like screen-shots or diagrams. Rotating a JPEG image file in the naive way usually means decompressing the image file, rotating the resulting image, and re-encoding the result once again. This produces an image with [much worse image quality than the original image][5].
+
+Therefore, you should use a lossless method to rotate your JPEG image files.
+
+Once again, I add an "external editor" to geeqie: `Edit > Preferences > Configure Editors ... > New`. There, I add two entries: one for rotating 270 degrees (which is 90 degrees counter-clockwise) and one for rotating 90 degrees (clockwise) using [exiftran][34]:
+
+rotate-270.desktop
+```
+[Desktop Entry]
+Version=1.0
+Type=Application
+Name=Losslessly rotate JPEG image counterclockwise
+
+# call the helper script
+TryExec=exiftran
+Exec=exiftran -p -2 -i -g %f
+
+# Desktop files that are usable only in Geeqie should be marked like this:
+Categories=X-Geeqie;
+OnlyShowIn=X-Geeqie;
+
+# Show in menu "Edit/Orientation"
+X-Geeqie-Menu-Path=EditMenu/OrientationMenu
+
+MimeType=image/jpeg;
+
+```
+
+rotate-90.desktop
+```
+[Desktop Entry]
+Version=1.0
+Type=Application
+Name=Losslessly rotate JPEG image clockwise
+
+# call the helper script
+TryExec=exiftran
+Exec=exiftran -p -9 -i -g %f
+
+# Desktop files that are usable only in Geeqie should be marked like this:
+Categories=X-Geeqie;
+OnlyShowIn=X-Geeqie;
+
+# Show in menu "Edit/Orientation"
+X-Geeqie-Menu-Path=EditMenu/OrientationMenu
+
+# It can be made verbose
+# X-Geeqie-Verbose=true
+
+MimeType=image/jpeg;
+
+```
+
+I created geeqie keyboard shortcuts for `[` (counter-clockwise) and `]` (clockwise).
+
+#### Workflow: Visualizing GPS coordinates
+
+My digital camera has a GPS sensor which stores the current geographic location within the Exif meta-data of the JPEG files. The location data gets stored in [WGS 84][35] format like "47, 58, 26.73; 16, 23, 55.51" (latitude; longitude). This is not human-readable in the sense I would expect: either a map or a location name. Therefore, I added functionality to geeqie so that I am able to see the location of a single image file on [OpenStreetMap][36]: `Edit > Preferences > Configure Editors ... > New`
+
+photolocation.desktop
+```
+[Desktop Entry]
+Name=vkphotolocation
+GenericName=vkphotolocation
+Comment=
+Exec=/home/vk/src/misc/vkphotolocation.sh %F
+Icon=
+Terminal=true
+Type=Application
+Categories=Application;Graphics;
+hidden=false
+MimeType=image/bmp;image/gif;image/jpeg;image/jpg;image/pjpeg;image/png;image/tiff;image/x-bmp;image/x-gray;image/x-icb;image/x-ico;image/x-png;image/x-portable-anymap;image/x-portable-bitmap;image/x-portable-graymap;image/x-portable-pixmap;image/x-xbitmap;image/x-xpixmap;image/x-pcx;image/svg+xml;image/svg+xml-compressed;image/vnd.wap.wbmp;
+
+```
+
+This calls my wrapper-script named `vkphotolocation.sh`, which uses [ExifTool][37] to extract the coordinates in a suitable format that [Marble][38] is able to read and visualize:
+
+vkphotolocation.sh
+```
+#!/bin/sh
+
+IMAGEFILE="${1}"
+IMAGEFILEBASENAME=`basename ${IMAGEFILE}`
+
+COORDINATES=`exiftool -c %.6f "${IMAGEFILE}" | awk '/GPS Position/ { print $4 " " $6 }'`
+
+if [ "x${COORDINATES}" = "x" ]; then
+    zenity --info --title="${IMAGEFILEBASENAME}" --text="No GPS-location found in the image file."
+else
+    /usr/bin/marble --latlon "${COORDINATES}" --distance 0.5
+fi
+
+#end
+
+```
+
+Mapped to the keyboard shortcut `G`, I can quickly bring up the **map position of a single image file**.
+
+When I want to visualize the **positions of multiple JPEG image files as a path**, I use [GpsPrune][39]. I was not able to find a way to make GpsPrune take a set of files as command line parameters. Because of this, I have to start GpsPrune manually and select a set of files or a folder with `File > Add photos`.
+
+This way, I get a dot for each JPEG location on a map of OpenStreetMap (if configured accordingly). By clicking on such a dot, I get details of the corresponding image.
+
+If you happen to be abroad while taking photographs, visualizing the GPS positions is a **great help for adding descriptions** to the file names!
+
+#### Workflow: Filtering photographs according to their GPS coordinates
+
+This is not one of my workflows. For the sake of completeness, I list features of tools that make this workflow possible. What I would like to do is pick out, from a big pile of images, only those photographs that are within a certain area (a rectangle, or a point plus a distance).
+
+So far, I have found only [DigiKam][40], which is able to [filter according to a rectangle][41]. If you know another tool, please add it to the comments below or write an email.
+
+#### Workflow: Showing a sub-set of a given set
+
+As described in the requirements above, I want to be able to define a sub-set of files within a folder in order to present this small collection to other people.
+
+The work-flow is pretty simple: I add a tag (via `t`/filetags) to the files of the selection. For this, I use the tag `sel`, which is short for "selection". After I have tagged the set of files, I can press `s`, which I associated with a script that shows only the files tagged with `sel`.
+
+Of course, this also works with any tag or tag combination.
Therefore, with the same method, you are able to get a decent overview on all photos of your wedding that are tagged with "church" and "rings". + +Nifty feature, isn't it? :-) + +##### Initial set-up of filetags for filtering according to tags + geeqie + +You have to define an additional "external editor": `Edit > Preferences > Configure Editors ... > New`: + +filter-tags.desktop +``` +[Desktop Entry] +Name=filetag-filter +GenericName=filetag-filter +Comment= +Exec=/home/vk/src/misc/vk-filetag-filter-wrapper-with-gnome-terminal.sh +Icon= +Terminal=true +Type=Application +Categories=Application;Graphics; +hidden=false +MimeType=image/*;video/*;image/mpo;image/thm +Categories=X-Geeqie; + +``` + +This once again calls a wrapper-script I wrote: + +vk-filetag-filter-wrapper-with-gnome-terminal.sh +``` +#!/bin/sh + +/usr/bin/gnome-terminal \ + --geometry=85x15+330+5 \ + --hide-menubar \ + -x /home/vk/src/filetags/filetags.py --filter + +#end + +``` + +What `filetags` with parameter `--filter` does is basically the following: the user gets asked to enter one or more tags. Then, all matching files of the current folder are linked to `$HOME/.filetags_tagfilter/` using [symbolic links][42]. Then, a new geeqie instance is started which shows the linked files. + +After quitting this new geeqie instance, you see the old geeqie instance, from where you invoked the selection process. + +#### Summary with a real world example + +Wow, this was a very long blog entry. No wonder that you might have lost the overview here and there. 
To sum up the things I am able to do within geeqie (beyond its standard feature set), here is the table of my shortcuts:
+
+| Shortcut | Function |
+| -------- | -------- |
+| `m` | m2a |
+| `o` | open (for non-images) |
+| `a` | add text to file name |
+| `t` | filetags (add) |
+| `T` | filetags (remove) |
+| `s` | filetags (filter) |
+| `g` | gimp |
+| `G` | show GPS position |
+| `[` | lossless rotate counterclockwise |
+| `]` | lossless rotate clockwise |
+| `Ctrl-e` | EXIF |
+| `f` | full-screen |
+
+Parts of a file name (including its path) and the tools I use to manipulate the corresponding components:
+```
+ /this/is/a/folder/2014-04-20T17.09 Pick-nick in Graz -- food graz.jpg
+ [ m2a ] [ date2name ] [ appendfilename ] [filetags]
+
+```
+
+In practice, I do the following steps to get my photographs from the camera to my archive: I put the SD memory card into the SD card reader of my computer. Then I start [getdigicamdata.sh][23]. After it is finished, I open `$HOME/tmp/digicam/tmp/` within geeqie. I skim through the photographs and delete the ones that did not work out. If there is an image with the wrong orientation, I correct it with `[` or `]`.
+
+In a second run, I add descriptions to the files I consider worth commenting on (`a`). Whenever I want to add tags, I do so as well: I quickly mark all files that should share a tag (`Ctrl` \+ mouse-click) and tag them using [filetags][28] (`t`).
+
+To combine files from a given event, I select the corresponding files and move them to their "event-folder" within the yearly archive folder by typing the event description in [move2archive][33] (`m`). The rest (with no special event folder) is moved to the yearly archive directly by move2archive (`m`) without stating an event description.
+
+To finish my work-flow, I delete all files on the SD card, unmount it from the operating system, and put it back into my digital camera.
+
+That's it.
+
+Because this work-flow requires almost no overhead at all, commenting, tagging, and filing photographs is not a tedious job any more.
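+The symlink trick behind the `s` filter can be sketched as follows. This is only an illustration of the idea — matching tags and linking the hits into a temporary folder for a fresh browser instance to display — not the real `filetags --filter` code; all function names and paths here are made up:

```python
# Illustrative sketch of tag-filtering via symbolic links: files whose
# tag section contains all requested tags get linked into a target
# folder (standing in for ~/.filetags_tagfilter/).
import os
import tempfile

def matches(filename, wanted):
    """True if the file name's tag section contains every wanted tag."""
    base = os.path.splitext(filename)[0]
    tags = base.split(" -- ", 1)[1].split() if " -- " in base else []
    return all(t in tags for t in wanted)

def link_selection(folder, wanted, target):
    """Symlink all matching files of folder into target."""
    os.makedirs(target, exist_ok=True)
    for name in os.listdir(folder):
        if matches(name, wanted):
            os.symlink(os.path.join(folder, name), os.path.join(target, name))

# demo with throw-away files
src = tempfile.mkdtemp()
dst = os.path.join(tempfile.mkdtemp(), "tagfilter")
for n in ("a -- church rings.jpg", "b -- church.jpg", "c.jpg"):
    open(os.path.join(src, n), "w").close()
link_selection(src, ["church", "rings"], dst)
print(sorted(os.listdir(dst)))
# → ['a -- church rings.jpg']
```

+A viewer pointed at the target folder then shows exactly the tagged sub-set, without copying or moving any originals.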
+
+### Finally
+
+So, this is a detailed description of the work-flows related to the photographs and movies I make. You probably have found additional stuff I might be interested in. So please do not hesitate to leave a comment or send an email using the links below.
+
+I would also like to get feed-back on whether my work-flow works for you as well. And: if you have published your work-flows or found descriptions of other people's work-flows, please do leave a comment as well!
+
+Have fun and don't waste your time with the wrong tools or inefficient methods!
+
+### Other Tools
+
+Read about [gThumb in this article][43].
+
+Please do suggest your tools of choice if you've got the feeling that they fit the requirements mentioned above.
+
+### Email Comments
+
+> Date: Sat, 26 Aug 2017 22:05:09 +0200
+> Hello Karl,
+> I like your articles and work with memacs and of course orgmode but I am not very familiar with python by the way... in your blog post of "managing-digital-photographs" you write about to open videos with [Geeqie][26].
+> It works, but I cant see any video thumbnails in the browser of Geeqie. Do you have any suggestions to get this to work?
+> Thank you, Thomas
+
+Hi Thomas,
+
+Thanks for your kind words. I always feel great when somebody finds my work useful in his/her life. Unfortunately, most of the time I never hear about it.
+
+Yes, I sometimes use Geeqie to visualize folders that contain not only image files but movie files as well. In those cases, I don't see any thumbnail image of the video. You're absolutely right: there are many file browsers out there which are able to display some kind of preview image of a video.
+
+Quite frankly, I never thought of video thumbnails and I don't miss them. A quick search in the preferences and with my search engine did not suggest that there is a way to enable video previews in Geeqie. So no luck here.
+ +-------------------------------------------------------------------------------- + +via: http://karl-voit.at/managing-digital-photographs/ + +作者:[Karl Voit][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://karl-voit.at +[1]:https://en.wikipedia.org/wiki/Jpeg +[2]:http://en.wikipedia.org/wiki/Vendor_lock-in +[3]:https://en.wikipedia.org/wiki/Raw_image_format +[4]:http://www.gimp.org/ +[5]:http://petapixel.com/2012/08/14/why-you-should-always-rotate-original-jpeg-photos-losslessly/ +[6]:https://en.wikipedia.org/wiki/Gps +[7]:https://en.wikipedia.org/wiki/Iso_date +[8]:https://en.wikipedia.org/wiki/Ntfs +[9]:https://en.wikipedia.org/wiki/Exif +[10]:http://www.isisinform.com/reinventing-knowledge-the-medieval-controversy-of-alphabetical-order/ +[11]:https://en.wikipedia.org/wiki/File_name_extension +[12]:http://karl-voit.at/tagstore/en/papers.shtml +[13]:https://en.wikipedia.org/wiki/Hashtag +[14]:https://en.wikipedia.org/wiki/Dot-file +[15]:https://en.wikipedia.org/wiki/NTFS#Alternate_data_streams_.28ADS.29 +[16]:https://github.com/novoid/Memacs/blob/master/docs/memacs_filenametimestamps.org +[17]:https://github.com/novoid/Memacs +[18]:http://www.cis.upenn.edu/~bcpierce/unison/ +[19]:https://github.com/novoid/LaTeX-KOMA-template +[20]:http://orgmode.org/ +[21]:http://karl-voit.at/tags/emacs +[22]:https://twitter.com/search?q%3D%2523orgmode&src%3Dtypd +[23]:https://github.com/novoid/getdigicamdata.sh +[24]:http://www.sentex.net/%3Ccode%3Emwandel/jhead/ +[25]:https://github.com/novoid/date2name +[26]:http://geeqie.sourceforge.net/ +[27]:http://xee.c3.cx/ +[28]:https://github.com/novoid/filetag +[29]:http://karl-voit.at/tagstore/ +[30]:https://github.com/novoid/appendfilename +[31]:http://www.mplayerhq.hu +[32]:https://wiki.archlinux.org/index.php/xdg-open +[33]:https://github.com/novoid/move2archive 
+[34]:http://manpages.ubuntu.com/manpages/raring/man1/exiftran.1.html +[35]:https://en.wikipedia.org/wiki/WGS84#A_new_World_Geodetic_System:_WGS_84 +[36]:http://www.openstreetmap.org/ +[37]:http://www.sno.phy.queensu.ca/~phil/exiftool/ +[38]:http://userbase.kde.org/Marble/Tracking +[39]:http://activityworkshop.net/software/gpsprune/ +[40]:https://en.wikipedia.org/wiki/DigiKam +[41]:https://docs.kde.org/development/en/extragear-graphics/digikam/using-kapp.html#idp7659904 +[42]:https://en.wikipedia.org/wiki/Symbolic_link +[43]:http://karl-voit.at/2017/02/19/gthumb diff --git a/sources/tech/20170310 9 Lightweight Linux Applications to Speed Up Your System.md b/sources/tech/20170310 9 Lightweight Linux Applications to Speed Up Your System.md deleted file mode 100644 index 5d5696cf6c..0000000000 --- a/sources/tech/20170310 9 Lightweight Linux Applications to Speed Up Your System.md +++ /dev/null @@ -1,213 +0,0 @@ -9 Lightweight Linux Applications to Speed Up Your System -====== -**Brief:** One of the many ways to [speed up Ubuntu][1] system is to use lightweight alternatives of the popular applications. We have already seen [must have Linux application][2] earlier. we'll see the lightweight alternative applications for Ubuntu and other Linux distributions. - -![Use these Lightweight alternative applications in Ubuntu Linux][4] - -## 9 Lightweight alternatives of popular Linux applications - -Is your Linux system slow? Are the applications taking a long time to open? The best option you have is to use a [light Linux distro][5]. But it's not always possible to reinstall an operating system, is it? - -So if you want to stick to your present Linux distribution, but want improved performance, you should use lightweight alternatives of the applications you are using. Here, I'm going to put together a small list of lightweight alternatives to various Linux applications. - -Since I am using Ubuntu, I have provided installation instructions for Ubuntu-based Linux distributions. 
But these applications will work on almost all other Linux distribution. You just have to find a way to install these lightweight Linux software in your distro. - -### 1. Midori: Web Browser - -Midori is one of the most lightweight web browsers that have reasonable compatibility with the modern web. It is open source and uses the same rendering engine that Google Chrome was initially built on -- WebKit. It is super fast and minimal yet highly customizable. - -![Midori Browser][6] - -It has plenty of extensions and options to tinker with. So if you are a power user, it's a great choice for you too. If you face any problems browsing round the web, check the [Frequently Asked Question][7] section of their website -- it contains the common problems you might face along with their solution. - -[Midori][8] - -#### Installing Midori on Ubuntu based distributions - -Midori is available on Ubuntu via the official repository. Just run the following commands for installing it: -``` - sudo apt install midori -``` - -### 2. Trojita: email client - -Trojita is an open source robust IMAP e-mail client. It is fast and resource efficient. I can certainly call it one of the [best email clients for Linux][9]. If you can live with only IMAP support on your e-mail client, you might not want to look any further. - -![Trojitá][10] - -Trojita uses various techniques -- on-demand e-mail loading, offline caching, bandwidth-saving mode etc. -- for achieving its impressive performance. - -[Trojita][11] - -#### Installing Trojita on Ubuntu based distributions - -Trojita currently doesn't have an official PPA for Ubuntu. But that shouldn't be a problem. 
You can install it quite easily using the following commands: -``` -sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/jkt-gentoo:/trojita/xUbuntu_16.04/ /' > /etc/apt/sources.list.d/trojita.list" -wget http://download.opensuse.org/repositories/home:jkt-gentoo:trojita/xUbuntu_16.04/Release.key -sudo apt-key add - < Release.key -sudo apt update -sudo apt install trojita -``` - -### 3. GDebi: Package Installer - -Sometimes you need to quickly install DEB packages. Ubuntu Software Center is a resource-heavy application and using it just for installing .deb files is not wise. - -Gdebi is certainly a nifty tool for the same purpose, just with a minimal graphical interface. - -![GDebi][12] - -GDebi is totally lightweight and does its job flawlessly. You should even [make Gdebi the default installer for DEB files][13]. - -#### Installing GDebi on Ubuntu based distributions - -You can install GDebi on Ubuntu with this simple one-liner: -``` -sudo apt install gdebi -``` - -### 4. App Grid: Software Center - -If you use software center frequently for searching, installing and managing applications on Ubuntu, App Grid is a must have application. It is the most visually appealing and yet fast alternative to the default Ubuntu Software Center. - -![App Grid][14] - -App Grid supports ratings, reviews and screenshots for applications. - -[App Grid][15] - -#### Installing App Grid on Ubuntu based distributions - -App Grid has its official PPA for Ubuntu. Use the following commands for installing App Grid: -``` -sudo add-apt-repository ppa:appgrid/stable -sudo apt update -sudo apt install appgrid -``` - -### 5. Yarock: Music Player - -Yarock is an elegant music player with a modern and minimal user interface. It is lightweight in design and yet it has a comprehensive list of advanced features. 
- -![Yarock][16] - -The main features of Yarock include multiple music collections, rating, smart playlist, multiple back-end option, desktop notification, scrobbling, context fetching etc. - -[Yarock][17] - -#### Installing Yarock on Ubuntu based distributions - -You will have to install Yarock on Ubuntu via PPA using the following commands: -``` -sudo add-apt-repository ppa:nilarimogard/webupd8 -sudo apt update -sudo apt install yarock -``` - -### 6. VLC: Video Player - -Who doesn't need a video player? And who has never heard about VLC? It doesn't really need any introduction. - -![VLC][18] - -VLC is all you need to play various media files on Ubuntu and it is quite lightweight too. It works flawlessly on even on very old PCs. - -[VLC][19] - -#### Installing VLC on Ubuntu based distributions - -VLC has official PPA for Ubuntu. Enter the following commands for installing it: -``` -sudo apt install vlc -``` - -### 7. PCManFM: File Manager - -PCManFM is the standard file manager from LXDE. As with the other applications from LXDE, this one too is lightweight. If you are looking for a lighter alternative for your file manager, try this one. - -![PCManFM][20] - -Although coming from LXDE, PCManFM works with other desktop environments just as well. - -#### Installing PCManFM on Ubuntu based distributions - -Installing PCManFM on Ubuntu will just take one simple command: -``` -sudo apt install pcmanfm -``` - -### 8. Mousepad: Text Editor - -Nothing can beat command-line text editors like - nano, vim etc. in terms of being lightweight. But if you want a graphical interface, here you go -- Mousepad is a minimal text editor. It's extremely lightweight and blazing fast. It comes with a simple customizable user interface with multiple themes. - -![Mousepad][21] - -Mousepad supports syntax highlighting. So, you can also use it as a basic code editor. 
- -#### Installing Mousepad on Ubuntu based distributions - -For installing Mousepad use the following command: -``` -sudo apt install mousepad -``` - -### 9. GNOME Office: Office Suite - -Many of us need to use office applications quite often. Generally, most of the office applications are bulky in size and resource hungry. Gnome Office is quite lightweight in that respect. Gnome Office is technically not a complete office suite. It's composed of different standalone applications and among them, **AbiWord** & **Gnumeric** stands out. - -**AbiWord** is the word processor. It is lightweight and a lot faster than other alternatives. But that came to be at a cost -- you might miss some features like macros, grammar checking etc. It's not perfect but it works. - -![AbiWord][22] - -**Gnumeric** is the spreadsheet editor. Just like AbiWord, Gnumeric is also very fast and it provides accurate calculations. If you are looking for a simple and lightweight spreadsheet editor, Gnumeric has got you covered. - -![Gnumeric][23] - -There are some other applications listed under Gnome Office. You can find them in the official page. - -[Gnome Office][24] - -#### Installing AbiWord & Gnumeric on Ubuntu based distributions - -For installing AbiWord & Gnumeric, simply enter the following command in your terminal: -``` -sudo apt install abiword gnumeric -``` - -That's all for today. Would you like to add some other **lightweight Linux applications** to this list? Do let us know! 
- --------------------------------------------------------------------------------- - -via: https://itsfoss.com/lightweight-alternative-applications-ubuntu/ - -作者:[Munif Tanjim][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://itsfoss.com/author/munif/ -[1]:https://itsfoss.com/speed-up-ubuntu-1310/ -[2]:https://itsfoss.com/essential-linux-applications/ -[4]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/Lightweight-alternative-applications-for-Linux-800x450.jpg -[5]:https://itsfoss.com/lightweight-linux-beginners/ -[6]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/Midori-800x497.png -[7]:http://midori-browser.org/faqs/ -[8]:http://midori-browser.org/ -[9]:https://itsfoss.com/best-email-clients-linux/ -[10]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/Trojit%C3%A1-800x608.png -[11]:http://trojita.flaska.net/ -[12]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/GDebi.png -[13]:https://itsfoss.com/gdebi-default-ubuntu-software-center/ -[14]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/AppGrid-800x553.png -[15]:http://www.appgrid.org/ -[16]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/Yarock-800x529.png -[17]:https://seb-apps.github.io/yarock/ -[18]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/VLC-800x526.png -[19]:http://www.videolan.org/index.html -[20]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/PCManFM.png -[21]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/Mousepad.png -[22]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/AbiWord-800x626.png -[23]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/03/Gnumeric-800x470.png -[24]:https://gnome.org/gnome-office/ diff --git 
a/sources/tech/20170926 Managing users on Linux systems.md b/sources/tech/20170926 Managing users on Linux systems.md index 9842eab1d5..999110166d 100644 --- a/sources/tech/20170926 Managing users on Linux systems.md +++ b/sources/tech/20170926 Managing users on Linux systems.md @@ -1,3 +1,4 @@ +(Translating by runningwater) Managing users on Linux systems ====== Your Linux users may not be raging bulls, but keeping them happy is always a challenge as it involves managing their accounts, monitoring their access rights, tracking down the solutions to problems they run into, and keeping them informed about important changes on the systems they use. Here are some of the tasks and tools that make the job a little easier. @@ -215,7 +216,7 @@ Managing user accounts on a busy server depends in part on starting out with wel via: https://www.networkworld.com/article/3225109/linux/managing-users-on-linux-systems.html 作者:[Sandra Henry-Stocker][a] -译者:[译者ID](https://github.com/译者ID) +译者:[runningwater](https://github.com/runningwater) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/sources/tech/20171009 What-s next in DevOps- 5 trends to watch.md b/sources/tech/20171009 What-s next in DevOps- 5 trends to watch.md deleted file mode 100644 index be07522161..0000000000 --- a/sources/tech/20171009 What-s next in DevOps- 5 trends to watch.md +++ /dev/null @@ -1,89 +0,0 @@ -Translating by qhwdw -What’s next in DevOps: 5 trends to watch -====== - -![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/CIO%20Magnifying%20Glass%20Code.png?itok=IqZsJCEH) - -The term "DevOps" is typically credited [to this 2008 presentation][1] on agile infrastructure and operations. Now ubiquitous in IT vocabulary, the mashup word is less than 10 years old: We're still figuring out this modern way of working in IT. 
- -Sure, people who have been "doing DevOps" for years have accrued plenty of wisdom along the way. But most DevOps environments - and the mix of people and [culture][2], process and methodology, and tools and technology - are far from mature. - -More change is coming. That's kind of the whole point. "DevOps is a process, an algorithm," says Robert Reeves, CTO at [Datical][3]. "Its entire purpose is to change and evolve over time." - -What should we expect next? Here are some key trends to watch, according to DevOps experts. - -### 1. Expect increasing interdependence between DevOps, containers, and microservices - -The forces driving the proliferation of DevOps culture themselves may evolve. Sure, DevOps will still fundamentally knock down traditional IT silos and bottlenecks, but the reasons for doing so may become more urgent. Exhibits A & B: Growing interest in and [adoption of containers and microservices][4]. The technologies pack a powerful, scalable one-two punch, best paired with planned, [ongoing management][5]. - -"One of the major factors impacting DevOps is the shift towards microservices," says Arvind Soni, VP of product at [Netsil][6], adding that containers and orchestration are enabling developers to package and deliver services at an ever-increasing pace. DevOps teams will likely be tasked with helping to fuel that pace and to manage the ongoing complexity of a scalable microservices architecture. - -### 2. Expect fewer safety nets - -DevOps enables teams to build software with greater speed and agility, deploying faster and more frequently, while improving quality and stability. But good IT leaders don't typically ignore risk management, so plenty of early DevOps iterations began with safeguards and fallback positions in place. To get to the next level of speed and agility, more teams will take off their training wheels. 
- -"As teams mature, they may decide that some of the guard rails that were added early on may not be required anymore," says Nic Grange, CTO of [Retriever Communications][7]. Grange gives the example of a staging server: As DevOps teams mature, they may decide it's no longer necessary, especially if they're rarely catching issues in that pre-production environment. (Grange points out that this move isn't advisable for inexperienced teams.) - -"The team may be at a point where it is confident enough with its monitoring and ability to identify and resolve issues in production," Grange says. "The process of deploying and testing in staging may just be slowing them down without any demonstrable value." - -### 3. Expect DevOps to spread elsewhere - -DevOps brings two traditional IT groups, development and operations, into much closer alignment. As more companies see the benefits in the trenches, the culture is likely to spread. It's already happening in some organizations, evident in the increasing appearance of the term "DevSecOps," which reflects the intentional and much earlier inclusion of security in the software development lifecycle. - -"DevSecOps is not only tools, it is integrating a security mindset into development practices early on," says Derek Weeks, VP and DevOps advocate at [Sonatype][8]. - -Doing that isn't a technology challenge, it's a cultural challenge, says [Red Hat][9] security strategist Kirsten Newcomer. - -"Security teams have historically been isolated from development teams - and each team has developed deep expertise in different areas of IT," Newcomer says. "It doesn't need to be this way. Enterprises that care deeply about security and also care deeply about their ability to quickly deliver business value through software are finding ways to move security left in their application development lifecycles. They're adopting DevSecOps by integrating security practices, tooling, and automation throughout the CI/CD pipeline. 
To do this well, they're integrating their teams - security professionals are embedded with application development teams from inception (design) through to production deployment. Both sides are seeing the value - each team expands their skill sets and knowledge base, making them more valuable technologists. DevOps done right - or DevSecOps - improves IT security." - -Beyond security, look for DevOps expansion into areas such as database teams, QA, and even potentially outside of IT altogether. - -"This is a very DevOps thing to do: Identify areas of friction and resolve them," Datical's Reeves says. "Security and databases are currently the big bottlenecks for companies that have previously adopted DevOps." - -### 4. Expect ROI to increase - -As companies get deeper into their DevOps work, IT teams will be able to show greater return on investment in methodologies, processes, containers, and microservices, says Eric Schabell, global technology evangelist director, Red Hat. "The Holy Grail was to be moving faster, accomplishing more and becoming flexible. As these components find broader adoption and organizations become more vested in their application the results shall appear," Schabell says. - -"Everything has a learning curve with a peak of excitement as the emerging technologies gain our attention, but also go through a trough of disillusionment when the realization hits that applying it all is hard. Finally, we'll start to see a climb out of the trough and reap the benefits that we've been chasing with DevOps, containers, and microservices." - -### 5. Expect success metrics to keep evolving - -"I believe that two of the core tenets of the DevOps culture, automation and measurement, are never 'done,'" says Mike Kail, CTO at [CYBRIC][10] and former CIO at Yahoo. "There will always be opportunities to automate a task or improve upon an already automated solution, and what is important to measure will likely change and expand over time. 
This maturation process is a continuous journey, not a destination or completed task." - -In the spirit of DevOps, that maturation and learning will also depend on collaboration and sharing. Kail thinks it's still very much early days for Agile methodologies and DevOps culture, and that means plenty of room for growth. - -"As more mature organizations continue to measure actionable metrics, I believe - [I] hope - that those learnings will be broadly shared so we can all learn and improve from them," Kail says. - -As Red Hat technology evangelist [Gordon Haff][11] recently noted, organizations working hard to improve their DevOps metrics are using factors tied to business outcomes. "You probably don't really care about how many lines of code your developers write, whether a server had a hardware failure overnight, or how comprehensive your test coverage is," [writes Haff][12]. "In fact, you may not even directly care about the responsiveness of your website or the rapidity of your updates. But you do care to the degree such metrics can be correlated with customers abandoning shopping carts or leaving for a competitor." - -Some examples of DevOps metrics tied to business outcomes include customer ticket volume (as an indicator of overall customer satisfaction) and Net Promoter Score (the willingness of customers to recommend a company's products or services). For more on this topic, see his full article, [DevOps metrics: Are you measuring what matters? ][12] - -### No rest for the speedy - -By the way, if you were hoping things would get a little more leisurely anytime soon, you're out of luck. - -"If you think releases are fast today, you ain't seen nothing yet," Reeves says. "That's why bringing all stakeholders, including security and database teams, into the DevOps tent is so crucial. The friction caused by these two groups today will only grow as releases increase exponentially." 
- --------------------------------------------------------------------------------- - -via: https://enterprisersproject.com/article/2017/10/what-s-next-devops-5-trends-watch - -作者:[Kevin Casey][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://enterprisersproject.com/user/kevin-casey -[1]:http://www.jedi.be/presentations/agile-infrastructure-agile-2008.pdf -[2]:https://enterprisersproject.com/article/2017/9/5-ways-nurture-devops-culture -[3]:https://www.datical.com/ -[4]:https://enterprisersproject.com/article/2017/9/microservices-and-containers-6-things-know-start-time -[5]:https://enterprisersproject.com/article/2017/10/microservices-and-containers-6-management-tips-long-haul -[6]:https://netsil.com/ -[7]:http://retrievercommunications.com/ -[8]:https://www.sonatype.com/ -[9]:https://www.redhat.com/en/ -[10]:https://www.cybric.io/ -[11]:https://enterprisersproject.com/user/gordon-haff -[12]:https://enterprisersproject.com/article/2017/7/devops-metrics-are-you-measuring-what-matters diff --git a/sources/tech/20171010 Operating a Kubernetes network.md b/sources/tech/20171010 Operating a Kubernetes network.md new file mode 100644 index 0000000000..abac12f718 --- /dev/null +++ b/sources/tech/20171010 Operating a Kubernetes network.md @@ -0,0 +1,217 @@ +**translating by [erlinux](https://github.com/erlinux)** +Operating a Kubernetes network +============================================================ + +I’ve been working on Kubernetes networking a lot recently. One thing I’ve noticed is, while there’s a reasonable amount written about how to **set up** your Kubernetes network, I haven’t seen much about how to **operate** your network and be confident that it won’t create a lot of production incidents for you down the line. 
+ +In this post I’m going to try to convince you of three things: (all I think pretty reasonable :)) + +* Avoiding networking outages in production is important + +* Operating networking software is hard + +* It’s worth thinking critically about major changes to your networking infrastructure and the impact that will have on your reliability, even if very fancy Googlers say “this is what we do at Google”. (google engineers are doing great work on Kubernetes!! But I think it’s important to still look at the architecture and make sure it makes sense for your organization.) + +I’m definitely not a Kubernetes networking expert by any means, but I have run into a few issues while setting things up and definitely know a LOT more about Kubernetes networking than I used to. + +### Operating networking software is hard + +Here I’m not talking about operating physical networks (I don’t know anything about that), but instead about keeping software like DNS servers & load balancers & proxies working correctly. + +I have been working on a team that’s responsible for a lot of networking infrastructure for a year, and I have learned a few things about operating networking infrastructure! (though I still have a lot to learn obviously). 3 overall thoughts before we start: + +* Networking software often relies very heavily on the Linux kernel. So in addition to configuring the software correctly you also need to make sure that a bunch of different sysctls are set correctly, and a misconfigured sysctl can easily be the difference between “everything is 100% fine” and “everything is on fire”. + +* Networking requirements change over time (for example maybe you’re doing 5x more DNS lookups than you were last year! Maybe your DNS server suddenly started returning TCP DNS responses instead of UDP which is a totally different kernel workload!). This means software that was working fine before can suddenly start having issues. 
+ +* To fix a production networking issue you often need a lot of expertise. (for example see this [great post by Sophie Haskins on debugging a kube-dns issue][1]) I’m a lot better at debugging networking issues than I was, but that’s only after spending a huge amount of time investing in my knowledge of Linux networking. + +I am still far from an expert at networking operations but I think it seems important to: + +1. Very rarely make major changes to the production networking infrastructure (because it’s super disruptive) + +2. When you  _are_  making major changes, think really carefully about what the failure modes for the new network architecture are + +3. Have multiple people who are able to understand your networking setup + +Switching to Kubernetes is obviously a pretty major networking change! So let’s talk about what some of the things that can go wrong are! + +### Kubernetes networking components + +The Kubernetes networking components we’re going to talk about in this post are: + +* Your overlay network backend (like flannel/calico/weave net/romana) + +* `kube-dns` + +* `kube-proxy` + +* Ingress controllers / load balancers + +* The `kubelet` + +If you’re going to set up HTTP services you probably need all of these. I’m not using most of these components yet but I’m trying to understand them, so that’s what this post is about. + +### The simplest way: Use host networking for all your containers + +Let’s start with the simplest possible thing you can do. This won’t let you run HTTP services in Kubernetes. I think it’s pretty safe because there are fewer moving parts. + +If you use host networking for all your containers I think all you need to do is: + +1. Configure the kubelet to configure DNS correctly inside your containers + +2. That’s it + +If you use host networking for literally every pod you don’t need kube-dns or kube-proxy. You don’t even need a working overlay network.
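The two-step setup above can be sketched as a pod manifest. This is a minimal illustration (the pod name and image are placeholders, not from the post): setting `hostNetwork: true` puts the container straight into the node's network namespace, which is why none of the other components are needed.

```shell
# host_net_manifest: print a minimal host-networking pod manifest.
# The pod name and image below are placeholders for illustration.
host_net_manifest() {
cat <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: host-net-demo
spec:
  hostNetwork: true   # share the node's network namespace; no overlay involved
  containers:
  - name: app
    image: nginx
EOF
}

# On a real cluster you would pipe this into kubectl:
#   host_net_manifest | kubectl apply -f -
host_net_manifest
```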
+ +In this setup your pods can connect to the outside world (the same way any process on your hosts would talk to the outside world) but the outside world can’t connect to your pods. + +This isn’t super important (I think most people want to run HTTP services inside Kubernetes and actually communicate with those services) but I do think it’s interesting to realize that at some level all of this networking complexity isn’t strictly required and sometimes you can get away without using it. Avoiding networking complexity seems like a good idea to me if you can. + +### Operating an overlay network + +The first networking component we’re going to talk about is your overlay network. Kubernetes assumes that every pod has an IP address and that you can communicate with services inside that pod by using that IP address. When I say “overlay network” this is what I mean (“the system that lets you refer to a pod by its IP address”). + +All other Kubernetes networking stuff relies on the overlay networking working correctly. You can read more about the [kubernetes networking model here][10]. + +The approach Kelsey Hightower describes in [kubernetes the hard way][11] seems pretty good but it’s not really viable on AWS for clusters of more than 50 nodes or so, so I’m not going to talk about that. + +There are a lot of overlay network backends (calico, flannel, weaveworks, romana) and the landscape is pretty confusing. But as far as I’m concerned an overlay network has 2 responsibilities: + +1. Make sure your pods can send network requests outside your cluster + +2. Keep a stable mapping of nodes to subnets and keep every node in your cluster updated with that mapping. Do the right thing when nodes are added & removed. + +Okay! So! What can go wrong with your overlay network? + +* The overlay network is responsible for setting up iptables rules (basically `iptables -t nat -A POSTROUTING -s $SUBNET -j MASQUERADE`) to ensure that containers can make network requests outside Kubernetes.
If something goes wrong with this rule then your containers can’t connect to the external network. This isn’t that hard (it’s just a few iptables rules) but it is important. I made a [pull request][2] because I wanted to make sure this was resilient + +* Something can go wrong with adding or deleting nodes. We’re using the flannel hostgw backend and at the time we started using it, node deletion [did not work][3]. + +* Your overlay network is probably dependent on a distributed database (etcd). If that database has an incident, this can cause issues. For example [https://github.com/coreos/flannel/issues/610][4] says that if you have data loss in your flannel etcd cluster it can result in containers losing network connectivity. (this has now been fixed) + +* You upgrade Docker and everything breaks + +* Probably more things! + +I’m mostly talking about past issues in Flannel here but I promise I’m not picking on Flannel – I actually really **like** Flannel because I feel like it’s relatively simple (for instance the [vxlan backend part of it][12] is like 500 lines of code) and I feel like it’s possible for me to reason through any issues with it. And it’s obviously continuously improving. They’ve been great about reviewing pull requests. + +My approach to operating an overlay network so far has been: + +* Learn how it works in detail and how to debug it (for example the hostgw network backend for Flannel works by creating routes, so you mostly just need to do `sudo ip route list` to see whether it’s doing the correct thing) + +* Maintain an internal build so it’s easy to patch it if needed + +* When there are issues, contribute patches upstream + +I think it’s actually really useful to go through the list of merged PRs and see bugs that have been fixed in the past – it’s a bit time consuming but is a great way to get a concrete list of kinds of issues other people have run into. 
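As a concrete version of that `sudo ip route list` check: with the host-gw backend, each remote node's pod subnet should show up as one `via` route. A small sketch that flags expected subnets with no route (the subnets and node IPs in the comments are made up for illustration):

```shell
# missing_subnets SUBNET...: read an `ip route` listing on stdin and print any
# expected pod subnet that has no route. Empty output means the node-to-subnet
# mapping looks healthy. A healthy host-gw table has lines like:
#   10.4.1.0/24 via 192.168.0.11 dev eth0    (values made up)
missing_subnets() {
  routes=$(cat)
  for subnet in "$@"; do
    printf '%s\n' "$routes" | grep -q "^$subnet " || echo "missing route: $subnet"
  done
}

# On a real node: sudo ip route list | missing_subnets 10.4.1.0/24 10.4.2.0/24
```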
+ +It’s possible that for other people their overlay networks just work but that hasn’t been my experience and I’ve heard other folks report similar issues. If you have an overlay network setup that is a) on AWS and b) works on a cluster of more than 50-100 nodes, and you feel confident about operating it, I would like to know. + +### Operating kube-proxy and kube-dns? + +Now that we have some thoughts about operating overlay networks, let’s talk about kube-proxy and kube-dns. + +There’s a question mark next to this one because I haven’t done this. Here I have more questions than answers. + +Here’s how Kubernetes services work! A service is a collection of pods, which each have their own IP address (like 10.1.0.3, 10.2.3.5, 10.3.5.6). + +1. Every Kubernetes service gets an IP address (like 10.23.1.2) + +2. `kube-dns` resolves Kubernetes service DNS names to IP addresses (so my-svc.my-namespace.svc.cluster.local might map to 10.23.1.2) + +3. `kube-proxy` sets up iptables rules in order to do random load balancing between them. Kube-proxy also has a userspace round-robin load balancer but my impression is that they don’t recommend using it. + +So when you make a request to `my-svc.my-namespace.svc.cluster.local`, it resolves to 10.23.1.2, and then iptables rules on your local host (generated by kube-proxy) redirect it to one of 10.1.0.3 or 10.2.3.5 or 10.3.5.6 at random. + +Some things that I can imagine going wrong with this: + +* `kube-dns` is misconfigured + +* `kube-proxy` dies and your iptables rules don’t get updated + +* Some issue related to maintaining a large number of iptables rules + +Let’s talk about the iptables rules a bit, since doing load balancing by creating a bajillion iptables rules is something I had never heard of before!
+ +kube-proxy creates one iptables rule per target host like this: (these rules are from [this github issue][13]) + +``` +-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.20000000019 -j KUBE-SEP-E4QKA7SLJRFZZ2DD +-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.25000000000 -j KUBE-SEP-LZ7EGMG4DRXMY26H +-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-RKIFTWKKG3OHTTMI +-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-CGDKBCNM24SZWCMS +-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -j KUBE-SEP-RI4SRNQQXWSTGE2Y + +``` + +So kube-proxy creates a **lot** of iptables rules. What does that mean? What are the implications of that for my network? There’s a great talk from Huawei called [Scale Kubernetes to Support 50,000 services][14] that says if you have 5,000 services in your kubernetes cluster, it takes **11 minutes** to add a new rule. If that happened to your real cluster I think it would be very bad. + +I definitely don’t have 5,000 services in my cluster, but 5,000 isn’t SUCH a big number. The proposal they give to solve this problem is to replace the iptables backend for kube-proxy with IPVS, which is a load balancer that lives in the Linux kernel. + +It seems like kube-proxy is going in the direction of various Linux kernel based load balancers. I think this is partly because they support UDP load balancing, and other load balancers (like HAProxy) don’t support UDP load balancing. + +But I feel comfortable with HAProxy! Is it possible to replace kube-proxy with HAProxy?
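One non-obvious detail in the rules above: the `--probability` values cascade (0.20, 0.25, 0.333, 0.50, then a final unconditional rule) so that each of the five endpoints is picked uniformly: rule i only sees the traffic that fell through the earlier rules, so it needs to match 1/(N-i) of it. A quick sketch of the arithmetic:

```shell
# kube_proxy_probs N: for N endpoints, print the --probability kube-proxy puts
# on each rule and the overall share of traffic each endpoint receives.
# Rule i matches 1/(N-i) of the traffic reaching it, so every share is 1/N.
kube_proxy_probs() {
  awk -v n="$1" 'BEGIN {
    remaining = 1.0                  # fraction of traffic reaching rule i
    for (i = 0; i < n; i++) {
      p = 1.0 / (n - i)              # probability written into rule i
      printf "rule %d: probability %.3f share %.3f\n", i, p, remaining * p
      remaining *= (1 - p)           # traffic falling through to later rules
    }
  }'
}

kube_proxy_probs 5
```

For five endpoints this prints probabilities 0.200, 0.250, 0.333, 0.500 and 1.000, each yielding a 0.200 overall share; the last rule in the iptables listing above carries no `--probability` because it matches all remaining traffic, which corresponds to the probability of 1.000.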
I googled whether kube-proxy can be replaced with HAProxy and found this [thread on kubernetes-sig-network][15] saying: + +> kube-proxy is so awesome, we have used in production for almost a year, it works well most of time, but as we have more and more services in our cluster, we found it was getting hard to debug and maintain. There is no iptables expert in our team, we do have HAProxy&LVS experts, as we have used these for several years, so we decided to replace this distributed proxy with a centralized HAProxy. I think this maybe useful for some other people who are considering using HAProxy with kubernetes, so we just update this project and make it open source: [https://github.com/AdoHe/kube2haproxy][5]. If you found it’s useful , please take a look and give a try. + +So that’s an interesting option! I definitely don’t have answers here, but, some thoughts: + +* Load balancers are complicated + +* DNS is also complicated + +* If you already have a lot of experience operating one kind of load balancer (like HAProxy), it might make sense to do some extra work to use that instead of starting to use an entirely new kind of load balancer (like kube-proxy) + +* I’ve been thinking about whether we want to be using kube-proxy or kube-dns at all – I think instead it might be better to just invest in Envoy and rely entirely on Envoy for all load balancing & service discovery. So then you just need to be good at operating Envoy. + +As you can see my thoughts on how to operate your Kubernetes internal proxies are still pretty confused and I’m still not super experienced with them. It’s totally possible that kube-proxy and kube-dns are fine and that they will just work fine but I still find it helpful to think through what some of the implications of using them are (for example “you can’t have 5,000 Kubernetes services”). + +### Ingress + +If you’re running a Kubernetes cluster, it’s pretty likely that you actually need HTTP requests to get into your cluster.
This blog post is already too long and I don’t know much about ingress yet so we’re not going to talk about that. + +### Useful links + +A couple of useful links, to summarize: + +* [The Kubernetes networking model][6] + +* How GKE networking works: [https://www.youtube.com/watch?v=y2bhV81MfKQ][7] + +* The aforementioned talk on `kube-proxy` performance: [https://www.youtube.com/watch?v=4-pawkiazEg][8] + +### I think networking operations is important + +My sense of all this Kubernetes networking software is that it’s all still quite new and I’m not sure we (as a community) really know how to operate all of it well. This makes me worried as an operator because I really want my network to keep working! :) Also I feel like as an organization running your own Kubernetes cluster you need to make a pretty large investment into making sure you understand all the pieces so that you can fix things when they break. Which isn’t a bad thing, it’s just a thing. + +My plan right now is just to keep learning about how things work and reduce the number of moving parts I need to worry about as much as possible. + +As usual I hope this was helpful and I would very much like to know what I got wrong in this post! 
+ +-------------------------------------------------------------------------------- + +via: https://jvns.ca/blog/2017/10/10/operating-a-kubernetes-network/ + +作者:[Julia Evans ][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://jvns.ca/about +[1]:http://blog.sophaskins.net/blog/misadventures-with-kube-dns/ +[2]:https://github.com/coreos/flannel/pull/808 +[3]:https://github.com/coreos/flannel/pull/803 +[4]:https://github.com/coreos/flannel/issues/610 +[5]:https://github.com/AdoHe/kube2haproxy +[6]:https://kubernetes.io/docs/concepts/cluster-administration/networking/#kubernetes-model +[7]:https://www.youtube.com/watch?v=y2bhV81MfKQ +[8]:https://www.youtube.com/watch?v=4-pawkiazEg +[9]:https://jvns.ca/categories/kubernetes +[10]:https://kubernetes.io/docs/concepts/cluster-administration/networking/#kubernetes-model +[11]:https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/11-pod-network-routes.md +[12]:https://github.com/coreos/flannel/tree/master/backend/vxlan +[13]:https://github.com/kubernetes/kubernetes/issues/37932 +[14]:https://www.youtube.com/watch?v=4-pawkiazEg +[15]:https://groups.google.com/forum/#!topic/kubernetes-sig-network/3NlBVbTUUU0 diff --git a/sources/tech/20171017 What Are the Hidden Files in my Linux Home Directory For.md b/sources/tech/20171017 What Are the Hidden Files in my Linux Home Directory For.md deleted file mode 100644 index fa8613f03d..0000000000 --- a/sources/tech/20171017 What Are the Hidden Files in my Linux Home Directory For.md +++ /dev/null @@ -1,61 +0,0 @@ -What Are the Hidden Files in my Linux Home Directory For? -====== - -![](https://www.maketecheasier.com/assets/uploads/2017/06/hidden-files-linux-hero.png) - -In your Linux system you probably store a lot of files and folders in your Home directory. 
But beneath those files, do you know that your Home directory also comes with a lot of hidden files and folders? If you run `ls -a` on your home directory, you'll discover a pile of hidden files and directories with dot prefixes. What do these hidden files do anyway? - -### What are hidden files in the home directory for? - -![hidden-files-linux-2][1] - -Most commonly, hidden files and directories in the home directory contain settings or data that's accessed by that user's programs. They're not intended to be edited by the user, only the application. That's why they're hidden from the user's normal view. - -In general, files in your own home directory can be removed and changed without damaging the operating system. The applications that rely on those hidden files, however, might not be as flexible. When you remove a hidden file from the home directory, you'll typically lose the settings for the application associated with it. - -The program that relied on that hidden file will typically recreate it. However, you'll be starting from the "out-of-the-box" settings, like a brand new user. If you're having trouble with an application, that can actually be a huge help. It lets you remove customizations that might be causing trouble. But if you're not, it just means you'll need to set everything back the way you like it. - - -### What are some specific uses of hidden files in the home directory? -![hidden-files-linux-3][2] - -Everyone will have different hidden files in their home directory. There are some that everyone has. However, the files serve a similar purpose, regardless of the parent application. - -### System Settings - -System settings include the configuration for your desktop environment and your shell. - - * **Configuration files** for your shell and command line utilities: Depending on the specific shell and command-line utilities you use, the exact file names will change. You'll see files like ".bashrc," ".vimrc" and ".zshrc."
These files contain any settings you've changed about your shell's operating environment or tweaks you've made to the settings of command-line utilities like `vim`. Removing these files will return the associated application to its default state. Considering many Linux users build up an array of subtle tweaks and settings over the years, removing this file could be a huge headache. - * **User profiles:** Like the configuration files above, these files (typically ".profile" or ".bash_profile") save user settings for the shell. This file often contains your PATH. It also contains [aliases][3] you've set. Users can also put aliases in `.bashrc` or other locations. The PATH governs where the shell looks for executable commands. By appending or modifying your PATH, you can change where your shell looks for commands. Aliases change the names of commands. One alias might set `ll` to call `ls -l`, for example. This provides text-based shortcuts to often-used commands. If you delete `.profile`, you can often find the default version in the "/etc/skel" directory. - * **Desktop environment settings:** This saves any customization of your desktop environment. That includes the desktop background, screensavers, shortcut keys, menu bar and taskbar icons, and anything else that the user has set about their desktop environment. When you remove this file, the user's environment reverts to the new user environment at the next login. - - - -### Application configuration files - -You'll find these in the ".config" folder in Ubuntu. These are settings for your specific applications. They'll include things like the preference lists and settings. - - * **Configuration files for applications** : This includes settings from the application preferences menu, workspace configurations and more. Exactly what you'll find here depends on the parent application. - * **Web browser data:** This may include things like bookmarks and browsing history. The majority of files make up the cache. 
This is where the web browser temporarily stores downloaded files, like images. Removing this might slow down some media-heavy websites the first time you visit them.
- * **Caches**: If a user application caches data that's only relevant to that user (like the [Spotify app storing a cache of your playlists][4]), the home directory is a natural place to store it. These caches might contain masses of data or just a few lines of code: it depends on what the parent application needs. If you remove these files, the application recreates them as necessary.
- * **Logs:** Some user applications might store logs here as well. Depending on how the developers set up the application, you might find log files stored in your home directory. This isn't a common choice, however.
-
-
-### Conclusion
-In most cases the hidden files in your Linux home directory are used to store user settings. This includes settings for command-line utilities as well as GUI-based applications. Removing them will remove user settings. Typically, it won't cause a program to break.
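The reset-to-defaults workflow described above is safer if you move a dotfile aside instead of deleting it outright. A minimal shell sketch (the `.exampleapprc` file and the sandbox directory are hypothetical stand-ins for a real application's config file and your home directory):

```shell
# A temporary sandbox directory stands in for $HOME; ".exampleapprc"
# is a hypothetical dotfile used purely for illustration.
sandbox=$(mktemp -d)
printf 'theme=dark\n' > "$sandbox/.exampleapprc"

# List hidden files only ("ls -A" skips the "." and ".." entries)
ls -A "$sandbox"

# Move the config aside instead of deleting it; the application
# would then recreate a default file on its next launch.
mv "$sandbox/.exampleapprc" "$sandbox/.exampleapprc.bak"

# If the defaults don't fix your problem, restore your settings:
mv "$sandbox/.exampleapprc.bak" "$sandbox/.exampleapprc"
```

Renaming rather than removing costs nothing and keeps years of accumulated tweaks recoverable.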
-
--------------------------------------------------------------------------------
-
-via: https://www.maketecheasier.com/hidden-files-linux-home-directory/
-
-作者:[Alexander Fox][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.maketecheasier.com/author/alexfox/
-[1]:https://www.maketecheasier.com/assets/uploads/2017/06/hidden-files-liunux-2.png (hidden-files-liunux-2)
-[2]:https://www.maketecheasier.com/assets/uploads/2017/06/hidden-files-linux-3.png (hidden-files-linux-3)
-[3]:https://www.maketecheasier.com/making-the-linux-command-line-a-little-friendlier/#aliases
-[4]:https://www.maketecheasier.com/clear-spotify-cache/
diff --git a/sources/tech/20171110 How to configure login banners in Linux (RedHat, Ubuntu, CentOS, Fedora).md b/sources/tech/20171110 How to configure login banners in Linux (RedHat, Ubuntu, CentOS, Fedora).md
deleted file mode 100644
index 485be46ab7..0000000000
--- a/sources/tech/20171110 How to configure login banners in Linux (RedHat, Ubuntu, CentOS, Fedora).md
+++ /dev/null
@@ -1,95 +0,0 @@
-How to configure login banners in Linux (RedHat, Ubuntu, CentOS, Fedora)
-======
-Learn how to create login banners in Linux to display warning or informational messages to a user who is about to log in, or right after they have logged in.
-
-![Login banners in Linux][1]
-
-Whenever you log in to a firm's production systems, you see login messages, warnings, or information about the server you are about to log in to (or are already logged in to), like the ones below. Those are the login banners.
-
-![Login welcome messages in Linux][2]
-
-In this article we will walk you through how to configure them.
-
-There are two types of banners you can configure.
-
- 1. A banner message displayed before the user logs in (configured in a file of your choice, e.g. `/etc/login.warn`)
- 2. 
A banner message displayed after the user has successfully logged in (configured in `/etc/motd`)
-
-
-
-### How to display a message before login, when the user connects to the system
-
-This message is displayed when the user connects to the server, before logging in. That is, when they enter their username, this message is shown before the password prompt.
-
-You can use any filename and put your message in it. Here we used the `/etc/login.warn` file.
-
-```
-# cat /etc/login.warn
- !!!! Welcome to KernelTalks test server !!!!
-This server is meant for testing Linux commands and tools. If you are
-not associated with kerneltalks.com and not authorized please dis-connect
-immediately.
-```
-
-Now, you need to supply this file path to the `sshd` daemon so that it can fetch this banner for each user login request. Open the `/etc/ssh/sshd_config` file and search for the line `#Banner none`.
-
-Edit that line: remove the hash mark and add your file's path, so it looks like `Banner /etc/login.warn`.
-
-Save the file and restart the `sshd` daemon. To avoid disconnecting existing connected users, use the HUP signal to restart sshd.
-
-```
-root@kerneltalks # ps -ef |grep -i sshd
-root 14255 1 0 18:42 ? 00:00:00 /usr/sbin/sshd -D
-root 19074 14255 0 18:46 ? 00:00:00 sshd: ec2-user [priv]
-root 19177 19127 0 18:54 pts/0 00:00:00 grep -i sshd
-
-root@kerneltalks # kill -HUP 14255
-```
-
-That's it! Open a new session and try to log in. You will be greeted with the message you configured in the above steps.
-
-![Login banner in Linux][3]
-
-You can see the message is displayed before the user enters their password and logs in to the system.
-
-### How to display a message after the user logs in
-
-The message a user sees after logging in to the system successfully is the **M**essage **O**f **T**he **D**ay, and it is controlled by the `/etc/motd` file. Edit this file and enter the message you want to greet the user with once they have successfully logged in.
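The same edit can be done non-interactively, which is handy when provisioning servers with a script. A minimal sketch (the `MOTD_FILE` variable is an illustrative stand-in; on a real system you would write to `/etc/motd` as root):

```shell
# Sketch: populate an MOTD file from a script.
# MOTD_FILE defaults to a scratch file so this is safe to try;
# on a real server it would be /etc/motd, written as root.
MOTD_FILE="${MOTD_FILE:-$(mktemp)}"

cat > "$MOTD_FILE" <<'EOF'
          W E L C O M E
Welcome to the testing environment of kerneltalks.
Feel free to use this system for testing your Linux
skills. In case of any issues reach out to the admin.
EOF

cat "$MOTD_FILE"   # what users will see after logging in
```

Quoting the `'EOF'` delimiter keeps the banner text literal, so the shell won't expand characters like `$` inside it.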
-
-```
-root@kerneltalks # cat /etc/motd
- W E L C O M E
-Welcome to the testing environment of kerneltalks.
-Feel free to use this system for testing your Linux
-skills. In case of any issues reach out to admin at
-info@kerneltalks.com. Thank you.
-
-```
-
-You don't need to restart the `sshd` daemon for this change to take effect. As soon as you save the file, its content will be read and displayed by the sshd daemon from the very next login request it serves.
-
-![motd in linux][4]
-
-You can see in the above screenshot: the yellow box is the MOTD, controlled by `/etc/motd`, and the green box is the login banner we saw earlier.
-
-You can use tools like [cowsay][5], [banner][6], [figlet][7], [lolcat][8] to create fancy, eye-catching messages to display at login. This method works on almost all Linux distros like RedHat, CentOS, Ubuntu, Fedora, etc.
-
-------------------------------------------------------------------------------
-
-via: https://kerneltalks.com/tips-tricks/how-to-configure-login-banners-in-linux/
-
-作者:[kerneltalks][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://kerneltalks.com
-[1]:https://c3.kerneltalks.com/wp-content/uploads/2017/11/login-banner-message-in-linux.png
-[2]:https://c3.kerneltalks.com/wp-content/uploads/2017/11/Login-message-in-linux.png
-[3]:https://c1.kerneltalks.com/wp-content/uploads/2017/11/login-banner.png
-[4]:https://c3.kerneltalks.com/wp-content/uploads/2017/11/motd-message-in-linux.png
-[5]:https://kerneltalks.com/tips-tricks/cowsay-fun-in-linux-terminal/
-[6]:https://kerneltalks.com/howto/create-nice-text-banner-hpux/
-[7]:https://kerneltalks.com/tips-tricks/create-beautiful-ascii-text-banners-linux/
-[8]:https://kerneltalks.com/linux/lolcat-tool-to-rainbow-color-linux-terminal/
diff --git a/sources/tech/20171113 My Adventure Migrating Back To
Windows.md index 9356dc8204..73d47d52cc 100644
--- a/sources/tech/20171113 My Adventure Migrating Back To Windows.md
+++ b/sources/tech/20171113 My Adventure Migrating Back To Windows.md
@@ -1,3 +1,5 @@
+Translating by MjSeven
+
 My Adventure Migrating Back To Windows
 ======
 I have had linux as my primary OS for about a decade now, and primarily use Ubuntu. But with the latest release I have decided to migrate back to an OS I generally dislike, Windows 10.
diff --git a/sources/tech/20171115 How to create better documentation with a kanban board.md b/sources/tech/20171115 How to create better documentation with a kanban board.md
index c07bef3bbb..d903a6683f 100644
--- a/sources/tech/20171115 How to create better documentation with a kanban board.md
+++ b/sources/tech/20171115 How to create better documentation with a kanban board.md
@@ -1,3 +1,5 @@
+translating---geekpi
+
 How to create better documentation with a kanban board
 ======
 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open%20source_collaboration.png?itok=68kU6BHy)
diff --git a/sources/tech/20171116 Record and Share Terminal Session with Showterm.md b/sources/tech/20171116 Record and Share Terminal Session with Showterm.md
deleted file mode 100644
index 8d3ce451b5..0000000000
--- a/sources/tech/20171116 Record and Share Terminal Session with Showterm.md
+++ /dev/null
@@ -1,76 +0,0 @@
-translating---geekpi
-
-Record and Share Terminal Session with Showterm
-======
-
-![](https://www.maketecheasier.com/assets/uploads/2017/11/record-terminal-session.jpg)
-
-You can easily record your terminal sessions with virtually all screen recording programs. However, you are very likely to end up with an oversized video file. There are several terminal recorders available in Linux, each with its own strengths and weaknesses. Showterm is a tool that makes it pretty easy to record terminal sessions, upload them, share them, and embed them in any web page.
On the plus side, you don't end up with any huge file to deal with.
-
-Showterm is open source, and the project can be found on this [GitHub page][1].
-
- **Related**: [2 Simple Applications That Record Your Terminal Session as Video [Linux]][2]
-
-### Installing Showterm for Linux
-
-Showterm requires that you have Ruby installed on your computer. Here's how to install the program:
-```
-gem install showterm
-```
-
-If you don't have Ruby installed on your Linux system, you can fetch the standalone script instead (the redirection is performed by your own shell, so `sudo` isn't needed for a file under your home directory):
-```
-mkdir -p ~/bin
-curl showterm.io/showterm > ~/bin/showterm
-chmod +x ~/bin/showterm
-```
-
-If you just want to run the application without installation:
-```
-bash <(curl record.showterm.io)
-```
-
-You can type `showterm --help` for the help screen. If a help page doesn't appear, showterm is probably not installed. Now that you have Showterm installed (or are running the standalone version), let us dive into using the tool to record.
-
- **Related**: [How to Record Terminal Session in Ubuntu][3]
-
-### Recording Terminal Session
-
-![showterm terminal][4]
-
-Recording a terminal session is pretty simple. From the command line, run `showterm`. This should start the terminal recording in the background. All commands entered in the command line from here on are recorded by Showterm. Once you are done recording, press Ctrl + D or type `exit` in the command line to stop your recording.
-
-Showterm should upload your video and output a link to the video that looks like http://showterm.io/. It is rather unfortunate that terminal sessions are uploaded right away without any prompting. Don't panic! You can delete any uploaded recording by entering `showterm --delete `. Before uploading your recordings, you'll have the chance to change the timing by adding the `-e` option to the showterm command. If by any chance a recording fails to upload, you can use `showterm --retry