Merge pull request #1 from LCTT/master

update
This commit is contained in:
amwps290 2019-10-07 15:17:30 +08:00 committed by GitHub
commit f54abea4ab
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
31 changed files with 3935 additions and 1723 deletions


@ -0,0 +1,619 @@
数码文件与文件夹收纳术(以照片为例)
======
![](https://img.linux.net.cn/data/attachment/album/201910/05/000950xsxopomsrs55rrb5.jpg)
- 更新 2014-05-14:增加了一些具体实例
- 更新 2015-03-16:根据照片的 GPS 坐标过滤图片
- 更新 2016-08-29:以新的 `filetags --filter` 替换已经过时的 `show-sel.sh` 脚本
- 更新 2017-08-28:关于 geeqie 视频缩略图的邮件评论
- 更新 2018-03-06:增加了 Julian Kahnert 的链接
- 更新 2018-05-06:增加了作者在 2018 Linuxtage Graz 大会上 45 分钟演讲的视频
- 更新 2018-06-05:关于元数据的邮件回复
- 更新 2018-07-22:将文件夹结构的解释移动到一篇单独的文章中
- 更新 2019-07-09:关于在文件名中避免使用系谱和字符的邮件回复
每当度假或外出游玩时,我就会化身为一个充满激情的摄影师。所以,过去几年中我积累了许多 [JPEG][1] 文件。这篇文章中我会介绍我是如何避免[供应商锁定][2](LCTT 译注:<ruby>供应商锁定<rt>vendor lock-in</rt></ruby>,原为经济学术语,这里引申为避免过于依赖某一服务平台),以免受限于那些临时性的解决方案而造成数据丢失。相反,我更倾向于使用那些可以让我**投入时间和精力打理,并能长久使用**的解决方案。
这一(相当长的)攻略 **并不仅仅适用于图像文件**:我将进一步阐述像是文件夹结构、文件的命名规则等等许多领域的事情。因此,这些规范适用于我所能接触到的所有类型的文件。
在我开始传授我的方法之前,我们应该先就一点达成共识:我们是否有相同的需求。如果你十分推崇 [raw 图像格式][3],或是愿意将照片存储在云端或其他你信赖的地方(对我而言并非如此),那么你可能不会认同这篇文章将要描述的方式。请根据你自己的情况来灵活做出选择。
### 我的需求
对于 **将照片(或视频)从我的数码相机中导出到电脑里**,我只需要将 SD 卡插到我的电脑里并调用 `fetch-workflow` 软件。这一步也完成了**图像软件的预处理**以适用于我的文件命名规范(下文会具体论述),同时也可以将图片旋转至正常的方向(而不是横着)。
这些文件将会被存入到我的摄影收藏文件夹 `$HOME/tmp/digicam/`。在这一文件夹中我希望能**遍历我的图像和视频文件**,以便于**整理/删除、重命名、添加/移除标签,以及将一系列相关的文件移动到相应的文件夹中**。
在完成这些以后,我将会**浏览包含图像/电影文件集的文件夹**。在极少数情况下,我希望**在独立的图像处理工具**(比如 [GIMP][4])中打开一个图像文件。如果仅是为了**旋转 JPEG 文件**,我想找到一个快速的方法,不需要图像处理工具,并且是[以无损的方式][5]旋转 JPEG 图像。
我的数码相机支持用 [GPS][6] 坐标标记图像。因此,我需要一个方法来**对单个文件或一组文件可视化 GPS 坐标**来显示我走过的路径。
我想拥有的另一个好功能是:假设你在威尼斯度假时拍了几百张照片,每一张都很漂亮,所以你每张都舍不得删除。另一方面,你可能想把其中一小部分照片送给家里的朋友。而且,在他们因嫉妒而抓狂之前,他们可能只希望看到 20 多张照片。因此,我希望能够**定义并显示一组特定的照片子集**。
就独立性和**避免锁定效应**而言,我不想使用那种一旦公司停止产品或服务就无法使用的工具。出于同样的原因,由于我是一个注重隐私的人,**我不想使用任何基于云的服务**。为了让自己对新的可能性保持开放的心态,我不希望只在一个特定的操作系统平台才可行的方案上倾注全部的精力。**基本的东西必须在任何平台上可用**(查看、导航、……),而**全套需求必须可以在 GNU/Linux 上运行**,对我而言,我选择 Debian GNU/Linux。
在我传授当前针对上述大量需求的解决方案之前,我必须解释一下我的一般文件夹结构和文件命名约定,我也使用它来命名数码照片。但首先,你必须认清一个重要的事实:
#### iPhoto、Picasa 之类的软件应被认为是有害的
管理照片集的软件工具确实提供了相当酷的功能。它们提供了一个良好的用户界面,并试图为你提供满足各种需求的舒适的工作流程。
但我在它们身上确实遇到了很多大问题。它们几乎对所有东西都使用专有的存储格式:图像文件、元数据等等。当你打算在几年后换用别的软件时,这会是一个大问题。相信我:总有一天你会因为种种原因而**更换软件**。
如果你现在正打算更换相应的工具,你会意识到 iPhoto 或 Picasa 是分别存储原始图像文件和你对它们所做的所有操作的(旋转图像、向图像文件添加描述/标签、裁剪等等)。如果你不能导出并重新导入到新工具,那么**所有的东西都将永远丢失**。而无损的进行转换和迁移几乎是不可能的。
我不想在一个会锁住我工作的工具上投入任何精力。**我也拒绝把自己绑定在任何专有工具上**。我是一个过来人,希望你们吸取我的经验。
这就是我在文件名中保留时间戳、图像描述或标记的原因。文件名是永久性的,除非我手动更改它们。当我把照片备份或复制到 U 盘或其他操作系统时,它们不会丢失。每个人都能读懂。任何未来的系统都能够处理它们。
### 我的文件命名规范
这里有一个我在 [2018 Linuxtage Graz 大会][44]上做的[演讲][45],其中详细阐述了我在本文中提到的想法和工作流程。
- [Grazer Linuxtage 2018 - The Advantages of File Name Conventions and Tagging](https://youtu.be/rckSVmYCH90)
- [备份视频托管在 media.CCC.de](https://media.ccc.de/v/GLT18_-_321_-_en_-_g_ap147_004_-_201804281550_-_the_advantages_of_file_name_conventions_and_tagging_-_karl_voit)
我所有的文件都与一个特定的日期或时间有关。根据所采用的 [ISO 8601][7] 规范,我为它们加上**日期戳**或**时间戳**:
带有日期戳和两个标签的示例文件名:`2014-05-09 Budget export for project 42 -- finance company.csv`。
带有时间戳(甚至包括可选秒)和两个标签的示例文件名:`2014-05-09T22.19.58 Susan presenting her new shoes -- family clothing.jpg`。
由于 ISO 时间戳中的冒号不适用于 Windows 的 [NTFS 文件系统][8],因此,我用点代替冒号,以便将小时与分钟(以及可选的秒)区别开来。
如果是**持续的一段日期或时间**,我会将两个日期戳或时间戳用两个减号分开:`2014-05-09--2014-05-13 Jazz festival Graz -- folder tourism music.pdf`。
文件名中的时间/日期戳的优点是,除非我手动更改它们,否则它们保持不变。当通过某些不处理这些元数据的软件进行处理时,包含在文件内容本身中的元数据(如 [Exif][9])往往会丢失。此外,使用这样的日期/时间戳开始的文件名可以确保文件按时间顺序显示,而不是按字母顺序显示。字母表是一种[完全人工的排序顺序][10],对于用户定位文件通常不太实用。
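这套命名约定可以用一小段 Python 来示意(这只是为说明约定而写的假设性示例,`build_filename` 并非作者工具链中的实际函数):

```python
from datetime import datetime

def build_filename(dt, description, tags, ext):
    """按本文约定生成文件名:ISO 8601 时间戳中的冒号以点代替(兼容 NTFS),
    标签放在描述与扩展名之间,以 " -- " 分隔。"""
    stamp = dt.strftime("%Y-%m-%dT%H.%M.%S")
    name = f"{stamp} {description}"
    if tags:
        name += " -- " + " ".join(tags)
    return f"{name}.{ext}"

print(build_filename(datetime(2014, 5, 9, 22, 19, 58),
                     "Susan presenting her new shoes",
                     ["family", "clothing"], "jpg"))
```

它输出的正是上文示例中的文件名:`2014-05-09T22.19.58 Susan presenting her new shoes -- family clothing.jpg`。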
当我想将**标签**关联到文件名时,我将它们放在原始文件名和[文件名扩展名][11]之间,中间用空格、两个减号和两端额外的空格分隔,即 ` -- `。我的标签是小写的英文单词,不包含空格或特殊字符。有时,我可能会使用 `quantifiedself` 或 `usergenerated` 这样的复合词。我[倾向于选择一般性的类别][12],而不是太过具体的描述标签。我在 Twitter [话题标签][13]、文件名、文件夹名、书签、博文等诸如此类的地方重用这些标签。
标签作为文件名的一部分有几个优点。通过使用常用的桌面搜索引擎,你可以在标签的帮助下定位文件。文件名中的标签不会因为复制到不同的存储介质上而丢失。而当系统将元信息存储在文件名之外的位置(如元数据数据库、[点文件][14]、[备用数据流][15]等)时,这些信息通常容易丢失。
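反过来,从符合该约定的文件名中解析出标签也同样容易。下面是一个示意性的解析函数(假设性示例,并非 `filetags` 的实际实现):

```python
import os

def split_tags(filename):
    """把文件名拆分为基本名、标签列表和扩展名:
    标签位于 " -- " 分隔符与文件扩展名之间,以空格分隔。"""
    base, ext = os.path.splitext(filename)
    if " -- " in base:
        base, tagpart = base.split(" -- ", 1)
        tags = tagpart.split()
    else:
        tags = []
    return base, tags, ext

print(split_tags("2014-05-09 Budget export for project 42 -- finance company.csv"))
```

对上面这个文件名,它解析出的标签是 `finance` 和 `company`。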
当然,通常在文件和文件夹名称中,**请避免使用特殊字符**、变音符、冒号等。尤其是在不同操作系统平台之间同步文件时。
我的**文件夹名命名约定**与文件的相应规范相同。
注意:由于 [Memacs][17] 的 [filenametimestamp][16] 模块的聪明之处,所有带有日期/时间戳的文件和文件夹都出现在我的 Org 模式的日历(日程)上的同一天/同一时间。这样,我就能很好地了解当天发生了什么,包括我拍的所有照片。
### 我的一般文件夹结构
在本节中,我将描述我的主文件夹中最重要的文件夹。注意:这可能在将来被移动到一个独立的页面。或许不会。让我们等着瞧。:-)(LCTT 译注:后来这一节已被作者扩展并移动到另外一篇[文章](https://karl-voit.at/folder-hierarchy/)。)
很多东西只有在一定的时间内才会引起人们的兴趣。这些内容包括快速浏览其内容的下载、解压缩文件以检查包含的文件、一些有趣的小内容等等。对于**临时的东西**,我有 `$HOME/tmp/` 子层次结构。新照片放在 `$HOME/tmp/digicam/` 中。我从 CD、DVD 或 USB 记忆棒临时复制的东西放在 `$HOME/tmp/fromcd/` 中。每当软件工具需要用户文件夹层次结构中的临时数据时,我就使用 `$HOME/tmp/Tools/` 作为起点。我经常使用的文件夹是 `$HOME/tmp/2del/`:`2del` 的意思是“随时可以删除”。例如,我所有的浏览器都使用这个文件夹作为默认的下载文件夹。如果我需要在机器上腾出空间,我会首先查看这个 `2del` 文件夹,删除其中的内容。
与上面描述的临时文件相比,我当然也想将文件**保存更长的时间**。这些文件被移动到我的 `$HOME/archive/` 子层次结构中。它有几个子文件夹,分别用于备份、我想保留的 web 下载、我要存档的二进制文件、可移动媒体(CD、DVD、记忆棒、外部硬盘驱动器)的索引文件,以及一个用于存放“稍后再找合适的目标文件夹归档”的文件的文件夹。有时,我太忙或没有耐心将文件妥善整理。是的,那就是我,我甚至有一个名为“现在不要烦我”的文件夹。这对你而言是否很怪?:-)
我的归档中最重要的子层次结构是 `$HOME/archive/events_memories/` 及其子文件夹 `2014/`、`2013/`、`2012/` 等等。正如你可能已经猜到的,每个年份有一个**子文件夹**。其中每个文件夹中都有单个的文件和文件夹。这些文件是根据我在前一节中描述的文件名约定命名的。文件夹名称以 [ISO 8601][7] 日期标签 “YYYY-MM-DD” 开头,后面跟着一个具有描述性的名称,如 `$HOME/archive/events_memories/2014/2014-05-08 Business marathon with/`。在这些与日期相关的文件夹中,我保存着各种与特定事件相关的文件:照片、扫描的 PDF 文件、文本文件等等。
对于**共享数据**,我设置了一个 `$HOME/share/` 子层次结构。这是我的 Dropbox 文件夹,我用各种各样的方法(比如 [unison][18])来分享数据。我也在我的设备之间共享数据:家里的 Mac Mini、家里的 GNU/Linux 笔记本、Android 手机、root-server(我的个人云)、工作用的 Windows 笔记本。我不想在这里详细说明我的同步设置。如果你想了解相关的设置,可以参考另一篇相关的文章。:-)
在我的 `$HOME/templates_tags/` 子层次结构中,我保存了各种**模板文件**[LaTeX][19]、脚本、…),插图和**徽标**,等等。
我的 **Org 模式** 文件主要保存在 `$HOME/org/` 中。我就不在此解释我有多喜欢 [Emacs/Org 模式][20] 以及我从中获益多少了。你可能读过或听过我详细描述我用它做的很棒的事情。具体可以在我的博客上查找[我的 Emacs 标签][21],在 Twitter 上查找 [hashtag #orgmode][22]。
以上就是我最重要的文件夹子层次结构设置方式。
### 我的工作流程
哒哒哒,在你了解了我的文件夹结构和文件名约定之后,下面是我当前的工作流程和工具,我使用它们来满足我前面描述的需求。
请注意,**你必须知道你在做什么**。我这里展示的示例、文件夹路径等等,**只适用于我的机器和我的环境**。**你必须采用相应的**路径、文件名等来满足你的需求!
#### 工作流程:将文件从 SD 卡移动到笔记本电脑、旋转人像图像,并重命名文件
当我想把数据从我的数码相机移到我的 GNU/Linux 笔记本上时,我拿出它的 mini SD 存储卡,把它放在我的笔记本上。然后它会自动挂载在 `/media/digicam` 上。
然后,调用 [getdigicamdata][23]。它做了如下几件事:它将文件从 SD 卡移动到一个临时文件夹中进行处理。原始文件名会转换为小写字符。所有的人像照片会使用 [jhead][24] 旋转。同样使用 jhead我从 Exif 头的时间戳中生成文件名称中的时间戳。使用 [date2name][25],我也将时间戳添加到电影文件中。处理完所有这些文件后,它们将被移动到新的数码相机文件的目标文件夹: `$HOME/tmp/digicam/tmp/`
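其中“从 Exif 时间戳生成文件名中的时间戳”这一步,大致可以这样示意(假设性的简化版本,并非 `getdigicamdata` 的实际实现;Exif 的拍摄时间形如 `YYYY:MM:DD HH:MM:SS`):

```python
def digicam_new_name(original, exif_stamp):
    """把原始文件名转为小写,并以 Exif 时间戳为前缀:
    日期部分的冒号换成减号,时间部分的冒号换成点。"""
    date, time = exif_stamp.split(" ")
    stamp = date.replace(":", "-") + "T" + time.replace(":", ".")
    return f"{stamp}_{original.lower()}"

print(digicam_new_name("P1100386.JPG", "2014:04:20 17:09:11"))
```

输出 `2014-04-20T17.09.11_p1100386.jpg`,与后文示例中的文件名形式一致。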
#### 工作流程:文件夹索引、查看、重命名、删除图像文件
为了快速浏览我的图像和电影文件,我喜欢使用 GNU/Linux 上的 [geeqie][26]。这是一个相当轻量级的图像浏览器,它具有其他文件浏览器所缺少的一大优势:我可以添加通过键盘快捷方式调用的外部脚本/工具。通过这种方式,我可以通过任意外部命令扩展这个图像浏览器的特性。
geeqie 内置了基本的图像管理功能:浏览我的文件夹层次结构、以窗口模式或全屏查看图像(快捷键 `f`)、重命名文件、删除文件、显示 Exif 元数据(快捷键 `Ctrl-e`)。
在 OS X 上,我使用 [Xee][27]。与 geeqie 不同,它不能通过外部命令进行扩展。不过,基本的浏览、查看和重命名功能也是可用的。
#### 工作流程:添加和删除标签
我创建了一个名为 [filetags][28] 的 Python 脚本,用于向单个文件以及一组文件添加和删除标记。
对于数码照片,我使用标签,例如,`specialL` 用于我认为适合桌面背景的风景图片,`specialP` 用于我想展示给其他人的人像照片,`sel` 用于筛选,等等。
##### 使用 geeqie 初始设置 filetags
向 geeqie 添加 `filetags` 是一个手动步骤:“Edit > Preferences > Configure Editors ...”,然后创建一个附加条目 “New”。在这里,你可以定义一个新的桌面文件,如下所示:
```
[Desktop Entry]
Name=filetags
GenericName=filetags
Comment=
Exec=/home/vk/src/misc/vk-filetags-interactive-adding-wrapper-with-gnome-terminal.sh %F
Icon=
Terminal=true
Type=Application
Categories=Application;Graphics;
hidden=false
MimeType=image/*;video/*;image/mpo;image/thm
Categories=X-Geeqie;
```
*add-tags.desktop*
封装脚本 `vk-filetags-interactive-adding-wrapper-with-gnome-terminal.sh` 是必须的,因为我想要弹出一个新的终端,以便添加标签到我的文件:
```
#!/bin/sh
/usr/bin/gnome-terminal \
--geometry=85x15+330+5 \
--tab-with-profile=big \
--hide-menubar \
-x /home/vk/src/filetags/filetags.py --interactive "${@}"
#end
```
*vk-filetags-interactive-adding-wrapper-with-gnome-terminal.sh*
在 geeqie 中,你可以在 “Edit > Preferences > Preferences ... > Keyboard” 中设置键盘快捷键。我将 `t` 与 `filetags` 命令相关联。
这个 `filetags` 脚本还能够从单个文件或一组文件中删除标记。它基本上使用与上面相同的方法。唯一的区别是给 `filetags` 脚本额外加上 `--remove` 参数:
```
[Desktop Entry]
Name=filetags-remove
GenericName=filetags-remove
Comment=
Exec=/home/vk/src/misc/vk-filetags-interactive-removing-wrapper-with-gnome-terminal.sh %F
Icon=
Terminal=true
Type=Application
Categories=Application;Graphics;
hidden=false
MimeType=image/*;video/*;image/mpo;image/thm
Categories=X-Geeqie;
```
*remove-tags.desktop*
```
#!/bin/sh
/usr/bin/gnome-terminal \
--geometry=85x15+330+5 \
--tab-with-profile=big \
--hide-menubar \
-x /home/vk/src/filetags/filetags.py --interactive --remove "${@}"
#end
```
*vk-filetags-interactive-removing-wrapper-with-gnome-terminal.sh*
为了删除标签,我创建了一个键盘快捷方式 `T`
##### 在 geeqie 中使用 filetags
当我在 geeqie 文件浏览器中浏览图像文件时,我选择要标记的文件(一到多个)并按 `t`。然后,一个小窗口弹出,要求我提供一个或多个标签。用回车确认后,这些标签被添加到文件名中。
删除标签也是一样:选择多个文件,按下 `T`,输入要删除的标签,然后按回车确认。就是这样。几乎没有[给文件添加或删除标签的更简单的方法了][29]。
#### 工作流程:使用 appendfilename 改进文件重命名
##### 不使用 appendfilename
重命名一组大型文件可能是一个冗长乏味的过程。对于 `2014-04-20T17.09.11_p1100386.jpg` 这样的原始文件名,在文件名中添加描述的过程相当烦人。你将按 `Ctrl-r` (重命名)在 geeqie 中打开文件重命名对话框。默认情况下,原始名称(没有文件扩展名的文件名称)被标记。因此,如果不希望删除/覆盖文件名(但要追加),则必须按下光标键 `→`。然后,光标放在基本名称和扩展名之间。输入你的描述(不要忘记以空格字符开始),并用回车进行确认。
##### 在 geeqie 中使用 appendfilename
使用 [appendfilename][30],我的过程得到了简化,可以获得将文本附加到文件名的最佳用户体验:当我在 geeqie 中按下 `a`(附加)时,会弹出一个对话框窗口,询问文本。在回车确认后,输入的文本将放置在时间戳和可选标记之间。
例如,当我在 `2014-04-20T17.09.11_p1100386.jpg` 上按下 `a`,然后键入 `Pick-nick in Graz` 时,文件名变为 `2014-04-20T17.09.11_p1100386 Pick-nick in Graz.jpg`。当我再次按下 `a` 并输入 `with Susan` 时,文件名变为 `2014-04-20T17.09.11_p1100386 Pick-nick in Graz with Susan.jpg`。当文件名中带有标记时,输入的文本会被插入到标记分隔符之前。
这样,我就不必担心覆盖时间戳或标记。重命名的过程对我来说变得更加有趣!
最好的部分是:当我想要将相同的文本添加到多个选定的文件中时,也可以使用 `appendfilename`
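appendfilename 的核心逻辑可以这样示意(假设性的简化版本,并非其实际实现):

```python
import os

def append_description(filename, text):
    """把文本追加到文件名的描述部分之后:
    若文件名中已有 " -- " 标签分隔符,则插入到分隔符之前。"""
    base, ext = os.path.splitext(filename)
    if " -- " in base:
        name, tagpart = base.split(" -- ", 1)
        return f"{name} {text} -- {tagpart}{ext}"
    return f"{base} {text}{ext}"

name = append_description("2014-04-20T17.09.11_p1100386.jpg", "Pick-nick in Graz")
name = append_description(name, "with Susan")
print(name)
```

输出 `2014-04-20T17.09.11_p1100386 Pick-nick in Graz with Susan.jpg`,与上文的例子一致。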
##### 在 geeqie 中初始设置 appendfilename
添加一个额外的编辑器到 geeqie: “Edit > Preferences > Configure Editors ... > New”。然后输入桌面文件定义
```
[Desktop Entry]
Name=appendfilename
GenericName=appendfilename
Comment=
Exec=/home/vk/src/misc/vk-appendfilename-interactive-wrapper-with-gnome-terminal.sh %F
Icon=
Terminal=true
Type=Application
Categories=Application;Graphics;
hidden=false
MimeType=image/*;video/*;image/mpo;image/thm
Categories=X-Geeqie;
```
*appendfilename.desktop*
同样,我也使用了一个封装脚本,它将为我打开一个新的终端:
```
#!/bin/sh
/usr/bin/gnome-terminal \
--geometry=90x5+330+5 \
--tab-with-profile=big \
--hide-menubar \
-x /home/vk/src/appendfilename/appendfilename.py "${@}"
#end
```
*vk-appendfilename-interactive-wrapper-with-gnome-terminal.sh*
#### 工作流程:播放电影文件
在 GNU/Linux 上,我使用 [mplayer][31] 回放视频文件。由于 geeqie 本身不播放电影文件,所以我必须创建一个设置,以便在 mplayer 中打开电影文件。
##### 在 geeqie 中初始设置 mplayer
我已经使用 [xdg-open][32] 将电影文件扩展名关联到 mplayer。因此我只需要为 geeqie 创建一个通用的“open”命令让它使用 `xdg-open` 打开任何文件及其关联的应用程序。
在 geeqie 中,再次访问 “Edit > Preferences > Configure Editors ...” 添加“open”的条目
```
[Desktop Entry]
Name=open
GenericName=open
Comment=
Exec=/usr/bin/xdg-open %F
Icon=
Terminal=true
Type=Application
hidden=false
NOMimeType=*;
MimeType=image/*;video/*
Categories=X-Geeqie;
```
*open.desktop*
当你在 geeqie 中将快捷键 `o`(见上文)与之关联后,就能够用相应的关联应用程序打开视频文件(和其他文件)了。
##### 使用 xdg-open 打开电影文件(和其他文件)
在上面的设置过程之后,当你的 geeqie 光标位于文件上方时,你只需按下 `o` 即可。就是如此简洁。
#### 工作流程:在外部图像编辑器中打开
我偶尔也希望能够在 GIMP 中快速编辑图像文件。因此,我添加了一个快捷方式 `g`,并将其与外部编辑器 “GNU Image Manipulation Program”(GIMP)关联起来,geeqie 已经默认创建了该外部编辑器。
这样,只需按下 `g` 就可以在 GIMP 中打开当前图像。
#### 工作流程:移动到存档文件夹
现在我已经在我的文件名中添加了注释,我想将单个文件移动到 `$HOME/archive/events_memories/2014/`,或者将一组文件移动到这个文件夹中的新文件夹中,如 `$HOME/archive/events_memories/2014/2014-05-08 business marathon after show - party`
通常的方法是选择一个或多个文件,并用快捷方式 `Ctrl-m` 将它们移动到文件夹中。
何等繁复无趣之至!
因此,我(再次)编写了一个 Python 脚本,它为我完成了这项工作:[move2archive][33](简写为 `m2a`),它接受一个或多个文件作为命令行参数。然后,会出现一个对话框,我可以在其中输入一个可选的文件夹名。当我不输入任何东西而直接按回车时,文件被移动到相应年份的文件夹。当我输入一个类似 `Business-Marathon After-Show-Party` 的文件夹名称时,第一个图像文件的日期戳会被加在该文件夹名的前面(`$HOME/archive/events_memories/2014/2014-05-08 Business-Marathon After-Show-Party`),然后创建该文件夹,并移动文件。
再一次,我在 geeqie 中选择一个或多个文件,按 `m`(移动),然后或者直接按回车(不创建特殊的子文件夹),或者输入一段描述性文本作为要创建的子文件夹的名称(无需输入日期戳)。
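move2archive 推导目标文件夹的逻辑,大致可以这样示意(假设性的简化版本,并非 `m2a` 的实际实现):

```python
import os

def m2a_target(files, archive_root, event_name=None):
    """根据第一个文件名开头的 YYYY-MM-DD 日期戳推导归档目录:
    未给出事件名时归入对应年份目录;否则以“日期戳 + 事件名”新建子目录。"""
    datestamp = os.path.basename(sorted(files)[0])[:10]
    year = datestamp[:4]
    if event_name:
        return os.path.join(archive_root, year, f"{datestamp} {event_name}")
    return os.path.join(archive_root, year)

print(m2a_target(["2014-05-08T18.00.00 marathon.jpg"],
                 "/home/vk/archive/events_memories",
                 "Business-Marathon After-Show-Party"))
```

对上面的调用,推导出的目标目录是 `/home/vk/archive/events_memories/2014/2014-05-08 Business-Marathon After-Show-Party`。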
**没有一个图像管理工具像我的带有 appendfilename 和 move2archive 的 geeqie 一样可以通过快捷键快速且有趣的完成工作。**
##### 在 geeqie 里初始化 m2a 的相关设置
同样,向 geeqie 添加 `m2a` 是一个手动步骤:“Edit > Preferences > Configure Editors ...”,然后创建一个附加条目 “New”。在这里,你可以定义一个新的桌面文件,如下所示:
```
[Desktop Entry]
Name=move2archive
GenericName=move2archive
Comment=Moving one or more files to my archive folder
Exec=/home/vk/src/misc/vk-m2a-interactive-wrapper-with-gnome-terminal.sh %F
Icon=
Terminal=true
Type=Application
Categories=Application;Graphics;
hidden=false
MimeType=image/*;video/*;image/mpo;image/thm
Categories=X-Geeqie;
```
*m2a.desktop*
封装脚本 `vk-m2a-interactive-wrapper-with-gnome-terminal.sh` 是必要的,因为我想要弹出一个新的终端窗口,以便我的文件进入我指定的目标文件夹:
```
#!/bin/sh
/usr/bin/gnome-terminal \
--geometry=157x56+330+5 \
--tab-with-profile=big \
--hide-menubar \
-x /home/vk/src/m2a/m2a.py --pauseonexit "${@}"
#end
```
*vk-m2a-interactive-wrapper-with-gnome-terminal.sh*
在 geeqie 中,你可以在 “Edit > Preferences > Preferences ... > Keyboard” 中将 `m` 与 `m2a` 命令相关联。
#### 工作流程:旋转图像(无损)
通常,我的数码相机会自动将人像照片标记为人像照片。然而,在某些特定的情况下(比如从装饰图案上方拍照),我的相机会出错。在那些**罕见的情况下**,我必须手动修正方向。
你必须知道JPEG 文件格式是一种有损格式,应该只用于照片,而不是计算机生成的东西,如屏幕截图或图表。以傻瓜方式旋转 JPEG 图像文件通常会解压/可视化图像文件、旋转生成新的图像,然后重新编码结果。这将导致生成的图像[比原始图像质量差得多][5]。
因此,你应该使用无损方法来旋转 JPEG 图像文件。
再一次,我添加了一个“外部编辑器”到 geeqie“Edit > Preferences > Configure Editors ... > New”。在这里我添加了两个条目使用 [exiftran][34],一个用于旋转 270 度(即逆时针旋转 90 度),另一个用于旋转 90 度(顺时针旋转 90 度):
```
[Desktop Entry]
Version=1.0
Type=Application
Name=Losslessly rotate JPEG image counterclockwise
# call the helper script
TryExec=exiftran
Exec=exiftran -p -2 -i -g %f
# Desktop files that are usable only in Geeqie should be marked like this:
Categories=X-Geeqie;
OnlyShowIn=X-Geeqie;
# Show in menu "Edit/Orientation"
X-Geeqie-Menu-Path=EditMenu/OrientationMenu
MimeType=image/jpeg;
```
*rotate-270.desktop*
```
[Desktop Entry]
Version=1.0
Type=Application
Name=Losslessly rotate JPEG image clockwise
# call the helper script
TryExec=exiftran
Exec=exiftran -p -9 -i -g %f
# Desktop files that are usable only in Geeqie should be marked like this:
Categories=X-Geeqie;
OnlyShowIn=X-Geeqie;
# Show in menu "Edit/Orientation"
X-Geeqie-Menu-Path=EditMenu/OrientationMenu
# It can be made verbose
# X-Geeqie-Verbose=true
MimeType=image/jpeg;
```
*rotate-90.desktop*
我创建了 geeqie 快捷键 `[`(逆时针方向)和 `]`(顺时针方向)。
#### 工作流程:可视化 GPS 坐标
我的数码相机有一个 GPS 传感器,它在 JPEG 文件的 Exif 元数据中存储当前的地理位置。位置数据以 [WGS 84][35] 格式存储,如 `47, 58, 26.73; 16, 23, 55.51`(纬度;经度)。这种格式的可读性较差,我期望的是地图或位置名称。因此,我向 geeqie 添加了一些功能,这样我就可以在 [OpenStreetMap][36] 上看到单个图像文件的位置:“Edit > Preferences > Configure Editors ... > New”
```
[Desktop Entry]
Name=vkphotolocation
GenericName=vkphotolocation
Comment=
Exec=/home/vk/src/misc/vkphotolocation.sh %F
Icon=
Terminal=true
Type=Application
Categories=Application;Graphics;
hidden=false
MimeType=image/bmp;image/gif;image/jpeg;image/jpg;image/pjpeg;image/png;image/tiff;image/x-bmp;image/x-gray;image/x-icb;image/x-ico;image/x-png;image/x-portable-anymap;image/x-portable-bitmap;image/x-portable-graymap;image/x-portable-pixmap;image/x-xbitmap;image/x-xpixmap;image/x-pcx;image/svg+xml;image/svg+xml-compressed;image/vnd.wap.wbmp;
```
*photolocation.desktop*
这调用了我的名为 `vkphotolocation.sh` 的封装脚本,它使用 [ExifTool][37] 以 [Marble][38] 能够读取和可视化的适当格式提取该坐标:
```
#!/bin/sh
IMAGEFILE="${1}"
IMAGEFILEBASENAME=$(basename "${IMAGEFILE}")
COORDINATES=$(exiftool -c %.6f "${IMAGEFILE}" | awk '/GPS Position/ { print $4 " " $6 }')
if [ "x${COORDINATES}" = "x" ]; then
zenity --info --title="${IMAGEFILEBASENAME}" --text="No GPS-location found in the image file."
else
/usr/bin/marble --latlon "${COORDINATES}" --distance 0.5
fi
#end
```
*vkphotolocation.sh*
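顺带一提,把度/分/秒形式的 WGS 84 坐标换算成十进制度也很简单,下面是一个示意(假设性示例,并非 `vkphotolocation.sh` 所做的事,后者直接使用 exiftool 的格式化输出):

```python
def dms_to_decimal(degrees, minutes, seconds, negative=False):
    """把 WGS 84 的度/分/秒(如 47, 58, 26.73)换算为十进制度;
    南纬/西经时取负值。"""
    decimal = degrees + minutes / 60.0 + seconds / 3600.0
    return -decimal if negative else decimal

# 上文示例坐标 47, 58, 26.73; 16, 23, 55.51(纬度;经度)
print(round(dms_to_decimal(47, 58, 26.73), 6),
      round(dms_to_decimal(16, 23, 55.51), 6))
```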
映射到键盘快捷键 `G`,我可以快速地得到**单个图像文件的位置的地图定位**。
当我想将多个 JPEG 图像文件的**位置可视化为路径**时,我使用 [GpsPrune][39]。我没能找到让 GpsPrune 将一组文件作为命令行参数的方法。正因为如此,我必须手动启动 GpsPrune,用 “File > Add photos” 选择一组文件或一个文件夹。
通过这种方式,我可以为每个 JPEG 位置在 OpenStreetMap 地图上获得一个点(如果配置为这样)。通过单击这样一个点,我可以得到相应图像的详细信息。
如果你恰好在国外拍摄照片,可视化 GPS 位置对**在文件名中添加描述**大有帮助!
#### 工作流程:根据 GPS 坐标过滤照片
这并非我的工作流程。为了完整起见,我列出该工作流对应工具的特性。我想做的就是从一大堆图片中寻找那些在一定区域内(范围或点 + 距离)的照片。
到目前为止,我只找到了 [DigiKam][40],它能够[根据矩形区域进行过滤][41]。如果你知道其他工具,请将其添加到下面的评论或给我写一封电子邮件。
#### 工作流程:显示给定集合的子集
如上面的需求所述,我希望能够对一个文件夹中的文件定义一个子集,以便将这个小集合呈现给其他人。
工作流程非常简单:我向选择的文件添加一个标记(通过 `t`/`filetags`)。为此,我使用标记 `sel`,它是 “selection” 的缩写。在标记了一组文件之后,我可以按下 `s`,它与一个脚本相关联,该脚本只显示标记为 `sel` 的文件。
当然,这也适用于任何标签或标签组合。因此,用同样的方法,你可以得到一个适当的概述,你的婚礼上的所有照片都标记着“教堂”和“戒指”。
很棒的功能,不是吗?:-)
##### 初始设置 filetags 以根据标签和 geeqie 过滤
你必须定义一个额外的“外部编辑器”:“Edit > Preferences > Configure Editors ... > New”:
```
[Desktop Entry]
Name=filetag-filter
GenericName=filetag-filter
Comment=
Exec=/home/vk/src/misc/vk-filetag-filter-wrapper-with-gnome-terminal.sh
Icon=
Terminal=true
Type=Application
Categories=Application;Graphics;
hidden=false
MimeType=image/*;video/*;image/mpo;image/thm
Categories=X-Geeqie;
```
*filter-tags.desktop*
再次调用我编写的封装脚本:
```
#!/bin/sh
/usr/bin/gnome-terminal \
--geometry=85x15+330+5 \
--hide-menubar \
-x /home/vk/src/filetags/filetags.py --filter
#end
```
*vk-filetag-filter-wrapper-with-gnome-terminal.sh*
带有参数 `--filter` 的 `filetags` 基本上完成的是:要求用户输入一个或多个标签,然后,把当前文件夹中所有匹配的文件用[符号链接][42]链接到 `$HOME/.filetags_tagfilter/` 中,最后,启动一个新的 geeqie 实例,显示这些被链接的文件。
在退出这个新的 geeqie 实例之后,你会看到进行选择的旧的 geeqie 实例。
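`filetags --filter` 的标签匹配部分可以这样示意(对其逻辑的假设性简化,并非实际实现):

```python
import os

def matches_tags(filename, wanted):
    """判断文件名中位于 " -- " 与扩展名之间的标签
    是否包含所有想要的标签。"""
    base, _ = os.path.splitext(filename)
    if " -- " not in base:
        return False
    tags = base.split(" -- ", 1)[1].split()
    return all(t in tags for t in wanted)

files = ["2014-05-09 church -- wedding church.jpg",
         "2014-05-09 rings -- wedding ring.jpg"]
print([f for f in files if matches_tags(f, ["wedding", "church"])])
```

例如,上面的调用只会列出同时带有 `wedding` 和 `church` 标签的第一个文件。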
#### 用一个真实的案例来总结
哇哦,这是一篇很长的博客文章,你可能已经忘了之前的概述。总结一下我在(扩展了标准功能集的)geeqie 中可以做的事情,这里有一个很酷的汇总:
快捷键 | 功能
--- | ---
`m` | 移到归档(move2archive)
`o` | 打开(针对非图像文件)
`a` | 向文件名追加文本(appendfilename)
`t` | 文件标签(添加)
`T` | 文件标签(删除)
`s` | 文件标签(过滤)
`g` | gimp
`G` | 显示 GPS 信息
`[` | 无损的逆时针旋转
`]` | 无损的顺时针旋转
`Ctrl-e` | EXIF 图像信息
`f` | 全屏显示
文件名(包括它的路径)的部分及我用来操作该部分的相应工具:
```
/this/is/a/folder/2014-04-20T17.09 Picknick in Graz -- food graz.jpg
[ move2archive ] [ date2name ] [appendfilename] [ filetags ]
```
在实践中,我按照以下步骤将照片从相机保存到存档:我将 SD 存储卡放入计算机的 SD 读卡器中,然后运行 [getdigicamdata.sh][23]。完成之后,我在 geeqie 中打开 `$HOME/tmp/digicam/tmp/`,浏览一下照片,把那些拍得不好的删除。如果有图像的方向错了,我用 `[` 或 `]` 纠正它。
在第二步中,我向我认为值得添加评论的文件添加描述(`a`)。每当我想添加标签时也是如此:我快速地选中所有应该共享同一标签的文件(`Ctrl + 鼠标点击`),并使用 [filetags][28](`t`)进行标记。
要合并来自同一事件的文件,我会选中相应的文件,按下 [move2archive][33] 的快捷键(`m`),键入事件描述,将它们移动到年度归档文件夹中相应的事件文件夹里;其余的(不属于特定事件的)文件,则不输入事件描述,直接由 `move2archive`(`m`)移动到年度归档文件夹中。
结束我的工作流程,我删除了 SD 卡上的所有文件,把它从操作系统上弹出,然后把它放回我的数码相机里。
以上。
因为这种工作流程几乎不需要任何开销,所以评论、标记和归档照片不再是一项乏味的工作。
### 最后
以上就是我关于照片和电影的工作流程的详细描述。你可能已经从中发现了一些自己感兴趣的东西,所以请不要犹豫,使用下面的链接留下评论或给我发电子邮件。
如果我的工作流程也适用于你,我希望能得到你的反馈。并且,如果你已经发布了你的工作流程,或者找到了其他人对工作流程的描述,也请留下评论!
及时行乐,莫让错误的工具或低效的方法浪费了我们的人生!
### 其他工具
请阅读[本文中关于 gThumb 的部分][43]。
如果你觉得上文所述符合你的需求,请根据相关的建议来选择对应的工具。
--------------------------------------------------------------------------------
via: http://karl-voit.at/managing-digital-photographs/
作者:[Karl Voit][a]
译者:[qfzy1233](https://github.com/qfzy1233)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://karl-voit.at
[1]:https://en.wikipedia.org/wiki/Jpeg
[2]:http://en.wikipedia.org/wiki/Vendor_lock-in
[3]:https://en.wikipedia.org/wiki/Raw_image_format
[4]:http://www.gimp.org/
[5]:http://petapixel.com/2012/08/14/why-you-should-always-rotate-original-jpeg-photos-losslessly/
[6]:https://en.wikipedia.org/wiki/Gps
[7]:https://en.wikipedia.org/wiki/Iso_date
[8]:https://en.wikipedia.org/wiki/Ntfs
[9]:https://en.wikipedia.org/wiki/Exif
[10]:http://www.isisinform.com/reinventing-knowledge-the-medieval-controversy-of-alphabetical-order/
[11]:https://en.wikipedia.org/wiki/File_name_extension
[12]:http://karl-voit.at/tagstore/en/papers.shtml
[13]:https://en.wikipedia.org/wiki/Hashtag
[14]:https://en.wikipedia.org/wiki/Dot-file
[15]:https://en.wikipedia.org/wiki/NTFS#Alternate_data_streams_.28ADS.29
[16]:https://github.com/novoid/Memacs/blob/master/docs/memacs_filenametimestamps.org
[17]:https://github.com/novoid/Memacs
[18]:http://www.cis.upenn.edu/~bcpierce/unison/
[19]:https://github.com/novoid/LaTeX-KOMA-template
[20]:http://orgmode.org/
[21]:http://karl-voit.at/tags/emacs
[22]:https://twitter.com/search?q%3D%2523orgmode&src%3Dtypd
[23]:https://github.com/novoid/getdigicamdata.sh
[24]:http://www.sentex.net/%3Ccode%3Emwandel/jhead/
[25]:https://github.com/novoid/date2name
[26]:http://geeqie.sourceforge.net/
[27]:http://xee.c3.cx/
[28]:https://github.com/novoid/filetag
[29]:http://karl-voit.at/tagstore/
[30]:https://github.com/novoid/appendfilename
[31]:http://www.mplayerhq.hu
[32]:https://wiki.archlinux.org/index.php/xdg-open
[33]:https://github.com/novoid/move2archive
[34]:http://manpages.ubuntu.com/manpages/raring/man1/exiftran.1.html
[35]:https://en.wikipedia.org/wiki/WGS84#A_new_World_Geodetic_System:_WGS_84
[36]:http://www.openstreetmap.org/
[37]:http://www.sno.phy.queensu.ca/~phil/exiftool/
[38]:http://userbase.kde.org/Marble/Tracking
[39]:http://activityworkshop.net/software/gpsprune/
[40]:https://en.wikipedia.org/wiki/DigiKam
[41]:https://docs.kde.org/development/en/extragear-graphics/digikam/using-kapp.html#idp7659904
[42]:https://en.wikipedia.org/wiki/Symbolic_link
[43]:http://karl-voit.at/2017/02/19/gthumb
[44]:https://glt18.linuxtage.at
[45]:https://glt18-programm.linuxtage.at/events/321.html


@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11429-1.html)
[#]: subject: (Learn how to Record and Replay Linux Terminal Sessions Activity)
[#]: via: (https://www.linuxtechi.com/record-replay-linux-terminal-sessions-activity/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
@ -12,7 +12,7 @@
通常Linux 管理员们都使用 `history` 命令来跟踪在先前的会话中执行过哪些命令,但是 `history` 命令的局限性在于它不存储命令的输出。在某些情况下,我们要检查上一个会话的命令输出,并希望将其与当前会话进行比较。除此之外,在某些情况下,我们正在对 Linux 生产环境中的问题进行故障排除,并希望保存所有终端会话活动以供将来参考,因此在这种情况下,`script` 命令就变得很方便。
![](https://img.linux.net.cn/data/attachment/album/201910/06/122659mmi64z8ryr4z2n8a.jpg)
`script` 是一个命令行工具,用于捕获/记录你的 Linux 服务器终端会话活动,以后可以使用 `scriptreplay` 命令重放记录的会话。在本文中,我们将演示如何安装 `script` 命令行工具以及如何记录 Linux 服务器终端会话活动,然后,我们将看到如何使用 `scriptreplay` 命令来重放记录的会话。
@ -20,8 +20,6 @@
#### 在 RHEL 7/ CentOS 7 上安装 script 工具
`script` 命令由 RPM 包 `util-linux` 提供,如果你没有在你的 CentOS 7 / RHEL 7 系统上安装它,运行下面的 `yum` 安装它:
```
@ -54,7 +52,7 @@ Script started, file is typescript
[root@linuxtechi ~]#
```
要停止记录会话活动,请键入 `exit` 命令,然后按回车
```
[root@linuxtechi ~]# exit
@ -73,7 +71,7 @@ Script done, file is typescript
![options-script-command][1]
让我们开始通过执行 `script` 命令来记录 Linux 终端会话,然后执行诸如 `w``route -n``df -h` 和 `free -h`,示例如下所示:
![script-examples-linux-server][3]
@ -91,10 +89,9 @@ Script done, file is typescript
以上内容确认了我们在终端上执行的所有命令都已保存在 `typescript` 文件中。
### 在 script 命令中使用定制文件名
假设我们要使用自定义文件名来执行 `script` 命令,可以在 `script` 命令后指定文件名。在下面的示例中,我们使用的文件名为 `session-log-(当前日期时间).txt`
```
[root@linuxtechi ~]# script sessions-log-$(date +%d-%m-%Y-%T).txt
@ -113,7 +110,7 @@ Script done, file is sessions-log-21-06-2019-01:37:39.txt
### 附加命令输出到 script 记录文件
假设 `script` 命令已经将命令输出记录到名为 `session-log.txt` 的文件中,现在我们想将新会话命令的输出附加到该文件中,那么可以在 `script` 命令中使用 `-a` 选项。
```
[root@linuxtechi ~]# script -a sessions-log.txt
@ -138,7 +135,7 @@ Script done, file is sessions-log.txt
### 无需 shell 交互而捕获命令输出到 script 记录文件
假设我们要捕获命令的输出到会话记录文件,那么使用 `-c` 选项,示例如下所示:
```
[root@linuxtechi ~]# script -c "uptime && hostname && date" root-session.txt
@ -285,7 +282,7 @@ via: https://www.linuxtechi.com/record-replay-linux-terminal-sessions-activity/
作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,212 @@
[#]: collector: (lujun9972)
[#]: translator: (LuuMing)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11427-1.html)
[#]: subject: (How to compile a Linux kernel in the 21st century)
[#]: via: (https://opensource.com/article/19/8/linux-kernel-21st-century)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
在 21 世纪该怎样编译 Linux 内核
======
> 也许你并不需要编译 Linux 内核,但你能通过这篇教程快速上手。
![](https://img.linux.net.cn/data/attachment/album/201910/06/113927vrs6rurljyuza8cy.jpg)
在计算机世界里,<ruby>内核<rt>kernel</rt></ruby>是处理硬件与一般系统之间通信的<ruby>低阶软件<rt>low-level software</rt></ruby>。除了一些烧录进计算机主板的初始固件,当你启动计算机时,是内核让系统意识到它有一个硬盘驱动器、屏幕、键盘以及网卡。内核(或多或少地)给每个部件分配相等的时间,使得图像、音频、文件系统和网络可以流畅甚至并行地运行。
然而,对于硬件的需求是源源不断的,发布的硬件越多,内核就必须纳入更多代码来保证那些硬件正常工作。得到具体的数字很困难,但是 Linux 内核无疑是硬件兼容性方面顶级的内核之一。Linux 运行在无数的计算机和移动电话上,也运行在工业用途和爱好者使用的单板嵌入式系统(SoC)、RAID 卡、缝纫机等等之上。
回到 20 世纪(甚至是 21 世纪初期),对于 Linux 用户来说,在刚买到新的硬件后,需要下载最新的内核代码并编译安装才能使用,这并不稀奇。而现在,你已经很难见到为了好玩或者因为高度专业化的定制硬件而自己编译内核的 Linux 用户了。现在,通常已经不需要再编译 Linux 内核了。
这里列出了一些原因以及快速编译内核的教程。
### 更新当前的内核
无论你买了配备新显卡或 Wifi 芯片组的新品牌电脑,还是给家里添置一台新的打印机,你的操作系统(称为 GNU+Linux 或 Linux,后者也是内核的名字)都需要一个驱动程序来打开与新部件(显卡、芯片组、打印机和其他任何东西)的信道。有时候,当你插入某个新设备而你的电脑表示发现了它时,这可能具有一定的欺骗性。别被骗到了,有时候那确实就够了,但更多的情况是,你的操作系统仅仅是使用了通用的协议检测到安装了新的设备。
例如,你的计算机也许能够鉴别出新的网络打印机,但有时候那仅仅是因为打印机的网卡被设计成为了获得 DHCP 地址而在网络上标识自己。它并不意味着你的计算机知道如何发送文档给打印机进行打印。事实上,你可以认为计算机甚至不“知道”那台设备是一个打印机。它也许仅仅是显示网络有个设备在一个特定的地址上,并且该设备以一系列字符 “p-r-i-n-t-e-r” 标识自己而已。人类语言的便利性对于计算机毫无意义。计算机需要的是一个驱动程序。
内核开发者、硬件制造商、技术支持和爱好者都知道新的硬件会不断地发布。他们大多数都会贡献驱动程序,直接提交给内核开发团队以包含在 Linux 中。例如,英伟达显卡驱动程序通常都会写入 [Nouveau][2] 内核模块中,并且因为英伟达显卡很常用,它的代码都包含在任何一个日常使用的发行版内核中(例如你下载 [Fedora][3] 或 [Ubuntu][4] 时得到的内核)。而在英伟达显卡不常见的场合,例如嵌入式系统中,Nouveau 模块通常会被移除。对其他设备来说也有类似的模块:打印机得益于 [Foomatic][5] 和 [CUPS][6],无线网卡有 [b43、ath9k、wl][7] 模块,等等。
发行版往往会在它们的 Linux 内核构建中尽可能多地包含合理的驱动程序,因为他们想让你在接入新设备时不用安装驱动程序就能立即使用。对于大多数情况来说就是这样的,尤其是现在很多设备厂商都在资助自己售卖硬件的 Linux 驱动程序开发,并且直接将这些驱动程序提交给内核团队以用在通常的发行版上。
有时候,或许你正在运行六个月之前安装的内核,并配备了上周刚刚上市令人兴奋的新设备。在这种情况下,你的内核也许没有那款设备的驱动程序。好消息是经常会出现那款设备的驱动程序已经存在于最近版本的内核中,意味着你只要更新运行的内核就可以了。
通常,这些都是通过安装包管理软件完成的。例如在 RHEL、CentOS 和 Fedora 上:
```
$ sudo dnf update kernel
```
在 Debian 和 Ubuntu 上,首先获取你当前内核的版本:
```
$ uname -r
4.4.186
```
搜索新的版本:
```
$ sudo apt update
$ sudo apt search linux-image
```
安装找到的最新版本。在这个例子中,最新的版本是 5.2.4
```
$ sudo apt install linux-image-5.2.4
```
内核更新后,你必须 [reboot][8] (除非你使用 kpatch 或 kgraft。这时如果你需要的设备驱动程序包含在最新的内核中你的硬件就会正常工作。
### 安装内核模块
有时候,一个发行版没有预计到用户会使用某个设备(或者说,该设备的驱动程序至少还不足以包含在 Linux 内核中)。Linux 对于驱动程序采用模块化方式,因此尽管驱动程序没有编译进内核,但发行版可以推送单独的驱动程序包让内核去加载。这虽然有些复杂,但是非常有用,尤其是当驱动程序没有包含进内核中而是在引导过程中加载,或是内核中的驱动程序相比模块化的驱动程序过期时。第一个问题可以用 “initrd”(初始 RAM 磁盘)解决,这一点超出了本文的讨论范围;第二点则通过 “kmod” 系统解决。
kmod 系统保证了当内核更新后,与之一起安装的所有模块化驱动程序也得到更新。如果你手动安装一个驱动程序,你就体验不到 kmod 提供的自动化,因此只要有 kmod 安装包可用,就应该选择它。例如,尽管英伟达驱动程序以 Nouveau 模块的形式构建在内核中,但官方的驱动程序仅由英伟达发布。你可以去官网手动安装英伟达的驱动程序,下载 “.run” 文件,并运行其提供的 shell 脚本,但在安装了新的内核之后你必须重复相同的过程,因为没有任何东西告诉包管理软件你手动安装了一个内核驱动程序。而由于是英伟达驱动着你的显示,手动更新英伟达驱动程序通常意味着需要在终端中执行,因为没有显卡驱动程序将无法显示图形界面。
![Nvidia configuration application][9]
然而,如果你通过 kmod 包安装英伟达驱动程序,更新你的内核也会更新你的英伟达驱动程序。在 Fedora 和相关的发行版中:
```
$ sudo dnf install kmod-nvidia
```
在 Debian 和相关发行版上:
```
$ sudo apt update
$ sudo apt install nvidia-kernel-common nvidia-kernel-dkms nvidia-glx nvidia-xconfig nvidia-settings nvidia-vdpau-driver vdpau-va-driver
```
这仅仅是一个例子,但是如果你真的要安装英伟达驱动程序,你也必须屏蔽掉 Nouveau 驱动程序。参考你使用发行版的文档获取最佳的步骤吧。
### 下载并安装驱动程序
不是所有的东西都包含在内核中,也不是所有的东西都可以作为内核模块使用。在某些情况下,你需要下载一个由供应商编写并绑定好的特殊驱动程序,还有一些情况,你有驱动程序,但是没有配置驱动程序的前端界面。
有两个常见的例子是 HP 打印机和 [Wacom][10] 数位板。如果你有一台 HP 打印机,你可能有能够和打印机通信的通用的驱动程序,甚至能够打印出东西。但是通用的驱动程序却不能为特定型号的打印机提供定制化的选项,例如双面打印、校对、纸盒选择等等。[HPLIP][11]HP Linux 成像和打印系统)提供了选项来进行任务管理、调整打印设置、选择可用的纸盒等等。
HPLIP 通常包含在包管理软件中只要搜索“hplip”就行了。
![HPLIP in action][12]
同样的,电子艺术家主要使用的数位板 Wacom 的驱动程序通常也包含在内核中,但是例如调整压感和按键功能等设置只能通过默认包含在 GNOME 的图形控制面板访问。但也可以作为 KDE 上额外的程序包“kde-config-tablet”来访问。
这里也有几个类似的个别例子,例如内核中没有驱动程序,但是以 RPM 或 DEB 文件提供了可供下载并且通过包管理软件安装的 kmod 版本的驱动程序。
### 打上补丁并编译你的内核
即使在 21 世纪的未来主义乌托邦里,仍有厂商不够了解开源,没有提供可安装的驱动程序。有时候,一些公司为驱动程序提供开源代码,而需要你下载代码、修补内核、编译并手动安装。
这种发布方式和在 kmod 系统之外安装打包的驱动程序拥有同样的缺点:对内核的更新会破坏驱动程序,因为每次更换新的内核时都必须手动将其重新集成到内核中。
令人高兴的是,这种事情变得少见了,因为 Linux 内核团队在呼吁公司们与他们交流方面做得很好,并且公司们最终接受了开源不会很快消失的事实。但仍有新奇的或高度专业的设备仅提供了内核补丁。
正式地讲,各个发行版对于你应该如何编译内核都有自己特定的惯例,以便让包管理器也能参与到升级内核这一重要系统部件的过程中。这里的包管理器太多,无法一一涵盖;举个例子,在 Fedora 上可以使用 `rpmdev` 等工具,在 Debian 上则是 `build-essential` 和 `devscripts`。
首先,像通常那样,找到你正在运行内核的版本:
```
$ uname -r
```
在大多数情况下,如果你还没有升级过内核,那么可以先试着升级一下内核。搞定之后,也许你的问题就会在最新发布的内核中得到解决。如果你尝试后发现不起作用,那么你应该下载正在运行内核的源码。大多数发行版提供了特定的命令来完成这件事,但是手动操作的话,可以在 [kernel.org][13] 上找到它的源代码。
你必须下载内核所需的任何补丁。有时候,这些补丁对应具体的内核版本,因此请谨慎选择。
通常,或至少在人们习惯于编译内核的那个年代,都是把源代码放到 `/usr/src/linux` 中并在那里打上补丁。
解压内核源码并打上需要的补丁:
```
$ cd /usr/src/linux
$ tar xjf linux-5.2.4.tar.bz2
$ cd linux-5.2.4
$ bzip2 -d ../patch*bz2
```
补丁文件也许包含如何使用的教程,但通常它们都设计成在内核源码树的顶层可用来执行。
```
$ patch -p1 < patch*example.patch
```
当内核代码打上补丁后,你可以继续使用旧的配置来对打了补丁的内核进行配置。
```
$ make oldconfig
```
`make oldconfig` 命令有两个作用:它继承了当前的内核配置,并且允许你配置补丁带来的新的选项。
你或许需要运行 `make menuconfig` 命令,它启动了一个基于 ncurses 的菜单界面,列出了新的内核所有可能的选项。整个菜单可能看不过来,但是它是以旧的内核配置为基础的,你可以遍历菜单并且禁用掉你没有或不需要的硬件模块。另外,如果你知道自己有一些硬件没有包含在当前的配置中,你可以选择构建它,当作模块或者直接嵌入内核中。理论上,这些并不是必要的,因为你可以猜想,当前的内核运行良好只是缺少了补丁,当使用补丁的时候可能已经激活了所有设备所必要的选项。
下一步,编译内核和它的模块:
```
$ make bzImage
$ make modules
```
这会产生一个叫作 `vmlinuz` 的文件,它是你的可引导内核的压缩版本。保存旧的版本并在 `/boot` 文件夹下替换为新的。
```
$ sudo mv /boot/vmlinuz /boot/vmlinuz.nopatch
$ sudo cp arch/x86_64/boot/bzImage /boot/vmlinuz
$ sudo mv /boot/System.map /boot/System.map.stock
$ sudo cp System.map /boot/System.map
```
到目前为止,你已经打上了补丁并且编译了内核和它的模块,你安装了内核,但你并没有安装任何模块。那就是最后的步骤:
```
$ sudo make modules_install
```
新的内核已经就位,并且它的模块也已经安装。
最后一步是更新你的引导程序,为了让你的计算机在加载 Linux 内核之前知道它的位置。GRUB 引导程序使这一过程变得相当简单:
```
$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```
### 现实生活中的编译
当然,现在没有人手动执行这些命令了。相反,请参考你的发行版的说明,使用发行版维护人员所用的开发者工具集来修改内核。这些工具集可能会创建一个集成所有补丁的可安装软件包,告诉你的包管理器进行升级并更新你的引导程序。
### 内核
操作系统和内核都有些玄妙,但要理解构成它们的组件并不难。下一次当你看到某个技术似乎无法应用在 Linux 上时,深呼吸,调查一下可用的驱动程序,寻找一条阻力最小的路径。Linux 比以前简单多了,内核也是如此。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/linux-kernel-21st-century
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[LuMing](https://github.com/LuuMing)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/migration_innovation_computer_software.png?itok=VCFLtd0q (and old computer and a new computer, representing migration to new software or hardware)
[2]: https://nouveau.freedesktop.org/wiki/
[3]: http://fedoraproject.org
[4]: http://ubuntu.com
[5]: https://wiki.linuxfoundation.org/openprinting/database/foomatic
[6]: https://www.cups.org/
[7]: https://wireless.wiki.kernel.org/en/users/drivers
[8]: https://opensource.com/article/19/7/reboot-linux
[9]: https://opensource.com/sites/default/files/uploads/nvidia.jpg (Nvidia configuration application)
[10]: https://linuxwacom.github.io
[11]: https://developers.hp.com/hp-linux-imaging-and-printing
[12]: https://opensource.com/sites/default/files/uploads/hplip.jpg (HPLIP in action)
[13]: https://www.kernel.org/


@ -0,0 +1,190 @@
[#]: collector: (lujun9972)
[#]: translator: (amwps290)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11426-1.html)
[#]: subject: (Adding themes and plugins to Zsh)
[#]: via: (https://opensource.com/article/19/9/adding-plugins-zsh)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
给 Zsh 添加主题和插件
======
> 通过 Oh My Zsh 安装的主题和插件来扩展 Zsh 的功能。
![](https://img.linux.net.cn/data/attachment/album/201910/05/120457r49mk2l9oelv94bi.jpg)
在我的[前文][2]中,我向大家展示了如何安装并使用 [Z-Shell][2] (Zsh)。对于某些用户来说Zsh 最令人激动的是它可以安装主题。Zsh 安装主题非常容易,一方面是因为有非常活跃的社区为 Z-Shell 设计主题,另一方面是因为有 [Oh My Zsh][3] 这个项目。这使得安装主题变得轻而易举。
主题的变化可能会立刻吸引你的注意力。如果你安装了 Zsh 并将其设为默认 Shell 后不喜欢其默认主题的样子,那么你可以立即换上 Oh My Zsh 自带的 100 多个主题中的一个。Oh My Zsh 不仅拥有大量精美的主题,还有数以百计的扩展 Zsh 功能的插件。
### 安装 Oh My Zsh
Oh My Zsh 的[官网][3]建议你使用一个脚本,在联网的情况下安装这个包。尽管 Oh My Zsh 项目几乎是值得信任的,但是盲目地在你的电脑上运行一个脚本终归是一种糟糕的做法。如果你想运行这个脚本,可以先把它下载下来,看一下它实现了什么功能,在确信你已经了解了它的所作所为之后,再运行它。
如果你下载了脚本并且阅读了它,你就会发现安装过程仅仅只有三步:
#### 1、克隆 oh-my-zsh
第一步,克隆 oh-my-zsh 库到 `~/.oh-my-zsh` 目录:
```
% git clone https://github.com/robbyrussell/oh-my-zsh ~/.oh-my-zsh
```
#### 2、切换配置文件
下一步,备份你已有的 `.zshrc` 文件,然后将 oh-my-zsh 自带的配置文件移动到这个地方。这两步操作可以一步完成,只需要你的 `mv` 命令支持 `-b` 这个选项。
```
% mv -b \
~/.oh-my-zsh/templates/zshrc.zsh-template ~/.zshrc
```
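`-b` 选项会把被覆盖的文件以 `~` 后缀备份下来。下面用临时文件演示这一行为(文件名仅为示例;`-b` 是 GNU coreutils 的选项macOS 自带的 `mv` 不支持):

```shell
# 在临时目录中演示 mv -b被覆盖的 old 会自动备份为 old~
cd "$(mktemp -d)"
echo "旧配置" > old
echo "新配置" > new
mv -b new old
cat old    # 输出:新配置
cat old~   # 输出:旧配置
```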
#### 3、编辑配置文件
默认情况下Oh My Zsh 自带的配置文件是非常简陋的。如果你想将你自己的 `~/.zshrc` 文件合并到 `.oh-my-zsh` 的配置文件中,你可以使用 [cat][4] 命令将你旧的配置文件添加到新文件的末尾。
```
% cat ~/.zshrc~ >> ~/.zshrc
```
看一下默认的配置文件以及它提供的一些选项。用你最喜欢的编辑器打开 `~/.zshrc` 文件。这个文件有非常良好的注释。这是了解它的一个非常好的方法。
例如,你可以更改 `.oh-my-zsh` 目录的位置。在安装的时候,它默认位于你的家目录。但是,根据 [Free Desktop][5] 所定义的现代 Linux 规范,这个目录应当放置于 `~/.local/share`。你可以在配置文件中进行修改,如下所示:
```
# Path to your oh-my-zsh installation.
export ZSH=$HOME/.local/share/oh-my-zsh
```
然后将 .oh-my-zsh 目录移动到你新配置的目录下:
```
% mv ~/.oh-my-zsh $HOME/.local/share/oh-my-zsh
```
如果你使用的是 macOS这个目录的位置不那么明确但最合适的位置可能是 `$HOME/Library/Application\ Support`。
### 重新启动 Zsh
编辑配置文件之后,你必须重新启动你的 Shell。在这之前你必须确定你的任何操作都已正确完成。例如在修改了 `.oh-my-zsh` 目录的路径之后,不要忘记将目录移动到新的位置。如果你不想重新启动你的 Shell也可以使用 `source` 命令来使你的配置文件生效。
```
% source ~/.zshrc
 .oh-my-zsh git:(master) ✗
```
你可以忽略任何关于缺失更新文件的警告;它们将会在重启 Shell 时得到解决。
### 更换你的主题
安装好 oh-my-zsh 之后,你的 Zsh 主题就被设置为 `robbyrussell`,这是该项目维护者使用的主题。这个主题的改动非常小,仅仅是改变了提示符的颜色。
你可以通过列出 `.oh-my-zsh` 的 `themes` 目录下的文件来查看所有已安装的主题:
```
 .oh-my-zsh git:(master) ✗ ls ~/.local/share/oh-my-zsh/themes
3den.zsh-theme
adben.zsh-theme
af-magic.zsh-theme
afowler.zsh-theme
agnoster.zsh-theme
[...]
```
想在切换主题之前查看一下它的样子,你可以查看 Oh My Zsh 的 [wiki][6] 页面。要查看更多主题,可以查看 [外部主题][7] wiki 页面。
大部分的主题是非常易于安装和使用的,仅仅需要改变 `.zshrc` 文件中的配置选项然后重新载入配置文件。
```
➜ ~ sed -i 's/_THEME=\"robbyrussell\"/_THEME=\"linuxonly\"/g' ~/.zshrc
➜ ~ source ~/.zshrc
seth@darkstar:pts/0-&gt;/home/skenlon (0) ➜
```
其他的主题可能需要一些额外的配置。例如,为了使用 `agnoster` 主题,你必须先安装 Powerline 字体。这是一个开源字体,如果你使用 Linux 操作系统的话,这个字体很可能在你的软件库中存在。使用下面的命令安装这个字体:
```
➜ ~ sudo dnf install powerline-fonts
```
在配置文件中更改你的主题:
```
➜ ~ sed -i 's/_THEME=\"linuxonly\"/_THEME=\"agnoster\"/g' ~/.zshrc
```
重新启动你的 Shell一个简单的 `source` 命令并不会起作用)。一旦重启,你就可以看到新的主题:
![agnoster theme][8]
### 安装插件
Oh My Zsh 有超过 200 个插件,你可以在 `.oh-my-zsh/plugins` 中看到它们。每个插件的目录下都有一个 `README` 文件,解释了这个插件的作用。
一些插件相当简单。例如,`dnf`、`ubuntu`、`brew` 和 `macports` 插件仅仅是为了简化与 DNF、Apt、Homebrew 和 MacPorts 的交互操作而定义的一些别名。
而其他的一些插件则较为复杂,`git` 插件默认是被激活使用的。当你的目录是一个 git 仓库的时候,这个插件就会更新你的 Shell 提示符,以显示当前的分支和是否有未提交的更改。
为了激活某个插件,你可以将它添加到配置文件 `~/.zshrc` 的插件列表中。例如,要添加 `dnf` 和 `pass` 插件,按照如下的方式更改:
```
plugins=(git dnf pass)
```
保存修改,重新启动你的 Shell。
```
% source ~/.zshrc
```
这个扩展现在就可以使用了。你可以通过使用 `dnf` 提供的别名来测试一下:
```
% dnfs fop
====== Name Exactly Matched: fop ======
fop.noarch : XSL-driven print formatter
```
不同的插件做不同的事,因此你可以一次安装一两个插件来帮你学习新的特性和功能。
### 兼容性
一些 Oh My Zsh 插件具有通用性。如果你看到一个插件声称它可以与 Bash 兼容,那么它就可以在你自己的 Bash 中使用。另一些插件需要 Zsh 提供的特定功能,因此并不是所有插件都能在 Bash 中工作。不过,你可以在 Bash 中引入 `dnf`、`ubuntu`、`firewalld` 等一些插件,用 `source` 命令使你选择的插件生效。例如:
```
if [ -d $HOME/.local/share/oh-my-zsh/plugins ]; then
        source $HOME/.local/share/oh-my-zsh/plugins/dnf/dnf.plugin.zsh
fi
```
### 选择或者不选择 Zsh
Z-shell 的内置功能和它由社区贡献的扩展功能都非常强大。你可以把它当成你的主 Shell 使用,你也可以在你休闲娱乐的时候尝试一下。这取决于你的爱好。
什么是你最喜爱的主题和扩展可以在下方的评论告诉我们!
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/adding-plugins-zsh
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[amwps290](https://github.com/amwps290)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/build_structure_tech_program_code_construction.png?itok=nVsiLuag (Someone wearing a hardhat and carrying code )
[2]: https://linux.cn/article-11378-1.html
[3]: https://ohmyz.sh/
[4]: https://opensource.com/article/19/2/getting-started-cat-command
[5]: http://freedesktop.org
[6]: https://github.com/robbyrussell/oh-my-zsh/wiki/Themes
[7]: https://github.com/robbyrussell/oh-my-zsh/wiki/External-themes
[8]: https://opensource.com/sites/default/files/uploads/zsh-agnoster.jpg (agnoster theme)
[9]: https://opensource.com/resources/what-is-git
[10]: https://opensource.com/article/19/7/make-linux-stronger-firewalls


@@ -0,0 +1,325 @@
[#]: collector: (lujun9972)
[#]: translator: (wenwensnow)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11430-1.html)
[#]: subject: (Hone advanced Bash skills by building Minesweeper)
[#]: via: (https://opensource.com/article/19/9/advanced-bash-building-minesweeper)
[#]: author: (Abhishek Tamrakar https://opensource.com/users/tamrakar)
通过编写扫雷游戏提高你的 Bash 技巧
======
> 那些令人怀念的经典游戏可是提高编程能力的好素材。今天就让我们仔细探索一番,怎么用 Bash 编写一个扫雷程序。
![bash logo on green background][1]
我在编程教学方面不是专家,但当我想更好掌握某一样东西时,会试着找出让自己乐在其中的方法。比方说,当我想在 shell 编程方面更进一步时,我决定用 Bash 编写一个[扫雷][2]游戏来加以练习。
如果你是一个有经验的 Bash 程序员,希望在提高技巧的同时乐在其中,那么请跟着我编写一个运行在终端中的扫雷游戏。完整代码可以在这个 [GitHub 存储库][3]中找到。
### 做好准备
在我编写任何代码之前,我列出了该游戏所必须的几个部分:
1. 显示雷区
2. 创建游戏逻辑
3. 创建判断单元格是否可选的逻辑
4. 记录可用和已查明(已排雷)单元格的个数
5. 创建游戏结束逻辑
### 显示雷区
在扫雷中,游戏界面是一个由不透明小方格组成的二维网格(行和列)。每一格下都有可能藏有地雷。玩家的任务就是找到那些不含雷的方格,并且在这一过程中,不能点到地雷。这个 Bash 版本的扫雷使用 10x10 的矩阵,实际逻辑则由一个简单的 Bash 数组来完成。
首先,我生成了一些随机数字,作为地雷在雷区里的位置。在开始编写代码之前就确定好地雷的数量,会让实现容易一些。实现这一功能的逻辑可以更好,但我这么做,是为了让游戏实现保持简洁,并留有改进空间。(我编写这个游戏纯属娱乐,但如果你能将它修改得更好,我也是很乐意的。)
下面这些变量在整个过程中是不变的,声明它们是为了随机生成数字。就像下面的 `a` - `g` 的变量,它们会被用来计算可排除的地雷的值:
```
# 变量
score=0 # 会用来存放游戏分数
# 下面这些变量,用来随机生成可排除地雷的实际值
a="1 10 -10 -1"
b="-1 0 1"
c="0 1"
d="-1 0 1 -2 -3"
e="1 2 20 21 10 0 -10 -20 -23 -2 -1"
f="1 2 3 35 30 20 22 10 0 -10 -20 -25 -30 -35 -3 -2 -1"
g="1 4 6 9 10 15 20 25 30 -30 -24 -11 -10 -9 -8 -7"
#
# 声明
declare -a room # 声明一个 room 数组,它用来表示雷区的每一格。
```
接下来我会用列0-9和行a-j显示出游戏界面并且使用一个 10x10 矩阵作为雷区。(`M[10][10]` 是一个索引从 0-99有 100 个值的数组。) 如想了解更多关于 Bash 数组的内容,请阅读这本书[那些关于 Bash 你所不了解的事: Bash 数组简介][4]。
创建一个叫 `plough` 的函数,我们先将标题显示出来:两个空行、列头,和一行 `-`,以示意往下是游戏界面:
```
printf '\n\n'
printf '%s' "     a   b   c   d   e   f   g   h   i   j"
printf '\n   %s\n' "-----------------------------------------"
```
然后,我初始化一个计数器变量,叫 `r`,它会用来记录已显示多少横行。注意,稍后在游戏代码中,我们会用同一个变量 `r`,作为我们的数组索引。在 [Bash for 循环][5]中,用 `seq` 命令从 0 增加到 9。我用整数占位符`%d`)来显示行号(`$row`,由 `seq` 定义):
```
r=0 # 计数器
for row in $(seq 0 9); do
printf '%d ' "$row" # 显示 行数 0-9
```
在我们接着往下做之前,让我们看看到现在都做了什么。我们先横着显示 `[a-j]` 然后再将 `[0-9]` 的行号显示出来,我们会用这两个范围,来确定用户排雷的确切位置。
接着,在每行中,插入列,所以是时候写一个新的 `for` 循环了。这一循环管理着每一列,也就是说,实际上是生成游戏界面的每一格。我添加了一些辅助函数,你能在源码中看到它的完整实现。 对每一格来说,我们需要一些让它看起来像地雷的东西,所以我们先用一个点(`.`)来初始化空格。为了实现这一想法,我们用的是一个叫 [`is_null_field`][6] 的自定义函数。 同时,我们需要一个存储每一格具体值的数组,这儿会用到之前已定义的全局数组 [`room`][7] , 并用 [变量 `r`][8]作为索引。随着 `r` 的增加,遍历所有单元格,并随机部署地雷。
```
  for col in $(seq 0 9); do
((r+=1)) # 每处理一个单元格,计数器 r 加一
is_null_field $r # 假设这里有个函数,它会检查单元格是否为空,为真,则此单元格初始值为点(.
printf '%s \e[33m%s\e[0m ' "|" "${room[$r]}" # 最后显示分隔符,注意,${room[$r]} 的第一个值为 '.',等于其初始值。
#结束 col 循环
done
```
最后,为了保持游戏界面整齐好看,我会在每行用一个竖线作为结尾,并在最后结束行循环:
```
printf '%s\n' "|" # 显示出行分隔符
printf ' %s\n' "-----------------------------------------"
# 结束行循环
done
printf '\n\n'
```
完整的 `plough` 代码如下:
```
plough()
{
  r=0
  printf '\n\n'
  printf '%s' "     a   b   c   d   e   f   g   h   i   j"
  printf '\n   %s\n' "-----------------------------------------"
  for row in $(seq 0 9); do
    printf '%d  ' "$row"
    for col in $(seq 0 9); do
       ((r+=1))
       is_null_field $r
       printf '%s \e[33m%s\e[0m ' "|" "${room[$r]}"
    done
    printf '%s\n' "|"
    printf '   %s\n' "-----------------------------------------"
  done
  printf '\n\n'
}
```
我花了点时间来思考,`is_null_field` 的具体功能是什么。让我们来看看,它到底能做些什么。在最开始,我们需要游戏有一个固定的状态。你可以随便选择个初始值,可以是一个数字或者任意字符。我最后决定,所有单元格的初始值为一个点(`.`),因为我觉得,这样会让游戏界面更好看。下面就是这一函数的完整代码:
```
is_null_field()
{
local e=$1 # 在数组 room 中,我们已经用过循环变量 'r' 了,这次我们用 'e'
if [[ -z "${room[$e]}" ]];then
room[$r]="." #这里用点.)来初始化每一个单元格
fi
}
```
现在,我已经初始化了所有的格子,现在只要用一个很简单的函数就能得出当前游戏中还有多少单元格可以操作:
```
get_free_fields()
{
free_fields=0 # 初始化变量
for n in $(seq 1 ${#room[@]}); do
if [[ "${room[$n]}" = "." ]]; then # 检查当前单元格是否等于初始值(.),结果为真,则记为空余格子。
((free_fields+=1))
    fi
  done
}
```
这是显示出来的游戏界面,`[a-j]` 为列,`[0-9]` 为行。
![Minefield][9]
### 创建玩家逻辑
玩家操作背后的逻辑在于,先从 [stdin][10] 中读取数据作为坐标,然后再找出对应位置实际包含的值。这里用到了 Bash 的[参数扩展][11],来设法得到行列数。然后将代表列数的字母传给分支语句,从而得到其对应的列数。为了更好地理解这一过程,可以看看下面这段代码中,变量 `o` 所对应的值。 举个例子,玩家输入了 `c3`,这时 Bash 将其分成两个字符:`c` 和 `3`。为了简单起见,我跳过了如何处理无效输入的部分。
```
colm=${opt:0:1} # 得到第一个字符,一个字母
ro=${opt:1:1} # 得到第二个字符,一个整数
case $colm in
a ) o=1;; # 最后,通过字母得到对应列数。
b ) o=2;;
    c ) o=3;;
    d ) o=4;;
    e ) o=5;;
    f ) o=6;;
    g ) o=7;;
    h ) o=8;;
    i ) o=9;;
    j ) o=10;;
  esac
```
下面的代码会计算用户所选单元格实际对应的数字,然后将结果储存在变量中。
这里多次用到了 `shuf` 命令,它是一个专门用来生成随机排列的 [Linux 命令][12]。`-i` 选项用来指定需要打乱的数字或范围,`-n` 选项则规定最多输出几个值。Bash 中,可以在两个圆括号内进行[数学计算][13],这里我们会多次用到。
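在继续之前,可以先单独试一下 `shuf` 和双圆括号算术的效果(下面的数值仅作演示):

```shell
# -i 指定整数范围,-n 限制输出个数
n=$(shuf -i 0-5 -n 1)
echo "随机数:$n"
# 双圆括号内可以直接做算术,和正文中的坐标公式一致
echo $(( (3*10) + 3 ))   # 输出 33
```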
还是沿用之前的例子,玩家输入了 `c3`。 接着,它被转化成了 `ro=3``o=3`。 之后,通过上面的分支语句代码, 将 `c` 转化为对应的整数,带进公式,以得到最终结果 `i` 的值。
```
i=$(((ro*10)+o)) # 遵循运算规则,算出最终值
is_free_field $i $(shuf -i 0-5 -n 1) # 调用自定义函数,判断其指向空/可选择单元格。
```
仔细观察这个计算过程,看看最终结果 `i` 是如何计算出来的:
```
i=$(((ro*10)+o))
i=$(((3*10)+3))=$((30+3))=33
```
最终结果是 33玩家输入的坐标指向游戏界面中的第 33 个单元格,也就是第 3 行(行号从 0 开始,否则就是第 4 行)的第 3 列。
### 创建判断单元格是否可选的逻辑
为了找到地雷,在将坐标转化,并找到实际位置之后,程序会检查这一单元格是否可选。如不可选,程序会显示一条警告信息,并要求玩家重新输入坐标。
在这段代码中,单元格是否可选,是由数组里对应的值是否为点(`.`)决定的。如果可选,则重置单元格对应的值,并更新分数。反之,因为其对应值不为点,则设置变量 `not_allowed`。为简单起见,游戏中[警告消息][14]这部分源码,我会留给读者们自己去探索。
```
is_free_field()
{
  local f=$1
  local val=$2
  not_allowed=0
  if [[ "${room[$f]}" = "." ]]; then
    room[$f]=$val
    score=$((score+val))
  else
    not_allowed=1
  fi
}
```
![Extracting mines][15]
如果输入坐标有效,且对应位置不是地雷,就会如下图所示。玩家输入 `h6` 后,游戏界面上出现了一些随机生成的值,这些值会被累加到玩家的得分中。
![Extracting mines][16]
还记得我们开头定义的变量 `a` - `g` 吗?我会用它们来确定随机生成地雷的具体值。程序会根据随机取到的变量 `m`,来生成周围其他单元格的值(如上图所示):把 `m` 所指列表中的每个偏移量和玩家输入的坐标相加,结果存放在 `i`(计算方式如上)中。
请注意下面代码中的 `X`,它是我们唯一的游戏结束标志。我们将它添加到随机列表中。在 `shuf` 命令的魔力下,`X` 可以在任意情况下出现,但如果你足够幸运的话,也可能一直不会出现。
```
m=$(shuf -e a b c d e f g X -n 1) # 将 X 添加到随机列表中,当 m=X游戏结束
if [[ "$m" != "X" ]]; then # X 将会是我们爆炸地雷(游戏结束)的触发标志
for limit in ${!m}; do # !m 代表 m 变量的值
field=$(shuf -i 0-5 -n 1) # 然后再次获得一个随机数字
index=$((i+limit)) # 将 m 中的每一个值和 index 加起来,直到列表结尾
is_free_field $index $field
    done
fi
```
我想要游戏界面中,所有随机显示出来的单元格,都靠近玩家选择的单元格。
![Extracting mines][17]
### 记录已选择和可用单元格的个数
这个程序需要记录游戏界面中哪些单元格是可选择的。否则,程序会一直让用户输入数据,即使所有单元格都被选中过。为了实现这一功能,我创建了一个叫 `free_fields` 的变量,初始值为 `0`。用一个 `for` 循环,记录下游戏界面中可选择单元格的数量。 如果单元格所对应的值为点(`.`),则 `free_fields` 加一。
```
get_free_fields()
{
  free_fields=0
  for n in $(seq 1 ${#room[@]}); do
    if [[ "${room[$n]}" = "." ]]; then
      ((free_fields+=1))
    fi
  done
}
```
等下,如果 `free_fields=0` 呢? 这意味着,玩家已选择过所有单元格。如果想更好理解这一部分,可以看看这里的[源代码][18]。
```
if [[ $free_fields -eq 0 ]]; then # 这意味着你已选择过所有格子
printf '\n\n\t%s: %s %d\n\n' "You Win" "you scored" "$score"
      exit 0
fi
```
### 创建游戏结束逻辑
对于游戏结束这种情况,我们这里使用了一些很[巧妙的技巧][19],将结果在屏幕中央显示出来。我把这部分留给读者朋友们自己去探索。
```
if [[ "$m" = "X" ]]; then
g=0 # 为了在参数扩展中使用它
room[$i]=X # 覆盖此位置原有的值并将其赋值为X
for j in {42..49}; do # 在游戏界面中央,
out="gameover"
k=${out:$g:1} # 在每一格中显示一个字母
room[$j]=${k^^}
      ((g+=1))
    done
fi
```
最后,我们显示出玩家最关心的两行。
```
if [[ "$m" = "X" ]]; then
      printf '\n\n\t%s: %s %d\n' "GAMEOVER" "you scored" "$score"
      printf '\n\n\t%s\n\n' "You were just $free_fields mines away."
      exit 0
fi
```
![Minecraft Gameover][20]
文章到这里就结束了,朋友们!如果你想了解更多,具体可以查看我的 [GitHub 存储库][3],那儿有这个扫雷游戏的源代码,并且你还能找到更多用 Bash 编写的游戏。 我希望,这篇文章能激起你学习 Bash 的兴趣,并乐在其中。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/advanced-bash-building-minesweeper
作者:[Abhishek Tamrakar][a]
选题:[lujun9972][b]
译者:[wenwensnow](https://github.com/wenwensnow)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/tamrakar
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background)
[2]: https://en.wikipedia.org/wiki/Minesweeper_(video_game)
[3]: https://github.com/abhiTamrakar/playground/tree/master/bash_games
[4]: https://opensource.com/article/18/5/you-dont-know-bash-intro-bash-arrays
[5]: https://opensource.com/article/19/6/how-write-loop-bash
[6]: https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L114-L120
[7]: https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L41
[8]: https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L74
[9]: https://opensource.com/sites/default/files/uploads/minefield.png (Minefield)
[10]: https://en.wikipedia.org/wiki/Standard_streams#Standard_input_(stdin)
[11]: https://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html
[12]: https://linux.die.net/man/1/shuf
[13]: https://www.tldp.org/LDP/abs/html/dblparens.html
[14]: https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L143-L177
[15]: https://opensource.com/sites/default/files/uploads/extractmines.png (Extracting mines)
[16]: https://opensource.com/sites/default/files/uploads/extractmines2.png (Extracting mines)
[17]: https://opensource.com/sites/default/files/uploads/extractmines3.png (Extracting mines)
[18]: https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L91
[19]: https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L131-L141
[20]: https://opensource.com/sites/default/files/uploads/gameover.png (Minecraft Gameover)


@@ -0,0 +1,56 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11424-1.html)
[#]: subject: (In Fedora 31, 32-bit i686 is 86ed)
[#]: via: (https://fedoramagazine.org/in-fedora-31-32-bit-i686-is-86ed/)
[#]: author: (Justin Forbes https://fedoramagazine.org/author/jforbes/)
Fedora 31 将放弃 32 位 i686 支持
======
![][1]
Fedora 31 中[丢弃了][2] 32 位 i686 内核及其可启动镜像。虽然可能有一些用户仍然拥有无法使用 64 位 x86_64 内核的硬件,但数量很少。本文为你介绍这次更改的来龙去脉,以及在 Fedora 31 中仍然保留的 32 位元素。
### 发生了什么?
i686 架构实质上从 [Fedora 27 版本][3]就进入了社区支持阶段LCTT 译注不再由官方支持。不幸的是社区中没有足够的成员愿意做维护该体系结构的工作。不过请放心Fedora 不会删除所有 32 位软件包,仍在构建许多 i686 软件包,以确保诸如 multilib、wine 和 Steam 之类的东西可以继续工作。
尽管不再合成和对外镜像该存储库,但仍然有一个 koji 的 i686 存储库,它可以与 mock 一起使用来构建 32 位软件包,在紧要关头也可以用来安装那些不属于 x86_64 multilib 存储库的 32 位软件包。当然,维护人员预期它只用于有限的使用场景。只是需要运行某个 32 位应用程序的用户,应该可以在 64 位系统上通过 multilib 来运行它。
### 如果你要运行 32 位应用需要做什么?
如果你仍在运行 32 位 i686 系统,你会在 Fedora 30 的生命周期内继续收到受支持的 Fedora 更新,直到大约 2020 年 5 月或 6 月。到那时,如果硬件支持,你可以将系统重装为 64 位的 x86_64或者如果可能的话将其替换为支持 64 位的硬件。
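一个快速的自查方法(假设你在 Linux 终端中运行):用 `uname -m` 查看当前系统架构,并通过 `/proc/cpuinfo` 中的 `lm`long mode标志判断 CPU 是否具备 64 位能力:

```shell
# 查看当前运行的内核架构x86_64 表示 64 位i686/i386 表示 32 位
uname -m
# CPU 标志中含有 lm即表示支持 64 位
if grep -qw lm /proc/cpuinfo 2>/dev/null; then
    echo "CPU 支持 64 位"
else
    echo "未检测到 64 位支持"
fi
```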
社区中有一个用户已经成功地从 32 位 Fedora “升级” 到了 64 位 x86 Fedora。虽然这不是预期或受支持的升级路径但应该也可行。该项目希望可以为具有 64 位功能的硬件的用户提供一些文档,以在 Fedora 30 使用寿命终止之前说明该升级过程。
如果你有 64 位的 CPU但由于内存不足而运行 32 位 Fedora请尝试[备用桌面流派][4]之一。LXDE 和其他轻量级桌面在内存受限的环境中往往表现良好。对于仅在可淘汰的旧 32 位硬件上运行简单服务器的用户,请考虑使用较新的 ARM 板之一。在许多情况下,仅节省的电费就可以支付新硬件的费用。如果以上皆不可行,[CentOS 7][5] 提供了一个 32 位镜像,并对该平台提供长期支持。
### 安全与你
尽管有些用户可能会在生命周期结束后继续运行旧版本的 Fedora但强烈建议不要这样做。人们不断研究软件的安全问题。通常他们发现这些问题已经存在多年了。
一旦 Fedora 维护人员知道了此类问题,他们通常会为它们打补丁,并为支持的发行版提供更新,而不会给使用寿命已终止的发行版提供。当然,一旦这些漏洞公开,就会有人尝试利用它们。如果你在生命周期结束时运行了较旧的发行版,则安全风险会随着时间的推移而增加,从而使你的系统面临不断增长的风险。
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/in-fedora-31-32-bit-i686-is-86ed/
作者:[Justin Forbes][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/jforbes/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/09/i686-86-816x345.jpg
[2]: https://fedoraproject.org/wiki/Changes/Stop_Building_i686_Kernels
[3]: https://fedoramagazine.org/announcing-fedora-27/
[4]: https://spins.fedoraproject.org
[5]: https://centos.org
[6]: https://unsplash.com/@alexkixa?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[7]: https://unsplash.com/s/photos/motherboard?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText


@@ -0,0 +1,77 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How Open Source Software Lets Us Push It to the Limit)
[#]: via: (https://opensourceforu.com/2019/09/how-open-source-software-lets-us-push-it-to-the-limit/)
[#]: author: (Ebbe Kernel https://opensourceforu.com/author/ebbe-kernel/)
How Open Source Software Lets Us Push It to the Limit
======
[![best text editors for web development][1]][2]
_Here is a conversation with Johan, a leading developer of an advanced proxy network. As his team tackles complex load-balancing problems, they are constantly forced to push their solutions beyond what the original developers imagined. He says that the decision to use the open source load balancer HAProxy has made it possible to do what would not be possible with other solutions._
**Ebbe: Tell us a bit about why you chose HAProxy for load-balancing.**
**Johan:** Even though we use both open source and private source solutions for our network, I am a real ambassador for open source in our team. I think HAProxy is a perfect example of a great solution for a particular problem that can be adapted in unforeseen ways precisely because it is open sourced.
Ever since we started work developing our proxy network, I looked into using an open source solution for load-balancing. We tried Nginx and Squid, but we soon realized that HAProxy is an indisputable industry standard and the only option for our product.
**Ebbe: What made it exemplary?**
**Johan:** What I've found with great open source software is that it must be constantly evolving, updated and managed. In the case of HAProxy, we get minor updates every month. At first we liked the quick bug fixes, but now we have jumped on board with the new major release, as it offered new features we were aching to implement.
Everyone knows that you do not update any working solution until the last minute, to make sure that early bugs are fixed, but good software [_offers features you can't resist_][3]. We trust it because it is transparent and has a strong community that has proven it can tackle most issues quickly.
**Ebbe: You mentioned the community, which often accompanies great open source solutions. Does it really have that much of an impact for your business?**
**Johan:** Of course. In terms of scale, everything pales in comparison to the community that HAProxy has mustered over the years. Every issue we encounter is usually solved or already escalated, and, as more and more companies use HAProxy, the community becomes vaster and more intelligent.
What we've found with other services we use is that even enterprise solutions might not offer the freedom and flexibility we need. In our case, an active community is what makes it possible to adapt software in previously untested ways.
**Ebbe: What in particular does it let you do?**
**Johan:** Since we chose HAProxy for our network, we have found that creating add-ons with Lua lets us fully customize it to our own logic and integrate it with all of the other services that make the network work. This was extremely important, as we have a lot of services that need to work together, including some that are not open source.
Another great thing is that the community is always solving problems and bugs, so we do not really encounter stuff we couldn't handle. Over the years, I've found that this is only possible with open source software.
What makes it a truly exceptional open source solution is the documentation. Even though I've been working closely with HAProxy for over two years, I still find new things almost every month.
I know it sounds like a lot of praise, but I really love HAProxy for its resilience to our constant attempts to break it.
**Ebbe: What do you mean by break it?**
**Johan:** Originally, HAProxy works great as [_a load balancer for a couple of dozen servers_][4], usually 10 to 20. But, since our network is several orders of magnitude larger, we've constantly pushed it to its limits.
It's not uncommon for our HAProxy instances to load-balance over 10,000 servers, and we are certain that the original developers haven't thought about optimizing it for these kinds of loads. Due to this, it sometimes fails, but we are constantly optimizing our own solutions to make everything work. And, thanks to HAProxy's developers and community, we are able to solve most of the issues we encounter easily.
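For readers unfamiliar with HAProxy, a minimal round-robin setup for a handful of backends looks something like the sketch below; the section names and addresses are made up for illustration, and a deployment at the scale Johan describes would involve far more tuning:

```
frontend www
    bind *:80
    default_backend pool

backend pool
    balance roundrobin
    server web1 10.0.0.11:8080 check
    server web2 10.0.0.12:8080 check
```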
**Ebbe: Doesnt this downtime impact your product negatively?**
**Johan:** First of all, our product would not work without HAProxy. At least not as successfully as it has over the years. As I've said, all other solutions on the market are less optimized for what we do than HAProxy.
Also, breaking a service is nothing bad in and of itself. We always have backup services in place to handle the network. Testing in production is what we do for a simple reason: since we break the HAProxy so much, we cannot really test any updates before launching something on our network. We need the full scale of our network to run HAProxy instances and all the millions of servers to be available, and creating such a testing environment seems like a huge waste of resources.
**Ebbe: Do you have anything to add to the community of OpenSourceForU.com?**
**Johan:** My team and I want to thank everyone for supporting open source principles and making the world a better place!
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/09/how-open-source-software-lets-us-push-it-to-the-limit/
作者:[Ebbe Kernel][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/ebbe-kernel/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2017/07/developer-blog.jpg?resize=696%2C433&ssl=1 (text editor)
[2]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2017/07/developer-blog.jpg?fit=750%2C467&ssl=1
[3]: https://smartproxy.com/what-is-a-proxy
[4]: https://opensourceforu.com/2016/09/github-open-sources-internal-load-balancer/


@@ -0,0 +1,89 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Mobile App Security Tips to Secure Your Mobile Applications)
[#]: via: (https://opensourceforu.com/2019/10/mobile-app-security-tips-to-secure-your-mobile-applications/)
[#]: author: (Petr Kudlacek https://opensourceforu.com/author/petr-kudlacek/)
Mobile App Security Tips to Secure Your Mobile Applications
======
[![][1]][2]
 
_The world has gone mobile, as nearly everyone now carries a smartphone with an Internet connection. Using mobile devices, you can do everything online from the comfort of your home. You can do banking, track your health and control the Internet of Things at home._
Today, the use of mobile applications is also constantly increasing, and they completely dominate mobile Internet usage. As per the Flurry report, mobile applications account for approximately 86% of the average U.S. mobile user's time, which amounts to more than two hours per day.
Moreover, applications that are obtainable through online app distributors like the Google Play Store, Apple's App Store and third-party marketplaces are no doubt the dominant form of delivering value to users across the world.
[![][3]][4]
Moreover, companies and organizations are embracing mobile applications as a great way to boost employees' skills and productivity, in line with their new agile and mobile lifestyle. But do you know whether these mobile apps are safe and secure, and protected from any kind of virus?
**What to do to Secure Your Mobile App?**
If you have decided to develop an application, or already have one, chances are you may neglect to consider how to secure your mobile application, your data and your customers' data. A mobile application is more than the software code itself: there is also the business logic on the back-end network and the client side, as well as databases.
All of these play a significant role in the fabric of the app's security. For companies with mobile apps in a packed, competitive market, robust security is essential, as it can be a big differentiator. In this post, we mention a few tips to consider for mobile app security.
**Essential Tips to Secure Your Mobile Apps**
_Ensure that You Secure Your Network Connections On The Back-end_
Servers and cloud servers that access an app's API need to have security measures in place in order to protect data and prevent unauthorized access. APIs and those accessing them need to be verified to prevent snooping on sensitive information passing from the client back to the application's server and database.
* If you want to store your data and important documents securely, one of the best methods is containerization, which keeps them in encrypted containers.
* You can get in touch with a professional network security analyst, who can conduct penetration testing and vulnerability assessments of your network to make sure the right data is protected in the right ways.
* Today, federation is a next-level security measure, which spreads resources across servers so that they are not all in one place, and separates key resources from users with encryption measures.
**Secure Transactions: Regulate the Implementation of Risky Mobile Transactions**
Today, mobile applications allow users to easily deal with enterprise services on the go, so the risk tolerance for transactions will differ. Therefore, it is essential for organizations to adopt an approach of risk-aware transaction execution, which restricts client-side functionality based on policies that consider mobile risk factors like user location, device security attributes and the security of the network connection.
Enterprise apps can easily leverage an enterprise mobile risk engine to correlate risk factors like IP velocity (access to the same account from two far-apart locations within a short period), even when client transactions are allowed.
This approach extends the enterprise's ability to detect and respond to complex attacks that span multiple interaction channels and outwardly unrelated security events.
[![][5]][6]
**Securing the Data: Stopping Data Theft and Leakage**
When mobile applications access enterprise data, documents and unstructured information are often stored on the device. Whenever the device is lost or data is shared with non-enterprise apps, the potential for data loss is heightened.
Many enterprises are already considering remote wipe capabilities to address stolen or lost devices. Mobile data encryption can be used to secure data within the app sandbox against malware and other kinds of criminal access. To control data sharing between apps on the device, individual data elements can be encrypted and controlled.
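As a rough sketch of the data-at-rest idea (this uses plain OpenSSL on a workstation, not any specific mobile SDK; the file names and passphrase are made up), symmetric encryption of a local file looks like this:

```shell
# Encrypt a local file with AES-256 (demo passphrase only; use a key store in practice)
echo "sensitive data" > secret.txt
openssl enc -aes-256-cbc -pbkdf2 -salt -in secret.txt -out secret.enc -pass pass:demo123
# Decrypt again to verify the round trip
openssl enc -d -aes-256-cbc -pbkdf2 -in secret.enc -pass pass:demo123
```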
**Testing Your App's Software, Then Testing Again**
It is important to test the app's code during the development process. As we all know, applications are being produced so rapidly that this essential step often falls by the wayside to speed time to market. When testing functionality and usability, experts recommend also testing for security, whether the app is a native, hybrid or web app.
Testing lets you discover the vulnerabilities in the code so that you can correct them before publishing your application on the web. Here are some essential tips to consider:
* Make sure to test thoroughly for authentication and authorization, data security issues and session management.
* Penetration testing involves purposely searching a network or system for weaknesses.
* Emulators for operating systems, devices and browsers allow you to test how an application can perform in a simulated environment.
Today, mobile devices and mobile apps are increasingly where most users are; however, most hackers are also there, trying to steal your important and sensitive data and information. With a creative mobile security strategy and an experienced mobile app developer, you can respond rapidly to threats and keep your app safer. Moreover, consider the above-mentioned tips for securing your mobile applications.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/10/mobile-app-security-tips-to-secure-your-mobile-applications/
作者:[Petr Kudlacek][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/petr-kudlacek/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/MOHD3.png?resize=626%2C419&ssl=1 (MOHD3)
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/MOHD3.png?fit=626%2C419&ssl=1
[3]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/MOHD1.png?resize=350%2C116&ssl=1
[4]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/MOHD1.png?ssl=1
[5]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/MOHD2.png?resize=350%2C233&ssl=1
[6]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/MOHD2.png?ssl=1


@@ -0,0 +1,99 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (XMPP: A Communication Protocol for the IoT)
[#]: via: (https://opensourceforu.com/2019/10/xmpp-a-communication-protocol-for-the-iot/)
[#]: author: (Neetesh Mehrotra https://opensourceforu.com/author/neetesh-mehrotra/)
XMPP: A Communication Protocol for the IoT
======
[![][1]][2]
_Originally developed by the Jabber open source community in 1999 (and initially known as Jabber), the Extensible Messaging and Presence Protocol (XMPP) is now widely used as a communication protocol. Based on Extensible Markup Language (XML), XMPP enables fast, near-real-time exchange of data between multiple entities on a network._
In contrast to most direct messaging protocols, XMPP is described in an open standard and uses an open systems approach of development and application, by which anyone may implement an XMPP service and interoperate with other organisations implementations. Since XMPP is an open set of rules, implementations can be developed using any software licence, and many server, client, and library XMPP implementations are distributed as free and open source software. Numerous freeware and commercial software implementations also exist.
**XMPP: An overview**
XMPP is an open set of rules for streaming XML elements in order to swap messages and presence information in close to real-time. The XMPP protocol is based on the typical client-server architecture, in which the XMPP client connects to the XMPP server over a TCP socket.
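As a sketch of that client-server exchange (the addresses echo the Mike/Ollie.org example discussed later and are hypothetical), the client begins by opening an XML stream to the server:

```xml
<!-- Hypothetical stream header sent by the client over TCP port 5222 -->
<stream:stream
    from="mike@ollie.org"
    to="ollie.org"
    version="1.0"
    xmlns="jabber:client"
    xmlns:stream="http://etherx.jabber.org/streams">
<!-- ...message and presence stanzas flow here; the stream ends with </stream:stream> -->
```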
XMPP provides a general framework for messaging across a network, offering a multitude of applications beyond traditional instant messaging (IM) and the distribution of presence data. It enables the discovery of services residing locally or across a network, as well as finding out about the availability of these services.
XMPP is well-matched for cloud computing where virtual machines, networks and firewalls would otherwise present obstacles to alternative service discovery and presence-based solutions. Cloud computing and storage systems rely on diverse forms of communication over multiple levels, including not only messaging between systems to relay state but also the migration of the distribution of larger objects, like storage or virtual machines. Along with validation and in-transit data protection, XMPP can be useful at many levels and may prove ideal as an extensible middleware or a message-oriented middleware (MOM) protocol.
![Figure 1: XMPP IM conversation][3]
**Comparisons with MQTT**
Given below are a few comparisons between the XMPP and MQTT protocols.
* MQTT is a lightweight publisher/subscriber protocol, which makes it a clear choice when implementing M2M on memory-constrained devices.
* MQTT does not define a message format; with XMPP you can define the message format and get structured data from devices. The defined structure helps validate messages, making it easier to handle and understand data coming from these connected devices.
  * XMPP builds a device's identity (also called a Jabber ID). In MQTT, identities are created and managed separately in broker implementations.
* XMPP supports federation, which means that devices from different manufacturers connected to different platforms can talk to each other with a standard communication protocol.
  * MQTT offers different levels of quality of service. This flexibility is not available in XMPP.
* MQTT deployments become difficult to manage when the number of devices increases, while XMPP scales very easily.
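To illustrate the point above about defined, validatable message structure, here is a minimal sketch (with illustrative field names, not a full XMPP implementation) that parses a stanza and checks the fields a consumer would expect:

```python
# Validate the structure of an XMPP-style stanza: correct element name,
# required addressing attributes, and a <body> payload.
# The sample stanzas below are illustrative.
import xml.etree.ElementTree as ET

def validate_stanza(raw):
    """Return True if `raw` parses as a well-formed message stanza."""
    msg = ET.fromstring(raw)
    if msg.tag != "message":
        return False
    if "from" not in msg.attrib or "to" not in msg.attrib:
        return False
    return msg.find("body") is not None

good = '<message from="sensor@hub.local" to="app@hub.local"><body>21.5</body></message>'
bad = '<message to="app@hub.local">no body here</message>'
print(validate_stanza(good))  # True
print(validate_stanza(bad))   # False
```

Because the structure is defined up front, malformed device messages can be rejected before any application logic runs.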
**The pros and cons of XMPP**
_**Pros**_
* Addressing scheme to recognise devices on the network
* Client-server architecture
* Decentralised
* Flexible
* Open standards and formalised
_**Cons**_
* Text-based messaging and no provision for end-to-end encryption
* No provision for quality of service
  * XML is verbose: protocol overhead typically makes up more than 70 per cent of XMPP server traffic, nearly 60 per cent of which is repeated, so the protocol carries a large overhead when delivering data to multiple recipients
* Absence of binary data
* Limited scope for stability
![Figure 2: XML stream establishment][4]
**How the XMPP protocol manages communication between an XMPP client and server**
The features of the XMPP protocol that impact communication between the XMPP client and the XMPP server are described in Figure 1.
Figure 2 depicts an XML message swap between client Mike and server Ollie.org.
* XMPP uses Port 5222 for the client to server (C2S) communication.
* It utilises Port 5269 for server to server (S2S) communication.
* Discovery and XML streams are used for S2S and C2S communication.
* XMPP uses security mechanisms such as TLS (Transport Layer Security) and SASL (Simple Authentication and Security Layer).
* There are no in-between servers for federation, unlike e-mail.
Direct messaging is used as a method for immediate message transmission to and reception from online users (Figure 3).
![Figure 3: Client server communication][5]
**XMPP via HTTP**
As an alternative to the TCP protocol, XMPP can be used with HTTP in two ways: polling and binding. The polling method, now deprecated, essentially means that messages stored in a server-side database are fetched by an XMPP client by way of HTTP GET and POST requests. The binding method, implemented as Bidirectional-streams Over Synchronous HTTP (BOSH), permits servers to push messages to clients as soon as they are sent. This push model of notification is more efficient than polling, wherein many of the polls return no new data.
XMPP provides a lot of support for communication, making it well suited for use within the realm of the Internet of Things.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/10/xmpp-a-communication-protocol-for-the-iot/
作者:[Neetesh Mehrotra][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/neetesh-mehrotra/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2017/06/IoT-Connecting-all-apps.jpg?resize=696%2C592&ssl=1 (IoT Connecting all apps)
[2]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2017/06/IoT-Connecting-all-apps.jpg?fit=1675%2C1425&ssl=1
[3]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Figure-1-XMPP-IM-conversation.jpg?resize=261%2C287&ssl=1
[4]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Figure-2-XML-stream-establishment-350x276.jpg?resize=350%2C276&ssl=1
[5]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Figure-3-Client-server-communication.jpg?resize=350%2C146&ssl=1

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (DARPA looks for new NICs to speed up networks)
[#]: via: (https://www.networkworld.com/article/3443046/darpa-looks-for-new-nics-to-speed-up-networks.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
DARPA looks for new NICs to speed up networks
======
The creator of the Internet now looks to speed it up by unclogging network bottlenecks.
The government agency that gave us the Internet 50 years ago is now looking to drastically increase network speed to address bottlenecks and chokepoints for compute-intensive applications.
The Defense Advanced Research Projects Agency (DARPA), an arm of the Pentagon, has unveiled a computing initiative, one of many, that will attempt to overhaul the network stack and interfaces that cannot keep up with high-end processors and are often the choke point for data-driven applications.
The DARPA initiative, Fast Network Interface Cards, or FastNICs, aims to boost network performance by a factor of 100 through a clean-slate transformation of the network stack from the application to the system software layers running on top of steadily faster hardware. DARPA is soliciting proposals from networking vendors.
“The true bottleneck for processor throughput is the network interface used to connect a machine to an external network, such as an Ethernet, therefore severely limiting a processor's data ingest capability,” said Dr. Jonathan Smith, a program manager in DARPA's Information Innovation Office (I2O), in a statement.
“Today, network throughput on state-of-the-art technology is about 10^14 bits per second (bps) and data is processed in aggregate at about 10^14 bps. Current stacks deliver only about 10^10 to 10^11 bps application throughputs,” he added.
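As a rough sanity check on those figures (a back-of-the-envelope sketch, not DARPA's own analysis), the gap between raw hardware throughput and application-level throughput can be worked out directly:

```python
# Back-of-the-envelope arithmetic for the throughput gap quoted above:
# hardware moves ~1e14 bps, while application-level stacks deliver only
# ~1e10 to 1e11 bps. A 100x stack speed-up (the FastNICs goal) narrows
# that gap but does not fully close it.
hardware_bps = 1e14
stack_bps_low, stack_bps_high = 1e10, 1e11

print(hardware_bps / stack_bps_high)  # 1000.0  (gap in the best case)
print(hardware_bps / stack_bps_low)   # 10000.0 (gap in the worst case)
print(100 * stack_bps_high)           # 1e13 bps after a 100x improvement
```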
Many other elements of server design have seen leaps in performance, like memory, meshes, NVMe-over-Fabric, and PCI Express, but networking speed has been something of a laggard, getting minor bumps in speed and throughput by comparison. The fact is we're still using Ethernet as our network protocol 46 years after Bob Metcalfe invented it at Xerox PARC.
So DARPA's program managers are using an approach that reworks existing network architectures. The FastNICs program will select a challenge application and provide it with the hardware support it needs, operating system software, and application interfaces that will enable an overall system acceleration that comes from having faster NICs.
Researchers will design, implement, and demonstrate 10 Tbps network interface hardware using existing or road-mapped hardware interfaces. The hardware solutions must attach to servers via one or more industry-standard interface points, such as I/O buses, multiprocessor interconnection networks and memory slots to support the rapid transition of FastNICs technology.
“It starts with the hardware; if you cannot get that right, you are stuck. Software can't make things faster than the physical layer will allow, so we have to first change the physical layer,” said Smith.
The next step would be developing system software to manage FastNICs hardware. The open-source software based on at least one open-source OS would enable faster, parallel data transfer between network hardware and applications.
Details on the proposal can be found [here][3].
Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3443046/darpa-looks-for-new-nics-to-speed-up-networks.html
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/newsletters/signup.html
[2]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
[3]: https://www.fbo.gov/index?s=opportunity&mode=form&id=fb5cfba969669de12025ff1ce2c99935&tab=core&_cview=1
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Quantum computing, the open source way)
[#]: via: (https://opensource.com/article/19/10/open-source-quantum-future)
[#]: author: (Jaouhari Youssef https://opensource.com/users/jaouhari)
Quantum computing, the open source way
======
Quantum computing is promising, provided we overcome hurdles preventing
it from moving deeper into the real world.
![A circuit design in lights][1]
The quantum vision of reality is both strange and mesmerizing at the same time. As theoretical physicist [Michio Kaku][2] once said, "Common sense has no place in quantum mechanics."
Since this is new and uncommon territory, we can expect quantum innovations to surpass anything we have seen before. The theory behind it will enable as-yet-unseen capabilities, but there are also some hurdles slowing its release into the real world.
By using the concepts of entanglement and superposition on quantum bits, a quantum computer can solve some problems faster than a classical computer. For example, quantum computers are useful for solving [NP-hard][3] problems, such as the [Boolean satisfiability problem][4], known as the SAT problem. Using [Grover's algorithm][5], the complexity of the evaluation of a boolean proposition of **$n$** variables goes down from **$O(n2^{n})$** to **$O(n2^{n/2})$** by applying its quantum version.
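Plugging a concrete value of **$n$** into those two cost formulas gives a feel for the scale of the speed-up; this is simple arithmetic on the stated complexities, not a quantum simulation:

```python
# Compare the classical and Grover-style query costs quoted above:
# classical evaluation scales as n * 2^n, the quantum version as
# n * 2^(n/2). The value n = 20 is illustrative.
def classical_cost(n):
    return n * 2 ** n

def grover_cost(n):
    return n * 2 ** (n // 2)

n = 20
print(classical_cost(n))  # 20 * 2^20 = 20971520
print(grover_cost(n))     # 20 * 2^10 = 20480
```

Even at a modest 20 variables, the quadratic reduction in the exponent cuts the cost by a factor of roughly a thousand.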
An even more interesting problem quantum computing can solve is the [Bernstein–Vazirani problem][6], where given a function **$f$**, such as **$f(x)=x.s=x_{1}s_{1} + x_{2}s_{2} + x_{3}s_{3} + ... + x_{n}s_{n}$**, you have to find **$s$**. While the classical solution requires **$n$** queries to find the solution, the quantum version requires only one query.
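The classical side of this claim is easy to sketch in Python: querying the oracle on each unit vector recovers one bit of **$s$** per query, so **$n$** queries are needed. The hidden string below is illustrative:

```python
# Classical solution to the Bernstein-Vazirani problem described above:
# f(x) = x . s (dot product mod 2), and each query on a unit vector e_i
# reveals exactly one bit s_i, so n queries are required classically.
def make_oracle(s):
    """Return f(x) = x . s mod 2 for bit-tuples x and s."""
    return lambda x: sum(xi * si for xi, si in zip(x, s)) % 2

def recover_secret(f, n):
    """Recover s with n queries, one per unit vector e_i."""
    return tuple(
        f(tuple(1 if j == i else 0 for j in range(n)))
        for i in range(n)
    )

secret = (1, 0, 1, 1)
f = make_oracle(secret)
print(recover_secret(f, len(secret)))  # (1, 0, 1, 1) -- after 4 queries
```

The quantum algorithm collapses all four of those queries into a single evaluation of the oracle in superposition.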
Quantum computing is very valuable for security issues. One interesting riddle it answers is: How can two communicating parties share a key to encrypt and decrypt their messages without any third party stealing it?
A valid answer would use [quantum key distribution][7], which is a method of communication that implements cryptographic protocols that involve quantum mechanics. This method relies on a quantum principle that "the measurement of a system generally disturbs it." Knowing that a third party measuring the quantum state would disturb the system, the two communicating parties can thereby know if a communication is secure by establishing a threshold for eavesdropping. This method is used for securing bank transfers in China and transferring ballot results in Switzerland.
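A toy, purely classical simulation of the basis-sifting step behind BB84-style quantum key distribution gives a feel for the mechanics (no real quantum states and no eavesdropper are modeled; the parameters are illustrative):

```python
# Toy classical sketch of BB84-style sifting: sender and receiver each
# pick random measurement bases, and only bits measured in matching
# bases are kept for the shared key. No eavesdropper is modeled here.
import random

def bb84_sift(n, seed=42):
    rng = random.Random(seed)               # seeded for reproducibility
    bits = [rng.randint(0, 1) for _ in range(n)]         # Alice's raw bits
    alice_bases = [rng.randint(0, 1) for _ in range(n)]  # 0=rectilinear, 1=diagonal
    bob_bases = [rng.randint(0, 1) for _ in range(n)]    # Bob's random choices
    # Without eavesdropping, Bob reads Alice's bit whenever bases match.
    return [b for b, a, m in zip(bits, alice_bases, bob_bases) if a == m]

key = bb84_sift(16)
print(key)  # about half of the 16 positions survive sifting on average
```

In the real protocol, comparing a sample of the sifted key reveals the disturbance an eavesdropper's measurements would introduce, which is the threshold test the article describes.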
However, there are some serious hurdles to the progress of quantum computing toward industrial-scale use and deployment. First, quantum computers operate at temperatures near absolute zero, since any heat in the system can introduce errors. Second, quantum chipsets face a scalability issue: current chips are on the order of 1,000 qubits, and expanding to the millions or billions of qubits needed for fully fault-tolerant, error-corrected systems will require significant work.
The best way to tackle real-life problems with quantum solutions is to use a hybridization of classic and quantum algorithms using quantum hardware. This way, the part of the problem that can be solved faster using a quantum algorithm can be transferred to a quantum computer for processing. One example would be using a quantum support vector machine for solving a classification problem, where the matrix-exponentiation task is handled by the quantum computer.
The [Quantum Open Source Foundation][8] is an initiative to support the development of open source tools for quantum computing. Its goal is to expand the role of open source software in quantum computing, focusing on using current or near-term quantum computing technologies. The foundation also offers links to open courses, papers, videos, development tools, and blogs about quantum computing.
The foundation also supports [OQS-OpenSSH][9], an interesting project that concerns quantum cryptography. The project aims to construct a public-key cryptosystem that will be safe even against quantum computing. Since it is still under development, using hybrid-cryptography, with both quantum-safe public key and classic public-key algorithms, is recommended.
A fun way to learn about quantum computing is by playing [Entanglion][10], a two-player game made by IBM Research. The goal is to rebuild a quantum computer from scratch. The game is very instructive and could be a great way to introduce youth to the quantum world.
All in all, the mysteries of the quantum world haven't stopped amazing us, and they will surely continue into the future. The most exciting parts are yet to come!
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/open-source-quantum-future
作者:[Jaouhari Youssef][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jaouhari
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/adi-goldstein-eusvweosble-unsplash.jpg?itok=8shMsRyC (Circuit design)
[2]: https://en.wikipedia.org/wiki/Michio_Kaku
[3]: https://en.wikipedia.org/wiki/NP-hardness
[4]: https://en.wikipedia.org/wiki/Boolean_satisfiability_problem
[5]: https://en.wikipedia.org/wiki/Grover%27s_algorithm
[6]: https://en.wikipedia.org/wiki/Bernstein%E2%80%93Vazirani_algorithm
[7]: https://en.wikipedia.org/wiki/Quantum_key_distribution
[8]: https://qosf.org/
[9]: https://github.com/open-quantum-safe/openssh-portable
[10]: https://github.com/Entanglion/entanglion

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Secure Access Service Edge (SASE): A reflection of our times)
[#]: via: (https://www.networkworld.com/article/3442941/secure-access-service-edge-sase-a-reflection-of-our-times.html)
[#]: author: (Matt Conran https://www.networkworld.com/author/Matt-Conran/)
Secure Access Service Edge (SASE): A reflection of our times
======
Gartner makes the claim that the shift to SASE will make obsolete existing networking and security models.
There's a buzz in the industry about a new type of product that promises to change the way we secure and network our organizations. It is called the Secure Access Service Edge (SASE). It was first mentioned by Gartner, Inc. in its hype cycle for networking. Since then, Barracuda highlighted SASE in a recent [PR update][1] and Zscaler also discussed it in their [earnings call][2]. Most recently, [Cato Networks][3] announced that it was mentioned by Gartner as a “sample vendor” in the hype cycle.
Today's enterprises have upgraded their portfolios, and as a consequence the network must be enhanced to match. What we are witnessing is cloud, mobility, and edge, which have put increased pressure on the legacy network and security architecture. Enterprises are transitioning from all users, applications, and data located on-premises to a heavy reliance on the cloud, edge applications, and a dispersed mobile workforce.
### Our technologies must evolve
Digital transformation improves agility and competitiveness. However, at the same time, it impacts the way we connect and secure these connections. Therefore, as the landscape evolves, so must our technologies. In such a scenario, the introduction of SASE is a reflection of this change.
The new SASE category converges the capabilities of WAN with network security to support the needs of the digital enterprise. Some of these disparate networks and security services include SD-WAN, secure web gateway, CASB, software-defined perimeter, DNS protection, and firewall-as-a-service.
Today, there are a number of devices that should be folded into a converged single software stack. There should be a fabric wherein all the network and security functionality can be controlled centrally.
### SD-WAN forms part of the picture
The hardest thing is to accept that what we have been doing in the past is not the best way forward for our organizations. The traditional methods of protecting mobile users, cloud assets, and sites are no longer the optimum way to support today's digital environment. Gartner claims that the shift to SASE will make the existing networking and security models obsolete.
Essentially, SASE is not just about offering SD-WAN services. SD-WAN is just a part of the much bigger story since it doesn't address all the problems. For this, you need to support a full range of capabilities. This means you must support mobile users and cloud resources (from anywhere), in a way that doesn't require backhauling. 
**[ Related: [MPLS explained What you need to know about multi-protocol label switching][5]**
Security should be embedded into the network which some SD-WAN vendors do not offer. Therefore, I could sense SASE saying that SD-WAN alone is insufficient.
### An overview of the SASE requirements
Primarily, to provide secure access in this new era and to meet the operational requirements will involve relying heavily on cloud-based services. This is contrary to a collection of on-premise network and security devices.
Rather, to be SASE-enabled, the network and security domains should be folded into a cloud-native approach to networking and security. This provides significant support for all types of edges.
To offer SASE services you need to fulfill a number of requirements:
1. The convergence of WAN edge and network security models
2. Cloud-native, cloud-based service delivery
3. A network designed for all edges
4. Identity and network location
### 1\. The convergence of WAN edge and network security models
Firstly, it requires the convergence of the WAN edge and network security models. Why? It is because the customer demands simplicity, scalability, low latency and pervasive security which drive the requirement for the convergence of these models.
So, we have a couple of options. One may opt to service-chain appliances, physical or virtual. Although this option shortens the time to market, it also results in inconsistent services, poor manageability, and high latency.
Keep in mind that service insertion fragments the architecture into two separate domains: two different entities managed separately, which limits visibility. For Gartner, service-chaining solutions are not SASE.
The approach is to converge both networking and security into the cloud. This creates a global and cloud-native architecture that connects and secures all the locations, cloud resources, and mobile users everywhere.
SASE offerings will be purpose-built for scale-out, cloud-native, and cloud-based delivery. This will notably optimize the solution to deliver low latency services.
You need a cloud-native architecture to achieve the milestone of economy and agility. To deliver maximum flexibility with the lowest latency and resource requirements, cloud-native single-pass architecture is a very significant advantage.
### 2\. Cloud-native, cloud-based service delivery
Edge applications are latency sensitive. Hence, they require networking and security to be delivered in a distributed manner, close to the endpoint. Edge is the new cloud, and it requires a paradigm shift from what cloud-based providers offer with a limited set of PoPs (points of presence).
The geographical footprint is critical, and effectively supporting these edge applications requires a cloud-delivery-based approach that favors providers with many points of presence. Since users are global, you must have global operations.
It is not sufficient to offer a SASE service built solely on hyper-scale clouds, as this limits the number of points of presence a provider can offer. You need to deliver where the customers are, and to do this you need a global footprint and the ability to instantiate a PoP in response to customer demand.
### 3\. A network designed for all edges
The proliferation of the mobile workforce requires SASE services to connect with more than just sites. For this, you need to have an agent-based capability that should be managed as a cloud service.
In plain words, SASE offerings that rely on the on-premises, box-oriented delivery model, or a limited number of cloud points of presence (without agent-based capability), will be unable to meet the requirements of an increasingly mobile workforce and the emerging latency-sensitive applications.
### 4\. Identity and network location
Let's face it: there are now new demands on networks emerging from a variety of sources. This results in increased pressure on the traditional network and security architectures. Digital transformation and the adoption of mobile, cloud and edge deployment models, accompanied by the change in traffic patterns, make it imperative to rethink the place of legacy enterprise networks.
To support these changes, we must reassess how we view the traditional data center. We must evaluate the way we use IP addresses as an anchor for the network location and security enforcement. Please keep in mind that anything tied to an IP address is useless as it does not provide a valid hook for network and security policy enforcement. This is often referred to as the IP address conundrum.
SASE is the ability to deliver network experience with the right level of security access. This access is based on the identity and real-time condition that is in accordance with company policy. Fundamentally, the traffic can be routed and prioritized in certain ways. This allows you to customize your level of security. For example, the user will get a different experience from a different location or device type. All policies are tied to the user identity and not based on the IP address. 
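As an illustrative sketch of that identity-centred model (the policy names and fields below are hypothetical, not taken from any SASE product), an access decision keyed on identity and real-time context rather than on a source IP address might look like this:

```python
# Sketch of identity-based policy enforcement: access decisions are
# keyed on who the user or device is and its real-time context (here,
# whether MFA passed), not on its IP address. Policies are illustrative.
POLICIES = {
    "alice@example.com": {"allowed_apps": {"crm", "mail"}, "require_mfa": True},
    "sensor-017": {"allowed_apps": {"telemetry"}, "require_mfa": False},
}

def authorize(identity, app, mfa_passed):
    """Allow access only if the identity's policy permits it."""
    policy = POLICIES.get(identity)
    if policy is None:
        return False                      # unknown identity: deny
    if policy["require_mfa"] and not mfa_passed:
        return False                      # policy demands MFA for this user
    return app in policy["allowed_apps"]

print(authorize("alice@example.com", "crm", mfa_passed=True))   # True
print(authorize("alice@example.com", "crm", mfa_passed=False))  # False
print(authorize("10.0.0.5", "crm", mfa_passed=True))            # False: an IP is not an identity
```

The same identity follows the user across locations and device types, which is what makes policies portable in a way IP-anchored rules are not.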
Finally, the legacy data center should no longer be considered as the center of network architecture. The new center of secure access networking design is the identity with a policy that follows regardless. Identities can be associated with people, devices, IoT or edge computing locations.
### A new market category
The introduction of the new market category SASE is a reflection of our current times. Technologies have changed considerably. The cloud, mobility, and edge have put increased pressure on the legacy network and network security architectures. Therefore, for some use cases, SASE will make the existing models obsolete.
For me, this is an exciting time to see a new market category, and I will track it thoroughly in future posts. As we are in the early stages, there will be a lot of marketing buzz. My recommendation would be to line up the vendors claiming or mentioning SASE against the criteria set out in this post and see who actually does what.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3442941/secure-access-service-edge-sase-a-reflection-of-our-times.html
作者:[Matt Conran][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Matt-Conran/
[b]: https://github.com/lujun9972
[1]: http://www.backupreview.info/2019/09/11/new-release-of-barracuda-cloudgen-firewall-automates-and-secures-enterprise-migrations-to-public-cloud/
[2]: https://seekingalpha.com/article/4290853-zscaler-inc-zs-ceo-jay-chaudhry-q4-2019-results-earnings-call-transcript
[3]: https://www.catonetworks.com/news/cato-networks-listed-for-sase-category-in-the-gartner-hype-cycle-2019
[4]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[5]: https://www.networkworld.com/article/2297171/sd-wan/network-security-mpls-explained.html
[6]: https://www.networkworld.com/contributor-network/signup.html
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What's in an open source name?)
[#]: via: (https://opensource.com/article/19/10/open-source-name-origins)
[#]: author: (Joshua Allen Holm https://opensource.com/users/holmja)
What's in an open source name?
======
Ever wonder where the names of your favorite open source projects or
programming languages came from? Get the origin stories behind popular
tech nomenclature from A to Z.
![A person writing.][1]
GNOME, Java, Jupyter, Python. If your friends or family members have ever eavesdropped on your work conversations, they might think you've made a career in Renaissance folklore, coffee roasting, astronomy, or zoology. Where did the names of these open source technologies come from? We asked our writer community for input and rounded up some of our favorite tech name origin stories.
### Ansible
The name "Ansible" is lifted directly from science fiction. Ursula Le Guin's book _Rocannon's World_ had devices allowing instantaneous (faster than light) communication called ansibles (derived, apparently, from the word "answerable"). Ansibles became a staple of science fiction, including in Orson Scott Card's _Ender's Game_ (which later became a popular film), where the device controlled many remote space ships. This seemed to be a good model for software that controls distributed machines, so Michael DeHaan (creator and founder of Ansible) borrowed the name.
### Apache
[Apache][2] is an open source web server that was originally released in 1995. Its name is not related to the famous Native American tribe; it instead refers to the repeated patches to its original software code. Hence, "A-patchy server."
### awk
"awk(1) Stands for Aho, Weinberger, Kernighan (authors)" —Michael Greenberg
### Bash
"The original Unix shell, the Bourne shell, was named after its creator. At the time Bash was being developed, csh (pronounced 'seashell') was actually more popular for interactive user logins. The Bash project aimed to give new life to the Bourne shell by making it more suitable for interactive use, thus it was named the 'Bourne again shell,' a pun on 'born again.'" —Ken Gaillot
### C
"In early days, Ken Thompson and Dennis Ritchie at AT&T found it interesting that you could use a higher-level programming language (instead of low-level and less-portable assembly programming) to write operating systems and tools. There was an early programming system called BCPL (Basic Combined Programming Language), and Thompson created a stripped-down version of BCPL called B. But B wasn't very flexible or fast. Ritchie then took the ideas of B and expanded it into a compiled language called C." —Jim Hall
### dd
"I don't think you can publish such an article without mentioning dd. My nickname is Didi. Correctly pronounced, it sounds like 'dd.' I first learned Unix, and then Linux, in 1993 as a student. Then I went to the army, arrived to one of the very few sections in my unit that used Unix (Ultrix) (the rest were mainly VMS), and one of the people there said: 'So, you are a hacker, right? You think you know Unix? OK, so what's the reason for the name dd?' I had no idea and tried to guess: "Data duplicator?" So he said, 'I'll tell you the story of dd. dd is short for _convert and copy_ (as anyone can still see today on the manpage), but since cc was already taken by the c compiler, it was named dd.' Only years later, I heard the true story about JCL's data definition and the non-uniform, semi-joking syntax for the Unix dd command somewhat being based on it." —Yedidyah Bar David
### Emacs
The classic anti-vi editor, the true etymology of the name is unremarkable, in that it derives from "Editing MACroS." Being an object of great religious opprobrium and worship it has, however, attracted many spoof bacronyms such as "Escape Meta Alt Control Shift" (to spoof its heavy reliance on keystrokes), "Eight Megabytes And Constantly Swapping" (from when that was a lot of memory), "Eventually malloc()s All Computer Storage," and "EMACS Makes A Computer Slow." —Adapted from the Jargon File/Hacker's Dictionary
### Enarx
[Enarx][3] is a new project in the confidential computing space. One of the project's design principles was that it should be "fungible," so an initial name was "psilocybin" (the famed magic mushroom). The general feeling was that manager types would probably be resistant, so new names were considered. The project's two founders, Mike Bursell and Nathaniel McCallum, are both ancient language geeks, so they considered lots of different ideas, including тайна (Tayna—Russian for secret or mystery—although Russian, admittedly, is not ancient, but hey), crypticon (total bastardization of Greek), cryptidion (Greek for small secret place), arcanus (Latin masculine adjective for secret), arcanum (Latin neuter adjective for secret), and ærn (Anglo-Saxon for place, secret place, closet, habitation, house, or cottage). In the end, for various reasons, including the availability of domains and GitHub project names, they settled on enarx, a combination of two Latin roots: en- (meaning within) and -arx (meaning citadel, stronghold, or fortress).
### GIMP
Where would we be without [GIMP][4]? The GNU Image Manipulation Project has been an open source staple for many years. [Wikipedia][5] states, "In 1995, [Spencer Kimball][6] and [Peter Mattis][7] began developing GIMP as a semester-long project at the University of California, Berkeley, for the eXperimental Computing Facility."
### GNOME
Have you ever wondered why GNOME is called GNOME? According to [Wikipedia][8], GNOME was originally an acronym that represented the "GNU Network Object Model Environment." Now that name no longer represents the project and has been dropped, but the name has stayed. [GNOME 3][9] is the default desktop environment for Fedora, Red Hat Enterprise, Ubuntu, Debian, SUSE Linux Enterprise, and more.
### Java
Can you imagine this programming language being named anything else? Java was originally called Oak, but alas, the legal team at Sun Microsystems vetoed that name due to its existing trademark. So it was back to the drawing board for the development team. [Legend has it][10] that a massive brainstorm was held by the language's working group in January 1995. Lots of other names were tossed around including Silk, DNA, WebDancer, and so on. The team did not want the new name to have anything to do with the overused terms, "web" or "net." Instead, they were searching for something more dynamic, fun, and easy to remember. Java met the requirements and miraculously, the team agreed!
### Jupyter
Many of today's data scientists and students use [Jupyter][11] notebooks in their work. The name Jupyter is an amalgamation of three open source computer languages that are used in the notebooks and prominent in data science: [Julia][12], [Python][13], and [R][14].
### Kubernetes
Kubernetes is derived from the Greek word for helmsman. This etymology was corroborated in a [2015 Hacker News][15] response by a Kubernetes project founder, Craig McLuckie. Wanting to stick with the nautical theme, he explained that the technology drives containers, much like a helmsman or pilot drives a container ship. Thus, Kubernetes was the chosen name. Many of us are still trying to get the pronunciation right (koo-bur-NET-eez), so K8s is an acceptable substitute. Interestingly, Kubernetes shares its etymology with the English word "governor," and thus with the mechanical negative-feedback device on steam engines.
### KDE
What about the K desktop? KDE originally represented the "Kool Desktop Environment." It was founded in 1996 by [Matthias Ettrich][16]. According to [Wikipedia][17], the name was a play on the words [Common Desktop Environment][18] (CDE) on Unix.
### Linux
[Linux][19] was named for its inventor, Linus Torvalds. Linus originally wanted to name his creation "Freax" as he thought that naming the creation after himself was too egotistical. According to [Wikipedia][19], "Ari Lemmke, Torvalds' coworker at the Helsinki University of Technology, who was one of the volunteer administrators for the FTP server at the time, did not think that 'Freax' was a good name. So, he named the project 'Linux' on the server without consulting Torvalds."
Following are some of the most popular Linux distributions.
#### CentOS
[CentOS][20] is an acronym for Community Enterprise Operating System. It contains the upstream packages from Red Hat Enterprise Linux.
#### Debian
[Debian][21] Linux, founded in September 1993, is a portmanteau of the names of its founder, Ian Murdock, and his then-girlfriend, Debra Lynn.
#### RHEL
[Red Hat Linux][22] got its name from its founder Marc Ewing, who wore a red Cornell University fedora given to him by his grandfather. Red Hat was founded on March 26, 1993. [Fedora Linux][23] began as a volunteer project to provide extra software for the Red Hat distribution and got its name from Red Hat's "Shadowman" logo.
#### Ubuntu
[Ubuntu][24] aims to share open source widely and is named after the African philosophy of ubuntu, which can be translated as "humanity to others" or "I am what I am because of who we all are."
### Moodle
The open source learning platform [Moodle][25] is an acronym for "modular object-oriented dynamic learning environment." Moodle continues to be a leading platform for e-learning. There are nearly 104,000 registered Moodle sites worldwide.
Two other popular open source content management systems are Drupal and Joomla. Drupal's name comes from the Dutch word "druppel," which means "drop." Joomla is an [anglicized spelling][26] of the Swahili word "jumla," which means "all together" in Arabic, Urdu, and other languages, according to Wikipedia.
### Mozilla
[Mozilla][27] is an open source software community founded in 1998. According to its website, "The Mozilla project was created in 1998 with the release of the Netscape browser suite source code. It was intended to harness the creative power of thousands of programmers on the internet and fuel unprecedented levels of innovation in the browser market." The name was a portmanteau of [Mosaic][28] and Godzilla.
### Nginx
"Many tech people try to be cool and say it 'n' 'g' 'n' 'x'. Few actually did the basic actions of researching a bit more to find out very quickly that the name is actually supposed to be said as 'EngineX,' in reference to the powerful web server, like an engine." —Jean Sebastien Tougne
### Perl
Perl's founder Larry Wall originally named his project "Pearl." According to Wikipedia, Wall wanted to give the language a short name with positive connotations. Wall discovered the existing [PEARL][29] programming language before Perl's official release and changed the spelling of the name.
### Piet and Mondrian
"There are two programming languages named after the artist Piet Mondrian. One is called 'Piet' and the other 'Mondrian.' David Morgan-Mar [writes][30]: 'Piet is a programming language in which programs look like abstract paintings. The language is named after Piet Mondrian, who pioneered the field of geometric abstract art. I would have liked to call the language Mondrian, but someone beat me to it with a rather mundane-looking scripting language. Oh well, we can't all be esoteric language writers, I suppose.'" —Yuval Lifshitz
### Python
The Python programming language received its unique name from its creator, Guido van Rossum, who was a fan of the comedy group Monty Python.
### Raspberry Pi
Known for its tiny-but-mighty capabilities and wallet-friendly price tag, the Raspberry Pi is a favorite in the open source community. But where did its endearing (and yummy) name come from? In the '70s and '80s, it was a popular trend to name computers after fruit. Apple, Tangerine, Apricot... anyone getting hungry? According to a [2012 interview][31] with founder Eben Upton, the name "Raspberry Pi" is a nod to that trend. Raspberries are also tiny in size, yet mighty in flavor. The "Pi" in the name alludes to the fact that, originally, the computer could only run Python.
### Samba
Samba implements [Server Message Block][32] (SMB), the protocol used for sharing Windows files on Linux; its name is essentially "SMB" with vowels added.
### ScummVM
[ScummVM][33] (Script Creation Utility for Maniac Mansion Virtual Machine) is a program that makes it possible to run some classic computer adventure games on a modern computer. Originally, it was designed to play LucasArts adventure games that were built using SCUMM, which was originally used to develop Maniac Mansion before being used to develop most of LucasArts's other adventure games. Currently, ScummVM supports a large number of game engines, including Sierra Online's AGI and SCI, but still retains the name ScummVM. A related project, [ResidualVM][34], got its name because it covers the "residual" LucasArts adventure games not covered by ScummVM. The LucasArts games covered by ResidualVM were developed using GrimE (Grim Engine), which was first used to develop Grim Fandango, so the ResidualVM name is a double pun.
### SQL
"You may know [SQL] stands for Structured Query Language, but do you know why it's often pronounced 'sequel'? It was created as a follow-up (i.e. sequel) to the original 'QUEL' (QUEry Language)." —Ken Gaillot
### XFCE
[XFCE][35] is a popular desktop founded by [Olivier Fourdan][36]. It began as an alternative to CDE in 1996 and its name was originally an acronym for XForms Common Environment.
### Zsh
Zsh is an interactive login shell. In 1990, the first version of the shell was written by Princeton student Paul Falstad. He named it after seeing the login ID of Zhong Shao (zsh), then a teaching assistant at Princeton, and thought that it sounded like a [good name for a shell][37].
There are many more projects and names that we have not included in this list. Be sure to share your favorites in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/open-source-name-origins
作者:[Joshua Allen Holm][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/holmja
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003784_02_os.comcareers_resume_rh1x.png?itok=S3HGxi6E (A person writing.)
[2]: https://httpd.apache.org/
[3]: https://enarx.io
[4]: https://www.gimp.org/
[5]: https://en.wikipedia.org/wiki/GIMP
[6]: https://en.wikipedia.org/wiki/Spencer_Kimball_(computer_programmer)
[7]: https://en.wikipedia.org/wiki/Peter_Mattis
[8]: https://en.wikipedia.org/wiki/GNOME
[9]: https://www.gnome.org/gnome-3/
[10]: https://www.javaworld.com/article/2077265/so-why-did-they-decide-to-call-it-java-.html
[11]: https://jupyter.org/
[12]: https://julialang.org/
[13]: https://www.python.org/
[14]: https://www.r-project.org/
[15]: https://news.ycombinator.com/item?id=9653797
[16]: https://en.wikipedia.org/wiki/Matthias_Ettrich
[17]: https://en.wikipedia.org/wiki/KDE
[18]: https://sourceforge.net/projects/cdesktopenv/
[19]: https://en.wikipedia.org/wiki/Linux
[20]: https://www.centos.org/
[21]: https://www.debian.org/
[22]: https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux
[23]: https://getfedora.org/
[24]: https://ubuntu.com/about
[25]: https://moodle.org/
[26]: https://en.wikipedia.org/wiki/Joomla#Historical_background
[27]: https://www.mozilla.org/en-US/
[28]: https://en.wikipedia.org/wiki/Mosaic_(web_browser)
[29]: https://en.wikipedia.org/wiki/PEARL_(programming_language)
[30]: http://www.dangermouse.net/esoteric/piet.html
[31]: https://www.techspot.com/article/531-eben-upton-interview/
[32]: https://www.samba.org/
[33]: https://www.scummvm.org/
[34]: https://www.residualvm.org/
[35]: https://www.xfce.org/
[36]: https://en.wikipedia.org/wiki/Olivier_Fourdan
[37]: http://www.zsh.org/mla/users/2005/msg00951.html


@ -0,0 +1,85 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Machine Learning (ML) and IoT can Work Together to Improve Lives)
[#]: via: (https://opensourceforu.com/2019/10/machine-learning-ml-and-iot-can-work-together-to-improve-lives/)
[#]: author: (Vinayak Ramachandra Adkoli https://opensourceforu.com/author/vinayak-adkoli/)
Machine Learning (ML) and IoT can Work Together to Improve Lives
======
[![][1]][2]
_IoT devices are becoming popular nowadays. The widespread use of IoT yields huge amounts of raw data. This data can be effectively processed by using machine learning to derive many useful insights that can become game changers and affect our lives deeply._
The field of machine learning is growing steadily, along with the growth of the IoT. Sensors, nano cameras, and other such IoT elements are now ubiquitous, placed in mobile phones, computers, parking stations, traffic control centres and even in home appliances. There are millions of IoT devices in the world and more are being manufactured every day. They collect huge amounts of data that is fed to machines via the Internet, enabling machines to learn from the data and make them more efficient.
In IoT, it is important to note that a single device/element can generate immense amounts of data every second. All this data from IoT is transmitted to servers or gateways to create better machine learning models. Data analytics software can convert this raw data into useful insights so that the machine can be made more intelligent, and perform better with cost-effectiveness and a long life. By the year 2020, the world will have an estimated 20 billion IoT devices. Data collected by these devices mostly pertains to machines. By using this data, machines can learn more effectively and can overcome their own drawbacks.
Now let's look at how machine learning and IoT can be combined. Let us suppose that I have some bananas and apples. I have a sophisticated nano camera and sensors to collect the data from these fruits. If the data collected by these elements is fed to my laptop through the Internet, my laptop will start analysing the information by using sophisticated data analytics software and the cloud platform. Now if my laptop shows graphically how many bananas and apples I have left, it probably means that my machine (laptop) hasn't learnt enough. On the other hand, if my laptop is able to describe graphically how many of these are now ripe enough to be eaten, how many are not quite ripe and how many are very raw, it proves that my machine (laptop) has learned enough and has become more intelligent.
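The fruit example above can be sketched in a few lines of code. This is a deliberately toy illustration, not a real computer-vision pipeline: it assumes each fruit has already been reduced by the camera to a single hypothetical "colour score," and the ripeness thresholds are made-up values.

```python
# Toy sketch of the fruit-ripeness example: each fruit is reduced to a
# hypothetical "colour score" (0 = very raw, 1 = fully ripe) that a real
# system would derive from camera data.

def classify_ripeness(score):
    """Map a colour score to a ripeness label using fixed thresholds."""
    if score >= 0.7:
        return "ripe"
    elif score >= 0.4:
        return "not quite ripe"
    return "raw"

def summarize(fruits):
    """Count how many fruits fall into each ripeness category."""
    counts = {"ripe": 0, "not quite ripe": 0, "raw": 0}
    for score in fruits:
        counts[classify_ripeness(score)] += 1
    return counts

if __name__ == "__main__":
    bananas = [0.9, 0.8, 0.5, 0.2]
    print(summarize(bananas))  # {'ripe': 2, 'not quite ripe': 1, 'raw': 1}
```

A "smarter" machine, in the article's sense, is one whose model has moved from merely counting fruit to producing this kind of richer summary.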
Storing, processing, analysing and being able to reason out using IoT data requires numerous computational and financial resources to attain business and machine learning values.
Today an Airbus aircraft is fitted with thousands of sensors to measure temperature, speed, fuel consumption, air flow dynamics, mechanisms of working, etc. All this data from the IoT devices is sent to cloud platforms such as IBM Watson, Microsoft Azure, etc., via the Internet. Using sophisticated data analytics software, useful information is fed back to the machine, i.e., the aircraft. Using this data, the machine can learn very fast to overcome its problems, so that its life span and performance can be greatly enhanced.
Today, the IoT connects several sectors such as manufacturing industries, healthcare, buildings, vehicles, traffic, shopping centres and so on. Data gathered from such diverse domains can certainly make the infrastructure learn meaningfully to work more efficiently.
**Giving a new deal to electronic vision**
Amazon DeepLens is a wireless-enabled video camera and is integrated with the Amazon cloud. It makes use of the latest AI tools to develop computer vision applications. Using deep learning frameworks such as Caffe, MXNet and TensorFlow, it can develop effective computer vision applications. The device can be effectively connected to Amazon IoT. It can be used to build custom models with Amazon SageMaker. Its efficiency can even be enhanced using Apache MXNet. In fact, Amazon DeepLens can be used in a variety of projects, ranging from safety and education to health and wellness. For example, individuals diagnosed with dementia have difficulty in recognising friends and even family, which can make them disoriented and confused when speaking with loved ones. Amazon DeepLens can greatly assist those who have difficulty in recognising other people.
**Why postpone the smart city concept?**
Cities today are experiencing unprecedented population growth as more people move to urban areas, and are dealing with several problems such as pollution, surging energy demand, public safety concerns, etc. It is important to remember the lessons from such urban problems. Its time now to view the smart city concept as an effective way to solve such problems. Smart city projects take advantage of IoT with advanced AI algorithms and machine learning, to relieve pressure on the infrastructure and staff while creating a better environment.
Let us look at the example of smart parking — it effectively solves vehicle parking problems. IoT monitoring today can locate empty parking spaces and quickly direct vehicles to parking spots. Today, up to 30 per cent of traffic congestion is caused by drivers looking for places to park. Not only does the extra traffic clog roadways, it also strains infrastructure and raises carbon emissions.
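The smart-parking idea reduces to a simple lookup once IoT sensors report occupancy. A minimal sketch, assuming a made-up data layout of spot name to (distance, occupied):

```python
# Minimal sketch of a smart-parking lookup: given spot occupancy reported
# by IoT sensors, direct a driver to the nearest free spot. Spot names,
# distances, and the data layout are illustrative assumptions.

def nearest_free_spot(spots):
    """Return the name of the closest unoccupied spot, or None.

    `spots` maps spot name -> (distance_in_metres, occupied_flag).
    """
    free = [(dist, name) for name, (dist, occupied) in spots.items() if not occupied]
    if not free:
        return None
    return min(free)[1]  # smallest distance wins

if __name__ == "__main__":
    lot = {
        "A1": (10, True),
        "A2": (25, False),
        "B1": (15, False),
    }
    print(nearest_free_spot(lot))  # B1
```

In a real deployment the occupancy map would be refreshed continuously from the sensors, which is what removes the "circling for a spot" traffic the article describes.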
Today, smart buildings can automate central heating, air conditioning, lighting, elevators, fire-safety systems, the opening of doors, kitchen appliances, etc, using the IoT and machine learning (ML) techniques.
Another important problem faced by smart cities is vehicle platooning (flocking). This situation can be avoided by the construction of automated highways and by building smart cars. IoT and ML together offer better solutions to avoid vehicle platooning. This will result in greater fuel economy, reduced congestion and fewer traffic collisions.
IoT and ML can be effectively implemented in machine prognostics — an engineering discipline that mainly focuses on predicting the time at which a system or component will no longer perform its intended function. So ML with IoT can be effectively implemented in system health management (SHM), e.g., in transportation applications, in vehicle health management (VHM) or engine health management (EHM).
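The core of machine prognostics can be illustrated with a toy remaining-useful-life (RUL) estimate: fit a line to a degrading health indicator and project when it crosses a failure threshold. Real SHM/VHM/EHM systems use far richer models; the linear model and the numbers here are illustrative assumptions only.

```python
# Toy remaining-useful-life (RUL) estimate, the core idea behind machine
# prognostics: fit a straight line to a degrading health indicator and
# project when it will cross a failure threshold.

def estimate_rul(times, health, failure_level):
    """Least-squares linear fit of health vs time, then solve for the time
    at which health reaches failure_level; returns the time remaining after
    the last observation, or None if there is no degradation trend."""
    n = len(times)
    mt = sum(times) / n
    mh = sum(health) / n
    slope = sum((t - mt) * (h - mh) for t, h in zip(times, health)) / \
            sum((t - mt) ** 2 for t in times)
    if slope >= 0:          # indicator is not degrading
        return None
    intercept = mh - slope * mt
    t_fail = (failure_level - intercept) / slope
    return t_fail - times[-1]

if __name__ == "__main__":
    hours = [0, 10, 20, 30]
    health = [1.0, 0.9, 0.8, 0.7]   # drops 0.01 per hour
    print(estimate_rul(hours, health, 0.2))  # about 50 hours of life left
```

The point of doing this with IoT data is that the health indicator is measured continuously in the field, so the maintenance schedule can be driven by the component's actual condition rather than a fixed interval.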
ML and IoT are rapidly attracting the attention of the defence and space sectors. Let's look at the case of NASA, the US space exploration agency. As a part of a five-node network, XBee and ZigBee will be used to monitor Exo-Brake devices in space to collect data, which includes three-axis acceleration in addition to temperature and air pressure. This data is relayed to the ground control station via the Iridium satellite network to make the ML of the Exo-Brake instrument more efficient.
Today, drones in military operations are programmed with ML algorithms. This enables them to determine which pieces of data collected by IoT are critical to the mission and which are not. They collect real-time data when in-flight. These drones assess all incoming data and automatically discard irrelevant data, effectively managing data payloads.
In defence systems today, self-healing drones are slowly gaining widespread acceptance. Each drone has its own ML algorithm as it flies on a mission. Using this, a group of drones on a mission can detect when one member of the group has failed, and then communicate with other drones to regroup and continue the military mission without interruption.
In both the lunar and Mars projects, NASA is using hardened sensors that can withstand extreme heat and cold, high radiation levels and other harsh environmental conditions found in space to make the ML algorithm of the Rovers more effective and hence increase their life span and reliability.
In NASA's Lunar Lander project, the energy choice was solar, which is limitless in space. NASA is planning to take advantage of IoT and ML technology in this sector as well.
**IoT and ML can boost growth in agriculture**
Agriculture is one of the most fundamental human activities. Better technologies mean greater yield. This, in turn, keeps the human race happier and healthier. According to some estimates, worldwide food production will need to increase by 70 per cent by 2050 to keep up with global demand.
Adoption of IoT and ML in the agricultural space is also increasing quickly with the total number of connected devices expected to grow from 30 million in 2015 to 75 million in 2020.
In modern agriculture, all interactions between farmers and agricultural processes are becoming more and more data driven. Even analytical tools are providing the right information at the right time. Slowly but surely, ML is providing the impetus to scale and automate the agricultural sector. It is helping to learn patterns and extract information from large amounts of data, whether structured or unstructured.
**ML and IoT ensure better healthcare**
Today, intelligent, assisted living environments for home-based healthcare of chronic patients are essential. The environment combines the patient's clinical history and a semantic representation of the ICP (individual care process) with the ability to monitor living conditions using IoT technologies. Thus the Semantic Web of Things (SWOT) and ML algorithms, when combined, result in the LDC (less differentiated caregiver). The resultant integrated healthcare framework can provide significant savings while improving general health.
Machine learning algorithms, techniques and machinery are already present in the market to implement reasonable LDC processes. Thus, this technology is sometimes described as supervised or predictive ML.
IoT in home healthcare systems comprises multi-tier area networks. These consist of body area networks (BAN), the LAN and ultimately the WAN. These also need highly secured hybrid clouds.
IoT devices in home healthcare include nano sensors attached to the skin of the patient's body to measure blood pressure, sugar levels, the heart beat, etc. This raw data is transmitted to the patient's database that resides in the highly secured cloud platform. The doctor can access the raw data, previous prescriptions, etc., using sophisticated ML algorithms to recommend specific drugs to patients at remote places if required. Thus, patients at home can be saved from life-threatening health conditions such as sudden heart attacks, paralysis, etc.
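The monitoring loop described above can be sketched as a simple range check on incoming readings. The vital names and "normal" ranges below are illustrative assumptions, not clinical guidance:

```python
# Sketch of the home-healthcare alerting idea: raw vitals from body sensors
# are checked against normal ranges, and out-of-range readings raise an
# alert for the remote doctor. Ranges and field names are illustrative.

NORMAL_RANGES = {
    "heart_rate": (60, 100),      # beats per minute
    "systolic_bp": (90, 120),     # mmHg
    "blood_sugar": (70, 140),     # mg/dL
}

def check_vitals(reading):
    """Return the list of vitals that fall outside their normal range."""
    alerts = []
    for vital, (low, high) in NORMAL_RANGES.items():
        value = reading.get(vital)
        if value is not None and not (low <= value <= high):
            alerts.append(vital)
    return alerts

if __name__ == "__main__":
    sample = {"heart_rate": 130, "systolic_bp": 110, "blood_sugar": 95}
    print(check_vitals(sample))  # ['heart_rate']
```

A production system would run this kind of check continuously at the gateway or in the cloud, with an ML model refining the thresholds per patient from their history.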
In this era of communication and connectivity, individuals have multiple technologies to support their day-to-day requirements. In this scenario, IoT together with ML is emerging as a practical solution for problems facing several sectors.
Growth in IoT is fine but just how much of the data collected by IoT devices is actually useful, is the key question. To answer that, efficient data analytics software, open source platforms and cloud technologies should be used. Machine learning and IoT should work towards creating a better technology, which will ensure efficiency and productivity for all sectors.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/10/machine-learning-ml-and-iot-can-work-together-to-improve-lives/
作者:[Vinayak Ramachandra Adkoli][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/vinayak-adkoli/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/ML-IoT_Sept-19.jpg?resize=696%2C458&ssl=1 (ML & IoT_Sept 19)
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/ML-IoT_Sept-19.jpg?fit=1081%2C711&ssl=1


@ -1,90 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (hopefully2333)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (4 open source cloud security tools)
[#]: via: (https://opensource.com/article/19/9/open-source-cloud-security)
[#]: author: (Alison NaylorAaron Rinehart https://opensource.com/users/asnaylorhttps://opensource.com/users/ansilvahttps://opensource.com/users/sethhttps://opensource.com/users/bretthunoldtcomhttps://opensource.com/users/aaronrineharthttps://opensource.com/users/marcobravo)
4 open source cloud security tools
======
Find and eliminate vulnerabilities in the data you store in AWS and
GitHub.
![Tools in a cloud][1]
If your day-to-day as a developer, system administrator, full-stack engineer, or site reliability engineer involves Git pushes, commits, and pulls to and from GitHub and deployments to Amazon Web Services (AWS), security is a persistent concern. Fortunately, open source tools are available to help your team avoid common mistakes that could cost your organization thousands of dollars.
This article describes four open source tools that can help improve your security practices when you're developing on GitHub and AWS. Also, in the spirit of open source, I've joined forces with three security experts—[Travis McPeak][2], senior cloud security engineer at Netflix; [Rich Monk][3], senior principal information security analyst at Red Hat; and [Alison Naylor][4], principal information security analyst at Red Hat—to contribute to this article.
We've separated each tool by scenario, but they are not mutually exclusive.
### 1\. Find sensitive data with Gitrob
You need to find any potentially sensitive information present in your team's Git repos so you can remove it. It may make sense for you to use tools that are focused towards attacking an application or a system using a red/blue team model, in which an infosec team is divided in two: an attack team (a.k.a. a red team) and a defense team (a.k.a. a blue team). Having a red team to try to penetrate your systems and applications is lots better than waiting for an adversary to do so. Your red team might try using [Gitrob][5], a tool that can clone and crawl through your Git repositories looking for credentials and sensitive files.
Even though tools like Gitrob could be used for harm, the idea here is for your infosec team to use it to find inadvertently disclosed sensitive data that belongs to your organization (such as AWS keypairs or other credentials that were committed by mistake). That way, you can get your repositories fixed and sensitive data expunged—hopefully before an adversary finds them. Remember to remove not only the affected files but [also their history][6]!
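To make the idea concrete, here is a much-simplified sketch of the kind of scan Gitrob performs on a checked-out repository: walk the tree and flag file names that commonly hold credentials. The pattern list is a small illustrative subset, not Gitrob's actual signature set, and a real scan would also inspect file contents and history.

```python
# Simplified Gitrob-style scan: walk a checked-out repository and flag
# file names that commonly hold credentials. Patterns are an illustrative
# subset, not Gitrob's real signatures.

import os
import re

SUSPICIOUS_NAMES = [
    r"\.pem$", r"\.key$", r"^id_rsa$", r"^\.env$",
    r"credentials", r"\.npmrc$",
]

def find_sensitive_files(repo_path):
    """Return paths under repo_path whose names match a suspicious pattern."""
    hits = []
    for root, _dirs, files in os.walk(repo_path):
        for name in files:
            if any(re.search(p, name, re.IGNORECASE) for p in SUSPICIOUS_NAMES):
                hits.append(os.path.join(root, name))
    return sorted(hits)
```

Run against a clone of each of your organization's repositories, anything this flags is a candidate for removal (including, as noted above, its history).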
### 2\. Avoid committing sensitive data with git-secrets
While it's important to find and remove sensitive information in your Git repos, wouldn't it be better to avoid committing those secrets in the first place? Mistakes happen, but you can protect yourself from public embarrassment by using [git-secrets][7]. This tool allows you to set up hooks that scan your commits, commit messages, and merges looking for common patterns for secrets. Choose patterns that match the credentials your team uses, such as AWS access keys and secret keys. If it finds a match, your commit is rejected and a potential crisis averted.
It's simple to set up git-secrets for your existing repos, and you can apply a global configuration to protect all future repositories you initialize or clone. You can also use git-secrets to scan your repos (and all previous revisions) to search for secrets before making them public.
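To illustrate the kind of check git-secrets runs in its hooks, here is a minimal sketch that scans staged text for strings shaped like AWS access key IDs. The single regex below is an illustrative stand-in for git-secrets' fuller pattern set, and the key in the demo is AWS's published documentation example, not a live credential.

```python
# Sketch of a git-secrets-style pre-commit check: scan text for strings
# shaped like AWS access key IDs and reject the commit on a match. The
# regex covers the common AKIA/ASIA-prefixed format only.

import re

AWS_ACCESS_KEY_RE = re.compile(r"\b(AKIA|ASIA)[0-9A-Z]{16}\b")

def commit_allowed(diff_text):
    """Return False if the staged text appears to contain an AWS key."""
    return AWS_ACCESS_KEY_RE.search(diff_text) is None

if __name__ == "__main__":
    staged = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"'
    print(commit_allowed(staged))  # False -> commit would be rejected
```

git-secrets itself wires checks like this into Git's `pre-commit`, `commit-msg`, and merge hooks so the rejection happens before anything reaches a remote.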
### 3\. Create temporary credentials with Key Conjurer
It's great to have a little extra insurance to prevent inadvertently publishing stored secrets, but maybe we can do even better by not storing credentials at all. Keeping track of credentials generally—including who has access to them, where they are stored, and when they were last rotated—is a hassle. However, programmatically generating temporary credentials can avoid a lot of those issues altogether, neatly side-stepping the issue of storing secrets in Git repos. Enter [Key Conjurer][8], which was created to address this need. For more on why Riot Games created Key Conjurer and how they developed it, read _[Key conjurer: our policy of least privilege][9]_.
### 4\. Apply least privilege automatically with Repokid
Anyone who has taken a security 101 course knows that least privilege is the best practice for role-based access control configuration. Sadly, outside school, it becomes prohibitively difficult to apply least-privilege policies manually. An application's access requirements change over time, and developers are too busy to trim back their permissions manually. [Repokid][10] uses data that AWS provides about identity and access management (IAM) use to automatically right-size policies. Repokid helps even the largest organizations apply least privilege automatically in AWS.
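The right-sizing idea behind Repokid can be sketched as simple set arithmetic between the permissions a role is granted and the ones access data shows it actually used. The permission names and data shape below are illustrative; real Repokid consumes AWS IAM and access-usage data.

```python
# Toy version of Repokid's right-sizing idea: compare granted permissions
# with observed-in-use permissions and propose a trimmed policy.

def rightsize_policy(granted, used):
    """Return (kept, removed) permission sets for a least-privilege policy."""
    granted, used = set(granted), set(used)
    kept = granted & used      # permissions the role actually exercises
    removed = granted - used   # unused grants to repo (take back)
    return kept, removed

if __name__ == "__main__":
    granted = ["s3:GetObject", "s3:PutObject", "ec2:TerminateInstances"]
    used = ["s3:GetObject"]
    kept, removed = rightsize_policy(granted, used)
    print(sorted(kept))     # ['s3:GetObject']
    print(sorted(removed))  # ['ec2:TerminateInstances', 's3:PutObject']
```

The hard part in practice is the `used` set: it must be observed over a long enough window that rarely exercised but legitimate permissions are not trimmed away.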
### Tools, not silver bullets
These tools are by no means silver bullets, but they are just that: tools! So, make sure you work with the rest of your organization to understand the use cases and usage patterns for your cloud services before trying to implement any of these tools or other controls.
Becoming familiar with the best practices documented by all your cloud and code repository services should be taken seriously as well. The following articles will help you do so.
**For AWS:**
* [Best practices for managing AWS access keys][11]
* [AWS security audit guidelines][12]
**For GitHub:**
* [Introducing new ways to keep your code secure][13]
* [GitHub Enterprise security best practices][14]
Last but not least, reach out to your infosec team; they should be able to provide you with ideas, recommendations, and guidelines for your team's success. Always remember: security is everyone's responsibility, not just theirs.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/open-source-cloud-security
作者:[Alison NaylorAaron Rinehart][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/asnaylorhttps://opensource.com/users/ansilvahttps://opensource.com/users/sethhttps://opensource.com/users/bretthunoldtcomhttps://opensource.com/users/aaronrineharthttps://opensource.com/users/marcobravo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cloud_tools_hardware.png?itok=PGjJenqT (Tools in a cloud)
[2]: https://twitter.com/travismcpeak?lang=en
[3]: https://github.com/rmonk
[4]: https://www.linkedin.com/in/alperkins/
[5]: https://github.com/michenriksen/gitrob
[6]: https://help.github.com/en/articles/removing-sensitive-data-from-a-repository
[7]: https://github.com/awslabs/git-secrets
[8]: https://github.com/RiotGames/key-conjurer
[9]: https://technology.riotgames.com/news/key-conjurer-our-policy-least-privilege
[10]: https://github.com/Netflix/repokid
[11]: https://docs.aws.amazon.com/general/latest/gr/aws-access-keys-best-practices.html
[12]: https://docs.aws.amazon.com/general/latest/gr/aws-security-audit-guide.html
[13]: https://github.blog/2019-05-23-introducing-new-ways-to-keep-your-code-secure/
[14]: https://github.blog/2015-10-09-github-enterprise-security-best-practices/


@ -1,264 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (HankChow)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (CentOS 8 Installation Guide with Screenshots)
[#]: via: (https://www.linuxtechi.com/centos-8-installation-guide-screenshots/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
CentOS 8 Installation Guide with Screenshots
======
After the **RHEL 8** release, the **CentOS** community has released its most awaited Linux distribution, **CentOS 8**. It is released in two forms:
* **CentOS Stream** Designed for developers, who will get updates quite frequently.
* **CentOS** A stable, RHEL 8-like OS on which sysadmins can install and configure servers and applications.
In this article, we will demonstrate how to install CentOS 8 Server step by step with screenshots.
### New features in CentOS 8:
* DNF is the default package manager though yum can also be used.
* Network configuration will be controlled by Network Manager (nmcli &amp; nmtui) as network scripts are removed.
* Podman utility to manage containers
* Introduction of two new packages repositories: BaseOS and AppStream
* Cockpit available as default server management tool
* Wayland is the default display server
* Iptables are replaced by nftables
* Linux Kernel 4.18
* PHP 7.2, Python 3.6, Ansible 2.8, VIM 8.0 and Squid 4
### Minimum System Requirements for CentOS 8:
* 2 GB RAM
* 2 GHz or Higher Processor
* 20 GB Hard Disk
* 64-bit x86 System
### CentOS 8 Installation Steps with Screenshots
### Step:1) Download CentOS 8 ISO File
Download CentOS 8 ISO file from its official site,
<https://www.centos.org/download/>
### Step:2) Create CentOS 8 bootable media (USB / DVD)
Once you have downloaded the CentOS 8 ISO file, burn it to a USB stick or DVD to make it bootable.
Reboot the system on which you want to install CentOS 8, and set the boot medium to USB or DVD in the BIOS settings.
### Step:3) Choose “Install CentOS Linux 8.0” option
When the system boots up with CentOS 8 bootable media, then we will get the following screen, choose “**Install CentOS Linux 8.0**” and hit enter,
[![Choose-Install-CentOS8][1]][2]
### Step:4) Select your preferred language
Choose the language that suits your CentOS 8 installation and then click on Continue,
[![Select-Language-CentOS8-Installation][1]][3]
### Step:5) Preparing CentOS 8 Installation
In this step we will configure the followings:
* Keyboard Layout
* Date / Time
* Installation Source
* Software Selection
* Installation Destination
* Kdump
[![Installation-Summary-CentOS8][1]][4]
As we can see in the above window, the installer has automatically picked the **Keyboard** layout, **Time &amp; Date**, **Installation Source** and **Software Selection.**
If you want to change any of these settings, click on the respective icon. Let's assume we want to change the system's Time &amp; Date: click on **Time &amp; Date**, choose the time zone that suits your installation, and then click on **Done**
[![TimeZone-CentOS8-Installation][1]][5]
Choose your preferred option from “**Software Selection**”: if you want to install a server with a GUI, choose the “**Server with GUI**” option, and if you want to do a minimal installation, choose “**Minimal Install**”.
[![Software-Selection-CentOS8-Installation][1]][6]
In this tutorial we will go with the “**Server with GUI**” option; click on Done.
**Kdump** is enabled by default. If you wish to disable it, click on its icon and disable it, but it is strongly recommended to keep kdump enabled.
If you wish to configure networking during the installation, then click on “**Network &amp; Host Name**”
[![Networking-During-CentOS8-Installation][1]][7]
If your system is connected to a modem running DHCP, it will automatically pick up an IP address whenever we enable the interface. If you wish to configure a static IP, click on **Configure** and specify the IP details there. Apart from this, we have also set the host name to “**linuxtechi.com**”.
Once you are done with network changes, click on Done,
Now finally configure the **Installation Destination**. In this step we will specify the disk on which we will install CentOS 8 and its partition scheme.
[![Installation-Destination-Custom-CentOS8][1]][8]
Click on Done
As we can see, I have 40 GB of disk space for the CentOS 8 installation. Here we have two options to create the partition scheme: if you want the installer to partition the 40 GB disk automatically, choose “**Automatic**” from **Storage Configuration**, and if you want to create partitions manually, choose the “**Custom**” option.
In this tutorial I will create custom partitions by choosing the “Custom” option. I will create the following LVM-based partitions:
* /boot  2 GB (ext4 file system)
* /  12 GB (xfs file system)
* /home  20 GB (xfs file system)
* /tmp  5 GB (xfs file system)
* Swap  1 GB
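Before clicking through the partitioning dialogs, it is worth sanity-checking that the planned layout fits the disk. A tiny shell sketch of our own, using the sizes listed above and the 40 GB disk from this walkthrough:

```shell
# Sum the planned partition sizes (in GB) and compare against the disk.
boot=2; root=12; home=20; tmp=5; swap=1
disk=40
total=$((boot + root + home + tmp + swap))
echo "planned: ${total} GB of ${disk} GB"   # → planned: 40 GB of 40 GB
if [ "$total" -le "$disk" ]; then
  echo "layout fits"
else
  echo "layout exceeds the disk"
fi
```

Here the plan uses the disk exactly; leave a little headroom if you expect to grow a logical volume later.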
First create /boot as a standard partition of size 2 GB; the steps are shown below,
[![boot-partition-CentOS8-Installation][1]][9]
Click on “**Add mount point**”
Create the second partition, /, of size 12 GB on LVM: click on the + symbol, specify the mount point and size, and then click on “Add mount point”.
[![slash-root-partition-centos8-installation][1]][10]
In the next screen, change the Partition Type from Standard to LVM for the / partition and click on “Update Settings”.
[![Change-Partition-Type-CentOS8][1]][11]
As we can see above, the installer has automatically created a volume group. If you want to change the name of that volume group, click on the “**Modify**” option in the “**Volume Group**” tab.
Similarly, create the next partitions, /home and /tmp, of size 20 GB and 5 GB respectively, and also change their partition type from Standard to **LVM**,
[![home-partition-CentOS8-Installation][1]][12]
[![tmp-partition-centos8-installation][1]][13]
Finally create swap partition,
[![Swap-Partition-CentOS8-Installation][1]][14]
Click on “Add mount point”
Once you are done creating all the partitions, click on Done,
[![Choose-Done-after-manual-partition-centos8][1]][15]
In the next window, click on “**Accept Changes**”; this will write the changes to disk,
[![Accept-changes-CentOS8-Installation][1]][16]
### Step:6) Choose “Begin Installation”
Once we accept the changes in the above window, we move back to the installation summary screen; there, click on “**Begin Installation**” to start the installation.
[![Begin-Installation-CentOS8][1]][17]
The screen below confirms that the installation has started,
[![Installation-progress-centos8][1]][18]
To set the root password, click on the “**Root Password**” option and specify the password. Then click on the “**User Creation**” option to create a local user,
[![Root-Password-CentOS8-Installation][1]][19]
Local User details,
[![Local-User-Details-CentOS8][1]][20]
The installation is in progress, and once it is completed the installer will prompt us to reboot the system,
[![CentOS8-Installation-Progress][1]][21]
### Step:7) Installation Completed and reboot system
Once the installation is completed, reboot your system: click on Reboot,
[![Installation-Completed-CentOS8][1]][22]
**Note:** After the reboot, don't forget to remove the installation media and set the boot medium back to disk in the BIOS.
### Step:8) Boot newly installed CentOS 8 and Accept License
From the grub menu, select the first option to boot CentOS 8,
[![Grub-Boot-CentOS8][1]][23]
Accept CentOS 8 License and then click on Done,
[![Accept-License-CentOS8-Installation][1]][24]
In the next screen, click on “**Finish Configuration**”
[![Finish-Configuration-CentOS8-Installation][1]][25]
### Step:9) Login Screen after finishing the configuration
We will get the following login screen after accepting CentOS 8 license and finishing the configuration
[![Login-screen-CentOS8][1]][26]
Use the credentials of the user that you created during the installation. Follow the on-screen instructions, and finally we will get the following screen,
[![CentOS8-Ready-Use-Screen][1]][27]
Click on “**Start Using CentOS Linux**”
[![Desktop-Screen-CentOS8][1]][28]
That's all for this tutorial; this confirms we have successfully installed CentOS 8. Please do share your valuable feedback and comments.
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/centos-8-installation-guide-screenshots/
作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Choose-Install-CentOS8.jpg
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Select-Language-CentOS8-Installation.jpg
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Installation-Summary-CentOS8.jpg
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/09/TimeZone-CentOS8-Installation.jpg
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Software-Selection-CentOS8-Installation.jpg
[7]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Networking-During-CentOS8-Installation.jpg
[8]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Installation-Destination-Custom-CentOS8.jpg
[9]: https://www.linuxtechi.com/wp-content/uploads/2019/09/boot-partition-CentOS8-Installation.jpg
[10]: https://www.linuxtechi.com/wp-content/uploads/2019/09/slash-root-partition-centos8-installation.jpg
[11]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Change-Partition-Type-CentOS8.jpg
[12]: https://www.linuxtechi.com/wp-content/uploads/2019/09/home-partition-CentOS8-Installation.jpg
[13]: https://www.linuxtechi.com/wp-content/uploads/2019/09/tmp-partition-centos8-installation.jpg
[14]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Swap-Partition-CentOS8-Installation.jpg
[15]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Choose-Done-after-manual-partition-centos8.jpg
[16]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Accept-changes-CentOS8-Installation.jpg
[17]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Begin-Installation-CentOS8.jpg
[18]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Installation-progress-centos8.jpg
[19]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Root-Password-CentOS8-Installation.jpg
[20]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Local-User-Details-CentOS8.jpg
[21]: https://www.linuxtechi.com/wp-content/uploads/2019/09/CentOS8-Installation-Progress.jpg
[22]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Installation-Completed-CentOS8.jpg
[23]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Grub-Boot-CentOS8.jpg
[24]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Accept-License-CentOS8-Installation.jpg
[25]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Finish-Configuration-CentOS8-Installation.jpg
[26]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Login-screen-CentOS8.jpg
[27]: https://www.linuxtechi.com/wp-content/uploads/2019/09/CentOS8-Ready-Use-Screen.jpg
[28]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Desktop-Screen-CentOS8.jpg


@@ -0,0 +1,72 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cacoo: A Lightweight Online Tool for Modelling AWS Architecture)
[#]: via: (https://opensourceforu.com/2019/09/cacoo-a-lightweight-online-tool-for-modelling-aws-architecture/)
[#]: author: (Magesh Kasthuri https://opensourceforu.com/author/magesh-kasthuri/)
Cacoo: A Lightweight Online Tool for Modelling AWS Architecture
======
[![AWS][1]][2]
_Cacoo is a simple and efficient online tool that can be used to model diagrams for AWS architecture. It is not specific to AWS architecture and can be used for UML modelling, cloud architecture for GCP, Azure, network architecture, etc. However, this open source tool is one of the most efficient in architecture modelling for AWS solutions._
For a cloud architect, representing the solution's design as an architecture diagram is much more helpful in explaining the details visually to target audiences like the IT manager, the development team, business stakeholders and the application owner. Though there are many tools like Sparx Enterprise Architect, Rational Software Modeler and Visual Paradigm, to name a few, these are not sophisticated or flexible enough for cloud architecture modelling. Cacoo is an advanced and lightweight tool that has many features to support AWS cloud modelling, as can be seen in Figures 1 and 2.
![Figure 1: Template options for AWS architecture diagram][3]
![Figure 2: Sample AWS architecture diagram in Cacoo][4]
![Figure 3: AWS diagram options in Cacoo][5]
Though AWS provides developer tools, there is no built-in tool provided for solution modelling and hence we have to choose an external tool like Cacoo for the design preparation.
We can start solution modelling in Cacoo by using the AWS diagram templates, which provide pre-built templates for standard architecture diagrams like network diagrams, DevOps solutions, etc. Alternatively, if you want to develop a custom solution from the list of shapes available in the Cacoo online editor, you can choose AWS components like compute, storage, network, analytics, AI tools, etc, and prepare a custom architecture to suit your solution, as shown in Figure 2.
There are connectors available to relate the components (for example, how network communication happens, and how ELB or elastic load balancing branches to EC2 storage). Figure 3 lists sample diagram shapes available for AWS architecture diagrams in Cacoo.
![Figure 4: Create an IAM role to connect to Cacoo][6]
![Figure 5: Add the policy to the IAM role to enable Cacoo to import from the AWS account][7]
**Integrating Cacoo with an AWS account to import architecture**
One of the biggest advantages of Cacoo compared to other cloud modelling tools is that it can import architecture from an AWS account. We can connect to an AWS account, and Cacoo selects the services created in the account with the role attached and prepares an architecture diagram, on the fly.
For this, we need to first create an IAM (Identity and Access Management) role in the AWS account with the account ID and external ID as given in the Cacoo Import AWS Architecture account (Figure 4).
Then we need to add a policy to the IAM role in order to access the components attached to this role from Cacoo. For policy creation, we have sample policies available in Cacoo's Import AWS Architecture wizard. We just need to copy and paste the policy as shown in Figure 5.
Once this is done, the IAM role is created in the AWS account. Now we need to copy the role ARN (Amazon Resource Name) from the newly created role and paste it into Cacoo's Import AWS Architecture wizard as shown in Figure 6. This imports the architecture of the services created in the account that are attached to the IAM role we have created, and displays it as an architecture diagram.
![Figure 6: Cacoos AWS Architecture Import wizard][8]
![Figure 7: Cacoo worksheet with AWS imported architecture][9]
Once this is done, we can see the architecture in Cacoo's worksheet (Figure 7). We can print or export the architecture diagram into PPT, PNG, SVG, PDF, etc, for an architecture document, or for poster printing and other technical discussion purposes, as needed.
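The role-creation steps described above can also be scripted. The sketch below builds the kind of cross-account trust policy the wizard describes; the account ID, external ID and role name are hypothetical placeholders (the real values come from Cacoo's Import AWS Architecture wizard), and the AWS CLI calls are left commented out since they need configured credentials.

```shell
# Sketch: build the cross-account trust policy locally.
# CACOO_ACCOUNT_ID and CACOO_EXTERNAL_ID are placeholders; the real
# values are shown in Cacoo's "Import AWS Architecture" wizard.
CACOO_ACCOUNT_ID="123456789012"
CACOO_EXTERNAL_ID="example-external-id"

cat > /tmp/cacoo-trust-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::${CACOO_ACCOUNT_ID}:root" },
    "Action": "sts:AssumeRole",
    "Condition": { "StringEquals": { "sts:ExternalId": "${CACOO_EXTERNAL_ID}" } }
  }]
}
EOF

# With AWS credentials configured, the role could then be created and
# its ARN printed for pasting into the wizard (commented out here):
# aws iam create-role --role-name cacoo-import \
#     --assume-role-policy-document file:///tmp/cacoo-trust-policy.json
# aws iam get-role --role-name cacoo-import --query Role.Arn --output text
```

The `sts:ExternalId` condition is what prevents a third party from assuming the role without the secret value the wizard gives you.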
Cacoo is one of the most powerful cloud architecture modelling tools and can be used for visual designs for AWS architecture, on the fly, using online tools without installing any software. The online account is accessible from anywhere and can be used for quick architecture presentation.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/09/cacoo-a-lightweight-online-tool-for-modelling-aws-architecture/
作者:[Magesh Kasthuri][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/magesh-kasthuri/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2017/07/AWS.jpg?resize=696%2C427&ssl=1 (AWS)
[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2017/07/AWS.jpg?fit=750%2C460&ssl=1
[3]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-1-Template-options-for-AWS-architecture-diagram.jpg?resize=350%2C262&ssl=1
[4]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-2-Sample-AWS-architecture-diagram-in-Cacoo.jpg?resize=350%2C186&ssl=1
[5]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-3-AWS-diagram-options-in-Cacoo.jpg?resize=350%2C337&ssl=1
[6]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-4-Create-an-IAM-role-to-connect-to-Cacoo.jpg?resize=350%2C228&ssl=1
[7]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-5-Add-the-policy-to-the-IAM-role-to-enable-Cacoo-to-import-from-the-AWS-account.jpg?resize=350%2C221&ssl=1
[8]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-6-Cacoo%E2%80%99s-AWS-Architecture-Import-wizard.jpg?resize=350%2C353&ssl=1
[9]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-7-Cacoo%E2%80%99s-worksheet-with-AWS-imported-architecture.jpg?resize=350%2C349&ssl=1


@@ -0,0 +1,119 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How the Linux screen tool can save your tasks and your sanity if SSH is interrupted)
[#]: via: (https://www.networkworld.com/article/3441777/how-the-linux-screen-tool-can-save-your-tasks-and-your-sanity-if-ssh-is-interrupted.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
How the Linux screen tool can save your tasks and your sanity if SSH is interrupted
======
The Linux screen command can be a life-saver when you need to ensure long-running tasks don't get killed when an SSH session is interrupted. Here's how to use it.
Sandra Henry-Stocker
If you've ever had to restart a time-consuming process because your SSH session was disconnected, you might be very happy to learn about an interesting tool that you can use to avoid this problem: the **screen** tool.
Screen, which is a terminal multiplexer, allows you to run many terminal sessions within a single SSH session, detaching from them and reattaching to them as needed. The process for doing this is surprisingly simple and involves only a handful of commands.
**[ Two-Minute Linux Tips: [Learn how to master a host of Linux commands in these 2-minute video tutorials][1] ]**
To start a screen session, you simply type **screen** within your ssh session. You then start your long-running process, type **Ctrl+A Ctrl+D** to detach from the session and **screen -r** to reattach when the time is right.
If you're going to run more than one screen session, a better option is to give each session a meaningful name that will help you remember what task is being handled in it. Using this approach, you would name each session when you start it by using a command like this:
```
$ screen -S slow-build
```
Once you have multiple sessions running, reattaching to one then requires that you pick it from the list. In the commands below, we list the currently running sessions before reattaching one of them. Notice that initially both sessions are marked as being detached.
```
$ screen -ls
There are screens on:
6617.check-backups (09/26/2019 04:35:30 PM) (Detached)
1946.slow-build (09/26/2019 02:51:50 PM) (Detached)
2 Sockets in /run/screen/S-shs
```
Reattaching to the session then requires that you supply the assigned name. For example:
```
$ screen -r slow-build
```
The process you left running should have continued processing while it was detached and you were doing some other work. If you ask about your screen sessions while using one of them, you should see that the session you're currently reattached to is once again “attached.”
```
$ screen -ls
There are screens on:
6617.check-backups (09/26/2019 04:35:30 PM) (Attached)
1946.slow-build (09/26/2019 02:51:50 PM) (Detached)
2 Sockets in /run/screen/S-shs.
```
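The session names in that listing are easy to pick out programmatically. A small sketch (the `list_sessions` helper is our own, not part of screen) that extracts the names from `screen -ls`-style output:

```shell
# Print one session name per line from `screen -ls`-style output on stdin.
# Session lines look like "6617.check-backups ... (Attached)"; the name
# is the part of the first field after "pid.".
list_sessions() {
  awk '/\((Attached|Detached)\)/ { split($1, parts, "."); print parts[2] }'
}

# Demo with the listing shown above:
printf '%s\n' \
  'There are screens on:' \
  '    6617.check-backups (09/26/2019 04:35:30 PM) (Attached)' \
  '    1946.slow-build (09/26/2019 02:51:50 PM) (Detached)' \
  '2 Sockets in /run/screen/S-shs.' | list_sessions
# → check-backups
# → slow-build
```

This kind of filter is handy in scripts that, say, reattach to a session only if it exists.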
You can ask what version of screen youre running with the **-version** option.
```
$ screen -version
Screen version 4.06.02 (GNU) 23-Oct-17
```
### Installing screen
If “which screen” doesn't provide information on screen, it probably isn't installed on your system.
```
$ which screen
/usr/bin/screen
```
If you need to install it, one of the following commands is probably right for your system:
```
sudo apt install screen
sudo yum install screen
```
The screen tool comes in handy whenever you need to run time-consuming processes that could be interrupted if your SSH session disconnects for any reason. And, as you've just seen, it's very easy to use and manage.
Here's a recap of the commands used above:
```
screen -S <process description> start a session
Ctrl+A Ctrl+D detach from a session
screen -ls list sessions
screen -r <process description> reattach a session
```
While there is more to know about **screen**, including additional ways that you can maneuver between screen sessions, this should get you started using this handy tool.
Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3441777/how-the-linux-screen-tool-can-save-your-tasks-and-your-sanity-if-ssh-is-interrupted.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
[2]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[3]: https://www.facebook.com/NetworkWorld/
[4]: https://www.linkedin.com/company/network-world


@@ -0,0 +1,134 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Best Android Apps for Protecting Privacy and Keeping Information Secure)
[#]: via: (https://opensourceforu.com/2019/10/the-best-android-apps-for-protecting-privacy-and-keeping-information-secure/)
[#]: author: (Magimai Prakash https://opensourceforu.com/author/magimai-prakash/)
The Best Android Apps for Protecting Privacy and Keeping Information Secure
======
[![][1]][2]
_Privacy violations and data theft occur every day, making it necessary for all of us to safeguard our data. We trust our smartphones way too much and tend to store personal data on them, ignoring the fact that these devices could easily be compromised. However, there are a few open source apps that can ensure the data on your phone is not compromised. This article lists the best ones._
Everyone is becoming aware of information security. There are plenty of privacy and security apps available in the Google Play store too, but it is not easy to select the right one. Most users prefer free apps, but some of these offer only limited functionality and force users to upgrade to a premium membership, which many cannot afford.
This article sheds light on some FOSS Android apps that will really help in safeguarding your privacy.
![Figure 1: Safe Notes][3]
![Figure 2: Exodus Privacy][4]
**Safe Notes**
Safe Notes is a companion app for the Protected Text website (_<https://www.protectedtext.com/>_). It is an online encrypted notepad which offers space on a separate site for users to store their notes. To use this service, you do not need to sign up with the website. Instead, you need to choose a site name and a password to protect it.
You have two options to use Safe Notes: you can either use this app to save your notes locally, or you can import your existing Protected Text site into the app. In the latter case, you can synchronise your notes between the app and the Protected Text website.
By default, all the notes will be in an unlocked state. After you have saved your notes, if you want to encrypt them, click on the key icon beside your note and you will be prompted to give a password. After entering the password of your choice, your note will be encrypted and instead of the key icon, you will see an unlocked icon in its place, which means that your note is not locked. To lock your note, click the Unlocked icon beside your note — your note will get locked and the password will be removed from your device.
Passwords that you are using are not transmitted anywhere. Even if you are using an existing Protected Text site, your passwords are not transmitted. Only your encrypted notes get sent to the Protected Text servers, so you are in total control. But this also means that you cannot recover your password if you lose it.
Your notes are encrypted with the AES algorithm and hashed with SHA-512, while SSL is used for data transmission.
![Figure 3: Net Guard][5]
**Exodus Privacy**
Have you ever wondered how many permissions you are granting to an Android app? While you can see these in the Google Play store, you may not know that some of those permissions are impacting your privacy more severely than you realise.
While permissions are taking control of your device with or without your knowledge, third party trackers also compromise your privacy by stealthily collecting data without your consent. And the worst part is that you have no clue as to how many trackers you have in your Android app.
To view the permissions for an Android app and the trackers in it, use Exodus Privacy.
Exodus Privacy is an Android app that has been created and maintained by a French non-profit organisation. While the app is not capable of any analysis, it will fetch reports from the Exodus Platform for the apps that are installed in your device.
These reports are auto-generated by using the static analysis method and, currently, the Exodus platform contains 58,392 reports. Each report gives you information about the number of trackers and permissions.
Permissions are evaluated using the three levels of Google's permission classification: Normal, Signature and Dangerous. We should be concerned about the Dangerous level, because such permissions can access the user's private data and other stored sensitive data.
Trackers are also listed in this app. When you click on a tracker, you will be taken to a page which shows you the other Android apps that have that particular tracker. This can be really useful to know if the same tracker has been used in the other apps that you have installed.
In addition, the reports will contain information such as Fingerprint and other geographical details about the app publisher such as Country, State and Locality.
![Figure 4: xBrowserSync][6]
![Figure 5: Scrambled Exif][7]
**Net Guard**
Most Android apps need network access to function properly, but offline apps don't need it to operate. Yet some of these offline apps continue to run in the background and use network access for some reason or other. As a result, your battery gets drained very quickly and the data plan on your phone gets exhausted faster than you think.
Net Guard solves this problem by blocking network access for selected apps. Net Guard only blocks outgoing traffic from apps, not what's incoming.
The Net Guard main window displays all the installed apps. For every app you will see the mobile network icon and the Wi-Fi icon. When they are both green, it means that Net Guard will allow the app to have network access via the mobile network and Wi-Fi. Alternatively, you can enable any one of them; for example, you can allow the app to use the Internet only via the mobile network by clicking on the Mobile network icon to turn it green while the Wi-Fi icon is red.
When both the Mobile network and Wi-Fi icons are red, the apps outgoing traffic is blocked.
Also, when Lockdown mode is enabled, it will block the network access for all apps except those that are configured to have network access in the Lockdown mode too. This is useful when you have very little battery and your data plan is about to expire.
Net Guard can also block network access to the system apps, but please be cautious about this because sometimes, when the user blocks Internet access to some critical system apps, it could result in a malfunction of other apps.
**xBrowserSync**
xBrowserSync is a free and open source service that helps to sync bookmarks across your devices. Most of the sync services require you to sign up and keep your data with them.
xBrowserSync is an anonymous and secure service, for which you need not sign up. To use this service you need to know your sync ID and have a strong password for it.
Currently, xBrowserSync supports the Mozilla and Chrome browsers, so if you're using either one of them, you can proceed further. Also, if you have to transfer a huge number of bookmarks from your existing service to xBrowserSync, it is advisable to back up all your bookmarks before you create your first sync.
You can create your first sync by entering a strong password for it. After your sync is created, a unique sync ID will be shown to you, which can be used to sync your bookmarks across your devices.
xBrowserSync encrypts all your data locally before it is synced. It uses PBKDF2 with 250,000 iterations of SHA-256 for key derivation to combat brute-force attacks, and AES-GCM with a random 16-byte IV (initialization vector: a random number used together with the secret key to encrypt the data), with the user's 32-character sync ID as the salt value. All of this is in place to ensure that your data cannot be decrypted without your password.
The app provides you with a sleek interface that makes it easy for you to add bookmarks, and share and edit them by adding descriptions and tags to them.
xBrowserSync is currently hosted by four providers, including the official one. To accommodate all users, synced data that isn't accessed for a long time is removed. If you don't want to be dependent on other providers, you can host xBrowserSync yourself.
![Figure 6: Riseup VPN][8]
**Scrambled Exif**
When we share our photos on social media, sometimes we share the metadata on those photos accidentally. Metadata can be useful in some situations, but it can also pose a serious threat to your privacy. A typical photo may carry data such as the date and time, the make and model of the camera, the phone name and the location. When all these pieces of data are put together by a system or by a group of people, it is possible to determine where you were at that particular time.
So if you want to share your photos with your friends as well as on social media without divulging metadata, you can use Scrambled Exif.
Scrambled Exif is a free and open source tool which removes the Exif data from your photos. After installing the app, when you want to share a photo, click on the Share button on the photo and choose Scrambled Exif from the available sharing options. Once you have done that, all the metadata is removed from the photo and you will be shown the share list again. From there, you can share your photo normally.
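On a desktop, the same cleanup can be done with the widely used `exiftool` utility. A minimal sketch, assuming `exiftool` is installed (the `strip_exif` wrapper is our own, not part of the app):

```shell
# Strip all metadata tags from the given photos with exiftool.
# exiftool keeps a "<name>_original" backup of each file it rewrites.
strip_exif() {
  if [ "$#" -lt 1 ]; then
    echo "usage: strip_exif <photo> [more photos...]" >&2
    return 1
  fi
  # -all= removes every writable metadata tag from the files.
  exiftool -all= "$@"
}

# Example:
# strip_exif vacation-1.jpg vacation-2.jpg
```

This is handy for cleaning photos in bulk before uploading them from a computer.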
**Riseup VPN**
Riseup VPN (Virtual Private Network) is a tool that enables you to protect your identity, bypass censorship imposed on your network and encrypt your Internet traffic. Some VPN service providers log your IP address and quietly betray your trust.
Riseup VPN is a personal VPN service offered by the Riseup Organization, which is a non-profit that fights for a free Internet by providing tools and other resources for anyone who wants to enjoy the Internet without being restrained.
To use the Riseup VPN, you do not need to register, nor do you need to configure the settings — it is all prepped for you. All you need is to click on the Turn on button and within a few moments, you can see that your traffic is routed through the Riseup networks. By default, Riseup does not log your IP address.
At present, Riseup VPN supports the Riseup networks in Hong Kong and Amsterdam.
![Figure 7: Secure Photo Viewer][9]
**Secure Photo Viewer**
When you want to show a cool picture of yours to your friends by giving your phone to them, some of them may get curious and go to your gallery to view all your photos. Once you unlock the gallery, you cannot control what should be shown and what ought to be hidden, as long as your phone is with them.
Secure Photo Viewer fixes this problem. After installing it, choose the photos or videos you want to show to a friend and click share. This will show Secure Photo Viewer in the available options. Once you click on it, a new window will open and it will instruct you to lock your device. Within a few seconds the photo you have chosen will show up on the screen. Now you can show your friends just that photo; they can't get into your gallery and view the rest of your private photos.
Most of the apps listed here are available on F-Droid as well as on Google Play. I recommend using F-Droid because every app has been compiled via its source code by F-Droid itself, so it is unlikely to have malicious code injected in it.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/10/the-best-android-apps-for-protecting-privacy-and-keeping-information-secure/
作者:[Magimai Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/magimai-prakash/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Android-Apps-security.jpg?resize=696%2C658&ssl=1 (Android Apps security)
[2]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Android-Apps-security.jpg?fit=890%2C841&ssl=1
[3]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-1-Safe-Notes.jpg?resize=211%2C364&ssl=1
[4]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-2-Exodus-Privacy.jpg?resize=225%2C386&ssl=1
[5]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-3-Net-Guard.jpg?resize=226%2C495&ssl=1
[6]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-4-xBrowserSync.jpg?resize=251%2C555&ssl=1
[7]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-5-Scrambled-Exif-350x535.jpg?resize=235%2C360&ssl=1
[8]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-6-Riseup-VPN.jpg?resize=242%2C536&ssl=1
[9]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-7-Secure-Photo-Viewer.jpg?resize=228%2C504&ssl=1


@ -0,0 +1,74 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Creating a perfect landing page for free)
[#]: via: (https://opensourceforu.com/2019/10/creating-a-perfect-landing-page-for-free/)
[#]: author: (Jagoda Glowacka https://opensourceforu.com/author/jagoda-glowacka/)
Creating a perfect landing page for free
======
[![][1]][2]
_Nowadays running a business online has become more popular than doing it the traditional way. Entrepreneurs are lured by the lack of barriers to entry, the simplicity of reaching a wide range of customers, and endless possibilities for growth. With the Internet and new technologies it is far easier today to become an online businessman than a traditional one. However, becoming an entrepreneur is one thing; staying on the market is another._
Since the digital business world is constantly expanding, the competition is getting fiercer and the quality of the products and services on offer keeps rising. That makes it harder to get noticed in the crowd of equally ambitious online businessmen offering similar products. To survive you need to play every card you have, and even then you should always be thinking about improvement and innovation.
One of those cards should definitely be a decent, nice-looking, attention-grabbing landing page that boosts your conversions and builds trust among your potential customers. Since today you can easily [_create a landing page_][3] for free, you should never deprive your business of one. It is a highly powerful tool that can get your business off the ground and bring in a lot of new leads. To do all of this, however, it has to be a high-quality landing page that is impeccable for your target audience.
**A landing page is a must for every online business**
The concept of landing pages arrived only a few years back, but those few years were enough for it to settle in and become a necessity for every online business. In the beginning plenty of businessmen chose to ignore their existence and preferred to persuade themselves that a homepage was already enough. Well, sorry to break it to them, but it's not.
**Homepage should never equal landing page**
Obviously, a homepage is also a must for every online business, and without one the business can only exist in the entrepreneur's imagination ;-) However, the essence of a homepage is not the same as the essence of a landing page. Even the most state-of-the-art business website does not replace a good landing page.
Homepages do serve multiple purposes, but none of them is focused on attracting new clients, as they don't clearly encourage visitors to take an action such as subscribing or filling out a contact form. A homepage's primary focus is the company itself: its full offer, history or founder. That makes it full of distracting information and links. And last but not least, the information on it is not arranged in an order that would make visitors desire the product instantly.
**Landing pages impose action**
A landing page is a standalone web page that serves as a first-impression maker for visitors. It is the place where your new potential customers land, and to keep them you need to show them instantly that your solution is something they need. It should quickly grab visitors' attention, engage them in an action, and get them interested in your product or service. And it should do all of that as quickly as possible.
Landing pages are therefore a great tool for increasing your conversion rate, gathering information about your visitors, engaging new potential leads in an action (such as subscribing to a free trial or a newsletter, which provides you with personal information about them), and convincing them that your product or service is worthwhile. To fulfill all these functions, however, it needs to contain all the necessary landing page elements, and it has to be a landing page of high quality.
**Every landing page needs some core features**
In order to create a perfectly converting landing page you need to plan its structure and put all the essential elements on it that will help you achieve your goals. The core elements that should be placed on every landing page are:
* headlines, which should be catchy, keyword-focused and eye-catching. A headline is the first, and sometimes only, element that visitors read, so it has to be well thought out and a little intriguing,
* subheadlines, which should complete the headlines: a little more descriptive, but still keyword-focused and catchy,
* the benefits of your solution, clearly outlined and demonstrating to your potential leads the high value of the product and the absolute necessity of purchasing it,
* a call to action in a visible place, allowing visitors to subscribe to a free trial, get coupons, join a newsletter or purchase right away.
All of these features, put together in the right order, enable you to boost your conversions and make your product or service absolutely desirable to your customers. They are the core elements of every landing page, and without any one of them there is a higher risk of the landing page failing.
However, putting in all the elements is one thing; designing the landing page is another. When planning its structure you should always keep in mind who your target is and adjust the look of your landing page accordingly. You should also keep up with landing page trends, which keep your landing page up to date and appealing to customers.
If this all sounds quite confusing, and you are a landing page newbie or still don't feel confident about landing page creation, you can make the task easier and use a highly powerful tool the landing page savvies have prepared for you. And that is a [_free landing page creator_][4], which helps you create a high-quality, eye-catching landing page in less than an hour.
**Creating a free landing page is a piece of cake**
Today the digital marketing world is full of bad-quality landing pages that don't really work miracles for businesses. To reap all the benefits, the quality of the landing page is crucial, and choosing a landing page builder designed by landing page experts is one of the safest ways to create an excellent one.
These are online tools that gently guide you through the whole creation process, making it effortless and quick. They are full of built-in features such as landing page layouts and templates, drag-and-drop functionality, simple copying and moving, and tailoring your landing page to every type of device. Thanks to a free trial period, you can use these builders for up to 14 days for free. Quite nice, huh? ;-)
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/10/creating-a-perfect-landing-page-for-free/
作者:[Jagoda Glowacka][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/jagoda-glowacka/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2017/09/Long-wait-open-for-webpage-in-broser-using-laptop.jpg?resize=696%2C405&ssl=1 (Long wait open for webpage in broser using laptop)
[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2017/09/Long-wait-open-for-webpage-in-broser-using-laptop.jpg?fit=1996%2C1162&ssl=1
[3]: https://landingi.com/blog/how-to-create-landing-page
[4]: https://landingi.com/free-landing-page


@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (way-ww)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )


@ -0,0 +1,144 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (SQL queries don't start with SELECT)
[#]: via: (https://jvns.ca/blog/2019/10/03/sql-queries-don-t-start-with-select/)
[#]: author: (Julia Evans https://jvns.ca/)
SQL queries don't start with SELECT
======
Okay, obviously many SQL queries do start with `SELECT` (and actually this post is only about `SELECT` queries, not `INSERT`s or anything).
But! Yesterday I was working on an [explanation of window functions][1], and I found myself googling “can you filter based on the result of a window function”. As in can you filter the result of a window function in a WHERE or HAVING or something?
Eventually I concluded “window functions must run after WHERE and GROUP BY happen, so you can't do it”. But this led me to a bigger question: **what order do SQL queries actually run in?**
This was something that I felt like I knew intuitively (“I've written at least 10,000 SQL queries, some of them were really complicated! I must know this!”) but I struggled to actually articulate what the order was.
### SQL queries happen in this order
I looked up the order, and here it is! (SELECT isn't the first thing, it's like the 5th thing!) ([here it is in a tweet][2]).
(I really want to find a more accurate way of phrasing this than “SQL queries happen/run in this order” but I haven't figured it out yet.)
<https://jvns.ca/images/sql-queries.jpeg>
In a non-image format, the order is:
* `FROM/JOIN` and all the `ON` conditions
* `WHERE`
* `GROUP BY`
* `HAVING`
* `SELECT` (including window functions)
* `ORDER BY`
* `LIMIT`
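To see these semantics in action, here is a small sketch using Python's built-in `sqlite3` module (the table, data, and column names are made up for illustration): WHERE prunes rows before GROUP BY ever sees them, HAVING then filters the groups, and ORDER BY / LIMIT come last.

```python
import sqlite3

# Illustrative sketch (made-up table and data): WHERE filters rows before
# GROUP BY, HAVING filters the groups, and ORDER BY / LIMIT run at the end.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cats (name TEXT, owner TEXT)")
conn.executemany("INSERT INTO cats VALUES (?, ?)",
                 [("mr darcy", "ann"), ("mr darcy", "ann"), ("tom", "ann"),
                  ("mr darcy", "bob"), ("felix", "cid")])

rows = conn.execute(
    "SELECT owner, COUNT(*) AS n FROM cats "
    "WHERE name = 'mr darcy' "   # runs first: other cats never reach GROUP BY
    "GROUP BY owner "
    "HAVING COUNT(*) >= 2 "      # filters the groups, not the raw rows
    "ORDER BY owner "            # runs near the end
    "LIMIT 2"                    # applied last of all
).fetchall()
print(rows)  # [('ann', 2)]
```

Only `ann` survives: `bob` owns one cat named mr darcy, so his group is dropped by HAVING, and `felix` never makes it past WHERE.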
### questions this diagram helps you answer
This diagram is about the _semantics_ of SQL queries: it lets you reason through what a given query will return, and it answers questions like:
* Can I do `WHERE` on something that came from a `GROUP BY`? (no! WHERE happens before GROUP BY!)
* Can I filter based on the results of a window function? (no! window functions happen in `SELECT`, which happens after both `WHERE` and `GROUP BY`)
* Can I `ORDER BY` based on something I did in GROUP BY? (yes! `ORDER BY` is basically the last thing, you can `ORDER BY` based on anything!)
* When does `LIMIT` happen? (at the very end!)
**Database engines don't actually literally run queries in this order** because they implement a bunch of optimizations to make queries run faster; we'll get to that a little later in the post.
So:
* you can use this diagram when you just want to understand which queries are valid and how to reason about what the results of a given query will be
* you _shouldn't_ use this diagram to reason about query performance or anything involving indexes; that's a much more complicated thing with a lot more variables
### confounding factor: column aliases
Someone on Twitter pointed out that many SQL implementations let you use the syntax:
```
SELECT CONCAT(first_name, ' ', last_name) AS full_name, count(*)
FROM table
GROUP BY full_name
```
This query makes it _look_ like GROUP BY happens after SELECT even though GROUP BY is first, because the GROUP BY references an alias from the SELECT. But it's not actually necessary for the GROUP BY to run after the SELECT for this to work: the database engine can just rewrite the query as
```
SELECT CONCAT(first_name, ' ', last_name) AS full_name, count(*)
FROM table
GROUP BY CONCAT(first_name, ' ', last_name)
```
and run the GROUP BY first.
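SQLite is one of the engines that accepts this alias syntax, so the rewrite is easy to check from Python's `sqlite3` module (illustrative table and names, with `||` in place of `CONCAT`, which SQLite doesn't have): the alias form and the fully spelled-out form return the same groups.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (first_name TEXT, last_name TEXT)")
conn.executemany("INSERT INTO people VALUES (?, ?)",
                 [("Jane", "Doe"), ("Jane", "Doe"), ("John", "Smith")])

# GROUP BY references the alias defined in SELECT ...
by_alias = conn.execute(
    "SELECT first_name || ' ' || last_name AS full_name, COUNT(*) "
    "FROM people GROUP BY full_name ORDER BY full_name").fetchall()

# ... which the engine can treat as GROUP BY on the full expression.
by_expr = conn.execute(
    "SELECT first_name || ' ' || last_name AS full_name, COUNT(*) "
    "FROM people GROUP BY first_name || ' ' || last_name "
    "ORDER BY full_name").fetchall()

print(by_alias)  # [('Jane Doe', 2), ('John Smith', 1)]
assert by_alias == by_expr
```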
Your database engine also definitely does a bunch of checks to make sure that what you put in SELECT and GROUP BY makes sense together before it even starts to run the query, so it has to look at the query as a whole anyway before it starts to come up with an execution plan.
### queries aren't actually run in this order (optimizations!)
Database engines in practice don't actually run queries by joining, then filtering, then grouping, because they implement a bunch of optimizations: they reorder things to make the query run faster, as long as reordering won't change the results of the query.
One simple example of why engines need to run queries in a different order to make them fast is that in this query:
```
SELECT * FROM
owners LEFT JOIN cats ON owners.id = cats.owner
WHERE cats.name = 'mr darcy'
```
it would be silly to do the whole left join and match up all the rows in the 2 tables if you just need to look up the 3 cats named mr darcy: it's way faster to do some filtering first for cats named mr darcy. And in this case filtering first doesn't change the results of the query!
There are lots of other optimizations that database engines implement in practice that might make them run queries in a different order, but there's no room for that here, and honestly it's not something I'm an expert on.
### LINQ starts queries with `FROM`
LINQ (a querying syntax in C# and VB.NET) uses the order `FROM ... WHERE ... SELECT`. Here's an example of a LINQ query:
```
var teenAgerStudent = from s in studentList
where s.Age > 12 && s.Age < 20
select s;
```
pandas (my [favourite data wrangling tool][3]) also basically works like this, though you don't need to use this exact order. I'll often write pandas code like this:
```
df = thing1.join(thing2) # like a JOIN
df = df[df.created_at > 1000] # like a WHERE
df = df.groupby('something', num_yes = ('yes', 'sum')) # like a GROUP BY
df = df[df.num_yes > 2] # like a HAVING, filtering on the result of a GROUP BY
df = df[['num_yes', 'something1', 'something']] # pick the columns I want to display, like a SELECT
df.sort_values('something', ascending=True)[:30] # ORDER BY and LIMIT
df[:30]
```
This isn't because pandas is imposing any specific rule on how you have to write your code, though. It's just that it often makes sense to write code in the order JOIN / WHERE / GROUP BY / HAVING. (I'll often put a `WHERE` first to improve performance though, and I think most database engines will also do a WHERE first in practice.)
`dplyr` in R also lets you use a different syntax for querying SQL databases like Postgres, MySQL and SQLite, which is also in a more logical order.
### I was really surprised that I didn't know this
I'm writing a blog post about this because when I found out the order I was SO SURPRISED that I'd never seen it written down that way before. It explains basically everything that I knew intuitively about why some queries are allowed and others aren't. So I wanted to write it down in the hope that it will help other people also understand how to write SQL queries.
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2019/10/03/sql-queries-don-t-start-with-select/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://twitter.com/b0rk/status/1179419244808851462?s=20
[2]: https://twitter.com/b0rk/status/1179449535938076673
[3]: https://github.com/jvns/pandas-cookbook


@ -0,0 +1,428 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (9 essential GNU binutils tools)
[#]: via: (https://opensource.com/article/19/10/gnu-binutils)
[#]: author: (Gaurav Kamathe https://opensource.com/users/gkamathe)
9 essential GNU binutils tools
======
Binary analysis is the most underestimated skill in the computer industry.
![Tools for the sysadmin][1]
Imagine not having access to a software's source code but still being able to understand how the software is implemented, find vulnerabilities in it, and—better yet—fix the bugs. All of this in binary form. It sounds like having superpowers, doesn't it?
You, too, can possess such superpowers, and the GNU binary utilities (binutils) are a good starting point. The [GNU binutils][2] are a collection of binary tools that are installed by default on all Linux distributions.
Binary analysis is the most underestimated skill in the computer industry. It is mostly utilized by malware analysts, reverse engineers, and people working on low-level software.
This article explores some of the tools available through binutils. I am using RHEL, but these examples should run on any Linux distribution.
```
[~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.6 (Maipo)
[~]#
[~]# uname -r
3.10.0-957.el7.x86_64
[~]#
```
Note that some packaging commands (like **rpm**) might not be available on Debian-based distributions, so use the equivalent **dpkg** command where applicable.
### Software development 101
In the open source world, many of us are focused on software in source form; when the software's source code is readily available, it is easy to simply get a copy of the source code, open your favorite editor, get a cup of coffee, and start exploring.
But the source code is not what is executed on the CPU; it is the binary or machine language instructions that are executed on the CPU. The binary or executable file is what you get when you compile the source code. People skilled in debugging often get their edge by understanding this difference.
### Compilation 101
Before digging into the binutils package itself, it's good to understand the basics of compilation.
Compilation is the process of converting a program from its source or text form in a certain programming language (C/C++) into machine code.
Machine code is the sequence of 1's and 0's that are understood by a CPU (or hardware in general) and therefore can be executed or run by the CPU. This machine code is saved to a file in a specific format that is often referred to as an executable file or a binary file. On Linux (and BSD, when using [Linux Binary Compatibility][3]), this is called [ELF][4] (Executable and Linkable Format).
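As a rough illustration of what that format looks like, every ELF file begins with a 16-byte identification block (`e_ident`). The sketch below parses a hand-written copy of those bytes in Python rather than reading a real binary; the byte string is made up for illustration, though the field meanings follow the ELF specification.

```python
# First 16 bytes (e_ident) of a typical 64-bit little-endian ELF file.
# This byte string is hand-crafted for illustration, not read from disk.
e_ident = b"\x7fELF" + bytes([2, 1, 1, 0]) + b"\x00" * 8

assert e_ident[:4] == b"\x7fELF", "not an ELF file"
ei_class = {1: "ELF32", 2: "ELF64"}[e_ident[4]]                # 32- vs 64-bit
ei_data = {1: "little endian", 2: "big endian"}[e_ident[5]]    # byte order
print(ei_class, ei_data)  # ELF64 little endian
```

The `file` and `readelf` commands used later in this article report exactly these fields, read from the same header.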
The compilation process goes through a series of complicated steps before it presents an executable or binary file for a given source file. Consider this source program (C code) as an example. Open your favorite editor and type out this program:
```
#include <stdio.h>

int main(void)
{
	printf("Hello World\n");
	return 0;
}
```
#### Step 1: Preprocessing with cpp
The [C preprocessor (**cpp**)][5] is used to expand all macros and include the header files. In this example, the header file **stdio.h** will be included in the source code. **stdio.h** is a header file that contains information on a **printf** function that is used within the program. **cpp** runs on the source code, and the resulting instructions are saved in a file called **hello.i**. Open the file with a text editor to see its contents. The source code for printing **hello world** is at the bottom of the file.
```
[testdir]# cat hello.c
#include <stdio.h>

int main(void)
{
	printf("Hello World\n");
	return 0;
}
[testdir]#
[testdir]# cpp hello.c > hello.i
[testdir]#
[testdir]# ls -lrt
total 24
-rw-r--r--. 1 root root    76 Sep 13 03:20 hello.c
-rw-r--r--. 1 root root 16877 Sep 13 03:22 hello.i
[testdir]#
```
#### Step 2: Compilation with gcc
This is the stage where preprocessed source code from Step 1 is converted to assembly language instructions without creating an object file. It uses the [GNU Compiler Collection (**gcc**)][6]. After running the **gcc** command with the -**S** option on the **hello.i** file, it creates a new file called **hello.s**. This file contains the assembly language instructions for the C program.
You can view the contents using any editor or the **cat** command.
```
[testdir]#
[testdir]# gcc -Wall -S hello.i
[testdir]#
[testdir]# ls -l
total 28
-rw-r--r--. 1 root root    76 Sep 13 03:20 hello.c
-rw-r--r--. 1 root root 16877 Sep 13 03:22 hello.i
-rw-r--r--. 1 root root   448 Sep 13 03:25 hello.s
[testdir]#
[testdir]# cat hello.s
	.file	"hello.c"
	.section	.rodata
.LC0:
	.string	"Hello World"
	.text
	.globl	main
	.type	main, @function
main:
.LFB0:
	.cfi_startproc
	pushq	%rbp
	.cfi_def_cfa_offset 16
	.cfi_offset 6, -16
	movq	%rsp, %rbp
	.cfi_def_cfa_register 6
	movl	$.LC0, %edi
	call	puts
	movl	$0, %eax
	popq	%rbp
	.cfi_def_cfa 7, 8
	ret
	.cfi_endproc
.LFE0:
	.size	main, .-main
	.ident	"GCC: (GNU) 4.8.5 20150623 (Red Hat 4.8.5-36)"
	.section	.note.GNU-stack,"",@progbits
[testdir]#
```
#### Step 3: Assembling with as
The purpose of an assembler is to convert assembly language instructions into machine language code and generate an object file that has a **.o** extension. Use the GNU assembler **as** that is available by default on all Linux platforms.
```
[testdir]# as hello.s -o hello.o
[testdir]#
[testdir]# ls -l
total 32
-rw-r--r--. 1 root root    76 Sep 13 03:20 hello.c
-rw-r--r--. 1 root root 16877 Sep 13 03:22 hello.i
-rw-r--r--. 1 root root  1496 Sep 13 03:39 hello.o
-rw-r--r--. 1 root root   448 Sep 13 03:25 hello.s
[testdir]#
```
You now have your first file in the ELF format; however, you cannot execute it yet. Later, you will see the difference between an **object file** and an **executable file**.
```
[testdir]# file hello.o
hello.o: ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV), not stripped
```
#### Step 4: Linking with ld
This is the final stage of compilation, when the object files are linked to create an executable. An executable usually requires external functions that often come from system libraries (**libc**).
You can directly invoke the linker with the **ld** command; however, this command is somewhat complicated. Instead, you can use the **gcc** compiler with the **-v** (verbose) flag to understand how linking happens. (Using the **ld** command for linking is an exercise left for you to explore.)
```
[testdir]# gcc -v hello.o
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/4.8.5/lto-wrapper
Target: x86_64-redhat-linux
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man [...] --build=x86_64-redhat-linux
Thread model: posix
gcc version 4.8.5 20150623 (Red Hat 4.8.5-36) (GCC)
COMPILER_PATH=/usr/libexec/gcc/x86_64-redhat-linux/4.8.5/:/usr/libexec/gcc/x86_64-redhat-linux/4.8.5/:[...]:/usr/lib/gcc/x86_64-redhat-linux/
LIBRARY_PATH=/usr/lib/gcc/x86_64-redhat-linux/4.8.5/:/usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../../lib64/:/lib/../lib64/:/usr/lib/../lib64/:/usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../:/lib/:/usr/lib/
COLLECT_GCC_OPTIONS='-v' '-mtune=generic' '-march=x86-64'
 /usr/libexec/gcc/x86_64-redhat-linux/4.8.5/collect2 --build-id --no-add-needed --eh-frame-hdr --hash-style=gnu [...]/../../../../lib64/crtn.o
[testdir]#
```
After running this command, you should see an executable file named **a.out**:
```
[testdir]# ls -l
total 44
-rwxr-xr-x. 1 root root  8440 Sep 13 03:45 a.out
-rw-r--r--. 1 root root    76 Sep 13 03:20 hello.c
-rw-r--r--. 1 root root 16877 Sep 13 03:22 hello.i
-rw-r--r--. 1 root root  1496 Sep 13 03:39 hello.o
-rw-r--r--. 1 root root   448 Sep 13 03:25 hello.s
```
Running the **file** command on **a.out** shows that it is indeed an ELF executable:
```
[testdir]# file a.out
a.out: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=48e4c11901d54d4bf1b6e3826baf18215e4255e5, not stripped
```
Run your executable file to see if it does as the source code instructs:
```
[testdir]# ./a.out
Hello World
```
It does! So much happens behind the scenes just to print **Hello World** on the screen. Imagine what happens in more complicated programs.
### Explore the binutils tools
This exercise provided a good background for utilizing the tools that are in the binutils package. My system has binutils version 2.27-34; you may have a different version depending on your Linux distribution.
```
[~]# rpm -qa | grep binutils
binutils-2.27-34.base.el7.x86_64
```
The following tools are available in the binutils packages:
```
[~]# rpm -ql binutils-2.27-34.base.el7.x86_64 | grep bin/
/usr/bin/addr2line
/usr/bin/ar
/usr/bin/as
/usr/bin/c++filt
/usr/bin/dwp
/usr/bin/elfedit
/usr/bin/gprof
/usr/bin/ld
/usr/bin/ld.bfd
/usr/bin/ld.gold
/usr/bin/nm
/usr/bin/objcopy
/usr/bin/objdump
/usr/bin/ranlib
/usr/bin/readelf
/usr/bin/size
/usr/bin/strings
/usr/bin/strip
```
The compilation exercise above already explored two of these tools: the **as** command was used as an assembler, and the **ld** command was used as a linker. Read on to learn about seven more of the tools in the GNU binutils package.
#### readelf: Displays information about ELF files
The exercise above mentioned the terms **object file** and **executable file**. Using the files from that exercise, enter **readelf** using the **-h** (header) option to dump the files' ELF header on your screen. Notice that the object file ending with the **.o** extension is shown as **Type: REL (Relocatable file)**:
```
[testdir]# readelf -h hello.o
ELF Header:
  Magic:   7f 45 4c 46 02 01 01 00 [...]
  [...]
  Type:                              REL (Relocatable file)
  [...]
```
If you try to execute this file, you will get an error saying it cannot be executed. This simply means that it doesn't yet have the information that is required for it to be executed on the CPU.
Remember, you need to add the **x** or **executable bit** on the object file first using the **chmod** command or else you will get a **Permission denied** error.
```
[testdir]# ./hello.o
bash: ./hello.o: Permission denied
[testdir]# chmod +x ./hello.o
[testdir]#
[testdir]# ./hello.o
bash: ./hello.o: cannot execute binary file
```
If you try the same command on the **a.out** file, you see that its type is an **EXEC (Executable file)**.
```
[testdir]# readelf -h a.out
ELF Header:
  Magic:   7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00
  Class:                             ELF64
  [...]
  Type:                              EXEC (Executable file)
```
As seen before, this file can directly be executed by the CPU:
```
[testdir]# ./a.out
Hello World
```
The **readelf** command gives a wealth of information about a binary. Here, it tells you that it is in ELF 64-bit format, which means it can be executed only on a 64-bit CPU and won't work on a 32-bit CPU. It also tells you that it is meant to be executed on x86-64 (Intel/AMD) architecture. The entry point into the binary is at address 0x400430, which is the address of the startup code that eventually calls the **main** function of the C source program.
Try the **readelf** command on the other system binaries you know, like **ls**. Note that your output (especially **Type:**) might differ on RHEL 8 or Fedora 30 systems and above due to position independent executable ([PIE][7]) changes made for security reasons.
```
[testdir]# readelf -h /bin/ls
ELF Header:
  Magic:   7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00
  Class:                             ELF64
  Data:                              2's complement, little endian
  Version:                           1 (current)
  OS/ABI:                            UNIX - System V
  ABI Version:                       0
  Type:                              EXEC (Executable file)
```
Learn which **system libraries** the **ls** command depends on by using the **ldd** command, as follows:
```
[testdir]# ldd /bin/ls
	linux-vdso.so.1 =>  (0x00007ffd7d746000)
	libselinux.so.1 => /lib64/libselinux.so.1 (0x00007f060daca000)
	libcap.so.2 => /lib64/libcap.so.2 (0x00007f060d8c5000)
	libacl.so.1 => /lib64/libacl.so.1 (0x00007f060d6bc000)
	libc.so.6 => /lib64/libc.so.6 (0x00007f060d2ef000)
	libpcre.so.1 => /lib64/libpcre.so.1 (0x00007f060d08d000)
	libdl.so.2 => /lib64/libdl.so.2 (0x00007f060ce89000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f060dcf1000)
	libattr.so.1 => /lib64/libattr.so.1 (0x00007f060cc84000)
	libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f060ca68000)
```
Run **readelf** on the **libc** library file to see what kind of file it is. As it points out, it is a **DYN (Shared object file)**, which means it can't be directly executed on its own; it must be used by an executable file that internally uses any functions made available by the library.
```
[testdir]# readelf -h /lib64/libc.so.6
ELF Header:
  Magic:   7f 45 4c 46 02 01 01 03 00 00 00 00 00 00 00 00
  Class:                             ELF64
  Data:                              2's complement, little endian
  Version:                           1 (current)
  OS/ABI:                            UNIX - GNU
  ABI Version:                       0
  Type:                              DYN (Shared object file)
```
#### size: Lists section sizes and the total size
The **size** command works only on object and executable files, so if you try running it on a simple ASCII file, it will throw an error saying **File format not recognized**.
```
[testdir]# echo "test" > file1
[testdir]# cat file1
test
[testdir]# file file1
file1: ASCII text
[testdir]# size file1
size: file1: File format not recognized
```
Now, run **size** on the **object file** and the **executable file** from the exercise above. Notice that the executable file (**a.out**) has considerably more information than the object file (**hello.o**), based on the output of size command:
```
[testdir]# size hello.o
   text    data     bss     dec     hex filename
     89       0       0      89      59 hello.o
[testdir]# size a.out
   text    data     bss     dec     hex filename
   1194     540       4    1738     6ca a.out
```
But what do the **text**, **data**, and **bss** sections mean?
The **text** section refers to the code section of the binary, which holds all the executable instructions. The **data** section is where all the initialized data is stored, and the **bss** section is where all the uninitialized data is stored.
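One way to convince yourself that **dec** is just the sum of those three sections is to parse the output of `size`. The snippet below works on a pasted sample of the output shown above (the whitespace-separated column layout is an assumption about the output format):

```python
# Sample `size` output for a.out, as shown earlier in the article.
sample = """\
   text    data     bss     dec     hex filename
   1194     540       4    1738     6ca a.out"""

header, values = (line.split() for line in sample.splitlines())
report = dict(zip(header, values))

total = sum(int(report[s]) for s in ("text", "data", "bss"))
assert total == int(report["dec"])      # dec = text + data + bss
assert int(report["hex"], 16) == total  # hex is the same number in base 16
print(total)  # 1738
```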
Compare **size** with some of the other available system binaries.
For the **ls** command:
```
[testdir]# size /bin/ls
   text    data     bss     dec     hex filename
 103119    4768    3360  111247   1b28f /bin/ls
```
You can see that **gcc** and **gdb** are far bigger programs than **ls** just by looking at the output of the **size** command:
```
[testdir]# size /bin/gcc
   text    data     bss     dec     hex filename
 755549    8464   81856  845869   ce82d /bin/gcc
[testdir]# size /bin/gdb
    text    data     bss     dec     hex filename
 6650433   90842  152280 6893555  692ff3 /bin/gdb
```
#### strings: Prints the strings of printable characters in files
It is often useful to add the **-d** flag to the **strings** command to show only the printable characters from the data section.
**hello.o** is an object file that contains instructions to print out the text **Hello World**. Hence, the only output from the **strings** command is **Hello World**.
```
[testdir]# strings -d hello.o
Hello World
```
Running **strings** on **a.out** (an executable), on the other hand, shows additional information that was included in the binary during the linking phase:
```
[testdir]# strings -d a.out
/lib64/ld-linux-x86-64.so.2
!^BU
libc.so.6
puts
__libc_start_main
__gmon_start__
GLIBC_2.2.5
UH-0
UH-0
=(
[]A\A]A^A_
Hello World
;*3$"
```
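The heart of what `strings` does is easy to sketch: scan for runs of printable bytes of some minimum length. Here is a minimal, illustrative Python version run on a made-up byte blob rather than a real binary:

```python
import re

def strings(data: bytes, min_len: int = 4):
    # Minimal sketch of `strings`: find runs of printable ASCII bytes
    # (space through tilde) of at least min_len characters.
    printable = re.compile(rb"[ -~]{%d,}" % min_len)
    return [m.group().decode("ascii") for m in printable.finditer(data)]

# Hand-crafted blob standing in for a binary file.
blob = b"\x00\x01Hello World\x00\x7f\x02puts\x00libc.so.6\x00"
print(strings(blob))  # ['Hello World', 'puts', 'libc.so.6']
```

The real tool additionally understands object file sections (which is what the `-d` flag selects), but the core pattern match is the same idea.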
Recall that compilation is the process of converting source code instructions into machine code. Machine code consists of only 1's and 0's and is difficult for humans to read. Therefore, it helps to present machine code as assembly language instructions. What do assembly languages look like? Remember that assembly language is architecture-specific; since I am using Intel or x86-64 architecture, the instructions will be different if you're using ARM architecture to compile the same programs.
#### objdump: Displays information from object files
Another binutils tool that can dump the machine language instructions from the binary is called **objdump**.
Use the **-d** option, which disassembles all assembly instructions from the binary.
```
[testdir]# objdump -d hello.o

hello.o:     file format elf64-x86-64

Disassembly of section .text:

0000000000000000 <main>:
   0:	55                   	push   %rbp
   1:	48 89 e5             	mov    %rsp,%rbp
   4:	bf 00 00 00 00       	mov    $0x0,%edi
   9:	e8 00 00 00 00       	callq  e <main+0xe>
   e:	b8 00 00 00 00       	mov    $0x0,%eax
  13:	5d                   	pop    %rbp
  14:	c3                   	retq
```
This output seems intimidating at first, but take a moment to understand it before moving ahead. Recall that the **.text** section has all the machine code instructions. The assembly instructions can be seen in the fourth column (i.e., **push**, **mov**, **callq**, **pop**, **retq**). These instructions act on registers, which are memory locations built into the CPU. The registers in this example are **rbp**, **rsp**, **edi**, **eax**, etc., and each register has a special meaning.
Now run **objdump** on the executable file (**a.out**) and see what you get. The output of **objdump** on the executable can be large, so I've narrowed it down to the **main** function using the **grep** command:
```
[testdir]# objdump -d a.out | grep -A 9 main\>
000000000040051d <main>:
  40051d:       55                      push   %rbp
  40051e:       48 89 e5                mov    %rsp,%rbp
  400521:       bf d0 05 40 00          mov    $0x4005d0,%edi
  400526:       e8 d5 fe ff ff          callq  400400 <puts@plt>
  40052b:       b8 00 00 00 00          mov    $0x0,%eax
  400530:       5d                      pop    %rbp
  400531:       c3                      retq
```
Notice that the instructions are similar to the object file **hello.o**, but they have some additional information in them:
  * The object file **hello.o** has the following instruction: `callq e`
  * The executable **a.out** consists of the following instruction with an actual address and a function: `callq 400400 <puts@plt>`
The above assembly instruction is calling a **puts** function. Remember that you used a **printf** function in the source code. The compiler inserted a call to the **puts** library function to output **Hello World** to the screen.
Look at the instruction for a line above **puts**:
  * The object file **hello.o** has the instruction **mov**: `mov $0x0,%edi`
  * The instruction **mov** for the executable **a.out** has an actual address (**$0x4005d0**) instead of **$0x0**: `mov $0x4005d0,%edi`
This instruction moves whatever is present at address **$0x4005d0** within the binary to the register named **edi**.
What else could be in the contents of that memory location? Yes, you guessed it right: it is nothing but the text **Hello World**. How can you be sure?
The **readelf** command enables you to dump any section of the binary file (**a.out**) onto the screen. The following asks it to dump the **.rodata**, which is read-only data, onto the screen:
```
[testdir]# readelf -x .rodata a.out

Hex dump of section '.rodata':
  0x004005c0 01000200 00000000 00000000 00000000 ................
  0x004005d0 48656c6c 6f20576f 726c6400          Hello World.
```
You can see the text **Hello World** on the right-hand side and its address in binary on the left-hand side. Does it match the address you saw in the **mov** instruction above? Yes, it does.
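You can also verify the hex dump by hand. Decoding the bytes shown at **0x004005d0** (pasted below as a literal from the readelf output above) yields exactly the string the **mov** instruction points at; here is a quick Python check:

```python
# Hex words copied from the readelf dump at address 0x004005d0
hex_words = "48656c6c 6f20576f 726c6400"
data = bytes.fromhex(hex_words.replace(" ", ""))
print(data)  # b'Hello World\x00' -- note the trailing NUL byte that C strings carry
text = data.rstrip(b"\x00").decode("ascii")
print(text)  # Hello World
```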
#### strip: Discards symbols from object files
This command is often used to reduce the size of a binary before shipping it to customers.
Keep in mind that stripping hinders debugging, since vital information is removed from the binary; nonetheless, the binary executes flawlessly.
Run it on your **a.out** executable and notice what happens. First, ensure the binary is **not stripped** by running the following command:
```
[testdir]# file a.out
a.out: ELF 64-bit LSB executable, x86-64, [......] not stripped
```
Also, keep track of the number of bytes originally in the binary before running the **strip** command:
```
[testdir]# du -b a.out
8440    a.out
```
Now run the **strip** command on your executable and ensure it worked using the **file** command:
```
[testdir]# strip a.out
[testdir]# file a.out
a.out: ELF 64-bit LSB executable, x86-64, [......] stripped
```
After stripping the binary, its size went down from the previous **8440** bytes to **6296** bytes for this small program. With that much savings on a tiny program, no wonder large programs are often stripped.
```
[testdir]# du -b a.out
6296    a.out
```
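As a quick sanity check on the numbers reported by **du**, the savings amount to roughly a quarter of the file (plain arithmetic, using the sizes from above):

```python
before, after = 8440, 6296   # byte counts reported by du -b before/after strip
saved = before - after
pct = 100 * saved / before
print(f"strip removed {saved} bytes ({pct:.1f}% of the file)")  # 2144 bytes (25.4%)
```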
#### addr2line: Converts addresses into file names and line numbers
The **addr2line** tool simply looks up addresses in the binary file and matches them up with lines in the C source code program. Pretty cool, isn't it?
Write another test program for this; only this time ensure you compile it with the **-g** flag for **gcc**, which adds debugging information to the binary and also helps by including the line numbers (provided here by **cat -n**):
```
[testdir]# cat -n atest.c
     1  #include <stdio.h>
     2
     3  int globalvar = 100;
     4
     5  int function1(void)
     6  {
     7          printf("Within function1\n");
     8          return 0;
     9  }
    10
    11  int function2(void)
    12  {
    13          printf("Within function2\n");
    14          return 0;
    15  }
    16
    17  int main(void)
    18  {
    19          function1();
    20          function2();
    21          printf("Within main\n");
    22          return 0;
    23  }
```
Compile with the **-g** flag and execute it. No surprises here:
```
[testdir]# gcc -g atest.c
[testdir]# ./a.out
Within function1
Within function2
Within main
```
Now use **objdump** to identify memory addresses where your functions begin. You can use the **grep** command to filter out specific lines that you want. The addresses for your functions are highlighted below:
```
[testdir]# objdump -d a.out | grep -A 2 -E 'main>:|function1>:|function2>:'
000000000040051d <function1>:
  40051d:       55                      push   %rbp
  40051e:       48 89 e5                mov    %rsp,%rbp
--
0000000000400532 <function2>:
  400532:       55                      push   %rbp
  400533:       48 89 e5                mov    %rsp,%rbp
--
0000000000400547 <main>:
  400547:       55                      push   %rbp
  400548:       48 89 e5                mov    %rsp,%rbp
```
Now use the **addr2line** tool to map these addresses from the binary to match those of the C source code:
```
[testdir]# addr2line -e a.out 40051d
/tmp/testdir/atest.c:6
[testdir]# addr2line -e a.out 400532
/tmp/testdir/atest.c:12
[testdir]# addr2line -e a.out 400547
/tmp/testdir/atest.c:18
```
It says that **40051d** corresponds to line number 6 in the source file **atest.c**, which is the line with the opening brace (**{**) of **function1**. Match the output for **function2** and **main** yourself.
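Conceptually, this lookup is a search over a sorted symbol table: given the function start addresses from the objdump output above, find the entry whose range contains the queried address, then map it to a source line. Here is a small Python sketch of that idea (an illustration only; the real addr2line reads DWARF debug information):

```python
import bisect

# Function start addresses and source lines taken from the outputs above
symbols = [(0x40051d, "function1", 6),
           (0x400532, "function2", 12),
           (0x400547, "main", 18)]
starts = [addr for addr, _, _ in symbols]

def lookup(addr):
    """Return (function, line) for the function whose range contains addr."""
    i = bisect.bisect_right(starts, addr) - 1
    return (symbols[i][1], symbols[i][2]) if i >= 0 else None

print(lookup(0x40051d))  # ('function1', 6)
print(lookup(0x400526))  # an address inside function1's body
```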
#### nm: Lists symbols from object files
Use the C program above to test the **nm** tool. Compile it quickly using **gcc** and execute it.
```
[testdir]# gcc atest.c
[testdir]# ./a.out
Within function1
Within function2
Within main
```
Now run **nm** and **grep** for information on your functions and variables:
```
[testdir]# nm a.out | grep -Ei 'function|main|globalvar'
000000000040051d T function1
0000000000400532 T function2
000000000060102c D globalvar
                 U __libc_start_main@@GLIBC_2.2.5
0000000000400547 T main
```
You can see that the functions are marked **T**, which stands for symbols in the **text** section, whereas variables are marked as **D**, which stands for symbols in the initialized **data** section.
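Because the **nm** output format is so regular (address, type letter, name), it is easy to post-process. The following hypothetical Python sketch parses the output shown above and pulls out just the functions:

```python
nm_output = """\
000000000040051d T function1
0000000000400532 T function2
000000000060102c D globalvar
                 U __libc_start_main@@GLIBC_2.2.5
0000000000400547 T main"""

def parse_nm(text):
    """Split nm lines into (address, type, name); undefined (U) symbols lack an address."""
    syms = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 3:
            addr, typ, name = parts
        else:
            addr, (typ, name) = None, parts
        syms.append((addr, typ, name))
    return syms

functions = [name for _, typ, name in parse_nm(nm_output) if typ == "T"]
print(functions)  # ['function1', 'function2', 'main']
```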
Imagine how useful it is to run this command on binaries for which you do not have the source code. It allows you to peek inside and understand which functions and variables are used. Unless, of course, the binary has been stripped, in which case it contains no symbols, and the **nm** command isn't very helpful, as you can see here:
```
[testdir]# strip a.out
[testdir]# nm a.out | grep -Ei 'function|main|globalvar'
nm: a.out: no symbols
```
### Conclusion
The GNU binutils tools offer many options for anyone interested in analyzing binaries, and this has only been a glimpse of what they can do for you. Read the man pages for each tool to understand more about them and how to use them.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/gnu-binutils
作者:[Gaurav Kamathe][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/gkamathe
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_sysadmin_cloud.png?itok=sUciG0Cn (Tools for the sysadmin)
[2]: https://en.wikipedia.org/wiki/GNU_Binutils
[3]: https://www.freebsd.org/doc/handbook/linuxemu.html
[4]: https://en.wikipedia.org/wiki/Executable_and_Linkable_Format
[5]: https://en.wikipedia.org/wiki/C_preprocessor
[6]: https://gcc.gnu.org/onlinedocs/gcc/
[7]: https://en.wikipedia.org/wiki/Position-independent_code#Position-independent_executables


@ -0,0 +1,242 @@
[#]: collector: (lujun9972)
[#]: translator: (HankChow)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (All That You Can Do with Google Analytics, and More)
[#]: via: (https://opensourceforu.com/2019/10/all-that-you-can-do-with-google-analytics-and-more/)
[#]: author: (Ashwin Sathian https://opensourceforu.com/author/ashwin-sathian/)
All That You Can Do with Google Analytics, and More
======
[![][1]][2]
*We have all heard about or used Google Analytics (GA), the most popular tool to track user activity such as, but not limited to, page visits. Its utility and popularity mean that everybody wishes to use it. This article focuses on how to use it correctly in a world where single-page Angular and React applications are becoming more popular by the day.*
A pertinent question is: how does one track page visits in applications that have just one page?
As always, there are ways around this, and we will look at one such option in this article. While we will do the implementation in an Angular application, the usage and concepts aren't very different if the app is in React. So, let's get started!
**Getting the application ready**
Getting a tracking ID: Before we write actual code, we need to get a tracking ID, the unique identifier that tells Google Analytics that data like a click or a page view is coming from a particular application.
To get this, we do the following:
1. Go to _<https://analytics.google.com>_.
2. Sign up by entering the required details. Make sure the registration is for the Web; ours is a Web application, after all.
3. Agree to the _Terms and Conditions_, and generate your tracking ID.
4. Copy the ID, which will perhaps look something like UA-ID-Y.
Now that the ID is ready, let's write some code.
**Adding _analytics.js_ script**
While the team at Google has done all the hard work to get the Google Analytics tools ready for us to use, this is where we do our part: make it available to the application. This is simple; all that's to be done is to add the following script to your application's _index.html_:
```
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','https://www.google-analytics.com/analytics.js','ga');
</script>
```
With that out of the way, let's see how we can initialise Google Analytics in the application.
**Creating the tracker**
Let's now set up the tracker for the application. For this, open _app.component.ts_ and perform the following steps:
1. Declare a global variable named ga, of type _any_ (remember, this is Typescript and you need to mention types).
2. Insert the following line of code into _ngOnInit()_ of your component.
```
ga('create', 'YOUR_TRACKING_ID', 'auto');
```
Congratulations! You have now successfully initiated a Google Analytics tracker in your application. Since the tracker initiation is made inside the _OnInit_ function, the tracker will get activated every time the application starts up.
**Recording page visits in the single page application**
Now comes the tricky bit: tracking the parts of your application visited by users. From a functional aspect, we know that routes are stand-ins for traditional pages in modern single-page Web applications. This means we need to record route visits. This is not easy, but we can still achieve it.
In the _ngOnInit()_ function inside _app.component.ts_, add the following code snippet:
```
import { Router, NavigationEnd } from '@angular/router';

constructor(public router: Router) {}
...
this.router.events.subscribe(
  event => {
    if (event instanceof NavigationEnd) {
      ga('set', 'page', event.urlAfterRedirects);
      ga('send', {
        hitType: 'pageview',
        hitCallback: () => { this.pageViewSent = true; }
      });
    }
  }
);
```
Believe it or not, those few lines of code have taken care of the pageview recording issue in the Angular application.
Despite only a few lines of code, a fair bit is happening here:
1. Import Router and NavigationEnd from Angular Router.
2. Add Router to the component through its constructor.
3. Next, subscribe to router events; i.e., all events emitted by the Angular Router.
4. Whenever there is an instance of a NavigationEnd event (emitted whenever the application navigates to a route), we set that destination route as the page, and this sends a pageview.
Now, every time a routing occurs, a pageview is sent to Google Analytics. You can view this in the online Google Analytics dashboard.
Like pageviews, we can record a lot of other activities like screenview, timing, etc, using the same syntax. We can set the program to react in any way we want to for all such send activities, through the _hitCallback()_, as shown in the code snippet. Here we are setting a variable value to true, but any piece of code can be executed in _hitCallback_.
**Tracking user interactions**
After pageviews, the most commonly tracked activities on Google Analytics are user interactions such as, but not limited to, button clicks. "How many times was the _Submit_ button clicked?", "How often is the product brochure viewed?" These are questions that are often asked in product review meetings for Web applications. In this section, we'll look at this implementation using Google Analytics in the application.
**Button clicks:** Consider a case in which you wish to track the number of times a certain button/link in the application is clicked, a metric most associated with sign-ups, call-to-action buttons, etc. We'll look at an example of this.
For this purpose, assume that you have a "Show Interest" button in your application for an upcoming event. The organisers wish to keep track of how many people are interested in the event by tracking those clicking the button. The following code snippet facilitates this:
```
params = {
  eventCategory: 'Button',
  eventAction: 'Click',
  eventLabel: 'Show interest',
  eventValue: 1
};

showInterest() {
  ga('send', 'event', this.params);
}
```
Let's look at what is being done here. Google Analytics, as already discussed, records activities when we send the data to it. It is the parameters that we pass to this send method that distinguish between various events, like tracking clicks on two separate buttons.
1\. First, we define a _params_ object that should have the following fields:
  1. _eventCategory_: An object with which interaction happens; in this case, a button.
  2. _eventAction_: The type of interaction; in our case, a click.
  3. _eventLabel_: An identifier for the interaction. In this case, we could call it "Show Interest".
  4. _eventValue_: The value you wish to associate with each instance of this event. Since this example is measuring the number of people showing interest, we can set this value to 1.
2\. After constructing this object, the next part is pretty simple and one of the most commonly used methods as far as Google Analytics tracking goes: sending the event, with the params object as a payload. We do this by using event binding on the button and attaching the _showInterest()_ function to it.
That's it! Google Analytics (GA) will now track data of people expressing interest by clicking the button.
**Tracking social engagements:** Google Analytics also lets you track people's interactions on social media through the application. One such case would be a Facebook-type Like button for our brand's page that we place in the application. Let's look at how we can do this tracking using GA.
```
fbLikeParams = {
  socialNetwork: 'Facebook',
  socialAction: 'Like',
  socialTarget: 'https://facebook.com/mypage'
};

fbLike() {
  ga('send', 'social', this.fbLikeParams);
}
```
If that code looks familiar, it's because it is very similar to the method by which we track button clicks. Let's look at the steps:
1\. Construct the payload for sending data. This payload should have the following fields:
  1. _socialNetwork_: The network the interaction is happening with, e.g., Facebook, Twitter, etc.
  2. _socialAction_: What sort of interaction is happening, e.g., Like, Tweet, Share, etc.
  3. _socialTarget_: The URL being targeted by the interaction. This could be the URL of the social media profile/page.
2\. The next step is, of course, to add a function to report this activity. Unlike a button click, we don't use the _send_ method here, but the _social_ method. Also, after this function is written, we bind it to the Like button we have in place.
There's more that can be done with GA as far as tracking user interactions goes, one of the top items being exception tracking. This allows us to track errors and exceptions occurring in the application using GA. We won't delve deeper into it in this article; however, the reader is encouraged to explore it.
**User identity**
**Privacy is a right, not a luxury:** While Google Analytics can record a lot of activities as well as user interactions, there is one comparatively less known aspect, which we will look at in this section. There's a lot of control we can exercise over tracking (and not tracking) user identity.
**Cookies:** GA uses cookies as a means to track a user's activity. But we can define what these cookies are named and a couple of other aspects about them, as shown in the code snippet below:
```
trackingID = 'UA-139883813-1';

cookieParams = {
  cookieName: 'myGACookie',
  cookieDomain: window.location.hostname,
  cookieExpires: 604800
};

ngOnInit() {
  ga('create', this.trackingID, this.cookieParams);
  ...
}
```
Here, we are setting the GA cookie's name, domain and expiration date, which allows us to distinguish cookies set by our GA tracker from those set by GA trackers from other websites/Web applications. Rather than a cryptic auto-generated identity, we'll be able to set custom identities for our application's GA tracker cookies.
**IP anonymisation:** There may be cases when we do not want to know where the traffic to our application is coming from. For instance, consider a button click activity tracker: we do not necessarily need to know the geographical source of that interaction, as long as the number of hits is tracked. In such situations, GA allows us to track users' activity without them having to reveal their IP address.
```
ipParams = {
anonymizeIp: true
};
ngOnInit() {
ga('set', this.ipParams);
...
}
```
Here, we are setting the parameters of the GA tracker so that IP anonymisation is set to _true_. Thus, our users' IP addresses will not be tracked by Google Analytics, which gives users a sense of privacy.
**Opt-out:** At times, users may not want their browsing data to be tracked. GA allows for this too, and hence has the option to enable users to completely opt out of GA tracking.
```
...
optOut() {
window['ga-disable-UA-139883813-1'] = true;
}
...
```
_optOut()_ is a custom function which disables GA tracking from the window. We can employ this function using event binding on a button/check box, which allows users to opt out of GA tracking.
We have looked at what makes integrating Google Analytics into single-page applications tricky and explored a way to work around it. We also saw how to track page views in single-page applications, and touched upon tracking users' interactions with the application, such as button clicks, social media engagements, etc.
Finally, we examined the options GA offers to ensure user privacy, especially when a user's identity isn't critical to our application's analytics, up to the point of letting users entirely opt out of Google Analytics tracking. Since there is much more that can be done, you're encouraged to keep exploring and playing around with the methods offered by GA.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/10/all-that-you-can-do-with-google-analytics-and-more/
作者:[Ashwin Sathian][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/ashwin-sathian/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Analytics-illustration.jpg?resize=696%2C396&ssl=1 (Analytics illustration)
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Analytics-illustration.jpg?fit=900%2C512&ssl=1


@ -0,0 +1,161 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Use GameHub to Manage All Your Linux Games in One Place)
[#]: via: (https://itsfoss.com/gamehub/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Use GameHub to Manage All Your Linux Games in One Place
======
How do you [play games on Linux][1]? Let me guess. Either you install games from the software center, or from Steam, or from GOG or Humble Bundle etc., right? But how do you plan to manage all your games from multiple launchers and clients? Well, that sounds like a hassle to me, which is why I was delighted when I came across [GameHub][2].
GameHub is a desktop application for Linux distributions that lets you manage "All your games in one place". That sounds interesting, doesn't it? Let me share more details about it.
![][3]
### GameHub features to manage Linux games from different sources in one place
Let's see all the features that make GameHub one of the [essential Linux applications][4], especially for gamers.
#### Steam, GOG & Humble Bundle Support
![][5]
It supports Steam, [GOG][6], and [Humble Bundle][7] account integration. You can sign in to your account to manage your library from within GameHub.
For my usage, I have a lot of games on Steam and a couple on Humble Bundle. I can't speak for everyone, but it is safe to assume that these are the major platforms one would want to have.
#### Native Game Support
![][8]
There are several [websites where you can find and download Linux games][9]. You can also add native Linux games by downloading their installers or adding the executable file.
Unfortunately, there's no easy way of finding Linux games from within GameHub at the moment. So, you will have to download them separately and add them to GameHub as shown in the image above.
#### Emulator Support
With emulators, you can [play retro games on Linux][10]. As you can observe in the image above, you also get the ability to add emulators (and import emulated images).
You can see [RetroArch][11] listed already but you can also add custom emulators as per your requirements.
#### User Interface
![Gamehub Appearance Option][12]
Of course, the user experience matters. Hence, it is important to take a look at its user interface and what it offers.
I found it very easy to use, and the presence of a dark theme is a bonus.
#### Controller Support
If you are comfortable using a controller with your Linux system to play games, you can easily add it and enable or disable it from the settings.
#### Multiple Data Providers
Because it fetches the information (or metadata) of your games, it needs a source for it. You can see all the sources listed in the image below.
![Data Providers Gamehub][13]
You don't have to do anything here, but if you are using any platform other than Steam, you can generate an [API key for IGDB][14].
I recommend doing that only if you see a prompt/notice within GameHub or if you have some games that do not have any description/pictures/stats in GameHub.
#### Compatibility Layer
![][15]
Do you have a game that does not support Linux?
You do not have to worry. GameHub offers multiple compatibility layers like Wine/Proton which you can use to get the game installed in order to make it playable.
We can't say for sure what will work for you, so you will have to test it yourself. Nevertheless, it is an important feature that could come in handy for a lot of gamers.
### How Do You Manage Your Games in GameHub?
You get the option to add Steam/GOG/Humble Bundle account right after you launch it.
For Steam, you need to have the Steam client installed on your Linux distro. Once you have it, you can easily link the games to GameHub.
![][16]
For GOG & Humble Bundle, you can directly sign in using your credentials to get your games organized in GameHub.
If you are adding an emulated image or a native installer, you can always do that by clicking on the “**+**” button that you observe in the top-right corner of the window.
### How Do You Install Games?
For Steam games, it automatically launches the Steam client to download/install them (I wish this were possible without launching Steam!).
![][17]
But, for GOG/Humble Bundle, you can directly start downloading to install the games after signing in. If necessary, you can utilize the compatibility layer for non-native Linux games.
In either case, if you want to install an emulated game or a native game, just add the installer or import the emulated image. There's nothing more to it.
### GameHub: How do you install it?
![][18]
To start with, you can just search for it in your software center or app center. It is available in the **Pop!_Shop**. So, it can be found in most of the official repositories.
If you dont find it there, you can always add the repository and install it via terminal by typing these commands:
```
sudo add-apt-repository ppa:tkashkin/gamehub
sudo apt update
sudo apt install com.github.tkashkin.gamehub
```
In case you encounter “**add-apt-repository command not found**” error, you can take a look at our article to help fix [add-apt-repository not found error.][19]
There are also AppImage and Flatpak versions available. You can find installation instructions for other Linux distros on its [official webpage][2].
Also, you have the option to download pre-release packages from its [GitHub page][20].
[GameHub][2]
**Wrapping Up**
GameHub is a pretty neat application as a unified library for all your games. The user interface is intuitive and so are the options.
Have you had the chance to test it out before? If yes, let us know your experience in the comments down below.
Also, feel free to tell us about some of your favorite tools/applications similar to this which you would want us to try.
--------------------------------------------------------------------------------
via: https://itsfoss.com/gamehub/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/linux-gaming-guide/
[2]: https://tkashkin.tk/projects/gamehub/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-home-1.png?ssl=1
[4]: https://itsfoss.com/essential-linux-applications/
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-platform-support.png?ssl=1
[6]: https://www.gog.com/
[7]: https://www.humblebundle.com/monthly?partner=itsfoss
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-native-installers.png?ssl=1
[9]: https://itsfoss.com/download-linux-games/
[10]: https://itsfoss.com/play-retro-games-linux/
[11]: https://www.retroarch.com/
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-appearance.png?ssl=1
[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/data-providers-gamehub.png?ssl=1
[14]: https://www.igdb.com/api
[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-windows-game.png?fit=800%2C569&ssl=1
[16]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-library.png?ssl=1
[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-compatibility-layer.png?ssl=1
[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-install.jpg?ssl=1
[19]: https://itsfoss.com/add-apt-repository-command-not-found/
[20]: https://github.com/tkashkin/GameHub/releases


@ -1,605 +0,0 @@
数码文件与文件夹收纳术(以照片为例)
======
更新 2014-05-14:增加了一些具体实例
更新 2015-03-16:根据照片的 GPS 坐标过滤图片
- 更新 2016-08-29:以新的 `filetags --filter` 替换已经过时的 `show-sel.sh` 脚本
更新 2017-08-28: geeqier 视频缩略图的邮件评论
- 更新 2018-03-06:增加了 Julian Kahnert 的链接
更新 2018-05-06:增加了作者在 2018 Linuxtage Graz 大会上演讲的视频
更新 2018-06-05:关于 metadata 的邮件回复
更新 2019-07-09:关于在文件名中避免使用系谱和字符的邮件回复
每当度假或去哪游玩时我就会化身为一个富有激情的摄影师。所以,过去的几年中我积累了许多的 [JPEG][1] 文件。这篇文章中我会介绍我是如何避免[vendor lock-in][2]LCTT译注vendor lock-in 供应商锁定,原为经济学术语,这里引申为避免过于依赖某一服务平台)造成受限于那些临时性的解决方案及数据丢失。相反,我更倾向于使用那些可以让我投入时间和精力打理并能长久使用的解决方案。
这一(相当长的)攻略 **并不仅仅适用于图像文件** :我将进一步阐述像是文件夹结构,文件的命名规则,等等许多领域的事情。因此,这些规范适用于我所能接触到的所有类型的文件。
在我开始传授我的方法之前,我们应该先就我将要介绍方法的达成一个共识,那就是我们是否有相同的需求。如果你对[raw 图像格式][3]十分推崇,将照片存储在云端或其他你信赖的地方(对我而言可能不会),那么你可能不会认同这篇文章将要描述的方式了。请根据你的情况来灵活做出选择。
### 我的需求
对于 **将照片(或视频)从我的数码相机中导出到电脑里**,我只需要将 SD 卡插到我的电脑里并调用 `fetch-workflow` 软件。这一步也完成了**图像软件的预处理**以适用于我的文件命名规范(下文会具体论述),同时也可以将图片旋转至正常的方向(而不是横着)。
这些文件将会被存入到我的摄影收藏文件夹 `$HOME/tmp/digicam/`。在这一文件夹中我希望能**遍历我的图像和视频文件**,以便于**整理/删除、重命名、添加/移除标签,以及将一系列相关的文件移动到相应的文件夹中**。
在完成这些以后,我将会**浏览包含图像/电影文件集的文件夹**。在极少数情况下,我希望**在独立的图像处理工具**(比如 [GIMP][4])中打开一个图像文件。如果仅是为了**旋转 JPEG 文件**,我想找到一个快速的方法,不需要图像处理工具,并且能[以无损的方式][5]旋转 JPEG 图像。
我的数码相机支持用 [GPS][6] 坐标标记图像。因此,我需要一个方法来**对单个文件或一组文件可视化 GPS 坐标**,来显示我走过的路径。
我想拥有的另一个好功能是:假设你在威尼斯度假时拍了几百张照片。每一张都很漂亮,所以你每张都舍不得删除。另一方面,你可能想把一组更少的照片送给家里的朋友。而且,为了不让他们过于嫉妒,他们可能只希望看到 20 多张照片。因此,我希望能够**定义并显示一组特定的照片子集**。
就独立性和**避免锁定效应**而言,我不想使用那种一旦公司停止产品或服务就无法使用的工具。出于同样的原因,由于我是一个注重隐私的人,**我不想使用任何基于云的服务**。为了让自己对新的可能性保持开放的心态,我不希望仅在一个特定的操作系统平台上倾注全部的精力。**基本的东西必须在任何平台上可用**(查看、导航、……)。但是**全套需求必须在GNU/Linux上运行**且我选择Debian GNU/Linux。
在我传授当前针对上述大量需求的解决方案之前,我必须解释一下我的一般文件夹结构和文件命名约定,我也使用它来命名数码照片。但首先,你必须认清一个重要的事实:
#### iPhoto, Picasa, 诸如此类应被认为是有害的
管理照片集合的软件工具确实提供了相当酷的功能。他们提供了一个良好的用户界面,并试图为你提供各种需求的舒适的工作流程。
这些软件的功能和我的个人需求之间的差异很大。它们几乎对所有东西都使用专有的存储格式:图像文件、元数据等等。这是一个大问题,当你打算在几年内换一个不同的软件。相信我:总有一天你会因为多种原因而改变。
如果你现在正打算更换相应的工具,你将会意识到 iPhoto 或 Picasa 是分别存储原始图像文件和你对它们所做的所有操作的(旋转图像、向图像文件添加描述、标签、裁剪等等)。如果你不能导出并重新导入到新工具,那么**所有的东西都将永远丢失**。而无损地进行转换和迁移几乎是不可能的。
我不想在一个会锁住我工作的工具上投入任何精力。**我也拒绝把自己绑定在任何专有工具上**。我是一个过来人,希望你们吸取我的经验。
这就是我在文件名中保留时间戳、图像描述或标记的原因。文件名是永久性的除非我手动更改它们。当我把照片备份或复制到u盘或其他操作系统时它们不会丢失。每个人都能读懂。任何未来的系统都能够处理它们。
### 我的文件命名规范
这里有一个我在 [2018 Linuxtage Graz 大会][44]上给出的[演讲][45],其中详细阐述了我的在本文中提到的想法和工作流程。
我所有的文件都与一个特定的日期或时间有关,根据所采用的[ISO 8601][7]规范,我采用的是**日期-标记**或**时间-标记**
带有日期戳和两个标签的示例文件名:`2014-05-09 42号项目的预算 -- 金融公司.csv`
带有时间戳(甚至包括可选秒)和两个标签的示例文件名:`2014-05-09T22.19.58 Susan展示她的新鞋子 -- 家庭衣物.jpg`
由于冒号不适用于Windows[文件系统NTFS][8]所以我必须使用已采用的ISO时间戳。因此我用点代替冒号以便将小时与分钟区别开来。
如果是**时间或持续的一段时间**,我会将两个日期或时间戳用两个负号分开:`2014-05-09—2014-05-13爵士音乐节Graz—folder 旅游音乐.pdf`。
文件名中的时间/日期戳的优点是,除非我手动更改它们,否则它们保持不变。当通过某些不处理这些元数据的软件进行处理时,包含在文件内容本身中的元数据(如[Exif][9])往往会丢失。此外,使用这样的日期/时间戳启动文件名可以确保文件按时间顺序显示,而不是按字母顺序显示。字母表是一种[完全人工的排序顺序][10],对于用户定位文件通常不太实用。
当我想将**tags**关联到文件名时,我将它们放在原始文件名和[文件名扩展名][11]之间,中间用空格、两个减号和一个额外的空格分隔"`--`"。我的标签是小写的英文单词,不包含空格或特殊字符。有时,我可能会使用`quantifiedself`或`usergenerated`等连接词。我[倾向于选择一般类别][12]而不是太过具体的描述标签。我用这一方式在Twitter [hashtags][13]上重用标记、文件名、文件夹名、书签、诸如此类的博客条目等等。
标签作为文件名的一部分有几个优点。通过使用常用的桌面搜索引擎,你可以在标签的帮助下定位文件。文件名称中的标签不能因为在不同的存储介质上复制而丢失。当系统使用与文件名不同的存储位置如:元数据数据库、[dot-files][14]、[备用数据流][15]等,通常会发生这种情况
当然,在一般的文件和文件夹名称中,**请避免使用特殊字符**umlauts冒号等。尤其是在不同操作系统平台之间同步文件时。
我的**文件夹名命名约定**与文件的相应规范相同。
注意:由于[Memacs][17]的[filenametimestamp][16]-module的聪明之处所有带有日期/时间戳的文件和文件夹都在同一时间/天出现在我的组织模式日历(agenda)上。这样,我就能很好地了解当天发生了什么,包括我拍的所有照片。
### 我的一般文件夹结构
在本节中,我将描述主文件夹中最重要的文件夹。注意:这可能在将来的被移动到一个独立的页面。或许不是。让我们等着瞧:-)
很多东西只有在一定的时间内才会引起人们的兴趣。这些内容包括快速浏览其内容的下载、解压缩文件以检查包含的文件、一些有趣的小内容等等。对于**临时的东西**,我有 `$HOME/tmp/ ` 子层次结构。新照片放在`$HOME/tmp/digicam/`中。我从CD、DVD或USB记忆棒临时复制的东西放在`$HOME/tmp/fromcd/`中。每当软件工具需要用户文件夹层次结构中的临时数据时,我就使用` $HOME/tmp/Tools/ `作为起点。我经常使用的文件夹是`$HOME/tmp/2del/`:`2del`的意思是“随时可以删除”。例如,我所有的浏览器都使用这个文件夹作为默认的下载文件夹。如果我需要在机器上腾出空间,我首先查看这个`2del`-文件夹,用于删除内容。
与上面描述的临时文件相比,我当然也想将一些文件**保存更长的时间**。这些文件会被移动到我的 `$HOME/archive/` 子层次结构中。它有几个子文件夹:备份、我想保留的网络下载内容、我要存档的二进制文件、可移动媒体(CD、DVD、记忆棒、外部硬盘驱动器)的索引文件,以及一个用于存放近期内要归档(并为其寻找合适目标文件夹)的文件的文件夹。有时,我太忙或者没有耐心将文件妥善整理。是的,那就是我,我甚至有一个名为`现在不要整理我`的文件夹。这对你而言是否很怪?:-)
我的归档中最重要的子层次结构是 `$HOME/archive/events_memories/` 及其子文件夹 `2014/`、`2013/`、`2012/` 等等。正如你可能已经猜到的,每个年份有一个**子文件夹**。其中每个文件夹中都有单个文件和文件夹。这些文件根据我在前一节中描述的文件命名约定命名。文件夹名称以 “YYYY-MM-DD” [ISO 8601][7] 日期标签开头,后面跟着一个具有描述性的名称,如 `$HOME/archive/events_memories/2014/2014-05-08 Business marathon with /`。在这些与日期相关的文件夹中,我保存着各种与特定事件相关的文件:照片、(扫描的)PDF 文件、文本文件等等。
对于**共享数据**,我设置了一个 `$HOME/share/` 子层次结构。这是我的 Dropbox 文件夹,我用各种各样的方法(比如 [unison][18])来共享数据。我也在我的设备之间共享数据:家里的 Mac Mini、家里的 GNU/Linux 笔记本、Android 手机、root 服务器(我的个人云)、工作用的 Windows 笔记本。我不想在这里详细说明我的同步设置,如果你想了解相关的设置,可以参考另一篇相关的文章。:-)
在我的 `$HOME/templates_tags/` 子层次结构中,我保存了各种**模板文件**([LaTeX][19]、脚本等)、剪贴画和**徽标**等等。
我的 **Org-mode** 文件主要保存在 `$HOME/org/` 中。为节省篇幅,我就不解释我有多喜欢 [Emacs/Org-mode][20] 以及我从中获益多少了。你可能已经读过或听过我详细描述过我用它完成的各种很棒的事情,具体可以在我的博客上查找[我的 emacs 标签][21],或在 Twitter 上查找[话题标签 #orgmode][22]。
以上就是我最重要的文件夹子层次结构设置方式。
### 我的工作流程
Tataaaa在你了解了我的文件夹结构和文件名约定之后下面是我当前的工作流程和工具我使用它们来满足我前面描述的需求。
请注意,**你必须知道你在做什么**。我这里的示例及文件夹路径和更多只**适用我的机器或我的设置的文件夹路径**。你必须采用**相应的路径、文件名等**来满足你的需求!
#### 工作流程:将文件从 SD 卡移动到笔记本电脑、旋转人像图像并重命名文件
当我想把数据从我的数码相机移到我的 GNU/Linux 笔记本上时,我会取出相机的 mini SD 存储卡,插入我的笔记本。然后它会自动挂载在 `/media/digicam` 上。
然后,我调用 [getdigicamdata][23]。它做了如下几件事:将文件从 SD 卡移动到一个临时文件夹中进行处理;将原始文件名转换为小写;使用 [jhead][24] 旋转所有人像照片;同样使用 jhead,从 Exif 头的时间戳生成文件名中的时间戳;使用 [date2name][25] 给电影文件也加上时间戳。处理完所有这些文件后,它们将被移动到新照片的目标文件夹:`$HOME/tmp/digicam/tmp/`。
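上述流程中的两个步骤(小写化文件名,以及把 Exif 时间戳转换为文件名前缀)的核心逻辑可以用纯 shell 示意如下。这只是按文中描述写的简化草稿(`to_lower`、`ts_to_name` 为虚构的辅助函数),真实脚本依赖 jhead 和 date2name:

```shell
#!/bin/sh
# 简化示意:getdigicamdata 流程中的两个步骤(函数名为本文虚构)
to_lower() { printf '%s' "$1" | tr '[:upper:]' '[:lower:]'; }
# 把 Exif 时间戳 "YYYY:MM:DD HH:MM:SS" 转成文件名前缀 "YYYY-MM-DDTHH.MM.SS"
ts_to_name() { printf '%s' "$1" | sed 's/:/-/;s/:/-/;s/ /T/;s/:/./g'; }
echo "$(ts_to_name '2014:04:20 17:09:11')_$(to_lower 'P1100386.JPG')"
# 输出:2014-04-20T17.09.11_p1100386.jpg
```

这样得到的文件名正是后文示例中使用的格式。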
#### 工作流程:文件夹索引、查看、重命名、删除图像文件
为了快速浏览我的图像和电影文件,我更喜欢在 GNU/Linux 上使用 [geeqie][26]。这是一个相当轻量级的图像浏览器,它具有其他文件浏览器所缺少的一大优势:我可以通过键盘快捷方式调用外部脚本/工具。通过这种方式,我可以用任意的外部命令扩展这个图像浏览器的功能。
geeqie 内置了基本的图像管理功能:索引我的文件夹层次结构、以窗口模式或全屏查看图像(快捷键 `f`)、重命名文件、删除文件、显示 Exif 元数据(快捷键 `Ctrl-e`)。
在OS X上我使用[Xee][27]。与geeqie不同它不能通过外部命令进行扩展。不过基本的导航、查看和重命名功能也是可用的。
#### 工作流:添加和删除标签
我创建了一个名为[filetags][28]的Python脚本用于向单个文件以及一组文件添加和删除标记。
对于数码照片,我使用标签,例如,`specialL`用于我认为适合桌面背景的风景图片,`specialP`用于我想展示给其他人的人像照片,`sel`用于筛选,等等。
##### 使用geeqie初始设置文件标签
向geeqie添加文件标签是一个手动步骤:`编辑>首选项>配置编辑器…`然后创建一个带有`New`的附加条目。在这里,你可以定义一个新的桌面文件,如下所示:
add-tags.desktop
```
[Desktop Entry]
Name=filetags
GenericName=filetags
Comment=
Exec=/home/vk/src/misc/vk-filetags-interactive-adding-wrapper-with-gnome-terminal.sh %F
Icon=
Terminal=true
Type=Application
Categories=Application;Graphics;
hidden=false
MimeType=image/*;video/*;image/mpo;image/thm
Categories=X-Geeqie;
```
包装器脚本 `vk-filetags-interactive-adding-wrapper-with-gnome-terminal.sh` 是必需的,因为我想要弹出一个新的终端窗口,以便向我的文件添加标签:
vk-filetags-interactive-adding-wrapper-with-gnome-terminal.sh
```
#!/bin/sh
/usr/bin/gnome-terminal \
--geometry=85x15+330+5 \
--tab-with-profile=big \
--hide-menubar \
-x /home/vk/src/filetags/filetags.py --interactive "${@}"
#end
```
在geeqie中你可以在` Edit > Preferences > Preferences…>键盘`。我将`t`与`filetags`命令相关联。
标签脚本还能够从单个文件或一组文件中删除标记。它基本上使用与上面相同的方法。唯一的区别是文件标签脚本额外的`--remove`参数:
remove-tags.desktop
```
[Desktop Entry]
Name=filetags-remove
GenericName=filetags-remove
Comment=
Exec=/home/vk/src/misc/vk-filetags-interactive-removing-wrapper-with-gnome-terminal.sh %F
Icon=
Terminal=true
Type=Application
Categories=Application;Graphics;
hidden=false
MimeType=image/*;video/*;image/mpo;image/thm
Categories=X-Geeqie;
```
vk-filetags-interactive-removing-wrapper-with-gnome-terminal.sh
```
#!/bin/sh
/usr/bin/gnome-terminal \
--geometry=85x15+330+5 \
--tab-with-profile=big \
--hide-menubar \
-x /home/vk/src/filetags/filetags.py --interactive --remove "${@}"
#end
```
为了删除标签,我创建了键盘快捷键 `T`。
##### 在geeqie中使用文件标签
当我在geeqie文件浏览器中浏览图像文件时我选择要标记的文件(一到多个)并按`t`。然后,一个小窗口弹出,要求我提供一个或多个标签。在 ` Return ` 命令确认后,这些标签被添加到文件名中。
删除标签也是一样:选择多个文件,按下`T`,输入要删除的标签,然后用`Return`确认。就是这样。几乎没有[更简单的方法来添加或删除标签到文件][29]。
#### 工作流:使用 appendfilename 进行高级的文件重命名
##### 不使用 appendfilename
重命名一大组文件可能是一个冗长乏味的过程。对于 `2014-04-20T17.09.11_p1100386.jpg` 这样的原始文件名,在文件名中添加描述相当烦人:你要在 geeqie 中按 `Ctrl-r`(重命名)打开文件重命名对话框。默认情况下,基本名(不含扩展名的文件名)会被整个选中,因此,如果不希望删除/覆盖文件名(而是要追加),就必须先按下光标键 `<right>`。然后,光标位于基本名和扩展名之间。输入你的描述(不要忘记开头的空格字符),并用 `Return` 确认。
##### 在 geeqie 中使用 appendfilename
使用[appendfilename][30],我的过程得到了简化,可以获得将文本附加到文件名的最佳用户体验:当我在geeqie中按下` a ` (append)时,会弹出一个对话框窗口,询问文本。在`Return`确认后,输入的文本将放置在时间戳和可选标记之间。
例如,当我在 `2014-04-20T17.09.11_p1100386.jpg` 上按下 `a`,然后键入 `Pick-nick in Graz` 时,文件名变为 `2014-04-20T17.09.11_p1100386 Pick-nick in Graz.jpg`。当我再次按下 `a` 并输入 `with Susan` 时,文件名变为 `2014-04-20T17.09.11_p1100386 Pick-nick in Graz with Susan.jpg`。如果文件名中已有标签,附加的文本会插在标签分隔符之前。
这样,我就不必担心覆盖时间戳或标记。重命名的过程对我来说变得更加有趣!
最好的部分是:当我想要将相同的文本添加到多个选定的文件中时也可以使用appendfilename。
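appendfilename 的这一插入行为可以这样勾勒(极简示意,`append_text` 为本文虚构的函数,并非 appendfilename 的实际实现):

```shell
#!/bin/sh
# 极简示意:把描述文本追加到文件名中、标签分隔符 " -- " 之前
append_text() {
  file="$1"; text="$2"
  base="${file%.*}"; ext="${file##*.}"
  case "$base" in
    *" -- "*)  # 已有标签:插在标签分隔符之前
      name="${base%% -- *}"; tags="${base#* -- }"
      printf '%s %s -- %s.%s\n' "$name" "$text" "$tags" "$ext" ;;
    *)         # 没有标签:直接追加到基本名末尾
      printf '%s %s.%s\n' "$base" "$text" "$ext" ;;
  esac
}
append_text "2014-04-20T17.09.11_p1100386.jpg" "Pick-nick in Graz"
# 输出:2014-04-20T17.09.11_p1100386 Pick-nick in Graz.jpg
```

正因为插入点固定在时间戳之后、标签之前,用户才不必担心覆盖时间戳或标签。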
##### 在 geeqie 中初始设置 appendfilename
添加一个额外的编辑器到geeqie: ` Edit > Preferences > Configure editor…>New`。然后输入桌面文件定义:
appendfilename.desktop
```
[Desktop Entry]
Name=appendfilename
GenericName=appendfilename
Comment=
Exec=/home/vk/src/misc/vk-appendfilename-interactive-wrapper-with-gnome-terminal.sh %F
Icon=
Terminal=true
Type=Application
Categories=Application;Graphics;
hidden=false
MimeType=image/*;video/*;image/mpo;image/thm
Categories=X-Geeqie;
```
同样,我也使用了一个包装脚本,它将为我打开一个新的终端:
vk-appendfilename-interactive-wrapper-with-gnome-terminal.sh
```
#!/bin/sh
/usr/bin/gnome-terminal \
--geometry=90x5+330+5 \
--tab-with-profile=big \
--hide-menubar \
-x /home/vk/src/appendfilename/appendfilename.py "${@}"
#end
```
#### 工作流程:播放电影文件
在GNU/Linux上我使用[mplayer][31]回放视频文件。由于geeqie本身不播放电影文件所以我必须创建一个设置以便在mplayer中打开电影文件。
##### 在geeqie中初始化mplayer的设置
我已经使用[xdg-open][32]将电影文件扩展名关联到mplayer。因此我只需要为geeqie创建一个通用的“open”命令使用xdg-open打开任何文件及其关联的应用程序。
再次访问` Edit > Preferences > Configure editor…`在geeqie中添加`open`的条目:
open.desktop
```
[Desktop Entry]
Name=open
GenericName=open
Comment=
Exec=/usr/bin/xdg-open %F
Icon=
Terminal=true
Type=Application
hidden=false
NOMimeType=*;
MimeType=image/*;video/*
Categories=X-Geeqie;
```
当你将快捷方式`o`(见上文)与geeqie关联时你就能够打开与其关联的应用程序的视频文件(和其他文件)。
##### 使用xdg-open打开电影文件(和其他文件)
在上面的设置过程之后当你的geeqie光标位于文件上方时你只需按下`o`即可。就是如此简洁。
#### 工作流:在外部图像编辑器中打开
我偶尔也希望能够在 GIMP 中快速编辑图像文件。因此,我添加了一个快捷方式 `g`,并将其与外部编辑器 “GNU 图像处理程序”(GIMP)关联起来(geeqie 已经默认创建了该条目)。
这样,只需按下`g`就可以打开GIMP中的当前图像。
#### 工作流程:移动到存档文件夹
现在我已经在我的文件名中添加了注释,我想将单个文件移动到`$HOME/archive/events_memories/2014/`,或者将一组文件移动到这个文件夹中的新文件夹中,如`$HOME/archive/events_memories/2014/2014-05-08 business marathon after show - party`。
通常的方法是选择一个或多个文件,然后用快捷方式 `Ctrl-m` 将它们移动到相应文件夹中。
何等繁复无趣之至!
因此,我(再次)编写了一个 Python 脚本来为我完成这项工作:[move2archive][33](简称 `m2a`)。它接受一个或多个文件作为命令行参数,然后会出现一个对话框,我可以在其中输入一个可选的文件夹名。当我不输入任何东西而直接按 `Return` 时,文件被移动到相应年份的文件夹中。当我输入类似 `Business-Marathon After-Show-Party` 的文件夹名称时,第一个图像文件的日期戳会被加在该名称之前,于是相应的文件夹(如 `$HOME/archive/events_memories/2014/2014-05-08 Business-Marathon After-Show-Party`)会被创建,文件随之被移入其中。
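m2a 计算目标文件夹的思路可以这样勾勒(简化示意,`m2a_target` 为本文虚构的函数,并非 move2archive 的实际实现;假设文件名以 YYYY-MM-DD 日期戳开头):

```shell
#!/bin/sh
# 简化示意:根据第一个文件的日期戳和可选的事件描述,算出归档目标文件夹
ARCHIVE="$HOME/archive/events_memories"
m2a_target() {
  first="$1"; desc="$2"
  date="$(basename "$first" | cut -c1-10)"   # 文件名开头的 YYYY-MM-DD
  year="$(printf '%s' "$date" | cut -c1-4)"
  if [ -n "$desc" ]; then
    printf '%s/%s/%s %s\n' "$ARCHIVE" "$year" "$date" "$desc"
  else
    printf '%s/%s\n' "$ARCHIVE" "$year"      # 无描述:直接放入年度文件夹
  fi
}
m2a_target "2014-05-08T23.20.11_p1100999.jpg" "Business-Marathon After-Show-Party"
```

文件名里的日期戳在这里再次发挥了作用:目标文件夹名无需手动输入日期。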
我在geeqie中再一次选择一个或多个文件按`m`(移动),或者只按`Return`(没有特殊的子文件夹),或者输入一个描述性文本,这是要创建的子文件夹的名称(可选不带日期戳)。
**没有哪个图像管理工具能像我的 geeqie(配合 appendfilename 和 move2archive)一样,通过快捷键如此快速而有趣地完成这些工作。**
##### 在 geeqie 里初始化 m2a 的相关设置
同样向geeqie添加`m2a`是一个手动步骤:“编辑>首选项>配置编辑器……”然后创建一个带有“New”的附加条目。在这里你可以定义一个新的桌面文件如下所示:
m2a.desktop
```
[Desktop Entry]
Name=move2archive
GenericName=move2archive
Comment=Moving one or more files to my archive folder
Exec=/home/vk/src/misc/vk-m2a-interactive-wrapper-with-gnome-terminal.sh %F
Icon=
Terminal=true
Type=Application
Categories=Application;Graphics;
hidden=false
MimeType=image/*;video/*;image/mpo;image/thm
Categories=X-Geeqie;
```
包装器脚本的`vk-m2a-interactive-wrapper-with-gnome-terminal.sh `是必要的,因为我想要弹出一个新的终端窗口,以便我的文件进入我指定的目标文件夹:
vk-m2a-interactive-wrapper-with-gnome-terminal.sh
```
#!/bin/sh
/usr/bin/gnome-terminal \
--geometry=157x56+330+5 \
--tab-with-profile=big \
--hide-menubar \
-x /home/vk/src/m2a/m2a.py --pauseonexit "${@}"
#end
```
在geeqie中你可以在`Edit > Preferences > Preferences ... > Keyboard`将`m`与`m2a`命令相关联。
#### 工作流程:旋转图像(无损)
通常,我的数码相机会自动把人像照片标记为人像方向。然而,在某些特定的情况下(比如从被摄对象上方拍摄),我的相机会判断出错。在那些**罕见的情况下**,我必须手动修正方向。
你必须知道JPEG文件格式是一种有损格式应该只用于照片而不是计算机生成的东西如屏幕截图或图表。以傻瓜方式旋转JPEG图像文件通常会解压/可视化图像文件,旋转生成新的图像,然后重新编码结果。这将导致生成的图像[比原始图像质量差得多][5]。
因此你应该使用无损方法来旋转JPEG图像文件。
再一次我添加了一个“外部编辑器”到geeqie:`Edit > Preferences > Configure Editors ... > New`。在这里,我添加了两个条目:一个用于旋转270度(即逆时针旋转90度),另一个用于使用[exiftran][34]旋转90度(逆时针旋转90度):
rotate-270.desktop
```
[Desktop Entry]
Version=1.0
Type=Application
Name=Losslessly rotate JPEG image counterclockwise
# call the helper script
TryExec=exiftran
Exec=exiftran -p -2 -i -g %f
# Desktop files that are usable only in Geeqie should be marked like this:
Categories=X-Geeqie;
OnlyShowIn=X-Geeqie;
# Show in menu "Edit/Orientation"
X-Geeqie-Menu-Path=EditMenu/OrientationMenu
MimeType=image/jpeg;
```
rotate-90.desktop
```
[Desktop Entry]
Version=1.0
Type=Application
Name=Losslessly rotate JPEG image clockwise
# call the helper script
TryExec=exiftran
Exec=exiftran -p -9 -i -g %f
# Desktop files that are usable only in Geeqie should be marked like this:
Categories=X-Geeqie;
OnlyShowIn=X-Geeqie;
# Show in menu "Edit/Orientation"
X-Geeqie-Menu-Path=EditMenu/OrientationMenu
# It can be made verbose
# X-Geeqie-Verbose=true
MimeType=image/jpeg;
```
我为 `[`(逆时针方向)和 `]`(顺时针方向)创建了 geeqie 快捷键。
#### 工作流程:可视化GPS坐标
我的数码相机有一个 GPS 传感器,它会把当前的地理位置保存在 JPEG 文件的 Exif 元数据中。位置数据以 [WGS 84][35] 格式存储,如 “47, 58, 26.73; 16, 23, 55.51”(纬度;经度)。这种格式的可读性不佳,我期望看到的是地图或位置名称。因此,我向 geeqie 添加了一些功能,这样我就可以在 [OpenStreetMap][36] 上看到单个图像文件的位置:`Edit > Preferences > Configure Editors ... > New`
photolocation.desktop
```
[Desktop Entry]
Name=vkphotolocation
GenericName=vkphotolocation
Comment=
Exec=/home/vk/src/misc/vkphotolocation.sh %F
Icon=
Terminal=true
Type=Application
Categories=Application;Graphics;
hidden=false
MimeType=image/bmp;image/gif;image/jpeg;image/jpg;image/pjpeg;image/png;image/tiff;image/x-bmp;image/x-gray;image/x-icb;image/x-ico;image/x-png;image/x-portable-anymap;image/x-portable-bitmap;image/x-portable-graymap;image/x-portable-pixmap;image/x-xbitmap;image/x-xpixmap;image/x-pcx;image/svg+xml;image/svg+xml-compressed;image/vnd.wap.wbmp;
```
这就调用了我的名为`vkphotolocation.sh`的包装脚本,它使用[ExifTool][37]让[Marble][38]能够读取和可视化的适当格式并提取坐标:
vkphotolocation.sh
```
#!/bin/sh
IMAGEFILE="${1}"
IMAGEFILEBASENAME=$(basename "${IMAGEFILE}")
COORDINATES=`exiftool -c %.6f "${IMAGEFILE}" | awk '/GPS Position/ { print $4 " " $6 }'`
if [ "x${COORDINATES}" = "x" ]; then
zenity --info --title="${IMAGEFILEBASENAME}" --text="No GPS-location found in the image file."
else
/usr/bin/marble --latlon "${COORDINATES}" --distance 0.5
fi
#end
```
将其映射到键盘快捷键 `G` 后,我可以快速查看**单个图像文件在地图上的位置**。
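顺带一提,Exif 中 WGS 84 的度分秒坐标换算成十进制度的公式是 度 + 分/60 + 秒/3600。下面是一个假设性的换算示意(`dms_to_decimal` 为本文虚构的函数):

```shell
#!/bin/sh
# 假设性示意:把 "度,分,秒"(如 Exif 中的 "47,58,26.73")换算为十进制度
dms_to_decimal() {
  IFS=',' read -r d m s <<EOF
$1
EOF
  awk -v d="$d" -v m="$m" -v s="$s" 'BEGIN { printf "%.6f\n", d + m/60 + s/3600 }'
}
dms_to_decimal "47,58,26.73"
# 输出:47.974092
```

Marble 和 OpenStreetMap 接受的正是这种十进制度坐标。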
当我想将多个 JPEG 图像文件的**位置可视化为路径**时,我使用 [GpsPrune][39]。我没有找到让 GpsPrune 把一组文件作为命令行参数接收的方法。正因如此,我必须手动启动 GpsPrune,再通过 `选择一组文件或一个文件夹 > 添加照片` 来加载。
通过这种方式我可以为OpenStreetMap地图上的每个JPEG位置获得一个点(如果配置为这样)。通过单击这样一个点,我可以得到相应图像的详细信息。
如果你恰好在国外拍摄照片可视化GPS位置对**在文件名中添加描述**大有帮助!
#### 工作流程:根据GPS坐标过滤照片
这并非我的工作流程。为了完整起见,我列出该工作流对应工具的特性。我想做的就是从一大堆图片中寻找那些在一定区域内(范围或点+距离)的照片。
到目前为止,我只找到了[DigiKam][40],它能够[根据矩形区域进行过滤][41]。如果你知道其他工具,请将其添加到下面的评论或写一封电子邮件。
#### 工作流:显示给定集合的子集
如上面的需求所述,我希望能够在一个文件夹中定义一组子文件,以便将这个小集合呈现给其他人。
工作流程非常简单:我向选中的文件添加一个标签(通过 `t`/filetags)。为此,我使用标签 `sel`,它是 “selection” 的缩写。在标记了一组文件之后,我可以按下 `s`,它与一个脚本相关联,该脚本只显示标记为 `sel` 的文件。
当然,这也适用于任何标签或标签组合。因此,用同样的方法,你可以得到一个适当的概述,你的婚礼上的所有照片都标记着“教堂”和“戒指”。
很棒的功能,不是吗?:-)
##### 在 geeqie 中初始设置按标签筛选
你必须定义一个额外的“外部编辑器”:`Edit > Preferences > Configure Editors ... > New`:
filter-tags.desktop
```
[Desktop Entry]
Name=filetag-filter
GenericName=filetag-filter
Comment=
Exec=/home/vk/src/misc/vk-filetag-filter-wrapper-with-gnome-terminal.sh
Icon=
Terminal=true
Type=Application
Categories=Application;Graphics;
hidden=false
MimeType=image/*;video/*;image/mpo;image/thm
Categories=X-Geeqie;
```
再次调用我编写的包装脚本:
vk-filetag-filter-wrapper-with-gnome-terminal.sh
```
#!/bin/sh
/usr/bin/gnome-terminal \
--geometry=85x15+330+5 \
--hide-menubar \
-x /home/vk/src/filetags/filetags.py --filter
#end
```
带参数 `--filter` 的 `filetags` 所做的基本上是:要求用户输入一个或多个标签,然后把当前文件夹中所有匹配的文件用[符号链接][42]链接到 `$HOME/.filetags_tagfilter/`,最后启动一个新的 geeqie 实例来显示这些链接的文件。
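这种符号链接式筛选的思路可以这样勾勒(简化示意,`filter_tag` 为本文虚构的函数,并非 filetags 的实际实现):

```shell
#!/bin/sh
# 简化示意:把当前文件夹中文件名标签区含指定标签的文件,
# 符号链接到 $HOME/.filetags_tagfilter/,之后用图像浏览器打开该文件夹即可
filter_tag() {
  tag="$1"
  filterdir="$HOME/.filetags_tagfilter"
  rm -rf "$filterdir"; mkdir -p "$filterdir"
  for f in *" -- "*; do                       # 只考虑带标签区的文件
    case "$f" in
      *"$tag"*) [ -e "$f" ] && ln -s "$PWD/$f" "$filterdir/$f" ;;
    esac
  done
}
```

由于只创建符号链接,筛选不会复制或移动原始文件,可以随时丢弃重建。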
在退出这个新的 geeqie 实例之后,你会回到先前调用筛选过程的旧 geeqie 实例。
#### 用一个真实的案例来总结
哇,这是一篇很长的博客文章,难怪你可能已经忘了前面的内容。总结一下,我在(扩展了标准功能集的)geeqie 中能够完成的事情如下:
| 快捷键 | 功能 |
| --- | --- |
| `m` | move2archive |
| `o` | 打开(非图像文件) |
| `a` | 在文件名中追加文本 |
| `t` | 文件标签(添加) |
| `T` | 文件标签(删除) |
| `s` | 文件标签(筛选) |
| `g` | GIMP |
| `G` | 显示 GPS 位置 |
| `[` | 无损地逆时针旋转 |
| `]` | 无损地顺时针旋转 |
| `Ctrl-e` | EXIF 图像信息 |
| `f` | 全屏显示 |
下面是一个文件名(包括其路径)的示例,以及我用来操作其各个组成部分的工具:
```
/this/is/a/folder/2014-04-20T17.09 Pick-nick in Graz -- food graz.jpg
[ m2a ] [ date2name ] [ appendfilename ] [filetags]
```
在示例中,我按照以下步骤将照片从相机转到归档:我将 SD 存储卡放入计算机的读卡器,然后运行 [getdigicamdata.sh][23]。完成之后,我在 geeqie 中打开 `$HOME/tmp/digicam/tmp/`,浏览一遍照片,把不满意的删除。如果有图像方向错误,我用 `[` 或 `]` 纠正它。
在第二步中,我向我认为值得注释的文件添加描述(`a`)。每当我想添加标签时也是如此:我快速地选中所有应该共享某一标签的文件(`Ctrl` + 鼠标点击),并使用 [filetags][28](`t`)进行标记。
要归并来自某一事件的文件,我会选中相应的文件,按 `m` 调用 [move2archive][33] 并键入事件描述,将它们移动到年度归档文件夹中新建的事件文件夹里;其余不属于特定事件的文件,则同样用 `m` 直接移动到年度归档文件夹,而无需输入事件描述。
为了完成我的工作流程我删除了SD卡上的所有文件把它从操作系统上弹出然后把它放回我的数码相机里。
以上。
因为这种工作流程几乎不需要任何开销,所以评论、标记和归档照片不再是一项乏味的工作。
### 最后
以上就是对我处理照片和电影的工作流程的详细描述。你可能已经从中发现了一些自己感兴趣的东西,所以请不要犹豫,使用下面的链接留下评论或发送电子邮件。
我也希望得到反馈,如果我的工作流程适用于你。并且:如果你已经发布了你的工作流程或者找到了其他人工作流程的描述,也请留下评论!
及时行乐,莫让错误的工具或低效的方法浪费了我们的人生!
### 其他工具
请阅读[本文中关于 gThumb 的部分][43]。
当你觉得上文所叙述的方案符合你的需求时,请根据相关的建议来选择对应的工具。
--------------------------------------------------------------------------------
via: http://karl-voit.at/managing-digital-photographs/
作者:[Karl Voit][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://karl-voit.at
[1]:https://en.wikipedia.org/wiki/Jpeg
[2]:http://en.wikipedia.org/wiki/Vendor_lock-in
[3]:https://en.wikipedia.org/wiki/Raw_image_format
[4]:http://www.gimp.org/
[5]:http://petapixel.com/2012/08/14/why-you-should-always-rotate-original-jpeg-photos-losslessly/
[6]:https://en.wikipedia.org/wiki/Gps
[7]:https://en.wikipedia.org/wiki/Iso_date
[8]:https://en.wikipedia.org/wiki/Ntfs
[9]:https://en.wikipedia.org/wiki/Exif
[10]:http://www.isisinform.com/reinventing-knowledge-the-medieval-controversy-of-alphabetical-order/
[11]:https://en.wikipedia.org/wiki/File_name_extension
[12]:http://karl-voit.at/tagstore/en/papers.shtml
[13]:https://en.wikipedia.org/wiki/Hashtag
[14]:https://en.wikipedia.org/wiki/Dot-file
[15]:https://en.wikipedia.org/wiki/NTFS#Alternate_data_streams_.28ADS.29
[16]:https://github.com/novoid/Memacs/blob/master/docs/memacs_filenametimestamps.org
[17]:https://github.com/novoid/Memacs
[18]:http://www.cis.upenn.edu/~bcpierce/unison/
[19]:https://github.com/novoid/LaTeX-KOMA-template
[20]:http://orgmode.org/
[21]:http://karl-voit.at/tags/emacs
[22]:https://twitter.com/search?q%3D%2523orgmode&src%3Dtypd
[23]:https://github.com/novoid/getdigicamdata.sh
[24]:http://www.sentex.net/~mwandel/jhead/
[25]:https://github.com/novoid/date2name
[26]:http://geeqie.sourceforge.net/
[27]:http://xee.c3.cx/
[28]:https://github.com/novoid/filetag
[29]:http://karl-voit.at/tagstore/
[30]:https://github.com/novoid/appendfilename
[31]:http://www.mplayerhq.hu
[32]:https://wiki.archlinux.org/index.php/xdg-open
[33]:https://github.com/novoid/move2archive
[34]:http://manpages.ubuntu.com/manpages/raring/man1/exiftran.1.html
[35]:https://en.wikipedia.org/wiki/WGS84#A_new_World_Geodetic_System:_WGS_84
[36]:http://www.openstreetmap.org/
[37]:http://www.sno.phy.queensu.ca/~phil/exiftool/
[38]:http://userbase.kde.org/Marble/Tracking
[39]:http://activityworkshop.net/software/gpsprune/
[40]:https://en.wikipedia.org/wiki/DigiKam
[41]:https://docs.kde.org/development/en/extragear-graphics/digikam/using-kapp.html#idp7659904
[42]:https://en.wikipedia.org/wiki/Symbolic_link
[43]:http://karl-voit.at/2017/02/19/gthumb
[44]:https://glt18.linuxtage.at
[45]:https://glt18-programm.linuxtage.at/events/321.html

[#]: collector: (lujun9972)
[#]: translator: (luming)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to compile a Linux kernel in the 21st century)
[#]: via: (https://opensource.com/article/19/8/linux-kernel-21st-century)
[#]: author: (Seth Kenlon https://opensource.com/users/sethhttps://opensource.com/users/greg-p)
在 21 世纪编译你的 Linux 内核
======
也许你并不需要编译 Linux 内核,但你能通过这篇教程快速上手。
![and old computer and a new computer, representing migration to new software or hardware][1]
在计算机世界里,<ruby>内核<rt>kernel</rt></ruby>是处理硬件与系统其余部分之间通信的<ruby>底层软件<rt>low-level software</rt></ruby>。除了一些烧录进计算机主板的初始固件,当你启动计算机时,是内核让系统意识到它有一个硬盘驱动器、屏幕、键盘以及网卡。内核给每个部件分配(或多或少)相等的时间,使得图像、音频、文件系统和网络都能流畅甚至并行地运行。
然而,对硬件支持的需求是源源不断的,因为发布的硬件越多,内核就必须纳入更多代码来保证这些硬件正常工作。得到具体的数字很困难,但是 Linux 内核无疑是硬件兼容性最好的内核之一。Linux 运行在无数的计算机和移动电话、工业用途和爱好者使用的嵌入式开发板、SoC、RAID 卡、缝纫机等等之上。
回到 20 世纪(甚至 21 世纪初期),Linux 用户在刚买到新的硬件后,往往需要下载最新的内核代码并编译安装,才能让新硬件工作起来,这在当时并不稀奇。而现在,你已经很难见到 Linux 用户编译内核了,除非是为了好玩,或者是为高度定制化的专用硬件赚钱。如今,通常已经不需要再编译 Linux 内核了。
这里列出了一些原因以及快速编译内核的教程。
### 更新当前的内核
无论你买了配备新显卡或 Wifi 芯片组的新电脑,还是给家里添置一台新的打印机,你的操作系统(称为 GNU+Linux 或 Linux,这也是内核的名字)都需要一个驱动来打开与新部件(显卡、芯片组、打印机或其他任何东西)通信的信道。有时候,当你插入某些新的设备时,你可能会以为电脑已经具备了驱动。但是别被骗到了,有时候那确实是你需要的驱动,但更多的情况是,你的操作系统仅仅使用了通用的协议来检测是否安装了新的设备。
例如,你的计算机也许能够鉴别出新的网络打印机,但有时候那仅仅是因为打印机的网卡为了获得 DHCP 地址被设计成在网络上标识自己。它并不意味着你的计算机知道如何发送文档给打印机。事实上你可以认为计算机甚至不知道那台设备是一个打印机。它也许仅仅在网络上标识出自己的地址和一系列字符“p-r-i-n-t-e-r”。人类语言的便利性对于计算机毫无意义。计算机需要的是一个驱动。
内核开发者、硬件制造商、技术支持和爱好者都知道新的硬件会不断地发布。它们大多数都会贡献驱动,直接提交给内核开发团队以包含在 Linux 中。例如,英伟达显卡驱动通常都会写入 [Nouveau][2] 内核模块中,并且因为英伟达显卡很常用,它的代码都包含在任何日常使用的发行版内核中(例如你下载 [Fedora][3] 或 [Ubuntu][4] 时得到的内核)。在不常用到英伟达显卡的地方,例如嵌入式系统中,Nouveau 模块通常会被移除。对其他设备来说也有类似的模块:打印机得益于 [Foomatic][5] 和 [CUPS][6],无线网卡有 [b43、ath9k、wl][7] 模块,等等。
发行版往往会在它们 Linux 内核的构建中包含尽可能多合理的驱动,因为他们想让你在接入新设备时不用安装驱动能够立即使用。对于大多数情况来说都是非常令人开心的,尤其是现在很多设备厂商都在资助自己售卖硬件的 Linux 驱动开发,并且直接将这些驱动提交给内核团队以用在通常的发行版上。
有时候,你正在运行六个月之前安装的内核,却配上了上周刚刚上市的令人兴奋的新设备。在这种情况下,你的内核也许没有那款设备的驱动。好消息是,那款设备的驱动很可能已经出现在较新版本的内核中,这意味着你只要更新正在运行的内核就可以了。
通常,这些都是通过安装包管理软件完成的。例如在 RHELCentOS 和 Fedora 上:
```
$ sudo dnf update kernel
```
在 Debian 和 Ubuntu 上,首先获取你当前内核的版本:
```
$ uname -r
4.4.186
```
搜索新的版本:
```
$ sudo apt update
$ sudo apt search linux-image
```
安装找到的最新版本。在这个例子中,最新的版本是 5.2.4
```
$ sudo apt install linux-image-5.2.4
```
内核更新后,你必须[重启][8](除非你使用 kpatch 或 kgraft)。这时,如果你需要的设备驱动包含在最新的内核中,你的硬件就会正常工作。
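举个假设性的例子,升级并重启后,可以用 `uname -r` 对比重启前记下的版本号,确认新内核已经生效(`old_kernel` 的值仅为示例):

```shell
#!/bin/sh
# 假设性示例:对比重启前记下的内核版本,确认新内核已生效
old_kernel="4.4.186"            # 升级前 uname -r 的输出(示例值)
new_kernel="$(uname -r)"
if [ "$new_kernel" = "$old_kernel" ]; then
  echo "仍在运行旧内核:$new_kernel,请检查 bootloader 配置"
else
  echo "已启动新内核:$new_kernel"
fi
```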
### 安装内核模块
有时候,一个发行版没有预计到用户会使用某个设备(或者用它的用户不够多,不足以让驱动进入发行版内核)。Linux 对于驱动采用模块化方式,因此尽管驱动没有编译进内核,发行版仍可以推送单独的驱动包让内核去加载。不过这会带来一些复杂性:当驱动没有包含进内核却要在引导过程中加载,或者内核更新后与模块化驱动不再匹配时。第一个问题可以用 initrd(初始 RAM 磁盘)解决,这超出了本文的讨论范围;第二个问题则由 “kmod” 系统解决。
kmod 系统保证了当内核更新后,所有与之安装的模块化驱动也得到更新。如果你手动安装一个驱动,你就体验不到 kmod 提供的自动化,因此只要当 kmod 安装包可用时,就应该选择它。例如,尽管英伟达驱动构建在内核中作为 Nouveau 模块,但官方的驱动仅由英伟达发布。你可以去网站上手动安装英伟达旗下的驱动,下载“.run”文件并运行提供的 shell 脚本,但在安装完新的内核之后你必须重复相同的过程,因为没有任何东西告诉包管理软件你手动安装了一个内核驱动。英伟达驱动着你的显卡,手动更新英伟达驱动通常意味着你需要通过终端来执行更新,因为没有显卡驱动将无法显示。
![Nvidia configuration application][9]
然而,如果你通过 kmod 包安装英伟达驱动,更新你的内核也会更新你的英伟达驱动。在 Fedora 和相关的发行版中:
```
$ sudo dnf install kmod-nvidia
```
在 Debian 和相关发行版上:
```
$ sudo apt update
$ sudo apt install nvidia-kernel-common nvidia-kernel-dkms nvidia-glx nvidia-xconfig nvidia-settings nvidia-vdpau-driver vdpau-va-driver
```
这仅仅是一个例子,但是如果你真的要安装英伟达驱动,你也必须屏蔽掉 Nouveau 驱动。参考你使用发行版的文档获取最佳的步骤吧。
### 下载并安装驱动
不是所有的东西都包含在内核中,也不是所有的东西都能以内核模块的形式提供。在某些情况下,你需要下载由供应商编写并打包好的专门驱动程序;还有一些情况,驱动虽有,却没有配置驱动的前端程序。
有两个常见的例子是 HP 打印机和 [Wacom][10] 数位板。如果你有一台 HP 打印机,你可能可以通过通用的驱动和打印机通信,甚至能够打印出东西。但是通用的驱动无法为特定型号的打印机提供定制化的选项,例如双面打印、逐份打印、纸盒选择等等。[HPLIP][11](HP Linux 成像和打印系统)提供了选项来进行任务管理、调整打印设置、选择可用的纸盒等等。
HPLIP 通常包含在包管理软件中只要搜索“hplip”就行了。
![HPLIP in action][12]
同样,电子艺术家们主要使用的 Wacom 数位板,其驱动通常也包含在内核中,但是调整压感和按键功能等设置,只能通过 GNOME 默认包含的图形控制面板访问;在 KDE 上,则可以通过额外的程序包 `kde-config-tablet` 来访问。
这里也有几个类似的例子,例如内核中没有驱动,但可以作为 RPM 或 DEB 文件下载并且通过包管理软件安装来提供 kmod 版本的驱动。
### 打上补丁并编译你的内核
即使在 21 世纪的未来主义乌托邦里,仍有厂商不够了解开源,无法提供可安装的驱动。有时候,一些公司为驱动提供开源代码,希望你下载代码、修补内核、编译并手动安装。
这种发布方式和在 kmod 系统之外安装打包的驱动程序拥有同样的缺点:对内核的更新会破坏驱动,因为每次更换新的内核时都必须手动将其重新集成到内核中。
令人高兴的是,这一点变得少见了,因为 Linux 内核团队在高声呼唤公司与他们交流方面做得很好,并且公司最终接受了开源不会很快消失的事实。但仍有新奇的或高度专业的设备仅提供了内核补丁。
正式地说,各发行版都有特定的方式来编译内核,以便让包管理器参与到系统这一重要部分的升级中。包管理器太多,无法一一涵盖;举例来说,在 Fedora 上你会使用 `rpmdev` 之类的工具,在 Debian 上则是 `build-essential` 和 `devscripts`。
首先,像通常那样,找到你正在运行内核的版本:
```
$ uname -r
```
在大多数情况下,如果你还没有升级过内核那么升级是安全的。搞定之后,也许你的问题就会在最新发布的内核中解决。如果你尝试后发现不起作用,那么你应该下载正在运行内核的源码。大多数发行版提供了特定的命令,但是手动操作的话,可以在 [kernel.org][13] 上找到它的源代码。
你必须下载内核所需的任何补丁。有时候,这些补丁对应具体的内核版本,因此请谨慎选择。
通常(至少在人们习惯于自己编译内核的年代),内核源代码放置在 `/usr/src/linux` 下,并在那里打补丁。
解压内核源码并打上需要的补丁:
```
$ cd /usr/src/linux
$ bzip2 --decompress linux-5.2.4.tar.bz2
$ tar xf linux-5.2.4.tar
$ cd linux-5.2.4
$ bzip2 -d ../patch*bz2
```
补丁文件也许自带使用说明,但通常它们都设计为从内核源码树的顶层目录应用:
```
$ patch -p1 < patch*example.patch
```
当内核代码打上补丁后,你可以继续使用旧的配置。
```
$ make oldconfig
```
`make oldconfig` 命令有两个作用:它继承了当前的内核配置,并且允许你配置补丁带来的新的选项。
你或许需要运行 `make menuconfig` 命令,它会启动一个基于 ncurses 的菜单界面,列出新内核所有可能的选项。整个菜单可能看不过来,但是它是以旧的内核配置为基础的,你可以查看菜单并且禁用掉你不需要的硬件模块。另外,如果你知道自己有一些硬件没有包含在当前的配置中,你可以选择构建它,当作模块或者直接嵌入内核中。理论上,这些并不是必要的,因为你可以猜想,当前的内核运行良好只是缺少了补丁,而应用补丁的时候可能已经激活了所有设备所必要的选项。
下一步,编译内核和它的模块:
```
$ make bzImage
$ make modules
```
这会产生一个叫作 `vmlinuz` 的文件,它是可引导内核的压缩版本。保存好旧的版本,再把新版本放入 `/boot` 文件夹:
```
$ sudo mv /boot/vmlinuz /boot/vmlinuz.nopatch
$ sudo cat arch/x86_64/boot/bzImage > /boot/vmlinuz
$ sudo mv /boot/System.map /boot/System.map.stock
$ sudo cp System.map /boot/System.map
```
到目前为止,你已经打上了补丁并且编译了内核和它的模块,你安装了内核,但你并没有安装任何模块。那就是最后的步骤:
```
$ sudo make modules_install
```
新的内核已经就位,并且它的模块也已经安装。
最后一步是更新你的 bootloader,让你的计算机在加载 Linux 内核之前知道它的位置。GRUB bootloader 使这一过程变得相当简单:
```
$ sudo grub2-mkconfig
```
### 现实生活中的编译
当然,现在没有人手动执行这些命令了。相反,请参考你的发行版的说明,使用发行版维护者所用的开发工具集来修改内核。这些工具集可能会创建一个集成了所有补丁的可安装软件包,通知包管理器进行升级,并为你更新 bootloader。
### 内核
操作系统和内核都是玄学,但要理解构成它们的组件并不难。下一次你看到某个技术无法应用在 Linux 上时深呼吸调查可用的驱动寻找一条捷径。Linux 比以前简单多了——包括内核。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/linux-kernel-21st-century
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[LuMing](https://github.com/LuuMing)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/sethhttps://opensource.com/users/greg-p
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/migration_innovation_computer_software.png?itok=VCFLtd0q (and old computer and a new computer, representing migration to new software or hardware)
[2]: https://nouveau.freedesktop.org/wiki/
[3]: http://fedoraproject.org
[4]: http://ubuntu.com
[5]: https://wiki.linuxfoundation.org/openprinting/database/foomatic
[6]: https://www.cups.org/
[7]: https://wireless.wiki.kernel.org/en/users/drivers
[8]: https://opensource.com/article/19/7/reboot-linux
[9]: https://opensource.com/sites/default/files/uploads/nvidia.jpg (Nvidia configuration application)
[10]: https://linuxwacom.github.io
[11]: https://developers.hp.com/hp-linux-imaging-and-printing
[12]: https://opensource.com/sites/default/files/uploads/hplip.jpg (HPLIP in action)
[13]: https://www.kernel.org/

[#]: collector: (lujun9972)
[#]: translator: (hopefully2333)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (4 open source cloud security tools)
[#]: via: (https://opensource.com/article/19/9/open-source-cloud-security)
[#]: author: (Alison NaylorAaron Rinehart https://opensource.com/users/asnaylorhttps://opensource.com/users/ansilvahttps://opensource.com/users/sethhttps://opensource.com/users/bretthunoldtcomhttps://opensource.com/users/aaronrineharthttps://opensource.com/users/marcobravo)
4 种开源云安全工具
======
查找并排除你存储在 AWS 和 GitHub 中的数据里的漏洞。
![Tools in a cloud][1]
如果你的日常工作是开发者、系统管理员、全栈工程师或者网站可靠性工程师,工作内容包括使用 Git 向 GitHub 推送、提交和拉取代码,并部署到亚马逊云服务(AWS)上,那么安全性就是一个需要持续考虑的问题。幸运的是,开源工具能帮助你的团队避免那些会让你的组织损失数千美元的常见错误。
本文介绍了四种开源工具,当你在 GitHub 和 AWS 上进行开发时,它们能帮助你提升项目的安全性。同样,本着开源的精神,我邀请了三位安全专家共同为本文做出贡献:Travis McPeak,网飞高级云安全工程师;Rich Monk,红帽首席高级信息安全分析师;以及 Alison Naylor,红帽首席信息安全分析师。
我们已经按场景对每个工具做了区分,但它们并不是相互排斥的。
### 1\. 使用 gitrob 发现敏感数据
你需要在你们团队的 Git 仓库中找出现存的任何敏感信息,以便将其删除。采用红/蓝队模型可能更有意义:信息安全团队被分为两部分,攻击团队(又名红队)和防守团队(又名蓝队)。让红队尝试渗透你的系统和应用,要远远好于等待黑客来实际攻击。你的红队可能会尝试使用 Gitrob,该工具可以克隆并爬取你的 Git 仓库,以此来寻找凭证和敏感信息。
即使像 Gitrob 这样的工具可以被用来造成破坏,但这里的目的是让你的信息安全团队使用它来发现无意间泄露的属于您的组织的敏感信息(比如 AWS 的密钥对或者是其他被失误提交上去的凭证)。这样,你可以修整你的仓库并清除敏感数据——希望能赶在黑客发现它们之前。记住不光要修改受影响的文件,还要删除它们的历史记录。
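Gitrob 这类工具所做事情的一个极简缩影,可以用 grep 勾勒出来(仅为示意,并非 Gitrob 本身;真实工具会扫描整个提交历史并识别多种模式。AKIA 前缀是 AWS 访问密钥 ID 的常见开头):

```shell
#!/bin/sh
# 极简示意:在目录中查找形似 AWS 访问密钥 ID 的字符串(并非 Gitrob)
scan_for_keys() {
  grep -rEl 'AKIA[0-9A-Z]{16}' "$1" 2>/dev/null
}
```

注意,真正的检查应当覆盖仓库的全部历史,而不仅仅是工作副本。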
### 2\. 使用 git-secrets 来避免合并敏感数据
虽然在你的 Git 仓库里发现并移除敏感信息很重要,但在一开始就避免提交这些敏感信息岂不是更好?使用 git-secrets,即使错误地提交了敏感信息,也能使你免于公开出丑。这款工具可以帮助你设置钩子,扫描你的提交、提交信息和合并,寻找会暴露在公共仓库里的敏感信息。注意你选择的模式要匹配你的团队使用的凭证,比如 AWS 访问密钥和秘密密钥。如果发现了一个匹配项,你的提交就会被拒绝,一个潜在的危机就此得到避免。
为你已有的仓库设置 git-secrets 是很简单的,而且你可以使用一个全局设置来保护所有你以后要创建或克隆的仓库。你同样可以在公开你的仓库之前,使用 git-secrets 来扫描它们(包括之前所有的历史版本)。
### 3\. 使用 Key Conjurer 创建临时凭证
有额外的保险来防止无意间提交存储的敏感信息固然很好,但我们还可以做得更好:完全不存储任何凭证。追踪凭证(谁访问了它、存储在哪里、上次被访问是什么时候)太麻烦了。然而,以编程的方式生成的临时凭证可以避免大量此类问题,从而巧妙地绕开了在 Git 仓库里存储敏感信息的问题。使用 Key Conjurer,它就是为解决这一需求而创建的。想了解 Riot Games 为什么创建 Key Conjurer,以及他们是如何开发它的,请阅读《Key Conjurer:我们的最小权限之路》。
### 4\. 使用 Repokid 自动化地提供最小权限
任何一个上过安全入门课程的人都知道,最小权限是基于角色的访问控制的最佳实践。遗憾的是,在学校之外,手动运用最小权限策略十分艰难。一个应用的访问需求会随着时间的流逝而变化,而开发人员又太忙,没时间去手动削减它们的权限。Repokid 使用 AWS 提供的有关身份和访问管理(IAM)的数据来自动化地调整访问策略。即使在超大型组织中,Repokid 也能在 AWS 中自动实现最小权限设置。
### 工具而已,又不是大招
这些工具并不是什么灵丹妙药,它们只是工具!所以,在尝试使用这些工具或其他的控制手段之前,请和你组织里的同事一起,确保你们已经理解了所用云服务的使用情况和用法模式。
应该严肃对待你的云服务和代码仓库服务,并熟悉相关的最佳实践。下面的文章将帮助你做到这一点。
**对于 AWS:**
  * [管理 AWS 访问密钥的最佳实践][11]
  * [AWS 安全审计指南][12]
**对于 GitHub:**
* [介绍一种新方法来让你的代码保持安全][13]
  * [GitHub 企业版安全最佳实践][14]
最后但同样重要的一点:和你的安全团队保持联系,他们应该可以为你团队的成功提供想法、建议和指南。永远记住:安全是每个人的责任,而不仅仅是安全团队的责任。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/open-source-cloud-security
作者:[Alison NaylorAaron Rinehart][a]
选题:[lujun9972][b]
译者:[hopefully2333](https://github.com/hopefully2333)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/asnaylorhttps://opensource.com/users/ansilvahttps://opensource.com/users/sethhttps://opensource.com/users/bretthunoldtcomhttps://opensource.com/users/aaronrineharthttps://opensource.com/users/marcobravo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cloud_tools_hardware.png?itok=PGjJenqT (Tools in a cloud)
[2]: https://twitter.com/travismcpeak?lang=en
[3]: https://github.com/rmonk
[4]: https://www.linkedin.com/in/alperkins/
[5]: https://github.com/michenriksen/gitrob
[6]: https://help.github.com/en/articles/removing-sensitive-data-from-a-repository
[7]: https://github.com/awslabs/git-secrets
[8]: https://github.com/RiotGames/key-conjurer
[9]: https://technology.riotgames.com/news/key-conjurer-our-policy-least-privilege
[10]: https://github.com/Netflix/repokid
[11]: https://docs.aws.amazon.com/general/latest/gr/aws-access-keys-best-practices.html
[12]: https://docs.aws.amazon.com/general/latest/gr/aws-security-audit-guide.html
[13]: https://github.blog/2019-05-23-introducing-new-ways-to-keep-your-code-secure/
[14]: https://github.blog/2015-10-09-github-enterprise-security-best-practices/

[#]: collector: (lujun9972)
[#]: translator: (amwps290)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Adding themes and plugins to Zsh)
[#]: via: (https://opensource.com/article/19/9/adding-plugins-zsh)
[#]: author: (Seth Kenlon https://opensource.com/users/sethhttps://opensource.com/users/sethhttps://opensource.com/users/sethhttps://opensource.com/users/seth)
给 Zsh 添加主题和插件
======
通过 Oh My Zsh 安装的主题和插件来扩展 Zsh 的功能
![Someone wearing a hardhat and carrying code ][1]
在我的[前文][2]中,我向大家展示了如何安装并使用 [Z-Shell][2] (Zsh)。对于某些用户来说Zsh 最令人激动的是它可以安装主题。Zsh 安装主题非常容易,一方面是因为有非常活跃的社区为 Z-Shell 设计主题,另一方面是因为有 [Oh My Zsh][3] 这个项目。这使得安装主题变得轻而易举。
主题的变化会立刻吸引你的注意力,因此如果你安装了 Zsh 并将它设为默认 Shell,却不喜欢默认主题的样子,可以立即从 Oh My Zsh 自带的 100 多个主题中更换。Oh My Zsh 不仅拥有大量精美的主题,还有数以百计的扩展 Zsh 功能的插件。
### 安装 Oh My Zsh
Oh My Zsh 的[官网][3]建议你使用一个脚本,在联网的情况下安装这个软件包。尽管 Oh My Zsh 项目几乎可以肯定是值得信赖的,但盲目地在你的电脑上运行一个脚本终究不是个好主意。如果你想运行这个脚本,可以先把它下载下来,看一下它实现了什么功能,在确信了解了它的所作所为之后,再运行它。
如果你下载了脚本并且阅读了它,你就会发现安装过程仅仅只有三步:
#### 1\. 克隆 oh-my-zsh
第一步,克隆 oh-my-zsh 库到 **~/.oh-my-zsh** 目录:
```
% git clone http://github.com/robbyrussell/oh-my-zsh ~/.oh-my-zsh
```
#### 2\. 切换配置文件
下一步,备份你已有的 **.zshrc** 文件,然后将 oh-my-zsh 自带的配置文件移动到这个位置。只要你的 **mv** 命令支持 **-b** 选项,这两步操作就可以一步完成:
```
% mv -b \
~/.oh-my-zsh/templates/zshrc.zsh-template \
~/.zshrc
```
#### 3\. 编辑配置文件
默认情况下Oh My Zsh 自带的配置文件是非常简陋的。如果你想将你自己的 **~/.zshrc** 文件合并到 **.oh-my-zsh** 的配置文件中。你可以使用 [cat][4] 命令将你的旧的配置文件添加到新文件的末尾。
```
% cat ~/.zshrc~ >> ~/.zshrc
```
看一下默认的配置文件以及它提供的一些选项。用你最喜欢的编辑器打开 **~/.zshrc** 文件。这个文件有非常良好的注释。这是了解它的一个非常好的方法。
例如,你可以更改 **.oh-my-zsh** 目录的位置。安装时,它默认位于你的家目录;但是,根据 [Free Desktop][5] 所定义的现代 Linux 规范,这个目录应当放置于 **~/.local/share**。你可以在配置文件中进行修改,如下所示:
```
# Path to your oh-my-zsh installation.
export ZSH=$HOME/.local/share/oh-my-zsh
```
然后将 .oh-my-zsh 目录移动到你新配置的目录下
```
% mv ~/.oh-my-zsh \
$HOME/.local/share/oh-my-zsh
```
如果你使用的是 MacOS ,这个目录可能会有点含糊不清,但是最合适的位置可能是在 **$HOME/Library/Application\ Support** 。
### 重新启动 Zsh
编辑配置文件之后,你必须重新启动你的 Shell。在这之前,请确定你的任何操作都已正确完成。例如,在修改了 **.oh-my-zsh** 目录的路径之后,不要忘记将目录移动到新的位置。如果你不想重新启动 Shell,可以使用 **source** 命令来使配置文件生效:
```
% source ~/.zshrc
 .oh-my-zsh git:(master) ✗
```
你可以忽略任何关于缺失更新文件的警告;它们将在重启 Shell 时重新解析。
### 更换你的主题
安装好 oh-my-zsh 之后,你的 Zsh 主题被默认设置为 **robbyrussell**,这是项目维护者使用的主题。相比默认提示符,这个主题的改动很小,仅仅改变了提示符的颜色。
你可以通过列出 **.oh-my-zsh** 目录下的所有文件来查看所有安装的主题:
```
 .oh-my-zsh git:(master) ✗ ls \
~/.local/share/oh-my-zsh/themes
3den.zsh-theme
adben.zsh-theme
af-magic.zsh-theme
afowler.zsh-theme
agnoster.zsh-theme
[...]
```
想在切换主题之前查看它的样子,可以访问 Oh My Zsh 的 [wiki][6] 页面。想查看更多主题,可以访问[外部主题][7] wiki 页面。
大部分的主题是非常易于安装和使用的,仅仅需要改变 **.zshrc** 文件中的配置选项然后重新载入配置文件。
```
➜ ~ sed -i \
's/_THEME=\"robbyrussell\"/_THEME=\"linuxonly\"/g' \
~/.zshrc
➜ ~ source ~/.zshrc
seth@darkstar:pts/0-&gt;/home/skenlon (0) ➜
```
其他的主题可能需要一些额外的配置。例如,为了使用 **agnoster** 主题,你必须先安装 Powerline 字体。这是一个开源字体,如果你使用 Linux 操作系统的话,这个字体很可能在你的软件库中存在。使用下面的命令安装这个字体:
```
➜ ~ sudo dnf install powerline-fonts
```
在配置文件中更改你的主题:
```
➜ ~ sed -i \
's/_THEME=\"linuxonly\"/_THEME=\"agnoster\"/g' \
~/.zshrc
```
重新启动你的 Shell(简单的 **source** 并不会起作用)。一旦重启,你就可以看到新的主题:
![agnoster theme][8]
### 安装插件
Oh My Zsh 有超过 200 的插件,你可以在 **.oh-my-zsh/plugins** 中看到他们。每一个扩展目录下都有一个 README 文件解释了这个插件的作用。
一些插件相当简单。例如,**dnf**、**ubuntu**、**brew** 和 **macports** 插件仅仅是为了简化与 DNF、Apt、Homebrew 和 MacPorts 的交互操作而定义的一些别名。
而其他的一些插件则较为复杂。**git** 插件默认是激活的,当你所在的目录是一个 git 仓库时,这个插件会更新你的 Shell 提示符,以显示当前的分支以及是否有未提交的更改。
要激活一个插件,把它添加到 **~/.zshrc** 的插件列表中即可。例如,要添加 **dnf** 和 **pass** 插件,按照如下方式更改:
```
plugins=(git dnf pass)
```
保存修改,重新启动你的 Shell。
```
% source ~/.zshrc
```
这个扩展现在就可以使用了。你可以通过使用 **dnf** 提供的别名来测试一下:
```
% dnfs fop
====== Name Exactly Matched: fop ======
fop.noarch : XSL-driven print formatter
```
不同的插件做不同的事,因此你可以一次安装一两个插件来帮你学习新的特性和功能。
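如果想快速查看当前启用了哪些插件,可以直接从配置文件里把 `plugins=(...)` 一行解析出来。下面是一个假设性的小函数(`list_plugins` 为本文虚构,假定该列表写在一行内):

```shell
#!/bin/sh
# 假设性示意:从指定的 zshrc 文件中提取 plugins=(...) 列表
list_plugins() {
  grep -o 'plugins=([^)]*)' "$1" | sed 's/plugins=(//;s/)//'
}
```

用法示例:`list_plugins ~/.zshrc` 会输出形如 `git dnf pass` 的插件名列表。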
### 兼容性
一些 Oh My Zsh 插件具有通用性:如果一个插件声称与 Bash 兼容,那么它就可以在 Bash 中使用。另一些插件需要 Zsh 特有的功能,因此并不是所有插件都能在 Bash 下工作。在 Bash 中,你可以通过 **source** 加载兼容的插件,例如 **dnf**、**ubuntu**、**[firewalld][10]** 等。例如:
```
if [ -d $HOME/.local/share/oh-my-zsh/plugins ]; then
        source $HOME/.local/share/oh-my-zsh/plugins/dnf/dnf.plugin.zsh
fi
```
### 选择或者不选择 Zsh
Z-shell 的内置功能和它由社区贡献的扩展功能都非常强大。你可以把它当成你的主 Shell 使用,你也可以在你休闲娱乐的时候尝试一下。这取决于你的爱好。
在下方评论中告诉我们你最喜爱的主题和插件吧!
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/adding-plugins-zsh
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[amwps290](https://github.com/amwps290)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/sethhttps://opensource.com/users/sethhttps://opensource.com/users/sethhttps://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/build_structure_tech_program_code_construction.png?itok=nVsiLuag (Someone wearing a hardhat and carrying code )
[2]: https://opensource.com/article/19/9/getting-started-zsh
[3]: https://ohmyz.sh/
[4]: https://opensource.com/article/19/2/getting-started-cat-command
[5]: http://freedesktop.org
[6]: https://github.com/robbyrussell/oh-my-zsh/wiki/Themes
[7]: https://github.com/robbyrussell/oh-my-zsh/wiki/External-themes
[8]: https://opensource.com/sites/default/files/uploads/zsh-agnoster.jpg (agnoster theme)
[9]: https://opensource.com/resources/what-is-git
[10]: https://opensource.com/article/19/7/make-linux-stronger-firewalls

[#]: collector: (lujun9972)
[#]: translator: (wenwensnow)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Hone advanced Bash skills by building Minesweeper)
[#]: via: (https://opensource.com/article/19/9/advanced-bash-building-minesweeper)
[#]: author: (Abhishek Tamrakar https://opensource.com/users/tamrakar)
通过编写扫雷游戏提高你的 Bash 技巧
======
那些令人怀念的经典游戏可是提高编程能力的好素材。今天就让我们仔细探索一番,怎么用 Bash 编写一个扫雷程序。
![bash logo on green background][1]
我在编程教学方面不是专家,但当我想更好地掌握某一样东西时,会试着找出让自己乐在其中的方法。比方说,当我想在 shell 编程方面更进一步时,我决定用 Bash 编写一个[扫雷][2]游戏来加以练习。
如果你是一个有经验的 Bash 程序员,并且希望在提高技巧的同时乐在其中,你可以跟着我在终端中编写一个你自己的扫雷版本。完整代码可以在这个 [GitHub 库][3]中找到。
### 做好准备
在我编写任何代码之前,我列出了游戏所必须的几个部分:
1. 显示雷区
2. 创建玩家逻辑
3. 创建判断单元格是否可选的逻辑
4. 记录已选择和可用单元格的个数
5. 创建游戏结束逻辑
### 显示雷区
在扫雷中,游戏界面是一个由 2D 数组(列和行)组成的不透明小方格。每一格下都有可能藏有地雷。玩家的任务就是找到那些不含雷的方格,并且在这一过程中,不能点到地雷。Bash 版本的扫雷使用 10x10 的矩阵,实际逻辑则由一个简单的 Bash 数组来完成。
首先,我生成了一些随机数字,作为地雷在雷区里的位置。为了控制地雷的数量,在开始编写代码之前就这么做会容易一些。实现这一功能的逻辑可以更好,但我这么做,是为了让游戏实现保持简洁,并留有改进空间。(我编写这个游戏纯属娱乐,但如果你能将它修改得更好,我也是很乐意的。)
下面这些变量在整个过程中是不变的,声明它们是为了随机生成数字。就像下面的变量 a - g它们会被用来计算可选择地雷的值
```
# 变量
score=0 # 会用来存放游戏分数
#下面这些变量,用来随机生成可选择地雷的实际值
a="1 10 -10 -1"
b="-1 0 1"
c="0 1"
d="-1 0 1 -2 -3"
e="1 2 20 21 10 0 -10 -20 -23 -2 -1"
f="1 2 3 35 30 20 22 10 0 -10 -20 -25 -30 -35 -3 -2 -1"
g="1 4 6 9 10 15 20 25 30 -30 -24 -11 -10 -9 -8 -7"
#
# 声明
declare -a room # 声明一个room 数组,它用来表示雷区的每一格。
```
接下来,我会用行号 0-9、列标 a-j 显示出游戏界面,并且使用一个 10x10 的矩阵作为雷区。M[10][10] 是一个索引从 0 到 99、有 100 个值的数组。如想了解更多关于 Bash 数组的内容,请阅读[_那些关于 Bash 你所不了解的事: Bash 数组简介_][4]。
创建一个叫 **plough** 的函数,我们先将标题显示出来:两个空行、列头和一行 “-”,以示意往下是游戏界面:
```
printf '\n\n'
printf '%s' "     a   b   c   d   e   f   g   h   i   j"
printf '\n   %s\n' "-----------------------------------------"
```
然后,我初始化一个计数器变量 **r**,它会用来记录已显示多少横行。注意,稍后在游戏代码中,我们会用同一个变量 **r** 作为我们的数组索引。在 [Bash **for** 循环][5]中,用 **seq** 命令从 0 增加到 9并用 **%d** 占位符显示行号(即被 **seq** 定义的变量 `$row`
```
r=0 # our counter
for row in $(seq 0 9); do
printf '%d ' "$row" # 显示 行数 0-9
```
在我们接着往下做之前,让我们看看到现在都做了什么。我们先横着显示 **[a-j]** 然后再将 **[0-9]** 的行号显示出来,我们会用这两个范围,来确定用户选择的确切位置。
接着,在每行中,插入列,所以是时候写一个新的 **for** 循环了。 这一循环管理着每一列,也就是说,实际上是生成游戏界面的每一格。我添加了一些说明函数,你能在源码中看到它的完整实现。 对每一格来说,我们需要一些让它看起来像地雷的东西,所以我们先用一个点(.)来初始化空格。实现这一想法,我们用的是一个叫[**is_null_field**][6] 的自定义函数。 同时,我们需要一个存储每一格具体值的数组,这儿会用到之前已定义的全局数组 **[room][7]** , 并用 [变量 **r**][8]作为索引。 随着 **r** 的增加,遍历所有单元格,并随机部署地雷。
```
  for col in $(seq 0 9); do
((r+=1)) # 循环完一列行数加一
is_null_field $r # 假设这里有个函数,它会检查单元格是否为空;若为空,则将其初始值设为点(.)
printf '%s \e[33m%s\e[0m ' "|" "${room[$r]}" # 最后显示分隔符,注意,${room[$r]} 的第一个值为 '.',等于其初始值。
#结束 col 循环
done
```
最后,为了保持游戏界面整齐好看,我会在每行用一个竖线作为结尾,并在最后结束行循环:
```
printf '%s\n' "|" #显示出行分隔符
printf ' %s\n' "-----------------------------------------"
# 结束行循环
done
printf '\n\n'
```
完整的 **plough** 代码如下:
```
plough()
{
  r=0
  printf '\n\n'
  printf '%s' "     a   b   c   d   e   f   g   h   i   j"
  printf '\n   %s\n' "-----------------------------------------"
  for row in $(seq 0 9); do
    printf '%d  ' "$row"
    for col in $(seq 0 9); do
       ((r+=1))
       is_null_field $r
       printf '%s \e[33m%s\e[0m ' "|" "${room[$r]}"
    done
    printf '%s\n' "|"
    printf '   %s\n' "-----------------------------------------"
  done
  printf '\n\n'
}
```
我花了点时间来思考 **is_null_field** 的具体功能应该是什么。在最开始,我们需要游戏有一个固定的初始状态。你可以随便选择所有格子的初始值,可以是一个数字或者任意字符。我最后决定,所有单元格的初始值为一个点(.),因为我觉得这样会让游戏界面更好看。下面就是这一函数的完整代码:
```
is_null_field()
{
local e=$1 # 在数组 room 中,我们已经用过循环变量 r 了,这次我们用 e
if [[ -z "${room[$e]}" ]]; then
room[$e]="." # 这里用点(.)来初始化每一个单元格
fi
}
```
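可以用一个独立的小例子验证这一初始化逻辑(为保持自洽,这里的函数体统一使用参数 **e**

```
declare -a room
is_null_field()
{
  local e=$1
  if [[ -z "${room[$e]}" ]]; then
    room[$e]="."
  fi
}
is_null_field 5                # 第 5 格还没有值,被置为点
room[7]="X"
is_null_field 7                # 第 7 格已有值,保持不变
echo "${room[5]} ${room[7]}"   # 输出:. X
```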
现在,我已经初始化了所有的格子,只要再用一个很简单的函数,就能得出当前游戏中还有多少单元格可以操作:
```
get_free_fields()
{
free_fields=0 # 初始化变量
for n in $(seq 1 ${#room[@]}); do
if [[ "${room[$n]}" = "." ]]; then # 检查当前单元格是否等于初始值(.),结果为真,则记为空余格子。
((free_fields+=1))
    fi
  done
}
```
这是显示出来的游戏界面,**[a-j]** 为列,**[0-9]** 为行。
![Minefield][9]
### 创建玩家逻辑
玩家操作背后的逻辑在于,先从 [stdin][10] 中读取数据作为坐标,然后再找出对应位置实际包含的值。这里用到了 Bash 的[参数扩展][11]来设法得到行列数,然后将代表列数的字母传给 case 语句,从而得到其对应的列数。为了更好地理解这一过程,可以看看下面这段代码中变量 **o** 所对应的值。举个例子,玩家输入了 **c3**,这时 Bash 将其分成两个字符:**c** 和 **3**。为了简单起见,我跳过了如何处理无效输入的部分。
```
colm=${opt:0:1} # 得到第一个字符,一个字母
ro=${opt:1:1} # 得到第二个字符,一个整数
case $colm in
a ) o=1;; # 最后,通过字母得到对应列数。
b ) o=2;;
    c ) o=3;;
    d ) o=4;;
    e ) o=5;;
    f ) o=6;;
    g ) o=7;;
    h ) o=8;;
    i ) o=9;;
    j ) o=10;;
  esac
```
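顺带一提,这个 case 映射也可以用 ASCII 码运算一步完成(只是思路示意,原脚本并未采用这种写法):

```
opt="c3"          # 假设玩家输入了 c3
colm=${opt:0:1}   # 字母 c
ro=${opt:1:1}     # 数字 3
o=$(( $(printf '%d' "'$colm") - 96 ))   # 'a' 的 ASCII 码是 97故 a->1、b->2、……
echo "$o"         # 输出3
```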
下面的代码会计算,用户所选单元格实际对应的数字,然后将结果储存在变量中。
这里也多次用到了 **shuf** 命令,它是一个专门用来生成随机序列的 [Linux 命令][12]。**-i** 选项用来指定需要打乱的数字或范围,**-n** 选项则规定最多输出几个值。Bash 中可以在两对圆括号内进行[数学计算][13],后面我们会多次用到。
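**shuf** 的这两个选项可以先单独试一下(输出是随机的,所以下面不标注具体结果):

```
n=$(shuf -i 0-5 -n 1)      # 在 0 到 5 之间随机取一个整数
echo "$n"
pick=$(shuf -e a b c -n 1) # 从列表 a b c 中随机取一个
echo "$pick"
```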
还是沿用之前的例子,玩家输入了 **c3**,它被转化成了 **ro=3** 和 **o=3**(通过上面的 case 语句,字母 **c** 被转化为对应的整数)。把它们带进公式,就得到最终值 **i**
```
i=$(((ro*10)+o)) # 遵循运算规则,算出最终值
is_free_field $i $(shuf -i 0-5 -n 1) # 调用自定义函数,判断其指向空/可选择单元格。
```
仔细观察这个计算过程,看看最终结果 '**i**' 是如何计算出来的:
```
i=$(((ro*10)+o))
i=$(((3*10)+3))=$((30+3))=33
```
最后结果是 33。在我们的游戏界面上玩家输入的坐标指向了第 33 个单元格,也就是第 3 行(行号从 0 开始,否则就是第 4 行)第 3 列。
### 创建判断单元格是否可选的逻辑
为了找到地雷,在将坐标转化,并找到实际位置之后,程序会检查这一单元格是否可选。如不可选,程序会显示一条警告信息,并要求玩家重新输入坐标。
在这段代码中,单元格是否可选,是由数组里对应的值是否为点(**.**)决定的。如果可选,则重置单元格对应的值,并更新分数;反之,因为其对应值不为点,则设置变量 **not_allowed**。为简单起见,游戏中[警告消息][14]这部分源码,我会留给读者们自己去探索。
```
is_free_field()
{
  local f=$1
  local val=$2
  not_allowed=0
  if [[ "${room[$f]}" = "." ]]; then
    room[$f]=$val
    score=$((score+val))
  else
    not_allowed=1
  fi
}
```
![Extracting mines][15]
如果输入坐标有效,且对应位置不是地雷,就会如下图所示:玩家输入 **h6** 后,游戏界面会出现一些随机生成的值,这些值会被加入用户得分。
![Extracting mines][16]
还记得我们开头定义的变量 a - g 吗?我会用它们来确定随机生成地雷的具体值。根据玩家输入的坐标,程序会依据随机选中的 **m** 所对应的数字列表,来生成周围其他单元格的值(如上图所示)。之后将这些偏移值和初始输入坐标相加,最后结果放在 **i**(计算过程如上)中。
请注意下面代码中的 **X**,它是我们唯一的游戏结束标志。我们将它添加到随机列表中。在 **shuf** 命令的魔力下X可以在任意情况下出现但如果你足够幸运的话也可能一直不会出现。
```
m=$(shuf -e a b c d e f g X -n 1) # 将 X 添加到随机列表中,当 m=X,游戏结束
if [[ "$m" != "X" ]]; then # X将会是我们爆炸地雷游戏结束的触发标志
for limit in ${!m}; do # !m 代表m变量的值
field=$(shuf -i 0-5 -n 1) # 然后再次获得一个随机数字
index=$((i+limit)) # 将m中的每一个值和index加起来直到列表结尾
is_free_field $index $field
    done
```
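这段代码里的 `${!m}` 是 Bash 的间接扩展:它把 **m** 的值当作另一个变量的名字,再取那个变量的值。单独演示一下:

```
b="-1 0 1"
m="b"                    # m 里存的是变量名 b
for limit in ${!m}; do   # ${!m} 展开为 b 的值,即 -1 0 1
  echo "$limit"
done
```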
我想要游戏界面中,所有随机显示出来的单元格,都靠近玩家选择的单元格。
![Extracting mines][17]
### 记录已选择和可用单元格的个数
这个程序需要记录游戏界面中哪些单元格是可选择的。否则,即使所有单元格都已被选中过,程序还会一直让用户输入数据。为了实现这一功能,我创建了一个叫 **free_fields** 的变量,初始值为 0然后用一个 **for** 循环,记录下游戏界面中可选择单元格的数量。如果单元格所对应的值为点(**.**),则 **free_fields** 加一。
```
get_free_fields()
{
  free_fields=0
  for n in $(seq 1 ${#room[@]}); do
    if [[ "${room[$n]}" = "." ]]; then
      ((free_fields+=1))
    fi
  done
}
```
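用一个小数组就能验证这个计数逻辑(沿用原脚本从 1 开始的下标):

```
declare -a room
room[1]="."
room[2]="5"
room[3]="."
room[4]="X"
free_fields=0
for n in $(seq 1 ${#room[@]}); do
  if [[ "${room[$n]}" = "." ]]; then
    ((free_fields+=1))
  fi
done
echo "$free_fields"   # 输出2
```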
等下,如果 **free_fields=0** 呢?这意味着,玩家已选择过所有单元格。如果想更好地理解这一部分,可以看看这里的[源代码][18]。
```
if [[ $free_fields -eq 0 ]]; then # 这意味着你已选择过所有格子
printf '\n\n\t%s: %s %d\n\n' "You Win" "you scored" "$score"
      exit 0
fi
```
### 创建游戏结束逻辑
对于游戏结束这种情况,我们这里使用了一些很巧妙的技巧,将结果在屏幕中央显示出来。我把这部分留给读者朋友们自己去探索。
```
if [[ "$m" = "X" ]]; then
g=0 # 为了在参数扩展中使用它
room[$i]=X # 覆盖此位置原有的值并将其赋值为X
for j in {42..49}; do # 在游戏界面中央,
out="gameover"
k=${out:$g:1} # 在每一格中显示一个字母
room[$j]=${k^^}
      ((g+=1))
    done
fi
```
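这里用到了两个小技巧:`${out:$g:1}` 从字符串中取出第 g 个字符(从 0 数起),`${k^^}` 把它转成大写。可以单独验证:

```
out="gameover"
g=0
k=${out:$g:1}    # 取出第 0 个字符g
echo "${k^^}"    # 输出G
```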
最后,我们显示出玩家最关心的两行。
```
if [[ "$m" = "X" ]]; then
      printf '\n\n\t%s: %s %d\n' "GAMEOVER" "you scored" "$score"
      printf '\n\n\t%s\n\n' "You were just $free_fields mines away."
      exit 0
fi
```
![Minecraft Gameover][20]
文章到这里就结束了,朋友们!如果你想了解更多,可以查看我的 [GitHub 库][3],那儿有这个扫雷游戏的源代码,你还能找到更多用 Bash 编写的游戏。我希望这篇文章能激起你学习 Bash 的兴趣,并乐在其中。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/advanced-bash-building-minesweeper
作者:[Abhishek Tamrakar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/tamrakar
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background)
[2]: https://en.wikipedia.org/wiki/Minesweeper_(video_game)
[3]: https://github.com/abhiTamrakar/playground/tree/master/bash_games
[4]: https://opensource.com/article/18/5/you-dont-know-bash-intro-bash-arrays
[5]: https://opensource.com/article/19/6/how-write-loop-bash
[6]: https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L114-L120
[7]: https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L41
[8]: https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L74
[9]: https://opensource.com/sites/default/files/uploads/minefield.png (Minefield)
[10]: https://en.wikipedia.org/wiki/Standard_streams#Standard_input_(stdin)
[11]: https://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html
[12]: https://linux.die.net/man/1/shuf
[13]: https://www.tldp.org/LDP/abs/html/dblparens.html
[14]: https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L143-L177
[15]: https://opensource.com/sites/default/files/uploads/extractmines.png (Extracting mines)
[16]: https://opensource.com/sites/default/files/uploads/extractmines2.png (Extracting mines)
[17]: https://opensource.com/sites/default/files/uploads/extractmines3.png (Extracting mines)
[18]: https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L91
[19]: https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L131-L141
[20]: https://opensource.com/sites/default/files/uploads/gameover.png (Minecraft Gameover)

[#]: collector: (lujun9972)
[#]: translator: (HankChow)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (CentOS 8 Installation Guide with Screenshots)
[#]: via: (https://www.linuxtechi.com/centos-8-installation-guide-screenshots/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
CentOS 8 安装图解
======
继 RHEL 8 发布之后CentOS 社区也发布了让人期待已久的 CentOS 8它提供两种模式
* CentOS stream滚动发布的 Linux 发行版,适用于需要频繁更新的开发者
* CentOS类似 RHEL 8 的稳定操作系统,系统管理员可以用其部署或配置服务和应用
在这篇文章中,我们会使用图解的方式演示 CentOS 8 的安装方法。
### CentOS 8 的新特性
* DNF 成为了默认的软件包管理器,同时 yum 仍然是可用的
* 使用 `nmcli` 和 `nmtui` 进行网络配置
* 使用 Podman 进行容器管理
* 引入了两个新的包仓库BaseOS 和 AppStream
* 使用 Cockpit 作为默认的系统管理工具
* 默认使用 Wayland 提供图形界面
* `iptables` 将被 `nftables` 取代
* 使用 Linux 内核 4.18
* 提供 PHP 7.2、Python 3.6、Ansible 2.8、VIM 8.0 和 Squid 4
### CentOS 8 所需的最低硬件配置:
* 2 GB RAM
* 64 位 x86 架构、2 GHz 或以上的 CPU
* 20 GB 硬盘空间
### CentOS 8 安装图解
### 第一步:下载 CentOS 8 ISO 文件
在 CentOS 官方网站 <https://www.centos.org/download/> 下载 CentOS 8 ISO 文件。
### 第二步:创建 CentOS 8 启动介质USB 或 DVD
下载 CentOS 8 ISO 文件之后,将 ISO 文件烧录到 USB 移动硬盘或 DVD 光盘中,作为启动介质。
然后重启系统,在 BIOS 设置中启动上面烧录好的启动介质。
### 第三步:选择“安装 CentOS Linux 8.0”选项
搭载了 CentOS 8 ISO 文件的启动介质启动之后,就可以看到以下这个界面。选择“安装 CentOS Linux 8.0”Install CentOS Linux 8.0)选项并按回车。
[![Choose-Install-CentOS8][1]][2]
### 第四步:选择偏好语言
选择想要在 CentOS 8 安装过程中使用的语言,然后继续。
[![Select-Language-CentOS8-Installation][1]][3]
### 第五步:准备安装 CentOS 8
这一步我们会配置以下内容:
* 键盘布局
* 日期和时间
* 安装来源
* 软件选择
* 安装目标
* Kdump
[![Installation-Summary-CentOS8][1]][4]
如上图所示安装向导已经自动提供了键盘布局Keyboard、时间和日期Time & Date、安装来源Installation Source和软件选择Software Selection的选项。
如果你需要修改以上的选项点击对应的图标就可以了。例如修改系统的时间和日期只需要点击“Time & Date”选择正确的时区然后点击“完成”Done即可。
[![TimeZone-CentOS8-Installation][1]][5]
在软件选择选项中选择安装的模式。例如“包含图形界面”Server with GUI选项会在安装后的系统中提供图形界面而如果想安装尽可能少的额外软件可以选择“最小化安装”Minimal Install
[![Software-Selection-CentOS8-Installation][1]][6]
这里我们选择“包含图形界面”,点击完成。
Kdump 功能默认是开启的。尽管这是一个强烈建议开启的功能,但也可以点击对应的图标将其关闭。
如果想要在安装过程中对网络进行配置可以点击“网络与主机名”Network & Host Name选项。
[![Networking-During-CentOS8-Installation][1]][7]
如果系统连接到启用了 DHCP 功能的调制解调器上,就会在启动网络接口的时候自动获取一个 IP 地址。如果需要配置静态 IP点击“配置”Configure并指定 IP 的相关信息。除此以外我们还将主机名设置为 linuxtechi.com。
完成网络配置后,点击完成。
最后我们要配置“安装目标”Installation Destination指定 CentOS 8 将要安装到哪一个硬盘,以及相关的分区方式。
[![Installation-Destination-Custom-CentOS8][1]][8]
点击完成。
如图所示,我为 CentOS 8 分配了 40 GB 的硬盘空间。有两种分区方案可供选择如果由安装向导进行自动分区可以选择“自动”Automatic选项如果想要自己手动进行分区可以选择“自定义”Custom选项。
在这里我们选择“自定义”选项,并按照以下的方式分区:
* /boot 2 GB (ext4 文件系统)
* / 12 GB (xfs 文件系统)
* /home 20 GB (xfs 文件系统)
* /tmp 5 GB (xfs 文件系统)
* swap 1 GB交换分区
首先创建 `/boot` 标准分区,设置大小为 2GB如下图所示
[![boot-partition-CentOS8-Installation][1]][9]
点击“添加挂载点”Add mount point
再创建第二个分区 `/`,并设置大小为 12GB。点击加号指定挂载点和分区大小点击“添加挂载点”即可。
[![slash-root-partition-centos8-installation][1]][10]
然后在页面上将 `/` 分区的分区类型从标准更改为逻辑卷LVM并点击“更新设置”update settings
[![Change-Partition-Type-CentOS8][1]][11]
如上图所示安装向导已经自动创建了一个卷组volume group。如果想要更改卷组的名称只需要点击卷组标签页中的“修改”Modify选项。
同样地,创建 `/home` 分区和 `/tmp` 分区,分别将大小设置为 20GB 和 5GB并设置分区类型为逻辑卷。
[![home-partition-CentOS8-Installation][1]][12]
[![tmp-partition-centos8-installation][1]][13]
最后创建<ruby>交换分区<rt>swap partition</rt></ruby>
[![Swap-Partition-CentOS8-Installation][1]][14]
点击“添加挂载点”。
在完成所有分区设置后,点击“完成”。
[![Choose-Done-after-manual-partition-centos8][1]][15]
在下一个界面点击“应用更改”Accept changes以上做的更改就会写入到硬盘中。
[![Accept-changes-CentOS8-Installation][1]][16]
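顺便一提,如果以后想跳过图形向导、用 Kickstart 自动完成同样的分区,上面的方案大致可以写成下面这样(仅为示意,卷组名 `cl` 与各数值均为假设,实际写法请以 CentOS 8 的 Kickstart 文档为准):

```
# Kickstart 分区片段(示意,大小单位为 MiB
part /boot --fstype=ext4 --size=2048
part pv.01 --size=1 --grow
volgroup cl pv.01
logvol /     --vgname=cl --name=root --fstype=xfs  --size=12288
logvol /home --vgname=cl --name=home --fstype=xfs  --size=20480
logvol /tmp  --vgname=cl --name=tmp  --fstype=xfs  --size=5120
logvol swap  --vgname=cl --name=swap --fstype=swap --size=1024
```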
### 第六步:选择“开始安装”
完成上述的所有更改后回到先前的安装概览界面点击“开始安装”Begin Installation以开始安装 CentOS 8。
[![Begin-Installation-CentOS8][1]][17]
下面这个界面表示安装过程正在进行中。
[![Installation-progress-centos8][1]][18]
要设置 root 用户的口令,只需要点击 “root 口令”Root Password选项输入一个口令然后点击“创建用户”User Creation选项创建一个本地用户。
[![Root-Password-CentOS8-Installation][1]][19]
填写新创建的用户的详细信息。
[![Local-User-Details-CentOS8][1]][20]
在安装完成后,安装向导会提示重启系统。
[![CentOS8-Installation-Progress][1]][21]
### 第七步:完成安装并重启系统
安装完成后要重启系统。只需点击“重启”Reboot按钮。
[![Installation-Completed-CentOS8][1]][22]
注意:重启完成后,记得要断开安装介质,并在 BIOS 中将启动介质设置为硬盘。
### 第八步:启动新安装的 CentOS 8 并同意许可证
在 grub 引导菜单中,选择 CentOS 8 进行启动。
[![Grub-Boot-CentOS8][1]][23]
同意 CentOS 8 的许可证,点击“完成”。
[![Accept-License-CentOS8-Installation][1]][24]
在下一个界面点击“完成配置”Finish Configuration
[![Finish-Configuration-CentOS8-Installation][1]][25]
### 第九步:配置完成后登录
同意 CentOS 8 的许可证以及完成配置之后,会来到登录界面。
[![Login-screen-CentOS8][1]][26]
使用刚才创建的用户以及对应的口令登录,按照提示进行操作,就可以看到以下界面。
[![CentOS8-Ready-Use-Screen][1]][27]
点击“开始使用 CentOS Linux”Start Using CentOS Linux
[![Desktop-Screen-CentOS8][1]][28]
以上就是 CentOS 8 的安装过程,至此我们已经完成了 CentOS 8 的安装。欢迎给我们发送评论。
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/centos-8-installation-guide-screenshots/
作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[HankChow](https://github.com/HankChow)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Choose-Install-CentOS8.jpg
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Select-Language-CentOS8-Installation.jpg
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Installation-Summary-CentOS8.jpg
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/09/TimeZone-CentOS8-Installation.jpg
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Software-Selection-CentOS8-Installation.jpg
[7]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Networking-During-CentOS8-Installation.jpg
[8]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Installation-Destination-Custom-CentOS8.jpg
[9]: https://www.linuxtechi.com/wp-content/uploads/2019/09/boot-partition-CentOS8-Installation.jpg
[10]: https://www.linuxtechi.com/wp-content/uploads/2019/09/slash-root-partition-centos8-installation.jpg
[11]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Change-Partition-Type-CentOS8.jpg
[12]: https://www.linuxtechi.com/wp-content/uploads/2019/09/home-partition-CentOS8-Installation.jpg
[13]: https://www.linuxtechi.com/wp-content/uploads/2019/09/tmp-partition-centos8-installation.jpg
[14]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Swap-Partition-CentOS8-Installation.jpg
[15]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Choose-Done-after-manual-partition-centos8.jpg
[16]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Accept-changes-CentOS8-Installation.jpg
[17]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Begin-Installation-CentOS8.jpg
[18]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Installation-progress-centos8.jpg
[19]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Root-Password-CentOS8-Installation.jpg
[20]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Local-User-Details-CentOS8.jpg
[21]: https://www.linuxtechi.com/wp-content/uploads/2019/09/CentOS8-Installation-Progress.jpg
[22]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Installation-Completed-CentOS8.jpg
[23]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Grub-Boot-CentOS8.jpg
[24]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Accept-License-CentOS8-Installation.jpg
[25]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Finish-Configuration-CentOS8-Installation.jpg
[26]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Login-screen-CentOS8.jpg
[27]: https://www.linuxtechi.com/wp-content/uploads/2019/09/CentOS8-Ready-Use-Screen.jpg
[28]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Desktop-Screen-CentOS8.jpg