diff --git a/published/20140510 Managing Digital Files (e.g., Photographs) in Files and Folders.md b/published/20140510 Managing Digital Files (e.g., Photographs) in Files and Folders.md new file mode 100644 index 0000000000..6c2f834fa2 --- /dev/null +++ b/published/20140510 Managing Digital Files (e.g., Photographs) in Files and Folders.md @@ -0,0 +1,619 @@ +数码文件与文件夹收纳术(以照片为例) +====== + +![](https://img.linux.net.cn/data/attachment/album/201910/05/000950xsxopomsrs55rrb5.jpg) + +- 更新 2014-05-14:增加了一些具体实例 +- 更新 2015-03-16:根据照片的 GPS 坐标过滤图片 +- 更新 2016-08-29:以新的 `filetags --filter` 替换已经过时的 `show-sel.sh` 脚本 +- 更新 2017-08-28: geeqier 视频缩略图的邮件评论 +- 更新 2018-03-06:增加了 Julian Kahnert 的链接 +- 更新 2018-05-06:增加了作者在 2018 Linuxtage Graz 大会上 45 分钟演讲的视频 +- 更新 2018-06-05:关于 metadata 的邮件回复 +- 更新 2018-07-22:移动文件夹结构的解释到一篇它自己的文章中 +- 更新 2019-07-09:关于在文件名中避免使用系谱和字符的邮件回复 + +每当度假或去哪游玩时我就会化身为一个富有激情的摄影师。所以,过去的几年中我积累了许多的 [JPEG][1] 文件。这篇文章中我会介绍我是如何避免 [供应商锁定][2](LCTT 译注:供应商锁定vendor lock-in,原为经济学术语,这里引申为避免过于依赖某一服务平台)造成受限于那些临时性的解决方案及数据丢失。相反,我更倾向于使用那些可以让我**投入时间和精力打理,并能长久使用**的解决方案。 + +这一(相当长的)攻略 **并不仅仅适用于图像文件**:我将进一步阐述像是文件夹结构、文件的命名规则等等许多领域的事情。因此,这些规范适用于我所能接触到的所有类型的文件。 + +在我开始传授我的方法之前,我们应该先就我将要介绍方法的达成一个共识,那就是我们是否有相同的需求。如果你对 [raw 图像格式][3]十分推崇,将照片存储在云端或其他你信赖的地方(对我而言可能不会),那么你可能不会认同这篇文章将要描述的方式了。请根据你的情况来灵活做出选择。 + +### 我的需求 + +对于 **将照片(或视频)从我的数码相机中导出到电脑里**,我只需要将 SD 卡插到我的电脑里并调用 `fetch-workflow` 软件。这一步也完成了**图像软件的预处理**以适用于我的文件命名规范(下文会具体论述),同时也可以将图片旋转至正常的方向(而不是横着)。 + +这些文件将会被存入到我的摄影收藏文件夹 `$HOME/tmp/digicam/`。在这一文件夹中我希望能**遍历我的图像和视频文件**,以便于**整理/删除、重命名、添加/移除标签,以及将一系列相关的文件移动到相应的文件夹中**。 + +在完成这些以后,我将会**浏览包含图像/电影文件集的文件夹**。在极少数情况下,我希望**在独立的图像处理工具**(比如 [GIMP][4])中打开一个图像文件。如果仅是为了**旋转 JPEG 文件**,我想找到一个快速的方法,不需要图像处理工具,并且是[以无损的方式][5]旋转 JPEG 图像。 + +我的数码相机支持用 [GPS][6] 坐标标记图像。因此,我需要一个方法来**对单个文件或一组文件可视化 GPS 坐标**来显示我走过的路径。 + +我想拥有的另一个好功能是:假设你在威尼斯度假时拍了几百张照片。每一个都很漂亮,所以你每张都舍不得删除。另一方面,你可能想把一组更少的照片送给家里的朋友。而且,在他们嫉妒的爆炸之前,他们可能只希望看到 20 多张照片。因此,我希望能够**定义并显示一组特定的照片子集**。 + +就独立性和**避免锁定效应**而言,我不想使用那种一旦公司停止产品或服务就无法使用的工具。出于同样的原因,由于我是一个注重隐私的人,**我不想使用任何基于云的服务**。为了让自己对新的可能性保持开放的心态,我不希望只在一个特定的操作系统平台才可行的方案上倾注全部的精力。**基本的东西必须在任何平台上可用**(查看、导航、……),而**全套需求必须可以在 GNU/Linux 上运行**,对我而言,我选择 Debian GNU/Linux。 + +在我传授当前针对上述大量需求的解决方案之前,我必须解释一下我的一般文件夹结构和文件命名约定,我也使用它来命名数码照片。但首先,你必须认清一个重要的事实: + +#### iPhoto、Picasa,诸如此类应被认为是有害的 + +管理照片集的软件工具确实提供了相当酷的功能。它们提供了一个良好的用户界面,并试图为你提供满足各种需求的舒适的工作流程。 + +对它们我确实遇到了很多大问题。它们几乎对所有东西都使用专有的存储格式:图像文件、元数据等等。当你打算在几年内换一个不同的软件,这是一个大问题。相信我:总有一天你会因为多种原因而**更换软件**。 + +如果你现在正打算更换相应的工具,你会意识到 iPhoto 或 Picasa 是分别存储原始图像文件和你对它们所做的所有操作的(旋转图像、向图像文件添加描述/标签、裁剪等等)。如果你不能导出并重新导入到新工具,那么**所有的东西都将永远丢失**。而无损的进行转换和迁移几乎是不可能的。 + +我不想在一个会锁住我工作的工具上投入任何精力。**我也拒绝把自己绑定在任何专有工具上**。我是一个过来人,希望你们吸取我的经验。 + +这就是我在文件名中保留时间戳、图像描述或标记的原因。文件名是永久性的,除非我手动更改它们。当我把照片备份或复制到 U 盘或其他操作系统时,它们不会丢失。每个人都能读懂。任何未来的系统都能够处理它们。 + +### 我的文件命名规范 + +这里有一个我在 [2018 Linuxtage Graz 大会][44]上做的[演讲][45],其中详细阐述了我的在本文中提到的想法和工作流程。 + +- [Grazer Linuxtage 2018 - The Advantages of File Name Conventions and Tagging](https://youtu.be/rckSVmYCH90) +- [备份视频托管在 media.CCC.de](https://media.ccc.de/v/GLT18_-_321_-_en_-_g_ap147_004_-_201804281550_-_the_advantages_of_file_name_conventions_and_tagging_-_karl_voit) + +我所有的文件都与一个特定的日期或时间有关,根据所采用的 [ISO 8601][7] 规范,我采用的是**日期戳**或**时间戳** + +带有日期戳和两个标签的示例文件名:`2014-05-09 Budget export for project 42 -- finance company.csv`。 + +带有时间戳(甚至包括可选秒)和两个标签的示例文件名:`2014-05-09T22.19.58 Susan presenting her new shoes -- family clothing.jpg`。 + +由于我使用的 ISO 时间戳冒号不适用于 Windows [NTFS 文件系统][8],因此,我用点代替冒号,以便将小时与分钟(以及可选的秒)区别开来。 + +如果是**持续的一段日期或时间**,我会将两个日期戳或时间戳用两个减号分开:`2014-05-09--2014-05-13 Jazz festival Graz -- folder 
tourism music.pdf`。 + +文件名中的时间/日期戳的优点是,除非我手动更改它们,否则它们保持不变。当通过某些不处理这些元数据的软件进行处理时,包含在文件内容本身中的元数据(如 [Exif][9])往往会丢失。此外,使用这样的日期/时间戳开始的文件名可以确保文件按时间顺序显示,而不是按字母顺序显示。字母表是一种[完全人工的排序顺序][10],对于用户定位文件通常不太实用。 + +当我想将**标签**关联到文件名时,我将它们放在原始文件名和[文件名扩展名][11]之间,中间用空格、两个减号和两端额外的空格分隔 ` -- `。我的标签是小写的英文单词,不包含空格或特殊字符。有时,我可能会使用 `quantifiedself` 或 `usergenerated` 这样的连接词。我[倾向于选择一般类别][12],而不是太过具体的描述标签。我在 Twitter [hashtags][13]、文件名、文件夹名、书签、诸如此类的博文等诸如此类地地方重用这些标签。 + +标签作为文件名的一部分有几个优点。通过使用常用的桌面搜索引擎,你可以在标签的帮助下定位文件。文件名称中的标签不会因为复制到不同的存储介质上而丢失。当系统使用与文件名之外的存储位置(如:元数据数据库、[点文件][14]、[备用数据流][15]等)存储元信息通常会发生丢失。 + +当然,通常在文件和文件夹名称中,**请避免使用特殊字符**、变音符、冒号等。尤其是在不同操作系统平台之间同步文件时。 + +我的**文件夹名命名约定**与文件的相应规范相同。 + +注意:由于 [Memacs][17] 的 [filenametimestamp][16] 模块的聪明之处,所有带有日期/时间戳的文件和文件夹都出现在我的 Org 模式的日历(日程)上的同一天/同一时间。这样,我就能很好地了解当天发生了什么,包括我拍的所有照片。 + +### 我的一般文件夹结构 + +在本节中,我将描述我的主文件夹中最重要的文件夹。注意:这可能在将来的被移动到一个独立的页面。或许不是。让我们等着瞧 :-) (LCTT 译注:后来这一节已被作者扩展并移动到另外一篇[文章](https://karl-voit.at/folder-hierarchy/)。) + +很多东西只有在一定的时间内才会引起人们的兴趣。这些内容包括快速浏览其内容的下载、解压缩文件以检查包含的文件、一些有趣的小内容等等。对于**临时的东西**,我有 `$HOME/tmp/` 子层次结构。新照片放在 `$HOME/tmp/digicam/` 中。我从 CD、DVD 或 USB 记忆棒临时复制的东西放在 `$HOME/tmp/fromcd/` 中。每当软件工具需要用户文件夹层次结构中的临时数据时,我就使用 `$HOME/tmp/Tools/`作为起点。我经常使用的文件夹是 `$HOME/tmp/2del/`:`2del` 的意思是“随时可以删除”。例如,我所有的浏览器都使用这个文件夹作为默认的下载文件夹。如果我需要在机器上腾出空间,我首先查看这个 `2del` 文件夹,用于删除内容。 + +与上面描述的临时文件相比,我当然也想将文件**保存更长的时间**。这些文件被移动到我的 `$HOME/archive/` 子层次结构中。它有几个子文件夹用于备份、我想保留的 web 下载类、我要存档的二进制文件、可移动媒体(CD、DVD、记忆棒、外部硬盘驱动器)的索引文件,和一个稍后(寻找一个合适的的目标文件夹)存档的文件夹。有时,我太忙或没有耐心的时候将文件妥善整理。是的,那就是我,我甚至有一个名为“现在不要烦我”的文件夹。这对你而言是否很怪?:-) + +我的归档中最重要的子层次结构是 `$HOME/archive/events_memories/` 及其子文件夹 `2014/`、`2013/`、`2012/` 等等。正如你可能已经猜到的,每个年份有一个**子文件夹**。其中每个文件中都有单个文件和文件夹。这些文件是根据我在前一节中描述的文件名约定命名的。文件夹名称以 [ISO 8601][7] 日期标签 “YYYY-MM-DD” 开头,后面跟着一个具有描述性的名称,如 `$HOME/archive/events_memories/2014/2014-05-08 Business marathon with/`。在这些与日期相关的文件夹中,我保存着各种与特定事件相关的文件:照片、(扫描的)pdf 文件、文本文件等等。 + +对于**共享数据**,我设置一个 `$HOME/share/` 子层次结构。这是我的 Dropbox 文件夹,我用各种各样的方法(比如 [unison][18])来分享数据。我也在我的设备之间共享数据:家里的 Mac Mini、家里的 GNU/Linux 笔记本、Android 手机,root-server(我的个人云),工作用的 Windows 笔记本。我不想在这里详细说明我的同步设置。如果你想了解相关的设置,可以参考另一篇相关的文章。:-) + +在我的 `$HOME/templates_tags/` 子层次结构中,我保存了各种**模板文件**([LaTeX][19]、脚本、…),插图和**徽标**,等等。 + +我的 **Org 模式** 文件,主要是保存在 `$HOME/org/`。我练习记忆力,不会解释我有多喜欢 [Emacs/Org 模式][20] 以及我从中获益多少。你可能读过或听过我详细描述我用它做的很棒的事情。具体可以在我的博客上查找 [我的 Emacs 标签][21],在 Twitter 上查找 [hashtag #orgmode][22]。 + +以上就是我最重要的文件夹子层次结构设置方式。 + +### 我的工作流程 + +哒哒哒,在你了解了我的文件夹结构和文件名约定之后,下面是我当前的工作流程和工具,我使用它们来满足我前面描述的需求。 + +请注意,**你必须知道你在做什么**。我这里的示例及文件夹路径和更多**只适用我的机器或我的环境**。**你必须采用相应的**路径、文件名等来满足你的需求! 
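+
+顺带一提,如果你想把这套命名规范搬进自己的脚本,下面是一个极简的示意性 Shell 片段。注意:它用 exiftool 代替了我的工具链中实际使用的 jhead/date2name,其中的文件名、描述和标签都是虚构的示例,仅用于演示规范本身:
+
+```
+#!/bin/sh
+# 示意片段:按 “时间戳 描述 -- 标签” 的规范重命名一张照片
+# 假设:系统已安装 exiftool;FILE、DESC、TAGS 均为虚构示例,请自行替换
+FILE="p1100386.jpg"
+DESC="Pick-nick in Graz"
+TAGS="food graz"
+
+# 读取 Exif 拍摄时间;用点替代冒号,以兼容 NTFS 文件系统
+STAMP=$(exiftool -s3 -d '%Y-%m-%dT%H.%M.%S' -DateTimeOriginal "$FILE")
+[ -n "$STAMP" ] || exit 1   # 没有 Exif 时间戳则退出
+
+mv -i -- "$FILE" "${STAMP} ${DESC} -- ${TAGS}.jpg"
+# 结果形如:2014-04-20T17.09.11 Pick-nick in Graz -- food graz.jpg
+```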
+ +#### 工作流程:将文件从 SD 卡移动到笔记本电脑、旋转人像图像,并重命名文件 + +当我想把数据从我的数码相机移到我的 GNU/Linux 笔记本上时,我拿出它的 mini SD 存储卡,把它放在我的笔记本上。然后它会自动挂载在 `/media/digicam` 上。 + +然后,调用 [getdigicamdata][23]。它做了如下几件事:它将文件从 SD 卡移动到一个临时文件夹中进行处理。原始文件名会转换为小写字符。所有的人像照片会使用 [jhead][24] 旋转。同样使用 jhead,我从 Exif 头的时间戳中生成文件名称中的时间戳。使用 [date2name][25],我也将时间戳添加到电影文件中。处理完所有这些文件后,它们将被移动到新的数码相机文件的目标文件夹: `$HOME/tmp/digicam/tmp/`。 + +#### 工作流程:文件夹索引、查看、重命名、删除图像文件 + +为了快速浏览我的图像和电影文件,我喜欢使用 GNU/Linux 上的 [geeqie][26]。这是一个相当轻量级的图像浏览器,它具有其他文件浏览器所缺少的一大优势:我可以添加通过键盘快捷方式调用的外部脚本/工具。通过这种方式,我可以通过任意外部命令扩展这个图像浏览器的特性。 + +基本的图像管理功能是内置在 geeqie:浏览我的文件夹层次结构、以窗口模式或全屏查看图像(快捷键 `f`)、重命名文件名、删除文件、显示 Exif 元数据(快捷键 `Ctrl-e`)。 + +在 OS X 上,我使用 [Xee][27]。与 geeqie 不同,它不能通过外部命令进行扩展。不过,基本的浏览、查看和重命名功能也是可用的。 + +#### 工作流程:添加和删除标签 + +我创建了一个名为 [filetags][28] 的 Python 脚本,用于向单个文件以及一组文件添加和删除标记。 + +对于数码照片,我使用标签,例如,`specialL` 用于我认为适合桌面背景的风景图片,`specialP` 用于我想展示给其他人的人像照片,`sel` 用于筛选,等等。 + +##### 使用 geeqie 初始设置 filetags + +向 geeqie 添加 `filetags` 是一个手动步骤:“Edit > Preferences > Configure Editors ...”,然后创建一个附加条目 `New`。在这里,你可以定义一个新的桌面文件,如下所示: + +``` +[Desktop Entry] +Name=filetags +GenericName=filetags +Comment= +Exec=/home/vk/src/misc/vk-filetags-interactive-adding-wrapper-with-gnome-terminal.sh %F +Icon= +Terminal=true +Type=Application +Categories=Application;Graphics; +hidden=false +MimeType=image/*;video/*;image/mpo;image/thm +Categories=X-Geeqie; +``` + +*add-tags.desktop* + +封装脚本 `vk-filetags-interactive-adding-wrapper-with-gnome-terminal.sh` 是必须的,因为我想要弹出一个新的终端,以便添加标签到我的文件: + +``` +#!/bin/sh + +/usr/bin/gnome-terminal \ + --geometry=85x15+330+5 \ + --tab-with-profile=big \ + --hide-menubar \ + -x /home/vk/src/filetags/filetags.py --interactive "${@}" + +#end +``` + +*vk-filetags-interactive-adding-wrapper-with-gnome-terminal.sh* + +在 geeqie 中,你可以在 “Edit > Preferences > Preferences ... 
> Keyboard”。我将 `t` 与 `filetags` 命令相关联。 + +这个 `filetags` 脚本还能够从单个文件或一组文件中删除标记。它基本上使用与上面相同的方法。唯一的区别是 `filetags` 脚本额外的 `--remove` 参数: + +``` +[Desktop Entry] +Name=filetags-remove +GenericName=filetags-remove +Comment= +Exec=/home/vk/src/misc/vk-filetags-interactive-removing-wrapper-with-gnome-terminal.sh %F +Icon= +Terminal=true +Type=Application +Categories=Application;Graphics; +hidden=false +MimeType=image/*;video/*;image/mpo;image/thm +Categories=X-Geeqie; +``` + +*remove-tags.desktop* + +``` +#!/bin/sh + +/usr/bin/gnome-terminal \ + --geometry=85x15+330+5 \ + --tab-with-profile=big \ + --hide-menubar \ + -x /home/vk/src/filetags/filetags.py --interactive --remove "${@}" + +#end +``` + +*vk-filetags-interactive-removing-wrapper-with-gnome-terminal.sh* + +为了删除标签,我创建了一个键盘快捷方式 `T`。 + +##### 在 geeqie 中使用 filetags + +当我在 geeqie 文件浏览器中浏览图像文件时,我选择要标记的文件(一到多个)并按 `t`。然后,一个小窗口弹出,要求我提供一个或多个标签。用回车确认后,这些标签被添加到文件名中。 + +删除标签也是一样:选择多个文件,按下 `T`,输入要删除的标签,然后按回车确认。就是这样。几乎没有[给文件添加或删除标签的更简单的方法了][29]。 + +#### 工作流程:改进的使用 appendfilename 重命名文件 + +##### 不使用 appendfilename + +重命名一组大型文件可能是一个冗长乏味的过程。对于 `2014-04-20T17.09.11_p1100386.jpg` 这样的原始文件名,在文件名中添加描述的过程相当烦人。你将按 `Ctrl-r` (重命名)在 geeqie 中打开文件重命名对话框。默认情况下,原始名称(没有文件扩展名的文件名称)被标记。因此,如果不希望删除/覆盖文件名(但要追加),则必须按下光标键 `→`。然后,光标放在基本名称和扩展名之间。输入你的描述(不要忘记以空格字符开始),并用回车进行确认。 + +##### 在 geeqie 使中用 appendfilename + +使用 [appendfilename][30],我的过程得到了简化,可以获得将文本附加到文件名的最佳用户体验:当我在 geeqie 中按下 `a`(附加)时,会弹出一个对话框窗口,询问文本。在回车确认后,输入的文本将放置在时间戳和可选标记之间。 + +例如,当我在 `2014-04-20T17.09.11_p1100386.jpg` 上按下 `a`,然后键入`Pick-nick in Graz` 时,文件名变为 `2014-04-20T17.09.11_p1100386 Pick-nick in Graz.jpg`。当我再次按下 `a` 并输入 `with Susan` 时,文件名变为 `2014-04-20T17.09.11_p1100386 Pick-nick in Graz with Susan.jpg`。当文件名添加标记时,附加的文本前将附加标记分隔符。 + +这样,我就不必担心覆盖时间戳或标记。重命名的过程对我来说变得更加有趣! + +最好的部分是:当我想要将相同的文本添加到多个选定的文件中时,也可以使用 `appendfilename`。 + +##### 在 geeqie 中初始设置 appendfilename + +添加一个额外的编辑器到 geeqie: “Edit > Preferences > Configure Editors ... 
> New”。然后输入桌面文件定义: + +``` +[Desktop Entry] +Name=appendfilename +GenericName=appendfilename +Comment= +Exec=/home/vk/src/misc/vk-appendfilename-interactive-wrapper-with-gnome-terminal.sh %F +Icon= +Terminal=true +Type=Application +Categories=Application;Graphics; +hidden=false +MimeType=image/*;video/*;image/mpo;image/thm +Categories=X-Geeqie; +``` + +*appendfilename.desktop* + +同样,我也使用了一个封装脚本,它将为我打开一个新的终端: + +``` +#!/bin/sh + +/usr/bin/gnome-terminal \ + --geometry=90x5+330+5 \ + --tab-with-profile=big \ + --hide-menubar \ + -x /home/vk/src/appendfilename/appendfilename.py "${@}" + +#end +``` + +*vk-appendfilename-interactive-wrapper-with-gnome-terminal.sh* + +#### 工作流程:播放电影文件 + +在 GNU/Linux 上,我使用 [mplayer][31] 回放视频文件。由于 geeqie 本身不播放电影文件,所以我必须创建一个设置,以便在 mplayer 中打开电影文件。 + +##### 在 geeqie 中初始设置 mplayer + +我已经使用 [xdg-open][32] 将电影文件扩展名关联到 mplayer。因此,我只需要为 geeqie 创建一个通用的“open”命令,让它使用 `xdg-open` 打开任何文件及其关联的应用程序。 + +在 geeqie 中,再次访问 “Edit > Preferences > Configure Editors ...” 添加“open”的条目: + +``` +[Desktop Entry] +Name=open +GenericName=open +Comment= +Exec=/usr/bin/xdg-open %F +Icon= +Terminal=true +Type=Application +hidden=false +NOMimeType=*; +MimeType=image/*;video/* +Categories=X-Geeqie; +``` + +*open.desktop* + +当你也将快捷方式 `o` (见上文)与 geeqie 关联时,你就能够打开与其关联的应用程序的视频文件(和其他文件)。 + +##### 使用 xdg-open 打开电影文件(和其他文件) + +在上面的设置过程之后,当你的 geeqie 光标位于文件上方时,你只需按下 `o` 即可。就是如此简洁。 + +#### 工作流程:在外部图像编辑器中打开 + +我不太希望能够在 GIMP 中快速编辑图像文件。因此,我添加了一个快捷方式 `g`,并将其与外部编辑器 “"GNU Image Manipulation Program" (GIMP)” 关联起来,geeqie 已经默认创建了该外部编辑器。 + +这样,只需按下 `g` 就可以在 GIMP 中打开当前图像。 + +#### 工作流程:移动到存档文件夹 + +现在我已经在我的文件名中添加了注释,我想将单个文件移动到 `$HOME/archive/events_memories/2014/`,或者将一组文件移动到这个文件夹中的新文件夹中,如 `$HOME/archive/events_memories/2014/2014-05-08 business marathon after show - party`。 + +通常的方法是选择一个或多个文件,并用快捷方式 `Ctrl-m` 将它们移动到文件夹中。 + +何等繁复无趣之至! + +因此,我(再次)编写了一个 Python 脚本,它为我完成了这项工作:[move2archive][33](简写为:` m2a `),需要一个或多个文件作为命令行参数。然后,出现一个对话框,我可以在其中输入一个可选文件夹名。当我不输入任何东西而是按回车,文件被移动到相应年份的文件夹。当我输入一个类似 `Business-Marathon After-Show-Party` 的文件夹名称时,第一个图像文件的日期戳被附加到该文件夹(`$HOME/archive/events_memories/2014/2014-05-08 Business-Marathon After-Show-Party`),然后创建该文件夹,并移动文件。 + +再一次,我在 geeqie 中选择一个或多个文件,按 `m`(移动),或者只按回车(没有特殊的子文件夹),或者输入一个描述性文本,这是要创建的子文件夹的名称(可选不带日期戳)。 + +**没有一个图像管理工具像我的带有 appendfilename 和 move2archive 的 geeqie 一样可以通过快捷键快速且有趣的完成工作。** + +##### 在 geeqie 里初始化 m2a 的相关设置 + +同样,向 geeqie 添加 `m2a` 是一个手动步骤:“Edit > Preferences > Configure Editors ...”,然后创建一个附加条目“New”。在这里,你可以定义一个新的桌面文件,如下所示: + +``` +[Desktop Entry] +Name=move2archive +GenericName=move2archive +Comment=Moving one or more files to my archive folder +Exec=/home/vk/src/misc/vk-m2a-interactive-wrapper-with-gnome-terminal.sh %F +Icon= +Terminal=true +Type=Application +Categories=Application;Graphics; +hidden=false +MimeType=image/*;video/*;image/mpo;image/thm +Categories=X-Geeqie; +``` + +*m2a.desktop* + +封装脚本 `vk-m2a-interactive-wrapper-with-gnome-terminal.sh` 是必要的,因为我想要弹出一个新的终端窗口,以便我的文件进入我指定的目标文件夹: + +``` +#!/bin/sh + +/usr/bin/gnome-terminal \ + --geometry=157x56+330+5 \ + --tab-with-profile=big \ + --hide-menubar \ + -x /home/vk/src/m2a/m2a.py --pauseonexit "${@}" + +#end +``` + +*vk-m2a-interactive-wrapper-with-gnome-terminal.sh* + +在 geeqie 中,你可以在 “Edit > Preferences > Preferences ... 
> Keyboard” 将 `m` 与 `m2a` 命令相关联。 + +#### 工作流程:旋转图像(无损) + +通常,我的数码相机会自动将人像照片标记为人像照片。然而,在某些特定的情况下(比如从装饰图案上方拍照),我的相机会出错。在那些**罕见的情况下**,我必须手动修正方向。 + +你必须知道,JPEG 文件格式是一种有损格式,应该只用于照片,而不是计算机生成的东西,如屏幕截图或图表。以傻瓜方式旋转 JPEG 图像文件通常会解压/可视化图像文件、旋转生成新的图像,然后重新编码结果。这将导致生成的图像[比原始图像质量差得多][5]。 + +因此,你应该使用无损方法来旋转 JPEG 图像文件。 + +再一次,我添加了一个“外部编辑器”到 geeqie:“Edit > Preferences > Configure Editors ... > New”。在这里,我添加了两个条目:使用 [exiftran][34],一个用于旋转 270 度(即逆时针旋转 90 度),另一个用于旋转 90 度(顺时针旋转 90 度): + +``` +[Desktop Entry] +Version=1.0 +Type=Application +Name=Losslessly rotate JPEG image counterclockwise + +# call the helper script +TryExec=exiftran +Exec=exiftran -p -2 -i -g %f + +# Desktop files that are usable only in Geeqie should be marked like this: +Categories=X-Geeqie; +OnlyShowIn=X-Geeqie; + +# Show in menu "Edit/Orientation" +X-Geeqie-Menu-Path=EditMenu/OrientationMenu + +MimeType=image/jpeg; +``` + +*rotate-270.desktop* + +``` +[Desktop Entry] +Version=1.0 +Type=Application +Name=Losslessly rotate JPEG image clockwise + +# call the helper script +TryExec=exiftran +Exec=exiftran -p -9 -i -g %f + +# Desktop files that are usable only in Geeqie should be marked like this: +Categories=X-Geeqie; +OnlyShowIn=X-Geeqie; + +# Show in menu "Edit/Orientation" +X-Geeqie-Menu-Path=EditMenu/OrientationMenu + +# It can be made verbose +# X-Geeqie-Verbose=true + +MimeType=image/jpeg; +``` + +*rotate-90.desktop* + +我创建了 geeqie 快捷键 `[`(逆时针方向)和 `]`(顺时针方向)。 + +#### 工作流程:可视化 GPS 坐标 + +我的数码相机有一个 GPS 传感器,它在 JPEG 文件的 Exif 元数据中存储当前的地理位置。位置数据以 [WGS 84][35] 格式存储,如 `47, 58, 26.73; 16, 23, 55.51`(纬度;经度)。这一方式可读性较差,我期望:要么是地图,要么是位置名称。因此,我向 geeqie 添加了一些功能,这样我就可以在 [OpenStreetMap][36] 上看到单个图像文件的位置: `Edit > Preferences > Configure Editors ... > New`。 + +``` +[Desktop Entry] +Name=vkphotolocation +GenericName=vkphotolocation +Comment= +Exec=/home/vk/src/misc/vkphotolocation.sh %F +Icon= +Terminal=true +Type=Application +Categories=Application;Graphics; +hidden=false +MimeType=image/bmp;image/gif;image/jpeg;image/jpg;image/pjpeg;image/png;image/tiff;image/x-bmp;image/x-gray;image/x-icb;image/x-ico;image/x-png;image/x-portable-anymap;image/x-portable-bitmap;image/x-portable-graymap;image/x-portable-pixmap;image/x-xbitmap;image/x-xpixmap;image/x-pcx;image/svg+xml;image/svg+xml-compressed;image/vnd.wap.wbmp; +``` + +*photolocation.desktop* + +这调用了我的名为 `vkphotolocation.sh` 的封装脚本,它使用 [ExifTool][37] 以 [Marble][38] 能够读取和可视化的适当格式提取该坐标: + +``` +#!/bin/sh + +IMAGEFILE="${1}" +IMAGEFILEBASENAME=`basename ${IMAGEFILE}` + +COORDINATES=`exiftool -c %.6f "${IMAGEFILE}" | awk '/GPS Position/ { print $4 " " $6 }'` + +if [ "x${COORDINATES}" = "x" ]; then + zenity --info --title="${IMAGEFILEBASENAME}" --text="No GPS-location found in the image file." +else + /usr/bin/marble --latlon "${COORDINATES}" --distance 0.5 +fi + +#end +``` + +*vkphotolocation.sh* + +映射到键盘快捷键 `G`,我可以快速地得到**单个图像文件的位置的地图定位**。 + +当我想将多个 JPEG 图像文件的**位置可视化为路径**时,我使用 [GpsPrune][39]。我无法挖掘出 GpsPrune 将一组文件作为命令行参数的方法。正因为如此,我必须手动启动 GpsPrune,用 “File > Add photos”选择一组文件或一个文件夹。 + +通过这种方式,我可以为每个 JPEG 位置在 OpenStreetMap 地图上获得一个点(如果配置为这样)。通过单击这样一个点,我可以得到相应图像的详细信息。 + +如果你恰好在国外拍摄照片,可视化 GPS 位置对**在文件名中添加描述**大有帮助! 
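+
+另外一个思路(仅为示意,并非我正式工作流程的一部分):可以先用 ExifTool 把整个文件夹中照片的坐标导出为一个 GPX 轨迹文件,再在 GpsPrune 或任何支持 GPX 的工具中打开,从而绕开 GpsPrune 不接受命令行参数的问题。下面的命令假设使用 ExifTool 发行版自带的 `gpx.fmt` 输出模板,模板路径为虚构示例,请按实际情况调整:
+
+```
+#!/bin/sh
+# 示意片段:把当前文件夹(含子文件夹)中带 GPS 信息的 JPEG
+# 按拍摄时间排序,导出为一个 GPX 轨迹文件
+exiftool -r -if '$gpslatitude' -fileOrder gpsdatetime \
+    -p /path/to/gpx.fmt . > track.gpx
+```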
+ +#### 工作流程:根据 GPS 坐标过滤照片 + +这并非我的工作流程。为了完整起见,我列出该工作流对应工具的特性。我想做的就是从一大堆图片中寻找那些在一定区域内(范围或点 + 距离)的照片。 + +到目前为止,我只找到了 [DigiKam][40],它能够[根据矩形区域进行过滤][41]。如果你知道其他工具,请将其添加到下面的评论或给我写一封电子邮件。 + +#### 工作流程:显示给定集合的子集 + +如上面的需求所述,我希望能够对一个文件夹中的文件定义一个子集,以便将这个小集合呈现给其他人。 + +工作流程非常简单:我向选择的文件添加一个标记(通过 `t`/`filetags`)。为此,我使用标记 `sel`,它是 “selection” 的缩写。在标记了一组文件之后,我可以按下 `s`,它与一个脚本相关联,该脚本只显示标记为 `sel` 的文件。 + +当然,这也适用于任何标签或标签组合。因此,用同样的方法,你可以得到一个适当的概述,你的婚礼上的所有照片都标记着“教堂”和“戒指”。 + +很棒的功能,不是吗?:-) + +##### 初始设置 filetags 以根据标签和 geeqie 过滤 + +你必须定义一个额外的“外部编辑器”,“ Edit > Preferences > Configure Editors ... > New”: + +``` +[Desktop Entry] +Name=filetag-filter +GenericName=filetag-filter +Comment= +Exec=/home/vk/src/misc/vk-filetag-filter-wrapper-with-gnome-terminal.sh +Icon= +Terminal=true +Type=Application +Categories=Application;Graphics; +hidden=false +MimeType=image/*;video/*;image/mpo;image/thm +Categories=X-Geeqie; +``` + +*filter-tags.desktop* + +再次调用我编写的封装脚本: + +``` +#!/bin/sh + +/usr/bin/gnome-terminal \ + --geometry=85x15+330+5 \ + --hide-menubar \ + -x /home/vk/src/filetags/filetags.py --filter + +#end +``` + +*vk-filetag-filter-wrapper-with-gnome-terminal.sh* + +带有参数 `--filter` 的 `filetags` 基本上完成的是:用户被要求输入一个或多个标签。然后,当前文件夹中所有匹配的文件都使用[符号链接][42]链接到 `$HOME/.filetags_tagfilter/`。然后,启动了一个新的 geeqie 实例,显示链接的文件。 + +在退出这个新的 geeqie 实例之后,你会看到进行选择的旧的 geeqie 实例。 + +#### 用一个真实的案例来总结 + +哇哦, 这是一篇很长的博客文章。你可能已经忘了之前的概述。总结一下我在(扩展了标准功能集的) geeqie 中可以做的事情,我有一个很酷的总结: + +快捷键 | 功能 +--- | --- +`m` | 移到归档(m2a) +`o` | 打开(针对非图像文件) +`a` | 在文件名里添加字段 +`t` | 文件标签(添加) +`T` | 文件标签(删除) +`s` | 文件标签(排序) +`g` | gimp +`G` | 显示 GPS 信息 +`[` | 无损的逆时针旋转 +`]` | 无损的顺时针旋转 +`Ctrl-e` | EXIF 图像信息 +`f` | 全屏显示 + +文件名(包括它的路径)的部分及我用来操作该部分的相应工具: + +``` + /this/is/a/folder/2014-04-20T17.09 Picknick in Graz -- food graz.jpg + [ move2archive ] [ date2name ] [appendfilename] [ filetags ] +``` + +在实践中,我按照以下步骤将照片从相机保存到存档:我将 SD 存储卡放入计算机的 SD 读卡器中。然后我运行 [getdigicamdata.sh][23]。完成之后,我在 geeqie 中打开 `$HOME/tmp/digicam/tmp/`。我浏览了一下照片,把那些不成功的删除了。如果有一个图像的方向错误,我用 `[` 或 `]` 纠正它。 + +在第二步中,我向我认为值得评论的文件添加描述 (`a`)。每当我想添加标签时,我也这样做:我快速地标记所有应该共享相同标签的文件(`Ctrl + 鼠标点击`),并使用 [filetags][28](`t`)进行标记。 + +要合并来自给定事件的文件,我选中相应的文件,将它们移动到年度归档文件夹中的 `event-folder`,并通过在 [move2archive][33](`m`)中键入事件描述,其余的(非特殊的文件夹)无需声明事件描述由 `move2archive` (`m`)直接移动到年度归档中。 + +结束我的工作流程,我删除了 SD 卡上的所有文件,把它从操作系统上弹出,然后把它放回我的数码相机里。 + +以上。 + +因为这种工作流程几乎不需要任何开销,所以评论、标记和归档照片不再是一项乏味的工作。 + +### 最后 + +所以,这是一个详细描述我关于照片和电影的工作流程的叙述。你可能已经发现了我可能感兴趣的其他东西。所以请不要犹豫,请使用下面的链接留下评论或电子邮件。 + +我也希望得到反馈,如果我的工作流程适用于你。并且,如果你已经发布了你的工作流程或者找到了其他人工作流程的描述,也请留下评论! + +及时行乐,莫让错误的工具或低效的方法浪费了我们的人生! 
+ +### 其他工具 + +阅读关于[本文中关于 gThumb 的部分][43]。 + +当你觉得你以上文中所叙述的符合你的需求时,请根据相关的建议来选择对应的工具。 + +-------------------------------------------------------------------------------- + +via: http://karl-voit.at/managing-digital-photographs/ + +作者:[Karl Voit][a] +译者:[qfzy1233](https://github.com/qfzy1233) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://karl-voit.at +[1]:https://en.wikipedia.org/wiki/Jpeg +[2]:http://en.wikipedia.org/wiki/Vendor_lock-in +[3]:https://en.wikipedia.org/wiki/Raw_image_format +[4]:http://www.gimp.org/ +[5]:http://petapixel.com/2012/08/14/why-you-should-always-rotate-original-jpeg-photos-losslessly/ +[6]:https://en.wikipedia.org/wiki/Gps +[7]:https://en.wikipedia.org/wiki/Iso_date +[8]:https://en.wikipedia.org/wiki/Ntfs +[9]:https://en.wikipedia.org/wiki/Exif +[10]:http://www.isisinform.com/reinventing-knowledge-the-medieval-controversy-of-alphabetical-order/ +[11]:https://en.wikipedia.org/wiki/File_name_extension +[12]:http://karl-voit.at/tagstore/en/papers.shtml +[13]:https://en.wikipedia.org/wiki/Hashtag +[14]:https://en.wikipedia.org/wiki/Dot-file +[15]:https://en.wikipedia.org/wiki/NTFS#Alternate_data_streams_.28ADS.29 +[16]:https://github.com/novoid/Memacs/blob/master/docs/memacs_filenametimestamps.org +[17]:https://github.com/novoid/Memacs +[18]:http://www.cis.upenn.edu/~bcpierce/unison/ +[19]:https://github.com/novoid/LaTeX-KOMA-template +[20]:http://orgmode.org/ +[21]:http://karl-voit.at/tags/emacs +[22]:https://twitter.com/search?q%3D%2523orgmode&src%3Dtypd +[23]:https://github.com/novoid/getdigicamdata.sh +[24]:http://www.sentex.net/%3Ccode%3Emwandel/jhead/ +[25]:https://github.com/novoid/date2name +[26]:http://geeqie.sourceforge.net/ +[27]:http://xee.c3.cx/ +[28]:https://github.com/novoid/filetag +[29]:http://karl-voit.at/tagstore/ +[30]:https://github.com/novoid/appendfilename +[31]:http://www.mplayerhq.hu +[32]:https://wiki.archlinux.org/index.php/xdg-open +[33]:https://github.com/novoid/move2archive +[34]:http://manpages.ubuntu.com/manpages/raring/man1/exiftran.1.html +[35]:https://en.wikipedia.org/wiki/WGS84#A_new_World_Geodetic_System:_WGS_84 +[36]:http://www.openstreetmap.org/ +[37]:http://www.sno.phy.queensu.ca/~phil/exiftool/ +[38]:http://userbase.kde.org/Marble/Tracking +[39]:http://activityworkshop.net/software/gpsprune/ +[40]:https://en.wikipedia.org/wiki/DigiKam +[41]:https://docs.kde.org/development/en/extragear-graphics/digikam/using-kapp.html#idp7659904 +[42]:https://en.wikipedia.org/wiki/Symbolic_link +[43]:http://karl-voit.at/2017/02/19/gthumb +[44]:https://glt18.linuxtage.at +[45]:https://glt18-programm.linuxtage.at/events/321.html diff --git a/published/20180906 What a shell dotfile can do for you.md b/published/20180906 What a shell dotfile can do for you.md new file mode 100644 index 0000000000..d2f1a73c71 --- /dev/null +++ b/published/20180906 What a shell dotfile can do for you.md @@ -0,0 +1,277 @@ +Shell 点文件可以为你做点什么 +====== + +> 了解如何使用配置文件来改善你的工作环境。 + +![](https://img.linux.net.cn/data/attachment/album/201910/03/123528x3skwqwb8sz8qo8s.jpg) + +不要问你可以为你的 shell 点文件dotfile做什么,而是要问一个 shell 点文件可以为你做什么! 
+ +我一直在操作系统领域里面打转,但是在过去的几年中,我的日常使用的一直是 Mac。很长一段时间,我都在使用 Bash,但是当几个朋友开始把 [zsh][1] 当成宗教信仰时,我也试试了它。我没用太长时间就喜欢上了它,几年后,我越发喜欢它做的许多小事情。 + +我一直在使用 zsh(通过 [Homebrew][2] 提供,而不是由操作系统安装的)和 [Oh My Zsh 增强功能][3]。 + +本文中的示例是我的个人 `.zshrc`。大多数都可以直接用在 Bash 中,我觉得不是每个人都依赖于 Oh My Zsh,但是如果不用的话你的工作量可能会有所不同。曾经有一段时间,我同时为 zsh 和 Bash 维护一个 shell 点文件,但是最终我还是放弃了我的 `.bashrc`。 + +### 不偏执不行 + +如果你希望在各个操作系统上使用相同的点文件,则需要让你的点文件聪明点。 + +``` +### Mac 专用 +if [[ "$OSTYPE" == "darwin"* ]]; then +        # Mac 专用内容在此 +``` + +例如,我希望 `Alt + 箭头键` 将光标按单词移动而不是单个空格。为了在 [iTerm2][4](我的首选终端)中实现这一目标,我将此代码段添加到了 `.zshrc` 的 Mac 专用部分: + +``` +### Mac 专用 +if [[ "$OSTYPE" == "darwin"* ]]; then +        ### Mac 用于 iTerm2 的光标命令;映射 ctrl+arrows 或 alt+arrows 来快速移动 +        bindkey -e +        bindkey '^[[1;9C' forward-word +        bindkey '^[[1;9D' backward-word +        bindkey '\e\e[D' backward-word +        bindkey '\e\e[C' forward-word +fi +``` + +(LCTT 译注:标题 “We're all mad here” 是电影《爱丽丝梦游仙境》中,微笑猫对爱丽丝讲的一句话:“我们这儿全都是疯的”。) + +### 在家不工作 + +虽然我开始喜欢我的 Shell 点文件了,但我并不总是想要家用计算机上的东西与工作的计算机上的东西一样。解决此问题的一种方法是让补充的点文件在家中使用,而不是在工作中使用。以下是我的实现方式: + +``` +if [[ `egrep 'dnssuffix1|dnssuffix2' /etc/resolv.conf` ]]; then +        if [ -e $HOME/.work ] +                source $HOME/.work +        else +                echo "This looks like a work machine, but I can't find the ~/.work file" +        fi +fi +``` + +在这种情况下,我根据我的工作 dns 后缀(或多个后缀,具体取决于你的情况)来提供(`source`)一个可以使我的工作环境更好的单独文件。 + +(LCTT 译注:标题 “What about Bob?” 是 1991 年的美国电影《天才也疯狂》。) + +### 你该这么做 + +现在可能是放弃使用波浪号(`~`)表示编写脚本时的主目录的好时机。你会发现在某些上下文中无法识别它。养成使用环境变量 `$HOME` 的习惯,这将为你节省大量的故障排除时间和以后的工作。 + +如果你愿意,合乎逻辑的扩展是应该包括特定于操作系统的点文件。 + +(LCTT 译注:标题 “That thing you do” 是 1996 年由汤姆·汉克斯执导的喜剧片《挡不住的奇迹》。) + +### 别指望记忆 + +我写了那么多 shell 脚本,我真的再也不想写脚本了。并不是说 shell 脚本不能满足我大部分时间的需求,而是我发现写 shell 脚本,可能只是拼凑了一个胶带式解决方案,而不是永久地解决问题。 + +同样,我讨厌记住事情,在我的整个职业生涯中,我经常不得不在一天之中就彻彻底底地改换环境。实际的结果是这些年来,我不得不一再重新学习很多东西。(“等等……这种语言使用哪种 for 循环结构?”) + +因此,每隔一段时间我就会觉得自己厌倦了再次寻找做某事的方法。我改善生活的一种方法是添加别名。 + +对于任何一个使用操作系统的人来说,一个常见的情况是找出占用了所有磁盘的内容。不幸的是,我从来没有记住过这个咒语,所以我做了一个 shell 别名,创造性地叫做 `bigdirs`: + +``` +alias bigdirs='du --max-depth=1 2> /dev/null | sort -n -r | head -n20' +``` + +虽然我可能不那么懒惰,并实际记住了它,但是,那不太 Unix …… + +(LCTT 译注:标题 “Memory, all alone in the moonlight” 是一手英文老歌。) + +### 输错的人们 + +使用 shell 别名改善我的生活的另一种方法是使我免于输入错误。我不知道为什么,但是我已经养成了这种讨厌的习惯,在序列 `ea` 之后输入 `w`,所以如果我想清除终端,我经常会输入 `cleawr`。不幸的是,这对我的 shell 没有任何意义。直到我添加了这个小东西: + +``` +alias cleawr='clear' +``` + +在 Windows 中有一个等效但更好的命令 `cls`,但我发现自己会在 Shell 也输入它。看到你的 shell 表示抗议真令人沮丧,因此我添加: + +``` +alias cls='clear' +``` + +是的,我知道 `ctrl + l`,但是我从不使用它。 + +(LCTT 译注:标题 “Typos, and the people who love them” 可能来自某部电影。) + +### 要自娱自乐 + +工作压力很大。有时你需要找点乐子。如果你的 shell 不知道它显然应该执行的命令,则可能你想直接让它耸耸肩!你可以使用以下功能执行此操作: + +``` +shrug() { echo "¯\_(ツ)_/¯"; } +``` + +如果还不行,也许你需要掀桌不干了: + +``` +fliptable() { echo "(╯°□°)╯ ┻━┻"; } # 掀桌,用法示例: fsck -y /dev/sdb1 || fliptable +``` + +想想看,当我想掀桌子时而我不记得我给它起了个什么名字,我会有多沮丧和失望,所以我添加了更多的 shell 别名: + +``` +alias flipdesk='fliptable' +alias deskflip='fliptable' +alias tableflip='fliptable' +``` + +而有时你需要庆祝一下: + +``` +disco() { +        echo "(•_•)" +        echo "<)   )╯" +        echo " /    \ " +        echo "" +        echo "\(•_•)" +        echo " (   (>" +        echo " /    \ " +        echo "" +        echo " (•_•)" +        echo "<)   )>" +        echo " /    \ " +} +``` + +通常,我会将这些命令的输出通过管道传递到 `pbcopy`,并将其粘贴到我正在使用的相关聊天工具中。 + +我从一个我关注的一个叫 “Command Line Magic” [@ climagic][5] 的 Twitter 帐户得到了下面这个有趣的函数。自从我现在住在佛罗里达州以来,我很高兴看到我这一生中唯一的一次下雪: + +``` +snow() { + clear;while :;do echo $LINES $COLUMNS 
$(($RANDOM%$COLUMNS));sleep 0.1;done|gawk '{a[$3]=0;for(x in a) {o=a[x];a[x]=a[x]+1;printf "\033[%s;%sH ",o,x;printf "\033[%s;%sH*\033[0;0H",a[x],x;}}' +} +``` + +(LCTT 译注:标题 “Amuse yourself” 是 1936 年的美国电影《自娱自乐》) + +### 函数的乐趣 + +我们已经看到了一些我使用的函数示例。由于这些示例中几乎不需要参数,因此可以将它们作为别名来完成。 当比一个短句更长时,我出于个人喜好使用函数。 + +在我职业生涯的很多时期我都运行过 [Graphite][6],这是一个开源、可扩展的时间序列指标解决方案。 在很多的情况下,我需要将度量路径(用句点表示)转换到文件系统路径(用斜杠表示),反之亦然,拥有专用于这些任务的函数就变得很有用: + +``` +# 在 Graphite 指标和文件路径之间转换很有用 +function dottoslash() { +        echo $1 | sed 's/\./\//g' +} +function slashtodot() { +        echo $1 | sed 's/\//\./g' +} +``` + +在我的另外一段职业生涯里,我运行了很多 Kubernetes。如果你对运行 Kubernetes 不熟悉,你需要编写很多 YAML。不幸的是,一不小心就会编写了无效的 YAML。更糟糕的是,Kubernetes 不会在尝试应用 YAML 之前对其进行验证,因此,除非你应用它,否则你不会发现它是无效的。除非你先进行验证: + +``` +function yamllint() { +        for i in $(find . -name '*.yml' -o -name '*.yaml'); do echo $i; ruby -e "require 'yaml';YAML.load_file(\"$i\")"; done +} +``` + +因为我厌倦了偶尔破坏客户的设置而让自己感到尴尬,所以我写了这个小片段并将其作为提交前挂钩添加到我所有相关的存储库中。在持续集成过程中,类似的内容将非常有帮助,尤其是在你作为团队成员的情况下。 + +(LCTT 译注:哦抱歉,我不知道这个标题的出处。) + +### 手指不听话 + +我曾经是一位出色的盲打打字员。但那些日子已经一去不回。我的打字错误超出了我的想象。 + +在各种时期,我多次用过 Chef 或 Kubernetes。对我来说幸运的是,我从未同时使用过这两者。 + +Chef 生态系统的一部分是 Test Kitchen,它是加快测试的一组工具,可通过命令 `kitchen test` 来调用。Kubernetes 使用 CLI 工具 `kubectl` 进行管理。这两个命令都需要几个子命令,并且这两者都不会特别顺畅地移动手指。 + +我没有创建一堆“输错别名”,而是将这两个命令别名为 `k`: + +``` +alias k='kitchen test $@' +``` + +或 + +``` +alias k='kubectl $@' +``` + +(LCTT 译注:标题 “Oh, fingers, where art thou?” 演绎自《O Brother, Where Art Thou?》,这是 2000 年美国的一部电影《逃狱三王》。) + +### 分裂与合并 + +我职业生涯的后半截涉及与其他人一起编写更多代码。我曾在许多环境中工作过,在这些环境中,我们在帐户中复刻了存储库副本,并将拉取请求用作审核过程的一部分。当我想确保给定存储库的复刻与父版本保持最新时,我使用 `fetchupstream`: + +``` +alias fetchupstream='git fetch upstream && git checkout master && git merge upstream/master && git push' +``` + +(LCTT 译注:标题 “Timesplitters” 是一款视频游戏《时空分裂者》。) + +### 颜色之荣耀 + +我喜欢颜色。它可以使 `diff` 之类的东西更易于使用。 + +``` +alias diff='colordiff' +``` + +我觉得彩色的手册页是个巧妙的技巧,因此我合并了以下函数: + +``` +# 彩色化手册页,来自: +# http://boredzo.org/blog/archives/2016-08-15/colorized-man-pages-understood-and-customized +man() { +        env \ +                LESS_TERMCAP_md=$(printf "\e[1;36m") \ +                LESS_TERMCAP_me=$(printf "\e[0m") \ +                LESS_TERMCAP_se=$(printf "\e[0m") \ +                LESS_TERMCAP_so=$(printf "\e[1;44;33m") \ +                LESS_TERMCAP_ue=$(printf "\e[0m") \ +                LESS_TERMCAP_us=$(printf "\e[1;32m") \ +                man "$@" +} +``` + +我喜欢命令 `which`,但它只是告诉你正在运行的命令在文件系统中的位置,除非它是 Shell 函数才能告诉你更多。在多个级联的点文件之后,有时会不清楚函数的定义位置或作用。事实证明,`whence` 和 `type` 命令可以帮助解决这一问题。 + +``` +# 函数定义在哪里? 
+whichfunc() { +        whence -v $1 +        type -a $1 +} +``` + +(LCTT 译注:标题“Mine eyes have seen the glory of the coming of color” 演绎自歌曲 《Mine Eyes Have Seen The Glory Of The Coming Of The Lord》) + +### 总结 + +希望本文对你有所帮助,并能激发你找到改善日常使用 Shell 的方法。这些方法不必庞大、新颖或复杂。它们可能会解决一些微小但频繁的摩擦、创建捷径,甚至提供减少常见输入错误的解决方案。 + +欢迎你浏览我的 [dotfiles 存储库][7],但我要警示你,这样做可能会花费很多时间。请随意使用你认为有帮助的任何东西,并互相取长补短。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/9/shell-dotfile + +作者:[H.Waldo Grunenwald][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[wxy](https://github.com/wxy) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/gwaldo +[1]: http://www.zsh.org/ +[2]: https://brew.sh/ +[3]: https://github.com/robbyrussell/oh-my-zsh +[4]: https://www.iterm2.com/ +[5]: https://twitter.com/climagic +[6]: https://github.com/graphite-project/ +[7]: https://github.com/gwaldo/dotfiles diff --git a/published/20190214 The Earliest Linux Distros- Before Mainstream Distros Became So Popular.md b/published/20190214 The Earliest Linux Distros- Before Mainstream Distros Became So Popular.md new file mode 100644 index 0000000000..f28180e516 --- /dev/null +++ b/published/20190214 The Earliest Linux Distros- Before Mainstream Distros Became So Popular.md @@ -0,0 +1,101 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11420-1.html) +[#]: subject: (The Earliest Linux Distros: Before Mainstream Distros Became So Popular) +[#]: via: (https://itsfoss.com/earliest-linux-distros/) +[#]: author: (Avimanyu Bandyopadhyay https://itsfoss.com/author/avimanyu/) + +主流发行版之前的那些最早的 Linux 发行版 +====== + +> 在这篇回溯历史的文章中,我们尝试回顾一些最早的 Linux 发行版是如何演变的,并形成我们今天所知道的发行版的。 + +![][1] + +在这里,我们尝试探讨了第一个 Linux 内核问世后,诸如 Red Hat、Debian、Slackware、SUSE、Ubuntu 等诸多流行的发行版的想法是如何产生的。 + +随着 1991 年 Linux 最初以内核的形式发布,今天我们所知道的发行版在世界各地众多合作者的帮助下得以创建 shell、库、编译器和相关软件包,从而使其成为一个完整的操作系统。 + +### 1、第一个已知的“发行版”是由 HJ Lu 创建的 + +Linux 发行版这种方式可以追溯到 1992 年,当时可以用来访问 Linux 的第一个已知的类似发行版的工具是由 HJ Lu 发布的。它由两个 5.25 英寸软盘组成: + +![Linux 0.12 Boot and Root Disks | Photo Credit][2] + +* LINUX 0.12 BOOT DISK:“启动”磁盘用来先启动系统。 +* LINUX 0.12 ROOT DISK:第二个“根”磁盘,用于在启动后获取命令提示符以访问 Linux 文件系统。 + +要在硬盘上安装 LINUX 0.12,必须使用十六进制编辑器来编辑其主启动记录(MBR),这是一个非常复杂的过程,尤其是在那个时代。 + +> 感觉太怀旧了? 
+> +> 你可以[安装 cool-retro-term 应用程序][3],它可以为你提供 90 年代计算机的复古外观的 Linux 终端。 + +### 2、MCC Interim Linux + +![MCC Linux 0.99.14, 1993 | Image Credit][4] + +MCC Interim Linux 最初由英格兰曼彻斯特计算中心的 Owen Le Blanc 与 “LINUX 0.12” 同年发布,它是针对普通用户的第一个 Linux 发行版,它具有菜单驱动的安装程序和最终用户/编程工具。它也是以软盘集的形式,可以将其安装在系统上以提供基于文本的基本环境。 + +MCC Interim Linux 比 0.12 更加易于使用,并且在硬盘驱动器上的安装过程更加轻松和类似于现代方式。它不需要使用十六进制编辑器来编辑 MBR。 + +尽管它于 1992 年 2 月首次发布,但自当年 11 月以来也可以通过 FTP 下载。 + +### 3、TAMU Linux + +![TAMU Linux | Image Credit][5] + +TAMU Linux 由 Texas A&M 的 Aggies 与 Texas A&M Unix & Linux 用户组于 1992 年 5 月开发,被称为 TAMU 1.0A。它是第一个提供 X Window System 的 Linux 发行版,而不仅仅是基于文本的操作系统。 + +### 4、Softlanding Linux System (SLS) + +![SLS Linux 1.05, 1994 | Image Credit][6] + +他们的口号是“DOS 伞降的温柔救援”!SLS 由 Peter McDonald 于 1992 年 5 月发布。SLS 在其时代得到了广泛的使用和流行,并极大地推广了 Linux 的思想。但是由于开发人员决定更改发行版中的可执行格式,因此用户停止使用它。 + +当今社区最熟悉的许多流行发行版是通过 SLS 演变而成的。其中两个是: + +* Slackware:它是最早的 Linux 发行版之一,由 Patrick Volkerding 于 1993 年创建。Slackware 基于 SLS,是最早的 Linux 发行版之一。 +* Debian:由 Ian Murdock 发起,Debian 在从 SLS 模型继续发展之后于 1993 年发布。我们今天知道的非常流行的 Ubuntu 发行版基于 Debian。 + +### 5、Yggdrasil + +![LGX Yggdrasil Fall 1993 | Image Credit][7] + +Yggdrasil 于 1992 年 12 月发行,是第一个产生 Live Linux CD 想法的发行版。它是由 Yggdrasil 计算公司开发的,该公司由位于加利福尼亚州伯克利的 Adam J. Richter 创立。它可以在系统硬件上自动配置自身,即“即插即用”功能,这是当今非常普遍且众所周知的功能。Yggdrasil 后来的版本包括一个用于在 Linux 中运行任何专有 MS-DOS CD-ROM 驱动程序的黑科技。 + +![Yggdrasil’s Plug-and-Play Promo | Image Credit][8] + +他们的座右铭是“我们其余人的免费软件”。 + +### 6、Mandriva + +在 90 年代后期,有一个非常受欢迎的发行版 [Mandriva][9],该发行版于 1998 年首次发行,是通过将法国的 Mandrake Linux 发行版与巴西的 Conectiva Linux 发行版统一起来形成的。它的发布寿命为 18 个月,会对 Linux 和系统软件进行更新,并且每年都会发布基于桌面的更新。它还有带有 5 年支持的服务器版本。现在是 [Open Mandriva][10]。 + +如果你在 Linux 发行之初就用过更多的怀旧发行版,请在下面的评论中与我们分享。 + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/earliest-linux-distros/ + +作者:[Avimanyu Bandyopadhyay][a] +选题:[lujun9972][b] +译者:[wxy](https://github.com/wxy) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/avimanyu/ +[b]: https://github.com/lujun9972 +[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/earliest-linux-distros.png?resize=800%2C450&ssl=1 +[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/Linux-0.12-Floppies.jpg?ssl=1 +[3]: https://itsfoss.com/cool-retro-term/ +[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/MCC-Interim-Linux-0.99.14-1993.jpg?fit=800%2C600&ssl=1 +[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/TAMU-Linux.jpg?ssl=1 +[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/SLS-1.05-1994.jpg?ssl=1 +[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/LGX_Yggdrasil_CD_Fall_1993.jpg?fit=781%2C800&ssl=1 +[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/Yggdrasil-Linux-Summer-1994.jpg?ssl=1 +[9]: https://en.wikipedia.org/wiki/Mandriva_Linux +[10]: https://www.openmandriva.org/ diff --git a/published/20190301 Guide to Install VMware Tools on Linux.md b/published/20190301 Guide to Install VMware Tools on Linux.md new file mode 100644 index 0000000000..e3d241592e --- /dev/null +++ b/published/20190301 Guide to Install VMware Tools on Linux.md @@ -0,0 +1,134 @@ +[#]: collector: (lujun9972) +[#]: translator: (tomjlw) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11467-1.html) +[#]: subject: (Guide to Install VMware Tools on Linux) +[#]: via: (https://itsfoss.com/install-vmware-tools-linux) +[#]: author: 
(Ankush Das https://itsfoss.com/author/ankush/) + +在 Linux 上安装 VMware 工具 +====== + +> VMware 工具通过允许你共享剪贴板和文件夹以及其他东西来提升你的虚拟机体验。了解如何在 Ubuntu 和其它 Linux 发行版上安装 VMware 工具。 + +![如何在 Linux 上安装 VMware 工具][4] + +在先前的教程中,你学习了[在 Ubuntu 上安装 VMware 工作站][1]。你还可以通过安装 VMware 工具进一步提升你的虚拟机功能。 + +如果你已经在 VMware 上安装了一个访客机系统,你必须要注意 [VMware 工具][2]的要求 —— 尽管并不完全清楚到底有什么要求。 + +在本文中,我们将要强调 VMware 工具的重要性、所提供的特性,以及在 Ubuntu 和其它 Linux 发行版上安装 VMware 工具的方法。 + +### VMware 工具:概览及特性 + +![在 Ubuntu 上安装 VMware 工具][3] + +出于显而易见的理由,虚拟机(你的访客机系统)并不能做到与宿主机上的表现完全一致。在其性能和操作上会有特定的限制。那就是为什么引入 VMware 工具的原因。 + +VMware 工具以一种高效的形式在提升了其性能的同时,也可以帮助管理访客机系统。 + +#### VMware 工具到底负责什么? + +你大致知道它可以做什么,但让我们探讨一下细节: + +* 同步访客机系统与宿主机系统间的时间以简化操作 +* 提供从宿主机系统向访客机系统传递消息的能力。比如说,你可以复制文字到剪贴板,并将它轻松粘贴到你的访客机系统 +* 在访客机系统上启用声音 +* 提升访客机视频分辨率 +* 修正错误的网络速度数据 +* 减少不合适的色深 + +在访客机系统上安装了 VMware 工具会给它带来显著改变,但是它到底包含了什么特性才解锁或提升这些功能的呢?让我们来看看…… + +#### VMware 工具:核心特性细节 + +![用 VMware 工具在宿主机系统与访客机系统间共享剪切板][5] + +如果你不想知道它包含什么来启用这些功能的话,你可以跳过这部分。但是为了好奇的读者,让我们简短地讨论它一下: + +**VMware 设备驱动:** 它具体取决于操作系统。大多数主流操作系统都默认包含了设备驱动,因此你不必另外安装它。这主要涉及到内存控制驱动、鼠标驱动、音频驱动、网卡驱动、VGA 驱动以及其它。 + +**VMware 用户进程:** 这是这里真正有意思的地方。通过它你获得了在访客机和宿主机间复制粘贴和拖拽的能力。基本上,你可以从宿主机复制粘贴文本到虚拟机,反之亦然。 + +你同样也可以拖拽文件。此外,在你未安装 SVGA 驱动时它会启用鼠标指针的释放/锁定。 + +**VMware 工具生命周期管理:** 嗯,我们会在下面看看如何安装 VMware 工具,但是这个特性帮你在虚拟机中轻松安装/升级 VMware 工具。 + +**共享文件夹**:除了这些。VMware 工具同样允许你在访客机与宿主机系统间共享文件夹。 + +![使用 VMware 工具在访客机与宿机系统间共享文件][6] + +当然,它的效果同样取决于访客机系统。例如在 Windows 上你通过 Unity 模式运行虚拟机上的程序并从宿主机系统上操作它。 + +### 如何在 Ubuntu 和其它 Linux 发行版上安装 VMware 工具 + +**注意:** 对于 Linux 操作系统,你应该已经安装好了“Open VM 工具”,大多数情况下免除了额外安装 VMware 工具的需要。 + +大部分时候,当你安装了访客机系统时,如果操作系统支持 [Easy Install][7] 的话你会收到软件更新或弹窗告诉你要安装 VMware 工具。 + +Windows 和 Ubuntu 都支持 Easy Install。因此如果你使用 Windows 作为你的宿主机或尝试在 Ubuntu 上安装 VMware 工具,你应该会看到一个和弹窗消息差不多的选项来轻松安装 VMware 工具。这是它应该看起来的样子: + +![安装 VMware 工具的弹窗][8] + +这是搞定它最简便的办法。因此当你配置虚拟机时确保你有一个通畅的网络连接。 + +如果你没收到任何弹窗或者选项来轻松安装 VMware 工具。你需要手动安装它。以下是如何去做: + +1. 运行 VMware Workstation Player。 +2. 从菜单导航至 “Virtual Machine -> Install VMware tools”。如果你已经安装了它并想修复安装,你会看到 “Re-install VMware tools” 这一选项出现。 +3. 一旦你点击了,你就会看到一个虚拟 CD/DVD 挂载在访客机系统上。 +4. 打开该 CD/DVD,并复制粘贴那个 tar.gz 文件到任何你选择的区域并解压,这里我们选择“桌面”作为解压目的地。 + + ![][9] +5. 在解压后,运行终端并通过输入以下命令导航至里面的文件夹: + + ``` +cd Desktop/VMwareTools-10.3.2-9925305/vmware-tools-distrib +``` + + 你需要检查文件夹与路径名,这取决于版本与解压目的地,名字可能会改变。 + + ![][10] + + 用你的存储位置(如“下载”)替换“桌面”,如果你安装的也是 10.3.2 版本,其它的保持一样即可。 +6. 
现在仅需输入以下命令开始安装: + + ``` +sudo ./vmware-install.pl -d +``` + + ![][11] + + 你会被询问密码以获得安装权限,输入密码然后应当一切都搞定了。 + +到此为止了,你搞定了。这系列步骤应当适用于几乎大部分基于 Ubuntu 的访客机系统。如果你想要在 Ubuntu 服务器上或其它系统安装 VMware 工具,步骤应该类似。 + +### 总结 + +在 Ubuntu Linux 上安装 VMware 工具应该挺简单。除了简单办法,我们也详述了手动安装的方法。如果你仍需帮助或者对安装有任何建议,在评论区评论让我们知道。 + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/install-vmware-tools-linux + +作者:[Ankush Das][a] +选题:[lujun9972][b] +译者:[tomjlw](https://github.com/tomjlw) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/install-vmware-player-ubuntu-1310/ +[2]: https://kb.vmware.com/s/article/340 +[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools-downloading.jpg?fit=800%2C531&ssl=1 +[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/install-vmware-tools-linux.png?resize=800%2C450&ssl=1 +[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools-features.gif?resize=800%2C500&ssl=1 +[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools-shared-folder.jpg?fit=800%2C660&ssl=1 +[7]: https://docs.vmware.com/en/VMware-Workstation-Player-for-Linux/15.0/com.vmware.player.linux.using.doc/GUID-3F6B9D0E-6CFC-4627-B80B-9A68A5960F60.html +[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools.jpg?fit=800%2C481&ssl=1 +[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools-extraction.jpg?fit=800%2C564&ssl=1 +[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools-folder.jpg?fit=800%2C487&ssl=1 +[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools-installation-ubuntu.jpg?fit=800%2C492&ssl=1 diff --git a/published/20190320 Move your dotfiles to version control.md b/published/20190320 Move your dotfiles to version control.md new file mode 100644 index 0000000000..0b99773fa9 --- /dev/null +++ b/published/20190320 Move your dotfiles to version control.md @@ -0,0 +1,125 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11419-1.html) +[#]: subject: (Move your dotfiles to version control) +[#]: via: (https://opensource.com/article/19/3/move-your-dotfiles-version-control) +[#]: author: (Matthew Broberg https://opensource.com/users/mbbroberg) + +把“点文件”放到版本控制中 +====== + +> 通过在 GitLab 或 GitHub 上分享你的点文件,可以在整个系统上备份或同步你的自定义配置。 + +![](https://img.linux.net.cn/data/attachment/album/201910/03/205222yzo1rbck6accccvo.jpg) + +通过隐藏文件集(称为点文件dotfile)来定制操作系统是个非常棒的想法。在这篇 [Shell 点文件可以为你做点什么][1]中,H. "Waldo" Grunenwald 详细介绍了为什么以及如何设置点文件的细节。现在让我们深入探讨分享它们的原因和方式。 + +### 什么是点文件? + +“点文件dotfile”是指我们计算机中四处漂泊的配置文件。这些文件通常在文件名的开头以 `.` 开头,例如 `.gitconfig`,并且操作系统通常在默认情况下将其隐藏。例如,当我在 MacOS 上使用 `ls -a` 时,它才会显示所有可爱的点文件,否则就不会显示这些点文件。 + +``` +dotfiles on master +➜ ls +README.md  Rakefile   bin       misc    profiles   zsh-custom + +dotfiles on master +➜ ls -a +.               .gitignore      .oh-my-zsh      README.md       zsh-custom +..              
.gitmodules     .tmux           Rakefile +.gemrc          .global_ignore .vimrc           bin +.git            .gvimrc         .zlogin         misc +.gitconfig      .maid           .zshrc          profiles +``` + +如果看一下用于 Git 配置的 `.gitconfig`,我能看到大量的自定义配置。我设置了帐户信息、终端颜色首选项和大量别名,这些别名可以使我的命令行界面看起来就像我的一样。这是 `[alias]` 块的摘录: + +``` +87   # Show the diff between the latest commit and the current state +88   d = !"git diff-index --quiet HEAD -- || clear; git --no-pager diff --patch-with-stat" +89 +90   # `git di $number` shows the diff between the state `$number` revisions ago and the current state +91   di = !"d() { git diff --patch-with-stat HEAD~$1; }; git diff-index --quiet HEAD -- || clear; d" +92 +93   # Pull in remote changes for the current repository and all its submodules +94   p = !"git pull; git submodule foreach git pull origin master" +95 +96   # Checkout a pull request from origin (of a github repository) +97   pr = !"pr() { git fetch origin pull/$1/head:pr-$1; git checkout pr-$1; }; pr" +``` + +由于我的 `.gitconfig` 有 200 多行的自定义设置,我无意于在我使用的每一台新计算机或系统上重写它,其他人肯定也不想这样。这是分享点文件变得越来越流行的原因之一,尤其是随着社交编码网站 GitHub 的兴起。正式提倡分享点文件的文章是 Zach Holman 在 2008 年发表的《[点文件意味着被复刻][2]》。其前提到今天依然如此:我想与我自己、与点文件新手,以及那些分享了他们的自定义配置从而教会了我很多知识的人分享它们。 + +### 分享点文件 + +我们中的许多人拥有多个系统,或者知道硬盘变化无常,因此我们希望备份我们精心策划的自定义设置。那么我们如何在环境之间同步这些精彩的文件? + +我最喜欢的答案是分布式版本控制,最好是可以为我处理繁重任务的服务。我经常使用 GitHub,随着我对 GitLab 的使用经验越来越丰富,我肯定会一如既往地继续喜欢它。任何一个这样的服务都是共享你的信息的理想场所。要自己设置的话可以这样做: + +1. 登录到你首选的基于 Git 的服务。 +2. 创建一个名为 `dotfiles` 的存储库。(将其设置为公开!分享即关爱。) +3. 将其克隆到你的本地环境。(你可能需要设置 Git 配置命令来克隆存储库。GitHub 和 GitLab 都会提示你需要运行的命令。) +4. 将你的点文件复制到该文件夹中。 +5. 将它们符号链接回到其目标文件夹(最常见的是 `$HOME`)。 +6. 将它们推送到远程存储库。 + +![](https://opensource.com/sites/default/files/uploads/gitlab-new-project.png) + +上面的步骤 4 是这项工作的关键,可能有些棘手。无论是使用脚本还是手动执行,工作流程都是从 `dotfiles` 文件夹符号链接到点文件的目标位置,以便对点文件的任何更新都可以轻松地推送到远程存储库。要对我的 `.gitconfig` 文件执行此操作,我要输入: + +``` +$ cd dotfiles/ +$ ln -nfs .gitconfig $HOME/.gitconfig +``` + +添加到符号链接命令的标志还具有其他一些用处: + +* `-s` 创建符号链接而不是硬链接。 +* `-f` 在发生错误时继续做其他符号链接(此处不需要,但在循环中很有用) +* `-n` 避免符号链接到一个符号链接文件(等同于其他版本的 `ln` 的 `-h` 标志) + +如果要更深入地研究可用参数,可以查看 IEEE 和开放小组的 [ln 规范][3]以及 [MacOS 10.14.3] [4] 上的版本。自从其他人的点文件中拉取出这些标志以来,我才发现了这些标志。 + +你还可以使用一些其他代码来简化更新,例如我从 [Brad Parbs][6] 复刻的 [Rakefile][5]。另外,你也可以像 Jeff Geerling [在其点文件中][7]那样,使它保持极其简单的状态。他使用[此 Ansible 剧本][8]对文件进行符号链接。这样使所有内容保持同步很容易:你可以从点文件的文件夹中进行 cron 作业或偶尔进行 `git push`。 + +### 简单旁注:什么不能分享 + +在继续之前,值得注意的是你不应该添加到共享的点文件存储库中的内容 —— 即使它以点开头。任何有安全风险的东西,例如 `.ssh/` 文件夹中的文件,都不是使用此方法分享的好选择。确保在在线发布配置文件之前仔细检查配置文件,并再三检查文件中没有 API 令牌。 + +### 我应该从哪里开始? 
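+
+如果想先动手试一试,下面是一个最小的引导脚本示意,对应上文的第 4、5 步:把仓库里的点文件符号链接回 `$HOME`。其中的仓库路径和文件列表都是假设的,请换成你自己的:
+
+```
+#!/bin/sh
+# 示意片段:把 dotfiles 仓库中的点文件符号链接到 $HOME
+# 假设:仓库位于 ~/dotfiles;文件列表为虚构示例
+cd "$HOME/dotfiles" || exit 1
+
+for f in .gitconfig .zshrc .vimrc; do
+    ln -nfs "$PWD/$f" "$HOME/$f"
+done
+```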
+ +如果你不熟悉 Git,那么我[有关 Git 术语的文章][9]和常用命令[备忘清单][10]将会帮助你继续前进。 + +还有其他超棒的资源可帮助你开始使用点文件。多年前,我就发现了 [dotfiles.github.io][11],并继续使用它来更广泛地了解人们在做什么。在其他人的点文件中隐藏了许多秘传知识。花时间浏览一些,大胆地将它们添加到自己的内容中。 + +我希望这是让你在计算机上拥有一致的点文件的快乐开端。 + +你最喜欢的点文件技巧是什么?添加评论或在 Twitter 上找我 [@mbbroberg][12]。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/3/move-your-dotfiles-version-control + +作者:[Matthew Broberg][a] +选题:[lujun9972][b] +译者:[wxy](https://github.com/wxy) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/mbbroberg +[b]: https://github.com/lujun9972 +[1]: https://linux.cn/article-11417-1.html +[2]: https://zachholman.com/2010/08/dotfiles-are-meant-to-be-forked/ +[3]: http://pubs.opengroup.org/onlinepubs/9699919799/utilities/ln.html +[4]: https://www.unix.com/man-page/FreeBSD/1/ln/ +[5]: https://github.com/mbbroberg/dotfiles/blob/master/Rakefile +[6]: https://github.com/bradp/dotfiles +[7]: https://github.com/geerlingguy/dotfiles +[8]: https://github.com/geerlingguy/mac-dev-playbook +[9]: https://opensource.com/article/19/2/git-terminology +[10]: https://opensource.com/downloads/cheat-sheet-git +[11]: http://dotfiles.github.io/ +[12]: https://twitter.com/mbbroberg?lang=en diff --git a/published/20190513 Blockchain 2.0 - Introduction To Hyperledger Fabric -Part 10.md b/published/20190513 Blockchain 2.0 - Introduction To Hyperledger Fabric -Part 10.md new file mode 100644 index 0000000000..5b3be0ba08 --- /dev/null +++ b/published/20190513 Blockchain 2.0 - Introduction To Hyperledger Fabric -Part 10.md @@ -0,0 +1,73 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11461-1.html) +[#]: subject: (Blockchain 2.0 – Introduction To Hyperledger Fabric [Part 10]) +[#]: via: (https://www.ostechnix.com/blockchain-2-0-introduction-to-hyperledger-fabric/) +[#]: author: (sk https://www.ostechnix.com/author/sk/) + +区块链 2.0:Hyperledger Fabric 介绍(十) +====== + +![Hyperledger Fabric][1] + +### Hyperledger Fabric + +[Hyperledger 项目][2] 是一个伞形组织,包括许多正在开发的不同模块和系统。在这些子项目中,最受欢迎的是 “Hyperledger Fabric”。这篇博文将探讨一旦区块链系统开始大量使用到主流,将使 Fabric 在不久的将来成为几乎不可或缺的功能。最后,我们还将快速了解开发人员和爱好者们需要了解的有关 Hyperledger Fabric 技术的知识。 + +### 起源 + +按照 Hyperledger 项目的常规方式,Fabric 由其核心成员之一 IBM “捐赠”给该组织,而 IBM 以前是该组织的主要开发者。由 IBM 共享的这个技术平台在 Hyperledger 项目中进行了联合开发,来自 100 多个成员公司和机构为之做出了贡献。 + +目前,Fabric 正处于 LTS 版本的 v1.4,该版本已经发展很长一段时间,并且被视为企业管理业务数据的解决方案。Hyperledger 项目的核心愿景也必然会渗透到 Fabric 中。Hyperledger Fabric 系统继承了所有企业级的可扩展功能,这些功能已深深地刻入到 Hyperledger 组织旗下所有的项目当中。 + +### Hyperledger Fabric 的亮点 + +Hyperledger Fabric 提供了多种功能和标准,这些功能和标准围绕着支持快速开发和模块化体系结构的使命而构建。此外,与竞争对手(主要是瑞波和[以太坊][3])相比,Fabric 明确用于封闭和[许可区块链][4]。它们的核心目标是开发一套工具,这些工具将帮助区块链开发人员创建定制的解决方案,而不是创建独立的生态系统或产品。 + +Hyperledger Fabric 的一些亮点如下: + +#### 许可区块链系统 + +这是一个 Hyperledger Fabric 与其他平台(如以太坊和瑞波)差异很大的地方。默认情况下,Fabric 是一种旨在实现私有许可的区块链的工具。此类区块链不能被所有人访问,并且其中致力于达成共识或验证交易的节点将由中央机构进行选择。这对于某些应用(例如银行和保险)可能很重要,在这些应用中,交易必须由中央机构而不是参与者来验证。 + +#### 机密和受控的信息流 + +Fabric 内置了权限系统,该权限系统将视情况限制特定组或某些个人中的信息流。与公有区块链不同,在公有区块链中,任何运行节点的人都可以对存储在区块链中的数据进行复制和选择性访问,而 Fabric 系统的管理员可以选择谁能访问共享的信息,以及访问的方式。与现有竞争产品相比,它还有以更好的安全性标准对存储的数据进行加密的子系统。 + +#### 即插即用架构 + +Hyperledger Fabric 具有即插即用类型的体系结构。可以选择实施系统的各个组件,而开发人员看不到用处的系统组件可能会被废弃。Fabric 采取高度模块化和可定制的方式进行开发,而不是一种与其竞争对手采用的“一种方法适应所有需求”的方式。对于希望快速构建精益系统的公司和公司而言,这尤其有吸引力。这与 Fabric 和其它 Hyperledger 
组件的互操作性相结合,意味着开发人员和设计人员现在可以使用各种标准化工具,而不必从其他来源提取代码并随后进行集成。它还提供了一种相当可靠的方式来构建健壮的模块化系统。 + +#### 智能合约和链码 + +运行在区块链上的分布式应用程序称为[智能合约][5]。虽然智能合约这个术语或多或少与以太坊平台相关联,但链码chaincode是 Hyperledger 阵营中为其赋予的名称。链码应用程序除了拥有 DApp 中有的所有优点之外,使 Hyperledger 与众不同的是,该应用程序的代码可以用多种高级编程语言编写。它本身支持 [Go][6] 和 JavaScript,并且在与适当的编译器模块集成后还支持许多其它编程语言。尽管这一事实在此时可能并不代表什么,但这意味着,如果可以将现有人才用于正在进行的涉及区块链的项目,从长远来看,这有可能为公司节省数十亿美元的人员培训和管理费用。开发人员可以使用自己喜欢的语言进行编码,从而在 Hyperledger Fabric 上开始构建应用程序,而无需学习或培训平台特定的语言和语法。这提供了 Hyperledger Fabric 当前竞争对手无法提供的灵活性。 + +### 总结 + +* Hyperledger Fabric 是一个后端驱动程序平台,是一个主要针对需要区块链或其它分布式账本技术的集成项目。因此,除了次要的脚本功能外,它不提供任何面向用户的服务。(认可以为​​它更像是一种脚本语言。) +* Hyperledger Fabric 支持针对特定用例构建侧链。如果开发人员希望将一组用户或参与者隔离到应用程序的特定部分或功能,则可以通过侧链来实现。侧链是衍生自主要父代的区块链,但在其初始块之后形成不同的链。产生新链的块将不受新链进一步变化的影响,即使将新信息添加到原始链中,新链也将保持不变。此功能将有助于扩展正在开发的平台,并引入用户特定的和案例特定的处理功能。 +* 前面的功能还意味着并非所有用户都会像通常对公有链所期望的那样拥有区块链中所有数据的“精确”副本。参与节点将具有仅与之相关的数据副本。例如,假设有一个类似于印度的 PayTM 的应用程序,该应用程序具有钱包功能以及电子商务功能。但是,并非所有的钱包用户都使用 PayTM 在线购物。在这种情况下,只有活跃的购物者将在 PayTM 电子商务网站上拥有相应的交易链,而钱包用户将仅拥有存储钱包交易的链的副本。这种灵活的数据存储和检索体系结构在扩展时非常重要,因为大量的单链区块链已经显示出会增加处理交易的前置时间。这样可以保持链的精简和分类。 + +我们将在以后的文章中详细介绍 Hyperledger Project 下的其他模块。 + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/blockchain-2-0-introduction-to-hyperledger-fabric/ + +作者:[sk][a] +选题:[lujun9972][b] +译者:[wxy](https://github.com/wxy) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/wp-content/uploads/2019/05/Hyperledger-Fabric-720x340.png +[2]: https://www.ostechnix.com/blockchain-2-0-an-introduction-to-hyperledger-project-hlp/ +[3]: https://www.ostechnix.com/blockchain-2-0-what-is-ethereum/ +[4]: https://www.ostechnix.com/blockchain-2-0-public-vs-private-blockchain-comparison/ +[5]: https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/ +[6]: https://www.ostechnix.com/install-go-language-linux/ diff --git a/published/20190627 RPM packages explained.md b/published/20190627 RPM packages explained.md new file mode 100644 index 0000000000..7f036a1504 --- /dev/null +++ b/published/20190627 RPM packages explained.md @@ -0,0 +1,313 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11452-1.html) +[#]: subject: (RPM packages explained) +[#]: via: (https://fedoramagazine.org/rpm-packages-explained/) +[#]: author: (Ankur Sinha "FranciscoD" https://fedoramagazine.org/author/ankursinha/) + +RPM 包初窥 +====== + +![][1] + +也许,Fedora 社区追求其[促进自由和开源的软件及内容的使命][2]的最著名的方式就是开发 [Fedora 软件发行版][3]了。因此,我们将很大一部分的社区资源用于此任务也就不足为奇了。这篇文章总结了这些软件是如何“打包”的,以及使之成为可能的基础工具,如 `rpm` 之类。 + +### RPM:最小的软件单元 + +可供用户选择的“版本”和“风味版”([spins][4] / [labs][5] / [silverblue][6])其实非常相似。它们都是由各种软件组成的,这些软件经过混合和搭配,可以很好地协同工作。它们之间的不同之处在于放入其中的具体工具不同。这种选择取决于它们所针对的用例。所有这些的“版本”和“风味版”基本组成单位都是 RPM 软件包文件。 + +RPM 文件是类似于 ZIP 文件或 tarball 的存档文件。实际上,它们使用了压缩来减小存档文件的大小。但是,除了文件之外,RPM 存档中还包含有关软件包的元数据。可以使用 `rpm` 工具查询: + +``` +$ rpm -q fpaste +fpaste-0.3.9.2-2.fc30.noarch + +$ rpm -qi fpaste +Name : fpaste +Version : 0.3.9.2 +Release : 2.fc30 +Architecture: noarch +Install Date: Tue 26 Mar 2019 08:49:10 GMT +Group : Unspecified +Size : 64144 +License : GPLv3+ +Signature : RSA/SHA256, Thu 07 Feb 2019 15:46:11 GMT, Key ID ef3c111fcfc659b9 +Source RPM : fpaste-0.3.9.2-2.fc30.src.rpm +Build Date : Thu 31 Jan 2019 20:06:01 GMT +Build Host : 
buildhw-07.phx2.fedoraproject.org +Relocations : (not relocatable) +Packager : Fedora Project +Vendor : Fedora Project +URL : https://pagure.io/fpaste +Bug URL : https://bugz.fedoraproject.org/fpaste +Summary : A simple tool for pasting info onto sticky notes instances +Description : +It is often useful to be able to easily paste text to the Fedora +Pastebin at http://paste.fedoraproject.org and this simple script +will do that and return the resulting URL so that people may +examine the output. This can hopefully help folks who are for +some reason stuck without X, working remotely, or any other +reason they may be unable to paste something into the pastebin + +$ rpm -ql fpaste +/usr/bin/fpaste +/usr/share/doc/fpaste +/usr/share/doc/fpaste/README.rst +/usr/share/doc/fpaste/TODO +/usr/share/licenses/fpaste +/usr/share/licenses/fpaste/COPYING +/usr/share/man/man1/fpaste.1.gz +``` + +安装 RPM 软件包后,`rpm` 工具可以知道具体哪些文件被添加到了系统中。因此,删除该软件包也会删除这些文件,并使系统保持一致状态。这就是为什么要尽可能地使用 `rpm` 安装软件,而不是从源代码安装软件的原因。 + +### 依赖关系 + +如今,完全独立的软件已经非常罕见。甚至 [fpaste][7],连这样一个简单的单个文件的 Python 脚本,都需要安装 Python 解释器。因此,如果系统未安装 Python(几乎不可能,但有可能),则无法使用 `fpaste`。用打包者的术语来说,“Python 是 `fpaste` 的**运行时依赖项**。” + +构建 RPM 软件包时(本文不讨论构建 RPM 的过程),生成的归档文件中包括了所有这些元数据。这样,与 RPM 软件包归档文件交互的工具就知道必须要安装其它的什么东西,以便 `fpaste` 可以正常工作: + +``` +$ rpm -q --requires fpaste +/usr/bin/python3 +python3 +rpmlib(CompressedFileNames) <= 3.0.4-1 +rpmlib(FileDigests) <= 4.6.0-1 +rpmlib(PayloadFilesHavePrefix) <= 4.0-1 +rpmlib(PayloadIsXz) <= 5.2-1 + +$ rpm -q --provides fpaste +fpaste = 0.3.9.2-2.fc30 + +$ rpm -qi python3 +Name : python3 +Version : 3.7.3 +Release : 3.fc30 +Architecture: x86_64 +Install Date: Thu 16 May 2019 18:51:41 BST +Group : Unspecified +Size : 46139 +License : Python +Signature : RSA/SHA256, Sat 11 May 2019 17:02:44 BST, Key ID ef3c111fcfc659b9 +Source RPM : python3-3.7.3-3.fc30.src.rpm +Build Date : Sat 11 May 2019 01:47:35 BST +Build Host : buildhw-05.phx2.fedoraproject.org +Relocations : (not relocatable) +Packager : Fedora Project +Vendor : Fedora Project +URL : https://www.python.org/ +Bug URL : https://bugz.fedoraproject.org/python3 +Summary : Interpreter of the Python programming language +Description : +Python is an accessible, high-level, dynamically typed, interpreted programming +language, designed with an emphasis on code readability. +It includes an extensive standard library, and has a vast ecosystem of +third-party libraries. + +The python3 package provides the "python3" executable: the reference +interpreter for the Python language, version 3. +The majority of its standard library is provided in the python3-libs package, +which should be installed automatically along with python3. +The remaining parts of the Python standard library are broken out into the +python3-tkinter and python3-test packages, which may need to be installed +separately. + +Documentation for Python is provided in the python3-docs package. + +Packages containing additional libraries for Python are generally named with +the "python3-" prefix. 
+ +$ rpm -q --provides python3 +python(abi) = 3.7 +python3 = 3.7.3-3.fc30 +python3(x86-64) = 3.7.3-3.fc30 +python3.7 = 3.7.3-3.fc30 +python37 = 3.7.3-3.fc30 +``` + +### 解决 RPM 依赖关系 + +虽然 `rpm` 知道每个归档文件所需的依赖关系,但不知道在哪里找到它们。这是设计使然:`rpm` 仅适用于本地文件,必须具体告知它们的位置。因此,如果你尝试安装单个 RPM 软件包,则 `rpm` 找不到该软件包的运行时依赖项时就会出错。本示例尝试安装从 Fedora 软件包集中下载的软件包: + +``` +$ ls +python3-elephant-0.6.2-3.fc30.noarch.rpm + +$ rpm -qpi python3-elephant-0.6.2-3.fc30.noarch.rpm +Name : python3-elephant +Version : 0.6.2 +Release : 3.fc30 +Architecture: noarch +Install Date: (not installed) +Group : Unspecified +Size : 2574456 +License : BSD +Signature : (none) +Source RPM : python-elephant-0.6.2-3.fc30.src.rpm +Build Date : Fri 14 Jun 2019 17:23:48 BST +Build Host : buildhw-02.phx2.fedoraproject.org +Relocations : (not relocatable) +Packager : Fedora Project +Vendor : Fedora Project +URL : http://neuralensemble.org/elephant +Bug URL : https://bugz.fedoraproject.org/python-elephant +Summary : Elephant is a package for analysis of electrophysiology data in Python +Description : +Elephant - Electrophysiology Analysis Toolkit Elephant is a package for the +analysis of neurophysiology data, based on Neo. + +$ rpm -qp --requires python3-elephant-0.6.2-3.fc30.noarch.rpm +python(abi) = 3.7 +python3.7dist(neo) >= 0.7.1 +python3.7dist(numpy) >= 1.8.2 +python3.7dist(quantities) >= 0.10.1 +python3.7dist(scipy) >= 0.14.0 +python3.7dist(six) >= 1.10.0 +rpmlib(CompressedFileNames) <= 3.0.4-1 +rpmlib(FileDigests) <= 4.6.0-1 +rpmlib(PartialHardlinkSets) <= 4.0.4-1 +rpmlib(PayloadFilesHavePrefix) <= 4.0-1 +rpmlib(PayloadIsXz) <= 5.2-1 + +$ sudo rpm -i ./python3-elephant-0.6.2-3.fc30.noarch.rpm +error: Failed dependencies: + python3.7dist(neo) >= 0.7.1 is needed by python3-elephant-0.6.2-3.fc30.noarch + python3.7dist(quantities) >= 0.10.1 is needed by python3-elephant-0.6.2-3.fc30.noarch +``` + +理论上,你可以下载 `python3-elephant` 所需的所有软件包,并告诉 `rpm` 它们都在哪里,但这并不方便。如果 `python3-neo` 和 `python3-quantities` 还有其它的运行时要求怎么办?很快,这种“依赖链”就会变得相当复杂。 + +#### 存储库 + +幸运的是,有了 `dnf` 和它的朋友们,可以帮助解决此问题。与 `rpm` 不同,`dnf` 能感知到**存储库**。存储库是程序包的集合,带有告诉 `dnf` 这些存储库包含什么内容的元数据。所有 Fedora 系统都带有默认启用的默认 Fedora 存储库: + +``` +$ sudo dnf repolist +repo id              repo name                             status +fedora               Fedora 30 - x86_64                    56,582 +fedora-modular       Fedora Modular 30 - x86_64               135 +updates              Fedora 30 - x86_64 - Updates           8,573 +updates-modular      Fedora Modular 30 - x86_64 - Updates     138 +updates-testing      Fedora 30 - x86_64 - Test Updates      8,458 +``` + +在 Fedora 快速文档中有[这些存储库][8]以及[如何管理][9]它们的更多信息。 + +`dnf` 可用于查询存储库以获取有关它们包含的软件包信息。它还可以在这些存储库中搜索软件,或从中安装/卸载/升级软件包: + +``` +$ sudo dnf search elephant +Last metadata expiration check: 0:05:21 ago on Sun 23 Jun 2019 14:33:38 BST. +============================================================================== Name & Summary Matched: elephant ============================================================================== +python3-elephant.noarch : Elephant is a package for analysis of electrophysiology data in Python +python3-elephant.noarch : Elephant is a package for analysis of electrophysiology data in Python + +$ sudo dnf list \*elephant\* +Last metadata expiration check: 0:05:26 ago on Sun 23 Jun 2019 14:33:38 BST. 
+Available Packages +python3-elephant.noarch 0.6.2-3.fc30 updates-testing +python3-elephant.noarch 0.6.2-3.fc30 updates +``` + +#### 安装依赖项 + +现在使用 `dnf` 安装软件包时,它将*解决*所有必需的依赖项,然后调用 `rpm` 执行该事务操作: + +``` +$ sudo dnf install python3-elephant +Last metadata expiration check: 0:06:17 ago on Sun 23 Jun 2019 14:33:38 BST. +Dependencies resolved. +============================================================================================================================================================================================== + Package Architecture Version Repository Size +============================================================================================================================================================================================== +Installing: + python3-elephant noarch 0.6.2-3.fc30 updates-testing 456 k +Installing dependencies: + python3-neo noarch 0.8.0-0.1.20190215git49b6041.fc30 fedora 753 k + python3-quantities noarch 0.12.2-4.fc30 fedora 163 k +Installing weak dependencies: + python3-igor noarch 0.3-5.20150408git2c2a79d.fc30 fedora 63 k + +Transaction Summary +============================================================================================================================================================================================== +Install 4 Packages + +Total download size: 1.4 M +Installed size: 7.0 M +Is this ok [y/N]: y +Downloading Packages: +(1/4): python3-igor-0.3-5.20150408git2c2a79d.fc30.noarch.rpm 222 kB/s | 63 kB 00:00 +(2/4): python3-elephant-0.6.2-3.fc30.noarch.rpm 681 kB/s | 456 kB 00:00 +(3/4): python3-quantities-0.12.2-4.fc30.noarch.rpm 421 kB/s | 163 kB 00:00 +(4/4): python3-neo-0.8.0-0.1.20190215git49b6041.fc30.noarch.rpm 840 kB/s | 753 kB 00:00 +---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +Total 884 kB/s | 1.4 MB 00:01 +Running transaction check +Transaction check succeeded. +Running transaction test +Transaction test succeeded. +Running transaction + Preparing : 1/1 + Installing : python3-quantities-0.12.2-4.fc30.noarch 1/4 + Installing : python3-igor-0.3-5.20150408git2c2a79d.fc30.noarch 2/4 + Installing : python3-neo-0.8.0-0.1.20190215git49b6041.fc30.noarch 3/4 + Installing : python3-elephant-0.6.2-3.fc30.noarch 4/4 + Running scriptlet: python3-elephant-0.6.2-3.fc30.noarch 4/4 + Verifying : python3-elephant-0.6.2-3.fc30.noarch 1/4 + Verifying : python3-igor-0.3-5.20150408git2c2a79d.fc30.noarch 2/4 + Verifying : python3-neo-0.8.0-0.1.20190215git49b6041.fc30.noarch 3/4 + Verifying : python3-quantities-0.12.2-4.fc30.noarch 4/4 + +Installed: + python3-elephant-0.6.2-3.fc30.noarch python3-igor-0.3-5.20150408git2c2a79d.fc30.noarch python3-neo-0.8.0-0.1.20190215git49b6041.fc30.noarch python3-quantities-0.12.2-4.fc30.noarch + +Complete! 
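+
+$ # 验证一下(示意):安装完成后,rpm 就能查询到这个软件包了
+$ rpm -q python3-elephant
+python3-elephant-0.6.2-3.fc30.noarch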
+``` + +请注意,`dnf` 甚至还安装了`python3-igor`,而它不是 `python3-elephant` 的直接依赖项。 + +### DnfDragora:DNF 的一个图形界面 + +尽管技术用户可能会发现 `dnf` 易于使用,但并非所有人都这样认为。[Dnfdragora][10] 通过为 `dnf` 提供图形化前端来解决此问题。 + +![dnfdragora (version 1.1.1-2 on Fedora 30) listing all the packages installed on a system.][11] + +从上面可以看到,dnfdragora 似乎提供了 `dnf` 的所有主要功能。 + +Fedora 中还有其他工具也可以管理软件包,GNOME 的“软件Software”和“发现Discover”就是其中两个。GNOME “软件”仅专注于图形应用程序。你无法使用这个图形化前端来安装命令行或终端工具,例如 `htop` 或 `weechat`。但是,GNOME “软件”支持安装 `dnf` 所不支持的 [Flatpak][12] 和 Snap 应用程序。它们是针对不同目标受众的不同工具,因此提供了不同的功能。 + +这篇文章仅触及到了 Fedora 软件的生命周期的冰山一角。本文介绍了什么是 RPM 软件包,以及使用 `rpm` 和 `dnf` 的主要区别。 + +在以后的文章中,我们将详细介绍: + +* 创建这些程序包所需的过程 +* 社区如何测试它们以确保它们正确构建 +* 社区用来将其给到社区用户的基础设施 + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/rpm-packages-explained/ + +作者:[Ankur Sinha "FranciscoD"][a] +选题:[lujun9972][b] +译者:[wxy](https://github.com/wxy) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/ankursinha/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2019/06/rpm.png-816x345.jpg +[2]: https://docs.fedoraproject.org/en-US/project/#_what_is_fedora_all_about +[3]: https://getfedora.org +[4]: https://spins.fedoraproject.org/ +[5]: https://labs.fedoraproject.org/ +[6]: https://silverblue.fedoraproject.org/ +[7]: https://src.fedoraproject.org/rpms/fpaste +[8]: https://docs.fedoraproject.org/en-US/quick-docs/repositories/ +[9]: https://docs.fedoraproject.org/en-US/quick-docs/adding-or-removing-software-repositories-in-fedora/ +[10]: https://src.fedoraproject.org/rpms/dnfdragora +[11]: https://fedoramagazine.org/wp-content/uploads/2019/06/dnfdragora-1024x558.png +[12]: https://fedoramagazine.org/getting-started-flatpak/ diff --git a/sources/tech/20190701 Learn how to Record and Replay Linux Terminal Sessions Activity.md b/published/20190701 Learn how to Record and Replay Linux Terminal Sessions Activity.md similarity index 54% rename from sources/tech/20190701 Learn how to Record and Replay Linux Terminal Sessions Activity.md rename to published/20190701 Learn how to Record and Replay Linux Terminal Sessions Activity.md index bf076d08ea..060ba6926a 100644 --- a/sources/tech/20190701 Learn how to Record and Replay Linux Terminal Sessions Activity.md +++ b/published/20190701 Learn how to Record and Replay Linux Terminal Sessions Activity.md @@ -1,48 +1,50 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: translator: (wxy) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11429-1.html) [#]: subject: (Learn how to Record and Replay Linux Terminal Sessions Activity) [#]: via: (https://www.linuxtechi.com/record-replay-linux-terminal-sessions-activity/) [#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/) -Learn how to Record and Replay Linux Terminal Sessions Activity +在 Linux 上记录和重放终端会话活动 ====== -Generally, all Linux administrators use **history** command to track which commands were executed in previous sessions, but there is one limitation of history command is that it doesn’t store the command’s output. There can be some scenarios where we want to check commands output of previous session and want to compare it with current session. 
Apart from this, there are some situations where we are troubleshooting the issues on Linux production boxes and want to save all terminal session activities for future reference, so in such cases script command become handy. +通常,Linux 管理员们都使用 `history` 命令来跟踪在先前的会话中执行过哪些命令,但是 `history` 命令的局限性在于它不存储命令的输出。在某些情况下,我们要检查上一个会话的命令输出,并希望将其与当前会话进行比较。除此之外,在某些情况下,我们正在对 Linux 生产环境中的问题进行故障排除,并希望保存所有终端会话活动以供将来参考,因此在这种情况下,`script` 命令就变得很方便。 - +![](https://img.linux.net.cn/data/attachment/album/201910/06/122659mmi64z8ryr4z2n8a.jpg) -Script is a command line tool which is used to capture or record your Linux server terminal sessions activity and later the recorded session can be replayed using scriptreplay command. In this article we will demonstrate how to install script command line tool and how to record Linux server terminal session activity and then later we will see how the recorded session can be replayed using **scriptreplay** command. +`script` 是一个命令行工具,用于捕获/记录你的 Linux 服务器终端会话活动,以后可以使用 `scriptreplay` 命令重放记录的会话。在本文中,我们将演示如何安装 `script` 命令行工具以及如何记录 Linux 服务器终端会话活动,然后,我们将看到如何使用 `scriptreplay` 命令来重放记录的会话。 -### Installation of Script tool on RHEL 7/ CentOS 7 +### 安装 script 工具 -Script command is provided by the rpm package “**util-linux**”, in case it is not installed on your CentOS 7 / RHEL 7 system , run the following yum command, +#### 在 RHEL 7/ CentOS 7 上安装 script 工具 + +`script` 命令由 RPM 包 `util-linux` 提供,如果你没有在你的 CentOS 7 / RHEL 7 系统上安装它,运行下面的 `yum` 安装它: ``` [root@linuxtechi ~]# yum install util-linux -y ``` -**On RHEL 8 / CentOS 8** +#### 在 RHEL 8 / CentOS 8 上安装 script 工具 -Run the following dnf command to install script utility on RHEL 8 and CentOS 8 system, +运行下面的 `dnf` 命令来在 RHEL 8 / CentOS 8 上安装 `script` 工具: ``` [root@linuxtechi ~]# dnf install util-linux -y ``` -**Installation of Script tool on Debian based systems (Ubuntu / Linux Mint)** +#### 在基于 Debian 的系统(Ubuntu / Linux Mint)上安装 script 工具 -Execute the beneath apt-get command to install script utility +运行下面的 `apt-get` 命令来安装 `script` 工具: ``` root@linuxtechi ~]# apt-get install util-linux -y ``` -### How to Use script utility +### 如何使用 script 工具 -Use of script command is straight forward, type script command on terminal then hit enter, it will start capturing your current terminal session activities inside a file called “**typescript**” +直接使用 `script` 命令,在终端上键入 `script` 命令,然后按回车,它将开始在名为 `typescript` 的文件中捕获当前的终端会话活动。 ``` [root@linuxtechi ~]# script @@ -50,7 +52,7 @@ Script started, file is typescript [root@linuxtechi ~]# ``` -To stop recording the session activities, type exit command and hit enter. 
+要停止记录会话活动,请键入 `exit` 命令,然后按回车: ``` [root@linuxtechi ~]# exit @@ -59,23 +61,23 @@ Script done, file is typescript [root@linuxtechi ~]# ``` -Syntax of Script command: +`script` 命令的语法格式: ``` -~ ] # script {options} {file_name} +~] # script {options} {file_name} ``` -Different options used in script command, +能在 `script` 命令中使用的不同选项: ![options-script-command][1] -Let’s start recording of your Linux terminal session by executing script command and then execute couple of command like ‘**w**’, ‘**route -n**’ , ‘[**df -h**][2]’ and ‘**free-h**’, example is shown below +让我们开始通过执行 `script` 命令来记录 Linux 终端会话,然后执行诸如 `w`,`route -n`,`df -h` 和 `free -h`,示例如下所示: ![script-examples-linux-server][3] -As we can see above, terminal session logs are saved in the file “typescript” +正如我们在上面看到的,终端会话日志保存在文件 `typescript` 中: -Now view the contents of typescript file using [cat][4] / vi command, +现在使用 `cat` / `vi` 命令查看 `typescript` 文件的内容, ``` [root@linuxtechi ~]# ls -l typescript @@ -85,11 +87,11 @@ Now view the contents of typescript file using [cat][4] / vi command, ![typescript-file-content-linux][5] -Above confirms that whatever commands we execute on terminal that have been saved inside the file “typescript” +以上内容确认了我们在终端上执行的所有命令都已保存在 `typescript` 文件中。 -### Use Custom File name in script command +### 在 script 命令中使用定制文件名 -Let’s assume we want to use our customize file name to script command, so specify the file name after script command, in the below example we are using a file name “session-log-(current-date-time).txt” +假设我们要使用自定义文件名来执行 `script` 命令,可以在 `script` 命令后指定文件名。在下面的示例中,我们使用的文件名为 `session-log-(当前日期时间).txt`。 ``` [root@linuxtechi ~]# script sessions-log-$(date +%d-%m-%Y-%T).txt @@ -97,7 +99,7 @@ Script started, file is sessions-log-21-06-2019-01:37:39.txt [root@linuxtechi ~]# ``` -Now run the commands and then type exit, +现在运行该命令并输入 `exit`: ``` [root@linuxtechi ~]# exit @@ -106,9 +108,9 @@ Script done, file is sessions-log-21-06-2019-01:37:39.txt [root@linuxtechi ~]# ``` -### Append the commands output to script file +### 附加命令输出到 script 记录文件 -Let assume script command had already recorded the commands output to a file called session-log.txt file and now we want to append output of new sessions commands output to this file, then use “**-a**” command in script command +假设 `script` 命令已经将命令输出记录到名为 `session-log.txt` 的文件中,现在我们想将新会话命令的输出附加到该文件中,那么可以在 `script` 命令中使用 `-a` 选项。 ``` [root@linuxtechi ~]# script -a sessions-log.txt @@ -129,11 +131,11 @@ Script done, file is sessions-log.txt [root@linuxtechi ~]# ``` -To view updated session’s logs, use “cat session-log.txt ” +要查看更新的会话记录,使用 `cat session-log.txt` 命令。 -### Capture commands output to script file without interactive shell +### 无需 shell 交互而捕获命令输出到 script 记录文件 -Let’s assume we want to capture commands output to a script file, then use **-c** option, example is shown below, +假设我们要捕获命令的输出到会话记录文件,那么使用 `-c` 选项,示例如下所示: ``` [root@linuxtechi ~]# script -c "uptime && hostname && date" root-session.txt @@ -145,9 +147,9 @@ Script done, file is root-session.txt [root@linuxtechi ~]# ``` -### Run script command in quiet mode +### 以静默模式运行 script 命令 -To run script command in quiet mode use **-q** option, this option will suppress the script started and script done message, example is shown below, +要以静默模式运行 `script` 命令,请使用 `-q` 选项,该选项将禁止 `script` 的启动和完成消息,示例如下所示: ``` [root@linuxtechi ~]# script -c "uptime && date" -q root-session.txt @@ -156,11 +158,13 @@ Fri Jun 21 02:01:10 EDT 2019 [root@linuxtechi ~]# ``` -Record Timing information to a file and capture commands output to a 
separate file, this can be achieved in script command by passing timing file (**–timing**) , example is shown below,
+要将时序信息记录到一个文件中,并将命令输出捕获到另一个文件中,可以通过给 `script` 命令传递时序文件选项(`--timing`)来实现,示例如下所示:
 
-Syntax:
+语法格式:
 
-~ ]# script -t <timing-file-name>  {file_name}
+```
+~ ]# script --timing=<timing-file-name> {file_name}
+```
 
 ```
 [root@linuxtechi ~]# script --timing=timing.txt session.log
@@ -185,23 +189,23 @@ Script done, file is session.log
 [root@linuxtechi ~]#
 ```
 
-### Replay recorded Linux terminal session activity
+### 重放记录的 Linux 终端会话活动
 
-Now replay the recorded terminal session activities using scriptreplay command,
+现在,使用 `scriptreplay` 命令重放录制的终端会话活动。
 
-**Note:** Scriptreplay is also provided by rpm package “**util-linux**”. Scriptreplay command requires timing file to work.
+注意:`scriptreplay` 也由 RPM 包 `util-linux` 提供。`scriptreplay` 命令需要时序文件才能工作。
 
 ```
 [root@linuxtechi ~]# scriptreplay --timing=timing.txt session.log
 ```
 
-Output of above command would be something like below,
+上面命令的输出将如下所示:
 
- 
+![](https://www.linuxtechi.com/wp-content/uploads/2019/06/scriptreplay-linux.gif)
 
-### Record all User’s Linux terminal session activities
+### 记录所有用户的 Linux 终端会话活动
 
-There are some business critical Linux servers where we want keep track on all users activity, so this can be accomplished using script command, place the following content in /etc/profile file ,
+在某些关键业务的 Linux 服务器上,我们希望跟踪所有用户的活动,这可以使用 `script` 命令来完成,将以下内容放在 `/etc/profile` 文件中:
 
 ```
 [root@linuxtechi ~]# vi /etc/profile
@@ -218,22 +222,22 @@ fi
 ……………………………………………………
 ```
 
-Save & exit the file.
+保存文件并退出。
 
-Create the session directory under /var/log folder,
+在 `/var/log` 文件夹下创建 `session` 目录:
 
 ```
 [root@linuxtechi ~]# mkdir /var/log/session
 ```
 
-Assign the permissions to session folder,
+为 `session` 文件夹分配权限:
 
 ```
 [root@linuxtechi ~]# chmod 777 /var/log/session/
 [root@linuxtechi ~]#
 ```
 
-Now verify whether above code is working or not. Login to ordinary user to linux server, in my I am using pkumar user,
+现在,验证以上代码是否有效。以普通用户身份登录到 Linux 服务器,在我的例子中使用的是 `pkumar` 用户:
 
 ```
 ~ ] # ssh root@linuxtechi
@@ -263,13 +267,13 @@ Login as root and view user’s linux terminal session activity
 
 ![Session-output-file-linux][6]
 
-We can also use scriptreplay command to replay user’s terminal session activities,
+我们还可以使用 `scriptreplay` 命令来重放用户的终端会话活动:
 
 ```
 [root@linuxtechi session]# scriptreplay --timing session.pkumar.19785.21-06-2019-04\:34\:05.timing session.pkumar.19785.21-06-2019-04\:34\:05
 ```
 
-That’s all from this tutorial, please do share your feedback and comments in the comments section below.
+以上就是本教程的全部内容,请在下面的评论部分中分享你的反馈和评论。 -------------------------------------------------------------------------------- @@ -277,8 +281,8 @@ via: https://www.linuxtechi.com/record-replay-linux-terminal-sessions-activity/ 作者:[Pradeep Kumar][a] 选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) +译者:[wxy](https://github.com/wxy) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20190719 Buying a Linux-ready laptop.md b/published/20190719 Buying a Linux-ready laptop.md new file mode 100644 index 0000000000..91fdeab1ab --- /dev/null +++ b/published/20190719 Buying a Linux-ready laptop.md @@ -0,0 +1,82 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11436-1.html) +[#]: subject: (Buying a Linux-ready laptop) +[#]: via: (https://opensource.com/article/19/7/linux-laptop) +[#]: author: (Ricardo Berlasso https://opensource.com/users/rgb-es) + +我买了一台 Linux 笔记本 +====== + +> Tuxedo 让买一台开箱即用的 Linux 笔记本变得容易。 + +![](https://img.linux.net.cn/data/attachment/album/201910/08/133924vnmbklqh5jkshkmj.jpg) + +最近,我开始使用我买的 Linux 笔记本计算机 Tuxedo Book BC1507。十年前,如果有人告诉我,十年后我可以从 [System76][2]、[Slimbook][3] 和 [Tuxedo][4] 等公司购买到高质量的“企鹅就绪”的笔记本电脑。我可能会发笑。好吧,现在我也在笑,但是很开心! + +除了为免费/自由开源软件(FLOSS)设计计算机之外,这三家公司最近[宣布][5]都试图通过切换到[Coreboot][6]来消除专有的 BIOS 软件。 + +### 买一台 + +Tuxedo Computers 是一家德国公司,生产支持 Linux 的笔记本电脑。实际上,如果你要使用其他的操作系统,则它的价格会更高。 + +购买他们的计算机非常容易。Tuxedo 提供了许多付款方式:不仅包括信用卡,而且还包括 PayPal 甚至银行转帐(LCTT 译注:我们需要支付宝和微信支付……此外,要国际配送,还需要支付运输费和清关费用等)。只需在 Tuxedo 的网页上填写银行转帐表格,公司就会给你发送银行信息。 + +Tuxedo 可以按需构建每台计算机,只需选择基本模型并浏览下拉菜单以选择不同的组件,即可轻松准确地选择所需内容。页面上有很多信息可以指导你进行购买。 + +如果你选择的 Linux 发行版与推荐的发行版不同,则 Tuxedo 会进行“网络安装”,因此请准备好网络电缆以完成安装,也可以将你首选的镜像文件刻录到 USB 盘上。我通过外部 DVD 阅读器来安装刻录了 openSUSE Leap 15.1 安装程序的 DVD,但是你可用你自己的方式。 + +我选择的型号最多可以容纳两个磁盘:一个 SSD,另一个可以是 SSD 或常规硬盘。由于已经超出了我的预算,因此我决定选择传统的 1TB 磁盘并将 RAM 增加到 16GB。该处理器是具有四个内核的第八代 i5。我选择了背光西班牙语键盘、1920×1080/96dpi 屏幕和 SD 卡读卡器。总而言之,这是一个很棒的系统。 + +如果你对默认的英语或德语键盘感觉满意,甚至可以要求在 Meta 键上印上一个企鹅图标!我需要的西班牙语键盘则不提供此选项。 + +### 收货并开箱使用 + +付款完成后仅六个工作日,完好包装的计算机就十分安全地到达了我家。打开计算机包装并解锁电池后,我准备好开始浪了。 + +![Tuxedo Book BC1507][7] + +*我的(物理)桌面上的新玩具。* + +该电脑的设计确实很棒,而且感觉扎实。即使此型号的外壳不是铝制的(LCTT 译注:他们有更好看的铝制外壳的型号),也可以保持凉爽。风扇真的很安静,气流像许多其他笔记本电脑一样导向后边缘,而不是流向侧面。电池可提供数小时的续航时间。BIOS 中的一个名为 FlexiCharger 的选项会在达到一定百分比后停止为电池充电,因此在插入电源长时间工作时,无需卸下电池。 + +键盘真的很舒适,而且非常安静。甚至触摸板按键也很安静!另外,你可以轻松调整背光键盘上的光强度。 + +最后,很容易访问笔记本电脑中的每个组件,因此可以毫无问题地对计算机进行更新或维修。Tuxedo 甚至送了几个备用螺丝! 
+ +### 结语 + +经过一个月的频繁使用,我对该系统感到非常满意。它完全满足了我的要求,并且一切都很完美。 + +因为它们通常是高端系统,所以包含 Linux 的计算机往往比较昂贵。如果你将 Tuxedo 或 Slimbook 电脑的价格与更知名品牌的类似规格的价格进行比较,价格相差无几。如果你想要一台使用自由软件的强大系统,请毫不犹豫地支持这些公司:他们所提供的物有所值。 + +请在评论中让我们知道你在 Tuxedo 和其他“企鹅友好”公司的经历。 + +* * * + +本文基于 Ricardo 的博客 [From Mind to Type][9] 上发表的“ [我的新企鹅笔记本电脑:Tuxedo-Book-BC1507][8]”。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/7/linux-laptop + +作者:[Ricardo Berlasso][a] +选题:[lujun9972][b] +译者:[wxy](https://github.com/wxy) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/rgb-es +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22 (Penguin with green background) +[2]: https://system76.com/ +[3]: https://slimbook.es/en/ +[4]: https://www.tuxedocomputers.com/ +[5]: https://www.tuxedocomputers.com/en/Infos/News/Tuxedo-Computers-stands-for-Free-Software-and-Security-.tuxedo +[6]: https://coreboot.org/ +[7]: https://opensource.com/sites/default/files/uploads/tuxedo-600_0.jpg (Tuxedo Book BC1507) +[8]: https://frommindtotype.wordpress.com/2019/06/17/my-new-penguin-ready-laptop-tuxedo-book-bc1507/ +[9]: https://frommindtotype.wordpress.com/ diff --git a/published/20190809 Mutation testing is the evolution of TDD.md b/published/20190809 Mutation testing is the evolution of TDD.md new file mode 100644 index 0000000000..475673d8b5 --- /dev/null +++ b/published/20190809 Mutation testing is the evolution of TDD.md @@ -0,0 +1,273 @@ +[#]: collector: (lujun9972) +[#]: translator: (Morisun029) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11468-1.html) +[#]: subject: (Mutation testing is the evolution of TDD) +[#]: via: (https://opensource.com/article/19/8/mutation-testing-evolution-tdd) +[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzic) + +变异测试是测试驱动开发(TDD)的演变 +====== + +> 测试驱动开发技术是根据大自然的运作规律创建的,变异测试自然成为 DevOps 演变的下一步。 + +![Ants and a leaf making the word "open"][1] + +在 “[故障是无懈可击的开发运维中的一个特点][2]”,我讨论了故障在通过征求反馈来交付优质产品的过程中所起到的重要作用。敏捷 DevOps 团队就是用故障来指导他们并推动开发进程的。[测试驱动开发][3]Test-driven development(TDD)是任何敏捷 DevOps 团队评估产品交付的[必要条件][4]。以故障为中心的 TDD 方法仅在与可量化的测试配合使用时才有效。 + +TDD 方法仿照大自然是如何运作的以及自然界在进化博弈中是如何产生赢家和输家为模型而建立的。 + +### 自然选择 + +![查尔斯·达尔文][5] + +1859 年,[查尔斯·达尔文][6]Charles Darwin在他的《[物种起源][7]On the Origin of Species》一书中提出了进化论学说。达尔文的论点是,自然变异是由生物个体的自发突变和环境压力共同造成的。环境压力淘汰了适应性较差的生物体,而有利于其他适应性强的生物的发展。每个生物体的染色体都会发生变异,而这些自发的变异会携带给下一代(后代)。然后在自然选择下测试新出现的变异性 —— 当下存在的环境压力是由变异性的环境条件所导致的。 + +这张简图说明了调整适应环境条件的过程。 + +![环境压力对鱼类的影响][8] + +*图1. 不同的环境压力导致自然选择下的不同结果。图片截图来源于[理查德·道金斯的一个视频][9]。* + +该图显示了一群生活在自己栖息地的鱼。栖息地各不相同(海底或河床底部的砾石颜色有深有浅),每条鱼长的也各不相同(鱼身图案和颜色也有深有浅)。 + +这张图还显示了两种情况(即环境压力的两种变化): + + 1. 捕食者在场 + 2. 捕食者不在场 + +在第一种情况下,在砾石颜色衬托下容易凸显出来的鱼被捕食者捕获的风险更高。当砾石颜色较深时,浅色鱼的数量会更少一些。反之亦然,当砾石颜色较浅时,深色鱼的数量会更少。 + +在第二种情况下,鱼完全放松下来进行交配。在没有捕食者和没有交配仪式的情况下,可以预料到相反的结果:在砾石背景下显眼的鱼会有更大的机会被选来交配并将其特性传递给后代。 + +### 选择标准 + +变异性在进行选择时,绝不是任意的、反复无常的、异想天开的或随机的。选择过程中的决定性因素通常是可以度量的。该决定性因素通常称为测试或目标。 + +一个简单的数学例子可以说明这一决策过程。(在该示例中,这种选择不是由自然选择决定的,而是由人为选择决定。)假设有人要求你构建一个小函数,该函数将接受一个正数,然后计算该数的平方根。你将怎么做? 
+
+敏捷 DevOps 团队的方法是快速验证失败。谦虚一点,先承认自己并不真的知道如何开发该函数。这时,你所知道的就是如何描述你想做的事情。从技术上讲,你已准备好进行单元测试。
+
+“单元测试unit test”描述了你的具体期望结果是什么。它可以简单地表述为“给定数字 16,我希望平方根函数返回数字 4”。你可能知道 16 的平方根是 4。但是,你不知道一些较大数字(例如 533)的平方根。
+
+但至少,你已经制定了选择标准,即你的测试或你的期望值。
+
+### 进行故障测试
+
+[.NET Core][10] 平台可以演示该测试。.NET 通常使用 [xUnit.net][11] 作为单元测试框架。(要跟随进行这个代码示例,请安装 .NET Core 和 xUnit.net。)
+
+打开命令行并创建一个文件夹,在该文件夹中实现平方根解决方案。例如,输入:
+
+```
+mkdir square_root
+```
+
+再输入:
+
+```
+cd square_root
+```
+
+为单元测试创建一个单独的文件夹:
+
+```
+mkdir unit_tests
+```
+
+进入 `unit_tests` 文件夹下(`cd unit_tests`),初始化 xUnit 框架:
+
+```
+dotnet new xunit
+```
+
+现在,转到 `square_root` 下,创建 `app` 文件夹:
+
+```
+mkdir app
+cd app
+```
+
+如果有必要的话,为你的代码创建一个脚手架:
+
+```
+dotnet new classlib
+```
+
+现在打开你最喜欢的编辑器开始编码!
+
+在你的代码编辑器中,导航到 `unit_tests` 文件夹,打开 `UnitTest1.cs`。
+
+将 `UnitTest1.cs` 中自动生成的代码替换为:
+
+```
+using System;
+using Xunit;
+using app;
+
+namespace unit_tests{
+
+    public class UnitTest1{
+        Calculator calculator = new Calculator();
+
+        [Fact]
+        public void GivenPositiveNumberCalculateSquareRoot(){
+            var expected = 4;
+            var actual = calculator.CalculateSquareRoot(16);
+            Assert.Equal(expected, actual);
+        }
+    }
+}
+```
+
+该单元测试描述了变量的**期望值**应该为 4。下一行描述了**实际值**。建议通过将输入值发送到称为 `calculator` 的组件来计算**实际值**。对该组件的描述是:它通过接收数值来处理 `CalculateSquareRoot` 消息。该组件尚未开发。但这并不重要,我们在此只是描述期望值。
+
+最后,描述了触发消息发送时发生的情况。此时,判断**期望值**是否等于**实际值**。如果是,则测试通过,目标达成。如果**期望值**不等于**实际值**,则测试失败。
+
+接下来,要实现称为 `calculator` 的组件,在 `app` 文件夹中创建一个新文件,并将其命名为 `Calculator.cs`。要实现计算平方根的函数,请在此新文件中添加以下代码:
+
+```
+namespace app {
+    public class Calculator {
+        public double CalculateSquareRoot(double number) {
+            double bestGuess = number;
+            return bestGuess;
+        }
+    }
+}
+```
+
+在测试之前,你需要通知单元测试如何找到该新组件(`Calculator`)。导航至 `unit_tests` 文件夹,打开 `unit_tests.csproj` 文件。在 `<ItemGroup>` 代码块中添加以下代码:
+
+```
+<ProjectReference Include="../app/app.csproj" />
+```
+
+保存 `unit_tests.csproj` 文件。现在,你可以运行第一个测试了。
+
+切换到命令行,进入 `unit_tests` 文件夹。运行以下命令:
+
+```
+dotnet test
+```
+
+运行单元测试,会输出以下内容:
+
+![单元测试失败后xUnit的输出结果][12]
+
+*图 2. 单元测试失败后 xUnit 的输出结果*
+
+正如你所看到的,单元测试失败了。期望将数字 16 发送到 `calculator` 组件后会输出数字 4,但是输出(`Actual`)的是 16。
+
+恭喜你!创建了第一个故障。单元测试为你提供了强有力的反馈机制,敦促你修复故障。
+
+### 修复故障
+
+要修复故障,你必须改进 `bestGuess`。当下,`bestGuess` 仅获取函数接收的数字并返回。这不够好。
+
+但是,如何找到一种计算平方根值的方法呢? 我有一个主意 —— 看一下大自然母亲是如何解决问题的。
+
+### 效仿大自然的迭代
+
+在第一次(也是唯一的)尝试中要得出正确值是非常难的(几乎不可能)。你必须允许自己进行多次尝试猜测,以增加解决问题的机会。允许多次尝试的一种方法是进行迭代。
+
+要迭代,就要将 `bestGuess` 值存储在 `previousGuess` 变量中,转换 `bestGuess` 的值,然后比较两个值之间的差。如果差为 0,则说明问题已解决。否则,继续迭代。
+
+这是生成任何正数的平方根的函数体:
+
+```
+double bestGuess = number;
+double previousGuess;
+
+do {
+    previousGuess = bestGuess;
+    // 用当前猜测值与 number/猜测值 的平均数逐步逼近平方根
+    bestGuess = (previousGuess + (number/previousGuess))/2;
+} while((bestGuess - previousGuess) != 0);
+
+return bestGuess;
+```
+
+该循环(迭代)使 `bestGuess` 的值逐渐收敛到预期的解。现在,你精心设计的单元测试通过了!
+
+![单元测试通过了][13]
+
+*图 3. 单元测试通过了。*
+
+### 迭代解决了问题
+
+正如大自然母亲解决问题的方法,在本练习中,迭代解决了问题。增量方法与逐步改进相结合是获得满意解决方案的有效方法。该示例中的决定性因素是具有可衡量的目标和测试。一旦有了这些,就可以继续迭代直到达到目标。
+
+### 关键点!
+
+好的,这是一个有趣的试验,但是更有趣的发现来自于使用这种新创建的解决方案。到目前为止,`bestGuess` 的初始值一直取自函数接收到的数字。如果更改 `bestGuess` 的初始值会怎样?
+
+为了验证这一点,你可以试几种情况。首先,在迭代多次尝试计算 25 的平方根时,要逐步细化观察结果:
+
+![25 平方根的迭代编码][14]
+
+*图 4. 通过迭代来计算 25 的平方根。*
+
+以 25 作为 `bestGuess` 的初始值,该函数需要八次迭代才能计算出 25 的平方根。但是,如果在设计 `bestGuess` 初始值上犯下荒谬的错误,那将怎么办? 尝试第二次,那 100 万可能是 25 的平方根吗? 在这种明显错误的情况下会发生什么?你写的函数是否能够处理这种低级错误?
+
+直接来吧。回到测试中来,这次以一百万开始:
+
+![逐步求精法][15]
+
+*图 5. 在计算 25 的平方根时,运用逐步求精法,以 100 万作为 bestGuess 的初始值。*
+
+哇! 以一个荒谬的数字开始,迭代次数仅增加了两倍(从八次迭代到 23 次)。增长幅度没有你直觉中预期的那么大。
+
+### 故事的寓意
+
+啊哈! 
当你意识到,迭代不仅能够保证解决问题,而且与你的解决方案的初始猜测值是好是坏也没有关系。 不论你最初理解得多么不正确,迭代过程以及可衡量的测试/目标,都可以使你走上正确的道路并得到解决方案。 + +图 4 和 5 显示了陡峭而戏剧性的燃尽图。一个非常错误的开始,迭代很快就产生了一个绝对正确的解决方案。 + +简而言之,这种神奇的方法就是敏捷 DevOps 的本质。 + +### 回到一些更深层次的观察 + +敏捷 DevOps 的实践源于人们对所生活的世界的认知。我们生活的世界存在不确定性、不完整性以及充满太多的困惑。从科学/哲学的角度来看,这些特征得到了[海森堡的不确定性原理][16]Heisenberg's Uncertainty Principle(涵盖不确定性部分),[维特根斯坦的逻辑论哲学][17]Wittgenstein's Tractatus Logico-Philosophicus(歧义性部分),[哥德尔的不完全性定理][18]Gödel's incompleteness theorems(不完全性方面)以及[热力学第二定律][19]Second Law of Thermodynamics(无情的熵引起的混乱)的充分证明和支持。 + +简而言之,无论你多么努力,在尝试解决任何问题时都无法获得完整的信息。因此,放下傲慢的姿态,采取更为谦虚的方法来解决问题对我们会更有帮助。谦卑会给为你带来巨大的回报,这个回报不仅是你期望的一个解决方案,还会有它的副产品。 + +### 总结 + +大自然在不停地运作,这是一个持续不断的过程。大自然没有总体规划。一切都是对先前发生的事情的回应。 反馈循环是非常紧密的,明显的进步/倒退都是逐步实现的。大自然中随处可见,任何事物的都在以一种或多种形式逐步完善。 + +敏捷 DevOps 是工程模型逐渐成熟的一个非常有趣的结果。DevOps 基于这样的认识,即你所拥有的信息总是不完整的,因此你最好谨慎进行。获得可衡量的测试(例如,假设、可测量的期望结果),进行简单的尝试,大多数情况下可能失败,然后收集反馈,修复故障并继续测试。除了同意每个步骤都必须要有可衡量的假设/测试之外,没有其他方法。 + +在本系列的下一篇文章中,我将仔细研究变异测试是如何提供及时反馈来推动实现结果的。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/8/mutation-testing-evolution-tdd + +作者:[Alex Bunardzic][a] +选题:[lujun9972][b] +译者:[Morisun029](https://github.com/Morisun029) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/alex-bunardzic +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_520X292_openanttrail-2.png?itok=xhD3WmUd (Ants and a leaf making the word "open") +[2]: https://opensource.com/article/19/7/failure-feature-blameless-devops +[3]: https://en.wikipedia.org/wiki/Test-driven_development +[4]: https://www.merriam-webster.com/dictionary/conditio%20sine%20qua%20non +[5]: https://opensource.com/sites/default/files/uploads/darwin.png (Charles Darwin) +[6]: https://en.wikipedia.org/wiki/Charles_Darwin +[7]: https://en.wikipedia.org/wiki/On_the_Origin_of_Species +[8]: https://opensource.com/sites/default/files/uploads/environmentalconditions2.png (Environmental pressures on fish) +[9]: https://www.youtube.com/watch?v=MgK5Rf7qFaU +[10]: https://dotnet.microsoft.com/ +[11]: https://xunit.net/ +[12]: https://opensource.com/sites/default/files/uploads/xunit-output.png (xUnit output after the unit test run fails) +[13]: https://opensource.com/sites/default/files/uploads/unit-test-success.png (Unit test successful) +[14]: https://opensource.com/sites/default/files/uploads/iterating-square-root.png (Code iterating for the square root of 25) +[15]: https://opensource.com/sites/default/files/uploads/bestguess.png (Stepwise refinement) +[16]: https://en.wikipedia.org/wiki/Uncertainty_principle +[17]: https://en.wikipedia.org/wiki/Tractatus_Logico-Philosophicus +[18]: https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems +[19]: https://en.wikipedia.org/wiki/Second_law_of_thermodynamics diff --git a/published/20190822 A Raspberry Pi Based Open Source Tablet is in Making and it-s Called CutiePi.md b/published/20190822 A Raspberry Pi Based Open Source Tablet is in Making and it-s Called CutiePi.md new file mode 100644 index 0000000000..7e470c208c --- /dev/null +++ b/published/20190822 A Raspberry Pi Based Open Source Tablet is in Making and it-s Called CutiePi.md @@ -0,0 +1,82 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11414-1.html) +[#]: subject: (A Raspberry 
Pi Based Open Source Tablet is in Making and it’s Called CutiePi) +[#]: via: (https://itsfoss.com/cutiepi-open-source-tab/) +[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) + +CutiePi:正在开发中的基于树莓派的开源平板 +====== + +![](https://img.linux.net.cn/data/attachment/album/201910/02/125301wkbvgz1n7zv7j55e.jpg) + +CutiePi 是一款 8 英寸的构建在树莓派上的开源平板。他们在[树莓派论坛][1]上宣布:现在,它只是一台原型机。 + +在本文中,你将了解有关 CutiePi 的规格、价格和可用性的更多详细信息。 + +它们使用一款定制的计算模块载版(CM3)来制造平板。[官网][2]提到使用定制 CM3 载板的目的是: + +> 定制 CM3/CM3+ 载板是专为便携使用而设计,拥有增强的电源管理和锂聚合物电池电量监控功能。还可与指定的 HDMI 或 MIPI DSI 显示器配合使用。 + +因此,这使得该平板足够薄而且便携。 + +### CutiePi 规格 + +![CutiePi Board][3] + +我惊讶地了解到它有 8 英寸的 IPS LCD 显示屏,这对新手而言是个好事。然而,你不会有一个真正高清的屏幕,因为官方宣称它的分辨率是 1280×800。 + +它还计划配备 4800 mAh 锂电池(原型机的电池为 5000 mAh)。嗯,对于平板来说,这不算坏。 + +连接性上包括支持 Wi-Fi 和蓝牙 4.0。此外,还有一个 USB Type-A 插口、6 个 GPIO 引脚和 microSD 卡插槽。 + +![CutiePi Specifications][4] + +硬件与 [Raspbian OS][5] 官方兼容,用户界面采用 [Qt][6] 构建,以获得快速直观的用户体验。此外,除了内置应用外,它还将通过 XWayland 支持 Raspbian PIXEL 应用。 + +### CutiePi 源码 + +你可以通过分析所用材料的清单来猜测此平板的定价。CutiePi 遵循 100% 的开源硬件设计。因此,如果你觉得好奇,可以查看它的 [GitHub 页面][7],了解有关硬件设计和内容的详细信息。 + +### CutiePi 价格、发布日期和可用性 + +CutiePi 计划在 8 月进行[设计验证测试][8]批量 PCB。他们的目标是在 2019 年底推出最终产品。 + +官方预计,发售价大约在 $150-$250 左右。这只是一个近似的范围,还应该保有怀疑。 + +显然,即使产品听上去挺有希望,但价格将是它成功的一个主要因素。 + +### 总结 + +CutiePi 并不是第一个使用[像树莓派这样的单板计算机][9]来制作平板的项目。我们有即将推出的 [PineTab][10],它基于 Pine64 单板电脑。Pine 还有一种笔记本电脑,名为 [Pinebook][11]。 + +从原型来看,它确实是一个我们可以期望使用的产品。但是,预安装的应用和它将支持的应用可能会扭转局面。此外,考虑到价格估计,这听起来很有希望。 + +你觉得怎么样?让我们在下面的评论中知道你的想法,或者投个票。 + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/cutiepi-open-source-tab/ + +作者:[Ankush Das][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[b]: https://github.com/lujun9972 +[1]: https://www.raspberrypi.org/forums/viewtopic.php?t=247380 +[2]: https://cutiepi.io/ +[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/cutiepi-board.png?ssl=1 +[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/cutiepi-specifications.jpg?ssl=1 +[5]: https://itsfoss.com/raspberry-pi-os-desktop/ +[6]: https://en.wikipedia.org/wiki/Qt_%28software%29 +[7]: https://github.com/cutiepi-io/cutiepi-board +[8]: https://en.wikipedia.org/wiki/Engineering_validation_test#Design_verification_test +[9]: https://itsfoss.com/raspberry-pi-alternatives/ +[10]: https://www.pine64.org/pinetab/ +[11]: https://itsfoss.com/pinebook-pro/ diff --git a/published/20190823 The lifecycle of Linux kernel testing.md b/published/20190823 The lifecycle of Linux kernel testing.md new file mode 100644 index 0000000000..4353371b32 --- /dev/null +++ b/published/20190823 The lifecycle of Linux kernel testing.md @@ -0,0 +1,75 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11464-1.html) +[#]: subject: (The lifecycle of Linux kernel testing) +[#]: via: (https://opensource.com/article/19/8/linux-kernel-testing) +[#]: author: (Major Hayden https://opensource.com/users/mhaydenhttps://opensource.com/users/mhaydenhttps://opensource.com/users/marcobravohttps://opensource.com/users/mhayden) + +Linux 内核测试的生命周期 +====== + +> 内核持续集成(CKI)项目旨在防止错误进入 Linux 内核。 + +![](https://img.linux.net.cn/data/attachment/album/201910/16/101933nexzccpea9sjxcq9.jpg) + +在 [Linux 内核的持续集成测试][2] 一文中,我介绍了 [内核持续集成][3]Continuous Kernel 
Integration(CKI)项目及其使命:改变内核开发人员和维护人员的工作方式。本文深入探讨了该项目的某些技术方面,以及这所有的部分是如何组合在一起的。 + +### 从一次更改开始 + +内核中每一项令人兴奋的功能、改进和错误都始于开发人员提出的更改。这些更改出现在各个内核存储库的大量邮件列表中。一些存储库关注内核中的某些子系统,例如存储或网络,而其它存储库关注内核的更多方面。 当开发人员向内核提出更改或补丁集时,或者维护者在存储库本身中进行更改时,CKI 项目就会付诸行动。 + +CKI 项目维护的触发器用于监视这些补丁集并采取措施。诸如 [Patchwork][4] 之类的软件项目通过将多个补丁贡献整合为单个补丁系列,使此过程变得更加容易。补丁系列作为一个整体历经 CKI 系统,并可以针对该系列发布单个报告。 + +其他触发器可以监视存储库中的更改。当内核维护人员合并补丁集、还原补丁或创建新标签时,就会触发。测试这些关键的更改可确保开发人员始终具有坚实的基线,可以用作编写新补丁的基础。 + +所有这些更改都会进入 GitLab CI 管道,并历经多个阶段和多个系统。 + +### 准备构建 + +首先要准备好要编译的源代码。这需要克隆存储库、打上开发人员建议的补丁集,并生成内核配置文件。这些配置文件具有成千上万个用于打开或关闭功能的选项,并且配置文件在不同的系统体系结构之间差异非常大。 例如,一个相当标准的 x86\_64 系统在其配置文件中可能有很多可用选项,但是 s390x 系统(IBM zSeries 大型机)的选项可能要少得多。在该大型机上,某些选项可能有意义,但在消费类笔记本电脑上没有任何作用。 + +内核进一步转换为源代码工件。该工件包含整个存储库(已打上补丁)以及编译所需的所有内核配置文件。 上游内核会打包成压缩包,而 Red Hat 的内核会生成下一步所用的源代码 RPM 包。 + +### 成堆的编译 + +编译内核会将源代码转换为计算机可以启动和使用的代码。配置文件描述了要构建的内容,内核中的脚本描述了如何构建它,系统上的工具(例如 GCC 和 glibc)完成构建。此过程需要一段时间才能完成,但是 CKI 项目需要针对四种体系结构快速完成:aarch64(64 位 ARM)、ppc64le(POWER)、s390x(IBM zSeries)和 x86\_64。重要的是,我们必须快速编译内核,以便使工作任务不会积压,而开发人员可以及时收到反馈。 + +添加更多的 CPU 可以大大提高速度,但是每个系统都有其局限性。CKI 项目在 OpenShift 的部署环境中的容器内编译内核;尽管 OpenShift 可以实现高伸缩性,但在部署环境中的可用 CPU 仍然是数量有限的。CKI 团队分配了 20 个虚拟 CPU 来编译每个内核。涉及到四个体系结构,这就涨到了 80 个 CPU! + +另一个速度的提高来自 [ccache][5] 工具。内核开发进展迅速,但是即使在多个发布版本之间,内核的大部分仍保持不变。ccache 工具进行编译期间会在磁盘上缓存已构建的对象(整个内核的一小部分)。稍后再进行另一个内核编译时,ccache 会查找以前看到的内核的未更改部分。ccache 会从磁盘中提取缓存的对象并重新使用它。这样可以加快编译速度并降低总体 CPU 使用率。现在,耗时 20 分钟编译的内核在不到几分钟的时间内就完成了。 + +### 测试时间 + +内核进入最后一步:在真实硬件上进行测试。每个内核都使用 Beaker 在其原生体系结构上启动,并且开始无数次的测试以发现问题。一些测试会寻找简单的问题,例如容器问题或启动时的错误消息。其他测试则深入到各种内核子系统中,以查找系统调用、内存分配和线程中的回归问题。 + +大型测试框架,例如 [Linux Test Project][6](LTP),包含了大量测试,这些测试在内核中寻找麻烦的回归问题。其中一些回归问题可能会回滚关键的安全修复程序,并且进行测试以确保这些改进仍保留在内核中。 + +测试完成后,关键的一步仍然是:报告。内核开发人员和维护人员需要一份简明的报告,准确地告诉他们哪些有效、哪些无效以及如何获取更多信息。每个 CKI 报告都包含所用源代码、编译参数和测试输出的详细信息。该信息可帮助开发人员知道从哪里开始寻找解决问题的方法。此外,它还可以帮助维护人员在漏洞进入内核存储库之前知道何时需要保留补丁集以进行其他查看。 + +### 总结 + +CKI 项目团队通过向内核开发人员和维护人员提供及时、自动的反馈,努力防止错误进入 Linux 内核。这项工作通过发现导致内核错误、安全性问题和性能问题等易于找到的问题,使他们的工作更加轻松。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/8/linux-kernel-testing + +作者:[Major Hayden][a] +选题:[lujun9972][b] +译者:[wxy](https://github.com/wxy) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/mhaydenhttps://opensource.com/users/mhaydenhttps://opensource.com/users/marcobravohttps://opensource.com/users/mhayden +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_progress_cycle_momentum_arrow.png?itok=q-ZFa_Eh (arrows cycle symbol for failing faster) +[2]: https://opensource.com/article/19/6/continuous-kernel-integration-linux +[3]: https://cki-project.org/ +[4]: https://github.com/getpatchwork/patchwork +[5]: https://ccache.dev/ +[6]: https://linux-test-project.github.io +[7]: https://cki-project.org/posts/hackfest-agenda/ +[8]: https://www.linuxplumbersconf.org/ diff --git a/published/20190824 How to compile a Linux kernel in the 21st century.md b/published/20190824 How to compile a Linux kernel in the 21st century.md new file mode 100644 index 0000000000..fd29db6483 --- /dev/null +++ b/published/20190824 How to compile a Linux kernel in the 21st century.md @@ -0,0 +1,212 @@ +[#]: collector: (lujun9972) +[#]: translator: (LuuMing) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11427-1.html) +[#]: subject: (How to compile a Linux 
kernel in the 21st century) +[#]: via: (https://opensource.com/article/19/8/linux-kernel-21st-century) +[#]: author: (Seth Kenlon https://opensource.com/users/seth) + +在 21 世纪该怎样编译 Linux 内核 +====== + +> 也许你并不需要编译 Linux 内核,但你能通过这篇教程快速上手。 + +![](https://img.linux.net.cn/data/attachment/album/201910/06/113927vrs6rurljyuza8cy.jpg) + +在计算机世界里,内核kernel是处理硬件与一般系统之间通信的低阶软件low-level software。除过一些烧录进计算机主板的初始固件,当你启动计算机时,内核让系统意识到它有一个硬盘驱动器、屏幕、键盘以及网卡。分配给每个部件相等时间(或多或少)使得图像、音频、文件系统和网络可以流畅甚至并行地运行。 + +然而,对于硬件的需求是源源不断的,随着发布的硬件越多,内核就必须纳入更多代码来保证那些硬件正常工作。得到具体的数字很困难,但是 Linux 内核无疑是硬件兼容性方面的顶级内核之一。Linux 操作着无数的计算机和移动电话、工业用途和爱好者使用的板级嵌入式系统(SoC)、RAID 卡、缝纫机等等。 + +回到 20 世纪(甚至是 21 世纪初期),对于 Linux 用户来说,在刚买到新的硬件后就需要下载最新的内核代码并编译安装才能使用这是不可理喻的。而现在你也很难见到 Linux 用户为了好玩而编译内核或通过高度专业化定制的硬件的方式赚钱。现在,通常已经不需要再编译 Linux 内核了。 + +这里列出了一些原因以及快速编译内核的教程。 + +### 更新当前的内核 + +无论你买了配备新显卡或 Wifi 芯片集的新品牌电脑还是给家里配备一个新的打印机,你的操作系统(称为 GNU+Linux 或 Linux,它也是该内核的名字)需要一个驱动程序来打开新部件(显卡、芯片集、打印机和其他任何东西)的信道。有时候当你插入某些新的设备时而你的电脑表示发现了它,这具有一定的欺骗性。别被骗到了,有时候那就够了,但更多的情况是你的操作系统仅仅是使用了通用的协议检测到安装了新的设备。 + +例如,你的计算机也许能够鉴别出新的网络打印机,但有时候那仅仅是因为打印机的网卡被设计成为了获得 DHCP 地址而在网络上标识自己。它并不意味着你的计算机知道如何发送文档给打印机进行打印。事实上,你可以认为计算机甚至不“知道”那台设备是一个打印机。它也许仅仅是显示网络有个设备在一个特定的地址上,并且该设备以一系列字符 “p-r-i-n-t-e-r” 标识自己而已。人类语言的便利性对于计算机毫无意义。计算机需要的是一个驱动程序。 + +内核开发者、硬件制造商、技术支持和爱好者都知道新的硬件会不断地发布。它们大多数都会贡献驱动程序,直接提交给内核开发团队以包含在 Linux 中。例如,英伟达显卡驱动程序通常都会写入 [Nouveau][2] 内核模块中,并且因为英伟达显卡很常用,它的代码都包含在任一个日常使用的发行版内核中(例如当下载 [Fedora][3] 或 [Ubuntu][4] 得到的内核)。英伟达也有不常用的地方,例如嵌入式系统中 Nouveau 模块通常被移除。对其他设备来说也有类似的模块:打印机得益于 [Foomatic][5] 和 [CUPS][6],无线网卡有 [b43、ath9k、wl][7] 模块等等。 + +发行版往往会在它们 Linux 内核的构建中包含尽可能多合理的驱动程序,因为他们想让你在接入新设备时不用安装驱动程序能够立即使用。对于大多数情况来说就是这样的,尤其是现在很多设备厂商都在资助自己售卖硬件的 Linux 驱动程序开发,并且直接将这些驱动程序提交给内核团队以用在通常的发行版上。 + +有时候,或许你正在运行六个月之前安装的内核,并配备了上周刚刚上市令人兴奋的新设备。在这种情况下,你的内核也许没有那款设备的驱动程序。好消息是经常会出现那款设备的驱动程序已经存在于最近版本的内核中,意味着你只要更新运行的内核就可以了。 + +通常,这些都是通过安装包管理软件完成的。例如在 RHEL、CentOS 和 Fedora 上: + +``` +$ sudo dnf update kernel +``` + +在 Debian 和 Ubuntu 上,首先获取你当前内核的版本: + +``` +$ uname -r +4.4.186 +``` + +搜索新的版本: + +``` +$ sudo apt update +$ sudo apt search linux-image +``` + +安装找到的最新版本。在这个例子中,最新的版本是 5.2.4: + +``` +$ sudo apt install linux-image-5.2.4 +``` + +内核更新后,你必须 [reboot][8] (除非你使用 kpatch 或 kgraft)。这时,如果你需要的设备驱动程序包含在最新的内核中,你的硬件就会正常工作。 + +### 安装内核模块 + +有时候一个发行版没有预计到用户会使用某个设备(或者该设备的驱动程序至少不足以包含在 Linux 内核中)。Linux 对于驱动程序采用模块化方式,因此尽管驱动程序没有编译进内核,但发行版可以推送单独的驱动程序包让内核去加载。尽管有些复杂但是非常有用,尤其是当驱动程序没有包含进内核中而是在引导过程中加载,或是内核中的驱动程序相比模块化的驱动程序过期时。第一个问题可以用 “initrd” 解决(初始化 RAM 磁盘),这一点超出了本文的讨论范围,第二点通过 “kmod” 系统解决。 + +kmod 系统保证了当内核更新后,所有与之安装的模块化驱动程序也得到更新。如果你手动安装一个驱动程序,你就体验不到 kmod 提供的自动化,因此只要能用 kmod 安装包,就应该选择它。例如,尽管英伟达驱动程序以 Nouveau 模块构建在内核中,但官方的驱动程序仅由英伟达发布。你可以去网站上手动安装英伟达旗下的驱动程序,下载 “.run” 文件,并运行提供的 shell 脚本,但在安装了新的内核之后你必须重复相同的过程,因为没有任何东西告诉包管理软件你手动安装了一个内核驱动程序。英伟达驱动着你的显卡,手动更新英伟达驱动程序通常意味着你需要通过终端来执行更新,因为没有显卡驱动程序将无法显示。 + +![Nvidia configuration application][9] + +然而,如果你通过 kmod 包安装英伟达驱动程序,更新你的内核也会更新你的英伟达驱动程序。在 Fedora 和相关的发行版中: + +``` +$ sudo dnf install kmod-nvidia +``` + +在 Debian 和相关发行版上: + +``` +$ sudo apt update +$ sudo apt install nvidia-kernel-common nvidia-kernel-dkms nvidia-glx nvidia-xconfig nvidia-settings nvidia-vdpau-driver vdpau-va-driver +``` + +这仅仅是一个例子,但是如果你真的要安装英伟达驱动程序,你也必须屏蔽掉 Nouveau 驱动程序。参考你使用发行版的文档获取最佳的步骤吧。 + +### 下载并安装驱动程序 + +不是所有的东西都包含在内核中,也不是所有的东西都可以作为内核模块使用。在某些情况下,你需要下载一个由供应商编写并绑定好的特殊驱动程序,还有一些情况,你有驱动程序,但是没有配置驱动程序的前端界面。 + +有两个常见的例子是 HP 打印机和 [Wacom][10] 数位板。如果你有一台 HP 打印机,你可能有能够和打印机通信的通用的驱动程序,甚至能够打印出东西。但是通用的驱动程序却不能为特定型号的打印机提供定制化的选项,例如双面打印、校对、纸盒选择等等。[HPLIP][11](HP Linux 成像和打印系统)提供了选项来进行任务管理、调整打印设置、选择可用的纸盒等等。 + +HPLIP 通常包含在包管理软件中;只要搜索“hplip”就行了。 + +![HPLIP 
in action][12] + +同样的,电子艺术家主要使用的数位板 Wacom 的驱动程序通常也包含在内核中,但是例如调整压感和按键功能等设置只能通过默认包含在 GNOME 的图形控制面板访问。但也可以作为 KDE 上额外的程序包“kde-config-tablet”来访问。 + +这里也有几个类似的个别例子,例如内核中没有驱动程序,但是以 RPM 或 DEB 文件提供了可供下载并且通过包管理软件安装的 kmod 版本的驱动程序。 + +### 打上补丁并编译你的内核 + +即使在 21 世纪的未来主义乌托邦里,仍有厂商不够了解开源,没有提供可安装的驱动程序。有时候,一些公司为驱动程序提供开源代码,而需要你下载代码、修补内核、编译并手动安装。 + +这种发布方式和在 kmod 系统之外安装打包的驱动程序拥有同样的缺点:对内核的更新会破坏驱动程序,因为每次更换新的内核时都必须手动将其重新集成到内核中。 + +令人高兴的是,这种事情变得少见了,因为 Linux 内核团队在呼吁公司们与他们交流方面做得很好,并且公司们最终接受了开源不会很快消失的事实。但仍有新奇的或高度专业的设备仅提供了内核补丁。 + +官方上,对于你如何编译内核以使包管理器参与到升级系统如此重要的部分中,发行版有特定的习惯。这里有太多的包管理器,所以无法一一涵盖。举一个例子,当你使用 Fedora 上的工具例如 `rpmdev` 或 `build-essential`,Debian 上的 `devscripts`。 + +首先,像通常那样,找到你正在运行内核的版本: + +``` +$ uname -r +``` + +在大多数情况下,如果你还没有升级过内核那么可以试试升级一下内核。搞定之后,也许你的问题就会在最新发布的内核中解决。如果你尝试后发现不起作用,那么你应该下载正在运行内核的源码。大多数发行版提供了特定的命令来完成这件事,但是手动操作的话,可以在 [kernel.org][13] 上找到它的源代码。 + +你必须下载内核所需的任何补丁。有时候,这些补丁对应具体的内核版本,因此请谨慎选择。 + +通常,或至少在人们习惯于编译内核的那时,都是拿到源代码并对 `/usr/src/linux` 打上补丁。 + +解压内核源码并打上需要的补丁: + +``` +$ cd /usr/src/linux +$ bzip2 --decompress linux-5.2.4.tar.bz2 +$ cd  linux-5.2.4 +$ bzip2 -d ../patch*bz2 +``` + +补丁文件也许包含如何使用的教程,但通常它们都设计成在内核源码树的顶层可用来执行。 + +``` +$ patch -p1 < patch*example.patch +``` + +当内核代码打上补丁后,你可以继续使用旧的配置来对打了补丁的内核进行配置。 + +``` +$ make oldconfig +``` + +`make oldconfig` 命令有两个作用:它继承了当前的内核配置,并且允许你配置补丁带来的新的选项。 + +你或许需要运行 `make menuconfig` 命令,它启动了一个基于 ncurses 的菜单界面,列出了新的内核所有可能的选项。整个菜单可能看不过来,但是它是以旧的内核配置为基础的,你可以遍历菜单并且禁用掉你没有或不需要的硬件模块。另外,如果你知道自己有一些硬件没有包含在当前的配置中,你可以选择构建它,当作模块或者直接嵌入内核中。理论上,这些并不是必要的,因为你可以猜想,当前的内核运行良好只是缺少了补丁,当使用补丁的时候可能已经激活了所有设备所必要的选项。 + +下一步,编译内核和它的模块: + +``` +$ make bzImage +$ make modules +``` + +这会产生一个叫作 `vmlinuz` 的文件,它是你的可引导内核的压缩版本。保存旧的版本并在 `/boot` 文件夹下替换为新的。 + +``` +$ sudo mv /boot/vmlinuz /boot/vmlinuz.nopatch +$ sudo cat arch/x86_64/boot/bzImage > /boot/vmlinuz +$ sudo mv /boot/System.map /boot/System.map.stock +$ sudo cp System.map /boot/System.map +``` + +到目前为止,你已经打上了补丁并且编译了内核和它的模块,你安装了内核,但你并没有安装任何模块。那就是最后的步骤: + +``` +$ sudo make modules_install +``` + +新的内核已经就位,并且它的模块也已经安装。 + +最后一步是更新你的引导程序,为了让你的计算机在加载 Linux 内核之前知道它的位置。GRUB 引导程序使这一过程变得相当简单: + +``` +$ sudo grub2-mkconfig +``` + +### 现实生活中的编译 + +当然,现在没有人手动执行这些命令。相反的,参考你的发行版,寻找发行版维护人员使用的开发者工具集修改内核的说明。这些工具集可能会创建一个集成所有补丁的可安装软件包,告诉你的包管理器来升级并更新你的引导程序。 + +### 内核 + +操作系统和内核都是玄学,但要理解构成它们的组件并不难。下一次你看到某个技术无法应用在 Linux 上时,深呼吸,调查可用的驱动程序,寻找一条捷径。Linux 比以前简单多了——包括内核。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/8/linux-kernel-21st-century + +作者:[Seth Kenlon][a] +选题:[lujun9972][b] +译者:[LuMing](https://github.com/LuuMing) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/migration_innovation_computer_software.png?itok=VCFLtd0q (and old computer and a new computer, representing migration to new software or hardware) +[2]: https://nouveau.freedesktop.org/wiki/ +[3]: http://fedoraproject.org +[4]: http://ubuntu.com +[5]: https://wiki.linuxfoundation.org/openprinting/database/foomatic +[6]: https://www.cups.org/ +[7]: https://wireless.wiki.kernel.org/en/users/drivers +[8]: https://opensource.com/article/19/7/reboot-linux +[9]: https://opensource.com/sites/default/files/uploads/nvidia.jpg (Nvidia configuration application) +[10]: https://linuxwacom.github.io +[11]: https://developers.hp.com/hp-linux-imaging-and-printing +[12]: 
https://opensource.com/sites/default/files/uploads/hplip.jpg (HPLIP in action) +[13]: https://www.kernel.org/ diff --git a/published/20190826 Introduction to the Linux chown command.md b/published/20190826 Introduction to the Linux chown command.md new file mode 100644 index 0000000000..6a7befd20a --- /dev/null +++ b/published/20190826 Introduction to the Linux chown command.md @@ -0,0 +1,131 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11416-1.html) +[#]: subject: (Introduction to the Linux chown command) +[#]: via: (https://opensource.com/article/19/8/linux-chown-command) +[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss) + +chown 命令简介 +====== + +> 学习如何使用 chown 命令更改文件或目录的所有权。 + +![](https://img.linux.net.cn/data/attachment/album/201910/03/000014mfrxrxi5rej75mjs.jpg) + +Linux 系统上的每个文件和目录均由某个人拥有,拥有者可以完全控制更改或删除他们拥有的文件。除了有一个*拥有用户*外,文件还有一个*拥有组*。 + +你可以使用 `ls -l` 命令查看文件的所有权: + +``` +[pablo@workstation Downloads]$ ls -l +total 2454732 +-rw-r--r--. 1 pablo pablo 1934753792 Jul 25 18:49 Fedora-Workstation-Live-x86_64-30-1.2.iso +``` + +该输出的第三和第四列是拥有用户和组,它们一起称为*所有权*。上面的那个 ISO 文件这两者都是 `pablo`。 + +所有权设置由 [chmod 命令][2]进行设置,控制允许谁可以执行读取、写入或运行的操作。你可以使用 `chown` 命令更改所有权(一个或两者)。 + +所有权经常需要更改。文件和目录一直存在在系统中,但用户不断变来变去。当文件和目录在系统中移动时,或从一个系统移动到另一个系统时,所有权也可能需要更改。 + +我的主目录中的文件和目录的所有权是我的用户和我的主要组,以 `user:group` 的形式表示。假设 Susan 正在管理 Delta 组,该组需要编辑一个名为 `mynotes` 的文件。你可以使用 `chown` 命令将该文件的用户更改为 `susan`,组更改为 `delta`: + +``` +$ chown susan:delta mynotes +ls -l +-rw-rw-r--. 1 susan delta 0 Aug  1 12:04 mynotes +``` + +当给该文件设置好了 Delta 组时,它可以分配回给我: + +``` +$ chown alan mynotes +$ ls -l mynotes +-rw-rw-r--. 1 alan delta 0 Aug  1 12:04 mynotes +``` + +给用户后添加冒号(`:`),可以将用户和组都分配回给我: + +``` +$ chown alan: mynotes +$ ls -l mynotes +-rw-rw-r--. 1 alan alan 0 Aug  1 12:04 mynotes +``` + +通过在组前面加一个冒号,可以只更改组。现在,`gamma` 组的成员可以编辑该文件: + +``` +$ chown :gamma mynotes +$ ls -l +-rw-rw-r--. 1 alan gamma 0 Aug  1 12:04 mynotes +``` + +`chown` 的一些附加参数都能用在命令行和脚本中。就像许多其他 Linux 命令一样,`chown` 有一个递归参数(`-R`),它告诉该命令进入目录以对其中的所有文件进行操作。没有 `-R` 标志,你就只能更改文件夹的权限,而不会更改其中的文件。在此示例中,假定目的是更改目录及其所有内容的权限。这里我添加了 `-v`(详细)参数,以便 `chown` 报告其工作情况: + +``` +$ ls -l . conf +.: +drwxrwxr-x 2 alan alan 4096 Aug  5 15:33 conf + +conf: +-rw-rw-r-- 1 alan alan 0 Aug  5 15:33 conf.xml + +$ chown -vR susan:delta conf +changed ownership of 'conf/conf.xml' from alan:alan to  susan:delta +changed ownership of 'conf' from alan:alan to  susan:delta +``` + +根据你的角色,你可能需要使用 `sudo` 来更改文件的所有权。 + +在更改文件的所有权以匹配特定配置时,或者在你不知道所有权时(例如运行脚本时),可以使用参考文件(`--reference=RFILE`)。例如,你可以复制另一个文件(`RFILE`,称为参考文件)的用户和组,以撤消上面所做的更改。回想一下,点(`.`)表示当前的工作目录。 + +``` +$ chown -vR --reference=. 
conf +``` + +### 报告更改 + +大多数命令都有用于控制其输出的参数。最常见的是 `-v`(`--verbose`)以启用详细信息,但是 `chown` 还具有 `-c`(`--changes`)参数来指示 `chown` 仅在进行更改时报告。`chown` 还会报告其他情况,例如不允许进行的操作。 + +参数 `-f`(`--silent`、`--quiet`)用于禁止显示大多数错误消息。在下一节中,我将使用 `-f` 和 `-c`,以便仅显示实际更改。 + +### 保持根目录 + +Linux 文件系统的根目录(`/`)应该受到高度重视。如果命令在此层级上犯了一个错误,则后果可能会使系统完全无用。尤其是在运行一个会递归修改甚至删除的命令时。`chown` 命令具有一个可用于保护和保持根目录的参数,它是 `--preserve-root`。如果在根目录中将此参数和递归一起使用,那么什么也不会发生,而是会出现一条消息: + +``` +$ chown -cfR --preserve-root alan / +chown: it is dangerous to operate recursively on '/' +chown: use --no-preserve-root to override this failsafe +``` + +如果不与 `--recursive` 结合使用,则该选项无效。但是,如果该命令由 `root` 用户运行,则 `/` 本身的权限将被更改,但其下的其他文件或目录的权限则不会更改: + +``` +$ chown -c --preserve-root alan / +chown: changing ownership of '/': Operation not permitted +[root@localhost /]# chown -c --preserve-root alan / +changed ownership of '/' from root to alan +``` + +### 所有权即安全 + +文件和目录所有权是良好的信息安全性的一部分,因此,偶尔检查和维护文件所有权以防止不必要的访问非常重要。`chown` 命令是 Linux 安全命令集中最常见和最重要的命令之一。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/8/linux-chown-command + +作者:[Alan Formy-Duval][a] +选题:[lujun9972][b] +译者:[wxy](https://github.com/wxy) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/alanfdoss +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC (Hand putting a Linux file folder into a drawer) +[2]: https://opensource.com/article/19/8/introduction-linux-chmod-command diff --git a/published/20180116 Command Line Heroes- Season 1- OS Wars_2.md b/published/201909/20180116 Command Line Heroes- Season 1- OS Wars_2.md similarity index 100% rename from published/20180116 Command Line Heroes- Season 1- OS Wars_2.md rename to published/201909/20180116 Command Line Heroes- Season 1- OS Wars_2.md diff --git a/published/201909/20180117 How technology changes the rules for doing agile.md b/published/201909/20180117 How technology changes the rules for doing agile.md new file mode 100644 index 0000000000..9c12a818d6 --- /dev/null +++ b/published/201909/20180117 How technology changes the rules for doing agile.md @@ -0,0 +1,93 @@ +技术如何改变敏捷的规则 +====== + +> 当我们开始推行敏捷时,还没有容器和 Kubernetes。但是它们改变了过去最困难的部分:将敏捷性从小团队应用到整个组织。 + +![](https://img.linux.net.cn/data/attachment/album/201909/26/113910ytmoosx5tt79gan5.jpg) + +越来越多的企业正因为一个非常明显的原因开始尝试敏捷和 [DevOps][1]: 企业需要通过更快的速度和更多的实验为创新和竞争性提供优势。而 DevOps 将帮助我们得到所需的创新速度。但是,在小团队或初创企业中实践 DevOps 与进行大规模实践完全是两码事。我们都明白这样的一个事实,那就是在十个人的跨职能团队中能够很好地解决问题的方案,当将相同的模式应用到一百个人的团队中时就可能无法奏效。这条道路是如此艰难,以至于 IT 领导者最简单的应对就是将敏捷方法的推行再推迟一年。 + +但那样的时代已经结束了。如果你已经尝试过,但是没有成功,那么现在是时候重新开始了。 + +到目前为止,DevOps 需要为许多组织提供个性化的解决方案,因此往往需要进行大量的调整以及付出额外的工作。但在今天,[Linux 容器][2]和 Kubernetes 正在推动 DevOps 工具和过程的标准化。而这样的标准化将会加速整个软件开发过程。因此,我们用来实践 DevOps 工作方式的技术最终能够满足我们加快软件开发速度的愿望。 + +Linux 容器和 [Kubernetes][3] 正在改变团队交互的方式。此外,你可以在 Kubernetes 平台上运行任何能够在 Linux 运行的应用程序。这意味着什么呢?你可以运行大量的企业及应用程序(甚至可以解决以前令人烦恼的 Windows 和 Linux 之间的协调问题)。最后,容器和 Kubernetes 能够满足你未来将要运行的几乎所有工作。它们正在经受着未来的考验,以应对机器学习、人工智能和分析工作等下一代解决问题工具。 + +让我们以机器学习为例来思考一下。今天,人们可以在大量的企业数据中找到一些模式。当机器发现这些模式时(想想机器学习),你的员工就能更快地采取行动。随着人工智能的加入,机器不仅可以发现模式,还可以对模式进行操作。如今,一个积极的软件开发冲刺周期也就是三个星期而已。有了人工智能,机器每秒可以多次修改代码。创业公司会利用这种能力来“打扰你”。 + +考虑一下你需要多快才能参与到竞争当中。如果你对于无法对于 DevOps 和每周一个迭代周期充满信心,那么考虑一下当那个创业公司将 AI 驱动的过程指向你时会发生什么?现在是时候转向 DevOps 的工作方式了,否则就会像你的竞争对手一样被甩在后面。 + +### 容器技术如何改变团队的工作? 
+ +DevOps 使得许多试图将这种工作方式扩展到更大范围的团队感到沮丧。即使许多 IT(和业务)人员之前都听说过敏捷相关的语言、框架、模型(如 DevOps),而这些都有望彻底应用程序开发和 IT 流程,但他们还是对此持怀疑态度。 + +向你的受众“推销”快速开发冲刺也不是一件容易的事情。想象一下,如果你以这种方式买了一栋房子 —— 你将不再需要向开发商支付固定的金额,而是会得到这样的信息:“我们将在 4 周内浇筑完地基,其成本是 X,之后再搭建房屋框架和铺设电路,但是我们现在只能够知道地基完成的时间表。”人们已经习惯了买房子的时候有一个预先的价格和交付时间表。 + +挑战在于构建软件与构建房屋不同。同一个建筑商往往建造了成千上万个完全相同的房子,而软件项目从来都各不相同。这是你要克服的第一个障碍。 + +开发和运维团队的工作方式确实不同,我之所以知道这一点是因为我曾经从事过这两方面的工作。企业往往会用不同的方式来激励他们,开发人员会因为更改和创建而获得奖励,而运维专家则会因降低成本和确保安全性而获得奖励。我们会把他们分成不同的小组,并且尽量减少互动。而这些角色通常会吸引那些思维方式完全不同的技术人员。但是这样的解决方案注定会失败,你必须打破横亘在开发和运维之间的藩篱。 + +想想传统情况下会发生什么。业务会把需求扔过墙,这是因为他们在“买房”模式下运作,并且说上一句“我们 9 个月后见。”开发人员根据这些需求进行开发,并根据技术约束的需要进行更改。然后,他们把它扔过墙传递给运维人员,并说一句“搞清楚如何运行这个软件”。然后,运维人员勤就会勤奋地进行大量更改,使软件与基础设施保持一致。然而,最终的结果是什么呢? + +通常情况下,当业务人员看到需求实现的最终结果时甚至根本辨认不出。在过去 20 年的大部分时间里,我们一次又一次地目睹了这种模式在软件行业中上演。而现在,是时候改变了。 + +Linux 容器能够真正地解决这样的问题,这是因为容器弥合开发和运维之间的鸿沟。容器技术允许两个团队共同理解和设计所有的关键需求,但仍然独立地履行各自团队的职责。基本上,我们去掉了开发人员和运维人员之间的电话游戏。 + +有了容器技术,我们可以使得运维团队的规模更小,但依旧能够承担起数百万应用程序的运维工作,并且能够使得开发团队可以更加快速地根据需要更改软件。(在较大的组织中,所需的速度可能比运维人员的响应速度更快。) + +有了容器技术,你可以将所需要交付的内容与它运行的位置分开。你的运维团队只需要负责运行容器的主机和安全的内存占用,仅此而已。这意味着什么呢? + +首先,这意味着你现在可以和团队一起实践 DevOps 了。没错,只需要让团队专注于他们已经拥有的专业知识,而对于容器,只需让团队了解所需集成依赖关系的必要知识即可。 + +如果你想要重新训练每个人,没有人会精通所有事情。容器技术允许团队之间进行交互,但同时也会为每个团队提供一个围绕该团队优势而构建的强大边界。开发人员会知道需要消耗什么资源,但不需要知道如何使其大规模运行。运维团队了解核心基础设施,但不需要了解应用程序的细节。此外,运维团队也可以通过更新应用程序来解决新的安全问题,以免你成为下一个数据泄露的热门话题。 + +想要为一个大型 IT 组织,比如 30000 人的团队教授运维和开发技能?那或许需要花费你十年的时间,而你可能并没有那么多时间。 + +当人们谈论“构建新的云原生应用程序将帮助我们摆脱这个问题”时,请批判性地进行思考。你可以在 10 个人的团队中构建云原生应用程序,但这对《财富》杂志前 1000 强的企业而言或许并不适用。除非你不再需要依赖现有的团队,否则你无法一个接一个地构建新的微服务:你最终将成为一个孤立的组织。这是一个诱人的想法,但你不能指望这些应用程序来重新定义你的业务。我还没见过哪家公司能在如此大规模的并行开发中获得成功。IT 预算已经受到限制;在很长时间内,将预算翻倍甚至三倍是不现实的。 + +### 当奇迹发生时:你好,速度 + +Linux 容器就是为扩容而生的。一旦你开始这样做,[Kubernetes 之类的编制工具就会发挥作用][6],这是因为你将需要运行数千个容器。应用程序将不仅仅由一个容器组成,它们将依赖于许多不同的部分,所有的部分都会作为一个单元运行在容器上。如果不这样做,你的应用程序将无法在生产环境中很好地运行。 + +思考一下有多少小滑轮和杠杆组合在一起来支撑你的业务,对于任何应用程序都是如此。开发人员负责应用程序中的所有滑轮和杠杆。(如果开发人员没有这些组件,你可能会在集成时做噩梦。)与此同时,无论是在线下还是在云上,运维团队都会负责构成基础设施的所有滑轮和杠杆。做一个较为抽象的比喻,使用Kubernetes,你的运维团队就可以为应用程序提供运行所需的燃料,但又不必成为所有方面的专家。 + +开发人员进行实验,运维团队则保持基础设施的安全和可靠。这样的组合使得企业敢于承担小风险,从而实现创新。不同于打几个孤注一掷的赌,公司中真正的实验往往是循序渐进的和快速的。 + +从个人经验来看,这就是组织内部发生的显著变化:因为人们说:“我们如何通过改变计划来真正地利用这种实验能力?”它会强制执行敏捷计划。 + +举个例子,使用 DevOps 模型、容器和 Kubernetes 的 KeyBank 如今每天都会部署代码。(观看[视频][7],其中主导了 KeyBank 持续交付和反馈的 John Rzeszotarski 将解释这一变化。)类似地,Macquarie 银行也借助 DevOps 和容器技术每天将一些东西投入生产环境。 + +一旦你每天都推出软件,它就会改变你计划的每一个方面,并且会[加速业务的变化速度][8]。Macquarie 银行和金融服务集团的 CDO,Luis Uguina 表示:“创意可以在一天内触达客户。”(参见对 Red Hat 与 Macquarie 银行合作的[案例研究][9])。 + +### 是时候去创造一些伟大的东西了 + +Macquarie 的例子说明了速度的力量。这将如何改变你的经营方式?记住,Macquarie 不是一家初创企业。这是 CIO 们所面临的颠覆性力量,它不仅来自新的市场进入者,也来自老牌同行。 + +开发人员的自由还改变了运营敏捷商店的 CIO 们的人才方程式。突然之间,大公司里的个体(即使不是在最热门的行业或地区)也可以产生巨大的影响。Macquarie 利用这一变动作为招聘工具,并向开发人员承诺,所有新招聘的员工将会在第一周内推出新产品。 + +与此同时,在这个基于云的计算和存储能力的时代,我们比以往任何时候都拥有更多可用的基础设施。考虑到[机器学习和人工智能工具将很快实现的飞跃][10],这是幸运的。 + +所有这些都说明现在正是打造伟大事业的好时机。考虑到市场创新的速度,你需要不断地创造伟大的东西来保持客户的忠诚度。因此,如果你一直在等待将赌注押在 DevOps 上,那么现在就是正确的时机。容器技术和 Kubernetes 改变了规则,并且对你有利。 + +-------------------------------------------------------------------------------- + +via: https://enterprisersproject.com/article/2018/1/how-technology-changes-rules-doing-agile + +作者:[Matt Hicks][a] +译者:[JayFrank](https://github.com/JayFrank) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://enterprisersproject.com/user/matt-hicks +[1]:https://enterprisersproject.com/tags/devops +[2]:https://www.redhat.com/en/topics/containers?intcmp=701f2000000tjyaAAA 
+[3]:https://www.redhat.com/en/topics/containers/what-is-kubernetes?intcmp=701f2000000tjyaAAA +[4]:https://enterprisersproject.com/article/2017/8/4-container-adoption-patterns-what-you-need-know?sc_cid=70160000000h0aXAAQ +[5]:https://enterprisersproject.com/devops?sc_cid=70160000000h0aXAAQ +[6]:https://enterprisersproject.com/article/2017/11/how-enterprise-it-uses-kubernetes-tame-container-complexity +[7]:https://www.redhat.com/en/about/videos/john-rzeszotarski-keybank-red-hat-summit-2017?intcmp=701f2000000tjyaAAA +[8]:https://enterprisersproject.com/article/2017/11/dear-cios-stop-beating-yourselves-being-behind-transformation +[9]:https://www.redhat.com/en/resources/macquarie-bank-case-study?intcmp=701f2000000tjyaAAA +[10]:https://enterprisersproject.com/article/2018/1/4-ai-trends-watch +[11]:https://enterprisersproject.com/email-newsletter?intcmp=701f2000000tsjPAAQ diff --git a/published/201909/20180330 Go on very small hardware Part 1.md b/published/201909/20180330 Go on very small hardware Part 1.md new file mode 100644 index 0000000000..e4759907fc --- /dev/null +++ b/published/201909/20180330 Go on very small hardware Part 1.md @@ -0,0 +1,480 @@ +Go 语言在极小硬件上的运用(一) +========= + +Go 语言,能在多低下的配置上运行并发挥作用呢? + +我最近购买了一个特别便宜的开发板: + +![STM32F030F4P6](https://ziutek.github.io/images/mcu/f030-demo-board/board.jpg) + +我购买它的理由有三个。首先,我(作为程序员)从未接触过 STM320 系列的开发板。其次,STM32F10x 系列使用也有点少了。STM320 系列的 MCU 很便宜,有更新一些的外设,对系列产品进行了改进,问题修复也做得更好了。最后,为了这篇文章,我选用了这一系列中最低配置的开发板,整件事情就变得有趣起来了。 + +### 硬件部分 + +[STM32F030F4P6][3] 给人留下了很深的印象: + +* CPU: [Cortex M0][1] 48 MHz(最低配置,只有 12000 个逻辑门电路) +* RAM: 4 KB, +* Flash: 16 KB, +* ADC、SPI、I2C、USART 和几个定时器 + +以上这些采用了 TSSOP20 封装。正如你所见,这是一个很小的 32 位系统。 + +### 软件部分 + +如果你想知道如何在这块开发板上使用 [Go][4] 编程,你需要反复阅读硬件规范手册。你必须面对这样的真实情况:在 Go 编译器中给 Cortex-M0 提供支持的可能性很小。而且,这还仅仅只是第一个要解决的问题。 + +我会使用 [Emgo][5],但别担心,之后你会看到,它如何让 Go 在如此小的系统上尽可能发挥作用。 + +在我拿到这块开发板之前,对 [stm32/hal][6] 系列下的 F0 MCU 没有任何支持。在简单研究[参考手册][7]后,我发现 STM32F0 系列是 STM32F3 削减版,这让在新端口上开发的工作变得容易了一些。 + +如果你想接着本文的步骤做下去,需要先安装 Emgo + +``` +cd $HOME +git clone https://github.com/ziutek/emgo/ +cd emgo/egc +go install +``` + +然后设置一下环境变量 + +``` +export EGCC=path_to_arm_gcc # eg. /usr/local/arm/bin/arm-none-eabi-gcc +export EGLD=path_to_arm_linker # eg. /usr/local/arm/bin/arm-none-eabi-ld +export EGAR=path_to_arm_archiver # eg. /usr/local/arm/bin/arm-none-eabi-ar + +export EGROOT=$HOME/emgo/egroot +export EGPATH=$HOME/emgo/egpath + +export EGARCH=cortexm0 +export EGOS=noos +export EGTARGET=f030x6 +``` + +更详细的说明可以在 [Emgo][8] 官网上找到。 + +要确保 `egc` 在你的 `PATH` 中。 你可以使用 `go build` 来代替 `go install`,然后把 `egc` 复制到你的 `$HOME/bin` 或 `/usr/local/bin` 中。 + +现在,为你的第一个 Emgo 程序创建一个新文件夹,随后把示例中链接器脚本复制过来: + +``` +mkdir $HOME/firstemgo +cd $HOME/firstemgo +cp $EGPATH/src/stm32/examples/f030-demo-board/blinky/script.ld . 
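+# script.ld 是链接器脚本,其中设置了栈大小、最大任务数等参数(下文还会修改它)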
+``` + +### 最基本程序 + +在 `main.go` 文件中创建一个最基本的程序: + +``` +package main + +func main() { +} +``` + +文件编译没有出现任何问题: + +``` +$ egc +$ arm-none-eabi-size cortexm0.elf + text data bss dec hex filename + 7452 172 104 7728 1e30 cortexm0.elf +``` + +第一次编译可能会花点时间。编译后产生的二进制占用了 7624 个字节的 Flash 空间(文本 + 数据)。对于一个什么都没做的程序来说,占用的空间有些大。还剩下 8760 字节,可以用来做些有用的事。 + +不妨试试传统的 “Hello, World!” 程序: + +``` +package main + +import "fmt" + +func main() { + fmt.Println("Hello, World!") +} +``` + +不幸的是,这次结果有些糟糕: + +``` +$ egc +/usr/local/arm/bin/arm-none-eabi-ld: /home/michal/P/go/src/github.com/ziutek/emgo/egpath/src/stm32/examples/f030-demo-board/blog/cortexm0.elf section `.text' will not fit in region `Flash' +/usr/local/arm/bin/arm-none-eabi-ld: region `Flash' overflowed by 10880 bytes +exit status 1 +``` + + “Hello, World!” 需要 STM32F030x6 上至少 32KB 的 Flash 空间。 + +`fmt` 包强制包含整个 `strconv` 和 `reflect` 包。这三个包,即使在精简版本中的 Emgo 中,占用空间也很大。我们不能使用这个例子了。有很多的应用不需要好看的文本输出。通常,一个或多个 LED,或者七段数码管显示就足够了。不过,在第二部分,我会尝试使用 `strconv` 包来格式化,并在 UART 上显示一些数字和文本。 + +### 闪烁 + +我们的开发板上有一个与 PA4 引脚和 VCC 相连的 LED。这次我们的代码稍稍长了一些: + +``` +package main + +import ( + "delay" + + "stm32/hal/gpio" + "stm32/hal/system" + "stm32/hal/system/timer/systick" +) + +var led gpio.Pin + +func init() { + system.SetupPLL(8, 1, 48/8) + systick.Setup(2e6) + + gpio.A.EnableClock(false) + led = gpio.A.Pin(4) + + cfg := &gpio.Config{Mode: gpio.Out, Driver: gpio.OpenDrain} + led.Setup(cfg) +} + +func main() { + for { + led.Clear() + delay.Millisec(100) + led.Set() + delay.Millisec(900) + } +} +``` + +按照惯例,`init` 函数用来初始化和配置外设。 + +`system.SetupPLL(8, 1, 48/8)` 用来配置 RCC,将外部的 8 MHz 振荡器的 PLL 作为系统时钟源。PLL 分频器设置为 1,倍频数设置为 48/8 =6,这样系统时钟频率为 48MHz。 + +`systick.Setup(2e6)` 将 Cortex-M SYSTICK 时钟作为系统时钟,每隔 2e6 次纳秒运行一次(每秒钟 500 次)。 + +`gpio.A.EnableClock(false)` 开启了 GPIO A 口的时钟。`False` 意味着这一时钟在低功耗模式下会被禁用,但在 STM32F0 系列中并未实现这一功能。 + +`led.Setup(cfg)` 设置 PA4 引脚为开漏输出。 + +`led.Clear()` 将 PA4 引脚设为低,在开漏设置中,打开 LED。 + +`led.Set()` 将 PA4 设为高电平状态,关掉LED。 + +编译这个代码: + +``` +$ egc +$ arm-none-eabi-size cortexm0.elf + text data bss dec hex filename + 9772 172 168 10112 2780 cortexm0.elf +``` + +正如你所看到的,这个闪烁程序占用了 2320 字节,比最基本程序占用空间要大。还有 6440 字节的剩余空间。 + +看看代码是否能运行: + +``` +$ openocd -d0 -f interface/stlink.cfg -f target/stm32f0x.cfg -c 'init; program cortexm0.elf; reset run; exit' +Open On-Chip Debugger 0.10.0+dev-00319-g8f1f912a (2018-03-07-19:20) +Licensed under GNU GPL v2 +For bug reports, read + http://openocd.org/doc/doxygen/bugs.html +debug_level: 0 +adapter speed: 1000 kHz +adapter_nsrst_delay: 100 +none separate +adapter speed: 950 kHz +target halted due to debug-request, current mode: Thread +xPSR: 0xc1000000 pc: 0x0800119c msp: 0x20000da0 +adapter speed: 4000 kHz +** Programming Started ** +auto erase enabled +target halted due to breakpoint, current mode: Thread +xPSR: 0x61000000 pc: 0x2000003a msp: 0x20000da0 +wrote 10240 bytes from file cortexm0.elf in 0.817425s (12.234 KiB/s) +** Programming Finished ** +adapter speed: 950 kHz +``` + +在这篇文章中,这是我第一次,将一个短视频转换成[动画 PNG][9]。我对此印象很深,再见了 YouTube。 对于 IE 用户,我很抱歉,更多信息请看 [apngasm][10]。我本应该学习 HTML5,但现在,APNG 是我最喜欢的,用来播放循环短视频的方法了。 + +![STM32F030F4P6](https://ziutek.github.io/images/mcu/f030-demo-board/blinky.png) + +### 更多的 Go 语言编程 + +如果你不是一个 Go 程序员,但你已经听说过一些关于 Go 语言的事情,你可能会说:“Go 语法很好,但跟 C 比起来,并没有明显的提升。让我看看 Go 语言的通道和协程!” + +接下来我会一一展示: + +``` +import ( + "delay" + + "stm32/hal/gpio" + "stm32/hal/system" + "stm32/hal/system/timer/systick" +) + +var led1, led2 gpio.Pin + +func init() { + system.SetupPLL(8, 1, 48/8) + systick.Setup(2e6) + + 
gpio.A.EnableClock(false) + led1 = gpio.A.Pin(4) + led2 = gpio.A.Pin(5) + + cfg := &gpio.Config{Mode: gpio.Out, Driver: gpio.OpenDrain} + led1.Setup(cfg) + led2.Setup(cfg) +} + +func blinky(led gpio.Pin, period int) { + for { + led.Clear() + delay.Millisec(100) + led.Set() + delay.Millisec(period - 100) + } +} + +func main() { + go blinky(led1, 500) + blinky(led2, 1000) +} + +``` + +代码改动很小: 添加了第二个 LED,上一个例子中的 `main` 函数被重命名为 `blinky` 并且需要提供两个参数。 `main` 在新的协程中先调用 `blinky`,所以两个 LED 灯在并行使用。值得一提的是,`gpio.Pin` 可以同时访问同一 GPIO 口的不同引脚。 + +Emgo 还有很多不足。其中之一就是你需要提前规定 `goroutines(tasks)` 的最大执行数量。是时候修改 `script.ld` 了: + +``` +ISRStack = 1024; +MainStack = 1024; +TaskStack = 1024; +MaxTasks = 2; + +INCLUDE stm32/f030x4 +INCLUDE stm32/loadflash +INCLUDE noos-cortexm +``` + +栈的大小需要靠猜,现在还不用关心这一点。 + +``` +$ egc +$ arm-none-eabi-size cortexm0.elf + text data bss dec hex filename + 10020 172 172 10364 287c cortexm0.elf +``` + +另一个 LED 和协程一共占用了 248 字节的 Flash 空间。 + +![STM32F030F4P6](https://ziutek.github.io/images/mcu/f030-demo-board/goroutines.png) + +### 通道 + +通道是 Go 语言中协程之间相互通信的一种[推荐方式][11]。Emgo 甚至能允许通过*中断处理*来使用缓冲通道。下一个例子就展示了这种情况。 + +``` +package main + +import ( + "delay" + "rtos" + + "stm32/hal/gpio" + "stm32/hal/irq" + "stm32/hal/system" + "stm32/hal/system/timer/systick" + "stm32/hal/tim" +) + +var ( + leds [3]gpio.Pin + timer *tim.Periph + ch = make(chan int, 1) +) + +func init() { + system.SetupPLL(8, 1, 48/8) + systick.Setup(2e6) + + gpio.A.EnableClock(false) + leds[0] = gpio.A.Pin(4) + leds[1] = gpio.A.Pin(5) + leds[2] = gpio.A.Pin(9) + + cfg := &gpio.Config{Mode: gpio.Out, Driver: gpio.OpenDrain} + for _, led := range leds { + led.Set() + led.Setup(cfg) + } + + timer = tim.TIM3 + pclk := timer.Bus().Clock() + if pclk < system.AHB.Clock() { + pclk *= 2 + } + freq := uint(1e3) // Hz + timer.EnableClock(true) + timer.PSC.Store(tim.PSC(pclk/freq - 1)) + timer.ARR.Store(700) // ms + timer.DIER.Store(tim.UIE) + timer.CR1.Store(tim.CEN) + + rtos.IRQ(irq.TIM3).Enable() +} + +func blinky(led gpio.Pin, period int) { + for range ch { + led.Clear() + delay.Millisec(100) + led.Set() + delay.Millisec(period - 100) + } +} + +func main() { + go blinky(leds[1], 500) + blinky(leds[2], 500) +} + +func timerISR() { + timer.SR.Store(0) + leds[0].Set() + select { + case ch <- 0: + // Success + default: + leds[0].Clear() + } +} + +//c:__attribute__((section(".ISRs"))) +var ISRs = [...]func(){ + irq.TIM3: timerISR, +} +``` + +与之前例子相比较下的不同: + +1. 添加了第三个 LED,并连接到 PA9 引脚(UART 头的 TXD 引脚)。 +2. 时钟(`TIM3`)作为中断源。 +3. 新函数 `timerISR` 用来处理 `irq.TIM3` 的中断。 +4. 新增容量为 1 的缓冲通道是为了 `timerISR` 和 `blinky` 协程之间的通信。 +5. `ISRs` 数组作为*中断向量表*,是更大的*异常向量表*的一部分。 +6. 
+让我们来编译一下代码:
+
+```
+$ egc
+$ arm-none-eabi-size cortexm0.elf
+   text    data     bss     dec     hex filename
+  11096     228     188   11512    2cf8 cortexm0.elf
+```
+
+新的例子占用了 11324 字节的 Flash 空间,比上一个例子多占用了 1132 字节。
+
+采用现在的时序,两个闪烁协程从通道中获取数据的速度,比 `timerISR` 发送数据的速度要快。所以它们会同时等待新数据,此时你还能观察到 `select` 的随机性,这也是 [Go 规范][13]所要求的。
+
+![STM32F030F4P6](https://ziutek.github.io/images/mcu/f030-demo-board/channels1.png)
+
+开发板上的 LED 一直没有亮起,说明通道从未出现过溢出。
+
+我们可以加快消息发送的速度,将 `timer.ARR.Store(700)` 改为 `timer.ARR.Store(200)`。现在 `timerISR` 每秒钟发送 5 条消息,但是两个接收者加起来,每秒也只能接收 4 条消息。
+
+![STM32F030F4P6](https://ziutek.github.io/images/mcu/f030-demo-board/channels2.png)
+
+正如你所看到的,`timerISR` 点亮了黄色 LED,意味着通道上已经没有剩余空间了。
+
+第一部分到这里就结束了。你应该知道,这一部分并未展示 Go 中最重要的特性:接口。
+
+协程和通道只是一些方便好用的语法。你可以用自己的代码来替换它们,这并不容易,但也可以实现。而接口是 Go 语言的基础,它将是本文[第二部分][14]的主题。
+
+在 Flash 上我们还有些剩余空间。
+
+--------------------------------------------------------------------------------
+
+via: https://ziutek.github.io/2018/03/30/go_on_very_small_hardware.html
+
+作者:[Michał Derkacz][a]
+译者:[wenwensnow](https://github.com/wenwensnow)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://ziutek.github.io/
+[1]:https://en.wikipedia.org/wiki/ARM_Cortex-M#Cortex-M0
+[2]:https://ziutek.github.io/2018/03/30/go_on_very_small_hardware.html
+[3]:http://www.st.com/content/st_com/en/products/microcontrollers/stm32-32-bit-arm-cortex-mcus/stm32-mainstream-mcus/stm32f0-series/stm32f0x0-value-line/stm32f030f4.html
+[4]:https://golang.org/
+[5]:https://github.com/ziutek/emgo
+[6]:https://github.com/ziutek/emgo/tree/master/egpath/src/stm32/hal
+[7]:http://www.st.com/resource/en/reference_manual/dm00091010.pdf
+[8]:https://github.com/ziutek/emgo
+[9]:https://en.wikipedia.org/wiki/APNG
+[10]:http://apngasm.sourceforge.net/
+[11]:https://blog.golang.org/share-memory-by-communicating
+[12]:http://infocenter.arm.com/help/topic/com.arm.doc.ddi0432c/Cihbecee.html
+[13]:https://golang.org/ref/spec#Select_statements
+[14]:https://ziutek.github.io/2018/04/14/go_on_very_small_hardware2.html
diff --git a/published/20180704 BASHing data- Truncated data items.md b/published/201909/20180704 BASHing data- Truncated data items.md
similarity index 100%
rename from published/20180704 BASHing data- Truncated data items.md
rename to published/201909/20180704 BASHing data- Truncated data items.md
diff --git a/published/201909/20180705 Building a Messenger App- Schema.md b/published/201909/20180705 Building a Messenger App- Schema.md
new file mode 100644
index 0000000000..cf222174a6
--- /dev/null
+++ b/published/201909/20180705 Building a Messenger App- Schema.md
@@ -0,0 +1,116 @@
+[#]: collector: "lujun9972"
+[#]: translator: "PsiACE"
+[#]: reviewer: "wxy"
+[#]: publisher: "wxy"
+[#]: url: "https://linux.cn/article-11396-1.html"
+[#]: subject: "Building a Messenger App: Schema"
+[#]: via: "https://nicolasparada.netlify.com/posts/go-messenger-schema/"
+[#]: author: "Nicolás Parada https://nicolasparada.netlify.com/"
+
+构建一个即时消息应用(一):模式
+========
+
+![](https://img.linux.net.cn/data/attachment/album/201909/27/211458n44f7jvp77lfxxm0.jpg)
+
+这是一系列关于构建“即时消息”应用的新帖子。你应该对这类应用并不陌生。有了它们的帮助,我们才可以与朋友畅聊无忌。[Facebook Messenger][1]、[WhatsApp][2] 和 [Skype][3] 就是其中的几个例子。正如你所看到的那样,这些应用允许我们发送图片、传输视频、录制音频、以及和一大帮子人聊天等等。当然,我们的教程应用将会尽量保持简单,只在两个用户之间发送文本消息。
+
+我们将会用 [CockroachDB][4] 作为 SQL 数据库,用 [Go][5] 作为后端语言,并且用 JavaScript 来制作 web 应用。
+
+这是第一篇帖子,我们将会讲述数据库的设计。
+
+```
+CREATE TABLE users (
+    id SERIAL NOT NULL PRIMARY KEY,
+    username STRING NOT NULL UNIQUE,
+    avatar_url STRING,
+    github_id INT NOT NULL UNIQUE
+);
+```
+
+显然,这个应用需要一些用户。我们这里采用社交登录的形式。由于我选用了 [GitHub][6],所以这里需要保存一个对 GitHub 用户 ID 的引用。
+
+```
+CREATE TABLE conversations (
+    id SERIAL NOT NULL PRIMARY KEY,
+    last_message_id INT,
+    INDEX (last_message_id DESC)
+);
+```
+
+每个对话都会引用最近一条消息。每当我们插入一条新消息时,都会更新这个字段。我会在后面添加外键约束。
+
+你可能会想,可以先把消息按对话分组,再通过这种方式查出最近一条消息,但这样做会使查询变得更加复杂。
+
+```
+CREATE TABLE participants (
+    user_id INT NOT NULL REFERENCES users ON DELETE CASCADE,
+    conversation_id INT NOT NULL REFERENCES conversations ON DELETE CASCADE,
+    messages_read_at TIMESTAMPTZ NOT NULL DEFAULT now(),
+    PRIMARY KEY (user_id, conversation_id)
+);
+```
+
+尽管之前我提到过对话只会在两个用户之间进行,但我们还是采用了允许向对话中添加多个参与者的设计。因此,在对话和用户之间有一个参与者表。
+
+为了知道用户是否有未读消息,我们在参与者表中设计了“读取时间”(`messages_read_at`)字段。每当用户读取对话中的消息时,我们都会更新它的值,这样一来,我们就可以将它与对话中最后一条消息的“创建时间”(`created_at`)字段进行比较。
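+
+作为补充说明,下面是一个示意性的查询(并非原文内容;`messages` 表会在下面马上定义,其余字段均取自上面的表结构),展示这种比较在 SQL 里大致的写法:
+
+```
+-- 示意:判断用户 1 在对话 2 中是否有未读消息
+SELECT m.created_at > p.messages_read_at AS has_unread
+FROM participants AS p
+JOIN conversations AS c ON c.id = p.conversation_id
+JOIN messages AS m ON m.id = c.last_message_id
+WHERE p.user_id = 1 AND p.conversation_id = 2;
+```
+
+如果 `has_unread` 为真,就说明该对话中还有这位用户尚未读取的消息。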
+
+```
+CREATE TABLE messages (
+    id SERIAL NOT NULL PRIMARY KEY,
+    content STRING NOT NULL,
+    user_id INT NOT NULL REFERENCES users ON DELETE CASCADE,
+    conversation_id INT NOT NULL REFERENCES conversations ON DELETE CASCADE,
+    created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
+    INDEX(created_at DESC)
+);
+```
+
+尽管我们把消息表放在最后定义,但它在应用中相当重要。我们用它来保存对创建者用户以及它所属对话的引用,而且还根据“创建时间”(`created_at`)建了索引,以便对消息排序。
+
+```
+ALTER TABLE conversations
+ADD CONSTRAINT fk_last_message_id_ref_messages
+FOREIGN KEY (last_message_id) REFERENCES messages ON DELETE SET NULL;
+```
+
+我在前面已经提到过这个外键约束了,不是吗:D
+
+有这四张表就足够了。你也可以将这些查询保存到一个文件中,并将其通过管道传送到 Cockroach CLI。
+
+首先,我们需要启动一个新节点:
+
+```
+cockroach start --insecure --host 127.0.0.1
+```
+
+然后创建数据库和这些表:
+
+```
+cockroach sql --insecure -e "CREATE DATABASE messenger"
+cat schema.sql | cockroach sql --insecure -d messenger
+```
+
+这篇帖子就到这里。在接下来的部分中,我们将会介绍“登录”,敬请期待。
+
+- [源代码][7]
+
+---
+
+via: https://nicolasparada.netlify.com/posts/go-messenger-schema/
+
+作者:[Nicolás Parada][a]
+选题:[lujun9972][b]
+译者:[PsiACE](https://github.com/PsiACE)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux 中国](https://linux.cn/) 荣誉推出
+
+[a]: https://nicolasparada.netlify.com/
+[b]: https://github.com/lujun9972
+[1]: https://www.messenger.com/
+[2]: https://www.whatsapp.com/
+[3]: https://www.skype.com/
+[4]: https://www.cockroachlabs.com/
+[5]: https://golang.org/
+[6]: https://github.com/
+[7]: https://github.com/nicolasparada/go-messenger-demo
diff --git a/published/20180802 Top 5 CAD Software Available for Linux in 2018.md b/published/201909/20180802 Top 5 CAD Software Available for Linux in 2018.md
similarity index 100%
rename from published/20180802 Top 5 CAD Software Available for Linux in 2018.md
rename to published/201909/20180802 Top 5 CAD Software Available for Linux in 2018.md
diff --git a/published/20180904 How blockchain can complement open source.md b/published/201909/20180904 How blockchain can complement open source.md
similarity index 100%
rename from published/20180904 How blockchain can complement open source.md
rename to published/201909/20180904 How blockchain can complement open source.md
diff --git a/published/20181113 Eldoc Goes Global.md b/published/201909/20181113 Eldoc Goes Global.md
similarity index 100%
rename from published/20181113 Eldoc Goes Global.md
rename to published/201909/20181113 Eldoc Goes Global.md
diff --git a/published/201909/20181227 Linux commands for measuring disk activity.md b/published/201909/20181227 Linux commands for measuring disk activity.md
new file mode 100644
index 0000000000..05988cbf63
--- /dev/null
+++ b/published/201909/20181227 Linux commands for measuring disk activity.md
@@ -0,0 +1,252 @@
+[#]: collector: (lujun9972)
+[#]: translator: (laingke)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11387-1.html)
+[#]: subject: (Linux commands for measuring disk activity)
+[#]: via: (https://www.networkworld.com/article/3330497/linux/linux-commands-for-measuring-disk-activity.html)
+[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
+
+用于测量磁盘活动的 Linux 命令
+======
+> Linux 发行版提供了几个度量磁盘活动的有用命令。让我们了解一下其中的几个。
+
+![](https://images.idgesg.net/images/article/2018/12/tape-measure-100782593-large.jpg)
+
+Linux 系统提供了一套方便的命令,帮助你查看磁盘有多忙,而不仅仅是磁盘有多满。在本文中,我们将研究五个非常有用的磁盘活动查看命令。其中两个命令(`iostat` 和 `ioping`)可能需要先安装到你的系统中;同样是这两个命令,需要你使用 sudo 特权来运行。这五个命令都提供了查看磁盘活动的有用方法。
+
+这些命令中最简单、最直观的一个可能是 `dstat` 了。
+
+### dstat
+
+尽管 `dstat` 命令以字母 “d” 开头,但它提供的统计信息远远不止磁盘活动。如果你只想查看磁盘活动,可以使用 `-d` 选项。如下所示,你将得到一个磁盘读/写测量值的连续列表,直到使用 `CTRL-c` 停止显示为止。注意,第一行报告之后的每一行,报告的都是其后一个时间间隔内的磁盘活动,间隔的缺省值仅为一秒。
+
+```
+$ dstat -d
+-dsk/total-
+ read  writ
+ 949B   73k
+  65k    0   <== first second
+   0    24k  <== second second
+   0    16k
+   0     0 ^C
+```
+
+在 `-d` 选项后面包含一个数字将把间隔设置为该秒数。
+
+```
+$ dstat -d 10
+-dsk/total-
+ read  writ
+ 949B   73k
+  65k   81M  <== first five seconds
+   0    21k  <== second five second
+   0  9011B ^C
+```
+
+请注意,报告的数据可能以许多不同的单位显示——例如,M(兆字节)、K(千字节)和 B(字节)。
+
+如果没有选项,`dstat` 命令还将显示许多其他信息——指示 CPU 如何使用时间、显示网络和分页活动、报告中断和上下文切换。
+
+```
+$ dstat
+You did not select any stats, using -cdngy by default.
+--total-cpu-usage-- -dsk/total- -net/total- ---paging-- ---system--
+usr sys idl wai stl| read  writ| recv  send|  in   out | int   csw
+  0   0 100   0   0| 949B   73k|   0     0 |   0     3B|  38    65
+  0   0 100   0   0|   0     0 | 218B  932B|   0     0 |  53    68
+  0   1  99   0   0|   0    16k|  64B  468B|   0     0 |  64    81 ^C
+```
+
+`dstat` 命令提供了关于整个 Linux 系统性能的有价值的见解。它灵活而强大,结合了 `vmstat`、`netstat`、`iostat` 和 `ifstat` 等较旧工具的功能,几乎可以取代这一整套工具。要深入了解 `dstat` 命令还能提供哪些信息,请参阅这篇关于 [dstat][1] 命令的文章。
+
+### iostat
+
+`iostat` 命令通过观察设备活动的时间与其平均传输速率之间的关系,帮助监视系统输入/输出设备的加载情况。它有时用于评估磁盘之间的活动平衡。
+
+```
+$ iostat
+Linux 4.18.0-041800-generic (butterfly)         12/26/2018      _x86_64_        (2 CPU)
+
+avg-cpu:  %user   %nice %system %iowait  %steal   %idle
+           0.07    0.01    0.03    0.05    0.00   99.85
+
+Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
+loop0             0.00         0.00         0.00       1048          0
+loop1             0.00         0.00         0.00        365          0
+loop2             0.00         0.00         0.00       1056          0
+loop3             0.00         0.01         0.00      16169          0
+loop4             0.00         0.00         0.00        413          0
+loop5             0.00         0.00         0.00       1184          0
+loop6             0.00         0.00         0.00       1062          0
+loop7             0.00         0.00         0.00       5261          0
+sda               1.06         0.89        72.66    2837453  232735080
+sdb               0.00         0.02         0.00      48669         40
+loop8             0.00         0.00         0.00       1053          0
+loop9             0.01         0.01         0.00      18949          0
+loop10            0.00         0.00         0.00         56          0
+loop11            0.00         0.00         0.00       7090          0
+loop12            0.00         0.00         0.00       1160          0
+loop13            0.00         0.00         0.00        108          0
+loop14            0.00         0.00         0.00       3572          0
+loop15            0.01         0.01         0.00      20026          0
+loop16            0.00         0.00         0.00         24          0
+```
+
+当然,当你只想关注磁盘时,Linux 回环设备上提供的所有统计信息都会使结果显得杂乱无章。不过,该命令也确实提供了 `-p` 选项,该选项使你可以仅查看磁盘——如以下命令所示。
+
+```
+$ iostat -p sda
+Linux 4.18.0-041800-generic (butterfly)         12/26/2018      _x86_64_        (2 CPU)
+
+avg-cpu:  %user   %nice %system %iowait  %steal   %idle
+           0.07    0.01    0.03    0.05    0.00   99.85
+
+Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
+sda               1.06         0.89        72.54    2843737  232815784
+sda1              1.04         0.88        72.54    2821733  232815784
+```
+
+请注意 `tps` 是指每秒的传输量。
+
+你还可以让 `iostat` 提供重复的报告。在下面的示例中,我们使用 `-d` 选项每五秒钟进行一次测量。
+
+```
+$ iostat -p sda -d 5
+Linux 4.18.0-041800-generic (butterfly)         12/26/2018      _x86_64_        (2 CPU)
+
+Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
+sda               1.06         0.89        72.51    2843749  232834048
+sda1              1.04         0.88        72.51    2821745  232834048
+
+Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
+sda               0.80         0.00        11.20          0         56
+sda1              0.80         0.00        11.20          0         56
+```
+
+如果你希望省略第一个(自启动以来的统计信息)报告,请在命令中添加 `-y`。
+
+```
+$ iostat -p sda -d 5 -y
+Linux 4.18.0-041800-generic (butterfly)         12/26/2018      _x86_64_        (2 CPU)
+
+Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
+sda               0.80         0.00        11.20          0         56
+sda1              0.80         0.00        11.20          0         56
+```
+
+接下来,我们看第二个磁盘驱动器。
+
+```
+$ iostat -p sdb
+Linux 4.18.0-041800-generic (butterfly)         12/26/2018      _x86_64_        (2 CPU)
+
+avg-cpu:  %user   %nice %system %iowait  %steal   %idle
+           0.07    0.01    0.03    0.05    0.00   99.85
+
+Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
+sdb               0.00         0.02         0.00      48669         40
+sdb2              0.00         0.00         0.00       4861         40
+sdb1              0.00         0.01         0.00      35344          0
+```
+
+### iotop
+
+`iotop` 命令是类似 `top` 的实用程序,用于查看磁盘 I/O。它收集 Linux 内核提供的 I/O 使用信息,以便你了解哪些进程在磁盘 I/O 方面的要求最高。在下面的示例中,循环时间被设置为 5 秒。显示将自动更新,覆盖前面的输出。
+
+```
+$ sudo iotop -d 5
+Total DISK READ:         0.00 B/s | Total DISK WRITE:      1585.31 B/s
+Current DISK READ:       0.00 B/s | Current DISK WRITE:      12.39 K/s
+  TID  PRIO  USER     DISK READ DISK WRITE  SWAPIN     IO>    COMMAND
+32492 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.12 % [kworker/u8:1-ev~_power_efficient]
+  208 be/3 root        0.00 B/s 1585.31 B/s  0.00 %  0.11 % [jbd2/sda1-8]
+    1 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % init splash
+    2 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kthreadd]
+    3 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [rcu_gp]
+    4 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [rcu_par_gp]
+    8 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [mm_percpu_wq]
+```
+
+### ioping
+
+`ioping` 命令是一种完全不同的工具,但是它可以报告磁盘延迟——也就是磁盘响应请求需要多长时间,而这有助于诊断磁盘问题。
+
+```
+$ sudo ioping /dev/sda1
+4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=1 time=960.2 us (warmup)
+4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=2 time=841.5 us
+4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=3 time=831.0 us
+4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=4 time=1.17 ms
+^C
+--- /dev/sda1 (block device 111.8 GiB) ioping statistics ---
+3 requests completed in 2.84 ms, 12 KiB read, 1.05 k iops, 4.12 MiB/s
+generated 4 requests in 3.37 s, 16 KiB, 1 iops, 4.75 KiB/s
+min/avg/max/mdev = 831.0 us / 947.9 us / 1.17 ms / 158.0 us
+```

+### atop
+
+`atop` 命令像 `top` 一样提供了大量有关系统性能的信息,其中包括一些磁盘活动的统计信息。
+
+```
+ATOP - butterfly      2018/12/26  17:24:19      37d3h13m------ 10ed
+PRC | sys    0.03s | user   0.01s | #proc    179 | #zombie    0 | #exit      6 |
+CPU | sys       1% | user      0% | irq       0% | idle    199% | wait      0% |
+cpu | sys       1% | user      0% | irq       0% | idle     99% | cpu000 w  0% |
+CPL | avg1    0.00 | avg5    0.00 | avg15   0.00 | csw      677 | intr     470 |
+MEM | tot     5.8G | free  223.4M | cache   4.6G | buff  253.2M | slab  394.4M |
+SWP | tot     2.0G | free    2.0G |              | vmcom   1.9G | vmlim   4.9G |
+DSK |          sda | busy      0% | read       0 | write      7 | avio 1.14 ms |
+NET | transport    | tcpi       4 | tcpo stall  8 | udpi       1 | udpo 0swout 2255 |
+NET | network      | ipi       10 | ipo         7 | ipfrw      0 | deliv 60.67 ms |
+NET | enp0s25   0% | pcki      10 | pcko        8 | si    1 Kbps | so    3 Kbp0.73 ms |
+
+  PID SYSCPU USRCPU VGROW RGROW ST EXC THR S CPUNR  CPU CMD 1/1673e4 |
+ 3357  0.01s  0.00s  672K  824K --   -   1 R     0   0% atop
+ 3359  0.01s  0.00s    0K    0K NE   0   0 E     -   0%
+ 3361  0.00s  0.01s    0K    0K NE   0   0 E     -   0%
+ 3363  0.01s  0.00s    0K    0K NE   0   0 E     -   0%
+31357  0.00s  0.00s    0K    0K --   -   1 S     1   0% bash
+ 3364  0.00s  0.00s 8032K  756K N-   -   1 S     1   0% sleep
+ 2931  0.00s  0.00s    0K    0K --   -   1 I     1   0% kworker/u8:2-e
+ 3356  0.00s  0.00s    0K    0K -E   0   0 E     -   0%
+ 3360  0.00s  0.00s    0K    0K NE   0   0 E     -   0%
+ 3362  0.00s  0.00s    0K    0K NE   0   0 E     -   0%
+```
+
+如果你*只*想查看磁盘统计信息,则可以使用以下命令轻松地把它们筛选出来:
+
+```
+$ atop | grep DSK
+DSK |          sda | busy      0% | read  122901 | write 3318e3 | avio 0.67 ms |
+DSK |          sdb | busy      0% | read    1168 | write    103 | avio 0.73 ms |
+DSK |          sda | busy      2% | read       0 | write     92 | avio 2.39 ms |
+DSK |          sda | busy      2% | read       0 | write     94 | avio 2.47 ms |
+DSK |          sda | busy      2% | read       0 | write     99 | avio 2.26 ms |
+DSK |          sda | busy      2% | read       0 | write     94 | avio 2.43 ms |
+DSK |          sda | busy      2% | read       0 | write     94 | avio 2.43 ms |
+DSK |          sda | busy      2% | read       0 | write     92 | avio 2.43 ms |
+^C
+```
+
+### 了解磁盘 I/O
+
+Linux 提供了足够多的命令,可以让你很好地了解磁盘的工作强度,并帮助你关注潜在的问题或性能变慢的情况。希望这些命令中的某一个总能在你需要质疑磁盘性能时告诉你答案。偶尔使用这些命令,有助于你在需要检查磁盘时,一眼就发现那些特别繁忙或缓慢的磁盘。
+
+--------------------------------------------------------------------------------
+
+via: 
https://www.networkworld.com/article/3330497/linux/linux-commands-for-measuring-disk-activity.html + +作者:[Sandra Henry-Stocker][a] +选题:[lujun9972][b] +译者:[laingke](https://github.com/laingke) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/ +[b]: https://github.com/lujun9972 +[1]: https://www.networkworld.com/article/3291616/linux/examining-linux-system-performance-with-dstat.html +[2]: https://www.facebook.com/NetworkWorld/ +[3]: https://www.linkedin.com/company/network-world diff --git a/published/20190129 Create an online store with this Java-based framework.md b/published/201909/20190129 Create an online store with this Java-based framework.md similarity index 100% rename from published/20190129 Create an online store with this Java-based framework.md rename to published/201909/20190129 Create an online store with this Java-based framework.md diff --git a/published/20190401 Build and host a website with Git.md b/published/201909/20190401 Build and host a website with Git.md similarity index 100% rename from published/20190401 Build and host a website with Git.md rename to published/201909/20190401 Build and host a website with Git.md diff --git a/published/20190402 Manage your daily schedule with Git.md b/published/201909/20190402 Manage your daily schedule with Git.md similarity index 100% rename from published/20190402 Manage your daily schedule with Git.md rename to published/201909/20190402 Manage your daily schedule with Git.md diff --git a/published/20190403 Use Git as the backend for chat.md b/published/201909/20190403 Use Git as the backend for chat.md similarity index 100% rename from published/20190403 Use Git as the backend for chat.md rename to published/201909/20190403 Use Git as the backend for chat.md diff --git a/published/20190408 A beginner-s guide to building DevOps pipelines with open source tools.md b/published/201909/20190408 A beginner-s guide to building DevOps pipelines with open source tools.md similarity index 100% rename from published/20190408 A beginner-s guide to building DevOps pipelines with open source tools.md rename to published/201909/20190408 A beginner-s guide to building DevOps pipelines with open source tools.md diff --git a/published/20190409 Working with variables on Linux.md b/published/201909/20190409 Working with variables on Linux.md similarity index 100% rename from published/20190409 Working with variables on Linux.md rename to published/201909/20190409 Working with variables on Linux.md diff --git a/translated/tech/20190505 Blockchain 2.0 - What Is Ethereum -Part 9.md b/published/201909/20190505 Blockchain 2.0 - What Is Ethereum -Part 9.md similarity index 72% rename from translated/tech/20190505 Blockchain 2.0 - What Is Ethereum -Part 9.md rename to published/201909/20190505 Blockchain 2.0 - What Is Ethereum -Part 9.md index b80271af21..5cf9c3e4c9 100644 --- a/translated/tech/20190505 Blockchain 2.0 - What Is Ethereum -Part 9.md +++ b/published/201909/20190505 Blockchain 2.0 - What Is Ethereum -Part 9.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (wxy) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11404-1.html) [#]: subject: (Blockchain 2.0 – What Is Ethereum [Part 9]) [#]: via: (https://www.ostechnix.com/blockchain-2-0-what-is-ethereum/) [#]: author: (editor 
https://www.ostechnix.com/author/editor/) @@ -12,29 +12,29 @@ ![Ethereum][1] -在本系列的上一指南中,我们讨论了 [Hyperledger 项目(HLP)][2],这是一个由 Linux 基金会开发的增长最快的产品。在本指南中,我们将详细讨论什么是“以太坊Ethereum”及其功能。许多研究人员认为,互联网的未来将基于去中心化计算decentralized computing的原理。实际上,去中心化计算是互联网放在首位的更广泛目标之一。但是,由于可用的计算能力不同,互联网发生了另一次变化。尽管现代服务器功能使服务器端处理和执行成为可能,但在世界上大部分地区缺乏像样的移动网络使客户端也是如此。现在,现代智能手机具有 SoC(片上系统),在客户端本身上也能够处理许多此类操作,但是,由于安全地检索和存储数据而受到的限制仍然迫使开发人员进行服务器端计算和数据管理。因此,当前可以观察到数据传输能力的瓶颈。 +在本系列的上一指南中,我们讨论了 [Hyperledger 项目(HLP)][2],这是一个由 Linux 基金会开发的增长最快的产品。在本指南中,我们将详细讨论什么是“以太坊Ethereum”及其功能。许多研究人员认为,互联网的未来将基于去中心化计算decentralized computing的原理。实际上,去中心化计算是互联网放在首位的更广泛目标之一。但是,由于可用的计算能力不同,互联网发生了转折。尽管现代服务器功能使得服务器端处理和执行成为可能,但在世界上大部分地区缺乏像样的移动网络使得客户端也是如此。现在,现代智能手机具有 SoC(片上系统),在客户端本身上也能够处理许多此类操作,但是,由于安全地检索和存储数据而受到的限制仍然迫使开发人员需要在服务器端进行计算和数据管理。因此,当前可以观察到数据传输能力方面存在瓶颈。 由于分布式数据存储和程序执行平台的进步,所有这些可能很快就会改变。[区块链][3]允许在分布式用户网络(而不是中央服务器)上进行安全的数据管理和程序执行,这在互联网历史上基本上是第一次。 以太坊就是一个这样的区块链平台,使开发人员可以访问用于在这样的去中心化网络上构建和运行应用程序的框架和工具。尽管它以其加密货币而广为人知,以太坊不只是以太币ether(加密货币)。这是一种完整的图灵完备Turing complete编程语言,旨在开发和部署 DApp(即分布式应用Distributed APPlication) [^1]。我们会在接下来的一篇文章中详细介绍 DApp。 -以太坊是开源的,默认情况下是一个公共(非许可)区块链,并具有一个大范围的智能合约平台底层(Solidity)。以太坊提供了一个称为“以太坊虚拟机(EVM)”的虚拟计算环境,以运行应用程序和[智能合约][4] [^2]。 以太坊虚拟机在世界各地成千上万个参与节点上运行,这意味着应用程序数据在保证安全的同时,几乎不可能被篡改或丢失。 +以太坊是开源的,默认情况下是一个公共(非许可)区块链,并具有一个大范围的智能合约平台底层(Solidity)。以太坊提供了一个称为“以太坊虚拟机Ethereum virtual machine(EVM)”的虚拟计算环境,以运行应用程序和[智能合约][4] [^2]。以太坊虚拟机运行在世界各地的成千上万个参与节点上,这意味着应用程序数据在保证安全的同时,几乎不可能被篡改或丢失。 ### 以太坊的背后:什么使之不同 -在 2017 年,为了推广以太坊区块链的功能的利用,30 多个技术和金融领域的名人聚集在一起。因此,“以太坊企业联盟Ethereum Enterprise Alliance”(EEA)由众多支持成员组成,包括微软、摩根大通、思科、德勤和埃森哲。摩根大通已经拥有 Quorum,这是一个基于以太坊的去中心化金融服务计算平台,目前正在运营中;而微软拥有通过其 Azure 云业务销售的基于以太坊的云服务[^3]。 +在 2017 年,为了推广对以太坊区块链的功能的利用,技术和金融领域的 30 多个团队汇聚一堂。因此,“以太坊企业联盟Ethereum Enterprise Alliance”(EEA)由众多支持成员组成,包括微软、摩根大通、思科、德勤和埃森哲。摩根大通已经拥有 Quorum,这是一个基于以太坊的去中心化金融服务计算平台,目前已经投入运行;而微软拥有基于以太坊的云服务,通过其 Azure 云业务销售 [^3]。 ### 什么是以太币,它和以太坊有什么关系 以太坊的创建者维塔利克·布特林Vitalik Buterin深谙去中心化处理平台的真正价值以及为比特币提供动力的底层区块链技术。他提议比特币应该开发以支持运行分布式应用程序(DApp)和程序(现在称为智能合约)的想法,未能获得多数同意。 -因此,他在 2013 年发表的白皮书中提出了以太坊的想法。原始白皮书仍在维护中,读者可从[此处][5]获得。这个想法是开发一个基于区块链的平台来运行智能合约和应用程序,这些合约和应用程序设计为在节点和用户设备而非服务器上运行。 +因此,他在 2013 年发表的白皮书中提出了以太坊的想法。原始白皮书仍然保留,[可供][5]读者阅读。其理念是开发一个基于区块链的平台来运行智能合约和应用程序,这些合约和应用程序设计为在节点和用户设备上运行,而非服务器上运行。 -以太坊系统经常被误认为就是加密货币以太币,但是,必须重申,以太坊是一个用于开发和执行应用程序的全栈平台,自成立以来一直如此,而比特币并非如此。**以太网目前是按市值计算的第二大加密货币**,在撰写本文时,其平均交易价格为每个以太币 170 美元 [^4]。 +以太坊系统经常被误认为就是加密货币以太币,但是,必须重申,以太坊是一个用于开发和执行应用程序的全栈平台,自成立以来一直如此,而比特币则不是。**以太网目前是按市值计算的第二大加密货币**,在撰写本文时,其平均交易价格为每个以太币 170 美元 [^4]。 ### 该平台的功能和技术特性 [^5] -* 正如我们已经提到的,称为以太币的加密货币只是该平台功能之一。该系统的目的不仅仅是处理金融交易。 实际上,以太坊平台和比特币之间的主要区别在于它们的脚本功能。以太坊是以图灵完备的编程语言开发的,这意味着它具有类似于其他主要编程语言的脚本和应用程序功能。开发人员需要此功能才能在平台上创建 DApp 和复杂的智能合约,而该功能是比特币缺失的。 +* 正如我们已经提到的,称为以太币的加密货币只是该平台功能之一。该系统的目的不仅仅是处理金融交易。 实际上,以太坊平台和比特币之间的主要区别在于它们的脚本能力。以太坊是以图灵完备的编程语言开发的,这意味着它具有类似于其他主要编程语言的脚本编程和应用程序功能。开发人员需要此功能才能在平台上创建 DApp 和复杂的智能合约,而该功能是比特币缺失的。 * 以太币的“挖矿”过程更加严格和复杂。尽管可以使用专用的 ASIC 来开采比特币,但以太坊使用的基本哈希算法(EThash)降低了 ASIC 在这方面的优势。 * 为激励矿工和节点运营者运行网络而支付的交易费用本身是使用称为 “燃料Gas”的计算令牌来计算的。通过要求交易的发起者支付与执行交易所需的计算资源数量成比例的以太币,燃料提高了系统的弹性以及对外部黑客和攻击的抵抗力。这与其他平台(例如比特币)相反,在该平台上,交易费用与交易规模一并衡量。因此,以太坊的平均交易成本从根本上低于比特币。这也意味着在以太坊虚拟机上运行的应用程序需要付费,具体取决于应用程序要解决的计算问题。基本上,执行越复杂,费用就越高。 * 以太坊的出块时间估计约为 10 - 15 秒。出块时间是在区块链网络上打时间戳和创建区块所需的平均时间。与将在比特币网络上进行同样的交易要花费 10 分钟以上的时间相比,很明显,就交易和区块验证而言,以太坊要快得多。 @@ -44,7 +44,7 @@ 尽管与以太坊相比,它远远超过了类似的平台,但在以太坊企业联盟开始推动之前,该平台本身尚缺乏明确的发展道路。虽然以太坊平台确实推动了企业发展,但必须注意,以太坊还可以满足小型开发商和个人的需求。 这样一来,为最终用户和企业开发的平台就为以太坊遗漏了许多特定功能。另外,以太坊基金会提出和开发的区块链模型是一种公共模型,而 Hyperledger 项目等项目提出的模型是私有的和需要许可的。 -虽然只有时间才能证明以太坊、Hyperledger 和 R3 
Corda 等平台中,哪一个平台会在现实场景中找到最多粉丝,但此类系统确实证明了以区块链为动力的未来主张的正确性。 +虽然只有时间才能证明以太坊、Hyperledger 和 R3 Corda 等平台中,哪一个平台会在现实场景中找到最多粉丝,但此类系统确实证明了以区块链为动力的未来主张背后的有效性。 [^1]: [Gabriel Nicholas, “Ethereum Is Coding’s New Wild West | WIRED,” Wired , 2017][6]. [^2]: [What is Ethereum? — Ethereum Homestead 0.1 documentation][7]. @@ -52,25 +52,23 @@ [^4]: [Cryptocurrency Market Capitalizations | CoinMarketCap][9]. [^5]: [Introduction — Ethereum Homestead 0.1 documentation][10]. - - -------------------------------------------------------------------------------- via: https://www.ostechnix.com/blockchain-2-0-what-is-ethereum/ -作者:[editor][a] +作者:[ostechnix][a] 选题:[lujun9972][b] 译者:[wxy](https://github.com/wxy) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: https://www.ostechnix.com/author/editor/ [b]: https://github.com/lujun9972 [1]: https://www.ostechnix.com/wp-content/uploads/2019/04/Ethereum-720x340.png -[2]: https://www.ostechnix.com/blockchain-2-0-an-introduction-to-hyperledger-project-hlp/ -[3]: https://www.ostechnix.com/blockchain-2-0-an-introduction/ -[4]: https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/ +[2]: https://linux.cn/article-11275-1.html +[3]: https://linux.cn/article-10650-1.html +[4]: https://linux.cn/article-10956-1.html [5]: https://github.com/ethereum/wiki/wiki/White-Paper [6]: https://www.wired.com/story/ethereum-is-codings-new-wild-west/ [7]: http://www.ethdocs.org/en/latest/introduction/what-is-ethereum.html#ethereum-virtual-machine diff --git a/published/20190524 Spell Checking Comments.md b/published/201909/20190524 Spell Checking Comments.md similarity index 100% rename from published/20190524 Spell Checking Comments.md rename to published/201909/20190524 Spell Checking Comments.md diff --git a/published/201909/20190528 A Quick Look at Elvish Shell.md b/published/201909/20190528 A Quick Look at Elvish Shell.md new file mode 100644 index 0000000000..9822423b08 --- /dev/null +++ b/published/201909/20190528 A Quick Look at Elvish Shell.md @@ -0,0 +1,104 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11393-1.html) +[#]: subject: (A Quick Look at Elvish Shell) +[#]: via: (https://itsfoss.com/elvish-shell/) +[#]: author: (John Paul https://itsfoss.com/author/john/) + +Elvish Shell 速览 +====== + +每个来到这里的人都会对许多系统中默认 Bash shell 有所了解(无论多少)。过去这些年已经有一些新的 shell 出现来解决 Bash 中的一些缺点。Elvish 就是其中之一,我们将在今天讨论它。 + +### 什么是 Elvish Shell? 
+ +![Pipelines In Elvish][1] + +[Elvish][2] 不仅仅是一个 shell。它[也是][3]“一种表达性编程语言”。它有许多有趣的特性,包括: + +* 它是由 Go 语言编写的 +* 内置文件管理器,灵感来自 [Ranger 文件管理器][4](`Ctrl + N`) +* 可搜索的命令历史记录(`Ctrl + R`) +* 访问的目录的历史记录(`Ctrl + L`) +* 支持结构化数据,例如列表、字典和函数的强大的管道 +* 包含“一组标准的控制结构:有 `if` 条件控制、`for` 和 `while` 循环,还有 `try` 的异常处理” +* 通过包管理器支持[第三方模块扩展 Elvish][5] +* BSD 两句版许可证 + +你肯定在喊,“为什么叫 Elvish?”。好吧,根据[他们的网站][6],他们之所以选择当前的名字,是因为: + +> 在 Roguelike 游戏中,精灵制造的物品质量很高。它们通常被称为“精灵物品”。但是之所以选择 “elvish” 是因为它以 “sh” 结尾,这是 Unix shell 的久远传统。这个与 fish 押韵,它是影响 Elvish 哲学的 shell 之一。 + +### 如何安装 Elvish Shell + +Elvish 在几种主流发行版中都有。 + +请注意,该软件还很年轻。最新版本是 0.12。根据该项目的 [GitHub 页面][3]:“尽管还处在 1.0 之前,但它已经适合大多数日常交互使用。” + +![Elvish Control Structures][7] + +#### Debian 和 Ubuntu + +Elvish 包已引入 Debian Buster 和 Ubuntu 17.10。不幸的是,这些包已经过时,你需要使用 [PPA][8] 安装最新版本。你需要使用以下命令: + +``` +sudo add-apt-repository ppa:zhsj/elvish +sudo apt update +sudo apt install elvish +``` + +#### Fedora + +Elvish 在 Fedora 的主仓库中没有。你需要添加 [FZUG 仓库][9]安装 Evlish。为此,你需要使用以下命令: + +``` +sudo dnf config-manager --add-repo=http://repo.fdzh.org/FZUG/FZUG.repol +sudo dnf install elvish +``` + +#### Arch + +Elvish 在 [Arch 用户仓库][10]中可用。 + +我相信你知道该[如何在 Linux 中更改 Shell][11],因此安装后可以切换到 Elvish 来使用它。 + +### 对 Elvish Shell 的想法 + +就个人而言,我没有理由在任何系统上安装 Elvish。我可以通过安装几个小的命令行程序或使用已经安装的程序来获得它的大多数功能。 + +例如,Bash 中已经存在“搜索历史命令”功能,并且效果很好。如果要提高历史命令的能力,我建议安装 [fzf][12]。`fzf` 使用模糊搜索,因此你无需记住要查找的确切命令。`fzf` 还允许你预览和打开文件。 + +我认为 Elvish 作为一种编程语言是不错的,但是我会坚持使用 Bash shell 脚本,直到 Elvish 变得更成熟。 + +你们都有用过 Elvish 么?你认为安装 Elvish 是否值得?你最喜欢的 Bash 替代品是什么?请在下面的评论中告诉我们。 + +如果你发现这篇文章有趣,请花一点时间在社交媒体、Hacker News 或 Reddit 上分享它。 + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/elvish-shell/ + +作者:[John Paul][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/john/ +[b]: https://github.com/lujun9972 +[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/pipelines-in-elvish.png?fit=800%2C421&ssl=1 +[2]: https://elv.sh/ +[3]: https://github.com/elves/elvish +[4]: https://ranger.github.io/ +[5]: https://github.com/elves/awesome-elvish +[6]: https://elv.sh/ref/name.html +[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/Elvish-control-structures.png?fit=800%2C425&ssl=1 +[8]: https://launchpad.net/%7Ezhsj/+archive/ubuntu/elvish +[9]: https://github.com/FZUG/repo/wiki/Add-FZUG-Repository +[10]: https://aur.archlinux.org/packages/elvish/ +[11]: https://linuxhandbook.com/change-shell-linux/ +[12]: https://github.com/junegunn/fzf +[13]: http://reddit.com/r/linuxusersgroup diff --git a/published/20190603 How many browser tabs do you usually have open.md b/published/201909/20190603 How many browser tabs do you usually have open.md similarity index 100% rename from published/20190603 How many browser tabs do you usually have open.md rename to published/201909/20190603 How many browser tabs do you usually have open.md diff --git a/published/20190603 How to stream music with GNOME Internet Radio.md b/published/201909/20190603 How to stream music with GNOME Internet Radio.md similarity index 100% rename from published/20190603 How to stream music with GNOME Internet Radio.md rename to published/201909/20190603 How to stream music with GNOME Internet Radio.md diff --git a/published/20190628 How to Install and Use R on Ubuntu.md b/published/201909/20190628 How to Install and Use R on Ubuntu.md similarity 
index 100% rename from published/20190628 How to Install and Use R on Ubuntu.md rename to published/201909/20190628 How to Install and Use R on Ubuntu.md diff --git a/published/20190701 Get modular with Python functions.md b/published/201909/20190701 Get modular with Python functions.md similarity index 100% rename from published/20190701 Get modular with Python functions.md rename to published/201909/20190701 Get modular with Python functions.md diff --git a/published/20190705 Learn object-oriented programming with Python.md b/published/201909/20190705 Learn object-oriented programming with Python.md similarity index 100% rename from published/20190705 Learn object-oriented programming with Python.md rename to published/201909/20190705 Learn object-oriented programming with Python.md diff --git a/published/20190730 How to manage logs in Linux.md b/published/201909/20190730 How to manage logs in Linux.md similarity index 100% rename from published/20190730 How to manage logs in Linux.md rename to published/201909/20190730 How to manage logs in Linux.md diff --git a/published/20190805 Is your enterprise software committing security malpractice.md b/published/201909/20190805 Is your enterprise software committing security malpractice.md similarity index 100% rename from published/20190805 Is your enterprise software committing security malpractice.md rename to published/201909/20190805 Is your enterprise software committing security malpractice.md diff --git a/published/20190810 How to Upgrade Linux Mint 19.1 (Tessa) to Linux Mint 19.2 (Tina).md b/published/201909/20190810 How to Upgrade Linux Mint 19.1 (Tessa) to Linux Mint 19.2 (Tina).md similarity index 100% rename from published/20190810 How to Upgrade Linux Mint 19.1 (Tessa) to Linux Mint 19.2 (Tina).md rename to published/201909/20190810 How to Upgrade Linux Mint 19.1 (Tessa) to Linux Mint 19.2 (Tina).md diff --git a/published/201909/20190812 Cloud-native Java, open source security, and more industry trends.md b/published/201909/20190812 Cloud-native Java, open source security, and more industry trends.md new file mode 100644 index 0000000000..116483d98a --- /dev/null +++ b/published/201909/20190812 Cloud-native Java, open source security, and more industry trends.md @@ -0,0 +1,100 @@ +[#]: collector: (lujun9972) +[#]: translator: (laingke) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11400-1.html) +[#]: subject: (Cloud-native Java, open source security, and more industry trends) +[#]: via: (https://opensource.com/article/19/8/cloud-native-java-and-more) +[#]: author: (Tim Hildred https://opensource.com/users/thildred) + +每周开源点评:云原生 Java、开源安全以及更多行业趋势 +====== + +> 开源社区和行业趋势的每周总览。 + +![Person standing in front of a giant computer screen with numbers, data][1] + +作为我在具有开源开发模型的企业软件公司担任高级产品营销经理的角色的一部分,我为产品营销人员、经理和其他影响者定期发布有关开源社区,市场和行业趋势的定期更新。 以下是该更新中我和他们最喜欢的五篇文章。 + +### 《为什么现代 web 开发如此复杂?》 + +- [文章地址][2] + +> 现代前端 web 开发带来了一种两极分化的体验:许多人喜欢它,而其他人则鄙视它。 +> +> 我是现代Web开发的忠实拥护者,尽管我将其描述为“魔法”——而魔法也有其优点和缺点……。最近,我一直在向那些只具有粗略的原始 web 开发工作流程的人们讲解“现代 web 开发工作流程”……,但我发现需要解释的内容实在是太多了!甚至笼统的解释最终都会变得冗长。因此,在我努力写下更多解释的过程中,这里是对 web 开发演变的一个长期而笼统的解释的开始…… + +**影响**:足够具体,对前端开发人员非常有用(特别是对新开发人员),且足够简单,解释得足够好,可以帮助非开发人员更好地理解前端开发人员的一些问题。到最后,你将(有点)了解 Javascript 和 WebAPI 之间的区别,以及 2019 年的 Javascript 与 2006 年的 Javascript 有何不同。 + +### 开源 Kubernetes 安全审计 + +- [文章链接][3] + +> 去年,云原生计算基金会(CNCF)开始为其项目执行并开源第三方安全审计,以提高我们生态系统的整体安全性。这个想法是从一些项目开始,并从 CNCF 社区收集了关于这个试点项目是否有用的反馈。第一批经历这个过程的项目是 [CoreDNS][4]、[Envoy][5] 和 
[Prometheus][6]。这些首次公开审计发现了从一般漏洞到严重漏洞的安全问题。有了这些结果,CoreDNS、Envoy 和 Prometheus 的项目维护者就能够解决已发现的漏洞,并添加文档来帮助用户。 +> +> 从这些初始审计中得出的主要结论是,公开安全审计是测试开源项目的质量及其漏洞管理过程的一个很好的方法,更重要的是,测试开源项目的安全实践有多大的弹性。特别是 CNCF 的[毕业项目][7],它们被世界上一些最大的公司广泛应用于生产中,它们必须坚持最高级别的安全最佳实践。 + +**影响**:就像 Linux 之于数据中心一样,很多公司都把云计算押宝在 Kubernetes 上。从安全的角度来看,看到其中 4 家公司以确保项目正在做应该做的事情,这激发了人们的信心。共享这项研究表明,开源远远不止是仓库中的代码;它是以一种有益于整个社区而不是少数人利益的方式获取和分享专家意见。 + +### Quarkus——这个轻量级 Java 框架的下一步是什么? + +- [文章链接][8] + +> “容器优先”是什么意思?Quarkus 有哪些优势?0.20.0 版本有什么新功能?未来我们可以期待哪些功能?1.0.0 版什么时候发布?我们对 Quarkus 有很多问题,而 Alex Soto 也很耐心地回答了所有问题。 随着 Quarkus 0.20.0 的发布,我们和 [JAX 伦敦演讲者][9],Java 拥护者和红帽的开发人员体验总监 Alex Soto 进行了接触。他很好地回答了我们关于 Quarkus 的过去、现在和未来的所有问题。看起来我们对这个令人兴奋的轻量级框架有很多期待! + +**影响**:最近有个聪明的人告诉我,Quarkus 有潜力使 Java “可能成为容器和无服务器环境的最佳语言之一”。不禁使我多看了一眼。尽管 Java 是最流行的编程语言之一([如果不是最流行的][10]),但当你听到“云原生”一词时,它可能并不是第一个想到的语言。Quarkus 可以通过让开发人员将他们的经验应用到新的挑战中,从而扩展和提高他们所拥有的技能的价值。 + +### Julia 编程语言:用户批露他们最喜欢和最讨厌它的地方 + +- [文章链接][11] + +> Julia 最受欢迎的技术特性是速度和性能,其次是易用性,而最受欢迎的非技术特性是使用者无需付费即可使用它。 +> +> 用户还报告了他们对该语言最大的不满。排在首位的是附加功能的包不够成熟,或者维护得不够好,无法满足他们的需求。 + +**影响**:Julia 1.0 版本已经发布了一年,并且在一系列相关指标(下载、GitHub 星级等)中取得了令人瞩目的增长。它是一种直接针对我们当前和未来最大挑战(“科学计算、机器学习、数据挖掘、大规模线性代数、分布式和并行计算”)的语言,因此,了解用户对它的感受,就可以间接看到有关这些挑战的应对情况。 + +### 多云数据解读:11 个有趣的统计数据 + +- [文章链接][12] + +> 如果你把我们最近对 [Kubernetes 的有趣数据][13]的深入研究归结最基本的一条,它看起来是这样的:[Kubernetes][14] 的受欢迎程度在可预见的未来将持续下去。 +> +> 剧透警报:当你挖掘有关[多云][15]使用情况的最新数据时,他们告诉你一个类似的描述:使用率正在飙升。 +> +> 这种一致性是有道理的。也许不是每个组织都将使用 Kubernetes 来管理其多云和/或[混合云][16]基础架构,但是两者越来越紧密地联系在一起。即使不这样做,它们都反映了向更分散和异构 IT 环境的普遍转变,以及[云原生开发][17]和其他重叠趋势。 + +**影响**:越来越多地采用“多云战略”的另一种解释是,它们将组织中单独部分未经协商而作出的决策追溯为“战略”,从而使决策合法化。“等等,所以你从谁那里买了几个小时?又从另一个人那里买了几个小时?为什么在会议纪要中没有呢?我想我们现在是一家多云公司!”。当然,我在开玩笑,我敢肯定大多数大公司的协调能力远胜于此,对吗? + +*我希望你喜欢这张上周让我印象深刻的列表,并在下周一回来了解更多的开放源码社区、市场和行业趋势。* + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/8/cloud-native-java-and-more + +作者:[Tim Hildred][a] +选题:[lujun9972][b] +译者:[laingke](https://github.com/laingke) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/thildred +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data) +[2]: https://www.vrk.dev/2019/07/11/why-is-modern-web-development-so-complicated-a-long-yet-hasty-explanation-part-1/ +[3]: https://www.cncf.io/blog/2019/08/06/open-sourcing-the-kubernetes-security-audit/ +[4]: https://coredns.io/2018/03/15/cure53-security-assessment/ +[5]: https://github.com/envoyproxy/envoy/blob/master/docs/SECURITY_AUDIT.pdf +[6]: https://cure53.de/pentest-report_prometheus.pdf +[7]: https://www.cncf.io/projects/ +[8]: https://jaxenter.com/quarkus-whats-next-for-the-lightweight-java-framework-160793.html +[9]: https://jaxlondon.com/cloud-kubernetes-serverless/java-particle-acceleration-using-quarkus/ +[10]: https://opensource.com/article/19/8/possibly%20one%20of%20the%20best%20languages%20for%20containers%20and%20serverless%20environments. 
+[11]: https://www.zdnet.com/article/julia-programming-language-users-reveal-what-they-love-and-hate-the-most-about-it/#ftag=RSSbaffb68
+[12]: https://enterprisersproject.com/article/2019/8/multi-cloud-statistics
+[13]: https://enterprisersproject.com/article/2019/7/kubernetes-statistics-13-compelling
+[14]: https://www.redhat.com/en/topics/containers/what-is-kubernetes?intcmp=701f2000000tjyaAAA
+[15]: https://www.redhat.com/en/topics/cloud-computing/what-is-multicloud?intcmp=701f2000000tjyaAAA
+[16]: https://enterprisersproject.com/hybrid-cloud
+[17]: https://enterprisersproject.com/article/2018/10/how-explain-cloud-native-apps-plain-english
diff --git a/published/20190812 Why const Doesn-t Make C Code Faster.md b/published/201909/20190812 Why const Doesn-t Make C Code Faster.md
similarity index 100%
rename from published/20190812 Why const Doesn-t Make C Code Faster.md
rename to published/201909/20190812 Why const Doesn-t Make C Code Faster.md
diff --git a/published/20190819 Moving files on Linux without mv.md b/published/201909/20190819 Moving files on Linux without mv.md
similarity index 100%
rename from published/20190819 Moving files on Linux without mv.md
rename to published/201909/20190819 Moving files on Linux without mv.md
diff --git a/published/20190821 Getting Started with Go on Fedora.md b/published/201909/20190821 Getting Started with Go on Fedora.md
similarity index 100%
rename from published/20190821 Getting Started with Go on Fedora.md
rename to published/201909/20190821 Getting Started with Go on Fedora.md
diff --git a/published/201909/20190822 How to move a file in Linux.md b/published/201909/20190822 How to move a file in Linux.md
new file mode 100644
index 0000000000..db2d2c4157
--- /dev/null
+++ b/published/201909/20190822 How to move a file in Linux.md
@@ -0,0 +1,269 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11380-1.html)
+[#]: subject: (How to move a file in Linux)
+[#]: via: (https://opensource.com/article/19/8/moving-files-linux-depth)
+[#]: author: (Seth Kenlon https://opensource.com/users/sethhttps://opensource.com/users/doni08521059)
+
+在 Linux 中如何移动文件
+======
+
+> 无论你是在 Linux 中移动文件的新手,还是已有丰富的经验,都可以从这篇深入的文章中学到一些东西。
+
+![](https://img.linux.net.cn/data/attachment/album/201909/24/162919ygppgeevgrj0ppgv.jpg)
+
+在 Linux 中移动文件看似比较简单,但是可用的选项却比大多数人想象的要多。本文介绍了初学者如何在 GUI 和命令行中移动文件,还解释了底层实际发生了什么,并讨论了许多连有一定经验的用户也很少接触的命令行选项。
+
+### 移动什么?
+
+在研究移动文件之前,有必要仔细研究*移动*文件系统对象时实际发生的情况。当文件创建后,会将其分配给一个索引节点inode,这是文件系统中用于数据存储的固定点。你可以使用 [ls][2] 命令看到文件对应的索引节点:
+
+```
+$ ls --inode example.txt
+7344977 example.txt
+```
+
+移动文件时,实际上并没有将数据从一个索引节点移动到另一个索引节点,只是给文件对象分配了新的名称或文件路径而已。实际上,文件在移动时会保留其权限,因为移动文件不会更改或重新创建文件。(LCTT 译注:在不跨卷、分区和存储器时,移动文件是不会重新创建文件的;反之亦然)
+
+文件和目录的索引节点号并不暗示这种父子关系,它们由文件系统本身决定。索引节点是按照文件创建的先后顺序分配的,与你组织计算机文件的方式完全无关。一个目录“内”的文件的索引节点号可能比其父目录的索引节点号更低或更高。例如:
+
+```
+$ mkdir foo
+$ mv example.txt foo
+$ ls --inode
+7476865 foo
+$ ls --inode foo
+7344977 example.txt
+```
+
+但是,将文件从一个硬盘驱动器移动到另一个硬盘驱动器时,索引节点基本上会更改。发生这种情况是因为必须将新数据写入新文件系统。因此,在 Linux 中,移动和重命名文件的操作实际上是相同的操作。无论你将文件移动到另一个目录还是在同一目录使用新名称,这两个操作均由同一个底层程序执行。
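+
+下面用一个假设的例子来直观地说明这一点(示意操作,并非原文内容;假设有一个挂载在 `/mnt/usb` 的 U 盘,路径与输出中的索引节点号均为虚构):
+
+```
+$ ls --inode example.txt
+7344977 example.txt
+$ mv example.txt /mnt/usb/
+$ ls --inode /mnt/usb/example.txt
+24 /mnt/usb/example.txt
+```
+
+可以看到,跨文件系统移动后,文件得到了一个全新的索引节点号。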
+
+本文重点介绍将文件从一个目录移动到另一个目录。
+
+### 用鼠标移动文件
+
+图形用户界面是大多数人都熟悉的友好的抽象层,位于复杂的二进制数据集合之上。这也是在 Linux 桌面上移动文件的首选方法,也是最直观的方法。从一般意义上来说,如果你习惯使用台式机,那么你可能已经知道如何在硬盘驱动器上移动文件。例如,在 GNOME 桌面上,将文件从一个窗口拖放到另一个窗口时的默认操作是移动文件而不是复制文件,因此这可能是该桌面上最直观的操作之一:
+
+![Moving a file in GNOME.][3]
+
+而 KDE Plasma 桌面中的 Dolphin 文件管理器默认情况下会提示用户选择要执行的操作。拖动文件时按住 `Shift` 键可强制执行移动操作:
+
+![Moving a file in KDE.][4]
+
+### 在命令行移动文件
+
+用于在 Linux、BSD、Illumos、Solaris 和 MacOS 上移动文件的 shell 命令是 `mv`。不言自明,简单的命令 `mv <源> <目标>` 会将源文件移动到指定的目标,源和目标都由[绝对][5]或[相对][6]文件路径定义。如前所述,`mv` 是 [POSIX][7] 用户的常用命令,其有很多不为人知的附加选项,因此,无论你是新手还是有经验的人,本文都会为你带来一些有用的选项。
+
+但是,不是所有 `mv` 命令都是由同一个人编写的,因此取决于你的操作系统,你可能拥有 GNU `mv`、BSD `mv` 或 Sun `mv`。命令的选项因其实现而异(BSD `mv` 根本没有长选项),因此请参阅你的 `mv` 手册页以查看支持的内容,或安装你的首选版本(这是开源的奢侈之处)。
+
+#### 移动文件
+
+要使用 `mv` 将文件从一个文件夹移动到另一个文件夹,请记住语法 `mv <源> <目标>`。例如,要将文件 `example.txt` 移到你的 `Documents` 目录中:
+
+```
+$ touch example.txt
+$ mv example.txt ~/Documents
+$ ls ~/Documents
+example.txt
+```
+
+就像你通过将文件拖放到文件夹图标上来移动文件一样,此命令不会将 `Documents` 替换为 `example.txt`。相反,`mv` 会检测到 `Documents` 是一个文件夹,并将 `example.txt` 文件放入其中。
+
+你还可以方便地在移动文件时重命名该文件:
+
+```
+$ touch example.txt
+$ mv example.txt ~/Documents/foo.txt
+$ ls ~/Documents
+foo.txt
+```
+
+重要的是,你也可以不把文件移动到其它位置,而仅仅是重命名它,例如:
+
+```
+$ touch example.txt
+$ mv example.txt foo2.txt
+$ ls foo2.txt
+```
+
+#### 移动目录
+
+不像 [cp][8] 命令,`mv` 命令处理文件和目录没有什么不同,你可以用同样的格式移动目录或文件:
+
+```
+$ touch file.txt
+$ mkdir foo_directory
+$ mv file.txt foo_directory
+$ mv foo_directory ~/Documents
+```
+
+#### 安全地移动文件
+
+如果你移动一个文件到一个已有同名文件的地方,默认情况下,`mv` 会用你移动的文件替换目标文件。这种行为被称为清除clobbering,有时候这就是你想要的结果,而有时则不是。
+
+一些发行版将 `mv` 别名定义为 `mv --interactive`(你也可以[自己写一个][9]),这会提醒你确认是否覆盖。而另外一些发行版没有这样做,那么你可以使用 `--interactive` 或 `-i` 选项来确保当两个文件有一样的名字而发生冲突时让 `mv` 请你来确认。
+
+```
+$ mv --interactive example.txt ~/Documents
+mv: overwrite '~/Documents/example.txt'?
+```
+
+如果你不想手动干预,那么可以使用 `--no-clobber` 或 `-n`。该选项会在发生冲突时静默拒绝移动操作。在这个例子当中,一个名为 `example.txt` 的文件已经存在于 `~/Documents`,所以它不会如命令要求从当前目录移走。
+
+```
+$ mv --no-clobber example.txt ~/Documents
+$ ls
+example.txt
+```
+
+#### 带备份的移动
+
+如果你使用 GNU `mv`,有一个备份选项提供了另外一种安全移动的方式。要为任何冲突的目标文件创建备份文件,可以使用 `-b` 选项。
+
+```
+$ mv -b example.txt ~/Documents
+$ ls ~/Documents
+example.txt    example.txt~
+```
+
+这个选项可以确保 `mv` 完成移动操作,但是也会保护目标位置的已有文件。
+
+另外的 GNU 备份选项是 `--backup`,它带有一个定义了备份文件如何命名的参数。
+
+* `existing`:如果在目标位置已经存在了编号备份文件,那么会创建编号备份。否则,会使用 `simple` 方式。
+* `none`:即使设置了 `--backup`,也不会创建备份。当 `mv` 被别名定义为带有备份选项时,这个选项可以覆盖这种行为。
+* `numbered`:给目标文件名附加一个编号。
+* `simple`:给目标文件附加一个 `~`,当你日常使用带有 `--ignore-backups` 选项的 [ls][2] 时,这些文件可以很方便地隐藏起来。
+
+简单来说:
+
+```
+$ mv --backup=numbered example.txt ~/Documents
+$ ls ~/Documents
+-rw-rw-r--. 1 seth users 128 Aug  1 17:23 example.txt
+-rw-rw-r--. 
1 seth users 128 Aug  1 17:20 example.txt.~1~ +``` + +可以使用环境变量 `VERSION_CONTROL` 设置默认的备份方案。你可以在 `~/.bashrc` 文件中设置该环境变量,也可以在命令前动态设置: + +``` +$ VERSION_CONTROL=numbered mv --backup example.txt ~/Documents +$ ls ~/Documents +-rw-rw-r--. 1 seth users 128 Aug  1 17:23 example.txt +-rw-rw-r--. 1 seth users 128 Aug  1 17:20 example.txt.~1~ +-rw-rw-r--. 1 seth users 128 Aug  1 17:22 example.txt.~2~ +``` + +`--backup` 选项仍然遵循 `--interactive` 或 `-i` 选项,因此即使它在执行备份之前创建了备份,它仍会提示你覆盖目标文件: + +``` +$ mv --backup=numbered example.txt ~/Documents +mv: overwrite '~/Documents/example.txt'? y +$ ls ~/Documents +-rw-rw-r--. 1 seth users 128 Aug  1 17:24 example.txt +-rw-rw-r--. 1 seth users 128 Aug  1 17:20 example.txt.~1~ +-rw-rw-r--. 1 seth users 128 Aug  1 17:22 example.txt.~2~ +-rw-rw-r--. 1 seth users 128 Aug  1 17:23 example.txt.~3~ +``` + +你可以使用 `--force` 或 `-f` 选项覆盖 `-i`。 + +``` +$ mv --backup=numbered --force example.txt ~/Documents +$ ls ~/Documents +-rw-rw-r--. 1 seth users 128 Aug  1 17:26 example.txt +-rw-rw-r--. 1 seth users 128 Aug  1 17:20 example.txt.~1~ +-rw-rw-r--. 1 seth users 128 Aug  1 17:22 example.txt.~2~ +-rw-rw-r--. 1 seth users 128 Aug  1 17:24 example.txt.~3~ +-rw-rw-r--. 1 seth users 128 Aug  1 17:25 example.txt.~4~ +``` + +`--backup` 选项在 BSD `mv` 中不可用。 + +#### 一次性移动多个文件 + +移动多个文件时,`mv` 会将最终目录视为目标: + +``` +$ mv foo bar baz ~/Documents +$ ls ~/Documents +foo   bar   baz +``` + +如果最后一个项目不是目录,则 `mv` 返回错误: + +``` +$ mv foo bar baz +mv: target 'baz' is not a directory +``` + +GNU `mv` 的语法相当灵活。如果无法把目标目录作为提供给 `mv` 命令的最终参数,请使用 `--target-directory` 或 `-t` 选项: + +``` +$ mv --target-directory=~/Documents foo bar baz +$ ls ~/Documents +foo   bar   baz +``` + +当从某些其他命令的输出构造 `mv` 命令时(例如 `find` 命令、`xargs` 或 [GNU Parallel][10]),这特别有用。 + +#### 基于修改时间移动 + +使用 GNU `mv`,你可以根据要移动的文件是否比要替换的目标文件新来定义移动动作。该方式可以通过 `--update` 或 `-u` 选项使用,在BSD `mv` 中不可用: + +``` +$ ls -l ~/Documents +-rw-rw-r--. 1 seth users 128 Aug  1 17:32 example.txt +$ ls -l +-rw-rw-r--. 1 seth users 128 Aug  1 17:42 example.txt +$ mv --update example.txt ~/Documents +$ ls -l ~/Documents +-rw-rw-r--. 1 seth users 128 Aug  1 17:42 example.txt +$ ls -l +``` + +此结果仅基于文件的修改时间,而不是两个文件的差异,因此请谨慎使用。只需使用 `touch` 命令即可愚弄 `mv`: + +``` +$ cat example.txt +one +$ cat ~/Documents/example.txt +one +two +$ touch example.txt +$ mv --update example.txt ~/Documents +$ cat ~/Documents/example.txt +one +``` + +显然,这不是最智能的更新功能,但是它提供了防止覆盖最新数据的基本保护。 + +### 移动 + +除了 `mv` 命令以外,还有更多的移动数据的方法,但是作为这项任务的默认程序,`mv` 是一个很好的通用选择。现在你知道了有哪些可以使用的选项,可以比以前更智能地使用 `mv` 了。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/8/moving-files-linux-depth + +作者:[Seth Kenlon][a] +选题:[lujun9972][b] +译者:[wxy](https://github.com/wxy) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/sethhttps://opensource.com/users/doni08521059 +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/files_documents_paper_folder.png?itok=eIJWac15 (Files in a folder) +[2]: https://opensource.com/article/19/7/master-ls-command +[3]: https://opensource.com/sites/default/files/uploads/gnome-mv.jpg (Moving a file in GNOME.) +[4]: https://opensource.com/sites/default/files/uploads/kde-mv.jpg (Moving a file in KDE.) 
+[5]: https://opensource.com/article/19/7/understanding-file-paths-and-how-use-them +[6]: https://opensource.com/article/19/7/navigating-filesystem-relative-paths +[7]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains +[8]: https://opensource.com/article/19/7/copying-files-linux +[9]: https://opensource.com/article/19/7/bash-aliases +[10]: https://opensource.com/article/18/5/gnu-parallel diff --git a/published/20190823 How To Check Your IP Address in Ubuntu -Beginner-s Tip.md b/published/201909/20190823 How To Check Your IP Address in Ubuntu -Beginner-s Tip.md similarity index 100% rename from published/20190823 How To Check Your IP Address in Ubuntu -Beginner-s Tip.md rename to published/201909/20190823 How To Check Your IP Address in Ubuntu -Beginner-s Tip.md diff --git a/published/20190823 The Linux kernel- Top 5 innovations.md b/published/201909/20190823 The Linux kernel- Top 5 innovations.md similarity index 100% rename from published/20190823 The Linux kernel- Top 5 innovations.md rename to published/201909/20190823 The Linux kernel- Top 5 innovations.md diff --git a/published/20190825 Top 5 IoT networking security mistakes.md b/published/201909/20190825 Top 5 IoT networking security mistakes.md similarity index 100% rename from published/20190825 Top 5 IoT networking security mistakes.md rename to published/201909/20190825 Top 5 IoT networking security mistakes.md diff --git a/published/20190826 5 ops tasks to do with Ansible.md b/published/201909/20190826 5 ops tasks to do with Ansible.md similarity index 100% rename from published/20190826 5 ops tasks to do with Ansible.md rename to published/201909/20190826 5 ops tasks to do with Ansible.md diff --git a/published/20190826 How to rename a group of files on Linux.md b/published/201909/20190826 How to rename a group of files on Linux.md similarity index 100% rename from published/20190826 How to rename a group of files on Linux.md rename to published/201909/20190826 How to rename a group of files on Linux.md diff --git a/published/20190828 Managing Ansible environments on MacOS with Conda.md b/published/201909/20190828 Managing Ansible environments on MacOS with Conda.md similarity index 100% rename from published/20190828 Managing Ansible environments on MacOS with Conda.md rename to published/201909/20190828 Managing Ansible environments on MacOS with Conda.md diff --git a/published/20190829 Getting started with HTTPie for API testing.md b/published/201909/20190829 Getting started with HTTPie for API testing.md similarity index 100% rename from published/20190829 Getting started with HTTPie for API testing.md rename to published/201909/20190829 Getting started with HTTPie for API testing.md diff --git a/published/20190829 Three Ways to Exclude Specific-Certain Packages from Yum Update.md b/published/201909/20190829 Three Ways to Exclude Specific-Certain Packages from Yum Update.md similarity index 100% rename from published/20190829 Three Ways to Exclude Specific-Certain Packages from Yum Update.md rename to published/201909/20190829 Three Ways to Exclude Specific-Certain Packages from Yum Update.md diff --git a/published/20190830 Change your Linux terminal color theme.md b/published/201909/20190830 Change your Linux terminal color theme.md similarity index 100% rename from published/20190830 Change your Linux terminal color theme.md rename to published/201909/20190830 Change your Linux terminal color theme.md diff --git a/published/20190830 How to Create and Use Swap File on Linux.md 
b/published/201909/20190830 How to Create and Use Swap File on Linux.md similarity index 100% rename from published/20190830 How to Create and Use Swap File on Linux.md rename to published/201909/20190830 How to Create and Use Swap File on Linux.md diff --git a/published/20190830 git exercises- navigate a repository.md b/published/201909/20190830 git exercises- navigate a repository.md similarity index 100% rename from published/20190830 git exercises- navigate a repository.md rename to published/201909/20190830 git exercises- navigate a repository.md diff --git a/published/20190831 Google opens Android speech transcription and gesture tracking, Twitter-s telemetry tooling, Blender-s growing adoption, and more news.md b/published/201909/20190831 Google opens Android speech transcription and gesture tracking, Twitter-s telemetry tooling, Blender-s growing adoption, and more news.md similarity index 100% rename from published/20190831 Google opens Android speech transcription and gesture tracking, Twitter-s telemetry tooling, Blender-s growing adoption, and more news.md rename to published/201909/20190831 Google opens Android speech transcription and gesture tracking, Twitter-s telemetry tooling, Blender-s growing adoption, and more news.md diff --git a/published/201909/20190901 Best Linux Distributions For Everyone in 2019.md b/published/201909/20190901 Best Linux Distributions For Everyone in 2019.md new file mode 100644 index 0000000000..4a6e136180 --- /dev/null +++ b/published/201909/20190901 Best Linux Distributions For Everyone in 2019.md @@ -0,0 +1,386 @@ +[#]: collector: (lujun9972) +[#]: translator: (heguangzhi) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11411-1.html) +[#]: subject: (Best Linux Distributions For Everyone in 2019) +[#]: via: (https://itsfoss.com/best-linux-distributions/) +[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) + +2019 年最好的 Linux 发行版 +====== + +> 哪个是最好的 Linux 发行版呢?这个问题是没有明确的答案的。这就是为什么我们按各种分类汇编了这个最佳 Linux 列表的原因。 + +有许多 Linux 发行版,我甚至想不出一个确切的数量,因为你会发现很多不同的 Linux 发行版。 + +其中有些只是另外一个的复制品,而有些往往是独一无二的。这虽然有点混乱——但这也是 Linux 的优点。 + +不用担心,尽管有成千上万的发行版,在这篇文章中,我已经列出了目前最好的 Linux 发行版。当然,这个列表是主观的。但是,在这里,我们试图对发行版进行分类——每个发行版本都有自己的特点的。 + +* 面向初学者的 Linux 用户的最佳发行版 +* 最佳 Linux 服务器发行版 +* 可以在旧计算机上运行的最佳 Linux 发行版 +* 面向高级 Linux 用户的最佳发行版 +* 最佳常青树 Linux 发行版 + +**注:** 该列表没有特定的排名顺序。 + +### 面向初学者的最佳 Linux 发行版 + +在这个分类中,我们的目标是列出开箱即用的易用发行版。你不需要深度学习,你可以在安装后马上开始使用,不需要知道任何命令或技巧。 + +#### Ubuntu + +![][6] + +Ubuntu 无疑是最流行的 Linux 发行版之一。你甚至可以发现它已经预装在很多笔记本电脑上了。 + +用户界面很容易适应。如果你愿意,你可以根据自己的要求轻松定制它的外观。无论哪种情况,你都可以选择安装一个主题。你可以从了解更多关于[如何在 Ubuntu 安装主题的][7]的信息来起步。 + +除了它本身提供的功能外,你会发现一个巨大的 Ubuntu 用户在线社区。因此,如果你有问题——可以去任何论坛(或版块)寻求帮助。如果你想直接寻找解决方案,你应该看看我们对 [Ubuntu][8] 的报道(我们有很多关于 Ubuntu 的教程和建议)。 + +- [Ubuntu][9] + +#### Linux Mint + +![][10] + +Linux Mint Cinnamon 是另一个受初学者欢迎的 Linux 发行版。默认的 Cinnamon 桌面类似于 Windows XP,这就是为什么当 Windows XP 停止维护时许多用户选择它的原因。 + +Linux Mint 基于 Ubuntu,因此它具有适用于 Ubuntu 的所有应用程序。简单易用是它成为 Linux 新用户首选的原因。 + +- [Linux Mint][11] + +#### elementary OS + +![][12] + +elementary OS 是我用过的最漂亮的 Linux 发行版之一。用户界面类似于苹果操作系统——所以如果你已经使用了苹果系统,则很容易适应。 + +该发行版基于 Ubuntu,致力于提供一个用户友好的 Linux 环境,该环境在考虑性能的同时尽可能美观。如果你选择安装 elementary OS,这份[在安装 elementary OS 后要做的 11 件事的清单][13]会派上用场。 + +- [elementary OS][14] + +#### MX Linux + +![][15] + +大约一年前,MX Linux 成为众人瞩目的焦点。现在(在发表这篇文章的时候),它是 [DistroWatch.com][16] 上最受欢迎的 Linux 发行版。如果你还没有使用过它,那么当你开始使用它时,你会感到惊讶。 + +与 Ubuntu 不同,MX Linux 是一个基于 Debian 的日益流行的发行版,采用 Xfce 作为其桌面环境。除了无与伦比的稳定性之外,它还配备了许多图形用户界面工具,这使得任何习惯了 Windows/Mac 的用户易于使用它。 + 
+此外,软件包管理器还专门针对一键安装进行了量身定制。你甚至可以搜索 [Flatpak][18] 软件包并立即安装它(默认情况下,Flathub 在软件包管理器中是可用的来源之一)。 + +- [MX Linux][19] + +#### Zorin OS + +![][20] + +Zorin OS 是又一个基于 Ubuntu 的发行版,它又是桌面上最漂亮、最直观的操作系统之一。尤其是在[Zorin OS 15 发布][21]之后——我绝对会向没有任何 Linux 经验的用户推荐它。它也引入了许多基于图形用户界面的应用程序。 + +你也可以将其安装在旧电脑上,但是,请确保选择“Lite”版本。此外,你还有“Core”、“Education”和 “Ultimate”版本可以选择。你可以选择免费安装 Core 版,但是如果你想支持开发人员并帮助改进 Zorin,请考虑获得 Ultimate 版。 + +Zorin OS 是由两名爱尔兰的青少年创建的。你可以[在这里阅读他们的故事][22]。 + +- [Zorin OS][23] + +#### Pop!_OS + +![](https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/04/pop-1.jpg?w=800&ssl=1) + +Sytem76 的 Pop!_OS 是开发人员或计算机科学专业人员的理想选择。当然,不仅限于编码人员,如果你刚开始使用 Linux,这也是一个很好的选择。它基于 Ubuntu,但是其 UI 感觉更加直观和流畅。除了 UI 外,它还强制执行全盘加密。 + +你可以通过文章下面的评论看到,我们的许多读者似乎都喜欢(并坚持使用)它。如果你对此感到好奇,也应该查看一下我们关于 Phillip Prado 的 [Pop!_OS 的动手实践](https://itsfoss.com/pop-os-linux-review/)的文章。 + +(LCTT 译注:这段推荐是原文后来补充的,因为原文下面很多人在评论推荐。) + +- [Pop!_OS](https://system76.com/pop) + +#### 其他选择 + +[深度操作系统][24] 和其他的 Ubuntu 变种(如 Kubuntu、Xubuntu)也是初学者的首选。如果你想寻求更多的选择,你可以看看。(LCTT 译注:我知道你们肯定对将深度操作系统列入其它不满意——这个锅归原作者。) + +如果你想要挑战自己,你可以试试 Ubuntu 之外的 Fedora —— 但是一定要看看我们关于 [Ubuntu 和 Fedora 对比][25]的文章,从桌面的角度做出更好的选择。 + +### 最好的服务器发行版 + +对于服务器来说,选择 Linux 发行版取决于稳定性、性能和企业级支持。如果你只是尝试,则可以尝试任何你想要的发行版。 + +但是,如果你要为 Web 服务器或任何重要的组件安装它,你应该看看我们的一些建议。 + +#### Ubuntu 服务器 + +根据你的需要,Ubuntu 为你的服务器提供了不同的选项。如果你正在寻找运行在 AWS、Azure、谷歌云平台等平台上的优化解决方案,[Ubuntu Cloud][26] 是一个很好的选择。 + +无论是哪种情况,你都可以选择 Ubuntu 服务器包,并将其安装在你的服务器上。然而,Ubuntu 在云上部署时也是最受欢迎的 Linux 发行版(根据数字判断——[来源1][27]、[来源2][28])。 + +请注意,除非你有特殊要求,我们建议你选择 LTS 版。 + +- [Ubuntu Server][29] + +#### 红帽企业版 Linux(RHEL) + +红帽企业版 Linux(RHEL)是面向企业和组织的顶级 Linux 平台。如果我们按数字来看,红帽可能不是服务器领域最受欢迎的。但是,有相当一部分企业用户依赖于 RHEL (比如联想)。 + +从技术上讲,Fedora 和红帽企业版是相关联的。无论红帽要支持什么——在出现在 RHEL 之前,都要在 Fedora 上进行测试。我不是定制需求的服务器发行版专家,所以你一定要查看他们的[官方文档][30]以了解它是否适合你。 + +- [RHEL][31] + +#### SUSE Linux 企业服务器(SLES) + +![][32] + +别担心,不要把这和 OpenSUSE 混淆。一切都以一个共同的品牌 “SUSE” 命名 —— 但是 OpenSUSE 是一个开源发行版,目标是社区,并且由社区维护。 + +SUSE Linux 企业服务器(SLES)是基于云的服务器最受欢迎的解决方案之一。为了获得管理开源解决方案的优先支持和帮助,你必须选择订阅。 + +- [SLES][33] + +#### CentOS + +![][34] + +正如我提到的,对于 RHEL 你需要订阅。而 CentOS 更像是 RHEL 的社区版,因为它是从 RHEL 的源代码中派生出来的。而且,它是开源的,也是免费的。尽管与过去几年相比,使用 CentOS 的托管提供商数量明显减少,但这仍然是一个很好的选择。 + +CentOS 可能没有加载最新的软件包,但它被认为是最稳定的发行版之一,你可以在各种云平台上找到 CentOS 镜像。如果没有,你可以选择 CentOS 提供的自托管镜像。 + +- [CentOS][35] + +#### 其他选择 + +你也可以尝试 [Fedora Server][36]或[Debian][37]作为上述发行版的替代品。 + +### 旧电脑的最佳 Linux 发行版 + +如果你有一台旧电脑,或者你真的不需要升级你的系统,你仍然可以尝试一些最好的 Linux 发行版。 + +我们已经详细讨论了一些[最好的轻量级 Linux 发行版][42]。在这里,我们将只提到那些真正突出的东西(以及一些新的补充)。 + +#### Puppy Linux + +![][43] + +Puppy Linux 实际上是最小的发行版本之一。刚开始使用 Linux 时,我的朋友建议我尝试一下 Puppy Linux,因为它可以轻松地在较旧的硬件配置上运行。 + +如果你想在你的旧电脑上享受一次爽快的体验,那就值得去看看。多年来,随着一些新的有用特性的增加,用户体验得到了改善。 + +- [Puppy Linux][44] + +#### Solus Budgie + +![][45] + +在最近的一个主要版本——[Solus 4 Fortitude][46] 之后,它是一个令人印象深刻的轻量级桌面操作系统。你可以选择像 GNOME 或 MATE 这样的桌面环境。然而,Solus Budgie 恰好是我的最爱之一,它是一款适合初学者的功能齐全的 Linux发行版,同时对系统资源要求很少。 + +- [Solus][47] + +#### Bodhi + +![][48] + +Bodhi Linux 构建于 Ubuntu 之上。然而,与Ubuntu不同,它在较旧的配置上运行良好。 + +这个发行版的主要亮点是它的 [Moksha 桌面][49](这是 Enlightenment 17 桌面的延续)。用户体验直观且反应极快。即使我个人不用它,你也应该在你的旧系统上试一试。 + +- [Bodhi Linux][50] + +#### antiX + +![][51] + +antiX 部分担起了 MX Linux 的责任,它是一个轻量级的 Linux 发行版,为新的或旧的计算机量身定制。其用户界面并不令人印象深刻——但它可以像预期的那样工作。 + +它基于 Debian,可以作为一个现场版 CD 发行版使用,而不需要安装它。antiX 还提供现场版引导加载程序。与其他发行版相比,你可以保存设置,这样就不会在每次重新启动时丢失设置。不仅如此,你还可以通过其“持久保留”功能将更改保存到根目录中。 + +因此,如果你正在寻找一个可以在旧硬件上提供快速用户体验的现场版 USB 发行版,antiX 是一个不错的选择。 + +- [antiX][52] + +#### Sparky Linux + +![][53] + +Sparky Linux 基于 Debian,它是理想的低端系统 Linux 发行版。伴随着超快的用户体验,Sparky Linux 
为不同的用户提供了几个特殊版本(或变种)。 + +例如,它提供了针对一组用户的稳定版本(和变种)和滚动版本。Sparky Linux GameOver 版非常受游戏玩家欢迎,因为它包含了一堆预装的游戏。你可以查看我们的[最佳 Linux 游戏发行版][54] —— 如果你也想在你的系统上玩游戏。 + +#### 其他选择 + +你也可以尝试 [Linux Lite][55]、[Lubuntu][56]、[Peppermint][57] 等轻量级 Linux 发行版。 + +### 面向高级用户的最佳 Linux 发行版 + +一旦你习惯了各种软件包管理器和命令来帮助你解决任何问题,你就可以开始找寻只为高级用户量身定制的 Linux 发行版。 + +当然,如果你是专业人士,你会有一套具体的要求。然而,如果你已经作为普通用户使用了一段时间——以下发行版值得一试。 + +#### Arch Linux + +![][58] + +Arch Linux 本身是一个简单而强大的发行版,具有陡峭的学习曲线。不像其系统,你不会一次就把所有东西都预先安装好。你必须配置系统并根据需要添加软件包。 + +此外,在安装 Arch Linux 时,必须按照一组命令来进行(没有图形用户界面)。要了解更多信息,你可以按照我们关于[如何安装 Arch Linux][59] 的指南进行操作。如果你要安装它,你还应该知道在[安装 Arch Linux 后需要做的一些基本事情][60]。这会帮助你快速入门。 + +除了多才多艺和简便性之外,值得一提的是 Arch Linux 背后的社区非常活跃。所以,如果你遇到问题,你不用担心。 + +- [Arch Linux][61] + +#### Gentoo + +![][62] + +如果你知道如何编译源代码,Gentoo Linux 是你必须尝试的版本。这也是一个轻量级的发行版,但是,你需要具备必要的技术知识才能使它发挥作用。 + +当然,[官方手册][63]提供了许多你需要知道的信息。但是,如果你不确定自己在做什么——你需要花很多时间去想如何充分利用它。 + +- [Gentoo Linux][64] + +#### Slackware + +![][65] + +Slackware 是仍然重要的最古老的 Linux 发行版之一。如果你愿意编译或开发软件来为自己建立一个完美的环境 —— Slackware 是一个不错的选择。 + +如果你对一些最古老的 Linux 发行版感到好奇,我们有一篇关于[最早的 Linux 发行版][66]可以去看看。 + +尽管使用它的用户/开发人员的数量已经显著减少,但对于高级用户来说,它仍然是一个极好的选择。此外,最近有个新闻是 [Slackware 有了一个 Patreon 捐赠页面][67],我们希望 Slackware 继续作为最好的 Linux 发行版之一存在。 + +- [Slackware][68] + +### 最佳多用途 Linux 发行版 + +有些 Linux 发行版既可以作为初学者友好的桌面又可以作为高级操作系统的服务器。因此,我们考虑为这样的发行版编辑一个单独的部分。 + +如果你不同意我们的观点(或者有建议要补充),请在评论中告诉我们。我们认为,这对于每个用户都可以派上用场: + +#### Fedora + +![][69] + +Fedora 提供两个独立的版本:一个用于台式机/笔记本电脑(Fedora 工作站),另一个用于服务器(Fedora 服务器)。 + +因此,如果你正在寻找一款时髦的桌面操作系统,有点学习曲线,又对用户友好,那么 Fedora 是一个选择。无论是哪种情况,如果你正在为你的服务器寻找一个 Linux 操作系统,这也是一个不错的选择。 + +- [Fedora][70] + +#### Manjaro + +![][71] + +Manjaro 基于 [Arch Linux][72]。不用担心,虽然 Arch Linux 是为高级用户量身定制的,但Manjaro 让新手更容易上手。这是一个简单且对初学者友好的 Linux 发行版。用户界面足够好,并且内置了一系列有用的图形用户界面应用程序。 + +下载时,你可以为 Manjaro 选择[桌面环境][73]。就个人而言,我喜欢 Manjaro 的 KDE 桌面。 + +- [Manjaro Linux][74] + +#### Debian + +![][75] + +嗯,Ubuntu 是基于 Debian 的——所以它本身是一个非常好的发行版本。Debian 是台式机和服务器的理想选择。 + +这可能不是对初学者最友好的操作系统——但你可以通过阅读[官方文档][76]轻松开始。[Debian 10 Buster][77] 的最新版本引入了许多变化和必要的改进。所以,你必须试一试! 
+
+### 总结
+
+总的来说,这些是我们推荐你去尝试的最好的 Linux 发行版。是的,还有许多其他的 Linux 发行版值得一提,但是对每个发行版来说,这种选择是主观的,取决于个人喜好。
+
+但是,我们也为 [Windows 用户][78]、[黑客和脆弱性测试人员][41]、[游戏玩家][54]、[程序员][39]和[偏重隐私者][79]提供了单独的发行版列表。所以,如果你感兴趣的话,请仔细阅读。
+
+如果你认为我们遗漏了你最喜欢的 Linux 发行版,请在下面的评论中告诉我们你的想法,我们将更新这篇文章。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/best-linux-distributions/
+
+作者:[Ankush Das][a]
+选题:[lujun9972][b]
+译者:[heguangzhi](https://github.com/heguangzhi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/ankush/
+[b]: https://github.com/lujun9972
+[1]: tmp.NoRXbIWHkg#for-beginners
+[2]: tmp.NoRXbIWHkg#for-servers
+[3]: tmp.NoRXbIWHkg#for-old-computers
+[4]: tmp.NoRXbIWHkg#for-advanced-users
+[5]: tmp.NoRXbIWHkg#general-purpose
+[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/install-google-chrome-ubuntu-10.jpg?ssl=1
+[7]: https://itsfoss.com/install-themes-ubuntu/
+[8]: https://itsfoss.com/tag/ubuntu/
+[9]: https://ubuntu.com/download/desktop
+[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/linux-Mint-19-desktop.jpg?ssl=1
+[11]: https://www.linuxmint.com/
+[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/elementary-os-juno-feat.jpg?ssl=1
+[13]: https://itsfoss.com/things-to-do-after-installing-elementary-os-5-juno/
+[14]: https://elementary.io/
+[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/mx-linux.jpg?ssl=1
+[16]: https://distrowatch.com/
+[17]: https://en.wikipedia.org/wiki/Linux_distribution#Rolling_distributions
+[18]: https://flatpak.org/
+[19]: https://mxlinux.org/
+[20]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/zorin-os-15.png?ssl=1
+[21]: https://itsfoss.com/zorin-os-15-release/
+[22]: https://itsfoss.com/zorin-os-interview/
+[23]: https://zorinos.com/
+[24]: https://www.deepin.org/en/
+[25]: https://itsfoss.com/ubuntu-vs-fedora/
+[26]: https://ubuntu.com/download/cloud
+[27]: https://w3techs.com/technologies/details/os-linux/all/all
+[28]: https://thecloudmarket.com/stats
+[29]: https://ubuntu.com/download/server
+[30]: https://developers.redhat.com/products/rhel/docs-and-apis
+[31]: https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux
+[32]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/SUSE-Linux-Enterprise.jpg?ssl=1
+[33]: https://www.suse.com/products/server/
+[34]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/centos.png?ssl=1
+[35]: https://www.centos.org/
+[36]: https://getfedora.org/en/server/
+[37]: https://www.debian.org/distrib/
+[38]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/coding.jpg?ssl=1
+[39]: https://itsfoss.com/best-linux-distributions-progammers/
+[40]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/hacking.jpg?ssl=1
+[41]: https://itsfoss.com/linux-hacking-penetration-testing/
+[42]: https://itsfoss.com/lightweight-linux-beginners/
+[43]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/puppy-linux-bionic.jpg?ssl=1
+[44]: http://puppylinux.com/
+[45]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/solus-4-featured.jpg?resize=800%2C450&ssl=1
+[46]: https://itsfoss.com/solus-4-release/
+[47]: https://getsol.us/home/
+[48]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/bodhi-linux.png?fit=800%2C436&ssl=1
+[49]: http://www.bodhilinux.com/moksha-desktop/
+[50]: http://www.bodhilinux.com/
+[51]: 
https://i2.wp.com/itsfoss.com/wp-content/uploads/2017/10/antix-linux-screenshot.jpg?ssl=1 +[52]: https://antixlinux.com/ +[53]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/sparky-linux.jpg?ssl=1 +[54]: https://itsfoss.com/linux-gaming-distributions/ +[55]: https://www.linuxliteos.com/ +[56]: https://lubuntu.me/ +[57]: https://peppermintos.com/ +[58]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/arch_linux_screenshot.jpg?ssl=1 +[59]: https://itsfoss.com/install-arch-linux/ +[60]: https://itsfoss.com/things-to-do-after-installing-arch-linux/ +[61]: https://www.archlinux.org +[62]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/gentoo-linux.png?ssl=1 +[63]: https://wiki.gentoo.org/wiki/Handbook:Main_Page +[64]: https://www.gentoo.org +[65]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/slackware-screenshot.jpg?ssl=1 +[66]: https://itsfoss.com/earliest-linux-distros/ +[67]: https://distrowatch.com/dwres.php?resource=showheadline&story=8743 +[68]: http://www.slackware.com/ +[69]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/fedora-overview.png?ssl=1 +[70]: https://getfedora.org/ +[71]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/manjaro-gnome.jpg?ssl=1 +[72]: https://www.archlinux.org/ +[73]: https://itsfoss.com/glossary/desktop-environment/ +[74]: https://manjaro.org/ +[75]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/debian-screenshot.png?ssl=1 +[76]: https://www.debian.org/releases/stable/installmanual +[77]: https://itsfoss.com/debian-10-buster/ +[78]: https://itsfoss.com/windows-like-linux-distributions/ +[79]: https://itsfoss.com/privacy-focused-linux-distributions/ diff --git a/published/201909/20190901 Different Ways to Configure Static IP Address in RHEL 8.md b/published/201909/20190901 Different Ways to Configure Static IP Address in RHEL 8.md new file mode 100644 index 0000000000..d67e035961 --- /dev/null +++ b/published/201909/20190901 Different Ways to Configure Static IP Address in RHEL 8.md @@ -0,0 +1,237 @@ +[#]: collector: (lujun9972) +[#]: translator: (heguangzhi) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11390-1.html) +[#]: subject: (Different Ways to Configure Static IP Address in RHEL 8) +[#]: via: (https://www.linuxtechi.com/configure-static-ip-address-rhel8/) +[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/) + +在 RHEL8 配置静态 IP 地址的不同方法 +====== + +在 Linux 服务器上工作时,在网卡/以太网卡上分配静态 IP 地址是每个 Linux 工程师的常见任务之一。如果一个人在 Linux 服务器上正确配置了静态地址,那么他/她就可以通过网络远程访问它。在本文中,我们将演示在 RHEL 8 服务器网卡上配置静态 IP 地址的不同方法。 + +![](https://img.linux.net.cn/data/attachment/album/201909/25/222737dx94bbl9qbhzlfe4.jpg) + +以下是在网卡上配置静态IP的方法: + + * `nmcli`(命令行工具) + * 网络脚本文件(`ifcfg-*`) + * `nmtui`(基于文本的用户界面) + +### 使用 nmcli 命令行工具配置静态 IP 地址 + +每当我们安装 RHEL 8 服务器时,就会自动安装命令行工具 `nmcli`,它是由网络管理器使用的,可以让我们在以太网卡上配置静态 IP 地址。 + +运行下面的 `ip addr` 命令,列出 RHEL 8 服务器上的以太网卡 + +``` +[root@linuxtechi ~]# ip addr +``` + +正如我们在上面的命令输出中看到的,我们有两个网卡 `enp0s3` 和 `enp0s8`。当前分配给网卡的 IP 地址是通过 DHCP 服务器获得的。 + +假设我们希望在第一个网卡 (`enp0s3`) 上分配静态 IP 地址,具体内容如下: + + * IP 地址 = 192.168.1.4 + * 网络掩码 = 255.255.255.0 + * 网关 = 192.168.1.1 + * DNS = 8.8.8.8 + +依次运行以下 `nmcli` 命令来配置静态 IP, + +使用 `nmcli connection` 命令列出当前活动的以太网卡, + +``` +[root@linuxtechi ~]# nmcli connection +NAME UUID TYPE DEVICE +enp0s3 7c1b8444-cb65-440d-9bf6-ea0ad5e60bae ethernet enp0s3 +virbr0 3020c41f-6b21-4d80-a1a6-7c1bd5867e6c bridge virbr0 +[root@linuxtechi ~]# +``` + +使用下面的 `nmcli` 给 `enp0s3` 分配静态 IP。 + +**命令语法:** + +``` +# nmcli connection modify 
<interface-name> ipv4.address <ip-address/prefix>
+```
+
+**注意:** 为了简化语句,在 `nmcli` 命令中,我们通常用 `con` 关键字替换 `connection`,并用 `mod` 关键字替换 `modify`。
+
+将 IPv4 地址(192.168.1.4)分配到 `enp0s3` 网卡上,
+
+```
+[root@linuxtechi ~]# nmcli con mod enp0s3 ipv4.addresses 192.168.1.4/24
+```
+
+使用下面的 `nmcli` 命令设置网关,
+
+```
+[root@linuxtechi ~]# nmcli con mod enp0s3 ipv4.gateway 192.168.1.1
+```
+
+设置手动配置(从 dhcp 到 static),
+
+```
+[root@linuxtechi ~]# nmcli con mod enp0s3 ipv4.method manual
+```
+
+设置 DNS 值为 “8.8.8.8”,
+
+```
+[root@linuxtechi ~]# nmcli con mod enp0s3 ipv4.dns "8.8.8.8"
+```
+
+要保存上述更改并重新加载,请执行如下 `nmcli` 命令,
+
+```
+[root@linuxtechi ~]# nmcli con up enp0s3
+Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/4)
+```
+
+以上命令显示网卡 `enp0s3` 已成功配置。我们使用 `nmcli` 命令做的那些更改都将永久保存在文件 `/etc/sysconfig/network-scripts/ifcfg-enp0s3` 里。
+
+```
+[root@linuxtechi ~]# cat /etc/sysconfig/network-scripts/ifcfg-enp0s3
+```
+
+![ifcfg-enp0s3-file-rhel8][2]
+
+要确认 IP 地址是否已分配给 `enp0s3` 网卡,请使用以下 `ip` 命令查看,
+
+```
+[root@linuxtechi ~]# ip addr show enp0s3
+```
+
+### 使用网络脚本文件(ifcfg-*)手动配置静态 IP 地址
+
+我们也可以通过编辑网卡的网络脚本文件(`ifcfg-*`)来配置静态 IP 地址。假设我们想在第二个以太网卡 `enp0s8` 上分配静态 IP 地址:
+
+* IP 地址 = 192.168.1.91
+* 前缀 = 24
+* 网关 = 192.168.1.1
+* DNS1 = 4.2.2.2
+
+转到目录 `/etc/sysconfig/network-scripts`,查找文件 `ifcfg-enp0s8`,如果它不存在,则使用以下内容创建它,
+
+```
+[root@linuxtechi ~]# cd /etc/sysconfig/network-scripts/
+[root@linuxtechi network-scripts]# vi ifcfg-enp0s8
+TYPE="Ethernet"
+DEVICE="enp0s8"
+BOOTPROTO="static"
+ONBOOT="yes"
+NAME="enp0s8"
+IPADDR="192.168.1.91"
+PREFIX="24"
+GATEWAY="192.168.1.1"
+DNS1="4.2.2.2"
+```
+
+保存并退出文件,然后重新启动网络管理器服务以使上述更改生效,
+
+```
+[root@linuxtechi network-scripts]# systemctl restart NetworkManager
+```
+
+现在使用下面的 `ip` 命令来验证 IP 地址是否分配给网卡,
+
+```
+[root@linuxtechi ~]# ip add show enp0s8
+3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
+    link/ether 08:00:27:7c:bb:cb brd ff:ff:ff:ff:ff:ff
+    inet 192.168.1.91/24 brd 192.168.1.255 scope global noprefixroute enp0s8
+       valid_lft forever preferred_lft forever
+    inet6 fe80::a00:27ff:fe7c:bbcb/64 scope link
+       valid_lft forever preferred_lft forever
+[root@linuxtechi ~]#
+```
+
+以上输出内容确认静态 IP 地址已在网卡 `enp0s8` 上成功配置了。
+
+### 使用 nmtui 实用程序配置静态 IP 地址
+
+`nmtui` 是一个基于文本用户界面的网络管理工具,当我们执行 `nmtui` 时,它将打开一个基于文本的用户界面,通过它我们可以添加、修改和删除连接。除此之外,`nmtui` 还可以用来设置系统的主机名。
+
+假设我们希望通过以下细节将静态 IP 地址分配给网卡 `enp0s3`,
+
+* IP 地址 = 10.20.0.72
+* 前缀 = 24
+* 网关 = 10.20.0.1
+* DNS1 = 4.2.2.2
+
+运行 `nmtui` 并按照屏幕说明操作,示例如下所示,
+
+```
+[root@linuxtechi ~]# nmtui
+```
+
+![nmtui-rhel8][3]
+
+选择第一个选项 “Edit a connection”,然后选择接口为 “enp0s3”,
+
+![Choose-interface-nmtui-rhel8][4]
+
+选择 “Edit”,然后指定 IP 地址、前缀、网关和域名系统服务器 IP,
+
+![set-ip-nmtui-rhel8][5]
+
+选择确定,然后点击回车。在下一个窗口中,选择 “Activate a connection”,
+
+![Activate-option-nmtui-rhel8][6]
+
+选择 “enp0s3”,选择 “Deactivate” 并点击回车,
+
+![Deactivate-interface-nmtui-rhel8][7]
+
+现在选择 “Activate” 并点击回车,
+
+![Activate-interface-nmtui-rhel8][8]
+
+选择 “Back”,然后选择 “Quit”,
+
+![Quit-Option-nmtui-rhel8][9]
+
+使用下面的 `ip` 命令验证 IP 地址是否已分配给接口 `enp0s3`,
+
+```
+[root@linuxtechi ~]# ip add show enp0s3
+2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
+    link/ether 08:00:27:53:39:4d brd ff:ff:ff:ff:ff:ff
+    inet 10.20.0.72/24 brd 10.20.0.255 scope global noprefixroute enp0s3
+       valid_lft forever preferred_lft forever
+    inet6 fe80::421d:5abf:58bd:c47e/64 scope link noprefixroute
+       valid_lft forever preferred_lft forever
+[root@linuxtechi ~]#
+```
+
+以上输出内容显示我们已经使用 `nmtui` 实用程序成功地将静态 IP 地址分配给接口 `enp0s3`。
+
+以上就是本教程的全部内容,我们已经介绍了在 RHEL 
8 系统上为以太网卡配置 IPv4 地址的三种不同方法。请在下面的评论部分分享反馈和评论。 + +-------------------------------------------------------------------------------- + +via: https://www.linuxtechi.com/configure-static-ip-address-rhel8/ + +作者:[Pradeep Kumar][a] +选题:[lujun9972][b] +译者:[heguangzhi](https://github.com/heguangzhi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linuxtechi.com/author/pradeep/ +[b]: https://github.com/lujun9972 +[1]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Configure-Static-IP-RHEL8.jpg +[2]: https://www.linuxtechi.com/wp-content/uploads/2019/09/ifcfg-enp0s3-file-rhel8.jpg +[3]: https://www.linuxtechi.com/wp-content/uploads/2019/09/nmtui-rhel8.jpg +[4]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Choose-interface-nmtui-rhel8.jpg +[5]: https://www.linuxtechi.com/wp-content/uploads/2019/09/set-ip-nmtui-rhel8.jpg +[6]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Activate-option-nmtui-rhel8.jpg +[7]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Deactivate-interface-nmtui-rhel8.jpg +[8]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Activate-interface-nmtui-rhel8.jpg +[9]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Quit-Option-nmtui-rhel8.jpg diff --git a/published/20190902 Why I use Java.md b/published/201909/20190902 Why I use Java.md similarity index 100% rename from published/20190902 Why I use Java.md rename to published/201909/20190902 Why I use Java.md diff --git a/published/20190903 5 open source speed-reading applications.md b/published/201909/20190903 5 open source speed-reading applications.md similarity index 100% rename from published/20190903 5 open source speed-reading applications.md rename to published/201909/20190903 5 open source speed-reading applications.md diff --git a/published/20190903 An introduction to Hyperledger Fabric.md b/published/201909/20190903 An introduction to Hyperledger Fabric.md similarity index 100% rename from published/20190903 An introduction to Hyperledger Fabric.md rename to published/201909/20190903 An introduction to Hyperledger Fabric.md diff --git a/published/20190903 The birth of the Bash shell.md b/published/201909/20190903 The birth of the Bash shell.md similarity index 100% rename from published/20190903 The birth of the Bash shell.md rename to published/201909/20190903 The birth of the Bash shell.md diff --git a/published/20190904 How to build Fedora container images.md b/published/201909/20190904 How to build Fedora container images.md similarity index 100% rename from published/20190904 How to build Fedora container images.md rename to published/201909/20190904 How to build Fedora container images.md diff --git a/published/20190905 How to Change Themes in Linux Mint.md b/published/201909/20190905 How to Change Themes in Linux Mint.md similarity index 100% rename from published/20190905 How to Change Themes in Linux Mint.md rename to published/201909/20190905 How to Change Themes in Linux Mint.md diff --git a/published/20190905 How to Get Average CPU and Memory Usage from SAR Reports Using the Bash Script.md b/published/201909/20190905 How to Get Average CPU and Memory Usage from SAR Reports Using the Bash Script.md similarity index 100% rename from published/20190905 How to Get Average CPU and Memory Usage from SAR Reports Using the Bash Script.md rename to published/201909/20190905 How to Get Average CPU and Memory Usage from SAR Reports Using the Bash Script.md diff --git 
a/published/20190905 USB4 gets final approval, offers Ethernet-like speed.md b/published/201909/20190905 USB4 gets final approval, offers Ethernet-like speed.md similarity index 100% rename from published/20190905 USB4 gets final approval, offers Ethernet-like speed.md rename to published/201909/20190905 USB4 gets final approval, offers Ethernet-like speed.md diff --git a/published/20190906 Great News- Firefox 69 Blocks Third-Party Cookies, Autoplay Videos - Cryptominers by Default.md b/published/201909/20190906 Great News- Firefox 69 Blocks Third-Party Cookies, Autoplay Videos - Cryptominers by Default.md similarity index 100% rename from published/20190906 Great News- Firefox 69 Blocks Third-Party Cookies, Autoplay Videos - Cryptominers by Default.md rename to published/201909/20190906 Great News- Firefox 69 Blocks Third-Party Cookies, Autoplay Videos - Cryptominers by Default.md diff --git a/published/20190906 How to change the color of your Linux terminal.md b/published/201909/20190906 How to change the color of your Linux terminal.md similarity index 100% rename from published/20190906 How to change the color of your Linux terminal.md rename to published/201909/20190906 How to change the color of your Linux terminal.md diff --git a/published/20190906 How to put an HTML page on the internet.md b/published/201909/20190906 How to put an HTML page on the internet.md similarity index 100% rename from published/20190906 How to put an HTML page on the internet.md rename to published/201909/20190906 How to put an HTML page on the internet.md diff --git a/published/20190909 Firefox 69 available in Fedora.md b/published/201909/20190909 Firefox 69 available in Fedora.md similarity index 100% rename from published/20190909 Firefox 69 available in Fedora.md rename to published/201909/20190909 Firefox 69 available in Fedora.md diff --git a/published/20190909 How to Install Shutter Screenshot Tool in Ubuntu 19.04.md b/published/201909/20190909 How to Install Shutter Screenshot Tool in Ubuntu 19.04.md similarity index 100% rename from published/20190909 How to Install Shutter Screenshot Tool in Ubuntu 19.04.md rename to published/201909/20190909 How to Install Shutter Screenshot Tool in Ubuntu 19.04.md diff --git a/published/201909/20190909 How to Setup Multi Node Elastic Stack Cluster on RHEL 8 - CentOS 8.md b/published/201909/20190909 How to Setup Multi Node Elastic Stack Cluster on RHEL 8 - CentOS 8.md new file mode 100644 index 0000000000..8838a490d6 --- /dev/null +++ b/published/201909/20190909 How to Setup Multi Node Elastic Stack Cluster on RHEL 8 - CentOS 8.md @@ -0,0 +1,462 @@ +[#]: collector: (lujun9972) +[#]: translator: (heguangzhi) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11394-1.html) +[#]: subject: (How to Setup Multi Node Elastic Stack Cluster on RHEL 8 / CentOS 8) +[#]: via: (https://www.linuxtechi.com/setup-multinode-elastic-stack-cluster-rhel8-centos8/) +[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/) + +如何在 RHEL8 /CentOS8 上建立多节点 Elastic stack 集群 +====== + + +Elastic stack 俗称 ELK stack,是一组包括 Elasticsearch、Logstash 和 Kibana 在内的开源产品。Elastic Stack 由 Elastic 公司开发和维护。使用 Elastic stack,可以将系统日志发送到 Logstash,它是一个数据收集引擎,接受来自可能任何来源的日志或数据,并对日志进行归一化,然后将日志转发到 Elasticsearch,用于分析、索引、搜索和存储,最后使用 Kibana 表示为可视化数据,使用 Kibana,我们还可以基于用户的查询创建交互式图表。 + +![Elastic-Stack-Cluster-RHEL8-CentOS8][2] + +在本文中,我们将演示如何在 RHEL 8 / CentOS 8 服务器上设置多节点 elastic stack 集群。以下是我的 Elastic Stack 集群的详细信息: + +**Elasticsearch:** + +* 三台服务器,最小化安装 RHEL 8 / CentOS 8 +* IP 
& 主机名 – 192.168.56.40(`elasticsearch1.linuxtechi.local`)、192.168.56.50(`elasticsearch2.linuxtechi.local`)、192.168.56.60(`elasticsearch3.linuxtechi.local`)
+
+**Logstash:**
+
+* 两台服务器,最小化安装 RHEL 8 / CentOS 8
+* IP & 主机 – 192.168.56.20(`logstash1.linuxtechi.local`)、192.168.56.30(`logstash2.linuxtechi.local`)
+
+**Kibana:**
+
+* 一台服务器,最小化安装 RHEL 8 / CentOS 8
+* IP & 主机名 – 192.168.56.10(`kibana.linuxtechi.local`)
+
+**Filebeat:**
+
+* 一台服务器,最小化安装 CentOS 7
+* IP & 主机名 – 192.168.56.70(`web-server`)
+
+让我们从设置 Elasticsearch 集群开始,
+
+### 设置 3 个节点 Elasticsearch 集群
+
+正如我已经说过的,先来设置 Elasticsearch 集群的节点:登录到每个节点,设置主机名并配置 yum/dnf 软件库。
+
+使用命令 `hostnamectl` 设置各个节点上的主机名:
+
+```
+[root@linuxtechi ~]# hostnamectl set-hostname "elasticsearch1.linuxtechi.local"
+[root@linuxtechi ~]# exec bash
+[root@linuxtechi ~]#
+[root@linuxtechi ~]# hostnamectl set-hostname "elasticsearch2.linuxtechi.local"
+[root@linuxtechi ~]# exec bash
+[root@linuxtechi ~]#
+[root@linuxtechi ~]# hostnamectl set-hostname "elasticsearch3.linuxtechi.local"
+[root@linuxtechi ~]# exec bash
+[root@linuxtechi ~]#
+```
+
+对于 CentOS 8 系统,我们不需要配置任何操作系统包库;对于 RHEL 8 服务器,如果你有有效订阅,那么用红帽订阅以获得包存储库就可以了。如果你想为操作系统包配置本地 yum/dnf 存储库,请参考以下网址:
+
+- [如何使用 DVD 或 ISO 文件在 RHEL 8 服务器上设置本地 Yum / DNF 存储库][3]
+
+在所有节点上配置 Elasticsearch 包存储库,在 `/etc/yum.repos.d/` 文件夹下创建一个包含以下内容的 `elastic.repo` 文件:
+
+```
+~]# vi /etc/yum.repos.d/elastic.repo
+
+[elasticsearch-7.x]
+name=Elasticsearch repository for 7.x packages
+baseurl=https://artifacts.elastic.co/packages/7.x/yum
+gpgcheck=1
+gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
+enabled=1
+autorefresh=1
+type=rpm-md
+```
+
+保存文件并退出。
+
+在所有三个节点上使用 `rpm` 命令导入 Elastic 公共签名密钥。
+
+```
+~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
+```
+
+在所有三个节点的 `/etc/hosts` 文件中添加以下行:
+
+```
+192.168.56.40 elasticsearch1.linuxtechi.local
+192.168.56.50 elasticsearch2.linuxtechi.local
+192.168.56.60 elasticsearch3.linuxtechi.local
+```
+
+使用 `yum`/`dnf` 命令在所有三个节点上安装 Java:
+
+```
+[root@linuxtechi ~]# dnf install java-openjdk -y
+[root@linuxtechi ~]# dnf install java-openjdk -y
+[root@linuxtechi ~]# dnf install java-openjdk -y
+```
+
+使用 `yum`/`dnf` 命令在所有三个节点上安装 Elasticsearch:
+
+```
+[root@linuxtechi ~]# dnf install elasticsearch -y
+[root@linuxtechi ~]# dnf install elasticsearch -y
+[root@linuxtechi ~]# dnf install elasticsearch -y
+```
+
+**注意:** 如果操作系统防火墙已启用并在每个 Elasticsearch 节点中运行,则使用 `firewall-cmd` 命令允许以下端口开放:
+
+```
+~]# firewall-cmd --permanent --add-port=9300/tcp
+~]# firewall-cmd --permanent --add-port=9200/tcp
+~]# firewall-cmd --reload
+```
+
+配置 Elasticsearch,在所有节点上编辑文件 `/etc/elasticsearch/elasticsearch.yml` 并加入以下内容:
+
+```
+~]# vim /etc/elasticsearch/elasticsearch.yml
+
+cluster.name: opn-cluster
+node.name: elasticsearch1.linuxtechi.local
+network.host: 192.168.56.40
+http.port: 9200
+discovery.seed_hosts: ["elasticsearch1.linuxtechi.local", "elasticsearch2.linuxtechi.local", "elasticsearch3.linuxtechi.local"]
+cluster.initial_master_nodes: ["elasticsearch1.linuxtechi.local", "elasticsearch2.linuxtechi.local", "elasticsearch3.linuxtechi.local"]
+```
+
+**注意:** 在每个节点上,在 `node.name` 中填写正确的主机名,在 `network.host` 中填写正确的 IP 地址,其他参数保持不变。
+
+现在使用 `systemctl` 命令在所有三个节点上启动并启用 Elasticsearch 服务:
+
+```
+~]# systemctl daemon-reload
+~]# systemctl enable elasticsearch.service
+~]# systemctl start elasticsearch.service
+```
+
+使用下面 `ss` 命令验证 elasticsearch 节点是否开始监听 9200 端口:
+
+```
+[root@linuxtechi ~]# ss -tunlp | grep 9200
+tcp LISTEN 0 128 [::ffff:192.168.56.40]:9200 *:* users:(("java",pid=2734,fd=256)) 
+[root@linuxtechi ~]#
+```
+
+使用以下 `curl` 命令验证 Elasticsearch 群集状态:
+
+```
+[root@linuxtechi ~]# curl http://elasticsearch1.linuxtechi.local:9200
+[root@linuxtechi ~]# curl -X GET http://elasticsearch2.linuxtechi.local:9200/_cluster/health?pretty
+```
+
+命令的输出如下所示:
+
+![Elasticsearch-cluster-status-rhel8][3]
+
+以上输出表明我们已经成功创建了 3 节点的 Elasticsearch 集群,集群的状态也是绿色的。
+
+**注意:** 如果你想修改 JVM 堆大小,那么你可以编辑文件 `/etc/elasticsearch/jvm.options`,并根据你的环境更改以下参数:
+
+* `-Xms1g`
+* `-Xmx1g`
+
+现在让我们转到 Logstash 节点。
+
+### 安装和配置 Logstash
+
+在两个 Logstash 节点上执行以下步骤。
+
+登录到两个节点,使用 `hostnamectl` 命令设置主机名:
+
+```
+[root@linuxtechi ~]# hostnamectl set-hostname "logstash1.linuxtechi.local"
+[root@linuxtechi ~]# exec bash
+[root@linuxtechi ~]#
+[root@linuxtechi ~]# hostnamectl set-hostname "logstash2.linuxtechi.local"
+[root@linuxtechi ~]# exec bash
+[root@linuxtechi ~]#
+```
+
+在两个 logstash 节点的 `/etc/hosts` 文件中添加以下条目:
+
+```
+~]# vi /etc/hosts
+192.168.56.40 elasticsearch1.linuxtechi.local
+192.168.56.50 elasticsearch2.linuxtechi.local
+192.168.56.60 elasticsearch3.linuxtechi.local
+```
+
+保存文件并退出。
+
+在两个节点上配置 Logstash 存储库,在文件夹 `/etc/yum.repos.d/` 下创建一个包含以下内容的文件 `logstash.repo`:
+
+```
+~]# vi /etc/yum.repos.d/logstash.repo
+
+[elasticsearch-7.x]
+name=Elasticsearch repository for 7.x packages
+baseurl=https://artifacts.elastic.co/packages/7.x/yum
+gpgcheck=1
+gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
+enabled=1
+autorefresh=1
+type=rpm-md
+```
+
+保存并退出文件,运行 `rpm` 命令导入签名密钥:
+
+```
+~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
+```
+
+使用 `yum`/`dnf` 命令在两个节点上安装 Java OpenJDK:
+
+```
+~]# dnf install java-openjdk -y
+```
+
+在两个节点上运行 `yum`/`dnf` 命令来安装 logstash:
+
+```
+[root@linuxtechi ~]# dnf install logstash -y
+[root@linuxtechi ~]# dnf install logstash -y
+```
+
+现在配置 logstash。在两个 logstash 节点上执行以下步骤,创建一个 logstash 配置文件:首先我们在 `/etc/logstash/conf.d/` 下复制 logstash 示例文件:
+
+```
+# cd /etc/logstash/
+# cp logstash-sample.conf conf.d/logstash.conf
+```
+
+编辑配置文件并更新以下内容:
+
+```
+# vi conf.d/logstash.conf
+
+input {
+  beats {
+    port => 5044
+  }
+}
+
+output {
+  elasticsearch {
+    hosts => ["http://elasticsearch1.linuxtechi.local:9200", "http://elasticsearch2.linuxtechi.local:9200", "http://elasticsearch3.linuxtechi.local:9200"]
+    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
+    #user => "elastic"
+    #password => "changeme"
+  }
+}
+```
+
+在 `output` 部分之下,在 `hosts` 参数中指定所有三个 Elasticsearch 节点的 FQDN,其他参数保持不变。
+
+使用 `firewall-cmd` 命令在操作系统防火墙中允许 logstash 端口 “5044”:
+
+```
+~ # firewall-cmd --permanent --add-port=5044/tcp
+~ # firewall-cmd --reload
+```
+
+现在,在每个节点上运行以下 `systemctl` 命令,启动并启用 Logstash 服务:
+
+```
+~]# systemctl start logstash
+~]# systemctl enable logstash
+```
+
+使用 `ss` 命令验证 logstash 服务是否开始监听 5044 端口:
+
+```
+[root@linuxtechi ~]# ss -tunlp | grep 5044
+tcp LISTEN 0 128 *:5044 *:* users:(("java",pid=2416,fd=96))
+[root@linuxtechi ~]#
+```
+
+以上输出表明 logstash 已成功安装和配置。让我们转到 Kibana 安装。
+
+### 安装和配置 Kibana
+
+登录 Kibana 节点,使用 `hostnamectl` 命令设置主机名:
+
+```
+[root@linuxtechi ~]# hostnamectl set-hostname "kibana.linuxtechi.local"
+[root@linuxtechi ~]# exec bash
+[root@linuxtechi ~]#
+```
+
+编辑 `/etc/hosts` 文件并添加以下行:
+
+```
+192.168.56.40 elasticsearch1.linuxtechi.local
+192.168.56.50 elasticsearch2.linuxtechi.local
+192.168.56.60 elasticsearch3.linuxtechi.local
+```
+
+使用以下命令设置 Kibana 存储库:
+
+```
+[root@linuxtechi ~]# vi /etc/yum.repos.d/kibana.repo
+[elasticsearch-7.x]
+name=Elasticsearch repository for 7.x packages
+baseurl=https://artifacts.elastic.co/packages/7.x/yum
+gpgcheck=1
+gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
+enabled=1
+autorefresh=1
+type=rpm-md
+
+[root@linuxtechi ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
+```
+
+执行 `yum`/`dnf` 命令安装 kibana:
+
+```
+[root@linuxtechi ~]# yum install kibana -y
+```
+
+通过编辑 `/etc/kibana/kibana.yml` 文件,配置 Kibana:
+
+```
+[root@linuxtechi ~]# vim /etc/kibana/kibana.yml
+…………
+server.host: "kibana.linuxtechi.local"
+server.name: "kibana.linuxtechi.local"
+elasticsearch.hosts: ["http://elasticsearch1.linuxtechi.local:9200", "http://elasticsearch2.linuxtechi.local:9200", "http://elasticsearch3.linuxtechi.local:9200"]
+…………
+```
+
+启用并启动 kibana 服务:
+
+```
+[root@linuxtechi ~]# systemctl start kibana
+[root@linuxtechi ~]# systemctl enable kibana
+```
+
+在系统防火墙上允许 Kibana 端口 “5601”:
+
+```
+[root@linuxtechi ~]# firewall-cmd --permanent --add-port=5601/tcp
+success
+[root@linuxtechi ~]# firewall-cmd --reload
+success
+[root@linuxtechi ~]#
+```
+
+使用以下 URL 访问 Kibana 界面:`http://kibana.linuxtechi.local:5601`
+
+![Kibana-Dashboard-rhel8][4]
+
+从面板上,我们可以检查 Elastic Stack 集群的状态。
+
+![Stack-Monitoring-Overview-RHEL8][5]
+
+这证明我们已经在 RHEL 8 / CentOS 8 上成功地安装并设置了多节点 Elastic Stack 集群。
+
+现在让我们通过 `filebeat` 从其他 Linux 服务器发送一些日志到 logstash 节点中。在我的例子中,我有一个 CentOS 7 服务器,我将通过 `filebeat` 将该服务器的所有重要日志推送到 logstash。
+
+登录到 CentOS 7 服务器,使用 yum/rpm 命令安装 filebeat 包:
+
+```
+[root@linuxtechi ~]# rpm -ivh https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.3.1-x86_64.rpm
+Retrieving https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.3.1-x86_64.rpm
+Preparing... ################################# [100%]
+Updating / installing...
+ 1:filebeat-7.3.1-1 ################################# [100%]
+[root@linuxtechi ~]#
+```
+
+编辑 `/etc/hosts` 文件并添加以下内容:
+
+```
+192.168.56.20 logstash1.linuxtechi.local
+192.168.56.30 logstash2.linuxtechi.local
+```
+
+现在配置 `filebeat`,以便它可以使用负载平衡技术向 logstash 节点发送日志,编辑文件 `/etc/filebeat/filebeat.yml`,并添加以下参数:
+
+在 `filebeat.inputs:` 部分将 `enabled: false` 更改为 `enabled: true`,并在 `paths` 参数下指定我们可以发送到 logstash 的日志文件的位置;注释掉 `output.elasticsearch` 和 `host` 参数;删除 `output.logstash:` 和 `hosts:` 的注释,并在 `hosts` 参数添加两个 logstash 节点,以及设置 `loadbalance: true`。
+
+```
+[root@linuxtechi ~]# vi /etc/filebeat/filebeat.yml
+
+filebeat.inputs:
+- type: log
+  enabled: true
+  paths:
+    - /var/log/messages
+    - /var/log/dmesg
+    - /var/log/maillog
+    - /var/log/boot.log
+#output.elasticsearch:
+  # hosts: ["localhost:9200"]
+
+output.logstash:
+  hosts: ["logstash1.linuxtechi.local:5044", "logstash2.linuxtechi.local:5044"]
+  loadbalance: true
+```
+
+使用下面的两个 `systemctl` 命令启动并启用 `filebeat` 服务:
+
+```
+[root@linuxtechi ~]# systemctl start filebeat
+[root@linuxtechi ~]# systemctl enable filebeat
+```
+
+现在转到 Kibana 用户界面,验证新索引是否可见。
+
+从左侧栏中选择管理选项,然后单击 Elasticsearch 下的索引管理:
+
+![Elasticsearch-index-management-Kibana][6]
+
+正如我们上面看到的,索引现在是可见的,让我们现在创建索引模式。
+
+点击 Kibana 部分的 “Index Patterns”,它将提示我们创建一个新模式,点击 “Create Index Pattern”,并将模式名称指定为 “filebeat”:
+
+![Define-Index-Pattern-Kibana-RHEL8][7]
+
+点击下一步。
+
+选择 “Timestamp” 作为索引模式的时间过滤器,然后单击 “Create index pattern”:
+
+![Time-Filter-Index-Pattern-Kibana-RHEL8][8]
+
+![filebeat-index-pattern-overview-Kibana][9]
+
+现在单击查看实时 filebeat 索引模式:
+
+![Discover-Kibana-REHL8][10]
+
+这表明 Filebeat 代理已配置成功,我们能够在 Kibana 仪表盘上看到实时日志。
+
+以上就是本文的全部内容。希望这些步骤能帮助你在 RHEL 8 / CentOS 8 系统上设置 Elastic Stack 集群,欢迎分享你的反馈和意见。
+
+--------------------------------------------------------------------------------
+
+via: https://www.linuxtechi.com/setup-multinode-elastic-stack-cluster-rhel8-centos8/
+
+作者:[Pradeep 
Kumar][a] +选题:[lujun9972][b] +译者:[heguangzhi](https://github.com/heguangzhi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linuxtechi.com/author/pradeep/ +[b]: https://github.com/lujun9972 +[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[2]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Elastic-Stack-Cluster-RHEL8-CentOS8.jpg +[3]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Elasticsearch-cluster-status-rhel8.jpg +[4]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Kibana-Dashboard-rhel8.jpg +[5]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Stack-Monitoring-Overview-RHEL8.jpg +[6]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Elasticsearch-index-management-Kibana.jpg +[7]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Define-Index-Pattern-Kibana-RHEL8.jpg +[8]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Time-Filter-Index-Pattern-Kibana-RHEL8.jpg +[9]: https://www.linuxtechi.com/wp-content/uploads/2019/09/filebeat-index-pattern-overview-Kibana.jpg +[10]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Discover-Kibana-REHL8.jpg diff --git a/published/201909/20190909 How to use Terminator on Linux to run multiple terminals in one window.md b/published/201909/20190909 How to use Terminator on Linux to run multiple terminals in one window.md new file mode 100644 index 0000000000..8121fe2b25 --- /dev/null +++ b/published/201909/20190909 How to use Terminator on Linux to run multiple terminals in one window.md @@ -0,0 +1,117 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11409-1.html) +[#]: subject: (How to use Terminator on Linux to run multiple terminals in one window) +[#]: via: (https://www.networkworld.com/article/3436784/how-to-use-terminator-on-linux-to-run-multiple-terminals-in-one-window.html) +[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/) + +使用 Terminator 在一个窗口中运行多个终端 +====== + +![](https://img.linux.net.cn/data/attachment/album/201909/30/233732j9jjx3xxuujopiuu.jpg) + +> Terminator 为在单窗口中运行多个 GNOME 终端提供了一个选择,让你可以灵活地调整工作空间来适应你的需求。 + +![](https://images.idgesg.net/images/article/2019/09/terminator-code-100810364-large.jpg) + +如果你曾经希望可以排列多个终端并将它们组织在一个窗口中,那么我们可能会给你带来一个好消息。 Linux 的 Terminator 可以为你做到这一点。没有问题! 
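+在开始之前,如果你的系统里还没有 Terminator,可以先从发行版的软件仓库安装它(与文末总结中提到的命令一致)。下面以 Debian/Ubuntu 和 Fedora/CentOS 系为例,实际包名以你所用发行版的仓库为准:
+
+```
+# Debian/Ubuntu 系:
+$ sudo apt install terminator
+# Fedora/CentOS 系:
+$ sudo yum install -y terminator
+```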
+ +### 分割窗口 + +Terminator 最初打开像是一个单一窗口的终端窗口一样。但是,一旦在该窗口中单击鼠标,它将弹出一个选项,让你可以灵活地进行更改。你可以选择“水平分割”或“垂直分割”,将你当前所在的窗口分为两个较小的窗口。实际上,菜单旁会有小的分割结果图示(类似于 `=` and `||`),你可以根据需要重复拆分窗口。当然,你如果将整个窗口分为六个或九个以上,那么你可能会发现它们太小而无法有效使用。 + +使用 ASCII 艺术来说明分割窗口的过程,你可能会看到类似以下的样子: + +``` ++-------------------+ +-------------------+ +-------------------+ +| | | | | | +| | | | | | +| | ==> |-------------------| ==> |-------------------| +| | | | | | | +| | | | | | | ++-------------------+ +-------------------+ +-------------------+ + 原始终端 水平分割 垂直分割 +``` + +另一种拆分窗口的方法是使用控制键组合,例如,使用 `Ctrl+Shift+e` 垂直分割窗口,使用 `Ctrl+Shift+o`(“o” 表示“打开”)水平分割窗口。 + +在 Terminator 分割完成后,你可以点击任意窗口使用,并根据工作需求在窗口间移动。 + +### 最大化窗口 + +如果你想暂时忽略除了一个窗口外的其他窗口而只关注一个,你可以单击该窗口,然后从菜单中选择“最大化”选项。接着该窗口会撑满所有空间。再次单击并选择“还原所有终端”可以返回到多窗口显示。使用 `Ctrl+Shift+x` 将在正常和最大化设置之间切换。 + +窗口标签上的窗口大小指示(例如 80x15)显示了每行的字符数以及每个窗口的行数。 + +### 关闭窗口 + +要关闭任何窗口,请打开 Terminator 菜单,然后选择“关闭”。其他窗口将自行调整占用空间,直到你关闭最后一个窗口。 + +### 保存你的自定义设置 + +将窗口分为多个部分后,将自定义的 Terminator 设置设置为默认非常容易。从弹出菜单中选择“首选项”,然后从打开的窗口顶部的选项卡中选择“布局”。接着你应该看到列出了“新布局”。只需单击底部的“保存”,然后单击右下角的“关闭”。Terminator 会将你的设置保存在 `~/.config/terminator/config` 中,然后每次使用到时都会使用该文件。 + +你也可以通过使用鼠标拉伸来扩大整个窗口。再说一次,如果要保留更改,请从菜单中选择“首选项”,“布局”,接着选择“保存”和“关闭”。 + +### 在保存的配置之间进行选择 + +如果愿意,你可以通过维护多个配置文件来设置多种 Terminator 窗口布局,重命名每个配置文件(如 `config-1`、`config-2`),接着在你想使用它时将它移动到 `~/.config/terminator/config`。这有一个类似执行此任务的脚本。它让你在 3 个预配置的窗口布局之间进行选择。 + +``` +#!/bin/bash + +PS3='Terminator options: ' +options=("Split 1" "Split 2" "Split 3" "Quit") +select opt in "${options[@]}" +do + case $opt in + "Split 1") + config=config-1 + break + ;; + "Split 2") + config=config-2 + break + ;; + "Split 3") + config=config-3 + break + ;; + *) + exit + ;; + esac +done + +cd ~/.config/terminator +cp config config- +cp $config config +cd +terminator & +``` + +如果有用的话,你可以给选项一个比 `config-1` 更有意义的名称。 + +### 总结 + +Terminator 是设置多窗口处理相关任务的不错选择。如果你从未使用过它,那么可能需要先使用 `sudo apt install terminator` 或 `sudo yum install -y terminator` 之类的命令进行安装。 + +希望你喜欢使用 Terminator。还有,如另一个同名角色所说,“我会回来的!” + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3436784/how-to-use-terminator-on-linux-to-run-multiple-terminals-in-one-window.html + +作者:[Sandra Henry-Stocker][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/ +[b]: https://github.com/lujun9972 +[2]: https://www.facebook.com/NetworkWorld/ +[3]: https://www.linkedin.com/company/network-world diff --git a/published/20190911 How to set up a TFTP server on Fedora.md b/published/201909/20190911 How to set up a TFTP server on Fedora.md similarity index 100% rename from published/20190911 How to set up a TFTP server on Fedora.md rename to published/201909/20190911 How to set up a TFTP server on Fedora.md diff --git a/published/201909/20190912 An introduction to Markdown.md b/published/201909/20190912 An introduction to Markdown.md new file mode 100644 index 0000000000..56bf81de5d --- /dev/null +++ b/published/201909/20190912 An introduction to Markdown.md @@ -0,0 +1,153 @@ +[#]: collector: (lujun9972) +[#]: translator: (qfzy1233) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11402-1.html) +[#]: subject: (An introduction to Markdown) +[#]: via: (https://opensource.com/article/19/9/introduction-markdown) +[#]: author: (Juan Islas 
https://opensource.com/users/xislas) + +一份 Markdown 简介 +====== + +> 一次编辑便可将文本转换为多种格式。下面是如何开始使用 Markdown。 + +![](https://img.linux.net.cn/data/attachment/album/201909/29/123226bjte253n2h44cjjj.jpg) + +在很长一段时间里,我发现我在 GitLab 和 GitHub 上看到的所有文件都带有 **.md** 扩展名,这是专门为开发人员编写的文件类型。几周前,当我开始使用 Markdown 时,我的观念发生了变化。它很快成为我日常工作中最重要的工具。 + +Markdown 使我的生活更简易。我只需要在已经编写的代码中添加一些符号,并且在浏览器扩展或开源程序的帮助下,即可将文本转换为各种常用格式,如 ODT、电子邮件(稍后将详细介绍)、PDF 和 EPUB。 + +### 什么是 Markdown? + +来自 [维基百科][2]的友情提示: + +> Markdown 是一种轻量级标记语言,具有纯文本格式语法。 + +这意味着通过在文本中使用一些额外的符号,Markdown 可以帮助你创建具有特定结构和格式的文档。当你以纯文本(例如,在记事本应用程序中)做笔记时,没有任何东西表明哪个文本应该是粗体或斜体。在普通文本中,你在写链接时需要将一个链接写为 “http://example.com”,或者写为 “example.com”,又或“访问网站(example.com)”。这样没有内在的一致性。 + +但是如果你按照 Markdown 的方式编写,你的文本就有了内在的一致性。计算机喜欢一致性,因为这使得它们能够遵循严格的指令而不用担心异常。 + +相信我;一旦你学会使用 Markdown,每一项写作任务在某种程度上都会比以前更容易、更好。让我们开始吧。 + +### Markdown 基础 + +以下是使用 Markdown 的基础语法。 + +1、创建一个以 **.md** 扩展名结尾的文本文件(例如,`example.md`)。你可以使用任何文本编辑器(甚至像 LibreOffice 或 Microsoft word 这样的文字处理程序亦可),只要记住将其保存为*文本*文件。 + +![Names of Markdown files][3] + +2、想写什么就写什么,就像往常一样: + +``` +Lorem ipsum + +Consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. +Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. +Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. + +De Finibus Bonorum et Malorum + +Sed ut perspiciatis unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam, eaque ipsa quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt explicabo. +Nemo enim ipsam voluptatem quia voluptas sit aspernatur aut odit aut fugit, sed quia consequuntur magni dolores eos qui ratione voluptatem sequi nesciunt. + + Neque porro quisquam est, qui dolorem ipsum quia dolor sit amet, consectetur, adipisci velit, sed quia non numquam eius modi tempora incidunt ut labore et dolore magnam aliquam quaerat voluptatem. +``` + +(LCTT 译注:上述这段“Lorem ipsum”,中文又称“乱数假文”,是一篇常用于排版设计领域的拉丁文文章,主要目的为测试文章或文字在不同字型、版型下看起来的效果。) + +3、确保在段落之间留有空行。如果你习惯写商务信函或传统散文,这可能会觉得不自然,因为那里段落只有一行,甚至在第一个单词前还有一个缩进。对于 Markdown,空行(一些文字处理程序使用 `¶`,称为Pilcrow 符号)保证在创建一个新段落应用另一种格式(如 HTML)。 + +4、指定标题和副标题。对于文档的标题,在文本前面添加一个井号或散列符号(`#`)和一个空格(例如 `# Lorem ipsum`)。第一个副标题级别使用两个(`## De Finibus Bonorum et Malorum`),下一个级别使用三个(`### 第三个副标题`),以此类推。注意,在井号和第一个单词之间有一个空格。 + +``` +# Lorem ipsum + +Consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. +Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. +Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. + +## De Finibus Bonorum et Malorum + +Sed ut perspiciatis unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam, eaque ipsa quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt explicabo. +Nemo enim ipsam voluptatem quia voluptas sit aspernatur aut odit aut fugit, sed quia consequuntur magni dolores eos qui ratione voluptatem sequi nesciunt. 
+
+  Neque porro quisquam est, qui dolorem ipsum quia dolor sit amet, consectetur, adipisci velit, sed quia non numquam eius modi tempora incidunt ut labore et dolore magnam aliquam quaerat voluptatem.
+```
+
+5、如果你想使用**粗体**字符,只需将字母放在两个星号之间,没有空格:`**对应的文本将以粗体显示**`。
+
+![Bold text in Markdown][4]
+
+6、对于**斜体**,将文本放在没有空格的下划线符号之间:`_我希望这个本文以斜体显示_`。(LCTT 译注:有的 Markdown 流派会将用下划线引起来的字符串视作下划线文本,而单个星号 `*` 引用起来的才视作斜体。从兼容性的角度看,使用星号比较兼容。)
+
+![Italics text in Markdown][5]
+
+7、要插入一个链接(像 [Markdown Tutorial][6]),把你想链接的文本放在方括号里,URL 放在圆括号里,中间没有空格:`[Markdown Tutorial](https://www.markdowntutorial.com/)`。
+
+![Hyperlinks in Markdown][7]
+
+8、块引用是用大于号(`>`)编写的:在你要引用的文本前加上大于号和空格:`> 名言引用`。
+
+![Blockquote text in Markdown][8]
+
+### Markdown 教程和技巧
+
+这些技巧可以帮助你上手 Markdown,但它涵盖了很多功能,不仅仅是粗体、斜体和链接。学习 Markdown 的最好方法是使用它,但是我建议你花 15 分钟来学习这篇简单的 [Markdown 教程][6],学以致用,勤加练习。
+
+由于现代 Markdown 是对结构化文本概念的许多不同解释的融合,[CommonMark][9] 项目定义了一个规范,其中包含一组严格的规则,以使 Markdown 更加清晰。在编辑时手边准备一份[符合 CommonMark 的快捷键列表][10]可能会有帮助。
+
+### 你能用 Markdown 做什么
+
+Markdown 可以让你写任何你想写的东西,仅需一次编辑,就可以把它转换成几乎任何你想使用的格式。下面的示例演示如何将用 MD 编写的简单文本转换为不同的格式。你不需要维护多种格式的文档——只需编辑一次……然后就拥有无限可能。
+
+1、**简单的笔记**:你可以用 Markdown 编写你的笔记,并且在保存笔记时,开源笔记应用程序 [Turtl][11] 将解释你的文本文件并显示为对应的格式。你可以把笔记存储在任何地方!
+
+![Turtl application][12]
+
+2、**PDF 文件**:使用 [Pandoc][13] 应用程序,你可以使用一个简单的命令将 Markdown 文件转换为 PDF(尖括号中为占位符,请替换为实际的文件名):
+
+```
+pandoc <file.md> -o <file.pdf>
+```
+
+![Markdown text converted to PDF with Pandoc][14]
+
+3、**Email**:你还可以通过安装浏览器扩展 [Markdown Here][15] 将 Markdown 文本转换为 HTML 格式的电子邮件。要使用它,只需选择你的 Markdown 文本,用 Markdown Here 将其转换为 HTML,并使用你喜欢的电子邮件客户端发送消息。
+
+![Markdown text converted to email with Markdown Here][16]
+
+### 现在就开始上手吧
+
+你不需要一个特殊的应用程序来使用 Markdown,你只需要一个文本编辑器和上面的技巧。它与你已有的写作方式兼容;你所需要做的就是使用它,所以试试吧。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/9/introduction-markdown
+
+作者:[Juan Islas][a]
+选题:[lujun9972][b]
+译者:[qfzy1233](https://github.com/qfzy1233)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/xislas
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop-music-headphones.png?itok=EQZ2WKzy (Woman programming)
+[2]: https://en.wikipedia.org/wiki/Markdown
+[3]: https://opensource.com/sites/default/files/uploads/markdown_names_md-1.png (Names of Markdown files)
+[4]: https://opensource.com/sites/default/files/uploads/markdown_bold.png (Bold text in Markdown)
+[5]: https://opensource.com/sites/default/files/uploads/markdown_italic.png (Italics text in Markdown)
+[6]: https://www.markdowntutorial.com/
+[7]: https://opensource.com/sites/default/files/uploads/markdown_link.png (Hyperlinks in Markdown)
+[8]: https://opensource.com/sites/default/files/uploads/markdown_blockquote.png (Blockquote text in Markdown)
+[9]: https://commonmark.org/help/
+[10]: https://opensource.com/downloads/cheat-sheet-markdown
+[11]: https://turtlapp.com/
+[12]: https://opensource.com/sites/default/files/uploads/markdown_turtl_02.png (Turtl application)
+[13]: https://opensource.com/article/19/5/convert-markdown-to-word-pandoc
+[14]: https://opensource.com/sites/default/files/uploads/markdown_pdf.png (Markdown text converted to PDF with Pandoc)
+[15]: https://markdown-here.com/
+[16]: https://opensource.com/sites/default/files/uploads/markdown_mail_02.png (Markdown text converted to email with Markdown Here)
diff --git a/published/20190912 Bash Script to Send a Mail 
About New User Account Creation.md b/published/201909/20190912 Bash Script to Send a Mail About New User Account Creation.md similarity index 100% rename from published/20190912 Bash Script to Send a Mail About New User Account Creation.md rename to published/201909/20190912 Bash Script to Send a Mail About New User Account Creation.md diff --git a/published/20190913 An introduction to Virtual Machine Manager.md b/published/201909/20190913 An introduction to Virtual Machine Manager.md similarity index 100% rename from published/20190913 An introduction to Virtual Machine Manager.md rename to published/201909/20190913 An introduction to Virtual Machine Manager.md diff --git a/published/20190913 How to Find and Replace a String in File Using the sed Command in Linux.md b/published/201909/20190913 How to Find and Replace a String in File Using the sed Command in Linux.md similarity index 100% rename from published/20190913 How to Find and Replace a String in File Using the sed Command in Linux.md rename to published/201909/20190913 How to Find and Replace a String in File Using the sed Command in Linux.md diff --git a/published/20190914 GNOME 3.34 Released With New Features - Performance Improvements.md b/published/201909/20190914 GNOME 3.34 Released With New Features - Performance Improvements.md similarity index 100% rename from published/20190914 GNOME 3.34 Released With New Features - Performance Improvements.md rename to published/201909/20190914 GNOME 3.34 Released With New Features - Performance Improvements.md diff --git a/published/20190914 Manjaro Linux Graduates From A Hobby Project To A Professional Project.md b/published/201909/20190914 Manjaro Linux Graduates From A Hobby Project To A Professional Project.md similarity index 100% rename from published/20190914 Manjaro Linux Graduates From A Hobby Project To A Professional Project.md rename to published/201909/20190914 Manjaro Linux Graduates From A Hobby Project To A Professional Project.md diff --git a/published/20190915 Sandboxie-s path to-open source, update on the Pentagon-s open source initiative, open source in Hollywood,-and more.md b/published/201909/20190915 Sandboxie-s path to-open source, update on the Pentagon-s open source initiative, open source in Hollywood,-and more.md similarity index 100% rename from published/20190915 Sandboxie-s path to-open source, update on the Pentagon-s open source initiative, open source in Hollywood,-and more.md rename to published/201909/20190915 Sandboxie-s path to-open source, update on the Pentagon-s open source initiative, open source in Hollywood,-and more.md diff --git a/translated/tech/20190916 How to freeze and lock your Linux system (and why you would want to).md b/published/201909/20190916 How to freeze and lock your Linux system (and why you would want to).md similarity index 58% rename from translated/tech/20190916 How to freeze and lock your Linux system (and why you would want to).md rename to published/201909/20190916 How to freeze and lock your Linux system (and why you would want to).md index 738b38c6cd..37b0a31311 100644 --- a/translated/tech/20190916 How to freeze and lock your Linux system (and why you would want to).md +++ b/published/201909/20190916 How to freeze and lock your Linux system (and why you would want to).md @@ -1,15 +1,18 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11384-1.html) [#]: subject: (How to 
freeze and lock your Linux system (and why you would want to)) [#]: via: (https://www.networkworld.com/article/3438818/how-to-freeze-and-lock-your-linux-system-and-why-you-would-want-to.html) [#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/) -如何冻结和锁定你的 Linux 系统(以及为何你会希望做) +如何冻结和锁定你的 Linux 系统 ====== -冻结终端窗口并锁定屏幕意味着什么 - 以及如何在 Linux 系统上管理这些活动。 + +> 冻结终端窗口并锁定屏幕意味着什么 - 以及如何在 Linux 系统上管理这些活动。 + +![](https://img.linux.net.cn/data/attachment/album/201909/24/230938vgxzv3nrakk0wxnw.jpg) 如何在 Linux 系统上冻结和“解冻”屏幕,很大程度上取决于这些术语的含义。有时“冻结屏幕”可能意味着冻结终端窗口,以便该窗口内的活动停止。有时它意味着锁定屏幕,这样就没人可以在你去拿一杯咖啡时,走到你的系统旁边代替你输入命令了。 @@ -17,9 +20,9 @@ ### 如何在 Linux 上冻结终端窗口 -你可以输入 **Ctrl+S**(按住 Ctrl 键和 “s” 键)冻结 Linux 系统上的终端窗口。把 “s” 想象成“开始冻结” (start the freeze)。如果在此操作后继续输入命令,那么你不会看到输入的命令或你希望看到的输出。实际上,命令将堆积在一个队列中,并且只有在通过输入 **Ctrl+Q** 解冻时才会运行。把它想象成“退出冻结” (quit the freeze)。 +你可以输入 `Ctrl+S`(按住 `Ctrl` 键和 `s` 键)冻结 Linux 系统上的终端窗口。把 `s` 想象成“开始冻结start the freeze”。如果在此操作后继续输入命令,那么你不会看到输入的命令或你希望看到的输出。实际上,命令将堆积在一个队列中,并且只有在通过输入 `Ctrl+Q` 解冻时才会运行。把它想象成“退出冻结quit the freeze”。 -查看其工作的一种简单方式是使用 date 命令,然后输入 **Ctrl+S**。接着再次输入 date 命令并等待几分钟后再次输入 **Ctrl+Q**。你会看到这样的情景: +查看其工作的一种简单方式是使用 `date` 命令,然后输入 `Ctrl+S`。接着再次输入 `date` 命令并等待几分钟后再次输入 `Ctrl+Q`。你会看到这样的情景: ``` $ date @@ -28,25 +31,25 @@ $ date Mon 16 Sep 2019 06:49:49 PM EDT ``` -这两次时间显示的差距表示第二次的 date 命令直到你解冻窗口时才运行。 +这两次时间显示的差距表示第二次的 `date` 命令直到你解冻窗口时才运行。 无论你是坐在计算机屏幕前还是使用 PuTTY 等工具远程运行,终端窗口都可以冻结和解冻。 -这有一个可以派上用场的小技巧。如果你发现终端窗口似乎处于非活动状态,那么可能是你或其他人无意中输入了 **Ctrl+S**。无论如何,输入 **Ctrl+Q** 来尝试解决不妨是个不错的办法。 +这有一个可以派上用场的小技巧。如果你发现终端窗口似乎处于非活动状态,那么可能是你或其他人无意中输入了 `Ctrl+S`。那么,输入 `Ctrl+Q` 来尝试解决不妨是个不错的办法。 ### 如何锁定屏幕 -要在离开办公桌前锁定屏幕,请按住  **Ctrl+Alt+L** 或 **Super+L**(即按住 Windows 键和 L 键)。屏幕锁定后,你必须输入密码才能重新登录。 +要在离开办公桌前锁定屏幕,请按住  `Ctrl+Alt+L` 或 `Super+L`(即按住 `Windows` 键和 `L` 键)。屏幕锁定后,你必须输入密码才能重新登录。 ### Linux 系统上的自动屏幕锁定 虽然最佳做法建议你在即将离开办公桌时锁定屏幕,但 Linux 系统通常会在一段时间没有活动后自动锁定。 “消隐”屏幕(使其变暗)并实际锁定屏幕(需要登录才能再次使用)的时间取决于你个人首选项中的设置。 -要更改使用 GNOME 屏幕保护程序时屏幕变暗所需的时间,请打开设置窗口并选择 **Power** 然后 **Blank screen**。你可以选择 1 到 15 分钟或从不变暗。要选择屏幕变暗后锁定所需时间,请进入设置,选择 **Privacy**,然后选择**Blank screen**。设置应包括 1、2、3、5 和 30 分钟或一小时。 +要更改使用 GNOME 屏幕保护程序时屏幕变暗所需的时间,请打开设置窗口并选择 “Power” 然后 “Blank screen”。你可以选择 1 到 15 分钟或从不变暗。要选择屏幕变暗后锁定所需时间,请进入设置,选择 “Privacy”,然后选择 “Blank screen”。设置应包括 1、2、3、5 和 30 分钟或一小时。 ### 如何在命令行锁定屏幕 -如果你使用的是 Gnome 屏幕保护程序,你还可以使用以下命令从命令行锁定屏幕: +如果你使用的是 GNOME 屏幕保护程序,你还可以使用以下命令从命令行锁定屏幕: ``` gnome-screensaver-command -l @@ -56,7 +59,7 @@ gnome-screensaver-command -l ### 如何检查锁屏状态 -你还可以使用 gnome-screensaver 命令检查屏幕是否已锁定。使用 **\--query** 选项,该命令会告诉你屏幕当前是否已锁定(即处于活动状态)。使用 --time 选项,它会告诉你锁定生效的时间。这是一个示例脚本: +你还可以使用 `gnome-screensaver` 命令检查屏幕是否已锁定。使用 `--query` 选项,该命令会告诉你屏幕当前是否已锁定(即处于活动状态)。使用 `--time` 选项,它会告诉你锁定生效的时间。这是一个示例脚本: ``` #!/bin/bash @@ -77,8 +80,6 @@ The screensaver has been active for 1013 seconds. 
如果你记住了正确的控制方式,那么锁定终端窗口是很简单的。对于屏幕锁定,它的效果取决于你自己的设置,或者你是否习惯使用默认设置。

-在 [Facebook][3] 和 [LinkedIn][4] 上加入 Network World 社区,来评论最新主题。
-
--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3438818/how-to-freeze-and-lock-your-linux-system-and-why-you-would-want-to.html

作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/published/201909/20190916 How to start developing with .NET.md b/published/201909/20190916 How to start developing with .NET.md
new file mode 100644
index 0000000000..81233361e9
--- /dev/null
+++ b/published/201909/20190916 How to start developing with .NET.md
@@ -0,0 +1,160 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11397-1.html)
+[#]: subject: (How to start developing with .NET)
+[#]: via: (https://opensource.com/article/19/9/getting-started-net)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+
+如何在 Linux/Windows/MacOS 上使用 .NET 进行开发
+======
+
+> 了解 .NET 开发平台启动和运行的基础知识。
+
+![](https://img.linux.net.cn/data/attachment/album/201909/28/111101n3i43c38tv3j9im4.jpg)
+
+.NET 框架由 Microsoft 于 2000 年发布。该平台的开源实现 [Mono][2] 在 21 世纪初成为了争议的焦点,因为微软拥有 .NET 技术的多项专利,并且可能使用这些专利来终止 Mono 项目。幸运的是,在 2014 年,微软宣布 .NET 开发平台从此成为 MIT 许可下的开源平台,并在 2016 年收购了开发 Mono 的 Xamarin 公司。
+
+.NET 和 Mono 都已可用作 C#、F#、GTK+、Visual Basic、Vala 等语言的跨平台编程环境。使用 .NET 和 Mono 创建的程序已经应用于 Linux、BSD、Windows、MacOS、Android,甚至一些游戏机。你可以使用 .NET 或 Mono 来开发 .NET 应用。这两个都是开源的,并且都有活跃和充满活力的社区。本文重点介绍微软的 .NET 环境。
+
+### 如何安装 .NET
+
+.NET 的下载被分为多个包:一个仅包含 .NET 运行时,另一个是包含了 .NET Core 和运行时的 .NET SDK。根据架构和操作系统版本,这些包可能有多个版本。要开始使用 .NET 进行开发,你必须[安装该 SDK][3]。它为你提供了 [dotnet][4] 终端或 PowerShell 命令,你可以使用它们来创建和生成项目。
+
+#### Linux
+
+要在 Linux 上安装 .NET,首先将微软 Linux 软件仓库添加到你的计算机。
+
+在 Fedora 上:
+
+```
+$ sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
+$ sudo wget -q -O /etc/yum.repos.d/microsoft-prod.repo https://packages.microsoft.com/config/fedora/27/prod.repo
+```
+
+在 Ubuntu 上:
+
+```
+$ wget -q https://packages.microsoft.com/config/ubuntu/19.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
+$ sudo dpkg -i packages-microsoft-prod.deb
+```
+
+接下来,使用包管理器安装 SDK,将 `<VERSION>` 替换为当前的 .NET 版本号:
+
+在 Fedora 上:
+
+```
+$ sudo dnf install dotnet-sdk-<VERSION>
+```
+
+在 Ubuntu 上:
+
+```
+$ sudo apt install apt-transport-https
+$ sudo apt update
+$ sudo apt install dotnet-sdk-<VERSION>
+```
+
+下载并安装所有包后,打开终端并输入下面命令确认安装:
+
+```
+$ dotnet --version
+X.Y.Z
+```
+
+#### Windows
+
+如果你使用的是微软 Windows,那么你可能已经安装了 .NET 运行时。但是,要开发 .NET 应用,你还必须安装 .NET Core SDK。
+
+首先,[下载安装程序][3]。请认准下载 .NET Core 进行跨平台开发(.NET Framework 仅适用于 Windows)。下载 .exe 文件后,双击该文件启动安装向导,然后单击两下进行安装:接受许可证并允许安装继续。
+
+![Installing dotnet on Windows][5]
+
+然后,从左下角的“应用程序”菜单中打开 PowerShell。在 PowerShell 中,输入测试命令:
+
+```
+PS C:\Users\osdc> dotnet
+```
+
+如果你看到有关 dotnet 安装的信息,那么说明 .NET 已正确安装。
+
+#### MacOS
+
+如果你使用的是 Apple Mac,请下载 .pkg 形式的 [Mac 安装程序][3]。下载并双击该 .pkg 文件,然后单击安装程序。你可能需要授予安装程序权限,因为该软件包并非来自 App Store。
+
+下载并安装所有软件包后,请打开终端并输入以下命令来确认安装:
+
+```
+$ dotnet --version
+X.Y.Z
+```
+
+### Hello .NET
+
+`dotnet` 命令提供了一个用 .NET 编写的 “hello world” 示例程序。或者,更准确地说,该命令提供了示例应用。
+
+首先,使用 `dotnet` 命令以及 `new` 和 `console` 参数创建一个控制台应用的项目目录及所需的代码基础结构。使用 
`-o` 选项指定项目名称: + +``` +$ dotnet new console -o hellodotnet +``` + +这将在当前目录中创建一个名为 `hellodotnet` 的目录。进入你的项目目录并看一下: + +``` +$ cd hellodotnet +$ dir +hellodotnet.csproj  obj  Program.cs +``` + +`Program.cs` 是一个空的 C# 文件,它包含了一个简单的 Hello World 程序。在文本编辑器中打开查看它。微软的 Visual Studio Code 是一个使用 dotnet 编写的跨平台的开源应用,虽然它不是一个糟糕的文本编辑器,但它会收集用户的大量数据(在它的二进制发行版的许可证中授予了自己权限)。如果要尝试使用 Visual Studio Code,请考虑使用 [VSCodium][6],它是使用 Visual Studio Code 的 MIT 许可的源码构建的版本,而*没有*远程收集(请阅读[此文档][7]来禁止此构建中的其他形式追踪)。或者,只需使用现有的你最喜欢的文本编辑器或 IDE。 + +新控制台应用中的样板代码为: + +``` +using System; + +namespace hellodotnet +{ +    class Program +    { +        static void Main(string[] args) +        { +            Console.WriteLine("Hello World!"); +        } +    } +} +``` + +要运行该程序,请使用 `dotnet run` 命令: + +``` +$ dotnet run +Hello World! +``` + +这是 .NET 和 `dotnet` 命令的基本工作流程。这里有完整的 [.NET C# 指南][8],并且都是与 .NET 相关的内容。关于 .NET 实战示例,请关注 [Alex Bunardzic][9] 在 opensource.com 中的变异测试文章。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/9/getting-started-net + +作者:[Seth Kenlon][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_laptop_hack_work.png?itok=aSpcWkcl (Coding on a computer) +[2]: https://www.monodevelop.com/ +[3]: https://dotnet.microsoft.com/download +[4]: https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet?tabs=netcore21 +[5]: https://opensource.com/sites/default/files/uploads/dotnet-windows-install.jpg (Installing dotnet on Windows) +[6]: https://vscodium.com/ +[7]: https://github.com/VSCodium/vscodium/blob/master/DOCS.md +[8]: https://docs.microsoft.com/en-us/dotnet/csharp/tutorials/intro-to-csharp/ +[9]: https://opensource.com/users/alex-bunardzic (View user profile.) 
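+附:如果你还想把上面的 Hello World 程序发布成可部署的产物,可以试试 `dotnet publish` 子命令。下面只是一个最小示意,输出目录中的目标框架名称(如 `netcoreapp2.2`)取决于你安装的 SDK 版本,这里仅为假设:
+
+```
+$ cd hellodotnet
+$ dotnet publish -c Release
+# 产物位于类似下面的目录中(具体的框架版本号视 SDK 而定)
+$ ls bin/Release/netcoreapp2.2/publish/
+```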
diff --git a/published/20190916 Linux Plumbers, Appwrite, and more industry trends.md b/published/201909/20190916 Linux Plumbers, Appwrite, and more industry trends.md similarity index 100% rename from published/20190916 Linux Plumbers, Appwrite, and more industry trends.md rename to published/201909/20190916 Linux Plumbers, Appwrite, and more industry trends.md diff --git a/published/20190917 Getting started with Zsh.md b/published/201909/20190917 Getting started with Zsh.md similarity index 100% rename from published/20190917 Getting started with Zsh.md rename to published/201909/20190917 Getting started with Zsh.md diff --git a/published/20190917 How to Check Linux Mint Version Number - Codename.md b/published/201909/20190917 How to Check Linux Mint Version Number - Codename.md similarity index 100% rename from published/20190917 How to Check Linux Mint Version Number - Codename.md rename to published/201909/20190917 How to Check Linux Mint Version Number - Codename.md diff --git a/published/20190918 Amid Epstein Controversy, Richard Stallman is Forced to Resign as FSF President.md b/published/201909/20190918 Amid Epstein Controversy, Richard Stallman is Forced to Resign as FSF President.md similarity index 100% rename from published/20190918 Amid Epstein Controversy, Richard Stallman is Forced to Resign as FSF President.md rename to published/201909/20190918 Amid Epstein Controversy, Richard Stallman is Forced to Resign as FSF President.md diff --git a/published/201909/20190918 How to remove carriage returns from text files on Linux.md b/published/201909/20190918 How to remove carriage returns from text files on Linux.md new file mode 100644 index 0000000000..2f746068d7 --- /dev/null +++ b/published/201909/20190918 How to remove carriage returns from text files on Linux.md @@ -0,0 +1,111 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11389-1.html) +[#]: subject: (How to remove carriage returns from text files on Linux) +[#]: via: (https://www.networkworld.com/article/3438857/how-to-remove-carriage-returns-from-text-files-on-linux.html) +[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/) + +如何在 Linux 中删除文本中的回车字符 +====== + +> 当回车字符(`Ctrl+M`)让你紧张时,别担心。有几种简单的方法消除它们。 + +![](https://img.linux.net.cn/data/attachment/album/201909/25/214211xenk2dqfepx3xemm.jpg) + +“回车”字符可以往回追溯很长一段时间 —— 早在打字机上就有一个机械装置或杠杆将承载纸滚筒的机架移到右边,以便可以重新在左侧输入字母。他们在 Windows 上的文本文件上保留了它,但从未在 Linux 系统上使用过。当你尝试在 Linux 上处理在 Windows 上创建的文件时,这种不兼容性有时会导致问题,但这是一个非常容易解决的问题。 + +如果你使用 `od`(八进制转储octal dump)命令查看文件,那么回车(也用 `Ctrl+M` 代表)字符将显示为八进制的 15。字符 `CRLF` 通常用于表示 Windows 文本文件中的一行结束的回车符和换行符序列。那些注意看八进制转储的会看到 `\r\n`。相比之下,Linux 文本仅以换行符结束。 + +这有一个 `od` 输出的示例,高亮显示了行中的 `CRLF` 字符,以及它的八进制。 + +``` +$ od -bc testfile.txt +0000000 124 150 151 163 040 151 163 040 141 040 164 145 163 164 040 146 + T h i s i s a t e s t f +0000020 151 154 145 040 146 162 157 155 040 127 151 156 144 157 167 163 + i l e f r o m W i n d o w s +0000040 056 015 012 111 164 047 163 040 144 151 146 146 145 162 145 156 <== + . 
\r \n I t ' s d i f f e r e n <== +0000060 164 040 164 150 141 156 040 141 040 125 156 151 170 040 164 145 + t t h a n a U n i x t e +0000100 170 164 040 146 151 154 145 015 012 167 157 165 154 144 040 142 <== + x t f i l e \r \n w o u l d b <== +``` + +虽然这些字符不是大问题,但是当你想要以某种方式解析文本,并且不希望就它们是否存在进行编码时,这有时候会产生干扰。 + +### 3 种从文本中删除回车符的方法 + +幸运的是,有几种方法可以轻松删除回车符。这有三个选择: + +#### dos2unix + +你可能会在安装时遇到麻烦,但 `dos2unix` 可能是将 Windows 文本转换为 Unix/Linux 文本的最简单方法。一个命令带上一个参数就行了。不需要第二个文件名。该文件会被直接更改。 + +``` +$ dos2unix testfile.txt +dos2unix: converting file testfile.txt to Unix format... +``` + +你应该会发现文件长度减少,具体取决于它包含的行数。包含 100 行的文件可能会缩小 99 个字符,因为只有最后一行不会以 `CRLF` 字符结尾。 + +之前: + +``` +-rw-rw-r-- 1 shs shs 121 Sep 14 19:11 testfile.txt +``` + +之后: + +``` +-rw-rw-r-- 1 shs shs 118 Sep 14 19:12 testfile.txt +``` + +如果你需要转换大量文件,不用每次修复一个。相反,将它们全部放在一个目录中并运行如下命令: + +``` +$ find . -type f -exec dos2unix {} \; +``` + +在此命令中,我们使用 `find` 查找常规文件,然后运行 `dos2unix` 命令一次转换一个。命令中的 `{}` 将被替换为文件名。运行时,你应该处于包含文件的目录中。此命令可能会损坏其他类型的文件,例如除了文本文件外在上下文中包含八进制 15 的文件(如,镜像文件中的字节)。 + +#### sed + +你还可以使用流编辑器 `sed` 来删除回车符。但是,你必须提供第二个文件名。以下是例子: + +``` +$ sed -e “s/^M//” before.txt > after.txt +``` + +一件需要注意的重要的事情是,请不要输入你看到的字符。你必须按下 `Ctrl+V` 后跟 `Ctrl+M` 来输入 `^M`。`s` 是替换命令。斜杠将我们要查找的文本(`Ctrl + M`)和要替换的文本(这里为空)分开。 + +#### vi + +你甚至可以使用 `vi` 删除回车符(`Ctrl+M`),但这里假设你没有打开数百个文件,或许也在做一些其他的修改。你可以键入 `:` 进入命令行,然后输入下面的字符串。与 `sed` 一样,命令中 `^M` 需要通过 `Ctrl+V` 输入 `^`,然后 `Ctrl+M` 插入 `M`。`%s` 是替换操作,斜杠再次将我们要删除的字符和我们想要替换它的文本(空)分开。 `g`(全局)意味在所有行上执行。 + +``` +:%s/^M//g +``` + +### 总结 + +`dos2unix` 命令可能是最容易记住的,也是从文本中删除回车的最可靠的方法。其他选择使用起来有点困难,但它们提供相同的基本功能。 + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3438857/how-to-remove-carriage-returns-from-text-files-on-linux.html + +作者:[Sandra Henry-Stocker][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/ +[b]: https://github.com/lujun9972 +[1]: https://www.flickr.com/photos/kmsiever/5895380540/in/photolist-9YXnf5-cNmpxq-2KEvib-rfecPZ-9snnkJ-2KAcDR-dTxzKW-6WdgaG-6H5i46-2KzTZX-7cnSw7-e3bUdi-a9meh9-Zm3pD-xiFhs-9Hz6YM-ar4DEx-4PXAhw-9wR4jC-cihLcs-asRFJc-9ueXvG-aoWwHq-atwL3T-ai89xS-dgnntH-5en8Te-dMUDd9-aSQVn-dyZqij-cg4SeS-abygkg-f2umXt-Xk129E-4YAeNn-abB6Hb-9313Wk-f9Tot-92Yfva-2KA7Sv-awSCtG-2KDPzb-eoPN6w-FE9oi-5VhaNf-eoQgx7-eoQogA-9ZWoYU-7dTGdG-5B1aSS +[3]: https://www.facebook.com/NetworkWorld/ +[4]: https://www.linkedin.com/company/network-world diff --git a/published/20190918 Microsoft brings IBM iron to Azure for on-premises migrations.md b/published/201909/20190918 Microsoft brings IBM iron to Azure for on-premises migrations.md similarity index 100% rename from published/20190918 Microsoft brings IBM iron to Azure for on-premises migrations.md rename to published/201909/20190918 Microsoft brings IBM iron to Azure for on-premises migrations.md diff --git a/published/20190918 Oracle Unleashes World-s Fastest Database Machine ‘Exadata X8M.md b/published/201909/20190918 Oracle Unleashes World-s Fastest Database Machine ‘Exadata X8M.md similarity index 100% rename from published/20190918 Oracle Unleashes World-s Fastest Database Machine ‘Exadata X8M.md rename to published/201909/20190918 Oracle Unleashes World-s Fastest Database Machine ‘Exadata X8M.md diff --git a/translated/tech/20190921 How to Remove (Delete) Symbolic Links in Linux.md 
b/published/201909/20190921 How to Remove (Delete) Symbolic Links in Linux.md
similarity index 53%
rename from translated/tech/20190921 How to Remove (Delete) Symbolic Links in Linux.md
rename to published/201909/20190921 How to Remove (Delete) Symbolic Links in Linux.md
index ae63aaf0d6..bbe57011eb 100644
--- a/translated/tech/20190921 How to Remove (Delete) Symbolic Links in Linux.md
+++ b/published/201909/20190921 How to Remove (Delete) Symbolic Links in Linux.md
@@ -1,8 +1,8 @@
 [#]: collector: (lujun9972)
 [#]: translator: (arrowfeng)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11382-1.html)
 [#]: subject: (How to Remove (Delete) Symbolic Links in Linux)
 [#]: via: (https://www.2daygeek.com/remove-delete-symbolic-link-softlink-linux/)
 [#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
@@ -10,50 +10,40 @@
 在 Linux 中怎样移除(删除)符号链接
 ======
 
-你可能有时需要在 Linux 上创建或者删除符号链接。
+你可能有时需要在 Linux 上创建或者删除符号链接。如果有,你知道该怎样做吗?之前你做过吗?你踩坑没有?如果你踩过坑,那没什么问题。如果还没有,别担心,我们将在这里帮助你。
 
-如果有,你知道该怎样做吗?
-
-之前你做过吗?你踩坑没有?
-
-如果你踩过坑,那没什么问题。如果还没有,别担心,我们将在这里帮助你。
-
-使用 rm 和 unlink 命令就能完成移除(删除)符号链接的操作。
+使用 `rm` 和 `unlink` 命令就能完成移除(删除)符号链接的操作。
 
 ### 什么是符号链接?
 
-符号链接又称 symlink 或者 软链接,它是一种特殊的文件类型,在 Linux 中该文件指向另一个文件或者目录。
+符号链接(symlink)又称软链接,它是一种特殊的文件类型,在 Linux 中该文件指向另一个文件或者目录。它类似于 Windows 中的快捷方式。它能在相同或者不同的文件系统或分区中指向一个文件或者目录。
 
-它类似于 Windows 中的快捷方式。
-
-它能在相同或者不同的文件系统或分区中指向一个文件或着目录。
-
-符号链接通常用来链接库文件。它也可用于链接日志文件和在 NFS (网络文件系统)上的文件夹。
+符号链接通常用来链接库文件。它也可用于链接日志文件和挂载的 NFS(网络文件系统)上的文件夹。
 
 ### 什么是 rm 命令?
 
-这个 **[rm command][1]** 被用来移除文件和目录。它非常危险,你每次使用 rm 命令的时候要非常小心。
+[rm 命令][1] 被用来移除文件和目录。它非常危险,你每次使用 `rm` 命令的时候要非常小心。
 
 ### 什么是 unlink 命令?
 
-unlink 命令被用来移除特殊的文件。它被作为 GNU Gorutils 的一部分安装了。
+`unlink` 命令被用来移除特殊的文件。它是作为 GNU Coreutils 的一部分安装的。
 
 ### 1) 使用 rm 命令怎样移除符号链接文件
 
-rm 命令是在 Linux 中使用最频繁的命令,它允许我们像下列描述那样去移除符号链接。
+`rm` 命令是在 Linux 中使用最频繁的命令,它允许我们像下列描述那样去移除符号链接。
 
 ```
 # rm symlinkfile
 ```
 
-始终将 rm 命令与 “-i” 一起使用以了解正在执行的操作。
+始终将 `rm` 命令与 `-i` 一起使用以了解正在执行的操作。
 
 ```
 # rm -i symlinkfile1
 rm: remove symbolic link ‘symlinkfile1’? y
 ```
 
-它允许我们一次移除多个符号链接
+它允许我们一次移除多个符号链接:
 
 ```
 # rm -i symlinkfile2 symlinkfile3
@@ -62,11 +52,9 @@ rm: remove symbolic link ‘symlinkfile2’? y
 rm: remove symbolic link ‘symlinkfile3’? y
 ```
 
-### 1a) 使用 rm 命令怎样移除符号链接目录
+#### 1a) 使用 rm 命令怎样移除符号链接目录
 
-这像移除符号链接文件那样。
-
-使用下列命令移除符号链接目录。
+这和移除符号链接文件的方法类似。使用下列命令移除符号链接目录。
 
 ```
 # rm -i symlinkdir
@@ -83,7 +71,7 @@ rm: remove symbolic link ‘symlinkdir1’? y
 rm: remove symbolic link ‘symlinkdir2’? y
 ```
 
-如果你增加 _**“/”**_ 在结尾,这个符号链接目录将不会被删除。如果你加了,你将得到一个错误。
+如果你在结尾增加 `/`,这个符号链接目录将不会被删除。如果你加了,你将得到一个错误。
 
 ```
 # rm -i symlinkdir/
@@ -91,7 +79,7 @@ rm: remove symbolic link ‘symlinkdir2’? 
y rm: cannot remove ‘symlinkdir/’: Is a directory ``` -你可以增加 **“-r”** 去处理上述问题。如果你增加这个参数,它将会删除目标目录下的内容,并且它不会删除这个符号链接文件。 +你可以增加 `-r` 去处理上述问题。**但如果你增加这个参数,它将会删除目标目录下的内容,并且它不会删除这个符号链接文件。**(LCTT 译注:这可能不是你的原意。) ``` # rm -ri symlinkdir/ @@ -104,21 +92,21 @@ rm: cannot remove ‘symlinkdir/’: Not a directory ### 2) 使用 unlink 命令怎样移除符号链接 -unlink 命令删除指定文件。它一次仅接受一个文件。 +`unlink` 命令删除指定文件。它一次仅接受一个文件。 -删除符号链接文件 +删除符号链接文件: ``` # unlink symlinkfile ``` -删除符号链接目录 +删除符号链接目录: ``` # unlink symlinkdir2 ``` -如果你增加 _**“/”**_ 在结尾,你不能使用 unlink 命令删除符号链接目录 +如果你在结尾增加 `/`,你不能使用 `unlink` 命令删除符号链接目录。 ``` # unlink symlinkdir3/ @@ -133,7 +121,7 @@ via: https://www.2daygeek.com/remove-delete-symbolic-link-softlink-linux/ 作者:[Magesh Maruthamuthu][a] 选题:[lujun9972][b] 译者:[arrowfeng](https://github.com/arrowfeng) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20190921 Oracle Autonomous Linux- A Self Updating, Self Patching Linux Distribution for Cloud Computing.md b/published/201909/20190921 Oracle Autonomous Linux- A Self Updating, Self Patching Linux Distribution for Cloud Computing.md similarity index 100% rename from published/20190921 Oracle Autonomous Linux- A Self Updating, Self Patching Linux Distribution for Cloud Computing.md rename to published/201909/20190921 Oracle Autonomous Linux- A Self Updating, Self Patching Linux Distribution for Cloud Computing.md diff --git a/published/201909/20190923 Getting started with data science using Python.md b/published/201909/20190923 Getting started with data science using Python.md new file mode 100644 index 0000000000..319331fadc --- /dev/null +++ b/published/201909/20190923 Getting started with data science using Python.md @@ -0,0 +1,312 @@ +[#]: collector: (lujun9972) +[#]: translator: (GraveAccent) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11406-1.html) +[#]: subject: (Getting started with data science using Python) +[#]: via: (https://opensource.com/article/19/9/get-started-data-science-python) +[#]: author: (Seth Kenlon https://opensource.com/users/seth) + +用 Python 入门数据科学 +====== + +> 使用 Python 开展数据科学为你提供了无限的潜力,使你能够以有意义和启发性的方式解析、解释和组织数据。 + +![](https://img.linux.net.cn/data/attachment/album/201909/30/001853sfkm07j7wfp94dzp.jpg) + +数据科学是计算领域一个令人兴奋的新领域,它围绕分析、可视化和关联以解释我们的计算机收集的有关世界的无限信息而建立。当然,称其为“新”领域有点不诚实,因为该学科是统计学、数据分析和普通而古老的科学观察派生而来的。 + +但是数据科学是这些学科的形式化分支,拥有自己的流程和工具,并且可以广泛应用于以前从未产生过大量不可管理数据的学科(例如视觉效果)。数据科学是一个新的机会,可以重新审视海洋学、气象学、地理学、制图学、生物学、医学和健康以及娱乐行业的数据,并更好地了解其中的模式、影响和因果关系。 + +像其他看似包罗万象的大型领域一样,知道从哪里开始探索数据科学可能会令人生畏。有很多资源可以帮助数据科学家使用自己喜欢的编程语言来实现其目标,其中包括最流行的编程语言之一:Python。使用 [Pandas][2]、[Matplotlib][3] 和 [Seaborn][4] 这些库,你可以学习数据科学的基本工具集。 + +如果你对 Python 的基本用法不是很熟悉,请在继续之前先阅读我的 [Python 介绍][5]。 + +### 创建 Python 虚拟环境 + +程序员有时会忘记在开发计算机上安装了哪些库,这可能导致他们提供了在自己计算机上可以运行,但由于缺少库而无法在所有其它电脑上运行的代码。Python 有一个系统旨在避免这种令人不快的意外:虚拟环境。虚拟环境会故意忽略你已安装的所有 Python 库,从而有效地迫使你一开始使用通常的 Python 进行开发。 + +为了用 `venv` 激活虚拟环境, 为你的环境取个名字 (我会用 `example`) 并且用下面的指令创建它: + +``` +$ python3 -m venv example +``` + +导入source该环境的 `bin` 目录里的 `activate` 文件以激活它: + +``` +$ source ./example/bin/activate +(example) $ +``` + +你现在“位于”你的虚拟环境中。这是一个干净的状态,你可以在其中构建针对该问题的自定义解决方案,但是额外增加了需要有意识地安装依赖库的负担。 + +### 安装 Pandas 和 NumPy + +你必须在新环境中首先安装的库是 Pandas 和 NumPy。这些库在数据科学中很常见,因此你肯定要时不时安装它们。它们也不是你在数据科学中唯一需要的库,但是它们是一个好的开始。 + +Pandas 是使用 BSD 许可证的开源库,可轻松处理数据结构以进行分析。它依赖于 NumPy,这是一个提供多维数组、线性代数和傅立叶变换等等的科学库。使用 `pip3` 安装两者: + +``` +(example) $ pip3 install pandas +``` + +安装 
Pandas 还会安装 NumPy,因此你无需同时指定两者。一旦将它们安装到虚拟环境中,安装包就会被缓存,这样,当你再次安装它们时,就不必从互联网上下载它们。 + +这些是你现在仅需的库。接下来,你需要一些样本数据。 + +### 生成样本数据集 + +数据科学都是关于数据的,幸运的是,科学、计算和政府组织可以提供许多免费和开放的数据集。虽然这些数据集是用于教育的重要资源,但它们具有比这个简单示例所需的数据更多的数据。你可以使用 Python 快速创建示例和可管理的数据集: + +```python +#!/usr/bin/env python3 + +import random + +def rgb(): +    NUMBER=random.randint(0,255)/255 +    return NUMBER + +FILE = open('sample.csv','w') +FILE.write('"red","green","blue"') +for COUNT in range(10): +    FILE.write('\n{:0.2f},{:0.2f},{:0.2f}'.format(rgb(),rgb(),rgb())) +``` + +这将生成一个名为 `sample.csv` 的文件,该文件由随机生成的浮点数组成,这些浮点数在本示例中表示 RGB 值(在视觉效果中通常是数百个跟踪值)。你可以将 CSV 文件用作 Pandas 的数据源。 + +### 使用 Pandas 提取数据 + +Pandas 的基本功能之一是可以提取数据和处理数据,而无需程序员编写仅用于解析输入的新函数。如果你习惯于自动执行此操作的应用程序,那么这似乎不是很特别,但请想象一下在 [LibreOffice][6] 中打开 CSV 并且必须编写公式以在每个逗号处拆分值。Pandas 可以让你免受此类低级操作的影响。以下是一些简单的代码,可用于提取和打印以逗号分隔的值的文件: + +```python +#!/usr/bin/env python3 + +from pandas import read_csv, DataFrame +import pandas as pd + +FILE = open('sample.csv','r') +DATAFRAME = pd.read_csv(FILE) +print(DATAFRAME) +``` + +一开始的几行导入 Pandas 库的组件。Pandas 库功能丰富,因此在寻找除本文中基本功能以外的功能时,你会经常参考它的文档。 + +接下来,通过打开你创建的 `sample.csv` 文件创建变量 `FILE`。Pandas 模块 `read_csv`(在第二行中导入)使用该变量来创建数据帧dataframe。在 Pandas 中,数据帧是二维数组,通常可以认为是表格。数据放入数据帧中后,你可以按列和行进行操作,查询其范围,然后执行更多操作。目前,示例代码仅将该数据帧输出到终端。 + +运行代码。你的输出会和下面的输出有些许不同,因为这些数字都是随机生成的,但是格式都是一样的。 + +``` +(example) $ python3 ./parse.py +    red  green  blue +0  0.31   0.96  0.47 +1  0.95   0.17  0.64 +2  0.00   0.23  0.59 +3  0.22   0.16  0.42 +4  0.53   0.52  0.18 +5  0.76   0.80  0.28 +6  0.68   0.69  0.46 +7  0.75   0.52  0.27 +8  0.53   0.76  0.96 +9  0.01   0.81  0.79 +``` + +假设你只需要数据集中的红色值(`red`),你可以通过声明数据帧的列名称并有选择地仅打印你感兴趣的列来做到这一点: + +```python +from pandas import read_csv, DataFrame +import pandas as pd + +FILE = open('sample.csv','r') +DATAFRAME = pd.read_csv(FILE) + +# define columns +DATAFRAME.columns = [ 'red','green','blue' ] + +print(DATAFRAME['red']) +``` + +现在运行代码,你只会得到红色列: + +``` +(example) $ python3 ./parse.py +0    0.31 +1    0.95 +2    0.00 +3    0.22 +4    0.53 +5    0.76 +6    0.68 +7    0.75 +8    0.53 +9    0.01 +Name: red, dtype: float64 +``` + +处理数据表是经常使用 Pandas 解析数据的好方法。从数据帧中选择数据的方法有很多,你尝试的次数越多就越习惯。 + +### 可视化你的数据 + +很多人偏爱可视化信息已不是什么秘密,这是图表和图形成为与高层管理人员开会的主要内容的原因,也是“信息图”在新闻界如此流行的原因。数据科学家的工作之一是帮助其他人理解大量数据样本,并且有一些库可以帮助你完成这项任务。将 Pandas 与可视化库结合使用可以对数据进行可视化解释。一个流行的可视化开源库是 [Seaborn][7],它基于开源的 [Matplotlib][3]。 + +#### 安装 Seaborn 和 Matplotlib + +你的 Python 虚拟环境还没有 Seaborn 和 Matplotlib,所以用 `pip3` 安装它们。安装 Seaborn 的时候,也会安装 Matplotlib 和很多其它的库。 + +``` +(example) $ pip3 install seaborn +``` + +为了使 Matplotlib 显示图形,你还必须安装 [PyGObject][8] 和 [Pycairo][9]。这涉及到编译代码,只要你安装了必需的头文件和库,`pip3` 便可以为你执行此操作。你的 Python 虚拟环境不了解这些依赖库,因此你可以在环境内部或外部执行安装命令。 + +在 Fedora 和 CentOS 上: + +``` +(example) $ sudo dnf install -y gcc zlib-devel bzip2 bzip2-devel readline-devel \ +sqlite sqlite-devel openssl-devel tk-devel git python3-cairo-devel \ +cairo-gobject-devel gobject-introspection-devel +``` + +在 Ubuntu 和 Debian 上: + +``` +(example) $ sudo apt install -y libgirepository1.0-dev build-essential \ +libbz2-dev libreadline-dev libssl-dev zlib1g-dev libsqlite3-dev wget \ +curl llvm libncurses5-dev libncursesw5-dev xz-utils tk-dev libcairo2-dev +``` + +一旦它们安装好了,你可以安装 Matplotlib 需要的 GUI 组件。 + +``` +(example) $ pip3 install PyGObject pycairo +``` + +### 用 Seaborn 和 Matplotlib 显示图形 + +在你最喜欢的文本编辑器新建一个叫 `vizualize.py` 的文件。要创建数据的线形图可视化,首先,你必须导入必要的 Python 模块 —— 先前代码示例中使用的 Pandas 模块: + +```python +#!/usr/bin/env python3 + +from pandas import read_csv, DataFrame +import pandas as pd +``` + +接下来,导入 
Seaborn、Matplotlib 和 Matplotlib 的几个组件,以便你可以配置生成的图形: + +```python +import seaborn as sns +import matplotlib +import matplotlib.pyplot as plt +from matplotlib import rcParams +``` + +Matplotlib 可以将其输出导出为多种格式,包括 PDF、SVG 和桌面上的 GUI 窗口。对于此示例,将输出发送到桌面很有意义,因此必须将 Matplotlib 后端设置为 `GTK3Agg`。如果你不使用 Linux,则可能需要使用 `TkAgg` 后端。 + +设置完 GUI 窗口以后,设置窗口大小和 Seaborn 预设样式: + +```python +matplotlib.use('GTK3Agg') +rcParams['figure.figsize'] = 11,8 +sns.set_style('darkgrid') +``` + +现在,你的显示已配置完毕,代码已经很熟悉了。使用 Pandas 导入 `sample.csv` 文件,并定义数据帧的列: + +```python +FILE = open('sample.csv','r') +DATAFRAME = pd.read_csv(FILE) +DATAFRAME.columns = [ 'red','green','blue' ] +``` + +有了适当格式的数据,你可以将其绘制在图形中。将每一列用作绘图的输入,然后使用 `plt.show()` 在 GUI 窗口中绘制图形。`plt.legend()` 参数将列标题与图形上的每一行关联(`loc` 参数将图例放置在图表之外而不是在图表上方): + + +```python +for i in DATAFRAME.columns: +    DATAFRAME[i].plot() + +plt.legend(bbox_to_anchor=(1, 1), loc=2, borderaxespad=1) +plt.show() +``` + +运行代码以获得结果。 + +![第一个数据可视化][10] + +你的图形可以准确显示 CSV 文件中包含的所有信息:值在 Y 轴上,索引号在 X 轴上,并且图形中的线也被标识出来了,以便你知道它们代表什么。然而,由于此代码正在跟踪颜色值(至少是假装),所以线条的颜色不仅不直观,而且违反直觉。如果你永远不需要分析颜色数据,则可能永远不会遇到此问题,但是你一定会遇到类似的问题。在可视化数据时,你必须考虑呈现数据的最佳方法,以防止观看者从你呈现的内容中推断出虚假信息。 + +为了解决此问题(并展示一些可用的自定义设置),以下代码为每条绘制的线分配了特定的颜色: + +```python +import matplotlib +from pandas import read_csv, DataFrame +import pandas as pd +import seaborn as sns +import matplotlib.pyplot as plt +from matplotlib import rcParams + +matplotlib.use('GTK3Agg') +rcParams['figure.figsize'] = 11,8 +sns.set_style('whitegrid') + +FILE = open('sample.csv','r') +DATAFRAME = pd.read_csv(FILE) +DATAFRAME.columns = [ 'red','green','blue' ] + +plt.plot(DATAFRAME['red'],'r-') +plt.plot(DATAFRAME['green'],'g-') +plt.plot(DATAFRAME['blue'],'b-') +plt.plot(DATAFRAME['red'],'ro') +plt.plot(DATAFRAME['green'],'go') +plt.plot(DATAFRAME['blue'],'bo') + +plt.show() +``` + +这使用特殊的 Matplotlib 表示法为每列创建两个图。每列的初始图分配有一种颜色(红色为 `r`,绿色为 `g`,蓝色为 `b`)。这些是内置的 Matplotlib 设置。 `-` 表示实线(双破折号,例如 `r--`,将创建虚线)。为每个具有相同颜色的列创建第二个图,但是使用 `o` 表示点或节点。为了演示内置的 Seaborn 主题,请将 `sns.set_style` 的值更改为 `whitegrid`。 + +![改进的数据可视化][11] + +### 停用你的虚拟环境 + +探索完 Pandas 和绘图后,可以使用 `deactivate` 命令停用 Python 虚拟环境: + +``` +(example) $ deactivate +$ +``` + +当你想重新使用它时,只需像在本文开始时一样重新激活它即可。重新激活虚拟环境时,你必须重新安装模块,但是它们是从缓存安装的,而不是从互联网下载的,因此你不必联网。 + +### 无尽的可能性 + +Pandas、Matplotlib、Seaborn 和数据科学的真正力量是无穷的潜力,使你能够以有意义和启发性的方式解析、解释和组织数据。下一步是使用你在本文中学到的新工具探索简单的数据集。Matplotlib 和 Seaborn 不仅有折线图,还有很多其他功能,因此,请尝试创建条形图或饼图或完全不一样的东西。 + +一旦你了解了你的工具集并对如何关联数据有了一些想法,则可能性是无限的。数据科学是寻找隐藏在数据中的故事的新方法。让开源成为你的媒介。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/9/get-started-data-science-python + +作者:[Seth Kenlon][a] +选题:[lujun9972][b] +译者:[GraveAccent](https://github.com/GraveAccent) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_graph_stats_blue.png?itok=OKCc_60D (Metrics and a graph illustration) +[2]: https://pandas.pydata.org/ +[3]: https://matplotlib.org/ +[4]: https://seaborn.pydata.org/index.html +[5]: https://opensource.com/article/17/10/python-101 +[6]: http://libreoffice.org +[7]: https://seaborn.pydata.org/ +[8]: https://pygobject.readthedocs.io/en/latest/getting_started.html +[9]: https://pycairo.readthedocs.io/en/latest/ +[10]: https://opensource.com/sites/default/files/uploads/seaborn-matplotlib-graph_0.png 
(First data visualization) +[11]: https://opensource.com/sites/default/files/uploads/seaborn-matplotlib-graph_1.png (Improved data visualization) diff --git a/published/201909/20190923 Introduction to the Linux chgrp and newgrp commands.md b/published/201909/20190923 Introduction to the Linux chgrp and newgrp commands.md new file mode 100644 index 0000000000..7dffe09737 --- /dev/null +++ b/published/201909/20190923 Introduction to the Linux chgrp and newgrp commands.md @@ -0,0 +1,131 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11399-1.html) +[#]: subject: (Introduction to the Linux chgrp and newgrp commands) +[#]: via: (https://opensource.com/article/19/9/linux-chgrp-and-newgrp-commands) +[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss) + +chgrp 和 newgrp 命令简介 +====== + +> chgrp 和 newgrp 命令可帮助你管理需要维护组所有权的文件。 + +![](https://img.linux.net.cn/data/attachment/album/201909/28/155554aezllilzbedetm43.jpg) + +在最近的一篇文章中,我介绍了 [chown][2] 命令,它用于修改系统上的文件所有权。回想一下,所有权是分配给一个对象的用户和组的组合。`chgrp` 和 `newgrp` 命令为管理需要维护组所有权的文件提供了帮助。 + +### 使用 chgrp + +`chgrp` 只是更改文件的组所有权。这与 `chown :` 命令相同。你可以使用: + +``` +$chown :alan mynotes +``` + +或者: + +``` +$chgrp alan mynotes +``` + +#### 递归 + +`chgrp` 和它的一些参数可以用在命令行和脚本中。就像许多其他 Linux 命令一样,`chgrp` 有一个递归参数 `-R`。如下所示,你需要它来对文件夹及其内容进行递归操作。我加了 `-v`(详细)参数,因此 `chgrp` 会告诉我它在做什么: + + +``` +$ ls -l . conf +.: +drwxrwxr-x 2 alan alan 4096 Aug  5 15:33 conf + +conf: +-rw-rw-r-- 1 alan alan 0 Aug  5 15:33 conf.xml +# chgrp -vR delta conf +changed group of 'conf/conf.xml' from alan to delta +changed group of 'conf' from alan to delta +``` + +#### 参考 + +当你要更改文件的组以匹配特定的配置,或者当你不知道具体的组时(比如你运行一个脚本时),可使用参考文件 (`--reference=RFILE`)。你可以复制另外一个作为参考的文件(RFILE)的组。比如,为了撤销上面的更改 (请注意,点 `.` 代表当前工作目录): + +``` +$ chgrp -vR --reference=. conf +``` + +#### 报告更改 + +大多数命令都有用于控制其输出的参数。最常见的是 `-v` 来启用详细信息,而且 `chgrp` 命令也拥有详细模式。它还具有 `-c`(`--changes`)参数,指示 `chgrp` 仅在进行了更改时报告。`chgrp` 还会报告其他内容,例如是操作不被允许时。 + +参数 `-f`(`--silent`、`--quiet`)用于禁止显示大部分错误消息。我将在下一节中使用此参数和 `-c` 来显示实际更改。 + +#### 保持根目录 + +Linux 文件系统的根目录(`/`)应该受到高度重视。如果命令在此层级犯了一个错误,那么后果可能是可怕的,并会让系统无法使用。尤其是在运行一个会递归修改甚至删除的命令时。`chgrp` 命令有一个可用于保护和保持根目录的参数。它是 `--preserve-root`。如果在根目录中将此参数和递归一起使用,那么什么也不会发生,而是会出现一条消息: + +``` +[root@localhost /]# chgrp -cfR --preserve-root a+w / +chgrp: it is dangerous to operate recursively on '/' +chgrp: use --no-preserve-root to override this failsafe +``` + +不与递归(-R)结合使用时,该选项无效。但是,如果该命令由 `root` 用户运行,那么 `/` 的权限将会更改,但其下的其他文件或目录的权限则不会被更改: + +``` +[alan@localhost /]$ chgrp -c --preserve-root alan / +chgrp: changing group of '/': Operation not permitted +[root@localhost /]# chgrp -c --preserve-root alan / +changed group of '/' from root to alan +``` + +令人惊讶的是,它似乎不是默认参数。而选项 `--no-preserve-root` 是默认的。如果你在不带“保持”选项的情况下运行上述命令,那么它将默认为“无保持”模式,并可能会更改不应更改的文件的权限: + +``` +[alan@localhost /]$ chgrp -cfR alan / +changed group of '/dev/pts/0' from tty to alan +changed group of '/dev/tty2' from tty to alan +changed group of '/var/spool/mail/alan' from mail to alan +``` + +### 关于 newgrp + +`newgrp` 命令允许用户覆盖当前的主要组。当你在所有文件必须有相同的组所有权的目录中操作时,`newgrp` 会很方便。假设你的内网服务器上有一个名为 `share` 的目录,不同的团队在其中存储市场活动照片。组名为 `share`。当不同的用户将文件放入目录时,文件的主要组可能会变得混乱。每当添加新文件时,你都可以运行 `chgrp` 将错乱的组纠正为 `share`: + +``` +$ cd share +ls -l +-rw-r--r--. 1 alan share 0 Aug  7 15:35 pic13 +-rw-r--r--. 1 alan alan 0 Aug  7 15:35 pic1 +-rw-r--r--. 1 susan delta 0 Aug  7 15:35 pic2 +-rw-r--r--. 1 james gamma 0 Aug  7 15:35 pic3 +-rw-rw-r--. 
1 bill contract  0 Aug  7 15:36 pic4 +``` + +我在 [chmod 命令][3]的文章中介绍了 `setgid` 模式。它是解决此问题的一种方法。但是,假设由于某种原因未设置 `setgid` 位。`newgrp` 命令在此时很有用。在任何用户将文件放入 `share` 目录之前,他们可以运行命令 `newgrp share`。这会将其主要组切换为 `share`,因此他们放入目录中的所有文件都将有 `share` 组,而不是用户自己的主要组。完成后,用户可以使用以下命令切换回常规主要组(举例): + +``` +newgrp alan +``` + +### 总结 + +了解如何管理用户、组和权限非常重要。最好知道一些替代方法来解决可能遇到的问题,因为并非所有环境都以相同的方式设置。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/9/linux-chgrp-and-newgrp-commands + +作者:[Alan Formy-Duval][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/alanfdosshttps://opensource.com/users/sethhttps://opensource.com/users/alanfdosshttps://opensource.com/users/seth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/community-penguins-osdc-lead.png?itok=BmqsAF4A (Penguins walking on the beach ) +[2]: https://opensource.com/article/19/8/linux-chown-command +[3]: https://opensource.com/article/19/8/linux-chmod-command diff --git a/published/201909/20190927 IBM brings blockchain to Red Hat OpenShift- adds Apache CouchDB for hybrid cloud customers.md b/published/201909/20190927 IBM brings blockchain to Red Hat OpenShift- adds Apache CouchDB for hybrid cloud customers.md new file mode 100644 index 0000000000..ac533a1b2c --- /dev/null +++ b/published/201909/20190927 IBM brings blockchain to Red Hat OpenShift- adds Apache CouchDB for hybrid cloud customers.md @@ -0,0 +1,64 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11405-1.html) +[#]: subject: (IBM brings blockchain to Red Hat OpenShift; adds Apache CouchDB for hybrid cloud customers) +[#]: via: (https://www.networkworld.com/article/3441362/ibm-brings-blockchain-to-red-hat-openshift-adds-apache-couchdb-for-hybrid-cloud-customers.html) +[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/) + +IBM 将区块链引入红帽 OpenShift;为混合云客户添加了Apache CouchDB +====== + +> IBM 在其区块链平台上增加了红帽 OpenShift 支持,并将用于 Apache CouchDB 的 Kubernetes Operator 引入其混合云服务中。 + +![](https://images.idgesg.net/images/article/2019/08/cloudjourney1200x800-100808549-large.jpg) + +IBM 本周继续推进其红帽和开源集成工作,在其[区块链][1]平台上添加了红帽 OpenShift 支持,并在其[混合云][2]服务产品之外为 Apache CouchDB 引入了 Kubernetes Operator。 + +在该公司的旗舰级企业 Kubernetes 平台 [红帽 OpenShift 上部署 IBM 区块链][3] 的能力,意味着 IBM 区块链的开发人员将能够在本地、公共云或混合云架构中部署安全软件。 + +区块链是一个分布式数据库,维护着一个不断增长的记录列表,可以使用哈希技术对其进行验证,并且 IBM 区块链平台包括用于构建、操作、治理和发展受保护的区块链网络的工具。 + +IBM 表示,其区块链 / OpenShift 组合的目标客户面对的公司客户是:希望保留区块链分类帐副本并在自己的基础设施上运行工作负载以实现安全性,降低风险或合规性;需要将数据存储在特定位置以满足数据驻留要求;需要在多个云或混合云架构中部署区块链组件。 + +自 7 月份完成对红帽的收购以来,IBM 一直在围绕红帽基于 Kubernetes 的 OpenShift 容器平台构建云开发生态系统。最近,这位蓝色巨人将其[新 z15 大型机与 IBM 的红帽][4]技术融合在一起,称它将为红帽 OpenShift 容器平台提供 IBM z/OS 云代理。该产品将通过连接到 Kubernetes 容器为用户提供 z/OS 计算资源的直接自助访问。 + +IBM 表示,打算在 IBM z 系列和 LinuxONE 产品上向 Linux 提供 IBM [Cloud Pak 产品][5]。Cloud Paks 是由 OpenShift 与 100 多种其他 IBM 软件产品组成的捆绑包。LinuxONE 是 IBM 专为支持 Linux 环境而设计的非常成功的大型机系统。 + +IBM 表示,愿景是使支持 OpenShift 的 IBM 软件成为客户用来转变其组织的基础构建组件。 + +IBM 表示:“我们的大多数客户都需要支持混合云工作负载以及可在任何地方运行这些工作负载的灵活性的解决方案,而用于红帽的 z/OS 云代理将成为我们在平台上启用云原生的关键。” + +在相关新闻中,IBM 宣布支持开源 Apache CouchDB,这是 [Apache CouchDB][7] 的 Kubernetes Operator,并且该 Operator 已通过认证可与红帽 OpenShift 一起使用。Operator 可以自动部署、管理和维护 Apache CouchDB 部署。Apache CouchDB 是非关系型开源 
NoSQL 数据库。 + +在最近的 [Forrester Wave 报告][8]中,研究人员说:“企业喜欢 NoSQL 这样的能力,可以使用低成本服务器和可以存储、处理和访问任何类型的业务数据的灵活的无模式模型进行横向扩展。NoSQL 平台为企业基础设施专业人士提供了对数据存储和处理的更好控制,并提供了可加速应用程序部署的配置。当许多组织使用 NoSQL 来补充其关系数据库时,一些组织已开始替换它们以支持更好的性能、扩展规模并降低其数据库成本。” + +当前,IBM 云使用 Cloudant Db 服务作为其针对新的云原生应用程序的标准数据库。IBM 表示,对 CouchDB 的强大支持为用户提供了替代方案和后备选项。IBM 表示,能够将它们全部绑定到红帽 OpenShift Kubernetes 部署中,可以使客户在部署应用程序并在多个云环境中移动数据时使用数据库本地复制功能来维持对数据的低延迟访问。 + +“我们的客户正在转向基于容器化和[微服务][9]的架构,以提高速度、敏捷性和运营能力。在云原生应用程序开发中,应用程序需要具有支持可伸缩性、可移植性和弹性的数据层。”IBM 院士兼云数据库副总裁 Adam Kocoloski 写道,“我们相信数据可移植性和 CouchDB 可以大大改善多云架构的功能,使客户能够构建真正可在私有云、公共云和边缘位置之间移植的解决方案。” + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3441362/ibm-brings-blockchain-to-red-hat-openshift-adds-apache-couchdb-for-hybrid-cloud-customers.html + +作者:[Michael Cooney][a] +选题:[lujun9972][b] +译者:[wxy](https://github.com/wxy) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Michael-Cooney/ +[b]: https://github.com/lujun9972 +[1]: https://www.networkworld.com/article/3330937/how-blockchain-will-transform-the-iot.html +[2]: https://www.networkworld.com/article/3268448/what-is-hybrid-cloud-really-and-whats-the-best-strategy.html +[3]: https://www.ibm.com/blogs/blockchain/2019/09/ibm-blockchain-platform-meets-red-hat-openshift/ +[4]: https://www.networkworld.com/article/3438542/ibm-z15-mainframe-amps-up-cloud-security-features.html +[5]: https://www.networkworld.com/article/3429596/ibm-fuses-its-software-with-red-hats-to-launch-hybrid-cloud-juggernaut.html +[6]: https://www.networkworld.com/article/3400740/achieve-compliant-cost-effective-hybrid-cloud-operations.html +[7]: https://www.ibm.com/cloud/learn/couchdb +[8]: https://reprints.forrester.com/#/assets/2/363/RES136481/reports +[9]: https://www.networkworld.com/article/3137250/what-you-need-to-know-about-microservices.html +[10]: https://www.facebook.com/NetworkWorld/ +[11]: https://www.linkedin.com/company/network-world diff --git a/published/20190901 Best Linux Distributions For Everyone in 2019.md b/published/20190901 Best Linux Distributions For Everyone in 2019.md new file mode 100644 index 0000000000..4a6e136180 --- /dev/null +++ b/published/20190901 Best Linux Distributions For Everyone in 2019.md @@ -0,0 +1,386 @@ +[#]: collector: (lujun9972) +[#]: translator: (heguangzhi) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11411-1.html) +[#]: subject: (Best Linux Distributions For Everyone in 2019) +[#]: via: (https://itsfoss.com/best-linux-distributions/) +[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) + +2019 年最好的 Linux 发行版 +====== + +> 哪个是最好的 Linux 发行版呢?这个问题是没有明确的答案的。这就是为什么我们按各种分类汇编了这个最佳 Linux 列表的原因。 + +有许多 Linux 发行版,我甚至想不出一个确切的数量,因为你会发现很多不同的 Linux 发行版。 + +其中有些只是另外一个的复制品,而有些往往是独一无二的。这虽然有点混乱——但这也是 Linux 的优点。 + +不用担心,尽管有成千上万的发行版,在这篇文章中,我已经列出了目前最好的 Linux 发行版。当然,这个列表是主观的。但是,在这里,我们试图对发行版进行分类——每个发行版本都有自己的特点的。 + +* 面向初学者的 Linux 用户的最佳发行版 +* 最佳 Linux 服务器发行版 +* 可以在旧计算机上运行的最佳 Linux 发行版 +* 面向高级 Linux 用户的最佳发行版 +* 最佳常青树 Linux 发行版 + +**注:** 该列表没有特定的排名顺序。 + +### 面向初学者的最佳 Linux 发行版 + +在这个分类中,我们的目标是列出开箱即用的易用发行版。你不需要深度学习,你可以在安装后马上开始使用,不需要知道任何命令或技巧。 + +#### Ubuntu + +![][6] + +Ubuntu 无疑是最流行的 Linux 发行版之一。你甚至可以发现它已经预装在很多笔记本电脑上了。 + +用户界面很容易适应。如果你愿意,你可以根据自己的要求轻松定制它的外观。无论哪种情况,你都可以选择安装一个主题。你可以从了解更多关于[如何在 Ubuntu 安装主题的][7]的信息来起步。 + +除了它本身提供的功能外,你会发现一个巨大的 Ubuntu 用户在线社区。因此,如果你有问题——可以去任何论坛(或版块)寻求帮助。如果你想直接寻找解决方案,你应该看看我们对 
[Ubuntu][8] 的报道(我们有很多关于 Ubuntu 的教程和建议)。 + +- [Ubuntu][9] + +#### Linux Mint + +![][10] + +Linux Mint Cinnamon 是另一个受初学者欢迎的 Linux 发行版。默认的 Cinnamon 桌面类似于 Windows XP,这就是为什么当 Windows XP 停止维护时许多用户选择它的原因。 + +Linux Mint 基于 Ubuntu,因此它具有适用于 Ubuntu 的所有应用程序。简单易用是它成为 Linux 新用户首选的原因。 + +- [Linux Mint][11] + +#### elementary OS + +![][12] + +elementary OS 是我用过的最漂亮的 Linux 发行版之一。用户界面类似于苹果操作系统——所以如果你已经使用了苹果系统,则很容易适应。 + +该发行版基于 Ubuntu,致力于提供一个用户友好的 Linux 环境,该环境在考虑性能的同时尽可能美观。如果你选择安装 elementary OS,这份[在安装 elementary OS 后要做的 11 件事的清单][13]会派上用场。 + +- [elementary OS][14] + +#### MX Linux + +![][15] + +大约一年前,MX Linux 成为众人瞩目的焦点。现在(在发表这篇文章的时候),它是 [DistroWatch.com][16] 上最受欢迎的 Linux 发行版。如果你还没有使用过它,那么当你开始使用它时,你会感到惊讶。 + +与 Ubuntu 不同,MX Linux 是一个基于 Debian 的日益流行的发行版,采用 Xfce 作为其桌面环境。除了无与伦比的稳定性之外,它还配备了许多图形用户界面工具,这使得任何习惯了 Windows/Mac 的用户易于使用它。 + +此外,软件包管理器还专门针对一键安装进行了量身定制。你甚至可以搜索 [Flatpak][18] 软件包并立即安装它(默认情况下,Flathub 在软件包管理器中是可用的来源之一)。 + +- [MX Linux][19] + +#### Zorin OS + +![][20] + +Zorin OS 是又一个基于 Ubuntu 的发行版,它又是桌面上最漂亮、最直观的操作系统之一。尤其是在[Zorin OS 15 发布][21]之后——我绝对会向没有任何 Linux 经验的用户推荐它。它也引入了许多基于图形用户界面的应用程序。 + +你也可以将其安装在旧电脑上,但是,请确保选择“Lite”版本。此外,你还有“Core”、“Education”和 “Ultimate”版本可以选择。你可以选择免费安装 Core 版,但是如果你想支持开发人员并帮助改进 Zorin,请考虑获得 Ultimate 版。 + +Zorin OS 是由两名爱尔兰的青少年创建的。你可以[在这里阅读他们的故事][22]。 + +- [Zorin OS][23] + +#### Pop!_OS + +![](https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/04/pop-1.jpg?w=800&ssl=1) + +Sytem76 的 Pop!_OS 是开发人员或计算机科学专业人员的理想选择。当然,不仅限于编码人员,如果你刚开始使用 Linux,这也是一个很好的选择。它基于 Ubuntu,但是其 UI 感觉更加直观和流畅。除了 UI 外,它还强制执行全盘加密。 + +你可以通过文章下面的评论看到,我们的许多读者似乎都喜欢(并坚持使用)它。如果你对此感到好奇,也应该查看一下我们关于 Phillip Prado 的 [Pop!_OS 的动手实践](https://itsfoss.com/pop-os-linux-review/)的文章。 + +(LCTT 译注:这段推荐是原文后来补充的,因为原文下面很多人在评论推荐。) + +- [Pop!_OS](https://system76.com/pop) + +#### 其他选择 + +[深度操作系统][24] 和其他的 Ubuntu 变种(如 Kubuntu、Xubuntu)也是初学者的首选。如果你想寻求更多的选择,你可以看看。(LCTT 译注:我知道你们肯定对将深度操作系统列入其它不满意——这个锅归原作者。) + +如果你想要挑战自己,你可以试试 Ubuntu 之外的 Fedora —— 但是一定要看看我们关于 [Ubuntu 和 Fedora 对比][25]的文章,从桌面的角度做出更好的选择。 + +### 最好的服务器发行版 + +对于服务器来说,选择 Linux 发行版取决于稳定性、性能和企业级支持。如果你只是尝试,则可以尝试任何你想要的发行版。 + +但是,如果你要为 Web 服务器或任何重要的组件安装它,你应该看看我们的一些建议。 + +#### Ubuntu 服务器 + +根据你的需要,Ubuntu 为你的服务器提供了不同的选项。如果你正在寻找运行在 AWS、Azure、谷歌云平台等平台上的优化解决方案,[Ubuntu Cloud][26] 是一个很好的选择。 + +无论是哪种情况,你都可以选择 Ubuntu 服务器包,并将其安装在你的服务器上。然而,Ubuntu 在云上部署时也是最受欢迎的 Linux 发行版(根据数字判断——[来源1][27]、[来源2][28])。 + +请注意,除非你有特殊要求,我们建议你选择 LTS 版。 + +- [Ubuntu Server][29] + +#### 红帽企业版 Linux(RHEL) + +红帽企业版 Linux(RHEL)是面向企业和组织的顶级 Linux 平台。如果我们按数字来看,红帽可能不是服务器领域最受欢迎的。但是,有相当一部分企业用户依赖于 RHEL (比如联想)。 + +从技术上讲,Fedora 和红帽企业版是相关联的。无论红帽要支持什么——在出现在 RHEL 之前,都要在 Fedora 上进行测试。我不是定制需求的服务器发行版专家,所以你一定要查看他们的[官方文档][30]以了解它是否适合你。 + +- [RHEL][31] + +#### SUSE Linux 企业服务器(SLES) + +![][32] + +别担心,不要把这和 OpenSUSE 混淆。一切都以一个共同的品牌 “SUSE” 命名 —— 但是 OpenSUSE 是一个开源发行版,目标是社区,并且由社区维护。 + +SUSE Linux 企业服务器(SLES)是基于云的服务器最受欢迎的解决方案之一。为了获得管理开源解决方案的优先支持和帮助,你必须选择订阅。 + +- [SLES][33] + +#### CentOS + +![][34] + +正如我提到的,对于 RHEL 你需要订阅。而 CentOS 更像是 RHEL 的社区版,因为它是从 RHEL 的源代码中派生出来的。而且,它是开源的,也是免费的。尽管与过去几年相比,使用 CentOS 的托管提供商数量明显减少,但这仍然是一个很好的选择。 + +CentOS 可能没有加载最新的软件包,但它被认为是最稳定的发行版之一,你可以在各种云平台上找到 CentOS 镜像。如果没有,你可以选择 CentOS 提供的自托管镜像。 + +- [CentOS][35] + +#### 其他选择 + +你也可以尝试 [Fedora Server][36]或[Debian][37]作为上述发行版的替代品。 + +### 旧电脑的最佳 Linux 发行版 + +如果你有一台旧电脑,或者你真的不需要升级你的系统,你仍然可以尝试一些最好的 Linux 发行版。 + +我们已经详细讨论了一些[最好的轻量级 Linux 发行版][42]。在这里,我们将只提到那些真正突出的东西(以及一些新的补充)。 + +#### Puppy Linux + +![][43] + +Puppy Linux 实际上是最小的发行版本之一。刚开始使用 Linux 时,我的朋友建议我尝试一下 Puppy Linux,因为它可以轻松地在较旧的硬件配置上运行。 + +如果你想在你的旧电脑上享受一次爽快的体验,那就值得去看看。多年来,随着一些新的有用特性的增加,用户体验得到了改善。 + +- [Puppy Linux][44] + +#### Solus Budgie 
+ +![][45] + +在最近的一个主要版本——[Solus 4 Fortitude][46] 之后,它是一个令人印象深刻的轻量级桌面操作系统。你可以选择像 GNOME 或 MATE 这样的桌面环境。然而,Solus Budgie 恰好是我的最爱之一,它是一款适合初学者的功能齐全的 Linux发行版,同时对系统资源要求很少。 + +- [Solus][47] + +#### Bodhi + +![][48] + +Bodhi Linux 构建于 Ubuntu 之上。然而,与Ubuntu不同,它在较旧的配置上运行良好。 + +这个发行版的主要亮点是它的 [Moksha 桌面][49](这是 Enlightenment 17 桌面的延续)。用户体验直观且反应极快。即使我个人不用它,你也应该在你的旧系统上试一试。 + +- [Bodhi Linux][50] + +#### antiX + +![][51] + +antiX 部分担起了 MX Linux 的责任,它是一个轻量级的 Linux 发行版,为新的或旧的计算机量身定制。其用户界面并不令人印象深刻——但它可以像预期的那样工作。 + +它基于 Debian,可以作为一个现场版 CD 发行版使用,而不需要安装它。antiX 还提供现场版引导加载程序。与其他发行版相比,你可以保存设置,这样就不会在每次重新启动时丢失设置。不仅如此,你还可以通过其“持久保留”功能将更改保存到根目录中。 + +因此,如果你正在寻找一个可以在旧硬件上提供快速用户体验的现场版 USB 发行版,antiX 是一个不错的选择。 + +- [antiX][52] + +#### Sparky Linux + +![][53] + +Sparky Linux 基于 Debian,它是理想的低端系统 Linux 发行版。伴随着超快的用户体验,Sparky Linux 为不同的用户提供了几个特殊版本(或变种)。 + +例如,它提供了针对一组用户的稳定版本(和变种)和滚动版本。Sparky Linux GameOver 版非常受游戏玩家欢迎,因为它包含了一堆预装的游戏。你可以查看我们的[最佳 Linux 游戏发行版][54] —— 如果你也想在你的系统上玩游戏。 + +#### 其他选择 + +你也可以尝试 [Linux Lite][55]、[Lubuntu][56]、[Peppermint][57] 等轻量级 Linux 发行版。 + +### 面向高级用户的最佳 Linux 发行版 + +一旦你习惯了各种软件包管理器和命令来帮助你解决任何问题,你就可以开始找寻只为高级用户量身定制的 Linux 发行版。 + +当然,如果你是专业人士,你会有一套具体的要求。然而,如果你已经作为普通用户使用了一段时间——以下发行版值得一试。 + +#### Arch Linux + +![][58] + +Arch Linux 本身是一个简单而强大的发行版,具有陡峭的学习曲线。不像其系统,你不会一次就把所有东西都预先安装好。你必须配置系统并根据需要添加软件包。 + +此外,在安装 Arch Linux 时,必须按照一组命令来进行(没有图形用户界面)。要了解更多信息,你可以按照我们关于[如何安装 Arch Linux][59] 的指南进行操作。如果你要安装它,你还应该知道在[安装 Arch Linux 后需要做的一些基本事情][60]。这会帮助你快速入门。 + +除了多才多艺和简便性之外,值得一提的是 Arch Linux 背后的社区非常活跃。所以,如果你遇到问题,你不用担心。 + +- [Arch Linux][61] + +#### Gentoo + +![][62] + +如果你知道如何编译源代码,Gentoo Linux 是你必须尝试的版本。这也是一个轻量级的发行版,但是,你需要具备必要的技术知识才能使它发挥作用。 + +当然,[官方手册][63]提供了许多你需要知道的信息。但是,如果你不确定自己在做什么——你需要花很多时间去想如何充分利用它。 + +- [Gentoo Linux][64] + +#### Slackware + +![][65] + +Slackware 是仍然重要的最古老的 Linux 发行版之一。如果你愿意编译或开发软件来为自己建立一个完美的环境 —— Slackware 是一个不错的选择。 + +如果你对一些最古老的 Linux 发行版感到好奇,我们有一篇关于[最早的 Linux 发行版][66]可以去看看。 + +尽管使用它的用户/开发人员的数量已经显著减少,但对于高级用户来说,它仍然是一个极好的选择。此外,最近有个新闻是 [Slackware 有了一个 Patreon 捐赠页面][67],我们希望 Slackware 继续作为最好的 Linux 发行版之一存在。 + +- [Slackware][68] + +### 最佳多用途 Linux 发行版 + +有些 Linux 发行版既可以作为初学者友好的桌面又可以作为高级操作系统的服务器。因此,我们考虑为这样的发行版编辑一个单独的部分。 + +如果你不同意我们的观点(或者有建议要补充),请在评论中告诉我们。我们认为,这对于每个用户都可以派上用场: + +#### Fedora + +![][69] + +Fedora 提供两个独立的版本:一个用于台式机/笔记本电脑(Fedora 工作站),另一个用于服务器(Fedora 服务器)。 + +因此,如果你正在寻找一款时髦的桌面操作系统,有点学习曲线,又对用户友好,那么 Fedora 是一个选择。无论是哪种情况,如果你正在为你的服务器寻找一个 Linux 操作系统,这也是一个不错的选择。 + +- [Fedora][70] + +#### Manjaro + +![][71] + +Manjaro 基于 [Arch Linux][72]。不用担心,虽然 Arch Linux 是为高级用户量身定制的,但Manjaro 让新手更容易上手。这是一个简单且对初学者友好的 Linux 发行版。用户界面足够好,并且内置了一系列有用的图形用户界面应用程序。 + +下载时,你可以为 Manjaro 选择[桌面环境][73]。就个人而言,我喜欢 Manjaro 的 KDE 桌面。 + +- [Manjaro Linux][74] + +#### Debian + +![][75] + +嗯,Ubuntu 是基于 Debian 的——所以它本身是一个非常好的发行版本。Debian 是台式机和服务器的理想选择。 + +这可能不是对初学者最友好的操作系统——但你可以通过阅读[官方文档][76]轻松开始。[Debian 10 Buster][77] 的最新版本引入了许多变化和必要的改进。所以,你必须试一试! 
+ +### 总结 + +总的来说,这些是我们推荐你去尝试的最好的 Linux 发行版。是的,还有许多其他的 Linux 发行版值得一提,但是根据个人喜好,对每个发行版来说,取决于个人喜好,这种选择是主观的。 + +但是,我们也为 [Windows 用户][78]、[黑客和脆弱性测试人员][41]、[游戏玩家][54]、[程序员][39]和[偏重隐私者][79]提供了单独的发行版列表所以,如果你感兴趣的话请仔细阅读。 + +如果你认为我们遗漏了你最喜欢的 Linux 发行版,请在下面的评论中告诉我们你的想法,我们将更新这篇文章。 + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/best-linux-distributions/ + +作者:[Ankush Das][a] +选题:[lujun9972][b] +译者:[heguangzhi](https://github.com/heguangzhi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[b]: https://github.com/lujun9972 +[1]: tmp.NoRXbIWHkg#for-beginners +[2]: tmp.NoRXbIWHkg#for-servers +[3]: tmp.NoRXbIWHkg#for-old-computers +[4]: tmp.NoRXbIWHkg#for-advanced-users +[5]: tmp.NoRXbIWHkg#general-purpose +[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/install-google-chrome-ubuntu-10.jpg?ssl=1 +[7]: https://itsfoss.com/install-themes-ubuntu/ +[8]: https://itsfoss.com/tag/ubuntu/ +[9]: https://ubuntu.com/download/desktop +[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/linux-Mint-19-desktop.jpg?ssl=1 +[11]: https://www.linuxmint.com/ +[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/elementary-os-juno-feat.jpg?ssl=1 +[13]: https://itsfoss.com/things-to-do-after-installing-elementary-os-5-juno/ +[14]: https://elementary.io/ +[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/mx-linux.jpg?ssl=1 +[16]: https://distrowatch.com/ +[17]: https://en.wikipedia.org/wiki/Linux_distribution#Rolling_distributions +[18]: https://flatpak.org/ +[19]: https://mxlinux.org/ +[20]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/zorin-os-15.png?ssl=1 +[21]: https://itsfoss.com/zorin-os-15-release/ +[22]: https://itsfoss.com/zorin-os-interview/ +[23]: https://zorinos.com/ +[24]: https://www.deepin.org/en/ +[25]: https://itsfoss.com/ubuntu-vs-fedora/ +[26]: https://ubuntu.com/download/cloud +[27]: https://w3techs.com/technologies/details/os-linux/all/all +[28]: https://thecloudmarket.com/stats +[29]: https://ubuntu.com/download/server +[30]: https://developers.redhat.com/products/rhel/docs-and-apis +[31]: https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux +[32]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/SUSE-Linux-Enterprise.jpg?ssl=1 +[33]: https://www.suse.com/products/server/ +[34]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/centos.png?ssl=1 +[35]: https://www.centos.org/ +[36]: https://getfedora.org/en/server/ +[37]: https://www.debian.org/distrib/ +[38]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/coding.jpg?ssl=1 +[39]: https://itsfoss.com/best-linux-distributions-progammers/ +[40]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/hacking.jpg?ssl=1 +[41]: https://itsfoss.com/linux-hacking-penetration-testing/ +[42]: https://itsfoss.com/lightweight-linux-beginners/ +[43]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/puppy-linux-bionic.jpg?ssl=1 +[44]: http://puppylinux.com/ +[45]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/solus-4-featured.jpg?resize=800%2C450&ssl=1 +[46]: https://itsfoss.com/solus-4-release/ +[47]: https://getsol.us/home/ +[48]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/bodhi-linux.png?fit=800%2C436&ssl=1 +[49]: http://www.bodhilinux.com/moksha-desktop/ +[50]: http://www.bodhilinux.com/ +[51]: 
https://i2.wp.com/itsfoss.com/wp-content/uploads/2017/10/antix-linux-screenshot.jpg?ssl=1 +[52]: https://antixlinux.com/ +[53]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/sparky-linux.jpg?ssl=1 +[54]: https://itsfoss.com/linux-gaming-distributions/ +[55]: https://www.linuxliteos.com/ +[56]: https://lubuntu.me/ +[57]: https://peppermintos.com/ +[58]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/arch_linux_screenshot.jpg?ssl=1 +[59]: https://itsfoss.com/install-arch-linux/ +[60]: https://itsfoss.com/things-to-do-after-installing-arch-linux/ +[61]: https://www.archlinux.org +[62]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/gentoo-linux.png?ssl=1 +[63]: https://wiki.gentoo.org/wiki/Handbook:Main_Page +[64]: https://www.gentoo.org +[65]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/slackware-screenshot.jpg?ssl=1 +[66]: https://itsfoss.com/earliest-linux-distros/ +[67]: https://distrowatch.com/dwres.php?resource=showheadline&story=8743 +[68]: http://www.slackware.com/ +[69]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/fedora-overview.png?ssl=1 +[70]: https://getfedora.org/ +[71]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/manjaro-gnome.jpg?ssl=1 +[72]: https://www.archlinux.org/ +[73]: https://itsfoss.com/glossary/desktop-environment/ +[74]: https://manjaro.org/ +[75]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/debian-screenshot.png?ssl=1 +[76]: https://www.debian.org/releases/stable/installmanual +[77]: https://itsfoss.com/debian-10-buster/ +[78]: https://itsfoss.com/windows-like-linux-distributions/ +[79]: https://itsfoss.com/privacy-focused-linux-distributions/ diff --git a/published/20190911 4 open source cloud security tools.md b/published/20190911 4 open source cloud security tools.md new file mode 100644 index 0000000000..f2c9de3893 --- /dev/null +++ b/published/20190911 4 open source cloud security tools.md @@ -0,0 +1,88 @@ +[#]: collector: (lujun9972) +[#]: translator: (hopefully2333) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11432-1.html) +[#]: subject: (4 open source cloud security tools) +[#]: via: (https://opensource.com/article/19/9/open-source-cloud-security) +[#]: author: (Alison Naylor https://opensource.com/users/asnaylor,Anderson Silva https://opensource.com/users/ansilva) + +4 种开源云安全工具 +====== + +> 查找并排除你存储在 AWS 和 GitHub 中的数据里的漏洞。 + +![Tools in a cloud][1] + +如果你的日常工作是开发者、系统管理员、全栈工程师或者是网站可靠性工程师(SRE),工作内容包括使用 Git 从 GitHub 上推送、提交和拉取,并部署到亚马逊 Web 服务上(AWS),安全性就是一个需要持续考虑的一个点。幸运的是,开源工具能帮助你的团队避免犯常见错误,这些常见错误会导致你的组织损失数千美元。 + +本文介绍了四种开源工具,当你在 GitHub 和 AWS 上进行开发时,这些工具能帮助你提升项目的安全性。同样的,本着开源的精神,我会与三位安全专家——[Travis McPeak][2],奈飞高级云安全工程师;[Rich Monk][3],红帽首席高级信息安全分析师;以及 [Alison Naylor][4],红帽首席信息安全分析师——共同为本文做出贡献。 + +我们已经按场景对每个工具都做了区分,但是它们并不是相互排斥的。 + +### 1、使用 gitrob 发现敏感数据 + +你需要发现任何出现于你们团队的 Git 仓库中的敏感信息,以便你能将其删除。借助专注于攻击应用程序或者操作系统的工具以使用红/蓝队模型,这样可能会更有意义,在这个模型中,一个信息安全团队会划分为两块,一个是攻击团队(又名红队),以及一个防守团队(又名蓝队)。有一个红队来尝试渗透你的系统和应用要远远好于等待一个攻击者来实际攻击你。你的红队可能会尝试使用 [Gitrob][5],该工具可以克隆和爬取你的 Git 仓库,以此来寻找凭证和敏感信息。 + +即使像 Gitrob 这样的工具可以被用来造成破坏,但这里的目的是让你的信息安全团队使用它来发现无意间泄露的属于你的组织的敏感信息(比如 AWS 的密钥对或者是其他被失误提交上去的凭证)。这样,你可以修整你的仓库并清除敏感数据——希望能赶在攻击者发现它们之前。记住不光要修改受影响的文件,还要[删除它们的历史记录][6]。 + +### 2、使用 git-secrets 来避免合并敏感数据 + +虽然在你的 Git 仓库里发现并移除敏感信息很重要,但在一开始就避免合并这些敏感信息岂不是更好?即使错误地提交了敏感信息,使用 [git-secrets][7] 可以避免你陷入公开的困境。这款工具可以帮助你设置钩子,以此来扫描你的提交、提交信息和合并信息,寻找常见的敏感信息模式。注意你选择的模式要匹配你的团队使用的凭证,比如 AWS 访问密钥和秘密密钥。如果发现了一个匹配项,你的提交就会被拒绝,一个潜在的危机就此得到避免。 + +为你已有的仓库设置 git-secrets 
是很简单的,而且你可以使用一个全局设置来保护所有你以后要创建或克隆的仓库。你同样可以在公开你的仓库之前,使用 git-secrets 来扫描它们(包括之前所有的历史版本)。 + +### 3、使用 Key Conjurer 创建临时凭证 + +有一点额外的保险来防止无意间公开了存储的敏感信息,这是很好的事,但我们还可以做得更好,就完全不存储任何凭证。追踪凭证,谁访问了它,存储到了哪里,上次更新是什么时候——太麻烦了。然而,以编程的方式生成的临时凭证就可以避免大量的此类问题,从而巧妙地避开了在 Git 仓库里存储敏感信息这一问题。使用 [Key Conjurer][8],它就是为解决这一需求而被创建出来的。有关更多 Riot Games 为什么创建 Key Conjurer,以及 Riot Games 如何开发的 Key Conjurer,请阅读 [Key Conjurer:我们最低权限的策略][9]。 + +### 4、使用 Repokid 自动化地提供最小权限 + +任何一个参加过基本安全课程的人都知道,设置最小权限是基于角色的访问控制的最佳实现。难过的是,离开校门,会发现手动运用最低权限策略会变得如此艰难。一个应用的访问需求会随着时间的流逝而变化,开发人员又太忙了没时间去手动削减他们的权限。[Repokid][10] 使用 AWS 提供提供的有关身份和访问管理(IAM)的数据来自动化地调整访问策略。Repokid 甚至可以在 AWS 中为超大型组织提供自动化地最小权限设置。 + +### 工具而已,又不是大招 + +这些工具并不是什么灵丹妙药,它们只是工具!所以,在尝试使用这些工具或其他的控件之前,请和你的组织里一起工作的其他人确保你们已经理解了你的云服务的使用情况和用法模式。 + +应该严肃对待你的云服务和代码仓库服务,并熟悉最佳实现的做法。下面的文章将帮助你做到这一点。 + +**对于 AWS:** + + * [管理 AWS 访问密钥的最佳实现][11] + * [AWS 安全审计指南][12] + +**对于 GitHub:** + + * [介绍一种新方法来让你的代码保持安全][13] + * [GitHub 企业版最佳安全实现][14] + +同样重要的一点是,和你的安全团队保持联系;他们应该可以为你团队的成功提供想法、建议和指南。永远记住:安全是每个人的责任,而不仅仅是他们的。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/9/open-source-cloud-security + +作者:[Alison Naylor][a1],[Anderson Silva][a2] +选题:[lujun9972][b] +译者:[hopefully2333](https://github.com/hopefully2333) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a1]: https://opensource.com/users/asnaylor +[a2]: https://opensource.com/users/ansilva +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cloud_tools_hardware.png?itok=PGjJenqT (Tools in a cloud) +[2]: https://twitter.com/travismcpeak?lang=en +[3]: https://github.com/rmonk +[4]: https://www.linkedin.com/in/alperkins/ +[5]: https://github.com/michenriksen/gitrob +[6]: https://help.github.com/en/articles/removing-sensitive-data-from-a-repository +[7]: https://github.com/awslabs/git-secrets +[8]: https://github.com/RiotGames/key-conjurer +[9]: https://technology.riotgames.com/news/key-conjurer-our-policy-least-privilege +[10]: https://github.com/Netflix/repokid +[11]: https://docs.aws.amazon.com/general/latest/gr/aws-access-keys-best-practices.html +[12]: https://docs.aws.amazon.com/general/latest/gr/aws-security-audit-guide.html +[13]: https://github.blog/2019-05-23-introducing-new-ways-to-keep-your-code-secure/ +[14]: https://github.blog/2015-10-09-github-enterprise-security-best-practices/ diff --git a/published/20190916 Copying large files with Rsync, and some misconceptions.md b/published/20190916 Copying large files with Rsync, and some misconceptions.md new file mode 100644 index 0000000000..3fe61c4a95 --- /dev/null +++ b/published/20190916 Copying large files with Rsync, and some misconceptions.md @@ -0,0 +1,101 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11415-1.html) +[#]: subject: (Copying large files with Rsync, and some misconceptions) +[#]: via: (https://fedoramagazine.org/copying-large-files-with-rsync-and-some-misconceptions/) +[#]: author: (Daniel Leite de Abreu https://fedoramagazine.org/author/dabreu/) + +使用 rsync 复制大文件的一些误解 +====== + +![][1] + +有一种观点认为,在 IT 行业工作的许多人经常从网络帖子里复制和粘贴。我们都干过,复制粘贴本身不是问题。问题是当我们在不理解它们的情况下这样干。 + +几年前,一个曾经在我团队中工作的朋友需要将虚拟机模板从站点 A 复制到站点 B。他们无法理解为什么复制的文件在站点 A 上为 10GB,但是在站点 B 上却变为 100GB。 + +这位朋友认为 `rsync` 是一个神奇的工具,应该仅“同步”文件本身。但是,我们大多数人所忘记的是了解 `rsync` 的真正含义、用法,以及我认为最重要的是它原本是用来做什么的。本文提供了有关 
`rsync` 的更多信息,并解释了那件事中发生了什么。 + +### 关于 rsync + +`rsync` 是由 Andrew Tridgell 和 Paul Mackerras 创建的工具,其动机是以下问题: + +假设你有两个文件,`file_A` 和 `file_B`。你希望将 `file_B` 更新为与 `file_A` 相同。显而易见的方法是将 `file_A` 复制到 `file_B`。 + +现在,假设这两个文件位于通过慢速通信链接(例如,拨号 IP 链接)连接的两个不同的服务器上。如果`file_A` 大,将其复制到 `file_B` 将会很慢,有时甚至是不可能完成的。为了提高效率,你可以在发送前压缩 `file_A`,但这通常只会获得 2 到 4 倍的效率提升。 + +现在假设 `file_A` 和 `file_B` 非常相似,并且为了加快处理速度,你可以利用这种相似性。一种常见的方法是仅通过链接发送 `file_A` 和 `file_B` 之间的差异,然后使用这个差异列表在远程端重建文件。 + +问题在于,用于在两个文件之间创建一组差异的常规方法依赖于能够读取两个文件。因此,它们要求链接的一端预先提供两个文件。如果它们在同一台计算机上不是同时可用的,则无法使用这些算法。(一旦将文件复制过来,就不需要做对比差异了)。而这是 `rsync` 解决的问题。 + +`rsync` 算法有效地计算源文件的哪些部分与现有目标文件的部分匹配。这样,匹配的部分就不需要通过链接发送了;所需要的只是对目标文件部分的引用。只有源文件中不匹配的部分才需要发送。 + +然后,接收者可以使用对现有目标文件各个部分的引用和原始素材来构造源文件的副本。 + +另外,可以使用一系列常用压缩算法中的任何一种来压缩发送到接收器的数据,以进一步提高速度。 + +我们都知道,`rsync` 算法以一种漂亮的方式解决了这个问题。 + +在 `rsync` 的介绍之后,回到那件事! + +### 问题 1:自动精简配置 + +有两件事可以帮助那个朋友了解正在发生的事情。 + +该文件在其他地方的大小变得越来越大的问题是由源系统上启用了自动精简配置Thin Provisioning(TP)引起的,这是一种优化存储区域网络(SAN)或网络连接存储(NAS)中可用空间效率的方法。 + +由于启用了 TP,源文件只有 10GB,并且在不使用任何其他配置的情况下使用 `rsync` 进行传输时,目标位置将接收到全部 100GB 的大小。`rsync` 无法自动完成该(TP)操作,必须对其进行配置。 + +进行此工作的选项是 `-S`(或 `–sparse`),它告诉 `rsync` 有效地处理稀疏文件。它会按照它说的做!它只会发送该稀疏数据,因此源和目标将有一个 10GB 的文件。 + +### 问题 2:更新文件 + +当发送一个更新的文件时会出现第二个问题。现在目标仅接收 10GB 了,但始终传输的是整个文件(包含虚拟磁盘),即使只是在该虚拟磁盘上更改了一个配置文件。换句话说,只是该文件的一小部分发生了更改。 + +用于此传输的命令是: + +``` +rsync -avS vmdk_file syncuser@host1:/destination +``` + +同样,了解 `rsync` 的工作方式也将有助于解决此问题。 + +上面是关于 `rsync` 的最大误解。我们许多人认为 `rsync` 只会发送文件的增量更新,并且只会自动更新需要更新的内容。**但这不是 `rsync` 的默认行为**。 + +如手册页所述,`rsync` 的默认行为是在目标位置创建文件的新副本,并在传输完成后将其移动到正确的位置。 + +要更改 `rsync` 的默认行为,你必须设置以下标志,然后 `rsync` 将仅发送增量: + +``` +--inplace 原地更新目标文件 +--partial 保留部分传输的文件 +--append 附加数据到更短的文件 +--progress 在传输时显示进度条 +``` + +因此,可以确切地执行我那个朋友想要的功能的完整命令是: + +``` +rsync -av --partial --inplace --append --progress vmdk_file syncuser@host1:/destination +``` + +注意,出于两个原因,这里必须删除稀疏选项 `-S`。首先是通过网络发送文件时,不能同时使用 `–sparse` 和 `–inplace`。其次,当你以前使用过 `–sparse` 发送文件时,就无法再使用 `–inplace` 进行更新。请注意,低于 3.1.3 的 `rsync` 版本将拒绝 `–sparse` 和 `–inplace` 的组合。 + +因此,即使那个朋友最终通过网络复制了 100GB,那也只需发生一次。以下所有更新仅复制差异,从而使复制非常高效。 + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/copying-large-files-with-rsync-and-some-misconceptions/ + +作者:[Daniel Leite de Abreu][a] +选题:[lujun9972][b] +译者:[wxy](https://github.com/wxy) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/dabreu/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2019/08/rsync-816x345.jpg diff --git a/published/20190916 Linux commands to display your hardware information.md b/published/20190916 Linux commands to display your hardware information.md new file mode 100644 index 0000000000..39cd92b312 --- /dev/null +++ b/published/20190916 Linux commands to display your hardware information.md @@ -0,0 +1,363 @@ +[#]: collector: (lujun9972) +[#]: translator: (way-ww) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11422-1.html) +[#]: subject: (Linux commands to display your hardware information) +[#]: via: (https://opensource.com/article/19/9/linux-commands-hardware-information) +[#]: author: (Howard Fosdick https://opensource.com/users/howtechhttps://opensource.com/users/sethhttps://opensource.com/users/sethhttps://opensource.com/users/seth) + +用 Linux 命令显示硬件信息 +====== + +> 通过命令行获取计算机硬件详细信息。 + 
+![](https://img.linux.net.cn/data/attachment/album/201910/04/120618q2k1fflrsy1bgbwp.jpg) + +你可能会有很多的原因需要查清计算机硬件的详细信息。例如,你需要修复某些问题并在论坛上发出请求,人们可能会立即询问你的计算机具体的信息。或者当你想要升级计算机配置时,你需要知道现有的硬件型号和能够升级的型号。这些都需要查询你的计算机具体规格信息。 + +最简单的方法是使用标准的 Linux GUI 程序之一: + +* [i-nex][2] 收集硬件信息,并且类似于 Windows 下流行的 [CPU-Z][3] 的显示。 +* [HardInfo][4] 显示硬件具体信息,甚至包括一组八个的流行的性能基准程序,你可以用它们评估你的系统性能。 +* [KInfoCenter][5] 和 [Lshw][6] 也能够显示硬件的详细信息,并且可以从许多软件仓库中获取。 + +或者,你也可以拆开计算机机箱去查看硬盘、内存和其他设备上的标签信息。或者你可以在系统启动时,按下[相应的按键][7]进入 UEFI 和 BIOS 界面获得信息。这两种方式都会向你显示硬件信息但省略软件信息。 + +你也可以使用命令行获取硬件信息。等一下… 这听起来有些困难。为什么你会要这样做? + +有时候通过使用一条针对性强的命令可以很轻松的找到特定信息。也可能你没有可用的 GUI 程序或者只是不想安装这样的程序。 + +使用命令行的主要原因可能是编写脚本。无论你是使用 Linux shell 还是其他编程语言来编写脚本通常都需要使用命令行。 + +很多检测硬件信息的命令行都需要使用 root 权限。所以要么切换到 root 用户,要么使用 `sudo` 在普通用户状态下发出命令: + +``` +sudo +``` + +并按提示输入你的密码。 + +这篇文章介绍了很多用于发现系统信息的有用命令。文章最后的快速查询表对它们作出了总结。 + +### 硬件概述 + +下面几条命令可以全面概述计算机硬件信息。 + +`inxi` 命令能够列出包括 CPU、图形、音频、网络、驱动、分区、传感器等详细信息。当论坛里的人尝试帮助其他人解决问题的时候,他们常常询问此命令的输出。这是解决问题的标准诊断程序: + +``` +inxi -Fxz +``` + +`-F` 参数意味着你将得到完整的输出,`x` 增加细节信息,`z` 参数隐藏像 MAC 和 IP 等私人身份信息。 + +`hwinfo` 和 `lshw` 命令以不同的格式显示大量相同的信息: + +``` +hwinfo --short +``` + +或 + +``` +lshw -short +``` + +这两条命令的长格式输出非常详细,但也有点难以阅读: + +``` +hwinfo +``` + +或 + +``` +lshw +``` + +### CPU 详细信息 + +通过命令你可以了解关于你的 CPU 的任何信息。使用 `lscpu` 命令或与它相近的 `lshw` 命令查看 CPU 的详细信息: + +``` +lscpu +``` + +或 + +``` +lshw -C cpu +``` + +在这两个例子中,输出的最后几行都列出了所有 CPU 的功能。你可以查看你的处理器是否支持特定的功能。 + +使用这些命令的时候,你可以通过使用 `grep` 命令过滤复杂的信息,并缩小所需信息范围。例如,只查看 CPU 品牌和型号: + +``` +lshw -C cpu | grep -i product +``` + +仅查看 CPU 的速度(兆赫兹): + +``` +lscpu | grep -i mhz +``` + +或其 [BogoMips][8] 额定功率: + +``` +lscpu | grep -i bogo +``` + +`grep` 命令的 `-i` 参数代表搜索结果忽略大小写。 + +### 内存 + +Linux 命令行使你能够收集关于你的计算机内存的所有可能的详细信息。你甚至可以不拆开计算机机箱就能确定是否可以为计算机添加额外的内存条。 + +使用 `dmidecode` 命令列出每根内存条和其容量: + +``` +dmidecode -t memory | grep -i size +``` + +使用以下命令获取系统内存更多的信息,包括类型、容量、速度和电压: + +``` +lshw -short -C memory +``` + +你肯定想知道的一件事是你的计算机可以安装的最大内存: + +``` +dmidecode -t memory | grep -i max +``` + +现在检查一下计算机是否有空闲的插槽可以插入额外的内存条。你可以通过使用命令在不打开计算机机箱的情况下就做到: + +``` +lshw -short -C memory | grep -i empty +``` + +输出为空则意味着所有的插槽都在使用中。 + +确定你的计算机拥有多少显卡内存需要下面的命令。首先使用 `lspci` 列出所有设备信息然后过滤出你想要的显卡设备信息: + +``` +lspci | grep -i vga +``` + +视频控制器的设备号输出信息通常如下: + +``` +00:02.0 VGA compatible controller: Intel Corporation 82Q35 Express Integrated Graphics Controller (rev 02) +``` + +现在再加上视频设备号重新运行 `lspci` 命令: + +``` +lspci -v -s 00:02.0 +``` + +输出信息中 `prefetchable` 那一行显示了系统中的显卡内存大小: + +``` +... +Memory at f0100000 (32-bit, non-prefetchable) [size=512K] +I/O ports at 1230 [size=8] +Memory at e0000000 (32-bit, prefetchable) [size=256M] +Memory at f0000000 (32-bit, non-prefetchable) [size=1M] +... 
+``` + +最后使用下面的命令展示当前内存使用量(兆字节): + +``` +free -m +``` + +这条命令告诉你多少内存是空闲的,多少命令正在使用中以及交换内存的大小和是否正在使用。例如,输出信息如下: + +``` +              total        used        free     shared    buff/cache   available +Mem:          11891        1326        8877      212        1687       10077 +Swap:          1999           0        1999 +``` + +`top` 命令为你提供内存使用更加详细的信息。它显示了当前全部内存和 CPU 使用情况并按照进程 ID、用户 ID 及正在运行的命令细分。同时这条命令也是全屏输出: + +``` +top +``` + +### 磁盘文件系统和设备 + +你可以轻松确定有关磁盘、分区、文件系统和其他设备信息。 + +显示每个磁盘设备的描述信息: + +``` +lshw -short -C disk +``` + +通过以下命令获取任何指定的 SATA 磁盘详细信息,例如其型号、序列号以及支持的模式和扇区数量等: + +``` +hdparm -i /dev/sda +``` + +当然,如果需要的话你应该将 `sda` 替换成 `sdb` 或者其他设备号。 + +要列出所有磁盘及其分区和大小,请使用以下命令: + +``` +lsblk +``` + +使用以下命令获取更多有关扇区数量、大小、文件系统 ID 和 类型以及分区开始和结束扇区: + +``` +fdisk -l +``` + +要启动 Linux,你需要确定 [GRUB][9] 引导程序的可挂载分区。你可以使用 `blkid` 命令找到此信息。它列出了每个分区的唯一标识符(UUID)及其文件系统类型(例如 ext3 或 ext4): + +``` +blkid +``` + +使用以下命令列出已挂载的文件系统和它们的挂载点,以及已用的空间和可用的空间(兆字节为单位): + +``` +df -m +``` + +最后,你可以列出所有的 USB 和 PCI 总线以及其他设备的详细信息: + +``` +lsusb +``` + +或 + +``` +lspci +``` + +### 网络 + +Linux 提供大量的网络相关命令,下面只是几个例子。 + +查看你的网卡硬件详细信息: + +``` +lshw -C network +``` + +`ifconfig` 是显示网络接口的传统命令: + +``` +ifconfig -a +``` + +但是现在很多人们使用: + +``` +ip link show +``` + +或 + +``` +netstat -i +``` + +在阅读输出时,了解常见的网络缩写十分有用: + +缩写 | 含义 +---|--- +`lo` | 回环接口 +`eth0` 或 `enp*` | 以太网接口 +`wlan0` | 无线网接口 +`ppp0` | 点对点协议接口(由拨号调制解调器、PPTP VPN 连接或者 USB 调制解调器使用) +`vboxnet0` 或 `vmnet*` | 虚拟机网络接口 + +表中的星号是通配符,代表不同系统的任意字符。 + +使用以下命令显示默认网关和路由表: + +``` +ip route | column -t +``` + +或 + +``` +netstat -r +``` + +### 软件 + +让我们以显示最底层软件详细信息的两条命令来结束。例如,如果你想知道是否安装了最新的固件该怎么办?这条命令显示了 UEFI 或 BIOS 的日期和版本: + +``` +dmidecode -t bios +``` + +内核版本是多少,以及它是 64 位的吗?网络主机名是什么?使用下面的命令查出结果: + +``` +uname -a +``` + +### 快速查询表 + +用途 | 命令 +--- | --- +显示所有硬件信息 | `inxi -Fxz` 或 `hwinfo --short` 或 `lshw  -short` +CPU 信息 | `lscpu` 或 `lshw -C cpu` +显示 CPU 功能(例如 PAE、SSE2) | `lshw -C cpu | grep -i capabilities` +报告 CPU 位数 | `lshw -C cpu | grep -i width` +显示当前内存大小和配置 | `dmidecode -t memory | grep -i size` 或 `lshw -short -C memory` +显示硬件支持的最大内存 | `dmidecode -t memory | grep -i max` +确定是否有空闲内存插槽 | `lshw -short -C memory | grep -i empty`(输出为空表示没有可用插槽) +确定显卡内存数量 | `lspci | grep -i vga` 然后指定设备号再次使用;例如:`lspci -v -s 00:02.0` 显卡内存数量就是 `prefetchable` 的值 +显示当前内存使用情况 | `free -m` 或 `top` +列出磁盘驱动器 | `lshw -short -C disk` +显示指定磁盘驱动器的详细信息 | `hdparm -i /dev/sda`(需要的话替换掉 `sda`) +列出磁盘和分区信息 | `lsblk`(简单) 或 `fdisk -l`(详细) +列出分区 ID(UUID)| `blkid` +列出已挂载文件系统挂载点以及已用和可用空间 | `df -m` +列出 USB 设备 | `lsusb` +列出 PCI 设备 | `lspci` +显示网卡详细信息 | `lshw -C network` +显示网络接口 | `ifconfig -a` 或 `ip link show` 或 `netstat -i` +显示路由表 | `ip route | column -t` 或 `netstat -r` +显示 UEFI/BIOS 信息 | `dmidecode -t bios` +显示内核版本网络主机名等 | `uname -a` + +你有喜欢的命令被我忽略掉的吗?请添加评论分享给大家。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/9/linux-commands-hardware-information + +作者:[Howard Fosdick][a] +选题:[lujun9972][b] +译者:[way-ww](https://github.com/way-ww) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/howtechhttps://opensource.com/users/sethhttps://opensource.com/users/sethhttps://opensource.com/users/seth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/features_solutions_command_data.png?itok=4_VQN3RK (computer screen ) +[2]: http://sourceforge.net/projects/i-nex/ +[3]: 
https://www.cpuid.com/softwares/cpu-z.html +[4]: http://sourceforge.net/projects/hardinfo.berlios/ +[5]: https://userbase.kde.org/KInfoCenter +[6]: http://www.binarytides.com/linux-lshw-command/ +[7]: http://www.disk-image.com/faq-bootmenu.htm +[8]: https://en.wikipedia.org/wiki/BogoMips +[9]: https://www.dedoimedo.com/computers/grub.html diff --git a/published/20190918 Adding themes and plugins to Zsh.md b/published/20190918 Adding themes and plugins to Zsh.md new file mode 100644 index 0000000000..a9eaf0da80 --- /dev/null +++ b/published/20190918 Adding themes and plugins to Zsh.md @@ -0,0 +1,190 @@ +[#]: collector: (lujun9972) +[#]: translator: (amwps290) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11426-1.html) +[#]: subject: (Adding themes and plugins to Zsh) +[#]: via: (https://opensource.com/article/19/9/adding-plugins-zsh) +[#]: author: (Seth Kenlon https://opensource.com/users/seth) + +给 Zsh 添加主题和插件 +====== + +> 通过 Oh My Zsh 安装的主题和插件来扩展 Zsh 的功能。 + +![](https://img.linux.net.cn/data/attachment/album/201910/05/120457r49mk2l9oelv94bi.jpg) + +在我的[前文][2]中,我向大家展示了如何安装并使用 [Z-Shell][2] (Zsh)。对于某些用户来说,Zsh 最令人激动的是它可以安装主题。Zsh 安装主题非常容易,一方面是因为有非常活跃的社区为 Z-Shell 设计主题,另一方面是因为有 [Oh My Zsh][3] 这个项目。这使得安装主题变得轻而易举。 + +主题的变化可能会立刻吸引你的注意力,因此如果你安装了 Zsh 并且将默认的 Shell 替换为 Zsh 时,你可能不喜欢 Shell 默认主题的样子,那么你可以立即更换 Oh My Zsh 自带的 100 多个主题。Oh My Zsh 不仅拥有大量精美的主题,同时还有数以百计的扩展 Zsh 功能的插件。 + +### 安装 Oh My Zsh + +Oh My Zsh 的[官网][3]建议你使用一个脚本在有网络的情况下来安装这个包。尽管 Oh My Zsh 项目几乎是可以令人信服的,但是盲目地在你的电脑上运行一个脚本这是一个糟糕的建议。如果你想运行这个脚本,你可以把它下载下来,看一下它实现了什么功能,在你确信你已经了解了它的所作所为之后,你就可以运行它了。 + +如果你下载了脚本并且阅读了它,你就会发现安装过程仅仅只有三步: + +#### 1、克隆 oh-my-zsh + +第一步,克隆 oh-my-zsh 库到 `~/.oh-my-zsh` 目录: + +``` +% git clone http://github.com/robbyrussell/oh-my-zsh ~/.oh-my-zsh +``` + +#### 2、切换配置文件 + +下一步,备份你已有的 `.zshrc` 文件,然后将 oh-my-zsh 自带的配置文件移动到这个地方。这两步操作可以一步完成,只需要你的 `mv` 命令支持 `-b` 这个选项。 + +``` +% mv -b \ +~/.oh-my-zsh/templates/zshrc.zsh-template ~/.zshrc +``` + +#### 3、编辑配置文件 + +默认情况下,Oh My Zsh 自带的配置文件是非常简陋的。如果你想将你自己的 `~/.zshrc` 文件合并到 `.oh-my-zsh` 的配置文件中。你可以使用 [cat][4] 命令将你的旧的配置文件添加到新文件的末尾。 + +``` +% cat ~/.zshrc~ >> ~/.zshrc +``` + +看一下默认的配置文件以及它提供的一些选项。用你最喜欢的编辑器打开 `~/.zshrc` 文件。这个文件有非常良好的注释。这是了解它的一个非常好的方法。 + +例如,你可以更改 `.oh-my-zsh` 目录的位置。在安装的时候,它默认是位于你的家目录。但是,根据 [Free Desktop][5] 所定义的现代 Linux 规范。这个目录应当放置于 `~/.local/share` 。你可以在配置文件中进行修改。如下所示: + +``` +# Path to your oh-my-zsh installation. +export ZSH=$HOME/.local/share/oh-my-zsh +``` + +然后将 .oh-my-zsh 目录移动到你新配置的目录下: + +``` +% mv ~/.oh-my-zsh $HOME/.local/share/oh-my-zsh +``` + +如果你使用的是 MacOS,这个目录可能会有点含糊不清,但是最合适的位置可能是在 `$HOME/Library/Application\ Support`。 + +### 重新启动 Zsh + +编辑配置文件之后,你必须重新启动你的 Shell。在这之前,你必须确定你的任何操作都已正确完成。例如,在你修改了 `.oh-my-zsh` 目录的路径之后。不要忘记将目录移动到新的位置。如果你不想重新启动你的 Shell。你可以使用 `source` 命令来使你的配置文件生效。 + +``` +% source ~/.zshrc +➜  .oh-my-zsh git:(master) ✗ +``` + +你可以忽略任何丢失更新文件的警告;他们将会在重启的时候再次进行解析。 + +### 更换你的主题 + +安装好 oh-my-zsh 之后。你可以将你的 Zsh 的主题设置为 `robbyrussell`,这是一个该项目维护者的主题。这个主题的更改是非常小的,仅仅是改变了提示符的颜色。 + +你可以通过列出 `.oh-my-zsh` 目录下的所有文件来查看所有安装的主题: + +``` +➜  .oh-my-zsh git:(master) ✗ ls ~/.local/share/oh-my-zsh/themes +3den.zsh-theme +adben.zsh-theme +af-magic.zsh-theme +afowler.zsh-theme +agnoster.zsh-theme +[...] 
+``` + +想在切换主题之前查看一下它的样子,你可以查看 Oh My Zsh 的 [wiki][6] 页面。要查看更多主题,可以查看 [外部主题][7] wiki 页面。 + +大部分的主题是非常易于安装和使用的,仅仅需要改变 `.zshrc` 文件中的配置选项然后重新载入配置文件。 + +``` +➜ ~ sed -i 's/_THEME=\"robbyrussel\"/_THEME=\"linuxonly\"/g' ~/.zshrc +➜ ~ source ~/.zshrc +seth@darkstar:pts/0->/home/skenlon (0) ➜ +``` + +其他的主题可能需要一些额外的配置。例如,为了使用 `agnoster` 主题,你必须先安装 Powerline 字体。这是一个开源字体,如果你使用 Linux 操作系统的话,这个字体很可能在你的软件库中存在。使用下面的命令安装这个字体: + +``` +➜ ~ sudo dnf install powerline-fonts +``` + +在配置文件中更改你的主题: + +``` +➜ ~ sed -i 's/_THEME=\"linuxonly\"/_THEME=\"agnoster\"/g' ~/.zshrc +``` + +重新启动你的 Sehll(一个简单的 `source` 命令并不会起作用)。一旦重启,你就可以看到新的主题: + +![agnoster theme][8] + +### 安装插件 + +Oh My Zsh 有超过 200 的插件,你可以在 `.oh-my-zsh/plugins` 中看到它们。每一个扩展目录下都有一个 `README` 文件解释了这个插件的作用。 + +一些插件相当简单。例如,`dnf`、`ubuntu`、`brew` 和 `macports` 插件仅仅是为了简化与 DNF、Apt、Homebres 和 MacPorts 的交互操作而定义的一些别名。 + +而其他的一些插件则较为复杂,`git` 插件默认是被激活使用的。当你的目录是一个 git 仓库的时候,这个扩展就会更新你的 Shell 提示符,以显示当前的分支和是否有未合并的更改。 + +为了激活这个扩展,你可以将这个扩展添加到你的配置文件 `~/.zshrc` 中。例如,你可以添加 `dnf` 和 `pass` 插件,按照如下的方式更改: + +``` +plugins=(git dnf pass) +``` + +保存修改,重新启动你的 Shell。 + +``` +% source ~/.zshrc +``` + +这个扩展现在就可以使用了。你可以通过使用 `dnf` 提供的别名来测试一下: + +``` +% dnfs fop +====== Name Exactly Matched: fop ====== +fop.noarch : XSL-driven print formatter +``` + +不同的插件做不同的事,因此你可以一次安装一两个插件来帮你学习新的特性和功能。 + +### 兼容性 + +一些 Oh My Zsh 插件具有通用性。如果你看到一个插件声称它可以与 Bash 兼容,那么它就可以在你自己的 Bash 中使用。另一些插件需要 Zsh 提供的特定功能。因此,它们并不是所有都能工作。但是你可以添加一些其他的插件,例如 `dnf`、`ubuntu`、`firewalld`,以及其他的一些插件。你可以使用 `source` 使你的选择生效。例如: + +``` +if [ -d $HOME/.local/share/oh-my-zsh/plugins ]; then +        source $HOME/.local/share/oh-my-zsh/plugins/dnf/dnf.plugin.zsh +fi +``` + +### 选择或者不选择 Zsh + +Z-shell 的内置功能和它由社区贡献的扩展功能都非常强大。你可以把它当成你的主 Shell 使用,你也可以在你休闲娱乐的时候尝试一下。这取决于你的爱好。 + +什么是你最喜爱的主题和扩展可以在下方的评论告诉我们! + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/9/adding-plugins-zsh + +作者:[Seth Kenlon][a] +选题:[lujun9972][b] +译者:[amwps290](https://github.com/amwps290) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/build_structure_tech_program_code_construction.png?itok=nVsiLuag (Someone wearing a hardhat and carrying code ) +[2]: https://linux.cn/article-11378-1.html +[3]: https://ohmyz.sh/ +[4]: https://opensource.com/article/19/2/getting-started-cat-command +[5]: http://freedesktop.org +[6]: https://github.com/robbyrussell/oh-my-zsh/wiki/Themes +[7]: https://github.com/robbyrussell/oh-my-zsh/wiki/External-themes +[8]: https://opensource.com/sites/default/files/uploads/zsh-agnoster.jpg (agnoster theme) +[9]: https://opensource.com/resources/what-is-git +[10]: https://opensource.com/article/19/7/make-linux-stronger-firewalls diff --git a/published/20190920 Hone advanced Bash skills by building Minesweeper.md b/published/20190920 Hone advanced Bash skills by building Minesweeper.md new file mode 100644 index 0000000000..59b0e8cbd1 --- /dev/null +++ b/published/20190920 Hone advanced Bash skills by building Minesweeper.md @@ -0,0 +1,325 @@ +[#]: collector: (lujun9972) +[#]: translator: (wenwensnow) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11430-1.html) +[#]: subject: (Hone advanced Bash skills by building Minesweeper) +[#]: via: (https://opensource.com/article/19/9/advanced-bash-building-minesweeper) 
+[#]: author: (Abhishek Tamrakar https://opensource.com/users/tamrakar) + +通过编写扫雷游戏提高你的 Bash 技巧 +====== + +> 那些令人怀念的经典游戏可是提高编程能力的好素材。今天就让我们仔细探索一番,怎么用 Bash 编写一个扫雷程序。 + +![bash logo on green background][1] + +我在编程教学方面不是专家,但当我想更好掌握某一样东西时,会试着找出让自己乐在其中的方法。比方说,当我想在 shell 编程方面更进一步时,我决定用 Bash 编写一个[扫雷][2]游戏来加以练习。 + +如果你是一个有经验的 Bash 程序员,希望在提高技巧的同时乐在其中,那么请跟着我编写一个你的运行在终端中的扫雷游戏。完整代码可以在这个 [GitHub 存储库][3]中找到。 + +### 做好准备 + +在我编写任何代码之前,我列出了该游戏所必须的几个部分: + +1. 显示雷区 +2. 创建游戏逻辑 +3. 创建判断单元格是否可选的逻辑 +4. 记录可用和已查明(已排雷)单元格的个数 +5. 创建游戏结束逻辑 + +### 显示雷区 + +在扫雷中,游戏界面是一个由 2D 数组(列和行)组成的不透明小方格。每一格下都有可能藏有地雷。玩家的任务就是找到那些不含雷的方格,并且在这一过程中,不能点到地雷。这个 Bash 版本的扫雷使用 10x10 的矩阵,实际逻辑则由一个简单的 Bash 数组来完成。 + +首先,我先生成了一些随机数字。这将是地雷在雷区里的位置。控制地雷的数量,在开始编写代码之前,这么做会容易一些。实现这一功能的逻辑可以更好,但我这么做,是为了让游戏实现保持简洁,并有改进空间。(我编写这个游戏纯属娱乐,但如果你能将它修改的更好,我也是很乐意的。) + +下面这些变量在整个过程中是不变的,声明它们是为了随机生成数字。就像下面的 `a` - `g` 的变量,它们会被用来计算可排除的地雷的值: + +``` +# 变量 +score=0 # 会用来存放游戏分数 +# 下面这些变量,用来随机生成可排除地雷的实际值 +a="1 10 -10 -1" +b="-1 0 1" +c="0 1" +d="-1 0 1 -2 -3" +e="1 2 20 21 10 0 -10 -20 -23 -2 -1" +f="1 2 3 35 30 20 22 10 0 -10 -20 -25 -30 -35 -3 -2 -1" +g="1 4 6 9 10 15 20 25 30 -30 -24 -11 -10 -9 -8 -7" +# +# 声明 +declare -a room # 声明一个 room 数组,它用来表示雷区的每一格。 +``` + +接下来,我会用列(0-9)和行(a-j)显示出游戏界面,并且使用一个 10x10 矩阵作为雷区。(`M[10][10]` 是一个索引从 0-99,有 100 个值的数组。) 如想了解更多关于 Bash 数组的内容,请阅读这本书[那些关于 Bash 你所不了解的事: Bash 数组简介][4]。 + + +创建一个叫 `plough` 的函数,我们先将标题显示出来:两个空行、列头,和一行 `-`,以示意往下是游戏界面: + +``` +printf '\n\n' +printf '%s' "     a   b   c   d   e   f   g   h   i   j" +printf '\n   %s\n' "-----------------------------------------" +``` + +然后,我初始化一个计数器变量,叫 `r`,它会用来记录已显示多少横行。注意,稍后在游戏代码中,我们会用同一个变量 `r`,作为我们的数组索引。 在 [Bash for 循环][5]中,用 `seq` 命令从 0 增加到 9。我用数字(`d%`)占位,来显示行号(`$row`,由 `seq` 定义): + + +``` +r=0 # 计数器 +for row in $(seq 0 9); do + printf '%d ' "$row" # 显示 行数 0-9 +``` + +在我们接着往下做之前,让我们看看到现在都做了什么。我们先横着显示 `[a-j]` 然后再将 `[0-9]` 的行号显示出来,我们会用这两个范围,来确定用户排雷的确切位置。 + +接着,在每行中,插入列,所以是时候写一个新的 `for` 循环了。这一循环管理着每一列,也就是说,实际上是生成游戏界面的每一格。我添加了一些辅助函数,你能在源码中看到它的完整实现。 对每一格来说,我们需要一些让它看起来像地雷的东西,所以我们先用一个点(`.`)来初始化空格。为了实现这一想法,我们用的是一个叫 [`is_null_field`][6] 的自定义函数。 同时,我们需要一个存储每一格具体值的数组,这儿会用到之前已定义的全局数组 [`room`][7] , 并用 [变量 `r`][8]作为索引。随着 `r` 的增加,遍历所有单元格,并随机部署地雷。 + +``` +  for col in $(seq 0 9); do + ((r+=1)) # 循环完一列行数加一 + is_null_field $r # 假设这里有个函数,它会检查单元格是否为空,为真,则此单元格初始值为点(.) + printf '%s \e[33m%s\e[0m ' "|" "${room[$r]}" # 最后显示分隔符,注意,${room[$r]} 的第一个值为 '.',等于其初始值。 + #结束 col 循环 + done +``` + +最后,为了保持游戏界面整齐好看,我会在每行用一个竖线作为结尾,并在最后结束行循环: + +``` +printf '%s\n' "|" # 显示出行分隔符 +printf ' %s\n' "-----------------------------------------" +# 结束行循环 +done +printf '\n\n' +``` + +完整的 `plough` 代码如下: + +``` +plough() +{ +  r=0 +  printf '\n\n' +  printf '%s' "     a   b   c   d   e   f   g   h   i   j" +  printf '\n   %s\n' "-----------------------------------------" +  for row in $(seq 0 9); do +    printf '%d  ' "$row" +    for col in $(seq 0 9); do +       ((r+=1)) +       is_null_field $r +       printf '%s \e[33m%s\e[0m ' "|" "${room[$r]}" +    done +    printf '%s\n' "|" +    printf '   %s\n' "-----------------------------------------" +  done +  printf '\n\n' +} +``` + +我花了点时间来思考,`is_null_field` 的具体功能是什么。让我们来看看,它到底能做些什么。在最开始,我们需要游戏有一个固定的状态。你可以随便选择个初始值,可以是一个数字或者任意字符。我最后决定,所有单元格的初始值为一个点(`.`),因为我觉得,这样会让游戏界面更好看。下面就是这一函数的完整代码: + +``` +is_null_field() +{ + local e=$1 # 在数组 room 中,我们已经用过循环变量 'r' 了,这次我们用 'e' + if [[ -z "${room[$e]}" ]];then + room[$r]="." 
#这里用点(.)来初始化每一个单元格 + fi +} +``` + +现在,我已经初始化了所有的格子,现在只要用一个很简单的函数就能得出当前游戏中还有多少单元格可以操作: + +``` +get_free_fields() +{ + free_fields=0 # 初始化变量 + for n in $(seq 1 ${#room[@]}); do + if [[ "${room[$n]}" = "." ]]; then # 检查当前单元格是否等于初始值(.),结果为真,则记为空余格子。 + ((free_fields+=1)) +    fi +  done +} +``` + +这是显示出来的游戏界面,`[a-j]` 为列,`[0-9]` 为行。 + +![Minefield][9] + +### 创建玩家逻辑 + +玩家操作背后的逻辑在于,先从 [stdin][10] 中读取数据作为坐标,然后再找出对应位置实际包含的值。这里用到了 Bash 的[参数扩展][11],来设法得到行列数。然后将代表列数的字母传给分支语句,从而得到其对应的列数。为了更好地理解这一过程,可以看看下面这段代码中,变量 `o` 所对应的值。 举个例子,玩家输入了 `c3`,这时 Bash 将其分成两个字符:`c` 和 `3`。为了简单起见,我跳过了如何处理无效输入的部分。 + +``` + colm=${opt:0:1} # 得到第一个字符,一个字母 + ro=${opt:1:1} # 得到第二个字符,一个整数 + case $colm in + a ) o=1;; # 最后,通过字母得到对应列数。 + b ) o=2;; +    c ) o=3;; +    d ) o=4;; +    e ) o=5;; +    f ) o=6;; +    g ) o=7;; +    h ) o=8;; +    i ) o=9;; +    j ) o=10;; +  esac +``` + +下面的代码会计算用户所选单元格实际对应的数字,然后将结果储存在变量中。 + +这里也用到了很多的 `shuf` 命令,`shuf` 是一个专门用来生成随机序列的 [Linux 命令][12]。`-i` 选项后面需要提供需要打乱的数或者范围,`-n` 选项则规定输出结果最多需要返回几个值。Bash 中,可以在两个圆括号内进行[数学计算][13],这里我们会多次用到。 + +还是沿用之前的例子,玩家输入了 `c3`。 接着,它被转化成了 `ro=3` 和 `o=3`。 之后,通过上面的分支语句代码, 将 `c` 转化为对应的整数,带进公式,以得到最终结果 `i` 的值。 + +``` + i=$(((ro*10)+o)) # 遵循运算规则,算出最终值 + is_free_field $i $(shuf -i 0-5 -n 1) # 调用自定义函数,判断其指向空/可选择单元格。 +``` + +仔细观察这个计算过程,看看最终结果 `i` 是如何计算出来的: + +``` +i=$(((ro*10)+o)) +i=$(((3*10)+3))=$((30+3))=33 +``` + +最后结果是 33。在我们的游戏界面显示出来,玩家输入坐标指向了第 33 个单元格,也就是在第 3 行(从 0 开始,否则这里变成 4),第 3 列。 + +### 创建判断单元格是否可选的逻辑 + +为了找到地雷,在将坐标转化,并找到实际位置之后,程序会检查这一单元格是否可选。如不可选,程序会显示一条警告信息,并要求玩家重新输入坐标。 + +在这段代码中,单元格是否可选,是由数组里对应的值是否为点(`.`)决定的。如果可选,则重置单元格对应的值,并更新分数。反之,因为其对应值不为点,则设置变量 `not_allowed`。为简单起见,游戏中[警告消息][14]这部分源码,我会留给读者们自己去探索。 + +``` +is_free_field() +{ +  local f=$1 +  local val=$2 +  not_allowed=0 +  if [[ "${room[$f]}" = "." ]]; then +    room[$f]=$val +    score=$((score+val)) +  else +    not_allowed=1 +  fi +} +``` + +![Extracting mines][15] + +如输入坐标有效,且对应位置为地雷,如下图所示。玩家输入 `h6`,游戏界面会出现一些随机生成的值。在发现地雷后,这些值会被加入用户得分。 + +![Extracting mines][16] + +还记得我们开头定义的变量,`a` - `g` 吗,我会用它们来确定随机生成地雷的具体值。所以,根据玩家输入坐标,程序会根据(`m`)中随机生成的数,来生成周围其他单元格的值(如上图所示)。之后将所有值和初始输入坐标相加,最后结果放在 `i`(计算结果如上)中。 + +请注意下面代码中的 `X`,它是我们唯一的游戏结束标志。我们将它添加到随机列表中。在 `shuf` 命令的魔力下,`X` 可以在任意情况下出现,但如果你足够幸运的话,也可能一直不会出现。 + +``` +m=$(shuf -e a b c d e f g X -n 1) # 将 X 添加到随机列表中,当 m=X,游戏结束 + if [[ "$m" != "X" ]]; then # X 将会是我们爆炸地雷(游戏结束)的触发标志 + for limit in ${!m}; do # !m 代表 m 变量的值 + field=$(shuf -i 0-5 -n 1) # 然后再次获得一个随机数字 + index=$((i+limit)) # 将 m 中的每一个值和 index 加起来,直到列表结尾 + is_free_field $index $field +    done +``` + +我想要游戏界面中,所有随机显示出来的单元格,都靠近玩家选择的单元格。 + +![Extracting mines][17] + +### 记录已选择和可用单元格的个数 + +这个程序需要记录游戏界面中哪些单元格是可选择的。否则,程序会一直让用户输入数据,即使所有单元格都被选中过。为了实现这一功能,我创建了一个叫 `free_fields` 的变量,初始值为 `0`。用一个 `for` 循环,记录下游戏界面中可选择单元格的数量。 如果单元格所对应的值为点(`.`),则 `free_fields` 加一。 + +``` +get_free_fields() +{ +  free_fields=0 +  for n in $(seq 1 ${#room[@]}); do +    if [[ "${room[$n]}" = "." ]]; then +      ((free_fields+=1)) +    fi +  done +} +``` + +等下,如果 `free_fields=0` 呢? 
这意味着,玩家已选择过所有单元格。如果想更好理解这一部分,可以看看这里的[源代码][18]。 + +``` +if [[ $free_fields -eq 0 ]]; then # 这意味着你已选择过所有格子 + printf '\n\n\t%s: %s %d\n\n' "You Win" "you scored" "$score" +      exit 0 +fi +``` + +### 创建游戏结束逻辑 + +对于游戏结束这种情况,我们这里使用了一些很[巧妙的技巧][19],将结果在屏幕中央显示出来。我把这部分留给读者朋友们自己去探索。 + +``` +if [[ "$m" = "X" ]]; then + g=0 # 为了在参数扩展中使用它 + room[$i]=X # 覆盖此位置原有的值,并将其赋值为X + for j in {42..49}; do # 在游戏界面中央, + out="gameover" + k=${out:$g:1} # 在每一格中显示一个字母 + room[$j]=${k^^} +      ((g+=1)) +    done +fi +``` + +最后,我们显示出玩家最关心的两行。 + +``` +if [[ "$m" = "X" ]]; then +      printf '\n\n\t%s: %s %d\n' "GAMEOVER" "you scored" "$score" +      printf '\n\n\t%s\n\n' "You were just $free_fields mines away." +      exit 0 +fi +``` + +![Minecraft Gameover][20] + +文章到这里就结束了,朋友们!如果你想了解更多,具体可以查看我的 [GitHub 存储库][3],那儿有这个扫雷游戏的源代码,并且你还能找到更多用 Bash 编写的游戏。 我希望,这篇文章能激起你学习 Bash 的兴趣,并乐在其中。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/9/advanced-bash-building-minesweeper + +作者:[Abhishek Tamrakar][a] +选题:[lujun9972][b] +译者:[wenwensnow](https://github.com/wenwensnow) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/tamrakar +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background) +[2]: https://en.wikipedia.org/wiki/Minesweeper_(video_game) +[3]: https://github.com/abhiTamrakar/playground/tree/master/bash_games +[4]: https://opensource.com/article/18/5/you-dont-know-bash-intro-bash-arrays +[5]: https://opensource.com/article/19/6/how-write-loop-bash +[6]: https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L114-L120 +[7]: https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L41 +[8]: https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L74 +[9]: https://opensource.com/sites/default/files/uploads/minefield.png (Minefield) +[10]: https://en.wikipedia.org/wiki/Standard_streams#Standard_input_(stdin) +[11]: https://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html +[12]: https://linux.die.net/man/1/shuf +[13]: https://www.tldp.org/LDP/abs/html/dblparens.html +[14]: https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L143-L177 +[15]: https://opensource.com/sites/default/files/uploads/extractmines.png (Extracting mines) +[16]: https://opensource.com/sites/default/files/uploads/extractmines2.png (Extracting mines) +[17]: https://opensource.com/sites/default/files/uploads/extractmines3.png (Extracting mines) +[18]: https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L91 +[19]: https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L131-L141 +[20]: https://opensource.com/sites/default/files/uploads/gameover.png (Minecraft Gameover) diff --git a/published/20190924 Fedora and CentOS Stream.md b/published/20190924 Fedora and CentOS Stream.md new file mode 100644 index 0000000000..d31e7437c0 --- /dev/null +++ b/published/20190924 Fedora and CentOS Stream.md @@ -0,0 +1,70 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) 
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11412-1.html)
+[#]: subject: (Fedora and CentOS Stream)
+[#]: via: (https://fedoramagazine.org/fedora-and-centos-stream/)
+[#]: author: (Matthew Miller https://fedoramagazine.org/author/mattdm/)
+
+Fedora 和 CentOS Stream
+======
+
+![][1]
+
+*一封来自 Fedora 项目负责人办公室的信件:*
+
+(LCTT 译注:背景介绍 —— 红帽宣布与 CentOS 同步构建一个 CentOS Stream 滚动构建版。我们知道 Fedora 是红帽企业版 Linux [RHEL] 的上游,经过 Fedora 验证的特性才会放入 RHEL;而 RHEL 发布后,其源代码开放出来形成了 CentOS。而新的 CentOS Stream 则位于 Fedora 和 RHEL 之间,会滚动添加新的实验特性、更新的软件包等。)
+
+嗨,大家好!你可能已经看到有关 [CentOS 项目变更][3]的[公告][2]。(如果没有,请花一些时间阅读它,我等你看完回来!)现在你可能想知道:如果 CentOS 现在位于 RHEL 的上游,那么 Fedora 会发生什么?那不正是 Fedora 在红帽生态系统中的角色吗?
+
+首先,不用担心。整体上会有一些变化,但一切都在变得更好。
+
+![][4]
+
+如果你一直在关注 RHEL 领导层关于 Fedora、CentOS 和 RHEL 之间关系的演讲,那么你就听说过 “[彭罗斯三角][5]Penrose Triangle”。它的形状就像 M. C. Escher 绘画中的形状:在现实生活中是不可能存在的!
+
+我们已经思考了一段时间:*也许*这种几何上不可能的图形,其实并不是最好的模型。
+
+一方面,设想中的那种流动 —— 最终产品上的贡献会流回 Fedora,形成“良性循环”式的增长 —— 从来没有真正运转起来过。这很可惜,因为 CentOS 社区庞大而强大,有很多优秀的人在为它工作,并且与 Fedora 社区有很多重叠之处,而我们却错失了这些协作机会。
+
+但是,这并不是唯一的缺口:项目与产品之间也从来没有真正连贯一致的流程。到目前为止,该过程如下:
+
+1. 在上一版 RHEL 发布之后的某个时间,红帽突然会比以往更加关注 Fedora。
+2. 几个月后,红帽将分拆出一个内部开发的 RHEL 新版本。
+3. 又过几个月,它便被发布到世界各地,成为包括 CentOS 在内的所有下游发行版的来源。
+4. 这些源码持续向下游更新,有时这些更新包括 Fedora 中的修补程序,但没有明确的路径。
+
+这里的每个步骤都有其问题:间歇性的关注、闭门开发、盲目地向下游发布,以及几乎没有持续的透明度。但是现在红帽和 CentOS 项目正在解决此问题,这对 Fedora 也是个好消息。
+
+**Fedora 仍将是 RHEL 的[第一个][6]上游**。它是以往每个 RHEL 版本的来源,也将是 RHEL 9 的来源。但是在 RHEL 分支出去之后,*CentOS* 将成为上游,继续进行那些 RHEL 版本的后续工作。我喜欢称其为“中游”,但营销人员不这么叫,因此它被称为 “CentOS Stream”。
+
+我们(Fedora、CentOS 和红帽)仍需要解决各种技术细节,但是我们的想法是这些分支将存在于同一软件包源存储库中。(目前的计划是制作一个 “src.centos.org”,它具有与 [src.fedoraproject.org][7] 相同数据的并行视图)。这项更改使公众可以看到已发布的 RHEL 上正在进行的工作,并为开发人员和红帽合作伙伴在该级别进行协作提供了场所。
+
+[CentOS SIG][8](虚拟化、存储、配置管理等特殊兴趣小组)将在 Fedora 分支旁边的共享空间中开展工作。这将使项目之间的协作和共享更加容易,我希望我们甚至能够合并一些类似的 SIG,以直接协同工作。在有用的情况下,可以将 Fedora 软件包中的修补程序挑选到 CentOS “中游”中,反之亦然。
+
+最终,Fedora、CentOS 和 RHEL 属于同一个大型项目家族。这种新的、更自然的流程解锁了那些过去被人为(以及超维度!)的障碍所挡住的协作可能性。现在我们可以一起去做这些事了,对此我感到非常兴奋!
+ +*—— Matthew Miller, Fedora 项目负责人* + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/fedora-and-centos-stream/ + +作者:[Matthew Miller][a] +选题:[lujun9972][b] +译者:[wxy](https://github.com/wxy) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/mattdm/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2019/09/centos-stream-816x345.jpg +[2]: http://redhat.com/en/blog/transforming-development-experience-within-centos +[3]: https://wiki.centos.org/Manuals/ReleaseNotes/CentOSStream +[4]: https://lh3.googleusercontent.com/5XMDU29DYPsFKIVLCexK46n9DqWZEa0nTjAnJcouzww-RSAzNshGW3yIxXBSBsd6KfAyUAGpxX9y0Dsh1hj21ygcAn5a7h55LrneKROkxsipdXO2gq8cgoFqz582ojOh8NU9Ix0X +[5]: https://www.youtube.com/watch?v=1JmgOkEznjw +[6]: https://docs.fedoraproject.org/en-US/project/#_first +[7]: https://src.fedoraproject.org/ +[8]: https://wiki.centos.org/SpecialInterestGroup diff --git a/published/20190924 Java still relevant, Linux desktop, and more industry trends.md b/published/20190924 Java still relevant, Linux desktop, and more industry trends.md new file mode 100644 index 0000000000..9823bf93e1 --- /dev/null +++ b/published/20190924 Java still relevant, Linux desktop, and more industry trends.md @@ -0,0 +1,79 @@ +[#]: collector: (lujun9972) +[#]: translator: (laingke) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11453-1.html) +[#]: subject: (Java still relevant, Linux desktop, and more industry trends) +[#]: via: (https://opensource.com/article/19/9/java-relevant-and-more-industry-trends) +[#]: author: (Tim Hildred https://opensource.com/users/thildred) + +每周开源点评:Java 还有用吗、Linux 桌面以及更多的行业趋势 +====== + +> 开源社区和行业趋势的每周总览。 + +![Person standing in front of a giant computer screen with numbers, data][1] + +作为我在具有开源开发模型的企业软件公司担任高级产品营销经理的角色的一部分,我为产品营销人员、经理和其他影响者定期发布有关开源社区,市场和行业趋势的定期更新。以下是该更新中我和他们最喜欢的五篇文章。 + +### 《Java 还有用吗?》 + +- [文章地址][2] + +> 负责 Java Enterprise Edition(现为 Jakarta EE)的 Eclipse 基金会执行董事 Mike Milinkovich 也认为 Java 本身将不断发展以支持这些技术。“我认为 Java 将从 JVM 一直到 Java 本身都将发生变化,”Milinkovich 表示,“因此,JVM 中任何有助于将 JVM 与 Docker 容器集成在一起,以及能够更好地在 Kubernetes 中对 Docker 容器进行检测的新特性,都将是一个巨大的帮助。因此,我们将期待 Java SE 朝着这个方向发展。” + +**影响**:Jakarta EE 是 Java Enterprise Edition 的完全开源版本,奠定了 Java 未来发展的基础。一些 Java 有用论来自于在用 Java 开发中花费的令人难以置信的成本,以及软件开发人员在用它解决问题方面的多年经验。将其与生态系统中的创新相结合(例如,请参见 [Quarkus][3] 或 GraalVM),答案必须是“是”。 + +### 《GraalVM:多语言 JVM 的圣杯?》 + +- [文章地址][4] + +> 虽然大多数关于 GraalVM 的宣传都是围绕着将 JVM 项目编译成原生的程序,但是我们仍可以发现它的 Polyglot API 有很多价值。GraalVM 是一个引人注目的、已经完全可以用来替代 Nashorn 的选择,尽管迁移的路径仍然有一些困难,主要原因是缺乏文档。希望这篇文章能帮助其他人找到离开 Nashorn 通往圣杯之路。 + +**影响**:对于开放源码项目来说,最好的事情之一就是用户开始对一些新奇的应用程序赞不绝口,即使这些应用程序不是主要用例。“是的,听起来不错,我们甚至没有使用过那个功能(指在 JVM 上运行本地语言)……,(都可以感受得到它的优势,)然而我们使用了它的另一个功能(指 Polyglot API)!” + +### 《你可以说我疯了,但 Windows 11 或可以在 Linux 上运行》 + +- [文章链接][5] + +> 微软已经做了一些必要的工作。[Windows 的 Linux 子系统][6](WSL)的开发人员一直在致力于将 Linux API 调用映射到 Windows 中,反之亦然。在 WSL 的第一个版本中, 微软将 Windows 本地库、程序以及 Linux 之间的关键点连接起来了。当时,[Carmen Crincoli 发推文称][7]:“2017 年归根结底还是 Linux 桌面年。只不过这个桌面是 Windows。”Carmen Crincoli 是什么人?微软与存储和独立硬件供应商的合作伙伴经理。 + +**影响**:[Hieroglyph 项目][8] 的前提是“一部好的科幻小说都有一个对未来的愿景……是建立在现实主义的基础上的……(而这)引发我们思考自己的选择和互动对创造未来做出贡献的复杂方式。”微软的选择以及与更广泛的开源社区的互动是否可以导致科幻的未来?敬请关注! 
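+
+(LCTT 译注:补充一个原文之外的小演示,说明 WSL 这种“双向映射”在日常使用中大概是什么样子。以下命令假设在已启用 WSL 并安装了某个 Linux 发行版的 Windows 10 上执行:)
+
+```
+# 在 Windows 的 PowerShell/CMD 中直接运行 Linux 命令
+> wsl uname -a
+
+# 在 WSL 的 Bash 中直接调用 Windows 程序(互操作默认开启)
+$ /mnt/c/Windows/System32/notepad.exe
+```
+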
+ +### 《Python 正在吞噬世界:一个开发人员的业余项目如何成为地球上最热门的编程语言》 + +- [文章链接][9] + +> 还有一个问题是,监督语言开发的机构“Python 核心开发人员和 Python 指导委员会”的组成是否能更好地反映 2019 年 Python 用户群的多样性。 +> +> Wijaya 称:“我希望看到在所有不同指标上都有更好的代表,不仅在性别平衡方面,而且在种族和其它所有方面。” +> +> “在 PyCon 上,我与来自印度和非洲的 [PyLadies][10] 成员进行了交谈。他们评论说:‘当我们听说 Python 或 PyLadies 时,我们想到的是北美或加拿大的人,而实际上,世界其它地区的用户群很大。为什么我们看不到更多?’我认为这很有意义。因此,我绝对希望看到这种情况发生,我认为我们都需要尽自己的一份力量。” + +**影响**: 在这个动荡的时代,谁不想听到一位仁慈独裁者(指 Python 创始人)把他们项目的统治权移交给最经常使用它的人呢? + +*我希望你喜欢这张上周让我印象深刻的列表,并在下周一回来了解更多的开放源码社区、市场和行业趋势。* + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/9/java-relevant-and-more-industry-trends + +作者:[Tim Hildred][a] +选题:[lujun9972][b] +译者:[laingke](https://github.com/laingke) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/thildred +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data) +[2]: https://sdtimes.com/java/is-java-still-relevant/ +[3]: https://github.com/quarkusio/quarkus +[4]: https://www.transposit.com/blog/2019.01.02-graalvm-holy/?c=hn +[5]: https://www.computerworld.com/article/3438856/call-me-crazy-but-windows-11-could-run-on-linux.html#tk.rss_operatingsystems +[6]: https://blogs.msdn.microsoft.com/wsl/ +[7]: https://twitter.com/CarmenCrincoli/status/862714516257226752 +[8]: https://hieroglyph.asu.edu/2016/04/what-is-the-purpose-of-science-fiction-stories/ +[9]: https://www.techrepublic.com/article/python-is-eating-the-world-how-one-developers-side-project-became-the-hottest-programming-language-on-the-planet/ +[10]: https://www.pyladies.com/ diff --git a/published/20190925 3 quick tips for working with Linux files.md b/published/20190925 3 quick tips for working with Linux files.md new file mode 100644 index 0000000000..dcf0c29398 --- /dev/null +++ b/published/20190925 3 quick tips for working with Linux files.md @@ -0,0 +1,115 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11445-1.html) +[#]: subject: (3 quick tips for working with Linux files) +[#]: via: (https://www.networkworld.com/article/3440035/3-quick-tips-for-working-with-linux-files.html) +[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/) + +处理 Linux 文件的 3 个技巧 +====== + +> Linux 提供了许多用于查找、计数和重命名文件的命令。这有一些有用的选择。 + +![](https://img.linux.net.cn/data/attachment/album/201910/11/101136ei4sslezne7esyis.jpg) + +Linux 提供了多种用于处理文件的命令,这些命令可以节省你的时间,并使你的工作不那么繁琐。 + +### 查找文件 + +当你查找文件时,`find` 可能会是第一个想到的命令,但是有时精心设计的 `ls` 命令会更好。想知道你昨天离开办公室回家前调用的脚本么?简单!使用 `ls` 命令并加上 `-ltr` 选项。最后一个列出的将是最近创建或更新的文件。 + +``` +$ ls -ltr ~/bin | tail -3 +-rwx------ 1 shs shs 229 Sep 22 19:37 checkCPU +-rwx------ 1 shs shs 285 Sep 22 19:37 ff +-rwxrw-r-- 1 shs shs 1629 Sep 22 19:37 test2 +``` + +像这样的命令将仅列出今天更新的文件: + +``` +$ ls -al --time-style=+%D | grep `date +%D` +drwxr-xr-x 60 shs shs 69632 09/23/19 . +drwxrwxr-x 2 shs shs 8052736 09/23/19 bin +-rw-rw-r-- 1 shs shs 506 09/23/19 stats +``` + +如果你要查找的文件可能不在当前目录中,那么 `find` 将比 `ls` 提供更好的选项,但它可能会输出比你想要的更多结果。在下面的命令中,我们*不*搜索以点开头的目录(它们很多一直在更新),指定我们要查找的是文件(即不是目录),并要求仅显示最近一天 (`-mtime -1`)更新过的文件。 + +``` +$ find . 
-not -path '*/\.*' -type f -mtime -1 -ls
+ 917517    0 -rwxrw-r--   1 shs   shs   683 Sep 23 11:00 ./newscript
+```
+
+注意 `-not` 选项反转了 `-path` 的行为,因此我们不会搜索以点开头的子目录。
+
+如果只想查找最大的文件和目录,那么可以使用类似 `du` 这样的命令,它会按大小列出当前目录的内容。将输出通过管道传输到 `tail`,仅查看最大的几个。
+
+```
+$ du -kx | egrep -v "\./.+/" | sort -n | tail -5
+918984 ./reports
+1053980 ./notes
+1217932 ./.cache
+31470204 ./photos
+39771212 .
+```
+
+`-k` 选项让 `du` 以 1KB 的块为单位列出文件大小,而 `-x` 可防止其遍历其他文件系统上的目录(例如,通过符号链接引用的目录)。正是由于 `du` 会把文件大小列在每行开头,才使得按大小排序(`sort -n`)成为可能。
+
+### 文件计数
+
+使用 `find` 命令可以很容易地计数任何特定目录中的文件。你只需要记住,`find` 会递归到子目录中,并将这些子目录中的文件与当前目录中的文件一起计数。在此命令中,我们计数一个特定用户(`username`)的家目录中的文件。根据家目录的权限,这可能需要使用 `sudo`。请记住,第一个参数是搜索的起点。这里指定的是用户的家目录。
+
+```
+$ find ~username -type f 2>/dev/null | wc -l
+35624
+```
+
+请注意,我们正在将上面 `find` 命令的错误输出发送到 `/dev/null`,以避免搜索类似 `~username/.cache` 这类无法搜索并且对它的内容也不感兴趣的文件夹。
+
+必要时,你可以使用 `-maxdepth 1` 选项将 `find` 限制在单个目录中:
+
+```
+$ find /home/shs -maxdepth 1 -type f | wc -l
+387
+```
+
+### 重命名文件
+
+使用 `mv` 命令可以很容易地重命名文件,但是有时你会想重命名大量文件,并且不想花费大量时间。例如,要将你在当前目录的文件名中找到的所有空格更改为下划线,你可以使用如下命令:
+
+```
+$ rename 's/ /_/g' *
+```
+
+如你怀疑的那样,此命令中的 `g` 表示“全局”。这意味着该命令会将文件名中的*所有*空格更改为下划线,而不仅仅是第一个。
+
+要从文本文件的文件名中删除 .txt 扩展名,可以使用如下命令(注意对点号进行转义,并用 `$` 锚定结尾,以免误改文件名中间的内容):
+
+```
+$ rename 's/\.txt$//' *
+```
+
+### 总结
+
+Linux 命令行提供了许多用于处理文件的有用选择。请提出你认为特别有用的其他命令。
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3440035/3-quick-tips-for-working-with-linux-files.html
+
+作者:[Sandra Henry-Stocker][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/09/file_key-100811696-large.jpg
+[2]: https://creativecommons.org/licenses/by/2.0/legalcode
+[4]: https://www.facebook.com/NetworkWorld/
+[5]: https://www.linkedin.com/company/network-world
diff --git a/published/20190925 Mirror your Android screen on your computer with Guiscrcpy.md b/published/20190925 Mirror your Android screen on your computer with Guiscrcpy.md
new file mode 100644
index 0000000000..a0c8a223bd
--- /dev/null
+++ b/published/20190925 Mirror your Android screen on your computer with Guiscrcpy.md
@@ -0,0 +1,110 @@
+[#]: collector: (lujun9972)
+[#]: translator: (amwps290)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11434-1.html)
+[#]: subject: (Mirror your Android screen on your computer with Guiscrcpy)
+[#]: via: (https://opensource.com/article/19/9/mirror-android-screen-guiscrcpy)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+
+使用 guiscrcpy 将你的安卓手机的屏幕投射到你的电脑
+======
+
+> 使用这个基于 scrcpy 的开源应用从你的电脑上访问你的安卓设备。
+
+![](https://img.linux.net.cn/data/attachment/album/201910/08/123143nlz718152v5nf5n8.png)
+
+在未来,你所需的一切信息皆触手可及,并且全部会以全息的形式出现在空中,即使你在驾驶汽车时也可以与之交互。不过,那是未来,在那一刻到来之前,我们所有人都只能将信息分散在笔记本电脑、手机、平板电脑和智能冰箱上。不幸的是,这意味着当我们需要某个设备上的信息时,通常就得去看那个设备。
+
+虽然不完全是像全息终端或飞行汽车那样酷炫,但 [srevin saju][3] 开发的 [guiscrcpy][2] 是一个可以在一个地方整合多个屏幕,让你有一点未来感觉的应用程序。
+
+Guiscrcpy 是一个基于屡获殊荣的开源引擎 [scrcpy][4] 的开源项目(GNU GPLv3 许可证)。使用 Guiscrcpy 可以将你的安卓手机的屏幕投射到你的电脑,这样你就可以查看手机上的一切东西。Guiscrcpy 支持 Linux、Windows 和 MacOS。
+
+不像其他 scrcpy 的替代软件一样,Guiscrcpy 并不仅仅是 scrcpy 的一个简单的复制品。该项目优先考虑了与其他开源项目的协作。因此,Guiscrcpy 对 scrcpy 来说是一个扩展,或者说是一个用户界面层。将 Python 3 GUI 与 scrcpy 分开可以确保没有任何东西干扰 scrcpy 后端的效率。得益于超快的渲染速度和超低的 CPU 占用,你可以投射 1080P 分辨率的屏幕,即使在低端的电脑上也能运行得很顺畅。
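+
+(LCTT 译注:这里补充一个原文没有的小例子,先感受一下底层 scrcpy 命令行的用法;下一段会提到,guiscrcpy 正是在它之上加了一层图形界面。选项名以 2019 年前后的 scrcpy 1.x 版本为准:)
+
+```
+# 先用 adb 连接并授权设备,然后限制分辨率、设定码率进行投屏
+$ scrcpy --max-size 1080 --bit-rate 8M
+
+# 只镜像画面、不控制手机
+$ scrcpy --no-control
+```
+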
+Scrcpy 是 Guiscrcpy 项目的基石。它是一个基于命令行的应用,因此没有处理手势操作的用户界面,也没有提供返回按钮和主页按钮,而且它需要你对 [Linux 终端][5]比较熟悉。Guiscrcpy 给 scrcpy 添加了图形面板,因此任何用户都可以使用它,而且不需要通过网络发送任何信息就可以投射和控制自己的设备。Guiscrcpy 同时也为 Windows 用户和 Linux 用户提供了编译好的二进制文件,以方便你的使用。
+
+### 安装 Guiscrcpy
+
+在你安装 Guiscrcpy 之前,你需要先安装它的依赖包,尤其是 scrcpy。安装 scrcpy 最简单的方式可能就是使用大多数 Linux 发行版都提供的 [snap][6] 工具。如果你的电脑上安装并使用了 snap,那么你就可以使用下面的命令来一步安装 scrcpy。
+
+```
+$ sudo snap install scrcpy
+```
+
+当你安装完 scrcpy,你就可以安装其他的依赖包了。[Simple DirectMedia Layer][7](SDL 2.0)是一个显示和控制你设备屏幕的工具包。[Android Debug Bridge][8](ADB)命令用于在电脑和你的安卓手机之间建立连接。
+
+在 Fedora 或者 CentOS:
+
+```
+$ sudo dnf install SDL2 android-tools
+```
+
+在 Ubuntu 或者 Debian:
+
+```
+$ sudo apt install SDL2 android-tools-adb
+```
+
+在另一个终端中,安装 Python 依赖项:
+
+```
+$ python3 -m pip install -r requirements.txt --user
+```
+
+### 设置你的手机
+
+为了能够让你的手机接受 adb 连接,必须让你的手机开启开发者选项。为了打开开发者选项,打开“设置”,然后选择“关于手机”,找到“版本号”(它也可能位于“软件信息”面板中)。难以置信的是,只要你连续点击“版本号”七次,就可以打开开发者选项。(LCTT 译注:显然这里是以 Google 原生的 Android 作为说明的,你的不同品牌的安卓手机打开开发者选项的方式或有不同。)
+
+![Enabling Developer Mode][9]
+
+更多更全面的连接手机的方式,请参考[安卓开发者文档][10]。
+
+一旦你设置好了你的手机,将你的手机通过 USB 线插入到你的电脑中(或者通过无线的方式进行连接,确保你已经配置好了无线连接)。
+
+### 使用 Guiscrcpy
+
+当你启动 guiscrcpy 的时候,你就能看到一个主控制窗口。点击窗口里的 “Start scrcpy” 按钮。只要你设置好了开发者模式,并且通过 USB 或者 WiFi 将你的手机连接到电脑,guiscrcpy 就会连接你的手机。
+
+![Guiscrcpy main screen][11]
+
+它还包括一个可写入的配置系统,可以将你的配置文件写入到 `~/.config` 目录,以便在使用前保存你的首选项。
+
+guiscrcpy 底部的面板是一个浮动的窗口,可以帮助你执行一些基本的控制动作。它包括了主页按钮、返回按钮、电源按钮以及一些其他的按键,这些按键在安卓手机上都非常常用。值得注意的是,这个模块并不与 scrcpy 的 SDL 交互,因此它可以毫无延迟地执行。换句话说,这个操作窗口是直接通过 adb 与你的手机进行交互,而不是通过 scrcpy。
+
+![guiscrcpy's bottom panel][12]
+
+这个项目目前十分活跃,不断地有新的特性加入其中。最新版本具有了手势操作和通知界面。
+
+有了这个 guiscrcpy,你不仅仅可以在你的电脑屏幕上看到你的手机,你还可以就像操作你的实体手机一样点击 SDL 窗口,或者使用浮动窗口上的按钮与之进行交互。
+
+![guiscrcpy running on Fedora 30][13]
+
+Guiscrcpy 是一个有趣且实用的应用程序,它提供的功能本应成为任何现代设备(尤其是 Android 之类的平台)的官方功能。自己尝试一下,为当今的数字生活增添一些未来主义的感觉。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/9/mirror-android-screen-guiscrcpy
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[amwps290](https://github.com/amwps290)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_laptop_hack_work.png?itok=aSpcWkcl (Coding on a computer)
+[2]: https://github.com/srevinsaju/guiscrcpy
+[3]: http://opensource.com/users/srevinsaju
+[4]: https://github.com/Genymobile/scrcpy
+[5]: https://www.redhat.com/sysadmin/navigating-filesystem-linux-terminal
+[6]: https://snapcraft.io/
+[7]: https://www.libsdl.org/
+[8]: https://developer.android.com/studio/command-line/adb
+[9]: https://opensource.com/sites/default/files/uploads/developer-mode.jpg (Enabling Developer Mode)
+[10]: https://developer.android.com/studio/debug/dev-options
+[11]: https://opensource.com/sites/default/files/uploads/guiscrcpy-main.png (Guiscrcpy main screen)
+[12]: https://opensource.com/sites/default/files/uploads/guiscrcpy-bottompanel.png (guiscrcpy's bottom panel)
+[13]: https://opensource.com/sites/default/files/uploads/guiscrcpy-screenshot.jpg (guiscrcpy running on Fedora 30)
diff --git a/published/20190926 How to Execute Commands on Remote Linux System over SSH.md b/published/20190926 How to Execute Commands on Remote Linux System over SSH.md
new file mode 100644
index 0000000000..944cd800c7
--- /dev/null
+++ b/published/20190926 How to 
Execute Commands on Remote Linux System over SSH.md @@ -0,0 +1,411 @@ +[#]: collector: (lujun9972) +[#]: translator: (alim0x) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11440-1.html) +[#]: subject: (How to Execute Commands on Remote Linux System over SSH) +[#]: via: (https://www.2daygeek.com/execute-run-linux-commands-remote-system-over-ssh/) +[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) + +如何通过 SSH 在远程 Linux 系统上运行命令 +====== + +我们有时可能需要在远程机器上运行一些命令。如果只是偶尔进行的操作,要实现这个目的,可以登录到远程系统上直接执行命令。但是每次都这么做的话,就有点烦人了。既然如此,有没有摆脱这种麻烦操作的更佳方案? + +是的,你可以从你本地系统上执行这些操作,而不用登录到远程系统上。这有什么好处吗?毫无疑问。这会为你节省很多好时光。 + +这是怎么实现的?SSH 允许你无需登录到远程计算机就可以在它上面运行命令。 + +**通用语法如下所示:** + +``` +$ ssh [用户名]@[远程主机名或 IP] [命令或脚本] +``` + +### 1) 如何通过 SSH 在远程 Linux 系统上运行命令 + +下面的例子允许用户通过 ssh 在远程 Linux 机器上运行 [df 命令][1]。 + +``` +$ ssh daygeek@CentOS7.2daygeek.com df -h + + Filesystem Size Used Avail Use% Mounted on + /dev/mapper/centos-root 27G 4.4G 23G 17% / + devtmpfs 903M 0 903M 0% /dev + tmpfs 920M 0 920M 0% /dev/shm + tmpfs 920M 9.3M 910M 2% /run + tmpfs 920M 0 920M 0% /sys/fs/cgroup + /dev/sda1 1014M 179M 836M 18% /boot + tmpfs 184M 8.0K 184M 1% /run/user/42 + tmpfs 184M 0 184M 0% /run/user/1000 +``` + +### 2) 如何通过 SSH 在远程 Linux 系统上运行多条命令 + +下面的例子允许用户通过 ssh 在远程 Linux 机器上一次运行多条命令。 + +同时在远程 Linux 系统上运行 `uptime` 命令和 `free` 命令。 + +``` +$ ssh daygeek@CentOS7.2daygeek.com "uptime && free -m" + + 23:05:10 up 10 min, 0 users, load average: 0.00, 0.03, 0.03 + + total used free shared buffers cached + Mem: 1878 432 1445 1 100 134 + -/+ buffers/cache: 197 1680 + Swap: 3071 0 3071 +``` + +### 3) 如何通过 SSH 在远程 Linux 系统上运行带 sudo 权限的命令 + +下面的例子允许用户通过 ssh 在远程 Linux 机器上运行带有 [sudo 权限][2] 的 `fdisk` 命令。 + +普通用户不允许执行系统二进制(`/usr/sbin/`)目录下提供的命令。用户需要 root 权限来运行它。 + +所以你需要 root 权限,好在 Linux 系统上运行 [fdisk 命令][3]。`which` 命令返回给定命令的完整可执行路径。 + +``` +$ which fdisk + /usr/sbin/fdisk +``` + +``` +$ ssh -t daygeek@CentOS7.2daygeek.com "sudo fdisk -l" + [sudo] password for daygeek: + + Disk /dev/sda: 32.2 GB, 32212254720 bytes, 62914560 sectors + Units = sectors of 1 * 512 = 512 bytes + Sector size (logical/physical): 512 bytes / 512 bytes + I/O size (minimum/optimal): 512 bytes / 512 bytes + Disk label type: dos + Disk identifier: 0x000bf685 + + Device Boot Start End Blocks Id System + /dev/sda1 * 2048 2099199 1048576 83 Linux + /dev/sda2 2099200 62914559 30407680 8e Linux LVM + + Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors + Units = sectors of 1 * 512 = 512 bytes + Sector size (logical/physical): 512 bytes / 512 bytes + I/O size (minimum/optimal): 512 bytes / 512 bytes + + Disk /dev/mapper/centos-root: 29.0 GB, 28982640640 bytes, 56606720 sectors + Units = sectors of 1 * 512 = 512 bytes + Sector size (logical/physical): 512 bytes / 512 bytes + I/O size (minimum/optimal): 512 bytes / 512 bytes + + Disk /dev/mapper/centos-swap: 2147 MB, 2147483648 bytes, 4194304 sectors + Units = sectors of 1 * 512 = 512 bytes + Sector size (logical/physical): 512 bytes / 512 bytes + I/O size (minimum/optimal): 512 bytes / 512 bytes + + Connection to centos7.2daygeek.com closed. +``` + +### 4) 如何通过 SSH 在远程 Linux 系统上运行带 sudo 权限的服务控制命令 + +下面的例子允许用户通过 ssh 在远程 Linux 机器上运行带有 sudo 权限的服务控制命令。 + +``` +$ ssh -t daygeek@CentOS7.2daygeek.com "sudo systemctl restart httpd" + + [sudo] password for daygeek: + Connection to centos7.2daygeek.com closed. 
+``` + +### 5) 如何通过非标准端口 SSH 在远程 Linux 系统上运行命令 + +下面的例子允许用户通过 ssh 在使用了非标准端口的远程 Linux 机器上运行 [hostnamectl 命令][4]。 + +``` +$ ssh -p 2200 daygeek@CentOS7.2daygeek.com hostnamectl + + Static hostname: Ubuntu18.2daygeek.com + Icon name: computer-vm + Chassis: vm + Machine ID: 27f6c2febda84dc881f28fd145077187 + Boot ID: bbeccdf932be41ddb5deae9e5f15183d + Virtualization: oracle + Operating System: Ubuntu 18.04.2 LTS + Kernel: Linux 4.15.0-60-generic + Architecture: x86-64 +``` + +### 6) 如何将远程系统的输出保存到本地系统 + +下面的例子允许用户通过 ssh 在远程 Linux 机器上运行 [top 命令][5],并将输出保存到本地系统。 + +``` +$ ssh daygeek@CentOS7.2daygeek.com "top -bc | head -n 35" > /tmp/top-output.txt +``` + +``` +cat /tmp/top-output.txt + + top - 01:13:11 up 18 min, 1 user, load average: 0.01, 0.05, 0.10 + Tasks: 168 total, 1 running, 167 sleeping, 0 stopped, 0 zombie + %Cpu(s): 0.0 us, 6.2 sy, 0.0 ni, 93.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st + KiB Mem : 1882300 total, 1176324 free, 342392 used, 363584 buff/cache + KiB Swap: 2097148 total, 2097148 free, 0 used. 1348140 avail Mem + PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND + 4943 daygeek 20 0 162052 2248 1612 R 10.0 0.1 0:00.07 top -bc + 1 root 20 0 128276 6936 4204 S 0.0 0.4 0:03.08 /usr/lib/sy+ + 2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [kthreadd] + 3 root 20 0 0 0 0 S 0.0 0.0 0:00.25 [ksoftirqd/+ + 4 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [kworker/0:+ + 5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kworker/0:+ + 7 root rt 0 0 0 0 S 0.0 0.0 0:00.00 [migration/+ + 8 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [rcu_bh] + 9 root 20 0 0 0 0 S 0.0 0.0 0:00.77 [rcu_sched] + 10 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [lru-add-dr+ + 11 root rt 0 0 0 0 S 0.0 0.0 0:00.01 [watchdog/0] + 13 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [kdevtmpfs] + 14 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [netns] + 15 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [khungtaskd] + 16 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [writeback] + 17 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kintegrity+ + 18 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [bioset] + 19 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [bioset] + 20 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [bioset] +``` + +或者你也可以使用以下格式在远程系统上运行多条命令: + +``` +$ ssh daygeek@CentOS7.2daygeek.com << EOF +hostnamectl +free -m +grep daygeek /etc/passwd +EOF +``` + +上面命令的输出如下: + +``` +Pseudo-terminal will not be allocated because stdin is not a terminal. 
+ Static hostname: CentOS7.2daygeek.com + Icon name: computer-vm + Chassis: vm + Machine ID: 002f47b82af248f5be1d67b67e03514c + Boot ID: dca9a1ba06374d7d96678f9461752482 + Virtualization: kvm + Operating System: CentOS Linux 7 (Core) + CPE OS Name: cpe:/o:centos:centos:7 + Kernel: Linux 3.10.0-957.el7.x86_64 + Architecture: x86-64 + + total used free shared buff/cache available + Mem: 1838 335 1146 11 355 1314 + Swap: 2047 0 2047 + + daygeek:x:1000:1000:2daygeek:/home/daygeek:/bin/bash +``` + +### 7) 如何在远程系统上运行本地 Bash 脚本 + +下面的例子允许用户通过 ssh 在远程 Linux 机器上运行本地 [bash 脚本][5] `remote-test.sh`。 + +创建一个 shell 脚本并执行它。 + +``` +$ vi /tmp/remote-test.sh + +#!/bin/bash +#Name: remote-test.sh +#-------------------- + uptime + free -m + df -h + uname -a + hostnamectl +``` + +上面命令的输出如下: + +``` +$ ssh daygeek@CentOS7.2daygeek.com 'bash -s' < /tmp/remote-test.sh + + 01:17:09 up 22 min, 1 user, load average: 0.00, 0.02, 0.08 + + total used free shared buff/cache available + Mem: 1838 333 1148 11 355 1316 + Swap: 2047 0 2047 + + Filesystem Size Used Avail Use% Mounted on + /dev/mapper/centos-root 27G 4.4G 23G 17% / + devtmpfs 903M 0 903M 0% /dev + tmpfs 920M 0 920M 0% /dev/shm + tmpfs 920M 9.3M 910M 2% /run + tmpfs 920M 0 920M 0% /sys/fs/cgroup + /dev/sda1 1014M 179M 836M 18% /boot + tmpfs 184M 12K 184M 1% /run/user/42 + tmpfs 184M 0 184M 0% /run/user/1000 + + Linux CentOS7.2daygeek.com 3.10.0-957.el7.x86_64 #1 SMP Thu Nov 8 23:39:32 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux + + Static hostname: CentOS7.2daygeek.com + Icon name: computer-vm + Chassis: vm + Machine ID: 002f47b82af248f5be1d67b67e03514c + Boot ID: dca9a1ba06374d7d96678f9461752482 + Virtualization: kvm + Operating System: CentOS Linux 7 (Core) + CPE OS Name: cpe:/o:centos:centos:7 + Kernel: Linux 3.10.0-957.el7.x86_64 + Architecture: x86-64 +``` + +或者也可以使用管道。如果你觉得输出不太好看,再做点修改让它更优雅些。 + +``` +$ vi /tmp/remote-test-1.sh + +#!/bin/bash +#Name: remote-test.sh + echo "---------System Uptime--------------------------------------------" + uptime + echo -e "\n" + echo "---------Memory Usage---------------------------------------------" + free -m + echo -e "\n" + echo "---------Disk Usage-----------------------------------------------" + df -h + echo -e "\n" + echo "---------Kernel Version-------------------------------------------" + uname -a + echo -e "\n" + echo "---------HostName Info--------------------------------------------" + hostnamectl + echo "------------------------------------------------------------------" +``` + +上面脚本的输出如下: + +``` +$ cat /tmp/remote-test.sh | ssh daygeek@CentOS7.2daygeek.com + Pseudo-terminal will not be allocated because stdin is not a terminal. 
+ ---------System Uptime--------------------------------------------
+ 03:14:09 up 2:19, 1 user, load average: 0.00, 0.01, 0.05
+
+ ---------Memory Usage---------------------------------------------
+ total used free shared buff/cache available
+ Mem: 1838 376 1063 11 398 1253
+ Swap: 2047 0 2047
+
+ ---------Disk Usage-----------------------------------------------
+ Filesystem Size Used Avail Use% Mounted on
+ /dev/mapper/centos-root 27G 4.4G 23G 17% /
+ devtmpfs 903M 0 903M 0% /dev
+ tmpfs 920M 0 920M 0% /dev/shm
+ tmpfs 920M 9.3M 910M 2% /run
+ tmpfs 920M 0 920M 0% /sys/fs/cgroup
+ /dev/sda1 1014M 179M 836M 18% /boot
+ tmpfs 184M 12K 184M 1% /run/user/42
+ tmpfs 184M 0 184M 0% /run/user/1000
+ tmpfs 184M 0 184M 0% /run/user/0
+
+ ---------Kernel Version-------------------------------------------
+ Linux CentOS7.2daygeek.com 3.10.0-957.el7.x86_64 #1 SMP Thu Nov 8 23:39:32 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
+
+ ---------HostName Info--------------------------------------------
+ Static hostname: CentOS7.2daygeek.com
+ Icon name: computer-vm
+ Chassis: vm
+ Machine ID: 002f47b82af248f5be1d67b67e03514c
+ Boot ID: dca9a1ba06374d7d96678f9461752482
+ Virtualization: kvm
+ Operating System: CentOS Linux 7 (Core)
+ CPE OS Name: cpe:/o:centos:centos:7
+ Kernel: Linux 3.10.0-957.el7.x86_64
+ Architecture: x86-64
+```
+
+### 8) 如何同时在多个远程系统上运行多条指令
+
+下面的 bash 脚本允许用户同时在多个远程系统上运行多条指令,使用简单的 `for` 循环实现。
+
+为了实现这个目的,你也可以尝试 [PSSH 命令][7]、[ClusterShell 命令][8] 或 [DSH 命令][9]。
+
+```
+$ vi /tmp/multiple-host.sh
+
+ for host in CentOS7.2daygeek.com CentOS6.2daygeek.com
+ do
+ ssh daygeek@${host} "uname -a;uptime;date;w"
+ done
+```
+
+上面脚本的输出如下:
+
+```
+$ sh multiple-host.sh
+
+ Linux CentOS7.2daygeek.com 3.10.0-957.el7.x86_64 #1 SMP Thu Nov 8 23:39:32 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
+
+ 01:33:57 up 39 min, 1 user, load average: 0.07, 0.06, 0.06
+
+ Wed Sep 25 01:33:57 CDT 2019
+
+ 01:33:57 up 39 min, 1 user, load average: 0.07, 0.06, 0.06
+ USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
+ daygeek pts/0 192.168.1.6 01:08 23:25 0.06s 0.06s -bash
+
+ Linux CentOS6.2daygeek.com 2.6.32-754.el6.x86_64 #1 SMP Tue Jun 19 21:26:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
+
+ 23:33:58 up 39 min, 0 users, load average: 0.00, 0.00, 0.00
+
+ Tue Sep 24 23:33:58 MST 2019
+
+ 23:33:58 up 39 min, 0 users, load average: 0.00, 0.00, 0.00
+ USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
+```
+
+### 9) 如何使用 sshpass 命令添加一个密码
+
+如果你觉得每次输入密码很麻烦,我建议你视你的需求选择以下方法中的一项来解决这个问题。
+
+如果你经常进行类似的操作,我建议你设置 [免密码认证][10],因为它是标准且永久的解决方案。
+
+如果你一个月只是执行几次这些任务,我推荐你使用 `sshpass` 工具。只需要使用 `-p` 参数选项提供你的密码即可。
+
+```
+$ sshpass -p '在这里输入你的密码' ssh -p 2200 daygeek@CentOS7.2daygeek.com ip a
+
+ 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
+ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
+ inet 127.0.0.1/8 scope host lo
+ valid_lft forever preferred_lft forever
+ inet6 ::1/128 scope host
+ valid_lft forever preferred_lft forever
+ 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
+ link/ether 08:00:27:18:90:7f brd ff:ff:ff:ff:ff:ff
+ inet 192.168.1.12/24 brd 192.168.1.255 scope global dynamic eth0
+ valid_lft 86145sec preferred_lft 86145sec
+ inet6 fe80::a00:27ff:fe18:907f/64 scope link tentative dadfailed
+ valid_lft forever preferred_lft forever
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/execute-run-linux-commands-remote-system-over-ssh/
+
+作者:[Magesh Maruthamuthu][a] 
+选题:[lujun9972][b] +译者:[alim0x](https://github.com/alim0x) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/magesh/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/linux-check-disk-space-usage-df-command/ +[2]: https://www.2daygeek.com/how-to-configure-sudo-access-in-linux/ +[3]: https://www.2daygeek.com/linux-fdisk-command-to-manage-disk-partitions/ +[4]: https://www.2daygeek.com/four-methods-to-change-the-hostname-in-linux/ +[5]: https://www.2daygeek.com/understanding-linux-top-command-output-usage/ +[6]: https://www.2daygeek.com/category/shell-script/ +[7]: https://www.2daygeek.com/pssh-parallel-ssh-run-execute-commands-on-multiple-linux-servers/ +[8]: https://www.2daygeek.com/clustershell-clush-run-commands-on-cluster-nodes-remote-system-in-parallel-linux/ +[9]: https://www.2daygeek.com/dsh-run-execute-shell-commands-on-multiple-linux-servers-at-once/ +[10]: https://www.2daygeek.com/configure-setup-passwordless-ssh-key-based-authentication-linux/ diff --git a/published/20190926 You Can Now Use OneDrive in Linux Natively Thanks to Insync.md b/published/20190926 You Can Now Use OneDrive in Linux Natively Thanks to Insync.md new file mode 100644 index 0000000000..9b850bca1e --- /dev/null +++ b/published/20190926 You Can Now Use OneDrive in Linux Natively Thanks to Insync.md @@ -0,0 +1,103 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11433-1.html) +[#]: subject: (You Can Now Use OneDrive in Linux Natively Thanks to Insync) +[#]: via: (https://itsfoss.com/use-onedrive-on-linux/) +[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) + +现在你可以借助 Insync 在 Linux 中原生使用 OneDrive +====== + +[OneDrive][1] 是微软的一项云存储服务,它为每个用户提供 5GB 的免费存储空间。它已与微软帐户集成,如果你使用 Windows,那么已在其中预安装了 OneDrive。 + +OneDrive 无法在 Linux 中作为桌面应用使用。你可以通过网页访问已存储的文件,但无法像在文件管理器中那样使用云存储。 + +好消息是,你现在可以使用一个非官方工具,它可让你在 Ubuntu 或其他 Linux 发行版中使用 OneDrive。 + +当 [Insync][2] 在 Linux 上支持 Google Drive 时,它变成了 Linux 上非常流行的高级第三方同步工具。我们有篇对 [Insync 支持 Google Drive][3] 的详细点评文章。 + +而最近[发布的 Insync 3][4] 支持了 OneDrive。因此在本文中,我们将看下如何在 Insync 中使用 OneDrive 以及它的新功能。 + +> 非 FOSS 警告 + +> 少数开发者会对非 FOSS 软件引入 Linux 感到痛苦。作为专注于桌面 Linux 的门户,即使不是 FOSS,我们也会在此介绍此类软件。 + +> Insync 3 既不是开源软件,也不免费使用。你只有 15 天的试用期进行测试。如果你喜欢它,那么可以按每个帐户终生 29.99 美元的费用购买。 + +> 我们不会拿钱来推广它们(以防你这么想)。我们不会在这里这么做。 + +### 在 Linux 中通过 Insync 获得原生 OneDrive 体验 + +![][5] + +尽管它是一个付费工具,但依赖 OneDrive 的用户或许希望在他们的 Linux 系统中获得同步 OneDrive 的无缝体验。 + +首先,你需要从[官方页面][6]下载适合你 Linux 发行版的软件包。 + +- [下载 Insync][7] + +你也可以选择添加仓库并进行安装。你将在 Insync 的[官方网站][7]看到说明。 + +安装完成后,只需启动并选择 OneDrive 选项。 + +![][8] + +另外,要注意的是,你添加的每个 OneDrive 或 Google Drive 帐户都需要单独的许可证。 + +现在,在授权 OneDrive 帐户后,你必须选择一个用于同步所有内容的基础文件夹,这是 Insync 3 中的一项新功能。 + +![Insync 3 Base Folder][9] + +除此之外,设置完成后,你还可以选择性地同步本地或云端的文件/文件夹。 + +![Insync Selective Sync][10] + +你还可以通过添加自己的规则来自定义同步选项,以忽略/同步所需的文件夹和文件,这完全是可选的。 + +![Insync Customize Sync Preferences][11] + +最后,就这样完成了。 + +![Insync 3][12] + +你现在可以在包括带有 Insync 的 Linux 桌面在内的多个平台使用 OneDrive 开始同步文件/文件夹。除了上面所有新功能/更改之外,你还可以在 Insync 上获得更快/更流畅的体验。 + +此外,借助 Insync 3,你可以查看同步进度: + +![][13] + +### 总结 + +总的来说,对于希望在 Linux 系统上同步 OneDrive 的用户而言,Insync 3 是令人印象深刻的升级。如果你不想付款,你可以尝试其他 [Linux 的免费云服务][14]。 + +你如何看待 Insync?如果你已经在使用它,到目前为止的体验如何?在下面的评论中让我们知道你的想法。 + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/use-onedrive-on-linux/ + +作者:[Ankush 
Das][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[b]: https://github.com/lujun9972 +[1]: https://onedrive.live.com +[2]: https://www.insynchq.com +[3]: https://itsfoss.com/insync-linux-review/ +[4]: https://www.insynchq.com/blog/insync-3/ +[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/onedrive-linux.png?ssl=1 +[6]: https://www.insynchq.com/downloads?start=true +[7]: https://www.insynchq.com/downloads +[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/insync-3one-drive-sync.png?ssl=1 +[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/insync-3-base-folder-1.png?ssl=1 +[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/insync-selective-syncs.png?ssl=1 +[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/insync-customize-sync.png?ssl=1 +[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/insync-homescreen.png?ssl=1 +[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/insync-3-progress-bar.png?ssl=1 +[14]: https://itsfoss.com/cloud-services-linux/ diff --git a/published/20190927 CentOS 8 Installation Guide with Screenshots.md b/published/20190927 CentOS 8 Installation Guide with Screenshots.md new file mode 100644 index 0000000000..7e1a49882c --- /dev/null +++ b/published/20190927 CentOS 8 Installation Guide with Screenshots.md @@ -0,0 +1,251 @@ +[#]: collector: (lujun9972) +[#]: translator: (HankChow) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11438-1.html) +[#]: subject: (CentOS 8 Installation Guide with Screenshots) +[#]: via: (https://www.linuxtechi.com/centos-8-installation-guide-screenshots/) +[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/) + +CentOS 8 安装图解 +====== + +继 RHEL 8 发布之后,CentOS 社区也发布了让人期待已久的 CentOS 8,并发布了两种模式: + +* CentOS stream:滚动发布的 Linux 发行版,适用于需要频繁更新的开发者 +* CentOS:类似 RHEL 8 的稳定操作系统,系统管理员可以用其部署或配置服务和应用 + +在这篇文章中,我们会使用图解的方式演示 CentOS 8 的安装方法。 + +### CentOS 8 的新特性 + +* DNF 成为了默认的软件包管理器,同时 yum 仍然是可用的 +* 使用网络管理器(`nmcli` 和 `nmtui`)进行网络配置,移除了网络脚本 +* 使用 Podman 进行容器管理 +* 引入了两个新的包仓库:BaseOS 和 AppStream +* 使用 Cockpit 作为默认的系统管理工具 +* 默认使用 Wayland 作为显示服务器 +* `iptables` 将被 `nftables` 取代 +* 使用 Linux 内核 4.18 +* 提供 PHP 7.2、Python 3.6、Ansible 2.8、VIM 8.0 和 Squid 4 + +### CentOS 8 所需的最低硬件配置: + +* 2 GB RAM +* 64 位 x86 架构、2 GHz 或以上的 CPU +* 20 GB 硬盘空间 + +### CentOS 8 安装图解 + +#### 第一步:下载 CentOS 8 ISO 文件 + +在 CentOS 官方网站 下载 CentOS 8 ISO 文件。 + +#### 第二步: 创建 CentOS 8 启动介质(USB 或 DVD) + +下载 CentOS 8 ISO 文件之后,将 ISO 文件烧录到 USB 移动硬盘或 DVD 光盘中,作为启动介质。 + +然后重启系统,在 BIOS 中设置为从上面烧录好的启动介质启动。 + +#### 第三步:选择“安装 CentOS Linux 8.0”选项 + +当系统从 CentOS 8 ISO 启动介质启动之后,就可以看到以下这个界面。选择“Install CentOS Linux 8.0”(安装 CentOS Linux 8.0)选项并按回车。 + +![Choose-Install-CentOS8][2] + +#### 第四步:选择偏好语言 + +选择想要在 CentOS 8 **安装过程**中使用的语言,然后继续。 + +![Select-Language-CentOS8-Installation][3] + +#### 第五步:准备安装 CentOS 8 + +这一步我们会配置以下内容: + +* 键盘布局 +* 日期和时间 +* 安装来源 +* 软件选择 +* 安装目标 +* Kdump + +![Installation-Summary-CentOS8][4] + +如上图所示,安装向导已经自动提供了“键盘布局Keyboard”、“时间和日期Time & Date”、“安装来源Installation Source”和“软件选择Software Selection”的选项。 + +如果你需要修改以上设置,点击对应的图标就可以了。例如修改系统的时间和日期,只需要点击“时间和日期Time & Date”,选择正确的时区,然后点击“完成Done”即可。 + +![TimeZone-CentOS8-Installation][5] + +在软件选择选项中选择安装的模式。例如“包含图形界面Server with GUI”选项会在安装后的系统中提供图形界面,而如果想安装尽可能少的额外软件,可以选择“最小化安装Minimal Install”。 + +![Software-Selection-CentOS8-Installation][6] + 
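+
+(LCTT 译注:补充一点原文没有的内容 —— 即使这一步选择了“最小化安装Minimal Install”,装好系统后也可以用 dnf 的软件组把图形界面补装回来,大致如下,假设系统此时已可联网:)
+
+```
+# 列出可用的环境组
+dnf group list
+# 补装带图形界面的服务器环境(需要 root 权限)
+dnf groupinstall "Server with GUI"
+```
+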
+这里我们选择“包含图形界面Server with GUI”,点击“完成Done”。 + +Kdump 功能默认是开启的。尽管这是一个强烈建议开启的功能,但也可以点击对应的图标将其关闭。 + +如果想要在安装过程中对网络进行配置,可以点击“网络与主机名Network & Host Name”选项。 + +![Networking-During-CentOS8-Installation][7] + +如果系统连接到启用了 DHCP 功能的调制解调器上,就会在启动网络接口的时候自动获取一个 IP 地址。如果需要配置静态 IP,点击“配置Configure”并指定 IP 的相关信息。除此以外我们还将主机名设置为 “linuxtechi.com”。 + +完成网络配置后,点击“完成Done”。 + +最后我们要配置“安装目标Installation Destination”,指定 CentOS 8 将要安装到哪一个硬盘,以及相关的分区方式。 + +![Installation-Destination-Custom-CentOS8][8] + +点击“完成Done”。 + +如图所示,我为 CentOS 8 分配了 40 GB 的硬盘空间。有两种分区方案可供选择:如果由安装向导进行自动分区,可以从“存储配置Storage Configuration”中选择“自动Automatic”选项;如果想要自己手动进行分区,可以选择“自定义Custom”选项。 + +在这里我们选择“自定义Custom”选项,并按照以下的方式创建基于 LVM 的分区: + +* `/boot` – 2 GB (ext4 文件系统) +* `/` – 12 GB (xfs 文件系统) +* `/home` – 20 GB (xfs 文件系统) +* `/tmp` – 5 GB (xfs 文件系统) +* Swap – 1 GB (xfs 文件系统) + +首先创建 `/boot` 标准分区,设置大小为 2GB,如下图所示: + +![boot-partition-CentOS8-Installation][9] + +点击“添加挂载点Add mount point”。 + +再创建第二个分区 `/`,并设置大小为 12GB。点击加号,指定挂载点和分区大小,点击“添加挂载点Add mount point”即可。 + +![slash-root-partition-centos8-installation][10] + +然后在页面上将 `/` 分区的分区类型从标准更改为 LVM,并点击“更新设置Update Settings”。 + +![Change-Partition-Type-CentOS8][11] + +如上图所示,安装向导已经自动创建了一个卷组。如果想要更改卷组的名称,只需要点击“卷组Volume Group”标签页中的“修改Modify”选项。 + +同样地,创建 `/home` 分区和 `/tmp` 分区,分别将大小设置为 20GB 和 5GB,并设置分区类型为 LVM。 + +![home-partition-CentOS8-Installation][12] + +![tmp-partition-centos8-installation][13] + +最后创建交换分区Swap Partition。 + +![Swap-Partition-CentOS8-Installation][14] + +点击“添加挂载点Add mount point”。 + +在完成所有分区设置后,点击“完成Done”。 + +![Choose-Done-after-manual-partition-centos8][15] + +在下一个界面,点击“应用更改Accept changes”,以上做的更改就会写入到硬盘中。 + +![Accept-changes-CentOS8-Installation][16] + +#### 第六步:选择“开始安装” + +完成上述的所有更改后,回到先前的安装概览界面,点击“开始安装Begin Installation”以开始安装 CentOS 8。 + +![Begin-Installation-CentOS8][17] + +下面这个界面表示安装过程正在进行中。 + +![Installation-progress-centos8][18] + +要设置 root 用户的口令,只需要点击 “root 口令Root Password”选项,输入一个口令,然后点击“创建用户User Creation”选项创建一个本地用户。 + +![Root-Password-CentOS8-Installation][19] + +填写新创建的用户的详细信息。 + +![Local-User-Details-CentOS8][20] + +在安装完成后,安装向导会提示重启系统。 + +![CentOS8-Installation-Progress][21] + +#### 第七步:完成安装并重启系统 + +安装完成后要重启系统。只需点击“重启Reboot”按钮。 + +![Installation-Completed-CentOS8][22] + +注意:重启完成后,记得要把安装介质断开,并将 BIOS 的启动介质设置为硬盘。 + +#### 第八步:启动新安装的 CentOS 8 并接受许可协议 + +在 GRUB 引导菜单中,选择 CentOS 8 进行启动。 + +![Grub-Boot-CentOS8][23] + +同意 CentOS 8 的许可证,点击“完成Done”。 + +![Accept-License-CentOS8-Installation][24] + +在下一个界面,点击“完成配置Finish Configuration”。 + +![Finish-Configuration-CentOS8-Installation][25] + +#### 第九步:配置完成后登录 + +同意 CentOS 8 的许可证以及完成配置之后,会来到登录界面。 + +![Login-screen-CentOS8][26] + +使用刚才创建的用户以及对应的口令登录,按照提示进行操作,就可以看到以下界面。 + +![CentOS8-Ready-Use-Screen][27] + +点击“开始使用 CentOS LinuxStart Using CentOS Linux”。 + +![Desktop-Screen-CentOS8][28] + +以上就是 CentOS 8 的安装过程,至此我们已经完成了 CentOS 8 的安装。 + +欢迎给我们发送评论。 + +-------------------------------------------------------------------------------- + +via: https://www.linuxtechi.com/centos-8-installation-guide-screenshots/ + +作者:[Pradeep Kumar][a] +选题:[lujun9972][b] +译者:[HankChow](https://github.com/HankChow) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linuxtechi.com/author/pradeep/ +[b]: https://github.com/lujun9972 +[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[2]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Choose-Install-CentOS8.jpg +[3]: 
https://www.linuxtechi.com/wp-content/uploads/2019/09/Select-Language-CentOS8-Installation.jpg +[4]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Installation-Summary-CentOS8.jpg +[5]: https://www.linuxtechi.com/wp-content/uploads/2019/09/TimeZone-CentOS8-Installation.jpg +[6]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Software-Selection-CentOS8-Installation.jpg +[7]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Networking-During-CentOS8-Installation.jpg +[8]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Installation-Destination-Custom-CentOS8.jpg +[9]: https://www.linuxtechi.com/wp-content/uploads/2019/09/boot-partition-CentOS8-Installation.jpg +[10]: https://www.linuxtechi.com/wp-content/uploads/2019/09/slash-root-partition-centos8-installation.jpg +[11]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Change-Partition-Type-CentOS8.jpg +[12]: https://www.linuxtechi.com/wp-content/uploads/2019/09/home-partition-CentOS8-Installation.jpg +[13]: https://www.linuxtechi.com/wp-content/uploads/2019/09/tmp-partition-centos8-installation.jpg +[14]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Swap-Partition-CentOS8-Installation.jpg +[15]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Choose-Done-after-manual-partition-centos8.jpg +[16]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Accept-changes-CentOS8-Installation.jpg +[17]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Begin-Installation-CentOS8.jpg +[18]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Installation-progress-centos8.jpg +[19]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Root-Password-CentOS8-Installation.jpg +[20]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Local-User-Details-CentOS8.jpg +[21]: https://www.linuxtechi.com/wp-content/uploads/2019/09/CentOS8-Installation-Progress.jpg +[22]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Installation-Completed-CentOS8.jpg +[23]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Grub-Boot-CentOS8.jpg +[24]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Accept-License-CentOS8-Installation.jpg +[25]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Finish-Configuration-CentOS8-Installation.jpg +[26]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Login-screen-CentOS8.jpg +[27]: https://www.linuxtechi.com/wp-content/uploads/2019/09/CentOS8-Ready-Use-Screen.jpg +[28]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Desktop-Screen-CentOS8.jpg diff --git a/published/20190929 Bash Script to Generate System Uptime Reports on Linux.md b/published/20190929 Bash Script to Generate System Uptime Reports on Linux.md new file mode 100644 index 0000000000..8f147863b7 --- /dev/null +++ b/published/20190929 Bash Script to Generate System Uptime Reports on Linux.md @@ -0,0 +1,134 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11455-1.html) +[#]: subject: (Bash Script to Generate System Uptime Reports on Linux) +[#]: via: (https://www.2daygeek.com/bash-script-generate-linux-system-uptime-reports/) +[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) + +生成 Linux 运行时间报告的 Bash 脚本 +====== + +出于一些原因,你可能需要每月收集一次 [Linux 系统运行时间][1]报告。如果是这样,你可以根据需要使用以下 [bash 脚本][2] 之一。 + +我们为什么要收集这份报告?在一段时间后重启 Linux 服务器是解决某些未解决问题的好方法。(LCTT 译注:本文这些观点值得商榷,很多服务器可以稳定运行几千天,尤其是有了内核热补丁之后,启动并不是必须的。) + +建议每 180 
天重新启动一次,但具体周期也许取决于你公司的政策。如果你的服务器已经长时间运行而没有重启,这可能导致服务器上出现一些性能或内存问题,我在许多服务器上都注意到了这一点。
+
+这些脚本会一次性收集所有服务器的运行时间报告。
+
+### 什么是 uptime 命令
+
+`uptime` 命令将告诉你系统已经运行了多长时间。它在一行中显示以下信息:当前时间、系统运行了多长时间、当前登录了多少用户以及过去 1、5 和 15 分钟的平均系统负载。
+
+### 什么是 tuptime?
+
+[tuptime][3] 是用于报告系统的历史和统计运行时间的工具,可在重启之间保存这些数据。它类似于 `uptime` 命令,但输出更有趣。
+
+### 1)检查 Linux 系统运行时间的 Bash 脚本
+
+该 bash 脚本会收集所有服务器的正常运行时间,并将报告发送到给定的电子邮箱地址。
+
+请把其中的电子邮箱地址替换为你自己的,否则你将不会收到邮件。
+
+```
+# vi /opt/scripts/system-uptime-script.sh
+
+#!/bin/bash
+> /tmp/uptime-report.out
+for host in $(cat /tmp/servers.txt)
+do
+echo -n "$host: "
+ssh $host uptime | awk '{print $3,$4}' | sed 's/,//'
+done | column -t >> /tmp/uptime-report.out
+cat /tmp/uptime-report.out | mail -s "Linux Servers Uptime Report" "2daygeek@gmail.com"
+```
+
+给 `system-uptime-script.sh` 设置可执行权限。
+
+```
+$ chmod +x /opt/scripts/system-uptime-script.sh
+```
+
+最后运行 bash 脚本获取输出。
+
+```
+# sh /opt/scripts/system-uptime-script.sh
+```
+
+你将收到类似以下的报告。
+
+```
+# cat /tmp/uptime-report.out
+
+192.168.1.5: 2 days
+192.168.1.6: 15 days
+192.168.1.7: 30 days
+192.168.1.8: 7 days
+192.168.1.9: 67 days
+192.168.1.10: 130 days
+192.168.1.11: 23 days
+```
+
+### 2)检查 Linux 系统是否运行了 30 天以上的 Bash 脚本
+
+此 bash 脚本会收集运行 30 天以上的服务器,并将报告发送到指定的邮箱地址。你可以根据需要更改天数。
+
+```
+# vi /opt/scripts/system-uptime-script-1.sh
+
+#!/bin/bash
+> /tmp/uptime-report-1.out
+for host in $(cat /tmp/servers.txt)
+do
+echo -n "$host: "
+ssh $host uptime | awk '{print $3,$4}' | sed 's/,//'
+done | column -t >> /tmp/uptime-report-1.out
+cat /tmp/uptime-report-1.out | awk ' $2 >= 30' > /tmp/uptime-report-2.out
+cat /tmp/uptime-report-2.out | mail -s "Linux Servers Uptime Report" "2daygeek@gmail.com"
+```
+
+给 `system-uptime-script-1.sh` 设置可执行权限。
+
+```
+$ chmod +x /opt/scripts/system-uptime-script-1.sh
+```
+
+最后添加一条 [cronjob][4] 来自动执行。它会在每天早上 7 点运行。
+
+```
+# crontab -e
+
+0 7 * * * /bin/bash /opt/scripts/system-uptime-script-1.sh
+```
+
+**注意:** 你会在每天早上 7 点收到一封电子邮件提醒,内容是截至前一天的统计详情。
+
+你将收到类似下面的报告。
+
+```
+# cat /tmp/uptime-report-2.out
+
+192.168.1.7: 30 days
+192.168.1.9: 67 days
+192.168.1.10: 130 days
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/bash-script-generate-linux-system-uptime-reports/
+
+作者:[Magesh Maruthamuthu][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/magesh/
+[b]: https://github.com/lujun9972
+[1]: https://www.2daygeek.com/linux-system-server-uptime-check/
+[2]: https://www.2daygeek.com/category/shell-script/
+[3]: https://www.2daygeek.com/linux-tuptime-check-historical-uptime/
+[4]: https://www.2daygeek.com/crontab-cronjob-to-schedule-jobs-in-linux/
diff --git a/published/20190929 How to Install and Use Cockpit on CentOS 8 - RHEL 8.md b/published/20190929 How to Install and Use Cockpit on CentOS 8 - RHEL 8.md
new file mode 100644
index 0000000000..ebefb9d662
--- /dev/null
+++ b/published/20190929 How to Install and Use Cockpit on CentOS 8 - RHEL 8.md
@@ -0,0 +1,127 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11449-1.html)
+[#]: subject: (How to Install and Use Cockpit on CentOS 8 / RHEL 8)
+[#]: via: (https://www.linuxtechi.com/install-use-cockpit-tool-centos8-rhel8/)
+[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
+
+如何在 CentOS 8/RHEL 8 上安装和使用 
Cockpit
+======
+
+![](https://img.linux.net.cn/data/attachment/album/201910/12/093405gb8hv3exdbsdyfda.jpg)
+
+Cockpit 是一个基于 Web 的服务器管理工具,可用于 CentOS 和 RHEL 系统。在最近发布的 CentOS 8 和 RHEL 8 中,cockpit 是默认的服务器管理工具,它的软件包在默认的 CentOS 8 和 RHEL 8 仓库中就有。Cockpit 是一个有用的基于 Web 的 GUI 工具,系统管理员可以通过该工具监控和管理 Linux 服务器,它还可用于管理服务器、容器、虚拟机中的网络和存储,以及检查系统和应用的日志。
+
+在本文中,我们将演示如何在 CentOS 8 和 RHEL 8 中安装和设置 Cockpit。
+
+### 在 CentOS 8/RHEL 8 上安装和设置 Cockpit
+
+登录你的 CentOS 8/RHEL 8,打开终端并执行以下 `dnf` 命令:
+
+```
+[root@linuxtechi ~]# dnf install cockpit -y
+```
+
+运行以下命令启用并启动 cockpit 服务:
+
+```
+[root@linuxtechi ~]# systemctl start cockpit.socket
+[root@linuxtechi ~]# systemctl enable cockpit.socket
+```
+
+使用以下命令在系统防火墙中允许 Cockpit 端口:
+
+```
+[root@linuxtechi ~]# firewall-cmd --permanent --add-service=cockpit
+[root@linuxtechi ~]# firewall-cmd --reload
+```
+
+执行以下命令,验证 cockpit 服务是否已启动和运行:
+
+```
+[root@linuxtechi ~]# systemctl status cockpit.socket
+[root@linuxtechi ~]# ss -tunlp | grep cockpit
+[root@linuxtechi ~]# ps auxf|grep cockpit
+```
+
+![cockpit-status-centos8-rhel8][1]
+
+### 在 CentOS 8/RHEL 8 上访问 Cockpit
+
+正如我们在上面命令的输出中看到的,cockpit 正在监听 tcp 9090 端口。打开你的 Web 浏览器并输入 URL:`https://<服务器 IP 或主机名>:9090`。
+
+![CentOS8-cockpit-login-screen][2]
+
+RHEL 8 中的 Cockpit 登录页面:
+
+![RHEL8-Cockpit-Login-Screen][3]
+
+使用有管理员权限的用户名和密码登录,或者也可以使用 root 用户登录。如果要将管理员权限分配给任何本地用户,请执行以下命令:
+
+```
+[root@linuxtechi ~]# usermod -G wheel pkumar
+```
+
+这里 `pkumar` 是我的本地用户。
+
+在输入用户密码后,选择 “Reuse my password for privileged tasks”,然后单击 “Log In”,然后我们看到以下页面:
+
+![cockpit-dashboard-centos8][4]
+
+在左侧栏上,我们可以看到可以通过 cockpit GUI 监控和配置的内容。
+
+假设你要检查 CentOS 8/RHEL 8 中是否有任何可用更新,请单击 “System Updates”:
+
+![Software-Updates-Cockpit-GUI-CentOS8-RHEL8][5]
+
+要安装所有更新,点击 “Install All Updates”:
+
+![Install-Software-Updates-CentOS8-RHEL8][6]
+
+如果想要修改网络,例如添加 Bond 接口和网桥,请单击 “Networking”:
+
+![Networking-Cockpit-Dashboard-CentOS8-RHEL8][7]
+
+如上所见,我们有创建 Bond 接口、网桥和 VLAN 标记接口的选项。
+
+假设我们想创建一个 `br0` 网桥,并把 `enp0s3` 端口添加到它,单击 “Add Bridge”:
+
+将网桥名称指定为 `br0`,将端口指定为 `enp0s3`,然后单击 “Apply”。
+
+![Add-Bridge-Cockpit-CentOS8-RHEL8][8]
+
+在下个页面,我们将看到该网桥处于活动状态,并且获得了与 enp0s3 接口相同的 IP:
+
+![Bridge-Details-Cockpit-Dashboard-CentOS8-RHEL8][9]
+
+如果你想检查系统日志,单击 “Logs”,我们可以根据严重性查看日志:
+
+![System-Logs-Cockpit-Dashboard-CentOS8-RHEL8][10]
+
+本文就是这些了。类似地,系统管理员可以使用 cockpit 的其他功能来监控和管理 CentOS 8 和 RHEL 8 服务器。如果这些步骤可以帮助你在 Linux 服务器上设置 cockpit,请在下面的评论栏分享你的反馈和意见。
+
+--------------------------------------------------------------------------------
+
+via: https://www.linuxtechi.com/install-use-cockpit-tool-centos8-rhel8/
+
+作者:[Pradeep Kumar][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.linuxtechi.com/author/pradeep/
+[b]: https://github.com/lujun9972
+[1]: https://www.linuxtechi.com/wp-content/uploads/2019/09/cockpit-status-centos8-rhel8.jpg
+[2]: https://www.linuxtechi.com/wp-content/uploads/2019/09/CentOS8-cockpit-login-screen.jpg
+[3]: https://www.linuxtechi.com/wp-content/uploads/2019/09/RHEL8-Cockpit-Login-Screen.jpg
+[4]: https://www.linuxtechi.com/wp-content/uploads/2019/09/cockpit-dashboard-centos8.jpg
+[5]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Software-Updates-Cockpit-GUI-CentOS8-RHEL8.jpg
+[6]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Install-Software-Updates-CentOS8-RHEL8.jpg
+[7]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Networking-Cockpit-Dashboard-CentOS8-RHEL8.jpg
+[8]: 
https://www.linuxtechi.com/wp-content/uploads/2019/09/Add-Bridge-Cockpit-CentOS8-RHEL8.jpg +[9]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Bridge-Details-Cockpit-Dashboard-CentOS8-RHEL8.jpg +[10]: https://www.linuxtechi.com/wp-content/uploads/2019/09/System-Logs-Cockpit-Dashboard-CentOS8-RHEL8.jpg diff --git a/published/20191002 3 command line games for learning Bash the fun way.md b/published/20191002 3 command line games for learning Bash the fun way.md new file mode 100644 index 0000000000..beea31d857 --- /dev/null +++ b/published/20191002 3 command line games for learning Bash the fun way.md @@ -0,0 +1,145 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11448-1.html) +[#]: subject: (3 command line games for learning Bash the fun way) +[#]: via: (https://opensource.com/article/19/10/learn-bash-command-line-games) +[#]: author: (Seth Kenlon https://opensource.com/users/seth) + +Bash 学习的快乐之旅:3 个命令行游戏 +====== + +> 通过这些命令行游戏,学习有用的 Bash 技能也是一件乐事。 + +![](https://attackofthefanboy.com/wp-content/uploads/2018/11/fallout-terminal-computer-hacking-guide.jpg) + +学习是件艰苦的工作,然而没有人喜欢工作。这意味着无论学习 Bash 多么容易,它仍然对你来说就像工作一样。当然,除非你通过游戏来学习。 + +你不会觉得会有很多游戏可以教你如何使用 Bash 终端吧,这是对的。严肃的 PC 游戏玩家知道,《辐射Fallout》系列在金库中配备了基于终端的计算机,这可以帮你理解通过文本与计算机进行交互是什么样子,但是尽管其功能或多或少地类似于 [Alpine][2] 或 [Emacs][3],可是玩《辐射》并不会教给你可以在现实生活中使用的命令或应用程序。《辐射》系列从未直接移植到Linux(尽管可以通过 Steam 的开源的 [Proton][4] 来玩。)曾是《辐射》的前身的《[废土][5]Wasteland》系列的最新作品倒是面向 Linux 的,因此,如果你想体验游戏中的终端,可以在你的 Linux 游戏计算机上玩《[废土 2][6]》和《[废土 3][7]》。《[暗影狂奔][8]Shadowrun》系列也有面向 Linux 的版本,它有许多基于终端的交互,尽管公认 [hot sim][9] 序列常常使它黯然失色。 + +虽然这些游戏中采用了有趣的操作计算机终端的方式,并且可以在开源的系统上运行,但它们本身都不是开源的。不过,至少有两个游戏采用了严肃且非常有趣的方法来教人们如何通过文本命令与系统进行交互。最重要的是,它们是开源的。 + +### Bashcrawl + +你可能听说过《[巨洞探险][10]Colossal Cave Adventure》游戏,这是一款古老的基于文本的交互式游戏,其风格为“自由冒险”类。早期的计算机爱好者们在 DOS 或 ProDOS 命令行上痴迷地玩这些游戏,他们努力寻找有效语法和(如一个讽刺黑客所解释的)滑稽幻想逻辑的正确组合来击败游戏。想象一下,如果除了探索虚拟的中世纪地下城之外,挑战还在于回忆起有效的 Bash 命令,那么这样的挑战会多么有成效。这就是 [Bashcrawl][11] 的基调,这是一个基于 Bash 的地下城探险游戏,你可以通过学习和使用 Bash 命令来玩这个游戏。 + +在 Bashcrawl 中,“地下城”是以目录和文件的形式创建在你的计算机上的。你可以通过使用 `cd` 命令更改目录进入地下城的每个房间来探索它。当你[穿行目录][12]时,你可以用 [ls -F][13] 来查看文件,用 [cat][14] 读取文件,[设置变量][15]来收集宝藏,并运行脚本来与怪物战斗。你在游戏中所做的一切操作都是有效的 Bash 命令,你可以稍后在现实生活中使用它,玩这个游戏提供了 Bash 体验,因为这个“游戏”是由计算机上的实际目录和文件组成的。 + +``` +$ cd entrance/ +$ ls +cellar  scroll +$ cat scroll + +It is pitch black in these catacombs. +You have a magical spell that lists all items in a room. + +To see in the dark, type:     ls +To move around, type:         cd <directory> + +Try looking around this room. +Then move into one of the next rooms. + +EXAMPLE: + +$ ls +$ cd cellar + +Remember to cast ``ls`` when you get into the next room! 
+$ +``` + +#### 安装 Bashcrawl + +在玩 Bashcrawl 之前,你的系统上必须有 Bash 或 [Zsh][16]。Linux、BSD 和 MacOS 都附带了 Bash。Windows 用户可以下载并安装 [Cygwin][17] 或 [WSL][18] 或[试试 Linux][19]。 + +要安装 Bashcrawl,请在 Firefox 或你选择的 Web 浏览器中导航到这个 [GitLab 存储库][11]。在页面的右侧,单击“下载”图标(位于“Find file”按钮右侧)。在“下载”弹出菜单中,单击“zip”按钮以下载最新版本的游戏。 + +![Download a zip from Gitlab][20] + +下载完成后,解压缩该存档文件。 + +另外,如果你想从终端中开始安装,则可以使用 [Git][21] 命令: + +``` +$ git clone https://gitlab.com/slackermedia/bashcrawl.git bashcrawl +``` + +#### 游戏入门 + +与你下载的几乎所有新的软件包一样,你必须做的第一件事是阅读 README 文件。你可以通过双击`bashcrawl` 目录中的 `README.md` 文件来阅读。在 Mac 上,你的计算机可能不知道要使用哪个应用程序打开该文件;你也可以使用任何文本编辑器或 LibreOffice 打开它。`README.md` 这个文件会具体告诉你如何开始玩游戏,包括如何在终端上进入游戏以及要开始游戏必须发出的第一条命令。如果你无法阅读 README 文件,那游戏就不战自胜了(尽管由于你没有玩而无法告诉你)。 + +Bashcrawl 并不意味着是给比较聪明或高级用户玩的。相反,为了对新用户透明,它尽可能地简单。理想情况下,新的 Bash 用户可以从游戏中学习 Bash 的一些基础知识,然后会偶然发现一些游戏机制,包括使游戏运行起来的简单脚本,并学习到更多的 Bash 知识。此外,新的 Bash 用户可以按照 Bashcrawl 现有内容的示例设计自己的地下城,没有比编写游戏更好的学习编码的方法了。 + +### 命令行英雄:BASH + +Bashcrawl 适用于绝对初学者。如果你经常使用 Bash,则很有可能会尝试通过以初学者尚不了解的方式查看 Bashcrawl 的文件,从而找到胜过它的秘径。如果你是中高级的 Bash 用户,则应尝试一下 [命令行英雄:BASH][22]。 + +这个游戏很简单:在给定的时间内输入尽可能多的有效命令(LCTT 译注:BASH 也有“猛击”的意思)。听起来很简单。作为 Bash 用户,你每天都会使用许多命令。对于 Linux 用户来说,你知道在哪里可以找到命令列表。仅 util-linux 软件包就包含一百多个命令!问题是,在倒计时的压力下,你的指尖是否忙的过来输入这些命令? + +![Command Line Heroes: BASH][23] + +这个游戏听起来很简单,它确实也很简单!原则上,它与闪卡flashcard相似,只是反过来而已。在实践中,这是测试你的知识和回忆的一种有趣方式。当然,它是开源的,是由 [Open Jam][24] 的开发者开发的。 + +#### 安装 + +你可以[在线][25]玩“命令行英雄:BASH”,或者你也可以从 [GitHub][26] 下载它的源代码。 + +这个游戏是用 Node.js 编写的,因此除非你想帮助开发该游戏,否则在线进行游戏就够了。 + +### 在 Bash 中扫雷 + +如果你是高级 Bash 用户,并且已经编写了多个 Bash 脚本,那么你可能不仅仅想学习 Bash。你可以尝试编写游戏而不是玩游戏,这才是真的挑战。稍加思考,用上一个下午或几个小时,便可以在 Bash 中实现流行的游戏《扫雷》。你可以先尝试自己编写这个游戏,然后参阅 Abhishek Tamrakar 的[文章][27],以了解他如何完成该游戏的。 + +![][28] + +有时编程没有什么目的而是为了教育。在 Bash 中编写的游戏可能不是可以让你在网上赢得声誉的项目,但是该过程可能会很有趣且很有启发性。面对一个你从未想到的问题,这是学习新技巧的好方法。 + +### 学习 Bash,玩得开心 + +不管你如何学习它,Bash 都是一个功能强大的界面,因为它使你能够指示计算机执行所需的操作,而无需通过图形界面的应用程序的“中间人”界面。有时,图形界面很有帮助,但有时你想离开那些已经非常了解的东西,然后转向可以快速或通过自动化来完成的事情。由于 Bash 基于文本,因此易于编写脚本,使其成为自动化作业的理想起点。 + +了解 Bash 以开始走向高级用户之路,但是请确保你乐在其中。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/10/learn-bash-command-line-games + +作者:[Seth Kenlon][a] +选题:[lujun9972][b] +译者:[wxy](https://github.com/wxy) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW_maze.png?itok=mZ5LP4-X (connecting yellow dots in a maze) +[2]: https://opensource.com/article/17/10/alpine-email-client +[3]: http://www.gnu.org/software/emacs +[4]: https://github.com/ValveSoftware/Proton/ +[5]: https://www.gog.com/game/wasteland_the_classic_original +[6]: https://www.inxile-entertainment.com/wasteland2 +[7]: https://www.inxile-entertainment.com/wasteland3 +[8]: http://harebrained-schemes.com/games/ +[9]: https://forums.shadowruntabletop.com/index.php?topic=21804.0 +[10]: https://opensource.com/article/18/12/linux-toy-adventure +[11]: https://gitlab.com/slackermedia/bashcrawl +[12]: https://opensource.com/article/19/8/understanding-file-paths-linux +[13]: https://opensource.com/article/19/7/master-ls-command +[14]: https://opensource.com/article/19/2/getting-started-cat-command +[15]: https://opensource.com/article/19/8/using-variables-bash +[16]: https://opensource.com/article/19/9/getting-started-zsh +[17]: https://www.cygwin.com/ +[18]: 
https://docs.microsoft.com/en-us/windows/wsl/wsl2-about +[19]: https://opensource.com/article/19/7/ways-get-started-linux +[20]: https://opensource.com/sites/default/files/images/education/screenshot_from_2019-09-28_10-49-49.png (Download a zip from Gitlab) +[21]: https://opensource.com/life/16/7/stumbling-git +[22]: https://www.redhat.com/en/command-line-heroes/bash/index.html?extIdCarryOver=true&sc_cid=701f2000001OH79AAG +[23]: https://opensource.com/sites/default/files/uploads/commandlineheroes-bash.jpg (Command Line Heroes: BASH) +[24]: http://openjam.io/ +[25]: https://www.redhat.com/en/command-line-heroes/bash/index.html +[26]: https://github.com/CommandLineHeroes/clh-bash/ +[27]: https://linux.cn/article-11430-1.html +[28]: https://opensource.com/sites/default/files/uploads/extractmines.png diff --git a/published/20191002 7 Bash history shortcuts you will actually use.md b/published/20191002 7 Bash history shortcuts you will actually use.md new file mode 100644 index 0000000000..0b26c89131 --- /dev/null +++ b/published/20191002 7 Bash history shortcuts you will actually use.md @@ -0,0 +1,221 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11456-1.html) +[#]: subject: (7 Bash history shortcuts you will actually use) +[#]: via: (https://opensource.com/article/19/10/bash-history-shortcuts) +[#]: author: (Ian Miell https://opensource.com/users/ianmiell) + +7 个实用的操作 Bash 历史记录的快捷方式 +====== + +> 这些必不可少的 Bash 快捷键可在命令行上节省时间。 + +![Command line prompt][1] + +大多数介绍 Bash 历史记录的指南都详尽地列出了全部可用的快捷方式。这样做的问题是,你会对每个快捷方式都浅尝辄止,然后在尝试了那么多的快捷方式后就搞得目不暇接。而在开始工作时它们就全被丢在脑后,只记住了刚开始使用 Bash 时学到的 [!! 技巧][2]。这些技巧大多数从未进入记忆当中。 + +本文概述了我每天实际使用的快捷方式。它基于我的书《[Bash 学习,艰难之旅][3]》中的某些内容(你可以阅读其中的[样章][4]以了解更多信息)。 + +当人们看到我使用这些快捷方式时,他们经常问我:“你做了什么!?” 学习它们只需付出很少的精力或智力,但是要真正的学习它们,我建议每周用一天学一个,然后下次再继续学习一个。值得花时间让它们落在你的指尖下,因为从长远来看,节省的时间将很重要。 + +### 1、最后一个参数:`!$` + +如果你仅想从本文中学习一种快捷方式,那就是这个。它会将最后一个命令的最后一个参数替换到你的命令行中。 + +看看这种情况: + +``` +$ mv /path/to/wrongfile /some/other/place +mv: cannot stat '/path/to/wrongfile': No such file or directory +``` + +啊哈,我在命令中写了错误的文件名 “wrongfile”,我应该用正确的文件名 “rightfile” 代替。 + +你可以重新键入上一个命令,并用 “rightfile” 完全替换 “wrongfile”。但是,你也可以键入: + +``` +$ mv /path/to/rightfile !$ +mv /path/to/rightfile /some/other/place +``` + +这个命令也可以奏效。 + +在 Bash 中还有其他方法可以通过快捷方式实现相同的目的,但是重用上一个命令的最后一个参数的这种技巧是我最常使用的。 + +### 2、第 n 个参数:`!:2` + +是不是干过像这样的事情: + +``` +$ tar -cvf afolder afolder.tar +tar: failed to open +``` + +像许多其他人一样,我也经常搞错 `tar`(和 `ln`)的参数顺序。 + +![xkcd comic][5] + +当你搞混了参数,你可以这样: + +``` +$ !:0 !:1 !:3 !:2 +tar -cvf afolder.tar afolder +``` + +这样就不会出丑了。 + +上一个命令的各个参数的索引是从零开始的,并且可以用 `!:` 之后跟上该索引数字代表各个参数。 + +显然,你也可以使用它来重用上一个命令中的特定参数,而不是所有参数。 + +### 3、全部参数:`!:1-$` + +假设我运行了类似这样的命令: + +``` +$ grep '(ping|pong)' afile +``` + +参数是正确的。然而,我想在文件中匹配 “ping” 或 “pong”,但我使用的是 `grep` 而不是 `egrep`。 + +我开始输入 `egrep`,但是我不想重新输入其他参数。因此,我可以使用 `!:1-$` 快捷方式来调取上一个命令的所有参数,从第二个(记住它们的索引从零开始,因此是 `1`)到最后一个(由 `$` 表示)。 + +``` +$ egrep !:1-$ +egrep '(ping|pong)' afile +ping +``` + +你不用必须用 `1-$` 选择全部参数;你也可以选择一个子集,例如 `1-2` 或 `3-9` (如果上一个命令中有那么多参数的话)。 + +### 4、倒数第 n 行的最后一个参数:`!-2:$` + +当我输错之后马上就知道该如何更正我的命令时,上面的快捷键非常有用,但是我经常在原来的命令之后运行别的命令,这意味着上一个命令不再是我所要引用的命令。 + +例如,还是用之前的 `mv` 例子,如果我通过 `ls` 检查文件夹的内容来纠正我的错误: + +``` +$ mv /path/to/wrongfile /some/other/place +mv: cannot stat '/path/to/wrongfile': No such file or directory +$ ls /path/to/ +rightfile +``` + +我就不能再使用 `!$` 快捷方式了。 + +在这些情况下,我可以在 `!` 之后插入 `-n`:(其中 `n` 是要在历史记录中回溯的命令条数),以从较旧的命令取得最后的参数: + +``` +$ mv 
/path/to/rightfile !-2:$
+mv /path/to/rightfile /some/other/place
+```
+
+同样,一旦你学会了它,你可能会惊讶于你需要使用它的频率。
+
+### 5、进入文件夹:`!$:h`
+
+从表面上看,这个看起来不太有用,但我每天要用它几十次。
+
+想象一下,我运行的命令如下所示:
+
+```
+$ tar -cvf system.tar /etc/system
+ tar: /etc/system: Cannot stat: No such file or directory
+ tar: Error exit delayed from previous errors.
+```
+
+我可能要做的第一件事是转到 `/etc` 文件夹,查看其中的内容并找出我做错了什么。
+
+我可以通过以下方法来做到这一点:
+
+```
+$ cd !$:h
+cd /etc
+```
+
+这是说:"获取上一个命令的最后一个参数(`/etc/system`),并删除其最后的文件名部分,仅保留 `/etc`。"
+
+### 6、当前行:`!#:1`
+
+多年以来,在我最终找到并学会之前,我有时候想知道是否可以在当前行引用一个参数。我多希望我能早早学会这个快捷方式。我经常使用它制作备份文件:
+
+```
+$ cp /path/to/some/file !#:1.bak
+cp /path/to/some/file /path/to/some/file.bak
+```
+
+但当我学会之后,它很快就被下面的快捷方式替代了……
+
+### 7、搜索并替换:`!!:gs`
+
+这将搜索所引用的命令,并将前两个 `/` 之间的字符替换为后两个 `/` 之间的字符。
+
+假设我想告诉别人我的 `s` 键不起作用,而是输出了 `f`:
+
+```
+$ echo my f key doef not work
+my f key doef not work
+```
+
+然后我意识到这里出现的 `f` 键都是错的。要将所有 `f` 替换为 `s`,我可以输入:
+
+```
+$ !!:gs/f /s /
+echo my s key does not work
+my s key does not work
+```
+
+它不只对单个字符起作用。我也可以替换单词或句子:
+
+```
+$ !!:gs/does/did/
+echo my s key did not work
+my s key did not work
+```
+
+### 测试一下
+
+为了向你展示如何组合这些快捷方式,你知道这些命令片段将输出什么吗?
+
+```
+$ ping !#:0:gs/i/o
+$ vi /tmp/!:0.txt
+$ ls !$:h
+$ cd !-2:$:h
+$ touch !$!-3:$ !! !$.txt
+$ cat !:1-$
+```
+
+### 总结
+
+对于日常的命令行用户,Bash 可以作为快捷方式的优雅来源。虽然有成千上万的技巧要学习,但这些是我经常使用的最喜欢的技巧。
+
+如果你想更深入地了解 Bash 可以教给你的全部知识,请买本我的书,《[Bash 学习,艰难之旅][3]》,或查看我的在线课程《[精通 Bash shell][7]》。
+
+* * *
+
+本文最初发布在 Ian 的博客 [Zwischenzugs.com][8] 上,并经允许重复发布。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/10/bash-history-shortcuts
+
+作者:[Ian Miell][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ianmiell
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command_line_prompt.png?itok=wbGiJ_yg (Command line prompt)
+[2]: https://opensource.com/article/18/5/bash-tricks
+[3]: https://leanpub.com/learnbashthehardway
+[4]: https://leanpub.com/learnbashthehardway/read_sample
+[5]: https://opensource.com/sites/default/files/uploads/tar_2x.png (xkcd comic)
+[6]: https://xkcd.com/1168/
+[7]: https://www.educative.io/courses/master-the-bash-shell
+[8]: https://zwischenzugs.com/2019/08/25/seven-god-like-bash-history-shortcuts-you-will-actually-use/
diff --git a/published/20191004 9 essential GNU binutils tools.md b/published/20191004 9 essential GNU binutils tools.md
new file mode 100644
index 0000000000..49c30e20d2
--- /dev/null
+++ b/published/20191004 9 essential GNU binutils tools.md
@@ -0,0 +1,640 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11441-1.html)
+[#]: subject: (9 essential GNU binutils tools)
+[#]: via: (https://opensource.com/article/19/10/gnu-binutils)
+[#]: author: (Gaurav Kamathe https://opensource.com/users/gkamathe)
+
+GNU binutils 里的九种武器
+======
+
+> 二进制分析是计算机行业中最被低估的技能。
+
+![](https://img.linux.net.cn/data/attachment/album/201910/10/115409g9nkdm2omutduw7u.jpg)
+
+想象一下,在无法访问软件源代码的情况下,仍然能够理解软件的实现方式,在其中找到漏洞,并且更厉害的是还能修复错误。所有这些都是在只有二进制文件时做到的。这听起来就像是超能力,对吧? 
+
+你也可以拥有这样的超能力,GNU 二进制实用程序(binutils)就是一个很好的起点。[GNU binutils][2] 是一个二进制工具集,默认情况下所有 Linux 发行版中都会安装这些二进制工具。
+
+二进制分析是计算机行业中最被低估的技能。它主要由恶意软件分析师、反向工程师和使用底层软件的人使用。
+
+本文探讨了 binutils 可用的一些工具。我使用的是 RHEL,但是这些示例应该在任何 Linux 发行版上可以运行。
+
+```
+[~]# cat /etc/redhat-release
+Red Hat Enterprise Linux Server release 7.6 (Maipo)
+[~]#
+[~]# uname -r
+3.10.0-957.el7.x86_64
+[~]#
+```
+
+请注意,某些打包命令(例如 `rpm`)在基于 Debian 的发行版中可能不可用,因此请使用等效的 `dpkg` 命令替代。
+
+### 软件开发的基础知识
+
+在开源世界中,我们很多人都专注于源代码形式的软件。当软件的源代码随时可用时,很容易获得源代码的副本,打开喜欢的编辑器,喝杯咖啡,然后就可以开始探索了。
+
+但是源代码不是在 CPU 上执行的代码,在 CPU 上执行的是二进制或者说是机器语言指令。二进制或可执行文件是编译源代码时获得的。熟练的调试人员通常深谙这种差异。
+
+### 编译的基础知识
+
+在深入研究 binutils 软件包本身之前,最好先了解编译的基础知识。
+
+编译是将程序从某种编程语言(如 C/C++)的源代码(文本形式)转换为机器代码的过程。
+
+机器代码是 CPU(或一般而言,硬件)可以理解的 1 和 0 的序列,因此可以由 CPU 执行或运行。该机器码以特定格式保存到文件,通常称为可执行文件或二进制文件。在 Linux(和使用 [Linux 兼容二进制][3]的 BSD)上,这称为 [ELF][4](可执行和可链接格式Executable and Linkable Format)。
+
+在生成给定的源文件的可执行文件或二进制文件之前,编译过程将经历一系列复杂的步骤。以这个源程序(C 代码)为例。打开你喜欢的编辑器,然后键入以下程序:
+
+```
+#include <stdio.h>
+
+int main(void)
+{
+ printf("Hello World\n");
+ return 0;
+}
+```
+
+#### 步骤 1:用 cpp 预处理
+
+[C 预处理程序(cpp)][5]用于扩展所有宏并将头文件包含进来。在此示例中,头文件 `stdio.h` 将被包含在源代码中。`stdio.h` 是一个头文件,其中包含有关程序内使用的 `printf` 函数的信息。对源代码运行 `cpp`,其结果指令保存在名为 `hello.i` 的文件中。可以使用文本编辑器打开该文件以查看其内容。打印 "hello world" 的源代码在该文件的底部。
+
+```
+[testdir]# cat hello.c
+#include <stdio.h>
+
+int main(void)
+{
+ printf("Hello World\n");
+ return 0;
+}
+[testdir]#
+[testdir]# cpp hello.c > hello.i
+[testdir]#
+[testdir]# ls -lrt
+total 24
+-rw-r--r--. 1 root root 76 Sep 13 03:20 hello.c
+-rw-r--r--. 1 root root 16877 Sep 13 03:22 hello.i
+[testdir]#
+```
+
+#### 步骤 2:用 gcc 编译
+
+在此阶段,无需创建目标文件就将步骤 1 中生成的预处理源代码转换为汇编语言指令。这个阶段使用 [GNU 编译器集合(gcc)][6]。对 `hello.i` 文件运行带有 `-S` 选项的 `gcc` 命令后,它将创建一个名为 `hello.s` 的新文件。该文件包含该 C 程序的汇编语言指令。
+
+你可以使用任何编辑器或 `cat` 命令查看其内容。
+
+```
+[testdir]#
+[testdir]# gcc -Wall -S hello.i
+[testdir]#
+[testdir]# ls -l
+total 28
+-rw-r--r--. 1 root root 76 Sep 13 03:20 hello.c
+-rw-r--r--. 1 root root 16877 Sep 13 03:22 hello.i
+-rw-r--r--. 1 root root 448 Sep 13 03:25 hello.s
+[testdir]#
+[testdir]# cat hello.s
+.file "hello.c"
+.section .rodata
+.LC0:
+.string "Hello World"
+.text
+.globl main
+.type main, @function
+main:
+.LFB0:
+.cfi_startproc
+pushq %rbp
+.cfi_def_cfa_offset 16
+.cfi_offset 6, -16
+movq %rsp, %rbp
+.cfi_def_cfa_register 6
+movl $.LC0, %edi
+call puts
+movl $0, %eax
+popq %rbp
+.cfi_def_cfa 7, 8
+ret
+.cfi_endproc
+.LFE0:
+.size main, .-main
+.ident "GCC: (GNU) 4.8.5 20150623 (Red Hat 4.8.5-36)"
+.section .note.GNU-stack,"",@progbits
+[testdir]#
+```
+
+#### 步骤 3:用 as 汇编
+
+汇编器的目的是将汇编语言指令转换为机器语言代码,并生成扩展名为 `.o` 的目标文件。此阶段使用默认情况下在所有 Linux 平台上都可用的 GNU 汇编器。
+
+```
+[testdir]# as hello.s -o hello.o
+[testdir]#
+[testdir]# ls -l
+total 32
+-rw-r--r--. 1 root root 76 Sep 13 03:20 hello.c
+-rw-r--r--. 1 root root 16877 Sep 13 03:22 hello.i
+-rw-r--r--. 1 root root 1496 Sep 13 03:39 hello.o
+-rw-r--r--. 1 root root 448 Sep 13 03:25 hello.s
+[testdir]#
+```
+
+现在,你有了第一个 ELF 格式的文件;但是,还不能执行它。稍后,你将看到"目标文件object file"和"可执行文件executable file"之间的区别。
+
+```
+[testdir]# file hello.o
+hello.o: ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV), not stripped
+```
+
+#### 步骤 4:用 ld 链接
+
+这是编译的最后阶段,将目标文件链接以创建可执行文件。可执行文件通常需要外部函数,这些外部函数通常来自系统库(`libc`)。
+
+你可以使用 `ld` 命令直接调用链接器;但是,此命令有些复杂。相反,你可以使用带有 `-v`(详细)标志的 `gcc` 编译器,以了解链接是如何发生的。(使用 `ld` 命令进行链接作为一个练习,你可以自行探索。)
+
+```
+[testdir]# gcc -v hello.o
+Using built-in specs. 
+COLLECT_GCC=gcc +COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/4.8.5/lto-wrapper +Target: x86_64-redhat-linux +Configured with: ../configure --prefix=/usr --mandir=/usr/share/man [...] --build=x86_64-redhat-linux +Thread model: posix +gcc version 4.8.5 20150623 (Red Hat 4.8.5-36) (GCC) +COMPILER_PATH=/usr/libexec/gcc/x86_64-redhat-linux/4.8.5/:/usr/libexec/gcc/x86_64-redhat-linux/4.8.5/:[...]:/usr/lib/gcc/x86_64-redhat-linux/ +LIBRARY_PATH=/usr/lib/gcc/x86_64-redhat-linux/4.8.5/:/usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../../lib64/:/lib/../lib64/:/usr/lib/../lib64/:/usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../:/lib/:/usr/lib/ +COLLECT_GCC_OPTIONS='-v' '-mtune=generic' '-march=x86-64' +/usr/libexec/gcc/x86_64-redhat-linux/4.8.5/collect2 --build-id --no-add-needed --eh-frame-hdr --hash-style=gnu [...]/../../../../lib64/crtn.o +[testdir]# +``` + +运行此命令后,你应该看到一个名为 `a.out` 的可执行文件: + +``` +[testdir]# ls -l +total 44 +-rwxr-xr-x. 1 root root 8440 Sep 13 03:45 a.out +-rw-r--r--. 1 root root 76 Sep 13 03:20 hello.c +-rw-r--r--. 1 root root 16877 Sep 13 03:22 hello.i +-rw-r--r--. 1 root root 1496 Sep 13 03:39 hello.o +-rw-r--r--. 1 root root 448 Sep 13 03:25 hello.s +``` + +对 `a.out` 运行 `file` 命令,结果表明它确实是 ELF 可执行文件: + +``` +[testdir]# file a.out +a.out: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=48e4c11901d54d4bf1b6e3826baf18215e4255e5, not stripped +``` + +运行该可执行文件,看看它是否如源代码所示工作: + +``` +[testdir]# ./a.out Hello World +``` + +工作了!在幕后发生了很多事情它才在屏幕上打印了 “Hello World”。想象一下在更复杂的程序中会发生什么。 + +### 探索 binutils 工具 + +上面这个练习为使用 binutils 软件包中的工具提供了良好的背景。我的系统带有 binutils 版本 2.27-34;你的 Linux 发行版上的版本可能有所不同。 + +``` +[~]# rpm -qa | grep binutils +binutils-2.27-34.base.el7.x86_64 +``` + +binutils 软件包中提供了以下工具: + +``` +[~]# rpm -ql binutils-2.27-34.base.el7.x86_64 | grep bin/ +/usr/bin/addr2line +/usr/bin/ar +/usr/bin/as +/usr/bin/c++filt +/usr/bin/dwp +/usr/bin/elfedit +/usr/bin/gprof +/usr/bin/ld +/usr/bin/ld.bfd +/usr/bin/ld.gold +/usr/bin/nm +/usr/bin/objcopy +/usr/bin/objdump +/usr/bin/ranlib +/usr/bin/readelf +/usr/bin/size +/usr/bin/strings +/usr/bin/strip +``` + +上面的编译练习已经探索了其中的两个工具:用作汇编器的 `as` 命令,用作链接器的 `ld` 命令。继续阅读以了解上述 GNU binutils 软件包工具中的其他七个。 + +#### readelf:显示 ELF 文件信息 + +上面的练习提到了术语“目标文件”和“可执行文件”。使用该练习中的文件,通过带有 `-h`(标题)选项的 `readelf` 命令,以将文件的 ELF 标题转储到屏幕上。请注意,以 `.o` 扩展名结尾的目标文件显示为 `Type: REL (Relocatable file)`(可重定位文件): + +``` +[testdir]# readelf -h hello.o +ELF Header: +Magic: 7f 45 4c 46 02 01 01 00 [...] +[...] +Type: REL (Relocatable file) +[...] +``` + +如果尝试执行此目标文件,会收到一条错误消息,指出无法执行。这仅表示它尚不具备在 CPU 上执行所需的信息。 + +请记住,你首先需要使用 `chmod` 命令在对象文件上添加 `x`(可执行位),否则你将得到“权限被拒绝”的错误。 + +``` +[testdir]# ./hello.o +bash: ./hello.o: Permission denied +[testdir]# chmod +x ./hello.o +[testdir]# +[testdir]# ./hello.o +bash: ./hello.o: cannot execute binary file +``` + +如果对 `a.out` 文件尝试相同的命令,则会看到其类型为 `EXEC (Executable file)`(可执行文件)。 + +``` +[testdir]# readelf -h a.out +ELF Header: +Magic: 7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00 +Class: ELF64 +[...] 
Type: EXEC (Executable file) +``` + +如上所示,该文件可以直接由 CPU 执行: + +``` +[testdir]# ./a.out Hello World +``` + +`readelf` 命令可提供有关二进制文件的大量信息。在这里,它会告诉你它是 ELF 64 位格式,这意味着它只能在 64 位 CPU 上执行,而不能在 32 位 CPU 上运行。它还告诉你它应在 X86-64(Intel/AMD)架构上执行。该二进制文件的入口点是地址 `0x400430`,它就是 C 源程序中 `main` 函数的地址。 + +在你知道的其他系统二进制文件上尝试一下 `readelf` 命令,例如 `ls`。请注意,在 RHEL 8 或 Fedora 30 及更高版本的系统上,由于安全原因改用了位置无关可执行文件position independent executable([PIE][7]),因此你的输出(尤其是 `Type:`)可能会有所不同。 + +``` +[testdir]# readelf -h /bin/ls +ELF Header: +Magic: 7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00 +Class: ELF64 +Data: 2's complement, little endian +Version: 1 (current) +OS/ABI: UNIX - System V +ABI Version: 0 +Type: EXEC (Executable file) +``` + +使用 `ldd` 命令了解 `ls` 命令所依赖的系统库,如下所示: + +``` +[testdir]# ldd /bin/ls +linux-vdso.so.1 => (0x00007ffd7d746000) +libselinux.so.1 => /lib64/libselinux.so.1 (0x00007f060daca000) +libcap.so.2 => /lib64/libcap.so.2 (0x00007f060d8c5000) +libacl.so.1 => /lib64/libacl.so.1 (0x00007f060d6bc000) +libc.so.6 => /lib64/libc.so.6 (0x00007f060d2ef000) +libpcre.so.1 => /lib64/libpcre.so.1 (0x00007f060d08d000) +libdl.so.2 => /lib64/libdl.so.2 (0x00007f060ce89000) +/lib64/ld-linux-x86-64.so.2 (0x00007f060dcf1000) +libattr.so.1 => /lib64/libattr.so.1 (0x00007f060cc84000) +libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f060ca68000) +``` + +对 `libc` 库文件运行 `readelf` 以查看它是哪种文件。正如它指出的那样,它是一个 `DYN (Shared object file)`(共享对象文件),这意味着它不能直接执行;必须由内部使用了该库提供的任何函数的可执行文件使用它。 + +``` +[testdir]# readelf -h /lib64/libc.so.6 +ELF Header: +Magic: 7f 45 4c 46 02 01 01 03 00 00 00 00 00 00 00 00 +Class: ELF64 +Data: 2's complement, little endian +Version: 1 (current) +OS/ABI: UNIX - GNU +ABI Version: 0 +Type: DYN (Shared object file) +``` + +#### size:列出节的大小和全部大小 + +`size` 命令仅适用于目标文件和可执行文件,因此,如果尝试在简单的 ASCII 文件上运行它,则会抛出错误,提示“文件格式无法识别”。 + +``` +[testdir]# echo "test" > file1 +[testdir]# cat file1 +test +[testdir]# file file1 +file1: ASCII text +[testdir]# size file1 +size: file1: File format not recognized +``` + +现在,在上面的练习中,对目标文件和可执行文件运行 `size` 命令。请注意,根据 `size` 命令的输出可以看出,可执行文件(`a.out`)的信息要比目标文件(`hello.o`)多得多: + +``` +[testdir]# size hello.o +text data bss dec hex filename +89 0 0 89 59 hello.o +[testdir]# size a.out +text data bss dec hex filename +1194 540 4 1738 6ca a.out +``` + +但是这里的 `text`、`data` 和 `bss` 节是什么意思? 
+ +`text` 节是指二进制文件的代码部分,其中包含所有可执行指令。`data` 节是所有初始化数据所在的位置,`bss` 节是所有未初始化数据的存储位置。(LCTT 译注:一般来说,在静态的映像文件中,各个部分称之为section,而在运行时的各个部分称之为segment,有时统称为段。) + +比较其他一些可用的系统二进制文件的 `size` 结果。 + +对于 `ls` 命令: + +``` +[testdir]# size /bin/ls +text data bss dec hex filename +103119 4768 3360 111247 1b28f /bin/ls +``` + +只需查看 `size` 命令的输出,你就可以看到 `gcc` 和 `gdb` 是比 `ls` 大得多的程序: + +``` +[testdir]# size /bin/gcc +text data bss dec hex filename +755549 8464 81856 845869 ce82d /bin/gcc +[testdir]# size /bin/gdb +text data bss dec hex filename +6650433 90842 152280 6893555 692ff3 /bin/gdb +``` + +#### strings:打印文件中的可打印字符串 + +在 `strings` 命令中添加 `-d` 标志以仅显示 `data` 节中的可打印字符通常很有用。 + +`hello.o` 是一个目标文件,其中包含打印出 `Hello World` 文本的指令。因此,`strings` 命令的唯一输出是 `Hello World`。 + +``` +[testdir]# strings -d hello.o +Hello World +``` + +另一方面,在 `a.out`(可执行文件)上运行 `strings` 会显示在链接阶段该二进制文件中包含的其他信息: + +``` +[testdir]# strings -d a.out +/lib64/ld-linux-x86-64.so.2 +!^BU +libc.so.6 +puts +__libc_start_main +__gmon_start__ +GLIBC_2.2.5 +UH-0 +UH-0 +=( +[]A\A]A^A_ +Hello World +;*3$" +``` + +#### objdump:显示目标文件信息 + +另一个可以从二进制文件中转储机器语言指令的 binutils 工具称为 `objdump`。使用 `-d` 选项,可从二进制文件中反汇编出所有汇编指令。 + +回想一下,编译是将源代码指令转换为机器代码的过程。机器代码仅由 1 和 0 组成,人类难以阅读。因此,它有助于将机器代码表示为汇编语言指令。汇编语言是什么样的?请记住,汇编语言是特定于体系结构的;由于我使用的是 Intel(x86-64)架构,因此如果你使用 ARM 架构编译相同的程序,指令将有所不同。 + +``` +[testdir]# objdump -d hello.o +hello.o: file format elf64-x86-64 +Disassembly of section .text: +0000000000000000 +: +0: 55 push %rbp +1: 48 89 e5 mov %rsp,%rbp +4: bf 00 00 00 00 mov $0x0,%edi +9: e8 00 00 00 00 callq e + +e: b8 00 00 00 00 mov $0x0,%eax +13: 5d pop %rbp +14: c3 retq +``` + +该输出乍一看似乎令人生畏,但请花一点时间来理解它,然后再继续。回想一下,`.text` 节包含所有的机器代码指令。汇编指令可以在第四列中看到(即 `push`、`mov`、`callq`、`pop`、`retq` 等)。这些指令作用于寄存器,寄存器是 CPU 内置的存储器位置。本示例中的寄存器是 `rbp`、`rsp`、`edi`、`eax` 等,并且每个寄存器都有特殊的含义。 + +现在对可执行文件(`a.out`)运行 `objdump` 并查看得到的内容。可执行文件的 `objdump` 的输出可能很大,因此我使用 `grep` 命令将其缩小到 `main` 函数: + +``` +[testdir]# objdump -d a.out | grep -A 9 main\> +000000000040051d +: +40051d: 55 push %rbp +40051e: 48 89 e5 mov %rsp,%rbp +400521: bf d0 05 40 00 mov $0x4005d0,%edi +400526: e8 d5 fe ff ff callq 400400 +40052b: b8 00 00 00 00 mov $0x0,%eax +400530: 5d pop %rbp +400531: c3 retq +``` + +请注意,这些指令与目标文件 `hello.o` 相似,但是其中包含一些其他信息: + +* 目标文件 `hello.o` 具有以下指令:`callq e` +* 可执行文件 `a.out` 由以下指令组成,该指令带有一个地址和函数:`callq 400400 ` +   +上面的汇编指令正在调用 `puts` 函数。请记住,你在源代码中使用了一个 `printf` 函数。编译器插入了对 `puts` 库函数的调用,以将 `Hello World` 输出到屏幕。 + +查看 `put` 上方一行的说明: + +* 目标文件 `hello.o` 有个指令 `mov`:`mov $0x0,%edi` +* 可执行文件 `a.out` 的 `mov` 指令带有实际地址(`$0x4005d0`)而不是 `$0x0`:`mov $0x4005d0,%edi` + +该指令将二进制文件中地址 `$0x4005d0` 处存在的内容移动到名为 `edi` 的寄存器中。 + +这个存储位置的内容中还能是别的什么吗?是的,你猜对了:它就是文本 `Hello, World`。你是如何确定的? + +`readelf` 命令使你可以将二进制文件(`a.out`)的任何节转储到屏幕上。以下要求它将 `.rodata`(这是只读数据)转储到屏幕上: + +``` +[testdir]# readelf -x .rodata a.out + +Hex dump of section '.rodata': +0x004005c0 01000200 00000000 00000000 00000000 .... +0x004005d0 48656c6c 6f20576f 726c6400 Hello World. +``` + +你可以在右侧看到文本 `Hello World`,在左侧可以看到其二进制格式的地址。它是否与你在上面的 `mov` 指令中看到的地址匹配?是的,确实匹配。 + +#### strip:从目标文件中剥离符号 + +该命令通常用于在将二进制文件交付给客户之前减小二进制文件的大小。 + +请记住,由于重要信息已从二进制文件中删除,因此它会妨碍调试。但是,这个二进制文件可以完美地执行。 + +对 `a.out` 可执行文件运行该命令,并注意会发生什么。首先,通过运行以下命令确保二进制文件没有被剥离(`not stripped`): + +``` +[testdir]# file a.out +a.out: ELF 64-bit LSB executable, x86-64, [......] 
not stripped
+```
+
+另外,在运行 `strip` 命令之前,请记下二进制文件中最初的字节数:
+
+```
+[testdir]# du -b a.out
+8440 a.out
+```
+
+现在对该可执行文件运行 `strip` 命令,并使用 `file` 命令以确保正常完成:
+
+```
+[testdir]# strip a.out
+[testdir]# file a.out
+a.out: ELF 64-bit LSB executable, x86-64, [......] stripped
+```
+
+剥离该二进制文件后,此小程序的大小从之前的 `8440` 字节减小为 `6296` 字节。对于这样小的一个程序都能有这么大的空间节省,难怪大型程序经常被剥离。
+
+```
+[testdir]# du -b a.out
+6296 a.out
+```
+
+#### addr2line:转换地址到文件名和行号
+
+`addr2line` 工具只是在二进制文件中查找地址,并将其与 C 源代码程序中的行进行匹配。很酷,不是吗?
+
+为此编写另一个测试程序;只是这一次确保使用 `gcc` 的 `-g` 标志进行编译,这将为二进制文件添加其它调试信息,并包含有助于调试的行号(来自源代码):
+
+```
+[testdir]# cat -n atest.c
+1 #include <stdio.h>
+2
+3 int globalvar = 100;
+4
+5 int function1(void)
+6 {
+7 printf("Within function1\n");
+8 return 0;
+9 }
+10
+11 int function2(void)
+12 {
+13 printf("Within function2\n");
+14 return 0;
+15 }
+16
+17 int main(void)
+18 {
+19 function1();
+20 function2();
+21 printf("Within main\n");
+22 return 0;
+23 }
+```
+
+用 `-g` 标志编译并执行它。正如预期:
+
+```
+[testdir]# gcc -g atest.c
+[testdir]# ./a.out
+Within function1
+Within function2
+Within main
+```
+
+现在使用 `objdump` 来标识函数开始的内存地址。你可以使用 `grep` 命令来过滤出所需的特定行。函数的地址在下面突出显示(`55 push %rbp` 前的地址):
+
+```
+[testdir]# objdump -d a.out | grep -A 2 -E '<main>:|<function1>:|<function2>:'
+000000000040051d <function1>:
+40051d: 55 push %rbp
+40051e: 48 89 e5 mov %rsp,%rbp
+--
+0000000000400532 <function2>:
+400532: 55 push %rbp
+400533: 48 89 e5 mov %rsp,%rbp
+--
+0000000000400547 <main>:
+400547: 55 push %rbp
+400548: 48 89 e5 mov %rsp,%rbp
+```
+
+现在,使用 `addr2line` 工具从二进制文件中的这些地址映射到 C 源代码匹配的地址:
+
+```
+[testdir]# addr2line -e a.out 40051d
+/tmp/testdir/atest.c:6
+[testdir]#
+[testdir]# addr2line -e a.out 400532
+/tmp/testdir/atest.c:12
+[testdir]#
+[testdir]# addr2line -e a.out 400547
+/tmp/testdir/atest.c:18
+```
+
+它说 `40051d` 从源文件 `atest.c` 中的第 `6` 行开始,这是 `function1` 的起始大括号(`{`)开始的行。`function2` 和 `main` 的输出也匹配。
+
+#### nm:列出目标文件的符号
+
+使用上面的 C 程序测试 `nm` 工具。使用 `gcc` 快速编译并执行它。
+
+```
+[testdir]# gcc atest.c
+[testdir]# ./a.out
+Within function1
+Within function2
+Within main
+```
+
+现在运行 `nm` 和 `grep` 获取有关函数和变量的信息:
+
+```
+[testdir]# nm a.out | grep -Ei 'function|main|globalvar'
+000000000040051d T function1
+0000000000400532 T function2
+000000000060102c D globalvar
+U __libc_start_main@@GLIBC_2.2.5
+0000000000400547 T main
+```
+
+你可以看到函数被标记为 `T`,它表示 `text` 节中的符号,而变量标记为 `D`,表示初始化的 `data` 节中的符号。
+
+想象一下在没有源代码的二进制文件上运行此命令有多大用处?这使你可以窥视内部并了解使用了哪些函数和变量。当然,除非二进制文件已被剥离,这种情况下它们将不包含任何符号,因此 `nm` 命令就不会很有用,如你在此处看到的:

+```
+[testdir]# strip a.out
+[testdir]# nm a.out | grep -Ei 'function|main|globalvar'
+nm: a.out: no symbols
+```
+
+### 结论
+
+GNU binutils 工具为有兴趣分析二进制文件的人提供了许多选项,这只是它们可以为你做的事情的冰山一角。请阅读每种工具的手册页,以了解有关它们以及如何使用它们的更多信息。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/10/gnu-binutils
+
+作者:[Gaurav Kamathe][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/gkamathe
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_sysadmin_cloud.png?itok=sUciG0Cn (Tools for the sysadmin)
+[2]: https://en.wikipedia.org/wiki/GNU_Binutils
+[3]: https://www.freebsd.org/doc/handbook/linuxemu.html
+[4]: https://en.wikipedia.org/wiki/Executable_and_Linkable_Format
+[5]: https://en.wikipedia.org/wiki/C_preprocessor
+[6]: https://gcc.gnu.org/onlinedocs/gcc/
+[7]: 
https://en.wikipedia.org/wiki/Position-independent_code#Position-independent_executables
diff --git a/published/20191004 All That You Can Do with Google Analytics, and More.md b/published/20191004 All That You Can Do with Google Analytics, and More.md
new file mode 100644
index 0000000000..a06d46414c
--- /dev/null
+++ b/published/20191004 All That You Can Do with Google Analytics, and More.md
@@ -0,0 +1,253 @@
+[#]: collector: (lujun9972)
+[#]: translator: (HankChow)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11460-1.html)
+[#]: subject: (All That You Can Do with Google Analytics, and More)
+[#]: via: (https://opensourceforu.com/2019/10/all-that-you-can-do-with-google-analytics-and-more/)
+[#]: author: (Ashwin Sathian https://opensourceforu.com/author/ashwin-sathian/)
+
+Google Analytics 的一些用法介绍
+======
+
+![][1]
+
+Google Analytics (GA)这个最流行的用户活动追踪工具我们或多或少都听说过甚至使用过,但它的用途并不仅仅限于对页面访问的追踪。作为一个既实用又流行的工具,它已经受到了广泛的欢迎,因此我们将要在下文中介绍如何在各种 Angular 和 React 单页应用中使用 Google Analytics。
+
+这篇文章源自这样一个问题:如何对单页应用中的页面访问进行跟踪?
+
+通常来说,有很多的方法可以解决这个问题,在这里我们只讨论其中的一种方法。下面我会使用 Angular 来写出对应的实现,如果你使用的是 React,相关的用法和概念也不会有太大的差别。接下来就开始吧。
+
+### 准备好应用程序
+
+首先需要有一个追踪 IDtracking ID。在开始编写业务代码之前,要先准备好一个追踪 ID,通过这个唯一的标识,Google Analytics 才能识别出某个点击或者某个页面访问是来自于哪一个应用。
+
+按照以下的步骤:
+
+ 1. 访问 <https://analytics.google.com/>;
+ 2. 提交指定信息以完成注册,并确保可用于 Web 应用,因为我们要开发的正是一个 Web 应用;
+ 3. 同意相关的条款,生成一个追踪 ID;
+ 4. 保存好追踪 ID。
+
+追踪 ID 拿到了,就可以开始编写业务代码了。
+
+### 添加 analytics.js 脚本
+
+Google 已经帮我们做好了接入之前需要做的所有事情,接下来就是我们的工作了。不过我们要做的也很简单,只需要把下面这段脚本添加到应用的 `index.html` 里,就可以了:
+
+```
+<script>
+(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
+(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
+m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
+})(window,document,'script','https://www.google-analytics.com/analytics.js','ga');
+</script>
+```
+
+现在我们来看一下 Google Analytics 是如何在应用程序中初始化的。
+
+### 创建追踪器
+
+首先创建一个应用程序的追踪器。在 `app.component.ts` 中执行以下两个步骤:
+
+ 1. 声明一个名为 `ga`,类型为 `any` 的全局变量(在 Typescript 中需要指定变量类型);
+ 2. 将下面一行代码加入到 `ngOnInit()` 中。
+
+```
+ga('create', <你的追踪 ID>, 'auto');
+```
+
+这样就已经成功地在应用程序中初始化了一个 Google Analytics 的追踪器了。由于追踪器的初始化是在 `OnInit()` 函数中执行的,因此每当应用程序启动,追踪器就会启动。
+
+### 在单页应用中记录页面访问情况
+
+我们需要实现的是记录访问路由route-visits。
+
+如何记录用户在一个应用中不同部分的访问,这是一个难点。从功能上来看,单页应用中的路由对应了传统多页面应用中各个页面之间的跳转,因此我们需要记录访问路由。要做到这一点尽管不算简单,但仍然是可以实现的。在 `app.component.ts` 的 `ngOnInit()` 函数中添加以下的代码片段:
+
+```
+import { Router, NavigationEnd } from '@angular/router';
+...
+constructor(public router: Router) {}
+...
+this.router.events.subscribe(
+ event => {
+ if (event instanceof NavigationEnd) {
+ ga('set', 'page', event.urlAfterRedirects);
+ ga('send', { hitType: 'pageview', hitCallback: () => { this.pageViewSent = true; }});
+ }
+ }
+);
+```
+
+神奇的是,只需要这么几行代码,就实现了 Angular 应用中记录页面访问情况的功能。
+
+这段代码实际上做了以下几件事情:
+
+ 1. 从 Angular Router 中导入了 `Router`、`NavigationEnd`;
+ 2. 通过构造函数将 `Router` 添加到组件中;
+ 3. 然后订阅 `router` 事件,也就是由 Angular Router 发出的所有事件;
+ 4. 只要产生了一个 `NavigationEnd` 事件实例,就将路由和目标作为一个页面访问进行记录。
+
+这样,只要使用到了页面路由,就会向 Google Analytics 发送一条页面访问记录,在 Google Analytics 的在线控制台中可以看到这些记录。
+
+类似地,我们可以用相同的方式来记录除了页面访问之外的活动,例如某个界面的查看次数或者时长等等。只要像上面的代码那样使用 `hitCallback()` 就可以在有需要收集的数据的时候让应用程序作出反应,这里我们做的事情仅仅是把一个变量的值设为 `true`,但实际上 `hitCallback()` 中可以执行任何代码。
+
+### 追踪用户交互活动
+
+除了页面访问记录之外,Google Analytics 还经常被用于追踪用户的交互活动,例如某个按钮的点击情况。"提交按钮被用户点击了多少次?","产品手册会被经常查阅吗?"这些都是 Web 应用程序的产品评审会议上的常见问题。这一节我们将会介绍如何实现这些数据的统计。
+
+#### 按钮点击
+
+设想这样一种场景,需要统计到应用程序中某个按钮或链接被点击的次数,这是一个和注册之类的关键动作关系最密切的数据指标。下面我们来举一个例子:
+
+假设应用程序中有一个"感兴趣"按钮,用于显示即将推出的活动,你希望通过统计这个按钮被点击的次数来推测有多少用户对此感兴趣。那么我们可以使用以下的代码来实现这个功能:
+
+```
+...
+params = {
+ eventCategory: 'Button',
+ eventAction: 'Click',
+ eventLabel: 'Show interest',
+ eventValue: 1
+};
+
+showInterest() {
+ ga('send', 'event', this.params);
+}
+...
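+
+// 示例(假设的模板用法,原文未给出):在组件模板中把 showInterest() 绑定到
+// 按钮的 click 事件上,即可在每次点击时发送上面的事件:
+// <button (click)="showInterest()">感兴趣</button>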
+```
+
+现在看下这段代码实际上做了什么。正如前面说到,当我们向 Google Analytics 发送数据的时候,Google Analytics 就会记录下来。因此我们可以向 `send()` 方法传递不同的参数,以区分不同的事件,例如两个不同按钮的点击记录。
+
+1、首先我们定义了一个 `params` 对象,这个对象包含了以下几个字段:
+
+ 1. `eventCategory` – 交互发生的对象,这里对应的是按钮(button)
+ 2. `eventAction` – 发生的交互的类型,这里对应的是点击(click)
+ 3. `eventLabel` – 交互动作的标识,这里对应的是这个按钮的内容,也就是"感兴趣"
+ 4. `eventValue` – 与每个发生的事件实例相关联的值
+
+由于这个例子中是要统计点击了"感兴趣"按钮的用户数,因此我们把 `eventValue` 的值定为 1。
+
+2、对象构造完成之后,下一步就是将 `params` 对象作为请求负载发送到 Google Analytics,而这一步是通过事件绑定将 `showInterest()` 绑定在按钮上实现的。这也是使用 Google Analytics 追踪中最常用的发送事件方法。
+
+至此,Google Analytics 就可以通过记录按钮的点击次数来统计感兴趣的用户数了。
+
+#### 追踪社交活动
+
+Google Analytics 还可以通过应用程序追踪用户在社交媒体上的互动。其中一种场景就是在应用中放置类似 Facebook 的点赞按钮,下面我们来看看如何使用 Google Analytics 来追踪这一行为。
+
+```
+...
+fbLikeParams = {
+ socialNetwork: 'Facebook',
+ socialAction: 'Like',
+ socialTarget: 'https://facebook.com/mypage'
+};
+...
+fbLike() {
+ ga('send', 'social', this.fbLikeParams);
+}
+```
+
+如果你觉得这段代码似曾相识,那是因为它确实跟上面统计"感兴趣"按钮点击次数的代码非常相似。下面我们继续看其中每一步的内容:
+
+1、构造发送的数据负载,其中包括以下字段:
+
+ 1. `socialNetwork` – 交互发生的社交媒体,例如 Facebook、Twitter 等等
+ 2. `socialAction` – 发生的交互类型,例如点赞、发表推文、分享等等
+ 3. `socialTarget` – 交互的目标 URL,一般是社交媒体账号的主页
+
+2、下一步是增加一个函数来发送整个交互记录。和统计按钮点击数量时相比,这里使用 `send()` 的方式有所不同。另外,我们还需要把这个函数绑定到已有的点赞按钮上。
+
+在追踪用户交互方面,Google Analytics 还可以做更多的事情,其中最重要的一种是针对异常的追踪,这让我们可以通过 Google Analytics 来追踪应用程序中出现的错误和异常。在本文中我们就不赘述这一点了,但我们鼓励读者自行探索。
+
+### 用户识别
+
+#### 隐私是一项权利,而不是奢侈品
+
+Google Analytics 除了可以记录很多用户的操作和交互活动之外,这一节还将介绍一个不太常见的功能,就是可以控制是否对用户的身份进行追踪。
+
+#### Cookies
+
+Google Analytics 追踪用户活动的方式是基于 Cookies 的,因此我们可以自定义 Cookies 的名称以及一些其它的内容,请看下面这段代码:
+
+```
+trackingID = 'UA-139883813-1';
+cookieParams = {
+ cookieName: 'myGACookie',
+ cookieDomain: window.location.hostname,
+ cookieExpires: 604800
+};
+...
+ngOnInit() {
+ ga('create', this.trackingID, this.cookieParams);
+...
+}
+```
+
+在上面这段代码中,我们设置了 Google Analytics Cookies 的名称、域以及过期时间,这就让我们能够将不同网站或 Web 应用的 Cookies 区分开来。因此我们需要为我们自己的应用程序的 Google Analytics 追踪器的 Cookies 设置一个自定义的标识,而不是一个自动生成的随机标识。
+
+#### IP 匿名
+
+在某些场景下,我们可能不需要知道应用程序的流量来自哪里。例如对于一个按钮点击的追踪器,我们只需要关心按钮的点击量,而不需要关心点击者的地理位置。在这种场景下,Google Analytics 允许我们只追踪用户的操作行为,而不获取到用户的 IP 地址。
+
+```
+ipParams = {
+ anonymizeIp: true
+};
+...
+ngOnInit() {
+ ...
+ ga('set', this.ipParams);
+ ...
+}
+```
+
+在上面这段代码中,我们将 Google Analytics 追踪器的 `anonymizeIp` 参数设置为 `true`。这样用户的 IP 地址就不会被 Google Analytics 所追踪,这可以让用户知道自己的隐私正在被保护。
+
+#### 不被跟踪
+
+还有些时候用户可能不希望自己的行为受到追踪,而 Google Analytics 也允许这样的需求。因此也存在让用户不被追踪的选项。
+
+```
+...
+optOut() {
+ window['ga-disable-UA-139883813-1'] = true;
+}
+...
+```
+
+`optOut()` 是一个自定义函数,它可以禁用页面中的 Google Analytics 追踪,我们可以使用按钮或复选框上的事件绑定来使用这一功能,这样用户就可以选择是否会被 Google Analytics 追踪。
+
+在本文中,我们讨论了 Google Analytics 集成到单页应用时的难点,并探索出了一种相关的解决方法。我们还了解到了如何在单页应用中追踪页面访问和用户交互,例如按钮点击、社交媒体活动等。
+
+最后,我们还了解到 Google Analytics 为用户提供了保护隐私的功能,尤其是用户的一些隐私数据并不需要参与到统计当中的时候。而用户也可以选择完全不受到 Google Analytics 的追踪。除此以外,Google Analytics 还可以做到很多其它的事情,这就需要我们继续不断探索了。
+
+--------------------------------------------------------------------------------
+
+via: https://opensourceforu.com/2019/10/all-that-you-can-do-with-google-analytics-and-more/
+
+作者:[Ashwin Sathian][a]
+选题:[lujun9972][b]
+译者:[HankChow](https://github.com/HankChow)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensourceforu.com/author/ashwin-sathian/
+[b]: https://github.com/lujun9972
+[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Analytics-illustration.jpg?resize=696%2C396&ssl=1 (Analytics illustration)
+[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Analytics-illustration.jpg?fit=900%2C512&ssl=1
diff --git a/published/20191004 In Fedora 31, 32-bit i686 is 86ed.md b/published/20191004 In Fedora 31, 32-bit i686 is 86ed.md
new file mode 100644
index 0000000000..933083eb29
--- /dev/null
+++ b/published/20191004 In Fedora 31, 32-bit i686 is 86ed.md
@@ -0,0 +1,56 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11424-1.html)
+[#]: subject: (In Fedora 31, 32-bit i686 is 86ed)
+[#]: via: (https://fedoramagazine.org/in-fedora-31-32-bit-i686-is-86ed/)
+[#]: author: (Justin Forbes https://fedoramagazine.org/author/jforbes/)
+
+Fedora 31 将放弃 32 位 i686 支持
+======
+
+![][1]
+
+Fedora 31 中[丢弃了][2] 32 位 i686 内核及其可启动镜像。虽然可能有一些用户仍然拥有无法与 64 位 x86_64 内核一起使用的硬件,但数量很少。本文为你提供了这次更改背后的整个事情,以及在 Fedora 31 中仍然可以找到的 32 位元素。
+
+### 发生了什么?
+
+i686 架构实质上从 [Fedora 27 版本][3]就进入了社区支持阶段(LCTT 译注:不再由官方支持)。不幸的是,社区中没有足够的成员愿意做维护该体系结构的工作。不过请放心,Fedora 不会删除所有 32 位软件包,仍在构建许多 i686 软件包,以确保诸如 multilib、wine 和 Steam 之类的东西可以继续工作。
+
+虽然该存储库不再构建和镜像输出,但存在一个 koji i686 存储库,该库可与 mock 一起使用以构建 32 位程序包,并且可以在紧要关头安装不属于 x86_64 multilib 存储库的 32 位版本。当然,维护人员希望这种方式只用于有限的使用场景。只是需要运行一个 32 位应用程序的用户应该可以在 64 位系统上使用 multilib 来运行。
+
+### 如果你要运行 32 位应用需要做什么? 
+
+如果你仍在运行 32 位 i686 系统,则会在 Fedora 30 生命周期中继续收到受支持的 Fedora 更新,直到大约 2020 年 5 月或 6 月。到那时,如果硬件支持,你可以将其重新安装为 64 位 x86_64,或者如果可能的话,将其替换为支持 64 位的硬件。
+
+社区中有一个用户已经成功地从 32 位 Fedora "升级" 到了 64 位 x86 Fedora。虽然这不是预期或受支持的升级路径,但应该也可行。该项目希望可以为具有 64 位功能的硬件的用户提供一些文档,以在 Fedora 30 使用寿命终止之前说明该升级过程。
+
+如果有 64 位的 CPU,但由于内存不足而运行 32 位 Fedora,请尝试[备用桌面流派][4]之一。LXDE 和其他产品在内存受限的环境中往往表现良好。对于仅在旧的可以扔掉的 32 位硬件上运行简单服务器的用户,请考虑使用较新的 ARM 板之一。在许多情况下,仅节能一项就可以支付新硬件的费用。如果以上皆不可行,[CentOS 7][5] 提供了一个 32 位镜像,并对该平台提供长期支持。
+
+### 安全与你
+
+尽管有些用户可能会在生命周期结束后继续运行旧版本的 Fedora,但强烈建议不要这样做。人们不断研究软件的安全问题。通常,他们发现这些问题已经存在多年了。
+
+一旦 Fedora 维护人员知道了此类问题,他们通常会为它们打补丁,并为支持的发行版提供更新,而不会给使用寿命已终止的发行版提供。当然,一旦这些漏洞公开,就会有人尝试利用它们。如果你在生命周期结束后仍运行较旧的发行版,则安全风险会随着时间的推移而增加,从而使你的系统面临不断增长的风险。
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/in-fedora-31-32-bit-i686-is-86ed/
+
+作者:[Justin Forbes][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/jforbes/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2019/09/i686-86-816x345.jpg
+[2]: https://fedoraproject.org/wiki/Changes/Stop_Building_i686_Kernels
+[3]: https://fedoramagazine.org/announcing-fedora-27/
+[4]: https://spins.fedoraproject.org
+[5]: https://centos.org
+[6]: https://unsplash.com/@alexkixa?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
+[7]: https://unsplash.com/s/photos/motherboard?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
diff --git a/published/20191006 How to Install and Configure VNC Server on Centos 8 - RHEL 8.md b/published/20191006 How to Install and Configure VNC Server on Centos 8 - RHEL 8.md
new file mode 100644
index 0000000000..ad7f88f885
--- /dev/null
+++ b/published/20191006 How to Install and Configure VNC Server on Centos 8 - RHEL 8.md
@@ -0,0 +1,190 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11458-1.html)
+[#]: subject: (How to Install and Configure VNC Server on Centos 8 / RHEL 8)
+[#]: via: (https://www.linuxtechi.com/install-configure-vnc-server-centos8-rhel8/)
+[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
+
+如何在 Centos 8 / RHEL 8 上安装和配置 VNC 服务器
+======
+
+VNC(虚拟网络计算Virtual Network Computing)服务器是基于 GUI 的桌面共享平台,它可让你访问远程桌面计算机。在 Centos 8 和 RHEL 8 系统中,默认未安装 VNC 服务器,它需要手动安装。在本文中,我们将通过简单的分步指南,介绍如何在 Centos 8 / RHEL 8 上安装 VNC 服务器。
+
+### 在 Centos 8 / RHEL 8 上安装 VNC 服务器的先决要求
+
+要在你的系统中安装 VNC 服务器,请确保你的系统满足以下要求:
+
+ * CentOS 8 / RHEL 8
+ * GNOME 桌面环境
+ * root 用户权限
+ * DNF / YUM 软件包仓库
+
+### 在 Centos 8 / RHEL 8 上安装 VNC 服务器的分步指导
+
+#### 步骤 1)安装 GNOME 桌面环境
+
+在 CentOS 8 / RHEL 8 中安装 VNC 服务器之前,请确保已安装了桌面环境(DE)。如果已经安装了 GNOME 桌面或安装了 GUI 支持,那么可以跳过此步骤。
+
+在 CentOS 8 / RHEL 8 中,GNOME 是默认的桌面环境。如果你的系统中没有它,请使用以下命令进行安装:
+
+```
+[root@linuxtechi ~]# dnf groupinstall "workstation"
+或者
+[root@linuxtechi ~]# dnf groupinstall "Server with GUI"
+```
+
+成功安装上面的包后,请运行以下命令启用图形模式:
+
+```
+[root@linuxtechi ~]# systemctl set-default graphical
+```
+
+现在重启系统,进入 GNOME 登录页面(LCTT 译注:你可以通过切换运行态来进入图形界面)。
+
+```
+[root@linuxtechi ~]# reboot
+```
+
+重启后,请取消注释 `/etc/gdm/custom.conf` 中的 `WaylandEnable=false`,以使通过 vnc 进行的远程桌面会话请求由 GNOME 桌面的 xorg 处理,来代替 Wayland 显示管理器。
+
+注意: Wayland 是 GNOME 中的默认显示管理器 (GDM),并且未配置用于处理 X.org 等远程渲染的 API。
+
+#### 步骤 2)安装 VNC 服务器(tigervnc-server)
+
+接下来,我们将安装 
VNC 服务器,有很多 VNC 服务器可以选择,出于安装目的,我们将安装 `TigerVNC 服务器`。它是最受欢迎的 VNC 服务器之一,并且高性能还独立于平台,它使用户可以轻松地与远程计算机进行交互。 + +现在,使用以下命令安装 TigerVNC 服务器: + +``` +[root@linuxtechi ~]# dnf install tigervnc-server tigervnc-server-module -y +``` + +#### 步骤 3)为本地用户设置 VNC 密码 + +假设我们希望用户 `pkumar` 使用 VNC 进行远程桌面会话,然后切换到该用户并使用 `vncpasswd` 命令设置其密码, + +``` +[root@linuxtechi ~]# su - pkumar +[root@linuxtechi ~]$ vncpasswd +Password: +Verify: +Would you like to enter a view-only password (y/n)? n +A view-only password is not used +[root@linuxtechi ~]$ +[root@linuxtechi ~]$ exit +logout +[root@linuxtechi ~]# +``` + +#### 步骤 4)设置 VNC 服务器配置文件 + +下一步是配置 VNC 服务器配置文件。创建含以下内容的 `/etc/systemd/system/vncserver@.service`,以便为上面的本地用户 `pkumar` 启动 tigervnc-server 的服务。 + +``` +[root@linuxtechi ~]# vim /etc/systemd/system/vncserver@.service +[Unit] +Description=Remote Desktop VNC Service +After=syslog.target network.target + +[Service] +Type=forking +WorkingDirectory=/home/pkumar +User=pkumar +Group=pkumar + +ExecStartPre=/bin/sh -c '/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :' +ExecStart=/usr/bin/vncserver -autokill %i +ExecStop=/usr/bin/vncserver -kill %i + +[Install] +WantedBy=multi-user.target +``` + +保存并退出文件, + +注意:替换上面文件中的用户名为你自己的。 + +默认情况下,VNC 服务器在 tcp 端口 5900+n 上监听,其中 n 是显示端口号,如果显示端口号为 “1”,那么 VNC 服务器将在 TCP 端口 5901 上监听其请求。 + +#### 步骤 5)启动 VNC 服务并允许防火墙中的端口 + +我将显示端口号设置为 1,因此请使用以下命令在显示端口号 “1” 上启动并启用 vnc 服务, + +``` +[root@linuxtechi ~]# systemctl daemon-reload +[root@linuxtechi ~]# systemctl start vncserver@:1.service +[root@linuxtechi ~]# systemctl enable vncserver@:1.service +Created symlink /etc/systemd/system/multi-user.target.wants/vncserver@:1.service → /etc/systemd/system/vncserver@.service. +[root@linuxtechi ~]# +``` + +使用下面的 `netstat` 或 `ss` 命令来验证 VNC 服务器是否开始监听 5901 上的请求, + +``` +[root@linuxtechi ~]# netstat -tunlp | grep 5901 +tcp 0 0 0.0.0.0:5901 0.0.0.0:* LISTEN 8169/Xvnc +tcp6 0 0 :::5901 :::* LISTEN 8169/Xvnc +[root@linuxtechi ~]# ss -tunlp | grep -i 5901 +tcp LISTEN 0 5 0.0.0.0:5901 0.0.0.0:* users:(("Xvnc",pid=8169,fd=6)) +tcp LISTEN 0 5 [::]:5901 [::]:* users:(("Xvnc",pid=8169,fd=7)) +[root@linuxtechi ~]# +``` + +使用下面的 `systemctl` 命令验证 VNC 服务器的状态, + +``` +[root@linuxtechi ~]# systemctl status vncserver@:1.service +``` + +![vncserver-status-centos8-rhel8][2] + +上面命令的输出确认在 tcp 端口 5901 上成功启动了 VNC。使用以下命令在系统防火墙中允许 VNC 服务器端口 “5901”, + +``` +[root@linuxtechi ~]# firewall-cmd --permanent --add-port=5901/tcp +success +[root@linuxtechi ~]# firewall-cmd --reload +success +[root@linuxtechi ~]# +``` + +#### 步骤 6)连接到远程桌面会话 + +现在,我们已经准备就绪,可以查看远程桌面连接是否正常工作。要访问远程桌面,请在 Windows / Linux 工作站中启动 VNC Viewer,然后输入 VNC 服务器的 IP 地址和端口号,然后按回车。 + +![VNC-Viewer-Windows10][3] + +接下来,它将询问你的 VNC 密码。输入你先前为本地用户创建的密码,然后单击 “OK” 继续。 + +![VNC-Viewer-Connect-CentOS8-RHEL8-VNC-Server][4] + +现在你可以看到远程桌面, + +![VNC-Desktop-Screen-CentOS8][5] + +就是这样,你已经在 Centos 8 / RHEL 8 中成功安装了 VNC 服务器。 + +### 总结 + +希望这篇在 Centos 8 / RHEL 8 上安装 VNC 服务器的分步指南为你提供了轻松设置 VNC 服务器并访问远程桌面的所有信息。请在下面的评论栏中提供你的意见和建议。下篇文章再见。谢谢再见!!! 
+ +-------------------------------------------------------------------------------- + +via: https://www.linuxtechi.com/install-configure-vnc-server-centos8-rhel8/ + +作者:[Pradeep Kumar][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linuxtechi.com/author/pradeep/ +[b]: https://github.com/lujun9972 +[1]: https://www.linuxtechi.com/cdn-cgi/l/email-protection +[2]: https://www.linuxtechi.com/wp-content/uploads/2019/10/vncserver-status-centos8-rhel8.jpg +[3]: https://www.linuxtechi.com/wp-content/uploads/2019/10/VNC-Viewer-Windows10.jpg +[4]: https://www.linuxtechi.com/wp-content/uploads/2019/10/VNC-Viewer-Connect-CentOS8-RHEL8-VNC-Server.jpg +[5]: https://www.linuxtechi.com/wp-content/uploads/2019/10/VNC-Desktop-Screen-CentOS8.jpg diff --git a/published/20191007 IceWM - A really cool desktop.md b/published/20191007 IceWM - A really cool desktop.md new file mode 100644 index 0000000000..8ad9f9f045 --- /dev/null +++ b/published/20191007 IceWM - A really cool desktop.md @@ -0,0 +1,76 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11443-1.html) +[#]: subject: (IceWM – A really cool desktop) +[#]: via: (https://fedoramagazine.org/icewm-a-really-cool-desktop/) +[#]: author: (tdawson https://fedoramagazine.org/author/tdawson/) + +IceWM:一个非常酷的桌面 +====== + +![][1] + +IceWM 是一款非常轻量的桌面。它已经出现 20 多年了,它今天的目标仍然与当时相同:速度、简单性以及不妨碍用户。 + +我曾经将 IceWM 添加到 Scientific Linux 中作为轻量级桌面。当时它只是 0.5 兆的 rpm 包。运行时,它仅使用 5 兆的内存。这些年来,IceWM 有所增长。rpm 包现在为 1 兆。运行时,IceWM 现在使用 10 兆的内存。尽管在过去十年中它的大小增加了一倍,但它仍然非常小。 + +这么小的包,你能得到什么?确切地说,就是一个窗口管理器。没有其他东西。你有一个带有菜单或图标的工具栏来启动程序。速度很快。最后,还有主题和选项。除了工具栏中的一些小东西,就只有这些了。 + +![][2] + +### 安装 + +因为 IceWM 很小,你只需安装主软件包。 + +``` +$ sudo dnf install icewm +``` + +如果要节省磁盘空间,许多依赖项都是可选的。没有它们,IceWM 也可以正常工作。 + +``` +$ sudo dnf install icewm --setopt install_weak_deps=false +``` + +### 选项 + +IceWM 默认已经设置完毕,以使普通的 Windows 用户也能感到舒适。这是一件好事,因为选项是通过配置文件手动完成的。 + +我希望你不会因此而止步,因为它并没有听起来那么糟糕。它只有 8 个配置文件,大多数人只使用其中几个。主要的三个配置文件是 `keys`(键绑定),`preferences`(总体首选项)和 `toolbar`(工具栏上显示的内容)。默认配置文件位于 `/usr/share/icewm/`。 + +要进行更改,请将默认配置复制到 IceWM 家目录(`~/.icewm`),编辑文件,然后重新启动 IceWM。第一次做可能会有点害怕,因为在 “Logout” 菜单项下可以找到 “Restart Icewm”。但是,当你重启 IceWM 时,你只会看到闪烁一下,更改就生效了。任何打开的程序均不受影响,并保持原样。 + +### 主题 + +![IceWM in the NanoBlue theme][3] + +如果安装 icewm-themes 包,那么会得到很多主题。与常规选项不同,你无需重启 IceWM 即可更改为新主题。通常我不会谈论主题,但是由于其他功能很少,因此我想提下。 + +### 工具栏 + +工具栏是为 IceWM 添加了更多的功能的地方。你可以看到它可以切换不同的工作区。工作区有时称为虚拟桌面。单击工作区图标以移动到它。右键单击一个窗口的任务栏项目,可以在工作区之间移动它。如果你喜欢工作区,它拥有你想要的所有功能。如果你不喜欢工作区,那么可以选择关闭它。 + +工具栏还有网络/内存/CPU 监控图。将鼠标悬停在图标上可获得详细信息。单击图标可以打开一个拥有完整监控功能的窗口。这些小图形曾经出现在每个窗口管理器上。但是,随着桌面环境的成熟,它们都将这些图形去除了。我很高兴 IceWM 留下了这个不错的功能。 + +### 总结 + +如果你想要轻量但功能强大的桌面,IceWM 适合你。它已经设置好了,因此新的 Linux 用户也可以立即使用它。它是灵活的,因此 Unix 用户可以根据自己的喜好进行调整。最重要的是,IceWM 可以让你的程序不受阻碍地运行。 + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/icewm-a-really-cool-desktop/ + +作者:[tdawson][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/tdawson/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2019/09/icewm-1-816x346.png +[2]: 
https://fedoramagazine.org/wp-content/uploads/2019/09/icewm.2-1024x768.png
+[3]: https://fedoramagazine.org/wp-content/uploads/2019/09/icewm.3-1024x771.png
diff --git a/published/20191008 7 steps to securing your Linux server.md b/published/20191008 7 steps to securing your Linux server.md
new file mode 100644
index 0000000000..a98495f234
--- /dev/null
+++ b/published/20191008 7 steps to securing your Linux server.md
@@ -0,0 +1,223 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11444-1.html)
+[#]: subject: (7 steps to securing your Linux server)
+[#]: via: (https://opensource.com/article/19/10/linux-server-security)
+[#]: author: (Patrick H. Mullins https://opensource.com/users/pmullins)
+
+安全强化你的 Linux 服务器的七个步骤
+======
+
+> 通过七个简单的步骤来加固你的 Linux 服务器。
+
+![](https://img.linux.net.cn/data/attachment/album/201910/11/094107k8skl8wwxq62pzld.jpg)
+
+这篇入门文章将向你介绍基本的 Linux 服务器安全知识。虽然主要针对 Debian/Ubuntu,但是你可以将此处介绍的所有内容应用于其他 Linux 发行版。我也鼓励你研究这份材料,并在适用的情况下进行扩展。
+
+### 1、更新你的服务器
+
+保护服务器安全的第一件事是更新本地存储库,并通过应用最新的修补程序来升级操作系统和已安装的应用程序。
+
+在 Ubuntu 和 Debian 上:
+
+```
+$ sudo apt update && sudo apt upgrade -y
+```
+
+在 Fedora、CentOS 或 RHEL:
+
+```
+$ sudo dnf upgrade
+```
+
+### 2、创建一个新的特权用户
+
+接下来,创建一个新的用户帐户。永远不要以 root 身份登录服务器,而是创建你自己的帐户(用户),赋予它 `sudo` 权限,然后使用它登录你的服务器。
+
+首先创建一个新用户:
+
+```
+$ adduser <用户名>
+```
+
+通过将 `sudo` 组(`-G`)附加(`-a`)到用户的组成员身份里,从而授予新用户帐户 `sudo` 权限:
+
+```
+$ usermod -a -G sudo <用户名>
+```
+
+### 3、上传你的 SSH 密钥
+
+你应该使用 SSH 密钥登录到新服务器。你可以使用 `ssh-copy-id` 命令将[预生成的 SSH 密钥][2]上传到你的新服务器:
+
+```
+$ ssh-copy-id <用户名>@ip_address
+```
+
+现在,你无需输入密码即可登录到新服务器。
+
+### 4、安全强化 SSH
+
+接下来,进行以下三个更改:
+
+* 禁用 SSH 密码认证
+* 限制 root 远程登录
+* 限制对 IPv4 或 IPv6 的访问
+
+使用你选择的文本编辑器打开 `/etc/ssh/sshd_config` 并确保以下行:
+
+```
+PasswordAuthentication yes
+PermitRootLogin yes
+```
+
+改成这样:
+
+```
+PasswordAuthentication no
+PermitRootLogin no
+```
+
+接下来,通过修改 `AddressFamily` 选项将 SSH 服务限制为 IPv4 或 IPv6。要将其更改为仅使用 IPv4(对大多数人来说应该没问题),请进行以下更改:
+
+```
+AddressFamily inet
+```
+
+重新启动 SSH 服务以启用你的更改。请注意,在重新启动 SSH 服务之前,与服务器建立两个活动连接是一个好主意。有了这些额外的连接,你可以在重新启动 SSH 服务出错的情况下修复所有问题。
+
+在 Ubuntu 上:
+
+```
+$ sudo service sshd restart
+```
+
+在 Fedora 或 CentOS 或任何使用 Systemd 的系统上:
+
+```
+$ sudo systemctl restart sshd
+```
+
+### 5、启用防火墙
+
+现在,你需要安装防火墙、启用防火墙并对其进行配置,以仅允许你指定的网络流量通过。(Ubuntu 上的)[简单的防火墙][3](UFW)是一个易用的 iptables 前端接口,可大大简化防火墙的配置过程。
+
+你可以通过以下方式安装 UFW:
+
+```
+$ sudo apt install ufw
+```
+
+默认情况下,UFW 拒绝所有传入连接,并允许所有传出连接。这意味着服务器上的任何应用程序都可以访问互联网,但是任何尝试访问服务器的内容都无法连接。
+
+首先,确保你可以通过启用对 SSH、HTTP 和 HTTPS 的访问来登录:
+
+```
+$ sudo ufw allow ssh
+$ sudo ufw allow http
+$ sudo ufw allow https
+```
+
+然后启用 UFW:
+
+```
+$ sudo ufw enable
+```
+
+你可以通过以下方式查看允许和拒绝了哪些服务:
+
+```
+$ sudo ufw status
+```
+
+如果你想禁用 UFW,可以通过键入以下命令来禁用:
+
+```
+$ sudo ufw disable
+```
+
+你还可以(在 RHEL/CentOS 上)使用 [firewall-cmd][4],它已经安装并集成到某些发行版中。
+
+### 6、安装 Fail2ban
+
+[Fail2ban][5] 是一种用于检查服务器日志以查找重复或自动攻击的应用程序。如果找到任何攻击,它会更改防火墙以永久地或在指定的时间内阻止攻击者的 IP 地址。
+
+你可以通过键入以下命令来安装 Fail2ban:
+
+```
+$ sudo apt install fail2ban -y
+```
+
+然后复制随附的配置文件:
+
+```
+$ sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
+```
+
+重启 Fail2ban:
+
+```
+$ sudo service fail2ban restart
+```
+
+这样就行了。该软件将不断检查日志文件以查找攻击。一段时间后,该应用程序将建立相当多的封禁的 IP 地址列表。你可以通过以下方法查询 SSH 服务的当前状态来查看此列表:
+
+```
+$ sudo fail2ban-client status ssh
+```
+
+### 7、移除无用的网络服务
+
+几乎所有 Linux 服务器操作系统都启用了一些面向网络的服务。你可能希望保留其中大多数,然而,有一些你或许希望删除。你可以使用 `ss` 命令查看所有正在运行的网络服务:(LCTT 译注:应该是只保留少部分,而所有确认无关的、无用的服务都应该停用或删除。)
+
+```
+$ sudo ss -atpu
+```
+
+`ss` 
的输出取决于你的操作系统。下面是一个示例,它显示 SSH(`sshd`)和 Nginx(`nginx`)服务正在侦听网络并准备连接:
+
+```
+tcp LISTEN 0 128 *:http *:* users:(("nginx",pid=22563,fd=7))
+tcp LISTEN 0 128 *:ssh *:* users:(("sshd",pid=685,fd=3))
+```
+
+删除未使用的服务的方式因你的操作系统及其使用的程序包管理器而异。
+
+要在 Debian / Ubuntu 上删除未使用的服务:
+
+```
+$ sudo apt purge <服务名称>
+```
+
+要在 Red Hat/CentOS 上删除未使用的服务:
+
+```
+$ sudo yum remove <服务名称>
+```
+
+再次运行 `ss -atup` 以确认这些未使用的服务没有安装和运行。
+
+### 总结
+
+本教程介绍了加固 Linux 服务器所需的最起码的措施。你应该根据服务器的使用方式启用其他安全层。这些安全层可以包括诸如各个应用程序配置、入侵检测软件(IDS)以及启用访问控制(例如,双因素身份验证)之类的东西。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/10/linux-server-security
+
+作者:[Patrick H. Mullins][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/pmullins
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/server_data_system_admin.png?itok=q6HCfNQ8 (computer servers processing data)
+[2]: https://opensource.com/article/19/4/ssh-keys-seahorse
+[3]: https://launchpad.net/ufw
+[4]: https://www.redhat.com/sysadmin/secure-linux-network-firewall-cmd
+[5]: https://www.fail2ban.org/wiki/index.php/Main_Page
diff --git a/published/20191008 How to manage Go projects with GVM.md b/published/20191008 How to manage Go projects with GVM.md
new file mode 100644
index 0000000000..62cf495b26
--- /dev/null
+++ b/published/20191008 How to manage Go projects with GVM.md
@@ -0,0 +1,232 @@
+[#]: collector: (lujun9972)
+[#]: translator: (heguangzhi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11447-1.html)
+[#]: subject: (How to manage Go projects with GVM)
+[#]: via: (https://opensource.com/article/19/10/introduction-gvm)
+[#]: author: (Chris Collins https://opensource.com/users/clcollins)
+
+如何用 GVM 管理 Go 项目
+======
+
+> 使用 Go 版本管理器管理多个版本的 Go 语言环境及其模块。
+
+![正在编程的女人][1]
+
+Go 语言版本管理器([GVM][2])是管理 Go 语言环境的开源工具。GVM "pkgsets" 支持安装多个版本的 Go 并管理每个项目的模块。它最初由 [Josh Bussdieker][3] 开发,GVM(像它在 Ruby 中的对应工具 RVM 一样)允许你为每个项目或一组项目创建一个开发环境,分离不同的 Go 版本和包依赖关系,以提供更大的灵活性,防止不同版本造成的问题。
+
+有几种管理 Go 包的方式,包括 Go 1.11 中内置的 Go Modules。我发现 GVM 简单直观,即使我不用它来管理包,我还是会用它来管理不同版本的 Go。
+
+### 安装 GVM
+
+安装 GVM 很简单。[GVM 存储库][4]安装文档指示你下载安装程序脚本并将其传送到 Bash 来安装:
+
+```
+bash < <(curl -s -S -L https://raw.githubusercontent.com/moovweb/gvm/master/binscripts/gvm-installer)
+```
+
+尽管越来越多的人采用这种安装方法,但是在安装之前先看看安装程序在做什么仍然是一个很好的想法。以 GVM 为例,该安装程序脚本:
+
+1. 检查一些相关依赖性
+2. 克隆 GVM 存储库
+3. 使用 shell 脚本:
+ * 安装 Go 语言
+ * 管理 `GOPATH` 环境变量
+ * 向 `bashrc`、`zshrc` 或配置文件中添加一行内容
+
+如果你想确认它在做什么,你可以克隆该存储库并查看 shell 脚本,然后运行 `./binscripts/gvm-installer` 这个本地脚本进行设置。
+
+`注意:` 因为 GVM 可以用来下载和编译新的 Go 版本,所以有一些预期的依赖关系,如 Make、Git 和 Curl。你可以在 [GVM 的自述文件][5]中找到完整的发行版列表。
+
+### 使用 GVM 安装和管理 Go 版本
+
+一旦安装了 GVM,你就可以使用它来安装和管理不同版本的 Go。`gvm listall` 命令显示可下载和编译的可用版本的 Go:
+
+```
+[chris@marvin ]$ gvm listall
+
+gvm gos (available)
+
+   go1
+   go1.0.1
+   go1.0.2
+   go1.0.3
+
+<输出截断>
+```
+
+安装特定的 Go 版本就像 `gvm install <版本>` 一样简单,其中 `<版本>` 是 `gvm listall` 命令返回的版本之一。
+
+假设你正在进行一个使用 Go1.12.8 版本的项目。你可以使用 `gvm install go1.12.8` 安装这个版本:
+
+```
+[chris@marvin]$ gvm install go1.12.8
+Installing go1.12.8...
+ * Compiling...
+go1.12.8 successfully installed! 
+``` + +输入 `gvm list`,你会看到 Go 版本 1.12.8 与系统 Go 版本(使用操作系统的软件包管理器打包的版本)一起并存: + +``` +[chris@marvin]$ gvm list + +gvm gos (installed) + +   go1.12.8 +=> system +``` + +GVM 仍在使用系统版本的 Go ,由 `=>` 符号表示。你可以使用 `gvm use` 命令切换你的环境以使用新安装的 go1.12.8: + +``` +[chris@marvin]$ gvm use go1.12.8 +Now using version go1.12.8 + +[chris@marvin]$ go version +go version go1.12.8 linux/amd64 +``` + +GVM 使管理已安装版本的 Go 变得极其简单,但它不止于此! + +### 使用 GVM pkgset + +开箱即用,Go 有一种出色而令人沮丧的管理包和模块的方式。默认情况下,如果你 `go get` 获取一个包,它将被下载到 `$GOPATH` 目录中的 `src` 和 `pkg` 目录下,然后可以使用 `import` 将其包含在你的 Go 程序中。这使得获得软件包变得很容易,特别是对于非特权用户,而不需要 `sudo` 或 root 特权(很像 Python 中的 `pip install --user`)。然而,在不同的项目中管理相同包的不同版本是非常困难的。 + +有许多方法可以尝试修复或缓解这个问题,包括实验性 Go Modules(Go 1.11 版中增加了初步支持)和 [Go dep][6](Go Modules 的“官方实验”并且持续迭代)。在我发现 GVM 之前,我会在一个 Go 项目自己的 Docker 容器中构建和测试它,以确保分离。 + +GVM 通过使用 “pkgsets” 将项目的新目录附加到安装的 Go 版本的默认 `$GOPATH` 上,很好地实现了项目之间包的管理和隔离,就像 `$PATH` 在 Unix/Linux 系统上工作一样。 + +想象它如何运行的。首先,安装新版 Go 1.12.9: + +``` +[chris@marvin]$ echo $GOPATH +/home/chris/.gvm/pkgsets/go1.12.8/global + +[chris@marvin]$ gvm install go1.12.9 +Installing go1.12.9... + * Compiling... +go1.12.9 successfully installed + +[chris@marvin]$ gvm use go1.12.9 +Now using version go1.12.9 +``` + +当 GVM 被告知使用新版本时,它会更改为新的 `$GOPATH`,默认 `gloabl` pkgset 应用于该版本: + +``` +[chris@marvin]$ echo $GOPATH +/home/chris/.gvm/pkgsets/go1.12.9/global + +[chris@marvin]$ gvm pkgset list + +gvm go package sets (go1.12.9) + +=>  global +``` + +尽管默认情况下没有安装额外的包,但是全局 pkgset 中的包对于使用该特定版本的 Go 的任何项目都是可用的。 + +现在,假设你正在启用一个新项目,它需要一个特定的包。首先,使用 GVM 创建一个新的 pkgset,名为 `introToGvm`: + +``` +[chris@marvin]$ gvm pkgset create introToGvm + +[chris@marvin]$ gvm pkgset use introToGvm +Now using version go1.12.9@introToGvm + +[chris@marvin]$ gvm pkgset list + +gvm go package sets (go1.12.9) + +    global +=>  introToGvm +``` + +如上所述,pkgset 的一个新目录被添加到 `$GOPATH`: + +``` +[chris@marvin]$ echo $GOPATH +/home/chris/.gvm/pkgsets/go1.12.9/introToGvm:/home/chris/.gvm/pkgsets/go1.12.9/global +``` + +将目录更改为预先设置的 `introToGvm` 路径,检查目录结构,这里使用 `awk` 和 `bash` 完成。 + +``` +[chris@marvin]$ cd $( awk -F':' '{print $1}' <<< $GOPATH ) +[chris@marvin]$ pwd +/home/chris/.gvm/pkgsets/go1.12.9/introToGvm + +[chris@marvin]$ ls +overlay pkg src +``` + +请注意,新目录看起来很像普通的 `$GOPATH`。新的 Go 包使用同样的 `go get` 命令下载并正常使用,且添加到 pkgset 中。 + +例如,使用以下命令获取 `gorilla/mux` 包,然后检查 pkgset 的目录结构: + +``` +[chris@marvin]$ go get github.com/gorilla/mux +[chris@marvin]$ tree +[chris@marvin introToGvm ]$ tree +. 
+├── overlay +│ ├── bin +│ └── lib +│ └── pkgconfig +├── pkg +│ └── linux_amd64 +│ └── github.com +│ └── gorilla +│ └── mux.a +src/ +└── github.com + └── gorilla + └── mux + ├── AUTHORS + ├── bench_test.go + ├── context.go + ├── context_test.go + ├── doc.go + ├── example_authentication_middleware_test.go + ├── example_cors_method_middleware_test.go + ├── example_route_test.go + ├── go.mod + ├── LICENSE + ├── middleware.go + ├── middleware_test.go + ├── mux.go + ├── mux_test.go + ├── old_test.go + ├── README.md + ├── regexp.go + ├── route.go + └── test_helpers.go +``` + +如你所见,`gorilla/mux` 已按预期添加到 pkgset `$GOPATH` 目录中,现在可用于使用此 pkgset 项目了。 + +### GVM 让 Go 管理变得轻而易举 + +GVM 是一种直观且非侵入性的管理 Go 版本和包的方式。它可以单独使用,也可以与其他 Go 模块管理技术结合使用并利用 GVM Go 版本管理功能。按 Go 版本和包依赖来分离项目使得开发更加容易,并且减少了管理版本冲突的复杂性,GVM 让这变得轻而易举。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/10/introduction-gvm + +作者:[Chris Collins][a] +选题:[lujun9972][b] +译者:[heguangzhi](https://github.com/heguangzhi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/clcollins +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop-music-headphones.png?itok=EQZ2WKzy (Woman programming) +[2]: https://github.com/moovweb/gvm +[3]: https://github.com/jbussdieker +[4]: https://github.com/moovweb/gvm#installing +[5]: https://github.com/moovweb/gvm/blob/master/README.md +[6]: https://golang.github.io/dep/ diff --git a/published/20191009 Command line quick tips- Locate and process files with find and xargs.md b/published/20191009 Command line quick tips- Locate and process files with find and xargs.md new file mode 100644 index 0000000000..038a61aaa6 --- /dev/null +++ b/published/20191009 Command line quick tips- Locate and process files with find and xargs.md @@ -0,0 +1,98 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11469-1.html) +[#]: subject: (Command line quick tips: Locate and process files with find and xargs) +[#]: via: (https://fedoramagazine.org/command-line-quick-tips-locate-and-process-files-with-find-and-xargs/) +[#]: author: (Ben Cotton https://fedoramagazine.org/author/bcotton/) + +命令行技巧:使用 find 和 xargs 查找和处理文件 +====== + +![][1] + +`find` 是日常工具箱中功能强大、灵活的命令行程序之一。它如它名字所暗示的:查找符合你指定条件的文件和目录。借助 `-exec` 或 `-delete` 之类的参数,你可以让它对找到的文件进行操作。 + +在[命令行提示][2]系列的这一期中,你将会看到 `find` 命令的介绍,并学习如何使用内置命令或使用 `xargs` 命令处理文件。 + +### 查找文件 + +`find` 至少要加上查找的路径。例如,此命令将查找(并打印)系统上的每个文件: + +``` +find / +``` + +由于一切皆文件,因此你会看到大量的输出。这可能无法帮助你找到所需的内容。你可以更改路径参数缩小范围,但这实际上并没有比使用 `ls` 命令更好。因此,你需要考虑要查找的内容。 + +也许你想在家目录中查找所有 JPEG 文件。 `-name` 参数允许你将结果限制为与给定模式匹配的文件。 + +``` +find ~ -name '*jpg' +``` + +但是等等!如果其中一些扩展名是大写怎么办? 
+`-iname` 类似于 `-name`,但不区分大小写:
+
+```
+find ~ -iname '*jpg'
+```
+
+很好!但是 8.3 命名方案可是 1985 年的老古董了,某些图片的扩展名可能是 `.jpeg`。幸运的是,我们可以用“或”(`-o`)将多个模式组合起来。括号需要转义,以便让它们由 `find` 命令而不是 shell 来解释。
+
+```
+find ~ \( -iname '*jpeg' -o -iname '*jpg' \)
+```
+
+更进一步。如果你有一些以 `jpg` 结尾的目录怎么办?(我不懂你为什么将目录命名为 `bucketofjpg` 而不是 `pictures`?)我们可以加上 `-type` 参数来仅查找文件:
+
+```
+find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f
+```
+
+或者,也许你想找到那些名字奇怪的目录,以便之后可以重命名它们:
+
+```
+find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type d
+```
+
+最近你拍摄了很多照片,因此使用 `-mtime`(修改时间)将范围缩小到最近一周修改过的文件。`-7` 表示 7 天或更短时间内修改的所有文件。
+
+```
+find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f -mtime -7
+```
+
+### 使用 xargs 进行操作
+
+`xargs` 命令从标准输入流中获取参数,并基于它们执行命令。继续使用上一节中的示例,假设你要将上周修改过的家目录中的所有 JPEG 文件复制到 U 盘,以便插到电子相册上。假设你已经将 U 盘挂载到 `/media/photo_display`。
+
+```
+find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f -mtime -7 -print0 | xargs -0 cp -t /media/photo_display
+```
+
+这里的 `find` 命令与以前的版本略有不同。`-print0` 选项让输出有一些变化:它不使用换行符,而是添加了一个 `null` 字符。`xargs` 的 `-0`(零)选项可调整解析以达到预期效果。这很重要,不然对包含空格、引号或其他特殊字符的文件名执行操作可能无法按预期进行。对文件采取任何操作时,都应使用这些选项。
+
+`cp` 命令的 `-t` 参数很重要,因为 `cp` 通常要求目的地址在最后。你可以不使用 `xargs` 而使用 `find` 的 `-exec` 执行此操作,但是 `xargs` 的方式会更快,尤其是对于大量文件,因为它只需单次调用 `cp`。
+
+### 了解更多
+
+这篇文章仅仅触及了 `find` 功能的皮毛。`find` 支持基于权限、所有者、访问时间等的测试。它甚至可以将搜索路径中的文件与其他文件进行比较。将测试与布尔逻辑相结合,可以为你提供惊人的灵活性,以精确地找到你要查找的文件。使用内置命令或管道传递给 `xargs`,你可以快速处理大量文件。
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/command-line-quick-tips-locate-and-process-files-with-find-and-xargs/
+
+作者:[Ben Cotton][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/bcotton/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2018/10/commandlinequicktips-816x345.jpg
+[2]: https://fedoramagazine.org/?s=command+line+quick+tips
+[3]: https://opensource.com/article/18/4/how-use-find-linux
+[4]: https://unsplash.com/@wflwong?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
+[5]: https://unsplash.com/s/photos/search?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
diff --git a/published/20191010 Viewing files and processes as trees on Linux.md b/published/20191010 Viewing files and processes as trees on Linux.md
new file mode 100644
index 0000000000..f8d1fb7183
--- /dev/null
+++ b/published/20191010 Viewing files and processes as trees on Linux.md
@@ -0,0 +1,242 @@
+[#]: collector: (lujun9972)
+[#]: translator: (laingke)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11462-1.html)
+[#]: subject: (Viewing files and processes as trees on Linux)
+[#]: via: (https://www.networkworld.com/article/3444589/viewing-files-and-processes-as-trees-on-linux.html)
+[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
+
+在 Linux 上以树状查看文件和进程
+======
+
+> 介绍 ps、pstree 和 tree 这三个 Linux 命令,它们能以类似树的格式查看文件和进程。
+
+![](https://img.linux.net.cn/data/attachment/album/201910/15/093202rwm5k9pnpntgbtpr.jpg)
+
+[Linux][3] 提供了一些方便的命令,用于以树状分支形式查看文件和进程,从而易于查看它们之间的关系。在本文中,我们将介绍 `ps`、`pstree` 和 `tree` 命令以及它们提供的一些选项,这些选项可帮助你将注意力集中在要查看的内容上。
+
+### ps
+
+我们用来列出进程的 `ps` 命令有一些有趣的选项,但是很多人从来没有利用过。虽然常用的 `ps -ef` 提供了正在运行的进程的完整列表,但是 `ps -ejH` 命令增加了一个不错的效果。它缩进了相关的进程,使这些进程之间的关系在视觉上更加清晰,就像下面这个片段:
+
+```
+$ ps -ejH
+ PID PGID SID TTY TIME CMD
+...
+ 1396 1396 1396 ?
00:00:00 sshd +28281 28281 28281 ? 00:00:00 sshd +28409 28281 28281 ? 00:00:00 sshd +28410 28410 28410 pts/0 00:00:00 bash +30968 30968 28410 pts/0 00:00:00 ps +``` + +可以看到,正在运行的 `ps` 进程是在 `bash` 中运行的,而 `bash` 是在 ssh 会话中运行的。 + +`-exjf` 选项字符串提供了类似的视图,但是带有一些其它细节和符号以突出显示进程的层次结构性质: + +``` +$ ps -exjf +PPID PID PGID SID TTY TPGID STAT UID TIME COMMAND +... + 1 1396 1396 1396 ? -1 Ss 0 0:00 /usr/sbin/sshd -D + 1396 28281 28281 28281 ? -1 Ss 0 0:00 \_ sshd: shs [priv] +28281 28409 28281 28281 ? -1 S 1000 0:00 \_ sshd: shs@pts/0 +28409 28410 28410 28410 pts/0 31028 Ss 1000 0:00 \_ -bash +28410 31028 31028 28410 pts/0 31028 R+ 1000 0:00 \_ ps axjf +``` + +命令中使用的这些选项表示: + +``` +-e 选择所有进程 +-j 使用工作格式 +-f 提供完整格式列表 +-H 分层显示进程(如,树状格式) +-x 取消“必须与 tty 相关联”的限制 +``` + +同时,该命令也有一个 `--forest` 选项提供了类似的视图。 + +``` +$ ps -ef --forest +UID PID PPID C STIME TTY TIME CMD +... +root 1396 1 0 Oct08 ? 00:00:00 /usr/sbin/sshd -D +root 28281 1396 0 12:55 ? 00:00:00 \_ sshd: shs [priv] +shs 28409 28281 0 12:56 ? 00:00:00 \_ sshd: shs@pts/0 +shs 28410 28409 0 12:56 pts/0 00:00:00 \_ -bash +shs 32351 28410 0 14:39 pts/0 00:00:00 \_ ps -ef --forest +``` + +注意,这些示例只是这些命令如何使用的示例。你可以选择最适合你的进程视图的任何选项组合。 + +### pstree + +使用 `pstree` 命令可以获得类似的进程视图。尽管 `pstree` 具备了许多选项,但是该命令本身就提供了非常有用的显示。注意,许多父子进程关系显示在单行而不是后续行上。 + +``` +$ pstree +... + ├─sshd───sshd───sshd───bash───pstree + ├─systemd─┬─(sd-pam) + │ ├─at-spi-bus-laun─┬─dbus-daemon + │ │ └─3*[{at-spi-bus-laun}] + │ ├─at-spi2-registr───2*[{at-spi2-registr}] + │ ├─dbus-daemon + │ ├─ibus-portal───2*[{ibus-portal}] + │ ├─pulseaudio───2*[{pulseaudio}] + │ └─xdg-permission-───2*[{xdg-permission-}] +``` + +通过 `-n` 选项,`pstree` 以数值(按进程 ID)顺序显示进程: + +``` +$ pstree -n +systemd─┬─systemd-journal + ├─systemd-udevd + ├─systemd-timesyn───{systemd-timesyn} + ├─systemd-resolve + ├─systemd-logind + ├─dbus-daemon + ├─atopacctd + ├─irqbalance───{irqbalance} + ├─accounts-daemon───2*[{accounts-daemon}] + ├─acpid + ├─rsyslogd───3*[{rsyslogd}] + ├─freshclam + ├─udisksd───4*[{udisksd}] + ├─networkd-dispat + ├─ModemManager───2*[{ModemManager}] + ├─snapd───10*[{snapd}] + ├─avahi-daemon───avahi-daemon + ├─NetworkManager───2*[{NetworkManager}] + ├─wpa_supplicant + ├─cron + ├─atd + ├─polkitd───2*[{polkitd}] + ├─colord───2*[{colord}] + ├─unattended-upgr───{unattended-upgr} + ├─sshd───sshd───sshd───bash───pstree +``` + +使用 `pstree` 时可以考虑的一些选项包括 `-a`(包括命令行参数)和 `-g`(包括进程组)。 + +以下是一些简单的示例(片段)。 + +命令 `pstree -a` 的输出内容: + +``` +└─wpa_supplicant -u -s -O /run/wpa_supplicant +``` + +命令 `pstree -g` 的输出内容: + +``` +├─sshd(1396)───sshd(28281)───sshd(28281)───bash(28410)───pstree(1115) +``` + +### tree + +虽然 `tree` 命令听起来与 `pstree` 非常相似,但这是用于查看文件而非进程的命令。它提供了一个漂亮的树状目录和文件视图。 + +如果你使用 `tree` 命令查看 `/proc` 目录,你显示的开头部分将类似于这个: + +``` +$ tree /proc +/proc +├── 1 +│ ├── attr +│ │ ├── apparmor +│ │ │ ├── current +│ │ │ ├── exec +│ │ │ └── prev +│ │ ├── current +│ │ ├── display +│ │ ├── exec +│ │ ├── fscreate +│ │ ├── keycreate +│ │ ├── prev +│ │ ├── smack +│ │ │ └── current +│ │ └── sockcreate +│ ├── autogroup +│ ├── auxv +│ ├── cgroup +│ ├── clear_refs +│ ├── cmdline +... +``` + +如果以 root 权限运行这条命令(`sudo tree /proc`),你将会看到更多详细信息,因为 `/proc` 目录的许多内容对于普通用户而言是无法访问的。 + +命令 `tree -d` 将会限制仅显示目录。 + +``` +$ tree -d /proc +/proc +├── 1 +│ ├── attr +│ │ ├── apparmor +│ │ └── smack +│ ├── fd [error opening dir] +│ ├── fdinfo [error opening dir] +│ ├── map_files [error opening dir] +│ ├── net +│ │ ├── dev_snmp6 +│ │ ├── netfilter +│ │ └── stat +│ ├── ns [error opening dir] +│ └── task +│ └── 1 +│ ├── attr +│ │ ├── apparmor +│ │ └── smack +... 
+```
+
+使用 `-f` 选项,`tree` 命令会显示完整的路径。
+
+```
+$ tree -f /proc
+/proc
+├── /proc/1
+│   ├── /proc/1/attr
+│   │   ├── /proc/1/attr/apparmor
+│   │   │   ├── /proc/1/attr/apparmor/current
+│   │   │   ├── /proc/1/attr/apparmor/exec
+│   │   │   └── /proc/1/attr/apparmor/prev
+│   │   ├── /proc/1/attr/current
+│   │   ├── /proc/1/attr/display
+│   │   ├── /proc/1/attr/exec
+│   │   ├── /proc/1/attr/fscreate
+│   │   ├── /proc/1/attr/keycreate
+│   │   ├── /proc/1/attr/prev
+│   │   ├── /proc/1/attr/smack
+│   │   │   └── /proc/1/attr/smack/current
+│   │   └── /proc/1/attr/sockcreate
+...
+```
+
+分层显示通常可以使进程和文件之间的关系更容易理解。可用的选项很多,而你总能找到合适的视图,帮助你查看所需的内容。
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3444589/viewing-files-and-processes-as-trees-on-linux.html
+
+作者:[Sandra Henry-Stocker][a]
+选题:[lujun9972][b]
+译者:[laingke](https://github.com/laingke)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
+[b]: https://github.com/lujun9972
+[1]: https://www.flickr.com/photos/cricketsblog/46967168105/in/photolist-2eyk1Lr-KQsMHg-JbWG41-FWu8FU-6daUYv-cxH2Aq-DV2CNk-25eF8V1-GEEwLx-S9a29U-GpiYf2-Yi5dnF-YLPMV3-23ThoAZ-dTyphv-DVXTMY-ERmSjL-6z86DE-QVnnyv-7PLo9u-58CYnd-dYmbPX-63nVid-p7Ea54-238LQaD-Qb6CkZ-QoRhQX-suMNcq-22JeozK-BwMvBg-26AQHz1-PhQT4J-AGyhXA-2fhixB3-qngdKE-UiptQQ-ZzpiHa-pH4g9e-28CoU2s-81gNxg-qnoewg-2cmYaRk-d3FRuo-4fJrSL-23NqveR-LLEYMU-FZixFK-5aBDGU-PBQbWq-dJoaKi
+[2]: https://creativecommons.org/licenses/by/2.0/legalcode
+[3]: https://www.networkworld.com/article/3215226/what-is-linux-uses-featres-products-operating-systems.html
+[4]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
+[5]: https://www.networkworld.com/slideshow/153439/linux-best-desktop-distros-for-newbies.html#tk.nww-infsb
+[6]: https://www.facebook.com/NetworkWorld/
+[7]: https://www.linkedin.com/company/network-world
diff --git a/published/20191011 How to Unzip a Zip File in Linux -Beginner-s Tutorial.md b/published/20191011 How to Unzip a Zip File in Linux -Beginner-s Tutorial.md
new file mode 100644
index 0000000000..a40d412a74
--- /dev/null
+++ b/published/20191011 How to Unzip a Zip File in Linux -Beginner-s Tutorial.md
@@ -0,0 +1,124 @@
+[#]: collector: (lujun9972)
+[#]: translator: (singledo)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11470-1.html)
+[#]: subject: (How to Unzip a Zip File in Linux [Beginner’s Tutorial])
+[#]: via: (https://itsfoss.com/unzip-linux/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+
+新手教程:如何在 Linux 下解压 Zip 文件
+======
+
+> 本文将会向你展示如何在 Ubuntu 和其他 Linux 发行版上解压文件,终端和图形界面的方法都会讨论。
+
+[Zip][1] 是一种创建压缩存档文件的最常见、最流行的方式。它也是一种古老的文件归档格式,创建于 1989 年。由于它的广泛使用,你会经常遇见 zip 文件。
+
+在更早的一份教程里,我介绍了[如何在 Linux 上用 zip 压缩一个文件夹][2]。在这篇面向初学者的快速教程中,我会介绍如何在 Linux 上解压文件。
+
+先决条件:检查你是否安装了 `unzip`。
+
+为了解压 zip 归档文件,你的系统上必须安装有 `unzip` 软件包。大多数现代的 Linux 发行版都提供了解压 zip 文件的支持,但确认一下总没有坏处,以免之后遇到意外。
+
+在基于 [Ubuntu][3] 和 [Debian][4] 的发行版上,你可以使用下面的命令来安装 `unzip`。如果已经安装过了,你会收到已安装的提示。
+
+```
+sudo apt install unzip
+```
+
+确认你的系统中安装了 `unzip` 之后,你就可以用它来解压 zip 归档文件了。
+
+你可以使用命令行或者图形工具来完成,两种方法我都会介绍:
+
+### 使用命令行解压文件
+
+在 Linux 下使用 `unzip` 命令是非常简单的。进入存放 zip 文件的目录,然后使用下面的命令:
+
+```
+unzip zipped_file.zip
+```
+
+你也可以直接提供 zip 文件的路径,而不必先切换到它所在的目录。你会在终端输出中看到被提取的文件:
+
+```
+unzip metallic-container.zip
+Archive:  metallic-container.zip
+  inflating: 625993-PNZP34-678.jpg
+  inflating: License free.txt
+  inflating: License premium.txt
+```
+
+上面的命令有一个小问题:它会把 zip 文件中的所有内容都提取到当前文件夹,在当前文件夹下留下一堆杂乱无章的文件,这并不是一件好事。
+
+#### 解压到文件夹下
+
+在 Linux 命令行下,更好的做法是把文件解压到指定的文件夹。这样,所有提取出来的文件都会存放到你指定的文件夹下;如果该文件夹不存在,则会自动创建。
+
+```
+unzip zipped_file.zip -d unzipped_directory
+```
+
+现在 `zipped_file.zip` 中所有的内容都会被提取到 `unzipped_directory` 中。
+
+既然在讨论好的做法,这里还有一点值得一提:我们可以在不实际解压的情况下查看压缩文件的内容。
+
+#### 查看压缩文件中的内容而不解压
+
+```
+unzip -l zipped_file.zip
+```
+
+下面是该命令的输出:
+
+```
+unzip -l metallic-container.zip
+Archive:  metallic-container.zip
+  Length      Date    Time    Name
+---------  ---------- -----   ----
+  6576010  2019-03-07 10:30   625993-PNZP34-678.jpg
+     1462  2019-03-07 13:39   License free.txt
+     1116  2019-03-07 13:39   License premium.txt
+---------                     -------
+  6578588                     3 files
+```
+
+在 Linux 下,`unzip` 还有些其它用法,但我想你现在已经对如何在 Linux 下解压文件有了足够的了解。
+
+### 使用图形界面来解压文件
+
+如果你使用桌面版 Linux,那就不必总是使用终端。在图形化的界面下,我们又要如何解压文件呢?我使用的是 [GNOME 桌面][7],不过在其它桌面版 Linux 发行版上操作也大致相同。
+
+打开文件管理器,然后进入 zip 文件所在的文件夹。在文件上点击鼠标右键,你会在弹出的菜单中看到“提取到这里”,选择它。
+
+![Unzip File in Ubuntu][8]
+
+与 `unzip` 命令的默认行为(将内容提取到当前目录)不同,这个提取选项会创建一个和压缩文件同名的文件夹(LCTT 译注:文件夹名没有 `.zip` 扩展名),并把压缩文件中的所有内容存放到该文件夹中。对我来说,这比 `unzip` 的默认行为要好。
+
+这里还有一个“提取到……”选项,你可以用它选择特定的文件夹来存放提取出的文件。
+
+你现在知道如何在 Linux 下解压文件了。也许你还有兴趣学习[在 Linux 下使用 7zip][9]?
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/unzip-linux/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[octopus](https://github.com/singledo)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://en.wikipedia.org/wiki/Zip_(file_format)
+[2]: https://itsfoss.com/linux-zip-folder/
+[3]: https://ubuntu.com/
+[4]: https://www.debian.org/
+[7]: https://gnome.org/
+[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/unzip-files-ubuntu.jpg?ssl=1
+[9]: https://itsfoss.com/use-7zip-ubuntu-linux/
+
+
diff --git a/published/20191015 4 Free and Open Source Alternatives to Adobe Photoshop.md b/published/20191015 4 Free and Open Source Alternatives to Adobe Photoshop.md
new file mode 100644
index 0000000000..447b694c3a
--- /dev/null
+++ b/published/20191015 4 Free and Open Source Alternatives to Adobe Photoshop.md
@@ -0,0 +1,136 @@
+[#]: collector: (lujun9972)
+[#]: translator: (algzjh)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11474-1.html)
+[#]: subject: (4 Free and Open Source Alternatives to Adobe Photoshop)
+[#]: via: (https://itsfoss.com/open-source-photoshop-alternatives/)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+
+Adobe Photoshop 的 4 种自由开源替代品
+======
+
+> 想寻找免费的 Photoshop 替代品?这里有一些最好的自由开源软件,你可以用它们来代替 Adobe Photoshop。
+
+Adobe Photoshop 是一个可用于 Windows 和 macOS 的高级图像编辑和设计工具。毫无疑问,它十分受欢迎,几乎每个人都知道它。在 Linux 上,你可以在虚拟机中运行 Windows 来使用 Photoshop,或者[通过 Wine][1] 来使用它,但这都不是理想的体验。
+
+一般来说,可以替代 Adobe Photoshop 的选项并不多。然而,在本文中,我们将介绍一些在 Linux 上可用的最佳开源 Photoshop 替代品(它们也支持跨平台)。
+
+请注意,Photoshop 不仅仅是一个图片编辑器,摄影师、数码艺术家和专业编辑会将它用于各种用途。此处的替代软件可能不具备 Photoshop 的全部功能,但你可以用它们来完成各种原本在 Photoshop 中进行的任务。
+
+### 适用于 Linux、Windows 和 macOS 的 Adobe Photoshop 的开源替代品
+
+![][2]
+
+最初,我想只关注 Linux 中的 Photoshop 替代品,但为什么要把这个列表局限于 Linux 呢?其他操作系统用户也可使用开源软件。 + +**如果你正在使用 Linux,则所有提到的软件都应该可以在你的发行版的存储库中找到。你可以使用软件中心或包管理器进行安装。** + +对于其他平台,请查看官方项目网站以获取安装文件。 + +*该列表没有特定的排名顺序* + +#### 1、GIMP:真正的 Photoshop 替代品 + +![][3] + +主要特点: + + * 可定制的界面 + * 数字级修饰 + * 照片增强(使用变换工具) + * 支持广泛的硬件(压敏平板、音乐数字接口等) + * 几乎支持所有主要的图像文件 + * 支持图层管理 + +可用平台:Linux、Windows 和 macOS + +[GIMP][4] 是我处理任何事情的必备工具,无论任务多么基础/高级。也许,这是你在 Linux 下最接近 Photoshop 的替代品。除此之外,它还是一个开源和免费的解决方案,适合希望在 Linux 上创作伟大作品的艺术家。 + +它具有任何类型的图像处理所必需的所有功能。当然,还有图层管理支持。根据你的经验水平,利用率会有所不同。因此,如果你想充分利用它,则应阅读 [文档][5] 并遵循 [官方教程][6]。 + +#### 2、Krita + +![][7] + +主要特点: + + * 支持图层管理 + * 转换工具 + * 丰富的笔刷/绘图工具 + +可用平台:Linux、Windows 和 macOS + +[Krita][8] 是一个令人印象深刻的开源的数字绘画工具。图层管理支持和转换工具的存在使它成为 Photoshop 的基本编辑任务的替代品之一。 + +如果你喜欢素描/绘图,这将对你很有帮助。 + +#### 3、Darktable + +![][9] + +主要特点: + + * RAW 图像显影 + * 支持多种图像格式 + * 多个带有混合运算符的图像操作模块 + +可用平台:Linux、Windows 和 macOS + +[Darktable][10] 是一个由摄影师制作的开源摄影工作流应用程序。它可以让你在数据库中管理你的数码底片。从你的收藏中,显影 RAW 格式的图像并使用可用的工具对其进行增强。 + +从基本的图像编辑工具到支持混合运算符的多个图像模块,你将在探索中发现许多。 + +#### 4、Inkscape + +![][11] + +主要特点: + + * 创建对象的工具(最适合绘图/素描) + * 支持图层管理 + * 用于图像处理的转换工具 + * 颜色选择器(RGB、HSL、CMYK、色轮、CMS) + * 支持所有主要文件格式 + +可用平台:Linux、Windows 和 macOS + +[Inkscape][12] 是一个非常流行的开源矢量图形编辑器,许多专业人士都使用它。它提供了灵活的设计工具,可帮助你创作漂亮的艺术作品。从技术上说,它是 Adobe Illustrator 的直接替代品,但它也提供了一些技巧,可以帮助你将其作为 Photoshop 的替代品。 + +与 GIMP 的官方资源类似,你可以利用 [Inkscape 的教程][13] 来最大程度地利用它。 + +### 在你看来,真正的 Photoshop 替代品是什么? + +很难提供与 Adobe Photoshop 完全相同的功能。然而,如果你遵循官方文档和资源,则可以使用上述 Photoshop 替代品做很多很棒的事情。 + +Adobe 提供了一系列的图形工具,并且我们有 [整个 Adobe 创意套件的开源替代方案][14]。 你也可以去看看。 + +你觉得我们在此提到的 Photoshop 替代品怎么样?你是否知道任何值得提及的更好的替代方案?请在下面的评论中告诉我们。 + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/open-source-photoshop-alternatives/ + +作者:[Ankush Das][a] +选题:[lujun9972][b] +译者:[algzjh](https://github.com/algzjh) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/install-latest-wine/ +[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/open_source_photoshop_alternatives.png?ssl=1 +[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/08/gimp-screenshot.jpg?ssl=1 +[4]: https://www.gimp.org/ +[5]: https://www.gimp.org/docs/ +[6]: https://www.gimp.org/tutorials/ +[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/krita-paint.png?ssl=1 +[8]: https://krita.org/ +[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/darktable.jpg?ssl=1 +[10]: https://www.darktable.org/ +[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2017/12/inkscape-screenshot.jpg?ssl=1 +[12]: https://inkscape.org/ +[13]: https://inkscape.org/learn/ +[14]: https://itsfoss.com/adobe-alternatives-linux/ diff --git a/sources/news/20190924 Global Tech Giants Form Presto Foundation to Tackle Distributed Data Processing at Scale.md b/sources/news/20190924 Global Tech Giants Form Presto Foundation to Tackle Distributed Data Processing at Scale.md new file mode 100644 index 0000000000..f2525fa198 --- /dev/null +++ b/sources/news/20190924 Global Tech Giants Form Presto Foundation to Tackle Distributed Data Processing at Scale.md @@ -0,0 +1,61 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Global Tech Giants Form Presto Foundation to Tackle Distributed Data Processing at Scale) +[#]: via: 
(https://opensourceforu.com/2019/09/global-tech-giants-form-presto-foundation-to-tackle-distributed-data-processing-at-scale/)
+[#]: author: (Longjam Dineshwori https://opensourceforu.com/author/dineshwori-longjam/)
+
+Global Tech Giants Form Presto Foundation to Tackle Distributed Data Processing at Scale
+====== 
+
+  * _**The Foundation aims to make the database search engine “the fastest and most reliable SQL engine for massively distributed data processing.”**_
+  * _**Presto’s architecture allows users to query a variety of data sources and move at scale and speed.**_
+
+
+
+![Facebook][1]
+
+Facebook, Uber, Twitter and Alibaba have joined hands to form a foundation to help Presto, a database search engine and processing tool, scale and diversify its community.
+
+Presto will now be hosted under the Linux Foundation, the U.S.-based non-profit organization announced on Monday.
+
+The newly established Presto Foundation will operate under a community governance model with representation from each of the founding members. It aims to make the engine “the fastest and most reliable SQL engine for massively distributed data processing.”
+
+“The Linux Foundation is excited to work with the Presto community, collaborating to solve the increasing problem of massive distributed data processing at internet scale,” said Michael Dolan, VP of Strategic Programs at the Linux Foundation.
+
+**Presto can run on large clusters of machines**
+
+Presto was developed at Facebook in 2012 as a high-performance distributed SQL query engine for large-scale data analytics. Presto’s architecture allows users to query a variety of data sources, such as Hadoop, S3, Alluxio, MySQL, PostgreSQL, Kafka and MongoDB, and to do so at scale and speed.
+
+It can query data where it is stored, without needing to move the data to a separate system. Its in-memory and distributed query processing results in query latencies of seconds to minutes.
+
+“Presto has been designed for high performance exabyte-scale data processing on a large number of machines. Its flexible design allows processing data from a wide variety of data sources. From day one Presto has been designed with efficiency, scalability and reliability in mind, and it has been improved over the years to take on additional use cases at Facebook, such as batch and other application specific interactive use cases,” said Nezih Yigitbasi, Engineering Manager of Presto at Facebook.
+
+Presto is being used by over a thousand Facebook employees for running several million queries and processing petabytes of data per day, according to Kathy Kam, Head of Open Source at Facebook.
+
+**Expanding community for the benefit of all**
+
+Facebook released the source code of Presto to developers in 2013 in the hope that other companies would help to drive the future direction of the project.
+
+“It turns out many other companies were interested and so under The Linux Foundation, we believe the project can engage others and grow the community for the benefit of all,” said Kathy Kam.
+
+Uber’s data platform architecture uses Presto to extract critical insights from aggregated data. “Uber is honoured to partner with the Linux Foundation and major contributors from the tech community to bring the Presto Foundation to life. Our goal is to help create an open and collaborative community in which Presto developers can thrive,” asserted Brian Hsieh, Head of Open Source at Uber.
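+
+To make the federated-query capability described above concrete, here is a minimal, hedged sketch of what such a query can look like from the Presto command-line client. The server address, catalog names (`hive`, `mysql`) and table names are illustrative assumptions rather than details from the announcement, and they only work where matching connectors are configured:
+
+```
+# Hypothetical example: join data in Hive/S3 with data in MySQL in one
+# query, without first copying either dataset into a separate system.
+presto --server presto.example.com:8080 \
+       --execute "SELECT c.name, count(*) AS requests
+                  FROM hive.web.request_logs AS r
+                  JOIN mysql.crm.customers AS c ON r.customer_id = c.id
+                  GROUP BY c.name"
+```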
+ +Liang Lin, Senior Director of Alibaba OLAP products, believes that the collaboration would eventually benefit the community as well as Alibaba and its customers. + +-------------------------------------------------------------------------------- + +via: https://opensourceforu.com/2019/09/global-tech-giants-form-presto-foundation-to-tackle-distributed-data-processing-at-scale/ + +作者:[Longjam Dineshwori][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensourceforu.com/author/dineshwori-longjam/ +[b]: https://github.com/lujun9972 +[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2016/06/Facebook-Like.jpg?resize=350%2C213&ssl=1 diff --git a/sources/news/20190926 Cisco- 13 IOS, IOS XE security flaws you should patch now.md b/sources/news/20190926 Cisco- 13 IOS, IOS XE security flaws you should patch now.md new file mode 100644 index 0000000000..5867ac6848 --- /dev/null +++ b/sources/news/20190926 Cisco- 13 IOS, IOS XE security flaws you should patch now.md @@ -0,0 +1,65 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Cisco: 13 IOS, IOS XE security flaws you should patch now) +[#]: via: (https://www.networkworld.com/article/3441221/cisco-13-ios-ios-xe-security-flaws-you-should-patch-now.html) +[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/) + +Cisco: 13 IOS, IOS XE security flaws you should patch now +====== +Cisco says vulnerabilities in IOS/IOS XE could cause DOS situation; warns on Traceroute setting +Woolzian / Getty Images + +Cisco this week warned its IOS and IOS XE customers of 13 vulnerabilities in the operating system software they should patch as soon as possible. + +All of the vulnerabilities – revealed in the company’s semiannual [IOS and IOS XE Software Security Advisory Bundle][1] – have a security impact rating (SIR) of "high". Successful exploitation of the vulnerabilities could allow an attacker to gain unauthorized access to, conduct a command injection attack on, or cause a denial of service (DoS) condition on an affected device, Cisco stated.  + +["How to determine if Wi-Fi 6 is right for you"][2] + +Two of the vulnerabilities affect both Cisco IOS Software and Cisco IOS XE Software. Two others affect Cisco IOS Software, and eight of the vulnerabilities affect Cisco IOS XE Software. The final one affects the Cisco IOx application environment. Cisco has confirmed that none of the vulnerabilities affect Cisco IOS XR Software or Cisco NX-OS Software.  Cisco [has released software updates][3] that address these problems. + +Some of the worst exposures include: + + * A [vulnerability in the IOx application environment][4] for Cisco IOS Software could let an authenticated, remote attacker gain unauthorized access to the Guest Operating System (Guest OS) running on an affected device. The vulnerability is due to incorrect role-based access control (RBAC) evaluation when a low-privileged user requests access to a Guest OS that should be restricted to administrative accounts. An attacker could exploit this vulnerability by authenticating to the Guest OS by using the low-privileged-user credentials. 
+An exploit could allow the attacker to gain unauthorized access to the Guest OS as root. This vulnerability affects Cisco 800 Series Industrial Integrated Services Routers and Cisco 1000 Series Connected Grid Routers (CGR 1000) that are running a vulnerable release of Cisco IOS Software with Guest OS installed. While Cisco did not rate this vulnerability as critical, it did have a Common Vulnerability Scoring System (CVSS) score of 9.9 out of 10. Cisco recommends disabling the guest feature until a proper fix is installed.
+  * An exposure in the [Ident protocol handler of Cisco IOS and IOS XE][5] software could allow a remote attacker to cause an affected device to reload. The problem exists because the affected software incorrectly handles memory structures, leading to a NULL pointer dereference, Cisco stated. An attacker could exploit this vulnerability by opening a TCP connection to specific ports and sending traffic over that connection. A successful exploit could let the attacker cause the affected device to reload, resulting in a denial of service (DoS) condition. This vulnerability affects Cisco devices that are running a vulnerable release of Cisco IOS or IOS XE Software and that are configured to respond to Ident protocol requests.
+  * A vulnerability in the [common Session Initiation Protocol (SIP) library][6] of Cisco IOS and IOS XE Software could let an unauthenticated, remote attacker trigger a reload of an affected device, resulting in a denial of service (DoS). The vulnerability is due to insufficient sanity checks on an internal data structure. An attacker could exploit this vulnerability by sending a sequence of malicious SIP messages to an affected device. An exploit could allow the attacker to cause a NULL pointer dereference, resulting in a crash of the _iosd_ process. This triggers a reload of the device, Cisco stated.
+  * A [vulnerability in the ingress packet-processing][7] function of Cisco IOS Software for Cisco Catalyst 4000 Series Switches could let an attacker cause a denial of service (DoS). The vulnerability is due to improper resource allocation when processing TCP packets directed to the device on specific Cisco Catalyst 4000 switches. An attacker could exploit this vulnerability by sending crafted TCP streams to an affected device. A successful exploit could cause the affected device to run out of buffer resources, impairing operations of control-plane and management-plane protocols, resulting in a DoS condition. This vulnerability can be triggered only by traffic that is destined to an affected device and cannot be exploited using traffic that transits an affected device, Cisco stated.
+
+
+
+In addition to the warnings, Cisco also [issued an advisory][8] for users to deal with problems in its IOS and IOS XE Layer 2 (L2) traceroute utility program. The traceroute utility identifies the L2 path that a packet takes from a source device to a destination device.
+
+Cisco said that by design, the L2 traceroute server does not require authentication, but it allows certain information about an affected device to be read, including hostname, hardware model, configured interfaces, IP addresses and other details. Reading this information from multiple switches in the network could allow an attacker to build a complete L2 topology map of that network.
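+
+The next paragraph lists Cisco's suggested ways to secure the server. As a rough sketch only (the commands and the UDP port 2228 listening port reflect our reading of the advisory, are assumptions on our part, and vary by platform and release, so verify everything against Cisco's published guidance), disabling the server or filtering its port might look like this:
+
+```
+! Hypothetical example: disable the L2 traceroute server globally
+Switch(config)# no l2 traceroute
+
+! Or filter the UDP port it is assumed to listen on with an infrastructure ACL
+Switch(config)# ip access-list extended PROTECT-L2TRACE
+Switch(config-ext-nacl)# deny udp any any eq 2228
+Switch(config-ext-nacl)# permit ip any any
+```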
+ +Depending on whether the L2 traceroute feature is used in the environment and whether the Cisco IOS or IOS XE Software release supports the CLI commands to implement the respective option, Cisco said there are several ways to secure the L2 traceroute server: disable it, restrict access to it through infrastructure access control lists (iACLs), restrict access through control plane policing (CoPP), and upgrade to a software release that disables the server by default. + +**[ [Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][9] ]** + +Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3441221/cisco-13-ios-ios-xe-security-flaws-you-should-patch-now.html + +作者:[Michael Cooney][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Michael-Cooney/ +[b]: https://github.com/lujun9972 +[1]: https://tools.cisco.com/security/center/viewErp.x?alertId=ERP-72547 +[2]: https://www.networkworld.com/article/3356838/how-to-determine-if-wi-fi-6-is-right-for-you.html +[3]: https://tools.cisco.com/security/center/softwarechecker.x +[4]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190925-ios-gos-auth +[5]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190925-identd-dos +[6]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190925-sip-dos +[7]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190925-cat4000-tcp-dos +[8]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190925-l2-traceroute +[9]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr +[10]: https://www.facebook.com/NetworkWorld/ +[11]: https://www.linkedin.com/company/network-world diff --git a/sources/news/20190926 MG Motor Announces Developer Program and Grant in India.md b/sources/news/20190926 MG Motor Announces Developer Program and Grant in India.md new file mode 100644 index 0000000000..2f88770e2a --- /dev/null +++ b/sources/news/20190926 MG Motor Announces Developer Program and Grant in India.md @@ -0,0 +1,51 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (MG Motor Announces Developer Program and Grant in India) +[#]: via: (https://opensourceforu.com/2019/09/mg-motor-announces-developer-program-and-grant-in-india/) +[#]: author: (Mukul Yudhveer Singh https://opensourceforu.com/author/mukul-kumar/) + +MG Motor Announces Developer Program and Grant in India +====== + +[![][1]][2] + + * _**Launched in partnership with Adobe, Cognizant, SAP, Airtel, TomTom and Unlimit**_ + * _**Initiative provides developers to build innovative mobility applications and experiences**_ + + + +![][3]MG Motor India has today announced the introduction of its MG Developer Program and Grant. 
Launched in collaboration with leading technology companies such as SAP, Cognizant, Adobe, Airtel, TomTom and Unlimit, the initiative is aimed at incentivizing Indian innovators and developers to build futuristic mobility applications and experiences. The program also brings in TiE Delhi NCR as the ecosystem partner.
+
+Rajeev Chaba, president & MD, MG Motor India, said, “The automobile industry is currently witnessing sweeping transformations in the space of connected, electric and shared mobility. MG aims to take this revolution forward with its focus on attaining technological leadership in the automotive industry. We have partnered with leading tech giants to enable start-ups to build innovative applications that would enable unique experiences for customers across the entire automotive ecosystem. More partners are likely to join the program in due course.”
+
+The company is encouraging developers to send in their ideas to the MG India team. During the program, selected ideas will get access to resources from the likes of Airtel, SAP, Adobe, Unlimit and Cognizant.
+
+**Grants ranging up to Rs 25 lakhs (2.5 million) for start-ups and innovators**
+
+As part of the MG Developer Program & Grant, MG Motor India will provide innovators with an unparalleled opportunity to secure mentorship and funding from industry leaders. Shortlisted ideas will receive specialized, high-level mentoring and networking opportunities to assist with the practical development of the solution, business plan and modelling, testing facilities, go-to-market strategy, etc. Winning ideas will also have access to a grant, the amount of which will be decided by the jury on a case-to-case basis.
+
+The MG Developer Program & Grant will initially focus on driving innovation in the following verticals: electric vehicles and components, batteries and management, charging infrastructure, connected mobility, voice recognition, AI & ML, navigation technologies, customer experiences, car buying experiences, and autonomous vehicles.
+
+“The MG Developer & Grant Program is the latest in a series of initiatives as part of our commitment to innovation as a core organizational pillar. The program will ensure proper mentoring from over 20 industry leaders for start-ups, laying a foundation for them to excel in the future and trigger a stream of newer Internet Car use-cases that will, in turn, drive adoption of new technologies within the Indian automotive ecosystem. It has been our commitment in the market, and innovation is our key pillar,” added Chaba.
+
+The program will award grants ranging from INR 5 lakhs to INR 25 lakhs. The program will be open to both external developers – including students, innovators, inventors, startups and other tech companies – and internal employee teams at MG Motor and its program partners.
+ +-------------------------------------------------------------------------------- + +via: https://opensourceforu.com/2019/09/mg-motor-announces-developer-program-and-grant-in-india/ + +作者:[Mukul Yudhveer Singh][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensourceforu.com/author/mukul-kumar/ +[b]: https://github.com/lujun9972 +[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/MG-Developer-program.png?resize=660%2C440&ssl=1 (MG Developer program) +[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/MG-Developer-program.png?fit=660%2C440&ssl=1 +[3]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/MG-Developer-program.png?resize=350%2C233&ssl=1 diff --git a/sources/news/20191002 Fedora projects for Hacktoberfest.md b/sources/news/20191002 Fedora projects for Hacktoberfest.md new file mode 100644 index 0000000000..b8a5874b53 --- /dev/null +++ b/sources/news/20191002 Fedora projects for Hacktoberfest.md @@ -0,0 +1,77 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Fedora projects for Hacktoberfest) +[#]: via: (https://fedoramagazine.org/fedora-projects-for-hacktoberfest/) +[#]: author: (Ben Cotton https://fedoramagazine.org/author/bcotton/) + +Fedora projects for Hacktoberfest +====== + +![][1] + +It’s October! That means its time for the annual [Hacktoberfest][2] presented by DigitalOcean and DEV. Hacktoberfest is a month-long event that encourages contributions to open source software projects. Participants who [register][3] and submit at least four pull requests to GitHub-hosted repositories during the month of October will receive a free t-shirt. + +In a recent Fedora Magazine article, I listed some areas where would-be contributors could [get started contributing to Fedora][4]. In this article, I highlight some specific projects that provide an opportunity to help Fedora while you participate in Hacktoberfest. + +### Fedora infrastructure + + * [Bodhi][5] — When a package maintainer builds a new version of a software package to fix bugs or add new features, it doesn’t go out to users right away. First it spends time in the updates-testing repository where in can receive some real-world usage. Bodhi manages the flow of updates from the testing repository into the updates repository and provides a web interface for testers to provide feedback. + * [the-new-hotness][6] — This project listens to [release-monitoring.org][7] (which is also on [GitHub][8]) and opens a Bugzilla issue when a new upstream release is published. This allows package maintainers to be quickly informed of new upstream releases. + * [koschei][9] — koschei enables continuous integration for Fedora packages. It is software for running a service for scratch-rebuilding RPM packages in Koji instance when their build-dependencies change or after some time elapses. + * [MirrorManager2][10] — Distributing Fedora packages to a global user base requires a lot of bandwidth. Just like developing Fedora, distributing Fedora is a collaborative effort. MirrorManager2 tracks the hundreds of public and private mirrors and routes each user to the “best” one. 
+ * [fedora-messaging][11] — Actions within the Fedora community—from source code commits to participating in IRC meetings to…lots of things—generate messages that can be used to perform automated tasks or send notifications. fedora-messaging is the tool set that makes sending and receiving these messages possible. + * [fedocal][12] — When is that meeting? Which IRC channel was it in again? Fedocal is the calendar system used by teams in the Fedora community to coordinate meetings. Not only is it a good Hacktoberfest project, it’s also [looking for a new maintainer][13] to adopt it. + + + +In addition to the projects above, the Fedora Infrastructure team has highlighted [good Hacktoberfest issues][14] across all of their GitHub projects. + +### Community projects + + * [bodhi-rs][15] — This project provides Rust bindings for Bodhi. + * [koji-rs][16] — Koji is the system used to build Fedora packages. Koji-rs provides bindings for Rust applications. + * [fedora-rs][17] — This project provides a Rust library for interacting with Fedora services like other languages like Python have. + * [feedback-pipeline][18] — One of the current Fedora Council objectives is [minimization][19]: work to reduce the installation and patching footprint of Fedora releases. feedback-pipeline is a tool developed by this team to generate reports of RPM sizes and dependencies. + + + +### And many more + +The projects above are only a small sample focused on software used to build Fedora. Many Fedora packages have upstreams hosted on GitHub—too many to list here. The best place to start is with a project that’s important to you. Any contributions you make help improve the entire open source ecosystem. If you’re looking for something in particular, the [Join Special Interest Group][20] can help. Happy hacking! 
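+
+If you have never submitted a pull request before, the basic GitHub flow is quick to pick up. Here is a minimal sketch; the repository and branch names below are made up for illustration, and each project's own contributing guide takes precedence:
+
+```
+# Fork the project on GitHub first, then clone your fork
+git clone https://github.com/<your-username>/bodhi.git
+cd bodhi
+
+# Make your change on a topic branch and commit it
+git checkout -b fix-docs-typo
+git commit -am "Fix a typo in the contributor docs"
+
+# Push the branch to your fork, then open the pull request on GitHub
+git push origin fix-docs-typo
+```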
+ +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/fedora-projects-for-hacktoberfest/ + +作者:[Ben Cotton][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/bcotton/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2019/09/hacktoberfest-816x345.jpg +[2]: https://hacktoberfest.digitalocean.com/ +[3]: https://hacktoberfest.digitalocean.com/register +[4]: https://fedoramagazine.org/how-to-contribute-to-fedora/ +[5]: https://github.com/fedora-infra/bodhi +[6]: https://github.com/fedora-infra/the-new-hotness +[7]: https://release-monitoring.org/ +[8]: https://github.com/release-monitoring/anitya +[9]: https://github.com/fedora-infra/koschei +[10]: https://github.com/fedora-infra/mirrormanager2 +[11]: https://github.com/fedora-infra/fedora-messaging +[12]: https://github.com/fedora-infra/fedocal +[13]: https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/message/GH4N3HYJ4ARFRP666O6EQCHDIQMXVUJB/ +[14]: https://github.com/orgs/fedora-infra/projects/4 +[15]: https://github.com/ironthree/bodhi-rs +[16]: https://github.com/ironthree/koji-rs +[17]: https://github.com/ironthree/fedora-rs +[18]: https://github.com/minimization/feedback-pipeline +[19]: https://docs.fedoraproject.org/en-US/minimization/ +[20]: https://fedoraproject.org/wiki/SIGs/Join diff --git a/sources/news/20191008 Kubernetes communication, SRE struggles, and more industry trends.md b/sources/news/20191008 Kubernetes communication, SRE struggles, and more industry trends.md new file mode 100644 index 0000000000..a3ba0a6a52 --- /dev/null +++ b/sources/news/20191008 Kubernetes communication, SRE struggles, and more industry trends.md @@ -0,0 +1,64 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Kubernetes communication, SRE struggles, and more industry trends) +[#]: via: (https://opensource.com/article/19/10/kubernetes-sre-more-industry-trends) +[#]: author: (Tim Hildred https://opensource.com/users/thildred) + +Kubernetes communication, SRE struggles, and more industry trends +====== +A weekly look at open source community and industry trends. +![Person standing in front of a giant computer screen with numbers, data][1] + +As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update. + +## [Review of pod-to-pod communications in Kubernetes][2] + +> In this article, we dive into pod-to-pod communications by showing you ways in which pods within a Kubernetes network can communicate with one another. +> +> While Kubernetes is opinionated in how containers are deployed and operated, it is very non-prescriptive of how the network should be designed in which pods are to be run. Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies) + +**The impact**: Networking is one of the most complicated parts of making computers work together to solve our problems. 
Kubernetes turns that complexity up to 11, and this article dials it back down to 10.75. + +## [One SRE's struggle and success to improve Infrastructure as Code][3] + +> Convergence is our goal because we expect our infrastructure to reach a desired state over time expressed in the code. Software idempotence means software can run as many times as it wants and unintended changes don’t happen. As a result, we built an in-house service that runs as specified to apply configurations in source control. Traditionally, we’ve aimed for a masterless configuration design so our configuration agent looks for information on the host. + +**The impact**: I've heard it said that the [human element][4] is the most important element of any digital transformation. While I don't know that the author would use that term to describe the outcome he was after, he does a great job of showing that it is not automation for automation's sake we want but rather automation that makes a meaningful impact on the lives of the people it supports. + +## [Why GitHub is the gold standard for developer-focused companies][5] + +> Now, with last year’s purchase by Microsoft supporting them, it is clear that GitHub has a real opportunity to continue building out a robust ecosystem, with billion dollar companies built upon what could turn into a powerful platform. Is GitHub the next ecosystem success story? In a word, yes. At my company, we bet on GitHub as a successful platform to build upon from the very start. We felt it was the place to build our solution if we wanted to streamline project management and keep software teams close to the code. + +**The impact**: It is one of the great ironies of open source that the most popular tool for open source development is not itself open source. The only way this works is if that tool is so good that open source developers are willing to overlook that inconsistency. + +## [KubeVirt joins Cloud Native Computing Foundation][6] + +> This month the Cloud Native Computing Foundation (CNCF) formally adopted [KubeVirt][7] into the CNCF Sandbox. KubeVirt allows you to provision, manage and run virtual machines from and within Kubernetes. In joining the CNCF Sandbox, KubeVirt now has a more substantial platform to grow as well as educate the CNCF community on the use cases for placing virtual machines within Kubernetes. The CNCF onboards projects into the CNCF Sandbox when they warrant experimentation on neutral ground to promote and foster collaborative development. + +**The impact**: The convergence of containers and virtual machines is clearly a direction vendors think is valuable. Moving this project to the CNCF gives a way to see whether this idea is going to be as popular with users and customers as vendors hope it will be. 
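+
+For a feel of what that convergence looks like day to day, here is a minimal, hedged sketch of KubeVirt usage. It assumes KubeVirt and its `virtctl` client are already installed in the cluster and that a `VirtualMachine` resource named `testvm` has been defined; the name is illustrative:
+
+```
+# Virtual machines show up as ordinary Kubernetes API objects
+kubectl get vms
+
+# Start the VM, then attach to its serial console
+virtctl start testvm
+virtctl console testvm
+```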
+ +_I hope you enjoyed this list of what stood out to me from last week and come back next Monday for more open source community, market, and industry trends._ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/10/kubernetes-sre-more-industry-trends + +作者:[Tim Hildred][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/thildred +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data) +[2]: https://superuser.openstack.org/articles/review-of-pod-to-pod-communications-in-kubernetes/ +[3]: https://thenewstack.io/one-sres-struggle-and-success-to-improve-infrastructure-as-code/ +[4]: https://devops.com/the-secret-to-digital-transformation-is-human-connection/ +[5]: https://thenextweb.com/podium/2019/10/02/why-github-is-the-gold-standard-for-developer-focused-companies/ +[6]: https://blog.openshift.com/kubevirt-joins-cloud-native-computing-foundation/ +[7]: https://kubevirt.io/ diff --git a/sources/news/20191013 System76 will ship Coreboot-powered firmware, a new OS for the apocalypse, and more open source news.md b/sources/news/20191013 System76 will ship Coreboot-powered firmware, a new OS for the apocalypse, and more open source news.md new file mode 100644 index 0000000000..eab1e9bd0e --- /dev/null +++ b/sources/news/20191013 System76 will ship Coreboot-powered firmware, a new OS for the apocalypse, and more open source news.md @@ -0,0 +1,103 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (System76 will ship Coreboot-powered firmware, a new OS for the apocalypse, and more open source news) +[#]: via: (https://opensource.com/article/19/10/news-october-13) +[#]: author: (Lauren Maffeo https://opensource.com/users/lmaffeo) + +System76 will ship Coreboot-powered firmware, a new OS for the apocalypse, and more open source news +====== +Catch up on the biggest open source headlines from the past two weeks. +![Weekly news roundup with TV][1] + +In this edition of our open source news roundup, we cover System76 shipping Coreboot-powered firmware, a new OS for the apocalypse, and more open source news! + +### System76 will ship 2 Linux laptops with Coreboot-powered open source firmware + +The Denver-based Linux PC manufacturer announced plans to start shipping two laptop models with its Coreboot-powered open source firmware later this month. Jason Evangelho, Senior Contributor at _Forbes_, cited this move as a march towards offering open source software and hardware from the ground up.  + +System76, which also develops [Pop OS][2], is now taking pre-orders for its Galago Pro and Darter Pro laptops. It claims that Coreboot will let users boot from power off to the desktop 29% faster. + +Coreboot is a lightweight firmware designed to simplify the boot cycle of systems using it. It requires the minimum number of tasks needed to load and run a modern 32-bit or 64-bit operating system. Coreboot can offer a replacement for proprietary firmware, though it omits features like execution environments. Our own [Don Watkins][3] asked if Coreboot will ship on other System76 machines. 
Their response, [as reported by _Forbes_][4]:
+
+> _"Yes. Long term, System76 is working to open source all aspects of the computer. Thelio Io, the controller board in the Thelio desktop, is both open hardware and open firmware. This is a long journey but we're picking up speed. It's been less than a year since our open hardware Thelio desktop was released and we're now producing two laptops with System76 Open Firmware."_
+
+### Collapse OS is an operating system for the post-apocalypse
+
+Virgil Dupras, a software developer based in Quebec, is convinced the world's global supply chain will collapse before 2030. And he's worried that most [electronics will get caught in the crosshairs][5] due to "a very complex supply chain that we won't be able to achieve again for decades (ever?)." 
+
+To prepare for the worst, Dupras built Collapse OS. It's [designed to run][6] on "minimal or improvised machines" and perform simple tasks that are helpful in a post-apocalyptic society. These include editing text files, collecting source files for MCUs and CPUs, and reading/writing from several storage devices.
+
+Dupras says it's intended for worst-case scenarios, and that a "weak collapse" might not be enough to justify its use. If you err on the side of caution, the Collapse OS project is accepting new contributors [on GitHub][7]. 
+
+Per the project website, Dupras says his goal is for Collapse OS to be as self-contained as possible, with the ability for users to install the OS without Internet access or other resources. Ideally, the goal is for Collapse OS to not be used at all.
+
+### ExpressionEngine will stay open source post-acquisition
+
+The team behind the open source CMS ExpressionEngine was acquired by Packet Tide - EEHarbor's parent company - in early October. [This announcement][8] comes one year after Digital Locations acquired EllisLab, which develops EE core. 
+
+[In an announcement][9] on ExpressionEngine's website, EllisLab founder Rick Ellis said Digital Locations wasn't a good fit for ExpressionEngine. Citing Digital Locations' goals to build an AI business, Ellis realized several months ago that ExpressionEngine needed a new home:
+
+> _"We decided that what was best for ExpressionEngine was to seek a new owner, one that could devote all the resources necessary for ExpressionEngine to flourish. Our top candidate was Packet Tide due to their development capability, extensive catalog of add-ons, and deep roots in the ExpressionEngine community._
+>
+> _We are thrilled that they immediately expressed enthusiastic interest in becoming the caretakers of ExpressionEngine."_
+
+Ellis says Packet Tide's first goal is to finish building ExpressionEngine 6.0, which will have a new control panel with a dark theme (who doesn't love dark mode?). ExpressionEngine adopted the Apache License Version 2.0 in November 2018, after 16 years as a proprietary tool.
+
+The tool is still marketed as an open source CMS, and EE Harbor developer Tom Jaeger said [in the EE Slack][10] that their plan is to keep ExpressionEngine open source now. But he also left the door open to possible changes. 
+
+### McAfee and IBM Security to lead the Open Source Cybersecurity Alliance
+
+The two tech giants will contribute the initiative's first open source code and content, under guidance from the OASIS consortium. The Alliance aims to share best practices, tech stacks, and security solutions in an open source platform. 
+ +Carol Geyer, chief development officer of OASIS, said the lack of standard language makes it hard for businesses to share data between tools and products. Despite efforts to collaborate, the lack of a standardized format yields more integration costs that are expensive and time-consuming. + +In lieu of building connections and integrations, [the Alliance wants members][11] to "develop protocols and standards which enable tools to work together and share information across vendors."  + +According to _Tech Republic_, IBM Security will contribute [STIX-Shifter][12], an open source library that offer a universal security system. Meanwhile, McAfee added its [OpenDXL Standard Ontology][13], a cybersecurity messaging format. Other members of the Alliance include CrowdStrike, CyberArk, and SafeBreach. + +#### In other news + + * [Paris uses open source to get closer to the citizen][14] + * [SD Times open source project of the week: ABAP SDK for IBM Watson][15] + * [Google's keeping Knative development under its thumb 'for the foreseeable future'][16] + * [Devs engage in soul-searching on future of open source][17] + * [Why leading Formula 1 teams back 'copycat' open source design idea][18] + + + +_Thanks, as always, to Opensource.com staff members and moderators for their help this week._ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/10/news-october-13 + +作者:[Lauren Maffeo][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/lmaffeo +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/weekly_news_roundup_tv.png?itok=B6PM4S1i (Weekly news roundup with TV) +[2]: https://system76.com/pop +[3]: https://opensource.com/users/don-watkins +[4]: https://www.forbes.com/sites/jasonevangelho/2019/10/10/system76-will-begin-shipping-2-linux-laptops-with-coreboot-based-open-source-firmware/#15a4da174e64 +[5]: https://collapseos.org/why.html +[6]: https://www.digitaltrends.com/cool-tech/collapse-os-after-societys-collapse/ +[7]: https://github.com/hsoft/collapseos +[8]: https://wptavern.com/expressionengine-under-new-ownership-will-remain-open-source-for-now +[9]: https://expressionengine.com/blog/expressionengine-has-a-new-owner +[10]: https://eecms.slack.com/?redir=%2Farchives%2FC04CUNNR9%2Fp1570576465005500 +[11]: https://www.techrepublic.com/article/mcafee-ibm-join-forces-for-global-open-source-cybersecurity-initiative/ +[12]: https://github.com/opencybersecurityalliance/stix-shifter +[13]: https://www.opendxl.com/ +[14]: https://www.smartcitiesworld.net/special-reports/special-reports/paris-uses-open-source-to-get-closer-to-the-citizen +[15]: https://sdtimes.com/os/sd-times-open-source-project-of-the-week-abap-sdk-for-ibm-watson/ +[16]: https://www.datacenterknowledge.com/google-alphabet/googles-keeping-knative-development-under-its-thumb-foreseeable-future +[17]: https://www.linuxinsider.com/story/86282.html +[18]: https://www.autosport.com/f1/news/146407/why-leading-f1-teams-back-copycat-design-proposal diff --git a/sources/talk/20181218 The Rise and Demise of RSS.md b/sources/talk/20181218 The Rise and Demise of RSS.md index 2dfea2074c..e260070c5c 100644 --- a/sources/talk/20181218 The Rise and Demise of RSS.md +++ b/sources/talk/20181218 The Rise and Demise of RSS.md @@ -1,5 
+1,5 @@ [#]: collector: (lujun9972) -[#]: translator: (beamrolling) +[#]: translator: ( ) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) diff --git a/sources/talk/20190923 Deloitte Launches New Tool for Tracking the Trajectory of Open Source Technologies.md b/sources/talk/20190923 Deloitte Launches New Tool for Tracking the Trajectory of Open Source Technologies.md new file mode 100644 index 0000000000..aebc6cb011 --- /dev/null +++ b/sources/talk/20190923 Deloitte Launches New Tool for Tracking the Trajectory of Open Source Technologies.md @@ -0,0 +1,67 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Deloitte Launches New Tool for Tracking the Trajectory of Open Source Technologies) +[#]: via: (https://opensourceforu.com/2019/09/deloitte-launches-new-tool-for-tracking-the-trajectory-of-open-source-technologies/) +[#]: author: (Longjam Dineshwori https://opensourceforu.com/author/dineshwori-longjam/) + +Deloitte Launches New Tool for Tracking the Trajectory of Open Source Technologies +====== + + * _**Called Open Source Compass, the new open source analysis tool provides insights into 15 emergent technology domains**_ + * _**It can help software engineers in identifying potential platforms for prototyping, experimentation and scaled innovation.**_ + + + +Deloitte has launched a first-of-its-kind public data visualization tool, called Open Source Compass (OSC), which is intended to help C-suite leaders, product managers and software engineers understand the trajectory of open source development and emerging technologies. + +Deloitte collaborated with University of Toulouse Chair of Artificial and Natural Intelligence Toulouse Institute (ANITI) and co-founder of Datawheel, César Hidalgo to design and developed the tool. + +The tool enables users to search technology domains, projects, programming languages and locations of interest, explore emerging trends, run comparisons, and share and download data. + +“Open source software has been around since the early days of the internet and has incited a completely new kind of collaboration and productivity — especially in the realm of emerging technology,” said Bill Briggs, chief technology officer, Deloitte Consulting LLP. + +“Deloitte’s Open Source Compass can help provide insights that allow organizations to be more deliberate in their approach to innovation, while connecting to a pool of bourgeoning talent,” he added. + +**Free and open to the public** + +Open Source Compass will provide insights into 15 emergent technology domains, including cyber security, virtual/augmented reality, serverless computing and machine learning, to name a few. + +The site will offer a view into systemic trends on how the domains are evolving. The open source platform will also explore geographic trends based on project development, authors and knowledge sharing across cities and countries. It will also track how certain programming languages are being used and how fast they are growing. Free and open to the public, the site will enable users to query technology domains of interest, run their own comparisons and share or download data. + +**The benefits of using Open Source Compass** + +OSC analyzes data from the largest open source development platform which brings together over 36 million developers from around the world. 
OSC visualizes the scale and reach of emerging technology domains — over 100 million repositories/projects — in areas including blockchain, machine learning and the Internet of Things (IoT). + +Some of the key benefits of Deloitte’s new open source analysis tool include: + + * Exploring which specific open source projects are growing or stagnating in domains like machine learning. + * Identifying potential platforms for prototyping, experimentation and scaled innovation. + * Scouting for tech talent in specific technology domains and locations. + * Detecting and assessing technology risks. + * Understanding what programming languages are gaining or losing ground to inform training and recruitment + + + +According to Ragu Gurumurthy, global chief innovation officer for Deloitte Consulting LLP, Open Source Compass can address different organizational needs for different types of users based on their priorities. + +He explained, “A CTO could explore the latest project developments in machine learning to help drive experimentation, while a learning and development leader can find the most popular programming language for robotics that could then be taught as a new skill in an internal course offering.” + +Datawheel is an award-winning company specialized in the creation of data visualization solutions. “Making sense of large streams of data is one of the most pressing challenges of our day,” said Hidalgo. + +“In Open Source Compass, we used our latest technologies to create a platform that turns opaque and difficult to understand streams of data into simple and easy to understand visualizations,” he commented. +-------------------------------------------------------------------------------- + +via: https://opensourceforu.com/2019/09/deloitte-launches-new-tool-for-tracking-the-trajectory-of-open-source-technologies/ + +作者:[Longjam Dineshwori][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensourceforu.com/author/dineshwori-longjam/ +[b]: https://github.com/lujun9972 diff --git a/sources/talk/20190923 Simulating Smart Cities with CupCarbon.md b/sources/talk/20190923 Simulating Smart Cities with CupCarbon.md new file mode 100644 index 0000000000..78d8fb2eac --- /dev/null +++ b/sources/talk/20190923 Simulating Smart Cities with CupCarbon.md @@ -0,0 +1,100 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Simulating Smart Cities with CupCarbon) +[#]: via: (https://opensourceforu.com/2019/09/simulating-smart-cities-with-cupcarbon/) +[#]: author: (Dr Kumar Gaurav https://opensourceforu.com/author/dr-gaurav-kumar/) + +Simulating Smart Cities with CupCarbon +====== + +[![][1]][2] + +_CupCarbon is a smart city and IoT wireless sensor network (WSN) simulator. It is a new platform for 2D/3D design, visualisation and the simulation of radio propagation and interferences in IoT networks. It is particularly relevant in India today, since the development of smart cities is a priority of the government._ + +It was a wide range of devices interconnected through wireless technologies that gave birth to the Internet of Things (IoT). A number of smart gadgets and machines are now monitored and controlled using IoT protocols. Across the world, devices enjoy all-time connectivity because of the IoT. 
+ +![Figure 1: Key element and components of a smart city project][3] + +![Figure 2: Official Web portal of the CupCarbon simulator][4] + +![Figure 3: Roads, objects and connections in the CupCarbon simulator][5] + +From the research reports of _Statista.com_, sales of smart home devices in the US went up from US$ 1.3 billion to US$ 4.5 billion, from 2016 to 2019. The _Economic Times_ reports that there will be around 2 billion units of eSIM based devices by 2025. An eSIM enables subscribers to use the digital SIM card for smart devices and the services can be activated without a physical SIM card. It is one of the recent and more secure applications of IoT. + +Beyond the traditional applications, IoT is being researched for purposes like monitoring the environment and providing prior notifications to regulating agencies so that appropriate action can be taken, when required. Reports from _LiveMint.com_ indicate that the Indian Institute of Technology, New Delhi and Ericsson are partnering to tackle the air pollution in Delhi. News reports from Grand View Research Inc. indicate that the global NB (Narrow Band)-IoT market size is predicted to touch more than US$ 6 billion by 2025. NB-IoT refers to the radio technology standard with a low-power wide-area network (LPWAN) that enables wide scale coverage and better performance of connected smart devices. + +![Figure 4: Working panel of the CupCarbon simulator][6] + +![Figure 5: Adding different types of sensor nodes in CupCarbon][7] + +![Figure 6: Option for the SenScript window in CupCarbon][8] + +**Free and open source tools for IoT implementation** +A wide range of free and open source simulators and frameworks is available to simulate IoT scenarios. These can be used for R&D so that the performance of different smart city and IoT algorithms can be analysed. Research projects for a smart city need to be simulated so that the citizen behaviour can be evaluated on multiple parameters before launching the actual IoT enabled smart city systems. + +**[![][9]][10]Installing and working with CupCarbon** +CupCarbon (__) is a prominent, multi-featured simulator that is used for the simulation of smart cities and IoT based advanced wireless network scenarios. + +It provides an effective graphical user interface (GUI) for the integration of objects in the smart city with wireless sensors. The sensor nodes and algorithms can be programmed in the SenScript Editor in CupCarbon. SenScript is the script that is used for the programming and control of sensors used in the simulation environment. In SenScript, a number of programming constructs and modules can be used so that the smart city environment can be simulated. + +![Figure 7: The SenScript Editor in CupCarbon for programming of sensors][11] + +![Figure 8: Integration of markers and route in CupCarbon][12] + +![Figure 9: Executing SenScript in CupCarbon to get an animated view of the smart city][13] + +**Creating dynamic scenarios for IoT and smart cities using the CupCarbon simulator** +The working environment of CupCarbon has numerous options to create and program sensors of different types. In the middle, there is a Map View, in which the smart city under simulation can be viewed dynamically. + +The sensors and smart objects are displayed in Map View. To program these smart devices and traffic objects, the toolbar of CupCarbon provides the programming modules so that the behaviour of every object can be controlled. 
+
+Any number of nodes or motes can be imported into CupCarbon and programmed at random positions. In addition, weather conditions and environmental factors can be added so that the smart city project can be simulated under specific environmental conditions. Using this option, the performance of the smart city can be evaluated under different situations with varying city temperatures.
+
+The SenScript editor is where the functions and methods associated with each sensor or smart device are written and executed. This editor has a wide range of inbuilt functions which can be called. These functions can be attached to the sensors and smart objects in the CupCarbon simulator.
+
+The markers and routes provide the traffic path for the vehicles in the smart city, so that these can follow the shortest path from source to destination, factoring in congestion or traffic jams.
+On executing the code written in SenScript, an animated view of the smart city is produced, representing the mobility of vehicles, persons and traffic objects. This view enables the development team to check whether there is any probability of congestion or loss of performance. Using this process of visualisation, the algorithms and associated code of SenScript can be improved so that the proposed implementation performs better, with minimum resources.
+
+![Figure 10: Google Map View of a simulation in CupCarbon][14]
+
+![Figure 11: Analysing the energy consumption and research parameters in CupCarbon][15]
+
+In CupCarbon, the simulation scenario can be viewed like a Google Map, and it can be switched to Satellite View with a single click. Using these options, the traffic, roads, towers, vehicles and even the congestion can be visualised in the simulation, for developers to get a sense of the real environment.
+
+[![][16]][17]Simulating a smart city scenario in CupCarbon is essential for analysing the performance of the network to be deployed. For such evaluations of a new smart city project, key parameters like energy, power and security also need to be investigated. CupCarbon integrates options for energy consumption and other parameters, so that researchers and engineers can view the expected effectiveness of the project.
+
+Government agencies as well as corporate giants are getting involved in big smart city projects so that there is better control over the huge infrastructure and resources. Research scholars and practitioners can propose novel and effective algorithms for smart city implementations. The proposed algorithms can be simulated using smart city simulators and the performance parameters can be analysed. 
+ +-------------------------------------------------------------------------------- + +via: https://opensourceforu.com/2019/09/simulating-smart-cities-with-cupcarbon/ + +作者:[Dr Kumar Gaurav][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensourceforu.com/author/dr-gaurav-kumar/ +[b]: https://github.com/lujun9972 +[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Smart-Cities-3d-Simulating-1.jpg?resize=696%2C379&ssl=1 (Smart Cities 3d Simulating) +[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Smart-Cities-3d-Simulating-1.jpg?fit=800%2C436&ssl=1 +[3]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-1-Key-elements-and-components-of-a-smart-city-project.jpg?resize=253%2C243&ssl=1 +[4]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-2-Official-Web-portal-of-the-CupCarbon-simulator.jpg?resize=350%2C174&ssl=1 +[5]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-3-Roads-objects-and-connections-in-the-CupCarbon-simulator.jpg?resize=350%2C193&ssl=1 +[6]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-4-Working-panel-of-the-CupCarbon-simulator.jpg?resize=350%2C130&ssl=1 +[7]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-5-Adding-different-types-of-sensor-nodes-in-CupCarbon.jpg?resize=350%2C240&ssl=1 +[8]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-6-Option-for-the-SenScript-window-in-CupCarbon.jpg?resize=350%2C237&ssl=1 +[9]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Smart-cities-and-advanced-wireless-scenarios-using-IoT.jpg?resize=350%2C259&ssl=1 +[10]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Smart-cities-and-advanced-wireless-scenarios-using-IoT.jpg?ssl=1 +[11]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-7-The-SenScript-Editor-in-CupCarbon-for-programming-of-sensors.jpg?resize=350%2C172&ssl=1 +[12]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-8-Integration-of-markers-and-routes-in-CupCarbon.jpg?resize=350%2C257&ssl=1 +[13]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-9-Executing-SenScript-in-CupCarbon-to-get-an-animated-view-of-the-smart-city.jpg?resize=350%2C227&ssl=1 +[14]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-10-Google-Map-View-of-a-simulation-in-CupCarbon.jpg?resize=350%2C213&ssl=1 +[15]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-11-Analysing-the-energy-consumption-and-research-parameters-in-CupCarbon.jpg?resize=350%2C214&ssl=1 +[16]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Table-1-Free-and-open-source-simulators-for-IoT-integrated-smart-city-implementations.jpg?resize=350%2C181&ssl=1 +[17]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Table-1-Free-and-open-source-simulators-for-IoT-integrated-smart-city-implementations.jpg?ssl=1 diff --git a/sources/talk/20190924 How DevOps professionals can become security champions.md b/sources/talk/20190924 How DevOps professionals can become security champions.md new file mode 100644 index 0000000000..ed1769cf4c --- /dev/null +++ b/sources/talk/20190924 How DevOps professionals can become security champions.md @@ -0,0 +1,112 @@ +[#]: collector: (lujun9972) +[#]: translator: 
(hopefully2333)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How DevOps professionals can become security champions)
+[#]: via: (https://opensource.com/article/19/9/devops-security-champions)
+[#]: author: (Jessica Repka https://opensource.com/users/jrepka)
+
+How DevOps professionals can become security champions
+======
+Breaking down silos and becoming a champion for security will help you,
+your career, and your organization.
+![A lock on the side of a building][1]
+
+Security is a misunderstood element in DevOps. Some see it as outside of DevOps' purview, while others find it important (and overlooked) enough to recommend moving to [DevSecOps][2]. No matter your perspective on where it belongs, it's clear that security affects everyone.
+
+Each year, the [statistics on hacking][3] become more alarming. For example, there's a hacker attack every 39 seconds, which can lead to stolen records, identities, and proprietary projects you're writing for your company. It can take months (and possibly forever) for your security team to discover the who, what, where, or when behind a hack.
+
+What are operations professionals to do about these dire problems? I say it is time for us to become part of the solution by becoming security champions.
+
+### Silos and turf wars
+
+Over my years of working side-by-side with my local IT security (ITSEC) teams, I've noticed a great many things. A big one is that tension is very common between DevOps and security. This tension almost always stems from the security team's efforts to protect against vulnerabilities (e.g., by setting rules or disabling things) that interrupt DevOps' work and hinder their ability to deploy apps quickly.
+
+You've seen it, I've seen it, everyone you meet in the field has at least one story about it. A small set of grudges turns into a burned bridge that takes time to repair—or the groups begin a small turf war, and the resulting silos make achieving DevOps unlikely.
+
+### Get a new perspective
+
+To try to break down these silos and end the turf wars, I talk to at least one person on each security team to learn about the ins and outs of daily security operations in our organization. I started doing this out of general curiosity, but I've continued because it always gives me a valuable new perspective. For example, I've learned that for every deployment that's stopped due to failed security, the ITSEC team is feverishly trying to patch 10 other problems it sees. Their brashness and quickness to react are due to the limited time they have to fix something before it becomes a large problem.
+
+Consider the immense amount of knowledge it takes to find, analyze, and undo what has been done. Or to figure out what the DevOps team is doing—without background information—then replicate and test it. And to do all of this with their usual greatly understaffed security team.
+
+This is the daily life of your security team, and your DevOps team is not seeing it. ITSEC's daily work can mean overtime hours and overwork to make sure that the company, its teams, and the proprietary work its teams are producing are secure.
+
+### Ways to be a security champion
+
+This is where being your own security champion can help. 
This means—for everything you work on—you must take a good, hard look at all the ways someone could log into it and what could be taken from it.
+
+Help your security team help you. Introduce tools into your pipelines to integrate what you know will work with what they know will work. Start with small things, such as reading up on Common Vulnerabilities and Exposures (CVEs) and adding scanning functions to your [CI/CD][4] pipelines (see the pipeline-gate sketch near the end of this article). For everything you build, there is an open source scanning tool, and adding small open source tools (such as the ones below) can go the extra mile in the long run.
+
+**Container scanning tools:**
+
+ * [Anchore Engine][5]
+ * [Clair][6]
+ * [Vuls][7]
+ * [OpenSCAP][8]
+
+
+
+**Code scanning tools:**
+
+ * [OWASP SonarQube][9]
+ * [Find Security Bugs][10]
+ * [Google Hacking Diggity Project][11]
+
+
+
+**Kubernetes security tools:**
+
+ * [Project Calico][12]
+ * [Kube-hunter][13]
+ * [NeuVector][14]
+
+
+
+### Keep your DevOps hat on
+
+Learning about new technology and how to create new things with it is part of the job if you're in a DevOps-related role. Security is no different. Here's my list of ways to keep up to date on the security front while keeping your DevOps hat on.
+
+ * Read one article each week about something related to security in whatever you're working on.
+ * Look at the [CVE][15] website weekly to see what's new (a small script can automate this check; see the sketch near the end of this article).
+ * Try doing a hackathon. Some companies do this once a month; check out the [Beginner Hack 1.0][16] site if yours doesn't and you'd like to learn more.
+ * Try to attend at least one security conference a year with a member of your security team to see things from their side.
+
+
+
+### Be a champion for good
+
+There are several reasons you should become your own security champion. The first and foremost is to further your knowledge and advance your career. The second reason is to help other teams, foster new relationships, and break down the silos that harm your organization. Creating friendships across your organization has multiple benefits, including setting a good example of bridging teams and encouraging people to work together. You will also foster sharing knowledge throughout the organization and provide everyone with a new lease on security and greater internal cooperation.
+
+Overall, being a security champion will lead you to be a champion for good across your organization. 
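+
+To make the pipeline advice above concrete, here is a minimal sketch of such a gate: a Python script that runs a scanner over a container image and fails the build when high-severity findings come back. The scanner command and its JSON report shape are hypothetical placeholders, not any specific tool's real CLI; substitute the actual invocation and output format of whichever tool from the lists above you adopt.
+
+```python
+#!/usr/bin/env python3
+"""Sketch of a CI/CD security gate: scan an image, fail the build on findings.
+
+The scanner command and report schema are hypothetical placeholders; swap in
+the real CLI and JSON format of the tool you actually use.
+"""
+import json
+import subprocess
+import sys
+
+
+def main() -> int:
+    image = sys.argv[1] if len(sys.argv) > 1 else "myapp:latest"
+    # Hypothetical scanner invocation -- replace with your tool's real CLI.
+    result = subprocess.run(
+        ["security-scanner", "--output", "json", image],
+        capture_output=True,
+        text=True,
+    )
+    if result.returncode != 0:
+        print(result.stderr, file=sys.stderr)
+        return result.returncode
+
+    # Assume the report is a JSON list of findings with a "severity" field.
+    findings = json.loads(result.stdout)
+    blocking = [f for f in findings if f.get("severity") in ("HIGH", "CRITICAL")]
+    for finding in blocking:
+        print(f"{finding.get('id', 'unknown')}: {finding.get('severity')}")
+
+    # A non-zero exit code fails the CI job, which blocks the merge.
+    return 1 if blocking else 0
+
+
+if __name__ == "__main__":
+    sys.exit(main())
+```
+
+Run as a required pipeline step, a script like this blocks a merge by default when the scan fails, which gives the security team visibility without slowing routine deployments.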
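+
+The weekly CVE check from the list above can be scripted the same way. This sketch queries the NVD public CVE API for entries published in the last seven days; the endpoint and field names follow NVD's documented REST API, but treat the exact parameters as assumptions to verify against the current documentation.
+
+```python
+#!/usr/bin/env python3
+"""Sketch of a weekly CVE check against the NVD public CVE API."""
+import datetime
+import json
+import urllib.parse
+import urllib.request
+
+API = "https://services.nvd.nist.gov/rest/json/cves/2.0"
+
+end = datetime.datetime.now(datetime.timezone.utc)
+start = end - datetime.timedelta(days=7)
+params = urllib.parse.urlencode({
+    "pubStartDate": start.strftime("%Y-%m-%dT%H:%M:%S.000"),
+    "pubEndDate": end.strftime("%Y-%m-%dT%H:%M:%S.000"),
+})
+
+with urllib.request.urlopen(f"{API}?{params}") as response:
+    data = json.load(response)
+
+# Print each CVE ID with its English description, if one is present.
+for item in data.get("vulnerabilities", []):
+    cve = item.get("cve", {})
+    english = next(
+        (d["value"] for d in cve.get("descriptions", []) if d.get("lang") == "en"),
+        "",
+    )
+    print(cve.get("id", "?"), "-", english.split("\n")[0])
+```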
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/9/devops-security-champions
+
+作者:[Jessica Repka][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/jrepka
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_3reasons.png?itok=k6F3-BqA (A lock on the side of a building)
+[2]: https://opensource.com/article/19/1/what-devsecops
+[3]: https://hostingtribunal.com/blog/hacking-statistics/
+[4]: https://opensource.com/article/18/8/what-cicd
+[5]: https://github.com/anchore/anchore-engine
+[6]: https://github.com/coreos/clair
+[7]: https://vuls.io/
+[8]: https://www.open-scap.org/
+[9]: https://github.com/OWASP/sonarqube
+[10]: https://find-sec-bugs.github.io/
+[11]: https://resources.bishopfox.com/resources/tools/google-hacking-diggity/
+[12]: https://www.projectcalico.org/
+[13]: https://github.com/aquasecurity/kube-hunter
+[14]: https://github.com/neuvector/neuvector-helm
+[15]: https://cve.mitre.org/
+[16]: https://www.hackerearth.com/challenges/hackathon/beginner-hack-10/
diff --git a/sources/talk/20190925 6 Fintech Startups That Are Revolutionizing the Finance Space.md b/sources/talk/20190925 6 Fintech Startups That Are Revolutionizing the Finance Space.md
new file mode 100644
index 0000000000..3dfd921b79
--- /dev/null
+++ b/sources/talk/20190925 6 Fintech Startups That Are Revolutionizing the Finance Space.md
@@ -0,0 +1,86 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (6 Fintech Startups That Are Revolutionizing the Finance Space)
+[#]: via: (https://opensourceforu.com/2019/09/6-fintech-startups-that-are-revolutionizing-the-finance-space/)
+[#]: author: (Andrew Cioffi https://opensourceforu.com/author/andrew-cioffi/)
+
+6 Fintech Startups That Are Revolutionizing the Finance Space
+======
+
+[![][1]][2]
+
+_Financial technology (“FinTech”) is rapidly changing the way people all over the world interact with finances. This includes transactions, trading currencies, borrowing, or even obtaining business capital. Gone are the days when getting access to cash for various uses required money to change many hands or took days. Below are six fintech companies that are making a difference in people’s lives._
+
+**MANGOPAY**
+
+This startup is shaking up payment systems by eliminating the many partners needed to facilitate online payments. The company employs white-label technology to cover all activities required in payments processes, making them simpler.
+
+[![][3]][4]In this way, MANGOPAY also enables new startups to get a leg up in the market by covering the often daunting process of account opening and connecting to financial services. It also simplifies payments for companies with intricate payment processes which traditional payment channels cannot handle. An example is the online farmers market The Food Assembly, which now has a shorter and more convenient supply chain favorable to farmers, customers, and communities. 
+
+**Xapo**
+
+[_Bitcoin_][5] owners can now access the currency more easily and keep it more secure, thanks to Xapo. Since it’s a virtual currency, many investors fear that their accounts might be hacked, and the company eases those fears by providing ‘vaults’ for them to secure their money.
+
+[![][6]][7]These vaults are in the form of offline encrypted servers located around the world that permit access only by biometric identification. On top of that, the servers have round-the-clock video surveillance together with armed personnel. Xapo has also revolutionized payments by introducing a debit card linked to the Xapo Wallet that users can use like a standard debit card.
+
+**Kantox**
+
+Kantox is a multinational company that provides foreign currency exchange and international payment services for small businesses and individual customers. Its platform is designed to help clients gain more control over their currency transactions.
+
+[![][8]][9]Its foreign currency solutions include a free [_forex demo_][10] account for practicing your forex trading skills, hedging tools for managing currency risks, and more. It also provides access to over 130 currencies, including exotics like the Colombian Peso, the South Korean Won and the Djibouti Franc.
+
+**TransferWise**
+
+TransferWise is an online transfer service that allows users to transfer money abroad quickly and affordably. In fact, the platform can save you up to 90% of traditional bank charges. The service is a hit among expatriates and global travelers. TransferWise excels in the international money transfer space in these ways:
+
+[![][11]][12]
+
+ * You can send money overseas at a cost eight times cheaper than with your local bank
+ * There are no hidden costs like a rise in exchange rates – all transaction charges are transparent
+ * International money transfers are finalized within 24 hours, mostly in mere seconds, compared with the three or more days for mainstream banks
+ * Money is sent directly to your recipient via their bank account
+ * Both businesses and individuals can use the platform
+
+**Seedrs**
+
+Seedrs is a UK-based [_equity crowdfunding_][13] platform that makes it easier for startups and established companies to get funding. Unlike before, when it was difficult for aspiring entrepreneurs to kick-start their businesses unless they had rich connections, Seedrs helps upcoming firms to get access to capital.
+
+[![][14]][15]Seedrs has also opened a pathway for people who are interested in investing in the asset class. This is because the platform eliminates all the time and money requirements that were required to invest in companies before. With as little as 20, 500, or 1,000 pounds, anyone can now invest in an inspiring business. This is a testimony to why the platform has been so successful – it has so far raised over £210 million for more than 490 startups.
+
+**Expensify**
+
+This is a fintech startup that enables small businesses to streamline the receipts and expenses payment function. It has partnered with reputable firms in the accounting space to design integrations to accounting packages, time tracking software, and other workflow solutions. With this, the company hopes to make all work functions as seamless and efficient as possible, allowing workers to focus energy on productive tasks.
+
+[![][16]][17]Even if you’re just launching your business and you can’t yet afford a complex digital platform, you can still access Expensify’s freemium version. 
This allows you to keep an audit trail of your expenses without financial stress. Expensify also works with thousands of nonprofits to streamline their socially important work. This enables them to focus on their mission and spend less time on managing expenses. + +-------------------------------------------------------------------------------- + +via: https://opensourceforu.com/2019/09/6-fintech-startups-that-are-revolutionizing-the-finance-space/ + +作者:[Andrew Cioffi][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensourceforu.com/author/andrew-cioffi/ +[b]: https://github.com/lujun9972 +[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/1.jpg?resize=696%2C464&ssl=1 (1) +[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/1.jpg?fit=960%2C640&ssl=1 +[3]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/2-3.png?resize=350%2C233&ssl=1 +[4]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/2-3.png?ssl=1 +[5]: https://money.cnn.com/infographic/technology/what-is-bitcoin/index.html +[6]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/3-1.png?resize=350%2C164&ssl=1 +[7]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/3-1.png?ssl=1 +[8]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/4-2.png?resize=350%2C230&ssl=1 +[9]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/4-2.png?ssl=1 +[10]: https://admiralmarkets.com/education/articles/forex-basics/forex-demo-account-benefits +[11]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/5-1.png?resize=350%2C177&ssl=1 +[12]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/5-1.png?ssl=1 +[13]: https://www.forbes.com/sites/howardmarks/2018/12/19/what-is-equity-crowdfunding/#5f6e958f3b5d +[14]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/6.png?resize=236%2C240&ssl=1 +[15]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/6.png?ssl=1 +[16]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/7.png?resize=220%2C391&ssl=1 +[17]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/7.png?ssl=1 diff --git a/sources/talk/20190925 How a simpler mmWave architecture can connect IoT.md b/sources/talk/20190925 How a simpler mmWave architecture can connect IoT.md new file mode 100644 index 0000000000..2712e000ee --- /dev/null +++ b/sources/talk/20190925 How a simpler mmWave architecture can connect IoT.md @@ -0,0 +1,74 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How a simpler mmWave architecture can connect IoT) +[#]: via: (https://www.networkworld.com/article/3440498/how-a-simpler-mmwave-architecture-can-connect-iot.html) +[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/) + +How a simpler mmWave architecture can connect IoT +====== +Upcoming 5G millimeter wave frequencies are being bandied as a good, solid network for IoT. But some say they may be too elaborate. A stripped-down version of millimeter wave technology may be the solution. +Cofotoisme / Getty Images + +Current wireless technologies, such as Wi-Fi, won’t provide enough support for the billions of internet of things (IoT) sensors and networks that are expected to come on stream in the next few years, say researchers. 
More speed, efficiency and bandwidth will be needed. Plus, the equipment must cost significantly less than existing gear, including upcoming 5G equipment.
+
+To address the issue, scientists at the University of Waterloo are developing a stripped-down version of millimeter wave technology.
+
+“A growing strain will be placed on requirements of wireless networks,” the researchers say in an [article announcing a new low-power, low-cost 5G network technology][1] that it calls mmX. They say the technology is specifically geared towards IoT.
+
+“Millimeter wave offers multi-gigahertz of unlicensed bandwidth. More than 200 times that allocated to today's Wi-Fi and cellular networks,” the article says. That’s “in comparison to Wi-Fi and Bluetooth, which are slow for many IoT applications.”
+
+**[ Also see: [What is edge computing?][2] and [How edge networking and IoT will reshape data centers][3] ]**
+
+However, upcoming, ultra-fast, ultra-high capacity 5G networks, which will take advantage of millimeter wave, use considerable amounts of electrical energy and computing power, the researchers say. That means they aren't good for the low-cost, low-power IoT devices of the kind we’re going to see in many use cases. New devices must be low-power because they need to stay up longer, preferably indefinitely. Therefore, the idea of just adding power-intensive millimeter radios to the networks defeats the purpose to a certain extent. What is needed is a more stripped-down millimeter network.
+
+“We address the key challenges that prevent existing mmWave technology from being used for such IoT devices,” the researchers say in their [SIGCOMM ’19 paper][4].
+
+The problem with current wireless technologies isn’t so much that there’s anything fundamentally wrong with them, but that new IoT devices have triggered changes in requirements from incumbent radios, such as in today’s smartphones, and also that new devices function with a low-rate modulation scheme—rates much lower than channel capacity, in other words. Both are inefficient uses of spectrum.
+
+### Beam searching prevents mmWave from being used for IoT
+
+The researchers say they have identified high power consumption, expensive hardware, and beam searching as the key culprits that will prevent mmWave from being adopted for IoT implementation.
+
+Beam searching, for example, is a limitation of regular mmWave. It’s where power is focused into a narrow beam to counter signal path loss, but it is computationally complex, expensive, and uses a lot of energy. That all adds to overhead. The researchers say they can eliminate beam searching through a form of over-the-air modulation where the signal isn’t modulated before transmission but during the transmission. That “eliminates the need of beam searching in mmWave radios,” they say. They also reduce the amount of feedback data needed from access points, which also helps.
+
+Another special feature mmX offers is that its hardware is a simple Raspberry Pi add-on board, allowing the “networking community” to easily experiment. Twenty-five million Raspberry Pi development computers have reportedly been sold as of earlier this year. 
+
+Energy consumption is “even lower than existing Wi-Fi modules,” the researchers claim. 
Their mmX is “a far more efficient and cost-effective architecture for imminent IoT applications,” they say. + +**More about edge networking:** + + * [How edge networking and IoT will reshape data centers][3] + * [Edge computing best practices][6] + * [How edge computing can help secure the IoT][7] + + + +Join the Network World communities on [Facebook][8] and [LinkedIn][9] to comment on topics that are top of mind. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3440498/how-a-simpler-mmwave-architecture-can-connect-iot.html + +作者:[Patrick Nelson][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Patrick-Nelson/ +[b]: https://github.com/lujun9972 +[1]: https://uwaterloo.ca/news/news/researchers-develop-low-power-low-cost-network-5g +[2]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html +[3]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html +[4]: https://dl.acm.org/citation.cfm?id=3342068 +[5]: https://www.networkworld.com/blog/itaas-and-the-corporate-storage-technology/ +[6]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html +[7]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html +[8]: https://www.facebook.com/NetworkWorld/ +[9]: https://www.linkedin.com/company/network-world diff --git a/sources/talk/20190925 Most enterprise networks can-t handle big data loads.md b/sources/talk/20190925 Most enterprise networks can-t handle big data loads.md new file mode 100644 index 0000000000..191900c10a --- /dev/null +++ b/sources/talk/20190925 Most enterprise networks can-t handle big data loads.md @@ -0,0 +1,66 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Most enterprise networks can't handle big data loads) +[#]: via: (https://www.networkworld.com/article/3440519/most-enterprise-networks-cant-handle-big-data-loads.html) +[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/) + +Most enterprise networks can't handle big data loads +====== +As more data moves through the network, efforts to keep up are lagging due to leadership and technology issues. +Metamorworks / Getty Images + +Another week, another survey that finds IT cannot keep up with the ever-expanding data overload. This time the problem surrounds network bandwidth and overall performance. + +[A survey of 300 IT professionals][1] conducted by management consultant firm Accenture found the majority feel their enterprise networks are not up to the task of handling big data and internet of things (IoT) deployments. Only 43% of those companies polled said their networks are ready to support the cloud, IoT, and other digital technologies. + +**[ Learn more about SDN: Find out [where SDN is going][2] and learn the [difference between SDN and NFV][3]. | Get regularly scheduled insights: [Sign up for Network World newsletters][4]. ]** + +A key reason (58%) is a “misalignment between IT and business needs” that is slowing those rollouts. 
That is an unusual finding, since 85% of respondents also reported that their networks were completely or mostly ready to support the business’ digital initiatives. So which is it?
+
+The second and third most commonly cited barriers were “inherent complexities between business requirements and operational needs” and “demands for bandwidth, performance, etc. outpacing the ability to deliver” at 45% each.
+
+Network bottlenecks continue to grow as the sheer amount of data being pumped over the wires continues to increase thanks to analytics and other big data technologies. The survey found that bandwidth demands were not being met and that current network performance continues to fall short.
+
+Other reasons cited were lack of networking skills, device sprawl, and aging equipment.
+
+### One solution to network performance woes: SDN
+
+Accenture found that most firms said [software-defined networks (SDN)][5] were the solution for bandwidth and performance challenges, with 77% of those surveyed reporting they were in the process of deploying SDN or had completed the deployment. It qualified that finding, noting that while SDN may be in place in parts of the organization, it is not always rolled out uniformly enterprise-wide.
+
+Now, while it seems no one ever has enough budget for all of their IT ambitions, 31% of those surveyed describe funding network improvements as “easy” and within the network infrastructure team’s control, with CIOs/CTOs being much more likely to report the funding process as “easy” (40%), compared to their direct reports (13%) or directors and vice presidents of infrastructure/network (19%). 
+
+Saying, "Legacy networks alone cannot support the innovation and performance required in the digital age," the report calls for embracing new technologies, without naming SDN specifically. It also called for greater collaboration between the C-suite and their direct reports because it was clear there was a disconnect between how the two sides viewed things.
+
+"We believe a new network paradigm is needed to ensure networks meet current and future business needs. However, although there are signs of progress, the pace of change is slow. Companies must undertake significant work before they achieve a unified and standardized enterprise capability that will offer the bandwidth, performance and security necessary to support business needs today—and tomorrow," the report concluded.
+
+**[ Now see: [How network pros acquire skills for SDN, programmable networks][7] ]**
+
+Join the Network World communities on [Facebook][8] and [LinkedIn][9] to comment on topics that are top of mind. 
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3440519/most-enterprise-networks-cant-handle-big-data-loads.html + +作者:[Andy Patrizio][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Andy-Patrizio/ +[b]: https://github.com/lujun9972 +[1]: https://www.accenture.com/_acnmedia/pdf-107/accenture-network-readiness-survey.pdf#zoom=50 +[2]: https://www.networkworld.com/article/3209131/lan-wan/what-sdn-is-and-where-its-going.html +[3]: https://www.networkworld.com/article/3206709/lan-wan/what-s-the-difference-between-sdn-and-nfv.html +[4]: https://www.networkworld.com/newsletters/signup.html +[5]: https://www.networkworld.com/article/3209131/what-sdn-is-and-where-its-going.html +[6]: https://www.networkworld.com/blog/itaas-and-the-corporate-storage-technology/ +[7]: https://www.networkworld.com/article/3405522/how-network-pros-acquire-skills-for-sdn-programmable-networks.html +[8]: https://www.facebook.com/NetworkWorld/ +[9]: https://www.linkedin.com/company/network-world diff --git a/sources/talk/20190925 The 10 most powerful companies in IoT.md b/sources/talk/20190925 The 10 most powerful companies in IoT.md new file mode 100644 index 0000000000..b176b87fc8 --- /dev/null +++ b/sources/talk/20190925 The 10 most powerful companies in IoT.md @@ -0,0 +1,79 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (The 10 most powerful companies in IoT) +[#]: via: (https://www.networkworld.com/article/3440857/the-10-most-powerful-companies-in-iot.html) +[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/) + +The 10 most powerful companies in IoT +====== + +The [Internet of Things][1] is still very much a growth industry. As a technology area whose development is dictated by the needs of the operational side of any given business, it’s a new challenge for traditional IT companies – and one that gives them an unusual array of competitors. But there are always going to be a few companies that set the tone, and we’ve collected what we think are the 10 most powerful players in the IoT sector right now. + +A word on methodology. We began by looking at about 25 prominent corporate names in IoT, comparing them based on how innovative their technology is, their market share and solution depth and breadth. + +**More on IoT:** + + * [What is edge computing and how it’s changing the network][2] + * [10 Hot IoT startups to watch][3] + * [The 6 ways to make money in IoT][4] + * [What is digital twin technology, and why it's important to IoT][5] + * [Blockchain, service-centric networking key to IoT success][6] + * [Getting grounded in IoT networking and security][7] + * [Building IoT-ready networks must become a priority][8] + * [What is the Industrial IoT? [And why the stakes are so high]][9] + + + +What we mean by the latter two terms is fairly straightforward. Depth refers to how much of the stack in a given IoT implementation that a company’s products are designed to handle, while breadth refers to how many different verticals to which those products are relevant. + +Market share can be difficult to measure, so we offer those estimates based mostly on extensive conversations with and data provided by analysts. 
Finally, where innovation is concerned, we tried to get a sense of the degree to which a given company’s technology is unique or at least much-imitated by its competitors.
+
+Here are the 10 most powerful, in alphabetical order.
+
+### Accenture
+
+_Innovation:_ Accenture isn’t known for its in-house technical wizardry; the secret sauce here is the company’s expertise at bringing in hardware and software from its partners, including Microsoft, Amazon and Cisco, which in itself is quite an achievement. The company refers to it as “connected platforms as a service,” or CPaaS.
+
+_Market Share:_ Directly quantifying IoT market share is a difficult exercise, but Accenture’s one of the best-known integrators on the market, bringing together platform providers, hardware manufacturers and makers of specialist solutions.
+
+_Depth of solution:_ Per Gartner’s latest IIoT Magic Quadrant report, the combination of open-source IP and Accenture’s own, usually acquired, tech makes for an “extensible and configurable” IoT platform. Ironically, it can be something of a walled garden. Once you’re working with Accenture, you’re mostly locked into working with its partners, Gartner notes, but that partner ecosystem is still quite extensive.
+
+_Breadth of solution:_ Accenture’s made a successful business out of helping enterprises in a wide variety of industries get their technology to work for them, so they’ve got a broad base of vertical-specific knowledge to call on for IoT, which is a critically important thing to have.
+
+### Amazon Web Services
+
+_Innovation:_ The fully integrated approach to IoT analytics – which lets AWS bring its formidable array of data analysis and machine learning tools together with several purpose-built frameworks for IoT insights and control – is now par for the course among big public cloud providers offering themselves up as a general purpose IoT back-end. But AWS was the first one to pull all those elements together in a meaningful way and has set the standard. 
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3440857/the-10-most-powerful-companies-in-iot.html
+
+作者:[Jon Gold][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Jon-Gold/
+[b]: https://github.com/lujun9972
+[1]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
+[2]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
+[3]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
+[4]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
+[5]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
+[6]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
+[7]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
+[8]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
+[9]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
diff --git a/sources/talk/20190925 The Great Open Source Divide- ICE, Hippocratic License and the Controversy.md b/sources/talk/20190925 The Great Open Source Divide- ICE, Hippocratic License and the Controversy.md
new file mode 100644
index 0000000000..bbb1fd64de
--- /dev/null
+++ b/sources/talk/20190925 The Great Open Source Divide- ICE, Hippocratic License and the Controversy.md
@@ -0,0 +1,143 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (The Great Open Source Divide: ICE, Hippocratic License and the Controversy)
+[#]: via: (https://itsfoss.com/hippocratic-license/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+
+The Great Open Source Divide: ICE, Hippocratic License and the Controversy
+======
+
+_**Coraline Ada Ehmke has created the “Hippocratic License”, which aims to “add ethics to open source projects”. But this seems to be just the beginning of a controversy, as the “Hippocratic License” may not be open source at all.**_
+
+Coraline Ada Ehmke, better known for her [Contributor Covenant][1], has modified the MIT open source license into the Hippocratic License, which adds a couple of conditions to the existing MIT license. Before you learn what it is, let me give you the context on why it’s been created in the first place.
+
+### No Tech for ICE
+
+![No Tech For ICE | Image Credit Science for All][2]
+
+The Immigration and Customs Enforcement agency of the US government, [ICE][3], has been condemned by human rights groups and activists for the inhumane practice of separating children from their parents at the US-Mexico border under the new strict immigration policy. 
+
+Some techies have been vocal against the actions of ICE, and they don’t want ICE to use the tech projects they work on, as doing so helps ICE in one way or another.
+
+The “[No Tech for ICE][4]” movement has been going on for some time, but it got highlighted once again this week when an engineer named [Seth Vargo took down his open source project after finding ICE was using it][5] through Chef.
+
+The project was called [Chef Sugar][6], a Ruby library for simplifying work with [Chef][7], a platform for configuration management. ICE is one of Chef's clients. The project withdrawal momentarily impacted Chef and its clients. Chef swiftly fixed the problem by uploading the Chef Sugar project to its own GitHub repository.
+
+Despite the trouble it caused for a number of companies using Chef worldwide, Vargo made a point. The pressure tactic worked, and after [initial resistance][8], Chef caved in and [agreed to not renew its contract with ICE][9].
+
+Now, Chef Sugar is an open source project, and its developer cannot stop people from forking it and continuing to use it. And that’s where [Coraline Ada Ehmke][10] came up with a new licensing model called the Hippocratic License.
+
+### What is Hippocratic License?
+
+![][11]
+
+To enable more developers to forbid unethical organizations like ICE from using their open source projects, Coraline Ada Ehmke introduced a new license called the “Hippocratic License”.
+
+The term Hippocratic relates to the ancient Greek physician [Hippocrates][12]. The [Hippocratic oath][13] is an ethical oath (historically taken by physicians), and one of its crucial parts is “I will abstain from all intentional wrong-doing and harm”. This part of the oath is known as “Primum non nocere”, or “First do no harm”.
+
+The entire terminology is significant. The license is called the Hippocratic License, it is hosted on a domain called [firstdonoharm.dev][14], and the idea is to enable developers to take no part in ‘intentional wrong-doing’.
+
+The [Hippocratic License][14] is based on the popular [MIT open source license][15]. It adds this additional and crucial condition:
+
+> The software may not be used by individuals, corporations, governments, or other groups for systems or activities that actively and knowingly endanger, harm, or otherwise threaten the physical, mental, economic, or general well-being of underprivileged individuals or groups.
+
+### Is Hippocratic license really an open source license?
+
+No, it is not. That’s what the [Open Source Initiative][16] (OSI) says. OSI is the community-recognized body for reviewing and approving licenses as Open Source Definition conformant.
+
+> The intro to the Hippocratic Licence might lead some to believe
+> the license is an Open Source Software licence, and software distributed under the Hippocratic Licence is Open Source Software.
+>
+> As neither is true, we ask you to please modify the language to remove confusion.
+>
+> — OpenSourceInitiative (@OpenSourceOrg) [September 23, 2019][17]
+
+Coraline first [thanked][18] OSI for pointing it out and then went on to attack it as an “open source problem”.
+
+> This is the problem: the current structure of open source specifically prohibits us from protecting our labor from use by organizations like ICE.
+>
+> That’s not a license problem. That’s an Open Source™ problem. 
+>
+> — Coraline Ada Ehmke (@CoralineAda) [September 23, 2019][19]
+
+Coraline clearly doesn’t accept that OSI (Open Source Initiative) and [FSF][20] (Free Software Foundation) have the authority on the matter of defining open source and free software.
+
+> OSI and FSF are not the real arbiters of what is Open Source and what is Free Software.
+>
+> We are.
+>
+> — Coraline Ada Ehmke (@CoralineAda) [September 22, 2019][21]
+
+So if OSI and FSF, the organizations created for the sole purpose of defining open source and free software, are not the authority on this subject, then who is? The “we” in “we are” of Coraline’s statement is ambiguous. Does ‘we’ represent the people who agree with Coraline’s view, or does it mean the entire open source community? If it’s the latter, then Coraline doesn’t represent or speak for every person in the open source community.
+
+### Does it solve the problem or does it create more problems? Can open source be neutral?
+
+> Developers are (finally) becoming more aware of the impact that their work has on the world, and in particular on underprivileged people.
+>
+> It’s late to come to that realization, but not TOO LATE to do something about it.
+>
+> The lesson here is that TECH IS NOT NEUTRAL.
+>
+> — Coraline Ada Ehmke (@CoralineAda) [September 23, 2019][22]
+
+Everything looks good from an idealistic point of view at first glance. It seems like this new license will solve the problem of evil people using open source projects.
+
+But I see a problem here, and that problem is the perception of ‘evil’. What you consider evil depends on your point of view.
+
+A number of “No Tech for ICE” supporting techies are also supporters of ANTIFA. [ANTIFA has been indulging in physical violence from time to time][23]. What if a bunch of ‘cis white men’ who find [far-left organizations like ANTIFA][24] evil stop them from using their open source projects? What if [Richard Stallman comes back from his forced retirement][25] and starts selecting people who can use GNU projects based on whether or not they agree with his views?
+
+The license condition also says “knowingly endanger, harm, or otherwise threaten the physical, mental, economic, or general well-being of underprivileged individuals or groups”.
+
+So the entire stuff is only applicable to “underprivileged individuals or groups”, not others? So the others don’t get the same rights anymore? This should not come as a surprise, because Coraline is the same person who took extreme measures to ‘harm’ the ‘economic well-being’ of a developer ([Coraline disagreed with his views][26]) by doing everything in her capacity to get him fired from his job.
+
+Until these concerns are addressed, the Hippocratic License will unfortunately remain a ‘hypocrite license’.
+
+Where will this end? How many open source projects will be forked between sparring groups of different ideologies? Why should the rest of the world suffer from American domestic politics? Can we not leave open source undivided?
+
+Your views are welcome. Please note that abusive comments won’t be published.
+
+If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][27]. 
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/hippocratic-license/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lujun9972 +[1]: https://www.contributor-covenant.org/ +[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/no-tech-for-ice.jpg?resize=800%2C340&ssl=1 +[3]: https://en.wikipedia.org/wiki/U.S._Immigration_and_Customs_Enforcement +[4]: https://notechforice.com/ +[5]: https://www.zdnet.com/article/developer-takes-down-ruby-library-after-he-finds-out-ice-was-using-it/ +[6]: https://github.com/sethvargo/chef-sugar +[7]: https://www.chef.io/ +[8]: https://blog.chef.io/2019/09/19/chefs-position-on-customer-engagement-in-the-public-and-private-sectors/ +[9]: https://www.vice.com/en_us/article/qvg3q5/chef-not-renewing-ice-immigration-customs-enforcement-contract-after-code-deleting-protest +[10]: https://where.coraline.codes/ +[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/hippocratic-license.png?ssl=1 +[12]: https://en.wikipedia.org/wiki/Hippocrates +[13]: https://en.wikipedia.org/wiki/Hippocratic_Oath +[14]: https://firstdonoharm.dev/ +[15]: https://opensource.org/licenses/MIT +[16]: https://opensource.org/ +[17]: https://twitter.com/OpenSourceOrg/status/1176229398929977344?ref_src=twsrc%5Etfw +[18]: https://twitter.com/CoralineAda/status/1176246765676302336 +[19]: https://twitter.com/CoralineAda/status/1176262778459496454?ref_src=twsrc%5Etfw +[20]: https://www.fsf.org/ +[21]: https://twitter.com/CoralineAda/status/1175878569169432582?ref_src=twsrc%5Etfw +[22]: https://twitter.com/CoralineAda/status/1176207120133447680?ref_src=twsrc%5Etfw +[23]: https://www.aol.com/article/news/2017/05/04/what-is-antifa-controversial-far-left-group-defends-use-of-violence/22067671/?guccounter=1&guce_referrer=aHR0cHM6Ly9lbi53aWtpcGVkaWEub3JnLw&guce_referrer_sig=AQAAAHYUcIrnC8zD4UX-W4N2Vshf-QVSVDTwNXlTNmy4gbUJUb9smDm7W9Bf1IelnBGz5x0QAdI-O3Zhm9obQjZcORvHjvp3J8tUgEbdlpKNef-jk1rTH8BTZOP7YJule2n7wbIc4wDHPMFjrZUsMx-kypQYVCpkjtEDltAHHo-73ZD_ +[24]: https://www.bbc.com/news/world-us-canada-40930831 +[25]: https://itsfoss.com/richard-stallman-controversy/ +[26]: https://itsfoss.com/linux-code-of-conduct/ +[27]: https://reddit.com/r/linuxusersgroup diff --git a/sources/talk/20190926 DeviceHive- The Scalable Open Source M2M Development Platform.md b/sources/talk/20190926 DeviceHive- The Scalable Open Source M2M Development Platform.md new file mode 100644 index 0000000000..f37750c13d --- /dev/null +++ b/sources/talk/20190926 DeviceHive- The Scalable Open Source M2M Development Platform.md @@ -0,0 +1,114 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (DeviceHive: The Scalable Open Source M2M Development Platform) +[#]: via: (https://opensourceforu.com/2019/09/devicehive-the-scalable-open-source-m2m-development-platform/) +[#]: author: (Dr Anand Nayyar https://opensourceforu.com/author/anand-nayyar/) + +DeviceHive: The Scalable Open Source M2M Development Platform +====== + +[![][1]][2] + +_DeviceHive provides powerful instruments for smart devices to communicate and manage services. It incorporates three critical technologies that affect mobile developers and users— the cloud, mobile and embedded systems. 
It consists of a communication layer, control software and multi-platform libraries and clients to bootstrap the development of remote sensing, remote control, monitoring and automation, smart energy, etc._
+
+Today, people look for easy ways to get things done, and the best example of this is automation. Machine-to-machine (M2M) communication aims to connect everyday objects and allows them (these non-human content providers) to frequently feed the Internet with data in various formats. M2M communication is the latest trend in the evolution of industry, combining technology with data communication between devices or machines.
+
+M2M technology was first implemented in the manufacturing and industrial sectors, where other technologies like SCADA and remote monitoring helped to remotely manage and control data from equipment. M2M communication is all about direct inter-device communication, through which a robot or machine controls other machines. It can be used to monitor the condition of critical public infrastructure, such as water treatment facilities or bridges, more effectively and with less human intervention.
+
+Making a machine-to-machine communication system work is a step-by-step procedure. The three main elements in this process are: sensors (that acquire data from the operational environment and transmit it wirelessly), peer-to-peer wireless networks, and Internet-enabled PCs. The most common types of M2M communication are listed below.
+
+ * _**Backend-to-backend:**_ This is all about transmitting device logs over the Internet to the cloud provider. It also works for schedulers, daemons and continuous processes.
+ * _**IoT devices:**_ These small connected units aggregate data from small, autonomous, specialised devices at the server.
+ * _**CLI clients:**_ This is the creation of CLI apps that have the necessary rights to perform the required actions, but which are only available on certain computers.
+
+The following points highlight the architecture and components of M2M communication.
+
+ * _**M2M devices:**_ These are devices that are capable of replying to requests for the data contained within them, or of transmitting data in an autonomous manner. Examples are sensors, WPAN technologies like ZigBee or Bluetooth, LoWPAN, etc.
+ * _**M2M area network (device domain):**_ This enables connectivity between M2M devices and M2M gateways. An example is a personal area network.
+ * _**M2M gateway:**_ This utilises M2M capabilities to ensure M2M devices are interoperable and interconnected to the communications network. Gateways and routers are the endpoints of the operator’s network in scenarios where sensors and M2M devices connect to the network.
+ * _**M2M communication networks:**_ These comprise the communication between M2M gateways and M2M applications. Examples are xDSL, LTE, WiMAX and WLAN.
+ * _**M2M applications:**_ All M2M applications are based on assets provided by the operator. Examples are IoT based smart homes, e-health, m-health, telemedicine and the Internet of Medical Things, vending machines, smart parking systems, autonomous store payments and wireless payment systems, digital control systems in factories, smart IIoT, industrial monitoring, etc.
+
+There are various M2M open source development platforms. In this article, the primary focus is on DeviceHive, an open source M2M development platform.
+
+![Figure 1: DeviceHive microservices architecture][3]
+
+**Introducing DeviceHive**
+DeviceHive was created and launched by DataArt, a boutique software development and outsourcing company in New York City, as an open source M2M communications framework with which developers can design M2M projects. It provides powerful instruments for smart devices to communicate and manage services. It incorporates three critical technologies that affect mobile developers and users: the cloud, mobile and embedded systems. It consists of a communications layer, control software and multi-platform libraries and clients to bootstrap the development of remote sensing, remote control, monitoring and automation, smart energy, etc.
+
+DeviceHive provides a strong foundation and building support to create or customise any IoT/M2M solution, bridging the gap between embedded development, cloud platforms, Big Data and client applications. It is a scalable, hardware and cloud agnostic, microservice-based platform with device-management APIs in varied protocols, which allows end users to set up and monitor device connectivity, and perform data analytics.
+
+**Features**
+Listed below are the features of DeviceHive.
+
+ * **Deployment:** DeviceHive facilitates innumerable deployment options and is suitable for every organisation, whether a startup or a big enterprise. It includes Docker Compose and Kubernetes deployment options to facilitate public, private or hybrid clouds. The various DeviceHive services are started using Docker Compose: the DeviceHive frontend service, DeviceHive backend service, DeviceHive Auth service, DeviceHive Admin Console, DeviceHive WebSocket Proxy, DeviceHive Nginx Proxy, Kafka, PostgreSQL and Hazelcast IMDG.
+ * **Scalability:** DeviceHive follows outstanding software design practices, like a containerised, service oriented architecture managed and orchestrated by Kubernetes, which brings scalability and availability in seconds.
+ * **Connectivity:** It supports connectivity with any device via the REST API, WebSockets or MQTT. It provides client libraries written in varied languages, including for both Android and iOS. It even supports embedded devices like the ESP8266.
+ * **Seamless integration:** DeviceHive supports seamless integration with voice-assisted services like Google, Siri and Alexa by enabling users to run customised JavaScript code.
+ * **Smart analytics:** It supports smart analytics using ElasticSearch, Apache Spark, Cassandra and Kafka for real-time processing. It also facilitates machine learning support.
+ * **Open source:** It comes under the Apache 2.0 licence for free use, and end users are supported by DataArt’s IoT professionals.
+
+**Protocols, client libraries and devices supported by DeviceHive**
+**Protocols:** DeviceHive supports the REST, WebSocket and MQTT protocols. In addition, for all RESTful services, it provides the Swagger API tool to test the installation and other capabilities.
+
+**Device libraries:** DeviceHive supports numerous device libraries: the .NET framework, .NET Micro Framework, C++, Python and C (microcontrollers).
+
+**Client libraries:** These include the .NET framework, iOS and the Web/JavaScript.
+**Device support:** DeviceHive supports any Python, Node.js or Java based Linux board via the DeviceHive client library. It also supports the ESP8266 chip with a simple API to handle all types of sensors for things like temperature (DS18B20, LM75A/LM75B/LM75C, DHT11 and DHT22) and pressure (BMP180, BMP280, etc).
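+
+To make the connectivity options above concrete, here is a minimal sketch of a device talking to a DeviceHive instance over its REST API, using Python and the `requests` library. The base URL, JWT token, network ID and device identifier are assumptions made for illustration (a local Docker Compose deployment is assumed); consult the official API reference for the authoritative endpoints. In a real deployment, the token would be obtained from the Auth service described below.
+
+```python
+# Hedged sketch: register a device and post a sensor notification over the
+# DeviceHive REST API. The URL, token and IDs are illustrative assumptions.
+import requests
+
+BASE_URL = "http://localhost/api/rest"              # assumed local deployment
+HEADERS = {"Authorization": "Bearer <device-jwt>"}  # assumed JWT access token
+DEVICE_ID = "kitchen-sensor-1"                      # hypothetical device ID
+
+# Register (or update) the device under an existing network.
+requests.put(f"{BASE_URL}/device/{DEVICE_ID}", headers=HEADERS,
+             json={"name": "Kitchen sensor", "networkId": 1})
+
+# Publish a temperature notification as that device.
+resp = requests.post(f"{BASE_URL}/device/{DEVICE_ID}/notification",
+                     headers=HEADERS,
+                     json={"notification": "temperature",
+                           "parameters": {"celsius": 23.4}})
+print(resp.status_code)
+```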
+
+DeviceHive is a microservice-based system with high scalability and availability. Figure 1 highlights the microservices architecture of DeviceHive.
+
+**Components of DeviceHive**
+The following are the components of DeviceHive.
+
+ * **PostgreSQL:** This is the backend database that stores all the data regarding devices, networks, users, device types and configuration settings.
+ * **Hazelcast IMDG:** This is a clustered, in-memory data grid that uses sharding for data distribution and supports monitoring. All notifications are saved to distributed cache storage for speedy access, and these cached entries are removed after two minutes.
+ * **Message Bus (Kafka):** This supports communication between services and load balancing, as Kafka is a fast, distributed and fault-tolerant messaging system. In DeviceHive, the WebSocket Kafka proxy is used; it is written in Node.js because of its flexibility in messaging.
+ * **DeviceHive frontend service:** This supports the RESTful and WebSocket APIs, performing all sorts of primary checks, sending requests to the backend services and delivering responses in an asynchronous manner.
+ * **DeviceHive backend service:** This stores data in Hazelcast, manages subscriptions, and retrieves data requested by other services from Hazelcast or from the database.
+ * **DeviceHive Auth service:** This contains information regarding the access control of users, the devices connected, network types and device types. It provides the RESTful API for generating, validating and refreshing tokens.
+ * **DeviceHive plugin service:** Plugin support helps users to register devices and define network types with the required JWT tokens. Plugins can be created in Node.js, Python and Java.
+
+**DeviceHive API**
+The DeviceHive API acts as the central component of the framework, facilitating communication and interaction with the various components. The API is responsible for providing access to information regarding all the components registered in the system, in order to exchange messages in real-time scenarios.
+The DeviceHive API has three types of consumers:
+
+ * Client
+ * Administrator
+ * Device
+
+_**Client:**_ This is an application to control and administer devices. It can be an interface or software to manage the entire network.
+
+_**Administrator:**_ This controls the whole environment with full access to all components. It can create and manage API users, device networks, and all notifications and commands.
+
+_**Device:**_ This is an individual unit with a unique identifier, a name and other meta-information used to communicate with the API. It takes commands from other components and executes them in an efficient manner.
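+
+As a rough illustration of the client and device roles just described, the sketch below shows a client posting a command and a device long-polling for it over the REST API. As before, the base URL, tokens, device ID, command name and parameters are assumptions made for illustration, not values taken from this article; the WebSocket and MQTT protocols mentioned earlier could serve the same flow with lower latency.
+
+```python
+# Hedged sketch of the client/device command flow. All concrete values
+# (URL, tokens, IDs, command names) are illustrative assumptions.
+import requests
+
+BASE = "http://localhost/api/rest"                  # assumed deployment URL
+CLIENT = {"Authorization": "Bearer <client-jwt>"}   # assumed client token
+DEVICE = {"Authorization": "Bearer <device-jwt>"}   # assumed device token
+DEV_ID = "kitchen-sensor-1"                         # hypothetical device
+
+# Client role: send a command to the device.
+requests.post(f"{BASE}/device/{DEV_ID}/command", headers=CLIENT,
+              json={"command": "led/on", "parameters": {"pin": 2}})
+
+# Device role: long-poll for new commands, execute, and report the result.
+commands = requests.get(f"{BASE}/device/{DEV_ID}/command/poll",
+                        headers=DEVICE, params={"waitTimeout": 30}).json()
+for cmd in commands:
+    # ... perform the requested action on the hardware here ...
+    requests.put(f"{BASE}/device/{DEV_ID}/command/{cmd['id']}",
+                 headers=DEVICE, json={"status": "Done"})
+```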
+ +-------------------------------------------------------------------------------- + +via: https://opensourceforu.com/2019/09/devicehive-the-scalable-open-source-m2m-development-platform/ + +作者:[Dr Anand Nayyar][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensourceforu.com/author/anand-nayyar/ +[b]: https://github.com/lujun9972 +[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/DeviceHive-IoT-connectivity-Illustration.jpg?resize=696%2C412&ssl=1 (DeviceHive IoT connectivity Illustration) +[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/DeviceHive-IoT-connectivity-Illustration.jpg?fit=800%2C474&ssl=1 +[3]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-1-DeviceHive-microservices-architecture.jpg?resize=350%2C180&ssl=1 diff --git a/sources/talk/20190926 How to contribute to GitLab.md b/sources/talk/20190926 How to contribute to GitLab.md new file mode 100644 index 0000000000..06425e979f --- /dev/null +++ b/sources/talk/20190926 How to contribute to GitLab.md @@ -0,0 +1,111 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to contribute to GitLab) +[#]: via: (https://opensource.com/article/19/9/how-contribute-gitlab) +[#]: author: (Ray Paik https://opensource.com/users/rpaikhttps://opensource.com/users/barkerd427) + +How to contribute to GitLab +====== +Help the community by contributing to code, documentation, translations, +user experience design, and more. +![Woman programming][1] + +I think many people are familiar with GitLab—the company or the software. What many may not realize is that GitLab is also an open source community that started with this [first commit][2] from our co-founder [Dmitriy Zaporozhet][3] in 2011. As a matter of fact, we have [more than 2,000 contributors][4] from the wider community who have contributed to GitLab. + +The wider community contributions span code, documentation, translations, user experience design, etc. If you are interested in open source and in contributing to a complete DevOps platform, I'd like you to consider joining the GitLab community. + +You can find things that you can start contributing to by looking at issues with the "[Accepting merge requests" label sorted by weight][5]. Low-weight issues will be easier to accomplish. If you find an issue that you're interested in working on, be sure to add a comment on the issue saying that you'd like to work on this, and verify that no one is already working on it. If you cannot find an issue that you are interested in but have an idea for a contribution (e.g., bug fixes, documentation update, new features, etc.), we encourage you to open a new issue or even [open a merge request][6] (MR) to start working with reviewers or other community members. + +If you are interested, here are the different areas at GitLab where you can contribute and how you can get started. + +### Development + +Whether it's fixing bugs, adding new features, or helping with reviews, GitLab is a great open source community for developers from all backgrounds. Many contributors have started contributing to GitLab development without being familiar with languages like Ruby. You can follow the steps below to start contributing to GitLab development: + + 1. 
For GitLab development, you should download and set up the [GitLab Development Kit][7]. The GDK README has instructions on how you can get started.
+ 2. [Fork the GitLab project][8] that you want to contribute to.
+ 3. Add the feature or fix the bug you want to work on.
+ 4. If you work on a feature change that impacts users or admins, please also [update the documentation][9].
+ 5. [Open an MR][6] to merge your code and its documentation. The earlier you open an MR, the sooner you can get feedback. You can mark your MR as a [Work in Progress][10] so that people know that you're not done yet.
+ 6. Add tests, if needed, as well as a [changelog entry][11] so you can be credited for your work.
+ 7. Make sure the test suite is passing.
+ 8. Wait for a reviewer. A "Community contribution" label will be added to your MR, and it will be triaged within a few days and a reviewer notified. You may need multiple reviews/iterations depending on the size of the change. If you don't hear from anyone in several days, feel free to mention the Merge Request Coaches by typing **@gitlab-org/coaches** in a comment.
+
+### Documentation
+
+Contributing to documentation is a great way to get familiar with the GitLab development process and to meet reviewers and other community members. From fixing typos to better organizing our documentation, you will find many areas where you can contribute. Here are the recommended steps for people interested in helping with documentation:
+
+ 1. Visit [https://docs.gitlab.com][12] for the latest GitLab documentation.
+ 2. If you find a page that needs improvement, click the "Edit this page" link at the bottom of the page, fork the project, and modify the documentation.
+ 3. Open an MR and follow the [branch-naming convention for documentation][13] so you can speed up the continuous integration process.
+ 4. Wait for a reviewer. A "Community contribution" label will be added to your MR and it will be triaged within a few days and a reviewer notified. If you don't hear from a reviewer in several days, feel free to mention **@gl-docsteam** in a comment.
+
+You may also want to reference the [GitLab Documentation Guidelines][9] as you contribute to documentation.
+
+### Translation
+
+GitLab is being translated into more than 35 languages, and this is driven primarily by wider community members. If you speak another language, you can join the more than 1,500 community members who are helping translate GitLab.
+
+The translation is managed using [CrowdIn][14]. First, a phrase (e.g., one that appears in the GitLab user interface or in error messages) needs to be internationalized before it can be translated. The internationalized phrases are then made available for translation. Here's how you can help us speak your language:
+
+ 1. Log in to the translation platform (you can use your GitLab login).
+ 2. Find a language you'd like to contribute to.
+ 3. Improve existing translations, vote on new translations, and/or contribute new translations to your given language.
+ 4. Once your translation is approved, it will be merged into future GitLab releases.
+
+### UX design
+
+In order to help make a product that is easy to use and built for a diverse group of people, we welcome contributions from the wider community. You can help us better understand how you use GitLab and what your needs are as you work with the GitLab UX team members. Here's how you can get started:
+
+ 1. Visit [https://design.gitlab.com][15] for an overview of GitLab's open source Design System.
You may also find the [Get Started guide][16] to be helpful. + 2. Choose an [issue][17] to work on. If you can't find an issue that you are interested in, you can open a new issue to start a conversation and get early feedback. + 3. Create an MR to make changes that reflect the issue you're working on. + 4. Wait for a reviewer. A "Community contribution" label will be added to your MR, and it will be triaged within a few days and a reviewer notified. If you don't hear from anyone in several days, feel free to mention **@gitlab-com/gitlab-ux** in a comment. + + + +### Getting help + +If you need any help while contributing to GitLab, you can refer to the [Getting Help][18] section on our Contribute page for available resources. One thing I want to emphasize is that you should not feel afraid to [mention][19] people at GitLab in issues or MRs if you have any questions or if you feel like someone has not been responsive. GitLab team members should be responsive to other community members whether they work at GitLab or not. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/9/how-contribute-gitlab + +作者:[Ray Paik][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/rpaikhttps://opensource.com/users/barkerd427 +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop-music-headphones.png?itok=EQZ2WKzy (Woman programming) +[2]: https://gitlab.com/gitlab-org/gitlab-ce/commit/9ba1224867665844b117fa037e1465bb706b3685 +[3]: https://about.gitlab.com/company/team/#dzaporozhets +[4]: https://contributors.gitlab.com +[5]: https://gitlab.com/groups/gitlab-org/-/issues?assignee_id=None&label_name%5B%5D=Accepting+merge+requests&scope=all&sort=weight&state=opened&utf8=%E2%9C%93 +[6]: https://docs.gitlab.com/ee/gitlab-basics/add-merge-request.html +[7]: https://gitlab.com/gitlab-org/gitlab-development-kit +[8]: https://docs.gitlab.com/ee/workflow/forking_workflow.html#creating-a-fork +[9]: https://docs.gitlab.com/ee/development/documentation/ +[10]: https://docs.gitlab.com/ee/user/project/merge_requests/work_in_progress_merge_requests.html +[11]: https://docs.gitlab.com/ee/development/changelog.html +[12]: https://docs.gitlab.com/ +[13]: https://docs.gitlab.com/ee/development/documentation/index.html#branch-naming +[14]: https://crowdin.com/ +[15]: https://design.gitlab.com/ +[16]: https://design.gitlab.com/contribute/get-started/ +[17]: https://gitlab.com/gitlab-org/gitlab-services/design.gitlab.com/issues +[18]: https://about.gitlab.com/community/contribute/#getting-help +[19]: https://docs.gitlab.com/ee/user/group/subgroups/#mentioning-subgroups diff --git a/sources/talk/20190927 10 counterintuitive takeaways from 10 years of DevOpsDays.md b/sources/talk/20190927 10 counterintuitive takeaways from 10 years of DevOpsDays.md new file mode 100644 index 0000000000..bb64f660b2 --- /dev/null +++ b/sources/talk/20190927 10 counterintuitive takeaways from 10 years of DevOpsDays.md @@ -0,0 +1,146 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (10 counterintuitive takeaways from 10 years of DevOpsDays) +[#]: via: (https://opensource.com/article/19/9/counterintuitive-takeaways-devopsdays) +[#]: 
author: (KrisBuytaert https://opensource.com/users/krisbuytaert) + +10 counterintuitive takeaways from 10 years of DevOpsDays +====== +DevOps veteran Kris Buytaert, who was there at the founding of +DevOpsDays, shares lessons learned that might surprise you. +![gears and lightbulb to represent innovation][1] + +Ten years ago, we started an accidental journey. We brought together some of our good friends in Ghent, Belgium, to discuss our agile, open source, and early cloud experiences. [Patrick Debois][2] coined the event #DevOpsdays after [John Allspaw][3] and [Paul Hammond][4]'s talk from Velocity 2009, "10+ deploys per day: dev and ops cooperation at Flickr" (which is well [worth watching][5]). + +![Celebrate 10 years of DevOps Days where it all began: Ghent][6] + +Now, 10 years later, the world is different. DevOps is everywhere, right? Or is it really? I have been going to [DevOpsDays][7] events since that founding, and I have learned quite a lot from my experience. Here are 10 takeaways I hope you can learn from as well. + +### 1\. There is no such thing as a DevOps engineer. + +Plenty of people now have "DevOps engineer" as a job title, and lots of them don't like the title. The title gives the false impression that DevOps is a job that a single "DevOp" can achieve. Most often, a DevOps engineer is a Linux engineer who, if they're lucky, does a little bit of automation. When recruiters start looking for a DevOps engineer, applicants need to ask themselves the right questions, starting with: "What does the company actually need from an applicant?" Are they looking for a build engineer, a senior developer who understands non-functional requirements, a senior operations person who can automate things, or something else entirely? Most often, what they are really looking for is someone who will turn their eyes away from the lack of agile principles in practice. + +In an organization with a lot of DevOps engineers, it very often means that no DevOps is happening. A DevOps title is a sign of a new silo, and an applicant could make good money and learn new skills on the job, but they will not be "doing DevOps." + +### 2\. There is no such thing as a DevOps team. + +In the early days, we often said DevOps is about removing the walls of confusion between different teams, developers, and ops, about breaking down the silos. Yet somewhere along the journey, we saw a new phenomenon: the rise of the DevOps team. + +"DevOps team" sounds like a new practice, but the inconsistencies across organizations are clear. In one organization, it's the team in charge of tooling, in another, it literally is the silo between the dev and the ops teams—a silo that creates more confusion, more frustration, and less collaboration. In the best of cases, it occasionally is the cross-functional team with end-to-end responsibility for the service they are building. Those teams typically prefer not to be called a DevOps team. + +Calling a team "DevOps," I have found, will likely hinder the outcomes you're aiming for with DevOps. + +### 3\. There is no such thing as a DevOps project. + +A "project" by nature is finite. It is something you build, deliver, and then move on from. A consistent theme from 10 years of talks is that DevOps is about continual improvement, and continual improvement is never complete. Similarly, the output of these supposed projects are long-term _services_, and a service is something you build, deliver, and keep running. 
+
+It's only after we think about how services extend beyond projects that we start to see the things that are easily forgotten: non-functional requirements (NFRs). NFRs include functionality that does not fit neatly into a specific behavior. NFRs define how we judge the operation of a system. They often include all the "-ilities" you hear around DevOps: securability, reliability, usability, maintainability, and scalability. All of which are essential to the business outcome.
+
+There is risk in the lack of empathy that comes with thinking in short-term projects. When you have moved on to another project, you won't be as concerned with NFRs, since you're busy with a new challenge and it's someone else's problem now. However, when you run a service, you do care, and it is in your best interest to reduce the friction to keep things running smoothly.
+
+### 4\. There is no such thing as a DevOps tool.
+
+While many vendors will try to [sell you one][8], even the ultimate one, DevOps is not about tooling. It is about humans and collaboration. Some tools help people collaborate; often they give people with different backgrounds a shared terminology and a shared ecosystem; but equally often, the popular tools work against collaboration.
+
+A tool-focused culture is one that can isolate people more than it helps them collaborate, as people become obsessed with their toolchain and distance themselves from those not using it. While technically they might be awesome tools that help us in some areas, a bunch of the new, so-called DevOps tools have widened the gap between different groups. For instance, "it works in my container" is a statement I often hear developers make to declare that "their" work is done. Containers alone do not solve the collaboration challenges needed to run applications effectively. We can't let tools become the new silos.
+
+### 5\. There is no such thing as DevOps "certified."
+
+No multiple-choice test can confirm that you, as an individual, collaborate with your colleagues. Organizations that offer certifications may have the most excellent advice on technology and even principles of collaboration, but a certificate cannot show that someone is good at DevOps.
+
+It is unfortunate that management teams require certificates in something we can't be certified in. Be educated in your favorite software, hardware, or cloud. Attend a local university and read the books that will educate you, such as those by [Deming][9], [Forsgren][10], [Humble][11], and [others][12]. But don't expect to become excellent at DevOps from a certification; it's more important to work with your coworkers.
+
+### 6\. There is no such thing as a DevOps pipeline.
+
+"Is the DevOps done yet?" "The DevOps pipeline is running." "The DevOps pipeline is broken." Whenever I hear these statements, I wonder how we got to such a term. Did we just rebrand a delivery pipeline, or is it because some companies are starting DevOps teams that manage the infrastructure for the pipeline? Or is it because the developers call the ops when the pipeline is broken?
+
+While introducing continuous integration and continuous delivery (CI/CD) principles has a huge impact on an organization, the "DevOps pipeline" term is used in a way that I see as blame-inducing. Ops teams are at fault when the dev's pipeline is broken. Dev teams don't worry about failing pipelines as long as they wrote tests.
+
+The concept is also misleading. Pipelines are aligned to a service or application, not to all of DevOps.
When we generalize pipelines, we run the risk of encouraging silos between teams, which will leave us far from the goals of DevOps.
+
+What I do recommend is what I've seen in hundreds of organizations across the world: Call the pipeline for Application X the Application X pipeline. This way, we'll know which application has trouble getting through its tests, getting deployed, or getting updated. We will also know the team responsible for Application X, which will be busy trying to fix it, maybe with some help from their ops friends.
+
+### 7\. There is no such thing as standard DevOps.
+
+The toughest news from thousands of DevOps stories across the globe concerns standardization. Just as we can't certify people, there is also no one-size-fits-all standard; every organization is on a different step in their journey, a different journey from other organizations'. There is no magic recipe in which you implement a number of tools, set up a number of automated flows, and suddenly you are DevOps.
+
+A standard DevOps would mean that you implement a recipe, and suddenly the organization starts to collaborate, drops office politics, improves quality, increases morale, and is on the fast track to higher earnings with fewer outages.
+
+DevOps is better understood as a body of practice similar to [ITIL][13]. Remember the L in ITIL stands for Library, a library with best practices to cherry-pick from—not an instruction manual. Lots of the hate against ITIL came from those (failed) implementations that took the library as a detailed instruction manual. Standardized DevOps will bear the same fruits.
+
+### 8\. There is no such thing as DevSecOps.
+
+From the very beginning in 2009, we started DevOpsDays as a place to invite everybody. Sure, the initial battleground was visible with developers and operations people, but everybody was included: database administrators, testers, business, finance, and, yes, also security. Even as early as 2012, we were giving talks at [OWASP][14] meetups, evangelizing what we did. We joked that the "s" in DevOps stood for security, just like the "S" in HTTPS.
+
+DevOps is inherently about security. I have found the greatest success in organizational adoption of continuous delivery comes from security teams. CD is a security requirement: you _need_ to have the ability to deploy whenever you want so that you can deploy and upgrade when you need to for business or security reasons.
+
+On the one hand, it is sad that we have to invent a word to get the security folks included. On the other hand, it's good to have the discussion again. Fundamentally, there is no difference between DevSecOps and DevOps. Security has always been part of the development and operations mindset. I'll call it DevSecOps if that helps, but it's okay to just call it DevOps.
+
+### 9\. There is no such thing as a finished DevOps transition.
+
+Have you ever seen an organization that said, "We'll do the DevOps project in Q4, and by next year we'll be DevOps"—and succeeded? Neither have I.
+
+Software delivery never stops, technology always changes, maintenance will be required, and—ideally—the DevOps mindset stays around. Once you have improved your delivery approach, you will want to keep improving. It won't be because your application is feature-complete or because the ecosystem it lives in has stopped evolving. It will be because the quality of your work improves exponentially, and many experience a similar improvement in their quality of life. DevOps will continue long after someone calls the effort "done."
+ +### 10\. There is such a thing as DevOops. + +Unfortunately, many people have not caught onto the importance of collaboration. They, often unintentionally, build silos, hold tools in higher regard than practices, require certification, and believe all the other nine takeaways. And they will struggle to succeed in a way that I like to call DevOops. + +To DevOops is to hold the tools and the silos in higher regard than the principles of DevOps that will improve your work. May we all be more DevOpsy and less DevOopsy. + +### The main takeaway + +Throughout the 10 years of DevOpsDays events, many thousands of people around the world have learned from each other in a collaborative and open environment. Some concepts that I find to be counter to the goals of DevOps and agile are popular. Stay focused on making your services run well at your company, and learn along the way. That won't mean a copy-and-paste effort of tools or dashboards. It will mean focusing on continually improving in every way. + +Good luck, and I hope to see you at the 10 year celebration at [DevOpsDays Ghent October 29-30th][15]. We have a great line up of speakers, including:  + + * [Patrick Debois][2] will talk about [Connect all the business pipelines][16] + * [Bryan Liles][17] on [Sysadmins to DevOps: Where we came from and where we are going][18] + * [Bridget Kromhout][19] on [distributed DevOps][20] + * [Ant Stanley][21] on [how serverless design patterns change nothing [for DevOps]][22] + * [Julie Gundersen][23] on [being an Advocate for DevOps][24] + + + +See you soon, back where it all began. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/9/counterintuitive-takeaways-devopsdays + +作者:[KrisBuytaert][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/krisbuytaert +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/innovation_lightbulb_gears_devops_ansible.png?itok=TSbmp3_M (gears and lightbulb to represent innovation) +[2]: https://twitter.com/patrickdebois +[3]: https://twitter.com/allspaw +[4]: https://twitter.com/ph +[5]: https://www.youtube.com/watch?v=LdOe18KhtT4 +[6]: https://opensource.com/sites/default/files/uploads/devopsdays-ghent-2019-10year.png (Celebrate 10 years of DevOps Days where it all began: Ghent) +[7]: https://devopsdays.org/ +[8]: https://opensource.com/article/19/6/you-cant-buy-devops +[9]: https://mitpress.mit.edu/books/out-crisis +[10]: https://nicolefv.com/book +[11]: https://continuousdelivery.com/about/ +[12]: https://itrevolution.com/devops-books/ +[13]: https://en.wikipedia.org/wiki/ITIL +[14]: https://www.owasp.org +[15]: https://devopsdays.org/events/2019-ghent/registration +[16]: https://devopsdays.org/events/2019-ghent/program/patrick-debois/ +[17]: https://twitter.com/bryanl +[18]: https://devopsdays.org/events/2019-ghent/program/bryan-liles/ +[19]: https://twitter.com/bridgetkromhout +[20]: https://devopsdays.org/events/2019-ghent/program/bridget-kromhout/ +[21]: https://twitter.com/IamStan +[22]: https://devopsdays.org/events/2019-ghent/program/ant-stanley/ +[23]: https://twitter.com/Julie_Gund +[24]: https://devopsdays.org/events/2019-ghent/program/julie-gunderson/ diff --git a/sources/talk/20190927 Data center gear will increasingly move off-premises.md 
b/sources/talk/20190927 Data center gear will increasingly move off-premises.md
new file mode 100644
index 0000000000..fabede3541
--- /dev/null
+++ b/sources/talk/20190927 Data center gear will increasingly move off-premises.md
@@ -0,0 +1,55 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Data center gear will increasingly move off-premises)
+[#]: via: (https://www.networkworld.com/article/3440746/data-center-gear-will-increasingly-move-off-premises.html)
+[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
+
+Data center gear will increasingly move off-premises
+======
+Cloud and colocation sites will account for half of all data center equipment in just five years, according to 451 Research.
+
+I've said that [colocation][1] and [downsizing in favor of the cloud][2] are happening, and the latest research from 451 Research confirms the trend. More than half of global utilized racks will be located at off-premises facilities, such as cloud and colocation sites, by the end of 2024, the company found.
+
+As companies get out of data center ownership, hardware will move to colocation sites like Equinix and DRT or cloud providers. The result is that total worldwide data center installed-base growth will see a dip of 0.1% CAGR between 2019 and 2024, according to the report, but overall total capacity in terms of space, power, and racks will continue to shift toward larger data centers.
+
+Enterprises are moving to colocation sites and hyperscale cloud providers such as Amazon Web Services (AWS) and Microsoft Azure for different reasons. AWS and Microsoft tend to base their data centers in remote areas with cheap land and some form of renewable energy, while colocation providers tend to be in big cities. They are popular for edge-computing projects such as internet of things (IoT) implementations and autonomous vehicle data gathering.
+
+Either way, enterprise IT is moving outward and becoming more distributed and less reliant on its own data centers.
+
+"Across all owner types and geographic locations, cloud and service providers are driving expansion, with the hyperscalers representing the tip of the spear," said Greg Zwakman, vice president of market and competitive intelligence at 451 Research, in a statement. "We expect to see a decline in utilized racks across the enterprise, with a mid-single-digit CAGR increase in non-cloud colocation, and cloud and service providers expanding their utilized footprint over 13%."
+
+While all the focus is on large-scale data centers, the report found that server rooms and closets account for nearly 95% of total data centers, but only 23% of total utilized racks in 2019. Anyone who works in a remote office can probably relate to this. And 60% of enterprise data center space is less than 10,000 square feet.
+
+On the flipside, the top six hyperscalers account for 42% of total cloud and service providers' utilized racks in 2019. 451 Research expects that to grow at an 18% CAGR, reaching 50.4% by 2024.
+
+So, the trend is rather clear: Owning your own gear is optional, and owning your data center is increasingly falling out of favor.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3440746/data-center-gear-will-increasingly-move-off-premises.html
+
+作者:[Andy Patrizio][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]: https://www.networkworld.com/author/Andy-Patrizio/
+[b]: https://github.com/lujun9972
+[1]: https://www.networkworld.com/article/3407756/colocation-facilities-buck-the-cloud-data-center-trend.html
+[2]: https://www.networkworld.com/article/3439917/how-to-decommission-a-data-center.html
+[3]: https://www.networkworld.com/newsletters/signup.html
+[4]: https://www.networkworld.com/blog/itaas-and-the-corporate-storage-technology/
+[5]: https://www.facebook.com/NetworkWorld/
+[6]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20190927 How to contribute to Fedora.md b/sources/talk/20190927 How to contribute to Fedora.md
new file mode 100644
index 0000000000..4c6b34de0f
--- /dev/null
+++ b/sources/talk/20190927 How to contribute to Fedora.md
@@ -0,0 +1,99 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to contribute to Fedora)
+[#]: via: (https://fedoramagazine.org/how-to-contribute-to-fedora/)
+[#]: author: (Ben Cotton https://fedoramagazine.org/author/bcotton/)

+How to contribute to Fedora
+======

+![][1]

+One of the great things about open source software projects is that users can make meaningful contributions. With a large project like Fedora, there’s somewhere for almost everyone to contribute. The hard part is finding the thing that appeals to you. This article covers a few of the ways people participate in the Fedora community every day.

+The first step for contributing is to [create an account][2] in the Fedora Account System. After that, you can start finding areas to contribute. This article is not comprehensive. If you don’t see something you’re interested in, check out [What Can I Do For Fedora][3] or contact the [Join Special Interest Group][4] (SIG).

+### Software development

+This seems like an obvious place to get started, but Fedora has an “upstream first” philosophy. That means most of the software that ends up on your computer doesn’t originate in the Fedora Project, but with other open source communities. Even when Fedora package maintainers write code to add a feature or fix a bug, they work with the community to get those patches into the upstream project.

+Of course, there are some applications that are specific to Fedora. These are generally more about building and shipping operating systems than the applications that get shipped to the end users. The [Fedora Infrastructure project][5] on GitHub has several applications that help make Fedora happen.

+### Packaging applications

+Once software is written, it doesn’t just magically end up in Fedora. [Package maintainers are the ones who make that happen][6].
Fundamentally, the job of the package maintainer is to make sure the application successfully builds into an RPM package and to generally keep up to date with upstream releases. Sometimes, that’s as simple as editing a line in the RPM spec file and uploading the new source code. Other times, it involves diagnosing build problems or adding patches to fix bugs or apply configuration settings.
+
+Packagers are also often the first point of contact for user support. When something goes wrong with an application, the user (or [ABRT][7]) will file a bug in Red Hat Bugzilla. The Fedora package maintainer can help the user diagnose the problem and either fix it in the Fedora package or help file a bug in the upstream project’s issue tracker.
+
+### Writing
+
+Documentation is a key part of the success of any open source project. Without documentation, users don’t know how to use the software, contributors don’t know how to submit code or run test suites, and administrators don’t know how to install and run the application. The [Fedora Documentation team][8] writes [release notes][9], [in-depth guides][10], and short “[quick docs][11]” that provide task-specific information. Multi-lingual contributors can also help with translation and localization of both the documentation and software strings by joining the [localization (L10n) team][12].
+
+Of course, Fedora Magazine is always looking for contributors to write articles. The [Contributing page][13] has more information. **[We’re partial to this way of contributing! — ed.]**
+
+### Testing
+
+Fedora users have come to rely on our releases working well. While we emphasize being on the leading edge, we want to make sure releases are usable, too. The [Fedora Quality Assurance team][14] runs a broad set of test cases and ensures all of the release criteria are met before anything ships. Before each release, the team arranges test days for various components.
+
+Once the release is out, testing continues. Each package update first goes to the [updates-testing repository][15] before being published to the main updates repository. This gives people who are willing to test the opportunity to try updates before they go to the wider community.
+
+### Graphic design
+
+One of the first things that people notice when they install a new Fedora release is the desktop background. In fact, using a new desktop background is one of our release criteria. The [Fedora Design team][16] produces several backgrounds for each release. In addition, they design stickers, logos, infographics, and many other visual elements for teams within Fedora. As you contribute, you may notice that you get awarded [badges][17]; the [Badges team][18] produces the art for those.
+
+### Helping others
+
+Cooperative effort is a hallmark of open source communities. One of the best ways to contribute to any project is to help other users. In Fedora, that can mean answering questions on the [Ask Fedora][19] forum, the [users mailing list][20], or in the [#fedora IRC channel][21]. Many third-party social media and news aggregator sites have discussion related to Fedora where you can help out as well.
+
+### Spreading the word
+
+Why put so much effort into making something that no one knows about? Spreading the word helps our user and contributor communities grow. You can host a release party, speak at a conference, or share how you use Fedora on your blog or social media sites. The [Fedora Mindshare committee][22] has funds available to help with the costs of parties and other events.
+ +### Other contributions + +This article only shared a few of the areas where you can contribute to Fedora. [What Can I Do For Fedora][3] has more options. If there’s something you don’t see, you can just start doing it. If others see the value, they can join in and help you. We look forward to your contributions! + +* * * + +_Photo by _[_Anunay Mahajan_][23]_ on [Unsplash][24]_. + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/how-to-contribute-to-fedora/ + +作者:[Ben Cotton][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/bcotton/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2019/09/how-to-contribute-816x345.jpg +[2]: https://admin.fedoraproject.org/accounts/user/new +[3]: https://whatcanidoforfedora.org/ +[4]: https://fedoraproject.org/wiki/SIGs/Join +[5]: https://github.com/fedora-infra +[6]: https://fedoramagazine.org/day-life-fedora-packager/ +[7]: https://abrt.readthedocs.io/en/latest/index.html +[8]: https://docs.fedoraproject.org/en-US/fedora-docs/contributing/ +[9]: https://docs.fedoraproject.org/en-US/fedora/f30/release-notes/ +[10]: https://docs.fedoraproject.org/en-US/fedora/f30/ +[11]: https://docs.fedoraproject.org/en-US/quick-docs/ +[12]: https://fedoraproject.org/wiki/L10N +[13]: https://docs.fedoraproject.org/en-US/fedora-magazine/contributing/ +[14]: https://fedoraproject.org/wiki/QA +[15]: https://fedoraproject.org/wiki/QA:Updates_Testing +[16]: https://fedoraproject.org/wiki/Design +[17]: https://badges.fedoraproject.org/ +[18]: https://fedoraproject.org/wiki/Open_Badges?rd=Badges +[19]: https://ask.fedoraproject.org/ +[20]: https://lists.fedoraproject.org/archives/list/users%40lists.fedoraproject.org/ +[21]: https://fedoraproject.org/wiki/IRC_support_sig +[22]: https://docs.fedoraproject.org/en-US/mindshare-committee/ +[23]: https://unsplash.com/@anunaymahajan?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText +[24]: https://unsplash.com/s/photos/give?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText diff --git a/sources/talk/20190927 What does an open source AI future look like.md b/sources/talk/20190927 What does an open source AI future look like.md new file mode 100644 index 0000000000..cfc97020b3 --- /dev/null +++ b/sources/talk/20190927 What does an open source AI future look like.md @@ -0,0 +1,95 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (What does an open source AI future look like?) +[#]: via: (https://opensource.com/article/19/9/open-source-ai-future) +[#]: author: (Sam Bocetta https://opensource.com/users/sambocetta) + +What does an open source AI future look like? +====== +Placing the fundamental building blocks of AI in the hands of the open +source community is advancing AI in multiple industries. +![A brain design in a head][1] + +The recent announcement about [Neuralink][2], Elon Musk's latest startup, added to the buzz already in the air about where tech is taking us in the not-so-distant future. Judging by its ambitious plans, which involve pairing computers wirelessly with the human brain, Neuralink demonstrates that the future is now. + +And a big part of that future is open source artificial intelligence (AI). 
Rapid advancements have opened up a whole new world of possibilities that startups can take advantage of right now.
+
+### How does open source AI work?
+
+Traditionally, tech innovations were held close to the vest and developed in secret; no one wanted to give a leg up to the competition. Open source technology, a concept that really gained traction in the '90s with the development of Linux, works on the same underlying principle that spurred the creation of the internet: the idea that information should be freely shared and available to all who want access.
+
+Open source supports [global collaboration][3] by allowing some of the most progressive minds in tech to work together and craft solutions, and it makes technology cheaper and more widely available for developers. The potential benefits for every industry and level of society are immense.
+
+When it comes to [artificial intelligence][4] and machine learning (ML), open source technology is all about high-speed innovation. If algorithm work were kept inside closed systems, growth would be stifled. But placing the fundamental building blocks of AI in the hands of the open source community has created a valuable feedback loop for all involved.
+
+When individuals or organizations contribute to an open source AI project, they typically share code at the algorithm level to allow others to understand their method for managing datasets and detecting patterns. Then other groups can reuse or tweak this code to integrate it with their own products and solutions. The following are some of the industries and areas benefiting the most from open source AI.
+
+### Autonomous vehicles
+
+One of the most popular use cases for AI and ML technologies is the development of self-driving cars and other types of vehicles. With all of the competition in the auto industry, you might think that companies would be desperate to keep their technology private. However, the opposite is true, as evidenced by the open source projects that are becoming more and more common.
+
+[Webviz][5] is an open source browser-based tool that lets developers visualize the huge amount of data being recorded by a self-driving car's various sensors. This becomes very valuable when trying to simulate different road conditions without running costly tests. There are over 1,000 engineers now contributing to Webviz, and it is also being used in various fields of robotics.
+
+### E-commerce
+
+Shopping online was once a novelty with a handful of storefronts. Now, thanks to web design trends that have moved away from custom content management systems toward [DIY website builders][6] and vendors like Shopify, even the smallest home business has an e-commerce website. In fact, not having an online option is the rarity now.
+
+One challenge that online retailers face is how to deal with customer inquiries and complaints efficiently. That's where open source AI projects have pushed technology forward. A machine learning library called [TensorFlow][7] is capable of running algorithms to automatically tag and categorize incoming email messages. The system then prioritizes the content so that humans can respond to the most urgent issues more quickly.
+
+### Cybersecurity and data privacy
+
+A recent article on Forbes' website synthesized many leading opinions when it claimed that [AI is the future of cybersecurity][8]. The report, which summarized Capgemini research, states that 61% of enterprises admit they wouldn't be able to detect breach attempts effectively without AI.
Considering the increasing number of malware and virus attacks, the internet's threat surface has reached such a level that we have little hope of securing it without an AI-powered system: one that is super-fast, that learns on the fly how to detect threats as they occur, and that shuts them down without human intervention.
+
+Popular cybersecurity software, like virtual private networks (VPNs), has been banned in a growing number of countries and is the target of regulatory extinction even in freedom-loving countries like Canada. Telecom giant Bell [asked NAFTA negotiators][9] to declare VPNs illegal due to their ability to bypass geoblocking, which is popular for [accessing Netflix content][10] from outside the United States.
+
+Companies need smarter solutions to keep their technology assets safe. Open source tools like the [Adversarial Robustness Toolbox][11] can scan AI neural networks and determine their level of risk. This can help prevent the algorithms from being manipulated by external hackers.
+
+### Banking and financial industries
+
+Financial services are being revolutionized by open source AI. AI's ability to analyze and make recommendations will allow banking professionals to turn a lot of their more mundane duties over to technology while they perfect the art of customer service.
+
+Currently, the primary use of open source by banks is on the analytics side, where they can leverage AI algorithms to filter large sets of data easily. But bigger changes could be coming soon, considering that popular cryptocurrencies like Bitcoin are built on open source platforms.
+
+Some FinTech advances to look out for are:
+
+ * Banking APIs, such as [open banking][12], that allow banks to connect to customer data through third-party interfaces
+ * Microservices and containerization that replace old monolithic functions with new open cores, eliminating functionalities one by one and reintroducing them as microservices
+ * Big data and machine learning applications that can coordinate data production, collection, processing, storage, and presentation
+
+### Final thoughts
+
+Machine learning technologies are powering forward with juggernaut speed as we speak. Many of them are making modern life possible and changing business in ways that most people may not fully realize. Although the technology [creates challenges][13] at times, it's difficult to imagine how we ever lived without many of these advancements, especially when we lose them, even temporarily.
+
+With artificial intelligence, the likelihood of system breakdowns and obsolescence decreases and efficiency improves exponentially. It's self-correcting, scalable, and progressive in ways that will enrich our world immeasurably. When technology is in control behind the scenes, it frees us to realize the full extent of our capabilities—or at least come closer than we were before.
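+
+To ground the e-commerce example mentioned earlier, here is a minimal sketch of the kind of message-tagging model one could build with the open source TensorFlow library. The toy messages, labels and layer sizes are invented for illustration; a real system would train on a large labelled corpus.
+
+```python
+# Hedged sketch: tag incoming customer messages with TensorFlow/Keras.
+# All data and hyperparameters here are illustrative assumptions.
+import tensorflow as tf
+
+messages = [
+    "Where is my order? It has not arrived yet.",
+    "I was charged twice for the same item.",
+    "Do you ship internationally?",
+    "My package arrived damaged and I want a refund.",
+]
+labels = [0, 1, 2, 1]  # 0 = shipping, 1 = billing/refund, 2 = general
+
+# Map raw text to integer sequences, then to a small classifier.
+vectorizer = tf.keras.layers.TextVectorization(output_sequence_length=20)
+vectorizer.adapt(messages)
+
+model = tf.keras.Sequential([
+    vectorizer,
+    tf.keras.layers.Embedding(len(vectorizer.get_vocabulary()) + 1, 16),
+    tf.keras.layers.GlobalAveragePooling1D(),
+    tf.keras.layers.Dense(3, activation="softmax"),  # one unit per tag
+])
+model.compile(optimizer="adam",
+              loss="sparse_categorical_crossentropy",
+              metrics=["accuracy"])
+model.fit(tf.constant(messages), tf.constant(labels), epochs=10, verbose=0)
+
+# Route a new message to its most likely tag.
+scores = model.predict(tf.constant(["When will my parcel arrive?"]))
+print(scores.argmax(axis=1))  # ideally 0 (shipping), given enough data
+```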
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/9/open-source-ai-future + +作者:[Sam Bocetta][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/sambocetta +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW_patents4abstract_B.png?itok=6RHeRaYh (A brain design in a head) +[2]: https://www.theverge.com/2019/7/16/20697123/elon-musk-neuralink-brain-reading-thread-robot +[3]: https://www.iflscience.com/technology/why-big-tech-companies-are-open-sourcing-their-ai-systems/ +[4]: https://opensource.com/article/18/12/how-get-started-ai +[5]: https://webviz.io/ +[6]: https://webeminence.com/wysiwyg-wordpress-website-builders/ +[7]: https://www.tensorflow.org/ +[8]: https://www.forbes.com/sites/louiscolumbus/2019/07/14/why-ai-is-the-future-of-cybersecurity/#69a04c8f117e +[9]: https://www.vice.com/en_us/article/d3mvam/canadian-telecom-giant-bell-wanted-nafta-to-ban-some-vpns +[10]: https://privacycanada.net/best-vpn-netflix/ +[11]: https://adversarial-robustness-toolbox.readthedocs.io/en/latest/ +[12]: https://www.thebalance.com/what-is-open-banking-and-how-will-it-affect-you-4173727 +[13]: https://www.dataversity.net/for-better-or-worse-ai-is-eating-data-centers/ diff --git a/sources/talk/20190930 How Open Source Software Lets Us Push It to the Limit.md b/sources/talk/20190930 How Open Source Software Lets Us Push It to the Limit.md new file mode 100644 index 0000000000..53aa99933c --- /dev/null +++ b/sources/talk/20190930 How Open Source Software Lets Us Push It to the Limit.md @@ -0,0 +1,77 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How Open Source Software Lets Us Push It to the Limit) +[#]: via: (https://opensourceforu.com/2019/09/how-open-source-software-lets-us-push-it-to-the-limit/) +[#]: author: (Ebbe Kernel https://opensourceforu.com/author/ebbe-kernel/) + +How Open Source Software Lets Us Push It to the Limit +====== + +[![best text editors for web development][1]][2] + +_Here is a conversation with Johan, a leading developer of an advanced proxy network. As his team tackles complex load-balancing problems, they are constantly forced to push their solutions beyond what the original developers imagined. He says that the decision to use an open source load-balancer HAProxy has made it possible to do what would not be possible with other solutions._ + +**Ebbe: Tell us a bit about why you chose HAProxy for load-balancing.** + +**Johan: **Even though we use both open source and private source solutions for our network, I am a real ambassador for open source in our team. I think HAProxy is a perfect example of a great solution for a particular problem that can be adapted in unforeseen ways precisely because it is open sourced. + +Ever since we started work developing our proxy network, I looked into using an open source solution for load-balancing. We tried Nginx and Squid, but we soon realized that HAProxy is an indisputable industry standard and the only option for our product. + +**Ebbe: What made it exemplary?** + +**Johan: **What I’ve found with great open source software is that it must be constantly evolving, updated and managed. 
And in the case of HAProxy, we get minor updates every month. At first we liked the quick bug fixes. But now we have jumped on the bandwagon with the new major release, as it offers new features we were aching to implement.
+
+Everyone knows that you do not update a working solution until the last minute, to make sure that early bugs are fixed, but good software [_offers features you can’t resist_][3]. We trust it because it is transparent and has a strong community that has proven it can tackle most issues quickly.
+
+**Ebbe: You mentioned the community, which often accompanies great open source solutions. Does it really have that much of an impact for your business?**
+
+**Johan:** Of course. In terms of scale, everything pales in comparison to the community that HAProxy has mustered over the years. Every issue we encounter is usually already solved or quickly escalated, and, as more and more companies use HAProxy, the community becomes vaster and more intelligent.
+
+What we’ve found with other services we use is that even enterprise solutions might not offer the freedom and flexibility we need. In our case, an active community is what makes it possible to adapt software in previously untested ways.
+
+**Ebbe: What in particular does it let you do?**
+
+**Johan: **Since we chose HAProxy for our network, we have found that creating ‘add-ons’ with Lua lets us fully customize it to our own logic and integrate it with all of the other services that make the network work. This was extremely important, as we have a lot of services that need to work together, including some that are not open source.
+
+Another great thing is that the community is always solving problems and bugs, so we do not really encounter anything we can’t handle. Over the years, I’ve found that this is only possible with open source software.
+
+What makes it a truly exceptional open source solution is the documentation. Even though I’ve been working closely with HAProxy for over two years, I still find new things almost every month.
+
+I know it sounds like a lot of praise, but I really love HAProxy for its resilience to our constant attempts to break it.
+
+**Ebbe: What do you mean by ‘break it’?**
+
+**Johan: **Out of the box, HAProxy works great as [_a load balancer for a couple of dozen servers_][4], usually 10 to 20. But, since our network is several orders of magnitude larger, we’ve constantly pushed it to its limits.
+
+It’s not uncommon for our HAProxy instances to load-balance over 10,000 servers, and we are certain that the original developers didn’t think about optimizing it for these kinds of loads. Due to this, it sometimes fails, but we are constantly optimizing our own solutions to make everything work. And, thanks to HAProxy’s developers and community, we are able to solve most of the issues we encounter easily.
+
+**Ebbe: Doesn’t this downtime impact your product negatively?**
+
+**Johan: **First of all, our product would not work without HAProxy. At least not as successfully as it has over the years. As I’ve said, all other solutions on the market are less optimized for what we do than HAProxy.
+
+Also, ‘breaking’ a service is nothing bad in and of itself. We always have backup services in place to handle the network. Testing in production is what we do for a simple reason: since we ‘break’ HAProxy so much, we cannot really test any updates before launching something on our network.
We would need the full scale of our network, with HAProxy instances running and all the millions of servers available, and creating such a testing environment seems like a huge waste of resources.
+
+**Ebbe: Do you have anything to say to the community of OpenSourceForU.com?**
+
+**Johan: **My team and I want to thank everyone for supporting open source principles and making the world a better place!
+
+--------------------------------------------------------------------------------
+
+via: https://opensourceforu.com/2019/09/how-open-source-software-lets-us-push-it-to-the-limit/
+
+作者:[Ebbe Kernel][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensourceforu.com/author/ebbe-kernel/
+[b]: https://github.com/lujun9972
+[1]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2017/07/developer-blog.jpg?resize=696%2C433&ssl=1 (text editor)
+[2]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2017/07/developer-blog.jpg?fit=750%2C467&ssl=1
+[3]: https://smartproxy.com/what-is-a-proxy
+[4]: https://opensourceforu.com/2016/09/github-open-sources-internal-load-balancer/
diff --git a/sources/talk/20191001 Earning, spending, saving- The currency of influence in open source.md b/sources/talk/20191001 Earning, spending, saving- The currency of influence in open source.md
new file mode 100644
index 0000000000..0fb67c9d80
--- /dev/null
+++ b/sources/talk/20191001 Earning, spending, saving- The currency of influence in open source.md
@@ -0,0 +1,90 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Earning, spending, saving: The currency of influence in open source)
+[#]: via: (https://opensource.com/open-organization/19/10/gaining-influence-open-community)
+[#]: author: (ldimaggi https://opensource.com/users/ldimaggi)
+
+Earning, spending, saving: The currency of influence in open source
+======
+In open organizations, people gain influence primarily through
+contribution—not formal title or position in a hierarchy. To make an
+impact in yours, start here.
+![Bees on a hive, connected by dots][1]
+
+The acquisition and application of influence are vital aspects of any organization. But the manner in which people acquire influence can vary widely. In traditional, hierarchical organizations, for example, someone might acquire influence by virtue of their title or position in a hierarchy. In government organizations, someone might acquire influence by virtue of being elected. On social media, someone might acquire influence through endless self-promotion. Or someone might acquire influence through inheritance or wealth.
+
+In open source communities, influence operates differently. It can't be bought, inherited, elected through a ballot, bestowed through a job title, or gained through celebrity. In this world, influence must be _earned_ through the merit of the contributions one makes to a team, organization, or community.
+
+In [open organizations][2]—which often look to open source communities as models for a more dynamic, inclusive, and innovative way to operate, as Jim Whitehurst explains in _The Open Organization_—[influence operates the same way][3]. "Everyone has the ability to earn influence and to get his or her ideas heard," Whitehurst writes.
"It simply related to how effective you are at presenting and getting people behind your ideas, throughout the organization." So anyone hoping to succeed in an open organization must understand how to acquire, manage, and leverage influence in ways that may not come naturally to them. + +In this two-part series, we'll draw on our experiences in open source software development to examine the mechanics of influence in open organizations. In this installment, we'll explain how influence is acquired in an open organization. We'll also offer advice on ways one might _earn_ influence in these organizations—and some tips on behaviors to avoid. + +### The currency of influence + +Even though you can't buy it, influence behaves like a form of virtual currency in an open source community: a scarce resource, always needed, but also always in short supply. One must earn it through contributions to an open source project or community. In contrast to monetary currency, however, influence is not _transferable_. You must earn it for yourself. You can neither give nor receive it as a gift. + +In traditional organizational structures, influence follows an organization's top-down, command and control pattern—and influence is largely the result of one's position in a hierarchy. People at the top of those hierarchical structures make decisions, and those decisions flow downward to everyone else. The model is relatively stable, frequently rigid, and [can encounter difficulties when conditions change][4]. + +But as Eric Raymond argues in _The Cathedral and the Bazaar,_ open source communities don't operate in this "cathedral"-style manner. They work more like "bazaars," where act activity, power, and influence cut across formal lines of command and control. In an open organization therefore, influence _must be earned_. Influence _not_ earned can be corrosive in an open organization. + +That's because open organizations [are more communal][2]. The modern word "community" has as its root the Latin word "communitas," which has as part of its definition "public spirit, a sense of duty and willingness to serve one's community." This definition (and how it relates to your motivation for being involved in a community) is an important place to begin thinking about what it means to have influence in an open organization—and how you can acquire that influence. + +### Acquiring influence + +In open source communities, influence operates differently. It can't be bought, inherited, elected through a ballot, bestowed through a job title, or gained through celebrity. + +When starting out—in an open source community or an open organization—you'll need a level of interest in and an honest commitment to the goals and mission of the community. + +Note our careful and intentional choice of the word "honest." Your intentions for acquiring influence must have at their root the goal of furthering the mission of the community. Your commitment must be to the _community itself_—not to the community insofar as it functions like a vehicle for self-promotion or padding your resume. Any attempts to use the community as a stepping stone will probably fail, as other members of the community will quickly discover your true motives. The open nature of an open source community means that insincerity has no place to hide, and when found, will be exposed. 
The "glue" that holds an open source community together is a commitment to the community's goals, the value that the community's projects provide, and ultimately, the community's output (most notably, the code it creates). The same is true in any open organization, no matter what it aims to produce. + +As with any durable and meaningful relationship, making a commitment to a community takes time. In order to acquire influence in a community, you'll need to _invest_ in that community. You cannot "parachute into" a community and acquire influence overnight. + +So how _can_ you begin? + +In open source communities, before you start churning out code and documentation, you have to _watch, listen, and learn_ before you act. You don't want to act in such a way that community members will think of you as an uninvited guest. Successful contributors are those that study a project and understand its goals, accomplishments, and challenges. They watch how the community _functions_. They figure out who the most active members are. They understand the types of contributions that accepted and which get rejected. Only then can they be ready to contribute. + +When preparing to attain influence in an open organization, look for problems that need solving. That way, your contributions take the form of _solutions_ as opposed to unwanted *additions (*new features, etc., in software communities). Occasionally you can make progress faster by moving _slowly_—easing into the community, as opposed to jumping into the pool and trying to make a big splash (you end up just spilling water into everyone's drinks). + +The level of influence that you can earn is directly proportional to the scope and value of the contributions that you make to the community. By becoming a _contributor_ to an open community, you also earn the _credibility_ you'll need to achieve some level of influence in that community and on its projects. + +Having your contributions noticed in and embraced by the community is always nice. When this happens, you will receive some notoriety (in open source code communities, for example, this can come in the form of pull request comments, recognition in blog posts, or other online acknowledgements and thanks). While it's fine for you to publicize these accomplishments and the growing influence in the community that these that these represent, refrain from public self-congratulations and self-promotion. The community should remain the center of attention. + +### Meritocracy != democracy + +When acquiring influence in an open community, always pay attention to that community's governance model. Most open source coding communities, for example, aren't democracies; [they're meritocracies][5]. Ideas presented to the teams must be vetted and critically reviewed by the team in order to ensure that they provide value to the community. Changes do not take place in a vacuum, as they can affect many other people's work. + +Practically speaking, this means that in open organizations everyone and anyone has the ability to voice an opinion. Transparency rules, and it's the key to giving everyone a fair opportunity to express opinions and thoughts. In an open source software project, for instance, anyone can open issues, respond to issues, provide code for features, influence new features, and so on. "Open" means, anyone and everyone can see the code, comment on it, raise issues against it, and provide fixes and features. 
Leaders in open source software communities derive their leadership capabilities from the merit of their contributions and the respect these have garnered them from the community.
+
+However, these leaders don't _command_ the open source software communities or impose arbitrary rules or opinions—mainly because everyone would ignore their commands and leave if they tried to do so. Transparency and partnership are what attract community members to a project and grow a community successfully.
+
+In the end, a leader in an open organization would fail miserably if he or she had to deliver everything personally. In fact, it's a mistake for a leader to attempt to do this. Contributions from the community are not a luxury "extra" for an open organization; they're vital to its success.
+
+### Patience and perseverance
+
+Think of a world-class athlete: someone born with an innate skill may quickly rise to the top of his or her sport at a young age, yet overcoming a serious injury might force that athlete to learn patience (and perhaps some humility, too) during a long rehabilitation, where even small steps forward are painful and time-consuming.
+
+Likewise, building credibility in an open source community is a long process. Influence can take years to develop, so patience and persistence are _crucial_. Early on, the process can seem daunting; the whole world is available to be influenced, yet you get there by taking a step-by-step approach: starting small while thinking big. Like ripples in a pond when you drop a stone, your influence can spread over time from your immediate circle of connections, through those connections, to other people beyond it. In the second installment, we'll explain how influence, once acquired, can be applied in an open organization.
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/open-organization/19/10/gaining-influence-open-community + +作者:[ldimaggi][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/ldimaggi +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_bees_network.png?itok=NFNRQpJi (Bees on a hive, connected by dots) +[2]: https://opensource.com/open-organization/resources/open-org-definition +[3]: https://opensource.com/open-organization/16/8/how-make-meritocracy-work +[4]: https://opensource.com/open-organization/16/3/fastest-result-isnt-always-best-result +[5]: https://opensource.com/open-organization/17/2/new-perspective-meritocracy diff --git a/sources/talk/20191001 How to keep your messages private with an open source app.md b/sources/talk/20191001 How to keep your messages private with an open source app.md new file mode 100644 index 0000000000..d6e017435d --- /dev/null +++ b/sources/talk/20191001 How to keep your messages private with an open source app.md @@ -0,0 +1,100 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to keep your messages private with an open source app) +[#]: via: (https://opensource.com/article/19/10/secure-private-messaging) +[#]: author: (Chris Hermansen https://opensource.com/users/clhermansen) + +How to keep your messages private with an open source app +====== +Even encrypted messaging apps are leveraging your private data to sell +you things. +![Chat via email][1] + +Messaging apps have changed how we communicate. Where would we be today without [SMS][2]? Can you imagine returning to a world where near-instant communication is not pervasive? + +We have lots of messaging options in addition to SMS and [MMS][3]. There are [Skype][4], [iMessage][5], [Facebook Messenger][6], [Twitter][7] (with and without direct messages), [WeChat][8], [WhatsApp][9], [SnapChat][10], and more. Many of them are encrypted, and many people presume that their communications on these apps are private and secure. But are they really? Cloud-hosted applications that are harvesting metadata from your conversations, then using it to sell you products that support their services, may not be your friends after all. + +### Risks of using messaging systems + +Fellow Opensource.com Community Moderator [Chris Hermansen][11] shares my concern about the growing invasion of our privacy by online communication tools. Chris says, "in my household, it's safe to say that we're not comfortable with commercial interests recording our every online move and using every possible gap to promote goods, services, points of view, and so on, or to promote themselves to others based on using our data." + +[Employers are using social media account information][12] within their hiring and firing decision-making process, he says. And it's not just to check whether the candidate's or employee's online personality conflicts with company values; in many cases, candidates who don't have a social media presence are unlikely to get an interview. + +He is also concerned about certain apps that allow message senders to see when recipients open their messages. 
He says, "I did not opt into that kind of sharing, and it seems the only way to opt out is to use software that is specifically designed to block this kind of unauthorized abuse, which may, in turn, block me from other, legitimate web content." + +### Hide those prying eyes + +Chris recently told me about [Signal][13], which ticks all the right boxes for those of us who have had enough of these prying eyes. The organization behind Signal is open, so we can know what it's doing with our data. (The answer? [Not very much at all][14].) Moreover, the organization is dedicated to broadening the use of Signal without harvesting user data, and all communications are encrypted end-to-end with the keys stored on users' devices. + +In addition, the mobile app is robust and reliable and enables users to make video and voice calls over the internet. [Chris' family has been using Signal for the past 18 months][15] or so to communicate around the world, and he says that "the call quality is far, far better than with the competition." I also find that Signal provides extremely high voice and video call quality, even over long-distance connections that often bamboozle other communications applications. + +"I prefer to have Signal also manage my SMS traffic," says Chris, "and I'll often open my Signal app to make a call to a fellow Signal user rather than telephoning." + +Chris and I aren't the only ones who like using Signal. In 2017, the U.S. Senate Sergeant-at-Arms [approved][16] the app for lawmakers and their staffs to use. + +Chris has only a couple of complaints about Signal. The first is that the desktop application (in Linux, anyway) isn't as full-featured as the mobile application. For instance, the desktop app can't make video or voice calls nor send or receive SMS. This isn't a show-stopper, but it would surely be nice when your cellphone battery is low or when it's easier to use your big screen and headset than a small mobile device. + +### Using Signal + +It is easy to [install Signal][17] on [Android][18], [iOS][19], Windows, MacOS, and Debian-based Linux distributions, and it offers excellent support [documentation][20] with detailed installation instructions for each operating system. You can also link devices, like laptops and desktops, that run one of the supported operating systems. + +Signal uses your existing mobile number, provided it can send and receive SMS and phone calls. The first time you set Signal up on your mobile phone, the application can search your address book for any of your contacts who use it. + +Signal is openly licensed with the [GNU Public License 3.0][21], and you can inspect the source code on [GitHub][22]. + +### Signal's future + +In early 2018, Signal received $50 million in funding from WhatsApp co-founder [Brian Acton][23]. With that cash infusion, Signal founder [Moxie Marlinspike][24] and Acton founded a new non-profit, 501(c)(3) organization named the [Signal Foundation][25]. + +Marlinspike says Signal plans to use Acton's investment to "increase the size of our team, our capacity, and our ambitions. This means reduced uncertainty on the path to sustainability, and the strengthening of our long-term goals and values. Perhaps most significantly, the addition of Brian brings an incredibly talented engineer and visionary with decades of experience building successful products to our team." 
+ +Signal is currently looking for [developers][26] who have skills with iOS, Rust, Android, and more, as well as people interested in supporting it with financial [donations][27]. + +To learn more, you can follow Signal on [Twitter][28], [Instagram][29], and its [blog][30]. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/10/secure-private-messaging + +作者:[Chris Hermansen][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/clhermansen +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_chat_communication_message.png?itok=LKjiLnQu (Chat via email) +[2]: https://en.wikipedia.org/wiki/SMS +[3]: https://en.wikipedia.org/wiki/Multimedia_Messaging_Service +[4]: https://www.skype.com/en/ +[5]: https://en.wikipedia.org/wiki/IMessage +[6]: https://www.messenger.com/ +[7]: https://twitter.com/?lang=en +[8]: https://play.google.com/store/apps/details?id=com.tencent.mm&hl=en +[9]: https://www.whatsapp.com/ +[10]: https://www.snapchat.com/ +[11]: https://opensource.com/users/clhermansen +[12]: https://www.businessnewsdaily.com/2377-social-media-hiring.html +[13]: https://signal.org/ +[14]: https://en.wikipedia.org/wiki/Signal_(software) +[15]: https://opensource.com/article/19/3/open-messenger-client +[16]: https://thehill.com/policy/cybersecurity/333802-sen-staff-can-use-signal-for-encrypted-chat +[17]: https://signal.org/download/ +[18]: https://play.google.com/store/apps/details?id=org.thoughtcrime.securesms&referrer=utm_source%3DOWS%26utm_medium%3DWeb%26utm_campaign%3DNav +[19]: https://apps.apple.com/us/app/signal-private-messenger/id874139669 +[20]: https://support.signal.org +[21]: https://github.com/signalapp/Signal-iOS/blob/master/LICENSE +[22]: https://github.com/signalapp +[23]: https://en.wikipedia.org/wiki/Brian_Acton +[24]: https://moxie.org/ +[25]: https://signal.org/blog/signal-foundation/ +[26]: https://signal.org/workworkwork/ +[27]: https://signal.org/donate/ +[28]: https://twitter.com/signalapp +[29]: https://www.instagram.com/signal_app/ +[30]: https://signal.org/blog/ diff --git a/sources/talk/20191001 The monumental impact of C.md b/sources/talk/20191001 The monumental impact of C.md new file mode 100644 index 0000000000..7ab59a6574 --- /dev/null +++ b/sources/talk/20191001 The monumental impact of C.md @@ -0,0 +1,143 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (The monumental impact of C) +[#]: via: (https://opensource.com/article/19/10/command-line-heroes-c) +[#]: author: (Matthew Broberg https://opensource.com/users/mbbroberg) + +The monumental impact of C +====== +The season finale of Command Line Heroes offers a lesson in how a small +community of open source enthusiasts can change the world. +![In the finale of Command Line Heroes, we learn about the significant impact of C][1] + +C is the original general-purpose programming language. The Season 3 finale of the [Command Line Heroes][2] podcast explores C's origin story in a way that showcases the longevity and power of its design. It's a perfect synthesis of all the languages discussed throughout the podcast's third season and this [series of articles][3]. 
+
+![The original C programming guide by two of the language authors, circa 1978][4]
+
+C is such a fundamental language that many of us forget how much it has changed. Technically a "high-level language," in the sense that it requires a compiler to be runnable, it's as close to assembly language as people like to get these days (outside of specialized, low-memory environments). It's also considered to be the language that made nearly all languages that came after it possible.
+
+### The path to C began with failure
+
+While the myth persists that all great inventions come from highly competitive garage dwellers, C's story is better suited to the Renaissance period.
+
+In the 1960s, Bell Labs in suburban New Jersey was one of the most innovative places of its time. Jon Gertner, author of [_The idea factory_][5], describes the culture of the time as marked by optimism and the excitement of solving tough problems. Instead of monetization pressures with tight timelines, Bell Labs offered seemingly endless funding for wild ideas. It had a research and development ethos that aligns well with today's [open leadership principles][6]. The results were significant and prove that brilliance can come without the promise of VC funding or an IPO.
+
+The challenge back then was terminal sharing: finding a way for lots of people to access the (very limited number of) available computers. Before there was a scalable answer for that, and long before we had [a shell like Bash][7], there was the Multics project. It was a hypothetical operating system where hundreds or even thousands of developers could share time on the same system. This was a dream of John McCarthy, creator of Lisp and the term artificial intelligence (AI), as I [recently explored][8].
+
+Joy Lisi Rankin, author of [_A people's history of computing in the United States_][9], describes what happened next. There was a lot of public interest in driving forward with Multics' vision of more universally available timesharing. Academics, scientists, educators, and some in the broader public were looking forward to this computer-powered future. Many advocated for computing as a public utility, akin to electricity, and the push toward timesharing was a global movement.
+
+Up to that point, high-end mainframes topped out at 40-50 terminals per system. The change of scale was ambitious and eventually failed, as Warren Toomey writes in [IEEE Spectrum][10]:
+
+> "Over five years, AT&T invested millions in the Multics project, purchasing a GE-645 mainframe computer and dedicating to the effort many of the top researchers at the company's renowned Bell Telephone Laboratories—including Thompson and Ritchie, Joseph F. Ossanna, Stuart Feldman, M. Douglas McIlroy, and the late Robert Morris. But the new system was too ambitious, and it fell troublingly behind schedule. In the end, AT&T's corporate leaders decided to pull the plug."
+
+Bell Labs pulled out of the Multics program in 1969. Multics wasn't going to happen.
+
+### The fellowship of the C
+
+Funding wrapped up, and the powerful GE-645 mainframe was assigned to other tasks inside Bell Labs. But that didn't discourage everyone.
+
+Among the last holdouts from the Multics project were four men who felt passionately tied to the project: Ken Thompson, Dennis Ritchie, Doug McIlroy, and J.F. Ossanna. These four diehards continued to muse and scribble ideas on paper. Thompson and Ritchie developed a game called Space Travel, which Thompson eventually ported to a PDP-7 minicomputer.
While they were working on that, Thompson started implementing all those crazy hand-written ideas about filesystems they'd developed among the wreckage of Multics.
+
+![A PDP-7 minicomputer][11]
+
+A PDP-7 minicomputer was not top-of-the-line technology at the time, but the team implemented foundational technologies that would change the future of programming languages and operating systems alike.
+
+That's worth emphasizing: Some of the original filesystem specifications were written by hand and then programmed on what was effectively a toy compared to the systems they were using to build Multics. [Wikipedia's Ken Thompson page][12] dives deeper into what came next:
+
+> "While writing Multics, Thompson created the Bon programming language. He also created a video game called [Space Travel][13]. Later, Bell Labs withdrew from the MULTICS project. In order to go on playing the game, Thompson found an old [PDP-7][14] machine and rewrote Space Travel on it. Eventually, the tools developed by Thompson became the [Unix][15] [operating system][16]: Working on a PDP-7, a team of Bell Labs researchers led by Thompson and Ritchie, and including Rudd Canaday, developed a [hierarchical file system][17], the concepts of [computer processes][18] and [device files][19], a [command-line interpreter][20], [pipes][21] for easy inter-process communication, and some small utility programs. In 1970, [Brian Kernighan][22] suggested the name 'Unix,' in a pun on the name 'Multics.' After initial work on Unix, Thompson decided that Unix needed a system programming language and created [B][23], a precursor to Ritchie's [C][24]."
+
+As Warren Toomey documented in the IEEE Spectrum article mentioned above, Unix showed promise in a way the Multics project never did. After winning over the team and doing a lot more programming, the pathway to Unix was paved.
+
+### Getting from B to C in Unix
+
+Thompson quickly created a language for Unix that he called B. B inherited much from its predecessor BCPL, but it wasn't enough of a breakaway from older languages. B had no data types, for starters. It's considered a typeless language, which meant its "Hello World" program looked like this:
+
+
+```
+main( ) {
+extrn a, b, c;
+putchar(a); putchar(b); putchar(c); putchar('!*n');
+}
+
+a 'hell';
+b 'o, w';
+c 'orld';
+```
+
+Even if you're not a programmer, it's clear that carving up strings four characters at a time would be limiting. It's also worth noting that this text is considered the original "Hello World," from Brian Kernighan's 1972 tutorial, [_A tutorial introduction to the language B_][25] (although that claim is not definitive).
+
+[![A diagram showing the key Unix and Unix-like operating systems][26]][27]
+
+Typelessness aside, B's assembly-language counterparts were still yielding programs faster than was possible using the B compiler's threaded-code technique. So, from 1971 to 1973, Ritchie modified B. He added a "character type" and built a new compiler so that it didn't have to use threaded code anymore. After two years of work, B had become C.
+
+### The right abstraction at the right time
+
+C's use of types and ease of compiling down to efficient assembly code made it the perfect language for the rise of minicomputers, which speak in bytes. B was eventually overtaken by C. Once C became the language of Unix, it became the de facto standard across the budding computer industry. Unix was _the_ sharing platform of the pre-internet days. The more people wrote C, the better it got, and the more it was adopted.
It eventually became an open standard itself. According to the [Brief history of C programming language][28]:
+
+> "For many years, the de facto standard for C was the version supplied with the Unix operating system. In the summer of 1983 a committee was established to create an ANSI (American National Standards Institute) standard that would define the C language. The standardization process took six years (much longer than anyone reasonably expected)."
+
+How influential is C today? A [quick review][29] reveals:
+
+  * Parts of all major operating systems are written in C, including macOS, Windows, Linux, and Android.
+  * The world's most widely used databases, including DB2, MySQL, MS SQL, and PostgreSQL, are written in C.
+  * Many programming languages were first implemented in C, including Python, Go, Perl's core interpreter, and the R statistical language.
+
+
+
+Decades after they started as scrappy outsiders, Thompson and Ritchie are praised as titans of the programming world. They shared 1983's Turing Award, and in 1998, received the [National Medal of Technology][30] for their work on the C language and Unix.
+
+![Ritchie and Thompson receiving the National Medal of Technology from President Clinton, 1998][31]
+
+But Doug McIlroy and J.F. Ossanna deserve their share of praise, too. All four of them are true Command Line Heroes.
+
+### Wrapping up the season
+
+[Command Line Heroes][2] has completed an entire season of insights into the programming languages that affect how we code today. It's been a joy to learn about these languages and share them with you. I hope you've enjoyed it as well!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/10/command-line-heroes-c
+
+作者:[Matthew Broberg][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mbbroberg
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/commnad_line_hereos_ep8_header_opensourcedotcom.png?itok=d7MJQHFJ (In the finale of Command Line Heroes, we learn about the significant impact of C)
+[2]: https://www.redhat.com/en/command-line-heroes
+[3]: https://opensource.com/tags/command-line-heroes-podcast
+[4]: https://opensource.com/sites/default/files/uploads/2482009942_6caea217e0_c.jpg (The original C programming guide by two of the language authors, circa 1978)
+[5]: https://en.wikipedia.org/wiki/The_Idea_Factory
+[6]: https://opensource.com/open-organization/18/12/what-is-open-leadership
+[7]: https://opensource.com/19/9/command-line-heroes-bash
+[8]: https://opensource.com/article/19/9/command-line-heroes-lisp
+[9]: https://www.hup.harvard.edu/catalog.php?isbn=9780674970977
+[10]: https://spectrum.ieee.org/tech-history/cyberspace/the-strange-birth-and-long-life-of-unix
+[11]: https://opensource.com/sites/default/files/uploads/800px-pdp7-oslo-2005.jpeg (A PDP-7 minicomputer)
+[12]: https://en.wikipedia.org/wiki/Ken_Thompson
+[13]: https://en.wikipedia.org/wiki/Space_Travel_(video_game)
+[14]: https://en.wikipedia.org/wiki/PDP-7
+[15]: https://en.wikipedia.org/wiki/Unix
+[16]: https://en.wikipedia.org/wiki/Operating_system
+[17]: https://en.wikipedia.org/wiki/File_system#Aspects_of_file_systems
+[18]: https://en.wikipedia.org/wiki/Process_(computing)
+[19]: https://en.wikipedia.org/wiki/Device_file
+[20]: https://en.wikipedia.org/wiki/Command-line_interface#Command-line_interpreter
+[21]: https://en.wikipedia.org/wiki/Pipeline_(Unix)
+[22]: https://en.wikipedia.org/wiki/Brian_Kernighan
+[23]: https://en.wikipedia.org/wiki/B_(programming_language)
+[24]: https://en.wikipedia.org/wiki/C_(programming_language)
+[25]: https://www.bell-labs.com/usr/dmr/www/btut.pdf
+[26]: https://opensource.com/sites/default/files/uploads/640px-unix_history-simple.svg_.png (A diagram showing the key Unix and Unix-like operating systems)
+[27]: https://commons.wikimedia.org/w/index.php?curid=1801948
+[28]: http://cs-fundamentals.com/c-programming/history-of-c-programming-language.php
+[29]: https://www.toptal.com/c/after-all-these-years-the-world-is-still-powered-by-c-programming
+[30]: https://www.nsf.gov/od/nms/medal.jsp
+[31]: https://opensource.com/sites/default/files/uploads/medal.jpeg (Ritchie and Thompson receiving the National Medal of Technology from President Clinton, 1998)
diff --git a/sources/talk/20191003 Mobile App Security Tips to Secure Your Mobile Applications.md b/sources/talk/20191003 Mobile App Security Tips to Secure Your Mobile Applications.md
new file mode 100644
index 0000000000..2feea9a568
--- /dev/null
+++ b/sources/talk/20191003 Mobile App Security Tips to Secure Your Mobile Applications.md
@@ -0,0 +1,89 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Mobile App Security Tips to Secure Your Mobile Applications)
+[#]: via: (https://opensourceforu.com/2019/10/mobile-app-security-tips-to-secure-your-mobile-applications/)
+[#]: author: (Petr Kudlacek https://opensourceforu.com/author/petr-kudlacek/)
+
+Mobile App Security Tips to Secure Your Mobile Applications
+======
+
+[![][1]][2]
+
+ 
+
+_The world has gone mobile: nearly everyone now carries a smartphone with an Internet connection. With a mobile device, you can do everything online from the comfort of your home, whether that is banking, tracking your health or controlling the Internet of Things devices in your house._
+
+Today, the use of mobile applications is increasing constantly, and they dominate mobile internet usage. As per the Flurry report, mobile applications account for approximately 86 per cent of the average U.S. mobile user’s time, which amounts to more than two hours per day.
+
+Moreover, applications that are obtainable through online app distributors like the Google Play Store, Apple’s App Store and third-party marketplaces are, no doubt, the dominant way of delivering value to users across the world.
+
+[![][3]][4]Companies and organizations are also embracing mobile applications as a great way to boost employees’ skills and productivity, in keeping with their new agile and mobile lifestyle. But do you know whether these mobile apps are safe, secure and protected from any kind of malware?
+
+**What to do to Secure Your Mobile App?**
+
+If you have decided to develop an application or already have one, there is a chance that you have neglected to consider how to secure your mobile application, your data, and your customers’ data. A mobile application is more than the interface the user sees, however: there is the software code itself, the business logic on the back-end network and the client side, and the databases.
+
+All of these play a significant role in the fabric of the app’s security.
For companies that have mobile apps in a packed, competitive market, robust security is essential and can be a big differentiator. In this post, we mention a few tips to consider for mobile app security.
+
+**Essential Tips to Secure Your Mobile Apps**
+
+_Ensure that You Secure Your Network Connections On The Back-end_
+
+Servers and cloud services that an app’s API accesses need to have security measures in place in order to protect data and prevent unauthorized access. APIs and those accessing them need to be verified, to prevent snooping on sensitive information passing from the client back to the application’s server and database.
+
+  * If you want to store your data and important documents securely, containerization is one of the best methods, as it wraps them in encrypted containers.
+  * You can get in touch with a professional network security analyst who can conduct penetration testing and vulnerability assessments of your network, to make sure that the right data is safe in the right ways.
+  * Federation is a next-level security measure that spreads resources across servers so that they are not all in one place, and separates key resources from users with encryption measures.
+
+
+
+**Secure Transactions – Regulate the Implementation of Risky Mobile Transactions**
+
+Mobile applications allow users to interact with enterprise services on the go, but the risk tolerance for transactions varies. Therefore, it is essential for organizations to adopt an approach of risk-aware transaction execution, which restricts client-side functionality based on policies that weigh mobile risk factors like user location, device security attributes and the security of the network connection.
+
+Enterprise apps can leverage an enterprise mobile risk engine to correlate risk factors such as IP velocity – access to the same account from two locations that are far apart within a short period – before allowing client transactions.
+
+This approach extends the enterprise’s ability to detect and respond to complex attacks that span multiple interaction channels and seemingly unrelated security events.
+
+**[![][5]][6]Securing the Data – Stopping Data Theft and Leakage**
+
+When mobile applications access enterprise data, documents and unstructured information are often stored on the device. Whenever the device is lost or data is shared with non-enterprise apps, the potential for data loss is heightened.
+
+Many enterprises are already adopting remote wipe capabilities to address stolen or lost devices. Mobile data encryption can be used to secure data within the app sandbox against malware and other kinds of criminal access. When it comes to controlling the app’s data sharing on the device, individual data elements can be encrypted and controlled (a minimal sketch of this idea appears below, after the note on testing).
+
+**Test Your App’s Software, Then Test It Again**
+
+It is important to test an app’s code during the development process. Applications are being produced so rapidly that this essential step often falls by the wayside in the rush to speed up time to market. While testing functionality and usability, experts recommend also testing for security, whether the app is a native, hybrid or web app.
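+
+Before turning to the testing tips, here is a minimal sketch of the per-element encryption idea described above. It is written in Python with the third-party `cryptography` library purely to illustrate the concept; a real mobile app would use the platform’s own crypto APIs, and the key would live in a secure keystore:
+
+```python
+# Minimal sketch: encrypt individual data elements before they are written
+# to the app sandbox. Key handling here is deliberately naive; in production
+# the key must come from a secure keystore, never from source code.
+from cryptography.fernet import Fernet
+
+key = Fernet.generate_key()   # 32-byte URL-safe base64 key
+fernet = Fernet(key)
+
+# Encrypt one sensitive field rather than a whole database.
+account_number = b"AT61 1904 3002 3457 3201"
+token = fernet.encrypt(account_number)
+
+# Only code holding the key can recover the plaintext.
+assert fernet.decrypt(token) == account_number
+```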
+
+Testing reveals the vulnerabilities in the code, so that you can correct them before publishing your application. Here are some essential tips to consider:
+
+  * Make sure to test thoroughly for authentication and authorization, data security issues and session management.
+  * Penetration testing means purposely probing a network or system for weaknesses.
+  * Emulators for operating systems, devices and browsers let you test how an application performs in a simulated environment.
+
+
+
+Today, mobile devices and mobile apps are increasingly where most users are; however, that is also where most of the hackers are, waiting to steal your important and sensitive data. With a creative mobile security strategy and an experienced mobile app developer, you can respond rapidly to threats and keep your app safer. Consider the above-mentioned tips for securing your mobile applications as well.
+
+--------------------------------------------------------------------------------
+
+via: https://opensourceforu.com/2019/10/mobile-app-security-tips-to-secure-your-mobile-applications/
+
+作者:[Petr Kudlacek][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensourceforu.com/author/petr-kudlacek/
+[b]: https://github.com/lujun9972
+[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/MOHD3.png?resize=626%2C419&ssl=1 (MOHD3)
+[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/MOHD3.png?fit=626%2C419&ssl=1
+[3]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/MOHD1.png?resize=350%2C116&ssl=1
+[4]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/MOHD1.png?ssl=1
+[5]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/MOHD2.png?resize=350%2C233&ssl=1
+[6]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/MOHD2.png?ssl=1
diff --git a/sources/talk/20191003 Sharing vs. free vs. public- The real definition of open source.md b/sources/talk/20191003 Sharing vs. free vs. public- The real definition of open source.md
new file mode 100644
index 0000000000..31b4851639
--- /dev/null
+++ b/sources/talk/20191003 Sharing vs. free vs. public- The real definition of open source.md
@@ -0,0 +1,85 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Sharing vs. free vs. public: The real definition of open source)
+[#]: via: (https://opensource.com/article/19/10/shareware-vs-open-source)
+[#]: author: (Jeffrey Robert Kaufman https://opensource.com/users/jkaufman)
+
+Sharing vs. free vs. public: The real definition of open source
+======
+If you think open source is synonymous with shareware, freeware, and
+public domain, you are not alone.
+![Person in a field of dandelions][1]
+
+When you hear the term open source, do you think this is synonymous with terms such as shareware, freeware, or public domain? If so, you are not alone. Many people, both within and without the technology industry, think of these terms as one and the same. This article illustrates how these terms are different and how open source is a transformative licensing and development model. Perhaps the best way to explore the differences will be to share my experience with software provided under one of the above models.
+ +### Shareware and freeware + +My early years as a computer programmer started when I began to code in BASIC on my Apple II Plus in 1982. I recall going to the local computer store in my hometown and finding floppy diskettes in plastic bags containing software games and utilities for what seemed to be extraordinarily high prices. Keep in mind, this was from the perspective of a middle-schooler. + +There was, however, some software that was available for free or at a minimal price; this was referred to as shareware or freeware, depending on the exact licensing model. Under the shareware model, you could use the software for only a certain amount of time, and/or if you found it useful, then there was a request that you send in a check to the author of that software. + +Some shareware software, however, actually encouraged you to also make a copy and give it to your friends. This model is often referred to as freeware. That said, the exact definitions and differences between shareware and freeware are a bit soft, so it's collectively easiest to refer to both simply as "shareware." I cannot say for certain, but I doubt I ever provided money to any of the software authors for using their shareware, mainly because I had no money as an early teenager, but I sure enjoyed using these software programs and learned a lot about computers along the way. + +In retrospect, I realize now that I could have learned and accomplished so much more in my growth as a budding programmer if the software had been provided under open source license terms instead of shareware terms. This is because the source code (i.e., the human-readable form of software) is almost never provided with shareware. Shareware also contains licensing restrictions that prohibit the recipient from attempting to reveal the source code. Without access to the source code, it is extraordinarily difficult to learn how the software actually works, making it very difficult to expand or change its functionality. This leaves the end user completely dependent on the original shareware author for any changes or improvements. + +With the shareware model, it is practically impossible to enable any community of developers to leverage and further innovate around the code. There can also be further restrictions on redistribution and commercial usage. Although the shareware may be free in terms of price (at least initially), _it is not free in terms of freedom_ and does not allow you to learn and innovate by exploring the inner workings of the code. + +Which leads me to the big question: _How is this different from open source software?_ + +### The basics of open source licensing + +First, we need to understand that "open source" refers to a _licensing_ and a _software development model_ that are both significantly different than shareware. Under one form of open source called non-copyleft open source licensing, the user is provided key freedoms such as no restrictions on accessing source code; selling, using, or giving away the software for any purpose; or modifying the software. + +This form of license also does not require payment of any fee or royalty for use. One amazing outcome of this licensing model is its unique ability to enable countless software developers to collaborate on new and useful changes and innovations to the code because the license is highly permissive, requiring no negotiations for use. 
Although the source code is technically not required to be provided under such a license, it is almost always available for everyone to view, learn from, modify, and distribute to others.
+
+Another aspect of non-copyleft open source licensing is that any recipient of such software may add additional license restrictions. This means that the initial author who licensed the code under this form of license has no assurance that recipients will not relicense it to others under more restrictive terms. For example:
+
+> _Let us assume an author, Noah, wrote some software and distributed it under a non-copyleft open source license to a recipient, Aviva. Aviva then modifies and improves Noah's software, which she is entitled to do under the non-copyleft open source license terms. Aviva could then decide to add further restrictions to any recipients of her software that could limit its use, such as where or how it may be used (e.g., Aviva could add in a restriction that the software may only be used within the geographical boundaries of California and never in any nuclear power plant). Aviva could also opt to never release the modified source code to others even though she had access to the source code._
+
+Sadly, there are countless proprietary software companies that use non-copyleft open source licensed software in the way described immediately above. In fact, a shareware program could use non-copyleft open source licensed software by adding shareware-type restrictions (e.g., no access to source code or excluding commercial use), thereby converting non-copyleft open source licensed code to a shareware licensing model.
+
+Fortunately, many proprietary companies using non-copyleft open source licensed software see the benefits of releasing source code. These organizations often continue to perpetuate the open source model by providing their modified source code to their recipients or the broader open source community via software repositories like GitHub to enable a virtuous cycle of innovation. This isn't entirely out of benevolence (or at least it normally isn't): These companies want to encourage community innovation and further enhancements, which can benefit them further.
+
+At the same time, many proprietary companies do not opt to do this, which is well within the terms of non-copyleft open source licenses.
+
+### Copyleft-licensed open source software
+
+In 1989, a new open source license named the GNU General Public License, commonly known as the GPL, was developed with the objective of ensuring that software should be inherently free (as in free speech) and that these freedoms must always persist, unlike what sometimes happens with non-copyleft open source licensed software. In a unique application of copyright law, the GPL uses copyright to ensure perpetual software freedoms, so long as the rules are followed (more on that later). This unique use of copyright is called copy**left**.
+
+Like non-copyleft open source software, this license allows recipients to use the software without restriction, examine the source code, change the software, and make further distributions of the original or modified software to other recipients. _Unlike_ a non-copyleft open source license, the copyleft open source license absolutely requires that any recipients are also provided these same freedoms. They can never be taken away unless the rules are not followed.
+ +What makes the copyleft open source license enforceable and an incentive for compliance is the application of copyright law. If one of the recipients of copyleft code does not comply with the license terms (e.g., by adding any additional restrictions on the use of the software or not providing the source code), then their license terminates, and they become a copyright infringer because they no longer have legal permission to use the software. In this way, the software freedoms are ensured for any downstream recipients of that copyleft software. + +### Beyond the basics: Other software license models + +I mentioned public domain earlier—while it's commonly conflated with open source, this model is a bit different. Public domain means that steps have been taken to see that there are no applicable copyright rights associated with the software, which most often happens when the software copyright expires or is disclaimed by the author. (In many countries, the mechanism to disclaim copyright is unclear, which is why some public domain software may provide an option to obtain an open source-type license as a fallback.) No license is required to use public domain software; whether this makes it "open source" or not is the subject of much debate, though many would consider public domain a form of open source if the source code were made available. + +Interestingly, there are a significant number of open source projects that make use of small modules of public domain software for certain functions. There are even entire programs that claim to be in the public domain, such as SQLite, which implements a SQL database engine and is used in many applications and devices. It is also common to see software with no license terms. + +Many people incorrectly assume that such unlicensed software is open source, in the public domain, or otherwise free to use without restriction. In most countries, including the United States, copyright in software exists when it is created. This means that it cannot be used without permission in the form of a license, unless the copyright is somehow disclaimed, rendering it in the public domain. Some exceptions exist to this general rule, like the laws of implied licenses or fair use, but these are quite complex in how they may apply to a specific situation. I do not recommend providing software with no license terms when the intention is for it to be under open source license terms as this leads to confusion and potential misuse. + +### Benefits of open source software + +As I said previously, open source enables an efficient software development model with enormous ability to drive innovation. But what does this really mean? + +One of the benefits of the open source licensing model is a significant reduction in the friction around innovation, especially innovation done by other users beyond the original creator. This friction is limited because using open source code generally does not require the negotiation of license terms, thereby greatly simplifying and lowering any cost burden for use. In turn, this creates a type of open source ecosystem that encourages rapid modification and combination of existing technologies to form something new. These changes are often provided back into this open source ecosystem, creating a cycle of innovation. 
+
+Innumerable software programs, running everything from your toaster to Mars-bound spacecraft, are the direct result of this effortless ability to combine various programs together… all enabled by the open source development model.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/10/shareware-vs-open-source
+
+作者:[Jeffrey Robert Kaufman][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/jkaufman
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_dandelion_520x292.png?itok=-xhFQvUj (Person in a field of dandelions)
diff --git a/sources/talk/20191003 XMPP- A Communication Protocol for the IoT.md b/sources/talk/20191003 XMPP- A Communication Protocol for the IoT.md
new file mode 100644
index 0000000000..108867cfdd
--- /dev/null
+++ b/sources/talk/20191003 XMPP- A Communication Protocol for the IoT.md
@@ -0,0 +1,99 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (XMPP: A Communication Protocol for the IoT)
+[#]: via: (https://opensourceforu.com/2019/10/xmpp-a-communication-protocol-for-the-iot/)
+[#]: author: (Neetesh Mehrotra https://opensourceforu.com/author/neetesh-mehrotra/)
+
+XMPP: A Communication Protocol for the IoT
+======
+
+[![][1]][2]
+
+_First developed by the Jabber open source community in 1999 (and initially known as Jabber), the Extensible Messaging and Presence Protocol (XMPP) is now widely used as a communication protocol. Based on Extensible Markup Language (XML), XMPP enables fast, near-real-time exchange of data between multiple entities on a network._
+
+In contrast to most direct messaging protocols, XMPP is described in an open standard and uses an open systems approach of development and application, by which anyone may implement an XMPP service and interoperate with other organisations’ implementations. Since XMPP is an open set of rules, implementations can be developed using any software licence, and many server, client, and library XMPP implementations are distributed as free and open source software. Numerous freeware and commercial software implementations also exist.
+
+**XMPP: An overview**
+XMPP is an open set of rules for streaming XML elements in order to exchange messages and presence information in close to real time. The XMPP protocol is based on a typical client-server architecture, in which an XMPP client connects to an XMPP server over a TCP socket.
+XMPP provides a general framework for messaging across a network, offering a multitude of applications beyond traditional instant messaging (IM) and the distribution of presence data. It enables the discovery of services residing locally or across a network, as well as finding out about the availability of these services.
+XMPP is well-matched for cloud computing where virtual machines, networks and firewalls would otherwise present obstacles to alternative service discovery and presence-based solutions. Cloud computing and storage systems rely on diverse forms of communication over multiple levels, including not only messaging between systems to relay state but also the migration or distribution of larger objects, like storage or virtual machines.
Along with validation and in-transit data protection, XMPP can be useful at many levels and may prove ideal as an extensible middleware or a message-oriented middleware (MOM) protocol.
+
+![Figure 1: XMPP IM conversation][3]
+
+**Comparisons with MQTT**
+Given below are a few comparisons between the XMPP and MQTT protocols.
+
+ * MQTT is a lightweight publisher/subscriber protocol, which makes it a clear choice when implementing M2M on memory-constrained devices.
+ * MQTT does not define a message format; with XMPP you can define the message format and get structured data from devices. The defined structure helps validate messages, making it easier to handle and understand data coming from these connected devices.
+ * XMPP gives each device its own identity (also called a Jabber ID). In MQTT, identities are created and managed separately in broker implementations.
+ * XMPP supports federation, which means that devices from different manufacturers connected to different platforms can talk to each other with a standard communication protocol.
+ * MQTT has different levels of quality of service. This flexibility is not available in XMPP.
+ * MQTT deployments become difficult to manage when the number of devices increases, while XMPP scales very easily.
+
+
+
+**The pros and cons of XMPP**
+
+_**Pros**_
+
+ * Addressing scheme to recognise devices on the network
+ * Client-server architecture
+ * Decentralised
+ * Flexible
+ * Open standards and formalised
+
+
+
+_**Cons**_
+
+ * Text-based messaging with no provision for end-to-end encryption
+ * No provision for quality of service
+ * Protocol overhead usually accounts for more than 70 per cent of XMPP traffic, of which nearly 60 per cent is repeated; the protocol also carries a large data overhead when delivering to multiple recipients
+ * No native support for binary data
+ * Limited scope for stability
+
+
+
+![Figure 2: XML stream establishment][4]
+
+**How the XMPP protocol manages communication between an XMPP client and server**
+The features of the XMPP protocol that impact communication between the XMPP client and the XMPP server are described in Figure 1.
+Figure 2 depicts an XML message exchange between client ‘Mike’ and server ‘Ollie.org’.
+
+ * XMPP uses Port 5222 for client to server (C2S) communication.
+ * It utilises Port 5269 for server to server (S2S) communication.
+ * Discovery and XML streams are used for S2S and C2S communication.
+ * XMPP uses security mechanisms such as TLS (Transport Layer Security) and SASL (Simple Authentication and Security Layer).
+ * Unlike e-mail, there are no intermediate servers involved in federation.
+
+
+
+Direct messaging is used as a method for immediate message transmission to and reception from online users (Figure 3).
+
+![Figure 3: Client server communication][5]
+
+**XMPP via HTTP**
+As an alternative to the TCP protocol, XMPP can be used with HTTP in two ways: polling and binding. The polling method, now deprecated, essentially implies that messages stored on a server-side database are acquired by an XMPP client by way of HTTP ‘GET’ and ‘POST’ requests. The binding method, implemented as Bidirectional-streams Over Synchronous HTTP (BOSH), permits servers to push messages to clients as soon as they are sent. This push model of notification is more efficient than polling, wherein many of the polls return no new data.
+XMPP provides a lot of support for communication, making it well suited for use within the realm of the Internet of Things.
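To make the client-to-server flow described above concrete, here is a minimal sketch in Python using the third-party slixmpp library (the library choice, JIDs, and credentials are illustrative assumptions, not details from the article):

```python
# Minimal XMPP C2S sketch (assumes slixmpp: pip install slixmpp).
# All JIDs and the password are placeholders.
import slixmpp

class SensorReporter(slixmpp.ClientXMPP):
    """Connects to the server (C2S, port 5222, TLS), sends one reading, disconnects."""

    def __init__(self, jid, password, recipient, reading):
        super().__init__(jid, password)
        self.recipient = recipient
        self.reading = reading
        self.add_event_handler("session_start", self.session_start)

    async def session_start(self, event):
        self.send_presence()          # announce availability to the server
        await self.get_roster()
        # XMPP lets you define your own message format, so a structured
        # payload could be carried here instead of plain text.
        self.send_message(mto=self.recipient, mbody=self.reading, mtype="chat")
        self.disconnect()

xmpp = SensorReporter("device1@example.org", "secret",
                      "gateway@example.org", "temp=21.5C")
xmpp.connect()                        # opens the XML stream over TCP/TLS
xmpp.process(forever=False)
```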
+
+--------------------------------------------------------------------------------
+
+via: https://opensourceforu.com/2019/10/xmpp-a-communication-protocol-for-the-iot/
+
+作者:[Neetesh Mehrotra][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensourceforu.com/author/neetesh-mehrotra/
+[b]: https://github.com/lujun9972
+[1]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2017/06/IoT-Connecting-all-apps.jpg?resize=696%2C592&ssl=1 (IoT Connecting all apps)
+[2]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2017/06/IoT-Connecting-all-apps.jpg?fit=1675%2C1425&ssl=1
+[3]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Figure-1-XMPP-IM-conversation.jpg?resize=261%2C287&ssl=1
+[4]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Figure-2-XML-stream-establishment-350x276.jpg?resize=350%2C276&ssl=1
+[5]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Figure-3-Client-server-communication.jpg?resize=350%2C146&ssl=1
diff --git a/sources/talk/20191004 Chuwi GBox Pro Mini PC Review for Linux Users.md b/sources/talk/20191004 Chuwi GBox Pro Mini PC Review for Linux Users.md
new file mode 100644
index 0000000000..61e7fe5e7d
--- /dev/null
+++ b/sources/talk/20191004 Chuwi GBox Pro Mini PC Review for Linux Users.md
@@ -0,0 +1,117 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Chuwi GBox Pro Mini PC Review for Linux Users)
+[#]: via: (https://itsfoss.com/chuwi-gbox-pro-review/)
+[#]: author: (John Paul https://itsfoss.com/author/john/)
+
+Chuwi GBox Pro Mini PC Review for Linux Users
+======
+
+Early computers filled entire rooms. Since then, there has been a huge drive among computer manufacturers to make things smaller and smaller. Even regular desktops can be replaced with mini PCs these days.
+
+We have covered several [Linux mini-PCs][1] in the past. Today we shall take a look at the Chuwi GBox Pro.
+
+Chuwi is a computer manufacturer based in China. They are known for making good-looking but inexpensive devices. A few years back, some resellers used to rebrand Chuwi computers and sell them under their own brand name. Chuwi is now trying to expand its own brand visibility by selling Chuwi systems to a global audience.
+
+Chuwi contacted It’s FOSS and offered us the GBox Pro device to review for Linux users. Just because they offered something for free, it doesn’t mean we are going to favor them unnecessarily. I used the sample GBox Pro device with Linux and I am sharing my experience with this device. It’s up to you to make a decision about purchasing this gadget.
+
+_The Amazon links in the article are affiliate links. Please_ [_read our affiliate policy_][2]_._
+
+### Chuwi GBox Pro
+
+The [Chuwi GBox Pro][3] has a pretty small form factor. At 7.4 x 5.4 x 1.5 inches, it is about the size of a hardcover book. The body is made out of aluminum, so it is light, weighing only 1 lb 4 oz. It comes with an Intel Atom X7-E3950 quad-core CPU, Intel HD 505 graphics, 4 GB of DDR3 RAM, and 64 GB of storage on an eMMC. If that is not enough, you can add a 2.5-inch SATA drive for more storage.
+
+![Chuwi GBox Pro on my desk][4]
+
+The GBox Pro is a fanless computer, so it depends on its special design to keep it cool. It has vents around the top and bottom to let air circulate.
+
+Besides using it as a desktop computer, you can also use the GBox Pro as a media center. It comes with a VESA mount so you can attach it to a wall or behind a TV or monitor.
+
+![Ubuntu On Gbox Pro][5]
+
+The GBox Pro comes with a good number of ports. It has five USB ports in total: one USB Type-C, two USB 2.0, and two USB 3.0. It also has a built-in MicroSD reader. For video output, you can choose between VGA and HDMI. It also has an Ethernet jack, an audio-out port, and Bluetooth support.
+
+![Chuwi GBox Pro Ports][6]
+
+### Installation
+
+The system came with Windows 10 preinstalled, but they mention on their website that it also supports Linux. So, I tested it with two distros: Ubuntu and Manjaro.
+
+Overwriting Windows 10 with stock Ubuntu was fairly easy. The only worry I had during the install was a message that one of the partitions was mounted and needed to be unmounted to continue. This was the first time I installed Linux on an eMMC and I wonder if that was the issue.
+
+![Super Tux Kart On Gbox Pro][7]
+
+The GBox Pro didn’t run Ubuntu as well as I would have liked. I think GNOME might have been a little heavy for it. Overall performance wasn’t too bad, but when I had a couple of processes running at once (such as installing snaps and watching videos on YouTube) I felt a noticeable slowdown. Keep in mind, the GBox Pro has an Intel Atom CPU, not the more powerful Core i3 or Core i5.
+
+After I installed the [Manjaro Xfce edition][8], I didn’t feel like the system was dragging. It’s probably because [Xfce][9] is lighter than GNOME. On Manjaro, the system did everything I wanted quickly.
+
+![Manjaro Xfce on GBox Pro][10]
+
+### Experiencing Linux on Chuwi GBox Pro
+
+Overall, the GBox Pro is a nice little device with a few niggles that should be expected for this form factor, chip setup, and general price.
+
+One of the main talking points of the GBox Pro is its ability to run high-quality graphics, both for movies and games. I tried out several games, including Super Tux Kart, Warzone 2100, and Mr. Rescue. (Yes, I’m not much of a gamer.) These games ran fine, except I ran into an issue with Super Tux Kart where some of the maps flickered so much that they were almost unplayable.
+
+![Rifftrax On Gbox Pro][11]
+
+I also wanted to try HD movie playback. I don’t own a lot of digital movies and the one site I do use isn’t quite Linux friendly. However, I was able to watch a couple of 1080p videos on [Rifftrax][12] without issue. Chuwi claims that the GBox Pro supports 4K hard-decoding. I couldn’t test it though.
+
+As I mentioned above, you can add storage space by adding a SATA drive. I did not realize that was a feature until I started writing this review and was looking at the pictures on the [GBox Pro's Amazon page][13]. As a result, I removed the bottom panel to take a look. I like the fact that this is an option; unfortunately, they used six tiny screws to hold the bottom panel in place. I was worried I’d lose a couple. It also looks like the bottom panel keeps the drive in place. There is no room to screw it into the mounts.
+
+### Final Thoughts on Chuwi GBox Pro
+
+![Chuwi GBox Pro is fanless and thermal conductive aluminum alloy provide effective cooling][14]
+
+Overall, I like the GBox Pro. It has a nice small form factor, which makes it easy to set up and move. On the website, they say that you can carry it in your pocket, but I would not want to risk it.
The case has a cool design and I like the fact that it has a place to add a larger drive.
+
+It may not be as powerful as the [Intel NUC][15] but it is still a good enough device considering its modest price tag. You can use it for a [media server][16] or for medium to light computing. I didn’t use it as a media server but it works well for an entry-level desktop system.
+
+It’s FOSS has also asked the Chuwi team to launch Linux versions of their devices at a lower price than the Windows ones. Let’s see if they consider it in the future.
+
+If you think the [Chuwi GBox Pro][17] is a good fit for your needs, it is available to order from Aliexpress and Amazon. I recommend ordering on Amazon though. Please refer to this page for [warranty information][18].
+
+Preview | Product | Price | Buy
+---|---|---|---
+![CHUWI GBox Pro Fanless Mini PC, Intel Atom X7-E3950,Win10 \(64-bit\) Desktop Computer with 4GB DDR4/64GB eMMC, Support Gigabit Ethernet, Linux, BT 4.0, 4K, Dual WiFi][19] | [CHUWI GBox Pro Fanless Mini PC, Intel Atom X7-E3950,Win10 (64-bit) Desktop Computer with 4GB...][20] | $189.99[][21] | [Buy on Amazon][22]
+
+Have you ever used the GBox Pro or any other Chuwi products? Please share your experience with us in the comment section.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/chuwi-gbox-pro-review/
+
+作者:[John Paul][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/john/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/linux-based-mini-pc/
+[2]: https://itsfoss.com/affiliate-policy/
+[3]: https://www.chuwi.com/product/items/Chuwi-GBox-Pro.html
+[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/Chuwi-GBox-Pro.jpg?ssl=1
+[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/Ubuntu-on-GBox-Pro.jpg?ssl=1
+[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/10/chuwi-gbox-pro-ports.jpg?resize=800%2C230&ssl=1
+[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/Super-Tux-Kart-on-GBox-Pro.jpg?ssl=1
+[8]: https://manjaro.org/download/xfce/
+[9]: https://xfce.org/
+[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/Manjaro-on-GBox-Pro.jpg?resize=800%2C470&ssl=1
+[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/RiffTrax-on-GBox-Pro.jpg?resize=800%2C450&ssl=1
+[12]: https://www.rifftrax.com/
+[13]: https://www.amazon.com/CHUWI-Fanless-X7-E3950-Computer-Ethernet/dp/B07THWPRS1?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B07THWPRS1 (GBox Pro's Amazon page)
+[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/chuwi-gbox-pro.jpg?ssl=1
+[15]: https://itsfoss.com/install-linux-on-intel-nuc/
+[16]: https://itsfoss.com/best-linux-media-server/
+[17]: https://www.chuwi.com/product/buy/Chuwi-GBox-Pro.html
+[18]: https://www.chuwi.com/warranty.html
+[19]: https://i2.wp.com/images-na.ssl-images-amazon.com/images/I/41ZyikLlCGL._SL160_.jpg?ssl=1
+[20]: https://www.amazon.com/CHUWI-Fanless-X7-E3950-Computer-Ethernet/dp/B07THWPRS1?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B07THWPRS1 (CHUWI GBox Pro Fanless Mini PC, Intel Atom X7-E3950,Win10 (64-bit) Desktop Computer with 4GB DDR4/64GB eMMC, Support Gigabit Ethernet, Linux, BT 4.0, 4K, Dual WiFi)
[21]: https://www.amazon.com/gp/prime/?tag=chmod7mediate-20 (Amazon Prime)
+[22]: https://www.amazon.com/CHUWI-Fanless-X7-E3950-Computer-Ethernet/dp/B07THWPRS1?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B07THWPRS1 (Buy on Amazon)
diff --git a/sources/talk/20191004 DARPA looks for new NICs to speed up networks.md b/sources/talk/20191004 DARPA looks for new NICs to speed up networks.md
new file mode 100644
index 0000000000..7b94190ae9
--- /dev/null
+++ b/sources/talk/20191004 DARPA looks for new NICs to speed up networks.md
@@ -0,0 +1,62 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (DARPA looks for new NICs to speed up networks)
+[#]: via: (https://www.networkworld.com/article/3443046/darpa-looks-for-new-nics-to-speed-up-networks.html)
+[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
+
+DARPA looks for new NICs to speed up networks
+======
+The creator of the Internet now looks to speed it up by unclogging network bottlenecks.
+RVLsoft / Shulz / Getty Images
+
+The government agency that gave us the Internet 50 years ago is now looking to drastically increase network speed to address bottlenecks and chokepoints for compute-intensive applications.
+
+The Defense Advanced Research Projects Agency (DARPA), an arm of the Pentagon, has unveiled a computing initiative, one of many, that will attempt to overhaul the network stack and interfaces that cannot keep up with high-end processors and are often the choke point for data-driven applications.
+
+The DARPA initiative, Fast Network Interface Cards, or FastNICs, aims to boost network performance by a factor of 100 through a clean-slate transformation of the network stack from the application to the system software layers running on top of steadily faster hardware. DARPA is soliciting proposals from networking vendors.
+
+“The true bottleneck for processor throughput is the network interface used to connect a machine to an external network, such as an Ethernet, therefore severely limiting a processor’s data ingest capability,” said Dr. Jonathan Smith, a program manager in DARPA’s Information Innovation Office (I2O) in a statement.
+
+“Today, network throughput on state-of-the-art technology is about 10^14 bits per second (bps) and data is processed in aggregate at about 10^14 bps. Current stacks deliver only about 10^10 to 10^11 bps application throughputs,” he added.
+
+Many other elements of server design have seen leaps in performance, like memory, meshes, NVMe-over-Fabric, and PCI Express, but networking speed has been something of a laggard, getting minor bumps in speed and throughput by comparison. The fact is we’re still using Ethernet as our network protocol 46 years after Bob Metcalfe invented it at Xerox PARC.
+
+So DARPA’s program managers are using an approach that reworks existing network architectures.
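A quick back-of-the-envelope check of the figures Smith cites shows the size of the gap FastNICs is chasing; the short calculation below is only a sketch built from the numbers quoted above:

```python
# Figures quoted in the article: hardware moves ~10^14 bps, while network
# stacks deliver only ~10^10 to 10^11 bps to applications.
hardware_bps = 1e14
app_bps_low, app_bps_high = 1e10, 1e11

print(f"Best-case stack efficiency: {app_bps_high / hardware_bps:.1%}")   # 0.1%
print(f"Shortfall: {hardware_bps / app_bps_high:,.0f}x "
      f"to {hardware_bps / app_bps_low:,.0f}x")                           # 1,000x to 10,000x

# FastNICs' factor-of-100 goal applied to today's application throughput:
print(f"100x target: {100 * app_bps_low:.0e} to {100 * app_bps_high:.0e} bps")
# -> 1e+12 to 1e+13 bps; the upper end matches the 10 Tbps NIC hardware
#    the program asks researchers to demonstrate.
```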
The FastNICs program will select a challenge application and provide it with the hardware support it needs, operating system software, and application interfaces that will enable an overall system acceleration that comes from having faster NICs.
+
+Researchers will design, implement, and demonstrate 10 Tbps network interface hardware using existing or road-mapped hardware interfaces. The hardware solutions must attach to servers via one or more industry-standard interface points, such as I/O buses, multiprocessor interconnection networks and memory slots, to support the rapid transition of FastNICs technology.
+
+“It starts with the hardware; if you cannot get that right, you are stuck. Software can’t make things faster than the physical layer will allow so we have to first change the physical layer,” said Smith.
+
+The next step would be developing system software to manage FastNICs hardware. This open source software, based on at least one open source OS, would enable faster, parallel data transfer between network hardware and applications.
+
+Details on the proposal can be found [here][3].
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3443046/darpa-looks-for-new-nics-to-speed-up-networks.html
+
+作者:[Andy Patrizio][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Andy-Patrizio/
+[b]: https://github.com/lujun9972
+[1]: https://www.networkworld.com/newsletters/signup.html
+[2]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
+[3]: https://www.fbo.gov/index?s=opportunity&mode=form&id=fb5cfba969669de12025ff1ce2c99935&tab=core&_cview=1
+[4]: https://www.facebook.com/NetworkWorld/
+[5]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20191004 Quantum computing, the open source way.md b/sources/talk/20191004 Quantum computing, the open source way.md
new file mode 100644
index 0000000000..d075fcb00c
--- /dev/null
+++ b/sources/talk/20191004 Quantum computing, the open source way.md
@@ -0,0 +1,62 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Quantum computing, the open source way)
+[#]: via: (https://opensource.com/article/19/10/open-source-quantum-future)
+[#]: author: (Jaouhari Youssef https://opensource.com/users/jaouhari)
+
+Quantum computing, the open source way
+======
+Quantum computing is promising, provided we overcome hurdles preventing
+it from moving deeper into the real world.
+![A circuit design in lights][1]
+
+The quantum vision of reality is both strange and mesmerizing at the same time. As theoretical physicist [Michio Kaku][2] once said, "Common sense has no place in quantum mechanics."
+
+Knowing this is new and uncommon territory, we can expect quantum innovations to surpass anything we have seen before. The theory behind it will enable as-yet-unseen capabilities, but there are also some hurdles that are slowing it from being unleashed into the real world.
+
+By using the concepts of entanglement and superposition on quantum bits, a quantum computer can solve some problems faster than a classical computer.
For example, quantum computers are useful for solving [NP-hard][3] problems, such as the [Boolean satisfiability problem][4], known as the SAT problem. Using [Grover's algorithm][5], the complexity of evaluating a Boolean proposition of $n$ variables goes down from $O(n2^{n})$ to $O(n2^{n/2})$ by applying its quantum version.
+
+An even more interesting problem quantum computing can solve is the [Bernstein–Vazirani problem][6]: given a function $f$ of the form $f(x) = x \cdot s = x_{1}s_{1} + x_{2}s_{2} + x_{3}s_{3} + \dots + x_{n}s_{n} \pmod{2}$, you have to find the hidden string $s$. While the classical solution requires $n$ queries to find the solution, the quantum version requires only one query.
+
+Quantum computing is very valuable for security issues. One interesting riddle it answers is: How can two communicating parties share a key to encrypt and decrypt their messages without any third party stealing it?
+
+A valid answer would use [quantum key distribution][7], which is a method of communication that implements cryptographic protocols that involve quantum mechanics. This method relies on a quantum principle that "the measurement of a system generally disturbs it." Knowing that a third party measuring the quantum state would disturb the system, the two communicating parties can thereby know if a communication is secure by establishing a threshold for eavesdropping. This method is used for securing bank transfers in China and transferring ballot results in Switzerland.
+
+However, there are some serious hurdles to the progress of quantum computing to meet the requirements for industrial-scale use and deployment. First, quantum computers operate at temperatures near absolute zero since any heat in the system can introduce errors. Second, there is a scalability issue for quantum chipsets: with chips on the order of 1,000 qubits today, expanding to the millions or billions of qubits needed for fully fault-tolerant, error-corrected systems will require significant work.
+
+The best way to tackle real-life problems with quantum solutions is to use a hybridization of classic and quantum algorithms using quantum hardware. This way, the part of the problem that can be solved faster using a quantum algorithm can be transferred to a quantum computer for processing. One example would be using a quantum support vector machine for solving a classification problem, where the matrix-exponentiation task is handled by the quantum computer.
+
+The [Quantum Open Source Foundation][8] is an initiative to support the development of open source tools for quantum computing. Its goal is to expand the role of open source software in quantum computing, focusing on using current or near-term quantum computing technologies. The foundation also offers links to open courses, papers, videos, development tools, and blogs about quantum computing.
+
+The foundation also supports [OQS-OpenSSH][9], an interesting project that concerns quantum cryptography. The project aims to construct a public-key cryptosystem that will be safe even against quantum computing. Since it is still under development, using hybrid cryptography, with both quantum-safe public-key and classic public-key algorithms, is recommended.
+
+A fun way to learn about quantum computing is by playing [Entanglion][10], a two-player game made by IBM Research. The goal is to rebuild a quantum computer from scratch. The game is very instructive and could be a great way to introduce youth to the quantum world.
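To ground the Bernstein–Vazirani discussion above, here is a purely classical sketch (illustrative code, not from the article): it recovers the hidden string $s$ by probing $f$ with the $n$ unit vectors, which is exactly the $n$-query classical bound that the quantum algorithm beats with a single query.

```python
import random

def make_oracle(s):
    """The article's f(x) = x . s (mod 2) for a hidden bit string s."""
    return lambda x: sum(xi * si for xi, si in zip(x, s)) % 2

def classical_recover(f, n):
    """Classically, n queries suffice (and are needed): since f(e_i) = s_i,
    probing each unit vector e_i reveals one bit of s per query."""
    return [f([1 if j == i else 0 for j in range(n)]) for i in range(n)]

n = 8
s = [random.randint(0, 1) for _ in range(n)]
f = make_oracle(s)
assert classical_recover(f, n) == s  # n queries; the quantum version needs one
print("recovered:", classical_recover(f, n))
```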
+
+All in all, the mysteries of the quantum world haven't stopped amazing us, and they will surely continue into the future. The most exciting parts are yet to come!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/10/open-source-quantum-future
+
+作者:[Jaouhari Youssef][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/jaouhari
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/adi-goldstein-eusvweosble-unsplash.jpg?itok=8shMsRyC (Circuit design)
+[2]: https://en.wikipedia.org/wiki/Michio_Kaku
+[3]: https://en.wikipedia.org/wiki/NP-hardness
+[4]: https://en.wikipedia.org/wiki/Boolean_satisfiability_problem
+[5]: https://en.wikipedia.org/wiki/Grover%27s_algorithm
+[6]: https://en.wikipedia.org/wiki/Bernstein%E2%80%93Vazirani_algorithm
+[7]: https://en.wikipedia.org/wiki/Quantum_key_distribution
+[8]: https://qosf.org/
+[9]: https://github.com/open-quantum-safe/openssh-portable
+[10]: https://github.com/Entanglion/entanglion
diff --git a/sources/talk/20191004 Secure Access Service Edge (SASE)- A reflection of our times.md b/sources/talk/20191004 Secure Access Service Edge (SASE)- A reflection of our times.md
new file mode 100644
index 0000000000..3d8fc2f4ac
--- /dev/null
+++ b/sources/talk/20191004 Secure Access Service Edge (SASE)- A reflection of our times.md
@@ -0,0 +1,130 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Secure Access Service Edge (SASE): A reflection of our times)
+[#]: via: (https://www.networkworld.com/article/3442941/secure-access-service-edge-sase-a-reflection-of-our-times.html)
+[#]: author: (Matt Conran https://www.networkworld.com/author/Matt-Conran/)
+
+Secure Access Service Edge (SASE): A reflection of our times
+======
+Gartner claims that the shift to SASE will make existing networking and security models obsolete.
+RVLsoft / Shulz / Getty Images
+
+There’s a buzz in the industry about a new type of product that promises to change the way we secure and network our organizations. It is called the Secure Access Service Edge (SASE). It was first mentioned by Gartner, Inc. in its hype cycle for networking. Since then Barracuda highlighted SASE in a recent [PR update][1] and Zscaler also discussed it in their [earnings call][2]. Most recently, [Cato Networks][3] announced that it was mentioned by Gartner as a “sample vendor” in the hype cycle.
+
+Today, enterprises have upgraded their portfolios, and as a consequence the network must be enhanced to keep pace. What we are witnessing is the rise of cloud, mobility, and edge, which has resulted in increased pressure on the legacy network and security architecture. Enterprises are transitioning all users, applications, and data located on-premises to a heavy reliance on the cloud, edge applications, and a dispersed mobile workforce.
+
+### Our technologies must evolve
+
+Digital transformation improves agility and competitiveness. However, at the same time, it impacts the way we connect and secure these connections. Therefore, as the landscape evolves, so must our technologies. In such a scenario, the introduction of SASE is a reflection of this change.
+
+The new SASE category converges the capabilities of the WAN with network security to support the needs of the digital enterprise. Some of these disparate network and security services include SD-WAN, secure web gateway, CASB, software-defined perimeter, DNS protection, and firewall-as-a-service.
+
+Today, there are a number of devices that should be folded into a converged single software stack. There should be a fabric wherein all the network and security functionality can be controlled centrally.
+
+### SD-WAN forms part of the picture
+
+The hardest thing is to accept that what we have been doing in the past is not the best way forward for our organizations. The traditional methods of protecting mobile users, cloud assets, and sites are no longer the optimum way to support today's digital environment. Gartner claims that the shift to SASE will make the existing networking and security models obsolete.
+
+Essentially, SASE is not just about offering SD-WAN services. SD-WAN is just a part of the much bigger story since it doesn't address all the problems. For this, you need to support a full range of capabilities. This means you must support mobile users and cloud resources (from anywhere), in a way that doesn't require backhauling.
+
+Security should be embedded into the network, which some SD-WAN vendors do not offer. In essence, SASE says that SD-WAN alone is insufficient.
+
+### An overview of the SASE requirements
+
+Primarily, to provide secure access in this new era and to meet the operational requirements will involve relying heavily on cloud-based services. This is contrary to relying on a collection of on-premises network and security devices.
+
+To be SASE-enabled, the network and security domains should be folded into a cloud-native approach to networking and security. This provides significant support for all types of edges.
+
+To offer SASE services you need to fulfill a number of requirements:
+
+ 1. The convergence of WAN edge and network security models
+ 2. Cloud-native, cloud-based service delivery
+ 3. A network designed for all edges
+ 4. Identity and network location
+
+
+
+### 1\. The convergence of WAN edge and network security models
+
+Firstly, it requires the convergence of the WAN edge and network security models. Why? Because customers demand simplicity, scalability, low latency, and pervasive security, which together drive the requirement for the convergence of these models.
+
+So, we have a couple of options. One may opt to service-chain appliances, physical or virtual. Although this option shortens the time to market, it also results in inconsistent services, poor manageability, and high latency.
+
+Keep in mind that service insertion fragments the design into two separate domains: two different entities that are managed separately, which limits visibility. For Gartner, service-chaining solutions are not SASE.
+
+The approach is to converge both networking and security into the cloud. This creates a global and cloud-native architecture that connects and secures all the locations, cloud resources, and mobile users everywhere.
+
+SASE offerings will be purpose-built for scale-out, cloud-native, and cloud-based delivery. This will notably optimize the solution to deliver low-latency services.
+
+You need a cloud-native architecture to achieve the milestone of economy and agility. To deliver maximum flexibility with the lowest latency and resource requirements, a cloud-native, single-pass architecture is a very significant advantage.
+
+### 2\. Cloud-native, cloud-based service delivery
+
+Edge applications are latency-sensitive. Hence, these require networking and security to be delivered in a distributed manner, close to the endpoint. The edge is the new cloud, and it requires a paradigm shift from what cloud-based providers offer with a limited set of PoPs.
+
+The geographical footprint is critical, and effectively supporting these edge applications requires a cloud-delivery-based approach. Such an approach favors providers with many points of presence. Since the users are global, you must have global operations.
+
+It is not sufficient to offer a SASE service built solely on hyperscale data centers, as this limits the provider's number of points of presence. You need to deliver where the customers are, and to do this you need a global footprint and the ability to instantiate a PoP in response to customer demand.
+
+### 3\. A network designed for all edges
+
+The proliferation of the mobile workforce requires SASE services to connect with more than just sites. For this, you need to have an agent-based capability that should be managed as a cloud service.
+
+In plain words, SASE offerings that rely on the on-premises, box-oriented delivery model, or a limited number of cloud points of presence (without agent-based capability), will be unable to meet the requirements of an increasingly mobile workforce and the emerging latency-sensitive applications.
+
+### 4\. Identity and network location
+
+Let’s face it, there are now new demands on networks emerging from a variety of sources. This results in increased pressure on the traditional network and security architectures. Digital transformation and the adoption of mobile, cloud and edge deployment models, accompanied by the change in traffic patterns, make it imperative to rethink the place of legacy enterprise networks.
+
+To support these changes, we must reassess how we view the traditional data center. We must evaluate the way we use IP addresses as an anchor for the network location and security enforcement. Please keep in mind that anything tied to an IP address is useless as it does not provide a valid hook for network and security policy enforcement. This is often referred to as the IP address conundrum.
+
+SASE is the ability to deliver a network experience with the right level of secure access. This access is based on identity and real-time conditions, in accordance with company policy. Fundamentally, the traffic can be routed and prioritized in certain ways. This allows you to customize your level of security. For example, the user will get a different experience from a different location or device type. All policies are tied to the user identity and not based on the IP address.
+
+Finally, the legacy data center should no longer be considered the center of network architecture. The new center of secure access networking design is the identity, with a policy that follows it everywhere. Identities can be associated with people, devices, IoT or edge computing locations.
+
+### A new market category
+
+The introduction of the new market category SASE is a reflection of our current times. Technologies have changed considerably. The cloud, mobility, and edge have put increased pressure on the legacy network and network security architectures. Therefore, for some use cases, SASE will make the existing models obsolete.
+
+For me, this is an exciting time to see a new market category, and I will track this thoroughly with future posts. As we are in the early stages, there will be a lot of marketing buzz. My recommendation would be to line up the vendors claiming or mentioning SASE against the criteria set out in this post and see who does what.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3442941/secure-access-service-edge-sase-a-reflection-of-our-times.html
+
+作者:[Matt Conran][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Matt-Conran/
+[b]: https://github.com/lujun9972
+[1]: http://www.backupreview.info/2019/09/11/new-release-of-barracuda-cloudgen-firewall-automates-and-secures-enterprise-migrations-to-public-cloud/
+[2]: https://seekingalpha.com/article/4290853-zscaler-inc-zs-ceo-jay-chaudhry-q4-2019-results-earnings-call-transcript
+[3]: https://www.catonetworks.com/news/cato-networks-listed-for-sase-category-in-the-gartner-hype-cycle-2019
+[4]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
+[5]: https://www.networkworld.com/article/2297171/sd-wan/network-security-mpls-explained.html
+[6]: https://www.networkworld.com/contributor-network/signup.html
+[7]: https://www.facebook.com/NetworkWorld/
+[8]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20191004 What-s in an open source name.md b/sources/talk/20191004 What-s in an open source name.md
new file mode 100644
index 0000000000..e15ac57a28
--- /dev/null
+++ b/sources/talk/20191004 What-s in an open source name.md
@@ -0,0 +1,198 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (What's in an open source name?)
+[#]: via: (https://opensource.com/article/19/10/open-source-name-origins)
+[#]: author: (Joshua Allen Holm https://opensource.com/users/holmja)
+
+What's in an open source name?
+======
+Ever wonder where the names of your favorite open source projects or
+programming languages came from? Get the origin stories behind popular
+tech nomenclature from A to Z.
+![A person writing.][1]
+
+GNOME, Java, Jupyter, Python. If your friends or family members have ever eavesdropped on your work conversations, they might think you've made a career in Renaissance folklore, coffee roasting, astronomy, or zoology. Where did the names of these open source technologies come from? We asked our writer community for input and rounded up some of our favorite tech name origin stories.
+
+### Ansible
+
+The name "Ansible" is lifted directly from science fiction. Ursula Le Guin's book _Rocannon's World_ had devices allowing instantaneous (faster than light) communication called ansibles (derived, apparently, from the word "answerable"). Ansibles became a staple of science fiction, including in Orson Scott Card's _Ender's Game_ (which later became a popular film), where the device controlled many remote space ships. This seemed to be a good model for software that controls distributed machines, so Michael DeHaan (creator and founder of Ansible) borrowed the name.
+
+### Apache
+
+[Apache][2] is an open source web server that was originally released in 1995. Its name is not related to the famous Native American tribe; it instead refers to the repeated patches to its original software code. Hence, "A-patchy server."
+
+### awk
+
+"awk(1) Stands for Aho, Weinberger, Kernighan (authors)" —Michael Greenberg
+
+### Bash
+
+"The original Unix shell, the Bourne shell, was named after its creator. At the time Bash was being developed, csh (pronounced 'seashell') was actually more popular for interactive user logins. The Bash project aimed to give new life to the Bourne shell by making it more suitable for interactive use, thus it was named the 'Bourne again shell,' a pun on 'born again.'" —Ken Gaillot
+
+### C
+
+"In early days, Ken Thompson and Dennis Ritchie at AT&T found it interesting that you could use a higher-level programming language (instead of low-level and less-portable assembly programming) to write operating systems and tools. There was an early programming system called BCPL (Basic Combined Programming Language), and Thompson created a stripped-down version of BCPL called B. But B wasn't very flexible or fast. Ritchie then took the ideas of B and expanded it into a compiled language called C." —Jim Hall
+
+### dd
+
+"I don't think you can publish such an article without mentioning dd. My nickname is Didi. Correctly pronounced, it sounds like 'dd.' I first learned Unix, and then Linux, in 1993 as a student. Then I went to the army, arrived to one of the very few sections in my unit that used Unix (Ultrix) (the rest were mainly VMS), and one of the people there said: 'So, you are a hacker, right? You think you know Unix? OK, so what's the reason for the name dd?' I had no idea and tried to guess: "Data duplicator?" So he said, 'I'll tell you the story of dd. dd is short for _convert and copy_ (as anyone can still see today on the manpage), but since cc was already taken by the c compiler, it was named dd.' Only years later, I heard the true story about JCL's data definition and the non-uniform, semi-joking syntax for the Unix dd command somewhat being based on it." —Yedidyah Bar David
+
+### Emacs
+
+For the classic anti-vi editor, the true etymology of the name is unremarkable: it derives from "Editing MACroS." Being an object of great religious opprobrium and worship it has, however, attracted many spoof bacronyms such as "Escape Meta Alt Control Shift" (to spoof its heavy reliance on keystrokes), "Eight Megabytes And Constantly Swapping" (from when that was a lot of memory), "Eventually malloc()s All Computer Storage," and "EMACS Makes A Computer Slow." —Adapted from the Jargon File/Hacker's Dictionary
+
+### Enarx
+
+[Enarx][3] is a new project in the confidential computing space. One of the project's design principles was that it should be "fungible," so an initial name was "psilocybin" (the famed magic mushroom).
The general feeling was that manager types would probably be resistant, so new names were considered. The project's two founders, Mike Bursell and Nathaniel McCallum, are both ancient language geeks, so they considered lots of different ideas, including тайна (Tayna—Russian for secret or mystery—although Russian, admittedly, is not ancient, but hey), crypticon (total bastardization of Greek), cryptidion (Greek for small secret place), arcanus (Latin masculine adjective for secret), arcanum (Latin neuter adjective for secret), and ærn (Anglo-Saxon for place, secret place, closet, habitation, house, or cottage). In the end, for various reasons, including the availability of domains and GitHub project names, they settled on enarx, a combination of two Latin roots: en- (meaning within) and -arx (meaning citadel, stronghold, or fortress).
+
+### GIMP
+
+Where would we be without [GIMP][4]? The GNU Image Manipulation Program has been an open source staple for many years. [Wikipedia][5] states, "In 1995, [Spencer Kimball][6] and [Peter Mattis][7] began developing GIMP as a semester-long project at the University of California, Berkeley, for the eXperimental Computing Facility."
+
+### GNOME
+
+Have you ever wondered why GNOME is called GNOME? According to [Wikipedia][8], GNOME was originally an acronym that represented the "GNU Network Object Model Environment." Now that name no longer represents the project and has been dropped, but the name has stayed. [GNOME 3][9] is the default desktop environment for Fedora, Red Hat Enterprise, Ubuntu, Debian, SUSE Linux Enterprise, and more.
+
+### Java
+
+Can you imagine this programming language being named anything else? Java was originally called Oak, but alas, the legal team at Sun Microsystems vetoed that name due to its existing trademark. So it was back to the drawing board for the development team. [Legend has it][10] that a massive brainstorm was held by the language's working group in January 1995. Lots of other names were tossed around including Silk, DNA, WebDancer, and so on. The team did not want the new name to have anything to do with the overused terms, "web" or "net." Instead, they were searching for something more dynamic, fun, and easy to remember. Java met the requirements and miraculously, the team agreed!
+
+### Jupyter
+
+Many of today's data scientists and students use [Jupyter][11] notebooks in their work. The name Jupyter is an amalgamation of three open source computer languages that are used in the notebooks and prominent in data science: [Julia][12], [Python][13], and [R][14].
+
+### Kubernetes
+
+Kubernetes is derived from the Greek word for helmsman. This etymology was corroborated in a [2015 Hacker News][15] response by a Kubernetes project founder, Craig McLuckie. Wanting to stick with the nautical theme, he explained that the technology drives containers, much like a helmsman or pilot drives a container ship. Thus, Kubernetes was the chosen name. Many of us are still trying to get the pronunciation right (koo-bur-NET-eez), so K8s is an acceptable substitute. Interestingly, it shares its etymology with the English word "governor," so it has that in common with the mechanical negative-feedback device on steam engines.
+
+### KDE
+
+What about the K desktop? KDE originally represented the "Kool Desktop Environment." It was founded in 1996 by [Matthias Ettrich][16]. According to [Wikipedia][17], the name was a play on the words [Common Desktop Environment][18] (CDE) on Unix.
+
+### Linux
+
+[Linux][19] was named for its inventor, Linus Torvalds. Linus originally wanted to name his creation "Freax" as he thought that naming the creation after himself was too egotistical. According to [Wikipedia][19], "Ari Lemmke, Torvalds' coworker at the Helsinki University of Technology, who was one of the volunteer administrators for the FTP server at the time, did not think that 'Freax' was a good name. So, he named the project 'Linux' on the server without consulting Torvalds."
+
+Following are some of the most popular Linux distributions.
+
+#### CentOS
+
+[CentOS][20] is an acronym for Community Enterprise Operating System. It contains the upstream packages from Red Hat Enterprise Linux.
+
+#### Debian
+
+[Debian][21] Linux, founded in September 1993, is a portmanteau of its founder, Ian Murdock, and his then-girlfriend Debra Lynn.
+
+#### RHEL
+
+[Red Hat Linux][22] got its name from its founder Marc Ewing, who wore a red Cornell University fedora given to him by his grandfather. Red Hat was founded on March 26, 1993. [Fedora Linux][23] began as a volunteer project to provide extra software for the Red Hat distribution and got its name from Red Hat's "Shadowman" logo.
+
+#### Ubuntu
+
+[Ubuntu][24] aims to share open source widely and is named after the African philosophy of ubuntu, which can be translated as "humanity to others" or "I am what I am because of who we all are."
+
+### Moodle
+
+The open source learning platform [Moodle][25] is an acronym for "modular object-oriented dynamic learning environment." Moodle continues to be a leading platform for e-learning. There are nearly 104,000 registered Moodle sites worldwide.
+
+Two other popular open source content management systems are Drupal and Joomla. Drupal's name comes from the Dutch word "druppel," which means "drop." Joomla is an [anglicized spelling][26] of the Swahili word "jumla," which means "all together"; the word also appears in Arabic, Urdu, and other languages, according to Wikipedia.
+
+### Mozilla
+
+[Mozilla][27] is an open source software community founded in 1998. According to its website, "The Mozilla project was created in 1998 with the release of the Netscape browser suite source code. It was intended to harness the creative power of thousands of programmers on the internet and fuel unprecedented levels of innovation in the browser market." The name was a portmanteau of [Mosaic][28] and Godzilla.
+
+### Nginx
+
+"Many tech people try to be cool and say it 'n' 'g' 'n' 'x'. Few actually did the basic actions of researching a bit more to find out very quickly that the name is actually supposed to be said as 'EngineX,' in reference to the powerful web server, like an engine." —Jean Sebastien Tougne
+
+### Perl
+
+Perl's founder Larry Wall originally named his project "Pearl." According to Wikipedia, Wall wanted to give the language a short name with positive connotations. Wall discovered the existing [PEARL][29] programming language before Perl's official release and changed the spelling of the name.
+
+### Piet and Mondrian
+
+"There are two programming languages named after the artist Piet Mondrian. One is called 'Piet' and the other 'Mondrian.' [David Morgan-Mar [writes][30]]: 'Piet is a programming language in which programs look like abstract paintings. The language is named after Piet Mondrian, who pioneered the field of geometric abstract art. I would have liked to call the language Mondrian, but someone beat me to it with a rather mundane-looking scripting language.
Oh well, we can't all be esoteric language writers, I suppose.'" —Yuval Lifshitz
+
+### Python
+
+The Python programming language received its unique name from its creator, Guido Van Rossum, who was a fan of the comedy group Monty Python.
+
+### Raspberry Pi
+
+Known for its tiny-but-mighty capabilities and wallet-friendly price tag, the Raspberry Pi is a favorite in the open source community. But where did its endearing (and yummy) name come from? In the '70s and '80s, it was a popular trend to name computers after fruit. Apple, Tangerine, Apricot... anyone getting hungry? According to a [2012 interview][31] with founder Eben Upton, the name "Raspberry Pi" is a nod to that trend. Raspberries are also tiny in size, yet mighty in flavor. The "Pi" in the name alludes to the fact that, originally, the computer could only run Python.
+
+### Samba
+
+Samba takes its name from SMB, the [Server Message Block][32] protocol it implements for sharing Windows files on Linux.
+
+### ScummVM
+
+[ScummVM][33] (Script Creation Utility for Maniac Mansion Virtual Machine) is a program that makes it possible to run some classic computer adventure games on a modern computer. Originally, it was designed to play LucasArts adventure games that were built using SCUMM, which was originally used to develop Maniac Mansion before being used to develop most of LucasArts's other adventure games. Currently, ScummVM supports a large number of game engines, including Sierra Online's AGI and SCI, but still retains the name ScummVM. A related project, [ResidualVM][34], got its name because it covers the "residual" LucasArts adventure games not covered by ScummVM. The LucasArts games covered by ResidualVM were developed using GrimE (Grim Engine), which was first used to develop Grim Fandango, so the ResidualVM name is a double pun.
+
+### SQL
+
+"You may know [SQL] stands for Structured Query Language, but do you know why it's often pronounced 'sequel'? It was created as a follow-up (i.e. sequel) to the original 'QUEL' (QUEry Language)." —Ken Gaillot
+
+### XFCE
+
+[XFCE][35] is a popular desktop founded by [Olivier Fourdan][36]. It began as an alternative to CDE in 1996 and its name was originally an acronym for XForms Common Environment.
+
+### Zsh
+
+Zsh is an interactive login shell. In 1990, the first version of the shell was written by Princeton student Paul Falstad. He named it after seeing the login ID of Zhong Sha (zsh), then a teaching assistant at Princeton, and thought that it sounded like a [good name for a shell][37].
+
+There are many more projects and names that we have not included in this list. Be sure to share your favorites in the comments.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/10/open-source-name-origins
+
+作者:[Joshua Allen Holm][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/holmja
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003784_02_os.comcareers_resume_rh1x.png?itok=S3HGxi6E (A person writing.)
+[2]: https://httpd.apache.org/ +[3]: https://enarx.io +[4]: https://www.gimp.org/ +[5]: https://en.wikipedia.org/wiki/GIMP +[6]: https://en.wikipedia.org/wiki/Spencer_Kimball_(computer_programmer) +[7]: https://en.wikipedia.org/wiki/Peter_Mattis +[8]: https://en.wikipedia.org/wiki/GNOME +[9]: https://www.gnome.org/gnome-3/ +[10]: https://www.javaworld.com/article/2077265/so-why-did-they-decide-to-call-it-java-.html +[11]: https://jupyter.org/ +[12]: https://julialang.org/ +[13]: https://www.python.org/ +[14]: https://www.r-project.org/ +[15]: https://news.ycombinator.com/item?id=9653797 +[16]: https://en.wikipedia.org/wiki/Matthias_Ettrich +[17]: https://en.wikipedia.org/wiki/KDE +[18]: https://sourceforge.net/projects/cdesktopenv/ +[19]: https://en.wikipedia.org/wiki/Linux +[20]: https://www.centos.org/ +[21]: https://www.debian.org/ +[22]: https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux +[23]: https://getfedora.org/ +[24]: https://ubuntu.com/about +[25]: https://moodle.org/ +[26]: https://en.wikipedia.org/wiki/Joomla#Historical_background +[27]: https://www.mozilla.org/en-US/ +[28]: https://en.wikipedia.org/wiki/Mosaic_(web_browser) +[29]: https://en.wikipedia.org/wiki/PEARL_(programming_language) +[30]: http://www.dangermouse.net/esoteric/piet.html +[31]: https://www.techspot.com/article/531-eben-upton-interview/ +[32]: https://www.samba.org/ +[33]: https://www.scummvm.org/ +[34]: https://www.residualvm.org/ +[35]: https://www.xfce.org/ +[36]: https://en.wikipedia.org/wiki/Olivier_Fourdan +[37]: http://www.zsh.org/mla/users/2005/msg00951.html diff --git a/sources/talk/20191005 Machine Learning (ML) and IoT can Work Together to Improve Lives.md b/sources/talk/20191005 Machine Learning (ML) and IoT can Work Together to Improve Lives.md new file mode 100644 index 0000000000..8142c85416 --- /dev/null +++ b/sources/talk/20191005 Machine Learning (ML) and IoT can Work Together to Improve Lives.md @@ -0,0 +1,85 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Machine Learning (ML) and IoT can Work Together to Improve Lives) +[#]: via: (https://opensourceforu.com/2019/10/machine-learning-ml-and-iot-can-work-together-to-improve-lives/) +[#]: author: (Vinayak Ramachandra Adkoli https://opensourceforu.com/author/vinayak-adkoli/) + +Machine Learning (ML) and IoT can Work Together to Improve Lives +====== + +[![][1]][2] + +_IoT devices are becoming popular nowadays. The widespread use of IoT yields huge amounts of raw data. This data can be effectively processed by using machine learning to derive many useful insights that can become game changers and affect our lives deeply._ + +The field of machine learning is growing steadily, along with the growth of the IoT. Sensors, nano cameras, and other such IoT elements are now ubiquitous, placed in mobile phones, computers, parking stations, traffic control centres and even in home appliances. There are millions of IoT devices in the world and more are being manufactured every day. They collect huge amounts of data that is fed to machines via the Internet, enabling machines to ‘learn’ from the data and make them more efficient. + +In IoT, it is important to note that a single device/element can generate immense amounts of data every second. All this data from IoT is transmitted to servers or gateways to create better machine learning models. 
Data analytics software can convert this raw data into useful insights so that the machine can be made more intelligent, and perform better with cost-effectiveness and a long life. By the year 2020, the world will have an estimated 20 billion IoT devices. Data collected by these devices mostly pertains to machines. By using this data, machines can learn more effectively and can overcome their own drawbacks. + +Now let’s look at how machine learning and IoT can be combined. Let us suppose that I have some bananas and apples. I have got a sophisticated nano camera and sensors to collect the data from these fruits. If the data collected by these elements is fed to my laptop through the Internet, my laptop will start analysing the information by using sophisticated data analytics software and the cloud platform. Now if my laptop shows graphically how many bananas and apples I have left, it probably means that my machine (laptop) hasn’t learnt enough. On the other hand, if my laptop is able to describe graphically how many of these are now ripe enough to be eaten, how many are not quite ripe and how many are very raw, it proves that my machine (laptop) has learned enough and has become more intelligent. + +Storing, processing, analysing and ‘reasoning out’ IoT data requires considerable computational and financial resources before it delivers business and machine learning value. + +Today an Airbus aircraft is provided with thousands of sensors to measure temperature, speed, fuel consumption, air flow dynamics, mechanisms of working, etc. All this data provided by IoT devices is sent to cloud platforms such as IBM Watson, Microsoft Azure, etc, via the Internet. Using sophisticated data analytics software, useful information is fed back to the machine, i.e., the aircraft. Using this data, the machine can learn very fast to overcome its problems, so that its life span and performance can be greatly enhanced. + +Today, the IoT connects several sectors such as manufacturing industries, healthcare, buildings, vehicles, traffic, shopping centres and so on. Data gathered from such diverse domains can certainly make the infrastructure learn meaningfully to work more efficiently. + +**Giving a new deal to electronic vision** +Amazon DeepLens is a wireless-enabled video camera and is integrated with Amazon Cloud. It makes use of the latest AI tools to develop computer vision applications. Using deep learning frameworks such as Caffe, MXNet and TensorFlow, it can develop effective computer vision applications. The device can be effectively connected to Amazon IoT. It can be used to build custom models with Amazon SageMaker. Its efficiency can even be enhanced using Apache MXNet. In fact, Amazon DeepLens can be used in a variety of projects, ranging from safety and education to health and wellness. For example, individuals diagnosed with dementia have difficulty in recognising friends and even family, which can make them disoriented and confused when speaking with loved ones. Amazon DeepLens can greatly assist those who have difficulty in recognising other people. + +**Why postpone the smart city concept?** +Cities today are experiencing unprecedented population growth as more people move to urban areas, and are dealing with several problems such as pollution, surging energy demand, public safety concerns, etc. It is important to remember the lessons from such urban problems. It’s time now to view the smart city concept as an effective way to solve such problems.
Smart city projects take advantage of IoT with advanced AI algorithms and machine learning, to relieve pressure on the infrastructure and staff while creating a better environment. + +Let us look at the example of smart parking — it effectively solves vehicle parking problems. IoT monitoring today can locate empty parking spaces and quickly direct vehicles to parking spots. Today, up to 30 per cent of traffic congestion is caused by drivers looking for places to park. Not only does the extra traffic clog roadways, it also strains infrastructure and raises carbon emissions. +Today, smart buildings can automate central heating, air conditioning, lighting, elevators, fire-safety systems, the opening of doors, kitchen appliances, etc, using the IoT and machine learning (ML) techniques. + +Another important problem faced by smart cities is vehicle platooning (flocking). This situation can be avoided by the construction of automated highways and by building smart cars. IoT and ML together offer better solutions to avoid vehicle platooning. This will result in greater fuel economy, reduced congestion and fewer traffic collisions. + +IoT and ML can be effectively implemented in machine prognostics — an engineering discipline that mainly focuses on predicting the time at which a system or component will no longer perform its intended function. Thus, ML with IoT can be applied to system health management (SHM), e.g., in transportation applications, in vehicle health management (VHM) or engine health management (EHM). + +ML and IoT are rapidly attracting the attention of the defence and space sectors. Let’s look at the case of NASA, the US space exploration agency. As a part of a five-node network, XBee and ZigBee will be used to monitor Exo-Brake devices in space to collect data, which includes three-axis acceleration in addition to temperature and air pressure. This data is relayed to the ground control station via NASA’s Iridium satellite to make the ML models of the Exo-Brake instrument more efficient. + +Today, drones in military operations are programmed with ML algorithms. This enables them to determine which pieces of data collected by IoT are critical to the mission and which are not. They collect real-time data when in flight. These drones assess all incoming data and automatically discard irrelevant data, effectively managing data payloads. + +In defence systems today, self-healing drones are slowly gaining widespread acceptance. Each drone has its own ML algorithm as it flies on a mission. Using this, a group of drones on a mission can detect when one member of the group has failed, and then communicate with other drones to regroup and continue the military mission without interruption. + +In both the lunar and Mars projects, NASA is using hardened sensors that can withstand extreme heat and cold, high radiation levels and other harsh environmental conditions found in space to make the ML algorithms of the rovers more effective and hence increase their life span and reliability. + +In NASA’s Lunar Lander project, the energy choice was solar, which is limitless in space. NASA is planning to take advantage of IoT and ML technology in this sector as well. + +**IoT and ML can boost growth in agriculture** +Agriculture is one of the most fundamental human activities. Better technologies mean greater yield. This, in turn, keeps the human race happier and healthier.
According to some estimates, worldwide food production will need to increase by 70 per cent by 2050 to keep up with global demand. + +Adoption of IoT and ML in the agricultural space is also increasing quickly, with the total number of connected devices expected to grow from 30 million in 2015 to 75 million in 2020. + +In modern agriculture, all interactions between farmers and agricultural processes are becoming more and more data driven. Even analytical tools are providing the right information at the right time. Slowly but surely, ML is providing the impetus to scale and automate the agricultural sector. It is helping to learn patterns and extract information from large amounts of data, whether structured or unstructured. + +**ML and IoT ensure better healthcare** +Today, intelligent assisted-living environments for the home-based healthcare of chronic patients are essential. Such an environment combines the patient’s clinical history and a semantic representation of the ICP (individual care process) with the ability to monitor living conditions using IoT technologies. Thus the Semantic Web of Things (SWOT) and ML algorithms, when combined, result in an LDC (less differentiated caregiver). The resultant integrated healthcare framework can provide significant savings while improving general health. + +Machine learning algorithms, techniques and machinery are already present in the market to implement reasonable LDC processes. Thus, this technology is sometimes described as supervised or predictive ML. + +IoT in home healthcare systems comprises multi-tier area networks. These consist of body area networks (BAN), the LAN and ultimately the WAN. These also need highly secured hybrid clouds. +IoT devices in home healthcare include nano sensors attached to the skin of the patient’s body to measure blood pressure, sugar levels, heart rate, etc. This raw data is transmitted to the patient’s database that resides in the highly secured cloud platform. The doctor can access the raw data, previous prescriptions, etc, using sophisticated ML algorithms to recommend specific drugs to patients at remote places if required. Thus, patients at home can be saved from life-threatening health conditions such as sudden heart attacks, paralysis, etc. + +In this era of communication and connectivity, individuals have multiple technologies to support their day-to-day requirements. In this scenario, IoT together with ML is emerging as a practical solution for problems facing several sectors. + +Growth in IoT is fine, but just how much of the data collected by IoT devices is actually useful is the key question. To answer that, efficient data analytics software, open source platforms and cloud technologies should be used. Machine learning and IoT should work towards creating a better technology, which will ensure efficiency and productivity for all sectors.
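+
+To ground the home-healthcare scenario described above, here is a minimal, hypothetical sketch of the ‘sense, analyse, proact’ loop in Python. The sensor values, thresholds and labels are all invented for illustration; a real system would use clinically validated data, models and alerting infrastructure.
+
+```python
+# Hypothetical sketch only: simulated vital-sign readings are used to train
+# a classifier that flags readings needing a doctor's attention.
+import numpy as np
+from sklearn.ensemble import RandomForestClassifier
+
+rng = np.random.default_rng(42)
+
+# 'Sense': simulate historical readings from body-area-network sensors.
+# Columns: systolic blood pressure, blood sugar (mg/dL), heart rate (bpm).
+normal = rng.normal(loc=[120, 95, 72], scale=[8, 10, 6], size=(200, 3))
+at_risk = rng.normal(loc=[165, 180, 110], scale=[12, 25, 12], size=(200, 3))
+X = np.vstack([normal, at_risk])
+y = np.array([0] * 200 + [1] * 200)  # 0 = normal, 1 = needs attention
+
+# 'Analyse': fit a simple model on the labelled historical readings.
+model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
+
+# 'Proact': score a fresh reading as it arrives from the patient.
+new_reading = np.array([[158, 170, 104]])
+if model.predict(new_reading)[0] == 1:
+    print("Alert: reading flagged for review by a doctor.")
+else:
+    print("Reading within the expected range.")
+```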
+ +-------------------------------------------------------------------------------- + +via: https://opensourceforu.com/2019/10/machine-learning-ml-and-iot-can-work-together-to-improve-lives/ + +作者:[Vinayak Ramachandra Adkoli][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensourceforu.com/author/vinayak-adkoli/ +[b]: https://github.com/lujun9972 +[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/ML-IoT_Sept-19.jpg?resize=696%2C458&ssl=1 (ML & IoT_Sept 19) +[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/ML-IoT_Sept-19.jpg?fit=1081%2C711&ssl=1 diff --git a/sources/talk/20191006 Cloud Native Computing- The Hidden Force behind Swift App Development.md b/sources/talk/20191006 Cloud Native Computing- The Hidden Force behind Swift App Development.md new file mode 100644 index 0000000000..6b29dcb37f --- /dev/null +++ b/sources/talk/20191006 Cloud Native Computing- The Hidden Force behind Swift App Development.md @@ -0,0 +1,57 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Cloud Native Computing: The Hidden Force behind Swift App Development) +[#]: via: (https://opensourceforu.com/2019/10/cloud-native-computing-the-hidden-force-behind-swift-app-development/) +[#]: author: (Robert Shimp https://opensourceforu.com/author/robert-shimp/) + +Cloud Native Computing: The Hidden Force behind Swift App Development +====== + +[![][1]][2] + +_Cloud native computing can bolster the development of advanced applications powered by artificial intelligence, machine learning and the Internet of Things._ + +Modern enterprises are constantly adapting their business strategies and processes as they respond to evolving market conditions. This is especially true for enterprises serving fast-growing economies in the Asia Pacific, such as India and Australia. For these businesses, cloud computing is an invaluable means to accelerate change. From quickly deploying new applications to rapidly scaling infrastructure, enterprises are using cloud computing to create new value, build better solutions and expand business. + +Now cloud providers are introducing new ‘cloud native computing’ services that enable even more dynamic application development. This new technology will make cloud application developers more agile and efficient, even as it reduces deployment costs and increases cloud vendor independence. + +Many enterprises are intrigued but are also feeling overwhelmed by the rapidly changing cloud native technology landscape and hence, aren’t sure how to proceed. While cloud native computing has demonstrated success among early adopters, harnessing this technology has posed a challenge for many mainstream businesses. + +**Choosing the right cloud native open source projects** +There are several ways that an enterprise can bring cloud native computing on board. One option is to build its own cloud native environment using open source software. This comes at the price of carefully evaluating many different open source projects before choosing which software to use. Once the software is selected, the IT department will need to staff and train hard-to-find talent to provide in-house support. All in all, this can be an expensive and risky way to adopt new technology. 
+ +A second option is to contract with a software vendor to provide a complete cloud native solution. But this compromises the organisation’s freedom to choose the best open source technologies in exchange for better vendor support, not to mention the added perils of a closed contract. + +This dilemma can be resolved by using a technology provider that offers the best of both worlds — i.e., delivering standards-based off-the-shelf software from the open source projects designated by the Cloud Native Computing Foundation (CNCF), and also providing integration, testing and enterprise-class support for the entire software stack. + +CNCF uses experts from across the industry to evaluate the maturity, quality and security of cloud native open source projects and give guidance on which ones are ready for enterprise use. Selected cloud native technologies cover the entire scope of containers, microservices, continuous integration, serverless functions, analytics and much more. + +Once CNCF declares these cloud native open source projects as having ‘graduated’, they can confidently be incorporated into an enterprise’s cloud native strategy with the knowledge that these are high quality, mainstream technologies that will get industry-wide support. + +**Finding that single vendor who offers multiple benefits** +But adopting CNCF’s rich cloud native technology framework is only half the battle won. You also must choose a technology provider who will package these CNCF-endorsed technologies without proprietary extensions that lock you in, and provide the necessary integration, testing, support, documentation, training and more. + +A well-designed software stack built based on CNCF guidelines and offered by a single vendor has many benefits. First, it reduces the risks associated with technology adoption. Second, it provides a single point of contact to rapidly get support when needed and resolve issues, which means faster time to market and higher customer satisfaction. Third, it helps make cloud native applications portable to any popular cloud. This flexibility can help enterprises improve their operating margins by reducing expenses and unlocking future revenue growth opportunities. + +Cloud native computing is becoming an everyday part of mainstream cloud application development. It can also bolster the development of advanced applications powered by artificial intelligence (AI), machine learning (ML) and the Internet of Things (IoT), among others. + +Leading users of cloud native technologies include R&D laboratories; high tech, manufacturing and logistics companies; critical service providers and many others. 
+ +-------------------------------------------------------------------------------- + +via: https://opensourceforu.com/2019/10/cloud-native-computing-the-hidden-force-behind-swift-app-development/ + +作者:[Robert Shimp][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensourceforu.com/author/robert-shimp/ +[b]: https://github.com/lujun9972 +[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Cloud-Native_Cloud-Computing.jpg?resize=696%2C459&ssl=1 (Cloud Native_Cloud Computing) +[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Cloud-Native_Cloud-Computing.jpg?fit=900%2C593&ssl=1 diff --git a/sources/talk/20191007 DevOps is Eating the World.md b/sources/talk/20191007 DevOps is Eating the World.md new file mode 100644 index 0000000000..f7571da4c1 --- /dev/null +++ b/sources/talk/20191007 DevOps is Eating the World.md @@ -0,0 +1,71 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (DevOps is Eating the World) +[#]: via: (https://opensourceforu.com/2019/10/devops-is-eating-the-world/) +[#]: author: (Jens Eckels https://opensourceforu.com/author/jens-eckels/) + +DevOps is Eating the World +====== + +[![][1]][2] + +_Ten years ago, DevOps wasn’t a thing. Now, if you’re not adopting DevOps practices, you’re in danger of being left behind the competition. Over the last decade, JFrog’s Liquid Software vision has driven a commitment to helping companies around the world adopt, mature and evolve their CI/CD pipelines. Why? DevOps powers the software that powers the world. Most companies today are turning into software companies, with more and more applications to build and update. They have to manage releases quickly and securely, with distributed development teams and a growing amount of data._ + +**Our Mission** +JFrog is on a mission to enable continuous updates through liquid software, empowering developers to code high-quality applications that securely flow to end-users with zero downtime. We are the creators of [_Artifactory_][3], the heart of the end-to-end Universal DevOps platform for automating, managing, securing, distributing, and monitoring [_all types of technologies_][4]. As the leading universal, highly available enterprise DevOps solution, the [_JFrog platform_][5] empowers customers with trusted and expedited software releases from code-to-production. Trusted by more than 5,500 customers, the world’s top brands, such as Amazon, Facebook, Google, Netflix, Uber, VMware, and Spotify depend on JFrog to manage their binaries for their mission-critical applications. + +**“Liquid Software”** +In its truest form, Liquid Software updates software continuously from code to the edge seamlessly, securely and with no downtime. No versions. No big buttons. Just flowing updates that frictionlessly power all the devices and applications around you. Why? To an edge device or browser or end-user, versions don’t really have to matter. What version of Facebook is on your phone? You don’t care – until it’s time to update it and you get annoyed. What is the current version of the operating system on your laptop? You might know, but again you don’t really care as long as it’s up to date. How about your version of Microsoft products? The version of your favorite website? You don’t care.
You want it to work, and the more transparently it works the better. In fact, you’d prefer it most times if software would just update and you didn’t even need to click a button. JFrog is powering that change. + +**A fully Automated CI/CD Pipeline** +The idea of automating everything in the CI/CD pipeline is exciting and groundbreaking. Imagine a single platform where you could automate every step from code into production. It’s not a pipe dream (or a dream for your pipeline). It’s the Liquid Software vision: a world without versions. We’re excited about it, and eager to share the possibilities with you. + +**The Frog gives back!** +JFrog’s roots are in the many **open source** communities that are mainstays today. In addition to the many community contributions through global standards organizations, JFrog is proud to give enterprise-grade tools away for open source committers, as well as provide free versions of products for specific package types. There are “developer-first” companies that like to talk about their target market. JFrog is a developer company built by and for developers. We’re happy to support you. + +**JFrog is all over the world!** +JFrog has nine-and-counting global offices, including one in India, where we have a rapidly-growing team with R&D and support functions. **And, we’re hiring fast!** ([_see open positions_][6]). Join us and the Liquid Software revolution! + +**We are a proud sponsor of Open Source India** +As the sponsor of the DevOps track, we want to be sure that you see and have access to all the cool tools and methods available. So, we have a couple of amazing experiences you can enjoy: + + 1. Stop by the booth where we will be demonstrating the latest versions of the JFrog Platform, enabling Liquid Software. We’re excited to show you what’s possible. + 2. Join **Stephen Chin**, world-renowned speaker and night-hacker who will be giving a talk on the future of Liquid Software. Stephen spent many years at Sun and Oracle running teams of developer advocates. + + + +He’s a developer and advocate for your communities, and he’s excited to join you. + +**The bottom line:** JFrog is proud to be a developer company, serving your needs and the needs of the OSS communities around the globe with DevOps, DevSecOps and pipeline automation solutions that are changing how the world does business. We’re happy to help and eager to serve. + +JFrog products are available as [_open-source_][7], [_on-premise_][8], and [_on the cloud_][9] on [_AWS_][10], [_Microsoft Azure_][11], and [_Google Cloud_][12]. JFrog is privately held with offices across North America, Europe, and Asia. **Learn more at** [_jfrog.com_][13]. 
+ +-------------------------------------------------------------------------------- + +via: https://opensourceforu.com/2019/10/devops-is-eating-the-world/ + +作者:[Jens Eckels][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensourceforu.com/author/jens-eckels/ +[b]: https://github.com/lujun9972 +[1]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/07/DevOps-Statup-rising.jpg?resize=696%2C498&ssl=1 (DevOps Statup rising) +[2]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/07/DevOps-Statup-rising.jpg?fit=1460%2C1045&ssl=1 +[3]: https://jfrog.com/artifactory/ +[4]: https://jfrog.com/integration/ +[5]: https://jfrog.com/enterprise-plus-platform/ +[6]: https://join.jfrog.com/ +[7]: https://jfrog.com/open-source/ +[8]: https://jfrog.com/artifactory/free-trial/ +[9]: https://jfrog.com/artifactory/free-trial/#saas +[10]: https://jfrog.com/artifactory/cloud-native-aws/ +[11]: https://jfrog.com/artifactory/cloud-native-azure/ +[12]: https://jfrog.com/artifactory/cloud-native-gcp/ +[13]: https://jfrog.com/ diff --git a/sources/talk/20191008 Fight for the planet- Building an open platform and open culture at Greenpeace.md b/sources/talk/20191008 Fight for the planet- Building an open platform and open culture at Greenpeace.md new file mode 100644 index 0000000000..0e20d932f2 --- /dev/null +++ b/sources/talk/20191008 Fight for the planet- Building an open platform and open culture at Greenpeace.md @@ -0,0 +1,117 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Fight for the planet: Building an open platform and open culture at Greenpeace) +[#]: via: (https://opensource.com/open-organization/19/10/open-platform-greenpeace) +[#]: author: (Laura Hilliger https://opensource.com/users/laurahilliger) + +Fight for the planet: Building an open platform and open culture at Greenpeace +====== +Global problems require global solutions. Global solutions require open +organizations. Learn how Greenpeace is opening up to address climate +change. +![The Open Organization at Greenpeace][1] + +Global problems require global solutions. + +Few organizations know this better than Greenpeace. For nearly 50 years, the non-profit has been campaigning for a greener and more peaceful future. + +But in 2015, Greenpeace found itself at a crossroads. To address the climate emergency, Greenpeace knew it needed to [shift its organizational culture][2]. + +The organization needed a bold, new way of _being_. It needed to invest in new priorities, including digital technology and data analysis, inclusive and agile organizational structures, leadership supportive of culture change, new engagement practices, digital systems thinking, and more. It needed to facilitate the collective power of activists to embody [distributed leadership][3] and help the organization drive change. It needed to become [more transparent, more adaptable, and more collaborative][4]—and imbue those same values into a platform that would help others do the same as they joined forces to save the world. + +To address the ecological problems of the 21st century, Greenpeace needed to become a more open organization. + +And I helped Greenpeace International do it. But—as with any open effort—I didn't work alone. 
+ +As an [Open Organization Ambassador][5], a [writer in the open organization community][6], and co-founder of [a cooperative][7] working to spread the culture, processes, and benefits of openness wherever it can, I connected Greenpeace with the combined resources, perspectives, and energy of communities working to make the world a better place. + +Working with an organization in the midst of a massive cultural transition presented plenty of challenges for me—but my colleagues at Greenpeace and partners in the open organization community shared both their experience and their tried-and-true solutions for infusing open principles into complex organizations. + +In this three-part series, I'll explain how [Greenpeace International][8] built its first fully free and open source project: [Planet 4][9], a global platform that connects activists and advocates in search of opportunities to impact the climate crisis. + +The work itself is open. But, just as importantly, so is the spirit that guides it. + +### From secretive to open + +But I'm getting ahead of myself. Let's rewind to 2015. + +Like so many others concerned with the climate emergency, I headed to [Greenpeace.org][10] to learn how I could join the legendary organization's fight for the planet. What greeted me was an outdated, complicated, and difficult-to-use website. I (somehow) found my way to the organization's jobs page, applied, and landed an interview. + +As part of the interview process, I learned of an internal document circulating among Greenpeacers. In vivid and compelling terms, that document described [seven “shifts” Greenpeace would need to make][2] to its internal culture if it was to adapt to a changing social and political landscape. It was a new internal [storytelling][11] initiative aimed at helping Greenpeace both _imagine_ and _become_ the organization its staff wanted it to be. + +As I read the document—especially the part that described a desired shift “from secretive to open source”—I knew I could help. My entire career, I've used open principles to help guide people and projects to spark powerful, positive change in the world. Helping [a traditionally “secretive” organization][12] embrace openness and galvanize others in fighting for our planet was exactly the kind of work I wanted to do. + +I was all in. + +### Getting off the ground + +Greenpeace needed to return to one of its founding values: _transparency._ Its founders were [open by default][13]. Like any organization, Greenpeace will always have secrets, from the locations activists plan to gather for protests to supporters' credit card numbers. But consensus was that Greenpeace had grown _too_ secretive. What good is being a global hub for activism if no one knows what you're doing—or how to help you? + +Likewise, Greenpeace sought new methods of collaboration, both internally and with populations around the world. Throughout the 1970s, people-powered strategies helped the organization unleash new modes of successful activism. But today's world required even _more_. Becoming more open would mean accepting more unsolicited help, allowing more people to work toward shared goals in creative and ingenious ways, and extending greater trust and connection between the organization and its supporters. + +Greenpeace needed a new approach.
And that approach would be embodied in a new platform codenamed “Planet 4.” + +### Enter Planet 4 + +Planet 4 would be a tool that drove people to action. We would harness modern technology to help people visualize their impact on the planet, then help them understand how they can drive change. We would publish calls to action and engage with our supporters in [a new and meaningful way][14]. + +Getting off the ground would require monumental effort—not just technical skill, but superior persuasive and educational ability to _explain_ the need for an open culture project (disguised as a software project) to people at Greenpeace. Before we wrote a single line of code, we needed to do some convincing. + +Being radically open is a scary prospect, especially for a global organization that the press loves to scrutinize. Working transparently while others are watching means accepting a certain kind of vulnerability. Some people never leave their house unless they look perfect. Others go to work in shorts and sandals. Asking the question of "when to be open" is kind of like asking "when do we want to be perfectly polished and where do we get to hang out in our pajamas?" + +Being as open as we can, pushing the boundaries of what it means to work openly, doesn't just impact our work. It impacts our _identity_. It's certainly part of mine, and it's part of what makes open source so successful—but I knew I'd need to work hard to help Greenpeace change its identity. + +As I tried to figure out how we could garner support for a fully open project, the solution presented itself at a monthly meeting of the [Open Organization Ambassador community][5]. One day in June 2016, fellow ambassador Rebecca Fernandez began describing one of her recent projects at Red Hat: the [Open Decision Framework][15]. + +_Listen to Rebecca Fernandez explain the Open Decision Framework._ + +And as she presented it, I knew right away I would be able to remix that tool into a language that would help Greenpeace leaders understand the power of thinking and acting openly. + +_Listen to Rebecca Fernandez explain Greenpeace's use of the Open Decision Framework._ + +It worked. And so began our journey with Planet 4. + +We had a huge task in front of us. We initiated a discovery phase to research what stakeholders needed from the new engagement platform. We held [community calls][16]. We published blog posts. We designed a software concept to rattle the bones of non-profit technology. We talked openly about our work. And we spent the next two years helping our glimmer of an idea become a functioning prototype launched in more than 35 countries all over the globe. + +We'd been successful. Yet the vision we'd developed—the vision of a platform built on modern technologies and capable of inspiring people to act on behalf of our planet—hadn't been fully realized. We needed help seeing the path from successful prototype to world-changing engagement platform. + +So with the support of a few strategic Greenpeacers, I reached out to some colleagues at Red Hat the only way I knew how—the open source way, by starting a conversation. We've started a collaboration between our two organizations, one that will enable us all to build and learn together.
This effort will help us design community-driven, engaging, open systems that spur change. + +This is just the beginning of our story. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/open-organization/19/10/open-platform-greenpeace + +作者:[Laura Hilliger][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/laurahilliger +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/images/open-org/open-org-greenpeace-article-blog-header-thumbnail.png?itok=M8Y0WQOT (The Open Organization at Greenpeace) +[2]: https://opensource.com/open-organization/16/1/greenpeace-makes-7-shifts-toward-open +[3]: https://opensource.com/open-organization/18/3/empowerment-and-leadership +[4]: https://opensource.com/open-organization/resources/open-org-definition +[5]: https://opensource.com/open-organization/resources/meet-ambassadors +[6]: https://opensource.com/users/laurahilliger +[7]: http://weareopen.coop +[8]: http://greenpeace.org/international +[9]: http://medium.com/planet4 +[10]: https://www.greenpeace.org/ +[11]: https://storytelling.greenpeace.org/ +[12]: https://opensource.com/open-organization/15/10/using-open-source-fight-man +[13]: https://www.youtube.com/watch?v=O49U2M1uczQ +[14]: https://medium.com/planet4/greenpeaces-engagement-content-vision-fbd6bb66018a#.u0gmrzf0f +[15]: https://opensource.com/open-organization/resources/open-decision-framework +[16]: https://opensource.com/open-organization/16/1/community-calls-will-increase-participation-your-open-organization diff --git a/sources/talk/20191009 -Open Standards Are The Key To Expanding IoT Innovation In India Market.md b/sources/talk/20191009 -Open Standards Are The Key To Expanding IoT Innovation In India Market.md new file mode 100644 index 0000000000..b359bddbe6 --- /dev/null +++ b/sources/talk/20191009 -Open Standards Are The Key To Expanding IoT Innovation In India Market.md @@ -0,0 +1,87 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (“Open Standards Are The Key To Expanding IoT Innovation In India Market”) +[#]: via: (https://opensourceforu.com/2019/10/open-standards-are-the-key-to-expanding-iot-innovation-in-india-market/) +[#]: author: (Ankita KS https://opensourceforu.com/author/ankita-ks/) + +“Open Standards Are The Key To Expanding IoT Innovation In India Market” +====== + +[![][1]][2] + +_With IoT creating a new digital realm today, it brings along intense connectivity requirements and the need for standards that let devices communicate with each other through a common protocol. For these reasons, interoperability is the most defining and crucial element in the Internet of Things. To understand the landscape of IoT in India, its standards and interoperability challenges, and the solutions to them, **Ankita KS, Senior Technology Journalist** from the **EFY group**, spoke to **Dr. Aloknath De, Corporate Vice President** and **Chief Technology Officer, Samsung R&D Institute India – Bangalore**. Excerpts follow._ + +**Q: Many opine that IoT is just a buzzword coined by marketers to create hype. Do you agree with that line of thought? Or do you feel that IoT is opening an entirely new market?** + +**A.** IoT is not hype, so I could not disagree more with that line of thought.
I view IoT not only as ‘Internet of Things’, but also as ‘Integration of Technologies’. It allows multiple protocols for connectivity, various cloud technologies and humongous amounts of sensor data. Also, as multiple stakeholders are involved in this process of integration, every stakeholder has to be brought together, business cases have to be analyzed and revenue models have to be evolved. This process is time-consuming and takes effort to fructify. + +**Q: In the past year, what are the new developments in R&D with respect to IoT globally?** +**A.** Globally, different IoT technologies have emerged and some have been embraced. You can see new types of devices getting connected to the internet; this would not have been thought of 5 years ago. In the Home context, I group these IoT devices into three broad classes. First is the ‘Elephant Class’ – comprising large appliances like the refrigerator, washing machine, AC and television, which sit in their primary locations. Second is the ‘Horse Class’, comprising the speaker, smartphone and smart watch that “trot” with the person. And the third is the ‘Rat Class’, comprising smaller sensors like the thermostat, door sensor and occupancy sensor. Some of these devices do not always benefit from data, but they enrich the IoT system by feeding sensor data. Like the Smart Home scenario, we can also define devices in the context of Manufacturing, Industry and other verticals. + +Standards are emerging to cover multiple layers in IoT. Plug-fests are being done across the globe to test interoperability. Companies are investing in R&D to make devices smarter, more secure and interoperable. Also, a trend is emerging in the control of home appliances over voice interactions. IoT in R&D is changing gears from mere automation to predictive services. The ‘Sense, Analyze, React’ paradigm is shifting to ‘Sense, Analyze and Proact’. + +**Q. Are you satisfied with the rate of deployment of IoT solutions in India?** +**A.** Both Yes and No. The rate of deployment of IoT is different for different verticals. For some verticals, like Smart Home, Smart Manufacturing and Smart Logistics, it’s going relatively well. You can see smart security and surveillance being adopted in apartment complexes; predictive maintenance being adopted for heavy machinery in the industries; asset tracking and monitoring being adopted in Smart Logistics. + +Whereas for some verticals like Smart Cities, only initial steps have been taken. And then some verticals require more attention to resolve major technical challenges, like Connected Healthcare (e.g. electronic health records, health regulations) and Smart Agriculture (e.g. open spaces, placement of sensors, etc.). Overall, the Indian IoT market is still in its infancy; I would say in the exploratory phase, with use cases being piloted primarily in the Home, Manufacturing and Logistics verticals. + +**Q. How do you see the IoT market (in India) evolving in the next 2-3 years?** +**A.** In the next 2-3 years, we expect the number of connected devices in India to increase significantly, with demand generated from both consumer and industrial applications. Nasscom predicts that there will be 2.7 billion connected devices by 2020, leading to an economic impact of up to $15B USD. The Indian IoT market will represent more than 5% of the global market size. However, more than half of this market growth is locked by the lack of interoperability standards.
Standards will play a major role in bringing in the required interoperability and reducing the lead time to develop IoT solutions and services. Smaller companies including startups will join hands with the MNCs to participate in a bigger ecosystem, as IoT is purely an ecosystem play rather than an individual company trying to solve problems. + +**Q. Which industry segments do you believe will be driving a larger chunk of demand for IoT in India? Why?** +**A.** On the consumer side, Smart Home and Building has been driving the demand for IoT as of today. Real Estate developers are offering Smart Homes with lighting automation and security as primary solutions. Access control, Smart Street Lighting and common area surveillance are some primary use cases which are being deployed in real-world scenarios. + +On the Enterprise side, Smart Manufacturing and Logistics drive major demand. Predictive maintenance of machinery is a high revenue-driven business case, helping in the drastic reduction of losses by tracking the downtime/uptime of machinery. Asset tracking and management have been there for a while, but with multiple IoT sensors coming in, like temperature, moisture, weight, etc., the continuous monitoring and tracking of assets has improved in accuracy. + +**Q. Almost everyone agrees that there is a dire need for standards for the IoT ecosystem to develop. And this has been stated for quite some time now. What’s causing the delay? What’s missing that needs to happen?** +**A.** India has moved from what and why to how in its IoT journey and evolution. IoT standards are still a work in progress in terms of adoption. There are multiple standards available in the market with overlapping functionalities, using different technical approaches, battling it out for IoT dominance. Some technologies are less proven than others, and so the goal of the standards body has to be to legitimize the technology and make customers feel safe in adopting it. Open standards are the key to expanding IoT innovation in the India market. With open standards, there is a higher chance of finding the right resources to integrate the required technology into a successful IoT solution. + +**Q. For someone new to the concept—how would you explain the challenges caused by the lack of standards (interoperability)?** +**A.** With the rise of connected things and machines, there will be billions of connected devices, including sensors, appliances, machinery, etc., generating terabytes of data. Security becomes one of the biggest concerns with this number of devices and this much data. Also, the IoT market has become so fragmented that it has led to the development of multiple vertical-specific standards, various connectivity protocols and smaller groups working towards solving individual problems. What the industry needs is a secure consolidation of standards to achieve true interoperability. + +**Q. What are the key areas in which standards are needed first?** +**A.** Smart Manufacturing and Industry 4.0 would need the standards first. The Industrial Internet Consortium has already started working towards defining the standards for the industry vertical. It also has a liaison with the Open Connectivity Foundation to combine the best of both frameworks. Smart Homes in India would not work in the Do-It-Yourself mode, at least in the coming 2-3 years. The market penetration will happen primarily through the Real Estate Builders (B2B2C) working closely with System Integrators.
So, the devices going into the homes need to be standardized for seamless deployment of the smart home solution. Connected Health would be next, and Agriculture a good-to-have. + +**Q. Would there be a need for different standards for different nations (e.g. India) or would it be better to adopt global standards?** +**A.** Significant portions of standards are applicable globally. However, every nation has its own set of requirements. For example, a connected fan is something which could be found only in India and other emerging countries. But building a standard from scratch for a specific country needs a lot of resources, time and investment. Bringing in best practices from global markets opens up a lot of opportunities for companies to learn from, and also to customize for country-specific requirements on top of the established core framework. Also, companies get access to a bigger global ecosystem by adopting global standards. + +Global standards bodies are working together in a unified approach to address specific requirements and bring in the best of both through consolidation. One such example is the asymmetrical bridge function between oneM2M and OCF, exposing proximal OCF devices to distal oneM2M devices. While OCF is a device-centric common addressing framework with a current focus on Smart Home, oneM2M is a cloud service layer with a focus on Smart Cities; working in tandem, they open up a whole new set of use cases and standards interworking. + +**Q. How does OCF aim to solve this challenge of standards & interoperability?** +**A.** OCF stands for Open Connectivity Foundation. It is an open-source consortium that delivers “just-works” interconnectivity for developers, manufacturers, and end-users. Through a common communication framework, OCF connects and manages information flow intelligently among devices. This framework is agnostic of device form factor, operating system, manufacturer or even service provider. + +**Q. What category of standards is OCF working on? And what’s not within OCF’s domain, but important for the IoT eco-system?** +**A.** OCF’s current focus is developing standards for Smart Home and Buildings. It aims to build standards for Health, Automotive, and Industry, possibly in conjunction with partners going forward. But OCF’s common addressing scheme and core framework enable it to be easily adapted to other verticals. + +**Q. What is the main vision and objectives behind OCF India Forum?** +**A.** OCF India Forum is a local chapter of OCF established in April 2019, with 45+ member companies and Nasscom CoE IoT as its Executive Office. The OCF India Steering Committee comprises representatives from Nasscom, Samsung, Intel, L&T Technology Services and Granite River Labs. + +OCF India’s vision and objectives are twofold: one is to facilitate small-size companies, including SMEs and startups, to easily build interoperable IoT products using the open source implementation of the OCF spec (IoTivity). The second is to build and engage the Indian IoT ecosystem, including industry, startups, universities, government and the dev community, to work towards the common cause of IoT standardization. + +**Q. What should happen within the next year, and the next three years—to indicate that OCF India is succeeding in its charter?** +**A.** Within a year, more companies should join OCF India Forum (50 in 2019 and 100+ overall by 2020), with the developer community contributing actively to IoTivity.
In the next 3 years, Indian MNCs will be adopting OCF to create a bigger IoT ecosystem for the startups. OCF will be recognized as a National Standard alongside oneM2M, covering ‘proximal’ and ‘distal’ devices, and OCF will have a certification lab in India. + +**Q. What role are partners like NASSCOM playing in enabling OCF India to achieve its mission?** +**A.** The Center of Excellence (CoE) for IoT is a joint initiative of Nasscom, MeitY, and ERNET. It was established in India to jump-start the Indian IoT ecosystem by helping IoT startups leverage cutting-edge technology and build market-ready products. Nasscom CoE IoT will serve as the Executive Office of OCF India Forum. Having a neutral entity represent and drive the standards, the initiative encourages faster ecosystem growth and adoption. With a well-established network of industry, academia, startups and government, Nasscom CoE IoT will help OCF India towards a much-focused ecosystem engagement drive. + +**Q. Any initiatives executed, and any planned in the near future—by OCF India?** +**A.** OCF India has been participating in IoT events across India since 2018 in the form of seminars, exhibition booths and workshops; to name a few – IoTShow 2019, the 3rd IoT India Expo 2019 and the 7th IoT Symposium 2018. A series of hands-on workshops is planned to engage the gov’t and dev community. Also in line are some of the premier IoT events in India where OCF India will participate this year, like IoT India Congress 2019 and IoTNext 2019. The flagship event, “OCF India Day”, will be co-located with Open Source India 2019. + +-------------------------------------------------------------------------------- + +via: https://opensourceforu.com/2019/10/open-standards-are-the-key-to-expanding-iot-innovation-in-india-market/ + +作者:[Ankita KS][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensourceforu.com/author/ankita-ks/ +[b]: https://github.com/lujun9972 +[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/samsung-aloknath.jpg?resize=640%2C360&ssl=1 (samsung-aloknath) +[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/samsung-aloknath.jpg?fit=640%2C360&ssl=1 diff --git a/sources/talk/20191009 Things You Should Know If You Want to Become a Machine Learning Engineer.md b/sources/talk/20191009 Things You Should Know If You Want to Become a Machine Learning Engineer.md new file mode 100644 index 0000000000..a404fd55af --- /dev/null +++ b/sources/talk/20191009 Things You Should Know If You Want to Become a Machine Learning Engineer.md @@ -0,0 +1,76 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Things You Should Know If You Want to Become a Machine Learning Engineer) +[#]: via: (https://opensourceforu.com/2019/10/things-you-should-know-if-you-want-to-become-a-machine-learning-engineer/) +[#]: author: (Samantha https://opensourceforu.com/author/samantha/) + +Things You Should Know If You Want to Become a Machine Learning Engineer +====== + +[![][1]][2] + +_Machine learning is the next big thing in the market. Have you seen machines that perform tasks without any human involvement? Building such machines is what machine learning engineers do.
They develop machines and systems which can learn and apply knowledge._ + +**How is artificial intelligence changing the job scenario for machine learning engineers?** +Artificial intelligence and machine learning have been successful in touching almost every aspect of our daily life. It may be voice-activated virtual assistants like Siri and Alexa, or predictive technologies used by companies like Netflix and Amazon to better understand their customers. + +Artificial intelligence makes computers do tasks which earlier needed human intelligence, while machine learning is about building algorithms that help machines identify patterns and thus gain better insight into data. Countries around the world are continuously working on strategies and initiatives to guide the development of artificial intelligence. + +Lately, organizations from almost every sector are investing in AI tools and techniques, thus boosting their companies. Currently, AI investments are being dominated by large tech companies like Baidu, Microsoft, Google, Apple, Facebook, and so on. And almost 10%-30% of non-tech companies are adopting artificial intelligence, depending upon their industry. + +There has been considerable advancement in the automobile industry with the implementation of artificial intelligence in vehicles. Self-driving cars would have been impossible without IoT working closely with AI. Then there is the facial recognition feature by Google, which helps to identify a person using digital images or patterns. These technologies are changing the way people expect their lives to be. + +As per a recent [_study_][3], artificial intelligence will be creating almost 58 million new jobs by 2022, bringing a major shift in the quality, location, and permanency of the new jobs. By 2025, machines will be taking over almost 71% of the work tasks currently performed by humans, with the human workforce focusing more on productive tasks. This creates the need for reskilling and upskilling of the current workforce. + +In a recent report, the top decision-makers of IT/ITES observed that machine learning and other AI-powered solutions would play a major role in shaping future workplaces. With the latest technological advancements, the tech companies are on the lookout for talent equipped with a better understanding of these technologies. + +Here are some of the skills needed for becoming a machine learning engineer. + +**Programming skills:** +Machine learning calls for a strong command of programming and software development skills. It’s all about creating dynamic algorithms. Being clear with the fundamentals of analysis and design can be an added advantage for you. Here are the skills that you should be acquainted with: + + * **Fundamentals of Programming and CS:** Machine learning involves computation over huge sets of data, which requires knowledge of fundamental concepts such as computer architecture, data structures, algorithms, etc. The basics of stacks, b-trees, sorting algorithms, or parallel programming problems come in handy when we talk about the fundamentals. + * **Software design:** Being a machine learning engineer, you will be creating algorithms and systems that integrate with existing ecosystems and other software components. And for this, a strong command of Application Programming Interfaces (APIs), such as web APIs, static and dynamic libraries, etc., is essential for sustenance in the future.
+ * **Programming languages:** Machine learning is known for its versatility and is not bound to any specific language. All it needs is the required components and features, and you can use virtually any language that satisfies this condition. ML libraries exist for many different programming languages, and each language can be used for a different task. + * **Python:** One of the popular languages used among machine learning engineers is Python. It has many useful libraries like NumPy, SciPy, and Pandas, which help in the efficient processing of data and better scientific computing. It also has specialized libraries like Scikit-learn, Theano, and TensorFlow, which allow learning algorithms to run on different computing platforms. + * **R Language:** Developed by Ross Ihaka and Robert Gentleman, this is one of the best languages for machine learning tasks. Coming with a large number of algorithms and statistical models, it is specially tailored for data mining and statistical computing. + * **C/C++:** C/C++ usage is much lower when we talk about the programming languages needed for machine learning. But it cannot be ignored, as it is used to program the infrastructure and mechanics of machine learning. In fact, a number of ML libraries are actually developed in C/C++ and wrapped with API calls to make them available to other languages. + + + +Although these languages can feel a bit different from traditional ones, they are not difficult to learn. + +**Basic skills needed:** +Machine learning is a combination of math, data science, and software engineering. No matter how many certifications you have, you should be well acquainted with these basic skills to be a master in your domain: + + * **Data modeling:** +Data modeling is a process used to estimate the structure of a dataset, to find patterns, and at times to fill in where data is nonexistent. In machine learning, we often have to analyze unstructured data, which relies wholly on data modeling. Data modeling and evaluation concepts are needed for creating sound algorithms. + * **Statistics:** +Statistics is mainly the creation of models from data. Most machine learning algorithms are built upon statistical models. Statistics also has various other branches which are used in the process, like analysis of variance and hypothesis testing. + + + +Along with these two, there is one more basic skill which is of utmost importance – probability. The principles of probability and its derivative techniques, like Markov Decision Processes and Bayes Nets, help in dealing with uncertainty and making reliable predictions. + +There are many skills needed to become a machine learning engineer, and many institutes provide a professional [_Machine learning certification course_][4]. Such courses are playing a major role in the rise of AI and of efficient machine learning engineers, by guiding participants through the latest advancements and technical approaches in artificial intelligence technologies.
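+
+As a small illustration of the Python ecosystem mentioned above, here is a minimal, hypothetical sketch using scikit-learn (one of the libraries named earlier); the dataset and model choice are ours, for illustration only, not a prescribed curriculum.
+
+```python
+# A minimal scikit-learn example: learn patterns from labelled data
+# and check how well they generalise to unseen samples.
+from sklearn.datasets import load_iris
+from sklearn.model_selection import train_test_split
+from sklearn.linear_model import LogisticRegression
+from sklearn.metrics import accuracy_score
+
+# Load a small built-in dataset and hold out a test set.
+X, y = load_iris(return_X_y=True)
+X_train, X_test, y_train, y_test = train_test_split(
+    X, y, test_size=0.25, random_state=0)
+
+# Fit a simple statistical model -- the kind built upon the statistics
+# and probability fundamentals described above.
+model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
+
+# Evaluate the learned model on data it has never seen.
+print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))
+```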
+ +-------------------------------------------------------------------------------- + +via: https://opensourceforu.com/2019/10/things-you-should-know-if-you-want-to-become-a-machine-learning-engineer/ + +作者:[Samantha][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensourceforu.com/author/samantha/ +[b]: https://github.com/lujun9972 +[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/12/Machine-Learning-Framework.jpg?resize=696%2C611&ssl=1 (machine-learning-framework) +[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/12/Machine-Learning-Framework.jpg?fit=1389%2C1219&ssl=1 +[3]: https://www.forbes.com/sites/amitchowdhry/2018/09/18/artificial-intelligence-to-create-58-million-new-jobs-by-2022-says-report/#7830694a4d4b +[4]: https://www.simplilearn.com/big-data-and-analytics/machine-learning-certification-training-course diff --git a/sources/talk/20191009 Why to choose Rust as your next programming language.md b/sources/talk/20191009 Why to choose Rust as your next programming language.md new file mode 100644 index 0000000000..d8cad5c342 --- /dev/null +++ b/sources/talk/20191009 Why to choose Rust as your next programming language.md @@ -0,0 +1,112 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Why to choose Rust as your next programming language) +[#]: via: (https://opensource.com/article/19/10/choose-rust-programming-language) +[#]: author: (Ryan Levick https://opensource.com/users/ryanlevick) + +Why to choose Rust as your next programming language +====== +Selecting a programming language can be complicated, but some +enterprises are finding that switching to Rust is a relatively easy +decision. +![Programming books on a shelf][1] + +Choosing a programming language for a project is often a complicated decision, particularly when it involves switching from one language to another. For many programmers, it is not only a technical exercise but also a deeply emotional one. The lack of known or measurable criteria for picking a language often means the choice digresses into a series of emotional appeals. + +I've been involved in many discussions about choosing a programming language, and they usually conclude in one of two ways: either the decision is made using measurable, yet unimportant criteria while ignoring relevant, yet hard to measure criteria; or it is made using anecdotes and emotional appeals. + +There has been one language selection process that I've been a part of that has gone—at least so far—rather smoothly: the growing [consideration inside Microsoft][2] for using [Rust][3]. + +This article will explore several issues related to choosing a programming language in general and Rust in particular. They are: What are the criteria usually used for selecting a programming language, especially in large businesses, and why does this process rarely end successfully? Why has the consideration of Rust in Microsoft gone smoothly so far, and are there some general best practices that can be gleaned from it? + +### Criteria for choosing a language + +There are many criteria for deciding whether to switch to a new programming language. In general, the criteria that are most easily measured are the ones that are most often talked about, even if they are less important than other, more difficult-to-measure criteria. 
+
+#### Technical criteria
+
+The first group of criteria is the technical considerations; they are often the first that come to mind because they are the easiest to measure.
+
+Interestingly, the technical costs (e.g., build system integration, monitoring, tooling, support libraries, and more) are often easier to measure than the technical benefits. This is especially detrimental to the adoption of new programming languages, as the downsides of adoption are often the clearest part of the picture.
+
+While some technical benefits (like performance) can be measured relatively easily, others are much harder to measure. For example, what are the relative merits of a dynamic typing system (like in Python) compared to a relatively verbose and feature-poor one (like Java), and how does this change when compared to more strongly typed systems like Scala or Haskell? Many people have strong gut feelings that such technical differences should be taken very seriously in language considerations, but there are no good ways to measure them.
+
+A side effect of the discrepancy in measurement ease is that the easiest-to-measure items are often given the most weight in the decision-making process, even if that would not be the case with perfect information. This not only throws off the cost/benefit analysis but also the process of assigning importance to different costs and benefits.
+
+#### Organizational criteria
+
+Organizational criteria, which are the second consideration, include:
+
+  * How easy will it be to hire developers in this language?
+  * How easy is it to enforce programming standards?
+  * How quickly, on average, will developers be able to deliver software?
+
+
+
+Costs and benefits of organizational criteria are hard to measure. People usually have vague, "gut feeling" answers to them, which create strong opinions on the matter. Unfortunately, however, it's often very difficult to measure these criteria. For example, it might be obvious to most that TypeScript allows programmers to deliver functioning, relatively bug-free software to customers more quickly than C does, but where is the data to back this up?
+
+Moreover, it's often extremely difficult to assign importance weights to these criteria. It's easy to see that Go enforces standardized coding practices more easily than Scala (due to the wide use of gofmt), but it is extremely difficult to measure the concrete benefits to a company from standardizing codebases.
+
+These criteria are still extremely important but, because of the difficulty in measuring them, they are often either ignored or reasoned about through anecdotes.
+
+#### Emotional criteria
+
+Third are the emotional criteria, which tend to be overlooked if not outright dismissed.
+
+Software programming has traditionally tried to emulate truer "engineering" practices, where technical considerations are generally the most important. Some would argue that programming languages are "just tools" and should be measured only against technical criteria. Others would argue that programming languages assist the programmer in some of the more artistic aspects of the job. These criteria are extremely difficult to measure in any meaningful way.
+
+In general, this comes down to how happy (and thus productive) programmers feel using this language. Such considerations can have a real impact on programmers, but how this translates into benefits for an entire team is next to impossible to measure.
+
+Because of the difficulty of quantifying these criteria, they are often ignored.
But does this mean that emotional considerations of programming languages have no significant impact on programmers or programming organizations?
+
+#### Unknown criteria
+
+Finally, there's a set of criteria that are often overlooked because a new programming language is usually judged by the criteria set by the language currently in use. New languages may have capabilities that have no equivalent in other languages, so many people will not be familiar with them. Having no exposure to those capabilities may mean the evaluator unknowingly ignores or downplays them.
+
+These criteria can be technical (e.g., the merits of Kotlin data classes over Java constructs), organizational (e.g., how helpful Elm error messages are for teaching those new to the language), or emotional (e.g., the way Ruby makes the programmer feel when writing it).
+
+Because these aspects are hard to measure, and someone completely unfamiliar with them has no existing framework for judging them based on experience, intuition, or anecdote, they are often undervalued versus more well-understood criteria—if not outright ignored.
+
+### Why Rust?
+
+This brings us back to the growing excitement for Rust in Microsoft. I believe the discussions around Rust adoption have gone relatively smoothly so far because Rust offers an extremely clear and compelling advantage—not only over the language it seeks to replace (C++)—but also over any other language practically available to industry: great performance, a high level of control, and being memory safe.
+
+Microsoft's decision to investigate Rust (and other languages) began due to the fact that roughly [70% of Common Vulnerabilities and Exposures][4] (CVEs) in Microsoft products were related to memory safety issues in C and C++. When it was discovered that most of the affected codebases could not be effectively rewritten in C# because of performance concerns, the search began. Rust was viewed as the only possible candidate to replace C++. It was similar enough that not everything had to be reworked, but it has a differentiator that makes it measurably better than the current alternative: being able to eliminate nearly 70% of Microsoft's most serious security vulnerabilities.
+
+There are other reasons beyond memory safety, performance, and control that make Rust appealing (e.g., strong type safety guarantees, being an extremely loved language, etc.), but as expected, they were hard to talk about because they were hard to measure. In general, most people involved in the selection process were more interested in verifying that these other aspects of the language weren't perceivably worse than C++ but, because measuring these aspects was so difficult, they weren't considered active reasons to adopt the language.
+
+However, the Microsoft teams that had already adopted Rust, such as for the [IoT Edge Security Daemon][5], touted other aspects of the language (particularly "correctness" due to the advanced type system) as the reasons they were most keen on investing more in the language. These teams couldn't provide reliable measurements for these criteria, but they had clearly developed an intuition that this aspect of the language was extremely important.
+
+With Rust at Microsoft, the main criterion being judged happened to be an easily measurable one. But what happens when an organization's most important issues are hard to measure? These issues are no less important just because they are currently difficult to measure.
+
+### What now?
+ +Having clearly measurable criteria is important when adopting a new programming language, but this does not mean that hard-to-measure criteria aren't real and shouldn't be taken seriously. We simply lack the tools to evaluate new languages holistically. + +There has been some research into this question, but it has not yet produced anything that has been widely adopted by industry. While the case for Rust was relatively clear inside Microsoft, this doesn't mean new languages should be adopted only where there is one clear, technical reason to do so. We should become better at evaluating more aspects of programming languages beyond just the traditional ones (such as performance). + +The path to Rust adoption is just beginning at Microsoft, and having just one reason to justify investment in Rust is definitely not ideal. While we're beginning to form collective, anecdotal evidence to justify Rust adoption further, there is definitely a need to quantify this understanding better and be able to talk about it in more objective terms. + +We're still not quite sure how to do this, but stay tuned for more as we go down this path. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/10/choose-rust-programming-language + +作者:[Ryan Levick][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/ryanlevick +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/books_programming_languages.jpg?itok=KJcdnXM2 (Programming books on a shelf) +[2]: https://msrc-blog.microsoft.com/tag/rust +[3]: https://www.rust-lang.org/ +[4]: https://github.com/microsoft/MSRC-Security-Research/blob/master/presentations/2019_02_BlueHatIL/2019_01%20-%20BlueHatIL%20-%20Trends%2C%20challenge%2C%20and%20shifts%20in%20software%20vulnerability%20mitigation.pdf +[5]: https://msrc-blog.microsoft.com/2019/09/30/building-the-azure-iot-edge-security-daemon-in-rust/ diff --git a/sources/talk/20191010 Climate challenges call for open solutions.md b/sources/talk/20191010 Climate challenges call for open solutions.md new file mode 100644 index 0000000000..41ba2a683b --- /dev/null +++ b/sources/talk/20191010 Climate challenges call for open solutions.md @@ -0,0 +1,111 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Climate challenges call for open solutions) +[#]: via: (https://opensource.com/open-organization/19/10/global-energy-climate-challenges) +[#]: author: (Ron McFarland https://opensource.com/users/ron-mcfarland) + +Climate challenges call for open solutions +====== +We can meet global energy and climate change challenges with an open +mindset. +![Globe up in the clouds][1] + +Global climate change affects us all. It is, at its heart, an energy issue—a problem too large and too complex for any single person, company, university, research institute, science laboratory, nuclear trade association, or government to address alone. It will require a truly global, cooperative effort, one aimed at continued innovation across a range of technologies: renewables, batteries, carbon capture, nuclear power development, and more. + +Throughout the past year, I've been part of an initiative working on nuclear power decommissioning in Japan. 
As part of that work—which includes several meetings every month on this issue, as well as my own independent research on the subject—I've learned more about the ways various communities can play a role in understanding and impacting energy needs and climate discussions.
+
+In this article, I'll offer one example that illustrates how this is the case—that of "[Generation IV][2]" nuclear power plant development. This example demonstrates how [open organization principles][3] can influence future discussions about our global energy and climate change challenges. We must address these challenges with an open mindset.
+
+### Community purposes and frameworks
+
+Members of a community must [believe in a common purpose][4]. That sense of common purpose is not only what unites an open project but also what helps an open, distributed group maintain its focus and measure its success. Clear, public, and mutually agreed-upon statements of purpose are a basic feature of open organizations.
+
+So an open approach to global energy and climate change challenges should do the same. For example, when researching Generation IV nuclear power plant development, I've learned of a basic framework for task force goals:
+
+ 1. There should be a desire to reduce current carbon dioxide (CO2) emissions and greenhouse gases.
+ 2. There should be a desire to reduce nuclear waste.
+ 3. There should be a desire to provide stable, low-cost electricity without increasing CO2 emissions globally, particularly in the rural areas and developing countries where most future CO2 emissions will come from.
+ 4. There should be a desire to improve safety in nuclear power energy production. This should include developing a nuclear fuel that cannot be converted to weapons, reducing the chance of nuclear weapon confrontation or terrorist attacks.
+ 5. There should be a desire to reduce global air, water, and land pollution.
+
+
+
+A successful open approach to these issues must begin by uniting a community around a common set of goals like these.
+
+### Building community: inclusivity and collaboration
+
+Once a community has clarified its motivations, desires, and goals, how does it attract people who _share_ those desires?
+
+One method is by developing associations and holding global conferences. For example, the [Generation IV International Forum (GIF)][5] was formed to address some of the desires I listed above. Members represent countries like Argentina, Brazil, Canada, China, EU, France, Japan, S. Korea, South Africa, Switzerland, UK, USA, Australia, and Russia. They have symposia to allow countries to exchange information, build communities, and expand inclusivity. In 2018, the group held its fourth symposium in Paris.
+
+But in-person meetings aren't the only way to build community. Universities are working to build distributed, global communities focused on energy and climate challenges. MIT, for instance, is doing this with its own [energy initiative][6], which includes the "[Center for Advanced Nuclear Energy Systems][7]." The center's website facilitates discussions between like-minded advocates for energy solutions—a beautiful example of collaboration in action. Likewise, [Abilene Christian University][8] features a department focused on the future of nuclear power.
That department collaborates with nuclear development institutes and works to inspire the next generation of nuclear scientists, which they hope will lead to:
+
+ 1. raising people out of poverty worldwide through inexpensive, clean, safe and available energy,
+ 2. developing systems that provide clean water supply, and
+ 3. curing cancer.
+
+
+
+Those are goals worth collaborating on.
+
+### Community and passionate, purposeful participation
+
+As we know from studying open organizations, _the more specific a community's goals, the more successful it will likely be._ This is especially true when working with _passionate_ communities, as keeping those communities focused ensures they're channeling their energy in appropriate, helpful directions.
+
+Global attempts to solve energy and climate problems should consider this. Once again in the case of Generation IV nuclear power, there is growing interest in one type of nuclear power plant concept, the [Molten-salt reactor][9] (MSR), which uses thorium in its nuclear fuel. Proponents of MSR hope to create a safer type of fuel, so they've started their own association, the [Thorium Energy World][10], to advocate their cause. Its conference centers on the use of thorium in the fuel of this type of nuclear power plant. Experts meet to discuss their concepts and progress on MSR technology.
+
+But it's also true that communities are much more likely to invest in the ideas that _they_ specify—not necessarily those "handed down" from leadership. Whenever possible, communities focused on energy and climate change challenges should take their cues from members.
+
+Recall the Generation IV International Forum (GIF), which I mentioned above. That organization ran into a problem: too many competing concepts for next-generation nuclear power solutions. Rather than simply select one and demand that all members support it, the GIF created general categories and let participants select the concepts they favored from each. This resulted in a list of six concepts for future nuclear power plant development—one of which was MSR technology.
+
+Narrowing the community's focus to a smaller set of options should help that community have more detailed and productive technical discussions. But on top of that, letting the community itself select the parameters of its discussions should greatly increase its chances of success.
+
+### Community and transparency
+
+Once a community has formed, questions of transparency and collaboration often arise. How well will members interact, communicate, and work with each other?
+
+I've seen these issues firsthand while working with overseas distributors of the products I want them to sell for me. Why should they buy, stock, promote, advertise, and exhibit the products if at any time I could just cut them out and start selling to their competitors?
+
+Taking an open approach to building communities often involves making the communities' rules, responsibilities and norms _explicit_ and _transparent_. To solve my own problem with distributors, for instance, I entered into distributor agreements with them. These detailed both my responsibilities and theirs. With that clear agreement in hand, we could actively and collaboratively promote the product.
+
+The Generation IV International Forum (GIF) faced a similar challenge with its member countries, specifically with regard to intellectual property. Each country knew it would be creating significant (and likely very valuable) intellectual property as part of its work exploring the six types of nuclear power. To ensure that knowledge sharing occurred effectively and amicably between the members, the group established guidelines for exchanging knowledge and research findings. It also granted a steering committee the authority to dismiss potential members who weren't operating according to the same standards of transparency and collaboration (lest they become a burden on the growing community).
+
+They formed three types of agreements: "[Framework Agreements][11]" (in both French and English), System Arrangements (for each of the six systems I mentioned), and Memoranda of Understanding (MOU). With those agreements, the members could be more transparent, be more collaborative, and form more productive communities.
+
+### Growing demand—for energy and openness
+
+Increasing demand for electrical power in developing countries will impact global energy needs and climate change. The need for electricity and clean water for both health and agriculture will continue to grow. And the way we _address_ both those needs and that growth will determine how we meet next-generation energy and climate challenges. Adopting technologies like Generation IV nuclear power (and MSR) could help—but doing so will require a global, community-driven effort. An approach based on open organization principles will help us solve climate problems faster.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/open-organization/19/10/global-energy-climate-challenges
+
+作者:[Ron McFarland][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ron-mcfarland
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cloud-globe.png?itok=_drXt4Tn (Globe up in the clouds)
+[2]: https://en.wikipedia.org/wiki/Generation_IV_reactor
+[3]: https://opensource.com/open-organization/resources/open-org-definition
+[4]: https://opensource.com/open-organization/17/9/rediscovering-your-why
+[5]: https://www.gen-4.org/gif/jcms/c_74878/generation-iv-international-forum-gif-symposium
+[6]: http://energy.mit.edu/
+[7]: https://canes.mit.edu/
+[8]: https://www.youtube.com/watch?v=3pa35s6HKa8
+[9]: https://en.wikipedia.org/wiki/Molten_salt_reactor
+[10]: http://www.thoriumenergyworld.com/
+[11]: http://www.gen-4.org/gif/upload/docs/application/pdf/2014-01/framework_agreement.pdf
diff --git a/sources/talk/20191010 Reimagining-the-Internet project gets funding.md b/sources/talk/20191010 Reimagining-the-Internet project gets funding.md
new file mode 100644
index 0000000000..4a908b1312
--- /dev/null
+++ b/sources/talk/20191010 Reimagining-the-Internet project gets funding.md
@@ -0,0 +1,73 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Reimagining-the-Internet project gets funding)
+[#]: via: (https://www.networkworld.com/article/3444765/reimagining-the-internet-project-gets-funding.html)
+[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
+
+Reimagining-the-Internet project gets funding
+======
+A National Science Foundation-financed internet-replacement testbed project lays out its initial plans.
+Thinkstock
+
+The [Internet of Things][1] and 5G could be among the beneficiaries of an upcoming $20 million U.S. government cash injection designed to come up with new architectures to replace the existing public internet.
+
+FABRIC, as the National Science Foundation-funded umbrella project is called, aims to come up with a proving ground to explore new ways to move, keep and compute data in shared infrastructure such as the public internet. The project “will allow scientists to explore what a new internet could look like at scale,” says the lead institution, the University of North Carolina at Chapel Hill, [in a media release][2]. And it “will help determine the internet architecture of the future.”
+
+Bottlenecks, security and overall performance are infrastructure areas that the group is looking to improve on. The “Internet is showing its age and limitations,” Ilya Baldin, director of Network Research and Infrastructure at the Renaissance Computing Institute at UNC-Chapel Hill, is quoted as saying in the release. “Especially when it comes to processing large amounts of data.” RENCI is involved in developing and deploying research technologies.
+
+“Today’s internet was not designed for the massive datasets, machine-learning tools, advanced sensors and [Internet of Things devices][5],” FABRIC says, echoing others who, too, are envisioning a new internet:
+
+[I wrote, in July,][6] for example, about a team of network engineers known as NOIA, who also want to revolutionize global public internet traffic. That group wants to co-join a new software-defined public internet with a bandwidth- and routing-trading system based on blockchain. Others, such as the companies [FileStorm and YottaChain, are working on distributed blockchain-like storage for Internet][7] adoption.
+
+Another group, led by researchers at the University of Magdeburg, [whom I’ve also written about][8], wants to completely restructure the internet. That university, which has received German government funding, says adapting IoT to existing networks won’t work. Centralized security that causes choke points is just one trouble-spot that needs fixing, it thinks. “The internet, as we know it, is based on network architectures of the 70s and 80s, when it was designed for completely different applications,” those researchers say.
+
+FABRIC, the UNC project, which has begun to address ideas for the architecture it thinks will work best, says it will function using “storage, computational and network hardware nodes,” joined by 100Gbps and Terabit optical links. “Interconnected deeply programmable core nodes [will be] deployed across the country,” [it proposes in its media release][9]. Much like the original internet, in fact, universities, labs and [supercomputers][10] will be connected, this time in order for today’s massive datasets to be experimented with.
+
+“All major aspects of the FABRIC infrastructure will be programmable,” it says.
It will be “an everywhere programmable nationwide instrument comprised of novel extensible network elements.” Machine learning and distributed network systems control will be included.
+
+The project asserts that it's the programmability that will let it customize the platform to experiment with specific aspects of the public internet: cybersecurity is one, it says; distributed architectures could be another.
+
+“If computer scientists were to start over today, knowing what they now know, the Internet might be designed in a different way,” Baldin says.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3444765/reimagining-the-internet-project-gets-funding.html

+作者:[Patrick Nelson][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Patrick-Nelson/
+[b]: https://github.com/lujun9972
+[1]: https://www.zdnet.com/article/what-is-the-internet-of-things-everything-you-need-to-know-about-the-iot-right-now/
+[2]: https://uncnews.unc.edu/2019/09/17/unc-chapel-hill-to-lead-20-million-project-to-test-a-reimagined-internet/
+[3]: https://www.networkworld.com/newsletters/signup.html
+[4]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
+[5]: https://www.networkworld.com/article/3331676/iot-devices-proliferate-from-smart-bulbs-to-industrial-vibration-sensors.html
+[6]: https://www.networkworld.com/article/3409783/public-internet-should-be-all-software-defined.html
+[7]: https://www.networkworld.com/article/3390722/how-data-storage-will-shift-to-blockchain.html
+[8]: https://www.networkworld.com/article/3407852/smarter-iot-concepts-reveal-creaking-networks.html
+[9]: https://fabric-testbed.net/news/fabric-award
+[10]: https://www.networkworld.com/article/3236875/embargo-10-of-the-worlds-fastest-supercomputers.html
+[11]: https://www.facebook.com/NetworkWorld/
+[12]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20191010 SD-WAN- What is it and why you-ll use it one day.md b/sources/talk/20191010 SD-WAN- What is it and why you-ll use it one day.md
new file mode 100644
index 0000000000..9e07a987bf
--- /dev/null
+++ b/sources/talk/20191010 SD-WAN- What is it and why you-ll use it one day.md
@@ -0,0 +1,108 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (SD-WAN: What is it and why you’ll use it one day)
+[#]: via: (https://www.networkworld.com/article/3031279/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html)
+[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
+
+SD-WAN: What is it and why you’ll use it one day
+======
+Software-defined wide area networks, a software approach to managing wide-area networks, offer ease of deployment, central manageability and reduced costs, and can improve connectivity to branch offices and the cloud.
+
+Shutterstock
+
+There have been significant changes in wide-area networks over the past few years, none more important than software-defined WAN or SD-WAN, which is changing how network pros think about optimizing the use of connectivity that is as varied as Multiprotocol Label Switching ([MPLS][1]), frame relay and even DSL.
+
+### What is SD-WAN?
+
+As the name states, software-defined wide-area networks use software to control the connectivity, management and services between [data centers][2] and remote branches or cloud instances. Like its bigger technology brother, software-defined networking, SD-WAN decouples the control plane from the data plane.
+
+An SD-WAN deployment can include existing routers and switches, or virtualized customer premises equipment (vCPE), all running some version of software that handles policy, security, networking functions and other management tools, depending on vendor and customer configuration.
+
+One of SD-WAN’s chief features is the ability to manage multiple connections from MPLS to broadband to LTE. Another important piece is the ability to segment, partition and secure the traffic traversing the WAN.
+
+SD-WAN's driving principle is to simplify the way big companies turn up new links to branch offices, better manage the way those links are utilized – for data, voice or video – and potentially save money in the process.
+
+As a recent [Gartner][5] report said, SD-WAN and vCPE are key technologies to help enterprises transform their networks from “fragile to agile.”
+
+“We believe that emerging SD-WAN solutions and vCPE platforms will best address enterprise requirements for the next five years, as they provide the best mix of performance, price and flexibility compared to alternative hardware-centric approaches,” Gartner stated. “Specifically, we predict that by 2023, more than 90% of WAN edge infrastructure refresh initiatives will be based on vCPE or SD-WAN appliances versus traditional routers (up from less than 40% today).”
+
+With all of these advanced features making it an attractive choice for customers, the market has drawn more than 60 vendors – including [Cisco][6], VMware, Silver Peak, Riverbed, Aryaka, Fortinet, Nokia and Versa – that compete in the SD-WAN market, many with very specialized offerings, Gartner says. [IDC says][7] that SD-WAN technology will grow at a 30.8% compound annual growth rate from 2018 to 2023 to reach $5.25 billion.
+
+From its VNI study, Cisco says that globally, SD-WAN traffic was 9 percent of business IP WAN traffic in 2017 and will be 29 percent of business IP WAN traffic by 2022. In addition, SD-WAN traffic will grow five-fold from 2017 to 2022, a compound annual growth rate of 37 percent.
+
+“SD-WAN continues to be one of the fastest-growing segments of the network infrastructure market, driven by a variety of factors. First, traditional enterprise WANs are increasingly not meeting the needs of today's modern digital businesses, especially as it relates to supporting SaaS apps and multi- and hybrid-cloud usage.
Second, enterprises are interested in easier management of multiple connection types across their WAN to improve application performance and end-user experience," said Rohit Mehra, IDC vice president, Network Infrastructure. "Combined with the rapid embrace of SD-WAN by leading communications service providers globally, these trends continue to drive deployments of SD-WAN, providing enterprises with dynamic management of hybrid WAN connections and the ability to guarantee high levels of quality of service on a per-application basis."
+
+### How does SD-WAN help network security?
+
+One of the bigger areas SD-WAN impacts is network security.
+
+The tipping point for a lot of customers was the advent of cloud-based applications like Office 365 and Amazon Web Services (AWS) that require secure remote access, said [Neil Anderson, practice director, network solutions at World Wide Technology,][8] a technology service provider. “SD-WAN lets customers set up secure regional zones or whatever the customer needs and lets them securely direct that traffic to where it needs to go based on internal security policies. SD-WAN is about architecting and incorporating security for apps like AWS and Office 365 into your connectivity fabric. It’s a big motivator to move toward SD-WAN.”
+
+“With SD-WAN, mission-critical traffic and assets can be partitioned and protected against vulnerabilities in other parts of the enterprise. This use case appears to be especially popular in verticals such as retail, healthcare, and financial,” [IDC wrote][9]. "SD-WAN can also protect application traffic from threats within the enterprise and from outside by leveraging a full stack of security solutions included in SD-WAN such as [next-gen firewalls][10], IPS, URL filtering, malware protection, and cloud security.”
+
+### What does SD-WAN mean for MPLS?
+
+One of the hotter SD-WAN debates is what the software technology would do to the use of MPLS, the packet-forwarding technology that uses labels in order to make data forwarding decisions. The most common use cases are branch offices, campus networks, metro Ethernet services and enterprises that need quality of service (QoS) for real-time applications.
+
+For the most part, networking vendors believe MPLS will be around for a long time and that SD-WAN won’t totally eliminate the need for it. The major knocks against MPLS are how traditionally expensive the service is and how complicated it is to set up.
+
+A recent report from [Avant Communications][11], a cloud services provider that specializes in SD-WAN, found that 83% of enterprises that use or are familiar with MPLS plan to increase their MPLS network infrastructure this year, and 40% say they will “significantly increase” their use of it.
+
+How that shakes out remains an unknown, but it seems both technologies will have a role in enterprises for the near future anyway.
+
+“For us, MPLS is just another option. We have never said that SD-WAN versus MPLS so that MPLS is going to get killed off or it needs to get killed off,” said [Sanjay Uppal,][12] vice president and general manager of VMware’s VeloCloud Business Unit.
+
+Uppal said with MPLS, VMware at least is not finding that customers are turning off their MPLS in droves. “They are capping it in several instances. They are continuing to buy some more. Maybe not as much as they probably had in the past but it’s really opening up applications to use more [of the underlying network responsible for delivery of packets].
All kinds of underlay are being purchased. MPLS is being purchased, more of broadband, direct internet access,” he said.
+
+Gartner says its clients hope to fund their WAN expansion/update by replacing or augmenting expensive MPLS connections with internet-based VPNs, often from alternate providers. However, the suitability of internet connections varies widely by geography, and mixing connections from multiple service providers increases complexity. SD-WAN has dramatically simplified this approach for a number of reasons, Gartner says, including:
+
+  * Due to the simpler operational environment and the ability to use multiple circuits from multiple carriers, enterprises can abstract the transport layer from the logical layer and be less dependent on their service providers.
+  * This decoupling of layers is enabling new MSPs to emerge to take advantage of the above for customers that still want to outsource their WANs.
+  * Traditional service providers are responding with Network Function Virtualization ([NFV][13])-based offerings that combine and orchestrate services (SD-WAN, security, WAN optimization) from multiple popular vendors. NFV enables virtualized network functions, including routing, mobility and security.
+
+
+
+There are other reasons customers will use MPLS in the SD-WAN world, experts said. “There is a concern about how customers will back up systems when there are outages,” Anderson said. “MPLS and other technologies have a role there.”
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3031279/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html
+
+作者:[Michael Cooney][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Michael-Cooney/
+[b]: https://github.com/lujun9972
+[1]: https://www.networkworld.com/article/2297171/network-security-mpls-explained.html
+[2]: https://www.networkworld.com/article/3223692/what-is-a-data-centerhow-its-changed-and-what-you-need-to-know.html
+[3]: https://www.networkworld.com/newsletters/signup.html
+[4]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
+[5]: https://www.gartner.com/doc/reprints?id=1-5MNRUAO&ct=181019&st=sb&mkt_tok=eyJpIjoiTnpZNFlXTXpabU0xTVdNeSIsInQiOiJzSmZGdzFWZldRN0s0TUxWMVBKOFUxdnJVMCtEUk13Z3Y5VCs1Z1wvcUY5ZHQ1XC9uZG1WY1Uxbm5TOFFMZzcxQ3pybmhMSHo5RFdPVEVCVUZrbnJnODlGVklOZGtlT0pFQ1A1aFNaQ3N1ODk5Y1FaN0JqTDJiM0U5cnZpTVBMTnliIn0%3D
+[6]: https://www.networkworld.com/article/3322937/what-will-be-hot-for-cisco-in-2019.html
+[7]: https://www.idc.com/getdoc.jsp?containerId=prUS45380319
+[8]: https://www.wwt.com/profile/neil-anderson
+[9]: https://www.cisco.com/c/dam/en/us/solutions/collateral/enterprise-networks/intelligent-wan/idc-tangible-benefits.pdf
+[10]: https://www.networkworld.com/article/3230457/what-is-a-firewall-perimeter-stateful-inspection-next-generation.html
+[11]: 
http://www.networkworld.com/cms/article/Avant%20Communications,%20a%20cloud%20services%20provider%20that%20specializes%20in%20SD-WAN,%20recently%20issued%20a%20report%20entitled%20State%20of%20Disruption%20that%20found%20that%2083%25%20of%20enterprises%20that%20use%20or%20are%20familiar%20with%20MPLS%20plan%20to%20increase%20their%20MPLS%20network%20infrastructure%20this%20year,%20and%2040%25%20say%20they%20will%20
+[12]: https://www.networkworld.com/article/3387641/beyond-sd-wan-vmwares-vision-for-the-network-edge.html
+[13]: https://www.networkworld.com/article/3253118/what-is-nfv-and-what-are-its-benefits.html
diff --git a/sources/talk/20191010 The biggest risk to uptime- Your staff.md b/sources/talk/20191010 The biggest risk to uptime- Your staff.md
new file mode 100644
index 0000000000..a595014cae
--- /dev/null
+++ b/sources/talk/20191010 The biggest risk to uptime- Your staff.md
@@ -0,0 +1,65 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (The biggest risk to uptime? Your staff)
+[#]: via: (https://www.networkworld.com/article/3444762/the-biggest-risk-to-uptime-your-staff.html)
+[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
+
+The biggest risk to uptime? Your staff
+======
+Human error is the chief cause of downtime, a new study finds. Imagine that.
+Getty Images
+
+There was an old joke: "To err is human, but to really foul up you need a computer." Now it seems the reverse is true. The reliability of [data center][1] equipment has vastly improved, but the humans running it have not kept up, and that's a threat to uptime.
+
+The Uptime Institute has surveyed thousands of IT professionals throughout the year on outages, and it says the vast majority of data center failures, from 70 percent to 75 percent, are caused by human error.
+
+And some of them are severe. It found more than 30 percent of IT service and data center operators experienced downtime that they called a “severe degradation of service” over the last year, with 10 percent of the 2019 respondents reporting that their most recent incident cost more than $1 million.
+
+In Uptime's April 2019 survey, 60 percent of respondents believed that their most recent significant downtime incident could have been prevented with better management/processes or configuration. For outages that cost greater than $1 million, this figure jumped to 74 percent.
+
+However, the end fault is not necessarily with the staff, Uptime argues, but with the management that has failed them.
+
+"Perhaps there is simply a limit to what can be achieved in an industry that still relies heavily on people to perform many of the most basic and critical tasks and thus is subject to human error, which can never be completely eliminated," wrote Kevin Heslin, chief editor of the Uptime Institute Journal, in a [blog post][4].
+
+"However, a quick survey of the issues suggests that management failure — not human error — is the main reason that outages persist.
By under-investing in training, failing to enforce policies, allowing procedures to grow outdated, and underestimating the importance of qualified staff, management sets the stage for a cascade of circumstances that leads to downtime," Heslin went on to say.
+
+Uptime noted that the complexity of a company’s infrastructure, especially the distributed nature of it, can increase the risk that simple errors will cascade into a service outage and said companies need to be aware of the greater risk involved with greater complexity.
+
+On the staffing side, it cautioned against expanding critical IT capacity faster than the company can attract and apply the resources to manage that infrastructure and to be aware of any staffing and skills shortage before they start to impair mission-critical operations.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3444762/the-biggest-risk-to-uptime-your-staff.html
+
+作者:[Andy Patrizio][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Andy-Patrizio/
+[b]: https://github.com/lujun9972
+[1]: https://www.networkworld.com/article/3223692/what-is-a-data-centerhow-its-changed-and-what-you-need-to-know.html
+[2]: https://www.networkworld.com/newsletters/signup.html
+[3]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
+[4]: https://journal.uptimeinstitute.com/how-to-avoid-outages-try-harder/
+[5]: https://www.facebook.com/NetworkWorld/
+[6]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20191011 Everything you need to know about Grace Hopper in six books.md b/sources/talk/20191011 Everything you need to know about Grace Hopper in six books.md
new file mode 100644
index 0000000000..4c077a80c1
--- /dev/null
+++ b/sources/talk/20191011 Everything you need to know about Grace Hopper in six books.md
@@ -0,0 +1,65 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Everything you need to know about Grace Hopper in six books)
+[#]: via: (https://opensource.com/article/19/10/grace-hopper-books)
+[#]: author: (Joshua Allen Holm https://opensource.com/users/holmja)
+
+Everything you need to know about Grace Hopper in six books
+======
+A reading list for people of all ages about the legendary Queen of Code.
+![Book list, favorites][1]
+
+Grace Hopper is one of those iconic figures that really needs no introduction. During her long career in the United States Navy, she was a key figure in the early days of modern computing. If you have been involved in open source or technology in general, chances are you have already heard several anecdotes about Grace Hopper. The story of finding [the first computer bug][2], perhaps? Or maybe you have heard some of her nicknames: Queen of Code, Amazing Grace, or Grandma COBOL?
+
+While computing has certainly changed from the days of punch cards, Grace Hopper's legacy lives on.
She was posthumously awarded a Presidential Medal of Freedom, the Navy named a warship after her, and the [Grace Hopper Celebration][3] is an annual conference with an emphasis on topics that are relevant to women in computing. Suffice it to say, Grace Hopper's name is going to live on for a very long time.
+
+Grace Hopper had a career anyone should be proud of, and she accomplished many great things. As with many historical figures who have accomplished great things, the anecdotes about her contributions sometimes start to drift towards the realm of tall tales, which does Grace Hopper a disservice. Her real accomplishments are already legendary, and there is no reason to try to turn her into the computer science version of [John Henry][4] or [Paul Bunyan][5].
+
+To that end, here are six books that explore the life and legacy of Grace Hopper. No tall tales, just story after story of Grace Hopper, a woman who changed the world.
+
+## Broad Band: The Untold Story of the Women Who Made the Internet
+
+![Broad Band book cover][7]
+
+### by Claire L. Evans
+
+In [_Broad Band: The Untold Story of the Women Who Made the Internet_][8], Claire L. Evans explores the lives of several women whose contributions to technology helped to shape the internet. Starting with Ada Lovelace and moving towards modern times with Grace Hopper and others, Evans weaves an interesting narrative that highlights the roles various women played in early computing. While only part of the book focuses on Grace Hopper, the overarching narrative of Evans's work does an excellent job of showcasing Hopper's place in computing history.
+
+## Grace Hopper: Admiral of the Cyber Sea
+
+![Grace Hopper: Admiral of the Cyber Sea cover][10]
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/10/grace-hopper-books + +作者:[Joshua Allen Holm][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/holmja +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/reading_book_stars_list.png?itok=Iwa1oBOl (Book list, favorites) +[2]: https://www.computerhistory.org/tdih/september/9/ +[3]: https://ghc.anitab.org/ +[4]: https://en.wikipedia.org/wiki/John_Henry_(folklore) +[5]: https://en.wikipedia.org/wiki/Paul_Bunyan +[6]: https://opensource.com/file/453331 +[7]: https://opensource.com/sites/default/files/uploads/broad_band_150.jpg (Broad Band book cover) +[8]: https://clairelevans.com/ +[9]: https://opensource.com/file/453336 +[10]: https://opensource.com/sites/default/files/uploads/grace_hopper_admiral_of_the_cyber_sea_150.jpg (Grace Hopper: Admiral of the Cyber Sea cover) diff --git a/sources/talk/20191011 FOSS in India- Perspectives of an American Anthropologist.md b/sources/talk/20191011 FOSS in India- Perspectives of an American Anthropologist.md new file mode 100644 index 0000000000..667449b238 --- /dev/null +++ b/sources/talk/20191011 FOSS in India- Perspectives of an American Anthropologist.md @@ -0,0 +1,66 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (FOSS in India: Perspectives of an American Anthropologist) +[#]: via: (https://opensourceforu.com/2019/10/foss-in-india-perspectives-of-an-american-anthropologist-2/) +[#]: author: (Prof. C.N Krishnan https://opensourceforu.com/author/cn-krishnan/) + +FOSS in India: Perspectives of an American Anthropologist +====== + +[![][1]][2] + +_In her doctoral thesis done at the University of Manchester, UK, titled ‘Free and Open Source Software (FOSS) in India: Mobilising Technology for the National Good’, American anthropologist Dr Jasmine Folz addresses the question “What can the case of FOSS in India tell us about the roles and relationships between technology, autonomy, and the state?” This article gives a quick summary of some topics discussed in the thesis to convey a flavour of the work._ + +Dr Folz spent one year in India doing field work for her thesis, travelling and interacting extensively with many different players in the Indian FOSS space. Studies of FOSS in India have been very few and mostly lacking in depth, and even those that have been done are confined to its technological and economic aspects. This work is therefore unique and valuable in many ways as it provides the deeper insights and understanding that only anthropologists are capable of, and Dr Folz has maintained the rigour and focus of her profession in this effort. Among other things, it could help answer frequently asked questions like “Why is there very little contribution from India to the global FOSS corpus?” Much of this review quotes directly from the thesis itself. + +**Historical origins of FOSS in India** +Modern science and technology (S&T) came to India bundled with the European colonial project, and it was viewed with suspicion, if not hostility. 
Searching for alternatives to modern S&T (especially in the areas of healthcare, agriculture, village industries, etc) was thus a part of the freedom struggle, and it continued well beyond Independence. Even decades later, large corporations like IBM and Coca Cola came to be seen as symbols of continued western domination. When software and IT became a major force in the country from the 1980s onwards, driven by the global outsourcing industry, Microsoft too joined the list of such corporations.
+
+The Indian FOSS community took root in this context, with a strong nationalist position and in favour of the digitally disadvantaged sections of the population. In fact, FOSS can be considered a descendant of Gandhian Swadeshi, which includes self-rule via home grown craft and technology, though many in the FOSS community may be opposed to Gandhi on other issues. The Indian FOSS community has always been in conversation with larger issues of what role technology can or should play in Indian society; FOSS came to be seen as a way to develop the nation along particular moral and material pathways, and for that material development to be understood in moral terms.
+While sections of the Indian government were sympathetic to FOSS, the government stayed rather ‘non-aligned’ between proprietary and free software, given how the former was the backbone of the country’s massively successful IT boom that was generating jobs and wealth. The FOSS community, too, mostly did not take any strong position on national political issues, considering that there were many differing ideological strands within the community itself. Yet, the community was quite united on the need to make essential software free and open in the country.
+
+**Notions of freedom, autonomy, individualism and FOSS**
+Although FOSS is often free of cost, the FOSS community continues to emphasise that it uses the term ‘free’ to mean freedom itself, as a concept. FOSS was born in the USA, with Richard Stallman and the Free Software Foundation (FSF) that he created, driving it. The Linux project of Linus Torvalds from Europe illustrated the process of community-based creation and support of FOSS.
+Awareness of FOSS did reach India early on, though the approaches to engaging in and mobilising FOSS have been quite different in India compared to the US. These differences can be understood, to a large degree, as arising from the different conceptions of and relationships between freedom and individualism that exist in the two societies. Hence, they help to explain why FOSS acquired the status of a tool for nation building in India.
+
+To understand what American FOSS enthusiasts mean when they say “Free as in freedom”, it is also necessary to understand American individualism. There are two liberal tenets integral to American individualism — free speech and intellectual property law, both of which are at the core of the FOSS ideology as it developed there. American individualism is utilitarian and premised on a core value of self-reliance so that individuals tend to negate the influence of family and class, even minimising personal responsibility to society. American software engineers who produce FOSS code are predominantly educated white men with high incomes and a strong sense of individualism.
+For Western FOSS enthusiasts, then, freedom as in free speech is the central issue. The code is viewed as a form of speech and thus not to be owned. To them, the activity of writing code is a form of exploring and opening up physical and mental frontiers.
The technical practice of creating FOSS is central to making ideas meaningful to the community. Stallman and the FSF do not view FOSS, per se, as standing for the promotion of socio-economic equality, though they would be for it. To Stallman and the FSF, large scale profiteering from FOSS (if it is possible), as such, would not be an issue, as long as the four freedoms of the code are scrupulously guaranteed. FOSS in the US takes no stand vis-a-vis capitalism, free markets or profit making. In fact, it enables participation in the market in a healthier and stronger manner.
+
+Concepts like freedom and individualism are not universal or uniform; they must be understood as products of unique cultural histories, and there are vast differences between the US and Indian societies in this regard. It is not that the concept of individualism does not exist in India, but it is more in the nature of a relational individualism, as against the utilitarian individualism of the US. The relational individualists understand themselves as part of a community with shared beliefs and practices and, besides this, their personal duties and desires are bound to the collective. By ensuring that the goals of the collective are met, the rights of the individual are protected. Because Indian FOSS enthusiasts conceptualise individualism as inherently relational, their interpretation of the FOSS philosophy and its implications includes the role and rights of the individual, but expands on this to include the community. Thus the ‘freedom imagined’ by the Indian FOSS community includes the wider socio-economic potential FOSS holds for their developing nation, to which they feel morally obligated to contribute as individuals. It can now be understood how so many Indian FOSS enthusiasts can, on the one hand, accept that “Free as in freedom” is the foundational underpinning of the FOSS philosophy and, on the other hand, emphasise the importance of FOSS also being “free, as in cost.”
+
+While individual self-reliance is not a traditional cultural touchstone in India, national self-reliance is, as reflected in the Swadeshi movement and post-Independence political and economic policy. As a matter of fact, autonomy could be a more appropriate term than freedom for the Indian context.
+
+**The Indian FOSS community**
+Members of the Indian FOSS community come overwhelmingly from middle-class, upper-caste backgrounds, but the majority of them are best understood as being ideologically closer to the ‘old middle class’ who historically engaged with the nation building process. Many of them stayed back in India rather than emigrate to the West as a commitment to improving India. There are strong ties between the FOSS community and earlier politically engaged science and technology based movements. A majority of the community conceives of FOSS as a technology that can be made Indian and that should be mobilised in efforts to improve the lives of all Indians. In this way, the community is using FOSS as a social as much as a technical tool.
+
+The Indian FOSS community is deeply committed to evangelising FOSS, both in terms of bringing more people (particularly students) into its fold, as well as by lobbying with the government, academia, NGOs, industry, media, etc. Unlike in the US, it is not essential in India for one to contribute quality code to be accepted as an important member of the FOSS community.
+
+Another aspect that Dr Folz’s study examines is gender.
The ways in which gender informs the IT industry generally, and the FOSS community in India specifically, are unique. While low participation of women in FOSS is a worldwide issue, within India it has played out in particular ways which have to do with how male and female roles at home, at work, and in public are conceived. ‘Respectable femininity’, which is a refraction of hegemonic masculinity, has provided women with the ‘right amount of freedom’ to participate in the lucrative IT industry so long as their essential identity as wives and mothers is not impacted.
+
+Women’s exclusion and discrimination in the Indian FOSS community is rooted in a different set of gendered assumptions and priorities from those in the West, where it is assumed women are not as intellectually capable as men. While Indian women are generally presumed to be as technically capable as men (because much of the work of the Indian FOSS community, such as evangelising, is social rather than technical and is done in public, social settings), many women cannot participate because they do not have as much free time away from work and family obligations as men do. Further, due to the prevalence of homosociality, many women and men do not necessarily feel comfortable socialising together in the public sphere.
+
+For the most part, the Indian FOSS community is left-leaning, middle class and urban, though there are significant generational, regional, and gendered differences. The older generation (aged 50+) is almost exclusively upper middle class and upper caste, and its members have held important positions in the government, academia, R&D, NGOs, etc. The younger generation, in its 20s and 30s, consists mostly of students and employees of IT companies, with a few entrepreneurs as well. In general, class, caste and gender inequalities are maintained in the FOSS community, though many members of the community are sincerely committed to making it more inclusive.
+
+FOSS offers Indians the possibility of exerting autonomy in the relationships between technology and the state. FOSS allows for the autonomy of states in relation to the market and it also, crucially, offers autonomy to citizens in relation to markets and the Indian state. The Indian FOSS community has taken a technology created in a Western context and mobilised it towards what can broadly be called nation building efforts, though to what extent these mobilisations lead to substantive societal change remains to be seen.
+
+**Note:** Interested readers can obtain a copy of the entire thesis by writing to Dr Jasmine Folz at _[jasminefolz@gmail.com][3]_.
+
+--------------------------------------------------------------------------------
+
+via: https://opensourceforu.com/2019/10/foss-in-india-perspectives-of-an-american-anthropologist-2/
+
+作者:[Prof. 
C.N Krishnan][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensourceforu.com/author/cn-krishnan/ +[b]: https://github.com/lujun9972 +[1]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Foss-Apps.jpg?resize=696%2C396&ssl=1 (Foss Apps) +[2]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Foss-Apps.jpg?fit=900%2C512&ssl=1 +[3]: mailto:jasminefolz@gmail.com diff --git a/sources/talk/20191011 How a business was built on podcasts for Linux- The story of Jupiter Broadcasting.md b/sources/talk/20191011 How a business was built on podcasts for Linux- The story of Jupiter Broadcasting.md new file mode 100644 index 0000000000..36fd17754f --- /dev/null +++ b/sources/talk/20191011 How a business was built on podcasts for Linux- The story of Jupiter Broadcasting.md @@ -0,0 +1,94 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How a business was built on podcasts for Linux: The story of Jupiter Broadcasting) +[#]: via: (https://opensource.com/article/19/10/linux-podcasts-Jupiter-Broadcasting) +[#]: author: (Don Watkins https://opensource.com/users/don-watkins) + +How a business was built on podcasts for Linux: The story of Jupiter Broadcasting +====== +Learn about Jupiter Broadcasting's origins and its future under the +Linux Academy wing in an interview with cofounder Chris Fisher. +![Woman programming][1] + +I spend a lot of time on the road and enjoy listening to podcasts about a variety of topics. One of my interests is keeping up with the latest news and information about Linux and open source, and that led me to [Jupiter Broadcasting][2], an open source (both in topics covered and its own license) podcasting network. I met Jupiter's cofounder [Chris Fisher][3] when I [visited System76][4]'s Denver headquarters in late 2018. + +Jupiter Broadcasting emerged from [The Linux Action Show][5], a podcast that began in 2006 and ended 10 years later in early 2017. The show was such a success that, in 2008, Chris and co-founder [Bryan Lunduke][6] decided to start Jupiter Broadcasting. Back then, the company only had two shows, The Linux Action Show and [CastaBlasta][7]. Now it offers 10 Linux-related podcasts with titles like [Linux Headlines][8], [Linux Action News][9], [Choose Linux][10], [Coder Radio][11], [Self-Hosted][12], and more. + +I was interested in learning more about Jupiter, so I was grateful when Chris agreed to do this interview (which has been lightly edited for length and clarity). + +**Don Watkins**: What is your background? + +**Chris Fisher**: I grew up during the transition from early '80s digital solutions to more "modern" networked solutions. In both schools and businesses, the world was slowly getting networked and online. Some people made early bets on DOS-based systems, and others held out completely for the move to digital. From a very young age, I was fortunate to have access to tech I could tinker with to learn. And right out of high school, I got to work migrating systems, standing up networks, and building out centralized authentication, storage, and early web services. + +**DW**: How did you get started with Linux? + +**CF**: Coming from a Microsoft and Novell admin background, distros like Debian Linux and the open source services that run on top of it were like discovering a new world of solutions. 
I quickly became very enthusiastic about open source software and the long-term possibilities of Linux. It didn't take long for me to **rm -rf** my root partition and be blown away with how powerful Linux was. From that moment, I had to have it on the desktop and found more and more uses for it on the server. + +**DW**: What is your favorite distribution? Why is it your favorite? + +**CF**: I really do enjoy something about all of them. These days, my studio runs Ubuntu LTS, my servers run Fedora (as does my Thinkpad), and my workstation runs Xubuntu. From Gentoo to MX, I like to try them all. But I often stick to the classics on my production systems. + +**DW**: What was the genesis of Jupiter Broadcasting? + +**CF**: It really started as an outlet to share things my friends and I are passionate about. Over time, as we started more podcasts, it made a lot of sense to put it all under one roof. Podcasts grew into a real industry, we started taking advertisers, and after a few years of working a few jobs at once, I was able to go full time. Fast forward some 10+ years later, and one year ago, we merged with [Linux Academy][13]. They have enabled us to give our podcasts away to the community without advertising and while investing in the crew to make them better than ever. We're on a mission to keep people informed and passionate about Linux and open source, and that fits in really great with our bigger mission at Linux Academy to train people on the stacks we talk about. + +**DW**: There are 10 different podcasts at Jupiter Broadcasting. How do you stay on top of all of that? + +**CF**: It can be a big job, more than ever these days. We just launched two new podcasts: Linux Headlines—a daily Linux, open source, and cloud news show in three minutes or less, and Self-Hosted, a podcast all about hosting services on your LAN with open source software and leveraging the cloud in a secure, under-your-control way, when it makes sense. + +Just getting those efforts off the ground, while also keeping the existing shows fresh and packed with good content, is a lot of work! Especially combined with a fair bit of travel required for the job. Now that we're part of Linux Academy, I have a good team behind me—from a core of full-timers to a raft of co-hosts from various areas in the industry—and they're all really good at their jobs. + +I am constantly finding my balance and working with the team to take that on. It's definitely an ongoing process. And I'm really thankful I get to make these podcasts as a living. + +**DW**: Do you use open source hardware and software to record the content? + +**CF**: Our shows are recorded on Linux, and we use the amazing REAPER editor on Linux. It's not open source, but it's a great example of what kind of powerful, workstation-grade software you can get when you have an open platform underneath it enabling it. + +**DW**: How large is your audience? + +**CF**: I'm not sure we have ever shared numbers, but I'm thrilled to say it's well over 1 million unique downloads a quarter. Without advertising, we don't have to track very aggressively and have turned our focus on the content. So now we work with some high-level numbers for health checks. + +**DW**: Do you have a favorite topic on a particular show? + +**CF**: I love the #askerror questions that come into our [User Error][14] podcast. They're always a source of great conversation among the guys. The moment that show hits my podcast player, I hit Play. 
+ +But I'm a real newshound, so my favorite topics in the shows I do are driven by the news cycle. + +**DW**: How is your content licensed? + +**CF**: Attribution-ShareAlike 4.0 International. + +**DW**: How do we support you? + +**CF**: For us—and for all content creators in this space—word of mouth. People trust direct recommendations, and that means a lot in this area. It's hard to find good content that avoids clickbait and does its research. By the very nature of not chasing that clickbait, it limits the viral discovery of content that is working very hard to get right. So taking a few minutes to tell a friend, or share a post, or anything that helps spread awareness is real support! + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/10/linux-podcasts-Jupiter-Broadcasting + +作者:[Don Watkins][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/don-watkins +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop-music-headphones.png?itok=EQZ2WKzy (Woman programming) +[2]: https://www.jupiterbroadcasting.com/ +[3]: https://twitter.com/ChrisLAS +[4]: https://opensource.com/article/19/5/system76-secret-sauce +[5]: https://www.jupiterbroadcasting.com/tag/linux-action-show/ +[6]: https://twitter.com/BryanLunduke +[7]: https://www.jupiterbroadcasting.com/show/castablasta/ +[8]: https://linuxheadlines.show/ +[9]: https://linuxactionnews.com/ +[10]: https://chooselinux.show/ +[11]: https://coder.show/ +[12]: https://www.jupiterbroadcasting.com/show/self-hosted/ +[13]: https://linuxacademy.com/ +[14]: https://www.jupiterbroadcasting.com/show/error/ diff --git a/sources/talk/20191011 How to use IoT devices to keep children safe.md b/sources/talk/20191011 How to use IoT devices to keep children safe.md new file mode 100644 index 0000000000..5acc31a838 --- /dev/null +++ b/sources/talk/20191011 How to use IoT devices to keep children safe.md @@ -0,0 +1,62 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to use IoT devices to keep children safe?) +[#]: via: (https://opensourceforu.com/2019/10/how-to-use-iot-devices-to-keep-children-safe/) +[#]: author: (Andrew Carroll https://opensourceforu.com/author/andrew-carroll/) + +How to use IoT devices to keep children safe? +====== + +[![][1]][2] + +_IoT (Internet of Things) devices are transforming our lives rapidly. These devices are everywhere, from our homes to industries. According to some estimates, there will be 10 billion IoT devices by 2020. By 2025, the number of IoT devices will grow to 22 billion. IoT has found its application in a range of fields, including smart homes, industrial processes, agriculture, and even healthcare. With such a wide variety of applications, it is obvious why IoT has become one of the hot topics in recent years._ + +Several factors have contributed to the explosion of IoT devices in multiple disciplines. These include the availability of low-cost processors and wireless connectivity. Moreover, open-source platforms have enabled the exchange of information in driving innovation in the field of IoT. 
Compared with conventional application development, IoT has grown exponentially, in part because so many of its building blocks are open source.
+Before explaining how IoT can be used to protect children, a basic understanding of IoT technology is essential.
+
+**What are IoT devices?**
+IoT devices are those that can communicate with each other without the involvement of humans. Hence, smartphones and computers are not considered IoT devices by many experts. Moreover, IoT devices must be able to gather data and communicate it to other devices or the cloud for processing.
+
+However, there are some fields where we need to explore the potential of IoT. Children are vulnerable, which makes them an easy target for criminals and others who mean to harm them. Whether in the physical or the digital world, children are susceptible to crime. Since parents cannot be physically present to protect their children at all times, the need for monitoring tools is obvious.
+
+In addition to wearable devices for children, there are plenty of parental monitoring applications, such as Xnspy, that monitor children in real time and provide live updates. These tools ensure that the child is safe. While wearable devices ensure that the child is not physically in danger, parental monitoring apps ensure that the child is safe online.
+
+As more children spend time on their smartphones, it is no surprise to see them becoming the primary target for frauds and scammers. Moreover, there is also a chance of children becoming targets of cyberbullying, as pedophilia, catfishing, and other crimes are prevalent on the internet.
+
+Are these solutions enough? We need to find IoT solutions for ensuring our children’s safety, both online and offline. How can we keep children secure in these times? We need to come up with new and innovative solutions that keep our children safe. The solutions provided by IoT can help keep our children safe in schools as well as homes.
+
+**The potential of IoT**
+The benefits offered by IoT devices are numerous. For one, parents can remotely monitor their children without being too overbearing. Thus, children have the space and freedom to become independent while having a safe environment in which to do so.
+
+Moreover, parents do not have to worry about their children’s safety. IoT devices can provide 24/7 updates about a child. Monitoring apps such as Xnspy go a step further in providing information regarding a child’s smartphone activity. As IoT devices become more sophisticated, it is only a matter of time before we have devices with increased battery life. IoT devices such as location trackers can provide accurate details regarding a child’s whereabouts, so parents do not have to worry.
+
+While wearable devices are great to have, they are often not enough to ensure a child’s safety. Hence, to provide a safe environment for children, we need other methods. Many incidents have shown that schools are just as susceptible to attacks as any other public place. Therefore, schools need to adopt safety measures that keep children and teachers safe. Here, IoT devices can be used to detect threats and take the necessary action to prevent an attack. The threat detection system can include cameras. Once the system detects a threat, it can notify the authorities, including law enforcement agencies and hospitals. Devices such as smart locks can be used to lock down the school, including classrooms, to protect children.
In addition to this, parents can be informed about their child’s safety and receive immediate alerts about threats. This would require wireless technology such as Wi-Fi, along with sensors. Thus, schools need to create a budget specifically for providing security in the classroom.
+
+Smart homes have made it possible to turn off lights with a clap, or by telling your home assistant to do so. Likewise, IoT devices can be used in a house to protect children. In a home, IoT devices such as cameras can be used to provide parents with full visibility when looking after the children. When parents aren’t in the house, cameras and other sensors can be used to detect if any suspicious activity takes place. Other devices, such as smart locks connected to these sensors, can lock the doors, windows, and bedrooms to ensure that the kids are safe.
+In short, there are plenty of IoT solutions that can be introduced to keep kids safe.
+
+**Just as bad as they are good**
+Sensors in IoT devices create an enormous amount of data, and the safety of that data is a crucial factor. Data gathered on a child falling into the wrong hands is a real risk, so precautions are required. Any data breached from your IoT devices can be used to determine behavior patterns, which is why one must invest in safe IoT solutions that do not breach user privacy.
+
+Often, IoT devices connect to Wi-Fi to transmit data between devices. Unsecured networks that deal with unencrypted data pose certain risks, as such networks are easy to eavesdrop on. Hackers can use such network points to break into the system. They can also introduce malware into the system, making it vulnerable. Moreover, cyberattacks on devices and public networks, such as those in schools, can lead to data breaches and theft of private data. Hence, an overall plan for protecting the network and IoT devices must be in effect when implementing an IoT solution for the protection of children.
+
+The potential of IoT devices to protect children in schools and homes is yet to be fully realised. We need more effort to protect the network that connects IoT devices. Moreover, the data generated by an IoT device can fall into the wrong hands, causing more trouble. So this is one area where IoT security is essential.
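+
+To make the location-tracking scenario above concrete, here is a minimal sketch, in Python, of the kind of geofence check a tracker’s backend could run. All the names, coordinates and thresholds are hypothetical placeholders, not a description of any particular product:
+
+```python
+import math
+
+# Hypothetical safe zone: the school gate, plus an allowed radius in metres.
+SAFE_ZONE = (48.2082, 16.3738)  # (latitude, longitude)
+SAFE_RADIUS_M = 250
+
+def haversine_m(lat1, lon1, lat2, lon2):
+    """Great-circle distance between two GPS points, in metres."""
+    r = 6371000  # mean Earth radius in metres
+    phi1, phi2 = math.radians(lat1), math.radians(lat2)
+    dphi = math.radians(lat2 - lat1)
+    dlmb = math.radians(lon2 - lon1)
+    a = (math.sin(dphi / 2) ** 2
+         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
+    return 2 * r * math.asin(math.sqrt(a))
+
+def check_position(lat, lon):
+    """Return an alert message if the reported position leaves the safe zone."""
+    distance = haversine_m(lat, lon, *SAFE_ZONE)
+    if distance > SAFE_RADIUS_M:
+        return "ALERT: tracker is {:.0f} m outside the safe zone".format(distance)
+    return None
+
+# A reading sent by the wearable device (hypothetical values).
+alert = check_position(48.2150, 16.3900)
+if alert:
+    print(alert)  # a real system would push this to the parents' phones
+```
+
+In a real deployment, the position reports and the alert itself would travel over an encrypted channel, in line with the warnings about unencrypted data above.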
+
+--------------------------------------------------------------------------------
+
+via: https://opensourceforu.com/2019/10/how-to-use-iot-devices-to-keep-children-safe/
+
+作者:[Andrew Carroll][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensourceforu.com/author/andrew-carroll/
+[b]: https://github.com/lujun9972
+[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Visual-Internet-of-things_EB-May18.jpg?resize=696%2C507&ssl=1 (Visual Internet of things_EB May18)
+[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Visual-Internet-of-things_EB-May18.jpg?fit=900%2C656&ssl=1
diff --git a/sources/talk/20191012 How the oil and gas industry exploits IoT.md b/sources/talk/20191012 How the oil and gas industry exploits IoT.md
new file mode 100644
index 0000000000..d78a6ad967
--- /dev/null
+++ b/sources/talk/20191012 How the oil and gas industry exploits IoT.md
@@ -0,0 +1,55 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How the oil and gas industry exploits IoT)
+[#]: via: (https://www.networkworld.com/article/3445204/how-the-oil-and-gas-industry-exploits-iot.html)
+[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
+
+How the oil and gas industry exploits IoT
+======
+The energy industry has embraced IoT technology in its operations, from monitoring well production to predicting when its gear will need maintenance.
+Like many traditional industries that have long-standing, tried-and-true methods of operation, the oil-and-gas sector hasn’t been the quickest to embrace [IoT][1] technology – despite having had instrumentation on drilling rigs, pipelines and refining facilities for decades, the extraction industry has only recently begun to work with modern IoT.
+
+Part of the issue has been interoperability, according to Mark Carrier, oil-and-gas development director for RTI, which produces connectivity software for industrial companies. Energy companies are most comfortable working with the same vendors they’ve worked with before, but that tendency means there isn’t a strong impetus toward sharing data across platforms.
+
+“On a very low level, things are pretty well-connected, at the connectivity to the back-end they’re well-connected, but there’s a huge expense in understanding what that data is,” he said.
+
+Christine Boles, a vice president in Intel’s IoT group, said that the older systems still being used by the industry have been tough to displace.
+
+“The biggest challenge they’re facing is aging infrastructure, and how they get to a more standardized, interoperable version,” she said.
+
+Changes are coming, however, in part because energy prices have taken a hit in recent years. Oil companies have been looking to cut costs, and one of the easiest places to do that is in integration and automation. On a typical oil well, said Carrier, a driller will have up to 70 different companies’ products working – sensors covering everything from flow rates to temperature and pressure to azimuth and incline, different components of the drill itself – but until fairly recently, these all had to be independently monitored.
+
+An IoT solution that can tie all these various threads of data together, of late, has become an attractive option for companies looking to minimize human error and glean real-time insights from the wide range of instrumentation present on the average oil rig.
+
+Those threads are numerous, with a lot of vertically unique sensor and endpoint types. Mud pulse telemetry uses a module in a drill head to create slight fluctuations in the pressure of drilling fluid to pulse information to a receiver on the surface. Temperature and pressure sensors operating in the extreme environmental conditions of an active borehole might use heavily ruggedized serial cable to push data back aboveground.
+
+Andre Kindness, a principal analyst at Forrester Research, said that the wide range of technologies, manufacturers and standards in use at any given oil-and-gas facility is the product of cutthroat competition.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3445204/how-the-oil-and-gas-industry-exploits-iot.html
+
+作者:[Jon Gold][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Jon-Gold/
+[b]: https://github.com/lujun9972
+[1]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
diff --git a/sources/talk/20191012 The Role of Open Source Tools and Concepts in IoT Security.md b/sources/talk/20191012 The Role of Open Source Tools and Concepts in IoT Security.md
new file mode 100644
index 0000000000..ed650062c6
--- /dev/null
+++ b/sources/talk/20191012 The Role of Open Source Tools and Concepts in IoT Security.md
@@ -0,0 +1,170 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (The Role of Open Source Tools and Concepts in IoT Security)
+[#]: via: (https://opensourceforu.com/2019/10/the-role-of-open-source-tools-and-concepts-in-iot-security/)
+[#]: author: (Shashidhar Soppin https://opensourceforu.com/author/shashidhar-soppin/)
+
+The Role of Open Source Tools and Concepts in IoT Security
+======
+
+[![][1]][2]
+
+_With IoT devices permeating the commercial and personal space, their security becomes important. Hackers and malicious agents are always trying to exploit vulnerabilities in order to control these devices. The advantages of using open source rather than proprietary software need no elaboration. Here is a selection of open source tools popular in the IoT field._
+
+Security is one of the key factors to consider while choosing any IoT platform. The IoT market is very large and is growing constantly. Scary hacks and security breaches are happening to IoT related devices on a regular basis. These could be DoS (Denial of Service) attacks, or attacks that completely wipe out the firmware sitting on top of the device. Early detection and prevention of these attacks is a major concern for any enterprise or organisation. Many companies are adopting open source security tools and solutions and laying out predefined best practices for IoT security.
+
+![Figure 1: The Mirai botnet][3]
+
+**Recent security attacks**
+As explained earlier, many attacks and threats targeting IoT systems have happened during the last few years. Let’s look at some of the major ones.
+
+_**The Silex malware attack**_: In 2019, a 14-year-old hacker bricked around 4,000 IoT devices with a new strain of malware called Silex, which was used to abruptly shut down command and control servers. Larry Cashdollar, a senior security intelligence response engineer at Akamai, first discovered this malware on his honeypot. Like the BrickerBot malware in 2017, Silex too targeted insecure IoT devices and made them unusable.
+
+Silex trashes an IoT device’s storage, dropping firewall rules, removing the network configuration and then halting the device completely. It is as destructive as it can get without actually frying the IoT device’s circuits. To recover, victims must manually reinstall the device’s firmware, a task too complicated for most device owners. (__)
+
+_**The BrickerBot attack**_: The BrickerBot malware attack took place in 2017, and its author has claimed that 60,000 modems and routers across India lost Internet connectivity. The incident affected modems and routers belonging to two Indian state-owned telecommunications service providers, Bharat Sanchar Nigam Limited (BSNL) and Mahanagar Telephone Nigam Limited (MTNL). The attack was so intensive that from July 25 to July 29, users reported losing Internet connectivity as routers and modems became stuck with their red LED remaining always on. The main purpose of this bot is to brick devices so they will not be usable once attacked. (__)
+
+![Figure 2: Princeton IoT Icon][4]
+
+**Note:** _Bricking is the process of changing the code of the device so that the hardware can no longer be used, thereby turning the device essentially into a brick (totally unusable)._
+
+**The Mirai botnet attack:** The Mirai botnet attack took place in 2016. This was a major malware/virus attack. Mirai is a self-propagating botnet virus. The source code for it was made publicly available by the author after a successful and well-publicised attack on the Krebs website. Since then, the source code has been built and used by many others to launch attacks on Internet infrastructure, causing major damage. The Mirai botnet code infects poorly protected Internet devices by using telnet to find those that are still using their factory default usernames and passwords. The effectiveness of Mirai is due to its ability to infect tens of thousands of these insecure devices and co-ordinate them to mount a DDoS attack against a chosen victim. (__)
+
+**Detection and prevention of security attacks**
+A major cause for concern with IoT devices is that vendors ship them with a stock Linux OS instead of developing a custom OS, which is time consuming and expensive. Most attackers/hackers know this and so target these devices.
+
+Some time back, Symantec offered a solution to this in the form of a router called Norton Core (__). This was not a success, as it was expensive and had a monthly maintenance cost. In addition, people felt that it was too early to use such a router that came with a monthly subscription, since most homes still do not have enough IoT enabled devices to make such an investment worthwhile.
+
+**Open source security tools**
+Subsequently, many state-of-the-art security tools with multiple security features have been launched. Some of the most used and popular open source security tools are featured below.
+
+**Princeton IoT-Inspector**
+This is an open source desktop tool with a one-click, easy-to-install process. It has many built-in security validation features:
+
+ * Automatically discovers IoT devices and analyses their network traffic.
+ * Helps one to identify security and privacy issues with graphs and tables.
+ * Requires minimal technical skills and no special hardware.
+
+
+
+This tool can be configured on Linux/Mac (Windows support is still under discussion).
+
+**What data does IoT Inspector collect?** For each IoT device in the network, IoT Inspector collects the following information and sends it to identified secure servers at Princeton University:
+
+ * Device manufacturers, based on the first six characters of the MAC address of each device on the network
+ * DNS requests and responses
+ * Destination IP addresses and ports contacted — but not the public-facing IP address (i.e., the one that your ISP assigns to you)
+ * Scrambled MAC addresses (i.e., those with a salted hash)
+ * Aggregate traffic statistics, i.e., the number of bytes sent and received over a period of time
+ * The names of devices on the identified network
+
+
+
+Collecting these types of data involves some risks, such as:
+
+ * Performance degradation
+ * Data breaches
+ * Best-effort support
+
+
+
+![Figure 3: OWASP IoT][5]
+
+_**How the security validation is done:**_ Princeton releases its findings in a journal/conference publication. When consumers are unsure about whether to buy a new IoT device or not, they can read the relevant papers before making a decision, checking if the device of interest features in the Princeton data. Otherwise, the consumer can always buy the product, analyse it with IoT Inspector, and return it if the results are unsatisfactory.
+
+**Open Web Application Security Project (OWASP) set of standards**
+OWASP is an open community dedicated to enabling organisations to conceive, develop, acquire, operate and maintain applications that can be trusted.
+
+_**Testing an IoT device for poor authentication/authorisation (OWASP I2):**_ When we think of weak authentication, we might think of passwords that are not changed on a regular basis, six-to-eight character passwords that are nonetheless easy to guess, or of systems without multi-factor authentication mechanisms. Unfortunately, with many smart devices, weak authentication causes major havoc.
+Many IoT devices are secured with default passwords like ‘1234’, ‘password’, or ‘ABCD’. Users put their password checks in client-side JavaScript code, send credentials without using HTTPS or other encrypted transport protocols, or require no passwords at all. This kind of mismanagement of passwords causes a lot of damage to devices.
+The OWASP I1 to I10 standards provide different levels of security, and are listed in Figure 3.
+
+ * I1 – Insecure Web interface
+ * I2 – Insufficient authentication/authorisation
+ * I3 – Insecure network services
+ * I4 – Lack of transport encryption
+ * I5 – Privacy concerns
+ * I6 – Insecure cloud interface
+ * I7 – Insecure mobile interface
+ * I8 – Insufficient security configurability
+ * I9 – Insecure software/firmware
+ * I10 – Poor physical security
+
+
+
+**Mainflux platform: For authentication and authorisation**
+Mainflux is an open source IoT platform providing features like edge computing and consulting services.
Mainflux Labs is a technology company offering an end-to-end, open source, patent-free IoT platform, an LF EdgeX Foundry compliant IoT edge gateway with an interoperable ecosystem of plug-and-play components, and consulting services for the software and hardware layers of the IoT technology. It provides enhanced and fine-grained security via the deployment-ready Mainflux authentication and authorisation server, with an access control scheme based on customisable API keys and scoped JWT. It also offers mutual TLS (mTLS) authentication using X.509 certificates, an NGINX reverse proxy for security, load balancing and termination of TLS and DTLS connections, etc. Many of these features can be explored and used as needed.
+
+**Best practices for building a secure IoT framework**
+To prevent or avoid attacks on any IoT device, environment or ecosystem, the following best practices need to be applied:
+
+ * Always use strong passwords for device accounts and Wi-Fi networks.
+ * Always change default passwords.
+ * Use the strongest and most recent encryption methods available when setting up Wi-Fi networks, such as WPA2.
+ * Develop the habit of disabling or protecting remote access to IoT devices when it is not needed.
+ * Use wired connections instead of wireless, where possible.
+ * Be careful when buying used IoT devices, as they could have been tampered with. It is better to verify the device with the vendor, or to buy from a certified reseller.
+ * Research the vendor’s device security measures as well as the features that they support.
+ * Modify the privacy and security settings of the device to meet your needs immediately after buying the device.
+ * It is better to disable features that are not used frequently.
+ * Install updates regularly, when they become available. It is a best practice to use the latest firmware updates.
+ * Ensure that an outage due to jamming or a network failure does not result in an insecure state of the installation.
+ * Verify if the smart features are required, or if a normal device suffices for the purpose.
+
+
+
+**Best practices for designers of IoT frameworks and device manufacturers**
+
+ * Always use SSL/TLS-encrypted connections for communication purposes.
+ * Check the SSL certificate and the certificate revocation list.
+ * Allow and encourage the use of strong passwords and change default passwords immediately.
+ * Provide a simple and secure update process with a chain of trust.
+ * Provide a standalone option that works without Internet and cloud connections.
+ * Prevent brute-force attacks at the login stage through account lockout measures or multi-factor authentication mechanisms (a minimal lockout sketch follows this list).
+ * Implement a smart fail-safe mechanism when the connection or power is lost or jammed.
+ * Remove unused tools and allow only trusted applications and software.
+ * Where applicable, security analytics features should be provided in the device management strategy.
+
+
+
+IoT developers and designers should include security at the start of the device development process, irrespective of whether the device is for the consumer market, the enterprise or industry. Incorporating security at the design phase always helps. Enabling security by default is very critical, as is providing the most recent operating systems and using secure hardware with the latest firmware versions.
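+
+As promised in the list above, here is a minimal sketch of the account-lockout idea in plain Python. The thresholds, names and in-memory state are hypothetical simplifications; real firmware would persist this state across reboots and pair it with multi-factor authentication:
+
+```python
+import time
+
+MAX_ATTEMPTS = 5         # failed logins allowed before lockout (hypothetical)
+LOCKOUT_SECONDS = 300    # how long an account stays locked (hypothetical)
+
+# In-memory state; a real device would keep this in persistent storage.
+failed_attempts = {}     # username -> consecutive failure count
+locked_until = {}        # username -> timestamp when the lock expires
+
+def try_login(username, password, verify):
+    """Check credentials via `verify`, locking the account after repeated failures."""
+    now = time.time()
+    if locked_until.get(username, 0) > now:
+        return "locked: try again later"
+    if verify(username, password):
+        failed_attempts.pop(username, None)  # reset the counter on success
+        return "ok"
+    failed_attempts[username] = failed_attempts.get(username, 0) + 1
+    if failed_attempts[username] >= MAX_ATTEMPTS:
+        locked_until[username] = now + LOCKOUT_SECONDS
+        failed_attempts.pop(username, None)
+        return "locked: too many failures"
+    return "invalid credentials"
+
+# Demo with a dummy verifier; a real one would check a salted password hash.
+verify = lambda user, pw: pw == "correct horse battery staple"
+for _ in range(6):
+    print(try_login("admin", "1234", verify))
+```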
+
+**Enabling PKI and digital certificates**
+Public key infrastructure (PKI) and X.509 digital certificates play important and critical roles in the development of secure IoT devices. It is always a best practice to provide the trust and control needed to distribute and identify public encryption keys, secure data exchanges over networks and verify identities.
+
+**API (application programming interface) security**
+For any IoT environment, API security is essential to protect the integrity of data. As this data is being sent from IoT devices to back-end systems, we always have to make sure that only authorised devices, developers and apps communicate with these APIs.
+
+**Patch management/continuous software updates**
+This is one crucial aspect of IoT security management. Providing the means of updating devices and software either over network connections or through automation is critical. Having a coordinated disclosure of vulnerabilities is also important for updating devices as soon as possible. Consider end-of-life strategies as well.
+
+Always remember that hard coded credentials should never be used nor be part of the design process. If there are any default credentials, users should immediately update them using strong passwords as described earlier, or follow multi-factor or biometric authentication mechanisms.
+
+**Hardware security**
+It is absolutely essential to make devices tamper-proof or tamper-evident, and this can be achieved by endpoint hardening.
+
+Strong encryption is critical to securing communication between devices. It is always a best practice to encrypt data at rest and in transit using cryptographic algorithms.
+
+IoT and operating system security are new to many security teams. It is critical to keep security staff up to date with new or unknown systems, enabling them to learn new architectures and programming languages to be ready for new security challenges. C-level and cyber security teams should receive regular training to keep up with modern threats and security measures.
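+
+As a concrete illustration of the advice above to encrypt data at rest, the following sketch uses the third-party Python cryptography package (an assumption made for this example, not a tool mandated by the article). The key handling is deliberately simplified; real hardware would keep the key in a secure element or TPM rather than in plain memory:
+
+```python
+from cryptography.fernet import Fernet  # pip install cryptography
+
+# Generate a symmetric key once and store it somewhere safe
+# (on real hardware: a secure element or TPM, not a plain file).
+key = Fernet.generate_key()
+cipher = Fernet(key)
+
+# A sensor reading the device wants to store locally (hypothetical payload).
+reading = b'{"sensor": "temp-01", "celsius": 21.4}'
+
+token = cipher.encrypt(reading)   # ciphertext that is safe to write to flash
+print(token)
+
+restored = cipher.decrypt(token)  # only holders of the key can recover it
+assert restored == reading
+```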
+
+--------------------------------------------------------------------------------
+
+via: https://opensourceforu.com/2019/10/the-role-of-open-source-tools-and-concepts-in-iot-security/
+
+作者:[Shashidhar Soppin][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensourceforu.com/author/shashidhar-soppin/
+[b]: https://github.com/lujun9972
+[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Security-of-IoT-devices.jpg?resize=696%2C550&ssl=1 (Security of IoT devices)
+[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Security-of-IoT-devices.jpg?fit=900%2C711&ssl=1
+[3]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Figure-1-The-Mirai-botnet.jpg?resize=350%2C188&ssl=1
+[4]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Figure-2-Princeton-IoT-Icon.jpg?resize=350%2C329&ssl=1
+[5]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Figure-3-OWASP-IoT.jpg?resize=350%2C147&ssl=1
diff --git a/sources/talk/20191014 A Primer on Open Source IoT Middleware for the Integration of Enterprise Applications.md b/sources/talk/20191014 A Primer on Open Source IoT Middleware for the Integration of Enterprise Applications.md
new file mode 100644
index 0000000000..96c4558da0
--- /dev/null
+++ b/sources/talk/20191014 A Primer on Open Source IoT Middleware for the Integration of Enterprise Applications.md
@@ -0,0 +1,148 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (A Primer on Open Source IoT Middleware for the Integration of Enterprise Applications)
+[#]: via: (https://opensourceforu.com/2019/10/a-primer-on-open-source-iot-middleware-for-the-integration-of-enterprise-applications/)
+[#]: author: (Gopala Krishna Behara https://opensourceforu.com/author/gopalakrishna-behara/)
+
+A Primer on Open Source IoT Middleware for the Integration of Enterprise Applications
+======
+
+[![][1]][2]
+
+_The Internet of Things (IoT) integrates the virtual world of information with the real world of devices through a layered architecture. IoT middleware is an interface between the physical world (hardware layer) of devices and the virtual world (application layer), and is responsible for interacting with devices and information management systems. This article discusses IoT middleware, the characteristics of open source IoT middleware, IoT middleware platform architecture and key open source IoT middleware platforms._
+
+With billions of devices generating trillions of bytes of data, there is a need for heterogeneous IoT device management and application enablement. This requires a revamp of existing architectures. There is a need to identify industry agnostic application middleware to address the complexity of IoT solutions, future changes, and the integration of IoT with mobile devices, various types of machinery, equipment and tablets, among other devices.
+
+According to Statista, the total installed base of IoT connected devices is projected to be 75.44 billion worldwide by 2025.
+
+Most IoT applications are heterogeneous and domain specific. Deciding on the appropriate IoT middleware for app development is the major challenge faced by developers today. The functionalities provided by different middleware vendors are broadly similar, differing mainly in their underlying technologies.
Middleware services provided by different IoT vendors include data acquisition, device management, data storage, security and analytics. Selecting the right middleware platform is one of the critical steps in application development.
+
+The important parameters for choosing the right middleware for an IoT application are scalability, availability, the ability to handle huge amounts of data, a high processing speed, flexibility, integration with varied analytical tools, security and cost.
+
+**Industry adoption of open source middleware in IoT**
+The market for IoT middleware was valued at US$ 6.44 billion in 2018 and is expected to reach a value of US$ 18.68 billion by 2024 at a CAGR of 19.72 per cent, over the forecast period 2019-2024 (__).
+
+According to an Ericsson forecast (__), there will be around 29 billion connected devices in use by 2022, of which around 18 billion will be related to IoT. Gartner forecasts that 14.2 billion connected things will be in use in 2019, and that the total will reach 25 billion by 2021, producing an immense volume of data.
+
+**IoT middleware and its features**
+Middleware acts as an agent between the service providers (IoT devices) and service consumers (enterprise applications). It is a software layer that sits in between applications and objects. It is a mediator interface that enables the interaction between the Internet and ‘things’. It hides the heterogeneity among the devices, components and technology of an IoT system. Middleware provides solutions to frequently encountered problems, such as interoperability, security and dependability. The following are the important features of middleware, which improve the performance of devices.
+
+**Flexibility:** This feature helps in establishing better connectivity, which improves the communication between applications and things. There are different kinds of flexibility (e.g., in response time, or in how quickly the system can evolve and change).
+
+**Transparency:** Middleware hides many complexities and architectural information details from both the application and the object sides, so that the two can communicate with minimum knowledge of either side.
+
+**Interoperability:** This functionality allows two sets of applications on interconnected networks to exchange data and services meaningfully, even with different assumptions about protocols, data models, and configurations.
+
+**Platform portability:** An IoT platform should be able to communicate from everywhere, anytime, with any device. Middleware runs on the user side and can provide independence from network protocols, programming languages, OSs and others.
+
+**Re-usability:** This feature makes designing and developing easier by adapting system components and assets for specific requirements, which results in cost efficiency.
+
+**Maintainability:** Middleware should be easy to maintain and tolerant of faults, so that the network can be kept running efficiently and extended over time.
+
+**Security:** Middleware should provide different security measures for ubiquitous applications and pervasive environments. Authentication, authorisation and access control help in verification and accountability.
+
+**Characteristics of open source IoT middleware**
+An open source IoT middleware platform should be fault-tolerant and highly available. It has the following characteristics:
+
+ * No vendor lock-in, and it comes with the surety of seamless integration of enterprise-wide tools, applications, products and systems developed and deployed by different organisations and vendors.
+ * Open source middleware increases productivity, speeds up time to market, reduces risk and increases quality.
+ * Adoption of open source middleware enhances interoperability with other enterprise applications, because of the ability to reuse recommended software stacks, libraries and components.
+ * IoT middleware platforms should support open APIs and cloud deployment models, and be highly available.
+ * It should support open interfaces, data formats and languages like REST APIs, JSON, XML and Java, and be freely available.
+ * An IoT middleware platform should support multi-service and heterogeneous devices, and be compatible with the hardware for sensing environmental information.
+ * Migration to any new platform or system should be seamless. It should be possible to adopt or integrate with any solution.
+ * The information data model should be distributed and extensible, providing availability and scalability to the system.
+ * An IoT middleware platform should support major communication protocols like MQTT, CoAP, HTTP, WebSockets, etc.
+ * An IoT middleware platform should support different security features like encryption, authentication, authorisation and auditing.
+ * It should support technologies such as M2M applications, real-time analytics, machine learning, artificial intelligence, visualisation and event reporting.
+
+
+
+**IoT middleware architecture**
+The middleware mediates between IoT data producers and the consumers. APIs for interactions with the middleware are based on standard application protocols.
+
+API endpoints for accessing the data and services should be searchable via an open catalogue, and should contain linked metadata about the resources.
+
+The device manager communicates messages to the devices. The database needs to access and deliver messages to the devices with minimum latency.
+
+Data processing involves data translation, aggregation and filtering of the incoming data, which enables real-time decision making at the edge. The database needs to support high-speed reads and writes with sub-millisecond latency. This helps in performing complex analytical computations on the data.
+
+The IoT data stream normalises the data to a common format and sends it to enterprise systems. The database needs to perform the data transformation operations efficiently.
+
+Middleware supports the authentication of users, organisations, applications and devices. It supports functionalities like certificates, password credentials, API keys, tokens, etc. It should also support single sign-on, time based credentials, application authentication (via signatures) and device authentication (via certificates).
+
+Logging is necessary for both system debugging and auditing. Middleware manages the logging of system debugging and auditing details. It helps to track the status of the various services, APIs, etc, and to administer them.
+
+**Key open source IoT middleware platforms**
+Holistically, an IoT implementation covers data collection and insertion through sensors as well as giving control back to devices.
The different types of IoT middleware are categorised as:
+
+ * Application-centric (application and data management)
+ * Platform-centric (application enablement, device management and connectivity management)
+ * Industry-specific (manufacturing, healthcare, energy and utilities, transportation and logistics, agriculture, etc)
+
+
+
+![Figure 1: IoT middleware architecture][3]
+
+Selecting the right middleware during the various stages of IoT implementation depends on multiple factors like the size of the enterprise, the nature of the business, the development and operational perspectives, etc. The following are some of the top open source middleware platforms for IoT based applications.
+
+**Kaa** is platform-centric middleware. It manages an unlimited number of connected devices with cross-device interoperability. It performs real-time device monitoring, remote device provisioning and configuration, and collection and analysis of sensor data. It is a microservices based, portable, horizontally scalable and highly available IoT platform. It supports on-premise, public cloud and hybrid models of deployment. Kaa is built on open components like Kafka, Cassandra, MongoDB, Redis, NATS, Spring, React, etc.
+
+**SiteWhere** is platform-centric middleware. It provides ingestion, storage, processing and integration of device data. It supports multi-tenancy, MQTT, AMQP, Stomp, CoAP and WebSocket. It seamlessly integrates with Android, iOS, and multiple SDKs. It is built on open source technology stacks like MongoDB, Eclipse Californium, InfluxDB, HBase and many others.
+
+**IoTSyS** is platform-centric and industry-specific middleware. It uses IPv6 for non-IP IoT devices and systems. It is used in smart city and smart grid projects to make the automation infrastructure smart. IoTSyS provides interoperable Web technologies for sensor and actuator networks.
+
+**DeviceHive** is cloud agnostic, microservices based, platform-centric middleware used for device connectivity and management. It has the ability to connect to any device via MQTT, REST or WebSockets. It supports Big Data solutions like ElasticSearch, Apache Spark, Cassandra and Kafka for real-time and batch processing.
+
+**EclipseIoT (Kura)** provides device connectivity, data transformation and business logic with intelligent routing, edge processing/analytics and real-time decisions.
+
+**Zetta** is application-centric middleware. It is a service-oriented open source IoT platform built on Node.js, combining REST APIs, WebSockets and reactive programming. It can run on cloud platforms like Heroku to create geo-distributed networks and stream data into machine analytics systems like Splunk.
+
+**MainFlux** is application-centric middleware providing solutions based on the cloud. It has been developed as microservices, containerised by Docker and orchestrated with Kubernetes. The architecture is flexible and allows seamless integration with enterprise systems like ERP, BI and CRM. It can also integrate with databases, analytics, backend systems and cloud services easily. In addition, it supports REST, MQTT, WebSocket and CoAP.
+
+**Distributed Services Architecture (DSA)** facilitates decentralised device inter-communication, allowing protocol translation and data integration to and from third party data sources.
+
+**OpenRemote** is application-centric middleware used to connect any device, regardless of vendor or protocol, to create meaningful connections by converting data points into smart applications.
It finds use in home automation, commercial buildings, public spaces and healthcare. Data visualisation is integrated with devices and sensors, and turns data into smart applications.
+
+**OpenIoT** is application-centric open source middleware for pulling information from sensors. It incorporates Sensing-as-a-Service for deploying and providing services in cloud environments.
+
+**ThingsBoard** is an open source IoT platform for data collection, processing, data visualisation and device management. It supports IoT protocols like MQTT, CoAP and HTTP, with on-premise and cloud deployment. It is horizontally scalable, and stores data in Cassandra, HSQLDB or PostgreSQL.
+
+**NATS.io** is a simple, secure and high performance open source messaging system for cloud native solutions. It is built on a microservices architecture, with high performance, security and resilience.
+
+**Benefits of open source IoT middleware**
+Open source middleware for the IoT has the following advantages over proprietary options:
+
+ * It is easy to upgrade to new technologies with open source middleware.
+ * It has the ability to connect with upcoming device protocols and backend applications.
+ * Open source middleware ensures lower overall software costs, and is easier to use when changing technology, with open source APIs for integration.
+ * It has a microservices based architecture and is built using open source technologies, resulting in high performance, scalability and fault-tolerance.
+ * It provides multi-protocol support and is hardware-agnostic. It supports connectivity for any device and any application.
+ * It has the flexibility to allow the cloud service provider to be changed.
+
+
+
+That said, it is very important to choose the right set of open source middleware for an IoT solution, and this is a big challenge as the market offers a vast choice.
+
+Analyse the business problem and arrive at the solution as a first step. Break the solution into services and understand the middleware needs of these services. This will help to narrow down the middleware choices.
+
+IoT middleware helps overcome the problems associated with the heterogeneity of the entire Internet of Things by enabling smooth communication among devices and components from different vendors, based on different technologies.
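+
+To ground the protocol support mentioned throughout this article, here is a minimal sketch of a device publishing telemetry to an MQTT-capable middleware platform over TLS, assuming the third-party paho-mqtt 1.x Python package. The broker host, topic and credentials are hypothetical placeholders, not the conventions of any specific platform discussed above:
+
+```python
+import json
+import time
+
+import paho.mqtt.client as mqtt  # pip install "paho-mqtt<2"
+
+BROKER = "middleware.example.com"    # hypothetical broker host
+TOPIC = "devices/pump-17/telemetry"  # hypothetical topic naming scheme
+
+client = mqtt.Client(client_id="pump-17")
+client.username_pw_set("device-user", "device-secret")  # placeholder credentials
+client.tls_set()  # encrypt the connection using the system CA store
+client.connect(BROKER, 8883)
+client.loop_start()  # handle network traffic in a background thread
+
+# Publish one reading per second; a real device would read actual sensors.
+for _ in range(3):
+    payload = {"ts": int(time.time()), "flow_lpm": 42.5, "temp_c": 36.1}
+    client.publish(TOPIC, json.dumps(payload), qos=1)
+    time.sleep(1)
+
+client.loop_stop()
+client.disconnect()
+```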
+
+--------------------------------------------------------------------------------
+
+via: https://opensourceforu.com/2019/10/a-primer-on-open-source-iot-middleware-for-the-integration-of-enterprise-applications/
+
+作者:[Gopala Krishna Behara][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensourceforu.com/author/gopalakrishna-behara/
+[b]: https://github.com/lujun9972
+[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Middle-were-architecture-illustration.jpg?resize=696%2C426&ssl=1 (Middle were architecture illustration)
+[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Middle-were-architecture-illustration.jpg?fit=800%2C490&ssl=1
+[3]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Figure-1-IoT-middleware-architecture.jpg?resize=350%2C162&ssl=1
diff --git a/sources/talk/20191014 Pros and cons of event-driven security.md b/sources/talk/20191014 Pros and cons of event-driven security.md
new file mode 100644
index 0000000000..b65a9153e5
--- /dev/null
+++ b/sources/talk/20191014 Pros and cons of event-driven security.md
@@ -0,0 +1,160 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Pros and cons of event-driven security)
+[#]: via: (https://opensource.com/article/19/10/event-driven-security)
+[#]: author: (Yuriy Andamasov https://opensource.com/users/yuriy-andamasov)
+
+Pros and cons of event-driven security
+======
+Event-driven security is not an impenetrable wall, but it may be cheaper
+and better than what you've been doing to prevent data breaches.
+![Three closed doors][1]
+
+Great news, everyone! Forrester Research says that [95% of all recorded breaches][2] in 2016 came from only three industries: government, technology, and retail. Everyone else is safe... ish, right?
+
+Hold on for a moment. Tech? Retail? What kind of industry diversification is this? We are, after all, living in 2019, where every business is a tech business. And all of us are continuously selling something, whether it’s an innovative product or an amazing service.
+
+So what the report should have said is that 95% of all recorded breaches came from attacks on 95% of all businesses both online and offline. And some of the attackers went for the .gov.
+
+What’s more, [43% of attackers target small businesses][3]—and that’s a lot considering that, on average, a hack attempt takes place every 39 seconds.
+
+To top things off, the average cost of a data breach in 2020 is expected to exceed [$150 million][4]. These stats sound a bit more terrifying out of context, but the threat is still very much real. Ouch.
+
+What are our options then?
+
+Well, either the developers, stakeholders, decision-makers, and business owners willingly risk the integrity and security of their solutions by doing nothing, or they can consider fortifying their digital infrastructure.
+
+Sure, the dilemma doesn’t seem like it offers too many options, and that’s only because it doesn’t. That said, establishing efficient network security is easier said than done.
+
+### The cost of safety
+
+Clearly, security is an expensive endeavor, a luxury even.
+
+ * Cybersecurity costs increased by 22.7% in only a year from 2016 to 2017.
+ * According to Gartner, organizations spent a total of $81.6 billion on cybersecurity, a $17.7 billion increase!
 * And the worst part yet—the problem doesn't seem like it's going away regardless of how much money we throw at it.

Perhaps we are doing something wrong? Maybe it's the way we perceive network security that's flawed? Maybe, just maybe, there's a cheaper AND better solution?

### Scalability: Bad?

Software, network, and architecture development have evolved dramatically over the last decade. We've moved from the age-old monolithic approach to leaner, more dynamic methodologies that allow faster reactions to the ever-shifting demands of the modern market.

That said, flexibility comes at a cost. A monolith is a solid, stable element of infrastructure where a small change can crash the whole thing like a house of cards. But said change—regardless of its potential danger—is easily traceable.

Today, the architecture is mostly service-based, where every single piece of functionality is like a self-contained Lego block. An error in one of the blocks won't take down the entire system. It may not even affect the blocks standing near it.

This approach, while adding scalability, has a downside: it's really hard to trace a single malicious change, especially in an environment where every element is literally bombarded with new data coming from anywhere: an HR update, a security patch, or, well, a malicious code attack.

Does this mean it's best if we sacrifice scalability in favor of security?

Not at all. We've moved away from the monolith for a reason. Going back now will probably cost you your entire project. The tricky part is effectively identifying what is and what isn't a threat, as this is where the flaw of microservices lies.

We need preventive measures.

### Events, alerts, and incidents

Everything that happens within your network can be described in one of three words: event, alert, or incident.

An **event** is any observed change taking place in a network, environment, or workflow. So, for example, when a new firewall policy is pushed, you may consider that an event has happened. When the routers are updated, another event has happened, and so on and so forth.

An **alert** is an event that requires action. In simpler words, if you or your team need to do something due to the event taking place, it is considered an alert.

According to the definition in NIST SP 800-61 (the Computer Security Incident Handling Guide), an **incident** is an event that violates your security policies. Or, in simpler words, it is an event that negatively impacts the business, like a worm spreading through the network, a phishing attempt, or the loss of sensitive data.

By this logic, your infrastructure developers, security officers, and net admins are tasked with a very simple mission: establishing efficient preventive measures against any and all incidents.

Again, easier said than done.

There are simply too many different events taking place at one time. Every change, shift, or update differs from the others, resulting in dozens of false-positive incidents. Add the fact that the mischievous events are very keen on disguising themselves, and you'll get why your net admins look like they've lived on coffee and Red Bull for (at least) the past few weeks.

Is there anything we, as a responsible community of developers, managers, stakeholders, product, and business owners, can do?

### Event-driven security in a nutshell

What do all the things you ignore, act upon, or react to have in common?

An event.

Something needs to happen for you to respond to it in any shape or form.
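To make "responding to events" concrete before going further, here is a tiny, self-contained sketch of an event dispatcher. The event names and handlers are hypothetical, invented purely for illustration, and this is not a reference to any particular security product; the point is that handlers subscribe to event types and run automatically as events arrive, with nobody issuing commands.

```python
from collections import defaultdict

# Map event types to the handlers that react to them.
handlers = defaultdict(list)

def on(event_type):
    """Register the decorated function as a handler for an event type."""
    def register(func):
        handlers[event_type].append(func)
        return func
    return register

def emit(event_type, **data):
    """Dispatch an event to every registered handler."""
    for handler in handlers[event_type]:
        handler(data)

@on("firewall_policy_pushed")          # an event, in the terms defined above
def audit_policy(event):
    print("logging policy change:", event)

@on("login_failed")
def maybe_alert(event):
    if event["attempts"] > 3:          # an event that requires action: an alert
        print("ALERT: repeated login failures for", event["user"])

emit("firewall_policy_pushed", rule="deny tcp/23")
emit("login_failed", user="alice", attempts=5)
```

Whether a given event is noise, an alert, or a full incident then becomes a matter of the rules encoded in the handlers, rather than of someone watching a dashboard.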
Additionally, many events are similar to one another and can be categorized as a stream.

Here's an example.

Let's say you have an e-commerce store. One of your clients adds an item to their cart (event) and then removes it (event) or proceeds with the purchase (event).

Beyond simply categorizing these events, we can analyze them to identify behavioral patterns, which makes it easier to identify threats in real time (or even to empower HR/dev/marketing teams with additional data).

#### Event vs. command

So event-driven security is essentially based on following up on events. Were we ever _not_ following up on them? Didn't we have commands for that?

Yes, yes, we did, and that's partially the problem. Here's an example of an event versus a command:

> _Event: I sit on the couch, and the TV turns on._
>
> _Command: I sit on the couch and turn on the TV._

See the difference? I had to perform an action in the second scenario; in the first, the TV reacted on its own to the event of me sitting on the couch.

The first approach ensures the integrity of your network through efficient use of automation, essentially allowing the software to operate on its own and decide whether to launch the next season of _Black Mirror_ on Netflix or to quarantine an upcoming threat.

#### Isolation

Any event is essentially a trigger that launches the next application in the architecture. A user inputs their login, and the system validates its correctness, requests confirmation from the database, and tests the input for injected code.

So far, so good. Not much has changed, right?

Here's the thing: every process and every app runs autonomously, like a separate event, each triggering its own chain. None of the apps knows whether other apps have been triggered or whether they are running any processes.

Think of them as separate, autonomous clusters. If one is compromised, it will not affect the entirety of the system, as it simply doesn't know if anything else exists. That said, a malfunction in one of the clusters will trigger an alert, thus preventing the incident.

#### An added bonus

Isolated apps are not dependent on one another, meaning you'll be able to plug in as many of them as you need without any of them risking or affecting the rest of the system.

Call it scalability out of the box, if you will.

### Pros of the event-driven approach

We've already discussed most of the pros of the event-driven approach. Let's summarize them here in the form of short bullet points.

 * **Encapsulation:** Every process has a set of clear, enforceable boundaries.
 * **Decoupling:** The processes are independent and unaware of one another.
 * **Scalability:** It's easy to add new functionality, apps, and processes.
 * **Data generation:** Event streams generate predictable data patterns you can easily analyze.

### Cons of the event-driven approach

Sadly, despite an impressive array of benefits to the business, event-driven architecture and security are not silver bullets. The approach has its flaws.

For starters, developing any architecture with an insane level of scalability is hard, expensive, and time-consuming.

Event-driven security is far from being a truly impenetrable wall. Hackers evolve and adapt rather quickly. They'll likely find a breach in any system if they put their mind to it, whether through coding or through phishing.

Luckily, you don't have to build a Fort Knox.
All you need is a solid system that's hard enough to crack for the hacker to give up and move to an easier target. The event-driven approach to network security does just that.

Moreover, it minimizes your losses if an incident actually happens, so you have that going for you, which is nice.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/event-driven-security

作者:[Yuriy Andamasov][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/yuriy-andamasov
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDU_UnspokenBlockers_1110_A.png?itok=x8A9mqVA (Three closed doors)
[2]: https://www.techrepublic.com/article/forrester-what-can-we-learn-from-a-disastrous-year-of-hacks-and-breaches/
[3]: https://www.cybintsolutions.com/industries-likely-to-get-hacked/
[4]: https://www.cybintsolutions.com/cyber-security-facts-stats/
diff --git a/sources/talk/20191015 Beamforming explained- How it makes wireless communication faster.md b/sources/talk/20191015 Beamforming explained- How it makes wireless communication faster.md
new file mode 100644
index 0000000000..d2ff87a163
--- /dev/null
+++ b/sources/talk/20191015 Beamforming explained- How it makes wireless communication faster.md
@@ -0,0 +1,106 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Beamforming explained: How it makes wireless communication faster)
[#]: via: (https://www.networkworld.com/article/3445039/beamforming-explained-how-it-makes-wireless-communication-faster.html)
[#]: author: (Josh Fruhlinger https://www.networkworld.com/author/Josh-Fruhlinger/)

Beamforming explained: How it makes wireless communication faster
======
Beamforming uses the science of electromagnetic interference to make Wi-Fi and 5G connections more precise.

Beamforming is a technique that focuses a wireless signal towards a specific receiving device, rather than having the signal spread in all directions from a broadcast antenna, as it normally would. The resulting more direct connection is faster and more reliable than it would be without beamforming.

Although the principles of beamforming have been known since the 1940s, in recent years beamforming technologies have introduced incremental improvements in Wi-Fi networking. Today, beamforming is crucial to the [5G networks][1] that are just beginning to roll out.

[5G versus 4G: How speed, latency and application support differ][1]

### How beamforming works

A single antenna broadcasting a wireless signal radiates that signal in all directions (unless it's blocked by some physical object). That's the nature of how electromagnetic waves work. But what if you wanted to focus that signal in a specific direction, to form a targeted beam of electromagnetic energy? One technique for doing this involves having multiple antennas in close proximity, all broadcasting the same signal at slightly different times. The overlapping waves will produce interference that in some areas is _constructive_ (it makes the signal stronger) and in other areas is _destructive_ (it makes the signal weaker, or undetectable).
If executed correctly, this beamforming process can focus your signal where you want it to go.

The mathematics behind beamforming is very complex (the [Math Encounters blog][4] has an introduction, if you want a taste), but the application of beamforming techniques is not new. Any form of energy that travels in waves, including sound, can benefit from beamforming techniques; they were first developed to [improve sonar during World War II][5] and are [still important to audio engineering][6]. But we're going to limit our discussion here to wireless networking and communications.

### Beamforming benefits and limitations

By focusing a signal in a specific direction, beamforming allows you to deliver higher signal quality to your receiver — which in practice means faster information transfer and fewer errors — without needing to boost broadcast power. That's basically the holy grail of wireless networking and the goal of most techniques for improving wireless communication. As an added benefit, because you aren't broadcasting your signal in directions where it's not needed, beamforming can reduce interference experienced by people trying to pick up other signals.

The limitations of beamforming mostly involve the computing resources it requires; there are many scenarios where the time and power resources required by beamforming calculations end up negating its advantages. But continuing improvements in processor power and efficiency have made beamforming techniques affordable enough to build into consumer networking equipment.

### Wi-Fi beamforming routers: 802.11n vs. 802.11ac

Beamforming began to appear in routers back in 2008, with the advent of the [802.11n Wi-Fi standard][7]. 802.11n was the first version of Wi-Fi to support multiple-input multiple-output, or MIMO, technology, which beamforming needs in order to send out multiple overlapping signals. Beamforming with 802.11n equipment never really took off, however, because the spec doesn't lay out how beamforming should be implemented. A few vendors put out proprietary implementations that required purchasing matching routers and wireless cards to work, and they were not popular.

With the emergence of the [802.11ac standard][8] in 2016, that all changed. There's now a set of specified beamforming techniques for Wi-Fi gear, and while 802.11ac routers aren't required by the specification to implement beamforming, if they do (and almost all on the market now do) they do so in a vendor-neutral and interoperable way. While some offerings might tout branded names, such as D-Link's AC Smart Beam, these are all implementations of the same standard. (The even newer [802.11ax standard][9] continues to support ac-style beamforming.)

### Beamforming and MU-MIMO

Beamforming is key for the support of multiuser MIMO, or [MU-MIMO][10], which is becoming more popular as 802.11ax routers roll out. As the name implies, MU-MIMO involves multiple users that can each communicate with multiple antennas on the router. MU-MIMO [uses beamforming][11] to make sure communication from the router is efficiently targeted to each connected client.

### Explicit beamforming vs. implicit beamforming

There are a couple of ways that Wi-Fi beamforming can work.
If both the router and the endpoint support 802.11ac-compliant beamforming, they'll begin their communication session with a little "handshake" that helps both parties establish their respective locations and the channel on which they'll communicate; this improves the quality of the connection and is known as _explicit_ beamforming. But there are still plenty of network cards in use that only support 802.11n or even older versions of Wi-Fi. A beamforming router can still attempt to target these devices, but without help from the endpoint, it won't be able to zero in as precisely. This is known as _implicit_ beamforming, or sometimes as _universal_ beamforming, because it works in theory with any Wi-Fi device.

In many routers, implicit beamforming is a feature you can turn on and off. Is enabling implicit beamforming worth it? The [Router Guide suggests][12] that you test how your network operates with it on and off to see if you get a boost from it. Devices such as phones that you carry around the house may even see dropped connections with implicit beamforming.

### 5G beamforming

To date, local Wi-Fi networks are where the average person is most likely to encounter beamforming in the wild. But with the rollout of wide-area 5G networks now under way, that's about to change. 5G uses radio frequencies between 30 and 300 GHz, which can transmit data much more quickly but are also much more prone to interference and encounter more difficulty passing through physical objects. A host of technologies are required to overcome these problems, including smaller cells; massive MIMO, which basically means cramming tons of antennas onto 5G base stations; and, yes, [beamforming][13]. If 5G takes off in the way that vendors are counting on, the time will come soon enough when we'll all be using beamforming (behind the scenes) every day.

**Learn more about wireless networking**

 * [How to determine if Wi-Fi 6 is right for you][14]
 * [What is 5G? How is it better than 4G?][13]
 * [Cisco exec details how Wi-Fi 6 and 5G will fire-up enterprises][15]
 * [Wi-Fi 6 is coming to a router near you][16]
 * [Wi-Fi analytics get real][17]

Join the Network World communities on [Facebook][18] and [LinkedIn][19] to comment on topics that are top of mind.
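As a closing illustration, here is a small numerical sketch of the constructive and destructive interference described under "How beamforming works." It is a toy model of a uniform linear array, not anything 802.11ac- or 5G-specific; the carrier frequency, antenna count and steering angle are arbitrary choices made for the example.

```python
import numpy as np

C = 3e8                     # speed of light, m/s
FREQ = 5.8e9                # a 5 GHz-band carrier, chosen arbitrarily
LAM = C / FREQ              # wavelength
D = LAM / 2                 # half-wavelength antenna spacing
N = 4                       # number of antennas
STEER = np.deg2rad(30)      # direction we want the beam to point

angles = np.deg2rad(np.linspace(-90, 90, 721))  # directions to evaluate
n = np.arange(N)
k = 2 * np.pi / LAM

# Relative phase of each antenna's wave at each angle, plus the per-antenna
# offsets ("slightly different times") that steer the beam toward STEER.
received = np.exp(1j * k * D * np.outer(np.sin(angles), n))
weights = np.exp(-1j * k * D * n * np.sin(STEER))

array_factor = np.abs(received @ weights) / N   # 1.0 = fully constructive

peak = np.rad2deg(angles[np.argmax(array_factor)])
print(f"beam peaks at {peak:.1f} degrees")      # prints ~30.0
```

Changing `STEER` moves the peak, and adding antennas narrows the main lobe, which is the same effect massive MIMO exploits on 5G base stations, just with far more elements.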
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3445039/beamforming-explained-how-it-makes-wireless-communication-faster.html + +作者:[Josh Fruhlinger][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Josh-Fruhlinger/ +[b]: https://github.com/lujun9972 +[1]: https://www.networkworld.com/article/3330603/5g-versus-4g-how-speed-latency-and-application-support-differ.html +[2]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage) +[3]: https://images.idgesg.net/images/article/2019/10/nw_wifi_router_traditional-and-beamformer_foundational_networking_internet-100814037-orig.jpg +[4]: https://www.mathscinotes.com/2012/01/beamforming-math/ +[5]: https://apps.dtic.mil/dtic/tr/fulltext/u2/a250189.pdf +[6]: https://www.mathworks.com/company/newsletters/articles/making-all-the-right-noises-shaping-sound-with-audio-beamforming.html +[7]: https://www.networkworld.com/article/2270718/the-role-of-beam-forming-in-11n.html +[8]: https://www.networkworld.com/article/3067702/mu-mimo-makes-wi-fi-better.html +[9]: https://www.networkworld.com/article/3258807/what-is-802-11ax-wi-fi-and-what-will-it-mean-for-802-11ac.html +[10]: https://www.networkworld.com/article/3250268/what-is-mu-mimo-and-why-you-need-it-in-your-wireless-routers.html +[11]: https://www.networkworld.com/article/3256905/13-things-you-need-to-know-about-mu-mimo-wi-fi.html +[12]: https://routerguide.net/enable-beamforming-on-or-off/ +[13]: https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html +[14]: https://www.networkworld.com/article/3356838/how-to-determine-if-wi-fi-6-is-right-for-you.html +[15]: https://www.networkworld.com/article/3342158/cisco-exec-details-how-wi-fi-6-and-5g-will-fire-up-enterprises-in-2019-and-beyond.html +[16]: https://www.networkworld.com/article/3311921/mobile-wireless/wi-fi-6-is-coming-to-a-router-near-you.html +[17]: https://www.networkworld.com/article/3305583/wi-fi/wi-fi-analytics-get-real.html +[18]: https://www.facebook.com/NetworkWorld/ +[19]: https://www.linkedin.com/company/network-world diff --git a/sources/talk/20191015 How GNOME uses Git.md b/sources/talk/20191015 How GNOME uses Git.md new file mode 100644 index 0000000000..f64057287e --- /dev/null +++ b/sources/talk/20191015 How GNOME uses Git.md @@ -0,0 +1,60 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How GNOME uses Git) +[#]: via: (https://opensource.com/article/19/10/how-gnome-uses-git) +[#]: author: (Molly de Blanc https://opensource.com/users/mollydb) + +How GNOME uses Git +====== +The GNOME project's decision to centralize on GitLab is creating +benefits across the community—even beyond the developers. +![red panda][1] + +“What’s your GitLab?” is one of the first questions I was asked on my first day working for the [GNOME Foundation][2]—the nonprofit that supports GNOME projects, including the [desktop environment][3], [GTK][4], and [GStreamer][5]. The person was referring to my username on [GNOME’s GitLab instance][6]. In my time with GNOME, I’ve been asked for my GitLab a lot. 
+ +We use GitLab for basically everything. In a typical day, I get several issues and reference bug reports, and I occasionally need to modify a file. I don’t do this in the capacity of being a developer or a sysadmin. I’m involved with the Engagement and Inclusion & Diversity (I&D) teams. I write newsletters for Friends of GNOME and interview contributors to the project. I work on sponsorships for GNOME events. I don’t write code, and I use GitLab every day. + +The GNOME project has been managed a lot of ways over the past two decades. Different parts of the project used different systems to track changes to code, collaborate, and share information both as a project and as a social space. However, the project made the decision that it needed to become more integrated and it took about a year from conception to completion. + +There were a number of reasons GNOME wanted to switch to a single tool for use across the community. External projects touch GNOME, and providing them an easier way to interact with resources was important for the project, both to support the community and to grow the ecosystem. We also wanted to better track metrics for GNOME—the number of contributors, the type and number of contributions, and the developmental progress of different parts of the project. + +When it came time to pick a collaboration tool, we considered what we needed. One of the most important requirements was that it must be hosted by the GNOME community; being hosted by a third party didn’t feel like an option, so that discounted services like GitHub and Atlassian. And, of course, it had to be free software. It quickly became obvious that the only real contender was GitLab. We wanted to make sure contribution would be easy. GitLab has features like single sign-on, which allows people to use GitHub, Google, GitLab.com, and GNOME accounts. + +We agreed that GitLab was the way to go, and we began to migrate from many tools to a single tool. GNOME board member [Carlos Soriano][7] led the charge. With lots of support from GitLab and the GNOME community, we completed the process in May 2018. + +There was a lot of hope that moving to GitLab would help grow the community and make contributing easier. Because GNOME previously used so many different tools, including Bugzilla and CGit, it’s hard to quantitatively measure how the switch has impacted the number of contributions. We can more clearly track some statistics though, such as the nearly 10,000 issues closed and 7,085 merge requests merged between June and November 2018. People feel that the community has grown and become more welcoming and that contribution is, in fact, easier. + +People come to free software from all sorts of different starting points, and it’s important to try to even out the playing field by providing better resources and extra support for people who need them. Git, as a tool, is widely used, and more people are coming to participate in free software with those skills ready to go. Self-hosting GitLab provides the perfect opportunity to combine the familiarity of Git with the feature-rich, user-friendly environment provided by GitLab. + +It’s been a little over a year, and the change is really noticeable. Continuous integration (CI) has been a huge benefit for development, and it has been completely integrated into nearly every part of GNOME. Teams that aren’t doing code development have also switched to using the GitLab ecosystem for their work. 
Whether it’s using issue tracking to manage assigned tasks or version control to share and manage assets, even teams like Engagement and I&D have taken up using GitLab. + +It can be hard for a community, even one developing free software, to adapt to a new technology or tool. It is especially hard in a case like GNOME, a project that [recently turned 22][8]. After more than two decades of building a project like GNOME, with so many parts used by so many people and organizations, the migration was an endeavor that was only possible thanks to the hard work of the GNOME community and generous assistance from GitLab. + +I find a lot of convenience in working for a project that uses Git for version control. It’s a system that feels comfortable and is familiar—it’s a tool that is consistent across workplaces and hobby projects. As a new member of the GNOME community, it was great to be able to jump in and just use GitLab. As a community builder, it’s inspiring to see the results: more associated projects coming on board and entering the ecosystem; new contributors and community members making their first contributions to the project; and increased ability to measure the work we’re doing to know it’s effective and successful. + +It’s great that so many teams doing completely different things (such as what they’re working on and what skills they’re using) agree to centralize on any tool—especially one that is considered a standard across open source. As a contributor to GNOME, I really appreciate that we’re using GitLab. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/10/how-gnome-uses-git + +作者:[Molly de Blanc][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/mollydb +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/redpanda_firefox_pet_animal.jpg?itok=aSpKsyna (red panda) +[2]: https://www.gnome.org/foundation/ +[3]: https://gnome.org/ +[4]: https://www.gtk.org/ +[5]: https://gstreamer.freedesktop.org/ +[6]: https://gitlab.gnome.org/ +[7]: https://twitter.com/csoriano1618?lang=en +[8]: https://opensource.com/article/19/8/poll-favorite-gnome-version diff --git a/sources/talk/20191015 The Emergence of Edge Analytics in the IoT Ecosystem.md b/sources/talk/20191015 The Emergence of Edge Analytics in the IoT Ecosystem.md new file mode 100644 index 0000000000..e7cca334fb --- /dev/null +++ b/sources/talk/20191015 The Emergence of Edge Analytics in the IoT Ecosystem.md @@ -0,0 +1,93 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (The Emergence of Edge Analytics in the IoT Ecosystem) +[#]: via: (https://opensourceforu.com/2019/10/the-emergence-of-edge-analytics-in-the-iot-ecosystem/) +[#]: author: (Shashidhar Soppin https://opensourceforu.com/author/shashidhar-soppin/) + +The Emergence of Edge Analytics in the IoT Ecosystem +====== + +[![][1]][2] + +_Edge analytics involves processing, collecting and analysing data near its source so that not all of it is sent to the cloud. This is crucial when massive volumes of data are generated by numerous devices. 
It saves a lot of time and reduces latency._

The exponential proliferation of IoT devices and IoT based platforms is leading to multiple complex scenarios. Such rapid growth at scale calls for a lot of analytics in order to make better predictions and choices. IoT analytics will become more complex and challenging as the number of IoT devices grows even further. A good analytics system will be crucial for any IoT environment to succeed and be robust.

Making any sense out of IoT data needs good and efficient analytics. This may not always be Big Data (defined by the volume, velocity and variety of data). There are many other categories of analytics, like a simple review of past events, or more advanced analytics that uses historical data to make predictions about outcomes.

**What is analytics?**
Analytics (for IoT) is the science and art of trying to find matching patterns in the huge quantity of data (it could be Big Data, or otherwise) generated by connected and smart devices. In layman's terms, it can be described simply as monitoring trends and finding any abnormalities.

Listed below are some of the well-known analytics methodologies or types:

 * Descriptive analytics tells us what is happening
 * Diagnostic analytics tells us why it happened
 * Predictive analytics tells us what is likely to happen
 * Prescriptive analytics tells us what should be done to prevent something from happening

Today, there are various tools and technologies available for analytics. Many enterprises now expect intelligence or analytics to sit on the platform itself, for better monitoring and live streaming capabilities. Processing historical data from the cloud or any other outside source takes time and adds latency to the overall response time. Building analytics on microservices based technologies is another recent trend.

Not every existing IoT solution needs analytics. Listed below are some types of analytics that can be performed:

 * Generating basic reports
 * Real-time stream analytics
 * Long-term data analytics
 * Large-scale data analytics
 * Advanced data analytics

Once the data is acquired from the selected source, it has to be pre-processed to identify missing values and to scale the data. Once the pre-processing is done, 'feature extraction' is carried out, which picks out the significant information that helps to improve subsequent steps.

**Basic analysis**
This analysis typically involves Big Data and will, most of the time, involve descriptive and diagnostic kinds of analytics.

This is the era of edge analytics. Many of the industries that work on IoT based solutions and frameworks use it, and it is expected to become 'the next big thing' in the coming days. At present, there are many tools and technologies that deal with edge analytics.

**Edge analytics and the IoT tools**
Edge analytics is analysis done at the source: the data is processed, collected and analysed where it is generated, instead of being sent to the cloud or a server for processing and analysis. This saves a lot of time and avoids latency, and it addresses many of the bandwidth problems that arise when every raw reading has to travel to the cloud first.

According to Gartner's predictions, by 2020, more than half the major new business processes and systems will incorporate some element of IoT. Analytics will become one of the key aspects of any of these IoT systems/sub-systems.
It will support the decision-making process in related operations and help in business transformation.
A brief introduction of some of the well-known open source IoT tools for edge analytics follows.

**EdgeX Foundry:** This is an open source project hosted by the Linux Foundation. It provides interoperability, can be hosted on the device hardware, and is an OS-agnostic software platform. The EdgeX ecosystem can be used as plug-and-play, with support for multiple components, and can easily be deployed as an IoT solution with edge analytics. EdgeX Foundry can be deployed using microservices as well.

**Website:** __

**Eclipse Kura:** Eclipse Kura is another popular open source IoT edge framework. It is based on Java/OSGi and offers API based access to the underlying hardware interfaces of IoT gateways (serial ports, watchdog, GPS, I2C, etc). The Kura architecture can be found at __.

Kura components are designed as configurable, OSGi based declarative services that expose service APIs. Most of the Kura components are purely Java based, while others are invoked through JNI and generally have a dependency on the Linux operating system.

**Eclipse Kapua:** Eclipse Kapua is a modular IoT cloud platform that is mainly used to manage and integrate devices and their data. Kapua comes loaded with the following features:

 * **Connect:** Many IoT devices can be connected to Kapua using MQTT and other protocols.
 * **Manage:** A device's applications, configurations and resources can be managed easily.
 * **Store and analyse:** Kapua helps in storing and indexing the data published by IoT devices for quick and easy analysis, and later visualisation in the dashboard.
 * **Integrate:** Kapua services can be easily integrated with various IT applications through flexible message routing and ReST based APIs.

**Website:** __

As mentioned earlier and based on Gartner analysis, edge analytics will become one of the leading tech trends in the coming months. The more analysis that is done at the edge, the more sophisticated and advanced the whole IoT ecosystem will become. There will come a day when M2M communication happens independently, without much human intervention.
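To illustrate the core idea of "process at the source, send only what matters," here is a minimal, self-contained sketch of edge-side analysis. It is not tied to EdgeX, Kura or Kapua; the sensor reader and the uplink function are stand-ins for whatever drivers and transport a real gateway would use.

```python
import random
import statistics
from collections import deque

WINDOW = deque(maxlen=30)   # the last 30 readings, kept on the device
THRESHOLD = 3.0             # z-score above which a reading is "interesting"

def read_sensor():
    # Stand-in for a real sensor driver; occasionally produces a spike.
    value = random.gauss(25.0, 0.5)
    if random.random() < 0.01:
        value += 10.0
    return value

def publish_to_cloud(event):
    # Stand-in for the uplink (MQTT, HTTP, ...); only anomalies go out.
    print("uplink:", event)

for _ in range(500):
    value = read_sensor()
    if len(WINDOW) == WINDOW.maxlen:
        mean = statistics.mean(WINDOW)
        stdev = statistics.pstdev(WINDOW) or 1e-9
        if abs(value - mean) / stdev > THRESHOLD:
            publish_to_cloud({"value": round(value, 2),
                              "window_mean": round(mean, 2)})
    WINDOW.append(value)
```

Out of 500 raw readings, only the handful that cross the threshold ever leave the device. That is precisely the time, latency and bandwidth saving described above, and the descriptive statistics computed on the window are the simplest possible form of analytics at the edge.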
--------------------------------------------------------------------------------

via: https://opensourceforu.com/2019/10/the-emergence-of-edge-analytics-in-the-iot-ecosystem/

作者:[Shashidhar Soppin][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensourceforu.com/author/shashidhar-soppin/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Graph-Analytics-1.jpg?resize=696%2C464&ssl=1 (Graph Analytics)
[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Graph-Analytics-1.jpg?fit=800%2C533&ssl=1
diff --git a/sources/talk/20191015 The software-defined data center drives agility.md b/sources/talk/20191015 The software-defined data center drives agility.md
new file mode 100644
index 0000000000..3223692cad
--- /dev/null
+++ b/sources/talk/20191015 The software-defined data center drives agility.md
@@ -0,0 +1,102 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The software-defined data center drives agility)
[#]: via: (https://www.networkworld.com/article/3446040/the-software-defined-data-center-drives-agility.html)
[#]: author: (Matt Conran https://www.networkworld.com/author/Matt-Conran/)

The software-defined data center drives agility
======
The value of SDN is doing as much as possible in the software so you don’t depend on the delivery of new features to come from a new generation of the hardware.

In this day and age, demands on networks are coming from a variety of sources: internal end-users, external customers and changes in the application architecture. Such demands put pressure on traditional architectures.

Dealing effectively with these demands requires the network domain to become more dynamic. For this, we must embrace digital transformation. However, current methods are delaying this much-needed transition. One major pain point is networks' reliance on manual work and the lack of fabric-wide automation. This must be addressed if organizations are to roll out new products and services ahead of the competition.

So, to evolve, to keep pace with the times and to use technology as an effective tool, one must drive the entire organization to become a digital enterprise. The network components do play a key role, but the digital transformation process is an enterprise-wide initiative.

### Digital transformation and network visibility

As a part of this transformation, the network is no longer considered a black box. Now the network is a source of deep visibility that can aid a large set of use cases like network performance, monitoring, security and capacity planning, to name a few. However, in spite of its critical importance, visibility is often overlooked.

We need the ability to provide deep visibility for the application at a flow level. Today, if you want anything comparable, you would deploy a separate monitoring network. Such a network would consist of probes, packet brokers and various tools to process the packets for metadata.
A more viable solution would integrate network visibility into the fabric itself, removing the need for a pile of extra components. This enables us to do more with the data and aids agility in ongoing network operations. There will always be some application-optimization requirement or security breach to deal with, and this is where visibility helps you iron out such issues quickly.

### Gaining agility with SDN

A useful step when increasing agility is to build a complete network overlay. An overlay is a solution that is abstracted from the underlying physical infrastructure.

**[ Related: [MPLS explained – What you need to know about multi-protocol label switching][2]**

What this means is that we are separating and disaggregating the customer applications or services from the network infrastructure. This is more like a sandbox or private network for each application that sits on an existing network. This way, we are empowered with both SDN controller and controllerless options. Both [data center architectures][3] _[Disclaimer: The author is employed by Network Insight]_ have their advantages and disadvantages.

Traditionally, deploying an application to the network involves propagating the policy through the entire infrastructure. Why? Because the network simply acts as an underlay, and the segmentation rules configured on this underlay are needed to separate different applications and services. This ultimately creates a very rigid architecture that is unable to react quickly and adapt to change, and therefore lacks agility. In essence, the applications and the physical network are tightly coupled.

[Virtual networks][4] are mostly built from either the servers or the ToR switches. Either way, the underlying network transports the traffic and doesn’t need to be configured to accommodate the customer application. That is all carried in the overlay. By and large, everything happens in the overlay network, which is most efficient when done in a fully distributed manner.

Now application and service deployment occurs without touching the network. Once the tight coupling between the application and the network is removed, applications and services can be deployed with far greater agility and simplicity.

### Where do your features live?

Some vendors build the differentiator of their offering into the hardware. With different hardware, you can accelerate the services. With this design, the hardware level is manipulated, but it does not use the standard Open Networking protocols. The result is that you are 100 percent locked in, unable to move, as the cost of moving is too high.

You could have numerous hardware generations: line cards, for example, that all have different capabilities, resulting in a complex feature matrix. When the Cisco Nexus platform first came out, I was onsite as a TDA trying to bring some redundancy into the edge/access layer.

When the virtual PortChannel (vPC) came out, there were several topologies, and some of these topologies were only available on certain hardware. As it’s just a topology, it would have been useful to have it across all line cards. This is the world of closed networking, which has been accepted as the norm until now.

### Open networking

Traditionally, networking products were a combination of the hardware and software that had to be purchased together as an integrated solution. Open Networking, on the other hand, is the disaggregation of hardware from the software.
This basically allows IT to mix and match at will. + +With Open Networking, you are not reinventing the way packets are forwarded, or the way routers communicate with each other. Why? Because, with Open Networking, you are never alone and never the only vendor. You need to adapt and fit, and for this, you need to use open protocols. + +The value of SDN is doing as much as possible in the software so you don’t depend on the delivery of new features to come from a new generation of the hardware. You want to put as much intelligence as possible into the software, thus removing the intelligence from the physical layer. + +You don’t want to build the hardware features; instead, you want to use the software to provide the new features. This is an important philosophy and is the essence of Open Networking. From the customer's point of view, they get more agility as they can move from generation to generation of services without having hardware dependency. They don’t have to incur the operational costs of swapping out the hardware constantly. + +### First steps + +It is agreed that agility is a necessity. So, what is the prime step? One of the key steps is to create a software-defined data center that will allow the rapid deployment of compute and storage for the workloads. In addition to software-defined compute and storage, the network must be automated and not be an impediment. + +Many organizations assume that to achieve agility, we must move everything to the cloud. Migrating workloads to the cloud indeed allow organizations to be competitive and equipped with the capabilities of a much larger organization. + +Only a small proportion can adopt a fully cloud-native design. More often than not, there will always be some kind of application that needs to stay on-premise. In this case, the agility in the cloud needs to be matched by the on-premise infrastructure. This requires the virtualization of the on-premise compute, storage and network components. + +Compute and storage, affordable software control, and virtualization have progressed dramatically. However, the network can cause a lag. Solutions do exist but they are complex, expensive and return on investment (ROI) is a stretch. Therefore, such solutions are workable only for the largest enterprises. This creates a challenge for mid-sized businesses that want to virtualize the network components. + +**This article is published as part of the IDG Contributor Network. [Want to Join?][5]** + +Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind. 
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3446040/the-software-defined-data-center-drives-agility.html + +作者:[Matt Conran][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Matt-Conran/ +[b]: https://github.com/lujun9972 +[1]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage) +[2]: https://www.networkworld.com/article/2297171/sd-wan/network-security-mpls-explained.html +[3]: https://network-insight.net/2014/08/data-center-topologies/ +[4]: https://youtu.be/-Yjk0GiysLI +[5]: https://www.networkworld.com/contributor-network/signup.html +[6]: https://www.facebook.com/NetworkWorld/ +[7]: https://www.linkedin.com/company/network-world diff --git a/sources/talk/20191016 Drupal shows leadership on diversity and inclusion.md b/sources/talk/20191016 Drupal shows leadership on diversity and inclusion.md new file mode 100644 index 0000000000..945d92acef --- /dev/null +++ b/sources/talk/20191016 Drupal shows leadership on diversity and inclusion.md @@ -0,0 +1,135 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Drupal shows leadership on diversity and inclusion) +[#]: via: (https://opensource.com/article/19/10/drupals-diversity-initiatives) +[#]: author: (Lauren Maffeo https://opensource.com/users/lmaffeo) + +Drupal shows leadership on diversity and inclusion +====== +Drupal's Diversity & Inclusion Group is taking an innovative approach to +bringing attention to underrepresented groups in open source. +![Two diverse hands holding a globe][1] + +I didn't expect [DrupalCon Seattle][2]'s opening keynote to address the barriers that hold people back from making open source contributions. So imagine my surprise when Dries Buytaert, Drupal's project lead and co-founder and CTO of Acquia, which created Drupal, [used his time onstage][3] to share an apology. + +> _"I used to think anyone could contribute to Drupal if they just had the will. I was wrong—many people don't contribute to open source because they don't have the time."_ +> — Dries Buytaert + +Buytaert [disproved the long-held belief][4] that open source is a meritocracy. The truth is that anyone who has free time to do ongoing, unpaid work is more privileged than most. If you're working a second job, caring for aging parents, or earning less due to systemic wage gaps for people of color, you can't start your open source career on equal ground. + +> I wonder if a personal story will help :) In the past year, I dedicated my life to [#drupaldiversity][5]. The bridges that helped us achieve success at the highest level this year were often at a personal cost of my nights, weekends, and personal life. +> +> — Fatima (she/her) (@sugaroverflow) [April 15, 2019][6] + +### Increasing diversity and inclusion in Drupal + +Buytaert's keynote highlighted the Drupal project's commitment to diversity—and diversifying. + +For example, the Drupal Association awards grants, scholarships, and money from its inclusion fund to contributors of minority status so they can travel and attend the Association's DrupalCon events. 
Recently, 18 contributors [received awards][7] to attend [DrupalCon Amsterdam][8] in late October. + +In addition, the all-volunteer [Drupal Diversity & Inclusion][9] (DDI) collective works to diversify Drupal's speaker base. All of DDI's projects are open for anyone to work on in the [Drupal Diversity][10] or [DDI Contrib][11] project repositories. + +In its August newsletter, DDI shared another way it seeks to expand awareness of diverse populations. The group starts each of its weekly meetings by asking members where they're from and to acknowledge the indigenous history of the land they live on. In July, DDI launched a related project: [Land Acknowledgements][12], which invites Drupal community members to research their homelands' indigenous histories and share them in a community blog post. + +This project caught my eye, and I made a [contribution][13] about the indigenous history of Montgomery County, Maryland, where I live. This project is still open: [anyone can add their research][14] to the blog post. + +To learn more, I interviewed [Alanna Burke][15], a Drupal developer who helps lead the Land Acknowledgements project. Our conversation has been edited for length and clarity. + +### Acknowledging indigenous history + +**Lauren Maffeo:** Describe Drupal's Land Acknowledgments project. How did the idea for this project come about, and what is its end goal? + +**Alanna Burke:** In our weekly Slack meetings, we ask everyone to introduce themselves and do a land acknowledgment. I'm not sure when we started doing that. One week, I had the idea that it would be really neat to have folks do a little more research and contribute it back into a blog post—we do these acknowledgments, but without more context or research, they're not very meaningful. We wanted people to find out more about the land that they live on and the indigenous people who, in many cases, still live there. + +**LM:** How long will you accept contributions to the project? Do you have an ultimate goal for project contributions? + +**AB:** Right now, we intend to accept contributions for as long as people want to send them in! + +**LM:** How many contributions have you received thus far? + +**AB:** We've had four contributions, plus mine. I think folks have been busy, but that's why we made the decision to keep contributions open. We don't have a goal in terms of numbers, but I'd love to see more people do the research into their land, learn about it, and find out something interesting they didn't know before. + +**LM:** Describe the response to this project so far. Do you have plans to expand it beyond Drupal to the broader open source community? + +**AB:** Folks seemed to think it was a really great idea! There were definitely a lot of people who wanted to participate but haven't found the time or who just mentioned that it was cool. We haven't discussed any plans to expand it, since we focus on the Drupal community, but I'd encourage any community to take this idea and run with it, see what your community members come up with! + +**LM:** Which leaders in the Drupal community have created and maintained this project? + +**AB:** Myself and the other members of the DDI leadership team: Tara King, Fatima Khalid, Marc Drummond, Elli Ludwigson, Alex Laughnan, and Alex McCabe + +**LM:** What are some 2019 highlights of Drupal's Diversity & Inclusion initiative? Which goals do you aim to achieve in 2020? + +**AB:** Our biggest highlight this year was the Speaker Diversity Workshop, which we held on September 28th. 
Jill Binder of the WordPress community led this free online workshop aimed at helping underrepresented folks prepare for speaking at camps and conferences.

We are also going to hold an online [train-the-trainers workshop][16] on November 16th so communities can hold their own speaker workshops.

In 2020, we'd like to build on these successes. We did a lot of fundraising and created a lot of great relationships in order to make this happen. We have a [handful of initiatives][17] that we are working on at any given time, so we'll be continuing to work on those.

**LM:** How can Opensource.com readers contribute to the Land Acknowledgements post and other Drupal D&I initiatives?

**AB:** Check out [the doc][14] in the issue. Spend a little time doing research, write up a few paragraphs, and submit it! Or, start up an initiative in your community to do land acknowledgments in your meetings or do a collaborative post like ours. Do land acknowledgments as part of events like camps and conferences.

To get involved in DDI, check out our [guide][18]. We have [an issue queue][10], and we meet in Slack every Thursday for a text-only meeting.

**LM:** Do you have statistics for how many women and people of minority status (gender, sexuality, religion, etc.) contribute to the Drupal project? If so, what are they?

**AB:** We have some numbers—we'd love to have more. This [post][19] has some breakdowns, but here's the gist from 2017-2018, the latest we have:

 * [Drupal.org][20] received code contributions from 7,287 different individuals and 1,002 different organizations.
 * The reported data shows that only 7% of the recorded contributions were made by contributors that do not identify as male, which continues to indicate a steep gender gap.
 * When measuring geographic diversity, we saw individual contributors from six different continents and 123 different countries.

Recently, we have implemented [Open Demographics][21] on Drupal.org, a project by DDI's founder, Nikki Stevens. We hope this will give us better demographic data in the future.

### Closing the diversity gap in open source

Drupal is far from alone among [open source communities with a diversity gap][22], and I think it deserves a lot of credit for tackling these issues head-on. Diversity and inclusion is a much broader topic than most of us realize. Before I read DDI's August newsletter, the history of indigenous people in my community was something I hadn't really thought about. Thanks to DDI's project, I'm not only aware of the people who lived in Maryland long before me, but I've come to appreciate and respect what they brought to this land.

I encourage you to learn about the native people in your homeland and record their history in DDI's Land Acknowledgements blog. If you're a member of another open source project, consider replicating this project there.
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/10/drupals-diversity-initiatives + +作者:[Lauren Maffeo][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/lmaffeo +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/world_hands_diversity.png?itok=zm4EDxgE (Two diverse hands holding a globe) +[2]: https://events.drupal.org/seattle2019 +[3]: https://www.youtube.com/watch?v=BNoCn6T9Xf8 +[4]: https://dri.es/the-privilege-of-free-time-in-open-source +[5]: https://twitter.com/hashtag/drupaldiversity?src=hash&ref_src=twsrc%5Etfw +[6]: https://twitter.com/sugaroverflow/status/1117876869590728705?ref_src=twsrc%5Etfw +[7]: https://events.drupal.org/amsterdam2019/grants-scholarships +[8]: https://events.drupal.org/amsterdam2019 +[9]: https://opencollective.com/drupal-diversity-and-inclusion +[10]: https://www.drupal.org/project/issues/diversity +[11]: https://www.drupal.org/project/issues/ddi_contrib +[12]: https://www.drupaldiversity.com/blog/2019/land-acknowledgments +[13]: https://www.drupal.org/project/diversity/issues/3063065#comment-13234777 +[14]: https://www.drupal.org/project/diversity/issues/3063065 +[15]: https://www.drupal.org/u/aburke626 +[16]: https://www.drupaldiversity.com/blog/2019/learn-how-hold-your-own-speaker-diversity-workshop-saturday-november-16 +[17]: https://www.drupaldiversity.com/initiatives +[18]: https://www.drupaldiversity.com/get-involved +[19]: https://dri.es/who-sponsors-drupal-development-2018 +[20]: http://Drupal.org +[21]: https://www.drupal.org/project/open_demographics +[22]: https://opensource.com/resources/diversity-open-source diff --git a/sources/tech/20180705 Building a Messenger App- Schema.md b/sources/tech/20180705 Building a Messenger App- Schema.md deleted file mode 100644 index e2c80f0013..0000000000 --- a/sources/tech/20180705 Building a Messenger App- Schema.md +++ /dev/null @@ -1,114 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (PsiACE) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Building a Messenger App: Schema) -[#]: via: (https://nicolasparada.netlify.com/posts/go-messenger-schema/) -[#]: author: (Nicolás Parada https://nicolasparada.netlify.com/) - -Building a Messenger App: Schema -====== - -New post on building a messenger app. You already know this kind of app. They allow you to have conversations with your friends. [Facebook Messenger][1], [WhatsApp][2] and [Skype][3] are a few examples. Tho, these apps allows you to send pictures, stream video, record audio, chat with large groups of people, etc… We’ll try to keep it simple and just send text messages between two users. - -We’ll use [CockroachDB][4] as the SQL database, [Go][5] as the backend language, and JavaScript to make a web app. - -In this first post, we’re getting around the database design. - -``` -CREATE TABLE users ( - id SERIAL NOT NULL PRIMARY KEY, - username STRING NOT NULL UNIQUE, - avatar_url STRING, - github_id INT NOT NULL UNIQUE -); -``` - -Of course, this app requires users. We will go with social login. I selected just [GitHub][6] so we keep a reference to the github user ID there. 
- -``` -CREATE TABLE conversations ( - id SERIAL NOT NULL PRIMARY KEY, - last_message_id INT, - INDEX (last_message_id DESC) -); -``` - -Each conversation references the last message. Every time we insert a new message, we’ll go and update this field. (I’ll add the foreign key constraint below). - -… You can say that we can group conversations and get the last message that way, but that will add much more complexity to the queries. - -``` -CREATE TABLE participants ( - user_id INT NOT NULL REFERENCES users ON DELETE CASCADE, - conversation_id INT NOT NULL REFERENCES conversations ON DELETE CASCADE, - messages_read_at TIMESTAMPTZ NOT NULL DEFAULT now(), - PRIMARY KEY (user_id, conversation_id) -); -``` - -Even tho I said conversations will be between just two users, we’ll go with a design that allow the possibility to add multiple participants to a conversation. That’s why we have a participants table between the conversation and users. - -To know whether the user has unread messages we have the `messages_read_at` field. Every time the user read in a conversation, we update this value, so we can compare it with the conversation last message `created_at` field. - -``` -CREATE TABLE messages ( - id SERIAL NOT NULL PRIMARY KEY, - content STRING NOT NULL, - user_id INT NOT NULL REFERENCES users ON DELETE CASCADE, - conversation_id INT NOT NULL REFERENCES conversations ON DELETE CASCADE, - created_at TIMESTAMPTZ NOT NULL DEFAULT now(), - INDEX(created_at DESC) -); -``` - -Last but not least is the messages table, it saves a reference to the user who created it and the conversation in which it goes. Is has an index on `created_at` too to sort messages. - -``` -ALTER TABLE conversations -ADD CONSTRAINT fk_last_message_id_ref_messages -FOREIGN KEY (last_message_id) REFERENCES messages ON DELETE SET NULL; -``` - -And yep, the fk constraint I said. - -These four tables will do the trick. You can save those queries to a file and pipe it to the Cockroach CLI. First start a new node: - -``` -cockroach start --insecure --host 127.0.0.1 -``` - -Then create the database and tables: - -``` -cockroach sql --insecure -e "CREATE DATABASE messenger" -cat schema.sql | cockroach sql --insecure -d messenger -``` - -* * * - -That’s it. In the next part we’ll do the login. Wait for it. 
-
-[Source Code][7]
-
--------------------------------------------------------------------------------
-
-via: https://nicolasparada.netlify.com/posts/go-messenger-schema/
-
-作者:[Nicolás Parada][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://nicolasparada.netlify.com/
-[b]: https://github.com/lujun9972
-[1]: https://www.messenger.com/
-[2]: https://www.whatsapp.com/
-[3]: https://www.skype.com/
-[4]: https://www.cockroachlabs.com/
-[5]: https://golang.org/
-[6]: https://github.com/
-[7]: https://github.com/nicolasparada/go-messenger-demo
diff --git a/sources/tech/20180706 Building a Messenger App- OAuth.md b/sources/tech/20180706 Building a Messenger App- OAuth.md
index 72f8c4e3f6..36732e9795 100644
--- a/sources/tech/20180706 Building a Messenger App- OAuth.md
+++ b/sources/tech/20180706 Building a Messenger App- OAuth.md
@@ -1,5 +1,5 @@
 [#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (PsiACE)
 [#]: reviewer: ( )
 [#]: publisher: ( )
 [#]: url: ( )
@@ -436,7 +436,7 @@ via: https://nicolasparada.netlify.com/posts/go-messenger-oauth/
 
 作者:[Nicolás Parada][a]
 选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
+译者:[PsiACE](https://github.com/PsiACE)
 校对:[校对者ID](https://github.com/校对者ID)
 
 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/sources/tech/20180906 What a shell dotfile can do for you.md b/sources/tech/20180906 What a shell dotfile can do for you.md
deleted file mode 100644
index 35593e1e32..0000000000
--- a/sources/tech/20180906 What a shell dotfile can do for you.md
+++ /dev/null
@@ -1,238 +0,0 @@
-What a shell dotfile can do for you
-======
-
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_cafe_brew_laptop_desktop.jpg?itok=G-n1o1-o)
-
-Ask not what you can do for your shell dotfile, but what a shell dotfile can do for you!
-
-I've been all over the OS map, but for the past several years my daily drivers have been Macs. For a long time, I used Bash, but when a few friends started proselytizing [zsh][1], I gave it a shot. It didn't take long for me to appreciate it, and several years later, I strongly prefer it for many of the little things that it does.
-
-I've been using zsh (provided via [Homebrew][2], not the system-installed version), and the [Oh My Zsh enhancement][3].
-
-The examples in this article are for my personal `.zshrc`. Most will work directly in Bash, and I don't believe that any rely on Oh My Zsh, but your mileage may vary. There was a period when I was maintaining a shell dotfile for both zsh and Bash, but I did eventually give up on my `.bashrc`.
-
-### We're all mad here
-
-If you want the possibility of using the same dotfile across OSes, you'll want to give your dotfile a little smarts.
-```
-### Mac Specifics
-if [[ "$OSTYPE" == "darwin"* ]]; then
-        # Mac-specific stuff here.
-fi
-```
-
-For instance, I expect the Alt + arrow keys to move the cursor by the word rather than by a single space.
To make this happen in [iTerm2][4] (my preferred terminal emulator), I add this snippet to the Mac-specific portion of my .zshrc:
-```
-### Mac Specifics
-if [[ "$OSTYPE" == "darwin"* ]]; then
-        ### Mac cursor commands for iTerm2; map ctrl+arrows or alt+arrows to fast-move
-        bindkey -e
-        bindkey '^[[1;9C' forward-word
-        bindkey '^[[1;9D' backward-word
-        bindkey '\e\e[D' backward-word
-        bindkey '\e\e[C' forward-word
-fi
-```
-
-### What about Bob?
-
-While I came to love my shell dotfile, I didn't always want the same things available on my home machines as on my work machines. One way to solve this is to have supplementary dotfiles to use at home but not at work. Here's how I accomplished this:
-```
-if [[ `egrep 'dnssuffix1|dnssuffix2' /etc/resolv.conf` ]]; then
-        if [ -e $HOME/.work ]; then
-                source $HOME/.work
-        else
-                echo "This looks like a work machine, but I can't find the ~/.work file"
-        fi
-fi
-```
-
-In this case, I key off of my work DNS suffix (or multiple suffixes, depending on your situation) and source a separate file that makes my life at work a little better.
-
-### That thing you do
-
-Now is probably a good time to quit using the tilde (`~`) to represent your home directory when writing scripts. You'll find that there are some contexts where it's not recognized. Getting in the habit of using the environment variable `$HOME` will save you a lot of troubleshooting time and headaches later on.
-
-The logical extension would be to have OS-specific dotfiles to include if you are so inclined.
-
-### Memory, all alone in the moonlight
-
-I've written embarrassing amounts of shell, and I've come to the conclusion that I really don't want to write more. It's not that shell can't do what I need most of the time, but I find that if I'm writing shell, I'm probably slapping together a duct-tape solution rather than permanently solving the problem.
-
-Likewise, I hate memorizing things, and throughout my career, I have had to do radical context shifting during the course of a day. The practical consequence is that I've had to re-learn many things several times over the years. ("Wait... which for-loop structure does this language use?")
-
-So, every so often I decide that I'm tired of looking up how to do something again. One way that I improve my life is by adding aliases.
-
-A common scenario for anyone who works with systems is finding out what's taking up all of the disk. Unfortunately, I have never been able to remember this incantation, so I made a shell alias, creatively called `bigdirs`:
-```
-# note: --max-depth is GNU du; stock MacOS/BSD du uses -d 1 instead
-alias bigdirs='du --max-depth=1 2> /dev/null | sort -n -r | head -n20'
-```
-
-While I could be less lazy and actually memorize it, well, that's just not the Unix way...
-
-### Typos, and the people who love them
-
-Another way that using shell aliases improves my life is by saving me from typos. I don't know why, but I've developed this nasty habit of typing a `w` after the sequence `ea`, so if I want to clear my terminal, I'll often type `cleawr`. Unfortunately, that doesn't mean anything to my shell. Until I add this little piece of gold:
-```
-alias cleawr='clear'
-```
-
-In one instance, Windows has an equivalent, but better, command, `cls`, and I find myself typing it, too. It's frustrating to see your shell throw up its hands, so I add:
-```
-alias cls='clear'
-```
-
-Yes, I'm aware of `ctrl + l`, but I never use it.
-
-### Amuse yourself
-
-Work can be stressful. Sometimes you just need to have a little fun.
If your shell doesn't know the command that it clearly should just do, maybe you want to shrug your shoulders right back at it! You can do this with a function:
-```
-shrug() { echo "¯\_(ツ)_/¯"; }
-```
-
-If that doesn't work, maybe you need to flip a table:
-```
-fliptable() { echo "(╯°□°)╯ ┻━┻"; } # Flip a table. Example usage: fsck -y /dev/sdb1 || fliptable
-```
-
-Imagine my chagrin and frustration when I needed to flip a desk and I couldn't remember what I had called it. So I added some more shell aliases:
-```
-alias flipdesk='fliptable'
-alias deskflip='fliptable'
-alias tableflip='fliptable'
-```
-
-And sometimes you need to celebrate:
-```
-disco() {
-        echo "(•_•)"
-        echo "<)   )╯"
-        echo " /    \ "
-        echo ""
-        echo "\(•_•)"
-        echo " (   (>"
-        echo " /    \ "
-        echo ""
-        echo " (•_•)"
-        echo "<)   )>"
-        echo " /    \ "
-}
-```
-
-Typically, I'll pipe the output of these commands to `pbcopy` and paste it into the relevant chat tool I'm using.
-
-I got this fun function from a Twitter account that I follow called "Command Line Magic": [@climagic][5]. Since I live in Florida now, I'm very happy that this is the only snow in my life:
-```
-snow() {
-        clear;while :;do echo $LINES $COLUMNS $(($RANDOM%$COLUMNS));sleep 0.1;done|gawk '{a[$3]=0;for(x in a) {o=a[x];a[x]=a[x]+1;printf "\033[%s;%sH ",o,x;printf "\033[%s;%sH*\033[0;0H",a[x],x;}}'
-}
-```
-
-### Fun with functions
-
-We've seen some examples of functions that I use. Since few of these examples require an argument, they could be done as aliases. I use functions out of personal preference when it's more than a single short statement.
-
-At various times in my career, I've run [Graphite][6], an open-source, scalable, time-series metrics solution. There have been enough instances where I needed to transpose a metric path (delineated with periods) to a filesystem path (delineated with slashes), or vice versa, that it became useful to have dedicated functions for these tasks:
-```
-# Useful for converting between Graphite metrics and file paths
-function dottoslash() {
-        echo $1 | sed 's/\./\//g'
-}
-function slashtodot() {
-        echo $1 | sed 's/\//\./g'
-}
-```
-
-During another time in my career, I was running a lot of Kubernetes. If you aren't familiar with running Kubernetes, you need to write a lot of YAML. Unfortunately, it's not hard to write invalid YAML. Worse, Kubernetes doesn't validate YAML before trying to apply it, so you won't find out it's invalid until you apply it. Unless you validate it first:
-```
-function yamllint() {
-        for i in $(find . -name '*.yml' -o -name '*.yaml'); do echo $i; ruby -e "require 'yaml';YAML.load_file(\"$i\")"; done
-}
-```
-
-Because I got tired of embarrassing myself and occasionally breaking a customer's setup, I wrote this little snippet and added it as a pre-commit hook to all of my relevant repos. Something similar would be very helpful as part of your continuous integration process, especially if you're working as part of a team.
-
-### Oh, fingers, where art thou?
-
-I was once an excellent touch-typist. Those days are long gone. I typo more than I would have believed possible.
-
-At different times, I have used a fair amount of either Chef or Kubernetes. Fortunately for me, I never used both at the same time.
-
-Part of the Chef ecosystem is Test Kitchen, a suite of tools that facilitate testing, which is invoked with the command `kitchen test`.
Kubernetes is managed with a CLI tool, `kubectl`. Both commands require several subcommands, and neither rolls off the fingers particularly fluidly.
-
-Rather than create a bunch of "typo aliases," I aliased those commands to `k`:
-```
-alias k='kitchen test $@'
-```
-
-or
-```
-alias k='kubectl $@'
-```
-
-(The `$@` in an alias is superfluous, since arguments are appended after the expanded text anyway, but it does no harm.)
-
-### Timesplitters
-
-The last half of my career has involved writing more code with other people. I've worked in many environments where we have forked copies of repos on our account and use pull requests as part of the review process. When I want to make sure that my fork of a given repo is up to date with the parent, I use `fetchupstream`:
-```
-alias fetchupstream='git fetch upstream && git checkout master && git merge upstream/master && git push'
-```
-
-### Mine eyes have seen the glory of the coming of color
-
-I like color. It can make things like diffs easier to read.
-```
-alias diff='colordiff'
-```
-
-I thought that colorized man pages were a neat trick, so I incorporated this function:
-```
-# Colorized man pages, from:
-# http://boredzo.org/blog/archives/2016-08-15/colorized-man-pages-understood-and-customized
-man() {
-        env \
-                LESS_TERMCAP_md=$(printf "\e[1;36m") \
-                LESS_TERMCAP_me=$(printf "\e[0m") \
-                LESS_TERMCAP_se=$(printf "\e[0m") \
-                LESS_TERMCAP_so=$(printf "\e[1;44;33m") \
-                LESS_TERMCAP_ue=$(printf "\e[0m") \
-                LESS_TERMCAP_us=$(printf "\e[1;32m") \
-                man "$@"
-}
-```
-
-I love the command `which`. It simply tells you where in the filesystem the command you're running comes from—unless it's a shell function. After multiple cascading dotfiles, sometimes it's not clear where a function is defined or what it does. It turns out that the `whence` and `type` commands can help with that.
-```
-# Where is a function defined?
-whichfunc() {
-        whence -v $1
-        type -a $1
-}
-```
-
-### Conclusion
-
-I hope this article helps and inspires you to find ways to improve your daily shell-using experience. They don't need to be huge, novel, or complex. They might solve a minor but frequent bit of friction, create a shortcut, or even offer a solution for reducing common typos.
-
-You're welcome to look through my [dotfiles repo][7], but I warn you that it could use a lot of cleaning up. Feel free to use anything that you find helpful, and please be excellent to one another.
-
--------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/9/shell-dotfile
-
-作者:[H.Waldo Grunenwald][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/gwaldo
-[1]: http://www.zsh.org/
-[2]: https://brew.sh/
-[3]: https://github.com/robbyrussell/oh-my-zsh
-[4]: https://www.iterm2.com/
-[5]: https://twitter.com/climagic
-[6]: https://github.com/graphite-project/
-[7]: https://github.com/gwaldo/dotfiles
diff --git a/sources/tech/20190214 The Earliest Linux Distros- Before Mainstream Distros Became So Popular.md b/sources/tech/20190214 The Earliest Linux Distros- Before Mainstream Distros Became So Popular.md
deleted file mode 100644
index 3b9af595d6..0000000000
--- a/sources/tech/20190214 The Earliest Linux Distros- Before Mainstream Distros Became So Popular.md
+++ /dev/null
@@ -1,103 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (The Earliest Linux Distros: Before Mainstream Distros Became So Popular)
-[#]: via: (https://itsfoss.com/earliest-linux-distros/)
-[#]: author: (Avimanyu Bandyopadhyay https://itsfoss.com/author/avimanyu/)
-
-The Earliest Linux Distros: Before Mainstream Distros Became So Popular
-======
-
-In this throwback history article, we’ve tried to look back into how some of the earliest Linux distributions evolved and came into being as we know them today.
-
-![][1]
-
-Here we have tried to explore how the idea of popular distros such as Red Hat, Debian, Slackware, SUSE, Ubuntu and many others came into being after the first Linux kernel became available.
-
-Since Linux was initially released in the form of a kernel in 1991, the distros we know today were made possible with the help of numerous collaborators throughout the world, who created the shells, libraries, compilers and related packages needed to make it a complete operating system.
-
-### 1\. The first known “distro” by HJ Lu
-
-The way we know Linux distributions today goes back to 1992, when the first known distro-like tools to get access to Linux were released by HJ Lu. It consisted of two 5.25” floppy diskettes:
-
-![Linux 0.12 Boot and Root Disks | Photo Credit][2]
-
- * **LINUX 0.12 BOOT DISK**: The “boot” disk was used to boot the system first.
- * **LINUX 0.12 ROOT DISK**: The second “root” disk for getting a command prompt for access to the Linux file system after booting.
-
-To install 0.12 on a hard drive, one had to use a hex editor to edit its master boot record (MBR), and that was quite a complex process, especially during that era.
-
-Feeling too nostalgic?
-
-You can [install the cool-retro-term application][3], which gives you a Linux terminal with the vintage look of 90s computers.
-
-### 2\. MCC Interim Linux
-
-![MCC Linux 0.99.14, 1993 | Image Credit][4]
-
-Initially released in the same year as “LINUX 0.12” by Owen Le Blanc of Manchester Computing Centre in England, MCC Interim Linux was the first Linux distribution for novice users, with a menu-driven installer and end-user/programming tools. Also in the form of a collection of diskettes, it could be installed on a system to provide a basic text-based environment.
-
-MCC Interim Linux was much more user-friendly than 0.12, and the installation process on a hard drive was much easier and similar to modern ways. It did not require using a hex editor to edit the MBR.
-
-Though it was first released in February 1992, it also became available for download through FTP in November of that year.
-
-### 3\. TAMU Linux
-
-![TAMU Linux | Image Credit][5]
-
-TAMU Linux was developed by Aggies at Texas A&M with the Texas A&M Unix & Linux Users Group in May 1992 and was called TAMU 1.0A. It was the first Linux distribution to offer the X Window System instead of just a text-based operating system.
-
-### 4\. Softlanding Linux System (SLS)
-
-![SLS Linux 1.05, 1994 | Image Credit][6]
-
-“Gentle Touchdowns for DOS Bailouts” was their slogan! SLS was released by Peter McDonald in May 1992. SLS was quite widely used and popular during its time and greatly promoted the idea of Linux. But due to a decision by the developers to change the executable format in the distro, users stopped using it.
-
-Many of the popular distros the present community is most familiar with evolved via SLS. Two of them are:
-
- * **Slackware**: One of the earliest Linux distros, Slackware was created by Patrick Volkerding in 1993 and is based on SLS.
- * **Debian**: An initiative by Ian Murdock, Debian was also released in 1993 after moving on from the SLS model. The very popular Ubuntu distro we know today is based on Debian.
-
-### 5\. Yggdrasil
-
-![LGX Yggdrasil Fall 1993 | Image Credit][7]
-
-Released in December 1992, Yggdrasil was the first distro to give birth to the idea of Live Linux CDs. It was developed by Yggdrasil Computing, Inc., founded by Adam J. Richter in Berkeley, California. It could automatically configure itself on system hardware as “Plug-and-Play”, which is a very common and well-known feature today. The later versions of Yggdrasil included a hack for running any proprietary MS-DOS CD-ROM driver within Linux.
-
-![Yggdrasil’s Plug-and-Play Promo | Image Credit][8]
-
-Their motto was “Free Software For The Rest of Us”.
-
-In the late 90s, one very popular distro was [Mandriva][9], first released in 1998, formed by unifying the French _Mandrake Linux_ distribution with the Brazilian _Conectiva Linux_ distribution. It had a release lifetime of 18 months for updates related to Linux and system software, and desktop-based updates were released every year. It also had server versions with 5 years of support. Now we have [Open Mandriva][10].
-
-If you have more nostalgic distros to share from the earliest days of Linux release, please share with us in the comments below.
-
--------------------------------------------------------------------------------
-
-via: https://itsfoss.com/earliest-linux-distros/
-
-作者:[Avimanyu Bandyopadhyay][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://itsfoss.com/author/avimanyu/
-[b]: https://github.com/lujun9972
-[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/earliest-linux-distros.png?resize=800%2C450&ssl=1
-[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/Linux-0.12-Floppies.jpg?ssl=1
-[3]: https://itsfoss.com/cool-retro-term/
-[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/MCC-Interim-Linux-0.99.14-1993.jpg?fit=800%2C600&ssl=1
-[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/TAMU-Linux.jpg?ssl=1
-[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/SLS-1.05-1994.jpg?ssl=1
-[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/LGX_Yggdrasil_CD_Fall_1993.jpg?fit=781%2C800&ssl=1
-[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/Yggdrasil-Linux-Summer-1994.jpg?ssl=1
-[9]: https://en.wikipedia.org/wiki/Mandriva_Linux
-[10]: https://www.openmandriva.org/
diff --git a/sources/tech/20190301 Guide to Install VMware Tools on Linux.md b/sources/tech/20190301 Guide to Install VMware Tools on Linux.md
deleted file mode 100644
index e6a43bcde1..0000000000
--- a/sources/tech/20190301 Guide to Install VMware Tools on Linux.md
+++ /dev/null
@@ -1,143 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Guide to Install VMware Tools on Linux)
-[#]: via: (https://itsfoss.com/install-vmware-tools-linux)
-[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
-
-Guide to Install VMware Tools on Linux
-======
-
-**VMware Tools enhances your VM experience by allowing you to share the clipboard and folders, among other things. Learn how to install VMware tools on Ubuntu and other Linux distributions.**
-
-In an earlier tutorial, you learned to [install VMware Workstation on Ubuntu][1]. You can further enhance the functionality of your virtual machines by installing VMware Tools.
-
-If you have already installed a guest OS on VMware, you must have noticed the requirement for [VMware tools][2] – even if you were not completely aware of what it is needed for.
-
-In this article, we will highlight the importance of VMware tools, the features it offers, and the method to install VMware tools on Ubuntu or any other Linux distribution.
-
-### VMware Tools: Overview & Features
-
-![Installing VMware Tools on Ubuntu][3]Installing VMware Tools on Ubuntu
-
-For obvious reasons, the virtual machine (your guest OS) will not behave exactly like the host. There will be certain limitations in terms of its performance and operation. And that is why a set of utilities (VMware Tools) was introduced.
-
-VMware tools help in managing the guest OS in an efficient manner while also improving its performance.
-
-#### What exactly are VMware Tools responsible for?
-
-![How to Install VMware tools on Linux][4]
-
-You have got a vague idea of what it does – but let us talk about the details:
-
- * Synchronizes the time between the guest OS and the host to make things easier.
- * Unlocks the ability to pass messages from host OS to guest OS. For example, you copy text on the host to your clipboard and can easily paste it into your guest OS.
- * Enables sound in the guest OS.
- * Improves video resolution.
- * Improves the cursor movement.
- * Fixes incorrect network speed data.
- * Eliminates inadequate color depth.
-
-These are the major changes that happen when you install VMware tools on the guest OS. But what exactly does it contain and feature in order to unlock/enhance these functionalities? Let's see.
-
-#### VMware tools: Core Feature Details
-
-![Sharing clipboard between guest and host OS with VMware Tools][5]Sharing clipboard between guest and host OS with VMware Tools
-
-If you do not want to know what it includes to enable the functionalities, you can skip this part. But, for the curious readers, let us briefly discuss it:
-
-**VMware device drivers:** It really depends on the OS. Most of the major operating systems do include device drivers by default, so you do not have to install them separately. This generally involves the memory control driver, mouse driver, audio driver, NIC driver, VGA driver, and so on.
-
-**VMware user process:** This is where things get really interesting. With this, you get the ability to copy-paste and drag-drop between the host and the guest OS. You can basically copy and paste text from the host to the virtual machine or vice versa.
-
-You get to drag and drop files as well. In addition, it enables the pointer release/lock when you do not have an SVGA driver installed.
-
-**VMware tools lifecycle management:** Well, we will take a look at how to install VMware tools below – but this feature helps you easily install/upgrade VMware tools in the virtual machine.
-
-**Shared Folders**: In addition to these, VMware tools also allow you to have shared folders between the guest OS and the host.
-
-![Sharing folder between guest and host OS using VMware Tools in Linux][6]Sharing folder between guest and host OS using VMware Tools in Linux
-
-Of course, what it does and facilitates also depends on the host OS. For example, on Windows, you get a Unity mode on VMware to run programs on the virtual machine and operate them from the host OS.
-
-### How to install VMware Tools on Ubuntu & other Linux distributions
-
-**Note:** For Linux guest operating systems, you should already have the "Open VM Tools" suite installed, eliminating the need to install VMware tools separately most of the time.
-
-Most of the time, when you install a guest OS, you will get a prompt as a software update or a popup telling you to install VMware tools if the operating system supports [Easy Install][7].
-
-Windows and Ubuntu do support Easy Install. So, even if you are using Windows as your host OS or trying to install VMware tools on Ubuntu, you should first get an option to install the VMware tools easily as a popup message. Here's what it should look like:
-
-![Pop-up to install VMware Tools][8]Pop-up to install VMware Tools
-
-This is the easiest way to get it done. So, make sure you have an active network connection when you set up the virtual machine.
-
-If you do not get any of these popups or options to easily install VMware tools, you have to install them manually. Here's how to do that:
-
-1\. Launch VMware Workstation Player.
-
-2\. From the menu, navigate through **Virtual Machine -> Install VMware tools**. If you already have it installed and want to repair the installation, you will see the same option appear as "**Re-install VMware tools**".
-
-3\. Once you click on that, you will observe a virtual CD/DVD mounted in the guest OS.
-
-4\.
Open that, copy/paste the **tar.gz** file to any location of your choice, and extract it; here we choose the **Desktop**.
-
-![][9]
-
-5\. After extraction, launch the terminal and navigate to the folder inside by typing in the following command:
-
-```
-cd Desktop/VMwareTools-10.3.2-9925305/vmware-tools-distrib
-```
-
-You need to check the name of the folder and the path in your case; depending on the version and where you extracted it, they might vary.
-
-![][10]
-
-Replace **Desktop** with your storage location (such as cd Downloads), and the rest should remain the same if you are installing version **10.3.2**.
-
-6\. Now, simply type in the following command to start the installation:
-
-```
-sudo ./vmware-install.pl -d
-```
-
-![][11]
-
-You will be asked for your password for permission to install; type it in and you should be good to go.
-
-That's it. You are done. This set of steps should be applicable to almost any Ubuntu-based guest operating system, whether you want to install VMware tools on Ubuntu Desktop, Ubuntu Server, or another Ubuntu-based OS.
-
-**Wrapping Up**
-
-Installing VMware tools on Ubuntu Linux is pretty easy. In addition to the easy method, we have also explained the manual method to do it. If you still need help, or have a suggestion regarding the installation, let us know in the comments down below.
-
-
--------------------------------------------------------------------------------
-
-via: https://itsfoss.com/install-vmware-tools-linux
-
-作者:[Ankush Das][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://itsfoss.com/author/ankush/
-[b]: https://github.com/lujun9972
-[1]: https://itsfoss.com/install-vmware-player-ubuntu-1310/
-[2]: https://kb.vmware.com/s/article/340
-[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools-downloading.jpg?fit=800%2C531&ssl=1
-[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/install-vmware-tools-linux.png?resize=800%2C450&ssl=1
-[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools-features.gif?resize=800%2C500&ssl=1
-[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools-shared-folder.jpg?fit=800%2C660&ssl=1
-[7]: https://docs.vmware.com/en/VMware-Workstation-Player-for-Linux/15.0/com.vmware.player.linux.using.doc/GUID-3F6B9D0E-6CFC-4627-B80B-9A68A5960F60.html
-[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools.jpg?fit=800%2C481&ssl=1
-[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools-extraction.jpg?fit=800%2C564&ssl=1
-[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools-folder.jpg?fit=800%2C487&ssl=1
-[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools-installation-ubuntu.jpg?fit=800%2C492&ssl=1
diff --git a/sources/tech/20190320 Move your dotfiles to version control.md b/sources/tech/20190320 Move your dotfiles to version control.md
deleted file mode 100644
index 7d070760c7..0000000000
--- a/sources/tech/20190320 Move your dotfiles to version control.md
+++ /dev/null
@@ -1,130 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Move your dotfiles to version control)
-[#]: via: (https://opensource.com/article/19/3/move-your-dotfiles-version-control)
-[#]: author: (Matthew Broberg https://opensource.com/users/mbbroberg)
-
-Move your dotfiles to version control
-======
-Back up or sync your custom configurations across your systems by sharing dotfiles on GitLab or GitHub.
-
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/documents_papers_file_storage_work.png?itok=YlXpAqAJ)
-
-There is something truly exciting about customizing your operating system through the collection of hidden files we call dotfiles. In [What a Shell Dotfile Can Do For You][1], H. "Waldo" Grunenwald goes into excellent detail about the why and how of setting up your dotfiles. Let's dig into the why and how of sharing them.
-
-### What's a dotfile?
-
-"Dotfiles" is a common term for all the configuration files we have floating around our machines. These files usually start with a **.** at the beginning of the filename, like **.gitconfig**, and operating systems often hide them by default. For example, when I use **ls -a** on MacOS, it shows all the lovely dotfiles that would otherwise not be in the output.
-
-```
-dotfiles on master
-➜ ls
-README.md  Rakefile   bin       misc    profiles   zsh-custom
-
-dotfiles on master
-➜ ls -a
-.               .gitignore      .oh-my-zsh      README.md       zsh-custom
-..              .gitmodules     .tmux           Rakefile
-.gemrc          .global_ignore .vimrc           bin
-.git            .gvimrc         .zlogin         misc
-.gitconfig      .maid           .zshrc          profiles
-```
-
-If I take a look at one, **.gitconfig**, which I use for Git configuration, I see a ton of customization. I have account information, terminal color preferences, and tons of aliases that make my command-line interface feel like mine. Here's a snippet from the **[alias]** block:
-
-```
-# Show the diff between the latest commit and the current state
-d = !"git diff-index --quiet HEAD -- || clear; git --no-pager diff --patch-with-stat"
-
-# `git di $number` shows the diff between the state `$number` revisions ago and the current state
-di = !"d() { git diff --patch-with-stat HEAD~$1; }; git diff-index --quiet HEAD -- || clear; d"
-
-# Pull in remote changes for the current repository and all its submodules
-p = !"git pull; git submodule foreach git pull origin master"
-
-# Checkout a pull request from origin (of a github repository)
-pr = !"pr() { git fetch origin pull/$1/head:pr-$1; git checkout pr-$1; }; pr"
-```
-
-Since my **.gitconfig** has over 200 lines of customization, I have no interest in rewriting it on every new computer or system I use, and neither does anyone else. This is one reason sharing dotfiles has become more and more popular, especially with the rise of the social coding site GitHub. The canonical article advocating for sharing dotfiles is Zach Holman's [Dotfiles Are Meant to Be Forked][2] from 2010. The premise is true to this day: I want to share them, with myself, with those new to dotfiles, and with those who have taught me so much by sharing their customizations.
-
-### Sharing dotfiles
-
-Many of us have multiple systems, or know that hard drives are fickle, so we want to back up our carefully curated customizations. How do we keep these wonderful files in sync across environments?
-
-My favorite answer is distributed version control, preferably a service that will handle the heavy lifting for me. I regularly use GitHub and continue to enjoy GitLab as I get more experienced with it. Either one is a perfect place to share your information. To set yourself up:
-
- 1. Sign into your preferred Git-based service.
 2\.
Create a repository called "dotfiles." (Make it public! Sharing is caring.)
 3. Clone it to your local environment.*
 4. Copy your dotfiles into the folder.
 5. Symbolically link (symlink) them back to their target folder (most often **$HOME**).
 6. Push them to the remote repository.
-
-* You may need to set up your Git configuration commands to clone the repository. Both GitHub and GitLab will prompt you with the commands to run.
-
-![](https://opensource.com/sites/default/files/uploads/gitlab-new-project.png)
-
-Step 5 above is the crux of this effort and can be a bit tricky. Whether you use a script or do it by hand, the workflow is to symlink from your dotfiles folder to the dotfiles destination so that any updates to your dotfiles are easily pushed to the remote repository. To do this for my **.gitconfig** file, I would enter:
-
-```
-$ cd dotfiles/
-# link by absolute path so the symlink doesn't end up pointing at itself
-$ ln -nfs "$PWD/.gitconfig" "$HOME/.gitconfig"
-```
-
-The flags added to the symlinking command offer a few additional benefits:
-
- * **-s** creates a symbolic link instead of a hard link
- * **-f** continues with other symlinking when an error occurs (not needed here, but useful in loops)
- * **-n** avoids symlinking a symlink (same as **-h** for other versions of **ln**)
-
-You can review the IEEE and Open Group [specification of **ln**][3] and the version on [MacOS 10.14.3][4] if you want to dig deeper into the available parameters. I had to look up these flags since I pulled them from someone else's dotfiles.
-
-You can also make updating simpler with a little additional code, like the [Rakefile][5] I forked from [Brad Parbs][6]. Alternatively, you can keep it incredibly simple, as Jeff Geerling does [in his dotfiles][7]. He symlinks files using [this Ansible playbook][8]. Keeping everything in sync at this point is easy: you can set up a cron job or occasionally run **git push** from your dotfiles folder.
-
-### Quick aside: What not to share
-
-Before we move on, it is worth noting what you should not add to a shared dotfile repository—even if it starts with a dot. Anything that is a security risk, like files in your **.ssh/** folder, is not a good choice to share using this method. Be sure to double-check your configuration files before publishing them online and triple-check that no API tokens are in your files.
-
-### Where should I start?
-
-If Git is new to you, my [article about the terminology][9] and [a cheat sheet][10] of my most frequently used commands should help you get going.
-
-There are other incredible resources to help you get started with dotfiles. Years ago, I came across [dotfiles.github.io][11] and continue to go back to it for a broader look at what people are doing. There is a lot of tribal knowledge hidden in other people's dotfiles. Take the time to scroll through some and don't be shy about adding them to your own.
-
-I hope this will get you started on the joy of having consistent dotfiles across your computers.
-
-What's your favorite dotfile trick? Add a comment or tweet me [@mbbroberg][12].
-
--------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/3/move-your-dotfiles-version-control
-
-作者:[Matthew Broberg][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/mbbroberg
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/article/18/9/shell-dotfile
-[2]: https://zachholman.com/2010/08/dotfiles-are-meant-to-be-forked/
-[3]: http://pubs.opengroup.org/onlinepubs/9699919799/utilities/ln.html
-[4]: https://www.unix.com/man-page/FreeBSD/1/ln/
-[5]: https://github.com/mbbroberg/dotfiles/blob/master/Rakefile
-[6]: https://github.com/bradp/dotfiles
-[7]: https://github.com/geerlingguy/dotfiles
-[8]: https://github.com/geerlingguy/mac-dev-playbook
-[9]: https://opensource.com/article/19/2/git-terminology
-[10]: https://opensource.com/downloads/cheat-sheet-git
-[11]: http://dotfiles.github.io/
-[12]: https://twitter.com/mbbroberg?lang=en
diff --git a/sources/tech/20190404 How writers can get work done better with Git.md b/sources/tech/20190404 How writers can get work done better with Git.md
deleted file mode 100644
index 1da47fd69f..0000000000
--- a/sources/tech/20190404 How writers can get work done better with Git.md
+++ /dev/null
@@ -1,266 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (How writers can get work done better with Git)
-[#]: via: (https://opensource.com/article/19/4/write-git)
-[#]: author: (Seth Kenlon https://opensource.com/users/sethhttps://opensource.com/users/noreplyhttps://opensource.com/users/seth)
-
-How writers can get work done better with Git
-======
-If you're a writer, you could probably benefit from using Git. Learn how
-in our series about little-known uses of Git.
-![Writing Hand][1]
-
-[Git][2] is one of those rare applications that has managed to encapsulate so much of modern computing into one program that it ends up serving as the computational engine for many other applications. While it's best-known for tracking source code changes in software development, it has many other uses that can make your life easier and more organized. In this series leading up to Git's 14th anniversary on April 7, we'll share seven little-known ways to use Git. Today, we'll look at ways writers can use Git to get work done.
-
-### Git for writers
-
-Some people write fiction; others write academic papers, poetry, screenplays, technical manuals, or articles about open source. Many do a little of each. The common thread is that if you're a writer, you could probably benefit from using Git. While Git is famously a highly technical tool used by computer programmers, it's ideal for the modern author, and this article will demonstrate how it can change the way you write—and why you'd want it to.
-
-Before talking about Git, though, it's important to talk about what _copy_ (or _content_, for the digital age) really is, and why it's different from your delivery _medium_. It's the 21st century, and the tool of choice for most writers is a computer. While computers are deceptively good at combining processes like copy editing and layout, writers are (re)discovering that separating content from style is a good idea, after all. That means you should be writing on a computer like it's a typewriter, not a word processor.
In computer lingo, that means writing in _plaintext_.
-
-### Writing in plaintext
-
-It used to be a safe assumption that you knew what market you were writing for. You wrote content for a book, or a website, or a software manual. These days, though, the market's flattened: you might decide to use content you write for a website in a printed book project, and the printed book might release an EPUB version later. And in the case of digital editions of your content, the person reading your content is in ultimate control: they may read your words on the website where you published them, or they might click on Firefox's excellent [Reader View][3], or they might print to physical paper, or they could dump the web page to a text file with Lynx, or they may not see your content at all because they use a screen reader.
-
-It makes sense to write your words as words, leaving the delivery to the publishers. Even if you are also your own publisher, treating your words as a kind of source code for your writing is a smarter and more efficient way to work, because when it comes time to publish, you can use the same source (your plaintext) to generate output appropriate to your target (PDF for print, EPUB for e-books, HTML for websites, and so on).
-
-Writing in plaintext not only means you don't have to worry about layout or how your text is styled, but you also no longer require specialized tools. Anything that can produce text becomes a valid "word processor" for you, whether it's a basic notepad app on your mobile or tablet, the text editor that came bundled with your computer, or a free editor you download from the internet. You can write on practically any device, no matter where you are or what you're doing, and the text you produce integrates perfectly with your project, no modification required.
-
-And, conveniently, Git specializes in managing plaintext.
-
-### The Atom editor
-
-When you write in plaintext, a word processor is overkill. Using a text editor is easier because text editors don't try to "helpfully" restructure your input. It lets you type the words in your head onto the screen, no interference. Better still, text editors are often designed around a plugin architecture, such that the application itself is woefully basic (it edits text), but you can build an environment around it to meet your every need.
-
-A great example of this design philosophy is the [Atom][4] editor. It's a cross-platform text editor with built-in Git integration. If you're new to working in plaintext and new to Git, Atom is the easiest way to get started.
-
-#### Install Git and Atom
-
-First, make sure you have Git installed on your system. If you run Linux or BSD, Git is available in your software repository or ports tree. The command you use will vary depending on your distribution; on Fedora, for instance:
-
-```
-$ sudo dnf install git
-```
-
-You can also download and install Git for [Mac][5] and [Windows][6].
-
-You won't need to use Git directly, because Atom serves as your Git interface. Installing Atom is the next step.
-
-If you're on Linux, install Atom from your software repository through your software installer or the appropriate command, such as:
-
-```
-$ sudo dnf install atom
-```
-
-Atom does not currently build on BSD. However, there are very good alternatives available, such as [GNU Emacs][7]. For Mac and Windows users, you can find installers on the [Atom website][4].
-
-Once your installs are done, launch the Atom editor.
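If you'd like to confirm both installs from a terminal first, each tool can report its version, and the `atom` command also accepts a path, which is a handy way to open a writing folder directly:

```
# Confirm both tools are available
git --version
atom --version

# Open the current folder in Atom
atom .
```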
-
-#### A quick tour
-
-If you're going to live in plaintext and Git, you need to get comfortable with your editor. Atom's user interface may be more dynamic than what you are used to. You can think of it more like Firefox or Chrome than as a word processor, in fact, because it has tabs and panels that can be opened and closed as they are needed, and it even has add-ons that you can install and configure. It's not practical to try to cover all of Atom's many features, but you can at least get familiar with what's possible.
-
-When Atom opens, it displays a welcome screen. If nothing else, this screen is a good introduction to Atom's tabbed interface. You can close the welcome screens by clicking the "close" icons on the tabs at the top of the Atom window and create a new file using **File > New File**.
-
-Working in plaintext is a little different than working in a word processor, so here are some tips for writing content in a way that a human can connect with and that Git and computers can parse, track, and convert.
-
-#### Write in Markdown
-
-These days, when people talk about plaintext, mostly they mean Markdown. Markdown is more of a style than a format, meaning that it intends to provide a predictable structure to your text so computers can detect natural patterns and convert the text intelligently. Markdown has many definitions, but the best technical definition and cheatsheet are on [CommonMark's website][8].
-
-```
-# Chapter 1
-
-This is a paragraph with an *italic* word and a **bold** word in it.
-And it can even reference an image.
-
-![An image will render here.](drawing.jpg)
-```
-
-As you can tell from the example, Markdown isn't meant to read or feel like code, but it can be treated as code. If you follow the expectations of Markdown defined by CommonMark, then you can reliably convert, with just one click of a button, your writing from Markdown to .docx, .epub, .html, MediaWiki, .odt, .pdf, .rtf, and a dozen other formats _without_ loss of formatting.
-
-You can think of Markdown a little like a word processor's styles. If you've ever written for a publisher with a set of styles that govern what chapter titles and section headings look like, this is basically the same thing, except that instead of selecting a style from a drop-down menu, you're adding little notations to your text. These notations look natural to any modern reader who's used to "txt speak," but are swapped out with fancy text stylings when the text is rendered. It is, in fact, what word processors secretly do behind the scenes. The word processor shows bold text, but if you could see the code generated to make your text bold, it would be a lot like Markdown (actually it's the far more complex XML). With Markdown, that barrier is removed; it looks scarier on the one hand, but on the other, you can write Markdown on literally anything that generates text without losing any formatting information.
-
-The popular file extension for Markdown files is .md. If you're on a platform that doesn't know what a .md file is, you can associate the extension to Atom manually or else just use the universal .txt extension. The file extension doesn't change the nature of the file; it just changes how your computer decides what to do with it. Atom and some platforms are smart enough to know that a file is plaintext no matter what extension you give it.
-
-#### Live preview
-
-Atom features the **Markdown Preview** plugin, which shows you both the plain Markdown you're writing and the way it will (commonly) render.
- -![Atom's preview screen][9] - -To activate this preview pane, select **Packages > Markdown Preview > Toggle Preview** or press **Ctrl+Shift+M**. - -This view provides you with the best of both worlds. You get to write without the burden of styling your text, but you also get to see a common example of what your text will look like, at least in a typical digital format. Of course, the point is that you can't control how your text is ultimately rendered, so don't be tempted to adjust your Markdown to force your render preview to look a certain way. - -#### One sentence per line - -Your high school writing teacher doesn't ever have to see your Markdown. - -It won't come naturally at first, but maintaining one sentence per line makes more sense in the digital world. Markdown ignores single line breaks (when you've pressed the Return or Enter key) and only creates a new paragraph after a single blank line. - -![Writing in Atom][10] - -The advantage of writing one sentence per line is that your work is easier to track. That is, if you've changed one word at the start of a paragraph, then it's easy for Atom, Git, or any application to highlight that change in a meaningful way if the change is limited to one line rather than one word in a long paragraph. In other words, a change to one sentence should only affect that sentence, not the whole paragraph. - -You might be thinking, "many word processors track changes, too, and they can highlight a single word that's changed." But those revision trackers are bound to the interface of that word processor, which means you can't look through revisions without being in front of that word processor. In a plaintext workflow, you can review revisions in plaintext, which means you can make or approve edits no matter what you have on hand, as long as that device can deal with plaintext (and most of them can). - -Writers admittedly don't usually think in terms of line numbers, but it's a useful tool for computers, and ultimately a great reference point in general. Atom numbers the lines of your text document by default. A _line_ is only a line once you have pressed the Enter or Return key. - -![Writing in Atom][11] - -If a line has a dot instead of a number, that means it's part of the previous line wrapped for you because it couldn't fit on your screen. - -#### Theme it - -If you're a visual person, you might be very particular about the way your writing environment looks. Even if you are writing in plain Markdown, it doesn't mean you have to write in a programmer's font or in a dark window that makes you look like a coder. The simplest way to modify what Atom looks like is to use [theme packages][12]. It's conventional for theme designers to differentiate dark themes from light themes, so you can search with the keyword Dark or Light, depending on what you want. - -To install a theme, select **Edit > Preferences**. This opens a new tab in the Atom interface. Yes, tabs are used for your working documents _and_ for configuration and control panels. In the **Settings** tab, click on the **Install** category. - -In the **Install** panel, search for the name of the theme you want to install. Click the **Themes** button on the right of the search field to search only for themes. Once you've found your theme, click its **Install** button. - -![Atom's themes][13] - -To use a theme you've installed or to customize a theme to your preference, navigate to the **Themes** category in your **Settings** tab. Pick the theme you want to use from the drop-down menu. 
The changes take place immediately, so you can see exactly how the theme affects your environment.
-
-You can also change your working font in the **Editor** category of the **Settings** tab. Atom defaults to monospace fonts, which are generally preferred by programmers. But you can use any font on your system, whether it's serif or sans or gothic or cursive. Whatever you want to spend your day staring at, it's entirely up to you.
-
-On a related note, by default Atom draws a vertical marker down its screen as a guide for people writing code. Programmers often don't want to write long lines of code, so this vertical line is a reminder to them to simplify things. The vertical line is meaningless to writers, though, and you can remove it by disabling the **wrap-guide** package.
-
-To disable the **wrap-guide** package, select the **Packages** category in the **Settings** tab and search for **wrap-guide**. When you've found the package, click its **Disable** button.
-
-#### Dynamic structure
-
-When creating a long document, I find that writing one chapter per file makes more sense than writing an entire book in a single file. Furthermore, I don't name my chapters in the obvious syntax **chapter-1.md** or **1.example.md**, but by chapter titles or keywords, such as **example.md**. To provide myself guidance in the future about how the book is meant to be assembled, I maintain a file called **toc.md** (for "Table of Contents") where I list the (current) order of my chapters.
-
-I do this because, no matter how convinced I am that chapter 6 just couldn't possibly happen before chapter 1, there's rarely a time that I don't swap the order of one or two chapters or sections before I'm finished with a book. I find that keeping it dynamic from the start helps me avoid renaming confusion, and it also helps me treat the material less rigidly.
-
-### Git in Atom
-
-Two things every writer has in common are that they're writing for keeps and that their writing is a journey. You don't sit down to write and finish with a final draft; by definition, you have a first draft. And that draft goes through revisions, each of which you carefully save in duplicate and triplicate just in case one of your files turns up corrupted. Eventually, you get to what you call a final draft, but more than likely you'll be going back to it one day, either to resurrect the good parts or to fix the bad.
-
-The most exciting feature in Atom is its strong Git integration. Without ever leaving Atom, you can interact with all of the major features of Git, tracking and updating your project, rolling back changes you don't like, integrating changes from a collaborator, and more. The best way to learn it is to step through it, so here's how to use Git within the Atom interface from the beginning to the end of a writing project.
-
-First things first: Reveal the Git panel by selecting **View > Toggle Git Tab**. This causes a new tab to open on the right side of Atom's interface. There's not much to see yet, so just keep it open for now.
-
-#### Starting a Git project
-
-You can think of Git as being bound to a folder. Any folder outside a Git directory doesn't know about Git, and Git doesn't know about it. Folders and files within a Git directory are ignored until you grant Git permission to keep track of them.
-
-You can create a Git project by creating a new project folder in Atom. Select **File > Add Project Folder** and create a new folder on your system. The folder you create appears in the left **Project Panel** of your Atom window.
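Everything the next few steps do through Atom's interface can also be done from a terminal, if that's ever useful to you; a minimal equivalent, using a hypothetical project folder called `my-book`, is just:

```
mkdir ~/my-book    # create the project folder
cd ~/my-book
git init           # initialize it as a Git repository
```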
- -#### Git add - -Right-click on your new project folder and select **New File** to create a new file in your project folder. If you have files you want to import into your new project, right-click on the folder and select **Show in File Manager** to open the folder in your system's file viewer (Dolphin or Nautilus on Linux, Finder on Mac, Explorer on Windows), and then drag-and-drop your files. - -With a project file (either the empty one you created or one you've imported) open in Atom, click the **Create Repository** button in the **Git** tab. In the pop-up dialog box, click **Init** to initialize your project directory as a local Git repository. Git adds a **.git** directory (invisible in your system's file manager, but visible to you in Atom) to your project folder. Don't be fooled by this: The **.git** directory is for Git to manage, not you, so you'll generally stay out of it. But seeing it in Atom is a good reminder that you're working in a project actively managed by Git; in other words, revision history is available when you see a **.git** directory. - -In your empty file, write some stuff. You're a writer, so type some words. It can be any set of words you please, but remember the writing tips above. - -Press **Ctrl+S** to save your file and it will appear in the **Unstaged Changes** section of the **Git** tab. That means the file exists in your project folder but has not yet been committed over to Git's purview. Allow Git to keep track of your file by clicking on the **Stage All** button in the top-right of the **Git** tab. If you've used a word processor with revision history, you can think of this step as permitting Git to record changes. - -#### Git commit - -Your file is now staged. All that means is Git is aware that the file exists and is aware that it has been changed since the last time Git was made aware of it. - -A Git commit sends your file into Git's internal and eternal archives. If you're used to word processors, this is similar to naming a revision. To create a commit, enter some descriptive text in the **Commit** message box at the bottom of the **Git** tab. You can be vague or cheeky, but it's more useful if you enter useful information for your future self so that you know why the revision was made. - -The first time you make a commit, you must create a branch. Git branches are a little like alternate realities, allowing you to switch from one timeline to another to make changes that you may or may not want to keep forever. If you end up liking the changes, you can merge one experimental branch into another, thereby unifying different versions of your project. It's an advanced process that's not worth learning upfront, but you still need an active branch, so you have to create one for your first commit. - -Click on the **Branch** icon at the very bottom of the **Git** tab to create a new branch. - -![Creating a branch][14] - -It's customary to name your first branch **master**. You don't have to; you can name it **firstdraft** or whatever you like, but adhering to the local customs can sometimes make talking about Git (and looking up answers to questions) a little easier because you'll know that when someone mentions **master** , they really mean **master** and not **firstdraft** or whatever you called your branch. - -On some versions of Atom, the UI may not update to reflect that you've created a new branch. Don't worry; the branch will be created (and the UI updated) once you make your commit. 
Press the **Commit** button, whether it reads **Create detached commit** or **Commit to master**.

Once you've made a commit, the state of your file is preserved forever in Git's memory.

#### History and Git diff

A natural question is how often you should make a commit. There's no one right answer to that. Saving a file with **Ctrl+S** and committing to Git are two separate processes, so you will continue to do both. You'll probably want to make commits whenever you feel like you've done something significant or are about to try out a crazy new idea that you may want to back out of.

To get a feel for what impact a commit has on your workflow, remove some text from your test document and add some text to the top and bottom. Make another commit. Do this a few times until you have a small history at the bottom of your **Git** tab, then click on a commit to view it in Atom.

![Viewing differences][15]

When viewing a past commit, you see three elements:

 1. Text in green was added to a document when the commit was made.
 2. Text in red was removed from the document when the commit was made.
 3. All other text was untouched.

#### Remote backup

One of the advantages of using Git is that, by design, it is distributed, meaning you can commit your work to your local repository and push your changes out to any number of servers for backup. You can also pull changes in from those servers so that whatever device you happen to be working on always has the latest changes.

For this to work, you must have an account on a Git server. There are several free hosting services out there, including GitHub (the company that produces Atom, though GitHub itself, oddly, is not open source) and GitLab, which is open source. Preferring open source to proprietary, I'll use GitLab in this example.

If you don't already have a GitLab account, sign up for one and start a new project. The project name doesn't have to match your project folder in Atom, but it probably makes sense if it does. You can leave your project private, in which case only you and anyone you give explicit permissions to can access it, or you can make it public if you want it to be available to anyone on the internet who stumbles upon it.

Do not add a README to the project.

Once the project is created, it provides you with instructions on how to set up the repository. This is great information if you decide to use Git in a terminal or with a separate GUI, but Atom's workflow is different.

Click the **Clone** button in the top-right of the GitLab interface. This reveals the address you must use to access the Git repository. Copy the **SSH** address (not the **https** address).

In Atom, click on your project's **.git** directory and open the **config** file. Add these configuration lines to the file, adjusting the **seth/example.git** part of the **url** value to match your unique address.

* * *

```
[remote "origin"]
url = git@gitlab.com:seth/example.git
fetch = +refs/heads/*:refs/remotes/origin/*
[branch "master"]
remote = origin
merge = refs/heads/master
```

At the bottom of the **Git** tab, a new button has appeared, labeled **Fetch**. Since your server is brand new and therefore has no data for you to fetch, right-click on the button and select **Push**. This pushes your changes to your GitLab account, and now your project is backed up on a Git server.

Pushing changes to a server is something you can do after each commit.
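If you ever want to do the same from a terminal, the equivalent is a single command once the remote is configured. This sketch assumes the remote and branch names used in the configuration above:

```
$ git push origin master   # send local commits to the server
$ git pull origin master   # fetch and merge changes made elsewhere
```

Whether you push from Atom or from a shell, the result is the same.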
It provides immediate offsite backup and, since the amount of data is usually minimal, it's practically as fast as a local save.

### Writing and Git

Git is a complex system, useful for more than just revision tracking and backups. It enables asynchronous collaboration and encourages experimentation. This article has covered the basics, but there are many more articles—and entire books—on Git and how to use it to make your work more efficient, more resilient, and more dynamic. It all starts with using Git for small tasks. The more you use it, the more questions you'll find yourself asking, and eventually the more tricks you'll learn.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/4/write-git

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/write-hand_0.jpg?itok=Uw5RJD03 (Writing Hand)
[2]: https://git-scm.com/
[3]: https://support.mozilla.org/en-US/kb/firefox-reader-view-clutter-free-web-pages
[4]: http://atom.io
[5]: https://git-scm.com/download/mac
[6]: https://git-scm.com/download/win
[7]: http://gnu.org/software/emacs
[8]: https://commonmark.org/help/
[9]: https://opensource.com/sites/default/files/uploads/atom-preview.jpg (Atom's preview screen)
[10]: https://opensource.com/sites/default/files/uploads/atom-para.jpg (Writing in Atom)
[11]: https://opensource.com/sites/default/files/uploads/atom-linebreak.jpg (Writing in Atom)
[12]: https://atom.io/themes
[13]: https://opensource.com/sites/default/files/uploads/atom-theme.jpg (Atom's themes)
[14]: https://opensource.com/sites/default/files/uploads/atom-branch.jpg (Creating a branch)
[15]: https://opensource.com/sites/default/files/uploads/git-diff.jpg (Viewing differences)

diff --git a/sources/tech/20190513 Blockchain 2.0 - Introduction To Hyperledger Fabric -Part 10.md b/sources/tech/20190513 Blockchain 2.0 - Introduction To Hyperledger Fabric -Part 10.md
deleted file mode 100644
index c685af487c..0000000000
--- a/sources/tech/20190513 Blockchain 2.0 - Introduction To Hyperledger Fabric -Part 10.md
+++ /dev/null
@@ -1,81 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Blockchain 2.0 – Introduction To Hyperledger Fabric [Part 10])
[#]: via: (https://www.ostechnix.com/blockchain-2-0-introduction-to-hyperledger-fabric/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)

Blockchain 2.0 – Introduction To Hyperledger Fabric [Part 10]
======

![Hyperledger Fabric][1]

### Hyperledger Fabric

The [**Hyperledger project**][2] is an umbrella organization of sorts featuring many different modules and systems under development. Among the most popular of these individual sub-projects is **Hyperledger Fabric**. This post will explore the features that would make the Fabric almost indispensable in the near future once blockchain systems start proliferating into mainstream use. Towards the end, we will also take a quick look at what developers and enthusiasts need to know regarding the technicalities of the Hyperledger Fabric.
### Inception

In the usual fashion for the Hyperledger project, Fabric was “donated” to the organization by one of its core members, **IBM**, which was previously its principal developer. The technology platform shared by IBM was put to joint development at the Hyperledger project, with contributions from over 100 member companies and institutions.

Currently on **v1.4** of its LTS release, Fabric has come a long way and is now seen as the go-to enterprise solution for managing business data. The core vision that surrounds the Hyperledger project inevitably permeates into the Fabric as well. The Hyperledger Fabric system carries forward all the enterprise-ready and scalable features that are hard-coded into all projects under the Hyperledger organization.

### Highlights Of Hyperledger Fabric

Hyperledger Fabric offers a wide variety of features and standards that are built around the mission of supporting fast development and modular architectures. Furthermore, compared to its competitors (primarily **Ripple** and [**Ethereum**][3]), Fabric takes an explicit stance toward closed and [**permissioned blockchains**][4]. The core objective here is to develop a set of tools which will aid blockchain developers in creating customized solutions, not to create a standalone ecosystem or a product.

Some of the highlights of the Hyperledger Fabric are given below:

 * **Permissioned blockchain systems**

This is a category where other platforms such as Ethereum and Ripple differ quite a lot from Hyperledger Fabric. The Fabric by default is a tool designed to implement a private permissioned blockchain. Such blockchains cannot be accessed by everyone, and the nodes working to offer consensus or to verify transactions are chosen by a central authority. This might be important for some applications such as banking and insurance, where transactions have to be verified by the central authority rather than participants.

 * **Confidential and controlled information flow**

The Fabric has built-in permission systems that restrict information flow to a specific group or certain individuals, as the case may be. Unlike a public blockchain, where anyone and everyone who runs a node has a copy of and selective access to data stored in the blockchain, the admin of the system can choose how, and with whom, to share access to the information. There are also subsystems which encrypt the stored data to higher security standards than the existing competition.

 * **Plug and play architecture**

Hyperledger Fabric has a plug-and-play type architecture. Individual components of the system may be implemented, and components that developers don’t see a use for may be discarded. The Fabric takes a highly modular and customizable route to development rather than the one-size-fits-all approach taken by its competitors. This is especially attractive for firms and companies looking to build a lean system fast. This, combined with the interoperability of the Fabric with other Hyperledger components, implies that developers and designers now have access to a diverse set of standardized tools instead of having to pull code from different sources and integrate it afterwards. It also presents a rather fail-safe way to build robust modular systems.

 * **Smart contracts and chaincode**

A distributed application running on a blockchain is called a [**Smart contract**][5].
While the smart contract term is more or less associated with the Ethereum platform, chaincode is the name given to the same in the Hyperledger camp. Apart from chaincode applications possessing all the benefits of **DApps**, what sets Hyperledger apart is that the code may be written in multiple high-level programming languages. It supports [**Go**][6] and **JavaScript** out of the box, and supports many others after integration with the appropriate compiler modules. Though this might not mean much at this point, it means that existing talent can be used for ongoing blockchain projects, which has the potential to save companies billions of dollars in personnel training and management in the long run. Developers can code in languages they’re comfortable in to start building applications on the Hyperledger Fabric, and need not learn or train in platform-specific languages and syntax. This presents flexibility which current competitors of the Hyperledger Fabric do not offer.

 * The Hyperledger Fabric is a back-end driver platform and is mainly aimed at integration projects where a blockchain or another distributed ledger technology is required. As such, it does not provide any user-facing services except for minor scripting capabilities. (Think of it as being more like a scripting language.)
 * Hyperledger Fabric supports building sidechains for specific use-cases. If the developer wishes to isolate a set of users or participants to a specific part or function of the application, they may do so by implementing side-chains. Side-chains are blockchains that derive from a main parent, but form a different chain after their initial block. The block which gives rise to the new chain stays immune to further changes in the new chain, and the new chain remains immutable even if new information is added to the original chain. This functionality will aid in scaling the platform being developed and usher in user-specific and case-specific processing capabilities.
 * The previous feature also means that not all users will have an “exact” copy of all the data in the blockchain, as is usually expected from public chains. Participating nodes will have a copy of data that is only relevant to them. For instance, consider an application similar to PayTM in India. The app has wallet functionality as well as an e-commerce end. However, not all its wallet users use PayTM to shop online. In this scenario, only active shoppers will have the corresponding chain of transactions on the PayTM e-commerce site, whereas the wallet users will just have a copy of the chain that stores wallet transactions. This flexible architecture for data storage and retrieval is important while scaling, since massive singular blockchains have been shown to increase lead times for processing transactions. The chain can be kept lean and well categorised this way.

We will look at other modules under the Hyperledger Project in detail in upcoming posts.
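For readers who want to look under the hood before then, Fabric is developed in the open. As a hedged starting point (assuming only that `git` is installed), you can simply clone the project's source and sample repositories and browse from there:

```
$ git clone https://github.com/hyperledger/fabric.git
$ git clone https://github.com/hyperledger/fabric-samples.git
```

The samples repository in particular contains small, self-contained example networks and chaincode that are easier to digest than the full platform source.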
--------------------------------------------------------------------------------

via: https://www.ostechnix.com/blockchain-2-0-introduction-to-hyperledger-fabric/

作者:[sk][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]: https://www.ostechnix.com/blockchain-2-0-an-introduction-to-hyperledger-project-hlp/
[3]: https://www.ostechnix.com/blockchain-2-0-what-is-ethereum/
[4]: https://www.ostechnix.com/blockchain-2-0-public-vs-private-blockchain-comparison/
[5]: https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/
[6]: https://www.ostechnix.com/install-go-language-linux/

diff --git a/sources/tech/20190528 A Quick Look at Elvish Shell.md b/sources/tech/20190528 A Quick Look at Elvish Shell.md
deleted file mode 100644
index 778965d442..0000000000
--- a/sources/tech/20190528 A Quick Look at Elvish Shell.md
+++ /dev/null
@@ -1,106 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A Quick Look at Elvish Shell)
[#]: via: (https://itsfoss.com/elvish-shell/)
[#]: author: (John Paul https://itsfoss.com/author/john/)

A Quick Look at Elvish Shell
======

Everyone who comes to this site has some knowledge (no matter how slight) of the Bash shell that comes as the default on so many systems. Over the years, there have been several attempts to create shells that solve some of Bash's shortcomings. One such shell is Elvish, which we will look at today.

### What is Elvish Shell?

![Pipelines In Elvish][1]

[Elvish][2] is more than just a shell. It is [also][3] “an expressive programming language”. It has a number of interesting features, including:

 * Written in Go
 * Built-in file manager, inspired by the [Ranger file manager][4] (`Ctrl + N`)
 * Searchable command history (`Ctrl + R`)
 * History of directories visited (`Ctrl + L`)
 * Powerful pipelines that support structured data, such as lists, maps, and functions
 * Includes a “standard set of control structures: conditional control with `if`, loops with `for` and `while`, and exception handling with `try`”
 * Support for [third-party modules via a package manager to extend Elvish][5]
 * Licensed under the BSD 2-Clause license

“Why is it named Elvish?” I hear you shout. Well, according to [their website][6], they chose their current name because:

> In roguelikes, items made by the elves have a reputation of high quality. These are usually called elven items, but “elvish” was chosen because it ends with “sh”, a long tradition of Unix shells. It also rhymes with fish, one of the shells that influenced the philosophy of Elvish.

### How to Install Elvish Shell

Elvish is available in several mainstream distributions.

Note that the software is very young. The most recent version is 0.12. According to the project’s [GitHub page][3]: “Despite its pre-1.0 status, it is already suitable for most daily interactive use.”

![Elvish Control Structures][7]

#### Debian and Ubuntu

Elvish packages were introduced into Debian Buster and Ubuntu 17.10. Unfortunately, those packages are out of date, and you will need to use a [PPA][8] to install the latest version.
You will need to use the following commands:

```
sudo add-apt-repository ppa:zhsj/elvish
sudo apt update
sudo apt install elvish
```

#### Fedora

Elvish is not available in the main Fedora repos. You will need to add the [FZUG Repository][9] to install Elvish. To do so, you will need to use these commands:

```
sudo dnf config-manager --add-repo=http://repo.fdzh.org/FZUG/FZUG.repol
sudo dnf install elvish
```

#### Arch

Elvish is available in the [Arch User Repository][10].

I believe you know [how to change shell in Linux][11], so after installing Elvish you can switch to it.

### Final Thoughts on Elvish Shell

Personally, I have no reason to install Elvish on any of my systems. I can get most of its features by installing a couple of small command line programs or using already installed programs.

For example, the ability to search past commands already exists in Bash, and it works pretty well. If you want to improve your ability to search past commands, I would recommend installing [fzf][12] instead. Fzf uses fuzzy search, so you don’t need to remember the exact command you are looking for. Fzf also allows you to preview and open files.

I do think that the fact that Elvish is also a programming language is neat, but I’ll stick with Bash shell scripting until Elvish matures a little more.

Have you ever used Elvish? Do you think it would be worthwhile to install Elvish? What is your favorite Bash replacement? Please let us know in the comments below.

If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][13].

--------------------------------------------------------------------------------

via: https://itsfoss.com/elvish-shell/

作者:[John Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/pipelines-in-elvish.png?fit=800%2C421&ssl=1
[2]: https://elv.sh/
[3]: https://github.com/elves/elvish
[4]: https://ranger.github.io/
[5]: https://github.com/elves/awesome-elvish
[6]: https://elv.sh/ref/name.html
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/Elvish-control-structures.png?fit=800%2C425&ssl=1
[8]: https://launchpad.net/%7Ezhsj/+archive/ubuntu/elvish
[9]: https://github.com/FZUG/repo/wiki/Add-FZUG-Repository
[10]: https://aur.archlinux.org/packages/elvish/
[11]: https://linuxhandbook.com/change-shell-linux/
[12]: https://github.com/junegunn/fzf
[13]: http://reddit.com/r/linuxusersgroup

diff --git a/sources/tech/20190627 RPM packages explained.md b/sources/tech/20190627 RPM packages explained.md
deleted file mode 100644
index 3fb3cee6b2..0000000000
--- a/sources/tech/20190627 RPM packages explained.md
+++ /dev/null
@@ -1,339 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (RPM packages explained)
[#]: via: (https://fedoramagazine.org/rpm-packages-explained/)
[#]: author: (Ankur Sinha "FranciscoD" https://fedoramagazine.org/author/ankursinha/)

RPM packages explained
======

![][1]

Perhaps the best-known way the Fedora community pursues its [mission of promoting free and open source software and content][2] is by developing the [Fedora software distribution][3].
So it’s not a surprise at all that a very large proportion of our community resources are spent on this task. This post summarizes how this software is “packaged” and the underlying tools, such as _rpm_, that make it all possible.

### RPM: the smallest unit of software

The editions and flavors ([spins][4]/[labs][5]/[silverblue][6]) that users get to choose from are all very similar. They’re all composed of various software that is mixed and matched to work well together. What differs between them is the exact list of tools that goes into each. That choice depends on the use case that they target. The basic unit of all of these is an RPM package file.

RPM files are archives that are similar to ZIP files or tarballs. In fact, they use compression to reduce the size of the archive. However, along with files, RPM archives also contain metadata about the package. This can be queried using the _rpm_ tool:

```
$ rpm -q fpaste
fpaste-0.3.9.2-2.fc30.noarch

$ rpm -qi fpaste
Name        : fpaste
Version     : 0.3.9.2
Release     : 2.fc30
Architecture: noarch
Install Date: Tue 26 Mar 2019 08:49:10 GMT
Group       : Unspecified
Size        : 64144
License     : GPLv3+
Signature   : RSA/SHA256, Thu 07 Feb 2019 15:46:11 GMT, Key ID ef3c111fcfc659b9
Source RPM  : fpaste-0.3.9.2-2.fc30.src.rpm
Build Date  : Thu 31 Jan 2019 20:06:01 GMT
Build Host  : buildhw-07.phx2.fedoraproject.org
Relocations : (not relocatable)
Packager    : Fedora Project
Vendor      : Fedora Project
URL         :
Bug URL     :
Summary     : A simple tool for pasting info onto sticky notes instances
Description :
It is often useful to be able to easily paste text to the Fedora
Pastebin at and this simple script
will do that and return the resulting URL so that people may
examine the output. This can hopefully help folks who are for
some reason stuck without X, working remotely, or any other
reason they may be unable to paste something into the pastebin

$ rpm -ql fpaste
/usr/bin/fpaste
/usr/share/doc/fpaste
/usr/share/doc/fpaste/README.rst
/usr/share/doc/fpaste/TODO
/usr/share/licenses/fpaste
/usr/share/licenses/fpaste/COPYING
/usr/share/man/man1/fpaste.1.gz
```

When an RPM package is installed, the _rpm_ tools know exactly what files were added to the system. So, removing a package also removes these files, and leaves the system in a consistent state. This is why installing software using _rpm_ is preferred over installing software from source whenever possible.

### Dependencies

Nowadays, it is quite rare for software to be completely self-contained. Even [fpaste][7], a simple one-file Python script, requires that the Python interpreter be installed. So, if the system does not have Python installed (highly unlikely, but possible), _fpaste_ cannot be used. In packager jargon, we say that “Python is a **run-time dependency** of _fpaste_”.

When RPM packages are built (the process of building RPMs is not discussed in this post), the generated archive includes all of this metadata. That way, the tools interacting with the RPM package archive know what else must be installed so that fpaste works correctly:

```
$ rpm -q --requires fpaste
/usr/bin/python3
python3
rpmlib(CompressedFileNames) <= 3.0.4-1
rpmlib(FileDigests) <= 4.6.0-1
rpmlib(PayloadFilesHavePrefix) <= 4.0-1
rpmlib(PayloadIsXz) <= 5.2-1

$ rpm -q --provides fpaste
fpaste = 0.3.9.2-2.fc30

$ rpm -qi python3
Name        : python3
Version     : 3.7.3
Release     : 3.fc30
Architecture: x86_64
Install Date: Thu 16 May 2019 18:51:41 BST
Group       : Unspecified
Size        : 46139
License     : Python
Signature   : RSA/SHA256, Sat 11 May 2019 17:02:44 BST, Key ID ef3c111fcfc659b9
Source RPM  : python3-3.7.3-3.fc30.src.rpm
Build Date  : Sat 11 May 2019 01:47:35 BST
Build Host  : buildhw-05.phx2.fedoraproject.org
Relocations : (not relocatable)
Packager    : Fedora Project
Vendor      : Fedora Project
URL         :
Bug URL     :
Summary     : Interpreter of the Python programming language
Description :
Python is an accessible, high-level, dynamically typed, interpreted programming
language, designed with an emphasis on code readability.
It includes an extensive standard library, and has a vast ecosystem of
third-party libraries.

The python3 package provides the "python3" executable: the reference
interpreter for the Python language, version 3.
The majority of its standard library is provided in the python3-libs package,
which should be installed automatically along with python3.
The remaining parts of the Python standard library are broken out into the
python3-tkinter and python3-test packages, which may need to be installed
separately.

Documentation for Python is provided in the python3-docs package.

Packages containing additional libraries for Python are generally named with
the "python3-" prefix.

$ rpm -q --provides python3
python(abi) = 3.7
python3 = 3.7.3-3.fc30
python3(x86-64) = 3.7.3-3.fc30
python3.7 = 3.7.3-3.fc30
python37 = 3.7.3-3.fc30
```

### Resolving RPM dependencies

While _rpm_ knows the required dependencies for each archive, it does not know where to find them. This is by design: _rpm_ only works on local files and must be told exactly where they are. So, if you try to install a single RPM package, you get an error if _rpm_ cannot find the package’s run-time dependencies. This example tries to install a package downloaded from the Fedora package set:

```
$ ls
python3-elephant-0.6.2-3.fc30.noarch.rpm

$ rpm -qpi python3-elephant-0.6.2-3.fc30.noarch.rpm
Name        : python3-elephant
Version     : 0.6.2
Release     : 3.fc30
Architecture: noarch
Install Date: (not installed)
Group       : Unspecified
Size        : 2574456
License     : BSD
Signature   : (none)
Source RPM  : python-elephant-0.6.2-3.fc30.src.rpm
Build Date  : Fri 14 Jun 2019 17:23:48 BST
Build Host  : buildhw-02.phx2.fedoraproject.org
Relocations : (not relocatable)
Packager    : Fedora Project
Vendor      : Fedora Project
URL         :
Bug URL     :
Summary     : Elephant is a package for analysis of electrophysiology data in Python
Description :
Elephant - Electrophysiology Analysis Toolkit Elephant is a package for the
analysis of neurophysiology data, based on Neo.

$ rpm -qp --requires python3-elephant-0.6.2-3.fc30.noarch.rpm
python(abi) = 3.7
python3.7dist(neo) >= 0.7.1
python3.7dist(numpy) >= 1.8.2
python3.7dist(quantities) >= 0.10.1
python3.7dist(scipy) >= 0.14.0
python3.7dist(six) >= 1.10.0
rpmlib(CompressedFileNames) <= 3.0.4-1
rpmlib(FileDigests) <= 4.6.0-1
rpmlib(PartialHardlinkSets) <= 4.0.4-1
rpmlib(PayloadFilesHavePrefix) <= 4.0-1
rpmlib(PayloadIsXz) <= 5.2-1

$ sudo rpm -i ./python3-elephant-0.6.2-3.fc30.noarch.rpm
error: Failed dependencies:
        python3.7dist(neo) >= 0.7.1 is needed by python3-elephant-0.6.2-3.fc30.noarch
        python3.7dist(quantities) >= 0.10.1 is needed by python3-elephant-0.6.2-3.fc30.noarch
```

In theory, one could download all the packages that are required for _python3-elephant_, and tell _rpm_ where they all are, but that isn’t convenient. What if _python3-neo_ and _python3-quantities_ have other run-time requirements, and so on? Very quickly, the **dependency chain** can get quite complicated.

#### Repositories

Luckily, _dnf_ and friends exist to help with this issue. Unlike _rpm_, _dnf_ is aware of **repositories**. Repositories are collections of packages, with metadata that tells _dnf_ what these repositories contain. All Fedora systems come with the standard Fedora repositories enabled by default:

```
$ sudo dnf repolist
repo id              repo name                                 status
fedora               Fedora 30 - x86_64                        56,582
fedora-modular       Fedora Modular 30 - x86_64                   135
updates              Fedora 30 - x86_64 - Updates               8,573
updates-modular      Fedora Modular 30 - x86_64 - Updates         138
updates-testing      Fedora 30 - x86_64 - Test Updates          8,458
```

There’s more information on [these repositories][8], and how they [can be managed][9], on the Fedora quick docs.

_dnf_ can be used to query repositories for information on the packages they contain. It can also search them for software, or install/uninstall/upgrade packages from them:

```
$ sudo dnf search elephant
Last metadata expiration check: 0:05:21 ago on Sun 23 Jun 2019 14:33:38 BST.
========================== Name & Summary Matched: elephant ==========================
python3-elephant.noarch : Elephant is a package for analysis of electrophysiology data in Python
python3-elephant.noarch : Elephant is a package for analysis of electrophysiology data in Python

$ sudo dnf list \*elephant\*
Last metadata expiration check: 0:05:26 ago on Sun 23 Jun 2019 14:33:38 BST.
Available Packages
python3-elephant.noarch      0.6.2-3.fc30      updates-testing
python3-elephant.noarch      0.6.2-3.fc30              updates
```

#### Installing dependencies

When installing the package using _dnf_ now, it _resolves_ all the required dependencies, then calls _rpm_ to carry out the _transaction_:

```
$ sudo dnf install python3-elephant
Last metadata expiration check: 0:06:17 ago on Sun 23 Jun 2019 14:33:38 BST.
Dependencies resolved.
=======================================================================================
 Package             Architecture  Version                            Repository       Size
=======================================================================================
Installing:
 python3-elephant    noarch        0.6.2-3.fc30                       updates-testing  456 k
Installing dependencies:
 python3-neo         noarch        0.8.0-0.1.20190215git49b6041.fc30  fedora           753 k
 python3-quantities  noarch        0.12.2-4.fc30                      fedora           163 k
Installing weak dependencies:
 python3-igor        noarch        0.3-5.20150408git2c2a79d.fc30      fedora            63 k

Transaction Summary
=======================================================================================
Install  4 Packages

Total download size: 1.4 M
Installed size: 7.0 M
Is this ok [y/N]: y
Downloading Packages:
(1/4): python3-igor-0.3-5.20150408git2c2a79d.fc30.noarch.rpm        222 kB/s |  63 kB  00:00
(2/4): python3-elephant-0.6.2-3.fc30.noarch.rpm                     681 kB/s | 456 kB  00:00
(3/4): python3-quantities-0.12.2-4.fc30.noarch.rpm                  421 kB/s | 163 kB  00:00
(4/4): python3-neo-0.8.0-0.1.20190215git49b6041.fc30.noarch.rpm     840 kB/s | 753 kB  00:00
---------------------------------------------------------------------------------------
Total                                                               884 kB/s | 1.4 MB  00:01
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                          1/1
  Installing       : python3-quantities-0.12.2-4.fc30.noarch                  1/4
  Installing       : python3-igor-0.3-5.20150408git2c2a79d.fc30.noarch        2/4
  Installing       : python3-neo-0.8.0-0.1.20190215git49b6041.fc30.noarch     3/4
  Installing       : python3-elephant-0.6.2-3.fc30.noarch                     4/4
  Running scriptlet: python3-elephant-0.6.2-3.fc30.noarch                     4/4
  Verifying        : python3-elephant-0.6.2-3.fc30.noarch                     1/4
  Verifying        : python3-igor-0.3-5.20150408git2c2a79d.fc30.noarch        2/4
  Verifying        : python3-neo-0.8.0-0.1.20190215git49b6041.fc30.noarch     3/4
  Verifying        : python3-quantities-0.12.2-4.fc30.noarch                  4/4

Installed:
  python3-elephant-0.6.2-3.fc30.noarch   python3-igor-0.3-5.20150408git2c2a79d.fc30.noarch   python3-neo-0.8.0-0.1.20190215git49b6041.fc30.noarch   python3-quantities-0.12.2-4.fc30.noarch

Complete!
```

Notice how dnf even installed _python3-igor_, which isn’t a direct dependency of _python3-elephant_ (it is pulled in as a weak dependency).

### DnfDragora: a graphical interface to DNF

While technical users may find _dnf_ straightforward to use, it isn’t for everyone. [Dnfdragora][10] addresses this issue by providing a graphical front end to _dnf_.

![dnfdragora \(version 1.1.1-2 on Fedora 30\) listing all the packages installed on a system.][11]

From a quick look, dnfdragora appears to provide all of _dnf_’s main functions.

There are other tools in Fedora that also manage packages. GNOME Software and Discover are two examples. GNOME Software is focused on graphical applications only. You can’t use the graphical front end to install command line or terminal tools such as _htop_ or _weechat_. However, GNOME Software does support the installation of [Flatpaks][12] and Snap applications, which _dnf_ does not. So, they are different tools with different target audiences, and they provide different functions.

This post only touches the tip of the iceberg that is the life cycle of software in Fedora. This article explained what RPM packages are, and the main differences between using _rpm_ and using _dnf_.
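As a compact recap, here are the main commands used throughout this article, collected in one place (the package names are just the examples from above):

```
$ rpm -qi fpaste                      # show a package's metadata
$ rpm -ql fpaste                      # list the files a package installed
$ rpm -q --requires fpaste            # list a package's run-time dependencies
$ rpm -q --provides fpaste            # list what a package provides
$ sudo dnf search elephant            # search the enabled repositories
$ sudo dnf install python3-elephant   # install a package, resolving dependencies
```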
In future posts, we’ll speak more about:

 * The processes that are needed to create these packages
 * How the community tests them to ensure that they are built correctly
 * The infrastructure that the community uses to get them to community users

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/rpm-packages-explained/

作者:[Ankur Sinha "FranciscoD"][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/ankursinha/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/06/rpm.png-816x345.jpg
[2]: https://docs.fedoraproject.org/en-US/project/#_what_is_fedora_all_about
[3]: https://getfedora.org
[4]: https://spins.fedoraproject.org/
[5]: https://labs.fedoraproject.org/
[6]: https://silverblue.fedoraproject.org/
[7]: https://src.fedoraproject.org/rpms/fpaste
[8]: https://docs.fedoraproject.org/en-US/quick-docs/repositories/
[9]: https://docs.fedoraproject.org/en-US/quick-docs/adding-or-removing-software-repositories-in-fedora/
[10]: https://src.fedoraproject.org/rpms/dnfdragora
[11]: https://fedoramagazine.org/wp-content/uploads/2019/06/dnfdragora-1024x558.png
[12]: https://fedoramagazine.org/getting-started-flatpak/

diff --git a/sources/tech/20190719 Buying a Linux-ready laptop.md b/sources/tech/20190719 Buying a Linux-ready laptop.md
deleted file mode 100644
index f63f9276e4..0000000000
--- a/sources/tech/20190719 Buying a Linux-ready laptop.md
+++ /dev/null
@@ -1,80 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Buying a Linux-ready laptop)
[#]: via: (https://opensource.com/article/19/7/linux-laptop)
[#]: author: (Ricardo Berlasso https://opensource.com/users/rgb-es)

Buying a Linux-ready laptop
======
Tuxedo makes it easy to buy an out-of-the-box "penguin-ready" laptop.
![Penguin with green background][1]

Recently, I bought and started using a Tuxedo Book BC1507, a Linux laptop computer. Ten years ago, if someone had told me that, by the end of the decade, I could buy top-quality, "penguin-ready" laptops from companies such as [System76][2], [Slimbook][3], and [Tuxedo][4], I probably would have laughed. Well, now I'm laughing, but with joy!

Going beyond designing computers for free/libre open source software (FLOSS), all three companies recently [announced][5] they are trying to eliminate proprietary BIOS software by switching to [Coreboot][6].

### Buying it

Tuxedo Computers is a German company that builds Linux-ready laptops. In fact, if you want a different operating system, it costs more.

Buying the computer was incredibly easy. Tuxedo offers many payment methods: not only credit cards but also PayPal and even bank transfers. Just fill out the bank transfer form on Tuxedo's web page, and the company will send you its bank details.

Tuxedo builds every computer on demand, and picking exactly what you want is as easy as selecting the basic model and exploring the drop-down menus to select different components.
There is a lot of information on the page to guide you in the purchase.

If you pick a different Linux distribution from the recommended one, Tuxedo does a "net install," so have a network cable ready to finish the installation, or you can burn your preferred image onto a USB key. I used a DVD with the openSUSE Leap 15.1 installer through an external DVD reader instead, but you get the idea.

The model I chose accepts up to two disks: one SSD and the other either an SSD or a conventional hard drive. As I was already over budget, I decided to pick a conventional 1TB disk and increase the RAM to 16GB. The processor is an 8th Generation i5 with four cores. I selected a back-lit Spanish keyboard, a 1920×1080/96dpi screen, and an SD card reader—all in all, a great system.

If you're fine with the default English or German keyboard, you can even ask for a penguin icon on the Meta key! I needed a Spanish keyboard, which doesn't offer this option.

### Receiving and using it

The perfectly packaged computer arrived safely at my door just six working days after the payment was registered. After unpacking the computer and unlocking the battery, I was ready to roll.

![Tuxedo Book BC1507][7]

The new toy on top of my (physical) desktop.

The computer's design is really nice and feels solid. Even though the chassis on this model is not aluminum, it stays cool. The fan is really quiet, and the airflow goes to the back edge, not to the sides, as in many other laptops. The battery provides several hours of autonomy away from an electrical outlet. An option in the BIOS called FlexiCharger stops charging the battery after it reaches a certain percentage, so you don't need to remove the battery when you work for a long time while plugged in.

The keyboard is really comfortable and surprisingly quiet. Even the touchpad keys are quiet! Also, you can easily adjust the light intensity on the back-lit keyboard.

Finally, it's easy to access every component in the laptop, so the computer can be updated or repaired without problems. Tuxedo even sends spare screws!

### Conclusion

After a month of heavy use, I'm really happy with the system. I got exactly what I asked for, and everything works perfectly.

Because they are usually high-end systems, Linux-included computers tend to be on the expensive side of the spectrum. If you compare the price of a Tuxedo or Slimbook computer with something with similar specifications from a more established brand, the prices are not that different. If you are after a powerful system to use with free software, don't hesitate to support these companies: What they offer is worth the price.

Let us know in the comments about your experience with Tuxedo and other "penguin-friendly" companies.
* * *

_This article is based on "[My new 'penguin ready' laptop: Tuxedo-Book-BC1507][8]," published on Ricardo's blog, [From Mind to Type][9]._

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/7/linux-laptop

作者:[Ricardo Berlasso][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/rgb-es
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22 (Penguin with green background)
[2]: https://system76.com/
[3]: https://slimbook.es/en/
[4]: https://www.tuxedocomputers.com/
[5]: https://www.tuxedocomputers.com/en/Infos/News/Tuxedo-Computers-stands-for-Free-Software-and-Security-.tuxedo
[6]: https://coreboot.org/
[7]: https://opensource.com/sites/default/files/uploads/tuxedo-600_0.jpg (Tuxedo Book BC1507)
[8]: https://frommindtotype.wordpress.com/2019/06/17/my-new-penguin-ready-laptop-tuxedo-book-bc1507/
[9]: https://frommindtotype.wordpress.com/

diff --git a/sources/tech/20190809 Mutation testing is the evolution of TDD.md b/sources/tech/20190809 Mutation testing is the evolution of TDD.md
deleted file mode 100644
index 766d2a4285..0000000000
--- a/sources/tech/20190809 Mutation testing is the evolution of TDD.md
+++ /dev/null
@@ -1,285 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Mutation testing is the evolution of TDD)
[#]: via: (https://opensource.com/article/19/8/mutation-testing-evolution-tdd)
[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzic)

Mutation testing is the evolution of TDD
======
Since test-driven development is modeled on how nature works, mutation
testing is the natural next step in the evolution of DevOps.
![Ants and a leaf making the word "open"][1]

In "[Failure is a feature in blameless DevOps][2]," I discussed the central role of failure in delivering quality by soliciting feedback. This is the failure agile DevOps teams rely on to guide them and drive development. [Test-driven development (TDD)][3] is the _[conditio sine qua non][4]_ of any agile DevOps value stream delivery. Failure-centric TDD methodology only works if it is paired with measurable tests.

TDD methodology is modeled on how nature works and how nature produces winners and losers in the evolutionary game.

### Natural selection

![Charles Darwin][5]

In 1859, [Charles Darwin][6] proposed the theory of evolution in his book _[On the Origin of Species][7]_. Darwin's thesis was that natural variability is caused by the combination of spontaneous mutations in individual organisms and environmental pressures. These pressures eliminate less-adapted organisms while favoring other, more fit organisms. Each and every living being mutates its chromosomes, and those spontaneous mutations are carried to the next generation (the offspring). The newly emerged variability is then tested under natural selection—the environmental pressures that exist due to the variability of environmental conditions.
This simplified diagram illustrates the process of adjusting to environmental conditions.

![Environmental pressures on fish][8]

Fig. 1. Different environmental pressures result in different outcomes governed by natural selection. Image screenshot from a [video by Richard Dawkins][9].

This illustration shows a school of fish in their natural habitat. The habitat varies (darker or lighter gravel at the bottom of the sea or riverbed), as does each fish (darker or lighter body patterns and colors).

It also shows two situations (i.e., two variations of the environmental pressure):

 1. The predator is present
 2. The predator is absent

In the first situation, fish that are easier to spot against the gravel shade are at higher risk of being picked off by predators. When the gravel is darker, the lighter portion of the fish population is thinned out. And vice versa—when the gravel is a lighter shade, the darker portion of the fish population suffers the thinning-out scenario.

In the second situation, fish are sufficiently relaxed to engage in mating. In the absence of predators and in the presence of the mating ritual, the opposite results can be expected: the fish that stand out against the background have a better chance of being picked for mating and transferring their characteristics to offspring.

### Selection criteria

When selecting among variability, the process is never arbitrary, capricious, whimsical, or random. The decisive factor is always measurable. That decisive factor is usually called a _test_ or a _goal_.

A simple mathematical example can illustrate this process of decision making. (Only in this case it won't be governed by natural selection, but by artificial selection.) Suppose someone asks you to build a little function that will take a positive number and calculate that number's square root. How would you go about doing that?

The agile DevOps way is to _fail fast_. Start with humility, admitting upfront that you don't really know how to develop that function. All you know, at this point, is how to _describe_ what you'd like to do. In technical parlance, you are ready to engage in crafting a _unit test_.

"Unit test" describes your specific expectation. It could simply be formulated as "given the number 16, I expect the square root function to return the number 4." You probably know that the square root of 16 is 4. However, you don't know the square root of some larger numbers (such as 533).

At the very least, you have formulated your selection criteria, your test or goal.

### Implement the failing test

The [.NET Core][10] platform can illustrate the implementation. .NET typically uses [xUnit.net][11] as a unit-testing framework. (To follow the coding examples, please install .NET Core and xUnit.net.)

Open the command line and create a folder where your square root solution will be implemented. For example, type:

```
mkdir square_root
```

Then type:

```
cd square_root
```

Create a separate folder for unit tests:

```
mkdir unit_tests
```

Move into the **unit_tests** folder (**cd unit_tests**) and initialize the xUnit framework:

```
dotnet new xunit
```

Now, move one folder up to the **square_root** folder, and create the **app** folder:

```
mkdir app
cd app
```

Create the scaffold necessary for the C# code:

```
dotnet new classlib
```

Now open your favorite editor and start cracking!
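Assuming the commands above all succeeded, the resulting directory tree should look roughly like this (the exact file names can vary slightly between .NET Core SDK versions, and the **obj** folders the SDK creates are omitted here):

```
square_root/
├── app/
│   ├── Class1.cs
│   └── app.csproj
└── unit_tests/
    ├── UnitTest1.cs
    └── unit_tests.csproj
```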
In your code editor, navigate to the **unit_tests** folder and open **UnitTest1.cs**.

Replace the auto-generated code in **UnitTest1.cs** with:

```
using System;
using Xunit;
using app;

namespace unit_tests{

   public class UnitTest1{
       Calculator calculator = new Calculator();

       [Fact]
       public void GivenPositiveNumberCalculateSquareRoot(){
           var expected = 4;
           var actual = calculator.CalculateSquareRoot(16);
           Assert.Equal(expected, actual);
       }
   }
}
```

This unit test describes the expectation that the variable **expected** should be 4. The next line describes the **actual** value. It proposes to calculate the **actual** value by sending a message to the component called **calculator**. This component is described as capable of handling the **CalculateSquareRoot** message by accepting a numeric value. That component hasn't been developed yet. But it doesn't really matter, because this merely describes the expectations.

Finally, it describes what happens when the message is triggered to be sent. At that point, it asserts whether the **expected** value is equal to the **actual** value. If it is, the test passed and the goal is reached. If the **expected** value isn't equal to the **actual** value, the test fails.

Next, to implement the component called **calculator**, create a new file in the **app** folder and call it **Calculator.cs**. To implement a function that calculates the square root of a number, add the following code to this new file:

```
namespace app {
   public class Calculator {
       public double CalculateSquareRoot(double number) {
           double bestGuess = number;
           return bestGuess;
       }
   }
}
```

Before you can test this implementation, you need to instruct the unit test how to find this new component (**Calculator**). Navigate to the **unit_tests** folder and open the **unit_tests.csproj** file. Add the following project reference in the **<ItemGroup>** code block (it points the tests at the **app** project created above; adjust the relative path if you used different folder names):

```
<ProjectReference Include="../app/app.csproj" />
```

Save the **unit_tests.csproj** file. Now you are ready for your first test run.

Go to the command line and **cd** into the **unit_tests** folder. Run the following command:

```
dotnet test
```

Running the unit test will produce the following output:

![xUnit output after the unit test run fails][12]

Fig. 2. xUnit output after the unit test run fails.

As you can see, the unit test failed. It expected that sending number 16 to the **calculator** component would result in the number 4 as the output, but the output (the **actual** value) was the number 16.

Congratulations! You have created your first failure. Your unit test provided strong, immediate feedback urging you to fix the failure.

### Fix the failure

To fix the failure, you must improve **bestGuess**. Right now, **bestGuess** merely takes the number the function receives and returns it. Not good enough.

But how do you figure out a way to calculate the square root value? I have an idea—how about looking at how Mother Nature solves problems?

### Emulate Mother Nature by iterating

It is extremely hard (pretty much impossible) to guess the correct value from the first (and only) attempt. You must allow for several attempts at guessing to increase your chances of solving the problem. And one way to allow for multiple attempts is to _iterate_.
- -To iterate, store the **bestGuess** value in the **previousGuess** variable, transform the **bestGuess** value, and compare the difference between the two values. If the difference is 0, you solved the problem. Otherwise, keep iterating. - -Here is the body of the function that produces the correct value for the square root of any positive number: - - -``` -double bestGuess = number; -double previousGuess; - -do { -   previousGuess = bestGuess; -   bestGuess = (previousGuess + (number/previousGuess))/2; -} while((bestGuess - previousGuess) != 0); - -return bestGuess; -``` - -This loop (iteration) converges bestGuess values to the desired solution. Now your carefully crafted unit test passes! - -![Unit test successful][13] - -Fig. 3. Unit test successful, 0 tests failed. - -### The iteration solves the problem - -Just like Mother Nature's approach, in this exercise, iteration solves the problem. An incremental approach combined with stepwise refinement is the guaranteed way to arrive at a satisfactory solution. The decisive factor in this game is having a measurable goal and test. Once you have that, you can keep iterating until you hit the mark. - -### Now the punchline! - -OK, this was an amusing experiment, but the more interesting discovery comes from playing with this newly minted solution. Until now, your starting **bestGuess** was always equal to the number the function receives as the input parameter. What happens if you change the initial **bestGuess**? - -To test that, you can run a few scenarios. First, observe the stepwise refinement as the iteration loops through a series of guesses as it tries to calculate the square root of 25: - -![Code iterating for the square root of 25][14] - -Fig. 4. Iterating to calculate the square root of 25. - -Starting with 25 as the **bestGuess**, it takes eight iterations for the function to calculate the square root of 25. But what would happen if you made a comical, ridiculously wrong stab at the **bestGuess**? What if you started with a clueless second guess, that 1 million might be the square root of 25? What would happen in such an obviously erroneous situation? Would your function be able to deal with such idiocy? - -Take a look at the horse's mouth. Rerun the scenario, this time starting from 1 million as the **bestGuess**: - -![Stepwise refinement][15] - -Fig. 5. Stepwise refinement when calculating the square root of 25 by starting with 1 million as the initial **bestGuess**. - -Oh wow! Starting with a ludicrously large number, the number of iterations only tripled (from eight iterations to 23). Not nearly as dramatic an increase as you might intuitively expect. - -### The moral of the story - -The _Aha!_ moment arrives when you realize that, not only is iteration guaranteed to solve the problem, but it doesn't matter whether your search for the solution begins with a good or a terribly botched initial guess. However erroneous your initial understanding, the process of iteration, coupled with a measurable test/goal, puts you on the right track and delivers the solution. Guaranteed. - -Figures 4 and 5 show a steep and dramatic burndown. From a wildly incorrect starting point, the iteration quickly burns down to an absolutely correct solution. - -This amazing methodology, in a nutshell, is the essence of agile DevOps. - -### Back to some high-level observations - -Agile DevOps practice stems from the recognition that we live in a world that is fundamentally based on uncertainty, ambiguity, incompleteness, and a healthy dose of confusion. 
From the scientific/philosophical point of view, these traits are well documented and supported by [Heisenberg's Uncertainty Principle][16] (covering the uncertainty part), [Wittgenstein's Tractatus Logico-Philosophicus][17] (the ambiguity part), [Gödel's incompleteness theorems][18] (the incompleteness aspect), and the [Second Law of Thermodynamics][19] (the confusion caused by relentless entropy). - -In a nutshell, no matter how hard you try, you can never get complete information when trying to solve any problem. It is, therefore, more profitable to abandon an arrogant stance and adopt a more humble approach to solving problems. Humility pays big dividends in rewarding you—not only with the hoped-for solution but also with the byproduct of a well-structured solution. - -### Conclusion - -Nature works incessantly—it's a continuous flow. Nature has no master plan; everything happens as a response to what happened earlier. The feedback loops are very tight, and apparent progress/regress is piecemeal. Everywhere you look in nature, you see stepwise refinement, in one shape or form or another. - -Agile DevOps is a very interesting outcome of the engineering model's gradual maturation. DevOps is based on the recognition that the information you have available is always incomplete, so you'd better proceed cautiously. Obtain a measurable test (e.g., a hypothesis, a measurable expectation), make a humble attempt at satisfying it, most likely fail, then collect the feedback, fix the failure, and continue. There is no plan other than agreeing that, with each step of the way, there must be a measurable hypothesis/test. - -In the next article in this series, I'll take a closer look at how mutation testing provides much-needed feedback that drives value. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/19/8/mutation-testing-evolution-tdd - -作者:[Alex Bunardzic][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/alex-bunardzic -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_520X292_openanttrail-2.png?itok=xhD3WmUd (Ants and a leaf making the word "open") -[2]: https://opensource.com/article/19/7/failure-feature-blameless-devops -[3]: https://en.wikipedia.org/wiki/Test-driven_development -[4]: https://www.merriam-webster.com/dictionary/conditio%20sine%20qua%20non -[5]: https://opensource.com/sites/default/files/uploads/darwin.png (Charles Darwin) -[6]: https://en.wikipedia.org/wiki/Charles_Darwin -[7]: https://en.wikipedia.org/wiki/On_the_Origin_of_Species -[8]: https://opensource.com/sites/default/files/uploads/environmentalconditions2.png (Environmental pressures on fish) -[9]: https://www.youtube.com/watch?v=MgK5Rf7qFaU -[10]: https://dotnet.microsoft.com/ -[11]: https://xunit.net/ -[12]: https://opensource.com/sites/default/files/uploads/xunit-output.png (xUnit output after the unit test run fails) -[13]: https://opensource.com/sites/default/files/uploads/unit-test-success.png (Unit test successful) -[14]: https://opensource.com/sites/default/files/uploads/iterating-square-root.png (Code iterating for the square root of 25) -[15]: https://opensource.com/sites/default/files/uploads/bestguess.png (Stepwise refinement) -[16]: 
https://en.wikipedia.org/wiki/Uncertainty_principle -[17]: https://en.wikipedia.org/wiki/Tractatus_Logico-Philosophicus -[18]: https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems -[19]: https://en.wikipedia.org/wiki/Second_law_of_thermodynamics diff --git a/sources/tech/20190812 Cloud-native Java, open source security, and more industry trends.md b/sources/tech/20190812 Cloud-native Java, open source security, and more industry trends.md deleted file mode 100644 index 58791aba9c..0000000000 --- a/sources/tech/20190812 Cloud-native Java, open source security, and more industry trends.md +++ /dev/null @@ -1,88 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (laingke) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Cloud-native Java, open source security, and more industry trends) -[#]: via: (https://opensource.com/article/19/8/cloud-native-java-and-more) -[#]: author: (Tim Hildred https://opensource.com/users/thildred) - -Cloud-native Java, open source security, and more industry trends -====== -A weekly look at open source community and industry trends. -![Person standing in front of a giant computer screen with numbers, data][1] - -As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update. - -## [Why is modern web development so complicated?][2] - -> Modern frontend web development is a polarizing experience: many love it, others despise it. -> -> I am a huge fan of modern web development, though I would describe it as "magical"—and magic has its upsides and downsides... Recently I’ve been needing to explain “modern web development workflows” to folks who only have a cursory of vanilla web development workflows and… It is a LOT to explain! Even a hasty explanation ends up being pretty long. So in the effort of writing more of my explanations down, here is the beginning of a long yet hasty explanation of the evolution of web development.. - -**The impact:** Specific enough to be useful to (especially new) frontend developers, but simple and well explained enough to help non-developers understand better some of the frontend developer problems. By the end, you'll (kinda) know the difference between Javascript and WebAPIs and how 2019 Javascript is different than 2006 Javascript. - -## [Open sourcing the Kubernetes security audit][3] - -> Last year, the Cloud Native Computing Foundation (CNCF) began the process of performing and open sourcing third-party security audits for its projects in order to improve the overall security of our ecosystem. The idea was to start with a handful of projects and gather feedback from the CNCF community as to whether or not this pilot program was useful. The first projects to undergo this process were [CoreDNS][4], [Envoy][5] and [Prometheus][6]. These first public audits identified security issues from general weaknesses to critical vulnerabilities. With these results, project maintainers for CoreDNS, Envoy and Prometheus have been able to address the identified vulnerabilities and add documentation to help users. 
-> -> The main takeaway from these initial audits is that a public security audit is a great way to test the quality of an open source project along with its vulnerability management process and more importantly, how resilient the open source project’s security practices are. With CNCF [graduated projects][7] especially, which are used widely in production by some of the largest companies in the world, it is imperative that they adhere to the highest levels of security best practices. - -**The impact:** A lot of companies are placing big bets on Kubernetes being to the cloud what Linux is to that data center. Seeing 4 of those companies working together to make sure the project is doing what it should be from a security perspective inspires confidence. Sharing that research shows that open source is so much more than code in a repository; it is the capturing and sharing of expert opinions in a way that benefits the community at large rather than the interests of a few. - -## [Quarkus—what's next for the lightweight Java framework?][8] - -> What does “container first” mean? What are the strengths of Quarkus? What’s new in 0.20.0? What features can we look forward to in the future? When will version 1.0.0 be released? We have so many questions about Quarkus and Alex Soto was kind enough to answer them all. _With the release of Quarkus 0.20.0, we decided to get in touch with [JAX London speaker][9], Java Champion, and Director of Developer Experience at Red Hat – Alex Soto. He was kind enough to answer all our questions about the past, present, and future of Quarkus. It seems like we have a lot to look forward to with this exciting lightweight framework!_ - -**The impact**: Someone clever recently told me that Quarkus has the potential to make Java "possibly one of the best languages for containers and serverless environments". That made me do a double-take; while Java is one of the most popular programming languages ([if not the most popular][10]) it probably isn't the first one that jumps to mind when you hear the words "cloud native." Quarkus could extend and grow the value of the skills held by a huge chunk of the developer workforce by allowing them to apply their experience to new challenges. - -## [Julia programming language: Users reveal what they love and hate the most about it][11] - -> The most popular technical feature of Julia is speed and performance followed by ease of use, while the most popular non-technical feature is that users don't have to pay to use it.  -> -> Users also report their biggest gripes with the language. The top one is that packages for add-on features aren't sufficiently mature or well maintained to meet their needs.  - -**The impact:** The Julia 1.0 release has been out for a year now, and has seen impressive growth in a bunch of relevant metrics (downloads, GitHub stars, etc). It is a language aimed squarely at some of our biggest current and future challenges ("scientific computing, machine learning, data mining, large-scale linear algebra, distributed and parallel computing") so finding out how it's users are feeling about it gives an indirect read on how well those challenges are being addressed. - -## [Multi-cloud by the numbers: 11 interesting stats][12] - -> If you boil our recent dive into [interesting stats about Kubernetes][13] down to its bottom line, it looks something like this: [Kubernetes'][14] popularity will continue for the foreseeable future. 
-> -> Spoiler alert: When you dig up recent numbers about [multi-cloud][15] usage, they tell a similar story: Adoption is soaring. -> -> This congruity makes sense. Perhaps not every organization will use Kubernetes to manage its multi-cloud and/or [hybrid cloud][16] infrastructure, but the two increasingly go hand-in-hand. Even when they don’t, they both reflect a general shift toward more distributed and heterogeneous IT environments, as well as [cloud-native development][17] and other overlapping trends. - -**The impact**: Another explanation of increasing adoption of "multi-cloud strategies" is they retroactively legitimize decisions taken in separate parts of an organization without consultation as "strategic." "Wait, so you bought hours from who? And you bought hours from the other one? Why wasn't that in the meeting minutes? I guess we're a multi-cloud company now!" Of course I'm joking, I'm sure most big companies are a lot better coordinated than that, right? - -_I hope you enjoyed this list of what stood out to me from last week and come back next Monday for more open source community, market, and industry trends._ - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/19/8/cloud-native-java-and-more - -作者:[Tim Hildred][a] -选题:[lujun9972][b] -译者:[laingke](https://github.com/laingke) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/thildred -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data) -[2]: https://www.vrk.dev/2019/07/11/why-is-modern-web-development-so-complicated-a-long-yet-hasty-explanation-part-1/ -[3]: https://www.cncf.io/blog/2019/08/06/open-sourcing-the-kubernetes-security-audit/ -[4]: https://coredns.io/2018/03/15/cure53-security-assessment/ -[5]: https://github.com/envoyproxy/envoy/blob/master/docs/SECURITY_AUDIT.pdf -[6]: https://cure53.de/pentest-report_prometheus.pdf -[7]: https://www.cncf.io/projects/ -[8]: https://jaxenter.com/quarkus-whats-next-for-the-lightweight-java-framework-160793.html -[9]: https://jaxlondon.com/cloud-kubernetes-serverless/java-particle-acceleration-using-quarkus/ -[10]: https://opensource.com/article/19/8/possibly%20one%20of%20the%20best%20languages%20for%20containers%20and%20serverless%20environments. 
-[11]: https://www.zdnet.com/article/julia-programming-language-users-reveal-what-they-love-and-hate-the-most-about-it/#ftag=RSSbaffb68 -[12]: https://enterprisersproject.com/article/2019/8/multi-cloud-statistics -[13]: https://enterprisersproject.com/article/2019/7/kubernetes-statistics-13-compelling -[14]: https://www.redhat.com/en/topics/containers/what-is-kubernetes?intcmp=701f2000000tjyaAAA -[15]: https://www.redhat.com/en/topics/cloud-computing/what-is-multicloud?intcmp=701f2000000tjyaAAA -[16]: https://enterprisersproject.com/hybrid-cloud -[17]: https://enterprisersproject.com/article/2018/10/how-explain-cloud-native-apps-plain-english diff --git a/sources/tech/20190822 A Raspberry Pi Based Open Source Tablet is in Making and it-s Called CutiePi.md b/sources/tech/20190822 A Raspberry Pi Based Open Source Tablet is in Making and it-s Called CutiePi.md deleted file mode 100644 index ab12c95ddd..0000000000 --- a/sources/tech/20190822 A Raspberry Pi Based Open Source Tablet is in Making and it-s Called CutiePi.md +++ /dev/null @@ -1,82 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (A Raspberry Pi Based Open Source Tablet is in Making and it’s Called CutiePi) -[#]: via: (https://itsfoss.com/cutiepi-open-source-tab/) -[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) - -A Raspberry Pi Based Open Source Tablet is in Making and it’s Called CutiePi -====== - -CutiePie is an 8-inch open-source tablet built on top of Raspberry Pi. For now, it is just a working prototype which they announced on [Raspberry Pi forums][1]. - -In this article, you’ll get to know more details on the specifications, price, and availability of CutiePi. - -They have made the Tablet using a custom-designed Compute Model (CM3) carrier board. The [official website][2] mentions the purpose of a custom CM3 carrier board as: - -> A custom CM3/CM3+ carrier board designed for portable use, with enhanced power management and Li-Po battery level monitoring features; works with selected HDMI or MIPI DSI displays. - -So, this is what makes the Tablet thin enough while being portable. - -### CutiePi Specifications - -![CutiePi Board][3] - -I was surprised to know that it rocks an 8-inch IPS LCD display – which is a good thing for starters. However, you won’t be getting a true HD screen because the resolution is 1280×800 – as mentioned officially. - -It is also planned to come packed with Li-Po 4800 mAh battery (the prototype had a 5000 mAh battery). Well, for a Tablet, that isn’t bad at all. - -Connectivity options include the support for Wi-Fi and Bluetooth 4.0. In addition to this, a USB Type-A, 6x GPIO pins, and a microSD card slot is present. - -![CutiePi Specifications][4] - -The hardware is officially compatible with [Raspbian OS][5] and the user interface is built with [Qt][6] for a fast and intuitive user experience. Also, along with the in-built apps, it is expected to support Raspbian PIXEL apps via XWayland. - -### CutiePi Source Code - -You can second-guess the pricing of this tablet by analyzing the bill for the materials used. CutiePi follows a 100% open-source hardware design for this project. So, if you are curious, you can check out their GitHub page for more information on the hardware design and stuff. - -[CutiePi on GitHub][7] - -### CutiePi Pricing, Release Date & Availability - -CutiePi plans to work on [DVT][8] batch PCBs in August (this month). And, they target to launch the final product by the end of 2019. 
Officially, they expect to launch it at around $150-$250. This is just an approximate range and should be taken with a pinch of salt.

Obviously, the price will be a major factor in order to make it a success – even though the product itself sounds promising.

**Wrapping Up**

CutiePi is not the first project to use a [single board computer like Raspberry Pi][9] to make a tablet. We have the upcoming [PineTab][10] which is based on the Pine64 single board computer. Pine also has a laptop called [Pinebook][11] based on the same.

Judging by the prototype – it is indeed a product that we can expect to work. However, the pre-installed apps and the apps that it will support may turn the tide. Also, considering the price estimate – it sounds promising.

What do you think about it? Let us know your thoughts in the comments below.

--------------------------------------------------------------------------------

via: https://itsfoss.com/cutiepi-open-source-tab/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://www.raspberrypi.org/forums/viewtopic.php?t=247380
[2]: https://cutiepi.io/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/cutiepi-board.png?ssl=1
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/cutiepi-specifications.jpg?ssl=1
[5]: https://itsfoss.com/raspberry-pi-os-desktop/
[6]: https://en.wikipedia.org/wiki/Qt_%28software%29
[7]: https://github.com/cutiepi-io/cutiepi-board
[8]: https://en.wikipedia.org/wiki/Engineering_validation_test#Design_verification_test
[9]: https://itsfoss.com/raspberry-pi-alternatives/
[10]: https://www.pine64.org/pinetab/
[11]: https://itsfoss.com/pinebook-pro/

diff --git a/sources/tech/20190822 How to move a file in Linux.md b/sources/tech/20190822 How to move a file in Linux.md
deleted file mode 100644
index 10ffcc75e5..0000000000
--- a/sources/tech/20190822 How to move a file in Linux.md
+++ /dev/null
@@ -1,286 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to move a file in Linux)
[#]: via: (https://opensource.com/article/19/8/moving-files-linux-depth)
[#]: author: (Seth Kenlon https://opensource.com/users/sethhttps://opensource.com/users/doni08521059)

How to move a file in Linux
======
Whether you're new to moving files in Linux or experienced, you'll learn something in this in-depth writeup.
![Files in a folder][1]

Moving files in Linux can seem relatively straightforward, but there are more options available than most realize. This article teaches beginners how to move files in the GUI and on the command line, but also explains what's actually happening under the hood, and addresses command line options that many experienced users have rarely explored.

### Moving what?

Before delving into moving files, it's worth taking a closer look at what actually happens when _moving_ file system objects. When a file is created, it is assigned to an _inode_, which is a fixed point in a file system that's used for data storage.
You can see what inode a file maps to with the [ls][2] command:

```
$ ls --inode example.txt
7344977 example.txt
```

When you move a file, you don't actually move the data from one inode to another, you only assign the file object a new name or file path. In fact, a file retains its permissions when it's moved, because moving a file doesn't change or re-create it.

File and directory inodes never imply inheritance and are dictated by the filesystem itself. Inode assignment is sequential based on when the file was created and is entirely independent of how you organize your computer. A file "inside" a directory may have a lower inode number than its parent directory, or a higher one. For example:

```
$ mkdir foo
$ mv example.txt foo
$ ls --inode
7476865 foo
$ ls --inode foo
7344977 example.txt
```

When moving a file from one hard drive to another, however, the inode is very likely to change. This happens because the new data has to be written onto a new filesystem. For this reason, in Linux the act of moving and renaming files is literally the same action. Whether you move a file to another directory or to the same directory with a new name, both actions are performed by the same underlying program.

This article focuses on moving files from one directory to another.

### Moving with a mouse

The GUI is a friendly and, to most people, familiar layer of abstraction on top of a complex collection of binary data. It's also the first and most intuitive way to move files on Linux. If you're used to the desktop experience, in a generic sense, then you probably already know how to move files around your hard drive. In the GNOME desktop, for instance, the default action when dragging and dropping a file from one window to another is to move the file rather than to copy it, so it's probably one of the most intuitive actions on the desktop:

![Moving a file in GNOME.][3]

The Dolphin file manager in the KDE Plasma desktop defaults to prompting the user for an action. Holding the **Shift** key while dragging a file forces a move action:

![Moving a file in KDE.][4]

### Moving on the command line

The shell command intended for moving files on Linux, BSD, Illumos, Solaris, and MacOS is **mv**. A simple command with a predictable syntax, **mv <source> <destination>** moves a source file to the specified destination, each defined by either an [absolute][5] or [relative][6] file path. As mentioned before, **mv** is such a common command for [POSIX][7] users that many of its additional modifiers are generally unknown, so this article brings a few useful modifiers to your attention whether you are new or experienced.

Not all **mv** commands were written by the same people, though, so you may have GNU **mv**, BSD **mv**, or Sun **mv**, depending on your operating system. Command options differ from implementation to implementation (BSD **mv** has no long options at all) so refer to your **mv** man page to see what's supported, or install your preferred version instead (that's the luxury of open source).

#### Moving a file

To move a file from one folder to another with **mv**, remember the syntax **mv <source> <destination>**. For instance, to move the file **example.txt** into your **Documents** directory:

```
$ touch example.txt
$ mv example.txt ~/Documents
$ ls ~/Documents
example.txt
```

Just like when you move a file by dragging and dropping it onto a folder icon, this command doesn't replace **Documents** with **example.txt**. Instead, **mv** detects that **Documents** is a folder, and places the **example.txt** file into it.

You can also, conveniently, rename the file as you move it:

```
$ touch example.txt
$ mv example.txt ~/Documents/foo.txt
$ ls ~/Documents
foo.txt
```

That's important because it enables you to rename a file even when you don't want to move it to another location, like so:

```
$ touch example.txt
$ mv example.txt foo2.txt
$ ls foo2.txt
```

#### Moving a directory

The **mv** command doesn't differentiate a file from a directory the way [**cp**][8] does. You can move a directory or a file with the same syntax:

```
$ touch file.txt
$ mkdir foo_directory
$ mv file.txt foo_directory
$ mv foo_directory ~/Documents
```

#### Moving a file safely

If you copy a file to a directory where a file of the same name already exists, the **mv** command replaces the destination file with the one you are moving, by default. This behavior is called _clobbering_, and sometimes it's exactly what you intend. Other times, it is not.

Some distributions _alias_ (or you might [write your own][9]) **mv** to **mv --interactive**, which prompts you for confirmation. Some do not. Either way, you can use the **\--interactive** or **-i** option to ensure that **mv** asks for confirmation in the event that two files of the same name are in conflict:

```
$ mv --interactive example.txt ~/Documents
mv: overwrite '~/Documents/example.txt'?
```

If you do not want to manually intervene, use **\--no-clobber** or **-n** instead. This flag silently rejects the move action in the event of conflict. In this example, a file named **example.txt** already exists in **~/Documents**, so it doesn't get moved from the current directory as instructed:

```
$ mv --no-clobber example.txt ~/Documents
$ ls
example.txt
```

#### Moving with backups

If you're using GNU **mv**, there are backup options offering another means of safe moving. To create a backup of any conflicting destination file, use the **-b** option:

```
$ mv -b example.txt ~/Documents
$ ls ~/Documents
example.txt    example.txt~
```

This flag ensures that **mv** completes the move action, but also protects any pre-existing file in the destination location.

Another GNU backup option is **\--backup**, which takes an argument defining how the backup file is named:

  * **existing**: If numbered backups already exist in the destination, then a numbered backup is created. Otherwise, the **simple** scheme is used.
  * **none**: Does not create a backup even if **\--backup** is set. This option is useful to override a **mv** alias that sets the backup option.
  * **numbered**: Appends the destination file with a number.
  * **simple**: Appends the destination file with a **~**, which can conveniently be hidden from your daily view with the **\--ignore-backups** option for **[ls][2]**.

For example:

```
$ mv --backup=numbered example.txt ~/Documents
$ ls ~/Documents
-rw-rw-r--. 1 seth users 128 Aug  1 17:23 example.txt
-rw-rw-r--. 1 seth users 128 Aug  1 17:20 example.txt.~1~
```

A default backup scheme can be set with the environment variable VERSION_CONTROL. You can set environment variables in your **~/.bashrc** file or dynamically before your command:

```
$ VERSION_CONTROL=numbered mv --backup example.txt ~/Documents
$ ls ~/Documents
-rw-rw-r--. 1 seth users 128 Aug  1 17:23 example.txt
-rw-rw-r--. 1 seth users 128 Aug  1 17:20 example.txt.~1~
-rw-rw-r--. 1 seth users 128 Aug  1 17:22 example.txt.~2~
```

The **\--backup** option still respects the **\--interactive** or **-i** option, so it still prompts you to overwrite the destination file, even though it creates a backup before doing so:

```
$ mv --backup=numbered example.txt ~/Documents
mv: overwrite '~/Documents/example.txt'? y
$ ls ~/Documents
-rw-rw-r--. 1 seth users 128 Aug  1 17:24 example.txt
-rw-rw-r--. 1 seth users 128 Aug  1 17:20 example.txt.~1~
-rw-rw-r--. 1 seth users 128 Aug  1 17:22 example.txt.~2~
-rw-rw-r--. 1 seth users 128 Aug  1 17:23 example.txt.~3~
```

You can override **-i** with the **\--force** or **-f** option:

```
$ mv --backup=numbered --force example.txt ~/Documents
$ ls ~/Documents
-rw-rw-r--. 1 seth users 128 Aug  1 17:26 example.txt
-rw-rw-r--. 1 seth users 128 Aug  1 17:20 example.txt.~1~
-rw-rw-r--. 1 seth users 128 Aug  1 17:22 example.txt.~2~
-rw-rw-r--. 1 seth users 128 Aug  1 17:24 example.txt.~3~
-rw-rw-r--. 1 seth users 128 Aug  1 17:25 example.txt.~4~
```

The **\--backup** option is not available in BSD **mv**.

#### Moving many files at once

When moving multiple files, **mv** treats the final directory named on the command line as the destination:

```
$ mv foo bar baz ~/Documents
$ ls ~/Documents
foo   bar   baz
```

If the final item is not a directory, **mv** returns an error:

```
$ mv foo bar baz
mv: target 'baz' is not a directory
```

The syntax of GNU **mv** is fairly flexible. If you are unable to provide the **mv** command with the destination as the final argument, use the **\--target-directory** or **-t** option:

```
$ mv --target-directory=~/Documents foo bar baz
$ ls ~/Documents
foo   bar   baz
```

This is especially useful when constructing **mv** commands from the output of some other command, such as the **find** command, **xargs**, or [GNU Parallel][10].

#### Moving based on mtime

With GNU **mv**, you can define a move action based on whether the file being moved is newer than the destination file it would replace. This is possible with the **\--update** or **-u** option, and is not available in BSD **mv**:

```
$ ls -l ~/Documents
-rw-rw-r--. 1 seth users 128 Aug  1 17:32 example.txt
$ ls -l
-rw-rw-r--. 1 seth users 128 Aug  1 17:42 example.txt
$ mv --update example.txt ~/Documents
$ ls -l ~/Documents
-rw-rw-r--. 1 seth users 128 Aug  1 17:42 example.txt
$ ls -l
```

This result is exclusively based on the files' modification time, not on a diff of the two files, so use it with care. It's easy to fool **mv** with a mere **touch** command:

```
$ cat example.txt
one
$ cat ~/Documents/example.txt
one
two
$ touch example.txt
$ mv --update example.txt ~/Documents
$ cat ~/Documents/example.txt
one
```

Obviously, this isn't the most intelligent update function available, but it offers basic protection against overwriting recent data.

### Moving

There are more ways to move data than just the **mv** command, but as the default program for the job, **mv** is a good universal option. Now that you know what options you have available, you can use **mv** smarter than ever before.
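As a final worked example, here is a hedged sketch of the **find**/**xargs** combination mentioned above, sweeping every `.log` file from the current directory into a hypothetical archive folder with GNU **mv**:

```
$ mkdir -p ~/Documents/logs
$ # -print0 and -0 keep file names containing spaces intact;
$ # -t names the destination once, so the file list can arrive from the pipe
$ find . -maxdepth 1 -type f -name '*.log' -print0 | xargs -0 mv -t ~/Documents/logs
```

Because **-t** takes the destination up front, this pattern works no matter how many file names **find** emits.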
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/19/8/moving-files-linux-depth - -作者:[Seth Kenlon][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/sethhttps://opensource.com/users/doni08521059 -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/files_documents_paper_folder.png?itok=eIJWac15 (Files in a folder) -[2]: https://opensource.com/article/19/7/master-ls-command -[3]: https://opensource.com/sites/default/files/uploads/gnome-mv.jpg (Moving a file in GNOME.) -[4]: https://opensource.com/sites/default/files/uploads/kde-mv.jpg (Moving a file in KDE.) -[5]: https://opensource.com/article/19/7/understanding-file-paths-and-how-use-them -[6]: https://opensource.com/article/19/7/navigating-filesystem-relative-paths -[7]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains -[8]: https://opensource.com/article/19/7/copying-files-linux -[9]: https://opensource.com/article/19/7/bash-aliases -[10]: https://opensource.com/article/18/5/gnu-parallel diff --git a/sources/tech/20190823 The lifecycle of Linux kernel testing.md b/sources/tech/20190823 The lifecycle of Linux kernel testing.md deleted file mode 100644 index 65bab32536..0000000000 --- a/sources/tech/20190823 The lifecycle of Linux kernel testing.md +++ /dev/null @@ -1,78 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (The lifecycle of Linux kernel testing) -[#]: via: (https://opensource.com/article/19/8/linux-kernel-testing) -[#]: author: (Major Hayden https://opensource.com/users/mhaydenhttps://opensource.com/users/mhaydenhttps://opensource.com/users/marcobravohttps://opensource.com/users/mhayden) - -The lifecycle of Linux kernel testing -====== -The Continuous Kernel Integration (CKI) project aims to prevent bugs -from entering the Linux kernel. -![arrows cycle symbol for failing faster][1] - -In _[Continuous integration testing for the Linux kernel][2]_, I wrote about the [Continuous Kernel Integration][3] (CKI) project and its mission to change how kernel developers and maintainers work. This article is a deep dive into some of the more technical aspects of the project and how all the pieces fit together. - -### It all starts with a change - -Every exciting feature, improvement, and bug in the kernel starts with a change proposed by a developer. These changes appear on myriad mailing lists for different kernel repositories. Some repositories focus on certain subsystems in the kernel, such as storage or networking, while others focus on broad aspects of the kernel. The CKI project springs into action when developers propose a change, or patchset, to the kernel or when a maintainer makes changes in the repository itself. - -The CKI project maintains triggers that monitor these patchsets and take action. Software projects such as [Patchwork][4] make this process much easier by collating multi-patch contributions into a single patch series. This series travels as a unit through the CKI system and allows for publishing a single report on the series. - -Other triggers watch the repository for changes. This occurs when kernel maintainers merge patchsets, revert patches, or create new tags. 
Testing these critical changes ensures that developers always have a solid baseline to use as a foundation for writing new patches. - -All of these changes make their way into a GitLab pipeline and pass through multiple stages and multiple systems. - -### Prepare the build - -Everything starts with getting the source ready for compile time. This requires cloning the repository, applying the patchset proposed by the developer, and generating a kernel config file. These config files have thousands of options that turn features on or off, and config files differ incredibly between different system architectures. For example, a fairly standard x86_64 system may have a ton of options available in its config file, but an s390x system (IBM zSeries mainframes) likely has much fewer options. Some options might make sense on that mainframe but they have no purpose on a consumer laptop. - -The kernel moves forward and transforms into a source artifact. The artifact contains the entire repository, with patches applied, and all kernel configuration files required for compiling. Upstream kernels move on as a tarball, while Red Hat kernels become a source RPM for the next step. - -### Piles of compiles - -Compiling the kernel turns the source code into something that a computer can boot up and use. The config file describes what to build, scripts in the kernel describe how to build it, and tools on the system (like GCC and glibc) do the building. This process takes a while to complete, but the CKI project needs it done quickly for four architectures: aarch64 (64-bit ARM), ppc64le (POWER), s390x (IBM zSeries), and x86_64. It's important that we compile kernels quickly so that we keep our backlog manageable and developers receive prompt feedback. - -Adding more CPUs provides plenty of speed improvements, but every system has its limits. The CKI project compiles kernels within containers in an OpenShift deployment; although OpenShift allows for tons of scalability, the deployment still has a finite number of CPUs available. The CKI team allocates 20 virtual CPUs for compiling each kernel. With four architectures involved, this balloons to 80 CPUs! - -Another speed increase comes from a tool called [ccache][5]. Kernel development moves quickly, but a large amount of the kernel remains unchanged even between multiple releases. The ccache tool caches the built objects (small pieces of the overall kernel) during the compile on a disk. When another kernel compile comes along later, ccache looks for unchanged pieces of the kernel that it saw before. Ccache pulls the cached object from the disk and reuses it. This allows for faster compiles and lower overall CPU usage. Kernels that took 20 minutes to compile now race to the finish line in less than a few minutes. - -### Testing time - -The kernel moves onto its last step: testing on real hardware. Each kernel boots up on its native architecture using Beaker, and myriad tests begin poking it to find problems. Some tests look for simple problems, such as issues with containers or error messages on boot-up. Other tests dive deep into various kernel subsystems to find regressions in system calls, memory allocation, and threading. - -Large testing frameworks, such as the [Linux Test Project][6] (LTP), contain tons of tests that look for troublesome regressions in the kernel. Some of these regressions could roll back critical security fixes, and there are tests to ensure those improvements remain in the kernel. - -One critical step remains when tests finish: reporting. 
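To make that compile stage a little more concrete before turning to the report itself, a ccache-backed build can be wired up roughly like this. This is a hedged sketch, not the CKI project's actual tooling, and the ccache masquerade path varies by distribution:

```
$ export PATH="/usr/lib64/ccache:$PATH"   # let "gcc" resolve to the ccache wrapper first
$ make -j20 bzImage modules               # 20 jobs, matching the CPU count mentioned above
$ ccache --show-stats                     # inspect cache hits after a second build
```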
Kernel developers and maintainers need a concise report that tells them exactly what worked, what did not work, and how to get more information. Each CKI report contains details about the source code used, the compile parameters, and the testing output. That information helps developers know where to begin looking to fix an issue. Also, it helps maintainers know when a patchset needs to be held for another look before a bug makes its way into their kernel repository. - -### Summary - -The CKI project team strives to prevent bugs from entering the Linux kernel by providing timely, automated feedback to kernel developers and maintainers. This work makes their job easier by finding the low-hanging fruit that leads to kernel bugs, security issues, and performance problems. - -* * * - -_To learn more, you can attend the [CKI Hackfest][7] on September 12-13 following the [Linux Plumbers Conference][8] September 9-11 in Lisbon, Portugal._ - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/19/8/linux-kernel-testing - -作者:[Major Hayden][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/mhaydenhttps://opensource.com/users/mhaydenhttps://opensource.com/users/marcobravohttps://opensource.com/users/mhayden -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_progress_cycle_momentum_arrow.png?itok=q-ZFa_Eh (arrows cycle symbol for failing faster) -[2]: https://opensource.com/article/19/6/continuous-kernel-integration-linux -[3]: https://cki-project.org/ -[4]: https://github.com/getpatchwork/patchwork -[5]: https://ccache.dev/ -[6]: https://linux-test-project.github.io -[7]: https://cki-project.org/posts/hackfest-agenda/ -[8]: https://www.linuxplumbersconf.org/ diff --git a/sources/tech/20190824 How to compile a Linux kernel in the 21st century.md b/sources/tech/20190824 How to compile a Linux kernel in the 21st century.md deleted file mode 100644 index 5821826706..0000000000 --- a/sources/tech/20190824 How to compile a Linux kernel in the 21st century.md +++ /dev/null @@ -1,225 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (luming) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (How to compile a Linux kernel in the 21st century) -[#]: via: (https://opensource.com/article/19/8/linux-kernel-21st-century) -[#]: author: (Seth Kenlon https://opensource.com/users/sethhttps://opensource.com/users/greg-p) - -How to compile a Linux kernel in the 21st century -====== -You don't have to compile the Linux kernel but you can with this quick -tutorial. -![and old computer and a new computer, representing migration to new software or hardware][1] - -In computing, a kernel is the low-level software that handles communication with hardware and general system coordination. Aside from some initial firmware built into your computer's motherboard, when you start your computer, the kernel is what provides awareness that it has a hard drive and a screen and a keyboard and a network card. It's also the kernel's job to ensure equal time (more or less) is given to each component so that your graphics and audio and filesystem and network all run smoothly, even though they're running concurrently. 
- -The quest for hardware support, however, is ongoing, because the more hardware that gets released, the more stuff a kernel must adopt into its code to make the hardware work as expected. It's difficult to get accurate numbers, but the Linux kernel is certainly among the top kernels for hardware compatibility. Linux operates innumerable computers and mobile phones, embedded system on a chip (SoC) boards for hobbyist and industrial uses, RAID cards, sewing machines, and much more. - -Back in the 20th century (and even in the early years of the 21st), it was not unreasonable for a Linux user to expect that when they purchased a very new piece of hardware, they would need to download the very latest kernel source code, compile it, and install it so that they could get support for the device. Lately, though, you'd be hard-pressed to find a Linux user who compiles their own kernel except for fun or profit by way of highly specialized custom hardware. It generally isn't required these days to compile the Linux kernel yourself. - -Here are the reasons why, plus a quick tutorial on how to compile a kernel when you need to. - -### Update your existing kernel - -Whether you've got a brand new laptop featuring a fancy new graphics card or WiFi chipset or you've just brought home a new printer, your operating system (called either GNU+Linux or just Linux, which is also the name of the kernel) needs a driver to open communication channels to that new component (graphics card, WiFi chip, printer, or whatever). It can be deceptive, sometimes, when you plug in a new device and your computer _appears_ to acknowledge it. But don't let that fool you. Sometimes that _is_ all you need, but other times your OS is just using generic protocols to probe a device that's attached. - -For instance, your computer may be able to identify your new network printer, but sometimes that's only because the network card in the printer is programmed to identify itself to a network so it can gain a DHCP address. It doesn't necessarily mean that your computer knows what instructions to send to the printer to produce a page of printed text. In fact, you might argue that the computer doesn't even really "know" that the device is a printer; it may only display that there's a device on the network at a specific address and the device identifies itself with the series of characters _p-r-i-n-t-e-r_. The conventions of human language are meaningless to a computer; what it needs is a driver. - -Kernel developers, hardware manufacturers, support technicians, and hobbyists all know that new hardware is constantly being released. Many of them contribute drivers, submitted straight to the kernel development team for inclusion in Linux. For example, Nvidia graphic card drivers are often written into the [Nouveau][2] kernel module and, because Nvidia cards are common, the code is usually included in any kernel distributed for general use (such as the kernel you get when you download [Fedora][3] or [Ubuntu][4]. Where Nvidia is less common, for instance in embedded systems, the Nouveau module is usually excluded. Similar modules exist for many other devices: printers benefit from [Foomatic][5] and [CUPS][6], wireless cards have [b43, ath9k, wl][7] modules, and so on. - -Distributions tend to include as much as they reasonably can in their Linux kernel builds because they want you to be able to attach a device and start using it immediately, with no driver installation required. 
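You can see this modular approach on any running system; here is a quick, illustrative sketch (module names depend entirely on your hardware):

```
$ lsmod | head                          # modules the kernel currently has loaded
$ modinfo nouveau                       # metadata for one module, if present on your system
$ ls /lib/modules/"$(uname -r)"/kernel  # drivers shipped alongside the running kernel
```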
For the most part, that's what happens, especially now that many device vendors are now funding Linux driver development for the hardware they sell and submitting those drivers directly to the kernel team for general distribution. - -Sometimes, however, you're running a kernel you installed six months ago with an exciting new device that just hit the stores a week ago. In that case, your kernel may not have a driver for that device. The good news is that very often, a driver for that device may exist in a very recent edition of the kernel, meaning that all you have to do is update what you're running. - -Generally, this is done through a package manager. For instance, on RHEL, CentOS, and Fedora: - - -``` -`$ sudo dnf update kernel` -``` - -On Debian and Ubuntu, first get your current kernel version: - - -``` -$ uname -r -4.4.186 -``` - -Search for newer versions: - - -``` -$ sudo apt update -$ sudo apt search linux-image -``` - -Install the latest version you find. In this example, the latest available is 5.2.4: - - -``` -`$ sudo apt install linux-image-5.2.4` -``` - -After a kernel upgrade, you must [reboot][8] (unless you're using kpatch or kgraft). Then, if the device driver you need is in the latest kernel, your hardware will work as expected. - -### Install a kernel module - -Sometimes a distribution doesn't expect that its users often use a device (or at least not enough that the device driver needs to be in the Linux kernel). Linux takes a modular approach to drivers, so distributions can ship separate driver packages that can be loaded by the kernel even though the driver isn't compiled into the kernel itself. This is useful, although it can get complicated when a driver isn't included in a kernel but is needed during boot, or when the kernel gets updated out from under the modular driver. The first problem is solved with an **initrd** (initial RAM disk) and is out of scope for this article, and the second is solved by a system called **kmod**. - -The kmod system ensures that when a kernel is updated, all modular drivers installed alongside it are also updated. If you install a driver manually, you miss out on the automation that kmod provides, so you should opt for a kmod package whenever it is available. For instance, while Nvidia drivers are built into the kernel as the Nouveau driver, the official Nvidia drivers are distributed only by Nvidia. You can install Nvidia-branded drivers manually by going to the website, downloading the **.run** file, and running the shell script it provides, but you must repeat that same process after you install a new kernel, because nothing tells your package manager that you manually installed a kernel driver. Because Nvidia drives your graphics, updating the Nvidia driver manually usually means you have to perform the update from a terminal, because you have no graphics without a functional graphics driver. - -![Nvidia configuration application][9] - -However, if you install the Nvidia drivers as a kmod package, updating your kernel also updates your Nvidia driver. On Fedora and related: - - -``` -`$ sudo dnf install kmod-nvidia` -``` - -On Debian and related: - - -``` -$ sudo apt update -$ sudo apt install nvidia-kernel-common nvidia-kernel-dkms nvidia-glx nvidia-xconfig nvidia-settings nvidia-vdpau-driver vdpau-va-driver -``` - -This is only an example, but if you're installing Nvidia drivers in real life, you must also blacklist the Nouveau driver. See your distribution's documentation for the best steps. 
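For a flavor of what that blacklisting involves, here is a hedged sketch; prefer your distribution's documented procedure, since file names and initramfs tools differ:

```
$ echo "blacklist nouveau" | sudo tee /etc/modprobe.d/blacklist-nouveau.conf
$ sudo dracut --force            # rebuild the initramfs on Fedora/RHEL-style systems
$ sudo update-initramfs -u       # ...or do the equivalent on Debian/Ubuntu
```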
- -### Download and install a driver - -Not everything is included in the kernel, and not everything _else_ is available as a kernel module. In some cases, you have to download a special driver written and bundled by the hardware vendor, and other times, you have the driver but not the frontend to configure driver options. - -Two common examples are HP printers and [Wacom][10] illustration tablets. If you get an HP printer, you probably have generic drivers that can communicate with your printer. You might even be able to print. But the generic driver may not be able to provide specialized options specific to your model, such as double-sided printing, collation, paper tray choices, and so on. [HPLIP][11] (the HP Linux Imaging and Printing system) provides options to manage jobs, adjust printing options, select paper trays where applicable, and so on. - -HPLIP is usually bundled in package managers; just search for "hplip." - -![HPLIP in action][12] - -Similarly, drivers for Wacom tablets, the leading illustration tablet for digital artists, are usually included in your kernel, but options to fine-tune settings, such as pressure sensitivity and button functionality, are only accessible through the graphical control panel included by default with GNOME but installable as the extra package **kde-config-tablet** on KDE. - -There are likely some edge cases that don't have drivers in the kernel but offer kmod versions of driver modules as an RPM or DEB file that you can download and install through your package manager. - -### Patching and compiling your own kernel - -Even in the futuristic utopia that is the 21st century, there are vendors that don't understand open source enough to provide installable drivers. Sometimes, such companies provide source code for a driver but expect you to download the code, patch a kernel, compile, and install manually. - -This kind of distribution model has the same disadvantages as installing packaged drivers outside of the kmod system: an update to your kernel breaks the driver because it must be re-integrated into your kernel manually each time the kernel is swapped out for a new one. - -This has become rare, happily, because the Linux kernel team has done an excellent job of pleading loudly for companies to communicate with them, and because companies are finally accepting that open source isn't going away any time soon. But there are still novelty or hyper-specialized devices out there that provide only kernel patches. - -Officially, there are distribution-specific preferences for how you should compile a kernel to keep your package manager involved in upgrading such a vital part of your system. There are too many package managers to cover each; as an example, here is what happens behind the scenes when you use tools like **rpmdev** on Fedora or **build-essential** and **devscripts** on Debian. - -First, as usual, find out which kernel version you're running: - - -``` -`$ uname -r` -``` - -In most cases, it's safe to upgrade your kernel if you haven't already. After all, it's possible that your problem will be solved in the latest release. If you tried that and it didn't work, then you should download the source code of the kernel you are running. Most distributions provide a special command for that, but to do it manually, you can find the source code on [kernel.org][13]. - -You also must download whatever patch you need for your kernel. Sometimes, these patches are specific to the kernel release, so choose carefully. 
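If you fetch the source by hand, note that kernel.org now serves releases as **.tar.xz** archives even though the examples below assume bzip2 naming. A hedged sketch of the download step:

```
$ wget https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.2.4.tar.xz
$ unxz --keep linux-5.2.4.tar.xz   # leaves linux-5.2.4.tar ready to extract
```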
It's traditional, or at least it was back when people regularly compiled their own kernels, to place the source code and patches in **/usr/src/linux**.

Unarchive the kernel source and the patch files as needed (note that decompressing the tarball still leaves a **.tar** archive to extract):

```
$ cd /usr/src/linux
$ bzip2 --decompress linux-5.2.4.tar.bz2
$ tar -xf linux-5.2.4.tar
$ cd linux-5.2.4
$ bzip2 -d ../patch*bz2
```

The patch file may have instructions on how to do the patch, but often they're designed to be executed from the top level of your tree:

```
$ patch -p1 < patch*example.patch
```

Once the kernel code is patched, you can use your old configuration to prepare the patched kernel config:

```
$ make oldconfig
```

The **make oldconfig** command serves two purposes: it inherits your current kernel's configuration, and it allows you to configure new options introduced by the patch.

You may need to run the **make menuconfig** command, which launches an ncurses-based, menu-driven list of possible options for your new kernel. The menu can be overwhelming, but since it starts with your old config as a foundation, you can look through the menu and disable modules for hardware that you know you do not have and do not anticipate needing. Alternately, if you know that you have some piece of hardware and see it is not included in your current configuration, you may choose to build it, either as a module or directly into the kernel. In theory, this isn't necessary because presumably, your current kernel was treating you well but for the missing patch, and probably the patch you applied has activated all the necessary options required by whatever device prompted you to patch your kernel in the first place.

Next, compile the kernel and its modules:

```
$ make bzImage
$ make modules
```

This leaves you with a file named **vmlinuz**, which is a compressed version of your bootable kernel. Save your old version and place the new one in your **/boot** directory. On modern source trees, the compressed image for both 32- and 64-bit x86 lives under **arch/x86/boot/**, and using **cp** avoids the classic trap of a redirection (as in **sudo cat ... > /boot/vmlinuz**) being performed without root privileges:

```
$ sudo mv /boot/vmlinuz /boot/vmlinuz.nopatch
$ sudo cp arch/x86/boot/bzImage /boot/vmlinuz
$ sudo mv /boot/System.map /boot/System.map.stock
$ sudo cp System.map /boot/System.map
```

So far, you've patched and built a kernel and its modules, you've installed the kernel, but you haven't installed any modules. That's the final build step:

```
$ sudo make modules_install
```

The new kernel is in place, and its modules are installed.

The final step is to update your bootloader so that the part of your computer that loads before the kernel knows where to find Linux. The GRUB bootloader makes this process relatively simple; note that without **-o** it only prints the new configuration to standard output, and the output path varies by distribution (Debian-based systems use **grub-mkconfig** and **/boot/grub/grub.cfg**):

```
$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```

### Real-world compiling

Of course, nobody runs those manual commands now. Instead, refer to your distribution for instructions on modifying a kernel using the developer toolset that your distribution's maintainers use. This toolset will probably create a new installable package with all the patches incorporated, alert the package manager of the upgrade, and update your bootloader for you.

### Kernels

Operating systems and kernels are mysterious things, but it doesn't take much to understand what components they're built upon. The next time you get a piece of tech that appears to not work on Linux, take a deep breath, investigate driver availability, and go with the path of least resistance. Linux is easier than ever—and that includes the kernel.
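A practical postscript to that "real-world compiling" note: the kernel tree itself can also produce installable packages directly, which similarly keeps your package manager in the loop. A hedged sketch using the tree's own packaging targets:

```
$ # run from the top of a configured kernel source tree
$ make -j$(nproc) binrpm-pkg    # build binary RPMs on RPM-based distributions
$ make -j$(nproc) bindeb-pkg    # build .deb packages on Debian/Ubuntu
```

The resulting packages can then be installed, tracked, and removed with the package manager like any other.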
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/19/8/linux-kernel-21st-century - -作者:[Seth Kenlon][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/sethhttps://opensource.com/users/greg-p -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/migration_innovation_computer_software.png?itok=VCFLtd0q (and old computer and a new computer, representing migration to new software or hardware) -[2]: https://nouveau.freedesktop.org/wiki/ -[3]: http://fedoraproject.org -[4]: http://ubuntu.com -[5]: https://wiki.linuxfoundation.org/openprinting/database/foomatic -[6]: https://www.cups.org/ -[7]: https://wireless.wiki.kernel.org/en/users/drivers -[8]: https://opensource.com/article/19/7/reboot-linux -[9]: https://opensource.com/sites/default/files/uploads/nvidia.jpg (Nvidia configuration application) -[10]: https://linuxwacom.github.io -[11]: https://developers.hp.com/hp-linux-imaging-and-printing -[12]: https://opensource.com/sites/default/files/uploads/hplip.jpg (HPLIP in action) -[13]: https://www.kernel.org/ diff --git a/sources/tech/20190826 Introduction to the Linux chown command.md b/sources/tech/20190826 Introduction to the Linux chown command.md deleted file mode 100644 index cb79c6fec6..0000000000 --- a/sources/tech/20190826 Introduction to the Linux chown command.md +++ /dev/null @@ -1,138 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Introduction to the Linux chown command) -[#]: via: (https://opensource.com/article/19/8/linux-chown-command) -[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdosshttps://opensource.com/users/sethhttps://opensource.com/users/alanfdosshttps://opensource.com/users/sethhttps://opensource.com/users/greg-phttps://opensource.com/users/alanfdoss) - -Introduction to the Linux chown command -====== -Learn how to change a file or directory's ownership with chown. -![Hand putting a Linux file folder into a drawer][1] - -Every file and directory on a Linux system is owned by someone, and the owner has complete control to change or delete the files they own. In addition to having an owning _user_, a file has an owning _group_. - -You can view the ownership of a file using the **ls -l** command: - - -``` -[pablo@workstation Downloads]$ ls -l -total 2454732 --rw-r--r--. 1 pablo pablo 1934753792 Jul 25 18:49 Fedora-Workstation-Live-x86_64-30-1.2.iso -``` - -The third and fourth columns of the output are the owning user and group, which together are referred to as _ownership_. Both are **pablo** for the ISO file above. - -The ownership settings, set by the [**chmod** command][2], control who is allowed to perform read, write, or execute actions. You can change ownership (one or both) with the **chown** command. - -It is often necessary to change ownership. Files and directories can live a long time on a system, but users can come and go. Ownership may also need to change when files and directories are moved around the system or from one system to another. - -The ownership of the files and directories in my home directory are my user and my primary group, represented in the form **user:group**. 
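If you are ever unsure which user and groups you are working with, the **id** command reports them (the output here is illustrative; yours will differ):

```
$ id
uid=1000(alan) gid=1000(alan) groups=1000(alan),10(wheel)
```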
Suppose Susan is managing the Delta group, which needs to edit a file called **mynotes**. You can use the **chown** command to change the user to **susan** and the group to **delta**: - - -``` -$ chown susan:delta mynotes -ls -l --rw-rw-r--. 1 susan delta 0 Aug  1 12:04 mynotes -``` - -Once the Delta group is finished with the file, it can be assigned back to me: - - -``` -$ chown alan mynotes -$ ls -l mynotes --rw-rw-r--. 1 alan delta 0 Aug  1 12:04 mynotes -``` - -Both the user and group can be assigned back to me by appending a colon (**:**) to the user: - - -``` -$ chown alan: mynotes -$ ls -l mynotes --rw-rw-r--. 1 alan alan 0 Aug  1 12:04 mynotes -``` - -By prepending the group with a colon, you can change just the group. Now members of the **gamma** group can edit the file: - - -``` -$ chown :gamma mynotes -$ ls -l --rw-rw-r--. 1 alan gamma 0 Aug  1 12:04 mynotes -``` - -A few additional arguments to chown can be useful at both the command line and in a script. Just like many other Linux commands, chown has a recursive argument ****(**-R**) which tells the command to descend into the directory to operate on all files inside. Without the **-R** flag, you change permissions of the folder only, leaving the files inside it unchanged. In this example, assume that the intent is to change permissions of a directory and all its contents. Here I have added the **-v** (verbose) argument so that chown reports what it is doing: - - -``` -$ ls -l . conf -.: -drwxrwxr-x 2 alan alan 4096 Aug  5 15:33 conf - -conf: --rw-rw-r-- 1 alan alan 0 Aug  5 15:33 conf.xml - -$ chown -vR susan:delta conf -changed ownership of 'conf/conf.xml' from alan:alan to  susan:delta -changed ownership of 'conf' from alan:alan to  susan:delta -``` - -Depending on your role, you may need to use **sudo** to change ownership of a file. - -You can use a reference file (**\--reference=RFILE**) when changing the ownership of files to match a certain configuration or when you don't know the ownership (as might be the case when running a script). You can duplicate the user and group of another file (**RFILE**, known as a reference file), for example, to undo the changes made above. Recall that a dot (**.**) refers to the present working directory. - - -``` -`$ chown -vR --reference=. conf` -``` - -### Report Changes - -Most commands have arguments for controlling their output. The most common is **-v** (-**-verbose**) to enable verbose, but chown also has a **-c** (**\--changes**) argument to instruct chown to only report when a change is made. Chown still reports other things, such as when an operation is not permitted. - -The argument **-f** (**\--silent**, **\--quiet**) is used to suppress most error messages. I will use **-f** and the **-c** in the next section so that only actual changes are shown. - -### Preserve Root - -The root (**/**) of the Linux filesystem should be treated with great respect. If a mistake is made at this level, the consequences could leave a system completely useless. Particularly when you are running a recursive command that makes any kind of change or worse: deletions. The chown command has an argument that can be used to protect and preserve the root. The argument is **\--preserve-root**. If this argument is used with a recursive chown command on the root, nothing is done and a message appears instead. 
```
$ chown -cfR --preserve-root alan /
chown: it is dangerous to operate recursively on '/'
chown: use --no-preserve-root to override this failsafe
```

The option has no effect when not used in conjunction with **\--recursive**. However, if the command is run by the root user, the ownership of **/** itself will be changed, but not that of other files or directories within it.

```
$ chown -c --preserve-root alan /
chown: changing ownership of '/': Operation not permitted
[root@localhost /]# chown -c --preserve-root alan /
changed ownership of '/' from root to alan
```

### Ownership is security

File and directory ownership is part of good information security, so it's important to occasionally check and maintain file ownership to prevent unwanted access. The chown command is one of the most common and important in the set of Linux security commands.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/linux-chown-command

作者:[Alan Formy-Duval][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC (Hand putting a Linux file folder into a drawer)
[2]: https://opensource.com/article/19/8/introduction-linux-chmod-command

diff --git a/sources/tech/20190830 How to Install Linux on Intel NUC.md b/sources/tech/20190830 How to Install Linux on Intel NUC.md
index 86d73c5ddc..c5d4726a40 100644
--- a/sources/tech/20190830 How to Install Linux on Intel NUC.md
+++ b/sources/tech/20190830 How to Install Linux on Intel NUC.md
@@ -1,5 +1,5 @@
 [#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (amwps290)
 [#]: reviewer: ( )
 [#]: publisher: ( )
 [#]: url: ( )

diff --git a/sources/tech/20190901 Best Linux Distributions For Everyone in 2019.md b/sources/tech/20190901 Best Linux Distributions For Everyone in 2019.md
deleted file mode 100644
index 6959b35d60..0000000000
--- a/sources/tech/20190901 Best Linux Distributions For Everyone in 2019.md
+++ /dev/null
@@ -1,392 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Best Linux Distributions For Everyone in 2019)
[#]: via: (https://itsfoss.com/best-linux-distributions/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

Best Linux Distributions For Everyone in 2019
======

_**Brief: Which is the best Linux distribution? There is no definite answer to that question. This is why we have compiled this list of the best Linux distros in various categories.**_

There are a lot of Linux distributions. I can't even think of coming up with an exact number because you would find loads of Linux distros that differ from one another in one way or the other.

Some of them just turn out to be a clone of one another while some of them tend to be unique. So, it's kind of a mess – but that is the beauty of Linux.
- -Fret not, even though there are thousands of distributions around, in this article, I have compiled a list of the best Linux distros available right now. Of course, the list can be subjective. But, here, we try to categorize the distros – so there’s something for everyone. - - * [Best distribution for new Linux users][1] - * [Best Linux distros for servers][2] - * [Best Linux distros that can run on old computers][3] - * [Best distributions for advanced Linux users][4] - * [Best evergreen Linux distributions][5] - - - -**Note:** _The list is in no particular order of ranking._ - -### Best Linux Distributions for Beginners - -In this category, we aim to list the distros which are easy-to-use out of the box. You do not need to dig deeper, you can just start using it right away after installation without needing to know any commands or tips. - -#### Ubuntu - -![][6] - -Ubuntu is undoubtedly one of the most popular Linux distributions. You can even find it pre-installed on a lot of laptops available. - -The user interface is easy to get comfortable with. If you play around, you can easily customize the look of it as per your requirements. In either case, you can opt to install a theme as well. You can learn more about [how to install themes in Ubuntu][7] to get started. - -In addition to what it offers, you will find a huge online community of Ubuntu users. So, if you face an issue – head to any of the forums (or a subreddit) to ask for help. If you are looking for direct solutions in no time, you should check out our coverage on [Ubuntu][8] (where we have a lot of tutorials and recommendations for Ubuntu). - -[Ubuntu][9] - -#### Linux Mint - -![Linux Mint 19 Cinnamon desktop screenshot][10] - -Linux Mint Cinnamon is another popular Linux distribution among beginners. The default Cinnamon desktop resembles Windows XP and this is why many users opted for it when Windows XP was discontinued. - -Linux Mint is based on Ubuntu and thus it has all the applications available for Ubuntu. The simplicity and ease of use is why it has become a prominent choice for new Linux users. - -[Linux Mint][11] - -#### elementary OS - -![][12] - -elementary OS is one of the most beautiful Linux distros I’ve ever used. The UI resembles that of Mac OS – so if you have already used a Mac-powered system, it’s easy to get comfortable with. - -This distribution is based on Ubuntu and focuses to deliver a user-friendly Linux environment which looks as pretty as possible while keeping the performance in mind. If you choose to install elementary OS, a list of [11 things to do after installing elementary OS][13] should come in handy. - -[elementary OS][14] - -#### MX Linux - -![][15] - -MX Linux came in the limelight almost a year ago. Now (at the time of publishing this), it is the most popular Linux distro on [DistroWatch.com][16]. If you haven’t used it yet – you will be surprised when you get to use it. - -Unlike Ubuntu, MX Linux is a [rolling release distribution][17] based on Debian with Xfce as its desktop environment. In addition to its impeccable stability – it comes packed with a lot of GUI tools which makes it easier for any user comfortable with Windows/Mac originally. - -Also, the package manager is perfectly tailored to facilitate one-click installations. You can even search for [Flatpak][18] packages and install it in no time (Flathub is available by default in the package manager as one of the sources). 
- -[MX Linux][19] - -#### Zorin OS - -![][20] - -Zorin OS is yet another Ubuntu-based distribution which happens to be one of the most good-looking and intuitive OS for desktop. Especially, after [Zorin OS 15 release][21] – I would definitely recommend it for users without any Linux background. A lot of GUI-based applications comes baked in as well. - -You can also install it on older PCs – however, make sure to choose the “Lite” edition. In addition, you have “Core”, “Education” & “Ultimate” editions. You can choose to install the Core edition for free – but if you want to support the developers and help improve Zorin, consider getting the Ultimate edition. - -Zorin OS was started by two teenagers based in Ireland. You may [read their story here][22]. - -[Zorin OS][23] - -**Other Options** - -[Deepin][24] and other flavors of Ubuntu (like Kubuntu, Xubuntu) could also be some of the preferred choices for beginners. You can take a look at them if you want to explore more options. - -If you want a challenge, you can indeed try Fedora over Ubuntu – but make sure to follow our article on [Ubuntu vs Fedora][25] to make a better decision from the desktop point of view. - -### Best Linux Server Distributions - -For servers, the choice of a Linux distro comes down to stability, performance, and enterprise support. If you are just experimenting, you can try any distro you want. - -But, if you are installing it for a web server or anything vital – you should take a look at some of our recommendations. - -#### Ubuntu Server - -Depending on where you want it, Ubuntu provides different options for your server. If you are looking for an optimized solution to run on AWS, Azure, Google Cloud Platform, etc., [Ubuntu Cloud][26] is the way to go. - -In either case, you can opt for Ubuntu Server packages and have it installed on your server. Nevertheless, Ubuntu is the most popular Linux distro when it comes to deployment on the cloud (judging by the numbers – [source 1][27], [source 2][28]). - -Do note that we recommend you to go for the LTS editions – unless you have specific requirements. - -[Ubuntu Server][29] - -#### Red Hat Enterprise Linux - -Red Hat Enterprise Linux is a top-notch Linux platform for businesses and organizations. If we go by the numbers, Red Hat may not be the most popular choice for servers. But, there’s a significant group of enterprise users who rely on RHEL (like Lenovo). - -Technically, Fedora and Red Hat are related. Whatever Red Hat supports – gets tested on Fedora before making it available for RHEL. I’m not an expert on server distributions for tailored requirements – so you should definitely check out their [official documentation][30] to know if it’s suitable for you. - -[Red Hat Enterprise Linux][31] - -#### SUSE Linux Enterprise Server - -![Suse Linux Enterprise \(Image: Softpedia\)][32] - -Fret not, do not confuse this with OpenSUSE. Everything comes under a common brand “SUSE” – but OpenSUSE is an open-source distro targeted and yet, maintained by the community. - -SUSE Linux Enterprise Server is one of the most popular solutions for cloud-based servers. You will have to opt for a subscription in order to get priority support and assistance to manage your open source solution. - -[SUSE Linux Enterprise Server][33] - -#### CentOS - -![][34] - -As I mentioned, you need a subscription for RHEL. But, CentOS is more like a community edition of RHEL because it has been derived from the sources of Red Hat Enterprise Linux. And, it is open source and free as well. 
Even though the number of hosting providers using CentOS is significantly less compared to the last few years – it still is a great choice. - -CentOS may not come loaded with the latest software packages – but it is considered as one of the most stable distros. You should find CentOS images on a variety of cloud platforms. If you don’t, you can always opt for the self-hosted image that CentOS provides. - -[CentOS][35] - -**Other Options** - -You can also try exploring [Fedora Server][36] or [Debian][37] as alternatives to some of the distros mentioned above. - -![Coding][38] - -![Coding][38] - -If you are into programming and software development check out the list of - -[Best Linux Distributions for Programmers][39] - -![Hacking][40] - -![Hacking][40] - -Interested in learning and practicing cyber security? Check out the list of - -[Best Linux Distribution for Hacking and Pen-Testing][41] - -### Best Linux Distributions for Older Computers - -If you have an old PC laying around or if you didn’t really need to upgrade your system – you can still try some of the best Linux distros available. - -We’ve already talked about some of the [best lightweight Linux distributions][42] in details. Here, we shall only mention what really stands out from that list (and some new additions). - -#### Puppy Linux - -![][43] - -Puppy Linux is literally one of the smallest distribution there is. When I first started to explore Linux, my friend recommended me to experiment with Puppy Linux because it can run on older hardware configurations with ease. - -It’s worth checking it out if you want a snappy experience on your good old PC. Over the years, the user experience has improved along with the addition of several new useful features. - -[Puppy Linux][44] - -#### Solus Budgie - -![][45] - -After a recent major release – [Solus 4 Fortitude][46] – it is an impressive lightweight desktop OS. You can opt for desktop environments like GNOME or MATE. However, Solus Budgie happens to be one of my favorites as a full-fledged Linux distro for beginners while being light on system resources. - -[Solus][47] - -#### Bodhi - -![][48] - -Bodhi Linux is built on top of Ubuntu. However, unlike Ubuntu – it does run well on older configurations. - -The main highlight of this distro is its [Moksha Desktop][49] (which is a continuation of Enlightenment 17 desktop). The user experience is intuitive and screaming fast. Even though it’s not something for my personal use – you should give it a try on your older systems. - -[Bodhi Linux][50] - -#### antiX - -![][51] - -antiX – which is also partially responsible for MX Linux is a lightweight Linux distribution tailored for old and new computers. The UI isn’t impressive – but it works as expected. - -It is based on Debian and can be utilized as a live CD distribution without needing to install it. antiX also provides live bootloaders. In contrast to some other distros, you get to save the settings so that you don’t lose it with every reboot. Not just that, you can also save changes to the root directory with its “Live persistence” feature. - -So, if you are looking for a live-USB distro to provide a snappy user experience on old hardware – antiX is the way to go. - -[antiX][52] - -#### Sparky Linux - -![][53] - -Sparky Linux is based on Debian which turns out to be a perfect Linux distro for low-end systems. Along with a screaming fast experience, Sparky Linux offers several special editions (or varieties) for different users. 
- -For example, it provides a stable release (with varieties) and rolling releases specific to a group of users. Sparky Linux GameOver edition is quite popular for gamers because it includes a bunch of pre-installed games. You can check out our list of [best Linux Gaming distributions][54] – if you also want to play games on your system. - -#### Other Options - -You can also try [Linux Lite][55], [Lubuntu][56], and [Peppermint][57] as some of the lightweight Linux distributions. - -### Best Linux Distro for Advanced Users - -Once you get comfortable with the variety of package managers and commands to help troubleshoot your way to resolve any issue, you can start exploring Linux distros which are tailored for Advanced users only. - -Of course, if you are a professional – you will have a set of specific requirements. However, if you’ve been using Linux for a while as a common user – these distros are worth checking out. - -#### Arch Linux - -![Image Credits: Samiuvic / Deviantart][58] - -Arch Linux is itself a simple yet powerful distribution with a huge learning curve. Unlike others, you won’t have everything pre-installed in one go. You have to configure the system and add packages as needed. - -Also, when installing Arch Linux, you will have to follow a set of commands (without GUI). To know more about it, you can follow our guide on [how to install Arch Linux][59]. If you are going to install it, you should also know about some of the [essential things to do after installing Arch Linux][60]. It will help you get a jump start. - -In addition to all the versatility and simplicity, it’s worth mentioning that the community behind Arch Linux is very active. So, if you run into a problem, you don’t have to worry. - -[Arch Linux][61] - -#### Gentoo - -![Gentoo Linux][62] - -If you know how to compile the source code, Gentoo Linux is a must-try for you. It is also a lightweight distribution – however, you need to have the required technical knowledge to make it work. - -Of course, the [official handbook][63] provides a lot of information that you need to know. But, if you aren’t sure what you’re doing – it will take a lot of your time to figure out how to make the most out of it. - -[Gentoo Linux][64] - -#### Slackware - -![Image Credits: thundercr0w / Deviantart][65] - -Slackware is one of the oldest Linux distribution that still matters. If you are willing to compile or develop software to set up a perfect environment for yourself – Slackware is the way to go. - -In case you’re curious about some of the oldest Linux distros, we have an article on the [earliest linux distributions][66] – go check it out. - -Even though the number of users/developers utilizing it has significantly decreased, it is still a fantastic choice for advanced users. Also, with the recent news of [Slackware getting a Patreon page][67] – we hope that Slackware continues to exist as one of the best Linux distros out there. - -[Slackware][68] - -### Best Multi-purpose Linux Distribution - -There are certain Linux distros which you can utilize as a beginner-friendly / advanced OS for both desktops and servers. Hence, we thought of compiling a separate section for such distributions. - -If you don’t agree with us (or have suggestions to add here) – feel free to let us know in the comments. Here’s what we think could come in handy for every user: - -#### Fedora - -![][69] - -Fedora offers two separate editions – one for desktops/laptops and the other for servers (Fedora Workstation and Fedora Server respectively). 
So, if you are looking for a snappy desktop OS – with a potential learning curve while being user-friendly – Fedora is an option. In either case, if you are looking for a Linux OS for your server – that's a good choice as well.

[Fedora][70]

#### Manjaro

![][71]

Manjaro is based on [Arch Linux][72]. Fret not, while Arch Linux is tailored for advanced users, Manjaro makes it easy for a newcomer. It is a simple and beginner-friendly Linux distro. The user interface is good enough and offers a bunch of useful GUI applications built-in.

You get options to choose a [desktop environment][73] for Manjaro while downloading it. Personally, I like the KDE desktop for Manjaro.

[Manjaro Linux][74]

#### Debian

![Image Credits: mrneilypops / Deviantart][75]

Well, Ubuntu's based on Debian – so it must be a darn good distribution itself. Debian is an ideal choice for both desktop and servers.

It may not be the most beginner-friendly OS – but you can easily get started by going through the [official documentation][76]. The recent release of [Debian 10 Buster][77] introduces a lot of changes and necessary improvements. So, you must give it a try!

**Wrapping Up**

Overall, these are the best Linux distributions that we recommend you try. Yes, there are a lot of other Linux distributions that deserve a mention – but to each their own; depending on personal preferences, the choices will be subjective.

But, we also have a separate list of distros for [Windows users][78], [hackers and pen testers][41], [gamers][54], [programmers][39], and [privacy buffs.][79] So, if that interests you – do go through them.

If you think we missed one of your favorites that deserves a place as one of the best Linux distributions out there, let us know your thoughts in the comments below and we'll keep the article up-to-date accordingly.
- --------------------------------------------------------------------------------- - -via: https://itsfoss.com/best-linux-distributions/ - -作者:[Ankush Das][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/ankush/ -[b]: https://github.com/lujun9972 -[1]: tmp.NoRXbIWHkg#for-beginners -[2]: tmp.NoRXbIWHkg#for-servers -[3]: tmp.NoRXbIWHkg#for-old-computers -[4]: tmp.NoRXbIWHkg#for-advanced-users -[5]: tmp.NoRXbIWHkg#general-purpose -[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/install-google-chrome-ubuntu-10.jpg?ssl=1 -[7]: https://itsfoss.com/install-themes-ubuntu/ -[8]: https://itsfoss.com/tag/ubuntu/ -[9]: https://ubuntu.com/download/desktop -[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/linux-Mint-19-desktop.jpg?ssl=1 -[11]: https://www.linuxmint.com/ -[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/elementary-os-juno-feat.jpg?ssl=1 -[13]: https://itsfoss.com/things-to-do-after-installing-elementary-os-5-juno/ -[14]: https://elementary.io/ -[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/mx-linux.jpg?ssl=1 -[16]: https://distrowatch.com/ -[17]: https://en.wikipedia.org/wiki/Linux_distribution#Rolling_distributions -[18]: https://flatpak.org/ -[19]: https://mxlinux.org/ -[20]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/zorin-os-15.png?ssl=1 -[21]: https://itsfoss.com/zorin-os-15-release/ -[22]: https://itsfoss.com/zorin-os-interview/ -[23]: https://zorinos.com/ -[24]: https://www.deepin.org/en/ -[25]: https://itsfoss.com/ubuntu-vs-fedora/ -[26]: https://ubuntu.com/download/cloud -[27]: https://w3techs.com/technologies/details/os-linux/all/all -[28]: https://thecloudmarket.com/stats -[29]: https://ubuntu.com/download/server -[30]: https://developers.redhat.com/products/rhel/docs-and-apis -[31]: https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux -[32]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/SUSE-Linux-Enterprise.jpg?ssl=1 -[33]: https://www.suse.com/products/server/ -[34]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/centos.png?ssl=1 -[35]: https://www.centos.org/ -[36]: https://getfedora.org/en/server/ -[37]: https://www.debian.org/distrib/ -[38]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/coding.jpg?ssl=1 -[39]: https://itsfoss.com/best-linux-distributions-progammers/ -[40]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/hacking.jpg?ssl=1 -[41]: https://itsfoss.com/linux-hacking-penetration-testing/ -[42]: https://itsfoss.com/lightweight-linux-beginners/ -[43]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/puppy-linux-bionic.jpg?ssl=1 -[44]: http://puppylinux.com/ -[45]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/solus-4-featured.jpg?resize=800%2C450&ssl=1 -[46]: https://itsfoss.com/solus-4-release/ -[47]: https://getsol.us/home/ -[48]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/bodhi-linux.png?fit=800%2C436&ssl=1 -[49]: http://www.bodhilinux.com/moksha-desktop/ -[50]: http://www.bodhilinux.com/ -[51]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2017/10/antix-linux-screenshot.jpg?ssl=1 -[52]: https://antixlinux.com/ -[53]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/sparky-linux.jpg?ssl=1 -[54]: https://itsfoss.com/linux-gaming-distributions/ -[55]: https://www.linuxliteos.com/ -[56]: https://lubuntu.me/ 
[57]: https://peppermintos.com/
[58]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/arch_linux_screenshot.jpg?ssl=1
[59]: https://itsfoss.com/install-arch-linux/
[60]: https://itsfoss.com/things-to-do-after-installing-arch-linux/
[61]: https://www.archlinux.org
[62]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/gentoo-linux.png?ssl=1
[63]: https://wiki.gentoo.org/wiki/Handbook:Main_Page
[64]: https://www.gentoo.org
[65]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/slackware-screenshot.jpg?ssl=1
[66]: https://itsfoss.com/earliest-linux-distros/
[67]: https://distrowatch.com/dwres.php?resource=showheadline&story=8743
[68]: http://www.slackware.com/
[69]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/fedora-overview.png?ssl=1
[70]: https://getfedora.org/
[71]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/manjaro-gnome.jpg?ssl=1
[72]: https://www.archlinux.org/
[73]: https://itsfoss.com/glossary/desktop-environment/
[74]: https://manjaro.org/
[75]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/debian-screenshot.png?ssl=1
[76]: https://www.debian.org/releases/stable/installmanual
[77]: https://itsfoss.com/debian-10-buster/
[78]: https://itsfoss.com/windows-like-linux-distributions/
[79]: https://itsfoss.com/privacy-focused-linux-distributions/

diff --git a/sources/tech/20190901 Different Ways to Configure Static IP Address in RHEL 8.md b/sources/tech/20190901 Different Ways to Configure Static IP Address in RHEL 8.md
deleted file mode 100644
index 9c354e84e1..0000000000
--- a/sources/tech/20190901 Different Ways to Configure Static IP Address in RHEL 8.md
+++ /dev/null
@@ -1,250 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (heguangzhi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Different Ways to Configure Static IP Address in RHEL 8)
[#]: via: (https://www.linuxtechi.com/configure-static-ip-address-rhel8/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)

Different Ways to Configure Static IP Address in RHEL 8
====== 

While working on **Linux servers**, assigning a static IP address to a NIC / Ethernet card is one of the common tasks that every Linux engineer does. If one configures the **static IP address** correctly on a Linux server, then it can be accessed remotely over the network. In this article we will demonstrate the different ways to assign a static IP address to a RHEL 8 server's NIC.

[![Configure-Static-IP-RHEL8][1]][2]

Following are the ways to configure a static IP on a NIC:

 * nmcli (command line tool)
 * Network scripts files (ifcfg-*)
 * nmtui (text based user interface)

### Configure Static IP Address using nmcli command line tool

Whenever we install a RHEL 8 server, the command line tool '**nmcli**' is installed automatically. nmcli is used by NetworkManager and allows us to configure a static IP address on Ethernet cards.

Run the below ip addr command to list the Ethernet cards on your RHEL 8 server:

```
[root@linuxtechi ~]# ip addr
```

![ip-addr-command-rhel8][1]

As we can see in the above command output, we have two NICs, enp0s3 & enp0s8. The IP addresses currently assigned to the NICs come from a DHCP server.
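Before changing anything, it can also help to confirm which connection profile NetworkManager has bound to each NIC. A quick sketch (device names and states will differ on your system):

```
[root@linuxtechi ~]# nmcli device status
DEVICE  TYPE      STATE      CONNECTION
enp0s3  ethernet  connected  enp0s3
enp0s8  ethernet  connected  enp0s8
```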
Let's assume we want to assign a static IP address to the first NIC (enp0s3) with the following details:

 * IP address = 192.168.1.4
 * Netmask = 255.255.255.0
 * Gateway= 192.168.1.1
 * DNS = 8.8.8.8

Run the following nmcli commands one after another to configure the static IP.

List the currently active Ethernet cards using the "**nmcli connection**" command:

```
[root@linuxtechi ~]# nmcli connection
NAME UUID TYPE DEVICE
enp0s3 7c1b8444-cb65-440d-9bf6-ea0ad5e60bae ethernet enp0s3
virbr0 3020c41f-6b21-4d80-a1a6-7c1bd5867e6c bridge virbr0
[root@linuxtechi ~]#
```

Use the nmcli command below to assign a static IP to enp0s3.

**Syntax:**

# nmcli connection modify <interface_name> ipv4.addresses <ip/prefix>

**Note:** In short form, we usually replace the 'connection' keyword with 'con' and 'modify' with 'mod' in nmcli commands.

Assign the IPv4 address (192.168.1.4) to the enp0s3 interface:

```
[root@linuxtechi ~]# nmcli con mod enp0s3 ipv4.addresses 192.168.1.4/24
[root@linuxtechi ~]#
```

Set the gateway using the below nmcli command:

```
[root@linuxtechi ~]# nmcli con mod enp0s3 ipv4.gateway 192.168.1.1
[root@linuxtechi ~]#
```

Set the configuration method to manual (from dhcp to static):

```
[root@linuxtechi ~]# nmcli con mod enp0s3 ipv4.method manual
[root@linuxtechi ~]#
```

Set the DNS value to "8.8.8.8":

```
[root@linuxtechi ~]# nmcli con mod enp0s3 ipv4.dns "8.8.8.8"
[root@linuxtechi ~]#
```

To save the above changes and reload the interface, execute the below nmcli command:

```
[root@linuxtechi ~]# nmcli con up enp0s3
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/4)
[root@linuxtechi ~]#
```

The above command output confirms that the interface enp0s3 has been configured successfully. Whatever changes we have made using the above nmcli commands are saved permanently in the file "/etc/sysconfig/network-scripts/ifcfg-enp0s3":

```
[root@linuxtechi ~]# cat /etc/sysconfig/network-scripts/ifcfg-enp0s3
```

![ifcfg-enp0s3-file-rhel8][1]

To confirm whether the IP address has been assigned to the enp0s3 interface, use the below ip command:

```
[root@linuxtechi ~]# ip addr show enp0s3
```

### Configure Static IP Address manually using network-scripts (ifcfg-) files

We can configure a static IP address on an Ethernet card using its network-script or 'ifcfg-' file. Let's assume we want to assign a static IP address to our second Ethernet card 'enp0s8'.
 * IP= 192.168.1.91
 * Netmask / Prefix = 24
 * Gateway=192.168.1.1
 * DNS1=4.2.2.2

Go to the directory "/etc/sysconfig/network-scripts" and look for the file 'ifcfg-enp0s8'; if it does not exist, create it with the following content:

```
[root@linuxtechi ~]# cd /etc/sysconfig/network-scripts/
[root@linuxtechi network-scripts]# vi ifcfg-enp0s8
TYPE="Ethernet"
DEVICE="enp0s8"
BOOTPROTO="static"
ONBOOT="yes"
NAME="enp0s8"
IPADDR="192.168.1.91"
PREFIX="24"
GATEWAY="192.168.1.1"
DNS1="4.2.2.2"
```

Save and exit the file, then restart the NetworkManager service to bring the above changes into effect:

```
[root@linuxtechi network-scripts]# systemctl restart NetworkManager
[root@linuxtechi network-scripts]#
```

Now use the below ip command to verify whether the IP address has been assigned to the NIC:

```
[root@linuxtechi ~]# ip add show enp0s8
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:7c:bb:cb brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.91/24 brd 192.168.1.255 scope global noprefixroute enp0s8
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe7c:bbcb/64 scope link
       valid_lft forever preferred_lft forever
[root@linuxtechi ~]#
```

The above output confirms that the static IP address has been configured successfully on the NIC 'enp0s8'.

### Configure Static IP Address using 'nmtui' utility

nmtui is a text based user interface for controlling NetworkManager. When we execute nmtui, it opens a text based user interface through which we can add, modify and delete connections. Apart from this, nmtui can also be used to set the hostname of your system.

Let's assume we want to assign a static IP address to interface enp0s3 with the following details:

 * IP address = 10.20.0.72
 * Prefix = 24
 * Gateway= 10.20.0.1
 * DNS1=4.2.2.2

Run nmtui and follow the screen instructions; an example is shown below:

```
[root@linuxtechi ~]# nmtui
```

[![nmtui-rhel8][1]][3]

Select the first option '**Edit a connection**' and then choose the interface 'enp0s3':

[![Choose-interface-nmtui-rhel8][1]][4]

Choose Edit and then specify the IP address, Prefix, Gateway and DNS Server IP:

[![set-ip-nmtui-rhel8][1]][5]

Choose OK and hit enter. In the next window choose '**Activate a connection**':

[![Activate-option-nmtui-rhel8][1]][6]

Select **enp0s3**, choose **Deactivate** & hit enter:

[![Deactivate-interface-nmtui-rhel8][1]][7]

Now choose **Activate** & hit enter:

[![Activate-interface-nmtui-rhel8][1]][8]

Select Back and then select Quit:

[![Quit-Option-nmtui-rhel8][1]][9]

Use the below ip command to verify whether the IP address has been assigned to interface enp0s3:

```
[root@linuxtechi ~]# ip add show enp0s3
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:53:39:4d brd ff:ff:ff:ff:ff:ff
    inet 10.20.0.72/24 brd 10.20.0.255 scope global noprefixroute enp0s3
       valid_lft forever preferred_lft forever
    inet6 fe80::421d:5abf:58bd:c47e/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
[root@linuxtechi ~]#
```

The above output confirms that we have successfully assigned the static IP address to interface enp0s3 using the nmtui utility.

That's all from this tutorial; we have covered three different ways to configure an IPv4 address on an Ethernet card on a RHEL 8 system. Please do not hesitate to share feedback and comments in the comments section below.
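One last tip: the separate `nmcli con mod` steps from the first method can also be combined into a single command. A sketch reusing the same example values (adjust the interface name and addresses for your environment):

```
[root@linuxtechi ~]# nmcli con mod enp0s3 ipv4.addresses 192.168.1.4/24 ipv4.gateway 192.168.1.1 ipv4.dns "8.8.8.8" ipv4.method manual
[root@linuxtechi ~]# nmcli con up enp0s3
```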
--------------------------------------------------------------------------------

via: https://www.linuxtechi.com/configure-static-ip-address-rhel8/

作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[heguangzhi](https://github.com/heguangzhi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Configure-Static-IP-RHEL8.jpg
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/09/nmtui-rhel8.jpg
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Choose-interface-nmtui-rhel8.jpg
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/09/set-ip-nmtui-rhel8.jpg
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Activate-option-nmtui-rhel8.jpg
[7]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Deactivate-interface-nmtui-rhel8.jpg
[8]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Activate-interface-nmtui-rhel8.jpg
[9]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Quit-Option-nmtui-rhel8.jpg

diff --git a/sources/tech/20190909 How to Setup Multi Node Elastic Stack Cluster on RHEL 8 - CentOS 8.md b/sources/tech/20190909 How to Setup Multi Node Elastic Stack Cluster on RHEL 8 - CentOS 8.md
deleted file mode 100644
index f56e708426..0000000000
--- a/sources/tech/20190909 How to Setup Multi Node Elastic Stack Cluster on RHEL 8 - CentOS 8.md
+++ /dev/null
@@ -1,476 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Setup Multi Node Elastic Stack Cluster on RHEL 8 / CentOS 8)
[#]: via: (https://www.linuxtechi.com/setup-multinode-elastic-stack-cluster-rhel8-centos8/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)

How to Setup Multi Node Elastic Stack Cluster on RHEL 8 / CentOS 8
======

Elastic Stack, widely known as the **ELK stack**, is a group of open source products like **Elasticsearch**, **Logstash** and **Kibana**. Elastic Stack is developed and maintained by the Elastic company. Using Elastic Stack, one can feed a system's logs to Logstash, a data collection engine which accepts logs or data from all sources, normalizes them, and then forwards the logs to Elasticsearch for **analyzing**, **indexing**, **searching** and **storing**. Finally, using Kibana, one can visualize the data and create interactive graphs and diagrams based on user queries.

[![Elastic-Stack-Cluster-RHEL8-CentOS8][1]][2]

In this article we will demonstrate how to set up a multi node Elastic Stack cluster on RHEL 8 / CentOS 8 servers. Following are the details of my Elastic Stack cluster:

### Elasticsearch:

 * Three Servers with Minimal RHEL 8 / CentOS 8
 * IPs & Hostname – 192.168.56.40 (elasticsearch1.linuxtechi.local), 192.168.56.50 (elasticsearch2.linuxtechi.local), 192.168.56.60 (elasticsearch3.linuxtechi.local)

### Logstash:

 * Two Servers with minimal RHEL 8 / CentOS 8
 * IPs & Hostname – 192.168.56.20 (logstash1.linuxtechi.local), 192.168.56.30 (logstash2.linuxtechi.local)
### Kibana:

 * One Server with minimal RHEL 8 / CentOS 8
 * Hostname – kibana.linuxtechi.local
 * IP – 192.168.56.10

### Filebeat:

 * One Server with minimal CentOS 7
 * IP & hostname – 192.168.56.70 (web-server)

Let's start with the Elasticsearch cluster setup.

#### Setup 3 node Elasticsearch cluster

As I have already stated, I have kept three nodes for the Elasticsearch cluster. Log in to each node, set the hostname and configure the yum/dnf repositories.

Use the below hostnamectl command to set the hostname on the respective nodes:

```
[root@linuxtechi ~]# hostnamectl set-hostname "elasticsearch1.linuxtechi.local"
[root@linuxtechi ~]# exec bash
[root@linuxtechi ~]#
[root@linuxtechi ~]# hostnamectl set-hostname "elasticsearch2.linuxtechi.local"
[root@linuxtechi ~]# exec bash
[root@linuxtechi ~]#
[root@linuxtechi ~]# hostnamectl set-hostname "elasticsearch3.linuxtechi.local"
[root@linuxtechi ~]# exec bash
[root@linuxtechi ~]#
```

For a CentOS 8 system we don't need to configure any OS package repository, and for a RHEL 8 server, if you have a valid subscription, subscribe it with Red Hat to get the package repositories. In case you want to configure a local yum/dnf repository for OS packages, refer to the below URL:

[How to Setup Local Yum/DNF Repository on RHEL 8 Server Using DVD or ISO File][3]

Configure the Elasticsearch package repository on all the nodes. Create a file elastic.repo under the /etc/yum.repos.d/ folder with the following content:

```
~]# vi /etc/yum.repos.d/elastic.repo
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
```

Save & exit the file.

Use the below rpm command on all three nodes to import Elastic's public signing key:

```
~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
```

Add the following lines to the /etc/hosts file on all three nodes:

```
192.168.56.40 elasticsearch1.linuxtechi.local
192.168.56.50 elasticsearch2.linuxtechi.local
192.168.56.60 elasticsearch3.linuxtechi.local
```

Install Java on all three nodes using the yum / dnf command:

```
[root@linuxtechi ~]# dnf install java-openjdk -y
[root@linuxtechi ~]# dnf install java-openjdk -y
[root@linuxtechi ~]# dnf install java-openjdk -y
```

Install Elasticsearch using the below dnf command on all three nodes:

```
[root@linuxtechi ~]# dnf install elasticsearch -y
[root@linuxtechi ~]# dnf install elasticsearch -y
[root@linuxtechi ~]# dnf install elasticsearch -y
```

**Note:** In case the OS firewall is enabled and running on each Elasticsearch node, allow the following ports using the below firewall-cmd commands:

```
~]# firewall-cmd --permanent --add-port=9300/tcp
~]# firewall-cmd --permanent --add-port=9200/tcp
~]# firewall-cmd --reload
```

Configure Elasticsearch: edit the file "**/etc/elasticsearch/elasticsearch.yml**" on all three nodes and add the following:

```
~]# vim /etc/elasticsearch/elasticsearch.yml
…………………………………………
cluster.name: opn-cluster
node.name: elasticsearch1.linuxtechi.local
network.host: 192.168.56.40
http.port: 9200
discovery.seed_hosts: ["elasticsearch1.linuxtechi.local", "elasticsearch2.linuxtechi.local", "elasticsearch3.linuxtechi.local"]
cluster.initial_master_nodes: ["elasticsearch1.linuxtechi.local", "elasticsearch2.linuxtechi.local", "elasticsearch3.linuxtechi.local"]
……………………………………………
```

**Note:** On each node, add the correct hostname in the node.name parameter and the IP address in the network.host parameter; the other parameters will remain the same.

Now start and enable the Elasticsearch service on all three nodes using the following systemctl commands:

```
~]# systemctl daemon-reload
~]# systemctl enable elasticsearch.service
~]# systemctl start elasticsearch.service
```

Use the below 'ss' command to verify whether each Elasticsearch node has started listening on port 9200:

```
[root@linuxtechi ~]# ss -tunlp | grep 9200
tcp LISTEN 0 128 [::ffff:192.168.56.40]:9200 *:* users:(("java",pid=2734,fd=256))
[root@linuxtechi ~]#
```

Use the following curl commands to verify the Elasticsearch cluster status:

```
[root@linuxtechi ~]# curl http://elasticsearch1.linuxtechi.local:9200
[root@linuxtechi ~]# curl -X GET http://elasticsearch2.linuxtechi.local:9200/_cluster/health?pretty
```

The output of the above commands would be something like below:

![Elasticsearch-cluster-status-rhel8][1]

The above output confirms that we have successfully created a 3 node Elasticsearch cluster and that the status of the cluster is green.

**Note:** If you want to modify the JVM heap size, edit the file "**/etc/elasticsearch/jvm.options**" and change the below parameters to suit your environment:

 * -Xms1g
 * -Xmx1g

Now let's move to the Logstash nodes.

#### Install and Configure Logstash

Perform the following steps on both Logstash nodes.

Log in to both the nodes and set the hostname using the following hostnamectl command:

```
[root@linuxtechi ~]# hostnamectl set-hostname "logstash1.linuxtechi.local"
[root@linuxtechi ~]# exec bash
[root@linuxtechi ~]#
[root@linuxtechi ~]# hostnamectl set-hostname "logstash2.linuxtechi.local"
[root@linuxtechi ~]# exec bash
[root@linuxtechi ~]#
```

Add the following entries to the /etc/hosts file on both logstash nodes:

```
~]# vi /etc/hosts
192.168.56.40 elasticsearch1.linuxtechi.local
192.168.56.50 elasticsearch2.linuxtechi.local
192.168.56.60 elasticsearch3.linuxtechi.local
```

Save and exit the file.

Configure the Logstash repository on both the nodes. Create a file **logstash.repo** under the folder /etc/yum.repos.d/ with the following content:

```
~]# vi /etc/yum.repos.d/logstash.repo
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
```

Save and exit the file, then run the following rpm command to import the signing key:

```
~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
```

Install Java OpenJDK on both the nodes using the following dnf command:

```
~]# dnf install java-openjdk -y
```

Run the following dnf command on both the nodes to install logstash:

```
[root@linuxtechi ~]# dnf install logstash -y
[root@linuxtechi ~]# dnf install logstash -y
```

Now configure logstash. Perform the below steps on both logstash nodes.

Create a logstash conf file; for that, first copy the sample logstash file under '/etc/logstash/conf.d/':

```
# cd /etc/logstash/
# cp logstash-sample.conf conf.d/logstash.conf
```

Edit the conf file and update the following content:

```
# vi conf.d/logstash.conf

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch1.linuxtechi.local:9200", "http://elasticsearch2.linuxtechi.local:9200", "http://elasticsearch3.linuxtechi.local:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}
```

Under the output section, in the hosts parameter specify the FQDNs of all three Elasticsearch nodes; leave the other parameters as they are.

Allow the logstash port "5044" in the OS firewall using the following firewall-cmd commands:

```
~ # firewall-cmd --permanent --add-port=5044/tcp
~ # firewall-cmd --reload
```

Now start and enable the Logstash service; run the following systemctl commands on both the nodes:

```
~]# systemctl start logstash
~]# systemctl enable logstash
```

Use the below ss command to verify whether the logstash service has started listening on port 5044:

```
[root@linuxtechi ~]# ss -tunlp | grep 5044
tcp LISTEN 0 128 *:5044 *:* users:(("java",pid=2416,fd=96))
[root@linuxtechi ~]#
```

The above output confirms that logstash has been installed and configured successfully. Let's move to the Kibana installation.

#### Install and Configure Kibana

Log in to the Kibana node and set the hostname with the **hostnamectl** command:

```
[root@linuxtechi ~]# hostnamectl set-hostname "kibana.linuxtechi.local"
[root@linuxtechi ~]# exec bash
[root@linuxtechi ~]#
```

Edit the /etc/hosts file and add the following lines:

```
192.168.56.40 elasticsearch1.linuxtechi.local
192.168.56.50 elasticsearch2.linuxtechi.local
192.168.56.60 elasticsearch3.linuxtechi.local
```

Set up the Kibana repository using the following:

```
[root@linuxtechi ~]# vi /etc/yum.repos.d/kibana.repo
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

[root@linuxtechi ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
```

Execute the below yum command to install kibana:

```
[root@linuxtechi ~]# yum install kibana -y
```

Configure Kibana by editing the file "**/etc/kibana/kibana.yml**":

```
[root@linuxtechi ~]# vim /etc/kibana/kibana.yml
…………
server.host: "kibana.linuxtechi.local"
server.name: "kibana.linuxtechi.local"
elasticsearch.hosts: ["http://elasticsearch1.linuxtechi.local:9200", "http://elasticsearch2.linuxtechi.local:9200", "http://elasticsearch3.linuxtechi.local:9200"]
…………
```

Start and enable the kibana service:

```
[root@linuxtechi ~]# systemctl start kibana
[root@linuxtechi ~]# systemctl enable kibana
```

Allow the Kibana port '5601' in the OS firewall:

```
[root@linuxtechi ~]# firewall-cmd --permanent --add-port=5601/tcp
success
[root@linuxtechi ~]# firewall-cmd --reload
success
[root@linuxtechi ~]#
```

Access the Kibana portal / GUI using the following URL:

[![Kibana-Dashboard-rhel8][1]][4]

From the dashboard, we can also check our Elastic Stack cluster status:

[![Stack-Monitoring-Overview-RHEL8][1]][5]

This confirms that we have successfully set up a multi node Elastic Stack cluster on RHEL 8 / CentOS 8.

Now let's send some logs to the logstash nodes via filebeat from another Linux server. In my case I have one CentOS 7 server; I will push all the important logs of this server to logstash via filebeat.

Log in to the CentOS 7 server and install the filebeat package using the following rpm command:

```
[root@linuxtechi ~]# rpm -ivh https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.3.1-x86_64.rpm
Retrieving https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.3.1-x86_64.rpm
Preparing...
################################# [100%] -Updating / installing... - 1:filebeat-7.3.1-1 ################################# [100%] -[root@linuxtechi ~]# -``` - -Edit the /etc/hosts file and add the following entries, - -``` -192.168.56.20 logstash1.linuxtechi.local -192.168.56.30 logstash2.linuxtechi.local -``` - -Now configure the filebeat so that it can send logs to logstash nodes using load balancing technique, edit the file “**/etc/filebeat/filebeat.yml**” and add the following parameters, - -Under the ‘**filebeat.inputs:**’ section change ‘**enabled: false**‘ to ‘**enabled: true**‘ and under the “**paths**” parameter specify the location log files that we can send to logstash, In output Elasticsearch section comment out “**output.elasticsearch**” and **host** parameter. In Logstash output section, remove the comments for “**output.logstash:**” and “**hosts:**” and add the both logstash nodes in hosts parameters and also “**loadbalance: true**”. - -``` -[root@linuxtechi ~]# vi /etc/filebeat/filebeat.yml -………………………. -filebeat.inputs: -- type: log - enabled: true - paths: - - /var/log/messages - - /var/log/dmesg - - /var/log/maillog - - /var/log/boot.log -#output.elasticsearch: - # hosts: ["localhost:9200"] - -output.logstash: - hosts: ["logstash1.linuxtechi.local:5044", "logstash2.linuxtechi.local:5044"] - loadbalance: true -……………………………………… -``` - -Start and enable filebeat service using beneath systemctl commands, - -``` -[root@linuxtechi ~]# systemctl start filebeat -[root@linuxtechi ~]# systemctl enable filebeat -``` - -Now go to Kibana GUI, verify whether new indices are visible or not, - -Choose Management option from Left side bar and then click on Index Management under Elasticsearch, - -[![Elasticsearch-index-management-Kibana][1]][6] - -As we can see above, indices are visible now, let’s create index pattern, - -Click on “Index Patterns” from Kibana Section, it will prompt us to create a new pattern, click on “**Create Index Pattern**” and specify the pattern name as “**filebeat**” - -[![Define-Index-Pattern-Kibana-RHEL8][1]][7] - -Click on Next Step - -Choose “**Timestamp**” as time filter for index pattern and then click on “Create index pattern” - -[![Time-Filter-Index-Pattern-Kibana-RHEL8][1]][8] - -[![filebeat-index-pattern-overview-Kibana][1]][9] - -Now Click on Discover to see real time filebeat index pattern, - -[![Discover-Kibana-REHL8][1]][10] - -This confirms that Filebeat agent has been configured successfully and we are able to see real time logs on Kibana dashboard. - -That’s all from this article, please don’t hesitate to share your feedback and comments in case these steps help you to setup multi node Elastic Stack Cluster on RHEL 8 / CentOS 8 system. 
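As a final command-line check, the same filebeat indices can also be listed from any host that can reach the cluster, using the Elasticsearch _cat API. A quick sketch (index names will vary with the Filebeat version and the date):

```
[root@linuxtechi ~]# curl -X GET "http://elasticsearch1.linuxtechi.local:9200/_cat/indices?v"
```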
--------------------------------------------------------------------------------

via: https://www.linuxtechi.com/setup-multinode-elastic-stack-cluster-rhel8-centos8/

作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Elastic-Stack-Cluster-RHEL8-CentOS8.jpg
[3]: https://www.linuxtechi.com/setup-local-yum-dnf-repository-rhel-8/
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Kibana-Dashboard-rhel8.jpg
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Stack-Monitoring-Overview-RHEL8.jpg
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Elasticsearch-index-management-Kibana.jpg
[7]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Define-Index-Pattern-Kibana-RHEL8.jpg
[8]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Time-Filter-Index-Pattern-Kibana-RHEL8.jpg
[9]: https://www.linuxtechi.com/wp-content/uploads/2019/09/filebeat-index-pattern-overview-Kibana.jpg
[10]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Discover-Kibana-REHL8.jpg

diff --git a/sources/tech/20190909 How to use Terminator on Linux to run multiple terminals in one window.md b/sources/tech/20190909 How to use Terminator on Linux to run multiple terminals in one window.md
deleted file mode 100644
index 6ee0820fdf..0000000000
--- a/sources/tech/20190909 How to use Terminator on Linux to run multiple terminals in one window.md
+++ /dev/null
@@ -1,118 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to use Terminator on Linux to run multiple terminals in one window)
[#]: via: (https://www.networkworld.com/article/3436784/how-to-use-terminator-on-linux-to-run-multiple-terminals-in-one-window.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

How to use Terminator on Linux to run multiple terminals in one window
======
Providing an option for multiple GNOME terminals within a single window frame, Terminator lets you flexibly align your workspace to suit your needs.
Sandra Henry-Stocker

If you've ever wished that you could line up multiple terminal windows and organize them in a single window frame, we may have some good news for you. The Linux **Terminator** can do this for you. No problemo!

### Splitting windows

Terminator will initially open like a terminal window with a single window. Once you mouse click within that window, however, it will bring up an options menu that gives you the flexibility to make changes. You can choose "**split horizontally**" or "**split vertically**" to split the window you are currently positioned in into two smaller windows. In fact, with these menu choices, complete with tiny illustrations of the resultant split (resembling **=** and **||**), you can split windows repeatedly if you like. Of course, if you split the overall window into more than six or nine sections, you might just find that they're too small to be used effectively.
**[ Two-Minute Linux Tips: [Learn how to master a host of Linux commands in these 2-minute video tutorials][1] ]**

Using ASCII art to illustrate the process of splitting windows, you might see something like this:

```
+-------------------+     +-------------------+     +-------------------+
|                   |     |                   |     |                   |
|                   |     |                   |     |                   |
|                   | ==> |-------------------| ==> |-------------------|
|                   |     |                   |     |         |         |
|                   |     |                   |     |         |         |
+-------------------+     +-------------------+     +-------------------+
  Original terminal        Split horizontally         Split vertically
```

Another option for splitting windows is to use control sequences like **Ctrl+Shift+e** to split a window vertically and **Ctrl+Shift+o** ("o" as in "open") to split the screen horizontally.

Once Terminator has split into smaller windows for you, you can click in any window to use it and move from window to window as your work dictates.

### Maximizing a window

If you want to ignore all but one of your windows for a while and focus on just one, you can click in that window and select the "**Maximize**" option from the menu. That window will then grow to claim all of the space. Click again and select "**Restore all terminals**" to return to the multi-window display. **Ctrl+Shift+x** will toggle between the normal and maximized settings.

The window size indicators (e.g., 80x15) on window labels display the number of characters per line and the number of lines per window that each window provides.

### Closing windows

To close any window, bring up the Terminator menu and select **Close**. Other windows will adjust themselves to take up the space until you close the last remaining window.

### Saving your customized setup(s)

Setting up your customized terminator settings as your default once you've split your overall window into multiple segments is quite easy. Select **Preferences** from the pop-up menu and then **Layouts** from the tab along the top of the window that opens. You should then see **New Layout** listed. Just click on the **Save** option at the bottom and **Close** on the bottom right. Terminator will save your settings in **~/.config/terminator/config** and will then use this file every time you use it.

You can also enlarge your overall window by stretching it with your mouse. Again, if you want to retain the changes, select **Preferences** from the menu, **Layouts**, and then **Save** and **Close** again.

### Choosing between saved configurations

If you like, you can set up multiple options for your Terminator window arrangements by maintaining a number of config files, renaming each afterwards (e.g., config-1, config-2) and then moving your choice into place as **~/.config/terminator/config** when you want to use that layout. Here's an example script for doing something like this. It lets you choose between three pre-configured window arrangements:

```
#!/bin/bash

PS3='Terminator options: '
options=("Split 1" "Split 2" "Split 3" "Quit")
select opt in "${options[@]}"
do
    case $opt in
        "Split 1")
            config=config-1
            break
            ;;
        "Split 2")
            config=config-2
            break
            ;;
        "Split 3")
            config=config-3
            break
            ;;
        *)
            exit
            ;;
    esac
done

cd ~/.config/terminator
cp config config-
cp $config config
cd
terminator &
```

You could give the options more meaningful names than "config-1" if that helps.

### Wrap-up

Terminator is a good choice for setting up multiple windows to work on related tasks.
If you've never used it, you'll probably need to install it first with a command such as "sudo apt install terminator" or "sudo yum install -y terminator". - -Hopefully, you will enjoy using Terminator. And, as another character of the same name might say, "I'll be back!" - -Join the Network World communities on [Facebook][2] and [LinkedIn][3] to comment on topics that are top of mind. - --------------------------------------------------------------------------------- - -via: https://www.networkworld.com/article/3436784/how-to-use-terminator-on-linux-to-run-multiple-terminals-in-one-window.html - -作者:[Sandra Henry-Stocker][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/ -[b]: https://github.com/lujun9972 -[1]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua -[2]: https://www.facebook.com/NetworkWorld/ -[3]: https://www.linkedin.com/company/network-world diff --git a/sources/tech/20190911 4 open source cloud security tools.md b/sources/tech/20190911 4 open source cloud security tools.md deleted file mode 100644 index 5a9e6d9d83..0000000000 --- a/sources/tech/20190911 4 open source cloud security tools.md +++ /dev/null @@ -1,90 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (hopefully2333) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (4 open source cloud security tools) -[#]: via: (https://opensource.com/article/19/9/open-source-cloud-security) -[#]: author: (Alison NaylorAaron Rinehart https://opensource.com/users/asnaylorhttps://opensource.com/users/ansilvahttps://opensource.com/users/sethhttps://opensource.com/users/bretthunoldtcomhttps://opensource.com/users/aaronrineharthttps://opensource.com/users/marcobravo) - -4 open source cloud security tools -====== -Find and eliminate vulnerabilities in the data you store in AWS and -GitHub. -![Tools in a cloud][1] - -If your day-to-day as a developer, system administrator, full-stack engineer, or site reliability engineer involves Git pushes, commits, and pulls to and from GitHub and deployments to Amazon Web Services (AWS), security is a persistent concern. Fortunately, open source tools are available to help your team avoid common mistakes that could cost your organization thousands of dollars. - -This article describes four open source tools that can help improve your security practices when you're developing on GitHub and AWS. Also, in the spirit of open source, I've joined forces with three security experts—[Travis McPeak][2], senior cloud security engineer at Netflix; [Rich Monk][3], senior principal information security analyst at Red Hat; and [Alison Naylor][4], principal information security analyst at Red Hat—to contribute to this article. - -We've separated each tool by scenario, but they are not mutually exclusive. - -### 1\. Find sensitive data with Gitrob - -You need to find any potentially sensitive information present in your team's Git repos so you can remove it. It may make sense for you to use tools that are focused towards attacking an application or a system using a red/blue team model, in which an infosec team is divided in two: an attack team (a.k.a. a red team) and a defense team (a.k.a. a blue team). Having a red team to try to penetrate your systems and applications is lots better than waiting for an adversary to do so. 
Your red team might try using [Gitrob][5], a tool that can clone and crawl through your Git repositories looking for credentials and sensitive files. - -Even though tools like Gitrob could be used for harm, the idea here is for your infosec team to use it to find inadvertently disclosed sensitive data that belongs to your organization (such as AWS keypairs or other credentials that were committed by mistake). That way, you can get your repositories fixed and sensitive data expunged—hopefully before an adversary finds them. Remember to remove not only the affected files but [also their history][6]! - -### 2\. Avoid committing sensitive data with git-secrets - -While it's important to find and remove sensitive information in your Git repos, wouldn't it be better to avoid committing those secrets in the first place? Mistakes happen, but you can protect yourself from public embarrassment by using [git-secrets][7]. This tool allows you to set up hooks that scan your commits, commit messages, and merges looking for common patterns for secrets. Choose patterns that match the credentials your team uses, such as AWS access keys and secret keys. If it finds a match, your commit is rejected and a potential crisis averted. - -It's simple to set up git-secrets for your existing repos, and you can apply a global configuration to protect all future repositories you initialize or clone. You can also use git-secrets to scan your repos (and all previous revisions) to search for secrets before making them public. - -### 3\. Create temporary credentials with Key Conjurer - -It's great to have a little extra insurance to prevent inadvertently publishing stored secrets, but maybe we can do even better by not storing credentials at all. Keeping track of credentials generally—including who has access to them, where they are stored, and when they were last rotated—is a hassle. However, programmatically generating temporary credentials can avoid a lot of those issues altogether, neatly side-stepping the issue of storing secrets in Git repos. Enter [Key Conjurer][8], which was created to address this need. For more on why Riot Games created Key Conjurer and how they developed it, read _[Key conjurer: our policy of least privilege][9]_. - -### 4\. Apply least privilege automatically with Repokid - -Anyone who has taken a security 101 course knows that least privilege is the best practice for role-based access control configuration. Sadly, outside school, it becomes prohibitively difficult to apply least-privilege policies manually. An application's access requirements change over time, and developers are too busy to trim back their permissions manually. [Repokid][10] uses data that AWS provides about identity and access management (IAM) use to automatically right-size policies. Repokid helps even the largest organizations apply least privilege automatically in AWS. - -### Tools, not silver bullets - -These tools are by no means silver bullets, but they are just that: tools! So, make sure you work with the rest of your organization to understand the use cases and usage patterns for your cloud services before trying to implement any of these tools or other controls. - -Becoming familiar with the best practices documented by all your cloud and code repository services should be taken seriously as well. The following articles will help you do so. 
- -**For AWS:** - - * [Best practices for managing AWS access keys][11] - * [AWS security audit guidelines][12] - - - -**For GitHub:** - - * [Introducing new ways to keep your code secure][13] - * [GitHub Enterprise security best practices][14] - - - -Last but not least, reach out to your infosec team; they should be able to provide you with ideas, recommendations, and guidelines for your team's success. Always remember: security is everyone's responsibility, not just theirs. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/19/9/open-source-cloud-security - -作者:[Alison NaylorAaron Rinehart][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/asnaylorhttps://opensource.com/users/ansilvahttps://opensource.com/users/sethhttps://opensource.com/users/bretthunoldtcomhttps://opensource.com/users/aaronrineharthttps://opensource.com/users/marcobravo -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cloud_tools_hardware.png?itok=PGjJenqT (Tools in a cloud) -[2]: https://twitter.com/travismcpeak?lang=en -[3]: https://github.com/rmonk -[4]: https://www.linkedin.com/in/alperkins/ -[5]: https://github.com/michenriksen/gitrob -[6]: https://help.github.com/en/articles/removing-sensitive-data-from-a-repository -[7]: https://github.com/awslabs/git-secrets -[8]: https://github.com/RiotGames/key-conjurer -[9]: https://technology.riotgames.com/news/key-conjurer-our-policy-least-privilege -[10]: https://github.com/Netflix/repokid -[11]: https://docs.aws.amazon.com/general/latest/gr/aws-access-keys-best-practices.html -[12]: https://docs.aws.amazon.com/general/latest/gr/aws-security-audit-guide.html -[13]: https://github.blog/2019-05-23-introducing-new-ways-to-keep-your-code-secure/ -[14]: https://github.blog/2015-10-09-github-enterprise-security-best-practices/ diff --git a/sources/tech/20190912 An introduction to Markdown.md b/sources/tech/20190912 An introduction to Markdown.md deleted file mode 100644 index 1e0a990913..0000000000 --- a/sources/tech/20190912 An introduction to Markdown.md +++ /dev/null @@ -1,166 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (qfzy1233) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (An introduction to Markdown) -[#]: via: (https://opensource.com/article/19/9/introduction-markdown) -[#]: author: (Juan Islas https://opensource.com/users/xislashttps://opensource.com/users/mbbroberghttps://opensource.com/users/scottnesbitthttps://opensource.com/users/scottnesbitthttps://opensource.com/users/f%C3%A1bio-emilio-costahttps://opensource.com/users/don-watkinshttps://opensource.com/users/greg-phttps://opensource.com/users/marcobravohttps://opensource.com/users/alanfdosshttps://opensource.com/users/scottnesbitthttps://opensource.com/users/jamesf) - -An introduction to Markdown -====== -Write once and convert your text into multiple formats. Here's how to -get started with Markdown. -![Woman programming][1] - -For a long time, I thought all the files I saw on GitLab and GitHub with an **.md** extension were written in a file type exclusively for developers. That changed a few weeks ago when I started using Markdown. It quickly became the most important tool in my daily work. - -Markdown makes my life easier. 
I just need to add a few symbols to what I'm already writing and, with the help of a browser extension or an open source program, I can transform my text into a variety of commonly used formats such as ODT, email (more on that later), PDF, and EPUB.
-
-### What is Markdown?
-
-A friendly reminder from [Wikipedia][2]:
-
-> Markdown is a lightweight markup language with plain text formatting syntax.
-
-What this means to you is that by using just a few extra symbols in your text, Markdown helps you create a document with an explicit structure. When you take notes in plain text (in a notepad application, for example), there's nothing to indicate which text is meant to be bold or italic. In ordinary text, you might write a link as **http://example.com** one time, then as just **example.com**, and later **go to the website (example.com)**. There's no internal consistency.
-
-But if you write the way Markdown prescribes, your text has internal consistency. Computers like consistency because it enables them to follow strict instructions without worrying about exceptions.
-
-Trust me; once you learn to use Markdown, every writing task will be, in some way, easier and better than before. So let's learn it.
-
-### Markdown basics
-
-The following rules are the basics for writing in Markdown.
-
- 1. Create a text file with an **.md** extension (for example, **example.md**). You can use any text editor (even a word processor like LibreOffice or Microsoft Word), as long as you remember to save it as a _text_ file.
-
-
-![Names of Markdown files][3]
-
- 2. Write whatever you want, just as you usually do:
-
-
-```
-Lorem ipsum
-
-Consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
-Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
-Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
-
-De Finibus Bonorum et Malorum
-
-Sed ut perspiciatis unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam, eaque ipsa quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt explicabo.
-Nemo enim ipsam voluptatem quia voluptas sit aspernatur aut odit aut fugit, sed quia consequuntur magni dolores eos qui ratione voluptatem sequi nesciunt.
-
-  Neque porro quisquam est, qui dolorem ipsum quia dolor sit amet, consectetur, adipisci velit, sed quia non numquam eius modi tempora incidunt ut labore et dolore magnam aliquam quaerat voluptatem.
-```
-
- 3. Make sure to place an empty line between paragraphs. That might feel unnatural if you're used to writing business letters or traditional prose, where paragraphs have only one new line and maybe even an indentation before the first word. For Markdown, an empty line (some word processors mark this with **¶**, called a Pilcrow symbol) guarantees a new paragraph is created when you convert it to another format like HTML.
-
- 4. Designate titles and subtitles. For the document's title, add a pound or hash (**#**) symbol and a space before the text (e.g., **# Lorem ipsum**). The first subtitle level uses two (**## De Finibus Bonorum et Malorum**), the next level gets three (**### Third Subtitle**), and so on. Note that there is a space between the pound sign and the first word.
-
-
-```
-# Lorem ipsum
-
-Consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
-Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
-Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
-
-## De Finibus Bonorum et Malorum
-
-Sed ut perspiciatis unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam, eaque ipsa quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt explicabo.
-Nemo enim ipsam voluptatem quia voluptas sit aspernatur aut odit aut fugit, sed quia consequuntur magni dolores eos qui ratione voluptatem sequi nesciunt.
-
-  Neque porro quisquam est, qui dolorem ipsum quia dolor sit amet, consectetur, adipisci velit, sed quia non numquam eius modi tempora incidunt ut labore et dolore magnam aliquam quaerat voluptatem.
-```
-
- 5. If you want **bold** letters, just place the letters between two asterisks (stars) with no spaces: `**This will be in bold**`.
-
-
-![Bold text in Markdown][4]
-
- 6. For _italics_, put the text between underline symbols with no spaces: `_I want this text to be in italics_`.
-
-
-![Italics text in Markdown][5]
-
- 7. To insert a link (like [Markdown Tutorial][6]), put the text you want to link in brackets and the URL in parentheses with no spaces between them: `[Markdown Tutorial](https://www.markdowntutorial.com)`.
-
-
-![Hyperlinks in Markdown][7]
-
- 8. Blockquotes are written with a greater-than (**>**) symbol and a space before the text you want to quote: `> A famous quote`.
-
-
-![Blockquote text in Markdown][8]
-
-### Markdown tutorials and tip sheets
-
-These tips will get you started writing in Markdown, but it has a lot more functions than just bold and italics and links. The best way to learn Markdown is to use it, but I recommend investing 15 minutes stepping through the simple [Markdown Tutorial][6] to practice these rules and learn a couple more.
-
-Because modern Markdown is an amalgamation of many different interpretations of the idea of structured text, the [CommonMark][9] project defines a spec with a rigid set of rules to bring clarity to Markdown. It might be helpful to keep a [CommonMark-compliant cheatsheet][10] on hand when writing.
-
-### What you can do with Markdown
-
-Markdown lets you write anything you want—once—and transform it into almost any kind of format you want to use. The following examples show how to turn simple text written in MD into different formats. You don't need multiple formats of your text—you can start from a single source and then… rule the world!
-
- 1. **Simple note-taking:** You can write your notes in Markdown and, the moment you save them, the open source note application [Turtl][11] interprets your text file and shows you the formatted result. You can have your notes anywhere!
-
-
-![Turtl application][12]
-
- 2. **PDF files:** With the [Pandoc][13] application, you can convert your Markdown into a PDF with one simple command: **pandoc <file.md> -o <file.pdf>**.
-
-
-![Markdown text converted to PDF with Pandoc][14]
-
- 3. **Email:** You can also convert Markdown text into an HTML-formatted email by installing the browser extension [Markdown Here][15].
To use it, just select your Markdown text, use Markdown Here to translate it into HTML, and send your message using your favorite email client. - - - -![Markdown text converted to email with Markdown Here][16] - -### Start using it - -You don't need a special application to use Markdown—you just need a text editor and the tips above. It's compatible with how you already write; all you need to do is use it, so give it a try. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/19/9/introduction-markdown - -作者:[Juan Islas][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/xislashttps://opensource.com/users/mbbroberghttps://opensource.com/users/scottnesbitthttps://opensource.com/users/scottnesbitthttps://opensource.com/users/f%C3%A1bio-emilio-costahttps://opensource.com/users/don-watkinshttps://opensource.com/users/greg-phttps://opensource.com/users/marcobravohttps://opensource.com/users/alanfdosshttps://opensource.com/users/scottnesbitthttps://opensource.com/users/jamesf -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop-music-headphones.png?itok=EQZ2WKzy (Woman programming) -[2]: https://en.wikipedia.org/wiki/Markdown -[3]: https://opensource.com/sites/default/files/uploads/markdown_names_md-1.png (Names of Markdown files) -[4]: https://opensource.com/sites/default/files/uploads/markdown_bold.png (Bold text in Markdown) -[5]: https://opensource.com/sites/default/files/uploads/markdown_italic.png (Italics text in Markdown) -[6]: https://www.markdowntutorial.com/ -[7]: https://opensource.com/sites/default/files/uploads/markdown_link.png (Hyperlinks in Markdown) -[8]: https://opensource.com/sites/default/files/uploads/markdown_blockquote.png (Blockquote text in Markdown) -[9]: https://commonmark.org/help/ -[10]: https://opensource.com/downloads/cheat-sheet-markdown -[11]: https://turtlapp.com/ -[12]: https://opensource.com/sites/default/files/uploads/markdown_turtl_02.png (Turtl application) -[13]: https://opensource.com/article/19/5/convert-markdown-to-word-pandoc -[14]: https://opensource.com/sites/default/files/uploads/markdown_pdf.png (Markdown text converted to PDF with Pandoc) -[15]: https://markdown-here.com/ -[16]: https://opensource.com/sites/default/files/uploads/markdown_mail_02.png (Markdown text converted to email with Markdown Here) diff --git a/sources/tech/20190916 Copying large files with Rsync, and some misconceptions.md b/sources/tech/20190916 Copying large files with Rsync, and some misconceptions.md deleted file mode 100644 index ae314e2a2e..0000000000 --- a/sources/tech/20190916 Copying large files with Rsync, and some misconceptions.md +++ /dev/null @@ -1,101 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Copying large files with Rsync, and some misconceptions) -[#]: via: (https://fedoramagazine.org/copying-large-files-with-rsync-and-some-misconceptions/) -[#]: author: (Daniel Leite de Abreu https://fedoramagazine.org/author/dabreu/) - -Copying large files with Rsync, and some misconceptions -====== - -![][1] - -There is a notion that a lot of people working in the IT industry often copy and paste from internet howtos. 
We all do it, and the copy-and-paste itself is not a problem. The problem is when we run things without understanding them.
-
-Some years ago, a friend who used to work on my team needed to copy virtual machine templates from site A to site B. They could not understand why the file they copied was 10GB on site A, but became 100GB on site B.
-
-The friend believed that _rsync_ is a magic tool that should just “sync” the file as it is. However, what most of us forget is to understand what _rsync_ really is, how it is used, and, most important in my opinion, where it comes from. This article provides some further information about rsync, and an explanation of what happened in that story.
-
-### About rsync
-
-_rsync_ is a tool that was created by Andrew Tridgell and Paul Mackerras, who were motivated by the following problem:
-
-Imagine you have two files, _file_A_ and _file_B_. You wish to update _file_B_ to be the same as _file_A_. The obvious method is to copy _file_A_ onto _file_B_.
-
-Now imagine that the two files are on two different servers connected by a slow communications link, for example, a dial-up IP link. If _file_A_ is large, copying it onto _file_B_ will be slow, and sometimes not even possible. To make it more efficient, you could compress _file_A_ before sending it, but that would usually only gain a factor of 2 to 4.
-
-Now assume that _file_A_ and _file_B_ are quite similar, and to speed things up, you take advantage of this similarity. A common method is to send just the differences between _file_A_ and _file_B_ down the link and then use that list of differences to reconstruct the file on the remote end.
-
-The problem is that the normal methods for creating a set of differences between two files rely on being able to read both files. Thus they require that both files are available beforehand at one end of the link. If they are not both available on the same machine, these algorithms cannot be used. (Once you have copied the file over, you don’t need the differences.) This is the problem that _rsync_ addresses.
-
-The _rsync_ algorithm efficiently computes which parts of a source file match parts of an existing destination file. Matching parts then do not need to be sent across the link; all that is needed is a reference to the part of the destination file. Only parts of the source file which are not matching need to be sent over.
-
-The receiver can then construct a copy of the source file using the references to parts of the existing destination file and the original material.
-
-Additionally, the data sent to the receiver can be compressed using any of a range of common compression algorithms for further speed improvements.
-
-The rsync algorithm addresses this problem in a lovely way, as we all might know.
-
-After this introduction to _rsync_, back to the story!
-
-### Problem 1: Thin provisioning
-
-There were two things that would help the friend understand what was going on.
-
-The problem with the file getting significantly bigger on the other side was caused by Thin Provisioning (TP) being enabled on the source system — a method of optimizing the efficiency of available space in Storage Area Networks (SAN) or Network Attached Storages (NAS).
-
-The source file was only 10GB because of TP being enabled, and when transferred over using _rsync_ without any additional configuration, the target destination was receiving the full 100GB of size. _rsync_ could not do this magic automatically; it had to be configured.
-
-The flag that does this work is _-S_ or _\--sparse_, and it tells _rsync_ to handle sparse files efficiently. And it will do what it says! It will only send the sparse data, so source and destination will have a 10GB file.
-
-### Problem 2: Updating files
-
-The second problem appeared when sending over an updated file. The destination was now receiving just the 10GB, but the whole file (containing the virtual disk) was always transferred, even when just a single configuration file on that virtual disk had changed. In other words, only a small portion of the file changed.
-
-The command used for this transfer was:
-
-```
-rsync -avS vmdk_file syncuser@host1:/destination
-```
-
-Again, understanding how _rsync_ works would help with this problem as well.
-
-The above is the biggest misconception about rsync. Many of us think _rsync_ will simply send the delta updates of the files, and that it will automatically update only what needs to be updated. But this is not the default behaviour of _rsync_.
-
-As the man page says, the default behaviour of _rsync_ is to create a new copy of the file in the destination and to move it into the right place when the transfer is completed.
-
-To change this default behaviour of _rsync_, you have to set the following flags and then rsync will send only the deltas:
-
-```
---inplace update destination files in-place
---partial keep partially transferred files
---append append data onto shorter files
---progress show progress during transfer
-```
-
-So the full command that would do exactly what the friend wanted is:
-
-```
-rsync -av --partial --inplace --append --progress vmdk_file syncuser@host1:/destination
-```
-
-Note that the sparse flag _-S_ had to be removed, for two reasons. The first is that you can not use _\--sparse_ and _\--inplace_ together when sending a file over the wire. And second, once you have sent a file over with _\--sparse_, you can’t update it with _\--inplace_ anymore. Note that versions of rsync older than 3.1.3 will reject the combination of _\--sparse_ and _\--inplace_.
-
-So even when the friend ended up copying 100GB over the wire, that only had to happen once. All the following updates were only copying the difference, making the copy extremely efficient.
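-
-As a final experiment, if you want to see the difference between a sparse file’s apparent size and the space it actually occupies before trusting _-S_ with a 100GB template, you can reproduce the effect on a small scale. This is just an illustrative sketch (the file name is made up, and the destination reuses the example host from above):
-
-```
-# create a 1GB file that contains no data, only a size (a sparse file)
-dd if=/dev/zero of=sparse_test.img bs=1 count=0 seek=1G
-
-# apparent size (about 1.0G) vs. actual disk usage (almost nothing)
-ls -lh sparse_test.img
-du -h sparse_test.img
-
-# copy it with -S, then run du on the destination to confirm it stayed sparse
-rsync -avS sparse_test.img syncuser@host1:/destination
-```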
- --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/copying-large-files-with-rsync-and-some-misconceptions/ - -作者:[Daniel Leite de Abreu][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/dabreu/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2019/08/rsync-816x345.jpg diff --git a/sources/tech/20190916 How to start developing with .NET.md b/sources/tech/20190916 How to start developing with .NET.md deleted file mode 100644 index 059a313839..0000000000 --- a/sources/tech/20190916 How to start developing with .NET.md +++ /dev/null @@ -1,170 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (geekpi) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (How to start developing with .NET) -[#]: via: (https://opensource.com/article/19/9/getting-started-net) -[#]: author: (Seth Kenlon https://opensource.com/users/sethhttps://opensource.com/users/alex-bunardzichttps://opensource.com/users/alex-bunardzic) - -How to start developing with .NET -====== -Learn the basics to get up and running with the .NET development -platform. -![Coding on a computer][1] - -The .NET framework was released in 2000 by Microsoft. An open source implementation of the platform, [Mono][2], was the center of controversy in the early 2000s because Microsoft held several patents for .NET technology and could have used those patents to end Mono implementations. Fortunately, in 2014, Microsoft declared that the .NET development platform would be open source under the MIT license from then on, and in 2016, Microsoft purchased Xamarin, the company that produces Mono. - -Both .NET and Mono have grown into cross-platform programming environments for C#, F#, GTK#, Visual Basic, Vala, and more. Applications created with .NET and Mono have been delivered to Linux, BSD, Windows, MacOS, Android, and even some gaming consoles. You can use either .NET or Mono to develop .NET applications. Both are open source, and both have active and vibrant communities. This article focuses on getting started with Microsoft's implementation of the .NET environment. - -### How to install .NET - -The .NET downloads are divided into packages: one containing just a .NET runtime, and the other a .NET software development kit (SDK) containing the .NET Core and runtime. Depending on your platform, there may be several variants of even these packages, accounting for architecture and OS version. To start developing with .NET, you must [install the SDK][3]. This gives you the [dotnet][4] terminal or PowerShell command, which you can use to create and build projects. - -#### Linux - -To install .NET on Linux, first, add the Microsoft Linux software repository to your computer. 
- -On Fedora: - - -``` -$ sudo rpm --import -$ sudo wget -q -O /etc/yum.repos.d/microsoft-prod.repo -``` - -On Ubuntu: - - -``` -$ wget -q -O packages-microsoft-prod.deb -$ sudo dpkg -i packages-microsoft-prod.deb -``` - -Next, install the SDK using your package manager, replacing **<X.Y>** with the current version of the .NET release: - -On Fedora: - - -``` -`$ sudo dnf install dotnet-sdk-` -``` - -On Ubuntu: - - -``` -$ sudo apt install apt-transport-https -$ sudo apt update -$ sudo apt install dotnet-sdk-<X.Y> -``` - -Once all the packages are downloaded and installed, confirm the installation by opening a terminal and typing: - - -``` -$ dotnet --version -X.Y.Z -``` - -#### Windows - -If you're on Microsoft Windows, you probably already have the .NET runtime installed. However, to develop .NET applications, you must also install the .NET Core SDK. - -First, [download the installer][3]. To keep your options open, download .NET Core for cross-platform development (the .NET Framework is Windows-only). Once the **.exe** file is downloaded, double-click it to launch the installation wizard, and click through the two-step install process: accept the license and allow the install to proceed. - -![Installing dotnet on Windows][5] - -Afterward, open PowerShell from your Application menu in the lower-left corner. In PowerShell, type a test command: - - -``` -`PS C:\Users\osdc> dotnet` -``` - -If you see information about a dotnet installation, .NET has been installed correctly. - -#### MacOS - -If you're on an Apple Mac, [download the Mac installer][3], which comes in the form of a **.pkg** package. Download and double-click on the **.pkg** file and click through the installer. You may need to grant permission for the installer since the package is not from the App Store. - -Once all packages are downloaded and installed, confirm the installation by opening a terminal and typing: - - -``` -$ dotnet --version -X.Y.Z -``` - -### Hello .NET - -A sample "hello world" application written in .NET is provided with the **dotnet** command. Or, more accurately, the command provides the sample application. - -First, create a project directory and the required code infrastructure using the **dotnet** command with the **new** and **console** options to create a new console-only application. Use the **-o** option to specify a project name: - - -``` -`$ dotnet new console -o hellodotnet` -``` - -This creates a directory called **hellodotnet** in your current directory. Change into your project directory and have a look around: - - -``` -$ cd hellodotnet -$ dir -hellodotnet.csproj  obj  Program.cs -``` - -The file **Program.cs** is an empty C# file containing a simple Hello World application. Open it in a text editor to view it. Microsoft's Visual Studio Code is a cross-platform, open source application built with dotnet in mind, and while it's not a bad text editor, it also collects a lot of data about its user (and grants itself permission to do so in the license applied to its binary distribution). If you want to try out Visual Studio Code, consider using [VSCodium][6], a distribution of Visual Studio Code that's built from the MIT-licensed source code _without_ the telemetry (read the [documentation][7] for options to disable other forms of tracking in even this build). Alternatively, just use your existing favorite text editor or IDE. 
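-
-Whichever editor you settle on, you can also ask the **dotnet** command itself what is installed before going further. Here is a quick sanity check; the output varies by machine, and to my knowledge the **\--list-sdks** option is available in .NET Core 2.1 and later:
-
-```
-$ dotnet --info        # prints SDK version, runtime environment, and install paths
-$ dotnet --list-sdks   # lists every SDK version installed side by side
-```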
- -The boilerplate code in a new console application is: - - -``` -using System; - -namespace hellodotnet -{ -    class Program -    { -        static void Main(string[] args) -        { -            Console.WriteLine("Hello World!"); -        } -    } -} -``` - -To run the program, use the **dotnet run** command: - - -``` -$ dotnet run -Hello World! -``` - -That's the basic workflow of .NET and the **dotnet** command. The full [C# guide for .NET][8] is available, and everything there is relevant to .NET. For examples of .NET in action, follow [Alex Bunardzic][9]'s mutation testing articles here on opensource.com. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/19/9/getting-started-net - -作者:[Seth Kenlon][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/sethhttps://opensource.com/users/alex-bunardzichttps://opensource.com/users/alex-bunardzic -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_laptop_hack_work.png?itok=aSpcWkcl (Coding on a computer) -[2]: https://www.monodevelop.com/ -[3]: https://dotnet.microsoft.com/download -[4]: https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet?tabs=netcore21 -[5]: https://opensource.com/sites/default/files/uploads/dotnet-windows-install.jpg (Installing dotnet on Windows) -[6]: https://vscodium.com/ -[7]: https://github.com/VSCodium/vscodium/blob/master/DOCS.md -[8]: https://docs.microsoft.com/en-us/dotnet/csharp/tutorials/intro-to-csharp/ -[9]: https://opensource.com/users/alex-bunardzic (View user profile.) diff --git a/sources/tech/20190916 Linux commands to display your hardware information.md b/sources/tech/20190916 Linux commands to display your hardware information.md deleted file mode 100644 index f0a13905e5..0000000000 --- a/sources/tech/20190916 Linux commands to display your hardware information.md +++ /dev/null @@ -1,417 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Linux commands to display your hardware information) -[#]: via: (https://opensource.com/article/19/9/linux-commands-hardware-information) -[#]: author: (Howard Fosdick https://opensource.com/users/howtechhttps://opensource.com/users/sethhttps://opensource.com/users/sethhttps://opensource.com/users/seth) - -Linux commands to display your hardware information -====== -Get the details on what's inside your computer from the command line. -![computer screen ][1] - -There are many reasons you might need to find out details about your computer hardware. For example, if you need help fixing something and post a plea in an online forum, people will immediately ask you for specifics about your computer. Or, if you want to upgrade your computer, you'll need to know what you have and what you can have. You need to interrogate your computer to discover its specifications. - -The easiest way is to do that is with one of the standard Linux GUI programs: - - * [i-nex][2] collects hardware information and displays it in a manner similar to the popular [CPU-Z][3] under Windows. - * [HardInfo][4] displays hardware specifics and even includes a set of eight popular benchmark programs you can run to gauge your system's performance. 
- * [KInfoCenter][5] and [Lshw][6] also display hardware details and are available in many software repositories. - - - -Alternatively, you could open up the box and read the labels on the disks, memory, and other devices. Or you could enter the boot-time panels—the so-called UEFI or BIOS panels. Just hit [the proper program function key][7] during the boot process to access them. These two methods give you hardware details but omit software information. - -Or, you could issue a Linux line command. Wait a minute… that sounds difficult. Why would you do this? - -Sometimes it's easy to find a specific bit of information through a well-targeted line command. Perhaps you don't have a GUI program available or don't want to install one. - -Probably the main reason to use line commands is for writing scripts. Whether you employ the Linux shell or another programming language, scripting typically requires coding line commands. - -Many line commands for detecting hardware must be issued under root authority. So either switch to the root user ID, or issue the command under your regular user ID preceded by **sudo**: - - -``` -`sudo ` -``` - -and respond to the prompt for the root password. - -This article introduces many of the most useful line commands for system discovery. The quick reference chart at the end summarizes them. - -### Hardware overview - -There are several line commands that will give you a comprehensive overview of your computer's hardware. - -The **inxi** command lists details about your system, CPU, graphics, audio, networking, drives, partitions, sensors, and more. Forum participants often ask for its output when they're trying to help others solve problems. It's a standard diagnostic for problem-solving: - - -``` -`inxi -Fxz` -``` - -The **-F** flag means you'll get full output, **x** adds details, and **z** masks out personally identifying information like MAC and IP addresses. - -The **hwinfo** and **lshw** commands display much of the same information in different formats: - - -``` -`hwinfo --short` -``` - -or - - -``` -`lshw -short` -``` - -The long forms of these two commands spew out exhaustive—but hard to read—output: - - -``` -`hwinfo` -``` - -or - - -``` -`lshw` -``` - -### CPU details - -You can learn everything about your CPU through line commands. View CPU details by issuing either the **lscpu** command or its close relative **lshw**: - - -``` -`lscpu` -``` - -or - - -``` -`lshw -C cpu` -``` - -In both cases, the last few lines of output list all the CPU's capabilities. Here you can find out whether your processor supports specific features. - -With all these commands, you can reduce verbiage and narrow any answer down to a single detail by parsing the command output with the **grep** command. For example, to view only the CPU make and model: - - -``` -`lshw -C cpu | grep -i product` -``` - -To view just the CPU's speed in megahertz: - - -``` -`lscpu | grep -i mhz` -``` - -or its [BogoMips][8] power rating: - - -``` -`lscpu | grep -i bogo` -``` - -The **-i** flag on the **grep** command simply ensures your search ignores whether the output it searches is upper or lower case. - -### Memory - -Linux line commands enable you to gather all possible details about your computer's memory. You can even determine whether you can add extra memory to the computer without opening up the box. 
- -To list each memory stick and its capacity, issue the **dmidecode** command: - - -``` -`dmidecode -t memory | grep -i size` -``` - -For more specifics on system memory, including type, size, speed, and voltage of each RAM stick, try: - - -``` -`lshw -short -C memory` -``` - -One thing you'll surely want to know is is the maximum memory you can install on your computer: - - -``` -`dmidecode -t memory | grep -i max` -``` - -Now find out whether there are any open slots to insert additional memory sticks. You can do this without opening your computer by issuing this command: - - -``` -`lshw -short -C memory | grep -i empty` -``` - -A null response means all the memory slots are already in use. - -Determining how much video memory you have requires a pair of commands. First, list all devices with the **lspci** command and limit the output displayed to the video device you're interested in: - - -``` -`lspci | grep -i vga` -``` - -The output line that identifies the video controller will typically look something like this: - - -``` -`00:02.0 VGA compatible controller: Intel Corporation 82Q35 Express Integrated Graphics Controller (rev 02)` -``` - -Now reissue the **lspci** command, referencing the video device number as the selected device: - - -``` -`lspci -v -s 00:02.0` -``` - -The output line identified as _prefetchable_ is the amount of video RAM on your system: - - -``` -... -Memory at f0100000 (32-bit, non-prefetchable) [size=512K] -I/O ports at 1230 [size=8] -Memory at e0000000 (32-bit, prefetchable) [size=256M] -Memory at f0000000 (32-bit, non-prefetchable) [size=1M] -... -``` - -Finally, to show current memory use in megabytes, issue: - - -``` -`free -m` -``` - -This tells how much memory is free, how much is in use, the size of the swap area, and whether it's being used. For example, the output might look like this: - - -``` -              total        used        free     shared    buff/cache   available -Mem:          11891        1326        8877      212        1687       10077 -Swap:          1999           0        1999 -``` - -The **top** command gives you more detail on memory use. It shows current overall memory and CPU use and also breaks it down by process ID, user ID, and the commands being run. It displays full-screen text output: - - -``` -`top` -``` - -### Disks, filesystems, and devices - -You can easily determine whatever you wish to know about disks, partitions, filesystems, and other devices. - -To display a single line describing each disk device: - - -``` -`lshw -short -C disk` -``` - -Get details on any specific SATA disk, such as its model and serial numbers, supported modes, sector count, and more with: - - -``` -`hdparm -i /dev/sda` -``` - -Of course, you should replace **sda** with **sdb** or another device mnemonic if necessary. - -To list all disks with all their defined partitions, along with the size of each, issue: - - -``` -`lsblk` -``` - -For more detail, including the number of sectors, size, filesystem ID and type, and partition starting and ending sectors: - - -``` -`fdisk -l` -``` - -To start up Linux, you need to identify mountable partitions to the [GRUB][9] bootloader. You can find this information with the **blkid** command. 
It lists each partition's unique identifier (UUID) and its filesystem type (e.g., ext3 or ext4): - - -``` -`blkid` -``` - -To list the mounted filesystems, their mount points, and the space used and available for each (in megabytes): - - -``` -`df -m` -``` - -Finally, you can list details for all USB and PCI buses and devices with these commands: - - -``` -`lsusb` -``` - -or - - -``` -`lspci` -``` - -### Network - -Linux offers tons of networking line commands. Here are just a few. - -To see hardware details about your network card, issue: - - -``` -`lshw -C network` -``` - -Traditionally, the command to show network interfaces was **ifconfig**: - - -``` -`ifconfig -a` -``` - -But many people now use: - - -``` -`ip link show` -``` - -or - - -``` -`netstat -i` -``` - -In reading the output, it helps to know common network abbreviations: - -**Abbreviation** | **Meaning** ----|--- -**lo** | Loopback interface -**eth0** or **enp*** | Ethernet interface -**wlan0** | Wireless interface -**ppp0** | Point-to-Point Protocol interface (used by a dial-up modem, PPTP VPN connection, or USB modem) -**vboxnet0** or **vmnet*** | Virtual machine interface - -The asterisks in this table are wildcard characters, serving as a placeholder for whatever series of characters appear from system to system. **** - -To show your default gateway and routing tables, issue either of these commands: - - -``` -`ip route | column -t` -``` - -or - - -``` -`netstat -r` -``` - -### Software - -Let's conclude with two commands that display low-level software details. For example, what if you want to know whether you have the latest firmware installed? This command shows the UEFI or BIOS date and version: - - -``` -`dmidecode -t bios` -``` - -What is the kernel version, and is it 64-bit? And what is the network hostname? To find out, issue: - - -``` -`uname -a` -``` - -### Quick reference chart - -This chart summarizes all the commands covered in this article: - -Display info about all hardware | **inxi -Fxz**              _\--or--_ -**hwinfo --short**     _\--or--_ -**lshw  -short** ----|--- -Display all CPU info | **lscpu**                  _\--or--_ -**lshw -C cpu** -Show CPU features (e.g., PAE, SSE2) | **lshw -C cpu | grep -i capabilities** -Report whether the CPU is 32- or 64-bit | **lshw -C cpu | grep -i width** -Show current memory size and configuration | **dmidecode -t memory | grep -i size**    _\--or--_ -**lshw -short -C memory** -Show maximum memory for the hardware | **dmidecode -t memory | grep -i max** -Determine whether memory slots are available | **lshw -short -C memory | grep -i empty** -(a null answer means no slots available) -Determine the amount of video memory | **lspci | grep -i vga** -then reissue with the device number; -for example:  **lspci -v -s 00:02.0** -The VRAM is the _prefetchable_ value. 
-Show current memory use | **free -m**    _\--or--_ -**top** -List the disk drives | **lshw -short -C disk** -Show detailed information about a specific disk drive | **hdparm -i /dev/sda** -(replace **sda** if necessary) -List information about disks and partitions | **lsblk **     (simple)      _\--or--_ -**fdisk -l**   (detailed) -List partition IDs (UUIDs) | **blkid** -List mounted filesystems, their mount points, -and megabytes used and available for each | **df -m** -List USB devices | **lsusb** -List PCI devices | **lspci** -Show network card details | **lshw -C network** -Show network interfaces | **ifconfig -a**       _\--or--_ -**ip link show   **_\--or--_ -**netstat -i** -Display routing tables | **ip route | column -t`  `**_\--or--_ -**netstat -r** -Display UEFI/BIOS info | **dmidecode -t bios** -Show kernel version, network hostname, more | **uname -a** - -Do you have a favorite command that I overlooked? Please add a comment and share it. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/19/9/linux-commands-hardware-information - -作者:[Howard Fosdick][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/howtechhttps://opensource.com/users/sethhttps://opensource.com/users/sethhttps://opensource.com/users/seth -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/features_solutions_command_data.png?itok=4_VQN3RK (computer screen ) -[2]: http://sourceforge.net/projects/i-nex/ -[3]: https://www.cpuid.com/softwares/cpu-z.html -[4]: http://sourceforge.net/projects/hardinfo.berlios/ -[5]: https://userbase.kde.org/KInfoCenter -[6]: http://www.binarytides.com/linux-lshw-command/ -[7]: http://www.disk-image.com/faq-bootmenu.htm -[8]: https://en.wikipedia.org/wiki/BogoMips -[9]: https://www.dedoimedo.com/computers/grub.html diff --git a/sources/tech/20190918 Adding themes and plugins to Zsh.md b/sources/tech/20190918 Adding themes and plugins to Zsh.md deleted file mode 100644 index 9601223452..0000000000 --- a/sources/tech/20190918 Adding themes and plugins to Zsh.md +++ /dev/null @@ -1,210 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (amwps290 ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Adding themes and plugins to Zsh) -[#]: via: (https://opensource.com/article/19/9/adding-plugins-zsh) -[#]: author: (Seth Kenlon https://opensource.com/users/sethhttps://opensource.com/users/sethhttps://opensource.com/users/sethhttps://opensource.com/users/seth) - -Adding themes and plugins to Zsh -====== -Expand Z-shell's capabilities with themes and plugins installed with Oh -My Zsh. -![Someone wearing a hardhat and carrying code ][1] - -In my [previous article][2], I explained how to get started with [Z-shell][2] (Zsh). For some users, the most exciting thing about Zsh is its ability to adopt new themes. It's so easy to theme Zsh both because of the active community designing visuals for the shell and also because of the [Oh My Zsh][3] project, which makes it trivial to install them. - -Theming is one of those changes you notice immediately, so if you don't feel like you changed shells when you installed Zsh, you'll definitely feel it once you've adopted one of the 100+ themes bundled with Oh My Zsh. 
There's a lot more to Oh My Zsh than just pretty themes, though; there are also hundreds of plugins that add features to your Z-shell environment. - -### Installing Oh My Zsh - -The [ohmyz.sh][3] website encourages you to install the framework by running a script over the internet from your computer. While the Oh My Zsh project is almost certainly trustworthy, it's generally ill-advised to blindly run scripts on your system. If you want to run the install script, you can download it, read it, and run it after you're satisfied you understand what it's doing. - -If you download the script and read it, you may notice that installation is only a three-step process: - -#### 1\. Clone oh-my-zsh - -First, clone the oh-my-zsh repository into a directory called **~/.oh-my-zsh**: - - -``` -`% git clone http://github.com/robbyrussell/oh-my-zsh ~/.oh-my-zsh` -``` - -#### 2\. Switch the config file - -Next, back up your existing **.zshrc** file and move the default one from the oh-my-zsh install into its place. You can do this in one command using the **-b** (backup) option for **mv**, as long as your version of the **mv** command includes that option: - - -``` -% mv -b \ -~/.oh-my-zsh/templates/zshrc.zsh-template \ -~/.zshrc -``` - -#### 3\. Edit the config - -By default, Oh My Zsh's configuration is pretty bland, so you might want to reintegrate your custom **~/.zshrc** into the **.oh-my-zsh** config. To do that, append your old config to the end of the new one using the [cat command][4]: - - -``` -`% cat ~/.zshrc~ >> ~/.zshrc` -``` - -To see the default configuration and learn about some of the options it provides, open **~/.zshrc** in your favorite text editor. The file is well-commented, so it's a great way to get a good idea of what's possible. - -For instance, you can change the location of your **.oh-my-zsh** directory. At installation, it resides at the base of your home directory, but modern Linux convention, as defined by the [Free Desktop][5] specification, is to place directories that extend the functionality of applications in the **~/.local/share** directory. You can change it in **~/.zshrc** by editing the line: - - -``` -# Path to your oh-my-zsh installation. -export ZSH=$HOME/.local/share/oh-my-zsh -``` - -then moving the directory to that location: - - -``` -% mv ~/.oh-my-zsh \ -$HOME/.local/share/oh-my-zsh -``` - -If you're using MacOS, the specification is less clear, but arguably the most appropriate place for the directory is **$HOME/Library/Application\ Support**. - -### Relaunching Zsh - -After editing the config, you have to relaunch your shell. Before you do that, make sure you've finished any in-progress config changes; for instance, don't change the path of **.oh-my-zsh** then forget to move the directory to its new location. If you don't want to relaunch your shell, you can **source** the config file, just as you can with Bash: - - -``` -% source ~/.zshrc -➜  .oh-my-zsh git:(master) ✗ -``` - -You can ignore any warnings about missing update files; they will be resolved upon relaunch. - -### Changing your theme - -Installing Oh My Zsh sets your Z-shell theme to **robbyrussell**, a theme by the project's maintainer. This theme's changes are minimal, mostly involving the color of your prompt. - -To view all the available themes, list the contents of the **.oh-my-zsh** theme directory: - - -``` -➜  .oh-my-zsh git:(master) ✗ ls \ -~/.local/share/oh-my-zsh/themes -3den.zsh-theme -adben.zsh-theme -af-magic.zsh-theme -afowler.zsh-theme -agnoster.zsh-theme -[...] 
-```
-
-To see screenshots of themes before trying them, visit the Oh My Zsh [wiki][6]. For even more themes, visit the [External themes][7] wiki page.
-
-Most themes are simple to set up and use. Just change the value of the theme name in **.zshrc** and reload the config:
-
-```
-➜ ~ sed -i \
-'s/_THEME=\"robbyrussell\"/_THEME=\"linuxonly\"/g' \
-~/.zshrc
-➜ ~ source ~/.zshrc
-seth@darkstar:pts/0->/home/skenlon (0) ➜
-```
-
-Other themes require extra configuration. For example, to use the **agnoster** theme, you must first install the Powerline font. This is an open source font, and it's probably in your software repository if you're running Linux. Install it with:
-
-```
-`➜ ~ sudo dnf install powerline-fonts`
-```
-
-Set your theme in the config:
-
-```
-➜ ~ sed -i \
-'s/_THEME=\"linuxonly\"/_THEME=\"agnoster\"/g' \
-~/.zshrc
-```
-
-and then relaunch (a simple **source** won't work). Upon relaunch, you will see the new theme:
-
-![agnoster theme][8]
-
-### Installing plugins
-
-Over 200 plugins ship with Oh My Zsh, and you can see them by looking in **.oh-my-zsh/plugins**. Each plugin directory has a README file explaining what the plugin does.
-
-Some plugins are relatively simple. For instance, the **dnf**, **ubuntu**, **brew**, and **macports** plugins are collections of aliases to simplify interactions with the DNF, Apt, Homebrew, and MacPorts package managers.
-
-Others are more complex. The **git** plugin, active by default, detects when you're working in a [Git repository][9] and updates your shell prompt so that it lists the current branch and even indicates whether there are unmerged changes.
-
-To activate a plugin, add it to the **plugins** setting in **~/.zshrc**. For example, to add the **dnf** and **pass** plugins, open **~/.zshrc** in your favorite text editor:
-
-```
-`plugins=(git dnf pass)`
-```
-
-Save your changes and reload your Zsh session:
-
-```
-`% source ~/.zshrc`
-```
-
-The plugins are now active. You can test the **dnf** plugin by using one of the aliases it provides:
-
-```
-% dnfs fop
-====== Name Exactly Matched: fop ======
-fop.noarch : XSL-driven print formatter
-```
-
-Different plugins do different things, so you may want to install only one or two at a time to help you learn the new capabilities of your shell.
-
-#### Cheating
-
-Some Oh My Zsh plugins are pretty generic. If you look at a plugin that claims to be a Z-shell plugin and the code is also compatible with Bash, then you can use it in your Bash shell. Some plugins require Z-shell-specific functions, so this won't work with all of them. But you can load plugins like **dnf**, **ubuntu**, **[firewalld][10]**, and others into a Bash shell by using **source** to load the plugin of your choice. For example:
-
-```
-if [ -d $HOME/.local/share/oh-my-zsh/plugins ]; then
-        source $HOME/.local/share/oh-my-zsh/plugins/dnf/dnf.plugin.zsh
-fi
-```
-
-### To Z or not to Z
-
-Z-shell is a powerful shell both for its built-in features and the plugins contributed by its passionate community. Whether you use it as your primary shell or just as a shell you visit on weekends or holidays, you owe it to yourself to try it out.
-
-What are your favorite Z-shell themes and plugins? Tell us in the comments!
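-
-And if you can't pick just one favorite, here is a parting tip: Oh My Zsh can rotate themes for you. This is a sketch assuming a reasonably recent Oh My Zsh; both settings appear in the project's own **zshrc** template, but treat the candidate list below as an example, not a recommendation:
-
-```
-# In ~/.zshrc: pick a random theme each time a new session starts
-ZSH_THEME="random"
-
-# Optionally limit the pool the random picker chooses from
-ZSH_THEME_RANDOM_CANDIDATES=(robbyrussell agnoster linuxonly)
-```
-
-Each new session then reports which theme it loaded, which is a low-effort way to audition the whole collection.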
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/9/adding-plugins-zsh
-
-作者:[Seth Kenlon][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/sethhttps://opensource.com/users/sethhttps://opensource.com/users/sethhttps://opensource.com/users/seth
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/build_structure_tech_program_code_construction.png?itok=nVsiLuag (Someone wearing a hardhat and carrying code )
-[2]: https://opensource.com/article/19/9/getting-started-zsh
-[3]: https://ohmyz.sh/
-[4]: https://opensource.com/article/19/2/getting-started-cat-command
-[5]: http://freedesktop.org
-[6]: https://github.com/robbyrussell/oh-my-zsh/wiki/Themes
-[7]: https://github.com/robbyrussell/oh-my-zsh/wiki/External-themes
-[8]: https://opensource.com/sites/default/files/uploads/zsh-agnoster.jpg (agnoster theme)
-[9]: https://opensource.com/resources/what-is-git
-[10]: https://opensource.com/article/19/7/make-linux-stronger-firewalls
diff --git a/sources/tech/20190918 How to remove carriage returns from text files on Linux.md b/sources/tech/20190918 How to remove carriage returns from text files on Linux.md
deleted file mode 100644
index 45b8a8b89d..0000000000
--- a/sources/tech/20190918 How to remove carriage returns from text files on Linux.md
+++ /dev/null
@@ -1,114 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (geekpi)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (How to remove carriage returns from text files on Linux)
-[#]: via: (https://www.networkworld.com/article/3438857/how-to-remove-carriage-returns-from-text-files-on-linux.html)
-[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
-
-How to remove carriage returns from text files on Linux
-====
-When carriage returns (also referred to as Ctrl+M's) get on your nerves, don't fret. There are several easy ways to show them the door.
-[Kim Siever][1]
-
-Carriage returns go back a long way – as far back as typewriters on which a mechanism or a lever swung the carriage that held a sheet of paper to the right so that suddenly letters were being typed on the left again. They have persevered in text files on Windows, but were never used on Linux systems. This incompatibility sometimes causes problems when you’re trying to process files on Linux that were created on Windows, but it's an issue that is very easily resolved.
-
-The carriage return, also referred to as the **Ctrl+M** character, would show up as an octal 15 if you were looking at the file with an **od** (octal dump) command. The characters **CRLF** are often used to represent the carriage return and linefeed sequence that ends lines on Windows text files. Those who like to gaze at octal dumps will spot the **\r \n**. Linux text files, by comparison, end with just linefeeds.
-
-**[ Two-Minute Linux Tips: [Learn how to master a host of Linux commands in these 2-minute video tutorials][2] ]**
-
-Here's a sample of **od** output with the lines containing the **CRLF** characters in both octal and character form highlighted.
-
-```
-$ od -bc testfile.txt
-0000000 124 150 151 163 040 151 163 040 141 040 164 145 163 164 040 146
-          T   h   i   s       i   s       a       t   e   s   t       f
-0000020 151 154 145 040 146 162 157 155 040 127 151 156 144 157 167 163
-          i   l   e       f   r   o   m       W   i   n   d   o   w   s
-0000040 056 015 012 111 164 047 163 040 144 151 146 146 145 162 145 156   <==
-          .  \r  \n   I   t   '   s       d   i   f   f   e   r   e   n   <==
-0000060 164 040 164 150 141 156 040 141 040 125 156 151 170 040 164 145
-          t       t   h   a   n       a       U   n   i   x       t   e
-0000100 170 164 040 146 151 154 145 015 012 167 157 165 154 144 040 142   <==
-          x   t       f   i   l   e  \r  \n   w   o   u   l   d       b   <==
-```
-
-While these characters don’t represent a huge problem, they can sometimes interfere when you want to parse the text files in some way and don’t want to have to code around their presence or absence.
-
-### 3 ways to remove carriage return characters from text files
-
-Fortunately, there are several ways to easily remove carriage return characters. Here are three options:
-
-#### dos2unix
-
-You might need to go through the trouble of installing it, but **dos2unix** is probably the easiest way to turn Windows text files into Unix/Linux text files. One command with one argument, and you’re done. No second file name is required. The file will be changed in place.
-
-```
-$ dos2unix testfile.txt
-dos2unix: converting file testfile.txt to Unix format...
-```
-
-You should see the file length decrease, depending on how many lines it contains. A file with 100 lines would likely shrink by 99 characters, since only the last line will not end with the **CRLF** characters.
-
-Before:
-
-```
--rw-rw-r-- 1 shs shs 121 Sep 14 19:11 testfile.txt
-```
-
-After:
-
-```
--rw-rw-r-- 1 shs shs 118 Sep 14 19:12 testfile.txt
-```
-
-If you need to convert a large collection of files, don't fix them one at a time. Instead, put them all in a directory by themselves and run a command like this:
-
-```
-$ find . -type f -exec dos2unix {} \;
-```
-
-In this command, we use find to locate regular files and then run the **dos2unix** command to convert them one at a time. The {} in the command is replaced by the filename. You should be sitting in the directory with the files when you run it. This command could damage other types of files, such as those that contain octal 15 characters in some context other than a text file (e.g., bytes in an image file).
-
-#### sed
-
-You can also use **sed**, the stream editor, to remove carriage returns. You will, however, have to supply a second file name. Here’s an example:
-
-```
-$ sed -e "s/^M//" before.txt > after.txt
-```
-
-One important thing to note is that you DON’T type what that command appears to be. You must enter **^M** by typing **Ctrl+V** followed by **Ctrl+M**. The “s” is the substitute command. The slashes separate the text we’re looking for (the Ctrl+M) and the text (nothing in this case) that we’re replacing it with.
-
-#### vi
-
-You can even remove carriage return (**Ctrl+M**) characters with **vi**, although this assumes you’re not running through hundreds of files and are maybe making some other changes, as well. You would type “**:**” to go to the command line and then type the string shown below. As with **sed**, the **^M** portion of this command requires typing **Ctrl+V** to get the **^** and then **Ctrl+M** to insert the **M**. The **%s** is a substitute operation, the slashes again separate the characters we want to remove and the text (nothing) we want to replace it with. The “**g**” (global) means to do this on every line in the file.
-
-```
-:%s/^M//g
-```
-
-#### Wrap-up
-
-The **dos2unix** command is probably the easiest to remember and most reliable way to remove carriage returns from text files. Other options are a little trickier to use, but they provide the same basic function.
-
-Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind.
-
---------------------------------------------------------------------------------
-
-via: https://www.networkworld.com/article/3438857/how-to-remove-carriage-returns-from-text-files-on-linux.html
-
-作者:[Sandra Henry-Stocker][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
-[b]: https://github.com/lujun9972
-[1]: https://www.flickr.com/photos/kmsiever/5895380540/in/photolist-9YXnf5-cNmpxq-2KEvib-rfecPZ-9snnkJ-2KAcDR-dTxzKW-6WdgaG-6H5i46-2KzTZX-7cnSw7-e3bUdi-a9meh9-Zm3pD-xiFhs-9Hz6YM-ar4DEx-4PXAhw-9wR4jC-cihLcs-asRFJc-9ueXvG-aoWwHq-atwL3T-ai89xS-dgnntH-5en8Te-dMUDd9-aSQVn-dyZqij-cg4SeS-abygkg-f2umXt-Xk129E-4YAeNn-abB6Hb-9313Wk-f9Tot-92Yfva-2KA7Sv-awSCtG-2KDPzb-eoPN6w-FE9oi-5VhaNf-eoQgx7-eoQogA-9ZWoYU-7dTGdG-5B1aSS
-[2]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
-[3]: https://www.facebook.com/NetworkWorld/
-[4]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190923 Installation Guide of Manjaro 18.1 (KDE Edition) with Screenshots.md b/sources/tech/20190923 Installation Guide of Manjaro 18.1 (KDE Edition) with Screenshots.md
new file mode 100644
index 0000000000..353f26db5b
--- /dev/null
+++ b/sources/tech/20190923 Installation Guide of Manjaro 18.1 (KDE Edition) with Screenshots.md
@@ -0,0 +1,222 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Installation Guide of Manjaro 18.1 (KDE Edition) with Screenshots)
+[#]: via: (https://www.linuxtechi.com/install-manjaro-18-1-kde-edition-screenshots/)
+[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
+
+Installation Guide of Manjaro 18.1 (KDE Edition) with Screenshots
+======
+
+Within a year of releasing **Manjaro 18.0** (**Illyria**), the team has come out with its next big release, **Manjaro 18.1**, codenamed "**Juhraya**". The team has also published an official announcement saying that Juhraya comes packed with a lot of improvements and bug fixes.
+
+### New Features in Manjaro 18.1
+
+Some of the new features and enhancements in Manjaro 18.1 are listed below:
+
+  * Option to choose between LibreOffice or FreeOffice
+  * New Matcha theme for the Xfce edition
+  * Redesigned messaging system in the KDE edition
+  * Support for Snap and FlatPak packages using the "bauh" tool
+
+
+
+### Minimum System Requirements for Manjaro 18.1
+
+  * 1 GB RAM
+  * 1 GHz processor
+  * Around 30 GB of hard disk space
+  * Internet connection
+  * Bootable media (USB/DVD)
+
+
+
+### Step by Step Guide to Install Manjaro 18.1 (KDE Edition)
+
+To start installing Manjaro 18.1 (KDE Edition) in your system, please follow the steps outlined below:
+
+### Step 1) Download Manjaro 18.1 ISO
+
+Before installing, you need to download the latest copy of Manjaro 18.1 from its official download page located **[here][1]**. Since we are covering the KDE edition, we chose to download the KDE version.
But the installation process is the same for all desktop environments, including the Xfce, KDE and Gnome editions.
+
+### Step 2) Create a Bootable USB Disk
+
+Once you have successfully downloaded the ISO file from the Manjaro downloads page, it is time to create a bootable USB disk. Write the downloaded ISO file to a USB drive to make it bootable. Make sure to change your boot settings to boot from USB, and restart your system.
+
+### Step 3) Manjaro Live Installation Environment
+
+When the system restarts, it will automatically detect the USB drive and start booting into the Manjaro Live Installation Screen.
+
+[![Boot-Manjaro-18-1-kde-installation][2]][3]
+
+Next use the arrow keys to choose "**Boot: Manjaro x86_64 kde**" and hit Enter to launch the Manjaro Installer.
+
+### Step 4) Choose Launch Installer
+
+Next the Manjaro installer will be launched, and if you are connected to the internet, Manjaro will automatically detect your location and time zone. Click "**Launch Installer**" to start installing Manjaro 18.1 KDE edition in your system.
+
+[![Choose-Launch-Installaer-Manjaro18-1-kde][2]][4]
+
+### Step 5) Choose Your Language
+
+Next the installer will ask you to choose your preferred language.
+
+[![Choose-Language-Manjaro18-1-Kde-Installation][2]][5]
+
+Select your desired language and click "Next".
+
+### Step 6) Choose Your Time Zone and Region
+
+In the next screen, select your desired time zone and region and click "Next" to continue.
+
+[![Select-Location-During-Manjaro18-1-KDE-Installation][2]][6]
+
+### Step 7) Choose Keyboard Layout
+
+In the next screen, select your preferred keyboard layout and click "Next" to continue.
+
+[![Select-Keyboard-Layout-Manjaro18-1-kde-installation][2]][7]
+
+### Step 8) Choose Partition Type
+
+This is a very critical step in the installation process. It will allow you to choose between:
+
+  * Erase Disk
+  * Manual Partitioning
+  * Install Alongside
+  * Replace a Partition
+
+
+
+If you are installing Manjaro 18.1 in a VM (Virtual Machine), then you won't be able to see the last 2 options.
+
+If you are new to Manjaro Linux then I would suggest you go with the first option (**Erase Disk**); it will automatically create the required partitions for you. If you want to create custom partitions then choose the second option, "**Manual Partitioning**"; as its name suggests, it will allow us to create our own custom partitions.
+
+In this tutorial I will be creating custom partitions by selecting the "Manual Partitioning" option.
+
+[![Manual-Partition-Manjaro18-1-KDE][2]][8]
+
+Choose the second option and click "Next" to continue.
+
+As you can see, I have a 40 GB hard disk, so I will create the following partitions on it:
+
+  * /boot – 2 GB (ext4 file system)
+  * / – 10 GB (ext4 file system)
+  * /home – 22 GB (ext4 file system)
+  * /opt – 4 GB (ext4 file system)
+  * Swap – 2 GB
+
+
+
+When we click on Next in the above window, we will get the following screen; choose to create a '**new partition table**':
+
+[![Create-Partition-Table-Manjaro18-1-Installation][2]][9]
+
+Click on OK.
+
+Now choose the free space and then click on '**create**' to set up the first partition as /boot of size 2 GB:
+
+[![boot-partition-manjaro-18-1-installation][2]][10]
+
+Click on OK to proceed further. In the next window, again choose the free space and then click on create to set up the second partition as / of size 10 GB:
+
+[![slash-root-partition-manjaro18-1-installation][2]][11]
+
+Similarly, create the next partition as /home of size 22 GB:
+
+[![home-partition-manjaro18-1-installation][2]][12]
+
+So far we have created three primary partitions; now create the next partition as an extended partition:
+
+[![Extended-Partition-Manjaro18-1-installation][2]][13]
+
+Click on OK to proceed further.
+
+Create the /opt and swap partitions of size 4 GB and 2 GB respectively as logical partitions:
+
+[![opt-partition-manjaro-18-1-installation][2]][14]
+
+[![swap-partition-manjaro18-1-installation][2]][15]
+
+Once you are done creating all the partitions, click on Next:
+
+[![choose-next-after-partition-creation][2]][16]
+
+### Step 9) Provide User Information
+
+In the next screen, you need to provide the user information, including your name, username, password, computer name etc.
+
+[![User-creation-details-manjaro18-1-installation][2]][17]
+
+Click "Next" to continue with the installation after providing all the information.
+
+In the next screen you will be prompted to choose the office suite, so make a choice that suits your installation:
+
+[![Office-Suite-Selection-Manjaro18-1][2]][18]
+
+Click on Next to proceed further.
+
+### Step 10) Summary Information
+
+Before the actual installation starts, the installer will show you all the details you've chosen, including the language, time zone, keyboard layout and partitioning information. Click "**Install**" to proceed with the installation process.
+
+[![Summary-manjaro18-1-installation][2]][19]
+
+### Step 11) Install Manjaro 18.1 KDE Edition
+
+Now the actual installation process begins, and once it is completed, restart the system to log in to Manjaro 18.1 KDE edition:
+
+[![Manjaro18-1-Installation-Progress][2]][20]
+
+[![Restart-Manjaro-18-1-after-installation][2]][21]
+
+### Step 12) Login after successful installation
+
+After the restart we will get the following login screen; use the credentials that we created during the installation:
+
+[![Login-screen-after-manjaro-18-1-installation][2]][22]
+
+Click on Login.
+
+[![KDE-Desktop-Screen-Manjaro-18-1][2]][23]
+
+That's it! You've successfully installed Manjaro 18.1 KDE edition on your system; now explore all its exciting features. Please post your feedback and suggestions in the comments section below.
+ +-------------------------------------------------------------------------------- + +via: https://www.linuxtechi.com/install-manjaro-18-1-kde-edition-screenshots/ + +作者:[Pradeep Kumar][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linuxtechi.com/author/pradeep/ +[b]: https://github.com/lujun9972 +[1]: https://manjaro.org/download/official/kde/ +[2]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[3]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Boot-Manjaro-18-1-kde-installation.jpg +[4]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Choose-Launch-Installaer-Manjaro18-1-kde.jpg +[5]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Choose-Language-Manjaro18-1-Kde-Installation.jpg +[6]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Select-Location-During-Manjaro18-1-KDE-Installation.jpg +[7]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Select-Keyboard-Layout-Manjaro18-1-kde-installation.jpg +[8]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Manual-Partition-Manjaro18-1-KDE.jpg +[9]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Create-Partition-Table-Manjaro18-1-Installation.jpg +[10]: https://www.linuxtechi.com/wp-content/uploads/2019/09/boot-partition-manjaro-18-1-installation.jpg +[11]: https://www.linuxtechi.com/wp-content/uploads/2019/09/slash-root-partition-manjaro18-1-installation.jpg +[12]: https://www.linuxtechi.com/wp-content/uploads/2019/09/home-partition-manjaro18-1-installation.jpg +[13]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Extended-Partition-Manjaro18-1-installation.jpg +[14]: https://www.linuxtechi.com/wp-content/uploads/2019/09/opt-partition-manjaro-18-1-installation.jpg +[15]: https://www.linuxtechi.com/wp-content/uploads/2019/09/swap-partition-manjaro18-1-installation.jpg +[16]: https://www.linuxtechi.com/wp-content/uploads/2019/09/choose-next-after-partition-creation.jpg +[17]: https://www.linuxtechi.com/wp-content/uploads/2019/09/User-creation-details-manjaro18-1-installation.jpg +[18]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Office-Suite-Selection-Manjaro18-1.jpg +[19]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Summary-manjaro18-1-installation.jpg +[20]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Manjaro18-1-Installation-Progress.jpg +[21]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Restart-Manjaro-18-1-after-installation.jpg +[22]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Login-screen-after-manjaro-18-1-installation.jpg +[23]: https://www.linuxtechi.com/wp-content/uploads/2019/09/KDE-Desktop-Screen-Manjaro-18-1.jpg diff --git a/sources/tech/20190923 Mutation testing by example- How to leverage failure.md b/sources/tech/20190923 Mutation testing by example- How to leverage failure.md new file mode 100644 index 0000000000..f86183f798 --- /dev/null +++ b/sources/tech/20190923 Mutation testing by example- How to leverage failure.md @@ -0,0 +1,195 @@ +[#]: collector: (lujun9972) +[#]: translator: (Morisun029) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Mutation testing by example: How to leverage failure) +[#]: via: (https://opensource.com/article/19/9/mutation-testing-example-tdd) +[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzic) + +Mutation testing by example: How to leverage failure 
+====== +Use planned failure to ensure your code meets expected outcomes and +follow along with the .NET xUnit.net testing framework. +![failure sign at a party, celebrating failure][1] + +In my article _[Mutation testing is the evolution of TDD][2]_, I exposed the power of iteration to guarantee a solution when a measurable test is available. In that article, an iterative approach helped to determine how to implement code that calculates the square root of a given number. + +I also demonstrated that the most effective method is to find a measurable goal or test, then start iterating with best guesses. The first guess at the correct answer will most likely fail, as expected, so the failed guess needs to be refined. The refined guess must be validated against the measurable goal or test. Based on the result, the guess is either validated or must be further refined. + +In this model, the only way to learn how to reach the solution is to fail repeatedly. It sounds counterintuitive, but amazingly, it works. + +Following in the footsteps of that analysis, this article examines the best way to use a DevOps approach when building a solution containing some dependencies. The first step is to write a test that can be expected to fail. + +### The problem with dependencies is that you can't depend on them + +The problem with dependencies, as Michael Nygard wittily expresses in _[Architecture without an end state][3]_, is a huge topic better left for another article. Here, you'll look into potential pitfalls that dependencies tend to bring to a project and how to leverage test-driven development (TDD) to avoid those pitfalls. + +First, pose a real-life challenge, then see how it can be solved using TDD. + +### Who let the cat out? + +![Cat standing on a roof][4] + +In Agile development environments, it's helpful to start building the solution by defining the desired outcomes. Typically, the desired outcomes are described in a [_user story_][5]: + +> _Using my home automation system (HAS), +> I want to control when the cat can go outside, +> because I want to keep the cat safe overnight._ + +Now that you have a user story, you need to elaborate on it by providing some functional requirements (that is, by specifying the _acceptance criteria_). Start with the simplest of scenarios described in pseudo-code: + +> _Scenario #1: Disable cat trap door during nighttime_ +> +> * Given that the clock detects that it is nighttime +> * When the clock notifies the HAS +> * Then HAS disables the Internet of Things (IoT)-capable cat trap door +> + + +### Decompose the system + +The system you are building (the HAS) needs to be _decomposed_–broken down to its dependencies–before you can start working on it. The first thing you must do is identify any dependencies (if you're lucky, your system has no dependencies, which would make it easy to build, but then it arguably wouldn't be a very useful system). + +From the simple scenario above, you can see that the desired business outcome (automatically controlling a cat door) depends on detecting nighttime. This dependency hinges upon the clock. But the clock is not capable of determining whether it is daylight or nighttime. It's up to you to supply that logic. + +Another dependency in the system you're building is the ability to automatically access the cat door and enable or disable it. That dependency most likely hinges upon an API provided by the IoT-capable cat door. 
+
+### Fail fast toward dependency management
+
+To satisfy one dependency, we will build the logic that determines whether the current time is daylight or nighttime. In the spirit of TDD, we will start with a small failure.
+
+Refer to my [previous article][2] for detailed instructions on how to set up the development environment and scaffolds required for this exercise. We will be reusing the same .NET environment and relying on the [xUnit.net][6] framework.
+
+Next, create a new project called HAS (for "home automation system") and create a file called **UnitTest1.cs**. In this file, write the first failing unit test. In this unit test, describe your expectations. For example, when the system runs, if the time is 7pm, then the component responsible for deciding whether it's daylight or nighttime returns the value "Nighttime."
+
+Here is the unit test that describes that expectation:
+
+
+```
+using System;
+using Xunit;
+
+namespace unittest
+{
+   public class UnitTest1
+   {
+       DayOrNightUtility dayOrNightUtility = new DayOrNightUtility();
+
+       [Fact]
+       public void Given7pmReturnNighttime()
+       {
+           var expected = "Nighttime";
+           var actual = dayOrNightUtility.GetDayOrNight();
+           Assert.Equal(expected, actual);
+       }
+   }
+}
+```
+
+By this point, you may be familiar with the shape and form of a unit test. A quick refresher: describe the expectation by giving the unit test a descriptive name, **Given7pmReturnNighttime**, in this example. Then in the body of the unit test, a variable named **expected** is created, and it is assigned the expected value (in this case, the value "Nighttime"). Following that, a variable named **actual** is assigned the actual value (available after the component or service processes the time of day).
+
+Finally, it checks whether the expectation has been met by asserting that the expected and actual values are equal: **Assert.Equal(expected, actual)**.
+
+You can also see in the above listing a component or service called **dayOrNightUtility**. This module is capable of receiving the message **GetDayOrNight** and is supposed to return the value of the type **string**.
+
+Again, in the spirit of TDD, the component or service being described hasn't been built yet (it is merely being described with the intention to prescribe it later). Building it is driven by the described expectations.
+
+Create a new file in the **app** folder and give it the name **DayOrNightUtility.cs**. Add the following C# code to that file and save it:
+
+
+```
+using System;
+
+namespace app {
+   public class DayOrNightUtility {
+       public string GetDayOrNight() {
+           string dayOrNight = "Undetermined";
+           return dayOrNight;
+       }
+   }
+}
+```
+
+Now go to the command line, change directory to the **unittests** folder, and run the test:
+
+
+```
+[Xunit.net 00:00:02.33] unittest.UnitTest1.Given7pmReturnNighttime [FAIL]
+Failed unittest.UnitTest1.Given7pmReturnNighttime
+[...]
+```
+
+Congratulations, you have written the first failing unit test. The unit test was expecting **DayOrNightUtility** to return string value "Nighttime" but instead, it received the string value "Undetermined."
+
+### Fix the failing unit test
+
+A quick and dirty way to fix the failing test is to replace the value "Undetermined" with the value "Nighttime" and save the change:
+
+
+```
+using System;
+
+namespace app {
+   public class DayOrNightUtility {
+       public string GetDayOrNight() {
+           string dayOrNight = "Nighttime";
+           return dayOrNight;
+       }
+   }
+}
+```
+
+Now when we run the test, it passes:
+
+
+```
+Starting test execution, please wait...
+
+Total tests: 1. Passed: 1. Failed: 0. Skipped: 0.
+Test Run Successful.
+Test execution time: 2.6470 Seconds
+```
+
+However, hardcoding the values is basically cheating, so it's better to endow **DayOrNightUtility** with some intelligence. Modify the **GetDayOrNight** method to include some time-calculation logic:
+
+
+```
+public string GetDayOrNight() {
+    string dayOrNight = "Daylight";
+    DateTime time = new DateTime();
+    if(time.Hour < 7) {
+        dayOrNight = "Nighttime";
+    }
+    return dayOrNight;
+}
+```
+
+The method now gets the current time from the system and compares the **Hour** value to see if it is less than 7am. If it is, the logic transforms the **dayOrNight** string value from "Daylight" to "Nighttime." The unit test now passes.
+
+### The start of a test-driven solution
+
+We now have the beginnings of a base case unit test and a viable solution for our time dependency. There are more than a few additional cases to work through.
+
+In the next article, I'll demonstrate how to test for daylight hours and how to leverage failure along the way.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/9/mutation-testing-example-tdd
+
+作者:[Alex Bunardzic][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/alex-bunardzic
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_failure_celebrate.png?itok=LbvDAEZF (failure sign at a party, celebrating failure)
+[2]: https://opensource.com/article/19/8/mutation-testing-evolution-tdd
+[3]: https://www.infoq.com/presentations/Architecture-Without-an-End-State/
+[4]: https://opensource.com/sites/default/files/uploads/cat.png (Cat standing on a roof)
+[5]: https://www.agilealliance.org/glossary/user-stories
+[6]: https://xunit.net/
diff --git a/sources/tech/20190924 A human approach to reskilling in the age of AI.md b/sources/tech/20190924 A human approach to reskilling in the age of AI.md
new file mode 100644
index 0000000000..8eaeb099f1
--- /dev/null
+++ b/sources/tech/20190924 A human approach to reskilling in the age of AI.md
@@ -0,0 +1,121 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (A human approach to reskilling in the age of AI)
+[#]: via: (https://opensource.com/open-organization/19/9/claiming-human-age-of-AI)
+[#]: author: (Jen Kelchner https://opensource.com/users/jenkelchner)
+
+A human approach to reskilling in the age of AI
+======
+Investing in learning agility and core capabilities is as important for
+the individual worker as it is for the decision-making executive.
+Thinking openly can get us there.
+![Person on top of a mountain, arm raise][1]
+
+[The age of AI is upon us][2]. Emerging technologies give humans some relief from routine tasks and allow us to get back to the creative, adaptable creatures many of us prefer being.
+
+So a shift to developing _human_ skills in the workplace should be a critical focus for organizations. In this part of my series on learning agility, we'll take a look at some reasons for a sense of urgency over reskilling our workforce and reconnecting to our humanness.
+
+### The clock is ticking
+
+If you don't believe AI conversations affect you, then I suggest reviewing this 2018 McKinsey Report on [reskilling in the age of automation][3], which provides some interesting statistics. Here are a few applicable nuggets:
+
+  * 62% of executives believe they need to **retrain or replace more than a quarter** of their workforce **by 2023** due to advancing digitization
+  * The **US and Europe face a larger threat** on reskilling than the rest of the world
+  * 70% of execs in companies with more than $500 million in annual revenue state this **will affect more than 25%** of their employees
+
+
+
+No matter where you fall on an organizational chart, automation (and digitalization more generally) is an important topic for you—because the need for reskilling that it introduces will most likely affect you.
+
+But what does this reskilling conversation have to do with core capability development?
+
+To answer _that_ question, let's take a look at a few statistics curated in a [2019 LinkedIn Global Talent Report][4].
+
+When surveyed on the topic of ~~soft skills~~ core human capabilities, global companies had this to say:
+
+  * **92%** agree that they matter as much or more than "hard skills"
+  * **80%** said these skills are increasingly important to company success
+  * Only **41%** have a formal process to identify these skills
+
+
+
+Before panicking at the thought of what these stats could mean to you or your company, let's actually dig into these core capabilities that you already have but may need to brush up on and strengthen.
+
+### Core human capabilities
+
+_What the heck does all this have to do with learning agility_, you may be asking, _and why should I care_?
+
+I recommend catching up with this introduction to [learning agility][5]. There, I define learning agility as "the capacity for adapting to situations and applying knowledge from prior experience—even when you don't know what to do [...], a willingness to learn from all your experiences and then apply that knowledge to tackle new challenges in new situations." In that piece, we also discussed reasons why characteristics associated with learning agility are among the most sought after skills on the planet today.
+
+Too often, [these skills go by the name "soft skills."][6] Explanations usually go something like this: "hard skills" are more like engineering- or science-based skills and, well, "non-peopley" related things. But what many call "soft skills" are really _human skills_—core capabilities anyone can cultivate.
As leaders, we need to continue to change the narrative concerning these core capabilities (for many reasons, not least of which is the fact that the distinction frequently re-entrenches a [gender bias][7], as if skills somehow fit on a spectrum from "soft to hard.")
+
+For two decades, I've heard decision makers choose not to invest in people or leadership development because "there isn't money in soft skills" and "there's no way to track the ROI" on developing them. Fortunately, we're moving out of this tragic mindset, as leaders recognize how digital transformation has reshaped how we connect, build community, and organize for work. Perhaps this has something to do with increasingly pervasive reports (and blowups) we see across ecosystems regarding [toxic work culture][8] or broken leadership styles. Top consulting firms doing [global talent surveys][9] continue to identify crucial breakdowns in talent development pointing right back to our topic at hand.
+
+We all have access to these capabilities, but often we've lacked examples to learn by or have had little training on how to put them to work. Let's look at the list of the most-needed human skills right now, shall we?
+
+Topping the leaderboard moving into 2020:
+
+  * Communication
+  * Relationship building
+  * Emotional intelligence (EQ)
+  * Critical thinking and problem-solving (CQ)
+  * [Learning agility][5] and adaptability quotient (AQ)
+  * Creativity
+
+
+
+If we were to take the items on this list and generalize them into three categories of importance for the future of work, it would look like:
+
+  1. Emotional Quotient
+  2. Adaptability Quotient
+  3. Creativity Quotient
+
+
+
+Some of us have been conditioned to think we're "not creative" because the term "creativity" refers only to things like art, design, or music. However, in this case, "creativity" means the ability to combine ideas, things, techniques, or approaches in new ways—and it's [crucial to innovation][10]. Solving problems in new ways is the [most important skill][11] companies look for when trying to solve their skill-gap problems. (_Spoiler alert: This is learning agility!_) Obviously, our generalized list ignores many nuances (not to mention additional skills we might develop in our people and organizations as contexts shift); however, this is a really great place to start.
+
+### Where do we go from here?
+
+In order to accommodate the demands of tomorrow's organizations, we must:
+
+  * look at retraining and reskilling from early education models to organizational talent development programs, and
+  * adjust our organizational culture and internal frameworks to support being human and innovative.
+
+
+
+This means exploring [open principles][12], agile methodologies, collaborative work models, and continuous states of learning across all aspects of your organization. Digital transformation and reskilling on core capabilities leaves no one—and _no department_—behind.
+
+In our next installment, we'll begin digging into these core capabilities and examine the five dimensions of learning agility with simple ways to apply them.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/open-organization/19/9/claiming-human-age-of-AI
+
+作者:[Jen Kelchner][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/jenkelchner
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/developer_mountain_cloud_top_strong_win.jpg?itok=axK3EX-q (Person on top of a mountain, arm raise)
+[2]: https://appinventiv.com/blog/ai-technology-trends/
+[3]: https://www.mckinsey.com/featured-insights/future-of-work/retraining-and-reskilling-workers-in-the-age-of-automation
+[4]: https://app.box.com/s/c5scskbsz9q6lb0hqb7euqeb4fr8m0bl/file/388525098383
+[5]: https://opensource.com/open-organization/19/8/introduction-learning-agility
+[6]: https://enterprisersproject.com/article/2019/9/6-soft-skills-for-ai-age
+[7]: https://enterprisersproject.com/article/2019/8/why-soft-skills-core-to-IT
+[8]: https://ldr21.com/how-ubers-workplace-crisis-can-save-your-organization-money/
+[9]: https://www.inc.com/scott-mautz/new-deloitte-study-of-10455-millennials-says-employers-are-failing-to-help-young-people-develop-4-crucial-skills.html
+[10]: https://velites.nl/en/2018/11/12/creative-quotient/
+[11]: https://learning.linkedin.com/blog/top-skills/why-creativity-is-the-most-important-skill-in-the-world
+[12]: https://opensource.com/open-organization/resources/open-org-definition
diff --git a/sources/tech/20190924 An advanced look at Python interfaces using zope.interface.md b/sources/tech/20190924 An advanced look at Python interfaces using zope.interface.md
new file mode 100644
index 0000000000..16b4780710
--- /dev/null
+++ b/sources/tech/20190924 An advanced look at Python interfaces using zope.interface.md
@@ -0,0 +1,132 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (An advanced look at Python interfaces using zope.interface)
+[#]: via: (https://opensource.com/article/19/9/zopeinterface-python-package)
+[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
+
+An advanced look at Python interfaces using zope.interface
+======
+Zope.interface helps declare what interfaces exist, which objects
+provide them, and how to query for that information.
+![Snake charmer cartoon with a yellow snake and a blue snake][1]
+
+The **zope.interface** library is a way to overcome ambiguity in Python interface design. Let's take a look at it.
+
+### Implicit interfaces are not zen
+
+The [Zen of Python][2] is loose enough and contradicts itself enough that you can prove anything from it. Let's meditate upon one of its most famous principles: "Explicit is better than implicit."
+
+One thing that traditionally has been implicit in Python is the expected interface. Functions have been documented to expect a "file-like object" or a "sequence." But what is a file-like object? Does it support **.writelines**? What about **.seek**? What is a "sequence"? Does it support step-slicing, such as **a[1:10:2]**?
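+
+To see how this ambiguity plays out, here is a minimal, self-contained sketch; the **copy_lines** function and its docstring contract are hypothetical, invented only to illustrate how an implicit "file-like object" expectation fails at runtime rather than at any interface boundary:
+
+```
+import io
+import sys
+
+def copy_lines(dest, lines):
+    """Expects a "file-like object" -- but which methods, exactly?"""
+    dest.writelines(line + "\n" for line in lines)
+    dest.seek(0)  # an undocumented extra requirement: dest must be seekable
+
+copy_lines(io.StringIO(), ["a", "b"])  # fine: StringIO supports writelines and seek
+
+# sys.stdout is also "file-like", yet the call below would typically raise
+# io.UnsupportedOperation, because seekability was never part of any
+# explicit, checkable contract:
+# copy_lines(sys.stdout, ["a", "b"])
+```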
+
+Originally, Python's answer was the so-called "duck-typing," taken from the phrase "if it walks like a duck and quacks like a duck, it's probably a duck." In other words, "try it and see," which is possibly the most implicit you could possibly get.
+
+In order to make those things explicit, you need a way to express expected interfaces. One of the first big systems written in Python was the [Zope][3] web framework, and it needed those things desperately to make it obvious what rendering code, for example, expected from a "user-like object."
+
+Enter **zope.interface**, which is developed by Zope but published as a separate Python package. **Zope.interface** helps declare what interfaces exist, which objects provide them, and how to query for that information.
+
+Imagine writing a simple 2D game that needs various things to support a "sprite" interface; e.g., indicate a bounding box, but also indicate when the object intersects with a box. Unlike some other languages, in Python, attribute access as part of the public interface is a common practice, instead of implementing getters and setters. The bounding box should be an attribute, not a method.
+
+A method that renders the list of sprites might look like:
+
+
+```
+def render_sprites(render_surface, sprites):
+    """
+    sprites should be a list of objects complying with the Sprite interface:
+    * An attribute "bounding_box", containing the bounding box.
+    * A method called "intersects", that accepts a box and returns
+      True or False
+    """
+    pass # some code that would actually render
+```
+
+The game will have many functions that deal with sprites. In each of them, you would have to specify the expected contract in a docstring.
+
+Additionally, some functions might expect a more sophisticated sprite object, maybe one that has a Z-order. We would have to keep track of which methods expect a Sprite object, and which expect a SpriteWithZ object.
+
+Wouldn't it be nice to be able to make what a sprite is explicit and obvious so that methods could declare "I need a sprite" and have that interface strictly defined? Enter **zope.interface**.
+
+
+```
+from zope import interface
+
+class ISprite(interface.Interface):
+
+    bounding_box = interface.Attribute(
+        "The bounding box"
+    )
+
+    def intersects(box):
+        "Does this intersect with a box"
+```
+
+This code looks a bit strange at first glance. The methods do not include a **self**, which is a common practice, and it has an **Attribute** thing. This is the way to declare interfaces in **zope.interface**. It looks strange because most people are not used to strictly declaring interfaces.
+
+The reason for this practice is that the interface shows how the method will be called, not how it is defined. Because interfaces are not superclasses, they can be used to declare data attributes.
+
+One possible implementation of the interface can be with a circular sprite (note the two imports that the decorators below rely on):
+
+
+```
+import attr
+from zope.interface import implementer
+
+@implementer(ISprite)
+@attr.s(auto_attribs=True)
+class CircleSprite:
+    x: float
+    y: float
+    radius: float
+
+    @property
+    def bounding_box(self):
+        return (
+            self.x - self.radius,
+            self.y - self.radius,
+            self.x + self.radius,
+            self.y + self.radius,
+        )
+
+    def intersects(self, box):
+        # A box intersects a circle if and only if
+        # at least one corner is inside the circle.
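+        # (The box is assumed to be a 4-tuple (left, top, right, bottom),
+        # the same shape bounding_box returns above, so box[:2] and box[2:]
+        # are the two opposite corners used below.)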
+        top_left, bottom_right = box[:2], box[2:]
+        for choose_x_from in (top_left, bottom_right):
+            for choose_y_from in (top_left, bottom_right):
+                x = choose_x_from[0]
+                y = choose_y_from[1]
+                if (((x - self.x) ** 2 + (y - self.y) ** 2) <=
+                    self.radius ** 2):
+                    return True
+        return False
+```
+
+This _explicitly_ declares that the **CircleSprite** class implements the interface. It even enables us to verify that the class implements it properly:
+
+
+```
+from zope.interface import verify
+
+def test_implementation():
+    sprite = CircleSprite(x=0, y=0, radius=1)
+    verify.verifyObject(ISprite, sprite)
+```
+
+This is something that can be run by **pytest**, **nose**, or another test runner, and it will verify that the sprite created complies with the interface. The test is often partial: it will not test anything only mentioned in the documentation, and it will not even test that the methods can be called without exceptions! However, it does check that the right methods and attributes exist. This is a nice addition to the unit test suite and—at a minimum—prevents simple misspellings from passing the tests.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/9/zopeinterface-python-package
+
+作者:[Moshe Zadka][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/moshez
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/getting_started_with_python.png?itok=MFEKm3gl (Snake charmer cartoon with a yellow snake and a blue snake)
+[2]: https://en.wikipedia.org/wiki/Zen_of_Python
+[3]: http://zope.org
diff --git a/sources/tech/20190924 CodeReady Containers- complex solutions on OpenShift - Fedora.md b/sources/tech/20190924 CodeReady Containers- complex solutions on OpenShift - Fedora.md
new file mode 100644
index 0000000000..f3522e9717
--- /dev/null
+++ b/sources/tech/20190924 CodeReady Containers- complex solutions on OpenShift - Fedora.md
@@ -0,0 +1,165 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (CodeReady Containers: complex solutions on OpenShift + Fedora)
+[#]: via: (https://fedoramagazine.org/codeready-containers-complex-solutions-on-openshift-fedora/)
+[#]: author: (Marc Chisinevski https://fedoramagazine.org/author/mchisine/)
+
+CodeReady Containers: complex solutions on OpenShift + Fedora
+======
+
+![][1]
+
+Want to experiment with (complex) solutions on [OpenShift][2] 4.1+? CodeReady Containers (CRC) on a physical Fedora server is a great choice. It lets you:
+
+  * Configure the RAM available to CRC / OpenShift (this is key as we'll deploy Machine Learning, Change Data Capture, Process Automation and other solutions with significant memory requirements)
+  * Avoid installing anything on your laptop
+  * Standardize (on Fedora 30) so that you get the same results every time
+
+
+
+Start by installing CRC and Ansible Agnostic Deployer (AgnosticD) on a Fedora 30 physical server.
Then, you'll use AgnosticD to deploy Open Data Hub on the OpenShift 4.1 environment created by CRC. Let's get started!
+
+### Set up CodeReady Containers
+
+```
+$ dnf config-manager --set-enabled fedora
+$ su -c 'dnf -y install git wget tar qemu-kvm libvirt NetworkManager jq libselinux-python'
+$ sudo systemctl enable --now libvirtd
+```
+
+Let's also add a user.
+
+```
+$ sudo adduser demouser
+$ sudo passwd demouser
+$ sudo usermod -aG wheel demouser
+```
+
+Download and extract CodeReady Containers:
+
+```
+$ su demouser
+$ cd /home/demouser
+$ wget https://mirror.openshift.com/pub/openshift-v4/clients/crc/1.0.0-beta.3/crc-linux-amd64.tar.xz
+$ tar -xvf crc-linux-amd64.tar.xz
+$ cd crc-linux-1.0.0-beta.3-amd64/
+$ sudo cp ./crc /usr/bin
+```
+
+Set the memory available to CRC according to what you have on your physical server. For example, on a physical server with around 100GB you can allocate 80G to CRC as follows:
+
+```
+$ crc config set memory 81920
+$ crc setup
+```
+
+You'll need your pull secret from the Red Hat OpenShift portal.
+
+```
+$ crc start
+```
+
+That's it — you can now login to your OpenShift environment:
+
+```
+eval $(crc oc-env) && oc login -u kubeadmin -p <kubeadmin password> https://api.crc.testing:6443
+```
+
+### Set up Ansible Agnostic Deployer
+
+[github.com/redhat-cop/agnosticd][3] is a fully automated two-phase deployer. Let's deploy it!
+
+```
+$ su demouser
+$ cd /home/demouser
+$ git clone https://github.com/redhat-cop/agnosticd.git
+$ cd agnosticd/ansible
+$ python -m pip install --upgrade --trusted-host files.pythonhosted.org -r requirements.txt
+$ python3 -m pip install --upgrade --trusted-host files.pythonhosted.org -r requirements.txt
+$ pip3 install kubernetes
+$ pip3 install openshift
+$ pip install kubernetes
+$ pip install openshift
+```
+
+### Set up Open Data Hub on Code Ready Containers
+
+[Open Data Hub][4] is a machine-learning-as-a-service platform built on OpenShift and Kafka/Strimzi. It integrates a collection of open source projects.
+
+First, create an Ansible inventory file with the following content.
+
+```
+$ cat inventory
+127.0.0.1 ansible_connection=local
+```
+
+Set up the WORKLOAD environment variable so that Ansible Agnostic Deployer knows that we want to deploy Open Data Hub.
+
+```
+$ export WORKLOAD="ocp4-workload-open-data-hub"
+$ sudo cp /usr/local/bin/ansible-playbook /usr/bin/ansible-playbook
+```
+
+We are only deploying one Open Data Hub project, so set _user_count_ to 1. You can set up workshops for many students by setting _user_count_.
+
+An OpenShift project (with Open Data Hub in our case) will be created for each student.
+
+```
+$ eval $(crc oc-env) && oc login -u kubeadmin -p <kubeadmin password> https://api.crc.testing:6443
+$ ansible-playbook -i inventory ./configs/ocp-workloads/ocp-workload.yml -e"ocp_workload=${WORKLOAD}" -e"ACTION=create" -e"user_count=1" -e"ocp_username=kubeadmin" -e"ansible_become_pass=<password>" -e"silent=False"
+$ oc project open-data-hub-user1
+$ oc get route
+NAME         HOST/PORT                                         PATH   SERVICES     PORT       TERMINATION     WILDCARD
+jupyterhub   jupyterhub-open-data-hub-user1.apps-crc.testing          jupyterhub   8080-tcp   edge/Redirect   None
+```
+
+On your laptop, add _jupyterhub-open-data-hub-user1.apps-crc.testing_ to your _/etc/hosts_ file.
For example: + +``` +127.0.0.1 localhost fedora30 console-openshift-console.apps-crc.testing oauth-openshift.apps-crc.testing mapit-app-management.apps-crc.testing mapit-spring-pipeline-demo.apps-crc.testing jupyterhub-open-data-hub-user1.apps-crc.testing jupyterhub-open-data-hub-user1.apps-crc.testing +``` + +On your laptop: + +``` +$ sudo ssh marc@fedora30 -L 443:jupyterhub-open-data-hub-user1.apps-crc.testing:443 +``` + +You can now browse to [https://jupyterhub-open-data-hub-user1.apps-crc.testing][5]. + +Now that we have Open Data Hub ready, you could deploy something interesting on it. For example, you could deploy IBM’s Qiskit open source framework for quantum computing. For more information, refer to Video no. 9 at [this YouTube playlist][6], and the [Github repo here][7]. + +You could also deploy plenty of other useful tools for Process Automation, Change Data Capture, Camel Integration, and 3scale API Management. You don’t have to wait for articles on these, though. Step-by-step short videos are already [available on YouTube][6]. + +The corresponding step-by-step instructions are [also on YouTube][6]. You can also follow along with this article using the [GitHub repo][8]. + +* * * + +_Photo by _[_Marta Markes_][9]_ on _[_Unsplash_][10]_._ + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/codeready-containers-complex-solutions-on-openshift-fedora/ + +作者:[Marc Chisinevski][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/mchisine/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2019/09/codeready-containers-816x345.jpg +[2]: https://fedoramagazine.org/run-openshift-locally-minishift/ +[3]: https://github.com/redhat-cop/agnosticd +[4]: https://opendatahub.io/ +[5]: https://jupyterhub-open-data-hub-user1.apps-crc.testing/ +[6]: https://www.youtube.com/playlist?list=PLg1pvyPzFye2UtQjZTSjoXhFdqkGK6exw +[7]: https://github.com/marcredhat/crcdemos/blob/master/IBMQuantum-qiskit +[8]: https://github.com/marcredhat/crcdemos/tree/master/fedora +[9]: https://unsplash.com/@vnevremeni?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText +[10]: https://unsplash.com/s/photos/container?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText diff --git a/sources/tech/20190924 Integrate online documents editors, into a Python web app using ONLYOFFICE.md b/sources/tech/20190924 Integrate online documents editors, into a Python web app using ONLYOFFICE.md new file mode 100644 index 0000000000..35c101ed2c --- /dev/null +++ b/sources/tech/20190924 Integrate online documents editors, into a Python web app using ONLYOFFICE.md @@ -0,0 +1,381 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Integrate online documents editors, into a Python web app using ONLYOFFICE) +[#]: via: (https://opensourceforu.com/2019/09/integrate-online-documents-editors-into-a-python-web-app-using-onlyoffice/) +[#]: author: (Aashima Sharma https://opensourceforu.com/author/aashima-sharma/) + +Integrate online documents editors, into a Python web app using ONLYOFFICE +====== + +[![][1]][2] + +_[ONLYOFFICE][3] is an open-source collaborative office suite distributed under the terms of GNU AGPL v.3 license. 
It contains three editors, for text documents, spreadsheets, and presentations, and features the following:_
+
+  * Viewing, editing and co-editing .docx, .xlsx, and .pptx files. OOXML as a core format ensures high compatibility with Microsoft Word, Excel and PowerPoint files.
+  * Editing other popular formats (.odt, .rtf, .txt, .html, .ods, .csv, .odp) with internal conversion to OOXML.
+  * Familiar tabbed interface.
+  * Collaboration tools: two co-editing modes (fast and strict), track changes, comments and integrated chat.
+  * Flexible access rights management: full access, read only, review, form filling and comment.
+  * Building your own add-ons using the API.
+  * 250 languages available and hieroglyphic alphabets.
+
+
+
+The API allows developers to integrate ONLYOFFICE editors into their own websites and apps written in any programming language, and to set up and manage the editors.
+
+To integrate ONLYOFFICE editors, we will need an integration app connecting the editors (ONLYOFFICE Document Server) and your service. To use the editors within your interface, the app should grant the following permissions to ONLYOFFICE:
+
+  * Adding and executing custom code.
+  * Anonymous access for downloading and saving files. It means that the editors only communicate with your service on the server side without involving any user authorization data from the client side (browser cookies).
+  * Adding new buttons to the UI (for example, "Open in ONLYOFFICE", "Edit in ONLYOFFICE").
+  * Opening a new page where ONLYOFFICE can execute the script to add an editor.
+  * Ability to specify Document Server connection settings.
+
+
+
+There are several cases of successful integration with popular collaboration solutions such as Nextcloud, ownCloud, Alfresco, Confluence and SharePoint, via official ready-to-use connectors offered by ONLYOFFICE.
+
+One of the most notable integration cases is the integration of the ONLYOFFICE editors with its open-source collaboration platform written in C#. This platform features document and project management, CRM, an email aggregator, a calendar, a user database, blogs, forums, polls, a wiki, and an instant messenger.
+
+Integrating online editors with the CRM and Projects modules, you can:
+
+  * Attach documents to CRM opportunities and cases, or to project tasks and discussions, or even create a separate folder with documents, spreadsheets, and presentations related to the project.
+  * Create new docs, sheets, and presentations right in CRM or in the Project module.
+  * Open and edit attached documents, or download and delete them.
+  * Import contacts to your CRM in bulk from a CSV file as well as export the customer database as a CSV file.
+
+
+
+In the Mail module, you can attach files stored in the Documents module or insert a link to the needed document into the message body. When ONLYOFFICE users receive a message with an attached document, they are able to: download the attachment, view the file in the browser, open the file for editing or save it to the Documents module. As mentioned above, if the format differs from OOXML, the file will be automatically converted to .docx/.xlsx/.pptx and its copy will be saved in the original format as well.
+
+In this article, you will see the integration process of ONLYOFFICE into a Document Management System (DMS) written in Python, one of the most popular programming languages.
The following steps will show you how to create all the necessary elements to enable working and collaborating on documents within the DMS interface: viewing, editing, co-editing, and saving files, plus user access management. This may also serve as an example of integration into your own Python app.
+
+**1\. What you will need**
+
+Let's start off by creating the key components of the integration process: [_ONLYOFFICE Document Server_][4] and a DMS written in Python.
+
+1.1 To install ONLYOFFICE Document Server you can choose from multiple installation options: compile the source code available on GitHub, use .deb or .rpm packages or the Docker image.
+We recommend installing Document Server and all the necessary dependencies with only one command using the Docker image. Please note that, choosing this method, you need the latest Docker version installed.
+
+```
+docker run -itd -p 80:80 onlyoffice/documentserver-de
+```
+
+1.2 We need to develop the DMS in Python. If you have one already, please check if it meets the following conditions:
+
+  * Has a list of files you need to open for viewing/editing
+  * Allows downloading files
+
+
+
+For the app, we will use the Bottle framework. We will install it in the working directory using the following command:
+
+```
+pip install bottle
+```
+
+Then we create the app's code, _main.py_, and the template _index.tpl_.
+We add the following code into this _main.py_ file:
+
+```
+from bottle import route, run, template, get, static_file # connecting the framework and the necessary components
+
+@route('/') # setting up routing for requests for /
+def index():
+    return template('index.tpl') # showing template in response to request
+
+run(host="localhost", port=8080) # running the application on port 8080
+```
+
+Once we run the app, an empty page will be rendered on http://localhost:8080.
+
+In order for the Document Server to be able to create new docs, add default files and form a list of their names in the template, we should create a folder _files_ and put 3 files (.docx, .xlsx and .pptx) in there.
+
+To read these files' names, we use the _listdir_ component:
+
+```
+from os import listdir
+```
+
+Now let's create a variable for all the file names from the files folder:
+
+```
+sample_files = [f for f in listdir('files')]
+```
+
+To use this variable in the template, we need to pass it through the _template_ method:
+
+```
+def index():
+    return template('index.tpl', sample_files=sample_files)
+```
+
+Here's this variable in the template:
+
+```
+%for file in sample_files:
+{{file}}
+%end
+```
+
+We restart the application to see the list of filenames on the page.
+Here's the method to make these files available for all the app users:
+
+```
+@get("/files/<filepath:path>")
+def show_sample_files(filepath):
+    return static_file(filepath, root="files")
+```
+
+**2\. How to view docs in ONLYOFFICE within the Python App**
+Once all the components are ready, let's add functions to make the editors operational within the app interface.
+
+The first option enables users to open and view docs. Connect the document editors API in the template:
+
+```
+<script type="text/javascript" src="editor_url/web-apps/apps/api/documents/api.js"></script>
+```
+
+_editor_url_ is a link to the document editors.
+
+A button to open each file for viewing:
+
+```
+<button onclick="view('{{file}}')">view</button>
+```
+
+Now we need to add a div with _id_, in which the document editor will be opened:
+
+```
+<div id="editor"></div>
+```
+
+To open the editor, we have to call a function:
+
+```
+var editor;
+
+function view(filename) {
+    const filepath = 'files/' + filename;
+    if (editor) {
+        editor.destroyEditor()
+    }
+    editor = new DocsAPI.DocEditor("editor",
+        {
+            documentType: get_file_type(filepath),
+            document: {
+                url: "host_url" + '/' + filepath,
+                title: filename
+            },
+            editorConfig: {mode: 'view'}
+        });
+}
+```
+
+There are two arguments for the DocEditor function: the id of the element where the editors will be opened, and a JSON with the editors' settings.
+In this example, the following mandatory parameters are used:
+
+  * _documentType_ is identified by its format (.docx, .xlsx, .pptx for texts, spreadsheets and presentations accordingly)
+  * _document.url_ is the link to the file you are going to open.
+  * _editorConfig.mode_.
+
+
+
+We can also add a _title_ that will be displayed in the editors.
+
+So, now we have everything to view docs in our Python app.
+
+**3\. How to edit docs in ONLYOFFICE within the Python App**
+First of all, add the "Edit" button:
+
+```
+<button onclick="edit('{{file}}')">edit</button>
+```
+
+Then create a new function that will open files for editing. It is similar to the View function.
+Now we have 3 functions:
+
+```
+function edit(filename) {
+    const filepath = 'files/' + filename;
+    if (editor) {
+        editor.destroyEditor()
+    }
+    editor = new DocsAPI.DocEditor("editor",
+        {
+            documentType: get_file_type(filepath),
+            document: {
+                url: "host_url" + '/' + filepath,
+                title: filename
+            }
+        });
+}
+```
+
+_destroyEditor_ is called to close an open editor.
+As you might notice, the _editorConfig_ parameter is absent from the _edit()_ function, because it has by default the value _{"mode": "edit"}_.
+
+Now we have everything to open docs for co-editing in your Python app.
+
+**4\. How to co-edit docs in ONLYOFFICE within the Python App**
+Co-editing is implemented by using the same document.key for the same document in the editors' settings. Without this key, the editors will create a new editing session each time you open the file.
+
+Set unique keys for each doc to make users connect to the same editing session for co-editing. The format of the key should be the following: _filename + "_key"_. The next step is to add it to all of the configs where document is present.
+
+```
+document: {
+    url: "host_url" + '/' + filepath,
+    title: filename,
+    key: filename + '_key'
+},
+```
+
+**5\. How to save docs in ONLYOFFICE within the Python App**
+Every time we change and save the file, ONLYOFFICE stores all its versions. Let's see closely how it works. After we close the editor, Document Server builds the file version to be saved and sends a request to the callbackUrl address. This request contains document.key and a link to the just-built file.
+document.key is used to find the old version of the file and replace it with the new one. As we do not have any database here, we just send the filename using callbackUrl.
+Specify the _callbackUrl_ parameter in _editorConfig.callbackUrl_ and add it to the _edit()_ method:
+
+```
+function edit(filename) {
+    const filepath = 'files/' + filename;
+    if (editor) {
+        editor.destroyEditor()
+    }
+    editor = new DocsAPI.DocEditor("editor",
+        {
+            documentType: get_file_type(filepath),
+            document: {
+                url: "host_url" + '/' + filepath,
+                title: filename,
+                key: filename + '_key'
+            },
+            editorConfig: {
+                mode: 'edit',
+                callbackUrl: "host_url" + '/callback' + '?filename=' + filename // add file name as a request parameter
+            }
+        });
+}
+```
+
+Write a method that will save the file after getting the POST request to the _/callback_ address:
+
+```
+import requests # used to download the built file from the Document Server
+from bottle import post, request
+
+@post("/callback") # processing POST requests for /callback
+def callback():
+    if request.json['status'] == 2:
+        file = requests.get(request.json['url']).content
+        with open('files/' + request.query['filename'], 'wb') as f:
+            f.write(file)
+    return "{\"error\":0}"
+```
+
+_status 2_ means the file is built and ready to be saved.
+
+When we close the editor, the new version of the file will be saved to storage.
+
+**6\.
How to manage users in ONLYOFFICE within the Python App**
+If there are users in your app, and you need to see who exactly is editing a doc, write their identifiers (id and name) in the editors' configuration.
+Add the ability to select a user in the interface:
+
+```
+<select id="user_selector" onchange="pick_user()">
+    <option value="1">User 1</option>
+    <option value="2">User 2</option>
+</select>
+```
+
+If you add a call to the function _pick_user()_ at the beginning of the _<script>_ tag, it will initialize, in the function itself, the variables responsible for the id and the user name.
+
+```
+function pick_user() {
+    const user_selector = document.getElementById("user_selector");
+    this.current_user_name = user_selector.options[user_selector.selectedIndex].text;
+    this.current_user_id = user_selector.options[user_selector.selectedIndex].value;
+}
+```
+
+Make use of _editorConfig.user.id_ and _editorConfig.user.name_ to configure the user's settings. Add these parameters to the editors' configuration in the file editing function:
+
+```
+function edit(filename) {
+    const filepath = 'files/' + filename;
+    if (editor) {
+        editor.destroyEditor()
+    }
+    editor = new DocsAPI.DocEditor("editor",
+        {
+            documentType: get_file_type(filepath),
+            document: {
+                url: "host_url" + '/' + filepath,
+                title: filename
+            },
+            editorConfig: {
+                mode: 'edit',
+                callbackUrl: "host_url" + '/callback' + '?filename=' + filename,
+                user: {
+                    id: this.current_user_id,
+                    name: this.current_user_name
+                }
+            }
+        });
+}
+```
+
+Using this approach, you can integrate ONLYOFFICE editors into your app written in Python and get all the necessary tools for working and collaborating on docs. For more integration examples (Java, Node.js, PHP, Ruby), please refer to the official [_API documentation_][5].
+
+**By: Maria Pashkina**
+
+--------------------------------------------------------------------------------
+
+via: https://opensourceforu.com/2019/09/integrate-online-documents-editors-into-a-python-web-app-using-onlyoffice/
+
+作者:[Aashima Sharma][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensourceforu.com/author/aashima-sharma/
+[b]: https://github.com/lujun9972
+[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/09/Typist-composing-text-in-laptop.jpg?resize=696%2C420&ssl=1 (Typist composing text in laptop)
+[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/09/Typist-composing-text-in-laptop.jpg?fit=900%2C543&ssl=1
+[3]: https://www.onlyoffice.com/en/
+[4]: https://www.onlyoffice.com/en/developer-edition.aspx
+[5]: https://api.onlyoffice.com/editors/basic
diff --git a/sources/tech/20190925 Debugging in Emacs- The Grand Unified Debugger.md b/sources/tech/20190925 Debugging in Emacs- The Grand Unified Debugger.md
new file mode 100644
index 0000000000..f1a7fe8060
--- /dev/null
+++ b/sources/tech/20190925 Debugging in Emacs- The Grand Unified Debugger.md
@@ -0,0 +1,97 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Debugging in Emacs: The Grand Unified Debugger)
+[#]: via: (https://opensourceforu.com/2019/09/debugging-in-emacs-the-grand-unified-debugger/)
+[#]: author: (Vineeth Kartha https://opensourceforu.com/author/vineeth-kartha/)
+
+Debugging in Emacs: The Grand Unified Debugger
+======
+
+[![][1]][2]
+
+_This article briefly explores the features of the Grand Unified Debugger, a debugging tool for Emacs._
+
+If you are a C/C++ developer, it is highly likely that you have crossed paths with GDB (the
+If you are a C/C++ developer, it is highly likely that you have crossed paths with GDB (the GNU debugger), which is, without doubt, one of the most powerful and unrivalled debuggers out there. Its only drawback is that it is command line based, and though that offers a lot of power, it is sometimes a bit restrictive as well. This is why smart people started coming up with IDEs to integrate editors and debuggers, and give them a GUI. There are still developers who believe that using the mouse reduces productivity and that mouse-click based GUIs are temptations by the devil.
+Since Emacs is one of the coolest text editors out there, I am going to show you how to write, compile and debug code without having to touch the mouse or move out of Emacs.
+
+![Figure 1: Compile command in Emacs’ mini buffer][3]
+
+![Figure 2: Compilation status][4]
+
+The Grand Unified Debugger, or GUD as it is commonly known, is an Emacs mode in which GDB can be run from within Emacs. This provides all the features of Emacs in GDB. The user does not have to move out of the editor to debug the code written.
+
+**Setting the stage for the Grand Unified Debugger**
+If you are using a Linux machine, then it is likely you will have GDB and gcc already installed. The next step is to ensure that Emacs is also installed. I am assuming that the readers are familiar with GDB and have used it at least for basic debugging. If not, please do check out some quick introductions to GDB that are widely available on the Internet.
+
+For people who are new to Emacs, let me introduce you to some basic terminology. Throughout this article, you will see shortcut commands such as C-c, M-x, etc. C means the Ctrl key and M means the Alt key. C-c means the Ctrl + c keys are pressed. If you see C-c c, it means Ctrl + c is pressed followed by c. Also, in Emacs, the main area where you edit the text is called the main buffer, and the area at the bottom of the Emacs window, where commands are entered, is called the mini buffer.
+Start Emacs and, to create a new file, press _C-x C-f_. This will prompt you to enter a file name. Let us call our file ‘buggyFactorial.cpp’. Once the file is open, type in the code shown below:
+
+```
+#include <cstdio>
+#include <cassert>
+int factorial(int num) {
+int product = 1;
+while(num--) {
+product *= num;
+}
+return product;
+}
+int main() {
+int result = factorial(5);
+assert(result == 120);
+}
+```
+
+Save the file with _C-x C-s_. Once the file is saved, it’s time to compile the code. Press _M-x_ and in the prompt that comes up, type in _compile_ and hit _Enter_. Then, in the prompt, replace whatever is there with _g++ -g buggyFactorial.cpp_ and again hit _Enter_.
+
+This will open up another buffer in Emacs that will show the status of the compile and, hopefully, if the code typed in is correct, you will get a buffer like the one shown in Figure 2.
+
+To hide this compilation status buffer, make sure your cursor is in the compilation buffer (you can do this without the mouse using _C-x o_, which is used to move the cursor from one open buffer to the other), and then press _C-x 0_. The next step is to run the code and see if it works fine. Press _M-!_ and in the mini buffer prompt, type _./a.out_.
+
+The mini buffer says the assertion failed. Clearly, something is wrong with the code, because factorial(5) should be 120. So let’s debug the code now.
+
+![Figure 3: Output of the code in the mini buffer][5]
+
+![Figure 4: The GDB buffer in Emacs][6]
+
+**Debugging the code using GUD**
+Now, since we have the code compiled, it’s time to see what is wrong with it. Press _M-x_ and, in the prompt, enter _gdb_. In the next prompt that appears, write _gdb -i=mi a.out_, which will start GDB in the Emacs buffer and, if everything goes well, you should get the window that’s shown in Figure 4.
+At the gdb prompt, type _break main_ and then _r_ to run the program. This should start running the program and should break at _main()_.
+
+As soon as GDB hits the break point at main, a new buffer will open up showing the code that you are debugging. Notice the red dot on the left side, which is where your breakpoint was set. There will be a small indicator that shows which line of the code you are on. Currently, this will be the same as the break point itself (Figure 5).
+
+![Figure 5: GDB and the code in split windows][7]
+
+![Figure 6: Show the local variables in a separate frame in Emacs][8]
+
+To debug the factorial function, we need to step into it. For this, you can either use the _gdb_ prompt and the gdb command _step_, or you can use the Emacs shortcut _C-c C-s_. There are other similar shortcuts, but I prefer using the GDB commands. So I will use them in the rest of this article.
+Let us keep an eye on the local variables while stepping through the factorial function. Check out Figure 6 for how to get an Emacs frame to show the local variables.
+
+Step through the code in the GDB prompt and watch the value of the local variable change. In the first iteration of the loop itself, we see a problem. The value of the product should have been 5 and not 4.
+
+This is where I leave you; it is now up to the readers to explore and discover the magic land called GUD mode. Every gdb command works in GUD mode as well. I leave the fix to this code as an exercise for readers. Explore and see how you can customise things to make your workflow simpler and become more productive while debugging.
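+
+As a starting point for that customisation, here are two settings many GUD users put in their Emacs init file (a minimal sketch, assuming the gdb-mi interface used above; adjust to taste):
+
+```
+;; in ~/.emacs or ~/.emacs.d/init.el
+(setq gdb-many-windows t) ; start GUD with source, locals, breakpoints and I/O windows in one frame
+(setq gdb-show-main t)    ; keep the buffer containing main() visible next to the GDB prompt
+```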
+ +-------------------------------------------------------------------------------- + +via: https://opensourceforu.com/2019/09/debugging-in-emacs-the-grand-unified-debugger/ + +作者:[Vineeth Kartha][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensourceforu.com/author/vineeth-kartha/ +[b]: https://github.com/lujun9972 +[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Screenshot-from-2019-09-25-15-39-46.png?resize=696%2C440&ssl=1 (Screenshot from 2019-09-25 15-39-46) +[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Screenshot-from-2019-09-25-15-39-46.png?fit=800%2C506&ssl=1 +[3]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure_1.png?resize=350%2C228&ssl=1 +[4]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure_2.png?resize=350%2C228&ssl=1 +[5]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure_3.png?resize=350%2C228&ssl=1 +[6]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure_4.png?resize=350%2C227&ssl=1 +[7]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure_5.png?resize=350%2C200&ssl=1 +[8]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure_6.png?resize=350%2C286&ssl=1 diff --git a/sources/tech/20190925 Mutation testing by example- Execute the test.md b/sources/tech/20190925 Mutation testing by example- Execute the test.md new file mode 100644 index 0000000000..2706e6dae1 --- /dev/null +++ b/sources/tech/20190925 Mutation testing by example- Execute the test.md @@ -0,0 +1,163 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Mutation testing by example: Execute the test) +[#]: via: (https://opensource.com/article/19/9/mutation-testing-example-execute-test) +[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzic) + +Mutation testing by example: Execute the test +====== +Use the logic created so far in this series to implement functioning +code, then use failure and unit testing to make it better. +![A cat.][1] + +The [second article][2] in this series demonstrated how to implement the logic for determining whether it's daylight or nighttime in a home automation system (HAS) application that controls locking and unlocking a cat door. This third article explains how to write code to use that logic in an application that locks a door at night and unlocks it during daylight hours. + +As a reminder, set yourself up to follow along using the .NET xUnit.net testing framework by following the [instructions here][3]. + +### Disable the cat trap door during nighttime + +Assume the cat door is a sophisticated Internet of Things (IoT) product that has an IP address and can be accessed by sending a request to its API. For the sake of brevity, this series doesn't go into how to program an IoT device; rather, it simulates the service to keep the focus on test-driven development (TDD) and mutation testing. + +Start by writing a failing unit test: + + +``` +[Fact] +public void GivenNighttimeDisableTrapDoor() { +   var expected = "Cat trap door disabled"; +   var timeOfDay = dayOrNightUtility.GetDayOrNight(nightHour); +   var actual = catTrapDoor.Control(timeOfDay); +   Assert.Equal(expected, actual); +} +``` + +This describes a brand new component or service (**catTrapDoor**). 
That component (or service) has the capability to control the trap door given the current time. Now it's time to implement **catTrapDoor**.
+
+To simulate this service, you must first describe its capabilities by using an interface. Create a new file in the app folder and name it **ICatTrapDoor.cs** (by convention, an interface name starts with an uppercase letter **I**). Add the following code to that file:
+
+
+```
+namespace app{
+   public interface ICatTrapDoor {
+       string Control(string dayOrNight);
+   }
+}
+```
+
+This interface is not capable of functioning. It merely describes your intention when building the **CatTrapDoor** service. Interfaces are a nice way to create abstractions of the services you are working with. In a way, you could regard this interface as an API of the **CatTrapDoor** service.
+
+To implement the API, create a new file in the app folder and name it **FakeCatTrapDoor.cs**. Enter the following code into the class file:
+
+
+```
+namespace app{
+   public class FakeCatTrapDoor : ICatTrapDoor {
+       public string Control(string dayOrNight) {
+           string trapDoorStatus = "Undetermined";
+           if(dayOrNight == "Nighttime") {
+               trapDoorStatus = "Cat trap door disabled";
+           }
+
+           return trapDoorStatus;
+       }
+   }
+}
+```
+
+This new **FakeCatTrapDoor** class implements the interface **ICatTrapDoor**. Its method **Control** accepts the string value **dayOrNight** and checks whether the value passed in is "Nighttime." If it is, it modifies **trapDoorStatus** from "Undetermined" to "Cat trap door disabled" and returns that value to the calling client.
+
+Why is it called **FakeCatTrapDoor**? Because it's not a representation of the real cat trap door. The fake just helps you work out the processing logic. Once your logic is airtight, the fake service is replaced with the real service (this topic is reserved for the discipline of integration testing).
+
+With everything implemented, all the unit tests pass when they run:
+
+
+```
+Starting test execution, please wait...
+
+Total tests: 3. Passed: 3. Failed: 0. Skipped: 0.
+Test Run Successful.
+Test execution time: 1.3913 Seconds
+```
+
+### Enable the cat trap door during daytime
+
+It's time to look at the next scenario in our user story:
+
+> _Scenario #2: Enable cat trap door during daylight_
+>
+> * Given that the clock detects the daylight
+> * When the clock notifies the HAS
+> * Then the HAS enables the cat trap door
+>
+
+
+This should be easy: it's just the flip side of the first scenario. First, write the failing test. Add the following unit test to your **UnitTest1.cs** file in the **unittest** folder:
+
+
+```
+[Fact]
+public void GivenDaylightEnableTrapDoor() {
+   var expected = "Cat trap door enabled";
+   var timeOfDay = dayOrNightUtility.GetDayOrNight(dayHour);
+   var actual = catTrapDoor.Control(timeOfDay);
+   Assert.Equal(expected, actual);
+}
+```
+
+You can expect to receive a "Cat trap door enabled" notification when sending the "Daylight" status to the **catTrapDoor** service. When you run the unit tests, you see the result you expect: the new test fails:
+
+
+```
+Starting test execution, please wait...
+[Xunit unittest.UnitTest1.UnitTest1.GivenDaylightEnableTrapDoor [FAIL]
+Failed unittest.UnitTest1.UnitTest1.GivenDaylightEnableTrapDoor
+[...]
+```
+
+The unit test expected to receive a "Cat trap door enabled" notification but instead was notified that the cat trap door status is "Undetermined." Cool; now's the time to fix this minor failure.
+
+Adding three lines of code to the **FakeCatTrapDoor** does the trick:
+
+
+```
+if(dayOrNight == "Daylight") {
+   trapDoorStatus = "Cat trap door enabled";
+}
+```
+
+Run the unit tests again, and all tests pass:
+
+
+```
+Starting test execution, please wait...
+
+Total tests: 4. Passed: 4. Failed: 0. Skipped: 0.
+Test Run Successful.
+Test execution time: 2.4888 Seconds
+```
+
+Awesome! Everything looks good: all the unit tests are green, and you have a rock-solid solution. Thank you, TDD!
+
+### Not so fast!
+
+Experienced engineers would not be convinced that the solution is rock-solid. Why? Because the solution hasn't been mutated yet. To dive deeply into what mutation is and why it's important, be sure to read the final article in this series.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/9/mutation-testing-example-execute-test
+
+作者:[Alex Bunardzic][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]: https://opensource.com/users/alex-bunardzic
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cat_pet_animal.jpg?itok=HOrVTfBZ (A cat.)
+[2]: https://opensource.com/article/19/9/mutation-testing-example-part-2-failure-experimentation
+[3]: https://opensource.com/article/19/8/mutation-testing-evolution-tdd
diff --git a/sources/tech/20190926 3 open source social platforms to consider.md b/sources/tech/20190926 3 open source social platforms to consider.md
new file mode 100644
index 0000000000..dddde6dc77
--- /dev/null
+++ b/sources/tech/20190926 3 open source social platforms to consider.md
@@ -0,0 +1,76 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (3 open source social platforms to consider)
+[#]: via: (https://opensource.com/article/19/9/open-source-social-networks)
+[#]: author: (Jaouhari Youssef https://opensource.com/users/jaouharihttps://opensource.com/users/danarelhttps://opensource.com/users/osmomjianhttps://opensource.com/users/dff)
+
+3 open source social platforms to consider
+======
+A photo-sharing platform, a privacy-friendly social network, and a web
+application for building and sharing portfolios.
+![Hands holding a mobile phone with open on the screen][1]
+
+It is no mystery why modern social media platforms were designed to be addictive: the more we consult them, the more data they have to fuel them—which enables them to grow smarter and bigger and more powerful.
+
+The massive, global interest in these platforms has created the attention economy, and people's focused mental engagement is the new gold in the age of information abundance. As economist, political scientist, and cognitive psychologist Herbert A. Simon said in [_Designing organizations for an information-rich world_][2], "the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes." And information consumes our attention, a resource we only have so much of.
+
+According to [GlobalWebIndex][3], we are now spending an average of 142 minutes on social media and messaging platforms daily, 63% more than the 90 minutes we spent on these platforms just seven years ago.
This can be explained by the fact that these platforms have grown more intelligent over time by studying the minds and behaviors of users and applying those findings to boost their appeal. + +Of relevance here is the psychological concept [variable-ratio schedule][4], which gives rewards after an average number of responses but on an unpredictable schedule. One example is slot machines, which may provide a reward an average of every five games, but the players don't know the specific number of games (one, two, seven, or even 15) they must play before obtaining a reward. This schedule leads to a high response rate and strong engagement. + +Knowing all of this, what can we do to make things better and loosen the grip social networks have on us and our data? I suggest the answer is migrating to open source social platforms, which I believe consider the humane aspect of technology more than private companies do. Here are three open source social platforms to consider. + +### Pixelfed + +[Pixelfed][5] is a photo-sharing platform that is ad-free and privacy-focused, which means no third party is making a profit from your data. Posts are in chronological order, which means there is no algorithm making distinctions between content. + +To join the network, you can pick one of the servers on the [list of instances][6], or you can [install and run][7] your own Pixelfed instance. + +Once you are set up, you can connect with other Pixelfed instances. This is known as federation, which means many instances of a software (in this case, Pixelfed) share data (in this case, pictures). When you federate with another instance of Pixelfed, you can see and interact with pictures posted to other accounts. + +The project is ongoing and needs the community's support to grow. Check [Pixelfed's GitHub][8] page for more information about contributing. + +### Okuna + +[Okuna][9] is an open source, privacy-friendly social network. It is committed to being a positive influence on society and the environment, plus it donates 30% of its profits to worthy causes. + +### Mahara + +[Mahara][10] is an open source web application for building and sharing electronic portfolios. (The word _mahara_ is Māori for _memory_ or _thoughtful consideration_.) With Mahara, you can create a meaningful and verifiable professional profile, but all your data belongs to you rather than a corporate sponsor. It is customizable and can be integrated into other web services. + +You can try Mahara on its [demo site][11]. + +### Engage for change + +If you want to know more about the impact of the attention economy on our lives and engage for positive change, take a look at the [Center for Humane Technology][12], an organization trying to temper the attention economy and make technology more humane. Its aim is to spur change that will protect human vulnerabilities from being exploited and therefore build a better society. + +As Sonya Parker said, "whatever you focus your attention on will become important to you even if it's unimportant." So let's focus our attention on building a better world for all. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/9/open-source-social-networks + +作者:[Jaouhari Youssef][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jaouharihttps://opensource.com/users/danarelhttps://opensource.com/users/osmomjianhttps://opensource.com/users/dff +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolserieshe_rh_041x_0.png?itok=tfg6_I78 (Hands holding a mobile phone with open on the screen) +[2]: https://digitalcollections.library.cmu.edu/awweb/awarchive?type=file&item=33748 +[3]: https://www.digitalinformationworld.com/2019/01/how-much-time-do-people-spend-social-media-infographic.html +[4]: https://dictionary.apa.org/variable-ratio-schedule +[5]: https://pixelfed.org/ +[6]: https://pixelfed.org/join +[7]: https://docs.pixelfed.org/installing-pixelfed/ +[8]: https://github.com/pixelfed/pixelfed +[9]: https://www.okuna.io/en/home +[10]: https://mahara.org/ +[11]: https://demo.mahara.org/ +[12]: https://humanetech.com/problem/ diff --git a/sources/tech/20190926 Mutation testing by example- Evolving from fragile TDD.md b/sources/tech/20190926 Mutation testing by example- Evolving from fragile TDD.md new file mode 100644 index 0000000000..4ce6e23232 --- /dev/null +++ b/sources/tech/20190926 Mutation testing by example- Evolving from fragile TDD.md @@ -0,0 +1,258 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Mutation testing by example: Evolving from fragile TDD) +[#]: via: (https://opensource.com/article/19/9/mutation-testing-example-definition) +[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzichttps://opensource.com/users/alex-bunardzichttps://opensource.com/users/marcobravo) + +Mutation testing by example: Evolving from fragile TDD +====== +Test-driven development is not enough for delivering lean code that +works exactly to expectations. Mutation testing is a powerful step +forward. Here's what that looks like. +![Binary code on a computer screen][1] + +The [third article][2] in this series demonstrated how to use failure and unit testing to develop better code. + +While it seemed that the journey was over with a successful sample Internet of Things (IoT) application to control a cat door, experienced programmers know that solutions need _mutation_. + +### What's mutation testing? + +Mutation testing is the process of iterating through each line of implemented code, mutating that line, then running unit tests and checking if the mutation broke the expectations. If it hasn't, you have created a surviving mutant. + +Surviving mutants are always an alarming issue that points to potentially risky areas in a codebase. As soon as you catch a surviving mutant, you must kill it. And the only way to kill a surviving mutant is to create additional descriptions—new unit tests that describe your expectations regarding the output of your function or module. In the end, you deliver a lean, mean solution that is airtight and guarantees no pesky bugs or defects are lurking in your codebase. 
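+
+To make the idea concrete before any tooling is involved, here is the kind of change a mutation framework makes. The method below is the **Control** implementation from the previous article in this series (before the daylight fix); the comment marks one typical mutant (an illustrative sketch; the exact mutations a tool generates will vary):
+
+```
+public string Control(string dayOrNight) {
+   string trapDoorStatus = "Undetermined";
+   if(dayOrNight == "Nighttime") {   // a mutant might flip '==' to '!='
+       trapDoorStatus = "Cat trap door disabled";
+   }
+   return trapDoorStatus;
+}
+```
+
+If every unit test still passes while the flipped condition is in place, nothing in the test suite pins that behavior down, and the mutant survives.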
+
+If you leave surviving mutants to kick around and proliferate, live long, and prosper, then you are creating the much dreaded technical debt. On the other hand, if any unit test complains that the temporarily mutated line of code produces output that's different from the expected output, the mutant has been killed.
+
+### Installing Stryker
+
+The quickest way to try mutation testing is to leverage a dedicated framework. This example uses [Stryker][3].
+
+To install Stryker, go to the command line and run:
+
+
+```
+`$ dotnet tool install -g dotnet-stryker`
+```
+
+To run Stryker, navigate to the **unittest** folder and type:
+
+
+```
+`$ dotnet-stryker`
+```
+
+Here is Stryker's report on the quality of our solution:
+
+
+```
+14 mutants have been created. Each mutant will now be tested, this could take a while.
+
+Tests progress | 14/14 | 100% | ~0m 00s |
+Killed : 13
+Survived : 1
+Timeout : 0
+
+All mutants have been tested, and your mutation score has been calculated
+\- \app [13/14 (92.86%)]
+[...]
+```
+
+The report says:
+
+  * Stryker created 14 mutants
+  * Stryker saw 13 mutants were killed by the unit tests
+  * Stryker saw one mutant survive the onslaught of the unit tests
+  * Stryker calculated that the existing codebase contains 92.86% of code that serves the expectations
+  * Stryker calculated that 7.14% of the codebase contains code that does not serve the expectations
+
+
+
+Overall, Stryker claims that the application assembled in the first three articles in this series failed to produce a reliable solution.
+
+### How to kill a mutant
+
+When software developers encounter surviving mutants, they typically reach for the implemented code and look for ways to modify it. For example, in the case of the sample application for cat door automation, change the line:
+
+
+```
+`string trapDoorStatus = "Undetermined";`
+```
+
+to:
+
+
+```
+`string trapDoorStatus = "";`
+```
+
+and run Stryker again. A mutant has survived:
+
+
+```
+All mutants have been tested, and your mutation score has been calculated
+\- \app [13/14 (92.86%)]
+[...]
+[Survived] String mutation on line 4: '""' ==> '"Stryker was here!"'
+[...]
+```
+
+This time, you can see that Stryker mutated the line:
+
+
+```
+`string trapDoorStatus = "";`
+```
+
+into:
+
+
+```
+`string trapDoorStatus = "Stryker was here!";`
+```
+
+This is a great example of how Stryker works: it mutates every line of our code, in a smart way, in order to see if there are further test cases we have yet to think about. It's forcing us to consider our expectations in greater depth.
+
+Defeated by Stryker, you can attempt to improve the implemented code by adding more logic to it:
+
+
+```
+public string Control(string dayOrNight) {
+   string trapDoorStatus = "Undetermined";
+   if(dayOrNight == "Nighttime") {
+       trapDoorStatus = "Cat trap door disabled";
+   } else if(dayOrNight == "Daylight") {
+       trapDoorStatus = "Cat trap door enabled";
+   } else {
+       trapDoorStatus = "Undetermined";
+   }
+   return trapDoorStatus;
+}
+```
+
+But after running Stryker again, you see this attempt created a new mutant:
+
+
+```
+All mutants have been tested, and your mutation score has been calculated
+\- \app [13/15 (86.67%)]
+[...]
+[Survived] String mutation on line 4: '"Undetermined"' ==> '""'
+[...]
+[Survived] String mutation on line 10: '"Undetermined"' ==> '""'
+[...]
+```
+
+![Stryker report][4]
+
+You cannot wiggle out of this tight spot by modifying the implemented code.
It turns out the only way to kill surviving mutants is to _describe additional expectations_. And how do you describe expectations? By writing unit tests.

+### Unit testing for success
+
+It's time to add a new unit test. Since the surviving mutant is located on line 4, you realize you have not specified expectations for the output with value "Undetermined."
+
+Let's add a new unit test:
+
+
+```
+[Fact]
+public void GivenIncorrectTimeOfDayReturnUndetermined() {
+   var expected = "Undetermined";
+   var actual = catTrapDoor.Control("Incorrect input");
+   Assert.Equal(expected, actual);
+}
+```
+
+The fix worked! Now all mutants are killed:
+
+
+```
+All mutants have been tested, and your mutation score has been calculated
+\- \app [14/14 (100%)]
+[Killed] [...]
+```
+
+You finally have a complete solution, including a description of what is expected as output if the system receives incorrect input values.
+
+### Mutation testing to the rescue
+
+Suppose you decide to over-engineer a solution and add this method to the **FakeCatTrapDoor**:
+
+
+```
+private string getTrapDoorStatus(string dayOrNight) {
+   string status = "Everything okay";
+   if(dayOrNight != "Nighttime" || dayOrNight != "Daylight") {
+       status = "Undetermined";
+   }
+   return status;
+}
+```
+
+Then replace the line 4 statement:
+
+
+```
+`string trapDoorStatus = "Undetermined";`
+```
+
+with:
+
+
+```
+`string trapDoorStatus = getTrapDoorStatus(dayOrNight);`
+```
+
+When you run unit tests, everything passes:
+
+
+```
+Starting test execution, please wait...
+
+Total tests: 5. Passed: 5. Failed: 0. Skipped: 0.
+Test Run Successful.
+Test execution time: 2.7191 Seconds
+```
+
+The test has passed without an issue. TDD has worked. But bring Stryker to the scene, and suddenly the picture looks a bit grim:
+
+
+```
+All mutants have been tested, and your mutation score has been calculated
+\- \app [14/20 (70%)]
+[...]
+```
+
+Stryker created 20 mutants; 14 mutants were killed, while six mutants survived. This lowers the success score to 70%. This means only 70% of our code is there to fulfill the described expectations. The other 30% of the code is there for no clear reason, which puts us at risk of misuse of that code.
+
+In this case, Stryker helps fight the bloat. It discourages the use of unnecessary and convoluted logic because it is within the crevices of such unnecessarily complex logic where bugs and defects breed.
+
+### Conclusion
+
+As you've seen, mutation testing ensures that no uncertain fact goes unchecked.
+
+You could compare Stryker to a chess master who is thinking of all possible moves to win a match. When Stryker is uncertain, it's telling you that winning is not yet a guarantee. The more unit tests we record as facts, the further we are in our match, and the more likely Stryker can predict a win. In any case, Stryker helps detect losing scenarios even when everything looks good on the surface.
+
+It is always a good idea to engineer code properly. You've seen how TDD helps in that regard. TDD is especially useful when it comes to keeping your code extremely modular. However, TDD on its own is not enough for delivering lean code that works exactly to expectations. Developers can add code to an already implemented codebase without first describing the expectations. That puts the entire code base at risk. Mutation testing is especially useful in catching breaches in the regular test-driven development (TDD) cadence.
You need to mutate every line of implemented code to be certain no line of code is there without a specific reason. + +Now that you understand how mutation testing works, you should look into how to leverage it. Next time, I'll show you how to put mutation testing to good use when tackling more complex scenarios. I will also introduce more agile concepts to see how DevOps culture can benefit from maturing technology. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/9/mutation-testing-example-definition + +作者:[Alex Bunardzic][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/alex-bunardzichttps://opensource.com/users/alex-bunardzichttps://opensource.com/users/marcobravo +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/binary_code_computer_screen.png?itok=7IzHK1nn (Binary code on a computer screen) +[2]: https://opensource.com/article/19/9/mutation-testing-example-part-3-execute-test +[3]: https://stryker-mutator.io/ +[4]: https://opensource.com/sites/default/files/uploads/strykerreport.png (Stryker report) diff --git a/sources/tech/20190927 5 tips for GNU Debugger.md b/sources/tech/20190927 5 tips for GNU Debugger.md new file mode 100644 index 0000000000..faedf4240d --- /dev/null +++ b/sources/tech/20190927 5 tips for GNU Debugger.md @@ -0,0 +1,230 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (5 tips for GNU Debugger) +[#]: via: (https://opensource.com/article/19/9/tips-gnu-debugger) +[#]: author: (Tim Waugh https://opensource.com/users/twaugh) + +5 tips for GNU Debugger +====== +Learn how to use some of the lesser-known features of gdb to inspect and +fix your code. +![Bug tracking magnifying glass on computer screen][1] + +The [GNU Debugger][2] (gdb) is an invaluable tool for inspecting running processes and fixing problems while you're developing programs. + +You can set breakpoints at specific locations (by function name, line number, and so on), enable and disable those breakpoints, display and alter variable values, and do all the standard things you would expect any debugger to do. But it has many other features you might not have experimented with. Here are five for you to try. + +### Conditional breakpoints + +Setting a breakpoint is one of the first things you'll learn to do with the GNU Debugger. The program stops when it reaches a breakpoint, and you can run gdb commands to inspect it or change variables before allowing the program to continue. + +For example, you might know that an often-called function crashes sometimes, but only when it gets a certain parameter value. You could set a breakpoint at the start of that function and run the program. The function parameters are shown each time it hits the breakpoint, and if the parameter value that triggers the crash is not supplied, you can continue until the function is called again. When the troublesome parameter triggers a crash, you can step through the code to see what's wrong. + + +``` +(gdb) break sometimes_crashes +Breakpoint 1 at 0x40110e: file prog.c, line 5. +(gdb) run +[...] 
+
+Breakpoint 1, sometimes_crashes (f=0x7fffffffd1bc) at prog.c:5
+5      fprintf(stderr,
+(gdb) continue
+Breakpoint 1, sometimes_crashes (f=0x7fffffffd1bc) at prog.c:5
+5      fprintf(stderr,
+(gdb) continue
+```
+
+To make this more repeatable, you could count how many times the function is called before the specific call you are interested in, and set a counter on that breakpoint (for example, "continue 30" to make it ignore the next 29 times it reaches the breakpoint).
+
+But where breakpoints get really powerful is in their ability to evaluate expressions at runtime, which allows you to automate this kind of testing. Enter: conditional breakpoints.
+
+
+```
+break [LOCATION] if CONDITION
+
+(gdb) break sometimes_crashes if !f
+Breakpoint 1 at 0x401132: file prog.c, line 5.
+(gdb) run
+[...]
+Breakpoint 1, sometimes_crashes (f=0x0) at prog.c:5
+5      fprintf(stderr,
+(gdb)
+```
+
+Instead of having gdb ask what to do every time the function is called, a conditional breakpoint allows you to make gdb stop at that location only when a particular expression evaluates as true. If the execution reaches the conditional breakpoint location, but the expression evaluates as false, the debugger automatically lets the program continue without asking the user what to do.
+
+### Breakpoint commands
+
+An even more sophisticated feature of breakpoints in the GNU Debugger is the ability to script a response to reaching a breakpoint. Breakpoint commands allow you to write a list of GNU Debugger commands to run whenever it reaches a breakpoint.
+
+We can use this to work around the bug we already know about in the **sometimes_crashes** function and make it return from that function harmlessly when it is given a null pointer.
+
+We can use **silent** as the first line to get more control over the output. Without this, the stack frame will be displayed each time the breakpoint is hit, even before our breakpoint commands run.
+
+
+```
+(gdb) break sometimes_crashes
+Breakpoint 1 at 0x401132: file prog.c, line 5.
+(gdb) commands 1
+Type commands for breakpoint(s) 1, one per line.
+End with a line saying just "end".
+>silent
+>if !f
+ >frame
+ >printf "Skipping call\n"
+ >return 0
+ >continue
+ >end
+>printf "Continuing\n"
+>continue
+>end
+(gdb) run
+Starting program: /home/twaugh/Documents/GDB/prog
+warning: Loadable section ".note.gnu.property" outside of ELF segments
+Continuing
+Continuing
+Continuing
+#0  sometimes_crashes (f=0x0) at prog.c:5
+5      fprintf(stderr,
+Skipping call
+[Inferior 1 (process 9373) exited normally]
+(gdb)
+```
+
+### Dump binary memory
+
+GNU Debugger has built-in support for examining memory using the **x** command in various formats, including octal, hexadecimal, and so on. But I like to see two formats side by side: hexadecimal bytes on the left, and ASCII characters represented by those same bytes on the right.
+
+When I want to view the contents of a file byte-by-byte, I often use **hexdump -C** (hexdump comes from the [util-linux][3] package).
Here is gdb's **x** command displaying hexadecimal bytes: + + +``` +(gdb) x/33xb mydata +0x404040 <mydata>:    0x02    0x01    0x00    0x02    0x00    0x00    0x00    0x01 +0x404048 <mydata+8>:    0x01    0x47    0x00    0x12    0x61    0x74    0x74    0x72 +0x404050 <mydata+16>:    0x69    0x62    0x75    0x74    0x65    0x73    0x2d    0x63 +0x404058 <mydata+24>:    0x68    0x61    0x72    0x73    0x65    0x75    0x00    0x05 +0x404060 <mydata+32>:    0x00 +``` + +What if you could teach gdb to display memory just like hexdump does? You can, and in fact, you can use this method for any format you prefer. + +By combining the **dump** command to store the bytes in a file, the **shell** command to run hexdump on the file, and the **define** command, we can make our own new **hexdump** command to use hexdump to display the contents of memory. + + +``` +(gdb) define hexdump +Type commands for definition of "hexdump". +End with a line saying just "end". +>dump binary memory /tmp/dump.bin $arg0 $arg0+$arg1 +>shell hexdump -C /tmp/dump.bin +>end +``` + +Those commands can even go in the **~/.gdbinit** file to define the hexdump command permanently. Here it is in action: + + +``` +(gdb) hexdump mydata sizeof(mydata) +00000000  02 01 00 02 00 00 00 01  01 47 00 12 61 74 74 72  |.........G..attr| +00000010  69 62 75 74 65 73 2d 63  68 61 72 73 65 75 00 05  |ibutes-charseu..| +00000020  00                                                |.| +00000021 +``` + +### Inline disassembly + +Sometimes you want to understand more about what happened leading up to a crash, and the source code is not enough. You want to see what's going on at the CPU instruction level. + +The **disassemble** command lets you see the CPU instructions that implement a function. But sometimes the output can be hard to follow. Usually, I want to see what instructions correspond to a certain section of source code in the function. To achieve this, use the **/s** modifier to include source code lines with the disassembly. + + +``` +(gdb) disassemble/s main +Dump of assembler code for function main: +prog.c: +11    { +   0x0000000000401158 <+0>:    push   %rbp +   0x0000000000401159 <+1>:    mov      %rsp,%rbp +   0x000000000040115c <+4>:    sub      $0x10,%rsp + +12      int n = 0; +   0x0000000000401160 <+8>:    movl   $0x0,-0x4(%rbp) + +13      sometimes_crashes(&n); +   0x0000000000401167 <+15>:    lea     -0x4(%rbp),%rax +   0x000000000040116b <+19>:    mov     %rax,%rdi +   0x000000000040116e <+22>:    callq  0x401126 <sometimes_crashes> +[...snipped...] +``` + +This, along with **info registers** to see the current values of all the CPU registers and commands like **stepi** to step one instruction at a time, allow you to have a much more detailed understanding of the program. + +### Reverse debug + +Sometimes you wish you could turn back time. Imagine you've hit a watchpoint on a variable. A watchpoint is like a breakpoint, but instead of being set at a location in the program, it is set on an expression (using the **watch** command). Whenever the value of the expression changes, execution stops, and the debugger takes control. + +So imagine you've hit this watchpoint, and the memory used by a variable has changed value. This can turn out to be caused by something that occurred much earlier; for example, the memory was freed and is now being re-used. But when and why was it freed? + +The GNU Debugger can solve even this problem because you can run your program in reverse! 
+ +It achieves this by carefully recording the state of the program at each step so that it can restore previously recorded states, giving the illusion of time flowing backward. + +To enable this state recording, use the **target record-full** command. Then you can use impossible-sounding commands, such as: + + * **reverse-step**, which rewinds to the previous source line + * **reverse-next**, which rewinds to the previous source line, stepping backward over function calls + * **reverse-finish**, which rewinds to the point when the current function was about to be called + * **reverse-continue**, which rewinds to the previous state in the program that would (now) trigger a breakpoint (or anything else that causes it to stop) + + + +Here is an example of reverse debugging in action: + + +``` +(gdb) b main +Breakpoint 1 at 0x401160: file prog.c, line 12. +(gdb) r +Starting program: /home/twaugh/Documents/GDB/prog +[...] + +Breakpoint 1, main () at prog.c:12 +12      int n = 0; +(gdb) target record-full +(gdb) c +Continuing. + +Program received signal SIGSEGV, Segmentation fault. +0x0000000000401154 in sometimes_crashes (f=0x0) at prog.c:7 +7      return *f; +(gdb) reverse-finish +Run back to call of #0  0x0000000000401154 in sometimes_crashes (f=0x0) +        at prog.c:7 +0x0000000000401190 in main () at prog.c:16 +16      sometimes_crashes(0); +``` + +These are just a handful of useful things the GNU Debugger can do. There are many more to discover. Which hidden, little-known, or just plain amazing feature of gdb is your favorite? Please share it in the comments. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/9/tips-gnu-debugger + +作者:[Tim Waugh][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/twaugh +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bug_software_issue_tracking_computer_screen.jpg?itok=6qfIHR5y (Bug tracking magnifying glass on computer screen) +[2]: https://www.gnu.org/software/gdb/ +[3]: https://en.wikipedia.org/wiki/Util-linux diff --git a/sources/tech/20190928 Microsoft open sourcing its C-- library, Cloudera-s open source data platform, new tools to remove leaked passwords on GitHub and combat ransomware, and more open source news.md b/sources/tech/20190928 Microsoft open sourcing its C-- library, Cloudera-s open source data platform, new tools to remove leaked passwords on GitHub and combat ransomware, and more open source news.md new file mode 100644 index 0000000000..cb803113ab --- /dev/null +++ b/sources/tech/20190928 Microsoft open sourcing its C-- library, Cloudera-s open source data platform, new tools to remove leaked passwords on GitHub and combat ransomware, and more open source news.md @@ -0,0 +1,83 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Microsoft open sourcing its C++ library, Cloudera's open source data platform, new tools to remove leaked passwords on GitHub and combat ransomware, and more open source news) +[#]: via: (https://opensource.com/article/19/9/news-september-28) +[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt) + +Microsoft open sourcing its C++ library, Cloudera's open source data platform, new tools to 
remove leaked passwords on GitHub and combat ransomware, and more open source news
+======
+Catch up on the biggest open source headlines from the past two weeks.
+![Weekly news roundup with TV][1]
+
+In this edition of our open source news roundup, we take a look at Cloudera's open source data platform, Microsoft open sourcing its C++ library, new tools to beef up digital security, and more!
+
+### Cloudera releases open source cloud data platform
+
+It was only a few months ago that data processing software vendor Cloudera went [all in on open source][2]. The results of that shift have started to appear, with the company releasing "[an integrated data platform made up entirely of open-source elements.][3]"
+
+Called Cloudera Data Platform, it combines "a cloud-native data warehouse, machine learning service and data hub, each running as instances within the self-contained operating environments." Cloudera's chief product officer Arun Murthy said that by using "existing components in the cloud, the platform cuts deployment times from weeks to hours." The speed of open source adoption is a great industry proof point. One can imagine the next step is Cloudera's participation in the underlying open source communities they now depend on.
+
+### Microsoft open sources its C++ standard library
+
+When you think of open source software, programming language libraries probably aren't the first things that come to mind. But they're often an essential part of the software that we use. A team at Microsoft recognized the importance of the company's implementation of the C++ Standard Library (STL), and it has now been [released as open source][4].
+
+By making the library open source, users get "easy access to all the latest developments in C++" and it enables them to participate "in the STL’s development by reporting issues and commenting on pull requests." The library, which is under an Apache License, is [available on GitHub][5].
+
+### Two new open source security tools
+
+Nowadays, more than ever it seems, digital security is important to anyone using a computer — from average users to system administrators to software developers. Open source has been playing its part in helping make systems more secure, and two new open source tools to help secure an organization's code and its computers have been released.
+
+If you, or someone in your company, has ever accidentally published sensitive information to a public GitHub repository, then [Shhgit is for you][6]. The tool, which you can [find on GitHub][7], is designed to detect passwords, connection strings, and access keys that wind up being exposed. Unlike similar tools, you don't need to point Shhgit at a particular repository. Instead, it "taps into the GitHub firehose to automatically flag up leaked secrets".
+
+Ransomware attacks are no joke, and defending against them is serious business. Cameyo, a company specializing in virtualization, has released an [open source monitoring tool][8] that "any organization can use to identify attacks taking place over RDP (Remote Desktop Protocol) in their environment." Called [RDPmon][9], the software enables users to "monitor and identify brute force attacks and to help protect against ransomware". It does this by watching the number of attempted RDP connections, along with the number of users and which programs those users are running.
+
+### New foundation to develop open source data processing engine
+
+There's a new open source foundation in town.
Tech firms Alibaba, Facebook, Twitter, and Uber have [teamed up][10] to further develop Presto, a database search engine and processing tool originally crafted by Facebook. + +The Presto Foundation, which operates under the Linux Foundation's umbrella, aims to make Presto the "fastest and most reliable SQL engine for massively distributed data processing." One of the foundation members, Alibaba, already has plans for the tool. According to an [article in CX Tech][11], Alibaba intends to refine Presto to more efficiently "sift through the mountains of data generated by its e-commerce platforms." + +#### In other news + + * [Scientists Create World’s First Open Source Tool for 3D Analysis of Advanced Biomaterials][12] + * [Percona announces Percona Distribution for PostgreSQL to support open source databases][13] + * [Sage gets cloudy, moves towards open source and microservices][14] + * [Compliance monitoring of EU’s Common Agricultural Policy made more transparent and efficient with Open Source][15] + * [WebLinc is taking its in-house ecommerce platform open source][16] + + + +_Thanks, as always, to Opensource.com staff members and moderators for their help this week._ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/9/news-september-28 + +作者:[Scott Nesbitt][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/scottnesbitt +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/weekly_news_roundup_tv.png?itok=B6PM4S1i (Weekly news roundup with TV) +[2]: https://opensource.com/19/7/news-july-20#cloudera +[3]: https://siliconangle.com/2019/09/24/cloudera-debuts-open-source-integrated-cloud-data-platform/ +[4]: https://devclass.com/2019/09/18/microsoft-turns-to-github-to-open-source-c-stl/ +[5]: https://github.com/microsoft/STL +[6]: https://portswigger.net/daily-swig/open-source-tool-for-bug-hunters-searches-for-leaked-secrets-in-github-commits +[7]: https://github.com/eth0izzle/shhgit/ +[8]: https://betanews.com/2019/09/18/tool-prevents-brute-force-ransomware/ +[9]: https://github.com/cameyo/rdpmon +[10]: https://sdtimes.com/data/the-presto-foundation-launches-under-the-linux-foundation/ +[11]: https://www.caixinglobal.com/2019-09-24/alibaba-global-tech-giants-form-foundation-for-open-source-database-tool-101465449.html +[12]: https://sputniknews.com/science/201909111076763585-russian-german-scientists-create-worlds-first-open-source-tool-for-3d-analysis-of-advanced/ +[13]: https://hub.packtpub.com/percona-announces-percona-distribution-for-postgresql-to-support-open-source-databases/ +[14]: https://www.itworldcanada.com/article/sage-gets-cloudy-moves-towards-open-source-and-microservices/421771 +[15]: https://joinup.ec.europa.eu/node/702122 +[16]: https://technical.ly/philly/2019/09/24/weblinc-ecommerce-platform-open-source-workarea/ diff --git a/sources/tech/20190929 Open Source Voice Chat Mumble Makes a Big Release After 10 Years.md b/sources/tech/20190929 Open Source Voice Chat Mumble Makes a Big Release After 10 Years.md new file mode 100644 index 0000000000..3205c22a0a --- /dev/null +++ b/sources/tech/20190929 Open Source Voice Chat Mumble Makes a Big Release After 10 Years.md @@ -0,0 +1,117 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) 
+[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Open Source Voice Chat Mumble Makes a Big Release After 10 Years) +[#]: via: (https://itsfoss.com/mumble-voice-chat/) +[#]: author: (John Paul https://itsfoss.com/author/john/) + +Open Source Voice Chat Mumble Makes a Big Release After 10 Years +====== + +The greatest power of the Internet is its ability to connect people anywhere in the world. Voice chat applications are just one category of tools uniting us. Recently, one of the biggest open-source voice chat apps made a new release, 10 years after its previous release. + +### Mumble: Open Source, Low Latency, High Quality Voice Chat + +![Mumble][1] + +[Mumble][2] is a “free, open source, low latency, high quality voice chat application”. It was originally created to be used by gamers, but it is also used to record podcasts. Several [Linux podcasts][3] use Mumble to record hosts located at different places in the world, including Late Nite Linux. To give you an idea of how powerful Mumble is, it has been used to connect “Eve Online players with huge communities of over 100 simultaneous voice participants”. + +Here are some of the features that make Mumble interesting: + + * Low-latency (ideal for gamers) + * Connections always encrypted and secured + * Connect with friends across servers + * Extensive user permission system + * Extendable through Ice and GRPC protocols + * Automatable administration through Ice middleware + * Low resource cost for hosting + * Free choice between official and third-party server software + * Provide users with channel viewer data (CVP) without giving control away + + + +It’s a powerful software with a lot of features. If you are new to it and want to start using it, I suggest [going through its documentation][4]. + +### What’s New in Mumble 1.3.0? + +![Mumble 1.30 Interface with Lite Theme][5] + +The team behind Mumble released [version 1.3.0][6] in early August. This is the first major release in ten years and it contains over 3,000 changes. Here are just a few of the new features in Mumble 1.3.0: + + * UI redesign + * New lite and dark themes + * Individual user volume adjustment + * New bindable shortcut for changing transmission modes + * Quickly filter channels + * Multichannel recordings are synchronous even after several hours + * PulseAudio monitor devices can be used as input devices + * An optional clock (current time) in the overlay + * Improved user management, including searchable ban list + * Added support for systemd + * Option to disable public server list + * Lower volume of other users when “Priority Speaker” talks + * New interface allows renaming users as well as (batch) deletions + * Mumble client can be controlled through SocketRPC + * Support for Logitech G-keys has been added + + + +### Installing Mumble on Linux + +![Mumble 1.30 Interface Dark Theme][7] + +The Mumble team has installers available for Linux, Windows (32 and 64 bit), and macOS. You can find and download them from the [project’s website][8]. You can also browse its [source code on GitHub][9]. + +They have a [PPA available for Ubuntu][10]. Which means you can easily install it on Ubuntu and Ubuntu-based distributions like Linux Mint, elementary OS. To install, just enter these commands, one by one, in the terminal: + +``` +sudo add-apt-repository ppa:mumble/release +sudo apt update +sudo apt install mumble +``` + +The Snap community also created a [snap app for Mumble][11]. This makes installing Mumble easier in any Linux distribution that supports Snap. 
You can install it with the following command:
+
+```
+sudo snap install mumble
+```
+
+There are also _third-party clients_ for Android and iOS on the download page.
+
+[Download Mumble for other platforms][8]
+
+**Final Thoughts**
+
+I have never used Mumble or any other voice chat app. I just never had the need. That being said, I’m glad that there is a powerful FOSS option available and so widely used.
+
+Have you ever used Mumble? What is your favorite voice chat app? Please let us know in the comments below.
+
+If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][12].
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/mumble-voice-chat/
+
+作者:[John Paul][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/john/
+[b]: https://github.com/lujun9972
+[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/mumble-voice-chat-logo.png?ssl=1
+[2]: https://www.mumble.info/
+[3]: https://itsfoss.com/linux-podcasts/
+[4]: https://wiki.mumble.info/wiki/Main_Page
+[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/mumble-1.30-interface.jpg?ssl=1
+[6]: https://www.mumble.info/blog/mumble-1.3.0-release-announcement/
+[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/mumble-1.30-interface-1.png?resize=800%2C529&ssl=1
+[8]: https://www.mumble.info/downloads/
+[9]: https://github.com/mumble-voip/mumble
+[10]: https://itsfoss.com/ppa-guide/
+[11]: https://snapcraft.io/mumble
+[12]: https://reddit.com/r/linuxusersgroup
diff --git a/sources/tech/20190930 Cacoo- A Lightweight Online Tool for Modelling AWS Architecture.md b/sources/tech/20190930 Cacoo- A Lightweight Online Tool for Modelling AWS Architecture.md
new file mode 100644
index 0000000000..428c68007a
--- /dev/null
+++ b/sources/tech/20190930 Cacoo- A Lightweight Online Tool for Modelling AWS Architecture.md
@@ -0,0 +1,72 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Cacoo: A Lightweight Online Tool for Modelling AWS Architecture)
+[#]: via: (https://opensourceforu.com/2019/09/cacoo-a-lightweight-online-tool-for-modelling-aws-architecture/)
+[#]: author: (Magesh Kasthuri https://opensourceforu.com/author/magesh-kasthuri/)
+
+Cacoo: A Lightweight Online Tool for Modelling AWS Architecture
+======
+
+[![AWS][1]][2]
+
+_Cacoo is a simple and efficient online tool that can be used to model diagrams for AWS architecture. It is not specific to AWS architecture and can be used for UML modelling, cloud architecture for GCP, Azure, network architecture, etc. However, this open source tool is one of the most efficient in architecture modelling for AWS solutions._
+
+For a cloud architect, representing the solution’s design as an architecture diagram is much more helpful in explaining the details visually to target audiences like the IT manager, the development team, business stakeholders and the application owner. Though there are many tools like Sparx Enterprise Architect, Rational Software Modeler and Visual Paradigm, to name a few, these are not sophisticated or flexible enough for cloud architecture modelling. Cacoo is an advanced and lightweight tool that has many features to support AWS cloud modelling, as can be seen in Figures 1 and 2.
+
+![Figure 1: Template options for AWS architecture diagram][3]
+
+![Figure 2: Sample AWS architecture diagram in Cacoo][4]
+
+![Figure 3: AWS diagram options in Cacoo][5]
+
+Though AWS provides developer tools, there is no built-in tool for solution modelling, and hence we have to choose an external tool like Cacoo for the design preparation.
+
+We can start solution modelling in Cacoo either by using the AWS diagram templates, which list pre-built templates for standard architecture diagrams like the network diagram, DevOps solutions, etc, or by building a diagram from scratch. If you want to develop a custom solution from the list of shapes available in the Cacoo online editor, you can choose AWS components like compute, storage, network, analytics, AI tools, etc, and prepare a custom architecture to suit your solution, as shown in Figure 2.
+
+There are connectors available to relate the components (for example, how network communication happens, and how ELB or elastic load balancing branches to EC2 storage). Figure 3 lists sample diagram shapes available for AWS architecture diagrams in Cacoo.
+
+![Figure 4: Create an IAM role to connect to Cacoo][6]
+
+![Figure 5: Add the policy to the IAM role to enable Cacoo to import from the AWS account][7]
+
+**Integrating Cacoo with an AWS account to import architecture**
+One of the biggest advantages of Cacoo compared to other cloud modelling tools is that it can import architecture from an AWS account. We can connect to an AWS account, and Cacoo selects the services created in the account with the role attached and prepares an architecture diagram on the fly.
+
+For this, we need to first create an IAM (Identity and Access Management) role in the AWS account with the account ID and external ID as given in the Cacoo Import AWS Architecture account (Figure 4).
+
+Then we need to add a policy to the IAM role in order to access the components attached to this role from Cacoo. For policy creation, sample policies are available in Cacoo’s Import AWS Architecture wizard. We just need to copy and paste the policy as shown in Figure 5.
+
+Once this is done, the IAM role is created in the AWS account. Now we need to copy the role ARN (Amazon Resource Name) from the newly created role and paste it in Cacoo’s Import AWS Architecture wizard as shown in Figure 6. This imports the architecture of the services created in the account that is attached to the IAM role we have created, and displays it as an architecture diagram.
+
+![Figure 6: Cacoo’s AWS Architecture Import wizard][8]
+
+![Figure 7: Cacoo’s worksheet with AWS imported architecture][9]
+
+Once this is done, we can see the architecture in Cacoo’s worksheet (Figure 7). We can print or export the architecture diagram into PPT, PNG, SVG, PDF, etc, for an architecture document, or for poster printing and other technical discussion purposes, as needed.
+
+Cacoo is one of the most powerful cloud architecture modelling tools and can be used for visual designs for AWS architecture, on the fly, using online tools without installing any software. The online account is accessible from anywhere and can be used for quick architecture presentations.
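+If you prefer to script the IAM setup described above, the same role can also be created with the AWS CLI. The sketch below is illustrative only: the role name is arbitrary, the account ID and external ID are placeholders for the values shown in Cacoo’s import wizard, and the permission policy from Figure 5 still has to be attached afterwards.
+
+```
+# Create a role that Cacoo's importer is allowed to assume
+aws iam create-role --role-name cacoo-import \
+  --assume-role-policy-document '{
+    "Version": "2012-10-17",
+    "Statement": [{
+      "Effect": "Allow",
+      "Principal": {"AWS": "arn:aws:iam::<CACOO_ACCOUNT_ID>:root"},
+      "Action": "sts:AssumeRole",
+      "Condition": {"StringEquals": {"sts:ExternalId": "<EXTERNAL_ID>"}}
+    }]
+  }'
+
+# Print the role ARN that the wizard in Figure 6 asks for
+aws iam get-role --role-name cacoo-import --query Role.Arn --output text
+```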
+
+--------------------------------------------------------------------------------
+
+via: https://opensourceforu.com/2019/09/cacoo-a-lightweight-online-tool-for-modelling-aws-architecture/
+
+作者:[Magesh Kasthuri][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensourceforu.com/author/magesh-kasthuri/
+[b]: https://github.com/lujun9972
+[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2017/07/AWS.jpg?resize=696%2C427&ssl=1 (AWS)
+[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2017/07/AWS.jpg?fit=750%2C460&ssl=1
+[3]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-1-Template-options-for-AWS-architecture-diagram.jpg?resize=350%2C262&ssl=1
+[4]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-2-Sample-AWS-architecture-diagram-in-Cacoo.jpg?resize=350%2C186&ssl=1
+[5]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-3-AWS-diagram-options-in-Cacoo.jpg?resize=350%2C337&ssl=1
+[6]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-4-Create-an-IAM-role-to-connect-to-Cacoo.jpg?resize=350%2C228&ssl=1
+[7]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-5-Add-the-policy-to-the-IAM-role-to-enable-Cacoo-to-import-from-the-AWS-account.jpg?resize=350%2C221&ssl=1
+[8]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-6-Cacoo%E2%80%99s-AWS-Architecture-Import-wizard.jpg?resize=350%2C353&ssl=1
+[9]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-7-Cacoo%E2%80%99s-worksheet-with-AWS-imported-architecture.jpg?resize=350%2C349&ssl=1
diff --git a/sources/tech/20190930 How the Linux screen tool can save your tasks - and your sanity - if SSH is interrupted.md b/sources/tech/20190930 How the Linux screen tool can save your tasks - and your sanity - if SSH is interrupted.md
new file mode 100644
index 0000000000..5de0f02b79
--- /dev/null
+++ b/sources/tech/20190930 How the Linux screen tool can save your tasks - and your sanity - if SSH is interrupted.md
@@ -0,0 +1,119 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How the Linux screen tool can save your tasks – and your sanity – if SSH is interrupted)
+[#]: via: (https://www.networkworld.com/article/3441777/how-the-linux-screen-tool-can-save-your-tasks-and-your-sanity-if-ssh-is-interrupted.html)
+[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
+
+How the Linux screen tool can save your tasks – and your sanity – if SSH is interrupted
+======
+The Linux screen command can be a life-saver when you need to ensure long-running tasks don't get killed when an SSH session is interrupted. Here's how to use it.
+Sandra Henry-Stocker
+
+If you’ve ever had to restart a time-consuming process because your SSH session was disconnected, you might be very happy to learn about an interesting tool that you can use to avoid this problem – the **screen** tool.
+
+Screen, which is a terminal multiplexor, allows you to run many terminal sessions within a single ssh session, detaching from them and reattaching them as needed. The process for doing this is surprisingly simple and involves only a handful of commands.
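+For a quick taste, here is the entire round trip in four steps (a sketch; the session name and the long-running command are just examples):
+
+```
+$ screen -S backup         # start a named screen session
+$ tar czf /tmp/home.tgz ~  # kick off a long-running job inside it
+# press Ctrl+A, then D, to detach from the session
+$ screen -r backup         # reattach later, even from a new SSH login
+```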
+
+**[ Two-Minute Linux Tips: [Learn how to master a host of Linux commands in these 2-minute video tutorials][1] ]**
+
+To start a screen session, you simply type **screen** within your ssh session. You then start your long-running process, type **Ctrl+A Ctrl+D** to detach from the session and **screen -r** to reattach when the time is right.
+
+If you’re going to run more than one screen session, a better option is to give each session a meaningful name that will help you remember what task is being handled in it. Using this approach, you would name each session when you start it by using a command like this:
+
+```
+$ screen -S slow-build
+```
+
+Once you have multiple sessions running, reattaching to one then requires that you pick it from the list. In the commands below, we list the currently running sessions before reattaching one of them. Notice that initially both sessions are marked as being detached.
+
+```
+$ screen -ls
+There are screens on:
+ 6617.check-backups (09/26/2019 04:35:30 PM) (Detached)
+ 1946.slow-build (09/26/2019 02:51:50 PM) (Detached)
+2 Sockets in /run/screen/S-shs
+```
+
+Reattaching to the session then requires that you supply the assigned name. For example:
+
+```
+$ screen -r slow-build
+```
+
+The process you left running should have continued processing while it was detached and you were doing some other work. If you ask about your screen sessions while using one of them, you should see that the session you’re currently reattached to is once again “attached.”
+
+```
+$ screen -ls
+There are screens on:
+ 6617.check-backups (09/26/2019 04:35:30 PM) (Attached)
+ 1946.slow-build (09/26/2019 02:51:50 PM) (Detached)
+2 Sockets in /run/screen/S-shs.
+```
+
+You can ask what version of screen you’re running with the **-version** option.
+
+```
+$ screen -version
+Screen version 4.06.02 (GNU) 23-Oct-17
+```
+
+### Installing screen
+
+You can check whether screen is installed by running “which screen”. If the command reports a path, as below, screen is installed; if it doesn’t provide any output, screen probably isn't installed on your system.
+
+```
+$ which screen
+/usr/bin/screen
+```
+
+If you need to install it, one of the following commands is probably right for your system:
+
+```
+sudo apt install screen
+sudo yum install screen
+```
+
+The screen tool comes in handy whenever you need to run time-consuming processes that could be interrupted if your SSH session disconnects for any reason. And, as you've just seen, it’s very easy to use and manage.
+
+Here's a recap of the commands used above:
+
+```
+screen -S     start a session
+Ctrl+A Ctrl+D detach from a session
+screen -ls    list sessions
+screen -r     reattach a session
+```
+
+While there is more to know about **screen**, including additional ways that you can maneuver between screen sessions, this should get you started using this handy tool.
+
+Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind.
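+As one final sketch of what else **screen** can do (the session name and the rsync command here are purely illustrative), you can launch a command directly in an already-detached session, which is handy in scripts and cron jobs:
+
+```
+$ screen -dmS nightly-sync rsync -a /data/ backup:/data/
+$ screen -ls   # the job is running, detached from the start
+```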
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3441777/how-the-linux-screen-tool-can-save-your-tasks-and-your-sanity-if-ssh-is-interrupted.html + +作者:[Sandra Henry-Stocker][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/ +[b]: https://github.com/lujun9972 +[1]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua +[2]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage) +[3]: https://www.facebook.com/NetworkWorld/ +[4]: https://www.linkedin.com/company/network-world diff --git a/sources/tech/20191001 How to create the data structure for a Corteza Low Code application.md b/sources/tech/20191001 How to create the data structure for a Corteza Low Code application.md new file mode 100644 index 0000000000..6e065ac302 --- /dev/null +++ b/sources/tech/20191001 How to create the data structure for a Corteza Low Code application.md @@ -0,0 +1,225 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to create the data structure for a Corteza Low Code application) +[#]: via: (https://opensource.com/article/19/10/corteza-low-code-data-structure) +[#]: author: (Lenny Horstink https://opensource.com/users/lenny-horstink) + +How to create the data structure for a Corteza Low Code application +====== +Corteza is an open source alternative to Salesforce. Learn how to use it +in this series. +![Green graph of measurements][1] + +In the [first article][2] in this series, I showed how to create a custom application to track donations using Corteza Low-Code, a graphical user interface- (GUI) and web-based development environment that serves as an alternative to Salesforce. So far, the Donations application merely exists, but this article explains how to make it do something by populating it with a data structure using modules and fields. + +Modules and fields exist inside your application. (In programming terminology, they are "locally defined.") Modules and fields define places where data is stored in your application. Without modules and fields, your application has no memory nor anything to work with, so defining them is the next step when creating a new app. + +The [Donations application][3] is available on the Corteza community server. You need to be logged in or create a free Corteza community server account to check it out. + +### Enter the application's admin area + +To enter the admin area of an application, you first need to open the application inside Corteza Low-Code. To enter the Donations application created in the first part of this series: + + 1. Enter Corteza. (Read [_Intro to Corteza, an open source alternative to Salesforce_][4] if you need some background on this.) + 2. Click on the **+** button to create a new tab. + 3. Select Corteza Low-Code. + 4. Click on the Donations namespace to enter the Donations application. + + + +Since the Donations application doesn't have any modules or pages yet, the only thing you see is an **Admin panel** link on the right. 
If the application had pages, it would show the main menu and the **Admin panel** link on the far right.
+
+![Open Corteza Low Code admin panel][5]
+
+Click on it to enter the application's admin area. There are four menu items:
+
+![Corteza Low Code admin panel menu][6]
+
+  * **Modules:** Create or edit modules and fields
+  * **Pages:** Define the visual part of your application
+  * **Charts:** Create charts to add to pages
+  * **Automation:** Add automation rules to automate business processes and workflows
+
+The **Public pages** link takes you back to your application.
+
+### Create modules and fields
+
+Modules and fields define what data you need to store in your application and how that data links to other data. If you've ever built a database with [LibreOffice Base][7], Filemaker Pro, or a similar application, this might feel familiar—but you don't need any database experience to work with Corteza.
+
+#### Modules
+
+A module is like a table in a database. A simple application typically has a few modules, while bigger applications have many more. Corteza CRM, for example, has over 35. The number of modules an application can have is unlimited.
+
+A new application does not have any modules. You can create one by using the form on top or by importing an existing module from a different application using an export file. You can import and export individual modules or all modules at the same time.
+
+When you create a module, best practice is to give it a descriptive name without spaces, capitalizing each word, e.g., _Lead_, _Account_, or _CaseUpdate_.
+
+The Donations application includes the following modules:
+
+  * **Contact:** To store the donor's contact data
+  * **ContactDonation:** To track a contact's donation(s)
+  * **Project:** To store a project you can assign donations to
+  * **Note:** To store notes related to a project
+
+![Donations application modules][8]
+
+#### Fields
+
+Each module consists of a set of fields that define what data you want to store and in what format.
+
+You can add new fields to a module by using the **Add new field** button. This adds a new row with the following fields:
+
+  * **Name:** It must be unique and cannot have spaces, e.g., "firstname." This is not shown to the end user.
+  * **Title:** This is the field's label—the field name the end users see when they view or edit a record. It can contain any character, including spaces. Although it's best practice to keep this title unique, it's not mandatory. An example is "First name."
+  * **Type:** This is where you set the field type. The wrench icon on the right allows you to set more detailed data for the field type.
+  * **Multiple values:** This checkbox is available when you want a field type to allow multiple value entries.
+  * **Required:** This makes the field mandatory for the end user when creating or editing a record.
+  * **Sensitive:** This allows you to mark data that is sensitive, such as name, email, or telephone number, so your application is compliant with privacy regulations such as the [GDPR][9].
+
+At the end of the row, you can find a **Delete** button (to remove a field) and a **Permission** button (to set read permissions and update field permissions per role).
+
+### Field types
+
+You can select from the following field types. The wrench icon beside the field type provides further options for each case.
+
+  * **Checkbox (Y/N):** This field shows a checkbox to the end user when editing a record.
When you click on the wrench icon, you can select what checked and unchecked represent. For example: Yes/No, Active/Inactive, etc. + * **DateTime:** This makes a date field. You can select: + * Date only + * Time only + * Past values only + * Future value only + * Output relative value (e.g., three days ago) + * Custom output format (see [Moment.js][10] for formatting options) + * **Email:** This field auto-validates whether the input is an email and turns it into a clickable email link in record-viewing mode. You can select the **Don't turn email into a link** option to remove the auto-link feature. + * **Select:** When you click on the wrench icon, you can use the **Add** button to add as many Select options as you need. You can also set whether the end user can select multiple values at once. + * **Number:** This field gives you the option to add a prefix (for example a $ for values in dollars), a suffix (for example % for a number that represents a percentage), and the decimal precision (e.g., zero for whole numbers or two for values like 1.13, 2.44, 3.98), and you can use the **Format Input** field to create more complex formats. + * **Record:** This field allows you to link the current module to another module. It will show as a Select to the end user. You can select the module in the **Module name** field and choose the field to use to load the Select options. In **Query fields on search**, you can define what fields you want the user to be able to search on. As with the **Select** field type, you can set whether the user can select multiple values at once. + * **String:** By default, a String field is a single-line text-input field, but you can choose to make it multi-line or even a rich text editor. + * **URL:** The URL field automatically validates whether the field is a link to a site. You can select the following options for this field: + * Trim # from the URL + * Trim ? from the URL + * Only allow SSL (HTTPS) URLs + * Don't turn URL into a link + * **User:** This creates a Select field that loads with all users in Corteza. You can preset the value to the current user. + * **File:** This creates a **File Upload** button for the end user. + + + +#### Field types in the Donations application + +The Donations application includes the following fields in its four modules. + +##### 1\. Contact + +![Contact module][11] + + * Name (String) + * Email (Email) + * Phone (String) + * Address (String; _Multi-line_) + + + +##### 2\. ContactDonation + +![Corteza Donations app modules][12] + + * Contact (Record; link to **Contact**) + * Donation (Number; _Prefix $_ and _Precision 2_) + * Project (Record; link to **Project**) + + + +##### 3\. Project + +![Project module][13] + + * Name (String) + * Description (String; _Multi-line_ and _Use rich text editor_) + * Status (Select; with options _Planning_, _Active_, and _Finished_) + * Start date (DateTime; _Date only_) + * Website link (URL) + * Donations total (Number; _Prefix $_ and _Precision 2_) + * Project owner (User; _Multiple select_ and _Preset with current user_) + + + +##### 4\. Notes + +![Notes module][14] + + * Project (Record; link to **Project**) + * Subject (String) + * Note (String; _Multi-line_ and _Use rich text editor_) + * File (File; _Single image_) + + + +### Create relationships between modules + +Practically every Corteza Low Code application consists of multiple modules that are linked together. For example, projects can have notes or donations can be assigned to different projects. 
The **Record** field type creates relationships between modules. + +The **Record** field type's basic function is to link from module B back to module A. Records in module B are children of records in module A (you could say it's a 1-N relationship). + +For example, in the Donations application, the module **Note** has a **Record** field that links to the module **Project**. The end user will see a **Select** field in a **Note** record with the value of the **Project** that the note pertains to. + +To create this relationship in the Donations application, select the wrench icon in the **projectId** row: + +![Wrench icon][15] + +In the popup that opens, select the module the field will link to, the label end users will see, and which fields the end user can search on.  + +![Setting query fields for search][16] + +This creates a simple relationship that allows the **Project** to have **Notes**. A many-to-many relationship between modules is more complex. For example, the Donations application needs to support contacts who make multiple donations and donations that are assigned to different projects. The **ContactDonation** module sits in the middle to manage this. + +This module has two fields of the **Record** type. For each, we need to select the correct module and set the label and query fields the user can search on. The Donations application needs the following to be set for the **Contact** and **Project** modules: + +![Contact module field settings][17] + +![Project module field settings][18] + +This creates a many-to-many relationship between modules. + +You've now set up a structure for the data in your application. The next step is to create the visual side of your app using Corteza's **Pages** feature. It's easier than you might expect, as you'll see in the third article in this series. 
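+To recap, here is a schematic sketch of the Donations data structure built in this article (each link is a **Record** field pointing at a parent module; **ContactDonation** is the join module that turns two 1-N links into a many-to-many relationship between contacts and projects):
+
+```
+Contact 1 --- N ContactDonation N --- 1 Project
+Project 1 --- N Note
+```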
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/10/corteza-low-code-data-structure + +作者:[Lenny Horstink][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/lenny-horstink +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_lead-steps-measure.png?itok=DG7rFZPk (Green graph of measurements) +[2]: https://opensource.com/article/19/9/how-build-application-corteza-low-code-open-source-alternative-salesforce +[3]: https://latest.cortezaproject.org/compose/ns/donations/ +[4]: https://opensource.com/article/19/8/corteza-open-source-alternative-salesforce +[5]: https://opensource.com/sites/default/files/uploads/corteza_donationsadminpanel.png (Open Corteza Low Code admin panel) +[6]: https://opensource.com/sites/default/files/uploads/corteza_donationsmenuadminpanel.png (Corteza Low Code admin panel menu) +[7]: https://www.libreoffice.org/discover/base/ +[8]: https://opensource.com/sites/default/files/uploads/corteza_donationstmodules.png (Donations application modules) +[9]: https://eugdpr.org/ +[10]: https://momentjs.com/docs/#/displaying/format/ +[11]: https://opensource.com/sites/default/files/uploads/corteza_contactmodulefields.png (Contact module) +[12]: https://opensource.com/sites/default/files/uploads/corteza_contactdonationmodule.png (Corteza Donations app modules) +[13]: https://opensource.com/sites/default/files/uploads/corteza_projectmodule.png (Project module) +[14]: https://opensource.com/sites/default/files/uploads/corteza_notesmodule.png (Notes module) +[15]: https://opensource.com/sites/default/files/uploads/corteza_createrelationshipicon.png (Wrench icon) +[16]: https://opensource.com/sites/default/files/uploads/corteza_queryfieldsonsearch.png (Setting query fields for search) +[17]: https://opensource.com/sites/default/files/uploads/corteza_modulefieldsettings-contact.png (Contact module field settings) +[18]: https://opensource.com/sites/default/files/uploads/corteza_modulefieldsettings-project.png (Project module field settings) diff --git a/sources/tech/20191001 The Best Android Apps for Protecting Privacy and Keeping Information Secure.md b/sources/tech/20191001 The Best Android Apps for Protecting Privacy and Keeping Information Secure.md new file mode 100644 index 0000000000..6e47df1e3a --- /dev/null +++ b/sources/tech/20191001 The Best Android Apps for Protecting Privacy and Keeping Information Secure.md @@ -0,0 +1,134 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (The Best Android Apps for Protecting Privacy and Keeping Information Secure) +[#]: via: (https://opensourceforu.com/2019/10/the-best-android-apps-for-protecting-privacy-and-keeping-information-secure/) +[#]: author: (Magimai Prakash https://opensourceforu.com/author/magimai-prakash/) + +The Best Android Apps for Protecting Privacy and Keeping Information Secure +====== + +[![][1]][2] + +_Privacy violations and data theft occur every day, making it necessary for all of us to safeguard our data. We trust our smartphones way too much and tend to store personal data on them, ignoring the fact that these devices could easily be compromised. 
However, there are a few open source apps that can ensure the data on your phone is not compromised. This article lists the best ones._
+
+Everyone is becoming aware of information security. There are plenty of privacy and security apps available in the Google Play store too, but it is not easy to select the right one. Most users prefer free apps, but some of these offer only limited functionality and force users to upgrade to a premium membership, which many cannot afford.
+
+This article sheds light on some FOSS Android apps that will really help in safeguarding your privacy.
+
+![Figure 1: Safe Notes][3]
+
+![Figure 2: Exodus Privacy][4]
+
+**Safe Notes**
+Safe Notes is a companion app for the Protected Text website (__). It is an online encrypted notepad which offers space on a separate site for users to store their notes. To use this service, you do not need to sign up with the website. Instead, you need to choose a site name and a password to protect it.
+
+You have two options to use Safe Notes — you can either use this app to save your notes locally, or you can import your existing Protected Text site in the app. In the latter case, you can synchronise your notes between the app and the Protected Text website.
+
+By default, all the notes will be in an unlocked state. After you have saved your notes, if you want to encrypt them, click on the key icon beside your note and you will be prompted to give a password. After entering the password of your choice, your note will be encrypted, and instead of the key icon you will see an unlocked icon in its place, which means that your note is not yet locked. To lock your note, click the ‘Unlocked’ icon beside your note — your note will get locked and the password will be removed from your device.
+
+Passwords that you are using are not transmitted anywhere. Even if you are using an existing Protected Text site, your passwords are not transmitted. Only your encrypted notes get sent to the Protected Text servers, so you are in total control. But this also means that you cannot recover your password if you lose it.
+
+Your notes are encrypted with the AES algorithm and hashed with SHA-512, while SSL is used for data transmission.
+
+![Figure 3: Net Guard][5]
+
+**Exodus Privacy**
+Have you ever wondered how many permissions you are granting to an Android app? While you can see these in the Google Play store, you may not know that some of those permissions are impacting your privacy more severely than you realise.
+
+While permissions are taking control of your device with or without your knowledge, third party trackers also compromise your privacy by stealthily collecting data without your consent. And the worst part is that you have no clue as to how many trackers you have in your Android app.
+
+To view the permissions for an Android app and the trackers in it, use Exodus Privacy.
+
+Exodus Privacy is an Android app that has been created and maintained by a French non-profit organisation. While the app is not capable of any analysis itself, it fetches reports from the Exodus platform for the apps that are installed on your device.
+
+These reports are auto-generated using the static analysis method and, currently, the Exodus platform contains 58,392 reports. Each report gives you information about the number of trackers and permissions.
+
+Permissions are evaluated using the three levels of Google Permission Classification. These are ‘Normal’, ‘Signature’ and ‘Dangerous’.
We should be concerned about the ‘Dangerous’ level because such permissions can access the user’s private and other stored sensitive data.
+
+Trackers are also listed in this app. When you click on a tracker, you will be taken to a page which shows you the other Android apps that have that particular tracker. This can be really useful to know if the same tracker has been used in the other apps that you have installed.
+
+In addition, the reports contain information such as the ‘Fingerprint’ and other geographical details about the app publisher, such as ‘Country’, ‘State’ and ‘Locality’.
+
+![Figure 4: xBrowserSync][6]
+
+![Figure 5: Scrambled Exif][7]
+
+**Net Guard**
+Most Android apps need network access to function properly, but offline apps don’t need this to operate. Yet some of these offline apps continue to run in the background and use network access for some reason or the other. As a result, your battery gets drained very quickly and the data plan on your phone gets exhausted faster than you think.
+
+Net Guard solves this problem by blocking the network access of selected apps. Net Guard will only block the outgoing traffic from apps, not what’s incoming.
+
+The Net Guard main window displays all the installed apps. For every app you will see the ‘mobile network’ icon and the ‘Wi-Fi’ icon. When they are both green, it means that Net Guard will allow the app to have network access via the mobile network and Wi-Fi. Alternatively, you can enable any one of them; for example, you can allow the app to use the Internet only via the mobile network by clicking on the ‘Mobile network’ icon to turn it green while the ‘Wi-Fi’ icon is red.
+
+When both the ‘Mobile network’ and ‘Wi-Fi’ icons are red, the app’s outgoing traffic is blocked.
+Also, when ‘Lockdown’ mode is enabled, it will block the network access for all apps except those that are configured to have network access in the ‘Lockdown’ mode too. This is useful when you have very little battery left and your data plan is about to expire.
+
+Net Guard can also block network access to the system apps, but please be cautious about this because sometimes, when the user blocks Internet access to some critical system apps, it could result in a malfunction of other apps.
+
+**xBrowserSync**
+xBrowserSync is a free and open source service that helps to sync bookmarks across your devices. Most sync services require you to sign up and keep your data with them.
+
+xBrowserSync is an anonymous and secure service, for which you need not sign up. To use this service you need to know your sync ID and have a strong password for it.
+
+Currently, xBrowserSync supports the Mozilla and Chrome browsers; so if you’re using either one of them, you can proceed further. Also, if you have to transfer a huge number of bookmarks from your existing service to xBrowserSync, it is advised that you have a backup of all your bookmarks before you create your first sync.
+
+You can create your first sync by entering a strong password for it. After your sync is created, a unique sync ID will be shown to you, which can be used to sync your bookmarks across your devices.
+
+xBrowserSync encrypts all your data locally before it is synced. It uses PBKDF2 with 250,000 iterations of SHA-256 for key derivation, to combat brute force attacks.
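+To get a feel for what such a key derivation looks like, you can reproduce a PBKDF2 run with the same parameters using the `openssl kdf` subcommand (available in OpenSSL 3.0 and later). This is only an illustration of the algorithm the service describes, not its actual code; the password and salt values are made up:
+
+```
+openssl kdf -keylen 32 \
+    -kdfopt digest:SHA256 \
+    -kdfopt pass:my-master-password \
+    -kdfopt salt:1f2e3d4c5b6a79880123456789abcdef \
+    -kdfopt iter:250000 \
+    PBKDF2
+```
+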
+In addition, it uses AES-GCM with a random 16-byte IV (initialization vector: a random number that is used together with the secret key to encrypt the data), with the user's 32-character sync ID as the salt value. All of these measures are in place to ensure that your data cannot be decrypted without your password.
+
+The app provides you with a sleek interface that makes it easy for you to add bookmarks, and share and edit them by adding descriptions and tags to them.
+
+xBrowserSync is currently hosted by four providers, including the official one. So, to accommodate all the users, synced data that isn’t accessed for a long time is removed. If you don’t want to be dependent on other providers, you can host xBrowserSync yourself.
+
+![Figure 6: Riseup VPN][8]
+
+**Scrambled Exif**
+When we share our photos on social media, sometimes we share the metadata on those photos accidentally. Metadata can be useful in some situations, but it can also pose a serious threat to your privacy. A typical photo may carry pieces of data such as the date and time, the make and model of the camera, the phone name, and the location. When all these pieces of data are put together by a system or by a group of people, they are able to determine your location at that particular time.
+
+So if you want to share your photos with your friends as well as on social media without divulging metadata, you can use Scrambled Exif.
+
+Scrambled Exif is a free and open source tool which removes the Exif data from your photos. After installing the app, when you want to share a photo, click on the ‘Share’ button for the photo, and it will show you the available options for sharing — choose ‘Scrambled Exif’. Once you have done that, all the metadata is removed from that photo, and you will again be shown the share list. From there on, you can share your photos normally.
+
+**Riseup VPN**
+Riseup VPN (Virtual Private Network) is a tool that enables you to protect your identity, bypass the censorship imposed on your network, and encrypt your Internet traffic. Some VPN service providers log your IP address and quietly betray your trust.
+
+Riseup VPN is a personal VPN service offered by the Riseup Organization, a non-profit that fights for a free Internet by providing tools and other resources for anyone who wants to enjoy the Internet without being restrained.
+
+To use the Riseup VPN, you do not need to register, nor do you need to configure the settings — it is all prepped for you. All you need is to click on the ‘Turn on’ button and within a few moments, you can see that your traffic is routed through the Riseup networks. By default, Riseup does not log your IP address.
+
+At present, Riseup VPN supports the Riseup networks in Hong Kong and Amsterdam.
+
+![Figure 7: Secure Photo Viewer][9]
+
+**Secure Photo Viewer**
+When you want to show a cool picture of yours to your friends by giving your phone to them, some of them may get curious and go to your gallery to view all your photos. Once you unlock the gallery, you cannot control what should be shown and what ought to be hidden, as long as your phone is with them.
+
+Secure Photo Viewer fixes this problem. After installing it, choose the photos or videos you want to show to a friend and click ‘share’. This will show ‘Secure Photo Viewer’ in the available options. Once you click on it, a new window will open and it will instruct you to lock your device. Within a few seconds the photo you have chosen will show up on the screen.
Now you can show your friends just that photo, and they can’t get into your gallery and view the rest of your private photos.
+
+Most of the apps listed here are available on F-Droid as well as on Google Play. I recommend using F-Droid because every app there is compiled from its source code by F-Droid itself, so it is unlikely to have malicious code injected into it.
+
+--------------------------------------------------------------------------------
+
+via: https://opensourceforu.com/2019/10/the-best-android-apps-for-protecting-privacy-and-keeping-information-secure/
+
+作者:[Magimai Prakash][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensourceforu.com/author/magimai-prakash/
+[b]: https://github.com/lujun9972
+[1]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Android-Apps-security.jpg?resize=696%2C658&ssl=1 (Android Apps security)
+[2]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Android-Apps-security.jpg?fit=890%2C841&ssl=1
+[3]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-1-Safe-Notes.jpg?resize=211%2C364&ssl=1
+[4]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-2-Exodus-Privacy.jpg?resize=225%2C386&ssl=1
+[5]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-3-Net-Guard.jpg?resize=226%2C495&ssl=1
+[6]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-4-xBrowserSync.jpg?resize=251%2C555&ssl=1
+[7]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-5-Scrambled-Exif-350x535.jpg?resize=235%2C360&ssl=1
+[8]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-6-Riseup-VPN.jpg?resize=242%2C536&ssl=1
+[9]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-7-Secure-Photo-Viewer.jpg?resize=228%2C504&ssl=1
diff --git a/sources/tech/20191002 How to create the user interface for your Corteza Low Code application.md b/sources/tech/20191002 How to create the user interface for your Corteza Low Code application.md
new file mode 100644
index 0000000000..52056a29ac
--- /dev/null
+++ b/sources/tech/20191002 How to create the user interface for your Corteza Low Code application.md
@@ -0,0 +1,240 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to create the user interface for your Corteza Low Code application)
+[#]: via: (https://opensource.com/article/19/10/corteza-low-code-user-interface)
+[#]: author: (Lenny Horstink https://opensource.com/users/lenny-horstink)
+
+How to create the user interface for your Corteza Low Code application
+======
+Add a user-friendly interface to your application built in Corteza Low
+Code, an open source alternative to Salesforce.
+![metrics and data shown on a computer screen][1]
+
+In the first two articles in this series, I explained how to use Corteza Low Code to [create an application][2] to track donations and [set up its data structure][3] with modules and fields. In the third article, I will explain how to create the graphical part of the Donations application.
+
+**Pages** is the HTTP web layer of Corteza Low-Code. For ease of design, and to ensure your application is responsive and mobile-ready by default, pages are built from blocks. Each block can be resized and dragged wherever you desire.
In all blocks, you can define the title, the description, and the layout. + +There are two types of pages: **Record** pages (which show data for or related to a single record) and **List** pages (which show a searchable list of multiple records). Each type is described below. + +### Record pages + +A module without a Record page cannot do anything. To store data inside a module, you need to create a Record page and add it to a module by selecting the appropriate **Page builder** button on the **Modules** page. This opens the drag-and-drop page editor. + +The Donations application has four modules, and each one has the **Page builder** link: + +![Page Builder Link][4] + +First, create the record page for the **Contact** module. When you click on the **Page builder** link, an empty record page opens. Add blocks with the **\+ Add block** button. + +![Add block button][5] + +There are multiple block types available. + +![Block types][6] + +The "Contact" record page in the "Donations" application uses two block types: **Record** and **Record list**. + +#### Record blocks + +The **Record** block is the most important block for a Record page. You can select the block's layout and the fields you want to show. The **Contact** record page needs to show: _Name_, _Email_, _Phone,_ and _Address_. Select those fields and hit **Save and close**, and the block will be added. + +![Form to change Record block][7] + +When you view a record, the values of these fields are shown as strings, and when you add or edit a record, these fields turn into form-input fields. + +Tip: You can drag-and-drop the fields and place them in any order you prefer. + +#### Record list blocks + +The **Contact** page will show the list of donation each contact has given. Create a list of records by selecting the **Record list** block. + +Make **Donations** the title, and select the **ContactDonation** module in the **Module** field. After selecting a module, the columns that are available are populated automatically, and you can select the columns you want to show in the **Record list**: _Donation_, _Project_, and the system field _Created at_. + +If you saved the **Record list** block right now, you would see all donations from all contacts. Because you want to show the donations related to a single contact record, you need to add a **prefilter**. + +The **Prefilter records** field allows simplified SQL "Where" conditions, and variables like **${recordID}**, **${ownerID}**, and **${userID}** are evaluated (when available). For the **Record list**, you want to filter **ContactDonation** records by contact, so you need to fill in: **${recordID} = contactId**. Note: **contactId** is a **Record** field in the module **ContactDonation**. Take a look back at the [second article][3] in this series for more info about linking modules. + +You also want to be able to sort a contact's donations by date. This can be done in the **Presort records** field by inserting **createdAt DESC**. This field supports simplified SQL _Order by_ condition syntax. + +You can also select to hide or show the **New record** button and Search box, and you can define the number of records shown. A best practice is to adjust this number to the size of the block. + +![Form to change Record list block][8] + +To save the block and add it to the page, hit **Save and close**. Now the second block has been added to the page. 
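+As a quick reference, both fields accept short SQL-like expressions. The first two lines below are exactly the values used on this page; the third is a hypothetical illustration (it assumes a Record field named `ownerId` exists) showing how the other supported variables can be combined with your own fields:
+
+```
+${recordID} = contactId    -- Prefilter: only donations of the viewed contact
+createdAt DESC             -- Presort: newest donations first
+${userID} = ownerId        -- Prefilter (hypothetical): records owned by the current user
+```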
+
+#### Other block types
+
+Other block types are:
+
+  * **Content:** This block allows you to add fixed text, which you can create with a rich text editor. This is ideal for "help" texts or links to resources, such as the sales handbook on an intranet.
+  * **Chart:** Inserts charts that have been created with the chart builder. This is very useful when you are creating dashboards.
+  * **Social media feed:** You can show live content from Twitter here—either a fixed Twitter feed (which is shown in all records) or from a Module field that represents a Twitter link (which enables each record to have its own feed).
+  * **Automation:** In this block, you can add automation rules that have a manual trigger and that are available for the module, as well as automation rules with no primary module. They are shown to end users as buttons. You can format the automation rule buttons by inserting custom text and selecting a style, and you can change the order of them (when you have multiple buttons) with a drag-and-drop.
+  * **Calendar:** This block inserts a calendar, which can be shown in the following formats:
+    * Month
+    * Month agenda
+    * Week agenda
+    * Day agenda
+    * Month list
+    * Week list
+    * Day list
+
+    The source of the calendar is a list of records from one or multiple modules. For each source, you can select which field represents the title, start date, and end date of the event.
+  * **File:** You can upload a file and show it on the page. Just like the **Content** block, the content of this block will be the same for all records. To have files that are related to a record, you need to use the **File** field type when creating fields in a module.
+
+Next, add the Record pages for the other modules in the Donations application. Once that is done, you will see the following list under **Pages**:
+
+![List of pages][9]
+
+### Change the page layout
+
+After adding blocks to pages, such as the **Contact Details** and **Donations** blocks in the **Contact** module's Record page, you can resize and position them to create the layout you want.
+
+![Moving blocks around][10]
+
+The end result is:
+
+![Corteza layout][11]
+
+Corteza Low-Code is responsive by default, so the blocks will resize and reposition automatically on devices with small screens.
+
+### List pages
+
+List pages are not related to any single record; rather, they show lists of records. This page type is used to create a home page, list of contacts, list of projects, dashboards, etc. List pages are important because the **Add new record** button is shown on lists, so you can't enter new records without viewing one.
+
+For the Donations application, create the following list pages: _Home_, _Contacts_, and _Projects_.
+
+To create a List page, you need to go to the **Pages** administrative page and enter a title in the **Create a new page** box at the top. When you submit this form, it opens the **Edit page** form, which allows you to add a page description (for internal use; the end user will not see it), and you can set the page to **Enabled** so it can be accessed.
+
+Your list of pages will now look like:
+
+![List of pages][12]
+
+You can drag-and-drop to rearrange this to:
+
+![List of pages][13]
+
+Rearranging pages makes it easier to maintain the application. It also allows you to generate the application menu structure because List pages (but not Record pages) are shown as menu items.
+
+Adding content to each List page is exactly the same as adding blocks to Record pages.
The only difference is that you cannot select the **Record** block type (because it is related to a single record). + +### Create a menu + +The menu in a Corteza Low-Code application is automatically generated by the tree of pages on the admin page **Pages**. It only shows List pages and ignores Record pages. + +To reorder the menu, simply drag-and-drop the pages in the desired order within the tree of pages. + +### Add charts + +Everybody loves charts and graphs. If pictures are worth 1,000 words, then you can create a priceless application in Corteza. + +Corteza Low-Code comes with a chart builder that allows you to build line, bar, pie, and donut charts: + +![Chart types available in Corteza Low Code][14] + +As an example, add a chart that shows how many donations have been made to each Project. To begin, enter the **Charts** page in the admin menu. + +![Corteza charts admin page][15] + +To create a new chart, use the **Create a new chart** field. + +Inside the chart builder, you will find the following fields: + + * **Name:** Enter a name for the chart; e.g., _Donations_. + * **Module:** This is the module that provides the data to the chart. + * **Filters:** You can select one of the predefined filters, such as **Records created this year**, or add any custom filter (such as **status = "Active"**). + * **Dimensions:** These can be **Datetime** and **Select** fields. Datetime fields allow grouping (e.g., by day, by week, by month). The **Skip missing values** option is handy to remove values that would return null (e.g., records with incomplete data), and **Calculate how many labels can be shown** can avoid overlapping labels (which is useful for charts with many dates on the X-axis). + * **Metrics:** Metrics are numeric fields and have a predefined _count_ option. You can add multiple metric blocks and give each a different label, field (source), function (COUNTD, SUM, MAX, MIN, AVG, or STD, if possible), output (line or bar), and color. + + + +This sample chart uses the **ContactDonation** module and shows total donations per day. + +![Chart of donations per day][16] + +The final step is to add a chart to a page. To add this chart to the home page: + + * Enter **Pages** in the admin menu. + * Click on the **Page builder** link of the **Home** page. + * Add a page block of the type **Chart**, add a block title, and select the chart. + * Resize and reposition the block (or blocks) to make the layout look nice. + + + +![Chart added][17] + +When you save the page and enter your Donation application (via the **Public pages** link on the top right), you will see the home page with the chart. + +![Chart displayed on Corteza UI][18] + +### Add automation + +Automation can make your Corteza Low Code application more efficient. With the Automation tool, you can create business logic that evaluates records automatically when they are created, updated, or deleted, or you can execute a rule manually. + +Triggers are written in JavaScript, one of the most used programming languages in the world, enabling you to write simple code that can evaluate, calculate, and transform data (such as numbers, strings, or dates). Corteza Low Code comes with extra functions that allow you to access, create, save, or delete records; find users; send notifications via email; use Corteza Messaging; and more. + +[Corteza CRM][19] has an extensive set of automation rules that can be used as examples. 
Some of them are: + + * Account: Create new case + * Account: Create new opportunity + * Case: Insert case number + * Contract: Send contract to custom email + * Lead: Convert a lead into an account and opportunity + * Opportunity: Apply price book + * Opportunity: Generate new quote + * Quote: Submit quote for approval + + + +A complete manual on how to use the automation module, together with code examples, is in development. + +### Deploy an application + +Deploying a Corteza Low Code application is very simple. As soon as it's Enabled, it's deployed and available in the Corteza Low Code Namespaces menu. Once deployed, you can start using your application! + +### For more information + +As I mentioned in parts 1 and 2 of this series, the complete Donations application created in this series is available on the [Corteza community server][20]. You need to be logged in or create a free Corteza community server account to check it out. + +Also, check out the documentation on the [Corteza website][21] for other, up-to-date user and admin tutorials. + +If you have any questions—or would like to contribute—please join the [Corteza Community][22]. After you log in, please introduce yourself in the #Welcome channel. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/10/corteza-low-code-user-interface + +作者:[Lenny Horstink][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/lenny-horstink +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_data_dashboard_system_computer_analytics.png?itok=oxAeIEI- (metrics and data shown on a computer screen) +[2]: https://opensource.com/article/19/9/how-build-application-corteza-low-code-open-source-alternative-salesforce +[3]: https://opensource.com/article/19/9/creating-data-structure-corteza-low-code +[4]: https://opensource.com/sites/default/files/uploads/corteza_donationspagebuilderlink.png (Page Builder Link) +[5]: https://opensource.com/sites/default/files/uploads/corteza_addblock.png (Add block button) +[6]: https://opensource.com/sites/default/files/uploads/corteza_blocktypes.png (Block types) +[7]: https://opensource.com/sites/default/files/uploads/corteza_changerecordblock.png (Form to change Record block) +[8]: https://opensource.com/sites/default/files/uploads/corteza_changerecordlistblock.png (Form to change Record list block) +[9]: https://opensource.com/sites/default/files/uploads/corteza_pageslist.png (List of pages) +[10]: https://opensource.com/sites/default/files/uploads/corteza_movingblocks.png (Moving blocks around) +[11]: https://opensource.com/sites/default/files/uploads/corteza_layoutresult.png (Corteza layout) +[12]: https://opensource.com/sites/default/files/uploads/corteza_pageslist2.png (List of pages) +[13]: https://opensource.com/sites/default/files/uploads/corteza_pageslist3.png (List of pages) +[14]: https://opensource.com/sites/default/files/uploads/corteza_charttypes.png (Chart types available in Corteza Low Code) +[15]: https://opensource.com/sites/default/files/uploads/corteza_createachart.png (Corteza charts admin page) +[16]: https://opensource.com/sites/default/files/uploads/corteza_chartdonationsperday.png (Chart of donations per day) +[17]: 
https://opensource.com/sites/default/files/uploads/corteza_addchartpreview.png (Chart added) +[18]: https://opensource.com/sites/default/files/uploads/corteza_pageshowingchart.png (Chart displayed on Corteza UI) +[19]: https://cortezaproject.org/technology/core/corteza-crm/ +[20]: https://latest.cortezaproject.org/compose/ns/donations/ +[21]: https://www.cortezaproject.org/ +[22]: https://latest.cortezaproject.org/ diff --git a/sources/tech/20191003 4 open source eBook readers for Android.md b/sources/tech/20191003 4 open source eBook readers for Android.md new file mode 100644 index 0000000000..f2c6638bc4 --- /dev/null +++ b/sources/tech/20191003 4 open source eBook readers for Android.md @@ -0,0 +1,174 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (4 open source eBook readers for Android) +[#]: via: (https://opensource.com/article/19/10/open-source-ereaders-android) +[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt) + +4 open source eBook readers for Android +====== +Looking for a new eBook app? Check out these four solid, open source +eBook readers for Android. +![Computer browser with books on the screen][1] + +Who doesn't like a good read? Instead of frittering away your time on social media or a [messaging app][2], you can enjoy a book, magazine, or another document on your Android-powered phone or tablet. + +To do that, all you need is the right eBook reader app. So let's take a look at four solid, open source eBook readers for Android. + +### Book Reader + +Let's start off with my favorite open source Android eBook reader: [Book Reader][3]. It's based on the older, open source version of the now-proprietary FBReader app. Like earlier versions of its progenitor, Book Reader is simple and minimal, but it does a great job. + +**Pros of Book Reader:** + + * It's easy to use. + * The app's interface follows Android's [Material Design guidelines][4], so it's very clean. + * You can add bookmarks to an eBook and share text with other apps on your device. + * There's growing support for languages other than English. + + + +**Cons of Book Reader:** + + * Book Reader has a limited number of configuration options. + * There's no built-in dictionary or support for an external dictionary. + + + +**Supported eBook formats:** + +Book Reader supports EPUB, .mobi, PDF, [DjVu][5], HTML, plain text, Word documents, RTF, and [FictionBook][6]. + +![Book Reader Android app][7] + +Book Reader's source code is licensed under the GNU General Public License version 3.0, and you can find it on [GitLab][8]. + +### Cool Reader + +[Cool Reader][9] is a zippy and easy-to-use eBook app. While I think the app's icons are reminiscent of those found in Windows Vista, Cool Reader does have several useful features. + +**Pros of Cool Reader:** + + * It's highly configurable. You can change fonts, line and paragraph spacing, hyphenation, font sizes, margins, and background colors. + * You can override the stylesheet in a book. I found this useful with two or three books that set all text in small capital letters. + * It automatically scans your device for new books when you start it up. You can also access books on [Project Gutenberg][10] and the [Internet Archive][11]. + + + +**Cons of Cool Reader:** + + * Cool Reader doesn't have the cleanest or most modern interface. + * While it's usable out of the box, you really need to do a bit of configuration to make Cool Reader comfortable to use. 
 * The app's default dictionary is proprietary, although you can swap it out for [an open one][12].



**Supported eBook formats:**

You can use Cool Reader to browse EPUB, FictionBook, plain text, RTF, HTML, [Compiled HTML Help][13] (.chm), and TCR (the eBook format for the Psion series of handheld computers) files.

![Cool Reader Android app][14]

Cool Reader's source code is licensed under the GNU General Public License version 2, and you can find it on [Sourceforge][15].

### KOReader

[KOReader][16] was originally created for [E Ink][17] eBook readers but found its way to Android. While testing it, I found KOReader to be both useful and frustrating in equal measures. It's definitely not a bad app, but it's not my first choice.

**Pros of KOReader:**

 * It's highly configurable.
 * It supports multiple languages.
 * It allows you to look up words using a [dictionary][18] (if you have one installed) or Wikipedia (if you're connected to the internet).



**Cons of KOReader:**

 * You need to change the settings for each book you read. KOReader doesn't remember settings when you open a new book.
 * The interface is reminiscent of a dedicated eBook reader. The app doesn't have that Android look and feel.



**Supported eBook formats:**

You can view PDF, DjVu, CBT, and [CBZ][5] eBooks. It also supports EPUB, FictionBook, .mobi, Word documents, text files, and [Compiled HTML Help][13] (.chm) files.

![KOReader Android app][19]

KOReader's source code is licensed under the GNU Affero General Public License version 3.0, and you can find it on [GitHub][20].

### Booky McBookface

Yes, that really is the name of [this eBook reader][21]. It's the most basic of the eBook readers in this article but don't let that (or the goofy name) put you off. Booky McBookface is easy to use and does the one thing it does quite well.

**Pros of Booky McBookface:**

 * There are no frills. It's just you and your eBook.
 * The interface is simple and clean.
 * Long-tapping the app's icon in the Android Launcher pops up a menu from which you can open the last book you were reading, get a list of unread books, or find and open a book on your device.



**Cons of Booky McBookface:**

 * The app has few configuration options—you can change the size of the font and the brightness, and that's about it.
 * You need to use the buttons at the bottom of the screen to navigate through an eBook. Tapping the edges of the screen doesn't work.
 * You can't add bookmarks to an eBook.



**Supported eBook formats:**

You can read eBooks in EPUB, HTML, or plain text formats with Booky McBookface.

![Booky McBookface Android app][22]

Booky McBookface's source code is available under the GNU General Public License version 3.0, and you can find it [on GitHub][23].

Do you have a favorite open source eBook reader for Android? Share it with the community by leaving a comment.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/open-source-ereaders-android

作者:[Scott Nesbitt][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/scottnesbitt
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_browser_program_books_read.jpg?itok=iNMWe8Bu (Computer browser with books on the screen)
[2]: https://opensource.com/article/19/3/open-messenger-client
[3]: https://f-droid.org/en/packages/com.github.axet.bookreader/
[4]: https://material.io/design/
[5]: https://opensource.com/article/19/3/comic-book-archive-djvu
[6]: https://en.wikipedia.org/wiki/FictionBook
[7]: https://opensource.com/sites/default/files/uploads/book_reader-book-list.png (Book Reader Android app)
[8]: https://gitlab.com/axet/android-book-reader/tree/HEAD
[9]: https://f-droid.org/en/packages/org.coolreader/
[10]: https://www.gutenberg.org/
[11]: https://archive.org
[12]: http://aarddict.org/
[13]: https://fileinfo.com/extension/chm
[14]: https://opensource.com/sites/default/files/uploads/cool_reader-icons.png (Cool Reader Android app)
[15]: https://sourceforge.net/projects/crengine/
[16]: https://f-droid.org/en/packages/org.koreader.launcher/
[17]: https://en.wikipedia.org/wiki/E_Ink
[18]: https://github.com/koreader/koreader/wiki/Dictionary-support
[19]: https://opensource.com/sites/default/files/uploads/koreader-lookup.png (KOReader Android app)
[20]: https://github.com/koreader/koreader
[21]: https://f-droid.org/en/packages/com.quaap.bookymcbookface/
[22]: https://opensource.com/sites/default/files/uploads/booky_mcbookface-menu.png (Booky McBookface Android app)
[23]: https://github.com/quaap/BookyMcBookface
diff --git a/sources/tech/20191003 Creating a perfect landing page for free.md b/sources/tech/20191003 Creating a perfect landing page for free.md
new file mode 100644
index 0000000000..877e133f50
--- /dev/null
+++ b/sources/tech/20191003 Creating a perfect landing page for free.md
@@ -0,0 +1,74 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Creating a perfect landing page for free)
[#]: via: (https://opensourceforu.com/2019/10/creating-a-perfect-landing-page-for-free/)
[#]: author: (Jagoda Glowacka https://opensourceforu.com/author/jagoda-glowacka/)

Creating a perfect landing page for free
======

[![][1]][2]

_Nowadays, running a business online has become more popular than running one the traditional way. Entrepreneurs are lured by the low barriers to entry, the ease of reaching a wide range of customers and the endless possibilities for growth. With the Internet and new technologies, it is far easier today to become an online entrepreneur than a traditional one. However, becoming an entrepreneur is one thing; staying on the market is another._

Since the digital business world is constantly expanding, the competition keeps getting fiercer and the quality of the products and services on offer keeps rising. That makes it harder to get noticed in a crowd of equally ambitious online businesses offering similar products. To survive, you need to play every card you have, and even if you have already done that, you should always be thinking about improvement and innovation.

One of those cards should definitely be a decent, nice-looking and attention-grabbing landing page that boosts your conversions and builds trust among your potential customers. Since you can easily [_create a landing page_][3] for free today, you should never deprive your business of one: it is a highly powerful tool that can get your business off the ground and bring in a lot of new leads. However, for it to do all of this, it has to be a high-quality landing page that is impeccable for your target audience.

**A landing page is a must for every online business**

The concept of landing pages arrived only a few years back, but those few years were enough for it to settle in and become a necessity for every online business. In the beginning, plenty of business owners chose to ignore landing pages and preferred to persuade themselves that a homepage was already enough. Well, sorry to break it to them – it's not.

**A homepage should never double as a landing page**

Obviously, a homepage is also a must for every online business; without one, the business can only exist in the entrepreneur's imagination ;-) However, the purpose of a homepage is not the same as the purpose of a landing page, and even the most state-of-the-art business website does not replace a good landing page.

Homepages do serve multiple purposes, but none of them is focused on attracting new clients, as they don't clearly encourage visitors to take an action such as subscribing or filling out a contact form. A homepage's primary focus is the company itself – its full offer, its history or its founder – which leaves it full of distracting information and links. And last but not least, the information on it is not arranged in an order that makes visitors desire the product instantly.

**Landing pages prompt action**

A landing page is a standalone page that serves as a first-impression maker for your visitors. It is the place where your new potential customers land, and to keep them there, you need to show them instantly that your solution is something they need. It should grab visitors' attention, engage them in an action and get them interested in your product or service – and it should do all of that as quickly as possible.

Landing pages are therefore a great tool to increase your conversion rate, gather information about your visitors, engage new potential leads in an action (such as signing up for a free trial or a newsletter, which provides you with personal information about them) and convince them that your product or service is worthwhile. However, to fulfill all these functions, a landing page needs to contain all the essential elements, and it has to be of high quality.

**Every landing page needs some core features**

To create a landing page that converts well, you need to plan its structure and include all the essential elements that will help you achieve your goals. The core elements that should be placed on every landing page are:

 * headlines, which should be catchy, keyword-focused and eye-catching. A headline is the first, and sometimes the only, element that visitors read, so it has to be well thought out and a little intriguing,
 * subheadlines, which should complete the headline – a little more descriptive, but still keyword-focused and catchy,
 * the benefits of your solution, clearly outlined and demonstrating to your potential leads its high value and why purchasing it is a necessity,
 * a call to action in a visible place that allows visitors to sign up for a free trial, coupons or a newsletter, or to purchase right away.



All of these features, put together in the right order, enable you to boost your conversions and make your product or service absolutely desirable to your customers. They are the core elements of every landing page, and leaving any of them out raises the risk that the page will fail.

However, including all the elements is one thing; designing the landing page is another. When planning its structure, you should always keep in mind who your target audience is and adjust the page's look accordingly. You should also keep up with landing page trends, which keep your landing page up to date and appealing to customers.

If all of this sounds confusing, and you are a landing page newbie or still don't feel confident creating one, you can make the task easier and use a highly powerful tool that landing page experts have prepared for you. That is a [_free landing page creator_][4], which helps you create a high-quality, eye-catching landing page in less than an hour.

**Creating a free landing page is a piece of cake**

Today, the digital marketing world is full of bad-quality landing pages that don't really work miracles for businesses. To get all the benefits, the quality of the landing page is crucial, and choosing a landing page builder designed by landing page experts is one of the safest ways to create an excellent one.

These are online tools that gently guide you through the whole creation process, making it effortless and quick. They are full of built-in features such as landing page layouts and templates, drag-and-drop editing, simple copying and moving, and tailoring your landing page to every type of device. Thanks to a free trial period, you can use these builders for free for up to 14 days. Quite nice, huh? ;-)

--------------------------------------------------------------------------------

via: https://opensourceforu.com/2019/10/creating-a-perfect-landing-page-for-free/

作者:[Jagoda Glowacka][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensourceforu.com/author/jagoda-glowacka/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2017/09/Long-wait-open-for-webpage-in-broser-using-laptop.jpg?resize=696%2C405&ssl=1 (Long wait open for webpage in broser using laptop)
[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2017/09/Long-wait-open-for-webpage-in-broser-using-laptop.jpg?fit=1996%2C1162&ssl=1
[3]: https://landingi.com/blog/how-to-create-landing-page
[4]: https://landingi.com/free-landing-page
diff --git a/sources/tech/20191003 How to Run the Top Command in Batch Mode.md b/sources/tech/20191003 How to Run the Top Command in Batch Mode.md
new file mode 100644
index 0000000000..4516e08387
--- /dev/null
+++ b/sources/tech/20191003 How to Run the Top Command in Batch Mode.md
@@ -0,0 +1,335 @@
[#]: collector: (lujun9972)
[#]: translator: (way-ww)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Run the Top Command in Batch Mode)
[#]: via: (https://www.2daygeek.com/linux-run-execute-top-command-in-batch-mode/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)

How to Run the Top Command in Batch Mode
====== 

The **[Linux Top command][1]** is the best and most well-known command that everyone uses to **[monitor Linux system performance][2]**.

You probably already know most of its options, but a few are less familiar, and if I’m not wrong, “batch mode” is one of them.

Most script writers and developers know it, because this option is mainly used when writing scripts.

If you’re not sure about it, don’t worry – we’re here to explain it.

### What is “Batch Mode” in the Top Command

The “Batch Mode” option allows you to send top command output to other programs or to a file.

In this mode, top will not accept input and runs until the iterations limit you’ve set with the “-n” command-line option.

If you want to fix any performance issues on a Linux server, you need to **[understand the top command output][3]** correctly.

### 1) How to Run the Top Command in Batch Mode

By default, the top command sorts the results based on CPU usage, so when you run the top command below in batch mode, it does the same and prints the first 35 lines.

```
# top -bc | head -35

top - 06:41:14 up 8 days, 20:24, 1 user, load average: 0.87, 0.77, 0.81
Tasks: 139 total, 1 running, 136 sleeping, 0 stopped, 2 zombie
%Cpu(s): 0.0 us, 3.2 sy, 0.0 ni, 96.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 3880940 total, 1595932 free, 886736 used, 1398272 buff/cache
KiB Swap: 1048572 total, 514640 free, 533932 used. 
2648472 avail Mem + +PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND + 1 root 20 0 191144 2800 1596 S 0.0 0.1 5:43.63 /usr/lib/systemd/systemd --switched-root --system --deserialize 22 + 2 root 20 0 0 0 0 S 0.0 0.0 0:00.32 [kthreadd] + 3 root 20 0 0 0 0 S 0.0 0.0 0:28.10 [ksoftirqd/0] + 5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kworker/0:0H] + 7 root rt 0 0 0 0 S 0.0 0.0 0:33.96 [migration/0] + 8 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [rcu_bh] + 9 root 20 0 0 0 0 S 0.0 0.0 63:05.12 [rcu_sched] + 10 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [lru-add-drain] + 11 root rt 0 0 0 0 S 0.0 0.0 0:08.79 [watchdog/0] + 12 root rt 0 0 0 0 S 0.0 0.0 0:08.82 [watchdog/1] + 13 root rt 0 0 0 0 S 0.0 0.0 0:44.27 [migration/1] + 14 root 20 0 0 0 0 S 0.0 0.0 1:22.45 [ksoftirqd/1] + 16 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kworker/1:0H] + 18 root 20 0 0 0 0 S 0.0 0.0 0:00.01 [kdevtmpfs] + 19 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [netns] + 20 root 20 0 0 0 0 S 0.0 0.0 0:01.35 [khungtaskd] + 21 root 0 -20 0 0 0 S 0.0 0.0 0:00.02 [writeback] + 22 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kintegrityd] + 23 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [bioset] + 24 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kblockd] + 25 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [md] + 26 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [edac-poller] + 33 root 20 0 0 0 0 S 0.0 0.0 1:19.07 [kswapd0] + 34 root 25 5 0 0 0 S 0.0 0.0 0:00.00 [ksmd] + 35 root 39 19 0 0 0 S 0.0 0.0 0:12.80 [khugepaged] + 36 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [crypto] + 44 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kthrotld] + 46 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kmpath_rdacd] +``` + +### 2) How to Run the Top Command in Batch Mode and Sort the Output Based on Memory Usage + +Run the below top command to sort the results based on memory usage in batch mode. + +``` +# top -bc -o +%MEM | head -n 20 + +top - 06:42:00 up 8 days, 20:25, 1 user, load average: 0.66, 0.74, 0.80 +Tasks: 146 total, 1 running, 145 sleeping, 0 stopped, 0 zombie +%Cpu(s): 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st +KiB Mem : 3880940 total, 1422044 free, 1059176 used, 1399720 buff/cache +KiB Swap: 1048572 total, 514640 free, 533932 used. 
2475984 avail Mem + + PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND + 18105 mysql 20 0 1453900 156096 8816 S 0.0 4.0 2:12.98 /usr/sbin/mysqld --daemonize --pid-file=/var/run/mysqld/mysqld.pid + 1841 root 20 0 228980 107036 5360 S 0.0 2.8 0:05.56 /usr/local/cpanel/3rdparty/perl/528/bin/perl -T -w /usr/local/cpanel/3rdparty/bin/spamd --max-children=3 --max-spare=1 --allowed-ips=127.0.0.+ + 4301 root 20 0 230208 104608 1816 S 0.0 2.7 0:03.77 spamd child + 8139 nobody 20 0 257000 27108 3408 S 0.0 0.7 0:00.04 /usr/sbin/httpd -k start + 7961 nobody 20 0 256988 26912 3160 S 0.0 0.7 0:00.05 /usr/sbin/httpd -k start + 8190 nobody 20 0 256976 26812 3140 S 0.0 0.7 0:00.05 /usr/sbin/httpd -k start + 8353 nobody 20 0 256976 26812 3144 S 0.0 0.7 0:00.04 /usr/sbin/httpd -k start + 8629 nobody 20 0 256856 26736 3108 S 0.0 0.7 0:00.02 /usr/sbin/httpd -k start + 8636 nobody 20 0 256856 26712 3100 S 0.0 0.7 0:00.03 /usr/sbin/httpd -k start + 8611 nobody 20 0 256844 25764 2228 S 0.0 0.7 0:00.01 /usr/sbin/httpd -k start + 8451 nobody 20 0 256844 25760 2220 S 0.0 0.7 0:00.04 /usr/sbin/httpd -k start + 8610 nobody 20 0 256844 25748 2224 S 0.0 0.7 0:00.01 /usr/sbin/httpd -k start + 8632 nobody 20 0 256844 25744 2216 S 0.0 0.7 0:00.03 /usr/sbin/httpd -k start +``` + +**Details of the above command:** + + * **-b :** Batch mode operation + * **-c :** To print the absolute path of the running process + * **-o :** To specify fields for sorting processes + * **head :** Output the first part of files + * **-n :** To print the first “n” lines + + + +### 3) How to Run the Top Command in Batch Mode and Sort the Output Based on a Specific User Process + +If you want to sort results based on a specific user, run the below top command. + +``` +# top -bc -u mysql | head -n 10 + +top - 06:44:58 up 8 days, 20:27, 1 user, load average: 0.99, 0.87, 0.84 +Tasks: 140 total, 1 running, 137 sleeping, 0 stopped, 2 zombie +%Cpu(s): 13.3 us, 3.3 sy, 0.0 ni, 83.3 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st +KiB Mem : 3880940 total, 1589832 free, 885648 used, 1405460 buff/cache +KiB Swap: 1048572 total, 514640 free, 533932 used. 2649412 avail Mem + + PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND + 18105 mysql 20 0 1453900 156888 8816 S 0.0 4.0 2:16.42 /usr/sbin/mysqld --daemonize --pid-file=/var/run/mysqld/mysqld.pid +``` + +### 4) How to Run the Top Command in Batch Mode and Sort the Output Based on the Process Age + +Use the below top command to sort the results based on the age of the process in batch mode. It shows the total CPU time the task has used since it started. + +But if you want to check how long a process has been running on Linux, go to the following article. + + * **[Five Ways to Check How Long a Process Has Been Running in Linux][4]** + + + +``` +# top -bc -o TIME+ | head -n 20 + +top - 06:45:56 up 8 days, 20:28, 1 user, load average: 0.56, 0.77, 0.81 +Tasks: 148 total, 1 running, 146 sleeping, 0 stopped, 1 zombie +%Cpu(s): 0.0 us, 3.1 sy, 0.0 ni, 96.9 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st +KiB Mem : 3880940 total, 1378664 free, 1094876 used, 1407400 buff/cache +KiB Swap: 1048572 total, 514640 free, 533932 used. 
2440332 avail Mem + + PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND + 9 root 20 0 0 0 0 S 0.0 0.0 63:05.70 [rcu_sched] + 272 root 20 0 0 0 0 S 0.0 0.0 16:12.13 [xfsaild/vda1] + 3882 root 20 0 229832 6212 1220 S 0.0 0.2 9:00.84 /usr/sbin/httpd -k start + 1 root 20 0 191144 2800 1596 S 0.0 0.1 5:43.75 /usr/lib/systemd/systemd --switched-root --system --deserialize 22 + 3761 root 20 0 68784 9820 2048 S 0.0 0.3 5:09.67 tailwatchd + 3529 root 20 0 404380 3472 2604 S 0.0 0.1 3:24.98 /usr/sbin/rsyslogd -n + 3520 root 20 0 574208 572 164 S 0.0 0.0 3:07.74 /usr/bin/python2 -Es /usr/sbin/tuned -l -P + 444 dbus 20 0 58444 1144 612 S 0.0 0.0 2:23.90 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation + 18105 mysql 20 0 1453900 157152 8816 S 0.0 4.0 2:17.29 /usr/sbin/mysqld --daemonize --pid-file=/var/run/mysqld/mysqld.pid + 249 root 0 -20 0 0 0 S 0.0 0.0 1:28.83 [kworker/0:1H] + 14 root 20 0 0 0 0 S 0.0 0.0 1:22.46 [ksoftirqd/1] + 33 root 20 0 0 0 0 S 0.0 0.0 1:19.07 [kswapd0] + 342 root 20 0 39472 2940 2752 S 0.0 0.1 1:18.17 /usr/lib/systemd/systemd-journald +``` + +### 5) How to Run the Top Command in Batch Mode and Save the Output to a File + +If you want to share the output of the top command to someone for troubleshooting purposes, redirect the output to a file using the following command. + +``` +# top -bc | head -35 > top-report.txt + +# cat top-report.txt + +top - 06:47:11 up 8 days, 20:30, 1 user, load average: 0.67, 0.77, 0.81 +Tasks: 133 total, 4 running, 129 sleeping, 0 stopped, 0 zombie +%Cpu(s): 59.4 us, 12.5 sy, 0.0 ni, 28.1 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st +KiB Mem : 3880940 total, 1596268 free, 843284 used, 1441388 buff/cache +KiB Swap: 1048572 total, 514640 free, 533932 used. 2659084 avail Mem + + PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND + 9686 daygeekc 20 0 406132 62184 43448 R 94.1 1.6 0:00.34 /opt/cpanel/ea-php56/root/usr/bin/php-cgi + 9689 nobody 20 0 256588 24428 1184 S 5.9 0.6 0:00.01 /usr/sbin/httpd -k start + 1 root 20 0 191144 2800 1596 S 0.0 0.1 5:43.79 /usr/lib/systemd/systemd --switched-root --system --deserialize 22 + 2 root 20 0 0 0 0 S 0.0 0.0 0:00.32 [kthreadd] + 3 root 20 0 0 0 0 S 0.0 0.0 0:28.11 [ksoftirqd/0] + 5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kworker/0:0H] + 7 root rt 0 0 0 0 S 0.0 0.0 0:33.96 [migration/0] + 8 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [rcu_bh] + 9 root 20 0 0 0 0 R 0.0 0.0 63:05.82 [rcu_sched] + 10 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [lru-add-drain] + 11 root rt 0 0 0 0 S 0.0 0.0 0:08.79 [watchdog/0] + 12 root rt 0 0 0 0 S 0.0 0.0 0:08.82 [watchdog/1] + 13 root rt 0 0 0 0 S 0.0 0.0 0:44.28 [migration/1] + 14 root 20 0 0 0 0 S 0.0 0.0 1:22.46 [ksoftirqd/1] + 16 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kworker/1:0H] + 18 root 20 0 0 0 0 S 0.0 0.0 0:00.01 [kdevtmpfs] + 19 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [netns] + 20 root 20 0 0 0 0 S 0.0 0.0 0:01.35 [khungtaskd] + 21 root 0 -20 0 0 0 S 0.0 0.0 0:00.02 [writeback] + 22 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kintegrityd] + 23 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [bioset] + 24 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kblockd] + 25 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [md] + 26 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [edac-poller] + 33 root 20 0 0 0 0 S 0.0 0.0 1:19.07 [kswapd0] + 34 root 25 5 0 0 0 S 0.0 0.0 0:00.00 [ksmd] + 35 root 39 19 0 0 0 S 0.0 0.0 0:12.80 [khugepaged] + 36 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [crypto] +``` + +### How to Sort Output Based on Specific Fields + +In the latest version of the top command release, press the **“f”** key to sort the 
fields via the field letter. + +To sort with a new field, use the **“up/down”** arrow to select the correct selection, and then press **“s”** to sort it. Finally press **“q”** to exit from this window. + +``` +Fields Management for window 1:Def, whose current sort field is %CPU + Navigate with Up/Dn, Right selects for move then or Left commits, + 'd' or toggles display, 's' sets sort. Use 'q' or to end! + PID = Process Id nsUTS = UTS namespace Inode + USER = Effective User Name LXC = LXC container name + PR = Priority RSan = RES Anonymous (KiB) + NI = Nice Value RSfd = RES File-based (KiB) + VIRT = Virtual Image (KiB) RSlk = RES Locked (KiB) + RES = Resident Size (KiB) RSsh = RES Shared (KiB) + SHR = Shared Memory (KiB) CGNAME = Control Group name + S = Process Status NU = Last Used NUMA node + %CPU = CPU Usage + %MEM = Memory Usage (RES) + TIME+ = CPU Time, hundredths + COMMAND = Command Name/Line + PPID = Parent Process pid + UID = Effective User Id + RUID = Real User Id + RUSER = Real User Name + SUID = Saved User Id + SUSER = Saved User Name + GID = Group Id + GROUP = Group Name + PGRP = Process Group Id + TTY = Controlling Tty + TPGID = Tty Process Grp Id + SID = Session Id + nTH = Number of Threads + P = Last Used Cpu (SMP) + TIME = CPU Time + SWAP = Swapped Size (KiB) + CODE = Code Size (KiB) + DATA = Data+Stack (KiB) + nMaj = Major Page Faults + nMin = Minor Page Faults + nDRT = Dirty Pages Count + WCHAN = Sleeping in Function + Flags = Task Flags + CGROUPS = Control Groups + SUPGIDS = Supp Groups IDs + SUPGRPS = Supp Groups Names + TGID = Thread Group Id + OOMa = OOMEM Adjustment + OOMs = OOMEM Score current + ENVIRON = Environment vars + vMj = Major Faults delta + vMn = Minor Faults delta + USED = Res+Swap Size (KiB) + nsIPC = IPC namespace Inode + nsMNT = MNT namespace Inode + nsNET = NET namespace Inode + nsPID = PID namespace Inode + nsUSER = USER namespace Inode +``` + +For older version of the top command, press the **“shift+f”** or **“shift+o”** key to sort the fields via the field letter. + +To sort with a new field, select the corresponding sort **field letter**, and then press **“Enter”** to sort it. + +``` +Current Sort Field: N for window 1:Def + Select sort field via field letter, type any other key to return + a: PID = Process Id + b: PPID = Parent Process Pid + c: RUSER = Real user name + d: UID = User Id + e: USER = User Name + f: GROUP = Group Name + g: TTY = Controlling Tty + h: PR = Priority + i: NI = Nice value + j: P = Last used cpu (SMP) + k: %CPU = CPU usage + l: TIME = CPU Time + m: TIME+ = CPU Time, hundredths +* N: %MEM = Memory usage (RES) + o: VIRT = Virtual Image (kb) + p: SWAP = Swapped size (kb) + q: RES = Resident size (kb) + r: CODE = Code size (kb) + s: DATA = Data+Stack size (kb) + t: SHR = Shared Mem size (kb) + u: nFLT = Page Fault count + v: nDRT = Dirty Pages count + w: S = Process Status + x: COMMAND = Command name/line + y: WCHAN = Sleeping in Function + z: Flags = Task Flags + Note1: + If a selected sort field can't be + shown due to screen width or your + field order, the '<' and '>' keys + will be unavailable until a field + within viewable range is chosen. + Note2: + Field sorting uses internal values, + not those in column display. Thus, + the TTY & WCHAN fields will violate + strict ASCII collating sequence. 
+ (shame on you if WCHAN is chosen) +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/linux-run-execute-top-command-in-batch-mode/ + +作者:[Magesh Maruthamuthu][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/magesh/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/linux-top-command-linux-system-performance-monitoring-tool/ +[2]: https://www.2daygeek.com/category/system-monitoring/ +[3]: https://www.2daygeek.com/understanding-linux-top-command-output-usage/ +[4]: https://www.2daygeek.com/how-to-check-how-long-a-process-has-been-running-in-linux/ diff --git a/sources/tech/20191003 SQL queries don-t start with SELECT.md b/sources/tech/20191003 SQL queries don-t start with SELECT.md new file mode 100644 index 0000000000..18fb43d437 --- /dev/null +++ b/sources/tech/20191003 SQL queries don-t start with SELECT.md @@ -0,0 +1,144 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (SQL queries don't start with SELECT) +[#]: via: (https://jvns.ca/blog/2019/10/03/sql-queries-don-t-start-with-select/) +[#]: author: (Julia Evans https://jvns.ca/) + +SQL queries don't start with SELECT +====== + +Okay, obviously many SQL queries do start with `SELECT` (and actually this post is only about `SELECT` queries, not `INSERT`s or anything). + +But! Yesterday I was working on an [explanation of window functions][1], and I found myself googling “can you filter based on the result of a window function”. As in – can you filter the result of a window function in a WHERE or HAVING or something? + +Eventually I concluded “window functions must run after WHERE and GROUP BY happen, so you can’t do it”. But this led me to a bigger question – **what order do SQL queries actually run in?**. + +This was something that I felt like I knew intuitively (“I’ve written at least 10,000 SQL queries, some of them were really complicated! I must know this!“) but I struggled to actually articulate what the order was. + +### SQL queries happen in this order + +I looked up the order, and here it is! (SELECT isn’t the first thing, it’s like the 5th thing!) ([here it is in a tweet][2]). + +(I really want to find a more accurate way of phrasing this than “sql queries happen/run in this order” but I haven’t figured it out yet) + + + +In a non-image format, the order is: + + * `FROM/JOIN` and all the `ON` conditions + * `WHERE` + * `GROUP BY` + * `HAVING` + * `SELECT` (including window functions) + * `ORDER BY` + * `LIMIT` + + + +### questions this diagram helps you answer + +This diagram is about the _semantics_ of SQL queries – it lets you reason through what a given query will return and answers questions like: + + * Can I do `WHERE` on something that came from a `GROUP BY`? (no! WHERE happens before GROUP BY!) + * Can I filter based on the results of a window function? (no! window functions happen in `SELECT`, which happens after both `WHERE` and `GROUP BY`) + * Can I `ORDER BY` based on something I did in GROUP BY? (yes! `ORDER BY` is basically the last thing, you can `ORDER BY` based on anything!) + * When does `LIMIT` happen? (at the very end!) 
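
To make those bullet points concrete, here's a tiny made-up example (it uses the `cats` table that shows up later in this post, plus a `num_cats` alias I invented), with each clause labelled by when it runs:

```
SELECT owner, COUNT(*) AS num_cats  -- SELECT runs 5th: compute the output columns
FROM cats                           -- FROM runs 1st: pick the starting rows
WHERE name != 'mr darcy'            -- WHERE runs 2nd: row filter, can't see num_cats yet
GROUP BY owner                      -- GROUP BY runs 3rd: form the groups
HAVING COUNT(*) > 2                 -- HAVING runs 4th: filter the groups
ORDER BY num_cats DESC              -- ORDER BY runs 6th: CAN see the SELECT alias
LIMIT 10;                           -- LIMIT runs last
```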

**Database engines don’t actually literally run queries in this order** because they implement a bunch of optimizations to make queries run faster – we’ll get to that a little later in the post.

So:

 * you can use this diagram when you just want to understand which queries are valid and how to reason about what the results of a given query will be
 * you _shouldn’t_ use this diagram to reason about query performance or anything involving indexes, that’s a much more complicated thing with a lot more variables



### confounding factor: column aliases

Someone on Twitter pointed out that many SQL implementations let you use the syntax:

```
SELECT CONCAT(first_name, ' ', last_name) AS full_name, count(*)
FROM table
GROUP BY full_name
```

This query makes it _look_ like GROUP BY happens after SELECT even though GROUP BY is first, because the GROUP BY references an alias from the SELECT. But it’s not actually necessary for the GROUP BY to run after the SELECT for this to work – the database engine can just rewrite the query as

```
SELECT CONCAT(first_name, ' ', last_name) AS full_name, count(*)
FROM table
GROUP BY CONCAT(first_name, ' ', last_name)
```

and run the GROUP BY first.

Your database engine also definitely does a bunch of checks to make sure that what you put in SELECT and GROUP BY makes sense together before it even starts to run the query, so it has to look at the query as a whole anyway before it starts to come up with an execution plan.

### queries aren’t actually run in this order (optimizations!)

Database engines in practice don’t actually run queries by joining, and then filtering, and then grouping, because they implement a bunch of optimizations that reorder things to make the query run faster, as long as reordering things won’t change the results of the query.

One simple example of a reason why queries need to run in a different order to make them fast is that in this query:

```
SELECT * FROM
owners LEFT JOIN cats ON owners.id = cats.owner
WHERE cats.name = 'mr darcy'
```

it would be silly to do the whole left join and match up all the rows in the 2 tables if you just need to look up the 3 cats named ‘mr darcy’ – it’s way faster to do some filtering first for cats named ‘mr darcy’. And in this case filtering first doesn’t change the results of the query!

There are lots of other optimizations that database engines implement in practice that might make them run queries in a different order, but there’s no room for them here and honestly it’s not something I’m an expert on.

### LINQ starts queries with `FROM`

LINQ (a querying syntax in C# and VB.NET) uses the order `FROM ... WHERE ... SELECT`. Here’s an example of a LINQ query:

```
var teenAgerStudent = from s in studentList
                      where s.Age > 12 && s.Age < 20
                      select s;
```

pandas (my [favourite data wrangling tool][3]) also basically works like this, though you don’t need to use this exact order – I’ll often write pandas code like this:

```
df = thing1.join(thing2)                          # like a JOIN
df = df[df.created_at > 1000]                     # like a WHERE
df = df.groupby('something', as_index=False).agg(num_yes=('yes', 'sum'))  # like a GROUP BY
df = df[df.num_yes > 2]                           # like a HAVING, filtering on the result of a GROUP BY
df = df[['something', 'num_yes']]                 # pick the columns I want to display, like a SELECT
df = df.sort_values('something', ascending=True)  # like an ORDER BY
df[:30]                                           # like a LIMIT
```

This isn’t because pandas is imposing any specific rule on how you have to write your code, though. It’s just that it often makes sense to write code in the order JOIN / WHERE / GROUP BY / HAVING. (I’ll often put a `WHERE` first to improve performance though, and I think most database engines will also do a WHERE first in practice.)

`dplyr` in R also lets you use a different syntax for querying SQL databases like Postgres, MySQL and SQLite, which is also in a more logical order.

### I was really surprised that I didn’t know this

I’m writing a blog post about this because when I found out the order I was SO SURPRISED that I’d never seen it written down that way before – it explains basically everything that I knew intuitively about why some queries are allowed and others aren’t. So I wanted to write it down in the hopes that it will help other people also understand how to write SQL queries.

--------------------------------------------------------------------------------

via: https://jvns.ca/blog/2019/10/03/sql-queries-don-t-start-with-select/

作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://twitter.com/b0rk/status/1179419244808851462?s=20
[2]: https://twitter.com/b0rk/status/1179449535938076673
[3]: https://github.com/jvns/pandas-cookbook
diff --git a/sources/tech/20191005 Use GameHub to Manage All Your Linux Games in One Place.md b/sources/tech/20191005 Use GameHub to Manage All Your Linux Games in One Place.md
new file mode 100644
index 0000000000..747408db02
--- /dev/null
+++ b/sources/tech/20191005 Use GameHub to Manage All Your Linux Games in One Place.md
@@ -0,0 +1,161 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Use GameHub to Manage All Your Linux Games in One Place)
[#]: via: (https://itsfoss.com/gamehub/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

Use GameHub to Manage All Your Linux Games in One Place
====== 

How do you [play games on Linux][1]? Let me guess. Either you install games from the software center, or from Steam, GOG, Humble Bundle, etc., right? But how do you plan to manage all your games from multiple launchers and clients? Well, that sounds like a hassle to me – which is why I was delighted when I came across [GameHub][2].

GameHub is a desktop application for Linux distributions that lets you manage “All your games in one place”. That sounds interesting, doesn’t it? Let me share more details about it.

![][3]

### GameHub Features to manage Linux games from different sources in one place

Let’s see all the features that make GameHub one of the [essential Linux applications][4], especially for gamers.

#### Steam, GOG & Humble Bundle Support

![][5]

It supports Steam, [GOG][6], and [Humble Bundle][7] account integration. You can sign in to your account to see and manage your library from within GameHub.

For my usage, I have a lot of games on Steam and a couple on Humble Bundle. I can’t speak for all – but it is safe to assume that these are the major platforms one would want to have.

#### Native Game Support

![][8]

There are several [websites where you can find and download Linux games][9]. You can also add native Linux games by downloading their installers or by adding the executable file. 

Unfortunately, there’s no easy way of finding Linux games from within GameHub at the moment. So, you will have to download them separately and add them to GameHub, as shown in the image above.

#### Emulator Support

With emulators, you can [play retro games on Linux][10]. As you can observe in the image above, you also get the ability to add emulators (and import emulated images).

You can see [RetroArch][11] listed already, but you can also add custom emulators as per your requirements.

#### User Interface

![Gamehub Appearance Option][12]

Of course, the user experience matters. Hence, it is important to take a look at its user interface and what it offers.

I found it very easy to use, and the presence of a dark theme is a bonus.

#### Controller Support

If you are comfortable using a controller with your Linux system to play games, you can easily add it and enable or disable it from the settings.

#### Multiple Data Providers

Because GameHub fetches the information (or metadata) of your games, it needs a source for that. You can see all the sources listed in the image below.

![Data Providers Gamehub][13]

You don’t have to do anything here – but if you are using anything other than Steam as your platform, you can generate an [API key for IGDB][14].

I recommend doing that only if you observe a prompt/notice within GameHub or if you have some games that do not have any description/pictures/stats on GameHub.

#### Compatibility Layer

![][15]

Do you have a game that does not support Linux?

You do not have to worry. GameHub offers multiple compatibility layers, like Wine/Proton, which you can use to get the game installed and make it playable.

We can’t really be sure what will work for you – so you have to test it yourself. Nevertheless, it is an important feature that could come in handy for a lot of gamers.

### How Do You Manage Your Games in GameHub?

You get the option to add Steam/GOG/Humble Bundle accounts right after you launch it.

For Steam, you need to have the Steam client installed on your Linux distro. Once you have it, you can easily link the games to GameHub.

![][16]

For GOG & Humble Bundle, you can directly sign in using your credentials to get your games organized in GameHub.

If you are adding an emulated image or a native installer, you can always do that by clicking on the “**+**” button that you observe in the top-right corner of the window.

### How Do You Install Games?

For Steam games, it automatically launches the Steam client to download/install them (I wish this were possible without launching Steam!)

![][17]

For GOG/Humble Bundle, you can directly start downloading and installing the games after signing in. If necessary, you can utilize the compatibility layer for non-native Linux games.

In either case, if you want to install an emulated game or a native game, just add the installer or import the emulated image. There’s nothing more to it.

### GameHub: How do you install it?

![][18]

To start with, you can just search for it in your software center or app center. It is available in the **Pop!_Shop**, and it can be found in most of the official repositories. 

If you don’t find it there, you can always add the repository and install it via the terminal by typing these commands:

```
sudo add-apt-repository ppa:tkashkin/gamehub
sudo apt update
sudo apt install com.github.tkashkin.gamehub
```

In case you encounter an “**add-apt-repository command not found**” error, you can take a look at our article to help fix the [add-apt-repository not found error][19].

There are also AppImage and Flatpak versions available. You can find installation instructions for other Linux distros on its [official webpage][2].

Also, you have the option to download pre-release packages from its [GitHub page][20].

[GameHub][2]

**Wrapping Up**

GameHub is a pretty neat application as a unified library for all your games. The user interface is intuitive, and so are the options.

Have you had the chance to test it out before? If yes, let us know your experience in the comments down below.

Also, feel free to tell us about some of your favorite similar tools/applications that you would want us to try.

--------------------------------------------------------------------------------

via: https://itsfoss.com/gamehub/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/linux-gaming-guide/
[2]: https://tkashkin.tk/projects/gamehub/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-home-1.png?ssl=1
[4]: https://itsfoss.com/essential-linux-applications/
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-platform-support.png?ssl=1
[6]: https://www.gog.com/
[7]: https://www.humblebundle.com/monthly?partner=itsfoss
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-native-installers.png?ssl=1
[9]: https://itsfoss.com/download-linux-games/
[10]: https://itsfoss.com/play-retro-games-linux/
[11]: https://www.retroarch.com/
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-appearance.png?ssl=1
[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/data-providers-gamehub.png?ssl=1
[14]: https://www.igdb.com/api
[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-windows-game.png?fit=800%2C569&ssl=1
[16]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-library.png?ssl=1
[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-compatibility-layer.png?ssl=1
[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-install.jpg?ssl=1
[19]: https://itsfoss.com/add-apt-repository-command-not-found/
[20]: https://github.com/tkashkin/GameHub/releases
diff --git a/sources/tech/20191006 Use internal packages to reduce your public API surface.md b/sources/tech/20191006 Use internal packages to reduce your public API surface.md
new file mode 100644
index 0000000000..eef43ae560
--- /dev/null
+++ b/sources/tech/20191006 Use internal packages to reduce your public API surface.md
@@ -0,0 +1,54 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Use internal packages to reduce your public API surface)
[#]: via: (https://dave.cheney.net/2019/10/06/use-internal-packages-to-reduce-your-public-api-surface)
[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney)

Use internal packages to reduce your public API surface
====== 

In the beginning, before the `go` tool, before Go 1.0, the Go distribution stored the standard library in a subdirectory called `pkg/` and the commands which built upon it in `cmd/`. This wasn’t so much a deliberate taxonomy as a by-product of the original `make` based build system. In [September 2014][1], the Go distribution dropped the `pkg/` subdirectory, but by then this tribal knowledge had taken root in large Go projects and continues to this day.

I tend to view empty directories inside a Go project with suspicion. Often they are a hint that the module’s author may be trying to create a taxonomy of packages rather than ensuring each package’s name, and thus its enclosing directory, [uniquely describes its purpose][2]. While the symmetry with `cmd/` for `package main` commands is appealing, a directory that exists only to hold other packages is a potential design smell.

More importantly, the boilerplate of an empty `pkg/` directory distracts from the more useful idiom of an `internal/` directory. `internal/` is a special directory name recognised by the `go` tool which will prevent one package from being imported by another unless both share a common ancestor. Packages within an `internal/` directory are therefore said to be _internal packages_.

To create an internal package, place it within a directory named `internal/`. When the `go` command sees an import of a package with `internal/` in the import path, it verifies that the importing package is within the tree rooted at the _parent_ of the `internal/` directory.

For example, a package `/a/b/c/internal/d/e/f` can only be imported by code in the directory tree rooted at `/a/b/c`. It cannot be imported by code in `/a/b/g` or in any other repository.

If your project contains multiple packages you may find you have some exported symbols which are intended to be used by other packages in your project, but are not intended to be part of your project’s public API. Although Go has limited visibility modifiers – public (exported) symbols and private (non-exported) symbols – internal packages provide a useful mechanism for controlling the visibility of parts of your project which would otherwise be considered part of its public versioned API.

You can, of course, promote internal packages later if you want to commit to supporting that API; just move them up a directory level or two. The key is that this process is _opt-in_. As the author, you keep control over which symbols end up in your project’s public API, without being forced to glob concepts together into unwieldy mega packages to avoid exporting them.

### Related posts:

 1. [Stress test your Go packages][3]
 2. [Practical public speaking for Nerds][4]
 3. [Five suggestions for setting up a Go project][5]
 4. 
[Automatically fetch your project’s dependencies with gb][6] + + + +-------------------------------------------------------------------------------- + +via: https://dave.cheney.net/2019/10/06/use-internal-packages-to-reduce-your-public-api-surface + +作者:[Dave Cheney][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://dave.cheney.net/author/davecheney +[b]: https://github.com/lujun9972 +[1]: https://groups.google.com/forum/m/#!msg/golang-dev/c5AknZg3Kww/OFLmvGyfNR0J +[2]: https://dave.cheney.net/2019/01/08/avoid-package-names-like-base-util-or-common +[3]: https://dave.cheney.net/2013/06/19/stress-test-your-go-packages (Stress test your Go packages) +[4]: https://dave.cheney.net/2015/02/17/practical-public-speaking-for-nerds (Practical public speaking for Nerds) +[5]: https://dave.cheney.net/2014/12/01/five-suggestions-for-setting-up-a-go-project (Five suggestions for setting up a Go project) +[6]: https://dave.cheney.net/2016/06/26/automatically-fetch-your-projects-dependencies-with-gb (Automatically fetch your project’s dependencies with gb) diff --git a/sources/tech/20191007 7 Java tips for new developers.md b/sources/tech/20191007 7 Java tips for new developers.md new file mode 100644 index 0000000000..6a560ceb2d --- /dev/null +++ b/sources/tech/20191007 7 Java tips for new developers.md @@ -0,0 +1,222 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (7 Java tips for new developers) +[#]: via: (https://opensource.com/article/19/10/java-basics) +[#]: author: (Seth Kenlon https://opensource.com/users/seth) + +7 Java tips for new developers +====== +If you're just getting started with Java programming, here are seven +basics you need to know. +![Coffee and laptop][1] + +Java is a versatile programming language used, in some way, in nearly every industry that touches a computer. Java's greatest power is that it runs in a Java Virtual Machine (JVM), a layer that translates Java code into bytecode compatible with your operating system. As long as a JVM exists for your operating system, whether that OS is on a server (or [serverless][2], for that matter), desktop, laptop, mobile device, or embedded device, then a Java application can run on it. + +This makes Java a popular language for both programmers and users. Programmers know that they only have to write one version of their software to end up with an application that runs on any platform, and users know that an application will run on their computer regardless of what operating system they use. + +Many languages and frameworks are cross-platform, but none deliver the same level of abstraction. With Java, you target the JVM, not the OS. For programmers, that's the path of least resistance when faced with several programming challenges, but it's only useful if you know how to program Java. If you're just getting started with Java programming, here are seven basic tips you need to know. 

But first, if you're not sure whether you have Java installed, you can find out in a terminal (such as [Bash][3] or [PowerShell][4]) by running:


```
$ java --version
openjdk 12.0.2 2019-07-16
OpenJDK Runtime Environment 19.3 (build 12.0.2+9)
OpenJDK 64-Bit Server VM 19.3 (build 12.0.2+9, mixed mode, sharing)
```

If you get an error or nothing in return, then you should install the [Java Development Kit][5] (JDK) to get started with Java development. Or install a Java Runtime Environment (JRE) if you just need to run Java applications.

### 1\. Java packages

In Java, related classes are grouped into a _package_. The basic Java libraries you get when you download the JDK are grouped into packages starting with **java** or **javax**. Packages serve a similar function to folders on your computer: they provide structure and definition for related elements (in programming terminology, a _namespace_). Additional packages can be obtained from independent coders, open source projects, and commercial vendors, just as libraries can be obtained for any programming language.

When you write a Java program, you should declare a package name at the top of your code. If you're just writing a simple application to get started with Java, your package name can be as simple as the name of your project. If you're using a Java integrated development environment (IDE), like [Eclipse][6], it generates a sane package name for you when you start a new project.


```
package helloworld;

/**
 * @author seth
 * An application written in Java.
 */
```

Otherwise, you can determine the name of your package by looking at its path in relation to the broad definition of your project. For instance, if you're writing a set of classes to assist in game development and the collection is called **jgamer**, then you might have several unique classes within it.


```
package jgamer.avatar;

/**
 * @author seth
 * An imaginary game library.
 */
```

The top level of your package is **jgamer**, and each package inside it is a descendant, such as **jgamer.avatar** and **jgamer.score** and so on. In your filesystem, the structure reflects this, with **jgamer** being the top directory containing the files **avatar.java** and **score.java**.

### 2\. Java imports

The most fun you'll ever have as a polyglot programmer is trying to keep track of whether you **include**, **import**, **use**, **require**, or **some other term** a library in whatever programming language you're writing in. Java, for the record, uses the **import** keyword when importing libraries needed for your code.


```
package helloworld;

import javax.swing.*;
import java.awt.*;
import java.awt.event.*;

/**
 * @author seth
 * A GUI hello world.
 */
```

Imports work based on an environment's Java path. If Java doesn't know where Java libraries are stored on a system, then an import cannot be successful. As long as a library is stored in a system's Java path, then an import can succeed, and a library can be used to build and run a Java application.

If a library is not expected to be in the Java path (because, for instance, you are writing the library yourself), then the library can be bundled with your application (license permitting) so that the import works as expected.

### 3\. Java classes

A Java class is declared with the keywords **public class** along with a unique class name mirroring its file name. For example, in a file **Hello.java** in project **helloworld**:


```
package helloworld;

import javax.swing.*;
import java.awt.*;
import java.awt.event.*;

/**
 * @author seth
 * A GUI hello world.
 */

public class Hello {
        // this is an empty class
}
```

You can declare variables and functions inside a class. In Java, variables within a class are called _fields_.

### 4\. Java methods

Java methods are, essentially, functions within an object. They are declared as **public** (meaning they can be accessed by any other class) or **private** (limiting their use), along with the type of data they return, such as **void**, **int**, **float**, and so on.


```
    public void helloPrompt(ActionEvent event) {
        String salutation = "Hello %s";

        String helloMessage = "World";
        String message = String.format(salutation, helloMessage);
        JOptionPane.showMessageDialog(this, message);
    }

    private int someNumber(int x) {
        return x * 2;
    }
```

When calling a method directly, it is referenced by its class and method name. For instance, **Hello.someNumber** refers to the **someNumber** method in the **Hello** class.

### 5\. Static

The **static** keyword in Java makes a member in your code accessible independently of the object that contains it.

In object-oriented programming, you write code that serves as a template for "objects" that get spawned as the application runs. You don't code a specific window, for instance, but an _instance_ of a window based upon a window class in Java (and modified by your code). Since nothing you are coding "exists" until the application generates an instance of it, most methods and variables (and even nested classes) cannot be used until the object they depend upon has been created.

However, sometimes you need to access or use data in an object before it is created by the application (for example, an application can't generate a red ball without first knowing that the ball is meant to be red). For those cases, there's the **static** keyword.

### 6\. Try and catch

Java is excellent at catching errors, but it can only recover gracefully if you tell it what to do. The cascading hierarchy of attempting to perform an action in Java starts with **try**, falls back to **catch**, and ends with **finally**. Should the **try** clause fail, then **catch** is invoked, and in the end, there's always **finally** to perform some sensible action regardless of the results. Here's an example:


```
try {
        cmd = parser.parse(opt, args);

        if (cmd.hasOption("help")) {
                HelpFormatter helper = new HelpFormatter();
                helper.printHelp("Hello <options>", opt);
                System.exit(0);
        } else {
                if (cmd.hasOption("shell") || cmd.hasOption("s")) {
                        String target = cmd.getOptionValue("tgt");
                } // fi
        } // else
} catch (ParseException err) {
        System.out.println(err);
        System.exit(1);
} // catch
finally {
        new Hello().helloWorld(opt);
} // finally
```

It's a robust system that attempts to avoid irrecoverable errors or, at least, to provide you with the option to give useful feedback to the user. Use it often, and your users will thank you!

### 7\. 
+### 7\. Running a Java application
+
+Java files, usually ending in **.java**, theoretically can be run with the **java** command. If an application is complex, however, whether running a single file results in anything meaningful is another question.
+
+To run a **.java** file directly:
+
+
+```
+$ java ./Hello.java
+```
+
+Usually, Java applications are distributed as Java Archives (JAR) files, ending in **.jar**. A JAR file contains a manifest file specifying the main class, some metadata about the project structure, and all the parts of your code required to run the application.
+
+To run a JAR file, you may be able to double-click its icon (depending on how you have your OS set up), or you can launch it from a terminal:
+
+
+```
+$ java -jar ./Hello.jar
+```
+
+### Java for everyone
+
+Java is a powerful language, and thanks to the [OpenJDK][12] project and other initiatives, it's an open specification that allows projects like [IcedTea][13], [Dalvik][14], and [Kotlin][15] to thrive. Learning Java is a great way to prepare to work in a wide variety of industries, and what's more, there are plenty of [great reasons to use it][16].
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/10/java-basics
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_cafe_brew_laptop_desktop.jpg?itok=G-n1o1-o (Coffee and laptop)
+[2]: https://www.redhat.com/en/resources/building-microservices-eap-7-reference-architecture
+[3]: https://www.gnu.org/software/bash/
+[4]: https://docs.microsoft.com/en-us/powershell/scripting/install/installing-powershell?view=powershell-6
+[5]: http://openjdk.java.net/
+[6]: http://www.eclipse.org/
+[7]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+actionevent
+[8]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
+[9]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+joptionpane
+[10]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+system
+[11]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+parseexception
+[12]: https://openjdk.java.net/
+[13]: https://icedtea.classpath.org/wiki/Main_Page
+[14]: https://source.android.com/devices/tech/dalvik/
+[15]: https://kotlinlang.org/
+[16]: https://opensource.com/article/19/9/why-i-use-java
diff --git a/sources/tech/20191007 Introduction to open source observability on Kubernetes.md b/sources/tech/20191007 Introduction to open source observability on Kubernetes.md
new file mode 100644
index 0000000000..acd1bc1331
--- /dev/null
+++ b/sources/tech/20191007 Introduction to open source observability on Kubernetes.md
@@ -0,0 +1,202 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Introduction to open source observability on Kubernetes)
+[#]: via: (https://opensource.com/article/19/10/open-source-observability-kubernetes)
+[#]: author: (Yuri Grinshteyn https://opensource.com/users/yuri-grinshteyn)
+
+Introduction to open source observability on Kubernetes
+======
+In the first article in this series, 
learn the signals, mechanisms, +tools, and platforms you can use to observe services running on +Kubernetes. +![Looking back with binoculars][1] + +With the advent of DevOps, engineering teams are taking on more and more ownership of the reliability of their services. While some chafe at the increased operational burden, others welcome the opportunity to treat service reliability as a key feature, invest in the necessary capabilities to measure and improve reliability, and deliver the best possible customer experiences. + +This change is measured explicitly in the [2019 Accelerate State of DevOps Report][2]. One of its most interesting conclusions (as written in the summary) is: + +> "Delivering software quickly, **reliably** _[emphasis mine]_, and safely is at the heart of technology transformation and organizational performance. We see continued evidence that software speed, stability, and **availability** _[emphasis mine]_ contribute to organizational performance (including profitability, productivity, and customer satisfaction). Our highest performers are twice as likely to meet or exceed their organizational performance goals." + +The full [report][3] says: + +> "**Low performers use more proprietary software than high and elite performers**: The cost to maintain and support proprietary software can be prohibitive, prompting high and elite performers to use open source solutions. This is in line with results from previous reports. In fact, the 2018 Accelerate State of DevOps Report found that elite performers were 1.75 times more likely to make extensive use of open source components, libraries, and platforms." + +This is a strong testament to the value of open source as a general accelerator of performance. Combining these two conclusions leads to the rather obvious thesis for this series: + +> Reliability is a critical feature, observability is a necessary component of reliability, and open source tooling is at least _A_ right approach, if not _THE_ right approach. + +This article, the first in a series, will introduce the types of signals engineers typically rely on and the mechanisms, tools, and platforms that you can use to instrument services running on Kubernetes to emit these signals, ingest and store them, and use and interpret them. + +From there, the series will continue with hands-on tutorials, where I will walk through getting started with each of the tools and technologies. By the end, you should be well-equipped to start improving the observability of your own systems! + +### What is observability? + +While observability as a general [concept in control theory][4] has been around since at least 1960, its applicability to digital systems and services is rather new and in some ways an evolution of how these systems have been monitored for the last two decades. You are likely familiar with the necessity of monitoring services to ensure you know about issues before your users are impacted. You are also likely familiar with the idea of using metric data to better understand the health and state of a system, especially in the context of troubleshooting during an incident or debugging. + +The key differentiation between monitoring and observability is that observability is an inherent property of a system or service, rather than something someone does to the system, which is what monitoring fundamentally is. 
[Cindy Sridharan][5], author of a free [e-book][6] on observability in distributed systems, does a great job of explaining the difference in an excellent [Medium article][7].
+
+It is important to distinguish between these two terms because observability, as a property of the service you build, is your responsibility. As a service developer and owner, you have full control over the signals your system emits, how and where those signals are ingested and stored, and how they're utilized. This is in contrast to "monitoring," which may be done by others (and by you) to measure the availability and performance of your service and generate alerts to let you know that service reliability has degraded.
+
+### Signals
+
+Now that you understand the idea of observability as a property of a system that you control and that is explicitly manifested as the signals you instruct your system to emit, it's important to understand and describe the kinds of signals generally considered in this context.
+
+#### What are metrics?
+
+A metric is a fundamental type of signal that can be emitted by a service or the infrastructure it's running on. At its most basic, it is the combination of:
+
+ 1. Some identifier, hopefully descriptive, that indicates what the metric represents
+ 2. A series of data points, each of which contains two elements:
+    a. The timestamp at which the data point was generated (or ingested)
+    b. A numeric value representing the state of the thing you're measuring at that time
+
+Time-series metrics have been and remain the key data structure used in monitoring and observability practice and are the primary way that the state and health of a system are represented over time. They are also the primary mechanism for alerting, but that practice and others (like incident management, on-call, and postmortems) are outside the scope here. For now, the focus is on how to instrument systems to emit metrics, how to store them, and how to use them for charts and dashboards to help you visualize the current and historical state of your system.
+
+Metrics are used for two primary purposes: health and insight.
+
+Understanding the health and state of your infrastructure, platform, and service is essential to keeping them available to users. Generally, these are emitted by the various components chosen to build services, and it's just a matter of setting up the right collection and storage infrastructure to be able to use them. Metrics from the simple (node CPU utilization) to the esoteric (garbage collection statistics) fall into this category.
+
+Metrics are also essential to understanding what is happening in the system to avoid interruptions to your services. From this perspective, a service can emit custom telemetry that precisely describes specific aspects of how the service is functioning and performing. This will require you to instrument the code itself, usually by including specific libraries, and specify an export destination.
+
+#### What are logs?
+
+Unlike metrics that represent numeric values that change over time, logs represent discrete events. Log entries contain both the log payload—the message emitted by a component of the service or the code—and often metadata, such as the timestamp, label, tag, or other identifiers. Therefore, this is by far the largest volume of data you need to store, and you should carefully consider your log ingestion and storage strategies as you look to take on increasing user traffic.
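+
+As a minimal illustration of that structure, a single structured log entry might look like the following; the field names and values are invented:
+
+
+```
+{
+  "timestamp": "2019-10-07T14:03:22Z",
+  "severity": "ERROR",
+  "service": "checkout",
+  "message": "payment authorization failed",
+  "labels": { "region": "us-east1", "version": "1.4.2" }
+}
+```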
+
+#### What are traces?
+
+Distributed tracing is a relatively new addition to the observability toolkit and is specifically relevant to microservice architectures to allow you to understand latency and how various backend service calls contribute to it. Ted Young published an [excellent article on the concept][8] that includes its origins with Google's [Dapper paper][9] and subsequent evolution. This series will be specifically concerned with the various implementations available.
+
+### Instrumentation
+
+Once you identify the signals you want to emit, store, and analyze, you need to instruct your system to create the signals and build a mechanism to store and analyze them. Instrumentation refers to those parts of your code that are used to generate metrics, logs, and traces. In this series, we'll discuss open source instrumentation options and introduce the basics of their use through hands-on tutorials.
+
+### Observability on Kubernetes
+
+Kubernetes is the dominant platform today for deploying and maintaining containers. As it rose to the top of the industry's consciousness, so did new technologies to provide effective observability tooling around it. Here is a short list of these essential technologies; they will be covered in greater detail in future articles in this series.
+
+#### Metrics
+
+Once you select your preferred approach for instrumenting your service with metrics, the next decision is where to store those metrics and what set of services will support your effort to monitor your environment.
+
+##### Prometheus
+
+[Prometheus][10] is the best place to start when looking to monitor both your Kubernetes infrastructure and the services running in the cluster. It provides everything you'll need, including client instrumentation libraries, the [storage backend][11], a visualization UI, and an alerting framework. Running Prometheus also provides a wealth of infrastructure metrics right out of the box. It further provides [integrations][12] with third-party providers for storage, although the data exchange is not bi-directional in every case, so be sure to read the documentation if you want to store metric data in multiple locations.
+
+Later in this series, I will walk through setting up Prometheus in a cluster for basic infrastructure monitoring and adding custom telemetry to an application using the Prometheus client libraries.
+
+##### Graphite
+
+[Graphite][13] grew out of an in-house development effort at Orbitz and is now positioned as an enterprise-ready monitoring tool. It provides metrics storage and retrieval mechanisms, but no instrumentation capabilities. Therefore, you will still need to implement Prometheus or OpenCensus instrumentation to collect metrics. Later in this series, I will walk through setting up Graphite and sending metrics to it.
+
+##### InfluxDB
+
+[InfluxDB][14] is another open source database purpose-built for storing and retrieving time-series metrics. Unlike Graphite, InfluxDB is supported by a company called InfluxData, which provides both the InfluxDB software and a cloud-hosted version called InfluxDB Cloud. Later in this series, I will walk through setting up InfluxDB in a cluster and sending metrics to it.
+
+##### OpenTSDB
+
+[OpenTSDB][15] is also an open source purpose-built time-series database. One of its advantages is the ability to use [HBase][16] as the storage layer, which allows integration with a cloud-managed service like Google's Cloud Bigtable. 
Google has published a [reference guide][17] on setting up OpenTSDB to monitor your Kubernetes cluster (assuming it's running in Google Kubernetes Engine, or GKE). Since it's a great introduction, I recommend following Google's tutorial if you're interested in learning more about OpenTSDB.
+
+##### OpenCensus
+
+[OpenCensus][18] is the open source version of the [Census library][19] developed at Google. It provides both metric and tracing instrumentation capabilities and supports a number of backends to [export][20] the metrics to—including Prometheus! Note that OpenCensus does not monitor your infrastructure, and you will still need to determine the best approach if you choose to use OpenCensus for custom metric telemetry.
+
+We'll revisit this library later in this series, and I will walk through creating metrics in a service and exporting them to a backend.
+
+#### Logging for observability
+
+If metrics provide "what" is happening, logging tells part of the story of "why." Here are some common options for consistently gathering and analyzing logs.
+
+##### Collecting with fluentd
+
+In the Kubernetes ecosystem, [fluentd][21] is the de facto open source standard for collecting logs emitted in the cluster and forwarding them to a specified backend. You can use config maps to modify fluentd's behavior, and later in the series, I'll walk through deploying it in a cluster and modifying the associated config map to parse unstructured logs and convert them to structured logs for better and easier analysis. In the meantime, you can read my post "[Customizing Kubernetes logging (Part 1)][22]" on how to do that on GKE.
+
+##### Storing and analyzing with ELK
+
+The most common storage mechanism for logs is provided by [Elastic][23] in the form of the "ELK" stack. As Elastic says:
+
+> "'ELK' is the acronym for three open source projects: Elasticsearch, Logstash, and Kibana. Elasticsearch is a search and analytics engine. Logstash is a server‑side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a 'stash' like Elasticsearch. Kibana lets users visualize data with charts and graphs in Elasticsearch."
+
+Later in the series, I'll walk through setting up Elasticsearch, Kibana, and Logstash in a cluster to store and analyze logs being collected by fluentd.
+
+#### Distributed traces and observability
+
+When asking "why" in analyzing service issues, logs can only provide the information that applications are designed to share with it. The way to go even deeper is to gather traces. As the [OpenTracing initiative][24] says:
+
+> "Distributed tracing, also called distributed request tracing, is a method used to profile and monitor applications, especially those built using a microservices architecture. Distributed tracing helps pinpoint where failures occur and what causes poor performance."
+
+##### Istio
+
+The [Istio][25] open source service mesh provides multiple benefits for microservice architectures, including traffic control, security, and observability capabilities. It does not combine multiple spans into a single trace to assemble a full picture of what happens when a user call traverses a distributed system, but it can nevertheless be useful as an easy first step toward distributed tracing. It also provides other observability benefits—it's the easiest way to get ["golden signal"][26] metrics for each service, and it also adds logging for each request, which can be very useful for calculating error rates. 
You can read my post on [using it with Google's Stackdriver][27]. I'll revisit it in this series and show how to install it in a cluster and configure it to export observability data to a backend. + +##### OpenCensus + +I described [OpenCensus][28] in the Metrics section above, and that's one of the main reasons for choosing it for distributed tracing: Using a single library for both metrics and traces is a great option to reduce your instrumentation work—with the caveat that you must be working in a language that supports both the traces and stats exporters. I'll come back to OpenCensus and show how to get started instrumenting code for distributed tracing. Note that OpenCensus provides only the instrumentation library, and you'll still need to use a storage and visualization layer like Zipkin, Jaeger, Stackdriver (on GCP), or X-Ray (on AWS). + +##### Zipkin + +[Zipkin][29] is a full, distributed tracing solution that includes instrumentation, storage, and visualization. It's a tried and true set of tools that's been around for years and has a strong user and developer community. It can also be used as a backend for other instrumentation options like OpenCensus. In a future tutorial, I'll show how to set up the Zipkin server and instrument your code. + +##### Jaeger + +[Jaeger][30] is another open source tracing solution that includes all the components you'll need. It's a newer project that's being incubated at the Cloud Native Computing Foundation (CNCF). Whether you choose to use Zipkin or Jaeger may ultimately depend on your experience with them and their support for the language you're writing your service in. In this series, I'll walk through setting up Jaeger and instrumenting code for tracing. + +### Visualizing observability data + +The final piece of the toolkit for using metrics is the visualization layer. There are basically two options here: the "native" visualization that your persistence layers enable (e.g., the Prometheus UI or Flux with InfluxDB) or a purpose-built visualization tool. + +[Grafana][31] is currently the de facto standard for open source visualization. I'll walk through setting it up and using it to visualize data from various backends later in this series. + +### Looking ahead + +Observability on Kubernetes has many parts and many options for each type of need. Metric, logging, and tracing instrumentation provide the bedrock of information needed to make decisions about services. Instrumenting, storing, and visualizing data are also essential. Future articles in this series will dive into all of these options with hands-on tutorials for each. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/10/open-source-observability-kubernetes + +作者:[Yuri Grinshteyn][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/yuri-grinshteyn +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/look-binoculars-sight-see-review.png?itok=NOw2cm39 (Looking back with binoculars) +[2]: https://cloud.google.com/blog/products/devops-sre/the-2019-accelerate-state-of-devops-elite-performance-productivity-and-scaling +[3]: https://services.google.com/fh/files/misc/state-of-devops-2019.pdf +[4]: https://en.wikipedia.org/wiki/Observability +[5]: https://twitter.com/copyconstruct +[6]: https://t.co/0gOgZp88Jn?amp=1 +[7]: https://medium.com/@copyconstruct/monitoring-and-observability-8417d1952e1c +[8]: https://opensource.com/article/18/5/distributed-tracing +[9]: https://research.google.com/pubs/pub36356.html +[10]: https://prometheus.io/ +[11]: https://prometheus.io/docs/prometheus/latest/storage/ +[12]: https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage +[13]: https://graphiteapp.org/ +[14]: https://www.influxdata.com/get-influxdb/ +[15]: http://opentsdb.net/ +[16]: https://hbase.apache.org/ +[17]: https://cloud.google.com/solutions/opentsdb-cloud-platform +[18]: https://opencensus.io/ +[19]: https://opensource.googleblog.com/2018/03/how-google-uses-opencensus-internally.html +[20]: https://opencensus.io/exporters/#exporters +[21]: https://www.fluentd.org/ +[22]: https://medium.com/google-cloud/customizing-kubernetes-logging-part-1-a1e5791dcda8 +[23]: https://www.elastic.co/ +[24]: https://opentracing.io/docs/overview/what-is-tracing +[25]: http://istio.io/ +[26]: https://landing.google.com/sre/sre-book/chapters/monitoring-distributed-systems/ +[27]: https://medium.com/google-cloud/istio-and-stackdriver-59d157282258 +[28]: http://opencensus.io/ +[29]: https://zipkin.io/ +[30]: https://www.jaegertracing.io/ +[31]: https://grafana.com/ diff --git a/sources/tech/20191007 Understanding Joins in Hadoop.md b/sources/tech/20191007 Understanding Joins in Hadoop.md new file mode 100644 index 0000000000..ea0025a9d2 --- /dev/null +++ b/sources/tech/20191007 Understanding Joins in Hadoop.md @@ -0,0 +1,66 @@ +[#]: collector: (lujun9972) +[#]: translator: (heguangzhi) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Understanding Joins in Hadoop) +[#]: via: (https://opensourceforu.com/2019/10/understanding-joins-in-hadoop/) +[#]: author: (Bhaskar Narayan Das https://opensourceforu.com/author/bhaskar-narayan/) + +Understanding Joins in Hadoop +====== + +[![Hadoop big data career opportunities][1]][2] + +_Those who have just begun the study of Hadoop might have come across different types of joins. This article briefly discusses normal joins, map side joins and reduce side joins. The differences between map side joins and reduce side joins, as well as their pros and cons, are also discussed._ + +Normally, the term join is used to refer to the combination of the record-sets of two tables. Thus when we run a query, tables are joined and we get the data from two tables in the joined format, as is the case in SQL joins. Joins find maximum usage in Hadoop processing. 
They should be used when large data sets are encountered and there is no urgency to generate the outcome. In case of Hadoop common joins, Hadoop distributes all the rows on all the nodes based on the join key. Once this is achieved, all the keys that have the same values end up on the same node and then, finally, the join at the reducer happens. This scenario is perfect when both the tables are huge, but when one table is small and the other is quite big, common joins become inefficient and take more time to distribute the rows.
+
+While processing data using Hadoop, we generally do it over the map phase and the reduce phase. Thus there are mappers and reducers that do the job for the map phase and the reduce phase. We use MapReduce joins when we encounter a large data set that is too big to use data-sharing techniques.
+
+**Map side joins**
+Map side join is the term used when the record sets of two tables are joined within the mapper. In this case, the reduce phase is not involved. In the map side join, the record sets of the tables are loaded into memory, ensuring a faster join operation. Map side join is convenient for small tables and not recommended for large tables. In situations where you have queries running frequently with small table joins, you could experience a very significant reduction in query computation time (a code sketch follows the comparison list below).
+
+**Reduce side joins**
+Reduce side joins happen at the reduce side of Hadoop processing. They are also known as repartitioned sort merge joins, or simply, repartitioned joins or distributed joins or common joins. They are the most widely used joins. Reduce side joins happen when both the tables are so big that they cannot fit into the memory. The process flow of reduce side joins is as follows:
+
+ 1. The input data is read by the mapper, which needs to be combined on the basis of the join key or common column.
+ 2. Once the input data is processed by the mapper, it adds a tag to the processed input data in order to distinguish the input origin sources.
+ 3. The mapper returns the intermediate key-value pair, where the key is also the join key.
+ 4. For the reducer, a key and a list of values is generated once the sorting and shuffling phase is complete.
+ 5. The reducer joins the values that are present in the generated list along with the key to produce the final outcome.
+
+The join at the reduce side combines the output of two mappers based on a common key. This scenario is analogous to SQL joins, where the data sets of two tables are joined based on a primary key. In this case we have to decide which field is the primary key.
+There are a few terms associated with reduce side joins:
+1\. _Data source:_ This is simply the input files.
+2\. _Tag:_ This is basically used to distinguish each input data on the basis of its origin.
+3\. _Group key:_ This refers to the common column that is used as a join key to combine the output of two mappers.
+
+**Difference between map side joins and reduce side joins**
+
+ 1. A map side join, as explained earlier, happens on the map side whereas a reduce side join happens on the reduce side.
+ 2. A map side join happens in memory, whereas a reduce side join happens outside memory, relying on the sort and shuffle machinery.
+ 3. Map side joins are effective when one data set is big while the other is small, whereas reduce side joins work effectively for big size data sets.
+ 4. Map side joins are expensive in terms of memory, whereas reduce side joins are cheap in that respect.
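+
+To make the map side join concrete, here is a minimal sketch of a mapper written against Hadoop's **org.apache.hadoop.mapreduce** API. It assumes tab-separated input and that the driver shipped the small table to every node through the distributed cache; the file and field layout are invented for illustration:
+
+
+```
+import java.io.BufferedReader;
+import java.io.FileReader;
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapreduce.Mapper;
+
+public class MapSideJoinMapper extends Mapper<LongWritable, Text, Text, Text> {
+
+    // In-memory copy of the small table: join key -> row payload
+    private final Map<String, String> smallTable = new HashMap<>();
+
+    @Override
+    protected void setup(Context context) throws IOException {
+        // Assumes the driver called job.addCacheFile() with a URI ending in
+        // "#departments.txt", so the file is symlinked into the task's
+        // working directory under that name.
+        try (BufferedReader reader = new BufferedReader(new FileReader("departments.txt"))) {
+            String line;
+            while ((line = reader.readLine()) != null) {
+                String[] parts = line.split("\t", 2);  // key TAB value
+                if (parts.length == 2) {
+                    smallTable.put(parts[0], parts[1]);
+                }
+            }
+        }
+    }
+
+    @Override
+    protected void map(LongWritable offset, Text row, Context context)
+            throws IOException, InterruptedException {
+        // Each row of the big table: join key TAB payload
+        String[] parts = row.toString().split("\t", 2);
+        if (parts.length == 2) {
+            String match = smallTable.get(parts[0]);
+            if (match != null) {
+                // Emit the joined record; no reduce phase is required
+                context.write(new Text(parts[0]), new Text(parts[1] + "\t" + match));
+            }
+        }
+    }
+}
+```
+
+Because the join completes inside **map()**, the sort and shuffle step of a reduce side join is skipped entirely, at the cost of holding the small table in each mapper's memory.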
+
+
+Opt for map side joins when the table size is small and fits in memory, and you require the job to be completed in a short span of time. Use the reduce side join when dealing with large data sets, which cannot fit into memory. Reduce side joins are easy to implement and have the advantage of their inbuilt sorting and shuffling algorithms. Besides this, there is no requirement to strictly follow any formatting rule for input in case of reduce side joins, and they can also be performed on unstructured data sets.
+
+--------------------------------------------------------------------------------
+
+via: https://opensourceforu.com/2019/10/understanding-joins-in-hadoop/
+
+作者:[Bhaskar Narayan Das][a]
+选题:[lujun9972][b]
+译者:[heguangzhi](https://github.com/heguangzhi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensourceforu.com/author/bhaskar-narayan/
+[b]: https://github.com/lujun9972
+[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2017/06/Hadoop-big-data.jpg?resize=696%2C441&ssl=1 (Hadoop big data career opportunities)
+[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2017/06/Hadoop-big-data.jpg?fit=750%2C475&ssl=1
diff --git a/sources/tech/20191007 Using the Java Persistence API.md b/sources/tech/20191007 Using the Java Persistence API.md
new file mode 100644
index 0000000000..e911428044
--- /dev/null
+++ b/sources/tech/20191007 Using the Java Persistence API.md
@@ -0,0 +1,273 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Using the Java Persistence API)
+[#]: via: (https://opensource.com/article/19/10/using-java-persistence-api)
+[#]: author: (Stephon Brown https://opensource.com/users/stephb)
+
+Using the Java Persistence API
+======
+Learn how to use the JPA by building an example app for a bike store.
+![Coffee beans][1]
+
+The Java Persistence API (JPA) is an important Java functionality for application developers to understand. It defines how method calls on Java objects translate into accessing, persisting, and managing data stored in NoSQL and relational databases.
+
+This article examines the JPA in detail through a tutorial example of building a bicycle loaning service. This example will create a create, read, update, and delete (CRUD) layer for a larger application using the Spring Boot framework, the MongoDB database (which is [no longer open source][2]), and the Maven package manager. I also use NetBeans 11 as my IDE of choice.
+
+This tutorial focuses on the open source angle of the Java Persistence API, rather than the tools, to show how it works. This is all about learning the pattern of programming applications, but it's still smart to understand the software. You can access the full code in my [GitHub repository][3].
+
+### Java: More than 'beans'
+
+Java is an object-oriented language that has gone through many changes since the Java Development Kit (JDK) was released in 1996. Understanding the language's various pathways and its virtual machine is a history lesson in itself; in brief, the language has forked in many directions, similar to the Linux kernel, since its release. There are standard editions that are free to the community, enterprise editions for business, and open source alternatives contributed to by multiple vendors. 
Major versions are released at six-month intervals; since there are often major differences in features, you may want to do some research before choosing a version. All in all, Java is steeped in history. This tutorial focuses on [JDK 11][4], which is the open source implementation of Java 11, because it is one of the long-term-support versions that is still active.
+
+ * **Spring Boot:** Spring Boot is a module from the larger Spring framework developed by Pivotal. Spring is a very popular framework for working with Java. It allows for a variety of architectures and configurations. Spring also offers support for web applications and security. Spring Boot offers basic configurations for bootstrapping various types of Java projects quickly. This tutorial uses Spring Boot to quickly write a console application and test functionality against the database.
+ * **Maven:** Maven is a project/package manager developed by Apache. Maven allows for the management of packages and various dependencies within its **pom.xml** file. If you have used NPM, you may be familiar with how package managers function. Maven also manages build and reporting functionality.
+ * **Lombok:** Lombok is a library that allows the creation of object getters/setters through annotation within the object file. This is already present in languages like C#, and Lombok introduces this functionality into Java.
+ * **NetBeans:** NetBeans is a popular open source IDE that focuses specifically on Java development. Many of its tools provide an implementation for the latest Java SE and EE updates.
+
+This group of tools will be used to create a simple application for a fictional bike store. It will implement functionality for inserting collections for "Customer" and "Bike" objects.
+
+### Brewed to perfection
+
+Navigate to the [Spring Initializr][5]. This website enables you to generate basic project needs for Spring Boot and the dependencies you will need for the project. Select the following options:
+
+ 1. **Project:** Maven Project
+ 2. **Language:** Java
+ 3. **Spring Boot:** 2.1.8 (or the most stable release)
+ 4. **Project Metadata:** Whatever your naming conventions are (e.g., **com.stephb**)
+    * You can keep Artifact as "Demo"
+ 5. **Dependencies:** Add:
+    * Spring Data MongoDB
+    * Lombok
+
+Click **Download** and open the new project in your chosen IDE (e.g., NetBeans).
+
+#### Model outline
+
+The models represent information collected about specific objects in the program that will be persisted in your database. Focus on two objects: **Customer** and **Bike**. First, create a **dto** folder within the **src** folder. Then, create the two Java class objects named **Customer.java** and **Bike.java**. They will be structured in the program as follows:
+
+**Customer.java**
+
+
+```
+package com.stephb.JavaMongo.dto;
+
+import lombok.Getter;
+import lombok.Setter;
+import org.springframework.data.annotation.Id;
+
+/**
+ * @author stephon
+ */
+@Getter @Setter
+public class Customer {
+
+        private @Id String id;
+        private String emailAddress;
+        private String firstName;
+        private String lastName;
+        private String address;
+
+}
+```
+
+**Bike.java**
+
+
+```
+package com.stephb.JavaMongo.dto;
+
+import lombok.Getter;
+import lombok.Setter;
+import org.springframework.data.annotation.Id;
+
+/**
+ * @author stephon
+ */
+@Getter @Setter
+public class Bike {
+        private @Id String id;
+        private String modelNumber;
+        private String color;
+        private String description;
+
+        @Override
+        public String toString() {
+                return "This bike model " + this.modelNumber + " is the color " + this.color + " and is " + description;
+        }
+}
+```
+
+As you can see, Lombok annotation is used within the object to generate the getters/setters for the properties/attributes. You can annotate individual properties instead if you do not want all of the attributes to have getters/setters within that class. These two classes will form the container carrying your data to wherever you want to display information.
+
+#### Set up a database
+
+I used a [Mongo Docker][7] container for testing. If you have MongoDB installed on your system, you do not have to run an instance in Docker. You can install MongoDB from its website by selecting your system information and following the installation instructions.
+
+After installing, you can interact with your new MongoDB server through the command line, a GUI such as MongoDB Compass, or IDE drivers for connecting to data sources. Now you can define your data layer to pull, transform, and persist your data. To set your database access properties, navigate to the **application.properties** file in your application and provide the following:
+
+
+```
+spring.data.mongodb.host=localhost
+spring.data.mongodb.port=27017
+spring.data.mongodb.database=BikeStore
+```
+
+#### Define the data access object/data access layer
+
+The data access objects (DAO) in the data access layer (DAL) will define how you will interact with data in the database. The awesome thing about using a **spring-boot-starter** is that most of the work for querying the database is already done.
+
+Start with the **Customer** DAO. Create an interface in a new **dao** folder within the **src** folder, then create another Java class named **CustomerRepository.java**. 
The class should look like:
+
+
+```
+package com.stephb.JavaMongo.dao;
+
+import com.stephb.JavaMongo.dto.Customer;
+import java.util.List;
+import org.springframework.data.mongodb.repository.MongoRepository;
+
+/**
+ * @author stephon
+ */
+public interface CustomerRepository extends MongoRepository<Customer, String> {
+        @Override
+        public List<Customer> findAll();
+        public List<Customer> findByFirstName(String firstName);
+        public List<Customer> findByLastName(String lastName);
+}
+```
+
+This interface extends (inherits from) the **MongoRepository** interface, parameterized with your DTO (**Customer**) as the entity type and **String** as the type of its ID field; the custom methods declared here will be used for querying. Because you have inherited from this interface, you have access to many functions that allow persistence and querying of your object without having to implement or reference your own functions. For example, after you instantiate the **CustomerRepository** object, you can use the **save** function immediately. You can also override these functions if you need more extended functionality. I created a few custom queries to search my collection, given specific elements of my object.
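+
+As a quick illustration of how those derived queries read at the call site, here is a hedged sketch; the service class and sample value are invented, and it assumes Spring injects the repository defined above:
+
+
+```
+package com.stephb.JavaMongo.service;
+
+import com.stephb.JavaMongo.dao.CustomerRepository;
+import com.stephb.JavaMongo.dto.Customer;
+import java.util.List;
+import org.springframework.beans.factory.annotation.Autowired;
+import org.springframework.stereotype.Service;
+
+@Service
+public class CustomerLookup {
+
+        @Autowired
+        private CustomerRepository customerRepository;
+
+        public void printCustomersNamed(String lastName) {
+                // Implemented by Spring Data at runtime, derived from the method name
+                List<Customer> matches = customerRepository.findByLastName(lastName);
+                for (Customer customer : matches) {
+                        System.out.println(customer.getFirstName() + " " + customer.getLastName());
+                }
+        }
+}
+```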
+
+The **Bike** object also has a repository for interacting with the database. Implement it very similarly to the **CustomerRepository**. It should look like:
+
+
+```
+package com.stephb.JavaMongo.dao;
+
+import com.stephb.JavaMongo.dto.Bike;
+import java.util.List;
+import org.springframework.data.mongodb.repository.MongoRepository;
+
+/**
+ * @author stephon
+ */
+public interface BikeRepository extends MongoRepository<Bike, String> {
+        public Bike findByModelNumber(String modelNumber);
+        @Override
+        public List<Bike> findAll();
+        public List<Bike> findByColor(String color);
+}
+```
+
+#### Run your program
+
+Now that you have a way to structure your data and a way to pull, transform, and persist it, run your program!
+
+Navigate to your **Application.java** file (it may have a different name, depending on what you named your application, but it should include "application"). Where the class is defined, include an **implements CommandLineRunner** afterward. This will allow you to implement a **run** method to create a command-line application. Override the **run** method provided by the **CommandLineRunner** interface and include the following to test the **BikeRepository**:
+
+
+```
+package com.stephb.JavaMongo;
+
+import com.stephb.JavaMongo.dao.BikeRepository;
+import com.stephb.JavaMongo.dao.CustomerRepository;
+import com.stephb.JavaMongo.dto.Bike;
+import java.util.Scanner;
+import org.springframework.beans.factory.annotation.Autowired;
+import org.springframework.boot.CommandLineRunner;
+import org.springframework.boot.SpringApplication;
+import org.springframework.boot.autoconfigure.SpringBootApplication;
+
+
+@SpringBootApplication
+public class JavaMongoApplication implements CommandLineRunner {
+        @Autowired
+        private BikeRepository bikeRepo;
+        @Autowired
+        private CustomerRepository custRepo;
+
+        public static void main(String[] args) {
+                SpringApplication.run(JavaMongoApplication.class, args);
+        }
+
+        @Override
+        public void run(String... args) throws Exception {
+                Scanner scan = new Scanner(System.in);
+                String response = "";
+                boolean running = true;
+                while (running) {
+                        System.out.println("What would you like to create? \n C: The Customer \n B: Bike? \n X: Close");
+                        response = scan.nextLine();
+                        if ("B".equals(response.toUpperCase())) {
+                                String[] bikeInformation = new String[3];
+                                System.out.println("Enter the information for the Bike");
+                                System.out.println("Model Number");
+                                bikeInformation[0] = scan.nextLine();
+                                System.out.println("Color");
+                                bikeInformation[1] = scan.nextLine();
+                                System.out.println("Description");
+                                bikeInformation[2] = scan.nextLine();
+
+                                Bike bike = new Bike();
+                                bike.setModelNumber(bikeInformation[0]);
+                                bike.setColor(bikeInformation[1]);
+                                bike.setDescription(bikeInformation[2]);
+
+                                bike = bikeRepo.save(bike);
+                                System.out.println(bike.toString());
+
+                        } else if ("X".equals(response.toUpperCase())) {
+                                System.out.println("Bye");
+                                running = false;
+                        } else {
+                                System.out.println("Sorry nothing else works right now!");
+                        }
+                }
+                scan.close();
+        }
+}
+```
+
+The **@Autowired** annotation allows automatic dependency injection of the **BikeRepository** and **CustomerRepository** beans. You will use these classes to persist and gather data from the database.
+
+There you have it! You have created a command-line application that connects to a database and is able to perform CRUD operations with minimal code on your part.
+
+### Conclusion
+
+Translating from programming language concepts like objects and classes into calls to store, retrieve, or change data in a database is essential to building an application. The Java Persistence API (JPA) is an important tool in the Java developer's toolkit to solve that challenge. What databases are you exploring in Java? Please share in the comments. 
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/10/using-java-persistence-api
+
+作者:[Stephon Brown][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]: https://opensource.com/users/stephb
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/java-coffee-beans.jpg?itok=3hkjX5We (Coffee beans)
+[2]: https://www.techrepublic.com/article/mongodb-ceo-tells-hard-truths-about-commercial-open-source/
+[3]: https://github.com/StephonBrown/SpringMongoJava
+[4]: https://openjdk.java.net/projects/jdk/11/
+[5]: https://start.spring.io/
+[6]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
+[7]: https://hub.docker.com/_/mongo
+[8]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+exception
+[9]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+system
diff --git a/sources/tech/20191008 5 Best Password Managers For Linux Desktop.md b/sources/tech/20191008 5 Best Password Managers For Linux Desktop.md
new file mode 100644
index 0000000000..c9a51c91e6
--- /dev/null
+++ b/sources/tech/20191008 5 Best Password Managers For Linux Desktop.md
@@ -0,0 +1,201 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (5 Best Password Managers For Linux Desktop)
+[#]: via: (https://itsfoss.com/password-managers-linux/)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+
+5 Best Password Managers For Linux Desktop
+======
+
+_**A password manager is a useful tool for creating unique passwords and storing them securely so that you don't have to remember them. Check out the best password managers available for the Linux desktop.**_
+
+Passwords are everywhere. Websites, forums, web apps and what not, you need to create accounts and passwords for them. The trouble comes with the passwords. Keeping the same password for various accounts poses a security risk because [if one of the websites is compromised, hackers try the same email-password combination on other websites][1] as well.
+
+But keeping unique passwords for all the new accounts means that you have to remember all of them, and that's not possible for normal humans. This is where password managers come to your help.
+
+Password managing apps suggest/create strong passwords for you and store them in an encrypted database. You just need to remember the master password for the password manager.
+
+Mainstream modern web browsers like Mozilla Firefox and Google Chrome have a built-in password manager. This helps, but you are restricted to using it in that browser only.
+
+There are third-party, dedicated password managers, and some of them also provide native desktop applications for Linux. In this article, we round up the best password managers available for Linux.
+
+Before you see that, I would also advise going through the list of [free password generators for Linux][2] to generate strong, unique passwords for you.
+
+### Password Managers for Linux
+
+Possible non-FOSS alert!
+
+We've given priority to the ones which are open source (with some proprietary options, don't hate me!) and also offer a standalone desktop app (GUI) for Linux. The proprietary options have been highlighted.
+
+#### 1\. Bitwarden
+
+![][3]
+
+Key Highlights:
+
+ * Open Source
+ * Free for personal use (paid options available for upgrade)
+ * End-to-end encryption for Cloud servers
+ * Cross-platform
+ * Browser Extensions available
+ * Command-line tools
+
+Bitwarden is one of the most impressive password managers for Linux. I'll be honest that I didn't know about this until now – and I'm already making the switch from [LastPass][4]. I was able to easily import the data from LastPass without any issues and had no trouble whatsoever.
+
+The premium version costs just $10/year – which seems to be worth it (I've upgraded for my personal usage).
+
+It is an open source solution – so there's nothing shady about it. You can even host it on your own server and create a password solution for your organization.
+
+In addition to that, you get all the necessary features like 2FA for login, import/export options for your credentials, fingerprint phrase (a unique key), password generator, and more.
+
+You can upgrade your account to an organization account for free to be able to share your information with 2 users in total. However, if you want additional encrypted vault storage and the ability to share passwords with 5 users, premium upgrades are available starting from as low as $1 per month. I think it's definitely worth a shot!
+
+[Bitwarden][5]
+
+#### 2\. Buttercup
+
+![][6]
+
+Key Highlights:
+
+ * Open Source
+ * Free, with no premium options.
+ * Cross-platform
+ * Browser Extensions available
+
+Yet another open-source password manager for Linux. Buttercup may not be a very popular solution – but if you are looking for a simpler alternative to store your credentials, this would be a good start.
+
+Unlike some others, you do not have to be skeptical about its cloud servers because it sticks to offline usage only and supports connecting cloud sources like [Dropbox][7], [OwnCloud][8], [Nextcloud][9], and [WebDAV][10].
+
+So, you can opt for the cloud source if you need to sync the data. You've got the choice for it.
+
+[Buttercup][11]
+
+#### 3\. KeePassXC
+
+![][12]
+
+Key Highlights:
+
+ * Open Source
+ * Simple password manager
+ * Cross-platform
+ * No mobile support
+
+KeePassXC is a community fork of [KeePassX][13] – which was originally a Linux port for [KeePass][14] on Windows.
+
+In case you're not aware, KeePassX hasn't been maintained for years – so KeePassXC is a good alternative if you are looking for a dead-simple password manager. KeePassXC may not be the prettiest or fanciest password manager, but it does the job.
+
+It is secure and open source as well. I think that makes it worth a shot, what say?
+
+[KeePassXC][15]
+
+#### 4\. Enpass (not open source)
+
+![][16]
+
+Key Highlights:
+
+ * Proprietary
+ * A lot of features – including ‘Wearable’ device support.
+ * Completely free for Linux (with premium features)
+
+Enpass is a quite popular password manager across multiple platforms. Even though it's not an open source solution, a lot of people rely on it – so you can be sure that it works, at least.
+
+It offers a great deal of features, and if you have a wearable device, it will support that too – which is rare.
+
+It's great to see that Enpass manages the package for Linux distros actively. Also, note that it works for 64-bit systems only. You can find the [official instructions for installation][17] on their website. It will require utilizing the terminal, but I followed the steps to test it out and it worked like a charm.
+
+[Enpass][18]
+
+#### 5\. 
myki (not open source) + +![][19] + +Key Highlights: + + * Proprietary + * Avoids cloud servers for storing passwords + * Focuses on local peer-to-peer syncing + * Ability to replace passwords with Fingerprint IDs on mobile + + + +This may not be a popular recommendation – but I found it very interesting. It is a proprietary password manager which lets you avoid cloud servers and relies on peer-to-peer sync. + +So, if you do not want to utilize any cloud servers to store your information, this is for you. It is also interesting to note that the app available for Android and iOS helps you replace passwords with your fingerprint ID. If you want convenience on your mobile phone along with the basic functionality on a desktop password manager – this looks like a good option. + +However, if you are opting for a premium upgrade, the pricing plans are for you to judge, definitely not cheap. + +Do try it out and let us know how it goes! + +[myki][20] + +### Some Other Password Managers Worth Pointing Out + +Even without offering a standalone app for Linux, there are some password managers that may deserve a mention. + +If you need to utilize browser-based (extensions) password managers, we would recommend trying out [LastPass][21], [Dashlane][22], and [1Password][23]. LastPass even offers a [Linux client (and a command-line tool)][24]. + +If you are looking for CLI password managers, you should check out [Pass][25]. + +[Password Safe][26] is also an option – but the Linux client is in beta. I wouldn’t recommend relying on “beta” applications for storing passwords. [Universal Password Manager][27] exists but it’s no longer maintained. You may have also heard about [Password Gorilla][28] but it isn’t actively maintained. + +**Wrapping Up** + +Bitwarden seems to be my personal favorite for now. However, there are several options to choose from on Linux. You can either opt for something that offers a native app or just a browser extension – the choice is yours. + +If we missed listing out a password manager worth trying out, let us know about it in the comments below. As always, we’ll extend our list with your suggestion. 
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/password-managers-linux/ + +作者:[Ankush Das][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[b]: https://github.com/lujun9972 +[1]: https://medium.com/@computerphonedude/one-of-my-old-passwords-was-hacked-on-6-different-sites-and-i-had-no-clue-heres-how-to-quickly-ced23edf3b62 +[2]: https://itsfoss.com/password-generators-linux/ +[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/bitward.png?ssl=1 +[4]: https://www.lastpass.com/ +[5]: https://bitwarden.com/ +[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/buttercup.png?ssl=1 +[7]: https://www.dropbox.com/ +[8]: https://owncloud.com/ +[9]: https://nextcloud.com/ +[10]: https://en.wikipedia.org/wiki/WebDAV +[11]: https://buttercup.pw/ +[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/KeePassXC.png?ssl=1 +[13]: https://www.keepassx.org/ +[14]: https://keepass.info/ +[15]: https://keepassxc.org +[16]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/enpass.png?ssl=1 +[17]: https://www.enpass.io/support/kb/general/how-to-install-enpass-on-linux/ +[18]: https://www.enpass.io/ +[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/myki.png?ssl=1 +[20]: https://myki.com/ +[21]: https://lastpass.com/ +[22]: https://www.dashlane.com/ +[23]: https://1password.com/ +[24]: https://lastpass.com/misc_download2.php +[25]: https://www.passwordstore.org/ +[26]: https://pwsafe.org/ +[27]: http://upm.sourceforge.net/ +[28]: https://github.com/zdia/gorilla/wiki diff --git a/sources/tech/20191008 Bringing Some Order into a Collection of Photographs.md b/sources/tech/20191008 Bringing Some Order into a Collection of Photographs.md new file mode 100644 index 0000000000..b3c2dee08e --- /dev/null +++ b/sources/tech/20191008 Bringing Some Order into a Collection of Photographs.md @@ -0,0 +1,119 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Bringing Some Order into a Collection of Photographs) +[#]: via: (https://opensourceforu.com/2019/10/bringing-some-order-into-a-collection-of-photographs/) +[#]: author: (Dr Anil Seth https://opensourceforu.com/author/anil-seth/) + +Bringing Some Order into a Collection of Photographs +====== + +[![][1]][2] + +_In this article, the author shares tips on managing photographs using different Internet resources and Python programming._ + +These days, it is very easy to let Google Photos or similar cloud based services manage your photos. You can keep clicking on the smartphone and the photos get saved. The tools for helping you find photos, especially based on the content, keep getting better. There is no cost to keeping all your photos as long as you are an amateur and not taking very high resolution images. And it is far easier to let the dozens of photos clicked by accident remain on the cloud, than to remove them! + +Even if you are willing to delegate the task of managing photos to AI tools, there is still the challenge of what to do with the photos taken before the smartphone era. Broadly, the photos can be divided into two groups — those taken with digital cameras and the physical photo prints. +Each of the two categories will need to be handled and managed differently. 
First, consider the older physical photos.
+
+**Managing physical photos in the digital era**
+
+Photos can deteriorate over time. So, the sooner you digitise them, the better you will preserve your memories. Besides, it is far easier to share a memory digitally when the family members are scattered across the globe.
+
+The first hard decision is related to the physical albums. Should you take photos out of albums for scanning and risk damaging the albums, or scan the album pages and then crop individual photos from the album pages? Scanning or imaging tools can help with the cropping of photos.
+In this article, we assume that you are ready to deal with a collection of individual photos.
+
+One of the great features of photo management software, both on the cloud and the desktop, is that it organises the photos by date. However, the only date associated with scanned photos is the date of scanning! It will be a while before AI software can place the photos on a timeline by examining the age of the people in the photos. Currently, you will need to handle this aspect manually.
+
+One would like to be able to store a date in the metadata of the image so every tool can use it.
+Python has a number of packages to help you do this. A pretty easy one to use is pyexiv2. Here is a snippet of sample code to modify the date of an image:
+
+```
+import datetime
+import pyexiv2
+
+EXIF_DATE = 'Exif.Image.DateTime'
+EXIF_ORIG_DATE = 'Exif.Photo.DateTimeOriginal'
+
+def update_exif(filename, date):
+    try:
+        metadata = pyexiv2.ImageMetadata(filename)
+        metadata.read()
+        metadata[EXIF_DATE] = date
+        metadata[EXIF_ORIG_DATE] = date
+        metadata.write()
+    except:
+        print("Error " + filename)
+```
+
+Most photo management software seem to use either of the two dates, whichever is available. While you are setting the date, you might as well set both! There can be various ways in which the date for the photo may be specified. You may find the following scheme convenient.
+Sort the photos manually into directories, each with the name _yy-mm-dd_. If the date is not known, you might as well select an approximate date. If the month also is not known, set it to 01. Now, you can use the _os.walk_ function to iterate over the directories and files, and set the date for each file as just suggested above.
+
+You may further divide the files into event-based sub-directories and use the directory name, _event_label_, to label photos, as follows:
+
+```
+LABEL = 'Xmp.xmp.Label'
+metadata[LABEL] = pyexiv2.XmpTag(LABEL, event_label)
+```
+
+This is only for illustration purposes. You can decide on how you would like to organise the photos and use what seems most convenient for you.
+
+**Digital photos**
+Digital photos have different challenges. It is so easy to keep taking photos that you are likely to have a lot of them. Unless you have been careful, you are likely to find that you have used different tools for downloading photos from digital cameras and smartphones, so the file names and directory names are not consistent. A convenient option is to use the date and time of an image from the metadata and rename files accordingly. An example code follows:
+
+You may further divide the files into event-based sub-directories and use the sub-directory name, event_label, to label the photos, as follows:
+
+```
+LABEL = 'Xmp.xmp.Label'
+metadata[LABEL] = pyexiv2.XmpTag(LABEL, event_label)
+```
+
+This is only for illustration purposes. You can decide how you would like to organise the photos and use whatever seems most convenient for you.
+
+**Digital photos**
+Digital photos have different challenges. It is so easy to keep taking photos that you are likely to have a lot of them. Unless you have been careful, you will probably find that you have used different tools for downloading photos from digital cameras and smartphones, so the file names and directory names are not consistent. A convenient option is to take the date and time of an image from the metadata and rename the files accordingly. Example code follows:
+
+```
+import os
+import pyexiv2
+
+EXIF_DATE = 'Exif.Image.DateTime'
+EXIF_ORIG_DATE = 'Exif.Photo.DateTimeOriginal'
+
+def rename_file(p, f, fpref, ctr):
+    fold, fext = f.rsplit('.', 1)  # separate the extension, e.g. jpg
+    fname = fpref + '-%04i' % ctr  # add a serial number to ensure uniqueness
+    fnew = '.'.join((fname, fext))
+    os.rename('/'.join((p, f)), '/'.join((p, fnew)))
+
+def process_files(path, files):
+    ctr = 0
+    for f in files:
+        try:
+            metadata = pyexiv2.ImageMetadata('/'.join((path, f)))
+            metadata.read()
+            if EXIF_ORIG_DATE in metadata.exif_keys:
+                datestamp = metadata[EXIF_ORIG_DATE].human_value
+            else:
+                datestamp = metadata[EXIF_DATE].human_value
+            # Turn 'YYYY:MM:DD HH:MM:SS' into 'YYYY-MM-DD_HH-MM-SS'.
+            datepref = '_'.join([x.replace(':', '-') for x in datestamp.split(' ')])
+            rename_file(path, f, datepref, ctr)
+            ctr += 1
+        except Exception:
+            print('Error in %s/%s' % (path, f))
+
+for path, dirs, files in os.walk('.'):  # work with the current directory for convenience
+    if len(files) > 0:
+        process_files(path, files)
+```
+
+All the files now have consistent names. Since photo management software provides a way to view photos by time, organising the files into directories with meaningful names may be preferable. You can move photos into directories/albums that are meaningful, and the photo management software will let you view the photos either by album or by date.
+
+**Reducing clutter and duplicates**
+Over time, my collection included multiple copies of the same photos. In the old days, to share photos easily, I used to even keep low-resolution copies. digiKam has an excellent option for identifying similar photos; however, each photo then needs to be handled individually. A very convenient tool for finding the duplicate files and managing them programmatically is *. The output of this program contains each set of duplicate files on a separate line.
+
+You can use the Python Pillow and Matplotlib packages to display the images. Use the image's size to select the image with the highest resolution among the duplicates, retain that and delete the rest.
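+
+The selection step can be scripted. Here is a minimal sketch (again my illustration, not the author's code); it assumes the duplicate finder writes one whitespace-separated set of file names per line, so paths containing spaces would need smarter parsing:
+
+```
+import os
+from PIL import Image  # the Pillow package
+
+def pixel_area(path):
+    # Open the image just long enough to read its dimensions.
+    with Image.open(path) as image:
+        return image.width * image.height
+
+def keep_highest_resolution(dupes_file):
+    # Each line of dupes_file holds one set of duplicate file names.
+    with open(dupes_file) as sets:
+        for line in sets:
+            paths = line.split()
+            if len(paths) < 2:
+                continue
+            keeper = max(paths, key=pixel_area)  # highest resolution wins
+            for path in paths:
+                if path != keeper:
+                    os.remove(path)
+```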
+
+One thing is certain, though. After all the work is done, it is a pleasure to look at the photographs and relive all those old memories.
+
+--------------------------------------------------------------------------------
+
+via: https://opensourceforu.com/2019/10/bringing-some-order-into-a-collection-of-photographs/
+
+作者:[Dr Anil Seth][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensourceforu.com/author/anil-seth/
+[b]: https://github.com/lujun9972
+[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Gimp-6-Souping-up-photos.jpg?resize=696%2C492&ssl=1 (Gimp-6 Souping up photos)
+[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Gimp-6-Souping-up-photos.jpg?fit=900%2C636&ssl=1
diff --git a/sources/tech/20191009 Start developing in the cloud with Eclipse Che IDE.md b/sources/tech/20191009 Start developing in the cloud with Eclipse Che IDE.md
new file mode 100644
index 0000000000..e3ddcf5e07
--- /dev/null
+++ b/sources/tech/20191009 Start developing in the cloud with Eclipse Che IDE.md
@@ -0,0 +1,124 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Start developing in the cloud with Eclipse Che IDE)
+[#]: via: (https://opensource.com/article/19/10/cloud-ide-che)
+[#]: author: (Bryant Son https://opensource.com/users/brson)
+
+Start developing in the cloud with Eclipse Che IDE
+======
+Eclipse Che offers Java developers an Eclipse IDE in a container-based
+cloud environment.
+
+![Tools in a cloud][1]
+
+In the many, many technical interviews I've gone through in my professional career, I've noticed that I'm rarely asked questions that have definitive answers. Most of the time, I'm asked open-ended questions that do not have an absolutely correct answer but evaluate my prior experiences and how well I can explain things.
+
+One interesting open-ended question that I've been asked several times is:
+
+> "As you start your first day on a project, what five tools do you install first and why?"
+
+There is no single definitively correct answer to this question. But as a programmer who codes, I know the must-have tools that I cannot live without. And as a Java developer, I always include an integrated development environment (IDE)—and my two favorites are Eclipse IDE and IntelliJ IDEA.
+
+### My Java story
+
+When I was a student at the University of Texas at Austin, most of my computer science courses were taught in Java. And as an enterprise developer working for different companies, I have mostly worked with Java to build various enterprise-level applications. So, I know Java, and most of the time I've developed with Eclipse. I have also used the Spring Tools Suite (STS), which is a variation of the Eclipse IDE that is installed with Spring Framework plugins, and IntelliJ, which is not exactly open source, since I prefer its paid edition, but some Java developers favor it due to its faster performance and other fancy features.
+
+Regardless of which IDE you use, installing your own developer IDE presents one common, big problem: _"It works on my computer, and I don't know why it doesn't work on your computer."_
+
+[![xkcd comic][2]][3]
+
+Because a developer tool like Eclipse can be highly dependent on the runtime environment, library configuration, and operating system, the task of creating a unified sharing environment for everyone can be quite a challenge.
+
+But there is a perfect solution to this. We are living in the age of cloud computing, and Eclipse Che provides an open source solution to running an Eclipse-based IDE in a container-based cloud environment.
+
+### From local development to a cloud environment
+
+I want the benefits of a cloud-based development environment with the familiarity of my local system. That's a difficult balance to find.
+
+When I first heard about Eclipse Che, it looked like the cloud-based development environment I'd been looking for, but I got busy with technology I needed to learn and didn't follow up with it. Then a new project came up that required a remote environment, and I had the perfect excuse to use Che. Although I couldn't fully switch to the cloud-based IDE for my daily work, I saw it as a chance to get more familiar with it.
+
+![Eclipse Che interface][4]
+
+Eclipse Che IDE has a lot of excellent [features][5], but what I like most is that it is an open source framework that offers exactly what I want to achieve:
+
+ 1. Scalable workspaces leveraging the power of the cloud
+ 2. Extensible and customizable plugins for different runtimes
+ 3. A seamless onboarding experience to enable smooth collaboration between members
+
+
+
+### Getting started with Eclipse Che
+
+Eclipse Che can be installed on any container-based environment. I run both [Code Ready Workspace 1.2][6] and [Eclipse Che 7][7] on [OpenShift][8], but I've also tried it on top of [Minikube][9] and [Minishift][10].
+ +![Eclipse Che on OpenShift][11] + +Read the requirement guides to ensure your runtime is compatible with Che: + + * [Che on Kubernetes][12] + * [Che on OpenShift-compatible OSS environments like OKD][13] + + + +For instance, you can quickly install Eclipse Che if you launch OKD locally through Minishift, but make sure to have at least 5GB RAM to have a smooth experience. + +There are various ways to install Eclipse Che; I recommend leveraging the Che command-line interface, [chectl][14]. Although it is still in an incubator stage, it is my preferred way because it gives multiple configuration and management options. You can also run the installation as [an Operator][15], which you can [read more about][16]. I decided to go with chectl since I did not want to take on both concepts at the same time. Che's quick-start provides [installation steps for many scenarios][17]. + +### Why cloud works best for me + +Although the local installation of Eclipse Che works, I found the most painless way is to install it on one of the common public cloud vendors. + +I like to collaborate with others in my IDE; working collaboratively is essential if you want your application to be something more than a hobby project. And when you are working at a company, there will be enterprise considerations around the application lifecycle of develop, test, and deploy for your application. + +Eclipse Che's multi-user capability means each person owns an isolated workspace that does not interfere with others' workspaces, yet team members can still collaborate on application development by working in the same cluster. And if you are considering moving to Eclipse Che for something more than a hobby or testing, the cloud environment's multi-user features will enable a faster development cycle. This includes [resource management][18] to ensure resources are allocated to each environment, as well as security considerations like [authentication and authorization][19] (or specific needs like [OpenID][20]) that are important to maintaining the environment. + +Therefore, moving Eclipse Che to the cloud early will be a good choice if your development experience is like mine. By moving to the cloud, you can take advantage of cloud-based scalability and resource flexibility while on the road. + +### Use Che and give back + +I really enjoy this new development configuration that enables me to regularly code in the cloud. Open source enables me to do so in an easy way, so it's important for me to consider how to give back. All of Che's components are open source under the Eclipse Public License 2.0 and available on GitHub at the following links: + + * [Eclipse Che GitHub][21] + * [Eclipse Che Operator][15] + * [chectl (Eclipse Che CLI)][14] + + + +Consider using Che and giving back—either as a user by filing bug reports or as a developer to help enhance the project. 
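+
+As a quick, rough illustration of the chectl-based install mentioned earlier (this snippet is mine, not the author's; the command syntax follows the chectl README at the time of writing and may change between releases):
+
+```
+# Deploy Eclipse Che to a locally running Minishift cluster.
+# Run `chectl --help` to confirm the commands and flags that your
+# chectl version actually supports.
+$ chectl server:start --platform=minishift
+```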
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/10/cloud-ide-che
+
+作者:[Bryant Son][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/brson
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cloud_tools_hardware.png?itok=PGjJenqT (Tools in a cloud)
+[2]: https://opensource.com/sites/default/files/uploads/1_xkcd.jpg (xkcd comic)
+[3]: https://xkcd.com/1316
+[4]: https://opensource.com/sites/default/files/uploads/0_banner.jpg (Eclipse Che interface)
+[5]: https://www.eclipse.org/che/features
+[6]: https://developers.redhat.com/products/codeready-workspaces/overview
+[7]: https://che.eclipse.org/eclipse-che-7-is-now-available-40ae07120b38
+[8]: https://www.openshift.com/
+[9]: https://kubernetes.io/docs/tutorials/hello-minikube/
+[10]: https://www.okd.io/minishift/
+[11]: https://opensource.com/sites/default/files/uploads/2_openshiftresources.jpg (Eclipse Che on OpenShift)
+[12]: https://www.eclipse.org/che/docs/che-6/kubernetes-single-user.html
+[13]: https://www.eclipse.org/che/docs/che-6/openshift-single-user.html
+[14]: https://github.com/che-incubator/chectl
+[15]: https://github.com/eclipse/che-operator
+[16]: https://opensource.com/article/19/6/kubernetes-potential-run-anything
+[17]: https://www.eclipse.org/che/docs/che-7/che-quick-starts.html#running-che-locally_che-quick-starts
+[18]: https://www.eclipse.org/che/docs/che-6/resource-management.html
+[19]: https://www.eclipse.org/che/docs/che-6/user-management.html
+[20]: https://www.eclipse.org/che/docs/che-6/authentication.html
+[21]: https://github.com/eclipse/che
diff --git a/sources/tech/20191009 The Emacs Series ht.el- The Hash Table Library for Emacs.md b/sources/tech/20191009 The Emacs Series ht.el- The Hash Table Library for Emacs.md
new file mode 100644
index 0000000000..84e5a46acb
--- /dev/null
+++ b/sources/tech/20191009 The Emacs Series ht.el- The Hash Table Library for Emacs.md
@@ -0,0 +1,414 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (The Emacs Series ht.el: The Hash Table Library for Emacs)
+[#]: via: (https://opensourceforu.com/2019/10/the-emacs-series-ht-el-the-hash-table-library-for-emacs/)
+[#]: author: (Shakthi Kannan https://opensourceforu.com/author/shakthi-kannan/)
+
+The Emacs Series ht.el: The Hash Table Library for Emacs
+======
+
+[![][1]][2]
+
+_In this article, we explore the various hash table functions and macros provided by the ht.el library._
+
+The ht.el hash table library for Emacs has been written by Wilfred Hughes. The latest tagged release is version 2.2 and the software is released under the GNU General Public License v3. The source code is available at __. It provides a comprehensive list of hash table operations and a very consistent API; for example, any mutation function will always return nil.
+
+**Installation**
+The Milkypostman's Emacs Lisp Package Archive (MELPA) and Marmalade repositories have ht.el available for installation. You can add the following code to your Emacs init.el configuration file:
+
+```
+(require 'package)
+(add-to-list 'package-archives '("melpa" .
+"https://melpa.org/packages/") t)
+```
+
+You can then run _M-x package-install <RET> ht <RET>_ to install the _ht.el_ library. If you are using Cask, you simply add the following code to your Cask file:
+
+```
+(depends-on "ht")
+```
+
+You will need the ht library in your Emacs environment before using the API functions.
+
+```
+(require 'ht)
+```
+
+**Usage**
+Let us now explore the various API functions provided by the _ht.el_ library. The _ht-create_ function will return a hash table that can be assigned to a hash table variable. You can also verify that the variable is a hash table using the type-of function, as shown below:
+
+```
+(let ((greetings (ht-create)))
+(type-of greetings))
+
+hash-table
+```
+
+You can add an item to the hash table using the ht-set! function, which takes the hash table, a key and a value as arguments. The entries in the hash table can be listed using the _ht-items_ function, as illustrated below:
+
+```
+(ht-set! hash-table key value) ;; Syntax
+(ht-items hash-table) ;; Syntax
+
+(let ((greetings (ht-create)))
+(ht-set! greetings "Adam" "Hello Adam!")
+(ht-items greetings))
+
+(("Adam" "Hello Adam!"))
+```
+
+The keys present in a hash table can be retrieved using the _ht-keys_ function, while the values can be obtained using the _ht-values_ function, as shown in the following examples:
+
+```
+(ht-keys hash-table) ;; Syntax
+(ht-values hash-table) ;; Syntax
+
+(let ((greetings (ht-create)))
+(ht-set! greetings "Adam" "Hello Adam!")
+(ht-keys greetings))
+
+("Adam")
+
+(let ((greetings (ht-create)))
+(ht-set! greetings "Adam" "Hello Adam!")
+(ht-values greetings))
+
+("Hello Adam!")
+```
+
+The _ht-clear!_ function can be used to clear all the items in a hash table. For example:
+
+```
+(ht-clear! hash-table) ;; Syntax
+
+(let ((greetings (ht-create)))
+(ht-set! greetings "Adam" "Hello Adam!")
+(ht-clear! greetings)
+(ht-items greetings))
+
+nil
+```
+
+An entire hash table can be copied to another hash table using the _ht-copy_ API, as shown below:
+
+```
+(ht-copy hash-table) ;; Syntax
+
+(let ((greetings (ht-create)))
+(ht-set! greetings "Adam" "Hello Adam!")
+(ht-items (ht-copy greetings)))
+
+(("Adam" "Hello Adam!"))
+```
+
+The _ht-merge_ function can combine two different hash tables into one. In the following example, the items in the _english_ and _numbers_ hash tables are merged together.
+
+```
+(ht-merge hash-table1 hash-table2) ;; Syntax
+
+(let ((english (ht-create))
+(numbers (ht-create)))
+(ht-set! english "a" "A")
+(ht-set! numbers "1" "One")
+(ht-items (ht-merge english numbers)))
+
+(("1" "One") ("a" "A"))
+```
+
+You can make modifications to an existing hash table. For example, you can remove an item from the hash table using the _ht-remove!_ function, which takes as input a hash table and a key, as shown below:
+
+```
+(ht-remove! hash-table key) ;; Syntax
+
+(let ((greetings (ht-create)))
+(ht-set! greetings "Adam" "Hello Adam!")
+(ht-set! greetings "Eve" "Hello Eve!")
+(ht-remove! greetings "Eve")
+(ht-items greetings))
+
+(("Adam" "Hello Adam!"))
+```
+
+You can do an in-place modification of items in the hash table using the _ht-update!_ function, which takes a second hash table whose entries overwrite those of the first. An example is given below:
+
+```
+(ht-update! hash-table from-hash-table) ;; Syntax
+
+(let ((greetings (ht-create)))
+(ht-set! greetings "Adam" "Hello Adam!")
+(ht-update! greetings (ht ("Adam" "Howdy Adam!")))
+(ht-items greetings))
+
+(("Adam" "Howdy Adam!"))
+```
+
+A number of predicate functions are available in _ht.el_ that can be used to check for conditions in a hash table.
The _ht?_ function checks to see if the input argument is a hash table. It returns t if the argument is a hash table and _nil_ otherwise.
+
+```
+(ht? hash-table) ;; Syntax
+
+(ht? nil)
+
+nil
+
+(let ((greetings (ht-create)))
+(ht? greetings))
+
+t
+```
+
+You can verify whether a key is present in a hash table using the _ht-contains?_ API, which takes a hash table and a key as arguments. It returns t if the item exists in the hash table; otherwise, it simply returns _nil_.
+
+```
+(ht-contains? hash-table key) ;; Syntax
+
+(let ((greetings (ht-create)))
+(ht-set! greetings "Adam" "Hello Adam!")
+(ht-contains? greetings "Adam"))
+
+t
+
+(let ((greetings (ht-create)))
+(ht-set! greetings "Adam" "Hello Adam!")
+(ht-contains? greetings "Eve"))
+
+nil
+```
+
+The _ht-empty?_ function can be used to check whether the input hash table is empty. A couple of examples are shown below:
+
+```
+(ht-empty? hash-table) ;; Syntax
+
+(let ((greetings (ht-create)))
+(ht-set! greetings "Adam" "Hello Adam!")
+(ht-empty? greetings))
+
+nil
+
+(let ((greetings (ht-create)))
+(ht-empty? greetings))
+
+t
+```
+
+Two hash tables can be checked for equality using the _ht-equal?_ function, as illustrated below:
+
+```
+(ht-equal? hash-table1 hash-table2) ;; Syntax
+
+(let ((english (ht-create))
+(numbers (ht-create)))
+(ht-set! english "a" "A")
+(ht-set! numbers "1" "One")
+(ht-equal? english numbers))
+
+nil
+```
+
+A few of the ht.el library functions accept a function as an argument and apply it to the items of the hash table. For example, the ht-map function takes a function with a key and value as arguments, and applies the function to each item in the hash table:
+
+```
+(ht-map function hash-table) ;; Syntax
+
+(let ((numbers (ht-create)))
+(ht-set! numbers 1 "One")
+(ht-map (lambda (x y) (* x 2)) numbers))
+
+(2)
+```
+
+You can also use the _ht-each_ API to iterate through each item in the hash table. In the following example, the sum of all the values is calculated and finally printed in the output.
+
+```
+(ht-each function hash-table) ;; Syntax
+
+(let ((numbers (ht-create))
+(sum 0))
+(ht-set! numbers "A" 1)
+(ht-set! numbers "B" 2)
+(ht-set! numbers "C" 3)
+(ht-each (lambda (key value) (setq sum (+ sum value))) numbers)
+(print sum))
+
+6
+```
+
+The _ht-select_ function can be used to match and pick a specific set of items in the hash table. For example:
+
+```
+(ht-select function hash-table) ;; Syntax
+
+(let ((numbers (ht-create)))
+(ht-set! numbers 1 "One")
+(ht-set! numbers 2 "Two")
+(ht-items (ht-select (lambda (x y) (= x 2)) numbers)))
+
+((2 "Two"))
+```
+
+You can also reject a set of values by passing a filter function to the _ht-reject_ API, and retrieve those items from the hash table that do not match the predicate function. In the following example, key 2 is rejected and the item with key 1 is returned.
+
+```
+(ht-reject function hash-table) ;; Syntax
+
+(let ((numbers (ht-create)))
+(ht-set! numbers 1 "One")
+(ht-set! numbers 2 "Two")
+(ht-items (ht-reject (lambda (x y) (= x 2)) numbers)))
+
+((1 "One"))
+```
+
+If you want to mutate the existing hash table and remove the items that match a filter function, you can use the _ht-reject!_ function, as shown below:
+
+```
+(ht-reject! function hash-table) ;; Syntax
+
+(let ((numbers (ht-create)))
+(ht-set! numbers 1 "One")
+(ht-set! numbers 2 "Two")
+(ht-reject! (lambda (x y) (= x 2)) numbers)
+(ht-items numbers))
+
+((1 "One"))
+```
+
+The _ht-find_ function accepts a function and a hash table, and returns the first key-value pair that satisfies the input function. For example:
+
+```
+(ht-find function hash-table) ;; Syntax
+
+(let ((numbers (ht-create)))
+(ht-set! numbers 1 "One")
+(ht-set! numbers 2 "Two")
+(ht-find (lambda (x y) (= x 2)) numbers))
+
+(2 "Two")
+```
+
+You can retrieve the items in the hash table for a specific set of keys with the _ht-select-keys_ API, as illustrated below:
+
+```
+(ht-select-keys hash-table keys) ;; Syntax
+
+(let ((numbers (ht-create)))
+(ht-set! numbers 1 "One")
+(ht-set! numbers 2 "Two")
+(ht-items (ht-select-keys numbers '(1))))
+
+((1 "One"))
+```
+
+The following two examples use the hash table library functions in a more comprehensive way. The _say-hello_ function returns a greeting based on the name, as shown below:
+
+```
+(defun say-hello (name)
+(let ((greetings (ht-create)))
+(ht-set! greetings "Adam" "Hello Adam!")
+(ht-set! greetings "Eve" "Hello Eve!")
+(ht-get greetings name "Hello stranger!")))
+
+(say-hello "Adam")
+"Hello Adam!"
+
+(say-hello "Eve")
+"Hello Eve!"
+
+(say-hello "Bob")
+"Hello stranger!"
+```
+
+The _ht_ macro returns a hash table, and we create nested hash tables in the following example:
+
+```
+(let ((alphabets (ht ("Greek" (ht (1 (ht ('letter "α")
+('name "alpha")))
+(2 (ht ('letter "β")
+('name "beta")))))
+("English" (ht (1 (ht ('letter "a")
+('name "A")))
+(2 (ht ('letter "b")
+('name "B"))))))))
+(ht-get* alphabets "Greek" 1 'letter))
+
+"α"
+```
+
+**Testing**
+The _ht.el_ library has built-in tests that you can execute to validate the API functions. You first need to clone the repository using the following commands:
+
+```
+$ git clone git@github.com:Wilfred/ht.el.git
+
+Cloning into ‘ht.el’...
+remote: Enumerating objects: 1, done.
+remote: Counting objects: 100% (1/1), done.
+Receiving objects: 100% (471/471), 74.58 KiB | 658.00 KiB/s, done.
+remote: Total 471 (delta 0), reused 1 (delta 0), pack-reused 470
+Resolving deltas: 100% (247/247), done.
+```
+
+If you do not have Cask, install it using the instructions provided in the _README_ file at __.
+You can then change directory into the cloned _ht.el_ folder and run _cask install_. This will locally install the required dependencies for running the tests.
+
+```
+$ cd ht.el/
+$ cask install
+Loading package information... Select coding system (default utf-8):
+done
+Package operations: 4 installs, 0 removals
+- Installing [ 1/4] dash (2.12.0)... done
+- Installing [ 2/4] ert-runner (latest)... done
+- Installing [ 3/4] cl-lib (latest)... already present
+- Installing [ 4/4] f (latest)... already present
+```
+
+A _Makefile_ exists in the top-level directory, and you can simply run _make_ to run the tests, as shown below:
+
+```
+$ make
+rm -f ht.elc
+make unit
+make[1]: Entering directory ‘/home/guest/ht.el’
+cask exec ert-runner
+.........................................
+
+Ran 41 tests in 0.016 seconds
+make[1]: Leaving directory ‘/home/guest/ht.el’
+make compile
+make[1]: Entering directory ‘/home/guest/ht.el’
+cask exec emacs -Q -batch -f batch-byte-compile ht.el
+make[1]: Leaving directory ‘/home/guest/ht.el’
+make unit
+make[1]: Entering directory ‘/home/guest/ht.el’
+cask exec ert-runner
+.........................................
+ +Ran 41 tests in 0.015 seconds +make[1]: Leaving directory ‘/home/guest/ht.el’ +make clean-elc +make[1]: Entering directory ‘/home/guest/ht.el’ +rm -f ht.elc +make[1]: Leaving directory ‘/home/guest/ht.el’ +``` + +You are encouraged to read the ht.el _README_ file from the GitHub repository at __ for more information. + +-------------------------------------------------------------------------------- + +via: https://opensourceforu.com/2019/10/the-emacs-series-ht-el-the-hash-table-library-for-emacs/ + +作者:[Shakthi Kannan][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensourceforu.com/author/shakthi-kannan/ +[b]: https://github.com/lujun9972 +[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/GPL-3.jpg?resize=696%2C351&ssl=1 (GPL 3) +[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/GPL-3.jpg?fit=998%2C503&ssl=1 diff --git a/sources/tech/20191010 Achieve high-scale application monitoring with Prometheus.md b/sources/tech/20191010 Achieve high-scale application monitoring with Prometheus.md new file mode 100644 index 0000000000..dc5ecedfff --- /dev/null +++ b/sources/tech/20191010 Achieve high-scale application monitoring with Prometheus.md @@ -0,0 +1,301 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Achieve high-scale application monitoring with Prometheus) +[#]: via: (https://opensource.com/article/19/10/application-monitoring-prometheus) +[#]: author: (Paul Brebner https://opensource.com/users/paul-brebner) + +Achieve high-scale application monitoring with Prometheus +====== +Prometheus' prowess as a monitoring system and its ability to achieve +high-scalability make it a strong choice for monitoring applications and +servers. +![Tall building with windows][1] + +[Prometheus][2] is an increasingly popular—for good reason—open source tool that provides monitoring and alerting for applications and servers. Prometheus' great strength is in monitoring server-side metrics, which it stores as [time-series data][3]. While Prometheus doesn't lend itself to application performance management, active control, or user experience monitoring (although a GitHub extension does make user browser metrics available to Prometheus), its prowess as a monitoring system and ability to achieve high-scalability through a [federation of servers][4] make Prometheus a strong choice for a wide variety of use cases. + +In this article, we'll take a closer look at Prometheus' architecture and functionality and then examine a detailed instance of the tool in action. + +### Prometheus architecture and components + +Prometheus consists of the Prometheus server (handling service discovery, metrics retrieval and storage, and time-series data analysis through the PromQL query language), a data model for metrics, a graphing GUI, and native support for [Grafana][5]. There is also an optional alert manager that allows users to define alerts via the query language and an optional push gateway for short-term application monitoring. These components are situated as shown in the following diagram. + +![Promethius architecture][6] + +Prometheus can automatically capture standard metrics by using agents to execute general-purpose code in the application environment. 
It can also capture custom metrics through instrumentation, placing custom code within the source code of the monitored application. Prometheus officially supports [client libraries][7] for Go, Python, Ruby, and Java/Scala and also enables users to write their own libraries. Additionally, many unofficial libraries for other languages are available.
+
+Developers can also utilize third-party [exporters][8] to automatically activate instrumentation for many popular software solutions they might be using. For example, users of JVM-based applications like open source [Apache Kafka][9] and [Apache Cassandra][10] can easily collect metrics by leveraging the existing [JMX exporter][11]. In other cases, an exporter won't be needed because the application will [expose metrics][12] that are already in the Prometheus format. Those on Cassandra might also find Instaclustr's freely available [Cassandra Exporter for Prometheus][13] to be helpful, as it integrates Cassandra metrics from a self-managed cluster into Prometheus application monitoring.
+
+Also important: Developers can leverage an available [node exporter][14] to monitor kernel metrics and host hardware. Prometheus offers a [Java client][15] as well, with a number of features that can be registered either piecemeal or at once through a single **DefaultExports.initialize();** command—including memory pools, garbage collection, JMX, classloading, and thread counts.
+
+### Prometheus data modeling and metrics
+
+Prometheus provides four metric types:
+
+ * **Counter:** Counts incrementing values; a restart can return these values to zero
+ * **Gauge:** Tracks metrics that can go up and down
+ * **Histogram:** Observes data according to specified response sizes or durations and counts the sums of observed values along with counts in configurable buckets
+ * **Summary:** Counts observed data similar to a histogram and offers configurable quantiles that are calculated over a sliding time window
+
+
+
+Each Prometheus time-series metric includes a string name, which follows a naming convention that includes the name of the monitored data subject, the logical type, and the units of measure used. Each metric includes streams of 64-bit float values that are timestamped down to the millisecond, and a set of key:value pairs labeling the dimensions it measures. Prometheus automatically adds **Job** and **Instance** labels to each metric to keep track of the configured job name of the data target and the **<host>:<port>** piece of the scraped target URL, respectively.
+
+### Prometheus example: the Anomalia Machina anomaly detection experiment
+
+Before moving into the example, download and begin using open source Prometheus by following this [getting started][16] guide.
+
+To demonstrate how to put Prometheus into action and perform application monitoring at a high scale, let's take a look at a recent [experimental Anomalia Machina project][17] we completed at Instaclustr. This project—just a test case, not a commercially available solution—leverages Kafka and Cassandra in an application deployed by Kubernetes, which performs anomaly detection on streaming data. (Such detection is critical to use cases including IoT applications and digital ad fraud, among other areas.) The experimental application relies heavily on Prometheus to collect application metrics across distributed instances and make them readily available to view.
+
+This diagram displays the experiment's architecture:
+
+![Anomalia Machina Architecture][18]
+
+Our goals in utilizing Prometheus included monitoring the application's more generic metrics, such as throughput, as well as the response times delivered by the Kafka load generator (the Kafka producer), the Kafka consumer, and the Cassandra client tasked with detecting any anomalies in the data. Prometheus monitors the system's hardware metrics as well, such as the CPU for each AWS EC2 instance running the application. The project also counts on Prometheus to monitor application-specific metrics such as the total number of rows each Cassandra read returns and, crucially, the number of anomalies it detects. All of this monitoring is centralized for simplicity.
+
+In practice, this means forming a test pipeline with producer, consumer, and detector methods, as well as the following three metrics:
+
+ * A counter metric, called **prometheusTest_requests_total**, increments each time that each pipeline stage executes without incident, while a **stage** label allows for tracking the successful execution of each stage, and a **total** label tracks the total pipeline count.
+ * Another counter metric, called **prometheusTest_anomalies_total**, counts any detected anomalies.
+ * Finally, a gauge metric called **prometheusTest_duration_seconds** tracks the seconds of duration for each stage (again using a **stage** label and a **total** label).
+
+
+
+The code behind these measurements increments counter metrics using the **inc()** method and sets the time value of the gauge metric with the **setToTime()** method. This is demonstrated in the following annotated example code:
+
+
+```
+import java.io.IOException;
+import io.prometheus.client.Counter;
+import io.prometheus.client.Gauge;
+import io.prometheus.client.exporter.HTTPServer;
+import io.prometheus.client.hotspot.DefaultExports;
+
+// Demo of how we plan to use the Prometheus Java client to instrument Anomalia Machina.
+// Note that the Anomalia Machina application will have the Kafka producer, the Kafka
+// consumer and the rest of the pipeline running in multiple separate processes/instances,
+// so metrics from each will have different host/port combinations.
+public class PrometheusBlog {
+    static String appName = "prometheusTest";
+
+    // Counters can only increase in value (until process restart).
+    // Execution count: use a single Counter for all stages of the pipeline;
+    // stages are distinguished by labels.
+    static final Counter pipelineCounter = Counter.build()
+        .name(appName + "_requests_total").help("Count of executions of pipeline stages")
+        .labelNames("stage")
+        .register();
+
+    // In theory we could also use pipelineCounter to count anomalies found, using another
+    // label, but a separate counter leaves less potential for confusion. It doesn't need
+    // a label.
+    static final Counter anomalyCounter = Counter.build()
+        .name(appName + "_anomalies_total").help("Count of anomalies detected")
+        .register();
+
+    // A Gauge can go up and down, and is used to measure the current value of some variable.
+    // pipelineGauge will measure the duration in seconds of each stage using labels.
+    static final Gauge pipelineGauge = Gauge.build()
+        .name(appName + "_duration_seconds").help("Gauge of stage durations in seconds")
+        .labelNames("stage")
+        .register();
+
+    public static void main(String[] args) {
+        // Allow default JVM metrics to be exported.
+        DefaultExports.initialize();
+
+        // Metrics are pulled by Prometheus, so create an HTTP server as the endpoint.
+        // Note: if there are multiple processes running on the same server, each needs a
+        // different port number, and all IPs and port numbers must be added to the
+        // Prometheus configuration file.
+        HTTPServer server = null;
+        try {
+            server = new HTTPServer(1234);
+        } catch (IOException e) {
+            e.printStackTrace();
+        }
+
+        // Now run 1000 executions of the complete pipeline, with random time delays and
+        // an increasing rate.
+        int max = 1000;
+        for (int i = 0; i < max; i++) {
+            // Total time for the complete pipeline; increment anomalyCounter when an
+            // anomaly is detected.
+            pipelineGauge.labels("total").setToTime(() -> {
+                producer();
+                consumer();
+                if (detector())
+                    anomalyCounter.inc();
+            });
+            // Total pipeline count.
+            pipelineCounter.labels("total").inc();
+            System.out.println("i=" + i);
+
+            // Increase the rate of execution.
+            try {
+                Thread.sleep(max - i);
+            } catch (InterruptedException e) {
+                e.printStackTrace();
+            }
+        }
+        server.stop();
+    }
+
+    // The 3 stages of the pipeline. For each, we increment the stage counter and set the
+    // Gauge duration time.
+    public static void producer() {
+        class Local {};
+        String name = Local.class.getEnclosingMethod().getName();
+        pipelineGauge.labels(name).setToTime(() -> {
+            try {
+                Thread.sleep(1 + (long)(Math.random()*20));
+            } catch (InterruptedException e) {
+                e.printStackTrace();
+            }
+        });
+        pipelineCounter.labels(name).inc();
+    }
+
+    public static void consumer() {
+        class Local {};
+        String name = Local.class.getEnclosingMethod().getName();
+        pipelineGauge.labels(name).setToTime(() -> {
+            try {
+                Thread.sleep(1 + (long)(Math.random()*10));
+            } catch (InterruptedException e) {
+                e.printStackTrace();
+            }
+        });
+        pipelineCounter.labels(name).inc();
+    }
+
+    // detector returns true if an anomaly is detected, else false.
+    public static boolean detector() {
+        class Local {};
+        String name = Local.class.getEnclosingMethod().getName();
+        pipelineGauge.labels(name).setToTime(() -> {
+            try {
+                Thread.sleep(1 + (long)(Math.random()*200));
+            } catch (InterruptedException e) {
+                e.printStackTrace();
+            }
+        });
+        pipelineCounter.labels(name).inc();
+        return (Math.random() > 0.95);
+    }
+}
+```
+
+Prometheus collects metrics by polling ("scraping") instrumented code (unlike some other monitoring solutions that receive metrics via push methods). The code example above creates a required HTTP server on port 1234 so that Prometheus can scrape metrics as needed.
+
+The following sample code addresses Maven dependencies:
+
+
+```
+<!-- The client -->
+<dependency>
+  <groupId>io.prometheus</groupId>
+  <artifactId>simpleclient</artifactId>
+  <version>LATEST</version>
+</dependency>
+<!-- Hotspot JVM metrics -->
+<dependency>
+  <groupId>io.prometheus</groupId>
+  <artifactId>simpleclient_hotspot</artifactId>
+  <version>LATEST</version>
+</dependency>
+<!-- Exposition HTTPServer -->
+<dependency>
+  <groupId>io.prometheus</groupId>
+  <artifactId>simpleclient_httpserver</artifactId>
+  <version>LATEST</version>
+</dependency>
+<!-- Pushgateway exposition -->
+<dependency>
+  <groupId>io.prometheus</groupId>
+  <artifactId>simpleclient_pushgateway</artifactId>
+  <version>LATEST</version>
+</dependency>
+```
+
+The code example below tells Prometheus where it should look to scrape metrics.
This code can simply be added to the configuration file (default: prometheus.yml) for basic deployments and tests.
+
+
+```
+global:
+  scrape_interval: 15s # By default, scrape targets every 15 seconds.
+
+# scrape_configs has jobs and targets to scrape for each.
+scrape_configs:
+  # job 1 is for testing prometheus instrumentation from multiple application processes.
+  # The job name is added as a label job=<job_name> to any timeseries scraped from this config.
+  - job_name: 'testprometheus'
+
+    # Override the global default and scrape targets from this job every 5 seconds.
+    scrape_interval: 5s
+
+    # this is where to put multiple targets, e.g. for Kafka load generators and detectors
+    static_configs:
+      - targets: ['localhost:1234', 'localhost:1235']
+
+  # job 2 provides operating system metrics (e.g. CPU, memory etc).
+  - job_name: 'node'
+
+    # Override the global default and scrape targets from this job every 5 seconds.
+    scrape_interval: 5s
+
+    static_configs:
+      - targets: ['localhost:9100']
+```
+
+Note the job named "node" that uses port 9100 in this configuration file; this job offers node metrics and requires running the [Prometheus node exporter][14] on the same server where the application is running. Polling for metrics should be done with care: doing it too often can overload applications, too infrequently can result in lag. Where application metrics can't be polled, Prometheus also offers a [push gateway][19].
+
+### Viewing Prometheus metrics and results
+
+Our experiment initially used [expressions][20], and later [Grafana][5], to visualize data and overcome Prometheus' lack of default dashboards. Using the Prometheus interface (or [http://localhost:9090/metrics][21]), select metrics by name and then enter them in the expression box for execution. (Note that it's common to experience error messages at this stage, so don't be discouraged if you encounter a few issues.) With correctly functioning expressions, results will be available for display in tables or graphs as appropriate.
+
+Using the **[irate][22]** or **[rate][23]** function on a counter metric will produce a useful rate graph:
+
+![Rate graph][24]
+
+Here is a similar graph of a gauge metric:
+
+![Gauge graph][25]
+
+Grafana provides much more robust graphing capabilities and built-in Prometheus support with graphs able to display multiple metrics:
+
+![Grafana graph][26]
+
+To enable Grafana, install it, navigate to , create a Prometheus data source, and add a Prometheus graph using an expression. A note here: An empty graph often points to a time range issue, which can usually be solved by using the "Last 5 minutes" setting.
+
+Creating this experimental application offered an excellent opportunity to build our knowledge of what Prometheus is capable of and resulted in a high-scale experimental production application that can monitor 19 billion real-time data events for anomalies each day. By following this guide and our example, hopefully, more developers can successfully put Prometheus into practice.
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/10/application-monitoring-prometheus + +作者:[Paul Brebner][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/paul-brebner +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/windows_building_sky_scale.jpg?itok=mH6CAX29 (Tall building with windows) +[2]: https://prometheus.io/ +[3]: https://prometheus.io/docs/concepts/data_model +[4]: https://prometheus.io/docs/prometheus/latest/federation +[5]: https://grafana.com/ +[6]: https://opensource.com/sites/default/files/uploads/prometheus_architecture.png (Promethius architecture) +[7]: https://prometheus.io/docs/instrumenting/clientlibs/ +[8]: https://prometheus.io/docs/instrumenting/exporters/ +[9]: https://kafka.apache.org/ +[10]: http://cassandra.apache.org/ +[11]: https://github.com/prometheus/jmx_exporter +[12]: https://prometheus.io/docs/instrumenting/exporters/#software-exposing-prometheus-metrics +[13]: https://github.com/instaclustr/cassandra-exporter +[14]: https://prometheus.io/docs/guides/node-exporter/ +[15]: https://github.com/prometheus/client_java +[16]: https://prometheus.io/docs/prometheus/latest/getting_started/ +[17]: https://github.com/instaclustr/AnomaliaMachina +[18]: https://opensource.com/sites/default/files/uploads/anomalia_machina_architecture.png (Anomalia Machina Architecture) +[19]: https://prometheus.io/docs/instrumenting/pushing/ +[20]: https://prometheus.io/docs/prometheus/latest/querying/basics/ +[21]: http://localhost:9090/metrics +[22]: https://prometheus.io/docs/prometheus/latest/querying/functions/#irate +[23]: https://prometheus.io/docs/prometheus/latest/querying/functions/#rate +[24]: https://opensource.com/sites/default/files/uploads/rate_graph.png (Rate graph) +[25]: https://opensource.com/sites/default/files/uploads/gauge_graph.png (Gauge graph) +[26]: https://opensource.com/sites/default/files/uploads/grafana_graph.png (Grafana graph) diff --git a/sources/tech/20191010 DevSecOps pipelines and tools- What you need to know.md b/sources/tech/20191010 DevSecOps pipelines and tools- What you need to know.md new file mode 100644 index 0000000000..c9e7432d49 --- /dev/null +++ b/sources/tech/20191010 DevSecOps pipelines and tools- What you need to know.md @@ -0,0 +1,74 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (DevSecOps pipelines and tools: What you need to know) +[#]: via: (https://opensource.com/article/19/10/devsecops-pipeline-and-tools) +[#]: author: (Sagar Nangare https://opensource.com/users/sagarnangare) + +DevSecOps pipelines and tools: What you need to know +====== +DevSecOps evolves DevOps to ensure security remains an essential part of +the process. +![An intersection of pipes.][1] + +DevOps is well-understood in the IT world by now, but it's not flawless. Imagine you have implemented all of the DevOps engineering practices in modern application delivery for a project. You've reached the end of the development pipeline—but a penetration testing team (internal or external) has detected a security flaw and come up with a report. Now you have to re-initiate all of your processes and ask developers to fix the flaw. 
+ +This is not terribly tedious in a DevOps-based software development lifecycle (SDLC) system—but it does consume time and affects the delivery schedule. If security were integrated from the start of the SDLC, you might have tracked down the glitch and eliminated it on the go. But pushing security to the end of the development pipeline, as in the above scenario, leads to a longer development lifecycle. + +This is the reason for introducing DevSecOps, which consolidates the overall software delivery cycle in an automated way. + +In modern DevOps methodologies, where containers are widely used by organizations to host applications, we see greater use of [Kubernetes][2] and [Istio][3]. However, these tools have their own vulnerabilities. For example, the Cloud Native Computing Foundation (CNCF) recently completed a [Kubernetes security audit][4] that identified several issues. All tools used in the DevOps pipeline need to undergo security checks while running in the pipeline, and DevSecOps pushes admins to monitor the tools' repositories for upgrades and patches. + +### What Is DevSecOps? + +Like DevOps, DevSecOps is a mindset or a culture that developers and IT operations teams follow while developing and deploying software applications. It integrates active and automated security audits and penetration testing into agile application development. + +To utilize [DevSecOps][5], you need to: + + * Introduce the concept of security right from the start of the SDLC to minimize vulnerabilities in software code. + * Ensure everyone (including developers and IT operations teams) shares responsibility for following security practices in their tasks. + * Integrate security controls, tools, and processes at the start of the DevOps workflow. These will enable automated security checks at each stage of software delivery. + + + +DevOps has always been about including security—as well as quality assurance (QA), database administration, and everyone else—in the dev and release process. However, DevSecOps is an evolution of that process to ensure security is never forgotten as an essential part of the process. + +### Understanding the DevSecOps pipeline + +There are different stages in a typical DevOps pipeline; a typical SDLC process includes phases like Plan, Code, Build, Test, Release, and Deploy. In DevSecOps, specific security checks are applied in each phase. + + * **Plan:** Execute security analysis and create a test plan to determine scenarios for where, how, and when testing will be done. + * **Code:** Deploy linting tools and Git controls to secure passwords and API keys. + * **Build:** While building code for execution, incorporate static application security testing (SAST) tools to track down flaws in code before deploying to production. These tools are specific to programming languages. + * **Test:** Use dynamic application security testing (DAST) tools to test your application while in runtime. These tools can detect errors associated with user authentication, authorization, SQL injection, and API-related endpoints. + * **Release:** Just before releasing the application, employ security analysis tools to perform thorough penetration testing and vulnerability scanning. + * **Deploy:** After completing the above tests in runtime, send a secure build to production for final deployment. + + + +### DevSecOps tools + +Tools are available for every phase of the SDLC. Some are commercial products, but most are open source. 
In my next article, I will talk more about the tools to use in the different stages of the pipeline.
+
+DevSecOps will play a more crucial role as we continue to see an increase in the complexity of enterprise security threats built on modern IT infrastructure. However, the DevSecOps pipeline will need to improve over time, rather than simply relying on implementing all security changes simultaneously. This will eliminate the possibility of backtracking or the failure of application delivery.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/10/devsecops-pipeline-and-tools
+
+作者:[Sagar Nangare][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]: https://opensource.com/users/sagarnangare
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-Internet_construction_9401467_520x292_0512_dc.png?itok=RPkPPtDe (An intersection of pipes.)
+[2]: https://opensource.com/resources/what-is-kubernetes
+[3]: https://opensource.com/article/18/9/what-istio
+[4]: https://www.cncf.io/blog/2019/08/06/open-sourcing-the-kubernetes-security-audit/
+[5]: https://resources.whitesourcesoftware.com/blog-whitesource/devsecops
diff --git a/sources/tech/20191013 How to Enable EPEL Repository on CentOS 8 and RHEL 8 Server.md b/sources/tech/20191013 How to Enable EPEL Repository on CentOS 8 and RHEL 8 Server.md
new file mode 100644
index 0000000000..d959b30d0c
--- /dev/null
+++ b/sources/tech/20191013 How to Enable EPEL Repository on CentOS 8 and RHEL 8 Server.md
@@ -0,0 +1,163 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to Enable EPEL Repository on CentOS 8 and RHEL 8 Server)
+[#]: via: (https://www.linuxtechi.com/enable-epel-repo-centos8-rhel8-server/)
+[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
+
+How to Enable EPEL Repository on CentOS 8 and RHEL 8 Server
+======
+
+**EPEL** stands for Extra Packages for Enterprise Linux; it is a free and open source repository of additional packages for **CentOS** and **RHEL** servers. As the name suggests, the EPEL repository provides packages that are not available in the default package repositories of [CentOS 8][1] and [RHEL 8][2].
+
+In this article, we will demonstrate how to enable and use the EPEL repository on CentOS 8 and RHEL 8 servers.
+
+[![EPEL-Repo-CentOS8-RHEL8][3]][4]
+
+### Prerequisites of EPEL Repository
+
+ * Minimal CentOS 8 and RHEL 8 Server
+ * Root or sudo admin privileges
+ * Internet Connection
+
+
+
+### Install and Enable EPEL Repository on RHEL 8.x Server
+
+Log in (or SSH) to your RHEL 8.x server and execute the following dnf command to install the EPEL rpm package:
+
+```
+[root@linuxtechi ~]# dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm -y
+```
+
+The output of the above command will be something like below,
+
+![dnf-install-epel-repo-rehl8][3]
+
+Once the epel rpm package is installed successfully, it will automatically enable and configure its yum / dnf repository.
Run the following dnf or yum command to verify whether the EPEL repository is enabled:
+
+```
+[root@linuxtechi ~]# dnf repolist epel
+Or
+[root@linuxtechi ~]# dnf repolist epel -v
+```
+
+![epel-repolist-rhel8][3]
+
+### Install and Enable EPEL Repository on CentOS 8.x Server
+
+Log in (or SSH) to your CentOS 8 server and execute the following dnf or yum command to install the '**epel-release**' rpm package. On a CentOS 8 server, the epel rpm package is available in the default package repository.
+
+```
+[root@linuxtechi ~]# dnf install epel-release -y
+Or
+[root@linuxtechi ~]# yum install epel-release -y
+```
+
+Execute the following commands to verify the status of the EPEL repository on the CentOS 8 server:
+
+```
+[root@linuxtechi ~]# dnf repolist epel
+Last metadata expiration check: 0:00:03 ago on Sun 13 Oct 2019 04:18:05 AM BST.
+repo id repo name status
+*epel Extra Packages for Enterprise Linux 8 - x86_64 1,977
+[root@linuxtechi ~]#
+[root@linuxtechi ~]# dnf repolist epel -v
+……………………
+Repo-id : epel
+Repo-name : Extra Packages for Enterprise Linux 8 - x86_64
+Repo-status : enabled
+Repo-revision: 1570844166
+Repo-updated : Sat 12 Oct 2019 02:36:32 AM BST
+Repo-pkgs : 1,977
+Repo-size : 2.1 G
+Repo-metalink: https://mirrors.fedoraproject.org/metalink?repo=epel-8&arch=x86_64&infra=stock&content=centos
+ Updated : Sun 13 Oct 2019 04:28:24 AM BST
+Repo-baseurl : rsync://repos.del.extreme-ix.org/epel/8/Everything/x86_64/ (34 more)
+Repo-expire : 172,800 second(s) (last: Sun 13 Oct 2019 04:28:24 AM BST)
+Repo-filename: /etc/yum.repos.d/epel.repo
+Total packages: 1,977
+[root@linuxtechi ~]#
+```
+
+The above output confirms that we have successfully enabled the EPEL repo. Let's perform some basic operations on it.
+
+### List all available packages in the EPEL repository
+
+If you want to list all the packages in the EPEL repository, run the following dnf command:
+
+```
+[root@linuxtechi ~]# dnf repository-packages epel list
+……………
+Last metadata expiration check: 0:38:18 ago on Sun 13 Oct 2019 04:28:24 AM BST.
+Installed Packages
+epel-release.noarch 8-6.el8 @epel
+Available Packages
+BackupPC.x86_64 4.3.1-2.el8 epel
+BackupPC-XS.x86_64 0.59-3.el8 epel
+CGSI-gSOAP.x86_64 1.3.11-7.el8 epel
+CGSI-gSOAP-devel.x86_64 1.3.11-7.el8 epel
+Field3D.x86_64 1.7.2-16.el8 epel
+Field3D-devel.x86_64 1.7.2-16.el8 epel
+GraphicsMagick.x86_64 1.3.33-1.el8 epel
+GraphicsMagick-c++.x86_64 1.3.33-1.el8 epel
+…………………………
+zabbix40-web-mysql.noarch 4.0.12-1.el8 epel
+zabbix40-web-pgsql.noarch 4.0.12-1.el8 epel
+zerofree.x86_64 1.1.1-3.el8 epel
+zimg.x86_64 2.8-4.el8 epel
+zimg-devel.x86_64 2.8-4.el8 epel
+zstd.x86_64 1.4.2-1.el8 epel
+zvbi.x86_64 0.2.35-9.el8 epel
+zvbi-devel.x86_64 0.2.35-9.el8 epel
+zvbi-fonts.noarch 0.2.35-9.el8 epel
+[root@linuxtechi ~]#
+```
+
+### Search for a package in the EPEL repository
+
+Suppose we want to search for the Zabbix package in the EPEL repository; execute the following dnf command:
+
+```
+[root@linuxtechi ~]# dnf repository-packages epel list | grep -i zabbix
+```
+
+The output of the above command will be something like below,
+
+![epel-repo-search-package-centos8][3]
+
+### Install a package from the EPEL repository
+
+Suppose we want to install the htop package from the EPEL repo; issue the following dnf command:
+
+Syntax:
+
+# dnf --enablerepo="epel" install <pkg_name>
+
+```
+[root@linuxtechi ~]# dnf --enablerepo="epel" install htop -y
+```
+
+**Note:** If we don't specify "**--enablerepo=epel**" in the above command, dnf will look for the htop package in all available package repositories.
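+
+As an aside that goes slightly beyond the original article: if you later want to disable or re-enable the EPEL repository persistently rather than per command, the config-manager plugin (provided by the dnf-plugins-core package, assumed here to be installed) can toggle it:
+
+```
+[root@linuxtechi ~]# dnf config-manager --set-disabled epel
+[root@linuxtechi ~]# dnf config-manager --set-enabled epel
+```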
+
+That's all for this article. I hope the above steps help you enable and configure the EPEL repository on CentOS 8 and RHEL 8 servers. Please don't hesitate to share your comments and feedback in the comments section below.
+
+--------------------------------------------------------------------------------
+
+via: https://www.linuxtechi.com/enable-epel-repo-centos8-rhel8-server/
+
+作者:[Pradeep Kumar][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]: https://www.linuxtechi.com/author/pradeep/
+[b]: https://github.com/lujun9972
+[1]: https://www.linuxtechi.com/centos-8-installation-guide-screenshots/
+[2]: https://www.linuxtechi.com/install-configure-kvm-on-rhel-8/
+[3]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[4]: https://www.linuxtechi.com/wp-content/uploads/2019/10/EPEL-Repo-CentOS8-RHEL8.jpg
diff --git a/sources/tech/20191013 Object-Oriented Programming and Essential State.md b/sources/tech/20191013 Object-Oriented Programming and Essential State.md
new file mode 100644
index 0000000000..b51c726cdd
--- /dev/null
+++ b/sources/tech/20191013 Object-Oriented Programming and Essential State.md
@@ -0,0 +1,98 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Object-Oriented Programming and Essential State)
+[#]: via: (https://theartofmachinery.com/2019/10/13/oop_and_essential_state.html)
+[#]: author: (Simon Arneaud https://theartofmachinery.com)
+
+Object-Oriented Programming and Essential State
+======
+
+Back in 2015, Brian Will wrote a provocative blog post: [Object-Oriented Programming: A Disaster Story][1]. He followed it up with a video called [Object-Oriented Programming is Bad][2], which is much more detailed. I recommend taking the time to watch the video, but here's my one-paragraph summary:
+
+The Platonic ideal of OOP is a sea of decoupled objects that send stateless messages to one another. No one really makes software like that, and Brian points out that it doesn't even make sense: objects need to know which other objects to send messages to, and that means they need to hold references to one another. Most of the video is about the pain that happens trying to couple objects for control flow, while pretending that they're decoupled by design.
+
+Overall his ideas resonate with my own experiences of OOP: objects can be okay, but I've just never been satisfied with object-_orientation_ for modelling a program's control flow, and trying to make code "properly" object-oriented always seems to create layers of unnecessary complexity.
+
+There's one thing I don't think he explains fully. He says outright that "encapsulation does not work", but follows it with the footnote "at fine-grained levels of code", and goes on to acknowledge that objects can sometimes work, and that encapsulation can be okay at the level of, say, a library or file. But he doesn't explain exactly why it sometimes works and sometimes doesn't, and how/where to draw the line. Some people might say that makes his "OOP is bad" claim flawed, but I think his point stands, and that the line can be drawn between essential state and accidental state.
+
+If you haven't heard this usage of the terms "essential" and "accidental" before, you should check out Fred Brooks' classic [No Silver Bullet][3] essay.
+
+It’s pretty hard to make entire software systems meet this ideal, but scaling up, I think it looks something like this:
+
+ * No global, mutable state
+ * Accidental state encapsulated (in objects or modules or whatever)
+ * Stateless accidental complexity enclosed in free functions, decoupled from data
+ * Inputs and outputs made explicit using tricks like dependency injection
+ * Components fully owned and controlled from easily identifiable locations
+
+Some of this goes against instincts I had a long time ago. For example, if you have a function that makes a database query, the interface looks simpler and nicer if the database connection handling is hidden inside the function, and the only parameters are the query parameters. However, when you build a software system out of functions like this, it actually becomes more complex to coordinate the database usage. Not only are the components doing things their own ways, they’re trying to hide what they’re doing as “implementation details”. The fact that a database query requires a database connection never was an implementation detail. If something can’t be hidden, it’s saner to make it explicit.
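+
+Here is a minimal sketch of that last point (hypothetical names, with Python’s built-in `sqlite3` standing in for a real database layer):
+
+```
+import sqlite3
+
+# Tempting interface: hide the connection inside the function...
+def get_user_hidden(user_id):
+    conn = sqlite3.connect('app.db')   # ...but every call now silently
+    try:                               # opens and manages its own connection.
+        return conn.execute('SELECT name FROM users WHERE id = ?',
+                            (user_id,)).fetchone()
+    finally:
+        conn.close()
+
+# Explicit input: the dependency is visible, so callers can share one
+# connection, batch queries in a transaction, or inject a fake in tests.
+def get_user(conn, user_id):
+    return conn.execute('SELECT name FROM users WHERE id = ?',
+                        (user_id,)).fetchone()
+```
+
+The explicit version is noisier at each call site, but the coordination cost now shows up where it actually exists.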
+
+I’m wary of feeding the OOP and functional programming false dichotomy, but I think it’s interesting that FP goes to the opposite extreme of OOP: OOP tries to encapsulate things, including the essential complexity that can’t be encapsulated, while pure FP tends to make things explicit, including some accidental complexity. Most of the time, that’s the safer side to go wrong, but sometimes (such as when [building self-referential data structures in a purely functional language][5]) you can get designs that are more for the sake of FP than for the sake of simplicity (which is why [Haskell includes some escape hatches][6]). I’ve written before about [the middle ground of so-called “weak purity”][7].
+
+Brian found that encapsulation works at a larger scale for a couple of reasons. One is that larger components are simply more likely to contain accidental state, just because of size. Another is that what’s “accidental” is relative to what problem you’re solving. From the chat app user’s point of view, “accidental complexity” is anything unrelated to messages and channels and users, etc. As you break the problems into subproblems, however, more things become essential. For example, the mapping between channel names and channel IDs is arguably accidental complexity when solving the “build a chat app” problem, but it’s essential complexity when solving the “implement the `getChannelIdByName()` function” subproblem. So, encapsulation tends to be less useful for subcomponents than supercomponents.
+
+By the way, at the end of his video, Brian Will wonders if any language supports anonymous functions that _can’t_ access the scope they’re in. [D][8] does. 
Anonymous lambdas in D are normally closures, but anonymous stateless functions can also be declared if that’s what you want: + +``` +import std.stdio; + +void main() +{ + int x = 41; + + // Value from immediately executed lambda + auto v1 = () { + return x + 1; + }(); + writeln(v1); + + // Same thing + auto v2 = delegate() { + return x + 1; + }(); + writeln(v2); + + // Plain functions aren't closures + auto v3 = function() { + // Can't access x + // Can't access any mutable global state either if also marked pure + return 42; + }(); + writeln(v3); +} +``` + +-------------------------------------------------------------------------------- + +via: https://theartofmachinery.com/2019/10/13/oop_and_essential_state.html + +作者:[Simon Arneaud][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://theartofmachinery.com +[b]: https://github.com/lujun9972 +[1]: https://medium.com/@brianwill/object-oriented-programming-a-personal-disaster-1b044c2383ab +[2]: https://www.youtube.com/watch?v=QM1iUe6IofM +[3]: http://www.cs.nott.ac.uk/~pszcah/G51ISS/Documents/NoSilverBullet.html +[4]: https://theartofmachinery.com/2017/06/25/compression_complexity_and_software.html +[5]: https://wiki.haskell.org/Tying_the_Knot +[6]: https://en.wikibooks.org/wiki/Haskell/Mutable_objects#The_ST_monad +[7]: https://theartofmachinery.com/2016/03/28/dirtying_pure_functions_can_be_useful.html +[8]: https://dlang.org diff --git a/sources/tech/20191013 Sugarizer- The Taste of Sugar on Any Device.md b/sources/tech/20191013 Sugarizer- The Taste of Sugar on Any Device.md new file mode 100644 index 0000000000..749ff78037 --- /dev/null +++ b/sources/tech/20191013 Sugarizer- The Taste of Sugar on Any Device.md @@ -0,0 +1,59 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Sugarizer: The Taste of Sugar on Any Device) +[#]: via: (https://opensourceforu.com/2019/10/sugarizer-the-taste-of-sugar-on-any-device/) +[#]: author: (Dr Anil Seth https://opensourceforu.com/author/anil-seth/) + +Sugarizer: The Taste of Sugar on Any Device +====== + +[![][1]][2] + +_Sugar is a learning platform that was initially developed for the OLPC project. The Sugar Learning Environment can be downloaded and installed on any Linux-compatible hardware. Sugarizer mimics the UI of Sugar using HTML5 and CSS3._ + +The One Laptop Per Child (OLPC) project was launched less than 12 years ago. The goal of bringing down the cost of a laptop to US$ 100 was never really achieved. The project also did not turn out to be as much of a success as anticipated. However, the goal was not really about the laptop, but to educate as many children as possible. +The interactive learning environment of the OLPC project was equally critical. This became a separate project under Sugar Labs, [_https://wiki.sugarlabs.org/_][3], and continues to be active. The Sugar Learning Environment is available as a Fedora spin, and can be downloaded and installed on any Linux-compatible hardware. It would be a good option to install it on an old system, which could then be donated. The US$ 90 Pinebook, [_https://www.pine64.org/,_][4] with Sugar installed on it would also make a memorable and useful gift. +The Sugar Environment can happily coexist with other desktop environments on Linux. So, the computer does not have to be dedicated to Sugar. 
On Fedora, you may add it to your existing desktop as follows:
+
+```
+$ sudo dnf group install 'Sugar Desktop Environment'
+```
+
+I have not tried it on Ubuntu. However, the following command should work:
+
+```
+$ sudo apt install sucrose
+```
+
+However, Sugar remains, by and large, an unknown entity. This is especially disappointing considering that the need to _learn to learn_ has never been greater.
+
+Hence, the release of Sugarizer is a pleasant surprise. It allows you to use the Sugar environment on any device, with the help of Web technologies. Sugarizer mimics the UI of Sugar using HTML5 and CSS3. It runs activities that have been written in HTML5/JavaScript. The current release includes a number of Sugar activities written initially in Python, which have been ported to HTML5/JavaScript.
+
+You may try the new release at _sugarizer.org_. Better still, install it from Google Play on your Android tablet or from the App Store on an Apple device. It works well even on a two-year-old, low-end tablet. Hence, you may easily put your old tablet to good use by gifting it to a child after installing Sugarizer on it. In this way, you could even rationalise your desire to buy the replacement tablet you have been eyeing.
+
+**Does it work?**
+
+My children are too old and grandchildren too young. Reason tells me that it should work. Experience also tells me that it will most likely NOT improve school grades. I did not like school. I was bored most of the time. If I were studying in today’s schools, I would have had ulcers or a nervous breakdown!
+
+When I think of schools, I recall the frustration of a child long ago (just 20 years) who got an answer wrong. The book and the teacher said that a mouse has two buttons. The mouse he used at home had three!
+
+So, can you risk leaving the education of children you care about to the schools? Think about the skills you may be using today. Could these have been taught at schools a mere five years ago?
+
+I never took JavaScript seriously and never made an effort to learn it. Today, I see Sugarizer and Snap! (a clone of Scratch in JavaScript) and am acutely aware of my foolishness. However, having learnt programming outside the classroom, I am confident that I can learn to program in JavaScript, should the need arise.
+
+The intention at the start was to write about the activities in Sugarizer and, maybe, explore the source code. My favourite activities include TamTam, Turtle Blocks, Maze, etc. From the food chain activity, I discovered that some animals that I had believed to be carnivores were not. I have also seen children get excited by the Speak activity.
+
+However, once I started writing after the heading ‘Does it work?’, my mind took a radical turn. Now, I am convinced that Sugarizer will work only if you try it out. 
+
+--------------------------------------------------------------------------------
+
+via: https://opensourceforu.com/2019/10/sugarizer-the-taste-of-sugar-on-any-device/
+
+作者:[Dr Anil Seth][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]: https://opensourceforu.com/author/anil-seth/
+[b]: https://github.com/lujun9972
+[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2017/05/Technology-Development-in-Computers-Innovation-eLearning-1.jpg?resize=696%2C696&ssl=1 (Technology Development in Computers (Innovation), eLearning)
+[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2017/05/Technology-Development-in-Computers-Innovation-eLearning-1.jpg?fit=900%2C900&ssl=1
+[3]: https://wiki.sugarlabs.org/
+[4]: https://www.pine64.org/
diff --git a/sources/tech/20191014 How to make a Halloween lantern with Inkscape.md b/sources/tech/20191014 How to make a Halloween lantern with Inkscape.md
new file mode 100644
index 0000000000..0f15fae6e6
--- /dev/null
+++ b/sources/tech/20191014 How to make a Halloween lantern with Inkscape.md
@@ -0,0 +1,188 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to make a Halloween lantern with Inkscape)
+[#]: via: (https://opensource.com/article/19/10/how-make-halloween-lantern-inkscape)
+[#]: author: (Jess Weichler https://opensource.com/users/cyanide-cupcake)
+
+How to make a Halloween lantern with Inkscape
+======
+Use open source tools to make a spooky and fun decoration for your
+favorite Halloween haunt.
+![Halloween - backlit bat flying][1]
+
+The spooky season is almost here! This year, decorate your haunt with a unique Halloween lantern made with open source!
+
+Typically, a portion of a lantern’s structure is opaque to block the light from within. What makes a lantern a lantern are the parts that are missing: windows cut from the structure so that light can escape. While it’s impractical for lighting, a lantern with windows in spooky shapes and lurking silhouettes can be atmospheric and a lot of fun to create.
+
+This article demonstrates how to create your own lantern using [Inkscape][2]. If you don’t have Inkscape, you can install it from your software repository on Linux or download it from the [Inkscape website][3] on MacOS and Windows.
+
+### Supplies
+
+ * Template ([A4][4] or [Letter][5] size)
+ * Cardstock (black is traditional)
+ * Tracing paper (optional)
+ * Craft knife, ruler, and cutting mat (a craft cutting machine/laser cutter can be used instead)
+ * Craft glue
+ * LED tea-light "candle"
+
+_Safety note:_ Only use battery-operated candles for this project.
+
+### Understanding the template
+
+To begin, download the correct template for your region (A4 or Letter) from the links above and open it in Inkscape.
+
+**![Lantern template screen][6]**
+
+The gray-and-white checkerboard background is see-through (in technical terms, it’s an _alpha channel_).
+
+The black base forms the lantern. Right now, there are no windows for light to shine through; the lantern is a solid black base. You will use the **Union** and **Difference** options in Inkscape to design the windows digitally.
+
+The dotted blue lines represent fold scorelines. The solid orange lines represent guides. Windows for light should not be placed outside the orange boxes. 
+
+To the left of the template are a few pre-made objects you can use in your design.
+
+### To create a window or shape
+
+ 1. Create an object that looks like the window style you want. Objects can be created using any of the shape tools in Inkscape’s left toolbar. Alternately, you can download Creative Commons- or Public Domain-licensed clipart and import the PNG file into your project.
+ 2. When you are happy with the shape of the object, turn it into a **Path** (rather than a **Shape**, which Inkscape sees as two different kinds of objects) by selecting **Path > Object to Path** in the top menu.

+![Object to path menu][7]
+
+ 3. Place the object on top of the base shape.
+ 4. Select both the object and the black base by clicking one, pressing and holding the Shift key, then selecting the other.
+ 5. Select **Path > Difference** from the top menu to remove the shape of the object from the base. This creates what will become a window in your lantern.

+![Object > Difference menu][8]
+
+### To add an object to a window
+
+After making a window, you can add objects to it to create a scene.
+
+**Tips:**
+
+ * All objects, including text, must be connected to the base of the lantern. If not, they will fall out after cutting and leave a blank space.
+ * Avoid small, intricate details. These are difficult to cut, even when using a machine like a laser cutter or a craft plotter.

+ 1. Create or import an object.
+ 2. Place the object inside the window so that it is touching at least two sides of the base.
+ 3. With the object selected, choose **Path > Object to Path** from the top menu.

+![Object to path menu][9]
+
+ 4. Select the object and the black base by clicking on each one while holding the Shift key.
+ 5. Select **Path > Union** to join the object and the base.

+### Add text
+
+Text can either be cut out from the base to create a window (as I did with the stars) or added to a window (which blocks the light from within the lantern). If you’re creating a window, only follow steps 1 and 2 below, then use **Difference** to remove the text from the base layer.
+
+ 1. Select the Text tool from the left sidebar to create text. Thick, bold fonts work best.
+
+![Text tool][10]
+
+ 2. Select your text, then choose **Path > Object to Path** from the top menu. This converts the text object to a path. Note that this step means you can no longer edit the text, so perform this step _only after_ you’re sure you have the word or words you want.
+
+ 3. After you have converted the text, you can press **F2** on your keyboard to activate the **Node Editor** tool, which clearly shows the nodes of the text when it is selected.

+![Text selected with Node editor][11]
+
+ 4. Ungroup the text.
+ 5. Adjust each letter so that it slightly overlaps its neighboring letter or the base.

+![Overlapping the text][12]
+
+ 6. To connect all of the letters to one another and to the base, re-select all the text and the base, then select **Path > Union**.
+
+![Connecting letters and base with Path > Union][13]

+### Prepare for printing
+
+The following instructions are for hand-cutting your lantern. If you’re using a laser cutter or craft plotter, follow the techniques required by your hardware to prepare your files.
+
+ 1. In the **Layer** panel, click the **Eye** icon beside the **Safety** layer to hide the safety lines. If you don’t see the Layer panel, reveal it by selecting **Layer > Layers** from the top menu.
+ 2. Select the black base. In the **Fill and Stroke** panel, set the fill to **X** (meaning _no fill_) and the **Stroke** to solid black (that’s #000000ff to fans of hexes).
+
+![Setting fill and stroke][14]
+
+ 3. Print your pattern with **File > Print**.
+
+ 4. Using a craft knife and ruler, carefully cut around each black line. Lightly score the dotted blue lines, then fold.
+
+![Cutting out the lantern][15]
+
+ 5. To finish off the windows, cut tracing paper to the size of each window and glue it to the inside of the lantern.
+
+![Adding tracing paper][16]
+
+ 6. Glue the lantern together at the tabs.
+
+ 7. Turn on a battery-powered LED candle and place it inside your lantern.

+![Completed lantern][17]
+
+Now your lantern is complete and ready to light up your haunt. Happy Halloween!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/10/how-make-halloween-lantern-inkscape
+
+作者:[Jess Weichler][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/cyanide-cupcake
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/halloween_bag_bat_diy.jpg?itok=24M0lX25 (Halloween - backlit bat flying)
+[2]: https://opensource.com/article/18/1/inkscape-absolute-beginners
+[3]: http://inkscape.org
+[4]: https://www.dropbox.com/s/75qzjilg5ak2oj1/papercraft_lantern_A4_template.svg?dl=0
+[5]: https://www.dropbox.com/s/8fswdge49jwx91n/papercraft_lantern_letter_template%20.svg?dl=0
+[6]: https://opensource.com/sites/default/files/uploads/lanterntemplate_screen.png (Lantern template screen)
+[7]: https://opensource.com/sites/default/files/uploads/lantern1.png (Object to path menu)
+[8]: https://opensource.com/sites/default/files/uploads/lantern2.png (Object > Difference menu)
+[9]: https://opensource.com/sites/default/files/uploads/lantern3.png (Object to path menu)
+[10]: https://opensource.com/sites/default/files/uploads/lantern4.png (Text tool)
+[11]: https://opensource.com/sites/default/files/uploads/lantern5.png (Text selected with Node editor)
+[12]: https://opensource.com/sites/default/files/uploads/lantern6.png (Overlapping the text)
+[13]: https://opensource.com/sites/default/files/uploads/lantern7.png (Connecting letters and base with Path > Union)
+[14]: https://opensource.com/sites/default/files/uploads/lantern8.png (Setting fill and stroke)
+[15]: https://opensource.com/sites/default/files/uploads/lantern9.jpg (Cutting out the lantern)
+[16]: https://opensource.com/sites/default/files/uploads/lantern10.jpg (Adding tracing paper)
+[17]: https://opensource.com/sites/default/files/uploads/lantern11.jpg (Completed lantern)
diff --git a/sources/tech/20191014 My Linux story- I grew up on PC Magazine not candy.md b/sources/tech/20191014 My Linux story- I grew up on PC Magazine not candy.md
new file mode 100644
index 0000000000..d3f967357f
--- /dev/null
+++ b/sources/tech/20191014 My Linux story- I grew up on PC Magazine not candy.md
@@ -0,0 +1,48 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (My Linux story: I grew up on PC Magazine not candy)
+[#]: via: (https://opensource.com/article/19/10/linux-journey-newb-ninja)
+[#]: author: (Michael Zamot https://opensource.com/users/mzamot)
+
+My Linux story: I grew up on PC Magazine not candy
+======
+This Linux story begins with a kid reading about Linux in issues of PC
+Magazine from his childhood home in Costa Rica. Today, he's a passionate
+member of the global Linux community.
+![The back of a kid head][1]
+
+In 1998, the movie _Titanic_ was released, mobile phones were just a luxury, and pagers were still in use. This was also the year I got my first computer. I can remember the details as if it were yesterday: Pentium 133MHz and just 16MB of memory. Back in that time (while running nothing less than Windows 95), this was a good machine. I can still hear in my mind the old spinning hard drive noise when I powered that computer on, and see the Windows 95 flag. It never crossed my mind, though (especially as an 8-year-old kid), that I would dedicate every minute of my life to Linux and open source.
+
+Being just a kid, I always asked my mom to buy me every issue of PC Magazine instead of candy. I never skipped a single issue, and all of those dusty old magazines are still there in Costa Rica. It was in these magazines that I discovered the essential technology that changed my life. An issue in the year 2000 talked extensively about Linux and the advantages of free and open-source software. That issue also included a review of one of the most popular Linux distributions back then: Corel Linux. Unfortunately, the disc was not included. Without internet at home, I was out of luck, but that issue still lit a spark within me.
+
+In 2003, I asked my mom to take me to a Richard Stallman talk. I couldn’t believe he was in the country. I was the only kid in that room, and I was laser-focused on everything he was saying, though I didn’t understand anything about patents, licenses, or the jokes about him with an old hard drive over his head.
+
+Despite my attempts, I couldn’t make Linux work on my computer. One rainy afternoon in the year 2003, with the heavy smell of recently brewed coffee, my best friend and I were able to get a local magazine with a two-disk bundle: Mandrake Linux 7.1 (if my memory doesn’t fail me) on one and StarOffice on the other. My friend poured more coffee into our mugs while I inserted the Mandrake disk into the computer with my shaking, excited hands. Linux was finally running—the same Linux I had been obsessed with since I read about it 3 years earlier.
+
+We were lucky enough to get broadband internet in 2006 (at the lightning speed of 128/64Kbps), so I was able to use an old Pentium II computer under my bed and run it 24x7 with Debian, Apache, and my own mail server (my personal server, I told myself). This old machine was my playground to experiment on and put into practice all of the knowledge and reading I had been doing (and also to make the electricity bill more expensive).
+
+As soon as I discovered there were open source communities in the country, I started attending their meetings. Eventually, I was helping in their events, and not long after I was organizing and giving talks. We used to host two annual events for many years: Festival Latinoamericano de Software Libre (Latin American Free Software Installation Fest) and Software Freedom Day.
+
+Thanks to what I learned from my reading, but more importantly from the people in these local communities who guided and mentored me, I was able to land my first Linux job in 2011, even without college. 
I kept growing from there, working for many companies and learning more about open source and Linux at each one. Eventually, I felt that I had an obligation (or a social debt) to give back to the community so that other people like the younger me could also learn. Not long after, I started teaching classes and meeting wonderful and passionate people, many of whom are now as devoted to Linux and open source as I am. I can definitely say: Mission accomplished!
+
+Eventually, what I learned about open source, Linux, OpenStack, Docker, and every other technology I played with sent me overseas, allowing me to work (doesn’t feel like it) for the most amazing company I’ve ever worked for, doing what I love. Because of open source and Linux, I became a part of something bigger than me. I was a member of a community, and I experienced what I consider the most significant impact on my life: Meeting and learning from so many masterminds and amazing people that today I can call friends. Without them and these communities, I wouldn’t be the person I am today.
+
+How could I know when I was 10 years old and reading a magazine that Linux and open source would connect me to the greatest people, and change my life forever?
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/10/linux-journey-newb-ninja
+
+作者:[Michael Zamot][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mzamot
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/idea_innovation_kid_education.png?itok=3lRp6gFa (The back of a kid head)
diff --git a/sources/tech/20191014 Use sshuttle to build a poor man-s VPN.md b/sources/tech/20191014 Use sshuttle to build a poor man-s VPN.md
new file mode 100644
index 0000000000..8e49d71a71
--- /dev/null
+++ b/sources/tech/20191014 Use sshuttle to build a poor man-s VPN.md
@@ -0,0 +1,81 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Use sshuttle to build a poor man’s VPN)
+[#]: via: (https://fedoramagazine.org/use-sshuttle-to-build-a-poor-mans-vpn/)
+[#]: author: (Paul W. Frields https://fedoramagazine.org/author/pfrields/)
+
+Use sshuttle to build a poor man’s VPN
+======
+
+![][1]
+
+Nowadays, business networks often use a VPN (virtual private network) for [secure communications with workers][2]. However, the protocols used can sometimes make performance slow. If you can reach a host on the remote network with SSH, you could set up port forwarding. But this can be painful, especially if you need to work with many hosts on that network. Enter **sshuttle** — which lets you set up a quick and dirty VPN with just SSH access. Read on for more information on how to use it.
+
+The sshuttle application was designed for exactly the kind of scenario described above. The only requirement on the remote side is that the host must have Python available. This is because sshuttle constructs and runs some Python source code to help transmit data.
+
+### Installing sshuttle
+
+The sshuttle application is packaged in the official repositories, so it’s easy to install. 
Open a terminal and use the following command [with sudo][3]:
+
+```
+$ sudo dnf install sshuttle
+```
+
+Once installed, you may find the manual page interesting:
+
+```
+$ man sshuttle
+```
+
+### Setting up the VPN
+
+The simplest case is just to forward all traffic to the remote network. This isn’t necessarily a crazy idea, especially if you’re not on a trusted local network like your own home. Use the _-r_ switch with the SSH username and the remote host name:
+
+```
+$ sshuttle -r username@remotehost 0.0.0.0/0
+```
+
+However, you may want to restrict the VPN to specific subnets rather than all network traffic. (A complete discussion of subnets is outside the scope of this article, but you can read more [here on Wikipedia][4].) Let’s say your office internally uses the reserved Class A subnet 10.0.0.0 and the reserved Class B subnet 172.16.0.0. The command above becomes:
+
+```
+$ sshuttle -r username@remotehost 10.0.0.0/8 172.16.0.0/16
+```
+
+This works great for working with hosts on the remote network by IP address. But what if your office is a large network with lots of hosts? Names are probably much more convenient — maybe even required. Never fear, sshuttle can also forward DNS queries to the office with the _\--dns_ switch:
+
+```
+$ sshuttle --dns -r username@remotehost 10.0.0.0/8 172.16.0.0/16
+```
+
+To run sshuttle like a daemon, add the _-D_ switch. This will also send log information to the systemd journal via its syslog compatibility.
+
+Depending on the capabilities of your system and the remote system, you can use sshuttle for an IPv6-based VPN. You can also set up configuration files and integrate it with your system startup if desired. If you want to read even more about sshuttle and how it works, [check out the official documentation][5]. For a look at the code, [head over to the GitHub page][6].
+
+* * *
+
+_Photo by _[_Kurt Cotoaga_][7]_ on _[_Unsplash_][8]_._
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/use-sshuttle-to-build-a-poor-mans-vpn/
+
+作者:[Paul W. Frields][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/pfrields/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2019/10/sshuttle-816x345.jpg
+[2]: https://en.wikipedia.org/wiki/Virtual_private_network
+[3]: https://fedoramagazine.org/howto-use-sudo/
+[4]: https://en.wikipedia.org/wiki/Subnetwork
+[5]: https://sshuttle.readthedocs.io/en/stable/index.html
+[6]: https://github.com/sshuttle/sshuttle
+[7]: https://unsplash.com/@kydroon?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
+[8]: https://unsplash.com/s/photos/shuttle?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
diff --git a/sources/tech/20191015 10 Ways to Customize Your Linux Desktop With GNOME Tweaks Tool.md b/sources/tech/20191015 10 Ways to Customize Your Linux Desktop With GNOME Tweaks Tool.md
new file mode 100644
index 0000000000..2cf9c93596
--- /dev/null
+++ b/sources/tech/20191015 10 Ways to Customize Your Linux Desktop With GNOME Tweaks Tool.md
@@ -0,0 +1,167 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (10 Ways to Customize Your Linux Desktop With GNOME Tweaks Tool)
+[#]: via: (https://itsfoss.com/gnome-tweak-tool/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+
+10 Ways to Customize Your Linux Desktop With GNOME Tweaks Tool
+======
+
+![GNOME Tweak Tool Icon][1]
+
+There are several ways you can tweak Ubuntu to customize its looks and behavior. The easiest way I find is by using the [GNOME Tweak tool][2]. It is also known as GNOME Tweaks or simply Tweaks.
+
+I have mentioned it numerous times in my tutorials in the past. Here, I list all the major tweaks you can perform with this tool.
+
+I have used Ubuntu here but the steps should be applicable to any Linux distribution using the GNOME desktop environment.
+
+### Install GNOME Tweak tool in Ubuntu 18.04 and other versions
+
+GNOME Tweak tool is available in the [Universe repository in Ubuntu][3] so make sure that you have it enabled in your Software & Updates tool:
+
+![Enable Universe Repository in Ubuntu][4]
+
+After that, you can install the GNOME Tweak tool from the software center. Just open the Software Center, search for GNOME Tweaks and install it from there:
+
+![Install GNOME Tweaks Tool from Software Center][5]
+
+Alternatively, you may also use the command line to install software with the [apt command][6]:
+
+```
+sudo apt install gnome-tweaks
+```
+
+### Customizing GNOME desktop with Tweaks tool
+
+![][7]
+
+GNOME Tweak tool enables you to make a number of settings changes. Some of these changes, like wallpaper changes, startup applications etc, are also available in the official System Settings tool. I am going to focus on tweaks that are not available in the Settings by default.
+
+#### 1\. Change themes
+
+You can [install new themes in Ubuntu][8] in various ways. But if you want to change to the newly installed theme, you’ll have to install the GNOME Tweaks tool.
+
+You can find the theme and icon settings in the Appearance section. You can browse through the available themes and icons and set the ones you like. The changes take effect immediately.
+
+![Change Themes With GNOME Tweaks][9]
+
+#### 2\. Disable animation to speed up your desktop
+
+There are subtle animations for opening, closing, and maximizing application windows. You can disable these animations to speed up your system slightly, as it will use slightly fewer resources.
+
+![Disable Animations For Slightly Faster Desktop Experience][10]
+
+#### 3\. Control desktop icons
+
+At least in Ubuntu, you’ll see the Home and Trash icons on the desktop. If you don’t like them, you can choose to disable them. You can also choose which icons will be displayed on the desktop.
+
+![Control Desktop Icons in Ubuntu][11]
+
+#### 4\. Manage GNOME extensions
+
+I hope you are aware of [GNOME Extensions][12]. These are small ‘plugins’ for your desktop that extend the functionality of the GNOME desktop. There are [plenty of GNOME extensions][13] that you can use to get CPU consumption in the top panel, get clipboard history and so on.
+
+I have written in detail about [installing and using GNOME extensions][14]. Here, I assume that you are already using them and if that’s the case, you can manage them from within GNOME Tweaks.
+
+![Manage GNOME Extensions][15]
+
+#### 5\. Change fonts and scaling factor
+
+You can [install new fonts in Ubuntu][16] and apply the system-wide font change using the Tweaks tool. You can also change the scaling factor if you think the icons and text are way too small on your desktop.
+
+![Change Fonts and Scaling Factor][17]
+
+#### 6\. Control touchpad behavior, like disabling the touchpad while typing and making right click on the touchpad work
+
+GNOME Tweaks also allows you to disable the touchpad while typing. This is useful if you type fast on a laptop. The bottom of your palm may touch the touchpad and move the cursor away to an undesired location on the screen.
+
+Automatically disabling the touchpad while typing fixes this problem.
+
+![Disable Touchpad While Typing][18]
+
+You’ll also notice that [when you press the bottom right corner of your touchpad for a right click, nothing happens][19]. There is nothing wrong with your touchpad. It’s a system setting that disables right clicking this way for any touchpad that doesn’t have a real right click button (like the old Thinkpad laptops). A two-finger click gives you the right click.
+
+You can get the corner right click back by choosing Area under Mouse Click Simulation instead of Fingers.
+
+![Fix Right Click Issue][20]
+
+You may have to [restart Ubuntu][21] for the changes to take effect. If you are an Emacs lover, you can also force keybindings from Emacs.
+
+#### 7\. Change power settings
+
+There is only one power setting here. It allows you to put your laptop in suspend mode when the lid is closed.
+
+![Power Settings in GNOME Tweaks Tool][22]
+
+#### 8\. Decide what’s displayed in the top panel
+
+The top panel in your desktop shows a few important things. You have the calendar, network icon, system settings and the Activities option.
+
+You can also [display battery percentage][23], add the date along with the day and time, and show week numbers. You can also enable hot corners so that if you take your mouse to the top left corner of the screen, you’ll get the activities view with all the running applications.
+
+![Top Panel Settings in GNOME Tweaks Tool][24]
+
+If you have the mouse focus on an application window, you’ll notice that its menu is displayed in the top panel. If you don’t like it, you may toggle it off and then the application menu will be available on the application itself.
+
+#### 9\. Configure application window
+
+You can decide if the maximize and minimize options (the buttons in the top right corner) will be shown in the application window. You may also change their positioning between left and right.
+
+![Application Window Configuration][25]
+
+There are some other configuration options as well. I don’t use them but feel free to explore them on your own.
+
+#### 10\. Configure workspaces
+
+GNOME Tweaks tool also allows you to configure a couple of things around workspaces.
+
+![Configure Workspaces in Ubuntu][26]
+
+**In the end…**
+
+GNOME Tweaks tool is a must-have utility for any GNOME user. It helps you configure the looks and functionality of the desktop. I find it surprising that this tool is not even in the Main repository of Ubuntu. In my opinion, it should be installed by default. Till then, you’ll have to install the GNOME Tweak tool in Ubuntu manually.
+
+If you find some hidden gem in GNOME Tweaks that hasn’t been discussed here, why not share it with the rest of us?
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/gnome-tweak-tool/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/gnome-tweak-tool-icon.png?ssl=1
+[2]: https://wiki.gnome.org/action/show/Apps/Tweaks?action=show&redirect=Apps%2FGnomeTweakTool
+[3]: https://itsfoss.com/ubuntu-repositories/
+[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/enable-repositories-ubuntu.png?ssl=1
+[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/install-gnome-tweaks-tool.jpg?ssl=1
+[6]: https://itsfoss.com/apt-command-guide/
+[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/10/customize-gnome-with-tweak-tool.jpg?ssl=1
+[8]: https://itsfoss.com/install-themes-ubuntu/
+[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/change-theme-ubuntu-gnome.jpg?ssl=1
+[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/disable-animation-ubuntu-gnome.jpg?ssl=1
+[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/10/desktop-icons-ubuntu.jpg?ssl=1
+[12]: https://extensions.gnome.org/
+[13]: https://itsfoss.com/best-gnome-extensions/
+[14]: https://itsfoss.com/gnome-shell-extensions/
+[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/manage-gnome-extension-tweaks-tool.jpg?ssl=1
+[16]: https://itsfoss.com/install-fonts-ubuntu/
+[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/change-fonts-ubuntu-gnome.jpg?ssl=1
+[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/10/disable-touchpad-while-typing-ubuntu.jpg?ssl=1
+[19]: https://itsfoss.com/fix-right-click-touchpad-ubuntu/
+[20]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/enable-right-click-ubuntu.jpg?ssl=1
+[21]: https://itsfoss.com/schedule-shutdown-ubuntu/
+[22]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/power-settings-gnome-tweaks-tool.jpg?ssl=1
+[23]: https://itsfoss.com/display-battery-ubuntu/
+[24]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/10/top-panel-settings-gnome-tweaks-tool.jpg?ssl=1
+[25]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/10/windows-configuration-ubuntu-gnome-tweaks.jpg?ssl=1
+[26]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/configure-workspaces-ubuntu.jpg?ssl=1
diff --git a/sources/tech/20191015 Bash Script to Delete Files-Folders Older Than -X- Days in Linux.md b/sources/tech/20191015 Bash Script to Delete Files-Folders Older Than -X- Days in Linux.md
new file mode 100644
index 0000000000..cb606aa1c7
--- /dev/null
+++ b/sources/tech/20191015 Bash Script to Delete Files-Folders Older Than -X- Days in Linux.md
@@ -0,0 +1,215 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Bash Script to Delete Files/Folders Older Than “X” Days in Linux)
+[#]: via: (https://www.2daygeek.com/bash-script-to-delete-files-folders-older-than-x-days-in-linux/)
+[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
+
+Bash Script to Delete Files/Folders Older Than “X” Days in Linux
+======
+
+**[Disk usage][1]** monitoring tools are capable of alerting us when a given threshold is reached.
+
+But they don’t have the ingenuity to fix the **[disk usage][2]** problem on their own.
+
+Manual intervention is needed to solve the problem.
+
+But if you want to fully automate this kind of activity, what will you do?
+
+Yes, it can be done using a bash script.
+
+Such a script prevents alerts from your **[monitoring tool][3]** because the old log files are deleted before the disk space fills up.
+
+We have added many useful shell scripts in the past. If you want to check them out, go to the link below.
+
+ * **[How to automate day to day activities using shell scripts?][4]**
+
+I’ve added two bash scripts to this article, which help clear up old logs.
+
+### 1) Bash Script to Delete Folders Older Than “X” Days in Linux
+
+We have a folder named **“/var/log/app/”** that contains 15 days of logs, and we are going to delete the folders older than 10 days.
+
+```
+$ ls -lh /var/log/app/
+
+drwxrw-rw- 3 root root 24K Oct 1 23:52 app_log.01
+drwxrw-rw- 3 root root 24K Oct 2 23:52 app_log.02
+drwxrw-rw- 3 root root 24K Oct 3 23:52 app_log.03
+drwxrw-rw- 3 root root 24K Oct 4 23:52 app_log.04
+drwxrw-rw- 3 root root 24K Oct 5 23:52 app_log.05
+drwxrw-rw- 3 root root 24K Oct 6 23:54 app_log.06
+drwxrw-rw- 3 root root 24K Oct 7 23:53 app_log.07
+drwxrw-rw- 3 root root 24K Oct 8 23:51 app_log.08
+drwxrw-rw- 3 root root 24K Oct 9 23:52 app_log.09
+drwxrw-rw- 3 root root 24K Oct 10 23:52 app_log.10
+drwxrw-rw- 3 root root 24K Oct 11 23:52 app_log.11
+drwxrw-rw- 3 root root 24K Oct 12 23:52 app_log.12
+drwxrw-rw- 3 root root 24K Oct 13 23:52 app_log.13
+drwxrw-rw- 3 root root 24K Oct 14 23:52 app_log.14
+drwxrw-rw- 3 root root 24K Oct 15 23:52 app_log.15
+```
+
+This script will delete folders older than 10 days and send the folder list via mail.
+
+You can change the value **“-mtime X”** depending on your requirement. Also, replace our email id with yours. 
+
+```
+# /opt/script/delete-old-folders.sh
+
+#!/bin/bash
+prev_count=0
+fpath=/var/log/app/app_log.*
+# Record the folders older than 10 days (-d lists the folder itself), then delete them
+find $fpath -type d -mtime +10 -exec ls -ltrdh {} \; > /tmp/folder.out
+find $fpath -type d -mtime +10 -exec rm -rf {} \;
+count=$(cat /tmp/folder.out | wc -l)
+# Send a mail report only when something was actually deleted
+if [ "$prev_count" -lt "$count" ] ; then
+MESSAGE="/tmp/file1.out"
+TO="[email protected]"
+echo "Application log folders are deleted older than 10 days" >> $MESSAGE
+echo "+----------------------------------------------------+" >> $MESSAGE
+echo "" >> $MESSAGE
+cat /tmp/folder.out | awk '{print $6,$7,$9}' >> $MESSAGE
+echo "" >> $MESSAGE
+SUBJECT="WARNING: Application log folders are deleted older than 10 days $(date)"
+mail -s "$SUBJECT" "$TO" < $MESSAGE
+rm $MESSAGE /tmp/folder.out
+fi
+```
+
+Set executable permission on the **“delete-old-folders.sh”** file.
+
+```
+# chmod +x /opt/script/delete-old-folders.sh
+```
+
+Finally, add a **[cronjob][5]** to automate this. It runs daily at 7 AM.
+
+```
+# crontab -e

+0 7 * * * /bin/bash /opt/script/delete-old-folders.sh
+```
+
+You will get an output like the one below.
+
+```
+Application log folders are deleted older than 10 days
++--------------------------------------------------------+
+Oct 1 /var/log/app/app_log.01
+Oct 2 /var/log/app/app_log.02
+Oct 3 /var/log/app/app_log.03
+Oct 4 /var/log/app/app_log.04
+Oct 5 /var/log/app/app_log.05
+```
+
+### 2) Bash Script to Delete Files Older Than “X” Days in Linux
+
+We have a folder named **“/var/log/apache/”** that contains 15 days of logs, and we are going to delete the files older than 10 days.
+
+The articles below are related to this topic, so you may be interested in reading them.
+
+ * **[How To Find And Delete Files Older Than “X” Days And “X” Hours In Linux?][6]**
+ * **[How to Find Recently Modified Files/Folders in Linux][7]**
+ * **[How To Automatically Delete Or Clean Up /tmp Folder Contents In Linux?][8]**
+
+```
+# ls -lh /var/log/apache/
+
+-rw-rw-rw- 3 root root 24K Oct 1 23:52 2daygeek_access.01
+-rw-rw-rw- 3 root root 24K Oct 2 23:52 2daygeek_access.02
+-rw-rw-rw- 3 root root 24K Oct 3 23:52 2daygeek_access.03
+-rw-rw-rw- 3 root root 24K Oct 4 23:52 2daygeek_access.04
+-rw-rw-rw- 3 root root 24K Oct 5 23:52 2daygeek_access.05
+-rw-rw-rw- 3 root root 24K Oct 6 23:54 2daygeek_access.06
+-rw-rw-rw- 3 root root 24K Oct 7 23:53 2daygeek_access.07
+-rw-rw-rw- 3 root root 24K Oct 8 23:51 2daygeek_access.08
+-rw-rw-rw- 3 root root 24K Oct 9 23:52 2daygeek_access.09
+-rw-rw-rw- 3 root root 24K Oct 10 23:52 2daygeek_access.10
+-rw-rw-rw- 3 root root 24K Oct 11 23:52 2daygeek_access.11
+-rw-rw-rw- 3 root root 24K Oct 12 23:52 2daygeek_access.12
+-rw-rw-rw- 3 root root 24K Oct 13 23:52 2daygeek_access.13
+-rw-rw-rw- 3 root root 24K Oct 14 23:52 2daygeek_access.14
+-rw-rw-rw- 3 root root 24K Oct 15 23:52 2daygeek_access.15
+```
+
+This script will delete files older than 10 days and send the file list via mail.
+
+You can change the value **“-mtime X”** depending on your requirement. Also, replace our email id with yours. 
+
+```
+# /opt/script/delete-old-files.sh
+
+#!/bin/bash
+prev_count=0
+fpath=/var/log/apache/2daygeek_access.*
+# Record the files older than 10 days, then delete them
+find $fpath -type f -mtime +10 -exec ls -ltrd {} \; > /tmp/file.out
+find $fpath -type f -mtime +10 -exec rm -rf {} \;
+count=$(cat /tmp/file.out | wc -l)
+# Send a mail report only when something was actually deleted
+if [ "$prev_count" -lt "$count" ] ; then
+MESSAGE="/tmp/file1.out"
+TO="[email protected]"
+echo "Apache access log files are deleted older than 10 days" >> $MESSAGE
+echo "+----------------------------------------------------+" >> $MESSAGE
+echo "" >> $MESSAGE
+cat /tmp/file.out | awk '{print $6,$7,$9}' >> $MESSAGE
+echo "" >> $MESSAGE
+SUBJECT="WARNING: Apache access log files are deleted older than 10 days $(date)"
+mail -s "$SUBJECT" "$TO" < $MESSAGE
+rm $MESSAGE /tmp/file.out
+fi
+```
+
+Set executable permission on the **“delete-old-files.sh”** file.
+
+```
+# chmod +x /opt/script/delete-old-files.sh
+```
+
+Finally, add a **[cronjob][5]** to automate this. It runs daily at 7 AM.
+
+```
+# crontab -e

+0 7 * * * /bin/bash /opt/script/delete-old-files.sh
+```
+
+You will get an output like the one below.
+
+```
+Apache access log files are deleted older than 10 days
++--------------------------------------------------------+
+Oct 1 /var/log/apache/2daygeek_access.01
+Oct 2 /var/log/apache/2daygeek_access.02
+Oct 3 /var/log/apache/2daygeek_access.03
+Oct 4 /var/log/apache/2daygeek_access.04
+Oct 5 /var/log/apache/2daygeek_access.05
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/bash-script-to-delete-files-folders-older-than-x-days-in-linux/
+
+作者:[Magesh Maruthamuthu][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/magesh/
+[b]: https://github.com/lujun9972
+[1]: https://www.2daygeek.com/linux-check-disk-usage-files-and-directories-folders-size-du-command/
+[2]: https://www.2daygeek.com/linux-check-disk-space-usage-df-command/
+[3]: https://www.2daygeek.com/category/monitoring-tools/
+[4]: https://www.2daygeek.com/category/shell-script/
+[5]: https://www.2daygeek.com/crontab-cronjob-to-schedule-jobs-in-linux/
+[6]: https://www.2daygeek.com/how-to-find-and-delete-files-older-than-x-days-and-x-hours-in-linux/
+[7]: https://www.2daygeek.com/check-find-recently-modified-files-folders-linux/
+[8]: https://www.2daygeek.com/automatically-delete-clean-up-tmp-directory-folder-contents-in-linux/
diff --git a/sources/tech/20191015 Formatting NFL data for doing data science with Python.md b/sources/tech/20191015 Formatting NFL data for doing data science with Python.md
new file mode 100644
index 0000000000..67f15777ad
--- /dev/null
+++ b/sources/tech/20191015 Formatting NFL data for doing data science with Python.md
@@ -0,0 +1,235 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Formatting NFL data for doing data science with Python)
+[#]: via: (https://opensource.com/article/19/10/formatting-nfl-data-python)
+[#]: author: (Christa Hayes https://opensource.com/users/cdhayes2)
+
+Formatting NFL data for doing data science with Python
+======
+In part 1 of this series on machine learning with Python, learn how to
+prepare a National Football League dataset for training. 
+![A football field.][1] + +No matter what medium of content you consume these days (podcasts, articles, tweets, etc.), you'll probably come across some reference to data. Whether it's to back up a talking point or put a meta-view on how data is everywhere, data and its analysis are in high demand. + +As a programmer, I've found data science to be more comparable to wizardry than an exact science. I've coveted the ability to get ahold of raw data and glean something useful and concrete from it. What a useful talent! + +This got me thinking about the difference between data scientists and programmers. Aren't data scientists just statisticians who can code? Look around and you'll see any number of tools aimed at helping developers become data scientists. AWS has a full-on [machine learning course][2] geared specifically towards turning developers into experts. [Visual Studio][3] has built-in Python projects that—with the click of a button—will create an entire template for classification problems. And scores of programmers are writing tools designed to make data science easier for anyone to pick up. + +I thought I'd lean into the clear message of recruiting programmers to the data (or dark) side and give it a shot with a fun project: training a machine learning model to predict plays using a National Football League (NFL) dataset. + +### Set up the environment + +Before I can dig into the data, I need to set up my [virtual environment][4]. This is important because, without an environment, I'll have nowhere to work. Fortunately, Opensource.com has [some great resources][5] for installing and configuring the setup. + +Any of the code you see here, I was able to look up through existing documentation. If there is one thing programmers are familiar with, it's navigating foreign (and sometimes very sparse) documentation. + +### Get the data + +As with any modern problem, the first step is to make sure you have quality data. Luckily, I came across a set of [NFL tracking data][6] from 2017 that was used for the NFL Big Data Bowl. Even the NFL is trying its best to attract the brightest stars in the data realm. + +Everything I need to know about the schema is in the README. This exercise will train a machine learning model to predict run (in which the ball carrier keeps the football and runs downfield) and pass (in which the ball is passed to a receiving player) plays using the plays.csv [data file][7]. I won't use player tracking data in this exercise, but it could be fun to explore later. + +First things first, I need to get access to my data by importing it into a dataframe. The [Pandas][8] library is an open source Python library that provides algorithms for easy analysis of data structures. The structure in the sample NFL data happens to be a two-dimensional array (or in simpler terms, a table), which data scientists often refer to as a dataframe. The Pandas function dealing with dataframes is [pandas.DataFrame][9]. I'll also import several other libraries that I will use later. + + +``` +import pandas as pd +import numpy as np +import seaborn as sns +import matplotlib.pyplot as plt +import xgboost as xgb + +from sklearn import metrics + +df = pd.read_csv('data/plays.csv') + +print(len(df)) +print(df.head()) +``` + +### Format the data + +The NFL data dump does not explicitly indicate which plays are runs (also called rushes) and which are passes. Therefore, I have to classify the offensive play types through some football savvy and reasoning. 
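+
+Before I start dropping rows, it helps to peek at the two columns the cleanup below leans on. This is just a quick, optional sanity check (a minimal sketch assuming `df` was loaded from plays.csv as above):
+
+```
+# Hypothetical spot check: how are the classification columns distributed?
+print(df['isSTPlay'].value_counts(dropna=False))
+print(df['PassResult'].value_counts(dropna=False))
+```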
+ +Right away, I can get rid of special teams plays in the **isSTPLAY** column. Special teams are neither offense nor defense, so they are irrelevant to my objective. + + +``` +#drop st plays +df = df[~df['isSTPlay']] +print(len(df)) +``` + +Skimming the **playDescription** column, I see some plays where the quarterback kneels, which effectively ends a play. This is usually called a "victory formation" because the intent is to run out the clock. These are significantly different than normal running plays, so I can drop them as well. + + +``` +#drop kneels +df = df[~df['playDescription'].str.contains("kneels")] +print (len(df)) +``` + +The data reports time in terms of the quarters in which a game is normally played (as well as the time on the game clock in each quarter). Is this the most intuitive in terms of trying to predict a sequence? One way to answer this is to consider how gameplay differs between time splits. + +When a team has the ball with a minute left in the first quarter, will it act the same as if it has the ball with a minute left in the second quarter? Probably not. Will it act the same with a minute to go at the end of both halves? All else remaining equal, the answer is likely yes in most scenarios. + +I'll convert the **quarter** and **GameClock** columns from quarters to halves, denoted in seconds rather than minutes. I'll also create a **half** column from the **quarter** values. There are some fifth quarter values, which I take to be overtime. Since overtime rules are different than normal gameplay, I can drop them. + + +``` +#drop overtime +df = df[~(df['quarter'] == 5)] +print(len(df)) + +#convert time/quarters +def translate_game_clock(row): +    raw_game_clock = row['GameClock'] +    quarter = row['quarter'] +    minutes, seconds_raw = raw_game_clock.partition(':')[::2] + +    seconds = seconds_raw.partition(':')[0] + +    total_seconds_left_in_quarter = int(seconds) + (int(minutes) * 60) + +    if quarter == 3 or quarter == 1: +        return total_seconds_left_in_quarter + 900 +    elif quarter == 4 or quarter == 2: +        return total_seconds_left_in_quarter + +if 'GameClock' in list (df.columns): +    df['secondsLeftInHalf'] = df.apply(translate_game_clock, axis=1) + +if 'quarter' in list(df.columns): +    df['half'] = df['quarter'].map(lambda q: 2 if q > 2 else 1) +``` + +The **yardlineNumber** column also needs to be transformed. The data currently lists the yard line as a value from one to 50. Again, this is unhelpful because a team would not act the same on its own 20-yard line vs. its opponent's 20-yard line. I will convert it to represent a value from one to 99, where the one-yard line is nearest the possession team's endzone, and the 99-yard line is nearest the opponent's end zone. + + +``` +def yards_to_endzone(row): +    if row['possessionTeam'] == row['yardlineSide']: +        return 100 - row['yardlineNumber'] +    else : +        return row['yardlineNumber'] + +df['yardsToEndzone'] = df.apply(yards_to_endzone, axis = 1) +``` + +The personnel data would be extremely useful if I could get it into a format for the machine learning algorithm to take in. Personnel identifies the different types of skill positions on the field at a given time. The string value currently shown in **personnel.offense** is not conducive to input, so I'll convert each personnel position to its own column to indicate the number present on the field during the play. Defense personnel might be interesting to include later to see if it has any outcome on prediction. 
For now, I'll just stick with offense.


```
def transform_off_personnel(row):
    rb_count = 0
    te_count = 0
    wr_count = 0
    ol_count = 0
    dl_count = 0
    db_count = 0

    if not pd.isna(row['personnel.offense']):
        # values look like "2 RB, 1 TE, 2 WR", so split on the commas
        personnel = row['personnel.offense'].split(', ')
        for p in personnel:
            if p[2:4] == 'RB':
                rb_count = int(p[0])
            elif p[2:4] == 'TE':
                te_count = int(p[0])
            elif p[2:4] == 'WR':
                wr_count = int(p[0])
            elif p[2:4] == 'OL':
                ol_count = int(p[0])
            elif p[2:4] == 'DL':
                dl_count = int(p[0])
            elif p[2:4] == 'DB':
                db_count = int(p[0])

    return pd.Series([rb_count, te_count, wr_count, ol_count, dl_count, db_count])

df[['rb_count', 'te_count', 'wr_count', 'ol_count', 'dl_count', 'db_count']] = df.apply(transform_off_personnel, axis=1)
```

Now the offense personnel values are represented by individual columns.

![Result of reformatting offense personnel][10]

Formations describe how players are positioned on the field, and this is also something that would seemingly have value in predicting play outcomes. Once again, I'll convert the string values into integers.


```
df['offenseFormation'] = df['offenseFormation'].map(lambda f: 'EMPTY' if pd.isna(f) else f)

def formation(row):
    form = row['offenseFormation'].strip()
    if form == 'SHOTGUN':
        return 0
    elif form == 'SINGLEBACK':
        return 1
    elif form == 'EMPTY':
        return 2
    elif form == 'I_FORM':
        return 3
    elif form == 'PISTOL':
        return 4
    elif form == 'JUMBO':
        return 5
    elif form == 'WILDCAT':
        return 6
    elif form == 'ACE':
        return 7
    else:
        return -1

df['numericFormation'] = df.apply(formation, axis=1)

print(df.numericFormation.unique())
```

Finally, it's time to classify the play types. The **PassResult** column has three distinct values plus nulls: I, C, and S, which represent Incomplete passing plays, Complete passing plays, and Sacks (classified as passing plays). Since I've already eliminated all special teams plays, I can assume the null values are running plays. So I'll convert the play outcome into a **play_type** column (Passing or Rushing) and a numeric **numericPlayType** column holding a 1 for passing and a 0 for rushing. The numeric column will be the one (or _label_, as the data scientists say) I want my algorithm to predict.


```
def play_type(row):
    if row['PassResult'] == 'I' or row['PassResult'] == 'C' or row['PassResult'] == 'S':
        return 'Passing'
    else:
        return 'Rushing'

df['play_type'] = df.apply(play_type, axis=1)
df['numericPlayType'] = df['play_type'].map(lambda p: 1 if p == 'Passing' else 0)
```

### Take a break

Is it time to start predicting things yet? Most of my work so far has been trying to understand the data and what format it needs to be in—before I even get started on predicting anything. Anyone else need a minute?

In part two, I'll do some analysis and visualization of the data before feeding it into a machine learning algorithm, and then I'll score the model's results to see how accurate they are. Stay tuned!
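If you'd like a head start, here's a sketch of how the columns engineered above could be gathered into a feature matrix and label. The column names are the ones created in this article, but treating exactly this set as the features is my assumption, not something part two has promised:


```
# A sketch of assembling the engineered columns for modeling.
# Column names come from the steps above; using exactly this set as the
# feature matrix is an assumption about where part two is headed.
feature_cols = ['secondsLeftInHalf', 'half', 'yardsToEndzone',
                'rb_count', 'te_count', 'wr_count', 'ol_count',
                'dl_count', 'db_count', 'numericFormation']
X = df[feature_cols]
y = df['numericPlayType']

print(X.shape, y.shape)
```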
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/10/formatting-nfl-data-python + +作者:[Christa Hayes][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/cdhayes2 +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_LIFE_football__520x292.png?itok=5hPbxQF8 (A football field.) +[2]: https://aws.amazon.com/training/learning-paths/machine-learning/developer/ +[3]: https://docs.microsoft.com/en-us/visualstudio/python/overview-of-python-tools-for-visual-studio?view=vs-2019 +[4]: https://opensource.com/article/19/9/get-started-data-science-python +[5]: https://opensource.com/article/17/10/python-101 +[6]: https://github.com/nfl-football-ops/Big-Data-Bowl +[7]: https://github.com/nfl-football-ops/Big-Data-Bowl/tree/master/Data +[8]: https://pandas.pydata.org/ +[9]: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html +[10]: https://opensource.com/sites/default/files/uploads/nfl-python-7_personneloffense.png (Result of reformatting offense personnel) diff --git a/sources/tech/20191016 Open source interior design with Sweet Home 3D.md b/sources/tech/20191016 Open source interior design with Sweet Home 3D.md new file mode 100644 index 0000000000..bc5a17c51c --- /dev/null +++ b/sources/tech/20191016 Open source interior design with Sweet Home 3D.md @@ -0,0 +1,142 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Open source interior design with Sweet Home 3D) +[#]: via: (https://opensource.com/article/19/10/interior-design-sweet-home-3d) +[#]: author: (Seth Kenlon https://opensource.com/users/seth) + +Open source interior design with Sweet Home 3D +====== +Try out furniture layouts, color schemes, and more in virtual reality +before you go shopping in the real world. +![Several houses][1] + +There are three schools of thought on how to go about decorating a room: + + 1. Buy a bunch of furniture and cram it into the room + 2. Take careful measurements of each item of furniture, calculate the theoretical capacity of the room, then cram it all in, ignoring the fact that you've placed a bookshelf on top of your bed + 3. Use a computer for pre-visualization + + + +Historically, I practiced the little-known fourth principle: don't have furniture. However, since I became a remote worker, I've found that a home office needs conveniences like a desk and a chair, a bookshelf for reference books and tech manuals, and so on. Therefore, I have been formulating a plan to populate my living and working space with actual furniture, made of actual wood rather than milk crates (or glue and sawdust, for that matter), with an emphasis on _plan_. The last thing I want is to bring home a great find from a garage sale to discover that it doesn't fit through the door or that it's oversized compared to another item of furniture. + +It was time to do what the professionals do. It was time to pre-viz. + +### Open source interior design + +[Sweet Home 3D][2] is an open source (GPLv2) interior design application that helps you draw your home's floor plan and then define, resize, and arrange furniture. 
You can do all of this with precise measurements, down to fractions of a centimeter, without having to do any math and with the ease of basic drag-and-drop operations. And when you're done, you can view the results in 3D. If you can create a basic table (not the furniture kind) in a word processor, you can plan the interior design of your home in Sweet Home 3D.

### Installing

Sweet Home 3D is a [Java][3] application, so it's universal. It runs on any operating system that can run Java, which includes Linux, Windows, MacOS, and BSD. Regardless of your OS, you can [download][4] the application from the website.

 * On Linux, [untar][5] the archive. Right-click on the SweetHome3D file and select **Properties**. In the **Permission** tab, grant the file executable permission.
 * On MacOS and Windows, expand the archive and launch the application. You must grant it permission to run on your system when prompted.



![Sweet Home 3D permissions][6]

On Linux, you can also install Sweet Home 3D as a Snap package, provided you have **snapd** installed and enabled.

### Measures of success

First things first: Break out your measuring tape. To get the most out of Sweet Home 3D, you must know the actual dimensions of the living space you're planning for. You may or may not need to measure down to the millimeter or 16th of an inch; you know your own tolerance for variance. But you must get the basic dimensions, including measuring walls and windows and doors.

Use your best judgment and common sense. For instance, when measuring doors, include the door frame; while it's not technically part of the _door_ itself, it is part of the wall space that you probably don't want to cover with furniture.

![Measure twice, execute once][7]

CC-SA-BY opensource.com

### Creating a room

When you first launch Sweet Home 3D, it opens a blank canvas in its default viewing mode: a blueprint view in the top panel and a 3D rendering in the bottom panel. On my [Slackware][8] desktop computer, this works famously, but my desktop is also my video editing and gaming computer, so it's got a great graphics card for 3D rendering. On my laptop, this view was a lot slower. For best performance (especially on a computer not dedicated to 3D rendering), go to the **3D View** menu at the top of the window and select **Virtual Visit**. This view mode renders your work from a ground-level point of view based on the position of a virtual visitor. That means you get to control what is rendered and when.

It makes sense to switch to this view regardless of your computer's power because an aerial 3D rendering doesn't provide you with much more detail than what you have in your blueprint plan. Once you have changed the view mode, you can start designing.

The first step is to define the walls of your home. This is done with the **Create Walls** tool, found to the right of the **Hand** icon in the top toolbar. Drawing walls is simple: Click where you want a wall to begin, click to anchor it, and continue until your room is complete.

![Drawing walls in Sweet Home 3D][9]

Once you close the walls, press **Esc** to exit the tool.

#### Defining a room

Sweet Home 3D is flexible on how you create walls. You can draw the outer boundary of your house first, and then subdivide the interior, or you can draw each room as conjoined "containers" that ultimately form the footprint of your house. This flexibility is possible because, in real life and in Sweet Home 3D, walls don't always define a room.
To define a room, use the **Create Rooms** button to the right of the **Create Walls** button in the top toolbar. + +If the room's floor space is defined by four walls, then all you need to do to define that enclosure as a room is double-click within the four walls. Sweet Home 3D defines the space as a room and provides you with its area in feet or meters, depending on your preference. + +For irregular rooms, you must manually define each corner of the room with a click. Depending on the complexity of the room shape, you may have to experiment to find whether you need to work clockwise or counterclockwise from your origin point to avoid quirky Möbius-strip flooring. Generally, however, defining the floor space of a room is straightforward. + +![Defining rooms in Sweet Home 3D][10] + +After you give the room a floor, you can change to the **Arrow** tool and double-click on the room to give it a name. You can also set the color and texture of the flooring, walls, ceiling, and baseboards. + +![Modifying room floors, ceilings, etc. in Sweet Home 3D][11] + +None of this is rendered in your blueprint view by default. To enable room rendering in your blueprint panel, go to the **File** menu and select **Preferences**. In the **Preferences** panel, set **Room rendering in plan** to **Floor color or texture**. + +### Doors and windows + +Once you've finished the basic floor plan, you can switch permanently to the **Arrow** tool. + +You can find doors and windows in the left column of Sweet Home 3D, in the **Doors and Windows** category. You have many choices, so choose whatever is closest to what you have in your home. + +![Moving a door in Sweet Home 3D][12] + +To place a door or window into your plan, drag-and-drop it on the appropriate wall in your blueprint panel. To adjust its position and size, double-click the door or window. + +### Adding furniture + +With the base plan complete, the part of the job that feels like _work_ is over! From this point onward, you can play with furniture arrangements and other décor. + +You can find furniture in the left column, organized by the room for which each is intended. You can drag-and-drop any item into your blueprint plan and control orientation and size with the tools visible when you hover your mouse over the item's corners. Double-click on any item to adjust its color and finish. + +### Visiting and exporting + +To see what your future home will look like, drag the "person" icon in your blueprint view into a room. + +![Sweet Home 3D rendering][13] + +You can strike your own balance between realism and just getting a feel for space, but your imagination is your only limit. You can get additional assets to add to your home from the Sweet Home 3D [download page][4]. You can even create your own furniture and textures with the **Library Editor** applications, which are optional downloads from the project site. + +Sweet Home 3D can export your blueprint plan to SVG format for use in [Inkscape][14], and it can export your 3D model to OBJ format for use in [Blender][15]. To export your blueprint, go to the **Plan** menu and select **Export to SVG format**. To export a 3D model, go to the **3D View** menu and select **Export to OBJ format**. + +You can also take "snapshots" of your home so that you can refer to your ideas without opening Sweet Home 3D. To create a snapshot, go to the **3D View** menu and select **Create Photo**. 
The snapshot is rendered from the perspective of the person icon in the blueprint view, so adjust as required, then click the **Create** button in the **Create Photo** window. If you're happy with the photo, click **Save**. + +### Home sweet home + +There are many more features in Sweet Home 3D. You can add a sky and a lawn, position lights for your photos, set ceiling height, add another level to your house, and much more. Whether you're planning for a flat you're renting or a house you're buying—or a house that doesn't even exist (yet), Sweet Home 3D is an engaging and easy application that can entertain and help you make better purchasing choices when scurrying around for furniture, so you can finally stop eating breakfast at the kitchen counter and working while crouched on the floor. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/10/interior-design-sweet-home-3d + +作者:[Seth Kenlon][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LIFE_housing.png?itok=s7i6pQL1 (Several houses) +[2]: http://www.sweethome3d.com/ +[3]: https://opensource.com/resources/java +[4]: http://www.sweethome3d.com/download.jsp +[5]: https://opensource.com/article/17/7/how-unzip-targz-file +[6]: https://opensource.com/sites/default/files/uploads/sweethome3d-permissions.png (Sweet Home 3D permissions) +[7]: https://opensource.com/sites/default/files/images/life/sweethome3d-measure.jpg (Measure twice, execute once) +[8]: http://www.slackware.com/ +[9]: https://opensource.com/sites/default/files/uploads/sweethome3d-walls.jpg (Drawing walls in Sweet Home 3D) +[10]: https://opensource.com/sites/default/files/uploads/sweethome3d-rooms.jpg (Defining rooms in Sweet Home 3D) +[11]: https://opensource.com/sites/default/files/uploads/sweethome3d-rooms-modify.jpg (Modifying room floors, ceilings, etc. 
in Sweet Home 3D) +[12]: https://opensource.com/sites/default/files/uploads/sweethome3d-move.jpg (Moving a door in Sweet Home 3D) +[13]: https://opensource.com/sites/default/files/uploads/sweethome3d-view.jpg (Sweet Home 3D rendering) +[14]: http://inkscape.org +[15]: http://blender.org diff --git a/translated/talk/20180117 How technology changes the rules for doing agile.md b/translated/talk/20180117 How technology changes the rules for doing agile.md deleted file mode 100644 index 4c5b66f133..0000000000 --- a/translated/talk/20180117 How technology changes the rules for doing agile.md +++ /dev/null @@ -1,97 +0,0 @@ -技术如何改变敏捷的规则 -====== - -![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/CIO%20Containers%20Ecosystem.png?itok=lDTaYXzk) - -越来越多的企业正因为一个非常明显的原因开始尝试敏捷和[DevOps][1]: 企业需要通过更快的速度和更多的实验为创新和竞争性提供优势。而DevOps将帮助我们得到所需的创新速度。但是,在小团队或初创企业中实践DevOps与进行大规模实践完全是两码事。我们都明白这样的一个事实,那就是在10人的跨职能团队中能够很好地解决问题的方案,当将相同的模式应用到100人的团队中时就可能无法奏效。这条道路是如此艰难,以至于IT领导者很容易将敏捷方法的推行再推迟一年。 - -但那样的时代已经结束了。如果你已经尝试过,但是没有成功,那么现在是时候重新开始了。 - -直到现在,DevOps需要为许多组织提供个性化的解决方案,因此往往需要进行大量的调整以及付出额外的工作。但在今天,[Linux容器][2]和Kubernetes正在推动DevOps工具和过程的标准化。而这样的标准化将会加速整个软件开发过程。因此,我们用来实践DevOps工作方式的技术最终能够满足我们加快软件开发速度的愿望。 - -Linux容器和[Kubernetes][3]正在改变团队交互的方式。此外,你可以在Kubernetes平台上运行任何能够在Linux运行的应用程序。这意味着什么呢?你可以运行大量的企业及应用程序(甚至可以解决以前令人烦恼的Windows和Linux之间的协调问题)。最后,容器和Kubernetes将能够满足未来所有运行内容的需求。它们正在经受着未来的考验,以应对机器学习、人工智能和分析工作等下一代解决问题工具。 - -**[ 参考相关文章,[4 container adoption patterns: What you need to know. ] ][4]** - -让我们以机器学习为例来思考一下。今天,人们可以在大量的企业数据中找到一些模式。当机器发现这些模式时(想想机器学习),你的员工就能更快地采取行动。随着人工智能的加入,机器不仅可以发现模式,还可以对模式进行操作。如今,三个星期已经成为了一个积极的软件开发冲刺周期。有了人工智能,机器每秒可以多次修改代码。创业公司会利用这种能力来“打扰你”。 - -考虑一下你需要多快才能参与到竞争当中。如果你对于无法对于DevOps和每周一个迭代周期充满信心,那么考虑一下当那个创业公司将AI驱动的过程指向你时会发生什么?现在是时候转向DevOps的工作方式了,否认就会像你的竞争对手一样被甩在后面。 - -### 容器技术如何改变团队的工作? - -DevOps使得许多试图将这种工作方式扩展到更大范围的团队感到沮丧。即使许多IT(和业务)人员之前都听说过敏捷相关的语言、框架、模型(如DevOps)等承诺将会彻底应用程序开发和IT过程的全部相关内容,但他们还是对此持怀疑态度。 - -**[ 想要获取来自其他CIO们的建议吗?不放参考下我们的综述性资源, [DevOps: The IT Leader's Guide][5]. ]** - -向你的涉众“推销”快速开发冲刺也不是一件容易的事情。想象一下,如果你以这种方式买了一栋房子:你将不再需要向开发商支付固定的金额,而是会得到这样的信息:“我们将在4周内浇筑完地基,其成本是X,之后再搭建房屋框架和铺设电路,但是我们现在只能够知道地基完成的时间表。”人们已经习惯了买房子的时候有一个预先的价格和交付时间表。 - -挑战在于构建软件与构建房屋不同。同一个建筑商往往建造了成千上万个完全相同的房子,而软件项目从来都各不相同。这是你要克服的第一个障碍。 - -开发和运维团队的工作方式确实不同,我之所以知道这一点是因为我曾经从事过这两方面的工作。企业往往会用不同的方式来激励他们,开发人员会因为更改和创建而获得奖励,而运维专家则会因降低成本和确保安全性而获得奖励。我们会把他们分成不同的小组,并且尽量减少互动。而这些角色通常会吸引那些思维方式完全不同的技术人员。但是这样的解决方案注定会失败,你必须打破横亘在开发和运维之间的藩篱。 - -想想传统情况下会发生什么。业务会把需求扔过墙,这是因为他们在“买房”模式下运作,并且说上一句“我们9个月后见。”开发人员根据这些需求进行开发,并根据技术约束的需要进行更改。然后,他们把它扔过墙传递给运维人员,并说一句“搞清楚如何运行这个软件”。然后,运维人员勤就会奋地进行大量更改,使软件与基础设施保持一致。然而,最终的结果是什么呢? - -通常情况下,当业务人员看到需求实现的最终结果时甚至根本辨认不出。在过去20年的大部分时间里,我们一次又一次地目睹了这种模式在软件行业中上演。而现在,是时候改变了。 - -Linux容器能够真正地解决这样的问题,这是因为容器缩小了开发和运维之间的间隙。容器技术允许两个团队共同理解和设计所有的关键需求,但仍然独立地履行各自团队的职责。基本上,我们去掉了开发人员和运维人员之间的电话游戏。 - -因为容器技术,我们可以使得运维团队的规模更小,但依旧能够承担起数百万应用程序的运维工作,并且能够使得开发团队可以更加快速地根据需要更改软件。(在较大的组织中,所需的速度可能比运维人员的响应速度更快。) - -使用容器,您可以将所需要交付的内容与它运行的位置分开。你的运维团队只需要负责运行容器的主机和安全的内存占用,仅此而已。这意味着什么呢? 
- -首先,这意味着你现在可以和团队一起实践DevOps了。没错,只需要让团队专注于他们已经拥有的专业知识,而对于容器,只需让团队了解所需集成依赖关系的必要知识即可。 - -如果你想要重新训练每个人,往往会收效甚微。容器技术允许团队之间进行交互,但同时也会为每个团队提供一个围绕该团队优势而构建的强大边界。开发人员会知道需要消耗什么,但不需要知道如何使其大规模运行。运维团队了解核心基础设施,但不需要了解应用程序的细节。此外,运维团队也可以通过更新应用程序来解决新的安全问题,以免你成为下一个数据泄露的热门话题。 - -想要为一个大型IT组织,比如30000人的团队教授运维和开发技能?那或许需要花费你十年的时间,而你可能并没有那么多时间。 - -当人们谈论“构建新的云原生应用程序将帮助我们摆脱这个问题”时,请批判性地进行思考。你可以在10个人的团队中构建云原生应用程序,但这对《财富》杂志前1000强的企业而言或许并不适用。除非你不再需要依赖现有的团队,否则你无法一个接一个地构建新的微服务:你最终将得到一个竖井式的组织。这是一个诱人的想法,但你不能指望这些应用程序来重新定义你的业务。我还没见过哪家公司能在如此大规模的并行开发中获得成功。IT预算已经受到限制;在很长一段时间内将预算翻倍甚至三倍是不现实的。 - -### 当奇迹发生时: 你好, 速度 - -Linux容器就是为扩容而生的。一旦你开始这样做,[Kubernetes之类的编制工具就会发挥作用][6],这是因为你将需要运行数千个容器。应用程序将不仅仅由一个容器组成,它们将依赖于许多不同的部分,所有的部分都会作为一个单元运行在容器上。如果不这样做,你的应用程序将无法在生产环境中很好地运行。 - -思考一下有多少小滑轮和杠杆组合在一起来支撑你的业务,对于任何应用程序都是如此。开发人员负责应用程序中的所有滑轮和杠杆。(如果开发人员没有这些组件,您可能会在集成时做噩梦。)与此同时,无论是在线下还是在云上,运维团队都会负责构成基础设施的所有滑轮和杠杆。做一个较为抽象的比喻,使用Kubernetes,你的运维团队就可以为应用程序提供运行所需的燃料,但又不必成为所有方面的专家。 - -开发人员进行实验,运维团队则保持基础设施的安全和可靠。这样的组合使得企业敢于承担小风险,从而实现创新。不同于打几个孤注一掷的赌,公司中真正的实验往往是循序渐进的和快速的。 - -从个人经验来看,这就是组织内部发生的显著变化:因为人们说:“我们如何通过改变计划来真正地利用这种能力进行实验?”它强制执行敏捷计划。 - -举个例子,使用DevOps模型、容器和Kubernetes的KeyBank如今每天都会部署代码。(观看视频[7],其中主导了KeyBank持续交付和反馈的John Rzeszotarski将解释这一变化。)类似地,Macquarie银行也借助DevOps和容器技术每天将一些东西投入生产环境。 - -一旦你每天都推出软件,它就会改变你计划的每一个方面,并且会[加速业务的变化速度][8]。Macquarie银行和金融服务集团的CDO,Luis Uguina表示:“创意可以在一天内触达客户。”(参见[9]对Red Hat与Macquarie银行合作的案例研究)。 - -### 是时候去创造一些伟大的东西了 - -Macquarie的例子说明了速度的力量。这将如何改变你的经营方式?记住,Macquarie不是一家初创企业。这是CIO们所面临的颠覆性力量,它不仅来自新的市场进入者,也来自老牌同行。 - -开发人员的自由还改变了运营敏捷商店的CIO们的人才方程式。突然之间,大公司里的个体(即使不是在最热门的行业或地区)也可以产生巨大的影响。Macquarie利用这一变动作为招聘工具,并向开发人员承诺,所有新招聘的员工将会在第一周内推出新产品。 - -与此同时,在这个基于云的计算和存储能力的时代,我们比以往任何时候都拥有更多可用的基础设施。考虑到[机器学习和人工智能工具将很快实现的飞跃][10],这是幸运的。 - -所有这些都说明现在正是打造伟大事业的好时机。考虑到市场创新的速度,你需要不断地创造伟大的东西来保持客户的忠诚度。因此,如果你一直在等待将赌注押在DevOps上,那么现在就是正确的时机。容器技术和Kubernetes改变了规则,并且对你有利。 - -**想要获取更多这样的智慧吗, IT领导者? 
[订阅每周邮件][11].** - --------------------------------------------------------------------------------- - -via: https://enterprisersproject.com/article/2018/1/how-technology-changes-rules-doing-agile - -作者:[Matt Hicks][a] -译者:[JayFrank](https://github.com/JayFrank) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://enterprisersproject.com/user/matt-hicks -[1]:https://enterprisersproject.com/tags/devops -[2]:https://www.redhat.com/en/topics/containers?intcmp=701f2000000tjyaAAA -[3]:https://www.redhat.com/en/topics/containers/what-is-kubernetes?intcmp=701f2000000tjyaAAA -[4]:https://enterprisersproject.com/article/2017/8/4-container-adoption-patterns-what-you-need-know?sc_cid=70160000000h0aXAAQ -[5]:https://enterprisersproject.com/devops?sc_cid=70160000000h0aXAAQ -[6]:https://enterprisersproject.com/article/2017/11/how-enterprise-it-uses-kubernetes-tame-container-complexity -[7]:https://www.redhat.com/en/about/videos/john-rzeszotarski-keybank-red-hat-summit-2017?intcmp=701f2000000tjyaAAA -[8]:https://enterprisersproject.com/article/2017/11/dear-cios-stop-beating-yourselves-being-behind-transformation -[9]:https://www.redhat.com/en/resources/macquarie-bank-case-study?intcmp=701f2000000tjyaAAA -[10]:https://enterprisersproject.com/article/2018/1/4-ai-trends-watch -[11]:https://enterprisersproject.com/email-newsletter?intcmp=701f2000000tsjPAAQ diff --git a/translated/talk/20191009 Top 10 open source video players for Linux.md b/translated/talk/20191009 Top 10 open source video players for Linux.md new file mode 100644 index 0000000000..67eab29960 --- /dev/null +++ b/translated/talk/20191009 Top 10 open source video players for Linux.md @@ -0,0 +1,155 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Top 10 open source video players for Linux) +[#]: via: (https://opensourceforu.com/2019/10/top-10-open-source-video-players-for-linux/) +[#]: author: (Stella Aldridge https://opensourceforu.com/author/stella-aldridge/) + +Linux 中的十大开源视频播放器 +====== + +[![][1]][2] + +_选择合适的视频播放器有助于确保你获得最佳的观看体验,并为你提供[创建视频网站][3]的工具。你甚至可以根据个人喜好自定义正在观看的视频。_ + +因此,为了帮助你挑选适合你需求的最佳播放器,我们列出了 Linux 中十大开源播放器。 + +让我们来看看: + +**1\. XBMC – Kodi 媒体中心** + +这是一个灵活的跨平台播放器,核心使用 C++ 编写,并提供 Python 脚本作为附加组件。使用 Kodi 的好处包括: + + * 提供超过 69 种语言版本 + * 用户可以从网络和本地存储播放音频、视频和媒体播放文件 + * 可与 JeOS 一起作为应用套件用于智能电视和机顶盒等设备 + * 有很多不错的附加组件,如视频和音频流插件、主题、屏幕保护程序等 + * 它支持多种格式,如 MPEG-1、2、4、RealVideo、HVC、HEVC 等 + + + +**2\. VLC 媒体播放器** + +由于该播放器在一系列操作系统上具有令人印象深刻的功能和可用性,他在列表上是理所当然的。它使用 C、C++ 和 Objective C 编写,用户无需使用插件,这要归功于它对解码库的广泛支持。VLC 媒体播放器的优势包括: + + * 在 Linux 上支持 DVD 播放器 + * 能够播放 .iso 文件 + * 能够播放高清录制的 D-VHS 磁带 + * 可以直接从 U 盘或外部驱动器运行 + * API 支持和浏览器支持(通过插件) + + + +**3\. Bomi(CMPlayer)** + +这个灵活和强大的播放器被许多普通用户选择,它的优势有: + + * 易于使用的图形用户界面 (GUI) + * 令人印象深刻的播放能力 + * 恢复播放的选项 + * 支持字幕,可以渲染多个字幕文件 + + + +**[![][4]][5] +4\. Miro Music and Video Player** + +以前被称为 Democracy Player (DTV), Miro 由分享文化基金会(Participatory Culture Foundation)重新开发,是一个不错的跨平台音频视频播放器。令人印象深刻,因为: + + * 支持一些高清音频和视频 + * 提供超过 40 种语言版本 + * 可以播放多种文件格式,例如,QuickTime、WMV、MPEG 文件、音频视频接口 (AVI)、XVID + * 一旦可用,可以自动通知用户并下载视频 + + + +**5\. SMPlayer** + +这个跨平台的媒体播放器,只使用 C++ 的 Qt 库编写,它是一个强大的,多功能播放器。我们喜欢它,因为: + + * 有多语言选择 + * 支持所有默认格式 + * 支持 EDL 文件,你可以配置从 Internet 获取的字幕 + * 可从互联网下载的各种皮肤 + * 倍速播放 + + + +**6\. MPV Player** + +它用 C、Objective-C、Lua 和 Python 编写,免费、易于使用,并且有许多新功能,便于使用。主要加分是: + + * 可以编译为一个库,公开客户端 API,从而增强控制 + * 允许媒体编码 + * 平滑运动 + + + +**7\. 
Deepin Movie**

此播放器是开源媒体播放器的一个极好的例子,它有很多优势,包括:

  * 通过键盘完成所有播放操作
  * 各种格式的视频文件可以通过这个播放器轻松播放
  * 流媒体功能能让用户享受许多在线视频资源



**8\. Gnome Videos**

以前称为 Totem,这是 Gnome 桌面环境选择的播放器。完全用 C 编写,使用 GStreamer 多媒体框架构建,另外的版本(>2.7.1)使用 xine 作为后端。它是很棒的,因为:

  * 它支持大量的格式,包括 SHOUTcast、SMIL、M3U、Windows 媒体播放器格式等
  * 你可以在播放过程中调整灯光设置,如亮度和对比度
  * 加载 SubRip 字幕
  * 支持从互联网频道(如 Apple)直接播放视频



**9\. Xine Multimedia Player**

我们列表中用 C 编写的另外一个跨平台多媒体播放器。这是一个全能播放器,因为:

  * 它支持物理媒体以及视频设备,支持 3gp、Matroska(MKV)、MOV、MP4 等视频格式和多种音频格式
  * 支持网络协议,以及 V4L、DVB 和 PVR 等
  * 它可以手动校正音频和视频流的同步



**10\. ExMPlayer**

最后但同样重要的一个,ExMPlayer 是一个惊人的、强大的 MPlayer 的 GUI 前端。它的优点包括:

  * 可以播放任何媒体格式
  * 支持网络流和字幕
  * 易于使用的音频转换器
  * 高品质的音频提取,而不会影响音质



上面的视频播放器在 Linux 上工作得很好。我们建议你尝试一下,选择一个最适合你的播放器。

--------------------------------------------------------------------------------

via: https://opensourceforu.com/2019/10/top-10-open-source-video-players-for-linux/

作者:[Stella Aldridge][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensourceforu.com/author/stella-aldridge/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Depositphotos_50337841_l-2015.jpg?resize=696%2C585&ssl=1 (Depositphotos_50337841_l-2015)
[2]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Depositphotos_50337841_l-2015.jpg?fit=900%2C756&ssl=1
[3]: https://www.ning.com/create-video-website/
[4]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Depositphotos_20380441_l-2015.jpg?resize=350%2C231&ssl=1
[5]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Depositphotos_20380441_l-2015.jpg?ssl=1
diff --git a/translated/tech/20140510 Managing Digital Files (e.g., Photographs) in Files and Folders.md b/translated/tech/20140510 Managing Digital Files (e.g., Photographs) in Files and Folders.md
deleted file mode 100644
index 8455eb1d6d..0000000000
--- a/translated/tech/20140510 Managing Digital Files (e.g., Photographs) in Files and Folders.md
+++ /dev/null
@@ -1,611 +0,0 @@
-
-
-
-
-数码文件与文件夹收纳术(以照片为例)
-======
-更新 2014-05-14:增加了一些具体实例
-
-更新 2015-03-16:根据照片的 GPS 坐标过滤图片
-
-更新 2016-08-29:以新的 `filetags--filter` (LCTT译注:文件标签过滤器)替换已经过时的 `show-sel.sh` 脚本(LCTT译注:show-sel 为 show firmware System Event Log records 即硬件系统事件及日志显示)
-
-更新 2017-08-28: geeqier 视频缩略图的邮件评论
-
-每当度假或去哪游玩时我就会化身为一个富有激情的摄影师。所以,过去的几年中我积累了许多的 [JPEG][1] 文件。这篇文章中我会介绍我是如何避免[vendor lock-in][2](LCTT译注:vendor lock-in 供应商锁定,原为经济学术语,这里引申为避免过于依赖某一服务平台)造成受限于那些临时性的解决方案及数据丢失。相反,我更倾向于使用那些可以让我投入时间和精力打理并能长久使用的解决方案。
-
-这一(相当长的)攻略 **并不仅仅适用于图像文件** :我将进一步阐述像是文件夹结构,文件的命名规则,等等许多领域的事情。因此,这些规范适用于我所能接触到的所以类型的文件。
-
-在我开始传授我的方法之前,我们应该先就我将要介绍方法的达成一个共识,那就是我们是否有相同的需求。如果你对[raw 图像格式][3]十分推崇,将照片存储在云端或其他你信赖的地方(对我而言可能不会),那么你可能不会认同这篇文章将要描述的方式了。请根据你的情况来灵活做出选择。
-
-### 我的需求
-
-对于 **将照片(或视频)从我的数码相机中导出到电脑里**,我仅仅将 SD 卡茶道我的电脑里并调用 fetch-workflow 软件。这一步也完成了 **图像软件的预处理** 适用于我提出的文件命名规范(下文会具体论述)同时也可以将图片旋转至正常的方向(而不是横着)。
-
-这些文件将会被存入到我的摄影收藏文件夹 `$HOME/tmp/digicam/`。 在这一文件夹中我希望能完成以下的操作 **浏览图像和视频文件** 以便于 **整理排序/删除,重命名,添加/移除标签,以及将一系列相关的文件移动到相应的文件夹中**。
-
-在完成这些以后,我将会**浏览包含图像/电影文件集的文件夹**。在极少数情况下,我希望**在独立的图像处理工具**中打开一个图像文件,比如[GIMP][4]。如果仅是为了**旋转JPEG文件**,我想找到一个快速的方法,不需要图像处理工具,这是旋转JPEG图像[无损的方式][5]。
-
-我的数码相机支持用[GPS][6]坐标标记图像。因此,我需要一个方法来**可视化GPS坐标的单个文件以及一组文件**显示我走过的路径。
-
-我想安利的另一个好功能是:假设你在威尼斯度假时拍了几百张照片。每一个都很漂亮,所以你每张都舍不得删除。另一方面,你可能想要一组更小的照片,送给家里的朋友。而且,为了不让他们过于嫉妒,他们可能只希望看到20多张照片。因此,我希望能够**定义并显示一组特定的照片**。 - -就独立性和**避免锁定效应**而言,我不想使用那种一旦公司停止产品或服务就无法使用的工具。出于同样的原因,由于我是一个注重隐私的人,**我不想使用任何基于云的服务**。为了让自己对新的可能性保持开放的心态,我不希望仅在一个特定的操作系统平台上倾注全部的精力。**基本的东西必须在任何平台上可用**(查看、导航、……)。但是**全套需求必须在GNU/Linux上运行**且我选择Debian GNU/Linux。 - -在我传授当前针对上述大量需求的解决方案之前,我必须解释一下我的一般文件夹结构和文件命名约定,我也使用它来命名数码照片。但首先,你必须考虑一个重要的事实: - -#### iPhoto, Picasa, 诸如此类应被认为是有害的 - -管理照片集合的软件工具确实提供了相当酷的功能。他们提供了一个良好的用户界面,并试图为你提供各种需求的舒适的工作流程。 - -这些软件的功能和我的个人需求之间的差异很大。它们几乎对所有东西都使用专有的存储格式:图像文件、元数据等等。这是一个大问题,当你打算在几年内换一个不同的软件。相信我:总有一天你会因为多种原因而改变。 - -如果你现在正打算更换 相应的工具,你将会意识到iPhoto或Picasa确实分别存储原始图像文件和你对它们所做的所有操作。旋转图像,向图像文件添加描述,标签,裁剪,等等,如果你不能导出并重新导入到新工具,那么**所有的东西都将永远丢失**。而无损的进行转换和迁移几乎是不可能的。 - -我不想在一个锁住我工作的工具上投入任何精力。**我也拒绝把自己锁在任何专有工具上**。我是一个过来人,希望你们吸取我的经验。 - -这就是我在文件名中保留时间戳、图像描述或标记的原因。文件名是永久性的,除非我手动更改它们。当我把照片备份或复制到u盘或其他操作系统时,它们不会丢失。每个人都能读懂。任何未来的系统都能够处理它们。 - -### 我的文件命名约定 - -我所有的文件都与一个特定的日期或时间有关,根据所采用的[ISO 8601][7]规范,我采用的是**日期-标记**或**时间-标记** -带有日期戳和两个标签的示例文件名:`2014-05-09 42号项目的预算 -- 金融公司.csv` - -带有时间戳(甚至包括可选秒)和两个标签的示例文件名:`2014-05-09T22.19.58 Susan展示她的新鞋子 -- 家庭衣物.jpg` - -由于冒号不适用于Windows[文件系统NTFS][8],所以我必须使用已采用的ISO时间戳。因此,我用点代替冒号,以便将小时与分钟区别开来。 - -如果是**时间或日期持续时间**,我将两个日期或时间戳用两个负号分开:`2014-05-09—2014-05-13爵士音乐节Graz—folder 旅游音乐.pdf`。 - -文件名中的时间/日期戳的优点是,除非我手动更改它们,否则它们保持不变。当通过某些不处理这些元数据的软件进行处理时,包含在文件内容本身中的元数据(如[Exif][9])往往会丢失。此外,使用这样的日期/时间戳启动文件名可以确保文件按时间顺序显示,而不是按字母顺序显示。字母表是一种[完全人工的排序顺序][10],对于用户定位文件通常不太实用。 - -当我想将**tags**关联到文件名时,我将它们放在原始文件名和[文件名扩展名][11]之间,中间用空格、两个减号和一个额外的空格分隔"`--`"。我的标签是小写的英文单词,不包含空格或特殊字符。有时,我可能会使用`quantifiedself`或`usergenerated`等连接词。我[倾向于选择一般类别][12],而不是太过具体的描述标签。我用这一方式在Twitter [hashtags][13]上重用标记、文件名、文件夹名、书签、诸如此类的博客条目等等。 - -标签作为文件名的一部分有几个优点。通过使用常用的桌面搜索引擎,你可以在标签的帮助下定位文件。文件名称中的标签不能因为在不同的存储介质上复制而丢失。当系统使用与文件名不同的存储位置如:元数据数据库、[dot-files][14]、[备用数据流][15]等,通常会发生这种情况 - -当然,在一般的文件和文件夹名称中,**请避免使用特殊字符**,umlauts,冒号等。尤其是在不同操作系统平台之间同步文件时。 - -我的**文件夹名命名约定**与文件的相应规范相同。 - -注意:由于[Memacs][17]的[filenametimestamp][16]-module的聪明之处,所有带有日期/时间戳的文件和文件夹都在同一时间/天出现在我的组织模式日历(agenda)上。这样,我就能很好地了解当天发生了什么,包括我拍的所有照片。 - -### 我的一般文件夹结构 - -在本节中,我将描述主文件夹中最重要的文件夹。注意:这可能在将来的被移动到一个独立的页面。或许不是。让我们等着瞧:-) -很多东西只有在一定的时间内才会引起人们的兴趣。这些内容包括快速浏览其内容的下载、解压缩文件以检查包含的文件、一些有趣的小内容等等。对于**临时的东西**,我有 `$HOME/tmp/ ` 子层次结构。新照片放在`$HOME/tmp/digicam/`中。我从CD、DVD或USB记忆棒临时复制的东西放在`$HOME/tmp/fromcd/`中。每当软件工具需要用户文件夹层次结构中的临时数据时,我就使用` $HOME/tmp/Tools/ `作为起点。我经常使用的文件夹是`$HOME/tmp/2del/`:`2del`的意思是“随时可以删除”。例如,我所有的浏览器都使用这个文件夹作为默认的下载文件夹。如果我需要在机器上腾出空间,我首先查看这个`2del`-文件夹,用于删除内容。 - -与上面描述的临时文件相比,我当然也想将文件**保存更长的时间**。这些文件被移动到我的`$HOME/archive/`子层次结构中。它有几个子文件夹备份,web /下载我想保留,二进制文件我要存档,索引文件的可移动媒体(CD, DVD,记忆棒、外部硬盘驱动器),和一个文件夹用来存档(和寻找一个合适的的目标文件夹)在不久的将来。有时,我太忙或没有耐心的时候将文件妥善整理。是的,那就是我,我甚至有一个名为`现在不要整理我`的文件夹。这对你而言是否很怪?:-) - -我的归档中最重要的子层次结构是 `$HOME/archive/events_memories/ `及其子文件夹` 2014/ `、` 2013/ `、` 2012/ `等等。正如你可能已经猜到的,每个年份有一个**子文件夹**。其中每个文件中都有单个文件和文件夹。这些文件是根据我在前一节中描述的文件名约定命名的。文件夹名称符合“YYYY-MM-DD”[ISO 8601][7] 日期标签开头,后面跟着一个具有描述性的名称,如`$HOME/archive/events_memories/2014/2014-05-08 Business marathon with /`。在这些与日期相关的文件夹中,我保存着各种与特定事件相关的文件:照片、(扫描的)pdf文件、文本文件等等。 - -对于**共享数据**,我设置一个`$HOME/share/`子层次结构。这是我的Dropbox文件夹,我用各种各样的方法(比如[unison][18])来分享数据。我也在我的设备之间共享数据:家里的Mac Mini,家里的GNU/Linux笔记本,Android手机,root-server(我的个人云),工作时的windows笔记本。我不想在这里详细说明我的同步设置。如果你想了解相关的设置,可以参考另一篇相关的文章。:-) - -在我的` $HOME/ templates_tags / `子层次结构中,我保存了各种**模板文件** ([LaTeX][19], 脚本,…),剪辑和**logos**,等等。 - -我的**Org-mode**文件,主要是保存在`$ HOME /org/`。我练习保持记忆,并没有解释我有多喜欢 [Emacs/Org-mode][20]以及这我从中获益多少。你可能读过或听过我详细描述我用它做的很棒的事情。具体可以在我的博客上查找[我的' emacs 
'标签][21],在twitter上查找[hashtag ' #orgmode '][22]。 - -以上就是我最重要的文件夹子层次结构设置方式。 - -### 我的工作流程 - -Tataaaa,在你了解了我的文件夹结构和文件名约定之后,下面是我当前的工作流程和工具,我使用它们来满足我前面描述的需求。 -请注意,**你必须知道你在做什么**。我这里的示例及文件夹路径和更多只**适用我的机器或我的设置的文件夹路径。**你必须采用**相应的路径、文件名等**来满足你的需求! - -#### 工作流程:将文件从SD卡移动到笔记本电脑,旋转人像图像,并重命名文件 - -当我想把数据从我的数码相机移到我的GNU/Linux笔记本上时,我拿出它的mini sd存储卡,把它放在我的笔记本上。然后它会自动挂载在` /media/digicam `上。 - -然后,调用[getdigicamdata]。它做了如下几件事:它将文件从SD卡移动到一个临时文件夹中进行处理。原始文件名会转换为小写字符。使用[jhead][24]旋转所有人像照片。同样使用jhead,我从Exif头时间戳生成文件名称时间戳。使用[date2name][25],我将时间戳添加到电影文件中。处理完所有这些文件后,它们将被移动到新的digicam文件的目标文件夹:$HOME/tmp/digicam/tmp/~。 - -#### 工作流程:文件夹索引、查看、重命名、删除图像文件 - -为了快速浏览我的图像和电影文件,我更喜欢在GNU/Linux上使用[geeqie][26]。这是一个相当轻量级的图像浏览器,它具有其他文件浏览器所缺少的一大优势:我可以通过键盘快捷方式调用的外部脚本/工具。通过这种方式,我可以通过任意的外部命令扩展图像浏览器的特性。 - -基本的图像管理功能是内置在geeqie:索引我的文件夹层次结构,在窗口模式或全屏查看图像(快捷键' f '),重命名文件名,删除文件,显示Exif元数据(快捷键` Ctrl-e `)。 - -在OS X上,我使用[Xee][27]。与geeqie不同,它不能通过外部命令进行扩展。不过,基本的导航、查看和重命名功能也是可用的。 - -#### 工作流:添加和删除标签 - -我创建了一个名为[filetags][28]的Python脚本,用于向单个文件以及一组文件添加和删除标记。 - -对于数码照片,我使用标签,例如,`specialL`用于我认为适合桌面背景的风景图片,`specialP`用于我想展示给其他人的人像照片,`sel`用于筛选,等等。 - -##### 使用geeqie初始设置文件标签 - -向geeqie添加文件标签是一个手动步骤:`编辑>首选项>配置编辑器…`然后创建一个带有`New`的附加条目。在这里,你可以定义一个新的桌面文件,如下所示: - -add-tags.desktop -``` -[Desktop Entry] -Name=filetags -GenericName=filetags -Comment= -Exec=/home/vk/src/misc/vk-filetags-interactive-adding-wrapper-with-gnome-terminal.sh %F -Icon= -Terminal=true -Type=Application -Categories=Application;Graphics; -hidden=false -MimeType=image/*;video/*;image/mpo;image/thm -Categories=X-Geeqie; - -``` - -包装器脚本的`vk-filetags-interactive-adding-wrapper-with-gnome-terminal.sh `是必须的,因为我想要弹出一个新的终端,以便添加标签到我的文件: - -vk-filetags-interactive-adding-wrapper-with-gnome-terminal.sh -``` -#!/bin/sh - -/usr/bin/gnome-terminal \ - --geometry=85x15+330+5 \ - --tab-with-profile=big \ - --hide-menubar \ - -x /home/vk/src/filetags/filetags.py --interactive "${@}" - -#end - -``` - -在geeqie中,你可以在` Edit > Preferences > Preferences…>键盘`。我将`t`与`filetags`命令相关联。 - -标签脚本还能够从单个文件或一组文件中删除标记。它基本上使用与上面相同的方法。唯一的区别是文件标签脚本额外的`--remove`参数: - -remove-tags.desktop -``` -[Desktop Entry] -Name=filetags-remove -GenericName=filetags-remove -Comment= -Exec=/home/vk/src/misc/vk-filetags-interactive-removing-wrapper-with-gnome-terminal.sh %F -Icon= -Terminal=true -Type=Application -Categories=Application;Graphics; -hidden=false -MimeType=image/*;video/*;image/mpo;image/thm -Categories=X-Geeqie; - -``` - -vk-filetags-interactive-removing-wrapper-with-gnome-terminal.sh -``` -#!/bin/sh - -/usr/bin/gnome-terminal \ - --geometry=85x15+330+5 \ - --tab-with-profile=big \ - --hide-menubar \ - -x /home/vk/src/filetags/filetags.py --interactive --remove "${@}" - -#end - -``` - -为了删除标签,我为`T`参数创建了一个键盘快捷方式。 - -##### 在geeqie中使用文件标签 - -当我在geeqie文件浏览器中浏览图像文件时,我选择要标记的文件(一到多个)并按`t`。然后,一个小窗口弹出,要求我提供一个或多个标签。在与` Return `确认后,这些标签被添加到文件名中。 - -删除标签也是一样:选择多个文件,按下`T`,输入要删除的标签,然后用`Return`确认。就是这样。几乎没有[更简单的方法来添加或删除标签到文件][29]。 - -#### 工作流:使用appendfilename重命名高级文件 - -##### 不使用 appendfilename - -重命名一组大型文件可能是一个冗长乏味的过程。对于`2014-04-20T17.09.11_p1100386.jpg`这样的原始文件名,在文件名中添加描述的过程相当烦人。你将按`Ctrl-r`(重命名)在geeqie打开文件重命名对话框。默认情况下,原始名称(没有文件扩展名的文件名称)被标记。因此,如果不希望删除/覆盖文件名(但要追加),则必须按下光标键` `。然后,光标放在基本名称和扩展名之间。输入你的描述(不要忘记初始空格字符),并用`Return`进行确认。 - -##### 在geeqie使中用appendfilename - -使用[appendfilename][30],我的过程得到了简化,可以获得将文本附加到文件名的最佳用户体验:当我在geeqie中按下` a ` (append)时,会弹出一个对话框窗口,询问文本。在`Return`确认后,输入的文本将放置在时间戳和可选标记之间。 - -例如,当我在`2014-04-20T17.09.11_p1100386.jpg`上按下`a`,然后在`Graz`中键入`Pick-nick in Graz`时,文件名变为`2014-04-20T17.09.11_p1100386 Pick-nick in Graz.jpg`。当我再次按下`a`并输入`with 
Susan`时,文件名变为`2014-04-20T17.09.11_p1100386 Pick-nick in Graz with Susan.jpg`。当文件名也获得标记时,附加的文本将附加在标记分隔符之前。 - -这样,我就不必担心覆盖时间戳或标记。重命名的过程对我来说变得更加有趣! - -最好的部分是:当我想要将相同的文本添加到多个选定的文件中时,也可以使用appendfilename。 - -##### 使用geeqie初始appendfilename - -添加一个额外的编辑器到geeqie: ` Edit > Preferences > Configure editor…>New`。然后输入桌面文件定义: - -appendfilename.desktop -``` -[Desktop Entry] -Name=appendfilename -GenericName=appendfilename -Comment= -Exec=/home/vk/src/misc/vk-appendfilename-interactive-wrapper-with-gnome-terminal.sh %F -Icon= -Terminal=true -Type=Application -Categories=Application;Graphics; -hidden=false -MimeType=image/*;video/*;image/mpo;image/thm -Categories=X-Geeqie; - -``` - -同样,我也使用了一个包装脚本,它将为我打开一个新的终端: - -vk-appendfilename-interactive-wrapper-with-gnome-terminal.sh -``` -#!/bin/sh - -/usr/bin/gnome-terminal \ - --geometry=90x5+330+5 \ - --tab-with-profile=big \ - --hide-menubar \ - -x /home/vk/src/appendfilename/appendfilename.py "${@}" - -#end - -``` - -#### 工作流程:播放电影文件 - -在GNU/Linux上,我使用[mplayer][31]回放视频文件。由于geeqie本身不播放电影文件,所以我必须创建一个设置,以便在mplayer中打开电影文件。 - -##### 在geeqie中初始化mplayer的设置 - -我已经使用[xdg-open][32]将电影文件扩展名关联到mplayer。因此,我只需要为geeqie创建一个通用的“open”命令,使用xdg-open打开任何文件及其关联的应用程序。 - -再次访问` Edit > Preferences > Configure editor…`在geeqie中添加`open`的条目: - -open.desktop -``` -[Desktop Entry] -Name=open -GenericName=open -Comment= -Exec=/usr/bin/xdg-open %F -Icon= -Terminal=true -Type=Application -hidden=false -NOMimeType=*; -MimeType=image/*;video/* -Categories=X-Geeqie; - -``` - -当你还将快捷方式`o`(见上文)与geeqie关联时,你就能够打开与其关联的应用程序的视频文件(和其他文件)。 - -##### 使用xdg-open打开电影文件(和其他文件) - -在上面的设置过程之后,当你的geeqie光标位于文件上方时,你只需按下`o`即可。就是如此简洁。 - -#### 工作流:在外部图像编辑器中打开 - -我不太希望能够在GIMP中快速编辑图像文件。因此,我添加了一个快捷方式`g`,并将其与外部编辑器“GNU图像处理程序”(GIMP)关联起来,geeqie已经默认创建了该程序 - -这样,只需按下`g`就可以打开GIMP中的当前图像。 - -#### 工作流程:移动到存档文件夹 - -现在我已经在我的文件名中添加了注释,我想将单个文件移动到`$HOME/archive/events_memories/2014/`,或者将一组文件移动到这个文件夹中的新文件夹中,如`$HOME/archive/events_memories/2014/2014-05-08 business marathon after show - party`。 - -通常的方法是选择一个或多个文件,并将它们移动到具有快捷方式`Ctrl-m`的文件夹中。 - -何等繁复无趣之至! - -因此,我(再次)编写了一个Python脚本,它为我完成了这项工作:[move2archive][33](简而言之:` m2a `需要一个或多个文件作为命令行参数。然后,出现一个对话框,我可以在其中输入一个可选文件夹名。当我不输入任何东西,但按`Return`,文件被移动到相应年份的文件夹。当我输入一个类似`business marathon after show - party`的文件夹名称时,第一个图像文件的日期戳被附加到该文件夹(`$HOME/archive/events_memories/2014/2014-05-08 business marathon after show - party`),得到的文件夹是(`$HOME/archive/events_memories/2014/2014-05-08 Business-Marathon After-Show-Party`),并移动文件。 - -再一次:我在geeqie中,选择一个或多个文件,按`m`(移动),或者只按`Return`(没有特殊的子文件夹),或者输入一个描述性文本,这是要创建的子文件夹的名称(可选不带日期戳)。 - -**没有一个图像管理工具像我的geeqie一样通过快捷键快速且有趣的使用 appendfilename和move2archive完成工作。** - -##### 在geeqie里初始化m2a的相关设置 - -同样,向geeqie添加`m2a`是一个手动步骤:“编辑>首选项>配置编辑器……”然后创建一个带有“New”的附加条目。在这里,你可以定义一个新的桌面文件,如下所示: - -m2a.desktop -``` -[Desktop Entry] -Name=move2archive -GenericName=move2archive -Comment=Moving one or more files to my archive folder -Exec=/home/vk/src/misc/vk-m2a-interactive-wrapper-with-gnome-terminal.sh %F -Icon= -Terminal=true -Type=Application -Categories=Application;Graphics; -hidden=false -MimeType=image/*;video/*;image/mpo;image/thm -Categories=X-Geeqie; - -``` -包装器脚本的`vk-m2a-interactive-wrapper-with-gnome-terminal.sh `是必要的,因为我想要弹出一个新的终端窗口,以便我的文件进入我指定的目标文件夹: - -vk-m2a-interactive-wrapper-with-gnome-terminal.sh -``` -#!/bin/sh - -/usr/bin/gnome-terminal \ - --geometry=157x56+330+5 \ - --tab-with-profile=big \ - --hide-menubar \ - -x /home/vk/src/m2a/m2a.py --pauseonexit "${@}" - -#end - -``` - -在geeqie中,你可以在`Edit > Preferences > Preferences ... 
> Keyboard`将`m`与`m2a`命令相关联。 - -#### 工作流程:旋转图像(无损) - -通常,我的数码相机会自动将人像照片标记为人像照片。然而,在某些特定的情况下(比如从主题上方拍照),我的相机会出错。在那些**罕见的情况下**,我必须手动修正方向。 - -你必须知道,JPEG文件格式是一种有损格式,应该只用于照片,而不是计算机生成的东西,如屏幕截图或图表。以傻瓜方式旋转JPEG图像文件通常会解压/可视化图像文件,旋转生成新的图像,然后重新编码结果。这将导致生成的图像[比原始图像质量差得多][5]。 - -因此,你应该使用无损方法来旋转JPEG图像文件。 - -再一次,我添加了一个“外部编辑器”到geeqie:`Edit > Preferences > Configure Editors ... > New`。在这里,我添加了两个条目:一个用于旋转270度(即逆时针旋转90度),另一个用于使用[exiftran][34]旋转90度(逆时针旋转90度): - -rotate-270.desktop -``` -[Desktop Entry] -Version=1.0 -Type=Application -Name=Losslessly rotate JPEG image counterclockwise - -# call the helper script -TryExec=exiftran -Exec=exiftran -p -2 -i -g %f - -# Desktop files that are usable only in Geeqie should be marked like this: -Categories=X-Geeqie; -OnlyShowIn=X-Geeqie; - -# Show in menu "Edit/Orientation" -X-Geeqie-Menu-Path=EditMenu/OrientationMenu - -MimeType=image/jpeg; - -``` - -rotate-90.desktop -``` -[Desktop Entry] -Version=1.0 -Type=Application -Name=Losslessly rotate JPEG image clockwise - -# call the helper script -TryExec=exiftran -Exec=exiftran -p -9 -i -g %f - -# Desktop files that are usable only in Geeqie should be marked like this: -Categories=X-Geeqie; -OnlyShowIn=X-Geeqie; - -# Show in menu "Edit/Orientation" -X-Geeqie-Menu-Path=EditMenu/OrientationMenu - -# It can be made verbose -# X-Geeqie-Verbose=true - -MimeType=image/jpeg; - -``` - -我为“[”(逆时针方向)和“]”(逆时针方向)创建了geeqie快捷键。 - -#### 工作流程:可视化GPS坐标 - -我的数码相机有一个GPS传感器,它在JPEG文件的Exif元数据中存储当前的地理位置。位置数据以[WGS 84][35]格式存储,如“47,58,26.73;16、23、55.51”(纬度;经度)。这一方式可读性较差,从我所期望的意义上讲:要么是地图,要么是位置名称。因此,我向geeqie添加了一些功能,这样我就可以在[OpenStreetMap][36]上看到单个图像文件的位置: `Edit > Preferences > Configure Editors ... > New` - -photolocation.desktop -``` -[Desktop Entry] -Name=vkphotolocation -GenericName=vkphotolocation -Comment= -Exec=/home/vk/src/misc/vkphotolocation.sh %F -Icon= -Terminal=true -Type=Application -Categories=Application;Graphics; -hidden=false -MimeType=image/bmp;image/gif;image/jpeg;image/jpg;image/pjpeg;image/png;image/tiff;image/x-bmp;image/x-gray;image/x-icb;image/x-ico;image/x-png;image/x-portable-anymap;image/x-portable-bitmap;image/x-portable-graymap;image/x-portable-pixmap;image/x-xbitmap;image/x-xpixmap;image/x-pcx;image/svg+xml;image/svg+xml-compressed;image/vnd.wap.wbmp; - -``` - -这就调用了我的名为`vkphotolocation.sh`的包装脚本,它使用[ExifTool][37]让[Marble][38]能够读取和可视化的适当格式并提取坐标: - -vkphotolocation.sh -``` -#!/bin/sh - -IMAGEFILE="${1}" -IMAGEFILEBASENAME=`basename ${IMAGEFILE}` - -COORDINATES=`exiftool -c %.6f "${IMAGEFILE}" | awk '/GPS Position/ { print $4 " " $6 }'` - -if [ "x${COORDINATES}" = "x" ]; then - zenity --info --title="${IMAGEFILEBASENAME}" --text="No GPS-location found in the image file." -else - /usr/bin/marble --latlon "${COORDINATES}" --distance 0.5 -fi - -#end - -``` - -映射到键盘快捷键“G”,我可以快速地得到**单个图像文件的映射位置位置**。 - -当我想将多个JPEG图像文件的**位置可视化为路径**时,我使用[GpsPrune][39]。我无法派生出GpsPrune将一组文件作为命令行参数的方法。正因为如此,我必须手动启动GpsPrune,`选择一组文件或一个文件夹>添加照片`。 - -通过这种方式,我可以为OpenStreetMap地图上的每个JPEG位置获得一个点(如果配置为这样)。通过单击这样一个点,我可以得到相应图像的详细信息。 - -如果你恰好在国外拍摄照片,可视化GPS位置对**在文件名中添加描述**大有帮助! 
- -#### 工作流程:根据GPS坐标过滤照片 - -这并非我的工作流程。为了完整起见,我列出该工作流对应工具的特性。我想做的就是从一大堆图片中寻找那些在一定区域内(范围或点+距离)的照片。 - -到目前为止,我只找到了[DigiKam][40],它能够[根据矩形区域进行过滤][41]。如果你知道其他工具,请将其添加到下面的评论或写一封电子邮件。 - -#### 工作流:显示给定集合的子集 - -如上面的需求所述,我希望能够在一个文件夹中定义一组子文件,以便将这个小集合呈现给其他人。 - -工作流程非常简单:我向选择的文件添加一个标记(通过` t ` /filetags)。为此,我使用标记`sel`,它是“selection”的缩写。在标记了一组文件之后,我可以按下` s `,它与一个脚本相关联,该脚本只显示标记为` sel `的文件。 - -当然,这也适用于任何标签或标签组合。因此,用同样的方法,你可以得到一个适当的概述,你的婚礼上的所有照片都标记着“教堂”和“戒指”。 - -很棒的功能,不是吗?:-) - -##### 根据标签和geeqie初始设置文件标签 - -你必须定义一个额外的“外部编辑器”:`Edit > Preferences > Configure Editors ... > New`: - -filter-tags.desktop -``` -[Desktop Entry] -Name=filetag-filter -GenericName=filetag-filter -Comment= -Exec=/home/vk/src/misc/vk-filetag-filter-wrapper-with-gnome-terminal.sh -Icon= -Terminal=true -Type=Application -Categories=Application;Graphics; -hidden=false -MimeType=image/*;video/*;image/mpo;image/thm -Categories=X-Geeqie; - -``` - -再次调用我编写的包装脚本: - -vk-filetag-filter-wrapper-with-gnome-terminal.sh -``` -#!/bin/sh - -/usr/bin/gnome-terminal \ - --geometry=85x15+330+5 \ - --hide-menubar \ - -x /home/vk/src/filetags/filetags.py --filter - -#end - -``` - -带参数`--filter`的`filetags`基本上完成的是:用户被要求输入一个或多个标签。然后,当前文件夹中所有匹配的文件都能使用[符号链接][42]都链接到` $HOME/.filetags_tagfilter/ `。然后,启动一个新的geeqie实例,显示链接的文件。 - -在退出这个新的geeqie实例之后,你将从该实例调用了选择过程中看到旧的geeqie实例。 - -#### 用一个真实的案例来总结 - -Wow, 这是一篇很长的博客文章。难怪你可能已经忘了之前的概述。总结一下我在geeqie(扩展了标准功能集)中可以做的事情,我有一个很酷的总结: - -快捷功能 `m` m2a `o` 打开(针对非图像文件) `a` 在文件名里添加字段 `t` 文件标签(添加) `T` 文件标签(删除) `s` 文件标签(排序) `g` gimp `G` 显示GPS信息 `[` 无损的逆时针旋转 `]` 无损的顺时针旋转 `Ctrl-e` EXIF图像信息 `f` 全屏显示 - -一些针对文件名的(包括它的路径)及我用来操作组件的示例: -``` - /this/is/a/folder/2014-04-20T17.09 Pick-nick in Graz -- food graz.jpg - [ m2a ] [ date2name ] [ appendfilename ] [filetags] - -``` -在示例中,我按照以下步骤将照片从相机保存到存档:我将SD存储卡放入计算机的SD卡读卡器中。然后我运行[getdigicamdata.sh][23]。完成之后,我在geeqie中打开`$HOME/tmp/digicam/tmp/`。我浏览了一下照片,把那些不成功的删除了。如果有一个图像的方向错误,我用`[`or`]`纠正它。 - -在第二步中,我向我认为值得评论的文件添加描述(` a `)。每当我想添加标签时,我也这样做:我快速地标记所有应该共享标签的文件(` Ctrl ` \+鼠标点击),并使用[filetags][28] (` t `)进行标记。 - -要组合来自给定事件的文件,我选中相应的文件,将它们移动到年度归档文件夹中的“event-folder”,并通过在[move2archive][33] (`m `)中键入事件描述,其余的(非特殊的文件夹)由move2archive (`m `)直接移动到年度归档中,而不需要声明事件描述。 - -为了完成我的工作流程,我删除了SD卡上的所有文件,把它从操作系统上弹出,然后把它放回我的数码相机里。 - -以上。 - -因为这种工作流程几乎不需要任何开销,所以评论、标记和归档照片不再是一项乏味的工作。 - -### 最后 - -所以,这是一个详细描述我关于照片和电影的工作流程的叙述。你可能已经发现了我可能感兴趣的其他东西。所以请不要犹豫,请使用下面的链接留下评论或电子邮件。 -我也希望得到反馈,如果我的工作流程适用于你。并且:如果你已经发布了你的工作流程或者找到了其他人工作流程的描述,也请留下评论! -及时行乐,莫让错误的工具或低效的方法浪费了我们的人生! - -### 其他工具 - -阅读关于[本文中关于 gThumb 的部分][43]. - -当你觉得你以上文中所叙述的符合你的需求时,请根据相关的建议来选择对应的工具。 - -### 邮件回复 - -> Date: Sat, 26 Aug 2017 22:05:09 +0200 - -> 你好卡尔, -我喜欢你的文章,喜欢和memacs一起工作,当然还有orgmode,但是我对python不是很熟悉……在你的博客文章“管理数码照片”,你写了关于打开视频与[Geeqie][26]。是的,但是我在浏览器里看不到任何视频缩略图。你有什么建议吗? 
- -> 谢谢你,托马斯 - - - -你好托马斯, - -谢谢你的美言。当有人发现我的工作对他/她的生活有用时,我总是感觉很棒。 -不幸的是,大多数时候,我从未听到过这些。 -是的,我有时使用Geeqie来可视化文件夹,这些文件夹不仅包含图像文件,还包含电影文件。在这些情况下,我没有看到任何视频的缩略图。你说得对,有很多文件浏览器可以显示视频的预览图像。 -坦白地说,我从来没有想过视频缩略图,我也不怀念它们。在我的首选项和搜索引擎上做了一个快速的研究,并没有发现在Geeqie中启用视频预览的相关方法。所以这里要说声抱歉。 - - - --------------------------------------------------------------------------------- - -via: http://karl-voit.at/managing-digital-photographs/ - -作者:[Karl Voit][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://karl-voit.at -[1]:https://en.wikipedia.org/wiki/Jpeg -[2]:http://en.wikipedia.org/wiki/Vendor_lock-in -[3]:https://en.wikipedia.org/wiki/Raw_image_format -[4]:http://www.gimp.org/ -[5]:http://petapixel.com/2012/08/14/why-you-should-always-rotate-original-jpeg-photos-losslessly/ -[6]:https://en.wikipedia.org/wiki/Gps -[7]:https://en.wikipedia.org/wiki/Iso_date -[8]:https://en.wikipedia.org/wiki/Ntfs -[9]:https://en.wikipedia.org/wiki/Exif -[10]:http://www.isisinform.com/reinventing-knowledge-the-medieval-controversy-of-alphabetical-order/ -[11]:https://en.wikipedia.org/wiki/File_name_extension -[12]:http://karl-voit.at/tagstore/en/papers.shtml -[13]:https://en.wikipedia.org/wiki/Hashtag -[14]:https://en.wikipedia.org/wiki/Dot-file -[15]:https://en.wikipedia.org/wiki/NTFS#Alternate_data_streams_.28ADS.29 -[16]:https://github.com/novoid/Memacs/blob/master/docs/memacs_filenametimestamps.org -[17]:https://github.com/novoid/Memacs -[18]:http://www.cis.upenn.edu/~bcpierce/unison/ -[19]:https://github.com/novoid/LaTeX-KOMA-template -[20]:http://orgmode.org/ -[21]:http://karl-voit.at/tags/emacs -[22]:https://twitter.com/search?q%3D%2523orgmode&src%3Dtypd -[23]:https://github.com/novoid/getdigicamdata.sh -[24]:http://www.sentex.net/%3Ccode%3Emwandel/jhead/ -[25]:https://github.com/novoid/date2name -[26]:http://geeqie.sourceforge.net/ -[27]:http://xee.c3.cx/ -[28]:https://github.com/novoid/filetag -[29]:http://karl-voit.at/tagstore/ -[30]:https://github.com/novoid/appendfilename -[31]:http://www.mplayerhq.hu -[32]:https://wiki.archlinux.org/index.php/xdg-open -[33]:https://github.com/novoid/move2archive -[34]:http://manpages.ubuntu.com/manpages/raring/man1/exiftran.1.html -[35]:https://en.wikipedia.org/wiki/WGS84#A_new_World_Geodetic_System:_WGS_84 -[36]:http://www.openstreetmap.org/ -[37]:http://www.sno.phy.queensu.ca/~phil/exiftool/ -[38]:http://userbase.kde.org/Marble/Tracking -[39]:http://activityworkshop.net/software/gpsprune/ -[40]:https://en.wikipedia.org/wiki/DigiKam -[41]:https://docs.kde.org/development/en/extragear-graphics/digikam/using-kapp.html#idp7659904 -[42]:https://en.wikipedia.org/wiki/Symbolic_link -[43]:http://karl-voit.at/2017/02/19/gthumb diff --git a/translated/tech/20180330 Go on very small hardware Part 1.md b/translated/tech/20180330 Go on very small hardware Part 1.md deleted file mode 100644 index cff72cade7..0000000000 --- a/translated/tech/20180330 Go on very small hardware Part 1.md +++ /dev/null @@ -1,510 +0,0 @@ -Go语言在极小硬件上的运用(第一部分) -============================================================ - - -_Go_ 语言,能在多低下的配置上运行并发挥作用呢? 
- -我最近购买了一个特别便宜的开发板: - - [![STM32F030F4P6](https://ziutek.github.io/images/mcu/f030-demo-board/board.jpg)][2] - -我购买它的理由有三个。首先,我(作为程序员)从未接触过STM320系列的开发板。其次, STM32F10x系列使用频率也在降低。STM320系列的MCU很便宜,有更新的外设,对系列产品进行了改进,问题修复也做得更好了。最后,为了这篇文章,我选用了这一系列中最低配置的开发板,整件事情就变得有趣起来了。 - - -### 硬件部分 - -[STM32F030F4P6][3] 给人留下了很深的印象: - -* CPU: [Cortex M0][1] 48 MHz (最低配置,只有12000个逻辑门电路), - -* RAM: 4 KB, - -* Flash: 16 KB, - -* ADC, SPI, I2C, USART 和几个定时器, - -以上这些采用了TSSOP20封装。正如你所见,这是一个很小的32位系统。 - -### 软件部分 - -如果你想知道如何在这块开发板上使用 [Go][4] 编程,你需要反复阅读硬件手册。真实情况是:有人在Go 编译器中给Cortex-M0提供支持,可能性很小。而且,这还仅仅只是第一个要解决的问题。 - -我会使用[Emgo][5],但别担心,之后你会看到,它如何让Go在如此小的系统上尽可能发挥作用。 - -在我拿到这块开发板之前,对 [stm32/hal][6] 系列下的F0 MCU 没有任何支持。在简单研究 [参考手册][7]后,我发现 STM32F0系列是STM32F3的一个基础,这让在新端口上开发的工作变得容易了一些。 - -如果你想接着本文的步骤做下去,需要先安装Emgo - -``` -cd $HOME -git clone https://github.com/ziutek/emgo/ -cd emgo/egc -go install - -``` - -然后设置一下环境变量 - -``` -export EGCC=path_to_arm_gcc # eg. /usr/local/arm/bin/arm-none-eabi-gcc -export EGLD=path_to_arm_linker # eg. /usr/local/arm/bin/arm-none-eabi-ld -export EGAR=path_to_arm_archiver # eg. /usr/local/arm/bin/arm-none-eabi-ar - -export EGROOT=$HOME/emgo/egroot -export EGPATH=$HOME/emgo/egpath - -export EGARCH=cortexm0 -export EGOS=noos -export EGTARGET=f030x6 - -``` - -更详细的说明可以在 [Emgo][8]官网上找到。 - -要确保 egc 在你的PATH 中。 你可以使用 `go build` 来代替 `go install`,然后把 egc 复制到你的 _$HOME/bin_ 或 _/usr/local/bin_ 中。 - -现在,为你的第一个Emgo程序创建一个新文件夹,随后把示例中链接器脚本复制过来: - -``` -mkdir $HOME/firstemgo -cd $HOME/firstemgo -cp $EGPATH/src/stm32/examples/f030-demo-board/blinky/script.ld . - -``` - -### 最基本程序 - -在 _main.go_ 文件中创建一个最基本的程序: - -``` -package main - -func main() { -} - -``` - -文件编译没有出现任何问题: - -``` -$ egc -$ arm-none-eabi-size cortexm0.elf - text data bss dec hex filename - 7452 172 104 7728 1e30 cortexm0.elf - -``` - -第一次编译可能会花点时间。编译后产生的二进制占用了7624个字节的Flash空间(文本+数据)。对于一个什么都没做的程序来说,占用的空间有些大。还剩下8760字节,可以用来做些有用的事。 - -不妨试试传统的 _Hello, World!_ 程序: - -``` -package main - -import "fmt" - -func main() { - fmt.Println("Hello, World!") -} - -``` - -不幸的是,这次结果有些糟糕: - -``` -$ egc -/usr/local/arm/bin/arm-none-eabi-ld: /home/michal/P/go/src/github.com/ziutek/emgo/egpath/src/stm32/examples/f030-demo-board/blog/cortexm0.elf section `.text' will not fit in region `Flash' -/usr/local/arm/bin/arm-none-eabi-ld: region `Flash' overflowed by 10880 bytes -exit status 1 - -``` - - _Hello, World!_ 需要 STM32F030x6 上至少32KB的Flash空间. - -_fmt_ 包强制包含整个 _strconv_ 和 _reflect_ 包。这三个包,即使在精简版本中的Emgo中,占用空间也很大。我们不能使用这个例子了。有很多的应用不需要好看的文本输出。通常,一个或多个LED,或者七段数码管显示就足够了。不过,在第二部分,我会尝试使用 _strconv_ 包来格式化,并在UART 上显示一些数字和文本。 - - -### 闪烁 - -我们的开发板上有一个与PA4引脚和 VCC 相连的LED。这次我们的代码稍稍长了一些: - -``` -package main - -import ( - "delay" - - "stm32/hal/gpio" - "stm32/hal/system" - "stm32/hal/system/timer/systick" -) - -var led gpio.Pin - -func init() { - system.SetupPLL(8, 1, 48/8) - systick.Setup(2e6) - - gpio.A.EnableClock(false) - led = gpio.A.Pin(4) - - cfg := &gpio.Config{Mode: gpio.Out, Driver: gpio.OpenDrain} - led.Setup(cfg) -} - -func main() { - for { - led.Clear() - delay.Millisec(100) - led.Set() - delay.Millisec(900) - } -} - -``` - -按照惯例, _init_ 函数用来初始化和配置外设。 - -`system.SetupPLL(8, 1, 48/8)` 用来配置RCC,将外部的8 MHz振荡器的PLL作为系统时钟源。PLL 分频器设置为1,倍频数设置为 48/8 =6,这样系统时钟频率为48MHz. - -`systick.Setup(2e6)` 将 Cortex-M SYSTICK 时钟作为系统时钟,每隔 2e6次纳秒运行一次(每秒钟500次)。 - -`gpio.A.EnableClock(false)` 开启了 GPIO A 口的时钟。_False_ 意味着这一时钟在低功耗模式下会被禁用,但在STM32F0系列中并未实现这一功能。 - -`led.Setup(cfg)` 设置 PA4 引脚为开漏输出. - -`led.Clear()` 将 PA4引脚设为低, 在开漏设置中,打开LED. - -`led.Set()` 将 PA4 设为高电平状态 , 关掉LED. 
- -编译这个代码: -``` -$ egc -$ arm-none-eabi-size cortexm0.elf - text data bss dec hex filename - 9772 172 168 10112 2780 cortexm0.elf - -``` - -正如你所看到的,闪烁占用了2320 字节,比最基本程序占用空间要大。还有6440字节的剩余空间。 - -看看代码是否能运行: - -``` -$ openocd -d0 -f interface/stlink.cfg -f target/stm32f0x.cfg -c 'init; program cortexm0.elf; reset run; exit' -Open On-Chip Debugger 0.10.0+dev-00319-g8f1f912a (2018-03-07-19:20) -Licensed under GNU GPL v2 -For bug reports, read - http://openocd.org/doc/doxygen/bugs.html -debug_level: 0 -adapter speed: 1000 kHz -adapter_nsrst_delay: 100 -none separate -adapter speed: 950 kHz -target halted due to debug-request, current mode: Thread -xPSR: 0xc1000000 pc: 0x0800119c msp: 0x20000da0 -adapter speed: 4000 kHz -** Programming Started ** -auto erase enabled -target halted due to breakpoint, current mode: Thread -xPSR: 0x61000000 pc: 0x2000003a msp: 0x20000da0 -wrote 10240 bytes from file cortexm0.elf in 0.817425s (12.234 KiB/s) -** Programming Finished ** -adapter speed: 950 kHz - -``` - -在这篇文章中,这是我第一次,将一个短视频转换成[动画PNG][9]。我对此印象很深,再见了 YouTube. 对于IE用户,我很抱歉,更多信息请看[apngasm][10].我本应该学习 HTML5,但现在,APNG是我最喜欢的,用来播放循环短视频的方法了。 - -![STM32F030F4P6](https://ziutek.github.io/images/mcu/f030-demo-board/blinky.png) - -### 更多的Go语言编程 - -如果你不是一个Go 程序员,但你已经听说过一些关于Go 语言的事情,你可能会说:“Go语法很好,但跟C比起来,并没有明显的提升.让我看看 _Go 语言_ 的 _channels_ 和 _goroutines!” - - -接下来我会一一展示: - -``` -import ( - "delay" - - "stm32/hal/gpio" - "stm32/hal/system" - "stm32/hal/system/timer/systick" -) - -var led1, led2 gpio.Pin - -func init() { - system.SetupPLL(8, 1, 48/8) - systick.Setup(2e6) - - gpio.A.EnableClock(false) - led1 = gpio.A.Pin(4) - led2 = gpio.A.Pin(5) - - cfg := &gpio.Config{Mode: gpio.Out, Driver: gpio.OpenDrain} - led1.Setup(cfg) - led2.Setup(cfg) -} - -func blinky(led gpio.Pin, period int) { - for { - led.Clear() - delay.Millisec(100) - led.Set() - delay.Millisec(period - 100) - } -} - -func main() { - go blinky(led1, 500) - blinky(led2, 1000) -} - -``` - -代码改动很小: 添加了第二个LED,上一个例子中的 _main_ 函数被重命名为 _blinky_ 并且需要提供两个参数. _Main_ 在新的goroutine 中先调用 _blinky_, 所以两个LED灯在并行使用. 值得一提的是, _gpio.Pin_ 可以同时访问同一GPIO口的不同引脚。 - -Emgo 还有很多不足。其中之一就是你需要提前规定goroutines(tasks)的最大执行数量.是时候修改 _script.ld_ 了: - -``` -ISRStack = 1024; -MainStack = 1024; -TaskStack = 1024; -MaxTasks = 2; - -INCLUDE stm32/f030x4 -INCLUDE stm32/loadflash -INCLUDE noos-cortexm - -``` - -栈的大小需要靠猜,现在还不用关心这一点。 - - -``` -$ egc -$ arm-none-eabi-size cortexm0.elf - text data bss dec hex filename - 10020 172 172 10364 287c cortexm0.elf - -``` -另一个LED 和 goroutine 一共占用了248字节的Flash空间. - - -![STM32F030F4P6](https://ziutek.github.io/images/mcu/f030-demo-board/goroutines.png) - -### Channels - -Channels 是Go语言中goroutines之间相互通信的一种[推荐方式][11].Emgo 甚至能允许通过 _中断处理_ 来使用缓冲channel. 下一个例子就展示了这种情况. 
- -``` -package main - -import ( - "delay" - "rtos" - - "stm32/hal/gpio" - "stm32/hal/irq" - "stm32/hal/system" - "stm32/hal/system/timer/systick" - "stm32/hal/tim" -) - -var ( - leds [3]gpio.Pin - timer *tim.Periph - ch = make(chan int, 1) -) - -func init() { - system.SetupPLL(8, 1, 48/8) - systick.Setup(2e6) - - gpio.A.EnableClock(false) - leds[0] = gpio.A.Pin(4) - leds[1] = gpio.A.Pin(5) - leds[2] = gpio.A.Pin(9) - - cfg := &gpio.Config{Mode: gpio.Out, Driver: gpio.OpenDrain} - for _, led := range leds { - led.Set() - led.Setup(cfg) - } - - timer = tim.TIM3 - pclk := timer.Bus().Clock() - if pclk < system.AHB.Clock() { - pclk *= 2 - } - freq := uint(1e3) // Hz - timer.EnableClock(true) - timer.PSC.Store(tim.PSC(pclk/freq - 1)) - timer.ARR.Store(700) // ms - timer.DIER.Store(tim.UIE) - timer.CR1.Store(tim.CEN) - - rtos.IRQ(irq.TIM3).Enable() -} - -func blinky(led gpio.Pin, period int) { - for range ch { - led.Clear() - delay.Millisec(100) - led.Set() - delay.Millisec(period - 100) - } -} - -func main() { - go blinky(leds[1], 500) - blinky(leds[2], 500) -} - -func timerISR() { - timer.SR.Store(0) - leds[0].Set() - select { - case ch <- 0: - // Success - default: - leds[0].Clear() - } -} - -//c:__attribute__((section(".ISRs"))) -var ISRs = [...]func(){ - irq.TIM3: timerISR, -} - -``` - -与之前例子相比较下的不同: - -1. 添加了第三个LED,并连接到 PA9 引脚.(UART头的TXD引脚) - -2. 时钟(TIM3)作为中断源. - -3. 新函数 _timerISR_ 用来处理 _irq.TIM3_ 的中断. - -4. 新增容量为1 的缓冲channel 是为了 _timerISR_ 和 _blinky_ goroutines 之间的通信. - -5. _ISRs_ 数组作为 _中断向量表_,是 更大的_异常向量表_ 的一部分. - -6. _blinky中的for语句_ 被替换成 _range语句_ . - -为了方便起见,所有的LED,或者说他们的引脚,都被放在 _leds_ 这个数组里. 另外, 所有引脚在被配置为输出之前,都设置为一种已知的初始状态(高电平状态). - -在这个例子里,我们想让时钟以1 kHz的频率运行。为了配置预分频器,我们需要知道它的输入时钟频率。通过参考手册我们知道,输入时钟频率在APBCLK = AHBCLK时,与APBCLK 相同,反之等于2倍的APBCLK。 - -如果CNT寄存器增加 1kHz,那么ARR寄存器的值等于 _更新事件_ (重载事件)在毫秒中的计数周期。 为了让更新事件产生中断,必须要设置DIER 寄存器中的UIE位。CEN位能启动时钟。 - -时钟外设在低功耗模式下必须启用,为了自身能在CPU处于休眠时保持运行: `timer.EnableClock(true)`。这在STM32F0中无关紧要,但对代码可移植性却十分重要。 - -_timerISR_ 函数处理 _irq.TIM3_ 的中断请求。 `timer.SR.Store(0)` 会清除SR寄存器里的所有事件标志,无效化向[NVIC][12]发出的所有中断请求。凭借经验,由于中断请求无效的延时性,需要在程序一开始马上清除所有的中断标志。这避免了无意间再次调用处理。为了确保万无一失,需要先清除标志,再读取,但是在我们的例子中,清除标志就已经足够了。 - -下面的这几行代码: - -``` -select { -case ch <- 0: - // Success -default: - leds[0].Clear() -} - -``` - -是Go语言中,如何在channel 上非阻塞地发送消息的方法。 中断处理程序无法一直等待channel 中的空余空间。如果channel已满,则执行default,开发板上的LED就会开启,直到下一次中断。 - -_ISRs_ 数组包含了中断向量表。 `//c:__attribute__((section(".ISRs")))` 会导致链接器将数组插入到 .ISRs section 中。 - -_blinky’s for_ 循环的新写法: - -``` -for range ch { - led.Clear() - delay.Millisec(100) - led.Set() - delay.Millisec(period - 100) -} - -``` - -等价于: - -``` -for { - _, ok := <-ch - if !ok { - break // Channel closed. - } - led.Clear() - delay.Millisec(100) - led.Set() - delay.Millisec(period - 100) -} - -``` - -注意,在这个例子中,我们不在意channel中收到的值,我们只对其接受到的消息感兴趣。我们可以在声明时,将channel元素类型中的 _int_ 用空结构体来代替,发送消息时, 用`struct{}{}` 结构体的值代替0,但这部分对新手来说可能会有些陌生。 - -让我们来编译一下代码: - -``` -$ egc -$ arm-none-eabi-size cortexm0.elf - text data bss dec hex filename - 11096 228 188 11512 2cf8 cortexm0.elf - -``` - -新的例子占用了11324字节的Flash 空间,比上一个例子多占用了1132字节。 - -采用现在的时序,两个 _blinky_ goroutines 从channel 中获取数据的速度,比 _timerISR_ 发送数据的速度要快。所以它们在同时等待新数据,你还能观察到 _select_ 的随机性,这也是[Go 规范][13]所要求的. 
- -![STM32F030F4P6](https://ziutek.github.io/images/mcu/f030-demo-board/channels1.png) - -开发板上的LED一直没有亮起,说明channel 从未出现过溢出。 - -我们可以加快消息发送的速度,将 `timer.ARR.Store(700)` 改为 `timer.ARR.Store(200)`。 现在 _timerISR_ 每秒钟发送5条消息,但是两个接收者加起来,每秒也只能接受4条消息。 - - -![STM32F030F4P6](https://ziutek.github.io/images/mcu/f030-demo-board/channels2.png) - -正如你所看到的, _timerISR_ 开启黄色LED灯,意味着 channel 上已经没有剩余空间了。 - -第一部分到这里就结束了。你应该知道,这一部分并未展示Go中最重要的部分, _接口_. - -Goroutine 和channel 只是一些方便好用的语法。你可以用自己的代码来替换它们,这并不容易,但也可以实现。 接口是Go 语言的基础。这是文章中 [第二部分][14]所要提到的. - -在Flash上我们还有些剩余空间. - --------------------------------------------------------------------------------- - -via: https://ziutek.github.io/2018/03/30/go_on_very_small_hardware.html - -作者:[ Michał Derkacz][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://ziutek.github.io/ -[1]:https://en.wikipedia.org/wiki/ARM_Cortex-M#Cortex-M0 -[2]:https://ziutek.github.io/2018/03/30/go_on_very_small_hardware.html -[3]:http://www.st.com/content/st_com/en/products/microcontrollers/stm32-32-bit-arm-cortex-mcus/stm32-mainstream-mcus/stm32f0-series/stm32f0x0-value-line/stm32f030f4.html -[4]:https://golang.org/ -[5]:https://github.com/ziutek/emgo -[6]:https://github.com/ziutek/emgo/tree/master/egpath/src/stm32/hal -[7]:http://www.st.com/resource/en/reference_manual/dm00091010.pdf -[8]:https://github.com/ziutek/emgo -[9]:https://en.wikipedia.org/wiki/APNG -[10]:http://apngasm.sourceforge.net/ -[11]:https://blog.golang.org/share-memory-by-communicating -[12]:http://infocenter.arm.com/help/topic/com.arm.doc.ddi0432c/Cihbecee.html -[13]:https://golang.org/ref/spec#Select_statements -[14]:https://ziutek.github.io/2018/04/14/go_on_very_small_hardware2.html diff --git a/translated/tech/20181227 Linux commands for measuring disk activity.md b/translated/tech/20181227 Linux commands for measuring disk activity.md deleted file mode 100644 index 1c93b212c4..0000000000 --- a/translated/tech/20181227 Linux commands for measuring disk activity.md +++ /dev/null @@ -1,252 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (laingke) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Linux commands for measuring disk activity) -[#]: via: (https://www.networkworld.com/article/3330497/linux/linux-commands-for-measuring-disk-activity.html) -[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/) - -用于测量磁盘活动的 Linux 命令 -====== -![](https://images.idgesg.net/images/article/2018/12/tape-measure-100782593-large.jpg) -Linux 系统提供了一套方便的命令,帮助您查看磁盘有多忙,而不仅仅是磁盘有多满。在本文中,我们将研究五个非常有用的命令,用于查看磁盘活动。其中两个命令(iostat 和 ioping)可能必须添加到您的系统中,这两个相同的命令要求您使用 sudo 特权,但是这五个命令都提供了查看磁盘活动的有用方法。 - -这些命令中最简单、最明显的一个可能是 **dstat** 了。 - -### dtstat - -尽管 **dstat** 命令以字母 "d" 开头,但它提供的统计信息远远不止磁盘活动。如果您只想查看磁盘活动,可以使用 **-d** 选项。如下所示,您将得到一个磁盘读/写测量值的连续列表,直到使用 a ^c 停止显示为止。注意,在第一个报告之后,显示中的每个后续行将在接下来的时间间隔内报告磁盘活动,缺省值仅为一秒。 - -``` -$ dstat -d --dsk/total- - read writ - 949B 73k - 65k 0 <== first second - 0 24k <== second second - 0 16k - 0 0 ^C -``` - -在 -d 选项后面包含一个数字将把间隔设置为其秒数。 - -``` -$ dstat -d 10 --dsk/total- - read writ - 949B 73k - 65k 81M <== first five seconds - 0 21k <== second five second - 0 9011B ^C -``` - -请注意,报告的数据可能以许多不同的单位显示——例如,M (megabytes), k (kilobytes), and B (bytes). - -如果没有选项,dstat 命令还将显示许多其他信息——指示 CPU 如何使用时间、显示网络和分页活动、报告中断和上下文切换。 - -``` -$ dstat -You did not select any stats, using -cdngy by default. 
---total-cpu-usage-- -dsk/total- -net/total- ---paging-- ---system-- -usr sys idl wai stl| read writ| recv send| in out | int csw - 0 0 100 0 0| 949B 73k| 0 0 | 0 3B| 38 65 - 0 0 100 0 0| 0 0 | 218B 932B| 0 0 | 53 68 - 0 1 99 0 0| 0 16k| 64B 468B| 0 0 | 64 81 ^C -``` - -dstat 命令提供了关于整个 Linux 系统性能的有价值的见解,几乎可以用它灵活而功能强大的命令来代替 vmstat,netstat,iostat 和 ifstat 等较旧的工具集合,该命令结合了这些旧工具的功能。要深入了解 dstat 命令可以提供的其它信息,请参阅这篇关于 [dstat][1] 命令的文章。 - -### iostat - -iostat 命令通过观察设备活动的时间与其平均传输速率之间的关系,帮助监视系统输入/输出设备的加载情况。它有时用于评估磁盘之间的活动平衡。 - -``` -$ iostat -Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU) - -avg-cpu: %user %nice %system %iowait %steal %idle - 0.07 0.01 0.03 0.05 0.00 99.85 - -Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn -loop0 0.00 0.00 0.00 1048 0 -loop1 0.00 0.00 0.00 365 0 -loop2 0.00 0.00 0.00 1056 0 -loop3 0.00 0.01 0.00 16169 0 -loop4 0.00 0.00 0.00 413 0 -loop5 0.00 0.00 0.00 1184 0 -loop6 0.00 0.00 0.00 1062 0 -loop7 0.00 0.00 0.00 5261 0 -sda 1.06 0.89 72.66 2837453 232735080 -sdb 0.00 0.02 0.00 48669 40 -loop8 0.00 0.00 0.00 1053 0 -loop9 0.01 0.01 0.00 18949 0 -loop10 0.00 0.00 0.00 56 0 -loop11 0.00 0.00 0.00 7090 0 -loop12 0.00 0.00 0.00 1160 0 -loop13 0.00 0.00 0.00 108 0 -loop14 0.00 0.00 0.00 3572 0 -loop15 0.01 0.01 0.00 20026 0 -loop16 0.00 0.00 0.00 24 0 -``` - -当然,当您只想关注磁盘时,Linux loop 设备上提供的所有统计信息都会使结果显得杂乱无章。但是,该命令也确实提供了 **-p** 选项,该选项使您可以仅查看磁盘——如以下命令所示。 - -``` -$ iostat -p sda -Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU) - -avg-cpu: %user %nice %system %iowait %steal %idle - 0.07 0.01 0.03 0.05 0.00 99.85 - -Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn -sda 1.06 0.89 72.54 2843737 232815784 -sda1 1.04 0.88 72.54 2821733 232815784 -``` - -请注意 **tps** 是指每秒的传输量。 - -您还可以让 iostat 提供重复的报告。在下面的示例中,我们使用 **-d** 选项每五秒钟进行一次测量。 - -``` -$ iostat -p sda -d 5 -Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU) - -Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn -sda 1.06 0.89 72.51 2843749 232834048 -sda1 1.04 0.88 72.51 2821745 232834048 - -Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn -sda 0.80 0.00 11.20 0 56 -sda1 0.80 0.00 11.20 0 56 -``` - -如果您希望省略第一个(自启动以来的统计信息)报告,请在命令中添加 **-y**。 - -``` -$ iostat -p sda -d 5 -y -Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU) - -Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn -sda 0.80 0.00 11.20 0 56 -sda1 0.80 0.00 11.20 0 56 -``` - -接下来,我们看第二个磁盘驱动器。 - -``` -$ iostat -p sdb -Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU) - -avg-cpu: %user %nice %system %iowait %steal %idle - 0.07 0.01 0.03 0.05 0.00 99.85 - -Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn -sdb 0.00 0.02 0.00 48669 40 -sdb2 0.00 0.00 0.00 4861 40 -sdb1 0.00 0.01 0.00 35344 0 -``` - -### iotop - -**iotop** 命令是类似 top 的实用程序,用于查看磁盘 I/O。它收集 Linux 内核提供的 I/O 使用信息,以便您了解哪些进程在磁盘 I/O 方面的要求最高。在下面的示例中,循环时间被设置为5秒。显示将自动更新,覆盖前面的输出。 - -``` -$ sudo iotop -d 5 -Total DISK READ: 0.00 B/s | Total DISK WRITE: 1585.31 B/s -Current DISK READ: 0.00 B/s | Current DISK WRITE: 12.39 K/s - TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND -32492 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.12 % [kworker/u8:1-ev~_power_efficient] - 208 be/3 root 0.00 B/s 1585.31 B/s 0.00 % 0.11 % [jbd2/sda1-8] - 1 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % init splash - 2 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kthreadd] - 3 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [rcu_gp] - 4 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [rcu_par_gp] - 8 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [mm_percpu_wq] -``` - -### ioping - -**ioping** 
命令是一种完全不同的工具,但是它可以报告磁盘延迟——也就是磁盘响应请求需要多长时间,而这有助于诊断磁盘问题。 - -``` -$ sudo ioping /dev/sda1 -4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=1 time=960.2 us (warmup) -4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=2 time=841.5 us -4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=3 time=831.0 us -4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=4 time=1.17 ms -^C ---- /dev/sda1 (block device 111.8 GiB) ioping statistics --- -3 requests completed in 2.84 ms, 12 KiB read, 1.05 k iops, 4.12 MiB/s -generated 4 requests in 3.37 s, 16 KiB, 1 iops, 4.75 KiB/s -min/avg/max/mdev = 831.0 us / 947.9 us / 1.17 ms / 158.0 us -``` - -### atop - -**atop** 命令,像 **top** 一样提供了大量有关系统性能的信息,包括有关磁盘活动的一些统计信息。 - -``` -ATOP - butterfly 2018/12/26 17:24:19 37d3h13m------ 10ed -PRC | sys 0.03s | user 0.01s | #proc 179 | #zombie 0 | #exit 6 | -CPU | sys 1% | user 0% | irq 0% | idle 199% | wait 0% | -cpu | sys 1% | user 0% | irq 0% | idle 99% | cpu000 w 0% | -CPL | avg1 0.00 | avg5 0.00 | avg15 0.00 | csw 677 | intr 470 | -MEM | tot 5.8G | free 223.4M | cache 4.6G | buff 253.2M | slab 394.4M | -SWP | tot 2.0G | free 2.0G | | vmcom 1.9G | vmlim 4.9G | -DSK | sda | busy 0% | read 0 | write 7 | avio 1.14 ms | -NET | transport | tcpi 4 | tcpo stall 8 | udpi 1 | udpo 0swout 2255 | -NET | network | ipi 10 | ipo 7 | ipfrw 0 | deliv 60.67 ms | -NET | enp0s25 0% | pcki 10 | pcko 8 | si 1 Kbps | so 3 Kbp0.73 ms | - - PID SYSCPU USRCPU VGROW RGROW ST EXC THR S CPUNR CPU CMD 1/1673e4 | - 3357 0.01s 0.00s 672K 824K -- - 1 R 0 0% atop - 3359 0.01s 0.00s 0K 0K NE 0 0 E - 0% - 3361 0.00s 0.01s 0K 0K NE 0 0 E - 0% - 3363 0.01s 0.00s 0K 0K NE 0 0 E - 0% -31357 0.00s 0.00s 0K 0K -- - 1 S 1 0% bash - 3364 0.00s 0.00s 8032K 756K N- - 1 S 1 0% sleep - 2931 0.00s 0.00s 0K 0K -- - 1 I 1 0% kworker/u8:2-e - 3356 0.00s 0.00s 0K 0K -E 0 0 E - 0% - 3360 0.00s 0.00s 0K 0K NE 0 0 E - 0% - 3362 0.00s 0.00s 0K 0K NE 0 0 E - 0% -``` - -如果您 _只_ 想查看磁盘统计信息,则可以使用以下命令轻松进行管理: - -``` -$ atop | grep DSK -$ atop | grep DSK -DSK | sda | busy 0% | read 122901 | write 3318e3 | avio 0.67 ms | -DSK | sdb | busy 0% | read 1168 | write 103 | avio 0.73 ms | -DSK | sda | busy 2% | read 0 | write 92 | avio 2.39 ms | -DSK | sda | busy 2% | read 0 | write 94 | avio 2.47 ms | -DSK | sda | busy 2% | read 0 | write 99 | avio 2.26 ms | -DSK | sda | busy 2% | read 0 | write 94 | avio 2.43 ms | -DSK | sda | busy 2% | read 0 | write 94 | avio 2.43 ms | -DSK | sda | busy 2% | read 0 | write 92 | avio 2.43 ms | -^C -``` - -### 了解磁盘 I/O - -Linux 提供了足够的命令,可以让您很好地了解磁盘的工作强度,并帮助您关注潜在的问题或慢速。希望这些命令中的一个可以告诉您何时需要质疑磁盘性能。偶尔使用这些命令将有助于确保当您需要检查磁盘,特别是忙碌或缓慢的磁盘时可以显而易见地发现它们。 - -加入 [Facebook][2] 和 [LinkedIn][3] 上的 Network World 社区,对最重要的话题发表评论。 - --------------------------------------------------------------------------------- - -via: https://www.networkworld.com/article/3330497/linux/linux-commands-for-measuring-disk-activity.html - -作者:[Sandra Henry-Stocker][a] -选题:[lujun9972][b] -译者:[laingke](https://github.com/laingke) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/ -[b]: https://github.com/lujun9972 -[1]: https://www.networkworld.com/article/3291616/linux/examining-linux-system-performance-with-dstat.html -[2]: https://www.facebook.com/NetworkWorld/ -[3]: https://www.linkedin.com/company/network-world diff --git a/translated/tech/20190404 How writers can get work done better with Git.md b/translated/tech/20190404 How writers can 
get work done better with Git.md new file mode 100644 index 0000000000..213c63bba9 --- /dev/null +++ b/translated/tech/20190404 How writers can get work done better with Git.md @@ -0,0 +1,261 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How writers can get work done better with Git) +[#]: via: (https://opensource.com/article/19/4/write-git) +[#]: author: (Seth Kenlon https://opensource.com/users/sethhttps://opensource.com/users/noreplyhttps://opensource.com/users/seth) + +用 Git 帮助写作者更好地完成工作 +====== + +> 如果你是一名写作者,你也能从使用 Git 中受益。在我们的系列文章中了解有关 Git 鲜为人知的用法。 + +![Writing Hand][1] + +[Git][2] 是一个少有的能将如此多的现代计算封装到一个程序之中的应用程序,它可以用作许多其他应用程序的计算引擎。虽然它以跟踪软件开发中的源代码更改而闻名,但它还有许多其他用途,可以让你的生活更轻松、更有条理。在这个 Git 系列中,我们将分享七种鲜为人知的使用 Git 的方法。 + +今天我们来看看写作者如何使用 Git 更好的地完成工作。 + +### 写作者的 Git + +有些人写小说,也有人撰写学术论文、诗歌、剧本、技术手册或有关开源的文章。许多人都在做一点各种写作。相同的是,如果你是一名写作者,则或许能从使用 Git 中受益。尽管 Git 是著名的计算机程序员所使用的高度技术性工具,但它也是现代写作者的理想之选,本文将向你演示如何改变你的书写方式以及为什么要这么做的原因。 + +但是,在谈论 Git 之前,重要的是先谈谈“副本”(或者叫“内容”,对于数字时代而言)到底是什么,以及为什么它与你的交付*媒介*不同。这是 21 世纪,大多数写作者选择的工具是计算机。尽管计算机看似擅长将副本的编辑和布局等过程结合在一起,但写作者还是(重新)发现将内容与样式分开是一个好主意。这意味着你应该在计算机上像在打字机上而不是在文字处理器中进行书写。以计算机术语而言,这意味着以*纯文本*形式写作。 + +### 以纯文本写作 + +这个假设曾经是毫无疑问的:你知道自己的写作所要针对的市场,你可以为书籍、网站或软件手册等不同市场编写内容。但是,近来各种市场趋于扁平化:你可能决定在纸质书中使用为网站编写的内容,并且纸质书可能会在以后发布 EPUB 版本。对于你的内容的数字版本,读者才是最终控制者:他们可以在你发布内容的网站上阅读你的文字,也可以点击 Firefox 出色的[阅读视图][3],还可能会打印到纸张上,或者可能会使用 Lynx 将网页转储到文本文件中,甚至可能因为使用屏幕阅读器而根本看不到你的内容。 + +你只需要逐字写下你的内容,而将交付工作留给发布者。即使你是自己发布,将字词作为写作作品的一种源代码也是一种更聪明、更有效的工作方式,因为在发布时,你可以使用相同的源(你的纯文本)生成适合你的目标输出(用于打印的 PDF、用于电子书的 EPUB、用于网站的 HTML 等)。 + +用纯文本编写不仅意味着你不必担心布局或文本样式,而且也不再需要专门的工具。无论是手机或平板电脑上的基本记事本应用程序、计算机附带的文本编辑器,还是从互联网上下载的免费编辑器,任何能够产生文本内容的工具对你而言都是有效的“文字处理器”。无论你身在何处或在做什么,几乎可以在任何设备上书写,并且所生成的文本可以与你的项目完美集成,而无需进行任何修改。 + +而且,Git 专门用来管理纯文本。 + +### Atom 编辑器 + +当你以纯文本形式书写时,文字处理程序会显得过于庞大。使用文本编辑器更容易,因为文本编辑器不会尝试“有效地”重组输入内容。它使你可以将脑海中的单词输入到屏幕中,而不会受到干扰。更好的是,文本编辑器通常是围绕插件体系结构设计的,这样应用程序本身就很基础(它用来编辑文本),但是你可以围绕它构建一个环境来满足你的各种需求。 + +[Atom][4] 编辑器就是这种设计理念的一个很好的例子。这是一个具有内置 Git 集成的跨平台文本编辑器。如果你不熟悉纯文本格式,也不熟悉 Git,那么 Atom 是最简单的入门方法。 + +#### 安装 Git 和 Atom + +首先,请确保你的系统上已安装 Git。如果运行 Linux 或 BSD,则 Git 在软件存储库或 ports 树中可用。你使用的命令将根据你的发行版而有所不同。例如在 Fedora 上: + +``` +$ sudo dnf install git +``` + +你也可以下载并安装适用于 [Mac][5] 和 [Windows][6] 的 Git。 + +你不需要直接使用 Git,因为 Atom 会充当你的 Git 界面。下一步是安装 Atom。 + +如果你使用的是 Linux,请通过软件安装程序或适当的命令从软件存储库中安装 Atom,例如: + +``` +$ sudo dnf install atom +``` + +Atom 当前没有在 BSD 上构建。但是,有很好的替代方法,例如 [GNU Emacs][7]。对于 Mac 和 Windows 用户,可以在 [Atom 网站][4]上找到安装程序。 + +安装完成后,启动 Atom 编辑器。 + +#### 快速指导 + +如果要使用纯文本和 Git,则需要适应你的编辑器。Atom 的用户界面可能比你习惯的更加动态。实际上,你可以将它视为 Firefox 或 Chrome,而不是文字处理程序,因为它具有可以根据需要打开和关闭的选项卡和面板,甚至还可以安装和配置附件。尝试全部掌握 Atom 如许之多的功能是不切实际的,但是你至少可以知道有什么功能。 + +当 Atom 打开时,它将显示一个欢迎屏幕。如果不出意外,此屏幕很好地介绍了 Atom 的选项卡式界面。你可以通过单击 Atom 窗口顶部选项卡上的“关闭”图标来关闭欢迎屏幕,并使用“文件 > 新建文件”创建一个新文件。 + +使用纯文本格式与使用文字处理程序有点不同,因此这里有一些技巧,以人可以连接的方式编写内容,并且 Git 和计算机可以解析,跟踪和转换。 + +#### 用 Markdown 书写 + +如今,当人们谈论纯文本时,大多是指 Markdown。Markdown 与其说是格式,不如说是样式,这意味着它旨在为文本提供可预测的结构,以便计算机可以检测自然的模式并智能地转换文本。Markdown 有很多定义,但是最好的技术定义和备忘单在 [CommonMark 的网站][8]上。 + +``` +# Chapter 1 + +This is a paragraph with an *italic* word and a **bold** word in it. +And it can even reference an image. 
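+
+<!-- 以下两行列表为补充示意(假设性示例,链接地址是虚构的),演示 CommonMark 的列表与链接写法;图片引用紧随其后: -->
+* A list item with a [hypothetical link](https://example.com)
+* Another *list* item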
+
+![An image will render here.](drawing.jpg)
+```
+
+从示例中可以看出,Markdown 读起来感觉不像代码,但可以将其视为代码。如果你遵循 CommonMark 定义的 Markdown 规范,那么只需一键,就可以可靠地将 Markdown 文字转换为 .docx、.epub、.html、MediaWiki、.odt、.pdf、.rtf 等各种格式,而*不会*丢失格式。
+
+你可以认为 Markdown 有点像文字处理程序的样式。如果你曾经为出版社撰写过一套控制章节标题和小节标题样式的规则,那基本上就是一回事,只不过这里不是从下拉菜单中选择样式,而是给文字添加一些小记号。对于任何习惯“以文字交谈”的现代阅读者来说,这些标记都很自然,但在呈现文本时,它们会被精美的文本样式替换掉。实际上,这正是文字处理程序在后台偷偷做的事情。文字处理器显示粗体文本,但如果你能看到使文本变为粗体的底层代码,它和 Markdown 很像(实际上,它是更复杂的 XML)。使用 Markdown 消除了这层代码和样式之间的隔阂:一方面这看起来更“硬核”,但另一方面,你可以在几乎任何能生成文本的工具上书写 Markdown,而不会丢失任何格式信息。
+
+Markdown 文件流行的文件扩展名是 .md。如果你使用的平台不知道 .md 文件是什么,则可以手动将该扩展名与 Atom 关联,或者干脆使用通用的 .txt 扩展名。文件扩展名不会改变文件的性质,只会改变你的计算机决定如何处理它的方式。Atom 和某些平台足够聪明,无论你给文件什么扩展名,都能知道它是纯文本格式。
+
+#### 实时预览
+
+Atom 具有 “Markdown 预览” 插件,该插件可以向你显示正在编写的纯文本 Markdown 及其(通常)呈现出来的样子。
+
+![Atom's preview screen][9]
+
+要激活此预览窗格,请选择“包 > Markdown 预览 > 切换预览” 或按 `Ctrl + Shift + M`。
+
+此视图为你提供了两全其美的体验:既可以毫无样式负担地专心写作,也能看到文本的一个通用示例外观,至少是典型的数字化呈现方式。当然,关键在于你无法控制文本最终如何呈现,所以不要试图调整 Markdown 来强迫预览以某种方式显示。
+
+#### 每行一句话
+
+你的高中写作老师不会来检查你的 Markdown。
+
+一开始这样写并不那么自然,但在数字世界中,保持每行一个句子更有意义。Markdown 会忽略单个换行符(当你按下 Return 或 Enter 键时),只有在一个空行之后才会开始新的段落。
+
+![Writing in Atom][10]
+
+每行写一个句子的好处是你的工作更容易被跟踪。也就是说,如果你更改了段落开头的一个单词,当这一更改被限制在一行之内、而不是藏在一个长段落中时,Atom、Git 或任何应用程序都能更容易地以有意义的方式突出显示它。换句话说,对一个句子的更改只会影响该句子,而不会影响整个段落。
+
+你可能会想:“许多文字处理器也可以跟踪更改,它们也能突出显示被更改的单个单词。”但是这些修订跟踪器绑定在该文字处理器的界面上,这意味着你必须先打开该文字处理器才能浏览修订。在纯文本工作流程中,你可以以纯文本形式查看修订,这意味着无论手头是什么设备,只要它能处理纯文本(大多数都可以),就可以进行编辑或批准编辑。
+
+诚然,写作者通常不会考虑行号,但行号对计算机有用,而且通常是一个很好的参考点。默认情况下,Atom 会为文本文档的行编号。按下 Enter 键或 Return 键后,一*行*就是一行。
+
+![Writing in Atom][11]
+
+如果某一行的行号处显示的是一个点而不是数字,则表示它是上一行折叠下来的一部分,因为那一行超出了你的屏幕宽度。
+
+#### 主题
+
+如果你是一个在意视觉形象的人,那么你可能会非常注重自己的写作环境。即使你使用普通的 Markdown 进行写作,也并不意味着你必须使用程序员的字体,或是在让你看起来像码农的黑窗口中写作。修改 Atom 外观的最简单方法是使用[主题包][12]。主题设计人员通常会区分深色主题与浅色主题,因此你可以根据需要使用关键字 “Dark” 或 “Light” 进行搜索。
+
+要安装主题,请选择“编辑 > 首选项”。这将在 Atom 界面中打开一个新标签页。是的,标签页既用于处理文档,*也*用于配置及控制面板。在“设置”标签页中,单击“安装”类别。
+
+在“安装”面板中,搜索要安装的主题的名称。单击搜索字段右侧的“主题”按钮,以便只搜索主题。找到主题后,单击其“安装”按钮。
+
+![Atom's themes][13]
+
+要使用已安装的主题,或根据喜好自定义主题,请导航至“设置”标签页中的“主题”类别,从下拉菜单中选择要使用的主题。更改会立即生效,因此你可以准确看到主题如何影响你的环境。
+
+你也可以在“设置”标签页的“编辑器”类别中更改工作字体。Atom 默认采用等宽字体,这是程序员通常的首选。但你可以使用系统上的任何字体,无论是衬线字体、无衬线字体、哥特式字体还是草书字体。你想整天盯着什么字体都行。
+
+顺带一提,默认情况下,Atom 会在编辑区绘制一条垂直线,作为对写代码的人的提示。程序员通常不想写太长的代码行,这条垂直线会提醒他们不要越界。不过,这条竖线对写作者而言毫无意义,你可以通过禁用 “wrap-guide” 包将其去掉。
+
+要禁用 “wrap-guide” 包,请在“设置”标签页中选择“软件包Packages”类别,然后搜索 “wrap-guide”。找到该包后,单击其“禁用”按钮。
+
+#### 动态结构
+
+创建长文档时,我发现每个文件写一章比在一个文件中写整本书更有意义。此外,我不会用 `chapter-1.md` 或 `1.example.md` 这种顺序分明的名称来命名章节,而是以章节标题或关键词(例如 `example.md`)命名。为了在将来给自己提供这本书写作思路的参考,我维护了一个名为 `toc.md`(意为“目录”)的文件,其中列出了各章的(当前)顺序。
+
+我这样做是因为,无论我多么确信第 6 章不可能排到第 1 章之前,在写完整本书之前,我几乎总免不了要调换一两章的顺序。我发现从一开始就保持动态,既可以帮我避免重命名的混乱,也可以帮我避免僵化的结构。
+
+### 在 Atom 中使用 Git
+
+每位写作者都有两个共同点:他们为流传而写作,他们的写作是一段旅程。没有人能坐下来一挥而就写出定稿。顾名思义,你有一份初稿。该草稿会经过修订,你会仔细地把每个修订版保存一式两三份副本,以防文件损坏。最终,你得到了所谓的最终稿,但很可能有一天你还会回到这份最终稿,要么找回其中好的部分,要么修改其中坏的部分。
+
+Atom 最令人兴奋的功能是其强大的 Git 集成。无需离开 Atom,你就可以使用 Git 的所有主要功能:跟踪和更新项目、回滚你不喜欢的更改、集成来自协作者的更改等等。最好的学习方法就是一步步来,下面就是在一个写作项目中,从头到尾在 Atom 界面里使用 Git 的方法。
+
+第一件事:通过选择 “视图 > 切换 Git 标签页” 来显示 Git 面板。这会在 Atom 界面的右侧打开一个新标签页。现在还没什么可看的,所以先让它开着就行。
+
+#### 建立一个 Git 项目
+
+你可以把 Git 理解为与某个文件夹绑定。Git 目录之外的文件夹不知道 Git 的存在,Git 也不理会它们;而 Git 目录中的文件夹和文件,在你授权 Git 跟踪之前,同样会被它忽略。
+
+你可以通过在 Atom 中创建新的项目文件夹来创建 Git 项目。选择 “文件 > 添加项目文件夹”,然后在系统上创建一个新文件夹。你创建的文件夹将出现在 Atom 窗口左侧的“项目面板”中。
+
+#### Git 添加文件
+
+右键单击你的新项目文件夹,然后选择“新建文件”以在项目文件夹中创建一个新文件。如果要将文件导入新项目,请右键单击该文件夹,然后选择“在文件管理器中显示”,以在系统的文件查看器中打开该文件夹(Linux 上为 Dolphin 或 Nautilus,Mac 上为 Finder,Windows 上是 Explorer),然后把文件拖放到你的项目文件夹中。
+
+在 Atom 中打开一个项目文件(你创建的空文件或导入的文件)后,单击 Git 标签页中的 “创建存储库Create Repository” 按钮。在弹出的对话框中,单击 “初始化Init”,将你的项目目录初始化为本地 Git 存储库。Git 会将 `.git` 目录(在系统的文件管理器中不可见,但在 Atom 中可见)添加到项目文件夹中。不要被它迷惑:`.git` 目录由 Git 管理,而不是由你管理,因此你一般不要去动它。但在 Atom 中看到它,可以很好地提醒你正在一个由 Git 管理的项目中工作;换句话说,只要看到 `.git` 目录,就说明有修订历史记录。
+
+在你的空文件中写点东西。你是写作者,输入一些单词就行。你可以随意输入任何内容,但要记住上面的写作技巧。
+
+按 `Ctrl + S` 保存文件,该文件会出现在 Git 标签页的 “未暂存的改变Unstaged Changes” 部分中。这意味着该文件存在于你的项目文件夹中,但尚未交给 Git 管理。单击 Git 标签页右上方的 “暂存全部Stage All” 按钮,让 Git 跟踪这些文件。如果你使用过带有修订历史记录的文字处理器,可以将这一步理解为允许 Git 记录更改。
+
+#### Git 提交
+
+你的文件现在已暂存。这意味着 Git 知道该文件存在,并且知道自上次 Git 了解它以来,该文件发生了更改。
+
+Git 的提交commit会将你的文件送入 Git 内部的永久存档。如果你习惯用文字处理程序,这就类似于给某个修订版命名。要创建一个提交,请在 Git 标签页底部的 “提交Commit” 消息框中输入一些描述性文字。你可以写得含糊或随意,但如果你希望将来知道为什么做这次修订,输入一些有用的信息会更有价值。
+
+第一次提交时,必须创建一个分支branch。Git 的分支有点像另一个平行空间,它允许你从一个时间轴切换到另一个时间轴,以进行你可能想要、也可能不想要永久保留的更改。如果最终喜欢这些更改,可以将一个实验分支合并到另一个分支,从而统一项目的不同版本。这是一个高级操作,不必急着学会,但你仍然需要一个活动分支,因此必须为首次提交创建一个分支。
+
+单击 Git 标签页最底部的 “分支Branch” 图标,以创建新的分支。
+
+![Creating a branch][14]
+
+通常第一个分支会命名为 `master`,但这不是必须的;你可以将其命名为 `firstdraft` 或任何你喜欢的名称。不过,遵循惯例有时会让谈论 Git(以及查找问题的答案)容易一些,因为当有人提到 “master” 时,你知道他们指的就是主分支,而不是“初稿”或你给分支起的别的什么名字。
+
+在某些版本的 Atom 上,UI 也许不会立即更新以显示你新建的分支。不用担心,做了提交之后,分支就会被创建(UI 也会随之更新)。按下 “提交Commit” 按钮即可,无论它显示的是 “创建脱离的提交Create detached commit” 还是 “提交到主干Commit to master”。
+
+提交后,文件的状态将永久保留在 Git 的记忆之中。
+
+#### 历史记录和 Git 差异
+
+一个自然的问题是,你应该多久提交一次。这个问题没有标准答案。用 `Ctrl + S` 保存文件和提交到 Git 是两个独立的过程,所以这两件事你都会不断地做。每当你觉得自己完成了重要的工作,或者打算尝试一个可能会被废弃的大胆新想法时,你大概都会想要做个提交。
+
+要感受提交对工作流程的影响,请从测试文档中删除一些文本,然后在顶部和底部添加一些文本,再次提交。这样重复几次,直到 Git 标签页底部积累了一小段历史记录,然后单击其中一个提交,在 Atom 中查看它。
+
+![Viewing differences][15]
+
+查看过去的提交时,你会看到三种元素:
+
+1. 绿色文本是该提交中被添加到文档中的内容。
+2. 红色文本是该提交中从文档中删除的内容。
+3. 其他所有文字均未做更改。
+
+#### 远程备份
+
+使用 Git 的优点之一是,它在设计上就是分布式的,这意味着你可以把工作提交到本地存储库,并把所做的更改推送到任意数量的服务器上进行备份;你还可以从这些服务器拉取更改,让你手头正在使用的任何设备都始终保持最新。
+
+为此,你必须在某个 Git 服务器上拥有一个帐户。有几种免费的托管服务,其中包括开发了 Atom 的 GitHub,但奇怪的是 GitHub 并不开源;而 GitLab 是开源的。相比专有软件,我更喜欢开源的,所以本示例将使用 GitLab。
+
+如果你还没有 GitLab 帐户,请注册一个并新建一个项目。项目名称不必与 Atom 中的项目文件夹一致,但一致可能更有意义。你可以将项目保留为私有,这样只有你以及被你明确授权的人可以访问它;也可以将其公开,让互联网上偶然发现它的任何人都能看到。
+
+不要将 README 文件添加到项目中。
+
+创建项目后,项目页面会向你展示如何设置存储库的说明。如果你决定在终端中或通过单独的 GUI 使用 Git,这些信息非常有用,但 Atom 的工作流程与之不同。
+
+单击 GitLab 界面右上方的 “克隆Clone” 按钮,它会显示访问这个 Git 存储库所要使用的地址。复制其中的 “SSH” 地址(而不是 “https” 地址)。
+
+在 Atom 中,点开项目的 `.git` 目录,打开 `config` 文件。将下面这些配置行添加到该文件中,并把 `url` 值中的 `seth/example.git` 部分改成你自己的地址。
+
+```
+[remote "origin"]
+  url = git@gitlab.com:seth/example.git
+  fetch = +refs/heads/*:refs/remotes/origin/*
+[branch "master"]
+  remote = origin
+  merge = refs/heads/master
+```
+
+完成之后,Git 标签页的底部会出现一个标记为 “提取Fetch” 的新按钮。由于你的服务器是全新的,没有可供提取的数据,所以请右键单击该按钮,选择 “推送Push”。这会将你的更改推送到你的 GitLab 帐户,现在你的项目已备份到 Git 服务器上。
+
+你可以在每次提交后将更改推送到服务器。它提供了即时的异地备份,并且由于数据量通常很少,几乎和本地保存一样快。
+
+### 撰写与 Git
+
+Git 是一个复杂的系统,不仅对修订跟踪和备份有用,它还支持异步协作,并鼓励实验。本文介绍了一些基础知识,但关于 Git,以及如何用它让你的工作更高效、更具弹性、更有活力,还有更多的文章和整本的书可以阅读。从用 Git 完成小任务开始,用得越多,你会发现自己提出的问题越多,最终学到的技巧也越多。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/write-git
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/sethhttps://opensource.com/users/noreplyhttps://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: 
https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/write-hand_0.jpg?itok=Uw5RJD03 (Writing Hand) +[2]: https://git-scm.com/ +[3]: https://support.mozilla.org/en-US/kb/firefox-reader-view-clutter-free-web-pages +[4]: http://atom.io +[5]: https://git-scm.com/download/mac +[6]: https://git-scm.com/download/win +[7]: http://gnu.org/software/emacs +[8]: https://commonmark.org/help/ +[9]: https://opensource.com/sites/default/files/uploads/atom-preview.jpg (Atom's preview screen) +[10]: https://opensource.com/sites/default/files/uploads/atom-para.jpg (Writing in Atom) +[11]: https://opensource.com/sites/default/files/uploads/atom-linebreak.jpg (Writing in Atom) +[12]: https://atom.io/themes +[13]: https://opensource.com/sites/default/files/uploads/atom-theme.jpg (Atom's themes) +[14]: https://opensource.com/sites/default/files/uploads/atom-branch.jpg (Creating a branch) +[15]: https://opensource.com/sites/default/files/uploads/git-diff.jpg (Viewing differences) +[16]: mailto:git@gitlab.com diff --git a/translated/tech/20190920 Hone advanced Bash skills by building Minesweeper.md b/translated/tech/20190920 Hone advanced Bash skills by building Minesweeper.md deleted file mode 100644 index a5cc1a977f..0000000000 --- a/translated/tech/20190920 Hone advanced Bash skills by building Minesweeper.md +++ /dev/null @@ -1,334 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (wenwensnow) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Hone advanced Bash skills by building Minesweeper) -[#]: via: (https://opensource.com/article/19/9/advanced-bash-building-minesweeper) -[#]: author: (Abhishek Tamrakar https://opensource.com/users/tamrakarhttps://opensource.com/users/dnearyhttps://opensource.com/users/sethhttps://opensource.com/users/sethhttps://opensource.com/users/marcobravo) - -通过编写扫雷游戏提高你的bash技巧 -====== -那些令人怀念的经典游戏可是提高编程能力的好素材。今天就让我们仔细探索一番,怎么用Bash编写一个扫雷程序。 -![bash logo on green background][1] - -我在编程教学方面不是专家,但当我想更好掌握某一样东西时,会试着找出让自己乐在其中的方法。比方说,当我想在shell编程方面更进一步时,我决定用Bash编写一个[扫雷][2]游戏来加以练习。 - -如果你是一个有经验的Bash程序员,并且在提高技巧的同时乐在其中,你可以在终端中编写个人版本的扫雷。完整代码可以在这个 [GitHub 库]中找到[3]. - -### 做好准备 - -在我编写任何代码之前,我列出了游戏所必须的几个部分: - - 1. 显示雷区 - 2. 创建玩家逻辑 - 3. 创建判断单元格是否可选的逻辑 - 4. 记录已选择和可用单元格的个数 - 5. 
创建游戏结束逻辑 - - -### 显示雷区 - -在扫雷中,游戏界面是一个由2D数组(列和行)组成的不透明小方格。每一格下都有可能藏有地雷。玩家的任务就是找到那些不含雷的方格,并且在这一过程中,不能点到地雷。Bash版本的扫雷使用10x10的矩阵,实际逻辑则由一个简单的Bash数组来完成。 - -首先,我先生成了一些随机数字。这将是地雷在雷区里的位置。为了控制地雷的数量,在开始编写代码之前,这么做会容易一些。实现这一功能的逻辑可以更好,但我这么做,是为了让游戏实现保持简洁,并有改进空间。(我编写这个游戏纯属娱乐,但如果你能将它修改的更好,我也是很乐意的。) - -下面这些变量是整个过程中是不变的,声明它们是为了随机生成数字。就像下面的变量a-g,它们会被用来计算可选择的地雷 -的值: - -``` -# 变量 -score=0 # 会用来存放游戏分数 -#下面这些变量,用来随机生成可选择地雷的实际值 -a="1 10 -10 -1" -b="-1 0 1" -c="0 1" -d="-1 0 1 -2 -3" -e="1 2 20 21 10 0 -10 -20 -23 -2 -1" -f="1 2 3 35 30 20 22 10 0 -10 -20 -25 -30 -35 -3 -2 -1" -g="1 4 6 9 10 15 20 25 30 -30 -24 -11 -10 -9 -8 -7" -# -# 声明 -declare -a room # 声明一个room 数组,它用来表示雷区的每一格。 -``` - -接下来,我会用列(0-9)和行(a-j)显示出游戏界面,并且使用一个10x10矩阵作为雷区。(M[10][10] 是一个索引从0-99,有100个值的数组。) 如想了解更多关于Bash 数组的内容,请阅读这本书[_那些关于Bash你所不了解的事: Bash数组简介_][4]。 - - -创建一个叫 **plough**的函数,我们先将标题显示出来:两个空行,列头,和一行 “-”,以示意往下是游戏界面: - - -``` -printf '\n\n' -printf '%s' "     a   b   c   d   e   f   g   h   i   j" -printf '\n   %s\n' "-----------------------------------------" -``` - -然后,我初始化一个计数器变量,叫 **r**,它会用来记录已显示多少横行。 注意,稍后在游戏代码中,我们会用同一个变量**r**,作为我们的数组索引。 在 [Bash **for** 循环][5]中,用 **seq**命令从0增加到9。我用 (**d%**)占位,来显示行号($row,被**seq**定义的变量) - - -``` -r=0 # our counter -for row in $(seq 0 9); do - printf '%d ' "$row" # 显示 行数 0-9 -``` - -在我们接着往下做之前,让我们看看到现在都做了什么。我们先横着显示 **[a-j]** 然后再将 **[0-9]** 的行号显示出来,我们会用这两个范围,来确定用户选择的确切位置。 - -接着,在每行中,插入列,所以是时候写一个新的 **for** 循环了。 这一循环管理着每一列,也就是说,实际上是生成游戏界面的每一格。我添加了一些说明函数,你能在源码中看到它的完整实现。 对每一格来说,我们需要一些让它看起来像地雷的东西,所以我们先用一个点(.)来初始化空格。实现这一想法,我们用的是一个叫[**is_null_field**][6] 的自定义函数。 同时,我们需要一个存储每一格具体值的数组,这儿会用到之前已定义的全局数组 **[room][7]** , 并用 [变量 **r**][8]作为索引。 随着 **r** 的增加,遍历所有单元格,并随机部署地雷。 - -``` -  for col in $(seq 0 9); do - ((r+=1)) # 循环完一列行数加一 - is_null_field $r # 假设这里有个函数,它会检查单元格是否为空,为真,则此单元格初始值为点(.) - printf '%s \e[33m%s\e[0m ' "|" "${room[$r]}" # 最后显示分隔符,注意,${room[$r]} 的第一个值为 '.',等于其初始值。 - #结束 col 循环 - done -``` - -最后,为了保持游戏界面整齐好看,我会在每行用一个竖线作为结尾,并在最后结束行循环: - -``` -printf '%s\n' "|" #显示出行分隔符 -printf ' %s\n' "-----------------------------------------" -# 结束行循环 -done -printf '\n\n' -``` - -完整的 **plough** 代码如下: - -``` -plough() -{ -  r=0 -  printf '\n\n' -  printf '%s' "     a   b   c   d   e   f   g   h   i   j" -  printf '\n   %s\n' "-----------------------------------------" -  for row in $(seq 0 9); do -    printf '%d  ' "$row" -    for col in $(seq 0 9); do -       ((r+=1)) -       is_null_field $r -       printf '%s \e[33m%s\e[0m ' "|" "${room[$r]}" -    done -    printf '%s\n' "|" -    printf '   %s\n' "-----------------------------------------" -  done -  printf '\n\n' -} -``` - -我花了点时间来思考,**is_null_field** 的具体功能是什么。让我们来看看,它到底能做些什么。在最开始,我们需要游戏有一个固定的状态。你可以随便选择所有格子的初始值,可以是一个数字或者任意字符。 我最后决定,所有单元格的初始值为一个点(.),因为我觉得,这样会让游戏界面更好看。下面就是这一函数的完整代码: - -``` -is_null_field() -{ - local e=$1 # 在数组room中,我们已经用过循环变量 'r'了,这次我们用'e' - if [[ -z "${room[$e]}" ]];then - room[$r]="." #这里用点(.)来初始化每一个单元格 - fi -} -``` - -现在,我已经初始化了所有的格子,现在只要用一个很简单的函数,就能得出,当前游戏中还有多少单元格可以操作: - -``` -get_free_fields() -{ - free_fields=0 # 初始化变量 - for n in $(seq 1 ${#room[@]}); do - if [[ "${room[$n]}" = "." 
]]; then # 检查当前单元格是否等于初始值(.),结果为真,则记为空余格子。 - ((free_fields+=1)) -    fi -  done -} -``` - -这是显示出来的游戏界面,[**a-j]** 为列, [**0-9**] 为行。 -![Minefield][9] - -### 创建玩家逻辑 - -玩家操作背后的逻辑在于,先从[stdin][10] 中读取数据作为坐标,然后再找出对应位置实际包含的值。这里用到了Bash的[参数扩展][11],来设法得到行列数。然后将代表列数的字母传给switch,从而得到其对应的列数。为了更好地理解这一过程,可以看看下面这段代码中,变量 '**o'** 所对应的值。 举个例子,玩家输入了 **c3** ,这时 Bash 将其分成两个字符: **c** and **3** 。 为了简单起见,我跳过了如何处理无效输入的部分。 - -``` - colm=${opt:0:1} # 得到第一个字符,一个字母 - ro=${opt:1:1} # 得到第二个字符,一个整数 - case $colm in - a ) o=1;; # 最后,通过字母得到对应列数。 - b ) o=2;; -    c ) o=3;; -    d ) o=4;; -    e ) o=5;; -    f ) o=6;; -    g ) o=7;; -    h ) o=8;; -    i ) o=9;; -    j ) o=10;; -  esac -``` - -下面的代码会计算,用户所选单元格实际对应的数字,然后将结果储存在变量中。 - -这里也用到了很多的 **shuf** 命令,**shuf** 是一个专门用来生成随机序列的[Linux命令][12]。 **-i** 选项,后面需要提供需要打乱的数或者范围, **-n** 选择则规定,输出结果最多需要返回几个值。Bash中,可以在两个圆括号内进行[数学计算],这里我们会多次用到。 - -还是沿用之前的例子,玩家输入了 **c3** 。 接着,它被转化成了**ro=3** 和 **o=3**。 之后,通过上面的switch 代码, 将**c** 转化为对应的整数,带进公式,以得到最终结果 '**i'.** 的值。 - - -``` - i=$(((ro*10)+o)) # 遵循运算规则,算出最终值 - is_free_field $i $(shuf -i 0-5 -n 1) # 调用自定义函数,判断其指向空/可选择单元格。 -``` - -仔细观察这个计算过程,看看最终结果 '**i**' 是如何计算出来的: - -``` -i=$(((ro*10)+o)) -i=$(((3*10)+3))=$((30+3))=33 -``` - -最后结果是33。在我们的游戏界面显示出来,玩家输入坐标指向了第33个单元格,也就是在第3行(从0开始,否则这里变成4),第3列。 - -### 创建判断单元格是否可选的逻辑 - -为了找到地雷,在将坐标转化,并找到实际位置之后,程序会检查这一单元格是否可选。如不可选,程序会显示一条警告信息,并要求玩家重新输入坐标。 - -在这段代码中,单元格是否可选,是由数组里对应的值是否为点(**.**)决定的。 如果可选,则重置单元格对应的值,并更新分数。反之,因为其对应值不为点,则设置 变量 **not_allowed**。 为简单起见,游戏中[警告消息][14]这部分源码,我会留给读者们自己去探索。 - -``` -is_free_field() -{ -  local f=$1 -  local val=$2 -  not_allowed=0 -  if [[ "${room[$f]}" = "." ]]; then -    room[$f]=$val -    score=$((score+val)) -  else -    not_allowed=1 -  fi -} -``` - -![Extracting mines][15] - -如输入坐标有效,且对应位置为地雷,如下图所示。 玩家输入 **h6**,游戏界面会出现一些随机生成的值。在发现地雷后,这些值会被加入用户得分。 - - -![Extracting mines][16] - -还记得我们开头定义的变量,[a-g]吗,我会用它们来确定,随机生成地雷的具体值。 所以,根据玩家输入坐标,程序会根据 (**m**) 中随机生成的数,来生成周围其他单元格的值。(如上图所示) 。之后将所有值和初始输入坐标相加,最后结果放在**i (**计算结果如上**)**中. - - - -请注意下面代码中的 **X**,它是我们唯一的游戏结束标志。我们将它添加到随机列表中。在 **shuf** 命令的魔力下,X可以在任意情况下出现,但如果你足够幸运的话,也可能一直不会出现。 - -``` -m=$(shuf -e a b c d e f g X -n 1) # 将 X 添加到随机列表中,当 m=X,游戏结束 - if [[ "$m" != "X" ]]; then # X将会是我们爆炸地雷(游戏结束)的触发标志 - for limit in ${!m}; do # !m 代表m变量的值 - field=$(shuf -i 0-5 -n 1) # 然后再次获得一个随机数字 - index=$((i+limit)) # 将m中的每一个值和index加起来,直到列表结尾 - is_free_field $index $field -    done -``` - -我想要游戏界面中,所有随机显示出来的单元格,都靠近玩家选择的单元格。 - -![Extracting mines][17] - -### 记录已选择和可用单元格的个数 - -这个程序需要记录,游戏界面中哪些单元格是可选择的。否则,程序会一直让用户输入数据,即使所有单元格都被选中过。为了实现这一功能,我创建了一个叫 **free_fields** 的变量,初始值为0。 用一个 **for** 循环,记录下游戏界面中可选择单元格的数量。 ****如果单元格所对应的值为点 (**.**), 则 **free_fields** 加一。 - - - -``` -get_free_fields() -{ -  free_fields=0 -  for n in $(seq 1 ${#room[@]}); do -    if [[ "${room[$n]}" = "." ]]; then -      ((free_fields+=1)) -    fi -  done -} -``` - -等下,如果 **free_fields=0** 呢? 这意味着,玩家已选择过所有单元格。如果想更好理解这一部分,可以看看这里的[源代码][18] 。 - - -``` -if [[ $free_fields -eq 0 ]]; then # 这意味着你已选择过所有格子 - printf '\n\n\t%s: %s %d\n\n' "You Win" "you scored" "$score" -      exit 0 -fi -``` - -### 创建游戏结束逻辑 - -对于游戏结束这种情况,我们这里使用了一些很巧妙的技巧,将结果在屏幕中央显示出来。我把这部分留给读者朋友们自己去探索。 - - - -``` -if [[ "$m" = "X" ]]; then - g=0 # 为了在参数扩展中使用它 - room[$i]=X # 覆盖此位置原有的值,并将其赋值为X - for j in {42..49}; do # 在游戏界面中央, - out="gameover" - k=${out:$g:1} # 在每一格中显示一个字母 - room[$j]=${k^^} -      ((g+=1)) -    done -fi -``` - - 最后,我们显示出玩家最关心的两行。 - -``` -if [[ "$m" = "X" ]]; then -      printf '\n\n\t%s: %s %d\n' "GAMEOVER" "you scored" "$score" -      printf '\n\n\t%s\n\n' "You were just $free_fields mines away." 
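-      # 上面的两条 printf 已输出最终得分与剩余未选格子数($free_fields),随后正常退出脚本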
-      exit 0 -fi -``` - -![Minecraft Gameover][20] - -文章到这里就结束了,朋友们! 如果你想了解更多,具体可以查看我的[GitHub 库][3],那儿有这个扫雷游戏的源代码,并且你还能找到更多用Bash 编写的游戏。 我希望,这篇文章能激起你学习Bash的兴趣,并乐在其中。 - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/19/9/advanced-bash-building-minesweeper - -作者:[Abhishek Tamrakar][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/tamrakarhttps://opensource.com/users/dnearyhttps://opensource.com/users/sethhttps://opensource.com/users/sethhttps://opensource.com/users/marcobravo -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background) -[2]: https://en.wikipedia.org/wiki/Minesweeper_(video_game) -[3]: https://github.com/abhiTamrakar/playground/tree/master/bash_games -[4]: https://opensource.com/article/18/5/you-dont-know-bash-intro-bash-arrays -[5]: https://opensource.com/article/19/6/how-write-loop-bash -[6]: https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L114-L120 -[7]: https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L41 -[8]: https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L74 -[9]: https://opensource.com/sites/default/files/uploads/minefield.png (Minefield) -[10]: https://en.wikipedia.org/wiki/Standard_streams#Standard_input_(stdin) -[11]: https://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html -[12]: https://linux.die.net/man/1/shuf -[13]: https://www.tldp.org/LDP/abs/html/dblparens.html -[14]: https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L143-L177 -[15]: https://opensource.com/sites/default/files/uploads/extractmines.png (Extracting mines) -[16]: https://opensource.com/sites/default/files/uploads/extractmines2.png (Extracting mines) -[17]: https://opensource.com/sites/default/files/uploads/extractmines3.png (Extracting mines) -[18]: https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L91 -[19]: https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L131-L141 -[20]: https://opensource.com/sites/default/files/uploads/gameover.png (Minecraft Gameover) diff --git a/translated/tech/20190924 Mutation testing by example- Failure as experimentation.md b/translated/tech/20190924 Mutation testing by example- Failure as experimentation.md new file mode 100644 index 0000000000..939359c7cc --- /dev/null +++ b/translated/tech/20190924 Mutation testing by example- Failure as experimentation.md @@ -0,0 +1,192 @@ +[#]: collector: (lujun9972) +[#]: translator: (Morisun029) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Mutation testing by example: Failure as experimentation) +[#]: via: (https://opensource.com/article/19/9/mutation-testing-example-failure-experimentation) +[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzichttps://opensource.com/users/jocunddew) + +以变异测试为例:基于故障的试验 +====== +基于 .NET 的 xUnit.net 测试框架,开发一款自动猫门的逻辑,让门在白天开放,夜间锁定, +![Digital hand surrounding 
by objects, bike, light bulb, graphs][1]


在本系列的[第一篇文章][2]中,我演示了如何使用故意设计的故障来确保代码中的预期结果。在第二篇文章中,我将继续开发这个示例项目:一款自动猫门,它在白天开放,夜间锁定。
在此提醒一下,你可以按照[此处的说明][3]使用 .NET 的 xUnit.net 测试框架。

### 关于白天时间

回想一下,测试驱动开发(TDD)是围绕着大量的单元测试展开的。

第一篇文章中实现了满足 **Given7pmReturnNighttime** 单元测试期望的逻辑。但这还没有完:现在,你需要描述当前时间在早上 7 点之后时期望发生的结果。下面是新的单元测试,称为 **Given7amReturnDaylight**:

```
 [Fact]
 public void Given7amReturnDaylight()
 {
     var expected = "Daylight";
     var actual = dayOrNightUtility.GetDayOrNight();
     Assert.Equal(expected, actual);
 }
```

现在,新的单元测试失败了(越早失败越好!):

```
Starting test execution, please wait...
[Xunit.net 00:00:01.23] unittest.UnitTest1.Given7amReturnDaylight [FAIL]
Failed unittest.UnitTest1.Given7amReturnDaylight
[...]
```

期望接收到的字符串值是 “Daylight”,但实际接收到的值是 “Nighttime”。

### 分析失败的测试用例

经过仔细检查,代码本身似乎出了问题。事实证明,**GetDayOrNight** 方法的实现是不可测试的!来看看我们面临的核心挑战:

 1. **GetDayOrNight 依赖隐藏输入。**
**dayOrNight** 的值取决于隐藏输入(它从内置系统时钟中获取一天中的时间值)。
 2. **GetDayOrNight 包含非确定性行为。**
从系统时钟中获取的时间值是不确定的,因为它取决于你运行代码的时间点,而这个时间点我们认为是不可预测的。
 3. **GetDayOrNight API 的质量差。**
该 API 与具体的数据源(系统 **DateTime**)紧密耦合。
 4. **GetDayOrNight 违反了单一责任原则。**
该方法的实现同时消费和处理信息。优良的做法是一个方法只负责执行一项职责。
 5. **GetDayOrNight 有多个更改原因。**
可以想象内部时间源可能会更改的情况。同样,很容易想象处理逻辑也会改变。这些不同的变化原因必须相互隔离。
 6. **尝试了解 GetDayOrNight 的行为时,会发现它的 API 签名表意不足。**
最理想的情况是,只需简单地查看 API 的签名,就能了解 API 预期的行为类型。
 7. **GetDayOrNight 依赖全局共享的可变状态。**
要不惜一切代价避免共享的可变状态!
 8. **即使在阅读源代码之后,也无法预测 GetDayOrNight 方法的行为。**
这是一个严重的问题。通过阅读源代码,应该始终能够清晰地预测系统运行起来之后的行为。

### 失败背后的原则

每当你遇到工程问题时,建议使用久经考验的分而治之策略。在这里,遵循关注点分离的原则是一种可行的方法。

> **关注点分离separation of concerns**(**SoC**)是一种将计算机程序分为不同模块的设计原则,以便每个模块只解决一个关注点。关注点是影响计算机程序代码的一组信息。关注点信息可能像“要为其优化代码的硬件细节”一样宽泛,也可能像“要实例化的类的名称”一样具体。完美体现 SoC 的程序称为模块化程序。
>
> ([出处][4])

**GetDayOrNight** 方法应该只关心给定的日期和时间值表示的是白天还是夜晚,而不应该关心这个值从哪里来。获取当前时间的问题必须留给调用客户端。这种方法符合另一个有价值的工程原则:控制反转。Martin Fowler [在这里][5]详细探讨了这一概念。

> 框架的一个重要特征是:用户为定制框架而定义的方法,通常是由框架本身调用的,而不是由用户的应用程序代码调用的。框架经常在协调和安排应用程序活动时扮演主程序的角色。这种控制权的反转使框架有能力成为可扩展的骨架。用户提供的方法为框架中的泛化算法量身定制出特定的应用程序行为。
>
> \-- [Ralph Johnson and Brian Foote][6]

### 重构测试用例

因此,代码需要重构。摆脱对内部时钟(**DateTime** 系统实用程序)的依赖:

```
DateTime time = new DateTime();
```

删除上述代码(在你的文件中应该是第 7 行)。接着给 **GetDayOrNight** 方法添加一个 **DateTime** 类型的输入参数 time,进一步重构代码。下面是重构后的 **DayOrNightUtility.cs** 类:

```
using System;

namespace app {
    public class DayOrNightUtility {
        public string GetDayOrNight(DateTime time) {
            string dayOrNight = "Nighttime";
            if(time.Hour >= 7 && time.Hour < 19) {
                dayOrNight = "Daylight";
            }
            return dayOrNight;
        }
    }
}
```

重构代码也需要更改单元测试:需要准备 **nightHour** 和 **dayHour** 两组测试数据,并将这些值传给 **GetDayOrNight** 方法。以下是重构后的单元测试:

```
using System;
using Xunit;
using app;

namespace unittest
{
    public class UnitTest1
    {
        DayOrNightUtility dayOrNightUtility = new DayOrNightUtility();
        DateTime nightHour = new DateTime(2019, 08, 03, 19, 00, 00);
        DateTime dayHour = new DateTime(2019, 08, 03, 07, 00, 00);

        [Fact]
        public void Given7pmReturnNighttime()
        {
            var expected = "Nighttime";
            var actual = dayOrNightUtility.GetDayOrNight(nightHour);
            Assert.Equal(expected, actual);
        }

        [Fact]
        public void Given7amReturnDaylight()
        {
            var expected = "Daylight";
            var actual = dayOrNightUtility.GetDayOrNight(dayHour);
            Assert.Equal(expected, actual);
        }

    }
}
```

### 经验教训

在继续开发这个简单的场景之前,请先回顾一下本次练习中学到的东西。

编写无法测试的代码,很容易在不经意间埋下陷阱。从表面上看,这样的代码似乎可以正常工作。但是,遵循测试驱动开发(TDD)的实践(先描述期望结果,再执行测试)暴露了代码中的严重问题。

这表明 TDD 是确保代码不会太凌乱的理想方法。TDD 指出了一些问题区域,例如缺乏单一责任和存在隐藏输入。此外,TDD 有助于删除不确定性的代码,并用行为明确的、完全可测试的代码替换它。

最后,TDD 帮助交付易于阅读、逻辑易于遵循的代码。

在本系列的下一篇文章中,我将演示如何使用本练习中创建的逻辑来实现功能代码,以及如何通过进一步的测试使它变得更好。

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/9/mutation-testing-example-failure-experimentation

作者:[Alex Bunardzic][a]
选题:[lujun9972][b]
译者:[Morisun029](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/alex-bunardzichttps://opensource.com/users/jocunddew
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolseriesk12_rh_021x_0.png?itok=fvorN0e- (Digital hand surrounding by objects, bike, light bulb, graphs)
[2]: https://opensource.com/article/19/9/mutation-testing-example-part-1-how-leverage-failure
[3]: https://opensource.com/article/19/8/mutation-testing-evolution-tdd
[4]: https://en.wikipedia.org/wiki/Separation_of_concerns
[5]: https://martinfowler.com/bliki/InversionOfControl.html
[6]: http://www.laputan.org/drc/drc.html
diff --git a/translated/tech/20190925 Essential Accessories for Intel NUC Mini PC.md b/translated/tech/20190925 Essential Accessories for Intel NUC Mini PC.md
new file mode 100644
index 0000000000..56655d2ee3
--- /dev/null
+++ b/translated/tech/20190925 Essential Accessories for Intel NUC Mini PC.md
@@ -0,0 +1,118 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Essential Accessories for Intel NUC Mini PC)
+[#]: via: (https://itsfoss.com/intel-nuc-essential-accessories/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+
+Intel NUC 迷你 PC 的基本配件
+======
+
+几周前,我买了一台 [Intel NUC 迷你 PC][1]。我[在上面安装了 Linux][2],用起来非常享受。这台小巧的无风扇机器取代了台式机那庞大的主机箱。
+
+Intel NUC 通常以准系统形式出售,这意味着它没有内存、硬盘,当然也没有操作系统。许多[基于 Linux 的迷你 PC][3] 厂商会对 Intel NUC 进行定制,加上磁盘、内存和操作系统后出售给终端用户。
+
+不用说,它不像大多数其他台式机那样附带键盘、鼠标或屏幕。
+
+[Intel NUC][4] 是一款出色的设备,如果你要买台式机,我强烈建议你购买它。如果你正在考虑购买 Intel NUC,你还需要买一些配件,才能真正用起来。
+
+### 基本的 Intel NUC 配件
+
+![][5]
+
+_文章中的 Amazon 链接是联盟链接。请阅读我们的[联盟政策][6]。_
+
+#### 外围设备:显示器、键盘和鼠标
+
+这很容易想到。你需要屏幕、键盘和鼠标才能使用计算机。你需要一台支持 HDMI 连接的显示器,以及 USB 或无线的键盘鼠标。如果你已经有了这些东西,那可以接着往下看。
+
+如果你想要建议,我推荐 LG 的 IPS LED 显示器。我有两台 22 英寸的型号,对它清晰的显示效果很满意。
+
+这些显示器只有一个简单的固定支架。如果想让显示器可以上下移动并纵向旋转,可以试试 [HP EliteDisplay 显示器][7]。
+
+![HP EliteDisplay Monitor][8]
+
+我在多屏设置中同时连接了三台显示器:一台接在专门的 HDMI 端口上,另外两台通过 [Club 3D 的 Thunderbolt 转 HDMI 分配器][9]接在 Thunderbolt 端口上。
+
+你也可以选择超宽显示器,不过我没有亲身用过。
+
+#### 交流电源线
+
+拿到 NUC 时你会惊讶地发现:虽然它带有电源适配器,却不附带交流电源线。
+
+![][10]
+
+由于各个国家/地区的插头不同,英特尔决定不在 NUC 套件中附带电源线。我用的是旧笔记本的电源线,如果你手头没有,很可能需要自己买一根。
+
+#### 内存
+
+Intel NUC 有两个内存插槽,最多可支持 32GB 内存。由于我的是酷睿 i3 处理器,我选择了 [Crucial 的 8GB DDR4 内存][11],价格约为 $33。
+
+![][12]
+
+8GB 内存在大多数情况下都够用,但如果你的是酷睿 i7 处理器,可以选择 [16GB 内存][13],价格约为 $67。你还可以插满两条,达到最大的 32GB。怎么选全在于你。
+
+#### 硬盘(重要)
+
+Intel NUC 同时支持 2.5 英寸驱动器和 M.2 SSD,你可以同时使用两者来获得更多存储空间。
+
+2.5 英寸插槽既可以装 SSD 也可以装 HDD。我强烈建议选择 SSD,因为它比 HDD 快得多。[480GB 的 2.5 英寸 SSD][14] 价格是 $60,我认为相当合理。
+
+![][15]
+
+2.5 英寸驱动器的标准 SATA 接口速度为 6Gb/秒。M.2 插槽则可能更快,这取决于你是否选择 NVMe SSD。NVMe(非易失性内存主机控制器接口规范)SSD 的速度比普通 SSD(也称为 SATA SSD)快 4 倍,但它们可能也比 SATA M.2 SSD 贵一些。
+当购买 M.2 SSD 时,请检查产品图片。无论是 NVMe 还是 SATA SSD,都应该在磁盘本身的图片上有所标注。你可以考虑使用[经济实惠的三星 EVO NVMe M.2 SSD][16]。
+
+![Make sure that your are buying the faster NVMe M2 SSD][17]
+
+M.2 插槽和 2.5 英寸插槽中的 SATA SSD 速度相同。这就是为什么,如果你不想选择昂贵的 NVMe SSD,我建议你选择 2.5 英寸 SATA SSD,并把 M.2 插槽留作以后升级之用。
+
+#### 其他配套配件
+
+你需要一根 HDMI 线缆来连接显示器。如果你买的是新显示器,通常会附带一根。
+
+如果要使用 M.2 插槽,你可能还需要一把螺丝刀。Intel NUC 的设计很出色,只需用手旋开底部的四个脚垫即可打开底板,而你必须打开设备才能安装内存和磁盘。
+
+![Intel NUC with Security Cable | Image Credit Intel][18]
+
+NUC 上还有防盗孔,可以配合防盗绳使用。在办公环境中,建议用防盗绳保护计算机安全:花[几美元买一根防盗绳][19],可能帮你省下数百美元的损失。
+
+**你在使用什么配件?**
+
+以上就是我在使用并建议购买的 Intel NUC 配件。你呢?如果你有一台 NUC,你在用哪些配件,又会推荐给其他 NUC 用户哪些?
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/intel-nuc-essential-accessories/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://www.amazon.com/Intel-NUC-Mainstream-Kit-NUC8i3BEH/dp/B07GX4X4PW?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B07GX4X4PW (barebone Intel NUC mini PC)
+[2]: https://itsfoss.com/install-linux-on-intel-nuc/
+[3]: https://itsfoss.com/linux-based-mini-pc/
+[4]: https://www.intel.in/content/www/in/en/products/boards-kits/nuc.html
+[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/intel-nuc-accessories.png?ssl=1
+[6]: https://itsfoss.com/affiliate-policy/
+[7]: https://www.amazon.com/HP-EliteDisplay-21-5-Inch-1FH45AA-ABA/dp/B075L4VKQF?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B075L4VKQF (HP EliteDisplay monitors)
+[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/hp-elitedisplay-monitor.png?ssl=1
+[9]: https://www.amazon.com/Club3D-CSV-1546-USB-C-Multi-Monitor-Splitter/dp/B06Y2FX13G?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B06Y2FX13G (thunderbolt to HDMI splitter from Club 3D)
+[10]: https://itsfoss.com/wp-content/uploads/2019/09/ac-power-cord-3-pongs.webp
+[11]: https://www.amazon.com/Crucial-Single-PC4-19200-SODIMM-260-Pin/dp/B01BIWKP58?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B01BIWKP58 (8GB DDR4 RAM from Crucial)
+[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/crucial-ram.jpg?ssl=1
+[13]: https://www.amazon.com/Crucial-Single-PC4-19200-SODIMM-260-Pin/dp/B019FRBHZ0?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B019FRBHZ0 (16 GB RAM)
+[14]: https://www.amazon.com/Green-480GB-Internal-SSD-WDS480G2G0A/dp/B01M3POPK3?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B01M3POPK3 (480 GB 2.5)
+[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/wd-green-ssd.png?ssl=1
+[16]: https://www.amazon.com/Samsung-970-EVO-500GB-MZ-V7E500BW/dp/B07BN4NJ2J?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B07BN4NJ2J (Samsung EVO is a cost effective NVMe M.2 SSD)
+[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/samsung-evo-nvme.jpg?ssl=1
+[18]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/intel-nuc-security-cable.jpg?ssl=1 
+[19]: https://www.amazon.com/Kensington-Combination-Laptops-Devices-K64673AM/dp/B005J7Y99W?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B005J7Y99W (few dollars in the security cable)