Merge remote-tracking branch 'upstream/master'

This commit is contained in:
graveaccent 2019-09-28 10:39:21 +08:00
commit a49e381f41
28 changed files with 2945 additions and 1244 deletions


@ -0,0 +1,93 @@
技术如何改变敏捷的规则
======
> 当我们开始推行敏捷时,还没有容器和 Kubernetes。但是它们改变了过去最困难的部分将敏捷性从小团队推广到整个组织。
![](https://img.linux.net.cn/data/attachment/album/201909/26/113910ytmoosx5tt79gan5.jpg)
越来越多的企业正因为一个非常明显的原因开始尝试敏捷和 [DevOps][1]企业需要通过更快的速度和更多的实验为创新和竞争提供优势。而 DevOps 将帮助我们获得所需的创新速度。但是,在小团队或初创企业中实践 DevOps 与进行大规模实践完全是两码事。我们都明白这样一个事实:在十个人的跨职能团队中能够很好解决问题的方案,应用到一百个人的团队中时就可能无法奏效。这条道路是如此艰难,以至于 IT 领导者最简单的应对就是将敏捷方法的推行再推迟一年。
但那样的时代已经结束了。如果你已经尝试过,但是没有成功,那么现在是时候重新开始了。
到目前为止DevOps 需要为许多组织提供个性化的解决方案,因此往往需要进行大量的调整以及付出额外的工作。但在今天,[Linux 容器][2]和 Kubernetes 正在推动 DevOps 工具和过程的标准化。而这样的标准化将会加速整个软件开发过程。因此,我们用来实践 DevOps 工作方式的技术最终能够满足我们加快软件开发速度的愿望。
Linux 容器和 [Kubernetes][3] 正在改变团队交互的方式。此外,你可以在 Kubernetes 平台上运行任何能够在 Linux 上运行的应用程序。这意味着什么呢?你可以运行大量的企业级应用程序(甚至可以解决以前令人烦恼的 Windows 和 Linux 之间的协调问题)。最后,容器和 Kubernetes 能够满足你未来将要运行的几乎所有工作。它们是面向未来的,足以应对机器学习、人工智能和分析工作等下一代解决问题的工具。
让我们以机器学习为例来思考一下。今天,人们可以在大量的企业数据中找到一些模式。当机器发现这些模式时(想想机器学习),你的员工就能更快地采取行动。随着人工智能的加入,机器不仅可以发现模式,还可以对模式进行操作。如今,一个积极的软件开发冲刺周期也就是三个星期而已。有了人工智能,机器每秒可以多次修改代码。创业公司会利用这种能力来“颠覆你”。
考虑一下你需要多快才能参与到竞争当中。如果你还无法对 DevOps 和每周一次的迭代充满信心,那么考虑一下当那家创业公司将 AI 驱动的流程对准你时会发生什么?现在是时候转向 DevOps 的工作方式了,否则就会被竞争对手甩在后面。
### 容器技术如何改变团队的工作?
DevOps 使得许多试图将这种工作方式扩展到更大范围的团队感到沮丧。即使许多 IT 和业务人员之前都听说过敏捷相关的语言、框架、模型(如 DevOps而这些都有望彻底改变应用程序开发和 IT 流程,但他们还是对此持怀疑态度。
向你的受众“推销”快速开发冲刺也不是一件容易的事情。想象一下,如果你以这种方式买了一栋房子 —— 你将不再需要向开发商支付固定的金额,而是会得到这样的信息:“我们将在 4 周内浇筑完地基,其成本是 X之后再搭建房屋框架和铺设电路但是我们现在只能够知道地基完成的时间表。”人们已经习惯了买房子的时候有一个预先的价格和交付时间表。
挑战在于构建软件与构建房屋不同。同一个建筑商往往建造了成千上万个完全相同的房子,而软件项目从来都各不相同。这是你要克服的第一个障碍。
开发和运维团队的工作方式确实不同,我之所以知道这一点是因为我曾经从事过这两方面的工作。企业往往会用不同的方式来激励他们,开发人员会因为更改和创建而获得奖励,而运维专家则会因降低成本和确保安全性而获得奖励。我们会把他们分成不同的小组,并且尽量减少互动。而这些角色通常会吸引那些思维方式完全不同的技术人员。但是这样的解决方案注定会失败,你必须打破横亘在开发和运维之间的藩篱。
想想传统情况下会发生什么。业务会把需求扔过墙,这是因为他们在“买房”模式下运作,并且说上一句“我们 9 个月后见。”开发人员根据这些需求进行开发,并根据技术约束的需要进行更改。然后,他们把它扔过墙传递给运维人员,并说一句“搞清楚如何运行这个软件”。然后,运维人员就会勤奋地进行大量更改,使软件与基础设施保持一致。然而,最终的结果是什么呢?
通常情况下,当业务人员看到需求实现的最终结果时甚至根本辨认不出。在过去 20 年的大部分时间里,我们一次又一次地目睹了这种模式在软件行业中上演。而现在,是时候改变了。
Linux 容器能够真正地解决这样的问题,这是因为容器弥合了开发和运维之间的鸿沟。容器技术允许两个团队共同理解和设计所有的关键需求,但仍然独立地履行各自团队的职责。基本上,我们去掉了开发人员和运维人员之间的“传话游戏”。
有了容器技术,我们可以使得运维团队的规模更小,但依旧能够承担起数百万应用程序的运维工作,并且能够使得开发团队可以更加快速地根据需要更改软件。(在较大的组织中,所需的速度可能比运维人员的响应速度更快。)
有了容器技术,你可以将所需要交付的内容与它运行的位置分开。你的运维团队只需要负责运行容器的主机和安全的内存占用,仅此而已。这意味着什么呢?
首先,这意味着你现在可以和团队一起实践 DevOps 了。没错,只需要让团队专注于他们已经拥有的专业知识,而对于容器,只需让团队了解所需集成依赖关系的必要知识即可。
如果你想要重新训练每个人,没有人会精通所有事情。容器技术允许团队之间进行交互,但同时也会为每个团队提供一个围绕该团队优势而构建的强大边界。开发人员会知道需要消耗什么资源,但不需要知道如何使其大规模运行。运维团队了解核心基础设施,但不需要了解应用程序的细节。此外,运维团队也可以通过更新应用程序来解决新的安全问题,以免你成为下一个数据泄露的热门话题。
想要为一个大型 IT 组织,比如 30000 人的团队教授运维和开发技能?那或许需要花费你十年的时间,而你可能并没有那么多时间。
当人们谈论“构建新的云原生应用程序将帮助我们摆脱这个问题”时,请批判性地进行思考。你可以在 10 个人的团队中构建云原生应用程序,但这对《财富》杂志前 1000 强的企业而言或许并不适用。除非你不再需要依赖现有的团队,否则你无法一个接一个地构建新的微服务:你最终将成为一个孤立的组织。这是一个诱人的想法,但你不能指望这些应用程序来重新定义你的业务。我还没见过哪家公司能在如此大规模的并行开发中获得成功。IT 预算已经受到限制;在很长时间内,将预算翻倍甚至三倍是不现实的。
### 当奇迹发生时:你好,速度
Linux 容器就是为扩容而生的。一旦你开始这样做,[Kubernetes 之类的编排工具就会发挥作用][6],这是因为你将需要运行数千个容器。应用程序将不仅仅由一个容器组成,它们将依赖于许多不同的部分,所有的部分都会作为一个单元运行在容器上。如果不这样做,你的应用程序将无法在生产环境中很好地运行。
思考一下有多少小滑轮和杠杆组合在一起来支撑你的业务,对于任何应用程序都是如此。开发人员负责应用程序中的所有滑轮和杠杆。(如果开发人员没有这些组件,你可能会在集成时遇到噩梦。)与此同时,无论是在线下还是在云上,运维团队都会负责构成基础设施的所有滑轮和杠杆。做一个较为抽象的比喻:使用 Kubernetes你的运维团队就可以为应用程序提供运行所需的燃料但又不必成为所有方面的专家。
开发人员进行实验,运维团队则保持基础设施的安全和可靠。这样的组合使得企业敢于承担小风险,从而实现创新。不同于打几个孤注一掷的赌,公司中真正的实验往往是循序渐进的和快速的。
从个人经验来看,这就是组织内部发生的显著变化:人们开始问“我们如何通过改变计划来真正地利用这种实验能力?”这会倒逼敏捷计划的落实。
举个例子,使用 DevOps 模型、容器和 Kubernetes 的 KeyBank 如今每天都会部署代码。(观看[视频][7],其中主导了 KeyBank 持续交付和反馈的 John Rzeszotarski 解释了这一变化。)类似地Macquarie 银行也借助 DevOps 和容器技术每天将一些变更投入生产环境。
一旦你每天都推出软件,它就会改变你计划的每一个方面,并且会[加速业务的变化速度][8]。Macquarie 银行和金融服务集团的 CDO Luis Uguina 表示:“创意可以在一天内触达客户。”(参见对 Red Hat 与 Macquarie 银行合作的[案例研究][9]。)
### 是时候去创造一些伟大的东西了
Macquarie 的例子说明了速度的力量。这将如何改变你的经营方式记住Macquarie 不是一家初创企业。这是 CIO 们所面临的颠覆性力量,它不仅来自新的市场进入者,也来自老牌同行。
开发人员的自由还改变了运营敏捷商店的 CIO 们的人才方程式。突然之间,大公司里的个体,即使不是在最热门的行业或地区,也可以产生巨大的影响。Macquarie 利用这一变化作为招聘工具,并向开发人员承诺,所有新招聘的员工将会在第一周内推出新产品。
与此同时,在这个基于云的计算和存储能力的时代,我们比以往任何时候都拥有更多可用的基础设施。考虑到[机器学习和人工智能工具将很快实现的飞跃][10],这是幸运的。
所有这些都说明现在正是打造伟大事业的好时机。考虑到市场创新的速度,你需要不断地创造伟大的东西来保持客户的忠诚度。因此,如果你一直在等待将赌注押在 DevOps 上,那么现在就是正确的时机。容器技术和 Kubernetes 改变了规则,并且对你有利。
--------------------------------------------------------------------------------
via: https://enterprisersproject.com/article/2018/1/how-technology-changes-rules-doing-agile
作者:[Matt Hicks][a]
译者:[JayFrank](https://github.com/JayFrank)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://enterprisersproject.com/user/matt-hicks
[1]:https://enterprisersproject.com/tags/devops
[2]:https://www.redhat.com/en/topics/containers?intcmp=701f2000000tjyaAAA
[3]:https://www.redhat.com/en/topics/containers/what-is-kubernetes?intcmp=701f2000000tjyaAAA
[4]:https://enterprisersproject.com/article/2017/8/4-container-adoption-patterns-what-you-need-know?sc_cid=70160000000h0aXAAQ
[5]:https://enterprisersproject.com/devops?sc_cid=70160000000h0aXAAQ
[6]:https://enterprisersproject.com/article/2017/11/how-enterprise-it-uses-kubernetes-tame-container-complexity
[7]:https://www.redhat.com/en/about/videos/john-rzeszotarski-keybank-red-hat-summit-2017?intcmp=701f2000000tjyaAAA
[8]:https://enterprisersproject.com/article/2017/11/dear-cios-stop-beating-yourselves-being-behind-transformation
[9]:https://www.redhat.com/en/resources/macquarie-bank-case-study?intcmp=701f2000000tjyaAAA
[10]:https://enterprisersproject.com/article/2018/1/4-ai-trends-watch
[11]:https://enterprisersproject.com/email-newsletter?intcmp=701f2000000tsjPAAQ


@ -0,0 +1,116 @@
[#]: collector: "lujun9972"
[#]: translator: "PsiACE"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-11396-1.html"
[#]: subject: "Building a Messenger App: Schema"
[#]: via: "https://nicolasparada.netlify.com/posts/go-messenger-schema/"
[#]: author: "Nicolás Parada https://nicolasparada.netlify.com/"
构建一个即时消息应用(一):模式
========
![](https://img.linux.net.cn/data/attachment/album/201909/27/211458n44f7jvp77lfxxm0.jpg)
这是一系列关于构建“即时消息”应用的新帖子。你应该对这类应用并不陌生。有了它们的帮助,我们才可以与朋友畅聊无忌。[Facebook Messenger][1]、[WhatsApp][2] 和 [Skype][3] 就是其中的几个例子。正如你所看到的那样,这些应用允许我们发送图片、传输视频、录制音频、以及和一大帮子人聊天等等。当然,我们的教程应用将会尽量保持简单,只在两个用户之间发送文本消息。
我们将会用 [CockroachDB][4] 作为 SQL 数据库,用 [Go][5] 作为后端语言,并且用 JavaScript 来制作 web 应用。
这是第一篇帖子,我们将会讲述数据库的设计。
```
CREATE TABLE users (
id SERIAL NOT NULL PRIMARY KEY,
username STRING NOT NULL UNIQUE,
avatar_url STRING,
github_id INT NOT NULL UNIQUE
);
```
显然,这个应用需要一些用户。我们这里采用社交登录的形式。由于我选用了 [GitHub][6],所以这里需要保存一个对 GitHub 用户 ID 的引用。
```
CREATE TABLE conversations (
id SERIAL NOT NULL PRIMARY KEY,
last_message_id INT,
INDEX (last_message_id DESC)
);
```
每个对话都会引用最近一条消息。每当我们输入一条新消息时,我们都会更新这个字段。我会在后面添加外键约束。
… 你可能会想,我们可以先对对话进行分组,然后再通过这样的方式获取最近一条消息。但这样做会使查询变得更加复杂。
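举例来说(仅为示意,其中的消息 ID 和对话 ID 都是假设值),插入一条新消息之后,对这个字段的更新大致如下:
```
cockroach sql --insecure -d messenger -e "UPDATE conversations SET last_message_id = 123 WHERE id = 45"
```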
```
CREATE TABLE participants (
user_id INT NOT NULL REFERENCES users ON DELETE CASCADE,
conversation_id INT NOT NULL REFERENCES conversations ON DELETE CASCADE,
messages_read_at TIMESTAMPTZ NOT NULL DEFAULT now(),
PRIMARY KEY (user_id, conversation_id)
);
```
尽管之前我提到过对话只会在两个用户之间进行,但我们还是采用了允许向对话中添加多个参与者的设计。因此,在对话和用户之间有一个参与者表。
为了知道用户是否有未读消息,我们在参与者表中添加了“读取时间”(`messages_read_at`)字段。每当用户在对话中读取消息时,我们都会更新它的值,这样一来,我们就可以将它与对话中最后一条消息的“创建时间”(`created_at`)字段进行比较。
```
CREATE TABLE messages (
id SERIAL NOT NULL PRIMARY KEY,
content STRING NOT NULL,
user_id INT NOT NULL REFERENCES users ON DELETE CASCADE,
conversation_id INT NOT NULL REFERENCES conversations ON DELETE CASCADE,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
INDEX(created_at DESC)
);
```
尽管我们将消息表放在最后,但它在应用中相当重要。我们用它来保存对创建它的用户以及它所出现的对话的引用。而且还可以根据“创建时间”(`created_at`)来创建索引以完成对消息的排序。
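例如(仅为示意,其中的对话 ID 是假设值),可以这样按时间倒序取出某个对话最近的若干条消息:
```
cockroach sql --insecure -d messenger -e "SELECT * FROM messages WHERE conversation_id = 1 ORDER BY created_at DESC LIMIT 25"
```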
```
ALTER TABLE conversations
ADD CONSTRAINT fk_last_message_id_ref_messages
FOREIGN KEY (last_message_id) REFERENCES messages ON DELETE SET NULL;
```
我在前面已经提到过这个外键约束了,不是吗:D
有这四张表就足够了。你也可以将这些查询保存到一个文件中,并将其通过管道传送到 Cockroach CLI。
首先,我们需要启动一个新节点:
```
cockroach start --insecure --host 127.0.0.1
```
然后创建数据库和这些表:
```
cockroach sql --insecure -e "CREATE DATABASE messenger"
cat schema.sql | cockroach sql --insecure -d messenger
```
这篇帖子就到这里。在接下来的部分中,我们将会介绍「登录」,敬请期待。
- [源代码][7]
---
via: https://nicolasparada.netlify.com/posts/go-messenger-schema/
作者:[Nicolás Parada][a]
选题:[lujun9972][b]
译者:[PsiACE](https://github.com/PsiACE)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux 中国](https://linux.cn/) 荣誉推出
[a]: https://nicolasparada.netlify.com/
[b]: https://github.com/lujun9972
[1]: https://www.messenger.com/
[2]: https://www.whatsapp.com/
[3]: https://www.skype.com/
[4]: https://www.cockroachlabs.com/
[5]: https://golang.org/
[6]: https://github.com/
[7]: https://github.com/nicolasparada/go-messenger-demo


@ -0,0 +1,104 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11393-1.html)
[#]: subject: (A Quick Look at Elvish Shell)
[#]: via: (https://itsfoss.com/elvish-shell/)
[#]: author: (John Paul https://itsfoss.com/author/john/)
Elvish Shell 速览
======
每个来到这里的人都会对许多系统默认的 Bash shell 有所了解(无论深浅)。过去这些年,已经出现了一些新的 shell尝试解决 Bash 中的一些缺点。Elvish 就是其中之一,我们将在今天讨论它。
### 什么是 Elvish Shell
![Pipelines In Elvish][1]
[Elvish][2] 不仅仅是一个 shell。它[也是][3]“一种表达性编程语言”。它有许多有趣的特性,包括:
* 它是由 Go 语言编写的
* 内置文件管理器,灵感来自 [Ranger 文件管理器][4](`Ctrl + N`)
* 可搜索的命令历史记录(`Ctrl + R`
* 访问的目录的历史记录(`Ctrl + L`
* 支持结构化数据,例如列表、字典和函数的强大的管道
* 包含“一组标准的控制结构:有 `if` 条件控制、`for` 和 `while` 循环,还有 `try` 的异常处理”
* 通过包管理器支持[第三方模块扩展 Elvish][5]
* BSD 两条款BSD 2-Clause许可证
你肯定在喊“为什么叫 Elvish好吧根据[他们的网站][6],他们之所以选择当前的名字,是因为:
> 在 Roguelike 游戏中,精灵制造的物品质量很高。它们通常被称为“精灵物品”。但是之所以选择 “elvish” 是因为它以 “sh” 结尾,这是 Unix shell 的久远传统。这个与 fish 押韵,它是影响 Elvish 哲学的 shell 之一。
### 如何安装 Elvish Shell
Elvish 在几种主流发行版中都有。
请注意,该软件还很年轻。最新版本是 0.12。根据该项目的 [GitHub 页面][3]:“尽管还处在 1.0 之前,但它已经适合大多数日常交互使用。”
![Elvish Control Structures][7]
#### Debian 和 Ubuntu
Elvish 包已引入 Debian Buster 和 Ubuntu 17.10。不幸的是,这些包已经过时,你需要使用 [PPA][8] 安装最新版本。你需要使用以下命令:
```
sudo add-apt-repository ppa:zhsj/elvish
sudo apt update
sudo apt install elvish
```
#### Fedora
Elvish 在 Fedora 的主仓库中没有提供。你需要添加 [FZUG 仓库][9]来安装 Elvish。为此你需要使用以下命令
```
sudo dnf config-manager --add-repo=http://repo.fdzh.org/FZUG/FZUG.repol
sudo dnf install elvish
```
#### Arch
Elvish 在 [Arch 用户仓库][10]中可用。
我相信你知道该[如何在 Linux 中更改 Shell][11],因此安装后可以切换到 Elvish 来使用它。
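下面是一条示意性的切换命令(假设 `elvish` 的安装路径已经列入 `/etc/shells`,否则 `chsh` 会拒绝):
```
chsh -s "$(command -v elvish)"
```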
### 对 Elvish Shell 的想法
就个人而言,我没有理由在任何系统上安装 Elvish。我可以通过安装几个小的命令行程序或使用已经安装的程序来获得它的大多数功能。
例如Bash 中已经存在“搜索历史命令”功能,并且效果很好。如果要增强搜索历史命令的能力,我建议安装 [fzf][12]。`fzf` 使用模糊搜索,因此你无需记住要查找的确切命令。`fzf` 还允许你预览和打开文件。
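下面是一个最简单的用法示意(假设已经安装了 `fzf`;配合它自带的按键绑定,也可以直接用 `Ctrl + R` 做模糊历史搜索):
```
history | fzf
```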
我认为 Elvish 作为一种编程语言是不错的,但是我会坚持使用 Bash shell 脚本,直到 Elvish 变得更成熟。
你们都有用过 Elvish 么?你认为安装 Elvish 是否值得?你最喜欢的 Bash 替代品是什么?请在下面的评论中告诉我们。
如果你发现这篇文章有趣请花一点时间在社交媒体、Hacker News 或 Reddit 上分享它。
--------------------------------------------------------------------------------
via: https://itsfoss.com/elvish-shell/
作者:[John Paul][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/pipelines-in-elvish.png?fit=800%2C421&ssl=1
[2]: https://elv.sh/
[3]: https://github.com/elves/elvish
[4]: https://ranger.github.io/
[5]: https://github.com/elves/awesome-elvish
[6]: https://elv.sh/ref/name.html
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/Elvish-control-structures.png?fit=800%2C425&ssl=1
[8]: https://launchpad.net/%7Ezhsj/+archive/ubuntu/elvish
[9]: https://github.com/FZUG/repo/wiki/Add-FZUG-Repository
[10]: https://aur.archlinux.org/packages/elvish/
[11]: https://linuxhandbook.com/change-shell-linux/
[12]: https://github.com/junegunn/fzf
[13]: http://reddit.com/r/linuxusersgroup


@ -0,0 +1,237 @@
[#]: collector: (lujun9972)
[#]: translator: (heguangzhi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11390-1.html)
[#]: subject: (Different Ways to Configure Static IP Address in RHEL 8)
[#]: via: (https://www.linuxtechi.com/configure-static-ip-address-rhel8/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
在 RHEL 8 中配置静态 IP 地址的不同方法
======
在 Linux 服务器上工作时,在网卡/以太网卡上分配静态 IP 地址是每个 Linux 工程师的常见任务之一。如果一个人在 Linux 服务器上正确配置了静态地址,那么他/她就可以通过网络远程访问它。在本文中,我们将演示在 RHEL 8 服务器网卡上配置静态 IP 地址的不同方法。
![](https://img.linux.net.cn/data/attachment/album/201909/25/222737dx94bbl9qbhzlfe4.jpg)
以下是在网卡上配置静态 IP 的方法:
* `nmcli`(命令行工具)
* 网络脚本文件(`ifcfg-*`
* `nmtui`(基于文本的用户界面)
### 使用 nmcli 命令行工具配置静态 IP 地址
每当我们安装 RHEL 8 服务器时,就会自动安装命令行工具 `nmcli`,它是由网络管理器使用的,可以让我们在以太网卡上配置静态 IP 地址。
运行下面的 `ip addr` 命令,列出 RHEL 8 服务器上的以太网卡
```
[root@linuxtechi ~]# ip addr
```
正如我们在上面的命令输出中看到的,我们有两个网卡 `enp0s3``enp0s8`。当前分配给网卡的 IP 地址是通过 DHCP 服务器获得的。
假设我们希望在第一个网卡 (`enp0s3`) 上分配静态 IP 地址,具体内容如下:
* IP 地址 = 192.168.1.4
* 网络掩码 = 255.255.255.0
* 网关 = 192.168.1.1
* DNS = 8.8.8.8
依次运行以下 `nmcli` 命令来配置静态 IP
使用 `nmcli connection` 命令列出当前活动的以太网卡,
```
[root@linuxtechi ~]# nmcli connection
NAME UUID TYPE DEVICE
enp0s3 7c1b8444-cb65-440d-9bf6-ea0ad5e60bae ethernet enp0s3
virbr0 3020c41f-6b21-4d80-a1a6-7c1bd5867e6c bridge virbr0
[root@linuxtechi ~]#
```
使用下面的 `nmcli``enp0s3` 分配静态 IP。
**命令语法:**
```
# nmcli connection modify <interface_name> ipv4.addresses <ip/prefix>
```
**注意:** 为了简化语句,在 `nmcli` 命令中,我们通常用 `con` 关键字替换 `connection`,并用 `mod` 关键字替换 `modify`
将 IPv4 地址 (192.168.1.4) 分配给 `enp0s3` 网卡上,
```
[root@linuxtechi ~]# nmcli con mod enp0s3 ipv4.addresses 192.168.1.4/24
```
使用下面的 `nmcli` 命令设置网关,
```
[root@linuxtechi ~]# nmcli con mod enp0s3 ipv4.gateway 192.168.1.1
```
设置手动配置(从 dhcp 改为 static
```
[root@linuxtechi ~]# nmcli con mod enp0s3 ipv4.method manual
```
设置 DNS 值为 “8.8.8.8”,
```
[root@linuxtechi ~]# nmcli con mod enp0s3 ipv4.dns "8.8.8.8"
```
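顺带一提,以上几步也可以合并为一条命令完成(仅为示意,参数与上文相同;`nmcli` 允许在一次 `modify` 中设置多个属性):
```
[root@linuxtechi ~]# nmcli con mod enp0s3 ipv4.addresses 192.168.1.4/24 ipv4.gateway 192.168.1.1 ipv4.dns "8.8.8.8" ipv4.method manual
```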
要保存上述更改并重新加载,请执行如下 `nmcli` 命令,
```
[root@linuxtechi ~]# nmcli con up enp0s3
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/4)
```
以上命令显示网卡 `enp0s3` 已成功配置。我们使用 `nmcli` 命令所做的那些更改都将永久保存在文件 `/etc/sysconfig/network-scripts/ifcfg-enp0s3` 里。
```
[root@linuxtechi ~]# cat /etc/sysconfig/network-scripts/ifcfg-enp0s3
```
![ifcfg-enp0s3-file-rhel8][2]
要确认 IP 地址是否分配给了 `enp0s3` 网卡,请使用以下 `ip` 命令查看,
```
[root@linuxtechi ~]# ip addr show enp0s3
```
### 使用网络脚本文件(ifcfg-*)手动配置静态 IP 地址
我们可以使用配置以太网卡的网络脚本或 `ifcfg-*` 文件来配置以太网卡的静态 IP 地址。假设我们想在第二个以太网卡 `enp0s8` 上分配静态 IP 地址:
* IP 地址 = 192.168.1.91
* 前缀 = 24
* 网关 = 192.168.1.1
* DNS1 = 4.2.2.2
转到目录 `/etc/sysconfig/network-scripts`,查找文件 `ifcfg-enp0s8`,如果它不存在,则使用以下内容创建它,
```
[root@linuxtechi ~]# cd /etc/sysconfig/network-scripts/
[root@linuxtechi network-scripts]# vi ifcfg-enp0s8
TYPE="Ethernet"
DEVICE="enp0s8"
BOOTPROTO="static"
ONBOOT="yes"
NAME="enp0s8"
IPADDR="192.168.1.91"
PREFIX="24"
GATEWAY="192.168.1.1"
DNS1="4.2.2.2"
```
保存并退出文件,然后重新启动网络管理器服务以使上述更改生效,
```
[root@linuxtechi network-scripts]# systemctl restart NetworkManager
```
现在使用下面的 `ip` 命令来验证 IP 地址是否分配给网卡,
```
[root@linuxtechi ~]# ip add show enp0s8
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 08:00:27:7c:bb:cb brd ff:ff:ff:ff:ff:ff
inet 192.168.1.91/24 brd 192.168.1.255 scope global noprefixroute enp0s8
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe7c:bbcb/64 scope link
valid_lft forever preferred_lft forever
[root@linuxtechi ~]#
```
以上输出内容确认静态 IP 地址已在网卡 `enp0s8` 上成功配置了。
### 使用 nmtui 实用程序配置静态 IP 地址
`nmtui` 是一个基于文本用户界面的网络管理工具。当我们执行 `nmtui` 时,它将打开一个基于文本的用户界面,通过它我们可以添加、修改和删除连接。除此之外,`nmtui` 还可以用来设置系统的主机名。
假设我们希望通过以下细节将静态 IP 地址分配给网卡 `enp0s3`
* IP 地址 = 10.20.0.72
* 前缀 = 24
* 网关 = 10.20.0.1
* DNS1 = 4.2.2.2
运行 `nmtui` 并按照屏幕说明操作,示例如下所示,
```
[root@linuxtechi ~]# nmtui
```
![nmtui-rhel8][3]
选择第一个选项 “Edit a connection”然后选择接口 “enp0s3”
![Choose-interface-nmtui-rhel8][4]
选择 “Edit”然后指定 IP 地址、前缀、网关和域名系统服务器 IP
![set-ip-nmtui-rhel8][5]
选择确定,然后点击回车。在下一个窗口中,选择 “Activate a connection”
![Activate-option-nmtui-rhel8][6]
选择 “enp0s3”选择 “Deactivate” 并点击回车,
![Deactivate-interface-nmtui-rhel8][7]
现在选择 “Activate” 并点击回车,
![Activate-interface-nmtui-rhel8][8]
选择 “Back”然后选择 “Quit”
![Quit-Option-nmtui-rhel8][9]
使用下面的 `ip` 命令验证 IP 地址是否已分配给接口 `enp0s3`
```
[root@linuxtechi ~]# ip add show enp0s3
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 08:00:27:53:39:4d brd ff:ff:ff:ff:ff:ff
inet 10.20.0.72/24 brd 10.20.0.255 scope global noprefixroute enp0s3
valid_lft forever preferred_lft forever
inet6 fe80::421d:5abf:58bd:c47e/64 scope link noprefixroute
valid_lft forever preferred_lft forever
[root@linuxtechi ~]#
```
以上输出内容显示我们已经使用 `nmtui` 实用程序成功地将静态 IP 地址分配给接口 `enp0s3`
以上就是本教程的全部内容,我们已经介绍了在 RHEL 8 系统上为以太网卡配置 IPv4 地址的三种不同方法。请在下面的评论部分分享反馈和评论。
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/configure-static-ip-address-rhel8/
作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[heguangzhi](https://github.com/heguangzhi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Configure-Static-IP-RHEL8.jpg
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/09/ifcfg-enp0s3-file-rhel8.jpg
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/09/nmtui-rhel8.jpg
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Choose-interface-nmtui-rhel8.jpg
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/09/set-ip-nmtui-rhel8.jpg
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Activate-option-nmtui-rhel8.jpg
[7]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Deactivate-interface-nmtui-rhel8.jpg
[8]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Activate-interface-nmtui-rhel8.jpg
[9]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Quit-Option-nmtui-rhel8.jpg


@ -0,0 +1,462 @@
[#]: collector: (lujun9972)
[#]: translator: (heguangzhi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11394-1.html)
[#]: subject: (How to Setup Multi Node Elastic Stack Cluster on RHEL 8 / CentOS 8)
[#]: via: (https://www.linuxtechi.com/setup-multinode-elastic-stack-cluster-rhel8-centos8/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
如何在 RHEL 8 / CentOS 8 上搭建多节点 Elastic Stack 集群
======
Elastic Stack 俗称 ELK Stack是一组包括 Elasticsearch、Logstash 和 Kibana 在内的开源产品由 Elastic 公司开发和维护。使用 Elastic Stack可以将系统日志发送到 Logstash这是一个数据收集引擎它接受来自几乎任何来源的日志或数据对日志进行归一化然后将日志转发给 Elasticsearch用于分析、索引、搜索和存储最后由 Kibana 将其呈现为可视化数据;借助 Kibana我们还可以基于用户的查询创建交互式图表。
![Elastic-Stack-Cluster-RHEL8-CentOS8][2]
在本文中,我们将演示如何在 RHEL 8 / CentOS 8 服务器上搭建多节点 Elastic Stack 集群。以下是我的 Elastic Stack 集群的详细信息:
**Elasticsearch**
* 三台服务器,最小化安装 RHEL 8 / CentOS 8
* IP & 主机名192.168.56.40`elasticsearch1.linuxtechi.local`)、192.168.56.50`elasticsearch2.linuxtechi.local`)、192.168.56.60`elasticsearch3.linuxtechi.local`
**Logstash**
* 两台服务器,最小化安装 RHEL 8 / CentOS 8
* IP & 主机名192.168.56.20`logstash1.linuxtechi.local`)、192.168.56.30`logstash2.linuxtechi.local`
**Kibana**
* 一台服务器,最小化安装 RHEL 8 / CentOS 8
* IP & 主机名192.168.56.10`kibana.linuxtechi.local`
**Filebeat**
* 一台服务器,最小化安装 CentOS 7
* IP & 主机名192.168.56.70`web-server`
让我们从设置 Elasticsearch 集群开始,
### 设置 3 节点 Elasticsearch 集群
正如我之前所说的,要设置 Elasticsearch 集群的节点,需要登录到每个节点,设置主机名并配置 yum/dnf 仓库。
使用命令 `hostnamectl` 设置各个节点上的主机名:
```
[root@linuxtechi ~]# hostnamectl set-hostname "elasticsearch1.linuxtechi. local"
[root@linuxtechi ~]# exec bash
[root@linuxtechi ~]#
[root@linuxtechi ~]# hostnamectl set-hostname "elasticsearch2.linuxtechi. local"
[root@linuxtechi ~]# exec bash
[root@linuxtechi ~]#
[root@linuxtechi ~]# hostnamectl set-hostname "elasticsearch3.linuxtechi. local"
[root@linuxtechi ~]# exec bash
[root@linuxtechi ~]#
```
对于 CentOS 8 系统,我们不需要配置任何操作系统包仓库;对于 RHEL 8 服务器,如果你有有效订阅,用红帽订阅注册后即可获得包仓库。如果你想为操作系统包配置本地 yum/dnf 仓库,请参考以下链接:
- [如何使用 DVD 或 ISO 文件在 RHEL 8 服务器上设置本地 Yum / DNF 存储库][3]
在所有节点上配置 Elasticsearch 包仓库,在 `/etc/yum.repos.d/` 文件夹下创建一个包含以下内容的 `elastic.repo` 文件:
```
~]# vi /etc/yum.repos.d/elastic.repo
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
```
保存文件并退出。
在所有三个节点上使用 `rpm` 命令导入 Elastic 公共签名密钥。
```
~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
```
在所有三个节点的 `/etc/hosts` 文件中添加以下行:
```
192.168.56.40 elasticsearch1.linuxtechi.local
192.168.56.50 elasticsearch2.linuxtechi.local
192.168.56.60 elasticsearch3.linuxtechi.local
```
使用 `yum`/`dnf` 命令在所有三个节点上安装 Java
```
[root@linuxtechi ~]# dnf install java-openjdk -y
[root@linuxtechi ~]# dnf install java-openjdk -y
[root@linuxtechi ~]# dnf install java-openjdk -y
```
使用 `yum`/`dnf` 命令在所有三个节点上安装 Elasticsearch
```
[root@linuxtechi ~]# dnf install elasticsearch -y
[root@linuxtechi ~]# dnf install elasticsearch -y
[root@linuxtechi ~]# dnf install elasticsearch -y
```
**注意:** 如果操作系统防火墙已启用并在每个 Elasticsearch 节点中运行,则使用 `firewall-cmd` 命令允许以下端口开放:
```
~]# firewall-cmd --permanent --add-port=9300/tcp
~]# firewall-cmd --permanent --add-port=9200/tcp
~]# firewall-cmd --reload
```
配置 Elasticsearch在所有节点上编辑文件 `/etc/elasticsearch/elasticsearch.yml` 并加入以下内容:
```
~]# vim /etc/elasticsearch/elasticsearch.yml
cluster.name: opn-cluster
node.name: elasticsearch1.linuxtechi.local
network.host: 192.168.56.40
http.port: 9200
discovery.seed_hosts: ["elasticsearch1.linuxtechi.local", "elasticsearch2.linuxtechi.local", "elasticsearch3.linuxtechi.local"]
cluster.initial_master_nodes: ["elasticsearch1.linuxtechi.local", "elasticsearch2.linuxtechi.local", "elasticsearch3.linuxtechi.local"]
```
**注意:** 在每个节点上,在 `node.name` 中填写正确的主机名,在 `network.host` 中填写正确的 IP 地址,其他参数保持不变。
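例如(仅为示意,取值来自上文的节点清单),第二个节点上的这两行应相应改为:
```
node.name: elasticsearch2.linuxtechi.local
network.host: 192.168.56.50
```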
现在使用 `systemctl` 命令在所有三个节点上启动并启用 Elasticsearch 服务:
```
~]# systemctl daemon-reload
~]# systemctl enable elasticsearch.service
~]# systemctl start elasticsearch.service
```
使用下面 `ss` 命令验证 elasticsearch 节点是否开始监听 9200 端口:
```
[root@linuxtechi ~]# ss -tunlp | grep 9200
tcp LISTEN 0 128 [::ffff:192.168.56.40]:9200 *:* users:(("java",pid=2734,fd=256))
[root@linuxtechi ~]#
```
使用以下 `curl` 命令验证 Elasticsearch 群集状态:
```
[root@linuxtechi ~]# curl http://elasticsearch1.linuxtechi.local:9200
[root@linuxtechi ~]# curl -X GET http://elasticsearch2.linuxtechi.local:9200/_cluster/health?pretty
```
命令的输出如下所示:
![Elasticsearch-cluster-status-rhel8][3]
以上输出表明我们已经成功创建了 3 节点的 Elasticsearch 集群,集群的状态也是绿色的。
**注意:** 如果你想修改 JVM 堆大小,可以编辑文件 `/etc/elasticsearch/jvm.options`,并根据你的环境更改以下参数(调整示例见此列表之后):
* `-Xms1g`
* `-Xmx1g`
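例如(仅为示意),要把堆大小调整为 2 GB可以把这两行改为
```
~]# vim /etc/elasticsearch/jvm.options
-Xms2g
-Xmx2g
```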
现在让我们转到 Logstash 节点。
### 安装和配置 Logstash
在两个 Logstash 节点上执行以下步骤。
登录到两个节点,使用 `hostnamectl` 命令设置主机名:
```
[root@linuxtechi ~]# hostnamectl set-hostname "logstash1.linuxtechi.local"
[root@linuxtechi ~]# exec bash
[root@linuxtechi ~]#
[root@linuxtechi ~]# hostnamectl set-hostname "logstash2.linuxtechi.local"
[root@linuxtechi ~]# exec bash
[root@linuxtechi ~]#
```
在两个 logstash 节点的 `/etc/hosts` 文件中添加以下条目:
```
~]# vi /etc/hosts
192.168.56.40 elasticsearch1.linuxtechi.local
192.168.56.50 elasticsearch2.linuxtechi.local
192.168.56.60 elasticsearch3.linuxtechi.local
```
保存文件并退出。
在两个节点上配置 Logstash 仓库,在文件夹 `/etc/yum.repos.d/` 下创建一个包含以下内容的文件 `logstash.repo`
```
~]# vi /etc/yum.repos.d/logstash.repo
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
```
保存并退出文件,运行 `rpm` 命令导入签名密钥:
```
~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
```
使用 `yum`/`dnf` 命令在两个节点上安装 Java OpenJDK
```
~]# dnf install java-openjdk -y
```
从两个节点运行 `yum`/`dnf` 命令来安装 logstash
```
[root@linuxtechi ~]# dnf install logstash -y
[root@linuxtechi ~]# dnf install logstash -y
```
现在配置 Logstash在两个 logstash 节点上执行以下步骤。先创建一个 logstash 配置文件:把 logstash 示例文件复制到 `/etc/logstash/conf.d/` 下:
```
# cd /etc/logstash/
# cp logstash-sample.conf conf.d/logstash.conf
```
编辑配置文件并更新以下内容:
```
# vi conf.d/logstash.conf
input {
beats {
port => 5044
}
}
output {
elasticsearch {
hosts => ["http://elasticsearch1.linuxtechi.local:9200", "http://elasticsearch2.linuxtechi.local:9200", "http://elasticsearch3.linuxtechi.local:9200"]
index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
#user => "elastic"
#password => "changeme"
}
}
```
`output` 部分之下,在 `hosts` 参数中指定所有三个 Elasticsearch 节点的 FQDN其他参数保持不变。
使用 `firewall-cmd` 命令在操作系统防火墙中允许 logstash 端口 “5044”
```
~]# firewall-cmd --permanent --add-port=5044/tcp
~]# firewall-cmd --reload
```
现在,在每个节点上运行以下 `systemctl` 命令,启动并启用 Logstash 服务:
```
~]# systemctl start logstash
~]# systemctl enable logstash
```
使用 `ss` 命令验证 logstash 服务是否开始监听 5044 端口:
```
[root@linuxtechi ~]# ss -tunlp | grep 5044
tcp LISTEN 0 128 *:5044 *:* users:(("java",pid=2416,fd=96))
[root@linuxtechi ~]#
```
以上输出表明 logstash 已成功安装和配置。让我们转到 Kibana 安装。
### 安装和配置 Kibana
登录 Kibana 节点,使用 `hostnamectl` 命令设置主机名:
```
[root@linuxtechi ~]# hostnamectl set-hostname "kibana.linuxtechi.local"
[root@linuxtechi ~]# exec bash
[root@linuxtechi ~]#
```
编辑 `/etc/hosts` 文件并添加以下行:
```
192.168.56.40 elasticsearch1.linuxtechi.local
192.168.56.50 elasticsearch2.linuxtechi.local
192.168.56.60 elasticsearch3.linuxtechi.local
```
使用以下命令设置 Kibana 存储库:
```
[root@linuxtechi ~]# vi /etc/yum.repos.d/kibana.repo
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
[root@linuxtechi ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
```
执行 `yum`/`dnf` 命令安装 kibana
```
[root@linuxtechi ~]# yum install kibana -y
```
通过编辑 `/etc/kibana/kibana.yml` 文件,配置 Kibana
```
[root@linuxtechi ~]# vim /etc/kibana/kibana.yml
…………
server.host: "kibana.linuxtechi.local"
server.name: "kibana.linuxtechi.local"
elasticsearch.hosts: ["http://elasticsearch1.linuxtechi.local:9200", "http://elasticsearch2.linuxtechi.local:9200", "http://elasticsearch3.linuxtechi.local:9200"]
…………
```
启用并启动 kibana 服务:
```
[root@linuxtechi ~]# systemctl start kibana
[root@linuxtechi ~]# systemctl enable kibana
```
在系统防火墙上允许 Kibana 端口 “5601”
```
[root@linuxtechi ~]# firewall-cmd --permanent --add-port=5601/tcp
success
[root@linuxtechi ~]# firewall-cmd --reload
success
[root@linuxtechi ~]#
```
使用以下 URL 访问 Kibana 界面:<http://kibana.linuxtechi.local:5601>
![Kibana-Dashboard-rhel8][4]
从面板上,我们可以检查 Elastic Stack 集群的状态。
![Stack-Monitoring-Overview-RHEL8][5]
这证明我们已经在 RHEL 8 /CentOS 8 上成功地安装并设置了多节点 Elastic Stack 集群。
现在让我们通过 `filebeat` 从其他 Linux 服务器发送一些日志到 logstash 节点。在我的例子中,我有一台 CentOS 7 服务器,我将通过 `filebeat` 将该服务器的所有重要日志推送到 logstash。
登录到 CentOS 7 服务器使用 yum/rpm 命令安装 filebeat 包:
```
[root@linuxtechi ~]# rpm -ivh https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.3.1-x86_64.rpm
Retrieving https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.3.1-x86_64.rpm
Preparing... ################################# [100%]
Updating / installing...
1:filebeat-7.3.1-1 ################################# [100%]
[root@linuxtechi ~]#
```
编辑 `/etc/hosts` 文件并添加以下内容:
```
192.168.56.20 logstash1.linuxtechi.local
192.168.56.30 logstash2.linuxtechi.local
```
现在配置 `filebeat`,以便它可以使用负载均衡技术向 logstash 节点发送日志。编辑文件 `/etc/filebeat/filebeat.yml`,进行以下修改:
* 在 `filebeat.inputs:` 部分将 `enabled: false` 更改为 `enabled: true`,并在 `paths` 参数下指定要发送到 logstash 的日志文件的位置
* 注释掉 `output.elasticsearch:` 及其 `hosts` 参数
* 取消 `output.logstash:` 和 `hosts:` 的注释,在 `hosts` 参数中添加两个 logstash 节点,并设置 `loadbalance: true`
```
[root@linuxtechi ~]# vi /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
enabled: true
paths:
- /var/log/messages
- /var/log/dmesg
- /var/log/maillog
- /var/log/boot.log
#output.elasticsearch:
# hosts: ["localhost:9200"]
output.logstash:
hosts: ["logstash1.linuxtechi.local:5044", "logstash2.linuxtechi.local:5044"]
loadbalance: true
```
使用下面的 2 个 `systemctl` 命令 启动并启用 `filebeat` 服务:
```
[root@linuxtechi ~]# systemctl start filebeat
[root@linuxtechi ~]# systemctl enable filebeat
```
现在转到 Kibana 用户界面,验证新索引是否可见。
从左侧栏中选择 “Management” 选项,然后单击 Elasticsearch 下的 “Index Management”
![Elasticsearch-index-management-Kibana][6]
正如我们上面看到的,索引现在是可见的。现在让我们来创建索引模式。
点击 Kibana 部分的 “Index Patterns”它将提示我们创建一个新模式点击 “Create Index Pattern”,并将模式名称指定为 “filebeat”
![Define-Index-Pattern-Kibana-RHEL8][7]
点击下一步。
选择 “Timestamp” 作为索引模式的时间过滤器,然后单击 “Create index pattern”
![Time-Filter-Index-Pattern-Kibana-RHEL8][8]
![filebeat-index-pattern-overview-Kibana][9]
现在单击查看实时的 filebeat 索引模式:
![Discover-Kibana-REHL8][10]
这表明 Filebeat 代理已配置成功,我们能够在 Kibana 仪表盘上看到实时日志。
以上就是本文的全部内容。希望这些步骤能帮助你在 RHEL 8 / CentOS 8 系统上搭建 Elastic Stack 集群,欢迎分享你的反馈和意见。
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/setup-multinode-elastic-stack-cluster-rhel8-centos8/
作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[heguangzhi](https://github.com/heguangzhi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Elastic-Stack-Cluster-RHEL8-CentOS8.jpg
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Elasticsearch-cluster-status-rhel8.jpg
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Kibana-Dashboard-rhel8.jpg
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Stack-Monitoring-Overview-RHEL8.jpg
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Elasticsearch-index-management-Kibana.jpg
[7]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Define-Index-Pattern-Kibana-RHEL8.jpg
[8]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Time-Filter-Index-Pattern-Kibana-RHEL8.jpg
[9]: https://www.linuxtechi.com/wp-content/uploads/2019/09/filebeat-index-pattern-overview-Kibana.jpg
[10]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Discover-Kibana-REHL8.jpg


@ -1,22 +1,24 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11389-1.html)
[#]: subject: (How to remove carriage returns from text files on Linux)
[#]: via: (https://www.networkworld.com/article/3438857/how-to-remove-carriage-returns-from-text-files-on-linux.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
如何在 Linux 中删除文本中的回车
如何在 Linux 中删除文本中的回车字符
======
当回车(也称为 Ctrl+M让你紧张时别担心。有几种简单的方法消除它们。
[Kim Siever][1]
回车可以往回追溯很长一段时间 - 早在打字机上就有一个机械装置或杠杆将承载纸滚筒的机架移到最后边,以便重新在左侧输入字母。他们在 Windows 的文本上保留了它,但从未在 Linux 系统上使用过。当你尝试在 Linux 上处理在 Windows 上创建的文件时,这种不兼容性有时会导致问题,但这是一个非常容易解决的问题
> 当回车字符(`Ctrl+M`)让你紧张时,别担心。有几种简单的方法消除它们。
如果你使用 **od**octal dump命令查看文件那么回车也称为 **Ctrl+M**)字符将显示为八进制的 15。字符 **CRLF** 通常用于表示在 Windows 文本上结束行的回车符和换行符序列。那些注意看八进制转储的会看到 **\r \n**。相比之下Linux 文本仅以换行符结束。
![](https://img.linux.net.cn/data/attachment/album/201909/25/214211xenk2dqfepx3xemm.jpg)
这有一个 **od** 输出的示例,高亮显示了行中的 **CRLF** 字符,以及它的八进制。
“回车”字符可以往回追溯很长一段时间 —— 早在打字机上就有一个机械装置或杠杆将承载纸滚筒的机架移到右边,以便可以重新在左侧输入字母。他们在 Windows 上的文本文件上保留了它,但从未在 Linux 系统上使用过。当你尝试在 Linux 上处理在 Windows 上创建的文件时,这种不兼容性有时会导致问题,但这是一个非常容易解决的问题。
如果你使用 `od`<ruby>八进制转储<rt>octal dump</rt></ruby>)命令查看文件,那么回车(也用 `Ctrl+M` 代表)字符将显示为八进制的 15。字符 `CRLF` 通常用于表示 Windows 文本文件中的一行结束的回车符和换行符序列。那些注意看八进制转储的会看到 `\r\n`。相比之下Linux 文本仅以换行符结束。
这有一个 `od` 输出的示例,高亮显示了行中的 `CRLF` 字符,以及它的八进制。
```
$ od -bc testfile.txt
@ -40,14 +42,14 @@ $ od -bc testfile.txt
#### dos2unix
你可能会在安装上遇到麻烦,但 **dos2unix** 可能是将 Windows 文本转换为 Unix/Linux 文本的最简单方法。一个命令带上一个参数就行了。不需要第二个文件名。该文件会被直接更改。
你可能会在安装时遇到麻烦,但 `dos2unix` 可能是将 Windows 文本转换为 Unix/Linux 文本的最简单方法。一个命令带上一个参数就行了。不需要第二个文件名。该文件会被直接更改。
```
$ dos2unix testfile.txt
dos2unix: converting file testfile.txt to Unix format...
```
你应该看到文件长度减少,具体取决于它包含的行数。包含 100 行的文件可能会缩小 99 个字符,因为只有最后一行不会以 **CRLF** 字符结尾。
你应该会发现文件长度减少,具体取决于它包含的行数。包含 100 行的文件可能会缩小 99 个字符,因为只有最后一行不会以 `CRLF` 字符结尾。
之前:
@ -67,31 +69,29 @@ dos2unix: converting file testfile.txt to Unix format...
$ find . -type f -exec dos2unix {} \;
```
在此命令中,我们使用 find 查找常规文件,然后运行 **dos2unix** 命令一次转换一个。命令中的 {} 将被替换为文件名。运行时,你应该处于包含文件的目录中。此命令可能会损坏其他类型的文件,例如除了文本文件外在上下文中包含八进制 15 的文件(如,镜像文件中的字节)。
在此命令中,我们使用 `find` 查找常规文件,然后运行 `dos2unix` 命令一次转换一个。命令中的 `{}` 将被替换为文件名。运行时,你应该处于包含文件的目录中。此命令可能会损坏其他类型的文件,例如除了文本文件外在上下文中包含八进制 15 的文件(如,镜像文件中的字节)。
#### sed
你还可以使用流编辑器 **sed** 来删除回车符。但是,你必须提供第二个文件名。以下是例子:
你还可以使用流编辑器 `sed` 来删除回车符。但是,你必须提供第二个文件名。以下是例子:
```
$ sed -e "s/^M//" before.txt > after.txt
```
一件需要注意的重要的事情是,请不要输入你看到的字符。你必须按下 **Ctrl+V** 后跟 **Ctrl+M** 来输入 **^M**。 “s” 是替换命令。斜杠将我们要查找的文本Ctrl + M和要替换的文本这里是空)分开。
一件需要注意的重要的事情是,请不要输入你看到的字符。你必须按下 `Ctrl+V` 后跟 `Ctrl+M` 来输入 `^M`。`s` 是替换命令。斜杠将我们要查找的文本(`Ctrl + M`)和要替换的文本(这里为空)分开。
#### vi
你甚至可以使用 **vi** 删除回车符(**Ctrl+M**),但这里假设你没有打开数百个文件,或许也在做一些其他的修改。你可以键入“**:**” 进入命令行,然后输入下面的字符串。与 **sed** 一样,命令中 **^M** 需要通过 **Ctrl+V** 输入 **^**,然后 **Ctrl+M** 插入**M**。 **%s**是替换操作,斜杠再次将我们要删除的字符和我们想要替换它的文本(空)分开。 “**g**”(全局)意味在所有行上执行。
你甚至可以使用 `vi` 删除回车符(`Ctrl+M`),但这里假设你没有打开数百个文件,或许也在做一些其他的修改。你可以键入 `:` 进入命令行,然后输入下面的字符串。与 `sed` 一样,命令中 `^M` 需要通过 `Ctrl+V` 输入 `^`,然后 `Ctrl+M` 插入 `M`。`%s` 是替换操作,斜杠再次将我们要删除的字符和我们想要替换它的文本(空)分开。 `g`(全局)意味在所有行上执行。
```
:%s/^M//g
```
#### 总结
### 总结
**dos2unix** 命令可能是最容易记住的,也是最可靠地从文本中删除回车的方法。 其他选择使用起来有点困难,但它们提供相同的基本功能。
在 [Facebook][3] 和 [LinkedIn][4] 上加入 Network World 社区,评论最热主题。
`dos2unix` 命令可能是最容易记住的,也是从文本中删除回车的最可靠的方法。其他选择使用起来有点困难,但它们提供相同的基本功能。
--------------------------------------------------------------------------------
@ -100,7 +100,7 @@ via: https://www.networkworld.com/article/3438857/how-to-remove-carriage-returns
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,143 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Great Open Source Divide: ICE, Hippocratic License and the Controversy)
[#]: via: (https://itsfoss.com/hippocratic-license/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
The Great Open Source Divide: ICE, Hippocratic License and the Controversy
======
_**Coraline Ada Ehmke has created the “Hippocratic License” that “adds ethics” to open source projects. But this seems to be just the beginning of a controversy, as the “Hippocratic License” may not be open source at all.**_
Coraline Ada Ehmke, better known for her [Contributor Covenant][1], has modified the MIT open source license into the Hippocratic License, which adds a couple of conditions to the existing MIT license. Before you learn what it is, let me give you the context on why it's been created in the first place.
### No Tech for ICE
![No Tech For ICE | Image Credit Science for All][2]
Immigration and Customs Enforcement agency of the US government, [ICE][3], has been condemned by human rights groups and activists for inhumane practices of separating children from their parents at the US-Mexico border under the new strict immigration policy.
Some techies have been vocal against the actions of ICE and they dont want ICE to use tech projects they work on as it helps ICE in one way or another.
The “[No Tech for ICE][4]” movement has been going on for some time but it got highlighted once again this week when an engineer named [Seth Vargo took down his open source project after finding ICE was using it][5] through Chef.
The project was called [Chef Sugar][6], a Ruby library for simplifying work with [Chef][7], a platform for configuration management. ICE is one of the clients for Chef. The project withdrawal momentarily impacted Chef and its clients. Chef swiftly fixed the problem by uploading the Chef Sugar project on its own GitHub repository.
Despite the trouble it caused for a number of companies using Chef worldwide, Vargo made a point. The pressure tactic worked and after [initial resistance][8], Chef caved in and [agreed to not renew its contract with ICE][9].
Now Chef Sugar is an open source project and its developer cannot stop people from forking it and continue using it. And thats where [Coraline Ada Ehmke][10] came up with a new licensing model called Hippocratic License.
### What is Hippocratic License?
![][11]
To enable more developers to forbid unethical organizations like ICE from using their open source projects, Coraline Ada Ehmke introduced a new license called the “Hippocratic License”.
The term Hippocratic relates to ancient Greek physician [Hippocrates][12]. The [Hippocratic oath][13] is an ethical oath (historically taken by physicians) and one of the crucial part of the oath is “I will abstain from all intentional wrong-doing and harm”. This part of the oath is known as “Primum non nocere” or “First do no harm”.
The entire terminology is significant. The license is called Hippocratic license and is hosted on a domain called [firstdonoharm.dev][14] and the idea is to enable the developers to be not part of intentional wrong-doing.
The [Hippocratic License][14] is based on the popular [MIT open source license][15]. It adds this additional and crucial condition:
> The software may not be used by individuals, corporations, governments, or other groups for systems or activities that actively and knowingly endanger, harm, or otherwise threaten the physical, mental, economic, or general well-being of underprivileged individuals or groups.
### Is Hippocratic license really an open source license?
No, it is not. Thats what [Open Source Initiative][16] (OSI) says. OSI is the community-recognized body for reviewing and approving licenses as Open Source Definition conformant.
> The intro to the Hippocratic Licence might lead some to believe
> the license is an Open Source Software licence, and software distributed under the Hippocratic Licence is Open Source Software.
>
> As neither is true, we ask you to please modify the language to remove confusion.
>
> — OpenSourceInitiative (@OpenSourceOrg) [September 23, 2019][17]
Coraline first [thanked][18] OSI for pointing it out and then went on to attack it as an “open source problem”.
> This is the problem: the current structure of open source specifically prohibits us from protecting our labor from use by organizations like ICE.
>
> That's not a license problem. That's an Open Source™ problem. <https://t.co/XEyu5VNUMJ>
>
> — Coraline Ada Ehmke (@CoralineAda) [September 23, 2019][19]
Coraline clearly doesn't accept that OSI (Open Source Initiative) and [FSF][20] (Free Software Foundation) have the authority on the matter of defining open source and free software.
> OSI and FSF are not the real arbiters of what is Open Source and what is Free Software.
>
> We are.
>
> — Coraline Ada Ehmke (@CoralineAda) [September 22, 2019][21]
So if OSI and FSF, the organizations created for the sole purpose of defining open source and free software, are not the authority on this subject, then who is? The “we” in “we are” of Coraline's statement is ambiguous. Does “we” represent the people who agree with Coraline's view, or does “we” mean the entire open source community? If it's the latter, then Coraline doesn't represent or speak for every person in the open source community.
### Does it solve the problem or does it create more problems? Can open source be neutral?
> Developers are (finally) becoming more aware of the impact that their work has on the world, and in particular on underprivileged people.
>
> It's late to come to that realization, but not TOO LATE to do something about it.
>
> The lesson here is that TECH IS NOT NEUTRAL.
>
> — Coraline Ada Ehmke (@CoralineAda) [September 23, 2019][22]
Everything looks good from an idealistic point of view at first glance. It seems like this new license will solve the problem of evil people using open source projects.
But I see a problem here and that problem is the perception of evil. What you consider evil depends on your point of view.
A number of “No Tech for ICE” supporting techies are also supporters of ANTIFA. [ANTIFA has been indulging in physical violence from time to time][23]. What if a bunch of cis white men, who found [far-left organizations like ANTIFA][24] evil, stop them from using their open source projects? What if [Richard Stallman comes back from his forced retirement][25] and starts selecting people who can use GNU projects based on whether or not they agree with his views?
The license condition also says “knowingly endanger, harm, or otherwise threaten the physical, mental, economic, or general well-being of underprivileged individuals or groups”.
So the entire clause applies only to “underprivileged individuals or groups”, not others? So the others don't get the same rights anymore? This should not come as a surprise, because Coraline is the same person who took extreme measures to harm the economic well-being of a developer ([Coraline disagreed with his views][26]) by doing everything in her capacity to get him fired from his job.
Until these concerns are addressed, the Hippocratic License will unfortunately remain a “hypocrite license”.
Where will this end? How many open source projects will be forked between sparring groups of different ideologies? Why should the rest of the world suffer from the American domestic politics? Can we not leave open source undivided?
Your views are welcome. Please note that abusive comments wont be published.
If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][27].
--------------------------------------------------------------------------------
via: https://itsfoss.com/hippocratic-license/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://www.contributor-covenant.org/
[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/no-tech-for-ice.jpg?resize=800%2C340&ssl=1
[3]: https://en.wikipedia.org/wiki/U.S._Immigration_and_Customs_Enforcement
[4]: https://notechforice.com/
[5]: https://www.zdnet.com/article/developer-takes-down-ruby-library-after-he-finds-out-ice-was-using-it/
[6]: https://github.com/sethvargo/chef-sugar
[7]: https://www.chef.io/
[8]: https://blog.chef.io/2019/09/19/chefs-position-on-customer-engagement-in-the-public-and-private-sectors/
[9]: https://www.vice.com/en_us/article/qvg3q5/chef-not-renewing-ice-immigration-customs-enforcement-contract-after-code-deleting-protest
[10]: https://where.coraline.codes/
[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/hippocratic-license.png?ssl=1
[12]: https://en.wikipedia.org/wiki/Hippocrates
[13]: https://en.wikipedia.org/wiki/Hippocratic_Oath
[14]: https://firstdonoharm.dev/
[15]: https://opensource.org/licenses/MIT
[16]: https://opensource.org/
[17]: https://twitter.com/OpenSourceOrg/status/1176229398929977344?ref_src=twsrc%5Etfw
[18]: https://twitter.com/CoralineAda/status/1176246765676302336
[19]: https://twitter.com/CoralineAda/status/1176262778459496454?ref_src=twsrc%5Etfw
[20]: https://www.fsf.org/
[21]: https://twitter.com/CoralineAda/status/1175878569169432582?ref_src=twsrc%5Etfw
[22]: https://twitter.com/CoralineAda/status/1176207120133447680?ref_src=twsrc%5Etfw
[23]: https://www.aol.com/article/news/2017/05/04/what-is-antifa-controversial-far-left-group-defends-use-of-violence/22067671/?guccounter=1&guce_referrer=aHR0cHM6Ly9lbi53aWtpcGVkaWEub3JnLw&guce_referrer_sig=AQAAAHYUcIrnC8zD4UX-W4N2Vshf-QVSVDTwNXlTNmy4gbUJUb9smDm7W9Bf1IelnBGz5x0QAdI-O3Zhm9obQjZcORvHjvp3J8tUgEbdlpKNef-jk1rTH8BTZOP7YJule2n7wbIc4wDHPMFjrZUsMx-kypQYVCpkjtEDltAHHo-73ZD_
[24]: https://www.bbc.com/news/world-us-canada-40930831
[25]: https://itsfoss.com/richard-stallman-controversy/
[26]: https://itsfoss.com/linux-code-of-conduct/
[27]: https://reddit.com/r/linuxusersgroup


@ -0,0 +1,111 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to contribute to GitLab)
[#]: via: (https://opensource.com/article/19/9/how-contribute-gitlab)
[#]: author: (Ray Paik https://opensource.com/users/rpaik)
How to contribute to GitLab
======
Help the community by contributing to code, documentation, translations, user experience design, and more.
![Woman programming][1]
I think many people are familiar with GitLab—the company or the software. What many may not realize is that GitLab is also an open source community that started with this [first commit][2] from our co-founder [Dmitriy Zaporozhet][3] in 2011. As a matter of fact, we have [more than 2,000 contributors][4] from the wider community who have contributed to GitLab.
The wider community contributions span code, documentation, translations, user experience design, etc. If you are interested in open source and in contributing to a complete DevOps platform, I'd like you to consider joining the GitLab community.
You can find things that you can start contributing to by looking at issues with the ["Accepting merge requests" label sorted by weight][5]. Low-weight issues will be easier to accomplish. If you find an issue that you're interested in working on, be sure to add a comment on the issue saying that you'd like to work on this, and verify that no one is already working on it. If you cannot find an issue that you are interested in but have an idea for a contribution (e.g., bug fixes, documentation update, new features, etc.), we encourage you to open a new issue or even [open a merge request][6] (MR) to start working with reviewers or other community members.
If you are interested, here are the different areas at GitLab where you can contribute and how you can get started.
### Development
Whether it's fixing bugs, adding new features, or helping with reviews, GitLab is a great open source community for developers from all backgrounds. Many contributors have started contributing to GitLab development without being familiar with languages like Ruby. You can follow the steps below to start contributing to GitLab development:
1. For GitLab development, you should download and set up the [GitLab Development Kit][7]. The GDK README has instructions on how you can get started; a minimal sketch follows this list.
2. [Fork the GitLab project][8] that you want to contribute to.
3. Add the feature or fix the bug you want to work on.
4. If you work on a feature change that impacts users or admins, please also [update the documentation][9].
5. [Open an MR][6] to merge your code and its documentation. The earlier you open an MR, the sooner you can get feedback. You can mark your MR as a [Work in Progress][10] so that people know that you're not done yet.
6. Add tests, if needed, as well as a [changelog entry][11] so you can be credited for your work.
7. Make sure the test suite is passing.
8. Wait for a reviewer. A "Community contribution" label will be added to your MR, and it will be triaged within a few days and a reviewer notified. You may need multiple reviews/iterations depending on the size of the change. If you don't hear from anyone in several days, feel free to mention the Merge Request Coaches by typing **@gitlab-org/coaches** in a comment.
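For step 1, here is a minimal sketch of getting the GDK onto your machine (treat the GDK README as authoritative; the exact commands below are an assumption based on its documented flow):
```
gem install gitlab-development-kit   # installs the `gdk` command
gdk init                             # creates a gitlab-development-kit directory
cd gitlab-development-kit
gdk install                          # clones GitLab and installs its dependencies
```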
### Documentation
Contributing to documentation is a great way to get familiar with the GitLab development process and to meet reviewers and other community members. From fixing typos to better organizing our documentation, you will find many areas where you can contribute. Here are the recommended steps for people interested in helping with documentation:
1. Visit [https://docs.gitlab.com][12] for the latest GitLab documentation.
2. If you find a page that needs improvement, click the "Edit this page" link at the bottom of the page, fork the project, and modify the documentation.
3. Open an MR and follow the [branch-naming convention for documentation][13] so you can speed up the continuous integration process.
4. Wait for a reviewer. A "Community contribution" label will be added to your MR and it will be triaged within a few days and a reviewer notified. If you don't hear from a reviewer in several days, feel free to mention **@gl-docsteam** in a comment.
You may also want to reference [GitLab Documentation Guidelines][9] as you contribute to documentation.
### Translation
GitLab is being translated into more than 35 languages, and this is driven primarily by wider community members. If you speak another language, you can join more than 1,500 community members who are helping translate GitLab.
The translation is managed at <https://translate.gitlab.com> using [CrowdIn][14]. First, a phrase (e.g., one that appears in the GitLab user interface or in error messages) needs to be internationalized before it can be translated. The internationalized phrases are then made available for translations on <https://translate.gitlab.com>. Here's how you can help us speak your language:
1. Log into <https://translate.gitlab.com> (you can use your GitLab login).
2. Find a language you'd like to contribute to.
3. Improve existing translations, vote on new translations, and/or contribute new translations to your given language.
4. Once your translation is approved, it will be merged into future GitLab releases.
### UX design
In order to help make a product that is easy to use and built for a diverse group of people, we welcome contributions from the wider community. You can help us better understand how you use GitLab and your needs as you work with the GitLab UX team members. Here's how you can get started:
1. Visit [https://design.gitlab.com][15] for an overview of GitLab's open source Design System. You may also find the [Get Started guide][16] to be helpful.
2. Choose an [issue][17] to work on. If you can't find an issue that you are interested in, you can open a new issue to start a conversation and get early feedback.
3. Create an MR to make changes that reflect the issue you're working on.
4. Wait for a reviewer. A "Community contribution" label will be added to your MR, and it will be triaged within a few days and a reviewer notified. If you don't hear from anyone in several days, feel free to mention **@gitlab-com/gitlab-ux** in a comment.
### Getting help
If you need any help while contributing to GitLab, you can refer to the [Getting Help][18] section on our Contribute page for available resources. One thing I want to emphasize is that you should not feel afraid to [mention][19] people at GitLab in issues or MRs if you have any questions or if you feel like someone has not been responsive. GitLab team members should be responsive to other community members whether they work at GitLab or not.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/how-contribute-gitlab
作者:[Ray Paik][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/rpaik
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop-music-headphones.png?itok=EQZ2WKzy (Woman programming)
[2]: https://gitlab.com/gitlab-org/gitlab-ce/commit/9ba1224867665844b117fa037e1465bb706b3685
[3]: https://about.gitlab.com/company/team/#dzaporozhets
[4]: https://contributors.gitlab.com
[5]: https://gitlab.com/groups/gitlab-org/-/issues?assignee_id=None&label_name%5B%5D=Accepting+merge+requests&scope=all&sort=weight&state=opened&utf8=%E2%9C%93
[6]: https://docs.gitlab.com/ee/gitlab-basics/add-merge-request.html
[7]: https://gitlab.com/gitlab-org/gitlab-development-kit
[8]: https://docs.gitlab.com/ee/workflow/forking_workflow.html#creating-a-fork
[9]: https://docs.gitlab.com/ee/development/documentation/
[10]: https://docs.gitlab.com/ee/user/project/merge_requests/work_in_progress_merge_requests.html
[11]: https://docs.gitlab.com/ee/development/changelog.html
[12]: https://docs.gitlab.com/
[13]: https://docs.gitlab.com/ee/development/documentation/index.html#branch-naming
[14]: https://crowdin.com/
[15]: https://design.gitlab.com/
[16]: https://design.gitlab.com/contribute/get-started/
[17]: https://gitlab.com/gitlab-org/gitlab-services/design.gitlab.com/issues
[18]: https://about.gitlab.com/community/contribute/#getting-help
[19]: https://docs.gitlab.com/ee/user/group/subgroups/#mentioning-subgroups


@ -1,114 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (PsiACE)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Building a Messenger App: Schema)
[#]: via: (https://nicolasparada.netlify.com/posts/go-messenger-schema/)
[#]: author: (Nicolás Parada https://nicolasparada.netlify.com/)
Building a Messenger App: Schema
======
New post on building a messenger app. You already know this kind of app. They allow you to have conversations with your friends. [Facebook Messenger][1], [WhatsApp][2] and [Skype][3] are a few examples. Tho, these apps allow you to send pictures, stream video, record audio, chat with large groups of people, etc… We'll try to keep it simple and just send text messages between two users.
We'll use [CockroachDB][4] as the SQL database, [Go][5] as the backend language, and JavaScript to make a web app.
In this first post, we're getting around the database design.
```
CREATE TABLE users (
id SERIAL NOT NULL PRIMARY KEY,
username STRING NOT NULL UNIQUE,
avatar_url STRING,
github_id INT NOT NULL UNIQUE
);
```
Of course, this app requires users. We will go with social login. I selected just [GitHub][6] so we keep a reference to the github user ID there.
```
CREATE TABLE conversations (
id SERIAL NOT NULL PRIMARY KEY,
last_message_id INT,
INDEX (last_message_id DESC)
);
```
Each conversation references the last message. Every time we insert a new message, we'll go and update this field. (I'll add the foreign key constraint below).
… You can say that we can group conversations and get the last message that way, but that will add much more complexity to the queries.
```
CREATE TABLE participants (
user_id INT NOT NULL REFERENCES users ON DELETE CASCADE,
conversation_id INT NOT NULL REFERENCES conversations ON DELETE CASCADE,
messages_read_at TIMESTAMPTZ NOT NULL DEFAULT now(),
PRIMARY KEY (user_id, conversation_id)
);
```
Even tho I said conversations will be between just two users, we'll go with a design that allows the possibility to add multiple participants to a conversation. That's why we have a participants table between the conversation and users.
To know whether the user has unread messages, we have the `messages_read_at` field. Every time the user reads messages in a conversation, we update this value, so we can compare it with the conversation's last message `created_at` field.
```
CREATE TABLE messages (
id SERIAL NOT NULL PRIMARY KEY,
content STRING NOT NULL,
user_id INT NOT NULL REFERENCES users ON DELETE CASCADE,
conversation_id INT NOT NULL REFERENCES conversations ON DELETE CASCADE,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
INDEX(created_at DESC)
);
```
Last but not least is the messages table: it saves a reference to the user who created it and the conversation in which it goes. It has an index on `created_at` too, to sort messages.
```
ALTER TABLE conversations
ADD CONSTRAINT fk_last_message_id_ref_messages
FOREIGN KEY (last_message_id) REFERENCES messages ON DELETE SET NULL;
```
And yep, the fk constraint I said.
These four tables will do the trick. You can save those queries to a file and pipe it to the Cockroach CLI. First start a new node:
```
cockroach start --insecure --host 127.0.0.1
```
Then create the database and tables:
```
cockroach sql --insecure -e "CREATE DATABASE messenger"
cat schema.sql | cockroach sql --insecure -d messenger
```
* * *
That's it. In the next part we'll do the login. Wait for it.
[Source Code][7]
--------------------------------------------------------------------------------
via: https://nicolasparada.netlify.com/posts/go-messenger-schema/
作者:[Nicolás Parada][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://nicolasparada.netlify.com/
[b]: https://github.com/lujun9972
[1]: https://www.messenger.com/
[2]: https://www.whatsapp.com/
[3]: https://www.skype.com/
[4]: https://www.cockroachlabs.com/
[5]: https://golang.org/
[6]: https://github.com/
[7]: https://github.com/nicolasparada/go-messenger-demo

View File

@ -1,106 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A Quick Look at Elvish Shell)
[#]: via: (https://itsfoss.com/elvish-shell/)
[#]: author: (John Paul https://itsfoss.com/author/john/)
A Quick Look at Elvish Shell
======
Everyone who comes to this site has some knowledge (no matter how slight) of the Bash shell that comes as the default on so many systems. Over the years, there have been several attempts to create shells that solve some of Bash's shortcomings. One such shell is Elvish, which we will look at today.
### What is Elvish Shell?
![Pipelines In Elvish][1]
[Elvish][2] is more than just a shell. It is [also][3] “an expressive programming language”. It has a number of interesting features including:
* Written in Go
* Built-in file manager, inspired by the [Ranger file manager][4] (`Ctrl + N`)
* Searchable command history (`Ctrl + R`)
* History of directories visited (`Ctrl + L`)
* Powerful pipelines that support structured data, such as lists, maps, and functions
  * Includes a “standard set of control structures: conditional control with `if`, loops with `for` and `while`, and exception handling with `try`”
* Support for [third-party modules via a package manager to extend Elvish][5]
* Licensed under the BSD 2-Clause license
“Why is it named Elvish?” I hear you shout. Well, according to [their website][6], they chose their current name because:
> In roguelikes, items made by the elves have a reputation of high quality. These are usually called elven items, but “elvish” was chosen because it ends with “sh”, a long tradition of Unix shells. It also rhymes with fish, one of the shells that influenced the philosophy of Elvish.
### How to Install Elvish Shell
Elvish is available in several mainstream distributions.
Note that the software is very young. The most recent version is 0.12. According to the project's [GitHub page][3]: “Despite its pre-1.0 status, it is already suitable for most daily interactive use.”
![Elvish Control Structures][7]
#### Debian and Ubuntu
Elvish packages were introduced into Debian Buster and Ubuntu 17.10. Unfortunately, those packages are out of date and you will need to use a [PPA][8] to install the latest version. You will need to use the following commands:
```
sudo add-apt-repository ppa:zhsj/elvish
sudo apt update
sudo apt install elvish
```
#### Fedora
Elvish is not available in the main Fedora repos. You will need to add the [FZUG Repository][9] to install Elvish. To do so, you will need to use these commands:
```
sudo dnf config-manager --add-repo=http://repo.fdzh.org/FZUG/FZUG.repo
sudo dnf install elvish
```
#### Arch
Elvish is available in the [Arch User Repository][10].
I assume you know [how to change your shell in Linux][11], so after installing it you can switch to Elvish and use it.
### Final Thoughts on Elvish Shell
Personally, I have no reason to install Elvish on any of my systems. I can get most of its features by installing a couple of small command line programs or using already installed programs.
For example, the ability to search past commands already exists in Bash and it works pretty well. If you want to improve your ability to search past commands, I would recommend installing [fzf][12] instead. Fzf uses fuzzy search, so you don't need to remember the exact command you are looking for. Fzf also allows you to preview and open files.
I do think that the fact that Elvish is also a programming language is neat, but I'll stick with Bash shell scripting until Elvish matures a little more.
Have you ever used Elvish? Do you think it would be worthwhile to install Elvish? What is your favorite Bash replacement? Please let us know in the comments below.
If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][13].
--------------------------------------------------------------------------------
via: https://itsfoss.com/elvish-shell/
作者:[John Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/pipelines-in-elvish.png?fit=800%2C421&ssl=1
[2]: https://elv.sh/
[3]: https://github.com/elves/elvish
[4]: https://ranger.github.io/
[5]: https://github.com/elves/awesome-elvish
[6]: https://elv.sh/ref/name.html
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/Elvish-control-structures.png?fit=800%2C425&ssl=1
[8]: https://launchpad.net/%7Ezhsj/+archive/ubuntu/elvish
[9]: https://github.com/FZUG/repo/wiki/Add-FZUG-Repository
[10]: https://aur.archlinux.org/packages/elvish/
[11]: https://linuxhandbook.com/change-shell-linux/
[12]: https://github.com/junegunn/fzf
[13]: http://reddit.com/r/linuxusersgroup

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (way-ww)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,476 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (heguangzhi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Setup Multi Node Elastic Stack Cluster on RHEL 8 / CentOS 8)
[#]: via: (https://www.linuxtechi.com/setup-multinode-elastic-stack-cluster-rhel8-centos8/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
How to Setup Multi Node Elastic Stack Cluster on RHEL 8 / CentOS 8
======
Elastic Stack, widely known as the **ELK stack**, is a group of open source products including **Elasticsearch**, **Logstash** and **Kibana**. The Elastic Stack is developed and maintained by the Elastic company. Using the Elastic Stack, one can feed system logs to Logstash, a data collection engine which accepts logs or data from all sources, normalizes them, and then forwards them to Elasticsearch for **analyzing**, **indexing**, **searching** and **storing**. Finally, using Kibana, one can visualize that data and also create interactive graphs and diagrams based on user queries.
[![Elastic-Stack-Cluster-RHEL8-CentOS8][1]][2]
In this article we will demonstrate how to set up a multi-node Elastic Stack cluster on RHEL 8 / CentOS 8 servers. Following are the details of my Elastic Stack cluster:
### Elasticsearch:
* Three Servers with Minimal RHEL 8 / CentOS 8
  * IPs & hostnames 192.168.56.40 (elasticsearch1.linuxtechi.local), 192.168.56.50 (elasticsearch2.linuxtechi.local), 192.168.56.60 (elasticsearch3.linuxtechi.local)
### Logstash:
* Two Servers with minimal RHEL 8 / CentOS 8
  * IPs & hostnames 192.168.56.20 (logstash1.linuxtechi.local), 192.168.56.30 (logstash2.linuxtechi.local)
### Kibana:
* One Server with minimal RHEL 8 / CentOS 8
* Hostname kibana.linuxtechi.local
* IP 192.168.56.10
### Filebeat:
* One Server with minimal CentOS 7
  * IP & hostname 192.168.56.70 (web-server)
Let's start with the Elasticsearch cluster setup.
#### Setup 3 node Elasticsearch cluster
As I have already stated, I have kept three nodes for the Elasticsearch cluster. Log in to each node, set the hostname and configure the yum/dnf repositories.
Use the hostnamectl command below to set the hostname on the respective nodes:
```
[root@linuxtechi ~]# hostnamectl set-hostname "elasticsearch1.linuxtechi. local"
[root@linuxtechi ~]# exec bash
[root@linuxtechi ~]#
[root@linuxtechi ~]# hostnamectl set-hostname "elasticsearch2.linuxtechi. local"
[root@linuxtechi ~]# exec bash
[root@linuxtechi ~]#
[root@linuxtechi ~]# hostnamectl set-hostname "elasticsearch3.linuxtechi. local"
[root@linuxtechi ~]# exec bash
[root@linuxtechi ~]#
```
On a CentOS 8 system we don't need to configure any OS package repository; on a RHEL 8 server, if you have a valid subscription, subscribe it with Red Hat to get the package repositories. In case you want to configure a local yum/dnf repository for OS packages, refer to the URL below:
[How to Setup Local Yum/DNF Repository on RHEL 8 Server Using DVD or ISO File][3]
Configure the Elasticsearch package repository on all the nodes: create a file named elastic.repo under the /etc/yum.repos.d/ folder with the following content:
```
~]# vi /etc/yum.repos.d/elastic.repo
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
```
save &amp; exit the file
Use below rpm command on all three nodes to import Elastics public signing key
```
~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
```
Add the following lines to the /etc/hosts file on all three nodes:
```
192.168.56.40 elasticsearch1.linuxtechi.local
192.168.56.50 elasticsearch2.linuxtechi.local
192.168.56.60 elasticsearch3.linuxtechi.local
```
Install Java on all three nodes using the yum/dnf command:
```
[root@linuxtechi ~]# dnf install java-openjdk -y
[root@linuxtechi ~]# dnf install java-openjdk -y
[root@linuxtechi ~]# dnf install java-openjdk -y
```
Install Elasticsearch using the dnf command below on all three nodes:
```
[root@linuxtechi ~]# dnf install elasticsearch -y
[root@linuxtechi ~]# dnf install elasticsearch -y
[root@linuxtechi ~]# dnf install elasticsearch -y
```
**Note:** In case the OS firewall is enabled and running on each Elasticsearch node, allow the following ports using the firewall-cmd commands below:
```
~]# firewall-cmd --permanent --add-port=9300/tcp
~]# firewall-cmd --permanent --add-port=9200/tcp
~]# firewall-cmd --reload
```
Configure Elasticsearch: edit the file “**/etc/elasticsearch/elasticsearch.yml**” on all three nodes and add the following:
```
~]# vim /etc/elasticsearch/elasticsearch.yml
…………………………………………
cluster.name: opn-cluster
node.name: elasticsearch1.linuxtechi.local
network.host: 192.168.56.40
http.port: 9200
discovery.seed_hosts: ["elasticsearch1.linuxtechi.local", "elasticsearch2.linuxtechi.local", "elasticsearch3.linuxtechi.local"]
cluster.initial_master_nodes: ["elasticsearch1.linuxtechi.local", "elasticsearch2.linuxtechi.local", "elasticsearch3.linuxtechi.local"]
……………………………………………
```
**Note:** On each node, set the correct hostname in the node.name parameter and the IP address in the network.host parameter; the other parameters remain the same.
Now start and enable the Elasticsearch service on all three nodes using the following systemctl commands:
```
~]# systemctl daemon-reload
~]# systemctl enable elasticsearch.service
~]# systemctl start elasticsearch.service
```
Use the ss command below to verify that the Elasticsearch node has started listening on port 9200:
```
[root@linuxtechi ~]# ss -tunlp | grep 9200
tcp LISTEN 0 128 [::ffff:192.168.56.40]:9200 *:* users:(("java",pid=2734,fd=256))
[root@linuxtechi ~]#
```
Use the following curl commands to verify the Elasticsearch cluster status:
```
[root@linuxtechi ~]# curl http://elasticsearch1.linuxtechi.local:9200
[root@linuxtechi ~]# curl -X GET http://elasticsearch2.linuxtechi.local:9200/_cluster/health?pretty
```
The output of the above command would be something like below:
![Elasticsearch-cluster-status-rhel8][1]
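For reference, the cluster health endpoint returns JSON along the following lines; this is an abridged, illustrative sample, and the exact counts will vary with your setup:
```
{
  "cluster_name" : "opn-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_shards" : 2,
  "unassigned_shards" : 0
}
```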
The above output confirms that we have successfully created a 3-node Elasticsearch cluster and that the cluster status is also green.
**Note:** If you want to modify the JVM heap size, edit the file “**/etc/elasticsearch/jvm.options**” and change the parameters below to suit your environment:
* -Xms1g
* -Xmx1g
Now let's move to the Logstash nodes.
#### Install and Configure Logstash
Perform the following steps on both Logstash nodes.
Log in to both nodes and set the hostname using the following hostnamectl command:
```
[root@linuxtechi ~]# hostnamectl set-hostname "logstash1.linuxtechi.local"
[root@linuxtechi ~]# exec bash
[root@linuxtechi ~]#
[root@linuxtechi ~]# hostnamectl set-hostname "logstash2.linuxtechi.local"
[root@linuxtechi ~]# exec bash
[root@linuxtechi ~]#
```
Add the following entries to the /etc/hosts file on both Logstash nodes:
```
~]# vi /etc/hosts
192.168.56.40 elasticsearch1.linuxtechi.local
192.168.56.50 elasticsearch2.linuxtechi.local
192.168.56.60 elasticsearch3.linuxtechi.local
```
Save and exit the file.
Configure the Logstash repository on both nodes: create a file **logstash.repo** under the /etc/yum.repos.d/ folder with the following content:
```
~]# vi /etc/yum.repos.d/logstash.repo
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
```
Save and exit the file, then run the following rpm command to import the signing key:
```
~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
```
Install Java OpenJDK on both nodes using the following dnf command:
```
~]# dnf install java-openjdk -y
```
Run the following dnf command on both nodes to install Logstash:
```
[root@linuxtechi ~]# dnf install logstash -y
[root@linuxtechi ~]# dnf install logstash -y
```
Now configure Logstash; perform the steps below on both Logstash nodes.
Create a Logstash conf file; for that, first copy the sample Logstash file under /etc/logstash/conf.d/:
```
# cd /etc/logstash/
# cp logstash-sample.conf conf.d/logstash.conf
```
Edit the conf file and update the following content:
```
# vi conf.d/logstash.conf
input {
beats {
port => 5044
}
}
output {
elasticsearch {
hosts => ["http://elasticsearch1.linuxtechi.local:9200", "http://elasticsearch2.linuxtechi.local:9200", "http://elasticsearch3.linuxtechi.local:9200"]
index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
#user => "elastic"
#password => "changeme"
}
}
```
Under the output section, specify the FQDNs of all three Elasticsearch nodes in the hosts parameter; leave the other parameters as they are.
Allow the Logstash port 5044 in the OS firewall using the following firewall-cmd commands:
```
~ # firewall-cmd --permanent --add-port=5044/tcp
~ # firewall-cmd --reload
```
Now start and enable the Logstash service; run the following systemctl commands on both nodes:
```
~]# systemctl start logstash
~]# systemctl enable logstash
```
Use the ss command below to verify that the Logstash service has started listening on port 5044:
```
[root@linuxtechi ~]# ss -tunlp | grep 5044
tcp LISTEN 0 128 *:5044 *:* users:(("java",pid=2416,fd=96))
[root@linuxtechi ~]#
```
The above output confirms that Logstash has been installed and configured successfully. Let's move to the Kibana installation.
#### Install and Configure Kibana
Log in to the Kibana node and set the hostname with the **hostnamectl** command:
```
[root@linuxtechi ~]# hostnamectl set-hostname "kibana.linuxtechi.local"
[root@linuxtechi ~]# exec bash
[root@linuxtechi ~]#
```
Edit the /etc/hosts file and add the following lines:
```
192.168.56.40 elasticsearch1.linuxtechi.local
192.168.56.50 elasticsearch2.linuxtechi.local
192.168.56.60 elasticsearch3.linuxtechi.local
```
Set up the Kibana repository using the following:
```
[root@linuxtechi ~]# vi /etc/yum.repos.d/kibana.repo
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
[root@linuxtechi ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
```
Execute the dnf command below to install Kibana:
```
[root@linuxtechi ~]# dnf install kibana -y
```
Configure Kibana by editing the file “**/etc/kibana/kibana.yml**”
```
[root@linuxtechi ~]# vim /etc/kibana/kibana.yml
…………
server.host: "kibana.linuxtechi.local"
server.name: "kibana.linuxtechi.local"
elasticsearch.hosts: ["http://elasticsearch1.linuxtechi.local:9200", "http://elasticsearch2.linuxtechi.local:9200", "http://elasticsearch3.linuxtechi.local:9200"]
…………
```
Start and enable the Kibana service:
```
[root@linuxtechi ~]# systemctl start kibana
[root@linuxtechi ~]# systemctl enable kibana
```
Allow the Kibana port 5601 in the OS firewall:
```
[root@linuxtechi ~]# firewall-cmd --permanent --add-port=5601/tcp
success
[root@linuxtechi ~]# firewall-cmd --reload
success
[root@linuxtechi ~]#
```
Access Kibana portal / GUI using the following URL:
<http://kibana.linuxtechi.local:5601>
[![Kibana-Dashboard-rhel8][1]][4]
From the dashboard, we can also check our Elastic Stack cluster status:
[![Stack-Monitoring-Overview-RHEL8][1]][5]
This confirms that we have successfully set up a multi-node Elastic Stack cluster on RHEL 8 / CentOS 8.
Now let's send some logs to the Logstash nodes via Filebeat from another Linux server. In my case, I have one CentOS 7 server, and I will push all important logs of this server to Logstash via Filebeat.
Log in to the CentOS 7 server and install the filebeat package using the following rpm command:
```
[root@linuxtechi ~]# rpm -ivh https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.3.1-x86_64.rpm
Retrieving https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.3.1-x86_64.rpm
Preparing... ################################# [100%]
Updating / installing...
1:filebeat-7.3.1-1 ################################# [100%]
[root@linuxtechi ~]#
```
Edit the /etc/hosts file and add the following entries,
```
192.168.56.20 logstash1.linuxtechi.local
192.168.56.30 logstash2.linuxtechi.local
```
Now configure Filebeat so that it sends logs to the Logstash nodes using a load-balancing technique: edit the file “**/etc/filebeat/filebeat.yml**” and update the following parameters.
Under the **filebeat.inputs:** section, change **enabled: false** to **enabled: true**, and under the “**paths**” parameter specify the location of the log files to send to Logstash. In the Elasticsearch output section, comment out “**output.elasticsearch**” and its **hosts** parameter. In the Logstash output section, uncomment “**output.logstash:**” and “**hosts:**”, add both Logstash nodes to the hosts parameter, and also add “**loadbalance: true**”.
```
[root@linuxtechi ~]# vi /etc/filebeat/filebeat.yml
……………………….
filebeat.inputs:
- type: log
enabled: true
paths:
- /var/log/messages
- /var/log/dmesg
- /var/log/maillog
- /var/log/boot.log
#output.elasticsearch:
# hosts: ["localhost:9200"]
output.logstash:
hosts: ["logstash1.linuxtechi.local:5044", "logstash2.linuxtechi.local:5044"]
loadbalance: true
………………………………………
```
Start and enable the filebeat service using the systemctl commands below:
```
[root@linuxtechi ~]# systemctl start filebeat
[root@linuxtechi ~]# systemctl enable filebeat
```
Now go to the Kibana GUI and verify whether the new indices are visible.
Choose the Management option from the left sidebar and then click on Index Management under Elasticsearch:
[![Elasticsearch-index-management-Kibana][1]][6]
As we can see above, the indices are visible now. Let's create an index pattern.
Click on “Index Patterns” in the Kibana section; it will prompt us to create a new pattern. Click on “**Create Index Pattern**” and specify the pattern name as “**filebeat**”:
[![Define-Index-Pattern-Kibana-RHEL8][1]][7]
Click on Next Step
Choose “**Timestamp**” as the time filter for the index pattern and then click on “Create index pattern”:
[![Time-Filter-Index-Pattern-Kibana-RHEL8][1]][8]
[![filebeat-index-pattern-overview-Kibana][1]][9]
Now click on Discover to see the real-time Filebeat index pattern:
[![Discover-Kibana-REHL8][1]][10]
This confirms that the Filebeat agent has been configured successfully and that we are able to see real-time logs on the Kibana dashboard.
That's all for this article. Please don't hesitate to share your feedback and comments if these steps helped you set up a multi-node Elastic Stack cluster on a RHEL 8 / CentOS 8 system.
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/setup-multinode-elastic-stack-cluster-rhel8-centos8/
作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[heguangzhi](https://github.com/heguangzhi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Elastic-Stack-Cluster-RHEL8-CentOS8.jpg
[3]: https://www.linuxtechi.com/setup-local-yum-dnf-repository-rhel-8/
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Kibana-Dashboard-rhel8.jpg
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Stack-Monitoring-Overview-RHEL8.jpg
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Elasticsearch-index-management-Kibana.jpg
[7]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Define-Index-Pattern-Kibana-RHEL8.jpg
[8]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Time-Filter-Index-Pattern-Kibana-RHEL8.jpg
[9]: https://www.linuxtechi.com/wp-content/uploads/2019/09/filebeat-index-pattern-overview-Kibana.jpg
[10]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Discover-Kibana-REHL8.jpg

View File

@ -1,138 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Introduction to the Linux chgrp and newgrp commands)
[#]: via: (https://opensource.com/article/19/9/linux-chgrp-and-newgrp-commands)
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdosshttps://opensource.com/users/sethhttps://opensource.com/users/alanfdosshttps://opensource.com/users/seth)
Introduction to the Linux chgrp and newgrp commands
======
The chgrp and newgrp commands help you manage files that need to
maintain group ownership.
![Penguins walking on the beach ][1]
In a recent article, I introduced the [**chown** command][2], which is used for modifying ownership of files on systems. Recall that ownership is the combination of the user and group assigned to an object. The **chgrp** and **newgrp** commands provide additional help for managing files that need to maintain group ownership.
### Using chgrp
The **chgrp** command simply changes the group ownership of a file. It is the same as the **chown :<group>** command. You can use:
```
$ chown :alan mynotes
```
or:
```
$ chgrp alan mynotes
```
#### Recursive
A few additional arguments to chgrp can be useful at both the command line and in a script. Just like many other Linux commands, chgrp has a recursive argument, **-R**. You will need this to operate on a directory and its contents recursively, as I'll demonstrate below. I added the **-v** (**verbose**) argument so chgrp tells me what it is doing:
```
$ ls -l . conf
.:
drwxrwxr-x 2 alan alan 4096 Aug  5 15:33 conf
conf:
-rw-rw-r-- 1 alan alan 0 Aug  5 15:33 conf.xml
# chgrp -vR delta conf
changed group of 'conf/conf.xml' from alan to delta
changed group of 'conf' from alan to delta
```
#### Reference
A reference file (**\--reference=RFILE**) can be used when changing the group on files to match a certain configuration or when you don't know the group, as might be the case when running a script. You can duplicate another file's group (**RFILE**), referred to as a reference file. For example, to undo the changes made above (recall that a dot [**.**] refers to the present working directory):
```
$ chgrp -vR --reference=. conf
```
#### Report changes
Most commands have arguments for controlling their output. The most common is **-v** to enable verbose, and the chgrp command has a verbose mode. It also has a **-c** (**\--changes**) argument, which instructs chgrp to report only when a change is made. Chgrp will still report other things, such as if an operation is not permitted.
The argument **-f** (**\--silent**, **\--quiet**) is used to suppress most error messages. I will use this argument and **-c** in the next section so it will show only actual changes.
#### Preserve root
The root (**/**) of the Linux filesystem should be treated with great respect. If a command mistake is made at this level, the consequences can be terrible and leave a system completely useless. Particularly when you are running a recursive command that will make any kind of change—or worse, deletions. The chgrp command has an argument that can be used to protect and preserve the root. The argument is **\--preserve-root**. If this argument is used with a recursive chgrp command on the root, nothing will happen and a message will appear instead:
```
[root@localhost /]# chgrp -cfR --preserve-root alan /
chgrp: it is dangerous to operate recursively on '/'
chgrp: use --no-preserve-root to override this failsafe
```
The option has no effect when it's not used in conjunction with recursive. However, if the command is run by the root user, the group of **/** will change, but not that of other files or directories within it:
```
[alan@localhost /]$ chgrp -c --preserve-root alan /
chgrp: changing group of '/': Operation not permitted
[root@localhost /]# chgrp -c --preserve-root alan /
changed group of '/' from root to alan
```
Surprisingly, it seems, this is not the default behavior. The option **\--no-preserve-root** is the default. If you run the command above without the "preserve" option, it will default to "no preserve" mode and possibly change the group on files that shouldn't be changed:
```
[alan@localhost /]$ chgrp -cfR alan /
changed group of '/dev/pts/0' from tty to alan
changed group of '/dev/tty2' from tty to alan
changed group of '/var/spool/mail/alan' from mail to alan
```
### About newgrp
The **newgrp** command allows a user to override the current primary group. newgrp can be handy when you are working in a directory where all files must have the same group ownership. Suppose you have a directory called _share_ on your intranet server where different teams store marketing photos. The group is **share**. As different users place files into the directory, the files' primary groups might become mixed up. Whenever new files are added, you can run **chgrp** to correct any mix-ups by setting the group to **share**:
```
$ cd share
ls -l
-rw-r--r--. 1 alan share 0 Aug  7 15:35 pic13
-rw-r--r--. 1 alan alan 0 Aug  7 15:35 pic1
-rw-r--r--. 1 susan delta 0 Aug  7 15:35 pic2
-rw-r--r--. 1 james gamma 0 Aug  7 15:35 pic3
-rw-rw-r--. 1 bill contract  0 Aug  7 15:36 pic4
```
I covered **setgid** mode in my article on the [**chmod** command][3]. This would be one way to solve this problem. But, suppose the setgid bit was not set for some reason. The newgrp command is useful in this situation. Before any users put files into the _share_ directory, they can run the command **newgrp share**. This switches their primary group to **share** so all files they put into the directory will automatically have the group **share**, rather than the user's primary group. Once they are finished, users can switch back to their regular primary group with (for example):
```
newgrp alan
```
### Conclusion
It is important to understand how to manage users, groups, and permissions. It is also good to know a few alternative ways to work around problems you might encounter since not all environments are set up the same way.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/linux-chgrp-and-newgrp-commands
作者:[Alan Formy-Duval][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alanfdosshttps://opensource.com/users/sethhttps://opensource.com/users/alanfdosshttps://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/community-penguins-osdc-lead.png?itok=BmqsAF4A (Penguins walking on the beach )
[2]: https://opensource.com/article/19/8/linux-chown-command
[3]: https://opensource.com/article/19/8/linux-chmod-command

View File

@ -0,0 +1,165 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (CodeReady Containers: complex solutions on OpenShift + Fedora)
[#]: via: (https://fedoramagazine.org/codeready-containers-complex-solutions-on-openshift-fedora/)
[#]: author: (Marc Chisinevski https://fedoramagazine.org/author/mchisine/)
CodeReady Containers: complex solutions on OpenShift + Fedora
======
![][1]
Want to experiment with (complex) solutions on [OpenShift][2] 4.1+? CodeReady Containers (CRC) on a physical Fedora server is a great choice. It lets you:
  * Configure the RAM available to CRC / OpenShift (this is key as we'll deploy Machine Learning, Change Data Capture, Process Automation and other solutions with significant memory requirements)
* Avoid installing anything on your laptop
* Standardize (on Fedora 30) so that you get the same results every time
Start by installing CRC and Ansible Agnostic Deployer (AgnosticD) on a Fedora 30 physical server. Then, you'll use AgnosticD to deploy Open Data Hub on the OpenShift 4.1 environment created by CRC. Let's get started!
### Set up CodeReady Containers
```
$ dnf config-manager --set-enabled fedora
$ su -c 'dnf -y install git wget tar qemu-kvm libvirt NetworkManager jq libselinux-python'
$ sudo systemctl enable --now libvirtd
```
Let's also add a user.
```
$ sudo adduser demouser
$ sudo passwd demouser
$ sudo usermod -aG wheel demouser
```
Download and extract CodeReady Containers:
```
$ su demouser
$ cd /home/demouser
$ wget https://mirror.openshift.com/pub/openshift-v4/clients/crc/1.0.0-beta.3/crc-linux-amd64.tar.xz
$ tar -xvf crc-linux-amd64.tar.xz
$ cd crc-linux-1.0.0-beta.3-amd64/
$ sudo cp ./crc /usr/bin
```
Set the memory available to CRC according to what you have on your physical server. For example, on a physical server with around 100 GB of RAM, you can allocate 80 GB to CRC as follows:
```
$ crc config set memory 81920
$ crc setup
```
You'll need your pull secret from <https://cloud.redhat.com/openshift/install/metal/user-provisioned>.
```
$ crc start
```
That's it — you can now log in to your OpenShift environment:
```
eval $(crc oc-env) && oc login -u kubeadmin -p <password> https://api.crc.testing:6443
```
### Set up Ansible Agnostic Deployer
[github.com/redhat-cop/agnosticd][3] is a fully automated two-phase deployer. Let's deploy it!
```
$ su demouser
$ cd /home/demouser
$ git clone https://github.com/redhat-cop/agnosticd.git
$ cd agnosticd/ansible
$ python -m pip install --upgrade --trusted-host files.pythonhosted.org -r requirements.txt
$ python3 -m pip install --upgrade --trusted-host files.pythonhosted.org -r requirements.txt
$ pip3 install kubernetes
$ pip3 install openshift
$ pip install kubernetes
$ pip install openshift
```
### Set up Open Data Hub on Code Ready Containers
[Open Data Hub][4] is a machine-learning-as-a-service platform built on OpenShift and Kafka/Strimzi. It integrates a collection of open source projects.
First, create an Ansible inventory file with the following content.
```
$ cat inventory
127.0.0.1 ansible_connection=local
```
Set up the WORKLOAD environment variable so that Ansible Agnostic Deployer knows that we want to deploy Open Data Hub.
```
$ export WORKLOAD="ocp4-workload-open-data-hub"
$ sudo cp /usr/local/bin/ansible-playbook /usr/bin/ansible-playbook
```
We are only deploying one Open Data Hub project, so set _user_count_ to 1. You can set up workshops for many students by setting _user_count_ accordingly.
An OpenShift project (with Open Data Hub in our case) will be created for each student.
```
$ eval $(crc oc-env) && oc login -u kubeadmin -p <password> https://api.crc.testing:6443
$ ansible-playbook -i inventory ./configs/ocp-workloads/ocp-workload.yml -e"ocp_workload=${WORKLOAD}" -e"ACTION=create" -e"user_count=1" -e"ocp_username=kubeadmin" -e"ansible_become_pass=<password>" -e"silent=False"
$ oc project open-data-hub-user1
$ oc get route
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
jupyterhub jupyterhub-open-data-hub-user1.apps-crc.testing jupyterhub 8080-tcp edge/Redirect None
```
On your laptop, add _jupyterhub-open-data-hub-user1.apps-crc.testing_ to your _/etc/hosts_ file. For example:
```
127.0.0.1 localhost fedora30 console-openshift-console.apps-crc.testing oauth-openshift.apps-crc.testing mapit-app-management.apps-crc.testing mapit-spring-pipeline-demo.apps-crc.testing jupyterhub-open-data-hub-user1.apps-crc.testing jupyterhub-open-data-hub-user1.apps-crc.testing
```
On your laptop:
```
$ sudo ssh marc@fedora30 -L 443:jupyterhub-open-data-hub-user1.apps-crc.testing:443
```
You can now browse to [https://jupyterhub-open-data-hub-user1.apps-crc.testing][5].
Now that we have Open Data Hub ready, you could deploy something interesting on it. For example, you could deploy IBM's Qiskit open source framework for quantum computing. For more information, refer to video no. 9 in [this YouTube playlist][6], and the [GitHub repo here][7].
You could also deploy plenty of other useful tools for Process Automation, Change Data Capture, Camel Integration, and 3scale API Management. You don't have to wait for articles on these, though. Step-by-step short videos are already [available on YouTube][6].
The corresponding step-by-step instructions are [also on YouTube][6]. You can also follow along with this article using the [GitHub repo][8].
* * *
_Photo by _[_Marta Markes_][9]_ on _[_Unsplash_][10]_._
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/codeready-containers-complex-solutions-on-openshift-fedora/
作者:[Marc Chisinevski][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/mchisine/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/09/codeready-containers-816x345.jpg
[2]: https://fedoramagazine.org/run-openshift-locally-minishift/
[3]: https://github.com/redhat-cop/agnosticd
[4]: https://opendatahub.io/
[5]: https://jupyterhub-open-data-hub-user1.apps-crc.testing/
[6]: https://www.youtube.com/playlist?list=PLg1pvyPzFye2UtQjZTSjoXhFdqkGK6exw
[7]: https://github.com/marcredhat/crcdemos/blob/master/IBMQuantum-qiskit
[8]: https://github.com/marcredhat/crcdemos/tree/master/fedora
[9]: https://unsplash.com/@vnevremeni?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[10]: https://unsplash.com/s/photos/container?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText

View File

@ -0,0 +1,70 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Fedora and CentOS Stream)
[#]: via: (https://fedoramagazine.org/fedora-and-centos-stream/)
[#]: author: (Matthew Miller https://fedoramagazine.org/author/mattdm/)
Fedora and CentOS Stream
======
![][1]
_From the desk of the Fedora Project Leader:_
Hi everyone! You may have seen the [announcement][2] about [changes over at the CentOS Project][3]. (If not, please go ahead and take a few minutes and read it — I'll wait!) And now you may be wondering: if CentOS is now upstream of RHEL, what happens to Fedora? Isn't that Fedora's role in the Red Hat ecosystem?
First, don't worry. There are changes to the big picture, but they're all for the better.
![][4]
If you've been following the conference talks from Red Hat Enterprise Linux leadership about the relationship between Fedora, CentOS, and RHEL, you have heard about “the [Penrose Triangle][5]”. That's a shape like something from an M. C. Escher drawing: it's impossible in real life!
We've been thinking for a while that _maybe_ impossible geometry is not actually the best model.
For one thing, the imagined flow where contributions at the end would flow back into Fedora and grow in a “virtuous cycle” never actually worked that way. That's a shame, because there's a huge, awesome CentOS community and many great people working on it — and there's a lot of overlap with the Fedora community too. We're missing out.
But that gap isn't the only one: there's not really been a consistent flow between the projects and product at all. So far, the process has gone like this:
1. Some time after the previous RHEL release, Red Hat would suddenly turn more attention to Fedora than usual.
2. A few months later, Red Hat would split off a new RHEL version, developed internally.
  3. After some months, that'd be put into the world, including all of the source — from which CentOS is built.
  4. Source drops continue for updates, and sometimes those updates include patches that were in Fedora — but there's no visible connection.
Each step here has its problems: intermittent attention, closed-door development, blind drops, and little ongoing transparency. But now Red Hat and CentOS Project are fixing that, and thats good news for Fedora, too.
**Fedora will remain the** [**first**][6] **upstream of RHEL.** It's where every RHEL came from, and is where RHEL 9 will come from, too. But after RHEL branches off, _CentOS_ will be upstream for ongoing work on those RHEL versions. I like to call it “the midstream”, but the marketing folks somehow don't, so that's going to be called “CentOS Stream”.
We — Fedora, CentOS, and Red Hat — still need to work out all of the technical details, but the idea is that these branches will live in the same package source repository. (The current plan is to make a “src.centos.org” with a parallel view of the same data as [src.fedoraproject.org][7]). This change gives public visibility into ongoing work on released RHEL, and a place for developers and Red Hat's partners to collaborate at that level.
[CentOS SIGs][8] — the special interest groups for virtualization, storage, config management and so on — will do their work in shared space right next to Fedora branches. This will allow much easier collaboration and sharing between the projects, and I'm hoping we'll even be able to merge some of our similar SIGs to work together directly. Fixes from Fedora packages can be cherry-picked into the CentOS “midstream” ones — and where useful, vice versa.
Ultimately, Fedora, CentOS, and RHEL are part of the same big project family. This new, more natural flow opens possibilities for collaboration which were locked behind artificial (and extra-dimensional!) barriers. I'm very excited for what we can now do together!
_— Matthew Miller, Fedora Project Leader_
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/fedora-and-centos-stream/
作者:[Matthew Miller][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/mattdm/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/09/centos-stream-816x345.jpg
[2]: http://redhat.com/en/blog/transforming-development-experience-within-centos
[3]: https://wiki.centos.org/Manuals/ReleaseNotes/CentOSStream
[4]: https://lh3.googleusercontent.com/5XMDU29DYPsFKIVLCexK46n9DqWZEa0nTjAnJcouzww-RSAzNshGW3yIxXBSBsd6KfAyUAGpxX9y0Dsh1hj21ygcAn5a7h55LrneKROkxsipdXO2gq8cgoFqz582ojOh8NU9Ix0X
[5]: https://www.youtube.com/watch?v=1JmgOkEznjw
[6]: https://docs.fedoraproject.org/en-US/project/#_first
[7]: https://src.fedoraproject.org/
[8]: https://wiki.centos.org/SpecialInterestGroup

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (Morisun029)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -0,0 +1,118 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Essential Accessories for Intel NUC Mini PC)
[#]: via: (https://itsfoss.com/intel-nuc-essential-accessories/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Essential Accessories for Intel NUC Mini PC
======
I bought a [barebones Intel NUC mini PC][1] a few weeks back. I [installed Linux on it][2] and I am totally enjoying it. This tiny fanless gadget replaces the bulky CPU tower of a desktop computer.
The Intel NUC mostly comes in a barebones format, which means it doesn't have any RAM or hard disk, and obviously no operating system. Many [Linux-based mini PCs][3] customize the Intel NUC and sell them to end users by adding disk, RAM and an operating system.
Needless to say, it doesn't come with a keyboard, mouse or screen, just like most other desktop computers out there.
The [Intel NUC][4] is an excellent device, and if you are looking to buy a desktop computer, I highly recommend it. If you are considering getting an Intel NUC, here are a few accessories you should have in order to start using the NUC as your computer.
### Essential Intel NUC accessories
![][5]
_The Amazon links in the article are affiliate links. Please read our [affiliate policy][6]._
#### The peripheral devices: monitor, keyboard and mouse
This is a no-brainer. You need a screen, keyboard and mouse to use a computer. You'll need a monitor with an HDMI connection and a USB or wireless keyboard and mouse. If you have these things already, you are good to go.
If you are looking for recommendations, I suggest an LG IPS LED monitor. I have two of them in the 22-inch model and I am happy with the sharp visuals they provide.
These monitors have a simple stand that doesn't move. If you want a monitor that can move up and down and rotate into portrait mode, try the [HP EliteDisplay monitors][7].
![HP EliteDisplay Monitor][8]
I connect all three monitors at the same time in a multi-monitor setup. One monitor connects to the HDMI port. Two monitors connect to the Thunderbolt port via a [Thunderbolt-to-HDMI splitter from Club 3D][9].
You may also opt for an ultrawide monitor. I don't have personal experience with them.
#### A/C power cord
This will be a surprise for you. When you get your NUC, you'll notice that though it has a power adapter, it doesn't come with the plug cord.
![][10]
Since different countries have different plug points, Intel decided to simply drop it from the NUC kit. I am using the power cord of an old, dead laptop, but if you don't have one, chances are you'll have to get one for yourself.
#### RAM
The Intel NUC has two RAM slots and can support up to 32 GB of RAM. Since I have the Core i3 processor, I opted for [8 GB of DDR4 RAM from Crucial][11] that costs around $33.
![][12]
8 GB of RAM is fine for most cases, but if you have the Core i7 processor, you may opt for [16 GB of RAM][13] that costs almost $67. You can double that up and get the maximum 32 GB. The choice is all yours.
#### Hard disk [Important]
The Intel NUC supports both a 2.5″ drive and an M.2 SSD, and you can use both at the same time to get more storage.
The 2.5-inch slot can hold either an SSD or an HDD. I strongly recommend opting for an SSD because it's way faster than an HDD. A [480 GB 2.5″ SSD][14] costs $60, which is a fair price in my opinion.
![][15]
The 2.5″ drive is limited by the standard SATA interface speed of 6 Gb/s. The M.2 slot can be faster, depending on whether you choose an NVMe SSD. NVMe (non-volatile memory express) SSDs are up to 4 times faster than normal SSDs (also called SATA SSDs), but they may also be slightly more expensive than SATA M.2 SSDs.
While buying an M.2 SSD, check the product image. It should be mentioned on the image of the disk itself whether it's an NVMe or a SATA SSD. The [Samsung EVO is a cost-effective NVMe M.2 SSD][16] that you may consider.
![Make sure that your are buying the faster NVMe M2 SSD][17]
A SATA SSD has the same speed in both the M.2 slot and the 2.5″ slot. This is why, if you don't want to opt for the expensive NVMe SSD, I suggest you go for a 2.5″ SATA SSD and keep the M.2 slot free for future upgrades.
#### Other supporting accessories
You'll need an HDMI cable to connect your monitor. If you are buying a new monitor, you should usually get a cable with it.
You may need a screwdriver if you are going to use the M.2 slot. You can unscrew the NUC's bottom panel just by rotating its four feet by hand. You'll have to open the device in order to place the RAM and disk.
![Intel NUC with Security Cable | Image Credit Intel][18]
The NUC also has an anti-theft lock hole that you can use with security cables. Keeping computers secured with cables is a recommended security practice in a business environment. Investing [a few dollars in a security cable][19] could save you hundreds of dollars.
**What accessories do you use?**
Those are the Intel NUC accessories I use and suggest. How about you? If you own a NUC, what accessories do you use and recommend to other NUC users?
--------------------------------------------------------------------------------
via: https://itsfoss.com/intel-nuc-essential-accessories/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://www.amazon.com/Intel-NUC-Mainstream-Kit-NUC8i3BEH/dp/B07GX4X4PW?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B07GX4X4PW (barebone Intel NUC mini PC)
[2]: https://itsfoss.com/install-linux-on-intel-nuc/
[3]: https://itsfoss.com/linux-based-mini-pc/
[4]: https://www.intel.in/content/www/in/en/products/boards-kits/nuc.html
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/intel-nuc-accessories.png?ssl=1
[6]: https://itsfoss.com/affiliate-policy/
[7]: https://www.amazon.com/HP-EliteDisplay-21-5-Inch-1FH45AA-ABA/dp/B075L4VKQF?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B075L4VKQF (HP EliteDisplay monitors)
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/hp-elitedisplay-monitor.png?ssl=1
[9]: https://www.amazon.com/Club3D-CSV-1546-USB-C-Multi-Monitor-Splitter/dp/B06Y2FX13G?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B06Y2FX13G (thunderbolt to HDMI splitter from Club 3D)
[10]: https://itsfoss.com/wp-content/uploads/2019/09/ac-power-cord-3-pongs.webp
[11]: https://www.amazon.com/Crucial-Single-PC4-19200-SODIMM-260-Pin/dp/B01BIWKP58?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B01BIWKP58 (8GB DDR4 RAM from Crucial)
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/crucial-ram.jpg?ssl=1
[13]: https://www.amazon.com/Crucial-Single-PC4-19200-SODIMM-260-Pin/dp/B019FRBHZ0?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B019FRBHZ0 (16 GB RAM)
[14]: https://www.amazon.com/Green-480GB-Internal-SSD-WDS480G2G0A/dp/B01M3POPK3?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B01M3POPK3 (480 GB 2.5)
[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/wd-green-ssd.png?ssl=1
[16]: https://www.amazon.com/Samsung-970-EVO-500GB-MZ-V7E500BW/dp/B07BN4NJ2J?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B07BN4NJ2J (Samsung EVO is a cost effective NVMe M.2 SSD)
[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/samsung-evo-nvme.jpg?ssl=1
[18]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/intel-nuc-security-cable.jpg?ssl=1
[19]: https://www.amazon.com/Kensington-Combination-Laptops-Devices-K64673AM/dp/B005J7Y99W?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B005J7Y99W (few dollars in the security cable)

View File

@ -0,0 +1,117 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Mirror your Android screen on your computer with Guiscrcpy)
[#]: via: (https://opensource.com/article/19/9/mirror-android-screen-guiscrcpy)
[#]: author: (Seth Kenlon https://opensource.com/users/sethhttps://opensource.com/users/scottnesbitthttps://opensource.com/users/holmjahttps://opensource.com/users/holmjahttps://opensource.com/users/rajaram121)
Mirror your Android screen on your computer with Guiscrcpy
======
Access your Android device from your PC with this open source
application based on scrcpy.
![Coding on a computer][1]
In the future, all the information you need will be just one gesture away, and it will all appear in midair as a hologram that you can interact with even while you're driving your flying car. That's the future, though, and until that arrives, we're all stuck with information spread across a laptop, a phone, a tablet, and a smart refrigerator. Unfortunately, that means when we need information from a device, we generally have to look at that device.
While not quite holographic terminals or flying cars, [guiscrcpy][2] by developer [Srevin Saju][3] is an application that consolidates multiple screens in one location and helps to capture that futuristic feeling.
Guiscrcpy is an open source (GNU GPLv3 licensed) project based on the award-winning [scrcpy][4] open source engine. With guiscrcpy, you can cast your Android screen onto your computer screen so you can view it along with everything else. Guiscrcpy supports Linux, Windows, and MacOS.
Unlike many scrcpy alternatives, Guiscrcpy is not a fork of scrcpy. The project prioritizes collaborating with other open source projects, so Guiscrcpy is an extension, or a graphical user interface (GUI) layer, for scrcpy. Keeping the Python 3 GUI separate from scrcpy ensures that nothing interferes with the efficiency of the scrcpy backend. You can screencast up to 1080p resolution and, because it uses ultrafast rendering and surprisingly little CPU, it works even on a relatively low-end PC.
Scrcpy, Guiscrcpy's foundation, is a command-line application, so it doesn't have GUI buttons to handle gestures, it doesn't provide a Back or Home button, and it requires familiarity with the [Linux terminal][5]. Guiscrcpy adds GUI panels to scrcpy, so any user can run it—and cast and control their device—without sending any information over the internet. Everything works over USB or WiFi (using only a local network). Guiscrcpy also adds a desktop launcher to Linux and Windows systems and provides compiled binaries for Linux and Windows.
### Installing Guiscrcpy
Before installing Guiscrcpy, you must install its dependencies, most notably scrcpy. Possibly the easiest way to install scrcpy is with [snap][6], which is available for most major Linux distributions. If you have snap installed and active, then you can install scrcpy with one easy command:
```
$ sudo snap install scrcpy
```
While it's installing, you can install the other dependencies. The [Simple DirectMedia Layer][7] (SDL 2.0) toolkit is required to display and interact with the phone screen, and the [Android Debug Bridge][8] (adb) command connects your computer to your Android phone.
On Fedora or CentOS:
```
$ sudo dnf install SDL2 android-tools
```
On Ubuntu or Debian:
```
$ sudo apt install SDL2 android-tools-adb
```
In another terminal, install the Python dependencies:
```
$ python3 -m pip install -r requirements.txt --user
```
### Setting up your phone
For your phone to accept an adb connection, it must have Developer Mode enabled. To enable Developer Mode on Android, go to **Settings** and select **About phone**. In **About phone**, find the **Build number** (it may be in the **Software information** panel). Believe it or not, to enable Developer Mode, tap **Build number** seven times in a row.
![Enabling Developer Mode][9]
For full instructions on all the many ways you can configure your phone for access from your computer, read the [Android developer documentation][10].
Once that's set up, plug your phone into a USB port on your computer (or ensure that you've configured it correctly to connect over WiFi).
### Using guiscrcpy
When you launch guiscrcpy, you see its main control window. In this window, click the **Start scrcpy** button. This connects to your phone, as long as it's set up in Developer Mode and connected to your computer over USB or WiFi.
![Guiscrcpy main screen][11]
It also includes a configuration-writing system, where you can write a configuration file to your **~/.config** directory to preserve your preferences between uses.
The bottom panel of guiscrcpy is a floating window that helps you perform basic controlling actions. It has buttons for Home, Back, Power, and more. These are common functions on Android devices, but an important feature of this module is that it doesn't interact with scrcpy's SDL, so it can function with no lag. In other words, this panel communicates directly with your connected device through adb rather than scrcpy.
![guiscrcpy's bottom panel][12]
The project is in active development and new features are still being added. The latest build has an interface for gestures and notifications.
With guiscrcpy, you not only _see_ your phone on your screen, but you can also interact with it, either by clicking the SDL window itself, just as you would tap your physical phone, or by using the buttons on the panels.
![guiscrcpy running on Fedora 30][13]
Guiscrcpy is a fun and useful application that provides features that ought to be official features of any modern device, especially a platform like Android. Try it out yourself, and add some futuristic pragmatism to your present-day digital life.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/mirror-android-screen-guiscrcpy
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/sethhttps://opensource.com/users/scottnesbitthttps://opensource.com/users/holmjahttps://opensource.com/users/holmjahttps://opensource.com/users/rajaram121
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_laptop_hack_work.png?itok=aSpcWkcl (Coding on a computer)
[2]: https://github.com/srevinsaju/guiscrcpy
[3]: http://opensource.com/users/srevinsaju
[4]: https://github.com/Genymobile/scrcpy
[5]: https://www.redhat.com/sysadmin/navigating-filesystem-linux-terminal
[6]: https://snapcraft.io/
[7]: https://www.libsdl.org/
[8]: https://developer.android.com/studio/command-line/adb
[9]: https://opensource.com/sites/default/files/uploads/developer-mode.jpg (Enabling Developer Mode)
[10]: https://developer.android.com/studio/debug/dev-options
[11]: https://opensource.com/sites/default/files/uploads/guiscrcpy-main.png (Guiscrcpy main screen)
[12]: https://opensource.com/sites/default/files/uploads/guiscrcpy-bottompanel.png (guiscrcpy's bottom panel)
[13]: https://opensource.com/sites/default/files/uploads/guiscrcpy-screenshot.jpg (guiscrcpy running on Fedora 30)

View File

@ -0,0 +1,163 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Mutation testing by example: Execute the test)
[#]: via: (https://opensource.com/article/19/9/mutation-testing-example-execute-test)
[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzic)
Mutation testing by example: Execute the test
======
Use the logic created so far in this series to implement functioning
code, then use failure and unit testing to make it better.
![A cat.][1]
The [second article][2] in this series demonstrated how to implement the logic for determining whether it's daylight or nighttime in a home automation system (HAS) application that controls locking and unlocking a cat door. This third article explains how to write code to use that logic in an application that locks a door at night and unlocks it during daylight hours.
As a reminder, set yourself up to follow along using the .NET xUnit.net testing framework by following the [instructions here][3].
### Disable the cat trap door during nighttime
Assume the cat door is a sophisticated Internet of Things (IoT) product that has an IP address and can be accessed by sending a request to its API. For the sake of brevity, this series doesn't go into how to program an IoT device; rather, it simulates the service to keep the focus on test-driven development (TDD) and mutation testing.
Start by writing a failing unit test:
```
[Fact]
public void GivenNighttimeDisableTrapDoor() {
   var expected = "Cat trap door disabled";
   var timeOfDay = dayOrNightUtility.GetDayOrNight(nightHour);
   var actual = catTrapDoor.Control(timeOfDay);
   Assert.Equal(expected, actual);
}
```
This describes a brand new component or service (**catTrapDoor**). That component (or service) has the capability to control the trap door given the current time. Now it's time to implement **catTrapDoor**.
To simulate this service, you must first describe its capabilities by using the interface. Create a new file in the app folder and name it **ICatTrapDoor.cs** (by convention, an interface name starts with an uppercase letter **I**). Add the following code to that file:
```
namespace app{
   public interface ICatTrapDoor {
       string Control(string dayOrNight);
   }
}
```
This interface is not capable of functioning. It merely describes your intention when building the **CatTrapDoor** service. Interfaces are a nice way to create abstractions of the services you are working with. In a way, you could regard this interface as an API of the **CatTrapDoor** service.
To implement the API, create a new file in the app folder and name it **FakeCatTrapDoor.cs**. Enter the following code into the class file:
```
namespace app{
   public class FakeCatTrapDoor : ICatTrapDoor {
       public string Control(string dayOrNight) {
           string trapDoorStatus = "Undetermined";
           if(dayOrNight == "Nighttime") {
               trapDoorStatus = "Cat trap door disabled";
           }
           return trapDoorStatus;
       }
   }
}
```
This new **FakeCatTrapDoor** class implements the interface **ICatTrapDoor**. Its method **Control** accepts string value **dayOrNight** and checks whether the value passed in is "Nighttime." If it is, it modifies **trapDoorStatus** from "Undetermined" to "Cat trap door disabled" and returns that value to the calling client.
Why is it called **FakeCatTrapDoor**? Because it's not a representation of the real cat trap door. The fake just helps you work out the processing logic. Once your logic is airtight, the fake service is replaced with the real service (this topic is reserved for the discipline of integration testing).
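If you are coding along and the first unit test does not compile, note that the test class also needs instances of the utility and the fake. Here is a minimal sketch of how the top of **UnitTest1.cs** might look; the **DayOrNightUtility** name and the sample hour values are assumptions carried over from the earlier articles in this series, so your exact identifiers may differ:
```
using Xunit;
using app;

namespace unittest
{
    public class UnitTest1
    {
        // Utility from the earlier articles in this series (name assumed)
        DayOrNightUtility dayOrNightUtility = new DayOrNightUtility();

        // The fake stands in for the real IoT cat trap door service
        ICatTrapDoor catTrapDoor = new FakeCatTrapDoor();

        // Sample hours for the tests; the exact values are assumptions
        int nightHour = 23;
        int dayHour = 12;

        // ... the [Fact] test methods shown in this article go here ...
    }
}
```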
With everything implemented, all the unit tests pass when they run:
```
Starting test execution, please wait...
Total tests: 3. Passed: 3. Failed: 0. Skipped: 0.
Test Run Successful.
Test execution time: 1.3913 Seconds
```
### Enable the cat trap door during daytime
It's time to look at the next scenario in our user story:
> _Scenario #2: Enable cat trap door during daylight_
>
> * Given that the clock detects the daylight
> * When the clock notifies the HAS
> * Then the HAS enables the cat trap door
>
This should be easy, just the flip side of the first scenario. First, write the failing test. Add the following unit test to your **UnitTest1.cs** file in the **unittest** folder:
```
[Fact]
public void GivenDaylightEnableTrapDoor() {
   var expected = "Cat trap door enabled";
   var timeOfDay = dayOrNightUtility.GetDayOrNight(dayHour);
   var actual = catTrapDoor.Control(timeOfDay);
   Assert.Equal(expected, actual);
}
```
You can expect to receive a "Cat trap door enabled" notification when sending the "Daylight" status to the **catTrapDoor** service. When you run the unit tests, the new test fails, as expected:
```
Starting test execution, please wait...
[Xunit unittest.UnitTest1.UnitTest1.GivenDaylightEnableTrapDoor [FAIL]
Failed unittest.UnitTest1.UnitTest1.GivenDaylightEnableTrapDoor
[...]
```
The unit test expected to receive a "Cat trap door enabled" notification but instead was notified that the cat trap door status is "Undetermined." Cool; now's the time to fix this minor failure.
Adding three lines of code to the **FakeCatTrapDoor** does the trick:
```
if(dayOrNight == "Daylight") {
   trapDoorStatus = "Cat trap door enabled";
}
```
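If it helps to see those three lines in context, the complete **Control** method in **FakeCatTrapDoor.cs** should now read roughly as follows (a sketch; only the second **if** block is new):
```
public string Control(string dayOrNight) {
    string trapDoorStatus = "Undetermined";
    if(dayOrNight == "Nighttime") {
        trapDoorStatus = "Cat trap door disabled";
    }
    // The newly added branch that handles the daylight scenario:
    if(dayOrNight == "Daylight") {
        trapDoorStatus = "Cat trap door enabled";
    }
    return trapDoorStatus;
}
```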
Run the unit tests again, and all tests pass:
```
Starting test execution, please wait...
Total tests: 4. Passed: 4. Failed: 0. Skipped: 0.
Test Run Successful.
Test execution time: 2.4888 Seconds
```
Awesome! Everything looks good: all the unit tests are green, and you have a rock-solid solution. Thank you, TDD!
### Not so fast!
Experienced engineers would not be convinced that the solution is rock-solid. Why? Because the solution hasn't been mutated yet. To dive deeply into what mutation is and why it's important, be sure to read the final article in this series.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/mutation-testing-example-execute-test
作者:[Alex Bunardzic][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alex-bunardzic
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cat_pet_animal.jpg?itok=HOrVTfBZ (A cat.)
[2]: https://opensource.com/article/19/9/mutation-testing-example-part-2-failure-experimentation
[3]: https://opensource.com/article/19/8/mutation-testing-evolution-tdd

View File

@ -0,0 +1,76 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (3 open source social platforms to consider)
[#]: via: (https://opensource.com/article/19/9/open-source-social-networks)
[#]: author: (Jaouhari Youssef https://opensource.com/users/jaouhari)
3 open source social platforms to consider
======
A photo-sharing platform, a privacy-friendly social network, and a web
application for building and sharing portfolios.
![Hands holding a mobile phone with open on the screen][1]
It is no mystery why modern social media platforms were designed to be addictive: the more we consult them, the more data they have to fuel them—which enables them to grow smarter and bigger and more powerful.
The massive, global interest in these platforms has created the attention economy, and people's focused mental engagement is the new gold in the age of information abundance. As economist, political scientist, and cognitive psychologist Herbert A. Simon said in [_Designing organizations for an information-rich world_][2], "the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes." And information consumes our attention, a resource of which we only have so much.
According to [GlobalWebIndex][3], we are now spending an average of 142 minutes on social media and messaging platforms daily, roughly 58% more than the 90 minutes we spent on these platforms just seven years ago. This can be explained by the fact that these platforms have grown more intelligent over time by studying the minds and behaviors of users and applying those findings to boost their appeal.
Of relevance here is the psychological concept [variable-ratio schedule][4], which gives rewards after an average number of responses but on an unpredictable schedule. One example is slot machines, which may provide a reward an average of every five games, but the players don't know the specific number of games (one, two, seven, or even 15) they must play before obtaining a reward. This schedule leads to a high response rate and strong engagement.
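To make the mechanism concrete, here is a small, purely illustrative C# simulation of a variable-ratio schedule (not taken from any platform's code): each response has a fixed one-in-five chance of paying out, so rewards average one per five responses while the moment of the next reward stays unpredictable:
```
using System;

class VariableRatioSchedule
{
    static void Main()
    {
        var rng = new Random();
        int sinceLastReward = 0;

        // Simulate 30 responses (feed refreshes, slot-machine pulls, ...)
        for (int response = 1; response <= 30; response++)
        {
            sinceLastReward++;
            // A 1-in-5 chance per response: rewards average one per five
            // responses, but the timing of each one is unpredictable.
            if (rng.Next(5) == 0)
            {
                Console.WriteLine($"Reward after {sinceLastReward} responses");
                sinceLastReward = 0;
            }
        }
    }
}
```
It is exactly this unpredictability, rather than the average payout rate, that keeps people responding.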
Knowing all of this, what can we do to make things better and loosen the grip social networks have on us and our data? I suggest the answer is migrating to open source social platforms, which I believe consider the humane aspect of technology more than private companies do. Here are three open source social platforms to consider.
### Pixelfed
[Pixelfed][5] is a photo-sharing platform that is ad-free and privacy-focused, which means no third party is making a profit from your data. Posts are in chronological order, which means there is no algorithm making distinctions between content.
To join the network, you can pick one of the servers on the [list of instances][6], or you can [install and run][7] your own Pixelfed instance.
Once you are set up, you can connect with other Pixelfed instances. This is known as federation, which means many instances of the same software (in this case, Pixelfed) share data (in this case, pictures). When you federate with another instance of Pixelfed, you can see and interact with pictures posted to other accounts.
The project is ongoing and needs the community's support to grow. Check [Pixelfed's GitHub][8] page for more information about contributing.
### Okuna
[Okuna][9] is an open source, privacy-friendly social network. It is committed to being a positive influence on society and the environment, plus it donates 30% of its profits to worthy causes.
### Mahara
[Mahara][10] is an open source web application for building and sharing electronic portfolios. (The word _mahara_ is Māori for _memory_ or _thoughtful consideration_.) With Mahara, you can create a meaningful and verifiable professional profile, but all your data belongs to you rather than a corporate sponsor. It is customizable and can be integrated into other web services.
You can try Mahara on its [demo site][11].
### Engage for change
If you want to know more about the impact of the attention economy on our lives and engage for positive change, take a look at the [Center for Humane Technology][12], an organization trying to temper the attention economy and make technology more humane. Its aim is to spur change that will protect human vulnerabilities from being exploited and therefore build a better society.
As Sonya Parker said, "whatever you focus your attention on will become important to you even if it's unimportant." So let's focus our attention on building a better world for all.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/open-source-social-networks
作者:[Jaouhari Youssef][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jaouhari
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolserieshe_rh_041x_0.png?itok=tfg6_I78 (Hands holding a mobile phone with open on the screen)
[2]: https://digitalcollections.library.cmu.edu/awweb/awarchive?type=file&item=33748
[3]: https://www.digitalinformationworld.com/2019/01/how-much-time-do-people-spend-social-media-infographic.html
[4]: https://dictionary.apa.org/variable-ratio-schedule
[5]: https://pixelfed.org/
[6]: https://pixelfed.org/join
[7]: https://docs.pixelfed.org/installing-pixelfed/
[8]: https://github.com/pixelfed/pixelfed
[9]: https://www.okuna.io/en/home
[10]: https://mahara.org/
[11]: https://demo.mahara.org/
[12]: https://humanetech.com/problem/

View File

@ -0,0 +1,423 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Execute Commands on Remote Linux System over SSH)
[#]: via: (https://www.2daygeek.com/execute-run-linux-commands-remote-system-over-ssh/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
How to Execute Commands on Remote Linux System over SSH
======
We may sometimes need to run commands on a remote machine.
If it is only once in a while, logging in to the remote system and running them there is fine.
But doing that every time quickly becomes tedious, so what is the better way?
You can run the commands from your local system instead of logging in to the remote system, and that will save you a good deal of time.
How does that work? SSH allows you to run a command on a remote machine without logging in to that computer.
**The general syntax is as follows:**
```
$ ssh [User_Name]@[Remote_Host_Name or IP] [Command or Script]
```
### 1) How to Run the Command on a Remote Linux System Over SSH
The following example allows users to run the **[df command][1]** via ssh on a remote Linux machine:
```
$ ssh daygeek@CentOS7.2daygeek.com df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 27G 4.4G 23G 17% /
devtmpfs 903M 0 903M 0% /dev
tmpfs 920M 0 920M 0% /dev/shm
tmpfs 920M 9.3M 910M 2% /run
tmpfs 920M 0 920M 0% /sys/fs/cgroup
/dev/sda1 1014M 179M 836M 18% /boot
tmpfs 184M 8.0K 184M 1% /run/user/42
tmpfs 184M 0 184M 0% /run/user/1000
```
### 2) How to Run Multiple Commands on a Remote Linux System Over SSH
The following example allows users to run multiple commands at once over ssh on the remote Linux system.
It runs the uptime and free commands on the remote Linux system in a single SSH session.
```
$ ssh daygeek@CentOS7.2daygeek.com "uptime && free -m"
23:05:10 up 10 min, 0 users, load average: 0.00, 0.03, 0.03
total used free shared buffers cached
Mem: 1878 432 1445 1 100 134
-/+ buffers/cache: 197 1680
Swap: 3071 0 3071
```
### 3) How to Run the Command with sudo Privilege on a Remote Linux System Over SSH
The following example allows users to run the **fdisk** command with **[sudo privilege][2]** on the remote Linux system via ssh.
Normal users are not allowed to execute commands available under the system binary directory **(/usr/sbin/)**. Users need root privileges to run them.
So to run the **[fdisk command][3]** on a Linux system, you need root privileges.
The which command returns the full path of the executable of the given command.
```
$ which fdisk
/usr/sbin/fdisk
```
```
$ ssh -t daygeek@CentOS7.2daygeek.com "sudo fdisk -l"
[sudo] password for daygeek:
Disk /dev/sda: 32.2 GB, 32212254720 bytes, 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000bf685
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 2099199 1048576 83 Linux
/dev/sda2 2099200 62914559 30407680 8e Linux LVM
Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/centos-root: 29.0 GB, 28982640640 bytes, 56606720 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/centos-swap: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Connection to centos7.2daygeek.com closed.
```
### 4) How to Run the Service Command with sudo Privilege on a Remote Linux System Over SSH
The following example allows users to run the service command with sudo privilege on the remote Linux system via ssh.
```
$ ssh -t daygeek@CentOS7.2daygeek.com "sudo systemctl restart httpd"
[sudo] password for daygeek:
Connection to centos7.2daygeek.com closed.
```
### 5) How to Run the Command on a Remote Linux System Over SSH With Non-Standard Port
The following example allows users to run the **[hostnamectl command][4]** via ssh on a remote Linux machine with a non-standard port.
```
$ ssh -p 2200 daygeek@Ubuntu18.2daygeek.com hostnamectl
Static hostname: Ubuntu18.2daygeek.com
Icon name: computer-vm
Chassis: vm
Machine ID: 27f6c2febda84dc881f28fd145077187
Boot ID: bbeccdf932be41ddb5deae9e5f15183d
Virtualization: oracle
Operating System: Ubuntu 18.04.2 LTS
Kernel: Linux 4.15.0-60-generic
Architecture: x86-64
```
### 6) How to Save Output from Remote System to Local System
The following example allows users to remotely execute the **[top command][5]** on a Linux system via ssh and save the output to the local system.
```
$ ssh daygeek@CentOS7.2daygeek.com "top -bc | head -n 35" > /tmp/top-output.txt
```
```
$ cat /tmp/top-output.txt
top - 01:13:11 up 18 min, 1 user, load average: 0.01, 0.05, 0.10
Tasks: 168 total, 1 running, 167 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.0 us, 6.2 sy, 0.0 ni, 93.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 1882300 total, 1176324 free, 342392 used, 363584 buff/cache
KiB Swap: 2097148 total, 2097148 free, 0 used. 1348140 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4943 daygeek 20 0 162052 2248 1612 R 10.0 0.1 0:00.07 top -bc
1 root 20 0 128276 6936 4204 S 0.0 0.4 0:03.08 /usr/lib/sy+
2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [kthreadd]
3 root 20 0 0 0 0 S 0.0 0.0 0:00.25 [ksoftirqd/+
4 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [kworker/0:+
5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kworker/0:+
7 root rt 0 0 0 0 S 0.0 0.0 0:00.00 [migration/+
8 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [rcu_bh]
9 root 20 0 0 0 0 S 0.0 0.0 0:00.77 [rcu_sched]
10 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [lru-add-dr+
11 root rt 0 0 0 0 S 0.0 0.0 0:00.01 [watchdog/0]
13 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [kdevtmpfs]
14 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [netns]
15 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [khungtaskd]
16 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [writeback]
17 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kintegrity+
18 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [bioset]
19 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [bioset]
20 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [bioset]
```
Alternatively, you can use the following format to run multiple commands on a remote system.
```
$ ssh daygeek@CentOS7.2daygeek.com << EOF
hostnamectl
free -m
grep daygeek /etc/passwd
EOF
```
Output of the above command.
```
Pseudo-terminal will not be allocated because stdin is not a terminal.
Static hostname: CentOS7.2daygeek.com
Icon name: computer-vm
Chassis: vm
Machine ID: 002f47b82af248f5be1d67b67e03514c
Boot ID: dca9a1ba06374d7d96678f9461752482
Virtualization: kvm
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-957.el7.x86_64
Architecture: x86-64
total used free shared buff/cache available
Mem: 1838 335 1146 11 355 1314
Swap: 2047 0 2047
daygeek:x:1000:1000:2daygeek:/home/daygeek:/bin/bash
```
### 7) How to Execute Local Bash Scripts on Remote System
The following example allows users to run a local **[bash script][6]** “remote-test.sh” via ssh on a remote Linux machine.
Create a shell script and execute it.
```
$ vi /tmp/remote-test.sh
#!/bin/bash
#Name: remote-test.sh
#--------------------
uptime
free -m
df -h
uname -a
hostnamectl
```
Then execute the script on the remote system and check the output:
```
$ ssh daygeek@CentOS7.2daygeek.com 'bash -s' < /tmp/remote-test.sh
01:17:09 up 22 min, 1 user, load average: 0.00, 0.02, 0.08
total used free shared buff/cache available
Mem: 1838 333 1148 11 355 1316
Swap: 2047 0 2047
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 27G 4.4G 23G 17% /
devtmpfs 903M 0 903M 0% /dev
tmpfs 920M 0 920M 0% /dev/shm
tmpfs 920M 9.3M 910M 2% /run
tmpfs 920M 0 920M 0% /sys/fs/cgroup
/dev/sda1 1014M 179M 836M 18% /boot
tmpfs 184M 12K 184M 1% /run/user/42
tmpfs 184M 0 184M 0% /run/user/1000
Linux CentOS7.2daygeek.com 3.10.0-957.el7.x86_64 #1 SMP Thu Nov 8 23:39:32 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Static hostname: CentOS7.2daygeek.com
Icon name: computer-vm
Chassis: vm
Machine ID: 002f47b82af248f5be1d67b67e03514c
Boot ID: dca9a1ba06374d7d96678f9461752482
Virtualization: kvm
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-957.el7.x86_64
Architecture: x86-64
```
Alternatively, you can pipe the script to ssh. If you think the plain output is hard to read, add a few changes to make it more elegant.
```
$ vi /tmp/remote-test-1.sh
#!/bin/bash
#Name: remote-test-1.sh
echo "---------System Uptime--------------------------------------------"
uptime
echo -e "\n"
echo "---------Memory Usage---------------------------------------------"
free -m
echo -e "\n"
echo "---------Disk Usage-----------------------------------------------"
df -h
echo -e "\n"
echo "---------Kernel Version-------------------------------------------"
uname -a
echo -e "\n"
echo "---------HostName Info--------------------------------------------"
hostnamectl
echo "------------------------------------------------------------------"
```
Output for the above script.
```
$ cat /tmp/remote-test-1.sh | ssh daygeek@CentOS7.2daygeek.com
Pseudo-terminal will not be allocated because stdin is not a terminal.
---------System Uptime--------------------------------------------
03:14:09 up 2:19, 1 user, load average: 0.00, 0.01, 0.05
---------Memory Usage---------------------------------------------
total used free shared buff/cache available
Mem: 1838 376 1063 11 398 1253
Swap: 2047 0 2047
---------Disk Usage-----------------------------------------------
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 27G 4.4G 23G 17% /
devtmpfs 903M 0 903M 0% /dev
tmpfs 920M 0 920M 0% /dev/shm
tmpfs 920M 9.3M 910M 2% /run
tmpfs 920M 0 920M 0% /sys/fs/cgroup
/dev/sda1 1014M 179M 836M 18% /boot
tmpfs 184M 12K 184M 1% /run/user/42
tmpfs 184M 0 184M 0% /run/user/1000
tmpfs 184M 0 184M 0% /run/user/0
---------Kernel Version-------------------------------------------
Linux CentOS7.2daygeek.com 3.10.0-957.el7.x86_64 #1 SMP Thu Nov 8 23:39:32 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
---------HostName Info--------------------------------------------
Static hostname: CentOS7.2daygeek.com
Icon name: computer-vm
Chassis: vm
Machine ID: 002f47b82af248f5be1d67b67e03514c
Boot ID: dca9a1ba06374d7d96678f9461752482
Virtualization: kvm
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-957.el7.x86_64
Architecture: x86-64
```
### 8) How to Run Multiple Commands on Multiple Remote Systems Simultaneously
The following bash script allows users to run multiple commands on multiple remote systems. It uses a simple for loop, which processes the hosts one after another.
For truly parallel execution, you can try the **[PSSH command][7]**, **[ClusterShell command][8]**, or **[DSH command][9]**.
```
$ vi /tmp/multiple-host.sh
for host in CentOS7.2daygeek.com CentOS6.2daygeek.com
do
ssh daygeek@${host} "uname -a;uptime;date;w"
done
```
Output for the above script:
```
$ sh multiple-host.sh
Linux CentOS7.2daygeek.com 3.10.0-957.el7.x86_64 #1 SMP Thu Nov 8 23:39:32 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
01:33:57 up 39 min, 1 user, load average: 0.07, 0.06, 0.06
Wed Sep 25 01:33:57 CDT 2019
01:33:57 up 39 min, 1 user, load average: 0.07, 0.06, 0.06
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
daygeek pts/0 192.168.1.6 01:08 23:25 0.06s 0.06s -bash
Linux CentOS6.2daygeek.com 2.6.32-754.el6.x86_64 #1 SMP Tue Jun 19 21:26:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
23:33:58 up 39 min, 0 users, load average: 0.00, 0.00, 0.00
Tue Sep 24 23:33:58 MST 2019
23:33:58 up 39 min, 0 users, load average: 0.00, 0.00, 0.00
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
```
### 9) How to Add a Password Using the sshpass Command
If you are having trouble entering your password each time, I advise you to go with one of the methods below, as per your requirement.
If you are going to perform this type of activity frequently, I advise you to set up **[password-less authentication][10]**, since that is the standard and permanent solution.
If you only do these tasks a few times a month, I recommend the **“sshpass”** utility.
Just provide a password as an argument using the **“-p”** option.
```
$ sshpass -p 'Your_Password_Here' ssh -p 2200 daygeek@Ubuntu18.2daygeek.com ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:18:90:7f brd ff:ff:ff:ff:ff:ff
inet 192.168.1.12/24 brd 192.168.1.255 scope global dynamic eth0
valid_lft 86145sec preferred_lft 86145sec
inet6 fe80::a00:27ff:fe18:907f/64 scope link tentative dadfailed
valid_lft forever preferred_lft forever
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/execute-run-linux-commands-remote-system-over-ssh/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/linux-check-disk-space-usage-df-command/
[2]: https://www.2daygeek.com/how-to-configure-sudo-access-in-linux/
[3]: https://www.2daygeek.com/linux-fdisk-command-to-manage-disk-partitions/
[4]: https://www.2daygeek.com/four-methods-to-change-the-hostname-in-linux/
[5]: https://www.2daygeek.com/understanding-linux-top-command-output-usage/
[6]: https://www.2daygeek.com/category/shell-script/
[7]: https://www.2daygeek.com/pssh-parallel-ssh-run-execute-commands-on-multiple-linux-servers/
[8]: https://www.2daygeek.com/clustershell-clush-run-commands-on-cluster-nodes-remote-system-in-parallel-linux/
[9]: https://www.2daygeek.com/dsh-run-execute-shell-commands-on-multiple-linux-servers-at-once/
[10]: https://www.2daygeek.com/configure-setup-passwordless-ssh-key-based-authentication-linux/

View File

@ -0,0 +1,258 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Mutation testing by example: Evolving from fragile TDD)
[#]: via: (https://opensource.com/article/19/9/mutation-testing-example-definition)
[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzic)
Mutation testing by example: Evolving from fragile TDD
======
Test-driven development is not enough for delivering lean code that
works exactly to expectations. Mutation testing is a powerful step
forward. Here's what that looks like.
![Binary code on a computer screen][1]
The [third article][2] in this series demonstrated how to use failure and unit testing to develop better code.
While it seemed that the journey was over with a successful sample Internet of Things (IoT) application to control a cat door, experienced programmers know that solutions need _mutation_.
### What's mutation testing?
Mutation testing is the process of iterating through each line of implemented code, mutating that line, then running unit tests and checking if the mutation broke the expectations. If it hasn't, you have created a surviving mutant.
Surviving mutants are always an alarming issue that points to potentially risky areas in a codebase. As soon as you catch a surviving mutant, you must kill it. And the only way to kill a surviving mutant is to create additional descriptions—new unit tests that describe your expectations regarding the output of your function or module. In the end, you deliver a lean, mean solution that is airtight and guarantees no pesky bugs or defects are lurking in your codebase.
If you leave surviving mutants to kick around and proliferate, live long, and prosper, then you are creating the much dreaded technical debt. On the other hand, if any unit test complains that the temporarily mutated line of code produces output that's different from the expected output, the mutant has been killed.
### Installing Stryker
The quickest way to try mutation testing is to leverage a dedicated framework. This example uses [Stryker][3].
To install Stryker, go to the command line and run:
```
$ dotnet tool install -g dotnet-stryker
```
To run Stryker, navigate to the **unittest** folder and type:
```
$ dotnet-stryker
```
Here is Stryker's report on the quality of our solution:
```
14 mutants have been created. Each mutant will now be tested, this could take a while.
Tests progress | 14/14 | 100% | ~0m 00s |
Killed : 13
Survived : 1
Timeout : 0
All mutants have been tested, and your mutation score has been calculated
- app [13/14 (92.86%)]
[...]
```
The report says:
* Stryker created 14 mutants
* Stryker saw 13 mutants were killed by the unit tests
* Stryker saw one mutant survive the onslaught of the unit tests
* Stryker calculated that the existing codebase contains 92.86% of code that serves the expectations
* Stryker calculated that 7.14% of the codebase contains code that does not serve the expectations
Overall, Stryker claims that the application assembled in the first three articles in this series failed to produce a reliable solution.
### How to kill a mutant
When software developers encounter surviving mutants, they typically reach for the implemented code and look for ways to modify it. For example, in the case of the sample application for cat door automation, change the line:
```
string trapDoorStatus = "Undetermined";
```
to:
```
string trapDoorStatus = "";
```
and run Stryker again. A mutant has survived:
```
All mutants have been tested, and your mutation score has been calculated
- app [13/14 (92.86%)]
[...]
[Survived] String mutation on line 4: '""' ==> '"Stryker was here!"'
[...]
```
This time, you can see that Stryker mutated the line:
```
string trapDoorStatus = "";
```
into:
```
`string trapDoorStatus = ""Stryker was here!";`
```
This is a great example of how Stryker works: it mutates every line of our code, in a smart way, in order to see if there are further test cases we have yet to think about. It's forcing us to consider our expectations in greater depth.
Defeated by Stryker, you can attempt to improve the implemented code by adding more logic to it:
```
public string Control(string dayOrNight) {
   string trapDoorStatus = "Undetermined";
   if(dayOrNight == "Nighttime") {
       trapDoorStatus = "Cat trap door disabled";
   } else if(dayOrNight == "Daylight") {
       trapDoorStatus = "Cat trap door enabled";
   } else {
       trapDoorStatus = "Undetermined";
   }
   return trapDoorStatus;
}
```
But after running Stryker again, you see this attempt created a new mutant:
```
All mutants have been tested, and your mutation score has been calculated
- app [13/15 (86.67%)]
[...]
[Survived] String mutation on line 4: '"Undetermined"' ==> '""'
[...]
[Survived] String mutation on line 10: '"Undetermined"' ==> '""'
[...]
```
![Stryker report][4]
You cannot wiggle out of this tight spot by modifying the implemented code. It turns out the only way to kill surviving mutants is to _describe additional expectations_. And how do you describe expectations? By writing unit tests.
### Unit testing for success
It's time to add a new unit test. Since the surviving mutant is located on line 4, you realize you have not specified expectations for the output with value "Undetermined."
Let's add a new unit test:
```
[Fact]
public void GivenIncorrectTimeOfDayReturnUndetermined() {
   var expected = "Undetermined";
   var actual = catTrapDoor.Control("Incorrect input");
   Assert.Equal(expected, actual);
}
```
The fix worked! Now all mutants are killed:
```
All mutants have been tested, and your mutation score has been calculated
- app [14/14 (100%)]
[Killed] [...]
```
You finally have a complete solution, including a description of what is expected as output if the system receives incorrect input values.
### Mutation testing to the rescue
Suppose you decide to over-engineer a solution and add this method to the **FakeCatTrapDoor**:
```
private string getTrapDoorStatus(string dayOrNight) {
   string status = "Everything okay";
   if(dayOrNight != "Nighttime" || dayOrNight != "Daylight") {
       status = "Undetermined";
   }
   return status;
}
```
Then replace the line 4 statement:
```
string trapDoorStatus = "Undetermined";
```
with:
```
string trapDoorStatus = getTrapDoorStatus(dayOrNight);
```
When you run unit tests, everything passes:
```
Starting test execution, please wait...
Total tests: 5. Passed: 5. Failed: 0. Skipped: 0.
Test Run Successful.
Test execution time: 2.7191 Seconds
```
The test has passed without an issue. TDD has worked. But bring Stryker to the scene, and suddenly the picture looks a bit grim:
```
All mutants have been tested, and your mutation score has been calculated
- app [14/20 (70%)]
[...]
```
Stryker created 20 mutants; 14 mutants were killed, while six mutants survived. This lowers the success score to 70%. This means only 70% of our code is there to fulfill the described expectations. The other 30% of the code is there for no clear reason, which puts us at risk of misuse of that code.
In this case, Stryker helps fight the bloat. It discourages the use of unnecessary and convoluted logic because it is within the crevices of such unnecessary complex logic where bugs and defects breed.
### Conclusion
As you've seen, mutation testing ensures that no uncertain fact goes unchecked.
You could compare Stryker to a chess master who is thinking of all possible moves to win a match. When Stryker is uncertain, it's telling you that winning is not yet a guarantee. The more unit tests we record as facts, the further we are in our match, and the more likely Stryker can predict a win. In any case, Stryker helps detect losing scenarios even when everything looks good on the surface.
It is always a good idea to engineer code properly. You've seen how TDD helps in that regard. TDD is especially useful when it comes to keeping your code extremely modular. However, TDD on its own is not enough for delivering lean code that works exactly to expectations. Developers can add code to an already implemented codebase without first describing the expectations. That puts the entire codebase at risk. Mutation testing is especially useful in catching breaches in the regular TDD cadence. You need to mutate every line of implemented code to be certain no line of code is there without a specific reason.
Now that you understand how mutation testing works, you should look into how to leverage it. Next time, I'll show you how to put mutation testing to good use when tackling more complex scenarios. I will also introduce more agile concepts to see how DevOps culture can benefit from maturing technology.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/mutation-testing-example-definition
作者:[Alex Bunardzic][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alex-bunardzic
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/binary_code_computer_screen.png?itok=7IzHK1nn (Binary code on a computer screen)
[2]: https://opensource.com/article/19/9/mutation-testing-example-part-3-execute-test
[3]: https://stryker-mutator.io/
[4]: https://opensource.com/sites/default/files/uploads/strykerreport.png (Stryker report)

View File

@ -0,0 +1,103 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (You Can Now Use OneDrive in Linux Natively Thanks to Insync)
[#]: via: (https://itsfoss.com/use-onedrive-on-linux/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
You Can Now Use OneDrive in Linux Natively Thanks to Insync
======
[OneDrive][1] is a cloud storage service from Microsoft that provides 5 GB of free storage to every user. It is integrated with your Microsoft account, and if you use Windows, you have OneDrive preinstalled there.
OneDrive is not available as a desktop application on Linux. You can access your stored files via the web interface, but you won't get the native feel of using the cloud storage in the file manager.
The good news is that you can now use an unofficial tool that lets you use OneDrive in Ubuntu or other Linux distributions.
[Insync][2] is quite a popular premium third-party sync tool when it comes to Google Drive cloud storage management on Linux. We already have a detailed review of [Insync with Google Drive][3] support for that matter.
However, recently, [Insync 3 was released][4] with OneDrive support. So, in this article, we are going to take a quick look at how OneDrive can be used with it and what's new in Insync 3.
Non-FOSS Alert
_Few developers take the pain of bringing their software to Linux. As a portal focusing on desktop Linux, we cover such software here even if they are not FOSS._
_Insync 3 is neither open source software nor free to use. You only get a 15-day trial to test it out._ _If you like it, you can purchase it for a lifetime fee of $29.99 per account._
_And no, we are not getting any money to promote them (in case you were wondering). We don't do that here._
### Get A Native OneDrive Experience in Linux With Insync
![][5]
Even though it is a premium tool, users who rely on OneDrive may want it to get a seamless OneDrive sync experience on their Linux system.
To get started, you have to download the suitable package for your Linux distribution from the [official download page][6].
[Download Insync][7]
You can also choose to add the repository and get it installed. You will get the instructions at Insyncs [official website][7].
Once you have it installed, just launch it and choose the OneDrive option.
![][8]
Also, it is worth noting that you need a separate license for each OneDrive or Google Drive account you add.
Now, after authorizing the OneDrive account, you have to select a base folder where everything will be synced, which is a new feature in Insync 3.
![Insync 3 Base Folder][9]
In addition to this, you also get the ability to selectively sync files/folders locally or from the cloud after you set it up.
![Insync Selective Sync][10]
You can also customize the sync preference by adding your own rules to ignore/sync the folders and files that you want; this is totally optional.
![Insync Customize Sync Preferences][11]
Finally, you have it ready:
![Insync 3][12]
You can now start syncing files/folders using OneDrive across multiple platforms, including your Linux desktop, with Insync. In addition to all the new features/changes mentioned above, you also get a faster/smoother experience on Insync.
Also, with Insync 3, you can now take a look at the progress of your sync:
![][13]
### Wrapping Up
Overall, Insync 3 is an impressive upgrade for those looking to sync OneDrive on their Linux system. In case you do not want to pay, you can try other [free cloud services for Linux][14].
What do you think about Insync? If you're already using it, how's the experience so far? Let us know your thoughts in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/use-onedrive-on-linux/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://onedrive.live.com
[2]: https://www.insynchq.com
[3]: https://itsfoss.com/insync-linux-review/
[4]: https://www.insynchq.com/blog/insync-3/
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/onedrive-linux.png?ssl=1
[6]: https://www.insynchq.com/downloads?start=true
[7]: https://www.insynchq.com/downloads
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/insync-3one-drive-sync.png?ssl=1
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/insync-3-base-folder-1.png?ssl=1
[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/insync-selective-syncs.png?ssl=1
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/insync-customize-sync.png?ssl=1
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/insync-homescreen.png?ssl=1
[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/insync-3-progress-bar.png?ssl=1
[14]: https://itsfoss.com/cloud-services-linux/

View File

@ -1,97 +0,0 @@
技术如何改变敏捷的规则
======
![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/CIO%20Containers%20Ecosystem.png?itok=lDTaYXzk)
越来越多的企业正因为一个非常明显的原因开始尝试敏捷和[DevOps][1]: 企业需要通过更快的速度和更多的实验为创新和竞争性提供优势。而DevOps将帮助我们得到所需的创新速度。但是在小团队或初创企业中实践DevOps与进行大规模实践完全是两码事。我们都明白这样的一个事实那就是在10人的跨职能团队中能够很好地解决问题的方案当将相同的模式应用到100人的团队中时就可能无法奏效。这条道路是如此艰难以至于IT领导者很容易将敏捷方法的推行再推迟一年。
但那样的时代已经结束了。如果你已经尝试过,但是没有成功,那么现在是时候重新开始了。
直到现在DevOps需要为许多组织提供个性化的解决方案因此往往需要进行大量的调整以及付出额外的工作。但在今天[Linux容器][2]和Kubernetes正在推动DevOps工具和过程的标准化。而这样的标准化将会加速整个软件开发过程。因此我们用来实践DevOps工作方式的技术最终能够满足我们加快软件开发速度的愿望。
Linux容器和[Kubernetes][3]正在改变团队交互的方式。此外你可以在Kubernetes平台上运行任何能够在Linux运行的应用程序。这意味着什么呢你可以运行大量的企业及应用程序(甚至可以解决以前令人烦恼的Windows和Linux之间的协调问题)。最后容器和Kubernetes将能够满足未来所有运行内容的需求。它们正在经受着未来的考验以应对机器学习、人工智能和分析工作等下一代解决问题工具。
**[ 参考相关文章,[4 container adoption patterns: What you need to know. ] ][4]**
让我们以机器学习为例来思考一下。今天,人们可以在大量的企业数据中找到一些模式。当机器发现这些模式时(想想机器学习),你的员工就能更快地采取行动。随着人工智能的加入,机器不仅可以发现模式,还可以对模式进行操作。如今,三个星期已经成为了一个积极的软件开发冲刺周期。有了人工智能,机器每秒可以多次修改代码。创业公司会利用这种能力来“打扰你”。
考虑一下你需要多快才能参与到竞争当中。如果你对于无法对于DevOps和每周一个迭代周期充满信心那么考虑一下当那个创业公司将AI驱动的过程指向你时会发生什么现在是时候转向DevOps的工作方式了否认就会像你的竞争对手一样被甩在后面。
### 容器技术如何改变团队的工作?
DevOps使得许多试图将这种工作方式扩展到更大范围的团队感到沮丧。即使许多IT(和业务)人员之前都听说过敏捷相关的语言、框架、模型(如DevOps)等承诺将会彻底应用程序开发和IT过程的全部相关内容但他们还是对此持怀疑态度。
**[ 想要获取来自其他CIO们的建议吗不放参考下我们的综述性资源, [DevOps: The IT Leader's Guide][5]. ]**
向你的涉众“推销”快速开发冲刺也不是一件容易的事情。想象一下如果你以这种方式买了一栋房子你将不再需要向开发商支付固定的金额而是会得到这样的信息“我们将在4周内浇筑完地基其成本是X之后再搭建房屋框架和铺设电路但是我们现在只能够知道地基完成的时间表。”人们已经习惯了买房子的时候有一个预先的价格和交付时间表。
挑战在于构建软件与构建房屋不同。同一个建筑商往往建造了成千上万个完全相同的房子,而软件项目从来都各不相同。这是你要克服的第一个障碍。
开发和运维团队的工作方式确实不同,我之所以知道这一点是因为我曾经从事过这两方面的工作。企业往往会用不同的方式来激励他们,开发人员会因为更改和创建而获得奖励,而运维专家则会因降低成本和确保安全性而获得奖励。我们会把他们分成不同的小组,并且尽量减少互动。而这些角色通常会吸引那些思维方式完全不同的技术人员。但是这样的解决方案注定会失败,你必须打破横亘在开发和运维之间的藩篱。
想想传统情况下会发生什么。业务会把需求扔过墙这是因为他们在“买房”模式下运作并且说上一句“我们9个月后见。”开发人员根据这些需求进行开发并根据技术约束的需要进行更改。然后他们把它扔过墙传递给运维人员并说一句“搞清楚如何运行这个软件”。然后运维人员勤就会奋地进行大量更改使软件与基础设施保持一致。然而最终的结果是什么呢
通常情况下当业务人员看到需求实现的最终结果时甚至根本辨认不出。在过去20年的大部分时间里我们一次又一次地目睹了这种模式在软件行业中上演。而现在是时候改变了。
Linux容器能够真正地解决这样的问题这是因为容器缩小了开发和运维之间的间隙。容器技术允许两个团队共同理解和设计所有的关键需求但仍然独立地履行各自团队的职责。基本上我们去掉了开发人员和运维人员之间的电话游戏。
因为容器技术,我们可以使得运维团队的规模更小,但依旧能够承担起数百万应用程序的运维工作,并且能够使得开发团队可以更加快速地根据需要更改软件。(在较大的组织中,所需的速度可能比运维人员的响应速度更快。)
使用容器,您可以将所需要交付的内容与它运行的位置分开。你的运维团队只需要负责运行容器的主机和安全的内存占用,仅此而已。这意味着什么呢?
首先这意味着你现在可以和团队一起实践DevOps了。没错只需要让团队专注于他们已经拥有的专业知识而对于容器只需让团队了解所需集成依赖关系的必要知识即可。
如果你想要重新训练每个人,往往会收效甚微。容器技术允许团队之间进行交互,但同时也会为每个团队提供一个围绕该团队优势而构建的强大边界。开发人员会知道需要消耗什么,但不需要知道如何使其大规模运行。运维团队了解核心基础设施,但不需要了解应用程序的细节。此外,运维团队也可以通过更新应用程序来解决新的安全问题,以免你成为下一个数据泄露的热门话题。
想要为一个大型IT组织比如30000人的团队教授运维和开发技能那或许需要花费你十年的时间而你可能并没有那么多时间。
当人们谈论“构建新的云原生应用程序将帮助我们摆脱这个问题”时请批判性地进行思考。你可以在10个人的团队中构建云原生应用程序但这对《财富》杂志前1000强的企业而言或许并不适用。除非你不再需要依赖现有的团队否则你无法一个接一个地构建新的微服务你最终将得到一个竖井式的组织。这是一个诱人的想法但你不能指望这些应用程序来重新定义你的业务。我还没见过哪家公司能在如此大规模的并行开发中获得成功。IT预算已经受到限制在很长一段时间内将预算翻倍甚至三倍是不现实的。
### 当奇迹发生时: 你好, 速度
Linux容器就是为扩容而生的。一旦你开始这样做[Kubernetes之类的编制工具就会发挥作用][6],这是因为你将需要运行数千个容器。应用程序将不仅仅由一个容器组成,它们将依赖于许多不同的部分,所有的部分都会作为一个单元运行在容器上。如果不这样做,你的应用程序将无法在生产环境中很好地运行。
思考一下有多少小滑轮和杠杆组合在一起来支撑你的业务,对于任何应用程序都是如此。开发人员负责应用程序中的所有滑轮和杠杆。(如果开发人员没有这些组件,您可能会在集成时做噩梦。)与此同时无论是在线下还是在云上运维团队都会负责构成基础设施的所有滑轮和杠杆。做一个较为抽象的比喻使用Kubernetes你的运维团队就可以为应用程序提供运行所需的燃料但又不必成为所有方面的专家。
开发人员进行实验,运维团队则保持基础设施的安全和可靠。这样的组合使得企业敢于承担小风险,从而实现创新。不同于打几个孤注一掷的赌,公司中真正的实验往往是循序渐进的和快速的。
从个人经验来看,这就是组织内部发生的显著变化:因为人们说:“我们如何通过改变计划来真正地利用这种能力进行实验?”它强制执行敏捷计划。
举个例子使用DevOps模型、容器和Kubernetes的KeyBank如今每天都会部署代码。(观看视频[7]其中主导了KeyBank持续交付和反馈的John Rzeszotarski将解释这一变化。)类似地Macquarie银行也借助DevOps和容器技术每天将一些东西投入生产环境。
一旦你每天都推出软件,它就会改变你计划的每一个方面,并且会[加速业务的变化速度][8]。Macquarie银行和金融服务集团的CDOLuis Uguina表示“创意可以在一天内触达客户。”(参见[9]对Red Hat与Macquarie银行合作的案例研究)。
### 是时候去创造一些伟大的东西了
Macquarie的例子说明了速度的力量。这将如何改变你的经营方式记住Macquarie不是一家初创企业。这是CIO们所面临的颠覆性力量它不仅来自新的市场进入者也来自老牌同行。
开发人员的自由还改变了运营敏捷商店的CIO们的人才方程式。突然之间大公司里的个体(即使不是在最热门的行业或地区)也可以产生巨大的影响。Macquarie利用这一变动作为招聘工具并向开发人员承诺所有新招聘的员工将会在第一周内推出新产品。
与此同时,在这个基于云的计算和存储能力的时代,我们比以往任何时候都拥有更多可用的基础设施。考虑到[机器学习和人工智能工具将很快实现的飞跃][10],这是幸运的。
所有这些都说明现在正是打造伟大事业的好时机。考虑到市场创新的速度你需要不断地创造伟大的东西来保持客户的忠诚度。因此如果你一直在等待将赌注押在DevOps上那么现在就是正确的时机。容器技术和Kubernetes改变了规则并且对你有利。
**想要获取更多这样的智慧吗, IT领导者? [订阅每周邮件][11].**
--------------------------------------------------------------------------------
via: https://enterprisersproject.com/article/2018/1/how-technology-changes-rules-doing-agile
作者:[Matt Hicks][a]
译者:[JayFrank](https://github.com/JayFrank)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://enterprisersproject.com/user/matt-hicks
[1]:https://enterprisersproject.com/tags/devops
[2]:https://www.redhat.com/en/topics/containers?intcmp=701f2000000tjyaAAA
[3]:https://www.redhat.com/en/topics/containers/what-is-kubernetes?intcmp=701f2000000tjyaAAA
[4]:https://enterprisersproject.com/article/2017/8/4-container-adoption-patterns-what-you-need-know?sc_cid=70160000000h0aXAAQ
[5]:https://enterprisersproject.com/devops?sc_cid=70160000000h0aXAAQ
[6]:https://enterprisersproject.com/article/2017/11/how-enterprise-it-uses-kubernetes-tame-container-complexity
[7]:https://www.redhat.com/en/about/videos/john-rzeszotarski-keybank-red-hat-summit-2017?intcmp=701f2000000tjyaAAA
[8]:https://enterprisersproject.com/article/2017/11/dear-cios-stop-beating-yourselves-being-behind-transformation
[9]:https://www.redhat.com/en/resources/macquarie-bank-case-study?intcmp=701f2000000tjyaAAA
[10]:https://enterprisersproject.com/article/2018/1/4-ai-trends-watch
[11]:https://enterprisersproject.com/email-newsletter?intcmp=701f2000000tsjPAAQ

View File

@ -1,7 +1,3 @@
数码文件与文件夹收纳术(以照片为例)
======
更新 2014-05-14:增加了一些具体实例
@ -12,9 +8,17 @@
更新 2017-08-28: geeqier 视频缩略图的邮件评论
更新 2018-03-06:增加了zum Konzept von Julian Kahnert 的链接
更新 2018-05-06:增加了作者在 2018 Linuxtage Graz 大会上演讲的视频
更新 2018-06-05:关于 metadata 的邮件回复
更新 2019-07-09:关于在文件名中避免使用系谱和字符的邮件回复
每当度假或去哪游玩时我就会化身为一个富有激情的摄影师。所以,过去的几年中我积累了许多的 [JPEG][1] 文件。这篇文章中我会介绍我是如何避免[vendor lock-in][2]LCTT译注vendor lock-in 供应商锁定,原为经济学术语,这里引申为避免过于依赖某一服务平台)造成受限于那些临时性的解决方案及数据丢失。相反,我更倾向于使用那些可以让我投入时间和精力打理并能长久使用的解决方案。
这一(相当长的)攻略 **并不仅仅适用于图像文件** :我将进一步阐述像是文件夹结构,文件的命名规则,等等许多领域的事情。因此,这些规范适用于我所能接触到的所以类型的文件。
这一(相当长的)攻略 **并不仅仅适用于图像文件** :我将进一步阐述像是文件夹结构,文件的命名规则,等等许多领域的事情。因此,这些规范适用于我所能接触到的所有类型的文件。
在我开始传授我的方法之前,我们应该先就我将要介绍方法的达成一个共识,那就是我们是否有相同的需求。如果你对[raw 图像格式][3]十分推崇,将照片存储在云端或其他你信赖的地方(对我而言可能不会),那么你可能不会认同这篇文章将要描述的方式了。请根据你的情况来灵活做出选择。
@ -32,7 +36,7 @@
就独立性和**避免锁定效应**而言,我不想使用那种一旦公司停止产品或服务就无法使用的工具。出于同样的原因,由于我是一个注重隐私的人,**我不想使用任何基于云的服务**。为了让自己对新的可能性保持开放的心态,我不希望仅在一个特定的操作系统平台上倾注全部的精力。**基本的东西必须在任何平台上可用**(查看、导航、……)。但是**全套需求必须在GNU/Linux上运行**且我选择Debian GNU/Linux。
在我传授当前针对上述大量需求的解决方案之前,我必须解释一下我的一般文件夹结构和文件命名约定,我也使用它来命名数码照片。但首先,你必须考虑一个重要的事实:
在我传授当前针对上述大量需求的解决方案之前,我必须解释一下我的一般文件夹结构和文件命名约定,我也使用它来命名数码照片。但首先,你必须认清一个重要的事实:
#### iPhoto, Picasa, 诸如此类应被认为是有害的
@ -42,20 +46,23 @@
如果你现在正打算更换 相应的工具你将会意识到iPhoto或Picasa确实分别存储原始图像文件和你对它们所做的所有操作。旋转图像向图像文件添加描述标签裁剪等等如果你不能导出并重新导入到新工具那么**所有的东西都将永远丢失**。而无损的进行转换和迁移几乎是不可能的。
我不想在一个锁住我工作的工具上投入任何精力。**我也拒绝把自己在任何专有工具上**。我是一个过来人,希望你们吸取我的经验。
我不想在一个锁住我工作的工具上投入任何精力。**我也拒绝把自己绑定在任何专有工具上**。我是一个过来人,希望你们吸取我的经验。
这就是我在文件名中保留时间戳、图像描述或标记的原因。文件名是永久性的除非我手动更改它们。当我把照片备份或复制到u盘或其他操作系统时它们不会丢失。每个人都能读懂。任何未来的系统都能够处理它们。
### 我的文件命名约定
### 我的文件命名规范
这里有一个我在 [2018 Linuxtage Graz 大会][44]上给出的[演讲][45],其中详细阐述了我的在本文中提到的想法和工作流程。
我所有的文件都与一个特定的日期或时间有关,根据所采用的[ISO 8601][7]规范,我采用的是**日期-标记**或**时间-标记**
带有日期戳和两个标签的示例文件名:`2014-05-09 42号项目的预算 -- 金融公司.csv`
带有时间戳(甚至包括可选秒)和两个标签的示例文件名:`2014-05-09T22.19.58 Susan展示她的新鞋子 -- 家庭衣物.jpg`
由于冒号不适用于Windows[文件系统NTFS][8]所以我必须使用已采用的ISO时间戳。因此我用点代替冒号以便将小时与分钟区别开来。
如果是**时间或日期持续时间**,我将两个日期或时间戳用两个负号分开:`2014-05-09—2014-05-13爵士音乐节Graz—folder 旅游音乐.pdf`。
如果是**时间或持续的一段时间**,我将两个日期或时间戳用两个负号分开:`2014-05-09—2014-05-13爵士音乐节Graz—folder 旅游音乐.pdf`。
文件名中的时间/日期戳的优点是,除非我手动更改它们,否则它们保持不变。当通过某些不处理这些元数据的软件进行处理时,包含在文件内容本身中的元数据(如[Exif][9])往往会丢失。此外,使用这样的日期/时间戳启动文件名可以确保文件按时间顺序显示,而不是按字母顺序显示。字母表是一种[完全人工的排序顺序][10],对于用户定位文件通常不太实用。
@ -89,7 +96,7 @@
### 我的工作流程
Tataaaa在你了解了我的文件夹结构和文件名约定之后下面是我当前的工作流程和工具我使用它们来满足我前面描述的需求。
请注意,**你必须知道你在做什么**。我这里的示例及文件夹路径和更多只**适用我的机器或我的设置的文件夹路径**你必须采用**相应的路径、文件名等**来满足你的需求!
请注意,**你必须知道你在做什么**。我这里的示例及文件夹路径和更多只**适用我的机器或我的设置的文件夹路径**你必须采用**相应的路径、文件名等**来满足你的需求!
#### 工作流程:将文件从SD卡移动到笔记本电脑旋转人像图像并重命名文件
@ -187,7 +194,7 @@ vk-filetags-interactive-removing-wrapper-with-gnome-terminal.sh
##### 在geeqie中使用文件标签
当我在geeqie文件浏览器中浏览图像文件时我选择要标记的文件(一到多个)并按`t`。然后,一个小窗口弹出,要求我提供一个或多个标签。在与` Return `确认后,这些标签被添加到文件名中。
当我在geeqie文件浏览器中浏览图像文件时我选择要标记的文件(一到多个)并按`t`。然后,一个小窗口弹出,要求我提供一个或多个标签。在 ` Return ` 命令确认后,这些标签被添加到文件名中。
删除标签也是一样:选择多个文件,按下`T`,输入要删除的标签,然后用`Return`确认。就是这样。几乎没有[更简单的方法来添加或删除标签到文件][29]。
@ -197,7 +204,7 @@ vk-filetags-interactive-removing-wrapper-with-gnome-terminal.sh
重命名一组大型文件可能是一个冗长乏味的过程。对于`2014-04-20T17.09.11_p1100386.jpg`这样的原始文件名,在文件名中添加描述的过程相当烦人。你将按`Ctrl-r`(重命名)在geeqie打开文件重命名对话框。默认情况下原始名称(没有文件扩展名的文件名称)被标记。因此,如果不希望删除/覆盖文件名(但要追加),则必须按下光标键` <right> `。然后,光标放在基本名称和扩展名之间。输入你的描述(不要忘记初始空格字符),并用`Return`进行确认。
##### 在geeqie使中用appendfilename
##### 在 geeqie 使中用 appendfilename
使用[appendfilename][30],我的过程得到了简化,可以获得将文本附加到文件名的最佳用户体验:当我在geeqie中按下` a ` (append)时,会弹出一个对话框窗口,询问文本。在`Return`确认后,输入的文本将放置在时间戳和可选标记之间。
@ -207,7 +214,7 @@ vk-filetags-interactive-removing-wrapper-with-gnome-terminal.sh
最好的部分是:当我想要将相同的文本添加到多个选定的文件中时也可以使用appendfilename。
##### 使用geeqie初始appendfilename
##### 使用 geeqie 初始 appendfilename
添加一个额外的编辑器到geeqie: ` Edit > Preferences > Configure editor…>New`。然后输入桌面文件定义:
@ -271,7 +278,7 @@ Categories=X-Geeqie;
```
当你将快捷方式`o`(见上文)与geeqie关联时你就能够打开与其关联的应用程序的视频文件(和其他文件)。
当你将快捷方式`o`(见上文)与geeqie关联时你就能够打开与其关联的应用程序的视频文件(和其他文件)。
##### 使用xdg-open打开电影文件(和其他文件)
@ -293,11 +300,11 @@ Categories=X-Geeqie;
因此,我(再次)编写了一个Python脚本它为我完成了这项工作:[move2archive][33](简而言之:` m2a `需要一个或多个文件作为命令行参数。然后,出现一个对话框,我可以在其中输入一个可选文件夹名。当我不输入任何东西,但按`Return`,文件被移动到相应年份的文件夹。当我输入一个类似`business marathon after show - party`的文件夹名称时,第一个图像文件的日期戳被附加到该文件夹(`$HOME/archive/events_memories/2014/2014-05-08 business marathon after show - party`),得到的文件夹是(`$HOME/archive/events_memories/2014/2014-05-08 Business-Marathon After-Show-Party`),并移动文件。
再一次:我在geeqie中选择一个或多个文件,按`m`(移动),或者只按`Return`(没有特殊的子文件夹),或者输入一个描述性文本,这是要创建的子文件夹的名称(可选不带日期戳)。
我在geeqie中再一次选择一个或多个文件,按`m`(移动),或者只按`Return`(没有特殊的子文件夹),或者输入一个描述性文本,这是要创建的子文件夹的名称(可选不带日期戳)。
**没有一个图像管理工具像我的geeqie一样通过快捷键快速且有趣的使用 appendfilename和move2archive完成工作。**
##### 在geeqie里初始化m2a的相关设置
##### 在 geeqie 里初始化 m2a 的相关设置
同样向geeqie添加`m2a`是一个手动步骤:“编辑>首选项>配置编辑器……”然后创建一个带有“New”的附加条目。在这里你可以定义一个新的桌面文件如下所示:
@ -526,7 +533,9 @@ Wow, 这是一篇很长的博客文章。难怪你可能已经忘了之前的概
### 最后
所以,这是一个详细描述我关于照片和电影的工作流程的叙述。你可能已经发现了我可能感兴趣的其他东西。所以请不要犹豫,请使用下面的链接留下评论或电子邮件。
如果我的工作流程对你也适用,我也希望能得到你的反馈。并且:如果你已经发布了你的工作流程,或者找到了其他人工作流程的描述,也请留下评论!
及时行乐,莫让错误的工具或低效的方法浪费了我们的人生!
### 其他工具
@ -535,23 +544,6 @@ Wow, 这是一篇很长的博客文章。难怪你可能已经忘了之前的概
当你觉得你以上文中所叙述的符合你的需求时,请根据相关的建议来选择对应的工具。
### 邮件回复
> Date: Sat, 26 Aug 2017 22:05:09 +0200
> 你好卡尔,
我喜欢你的文章喜欢和memacs一起工作当然还有orgmode但是我对python不是很熟悉……在你的博客文章“管理数码照片”你写了关于打开视频与[Geeqie][26]。是的,但是我在浏览器里看不到任何视频缩略图。你有什么建议吗?
> 谢谢你,托马斯
你好托马斯,
谢谢你的美言。当有人发现我的工作对他/她的生活有用时,我总是感觉很棒。
不幸的是,大多数时候,我从未听到过这些。
是的我有时使用Geeqie来可视化文件夹这些文件夹不仅包含图像文件还包含电影文件。在这些情况下我没有看到任何视频的缩略图。你说得对有很多文件浏览器可以显示视频的预览图像。
坦白地说我从来没有想过视频缩略图我也不怀念它们。在我的首选项和搜索引擎上做了一个快速的研究并没有发现在Geeqie中启用视频预览的相关方法。所以这里要说声抱歉。
@ -609,3 +601,5 @@ via: http://karl-voit.at/managing-digital-photographs/
[41]:https://docs.kde.org/development/en/extragear-graphics/digikam/using-kapp.html#idp7659904
[42]:https://en.wikipedia.org/wiki/Symbolic_link
[43]:http://karl-voit.at/2017/02/19/gthumb
[44]:https://glt18.linuxtage.at
[45]:https://glt18-programm.linuxtage.at/events/321.html

View File

@ -1,257 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (heguangzhi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Different Ways to Configure Static IP Address in RHEL 8)
[#]: via: (https://www.linuxtechi.com/configure-static-ip-address-rhel8/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
在 RHEL8 配置静态 IP 地址的不同方法
======
**Linux服务器** 上工作时,在网卡/以太网卡上分配静态 IP 地址是每个 Linux 工程师的常见任务之一。如果一个人在Linux 服务器上正确配置了静态地址,那么他/她就可以通过网络远程访问它。在本文中,我们将演示在 RHEL 8 服务器网卡上配置静态 IP 地址的不同方法。
[![Configure-Static-IP-RHEL8][1]][2]
以下是在网卡上配置静态IP的方法
* nmcli (命令行工具)
* 网络脚本文件(ifcfg-*)
* nmtui  (基于文本的用户界面)
### 使用 nmcli 命令行工具配置静态 IP 地址
每当我们安装 RHEL 8 服务器时,就会自动安装命令行工具 **nmcli** ,网络管理器使用 nmcli并允许我们在以太网卡上配置静态 IP 地址。
运行下面的 ip addr 命令,列出 RHEL 8 服务器上的以太网卡
```
[root@linuxtechi ~]# ip addr
```
![ip-addr-command-rhel8][1]
正如我们在上面的命令输出中看到的,我们有两个网卡 enp0s3 &amp; ampenp0s8。当前分配给网卡的 IP 地址是通过 DHCP 服务器获得的 。
假设我们希望在第一个网卡 (enp0s3) 上分配静态IP地址具体内容如下:
* IP address = 192.168.1.4
* Netmask = 255.255.255.0
* Gateway= 192.168.1.1
* DNS = 8.8.8.8
依次运行以下 nmcli 命令来配置静态 IP
使用“**nmcli connection **”命令列出当前活动的以太网卡,
```
[root@linuxtechi ~]# nmcli connection
NAME UUID TYPE DEVICE
enp0s3 7c1b8444-cb65-440d-9bf6-ea0ad5e60bae ethernet enp0s3
virbr0 3020c41f-6b21-4d80-a1a6-7c1bd5867e6c bridge virbr0
[root@linuxtechi ~]#
```
在 nmcli 命令下使用,在 enp0s3 上分配静态 IP。
**句法:**
nmcli connection modify &lt;interface_name&gt; ipv4.address  &lt;ip/prefix&gt;
**注意:** 简化语句,在 nmcli 命令中,我们通常用 “con” 关键字替换连接,并用 “mod”关 键字进行修改。
将 ipv4 (192.168.1.4) 分配给 enp0s3 网卡上。
```
[root@linuxtechi ~]# nmcli con mod enp0s3 ipv4.addresses 192.168.1.4/24
[root@linuxtechi ~]#
```
使用下面的 nmcli 命令设置网关,
```
[root@linuxtechi ~]# nmcli con mod enp0s3 ipv4.gateway 192.168.1.1
[root@linuxtechi ~]#
```
设置手动配置(从dhcp到static)
```
[root@linuxtechi ~]# nmcli con mod enp0s3 ipv4.method manual
[root@linuxtechi ~]#
```
设置 DNS 值为 “8.8.8.8”,
```
[root@linuxtechi ~]# nmcli con mod enp0s3 ipv4.dns "8.8.8.8"
[root@linuxtechi ~]#
```
要保存上述更改并重新加载,请执行 nmcli 如下命令,
```
[root@linuxtechi ~]# nmcli con up enp0s3
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/4)
[root@linuxtechi ~]#
```
以上命令显示网卡 enp0s3 已成功配置。 我们使用 nmcli 命令进行了那些更改都将永久保存在文件“etc/sysconfig/network-scripts/ifcfg-enp0s3” 里。
```
[root@linuxtechi ~]# cat /etc/sysconfig/network-scripts/ifcfg-enp0s3
```
![ifcfg-enp0s3-file-rhel8][1]
要确认 IP 地址是否分配给了 enp0s3 网卡了,请使用以下 IP 命令查看,
```
[root@linuxtechi ~]#ip addr show enp0s3
```
### 使用网络脚本文件(ifcfg-)手动配置静态 IP 地址
我们可以使用配置以太网卡的网络脚本或“ifcfg-”文件来配置以太网卡的静态 IP 地址。假设我们想在第二个以太网卡 “enp0s8” 上分配静态IP 地址:
* IP= 192.168.1.91
* Netmask / Prefix = 24
* Gateway=192.168.1.1
* DNS1=4.2.2.2
转到目录 "/etc/sysconfig/network-scripts ",查找文件 'ifcfg- enp0s8',如果它不存在,则使用以下内容创建它,
```
[root@linuxtechi ~]# cd /etc/sysconfig/network-scripts/
[root@linuxtechi network-scripts]# vi ifcfg-enp0s8
TYPE="Ethernet"
DEVICE="enp0s8"
BOOTPROTO="static"
ONBOOT="yes"
NAME="enp0s8"
IPADDR="192.168.1.91"
PREFIX="24"
GATEWAY="192.168.1.1"
DNS1="4.2.2.2"
```
保存并退出文件,然后重新启动网络管理器服务以使上述更改生效,
```
[root@linuxtechi network-scripts]# systemctl restart NetworkManager
[root@linuxtechi network-scripts]#
```
现在使用下面的 IP 命令来验证 IP 地址是否分配给网卡,
```
[root@linuxtechi ~]# ip add show enp0s8
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 08:00:27:7c:bb:cb brd ff:ff:ff:ff:ff:ff
inet 192.168.1.91/24 brd 192.168.1.255 scope global noprefixroute enp0s8
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe7c:bbcb/64 scope link
valid_lft forever preferred_lft forever
[root@linuxtechi ~]#
```
以上输出内容确认静态 IP 地址已在网卡“enp0s8”上成功配置了
### 使用 “nmtui” 实用程序配置静态 IP 地址
nmtui 是一个基于文本用户界面的,用于控制网络的管理器,当我们执行 nmtui 时它将打开一个基于文本的用户界面通过它我们可以添加、修改和删除连接。除此之外nmtui 还可以用来设置系统的主机名。
假设我们希望通过以下细节将静态 IP 地址分配给网卡 enp0s3
* IP address = 10.20.0.72
* Prefix = 24
* Gateway= 10.20.0.1
* DNS1=4.2.2.2
运行 nmtui 并按照屏幕说明操作,示例如下所示
```
[root@linuxtechi ~]# nmtui
```
[![nmtui-rhel8][1]][3]
选择第一个选项 “**Edit a connection**”然后选择接口为“enp0s3”
[![Choose-interface-nmtui-rhel8][1]][4]
选择编辑,然后指定 IP 地址、前缀、网关和域名系统服务器IP
[![set-ip-nmtui-rhel8][1]][5]
选择确定,然后点击回车。在下一个窗口中,选择 “**Activate a connection**”
[![Activate-option-nmtui-rhel8][1]][6]
选择 **enp0s3**,选择 **Deactivate** &amp; 点击回车
[![Deactivate-interface-nmtui-rhel8][1]][7]
现在选择 **Activate** &amp;点击回车,
[![Activate-interface-nmtui-rhel8][1]][8]
选择“上一步”,然后选择“退出”,
[![Quit-Option-nmtui-rhel8][1]][9]
使用下面的 IP 命令验证 IP 地址是否已分配给接口 enp0s3
```
[root@linuxtechi ~]# ip add show enp0s3
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 08:00:27:53:39:4d brd ff:ff:ff:ff:ff:ff
inet 10.20.0.72/24 brd 10.20.0.255 scope global noprefixroute enp0s3
valid_lft forever preferred_lft forever
inet6 fe80::421d:5abf:58bd:c47e/64 scope link noprefixroute
valid_lft forever preferred_lft forever
[root@linuxtechi ~]#
```
以上输出内容显示我们已经使用 nmtui 实用程序成功地将静态 IP 地址分配给接口 enp0s3。
以上就是本教程的全部内容,我们已经介绍了在 RHEL 8 系统上为以太网卡配置 ipv4 地址的三种不同方法。请不要犹豫,在下面的评论部分分享反馈和评论。
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/configure-static-ip-address-rhel8/
Author: [Pradeep Kumar][a]
Topic selection: [lujun9972][b]
Translator: [heguangzhi](https://github.com/heguangzhi)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Configure-Static-IP-RHEL8.jpg
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/09/nmtui-rhel8.jpg
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Choose-interface-nmtui-rhel8.jpg
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/09/set-ip-nmtui-rhel8.jpg
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Activate-option-nmtui-rhel8.jpg
[7]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Deactivate-interface-nmtui-rhel8.jpg
[8]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Activate-interface-nmtui-rhel8.jpg
[9]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Quit-Option-nmtui-rhel8.jpg

View File

@ -0,0 +1,136 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Introduction to the Linux chgrp and newgrp commands)
[#]: via: (https://opensource.com/article/19/9/linux-chgrp-and-newgrp-commands)
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)
An introduction to the Linux chgrp and newgrp commands
======
The chgrp and newgrp commands help you manage files that need to maintain group ownership.
![Penguins walking on the beach ][1]
In a recent article, I introduced the [chown][2] command, which is used to modify file ownership on a system. Recall that ownership is the combination of the user and group assigned to an object. The **chgrp** and **newgrp** commands provide assistance in managing files that need to maintain group ownership.
### Using chgrp
**chgrp** simply changes the group ownership of a file. It is the same as the **chown :<group>** command. You can use:
```
$ chown :alan mynotes
```
Or:
```
$ chgrp alan mynotes
```
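Either way, a quick **ls -l** confirms the result; a sketch, assuming mynotes is owned by user alan and previously belonged to a different group:
```
$ ls -l mynotes
-rw-rw-r-- 1 alan alan 0 Aug 5 15:30 mynotes
```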
#### Recursion
Some of chgrp's other arguments can be useful both on the command line and in scripts. Just like many other Linux commands, chgrp has a recursive argument, **-R**. You will need it to operate recursively on a folder and its contents, as shown below. I added the **-v** (**verbose**) argument so chgrp tells me what it is doing:
```
$ ls -l . conf
.:
drwxrwxr-x 2 alan alan 4096 Aug  5 15:33 conf
conf:
-rw-rw-r-- 1 alan alan 0 Aug  5 15:33 conf.xml
# chgrp -vR delta conf
changed group of 'conf/conf.xml' from alan to delta
changed group of 'conf' from alan to delta
```
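If you ever need finer-grained recursion than **-R** provides, for example changing only regular files while leaving the directories untouched, a find-based alternative works (a hypothetical sketch, not from the original article):
```
# change the group of regular files under conf/, but not of the directories
$ find conf -type f -exec chgrp -v delta {} +
```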
#### Reference
The reference file option (**\--reference=RFILE**) can be used to change the group of files to match a certain configuration, or when you don't know the group, as when running a script. You duplicate the group of another file (**RFILE**). For example, to undo the changes made above (note that a dot [**.**] refers to the current working directory):
```
$ chgrp -vR --reference=. conf
```
#### Report changes
Most commands have arguments for controlling their output. The most common is **-v**, which enables verbose mode, and the chgrp command has one. It also has a **-c** (**\--changes**) argument, which instructs chgrp to report only when a change is made. chgrp still reports other things, such as operations that are not permitted.
The argument **-f** (**\--silent**, **\--quiet**) is used to suppress most error messages. I will use this argument together with **-c** in the next section so that only actual changes are shown.
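To illustrate the difference, consider a file whose group already matches the target (a sketch; the exact wording of the verbose message may vary by coreutils version):
```
$ chgrp -v alan mynotes
group of 'mynotes' retained as alan
$ chgrp -c alan mynotes    # no output, since nothing changed
```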
#### Preserve the root directory
The root of the Linux filesystem (**/**) deserves a high level of respect. If a command makes a mistake there, the consequences can be terrible and leave a system entirely unusable, especially when running a command that recursively modifies or even deletes files. The chgrp command has an argument that can be used to protect and preserve the root directory: **\--preserve-root**. If this argument is used with recursion on the root directory, nothing happens; instead, a message appears:
```
[root@localhost /]# chgrp -cfR --preserve-root alan /
chgrp: it is dangerous to operate recursively on '/'
chgrp: use --no-preserve-root to override this failsafe
```
The option has no effect when it is not combined with recursion. However, if the command is run by the root user, the group of **/** itself will be changed, but not that of other files or directories within it:
```
[alan@localhost /]$ chgrp -c --preserve-root alan /
chgrp: changing group of '/': Operation not permitted
[root@localhost /]# chgrp -c --preserve-root alan /
changed group of '/' from root to alan
```
Surprisingly, this does not seem to be the default; the option **\--no-preserve-root** is the default. If you run the command above without a "preserve" option, it defaults to "no preserve" mode and may change the group of files that should not be changed:
```
[alan@localhost /]$ chgrp -cfR alan /
changed group of '/dev/pts/0' from tty to alan
changed group of '/dev/tty2' from tty to alan
changed group of '/var/spool/mail/alan' from mail to alan
```
### About newgrp
The **newgrp** command allows a user to override the current primary group. newgrp can be handy when you are working in a directory where all files must have the same group ownership. Suppose you have a directory called _share_ on your intranet server where different teams store marketing photos. The group is "share". As different users place files into the directory, the files' group ownership can become mixed up. Whenever new files are added, you can run **chgrp** to correct any mix-ups by setting the group back to **share**:
```
$ cd share
$ ls -l
-rw-r--r--. 1 alan share 0 Aug  7 15:35 pic13
-rw-r--r--. 1 alan alan 0 Aug  7 15:35 pic1
-rw-r--r--. 1 susan delta 0 Aug  7 15:35 pic2
-rw-r--r--. 1 james gamma 0 Aug  7 15:35 pic3
-rw-rw-r--. 1 bill contract  0 Aug  7 15:36 pic4
```
I introduced the **setgid** mode in my article on the [**chmod** command][3]. It would be one way to solve this problem. But suppose the setgid bit was not set for some reason. The newgrp command is useful here. Before any users put files into the _share_ directory, they can run the command **newgrp share**. This switches their primary group to "share", so all files they put into the directory automatically have the group "share" rather than the user's own primary group. Once they are finished, users can switch back to their regular primary group with, for example:
```
$ newgrp alan
```
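To make the effect concrete, here is an illustrative session (it assumes the user is a member of the share group; note that newgrp starts a new shell, so exiting it returns you to your previous primary group):
```
$ id -gn                      # current primary group
alan
$ newgrp share                # subshell whose primary group is 'share'
$ id -gn
share
$ touch pic5 && ls -l pic5    # files created now receive the 'share' group
-rw-rw-r--. 1 alan share 0 Aug 7 16:02 pic5
```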
### Summary
It is important to understand how to manage users, groups, and permissions. It is also good to know a few alternative ways to work around problems you might encounter, since not all environments are set up the same way.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/linux-chgrp-and-newgrp-commands
Author: [Alan Formy-Duval][a]
Topic selection: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/community-penguins-osdc-lead.png?itok=BmqsAF4A (Penguins walking on the beach )
[2]: https://opensource.com/article/19/8/linux-chown-command
[3]: https://opensource.com/article/19/8/linux-chmod-command