mirror of https://github.com/LCTT/TranslateProject.git(synced 2025-01-25 23:11:02 +08:00,commit d0366a9b73)
如何安全地生成随机数
======

### 使用 urandom

使用 [urandom][1]!使用 [urandom][2]!使用 [urandom][3]!

使用 [urandom][4]!使用 [urandom][5]!使用 [urandom][6]!

### 但对于密码学密钥呢?

仍然使用 [urandom][6]。

### 为什么不是 SecureRandom、OpenSSL、haveged 或者 C 语言实现呢?

这些是用户空间的 CSPRNG(密码学安全的伪随机数生成器)。你应该用内核的 CSPRNG,因为:

* 内核可以访问原始设备熵。
* 它可以确保不在应用程序之间共享相同的状态。
* 一个好的内核 CSPRNG,比如 FreeBSD 中的,还可以保证在完成播种之前不输出随机数据。

研究过去十年中的随机数失败案例,你会看到一连串用户空间随机数的失败案例。[Debian 的 OpenSSH 崩溃][7]?用户空间随机数!安卓的比特币钱包[重复 ECDSA 随机 k 值][8]?用户空间随机数!可预测洗牌的赌博网站?用户空间随机数!

用户空间的生成器几乎总是依赖于内核的生成器。即使它们不这样做,整个系统的安全性也依赖于内核的生成器。**但用户空间的 CSPRNG 不会增加防御深度;相反,它会产生两个单点故障。**

### 手册页不是说要使用 /dev/random 嘛?

这个问题稍后详述,先保留你的意见。你应该忽略掉手册页。不要使用 `/dev/random`。`/dev/random` 和 `/dev/urandom` 之间的区别是 Unix 的设计缺陷。手册页不想承认这一点,因此它制造了一个并不存在的安全顾虑。把 `random(4)` 中的密码学建议当作传说,继续你的生活吧。

### 但是如果我需要的是真随机值,而非伪随机值呢?

urandom 和 `/dev/random` 提供的是同一类型的随机数。与流行的观念相反,`/dev/random` 并不提供“真正的随机”。而且从密码学上来说,你通常也不需要“真正的随机”。

urandom 和 `/dev/random` 都基于一个简单的想法。它们的设计与流密码的设计密切相关:一个小的秘密被延伸为一个由不可预测值组成的无限流。这里的秘密是“熵”,而流是“输出”。
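为了直观说明“小秘密延伸成输出流”这一思想,下面是一个极简的 Python 示意:用 SHA-256 以计数器模式把一个种子(熵)延伸成任意长度的确定性输出流。这只是帮助理解的玩具代码,并非内核 CSPRNG 的真实实现,切勿用于生产。

```python
import hashlib

def toy_stream(seed: bytes, nbytes: int) -> bytes:
    """示意:把一个小的秘密(种子/熵)延伸成确定性的输出流。
    仅用于演示 CSPRNG 与流密码的相似性,不是真实实现。"""
    out = bytearray()
    counter = 0
    while len(out) < nbytes:
        # 类似流密码:对 种子+计数器 取哈希,得到下一段“密钥流”
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:nbytes])

# 相同的种子产生相同的流;不知道种子的人无法预测输出
print(toy_stream(b"entropy", 16).hex())
```

内核 CSPRNG 的关键正在于:种子来自真实的设备熵并会定期补充,而输出流对不知道种子的人而言不可预测。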
只有在 Linux 上,`/dev/random` 和 urandom 才仍然有意义上的不同。Linux 内核的 CSPRNG 会定期重新设置密钥(通过收集更多的熵)。但是 `/dev/random` 还会试图跟踪内核熵池中剩余的熵,并且在它认为剩余熵不足时偶尔罢工。这种设计和听起来的一样蠢;这就好比根据“密钥流”中还剩下多少“密钥”来设计 AES-CTR 一样。

如果你使用 `/dev/random` 而非 urandom,那么当 Linux 对自己的 RNG(随机数生成器)如何工作感到困惑时,你的程序将不可预测地(或者如果你是攻击者,非常可预测地)挂起。使用 `/dev/random` 会使你的程序不太稳定,但这不会让你在密码学上更安全。

### 这是个缺陷,对吗?

不是,但存在一个你可能想要了解的 Linux 内核 bug,即使这并不能改变你应该使用哪一个 RNG。

在 Linux 上,如果你的软件在系统引导时立即运行,或者操作系统刚刚安装完毕,那么你的代码可能会与 RNG 发生竞争。这很糟糕,因为如果你赢了竞争,那么你可能会在一段时间内从 urandom 获得可预测的输出。这是 Linux 中的一个 bug,如果你正在为 Linux 嵌入式设备构建平台级代码,那你需要了解它。

在 Linux 上,这确实是 urandom(而不是 `/dev/random`)的问题。这也是 [Linux 内核中的 bug][9]。但它也很容易在用户空间中修复:在引导时,显式地为 urandom 提供种子。长期以来,大多数 Linux 发行版都是这么做的。但**不要**因此切换到别的 CSPRNG。

### 在其它操作系统上呢?

FreeBSD 和 OS X 消除了 urandom 和 `/dev/random` 之间的区别;这两个设备的行为完全相同。不幸的是,它们的手册页在解释原因方面做得很糟糕,并延续了 Linux 上 urandom 可怕的神话。

无论你使用 `/dev/random` 还是 urandom,FreeBSD 的内核加密 RNG 都不会罢工。除非它尚未被播种,在这种情况下,这两者都会阻塞。与 Linux 不同,这种行为是有道理的,Linux 也应该采用它。但是,如果你是一名应用程序开发人员,这对你几乎没有什么影响:无论是 Linux、FreeBSD 还是 iOS,使用 urandom 吧。

### 太长了,懒得看

直接使用 urandom 吧。
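在应用层,“直接使用 urandom”通常只需要一两行代码。下面以 Python 为例(`os.urandom` 和 `secrets` 模块底层使用的就是操作系统的 CSPRNG;其他语言也都有等价接口):

```python
import os
import secrets

# os.urandom 直接从操作系统的 CSPRNG 读取字节(Linux 上即 urandom)
key = os.urandom(32)           # 32 字节,可用作 256 位密钥
print(len(key))

# Python 3.6+ 的 secrets 模块是更高层的封装,同样基于系统 CSPRNG
token = secrets.token_hex(16)  # 32 个十六进制字符,适合做会话令牌
print(token)
```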
### 结语

[ruby-trunk Feature #9569][10]

> 现在,SecureRandom.random_bytes 会先尝试检测可用的 OpenSSL,然后才尝试检测 `/dev/urandom`。我认为顺序应该反过来。在这两种情况下,你需要的都只是随机字节,所以 SecureRandom 可以跳过中间人(以及第二个故障点),如果可用的话直接与 `/dev/urandom` 交互。

总结:

> `/dev/urandom` 不适合用来直接生成会话密钥和频繁生成其他应用程序级随机数据。
>
> GNU/Linux 上的 random(4) 手册页如是说……

感谢 Matthew Green、Nate Lawson、Sean Devlin、Coda Hale 和 Alex Balducci 阅读了本文草稿。公正地说明一下:Matthew 只是大体上同意我的观点。

--------------------------------------------------------------------------------

via: https://sockpuppet.org/blog/2014/02/25/safely-generate-random-numbers/

作者:[Thomas & Erin Ptacek][a]
译者:[kimii](https://github.com/kimii)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://sockpuppet.org/blog
[1]:http://blog.cr.yp.to/20140205-entropy.html
[2]:http://cr.yp.to/talks/2011.09.28/slides.pdf
[3]:http://golang.org/src/pkg/crypto/rand/rand_unix.go
[4]:http://security.stackexchange.com/questions/3936/is-a-rand-from-dev-urandom-secure-for-a-login-key
[5]:http://stackoverflow.com/a/5639631
[6]:https://twitter.com/bramcohen/status/206146075487240194
[7]:http://research.swtch.com/openssl
[8]:http://arstechnica.com/security/2013/08/google-confirms-critical-android-crypto-flaw-used-in-5700-bitcoin-heist/
[9]:https://factorable.net/weakkeys12.extended.pdf
[10]:https://bugs.ruby-lang.org/issues/9569
Tlog:录制/播放终端 IO 和会话的工具
======

Tlog 是 Linux 中的终端 I/O 录制和回放软件包。它用于实现一个集中式的用户会话录制。它将所有经过的消息录制为 JSON 格式。录制为 JSON 格式的主要目的是,便于将数据传送到 ElasticSearch 之类的存储服务,从而可以搜索、查询以及回放。同时,JSON 格式保留了所有经过的数据及其时序。

Tlog 包含三个工具,分别是 `tlog-rec`、`tlog-rec-session` 和 `tlog-play`。

* `tlog-rec` 工具一般用于录制终端、程序或 shell 的输入或输出。
* `tlog-rec-session` 工具用于录制整个终端会话的 I/O,并保护录制的用户。
* `tlog-play` 工具用于回放录制。

在本文中,我将解释如何在 CentOS 7.4 服务器上安装 Tlog。

### 安装

在安装之前,我们需要确保我们的系统满足编译和安装程序的所有软件要求。第一步,使用以下命令更新系统仓库和软件包:

```
# yum update
```

接着安装此软件所需的依赖项。我已经使用这些命令安装了所有依赖包:

```
# yum install wget gcc
# yum install systemd-devel json-c-devel libcurl-devel m4
```

完成这些安装后,我们可以下载该工具的[源码包][1]并将其解压到服务器上:

```
# wget https://github.com/Scribery/tlog/releases/download/v3/tlog-3.tar.gz
# tar -xvf tlog-3.tar.gz
# cd tlog-3
```

现在,你可以使用我们通常的配置和编译方法开始构建此工具:

```
# ./configure --prefix=/usr --sysconfdir=/etc && make
# make install
# ldconfig
```

最后,你需要运行 `ldconfig`。它会对命令行中指定的目录、`/etc/ld.so.conf` 文件,以及信任的目录(`/lib` 和 `/usr/lib`)中最新的共享库创建必要的链接和缓存。

### Tlog 工作流程图

![Tlog working process][2]

首先,用户通过 PAM 进行身份验证登录。名称服务交换器(NSS)提供 `tlog` 作为该用户的 shell 的信息。这会初始化 tlog 部分,它从环境变量/配置文件收集关于实际 shell 的信息,并在一个 PTY 中启动实际的 shell。然后它通过 syslog 或 sd-journal 开始录制在终端和 PTY 之间传递的所有内容。

### 用法

你可以使用 `tlog-rec` 录制一个会话并使用 `tlog-play` 回放它,以测试新安装的 tlog 是否能够正常录制和回放会话。

#### 录制到文件中

要将会话录制到文件中,请在命令行中执行 `tlog-rec`,如下所示:

```
tlog-rec --writer=file --file-path=tlog.log
```

该命令会将我们的终端会话录制到名为 `tlog.log` 的文件中,并将其保存在命令中指定的路径下。

#### 从文件中回放

你可以在录制过程中或录制后使用 `tlog-play` 命令回放录制的会话。

```
tlog-play --reader=file --file-path=tlog.log
```

该命令从指定的路径读取先前录制的文件 `tlog.log`。
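由于录制内容是逐条的 JSON 消息,除了用 `tlog-play` 回放,也可以用任何语言直接处理。下面是一个 Python 示意,从这类按行存储的 JSON 消息中拼接终端输出。注意:其中的字段名 `out_txt` 是基于 tlog 文档的假设,实际字段请以 tlog 手册中的消息格式为准。

```python
import json

def replay_output(lines):
    """从 tlog 风格的 JSON 消息(每行一条)中拼接终端输出。
    字段名 out_txt 是基于 tlog 文档的假设,仅作演示。"""
    chunks = []
    for line in lines:
        msg = json.loads(line)
        chunks.append(msg.get("out_txt", ""))
    return "".join(chunks)

# 两条假想的录制消息,仅为演示格式
sample = [
    '{"ver": "1", "out_txt": "hello "}',
    '{"ver": "1", "out_txt": "world\\r\\n"}',
]
print(replay_output(sample))
```

这也正是“录制为 JSON”的好处:送入 ElasticSearch 之后,同样的数据既能全文检索,也能按时序重放。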
### 总结

Tlog 是一个开源软件包,可用于实现集中式的用户会话录制。它主要是作为一个更大的用户会话录制解决方案的一部分来使用,但它被设计为独立且可重用的。该工具可以帮助录制用户所做的一切,并将其存储在服务器的某个位置,以备将来参考。你可以从这个[文档][3]中获得关于这个软件包使用的更多细节。希望这篇文章对你有用,欢迎发表你的宝贵建议和意见。

### 关于 Saheetha Shameer(作者)

我是一名高级系统管理员。我学东西很快,并会适度跟进行业中当前和新出现的趋势。我的爱好包括听音乐、玩策略游戏、阅读和园艺。我对尝试各种美食也有很高的热情 :-)

--------------------------------------------------------------------------------

via: https://linoxide.com/linux-how-to/tlog-tool-record-play-terminal-io-session

作者:[Saheetha Shameer][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
使用 Ansible 在树莓派上构建一个基于 Linux 的高性能计算系统
============================================================

> 使用低成本的硬件和开源软件设计一个高性能计算集群。

![Building a Linux-based HPC system on the Raspberry Pi with Ansible](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_development_programming.png?itok=4OM29-82 "Building a Linux-based HPC system on the Raspberry Pi with Ansible")

图片来源:opensource.com

在我的 [之前发表在 Opensource.com 上的文章中][14],我介绍了 [OpenHPC][15] 项目,它的目标是致力于加速高性能计算(HPC)的创新。这篇文章将更深入地介绍如何使用 OpenHPC 的特性来构建一个小型的 HPC 系统。将它称为 _HPC 系统_ 可能有点“扯虎皮拉大旗”的意思,因此,更确切的说法应该是,它是一个基于 OpenHPC 项目发布的 [集群构建方法][16] 的系统。

这个集群由两台树莓派 3 系统作为计算节点,以及一台虚拟机作为主节点,结构示意如下:

![Map of HPC cluster](https://opensource.com/sites/default/files/u128651/hpc_with_pi-1.png "Map of HPC cluster")

下图是真实的设备工作照:

![HPC hardware setup](https://opensource.com/sites/default/files/u128651/hpc_with_pi-2.jpg "HPC hardware setup")

要把我的系统配置成像上图这样的 HPC 系统,我是按照 OpenHPC 的集群构建方法的 [CentOS 7.4/aarch64 + Warewulf + Slurm 安装指南][17](PDF)的一些步骤来做的。这个方法包括了使用 [Warewulf][18] 的配置说明;因为我的那三个系统是手动安装的,我跳过了 Warewulf 部分,并为对应的步骤创建了 [Ansible 剧本][19]。

在 [Ansible][26] 剧本中设置完成我的集群之后,我就可以向资源管理器提交作业了。在我的这个案例中,[Slurm][27] 充当了资源管理器,它是集群中的一个实例,由它来决定我的作业什么时候在哪里运行。在集群上启动一个简单作业的方式之一:

```
[ohpc@centos01 ~]$ srun hostname
calvin
Mon 11 Dec 16:42:41 UTC 2017
```

为示范资源管理器的基本功能,简单的串行命令行工具就可以了;但是,要配置一个类似 HPC 的系统来做各种工作就有点无聊了。

一个更有趣的应用是在这个集群的所有可用 CPU 上运行一个 [Open MPI][20] 的并行作业。我使用了一个基于 [康威生命游戏][21] 的应用,它曾用于一个名为“使用 Red Hat 企业版 Linux 跨多种架构运行康威生命游戏”的 [视频][22]。与以前实现的基于 MPI 的康威生命游戏版本不同,在我的集群中现在运行的这个版本会为每个参与的主机计算的单元格着上不同的颜色。下面的脚本以图形输出的方式来交互式启动应用:

```
$ cat life.mpi
$ srun -n 8 --x11 life.mpi
```

为了演示,这个作业有一个图形界面,它展示了当前计算的结果:

![](https://opensource.com/sites/default/files/u128651/hpc_with_pi-3.png)

红色单元格是由其中一个计算节点来计算的,而绿色单元格是由另外一个计算节点来计算的。我也可以让康威生命游戏程序为所使用的每个 CPU 核心(这里的每个计算节点有四个核心)生成不同的颜色,这样它的输出如下:
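作为参考,康威生命游戏的单步演化规则本身很简单。下面用 Python 给出一个示意实现(与文中基于 MPI 的版本无关,仅帮助理解各个节点在并行计算什么):

```python
def life_step(grid):
    """对一个二维 0/1 网格做一步康威生命游戏演化(边界视为死细胞)。"""
    rows, cols = len(grid), len(grid[0])
    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # 统计 8 个邻居中的活细胞数
            n = sum(grid[rr][cc]
                    for rr in range(max(0, r - 1), min(rows, r + 2))
                    for cc in range(max(0, c - 1), min(cols, c + 2))
                    if (rr, cc) != (r, c))
            # 规则:活细胞有 2 或 3 个活邻居则存活;死细胞恰有 3 个活邻居则复活
            nxt[r][c] = 1 if (grid[r][c] and n in (2, 3)) or (not grid[r][c] and n == 3) else 0
    return nxt

# 水平的“闪烁器”经一步演化变为垂直
blinker = [[0, 0, 0],
           [1, 1, 1],
           [0, 0, 0]]
print(life_step(blinker))
```

在 MPI 版本中,网格被划分给各个进程,每个进程只演化自己负责的那一块,这也正是不同主机/核心的单元格可以着不同颜色的原因。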
![](https://opensource.com/sites/default/files/u128651/hpc_with_pi-4.png)

感谢 OpenHPC 提供的软件包和安装方法,因为它们,我可以配置好一个由两个计算节点和一个主节点组成的 HPC 式的系统。我可以向资源管理器提交作业,然后使用 OpenHPC 提供的软件在我的树莓派的 CPU 上启动 MPI 应用程序。

* * *

### 关于作者

[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/gotchi-square.png?itok=PJKu7LHn)][23] Adrian Reber —— Adrian 是 Red Hat 的高级软件工程师,早在 2010 年就开始做进程迁移。他最初在高性能计算环境中迁移进程,为此迁移了许多进程并获得了博士学位;随后他加入 Red Hat,开始转向容器迁移。他偶尔仍会迁移单个进程,并且至今仍对高性能计算非常感兴趣。[关于我的更多信息点这里][12]

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/1/how-build-hpc-system-raspberry-pi-and-openhpc

作者:[Adrian Reber][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
如何搭建“我的世界”服务器
======

![](https://thishosting.rocks/wp-content/uploads/2018/01/how-to-make-minecraft-server.jpg)

我们将通过一个一步步的、对新手友好的教程来向你展示如何搭建一个“我的世界(Minecraft)”服务器。这将会是一个长期的多人游戏服务器,你可以与来自世界各地的朋友们一起玩,而不用在同一个局域网下。

### 如何搭建一个“我的世界”服务器 - 快速指南

如果你很急,想直达重点的话,这是我们的目录。不过我们推荐通读整篇文章。

* [学点东西][1](可选)
* [再学点东西][2](可选)
* [需求][3](必读)
* [安装并运行“我的世界”服务器][4](必读)
* [在你登出 VPS 后继续运行服务端][5](可选)
* [让服务端随系统启动][6](可选)
* [配置你的“我的世界”服务器][7](必读)
* [常见问题][8](可选)

在你开始行动之前,要先了解一些事情:

#### 为什么你**不**应该使用专门的“我的世界”服务器提供商

既然你正在阅读这篇文章,你肯定对搭建自己的“我的世界”服务器感兴趣。不应该使用专门的“我的世界”服务器提供商的原因有很多,以下是其中一些:

* 它们通常很慢。这是因为你是在和很多用户一起共享资源。它们有时会超负荷,它们中很多都会超售。
* 你并不能完全控制“我的世界”服务端或真正的服务器。你没法按照你的意愿进行自定义。
* 你是受限制的。这种主机套餐或多或少都会有限制。

当然,使用现成的提供商也是有优点的。最好的就是你不用做下面这些操作。但是那还有什么意思呢?!

#### 为什么不应该用你的个人电脑作为“我的世界”服务器

我们注意到很多教程都展示的是如何在你自己的电脑上搭建服务器。这样做有一些弊端,比如:

* 你的家庭网络不够安全,无法抵挡 DDoS 攻击。游戏服务器通常容易被 DDoS 攻击,而你的家庭网络设置通常不够安全,不足以抵挡它们,很可能连小型攻击都无法阻挡。
* 你得处理端口转发。如果你试着在家庭网络中搭建“我的世界”服务器的话,你肯定会偶然发现端口转发的问题,并且处理时可能会遇到麻烦。
* 你得保持你的电脑一直开着。你的电费将会突破天际,并且你会增加不必要的硬件负载。大部分服务器硬件都是企业级的,提升了稳定性和持久性,专门设计用来处理负载。
* 你的家庭网络速度不够快。家庭网络并不是设计用来负载多人联机游戏的。即使你想搭建一个小型服务器,你也需要一个更好的网络套餐。幸运的是,数据中心有多个高速的、企业级的互联网连接,来保证达到(或尽量达到)100% 在线。
* 你的硬件很可能不够好。再说一次,服务器使用的都是企业级硬件,最新最快的处理器、固态硬盘,等等。你的个人电脑很可能不是。
* 你的个人电脑很可能是 Windows/MacOS。尽管这有所争议,但我们相信 Linux 更适合搭建游戏服务器。不用担心,搭建“我的世界”服务器不需要完全了解 Linux(尽管推荐这样)。我们会向你展示你需要了解的东西。

我们的建议是不要使用个人电脑,即使从技术角度来说你能做到。买一个云服务器并不是很贵。下面我们会向你展示如何在云服务器上搭建“我的世界”服务端。小心地遵守以下步骤,就很简单。

### 搭建一个“我的世界”服务器 - 需求

这是一些需求,你在教程开始之前需要拥有并了解它们:

* 你需要一个 [Linux 云服务器][9]。我们推荐 [Vultr][10]。这家价格便宜,服务质量高,客户支持很好,并且所有的服务器硬件都很高端。查看[“我的世界”服务器需求][11]来选择你需要哪种类型的服务器(像内存和硬盘之类的资源)。我们推荐每月 20 美元的套餐。他们也支持按小时收费,所以如果你只是临时需要服务器和朋友们联机的话,你的花费会更少。注册时选择 Ubuntu 16.04 发行版,并选择离你的朋友们最近的地域。这样的话你就需要自己保护并管理服务器。如果你不想这样的话,你可以选择[托管的服务器][12],这样服务器提供商可能会给你搭建好一个“我的世界”服务器。
* 你需要一个 SSH 客户端来连接到你的 Linux 云服务器。新手通常建议使用 [PuTTy][13],但我们也推荐使用 [MobaXTerm][14]。也有很多其他 SSH 客户端,挑一个你喜欢的吧。
* 你需要设置你的服务器(至少做好基本的安全设置)。谷歌一下你会发现很多教程。你也可以按照 [Linode 的安全指南][15],在你的 [Vultr][16] 服务器上一步步操作。
* 下面我们将会处理软件依赖,比如 Java。

### 如何在 Ubuntu(Linux)上搭建一个“我的世界”服务器

这篇教程是为 [Vultr][17] 上的 Ubuntu 16.04 撰写并测试可行的。但是这对 Ubuntu 14.04、[Ubuntu 18.04][18],以及其他基于 Ubuntu 的发行版、其他服务器提供商也是可行的。

我们使用默认的 Vanilla 服务端。你也可以使用像 CraftBukkit 或 Spigot 这样的服务端,来支持更多的自定义和插件,不过如果你使用过多插件的话会影响服务端。这各有优缺点。不管怎么说,下面的教程使用默认的 Vanilla 服务端,来使事情变得简单和对新手友好。如果有兴趣的话我们可能会发表一篇 CraftBukkit 的教程。

#### 1. 登录到你的服务器

我们将使用 root 账户。如果你使用受限的账户的话,大部分命令都需要 `sudo`。做你没有权限的事情时会出现警告。

你可以通过 SSH 客户端来登录你的服务器。使用你的 IP 和端口(大部分情况下是 22)。

在你登录之后,确保你的[服务器安全][19]。

#### 2. 更新 Ubuntu

```
apt-get update && apt-get upgrade
```

在提示时敲击回车键 和/或 `y`。

#### 3. 安装必要的工具

```
mkdir /opt/minecraft
cd /opt/minecraft
```

现在你可以下载“我的世界”服务端文件了。去往[下载页面][20]获取下载链接。使用 `wget` 下载文件:

```
wget https://s3.amazonaws.com/Minecraft.Download/versions/1.12.2/minecraft_server.1.12.2.jar
```

#### 5. 安装“我的世界”服务端

下载好了服务端的 .jar 文件之后,你需要先运行一下,它会生成一些文件,包括一个 `eula.txt` 许可文件。第一次运行的时候,它会返回一个错误并退出,这是正常的。使用下面的命令运行它:

```
java -Xms2048M -Xmx3472M -jar minecraft_server.1.12.2.jar nogui
```

`-Xms2048M` 是你的服务端能使用的最小内存,`-Xmx3472M` 是最大内存。请基于你服务器的硬件资源进行[调整][21]。如果你在 [Vultr][22] 服务器上有 4GB 内存,并且不用服务器来干其他事情的话,就这样留着不动。

在这条命令结束并返回一个错误之后,将会生成一个新的 `eula.txt` 文件。你需要同意那个文件里的协议,可以通过下面这条命令将 `eula=true` 写入文件中:

```
sed -i.orig 's/eula=false/eula=true/g' eula.txt
```
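如果你更习惯脚本语言,下面这个 Python 小片段与上面的 sed 命令等价。为了保持可独立运行,示例在临时目录中模拟了第一次运行生成的 `eula.txt`(实际使用时直接对 `/opt/minecraft/eula.txt` 调用即可):

```python
from pathlib import Path
import os
import tempfile

def accept_eula(path):
    """把 eula 文件中的 eula=false 改为 eula=true,
    等价于文中的 sed -i 命令。返回修改后的文件内容。"""
    p = Path(path)
    p.write_text(p.read_text().replace("eula=false", "eula=true"))
    return p.read_text()

# 演示:在临时目录中模拟第一次运行生成的 eula.txt
with tempfile.TemporaryDirectory() as d:
    f = os.path.join(d, "eula.txt")
    Path(f).write_text("#By changing the setting below to TRUE\neula=false\n")
    result = accept_eula(f)
    print(result)
```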
你现在可以通过和上面一样的命令来开启服务端并进入“我的世界”服务端控制台了:

```
java -Xms2048M -Xmx3472M -jar minecraft_server.1.12.2.jar nogui
```

确保你是在 `/opt/minecraft` 目录,或者其他你安装 MC 服务端的目录下。

如果你只是测试或暂时需要的话,到这里就可以停了。如果你在登录服务器时有问题的话,你就需要[配置你的防火墙][23]。

第一次成功启动服务端时会花费一点时间来生成世界。

创建启动脚本:

```
nano /opt/minecraft/startminecraft.sh
```

在脚本中加入以下内容:

```
cd /opt/minecraft/ && java -Xms2048M -Xmx3472M -jar minecraft_server.1.12.2.jar nogui
```

如果你不熟悉 nano 的话 - 你可以使用 `CTRL + X`,再敲击 `Y`,然后回车保存退出。这个脚本会进入你先前创建的“我的世界”服务端目录并运行 Java 命令来开启服务端。你需要执行下面的命令来使脚本可执行:

```
chmod +x startminecraft.sh
```

然后就可以用脚本启动服务端:

```
/opt/minecraft/startminecraft.sh
```

但是,如果/当你登出 SSH 会话的话,服务端就会关闭。要想不登录也让服务端持续运行的话,你可以使用 `screen` 会话。`screen` 会话会一直运行,直到实际的服务器被关闭或重启。

使用下面的命令开启一个 screen 会话:

```
screen -S minecraft
```

一旦你进入了 `screen` 会话(看起来就像是你新建了一个 SSH 会话),你就可以使用先前创建的 bash 脚本来启动服务端:

```
/opt/minecraft/startminecraft.sh
```

要退出 `screen` 会话的话,你应该按 `CTRL+A-D`。即使你离开(断开)`screen` 会话,服务端也会继续运行。你现在可以安全地登出 Ubuntu 服务器了,你创建的“我的世界”服务端将会继续运行。

但是,如果 Ubuntu 服务器重启或关闭了的话,`screen` 会话将不再起作用。所以**为了让我们之前做的这些在启动时自动运行**,做下面这些:

打开 `/etc/rc.local` 文件:

```
nano /etc/rc.local
```

在 `exit 0` 语句前添加如下内容:

```
screen -dm -S minecraft /opt/minecraft/startminecraft.sh
exit 0
```

保存并关闭文件。

要访问“我的世界”服务端控制台,只需运行下面的命令来重新连接 `screen` 会话:

```
screen -r minecraft
```

如果你在登录服务器时有问题,放行“我的世界”使用的端口:

```
ufw allow 25565/tcp
```

要启用白名单,编辑配置文件:

```
nano /opt/minecraft/server.properties
```

并将 `white-list` 行改为 `true`:

```
white-list=true
```

之后可以在服务端控制台中管理白名单:

```
whitelist add PlayerUsername
whitelist remove PlayerUsername
```

使用 `CTRL+A-D` 来退出 `screen`(服务器控制台)。值得注意的是,这会拒绝白名单以外的所有人连接到服务端。

[![how to create a minecraft server](https://thishosting.rocks/wp-content/uploads/2018/01/create-a-minecraft-server.jpg)][26]

```
screen -r minecraft
```

并执行[命令][28]。像下面这些命令:

```
difficulty hard
gamemode survival @a
```

如果有新版本发布的话,你需要这样做:

进入“我的世界”目录:

```
cd /opt/minecraft
```

下载新版本的服务端文件,并更新启动脚本中的版本号:

```
cd /opt/minecraft/ && java -Xms2048M -Xmx3472M -jar minecraft_server.1.12.3.jar
```

#### 为什么你们的教程这么长,而其他的只有 2 行那么长?!

我们想让这个教程对新手来说更友好,并且尽可能详细。我们还向你展示了如何让服务端长期运行并随系统启动,我们向你展示了如何配置你的服务端以及所有的东西。我是说,你当然可以用几行命令启动一个“我的世界”服务器,但那样的结果绝对很烂,从不止一个方面来说。

#### 我不知道 Linux 或者这里说的什么东西,我该如何搭建一个“我的世界”服务器呢?

--------------------------------------------------------------------------------

via: https://thishosting.rocks/how-to-make-a-minecraft-server/

作者:[ThisHosting.Rocks][a]
译者:[heart4lor](https://github.com/heart4lor)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/eddington_a._space_time_and_gravitation._fig._9.jpg?itok=KgNqViyZ)

最重要的公共服务之一就是<ruby>报时<rt>timekeeping</rt></ruby>,但是很多人并没有意识到这一点。大多数公共时间服务器都是由志愿者管理,以满足不断增长的需求。这里学习一下如何运行你自己的时间服务器,为基础公共利益做贡献。(查看 [在 Linux 上使用 NTP 保持精确时间][1] 去学习如何设置一台局域网时间服务器)

### 著名的时间服务器滥用事件

就像现实生活中任何一件事情一样,即便是像时间服务器这样的公益项目,也会遭受不称职的或者恶意的滥用。

消费类网络设备的供应商因制造了大混乱而臭名昭著。我回想起的第一件事发生在 2003 年,那时,NetGear 在它们的路由器中硬编码了威斯康星大学的 NTP 时间服务器地址。这使得时间服务器的查询请求突然增加,并且随着 NetGear 卖出越来越多的路由器,这种情况越发严重。更有意思的是,路由器的程序设置是每秒钟发送一次请求,这将使服务器难堪重负。后来 NetGear 发布了升级固件,但是,升级自己设备的用户很少,并且其中一些用户的设备,到今天为止,还在不停地每秒钟查询一次威斯康星大学的 NTP 服务器。NetGear 给威斯康星大学捐献了一些钱,以帮助弥补其带来的成本增加,直到这些路由器全部淘汰。类似的事件还有 D-Link、Snapchat、TP-Link 等等。

对 NTP 协议进行反射和放大,已经成为发起 DDoS 攻击的一个选择。当攻击者使用一个伪造的目标受害者的源地址向时间服务器发送请求,称为反射攻击;攻击者发送请求到多个服务器,这些服务器将回复请求,这样就使伪造的源地址受到轰炸。放大攻击是指一个很小的请求收到大量的回复信息。例如,在 Linux 上,`ntpq` 命令是一个查询你的 NTP 服务器并验证它们的系统时间是否正确的很有用的工具。而一些回复,比如对端列表,是非常大的。组合使用反射和放大,攻击者可以将 10 倍甚至更多带宽的数据量发送到被攻击者。

那么,如何保护提供公益服务的公共 NTP 服务器呢?从使用 NTP 4.2.7p26 或者更新的版本开始。对 2010 年以后发布的 Linux 发行版来说,这应该不是问题,因为它们都默认禁用了最常见的滥用方式。目前,[最新版本是 4.2.8p10][2],它发布于 2017 年。

你可以采用的另一个措施是,在你的网络上启用入站和出站过滤器:阻塞宣称来自你的网络的数据包进入你的网络,以及拦截发送到伪造返回地址的出站数据包。入站过滤器可以帮助你,而出站过滤器则帮助你和其他人。阅读 [BCP38.info][3] 了解更多信息。

### 层级为 0、1、2 的时间服务器

NTP 有超过 30 年的历史了,它是至今还在使用的最老的因特网协议之一。它的用途是保持计算机与世界标准时间(UTC)的同步。NTP 网络是分层组织的,并且同层的设备是对等的。<ruby>层级<rt>Stratum</rt></ruby> 0 包含主报时设备,比如原子钟。层级 1 的时间服务器与层级 0 的设备同步。层级 2 的设备与层级 1 的设备同步,层级 3 的设备与层级 2 的设备同步。NTP 协议支持 16 个层级,现实中并没有使用那么多的层级。同一个层级的服务器是相互对等的。

过去很长一段时间内,我们都为客户端选择配置单一的 NTP 服务器,而现在更好的做法是使用 [NTP 服务器地址池][4],它使用轮询的 DNS 信息去共享负载。池地址只是为客户端服务的,比如单一的 PC 和你的本地局域网 NTP 服务器。当你运行一台自己的公共服务器时,你不用使用这些池地址。

### 公共 NTP 服务器配置

运行一台公共 NTP 服务器只有两步:设置你的服务器,然后申请加入到 NTP 服务器池。运行一台公共的 NTP 服务器是一种很高尚的行为,但是你得先知道这意味着什么。加入 NTP 服务器池是一种长期责任,因为即使你加入服务器池后只运行了很短的时间就马上退出,接下来的很多年里你仍然会接收到请求。

你需要一个静态的公共 IP 地址,以及一个至少 512Kb/s 带宽的、可靠的、持久的因特网连接。NTP 使用的是 UDP 的 123 端口。它对机器本身要求并不高,很多管理员在其它面向公共的服务器(比如 Web 服务器)上顺带架设了 NTP 服务。

配置一台公共的 NTP 服务器与配置一台用于局域网的 NTP 服务器是一样的,只需要几个配置。我们从阅读 [协议规则][5] 开始。遵守规则并注意你的行为;几乎每个时间服务器的维护者都是像你这样的志愿者。然后,从 [StratumTwoTimeServers][6] 中选择 4 到 7 个层级 2 的上游服务器。选择的时候,选取地理位置上靠近(小于 300 英里)你的因特网服务提供商的上游服务器,阅读他们的访问规则,然后,使用 `ping` 和 `mtr` 去找到延迟和跳数最小的服务器。

以下的 `/etc/ntp.conf` 配置示例文件,包括了 IPv4 和 IPv6,以及基本的安全防护:

```
# stratum 2 server list
server servername_1 iburst
restrict -6 default kod noquery nomodify notrap nopeer limited

# Allow ntpq and ntpdc queries only from localhost
restrict 127.0.0.1
restrict ::1
```

启动你的 NTP 服务器,让它运行几分钟,然后测试它对远程服务器的查询:

```
$ ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
```

目前表现很好。现在从另一台 PC 上使用你的 NTP 服务器名字进行测试。以下的示例是一个正确的输出。如果有不正确的地方,你将看到一些错误信息。

```
$ ntpdate -q yourservername
server 66.96.99.10, stratum 2, offset 0.017690, delay 0.12794
server 98.191.213.2, stratum 1, offset 0.014798, delay 0.22887
server 173.49.198.27, stratum 2, offset 0.020665, delay 0.15012
server 129.6.15.28, stratum 1, offset -0.018846, delay 0.20966
26 Jan 11:13:54 ntpdate[17293]: adjust time server 98.191.213.2 offset 0.014798 sec
```

一旦你的服务器运行得很好,你就可以向 [manage.ntppool.org][7] 申请加入池中。
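顺带一提,NTP 报文里的时间戳使用的是自 1900-01-01 起的秒数(而 Unix 时间自 1970-01-01 起),两者相差 2208988800 秒。下面的 Python 示意演示了如何离线解析一个 48 字节 NTP 响应中的传输时间戳(仅为说明报文格式,并不发起网络请求;报文内容是构造的假想数据):

```python
import struct
from datetime import datetime, timezone

NTP_UNIX_DELTA = 2208988800  # 1900-01-01 到 1970-01-01 的秒数

def transmit_time(packet: bytes) -> float:
    """从 48 字节的 NTP 报文中取出传输时间戳(最后 8 字节:
    32 位整数秒 + 32 位小数),转换为 Unix 时间戳(秒)。"""
    secs, frac = struct.unpack("!II", packet[40:48])
    return secs - NTP_UNIX_DELTA + frac / 2**32

# 构造一个假想的响应报文:传输时间戳指向 Unix 时间 0(1970-01-01)
pkt = bytearray(48)
pkt[40:44] = struct.pack("!I", NTP_UNIX_DELTA)
print(datetime.fromtimestamp(transmit_time(bytes(pkt)), tz=timezone.utc))
```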
查看官方的手册 [分布式网络时间服务器(NTP)][8] 学习所有的命令、配置选项以及高级特性,比如管理、查询和验证。访问以下的站点学习关于运行一台时间服务器所需要的一切东西。

通过来自 Linux 基金会和 edX 的免费课程 [“Linux 入门”][9] 学习更多 Linux 的知识。

--------------------------------------------------------------------------------

via: https://www.linux.com/learn/intro-to-linux/2018/2/how-run-your-own-public-t

作者:[CARLA SCHRODER][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/cschroder
[1]:https://linux.cn/article-9462-1.html
[2]:http://www.ntp.org/downloads.html
[3]:http://www.bcp38.info/index.php/Main_Page
[4]:http://www.pool.ntp.org/en/use.html
|
@ -1,43 +1,41 @@
|
||||
一个仅下载文件新的部分的传输工具
|
||||
Zsync:一个仅下载文件新的部分的传输工具
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/02/Linux-1-720x340.png)
|
||||
|
||||
仅仅因为网费每天变得越来越便宜,你也不应该重复下载相同的东西来浪费你的流量。一个很好的例子就是下载 Ubuntu 或任何 Linux 镜像的开发版本。如你所知,Ubuntu 开发人员每隔几个月就会发布一次日常构建、alpha、beta 版 ISO 镜像以供测试。在过去,一旦发布我就会下载这些镜像,并审查每个版本。现在不用了!感谢 **Zsync** 文件传输程序。现在可以仅下载 ISO 镜像新的部分。这将为你节省大量时间和 Internet 带宽。不仅时间和带宽,它将为你节省服务端和客户端的资源。
|
||||
就算是网费每天变得越来越便宜,你也不应该重复下载相同的东西来浪费你的流量。一个很好的例子就是下载 Ubuntu 或任何 Linux 镜像的开发版本。如你所知,Ubuntu 开发人员每隔几个月就会发布一次日常构建、alpha、beta 版 ISO 镜像以供测试。在过去,一旦发布我就会下载这些镜像,并审查每个版本。现在不用了!感谢 Zsync 文件传输程序。现在可以仅下载 ISO 镜像新的部分。这将为你节省大量时间和 Internet 带宽。不仅时间和带宽,它将为你节省服务端和客户端的资源。
|
||||
|
||||
Zsync 使用与 **Rsync** 相同的算法,但它只下载文件的新部分,你会得到一份已有文件旧版本的副本。 Rsync 主要用于在计算机之间同步数据,而 Zsync 则用于分发数据。简单地说,可以使用 Zsync 将中心的一个文件分发给数千个下载者。它在 Artistic License V2 许可证下发布,完全免费且开源。
|
||||
Zsync 使用与 Rsync 相同的算法,如果你会得到一份已有文件旧版本,它只下载该文件新的部分。 Rsync 主要用于在计算机之间同步数据,而 Zsync 则用于分发数据。简单地说,可以使用 Zsync 将中心的一个文件分发给数千个下载者。它在 Artistic License V2 许可证下发布,完全免费且开源。
|
||||
|
||||
### 安装 Zsync
|
||||
|
||||
Zsync 在大多数 Linux 发行版的默认仓库中有。
|
||||
|
||||
在 **Arch Linux** 及其衍生版上,使用命令安装它:
|
||||
在 Arch Linux 及其衍生版上,使用命令安装它:
|
||||
```
|
||||
$ sudo pacman -S zsync
|
||||
|
||||
```
|
||||
|
||||
在 **Fedora** 上:
|
||||
在 Fedora 上,启用 Zsync 仓库:
|
||||
|
||||
启用 Zsync 仓库:
|
||||
```
|
||||
$ sudo dnf copr enable ngompa/zsync
|
||||
|
||||
```
|
||||
|
||||
并使用命令安装它:
|
||||
|
||||
```
|
||||
$ sudo dnf install zsync
|
||||
|
||||
```
|
||||
|
||||
在 **Debian、Ubuntu、Linux Mint** 上:
|
||||
在 Debian、Ubuntu、Linux Mint 上:
|
||||
|
||||
```
|
||||
$ sudo apt-get install zsync
|
||||
|
||||
```
|
||||
|
||||
对于其他发行版,你可以从 [**Zsync 下载页面**][1]下载二进制文件,并手动编译安装它,如下所示。
|
||||
对于其他发行版,你可以从 [Zsync 下载页面][1]下载二进制打包文件,并手动编译安装它,如下所示。
|
||||
|
||||
```
|
||||
$ wget http://zsync.moria.org.uk/download/zsync-0.6.2.tar.bz2
|
||||
$ tar xjf zsync-0.6.2.tar.bz2
|
||||
@ -45,44 +43,41 @@ $ cd zsync-0.6.2/
|
||||
$ ./configure
|
||||
$ make
|
||||
$ sudo make install
|
||||
|
||||
```
|
||||
|
||||
### 用法
|
||||
|
||||
请注意,**只有当人们提供 zsync 下载时,zsync 才有用**。目前,Debian、Ubuntu(所有版本)的 ISO 镜像都可以用 .zsync 下载。例如,请访问以下链接。
|
||||
请注意,只有当人们提供 zsync 下载方式时,zsync 才有用。目前,Debian、Ubuntu(所有版本)的 ISO 镜像都有 .zsync 下载链接。例如,请访问以下链接。
|
||||
|
||||
你可能注意到,Ubuntu 18.04 LTS 每日构建版有直接的 ISO 和 .zsync 文件。如果你下载 .ISO 文件,则必须在 ISO 更新时下载完整的 ISO 文件。但是,如果你下载的是 .zsync 文件,那么 Zsync 将在未来仅下载新的更改。你不需要每次都下载整个 ISO 映像。
|
||||
你可能注意到,Ubuntu 18.04 LTS 每日构建版有直接的 ISO 和 .zsync 文件。如果你下载 .ISO 文件,则必须在 ISO 更新时下载完整的 ISO 文件。但是,如果你下载的是 .zsync 文件,那么 Zsync 以后仅会下载新的更改。你不需要每次都下载整个 ISO 镜像。
|
||||
|
||||
.zsync 文件包含 zsync 程序所需的元数据。该文件包含 rsync 算法的预先计算的校验和。它在服务器上生成一次,然后由任意数量的下载器使用。要使用 Zsync 客户端程序下载 .zsync 文件,你只需执行以下操作:
|
||||
|
||||
```
|
||||
$ zsync <.zsync-file-URL>
|
||||
|
||||
```
|
||||
|
||||
例如:
|
||||
|
||||
```
|
||||
$ zsync http://cdimage.ubuntu.com/ubuntu/daily-live/current/bionic-desktop-amd64.iso.zsync
|
||||
|
||||
```
|
||||
|
||||
如果你的系统中已有以前的镜像文件,那么 Zsync 将计算远程服务器中旧文件和新文件之间的差异,并仅下载新的部分。你将在终端看见表示计算过程的一系列点或星号。
|
||||
|
||||
如果你下载的文件的旧版本存在于当前工作目录,那么 Zsync 将只下载新的部分。下载完成后,你将看到两个镜像,一个你刚下载的镜像和以 **.iso.zs-old** 为扩展名的旧镜像。
|
||||
如果你下载的文件的旧版本存在于当前工作目录,那么 Zsync 将只下载新的部分。下载完成后,你将看到两个镜像,一个你刚下载的镜像和以 .iso.zs-old 为扩展名的旧镜像。
|
||||
|
||||
如果没有找到相关的本地数据,Zsync 会下载整个文件。
|
||||
|
||||
![](http://www.ostechnix.com/wp-content/uploads/2018/02/Zsync-1.png)
|
||||
|
||||
你可以随时按 **CTRL-C** 取消下载过程。
|
||||
你可以随时按 `CTRL-C` 取消下载过程。
|
||||
|
||||
试想一下,如果你直接下载 .ISO 文件或使用 torrent,每当你下载新镜像时,你将损失约 1.4GB 流量。因此,只要系统中有旧版本的拷贝,Zsync 就不会下载整个 alpha、beta 或日常构建镜像,而只下载 ISO 文件中新的部分。
|
||||
|
||||
今天就到这里。希望对你有帮助。我将很快另外写一篇有用的指南。在此之前,请继续关注 OSTechNix!
|
||||
|
||||
干杯!
|
||||
|
||||
今天就到这里。希望对你有帮助。我将很快另外写一篇有用的指南。在此之前,请保持关注!
|
||||
|
||||
干杯!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -90,7 +85,7 @@ via: https://www.ostechnix.com/zsync-file-transfer-utility-download-new-parts-fi
|
||||
|
||||
作者:[SK][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,11 +1,11 @@
|
||||
如何在Linux上检查您的网络连接
|
||||
解读 ip 命令展示的网络连接信息
|
||||
======
|
||||
|
||||
![](https://images.idgesg.net/images/article/2018/03/network-connections-100751906-large.jpg)
|
||||
|
||||
**ip**命令有很多可以告诉你网络连接配置和状态的信息,但是所有这些词和数字意味着什么? 让我们深入了解一下,看看所有显示的值都试图告诉你什么。
|
||||
`ip` 命令可以告诉你很多网络连接配置和状态的信息,但是所有这些词和数字意味着什么? 让我们深入了解一下,看看所有显示的值都试图告诉你什么。
|
||||
|
||||
当您使用`ip a`(或`ip addr`)命令获取系统上所有网络接口的信息时,您将看到如下所示的内容:
|
||||
当您使用 `ip a`(或 `ip addr`)命令获取系统上所有网络接口的信息时,您将看到如下所示的内容:
|
||||
|
||||
```
|
||||
$ ip a
|
||||
@ -23,46 +23,46 @@ $ ip a
|
||||
valid_lft forever preferred_lft forever
|
||||
```
|
||||
|
||||
这个系统上的两个接口 - 环回(lo)和网络(enp0s25)——显示了很多统计数据。 “lo”接口显然是环回地址。 我们可以在列表中看到环回IPv4地址(127.0.0.1)和环回IPv6( **::1**)。 正常的网络接口更有趣。
|
||||
这个系统上的两个接口 —— 环回(`lo`)和网络(`enp0s25`)—— 显示了很多统计数据。 `lo` 接口显然是<ruby>环回地址<rt>loopback</rt></ruby>。 我们可以在列表中看到环回 IPv4 地址(`127.0.0.1`)和环回 IPv6(`::1`)。 而普通的网络接口更有趣。
|
||||
|
||||
### 为什么是enp0s25而不是eth0
|
||||
### 为什么是 enp0s25 而不是 eth0
|
||||
|
||||
如果你想知道为什么它在这个系统上被称为**enp0s25**,而不是可能更熟悉的**eth0**,那我们可以稍微解释一下。
|
||||
如果你想知道为什么它在这个系统上被称为 `enp0s25`,而不是可能更熟悉的 `eth0`,那我们可以稍微解释一下。
|
||||
|
||||
新的命名方案被称为“可预测的网络接口”。 它已经在基于systemd的Linux系统上使用了一段时间了。 接口名称取决于硬件的物理位置。 “**en**”仅仅就是“ethernet”的意思就像“eth”用于对于eth0,一样。 “**p**”是以太网卡的总线编号,“**s**”是插槽编号。 所以“enp0s25”告诉我们很多我们正在使用的硬件的信息。
|
||||
新的命名方案被称为“<ruby>可预测的网络接口<rt>Predictable Network Interface</rt></ruby>”。 它已经在基于 systemd 的 Linux 系统上使用了一段时间了。 接口名称取决于硬件的物理位置。 `en` 仅仅就是 “ethernet” 的意思,就像 `eth0` 中的 “eth” 一样。 `p` 是以太网卡的总线编号,`s` 是插槽编号。 所以 `enp0s25` 告诉我们很多我们正在使用的硬件的信息。
|
||||
|
||||
`<BROADCAST,MULTICAST,UP,LOWER_UP>` 这个配置串告诉我们:
|
||||
|
||||
<BROADCAST,MULTICAST,UP,LOWER_UP> 这个配置串告诉我们:
|
||||
```
|
||||
BROADCAST 该接口支持广播
|
||||
MULTICAST 该接口支持多播
|
||||
UP 网络接口已启用
|
||||
LOWER_UP 网络电缆已插入,设备已连接至网络
|
||||
mtu 1500 最大传输单位(数据包大小)为1,500字节
|
||||
```
|
||||
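这些标志也可以用脚本解析出来。下面是一个假设性的 Python 小例子(并非任何现成工具),从一行 `ip a` 输出中提取标志和 MTU:

```python
import re

# 各标志的含义,对应上面的翻译表
FLAG_MEANINGS = {
    "BROADCAST": "interface supports broadcast",
    "MULTICAST": "interface supports multicast",
    "UP": "interface is enabled",
    "LOWER_UP": "cable plugged in, link detected",
}

def parse_ip_line(line: str) -> dict:
    # 从形如 "2: enp0s25: <BROADCAST,...> mtu 1500 ..." 的行中提取信息
    flags = re.search(r"<([^>]*)>", line).group(1).split(",")
    mtu = int(re.search(r"mtu (\d+)", line).group(1))
    return {"flags": flags, "mtu": mtu,
            "meanings": [FLAG_MEANINGS.get(f, f) for f in flags]}
```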
|
||||
列出的其他值也告诉了我们很多关于接口的知识,但我们需要知道“brd”和“qlen”这些词代表什么意思。 所以,这里显示的是上面展示的**ip**信息的其余部分的翻译。
|
||||
列出的其他值也告诉了我们很多关于接口的知识,但我们需要知道 `brd` 和 `qlen` 这些词代表什么意思。 所以,这里显示的是上面展示的 `ip` 信息的其余部分的翻译。
|
||||
|
||||
```
|
||||
mtu 最大传输单位(数据包大小)为1,500字节
|
||||
mtu 1500 最大传输单位(数据包大小)为1,500字节
|
||||
qdisc pfifo_fast 用于数据包排队
|
||||
state UP 网络接口已启用
|
||||
group default 接口组
|
||||
qlen 1000 传输队列长度
|
||||
link/ether 00:1e:4f:c8:43:fc 接口的MAC(硬件)地址
|
||||
link/ether 00:1e:4f:c8:43:fc 接口的 MAC(硬件)地址
|
||||
brd ff:ff:ff:ff:ff:ff 广播地址
|
||||
inet 192.168.0.24/24 IPv4地址
|
||||
inet 192.168.0.24/24 IPv4 地址
|
||||
brd 192.168.0.255 广播地址
|
||||
scope global 全局有效
|
||||
dynamic enp0s25 地址是动态分配的
|
||||
valid_lft 80866sec IPv4地址的有效使用期限
|
||||
preferred_lft 80866sec IPv4地址的首选生存期
|
||||
inet6 fe80::2c8e:1de0:a862:14fd/64 IPv6地址
|
||||
valid_lft 80866sec IPv4 地址的有效使用期限
|
||||
preferred_lft 80866sec IPv4 地址的首选生存期
|
||||
inet6 fe80::2c8e:1de0:a862:14fd/64 IPv6 地址
|
||||
scope link 仅在此设备上有效
|
||||
valid_lft forever IPv6地址的有效使用期限
|
||||
preferred_lft forever IPv6地址的首选生存期
|
||||
valid_lft forever IPv6 地址的有效使用期限
|
||||
preferred_lft forever IPv6 地址的首选生存期
|
||||
```
|
||||
|
||||
您可能已经注意到,ifconfig命令提供的一些信息未包含在**ip a** 命令的输出中 —— 例如传输数据包的统计信息。 如果您想查看发送和接收的数据包数量以及冲突数量的列表,可以使用以下ip命令:
|
||||
您可能已经注意到,`ifconfig` 命令提供的一些信息未包含在 `ip a` 命令的输出中 —— 例如传输数据包的统计信息。 如果您想查看发送和接收的数据包数量以及冲突数量的列表,可以使用以下 `ip` 命令:
|
||||
|
||||
```
|
||||
$ ip -s link show enp0s25
|
||||
@ -74,7 +74,8 @@ $ ip -s link show enp0s25
|
||||
6131373 78152 0 0 0 0
|
||||
```
|
||||
|
||||
另一个**ip**命令提供有关系统路由表的信息。
|
||||
另一个 `ip` 命令提供有关系统路由表的信息。
|
||||
|
||||
```
|
||||
$ ip route show
|
||||
default via 192.168.0.1 dev enp0s25 proto static metric 100
|
||||
@ -82,7 +83,7 @@ default via 192.168.0.1 dev enp0s25 proto static metric 100
|
||||
192.168.0.0/24 dev enp0s25 proto kernel scope link src 192.168.0.24 metric 100
|
||||
```
|
||||
|
||||
**ip**命令是非常通用的。 您可以从**ip**命令及其来自[Red Hat][1]的选项获得有用的备忘单。
|
||||
`ip` 命令是非常通用的。 您可以从 [Red Hat][1] 获得一份关于 `ip` 命令及其选项的有用备忘单。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -90,7 +91,7 @@ via: https://www.networkworld.com/article/3262045/linux/checking-your-network-co
|
||||
|
||||
作者:[Sandra Henry-Stocker][a]
|
||||
译者:[Flowsnow](https://github.com/Flowsnow)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,3 +1,4 @@
|
||||
【Pgmalion666 Translating】
|
||||
10 Tools To Add Some Spice To Your UNIX/Linux Shell Scripts
|
||||
======
|
||||
There are some misconceptions that shell scripts are only for a CLI environment. You can efficiently use various tools to write GUI and network (socket) scripts under KDE or Gnome desktops. Shell scripts can make use of some of the GUI widget (menus, warning boxes, progress bars, etc.). You can always control the final output, cursor position on the screen, various output effects, and more. With the following tools, you can build powerful, interactive, user-friendly UNIX / Linux bash shell scripts.
|
||||
|
@ -1,154 +0,0 @@
|
||||
translating by yizhuoyan
|
||||
|
||||
How To Set Readonly File Permissions On Linux / Unix Web Server DocumentRoot
|
||||
======
|
||||
|
||||
How do I set a read-only permission for all of my files stored in /var/www/html/ directory?
|
||||
|
||||
You can use the chmod command to set read-only permission for all files on a Linux / Unix / macOS / Apple OS X / *BSD operating systems. This page explains how to setup read only file permission on Linux or Unix web server such as Nginx, Lighttpd, Apache and more.
|
||||
|
||||
[![Proper read-only permissions for Linux/Unix Nginx/Apache web server's directory][1]][1]
|
||||
|
||||
### How to set files in read-only mode
|
||||
|
||||
|
||||
The syntax is:
|
||||
```
|
||||
### use only for files ##
|
||||
chmod 0444 /var/www/html/*
|
||||
chmod 0444 /var/www/html/*.php
|
||||
```
|
||||
|
||||
### How to set directories in read-only mode
|
||||
|
||||
To set directories in read-only mode, enter:
|
||||
```
|
||||
### use only for dirs ##
|
||||
chmod 0444 /var/www/html/
|
||||
chmod 0444 /path/to/your/dir/
|
||||
# ***************************************************************************
|
||||
# Say webserver user/group is www-data, and file-owned by ftp-data user/group
|
||||
# ***************************************************************************
|
||||
# All files/dirs are read-only
|
||||
chmod -R 0444 /var/www/html/
|
||||
# All files/dir owned by ftp-data
|
||||
chown -R ftp-data:ftp-data /var/www/html/
|
||||
# All directories and sub-dirs has 0445 permission (so that webserver user www-data can read our files)
|
||||
find /var/www/html/ -type d -print0 | xargs -0 -I {} chmod 0445 "{}"
|
||||
```
|
||||
To find all files (including those in sub-directories of /var/www/html) and set read-only permission, enter:
|
||||
```
|
||||
### works on files only ##
|
||||
find /var/www/html -type f -iname "*" -print0 | xargs -I {} -0 chmod 0444 {}
|
||||
```
|
||||
|
||||
However, you need to set read-only and execute permission on /var/www/html and all sub-directories so that the web server can enter your DocumentRoot. Enter:
|
||||
```
|
||||
### works on dirs only ##
|
||||
find /var/www/html -type d -iname "*" -print0 | xargs -I {} -0 chmod 0544 {}
|
||||
```
|
||||
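The same two-step scheme — read-only files, owner-traversable directories — can also be sketched in Python (illustrative only, mirroring the 0444/0544 modes used above; the path is whatever tree you pass in):

```python
import os

def make_tree_readonly(root: str) -> None:
    # Files become 0444 (read-only for everyone); directories become
    # 0544 so the owner can still traverse into them, mirroring the
    # find/chmod pipeline above.
    for dirpath, dirnames, filenames in os.walk(root):
        os.chmod(dirpath, 0o544)
        for name in filenames:
            os.chmod(os.path.join(dirpath, name), 0o444)
```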
|
||||
### A warning about write permission
|
||||
|
||||
Please note that write access on a directory /var/www/html/ allows anyone to remove or add new files. In other words, you may need to set a read-only permission for /var/www/html/ directory itself:
|
||||
```
|
||||
### read-only web-root but web server allowed to read files ##
|
||||
chmod 0555 /var/www/html
|
||||
```
|
||||
|
||||
In some cases you can change file owner and group to set tight permissions as per your setup:
|
||||
```
|
||||
### Say /var/www/html is owned by normal user, you can set it to root:root or httpd:httpd (recommended) ###
|
||||
chown -R root:root /var/www/html/
|
||||
|
||||
### Make sure apache user owns /var/www/html/ ##
|
||||
chown -R apache:apache /var/www/html/
|
||||
```
|
||||
|
||||
### A note about NFS exported directories
|
||||
|
||||
You can specify whether the directory should have [read-only or read/write permissions using /etc/exports][2] file. This file defines the various shares on the NFS server and their permissions. A few examples:
|
||||
```
|
||||
# Read-only access to anyone
|
||||
/var/www/html *(ro,sync)
|
||||
|
||||
# Read-write access to a client on 192.168.1.10 (upload.example.com)
|
||||
/var/www/html 192.168.1.10(rw,sync)
|
||||
```
|
||||
|
||||
### A note about read-only Samba (CIFS) share for MS-Windows clients
|
||||
|
||||
To share sales as read-only, update smb.conf as follows:
|
||||
```
|
||||
[sales]
|
||||
comment = Sales Data
|
||||
path = /export/cifs/sales
|
||||
read only = Yes
|
||||
guest ok = Yes
|
||||
```
|
||||
|
||||
### A note about file systems table
|
||||
|
||||
You can use the /etc/fstab file on Unix or Linux to configure to mount certain files in read-only mode. You need to have a dedicated partition. Do not set / or other system partitions in read-only mode. In this example /srv/html is set to read-only mode using /etc/fstab file:
|
||||
```
|
||||
/dev/sda6 /srv/html ext4 ro 1 1
|
||||
```
|
||||
|
||||
You can use the mount command to [remount partition in read-only mode][3] (run it as the root user):
|
||||
```
|
||||
# mount -o remount,ro /dev/sda6 /srv/html
|
||||
```
|
||||
OR
|
||||
```
|
||||
# mount -o remount,ro /srv/html
|
||||
```
|
||||
The above command will attempt to remount an already-mounted filesystem at /srv/html. This is commonly used to change the mount flags for a filesystem, especially to make a read-only filesystem writable. It does not change the device or mount point. To make the file system writable again, enter:
|
||||
```
|
||||
# mount -o remount,rw /dev/sda6 /srv/html
|
||||
```
|
||||
OR
|
||||
```
|
||||
# mount -o remount,rw /srv/html
|
||||
```
|
||||
|
||||
### Linux: chattr Command
|
||||
|
||||
You can change file [attributes on a Linux file system to read-only][4] using the chattr command:
|
||||
```
|
||||
chattr +i /path/to/file.php
|
||||
chattr +i /var/www/html/
|
||||
|
||||
# find everything in /var/www/html and set to read-only #
|
||||
find /var/www/html -iname "*" -print0 | xargs -I {} -0 chattr +i {}
|
||||
```
|
||||
|
||||
To remove read-only attribute pass the -i option:
|
||||
```
|
||||
# chattr -i /path/to/file.php
|
||||
```
|
||||
FreeBSD, Mac OS X and other BSD unix user can use the [chflags command][5]:
|
||||
```
|
||||
### set read-only ##
|
||||
chflags schg /path/to/file.php
|
||||
|
||||
# remove read-only ##
|
||||
chflags noschg /path/to/file.php
|
||||
```
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.cyberciti.biz/faq/howto-set-readonly-file-permission-in-linux-unix/
|
||||
|
||||
作者:[Vivek Gite][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.cyberciti.biz
|
||||
[1]:https://www.cyberciti.biz/media/new/faq/2012/04/linux-unix-set-read-only-file-system-permission-for-apache-nginx.jpg
|
||||
[2]:https://www.cyberciti.biz//www.cyberciti.biz/faq/centos-fedora-rhel-nfs-v4-configuration/
|
||||
[3]:https://www.cyberciti.biz/faq/howto-freebsd-remount-partition/
|
||||
[4]:https://www.cyberciti.biz/tips/linux-password-trick.html
|
||||
[5]:https://www.cyberciti.biz/tips/howto-write-protect-file-with-immutable-bit.html
|
@ -1,3 +1,4 @@
|
||||
Translating by qhwdw
|
||||
Caffeinated 6.828: Lab 1: Booting a PC
|
||||
======
|
||||
|
||||
|
@ -1,3 +1,5 @@
|
||||
translating by yizhuoyan
|
||||
|
||||
Prevent Files And Folders From Accidental Deletion Or Modification In Linux
|
||||
======
|
||||
|
||||
|
702
sources/tech/20171024 Learn Blockchains by Building One.md
Normal file
@ -0,0 +1,702 @@
|
||||
Learn Blockchains by Building One
|
||||
======
|
||||
|
||||
![](https://cdn-images-1.medium.com/max/2000/1*zutLn_-fZZhy7Ari-x-JWQ.jpeg)
|
||||
You’re here because, like me, you’re psyched about the rise of Cryptocurrencies. And you want to know how Blockchains work—the fundamental technology behind them.
|
||||
|
||||
But understanding Blockchains isn’t easy—or at least wasn’t for me. I trudged through dense videos, followed porous tutorials, and dealt with the amplified frustration of too few examples.
|
||||
|
||||
I like learning by doing. It forces me to deal with the subject matter at a code level, which gets it sticking. If you do the same, at the end of this guide you’ll have a functioning Blockchain with a solid grasp of how they work.
|
||||
|
||||
### Before you get started…
|
||||
|
||||
Remember that a blockchain is an _immutable, sequential_ chain of records called Blocks. They can contain transactions, files or any data you like, really. But the important thing is that they’re _chained_ together using _hashes_ .
|
||||
|
||||
If you aren’t sure what a hash is, [here’s an explanation][1].
|
||||
|
||||
**_Who is this guide aimed at?_** You should be comfy reading and writing some basic Python, as well as have some understanding of how HTTP requests work, since we’ll be talking to our Blockchain over HTTP.
|
||||
|
||||
**_What do I need?_** Make sure that [Python 3.6][2]+ (along with `pip`) is installed. You’ll also need to install Flask and the wonderful Requests library:
|
||||
|
||||
```
|
||||
pip install Flask==0.12.2 requests==2.18.4
|
||||
```
|
||||
|
||||
Oh, you’ll also need an HTTP Client, like [Postman][3] or cURL. But anything will do.
|
||||
|
||||
**_Where’s the final code?_** The source code is [available here][4].
|
||||
|
||||
* * *
|
||||
|
||||
### Step 1: Building a Blockchain
|
||||
|
||||
Open up your favourite text editor or IDE, personally I ❤️ [PyCharm][5]. Create a new file, called `blockchain.py`. We’ll only use a single file, but if you get lost, you can always refer to the [source code][6].
|
||||
|
||||
#### Representing a Blockchain
|
||||
|
||||
We’ll create a `Blockchain` class whose constructor creates an initial empty list (to store our blockchain), and another to store transactions. Here’s the blueprint for our class:
|
||||
|
||||
```
|
||||
class Blockchain(object):
|
||||
def __init__(self):
|
||||
self.chain = []
|
||||
self.current_transactions = []
|
||||
|
||||
def new_block(self):
|
||||
# Creates a new Block and adds it to the chain
|
||||
pass
|
||||
|
||||
def new_transaction(self):
|
||||
# Adds a new transaction to the list of transactions
|
||||
pass
|
||||
|
||||
@staticmethod
|
||||
def hash(block):
|
||||
# Hashes a Block
|
||||
pass
|
||||
|
||||
@property
|
||||
def last_block(self):
|
||||
# Returns the last Block in the chain
|
||||
pass
|
||||
```
|
||||
|
||||
|
||||
Our Blockchain class is responsible for managing the chain. It will store transactions and have some helper methods for adding new blocks to the chain. Let’s start fleshing out some methods.
|
||||
|
||||
#### What does a Block look like?
|
||||
|
||||
Each Block has an index, a timestamp (in Unix time), a list of transactions, a proof (more on that later), and the hash of the previous Block.
|
||||
|
||||
Here’s an example of what a single Block looks like:
|
||||
|
||||
```
|
||||
block = {
|
||||
'index': 1,
|
||||
'timestamp': 1506057125.900785,
|
||||
'transactions': [
|
||||
{
|
||||
'sender': "8527147fe1f5426f9dd545de4b27ee00",
|
||||
'recipient': "a77f5cdfa2934df3954a5c7c7da5df1f",
|
||||
'amount': 5,
|
||||
}
|
||||
],
|
||||
'proof': 324984774000,
|
||||
'previous_hash': "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824"
|
||||
}
|
||||
```
|
||||
|
||||
At this point, the idea of a chain should be apparent—each new block contains within itself, the hash of the previous Block. This is crucial because it’s what gives blockchains immutability: If an attacker corrupted an earlier Block in the chain then all subsequent blocks will contain incorrect hashes.
|
||||
|
||||
Does this make sense? If it doesn’t, take some time to let it sink in—it’s the core idea behind blockchains.
|
||||
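To see concretely why corrupting an earlier Block breaks the chain, here is a minimal standalone sketch (separate from the class we are building) that checks each block against the hash of its predecessor:

```python
import hashlib
import json

def hash_block(block: dict) -> str:
    # Deterministic SHA-256 of a block, in the spirit of the hash() method we will write
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def is_valid_chain(chain: list) -> bool:
    # Every block must reference the hash of its predecessor
    return all(chain[i]["previous_hash"] == hash_block(chain[i - 1])
               for i in range(1, len(chain)))
```

Change any field of an earlier block and `is_valid_chain` immediately returns `False`, because its hash no longer matches the `previous_hash` stored in the next block.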
|
||||
#### Adding Transactions to a Block
|
||||
|
||||
We’ll need a way of adding transactions to a Block. Our new_transaction() method is responsible for this, and it’s pretty straight-forward:
|
||||
|
||||
```
|
||||
class Blockchain(object):
|
||||
...
|
||||
|
||||
def new_transaction(self, sender, recipient, amount):
|
||||
"""
|
||||
Creates a new transaction to go into the next mined Block
|
||||
:param sender: <str> Address of the Sender
|
||||
:param recipient: <str> Address of the Recipient
|
||||
:param amount: <int> Amount
|
||||
:return: <int> The index of the Block that will hold this transaction
|
||||
"""
|
||||
|
||||
self.current_transactions.append({
|
||||
'sender': sender,
|
||||
'recipient': recipient,
|
||||
'amount': amount,
|
||||
})
|
||||
|
||||
return self.last_block['index'] + 1
|
||||
```
|
||||
|
||||
After new_transaction() adds a transaction to the list, it returns the index of the block which the transaction will be added to—the next one to be mined. This will be useful later on, to the user submitting the transaction.
|
||||
|
||||
#### Creating new Blocks
|
||||
|
||||
When our Blockchain is instantiated we’ll need to seed it with a genesis block—a block with no predecessors. We’ll also need to add a “proof” to our genesis block which is the result of mining (or proof of work). We’ll talk more about mining later.
|
||||
|
||||
In addition to creating the genesis block in our constructor, we’ll also flesh out the methods for new_block(), new_transaction() and hash():
|
||||
|
||||
```
|
||||
import hashlib
|
||||
import json
|
||||
from time import time
|
||||
|
||||
|
||||
class Blockchain(object):
|
||||
def __init__(self):
|
||||
self.current_transactions = []
|
||||
self.chain = []
|
||||
|
||||
# Create the genesis block
|
||||
self.new_block(previous_hash=1, proof=100)
|
||||
|
||||
def new_block(self, proof, previous_hash=None):
|
||||
"""
|
||||
Create a new Block in the Blockchain
|
||||
:param proof: <int> The proof given by the Proof of Work algorithm
|
||||
:param previous_hash: (Optional) <str> Hash of previous Block
|
||||
:return: <dict> New Block
|
||||
"""
|
||||
|
||||
block = {
|
||||
'index': len(self.chain) + 1,
|
||||
'timestamp': time(),
|
||||
'transactions': self.current_transactions,
|
||||
'proof': proof,
|
||||
'previous_hash': previous_hash or self.hash(self.chain[-1]),
|
||||
}
|
||||
|
||||
# Reset the current list of transactions
|
||||
self.current_transactions = []
|
||||
|
||||
self.chain.append(block)
|
||||
return block
|
||||
|
||||
def new_transaction(self, sender, recipient, amount):
|
||||
"""
|
||||
Creates a new transaction to go into the next mined Block
|
||||
:param sender: <str> Address of the Sender
|
||||
:param recipient: <str> Address of the Recipient
|
||||
:param amount: <int> Amount
|
||||
:return: <int> The index of the Block that will hold this transaction
|
||||
"""
|
||||
self.current_transactions.append({
|
||||
'sender': sender,
|
||||
'recipient': recipient,
|
||||
'amount': amount,
|
||||
})
|
||||
|
||||
return self.last_block['index'] + 1
|
||||
|
||||
@property
|
||||
def last_block(self):
|
||||
return self.chain[-1]
|
||||
|
||||
@staticmethod
|
||||
def hash(block):
|
||||
"""
|
||||
Creates a SHA-256 hash of a Block
|
||||
:param block: <dict> Block
|
||||
:return: <str>
|
||||
"""
|
||||
|
||||
# We must make sure that the Dictionary is Ordered, or we'll have inconsistent hashes
|
||||
block_string = json.dumps(block, sort_keys=True).encode()
|
||||
return hashlib.sha256(block_string).hexdigest()
|
||||
```
|
||||
|
||||
The above should be straight-forward—I’ve added some comments and docstrings to help keep it clear. We’re almost done with representing our blockchain. But at this point, you must be wondering how new blocks are created, forged or mined.
|
||||
|
||||
#### Understanding Proof of Work
|
||||
|
||||
A Proof of Work algorithm (PoW) is how new Blocks are created or mined on the blockchain. The goal of PoW is to discover a number which solves a problem. The number must be difficult to find but easy to verify—computationally speaking—by anyone on the network. This is the core idea behind Proof of Work.
|
||||
|
||||
We’ll look at a very simple example to help this sink in.
|
||||
|
||||
Let’s decide that the hash of some integer x multiplied by another y must end in 0. So, hash(x * y) = ac23dc...0. And for this simplified example, let’s fix x = 5. Implementing this in Python:
|
||||
|
||||
```
|
||||
from hashlib import sha256
|
||||
|
||||
x = 5
|
||||
y = 0 # We don't know what y should be yet...
|
||||
|
||||
while sha256(f'{x*y}'.encode()).hexdigest()[-1] != "0":
|
||||
y += 1
|
||||
|
||||
print(f'The solution is y = {y}')
|
||||
```
|
||||
|
||||
The solution here is y = 21, since the produced hash ends in 0:
|
||||
|
||||
```
|
||||
hash(5 * 21) = 1253e9373e...5e3600155e860
|
||||
```
|
||||
|
||||
The network is able to easily verify their solution.
|
||||
|
||||
#### Implementing basic Proof of Work
|
||||
|
||||
Let’s implement a similar algorithm for our blockchain. Our rule will be similar to the example above:
|
||||
|
||||
> Find a number p that, when hashed with the previous block’s proof, produces a hash with 4 leading 0s.
|
||||
|
||||
```
|
||||
import hashlib
|
||||
import json
|
||||
|
||||
from time import time
|
||||
from uuid import uuid4
|
||||
|
||||
|
||||
class Blockchain(object):
|
||||
...
|
||||
|
||||
def proof_of_work(self, last_proof):
|
||||
"""
|
||||
Simple Proof of Work Algorithm:
|
||||
- Find a number p' such that hash(pp') contains leading 4 zeroes, where p is the previous p'
|
||||
- p is the previous proof, and p' is the new proof
|
||||
:param last_proof: <int>
|
||||
:return: <int>
|
||||
"""
|
||||
|
||||
proof = 0
|
||||
while self.valid_proof(last_proof, proof) is False:
|
||||
proof += 1
|
||||
|
||||
return proof
|
||||
|
||||
@staticmethod
|
||||
def valid_proof(last_proof, proof):
|
||||
"""
|
||||
Validates the Proof: Does hash(last_proof, proof) contain 4 leading zeroes?
|
||||
:param last_proof: <int> Previous Proof
|
||||
:param proof: <int> Current Proof
|
||||
:return: <bool> True if correct, False if not.
|
||||
"""
|
||||
|
||||
guess = f'{last_proof}{proof}'.encode()
|
||||
guess_hash = hashlib.sha256(guess).hexdigest()
|
||||
return guess_hash[:4] == "0000"
|
||||
```
|
||||
|
||||
To adjust the difficulty of the algorithm, we could modify the number of leading zeroes. But 4 is sufficient. You’ll find out that the addition of a single leading zero makes a mammoth difference to the time required to find a solution.
|
||||
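One way to feel that difference is to count the brute-force guesses a given difficulty requires — a standalone sketch, independent of our class:

```python
import hashlib

def attempts_for(last_proof: int, zeros: int) -> int:
    # Count guesses until sha256(f"{last_proof}{proof}") starts with `zeros` zeroes
    prefix = "0" * zeros
    proof = 0
    while not hashlib.sha256(f"{last_proof}{proof}".encode()).hexdigest().startswith(prefix):
        proof += 1
    return proof
```

Since each hex digit has 16 possible values, each extra leading zero multiplies the expected number of attempts by roughly 16.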
|
||||
Our class is almost complete and we’re ready to begin interacting with it using HTTP requests.
|
||||
|
||||
* * *
|
||||
|
||||
### Step 2: Our Blockchain as an API
|
||||
|
||||
We’re going to use the Python Flask Framework. It’s a micro-framework and it makes it easy to map endpoints to Python functions. This allows us to talk to our blockchain over the web using HTTP requests.
|
||||
|
||||
We’ll create three methods:
|
||||
|
||||
* `/transactions/new` to create a new transaction to a block
|
||||
|
||||
* `/mine` to tell our server to mine a new block.
|
||||
|
||||
* `/chain` to return the full Blockchain.
|
||||
|
||||
#### Setting up Flask
|
||||
|
||||
Our “server” will form a single node in our blockchain network. Let’s create some boilerplate code:
|
||||
|
||||
```
|
||||
import hashlib
|
||||
import json
|
||||
from textwrap import dedent
|
||||
from time import time
|
||||
from uuid import uuid4
|
||||
|
||||
from flask import Flask, jsonify
|
||||
|
||||
|
||||
class Blockchain(object):
|
||||
...
|
||||
|
||||
|
||||
# Instantiate our Node
|
||||
app = Flask(__name__)
|
||||
|
||||
# Generate a globally unique address for this node
|
||||
node_identifier = str(uuid4()).replace('-', '')
|
||||
|
||||
# Instantiate the Blockchain
|
||||
blockchain = Blockchain()
|
||||
|
||||
|
||||
@app.route('/mine', methods=['GET'])
|
||||
def mine():
|
||||
return "We'll mine a new Block"
|
||||
|
||||
@app.route('/transactions/new', methods=['POST'])
|
||||
def new_transaction():
|
||||
return "We'll add a new transaction"
|
||||
|
||||
@app.route('/chain', methods=['GET'])
|
||||
def full_chain():
|
||||
response = {
|
||||
'chain': blockchain.chain,
|
||||
'length': len(blockchain.chain),
|
||||
}
|
||||
return jsonify(response), 200
|
||||
|
||||
if __name__ == '__main__':
|
||||
app.run(host='0.0.0.0', port=5000)
|
||||
```
|
||||
|
||||
A brief explanation of what we’ve added above:
|
||||
|
||||
* Line 15: Instantiates our Node. Read more about Flask [here][7].
|
||||
|
||||
* Line 18: Create a random name for our node.
|
||||
|
||||
* Line 21: Instantiate our Blockchain class.
|
||||
|
||||
* Line 24–26: Create the /mine endpoint, which is a GET request.
|
||||
|
||||
* Line 28–30: Create the /transactions/new endpoint, which is a POST request, since we’ll be sending data to it.
|
||||
|
||||
* Line 32–38: Create the /chain endpoint, which returns the full Blockchain.
|
||||
|
||||
* Line 40–41: Runs the server on port 5000.
|
||||
|
||||
#### The Transactions Endpoint
|
||||
|
||||
This is what the request for a transaction will look like. It’s what the user sends to the server:
|
||||
|
||||
```
|
||||
{ "sender": "my address", "recipient": "someone else's address", "amount": 5}
|
||||
```
|
||||
|
||||
```
|
||||
import hashlib
|
||||
import json
|
||||
from textwrap import dedent
|
||||
from time import time
|
||||
from uuid import uuid4
|
||||
|
||||
from flask import Flask, jsonify, request
|
||||
|
||||
...
|
||||
|
||||
@app.route('/transactions/new', methods=['POST'])
|
||||
def new_transaction():
|
||||
values = request.get_json()
|
||||
|
||||
# Check that the required fields are in the POST'ed data
|
||||
required = ['sender', 'recipient', 'amount']
|
||||
if not all(k in values for k in required):
|
||||
return 'Missing values', 400
|
||||
|
||||
# Create a new Transaction
|
||||
index = blockchain.new_transaction(values['sender'], values['recipient'], values['amount'])
|
||||
|
||||
response = {'message': f'Transaction will be added to Block {index}'}
|
||||
return jsonify(response), 201
|
||||
```
|
||||
A method for creating Transactions
|
||||
|
||||
#### The Mining Endpoint
|
||||
|
||||
Our mining endpoint is where the magic happens, and it’s easy. It has to do three things:
|
||||
|
||||
1. Calculate the Proof of Work
|
||||
|
||||
2. Reward the miner (us) by adding a transaction granting us 1 coin
|
||||
|
||||
3. Forge the new Block by adding it to the chain
|
||||
|
||||
```
|
||||
import hashlib
|
||||
import json
|
||||
|
||||
from time import time
|
||||
from uuid import uuid4
|
||||
|
||||
from flask import Flask, jsonify, request
|
||||
|
||||
...
|
||||
|
||||
@app.route('/mine', methods=['GET'])
|
||||
def mine():
|
||||
# We run the proof of work algorithm to get the next proof...
|
||||
last_block = blockchain.last_block
|
||||
last_proof = last_block['proof']
|
||||
proof = blockchain.proof_of_work(last_proof)
|
||||
|
||||
# We must receive a reward for finding the proof.
|
||||
# The sender is "0" to signify that this node has mined a new coin.
|
||||
blockchain.new_transaction(
|
||||
sender="0",
|
||||
recipient=node_identifier,
|
||||
amount=1,
|
||||
)
|
||||
|
||||
# Forge the new Block by adding it to the chain
|
||||
previous_hash = blockchain.hash(last_block)
|
||||
block = blockchain.new_block(proof, previous_hash)
|
||||
|
||||
response = {
|
||||
'message': "New Block Forged",
|
||||
'index': block['index'],
|
||||
'transactions': block['transactions'],
|
||||
'proof': block['proof'],
|
||||
'previous_hash': block['previous_hash'],
|
||||
}
|
||||
return jsonify(response), 200
|
||||
```
|
||||
|
||||
Note that the recipient of the mined block is the address of our node. And most of what we’ve done here is just interact with the methods on our Blockchain class. At this point, we’re done, and can start interacting with our blockchain.
|
||||
|
||||
### Step 3: Interacting with our Blockchain
|
||||
|
||||
You can use plain old cURL or Postman to interact with our API over a network.
|
||||
|
||||
Fire up the server:
|
||||
|
||||
```
|
||||
$ python blockchain.py
|
||||
```
|
||||
|
||||
Let’s try mining a block by making a GET request to http://localhost:5000/mine:
|
||||
|
||||
![](https://cdn-images-1.medium.com/max/1600/1*ufYwRmWgQeA-Jxg0zgYLOA.png)
|
||||
Using Postman to make a GET request
|
||||
|
||||
Let’s create a new transaction by making a POST request to http://localhost:5000/transactions/new with a body containing our transaction structure:
|
||||
|
||||
![](https://cdn-images-1.medium.com/max/1600/1*O89KNbEWj1vigMZ6VelHAg.png)
|
||||
Using Postman to make a POST request
|
||||
|
||||
If you aren’t using Postman, then you can make the equivalent request using cURL:
|
||||
|
||||
```
|
||||
$ curl -X POST -H "Content-Type: application/json" -d '{ "sender": "d4ee26eee15148ee92c6cd394edd974e", "recipient": "someone-other-address", "amount": 5}' "http://localhost:5000/transactions/new"
|
||||
```
|
||||
I restarted my server, and mined two blocks, to give 3 in total. Let’s inspect the full chain by requesting http://localhost:5000/chain:

```
{
  "chain": [
    {
      "index": 1,
      "previous_hash": 1,
      "proof": 100,
      "timestamp": 1506280650.770839,
      "transactions": []
    },
    {
      "index": 2,
      "previous_hash": "c099bc...bfb7",
      "proof": 35293,
      "timestamp": 1506280664.717925,
      "transactions": [
        {
          "amount": 1,
          "recipient": "8bbcb347e0634905b0cac7955bae152b",
          "sender": "0"
        }
      ]
    },
    {
      "index": 3,
      "previous_hash": "eff91a...10f2",
      "proof": 35089,
      "timestamp": 1506280666.1086972,
      "transactions": [
        {
          "amount": 1,
          "recipient": "8bbcb347e0634905b0cac7955bae152b",
          "sender": "0"
        }
      ]
    }
  ],
  "length": 3
}
```

### Step 4: Consensus

This is very cool. We’ve got a basic Blockchain that accepts transactions and allows us to mine new Blocks. But the whole point of Blockchains is that they should be decentralized. And if they’re decentralized, how on earth do we ensure that they all reflect the same chain? This is called the problem of Consensus, and we’ll have to implement a Consensus Algorithm if we want more than one node in our network.

#### Registering new Nodes

Before we can implement a Consensus Algorithm, we need a way to let a node know about neighbouring nodes on the network. Each node on our network should keep a registry of other nodes on the network. Thus, we’ll need some more endpoints:

1. /nodes/register to accept a list of new nodes in the form of URLs.
2. /nodes/resolve to implement our Consensus Algorithm, which resolves any conflicts—to ensure a node has the correct chain.

We’ll need to modify our Blockchain’s constructor and provide a method for registering nodes:

```
...
from urllib.parse import urlparse
...


class Blockchain(object):
    def __init__(self):
        ...
        self.nodes = set()
        ...

    def register_node(self, address):
        """
        Add a new node to the list of nodes
        :param address: <str> Address of node. Eg. 'http://192.168.0.5:5000'
        :return: None
        """

        parsed_url = urlparse(address)
        self.nodes.add(parsed_url.netloc)
```
A method for adding neighbouring nodes to our Network

Note that we’ve used a set() to hold the list of nodes. This is a cheap way of ensuring that the addition of new nodes is idempotent—meaning that no matter how many times we add a specific node, it appears exactly once.
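
To see that idempotence in action, here is a standalone sketch of what register_node() does with its set (this is an illustration, not code from the tutorial itself):

```python
# Adding the same address repeatedly leaves exactly one entry in the set.
from urllib.parse import urlparse

nodes = set()
for address in ['http://192.168.0.5:5000', 'http://192.168.0.5:5000']:
    nodes.add(urlparse(address).netloc)

print(nodes)  # {'192.168.0.5:5000'}
```

No matter how many times the loop adds the address, the registry holds a single entry.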

#### Implementing the Consensus Algorithm

As mentioned, a conflict is when one node has a different chain to another node. To resolve this, we’ll make the rule that the longest valid chain is authoritative. In other words, the longest chain on the network is the de-facto one. Using this algorithm, we reach Consensus amongst the nodes in our network.
```
...
import requests


class Blockchain(object):
    ...

    def valid_chain(self, chain):
        """
        Determine if a given blockchain is valid
        :param chain: <list> A blockchain
        :return: <bool> True if valid, False if not
        """

        last_block = chain[0]
        current_index = 1

        while current_index < len(chain):
            block = chain[current_index]
            print(f'{last_block}')
            print(f'{block}')
            print("\n-----------\n")
            # Check that the hash of the block is correct
            if block['previous_hash'] != self.hash(last_block):
                return False

            # Check that the Proof of Work is correct
            if not self.valid_proof(last_block['proof'], block['proof']):
                return False

            last_block = block
            current_index += 1

        return True

    def resolve_conflicts(self):
        """
        This is our Consensus Algorithm, it resolves conflicts
        by replacing our chain with the longest one in the network.
        :return: <bool> True if our chain was replaced, False if not
        """

        neighbours = self.nodes
        new_chain = None

        # We're only looking for chains longer than ours
        max_length = len(self.chain)

        # Grab and verify the chains from all the nodes in our network
        for node in neighbours:
            response = requests.get(f'http://{node}/chain')

            if response.status_code == 200:
                length = response.json()['length']
                chain = response.json()['chain']

                # Check if the length is longer and the chain is valid
                if length > max_length and self.valid_chain(chain):
                    max_length = length
                    new_chain = chain

        # Replace our chain if we discovered a new, valid chain longer than ours
        if new_chain:
            self.chain = new_chain
            return True

        return False
```

The first method, valid_chain(), is responsible for checking if a chain is valid by looping through each block and verifying both the hash and the proof.

resolve_conflicts() is a method which loops through all our neighbouring nodes, downloads their chains and verifies them using the above method. If a valid chain is found whose length is greater than ours, we replace ours.

Let’s register the two endpoints to our API, one for adding neighbouring nodes and another for resolving conflicts:
```
@app.route('/nodes/register', methods=['POST'])
def register_nodes():
    values = request.get_json()

    nodes = values.get('nodes')
    if nodes is None:
        return "Error: Please supply a valid list of nodes", 400

    for node in nodes:
        blockchain.register_node(node)

    response = {
        'message': 'New nodes have been added',
        'total_nodes': list(blockchain.nodes),
    }
    return jsonify(response), 201


@app.route('/nodes/resolve', methods=['GET'])
def consensus():
    replaced = blockchain.resolve_conflicts()

    if replaced:
        response = {
            'message': 'Our chain was replaced',
            'new_chain': blockchain.chain
        }
    else:
        response = {
            'message': 'Our chain is authoritative',
            'chain': blockchain.chain
        }

    return jsonify(response), 200
```

At this point you can grab a different machine if you like, and spin up different nodes on your network. Or spin up processes using different ports on the same machine. I spun up another node on my machine, on a different port, and registered it with my current node. Thus, I have two nodes: [http://localhost:5000][9] and http://localhost:5001.

![](https://cdn-images-1.medium.com/max/1600/1*Dd78u-gmtwhQWHhPG3qMTQ.png)

Registering a new Node

I then mined some new Blocks on node 2, to ensure the chain was longer. Afterward, I called GET /nodes/resolve on node 1, where the chain was replaced by the Consensus Algorithm:

![](https://cdn-images-1.medium.com/max/1600/1*SGO5MWVf7GguIxfz6S8NVw.png)

Consensus Algorithm at Work

And that’s a wrap... Go get some friends together to help test out your Blockchain.

* * *

I hope that this has inspired you to create something new. I’m ecstatic about Cryptocurrencies because I believe that Blockchains will rapidly change the way we think about economies, governments and record-keeping.

**Update:** I’m planning on following up with a Part 2, where we’ll extend our Blockchain to have a Transaction Validation Mechanism as well as discuss some ways in which you can productionize your Blockchain.

--------------------------------------------------------------------------------

via: https://hackernoon.com/learn-blockchains-by-building-one-117428612f46

作者:[Daniel van Flymen][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://hackernoon.com/@vanflymen?source=post_header_lockup
[1]:https://learncryptography.com/hash-functions/what-are-hash-functions
[2]:https://www.python.org/downloads/
[3]:https://www.getpostman.com
[4]:https://github.com/dvf/blockchain
[5]:https://www.jetbrains.com/pycharm/
[6]:https://github.com/dvf/blockchain
[7]:http://flask.pocoo.org/docs/0.12/quickstart/#a-minimal-application
[8]:http://localhost:5000/transactions/new
[9]:http://localhost:5000
@ -1,3 +1,5 @@

lontow translating

5 ways open source can strengthen your job search
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/resume_career_document_general.png?itok=JEaFL2XI)

@ -1,3 +1,5 @@

Translating By MjSeven

How to install software applications on Linux
======

@ -1,3 +1,5 @@

hankchow translating

Check Linux Distribution Name and Version
======
You have joined a new company and want to install some software requested by the DevApp team, and you also want to restart a few services after installation. What to do?

@ -1,3 +1,4 @@

translating by kimii
cTop - A CLI Tool For Container Monitoring
======
These days Linux containers are famous; most of us are already working with them, and a few of us are starting to learn about them.

@ -1,80 +0,0 @@

leemeans translating
How to block local spoofed addresses using the Linux firewall
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDU_UnspokenBlockers_1110_A.png?itok=x8A9mqVA)

Attackers are finding sophisticated ways to penetrate even remote networks that are protected by intrusion detection and prevention systems. No IDS/IPS can halt or minimize attacks by hackers who are determined to take over your network. Improper configuration allows attackers to bypass all implemented network security measures.

In this article, I will explain how security engineers or system administrators can prevent these attacks.

Almost all Linux distributions come with a built-in firewall to secure processes and applications running on the Linux host. Most firewalls are designed as IDS/IPS solutions, whose primary purpose is to detect and prevent malicious packets from gaining access to a network.

A Linux firewall usually comes with two interfaces: iptables and ipchains. Most people refer to these interfaces as the "iptables firewall" or the "ipchains firewall." Both interfaces are designed as packet filters. Iptables acts as a stateful firewall, making decisions based on previous packets. Ipchains does not make decisions based on previous packets; hence, it is designed as a stateless firewall.

In this article, we will focus on the iptables firewall, which comes with kernel version 2.4 and beyond.

With the iptables firewall, you can create policies, or ordered sets of rules, which communicate to the kernel how it should treat specific classes of packets. Inside the kernel is the Netfilter framework. Netfilter is both a framework and the project name for the iptables firewall. As a framework, Netfilter allows iptables to hook functions designed to perform operations on packets. In a nutshell, iptables relies on the Netfilter framework to build firewall functionality such as filtering packet data.

Each iptables rule is applied to a chain within a table. An iptables chain is a collection of rules that are compared against packets with similar characteristics, while a table (such as nat or mangle) describes diverse categories of functionality. For instance, the mangle table alters packet data. Thus, specialized rules that alter packet data are applied to it, and filtering rules are applied to the filter table because the filter table filters packet data.

Iptables rules have a set of matches, along with a target, such as `DROP` or `REJECT`, that instructs iptables what to do with a packet that conforms to the rule. Thus, without a target and a set of matches, iptables can’t effectively process packets. A target simply refers to a specific action to be taken if a packet matches a rule. Matches, on the other hand, must be met by every packet in order for iptables to process them.

Now that we understand how the iptables firewall operates, let's look at how to use the iptables firewall to detect and reject or drop spoofed addresses.

### Turning on source address verification

The first step I, as a security engineer, take when I deal with spoofed addresses from remote hosts is to turn on source address verification in the kernel.

Source address verification is a kernel-level feature that drops packets pretending to come from your network. It uses the reverse path filter method to check whether the source of a received packet is reachable through the interface it came in on.

To turn on source address verification, use the simple shell script below instead of doing it manually:

```
#!/bin/sh

# author's name: Michael K Aboagye
# purpose of program: to enable reverse path filtering
# date: 7/02/18

# displays "Enabling source address verification" on the screen
echo -n "Enabling source address verification…"

# Overwrites the default value 0 with 1 to enable source address verification
echo 1 > /proc/sys/net/ipv4/conf/default/rp_filter

echo "completed"
```

The preceding script, when executed, displays the message `Enabling source address verification` without appending a new line. The default value of the reverse path filter is 0, which means no source validation. Thus, the second command simply overwrites the default value 0 with 1. A value of 1 means that the kernel will validate the source by confirming the reverse path.
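
Note that the change above is not persistent across reboots, and it only covers the `default` interface configuration. One common approach (an addition of mine, not part of the original article) is to set the corresponding keys in `/etc/sysctl.conf` so that reverse path filtering is enabled on all interfaces and survives a reboot; reload the file with `sysctl -p`:

```
# /etc/sysctl.conf: enable reverse path filtering persistently
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
```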

Finally, you can use the following command to drop or reject spoofed addresses from remote hosts by choosing either one of these targets: `DROP` or `REJECT`. However, I recommend using `DROP` for security reasons.

Replace the “IP_address” placeholder with your own IP address, as shown below. Also, you must choose to use either `REJECT` or `DROP`; the two targets don’t work together.

```
iptables -A INPUT -i internal_interface -s IP_address -j REJECT/DROP

iptables -A INPUT -i internal_interface -s 192.168.0.0/16 -j REJECT/DROP
```

This article provides only the basics of how to prevent spoofing attacks from remote hosts using the iptables firewall.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/2/block-local-spoofed-addresses-using-linux-firewall

作者:[Michael Kwaku Aboagye][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/revoks

@ -1,3 +1,5 @@

translating---geekpi

The Type Command Tutorial With Examples For Beginners
======

@ -1,195 +0,0 @@

hankchow translating

How to measure particulate matter with a Raspberry Pi
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bubblehands_fromRHT_520_0612LL.png?itok=_iQ2dO3S)
We regularly measure particulate matter in the air at our school in Southeast Asia. The values here are very high, particularly between February and May, when weather conditions are very dry and hot, and many fields burn. These factors negatively affect the quality of the air. In this article, I will show you how to measure particulate matter using a Raspberry Pi.

### What is particulate matter?

Particulate matter is fine dust or very small particles in the air. A distinction is made between PM10 and PM2.5: PM10 refers to particles that are smaller than 10µm; PM2.5 refers to particles that are smaller than 2.5µm. The smaller the particles—i.e., anything smaller than 2.5µm—the more dangerous they are to one's health, as they can penetrate into the alveoli and impact the respiratory system.

The World Health Organization recommends [limiting particulate matter][1] to the following values:

  * Annual average PM10: 20 µg/m³
  * Annual average PM2.5: 10 µg/m³
  * Daily average PM10: 50 µg/m³, with no permitted days on which it may be exceeded
  * Daily average PM2.5: 25 µg/m³, with no permitted days on which it may be exceeded

These values are below the limits set in most countries. In the European Union, an annual average of 40 µg/m³ for PM10 is allowed.

### What is the Air Quality Index (AQI)?

The Air Quality Index indicates how “good” or “bad” air is based on its particulate measurement. Unfortunately, there is no uniform standard for AQI because not all countries calculate it the same way. The Wikipedia article on the [Air Quality Index][2] offers a helpful overview. At our school, we are guided by the classification established by the United States' [Environmental Protection Agency][3].

![Air quality index][5]

Air quality index
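
To make the EPA classification concrete, here is a small sketch of how an AQI value can be computed from a PM2.5 concentration by linear interpolation between breakpoints. The breakpoint table is my own transcription of the EPA's 2012 PM2.5 scale, so verify it against the official documents before relying on it:

```python
# Sketch: EPA-style AQI from a 24-hour PM2.5 concentration (µg/m³).
# Breakpoints transcribed from the EPA's 2012 PM2.5 scale; verify before use.
BREAKPOINTS = [
    (0.0, 12.0, 0, 50),
    (12.1, 35.4, 51, 100),
    (35.5, 55.4, 101, 150),
    (55.5, 150.4, 151, 200),
    (150.5, 250.4, 201, 300),
    (250.5, 350.4, 301, 400),
    (350.5, 500.4, 401, 500),
]

def pm25_to_aqi(conc):
    for c_lo, c_hi, i_lo, i_hi in BREAKPOINTS:
        if c_lo <= conc <= c_hi:
            # Linear interpolation within the matching breakpoint range
            return round((i_hi - i_lo) / (c_hi - c_lo) * (conc - c_lo) + i_lo)
    raise ValueError("concentration out of range")

print(pm25_to_aqi(12.0))  # 50
print(pm25_to_aqi(55.3))
```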
### What do we need to measure particulate matter?

Measuring particulate matter requires only two things:

  * A Raspberry Pi (every model works; a model with WiFi is best)
  * An SDS011 particulate sensor

![Particulate sensor][7]

Particulate sensor

If you are using a Raspberry Pi Zero W, you will also need an adapter cable to a standard USB port because the Zero has only a Micro USB. These are available for about $20. The sensor comes with a USB adapter for the serial interface.

### Installation

For our Raspberry Pi we download the corresponding Raspbian Lite Image and [write it on the Micro SD card][8]. (I will not go into the details of setting up the WLAN connection; many tutorials are available online.)

If you want to have SSH enabled after booting, you need to create an empty file named `ssh` in the boot partition. The IP of the Raspberry Pi can best be obtained via your own router/DHCP server. You can then log in via SSH (the default password is raspberry):

```
$ ssh pi@192.168.1.5
```

First we need to install some packages on the Pi:

```
$ sudo apt install git-core python-serial python-enum lighttpd
```

Before we can start, we need to know which serial port the USB adapter is connected to. `dmesg` helps us:

```
$ dmesg
[    5.559802] usbcore: registered new interface driver usbserial
[    5.559930] usbcore: registered new interface driver usbserial_generic
[    5.560049] usbserial: USB Serial support registered for generic
[    5.569938] usbcore: registered new interface driver ch341
[    5.570079] usbserial: USB Serial support registered for ch341-uart
[    5.570217] ch341 1-1.4:1.0: ch341-uart converter detected
[    5.575686] usb 1-1.4: ch341-uart converter now attached to ttyUSB0
```

In the last line, you can see our interface: `ttyUSB0`. We now need a small Python script that reads the data and saves it in a JSON file, and then we will create a small HTML page that reads and displays the data.
### Reading data on the Raspberry Pi

We first create an instance of the sensor and then read the sensor every 5 minutes, for 30 seconds. These values can, of course, be adjusted. Between the measuring intervals, we put the sensor into a sleep mode to increase its lifespan (according to the manufacturer, the lifespan totals approximately 8000 hours).
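
For background on what happens on the wire: the SDS011 reports each measurement as a 10-byte frame over the serial link. The decoding sketch below is my own illustration of the commonly documented frame layout, not code from the aqi.py script:

```python
# Decode one SDS011 data frame. The commonly documented layout is:
# byte 0: 0xAA (header), byte 1: 0xC0 (data command),
# bytes 2-3: PM2.5 * 10 (little-endian), bytes 4-5: PM10 * 10,
# bytes 6-7: device ID, byte 8: checksum of bytes 2-7, byte 9: 0xAB (tail).
def parse_frame(frame):
    if frame[0] != 0xAA or frame[1] != 0xC0 or frame[9] != 0xAB:
        raise ValueError("not an SDS011 data frame")
    if sum(frame[2:8]) & 0xFF != frame[8]:
        raise ValueError("checksum mismatch")
    pm25 = (frame[2] | frame[3] << 8) / 10.0
    pm10 = (frame[4] | frame[5] << 8) / 10.0
    return pm25, pm10

frame = bytes([0xAA, 0xC0, 0x19, 0x02, 0x4A, 0x02, 0x01, 0x02, 0x6A, 0xAB])
print(parse_frame(frame))  # (53.7, 58.6)
```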

We can download the script with this command:

```
$ wget -O /home/pi/aqi.py https://raw.githubusercontent.com/zefanja/aqi/master/python/aqi.py
```

For the script to run without errors, two small things are still needed:

```
$ sudo chown pi:pi /var/www/html/
$ echo '[]' > /var/www/html/aqi.json
```

Now you can start the script:

```
$ chmod +x aqi.py
$ ./aqi.py
PM2.5:55.3, PM10:47.5
PM2.5:55.5, PM10:47.7
PM2.5:55.7, PM10:47.8
PM2.5:53.9, PM10:47.6
PM2.5:53.6, PM10:47.4
PM2.5:54.2, PM10:47.3
…
```

### Run the script automatically

So that we don’t have to start the script manually every time, we can let it start with a cronjob, e.g., with every restart of the Raspberry Pi. To do this, open the crontab file:

```
$ crontab -e
```

and add the following line at the end:

```
@reboot cd /home/pi/ && ./aqi.py
```

Now our script starts automatically with every restart.

### HTML page for displaying measured values and AQI

We have already installed a lightweight webserver, `lighttpd`. So we need to save our HTML, JavaScript, and CSS files in the directory `/var/www/html/` so that we can access the data from another computer or smartphone. With the next three commands, we simply download the corresponding files:

```
$ wget -O /var/www/html/index.html https://raw.githubusercontent.com/zefanja/aqi/master/html/index.html
$ wget -O /var/www/html/aqi.js https://raw.githubusercontent.com/zefanja/aqi/master/html/aqi.js
$ wget -O /var/www/html/style.css https://raw.githubusercontent.com/zefanja/aqi/master/html/style.css
```

The main work is done in the JavaScript file, which opens our JSON file, takes the last value, and calculates the AQI based on this value. Then the background colors are adjusted according to the scale of the EPA.

Now you simply open the address of the Raspberry Pi in your browser and look at the current particulate values, e.g., [http://192.168.1.5][9].

The page is very simple and can be extended, for example, with a chart showing the history of the last hours, etc. Pull requests are welcome.

The complete [source code is available on Github][10].

**[Enter our [Raspberry Pi week giveaway][11] for a chance at this arcade gaming kit.]**

### Wrapping up

For relatively little money, we can measure particulate matter with a Raspberry Pi. There are many possible applications, from a permanent outdoor installation to a mobile measuring device. At our school, we use both: There is a sensor that measures outdoor values day and night, and a mobile sensor that checks the effectiveness of the air conditioning filters in our classrooms.

[Luftdaten.info][12] offers guidance to build a similar sensor. The software is delivered ready to use, and the measuring device is even more compact because it does not use a Raspberry Pi. Great project!

Creating a particulate sensor is an excellent project to do with students in computer science classes or a workshop.

What do you use a [Raspberry Pi][13] for?

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/3/how-measure-particulate-matter-raspberry-pi

作者:[Stephan Tetzel][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/stephan
[1]:https://en.wikipedia.org/wiki/Particulates
[2]:https://en.wikipedia.org/wiki/Air_quality_index
[3]:https://en.wikipedia.org/wiki/United_States_Environmental_Protection_Agency
[5]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/air_quality_index.png?itok=FwmGf1ZS (Air quality index)
[7]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/particulate_sensor.jpg?itok=ddH3bBwO (Particulate sensor)
[8]:https://www.raspberrypi.org/documentation/installation/installing-images/README.md
[9]:http://192.168.1.5/
[10]:https://github.com/zefanja/aqi
[11]:https://opensource.com/article/18/3/raspberry-pi-week-giveaway
[12]:http://luftdaten.info/
[13]:https://openschoolsolutions.org/shutdown-servers-case-power-failure%e2%80%8a-%e2%80%8aups-nut-co/

@ -0,0 +1,202 @@

leemeans translating
A Command Line Productivity Tool For Tracking Work Hours
======

![](https://www.ostechnix.com/wp-content/uploads/2018/03/Moro-720x340.jpg)
Keeping track of your work hours will give you an insight into the amount of work you get done in a specific time frame. There are plenty of GUI-based productivity tools available on the Internet for tracking work hours. However, I couldn’t find a good CLI-based tool. Today, I stumbled upon a simple, yet useful tool named **“Moro”** for tracking work hours. Moro is a Finnish word which means “Hello”. Using Moro, you can find how much time you take to complete a specific task. It is free, open source and written using **NodeJS**.

### Moro – A Command Line Productivity Tool For Tracking Work Hours

Since Moro is written using NodeJS, make sure you have installed it on your system. If you haven’t installed it already, follow the link given below to install NodeJS and NPM in your Linux box.

Once NodeJS and NPM are installed, run the following command to install Moro.

```
$ npm install -g moro
```

### Usage

Moro’s working concept is very simple. It saves your work starting time, ending time and the break time in your system. At the end of each day, it will tell you how many hours you have worked!

When you arrive at the office, just type:

```
$ moro
```

Sample output:

```
💙 Moro \o/

✔ You clocked in at: 9:20
```

Moro will register this time as your starting time.

When you leave the office, again type:

```
$ moro
```

Sample output:

```
💙 Moro \o/

✔ You clocked out at: 19:22

ℹ Today looks like this so far:

┌──────────────────┬─────────────────────────┐
│ Today you worked │ 9 Hours and 72 Minutes  │
├──────────────────┼─────────────────────────┤
│ Clock in         │ 9:20                    │
├──────────────────┼─────────────────────────┤
│ Clock out        │ 19:22                   │
├──────────────────┼─────────────────────────┤
│ Break duration   │ 30 minutes              │
├──────────────────┼─────────────────────────┤
│ Date             │ 2018-03-19              │
└──────────────────┴─────────────────────────┘
ℹ Run moro --help to learn how to edit your clock in, clock out or break duration for today
```

Moro will register that time as your ending time.

Moro then subtracts the starting time from the ending time, subtracts the break time (default: 30 minutes) from that total, and gives you the total working hours for the day. For example, say you came to work at 10:00 in the morning and left at 17:30 in the evening. The total time you spent at the office is 7.5 hours (17:30 - 10:00); subtract the 30-minute break, and your total working time is 7 hours.
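
The same arithmetic can be sketched in a few lines (an illustration of the calculation only, not Moro's actual NodeJS code):

```python
# Illustration of the calculation Moro performs:
# worked = clock_out - clock_in - break
from datetime import timedelta

clock_in = timedelta(hours=10)               # clocked in at 10:00
clock_out = timedelta(hours=17, minutes=30)  # clocked out at 17:30
break_time = timedelta(minutes=30)           # default break duration

worked = clock_out - clock_in - break_time
print(worked)  # 7:00:00
```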
|
||||
|
||||
**Note:** Don’t confuse “moro” with “more” command like I did while writing this guide.
|
||||
|
||||
To see all your registered hours, run:
|
||||
```
|
||||
$ moro report --all
|
||||
|
||||
```
|
||||
|
||||
Just in case, you forgot to register the start time or end time, you can specify that later on the same.
|
||||
|
||||
For example, to register 10 am as start time, run:
|
||||
```
|
||||
$ moro hi 10:00
|
||||
|
||||
💙 Moro \o/
|
||||
|
||||
✔ You clocked in at: 10:00
|
||||
|
||||
⏰ Working until 18:00 will make it a full (7.5 hours) day
|
||||
|
||||
```
|
||||
|
||||
To register 17.30 as end time:
|
||||
```
|
||||
$ moro bye 17:30
|
||||
|
||||
💙 Moro \o/
|
||||
|
||||
✔ You clocked out at: 17:30
|
||||
|
||||
ℹ Today looks like this so far:
|
||||
|
||||
┌──────────────────┬───────────────────────┐
|
||||
│ Today you worked │ 7 Hours and 0 Minutes │
|
||||
├──────────────────┼───────────────────────┤
|
||||
│ Clock in │ 10:00 │
|
||||
├──────────────────┼───────────────────────┤
|
||||
│ Clock out │ 17:30 │
|
||||
├──────────────────┼───────────────────────┤
|
||||
│ Break duration │ 30 minutes │
|
||||
├──────────────────┼───────────────────────┤
|
||||
│ Date │ 2018-03-19 │
|
||||
└──────────────────┴───────────────────────┘
|
||||
ℹ Run moro --help to learn how to edit your clock in, clock out or break duration for today
|
||||
|
||||
```
|
||||
|
||||
You already know Moro will subtract 30 minutes for break time, by default. If you wanted to set a custom break time, you could simply set it using command:
|
||||
```
|
||||
$ moro break 45
|
||||
|
||||
```
|
||||
|
||||
Now, the break time is 45 minutes.
|
||||
|
||||
To clear all data:
|
||||
```
|
||||
$ moro clear --yes
|
||||
|
||||
💙 Moro \o/
|
||||
|
||||
✔ Database file deleted successfully
|
||||
|
||||
```
|
||||
|
||||
**Add notes**
|
||||
|
||||
Sometimes, you may want to add note while working. Don’t look for a separate note taking application. Moro will help you to add notes. To add a note, just run:
|
||||
```
|
||||
$ moro note mynotes
|
||||
|
||||
```
|
||||
|
||||
To search for the registered notes at later time, simply do:
|
||||
```
|
||||
$ moro search mynotes
|
||||
|
||||
```
|
||||
|
||||
**Change default settings**

The default full work day is 7.5 hours. Since the developer is from Finland, these are the official work hours there. You can, however, change this setting as per your country's work hours.

Say, for example, to set it to 7 hours, run:
```
$ moro config --day 7
```

The default break time can likewise be changed from 30 minutes, like below:
```
$ moro config --break 45
```

**Backup your data**

Like I already said, Moro stores the time tracking data in your home directory, and the file name is **.moro-data.db**.

You can, however, save the backup database file to a different location. To do so, move the **.moro-data.db** file to a different location of your choice and tell Moro to use that database file like below:
```
$ moro config --database-path /home/sk/personal/moro-data.db
```

As per the above command, I have assigned the default database file's location to the **/home/sk/personal** directory.
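Since the database is a single file, a backup is just a copy. Here is a minimal shell sketch; the backup directory name is an assumption for illustration, not part of Moro:

```shell
#!/bin/sh
# Copy Moro's database (default location, as described above) into a dated backup file.
db="$HOME/.moro-data.db"
backup_dir="$HOME/moro-backups"   # example location, pick your own
mkdir -p "$backup_dir"
if [ -f "$db" ]; then
  cp "$db" "$backup_dir/moro-data-$(date +%F).db"
fi
```

Running this from cron once a day would give you a rolling set of dated snapshots.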
For help, run:
```
$ moro --help
```

As you can see, Moro is very simple yet useful for tracking how much time you've spent getting your work done. It will be especially useful for freelancers and anyone who must get things done within a limited time frame.

And that's all for today. Hope this helps. More good stuff to come. Stay tuned!

Cheers!

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/moro-a-command-line-productivity-tool-for-tracking-work-hours/

作者:[SK][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.ostechnix.com/author/sk/
@ -1,207 +0,0 @@

hankchow translating

How to use Ansible to patch systems and install applications
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_osyearbook2016_sysadmin_cc.png?itok=Y1AHCKI4)
Have you ever wondered how to patch your systems, reboot, and continue working?

If so, you'll be interested in [Ansible][1], a simple configuration management tool that can make some of the hardest work easy: for example, system administration tasks that are complicated, take hours to complete, or have complex security requirements.

In my experience, one of the hardest parts of being a sysadmin is patching systems. Every time you get a Common Vulnerabilities and Exposures (CVE) notification or an Information Assurance Vulnerability Alert (IAVA) mandated by security, you have to kick into high gear to close the security gaps. (And, believe me, your security officer will hunt you down unless the vulnerabilities are patched.)

Ansible can reduce the time it takes to patch systems by running [packaging modules][2]. To demonstrate, let's use the [yum module][3] to update the system. Ansible can install, update, remove, or install from another location (e.g., `rpmbuild` from continuous integration/continuous development). Here is the task for updating the system:
```
- name: update the system
  yum:
    name: "*"
    state: latest
```

In the first line, we give the task a meaningful `name` so we know what Ansible is doing. In the next line, the `yum` module updates the CentOS virtual machine (VM), `name: "*"` tells yum to update everything, and, finally, `state: latest` updates to the latest RPM.

After updating the system, we need to restart and reconnect:
```
- name: restart system to reboot to newest kernel
  shell: "sleep 5 && reboot"
  async: 1
  poll: 0

- name: wait for 10 seconds
  pause:
    seconds: 10

- name: wait for the system to reboot
  wait_for_connection:
    connect_timeout: 20
    sleep: 5
    delay: 5
    timeout: 60

- name: install epel-release
  yum:
    name: epel-release
    state: latest
```

The `shell` module puts the system to sleep for 5 seconds, then reboots. We use `sleep` to prevent the connection from breaking, `async` to avoid timeout, and `poll` to fire and forget. We pause for 10 seconds to wait for the VM to come back and use `wait_for_connection` to reconnect to the VM as soon as it can make a connection. Then we install `epel-release` to test the RPM installation. You can run this playbook multiple times to show that it is idempotent; the only task that will show as changed is the reboot, since we are using the `shell` module. If you expect no actual changes, you can use `changed_when: False` to ignore the change reported by the `shell` module.

So far, we've learned how to update a system, restart the VM, reconnect, and install an RPM. Next we will install NGINX using the role in [Ansible Lightbulb][4]:
```
- name: Ensure nginx packages are present
  yum:
    name: nginx, python-pip, python-devel, devel
    state: present
  notify: restart-nginx-service

- name: Ensure uwsgi package is present
  pip:
    name: uwsgi
    state: present
  notify: restart-nginx-service

- name: Ensure latest default.conf is present
  template:
    src: templates/nginx.conf.j2
    dest: /etc/nginx/nginx.conf
    backup: yes
  notify: restart-nginx-service

- name: Ensure latest index.html is present
  template:
    src: templates/index.html.j2
    dest: /usr/share/nginx/html/index.html

- name: Ensure nginx service is started and enabled
  service:
    name: nginx
    state: started
    enabled: yes

- name: Ensure proper response from localhost can be received
  uri:
    url: "http://localhost:80/"
    return_content: yes
  register: response
  until: 'nginx_test_message in response.content'
  retries: 10
  delay: 1
```

And the handler that restarts the nginx service:
```
# handlers file for nginx-example
- name: restart-nginx-service
  service:
    name: nginx
    state: restarted
```

In this role, we install the RPMs `nginx`, `python-pip`, `python-devel`, and `devel` and install `uwsgi` with pip. Next, we use the `template` module to copy over the `nginx.conf` and `index.html` for the page to display. After that, we make sure the service is enabled on boot and started. Then we use the `uri` module to check the connection to the page.

Here is a playbook showing an example of updating, restarting, and installing an RPM, then continuing on to install nginx. The same pattern works with any other roles/applications you want:
```
- hosts: all
  roles:
    - centos-update
    - nginx-simple
```
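Assuming the playbook above is saved as `site.yml` and your hosts are listed in a file named `inventory` (both file names are assumptions, not from the article), running it is a single command:

```shell
# run the playbook against every host in the inventory (file names assumed)
ansible-playbook -i inventory site.yml
```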

Watch this demo video for more insight on the process:

[demo](https://asciinema.org/a/166437/embed?)

This was just a simple example of how to update, reboot, and continue. For simplicity, I added the packages without [variables][5]. Once you start working with a large number of hosts, you will need to change a few settings: in a production environment you might want to update one system at a time (not fire and forget) and actually wait a longer time for your systems to reboot and continue.

For more ways to automate your work with this tool, take a look at the other [Ansible articles on Opensource.com][6].

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/3/ansible-patch-systems

作者:[Jonathan Lozada De La Matta][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/jlozadad
[1]:https://www.ansible.com/overview/how-ansible-works
[2]:https://docs.ansible.com/ansible/latest/list_of_packaging_modules.html
[3]:https://docs.ansible.com/ansible/latest/yum_module.html
[4]:https://github.com/ansible/lightbulb/tree/master/examples/nginx-role
[5]:https://docs.ansible.com/ansible/latest/playbooks_variables.html
[6]:https://opensource.com/tags/ansible
139
sources/tech/20180322 Simple Load Balancing with DNS on Linux.md
Normal file
@ -0,0 +1,139 @@

Simple Load Balancing with DNS on Linux
======

![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/american-robin-920.jpg?itok=_B_RRbfj)
When you have server back ends built of multiple servers, such as clustered or mirrored web or file servers, a load balancer provides a single point of entry. Large busy shops spend big money on high-end load balancers that perform a wide range of tasks: proxy, caching, health checks, SSL processing, configurable prioritization, traffic shaping, and lots more.

But you don't want all that. You need a simple method for distributing workloads across all of your servers and providing a bit of failover, and you don't care whether it is perfectly efficient. DNS round-robin, and subdomain delegation with round-robin, provide two simple methods to achieve this.

DNS round-robin means mapping multiple servers to the same hostname, so that when users visit foo.example.com, multiple servers are available to handle their requests.

Subdomain delegation with round-robin is useful when you have multiple subdomains or when your servers are geographically dispersed. You have a primary nameserver, and then your subdomains have their own nameservers. Your primary nameserver refers all subdomain requests to their own nameservers. This usually improves response times, as the DNS protocol will automatically look for the fastest links.

### Round-Robin DNS

Round-robin has nothing to do with robins. According to my favorite librarian, it was originally a French phrase, _ruban rond_, or round ribbon. Way back in olden times, French government officials signed grievance petitions in non-hierarchical circular, wavy, or spoke patterns to conceal whoever originated the petition.

Round-robin DNS is also non-hierarchical: a simple configuration that takes a list of servers and sends requests to each server in turn. It does not perform true load balancing, as it does not measure loads and does no health checks, so if one of the servers is down, requests are still sent to that server. Its virtue lies in simplicity. If you have a little cluster of file or web servers and want to spread the load between them in the simplest way, then round-robin DNS is for you.
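The selection logic is trivial to picture in code. This toy shell sketch (purely illustrative, not how BIND works internally) hands out addresses from a hypothetical pool in rotation, one per query:

```shell
#!/bin/sh
# hypothetical round-robin pool; each call to next_answer returns the next address
servers="172.16.10.10 172.16.10.11 172.16.10.12"
i=0

next_answer() {
  set -- $servers            # load the pool into the positional parameters
  shift $(( i % $# ))        # rotate by the query counter
  echo "$1"
  i=$(( i + 1 ))
}

# six queries walk the pool twice, in order
for q in 1 2 3 4 5 6; do next_answer; done
```

Note that, just as the article says, a dead server stays in the rotation; nothing here measures load or health.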

All you do is create multiple A or AAAA records, mapping multiple servers to a single host name. This BIND example uses both IPv4 and IPv6 private address classes:
```
fileserv.example.com. IN A 172.16.10.10
fileserv.example.com. IN A 172.16.10.11
fileserv.example.com. IN A 172.16.10.12

fileserv.example.com. IN AAAA fd02:faea:f561:8fa0:1::10
fileserv.example.com. IN AAAA fd02:faea:f561:8fa0:1::11
fileserv.example.com. IN AAAA fd02:faea:f561:8fa0:1::12
```

Dnsmasq uses _/etc/hosts_ for A and AAAA records:
```
172.16.1.10 fileserv fileserv.example.com
172.16.1.11 fileserv fileserv.example.com
172.16.1.12 fileserv fileserv.example.com
fd02:faea:f561:8fa0:1::10 fileserv fileserv.example.com
fd02:faea:f561:8fa0:1::11 fileserv fileserv.example.com
fd02:faea:f561:8fa0:1::12 fileserv fileserv.example.com
```

Note that these examples are simplified, and there are multiple ways to resolve fully-qualified domain names, so please study up on configuring DNS.

Use the `dig` command to check your work. Replace `ns.example.com` with your name server:
```
$ dig @ns.example.com fileserv A fileserv AAAA
```

That should display both IPv4 and IPv6 round-robin records.

### Subdomain Delegation and Round-Robin

Subdomain delegation combined with round-robin is more work to set up, but it has some advantages. Use it when you have multiple subdomains or geographically-dispersed servers. Response times are often quicker, and a down server will not respond, so clients will not get hung up waiting for a reply. A short TTL, such as 60 seconds, helps this.

This approach requires multiple name servers. In the simplest scenario, you have a primary name server and two subdomains, each with its own name server. Configure your round-robin entries on the subdomain servers, then configure the delegations on your primary server.

In BIND on your primary name server, you'll need at least two additional configurations: a zone statement, and A/AAAA records in your zone data file. The delegation looks something like this on your primary name server:
```
ns1.sub.example.com. IN A 172.16.1.20
ns1.sub.example.com. IN AAAA fd02:faea:f561:8fa0:1::20
ns2.sub.example.com. IN A 172.16.1.21
ns2.sub.example.com. IN AAAA fd02:faea:f561:8fa0:1::21

sub.example.com. IN NS ns1.sub.example.com.
sub.example.com. IN NS ns2.sub.example.com.
```

Then each of the subdomain servers has its own zone file. The trick here is for each server to return its own IP address. The zone statement in `named.conf` is the same on both servers:
```
zone "sub.example.com" {
        type master;
        file "db.sub.example.com";
};
```

Then the data files are the same, except that the A/AAAA records use the server's own IP address. The SOA (start of authority) refers to the primary name server:
```
; first subdomain name server
$ORIGIN sub.example.com.
$TTL 60
sub.example.com IN SOA ns1.example.com. admin.example.com. (
        2018123456 ; serial
        3H         ; refresh
        15         ; retry
        3600000    ; expire
)

sub.example.com. IN NS ns1.sub.example.com.
sub.example.com. IN A 172.16.1.20
ns1.sub.example.com. IN AAAA fd02:faea:f561:8fa0:1::20

; second subdomain name server
$ORIGIN sub.example.com.
$TTL 60
sub.example.com IN SOA ns1.example.com. admin.example.com. (
        2018234567 ; serial
        3H         ; refresh
        15         ; retry
        3600000    ; expire
)

sub.example.com. IN NS ns1.sub.example.com.
sub.example.com. IN A 172.16.1.21
ns2.sub.example.com. IN AAAA fd02:faea:f561:8fa0:1::21
```

Next, make your round-robin entries on the subdomain name servers, and you're done. Now you have multiple name servers handling requests for your subdomains. Again, BIND is complex and has multiple ways to do the same thing, so your homework is to ensure that your configuration fits the way you use it.

Subdomain delegations are easier in Dnsmasq. On your primary server, add lines like this in `dnsmasq.conf` to point to the name servers for the subdomains:
```
server=/sub.example.com/172.16.1.20
server=/sub.example.com/172.16.1.21
server=/sub.example.com/fd02:faea:f561:8fa0:1::20
server=/sub.example.com/fd02:faea:f561:8fa0:1::21
```

Then configure round-robin on the subdomain name servers in `/etc/hosts`.
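For instance, a subdomain server's own round-robin entries in `/etc/hosts` might look like this (the host name and addresses are invented for illustration, following the earlier Dnsmasq example):

```
172.16.1.30 www www.sub.example.com
172.16.1.31 www www.sub.example.com
172.16.1.32 www www.sub.example.com
```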

For way more details and help, refer to these resources:

Learn more about Linux through the free ["Introduction to Linux"][1] course from The Linux Foundation and edX.
--------------------------------------------------------------------------------

via: https://www.linux.com/learn/intro-to-linux/2018/3/simple-load-balancing-dns-linux

作者:[CARLA SCHRODER][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/cschroder
[1]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
@ -0,0 +1,167 @@

如何在 Linux/Unix 的 Web 服务根文档目录(DocumentRoot)上设置只读文件权限
======

我该如何对存放在 /var/www/html/ 目录中的所有文件设置只读权限?

你可以使用 `chmod` 命令对 Linux/Unix/macOS/Apple OS X/*BSD 操作系统上的所有文件设置只读权限。这篇文章介绍如何在 Linux/Unix 的 Web 服务器(如 Nginx、Lighttpd、Apache 等)上设置只读文件权限。

[![Proper read-only permissions for Linux/Unix Nginx/Apache web server's directory][1]][1]

### 如何设置文件为只读模式

语法为:
```
### 仅针对文件 ##
chmod 0444 /var/www/html/*
chmod 0444 /var/www/html/*.php
```

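0444 这样的八进制模式在 `ls -l` 中显示为 `-r--r--r--`,即属主、属组和其他用户都只有读权限。下面是一个可以安全运行的小演示(临时文件路径仅为示例):

```shell
#!/bin/sh
# 创建一个临时文件,设置 0444,然后查看其权限字符串
f="/tmp/demo-readonly.$$"
touch "$f"
chmod 0444 "$f"
perms=$(ls -l "$f" | cut -c1-10)
echo "$perms"     # -r--r--r--
rm -f "$f"
```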
### 如何设置目录为只读模式

语法为:
```
### 仅针对目录 ##
chmod 0444 /var/www/html/
chmod 0444 /path/to/your/dir/
# ***************************************************************************
# 假如 Web 服务器的用户/用户组是 www-data,文件属主/属组是 ftp-data
# ***************************************************************************
# 设置目录下所有文件为只读
chmod -R 0444 /var/www/html/
# 设置文件/目录的属主和属组为 ftp-data
chown -R ftp-data:ftp-data /var/www/html/
# 将所有目录和子目录的权限设置为 0445(这样 Web 服务器的用户或用户组就可以读取我们的文件)
find /var/www/html/ -type d -print0 | xargs -0 -I {} chmod 0445 "{}"
```
要找到 /var/www/html 下的所有文件(包括子目录)并设置只读,键入:
```
### 仅对文件有效 ##
find /var/www/html -type f -iname "*" -print0 | xargs -I {} -0 chmod 0444 {}
```

然而,你需要对 /var/www/html 目录及其子目录设置只读和执行权限,这样 Web 服务器才能进入根文档目录,键入:
```
### 仅对目录有效 ##
find /var/www/html -type d -iname "*" -print0 | xargs -I {} -0 chmod 0544 {}
```

### 警惕写权限

请注意,/var/www/html/ 目录上的写权限会允许任何人删除文件或添加新文件。也就是说,你可能需要对 /var/www/html/ 目录本身设置只读权限:
```
### Web 根目录只读 ##
chmod 0555 /var/www/html
```

在某些情况下,根据你的设置要求,你可以改变文件的属主和属组来设置更严格的权限:
```
### 如果 /var/www/html 目录的属主是普通用户,你可以把属主设置为 root:root 或 httpd:httpd(推荐) ###
chown -R root:root /var/www/html/

### 确保 apache 拥有 /var/www/html/ ##
chown -R apache:apache /var/www/html/
```

### 关于 NFS 导出目录

你可以在 /etc/exports 文件中指定哪些目录应该拥有[只读或者读写权限][2]。这个文件定义 NFS 服务器上的各种共享及其权限。如:
```
# 对任何人只读权限
/var/www/html *(ro,sync)

# 对 192.168.1.10(upload.example.com)客户端给予读写权限
/var/www/html 192.168.1.10(rw,sync)
```

### 关于对 MS-Windows 客户端的 Samba(CIFS)只读共享

要将名为 sales 的共享设置为只读,更新 smb.conf,如下:
```
[sales]
	comment = Sales Data
	path = /export/cifs/sales
	read only = Yes
	guest ok = Yes
```

### 关于文件系统表(file systems table)

你可以在 Unix/Linux 的 /etc/fstab 文件中配置以只读模式挂载某些文件系统。你需要有专用分区,不要将其他系统分区设置为只读模式。

如下,在 /etc/fstab 文件中将 /srv/html 设置为只读模式:
```
/dev/sda6    /srv/html    ext4    ro    1 1
```

你可以使用 `mount` 命令[重新挂载分区为只读模式][3](需要 root 用户):
```
# mount -o remount,ro /dev/sda6 /srv/html
```
或者
```
# mount -o remount,ro /srv/html
```

上面的命令会尝试重新挂载已挂载在 /srv/html 上的文件系统。这是改变文件系统挂载标志的常用方法,特别是让只读的文件系统变为可写。这种方式不会改变设备或者挂载点。要让文件再次可写,键入:
```
# mount -o remount,rw /dev/sda6 /srv/html
```
或
```
# mount -o remount,rw /srv/html
```
### Linux:chattr 命令

你可以在 Linux 文件系统上使用 `chattr` 命令[将文件属性改为只读][4],如:
```
chattr +i /path/to/file.php
chattr +i /var/www/html/

# 查找 /var/www/html 下的所有文件并设置为只读 #
find /var/www/html -iname "*" -print0 | xargs -I {} -0 chattr +i {}
```

使用 `-i` 选项可删除只读属性:
```
chattr -i /path/to/file.php
```

FreeBSD、Mac OS X 和其他 BSD Unix 用户可使用 [`chflags` 命令][5]:
```
### 设置只读 ##
chflags schg /path/to/file.php

### 删除只读 ##
chflags noschg /path/to/file.php
```

--------------------------------------------------------------------------------

来源:https://www.cyberciti.biz/faq/howto-set-readonly-file-permission-in-linux-unix/

作者:[Vivek Gite][a]
译者:[yizhuoyan](https://github.com/yizhuoyan)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.cyberciti.biz
[1]:https://www.cyberciti.biz/media/new/faq/2012/04/linux-unix-set-read-only-file-system-permission-for-apache-nginx.jpg
[2]:https://www.cyberciti.biz/faq/centos-fedora-rhel-nfs-v4-configuration/
[3]:https://www.cyberciti.biz/faq/howto-freebsd-remount-partition/
[4]:https://www.cyberciti.biz/tips/linux-password-trick.html
[5]:https://www.cyberciti.biz/tips/howto-write-protect-file-with-immutable-bit.html
@ -1,94 +0,0 @@

如何安全地生成随机数 - 争论
======

### 使用 urandom

使用 urandom。使用 urandom。使用 urandom。使用 urandom。使用 urandom。使用 urandom。
### 但对于密码学密钥呢?

仍然使用 [urandom][6]。

### 为什么不是 SecureRandom、OpenSSL、havaged 或者 C 语言实现呢?

这些是用户空间的 CSPRNG(密码学安全的伪随机数生成器)。你应该用内核的 CSPRNG,因为:

* 内核可以访问原始设备熵。
* 它可以保证不在应用程序之间共享相同的状态。
* 一个好的内核 CSPRNG,像 FreeBSD 中的,也可以保证它在播种之前不给你随机数据。

研究过去十年中的随机数失败案例,你会看到一连串的用户空间随机数失败案例。[Debian 的 OpenSSH 崩溃][7]?用户空间随机数。安卓的比特币钱包[重复 ECDSA 随机 k 值][8]?用户空间随机数。可预测洗牌的赌博网站?用户空间随机数。

用户空间的生成器几乎总是依赖于内核的生成器。即使它们不这样做,整个系统的安全性也会确保如此。**但用户空间的 CSPRNG 不会增加防御深度;相反,它会产生两个单点故障。**

### 手册页不是说使用 /dev/random 嘛?

这个稍后详述,先保留你的意见。你应该忽略掉手册页。不要使用 /dev/random。/dev/random 和 /dev/urandom 之间的区别是 Unix 的设计缺陷。手册页不想承认这一点,因此它制造了一个并不存在的安全顾虑。把 random(4) 中关于密码学的建议当作传说,继续你的生活吧。

### 但是如果我需要的是真随机值,而非伪随机值呢?

urandom 和 /dev/random 提供的是同一类型的随机数。与流行的观念相反,/dev/random 不提供“真正的随机”。从密码学上来说,你通常不需要“真正的随机”。

urandom 和 /dev/random 都基于一个简单的想法。它们的设计与流密码的设计密切相关:一个小秘密被延伸为由不可预测值构成的不确定长的流。这里的秘密是“熵”,而流是“输出”。

只有在 Linux 上,/dev/random 和 urandom 仍然有意义上的不同。Linux 内核的 CSPRNG 定期进行密钥更新(通过收集更多的熵)。但是 /dev/random 也试图跟踪内核池中剩余的熵,并且当它认为剩余熵不足时,偶尔也会罢工。这种设计和我所说的一样蠢;这就像一种根据“密钥流”中还剩多少“密钥”来决定是否工作的 AES-CTR 设计。

如果你使用 /dev/random 而非 urandom,那么当 Linux 对自己的 RNG(随机数生成器)如何工作感到困惑时,你的程序将不可预测地(或者如果你是攻击者,非常可预测地)挂起。使用 /dev/random 会使你的程序不太稳定,但这不会让你在密码学上更安全。

### 这是个缺陷,对吗?

不是,但存在一个你可能想要了解的 Linux 内核 bug,即使这并不能改变你应该使用哪一个 RNG。

在 Linux 上,如果你的软件在引导时立即运行,或者操作系统刚刚安装完成,那么你的代码可能会与 RNG 发生竞争。这很糟糕,因为如果你赢了竞争,那么你可能会在一段时间内从 urandom 获得可预测的输出。这是 Linux 中的一个 bug,如果你正在为 Linux 嵌入式设备构建平台级代码,那你需要了解它。

在 Linux 上,这确实是 urandom(而不是 /dev/random)的问题。这也是 [Linux 内核中的错误][9]。但它也容易在用户空间中修复:在引导时,明确地为 urandom 提供种子。长期以来,大多数 Linux 发行版都是这么做的。但**不要**切换到不同的 CSPRNG。

### 在其它操作系统上呢?

FreeBSD 和 OS X 消除了 urandom 和 /dev/random 之间的区别;这两个设备的行为是相同的。不幸的是,手册页在解释为什么这样做上干得很糟糕,并延续了 Linux 上 urandom 可怕的神话。

无论你使用 /dev/random 还是 urandom,FreeBSD 的内核加密 RNG 都不会阻塞。除非它还没有被提供种子,在这种情况下,这两者都会阻塞。与 Linux 不同,这种行为是有道理的,Linux 应该采用它。但是,如果你是一名应用程序开发人员,这对你几乎没有什么影响:无论是 Linux、FreeBSD 还是 iOS,使用 urandom 吧。

### 太长了,懒得看

直接使用 urandom 吧。

### 结语

[ruby-trunk Feature #9569][10]

> 现在,SecureRandom.random_bytes 会先尝试检测要使用的 OpenSSL,然后才尝试检测 /dev/urandom。我认为这应该反过来。在这两种情况下,你只需要把随机字节取出来,所以 SecureRandom 可以跳过中间人(和第二个故障点),如果可用的话直接与 /dev/urandom 交互。

总结:

> /dev/urandom 不适合用来直接生成会话密钥和频繁生成其他应用程序级随机数据
>
> GNU/Linux 上的 random(4) 手册所述……

感谢 Matthew Green、Nate Lawson、Sean Devlin、Coda Hale 和 Alex Balducci 阅读了本文的草稿。公正警告:Matthew 只是大体上同意我的观点。

--------------------------------------------------------------------------------

via: https://sockpuppet.org/blog/2014/02/25/safely-generate-random-numbers/

作者:[Thomas;Erin;Matasano][a]
译者:[kimii](https://github.com/kimii)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://sockpuppet.org/blog
[1]:http://blog.cr.yp.to/20140205-entropy.html
[2]:http://cr.yp.to/talks/2011.09.28/slides.pdf
[3]:http://golang.org/src/pkg/crypto/rand/rand_unix.go
[4]:http://security.stackexchange.com/questions/3936/is-a-rand-from-dev-urandom-secure-for-a-login-key
[5]:http://stackoverflow.com/a/5639631
[6]:https://twitter.com/bramcohen/status/206146075487240194
[7]:http://research.swtch.com/openssl
[8]:http://arstechnica.com/security/2013/08/google-confirms-critical-android-crypto-flaw-used-in-5700-bitcoin-heist/
[9]:https://factorable.net/weakkeys12.extended.pdf
[10]:https://bugs.ruby-lang.org/issues/9569
@ -1,16 +1,14 @@

translating---geekpi

How to resolve mount.nfs: Stale file handle error
如何解决 mount.nfs:失效的文件句柄错误
======
Learn how to resolve the mount.nfs: Stale file handle error on the Linux platform. This Network File System error can be resolved from the client or server end.
了解如何解决 mount.nfs:Linux 平台上的失效文件句柄错误。这个网络文件系统错误可以在客户端或者服务端解决。

_![][1]_

When you are using the Network File System in your environment, you must have seen the `mount.nfs: Stale file handle` error at times. This error denotes that the NFS share is unable to mount since something has changed since the last known good configuration.
当你在你的环境中使用网络文件系统时,你一定不时看到 `mount.nfs:Stale file handle` 错误。此错误表示 NFS 共享无法挂载,因为自上次正常配置之后有些东西已经更改。

Whenever you reboot the NFS server, or some of the NFS processes are not running on the client or server, or the share is not properly exported at the server, these can be reasons for this error. Moreover, it is irritating when this error comes up on a previously mounted NFS share, because it means the configuration part is correct since it was mounted before. In such a case, one can try the following commands:
无论何时你重启 NFS 服务器,或者某些 NFS 进程未在客户端或服务器上运行,或者共享未在服务器上正确导出,这些都可能是这个错误的原因。此外,当这个错误发生在先前挂载过的 NFS 共享上时尤其令人不快,因为这意味着配置部分是正确的,毕竟之前挂载成功过。在这种情况下,可以尝试下面的命令:

Make sure NFS services are running well on both the client and the server.
确保 NFS 服务在客户端和服务器上运行良好。

```
# service nfs status
@ -20,9 +18,9 @@ nfsd (pid 12009 12008 12007 12006 12005 12004 12003 12002) is running...
rpc.rquotad (pid 11988) is running...
```

> Stay connected to your favorite windows applications from anywhere on any device with [windows 7 cloud desktop][2] from CloudDesktopOnline.com. Get Office 365 with expert support and free migration from [Apps4Rent.com][3].
> 通过 CloudDesktopOnline.com 上的 [Windows 7 云桌面][2]在任意位置的任何设备上保持与你最喜爱的 Windows 程序的连接。从 [Apps4Rent.com][3] 获得有专家支持的 Office 365 和免费迁移。

If the NFS share is currently mounted on the client, then un-mount it forcefully and try to remount it on the NFS client. Check whether it is properly mounted with the `df` command and by changing directory inside it.
如果 NFS 共享目前挂载在客户端上,则强制卸载它并尝试在 NFS 客户端上重新挂载它。通过 `df` 命令检查它是否正确挂载,并进入其中的目录确认。

```
# umount -f /mydata_nfs
@ -34,9 +32,9 @@ If NFS share currently mounted on client, then un-mount it forcefully and try to
server:/nfs_share 41943040 892928 41050112 3% /mydata_nfs
```

In the above mount command, server can be an IP or the [hostname][4] of the NFS server.
在上面的挂载命令中,server 可以是 NFS 服务器的 IP 或[主机名][4]。

If you are getting an error while forcefully un-mounting, like below:
如果你在强制取消挂载时遇到像下面的错误:

```
# umount -f /mydata_nfs
@ -45,7 +43,7 @@ umount: /mydata_nfs: device is busy
umount2: Device or resource busy
umount: /mydata_nfs: device is busy
```
Then you can check which processes or users are using that mount point with the `lsof` command, like below:
然后你可以用 `lsof` 命令来检查哪个进程或用户正在使用该挂载点,如下所示:

```
# lsof |grep mydata_nfs
@ -57,9 +55,9 @@ bash 20092 oracle11 cwd unknown
bash 25040 oracle11 cwd unknown /mydata_nfs/MUYR (stat: Stale NFS file handle)
```

You can see in the above example that 4 PIDs are using some files on the said mount point. Try killing them off to free the mount point. Once done, you will be able to un-mount it properly.
如果你在上面的示例中看到共有 4 个 PID 正在使用该挂载点上的某些文件,尝试杀死它们以释放挂载点。完成后,你将能够正确卸载它。

Sometimes the mount command still gives the same error. Then try mounting after restarting the NFS service at the client using the command below.
有时 mount 命令仍会报相同的错误。接着尝试在客户端用下面的命令重启 NFS 服务后再挂载。

```
# service nfs restart
@ -73,18 +71,18 @@ Starting NFS mountd: [ OK ]
Starting NFS daemon: [ OK ]
```

Also read: [How to restart NFS step by step in HPUX][5]
另请阅读:[如何在 HPUX 中逐步重启 NFS][5]

Even if this didn't solve your issue, the final step is to restart services at the NFS server. Caution! This will disconnect all NFS shares which are exported from the NFS server. All clients will see the mount points disconnect. This step is where 99% of the time you will get your issue resolved. If not, then the [NFS configurations][6] must be checked, provided you changed the configuration and started seeing this error afterward.
即使这没有解决你的问题,最后一步是在 NFS 服务器上重启服务。警告!这将断开从该 NFS 服务器导出的所有 NFS 共享,所有客户端将看到挂载点断开。这一步能解决 99% 的问题。如果没有,并且你是在修改配置后才开始看到这个错误的,那么请务必检查 [NFS 配置][6]。

Outputs in the above post are from an RHEL 6.3 server. Drop us your comments related to this post.
上面文章中的输出来自 RHEL6.3 服务器。请把你对本文的评论发送给我们。

--------------------------------------------------------------------------------

via: https://kerneltalks.com/troubleshooting/resolve-mount-nfs-stale-file-handle-error/

作者:[KernelTalks][a]
译者:[译者ID](https://github.com/译者ID)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -0,0 +1,80 @@
|
||||
如何使用Linux防火墙隔离局域网受欺骗地址
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDU_UnspokenBlockers_1110_A.png?itok=x8A9mqVA)
|
||||
|
||||
|
||||
即便是被入侵检测和隔离系统保护的远程网络,黑客们也在寻找精致的方法入侵。IDS/IPS是不能停止或者减少那些想要接管你的网络的黑客的攻击的。不恰当的配置允许攻击者绕过所有部署的安全措施。
|
||||
|
||||
在这篇文章中,我将会解释安全工程师或者系统管理员怎样可以避免这些攻击。
|
||||
|
||||
几乎所有的Linux发行版都带着一个内建的防火墙来保护运行在Linux宿主机上的进程和应用程序。大多数都按照IDS/IPS解决方案设计,这样的设计的主要目的是检测和避免恶意包获取网络的进入权。
|
||||
|
||||
Linux防火墙通常带有两个接口:iptable和ipchain程序。大多数人将这些接口称作iptables防火墙或者ipchains防火墙。这两个接口都被设计成包过滤器。iptables是有状态防火墙,基于先前的包做出决定。
|
||||
|
||||
在这篇文章中,我们将会专注于内核2.4之后出现的iptables防火墙。
|
||||
|
||||
有了iptables防火墙,你可以创建策略或者有序的规则集,规则集可以告诉内核如何对待特定的数据包。在内核中的是Netfilter框架。Netfilter既是框架也是iptables防火墙的工程名。作为一个框架,Netfilter允许iptables勾取被设计来操作数据包的函数。概括地说,iptables依靠Netfilter框架构筑诸如过滤数据包数据的功能。
|
||||
|
||||
每个iptables规则都被应用到一个含表的链中。一个iptables链就是一个比较包中相似字符的规则的集合。而表(例如nat或者mangle)则描述不同的功能目录。例如,一个mangle表转化包数据。因此,特定的改变包数据的规则被应用到这里,而过滤规则被应用到filter表,因为filter表过滤包数据。
|
||||
|
||||
iptables规则有一系列匹配,伴随着一个诸如`Drop`或者`Deny`的目标,这可以告诉iptables对一个包做什么符合规则。因此,没有一个目标和一系列匹配,iptables就不能有效地处理包。如果一个包匹配一条规则,一个目标简单地指向一个将要采取的特定措施。另一方面,为了让iptables处理,匹配必须被每个包满足吗。
|
||||
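可以用一个 iptables-save 格式的小片段来直观说明上面的结构(仅为示意,其中的接口名 eth1 和网段都是假设的):

```
*filter
# 这条规则位于 filter 表的 INPUT 链中:
# “-i eth1 -s 192.168.0.0/16” 是匹配条件,“-j DROP” 是目标
-A INPUT -i eth1 -s 192.168.0.0/16 -j DROP
COMMIT
```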
|
||||
|
||||
现在我们已经知道iptables防火墙如何工作,开始着眼于如何使用iptables防火墙检测并拒绝或丢弃被欺骗的地址吧。
|
||||
|
||||
### 打开源地址验证
|
||||
|
||||
作为一个安全工程师,在处理远程主机被欺骗地址的时候,我采取的第一步是在内核打开源地址验证。
|
||||
|
||||
源地址验证是一种内核层级的特性,这种特性丢弃那些伪装成来自你的网络的包。这种特性使用反向路径过滤器方法来检查收到的包的源地址是否可以通过包到达的接口可以到达。(译注:到达的包的源地址应该可以从它到达的网络接口反向到达,只需反转源地址和目的地址就可以达到这样的效果)
|
||||
|
||||
利用下面简单的脚本可以打开源地址验证而不用手工操作:
|
||||
```
|
||||
#!/bin/sh
|
||||
|
||||
#作者: Michael K Aboagye
|
||||
|
||||
#程序目标: 打开反向路径过滤
|
||||
|
||||
#日期: 7/02/18
|
||||
|
||||
#在屏幕上显示 “enabling source address verification”
|
||||
|
||||
echo -n "Enabling source address verification…"
|
||||
|
||||
#将值0覆盖为1来打开源地址验证
|
||||
|
||||
echo 1 > /proc/sys/net/ipv4/conf/default/rp_filter
|
||||
|
||||
echo "completed"
|
||||
|
||||
```
|
||||
|
||||
先前的脚本在执行的时候只显示了`Enabling source address verification`这条信息而没有添加新行。默认的反向路径过滤的值是0,0表示没有源验证。因此,第二行简单地将默认值0覆盖为1。1表示内核将会通过确认反向路径来验证源(地址)。
|
||||
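作为补充,下面是一个最小的检查示例(假设内核把该设置暴露在 /proc/sys/net/ipv4 下,Linux 系统通常如此),可以用来确认当前的 rp_filter 取值:

```shell
# 查看 default 配置的反向路径过滤值:0 = 关闭,1 = 严格模式,2 = 松散模式
cat /proc/sys/net/ipv4/conf/default/rp_filter
```

需要注意,内核对每个接口实际生效的是 conf/all 与 conf/&lt;接口&gt; 两个值中的较大者,因此检查时最好把 all 也一并看一下。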
|
||||
最后,你可以使用下面的命令通过选择`DROP`或者`REJECT`目标中的一个来丢弃或者拒绝来自远端主机的被欺骗地址。但是,处于安全原因的考虑,我建议使用`DROP`目标。
|
||||
|
||||
像下面这样,用你自己的IP地址代替“IP-address” 占位符。另外,你必须选择使用`REJECT`或者`DROP`中的一个,这两个目标不能同时使用。
|
||||
```
|
||||
iptables -A INPUT -i internal_interface -s IP_address -j REJECT/DROP
|
||||
|
||||
|
||||
|
||||
iptables -A INPUT -i internal_interface -s 192.168.0.0/16 -j REJECT/DROP
|
||||
|
||||
```
|
||||
|
||||
这篇文章只提供了如何使用iptables防火墙来避免远端欺骗攻击的基础(知识)。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/2/block-local-spoofed-addresses-using-linux-firewall
|
||||
|
||||
作者:[Michael Kwaku Aboagye][a]
|
||||
译者:[leemeans](https://github.com/leemeans)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/revoks
|
@ -0,0 +1,188 @@
|
||||
如何使用树莓派测定颗粒物
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bubblehands_fromRHT_520_0612LL.png?itok=_iQ2dO3S)
|
||||
我们在东南亚的学校定期测定空气中的颗粒物。这里的测定值非常高,尤其是在二到五月之间,干燥炎热、土地干旱等各种因素都对空气质量产生了不利的影响。我将会在这篇文章中展示如何使用树莓派来测定颗粒物。
|
||||
|
||||
### 什么是颗粒物?
|
||||
|
||||
颗粒物就是粉尘或者悬浮在空气中的微小颗粒。PM10 和 PM2.5 之间的差别是:PM10 指的是粒径小于 10 微米的颗粒,而 PM2.5 指的是粒径小于 2.5 微米的颗粒。颗粒越小,对人的健康危害越大,因为粒径小于 2.5 微米的颗粒能被吸入肺泡中,进而对呼吸系统造成影响。
|
||||
|
||||
世界卫生组织的建议[颗粒物浓度][1]是:
|
||||
|
||||
* 年均 PM10 不高于20 µg/m³
|
||||
* 年均 PM2.5 不高于10 µg/m³
|
||||
* 不允许超标时,日均 PM10 不高于50 µg/m³
|
||||
* 不允许超标时,日均 PM2.5 不高于25 µg/m³
|
||||
|
||||
以上数值实际上是低于大多数国家的标准的,例如欧盟对于 PM10 所允许的年均值是不高于40 µg/m³。
|
||||
|
||||
### 什么是空气质量指数(AQI, Air Quality Index)?
|
||||
|
||||
空气质量指数按照颗粒物的测定值来评价空气质量的好坏,然而由于各国之间的计算方式有所不同,这个指数并没有统一的标准。维基百科上关于[空气质量指数][2]的词条对此给出了一个概述。我们学校则以[美国环境保护协会][3](EPA, Environment Protection Agency)建立的分类法来作为依据。
|
||||
|
||||
![空气质量指数][5]
|
||||
|
||||
空气质量指数
|
||||
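上面的 EPA 分级可以通过分段线性插值,由 PM2.5 浓度直接算出 AQI。下面是一个最小的 shell 草图(只收录了前四个分段,分段端点取自 EPA 常用的 PM2.5 分段表;仅作示意,本文项目中的实际计算由后文的 JavaScript 完成):

```shell
# pm25_aqi:按分段线性插值把 PM2.5 浓度(µg/m³)换算为 AQI
pm25_aqi() {
  awk -v c="$1" 'BEGIN {
    # 每段依次为:浓度下限 浓度上限 AQI下限 AQI上限(仅列出前四段)
    split("0 12 0 50;12.1 35.4 51 100;35.5 55.4 101 150;55.5 150.4 151 200", seg, ";")
    for (i = 1; i <= 4; i++) {
      split(seg[i], b, " ")
      if (c >= b[1] && c <= b[2]) {
        # 线性插值后四舍五入
        printf "%d\n", (b[4] - b[3]) / (b[2] - b[1]) * (c - b[1]) + b[3] + 0.5
        exit
      }
    }
  }'
}

pm25_aqi 55.3   # 后文测得的 PM2.5 值,输出:150
```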
|
||||
### 测定颗粒物需要哪些准备?
|
||||
|
||||
测定颗粒物只需要以下两种器材:
|
||||
* 树莓派(款式不限,最好带有 WiFi)
|
||||
* SDS011 颗粒物传感器
|
||||
|
||||
|
||||
|
||||
![颗粒物传感器][7]
|
||||
|
||||
颗粒物传感器
|
||||
|
||||
如果是只带有 Micro USB的树莓派Zero W,那还需要一根连接到标准 USB 端口的适配线,只需要20美元,而传感器则自带适配串行接口的 USB 适配器。
|
||||
|
||||
### 安装过程
|
||||
|
||||
对于树莓派,只需要下载对应的 Raspbian Lite 镜像并且[写入到 Micro SD 卡][8]上就可以了(网上很多教程都有介绍如何设置 WLAN 连接,我就不细说了)。
|
||||
|
||||
如果要使用 SSH,那还需要在启动分区建立一个名为 `ssh` 的空文件。树莓派的 IP 通过路由器或者 DHCP 服务器获取,随后就可以通过 SSH 登录到树莓派了(默认密码是 raspberry):
|
||||
```
|
||||
$ ssh pi@192.168.1.5
|
||||
|
||||
```
|
||||
|
||||
首先我们需要在树莓派上安装一下这些包:
|
||||
```
|
||||
$ sudo apt install git-core python-serial python-enum lighttpd
|
||||
|
||||
```
|
||||
|
||||
在开始之前,我们可以用 `dmesg` 来获取 USB 适配器连接的串行接口:
|
||||
```
|
||||
$ dmesg
|
||||
|
||||
[ 5.559802] usbcore: registered new interface driver usbserial
|
||||
|
||||
[ 5.559930] usbcore: registered new interface driver usbserial_generic
|
||||
|
||||
[ 5.560049] usbserial: USB Serial support registered for generic
|
||||
|
||||
[ 5.569938] usbcore: registered new interface driver ch341
|
||||
|
||||
[ 5.570079] usbserial: USB Serial support registered for ch341-uart
|
||||
|
||||
[ 5.570217] ch341 1-1.4:1.0: ch341-uart converter detected
|
||||
|
||||
[ 5.575686] usb 1-1.4: ch341-uart converter now attached to ttyUSB0
|
||||
|
||||
```
|
||||
|
||||
在最后一行,可以看到接口 `ttyUSB0`。然后我们需要写一个 Python 脚本来读取传感器的数据并以 JSON 格式存储,在通过一个 HTML 页面就可以把数据展示出来了。
|
||||
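顺带一提,这个接口名也可以从 dmesg 的输出里自动提取出来。下面用文中的那行日志作为样例输入来演示(假设适配器是 ch341、设备名形如 ttyUSB0):

```shell
# 用文中的那行内核日志做样例输入;实际使用时把 printf 这一行换成 dmesg 即可
log='[    5.575686] usb 1-1.4: ch341-uart converter now attached to ttyUSB0'
port=$(printf '%s\n' "$log" | grep -o 'ttyUSB[0-9]*' | tail -n 1)
echo "/dev/$port"   # 输出:/dev/ttyUSB0
```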
|
||||
### 在树莓派上读取数据
|
||||
|
||||
首先创建一个传感器实例,每5分钟读取一次传感器的数据,持续30秒,这些数值后续都可以调整。在每两次测定的间隔,我们把传感器调到睡眠模式以延长它的使用寿命(厂商认为元件的寿命大约8000小时)。
|
||||
|
||||
我们可以使用以下命令来下载 Python 脚本:
|
||||
```
|
||||
$ wget -O /home/pi/aqi.py https://raw.githubusercontent.com/zefanja/aqi/master/python/aqi.py
|
||||
|
||||
```
|
||||
|
||||
另外还需要执行以下两条命令来保证脚本正常运行:
|
||||
```
|
||||
$ sudo chown pi:pi /var/www/html/
|
||||
|
||||
$ echo "[]" > /var/www/html/aqi.json
|
||||
|
||||
```
|
||||
|
||||
下面就可以执行脚本了:
|
||||
```
|
||||
$ chmod +x aqi.py
|
||||
|
||||
$ ./aqi.py
|
||||
|
||||
PM2.5:55.3, PM10:47.5
|
||||
|
||||
PM2.5:55.5, PM10:47.7
|
||||
|
||||
PM2.5:55.7, PM10:47.8
|
||||
|
||||
PM2.5:53.9, PM10:47.6
|
||||
|
||||
PM2.5:53.6, PM10:47.4
|
||||
|
||||
PM2.5:54.2, PM10:47.3
|
||||
|
||||
…
|
||||
|
||||
```
|
||||
|
||||
### 自动化执行脚本
|
||||
|
||||
只需要使用诸如 crontab 的服务,我们就不需要每次都手动启动脚本了。按照以下命令打开 crontab 文件:
|
||||
```
|
||||
$ crontab -e
|
||||
|
||||
```
|
||||
|
||||
在文件末尾添加这一行:
|
||||
```
|
||||
@reboot cd /home/pi/ && ./aqi.py
|
||||
|
||||
```
|
||||
|
||||
现在我们的脚本就会在树莓派每次重启后自动执行了。
|
||||
|
||||
### 展示颗粒物测定值和空气质量指数的 HTML 页面
|
||||
|
||||
我们在前面已经安装了一个轻量级的 web 服务器 `lighttpd`,所以我们需要把 HTML、JavaScript、CSS 文件放置在 `/var/www/html` 目录中,这样就能通过电脑和智能手机访问到相关数据了。执行下面的三条命令,可以下载到对应的文件:
|
||||
|
||||
```
|
||||
$ wget -O /var/www/html/index.html https://raw.githubusercontent.com/zefanja/aqi/master/html/index.html
|
||||
|
||||
$ wget -O /var/www/html/aqi.js https://raw.githubusercontent.com/zefanja/aqi/master/html/aqi.js
|
||||
|
||||
$ wget -O /var/www/html/style.css https://raw.githubusercontent.com/zefanja/aqi/master/html/style.css
|
||||
|
||||
```
|
||||
|
||||
在 JavaScript 文件中,实现了打开 JSON 文件、提取数据、计算空气质量指数的过程,随后页面的背景颜色将会根据 EPA 的划分标准而变化。
|
||||
|
||||
你只需要用浏览器访问树莓派的地址(例如 [http://192.168.1.5/][9]),就可以看到当前颗粒物浓度值等数据了。
|
||||
|
||||
这个页面比较简单而且可扩展,比如可以添加一个展示过去数小时历史数据的表格等等。
|
||||
|
||||
这是[Github上的完整源代码][10]。
|
||||
|
||||
### 总结
|
||||
|
||||
在资金相对紧张的情况下,树莓派是一种选择。除此以外,还有很多可以用来测定颗粒物的应用,包括室外固定装置、移动测定设备等等。我们学校则同时采用了这两种:固定装置在室外测定全天颗粒物浓度,而移动测定设备在室内检测空调过滤器的效果。
|
||||
|
||||
[Luftdaten.info][12]提供了一个如何设计类似的传感器的介绍,其中的软件效果出众,而且因为它没有使用树莓派,所以硬件更是小巧。
|
||||
|
||||
对于学生来说,设计一个颗粒物传感器确实算得上是一个优秀的课外项目。
|
||||
|
||||
你又打算如何使用你的[树莓派][13]呢?
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/3/how-measure-particulate-matter-raspberry-pi
|
||||
|
||||
作者:[Stephan Tetzel][a]
|
||||
译者:[HankChow](https://github.com/HankChow)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/stephan
|
||||
[1]:https://en.wikipedia.org/wiki/Particulates
|
||||
[2]:https://en.wikipedia.org/wiki/Air_quality_index
|
||||
[3]:https://en.wikipedia.org/wiki/United_States_Environmental_Protection_Agency
|
||||
[5]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/air_quality_index.png?itok=FwmGf1ZS (Air quality index)
|
||||
[7]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/particulate_sensor.jpg?itok=ddH3bBwO (Particulate sensor)
|
||||
[8]:https://www.raspberrypi.org/documentation/installation/installing-images/README.md
|
||||
[9]:http://192.168.1.5/
|
||||
[10]:https://github.com/zefanja/aqi
|
||||
[11]:https://opensource.com/article/18/3/raspberry-pi-week-giveaway
|
||||
[12]:http://luftdaten.info/
|
||||
[13]:https://openschoolsolutions.org/shutdown-servers-case-power-failure%e2%80%8a-%e2%80%8aups-nut-co/
|
@ -0,0 +1,205 @@
|
||||
如何使用 Ansible 打补丁以及安装应用
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_osyearbook2016_sysadmin_cc.png?itok=Y1AHCKI4)
|
||||
你有没有想过,如何打补丁、重启系统,然后继续工作?
|
||||
|
||||
如果你的回答是肯定的,那就需要了解一下 [Ansible][1] 了。它是一个配置管理工具,对于一些复杂的系统管理任务有时候需要几个小时才能完成,又或者对安全性有比较高要求的时候,使用 Ansible 能够大大简化工作流程。
|
||||
|
||||
以我作为系统管理员的经验,打补丁是一项最有难度的工作。每次遇到公共漏洞和暴露(CVE, Common Vulnerabilities and Exposures)通知或者信息安全漏洞预警(IAVA, Information Assurance Vulnerability Alert)时,都必须高度关注安全漏洞,否则安全部门将会严肃追究自己的责任。
|
||||
|
||||
使用 Ansible 运行[软件包管理模块][2]可以缩短打补丁所需的时间。下面以 [yum 模块][3]为例,它可以执行安装、更新、删除,也可以从其它地方安装(例如持续集成/持续部署流程中 `rpmbuild` 产生的包)。以下是更新系统的任务:
|
||||
```
|
||||
- name: update the system
|
||||
|
||||
yum:
|
||||
|
||||
name: "*"
|
||||
|
||||
state: latest
|
||||
|
||||
```
|
||||
|
||||
在第一行,我们给这个任务命名,这样可以清楚 Ansible 的工作内容。第二行表示使用 `yum` 模块在CentOS虚拟机中执行更新操作。第三行 `name: "*"` 表示更新所有程序。最后一行 `state: latest` 表示更新到最新的 RPM。
|
||||
|
||||
系统更新结束之后,需要重新启动并重新连接:
|
||||
```
|
||||
- name: restart system to reboot to newest kernel
|
||||
|
||||
shell: "sleep 5 && reboot"
|
||||
|
||||
async: 1
|
||||
|
||||
poll: 0
|
||||
|
||||
|
||||
|
||||
- name: wait for 10 seconds
|
||||
|
||||
pause:
|
||||
|
||||
seconds: 10
|
||||
|
||||
|
||||
|
||||
- name: wait for the system to reboot
|
||||
|
||||
wait_for_connection:
|
||||
|
||||
connect_timeout: 20
|
||||
|
||||
sleep: 5
|
||||
|
||||
delay: 5
|
||||
|
||||
timeout: 60
|
||||
|
||||
|
||||
|
||||
- name: install epel-release
|
||||
|
||||
yum:
|
||||
|
||||
name: epel-release
|
||||
|
||||
state: latest
|
||||
|
||||
```
|
||||
|
||||
`shell` 模块中的命令让系统在休眠 5 秒之后重新启动;我们使用 `sleep` 来避免连接在重启前意外断开,用 `async` 设定最大等待时长以避免发生超时,`poll` 设置为 0 表示直接执行、不等待执行结果。之后先暂停 10 秒钟以等待虚拟机关机,再使用 `wait_for_connection` 在虚拟机恢复网络后尽快重新连接。随后 `install epel-release` 任务会检查 RPM 的安装情况。你可以多次执行这个剧本来验证它的幂等性,唯一每次都会显示“发生改变”的是重启操作,因为我们使用了 `shell` 模块。如果不想让它显示造成了改变,可以在使用 `shell` 模块的时候添加 `changed_when: False`。
|
||||
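`wait_for_connection` 的“重试 + 延时”语义,大致可以用下面这个最小的 shell 草图来说明(仅为示意,函数名和参数是随意取的,并非 Ansible 的实现):

```shell
# wait_for:每隔 delay 秒重试一次探测命令,最多 retries 次,成功立即返回
wait_for() {
  retries=$1; delay=$2; shift 2
  i=1
  while [ "$i" -le "$retries" ]; do
    "$@" && return 0          # 探测成功,认为主机已恢复
    sleep "$delay"
    i=$((i + 1))
  done
  return 1                    # 重试次数用尽,视为超时
}

# 用法示意:每 5 秒探测一次 SSH 端口,最多 12 次(约一分钟)
# wait_for 12 5 nc -z some-host 22
```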
|
||||
现在我们已经知道如何对系统进行更新、重启虚拟机、重新连接、安装 RPM 包。下面我们通过 [Ansible Lightbulb][4] 来安装 NGINX:
|
||||
```
|
||||
- name: Ensure nginx packages are present
|
||||
|
||||
yum:
|
||||
|
||||
name: nginx, python-pip, python-devel, devel
|
||||
|
||||
state: present
|
||||
|
||||
notify: restart-nginx-service
|
||||
|
||||
|
||||
|
||||
- name: Ensure uwsgi package is present
|
||||
|
||||
pip:
|
||||
|
||||
name: uwsgi
|
||||
|
||||
state: present
|
||||
|
||||
notify: restart-nginx-service
|
||||
|
||||
|
||||
|
||||
- name: Ensure latest default.conf is present
|
||||
|
||||
template:
|
||||
|
||||
src: templates/nginx.conf.j2
|
||||
|
||||
dest: /etc/nginx/nginx.conf
|
||||
|
||||
backup: yes
|
||||
|
||||
notify: restart-nginx-service
|
||||
|
||||
|
||||
|
||||
- name: Ensure latest index.html is present
|
||||
|
||||
template:
|
||||
|
||||
src: templates/index.html.j2
|
||||
|
||||
dest: /usr/share/nginx/html/index.html
|
||||
|
||||
|
||||
|
||||
- name: Ensure nginx service is started and enabled
|
||||
|
||||
service:
|
||||
|
||||
name: nginx
|
||||
|
||||
state: started
|
||||
|
||||
enabled: yes
|
||||
|
||||
|
||||
|
||||
- name: Ensure proper response from localhost can be received
|
||||
|
||||
uri:
|
||||
|
||||
url: "http://localhost:80/"
|
||||
|
||||
return_content: yes
|
||||
|
||||
register: response
|
||||
|
||||
until: 'nginx_test_message in response.content'
|
||||
|
||||
retries: 10
|
||||
|
||||
delay: 1
|
||||
|
||||
```
|
||||
|
||||
以下是用来重启 nginx 服务的处理程序(handler):
|
||||
```
|
||||
# 安装 nginx 的操作文件
|
||||
|
||||
- name: restart-nginx-service
|
||||
|
||||
service:
|
||||
|
||||
name: nginx
|
||||
|
||||
state: restarted
|
||||
|
||||
```
|
||||
|
||||
在这个角色里,我们使用 RPM 安装了 `nginx`、`python-pip`、`python-devel`、`devel`,用 PIP 安装了 `uwsgi`,接下来使用 `template` 模块复制 `nginx.conf` 和 `index.html` 以显示页面,并确保服务在系统启动时启动。然后就可以使用 `uri` 模块检查到页面的连接了。
|
||||
|
||||
这个是一个系统更新、系统重启、安装 RPM 包的剧本示例,后续可以继续安装 nginx,当然这里可以替换成任何你想要的角色和应用程序。
|
||||
```
|
||||
- hosts: all
|
||||
|
||||
roles:
|
||||
|
||||
- centos-update
|
||||
|
||||
- nginx-simple
|
||||
|
||||
```
|
||||
|
||||
观看演示视频了解这个过程。
|
||||
|
||||
[demo](https://asciinema.org/a/166437/embed?)
|
||||
|
||||
这只是关于如何更新系统、重启以及继续后续工作的示例。简单起见,我只添加了不带[变量][5]的软件包;当你操作大量主机的时候,就需要修改其中的一些设置了。
|
||||
|
||||
这是因为在生产环境中,如果你逐一更新每一台主机的系统,就需要等待每台主机重启完成才能继续处理下一台,那会花费相当长的时间。
|
||||
|
||||
有关 Ansible 进行自动化工作的更多用法,请查阅[其它文章][6]。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/3/ansible-patch-systems
|
||||
|
||||
作者:[Jonathan Lozada De La Matta][a]
|
||||
译者:[HankChow](https://github.com/HankChow)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/jlozadad
|
||||
[1]:https://www.ansible.com/overview/how-ansible-works
|
||||
[2]:https://docs.ansible.com/ansible/latest/list_of_packaging_modules.html
|
||||
[3]:https://docs.ansible.com/ansible/latest/yum_module.html
|
||||
[4]:https://github.com/ansible/lightbulb/tree/master/examples/nginx-role
|
||||
[5]:https://docs.ansible.com/ansible/latest/playbooks_variables.html
|
||||
[6]:https://opensource.com/tags/ansible
|
@ -1,46 +1,51 @@
|
||||
The Command line Personal Assistant For Your Linux System
|
||||
======
|
||||
# 您的 Linux 系统命令行个人助理
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/03/Yoda-720x340.png)
|
||||
A while ago, we wrote about a command line virtual assistant named [**“Betty”**][1]. Today, I stumbled upon a similar utility called **“Yoda”**. Yoda is a command line personal assistant who can help you to do some trivial tasks in Linux. It is a free, open source application written in Python. In this guide, we will see how to install and use Yoda in GNU/Linux.
|
||||
|
||||
### Installing Yoda, the command line personal assistant
|
||||
不久前,我们写了一个名为 [**“Betty”**][1] 的命令行虚拟助手。今天,我偶然发现了一个类似的实用程序,叫做 **“Yoda”**。Yoda 是一个命令行个人助理,可以帮助您在 Linux 中完成一些琐碎的任务。它是用 Python 编写的一个免费的开源应用程序。在本指南中,我们将了解如何在 GNU/Linux 中安装和使用 Yoda。
|
||||
|
||||
Yoda requires **Python 2** and PIP. If PIP is not installed in your Linux box, refer the following guide to install it. Just make sure you have installed **python2-pip.** Yoda may not support Python 3.
|
||||
### 安装命令行个人助理 Yoda
|
||||
|
||||
**Note:** I recommend you to try Yoda under a virtual environment. Not just Yoda, always try any Python applications in a virtual environment, so they won’t interfere with globally installed packages. You can setup a virtual environment as described in the above link under the section titled “Creating Virtual Environments”.
|
||||
Yoda 需要 **Python 2** 和 PIP 。如果在您的 Linux 中没有安装 PIP,请参考下面的指南来安装它。只要确保已经安装了 **python2-pip** 。Yoda 可能不支持 Python 3。
|
||||
|
||||
**注意**:我建议你在虚拟环境下试用 Yoda。 不仅仅是 Yoda,总是在虚拟环境中尝试任何 Python 应用程序,让它们不会干扰全局安装的软件包。 您可以按照上文链接中标题为“创建虚拟环境”一节中所述设置虚拟环境。
|
||||
|
||||
在您的系统上安装了 pip 之后,使用下面的命令克隆 Yoda 库。
|
||||
|
||||
Once you have installed pip on your system, git clone Yoda repository.
|
||||
```
|
||||
$ git clone https://github.com/yoda-pa/yoda
|
||||
|
||||
```
|
||||
|
||||
The above command will create a directory named “yoda” in your current working directory and clone all contents in it. Go to the Yoda directory:
|
||||
上面的命令将在当前工作目录中创建一个名为 “yoda” 的目录,并在其中克隆所有内容。转到 Yoda 目录:
|
||||
|
||||
```
|
||||
$ cd yoda/
|
||||
|
||||
```
|
||||
|
||||
Run the following command to install Yoda application.
|
||||
运行以下命令安装Yoda应用程序。
|
||||
|
||||
```
|
||||
$ pip install .
|
||||
|
||||
```
|
||||
|
||||
Please note the dot (.) at the end. Now, all required packages will be downloaded and installed.
|
||||
请注意最后的点(.)。 现在,所有必需的软件包将被下载并安装。
|
||||
|
||||
### Configure Yoda
|
||||
### 配置 Yoda
|
||||
|
||||
First, setup configuration to save your information on your local system.
|
||||
首先,设置配置以将您的信息保存在本地系统上。
|
||||
|
||||
运行下面的命令:
|
||||
|
||||
To do so, run:
|
||||
```
|
||||
$ yoda setup new
|
||||
|
||||
```
|
||||
|
||||
Answer the following questions:
|
||||
填写下列的问题:
|
||||
|
||||
```
|
||||
Enter your name:
|
||||
Senthil Kumar
|
||||
@ -57,15 +62,17 @@ y
|
||||
|
||||
```
|
||||
|
||||
Your password is saved in the config file after encrypting, so don’t worry about it.
|
||||
你的密码在加密后保存在配置文件中,所以不用担心。
|
||||
|
||||
要检查当前配置,请运行:
|
||||
|
||||
To check the current configuration, run:
|
||||
```
|
||||
$ yoda setup check
|
||||
|
||||
```
|
||||
|
||||
You will see an output something like below.
|
||||
你会看到如下的输出。
|
||||
|
||||
```
|
||||
Name: Senthil Kumar
|
||||
Email: [email protected]
|
||||
@ -73,23 +80,26 @@ Github username: sk
|
||||
|
||||
```
|
||||
|
||||
By default, your information is stored in **~/.yoda** directory.
|
||||
默认情况下,您的信息存储在 **~/.yoda** 目录中。
|
||||
|
||||
要删除现有配置,请执行以下操作:
|
||||
|
||||
To delete the existing configuration, do:
|
||||
```
|
||||
$ yoda setup delete
|
||||
|
||||
```
|
||||
|
||||
### Usage
|
||||
### 用法
|
||||
|
||||
Yoda 包含一个简单的聊天机器人。您可以使用下面的聊天命令与它交互。
|
||||
|
||||
Yoda contains a simple chat bot. You can interact with it using **chat** command like below.
|
||||
```
|
||||
$ yoda chat who are you
|
||||
|
||||
```
|
||||
|
||||
Sample output:
|
||||
样例输出:
|
||||
|
||||
```
|
||||
Yoda speaks:
|
||||
I'm a virtual agent
|
||||
@ -100,11 +110,12 @@ I'm doing very well. Thanks!
|
||||
|
||||
```
|
||||
|
||||
Here is the list of things we can do with Yoda:
|
||||
以下是我们可以用 Yoda 做的事情:
|
||||
|
||||
**Test Internet speed**
|
||||
**测试网络速度**
|
||||
|
||||
让我们问一下 Yoda 关于互联网速度的问题。运行:
|
||||
|
||||
Let us ask Yoda about the Internet speed. To do so, run:
|
||||
```
|
||||
$ yoda speedtest
|
||||
Speed test results:
|
||||
@ -114,9 +125,10 @@ Upload: 1.95 Mb/s
|
||||
|
||||
```
|
||||
|
||||
**Shorten and expand URLs**
|
||||
**缩短并展开网址**
|
||||
|
||||
Yoda 还有助于缩短任何网址。
|
||||
|
||||
Yoda also helps to shorten any URL.
|
||||
```
|
||||
$ yoda url shorten https://www.ostechnix.com/
|
||||
Here's your shortened URL:
|
||||
@ -124,7 +136,8 @@ https://goo.gl/hVW6U0
|
||||
|
||||
```
|
||||
|
||||
To expand the shortened URL:
|
||||
要展开缩短的网址:
|
||||
|
||||
```
|
||||
$ yoda url expand https://goo.gl/hVW6U0
|
||||
Here's your original URL:
|
||||
@ -132,9 +145,11 @@ https://www.ostechnix.com/
|
||||
|
||||
```
|
||||
|
||||
**Read Hacker News**
|
||||
|
||||
I am regular visitor of Hacker News website. If you’re anything like me, you can read the news from Hacker News website using Yoda like below.
|
||||
**阅读黑客新闻**
|
||||
|
||||
我是 Hacker News 网站的常客。如果你和我一样,也可以像下面这样使用 Yoda 阅读 Hacker News 网站上的新闻。
|
||||
|
||||
```
|
||||
$ yoda hackernews
|
||||
News-- 1/513
|
||||
@ -147,13 +162,14 @@ Continue? [press-"y"]
|
||||
|
||||
```
|
||||
|
||||
Yoda will display one item at a time. To read the next news, simply type “y” and hit ENTER.
|
||||
Yoda 将一次显示一个项目。 要阅读下一条新闻,只需输入 “y” 并按下 ENTER。
|
||||
|
||||
**Manage personal diaries**
|
||||
**管理个人日记**
|
||||
|
||||
We can also maintain a personal diary to note important events.
|
||||
我们也可以保留个人日记以记录重要事件。
|
||||
|
||||
使用命令创建一个新的日记:
|
||||
|
||||
Create a new diary using command:
|
||||
```
|
||||
$ yoda diary nn
|
||||
Input your entry for note:
|
||||
@ -161,9 +177,10 @@ Today I learned about Yoda
|
||||
|
||||
```
|
||||
|
||||
To create a new note, run the above command again.
|
||||
要创建新笔记,请再次运行上述命令。
|
||||
|
||||
查看所有笔记:
|
||||
|
||||
To view all notes:
|
||||
```
|
||||
$ yoda diary notes
|
||||
Today's notes:
|
||||
@ -174,9 +191,10 @@ Today's notes:
|
||||
|
||||
```
|
||||
|
||||
Not just notes, Yoda can also help you to create tasks.
|
||||
不仅仅是笔记,Yoda 还可以帮助你创建任务。
|
||||
|
||||
要创建新任务,请运行:
|
||||
|
||||
To create a new task, run:
|
||||
```
|
||||
$ yoda diary nt
|
||||
Input your entry for task:
|
||||
@ -184,7 +202,8 @@ Write an article about Yoda and publish it on OSTechNix
|
||||
|
||||
```
|
||||
|
||||
To view the list of tasks, run:
|
||||
要查看任务列表,请运行:
|
||||
|
||||
```
|
||||
$ yoda diary tasks
|
||||
Today's agenda:
|
||||
@ -201,7 +220,8 @@ Completed tasks: 0
|
||||
|
||||
```
|
||||
|
||||
As you see above, I have one incomplete task. To mark it as completed, run the following command and type the completed task serial number and hit ENTER:
|
||||
正如你在上面看到的,我有一个未完成的任务。 要将其标记为已完成,请运行以下命令并输入已完成的任务序列号并按下 ENTER 键:
|
||||
|
||||
```
|
||||
$ yoda diary ct
|
||||
Today's agenda:
|
||||
@ -214,7 +234,8 @@ Enter the task number that you would like to set as completed
|
||||
|
||||
```
|
||||
|
||||
You can analyze the current month’s tasks at any time using command:
|
||||
您可以随时使用命令分析当前月份的任务:
|
||||
|
||||
```
|
||||
$ yoda diary analyze
|
||||
Percentage of incomplete task : 0
|
||||
@ -223,17 +244,19 @@ Frequency of adding task (Task/Day) : 3
|
||||
|
||||
```
|
||||
|
||||
Sometimes, you may want to maintain a profile about a person you love, admire.
|
||||
有时候,你可能想要记录一个关于你爱的或者敬佩的人的个人资料。
|
||||
|
||||
**Take notes about loved ones**
|
||||
**记录关于爱人的笔记**
|
||||
|
||||
首先,您需要设置配置来存储朋友的详细信息。 请运行:
|
||||
|
||||
First, you need to setup configuration to store your friend’s details. To do so, run:
|
||||
```
|
||||
$ yoda love setup
|
||||
|
||||
```
|
||||
|
||||
Enter the details of your friend:
|
||||
输入你的朋友的详细信息:
|
||||
|
||||
```
|
||||
Enter their name:
|
||||
Abdul Kalam
|
||||
@ -244,14 +267,16 @@ Rameswaram
|
||||
|
||||
```
|
||||
|
||||
To view the details of the person, run:
|
||||
要查看此人的详细信息,请运行:
|
||||
|
||||
```
|
||||
$ yoda love status
|
||||
{'place': 'Rameswaram', 'name': 'Abdul Kalam', 'sex': 'M'}
|
||||
|
||||
```
|
||||
|
||||
To add the birthday of your love one:
|
||||
要添加你的爱人的生日:
|
||||
|
||||
```
|
||||
$ yoda love addbirth
|
||||
Enter birthday
|
||||
@ -259,21 +284,24 @@ Enter birthday
|
||||
|
||||
```
|
||||
|
||||
To view the birth date:
|
||||
查看生日:
|
||||
|
||||
```
|
||||
$ yoda love showbirth
|
||||
Birthday is 15-10-1931
|
||||
|
||||
```
|
||||
|
||||
You could even add notes about that person:
|
||||
你甚至可以添加关于该人的笔记:
|
||||
|
||||
```
|
||||
$ yoda love note
|
||||
Avul Pakir Jainulabdeen Abdul Kalam better known as A. P. J. Abdul Kalam, was the 11th President of India from 2002 to 2007.
|
||||
|
||||
```
|
||||
|
||||
You can view the notes using command:
|
||||
您可以使用命令查看笔记:
|
||||
|
||||
```
|
||||
$ yoda love notes
|
||||
Notes:
|
||||
@ -281,7 +309,8 @@ Notes:
|
||||
|
||||
```
|
||||
|
||||
You can also write the things that person like:
|
||||
你也可以写下这个人喜欢的东西:
|
||||
|
||||
```
|
||||
$ yoda love like
|
||||
Add things they like
|
||||
@ -291,7 +320,8 @@ n
|
||||
|
||||
```
|
||||
|
||||
To view the things they like, run:
|
||||
要查看他们喜欢的东西,请运行:
|
||||
|
||||
```
|
||||
$ yoda love likes
|
||||
Likes:
|
||||
@ -299,17 +329,21 @@ Likes:
|
||||
|
||||
```
|
||||
|
||||
**Tracking money expenses**
|
||||
|
||||
|
||||
You don’t need a separate tool to maintain your financial expenditure. Yoda got your back.
|
||||
**跟踪资金费用**
|
||||
|
||||
您不需要单独的工具来维护您的财务支出。 Yoda 会替您处理好。
|
||||
|
||||
首先,使用命令设置您的金钱支出配置:
|
||||
|
||||
First, setup configuration for your money expenses using command:
|
||||
```
|
||||
$ yoda money setup
|
||||
|
||||
```
|
||||
|
||||
Enter your currency code and the initial amount:
|
||||
输入您的货币代码和初始金额:
|
||||
|
||||
```
|
||||
Enter default currency code:
|
||||
INR
|
||||
@ -321,14 +355,16 @@ Enter initial amount:
|
||||
|
||||
```
|
||||
|
||||
To view the money configuration, just run:
|
||||
要查看金钱配置,只需运行:
|
||||
|
||||
```
|
||||
$ yoda money status
|
||||
{'initial_money': 10000, 'currency_code': 'INR'}
|
||||
|
||||
```
|
||||
|
||||
Let us say you bought a book that costs 250 INR. To add this expense, run:
|
||||
让我们假设你买了一本价值 250 卢比的书。 要添加此费用,请运行:
|
||||
|
||||
```
|
||||
$ yoda money exp
|
||||
Spend 250 INR on books
|
||||
@ -336,64 +372,76 @@ output:
|
||||
|
||||
```
|
||||
|
||||
To view the expenses, run:
|
||||
要查看花费,请运行:
|
||||
|
||||
```
|
||||
$ yoda money exps
|
||||
2018-03-21 17:12:31 INR 250 books
|
||||
|
||||
```
|
||||
|
||||
**Creating Idea lists**
|
||||
|
||||
|
||||
**创建想法列表**
|
||||
|
||||
创建一个新的想法:
|
||||
|
||||
To create a new idea:
|
||||
```
|
||||
$ yoda ideas add --task <task_name> --inside <project_name>
|
||||
|
||||
```
|
||||
|
||||
List the ideas:
|
||||
列出想法:
|
||||
|
||||
```
|
||||
$ yoda ideas show
|
||||
|
||||
```
|
||||
|
||||
To remove a idea from the project:
|
||||
从任务中移除一个想法:
|
||||
|
||||
```
|
||||
$ yoda ideas remove --task <task_name> --inside <project_name>
|
||||
|
||||
```
|
||||
|
||||
To remove the idea completely, run:
|
||||
要完全删除这个想法,请运行:
|
||||
|
||||
```
|
||||
$ yoda ideas remove --project <project_name>
|
||||
|
||||
```
|
||||
|
||||
**Learning English Vocabulary**
|
||||
|
||||
|
||||
Yoda helps you to learn random English words and track your learning progress.
|
||||
**学习英语词汇**
|
||||
|
||||
Yoda 帮助你学习随机英语单词并追踪你的学习进度。
|
||||
|
||||
要学习一个新单词,请输入:
|
||||
|
||||
To learn a new word, type:
|
||||
```
|
||||
$ yoda vocabulary word
|
||||
|
||||
```
|
||||
|
||||
It will display a random word. Press ENTER to display the meaning of the word. Again, Yoda asks you if you already know the meaning of the word. If you know it already, type “yes”. If you don’t know, type “no”. This can help you to track your progress. Use the following command to know your progress.
|
||||
它会随机显示一个单词。按 ENTER 键可以显示这个单词的含义。接着,Yoda 会问你是否已经知道这个词的意思:如果已经知道,请输入 “yes”;如果不知道,请输入 “no”。这可以帮助你跟踪自己的学习进度。使用以下命令可以了解你的进度。
|
||||
|
||||
```
|
||||
$ yoda vocabulary accuracy
|
||||
|
||||
```
|
||||
|
||||
Also, Yoda can help you to do few other things like finding the definition of a word and creating flashcards to easily learn anything. For more details and list of available options, refer the help section.
|
||||
此外,Yoda 还可以帮助您做其他一些事情,比如查找单词的定义、创建闪卡(flashcard)来轻松学习任何内容。有关更多详细信息和可用选项列表,请参阅帮助部分。
|
||||
|
||||
```
|
||||
$ yoda --help
|
||||
|
||||
```
|
||||
|
||||
More good stuffs to come. Stay tuned!
|
||||
更多好的东西来了。请继续关注!
|
||||
|
||||
Cheers!
|
||||
干杯!
|
||||
|
||||
|
||||
|
||||
@ -402,7 +450,7 @@ Cheers!
|
||||
via: https://www.ostechnix.com/yoda-the-command-line-personal-assistant-for-your-linux-system/
|
||||
|
||||
作者:[SK][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
译者:[amwps290](https://github.com/amwps290)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
@ -0,0 +1,119 @@
|
||||
# chkservice - 在 Linux 终端中管理 systemd 单元的工具
|
||||
|
||||
systemd(即系统守护进程)是一种新的初始化系统和系统管理工具。如今 systemd 非常流行,大部分的 Linux 发行版都开始使用这种新的初始化系统。
|
||||
|
||||
systemctl 是 systemd 的管理工具,它可以帮助我们管理 systemd 守护进程。它控制系统的启动和服务,以并行化方式、通过套接字和 D-Bus 激活来启动服务,支持按需启动守护进程,使用 Linux 控制组跟踪进程,并维护挂载点和自动挂载点。
|
||||
|
||||
此外,它还提供了守护进程日志,提供用于控制基本系统配置(如主机名、日期、地区)、维护已登录用户列表、管理运行中的容器和虚拟机、系统帐户、运行时目录和设置的工具,并带有用来管理简单网络配置、网络时间同步、日志转发和名称解析的守护进程。
|
||||
|
||||
### 什么是 chkservice
|
||||
|
||||
[chkservice][1] 是一个基于 ncurses 的、用于在终端中管理 systemd 单元的工具。它提供了非常全面的 systemd 单元视图,使得修改它们变得非常容易。
|
||||
|
||||
只有拥有超级用户权限,才能够改变 systemd 单元的状态和启动脚本。
|
||||
|
||||
### 在 Linux 安装 chkservice
|
||||
|
||||
我们可以通过两种方式安装 chkservice,通过包安装或者手动安装。
|
||||
|
||||
对于 `Debian/Ubuntu`,使用 [APT-GET Command][2] 或 [APT Command][3] 安装 chkservice.
|
||||
|
||||
```
|
||||
$ sudo add-apt-repository ppa:linuxenko/chkservice
|
||||
$ sudo apt-get update
|
||||
$ sudo apt-get install chkservice
|
||||
|
||||
```
|
||||
|
||||
对于 `Arch Linux` 系的系统,使用 [Yaourt Command][4] 或 [Packer Command][5] 从 AUR 库安装 chkservice。
|
||||
|
||||
```
|
||||
$ yaourt -S chkservice
|
||||
or
|
||||
$ packer -S chkservice
|
||||
|
||||
```
|
||||
|
||||
对于 **`Fedora`** , 使用 [DNF Command][6] 安装 chkservice。
|
||||
|
||||
```
|
||||
$ sudo dnf copr enable srakitnican/default
|
||||
$ sudo dnf install chkservice
|
||||
|
||||
```
|
||||
|
||||
对于 `Debian` 系系统,使用 [DPKG Command][7] 安装 chkservice。
|
||||
|
||||
```
|
||||
$ wget https://github.com/linuxenko/chkservice/releases/download/0.1/chkservice_0.1.0-amd64.deb
|
||||
$ sudo dpkg -i chkservice_0.1.0-amd64.deb
|
||||
|
||||
```
|
||||
|
||||
对于 `RPM` 系的系统,使用 [YUM Command][8] 安装 chkservice。
|
||||
|
||||
```
|
||||
$ sudo yum install https://github.com/linuxenko/chkservice/releases/download/0.1/chkservice_0.1.0-amd64.rpm
|
||||
|
||||
```
|
||||
|
||||
### 如何使用 chkservice
|
||||
|
||||
只需输入以下命令即可启动 chkservice 工具。 输出分为四部分。
|
||||
|
||||
  * **`第一部分:`** 这一部分显示单元的状态,比如已启用 [x]、已禁用 [ ]、静态 [s] 或被掩盖的 -m-
|
||||
  * **`第二部分:`** 这一部分显示单元的运行状态,例如已启动 [>] 或已停止 [=]
|
||||
  * **`第三部分:`** 这一部分显示单元的名称
|
||||
  * **`第四部分:`** 这一部分简短地显示单元的一些信息
|
||||
|
||||
|
||||
```
|
||||
$ sudo chkservice
|
||||
|
||||
```
|
||||
|
||||
![][10]
|
||||
|
||||
要查看帮助页面,按下 `?` 键。这将向您显示管理 systemd 服务的可用选项。
|
||||
|
||||
![][11]
|
||||
|
||||
选择要启用或禁用的单元,然后按下`空格键`。
|
||||
|
||||
![][12]
|
||||
|
||||
选择你想启动或停止的单元,然后按下 `s` 键。
|
||||
|
||||
![][13]
|
||||
|
||||
选择要重新启动的单元,然后按下 `r` 键。按下 `r` 键后,您可以在顶部看到已更新的提示。
|
||||
|
||||
![][14]
|
||||
|
||||
按 `q` 键退出。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/chkservice-a-tool-for-managing-systemd-units-from-linux-terminal/
|
||||
|
||||
作者:[Ramya Nuvvula][a]
|
||||
译者:[amwps290](https://github.com/amwps290)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.2daygeek.com/author/ramya/
|
||||
[1]:https://github.com/linuxenko/chkservice
|
||||
[2]:https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
|
||||
[3]:https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
|
||||
[4]:https://www.2daygeek.com/install-yaourt-aur-helper-on-arch-linux/
|
||||
[5]:https://www.2daygeek.com/install-packer-aur-helper-on-arch-linux/
|
||||
[6]:https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
|
||||
[7]:https://www.2daygeek.com/dpkg-command-to-manage-packages-on-debian-ubuntu-linux-mint-systems/
|
||||
[8]:https://www.2daygeek.com/rpm-command-examples/
|
||||
[9]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
|
||||
[10]:https://www.2daygeek.com/wp-content/uploads/2018/03/chkservice-to-manage-systemd-units-1.png
|
||||
[11]:https://www.2daygeek.com/wp-content/uploads/2018/03/chkservice-to-manage-systemd-units-2.png
|
||||
[12]:https://www.2daygeek.com/wp-content/uploads/2018/03/chkservice-to-manage-systemd-units-3.png
|
||||
[13]:https://www.2daygeek.com/wp-content/uploads/2018/03/chkservice-to-manage-systemd-units-4.png
|
||||
[14]:https://www.2daygeek.com/wp-content/uploads/2018/03/chkservice-to-manage-systemd-units-5.png
|