mirror of
https://github.com/LCTT/TranslateProject.git
synced 2025-02-03 23:40:14 +08:00
Advanced uses of less for viewing files that you may not know
======

> Some tips for using the less file viewer.

![](https://img.linux.net.cn/data/attachment/album/202003/16/152826assmtybsohopo4b7.png)

Recently, I was reading Scott Nesbitt's article "[Using less to view text files at the Linux command line][1]" and was inspired to share some of my own tips for using the `less` command.
### The LESS environment variable

If you define the environment variable `LESS` (for example, in `.bashrc`), `less` treats its value as a list of options, as if they had been passed to it on the command line.

I define it like this:

```
LESS='-C -M -I -j 10 -# 4'
```

It means:
* `-C` – Speeds up full-screen repaints by not scrolling from the bottom.
* `-M` – Shows more information in the last line (the status line). You can customize what is shown with `-PM`, but I usually just use `-M`.
* `-I` – Ignores case in searches.
* `-j 10` – Shows search results on line 10 of the terminal instead of the first line. That way, each time you press `n` (or `N`) to jump to the next (or previous) match, you have 10 lines of context.
* `-# 4` – Jumps four characters to the right or left when the right or left arrow key is pressed. The default is to jump half a screen, which I find too much. Generally speaking, `less` seems (at least partly) optimized for the environment it was originally developed in, with slow modems and low-bandwidth internet connections, when it made sense to jump half a screen.
### The PAGER environment variable

Many programs use the command set in the `PAGER` environment variable to display information. So, you can set `PAGER=less` in your `.bashrc` and have those programs run `less`. Check the man page (`man 7 environ`) for other such variables.
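As a minimal sketch of how the two variables fit together in `.bashrc` (the option values are just the preferences described above, not requirements):

```shell
# Programs that honor PAGER (man, git, psql, ...) will page output through less
export PAGER=less
export LESS='-C -M -I -j 10 -# 4'
```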
### -S

`-S` tells `less` to chop long lines instead of wrapping them. I rarely need this when I start viewing a file. Fortunately, you can type all command-line options inside `less` as if they were keyboard commands. So, if I want to chop long lines while the file is already open, I can simply type `-S`. (Note: that is a capital `S`, followed by Enter.)
Here is an example I use often:

```
su - postgres
export PAGER=less # Because I don't want to edit postgres's .bashrc on every machine
psql
```
Sometimes, when I view the very wide output of a `SELECT` command, I type `-S` to have it formatted nicely. If it jumps too far when I press the right arrow to see more (because I didn't set `-#`), I can type `-#8`, and each press of the right arrow then moves eight characters to the right.

Sometimes, after typing `-S` too many times, I exit psql and run it again after setting the environment variable:

```
export LESS=-S
```
### F

The `F` command makes `less` work like `tail -f`: it waits for more data to be appended to the file before showing it. One advantage it has over `tail -f` is that search highlighting still works. So you can run `less /var/log/logfile`, search for something, which will highlight all of its occurrences (unless you used `-g`), and then press `F`. When more data is written to the log, `less` shows it and highlights the new matches.

After pressing `F`, you can press `Ctrl+C` to stop it from waiting for new data (this does not kill it), so that you can go back to review older content, search for other things, and so on, and then press `F` again to watch for more new data.
### Searching

Searches use the system's regular-expression library, which usually means you can use extended regular expressions. In particular, searching for `one|two|three` finds and highlights every occurrence of one, two, or three.

Another pattern I use often is `.*something.*`, especially for very long log lines (lines spanning several terminal widths), because it highlights the entire line. That pattern makes it easier to see where a line starts and ends. I also combine it with other things, such as `.*one thing.*|.*another thing.*`, or use `key: .*|.*marker.*` to see the contents of `key`: for example, in a log file containing dumps of some dictionaries/hashes, it highlights the relevant marker lines so that I can see the context. Or even, if I know the value is enclosed in quotes:
```
key: '[^']*'|.*marker.*
```
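Because `less` searches use the system's extended-regex support, you can prototype a pattern outside `less` with `grep -E`, which uses the same ERE alternation syntax (the sample lines here are invented):

```shell
# ERE alternation: keep lines containing any of the three words
printf 'one apple\nfour pears\nthree figs\n' | grep -E 'one|two|three'
# prints "one apple" and "three figs"
```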
`less` keeps a history of your search terms and saves it to disk for future invocations. After you press `/` or `?`, you can browse the history with the up and down arrows (and do basic line editing).

While writing this article, I happened to glance at the `less` man page and found a very useful feature: skipping uninteresting lines with `&!pattern`. For example, when looking for something in `/var/log/messages`, I used to iterate through commands like these, one at a time:
```
cat /var/log/messages | egrep -v 'systemd: Started Session' | less
cat /var/log/messages | egrep -v 'systemd: Started Session|systemd: Starting Session' | less
cat /var/log/messages | egrep -v 'systemd: Started Session|systemd: Starting Session|User Slice' | less
cat /var/log/messages | egrep -v 'systemd: Started Session|systemd: Starting Session|User Slice|dbus' | less
cat /var/log/messages | egrep -v 'systemd: Started Session|systemd: Starting Session|User Slice|dbus|PackageKit Daemon' | less
```

But now I know how to do the same thing inside `less`. For example, I can type `&!systemd: Started Session`, then decide I also want to hide `systemd: Starting Session`, so I type `&!` and use the up arrow to get the previous search from the history. Then I type `|systemd: Starting Session`, press Enter, and keep adding more entries the same way until I have filtered out enough to see the more interesting content.
### =

The `=` command shows more information about the file and your position in it, even more than `-M`. If the file is very large and `=` takes too long to compute, you can press `Ctrl+C` and it will stop trying.

If the content you are viewing comes from a pipe rather than a file, `=` (and `-M`) cannot show what it does not know, including the number of lines and bytes in the file. To see that data, if you know the piped command will finish soon, you can press `G` to jump to the end, and then `less` will start showing that information.

If you press `G` and the command writing to the pipe takes longer than expected, you can press `Ctrl+C`, which kills the command. Pressing `Ctrl+C` kills it even if you didn't press `G`, so don't press `Ctrl+C` accidentally if you don't want to terminate it. For this reason, if the command does something (rather than just displaying information), it is usually safer to write its output to a file and view the file in a separate terminal, instead of using a pipe.
### Why you need less

`less` is a very powerful program, and unlike newer contenders in this space, such as `most` and `moar`, you are likely to find it on almost every system, just like `vi`. So, even if you use GUI viewers or editors, it is worth investing some time going over the `less` man page, at least to get an idea of what it can do. That way, when you need something its existing functionality might provide, you will know to search the man page or the internet to find what you need.

For more information, visit the [less home page][2]. The site has a nice FAQ with more tips and tricks.
--------------------------------------------------------------------------------

via: https://opensource.com/article/18/5/advanced-use-less-text-file-viewer

Author: [Yedidyah Bar David][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [MjSeven](https://github.com/MjSeven)
Proofreader: [wxy](https://github.com/wxy)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)

[a]:https://opensource.com/users/didib
[1]:http://opensource.com/article/18/4/using-less-view-text-files-command-line
[2]:http://www.greenwoodsoftware.com/less/
[#]: collector: "lujun9972"
[#]: translator: "way-ww"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-12027-1.html"
[#]: subject: "How To Enable Or Disable SSH Access For A Particular User Or Group In Linux?"
[#]: via: "https://www.2daygeek.com/allow-deny-enable-disable-ssh-access-user-group-in-linux/"
[#]: author: "2daygeek http://www.2daygeek.com/author/2daygeek/"
How to enable or disable SSH access for a particular user or group in Linux?
======

![](https://img.linux.net.cn/data/attachment/album/202003/23/105915r1azn34i82sp48ca.jpg)

Your company standards may allow only certain people to access your Linux systems, or only users in a few groups. How do you implement such a requirement? What is the best way to do it? How can it be done with a simple method?

Yes, there are many ways to achieve it, but we should use a simple and easy method. For that, we can make the necessary changes in the `/etc/ssh/sshd_config` file. In this article, we will show you the detailed steps.

Why do we do this? For security reasons. You can visit [this link][1] to learn more about OpenSSH usage.
### What is SSH?

OpenSSH stands for OpenBSD Secure Shell. Secure Shell (SSH) is a free and open-source networking tool that lets us securely access remote hosts over an insecure network using the Secure Shell (SSH) protocol.

It uses a client-server architecture and offers user authentication, encryption, and file transfer between computers through a tunnel.

We could also use traditional tools such as `telnet` or `rcp`, but they are not secure, because they transmit passwords in plain text when performing any action.
### How to allow a user to use SSH in Linux?

With the following, we can enable `ssh` access for a specified user or list of users. If you want to allow more than one user, add them on the same line, separated by spaces.

To do so, just append the following value to the `/etc/ssh/sshd_config` file. In this example, we will allow the user `user3` to use ssh.

```
# echo "AllowUsers user3" >> /etc/ssh/sshd_config
```
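If several people need access, `AllowUsers` takes all of them on one line, separated by spaces. A sketch against a scratch copy of the file (the user names are this article's examples, and `sshd_config.test` is just a throwaway file, not the real configuration):

```shell
# One AllowUsers directive, several users separated by spaces
echo "AllowUsers user1 user2 user3" >> sshd_config.test
cat sshd_config.test   # shows: AllowUsers user1 user2 user3 (if the file was new)
```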
You can double-check that it was added by running the following command:

```
# cat /etc/ssh/sshd_config | grep -i allowusers
AllowUsers user3
```
That's it. Now just restart the `ssh` service and see the magic. (The two commands below have the same effect; run whichever matches your service-management style.)

```
# systemctl restart sshd
or
# service sshd restart
```
Next, it's simple: just open a new terminal or session and try to access the Linux system as a different user. Yes, here the user `user2` is not allowed to log in via SSH and will get an error message as shown below.

```
# ssh user2@192.168.1.4
user2@192.168.1.4's password:
Permission denied, please try again.
```
Output:

```
Mar 29 02:00:35 CentOS7 sshd[4900]: User user2 from 192.168.1.6 not allowed because not listed in AllowUsers
Mar 29 02:00:35 CentOS7 sshd[4900]: input_userauth_request: invalid user user2 [preauth]
Mar 29 02:00:40 CentOS7 unix_chkpwd[4902]: password check failed for user (user2)
Mar 29 02:00:40 CentOS7 sshd[4900]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.1.6 user=user2
Mar 29 02:00:43 CentOS7 sshd[4900]: Failed password for invalid user user2 from 192.168.1.6 port 42568 ssh2
```
Meanwhile, the user `user3` is allowed to log in because he is in the allowed-users list.

```
# ssh user3@192.168.1.4
user3@192.168.1.4's password:
[user3@CentOS7 ~]$
```

Output:

```
Mar 29 02:01:13 CentOS7 sshd[4939]: Accepted password for user3 from 192.168.1.6 port 42590 ssh2
Mar 29 02:01:13 CentOS7 sshd[4939]: pam_unix(sshd:session): session opened for user user3 by (uid=0)
```
### How to deny a user from using SSH in Linux?

With the following, we can disable `ssh` access for a specified user or list of users. If you want to deny more than one user, add them on the same line, separated by spaces.

To do so, just append the following value to the `/etc/ssh/sshd_config` file. In this example, we will deny the user `user1` from using `ssh`.

```
# echo "DenyUsers user1" >> /etc/ssh/sshd_config
```
You can double-check that it was added by running the following command:

```
# cat /etc/ssh/sshd_config | grep -i denyusers
DenyUsers user1
```
That's it. Now just restart the `ssh` service and see the magic.

```
# systemctl restart sshd
or
# service sshd restart
```
Next, it's simple: just open a new terminal or session and try to access the Linux system as the denied user. Yes, here the user `user1` is in the deny list, so when you try to log in, you will get an error message as shown below.

```
# ssh user1@192.168.1.4
user1@192.168.1.4's password:
Permission denied, please try again.
```
Output:

```
Mar 29 01:53:42 CentOS7 sshd[4753]: User user1 from 192.168.1.6 not allowed because listed in DenyUsers
Mar 29 01:53:42 CentOS7 sshd[4753]: input_userauth_request: invalid user user1 [preauth]
Mar 29 01:53:46 CentOS7 unix_chkpwd[4755]: password check failed for user (user1)
Mar 29 01:53:46 CentOS7 sshd[4753]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.1.6 user=user1
Mar 29 01:53:48 CentOS7 sshd[4753]: Failed password for invalid user user1 from 192.168.1.6 port 42522 ssh2
```
### How to allow a group to use SSH in Linux?

With the following, we can allow a specified group or groups to use `ssh`.

If you want to allow more than one group to use `ssh`, add them on the same line, separated by spaces.

To do so, just append the following value to the `/etc/ssh/sshd_config` file. In this example, we will allow the `2g-admin` group to use ssh.

```
# echo "AllowGroups 2g-admin" >> /etc/ssh/sshd_config
```
You can double-check that it was added by running the following command:

```
# cat /etc/ssh/sshd_config | grep -i allowgroups
AllowGroups 2g-admin
```
Run the following command to see which users belong to that group:

```
# getent group 2g-admin
2g-admin:x:1005:user1,user2,user3
```
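The member names reported by `getent` are the fourth colon-separated field of the group entry. If you want just that list in a script, `cut` can extract it (using the sample line from above):

```shell
# Field 4 of a group(5) entry is the comma-separated member list
echo '2g-admin:x:1005:user1,user2,user3' | cut -d: -f4   # prints: user1,user2,user3
```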
That's it. Now just restart the `ssh` service and see the magic.

```
# systemctl restart sshd
or
# service sshd restart
```
Yes, `user1` is allowed to log in because the user `user1` belongs to the `2g-admin` group.

```
# ssh user1@192.168.1.4
user1@192.168.1.4's password:
[user1@CentOS7 ~]$
```

Output:

```
Mar 29 02:10:21 CentOS7 sshd[5165]: Accepted password for user1 from 192.168.1.6 port 42640 ssh2
Mar 29 02:10:22 CentOS7 sshd[5165]: pam_unix(sshd:session): session opened for user user1 by (uid=0)
```
Yes, `user2` is allowed to log in as well, because the user `user2` also belongs to the `2g-admin` group.

```
# ssh user2@192.168.1.4
user2@192.168.1.4's password:
[user2@CentOS7 ~]$
```

Output:

```
Mar 29 02:10:38 CentOS7 sshd[5225]: Accepted password for user2 from 192.168.1.6 port 42642 ssh2
Mar 29 02:10:38 CentOS7 sshd[5225]: pam_unix(sshd:session): session opened for user user2 by (uid=0)
```
When you try to log in as any other user who is not in the allowed group, you will get an error message as shown below.

```
# ssh ladmin@192.168.1.4
ladmin@192.168.1.4's password:
Permission denied, please try again.
```

Output:

```
Mar 29 02:12:36 CentOS7 sshd[5306]: User ladmin from 192.168.1.6 not allowed because none of user's groups are listed in AllowGroups
Mar 29 02:12:36 CentOS7 sshd[5306]: input_userauth_request: invalid user ladmin [preauth]
Mar 29 02:12:56 CentOS7 unix_chkpwd[5310]: password check failed for user (ladmin)
Mar 29 02:12:56 CentOS7 sshd[5306]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.1.6 user=ladmin
Mar 29 02:12:58 CentOS7 sshd[5306]: Failed password for invalid user ladmin from 192.168.1.6 port 42674 ssh2
```
### How to deny a group from using SSH in Linux?

With the following, we can deny a specified group or groups from using `ssh`.

If you want to deny more than one group from using `ssh`, add them on the same line, separated by spaces.

To do so, just append the following value to the `/etc/ssh/sshd_config` file.

```
# echo "DenyGroups 2g-admin" >> /etc/ssh/sshd_config
```
You can double-check that it was added by running the following command:

```
# cat /etc/ssh/sshd_config | grep -i denygroups
DenyGroups 2g-admin

# getent group 2g-admin
2g-admin:x:1005:user1,user2,user3
```
That's it. Now just restart the `ssh` service and see the magic.

```
# systemctl restart sshd
or
# service sshd restart
```
Yes, `user1` is not allowed to log in, because he is a member of the `2g-admin` group, which is in the denied-groups list.

```
# ssh user1@192.168.1.4
user1@192.168.1.4's password:
Permission denied, please try again.
```

Output:

```
Mar 29 02:17:32 CentOS7 sshd[5400]: User user1 from 192.168.1.6 not allowed because a group is listed in DenyGroups
Mar 29 02:17:32 CentOS7 sshd[5400]: input_userauth_request: invalid user user1 [preauth]
Mar 29 02:17:38 CentOS7 unix_chkpwd[5402]: password check failed for user (user1)
Mar 29 02:17:38 CentOS7 sshd[5400]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.1.6 user=user1
Mar 29 02:17:41 CentOS7 sshd[5400]: Failed password for invalid user user1 from 192.168.1.6 port 42710 ssh2
```
Users outside the `2g-admin` group can still log in via ssh. For example, users such as `ladmin` are allowed to log in.

```
# ssh ladmin@192.168.1.4
ladmin@192.168.1.4's password:
[ladmin@CentOS7 ~]$
```

Output:

```
Mar 29 02:19:13 CentOS7 sshd[5432]: Accepted password for ladmin from 192.168.1.6 port 42716 ssh2
Mar 29 02:19:13 CentOS7 sshd[5432]: pam_unix(sshd:session): session opened for user ladmin by (uid=0)
```
--------------------------------------------------------------------------------

via: https://www.2daygeek.com/allow-deny-enable-disable-ssh-access-user-group-in-linux/

Author: [2daygeek][a]
Topic selection: [lujun9972][b]
Translator: [way-ww](https://github.com/way-ww)
Proofreader: [wxy](https://github.com/wxy)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)

[a]: http://www.2daygeek.com/author/2daygeek/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/category/ssh-tutorials/
[#]: collector: (lujun9972)
[#]: translator: (mengxinayan)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12000-1.html)
[#]: subject: (How to structure a multi-file C program: Part 2)
[#]: via: (https://opensource.com/article/19/7/structure-multi-file-c-part-2)
[#]: author: (Erik O'Shaughnessy https://opensource.com/users/jnyjny)
How to structure a multi-file C program: Part 2
======

> In the second part of this series, I dig deeper into the structure of a C program composed of multiple files.

![](https://img.linux.net.cn/data/attachment/album/202003/16/122928i6qheufnh24jq2qf.jpg)

In [Part 1][2], I designed a multi-file C program called [MeowMeow][3] that implements a toy [codec][4]. I also mentioned the Unix philosophy of program design: creating several empty files at the start and establishing a good structure. Finally, I created a `Makefile` and described what it does. In this article, I go in the other direction: now I walk through the simple but instructive implementation of the MeowMeow codec.

After reading my article "[How to write a good C main function][5]", the structure of the MeowMeow codec's `main.c` file will look familiar. Its skeleton is as follows:
```
/* main.c - the MeowMeow stream codec */

/* 00 system includes */
/* 01 project includes */
/* 02 externs */
/* 03 defines */
/* 04 typedefs */
/* 05 globals (but don't) */
/* 06 ancillary function prototypes */

int main(int argc, char *argv[])
{
  /* 07 variable declarations */
  /* 08 check argv[0] to see how the program was invoked */
  /* 09 process the command-line options from the user */
  /* 10 do something useful */
}

/* 11 ancillary functions */
```
### Including project header files

The source code found at `/* 01 project includes */` in the second section reads like this:

```
/* main.c - the MeowMeow stream codec */
...
/* 01 project includes */
#include "main.h"
#include "mmencode.h"
#include "mmdecode.h"
```
`#include` is a C preprocessor directive that copies the contents of the named file into the current file. If the programmer uses double quotes (`""`) around the header file name, the compiler looks for the file in the current directory. If the name is enclosed in angle brackets (`<>`), the compiler looks for it in a set of predefined directories.

The [main.h][6] file contains the definitions and typedefs used in [main.c][7]. I like to put as many declarations as possible in header files so that I can use those definitions elsewhere in my program.

The header files [mmencode.h][8] and [mmdecode.h][9] are nearly identical, so I'll analyze `mmencode.h` as the example.
```
/* mmencode.h - the MeowMeow stream codec */

#ifndef _MMENCODE_H
#define _MMENCODE_H

#include <stdio.h>

int mm_encode(FILE *src, FILE *dst);

#endif /* _MMENCODE_H */
```
The `#ifndef`, `#define`, and `#endif` directives are collectively known as a "guard". They keep the C compiler from including the same file more than once per compilation unit. The compiler will complain if it finds multiple definitions/prototypes/declarations, so the guard is a must-have.

Inside the guard, there are only two things: an `#include` directive and a function prototype declaration. I include `stdio.h` here so that the `FILE` definition can be used in the prototype. The prototype can then be included by other C files to establish it in their namespace. You can think of each file as a separate namespace, whose variables and functions cannot be used by functions or variables in another file.

Writing header files is complex, and it is tough to manage in larger projects. Don't forget to use guards.
### The implementation of the MeowMeow encoding, finally

The heart of this program, encoding and decoding bytes into/out of `MeowMeow` strings, is in fact the easy part of this project. All of the work up to this point has been to allow this function to be called in the right place: parsing the command line, determining which operation to use, and opening the files to operate on. The following loop does the encoding:
```
/* mmencode.c - the MeowMeow stream codec */
...
  while (!feof(src)) {

    if (!fgets(buf, sizeof(buf), src))
      break;

    for(i=0; i<strlen(buf); i++) {
      lo = (buf[i] & 0x000f);
      hi = (buf[i] & 0x00f0) >> 4;
      fputs(tbl[hi], dst);
      fputs(tbl[lo], dst);
    }
  }
```
In plain language, this loop reads (`fgets(3)`) a chunk of the file while there are more chunks (`feof(3)`). Then it splits each byte it read into two <ruby>nibbles<rt>nibble</rt></ruby>, `hi` and `lo`. A nibble is half of a byte, or 4 bits. The real magic here is that 4 bits can encode 16 values. I use `hi` and `lo` as indices into a 16-string lookup table, `tbl`, that contains the `MeowMeow` strings that encode each nibble. Those strings are written to the destination `FILE` stream using `fputs(3)`, and then we move on to the next byte in the buffer.
The table is initialized with a macro defined in [table.h][14], for no particular reason other than to demonstrate including another project-local header file, and because I like initializing with macros. I will dive deeper into why in a future article.
### The implementation of the MeowMeow decoding

I'll admit it took me a while before starting this part. The decode loop is similar to the encode loop: read `MeowMeow` strings into a buffer and convert the encoding from strings back into bytes.

```
/* mmdecode.c - the MeowMeow stream codec */
...
int mm_decode(FILE *src, FILE *dst)
{
  if (!src || !dst) {
    errno = EINVAL;
    return -1;
  }
  return stupid_decode(src, dst);
}
```
Not what you were expecting?

Here, I hide the details of the `stupid_decode()` function behind the externally visible `mm_decode()` function. By "externally" I mean outside this file; since `stupid_decode()` is not in the header file, it cannot be called from other files.

We sometimes do this when we want to publish a solid public interface, but haven't yet completely solved the problem with the function. In my case, I've written an I/O-intensive function that reads 8 bytes at a time from the source to decode 1 byte to write to the destination stream. A better implementation would work on a buffer bigger than 8 bytes at a time. A better implementation would also buffer the output bytes to reduce the number of single-byte writes to the destination stream.
```
/* mmdecode.c - the MeowMeow stream codec */
...
int stupid_decode(FILE *src, FILE *dst)
{
  char buf[9];
  decoded_byte_t byte;
  int i;

  while (!feof(src)) {
    if (!fgets(buf, sizeof(buf), src))
      break;
    byte.field.f0 = isupper(buf[0]);
    byte.field.f1 = isupper(buf[1]);
    byte.field.f2 = isupper(buf[2]);
    byte.field.f3 = isupper(buf[3]);
    byte.field.f4 = isupper(buf[4]);
    byte.field.f5 = isupper(buf[5]);
    byte.field.f6 = isupper(buf[6]);
    byte.field.f7 = isupper(buf[7]);

    fputc(byte.value, dst);
  }
  return 0;
}
```
Instead of using the bit-shifting approach I used in the encoder, I created a custom data structure called `decoded_byte_t`.
```
/* mmdecode.c - the MeowMeow stream codec */
...

typedef struct {
  unsigned char f7:1;
  unsigned char f6:1;
  unsigned char f5:1;
  unsigned char f4:1;
  unsigned char f3:1;
  unsigned char f2:1;
  unsigned char f1:1;
  unsigned char f0:1;
} fields_t;

typedef union {
  fields_t field;
  unsigned char value;
} decoded_byte_t;
```
It may look complicated at first glance, but don't give up. `decoded_byte_t` is defined as a **union** of a `fields_t` and an `unsigned char`. The named members of a union can be thought of as aliases for the same region of memory. In this case, `value` and `field` refer to the same 8-bit region of memory. Setting `field.f0` to `1` would also set the least significant bit in `value`.

While `unsigned char` shouldn't be a mystery, the `typedef` for `fields_t` may look unfamiliar. Modern C compilers allow programmers to specify single-bit fields in a structure. The field's type is an unsigned integral type, and the member identifier is followed by a colon and an integer that specifies the length of the bit field.

This data structure makes it simple to access each bit in the byte by its field name, and the combined value can be accessed through the union's `value` field. We rely on the compiler to generate the correct shift instructions to access the fields, which can save you a lot of heartburn when you are debugging.
Lastly, `stupid_decode()` is inefficient because it only reads 8 bytes at a time from the source `FILE` stream. Usually, we try to minimize the number of reads and writes to improve performance and reduce the overhead of system calls. Remember: a small number of reads/writes of large blocks is much better than a large number of reads/writes of small blocks.
### Summary

Writing a multi-file program in C requires a little more planning on the programmer's part than just a `main.c`. But just a little effort up front, when you add functionality or refactor, can save a lot of time as well as headaches.

To recap, I prefer to have: lots of files, each with just a few simple functions; a small number of functions from those files exposed through header files; numeric and string constants kept in header files; a `Makefile` instead of Bash scripts to automate things; and a `main()` function that handles command-line argument parsing and acts as a scaffold for the program's main functionality.

I know I've only scratched the surface of this simple program, and I'm excited to hear which things were helpful to you and which topics need more detailed explanation. Share your thoughts in the comments to let me know.
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/7/structure-multi-file-c-part-2

Author: [Erik O'Shaughnessy][a]
Topic selection: [lujun9972][b]
Translator: [萌新阿岩](https://github.com/mengxinayan)
Proofreader: [wxy](https://github.com/wxy)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)

[a]: https://opensource.com/users/jnyjny
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/file_system.jpg?itok=pzCrX1Kc (4 manilla folders, yellow, green, purple, blue)
[2]: https://linux.cn/article-11935-1.html
[3]: https://github.com/jnyjny/MeowMeow.git
[4]: https://en.wikipedia.org/wiki/Codec
[5]: https://linux.cn/article-10949-1.html
[6]: https://github.com/JnyJny/meowmeow/blob/master/main.h
[7]: https://github.com/JnyJny/meowmeow/blob/master/main.c
[8]: https://github.com/JnyJny/meowmeow/blob/master/mmencode.h
[9]: https://github.com/JnyJny/meowmeow/blob/master/mmdecode.h
[10]: http://www.opengroup.org/onlinepubs/009695399/functions/feof.html
[11]: http://www.opengroup.org/onlinepubs/009695399/functions/fgets.html
[12]: http://www.opengroup.org/onlinepubs/009695399/functions/strlen.html
[13]: http://www.opengroup.org/onlinepubs/009695399/functions/fputs.html
[14]: https://github.com/JnyJny/meowmeow/blob/master/table.h
[15]: http://www.opengroup.org/onlinepubs/009695399/functions/isupper.html
[16]: http://www.opengroup.org/onlinepubs/009695399/functions/fputc.html
published/20191223 10 articles to learn Linux your way.md
[#]: collector: "lujun9972"
[#]: translator: "messon007"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-12035-1.html"
[#]: subject: "10 articles to learn Linux your way"
[#]: via: "https://opensource.com/article/19/12/learn-linux"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
10 articles to learn Linux your way
=======

> 2019 was a great year for Linux; let's look back at the 10 best articles about it.

![](https://img.linux.net.cn/data/attachment/album/202003/25/115447rrjfuufccumf0oz6.jpg)

2019 was a great year for Linux, and clearly the word "Linux" here means more than one thing: the kernel? the desktop? the ecosystem? In this year-in-review of great Linux articles, I deliberately took a broader view when choosing the top 10. Here they are, in no particular order.
### A beginner's guide to Linux permissions

Bryant Son's "[A beginner's guide to Linux permissions][2]" introduces new users to the concept of file permissions, illustrating every point with graphics and charts. It is usually hard to explain a purely text-based concept visually, and this article is very friendly to visual learners. Bryant also stays focused: any discussion of file permissions can lead to several related topics (such as ownership and access control lists), but this article is dedicated to explaining one thing and explaining it well.
### Why I switched from Mac to Linux

In "[Why I switched from Mac to Linux][3]", Matthew Broberg gives a clear and honest account of his migration from MacOS to Linux. Switching platforms is usually hard, so it is important to record the considerations behind the decision. Matt's article serves several purposes, I think, but the two most important for me: he invites the Linux community to support him by answering his questions and offering potential solutions, and it is a great reference for anyone else considering adopting Linux.
### Troubleshooting slow WiFi on Linux

In "[Troubleshooting slow WiFi on Linux][4]", David Clinton analyzes a problem everyone runs into and offers a step-by-step way to work through it. It is a good example of an "incidentally Linux" tip, one that not only helps people with a recurring problem but can also show non-Linux users how troubleshooting is done on any platform.
### A non-technical person's perspective on GitLab in the GNOME project

Molly de Blanc's "[A non-technical person's perspective on GitLab in the GNOME project][5]" takes a deep look at how one paragon of open source (the GNOME desktop) uses another (Git) for its development. It is always heartening to hear about an open source project that defaults to open source solutions for whatever needs doing. It is not as common as you might think, yet for GNOME it is an important and welcome part of the project itself.
### Virtual filesystems in Linux, explained

In "[Virtual filesystems in Linux, explained][6]", Alison Chaiken deftly explains something many users find hard to grasp. Understanding what a filesystem is, and that a virtual filesystem and a real filesystem behave the same way, is one thing; but by definition, something *virtual* is not actually a real filesystem. Linux provides them in a way even ordinary users can benefit from, and Alison's article explains it in an understandable way. In the second half, Alison goes deeper and shows how to use `bcc` scripts to observe the virtual filesystems she just described.
### Understanding file paths and how to use them

I think "[Understanding file paths and how to use them][7]" is important because it is a concept most users (on any platform) never seem to be taught. It is a strange phenomenon, because now, more than ever, people see *file paths* constantly: almost every internet address contains a file path telling you exactly where within the domain you are. I often wonder why computer education doesn't start with the internet, the most familiar application of our day and arguably the most heavily used supercomputer in existence, which could be used to explain the devices we use every day. (It would help if those devices ran Linux, but we're working on that.)
### Inter-process communication in Linux: Shared storage

Marty Kalin's "[Inter-process communication in Linux: Shared storage][8]" explains IPC from a Linux developer's perspective and shows how to use it in code. Although I am only listing this one article, it is actually part of a three-article series, and it is the best explanation of its kind. Very little documentation explains how Linux handles IPC, much less what IPC is, why it matters, or how to use it when programming. Usually, it is a topic you study at university. Now you can read all about it here.
### Understanding system calls on Linux with strace

Gaurav Kamathe's "[Understanding system calls on Linux with strace][9]" is highly technical in a way I wish every conference talk I've seen about `strace` were. It is a clear demonstration of a complex but extremely useful command. To my surprise, reading this article made me realize that the command I had been using all along was not this one but `ltrace` (which shows the functions a command calls). The article is packed with information and is a quick reference for developers and testers.
### How the Linux desktop has grown

Jim Hall's "[How the Linux desktop has grown][10]" is a visual journey through the history of the Linux desktop. Starting with [TWM][11], it passes through [FVWM][12], [GNOME][13], [KDE][14], and more. If you are new to Linux, this is a fascinating history lesson from someone who was there (and has the screenshots to prove it). If you have been using Linux for many years, it will certainly bring back memories. In the end, one thing is certain: anyone who can still find screenshots from 20 years ago is a godlike data archivist.
### Build your own video streaming server with Linux

Aaron J. Prisk's "[Build your own video streaming server with Linux][15]" clears up a misconception most of us hold about services we take for granted. Because of services like YouTube and Twitch, many people assume those are the only gateways to broadcasting video to the world. Of course, people used to assume that Windows and Mac were the only gateways into computing, and thankfully that proved to be a profound misunderstanding. In this article, Aaron sets up a video streaming server and even finds time to discuss [OBS][16] along the way, so you can create videos. Is it a fun weekend project or the beginning of a new career? You decide.
### 10 moments that shaped Linux history

Alan Formy-Duval's "[10 moments that shaped Linux history][17]" attempts the formidable task of picking just 10 representative things out of Linux's history. It is hard, of course, because there are so many important moments, so I wanted to see how Alan chose them through his own experience. For example, when did it become clear that Linux was here to stay? When Alan realized that all the systems he maintained were running Linux. Explaining history this way is beautiful, because everyone's important moments will differ. There is no authoritative list about Linux, nor about Linux articles, nor about open source. You make your own list, and you make yourself part of it.

(Translator's note: that makes 11 recommendations; I counted several times, and my eyes are fine...)
### What do you want to learn?

What else do you want to know about Linux? Tell us in the comments, or write about your own Linux experience.
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/12/learn-linux

Author: [Seth Kenlon][a]
Topic selection: [lujun9972][b]
Translator: [messon007](https://github.com/messon007)
Proofreader: [wxy](https://github.com/wxy)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_Penguin_Image_520x292_12324207_0714_mm_v1a.png?itok=p7cWyQv9 "Penguins gathered together in the Artic"
[2]: https://linux.cn/article-11056-1.html
[3]: https://linux.cn/article-11586-1.html
[4]: http://opensource.com/article/19/4/troubleshooting-wifi-linux
[5]: https://linux.cn/article-11806-1.html
[6]: https://linux.cn/article-10884-1.html
[7]: https://opensource.com/article/19/8/understanding-file-paths-linux
[8]: https://linux.cn/article-10826-1.html
[9]: https://linux.cn/article-11545-1.html
[10]: https://opensource.com/article/19/8/how-linux-desktop-grown
[11]: https://github.com/freedesktop/twm
[12]: http://www.fvwm.org/
[13]: http://gnome.org
[14]: http://kde.org
[15]: https://opensource.com/article/19/1/basic-live-video-streaming-server
[16]: https://opensource.com/life/15/12/real-time-linux-video-editing-with-obs-studio
[17]: https://opensource.com/article/19/4/top-moments-linux-history
[18]: https://opensource.com/how-submit-article
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12011-1.html)
[#]: subject: (10 Linux command tutorials for beginners and experts)
[#]: via: (https://opensource.com/article/19/12/linux-commands)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
10 Linux command tutorials for beginners and experts
======

> Learn how to make Linux do what you need it to do in our top 10 articles about Linux commands.

![](https://img.linux.net.cn/data/attachment/album/202003/19/095932xc64xw7cwqlolale.jpg)

Using Linux **well** means understanding what commands are available and what they can do for you.
### Using the force at the Linux command line

The force has a light side and a dark side. Understanding that is critical to true mastery. In his article "[Using the force at the Linux command line][2]", Alan Formy-Duval explains the `-f` option (also known as `--force`) of several popular, and sometimes dangerous, commands.
### Intro to the Linux useradd command

Sharing accounts is a bad idea. Instead, give separate accounts to different people (and even to different roles) with the quintessential `useradd` command. As part of his classic Linux administration basics series, Alan Formy-Duval provides an "[Intro to the Linux useradd command][3]", and, as usual, he explains it in **plain English** so that both new and experienced admins can understand it.
### Linux commands to display your hardware information

What is **inside** the machine? Sometimes it is useful to inspect your hardware without using a screwdriver. Whether it is the computer you are using, one you are testing before buying at a store, or one you are trying to repair, in "[Linux commands to display your hardware information][4]", Howard Fosdick provides both popular and obscure commands to help you dig deep into your computer's hardware.
### How to encrypt files with gocryptfs on Linux

From social security numbers to personal letters to loved ones, our files hold plenty of private data. In "[How to encrypt files with gocryptfs on Linux][5]", Brian "Bex" Exelbierd explains how to keep **private** data private. And he shows a way of encrypting files that has little to no impact on your existing workflow. It is not a complicated PGP-style puzzle of key management and background key agents; this is fast, seamless, and secure file encryption.
### How to use advanced rsync for large Linux backups

In the new year, many people will resolve to be more diligent about backups. Alan Formy-Duval settled on a solution years ago, because in "[How to use advanced rsync for large Linux backups][6]" he shows deep familiarity with the file-synchronization command. You may not remember all the syntax right away, but the idea is to read and process the options, construct your backup command, and then automate it. That is the smart way to use `rsync`, and the **only** way to perform backups reliably.
### Viewing text files at the Linux command line with more

In Scott Nesbitt's article "[Viewing text files at the Linux command line with more][7]", the good old default pager `more` gets some overdue attention. Many people install and use `less` because it is more flexible than `more`. However, with more and more systems implemented in fresh containers, shiny luxury tools like `less` or `most` sometimes simply do not exist. Knowing and using `more` is simple, it is a common default, and it is the production system's debugging tool of last resort.
### What you probably didn't know about sudo

The `sudo` command is famous to a fault. People know the word `sudo`, and most of us believe we know what it does. And we are a little bit right, but as Peter Czanik reveals in his article "[What you probably didn't know about sudo][8]", there is a lot more to the command than "Simon says" (Translator's note: a children's game). Like that classic childhood game, the `sudo` command is powerful, and also prone to silly mistakes — except with a greater potential for dreadful consequences, which is something you never want to happen!
### How to program with Bash: syntax and tools

If you are a Linux, BSD, or Mac (and lately, Windows) user, you have probably used the interactive Bash shell. It is a great shell for quick, one-off commands, which is why so many Linux users love to use it as their primary user interface. But Bash is much more than a command prompt. It is also a programming language, and if you are already using Bash commands, the road to automation has never been easier. Learn all about it in David Both's excellent "[How to program with Bash: syntax and tools][9]".
### Master the Linux ls command

The `ls` command is one of those two-letter commands. One-letter commands are an optimization for slow terminals, where every typed letter causes a noticeable delay, and also a nice bonus for lazy typists. As always, Seth Kenlon explains, clearly and practically, how you can "[Master the Linux ls command][10]". Most importantly, in a system where "everything is a file", listing files is vital.
### Getting started with the Linux cat command

The `cat` command (short for con*cat*enate) looks deceptively simple. Whether you use it to quickly view the contents of a file or to pipe the contents to another command, you may not be using `cat` to its full potential. Alan Formy-Duval's "[Getting started with the Linux cat command][11]" offers some new ideas for seeing file contents without feeling like you have opened the file at all. Also, learn all about `zcat`, so you can get the contents of compressed files without decompressing them! It is a small, simple thing, but **this** is what makes Linux great.
### Continue the journey

Don't let these 10 great articles about Linux commands be the end of your journey. There is much more to discover about Linux and its versatile prompt, so stay tuned for more. And if there is a Linux command you want us to cover, please tell us about it in the comments.
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/12/linux-commands

Author: [Moshe Zadka][a]
Topic selection: [lujun9972][b]
Translator: [wxy](https://github.com/wxy)
Proofreader: [wxy](https://github.com/wxy)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)

[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/car-penguin-drive-linux-yellow.png?itok=twWGlYAc (Penguin driving a car with a yellow background)
[2]: https://linux.cn/article-10881-1.html
[3]: https://linux.cn/article-11756-1.html
[4]: https://linux.cn/article-11422-1.html
[5]: https://opensource.com/article/19/8/how-encrypt-files-gocryptfs
[6]: https://linux.cn/article-10865-1.html
[7]: https://linux.cn/article-10531-1.html
[8]: https://linux.cn/article-11595-1.html
[9]: https://linux.cn/article-11552-1.html
[10]: https://linux.cn/article-11159-1.html
[11]: https://opensource.com/article/19/2/getting-started-cat-command
[12]: https://opensource.com/how-submit-article
142
published/20200123 6 things you should be doing with Emacs.md
Normal file
@ -0,0 +1,142 @@

[#]: collector: (lujun9972)
[#]: translator: (lujun9972)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12004-1.html)
[#]: subject: (6 things you should be doing with Emacs)
[#]: via: (https://opensource.com/article/20/1/emacs-cheat-sheet)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

6 件你应该用 Emacs 做的事
======

> 下面六件事情你可能都没有意识到可以在 Emacs 下完成。此外还有我们的新备忘单,拿去,充分利用 Emacs 的功能吧。

![](https://img.linux.net.cn/data/attachment/album/202003/17/133738wjj66p2safcpc50z.jpg)

想象一下使用 Python 的 IDLE 界面来编辑文本。你可以将文件加载到内存中,编辑它们,并保存更改。但是你执行的每个操作都由 Python 函数定义。例如,调用 `upper()` 来让一个单词全部大写,调用 `open` 打开文件,等等。文本文档中的所有内容都是 Python 对象,可以进行相应的操作。从用户的角度来看,这与其他文本编辑器的体验一致。对于 Python 开发人员来说,这是一个丰富的 Python 环境,只需在配置文件中添加几个自定义函数就可以对其进行更改和开发。

这就是 [Emacs][2] 对 1958 年的编程语言 [Lisp][3] 所做的事情。在 Emacs 中,运行应用程序的 Lisp 引擎与输入文本之间无缝结合。对 Emacs 来说,一切都是 Lisp 数据,因此一切都可以通过编程进行分析和操作。

这造就了一个强大的用户界面(UI)。但是,如果你是 Emacs 的普通用户,你可能对它的能力知之甚少。下面是你可能没有意识到 Emacs 可以做的六件事。

### 使用 Tramp 模式进行云端编辑

Emacs 早在网络流行化之前就实现了透明的网络编辑能力了,而且时至今日,它仍然提供了最流畅的远程编辑体验。Emacs 中的 [Tramp 模式][4](以前称为 RPC 模式)代表着 “<ruby>透明的远程(文件)访问,多协议<rt>Transparent Remote (file) Access, Multiple Protocol</rt></ruby>”,这准确说明了它提供的功能:通过最流行的网络协议轻松访问你希望编辑的远程文件。目前最流行、最安全的能用于远程编辑的协议是 [OpenSSH][5],因此 Tramp 使用它作为默认的协议。

在 Emacs 22.1 或更高版本中已经包含了 Tramp,因此要使用 Tramp,只需使用 Tramp 语法打开一个文件。在 Emacs 的 “File” 菜单中,选择 “Open File”。当在 Emacs 窗口底部的小缓冲区中出现提示时,使用以下语法输入文件名:

```
/ssh:user@example.com:/path/to/file
```

如果需要交互式登录,Tramp 会提示你输入密码。但是,Tramp 直接使用 OpenSSH,所以为了避免交互提示,你可以将主机名、用户名和 SSH 密钥路径添加到你的 `~/.ssh/config` 文件。与 Git 一样,Emacs 首先使用你的 SSH 配置,只有在出现错误时才会停下来询问更多信息。

Tramp 非常适合编辑并没有放在你的计算机上的文件,它的用户体验与编辑本地文件没有明显的区别。下次,当你 SSH 到服务器启动 Vim 或 Emacs 会话时,请尝试使用 Tramp。
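
上文提到可以把主机名、用户名和 SSH 密钥路径写入 `~/.ssh/config` 来避免交互提示。下面是一个最小的配置片段示意(其中的主机名 `example.com`、用户名 `user` 和密钥路径均为示意用的假设值,请替换为你自己的):

```
# ~/.ssh/config
Host example.com
    User user
    IdentityFile ~/.ssh/id_ed25519
```

配置好之后,在 Emacs 中打开 `/ssh:example.com:/path/to/file` 时,Tramp 便可以直接复用这份 OpenSSH 配置,不再需要交互式输入。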

### 日历

如果你喜欢文本多过图形界面,那么你一定会很高兴地知道,可以使用 Emacs 以纯文本的方式安排你的日程(或生活),而且你依然可以在移动设备上使用开源的 [Org 模式][6]查看器来获得华丽的通知。

这个过程需要一些配置,以创建一个方便的方式来与移动设备同步你的日程(我使用 Git,但你可以调用蓝牙、KDE Connect、Nextcloud,或其他文件同步工具),此外你必须在移动设备上安装一个 Org 模式查看器(如 [Orgzly][7])以及 Git 客户程序。但是,一旦你搭建好了这些基础,该流程就会与你常用的(或正在完善的,如果你是新用户)Emacs 工作流完美地集成在一起。你可以在 Emacs 中方便地查阅日程、更新日程,并专注于任务上。议程上的变化将会反映在移动设备上,因此即使在 Emacs 不可用的时候,你也可以保持井然有序。

![][8]

感兴趣了?阅读我的关于[使用 Org mode 和 Git 进行日程安排][9]的逐步指南。
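
为了让“纯文本日程”这个说法更直观,下面是一个最小的 Org 模式日程条目的示意(条目内容与日期均为假设,仅用于演示语法):

```
* TODO 给团队发周报
  SCHEDULED: <2020-03-20 Fri 09:00 +1w>
* 和 Scott 讨论 less 的使用技巧
  DEADLINE: <2020-03-22 Sun>
```

像这样的条目保存在普通的 `.org` 文本文件中,Emacs 的议程视图和移动端的 Org 查看器都可以解析它们并生成提醒。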

### 访问终端

有[许多终端模拟器][10]可用。尽管 Emacs 中的 Elisp 终端仿真器不是最强大的通用仿真器,但是它有两个显著的优点:

1. **打开在 Emacs 缓冲区之中**:我使用 Emacs 的 Elisp shell,因为它在 Emacs 窗口中打开很方便,我经常全屏运行该窗口。这是一个小而重要的优势,只需要输入 `Ctrl+x+o`(或用 Emacs 符号来表示就是 `C-x o`)就能使用终端了,而且它还有一个特别好的地方在于当运行漫长的作业时能够一瞥它的状态报告。
2. **在没有系统剪贴板的情况下复制和粘贴特别方便**:无论是因为懒惰不愿将手从键盘移动到鼠标,还是因为在远程控制台运行 Emacs 而无法使用鼠标,在 Emacs 中运行终端有时意味着可以从 Emacs 缓冲区中很快地传输数据到 Bash。

要尝试 Emacs 终端,输入 `Alt+x`(用 Emacs 符号表示就是 `M-x`),然后输入 `shell`,然后按回车。

### 使用 Racket 模式

[Racket][11] 是一种激动人心的新兴 Lisp 方言,拥有动态编程环境、GUI 工具包和充满激情的社区。学习 Racket 的默认编辑器是 DrRacket,它的顶部是定义面板,底部是交互面板。使用该设置,用户可以编写影响 Racket 运行时环境的定义。就像旧的 [Logo Turtle][12] 程序,但是有一个终端而不是仅仅一个海龟。

![Racket-mode][13]

*由 PLT 提供的 LGPL 示例代码*

基于 Lisp 的 Emacs 为资深 Racket 编程人员提供了一个很好的集成开发环境(IDE)。它尚未附带 [Racket 模式][14],但你可以使用 Emacs 包安装程序安装 Racket 模式和辅助扩展。要安装它,按下 `Alt+x`(用 Emacs 符号表示就是 `M-x`),键入 `package-install`,然后按回车。接着输入要安装的包 `racket-mode`,按回车。

使用 `M-x racket-mode` 进入 Racket 模式。如果你是 Racket 新手,而对 Lisp 或 Emacs 比较熟悉,可以从这份优秀的[图解 Racket][15] 入手。

### 脚本

你可能知道,Bash 脚本在自动化和增强 Linux 或 Unix 体验方面很流行。你可能听说过 Python 在这方面也做得很好。但是你知道 Lisp 脚本可以用同样的方式运行吗?有时人们会对 Lisp 到底有多有用感到困惑,因为许多人是通过 Emacs 来了解 Lisp 的,因此有一种潜在的印象,即在 21 世纪运行 Lisp 的惟一方法是在 Emacs 中运行。幸运的是,事实并非如此,Emacs 是一个很好的 IDE,它支持将 Lisp 脚本作为一般的系统可执行文件来运行。

除了 Elisp 之外,还有两种流行的现代 Lisp 可以很容易地用来作为独立脚本运行。

1. **Racket**:你可以通过在系统上运行 Racket 来提供运行 Racket 脚本所需的运行时支持,或者你可以使用 `raco exe` 产生一个可执行文件。`raco exe` 命令将代码和运行时支持文件一起打包,以创建可执行文件。然后,`raco distribution` 命令将可执行文件打包成可以在其他机器上工作的发行版。Emacs 有许多 Racket 工具,因此在 Emacs 中创建 Racket 文件既简单又有效。
2. **GNU Guile**:[GNU Guile][16](<ruby>GNU 通用智能语言扩展<rt>GNU Ubiquitous Intelligent Language for Extensions</rt></ruby> 的缩写)是 [Scheme][17] 编程语言的一个实现,它可以用于为桌面、互联网、终端等创建应用程序和游戏。Emacs 中的 Scheme 扩展众多,使用任何一个扩展来编写 Scheme 都很容易。例如,这里有一个用 Guile 编写的 “Hello world” 脚本:

```
#!/usr/bin/guile -s
!#

(display "hello world")
(newline)
```

用 `guile` 编译并运行它:

```
$ guile ./hello.scheme
;;; compiling /home/seth/./hello.scheme
;;; compiled [...]/hello.scheme.go
hello world
$ guile ./hello.scheme
hello world
```

### 无需 Emacs 运行 Elisp

Emacs 可以作为 Elisp 的运行环境,但是你无需按照传统印象中的必须打开 Emacs 来运行 Elisp。`--script` 选项可以让你使用 Emacs 作为引擎来执行 Elisp 脚本,而无需运行 Emacs 图形界面(甚至也无需使用终端)。下面这个例子中,`-Q` 选项让 Emacs 忽略 `.emacs` 文件,从而避免执行 Elisp 脚本时因加载配置而产生延迟(若你的脚本依赖于 Emacs 配置中的内容,那么请忽略该选项)。

```
emacs -Q --script ~/path/to/script.el
```
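
作为补充,下面是一个可以用上述方式运行的最小 Elisp 脚本的示意(脚本内容为本文假设,文件名 `script.el` 仅作演示;在批处理模式下,`message` 的输出会打印到标准错误,`princ` 的输出则打印到标准输出):

```
;;; script.el --- 一个演示用的最小 Elisp 脚本
;; 用 `emacs -Q --script script.el` 运行
(message "hello from elisp")
(princ (format "2 + 2 = %d\n" (+ 2 2)))
```

把它保存为 `script.el` 后,按上面的命令执行即可,整个过程不会出现任何 Emacs 界面。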

### 下载 Emacs 备忘录

Emacs 许多重要功能都不是只能通过 Emacs 来实现的:Org 模式既是 Emacs 的扩展,也是一种格式标准;流行的 Lisp 方言大多不依赖于具体的应用;我们甚至可以在没有可见或可交互 Emacs 实例的情况下编写和运行 Elisp。因此,若你对模糊代码和数据之间的界限为什么能够引发创新和提升效率感到好奇,那么 Emacs 是一个很棒的工具。

幸运的是,现在是 21 世纪,Emacs 有了带有传统菜单的图形界面以及大量的文档,因此学习曲线不再像以前那样陡峭。然而,要让 Emacs 给你带来最大的好处,你需要学习它的快捷键。由于 Emacs 支持的每个任务都是一个 Elisp 函数,Emacs 中的任何功能都可以对应一个快捷键,因此要描述所有这些快捷键是不可能完成的任务。你只需要学习那些使用频率远高于其它功能的常用快捷键即可。

我们把最常用的 Emacs 快捷键汇集成了一份 Emacs 备忘录以便你查询。将它挂在屏幕附近或办公室墙上,把它作为鼠标垫也行。让它触手可及,经常翻阅一下。而且一旦开始编写自己的函数,你一定不会后悔获取了这个免费的备忘录副本!

- [这里下载 Emacs 备忘录(需注册)](https://opensource.com/downloads/emacs-cheat-sheet)

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/1/emacs-cheat-sheet

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[lujun9972](https://github.com/lujun9972)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_blue_text_editor_web.png
[2]: https://www.gnu.org/software/emacs/
[3]: https://en.wikipedia.org/wiki/Lisp_(programming_language)
[4]: https://www.gnu.org/software/tramp/
[5]: https://www.openssh.com/
[6]: https://orgmode.org/
[7]: https://f-droid.org/en/packages/com.orgzly/
[8]: https://opensource.com/sites/default/files/uploads/orgzly-agenda.jpg
[9]: https://linux.cn/article-11320-1.html
[10]: https://linux.cn/article-11814-1.html
[11]: http://racket-lang.org/
[12]: https://en.wikipedia.org/wiki/Logo_(programming_language)#Turtle_and_graphics
[13]: https://opensource.com/sites/default/files/racket-mode.jpg
[14]: https://www.racket-mode.com/
[15]: https://docs.racket-lang.org/quick/index.html
[16]: https://www.gnu.org/software/guile/
[17]: https://en.wikipedia.org/wiki/Scheme_(programming_language)
160
published/20200204 DevOps vs Agile- What-s the difference.md
Normal file
@ -0,0 +1,160 @@

[#]: collector: "lujun9972"
[#]: translator: "messon007"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-12031-1.html"
[#]: subject: "DevOps vs Agile: What's the difference?"
[#]: via: "https://opensource.com/article/20/2/devops-vs-agile"
[#]: author: "Taz Brown https://opensource.com/users/heronthecli"

DevOps 和敏捷:究竟有什么区别?
======

> 两者之间的区别在于开发完毕之后发生的事情。

![](https://img.linux.net.cn/data/attachment/album/202003/23/200609w2rlzrjjhpf2hzsq.jpg)

早期,软件开发并没有特定的管理流程。随后出现了<ruby>[瀑布开发流程][2]<rt>Waterfall</rt></ruby>,它提出软件开发活动可以用开发和构建应用所耗费的时间来定义。

那时候,由于在开发流程中没有审查环节和权衡考虑,常常需要花费很长的时间来开发、测试和部署软件。交付的软件也是带有缺陷和 Bug 的质量较差的软件,而且交付时间也不满足要求。那时候软件项目管理的重点是长期而拖沓的计划。

瀑布流程与<ruby>[三重约束模型][3]<rt>triple constraint model</rt></ruby>相关,三重约束模型也称为<ruby>项目管理三角形<rt>project management triangle</rt></ruby>。三角形的每一条边代表项目管理三要素中的一个要素:**范围、时间和成本**。正如 [Angelo Baretta 写道][4],三重约束模型“认为成本是时间和范围的函数,这三个约束以一种确定的、可预测的方式相互作用。……如果我们想缩短时间表(时间),就必须增加成本。如果我们想增加范围,就必须增加成本或时间。”

### 从瀑布流程过渡到敏捷开发

瀑布流程来源于生产和工程领域,这些领域适合线性化的流程:正如房屋封顶之前需要先盖好支撑墙。相似地,软件开发问题被认为可以通过提前做好计划来解决。从头到尾,开发流程均由路线图清晰地定义,沿着路线图就可以得到最终交付的产品。

最终,瀑布模型被认为对软件开发是不利的而且违反人的直觉,因为通常直到开发流程的最后才能体现出项目的价值,这导致许多项目最终都以失败告终。而且,在项目结束前客户看不到任何可以工作的软件。

<ruby>敏捷<rt>Agile</rt></ruby>采用了一种不同的方法,它抛弃了对整个项目进行完整规划、承诺预估的时间点、简单地遵循计划的做法。与瀑布流程相反,它假设并拥抱不确定性。它的理念是以响应变化代替遵循计划,它认为变更是客户需求的一部分。

### 敏捷价值观

敏捷由<ruby>敏捷宣言<rt>Agile Manifesto</rt></ruby>代言,敏捷宣言定义了 [12 条原则][5](LCTT 译注:此处没有采用本文原本的简略句式,而是摘录了来自敏捷软件开发宣言官方的[中文译本][14]):

1. 我们最重要的目标,是通过持续不断地及早交付有价值的软件使客户满意。
2. 欣然面对需求变化,即使在开发后期也一样。
3. 经常交付可工作的软件,相隔几星期或一两个月,倾向于采取较短的周期。
4. 业务人员和开发人员必须相互合作,项目中的每一天都不例外。
5. 激发个体的斗志,以他们为核心搭建项目。提供所需的环境和支援,辅以信任,从而达成目标。
6. 面对面沟通是传递信息的最佳的也是效率最高的方法。
7. 可工作的软件是进度的首要度量标准。
8. 敏捷流程倡导可持续的开发,责任人、开发人员和用户要能够共同维持其步调稳定延续。
9. 坚持不懈地追求技术卓越和良好设计,敏捷能力由此增强。
10. 以简洁为本,它是极力减少不必要工作量的艺术。
11. 最好的架构、需求和设计出自自组织团队。
12. 团队定期地反思如何能提高成效,并依此调整自身的举止表现。

敏捷的四个[核心价值观][6]是(LCTT 译注:[此处译文][15]同样来自敏捷软件开发宣言官方):

* **个体和互动** 高于流程和工具
* **工作的软件** 高于详尽的文档
* **客户合作** 高于合同谈判
* **响应变化** 高于遵循计划

这与瀑布流程死板的计划风格相反。在敏捷流程中,客户是开发团队的一员,而不仅仅是在项目开始时参与项目需求的定义,在项目结束时验收最终的产品。客户帮助团队完成[验收标准][7],并在整个过程中保持投入。另外,敏捷需要整个组织的变化和持续的改进。开发团队和其他团队一起合作,包括项目管理团队和测试团队。做什么和计划什么时候做由指定的角色领导,并由整个团队同意。

### 敏捷软件开发

敏捷软件开发需要自适应的规划、演进式的开发和交付。许多软件开发方法、框架和实践遵从敏捷的理念,包括:

* Scrum
* <ruby>看板<rt>Kanban</rt></ruby>(可视化工作流)
* <ruby>极限编程<rt>Extreme Programming</rt></ruby>(XP)
* <ruby>精益方法<rt>Lean</rt></ruby>
* DevOps
* <ruby>特性驱动开发<rt>Feature-Driven Development</rt></ruby>(FDD)
* <ruby>测试驱动开发<rt>Test-Driven Development</rt></ruby>(TDD)
* <ruby>水晶方法<rt>Crystal</rt></ruby>
* <ruby>动态系统开发方法<rt>Dynamic Systems Development Method</rt></ruby>(DSDM)
* <ruby>自适应软件开发<rt>Adaptive Software Development</rt></ruby>(ASD)

所有这些已经被单独用于或一起用于开发和部署软件。最常用的是 [Scrum][8]、看板(或 Scrumban)和 DevOps。

[Scrum][9] 是一个框架,采用该框架的团队通常由一个 Scrum 教练、产品经理和开发人员组成,该团队以跨职能、自主的工作方式运作,能够加快软件交付速度从而给客户带来巨大的商业价值。其关注点是[较小增量][10]的快速迭代。

[看板][11] 是一个敏捷框架,有时也叫工作流管理系统,它能帮助团队可视化他们的工作从而最大化效率(因而变得敏捷)。看板通常由数字或物理展示板来呈现。团队的工作在展示板上随着进度而移动,例如从未启动到进行中,一直到测试中、已完成。看板使得每个团队成员可以随时查看到所有工作的状态。

### DevOps 价值观

DevOps 是一种文化,是一种思维状态,是一种软件开发的方式或者基础设施的方式,也是一种构建和部署软件和应用的方式。它假设开发和运维之间没有隔阂,他们一起合作,没有矛盾。

DevOps 基于其它两个领域的实践:精益和敏捷。DevOps 不是一个公司内的岗位或角色;它是一个组织或团队对持续交付、持续部署和持续集成的坚持不懈的追求。[Gene Kim][12](Phoenix 项目和 Unicorn 项目的作者)认为,可以用三种方式来定义 DevOps 的理念:

* 第一种:流程原则
* 第二种:反馈原则
* 第三种:持续学习原则

### DevOps 软件开发

DevOps 不会凭空产生;它是一种灵活的实践,它的本质是一种关于软件开发和 IT 或基础设施实施的共享文化和思维方式。

当你想到自动化、云、微服务时,你会想到 DevOps。在一次[访谈][13]中,《加速:构建和扩张高性能技术组织》的作者 Nicole Forsgren、Jez Humble 和 Gene Kim 这样解释道:

> * 软件交付能力很重要,它极大地影响到组织的成果,例如利润、市场份额、质量、客户满意度以及组织战略目标的达成。
> * 优秀的团队能达到很高的交付量、稳定性和质量;他们并没有为了获得这些属性而进行取舍。
> * 你可以通过实施精益、敏捷和 DevOps 中的实践来提升能力。
> * 实施这些实践和能力也会影响你的组织文化,并且会进一步对你的软件交付能力和组织能力产生有益的提升。
> * 懂得怎样改进能力需要做很多工作。

### DevOps 和敏捷的对比

DevOps 和敏捷有相似性,但是它们不完全相同,一些人认为 DevOps 比敏捷更好。为了避免造成混淆,深入地了解它们是很重要的。

#### 相似之处

* 毫无疑问,两者都是软件开发技术。
* 敏捷已经存在了 20 多年,DevOps 是最近才出现的。
* 两者都追求软件的快速开发,它们的理念都基于怎样在不伤害客户或运维利益的情况下快速开发出软件。

#### 不同之处

* 两者的差异在于软件开发完成后发生的事情。
* 在 DevOps 和敏捷中,都有软件开发、测试和部署的阶段。然而,敏捷流程在这三个阶段之后会终止。相反,DevOps 包括后续持续的运维。因此,DevOps 会持续地监控软件运行情况并进行持续的开发。
* 敏捷中,不同的人负责软件的开发、测试和部署。而 DevOps 工程角色负责所有活动,开发即运维,运维即开发。
* DevOps 更关注于削减成本,而敏捷则是精益和减少浪费的代名词,侧重于像敏捷项目会计和最小可行产品的概念。
* 敏捷专注于并体现了经验主义(适应、透明和检查),而不是预测性措施。

敏捷 | DevOps
--- | ---
从客户得到反馈 | 从自己得到反馈
较小的发布周期 | 较小的发布周期,立即反馈
聚焦于速度 | 聚焦于速度和自动化
对业务不是最好 | 对业务最好

### 总结

敏捷和 DevOps 是截然不同的,尽管它们的相似之处使人们认为它们是相同的。这对敏捷和 DevOps 都是一种伤害。

根据我作为一名敏捷专家的经验,我发现对于组织和团队从高层次上了解敏捷和 DevOps 是什么,以及它们如何帮助团队更高效地工作,更快地交付高质量产品从而提高客户满意度非常有价值。

敏捷和 DevOps 绝不是对抗性的(或至少没有这个意图)。在敏捷革命中,它们更像是盟友而不是敌人。敏捷和 DevOps 可以相互协作一致对外,因此可以在相同的场合共存。

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/2/devops-vs-agile

作者:[Taz Brown][a]
选题:[lujun9972][b]
译者:[messon007](https://github.com/messon007)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/heronthecli
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/collab-team-pair-programming-code-keyboard.png?itok=kBeRTFL1 "Pair programming"
[2]: http://www.agilenutshell.com/agile_vs_waterfall
[3]: https://en.wikipedia.org/wiki/Project_management_triangle
[4]: https://www.pmi.org/learning/library/triple-constraint-erroneous-useless-value-8024
[5]: http://agilemanifesto.org/principles.html
[6]: https://agilemanifesto.org/
[7]: https://www.productplan.com/glossary/acceptance-criteria/
[8]: https://opensource.com/article/19/8/scrum-vs-kanban
[9]: https://www.scrum.org/
[10]: https://www.scrum.org/resources/what-is-an-increment
[11]: https://www.atlassian.com/agile/kanban
[12]: https://itrevolution.com/the-unicorn-project/
[13]: https://www.infoq.com/articles/book-review-accelerate/
[14]: http://agilemanifesto.org/iso/zhchs/principles.html
[15]: http://agilemanifesto.org/iso/zhchs/manifesto.html
@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-12007-1.html)
[#]: subject: (Basic kubectl and Helm commands for beginners)
[#]: via: (https://opensource.com/article/20/2/kubectl-helm-commands)
[#]: author: (Jessica Cherry https://opensource.com/users/jrepka)
@ -10,23 +10,23 @@
适用于初学者的基本 kubectl 和 Helm 命令
======

-> 像去杂货店购物一样,你需要用这些命令入门 Kubernetes。
+> 去杂货店“采购”这些命令,你需要用这些 Kubernetes 工具来入门。

-![A person working.][1]
+![](https://img.linux.net.cn/data/attachment/album/202003/18/113120adp34myy90eb944b.jpg)

-最近,我丈夫告诉我他即将要去参加一个工作面试,面试时他需要在计算机上运行一些基本命令。他对这场面试感到焦虑,但是对于他来说,学习和记住事情的最好方法是将不了解的事物比喻为非常熟悉的事物。因为我们的谈话是在我逛杂货店试图决定当晚要烹饪的食物之后进行的,所以这启发我用一次去杂货店的行程来描述 `kubectl` 和 `helm` 命令。
+最近,我丈夫告诉我他即将要去参加一个工作面试,面试时他需要在计算机上运行一些基本命令。他对这场面试感到焦虑,但是对于他来说,学习和记住事情的最好方法是将不了解的事物比喻为非常熟悉的事物。因为我们的谈话是在我逛杂货店试图决定当晚要烹饪的食物之后进行的,所以这启发我用一次去杂货店的行程来介绍 `kubectl` 和 `helm` 命令。

-[Helm][2](“舵轮”)是在 Kubernetes(来自希腊语,意思是“舵手” 或 “领航员”)中管理应用程序的工具。你可以轻松地使用你的应用程序信息来部署“<ruby>海图<rt>chart</rt></ruby>”,从而可以在你的 Kubernetes 环境中几分钟之内让它们就绪并预配置好。在学习新知识时,查看示例的“海图”以了解其用法总是很有帮助的,因此,如果有时间,请查看这些稳定版的“[海图][3]”。
+[Helm][2](“舵轮”)是在 Kubernetes(来自希腊语,意思是“舵手” 或 “领航员”)中管理应用程序的工具。你可以轻松地使用你的应用程序信息来部署“<ruby>海图<rt>chart</rt></ruby>”,从而可以在你的 Kubernetes 环境中几分钟之内让它们就绪并预配置好。在学习新知识时,查看示例的“海图”以了解其用法总是很有帮助的,因此,如果有时间,请查看这些成型的“[海图][3]”。(LCTT 译注:Kubernetes 生态中大量使用了和航海有关的比喻,因此本文在翻译时也采用了这些比喻)

-[kubectl][4] 是与 Kubernetes 环境的命令行界面,允许你配置和管理集群。它需要一些配置才能在环境中工作,因此请仔细阅读其[文档][5]以了解你需要做什么。
+[kubectl][4] 是与 Kubernetes 环境交互的命令行界面,允许你配置和管理集群。它需要一些配置才能在环境中工作,因此请仔细阅读其[文档][5]以了解你需要做什么。

-我将在示例中使用命名空间,你可以在我的文章《[Kubernetes 命名空间入门][6]》中了解它。
+我会在示例中使用命名空间,你可以在我的文章《[Kubernetes 命名空间入门][6]》中了解它。

现在我们已经准备好了,让我们开始 `kubectl`和 `helm` 基本命令的购物之旅!

### 用 Helm 列出清单

-你去商店之前要做的第一件事是什么?好吧,如果你做事有条理,就可以创建一个“清单”。同样,这是我将解释的第一个基本的 Helm 命令。
+你去商店之前要做的第一件事是什么?好吧,如果你做事有条理,会创建一个“清单”。同样,这是我将解释的第一个基本的 Helm 命令。

在一个用 Helm 部署的应用程序中,`list` 命令提供有关应用程序当前版本的详细信息。在此示例中,我有一个已部署的应用程序:Jenkins CI/CD 应用程序。运行基本的 `list` 命令总是会显示默认的命名空间。由于我没有在默认的命名空间中部署任何内容,因此不会显示任何内容:
@ -51,11 +51,11 @@ NAME NAMESPACE REVISION UPDATED STATUS
jenkins jenkins 1 2020-01-18 16:18:07 EST deployed jenkins-1.9.4 lts
```

-现在我有了一个清单,并且知道该清单上有什么,我可以使用 `get` 命令来“获取”我的物品!我将从 Kubernetes 集群开始,我能从中获取到什么?
+现在我有了一个清单,并且知道该清单上有什么,我可以使用 `get` 命令来“获取”我的物品!我会从 Kubernetes 集群开始,看看我能从中获取到什么?

### 用 Kubectl 获取物品

-`kubectl get` 命令提供有关 Kubernetes 中许多事物的信息,包括“<ruby>吊舱<rt>Pod</rt></ruby>”、节点和命名空间。同样,没有指定命名空间标志,你就会使用默认的命名空间。首先,我获取集群中的命名空间以查看正在运行的命名空间:
+`kubectl get` 命令提供了有关 Kubernetes 中许多事物的信息,包括“<ruby>吊舱<rt>Pod</rt></ruby>”、节点和命名空间。同样,如果没有指定命名空间标志,就会使用默认的命名空间。首先,我获取集群中的命名空间以查看正在运行的命名空间:

```
$kubectl get namespaces
@ -67,7 +67,7 @@ kube-public Active 53m
kube-system Active 53m
```

-现在我已经知道了在我的环境中运行的命名空间了,接下来将获取节点并查看有多少个正在运行:
+现在我已经知道了在我的环境中运行的有哪些命名空间了,接下来获取节点并查看有多少个节点正在运行:

```
$kubectl get nodes
@ -75,7 +75,7 @@ NAME STATUS ROLES AGE VERSION
minikube Ready master 55m v1.16.2
```

-我有一个节点正在运行,这主要是因为我的 Minikube 运行在一台小型服务器上。要得到在我的这一个节点上运行的“吊舱”:
+我有一个节点正在运行,这主要是因为我的 Minikube 运行在一台小型服务器上。要得到在我的这一个节点上运行的“吊舱”可以这样:

```
$kubectl get pods
@ -90,11 +90,11 @@ NAME READY STATUS RESTARTS AGE
jenkins-7fc688c874-mh7gv 1/1 Running 0 40m
```

-好消息!这里有个“吊舱”,它还没有重新启动过,已运行了 40 分钟了。好的,我知道“吊舱”已经装好,所以我想看看用 Helm 命令可以得到什么。
+好消息!这里发现了一个“吊舱”,它还没有重新启动过,已运行了 40 分钟了。好的,如今我知道“吊舱”已经装好,所以我想看看用 Helm 命令可以得到什么。

### 用 Helm 获取信息

-`helm get` 稍微复杂一点,因为这个“获取”命令所需要的不仅仅是一个应用程序名称,而且你可以从应用程序中请求多个内容。我将从获取用于制作应用程序的值开始,然后展示“获取全部”的操作结果的片段,该操作将提供与该应用程序相关的所有数据。
+`helm get` 命令稍微复杂一点,因为这个“获取”命令所需要的不仅仅是一个应用程序名称,而且你可以从应用程序中请求多个内容。我会从获取用于制作该应用程序的值开始,然后展示“获取全部”的操作结果的片段,该操作将提供与该应用程序相关的所有数据。

```
$helm get values jenkins -n jenkins
@ -102,7 +102,7 @@ USER-SUPPLIED VALUES:
null
```

-由于我最小限度的只安装了稳定版,因此配置没有更改。如果我运行“获取全部”命令,我将得到所有“海图”:
+由于我只安装了最小限度的稳定版,因此配置没有更改。如果我运行“获取全部”命令,我将得到所有的“海图”:

```
$helm get all jenkins -n jenkins
@ -116,7 +116,7 @@ $helm get all jenkins -n jenkins

### 用 kubectl 查看描述

-正如我使用“获取”命令(该命令可以描述 Kubernetes 中的几乎所有内容)所做的那样,我将示例限制定命名空间、“吊舱”和节点上。由于我知道它们每一个是什么,因此这很容易。
+正如我使用“获取”命令(该命令可以描述 Kubernetes 中的几乎所有内容)所做的那样,我将示例限定到命名空间、“吊舱”和节点上。由于我知道它们每一个是什么,因此这很容易。

```
$kubectl describe ns jenkins
@ -130,7 +130,7 @@ No resource limits.

我可以看到我的命名空间的名称,并且它是活动的,没有资源或限额限制。

-`describe pods` 命令会产生大量信息,因此我将提供一小段输出。如果你在不使用“吊舱”名称的情况下运行该命令,它将返回名称空间中所有“吊舱”的信息,这可能会很麻烦。因此,请确保在此命令中始终包含“吊舱”名称。例如:
+`describe pods` 命令会产生大量信息,因此我这里提供的是一小段输出。如果你在不使用“吊舱”名称的情况下运行该命令,它将返回名称空间中所有“吊舱”的信息,这可能会很麻烦。因此,请确保在此命令中始终包含“吊舱”名称。例如:

```
$kubectl describe pods jenkins-7fc688c874-mh7gv --namespace jenkins
@ -138,7 +138,7 @@ $kubectl describe pods jenkins-7fc688c874-mh7gv --namespace jenkins

![output of kubectl-describe-pods][8]

-这提供容器的状态、管理方式、标签以及“吊舱”中使用的镜像(还有很多其他信息)。不在这个简化过的输出中的数据包括资源请求和限制以及在 Helm 配置值文件中应用的任何条件、初始化容器和存储卷信息。如果你的应用程序由于资源不足而崩溃,或者是运行前置脚本进行配置的已初始化配置的容器,或者生成不应该存在于纯文本 YAML 文件中的隐藏密码,则此数据很有用。
+这会提供容器的状态、管理方式、标签以及“吊舱”中所使用的镜像(还有很多其它信息)。没有在这个简化过的输出中包括的数据有:在 Helm 配置值文件中应用的各种条件下的资源请求和限制、初始化容器和存储卷信息。如果你的应用程序由于资源不足而崩溃,或者是一个需要运行前置脚本进行配置的初始配置容器,或者生成不应该存储于纯文本 YAML 文件中的隐藏密码,则此数据很有用。

最后,我将使用 `describe node` 命令,当然,它是用来描述节点的。由于本示例只有一个名为 Minikube 的示例,因此我将使用这个名字。如果你的环境中有多个节点,则必须包含你想查找的的节点名称。

@ -150,11 +150,11 @@ $kubectl describe node minikube

![output of kubectl describe node][9]

-注意,`describe node` 是更重要的基本命令之一。如此图所示,该命令返回统计信息,该信息指示节点何时资源用尽,并且该数据非常适合在需要扩展时(如果你的环境中没有自动扩展)向你发出警报。此输出片段中未包含的其他内容包括对所有资源和限制的请求所占的百分比,以及资源的使用期限和分配(例如,对于我的应用程序而言)。
+注意,`describe node` 是更重要的基本命令之一。如此图所示,该命令返回统计信息,该信息指示节点何时资源用尽,并且该数据非常适合在需要扩展时(如果你的环境中没有自动扩展)向你发出警报。此输出片段中未包含的其它内容包括:对所有资源和限制的请求所占的百分比,以及资源的使用期限和分配(例如,对于我的应用程序而言)。

### 买单

-使用这些命令,我完成了购物并得到了我想要的一切。希望这些基本命令也能在你使用 Kubernetes 的日常工作中提供帮助。
+使用这些命令,我完成了“购物”并得到了我想要的一切。希望这些基本命令也能在你使用 Kubernetes 的日常工作中提供帮助。

我鼓励你经常使用命令行并学习“帮助”部分中的速记标志,你可以通过运行以下命令来查看这些标志:

@ -170,7 +170,7 @@ $kubectl -h

### 花生酱和果冻

-有些东西像花生酱和果冻一样混在一起。Helm 和 `kubectl` 就有点像那样。
+有些东西像花生酱和果冻一样混在一起。Helm 和 `kubectl` 就有点像那样交错在一起。

我经常在自己的环境中使用这些工具。因为它们在很多地方都有很多相似之处,所以在使用其中一个之后,我通常需要跟进另一个。例如,我可以进行 Helm 部署,并使用 `kubectl` 观察它是否失败。一起试试它们,看看它们能为你做什么。

@ -181,7 +181,7 @@ via: https://opensource.com/article/20/2/kubectl-helm-commands
作者:[Jessica Cherry][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

80
published/20200214 Linux is our love language.md
Normal file
@ -0,0 +1,80 @@

[#]: collector: (lujun9972)
[#]: translator: (sndnvaps)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12022-1.html)
[#]: subject: (Linux is our love language)
[#]: via: (https://opensource.com/article/20/2/linux-love-language)
[#]: author: (Christopher Cherry https://opensource.com/users/chcherry)

学习 Linux 是我们的爱情语言
======

> 当一个妻子教丈夫一些新技能的时候,他们都学到了比预期更多的东西。

![](https://img.linux.net.cn/data/attachment/album/202003/22/163819clzuy77dc4d7q8zu.jpg)

2019 年是我们 Cherry 家学习的一年。我是一个喜欢学习新技术的高级软件工程师,并把学到的内容一起教给了我的丈夫 Chris。通过教给他一些我学到的东西,并让他全程经历我的技术演练文章,我帮助 Chris 学习到了新技术,使他能够将自己的职业生涯更深入地转向技术领域。而我学习到了新的方法,使我的演练和培训材料更易于让读者理解。

在这篇文章中,我们来讨论一下我们各自和彼此学习到了什么东西,然后探讨这对于我们的未来有何影响。

### 向学生的提问

**Jess:** Chris,是什么导致你想深入学习我的领域的技能呢?

**Chris:** 主要目的是为了让我事业更进一步。作为一个网络工程师的经历告诉我,现在的网络专家已经不像以前一样有价值了,我必须掌握更多的知识。由于网络经常被认为是造成如今程序中断或出错的原因,我想从开发人员的角度了解更多关于编写应用程序的知识,以便于了解它们如何依赖网络资源。

**Jess:** 我首先教你什么内容?你从中学到什么?

**Chris:** 首先是从学习安装 Linux 系统开始的,之后又安装了 [Ansible][2]。只要硬件兼容,我用过的每一个 Linux 发行版都很容易安装,但可能会出现个别不兼容的情况。这就意味着我有时候首先学到的是如何解决系统安装头 5 分钟里出现的问题(这个我最喜欢了)。Ansible 给了我一个学习使用软件包管理器来安装程序的理由。当程序安装完成后,通过查看 yum 安装的程序,我快速了解了软件包管理器是如何处理程序的依赖项的,因此,用 Python 编写的 Ansible 能够在我的系统上运行。自此之后,我开始使用 Ansible 来安装各种各样的程序。

**Jessica:** 你喜欢我这种教学方式吗?

**Chris:** 我们一开始有过争吵,直到我们弄清楚了我喜欢的学习方式,你也知道了应该怎样为我提供最好的学习方式。在一开始的时候,我很难跟上你讲的内容。例如,当你说“一个码头工人集装箱”的时候,我完全不知道你在讲什么。比较早的时候,我的回答就是“这是一个集装箱”,然而当时这对我来说,完全没有意义。当你对这些内容进行一些更深入的讲解后,才让学习变得更有趣。

**Jess:** 老实说,这对我来说也是一个重要的教训。在你之前,我从来没有教过在这个技术领域知识比我少的人,所以你帮助我认识到我需要解释更多细节。我也得说声谢谢。

当你通过这几个学习步骤的时候,你觉得我的这篇测试文章怎样呢?

**Chris:** 就我个人而言,我认为这很容易,但我错了。在我主要学习的内容中,比如你[介绍的 Vagrant][3],它在不同的 Linux 发行版间的变化比我想像的要多。操作系统的变化会影响设置的方式、运行的要求和特定的命令。这看起来比我用的网络设备变化更大。这让我花费更多的精力去查看这些说明是对应我的系统还是其它的系统(有时候很难知道)。在这条学习路上,我似乎碰到了很多问题。

**Jess:** 我每天都会遇到各种各样的问题,所以对我来说日常就是用各种方法解决各种问题。

### 向老师的提问

**Chris:** Jess,你将来教我的方式会有所改变吗?

**Jess:** 我想让你像我一样多读一些书,通过翻阅书籍来学习新技术。每天起床后一小时和睡觉前一小时我都会看书,花费一个星期左右我就能看完一到两本书,这还不算我每天早上第一个小时喝着大量咖啡时读到的那些科技文章。我也会创建为期两周的任务计划来实践我从书本中学习到的技能。当我考虑到你的职业发展目标的时候,我认为除了我们谈到的优秀博客和文章之外,书籍是一个重要的元素。我觉得我的阅读量使我保持进步,如果你也这么做了,你也会很快赶上我的。

**Chris:** 那么学生有没有教过老师呢?

**Jess:** 我从你那里学到了耐心。举个例子,当你完成 Ansible 的安装后,我问你下一步要怎样操作,你直接回复我“不知道”,而这不是我希望你学到的样子。所以我改变了策略,现在在逐步安装任何组件之前,我们会详细讨论你想要实现的目标。当我们在写 Vagrant 文章的时候,我们一起进行相应的演示操作,我在创建它时就牢记目标,因此我们就有一些需要马上实现的目标。

这实际上对我在工作中的培训方式产生了巨大的改变。现在我在大家学习的过程中会问更多问题,并更多地进行手把手讲解。我更愿意坐下来仔细检查,确保有人明白我在说什么和我们在做什么。这是我之前从来没有做过的。

### 我们一起学到了什么

作为一对夫妇,在这一年的技术合作中我们的技术都有所增长。

**Chris:** 我对自己学到的东西感到震惊。通过一年的课程学习,我认识了新的操作系统、如何使用 API、如何使用 Ansible 部署 Web 应用以及使用 Vagrant 启动虚拟机。我还学习到了文档可以让生活变得更好,所以我也会尝试去写一写。然而,在这个工作领域,操作并不总是被记录在案,所以我学会了准备好处理棘手的问题,并记录如何解决它们。

**Jess:** 除了在教你的过程中学到的知识外,我还专注于学习 Kubernetes 在云环境中的应用知识。这包括部署策略、Kubernetes API 的复杂度、创建我自己的容器,以及对环境进行加密处理。我还留出了探索的时间:研究 serverless 代码、AI 模型、Python,以及以图形方式显示热图。对于我来说,这一年也很充实。

我们下一个目标是什么?现在还不知道,但我可以向你保证,我们将会继续分享它。

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/2/linux-love-language

作者:[Christopher Cherry][a]
选题:[lujun9972][b]
译者:[sndnvaps](https://github.com/sndnvaps)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/chcherry
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/red-love-heart-alone-stone-path.jpg?itok=O3q1nEVz (红心 "你不是孤单的")
[2]: https://opensource.com/resources/what-ansible
[3]: https://opensource.com/resources/vagrant
330
published/20200219 Try this Bash script for-large filesystems.md
Normal file
@ -0,0 +1,330 @@

[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12025-1.html)
[#]: subject: (Try this Bash script for large filesystems)
[#]: via: (https://opensource.com/article/20/2/script-large-files)
[#]: author: (Nick Clifton https://opensource.com/users/nickclifton)

针对大型文件系统可以试试此 Bash 脚本
======

> 一个可以列出文件、目录、可执行文件和链接的简单脚本。

![bash logo on green background][1]

你是否曾经想列出目录中的所有文件,但仅列出文件,而不列出其它东西?或者仅列出目录?如果有这种需求的话,那么下面的脚本可能正是你一直在寻找的,它在 GPLv3 下开源。

当然,你可以使用 `find` 命令:

```
find . -maxdepth 1 -type f -print
```

但这键入起来很麻烦,输出也不友好,并且缺少 `ls` 命令拥有的一些改进。你还可以结合使用 `ls` 和 `grep` 来达到相同的结果:

```
ls -F . | grep -v /
```

但是,这又有点笨拙。下面这个脚本提供了一种简单的替代方法。

### 用法

该脚本提供了四个主要功能,具体取决于你调用它的名称:`lsf` 列出文件,`lsd` 列出目录,`lsx` 列出可执行文件以及 `lsl` 列出链接。

通过符号链接,无需安装该脚本的多个副本。这样可以节省空间并使脚本更新更容易。
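
“用符号链接共享同一份脚本”的原理是:脚本通过 `basename $0` 判断自己是以哪个名字被调用的。下面用一个极简的演示脚本来说明这个机制(其中的临时目录与文件名均为演示用的假设值,并非本文脚本的安装要求):

```shell
#!/bin/sh
# 演示:同一个脚本根据调用名(basename $0)切换行为
mkdir -p /tmp/lsdemo
printf '#!/bin/sh\necho "invoked as $(basename "$0")"\n' > /tmp/lsdemo/lsf
chmod +x /tmp/lsdemo/lsf
# 为其它“人格”创建符号链接,而不是复制脚本
for name in lsd lsl lsx; do
    ln -sf /tmp/lsdemo/lsf "/tmp/lsdemo/$name"
done
/tmp/lsdemo/lsd    # 输出:invoked as lsd
```

更新时只需修改真实文件一次,所有符号链接的行为都会随之更新。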

该脚本通过使用 `find` 命令进行搜索,然后在找到的每个项目上运行 `ls`。这样做的好处是,任何给脚本的参数都将传递给 `ls` 命令。因此,例如,这可以列出所有文件,甚至包括以点开头的文件:

```
lsf -a
```

要以长格式列出目录,请使用 `lsd` 命令:

```
lsd -l
```

你可以提供多个参数,以及文件和目录路径。

下面提供了当前目录的父目录和 `/usr/bin` 目录中所有文件的长分类列表:

```
lsf -F -l .. /usr/bin
```

目前该脚本不处理递归,仅列出当前目录中的文件。

```
lsf -R
```

该脚本不会深入子目录,这个不足有一天可能会进行修复。

### 内部

该脚本采用自上而下的方式编写,其初始化函数位于脚本的开头,而工作主体则接近结尾。脚本中只有两个真正重要的函数。函数 `parse_args()` 会仔细分析命令行,将选项与路径名分开,并处理脚本中的 `ls` 命令行选项中的特定选项。

`list_things_in_dir()` 函数以目录名作为参数并在其上运行 `find` 命令。找到的每个项目都传递给 `ls` 命令进行显示。

### 总结

这是一个可以完成简单功能的简单脚本。它节省了时间,并且在使用大型文件系统时可能会非常有用。

### 脚本

```
#!/bin/bash

# Script to list:
# directories (if called "lsd")
# files (if called "lsf")
# links (if called "lsl")
# or executables (if called "lsx")
# but not any other type of filesystem object.
# FIXME: add lsp (list pipes)
#
# Usage:
# <command_name> [switches valid for ls command] [dirname...]
#
# Works with names that includes spaces and that start with a hyphen.
#
# Created by Nick Clifton.
# Version 1.4
# Copyright (c) 2006, 2007 Red Hat.
#
# This is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published
# by the Free Software Foundation; either version 3, or (at your
# option) any later version.

# It is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.

# ToDo:
# Handle recursion, eg: lsl -R
# Handle switches that take arguments, eg --block-size
# Handle --almost-all, --ignore-backups, --format and --ignore

main ()
{
    init

    parse_args ${1+"$@"}

    list_objects

    exit 0
}

report ()
{
    echo $prog": " ${1+"$@"}
}

fail ()
{
    report " Internal error: " ${1+"$@"}
    exit 1
}

# Initialise global variables.
init ()
{
    # Default to listing things in the current directory.
    dirs[0]=".";

    # num_dirs is the number of directories to be listed minus one.
    # This is because we are indexing the dirs[] array from zero.
    num_dirs=0;

    # Default to ignoring things that start with a period.
    no_dots=1

    # Note - the global variables 'type' and 'opts' are initialised in
    # parse_args function.
}

# Parse our command line
parse_args ()
{
    local no_more_args

    no_more_args=0 ;

    prog=`basename $0` ;

    # Decide if we are listing files or directories.
    case $prog in
        lsf | lsf.sh)
            type=f
            opts="";
            ;;
        lsd | lsd.sh)
            type=d
            # The -d switch to "ls" is presumed when listing directories.
            opts="-d";
            ;;
        lsl | lsl.sh)
            type=l
            # Use -d to prevent the listed links from being followed.
            opts="-d";
            ;;
        lsx | lsx.sh)
            type=f
            find_extras="-perm /111"
            ;;
        *)
            fail "Unrecognised program name: '$prog', expected either 'lsd', 'lsf', 'lsl' or 'lsx'"
            ;;
    esac

    # Locate any additional command line switches for ls and accumulate them.
    # Likewise accumulate non-switches to the directories list.
    while [ $# -gt 0 ]
    do
        case "$1" in
            # FIXME: Handle switches that take arguments, eg --block-size
            # FIXME: Properly handle --almost-all, --ignore-backups, --format
            # FIXME: and --ignore
            # FIXME: Properly handle --recursive
            -a | -A | --all | --almost-all)
                no_dots=0;
                ;;
            --version)
                report "version 1.2"
                exit 0
                ;;
            --help)
                case $type in
                    d) report "a version of 'ls' that lists only directories" ;;
                    l) report "a version of 'ls' that lists only links" ;;
                    f) if [ "x$find_extras" = "x" ] ; then
                           report "a version of 'ls' that lists only files" ;
                       else
                           report "a version of 'ls' that lists only executables";
                       fi ;;
                esac
                exit 0
                ;;
            --)
                # A switch to say that all further items on the command line are
                # arguments and not switches.
                no_more_args=1 ;
                ;;
            -*)
                if [ "x$no_more_args" = "x1" ] ;
                then
                    dirs[$num_dirs]="$1";
                    let "num_dirs++"
                else
                    # Check for a switch that just uses a single dash, not a double
                    # dash. This could actually be multiple switches combined into
                    # one word, eg "lsd -alF". In this case, scan for the -a switch.
                    # XXX: FIXME: The use of =~ requires bash v3.0+.
                    if [[ "x${1:1:1}" != "x-" && "x$1" =~ "x-.*a.*" ]] ;
                    then
                        no_dots=0;
                    fi
                    opts="$opts $1";
                fi
                ;;
            *)
                dirs[$num_dirs]="$1";
                let "num_dirs++"
                ;;
        esac
        shift
    done

    # Remember that we are counting from zero not one.
    if [ $num_dirs -gt 0 ] ;
then
|
||||
let "num_dirs--"
|
||||
fi
|
||||
}
|
||||
|
||||
list_things_in_dir ()
|
||||
{
|
||||
local dir
|
||||
|
||||
# Paranoia checks - the user should never encounter these.
|
||||
if test "x$1" = "x" ;
|
||||
then
|
||||
fail "list_things_in_dir called without an argument"
|
||||
fi
|
||||
|
||||
if test "x$2" != "x" ;
|
||||
then
|
||||
fail "list_things_in_dir called with too many arguments"
|
||||
fi
|
||||
|
||||
# Use quotes when accessing $dir in order to preserve
|
||||
# any spaces that might be in the directory name.
|
||||
dir="${dirs[$1]}";
|
||||
|
||||
# Catch directory names that start with a dash - they
|
||||
# confuse pushd.
|
||||
if test "x${dir:0:1}" = "x-" ;
|
||||
then
|
||||
dir="./$dir"
|
||||
fi
|
||||
|
||||
if [ -d "$dir" ]
|
||||
then
|
||||
if [ $num_dirs -gt 0 ]
|
||||
then
|
||||
echo " $dir:"
|
||||
fi
|
||||
|
||||
# Use pushd rather passing the directory name to find so that the
|
||||
# names that find passes on to xargs do not have any paths prepended.
|
||||
pushd "$dir" > /dev/null
|
||||
if [ $no_dots -ne 0 ] ; then
|
||||
find . -maxdepth 1 -type $type $find_extras -not -name ".*" -printf "%f\000" \
|
||||
| xargs --null --no-run-if-empty ls $opts -- ;
|
||||
else
|
||||
find . -maxdepth 1 -type $type $find_extras -printf "%f\000" \
|
||||
| xargs --null --no-run-if-empty ls $opts -- ;
|
||||
fi
|
||||
popd > /dev/null
|
||||
else
|
||||
report "directory '$dir' could not be found"
|
||||
fi
|
||||
}
|
||||
|
||||
list_objects ()
|
||||
{
|
||||
local i
|
||||
|
||||
i=0;
|
||||
while [ $i -le $num_dirs ]
|
||||
do
|
||||
list_things_in_dir i
|
||||
let "i++"
|
||||
done
|
||||
}
|
||||
|
||||
# Invoke main
|
||||
main ${1+"$@"}
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/2/script-large-files
|
||||
|
||||
作者:[Nick Clifton][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/nickclifton
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background)
|
@ -0,0 +1,106 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-12033-1.html)
|
||||
[#]: subject: (Waterfox: Firefox Fork With Legacy Add-ons Options)
|
||||
[#]: via: (https://itsfoss.com/waterfox-browser/)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
|
||||
|
||||
水狐:一个支持旧版扩展的火狐复刻版
|
||||
======
|
||||
|
||||
> 在本周的开源软件推荐中,我们将介绍一个基于 Firefox 的浏览器,该浏览器支持 Firefox 如今已不再支持的旧版扩展,同时尽可能地提供了快速的用户体验。
|
||||
|
||||
在 Web 浏览器方面,虽然谷歌浏览器已经占据了最大的市场份额,但 [Mozilla Firefox 仍然是关切隐私的主流 Web 浏览器的一面大旗][1]。
|
||||
|
||||
Firefox 最近有了很多改进,这些改进的副作用之一是它删除了旧版<ruby>扩展附件<rt>add-on</rt></ruby>的支持。如果你最喜欢的扩展附件在最近几个月/几年内消失了,那么你可以以 Witerfox 的形式再次拥有它们。
|
||||
|
||||
> 注意!
|
||||
>
|
||||
> 我们注意到,Waterfox 已被 System1 收购。该公司还收购了注重隐私的搜索引擎 Startpage。尽管 System1 声称他们提供注重隐私的产品,因为“这是刚需”,但我们不能对此担保。换句话说,这要取决于你是否信任 System1 和 Waterfox。
|
||||
|
||||
### Waterfox:一个基于 Firefox 的浏览器
|
||||
|
||||
![Waterfox Classic][2]
|
||||
|
||||
[Waterfox][3] 是基于 Firefox 构建的一个好用的开源浏览器,它注重隐私并支持旧版扩展。它没有将自己定位为偏执于隐私的浏览器,但确实尊重这个基本的认知。
|
||||
|
||||
你可以得到两个单独的 Waterfox 浏览器版本。当前版旨在提供现代体验,而经典版则旨在支持 [NPAPI 插件][4] 和 [bootstrap 扩展][5]。
|
||||
|
||||
![Waterfox Classic][6]
|
||||
|
||||
如果你不需要使用 bootstrap 扩展程序,而是需要 [WebExtensions][7],则应该选择 Waterfox 当前版。
|
||||
|
||||
而如果你需要设置一个需要大量 NPAPI 插件或 Bootstrap 扩展的浏览器,则 Waterfox 经典版将非常适合你。
|
||||
|
||||
因此,如果你喜欢 Firefox,但想在同一阵营内尝试一些不同的体验,那么这个 Firefox 替代选择就是为此而生的。
|
||||
|
||||
### Waterfox 的功能
|
||||
|
||||
![Waterfox Current][8]
|
||||
|
||||
当然,从技术上讲,你应该能够做 Mozilla Firefox 支持的许多操作。
|
||||
|
||||
因此,我将在此处的列表中突出显示 Waterfox 的所有重要功能。
|
||||
|
||||
* 支持 NPAPI 插件
|
||||
* 支持 Bootstrap 扩展
|
||||
* 分别提供了支持旧版本扩展和现代的 WebExtension 两个版本。
|
||||
* 跨平台支持(Windows、Linux 和 macOS)
|
||||
* 主题定制
|
||||
* 支持已经归档的扩展
|
||||
|
||||
### 在 Ubuntu/Linux 上安装 Waterfox
|
||||
|
||||
与其他流行的浏览器不同,它没有可以安装的软件包。因此,你将必须从其[官方下载页面][9]下载归档包。
|
||||
|
||||
![][10]
|
||||
|
||||
根据你想要的版本(当前版/经典版),只需下载该文件,它是以 .tar.bz2 为扩展名的文件。
|
||||
|
||||
下载后,只需解压缩文件即可。
|
||||
|
||||
接下来,转到解压缩的文件夹并查找 `Waterfox` 文件。你只需双击它即可运行以启动浏览器。
|
||||
|
||||
如果这不起作用,则可以使用终端并导航到提取的 `Waterfox` 文件夹。到达那里后,你只需使用一个命令即可运行它。看起来如下:
|
||||
|
||||
```
|
||||
cd waterfox-classic
|
||||
./waterfox
|
||||
```
|
||||
|
||||
无论是哪种情况,你都可以访问其 [GitHub 页面][11]以了解将其安装在系统上的更多方式。
|
||||
|
||||
- [下载 Waterfox][3]
|
||||
|
||||
### 总结
|
||||
|
||||
我在我的 Pop!_OS 19.10 系统中启动了它,在我这里工作的很好。尽管我不准备从 Firefox 切换到 Waterfox,因为我没有使用任何旧版扩展附件。但对于某些用户来说,它可能是一个重要选择。
|
||||
|
||||
你可以尝试一下,在下面的评论中让我知道你的想法。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/waterfox-browser/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://itsfoss.com/why-firefox/
|
||||
[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/waterfox-classic.png?fit=800%2C423&ssl=1
|
||||
[3]: https://www.waterfox.net/
|
||||
[4]: https://en.wikipedia.org/wiki/NPAPI
|
||||
[5]: https://wiki.mozilla.org/Extension_Manager:Bootstrapped_Extensions
|
||||
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/waterfox-classic-screenshot.jpg?ssl=1
|
||||
[7]: https://wiki.mozilla.org/WebExtensions
|
||||
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/waterfox-screenshot.jpg?ssl=1
|
||||
[9]: https://www.waterfox.net/download/
|
||||
[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/waterfox-download-page.jpg?ssl=1
|
||||
[11]: https://github.com/MrAlex94/Waterfox
|
@ -0,0 +1,73 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (Morisun029)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-12020-1.html)
|
||||
[#]: subject: (7 tips for writing an effective technical resume)
|
||||
[#]: via: (https://opensource.com/article/20/2/technical-resume-writing)
|
||||
[#]: author: (Emily Brand https://opensource.com/users/emily-brand)
|
||||
|
||||
撰写有效的技术简历的 7 个技巧
|
||||
======
|
||||
|
||||
> 遵循以下这些要点,把自己最好的一面呈现给潜在雇主。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202003/21/092248u2w2gz2aezre2ba2.jpg)
|
||||
|
||||
如果你是一名软件工程师或技术领域的经理,那么创建或更新简历可能是一项艰巨的任务。要考虑的重点是什么?应该怎么把控格式、内容以及求职目标或摘要?哪些工作经验相关?如何确保自动化招聘工具不会过滤掉你的简历?
|
||||
|
||||
在过去的七年中,作为一名招聘经理,我看到了各种各样的简历;尽管有些令人印象深刻,但还有很多人写的很糟糕。
|
||||
|
||||
在编写或更新简历时,请遵循以下七个简单原则。
|
||||
|
||||
### 1、概述
|
||||
|
||||
简历顶部的简短段落应简洁明了、目的明确,避免过度使用形容词和副词。诸如“令人印象深刻”、“广泛”和“卓越”之类的词,这些词不会增加你的招聘机会;相反,它们看起来和感觉上像是过度使用的填充词。 关于你的求职目标,问自己一个重要的问题:**它是否告诉招聘经理我正在寻找什么样的工作以及如何为他们提供价值?** 如果不是,请加强并简化它以回答该问题,或者将其完全排除在外。
|
||||
|
||||
### 2、工作经验
|
||||
|
||||
数字、数字、数字——重要的事情说三遍。用确凿的事实传达观点远比一般的陈述,例如“帮助构建、管理、交付许多对客户利润有贡献的项目”更能对你有帮助。你的表达中应包括统计数据,例如“直接影响了 5 个顶级银行的项目,这些项目将其上市时间缩短了 40%”,你提交了多少行代码或管理了几个团队。数据比修饰语更能有效地展示你的能力和价值。
|
||||
|
||||
如果你经验不足,没有什么工作经验可展示,那些无关的经验,如暑期兼职工作,就不要写了。相反,将相关经验的细节以及你所学到的知识的详细信息写进简历,这些可以使你成为一个更好的候选人。
|
||||
|
||||
### 3、搜索术语和行话
|
||||
|
||||
随着技术在招聘过程中发挥如此巨大的作用,确保简历被标记为正确的职位非常重要,但不要在简历上过分吹嘘自己。如果你提到敏捷技能但不知道看板是什么,请三思。如果你提到自己精通 Java,但是已经有五年都没有使用过 Java 了,请小心。如果存在你熟悉但不一定是当前在用的语言和框架,请创建其他类别或将你的经验分为“精通”和“熟悉”。
|
||||
|
||||
### 4、教育
|
||||
|
||||
如果你不是应届大学毕业生,那就没必要再写你的 GPA 或你参加过的俱乐部或兄弟会,除非你计划将它们用作谈话要点以在面试中赢得信任。确保将你发表的或获取过专利的东西包括在内,即使它与该工作无关。如果你没有大学学位,请添加一个证书部分代替教育背景部分。如果你是军人,请包括现役和预备役时间。
|
||||
|
||||
### 5、资质证书
|
||||
|
||||
除非你想重新进入之前离开的领域,否则不要写过期的证书,例如,如果你曾经是一名人事经理,而现在正寻求动手编程的工作。如果你拥有与该领域不再相关的认证,就不要写这些认证,因为这些可能会分散招聘者的注意力,使你的简历失去吸引力。利用你的 LinkedIn 个人资料为简历添加更多色彩,因为大多数人在面试之前都会阅读你的简历和 LinkedIn 个人资料。
|
||||
|
||||
### 6、拼写和语法
|
||||
|
||||
让其他人帮忙对你的简历校对一下。很多时候,我在简历中看到过拼写错误的单词,或者错误用词,如“他们的”、“他们是”、“那些”。这些可以避免和修复的错误会产生负面影响。理想情况下,你的简历应用主动语态,但是如果这样会使你感到不舒服,那么就用过去时书写 —— 最重要的是要始终保持一致。不正确的拼写和语法会传递你要么不是很在乎所申请的工作,要么没有足够注意细节。
|
||||
|
||||
### 7、格式
|
||||
|
||||
确保你的简历是最新的并且富有吸引力,这是留下良好第一印象的简便方法。确保格式一致,例如相同的页边距,相同的间距、大写字母和颜色(将调色板保持在最低限度,不要花花绿绿)是简历写作中最基本的部分,但有必要表明你对工作感到自豪,并重视自己的价值和未来的雇主。在适当的地方使用表格,以视觉吸引人的方式分配信息。如果支持的话,以 .pdf 和 .docx 格式上传简历,然后用 Google Docs 导出为 .odt 格式,这样可以在 LibreOffice 中轻松打开。这里有一个我推荐的简单的 Google 文档[简历模板][2]。 你还可以支付少量费用(不到 10 美元)从一些设计公司购买模板。
|
||||
|
||||
### 定期更新
|
||||
|
||||
如果你需要(或希望)申请一份工作,定期更新简历可以最大程度地减少压力,也可以帮助你创建和维护更准确的简历版本。撰写简历时,要有远见,确保至少让另外三个人对你的简历内容、拼写和语法进行检查。即使你是由公司招募或其他人推荐给公司的,面试官也可能只能通过简历来认识你,因此请确保它为你带来良好的第一印象。
|
||||
|
||||
你还有其他提示要添加吗?
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/2/technical-resume-writing
|
||||
|
||||
作者:[Emily Brand][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[Morisun029](https://github.com/Morisun029)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/emily-brand
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/resume_career_document_general.png?itok=JEaFL2XI (Two hands holding a resume with computer, clock, and desk chair )
|
||||
[2]: https://docs.google.com/document/d/1ARVyybC5qQEiCzUOLElwAdPpKOK0Qf88srr682eHdCQ/edit
|
@ -0,0 +1,143 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (linusboyle)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-12023-1.html)
|
||||
[#]: subject: (Communicating with other users on the Linux command line)
|
||||
[#]: via: (https://www.networkworld.com/article/3530343/communicating-with-other-users-on-the-linux-command-line.html)
|
||||
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
|
||||
|
||||
使用 Linux 命令行与其他用户进行通信
|
||||
======
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202003/22/171055z3q772v2zq320zx3.jpg)
|
||||
|
||||
使用 Linux 命令行向其他用户发送消息或许非常容易,这里有一些相关的命令你可以考虑使用。在这篇文章中,我们会考察 4 个这样的命令,看看它们是怎么工作的。
|
||||
|
||||
### wall
|
||||
|
||||
`wall`(“Write ALL” 的简称)命令允许你向所有系统中已登录的用户发送一条信息。这里我们假设用户都使用命令行在同一台服务器上工作。虽然 `wall` 命令最常被系统管理员用于向用户发布公告和传递信息(比如说,服务器即将因维护而关闭),但它可以被任何用户使用。
|
||||
|
||||
系统管理员可能会用类似下面的方式发送信息:
|
||||
|
||||
```
|
||||
$ wall The system will be going down in 15 minutes to address a serious problem
|
||||
```
|
||||
|
||||
而所有登录的用户都将看到类似这样的信息:
|
||||
|
||||
```
|
||||
Broadcast message from admin@dragonfly (pts/0) (Thu Mar 5 08:56:42 2020):
|
||||
The system is going down in 15 minutes to address a serious problem
|
||||
```
|
||||
|
||||
如果希望在消息中使用单引号,你可以像这样将信息用双引号括起来:
|
||||
|
||||
```
|
||||
$ wall "Don't forget to save your work before logging off"
|
||||
```
|
||||
|
||||
最外层的双引号不会出现在发出的消息中,但是如果没有它们,`wall` 会停下并等待输入一个配对的单引号。
|
||||
|
||||
### mesg
|
||||
|
||||
如果出于某种理由你不想接收来自另一个用户的消息,你可以使用 `mesg` 命令来屏蔽这些消息。这个命令可以接受一个 `n` 作为参数来拒绝某用户的消息,或者接收一个 `y` 作为参数来接收用户发来的消息。
|
||||
|
||||
```
|
||||
$ mesg n doug
|
||||
$ mesg y doug
|
||||
```
|
||||
|
||||
被屏蔽的用户不会被告知这一事实。你也可以像这样使用 `mesg` 来屏蔽或者接收所有消息:
|
||||
|
||||
```
|
||||
$ mesg y
|
||||
$ mesg n
|
||||
```
|
||||
|
||||
### write
|
||||
|
||||
另一个在不使用电子邮件的情况下发送文本的命令是 `write`,这个命令可以用来和一个特定的用户通信。
|
||||
|
||||
```
|
||||
$ write nemo
|
||||
Are you still at your desk?
|
||||
I need to talk with you right away.
|
||||
^C
|
||||
```
|
||||
|
||||
输入你的信息后用 `ctrl-c` 退出,这样就完成了通信。这个命令允许你发送文本,但并不会建立一个双向的通话。它只是将文本发送过去而已。如果目标用户在多个终端上登录,你可以指定你想将消息发送到哪一个终端,否则系统会选择空闲时间最短的那个终端。
|
||||
|
||||
```
|
||||
$ write nemo#1
|
||||
```
|
||||
|
||||
如果你试图向一个将消息屏蔽了的用户发送信息,你应该会看到这样的输出:
|
||||
|
||||
```
|
||||
$ write nemo
|
||||
write: nemo has messages disabled
|
||||
```
|
||||
|
||||
### talk/ytalk
|
||||
|
||||
`talk` 和 `ytalk` 命令让你可以和一个或多个用户进行交互式的聊天。它们会展示一个有上下两个子窗口的界面,每个用户向显示在他们屏幕上方的窗口内输入内容,并在下方的窗口看到回复信息。要回复一个`talk` 请求,接收方可以输入 `talk`,在后面加上请求方的用户名。
|
||||
|
||||
```
|
||||
Message from Talk_Daemon@dragonfly at 10:10 ...
|
||||
talk: connection requested by dory@127.0.0.1.
|
||||
talk: respond with: talk dory@127.0.0.1
|
||||
|
||||
$ talk dory
|
||||
```
|
||||
|
||||
如果使用的是 `ytalk`,那么窗口中可以包含多于两个参与者。正如下面的例子所展示的(这是上面 `talk dory` 命令的结果),`talk` 通常指向 `ytalk`。
|
||||
|
||||
```
|
||||
----------------------------= YTalk version 3.3.0 =--------------------------
|
||||
Is the report ready?
|
||||
|
||||
-------------------------------= nemo@dragonfly =----------------------------
|
||||
Just finished it
|
||||
```
|
||||
|
||||
如上所述,在通话的另一侧,`talk`会话界面的窗口是相反的:
|
||||
|
||||
```
|
||||
----------------------------= YTalk version 3.3.0 =--------------------------
|
||||
Just finished it
|
||||
|
||||
-------------------------------= dory@dragonfly =----------------------------
|
||||
Is the report ready?
|
||||
```
|
||||
|
||||
同样的,使用 `ctrl-c` 来退出。
|
||||
|
||||
如果要和非本机的用户通讯,你需要加上 `-h` 选项和目标主机名或IP地址,就像这样:
|
||||
|
||||
```
|
||||
$ talk -h 192.168.0.11 nemo
|
||||
```
|
||||
|
||||
### 总结
|
||||
|
||||
Linux 上有若干基本的命令可以用来向其他登录的用户发送消息。如果你需要向所有用户快速发送信息或是需要便捷的电话替代品,又或是希望能简单地开始一个多用户快速通讯会话,这些命令会十分实用。
|
||||
|
||||
一些命令如 `wall` 允许广播消息但却不是交互式的。另外的一些命令如 `talk` 允许多用户进行长时间通讯,当你只需要非常快速地交换一些信息,它们可以你你避免建立一个电话会议。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3530343/communicating-with-other-users-on-the-linux-command-line.html
|
||||
|
||||
作者:[Sandra Henry-Stocker][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[linusboyle](https://github.com/linusboyle)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
|
||||
[2]: https://www.facebook.com/NetworkWorld/
|
||||
[3]: https://www.linkedin.com/company/network-world
|
@ -1,30 +1,32 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-12005-1.html)
|
||||
[#]: subject: (6 Raspberry Pi tutorials to try out)
|
||||
[#]: via: (https://opensource.com/article/20/3/raspberry-pi-tutorials)
|
||||
[#]: author: (Lauren Pritchett https://opensource.com/users/lauren-pritchett)
|
||||
|
||||
6 个可以尝试的树莓派教程
|
||||
======
|
||||
这些树莓派项目均旨在简化你的生活并提高生产力。
|
||||
![Cartoon graphic of Raspberry Pi board][1]
|
||||
|
||||
没有什么比体验树莓派创作结果更令人兴奋了。经过数小时的编程、测试和徒手构建,你的项目终于开始成形,你不禁大喊 “woohoo!”。树莓派可以带给日常生活的可能性让我着迷。无论你是想学习新知识、尝试提高效率还是只是乐在其中,本文总有一个树莓派项目适合你。
|
||||
> 这些树莓派项目均旨在简化你的生活并提高生产力。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202003/17/142619e7jbh7dj5448nf1i.jpg)
|
||||
|
||||
没有什么比体验树莓派创作结果更令人兴奋了。经过数小时的编程、测试和徒手构建,你的项目终于开始成形,你不禁大喊 “哇哦!”树莓派可以带给日常生活的可能性让我着迷。无论你是想学习新知识、尝试提高效率还是只是乐在其中,本文总有一个树莓派项目适合你。
|
||||
|
||||
### 设置 VPN 服务器
|
||||
|
||||
本[教程][[2]教你如何使用树莓派添加网络安全层。这个项目不仅有实际好处,而且还能为你带来很多乐趣。额外的安全性使你可以放心地做其他项目,例如下面列出的项目。
|
||||
本[教程][2]教你如何使用树莓派添加一个网络安全层。这个项目不仅有实际好处,而且还能为你带来很多乐趣。额外的安全性使你可以放心地做其它项目,例如下面列出的项目。
|
||||
|
||||
### 创建一个物体跟踪摄像机
|
||||
|
||||
树莓派之所以具有吸引力,是因为它提供了较低的入门门槛来学习机器学习等新技术。这份[逐步指南][3]提供了详尽的说明,说明了如何构建一个全景摄像头,以便使用 TensorFlow 和树莓派跟踪运动。
|
||||
树莓派之所以具有吸引力,是因为它提供了较低的入门门槛来学习机器学习等新技术。这份[分步指南][3]提供了详尽的说明,说明了如何构建一个全景摄像头,以便使用 TensorFlow 和树莓派跟踪运动。
|
||||
|
||||
### 使用照片幻灯片展示你最喜欢的回忆
|
||||
|
||||
你是否曾经问过自己:“我应该怎么处理这些数码照片?”。如果你像我一样,那么答案是“是”。在朋友和家人圈子中,我被公认为摄像爱好者。这就是为什么我喜欢这个树莓派项目。在[本教程][4]中,你将学习如何设置照片幻灯片,以便轻松地在家里展示自己喜欢的回忆,而无需打印机!
|
||||
你是否曾经问过自己:“我应该怎么处理这些数码照片?”。如果你像我一样这样想过,那么答案是“是”。在朋友和家人圈子中,我被公认为摄像爱好者。这就是为什么我喜欢这个树莓派项目。在[本教程][4]中,你将学习如何设置照片幻灯片,以便轻松地在家里展示自己喜欢的回忆,而无需打印机!
|
||||
|
||||
### 玩复古电子游戏
|
||||
|
||||
@ -32,13 +34,13 @@
|
||||
|
||||
### 为你的娱乐中心搭建时钟
|
||||
|
||||
在过去的十年中,家庭娱乐中心发生了很大的变化。我的家人完全依靠流媒体服务来观看节目和电影。我之所以爱它是因为我可以通过移动设备或语音助手控制电视。但是,当你不再能看眼时钟时,便会失去一定程度的便利!请遵循[这些步骤][6],使用树莓派从头开始搭建自己的时钟显示。
|
||||
在过去的十年中,家庭娱乐中心发生了很大的变化。我的家人完全依靠流媒体服务来观看节目和电影。我之所以爱它是因为我可以通过移动设备或语音助手控制电视。但是,当你不再能一眼看到时钟时,便会失去一定程度的便利!请遵循[这些步骤][6],使用树莓派从头开始搭建自己的时钟显示。
|
||||
|
||||
### 扩大自制啤酒的生产规模
|
||||
|
||||
在[本教程][7]中,经验丰富的家庭酿酒师分享了他建立电动啤酒酿造系统的经验。该项目需要在硬件和零件上进行更多的前期投资,但由此产生的效率和一致性让这些值得。为此祝贺!
|
||||
|
||||
如果你是像我这样的树莓派新手,那么我建议你阅读我们可下载的树莓派指南。我们的[单页速查表][8]提供了入门指南。有关更多技巧和教程,我们的[综合指南][9]涵盖了一些主题,例如选择树莓派、保持更新、为社区做出贡献等。
|
||||
如果你是像我这样的树莓派新手,那么我建议你阅读我们这些可下载的树莓派指南。我们的[单页速查表][8]提供了入门指南。有关更多技巧和教程,我们的[综合指南][9]涵盖了一些主题,例如选择树莓派、保持更新、为社区做出贡献等。
|
||||
|
||||
你会尝试哪个树莓派项目?让我们在评论中知道。
|
||||
|
||||
@ -49,7 +51,7 @@ via: https://opensource.com/article/20/3/raspberry-pi-tutorials
|
||||
作者:[Lauren Pritchett][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,61 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (hkurj)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-12034-1.html)
|
||||
[#]: subject: (2020 Will Be a Year of Hindsight for SD-WAN)
|
||||
[#]: via: (https://www.networkworld.com/article/3531315/2020-will-be-a-year-of-hindsight-for-sd-wan.html)
|
||||
[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)
|
||||
|
||||
2020 年将是 SD-WAN 的回顾之年
|
||||
======
|
||||
|
||||
![](https://images.idgesg.net/images/article/2020/03/istock-1127447341-100834720-large.jpg)
|
||||
|
||||
对于软件定义的广域网(SD-WAN),“过去看起来困难的选择,知道了这些选择的结果后,现在看起来就很清晰了” 这一说法再合适不过了。总结过去的几年:云计算和数字化转型促使公司重新评估传统的 WAN 技术,该技术不再能够满足其不断增长的业务需求。从那时起,SD-WAN 成为一种有前途的新技术。
|
||||
|
||||
SD-WAN 旨在解决物理设备的流量管理问题,并支持从云进行基于软件的配置。许多最初的 SD-WAN 部署都因希望取代昂贵的多协议标签交换 (MPLS) 而得到推动。公司希望它可以神奇地解决他们所有的网络问题。但是在实践中,基本的 SD-WAN 解决方案远没有实现这一愿景。
|
||||
|
||||
快速发展到现在,围绕 SD-WAN 的炒作已经尘埃落定,并且早期的实施工作已经过去。现在是时候回顾一下我们在 2019 年学到的东西以及在 2020 年要改进的地方。所以,让我们开始吧。
|
||||
|
||||
### 1、这与节省成本无关
|
||||
|
||||
大多数公司选择 SD-WAN 作为 MPLS 的替代品,因为它可以降低 WAN 成本。但是,[节省的成本][1]会因 SD-WAN 的不同而异,因此不应将其用作部署该技术的主要驱动力。无论公司需要什么,公司都应该专注于提高网络敏捷性,例如实现更快的站点部署和减少配置时间。SD-WAN 的主要驱动力是使网络更高效。如果成功实现那么成本也会随之降低。
|
||||
|
||||
### 2、WAN 优化是必要的
|
||||
|
||||
说到效率,[WAN 优化][2]提高了应用程序和数据流量的性能。通过应用协议加速、重复数据消除、压缩和缓存等技术,WAN 优化可以增加带宽、减少延迟并减轻数据包丢失。最初的想法是 SD-WAN 可以完成对 WAN 优化的需求,但是我们现在知道某些应用程序需要更多的性能支持。这些技术相互补充,而不是相互替代。它们应该用来解决不同的问题。
|
||||
|
||||
### 3、安全性不应该事后考虑
|
||||
|
||||
SD-WAN 具有许多优点,其中之一就是使用宽带互联网快速发送企业应用程序流量。但是这种方法也带来了安全风险,因为它使用户及其本地网络暴露于不受信任的公共互联网中。从一开始,安全性就应该成为 SD-WAN 实施的一部分,而不是在事后。公司可以通过使用[安全的云托管][3]之类的服务,将安全性放在分支机构附近,从而实现所需的应用程序性能和保护。
|
||||
|
||||
### 4、可见性对于 SD-WAN 成功至关重要
|
||||
|
||||
在应用程序和数据流量中具有[可见性][4],这使网络管理不再需要猜测。最好的起点是部署前阶段,在此阶段,公司可以在实施 SD-WAN 之前评估其现有功能以及缺少的功能。可见性以日常监控和警报的形式在部署后继续发挥重要作用。了解网络中正在发生的情况的公司会更好地准备应对性能问题,并可以利用这些知识来避免将来出现问题。
|
||||
|
||||
### 5、无线广域网尚未准备就绪
|
||||
|
||||
SD-WAN 可通过包括宽带和 4G/LTE 无线在内的任何传输将用户连接到应用程序。这就是[移动互联][5]越来越多地集成到 SD-WAN 解决方案中的原因。尽管公司渴望将 4G 用作潜在的传输替代方案(尤其是在偏远地区),但由此带来的按使用付费 4G 服务成本却很高。此外,由于延迟和带宽限制,4G 可能会出现问题。最好的方法是等待服务提供商部署具有更好的价格选择的 5G。今年我们将看到 5G 的推出,并更加关注无线 SD-WAN。
|
||||
|
||||
请务必观看以下 SD-WAN 视频系列:[你应该知道的所有关于 SD-WAN 的知识][6]。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3531315/2020-will-be-a-year-of-hindsight-for-sd-wan.html
|
||||
|
||||
作者:[Zeus Kerravala][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[hkurj](https://github.com/hkurj)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Zeus-Kerravala/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://blog.silver-peak.com/to-maximize-the-value-of-sd-wan-look-past-hardware-savings
|
||||
[2]: https://blog.silver-peak.com/sd-wan-vs-wan-optimization
|
||||
[3]: https://blog.silver-peak.com/sd-wans-enable-scalable-local-internet-breakout-but-pose-security-risk
|
||||
[4]: https://blog.silver-peak.com/know-the-true-business-drivers-for-sd-wan
|
||||
[5]: https://blog.silver-peak.com/mobility-and-sd-wan-part-1-sd-wan-with-4g-lte-is-a-reality
|
||||
[6]: https://www.silver-peak.com/everything-you-need-to-know-about-sd-wan
|
@ -0,0 +1,103 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-12001-1.html)
|
||||
[#]: subject: (Basilisk: A Firefox Fork For The Classic Looks and Classic Extensions)
|
||||
[#]: via: (https://itsfoss.com/basilisk-browser/)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
|
||||
|
||||
Basilisk:一个有着经典的外观和扩展的 Firefox 复刻
|
||||
======
|
||||
|
||||
> Basilisk 是一个 Firefox 复刻,它支持旧版的扩展等更多功能。在这里,我们看一下它的功能并尝试一下。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202003/16/130319lvls6fvsaslzllrv.jpg)
|
||||
|
||||
### Basilisk:基于 XUL 的开源 Web 浏览器
|
||||
|
||||
尽管最好使用 Linux 上的常规浏览器(如 Firefox 或 Chromium),但了解其他浏览器也没坏处。最近,我偶然发现了一个 Firefox 复刻:[Basilisk][1] 浏览器,它有经典的 Firefox 用户界面以及对旧版扩展的支持(就像 [Waterfox][2] 一样)。
|
||||
|
||||
![itsfoss.com homepage on Basilisk][3]
|
||||
|
||||
如果你迫切需要使用旧版扩展程序或怀念 Firefox 的经典外观,Basilisk 浏览器可以帮到你。这个浏览器是由 [Pale Moon][4] 浏览器背后的团队维护(这是我接下来要介绍的另一个 Firefox 复刻)。
|
||||
|
||||
如果你正在寻找开源 [Chrome 替代品][5],那么你可以快速了解一下 Basilisk 提供的功能。
|
||||
|
||||
**注意:**Basilisk 是开发中软件。即使我在使用时没有遇到重大的可用性问题,但你也不应依赖它作为唯一使用的浏览器。
|
||||
|
||||
### Basilisk 浏览器的特性
|
||||
|
||||
![][6]
|
||||
|
||||
Basilisk 开箱即用。但是,在考虑使用之前,可能需要先看一下以下这些特性:
|
||||
|
||||
* 基于 [XUL][7] 的 Web 浏览器
|
||||
* 它具有 “Australis” Firefox 界面,这在 v29–v56 的 Firefox 版本中非常流行。
|
||||
* 支持 [NPAPI][8] 插件(Flash、Unity、Java 等)
|
||||
* 支持 XUL/Overlay Mozilla 形式的扩展。
|
||||
* 使用 [Goanna][9] 开源浏览器引擎,它是 Mozilla [Gecko][10] 的复刻
|
||||
* 不使用 Rust 或 Photon 用户界面
|
||||
* 仅支持 64 位系统
|
||||
|
||||
### 在 Linux 上安装 Basilisk
|
||||
|
||||
你可能没有在软件中心中找到它。因此,你必须前往其官方[下载页面][11]获得 tarball(tar.xz)文件。
|
||||
|
||||
下载后,只需将其解压缩并进入文件夹。接下来,你将在其中找到一个 `Basilisk` 可执行文件。你只需双击或右键单击并选择 “运行” 即可运行它。
|
||||
|
||||
你可以查看它的 [GitHub 页面][12]获取更多信息。
|
||||
|
||||
![][13]
|
||||
|
||||
你也可以按照下面的步骤使用终端进入下载的文件夹,并运行文件:
|
||||
|
||||
```
|
||||
cd basilisk-latest.linux64
|
||||
cd basilisk
|
||||
./basilisk
|
||||
```
|
||||
|
||||
- [下载 Basilisk][1]
|
||||
|
||||
### 使用 Basilisk 浏览器
|
||||
|
||||
![][14]
|
||||
|
||||
如果你想要支持旧版扩展,Basilisk 是不错的 Firefox 复刻。它是由 Pale Moon 背后的团队积极开发的,对于希望获得 Mozilla Firefox(在 Quantum 更新之前)经典外观,且不包括现代 Web 支持的用户而言,它可能是一个不错的选择。
|
||||
|
||||
浏览网页没有任何问题。但是,我注意到 YouTube 将其检测为过时的浏览器,并警告说它将很快停止支持它。
|
||||
|
||||
**因此,我不确定 Basilisk 是否适合所有现有的 Web 服务 —— 但是,如果你确实需要使用 Firefox 较早版本中的扩展,那这是一个解决方案。**
|
||||
|
||||
### 总结
|
||||
|
||||
你认为这个 Firefox 复刻值得尝试吗?你喜欢哪个?在下面的评论中分享你的想法。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/basilisk-browser/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.basilisk-browser.org/
|
||||
[2]: https://itsfoss.com/waterfox-browser/
|
||||
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/03/basilisk-itsfoss.jpg?ssl=1
|
||||
[4]: https://www.palemoon.org
|
||||
[5]: https://itsfoss.com/open-source-browsers-linux/
|
||||
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/03/basilisk-options-1.jpg?ssl=1
|
||||
[7]: https://developer.mozilla.org/en-US/docs/Archive/Mozilla/XUL
|
||||
[8]: https://wiki.mozilla.org/NPAPI
|
||||
[9]: https://en.wikipedia.org/wiki/Goanna_(software)
|
||||
[10]: https://developer.mozilla.org/en-US/docs/Mozilla/Gecko
|
||||
[11]: https://www.basilisk-browser.org/download.shtml
|
||||
[12]: https://github.com/MoonchildProductions/Basilisk
|
||||
[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/03/basilisk-folder-1.jpg?ssl=1
|
||||
[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/03/basilisk-browser-1.jpg?ssl=1
|
@ -0,0 +1,85 @@
|
||||
[#]: collector: "lujun9972"
|
||||
[#]: translator: "messon007"
|
||||
[#]: reviewer: "wxy"
|
||||
[#]: publisher: "wxy"
|
||||
[#]: url: "https://linux.cn/article-12015-1.html"
|
||||
[#]: subject: "Containers vs. VMs, Istio in production, and more industry news"
|
||||
[#]: via: "https://opensource.com/article/20/3/survey-istio-industry-news"
|
||||
[#]: author: "Tim Hildred https://opensource.com/users/thildred"
|
||||
|
||||
每周开源点评:容器 vs 虚拟机、生产环境中的 Istio 等
|
||||
======
|
||||
|
||||
> 本文是最近一周开源社区的新闻和行业进展。
|
||||
|
||||
![Person standing in front of a giant computer screen with numbers, data][1]
|
||||
|
||||
我在一家采用开源软件开发模型的企业软件公司任高级产品营销经理,我的一部分职责是为产品营销人员,经理和其他相关人定期发布有关开源社区,市场和业界发展趋势的更新。以下是该更新中我和他们最喜欢的五篇文章。
|
||||
|
||||
### 云原生应用采用的技术:容器等
|
||||
|
||||
- [文章链接][2]
|
||||
|
||||
> * 在生产环境中采用容器的比例从 2018 年的 73% 上升到 2019 年的 84%。其中,运行了至少 250 个容器的比例从 2018 年的 46% 上升到 2019 年的 58%。2017 到 2019 年间, 环境中拥有 50 台以上计算机(物理或虚拟)的受访者人数从 2017 年的 77% 上升到 2019 年的 81%。
|
||||
> * 表明: 容器的引入似乎缓解了需要管理的 VM 的快速增长。但是,请警惕要管理的原始机器数量会减少的说法。
|
||||
>
|
||||
|
||||
**分析**:从直觉上看,随着容器使用的增长,虚拟机的增长将放缓;有许多容器被部署在虚拟机内部,从而充分利用了两者的优势特性,而且许多应用不会很快被容器化(留意下你所在企业的传统单体式应用)。
|
||||
|
||||
### 在生产环境中运行Istio的经验
|
||||
|
||||
- [文章链接][3]
|
||||
|
||||
> 在 HelloFresh,我们将团队分为小队和团伙。每个团伙都有自己的 Kubernetes 命名空间。如上所述,我们先按命名空间启用 sidecar 注入,然后再逐个对应用启用。在将应用添加到 Istio 之前,我们举办了研讨会,以使小队了解其应用发生的变化。由于我们采用“您构建,您维护”的模型,团队可以在故障定位时了解应用的进出流量。不仅如此,它还提升了公司内部的知识量。我们还创建了 Istio 相关的 [OKR][4] ,来跟踪我们的进度并达成我们引入Istio的目的。
|
||||
|
||||
**分析**:引入或是不引入技术,要由自己决定,同时要自行承担相应的后果。
|
||||
|
||||
### Aether: 首个开源的边缘云平台
|
||||
|
||||
- [文章链接][5]
|
||||
|
||||
> ONF 的营销副主席 Sloane 这样解释 Aether: 它将多个正在自己的沙箱中开发和运行的项目聚集到一起,ONF 试图在该框架下将多种边缘服务在一个融合平台上支持起来。ONF 的各个项目将保持独立并可继续单独使用,但是 Aether 试图将多个能力绑定到一起来简化企业的私有边缘云运营。
|
||||
>
|
||||
> 他说:“我们认为我们正在创造一个新的合作空间,工业界和社区可以携手帮助推动通用平台背后的整合和关键工作,然后可以帮助这些边缘云中的通用功能不断发展”。
|
||||
|
||||
**分析**:当今,使用技术解决的问题过于复杂,无法通过单一技术解决。比技术更重要的是要解决的业务问题需要聚焦于真正增值的部分。将两者结合起来,就是企业之间需要在他们共同的需求上找到合作的方法,并在它们特有的方面进行竞争。除了开源,你找不到比这更好的方法了。
|
||||
|
||||
### 与云相关职业的女性正在改变现状
|
||||
|
||||
- [文章链接][6]
|
||||
|
||||
> Yordanova 说:“由于云是一种相对较新的技术,我的[成为一名“科技女性”][7]的经验可能并不典型,因为云行业极为多样化”。“实际上,我的团队中性别比例相当,成员由随着云技术而成长的不同个性、文化和优势的具体人员组成。“
|
||||
|
||||
**分析**:我想考虑的一件事就是跨越式的演进思路。你可能可以跳过演进过程中的某个步骤或阶段,因为原先导致其存在的条件已不再适用。云技术时代没有形成“谁发明的以及它是为谁而生”的固有说法,所以也许它所承载的某些前代的技术负担更少?
|
||||
|
||||
### StarlingX 如何在中国开源项目的星空中闪耀
|
||||
|
||||
- [文章链接][8]
|
||||
|
||||
> 我们的团队在中国,因此我们的任务之一是帮助中国的社区开发软件、贡献代码、文档等。大多数 StarlingX 项目会议是在中国的深夜举行,因此华人社区成员的出席和参与颇具挑战。为了克服这些障碍,我们与中国的其他社区成员(例如 99cloud 的朋友)一起采取了一些措施,例如和其他社区成员一起聚会,一起参加动手实践研讨会和中文的特设技术会议,将一些文档翻译成中文,并在微信小组中不断进行互动(就像每个人都可以享受的 24/7 通话服务一样)
|
||||
|
||||
**分析**:随着中国对开源项目的贡献不断增长,这种情况似乎有可能逆转或至少相当。“学习英语”根本不是参与开源项目开发的先决条件。
|
||||
|
||||
希望你喜欢这个列表,下周再见。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/3/survey-istio-industry-news
|
||||
|
||||
作者:[Tim Hildred][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[messon007](https://github.com/messon007)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/thildred
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr "Person standing in front of a giant computer screen with numbers, data"
|
||||
[2]: https://thenewstack.io/cncf-survey-snapshot-tech-adoption-in-the-cloud-native-world/
|
||||
[3]: https://engineering.hellofresh.com/everything-we-learned-running-istio-in-production-part-1-51efec69df65
|
||||
[4]: https://en.wikipedia.org/wiki/OKR
|
||||
[5]: https://www.sdxcentral.com/articles/news/onf-projects-coalesce-for-enterprise-edge-cloud/2020/03/
|
||||
[6]: https://www.cloudpro.co.uk/leadership/cloud-essentials/8446/how-women-in-cloud-are-challenging-the-narrative
|
||||
[7]: https://www.itpro.co.uk/business-strategy/33301/diversity-not-a-company-priority-claim-nearly-half-of-women-in-tech
|
||||
[8]: https://superuser.openstack.org/articles/starlingx-community-interview-how-starlingx-shines-in-the-starry-sky-of-open-source-projects-in-china/
|
@ -0,0 +1,87 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12018-1.html)
[#]: subject: (Amazon Has Launched Its Own Linux Distribution But It’s Not for Everyone)
[#]: via: (https://itsfoss.com/bottlerocket-linux/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

Amazon 推出了自己的容器专用 Linux 发行版“瓶装火箭”
======

![](https://img.linux.net.cn/data/attachment/album/202003/21/074953h0a55lq72h0qlpzd.jpg)

Amazon 已经[推出][1]了自己的基于 Linux 的开源操作系统 Bottlerocket(“瓶装火箭”)。但在你兴奋地想要尝试安装和运行它之前,我必须告诉你,它不是 Ubuntu、Fedora 或 Debian 这样的常规 Linux 发行版。那它是什么?

### Bottlerocket:来自 Amazon 的 Linux 发行版,用于运行容器

![][2]

如果你不了解 Linux 容器,建议你阅读 Red Hat 的[这篇文章][3]。

自从首次提出云计算一词以来,IT 行业发生了许多变化。得益于 Amazon AWS、Google、Linode、Digital Ocean 等云服务提供商,部署 Linux 服务器(通常运行在虚拟机中)只需几秒钟。更进一步,你还可以借助 Docker 和 Kubernetes 之类的工具,在这些服务器上以容器形式部署应用和服务。

问题是,当你的唯一目的是在 Linux 系统上运行容器时,并不总是需要完整的 Linux 发行版。这就是容器专用 Linux 仅提供必要软件包的原因:操作系统的体积大大减小,从而进一步缩短了部署时间。

[Bottlerocket][5] Linux 由 Amazon Web Services(AWS)专门构建,用于在虚拟机或裸机上运行容器。它支持 docker 镜像和其他遵循 [OCI 镜像][6]格式的镜像。

### Bottlerocket Linux 的特性

![][7]

这是来自 Amazon 的新 Linux 发行版提供的特性:

#### 没有逐包更新

传统 Linux 发行版的更新过程由更新单个软件包组成,而 Bottlerocket 改用基于镜像的更新。

得益于这种方法,它可以避免冲突和破坏,并在必要时进行快速而完整的回滚。

#### 只读文件系统

Bottlerocket 还使用了只读的主文件系统,并在启动时通过 dm-verity 检查其完整性。在其他安全措施方面,它不建议使用 SSH 访问,且 SSH 只能通过[管理容器][8](一种附加机制)使用。

AWS 已经统治了云世界。

#### 自动更新

你可以使用 Amazon EKS 之类的编排服务来自动执行 Bottlerocket 的更新。

Amazon 还声称,与通用 Linux 发行版相比,仅包含运行容器所需的基本软件可以减小攻击面。

### 你怎么看?

Amazon 并不是第一个创建“容器专用 Linux”的公司。我认为 CoreOS 是最早的此类发行版之一。[CoreOS 被 Red Hat 收购][9],Red Hat 又被 [IBM 收购][10]。Red Hat 公司最近停用了 CoreOS,并用 [Fedora CoreOS][11] 代替了它。

云服务器是一个巨大的行业,它将继续发展壮大。像 Amazon 这样的巨头会竭尽所能与其竞争对手保持一致或领先。我认为,Bottlerocket 是(目前)对 IBM 的 Fedora CoreOS 的回应。

尽管 [Bottlerocket 仓库可在 GitHub 上找到][12],但我还没发现现成可用的镜像(LCTT 译注:源代码已经提供)。在撰写本文时,它仅[可在 AWS 上预览][5]。

你对此有何看法?Amazon 会从 Bottlerocket 获得什么?如果你以前使用过 CoreOS 之类的系统,你会切换到 Bottlerocket 么?

--------------------------------------------------------------------------------

via: https://itsfoss.com/bottlerocket-linux/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://aws.amazon.com/blogs/aws/bottlerocket-open-source-os-for-container-hosting/
[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/03/botlerocket-logo.png?ssl=1
[3]: https://www.redhat.com/en/topics/containers/whats-a-linux-container
[4]: https://www.linode.com/
[5]: https://aws.amazon.com/bottlerocket/
[6]: https://www.opencontainers.org/
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/03/BottleRocket.png?ssl=1
[8]: https://github.com/bottlerocket-os/bottlerocket-admin-container
[9]: https://itsfoss.com/red-hat-acquires-coreos/
[10]: https://itsfoss.com/ibm-red-hat-acquisition/
[11]: https://getfedora.org/en/coreos/
[12]: https://github.com/bottlerocket-os/bottlerocket
@ -0,0 +1,215 @@
[#]: collector: (lujun9972)
[#]: translator: (hkurj)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12019-1.html)
[#]: subject: (6 Best AUR (Arch User Repository) Helpers for Arch Linux)
[#]: via: (https://www.2daygeek.com/best-aur-arch-user-repository-helpers-arch-linux-manjaro/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)

6 个用于 Arch Linux 的最佳 AUR 助手
======

![](https://img.linux.net.cn/data/attachment/album/202003/21/082920kxdmlwkk7xx7llpw.jpeg)

Arch Linux 是一款主要由面向 x86-64 微处理器的二进制软件包组成的 Linux 发行版。Arch Linux 使用滚动发布模型,这种模式会频繁地为应用程序交付更新。它使用名为 [pacman][1] 的软件包管理器来安装、删除和更新软件包。

由于 Arch Linux 是为有经验的用户构建的,建议新手在使用过其他 Linux 发行版之后再来尝试。

### 什么是 AUR(Arch 用户软件仓库)?

[Arch 用户软件仓库][2]通常称为 AUR,是面向 Arch 用户的基于社区的软件仓库。

根据软件包在 AUR 社区的流行程度,用户编译的软件包会被收录进 Arch 的官方软件仓库。

### 什么是 AUR 助手?

[AUR 助手][3]是一类包装程序,允许用户从 AUR 仓库安装软件包,而无需手动干预。

它们把许多操作自动化了,比如软件包搜索、解决依赖关系、检索和构建 AUR 软件包、Web 内容检索以及 AUR 软件包提交等。
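作为对照,下面是一个从 AUR 手动克隆并构建软件包的最小脚本草稿,AUR 助手自动化的正是这些步骤。其中 `aur_url`、`aur_install` 是为演示而虚构的函数名,并假设系统已安装 git 和 base-devel:

```
#!/bin/sh
# 根据软件包名拼出对应的 AUR git 仓库地址
aur_url() {
    echo "https://aur.archlinux.org/$1.git"
}

# 手动构建并安装一个 AUR 包:克隆、进入目录、makepkg
# (makepkg -si 会解决官方仓库依赖,并在构建完成后安装)
aur_install() {
    pkg=$1
    git clone "$(aur_url "$pkg")" &&
        cd "$pkg" &&
        makepkg -si
}

# 用法示例:aur_install yay
```

AUR 助手在此之上又增加了搜索、依赖解析、PKGBUILD 审查等便利功能。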
以下列出了 6 种最佳的 AUR 助手:

* Yay(Yet another Yogurt)
* Pakku
* Pacaur
* Pikaur
* Trizen
* Aura

### 1)Yay(Yet another Yogurt)

[Yay][4] 是 Arch Linux 下最佳的基于 CLI 的 AUR 助手,使用 Go 语言编写。Yay 是基于 yaourt、apacman 和 pacaur 设计的。

这是最适合推荐给新手的 AUR 助手。它的用法和 `pacman` 的命令和选项很相似,可以让用户在搜索过程中找到匹配的软件包提供者,并进行选择。

#### 如何安装 yay

依次运行以下命令以在 Arch Linux 系统上安装 yay。

```
$ sudo pacman -S git go base-devel
$ git clone https://aur.archlinux.org/yay.git
$ cd yay
$ makepkg -si
```

#### 如何使用 yay

`yay` 的语法与 `pacman` 相同,使用以下命令安装软件包。

```
$ yay -S arch-wiki-man
```

### 2)Pakku

[Pakku][5] 可以被视为尚处于起步阶段的 Pacman。它是一个包装程序,可以让用户从 AUR 中搜索或安装软件包。

它在删除依赖项方面做得不错,并且还允许通过克隆 PKGBUILD 来安装软件包。

#### 如何安装 Pakku

要在 Arch Linux 系统上安装 Pakku,请依次运行以下命令。

```
$ sudo pacman -S git base-devel
$ git clone https://aur.archlinux.org/pakku.git
$ cd pakku
$ makepkg -si
```

#### 如何使用 Pakku

`pakku` 的语法与 `pacman` 相同,使用以下命令安装软件包。

```
$ pakku -S dropbox
```

### 3)Pacaur

[Pacaur][6] 是另一个基于 CLI 的 AUR 助手,可帮助减少用户与提示符的交互。

Pacaur 专为倾向于将重复任务自动化的高级用户而设计。用户需要熟悉 `makepkg` 及其配置,以及手动构建 AUR 包的流程。

#### 如何安装 Pacaur

要在 Arch Linux 系统上安装 Pacaur,请依次运行以下命令。

```
$ sudo pacman -S git base-devel
$ git clone https://aur.archlinux.org/pacaur.git
$ cd pacaur
$ makepkg -si
```

#### 如何使用 Pacaur

`pacaur` 的语法与 `pacman` 相同,使用以下命令安装软件包。

```
$ pacaur -S spotify
```

### 4)Pikaur

[Pikaur][7] 是依赖最少的 AUR 助手,它可以一次审查所有 PKGBUILD,然后无需用户交互即可全部构建。

Pikaur 会通过控制 `pacman` 命令,来告知 Pacman 下一步要执行的步骤。

#### 如何安装 Pikaur

要在 Arch Linux 系统上安装 Pikaur,请依次运行以下命令。

```
$ sudo pacman -S git base-devel
$ git clone https://aur.archlinux.org/pikaur.git
$ cd pikaur
$ makepkg -fsri
```

#### 如何使用 Pikaur

`pikaur` 的语法与 `pacman` 相同,使用以下命令安装软件包。

```
$ pikaur -S spotify
```

### 5)Trizen

[Trizen][8] 是用 Perl 编写的基于命令行的轻量级 AUR 包装器。这是一个面向速度的 AUR 助手,允许用户搜索、安装软件包,还允许阅读 AUR 软件包的注释。

它支持编辑文本文件,输入/输出均使用 UTF-8,并内置了与 `pacman` 的交互功能。

#### 如何安装 Trizen

要在 Arch Linux 系统上安装 Trizen,请依次运行以下命令。

```
$ sudo pacman -S git base-devel
$ git clone https://aur.archlinux.org/trizen.git
$ cd trizen
$ makepkg -si
```

#### 如何使用 Trizen

`trizen` 的语法与 `pacman` 相同,使用以下命令安装软件包。

```
$ trizen -S google-chrome
```

### 6)Aura

[Aura][9] 是用 Haskell 编写的、面向 Arch Linux 和 AUR 的安全的多语言包管理器。它支持许多 Pacman 操作和子选项,可轻松进行开发并编写精美的代码。

它可以自动从 AUR 安装软件包。不过使用 Aura 时,用户通常会在系统升级方面遇到一些困难。

#### 如何安装 Aura

要在 Arch Linux 系统上安装 Aura,请依次运行以下命令。

```
$ sudo pacman -S git base-devel
$ git clone https://aur.archlinux.org/aura.git
$ cd aura
$ makepkg -si
```

#### 如何使用 Aura

`aura` 的语法与 `pacman` 相似,使用以下命令从 AUR 安装软件包(Aura 使用 `-A` 操作处理 AUR 软件包)。

```
$ sudo aura -A android-sdk
```

### 结论

用户可以凭借以上介绍,在这 6 个 AUR 助手中按需选择。

--------------------------------------------------------------------------------

via: https://www.2daygeek.com/best-aur-arch-user-repository-helpers-arch-linux-manjaro/

作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[hkurj](https://github.com/hkurj)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
[2]: https://wiki.archlinux.org/index.php/Arch_User_Repository
[3]: https://wiki.archlinux.org/index.php/AUR_helpers
[4]: https://github.com/Jguer/yay
[5]: https://github.com/kitsunyan/pakku
[6]: https://github.com/E5ten/pacaur
[7]: https://github.com/actionless/pikaur
[8]: https://github.com/trizen/trizen
[9]: https://github.com/fosskers/aura
@ -0,0 +1,114 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12030-1.html)
[#]: subject: (GNOME 3.36 Released With Visual & Performance Improvements)
[#]: via: (https://itsfoss.com/gnome-3-36-release/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

GNOME 3.36 发布,对视觉和性能进行了改进
======

![](https://img.linux.net.cn/data/attachment/album/202003/23/151837oglshll1b7kjj8jg.jpg)

在 [GNOME 3.34][1] 发布 6 个月后,代号为 “Gresik” 的最新版本 GNOME 3.36 终于发布了。GNOME 3.36 不仅添加了新功能,还改进了许多我们需要的已有功能。

在本文中,我将重点介绍 GNOME 新版本的主要更改。

### GNOME 3.36 的关键改进

如果你想快速了解发生了什么变化,可以看以下官方视频:

- [视频][2]

现在,让我分别重点介绍最新版本中的更改:

#### GNOME Shell 扩展应用

你可以通过专门的“扩展”应用轻松管理 GNOME Shell 的扩展。

![][3]

你可以使用该应用更新、配置、删除或禁用现有扩展。

#### 请勿打扰(DND)开关

![][4]

你可能已经在 Pop!\_OS 或其他 Linux 发行版中注意到了它。

现在,GNOME 3.36 在通知弹出区域中加入了 DND 开关。打开它之后,你将不会收到任何通知,除非你再将其关闭。

#### 锁屏改进

![][5]

在新版本中,锁定屏幕在要求输入凭据之前不再有额外的动画,取而代之的是直接进入登录界面。

另外,为了提升一致性,锁屏背景将是壁纸的模糊版本。

总的来说,对锁屏的改进旨在使之更易于使用,同时改善其外观和感受。

#### 视觉变化

![][6]

新版本包含了一些明显的新增内容,这些设计更改改善了 GNOME 3.36 的总体外观。

从图标的重新设计,到文件夹和系统对话框,许多小的改进增强了 GNOME 3.36 的用户体验。

此外,设置应用也经过了调整,通过细微的界面重新设计,使选项更容易访问。

#### 主要性能改进

GNOME 声称,此更新还提升了 GNOME 桌面的性能。

当使用装有 GNOME 3.36 的发行版时,你会注意到性能上的明显不同。无论是动画、重新设计还是细微的调整,GNOME 3.36 所做的一切都会对用户感受到的性能产生积极影响。

#### 其他更改

除了上述关键更改之外,还有很多其他改进,例如:

* 时钟应用重新设计
* 用户文档更新
* GNOME 安装助手改进

还有许多其他更改,你可以查看[官方发布说明][7]了解更多信息。

### 如何获取 GNOME 3.36?

尽管 GNOME 3.36 已正式发布,但各 Linux 发行版可能还需要一段时间,才能让你轻松升级到新的 GNOME。

[Ubuntu 20.04 LTS][8] 将提供最新版本的 GNOME,你可以等待它的发布。

其他[流行的 Linux 发行版][9],如 Fedora、openSUSE、Pop!\_OS,应该也会很快包含 GNOME 3.36。[Arch Linux][10] 已升级到 GNOME 3.36。

我建议你耐心等待,直到你的发行版提供更新。不过,你也可以查看[源代码][11],或尝试那些可能已搭载 GNOME 3.36 的流行发行版的开发版本。

你如何看待 GNOME 3.36?在下面的评论中让我知道你的想法。

--------------------------------------------------------------------------------

via: https://itsfoss.com/gnome-3-36-release/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/gnome-3-34-release/
[2]: https://img.linux.net.cn/static/video/Introducing%20GNOME%203.36%20-%20%27Gresik%27-ae2D4aWTsXM.mp4
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/03/extensions-gnome.jpg?ssl=1
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/03/dnd-gnome.jpg?ssl=1
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/03/gnome-lockscreen.jpg?ssl=1
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/03/gnome-settings.jpg?ssl=1
[7]: https://help.gnome.org/misc/release-notes/3.36/
[8]: https://itsfoss.com/ubuntu-20-04-release-features/
[9]: https://itsfoss.com/best-linux-distributions/
[10]: https://www.archlinux.org/
[11]: https://gitlab.gnome.org/GNOME
173
published/20200313 How to Change MAC Address in Linux.md
Normal file
@ -0,0 +1,173 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12008-1.html)
[#]: subject: (How to Change MAC Address in Linux)
[#]: via: (https://itsfoss.com/change-mac-address-linux/)
[#]: author: (Dimitrios Savvopoulos https://itsfoss.com/author/itsfoss/)

如何在 Linux 中更改 MAC 地址
======

在向你展示如何在 Linux 中更改 MAC 地址之前,让我们首先讨论为什么要更改它。

原因可能有几个。也许你不希望在公共网络上公开你的实际 [MAC 地址][1](也称为物理地址)?也可能是网络管理员在路由器或防火墙中封禁了某个特定的 MAC 地址。

一个实用的“好处”是:某些公共网络(例如机场 WiFi)只允许在有限的时间内免费上网。如果你还想继续使用,那么伪造 MAC 地址可能会让网络误以为这是一台新设备。这也是一个常见的原因。

![](https://img.linux.net.cn/data/attachment/album/202003/18/120702qdjyb7hvyj7bsrj7.jpg)

下面我将展示更改 MAC 地址(也称为欺骗/伪造 MAC 地址)的步骤。

### 在 Linux 中更改 MAC 地址

让我们一步步来:

#### 查找你的 MAC 地址和网络接口

先来了解一些[关于 Linux 中网卡的细节][3]。使用此命令获取网络接口的详细信息:

```
ip link show
```

在输出中,你将看到一些详细信息以及 MAC 地址:

```
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eno1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN mode DEFAULT group default qlen 1000
link/ether 94:c6:f8:a7:d7:30 brd ff:ff:ff:ff:ff:ff
3: enp0s31f6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DORMANT group default qlen 1000
link/ether 38:42:f8:8b:a7:68 brd ff:ff:ff:ff:ff:ff
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
link/ether 42:02:07:8f:a7:38 brd ff:ff:ff:ff:ff:ff
```

如你所见,在这里,我的网络接口名为 `enp0s31f6`,MAC 地址为 `38:42:f8:8b:a7:68`。

你可能需要把它记录在安全的地方,以便稍后还原到该原始 MAC 地址。

现在你可以继续更改 MAC 地址了。

注意!

如果在当前正在使用的网络接口上执行此操作,可能会中断你的网络连接。因此,请在其他网卡上尝试此方法,或者做好重启网络的准备。

#### 方法 1:使用 Macchanger 更改 MAC 地址

![][4]

[Macchanger][5] 是一个用于查看、修改和操作网卡 MAC 地址的简单程序。它几乎在所有 GNU/Linux 发行版中都可用,你可以使用发行版的包管理器进行安装。

在 Arch Linux 或 Manjaro 上:

```
sudo pacman -S macchanger
```

在 Fedora、CentOS 和 RHEL 上:

```
sudo dnf install macchanger
```

在 Debian、Ubuntu、Linux Mint、Kali Linux 上:

```
sudo apt install macchanger
```

**重要!** 安装时系统会要求你选择是否将 `macchanger` 设置为在每次启用或禁用网络设备时自动运行。如果选择自动运行,那么每当你接上网线或重启 WiFi 时,它都会生成一个新的 MAC 地址。

![Not a good idea to run it automatically][6]

我建议不要自动运行它,除非你确实需要每次都更改 MAC 地址。因此,选择“No”(按 `Tab` 键),然后按回车键继续。

**如何使用 Macchanger 更改 MAC 地址**

你还记得你的网络接口名称吗?你在前面的步骤中已经获得了它。

现在,要为该网卡分配一个随机的 MAC 地址,请使用:

```
sudo macchanger -r enp0s31f6
```

更改 MAC 后,使用以下命令进行验证:

```
ip addr
```

现在你将看到 MAC 已经被伪造。

要将 MAC 地址更改为特定值,请使用以下命令指定自定义 MAC 地址(注意需要在末尾给出网卡名称):

```
sudo macchanger --mac=XX:XX:XX:XX:XX:XX enp0s31f6
```

其中 `XX:XX:XX:XX:XX:XX` 是你想要更改成的新 MAC。
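顺便一提,随机 MAC 并不是任意 48 位都合适:按照惯例,伪造的地址通常会置上第一个字节的“本地管理”位(0x02)并清除“组播”位(0x01),`macchanger -r` 生成的就是这类地址。下面用一个 POSIX shell 小脚本演示这一规则(`random_mac` 是为演示而虚构的函数名,并非 macchanger 的一部分):

```
#!/bin/sh
# 生成一个随机的"本地管理"单播 MAC 地址:
# 第一个字节置上 0x02(本地管理位)、清除 0x01(组播位)
random_mac() {
    # 从 /dev/urandom 取 6 个随机字节(十进制)
    set -- $(od -An -N6 -tu1 /dev/urandom)
    printf '%02x' $(( ($1 | 2) & 254 ))
    shift
    for b in "$@"; do
        printf ':%02x' "$b"
    done
    printf '\n'
}

random_mac
```

生成的地址可以直接传给 `sudo macchanger --mac=…`,或用于 `ip link set … address …`。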
最后,要将 MAC 地址恢复为其原始硬件值,请运行以下命令:

```
sudo macchanger -p enp0s31f6
```

不过,你也不必非这样做不可。重启系统后,更改将自动丢失,实际的 MAC 地址将再次恢复。

你可以随时查看手册页以获取更多详细信息。

#### 方法 2:使用 iproute2 更改 MAC 地址(中级知识)

我建议你使用 macchanger,但如果你不想使用它,也可以用另一种方法在 Linux 中更改 MAC 地址。

首先,使用以下命令关闭网卡:

```
sudo ip link set dev enp0s31f6 down
```

接下来,使用以下命令设置新的 MAC:

```
sudo ip link set dev enp0s31f6 address XX:XX:XX:XX:XX:XX
```

最后,使用以下命令重新打开网卡:

```
sudo ip link set dev enp0s31f6 up
```

现在,验证新的 MAC 地址:

```
ip link show enp0s31f6
```

就是这些了。你已经成功地在 Linux 中修改了 MAC 地址。敬请期待 It's FOSS 更多有关 Linux 教程和技巧的文章。

--------------------------------------------------------------------------------

via: https://itsfoss.com/change-mac-address-linux/

作者:[Dimitrios Savvopoulos][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/itsfoss/
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/MAC_address
[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/03/airport_wifi_meme.jpg?ssl=1
[3]: https://itsfoss.com/find-network-adapter-ubuntu-linux/
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/03/Change_MAC_Address_Linux.jpg?ssl=1
[5]: https://github.com/alobbs/macchanger
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/03/configuring_mcchanger.jpg?ssl=1
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/03/Dimitrios.jpg?ssl=1
@ -0,0 +1,98 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12017-1.html)
[#]: subject: (How to Install Netbeans on Ubuntu and Other Linux)
[#]: via: (https://itsfoss.com/install-netbeans-ubuntu/)
[#]: author: (Community https://itsfoss.com/author/itsfoss/)

如何在 Ubuntu 和其他 Linux 上安装 Netbeans
======

> 在本教程中,你将学习在 Ubuntu 和其他 Linux 发行版上安装 Netbeans IDE 的各种方法。

[NetBeans][1] 是一个具有良好跨平台支持的开源集成开发环境。此工具已被 Java 和 C/C++ 开发社区广泛认可。

这个开发环境相当灵活,你可以配置它以支持各种开发。实际上,你可以用它来开发 Web、桌面和移动应用,而无需离开此平台。这太神奇了,不是吗?除此之外,用户还可以添加对许多已知语言的支持,如 [PHP][2]、C、C++、HTML、[Ajax][3]、JavaScript、JSP、Ruby on Rails 等。

如果你正在寻找在 Linux 上安装 Netbeans 的方法,那么有以下几种。我编写本教程时主要针对的是 Ubuntu,但其中一些安装方法也适用于其他发行版。

* [使用 apt 在 Ubuntu 上安装 Netbeans][4]:适用于 Ubuntu 和基于 Ubuntu 的发行版,但通常**它是旧版的 Netbeans**
* [使用 Snap 在 Ubuntu 上安装 Netbeans][5]:适用于已启用 Snap 包支持的任何 Linux 发行版
* [使用 Flatpak 安装 Netbeans][6]:适用于支持 Flatpak 包的所有 Linux 发行版

### 使用 Apt 包管理器在 Ubuntu 上安装 Netbeans IDE

如果在 Ubuntu 软件中心搜索 Netbeans,你将找到两个版本的 Netbeans。其中 Apache Netbeans 是 Snap 版本,体积较大,但提供了最新的 Netbeans。

只需单击一下即可安装它,无需打开终端,这是最简单的方法。

![Apache Netbeans in Ubuntu Software Center][7]

你也可以选择使用 `apt` 命令,但使用 `apt` 时,你无法获得最新的 Netbeans。例如,在编写本教程时,Ubuntu 18.04 中 apt 提供的是 Netbeans 10,而 Snap 已是最新的 Netbeans 11。

如果你是 [apt 或 apt-get][8] 的粉丝,那么可以[启用 universe 仓库][9],并在终端中使用此命令安装 Netbeans:

```
sudo apt install netbeans
```

### 使用 Snap 在任何 Linux 发行版上安装 Netbeans IDE

![][10]

Snap 是一个通用包管理器,如果你的[发行版上启用了 Snap][11],那么可以使用以下命令安装 Netbeans:

```
sudo snap install netbeans --classic
```

此过程可能需要一些时间才能完成,因为总下载大小约为 1 GB。完成后,你将在应用程序启动器中看到它。

你不仅可以通过 Snap 获取最新的 Netbeans,而且已安装的版本还会自动更新到较新版本。

### 使用 Flatpak 安装 Netbeans

[Flatpak][12] 是另一个类似 Snap 的包安装器。某些发行版默认支持 Flatpak,在其他发行版上你可以[启用 Flatpak 支持][13]。

发行版支持 Flatpak 后,你可以使用以下命令安装 Netbeans:

```
flatpak install flathub org.apache.netbeans
```

另外,你也可以下载源代码并自己编译。

- [下载 Netbeans][14]

希望你通过上面的其中一种方法在你的 Ubuntu 上安装了 Netbeans。你使用的是哪种方法?有没有遇到问题?请让我们知道。

--------------------------------------------------------------------------------

via: https://itsfoss.com/install-netbeans-ubuntu/

作者:[Srimanta Koley][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/itsfoss/
[b]: https://github.com/lujun9972
[1]: https://netbeans.org/
[2]: https://www.php.net/
[3]: https://en.wikipedia.org/wiki/Ajax_(programming)
[4]: tmp.ZNFNEC210y#apt
[5]: tmp.ZNFNEC210y#snap
[6]: tmp.ZNFNEC210y#flatpak
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/03/apache-netbeans-ubuntu-software-center.jpg?ssl=1
[8]: https://itsfoss.com/apt-vs-apt-get-difference/
[9]: https://itsfoss.com/ubuntu-repositories/
[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/03/Install_Netbeans_Linux.jpg?ssl=1
[11]: https://itsfoss.com/install-snap-linux/
[12]: https://flatpak.org/
[13]: https://itsfoss.com/flatpak-guide/
[14]: https://netbeans.apache.org/download/index.html
[15]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/03/srimanta.jpg?ssl=1
@ -0,0 +1,141 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12014-1.html)
[#]: subject: (Top 10 open source tools for working from home)
[#]: via: (https://opensource.com/article/20/3/open-source-working-home)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

在家工作的十大开源工具
======

> 无论你是在家工作的资深人士还是远程工作的新手,这些工具都可以使交流和协作变得轻而易举。

![](https://img.linux.net.cn/data/attachment/album/202003/20/103814bwxxqxkxc9qqxkbb.jpg)

如果你<ruby>在家工作<rt>work from home</rt></ruby>(WFH),你就会知道拥有一系列实用的工具是很重要的,这些工具可以让你远离烦恼,专注于重要的事情。工作期间效率越高,工作日结束后,你就越容易放松。

我已经在家工作多年了,以下是我精选的远程工作者必备的最佳开源工具。

### 视频会议:Jitsi

![Jitsi screenshot][2]

当你不在同事身边时,每周保持连线几次非常重要,哪怕只是为了维持人与人之间的联系,否则你就会变得孤独。市场上有很多视频会议系统,但以我的经验,最简单、最好的是开源的 [Jitsi][3]。

通过易于记忆的 URL([meet.jit.si][3])和按需会议室,Jitsi 使得召开即席会议非常简单。而且更好的是,无需注册。你只需进入 [meet.jit.si][3],找到一个带有友好的、随机生成的 URL(不是字母和数字的随机组合)的会议室,就可以立即开始聊天。如果你选择注册,那么还可以与几种日历服务进行集成。

在现实生活中,我参加过许多来自新西兰最偏远地区的会议,而 Jitsi 无疑是我迄今为止最好的视频聊天体验。不需要浪费一半的会议时间在迷宫般的虚拟会议室中寻找彼此,也不用在尴尬的延迟中干坐着发呆,更不用费力地为聊天应用安装更新。使用开源且符合标准的 webRTC 协议的 Jitsi,可以让你有愉悦的开会体验。

### 白板:Drawpile

![Drawpile screenshot][4]

有时,白板非常适合用来解释事情、跟踪想法,或者只是涂抹一下疯狂的念头。白板是办公室会议室的常见物品,但在数字世界中却很难得。幸运的是,有 [Drawpile][5],这是一个实时协作的绘图应用。你可以在自己的计算机上托管绘图会话并邀请其他用户,也可以在 Drawpile 的服务器上托管会话。

它易于使用,足够精简,同时直观而强大。当你的粗略想法开始逐渐成型时,正是这个应用让它变成可行的作品。

### 看板:Taiga

![Taiga screenshot][6]

想要保持有序并与你的部门保持同步吗?你应该试试 [Taiga][7],这是一个虚拟的“便利贴”面板,可以帮助团队跟踪各个任务。这种组织和项目计划方法被称为<ruby>看板<rt>kanban</rt></ruby>,它在软件开发中很流行,也适用于从假期规划到家庭装修项目的各种计划。

Taiga 的优点是它是一个在线共享空间。与你协作的任何人都可以把任务放到面板上,并且随着工作的进展,把任务从左列(起点)移到右列(终点)。Taiga 图形化且交互性强,没有什么比把任务从一列拖放到另一列更令人舒心的了。

如果你的团队有 Taiga 无法满足的特定需求,那么你应该看看[我们挑选的最佳开源看板][8]。

### 个人笔记本:Joplin

![Joplin][9]

我在办公桌旁放着一个纸质笔记本,这样我就可以随时记下想法。想要在数字世界中复刻这种简单动作的感受和便利是很难的,但 Joplin 做得很好。

你可以在 Joplin 中创建虚拟笔记本,每个笔记本可以有任意数量的条目。这些条目可以是简单的文本,也可以是带有图形、任务列表、超链接等的复杂动态文档。最重要的是,你可以将 Joplin 与各种在线存储服务同步,包括开源的 Nextcloud,这样你就可以在任何计算机和任何设备上使用笔记本。这是使你的工作日井井有条、专心致志并保持流畅的好方法。

如果 Joplin 不太满足你的要求,请查看我们最喜欢的一些[笔记应用][10]。

### 群聊:Riot

![Riot screenshot][11]

并非所有事情都需要视频聊天或语音通话,但有些事情又比电子邮件更紧急。这就是团队聊天发挥作用的地方。一个好的群聊应用应该具有这些功能:即时消息传递、支持表情符号、支持 GIF 和图片、按需创建的聊天室或“频道”、广泛的兼容性和隐私保护。[Matrix][12] 是一个用于实时通信的开放标准和轻量级协议,如果你厌倦了键入大段消息,还可以用同一协议快速切换到 VOIP。你将获得世界上最好的群聊体验。

Matrix 是一种协议,有许多应用程序可以接入它(就像对于互联网协议来说,Firefox 是让人可以访问它的应用程序一样)。最受欢迎的客户端之一是 [Riot.im][13]。你可以为你的计算机和手机下载 Riot;如果只是短时间使用,也可以通过 Web 浏览器连接到 Riot。你的团队总是近在咫尺,但永远不会近到让你感到不适。

### 共享文档:Etherpad

![Etherpad screenshot][14]

如果你想与他人协作处理文档,Etherpad 就够用了。Etherpad 是一个实时共享的文字处理器:你可以邀请一个或多个人访问文档,并在每个人进行添加和编辑时实时看到变化。这是一种快速有效的方法,可以将想法写到“纸”上并一起迭代修订。

有几种使用 Etherpad 的方法。如果你拥有良好的 IT 支持,可以请 IT 部门为你的组织托管一个 Etherpad 实例。否则,也有来自开源支持者的公共在线实例,例如 [Riseup][15] 和 [Etherpad][16] 官方提供的实例。

### 共享电子表格:Ethercalc

![Ethercalc screenshot][17]

与 Etherpad 相似,在线 [Ethercalc][18] 编辑器允许多个用户同时远程地在同一张电子表格上工作。Ethercalc 甚至可以从现有电子表格和分隔符文本文件中导入数据。取决于要导入内容的复杂程度,你可能会丢失部分格式,但我从来没有丢失过数据,因此导入文件总是一个好的开始。下次需要复杂公式的帮助时,或者需要在最新预算中录入收据时,或者只是需要别人在表格中填写内容时,请把它放进 Ethercalc。

### 共享存储与日历:Nextcloud

![Nextcloud screenshot][19]

[Nextcloud][20] 是一个志向远大的应用。顾名思义,它是你自己的个人云。它最明显的切入点是在线共享存储,可以与台式机和移动设备上的文件夹同步。将文件放入文件夹中,文件就会上传到存储空间;当一切同步完成后,它们会出现在你的所有设备上。为组织中的每个人提供一个帐户,你马上便拥有了共享存储空间,只需单击鼠标即可分享带密码或不带密码的文件和文件夹。

但是,除了充当共享数据的保管箱之外,Nextcloud 还有很多其他功能。由于其插件式结构,你可以将无数 Web 应用安装到 Nextcloud 中,例如聊天客户端、电子邮件客户端、视频聊天等等。并非所有插件都是“官方的”,因此其支持程度各不相同,但有几个非常好的官方插件。值得注意的是,有一个官方的日历应用,你和你的同事可以用它安排会议并跟踪即将发生的重要事件。该日历使用 CalDAV 协议,因此你所做的一切都与任何 CalDAV 客户端兼容。

### LibreOffice

![LibreOffice screenshot][21]

如果你习惯于每天一整天都在办公室里工作,那么你也可能习惯整天在办公套件里工作。包含所有常用功能的面面俱到的应用会令人感到某种程度的舒适,而开源办公世界中的这样的应用就是 [LibreOffice][22]。它具有办公套件应有的一切:文字处理器、电子表格和幻灯片演示。它还具有超出预期的功能,例如基于矢量的绘图应用(它还可以编辑 PDF 文件),以及一个带有图形界面构建器的关系型数据库。如果你正在寻找一个好的办公应用,那么 LibreOffice 是你应该首先看一看的,因为一旦用上它,你就再也不用看别的了。

### Linux

![][23]

如果你还不熟悉远程工作,那么你可能正因为某种原因经历一场重大变革。对于某些人来说,变革的时刻是一个极好的契机,可以一劳永逸地改变一切。如果你是其中的一员,那么也许是时候更换整个操作系统了。Windows 和 Mac 可能在过去为你提供了很好的服务,但如果你希望从非开源软件转向开源软件,为什么不换一下运行所有这些应用的平台呢?

有许多出色的 Linux 发行版可以让你认真地工作、认真地自我管理和认真地进阶。获取一份 Linux,不论是 [Fedora][24]、[Elementary][25] 还是 [Red Hat Enterprise Linux][26] 的长期支持订阅,去尝试自由的开源操作系统吧。等你适应了远程工作者的生活时,你也将成为一名 Linux 专家!

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/3/open-source-working-home

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/oatmeal-and-fedora.jpg?itok=NBFUH9eF (Oatmeal and a laptop.)
[2]: https://opensource.com/sites/default/files/uploads/jitsi_0.jpg (Jitsi screenshot)
[3]: http://meet.jit.si
[4]: https://opensource.com/sites/default/files/uploads/drawpile-whiteboard.jpg (Drawpile screenshot)
[5]: https://drawpile.net/
[6]: https://opensource.com/sites/default/files/uploads/taiga_kanban_screen_0.jpg (Taiga screenshot)
[7]: http://taiga.io
[8]: https://opensource.com/alternatives/trello
[9]: https://opensource.com/sites/default/files/joplin_0.png (Joplin)
[10]: https://opensource.com/alternatives/evernote
[11]: https://opensource.com/sites/default/files/uploads/riot-matrix.jpg (Riot screenshot)
[12]: http://matrix.org
[13]: http://riot.im
[14]: https://opensource.com/sites/default/files/uploads/etherpad.jpg (Etherpad screenshot)
[15]: https://pad.riseup.net/
[16]: https://beta.etherpad.org
[17]: https://opensource.com/sites/default/files/uploads/ethercalc.jpg (Ethercalc screenshot)
[18]: https://ethercalc.org
[19]: https://opensource.com/sites/default/files/uploads/nextcloud-calendar.jpg (Nextcloud screenshot)
[20]: http://nextcloud.com
[21]: https://opensource.com/sites/default/files/uploads/libreoffice.png (LibreOffice screenshot)
[22]: http://libreoffice.org
[23]: https://opensource.com/sites/default/files/uploads/advent-pantheon.jpg
[24]: https://getfedora.org/
[25]: https://elementary.io
[26]: https://www.redhat.com/en/store/red-hat-enterprise-linux-workstation
@ -0,0 +1,106 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12029-1.html)
[#]: subject: (Linus Torvalds’ Advice on Working From Home during Coronavirus Lockdown)
[#]: via: (https://itsfoss.com/torvalds-remote-work-advice/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

Linus Torvalds 关于在冠状病毒禁足期间在家工作的建议
======

在冠状病毒爆发期间,我们中的许多人都在居家自我隔离。[ZDNet][1] 特此与 Linus Torvalds 进行了专题采访,讨论了他对冠状病毒禁足期间在家工作的看法。

如果你还不知道(怎么可能不知道),[Linus Torvalds][2] 是 Linux 的创建者,也是 [Git][3] 的创建者,而所有这一切都是他在家里工作时完成的。下面是 2016 年的一个视频,Torvalds 在其中展示了他的家庭办公室:

- [视频](https://img.linux.net.cn/static/video/Linus%20Torvalds%20Guided%20Tour%20of%20His%20Home%20Office-SOXeXauRAm0.mp4)

因此,在本文中,我将分享 Linus Torvalds 在接受 ZDNet 的 [Steven J. Vaughan-Nichols][4] 采访时的一些主要观点和回应。

### 消除对缺乏人际交往的恐惧

Linus 提到,几年前刚开始在家工作时,他曾担心缺少人与人之间的互动,包括去办公室、与人交流,或者哪怕只是出去吃个午餐。

有趣的是,他似乎并没有错过任何东西,他反而更喜欢在家中没有人际交往的时间。

当然,把自己与人际互动隔离开并不是最好的事情,但就目前而言,这似乎是一件好事。

### 利用在家工作的优势

![][5]

就像完全远程办公的团队一样,有很多事情无需真的待在办公室就可以完成。

不要忘记,你可以随心所欲地养猫。Linus 有 6 只猫,他知道这有多难以抗拒(*哈哈*)。

而且,正如 Linus 所提到的,远程工作的真正优势在于“灵活性”。你不一定需要朝九晚五甚至更长时间地坐在办公桌前。从技术上讲,你可以自由地在工作中休息,在家中做你想做的任何事情。

换句话说,Linus 建议**不要在家里重新搞一个办公室**,那比去办公室还要糟。

### 高效沟通是关键

![][6]

虽然你可以在一天之中召开几次会议(视频会议或语音通话),但这真的有必要吗?

对于某些人来说,这可能很重要,但你应该通过精简和整理内容来尽量减少会议花费的时间。

或者,按照 Linus 的建议,最好用电子邮件列表来记录事情,以确保一切各司其职,这就是 [Linux 内核][7]的运作方式。

James Bottomley 是 [IBM 研究院][8]的杰出工程师,也是资深 Linux 内核开发人员,他还建议你重读自己写下的文字,以确保要传达的信息不会被人不小心错过。

就个人而言,出于同样的原因,我更喜欢文字而不是语音,它实际上可以节省你的时间。

但是,请记住,你只需以适当的方式传达必要的信息,而不要让通过文本/电子邮件发送的信息过载。

### 追踪你的时间

灵活性并不意味着你可以减少工作量、泡在社交媒体平台上,除非那就是你的工作。

因此,你需要确保充分利用自己的时间。为此,你可以使用多种工具来跟踪你的时间花在了什么地方,以及在计算机上花费了多少时间。

你甚至可以将其记录在便签上,以确保将时间高效地分配于工作。你可以选择使用 [RescueTime][9] 或 [ActivityWatch][10] 来跟踪你在计算机或智能手机上花费的时间。
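如果你更喜欢“记在便签上”这种低技术方案,也可以用一个极简的 shell 函数把时间记录到纯文本文件里。下面只是一个思路示意,`log_task` 和日志文件路径都是为演示而虚构的:

```
#!/bin/sh
# 把"此刻在做什么"连同时间戳追加到一个纯文本时间日志中,
# 每行形如:2020-03-23 10:15 写周报
log_task() {
    logfile=${WORKLOG:-"$HOME/worklog.txt"}
    printf '%s %s\n' "$(date '+%Y-%m-%d %H:%M')" "$*" >> "$logfile"
}

# 用法示例:
#   log_task 写周报
#   log_task 休息,撸猫
# 之后可以用 grep/wc 做简单统计,比如:grep -c 休息 ~/worklog.txt
```

相比专门的时间跟踪工具,这种做法的好处是日志就是普通文本,随手就能检索和归档。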
### 和猫(宠物)一起玩

![][11]

不是歧视其他宠物,只是 Linus Torvalds 提到的是猫。

正因为你在家中,在安排工作或尝试高效利用时间之余,你可以做的事情有很多。

Linus 认为,每当你感到无聊时,可以在必要时出门采购必需品,也可以与猫(或你的其它宠物)一起玩。

### 结语

虽然 Linus 还提到当你在家时没人会对你评头论足,但他的建议看起来是中肯的,对于在家工作的人来说可能很有用。

不仅是在冠状病毒爆发期间,如果你打算长期在家工作,也应该牢记这些。

你如何看待 Linus 的观点?你同意他吗?

--------------------------------------------------------------------------------

via: https://itsfoss.com/torvalds-remote-work-advice/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://www.zdnet.com/article/pet-the-cat-own-the-bathrobe-linus-torvalds-on-working-from-home/
[2]: https://en.wikipedia.org/wiki/Linus_Torvalds
[3]: https://git-scm.com/
[4]: https://twitter.com/sjvn
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/03/Work-from-Home-torvalds.jpg?ssl=1
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/03/torvalds-home-office.jpg?ssl=1
[7]: https://en.wikipedia.org/wiki/Linux_kernel
[8]: https://www.research.ibm.com/
[9]: https://www.rescuetime.com/
[10]: https://activitywatch.net/
[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/03/torvalds-penguins.jpeg?ssl=1
@ -1,74 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Containers vs. VMs, Istio in production, and more industry news)
[#]: via: (https://opensource.com/article/20/3/survey-istio-industry-news)
[#]: author: (Tim Hildred https://opensource.com/users/thildred)

Containers vs. VMs, Istio in production, and more industry news
======

A weekly look at open source community and industry trends.

![Person standing in front of a giant computer screen with numbers, data][1]

As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.

## [Tech adoption in the cloud native world: Containers and more][2]

> * Adoption of containers in production rose from 73% in 2018 to 84% in 2019. Among this group, those running at least 250 containers rose from 46% in 2018 to 58% in 2019. From 2017 to 2019, the number of respondents with more than 50 machines (physical or virtual) in their fleet rose from 77% to 81%.
> * Implication: Container adoption appears to have mitigated the growth of VMs that need to be managed. However, be wary of claims that the raw number of machines being managed will decline.
>

**The impact**: It intuitively makes sense that virtual machine growth would slow down as container use grows; there are lots of containers being deployed inside VMs to take advantage of the best features of both, and lots of apps that won't be containerized any time soon (looking at you, legacy enterprise monoliths).

## [Everything we learned running Istio in production][3]

> At HelloFresh we organize our teams into squads and tribes. Each tribe has their own Kubernetes namespace. As mentioned above, we enabled sidecar injection namespace by namespace, then application by application. Before enabling applications for Istio we held workshops so that squads understood the changes happening to their application. Since we employ the model of “you build it, you own it”, this allows teams to understand traffic flows when troubleshooting. Not only that, it also raised the knowledge bar within the company. We also created Istio-related [OKR’s][4] to track our progress and reach our Istio adoption goals.
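
To make the namespace-by-namespace rollout described above concrete: in Istio, per-namespace sidecar injection is normally switched on by labeling the namespace itself with `istio-injection=enabled`. This is the standard Istio mechanism, not HelloFresh's exact configuration, and the namespace name below is hypothetical:

```yaml
# Hypothetical namespace manifest. The istio-injection=enabled label
# tells Istio's mutating admission webhook to inject the Envoy sidecar
# into every pod subsequently created in this namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: squad-checkout   # hypothetical squad/tribe namespace
  labels:
    istio-injection: enabled
```

Applying a manifest like this (or running `kubectl label namespace squad-checkout istio-injection=enabled`) lets teams opt in one namespace at a time, which is what makes a gradual rollout like the one in the quote possible.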

**The impact**: The parts of technology adoption that aren't technology adoption are ignored at your own peril.

## [Aether: the first open source edge cloud platform][5]

> Aether is bringing together projects that have been under development and operating in their own sandbox, and under that framework ONF is trying to support a diversity of edge services on a converged platform, Sloane explained. ONF’s various projects will remain separate and continue to be consumable separately, but Aether is its attempt to bring multiple capabilities together to simplify private edge cloud operations for enterprises.
>
> "We think we’re creating a new collaborative place where the industry and community can come together to help drive some maybe consolidation and critical mass behind a common platform that can then help common functionality proliferate in these edge clouds," he said.

**The impact**: The problems being solved with technology today are too complex to be solved with a single technology. The business problems being solved on top of that require focus on what is truly value-adding. Taken together, businesses need to find ways to collaborate on their shared needs and compete on what makes them unique in the market. You couldn't find a better way to do that than open source.

## [Women in cloud careers are challenging the narrative][6]

> "As cloud is a relatively new technology, my experience of [being a 'woman in tech'][7] may not be typical, as the cloud industry is extremely diverse," Yordanova says. "In fact, my team has an equal gender split with a real mix of personalities, cultures and strengths from people who grew up with this technology."

**The impact**: One thing I like to think about is the idea of leapfrogging: that you might be able to skip a certain step or stage in a process because the circumstance that caused its existence in the first place no longer applies. The cloud era didn't have as long a period with static stereotypes of who made it and who it was for, so maybe it carries less of the baggage of some previous generations of technology?

## [How StarlingX shines in the starry sky of open source projects in China][8]

> Our team is in China, so one of our missions is to help the Chinese community to develop the software, contribute code, documentation, and more. Most of the StarlingX project meetings are held late at night in China, so presence and participation are quite challenging for Chinese community members. To overcome these obstacles, together with other community members (like friends at 99cloud) in China, we made some initiatives, such as engaging with other Chinese community members at meet-ups, hands-on workshops, and ad-hoc tech meetings in Chinese, translating some documents to Chinese, and continuously interacting in WeChat groups (just like a 24/7 on-call service for and by everyone).

**The impact**: As Chinese contributions to open source projects continue to grow, this seems like a situation that is likely to reverse, or at least equalize. It doesn't really make sense that "learn English" should be a prerequisite to participating in the open source development process.

_I hope you enjoyed this list and come back next week for more open source community, market, and industry trends._

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/3/survey-istio-industry-news

作者:[Tim Hildred][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/thildred
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://thenewstack.io/cncf-survey-snapshot-tech-adoption-in-the-cloud-native-world/
[3]: https://engineering.hellofresh.com/everything-we-learned-running-istio-in-production-part-1-51efec69df65
[4]: https://en.wikipedia.org/wiki/OKR
[5]: https://www.sdxcentral.com/articles/news/onf-projects-coalesce-for-enterprise-edge-cloud/2020/03/
[6]: https://www.cloudpro.co.uk/leadership/cloud-essentials/8446/how-women-in-cloud-are-challenging-the-narrative
[7]: https://www.itpro.co.uk/business-strategy/33301/diversity-not-a-company-priority-claim-nearly-half-of-women-in-tech
[8]: https://superuser.openstack.org/articles/starlingx-community-interview-how-starlingx-shines-in-the-starry-sky-of-open-source-projects-in-china/
@ -1,116 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (GNOME 3.36 Released With Visual & Performance Improvements)
[#]: via: (https://itsfoss.com/gnome-3-36-release/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

GNOME 3.36 Released With Visual & Performance Improvements
======

The latest version, GNOME 3.36, codenamed “Gresik”, has finally landed six months after the [GNOME 3.34][1] release.

Not just limited to feature additions, GNOME 3.36 also improves on a lot of things that needed it.

In this article, I’ll highlight the key changes in GNOME’s new release.

### GNOME 3.36 Key Improvements

If you want a quick look at what has changed, you can take a look at the official video below:

[Subscribe to our YouTube channel for Linux videos][2]

Now, let me highlight the changes in the latest release separately:

#### GNOME Shell Extensions App

You can now easily manage your GNOME Shell extensions right through a dedicated “Extensions” app.

![][3]

So, you can update, configure, delete, or disable existing extensions using the app.

#### Do Not Disturb Toggle

![][4]

You might have already noticed this on Pop!_OS or other Linux distros.

However, GNOME 3.36 now implements a DND toggle in the notification pop-over area out of the box. When it is enabled, you won’t be notified of anything until you turn it off.

#### Lock Screen Improvements

![][5]

With the new version, the lock screen won’t need an additional slide (or animation) before you enter your credentials. Instead, you will be greeted directly by the login screen.

Also, to improve consistency, the background image of the lock screen will be a blurred-out version of your wallpaper.

So, overall, the improvements to the lock screen aim to make it easier to access while improving its look and feel.

#### Visual Changes

![][6]

Along with the obvious new additions, there are several design changes that improve the overall look and feel of GNOME 3.36.

Ranging from the icon redesign to the folders and system dialogs, a lot of minor improvements are in place to enhance the user experience in GNOME 3.36.

Also, the Settings app has been tweaked to make options easier to access, along with minor interface redesigns.

#### Major Performance Improvement

GNOME claims that this update also brings performance improvements to the GNOME desktop.

You should notice a difference in performance when using a distribution with GNOME 3.36 on board. Be it an animation, a redesign, or a minor tweak, everything done for GNOME 3.36 positively impacts performance for users.

#### Other Changes

In addition to the key changes mentioned above, there are a bunch of other improvements, like:

* Clock redesign
* User documentation update
* GNOME Setup assistant improvements

And a lot of other stuff here and there. You can take a look at the [official release notes][7] to learn more.

### How to get GNOME 3.36?

Even though GNOME 3.36 has been officially released, it will take a while for Linux distributions to let you easily upgrade your GNOME experience.

The [Ubuntu 20.04 LTS][8] release will include the latest version out of the box, so you can wait for it.

Other [popular Linux distributions][9] like Fedora, openSUSE, and Pop!_OS should include GNOME 3.36 soon enough. [Arch Linux][10] has already upgraded to GNOME 3.36.

I’d advise you to wait until you get an update for your distribution. Nevertheless, you can take a look at the [source code][11] or try the upcoming development editions of popular distros that may already feature GNOME 3.36.

What do you think about GNOME 3.36? Let me know your thoughts in the comments below.

--------------------------------------------------------------------------------

via: https://itsfoss.com/gnome-3-36-release/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/gnome-3-34-release/
[2]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/03/extensions-gnome.jpg?ssl=1
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/03/dnd-gnome.jpg?ssl=1
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/03/gnome-lockscreen.jpg?ssl=1
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/03/gnome-settings.jpg?ssl=1
[7]: https://help.gnome.org/misc/release-notes/3.36/
[8]: https://itsfoss.com/ubuntu-20-04-release-features/
[9]: https://itsfoss.com/best-linux-distributions/
[10]: https://www.archlinux.org/
[11]: https://gitlab.gnome.org/GNOME
@ -0,0 +1,82 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Linux Foundation prepares for disaster, new anti-tracking data set, Mozilla goes back to mobile OSes, and more open source news)
[#]: via: (https://opensource.com/article/20/3/news-march-14)
[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt)

Linux Foundation prepares for disaster, new anti-tracking data set, Mozilla goes back to mobile OSes, and more open source news
======

Catch up on the biggest open source headlines from the past two weeks.

![][1]

In this edition of our open source news roundup, we take a look at the Linux Foundation's disaster relief project, DuckDuckGo's anti-tracking tool, open textbooks, and more!

### Linux Foundation unveils Project OWL

When a disaster happens, it's vital to keep communications links up and running. One way to do that is with [mesh networks][2]. The Linux Foundation [has unveiled][3] Project OWL to "help build mesh network nodes for global emergency communications networks."

Short for _Organisation, Whereabouts, and Logistics_, OWL is firmware for Internet of Things (IoT) devices that "can quickly turn a cheap wireless device into a ‘DuckLink’, a mesh network node". Those devices can connect to other, similar devices around them. OWL also provides an analytics tool that responders can use for "coordinating resources, learning about weather patterns, and communicating with civilians who would otherwise be cut off."

### New open source tool to block web trackers

It's no secret that sites all over the web track their visitors. Often, it's shocking how much of that goes on and what a threat to your privacy it is. To help web browser developers better protect their users, the team behind the search engine DuckDuckGo is "[sharing data it's collected about online trackers with other companies so they can also protect your privacy][4]."

That dataset is called Tracker Radar, and it "details 5,326 internet domains used by 1,727 companies and organizations that track you online". Tracker Radar is different from other tracker databases in that it "annotates data with other information, like whether blocking a tracker is likely to break a website, so anyone using it can pick the best balance of privacy and convenience."

Tracker Radar's dataset is [available on GitHub][5]. The repository also links to the code for the [crawler][6] and [detector][7] that work with the data.
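
To give a rough sense of how a consumer might use such a dataset, here is a small Python sketch. The field names (`prevalence`, `breakage_risk`) are illustrative rather than the real Tracker Radar schema, but the trade-off the sketch encodes (block prevalent trackers unless blocking risks breaking sites) is the one described above:

```python
import json

# Hypothetical Tracker Radar-style entry. Real entries live under
# domains/ in the duckduckgo/tracker-radar repository and carry a
# richer schema than this simplified sketch.
entry_json = """
{
  "domain": "tracker.example",
  "owner": {"name": "Example Analytics Inc."},
  "prevalence": 0.18,
  "breakage_risk": "low"
}
"""

def should_block(entry: dict, max_risk: str = "low") -> bool:
    """Block a tracker seen on more than 1% of sites, unless risky."""
    risk_rank = {"low": 0, "medium": 1, "high": 2}
    return (entry["prevalence"] > 0.01
            and risk_rank[entry["breakage_risk"]] <= risk_rank[max_risk])

entry = json.loads(entry_json)
print(should_block(entry))  # prevalent and low-risk, so True
```

A browser vendor could tune `max_risk` to pick its own balance of privacy and convenience, which is exactly the flexibility the breakage annotations are meant to provide.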

### Oregon Tech embracing open textbooks

With the cost of textbooks taking an increasingly large bite out of university students' budgets, more and more schools are turning to open textbooks to cut those costs. By embracing open textbooks, the Oregon Institute of Technology has [saved students $400,000][8] over the last two years.

The school offers open textbooks for 26 courses, ranging "from chemistry and biology, to respiratory care, sociology and engineering." Although the textbooks are free, university librarian John Schoppert points out that the materials are of high quality and that faculty members have been "developing lab manuals and open-licensed textbooks where they hadn’t existed before and improved on others’ materials."

### Mozilla to help update feature phone OS

A few years ago, Mozilla tried to break into the world of mobile operating systems with Firefox OS. While that effort didn't pan out, Firefox OS found new life powering low-cost feature phones under the name KaiOS. Mozilla is [jumping back into the game][9] by helping "modernize the browser engine that's core to the software."

KaiOS is built upon a four-year-old version of Mozilla's Gecko browser engine. Updating Gecko will "improve security, make apps run faster and more smoothly, and open [KaiOS to] more-sophisticated apps and WebGL 2.0 for better games graphics." Mozilla said its collaboration will include "Mozilla's help with test engineering and adding new low-level Gecko abilities."

#### In other news

* [CERN adopts Mattermost, an open source messaging app][10]
* [Open-source software analyzes economics of biofuels, bioproducts][11]
* [Netflix releases Dispatch for crisis management orchestration][12]
* [FreeNAS and TrueNAS are merging][13]
* [Smithsonian 3D Scans NASA Space Shuttle Discovery And Makes It Open Source][14]

Thanks, as always, to Opensource.com staff members and [Correspondents][15] for their help this week.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/3/news-march-14

作者:[Scott Nesbitt][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/scottnesbitt
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/weekly_news_roundup_tv.png?itok=tibLvjBd
[2]: https://en.wikipedia.org/wiki/Mesh_networking
[3]: https://www.smartcitiesworld.net/news/news/linux-announces-open-source-project-to-aid-disaster-relief-5102
[4]: https://www.cnet.com/news/privacy-focused-duckduckgo-launches-new-effort-to-block-online-tracking/
[5]: https://github.com/duckduckgo/tracker-radar
[6]: https://github.com/duckduckgo/tracker-radar-collector
[7]: https://github.com/duckduckgo/tracker-radar-detector
[8]: https://www.heraldandnews.com/news/local_news/oregon-tech-turns-to-open-source-materials-to-save-students/article_ba641e79-3034-5b9a-a8f7-b5872ddc998e.html
[9]: https://www.cnet.com/news/mozilla-helps-modernize-feature-phones-powered-by-firefox-tech/
[10]: https://joinup.ec.europa.eu/collection/open-source-observatory-osor/news/cern-uses-mattermost
[11]: http://www.biomassmagazine.com/articles/16848/open-source-software-analyzes-economics-of-biofuels-bioproducts
[12]: https://jaxenter.com/netflix-dispatch-crisis-management-orchestration-169381.html
[13]: https://liliputing.com/2020/03/freenas-and-turenas-are-merging-open-source-operating-systems-for-network-attached-storage.html
[14]: https://www.forbes.com/sites/tjmccue/2020/03/04/smithsonian-3d-scans-the-nasa-space-shuttle-discovery-and-makes-it-open-source/#39aa0f243ecd
[15]: https://opensource.com/correspondent-program
@ -0,0 +1,94 @@

[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Fedora 32 Release Date, New Features and Everything Else)
[#]: via: (https://itsfoss.com/fedora-32/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

Fedora 32 Release Date, New Features and Everything Else
======

Fedora 32 should be releasing at the end of April, around the same time as the [Ubuntu 20.04 LTS release][1].

Since we are covering the Ubuntu 20.04 release in detail, we thought of doing the same for our Fedora fans here.

In this article, I am going to highlight the new features coming to Fedora 32. I’ll update this article as the development progresses further.

### New features in Fedora 32

![][2]

#### EarlyOOM Enabled

With this release, [EarlyOOM][3] comes enabled by default. To give you some background, EarlyOOM lets users easily recover their systems from a low-memory situation with heavy [swap][4] usage.

It is worth noting that this applies to the Fedora 32 Beta Workstation edition.
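
The policy EarlyOOM implements is easy to sketch: watch free memory and free swap, and once both fall below a threshold, terminate the process using the most memory, before the kernel's far less responsive OOM killer has to step in. A toy illustration of that decision logic (not earlyoom's actual code, which reads `/proc` and sends SIGTERM/SIGKILL):

```python
# Toy sketch of an EarlyOOM-style policy. Memory figures are invented;
# real earlyoom polls /proc/meminfo and signals the chosen process.

def pick_victim(procs, mem_free_pct, swap_free_pct, threshold_pct=10):
    """Return the biggest process's name only if memory AND swap are low."""
    if mem_free_pct > threshold_pct or swap_free_pct > threshold_pct:
        return None  # still enough headroom somewhere; do nothing
    return max(procs, key=procs.get)  # largest resident-memory user

procs = {"firefox": 4200, "gnome-shell": 900, "bash": 12}  # MiB, made up
print(pick_victim(procs, mem_free_pct=4, swap_free_pct=6))  # prints firefox
```

Acting this early is what keeps the desktop responsive: the kernel's own OOM killer only fires after the system has already ground to a halt thrashing swap.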

#### GNOME 3.36 Added

The new Fedora 32 Workstation also includes the new [GNOME 3.36][5].

It is not just limited to Fedora 32 Beta Workstation – GNOME 3.36 has also been added to the daily builds of [Ubuntu 20.04 LTS][1].

Of course, the improvements in GNOME 3.36 translate to Fedora’s latest release as well – providing a faster and better experience overall.

So, you’ll get the new lock screen, the Do Not Disturb feature, and everything else that comes with GNOME 3.36.

#### Package Updates

The Fedora 32 release also updates a lot of important packages, including Ruby, Perl, and Python. It also features the latest version 10 of the [GNU Compiler Collection][6] (GCC).

#### Other Changes

In addition to the key highlights, a lot of things have changed, improved, or been fixed. You can take a detailed look at the [changelog][7] to know more about what has changed.

### Download Fedora 32 (development version)

Fedora 32 is still under development. The beta version has been released, and you may test it on a spare system or in a virtual machine. **I would not advise you to use it on your main system before the final release.** There’s an official [list of known bugs][8] for the current release that you can refer to as well.

In the [official announcement][9], they mentioned the availability of both the **Fedora 32 Beta Workstation** and **Server** editions, along with other popular variants.

To get the Workstation or Server edition, visit the official download page for [Fedora Workstation][10] or [Fedora Server][11] (depending on what you want).

![Fedora Download Beta][12]

Once you do that, just look for a release tagged “**Beta!**”, as shown in the image above, and start downloading it. For other variants, click the links below to head to their respective download pages:

* [Fedora 32 Beta Spins][13]
* [Fedora 32 Beta Labs][14]
* [Fedora 32 Beta ARM][15]

Have you noticed any other new features in Fedora 32? What features would you like to see? Feel free to leave a comment below.

--------------------------------------------------------------------------------

via: https://itsfoss.com/fedora-32/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/ubuntu-20-04-release-features/
[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/11/update_fedora.jpg?ssl=1
[3]: https://fedoraproject.org/wiki/Changes/EnableEarlyoom#Enable_EarlyOOM
[4]: https://itsfoss.com/swap-size/
[5]: https://itsfoss.com/gnome-3-36-release/
[6]: https://gcc.gnu.org/
[7]: https://fedoraproject.org/wiki/Releases/32/ChangeSet
[8]: https://fedoraproject.org/wiki/Common_F32_bugs
[9]: https://fedoramagazine.org/announcing-the-release-of-fedora-32-beta/
[10]: https://getfedora.org/workstation/download/
[11]: https://getfedora.org/server/download/
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/03/fedora-download-beta.jpg?ssl=1
[13]: https://spins.fedoraproject.org/prerelease
[14]: https://labs.fedoraproject.org/prerelease
[15]: https://arm.fedoraproject.org/prerelease
@ -0,0 +1,60 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Tools for monitoring, introvert inclusion, and more industry trends)
[#]: via: (https://opensource.com/article/20/3/monitoring-introvert-industry-trends)
[#]: author: (Tim Hildred https://opensource.com/users/thildred)

Tools for monitoring, introvert inclusion, and more industry trends
======

A weekly look at open source community and industry trends.

![Person standing in front of a giant computer screen with numbers, data][1]

As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.

## [Top six open source tools for monitoring][2]

> These tools are widely used in the tech industry, and they all have their benefits. Most of these solutions, however, require skilled implementation and ongoing manual maintenance that can be a burden for DevOps teams and a distraction from the business. There’s no one solution that can cater to all of your requirements, since each tool focuses on one or two specific aspects of observability and analysis. By mixing these tools together, you can derive a unique solution for your individual business needs.

**The impact**: If a container falls over in the cluster and there is no open source monitoring tool to see it, did it really happen?

## [Introvert environment: What can we do?][3]

> An example is its inclusive team dynamics programme, which consists of both information and guidance on everyday practices. Each team is required to appoint an inclusion champion, who ensures all members are given the space to contribute to discussions. Leaders are also encouraged not to speak first during meetings.

**The impact**: If it is hard for you not to speak for a while in a meeting, that probably indicates you should be doing it more often. Will new WFH policies make this harder, or easier? Only time will tell.

## [The difference between API Gateways and service mesh][4]

> The service connectivity capabilities that service mesh provides are conflicting with the API connectivity features that an API gateway provides. However, because the ones provided by service mesh are more inclusive (L4 + L7, all TCP traffic, not just HTTP and not just limited to APIs but to every service), they are in a way more complete. But as we can see from the diagram above, there are also use cases that service mesh does not provide, and that is the “API as a product” use case as well as the full API management lifecycle, which still belong to the API gateway pattern.

**The impact**: Another way of saying this is you can't make money from your service mesh directly, unlike your APIs.

## [Open Policy Agent’s mission to secure the cloud][5]

> While the cost of implementing OPA is a little high today, the technology pays for itself by providing more control and helping to secure systems. As OPA continues to be refined, we can expect implementation costs to fall, making an investment in OPA easier to justify.

**The impact**: Compliance is expensive; large investments in it only make sense if non-compliance is even more so.

_I hope you enjoyed this list and come back next week for more open source community, market, and industry trends._

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/3/monitoring-introvert-industry-trends

作者:[Tim Hildred][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/thildred
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://devops.com/top-six-open-source-tools-for-monitoring-kubernetes-and-docker/
[3]: https://www.raconteur.net/hr/introverts-workplace
[4]: https://www.cncf.io/blog/2020/03/06/the-difference-between-api-gateways-and-service-mesh/
[5]: https://thenewstack.io/open-policy-agents-mission-to-secure-the-cloud/
@ -0,0 +1,78 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (With npm Acquisition, Microsoft is Set to Own the Largest Software Registry in the World)
[#]: via: (https://itsfoss.com/microsoft-npm-acquisition/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

With npm Acquisition, Microsoft is Set to Own the Largest Software Registry in the World
======

Microsoft has been betting big on open source for the past few years. Apart from open sourcing a few things here and there, Microsoft contributes a lot to the Linux kernel (for its Azure cloud platform).

To further strengthen its position in the open source world, [Microsoft acquired the popular open source code hosting platform GitHub for $7.5 billion][1].

Now Microsoft-owned GitHub [has acquired][2] [npm][3] (short for Node Package Manager). npm is the [world’s largest software registry][4], with [more than 1.3 million packages that see 75 billion downloads a month][5].

![][6]

If you are not familiar with it, npm is a package manager for the JavaScript programming language, used primarily with the hugely popular open source [Node.js][7] runtime.
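
If you have never touched npm, the registry's unit is a package described by a `package.json` manifest. A minimal, illustrative example (the package name, version, and dependency are chosen arbitrarily):

```json
{
  "name": "example-app",
  "version": "1.0.0",
  "dependencies": {
    "express": "^4.17.1"
  }
}
```

Running `npm install` in a directory containing a manifest like this resolves and downloads `express` (and its whole dependency tree) from the npm registry, which is the infrastructure GitHub is now acquiring.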
|
||||
|
||||
Though npm has scope of private repository for enterprises, most of the 1.3 million packages are open source and/or used in various open source projects.
|
||||
|
||||
Both node.js and npm are used by big software and IT companies like IBM, Yahoo and big corporations like Netflix and PayPal.
|
||||
|
||||
In case you are wondering, the acquisition amount has not been disclosed by either party.
|
||||
|
||||
### Microsoft’s proposed plan for npm

![][8]

GitHub CEO Nat Friedman assured that Microsoft intends to keep the npm registry available as open source and free to developers.

Once the acquisition is complete, Microsoft is going to invest in the registry infrastructure and platform. It plans to improve the core experience of npm by adding new features like Workspaces, as well as bringing improvements to publishing and multi-factor authentication.

Microsoft also intends to integrate GitHub and npm so that developers can trace a change from a GitHub pull request to the npm package version that fixed it.

### Part of a larger plan

First, [Microsoft bought GitHub][1], the platform that hosts the largest collection of open source repositories, and now npm, the largest software registry. Clearly, Microsoft is tightening its grip on open source projects. This could allow Microsoft to dictate the policies around these open source projects in the future.

When Microsoft acquired GitHub, several open source developers moved to [alternative platforms like GitLab][9], but GitHub remained the first choice for developers. Microsoft did introduce some innovative features like security advisories, the [package registry][10], [sponsorship][11], etc. Microsoft is also expanding GitHub by forming communities around it, especially in developing countries. Recently, [GitHub announced its Indian subsidiary][12] specifically to attract young developers to its platform.

So Microsoft now owns the professional social network [LinkedIn][13], the developer-oriented GitHub, and npm. This indicates that Microsoft will continue its shopping spree and acquire more open source projects that have a substantial developer population.

What could be next? WordPress, because it is the [most popular open source CMS][14] and [runs 33% of the websites][15] on the internet?

While we wait and watch for Microsoft’s next move, why not share your views on this development? The comment section is all yours.

--------------------------------------------------------------------------------
via: https://itsfoss.com/microsoft-npm-acquisition/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/microsoft-github/
[2]: https://github.blog/2020-03-16-npm-is-joining-github/
[3]: https://www.npmjs.com/
[4]: https://www.linux.com/news/state-union-npm/
[5]: https://www.zdnet.com/article/microsoft-buys-javascript-developer-platform-npm-plans-to-integrate-it-with-github/
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/03/microsoft-github-npm.jpg?ssl=1
[7]: https://nodejs.org/en/
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/03/github-npm.jpg?ssl=1
[9]: https://itsfoss.com/github-alternatives/
[10]: https://github.blog/2019-05-10-introducing-github-package-registry/
[11]: https://itsfoss.com/github-sponsors-program/
[12]: https://github.blog/2020-02-12-announcing-github-india/
[13]: https://www.linkedin.com/
[14]: https://itsfoss.com/open-source-cms/
[15]: https://wordpress.org/news/2019/03/one-third-of-the-web/
@ -0,0 +1,98 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (OBS Studio 25.0 is Here With Vulkan-based Games Capture Feature and More)
[#]: via: (https://itsfoss.com/obs-studio-25-release/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

OBS Studio 25.0 is Here With Vulkan-based Games Capture Feature and More
======
_**Brief: Open source screen recording and streaming software OBS Studio 25.0 has just been released, and it brings the ability to capture Vulkan-based games with game capture, among other new features.**_

![][1]

If you are into recording or streaming your desktop, you might have heard of [OBS][2] (Open Broadcaster Software) Studio. It’s one of the [best screen recorder tools on Linux][3] and other operating systems.

But OBS is more than just a simple screen recorder. It also provides all the stuff you need for streaming your recordings.

### New features in OBS Studio 25.0

![OBS 25.0][4]

OBS Studio has released its latest version, 25.0, with plenty of new features to make your recording and streaming experience better. Let’s take a look at some of the main new features:

* Capture Vulkan-based games with game capture
* New capture method for window capture which allows capturing browsers, browser-based windows, and UWP programs
* Advanced scene collection importing that allows you to import from other common streaming programs
* Media source hotkeys to allow control of playback
* Ability to drag and drop URLs to create browser sources
* Support for the [SRT protocol][5]
* Ability to lock volume values of audio sources in the mixer
* Support for certain devices that can automatically rotate their camera output, such as the Logitech StreamCam
* System tray icon to show when the recording is paused
* Help icons when a property has a tooltip associated with it

Apart from these, there are plenty of bug fixes and minor changes that you can follow in the [release notes][6].
### Install OBS Studio 25.0 on Linux

OBS Studio is a cross-platform software and is also available for Windows and macOS in addition to Linux. You can download it from its official website.

[Download OBS Studio 25.0][7]

For Linux, you can grab the source code and build it yourself. I know that’s not very convenient for everyone. The good news is that you can install the latest OBS version using Snap or Flatpak packages.

On Ubuntu or any other [Linux distribution with Snap support][8], you can use the following command:

```
sudo snap install obs-studio
```

If your distribution supports Flatpak packages, you can get it from the Flathub website:

[OBS Studio on Flathub][9]

For Ubuntu users, there is also an official PPA for easily installing it. In a terminal, you can enter the following commands one by one:

```
sudo add-apt-repository ppa:obsproject/obs-studio
sudo apt update
sudo apt install obs-studio
```

You can [learn about deleting PPAs here][10].

Personally, I haven’t used OBS much, though I have heard great things about it. I don’t live stream, but I do record my desktop to create tutorial and informational Linux videos for the [It’s FOSS YouTube channel][11] (you should subscribe to it if you haven’t already). For that, I [use Kazam][12], which I find simpler to use.

Do you use OBS Studio? Which features do you like the most? Do share your views.

--------------------------------------------------------------------------------
via: https://itsfoss.com/obs-studio-25-release/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/03/obs_logo_icon_small.png?resize=150%2C150&ssl=1
[2]: https://obsproject.com/
[3]: https://itsfoss.com/best-linux-screen-recorders/
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/03/obs-25-ubuntu.png?ssl=1
[5]: https://en.wikipedia.org/wiki/Secure_Reliable_Transport
[6]: https://github.com/obsproject/obs-studio/releases/tag/25.0.0
[7]: https://obsproject.com/download
[8]: https://itsfoss.com/install-snap-linux/
[9]: https://flathub.org/apps/details/com.obsproject.Studio
[10]: https://itsfoss.com/how-to-remove-or-delete-ppas-quick-tip/
[11]: https://www.youtube.com/channel/UCEU9D6KIShdLeTRyH3IdSvw
[12]: https://itsfoss.com/kazam-screen-recorder/
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (wxy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: (chai-yuan)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -0,0 +1,81 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (As the networks evolve enterprises need to rethink network security)
[#]: via: (https://www.networkworld.com/article/3531929/as-the-network-evolves-enterprises-need-to-rethink-security.html)
[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)

As the networks evolve enterprises need to rethink network security
======
Q&A: John Maddison, executive vice president of products for network security vendor Fortinet, discusses how to deal with network security in the digital era.
D3Damon / Getty Images
_Digital innovation is disrupting businesses. Data and applications are at the hub of new business models, and data needs to travel across the extended network at increasingly high speeds without interruption. To make this possible, organizations are radically redesigning their networks by adopting multi-cloud environments, building hyperscale data centers, retooling their campuses, and designing new connectivity systems for their next-gen branch offices. Networks are faster than ever before, more agile and software-driven. They're also increasingly difficult to secure. To understand the challenges and how security needs to change, I recently talked with John Maddison, executive vice president of products for network security vendor Fortinet._

**ZK: As the speed and scale of data escalate, how do the challenges to secure it change?**

JM: Security platforms were designed to provide things like enhanced visibility, control, and performance by monitoring and managing the perimeter. But the traditional perimeter has shifted from being a very closely monitored, single access point to a highly dynamic and flexible environment that has expanded not only outward but also inward, into the core of the network.

**[ Also see [What to consider when deploying a next generation firewall][1]. | Get regularly scheduled insights by [signing up for Network World newsletters][2]. ]**

**READ MORE:** [The VPN is dying, long live zero trust][3]

Today's perimeter not only includes multiple access points, the campus, the WAN, and the cloud, but also IoT, mobile, and virtual devices that are generating data, communicating with data centers and manufacturing floors, and literally creating thousands of new edges inside an organization. And with this expanded perimeter, there are a lot more places for attacks to get in. To address this new attack surface, security has to move from being a standalone perimeter solution to being fully integrated into the network.

This convergence of security and networking needs to cover SD-WAN, VPN, Wi-Fi controllers, switching infrastructures, and data center environments – something we call security-driven networking. As we see it, security-driven networking is an essential approach for ensuring that security and networking are integrated into a single system, so that whenever the networking infrastructure evolves or expands, security automatically adapts as an integrated part of that environment. And it needs to do this by providing organizations with a new suite of security solutions, including network segmentation, dynamic multi-cloud controls, and [zero-trust network access][3]. And because of the speed of digital operations and the sophistication of today's attacks, this new network-centric security strategy also needs to be augmented with AI-driven security operations.

The perimeter security devices that have been on the market weren't really built to run as part of the internal network, and when you put them there, they become bottlenecks. Customers don't put these traditional security devices in the middle of their networks because they just can't run fast enough. But the result is an open network environment that can become a playground for criminals who manage to breach perimeter defenses. It's why the dwell time for network malware is over six months.

**[ [Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][4] ]**

As you combine networking applications, networking functionality, and security applications to address this challenge, you absolutely need a different performance architecture. This can't be achieved using the traditional hardware most security platforms rely on.
**ZK: Why can't traditional security devices secure the internal network?**

JM: They simply aren't fast enough. And the ones that come close are prohibitively expensive… For example, internal segmentation not only enables organizations to see and separate all of the devices on their network but also to dynamically create horizontal segments that support and secure applications and automated workflows that need to travel across the extended network. Inside the network, you're running at 100 gigs, 400 gigs, that sort of thing. But the interface for a lot of security systems today is just 10 gigs. Even with multiple ports, the device can't handle much more than that without having to spend a fortune… In order to handle today's capacity and performance demands, security needs to be done at network speeds that most security solutions cannot support without specialized content processors.

**ZK: Hyperscale data centers have been growing steadily. What sort of additional security challenges do these environments face?**

JM: Hyperscale architectures are being used to move and process massive amounts of data. A lot of the time, research centers will need to send a payload of over 10 gigabytes – one packet that's 10 gigabytes – to support advanced rendering and modeling projects. Most firewalls today cannot process these large payloads, also known as elephant flows. Instead, they often compromise on their security to let them flow through. Other hyperscale environment examples include financial organizations that need to process transactions with sub-second latency, or online gaming providers that need to support massive numbers of connections per second while maintaining a high-quality user experience. … [Traditional security platforms] will never be able to secure hyperscale environments, or even worse, the next generation of ultra-fast converged networks that rely on hyperscale and hyperconnectivity to run things like smart cities or smart infrastructures, until they fundamentally change their hardware.

**ZK: Do these approaches introduce new risks or increase the existing risk for these organizations?**

JM: They do both. As the attack surface expands, existing risks often get multiplied across the network. We actually see more exploits in the wild targeting older vulnerabilities than new ones. But cybercriminals are also building new tools designed to exploit cloud environments and modern data centers. They are targeting mobile devices and exploiting IoT vulnerabilities. Some of these attacks are simply revisions of older, tried-and-true exploits. But many are new and highly sophisticated. We are also seeing new attacks that use machine learning and rely on AI enhancements to better bypass security and evade detection.

To address this challenge, security platforms need to be broad, integrated, and automated.

Broad security platforms come in a variety of form factors so they can be deployed everywhere across the expanding network. Physical hardware enhancements, such as our [security processing units], enable security platforms to be effectively deployed inside high-performance networks, including hyperscale data centers and SD-WAN environments. And virtualized versions need to support private cloud environments as well as all major cloud providers through thorough cloud-native integration.

Next, these security platforms need to be integrated. The security components built into a security platform need to work together as a single solution – not the sort of loose affiliation most platforms provide – to enable extremely fast threat intelligence collection, correlation, and response. That security platform also needs to support common standards and APIs so third-party tools can be added and supported. And finally, these platforms need to be able to work together, regardless of their location or form factor, to create a single, unified security fabric. It's important to note that many cloud providers have developed their own custom hardware, such as Google's TPU, Amazon's Inferentia, and Microsoft's Corsica, to accelerate cloud functions. As a result, hardware acceleration on physical security platforms is essential to ensure consistent performance for data moving between physical and cloud environments.

And finally, security platforms need to be automated. Support for automated workflows and AI-enhanced security operations can significantly accelerate the speed of threat detection, analysis, and response. But like other processing-intensive functions, such as decrypting traffic for deep inspection, these functions also need specialized and purpose-built processors or they will become innovation-killing bottlenecks.

**ZK: What's next for network security?**

JM: This is just the start. As networking functions converge even further, creating the next generation of smart environments – smart buildings, smart cities, and smart critical infrastructures – the lack of viable security tools capable of inspecting and protecting these hyperfast, hyperconnected, and hyper-scalable environments will seriously impact our digital economy and way of life.

Security vendors need to understand this challenge and begin investing now in developing advanced hardware and security-driven networking technologies. Organizations aren't waiting for vendors to catch up so they can secure their networks of tomorrow. Their networks are being left exposed right now because the software-based security solutions they have in place are just not adequate. And it's up to the security industry to step up and solve this challenge.

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3531929/as-the-network-evolves-enterprises-need-to-rethink-security.html

作者:[Zeus Kerravala][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Zeus-Kerravala/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html
[2]: https://www.networkworld.com/newsletters/signup.html
[3]: https://www.networkworld.com/article/3487720/the-vpn-is-dying-long-live-zero-trust.html
[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world
108
sources/talk/20200316 How to be the right person for DevOps.md
Normal file
@ -0,0 +1,108 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to be the right person for DevOps)
[#]: via: (https://opensource.com/article/20/3/devops-relationships)
[#]: author: (Josh Atwell https://opensource.com/users/joshatwell)

How to be the right person for DevOps
======
Creating healthy relationships is the essential ingredient in DevOps success.
![Team meeting][1]
In my kitchen, we have a sign that reads "Marriage is more than finding the right person. It is being the right person." It serves as a great reminder of the individual responsibility everyone has in any healthy relationship. As organizations adopt [DevOps][2] as a model of developing and delivering value to customers, the impact of healthy relationships is extremely important for success.

![Marriage sign][3]

Historically, the relationship between development and operations teams has been unhealthy. Poor communication, limited empathy, and a history of mistrust make merging these teams into a tighter operating model challenging, to say the least. This is not entirely unfair to either side.

Developers have long been frustrated by lead times and processes put in place by the operations organization. They just want to work, and they often see operations as an anchor on the ship of progress.

Operations professionals have long been frustrated by the impatience and lack of clear requirements that come from development teams. They are often confused about why those teams are not able to use the available services and processes. They see developers as a liability to their ability to maintain stable services for customers and the business.

A lingering gap here is that each side has been focused on protecting its own perspective. They emphasize how the other team is not being what _they_ need, and they never question whether they, too, could be doing something different.

In DevOps, all sides must frame their role in the organization based on how they add value to others.

There are a few things that everyone, including managers and leaders, can do right away to become a better contributor and partner in their DevOps relationships.
### Communicate

Most professionals in an organization adopting DevOps find themselves needing to work closely with new people, ones they have had limited exposure to in the past. It is important for everyone to take time to get to know their new teammates and learn more about their concerns, their interests, and also their preferred communication style.

Successful communication in new relationships is often built on simply listening more and talking less. Our natural tendency is to talk about ourselves. Most people love sharing what they know best. However, it is extremely important to make more room to listen.

Hearing someone is not the same as listening to them. I'm confident that we have all been in the situation where someone expresses a concern that we do not entirely internalize. Also, merely hearing does not encourage people to share – or to share as completely as they should.

![Stick figures hearing][4]

It is important to listen actively. Repeat what you hear, and seek validation that what you repeat is what they wanted you to understand. Once you understand their concern, it is important to make your initial response a selfless one. Even if you can't completely solve the problem, demonstrate sympathy and help the person move towards a solution.

![Stick figures listening][5]
### Selflessness

Another key relationship challenge as organizations adopt DevOps is developing a perspective of selflessness. In DevOps, most people are responsible for delivering value to a wide variety of other people. Each person should begin by considering how their actions and work impact other people.

This service mindset carries forward when you become more sensitive to when others are in need and then dedicate time in your schedule specifically for the purpose of helping them. This can be as simple as creating a small improvement in a process or helping to troubleshoot an issue. A positive side effect is that this effort will provide you more opportunities to work with others and develop deeper trust.

It is also important not to hoard knowledge—either technical or institutional—especially when people ask questions or seek help. Maintain the mindset that there are no stupid questions.

![Stick figure apologizing][6]

Finally, selflessness includes being trustworthy. It is difficult to maintain a healthy relationship when there is no trust. Be honest and transparent. In IT, this is often seen as a liability, but in DevOps it is a requirement for success.
### Self-care

In order to be a strong contributor to a relationship, it is necessary to maintain a sense of self. Our individuality provides the diversity a relationship needs to grow. Make sure you maintain and share your interests with others. Be more than just the work you do. Apply your interests to your work.

You are no good to others if you are not good to yourself. Healthy relationships are stronger with healthy people. Make sure you take time to enjoy your interests and recharge. Take your vacation and leave work behind!

![Stick figure relaxing][7]

I am also a strong advocate for mental health days. Sometimes our mental health is not sufficient to work effectively. You're not as effective when you are physically ill, and you're not as effective when your head is not 100%. Work with your manager and your team to support each other in maintaining good mental health.

Mental health is improved by learning. Invest in yourself and expand your knowledge. DevOps ideally needs "T-shaped" people who have depth in one topic and also broader system knowledge. Work to increase your depth, but balance that by learning new things about your environment. This knowledge can come from your teammates and create operational sympathy.

![Stick figure with open arms][8]

Finally, healthy relationships are not all work and no play. Take time to acknowledge the successes of others. If you know your team, you likely know how individuals prefer to receive praise. Respect those preferences, but always strive to praise vocally where possible.

![Stick figures celebrating][9]

Make sure to celebrate these successes as a team. All work and no play makes everyone dull. Celebrate milestones together as a team, and then articulate and target the next objectives.

### Be the right person

DevOps requires more from every individual, and its success is directly tied to the health of relationships. Each member of the organization should apply these techniques to grow and improve themselves. A focus on being the right person for the team will build stronger bonds and make the organization better equipped to reach its goals.

* * *

_This article is based on [a talk][10] Josh Atwell gave at All Things Open 2019._
--------------------------------------------------------------------------------

via: https://opensource.com/article/20/3/devops-relationships

作者:[Josh Atwell][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/joshatwell
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/meeting-team-listen-communicate.png?itok=KEBP6vZ_ (Team meeting)
[2]: https://opensource.com/tags/devops
[3]: https://opensource.com/sites/default/files/uploads/marriage.png (Marriage sign)
[4]: https://opensource.com/sites/default/files/uploads/hearing.png (Stick figures hearing)
[5]: https://opensource.com/sites/default/files/uploads/listening.png (Stick figures listening)
[6]: https://opensource.com/sites/default/files/uploads/apologize.png (Stick figure apologizing)
[7]: https://opensource.com/sites/default/files/uploads/relax.png (Stick figure relaxing)
[8]: https://opensource.com/sites/default/files/uploads/open_0.png (Stick figure with open arms)
[9]: https://opensource.com/sites/default/files/uploads/celebrate.png (Stick figures celebrating)
[10]: https://opensource.com/article/20/1/devops-empathy
@ -0,0 +1,93 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (OpenStreetMap: A Community-Driven Google Maps Alternative)
[#]: via: (https://itsfoss.com/openstreetmap/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

OpenStreetMap: A Community-Driven Google Maps Alternative
======
_**Brief: OpenStreetMap is a community-driven map and a potential alternative to Google Maps. Learn more about this open source project.**_

[OpenStreetMap][1] (OSM) is a free, editable map of the world. Anyone can contribute, edit, and make changes to OpenStreetMap to improve it.

![][2]

You need to sign up for an account first in order to edit or add information to OpenStreetMap. To just view the map, you don’t need an account.

Even though it’s a free-to-use map under an [open data license][3], you cannot use the map API to build another service on top of it for commercial purposes.

So, you can download the map data to use and host yourself while crediting OSM. You can check its [API usage policy][4] and [copyright][5] information on its official website to learn more.
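As a minimal sketch of what downloading raw map data looks like, the snippet below builds a request URL for the OSM editing API (v0.6) `map` endpoint. The bounding-box coordinates are arbitrary example values covering a few blocks of central London; for bulk data, use the planet/extract downloads instead of this API.

```
#!/bin/sh
# Build an OSM API v0.6 "map" request for a small bounding box
# (left,bottom,right,top in lon/lat order). Example coordinates only.
BBOX="-0.1290,51.5070,-0.1180,51.5120"
URL="https://api.openstreetmap.org/api/0.6/map?bbox=${BBOX}"
echo "$URL"
# Uncomment to actually download the OSM XML for that area:
# curl -o map.osm "$URL"
```

The result is plain OSM XML (nodes, ways, relations) that you can feed into your own tile server or tooling.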
In this article, we shall take a brief look at how it works and what kind of projects use OpenStreetMap as the source of their map data.

### OpenStreetMap: Overview

![][6]

OpenStreetMap is a good alternative to Google Maps. You might not get the same level of information as Google Maps, but for basic navigation and traveling, OpenStreetMap is sufficient.

Just like any other map, you will be able to switch between multiple layers in the map, get to know your location, and easily search for places.

You may not find all the latest information for the businesses, shops, and restaurants nearby. But for basic navigation, it’s more than enough.

OpenStreetMap can usually be accessed through a web browser on both desktop and mobile by visiting the [OpenStreetMap site][7]. It does not have an official Android/iOS app yet.

However, there are a variety of applications available that utilize OpenStreetMap at their core. So, if you want to utilize OpenStreetMap on a smartphone, you can take a look at some of the popular open source Google Maps alternatives:

* [OsmAnd][8]
* [MAPS.ME][9]

**MAPS.ME** and **OsmAnd** are two open source applications for Android and iOS that utilize OpenStreetMap data to provide a rich user experience with a bunch of useful information and features added to them.

You can also opt for other proprietary options if you wish, like [Magic Earth][10].

In either case, you can take a look at the extensive list of applications on the official wiki pages for [Android][11] and [iOS][12].
### Using OpenStreetMap On Linux
|
||||
|
||||
![][13]
|
||||
|
||||
The easiest way to use OpenStreetMap on Linux is to use it in a web browser. If you use GNOME desktop environment, you can install GNOME Maps which is built on top of OpenStreetMap.
|
||||
|
||||
There are also several software (that are mostly obsolete) that utilize OpenStreetMap on Linux for specific purposes. You can check out the list of available packages in their [official wiki list][14].
|
||||
|
||||
### Wrapping Up
|
||||
|
||||
OpenStreetMap may not be the best source for navigation for end users but its open source model allows it to be used freely. This means that many services can be built using OpenStreetMap. For example, [ÖPNVKarte][15] uses OpenStreetMap to display worldwide public transport facilities on a uniform map so that you don’t have to browse individual operator’s websites.
|
||||
|
||||
What do you think about OpenStreetMap? Can you use it as a Google Maps alternative? Feel free to share your thoughts in the comments below.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/openstreetmap/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.openstreetmap.org/
|
||||
[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/03/openstreetmap.jpg?ssl=1
|
||||
[3]: https://opendatacommons.org/licenses/odbl/
|
||||
[4]: https://operations.osmfoundation.org/policies/api/
|
||||
[5]: https://www.openstreetmap.org/copyright
|
||||
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/03/open-street-map-2.jpg?ssl=1
|
||||
[7]: https://www.openstreetmap.org
|
||||
[8]: https://play.google.com/store/apps/details?id=net.osmand
|
||||
[9]: https://play.google.com/store/apps/details?id=com.mapswithme.maps.pro
|
||||
[10]: https://www.magicearth.com/
|
||||
[11]: https://wiki.openstreetmap.org/wiki/Android#OpenStreetMap_applications
|
||||
[12]: https://wiki.openstreetmap.org/wiki/Apple_iOS
|
||||
[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/03/open-street-map-1.jpg?ssl=1
|
||||
[14]: https://wiki.openstreetmap.org/wiki/Linux
|
||||
[15]: http://xn--pnvkarte-m4a.de/
|
@ -0,0 +1,61 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: ('AI everywhere' IoT chips coming from Arm)
[#]: via: (https://www.networkworld.com/article/3532094/ai-everywhere-iot-chips-coming-from-arm.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)

'AI everywhere' IoT chips coming from Arm
======

Two new microprocessors from Arm promise to miniaturize artificial intelligence.

Silicon microchip maker Arm is working on a new semiconductor design that it says will enable machine learning, at scale, on small sensor devices. Arm has completed testing of the technology and expects to bring it to market next year.

Artificial intelligence, implemented locally on "billions and ultimately trillions" of devices, is coming, the company says in a [press release][1]. Arm Holdings, owned by Japanese conglomerate Softbank, says its partners have shipped more than 160 billion Arm-based chips to date, and that 45 million of its microprocessor designs are placed in electronics every day.

The new machine-learning silicon will include micro neural processing units (microNPUs) that can be used to identify speech patterns and perform other AI tasks. Importantly, the processing is done on-device and in smaller form factors than have so far been available. The chips don't need the cloud or any network.

[RELATED: Auto parts supplier has big plans for its nascent IoT effort][2]

Arm, which historically has been behind mobile smartphone microchips, is aiming this design – the Cortex-M55 processor, paired with the Ethos-U55, Arm's first microNPU – at the internet of things instead.

"Enabling AI everywhere requires device makers and developers to deliver machine learning locally on billions, and ultimately trillions of devices," said Dipti Vachani, senior vice president and general manager of Arm's automotive and IoT areas, in a statement. "With these additions to our AI platform, no device is left behind as on-device ML on the tiniest devices will be the new normal, unleashing the potential of AI securely across a vast range of life-changing applications."

Arm wants to take advantage of the autonomous nature of chip-based number crunching, as opposed to doing it in the cloud. Privacy-conscious (and regulated) healthcare is an example of a vertical that might like the idea of localized processing.

Functioning AI without cloud dependence isn't entirely new. Intel's [Neural Compute Stick 2][3], a $69 self-contained computer vision and deep learning development kit, doesn't need it, for example.

**[ [Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][4] ]**

Arm is also going for power savings with its new AI technology. Not requiring a data network can mean longer battery life for the sensor – only the calculated results need to be sent, rather than every bit. Much of the time, raw sensor data is irrelevant and can be discarded. Arm's new endpoint ML technologies are going to help microcontroller developers "accelerate edge inference in devices limited by size and power," said Geoff Lees, senior vice president of edge processing at IoT semiconductor company [NXP][5], in the announcement.

Enabling machine learning in power-constrained settings and eliminating the need for network connectivity mean the sensor can be placed where there isn't a hardy power supply. Latency and cost advantages can also come into play.

"These devices can run neural network models on batteries for years, and deliver low-latency inference directly on the device," said Ian Nappier, product manager of TensorFlow Lite for Microcontrollers at Google, in a statement to Arm. [TensorFlow][6] is an open-source machine learning platform that's been used for detecting respiratory diseases, among other things.

Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3532094/ai-everywhere-iot-chips-coming-from-arm.html

作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://www.arm.com/company/news/2020/02/new-ai-technology-from-arm
[2]: https://www.networkworld.com/article/3098084/internet-of-things/auto-parts-supplier-has-big-plans-for-its-nascent-iot-effort.html#tk.nww-fsb
[3]: https://store.intelrealsense.com/buy-intel-neural-compute-stick-2.html
[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
[5]: https://www.nxp.com/company/our-company/about-nxp:ABOUT-NXP
[6]: https://www.tensorflow.org/
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world
@ -0,0 +1,117 @@
[#]: collector: (lujun9972)
[#]: translator: (HankChow)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Coronavirus challenges remote networking)
[#]: via: (https://www.networkworld.com/article/3532440/coronavirus-challenges-remote-networking.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)

Coronavirus challenges remote networking
======

COVID-19 sends IBM, Google, Amazon, AT&T, Cisco, Apple and others scrambling to securely support an enormous rise in teleworkers, and puts stress on remote-access networks.

As the coronavirus spreads, many companies are requiring employees to work from home, putting unanticipated stress on remote networking technologies and causing bandwidth and security concerns.

Businesses have facilitated brisk growth in teleworking over the past decades, to an estimated 4 million-plus workers. The meteoric rise in new remote users expected to come online as a result of the novel coronavirus calls for stepped-up capacity.

Research by VPN vendor [Atlas][1] shows that VPN usage in the U.S. grew by 53% between March 9 and 15, and it could grow faster. VPN usage in Italy, where the virus outbreak is about two weeks ahead of the U.S., increased by 112% during the last week. "We estimate that VPN usage in the U.S. could increase over 150% by the end of the month," said Rachel Welch, chief operating officer of Atlas VPN, in a statement.

Businesses are trying to get a handle on how much capacity they'll need by running one-day tests. For example, JPMorgan Chase, Morningstar and analytics startup Arity have tested or plan to test their systems by having employees work from home for a day, according to the [Chicago Tribune][2].

On the government side, agencies such as the [National Oceanic and Atmospheric Administration][3] and NASA have run or will run remote networking stress tests to understand their remote networking capacity and what the impact will be if they add thousands of new teleworkers. About [2 million people][4] work for the government in the U.S.

To help stave off congestion in cellular data networks, the [Federal Communications Commission][5] has granted T-Mobile temporary access to spectrum in the 600MHz band that's owned by other licensees. T-Mobile said it requested the spectrum "to make it easier for Americans to participate in telehealth, distance learning, and telework, and simply remain connected while practicing recommended 'social distancing'."

Last-mile internet access may become congested in areas that rely on wireless connectivity, some industry players warn.

"Bottlenecks are likely going to exist in hard-to-reach areas, such as rural locations, where internet access relies on microwave or wireless infrastructure," said Alex Cruz Farmer, product manager for network intelligence company ThousandEyes, which makes software that analyzes the performance of local and wide area networks. "The challenge here is that the available bandwidth is usually much less via these solutions, as well as more latent."

"We have seen a very small number of platform-related issues or outages due to increased loads, although those have since been resolved," Farmer added.

For its part, AT&T said it has noticed shifts in usage on its wireless network, but capacity has not been taxed.

"In cities where the coronavirus has had the biggest impact, we are seeing fewer spikes in wireless usage around particular cell towers or particular times of day, because more people are working from home rather than commuting to work, and fewer people are gathering in large crowds at specific locations," [AT&T said in a statement][7]. "We continuously monitor bandwidth usage with tools that analyze and correlate network statistics, which reveal network trends and provide us with performance and capacity reports that help us manage our network."

Verizon says it hasn't seen a measurable increase in data usage since the coronavirus outbreak, despite a jump in the number of customers working from home. "Verizon’s networks are designed and built to meet future demand and are ready should demand increase or usage patterns change significantly. While this is an unprecedented situation, we know things are changing, and we are ready to adjust network resources as we better understand any shifts in demand," the company said in a statement.

Verizon has been monitoring network usage in the most affected areas and pledged to work with and prioritize network resources to meet the needs of hospitals, first responders and government agencies. It also announced plans to increase capital spending from between $17 billion and $18 billion to between $17.5 billion and $18.5 billion in 2020 in an effort to "accelerate Verizon's transition to 5G and help support the economy during this period of disruption."

### Enterprise VPN security concerns

For enterprises, supporting the myriad network and security technologies that sit between data centers and remote users is no small task, particularly since remote-access VPNs, for example, typically rely on residential internet-access services over which businesses have little control. But IT pros should try to verify that these connections meet enterprise standards, according to Tom Nolle, president of CIMI Corp. (Read more of Nolle's thoughts on working at home [here][8].)

"The home broadband elements, like the ISP and DNS and Wi-Fi, should really be part of a business certification of suitable networking for home work," Nolle said. "I find that DNS services like Google's are less prone to being overloaded than ISPs' services, which suggests users should be required to adopt one of them. OpenDNS is also good."

The security of home Wi-Fi networks is also an issue, Nolle said. IT pros should require workers to submit screenshots of their Wi-Fi configurations in order to validate the encryption being used. "Home workers often bypass a lot of the security built into enterprise locations," he said.

Education of new home workers is also important, said Andrew Wertkin, chief strategy officer at DNS software company BlueCat. "There will be remote workers who have not substantially worked from home before, and may or may not understand the implications for security," Wertkin said. "This is especially problematic if the users are accessing the network via personal home devices versus corporate devices."

An unexpected increase in remote corporate users on a [VPN][9] can also introduce cost challenges.

"VPN appliances are expensive, and moving to virtualized environments in the cloud often can turn out to be expensive when you take into account compute cost and per-seat cost," Farmer said. A significant increase in per-seat VPN licenses has likely not been budgeted for.

On the capacity side, systems such as DHCP, which doles out IP addresses, could come under stress with increased remote-access use. "It doesn't matter if there are enough licenses for VPN if the devices connecting cannot obtain network addresses," Wertkin said. "Companies must test for and understand choke points and start implementing strategies to mitigate these risks."
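Wertkin's point about address exhaustion can be sketched with a toy capacity check. The pool size and headcount below are invented example numbers, not figures from the article:

```python
# Hypothetical sketch: can a VPN client address pool cover a surge of
# remote workers? A /22 scope offers 1024 addresses; after reserving
# the network and broadcast addresses, 1022 are usable.
import ipaddress

vpn_pool = ipaddress.ip_network("10.50.0.0/22")  # assumed VPN client scope
usable = vpn_pool.num_addresses - 2              # minus network/broadcast
remote_workers = 1500                            # assumed surge headcount

print(usable, "addresses for", remote_workers, "workers:",
      "OK" if usable >= remote_workers else "pool exhausted")
```

Here the surge overruns the pool, which is exactly the licenses-without-addresses scenario Wertkin warns about.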
Along those lines, enterprises "may have to validate the number of SSL sockets their data centers can expose for use, or they could end up running out," Nolle said.

Paul Collinge, a senior program manager on the Microsoft Office 365 product team, raised similar concerns. Network elements such as VPN concentrators, central network egress equipment such as proxies and DLP, central internet bandwidth, backhaul MPLS circuits, and NAT capability are put under enormous strain when all employees are using them, Collinge wrote in a [blog][10] about optimizing Office 365 traffic for remote staff. The result is poor performance and productivity, coupled with a poor user experience for those working from home.

ThousandEyes' Farmer said enterprises might have to increase the number of VPN concentrators on their networks. "This way, remote-user connectivity is distributed across multiple VPN endpoints and not concentrated," he said. If that's not an option, businesses may have to open firewall ports to allow access to essential applications, which would enable them to scale up but could also weaken security temporarily.

### Can VPN split tunneling help?

Industry players are divided on the use of split tunneling to minimize coronavirus capacity concerns.

VPNs can be set up to allow split tunneling, where only traffic intended for the corporate network tunnels through the VPN, BlueCat's Wertkin said. The rest of the traffic goes directly to the internet at large, meaning it isn't subject to the security controls imposed by the tunnel and by tools within the corporate network, which is a security concern. This could lead to remote users' computers being compromised by internet-borne attacks, which could in turn put corporate data and networks at risk.
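The routing decision Wertkin describes can be sketched as a simple prefix check. This is a hypothetical illustration of the idea only – real VPN clients implement split tunneling in the OS routing table, and the corporate prefixes below are assumed example ranges:

```python
# Hypothetical split-tunnel routing decision: destinations inside the
# corporate prefixes go through the VPN; everything else goes directly
# to the internet (and so bypasses the tunnel's security controls).
import ipaddress

CORPORATE_PREFIXES = [
    ipaddress.ip_network("10.0.0.0/8"),     # assumed internal ranges
    ipaddress.ip_network("172.16.0.0/12"),
]

def route_for(dest_ip: str) -> str:
    """Return 'vpn' for corporate destinations, 'direct' otherwise."""
    addr = ipaddress.ip_address(dest_ip)
    return "vpn" if any(addr in net for net in CORPORATE_PREFIXES) else "direct"

print(route_for("10.1.2.3"))   # corporate address, tunneled
print(route_for("8.8.8.8"))    # public address, sent direct
```

The security trade-off in the paragraph above is visible here: anything routed "direct" never passes through the corporate inspection tools.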
Despite this, Microsoft last week recommended split tunneling as a way for IT admins to address congestion on its Office 365 service caused by the influx of remote users. In [the advisory][10], Microsoft offers a list of URLs and IP addresses for its points of access and describes how IT can use that information to route traffic directly to Office 365.

The VPN client should be configured so that traffic to the identified URLs/IPs/ports is routed this way, according to Collinge. "This allows us to deliver extremely high performance levels to users wherever they are in the world."

ThousandEyes' Farmer said increased use of remote-access VPNs might call for a review of network security in general. "[For] enterprises that are still using a legacy network security architecture, it may be time to consider cloud-based security options, which could improve performance for remote workers and diminish the overall use of the enterprise’s WAN circuits."

Other related developments:

  * The [FCC][11] called on broadband providers to relax their data-cap policies in appropriate circumstances, on telephone carriers to waive long-distance and overage fees in appropriate circumstances, on those that serve schools and libraries to work with them on remote learning opportunities, and on all network operators to prioritize the connectivity needs of hospitals and healthcare providers. AT&T and others have responded.
  * [U.S. Senator Mark R. Warner (D-VA)][12] and 17 other senators sent a letter to the CEOs of eight major ISPs calling on the companies to take steps to accommodate the unprecedented reliance on telepresence services, including telework, online education, telehealth, and remote support services. In the letter, sent to the CEOs of AT&T, CenturyLink, Charter Communications, Comcast, Cox Communications, Sprint, T-Mobile, and Verizon, the senators call on the companies to suspend restrictions and fees that could limit telepresence options. Related to the nation's broadband gaps, they also call on the companies to provide free or at-cost broadband options for students affected by the virus who otherwise lack broadband access for online learning during the outbreak.
  * Vendors including [Cisco][13], Microsoft, [Google][14], [LogMeIn][15], [Spectrum][16] and others are offering free tools to help customers manage security and communications during the outbreak.

Join the Network World communities on [Facebook][17] and [LinkedIn][18] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3532440/coronavirus-challenges-remote-networking.html

作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://atlasvpn.com/blog/vpn-usage-in-italy-rockets-by-112-and-53-in-the-us-amidst-coronavirus-outbreak/
[2]: https://www.chicagotribune.com/coronavirus/ct-coronavirus-work-from-home-20200312-bscm4ifjvne7dlugjn34sksrz4-story.html
[3]: https://federalnewsnetwork.com/workforce/2020/03/agencies-ramp-up-coronavirus-preparations-as-noaa-plans-large-scale-telework-test/
[4]: https://fas.org/sgp/crs/misc/R43590.pdf
[5]: https://www.fcc.gov/coronavirus
[6]: https://www.networkworld.com/blog/itaas-and-the-corporate-storage-technology/?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE22140&utm_content=sidebar (ITAAS and Corporate Storage Strategy)
[7]: https://about.att.com/pages/COVID-19.html
[8]: https://blog.cimicorp.com/?p=4055
[9]: https://www.networkworld.com/article/3268744/understanding-virtual-private-networks-and-why-vpns-are-important-to-sd-wan.html
[10]: https://techcommunity.microsoft.com/t5/office-365-blog/how-to-quickly-optimize-office-365-traffic-for-remote-staff-amp/ba-p/1214571
[11]: https://www.fcc.gov/document/commissioner-starks-statement-fccs-response-covid-19
[12]: https://www.warner.senate.gov/public/_cache/files/2/3/239084db-83bd-4641-bf59-371cb829937a/A99E41ACD1BA92FB37BDE54E14A97BFA.letter-to-isps-on-covid-19-final-v2.-signed.pdf
[13]: https://blogs.cisco.com/collaboration/cisco-announces-work-from-home-webex-contact-center-quick-deployment
[14]: https://cloud.google.com/blog/products/g-suite/helping-businesses-and-schools-stay-connected-in-response-to-coronavirus
[15]: https://www.gotomeeting.com/work-remote?clickid=RFlSQF3DBxyOTSr0MKVSfWfHUknShrScK0%3AhTY0&irgwc=1&cid=g2m_noam_ir_aff_cm_pl_ct
[16]: https://www.multichannel.com/news/charter-opening-wi-fi-hotspots-in-face-of-covid-19
[17]: https://www.facebook.com/NetworkWorld/
[18]: https://www.linkedin.com/company/network-world
@ -0,0 +1,61 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Electronics should sweat to cool down, say researchers)
[#]: via: (https://www.networkworld.com/article/3532827/electronics-should-sweat-to-cool-down-say-researchers.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)

Electronics should sweat to cool down, say researchers
======

Scientists think that in much the same way the human body releases perspiration to cool down, special materials might release water to draw heat from electronics.

Computing devices should sweat when they get too hot, say scientists at Shanghai Jiao Tong University in China, where they have developed a materials application they claim will cool down devices more efficiently, and in smaller form factors, than existing fans.

It’s “a coating for electronics that releases water vapor to dissipate heat from running devices,” the team explains in a news release. “Mammals sweat to regulate body temperature,” so should electronics, they believe.

[[Get regularly scheduled insights by signing up for Network World newsletters.]][1]

The group’s focus has been on studying porous materials that can absorb moisture from the environment and then release water vapor when warmed. MIL-101(Cr) checks the boxes, they say. The material is a metal-organic framework, or MOF, a sorbent that stores large amounts of water. The higher the water capacity, the greater the heat dissipated when the material is warmed.

MOF projects have been attempted before. “Researchers have tried to use MOFs to extract water from the desert air,” says refrigeration-engineering scientist Ruzhu Wang, senior author of a paper on the university’s work that has just been [published in Joule][2].

Their proof-of-concept test involved applying a micrometers-thin coating of MIL-101(Cr) to metallic substrates, which produced temperature drops of up to 8.6 degrees Celsius for 25 minutes, according to the abstract of their paper.

That’s “a significant improvement compared to that of traditional PCMs,” they say. Phase-change materials (PCMs) include waxes and fatty acids that are used in electronics and melt to absorb heat. They are used in smartphones, but the solid-to-liquid transition doesn’t exchange all that much energy.

“In contrast, the liquid-vapor transition of water can exchange 10 times the energy compared to that of PCM solid-liquid transition.” Plus, the material recovers almost immediately to start sweating again, just like a mammal.
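The factor of ten can be sanity-checked against textbook values – roughly 2,260 J/g of latent heat for vaporizing water versus about 200 J/g for melting a typical paraffin PCM. Both figures are illustrative assumptions, not numbers taken from the Joule paper:

```python
# Back-of-envelope comparison of heat absorbed per gram of material,
# using textbook figures (assumed for illustration, not from the paper).
LATENT_HEAT_VAPORIZATION_WATER = 2260.0  # J/g, water liquid -> vapor
LATENT_HEAT_FUSION_PARAFFIN = 200.0      # J/g, typical paraffin PCM melting

ratio = LATENT_HEAT_VAPORIZATION_WATER / LATENT_HEAT_FUSION_PARAFFIN
print(f"Vaporizing water absorbs ~{ratio:.1f}x the heat of melting PCM per gram")
```

With these values the ratio comes out slightly above ten, consistent with the researchers' "10 times" claim.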
Shanghai Jiao Tong University isn’t the only school looking into sweat for future tech. Cornell University wants to get robots to sweat to bring their temperature below ambient. Researchers there say they have built a 3D-printed, sweating robot muscle. It [manages its own temperature][4], and they think it will one day let robots run for extended periods without overheating.

Soft robots, which are the kind preferred by many developers for their flexibility, hold more heat than metal ones. As in electronic devices such as smartphones and IoT sensors, fans aren’t ideal because they take up too much space. That’s why new materials applications are being studied.

The Cornell robot group uses light to cure resin into shapes that control the flow of heat. A base-layer “smart sponge” made of poly-N-isopropylacrylamide retains water and squeezes it through fabricated, dilated pores when heated. The pores then close automatically when cooled.

“Just when it seemed like robots couldn’t get any cooler,” the group says.

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3532827/electronics-should-sweat-to-cool-down-say-researchers.html

作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/newsletters/signup.html
[2]: https://www.cell.com/joule/fulltext/S2542-4351(19)30590-2
[3]: https://www.networkworld.com/blog/itaas-and-the-corporate-storage-technology/?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE22140&utm_content=sidebar (ITAAS and Corporate Storage Strategy)
[4]: https://news.cornell.edu/stories/2020/01/researchers-create-3d-printed-sweating-robot-muscle
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world
@ -0,0 +1,74 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Cisco warns of five SD-WAN security weaknesses)
|
||||
[#]: via: (https://www.networkworld.com/article/3533550/cisco-warns-of-five-sd-wan-security-weaknesses.html)
|
||||
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
|
||||
|
||||
Cisco warns of five SD-WAN security weaknesses
|
||||
======
|
||||
Cisco warnings include three high-impact SD-WAN vulnerabilities
|
||||
[Jaredd Craig][1] [(CC0)][2]
|
||||
|
||||
Cisco has issued five warnings about security weaknesses in its [SD-WAN][3] offerings, three of them on the high-end of the vulnerability scale.
|
||||
|
||||
The worst problem is with the command-line interface (CLI) of its [SD-WAN][4] Solution software where a weakness could let a local attacker inject arbitrary commands that are executed with root privileges, Cisco [wrote.][5]
|
||||
|
||||
[[Get regularly scheduled insights by signing up for Network World newsletters.]][6]

An attacker could exploit this vulnerability – which has a 7.8 out of 10 on the Common Vulnerability Scoring System – by authenticating to the device and submitting crafted input to the CLI utility. The attacker must be authenticated to access the CLI utility. The vulnerability is due to insufficient input validation, Cisco wrote.

Another high-severity problem lets an authenticated, local attacker elevate privileges to root on the underlying operating system. An attacker could exploit this vulnerability by sending a crafted request to an affected system. A successful exploit could allow the attacker to gain root-level privileges, Cisco [wrote][7]. The vulnerability is due to insufficient input validation.

The third high-level vulnerability in the SD-WAN Solution software could let an attacker cause a buffer overflow on an affected device. An attacker could exploit this vulnerability by sending crafted traffic to an affected device. A successful exploit could allow the attacker to gain access to information that they are not authorized to access and make changes to the system that they are not authorized to make, Cisco [wrote][8].

The vulnerabilities affect a number of Cisco products if they are running a Cisco SD-WAN Solution software release earlier than Release 19.2.2: vBond Orchestrator Software, vEdge 100-5000 Series Routers, vManage Network Management System and vSmart Controller Software.

Cisco said there were no workarounds for any of the vulnerabilities, and it suggested users accept automatic software updates to allay exploit risks. There are [software fixes for the problems][9] as well.

All three of the high-level warnings were reported to Cisco by the Orange Group, Cisco said.

The other two SD-WAN Solution software warnings – both rated medium threat – include one that allows a cross-site scripting (XSS) attack against the web-based management interface of the vManage software and a SQL injection threat.

The [XSS vulnerability][11] is due to insufficient validation of user-supplied input by the web-based management interface. An attacker could exploit this vulnerability by persuading a user of the interface to click a crafted link. A successful exploit could allow the attacker to execute arbitrary script code in the context of the interface or to access sensitive, browser-based information.
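
To see why "insufficient validation of user-supplied input" matters here, a minimal, generic sketch of the XSS pattern in Python (illustrative only – the function names are invented for this example, and this is not the vManage code):

```python
import html

def render_unsafe(user_input: str) -> str:
    # Vulnerable pattern: user-supplied input is dropped into markup
    # verbatim, so a crafted value executes as script in the browser.
    return f"<p>Hello, {user_input}</p>"

def render_safe(user_input: str) -> str:
    # Mitigation: encode user input before it reaches the page.
    return f"<p>Hello, {html.escape(user_input)}</p>"

payload = "<script>alert(1)</script>"
print(render_unsafe(payload))  # the script tag survives intact
print(render_safe(payload))    # the tag is encoded and renders as text
```

The crafted link in the advisory works the same way: it smuggles a value like `payload` into a page that renders it without encoding.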

The SQL vulnerability exists because the web UI improperly validates SQL values. An attacker could exploit this vulnerability by authenticating to the application and sending malicious SQL queries to an affected system. A successful exploit could let the attacker modify values on, or return values from, the underlying database as well as the operating system, Cisco [wrote][12].
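
A generic sketch of the injection class Cisco describes, using Python's built-in sqlite3 module (illustrative only – the table and queries are made up, not taken from vManage):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def lookup_unsafe(name: str):
    # Vulnerable pattern: the value is concatenated into the statement,
    # so input like "' OR '1'='1" rewrites the query's meaning.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name: str):
    # Mitigation: bind the value as a parameter; the driver treats it
    # as data, never as SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(lookup_unsafe(payload))  # returns every row in the table
print(lookup_safe(payload))    # returns no rows
```

Parameter binding is the standard fix for "improperly validates SQL values": the malicious string never reaches the SQL parser as syntax.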

Cisco recognized Julien Legras and Thomas Etrillard of Synacktiv for reporting the problems.

The company said release 19.2.2 of the [Cisco SD-WAN Solution][13] contains fixes for all five vulnerabilities.

Join the Network World communities on [Facebook][14] and [LinkedIn][15] to comment on topics that are top of mind.
--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3533550/cisco-warns-of-five-sd-wan-security-weaknesses.html

作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://unsplash.com/photos/T15gG5nA9Xk
[2]: https://creativecommons.org/publicdomain/zero/1.0/
[3]: https://www.networkworld.com/article/3031279/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html
[4]: https://www.networkworld.com/article/3527194/multicloud-security-integration-drive-massive-sd-wan-adoption.html
[5]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-sdwclici-cvrQpH9v
[6]: https://www.networkworld.com/newsletters/signup.html
[7]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-sdwpresc-ySJGvE9
[8]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-sdwanbo-QKcABnS2
[9]: https://software.cisco.com/download/home
[10]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[11]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20200318-vmanage-xss
[12]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20200318-vmanage-cypher-inject
[13]: https://www.cisco.com/c/en/us/solutions/enterprise-networks/sd-wan/index.html#~benefits
[14]: https://www.facebook.com/NetworkWorld/
[15]: https://www.linkedin.com/company/network-world
@ -0,0 +1,85 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How technical debt is risking your security)
[#]: via: (https://opensource.com/article/20/3/remove-security-debt)
[#]: author: (Sam Bocetta https://opensource.com/users/sambocetta)

How technical debt is risking your security
======
A few security fixes now will help lighten the load of future developers
using your software.
![A lock on the side of a building][1]

Everyone knows they shouldn't take shortcuts, especially in their work, and yet everyone does. Sometimes it doesn't matter, but when it comes to code development, it definitely does.

As any experienced programmer knows, building your code the quick and dirty way soon leads to problems down the line. These issues might not be disastrous, but they incur a small penalty every time you want to develop your code further.

This is the basic idea behind [technical debt][2], a term first coined by well-known programmer Ward Cunningham. Technical debt is a metaphor for the long-term burden that developers and software teams incur when they take shortcuts, and it has become a popular way to think about the extra effort that quick and dirty design choices impose on future development.

"Security debt" is an extension of this idea, and in this article, we'll take a look at what the term means, why it is a problem, and what you can do about it.

### What is security debt?

To get an idea of how security debt works, we have to consider the software development lifecycle. Today, it's very rare for developers to start with a blank page, even for a new piece of software. At the very least, most programmers will start a new project with open source code copied from online repositories.

They will then adapt and change this code to make their project. While they are doing this, there will be many points where they notice a security vulnerability. Something as simple as an error establishing a database connection can be an indication that systems are not playing well together, and that someone has taken a fast and dirty approach.

Then they have two options: they can either take an in-depth look at the code they are working with, and fix the issue at a fundamental level, or they can quickly paste extra code over the top that gets around the problem in a quick, inefficient way.

Given the demands of today's development environment, most developers choose the second route, and we can't blame them. The problem is that the next person who looks at the code is going to have to spend longer working out how it operates.

Time, as we all know, is money. Because of this, each time software needs to be changed, there will be a small cost to make it secure due to previous developers taking shortcuts. This is security debt.
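
As a rough illustration of how that small cost compounds, consider a toy model (the numbers here are hypothetical, invented for the example, not taken from any study):

```python
# Toy model: every shortcut left in the codebase adds a small
# percentage of overhead to each later change.
base_cost = 10.0              # hours a change takes on clean code
penalty_per_shortcut = 0.05   # 5% overhead per accumulated shortcut

def change_cost(shortcuts_taken: int) -> float:
    # Debt compounds: each shortcut multiplies the cost of every
    # future change, like interest on a balance never paid down.
    return base_cost * (1 + penalty_per_shortcut) ** shortcuts_taken

print(round(change_cost(0), 1))   # 10.0 hours on a clean codebase
print(round(change_cost(20), 1))  # roughly 26.5 hours after 20 shortcuts
```

Under these made-up assumptions, twenty unaddressed shortcuts more than double the cost of every subsequent change, which is the credit-card dynamic the metaphor describes.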

### How security debt threatens your software

There was a time when security debt was not a huge problem, at least not in the open source community. A decade ago, open source components had lifetimes measured in years and were freely available to everyone.

This meant that security issues in legacy code got fixed. Today, the increased speed of the development lifecycle and the increasingly censored internet means that developers can no longer trust third-party code to the degree they used to.

This has led to a considerable increase in security debt for developers using open source components. Veracode's latest [State of Software Security (SOSS)][3] report found that security issues in open source software take about a month longer to be fixed than those in software that is sourced internally. Insourced software recorded the highest fix rates, but even software sourced from external contractors gets fixed faster, by about two weeks, than open source software.

The ultimate outcome of this – and one that the term "security debt" captures very well – is that most companies currently face security vulnerabilities throughout their entire software stack, and these are accumulating faster than they are fixed. In other words, developers have maxed out their security debt credit card and are drowning in the debt they've incurred. This is particularly concerning when you consider that total household debt [reached nearly $14 trillion][4] in the United States alone in 2019.

### How to avoid security debt

Avoiding a build-up of security debt requires that developers take a different approach to security than the one that is prevalent in the industry at the moment. Proven methods such as zero-knowledge cloud encryption, VPNs to promote online anonymity, and network intrusion prevention software are great, but they may also not be enough.

In fact, some developers may have been scratching their heads during our definition of security debt above: how many of us think about the next poor soul who will have to check our code for security flaws?

Changing that way of thinking is key to preventing a build-up of security debt. Developers should take the time to thoroughly [check their software for security vulnerabilities][5], not just during development, but after the release as well. Fix any errors now, rather than waiting for security holes to build up.
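
One way to start is an automated sweep for the most obvious shortcuts. A minimal sketch (the patterns and file names are made up for illustration; a real team would run a dedicated scanner or dependency-audit tool on top of something like this):

```python
import re
import tempfile
from pathlib import Path

# A few patterns that often mark quick-and-dirty shortcuts left behind.
RISKY = {
    "hardcoded secret": re.compile(r'password\s*=\s*["\']'),
    "unsafe eval": re.compile(r'\beval\('),
    "deferred fix": re.compile(r'\b(TODO|FIXME)\b'),
}

def audit(root: str):
    """Return (file, line number, label) for every risky line found."""
    findings = []
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            for label, pattern in RISKY.items():
                if pattern.search(line):
                    findings.append((str(path), lineno, label))
    return findings

# Demo on a throwaway directory containing one shortcut-ridden file.
root = Path(tempfile.mkdtemp())
(root / "demo_app.py").write_text(
    'password = "hunter2"  # FIXME: move to a secrets store\n'
)
for path, lineno, label in audit(str(root)):
    print(lineno, label)  # flags both the secret and the FIXME
```

Running a sweep like this in CI, before and after release, turns "fix any errors now" from a resolution into a habit.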

If that instruction sounds familiar, then well done. A continuity approach to software development is a critical component of [layering security through DevOps][6], and one of the pillars of the emerging discipline of DevSecOps. Along with [chaos engineering][7], these approaches seek to integrate security into development, testing, and assessment processes, and thereby prevent a build-up of security debt.

Just like a credit card, the key to keeping security debt from getting out of control is to avoid the temptation to take shortcuts in the first place. That's easier said than done, of course, but one of the key lessons from recent data breaches is that legacy systems that many developers assume are secure are just as full of shortcuts as recently written code.

### Measure twice, cut once

Since [security by default hasn't arrived yet][8], we must all try and do things properly in the future. Taking the fast, dirty approach might mean that you get to leave the office early, but ultimately that decision will come back to bite you.

If you finish early anyway, well done: you can use the time to read [our best articles on security][9] and check whether your code is as secure as you think it is.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/3/remove-security-debt

作者:[Sam Bocetta][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/sambocetta
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_3reasons.png?itok=k6F3-BqA (A lock on the side of a building)
[2]: https://opensource.com/article/17/10/why-i-love-technical-debt
[3]: https://www.veracode.com/state-of-software-security-report
[4]: https://thetokenist.io/financial-statistics/
[5]: https://opensource.com/article/17/6/3-security-musts-software-developers
[6]: https://opensource.com/article/19/9/layered-security-devops
[7]: https://www.infoq.com/articles/chaos-engineering-security-networking/
[8]: https://opensource.com/article/20/1/confidential-computing
[9]: https://opensource.com/article/19/12/security-resources
@ -0,0 +1,110 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (3 metrics to measure your open source community health)
[#]: via: (https://opensource.com/article/20/3/community-metrics)
[#]: author: (Kevin Xu https://opensource.com/users/kevin-xu)

3 metrics to measure your open source community health
======
Community building is critical to the success of any open source
project. Here's how to evaluate your community's health and strengthen
it.
![Green graph of measurements][1]

Community building is table stakes in the success of any open source project. Even outside of open source, community is considered a competitive advantage for businesses in many industries—from retail, to gaming, to fitness. (For a deeper dive, see "[When community becomes your competitive advantage][2]" in the _Harvard Business Review_.)

However, open source community building—especially offline activities—is notoriously hard to measure, track, and analyze. While we've all been to our fair share of meetups, conferences, and "summits" (and probably hosted a few of them ourselves), were they worth it? Did the community meaningfully grow? Was printing all those stickers and swag worth the money? Did we collect and track the right numbers to measure progress?

To develop a better framework for measuring community, we can look to a different industry for guidance and fresh ideas: political campaigns.

### My metrics start with politics

I started my career in political campaigns in the US as a field organizer (aka a low-level staffer) for then-candidate Senator Obama in 2008. Thinking back, a field organizer's job is basically community building in a specifically assigned geographical area that your campaign needs to win. My day consisted of calling supporters to do volunteer activities, hosting events to gather supporters, bringing in guest speakers (called "surrogates" in politics) to events, and selling the vision and plan of our candidate (essentially our "product").

Another big chunk of my day was doing data entry. We logged everything: interactions on phone conversations with voters, contact rates, event attendance, volunteer recruitment rates, volunteer show-up rates, and myriad other numbers to constantly measure our effectiveness.

Regardless of your misgivings about politics in general or specific politicians, the winning campaigns that lead to political victories are all giant community-building exercises that are data-driven, meticulously measured, and constantly optimized. They are well-oiled community-building machines.

When I entered the world of open source a few years ago, the community-building part felt familiar and natural. What surprised me was how little community building as an operation is quantified and measured—especially with offline activities.

### Three metrics to track

Taking a page from the best-run political campaigns I've seen, here are the three most important metrics for an open source community to track and optimize:

  * Number of **community ambassadors**
  * Number of **return attendees** (people who attend your activities two times or more)
  * Rate of **churned attendees** (the percentage of people who attend your activities only once or say they will come but don't show up)
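
If you log attendance per event, all three metrics fall out of a few lines of code. A sketch with made-up names and numbers:

```python
from collections import Counter

# Hypothetical attendance log: (person, event) pairs from sign-in sheets.
attendance = [
    ("ada", "meetup-1"), ("ada", "meetup-2"), ("ada", "meetup-3"),
    ("bo", "meetup-1"), ("bo", "meetup-3"),
    ("cy", "meetup-2"),   # came once, never returned
    ("di", "meetup-3"),   # came once, never returned
]
ambassadors = {"ada"}     # consistently hosts local events

events_per_person = Counter(person for person, _ in attendance)
returned = {p for p, n in events_per_person.items() if n >= 2}
churned = {p for p, n in events_per_person.items() if n == 1}

print(len(ambassadors))   # community ambassadors
print(len(returned))      # return attendees
print(round(100 * len(churned) / len(events_per_person)))  # churn rate, %
```

The hard part is not the arithmetic but collecting the log in the first place, which is exactly what the field-organizer data entry above was for.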

If you're curious, the corresponding terms on a political campaign for these three metrics are typically community captains, super volunteers, and flake rate.

#### Community ambassadors

A "community ambassador" is a user or enthusiast of your project who is willing to _consistently_ host local meetups or activities where she or he lives. Growing the number of community ambassadors and supporting them with resources and guidance are core to your community's strength and scale. You can probably hire for these if you have a lot of funding, but pure volunteers speak more to your project's allure.

These ambassadors should be your best friends, where you understand inside and out why they are motivated to evangelize your project in front of both peers and strangers. Their feedback on your project is also valuable and should be a critical part of your development roadmap and process. You can strategically cultivate ambassadors in different tech hubs around the world, so your project can count on someone with local knowledge to reach and serve users of different business cultures with different needs. The beauty of open source is that it's global by default; take advantage of it!

Some cities are arguably more of a developer hub than others. Some to consider are Amsterdam, Austin, Bangalore, Beijing, Berlin, Hangzhou, Istanbul, London, NYC, Paris, Seattle, Seoul, Shenzhen, Singapore, São Paulo, San Francisco-Bay Area, Vancouver, Tel Aviv, Tokyo, and Toronto (listed alphabetically and based on feedback I got through social media. Please add a comment if I missed any!). An example of this is the [Cloud Native Ambassadors program][3] of the Cloud Native Computing Foundation.

#### Return attendees

The number of return attendees is crucial to measuring the usefulness or stickiness of your community activities. Tracking return attendees is how you can draw a meaningful line between "the curious" and "the serious."

Trying to grow this number should be an obvious goal. However, that's not the only goal. This is the group whose motivation you want to understand most clearly. This is the group that reflects your project's user persona. This is the group that can probably give you the most valuable feedback. This is the group that will become your future community ambassadors.

Putting it differently, this is your [1,000 true fans][4] (if you can keep them).

Having hosted and attended my fair share of these community meetups, my observation is that most people attend to be educated on a technical topic, look for tools to solve problems at work, or network for their next job opportunity. What they are not looking for is being "marketed to."

There is a growing trend of developer community events becoming marketing events, especially when companies are flush with funding or have a strong marketing department that wants to "control the message." I find this trend troubling because it undermines community building.

Thus, be laser-focused on technical education. If a developer community gets taken over by marketing campaigns, your return-attendees metric won't be pretty.

#### Churned attendees rate

Tracking churned attendees is the flipside of the return-attendees coin, so I won't belabor the point. These are the people who join once and then disappear, or who show interest but don't show up. They are important because they tell you what isn't working and for whom, which is more actionable than just counting the people who show up.

One note of caution: Be brutally honest when measuring this number, and don't fool yourself (or others). On its own, if someone signs up but doesn't show up, it doesn't mean much. Similarly, if someone shows up once and never comes back, it doesn't mean much. Routinely sit down and assess _why_ someone isn't showing up, so you can re-evaluate and refine your community program and activities. Don't build the wrong incentives into your community-building operation to reward the wrong metric.

### Value of in-person connections

I purposely focused this post on measuring offline community activities because online activities are inherently more trackable and intuitive to digital-native open source creators.

Offline community activities are essential to any project's journey to reaching traction and prominence. I have yet to see a successful project that does not have a sizable offline presence, regardless of its online popularity.

Why is this the case? Why can't an open source community, usually born online, just stay and grow online?

Because technology choice is ultimately a human decision; therefore, face-to-face interaction is an irreplaceable element of new technology adoption. No one wants to be the guinea pig. No one wants to be the first. The most effective way to not feel like the first is to literally _see_ other human beings trying out or being interested in the same thing.

Being in the same room as other developers, learning about the same project, and doing that regularly is the most effective way to build trust for a project. And with trust comes traction.

### These three metrics work

There are other things you _can_ track, but more data does not necessarily mean clearer insight. Focusing your energy on these three metrics will make the most impact on your community-building operation. An open source community where the _number of ambassadors and return attendees are trending up_ and the _churned attendees rate is trending down_ is one that's healthy and growing in the right way.

* * *

_This article originally appeared on_ _[COSS Media][5]_ _and is republished with permission._

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/3/community-metrics

作者:[Kevin Xu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/kevin-xu
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_lead-steps-measure.png?itok=DG7rFZPk (Green graph of measurements)
[2]: https://hbr.org/2020/01/when-community-becomes-your-competitive-advantage
[3]: https://www.cncf.io/people/ambassadors/
[4]: https://kk.org/thetechnium/1000-true-fans/
[5]: https://coss.media/how-to-measure-community-building/
@ -0,0 +1,107 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Coronavirus challenges capacity, but core networks are holding up)
[#]: via: (https://www.networkworld.com/article/3533438/coronavirus-challenges-capacity-but-core-networks-are-holding-up.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)

Coronavirus challenges capacity, but core networks are holding up
======
COVID-19 has sent thousands of employees to work from home, placing untold stress on remote-access networks.
Cicilie Arcurs / Getty Images

As the coronavirus continues to spread and more people work from home, the impact of the increased traffic on networks in the US so far seems to be minimal.

No doubt web, VPN and data usage is growing dramatically with the [influx of remote workers][1]. For example, Verizon said it has seen a 34% increase in VPN traffic from March 10 to 17. It has also seen a 75% increase in gaming traffic, and web traffic increased by just under 20% in that time period, according to Verizon.

Verizon said its fiber optic and wireless networks “have been able to meet the shifting demands of customers and continue to perform well. In small pockets where there has been a significant increase in usage, our engineers have quickly added capacity to meet customers’ demand.”

“As we see more and more individuals work from home and students engage in online learning, it is a natural byproduct that we would see an increase in web traffic and access to VPN. And as more entertainment options are cancelled in communities across the US, an increase in video traffic and online gaming is not surprising,” said Kyle Malady, Chief Technology Officer for Verizon, in a [statement][3]. “We expect these peak hour percentages to fluctuate, so our engineers are continuing to closely monitor network usage patterns 24x7 and stand ready to adjust resources as changing demands arise."

As of March 16, AT&T said that its network continues to perform well. “In cities where the coronavirus has had the biggest impact, we are seeing fewer spikes in wireless usage around particular cell towers or particular times of day because more people are working from home rather than commuting to work and fewer people are gathering in large crowds at specific locations.”

In Europe, Vodafone says it has seen a 50% increase in data traffic in some markets.

“COVID-19 is already having a significant impact on our services and placing a greater demand on our network,” the company said in a statement. “Our technology teams throughout Europe have been focusing on capacity across our networks to make sure they are resilient and can absorb any new usage patterns arising as more people start working from home.”

In Europe there have also been indications of problems.

Ireland-based [Spearline][5], which monitors international phone numbers for connectivity and audio quality, said this week that Italy's landline connection rate continues to be volatile, with as much as a 10% failure rate, and audio quality is running approximately 4% below normal levels.

Other Spearline research says:

  * Spain saw a drop in connection rates to 98.5% on March 16, but it is improving again.
  * France saw a dip in connection rates approaching 5% on March 17. Good quality has been maintained overall, though periodic slippage has been observed.
  * Germany saw a 1.7% connection-failure rate on March 17. Good quality has been maintained, though periodic slippage has been observed.

Such problems are not showing up in the US at this point, Spearline said.

“The US is Spearline's most tested market, with test calls over three main sectors: enterprise, unified communications and carrier. To date, there has been no significant impact on either the connection rates or the audio quality on calls throughout the US,” said Matthew Lawlor, co-founder and chief technical officer at Spearline.

The future impact is the real unknown, of course, Lawlor said.

“There are many potential issues which have happened in other countries which may have a similar impact on US infrastructure. For example, in many countries there have been hardware issues where engineers are unable to get physical access to resolve the issue,” Lawlor said. “While rerouting calls may help resolve issues, it does put more pressure on other segments of your network.”

On March 19, one week after the virus was declared a pandemic, data analytics and broadband vendor OpenVault wrote:

  * Subscribers’ average usage from 9 a.m. to 5 p.m. has risen to 6.3 GB, 41.4% higher than the January figure of 4.4 GB.
  * During the same period, peak-hour (6 p.m. to 11 p.m.) usage has risen 17.2% from 5.0 GB per subscriber in January to 5.87 GB in March.
  * Overall daily usage has grown from 12.19 GB to 15.46 GB, an increase of 26.8%.

Based on the current rate of growth, OpenVault projected that consumption for March will reach nearly 400 GB per subscriber, an increase of almost 11% over the previous monthly record of 361 GB, established in January. In addition, OpenVault projects a new coronavirus-influenced run rate of 460 GB per subscriber per month going forward. OpenVault’s research is based on the usage of more than 1 million broadband subscribers throughout the United States, the company said.
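
OpenVault's growth percentages are easy to sanity-check from the figures quoted above:

```python
# Reproduce the growth rates from OpenVault's published figures.
daily_jan, daily_mar = 12.19, 15.46    # GB per subscriber per day
record_jan, projected_mar = 361, 400   # GB per subscriber per month

daily_growth = (daily_mar / daily_jan - 1) * 100
monthly_growth = (projected_mar / record_jan - 1) * 100

print(round(daily_growth, 1))    # 26.8 – matches the stated 26.8%
print(round(monthly_growth, 1))  # 10.8 – the "almost 11%" over January
```

Both numbers line up with the report, which is worth doing whenever a story quotes growth rates and raw figures side by side.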

“Broadband clearly is keeping the hearts of business, education and entertainment beating during this crisis,” said Mark Trudeau, CEO and founder of OpenVault, in a [statement][6]. “Networks built for peak-hours consumption so far are easily handling the rise in nine-to-five business-hours usage. We’ve had concerns about peak hours consumption given the increase in streaming entertainment and the trend toward temporary cessation of bandwidth caps, but operator networks seem to be handling the additional traffic without impacting customer experiences.”

Increased use of conferencing apps may affect their availability for reasons other than network capacity. For example, according to ThousandEyes, users around the globe were unable to connect to their Zoom meetings for approximately 20 minutes on Friday due to failed DNS resolution.

Others too are monitoring data traffic, looking for warning signs of slowdowns. “Traffic towards video conferencing, streaming services and news, e-commerce websites has surged. We've seen growth in traffic from residential broadband networks, and a slowing of traffic from businesses and universities," wrote Louis Poinsignon, a network engineer with Cloudflare, in a [blog][7] about Internet traffic patterns. He noted that on March 13, when the US announced a state of emergency, Cloudflare’s US data centers served 20% more traffic than usual.

Poinsignon noted that [Internet exchange points][8], where Internet service providers and content providers can exchange data directly (rather than via a third party), have also seen spikes in traffic. For example, at Amsterdam ([AMS-IX][9]), London ([LINX][10]) and Frankfurt ([DE-CIX][11]), a 10-20% increase was seen around March 9.

“Even though from time to time individual services, such as a web site or an app, have outages, the core of the Internet is robust,” Poinsignon wrote. “Traffic is shifting from corporate and university networks to residential broadband, but the Internet was designed for change.”

In related news:

  * Netflix said it would reduce streaming quality in Europe for at least the next 30 days to prevent the internet collapsing under the strain of unprecedented usage due to the coronavirus pandemic. "We estimate that this will reduce Netflix traffic on European networks by around 25% while also ensuring a good quality service for our members," Netflix said.
  * DISH announced that it is providing 20 MHz of AWS-4 (Band 66) and all of its 700 MHz spectrum to AT&T at no cost for 60 days. Last week, DISH began lending its complete 600 MHz portfolio of spectrum to T-Mobile. With these two agreements, DISH has activated most of its spectrum portfolio to enhance national wireless capacity as the nation confronts the COVID-19 crisis.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3533438/coronavirus-challenges-capacity-but-core-networks-are-holding-up.html

作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3532440/coronavirus-challenges-remote-networking.html
[2]: https://www.networkworld.com/newsletters/signup.html
[3]: https://www.verizon.com/about/news/how-americans-are-spending-their-time-temporary-new-normal
[4]: https://www.networkworld.com/blog/itaas-and-the-corporate-storage-technology/?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE22140&utm_content=sidebar (ITAAS and Corporate Storage Strategy)
[5]: https://www.spearline.com/
[6]: http://openvault.com/covid-19-impact-driving-business-hours-broadband-consumption-up-41/
[7]: https://blog.cloudflare.com/on-the-shoulders-of-giants-recent-changes-in-internet-traffic/
[8]: https://en.wikipedia.org/wiki/Internet_exchange_point
[9]: https://www.ams-ix.net/ams/documentation/total-stats
[10]: https://portal.linx.net/stats/lans
[11]: https://www.de-cix.net/en/locations/germany/frankfurt/statistics
[12]: https://www.facebook.com/NetworkWorld/
[13]: https://www.linkedin.com/company/network-world
@ -1,119 +0,0 @@
|
||||
Translating by MjSeven
|
||||
|
||||
Advanced use of the less text file viewer in Linux
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_terminals_0.png?itok=XwIRERsn)
|
||||
|
||||
I recently read Scott Nesbitt's article "[Using less to view text files at the Linux command line][1]" and was inspired to share additional tips and tricks I use with `less`.
|
||||
|
||||
### LESS env var
|
||||
|
||||
If you have an environment variable `LESS` defined (e.g., in your `.bashrc`), `less` treats it as a list of options, as if passed on the command line.
|
||||
|
||||
I use this:
|
||||
```
|
||||
LESS='-C -M -I -j 10 -# 4'
|
||||
|
||||
```
|
||||
|
||||
These mean:
|
||||
|
||||
* `-C` – Make full-screen reprints faster by not scrolling from the bottom.
|
||||
* `-M` – Show more information from the last (status) line. You can customize the information shown with `-PM`, but I usually do not bother.
|
||||
* `-I` – Ignore letter case (upper/lower) in searches.
|
||||
* `-j 10` – Show search results in line 10 of the terminal, instead of the first line. This way you have 10 lines of context each time you press `n` (or `N`) to jump to the next (or previous) match.
|
||||
* `-# 4` – Jump four characters to the right or left when pressing the Right or Left arrow key. The default is to jump half of the screen, which I usually find to be too much. Generally speaking, `less` seems to be (at least partially) optimized to the environment it was initially developed in, with slow modems and low-bandwidth internet connections, when it made sense to jump half a screen.
|
||||
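For example, the definition above can simply live in your `~/.bashrc` (a minimal sketch; the option values are the ones discussed in the list):

```
# Sketch of a ~/.bashrc entry; less reads LESS on every invocation,
# exactly as if these options had been typed on the command line.
export LESS='-C -M -I -j 10 -# 4'
```

After re-sourcing `.bashrc`, a plain `less file` behaves like `less -C -M -I -j 10 -# 4 file`; options given on the command line are processed after the variable, so they can still override it.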
|
||||
|
||||
|
||||
### PAGER env var
|
||||
|
||||
Many programs show information using the command set in the `PAGER` environment variable (if it's set). So, you can set `PAGER=less` in your `.bashrc` and have your program run `less`. Check the man page environ(7) (`man 7 environ`) for other such variables.
|
||||
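A minimal sketch for `.bashrc` (the variable name is standard; exactly which programs honor it varies):

```
# Programs such as man, git, and psql page their output through the
# command named in PAGER when it is set.
export PAGER=less
```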
|
||||
### -S
|
||||
|
||||
`-S` tells `less` to chop long lines instead of wrapping them. I rarely find a need for this unless (and until) I've started viewing a file. Fortunately, you can type all command-line options inside `less` as if they were keyboard commands. So, if I want to chop long lines while I'm already in a file, I can simply type `-S`.
|
||||
|
||||
|
||||
Here's an example I use a lot:
|
||||
```
|
||||
su - postgres
|
||||
|
||||
export PAGER=less # Because I didn't bother editing postgres' .bashrc on all the machines I use it on
|
||||
|
||||
psql
|
||||
|
||||
```
|
||||
|
||||
Sometimes when I later view the output of a `SELECT` command with a very wide output, I type `-S` so it will be formatted nicely. If it jumps too far when I press the Right arrow to see more (because I didn't set `-#`), I can type `-#8`, then each Right arrow press will move eight characters to the right.
|
||||
|
||||
Sometimes after typing `-S` too many times, I exit psql and run it again after entering:
|
||||
```
|
||||
export LESS=-S
|
||||
|
||||
```
|
||||
|
||||
### F
|
||||
|
||||
The command `F` makes `less` work like `tail -f`—waiting until more data is added to the file before showing it. One advantage this has over `tail -f` is that highlighting search matches still works. So you can enter `less /var/log/logfile`, search for something—which will highlight all occurrences of it (unless you used `-g`)—and then press `F`. When more data is written to the log, `less` will show it and highlight the new matches.
|
||||
|
||||
After you press `F`, you can press `Ctrl+C` to stop it from looking for new data (this will not kill it); go back into the file to see older stuff, search for other things, etc.; and then press `F` again to look at more new data.
|
||||
|
||||
### Searching
|
||||
|
||||
Searches use the system's regexp library, and this usually means you can use extended regular expressions. In particular, searching for `one|two|three` will find and highlight all occurrences of one, two, or three.
|
||||
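The same alternation syntax can be tried outside `less` with `grep -E`, since both rely on extended regular expressions (a quick sketch, not part of the original workflow):

```
# Only the lines matching one of the alternatives are printed;
# inside less the same pattern would highlight them instead.
printf 'one\nfour\ntwo\n' | grep -E 'one|two|three'
# prints:
# one
# two
```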
|
||||
Another pattern I use a lot, especially with wide log lines (e.g., ones that span more than one terminal line), is `.*something.*`, which highlights the entire line. This pattern makes it much easier to see where a line starts and finishes. I also combine these, such as: `.*one thing.*|.*another thing.*`, or `key: .*|.*marker.*` to see the contents of `key` (e.g., in a log file with a dump of some dictionary/hash) and highlight relevant marker lines (so I have a context), or even, if I know the value is surrounded by quotes:
|
||||
```
|
||||
key: '[^']*'|.*marker.*
|
||||
|
||||
```
|
||||
|
||||
`less` maintains a history of your search items and saves them to disk for future invocations. When you press `/` (or `?`), you can go through this history with the Up or Down arrow (as well as do basic line editing).
|
||||
|
||||
I stumbled upon what seems to be a very useful feature when skimming through the `less` man page while writing this article: skipping uninteresting lines with `&!pattern`. For example, while looking for something in `/var/log/messages`, I used to iterate through this list of commands:
|
||||
```
|
||||
egrep -v 'systemd: Started Session' /var/log/messages | less
|
||||
|
||||
egrep -v 'systemd: Started Session|systemd: Starting Session' /var/log/messages | less
|
||||
|
||||
egrep -v 'systemd: Started Session|systemd: Starting Session|User Slice' /var/log/messages | less
|
||||
|
||||
egrep -v 'systemd: Started Session|systemd: Starting Session|User Slice|dbus' /var/log/messages | less
|
||||
|
||||
egrep -v 'systemd: Started Session|systemd: Starting Session|User Slice|dbus|PackageKit Daemon' /var/log/messages | less
|
||||
|
||||
```
|
||||
|
||||
But now I know how to do the same thing within `less`. For example, I can type `&!systemd: Started Session`, then decide I want to get rid of `systemd: Starting Session`, so I add it by typing `&!` and use the Up arrow to get the previous search from the history. Then I type `|systemd: Starting Session` and press `Enter`, continuing to add more items the same way until I filter out enough to see the more interesting stuff.
|
||||
|
||||
### =
|
||||
|
||||
The command `=` shows more information about the file and location, even more than `-M`. If the file is very long, and calculating `=` takes too long, you can press `Ctrl+C` and it will stop trying.
|
||||
|
||||
If the content you're viewing comes from a pipe rather than a file, `=` (and `-M`) will not show what `less` does not yet know, including the number of lines and bytes in the input. To see that data, if you know the command feeding the pipe will finish quickly, you can jump to the end with `G`, and then `less` will start showing that information.
|
||||
|
||||
If you press `G` and the command writing to the pipe takes longer than expected, you can press `Ctrl+C`, and the command will be killed. Pressing `Ctrl+C` will kill it even if you didn't press `G`, so be careful not to press `Ctrl+C` accidentally if you don't intend to kill it. For this reason, if the command does something (that is, it's not only showing information), it's usually safer to write its output to a file and view the file in a separate terminal, instead of using a pipe.
|
||||
|
||||
### Why you need less
|
||||
|
||||
`less` is a very powerful program, and unlike newer contenders in this space, such as `most` and `moar`, you are likely to find it on almost all the systems you use, just like `vi`. So, even if you use GUI viewers or editors, it's worth investing some time going through the `less` man page, at least to get a feeling of what's available. This way, when you need to do something that might be covered by existing functionality, you'll know to search the manual page or the internet to find what you need.
|
||||
|
||||
For more information, visit the [less home page][2]. The site has a nice FAQ with more tips and tricks.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/5/advanced-use-less-text-file-viewer
|
||||
|
||||
作者:[Yedidyah Bar David][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/didib
|
||||
[1]:http://opensource.com/article/18/4/using-less-view-text-files-command-line
|
||||
[2]:http://www.greenwoodsoftware.com/less/
|
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: translator: (MjSeven)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
@ -1,300 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (way-ww)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How To Enable Or Disable SSH Access For A Particular User Or Group In Linux?)
|
||||
[#]: via: (https://www.2daygeek.com/allow-deny-enable-disable-ssh-access-user-group-in-linux/)
|
||||
[#]: author: (2daygeek http://www.2daygeek.com/author/2daygeek/)
|
||||
|
||||
How To Enable Or Disable SSH Access For A Particular User Or Group In Linux?
|
||||
======
|
||||
|
||||
As per your organization's standard policy, you may need to allow only a specific list of users to access the Linux system.
|
||||
|
||||
Or you may need to allow access only to a few groups.
|
||||
|
||||
How can you achieve this, and what is the simplest way to do it?
|
||||
|
||||
There are many ways to do this.
|
||||
|
||||
However, it is best to go with a simple and easy method.
|
||||
|
||||
The simplest is to make the necessary changes in the `/etc/ssh/sshd_config` file.
|
||||
|
||||
In this article, we will show you how to do this in detail.
|
||||
|
||||
Why are we doing this? For security reasons. Visit the following URL to learn more about **[openSSH][1]** usage.
|
||||
|
||||
### What Is SSH?
|
||||
|
||||
OpenSSH stands for OpenBSD Secure Shell. Secure Shell (SSH) is a free, open source networking tool that allows us to access remote systems over an unsecured network using the Secure Shell (SSH) protocol.
|
||||
|
||||
It uses a client-server architecture and handles user authentication, encryption, file transfers between computers, and tunneling.
|
||||
|
||||
These tasks can also be accomplished via traditional tools such as telnet or rcp, but those are insecure and transfer passwords in cleartext.
|
||||
|
||||
### How To Allow A User To Access SSH In Linux?
|
||||
|
||||
We can allow/enable SSH access for a particular user or a list of users using the following method.
|
||||
|
||||
If you would like to allow more than one user, add the users on the same line, separated by spaces.
|
||||
|
||||
To do so, just append the following value to the `/etc/ssh/sshd_config` file. In this example, we are going to allow SSH access for `user3`.
|
||||
|
||||
```
|
||||
# echo "AllowUsers user3" >> /etc/ssh/sshd_config
|
||||
```
|
||||
|
||||
You can double check this by running the following command.
|
||||
|
||||
```
|
||||
# cat /etc/ssh/sshd_config | grep -i allowusers
|
||||
AllowUsers user3
|
||||
```
|
||||
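As a side note, `AllowUsers` accepts several names on one line. A hypothetical fragment for `/etc/ssh/sshd_config` (the extra user name and host pattern here are made-up examples, not from this article):

```
# Space-separated list; sshd also accepts user@host patterns.
AllowUsers user3 user4 admin@192.168.1.*
```

After editing, restart the ssh service as shown below for the change to take effect.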
|
||||
That’s it. Just bounce the ssh service and see the magic.
|
||||
|
||||
```
|
||||
# systemctl restart sshd
|
||||
|
||||
# service sshd restart
|
||||
```
|
||||
|
||||
Simply open a new terminal or session and try to access the Linux system as a different user. `user2` is not allowed to log in via SSH and will get an error message as shown below.
|
||||
|
||||
```
|
||||
# ssh [email protected]
|
||||
[email protected]'s password:
|
||||
Permission denied, please try again.
|
||||
```
|
||||
|
||||
Output:
|
||||
|
||||
```
|
||||
Mar 29 02:00:35 CentOS7 sshd[4900]: User user2 from 192.168.1.6 not allowed because not listed in AllowUsers
|
||||
Mar 29 02:00:35 CentOS7 sshd[4900]: input_userauth_request: invalid user user2 [preauth]
|
||||
Mar 29 02:00:40 CentOS7 unix_chkpwd[4902]: password check failed for user (user2)
|
||||
Mar 29 02:00:40 CentOS7 sshd[4900]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.1.6 user=user2
|
||||
Mar 29 02:00:43 CentOS7 sshd[4900]: Failed password for invalid user user2 from 192.168.1.6 port 42568 ssh2
|
||||
```
|
||||
|
||||
At the same time, `user3` is allowed to log in to the system because it is in the AllowUsers list.
|
||||
|
||||
```
|
||||
# ssh [email protected]
|
||||
[email protected]'s password:
|
||||
[[email protected] ~]$
|
||||
```
|
||||
|
||||
Output:
|
||||
|
||||
```
|
||||
Mar 29 02:01:13 CentOS7 sshd[4939]: Accepted password for user3 from 192.168.1.6 port 42590 ssh2
|
||||
Mar 29 02:01:13 CentOS7 sshd[4939]: pam_unix(sshd:session): session opened for user user3 by (uid=0)
|
||||
```
|
||||
|
||||
### How To Deny Users To Access SSH In Linux?
|
||||
|
||||
We can deny/disable SSH access for a particular user or a list of users using the following method.
|
||||
|
||||
If you would like to disable more than one user, add the users on the same line, separated by spaces.
|
||||
|
||||
To do so, just append the following value to the `/etc/ssh/sshd_config` file. In this example, we are going to disable SSH access for `user1`.
|
||||
|
||||
```
|
||||
# echo "DenyUsers user1" >> /etc/ssh/sshd_config
|
||||
```
|
||||
|
||||
You can double check this by running the following command.
|
||||
|
||||
```
|
||||
# cat /etc/ssh/sshd_config | grep -i denyusers
|
||||
DenyUsers user1
|
||||
```
|
||||
|
||||
That’s it. Just bounce the ssh service and see the magic.
|
||||
|
||||
```
|
||||
# systemctl restart sshd
|
||||
|
||||
# service sshd restart
|
||||
```
|
||||
|
||||
Simply open a new terminal or session and try to access the Linux system as the denied user. `user1` is in the DenyUsers list, so you will get an error message as shown below when you try to log in.
|
||||
|
||||
```
|
||||
# ssh [email protected]
|
||||
[email protected]'s password:
|
||||
Permission denied, please try again.
|
||||
```
|
||||
|
||||
Output:
|
||||
|
||||
```
|
||||
Mar 29 01:53:42 CentOS7 sshd[4753]: User user1 from 192.168.1.6 not allowed because listed in DenyUsers
|
||||
Mar 29 01:53:42 CentOS7 sshd[4753]: input_userauth_request: invalid user user1 [preauth]
|
||||
Mar 29 01:53:46 CentOS7 unix_chkpwd[4755]: password check failed for user (user1)
|
||||
Mar 29 01:53:46 CentOS7 sshd[4753]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.1.6 user=user1
|
||||
Mar 29 01:53:48 CentOS7 sshd[4753]: Failed password for invalid user user1 from 192.168.1.6 port 42522 ssh2
|
||||
```
|
||||
|
||||
### How To Allow Groups To Access SSH In Linux?
|
||||
|
||||
We can allow/enable SSH access for a particular group or groups using the following method.
|
||||
|
||||
If you would like to allow more than one group, add the groups on the same line, separated by spaces.
|
||||
|
||||
To do so, just append the following value to the `/etc/ssh/sshd_config` file. In this example, we are going to allow SSH access for the `2g-admin` group.
|
||||
|
||||
```
|
||||
# echo "AllowGroups 2g-admin" >> /etc/ssh/sshd_config
|
||||
```
|
||||
|
||||
You can double check this by running the following command.
|
||||
|
||||
```
|
||||
# cat /etc/ssh/sshd_config | grep -i allowgroups
|
||||
AllowGroups 2g-admin
|
||||
```
|
||||
|
||||
Run the following command to list the users that belong to this group.
|
||||
|
||||
```
|
||||
# getent group 2g-admin
|
||||
2g-admin:x:1005:user1,user2,user3
|
||||
```
|
||||
|
||||
That’s it. Just bounce the ssh service and see the magic.
|
||||
|
||||
```
|
||||
# systemctl restart sshd
|
||||
|
||||
# service sshd restart
|
||||
```
|
||||
|
||||
`user1` is allowed to log in to the system because it belongs to the `2g-admin` group.
|
||||
|
||||
```
|
||||
# ssh [email protected]
|
||||
[email protected]'s password:
|
||||
[[email protected] ~]$
|
||||
```
|
||||
|
||||
Output:
|
||||
|
||||
```
|
||||
Mar 29 02:10:21 CentOS7 sshd[5165]: Accepted password for user1 from 192.168.1.6 port 42640 ssh2
|
||||
Mar 29 02:10:22 CentOS7 sshd[5165]: pam_unix(sshd:session): session opened for user user1 by (uid=0)
|
||||
```
|
||||
|
||||
`user2` is allowed to log in to the system because it belongs to the `2g-admin` group.
|
||||
|
||||
```
|
||||
# ssh [email protected]
|
||||
[email protected]'s password:
|
||||
[[email protected] ~]$
|
||||
```
|
||||
|
||||
Output:
|
||||
|
||||
```
|
||||
Mar 29 02:10:38 CentOS7 sshd[5225]: Accepted password for user2 from 192.168.1.6 port 42642 ssh2
|
||||
Mar 29 02:10:38 CentOS7 sshd[5225]: pam_unix(sshd:session): session opened for user user2 by (uid=0)
|
||||
```
|
||||
|
||||
When you try to log in to the system with any other user who is not part of this group, you will get an error message as shown below.
|
||||
|
||||
```
|
||||
# ssh [email protected]
|
||||
[email protected]'s password:
|
||||
Permission denied, please try again.
|
||||
```
|
||||
|
||||
Output:
|
||||
|
||||
```
|
||||
Mar 29 02:12:36 CentOS7 sshd[5306]: User ladmin from 192.168.1.6 not allowed because none of user's groups are listed in AllowGroups
|
||||
Mar 29 02:12:36 CentOS7 sshd[5306]: input_userauth_request: invalid user ladmin [preauth]
|
||||
Mar 29 02:12:56 CentOS7 unix_chkpwd[5310]: password check failed for user (ladmin)
|
||||
Mar 29 02:12:56 CentOS7 sshd[5306]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.1.6 user=ladmin
|
||||
Mar 29 02:12:58 CentOS7 sshd[5306]: Failed password for invalid user ladmin from 192.168.1.6 port 42674 ssh2
|
||||
```
|
||||
|
||||
### How To Deny Group To Access SSH In Linux?
|
||||
|
||||
We can deny/disable SSH access for a particular group or groups using the following method.
|
||||
|
||||
If you would like to disable more than one group, add the groups on the same line, separated by spaces.
|
||||
|
||||
To do so, just append the following value to the `/etc/ssh/sshd_config` file.
|
||||
|
||||
```
|
||||
# echo "DenyGroups 2g-admin" >> /etc/ssh/sshd_config
|
||||
```
|
||||
|
||||
You can double check this by running the following command.
|
||||
|
||||
```
|
||||
# cat /etc/ssh/sshd_config | grep -i denygroups
|
||||
DenyGroups 2g-admin
|
||||
|
||||
# getent group 2g-admin
|
||||
2g-admin:x:1005:user1,user2,user3
|
||||
```
|
||||
|
||||
That’s it. Just bounce the ssh service and see the magic.
|
||||
|
||||
```
|
||||
# systemctl restart sshd
|
||||
|
||||
# service sshd restart
|
||||
```
|
||||
|
||||
`user1` is not allowed to log in to the system because it belongs to the `2g-admin` group, which is listed in DenyGroups.
|
||||
|
||||
```
|
||||
# ssh [email protected]
|
||||
[email protected]'s password:
|
||||
Permission denied, please try again.
|
||||
```
|
||||
|
||||
Output:
|
||||
|
||||
```
|
||||
Mar 29 02:17:32 CentOS7 sshd[5400]: User user1 from 192.168.1.6 not allowed because a group is listed in DenyGroups
|
||||
Mar 29 02:17:32 CentOS7 sshd[5400]: input_userauth_request: invalid user user1 [preauth]
|
||||
Mar 29 02:17:38 CentOS7 unix_chkpwd[5402]: password check failed for user (user1)
|
||||
Mar 29 02:17:38 CentOS7 sshd[5400]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.1.6 user=user1
|
||||
Mar 29 02:17:41 CentOS7 sshd[5400]: Failed password for invalid user user1 from 192.168.1.6 port 42710 ssh2
|
||||
```
|
||||
|
||||
Anyone can log in to the system except members of the `2g-admin` group. Hence, the `ladmin` user is allowed to log in.
|
||||
|
||||
```
|
||||
# ssh [email protected]
|
||||
[email protected]'s password:
|
||||
[[email protected] ~]$
|
||||
```
|
||||
|
||||
Output:
|
||||
|
||||
```
|
||||
Mar 29 02:19:13 CentOS7 sshd[5432]: Accepted password for ladmin from 192.168.1.6 port 42716 ssh2
|
||||
Mar 29 02:19:13 CentOS7 sshd[5432]: pam_unix(sshd:session): session opened for user ladmin by (uid=0)
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/allow-deny-enable-disable-ssh-access-user-group-in-linux/
|
||||
|
||||
作者:[2daygeek][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: http://www.2daygeek.com/author/2daygeek/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.2daygeek.com/category/ssh-tutorials/
|
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (zhangxiangping)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
@ -264,7 +264,7 @@ via: https://opensource.com/article/19/7/python-google-natural-language-api
|
||||
|
||||
作者:[JR Oakes][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[zhangxiangping](https://github.com/zhangxiangping)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,96 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (10 articles to learn Linux your way)
|
||||
[#]: via: (https://opensource.com/article/19/12/learn-linux)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
|
||||
10 articles to learn Linux your way
|
||||
======
|
||||
It's been a good year for Linux, so take a look back at the top 10 Linux
|
||||
articles on Opensource.com from 2019.
|
||||
![Penguins gathered together in the Arctic][1]
|
||||
|
||||
The year 2019 has been good for Linux with Opensource.com readers. Obviously, the term "Linux" itself is weighted: Does it refer to the kernel or the desktop or the ecosystem? In this look back at the top Linux articles of the year, I've intentionally taken a broad view in defining the top 10 Linux articles (for some definition of "top" and some definition of "Linux"). Here they are, offered in no particular order.
|
||||
|
||||
### A beginner's guide to Linux permissions
|
||||
|
||||
[_A beginner's guide to Linux permissions_][2] by Bryant Son introduces new users to the concept of file permissions with graphics and charts to illustrate each point. It can be hard to come up with visuals for concepts that are, at their core, purely text-based, and this article is friendly for the visual learners out there. I also like how Bryant stays focused. Any discussion of file permissions can lead to several related topics (like ownership and access control lists and so on), but this article is dedicated to explaining one thing and explaining it well.
|
||||
|
||||
### Why I made the switch from Mac to Linux
|
||||
|
||||
Matthew Broberg offers an insightful and honest look at his migration to Linux from MacOS in [_Why I made the switch from Mac to Linux_][3]. Changing platforms is always tough, and it's important to record what's behind the decision to switch. Matt's article, I think, serves several purposes, but the two most important for me: it's an invitation for the Linux community to support him by answering questions and offering potential solutions, and it's a good data point for others who are considering Linux adoption.
|
||||
|
||||
### Troubleshooting slow WiFi on Linux
|
||||
|
||||
In [_Troubleshooting slow WiFi on Linux_][4], David Clinton provides a useful analysis of a problem everyone has on every platform—and has tips on how to solve it. It's a good example of an "incidentally Linux" tip that not only helps everyday people with everyday problems but also shows non-Linux users how approachable troubleshooting (on any platform) is.
|
||||
|
||||
### How GNOME uses Git
|
||||
|
||||
[_How GNOME uses Git_][5] by Molly de Blanc takes a look behind the scenes, revealing how one of the paragons of open source software (the GNOME desktop) uses one of the other paragons of open source (Git) for development. It's always heartening to me to hear about an open source project that defaults to an open source solution for whatever needs to be done. Believe it or not, this isn't always the case, but for GNOME, it's an important and welcoming part of the project's identity.
|
||||
|
||||
### Virtual filesystems in Linux: Why we need them and how they work
|
||||
|
||||
Alison Chaiken masterfully explains what is considered incomprehensible to many users in [_Virtual filesystems in Linux: Why we need them and how they work_][6]. Understanding what a filesystem is and what it does is one thing, but _virtual_ ones aren't even, by definition, real. And yet Linux delivers them in a way that even casual users can benefit from, and Alison's article explains it in a way that anyone can understand. As a bonus, Alison goes even deeper in the second half of the article and demonstrates how to use bcc scripts to monitor everything she just taught you.
|
||||
|
||||
### Understanding file paths and how to use them
|
||||
|
||||
I thought [_Understanding file paths and how to use them_][7] was important to write about because it's a concept most users (on any platform) don't seem to be taught. It's a strange phenomenon, because now, more than ever, the _file path_ is something people see literally on a daily basis: Nearly all internet URLs contain a file path telling you exactly where within the domain you are. I often wonder why computer education doesn't start with the internet, the most familiar app of all and arguably the most heavily used supercomputer in existence, and use it to explain the appliances we interface with each day. (I guess it would help if those appliances were running Linux, but we're working on that.)
|
||||
|
||||
### Inter-process communication in Linux
|
||||
|
||||
[_Inter-process communication in Linux: Shared storage_][8] by Marty Kalin delves into the developer side of Linux, explaining IPC and how to interact with it in your code. I'm cheating by including this article because it's actually a three-part series, but it's the best explanation of its kind. There is very little documentation that manages to explain how Linux handles IPC, much less what IPC is, why it's important, or how to take advantage of it when programming. It's normally a topic you work your way up to in university. Now you can read all about it here instead.
|
||||
|
||||
### Understanding system calls on Linux with strace
|
||||
|
||||
[_Understanding system calls on Linux with strace_][9] by Gaurav Kamathe is highly technical in ways I wish that every conference talk I've ever seen about **strace** was. This is a clear and helpful demonstration of a complex but amazingly useful command. To my surprise, the command I've found myself using since this article isn't the titular command, but **ltrace** (to see which functions are called by a command). Obviously, this article's packed with information and is a handy reference for developers and QA testers.
|
||||
|
||||
### How the Linux desktop has grown
|
||||
|
||||
[_How the Linux desktop has grown_][10] by Jim Hall is a visual journey through the history of the Linux desktop. It starts with [TWM][11] and passes by [FVWM][12], [GNOME][13], [KDE][14], and others. If you're new to Linux, this is a fascinating history lesson from someone who was there (and has the screenshots to prove it). If you've been with Linux for many years, then this will definitely bring back memories. In the end, though, one thing is certain: Anyone who can still locate screenshots from 20 years ago is a superhuman data archivist.
|
||||
|
||||
### Create your own video streaming server with Linux
|
||||
|
||||
[_Create your own video streaming server with Linux_][15] by Aaron J. Prisk breaks down more than just a few preconceptions most of us have about the services we take for granted. Because services like YouTube and Twitch exist, many people assume that those are the only gateways to broadcasting video to the world. Of course, people used to think that Windows and Mac were the only gateways into computing, and that, thankfully, turned out to be a gross miscalculation. In this article, Aaron sets up a video-streaming server and even manages to find space to talk about [OBS][16] so you can create videos to stream. Is it a fun weekend project or the start of a new career? You decide.
|
||||
|
||||
### 10 moments that shaped Linux history
|
||||
|
||||
[_10 moments that shaped Linux history_][17] by Alan Formy-Duval attempts the formidable task of choosing just 10 things to highlight in the history of Linux. It's an exercise in futility, of course, because there have been so many important moments, so I love how Alan filters it through his own experience. For example, when was it obvious that Linux was going to last? When Alan realized that all the systems he maintained at work were running Linux. There's a beauty to interpreting history this way because the moments of importance will differ for each person. There's no definitive list for Linux, or articles about Linux, or for open source. You make your own list, and you make yourself a part of it.

### What do you want to learn?

What else do you want to know about Linux? Please tell us about it in the comments, or [write an article][18] for Opensource.com about your experience with Linux.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/12/learn-linux

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_Penguin_Image_520x292_12324207_0714_mm_v1a.png?itok=p7cWyQv9 (Penguins gathered together in the Artic)
[2]: https://opensource.com/article/19/6/understanding-linux-permissions
[3]: https://opensource.com/article/19/10/why-switch-mac-linux
[4]: http://opensource.com/article/19/4/troubleshooting-wifi-linux
[5]: https://opensource.com/article/19/10/how-gnome-uses-git
[6]: https://opensource.com/article/19/3/virtual-filesystems-linux
[7]: https://opensource.com/article/19/8/understanding-file-paths-linux
[8]: https://opensource.com/article/19/4/interprocess-communication-linux-storage
[9]: https://opensource.com/article/19/2/linux-backup-solutions
[10]: https://opensource.com/article/19/8/how-linux-desktop-grown
[11]: https://github.com/freedesktop/twm
[12]: http://www.fvwm.org/
[13]: http://gnome.org
[14]: http://kde.org
[15]: https://opensource.com/article/19/1/basic-live-video-streaming-server
[16]: https://opensource.com/life/15/12/real-time-linux-video-editing-with-obs-studio
[17]: https://opensource.com/article/19/4/top-moments-linux-history
[18]: https://opensource.com/how-submit-article
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (caiichenr)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -1,86 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (10 Linux command tutorials for beginners and experts)
[#]: via: (https://opensource.com/article/19/12/linux-commands)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)

10 Linux command tutorials for beginners and experts
======
Learn how to make Linux do what you need it to do in Opensource.com's
top 10 articles about Linux commands from 2019.
![Penguin driving a car with a yellow background][1]

Using Linux _well_ means understanding what commands are available and what they're capable of doing for you. We have covered a lot of them on Opensource.com during 2019, and here are 10 favorites from the bunch.

### Using the force at the Linux command line

The Force has a light side and a dark side. Properly understanding that is crucial to true mastery. In his article [_Using the force at the Linux command line_][2], Alan Formy-Duval explains the **-f** option (also known as **\--force**) for several popular and sometimes dangerous commands.
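As a minimal, self-contained illustration of what `-f` changes — here for `rm`, one of the commands such articles typically cover (the temporary directory exists only for the demo):

```shell
# rm exits non-zero when asked to delete a file that does not exist;
# rm -f succeeds silently in the same situation.
tmp=$(mktemp -d)
rm "$tmp/ghost" 2>/dev/null || echo "plain rm failed"
rm -f "$tmp/ghost" && echo "forced rm succeeded"
```

The same convenience is exactly what makes `-f` dangerous: it suppresses the errors you might have wanted to see.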

### Intro to the Linux useradd command

Sharing accounts is a bad idea. Instead, give separate accounts to different people (and even different roles) with the quintessential **useradd** command. Part of his venerable series on basic Linux administration, Alan Formy-Duval provides an [_Intro to the Linux useradd command_][3], and, as usual, he explains it in _plain English_ so that both new and experienced admins can understand it.

### Linux commands to display your hardware information

What's _inside_ the box? Sometimes it's useful to inspect your hardware without using a screwdriver. In [_Linux commands to display your hardware information_][4], Howard Fosdick provides both popular and obscure commands to help you dig deep into the computer you're using, the computer you're testing at the store before buying, or the computer you're trying to repair.

### How to encrypt files with gocryptfs on Linux

Our files hold lots of private data, from social security numbers to personal letters to loved ones. In [_How to encrypt files with gocryptfs on Linux_][5], Brian "Bex" Exelbierd explains how to keep *private* what's meant to be private. As a bonus, he demonstrates encrypting files in a way that has little to no impact on your existing workflow. This isn't a complex PGP-style puzzle of key management and background key agents; this is quick, seamless, and secure file encryption.

### How to use advanced rsync for large Linux backups

In the New Year, many people will resolve to be more diligent about making backups. Alan Formy-Duval must have made that resolution years ago, because in [_How to use advanced rsync for large Linux backups_][6], he displays remarkable familiarity with the file synchronization command. You might not remember all the syntax right away, but the idea is to read and process the options, construct your backup command, and then automate it. That's the smart way to use **rsync**, and it's the _only_ way to do backups reliably.

### Using more to view text files at the Linux command line

In Scott Nesbitt's article [_Using more to view text files at the Linux command line_][7], the good old default pager **more** finally gets the spotlight. Many people install and use **less**, because it's more flexible than **more**. However, with more and more systems being implemented in the sparsest of containers, the luxury of fancy new tools like **less** or **most** sometimes just doesn't exist. Knowing and using **more** is simple, it's a common default, and it's the production system's debugging tool of last resort.

### What you probably didn't know about sudo

The **sudo** command is famous to a fault. People know the **sudo** term, and most of us believe we know what it does. And we're a little bit correct, but as Peter Czanik reveals in his article [_What you probably didn't know about sudo_][8], there's a lot more to the command than just "Simon says." Like that classic childhood game, the **sudo** command is powerful and also prone to silly mistakes—only with greater potential for horrible consequences. This is one game you do not want to lose!

### How to program with Bash: Syntax and tools

If you're a Linux, BSD, or Mac (and lately, Windows) user, you may have used the Bash shell interactively. It's a great shell for quick, one-off commands, which is why so many Linux users love to use it as their primary user interface. However, Bash is much more than just a command prompt. It's also a programming language, and if you're already using Bash commands, then the path to automation has never been more straightforward. Learn all about it in David Both's excellent [_How to program with Bash: Syntax and tools_][9].

### Master the Linux ls command

The **ls** command is one of those commands that merits a two-letter name; one-letter commands are an optimization for slow terminals where each letter causes a significant delay and also a nice bonus for lazy typists. Seth Kenlon explains how you can [_Master the Linux ls command_][10] and he does so with his usual clarity and pragmatism. Most significantly, in a system where "everything is a file," being able to list the files is crucial.

### Getting started with the Linux cat command

The **cat** command (short for con_cat_enate) is deceptively simple. Whether you use it to quickly see the contents of a file or to pipe the contents to another command, you may not be using **cat** to its full potential. Alan Formy-Duval's elucidating [_Getting started with the Linux cat command_][11] offers new ideas to take advantage of a command that lets you open a file without feeling like you've opened it. As a bonus, learn all about **zcat** so you can decompress files without all the trouble of decompression! It's a small and simple thing, but _this_ is what makes Linux great.
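The **zcat** bonus mentioned above can be demonstrated in a couple of lines (the file is created here just for the demo):

```shell
# zcat prints a gzip-compressed file without a separate
# decompression step and without touching the original archive.
tmp=$(mktemp -d)
printf 'hello from zcat\n' > "$tmp/note.txt"
gzip "$tmp/note.txt"        # leaves note.txt.gz behind
zcat "$tmp/note.txt.gz"     # prints: hello from zcat
```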

### Continue the journey

Don't let Opensource.com's 10 best articles about Linux commands of 2019 be the end of your journey. There's much more to discover about Linux and its versatile prompt, so stay tuned in 2020 for more insights. And, if there's a Linux command you want us to know about, please tell us about it in the comments, or share your knowledge with Opensource.com readers by [submitting an article][12] about your favorite Linux command.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/12/linux-commands

作者:[Moshe Zadka][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/car-penguin-drive-linux-yellow.png?itok=twWGlYAc (Penguin driving a car with a yellow background)
[2]: https://opensource.com/article/19/5/may-the-force-linux
[3]: https://opensource.com/article/19/10/linux-useradd-command
[4]: https://opensource.com/article/19/9/linux-commands-hardware-information
[5]: https://opensource.com/article/19/8/how-encrypt-files-gocryptfs
[6]: https://opensource.com/article/19/5/advanced-rsync
[7]: https://opensource.com/article/19/1/more-text-files-linux
[8]: https://opensource.com/article/19/10/know-about-sudo
[9]: https://opensource.com/article/19/10/programming-bash-syntax-tools
[10]: https://opensource.com/article/19/7/master-ls-command
[11]: https://opensource.com/article/19/2/getting-started-cat-command
[12]: https://opensource.com/how-submit-article
@ -7,34 +7,34 @@
[#]: via: (https://itsfoss.com/beautiful-linux-distributions/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

Here Are The Most Beautiful Linux Distributions in 2020
2020 年最漂亮的 Linux 发行版
======

It’s a no-brainer that there’s a Linux distribution for every user – no matter what they prefer or what they want to do.
毫无疑问,不管用户喜欢什么、想做什么,总有一款适合他的 Linux 发行版。

Starting out with Linux? You can go with the [Linux distributions for beginners][1]. Switching from Windows? You have [Windows-like Linux distributions][2]. Have an old computer? You can [use lightweight Linux distros][3].
刚开始使用 Linux?你可以选择[面向初学者的 Linux 发行版][1]。从 Windows 转过来?你可以使用[类似 Windows 的 Linux 发行版][2]。有一台旧电脑?你可以[使用轻量级 Linux 发行版][3]。

In this list, I’m going to focus only on the most beautiful Linux distros out there.
在这个列表中,我只关注最漂亮的 Linux 发行版。

### Top 7 Most Beautiful Linux Distributions
### 7 款最漂亮的 Linux 发行版

![][4]

Wait! Is there a thing called a beautiful Linux distribution? Is it not redundant considering the fact that you can customize the looks of any distribution and make it look better with [themes][5] and [icons][6]?
等一下!真的存在“最漂亮的 Linux 发行版”这种东西吗?考虑到你可以用[主题][5]和[图标][6]自定义任何发行版的外观、让它更漂亮,这样说是不是多余?

You are right about that. But here, I am talking about the distributions that look great without any tweaks and customization effort from the user’s end. These distros provide a seamless, pleasant desktop experience right out of the box.
你说得对。但在这里,我说的是那些无需用户做任何调整和自定义工作就很好看的发行版。这些发行版提供开箱即用的无缝、愉悦的桌面体验。

**Note:** _The list is in no particular order of ranking._
**注意:** _列表排名不分先后。_

#### 1\. elementary OS

![][7]

elementary OS is one of the most beautiful Linux distros out there. It leans on a macOS-ish look while providing a great user experience for Linux users. If you’re already comfortable with macOS – you will have no problem using the elementary OS.
elementary OS 是这里最漂亮的 Linux 发行版之一。它采用类似 macOS 的外观,同时为 Linux 用户提供极好的使用体验。如果你已经习惯了 macOS,那么使用 elementary OS 不会有任何问题。

Also, elementary OS is based on Ubuntu – so you can easily find plenty of applications to get things done.
此外,elementary OS 基于 Ubuntu,因此你可以很容易地找到大量的应用程序来完成工作。

Not just limited to the look and feel – but the elementary OS is always hard at work to introduce meaningful changes. So, you can expect the user experience to improve with every update you get.
不仅仅局限于外观和感觉,elementary OS 也一直在努力引入有意义的改变。因此,你可以期待每次更新都会改善用户体验。

[elementary OS][8]

@ -42,11 +42,11 @@ Not just limited to the look and feel – but the elementary OS is always hard a

![][9]

Deepin is yet another beautiful Linux distro originally based on Debian’s stable branch. The animations (look and feel) could be too overwhelming for some – but it looks pretty.
Deepin 是另一个漂亮的 Linux 发行版,最初基于 Debian 的稳定分支。对于一些人来说,它的动画效果(外观和感觉)可能过于花哨,但它看起来确实漂亮。

It features its own Deepin Desktop Environment that involves a mix of essential features for the best user experience possible. It may not exactly resemble the UI of any other distribution but it’s quite easy to get used to.
它以自己的深度桌面环境(Deepin Desktop Environment)为特色,融合了各种必备功能,以尽可能提供最好的用户体验。它的界面可能与其它发行版都不太相似,但很容易习惯。

My personal attention would go to the control center and the color scheme featured in Deepin OS. You can give it a try – it’s worth taking a look.
我个人特别关注 Deepin OS 的控制中心和配色方案。你可以试一试,它值得一看。

[Deepin][10]

@ -54,11 +54,11 @@ My personal attention would go to the control center and the color scheme featur

![][11]

Pop!_OS manages to offer a great UI on top of Ubuntu while offering a pure [GNOME][12] experience.
Pop!_OS 成功地在 Ubuntu 之上提供了极好的用户界面,同时提供纯净的 [GNOME][12] 体验。

It also happens to be my personal favorite which I utilize as my primary desktop OS. Pop!_OS isn’t flashy – nor involves any fancy animations. However, they’ve managed to get things right by having a perfect combo of icon/themes – while polishing the user experience from a technical point of view.
它也恰好是我个人的最爱,我把它用作我的主力桌面系统。Pop!_OS 既不浮华,也没有花哨的动画。不过,它通过图标和主题的完美组合把事情做好,同时从技术角度打磨了用户体验。

I don’t want to initiate a [Ubuntu vs Pop OS][13] debate but if you’re used to Ubuntu, Pop!_OS can be a great alternative for potentially better user experience.
我不想发起一场 [Ubuntu 与 Pop OS][13] 的争论,但如果你已经习惯了 Ubuntu,Pop!_OS 可能是一个能带来更好用户体验的极佳替代品。

[Pop!_OS][14]

@ -66,11 +66,11 @@ I don’t want to initiate a [Ubuntu vs Pop OS][13] debate but if you’re used

![][15]

Manjaro Linux is an [Arch][16]-based Linux distribution. While [installing Arch Linux][17] is a slightly complicated job, Manjaro provides an easier and smoother Arch experience.
Manjaro Linux 是一个基于 [Arch][16] 的 Linux 发行版。虽然[安装 Arch Linux][17] 稍微有些复杂,但 Manjaro 提供了更轻松、更流畅的 Arch 体验。

It offers a variety of [desktop environment editions][18] to choose from while downloading. No matter what you choose, you still get enough options to customize the look and feel or the layout.
它提供多种[桌面环境版本][18]供下载时选择。无论你选择哪一个,你都仍然有足够的选项来自定义外观、感觉和布局。

To me, it looks quite fantastic for an Arch-based distribution that works out of the box – you can give it a try!
对我来说,作为一个开箱即用的基于 Arch 的发行版,它看起来非常棒。你可以试一试!

[Manjaro Linux][19]

@ -78,13 +78,13 @@ To me, it looks quite fantastic for an Arch-based distribution that works out of

![][20]

[KDE Neon][21] is for the users who want a simplified approach to the design language but still get a great user experience.
[KDE Neon][21] 适合那些想要简约设计语言、但仍希望获得良好用户体验的用户。

It is a lightweight Linux distro which is based on Ubuntu. As the name suggests, it features the KDE Plasma desktop and looks absolutely beautiful.
它是一个基于 Ubuntu 的轻量级 Linux 发行版。顾名思义,它以 KDE Plasma 桌面为特色,看起来美轮美奂。

KDE Neon gives you the latest and greatest KDE Plasma desktop and KDE applications. Unlike [Kubuntu][22] or other KDE-based distributions, you don’t have to wait for months to get the new [KDE software][23].
KDE Neon 给你最新、最好的 KDE Plasma 桌面和 KDE 应用程序。不像 [Kubuntu][22] 或其它基于 KDE 的发行版,你不需要等待数月才能获取新的 [KDE 软件][23]。

You get a lot of customization options built-in with the KDE desktop – so feel free to try it out!
KDE 桌面内置了很多自定义选项,所以请随意尝试!

[KDE Neon][24]

@ -92,11 +92,11 @@ You get a lot of customization options built-in with the KDE desktop – so feel

![][25]

Without a doubt, Zorin OS is an impressive Linux distro that manages to provide a good user experience – even with its lite edition.
毫无疑问,Zorin OS 是一个令人印象深刻的 Linux 发行版,即使是它的精简版也能提供良好的用户体验。

You can try either the full version or the lite edition (with [Xfce desktop][26]). The UI is tailored for Windows and macOS users to get used to. While based on Ubuntu, it provides a great user experience with what it has to offer.
你可以尝试完整版或精简版(使用 [Xfce 桌面][26])。其用户界面专为让 Windows 和 macOS 用户容易上手而定制。虽然基于 Ubuntu,它依然提供了极好的用户体验。

If you like its user interface – you can also try [Zorin Grid][27] to manage multiple computers running Zorin OS at your workplace/home. With the ultimate edition, you can also control the layout of your desktop (as shown in the image above).
如果你喜欢它的用户界面,你还可以尝试 [Zorin Grid][27],来管理工作场所或家里运行 Zorin OS 的多台计算机。使用终极版,你还可以控制桌面的布局(如上图所示)。

[Zorin OS][28]

@ -104,33 +104,33 @@ If you start like its user interface – you can also try [Zorin Grid][27] to ma

![][29]

[Nitrux OS][30] is a unique take on a Linux distribution which is somewhat based on Ubuntu – but not completely.
[Nitrux OS][30] 是一个独特的 Linux 发行版,它在某种程度上基于 Ubuntu,但并不完全基于 Ubuntu。

It focuses on providing a good user experience to the users who are looking for a unique design language with a fresh take on a Linux distro. It uses Nomad desktop which is based on KDE.
它专注于为那些寻求独特设计语言、想在 Linux 发行版上获得新鲜感的用户提供良好的用户体验。它使用基于 KDE 的 Nomad 桌面。

Nitrux encourages the use of [AppImage][31] for applications. But you can also use Arch Linux’s pacman package manager in Nitrux which is based on Ubuntu. Awesome, isn’t it?
Nitrux 鼓励使用 [AppImage][31] 应用程序。不过,在基于 Ubuntu 的 Nitrux 中,你也可以使用 Arch Linux 的 pacman 软件包管理器。很棒,不是吗?

Even if it’s not the perfect OS to have installed (yet), it sure looks pretty and good enough for most of the basic tasks. You can also know more about it when you read our [interview with Nitrux’s founder][32].
尽管它(还)不是一个适合安装使用的完美操作系统,但它确实很漂亮,足以应付大多数基本任务。想了解更多,可以阅读我们的 [Nitrux 创始人访谈][32]。

Here’s a slightly old video of Nitrux but it still looks good:
这是一个稍微有些旧的 Nitrux 视频,但它看起来仍然不错:

[Nitrux OS][33]

#### Bonus: eXtern OS (in ‘stagnated’ development)
#### 附加:eXtern OS(处于“停滞”开发状态)

![][34]

If you want to try an experimental Linux distro, eXtern OS is going to be beautiful.
如果你想尝试一个实验性的 Linux 发行版,eXtern OS 会很漂亮。

It isn’t actively maintained and should not be used for production systems. Yet, it provides a unique user experience (though not polished enough).
它没有被积极地维护,因此不应该用于生产系统。不过,它提供了独特的用户体验(尽管还不够完善)。

Just for the sake of trying a good-looking Linux distro, you can give it a try to experience it.
只是为了尝试一个好看的 Linux 发行版的话,你可以体验一下它。

[eXtern OS][35]

**Wrapping Up**
**总结**

Now, as the saying goes, beauty lies in the eyes of the beholder. So this list of beautiful Linux distributions is from my point of view. Feel free to disagree (politely of course) and mention your favorites.
俗话说,情人眼里出西施。这份最漂亮的 Linux 发行版列表只代表我的观点。欢迎(礼貌地)提出不同意见,并说说你最喜欢的发行版。

--------------------------------------------------------------------------------

@ -138,7 +138,7 @@ via: https://itsfoss.com/beautiful-linux-distributions/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[robsean](https://github.com/robsean)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

@ -1,335 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Try this Bash script for large filesystems)
[#]: via: (https://opensource.com/article/20/2/script-large-files)
[#]: author: (Nick Clifton https://opensource.com/users/nickclifton)

Try this Bash script for large filesystems
======
A simple script to list files, directories, executables, and links.
![bash logo on green background][1]

Have you ever wanted to list all the files in a directory, but just the files, nothing else? How about just the directories? If you have, then the following script, which is open source under GPLv3, could be what you have been looking for.

Of course, you could use the **find** command:

```
find . -maxdepth 1 -type f -print
```

But this is cumbersome to type, produces unfriendly output, and lacks some of the refinement of the **ls** command. You could also combine **ls** and **grep** to achieve the same result:

```
ls -F . | grep -v /
```

But again, this is clunky. This script provides a simple alternative.

### Usage

The script provides four main functions, which depend upon which name you call: **lsf** lists files, **lsd** lists directories, **lsx** lists executables, and **lsl** lists links.

There is no need to install multiple copies of the script, as symbolic links work. This saves space and makes updating the script easier.
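A sketch of that setup, plus a tiny demonstration of the underlying trick: the script inspects `basename $0`, so the name it is invoked under selects the behavior. The echo script below is a stand-in for the real one, used so the example is self-contained:

```shell
# Stand-in script that dispatches on its own invocation name,
# the same mechanism the real script's parse_args() uses.
dir=$(mktemp -d)
cat > "$dir/lsf" <<'EOF'
#!/bin/bash
echo "invoked as: $(basename "$0")"
EOF
chmod +x "$dir/lsf"

# One copy on disk, three extra names via symlinks.
for name in lsd lsl lsx; do
    ln -s lsf "$dir/$name"
done

"$dir/lsf"   # prints: invoked as: lsf
"$dir/lsx"   # prints: invoked as: lsx
```

Updating the single real file updates all four commands at once.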

The script works by using the **find** command to do the searching, and then it runs **ls** on each item it finds. The nice thing about this is that any arguments given to the script are passed to the **ls** command. So, for example, this lists all files, even those that start with a dot:

```
lsf -a
```

To list directories in long format, use the **lsd** command:

```
lsd -l
```

You can provide multiple arguments, and also file and directory paths.

This provides a long classified listing of all the files in the current directory's parent directory, and in the **/usr/bin** directory:

```
lsf -F -l .. /usr/bin
```

One thing that the script does not currently handle, however, is recursion. This command lists only the files in the current directory:

```
lsf -R
```

The script does not descend into any subdirectories. This is something that may be fixed one day.

### Internals

The script is written in a top-down fashion, with the initial functions at the start of the script and the body of the work performed near the end. There are only two functions that really matter in the script. The **parse_args()** function peruses the command line, separates options from pathnames, and separates script-specific options from the **ls** command-line options.

The **list_things_in_dir()** function takes a directory name as an argument and runs the **find** command on it. Each item found is passed to the **ls** command for display.

### Conclusion

This is a simple script to accomplish a simple function. It is a time saver and can be surprisingly useful when working with large filesystems.

### The script

```
#!/bin/bash

# Script to list:
#   directories    (if called "lsd")
#   files          (if called "lsf")
#   links          (if called "lsl")
#   or executables (if called "lsx")
# but not any other type of filesystem object.
# FIXME: add lsp (list pipes)
#
# Usage:
#   <command_name> [switches valid for ls command] [dirname...]
#
# Works with names that include spaces and that start with a hyphen.
#
# Created by Nick Clifton.
# Version 1.4
# Copyright (c) 2006, 2007 Red Hat.
#
# This is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published
# by the Free Software Foundation; either version 3, or (at your
# option) any later version.

# It is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.

# ToDo:
#   Handle recursion, eg: lsl -R
#   Handle switches that take arguments, eg --block-size
#   Handle --almost-all, --ignore-backups, --format and --ignore

main ()
{
    init

    parse_args ${1+"$@"}

    list_objects

    exit 0
}

report ()
{
    echo $prog": " ${1+"$@"}
}

fail ()
{
    report " Internal error: " ${1+"$@"}
    exit 1
}

# Initialise global variables.
init ()
{
    # Default to listing things in the current directory.
    dirs[0]=".";

    # num_dirs is the number of directories to be listed minus one.
    # This is because we are indexing the dirs[] array from zero.
    num_dirs=0;

    # Default to ignoring things that start with a period.
    no_dots=1

    # Note - the global variables 'type' and 'opts' are initialised in
    # parse_args function.
}

# Parse our command line
parse_args ()
{
    local no_more_args

    no_more_args=0 ;

    prog=`basename $0` ;

    # Decide if we are listing files or directories.
    case $prog in
        lsf | lsf.sh)
            type=f
            opts="";
            ;;
        lsd | lsd.sh)
            type=d
            # The -d switch to "ls" is presumed when listing directories.
            opts="-d";
            ;;
        lsl | lsl.sh)
            type=l
            # Use -d to prevent the listed links from being followed.
            opts="-d";
            ;;
        lsx | lsx.sh)
            type=f
            find_extras="-perm /111"
            ;;
        *)
            fail "Unrecognised program name: '$prog', expected either 'lsd', 'lsf', 'lsl' or 'lsx'"
            ;;
    esac

    # Locate any additional command line switches for ls and accumulate them.
    # Likewise accumulate non-switches to the directories list.
    while [ $# -gt 0 ]
    do
        case "$1" in
            # FIXME: Handle switches that take arguments, eg --block-size
            # FIXME: Properly handle --almost-all, --ignore-backups, --format
            # FIXME: and --ignore
            # FIXME: Properly handle --recursive
            -a | -A | --all | --almost-all)
                no_dots=0;
                ;;
            --version)
                report "version 1.2"
                exit 0
                ;;
            --help)
                case $type in
                    d) report "a version of 'ls' that lists only directories" ;;
                    l) report "a version of 'ls' that lists only links" ;;
                    f) if [ "x$find_extras" = "x" ] ; then
                           report "a version of 'ls' that lists only files" ;
                       else
                           report "a version of 'ls' that lists only executables";
                       fi ;;
                esac
                exit 0
                ;;
            --)
                # A switch to say that all further items on the command line are
                # arguments and not switches.
                no_more_args=1 ;
                ;;
            -*)
                if [ "x$no_more_args" = "x1" ] ;
                then
                    dirs[$num_dirs]="$1";
                    let "num_dirs++"
                else
                    # Check for a switch that just uses a single dash, not a double
                    # dash. This could actually be multiple switches combined into
                    # one word, eg "lsd -alF". In this case, scan for the -a switch.
                    # XXX: FIXME: The use of =~ requires bash v3.0+.
                    if [[ "x${1:1:1}" != "x-" && "x$1" =~ "x-.*a.*" ]] ;
                    then
                        no_dots=0;
                    fi
                    opts="$opts $1";
                fi
                ;;
            *)
                dirs[$num_dirs]="$1";
                let "num_dirs++"
                ;;
        esac
        shift
    done

    # Remember that we are counting from zero not one.
    if [ $num_dirs -gt 0 ] ;
    then
        let "num_dirs--"
    fi
}

list_things_in_dir ()
{
    local dir

    # Paranoia checks - the user should never encounter these.
    if test "x$1" = "x" ;
    then
        fail "list_things_in_dir called without an argument"
    fi

    if test "x$2" != "x" ;
    then
        fail "list_things_in_dir called with too many arguments"
    fi

    # Use quotes when accessing $dir in order to preserve
    # any spaces that might be in the directory name.
    dir="${dirs[$1]}";

    # Catch directory names that start with a dash - they
    # confuse pushd.
    if test "x${dir:0:1}" = "x-" ;
    then
        dir="./$dir"
    fi

    if [ -d "$dir" ]
    then
        if [ $num_dirs -gt 0 ]
        then
            echo " $dir:"
        fi

        # Use pushd rather than passing the directory name to find so that the
        # names that find passes on to xargs do not have any paths prepended.
        pushd "$dir" > /dev/null
        if [ $no_dots -ne 0 ] ; then
            find . -maxdepth 1 -type $type $find_extras -not -name ".*" -printf "%f\000" \
                | xargs --null --no-run-if-empty ls $opts -- ;
        else
            find . -maxdepth 1 -type $type $find_extras -printf "%f\000" \
                | xargs --null --no-run-if-empty ls $opts -- ;
        fi
        popd > /dev/null
    else
        report "directory '$dir' could not be found"
    fi
}

list_objects ()
{
    local i

    i=0;
    while [ $i -le $num_dirs ]
    do
        list_things_in_dir i
        let "i++"
    done
}

# Invoke main
main ${1+"$@"}
```


--------------------------------------------------------------------------------

via: https://opensource.com/article/20/2/script-large-files

作者:[Nick Clifton][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/nickclifton
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background)
@ -1,110 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Waterfox: Firefox Fork With Legacy Add-ons Options)
[#]: via: (https://itsfoss.com/waterfox-browser/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

Waterfox: Firefox Fork With Legacy Add-ons Options
======

_**Brief: In this week’s open source software highlight, we take a look at a Firefox-based browser that supports legacy extensions that Firefox no longer supports, while potentially providing a fast user experience.**_

When it comes to web browsers, Google Chrome leads the market share. [Mozilla Firefox is still there, providing hope for a mainstream web browser that respects your privacy][1].

Firefox has improved a lot lately, and one of the side effects of those improvements is the removal of legacy add-ons. If your favorite add-on disappeared in the last few months/years, you have good news in the form of Waterfox.
|
||||
|
||||
Attention!
|
||||
|
||||
It’s been brought to our notice that Waterfox has been acquired by System1. This company also acquired privacy focused search engine Startpage.
|
||||
While System1 claims that they are providing privacy focused products because ‘there is a demand’, we cannot vouch for their claim.
|
||||
In other words, it’s up to you to trust System1 and Waterfox.
|
||||
|
||||
### Waterfox: A Firefox-based Browser
|
||||
|
||||
![Waterfox Classic][2]
|
||||
|
||||
[Waterfox][3] is a useful open-source browser built on top of Firefox that focuses on privacy and supports legacy extensions. It doesn’t pitch itself as a privacy-paranoid browser but it does respect the basics.
|
||||
|
||||
You get two separate Waterfox browser versions. The Current edition aims to provide a modern experience, while the Classic version focuses on supporting [NPAPI plugins][4] and [bootstrap extensions][5].
|
||||
|
||||
![Waterfox Classic][6]
|
||||
|
||||
If you do not need to utilize bootstrap extensions but rely on [WebExtensions][7], Waterfox Current is the one you should go for.
|
||||
|
||||
And, if you need to set up a browser that relies extensively on NPAPI plugins or bootstrap extensions, the Waterfox Classic version will be suitable for you.
|
||||
|
||||
So, if you like Firefox but want to try something different along the same lines, this is a Firefox alternative for the job.
|
||||
|
||||
### Features of Waterfox
|
||||
|
||||
![Waterfox Current][8]
|
||||
|
||||
Of course, technically, you should be able to do a lot of things that Mozilla Firefox supports.
|
||||
|
||||
So, I’ll just highlight all the important features of Waterfox in a list here.
|
||||
|
||||
* Supports NPAPI Plugins
|
||||
* Supports Bootstrap Extensions
|
||||
* Offers separate editions for legacy extension support and modern WebExtension support.
|
||||
* Cross-platform support (Windows, Linux, and macOS)
|
||||
* Theme customization
|
||||
* Archived Add-ons supported
|
||||
|
||||
|
||||
|
||||
### Installing Waterfox on Ubuntu/Linux
|
||||
|
||||
Unlike other popular browsers, you don’t get a package to install. So, you will have to download the archived package from its [official download page][9].
|
||||
|
||||
![][10]
|
||||
|
||||
Depending on which edition (Current/Classic) you want, just download the file, which will be a **.tar.bz2** archive.
|
||||
|
||||
Once downloaded, simply extract the file.
|
||||
|
||||
Next, head to the extracted folder and look for the “**Waterfox**” executable file. You can simply double-click it to start the browser.
|
||||
|
||||
If that doesn’t work, you can open a terminal and navigate to the extracted **Waterfox** folder. Once there, you can run it with a single command. Here’s how that looks:
|
||||
|
||||
```
# extract the downloaded archive first (the filename may differ)
tar xjf waterfox-*.tar.bz2
cd waterfox-classic
./waterfox
```
|
||||
|
||||
In either case, you can also head to its [GitHub page][11] and explore more options to get it installed on your system.
|
||||
|
||||
[Download Waterfox][3]
|
||||
|
||||
**Wrapping up**
|
||||
|
||||
I fired it up on my Pop!_OS 19.10 installation and it worked really well for me. Though I don’t think I could switch from Firefox to Waterfox because I am not using any legacy add-on. It could still be an impressive option for certain users.
|
||||
|
||||
You could give it a try and let me know your thoughts in the comments below.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/waterfox-browser/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://itsfoss.com/why-firefox/
|
||||
[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/waterfox-classic.png?fit=800%2C423&ssl=1
|
||||
[3]: https://www.waterfox.net/
|
||||
[4]: https://en.wikipedia.org/wiki/NPAPI
|
||||
[5]: https://wiki.mozilla.org/Extension_Manager:Bootstrapped_Extensions
|
||||
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/waterfox-classic-screenshot.jpg?ssl=1
|
||||
[7]: https://wiki.mozilla.org/WebExtensions
|
||||
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/waterfox-screenshot.jpg?ssl=1
|
||||
[9]: https://www.waterfox.net/download/
|
||||
[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/waterfox-download-page.jpg?ssl=1
|
||||
[11]: https://github.com/MrAlex94/Waterfox
|
@@ -1,148 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Communicating with other users on the Linux command line)
|
||||
[#]: via: (https://www.networkworld.com/article/3530343/communicating-with-other-users-on-the-linux-command-line.html)
|
||||
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
|
||||
|
||||
Communicating with other users on the Linux command line
|
||||
======
|
||||
|
||||
Thinkstock / Linux
|
||||
|
||||
Sending messages to other users on the Linux command line can be very easy, but there are a number of commands that you might want to consider. In this post, we’ll look at four commands and see how each of them works.
|
||||
|
||||
### wall
|
||||
|
||||
The **wall** command (as in "write all") allows you to send a message to all users who are currently logged into the system. This implies that the system is likely a server and that users are working on the command line. While the **wall** command is generally used by sysadmins to send out notices (e.g., that the server is going down for maintenance), it can be used by any user.
|
||||
|
||||
A sysadmin might send out a message like this:
|
||||
|
||||
```
|
||||
$ wall The system will be going down in 15 minutes to address a serious problem
|
||||
```
|
||||
|
||||
Everyone logged into the system will see something like this:
|
||||
|
||||
```
|
||||
Broadcast message from admin@dragonfly (pts/0) (Thu Mar 5 08:56:42 2020):
|
||||
|
||||
The system will be going down in 15 minutes to address a serious problem
|
||||
```
|
||||
|
||||
If you want to use single quote marks in your message, enclose the message in double quote marks like this:
|
||||
|
||||
```
|
||||
$ wall "Don't forget to save your work before logging off"
|
||||
```
|
||||
|
||||
The outside quote marks will not show up in the transmitted message, but, without them, the command sits and waits for a closing single quote.
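As an alternative to wrestling with quoting, `wall` also reads its message from standard input when no argument is given. A small sketch (the `2>/dev/null || true` guard simply keeps the pipeline harmless on systems where no terminals are listening):

```shell
# Build the message in a variable, then pipe it to wall; no shell
# quoting gymnastics are needed because wall reads standard input.
MSG="Don't forget to save your work before logging off"
printf '%s\n' "$MSG" | wall 2>/dev/null || true
```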
|
||||
|
||||
### mesg
|
||||
|
||||
If, for some reason, you don’t want to accept messages from other users, you can stop them from arriving with the **mesg** command. Note that **mesg** controls whether your terminal accepts messages at all; it cannot block a specific user. Use the “n” argument to refuse messages or the “y” argument to allow them:

```
$ mesg n
$ mesg y
```

Blocked senders will not be notified that their messages have been refused, and running **mesg** with no argument reports the current setting.
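Since `mesg` exits with status 0 when messages are allowed and non-zero otherwise, the setting can also be checked from a script. A small sketch (the stderr redirect covers sessions without a controlling terminal):

```shell
# Record whether this terminal currently accepts messages,
# based on mesg's exit status.
if mesg 2>/dev/null; then
    STATE="allowed"
else
    STATE="refused"
fi
echo "messages $STATE"
```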
|
||||
|
||||
### write
|
||||
|
||||
Another command for sending text without resorting to email is **write**. This command can be used to communicate with a specific user.
|
||||
|
||||
```
|
||||
$ write nemo
|
||||
Are you still at your desk?
|
||||
I need to talk with you right away.
|
||||
^C
|
||||
```
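Like `wall`, `write` reads standard input, so it can also deliver a one-shot message from a script. A sketch (the username is just an example, and the guard keeps the pipeline from failing when that user isn't logged in):

```shell
# Send a single line to user "nemo" without an interactive session.
MSG="Build finished -- results are in the usual place"
echo "$MSG" | write nemo 2>/dev/null || true
```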
|
||||
|
||||
Enter your text and use **^C** to exit when you’re done. The command allows you to send text, but doesn’t start a two-way conversation. It just sends the text. If the user is logged in on more than one terminal, you can specify which terminal you want to send the message to or you can rely on the system to choose the one with the shortest idle time.
|
||||
|
||||
```
|
||||
$ write nemo pts/1
|
||||
```
|
||||
|
||||
If the user you are trying to write to has messages blocked, you should see something like this:
|
||||
|
||||
```
|
||||
$ write nemo
|
||||
write: nemo has messages disabled
|
||||
```
|
||||
|
||||
### talk/ytalk
|
||||
|
||||
The **talk** or **ytalk** command gives you a chance to have an interactive chat with one or more other users. The command brings up a double-pane (top and bottom) window. Each participant types into the top portion of their display and sees the responses in the bottom section(s). A recipient can respond to a talk request by typing "talk" followed by the username of the person addressing them.
|
||||
|
||||
```
|
||||
Message from Talk_Daemon@dragonfly at 10:10 ...
|
||||
talk: connection requested by dory@127.0.0.1.
|
||||
talk: respond with: talk dory@127.0.0.1
|
||||
|
||||
$ talk dory
|
||||
```
|
||||
|
||||
The window can involve more than two participants if **ytalk** is used. As you can see in the example below (the result of the "talk dory" command shown above), talk is often ytalk.
|
||||
|
||||
```
|
||||
----------------------------= YTalk version 3.3.0 =--------------------------
|
||||
Is the report ready?
|
||||
|
||||
-------------------------------= nemo@dragonfly =----------------------------
|
||||
Just finished it
|
||||
```
|
||||
|
||||
As explained above, on the other side of the conversation, the talk session window panes are reversed:
|
||||
|
||||
```
|
||||
----------------------------= YTalk version 3.3.0 =--------------------------
|
||||
Just finished it
|
||||
|
||||
-------------------------------= dory@dragonfly =----------------------------
|
||||
Is the report ready?
|
||||
```
|
||||
|
||||
Again, use **^C** to exit.
|
||||
|
||||
To talk with someone on another system, you just need to append the hostname or IP address to the username (the same form the talk daemon prompt above uses):

```
$ talk nemo@192.168.0.11
```
|
||||
|
||||
### Wrap-Up
|
||||
|
||||
There are a number of basic commands for sending messages to other logged-in users on Linux systems, and they can be especially useful when you need to send out a quick message to all of the users, prefer a quick exchange to a phone call or want to easily involve more than two people in a quick messaging session.
|
||||
|
||||
Some commands, like **wall**, allow a message to be broadcast, but are not interactive. Others, like **talk**, allow both lengthy and multi-user chats, avoiding the need to set up a conference call when a fairly quick exchange of information is all that's required.
|
||||
|
||||
Join the Network World communities on [Facebook][2] and [LinkedIn][3] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3530343/communicating-with-other-users-on-the-linux-command-line.html
|
||||
|
||||
作者:[Sandra Henry-Stocker][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
|
||||
[b]: https://github.com/lujun9972
|
||||
|
||||
[2]: https://www.facebook.com/NetworkWorld/
|
||||
[3]: https://www.linkedin.com/company/network-world
|
@@ -1,103 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Basilisk: A Firefox Fork For The Classic Looks and Classic Extensions)
|
||||
[#]: via: (https://itsfoss.com/basilisk-browser/)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
|
||||
|
||||
Basilisk: A Firefox Fork For The Classic Looks and Classic Extensions
|
||||
======
|
||||
|
||||
_**Brief: Basilisk is a Firefox fork that supports legacy extensions and much more. Here, we take a look at its features and try it out.**_
|
||||
|
||||
### Basilisk: Open Source XUL-Based Web Browser
|
||||
|
||||
Even though it is better to stick with the regular web browsers like Firefox or Chromium available for Linux – it doesn’t hurt to know about other browsers. Recently, I stumbled upon a Firefox fork, [Basilisk][1] web browser that features the classic Firefox user interface along with legacy add-ons support (just like [Waterfox][2]).
|
||||
|
||||
![itsfoss.com homepage on Basilisk][3]
|
||||
|
||||
If you are in dire need of a legacy extension or miss the classic look and feel of Firefox, the Basilisk web browser can save your day. The browser is maintained by the team behind the [Pale Moon][4] browser (another Firefox fork, which I will be looking at next).
|
||||
|
||||
If you’re looking for open-source [Chrome alternatives][5], you may have a quick look at what Basilisk offers.
|
||||
|
||||
**Note:** _Basilisk is development software. Even though I didn’t have any major usability issues during the time I used it, you should not rely on it as your only browser._
|
||||
|
||||
### Features of Basilisk web browser
|
||||
|
||||
![][6]
|
||||
|
||||
Basilisk works out of the box. However, here are some features you might want to look at before considering using it:
|
||||
|
||||
* [XUL][7]-based web browser
|
||||
  * It features the ‘Australis’ Firefox interface, which was quite popular in Firefox versions 29–56.
|
||||
* [NPAPI][8] plugins supported (Flash, Unity, Java, etc.)
|
||||
* Support for XUL/Overlay Mozilla-style extensions.
|
||||
* Uses [Goanna][9] open-source browser engine which is a fork of Mozilla’s [Gecko][10]
|
||||
* Does not use Rust or the Photon user interface
|
||||
* Supports 64-bit systems only
|
||||
|
||||
|
||||
|
||||
### Installing Basilisk on Linux
|
||||
|
||||
You may not find it listed in your Software Center. So, you will have to head to its official [download page][11] to get the tarball (tar.xz) file.
|
||||
|
||||
Once you download it, simply extract it and head inside the folder. There, you will find a “**Basilisk**” executable file. Simply run it by double-clicking it or by right-clicking and selecting “**Run**”.
|
||||
|
||||
You may check out its [GitHub page][12] for more information.
|
||||
|
||||
![][13]
|
||||
|
||||
You can also use the terminal: navigate to the directory you downloaded the archive to and run the following commands:
|
||||
|
||||
```
# extract the downloaded tarball first (file and folder names may differ)
tar xJf basilisk-latest.linux64.tar.xz
cd basilisk-latest.linux64
cd basilisk
./basilisk
```
|
||||
|
||||
[Download Basilisk][1]
|
||||
|
||||
### Using Basilisk browser
|
||||
|
||||
![][14]
|
||||
|
||||
Basilisk is a decent Firefox fork if you want legacy extension support. It is actively developed by the team behind Pale Moon and is potentially a great option for users who want the classic look and feel of Mozilla’s Firefox (before the Quantum update) without compromising on modern web support.
|
||||
|
||||
I didn’t have any issues browsing webpages. However, I did notice that **YouTube** detects it as an obsolete browser and warns that it will stop supporting it soon.
|
||||
|
||||
_**So, I’m not sure if Basilisk will be a fit for every web service out there – but if you really need the archived legacy extensions that you used on Firefox’s older releases, this could be a solution for your problem.**_
|
||||
|
||||
**Wrapping Up**
|
||||
|
||||
Do you think a Firefox fork is worth trying out? What do you prefer? Share your thoughts in the comments below.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/basilisk-browser/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.basilisk-browser.org/
|
||||
[2]: https://itsfoss.com/waterfox-browser/
|
||||
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/03/basilisk-itsfoss.jpg?ssl=1
|
||||
[4]: https://www.palemoon.org
|
||||
[5]: https://itsfoss.com/open-source-browsers-linux/
|
||||
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/03/basilisk-options-1.jpg?ssl=1
|
||||
[7]: https://developer.mozilla.org/en-US/docs/Archive/Mozilla/XUL
|
||||
[8]: https://wiki.mozilla.org/NPAPI
|
||||
[9]: https://en.wikipedia.org/wiki/Goanna_(software)
|
||||
[10]: https://developer.mozilla.org/en-US/docs/Mozilla/Gecko
|
||||
[11]: https://www.basilisk-browser.org/download.shtml
|
||||
[12]: https://github.com/MoonchildProductions/Basilisk
|
||||
[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/03/basilisk-folder-1.jpg?ssl=1
|
||||
[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/03/basilisk-browser-1.jpg?ssl=1
|
@@ -0,0 +1,203 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Run Kubernetes on a Raspberry Pi with k3s)
|
||||
[#]: via: (https://opensource.com/article/20/3/kubernetes-raspberry-pi-k3s)
|
||||
[#]: author: (Lee Carpenter https://opensource.com/users/carpie)
|
||||
|
||||
Run Kubernetes on a Raspberry Pi with k3s
|
||||
======
|
||||
Create your own three-node Kubernetes cluster with these easy-to-follow instructions.
|
||||
![A ship wheel with someone steering][1]
|
||||
|
||||
For a long time, I've been interested in building a [Kubernetes][2] cluster out of a stack of inexpensive Raspberry Pis. Following along with various tutorials on the web, I was able to get Kubernetes installed and working in a three-Pi cluster. However, the RAM and CPU requirements on the master node overwhelmed my Pi. This caused poor performance when doing various Kubernetes tasks, and it made an in-place upgrade of Kubernetes impossible.
|
||||
|
||||
As a result, I was very excited to see the [k3s project][3]. K3s is billed as a lightweight Kubernetes for use in resource-constrained environments. It is also optimized for ARM processors. This makes running a Raspberry Pi-based Kubernetes cluster much more feasible. In fact, we are going to create one in this article.
|
||||
|
||||
### Materials needed
|
||||
|
||||
To create the Kubernetes cluster described in this article, we are going to need:
|
||||
|
||||
* At least one Raspberry Pi (with SD card and power adapter)
|
||||
* Ethernet cables
|
||||
* A switch or router to connect all our Pis together
|
||||
|
||||
|
||||
|
||||
We will be installing k3s from the internet, so the Pis will need to be able to access the internet through the router.
|
||||
|
||||
### An overview of our cluster
|
||||
|
||||
For this cluster, we are going to use three Raspberry Pis. The first we'll name **kmaster** and assign a static IP of 192.168.0.50 (since our local network is 192.168.0.0/24). The first worker node (the second Pi), we'll name **knode1** and assign an IP of 192.168.0.51. The final worker node we'll name **knode2** and assign an IP of 192.168.0.52.
|
||||
|
||||
Obviously, if you have a different network layout, you may use any network/IPs you have available. Just substitute your own values anywhere IPs are used in this article.
|
||||
|
||||
So that we don't have to keep referring to each node by IP, let's add their host names to our **/etc/hosts** file on our PC.
|
||||
|
||||
|
||||
```
|
||||
echo -e "192.168.0.50\tkmaster" | sudo tee -a /etc/hosts
|
||||
echo -e "192.168.0.51\tknode1" | sudo tee -a /etc/hosts
|
||||
echo -e "192.168.0.52\tknode2" | sudo tee -a /etc/hosts
|
||||
```
|
||||
|
||||
### Installing the master node
|
||||
|
||||
Now we're ready to install the master node. The first step is to install the latest Raspbian image. I am not going to explain that here, but I have a [detailed article][4] on how to do this if you need it. So please go install Raspbian, enable the SSH server, set the hostname to **kmaster**, and assign a static IP of 192.168.0.50.
|
||||
|
||||
Now that Raspbian is installed on the master node, let's boot our master Pi and **ssh** into it:
|
||||
|
||||
|
||||
```
|
||||
ssh pi@kmaster
|
||||
```
|
||||
|
||||
Now we're ready to install **k3s**. On the master Pi, run:
|
||||
|
||||
|
||||
```
|
||||
curl -sfL https://get.k3s.io | sh -
|
||||
```
|
||||
|
||||
When the command finishes, we already have a single node cluster set up and running! Let's check it out. Still on the Pi, run:
|
||||
|
||||
|
||||
```
|
||||
sudo kubectl get nodes
|
||||
```
|
||||
|
||||
You should see something similar to:
|
||||
|
||||
|
||||
```
|
||||
NAME      STATUS   ROLES    AGE     VERSION
kmaster   Ready    master   2m13s   v1.14.3-k3s.1
|
||||
```
|
||||
|
||||
### Extracting the join token
|
||||
|
||||
We want to add a couple of worker nodes. When installing **k3s** on those nodes we will need a join token. The join token exists on the master node's filesystem. Let's copy that and save it somewhere we can get to it later:
|
||||
|
||||
|
||||
```
|
||||
sudo cat /var/lib/rancher/k3s/server/node-token
|
||||
```
|
||||
|
||||
### Installing the worker nodes
|
||||
|
||||
Grab some SD cards for the two worker nodes and install Raspbian on each. For one, set the hostname to **knode1** and assign an IP of 192.168.0.51. For the other, set the hostname to **knode2** and assign an IP of 192.168.0.52. Now, let's install **k3s**.
|
||||
|
||||
Boot your first worker node and **ssh** into it:
|
||||
|
||||
|
||||
```
|
||||
ssh pi@knode1
|
||||
```
|
||||
|
||||
On the Pi, we'll install **k3s** as before, but we will give the installer extra parameters to let it know that we are installing a worker node and that we'd like to join the existing cluster:
|
||||
|
||||
|
||||
```
|
||||
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.0.50:6443 \
|
||||
K3S_TOKEN=join_token_we_copied_earlier sh -
|
||||
```
|
||||
|
||||
Replace **join_token_we_copied_earlier** with the token from the "Extracting the join token" section. Repeat these steps for **knode2**.
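If you prefer, the per-worker steps can be driven from your PC in a loop. This is only a sketch: it prints the commands it would run (drop the `echo` to actually execute them over ssh), and the `pi` login, the IP, and the token value are placeholders for your own:

```shell
# Dry run: print the k3s install command for each worker node.
TOKEN="join_token_we_copied_earlier"
for node in knode1 knode2; do
    echo ssh "pi@$node" \
        "curl -sfL https://get.k3s.io | K3S_URL=https://192.168.0.50:6443 K3S_TOKEN=$TOKEN sh -"
done
```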
|
||||
|
||||
### Access the cluster from our PC
|
||||
|
||||
It'd be annoying to have to **ssh** to the master node to run **kubectl** anytime we wanted to inspect or modify our cluster. So, we want to put **kubectl** on our PC. But first, let's get the configuration information we need from our master node. **Ssh** into **kmaster** and run:
|
||||
|
||||
|
||||
```
|
||||
sudo cat /etc/rancher/k3s/k3s.yaml
|
||||
```
|
||||
|
||||
Copy this configuration information and return to your PC. Make a directory for the config:
|
||||
|
||||
|
||||
```
|
||||
mkdir ~/.kube
|
||||
```
|
||||
|
||||
Save the copied configuration as **~/.kube/config**. Now edit the file and change the line:
|
||||
|
||||
|
||||
```
|
||||
server: https://localhost:6443
|
||||
```
|
||||
|
||||
to be:
|
||||
|
||||
|
||||
```
|
||||
server: https://kmaster:6443
|
||||
```
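If you'd rather script that edit than open the file by hand, `sed` can do the substitution. The sketch below works on a throwaway copy so nothing real is overwritten; point it at `~/.kube/config` once you're happy with it:

```shell
# Demonstrate the substitution on a scratch copy of the config line.
CFG=$(mktemp)
printf 'server: https://localhost:6443\n' > "$CFG"

# Swap the loopback address for the master's hostname.
sed -i 's|https://localhost:6443|https://kmaster:6443|' "$CFG"
cat "$CFG"
```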
|
||||
|
||||
For security purposes, limit the file's read/write permissions to just yourself:
|
||||
|
||||
|
||||
```
|
||||
chmod 600 ~/.kube/config
|
||||
```
|
||||
|
||||
Now let's install **kubectl** on our PC (if you don't already have it). The Kubernetes site has [instructions][5] for doing this for various platforms. Since I'm running Linux Mint, an Ubuntu derivative, I'll show the Ubuntu instructions here:
|
||||
|
||||
|
||||
```
|
||||
sudo apt update && sudo apt install -y apt-transport-https
|
||||
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
|
||||
echo "deb <https://apt.kubernetes.io/> kubernetes-xenial main" | \
|
||||
sudo tee -a /etc/apt/sources.list.d/kubernetes.list
|
||||
sudo apt update && sudo apt install kubectl
|
||||
```
|
||||
|
||||
If you're not familiar, the above commands add a Debian repository for Kubernetes, grab its GPG key for security, and then update the list of packages and install **kubectl**. Now, we'll get notifications of any updates for **kubectl** through the standard software update mechanism.
|
||||
|
||||
Now we can check out our cluster from our PC! Run:
|
||||
|
||||
|
||||
```
|
||||
kubectl get nodes
|
||||
```
|
||||
|
||||
You should see something like:
|
||||
|
||||
|
||||
```
|
||||
NAME      STATUS   ROLES    AGE     VERSION
kmaster   Ready    master   12m     v1.14.3-k3s.1
knode1    Ready    worker   103s    v1.14.3-k3s.1
knode2    Ready    worker   103s    v1.14.3-k3s.1
|
||||
```
|
||||
|
||||
Congratulations! You have a working 3-node Kubernetes cluster!
|
||||
|
||||
### The k3s bonus
|
||||
|
||||
If you run **kubectl get pods --all-namespaces**, you will see some extra pods for [Traefik][6]. Traefik is a reverse proxy and load balancer that we can use to direct traffic into our cluster from a single entry point. Kubernetes allows for this but doesn't provide such a service directly. Having Traefik installed by default is a nice touch by Rancher Labs. This makes a default **k3s** install fully complete and immediately usable!
|
||||
|
||||
We're going to explore using Traefik through Kubernetes **ingress** rules and deploy all kinds of goodies to our cluster in future articles. Stay tuned!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/3/kubernetes-raspberry-pi-k3s
|
||||
|
||||
作者:[Lee Carpenter][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/carpie
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ship_wheel_gear_devops_kubernetes.png?itok=xm4a74Kv (A ship wheel with someone steering)
|
||||
[2]: https://opensource.com/resources/what-is-kubernetes
|
||||
[3]: https://k3s.io/
|
||||
[4]: https://carpie.net/articles/headless-pi-with-static-ip-wired-edition
|
||||
[5]: https://kubernetes.io/docs/tasks/tools/install-kubectl/
|
||||
[6]: https://traefik.io/
|
@@ -1,87 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Amazon Has Launched Its Own Linux Distribution But It’s Not for Everyone)
|
||||
[#]: via: (https://itsfoss.com/bottlerocket-linux/)
|
||||
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
|
||||
|
||||
Amazon Has Launched Its Own Linux Distribution But It’s Not for Everyone
|
||||
======
|
||||
|
||||
Amazon has [launched][1] its own Linux-based open source operating system, Bottlerocket.
|
||||
|
||||
Before you get too excited and try to install and run it, I must tell you that it’s not your regular Linux distribution like Ubuntu, Fedora or Debian. What is it then?
|
||||
|
||||
### Bottlerocket: Linux distribution from Amazon for running containers
|
||||
|
||||
![][2]
|
||||
|
||||
If you are not aware of containers in Linux, I recommend reading [this article][3] from Red Hat.
|
||||
|
||||
A lot has changed in the IT industry since the term cloud computing was first coined. It takes only a few seconds to deploy a Linux server (usually running in a VM), thanks to cloud providers like Amazon AWS, Google, [Linode][4], and DigitalOcean. On top of that, you can deploy applications and services on these servers in the form of containers, thanks to tools like Docker and Kubernetes.
|
||||
|
||||
The thing is that when your sole purpose is to run containers on a Linux system, a full-fledged Linux distribution is not always required. This is why there are container-specific Linux distributions that provide only the necessary packages. This reduces the size of the operating system drastically, which further reduces deployment time.
|
||||
|
||||
**[Bottlerocket][5] Linux is purpose-built by Amazon Web Services for running containers on virtual machines or bare metal hosts.** It supports docker images and other images that follow the [OCI image][6] format.
|
||||
|
||||
### Features of Bottlerocket Linux
|
||||
|
||||
![][7]
|
||||
|
||||
Here’s what this new Linux distribution from Amazon offers:
|
||||
|
||||
#### No package-by-package updates
|
||||
|
||||
The traditional Linux distribution update procedure is composed of updating individual packages. Bottlerocket uses image-based updates instead.
|
||||
|
||||
Thanks to this approach, conflicts and breakage are avoided with the possibility of a rapid and complete rollback (if necessary).
|
||||
|
||||
#### Read-only file system
|
||||
|
||||
Bottlerocket also uses a primarily read-only file system. Its integrity is checked at boot time via dm-verity. For additional security measures, SSH access is also discouraged and is only available through the [admin container][8] (additional mechanism).
|
||||
|
||||
|
||||
|
||||
#### Automated updates
|
||||
|
||||
You can automate updates to Bottlerocket by using an orchestration service like Amazon EKS.
|
||||
|
||||
Amazon also claims that including only the essential software to run containers reduces the attack surface compared to general purpose Linux distributions.
|
||||
|
||||
### What do you think?
|
||||
|
||||
Amazon is not the first to create a ‘container specific Linux’. I think CoreOS was among the first such distributions. [CoreOS was acquired by Red Hat][9] which itself was [sold to IBM][10]. Red Hat recently discontinued CoreOS and replaced it with [Fedora CoreOS][11].
|
||||
|
||||
The cloud server business is a big industry that will continue to grow. A giant like Amazon will do everything to stay on par with, or ahead of, its competitors. In my opinion, Bottlerocket is an answer to (now) IBM’s Fedora CoreOS.
|
||||
|
||||
Though [Bottlerocket repositories are available on GitHub][12], I could not find any ready-to-use image yet. At the time of writing this article, it is only [available as a preview on AWS][5].
|
||||
|
||||
What are your views on it? What does Amazon gain with Bottlerocket? If you used something like CoreOS before, will you switch to Bottlerocket?
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/bottlerocket-linux/
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/abhishek/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://aws.amazon.com/blogs/aws/bottlerocket-open-source-os-for-container-hosting/
|
||||
[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/03/botlerocket-logo.png?ssl=1
|
||||
[3]: https://www.redhat.com/en/topics/containers/whats-a-linux-container
|
||||
[4]: https://www.linode.com/
|
||||
[5]: https://aws.amazon.com/bottlerocket/
|
||||
[6]: https://www.opencontainers.org/
|
||||
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/03/BottleRocket.png?ssl=1
|
||||
[8]: https://github.com/bottlerocket-os/bottlerocket-admin-container
|
||||
[9]: https://itsfoss.com/red-hat-acquires-coreos/
|
||||
[10]: https://itsfoss.com/ibm-red-hat-acquisition/
|
||||
[11]: https://getfedora.org/en/coreos/
|
||||
[12]: https://github.com/bottlerocket-os/bottlerocket
|
@@ -1,113 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (HankChow)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to write effective documentation for your open source project)
|
||||
[#]: via: (https://opensource.com/article/20/3/documentation)
|
||||
[#]: author: (Kevin Xu https://opensource.com/users/kevin-xu)
|
||||
|
||||
How to write effective documentation for your open source project
|
||||
======
|
||||
Documentation quality can make the difference in people trying your
|
||||
project or passing it by.
|
||||
![A pink typewriter][1]
|
||||
|
||||
Unfortunately, good code won't speak for itself. Even the most elegantly designed and well-written codebase that solves the most pressing problem in the world won't just get adopted on its own. You, the open source creator, need to speak for your code and breathe life into your creation. That's where technical writing and documentation come in.
|
||||
|
||||
A project's documentation gets the most amount of traffic, by far. It's the place where people decide whether to continue learning about your project or move on. Thus, spending time and energy on documentation and technical writing, focusing on the most important section, "Getting Started," will do wonders for your project's traction.
|
||||
|
||||
Writing may feel uncomfortable, even daunting, to many of you. As engineers, we are trained more to write code than to write _about_ code. Many people also speak English as a second or even third language and may feel insecure or intimidated about writing in English. (I learned English as a second language, and my mother tongue is Mandarin Chinese, so I feel your pain.)
|
||||
|
||||
But we can't get around the reality that, if you want your project to have a broad, global reach, English is the language you must use. Don't fear. I wrote this post with those challenges in mind. You don't need to be the next Shakespeare to find the advice here useful.
|
||||
|
||||
### Five actionable writing tips
|
||||
|
||||
Here are five actionable writing tips you can apply today. They may seem painfully simple and obvious, yet they are ignored over and over again in technical writing.
|
||||
|
||||
1. **Use** [**active voice**][2]**:** Active voice: "You can change these configurations by…" vs. passive voice: "These configurations can be changed by…"
|
||||
2. **Use simple, short sentences:** While not open source, [Hemingway App][3] and [Grammarly][4] are both helpful tools.
|
||||
3. **Format for easy reading:** Use headings, bullet points, and links to break up information into chunks instead of long explanatory paragraphs.
|
||||
4. **Keep it visual:** Use tables and diagrams, not sentences, to represent information with multiple dimensions.
|
||||
5. **Mind your spelling and grammar:** Always, always, always spell check for typos and grammar check for polish.
|
||||
|
||||
|
||||
|
||||
By applying these tips consistently in your writing and editing workflow, you achieve two big goals: efficient communication and building trust.
|
||||
|
||||
* **Efficient communication:** Engineers don't want to read long-winded, meandering paragraphs in documentation (they have novels for that). They want to get technical information or instructions (when it's a guide) as efficiently as possible. Thus, your writing needs to be lean and useful. (That being said, it's fine to apply some humor, emojis, and "fluff" here and there to give your project some personality and make it more memorable. How exactly you do that will depend on _your_ personality.)
|
||||
* **Building trust:** The most valuable currency you must accrue, especially in the early days of building your project, is trust. Trust comes not only from your code quality but also from the quality of writing that talks about your code. Thus, please apply the same polish to your writing that you would to your code. This is the main reason for point 5 above (on spelling and grammar checks).
|
||||
|
||||
|
||||
|
||||
### Start with Getting Started documentation
|
||||
|
||||
With these fundamental techniques baked into your writing, the section you should spend the most time on in your documentation is the Getting Started section. This is, by far, the most important section and a classic example of the "[80/20 rule][5]" in action. Most of the web traffic to your project lands on your documentation, and most of _that_ lands on Getting Started. If it is well-constructed, you will get a new user right away. If not, the visitor will bounce and likely never come back.
|
||||
|
||||
How do you construct a good Getting Started section? I propose this three-step process:
|
||||
|
||||
1. **Make it a task:** An effective Getting Started guide should be task-oriented—a discrete mini-project that a developer can accomplish. It should _not_ contain too much information about the architectural design, core concept, and other higher-level information. A single, visual architectural overview is fine, but don't devote multiple paragraphs to how and why your project is the best-designed solution. That information belongs somewhere else (more on that below). Instead, the Getting Started section should mostly be a list of steps and commands to… well, get your project started!
|
||||
2. **Can be finished in less than 30 minutes:** The core takeaway here is that the time to completion should be as low as possible; 30 minutes is the upper bound. This time limit also assumes the user has relatively little context about your project. This is important to keep in mind. Most people who bother to go through your Getting Started guide are members of a technical audience with a vague understanding of your project but not much more than that. They are there to try something out before they decide to spend more time digging deeper. "Time to completion" is a metric you should measure to continuously improve your Getting Started guide.
|
||||
3. **Do something meaningful:** What "meaningful" means depends on the open source project. It is important to think hard about what that is, tightly define it into a task, and allow a developer who completes your Getting Started guide to achieve that meaningful task. This meaningful task must speak directly to your project's value; otherwise, it will leave developers feeling like they just wasted their time.
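The "time to completion" metric in step 2 is worth actually measuring rather than guessing. A minimal sketch of computing the median completion time, assuming you have collected per-session durations in minutes (the numbers below are hypothetical):

```
# Hypothetical completion times (minutes) from Getting Started sessions.
printf '%s\n' 22 35 18 41 25 \
  | sort -n \
  | awk '{a[NR]=$1} END{print "median:", a[int((NR+1)/2)], "minutes"}'
# median: 25 minutes
```

Tracking this number over releases tells you whether changes to the guide are actually making it faster to finish.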
For inspiration: If you are a distributed database project, perhaps "meaningful" means the whole cluster remains available with no downtime after you kill some nodes. If you are a data analytics or business intelligence tool, perhaps "meaningful" means quickly generating a dashboard with different visualizations after loading some data. Whatever "meaningful" means to your project, it should be achievable quickly and locally on a laptop.

A good example is [Linkerd's Getting Started][6]. Linkerd is an open source service mesh for Kubernetes. I'm a novice in Kubernetes and even less familiar with service mesh. Yet, I completed Linkerd's Getting Started guide on my laptop without much hassle, and the experience gave me a taste of what operating a service mesh is all about.

The three-step process above could be a helpful framework for designing a highly efficient Getting Started section in a measurable way. It is also related to the time-to-value metric when it comes to [productizing your open source project][7].

### Other core components

Besides carefully calibrating and optimizing your Getting Started section, there are five other top-level components that are necessary to build full-fledged documentation: architectural design, in-production usage guide, use cases, references, and roadmap.

  * **Architectural design:** This is a deep-dive into your project's architecture and the rationales behind your design decisions, full of the details that you strategically glossed over in your Getting Started guide. This section is a big part of your overall [product marketing plan][8]. This section, usually filled with visuals and drawings, is meant to turn a casual hobbyist into an expert enthusiast who is interested in investing time in your project for the long term.
  * **In-production usage guide:** There is a world of difference between trying something out on a laptop and deploying it in production. Guiding a user who wants to use your project more seriously is an important next step. Demonstrating in-production operational knowledge is also how you attract your initial business customers who may like the promise of the technology but don't know or don't feel confident about using it in a production environment.
  * **Use cases:** The value of social proof is obvious, so listing your in-production adopters is important. The key here is to make sure this information is easy to find. It will likely be the second most popular link after Getting Started.
  * **References:** This section explains the project in detail and allows the user to examine and understand it under a microscope. It also functions as a dictionary where people look up information when needed. Some open source creators spend an inordinate amount of time spelling out every nuance and edge case of their project here. The motivation is understandable but unnecessary at the outset when your time is limited. It's more effective to reach a balance between detail and ways to get help: links to your community forum, Stack Overflow tag, or a separate FAQ page would do.
  * **Roadmap:** Laying out your future vision and plan with a rough timeline will keep users interested and incentivized for the long term. Your project may not be perfect now, but you have a plan to perfect it. The Roadmap section is also a great place to get your community involved to build a strong ecosystem, so make sure you have a link that tells people how to voice their thoughts and opinions regarding the roadmap. (I'll write about community-building specifics in the future.)

You may not have all these components fully fleshed out yet, and some parts may materialize later than others, especially the use cases. However, be intentional about building these out over time. Addressing these five elements is the critical next step in your users' journey into your project, assuming they had a good experience with Getting Started.

One last note: include a clear one-sentence statement on what license you are using (probably in Getting Started, the README, or somewhere else highly visible). This small touch will make vetting your project for adoption from the end user's side much more efficient.

### Spend 20% of your time writing

Generally, I recommend spending 10–20% of your time writing. Putting that in context: if you are working on your project full time, it's about half a day to one full day per week.

The more nuanced point here is that you should work writing into your normal workflow, so it becomes a routine, not an isolated chore. Making incremental progress over time, rather than doing all the writing in one giant sitting, is what will help your project reach that ultimate goal: traction and trust.

* * *

_Special thanks to [Luc Perkins][9], developer advocate at the Cloud Native Computing Foundation, for his invaluable input._

_This article originally appeared on [COSS Media][10] and is republished with permission._

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/3/documentation

作者:[Kevin Xu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/kevin-xu
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-docdish-typewriter-pink.png?itok=OXJBtyYf (A pink typewriter)
[2]: https://www.grammar-monster.com/glossary/active_voice.htm
[3]: http://www.hemingwayapp.com/
[4]: https://www.grammarly.com/
[5]: https://en.wikipedia.org/wiki/Pareto_principle
[6]: https://linkerd.io/2/getting-started/
[7]: https://opensource.com/article/19/11/products-open-source-projects
[8]: https://opensource.com/article/20/2/product-marketing-open-source-project
[9]: https://twitter.com/lucperkins
[10]: https://coss.media/open-source-documentation-technical-writing-101/
@ -1,349 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Make SSL certs easy with k3s)
[#]: via: (https://opensource.com/article/20/3/ssl-letsencrypt-k3s)
[#]: author: (Lee Carpenter https://opensource.com/users/carpie)

Make SSL certs easy with k3s
======

How to encrypt your website with k3s and Let's Encrypt on a Raspberry Pi.

![Files in a folder][1]

In a [previous article][2], we deployed a couple of simple websites on our k3s cluster. They were non-encrypted sites. Now that's fine, and they work, but non-encrypted is very last century! These days most websites are encrypted. In this article, we are going to install [cert-manager][3] and use it to deploy TLS encrypted sites on our cluster. Not only will the sites be encrypted, but they will be using valid public certificates that are automatically provisioned and automatically renewed from [Let's Encrypt][4]! Let's get started!

### Materials needed

To follow along with the article, you will need [the k3s Raspberry Pi cluster][5] we built in a previous article. Also, you will need a public static IP address and a domain name that you own and can create DNS records for. If you have a dynamic DNS provider that provides a domain name for you, that may work as well. However, in this article, we will be using a static IP and [CloudFlare][6] to manually create DNS "A" records.

As we create configuration files in this article, if you don't want to type them out, they are all available for download [here][7].

### Why are we using cert-manager?

Traefik (which comes pre-bundled with k3s) actually has Let's Encrypt support built in, so you may be wondering why we are installing a third-party package to do the same thing. At the time of this writing, Traefik's Let's Encrypt support retrieves certificates and stores them in files. Cert-manager retrieves certificates and stores them in Kubernetes **secrets**. **Secrets** can be simply referenced by name and are, therefore, easier to use, in my opinion. That is the main reason we are going to use cert-manager in this article.

### Installing cert-manager

Mostly we will simply follow the cert-manager [documentation][8] for installing on Kubernetes. Since we are working with an ARM architecture, however, we will be making slight changes, so we will go through the procedure here.

The first step is to create the cert-manager namespace. The namespace helps keep cert-manager's pods out of our default namespace, so we do not have to see them when we do things like **kubectl get pods** with our own pods. Creating the namespace is simple:

```
kubectl create namespace cert-manager
```

The installation instructions have you download the cert-manager YAML configuration file and apply it to your cluster all in one step. We need to break that into two steps in order to modify the file for our ARM-based Pis. We will download the file and do the conversion in one step:

```
curl -sL \
  https://github.com/jetstack/cert-manager/releases/download/v0.11.0/cert-manager.yaml |\
  sed -r 's/(image:.*):(v.*)$/\1-arm:\2/g' > cert-manager-arm.yaml
```

This downloads the configuration file and updates all the contained docker images to be the ARM versions. To check what it did:

```
$ grep image: cert-manager-arm.yaml
          image: "quay.io/jetstack/cert-manager-cainjector-arm:v0.11.0"
          image: "quay.io/jetstack/cert-manager-controller-arm:v0.11.0"
          image: "quay.io/jetstack/cert-manager-webhook-arm:v0.11.0"
```
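The **sed** expression used above rewrites each **image:** line by inserting **-arm** before the version tag. As a quick sanity check (purely illustrative, using one of the image lines from the file), you can run the same expression on a single line:

```
echo 'image: "quay.io/jetstack/cert-manager-controller:v0.11.0"' \
  | sed -r 's/(image:.*):(v.*)$/\1-arm:\2/g'
# image: "quay.io/jetstack/cert-manager-controller-arm:v0.11.0"
```

Note that **-r** (extended regular expressions) is a GNU sed option; on BSD/macOS sed the equivalent flag is **-E**.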
As we can see, the three images now have **-arm** added to the image name. Now that we have the correct file, we simply apply it to our cluster:

```
kubectl apply -f cert-manager-arm.yaml
```

This will install all of cert-manager. We can know when the installation has finished by checking with **kubectl --namespace cert-manager get pods** until all pods are in the **Running** state.

That is actually it for the cert-manager installation!

### A quick overview of Let's Encrypt

The nice thing about Let's Encrypt is that they provide us with publicly validated TLS certificates for free! This means that we can have a completely valid TLS encrypted website, which anyone can visit, for our home or hobby things that do not make money to support themselves, without paying out of our own pocket for TLS certificates! Also, when using Let's Encrypt certificates with cert-manager, the entire process of procuring the certificates is automated. Certificate renewal is also automated!

But how does this work? Here is a simplified explanation of the process. We (or cert-manager on our behalf) issue a request for a certificate to Let's Encrypt for a domain name that we own. Let's Encrypt verifies that we own that domain by using an ACME DNS or HTTP validation mechanism. If the verification is successful, Let's Encrypt provides us with certificates, which cert-manager installs in our website (or other TLS encrypted endpoint). These certificates are good for 90 days before the process needs to be repeated. Cert-manager, however, will automatically keep the certificates up to date for us.
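As a quick illustration of that 90-day lifetime, GNU **date** can compute the expiry for a given issuance date (the date below is just a hypothetical example, not tied to any real certificate):

```
# A certificate issued on 2020-03-01 would expire 90 days later.
date -d '2020-03-01 +90 days' +%F
# 2020-05-30
```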
In this article, we will use the HTTP validation method, as it is simpler to set up and works for the majority of use cases. Here is the basic process that will happen behind the scenes. Cert-manager will issue a certificate request to Let's Encrypt. Let's Encrypt will issue an ownership verification challenge in response. The challenge will be to put an HTTP resource at a specific URL under the domain name that the certificate is being requested for. The theory is that if we can put that resource at that URL and Let's Encrypt can retrieve it remotely, then we must really be the owners of the domain. Otherwise, either we could not have placed the resource in the correct place, or we could not have manipulated DNS to allow Let's Encrypt to get to it. In this case, cert-manager puts the resource in the right place and automatically creates a temporary **Ingress** record that will route traffic to the correct place. If Let's Encrypt can read the challenge and it is correct, it will issue the certificates back to cert-manager. Cert-manager will then store the certificates as secrets, and our website (or whatever) will use those certificates for securing our traffic with TLS.

### Preparing our network for the challenges

I'm assuming that you want to set this up on your home network and have a router/access point that is connected in some fashion to the broader internet. If that is not the case, the following process may not be what you need.

To make the challenge process work, we need the domain that we are requesting a certificate for to route to our k3s cluster on port 80. To do that, we need to tell the world's DNS system where that is. So, we'll need to map the domain name to our public IP address. If you do not know what your public IP address is, you can go to somewhere like [WhatsMyIP][9], and it will tell you. Next, we need to enter a DNS "A" record that maps our domain name to our public IP address. For this to work reliably, you need a static public IP address, or you may be able to use a dynamic DNS provider. Some dynamic DNS providers will issue you a domain name that you may be able to use with these instructions. I have not tried this, so I cannot say for sure it works with all providers.

For this article, we are going to assume a static public IP and use CloudFlare to set the DNS "A" records. You may use your own DNS provider if you wish. The important part is that you are able to set the "A" records.

For the rest of the article, I am going to use **[k3s.carpie.net][10]** as the example domain since this is a domain I own. You would obviously replace that with whatever domain you own.

Ok, for the sake of example, assume our public IP address is 198.51.100.42. We would go to our DNS provider's DNS record section and add a record of type "A," with a name of **[k3s.carpie.net][10]** (CloudFlare assumes the domain, so there we could just enter **k3s**) and enter 198.51.100.42 as the IPv4 address.

![][11]

Be aware that sometimes it takes a while for the DNS updates to propagate. It may be several hours before you can resolve the name. It is imperative that the name resolves before moving on. Otherwise, all our certificate requests will fail.

We can check that the name resolves using the **dig** command:

```
$ dig +short k3s.carpie.net
198.51.100.42
```

Keep running the above command until an IP is returned. Just a note about CloudFlare: CloudFlare provides a service that hides your actual IP by proxying the traffic. In this case, we'll get back a CloudFlare IP instead of our own. This should work fine for our purposes.

The final step for network configuration is configuring our router to route incoming traffic on ports 80 and 443 to our k3s cluster. Sadly, router configuration screens vary widely, so I can't tell you exactly what yours will look like. Most of the time, the admin page we need is under "Port forwarding" or something similar. I have even seen it listed under "Gaming" (which is apparently what port forwarding is mostly used for)! Let's see what the configuration looks like for my router.

![][12]

If you had my setup, you would go to 192.168.0.1 to log in to the router administration application. For this router, it's under **NAT / QoS** -> **Port Forwarding**. Here we set port **80**, **TCP** protocol to forward to 192.168.0.50 (the IP of **kmaster**, our master node) port **80**. We also set port **443** to map to **kmaster** as well. This is technically not needed for the challenges, but at the end of the article, we are going to deploy a TLS enabled website, and we will need **443** mapped to get to it. So it's convenient to go ahead and map it now. We save and apply the changes, and we should be good to go!

### Configuring cert-manager to use Let's Encrypt (staging)

Now we need to configure cert-manager to issue certificates through Let's Encrypt. Let's Encrypt provides a staging (e.g., test) environment for us to sort out our configurations on. It is much more tolerant of mistakes and frequency of requests. If we bumble around on the production environment, we'll very quickly find ourselves temporarily banned! As such, we'll manually test requests using the staging environment.

Create a file, **letsencrypt-issuer-staging.yaml**, with the contents:

```
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # The ACME server URL
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: <your_email>@example.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-staging
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: traefik
```

Make sure to update the email address to your address. This is how Let's Encrypt contacts us if something is wrong or we are doing bad things!

Now we create the issuer with:

```
kubectl apply -f letsencrypt-issuer-staging.yaml
```

We can check that the issuer was created successfully with:

```
kubectl get clusterissuers
```

**Clusterissuers** is a new Kubernetes resource type created by cert-manager.

Let's now request a test certificate manually. For our sites, we will not need to do this; we are just testing out the process to make sure our configuration is correct.

Create a certificate request file, **le-test-certificate.yaml**, with the contents:

```
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: k3s-carpie-net
  namespace: default
spec:
  secretName: k3s-carpie-net-tls
  issuerRef:
    name: letsencrypt-staging
    kind: ClusterIssuer
  commonName: k3s.carpie.net
  dnsNames:
  - k3s.carpie.net
```

This record just says we want to request a certificate for the domain **[k3s.carpie.net][10]**, using a **ClusterIssuer** named **letsencrypt-staging** (which we created in the previous step), and store the certificate files in the Kubernetes secret named **k3s-carpie-net-tls**.

Apply it like normal:

```
kubectl apply -f le-test-certificate.yaml
```

We can check the status with:

```
kubectl get certificates
```

If we see something like:

```
NAME             READY   SECRET               AGE
k3s-carpie-net   True    k3s-carpie-net-tls   30s
```

We are good to go! (The key here is **READY** being **True**.)

### Troubleshooting certificate request issues

That's the happy path. If **READY** is **False**, we could give it some time and check the status again in case it takes a bit. If it stays **False**, then we have an issue we need to troubleshoot. At this point, we can walk the chain of Kubernetes resources until we find a status message that tells us the problem.

Let's say that we did the request above, and **READY** was **False**. We start the troubleshooting with:

```
kubectl describe certificates k3s-carpie-net
```

This will return a lot of information. Usually, the helpful things are in the **Events:** section, which is typically at the bottom. Let's say the last event was **Created new CertificateRequest resource "k3s-carpie-net-1256631848"**. We would then describe that request:

```
kubectl describe certificaterequest k3s-carpie-net-1256631848
```

Now let's say the last event there was **Waiting on certificate issuance from order default/k3s-carpie-net-1256631848-2342473830**.

Ok, we can describe the order:

```
kubectl describe orders default/k3s-carpie-net-1256631848-2342473830
```

Let's say that has an event that says **Created Challenge resource "k3s-carpie-net-1256631848-2342473830-1892150396" for domain "[k3s.carpie.net][10]"**. Let's describe the challenge:

```
kubectl describe challenges k3s-carpie-net-1256631848-2342473830-1892150396
```

The last event returned from here is **Presented challenge using http-01 challenge mechanism**. That looks ok, so we scan up the describe output and see a message **Waiting for http-01 challenge propagation: failed to perform self check GET request … no such host**. Finally! We have found the problem! In this case, **no such host** means that the DNS lookup failed, so then we would go back and manually check our DNS settings, verify that our domain's DNS resolves correctly for us, and make any changes needed.

### Clean up our test certificates

We actually want a real certificate for the domain name we used, so let's go ahead and clean up both the certificate and the secret we just created:

```
kubectl delete certificates k3s-carpie-net
kubectl delete secrets k3s-carpie-net-tls
```

### Configuring cert-manager to use Let's Encrypt (production)

Now that we have test certificates working, it's time to move up to production. Just like we configured cert-manager for the Let's Encrypt staging environment, we need to do the same for production now. Create a file (you can copy and modify the staging one if desired) named **letsencrypt-issuer-production.yaml** with the contents:

```
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: <your_email>@example.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: traefik
```

(If you are copying from the staging file, the only thing that changes is the **server:** URL. Don't forget the email!)

Apply with:

```
kubectl apply -f letsencrypt-issuer-production.yaml
```

### Request a certificate for our website

It's important to note that all the steps we have completed to this point are one-time setup! For any additional requests in the future, we can start at this point in the instructions!

Let's deploy that same site we deployed in the [previous article][13]. (If you still have it around, you can just modify the YAML file. If not, you may want to recreate it and re-deploy it.)

We just need to modify **mysite.yaml**'s **Ingress** section to be:

```
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: mysite-nginx-ingress
  annotations:
    kubernetes.io/ingress.class: "traefik"
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  rules:
  - host: k3s.carpie.net
    http:
      paths:
      - path: /
        backend:
          serviceName: mysite-nginx-service
          servicePort: 80
  tls:
  - hosts:
    - k3s.carpie.net
    secretName: k3s-carpie-net-tls
```

Please note that just the **Ingress** section of **mysite.yaml** is shown above. The changes are the addition of the annotation **cert-manager.io/cluster-issuer: letsencrypt-prod**. This tells traefik which issuer to use when creating certificates. The only other addition is the **tls:** block. This tells traefik that we expect to have TLS on host **[k3s.carpie.net][10]**, and we expect the TLS certificate files to be stored in the secret **k3s-carpie-net-tls**.

Please remember that we did not create these certificates! (Well, we created similarly named test certificates, but we deleted those.) Traefik will read this and go looking for the secret. When it does not find it, it sees the annotation saying we want to use the **letsencrypt-prod** issuer to procure one. From there, it will make the request and install the certificate in the secret for us!

We're done! Let's try it out.

There it is in all its encrypted TLS beauty! Congratulations!

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/3/ssl-letsencrypt-k3s

作者:[Lee Carpenter][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/carpie
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/files_documents_paper_folder.png?itok=eIJWac15 (Files in a folder)
[2]: https://carpie.net/articles/ingressing-with-k3s
[3]: https://cert-manager.io/
[4]: https://letsencrypt.org/
[5]: https://opensource.com/article/20/3/kubernetes-raspberry-pi-k3s
[6]: https://cloudflare.com/
[7]: https://gitlab.com/carpie/k3s_using_certmanager/-/archive/master/k3s_using_certmanager-master.zip
[8]: https://cert-manager.io/docs/installation/kubernetes/
[9]: https://whatsmyip.org/
[10]: http://k3s.carpie.net
[11]: https://opensource.com/sites/default/files/uploads/ep011_dns_example.png
[12]: https://opensource.com/sites/default/files/uploads/ep011_router.png
[13]: https://carpie.net/articles/ingressing-with-k3s#deploying-a-simple-website
@ -1,219 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (6 Best AUR (Arch User Repository) Helpers for Arch Linux)
|
||||
[#]: via: (https://www.2daygeek.com/best-aur-arch-user-repository-helpers-arch-linux-manjaro/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
6 Best AUR (Arch User Repository) Helpers for Arch Linux
|
||||
======
|
||||
|
||||
Arch Linux is a Linux distribution based largely on binary packages that targets x86-64 computers.
|
||||
|
||||
Arch Linux follows a rolling release model, in which application updates are delivered frequently.
|
||||
|
||||
It has a package manager called **[pacman][1]**, which allows users to install, remove, and update software packages.
|
||||
|
||||
Since Arch Linux is built for experienced users, newbies are advised to step in only after gaining hands-on experience with other Linux flavors.
|
||||
|
||||
### What is AUR (Arch User Repository)?
|
||||
|
||||
[Arch User Repository][2], commonly referred to as AUR, is the community-driven software repository for Arch users.
|
||||
|
||||
User-compiled packages can make their way into the official Arch repository based on their popularity, as voted on by the AUR community.
|
||||
|
||||
### What is AUR Helper?
|
||||
|
||||
An [AUR helper][3] is a wrapper that allows users to install packages from the AUR repository without manual intervention.
|
||||
|
||||
Tasks such as searching for packages, resolving dependencies, retrieving and building AUR packages, retrieving web content, and submitting AUR packages are automated.
|
||||
|
||||
**The 6 best AUR helpers are listed below:**
|
||||
|
||||
* Yay (Yet another Yogurt)
|
||||
* Pakku
|
||||
* Pacaur
|
||||
* Pikaur
|
||||
* Trizen
|
||||
* Aura
|
||||
|
||||
|
||||
|
||||
### 1) Yay (Yet another Yogurt)
|
||||
|
||||
[Yay][4] is one of the best CLI-based AUR helpers for Arch Linux, written in the Go language. Yay is based on the design of yaourt, apacman, and pacaur.
|
||||
|
||||
It is recommended for newbies because its interface is similar to pacman's, with features matching many of the commands and options used in pacman. It also allows users to find matching package providers during a search and choose among them.
|
||||
|
||||
### How to Install yay
|
||||
|
||||
Run the following commands one by one to install yay on Arch Linux based systems.
|
||||
|
||||
```
|
||||
$ sudo pacman -S git go base-devel
|
||||
$ git clone https://aur.archlinux.org/yay.git
|
||||
$ cd yay
|
||||
$ makepkg -si
|
||||
```
|
||||
|
||||
### How to Use yay
|
||||
|
||||
It uses the same syntax as pacman; use the following command to install a package through yay:
|
||||
|
||||
```
|
||||
$ yay -S arch-wiki-man
|
||||
```
|
||||
|
||||
### 2) Pakku
|
||||
|
||||
[Pakku][5] can be thought of as another pacman, though it is still in an early stage of development. It is a wrapper that allows users to search for or install packages from the AUR.
|
||||
|
||||
It does a decent job of resolving dependencies and also allows installing packages by cloning the PKGBUILD.
|
||||
|
||||
### How to Install Pakku
|
||||
|
||||
To install pakku on Arch Linux based systems, run the following commands one by one.
|
||||
|
||||
```
|
||||
$ sudo pacman -S git base-devel
|
||||
$ git clone https://aur.archlinux.org/pakku.git
|
||||
$ cd pakku
|
||||
$ makepkg -si
|
||||
```
|
||||
|
||||
### How to Use Pakku
|
||||
|
||||
It uses the same syntax as pacman; use the following command to install a package with pakku:
|
||||
|
||||
```
|
||||
$ pakku -S dropbox
|
||||
```
|
||||
|
||||
### 3) Pacaur
|
||||
|
||||
Another CLI based AUR helper that helps to reduce the user prompt interaction.
|
||||
|
||||
[Pacaur][6] is designed for advanced users who are inclined towards automation for repetitive tasks. Users are expected to be familiar with the AUR manual build process with makepkg and its configuration.
|
||||
|
||||
### How to Install Pacaur
|
||||
|
||||
To install pacaur on Arch Linux based systems, run the following commands one by one.
|
||||
|
||||
```
|
||||
$ sudo pacman -S git base-devel
|
||||
$ git clone https://aur.archlinux.org/pacaur.git
|
||||
$ cd pacaur
|
||||
$ makepkg -si
|
||||
```
|
||||
|
||||
### How to Use Pacaur
|
||||
|
||||
It uses the same syntax as pacman; use the following command to install a package with pacaur:
|
||||
|
||||
```
|
||||
$ pacaur -S spotify
|
||||
```
|
||||
|
||||
### 4) Pikaur
|
||||
|
||||
[Pikaur][7] is an AUR helper with minimal dependencies. It lets you review all PKGBUILDs at once and then builds them all without further user interaction.
|
||||
|
||||
Pikaur drives pacman internally for the actual package operations, informing it of each next step.
|
||||
|
||||
### How to Install Pikaur
|
||||
|
||||
To install pikaur on Arch Linux based systems, run the following commands one by one.
|
||||
|
||||
```
|
||||
$ sudo pacman -S git base-devel
|
||||
$ git clone https://aur.archlinux.org/pikaur.git
|
||||
$ cd pikaur
|
||||
$ makepkg -fsri
|
||||
```
|
||||
|
||||
### How to Use Pikaur
|
||||
|
||||
It uses the same syntax as pacman; use the following command to install a package with pikaur:
|
||||
|
||||
```
|
||||
$ pikaur -S spotify
|
||||
```
|
||||
|
||||
### 5) Trizen
|
||||
|
||||
[Trizen][8] is a lightweight, command line based wrapper for the AUR, written in Perl. It is a speed-oriented AUR helper that allows users to search for and install packages, and also to read AUR package comments.
|
||||
|
||||
It supports editing text files, its input/output uses UTF-8, and it has built-in interaction with pacman.
|
||||
|
||||
### How to Install Trizen
|
||||
|
||||
To install trizen on Arch Linux based systems, run the following commands one by one.
|
||||
|
||||
```
|
||||
$ sudo pacman -S git base-devel
|
||||
$ git clone https://aur.archlinux.org/trizen.git
|
||||
$ cd trizen
|
||||
$ makepkg -si
|
||||
```
|
||||
|
||||
### How to Use Trizen
|
||||
|
||||
It uses the same syntax as pacman; use the following command to install a package with trizen:
|
||||
|
||||
```
|
||||
$ trizen -S google-chrome
|
||||
```
|
||||
|
||||
### 6) Aura
|
||||
|
||||
[Aura][9] is a secure, multilingual package manager for Arch Linux and the AUR, written in Haskell. It supports many Pacman operations and sub-options which allows easy development and beautiful code.
|
||||
|
||||
It automates the process of installing packages from the Arch User Repository. Users may, however, face difficulties during system upgrades when using Aura.
|
||||
|
||||
### How to Install Aura
|
||||
|
||||
To install aura on Arch Linux based systems, run the following commands one by one.
|
||||
|
||||
```
|
||||
$ sudo pacman -S git base-devel
|
||||
$ git clone https://aur.archlinux.org/aura.git
|
||||
$ cd aura
|
||||
$ makepkg -si
|
||||
```
|
||||
|
||||
### How to Use Aura
|
||||
|
||||
It uses a syntax similar to pacman's; use the following command to install an AUR package with Aura:
|
||||
|
||||
```
|
||||
$ sudo aura -A android-sdk
|
||||
```
|
||||
|
||||
### Conclusion:
|
||||
|
||||
Users can analyze the six AUR helpers above and choose the one that best fits their needs.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/best-aur-arch-user-repository-helpers-arch-linux-manjaro/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
|
||||
[2]: https://wiki.archlinux.org/index.php/Arch_User_Repository
|
||||
[3]: https://wiki.archlinux.org/index.php/AUR_helpers
|
||||
[4]: https://github.com/Jguer/yay
|
||||
[5]: https://github.com/kitsunyan/pakku
|
||||
[6]: https://github.com/E5ten/pacaur
|
||||
[7]: https://github.com/actionless/pikaur
|
||||
[8]: https://github.com/trizen/trizen
|
||||
[9]: https://github.com/fosskers/aura
|
@ -1,179 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to Change MAC Address in Linux)
|
||||
[#]: via: (https://itsfoss.com/change-mac-address-linux/)
|
||||
[#]: author: (Community https://itsfoss.com/author/itsfoss/)
|
||||
|
||||
How to Change MAC Address in Linux
|
||||
======
|
||||
|
||||
Before I show you how to change the MAC address in Linux, let's first discuss why you would change it in the first place.
|
||||
|
||||
You may have several reasons. Maybe you don't want your actual [MAC address][1] (also called physical address) to be exposed on a public network? Another case could be that a network administrator has blocked a particular MAC address in the router or firewall.
|
||||
|
||||
One practical 'benefit' is that some public networks (like airport WiFi) allow free internet for a limited time. If you want to use the internet beyond that, spoofing your MAC address may trick the network into believing that yours is a new device. It's a famous meme as well.
|
||||
|
||||
![Airport WiFi Meme][2]
|
||||
|
||||
I am going to show you the steps for changing the MAC address (also called spoofing/faking the MAC address).
|
||||
|
||||
### Changing MAC address in Linux
|
||||
|
||||
Let’s go through each step:
|
||||
|
||||
#### Step 1: Find your MAC address and network interface
|
||||
|
||||
Let’s find out some [details about the network card in Linux][3]. Use this command to get the network interface details:
|
||||
|
||||
```
|
||||
ip link show
|
||||
```
|
||||
|
||||
In the output, you’ll see several details along with the MAC address:
|
||||
|
||||
```
|
||||
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
|
||||
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
|
||||
2: eno1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN mode DEFAULT group default qlen 1000
|
||||
link/ether 94:c6:f8:a7:d7:30 brd ff:ff:ff:ff:ff:ff
|
||||
3: enp0s31f6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DORMANT group default qlen 1000
|
||||
link/ether 38:42:f8:8b:a7:68 brd ff:ff:ff:ff:ff:ff
|
||||
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
|
||||
link/ether 42:02:07:8f:a7:38 brd ff:ff:ff:ff:ff:ff
|
||||
```
|
||||
|
||||
As you can see, in this case, my network interface is called **enp0s31f6** and its MAC address is **38:42:f8:8b:a7:68**.
|
||||
|
||||
You may want to note it down in a secure place so you can revert to this original MAC address later on.
|
||||
|
||||
Now you may proceed to changing the MAC address.
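If you ever need interface names from a script instead of parsing `ip link` output, Python's standard library can enumerate them directly (a small sketch; works on Linux):

```python
import socket

# if_nameindex() asks the kernel for the (index, name) pair of every
# network interface, e.g. (1, 'lo'), (2, 'enp0s31f6'), ...
for index, name in socket.if_nameindex():
    print(index, name)
```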
|
||||
|
||||
Attention!
|
||||
|
||||
If you do this on a network interface that is currently in use, your network connection will probably be terminated. So either try this method on an additional card or be prepared to restart your network.
|
||||
|
||||
#### Method 1: Change MAC address using Macchanger
|
||||
|
||||
![][4]
|
||||
|
||||
[Macchanger][5] is a simple utility to view, modify, and manipulate MAC addresses for your network interface cards. It is available in almost all GNU/Linux operating systems, and you can install it using your distribution's package manager.
|
||||
|
||||
On Arch Linux or Manjaro:
|
||||
|
||||
```
|
||||
sudo pacman -S macchanger
|
||||
```
|
||||
|
||||
On Fedora, CentOS, RHEL:
|
||||
|
||||
```
|
||||
sudo dnf install macchanger
|
||||
```
|
||||
|
||||
On Debian, Ubuntu, Linux Mint, Kali Linux:
|
||||
|
||||
```
|
||||
sudo apt install macchanger
|
||||
```
|
||||
|
||||
**Important!** You’ll be asked to specify whether macchanger should be set up to run automatically every time a network device is brought up or down. This gives a new MAC address whenever you attach an Ethernet cable or re-enable WiFi.
|
||||
|
||||
![Not a good idea to run it automatically][6]
|
||||
|
||||
I recommend not running it automatically unless you really need to change your MAC address every time. So, choose No (by pressing the Tab key) and hit Enter to continue.
|
||||
|
||||
##### How to Use Macchanger to change MAC address
|
||||
|
||||
Do you remember your network interface name? You got it in Step 1 earlier.
|
||||
|
||||
Now, to assign any random MAC address to this network card, use:
|
||||
|
||||
```
|
||||
sudo macchanger -r enp0s31f6
|
||||
```
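Incidentally, the random address that `macchanger -r` assigns is normally a locally administered, unicast MAC: the second-lowest bit of the first octet is set, and the lowest (multicast) bit is cleared. A small illustration of that bit-twiddling in Python (not macchanger's actual code):

```python
import random

def random_mac():
    """Generate a random locally administered, unicast MAC address."""
    first = random.randint(0x00, 0xFF)
    first |= 0x02   # set the locally administered bit
    first &= 0xFE   # clear the multicast bit, keeping the address unicast
    octets = [first] + [random.randint(0x00, 0xFF) for _ in range(5)]
    return ":".join("%02x" % octet for octet in octets)

print(random_mac())
```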
|
||||
|
||||
After changing the MAC id, verify it using command:
|
||||
|
||||
```
|
||||
ip addr
|
||||
```
|
||||
|
||||
You will now see that MAC has been spoofed.
|
||||
|
||||
To change the MAC address to a specific value, specify your custom MAC address with this command:
|
||||
|
||||
```
|
||||
sudo macchanger --mac=XX:XX:XX:XX:XX:XX enp0s31f6
|
||||
```
|
||||
|
||||
Where XX:XX:XX:XX:XX:XX is the new MAC address that you want to set.
|
||||
|
||||
Finally, to revert the MAC address to its original hardware value, run the following command:
|
||||
|
||||
```
|
||||
sudo macchanger -p enp0s31f6
|
||||
```
|
||||
|
||||
However, you don’t have to do this. Once you reboot the system, the changes will be automatically lost, and the actual MAC address will be restored again.
|
||||
|
||||
You can always check the man page for more details.
|
||||
|
||||
#### Method 2: Changing MAC address using iproute2 [intermediate knowledge]
|
||||
|
||||
I would recommend using Macchanger but if you don’t want to use it, there is another way to change the MAC address in Linux.
|
||||
|
||||
First, turn off the network card using command:
|
||||
|
||||
```
|
||||
sudo ip link set dev enp0s31f6 down
|
||||
```
|
||||
|
||||
Next, set the new MAC using command:
|
||||
|
||||
```
|
||||
sudo ip link set dev enp0s31f6 address XX:XX:XX:XX:XX:XX
|
||||
```
|
||||
|
||||
Finally, turn the network back on with this command:
|
||||
|
||||
```
|
||||
sudo ip link set dev enp0s31f6 up
|
||||
```
|
||||
|
||||
Now, verify new MAC address:
|
||||
|
||||
```
|
||||
ip link show enp0s31f6
|
||||
```
|
||||
|
||||
That’s it. You have successfully changed the MAC address in true Linux style. Stay tuned to It’s FOSS for more Linux tutorials and tips.
|
||||
|
||||
![][7]
|
||||
|
||||
### Dimitrios Savvopoulos
|
||||
|
||||
Dimitrios is an MSc Mechanical Engineer but a Linux enthusiast in heart. He is well settled in Solus OS but curiosity drives him to constantly test other distros. Challenge is part of his personality and his hobby is to compete from 5k to the marathon distance.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/change-mac-address-linux/
|
||||
|
||||
作者:[Community][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/itsfoss/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://en.wikipedia.org/wiki/MAC_address
|
||||
[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/03/airport_wifi_meme.jpg?ssl=1
|
||||
[3]: https://itsfoss.com/find-network-adapter-ubuntu-linux/
|
||||
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/03/Change_MAC_Address_Linux.jpg?ssl=1
|
||||
[5]: https://github.com/alobbs/macchanger
|
||||
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/03/configuring_mcchanger.jpg?ssl=1
|
||||
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/03/Dimitrios.jpg?ssl=1
|
@ -0,0 +1,281 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Adding a display to a travel-ready Raspberry Pi Zero)
|
||||
[#]: via: (https://opensource.com/article/20/3/pi-zero-display)
|
||||
[#]: author: (Peter Garner https://opensource.com/users/petergarner)
|
||||
|
||||
Adding a display to a travel-ready Raspberry Pi Zero
|
||||
======
|
||||
A small eInk display turns a Raspberry Pi into a self-contained,
|
||||
pocket-sized travel computer.
|
||||
![Pi Zero][1]
|
||||
|
||||
In my earlier article, I explained how I [transformed a Raspberry Pi Zero][2] into a minimal, portable, go-anywhere computer system that, although small, can actually achieve useful things. I've since made iterations that have proved interesting and made the little Pi even more useful. Read on to learn what I've done.
|
||||
|
||||
### After the road trip
|
||||
|
||||
My initial Pi Zero setup [proved its worth][3] on a road trip to Whitby, but afterward, it was largely consigned to the "pending" shelf, waiting for another assignment. It was powered up weekly to apply updates, but other than that, it was idle. Then one day, as I was flicking through emails from various Pi suppliers, I came across a (slightly) reduced e-Ink display offer: hmmm… and there was a version for the Pi Zero as well. What could I do with one?
|
||||
|
||||
ModMyPi was selling a rather neat [display and driver board combination][4] and a [small case][5] with a transparent window on top. I read the usual reviews, and apart from one comment about the _boards being a very tight fit_, it sounded positive. I ordered it, and it turned up a few days later. I had noted from the product description that the display board didn't have GPIO headers installed, so I ordered a Pi Zero WH (wireless + headers pre-installed) to save me the bother of soldering one on.
|
||||
|
||||
### Some assembly required
|
||||
|
||||
As with most of these things, some self-assembly was required, so I carefully opened the boxes and laid out the parts on the desk. The case was nicely made apart from ridiculous slots for a watch strap (?!) and some strange holes in the side to allow tiny fingers to press the five I/O buttons on the display. "_Could I get a top without holes?"_ I inquired on the review page. "_No."_ Okay then.
|
||||
|
||||
With the case unpacked, it was time to open the display box. A nicely designed board was first out, and there were clear instructions on the Pi-Supply website. The display was so thin (0.95mm) that I nearly threw it out with the bubble wrap.
|
||||
|
||||
The first job was to mount the display board on the Pi Zero. I checked to make sure I could attach the display cable to the driver board when it was joined to the Pi and decided that, with my sausage fingers, I'd attach the display first and leave it flapping in the breeze while I attached the driver board to the Pi. I carefully got the boards lined up on the GPIO pins, and, with those in place, I folded over the display "screen" to sit on top of the board. With the piggy-backed boards in place, I then _verrrry_ carefully shoe-horned the assembly into place in the case. Tight fit? Yeah, you're not kidding, but I got it all safely in place and snapped the top on, and nothing appeared to be broken. Phew!
|
||||
|
||||
### How to set up your display
|
||||
|
||||
I'm going to skip a chunk of messing about here and refer you to the maker's [instructions][6] instead. Suffice to say that after a few installs, reboots, and coffees, I managed to get a working e-Ink display! Now all I had to do was figure out what to do with it.
|
||||
|
||||
One of the main challenges of working with a small device like [my "TravelPi"][2] is that you don't have access to as much screen real estate as you would on a larger machine. I like the size and power of the device though, so it's really a compromise as to what you get out of it. For example, there's a single screen accessible via the HDMI port, and I've used tmux to split that into four separate, usable panes. If I really need to view something else urgently, I could always **Ctrl+Z** into another prompt and do the necessary configs, but that's messy.
|
||||
|
||||
I wanted to see various settings and maybe look at some system settings, and the e-Ink display enabled me to do all that! As you can see from the image below, I ended up with a very usable info panel that is updated by a simple(-ish) Python script (**qv**) either manually or by a crontab entry every 10 minutes. The manufacturer states that the update frequency should be "no more than 1Hz if you want your display to last for a long time." Ten minutes is fine, thank you.
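For reference, a crontab entry that refreshes the display every 10 minutes might look like this (the script path is illustrative):

```
*/10 * * * * /home/pi/bin/dispupdate.py
```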
|
||||
|
||||
Here's what I wanted to be able to see at a glance:
|
||||
|
||||
Hostname | And device serial number
|
||||
---|---
|
||||
IP address | Current internal IP address
|
||||
VPN status | Inactive/country/IP address
|
||||
Tor status | Inactive/IP address
|
||||
"Usage" | Percentage disk space and memory used
|
||||
Uptime | So satisfying to see those long uptimes
|
||||
|
||||
And here it is: a display that's the same size as the Pi Zero and 1" deep.
|
||||
|
||||
![PiZero Display][7]
|
||||
|
||||
### How to populate the display
|
||||
|
||||
Now I needed to populate the display. As seems to be the norm these days, the e-Ink support software is in Python, which, of course, is installed as standard with most Linux distros. _Disclaimer:_ Python is not my first (dev) language, but the code below works for me. It'll probably work for you, too.
|
||||
|
||||
|
||||
```
|
||||
#!/usr/bin/env python
|
||||
|
||||
import os
|
||||
import sys
|
||||
import time
|
||||
import datetime
|
||||
import socket
|
||||
import netifaces as ni
|
||||
import psutil
|
||||
import subprocess
|
||||
|
||||
from netifaces import AF_INET, AF_INET6, AF_LINK, AF_PACKET
|
||||
from papirus import PapirusText, PapirusTextPos, Papirus
|
||||
from subprocess import check_output
|
||||
from datetime import timedelta
|
||||
|
||||
rot = 0
|
||||
screen = Papirus(rotation = rot)
|
||||
fbold = '/usr/share/fonts/truetype/dejavu/DejaVuSansMono-Bold.ttf'
|
||||
fnorm = '/usr/share/fonts/truetype/dejavu/DejaVuSansMono.ttf'
|
||||
text = PapirusTextPos(rotation = rot)
|
||||
|
||||
def GetBootTime():
|
||||
return datetime.datetime.fromtimestamp(psutil.boot_time())
|
||||
|
||||
def GetUptime():
|
||||
with open('/proc/uptime','r') as f:
|
||||
uptime_seconds = float(f.readline().split()[0])
|
||||
u = str(timedelta(seconds = uptime_seconds))
|
||||
duration,junk = u.split(".")
|
||||
hr,mi,sc = duration.split(":")
|
||||
return "%sh %sm %ss" % ( hr,mi,sc )
|
||||
|
||||
def getHostname():
|
||||
hostname = socket.gethostname()
|
||||
return hostname
|
||||
|
||||
def getWiFiIPaddress():
|
||||
try:
|
||||
ni.interfaces()
|
||||
[ 'wlan0', ]
|
||||
return ni.ifaddresses('wlan0')[AF_INET][0]['addr']
|
||||
except:
|
||||
return 'inactive'
|
||||
|
||||
def getVPNIPaddress():
|
||||
try:
|
||||
ni.interfaces()
|
||||
[ 'tun0', ]
|
||||
return ni.ifaddresses('tun0')[AF_INET][0]['addr']
|
||||
except:
|
||||
return 'inactive'
|
||||
|
||||
def GetTmuxEnv():
|
||||
if 'TMUX_PANE' in os.environ:
|
||||
return ' (t)'
|
||||
return ' '
|
||||
|
||||
def GetCPUserial():
|
||||
cpuinfo = subprocess.check_output(["/bin/cat", "/proc/cpuinfo"]).decode()
|
||||
cpuinfo = cpuinfo.replace("\t","")
|
||||
cpuinfo = cpuinfo.split("\n")
|
||||
[ legend, cpuserial ] = cpuinfo[12].split(' ')
|
||||
cpuserial = cpuserial.lstrip("0")
|
||||
return cpuserial
|
||||
|
||||
def GetMemUsed():
|
||||
memUsed = psutil.virtual_memory()[2]
|
||||
return memUsed
|
||||
|
||||
def GetDiskUsed():
|
||||
diskUsed = psutil.disk_usage('/')[3]
|
||||
return diskUsed
|
||||
|
||||
def CheckTor():
|
||||
try:
|
||||
TS = "active: pid %s" % check_output(['pidof','tor']).decode().strip()
|
||||
except:
|
||||
TS = 'inactive'
|
||||
return TS
|
||||
|
||||
def CheckVPN():
|
||||
return VPNlo
|
||||
# ---------------------------------------------------------------------------
|
||||
def main():
|
||||
pass
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
||||
|
||||
VPNlo = 'inactive'
|
||||
|
||||
if (len(sys.argv) == 2):
|
||||
try:
|
||||
VPNlo = sys.argv[1]
|
||||
except:
|
||||
VPNlo = 'inactive'
|
||||
|
||||
text = PapirusTextPos(False,rotation=rot)
|
||||
text.AddText("%s %s %s"% (getHostname(),GetCPUserial(),GetTmuxEnv()),x=1,y=0,size=12,invert=True,fontPath=fbold)
|
||||
text.AddText("IP %s" % getWiFiIPaddress(),x=1,y=16,size=12,fontPath=fnorm)
|
||||
if ( getVPNIPaddress() == 'inactive' ):
|
||||
text.AddText("VPN %s" % CheckVPN(),x=1,y=30,size=12,fontPath=fnorm)
|
||||
else:
|
||||
text.AddText("VPN %s" % getVPNIPaddress(),x=1,y=30,size=12,fontPath=fnorm)
|
||||
text.AddText("TOR %s" % CheckTor(),x=1,y=44,size=12,fontPath=fnorm)
|
||||
text.AddText("MEM %s%% DISK %s%% used" % (GetMemUsed(),GetDiskUsed()),x=1,y=58,size=12,fontPath=fnorm,maxLines=1)
|
||||
text.AddText("UPTIME %s" % GetUptime(),x=1,y=72,size=12,fontPath=fnorm)
|
||||
text.WriteAll()
|
||||
|
||||
sys.exit(0)
|
||||
```
|
||||
|
||||
Normally, the script runs without any arguments and is called by a series of Bash scripts that I've written to start up various subsystems; these are, in turn, called from a menu system written in Whiptail, which is pretty versatile. In the case of the VPN system, I have a list of access points to choose from and that update the location on the display. Initially, I call the display updater with the location name (e.g., Honolulu), but at that point, I can't display the VPN IP address because I don't know it:
|
||||
|
||||
|
||||
```
|
||||
dispupdate.py ${accesspoint}
|
||||
openvpn --config $PATH/Privacy-${accesspoint}.conf --auth-user-pass credfile
|
||||
```
|
||||
|
||||
When the display updater runs again (outside the VPN startup script), the IP address is readable from the **tun0** interface and the display is updated with the IP address. I may change this later, but it works fine now. I use the **PapirusTextPos** function (rather than **PapirusText**), as this allows multiple lines to be written before the display is updated, leading to a much faster write. The **text.WriteAll()** function does the actual update.
|
||||
|
||||
### Adding more software
|
||||
|
||||
I was very pleased with my initial choice of applications, but since I'd managed to slim the whole installation down to 1.7GB, I had plenty of available space. So, I decided to see if there was anything else that could be useful. Here's what I added:
|
||||
|
||||
Irssi | IRC client
|
||||
---|---
|
||||
FreeBSD games | There are still many text-mode games to enjoy
|
||||
nmon | A _very_ comprehensive top-alike utility for all aspects of the system
|
||||
Newsbeuter | Text-mode Atom/RSS feed reader
|
||||
|
||||
And I still have about 300MB free space to take me up to 2GB, so I may add more.
|
||||
|
||||
### We need to talk about ~~Kevin~~ Bluetooth
|
||||
|
||||
Observant readers will remember my hatred for Bluetooth and trying to pair terminal-based software with a Bluetooth device. When I bought a new Pi, I realized that I had to pair the damn thing up with the keyboards again. Oh, woe is me! But a search-engine session and a calming coffee enabled me to actually do it! It goes something like this:
|
||||
|
||||
|
||||
```
|
||||
sudo su
|
||||
bluetoothctl {enter}
|
||||
|
||||
[bluetooth]#
|
||||
|
||||
[bluetooth]# scan on
|
||||
Discovery started
|
||||
[CHG] Controller B8:27:EB:XX:XX:XX Discovering: yes
|
||||
|
||||
[bluetooth]# agent on
|
||||
Agent registered
|
||||
[NEW] Device B2:2B:XX:XX:XX:XX Bluetooth Keyboard
|
||||
[bluetooth]# pair B2:2B:XX:XX:XX:XX
Attempting to pair with B2:2B:XX:XX:XX:XX
|
||||
[CHG] Device B2:2B:XX:XX:XX:XX Connected: yes
|
||||
[agent] PIN code: 834652
|
||||
[CHG] Device B2:2B:XX:XX:XX:XX Modalias: usb:v05ACp0220d0001
|
||||
[CHG] Device B2:2B:XX:XX:XX:XX UUIDs: zzzzz
|
||||
[CHG] Device B2:2B:XX:XX:XX:XX UUIDs: yyyyy
|
||||
[CHG] Device B2:2B:XX:XX:XX:XX ServicesResolved: yes
|
||||
[CHG] Device B2:2B:XX:XX:XX:XX Paired: yes
|
||||
Pairing successful
|
||||
[CHG] Device B2:2B:XX:XX:XX:XX ServicesResolved: no
|
||||
[CHG] Device B2:2B:XX:XX:XX:XX Connected: no
|
||||
|
||||
[bluetooth]# trust B2:2B:XX:XX:XX:XX
|
||||
[CHG] Device B2:2B:XX:XX:XX:XX Trusted: yes
|
||||
Changing B2:2B:XX:XX:XX:XX trust succeeded
|
||||
[CHG] Device B2:2B:XX:XX:XX:XX RSSI: -53
|
||||
|
||||
[bluetooth]# scan off
|
||||
[CHG] Device B2:2B:XX:XX:XX:XX RSSI is nil
|
||||
Discovery stopped
|
||||
[CHG] Controller B8:27:EB:XX:XX:XX Discovering: no
|
||||
|
||||
[bluetooth]# exit
|
||||
Agent unregistered
|
||||
|
||||
$
|
||||
```
|
||||
|
||||
I was gobsmacked! No, really. I paired my other keyboard and am now considering pairing a speaker, but we'll see. I had a beer that night to celebrate my new-found "l33t" tech skills! Here is an [excellent guide][8] on how to do it.
|
||||
|
||||
### One more hardware mod
|
||||
|
||||
Until recently, I've been using as large a good-quality microSDHC card as I could afford, and in case of problems, I created a backup copy using the rsync-based rpi-clone. However, after reading various articles on the 'net where people complain about corrupted cards due to power problems, unclean shutdowns, and other mishaps, I decided to invest in a higher-quality card that hopefully will survive all this and more. This is important if you're traveling long distances and _really_ need your software to work at the destination.
|
||||
|
||||
After a long search, I found the [ATP Industrial-Grade MicroSD/MicroSDHC][9] cards, which are rated military-spec for demanding applications. That sounded perfect. However, with quality comes a cost, as well as (in this case) limited capacity. In order to keep my wallet happy, I limited myself to an 8GB card, which may not sound like a lot for a working computer, but bearing in mind I have a genuine 5.3GB of that 8GB free, it works just fine. I also have a level of reassurance that bigger but lower-quality cards can't give me, and I can create an ISO of that card that's small enough to email if need be. Result!
|
||||
|
||||
### What's next?
|
||||
|
||||
The Zero goes from strength to strength, only needing to go out more. I've gone technically about as far as I can for now, and any other changes will be small and incremental.
|
||||
|
||||
* * *
|
||||
|
||||
_This was originally published on [Peter Garner's blog][10] under a CC BY-NC-ND 4.0 and is reused here with the author's permission._
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/3/pi-zero-display
|
||||
|
||||
作者:[Peter Garner][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/petergarner
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/zero-osdc-lead.png?itok=bK70ON2W (Pi Zero)
|
||||
[2]: https://opensource.com/article/20/3/raspberry-pi-zero-w-road
|
||||
[3]: https://petergarner.net/notes/index.php?thisnote=20180511-Travels+with+a+Pi+%282%29
|
||||
[4]: https://www.modmypi.com/raspberry-pi/screens-and-displays/epaper/papirus-zero-epaper--eink-screen-phat-for-pi-zero-medium
|
||||
[5]: https://www.modmypi.com/raspberry-pi/cases-183/accessories-1125/watch-straps/pi-supply-papirus-zero-case
|
||||
[6]: https://github.com/PiSupply/PaPiRus
|
||||
[7]: https://opensource.com/sites/default/files/uploads/pizerodisplay.jpg (PiZero Display)
|
||||
[8]: https://www.sigmdel.ca/michel/ha/rpi/bluetooth_01_en.html
|
||||
[9]: https://www.digikey.com/en/product-highlight/a/atp/industrial-grade-microsd-microsdhc-cards
|
||||
[10]: https://petergarner.net/notes/index.php?thisnote=20190205-Travels+with+a+Pi+%283%29
|
@ -1,106 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to Install Netbeans on Ubuntu and Other Linux)
|
||||
[#]: via: (https://itsfoss.com/install-netbeans-ubuntu/)
|
||||
[#]: author: (Community https://itsfoss.com/author/itsfoss/)
|
||||
|
||||
How to Install Netbeans on Ubuntu and Other Linux
|
||||
======
|
||||
|
||||
_**In this tutorial, you’ll learn various ways to install Netbeans IDE on Ubuntu and other Linux distributions.**_
|
||||
|
||||
[NetBeans][1] is an open source integrated development environment with good cross-platform support. It is widely recognized by the Java and C/C++ development communities.
|
||||
|
||||
The development environment is quite flexible. You can configure this tool to support a wide array of development objectives. Practically, you can develop Web, Desktop and Mobile Applications without leaving this platform. It’s amazing, isn’t it? Besides this, the user can add a wide array of known languages such as [PHP][2], C, C++, HTML, [Ajax][3], JavaScript, JSP, Ruby on Rails and the list goes on and on!
|
||||
|
||||
If you are looking to install Netbeans on Linux, you have several ways to do that. I have written this tutorial primarily for Ubuntu but some installation methods are applicable to other distributions as well.
|
||||
|
||||
* [Installing Netbeans on Ubuntu using apt][4]: for Ubuntu and Ubuntu-based distributions, but it usually **has an older version of Netbeans**
|
||||
* [Installing Netbeans on Ubuntu using Snap][5]: for any Linux distribution that has Snap packaging support enabled
|
||||
* [Installing Netbeans using Flatpak][6]: for any Linux distribution with Flatpak package support
|
||||
|
||||
|
||||
|
||||
### Installing Netbeans IDE on Ubuntu using Apt package manager
|
||||
|
||||
If you search for Netbeans in Ubuntu Software Center, you’ll find two Netbeans packages available. Apache Netbeans is the Snap version, which is bigger in download size but gives you the latest Netbeans.
|
||||
|
||||
You can install it in one click, with no need to open a terminal. It’s the easiest way.
|
||||
|
||||
![Apache Netbeans in Ubuntu Software Center][7]
|
||||
|
||||
You may also opt for using the apt command, but with the apt version, you won’t get the latest Netbeans. For example, at the time of writing this tutorial, Ubuntu 18.04 has Netbeans version 10 available via apt while Snap has the latest Netbeans 11.
|
||||
|
||||
If you are a fan of [apt or apt-get][8], you can [enable the universe repository][9] and install Netbeans using this command in the terminal:
|
||||
|
||||
```
|
||||
sudo apt install netbeans
|
||||
```
|
||||
|
||||
### Installing Netbeans IDE on any Linux distribution using Snap
|
||||
|
||||
![][10]
|
||||
|
||||
Snap is a universal package manager, and if [you have enabled Snap on your distribution][11], you can install Netbeans using the following command:
|
||||
|
||||
```
|
||||
sudo snap install netbeans --classic
|
||||
```
|
||||
|
||||
The process might take some time to complete because the total download size is around 1 GB. Once done, you will see the app in the application launcher.
|
||||
|
||||
Not only will you get the latest Netbeans with Snap, but the installed version will also be automatically updated to newer releases.
|
||||
|
||||
### Installing Netbeans using Flatpak
|
||||
|
||||
[Flatpak][12] is another universal packaging system like Snap. Some distributions support Flatpak by default, while on others you can [enable Flatpak support][13].
|
||||
|
||||
Once you have the Flatpak support on your distribution, you can use the following command to install Netbeans:
|
||||
|
||||
```
|
||||
flatpak install flathub org.apache.netbeans
|
||||
```
|
||||
|
||||
Alternatively, you can always download the source code of this open source software and compile it yourself.
|
||||
|
||||
[Download Netbeans][14]
|
||||
|
||||
Hopefully, you selected one of the above methods to install Netbeans on your Ubuntu Linux system. But which one did you use? Did you face any issues? Do let us know.
|
||||
|
||||
![][15]
|
||||
|
||||
### Srimanta Koley
|
||||
|
||||
Srimanta is a passionate writer, a distrohopper & open source enthusiast. He is extremely fond of everything related to technology. He loves to read books and has an unhealthy addiction to the 90s!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/install-netbeans-ubuntu/
|
||||
|
||||
作者:[Community][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/itsfoss/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://netbeans.org/
|
||||
[2]: https://www.php.net/
|
||||
[3]: https://en.wikipedia.org/wiki/Ajax_(programming)
|
||||
[4]: tmp.ZNFNEC210y#apt
|
||||
[5]: tmp.ZNFNEC210y#snap
|
||||
[6]: tmp.ZNFNEC210y#flatpak
|
||||
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/03/apache-netbeans-ubuntu-software-center.jpg?ssl=1
|
||||
[8]: https://itsfoss.com/apt-vs-apt-get-difference/
|
||||
[9]: https://itsfoss.com/ubuntu-repositories/
|
||||
[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/03/Install_Netbeans_Linux.jpg?ssl=1
|
||||
[11]: https://itsfoss.com/install-snap-linux/
|
||||
[12]: https://flatpak.org/
|
||||
[13]: https://itsfoss.com/flatpak-guide/
|
||||
[14]: https://netbeans.apache.org/download/index.html
|
||||
[15]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/03/srimanta.jpg?ssl=1
|
@ -0,0 +1,243 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Getting started with shaders: signed distance functions!)
|
||||
[#]: via: (https://jvns.ca/blog/2020/03/15/writing-shaders-with-signed-distance-functions/)
|
||||
[#]: author: (Julia Evans https://jvns.ca/)
|
||||
|
||||
Getting started with shaders: signed distance functions!
|
||||
======
|
||||
|
||||
Hello! A while back I learned how to make fun shiny spinny things like this using shaders:
|
||||
|
||||
![][1]
|
||||
|
||||
My shader skills are still extremely basic, but this fun spinning thing turned out to be a lot easier to make than I thought it would be (with a lot of copying of code snippets from other people!).
|
||||
|
||||
The big idea I learned when doing this was something called “signed distance functions”, which I learned about from a very fun tutorial called [Signed Distance Function tutorial: box & balloon][2].
|
||||
|
||||
In this post I’ll go through the steps I used to learn to write a simple shader and try to convince you that shaders are not that hard to get started with!
|
||||
|
||||
### examples of more advanced shaders
|
||||
|
||||
If you haven’t seen people do really fancy things with shaders, here are a couple:
|
||||
|
||||
1. this very complicated shader that is like a realistic video of a river: <https://www.shadertoy.com/view/Xl2XRW>
|
||||
2. a more abstract (and shorter!) fun shader with a lot of glowing circles: <https://www.shadertoy.com/view/lstSzj>
|
||||
|
||||
|
||||
|
||||
### step 1: my first shader
|
||||
|
||||
I knew that you could make shaders on shadertoy, and so I went to <https://www.shadertoy.com/new>. They give you a default shader to start with that looks like this:
|
||||
|
||||
![][3]
|
||||
|
||||
Here’s the code:
|
||||
|
||||
```
|
||||
void mainImage( out vec4 fragColor, in vec2 fragCoord )
|
||||
{
|
||||
// Normalized pixel coordinates (from 0 to 1)
|
||||
vec2 uv = fragCoord/iResolution.xy;
|
||||
|
||||
// Time varying pixel color
|
||||
vec3 col = 0.5 + 0.5*cos(iTime+uv.xyx+vec3(0,2,4));
|
||||
|
||||
// Output to screen
|
||||
fragColor = vec4(col,1.0);
|
||||
}
|
||||
```
|
||||
|
||||
This doesn’t do anything that exciting, but it already taught me the basic structure of a shader program!
|
||||
|
||||
### the idea: map a pair of coordinates (and time) to a colour
|
||||
|
||||
The idea here is that you get a pair of coordinates as an input (`fragCoord`) and you need to output an RGBA vector with the colour of that pixel. The function can also use the current time (`iTime`), which is how the picture changes over time.
|
||||
|
||||
The neat thing about this programming model (where you map a pair of coordinates and the time to a colour) is that it’s extremely trivially parallelizable. I don’t understand a lot about GPUs, but my understanding is that this kind of task (where you have 10,000 trivially parallelizable calculations to do at once) is exactly the kind of thing GPUs are good at.
|
||||
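The mapping above is easy to model outside a shader. Here’s a rough Python sketch of the same math as the default Shadertoy shader (the function and variable names are my own, not part of GLSL or Shadertoy):

```python
import math

def main_image(frag_coord, resolution, t):
    """Map (pixel coordinates, time) -> an RGBA colour, like a fragment shader."""
    # Normalized pixel coordinates (from 0 to 1)
    u = frag_coord[0] / resolution[0]
    v = frag_coord[1] / resolution[1]
    # Time-varying colour: cos() is in [-1, 1], so 0.5 + 0.5*cos() is in [0, 1]
    col = [0.5 + 0.5 * math.cos(t + c + phase)
           for c, phase in zip((u, v, u), (0.0, 2.0, 4.0))]
    return col + [1.0]  # append alpha to make RGBA

# Every pixel is an independent call -- this is what makes it so parallelizable
pixel = main_image((320, 240), (640, 480), t=1.0)
```

Because each pixel's colour depends only on its own coordinates and the time, a GPU can run thousands of these calls at once without any coordination.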
|
||||
### step 2: iterate faster with `shadertoy-render`
|
||||
|
||||
After a while of playing with shadertoy, I got tired of having to click “recompile” on the Shadertoy website every time I saved my shader.
|
||||
|
||||
I found a command line tool that will watch a file and update the animation in real time every time I save called [shadertoy-render][4]. So now I can just run:
|
||||
|
||||
```
|
||||
shadertoy-render.py circle.glsl
|
||||
```
|
||||
|
||||
and iterate way faster!
|
||||
|
||||
### step 3: draw a circle
|
||||
|
||||
Next I thought – I’m good at math! I can use some basic trigonometry to draw a bouncing rainbow circle!
|
||||
|
||||
I know the equation for a circle (`x**2 + y**2 = whatever`!), so I wrote some code to do that:
|
||||
|
||||
![][5]
|
||||
|
||||
Here’s the code: (which you can also [see on shadertoy][6])
|
||||
|
||||
```
|
||||
void mainImage( out vec4 fragColor, in vec2 fragCoord )
|
||||
{
|
||||
// Normalized pixel coordinates (from 0 to 1)
|
||||
vec2 uv = fragCoord/iResolution.xy;
|
||||
// Draw a circle whose center depends on what time it is
|
||||
vec2 shifted = uv - vec2((sin(iGlobalTime) + 1)/2, (1 + cos(iGlobalTime)) / 2);
|
||||
if (dot(shifted, shifted) < 0.03) {
|
||||
// Varying pixel colour
|
||||
vec3 col = 0.5 + 0.5*cos(iGlobalTime+uv.xyx+vec3(0,2,4));
|
||||
fragColor = vec4(col,1.0);
|
||||
} else {
|
||||
// make everything outside the circle black
|
||||
fragColor = vec4(0,0,0,1.0);
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
This takes the dot product of the shifted coordinate vector with itself, which is the same as calculating `x^2 + y^2`. I played with the center of the circle a little bit in this one too – I made the center `vec2((sin(iGlobalTime) + 1)/2, (1 + cos(iGlobalTime)) / 2)`, which means that the center of the circle also goes in a circle depending on what time it is.
|
||||
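To see why the dot-product trick works, here’s the same inside-the-circle test sketched in Python (the helper names are mine, not from any shader API):

```python
import math

def dot(a, b):
    """Dot product of two same-length vectors."""
    return sum(x * y for x, y in zip(a, b))

def in_circle(uv, t, radius_sq=0.03):
    """True if point uv is inside the moving circle at time t."""
    # The centre itself travels in a circle as time passes
    center = ((math.sin(t) + 1) / 2, (1 + math.cos(t)) / 2)
    shifted = (uv[0] - center[0], uv[1] - center[1])
    # dot(v, v) == x**2 + y**2, so this is exactly the circle equation
    return dot(shifted, shifted) < radius_sq

# At t = 0 the centre is (0.5, 1.0), so that point is inside the circle
```

The `dot(shifted, shifted)` form is handy in shaders because `dot` is a cheap built-in, and it avoids a square root.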
|
||||
### shaders are a fun way to play with math!
|
||||
|
||||
One thing I think is fun about this already (even though we haven’t done anything super advanced!) is that these shaders give us a fun visual way to play with math – I used `sin` and `cos` to make something go in a circle, and if you want to get some better intuition about how trigonometric functions work, maybe writing shaders would be a fun way to do that!
|
||||
|
||||
I love that you get instant visual feedback about your math code – if you multiply something by 2, things get bigger! or smaller! or faster! or slower! or more red!
|
||||
|
||||
### but how do we do something really fancy?
|
||||
|
||||
This bouncing circle is nice but it’s really far from the super fancy things I’ve seen other people do with shaders. So what’s the next step?
|
||||
|
||||
### idea: instead of using if statements, use signed distance functions!
|
||||
|
||||
In my circle code above, I basically wrote:
|
||||
|
||||
```
|
||||
if (dot(uv, uv) < 0.03) {
|
||||
// code for inside the circle
|
||||
} else {
|
||||
// code for outside the circle
|
||||
}
|
||||
```
|
||||
|
||||
But the problem with this (and the reason I was feeling stuck) is that it’s not clear how it generalizes to more complicated shapes! Writing a bajillion if statements doesn’t seem like it would work well. And how do people render those 3d shapes anyway?
|
||||
|
||||
So! **Signed distance functions** are a different way to define a shape. Instead of using a hardcoded if statement, you define a **function** that tells you, for any point in the world, how far away that point is from your shape. For example, here’s a signed distance function for a sphere.
|
||||
|
||||
```
|
||||
float sdSphere( vec3 p, float center )
|
||||
{
|
||||
return length(p)-center;
|
||||
}
|
||||
```
|
||||
|
||||
Signed distance functions are awesome because they’re:
|
||||
|
||||
* simple to define!
|
||||
* easy to compose! You can take a union / intersection / difference with some simple math if you want a sphere with a chunk taken out of it.
|
||||
* easy to rotate / stretch / bend!
|
||||
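The composition point is worth a tiny sketch. Using the standard SDF identities (union is `min`, intersection is `max`, difference is `max(d1, -d2)`; these are well-known conventions, not code from the tutorial), a Python version might look like:

```python
import math

def sd_sphere(p, radius):
    """Signed distance from point p to a sphere of given radius at the origin."""
    return math.sqrt(sum(c * c for c in p)) - radius

def union(d1, d2):
    return min(d1, d2)          # inside either shape

def intersection(d1, d2):
    return max(d1, d2)          # inside both shapes

def difference(d1, d2):
    return max(d1, -d2)         # shape 1 with a shape-2-sized chunk removed

p = (2.0, 0.0, 0.0)
big, small = sd_sphere(p, 1.0), sd_sphere(p, 1.5)
# Negative = inside, positive = outside, zero = exactly on the surface
```

Composing complicated shapes is then just arithmetic on distances, which is exactly why SDFs scale better than a pile of if statements.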
|
||||
|
||||
|
||||
### the steps to making a spinning top
|
||||
|
||||
When I started out I didn’t understand what code I needed to write to make a shiny spinning thing. It turns out that these are the basic steps:
|
||||
|
||||
1. Make a signed distance function for the shape I want (in my case an octahedron)
|
||||
2. Raytrace the signed distance function so you can display it in a 2D picture (or raymarch? The tutorial I used called it raytracing and I don’t understand the difference between raytracing and raymarching yet)
|
||||
3. Write some code to texture the surface of your shape and make it shiny
|
||||
|
||||
|
||||
|
||||
I’m not going to explain signed distance functions or raytracing in detail in this post because I found this [AMAZING tutorial on signed distance functions][2] that is very friendly and honestly it does a way better job than I could do. It explains how to do the 3 steps above and the code has a ton of comments and it’s great.
|
||||
|
||||
* The tutorial is called “SDF Tutorial: box & balloon” and it’s here: <https://www.shadertoy.com/view/Xl2XWt>
|
||||
* Here are tons of signed distance functions that you can copy and paste into your code <http://www.iquilezles.org/www/articles/distfunctions/distfunctions.htm> (and ways to compose them to make other shapes)
|
||||
|
||||
|
||||
|
||||
### step 4: copy the tutorial code and start changing things
|
||||
|
||||
Here I used the time-honoured programming practice of “copy the code and change things in a chaotic way until I get the result I want”.
|
||||
|
||||
My final shader of a bunch of shiny spinny things is here: <https://www.shadertoy.com/view/wdlcR4>
|
||||
|
||||
The animation comes out looking like this:
|
||||
|
||||
![][7]
|
||||
|
||||
Basically to make this I just copied the tutorial on signed distance functions that renders the shape based on the signed distance function and:
|
||||
|
||||
* changed `sdfBalloon` to `sdfOctahedron` and made the octahedron spin instead of staying still in my signed distance function
|
||||
* changed the `doBalloonColor` colouring function to make it shiny
|
||||
* made there be lots of octahedrons instead of just one
|
||||
|
||||
|
||||
|
||||
### making the octahedron spin!
|
||||
|
||||
Here’s some of the code I used to make the octahedron spin! This turned out to be really simple: I first copied an octahedron signed distance function from [this page][8], then added a `rotate` to make it rotate based on time, and suddenly it’s spinning!
|
||||
|
||||
```
|
||||
vec2 sdfOctahedron( vec3 currentRayPosition, vec3 offset ){
|
||||
vec3 p = rotate((currentRayPosition), offset.xy, iTime * 3.0) - offset;
|
||||
float s = 0.1; // what is s?
|
||||
p = abs(p);
|
||||
float distance = (p.x+p.y+p.z-s)*0.57735027;
|
||||
float id = 1.0;
|
||||
return vec2( distance, id );
|
||||
}
|
||||
```
|
||||
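As a side note, the magic constant `0.57735027` in the distance formula is just 1/√3: the octahedron’s faces lie in planes like `x + y + z = s`, whose normal `(1, 1, 1)` has length √3, so the plane-distance expression `p.x + p.y + p.z - s` has to be scaled by 1/√3. A quick Python check (my own verification, not from the tutorial):

```python
import math

# The face plane x + y + z = s has normal (1, 1, 1), with length sqrt(3);
# dividing by that length turns (p.x + p.y + p.z - s) into a true distance.
scale = 1 / math.sqrt(3)
```

So the shader is computing the distance to the nearest face plane of the octahedron, scaled into real units.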
|
||||
### making it shiny with some noise
|
||||
|
||||
The other thing I wanted to do was to make my shape look sparkly/shiny. I used a noise function that I found in [this github gist][9] to make the surface look textured.
|
||||
|
||||
Here’s how I used the noise function. Basically I just changed parameters to the noise function mostly at random (multiply by 2? 3? 1800? who knows!) until I got an effect I liked.
|
||||
|
||||
```
|
||||
float x = noise(rotate(positionOfHit, vec2(0, 0), iGlobalTime * 3.0).xy * 1800.0);
|
||||
float x2 = noise(lightDirection.xy * 400.0);
|
||||
float y = min(max(x, 0.0), 1.0);
|
||||
float y2 = min(max(x2, 0.0), 1.0) ;
|
||||
vec3 balloonColor = vec3(y , y + y2, y + y2);
|
||||
```
|
||||
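The `min(max(x, 0.0), 1.0)` pattern above is a hand-written clamp into the [0, 1] range (GLSL also has a built-in `clamp()` for this). In Python terms:

```python
def clamp01(x):
    """Equivalent of GLSL clamp(x, 0.0, 1.0): pin x into the [0, 1] range."""
    return min(max(x, 0.0), 1.0)

# Noise values that fall outside [0, 1] get pinned to the nearest end
values = [clamp01(x) for x in (-0.5, 0.3, 1.7)]
# values == [0.0, 0.3, 1.0]
```

Clamping keeps the noise usable as a colour component, since colour channels only make sense between 0 and 1.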
|
||||
### writing shaders is fun!
|
||||
|
||||
That’s all! I had a lot of fun making this thing spin and be shiny. If you also want to make fun animations with shaders, I hope this helps you make your cool thing!
|
||||
|
||||
As usual with subjects I don’t know that well, I’ve probably said at least one wrong thing about shaders in this post, so let me know what it is!
|
||||
|
||||
Again, here are the 2 resources I used:
|
||||
|
||||
1. “SDF Tutorial: box & balloon”: <https://www.shadertoy.com/view/Xl2XWt> (which is really fun to modify and play around with)
|
||||
2. Tons of signed distance functions that you can copy and paste into your code <http://www.iquilezles.org/www/articles/distfunctions/distfunctions.htm>
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://jvns.ca/blog/2020/03/15/writing-shaders-with-signed-distance-functions/
|
||||
|
||||
作者:[Julia Evans][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://jvns.ca/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://jvns.ca/images/spinny.gif
|
||||
[2]: https://www.shadertoy.com/view/Xl2XWt
|
||||
[3]: https://jvns.ca/images/colour.gif
|
||||
[4]: https://github.com/alexjc/shadertoy-render
|
||||
[5]: https://jvns.ca/images/circle.gif
|
||||
[6]: https://www.shadertoy.com/view/tsscR4
|
||||
[7]: https://jvns.ca/images/octahedron2.gif
|
||||
[8]: http://www.iquilezles.org/www/articles/distfunctions/distfunctions.htm
|
||||
[9]: https://gist.github.com/patriciogonzalezvivo/670c22f3966e662d2f83
|
@ -0,0 +1,184 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How I migrated from a Mac Mini to a Raspberry Pi)
|
||||
[#]: via: (https://opensource.com/article/20/3/mac-raspberry-pi)
|
||||
[#]: author: (Peter Garner https://opensource.com/users/petergarner)
|
||||
|
||||
How I migrated from a Mac Mini to a Raspberry Pi
|
||||
======
|
||||
Learn more about Linux by turning a Raspberry Pi Model 2 into a workable
|
||||
desktop computer.
|
||||
![Vector, generic Raspberry Pi board][1]
|
||||
|
||||
Some time ago, I decided to move my computing environment from a Mac Mini PowerPC to a Raspberry Pi Model 2. This article describes my reasons for doing so and how I did it. While it is quite technical in places, if you're considering switching from an existing system to something decidedly lean and mean, there are things that you need to know before making that leap. There are lots of links to click as well, which will lead you to the software and apps that I mention.
|
||||
|
||||
Enjoy!
|
||||
|
||||
## Saying goodbye to the Mac
|
||||
|
||||
I have to admit, I've never really been an Apple fanboi, especially following a short (and ultimately unsatisfactory) fling with a plastic polycarbonate MacBook back in 2006. Although it was beautifully designed, and the software "Just Worked," I was understandably upset when it decided to expire shortly after the warranty period expired (design faults, apparently). Ah well.
|
||||
|
||||
I swore never to "invest" in an Apple machine again—until I discovered a used Mac Mini PowerPC on eBay that could be had for around $100 in 2012. It was new back in 2005 but had apparently been refurbished. "What have I got to lose, especially at that price?" I asked myself. Nobody answered, so I placed a last-minute bid, won it, and invested about the same sum of money again in bumping the memory up to 1GB and buying the OS on DVD. The OS X version was 10.4.7 Tiger, and the architecture was Power PC. It was sedate but reliable, and I was happy. It didn't take a lot of power either; some 60 watts at full load, so that was a bonus. I spent many happy hours tinkering with it and trying to find software that was supported on a device that old.
|
||||
|
||||
Predictably though, as my computing requirements grew and the Mac got older, it started to get noticeably slower, and I was aware that even simple tasks—such as asking it to run a web browser and display an HTTPS page—were causing it problems. When I finally managed to find antivirus software for it, I became aware of just how noisy the Mini's cooling fan was as the CPU struggled with the extra load.
|
||||
|
||||
A quick check of the performance monitors revealed thousands of memory-paging faults, and I realized that my old friend was soon destined for the knacker's yard. Of course, that meant searching for a replacement, and that's when the fun started.
|
||||
|
||||
## A(nother) small computer
|
||||
|
||||
My main problem was that I didn't have a big budget. I looked at eBay again and found a number of Mac Minis for sale, all around the $500 mark, and many of those were early basic-spec Intel units that, like my old Mac, people had simply grown out of. Essentially, I wanted something like the old Mini, ideally with similar power consumption. A new one was out of the question, obviously.
|
||||
|
||||
Let me state that my computer requirements are pretty undemanding, and for photo/graphics work, I have another computer that consumes power like there's no tomorrow and gives off enough heat to keep me warm in winter. And then I got to thinking about the [Raspberry Pi Model 2][2]. Now before you laugh, I have around six of the things running various servers, and they do just fine. One runs a small web server, another runs a mail server, and so on. Each one costs around $30, and most use a cheap microSDHC card, so if one fails, I can easily swap it out for another, and I can usually buy a suitable card at a local supermarket—try doing that when your laptop drive fails! I also have a Netgear ReadyNAS 102 with a couple of 2TB hard drives to act as my bulk storage.
|
||||
|
||||
Suddenly, my plan looked as though it might be viable after all!
|
||||
|
||||
## Spec'ing it out
|
||||
|
||||
The specification was a bit of a no-brainer: The Model 2 Pi comes with 1GB of memory standard, the Ethernet runs at 100Mbps maximum, the clock speed is 900MHz, there are four USB ports, and that's yer lot, mate. You can overclock it, but I've never wanted to try this for various reasons.
|
||||
|
||||
I had a Pi in my spares drawer, so no problem there. I ordered a posh aluminum case made by [Flirc][3] that was on offer for $20 and duly slotted in the Pi. The power supply unit (PSU) had to be a genuine two-amp device, and again, I had a spare lying around. If you take your Pi ownership seriously, I recommend the [Anker 40W][4] five-port desktop charger: it has intelligent power management, and I'm running five Pis from one unit. Incidentally, if you inadvertently use a PSU that can't deliver the required current, you'll keep seeing a square, multi-colored icon in the top-right corner of your screen, so be warned.
|
||||
|
||||
The microSDHC "disk" was more of an issue, though. I always use SanDisk, and this time I wanted something fast, especially as this was to be a "desktop" machine. In the end, I went for a [SanDisk 8GB Extreme Pro UHS-1][5] card that promised up to 90 to 95 Mbps write/read performance. "8GB? That's not a lot of space," I hear you Windows users cry, and because this is Linux, there doesn't need to be.
|
||||
|
||||
The way I envisioned it, I'd set up the Pi normally and use it primarily as a boot disk. I'd host all my documents and media files on the network-attached storage (NAS) box, and all would be well. The NAS shares would be accessed via network filesystem (NFS), and I'd just mount them as directories on the Pi.
|
||||
|
||||
Quite early on, I elected to move my entire home directory onto the NAS, and this has worked well, with some quirks. The problem I faced was a Pi quirk, and although I was sure there was a fix, I wanted to get it up and running before the Mac finally crapped out. When the Pi boots, it seems to enable the networking part quite late in the sequence, and I found that I couldn't do my NFS mounts because the networking interface hadn't come up yet. Rather than hack around with tricky scripts, I decided to simply mount the NFS shares by hand after I'd logged in after a successful boot. This seemed to work, and it's the solution I'm using now. Now that I had a basic strategy, it was time to implement it on the "live" machine.
|
||||
|
||||
That's the beauty of working with the Raspberry Pi—you can quickly hack together a testbed and have a system up and running in under 30 minutes.
|
||||
|
||||
Regarding video, I bought an HDMI-to-DVI cable to use with my Dell monitor, and in GUI desktop mode, this comes up as 1280x1024—plenty good enough for my use. If you have a monster flat-screen TV, you can always use that instead.
|
||||
|
||||
## My software environment
|
||||
|
||||
### Operating system
|
||||
|
||||
I ultimately decided on [Arch Linux for ARM][6] 7H as the operating system. I'm a [Raspbian][7] veteran, but I didn't need the educational software that comes with it (I have other Pis for that). Arch provides a minimal environment but is full-featured, well-supported, and powerful; it also has bucket-loads of software available. After its initial installation, I'd used just over 1.2GB of space, and even now, with all my software on the microSDHC, I'm only using 2.8GB of my 8GB card. Please note that the Pi 2 is officially Arch Linux ARM 7, not 6.
|
||||
|
||||
### Desktop
|
||||
|
||||
I wanted a graphical desktop environment (even though I'm a command-line sorta guy), but it needed to be in keeping with the lean and mean ethos. I'd used [LXDE][8] before and was happy with it, so I installed it; GNOME and KDE were just too big.
|
||||
|
||||
### Web browser
|
||||
|
||||
The web browser was a bit of a problem, but after trying the default Midori, Epiphany, and a couple of others, I decided on [Firefox][9]. It's a bit flabby, but it follows standards well, and if you're going to digitally sign LibreOffice ODT documents, you'll need it anyway. One problem on a machine of this power is the tremendous toll that web-based ads place on the overall memory usage. In fact, a badly ad'ed page can make the browser stop completely, so I had to make those ads disappear. One way would be to install an ad-blocker plugin, but that's another hit on available memory, so a simpler method was called for.
|
||||
|
||||
As this is a Linux box, I simply downloaded an [ad-blocking hosts file][10]. This is an amazing piece of community work that consists of over 15,000 hostnames for basically any server that spits out ads. All the entries point to an IP address of 0.0.0.0, so there's no time wasted and your bandwidth's your own again. It's a free download and can be added to the end of an existing hosts file. Of course, the major value, as far as I'm concerned, is that page load times are much quicker.
|
||||
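This works because the resolver consults `/etc/hosts` before doing a DNS lookup, so every listed hostname short-circuits to the unroutable address `0.0.0.0`. Here's a minimal Python sketch of how such a hosts-format blocklist maps names to addresses (the entries are made-up examples, not from the real 15,000-line file):

```python
def parse_hosts(text):
    """Map hostname -> IP for every non-comment entry of a hosts-format file."""
    table = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blank lines
        if not line:
            continue
        ip, *names = line.split()  # one IP, then one or more hostnames
        for name in names:
            table[name] = ip
    return table

# A tiny stand-in for the downloaded ad-blocking hosts file
blocklist = parse_hosts("""
# ad servers
0.0.0.0 ads.example.com
0.0.0.0 tracker.example.net
""")
```

Every browser and application on the box benefits at once, with no per-application plugin eating into the Pi's limited memory.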
|
||||
The screen capture below shows an ad-free Firefox overlaid with the same page in [ELinks][11].
|
||||
|
||||
![Firefox and eLinks browsers on Raspberry Pi][12]
|
||||
|
||||
No ads in either, but if you don't need all the eye candy rendered by Firefox, ELinks will provide a super-clean experience. (Normally, all that whitespace in the Firefox image is filled with ads.) The ELinks browser is an interesting hybrid browser that is primarily text-based and is similar to the classic pure-text Lynx browser.
|
||||
|
||||
### Messaging
|
||||
|
||||
It would be overkill, and undesirable from a security point of view, to have Microsoft Skype on the Pi, so I decided on a Jabber/XMPP client, [Psi][13]. Psi has the advantage of not having a multitude of dependencies, and it also works really well. It's easy to take part in multi-user chats, and I have another Pi hosting a Jabber server to test it on. There's no character-mode version, unfortunately, and most of the text-based clients I tried had problems, so it's a GUI-only situation at the moment. No matter; it works well and doesn't use a lot of resources.
|
||||
|
||||
### Email
|
||||
|
||||
I also tried a number of email applications: this was easily the most important application. Eventually, I chose [Claws Mail][14]. Sadly, it doesn't do HTML mail, but it's rock-solid reliable. I have to say that I can't get the GNU Privacy Guard (GPG) plugin working properly yet due to some unresolved version issues, but I can always encrypt messages in a terminal, if need be.

### Audio

Music is important to me, and I chose [SMPlayer][15] as my media player. It supports many options, including playlists for local and networked files and internet radio streaming. It does the job well.

### Video

I'll not go into the video player in any great detail. Bearing in mind the hardware specs of the Pi, reliably playing back a video stream, even on the same network, was problematic. I decided that if I wanted to watch videos, I had other devices more suited to it. I did experiment with the **gpu_mem** setting in **[/boot/config.txt][16]**, switching it from the default 64MB to 96MB. I was prepared to borrow a bit of application memory for the video player, but even that didn't seem to make it work well. In the end, I kept that setting so that the desktop environment would run more smoothly, and so far, I haven't had problems. The irony of this is that I have another Pi that has a [DLNA][17] server installed, and this can stream video exceedingly well—not just to one client, but several. In its defense, though, it doesn't have a desktop environment to contend with. So, for now, I don't bother trying to play video.

### Image processing

I need to do simple, lightweight photo and image editing, and I knew from prior experience that GIMP and similar packages would bring the Pi to its knees. I found an app called [Pinta][18], which resembles an enhanced Microsoft Paint, but with more cojones. As someone with a large image collection, I also needed a slideshow application. After much evaluation, I decided on [feh][19]. Normally run from a terminal within the GUI desktop, it has an incredible array of options that can help you produce an image slideshow, and again, it has low memory requirements.

### Office suite

And then there was an office suite. On the old Mac Mini, I was happily (and legally) running a copy of Microsoft Mac Office 2004, and I was truly sorry to lose that. I just needed a Microsoft Word and Excel equivalent, but I had to bear in mind the Pi's limitations. Sure, there are standalone versions of word-processor and spreadsheet applications, but there was nothing that really gave me confidence that I could edit a full-featured document.

I already knew of [LibreOffice][20], but I had my doubts about it because of its Java Runtime Environment (JRE) requirement, or so I thought. Thankfully, JRE is optional, and as long as I didn't want to use (database) connection pooling or macros, there was no need to enable it. I also used as many built-in options as possible, rejecting skins and themes; this brought the overall memory footprint down to a reasonable level, and hey, I'm writing this on LibreOffice Writer now! I adopted the attitude that if it has a built-in theme, use it!

Here's the current [memory overview][21] (in MB) from within the GUI desktop:

![Raspberry Pi GUI memory usage][22]

### Miscellaneous

Other desktop software I've installed (not much, as I wanted to keep this a minimal installation) is:

  * [FileZilla][23]: SFTP/FTP client
  * [PuTTY][24]: SSH/telnet terminal frontend
  * [Mousepad][25]: A versatile plain-text editor, similar to Wordpad or Notepad, but much more powerful

Overall, the entire setup works as intended. I've found that it performs well, if a little slow sometimes, but this is to be expected, as it's running on a Raspberry Pi with a 900MHz clock speed and 1GB of memory. As long as you're aware of and prepared to accept the limitations, you can have a cheap, very functional system that doesn't take up all your desk space.

## Lacking in characters

Life with a Pi desktop is not all about the GUI; it's a very competent command-line environment too, should you need one. As a Linux developer and geek, I am very comfortable in a character-mode environment, and this is where the Pi really comes into its own. The performance you can expect in the command-line environment, at least in my configuration, is dependent on a number of factors. I'm limited to a certain extent by the Pi's network-interface speed and the overall performance of my Netgear ReadyNAS 102, another slightly underpowered, consumer-grade ARM box. The one thing that did please me, though, was the noticeable increase in speed over the Mac Mini!

Running in a native terminal environment, this is the typical memory usage (in MB) you might expect:

![Raspberry Pi terminal memory usage][26]

One thing to note is the lack of a swap partition. It's generally accepted that any type of swap system on a Raspberry Pi is a Very Bad Thing™ and will wear out your SD card in no time. I considered setting up a swap partition on the NAS box, but I ruled this out early on, as it would very negatively impact the network as a whole, and as with the NFS mount issue, the swap partition would need to be mounted before the network came up. So no go.

Having lived with Raspberry Pis for some time now, I can say that you have to learn to set things up carefully in the first place to avoid needing swap at all; ultimately, this teaches you to manage computers better.

As part of my efforts to make the Pi as useful as possible, I had to envision a scenario where whatever I was working on was either so resource-hungry that I couldn't run a GUI desktop or the GUI was just not required. That meant reproducing as many of the desktop-only apps as possible in a character-mode environment. In fact, this was easier than finding the equivalent desktop apps.

Here is my current lineup:

  * **File manager:** [Midnight Commander][27]; if you're old enough to remember Norton Commander, you'll know what it looks like.
  * **File transfer:** SSH/SFTP; normally handled by PuTTY and FileZilla on the desktop, here you just use these two commands as provided.
  * **Web browser:** Lynx or Links are classic character-mode browsers that significantly speed up the internet experience.
  * **Music player:** Yes, you can play music in a character-mode terminal! [Mpg123][28] is the name of the app, and when it's run as **mpg123 -C**, it allows full keyboard control of all playback functions. If you want to be really cool, you can alter the way Midnight Commander handles MP3 files by editing **/etc/mc/mc.ext** and adding the code snippet below. This allows you to browse and play your music collection with ease.

    ```
    shell/i/.mp3
        Open=/usr/bin/mpg123 -C %f
        View=%view{ascii} /usr/lib/mc/ext.d/sound.sh view mp3
    ```

  * **Office:** Don't be silly! Oh wait, though; I installed the character-mode spreadsheet app called **sc** (Supercalc?), and there's always Vi if you want to edit a text document, but don't expect to be able to edit any Microsoft files. If your need is truly great, you can install a supplementary application called Antiword, which will let you view a .doc file.
  * **Email:** A bit of a problem, as the Claws Mail mailbox format is not directly compatible with my character-mode app of choice, Mutt. There's a workaround, but I'm only going to do it if I get some spare time. For sending quick emails, I installed ssmtp, which is described as "a send-only sendmail emulator for machines which normally pick their mail up from a centralized mail hub." The setup is minimal, and overhead is practically nil, as normally it's invoked only when mail is being sent. So, you can do things like typing **echo "The donuts are on my desk" | mail -s "Important News" [everybody@myoffice.com][29]** from the command line without firing up a GUI mail app.

For everything else, it's just a question of flipping back to the GUI desktop. Speaking of which…

![Raspberry Pi GUI desktop environment][30]

Quite a busy screen, but the Raspberry Pi handles it well. Here, I'm using LibreOffice to write this article, there's a network status box, Firefox is on the mpg123 website, and there's a terminal running top showing how much memory (isn't) being used. The drop-down menu on the left shows the office suite apps.

## Other scenarios and thoughts

### What's where

With any hybrid system like this, it's important to remember what is located where so that, in the event of any problems, recovery will be easier. In my current configuration, the microSDHC card contains only the operating system, and as much as possible, any system-configuration files are also on there. Your own userland data will be on the NAS in your home directory. Ideally, you should be able to replace or update the software on the microSDHC without having any adverse effects on your computing environment as a whole, but in IT, it's never that straightforward.

In the X11 GUI desktop system, although there is a default config file in **/etc/X11**, you will invariably have a customized version containing your own preferences. (This is by design.) Your own file on the NAS, however, will reference files on the microSDHC:

![Location of files][31]

The overall effect is that if you change one environment for another, you will invariably experience a change (or loss) in functionality. Hopefully, the changes will be minor, but you do need to be aware of the sometimes ambiguous links.

Please remember that the **root** user will _always_ be on the microSDHC, and if your NAS box fails for any reason, you'll still be able to boot your system and at least do some recovery work.

### NAS alternatives

While I'm in my home office, I have full access to my NAS box, which represents what (in today's terminology) would be a personal cloud. I much prefer this solution to a commercial cloud that is invariably managed by a company of unknown origin, location, security, and motives. For those reasons, I will always host my data where I can see it and physically get to it as required. Having said that, you may not be as paranoid as I am and may want to hook up your Pi desktop to an external cloud share.

In that case, using an NFS mount as a basis for your home directory should mean that it's simply a matter of editing your **/etc/fstab** to point the NFS client at a different location. In my setup, the NAS box is called, er, NASBOX, and the local NFS share mountpoint is called **/NASmount**. When you create your non-root user, you'll simply move their home directory to an existing directory called **/NASmount**:

```
NASBOX:/data/yourshare /NASmount nfs nfsvers=3,rsize=8192,wsize=8192,timeo=60,intr,auto 0 0

mount -t nfs -v NASBOX:/data/yourshare /NASmount
```

and then your directory tree could look like this:

```
/NASmount/home/user
```

So, by simply changing the **/etc/fstab** entry, you could quickly be hooked up to someone else's cloud. This, as they say, is left as an exercise for the reader.
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to test failed authentication attempts with test-driven development)
[#]: via: (https://opensource.com/article/20/3/failed-authentication-attempts-tdd)
[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzic)

How to test failed authentication attempts with test-driven development
======
Mountebank makes it easier to test the "less happy path" in your code.
![Programming keyboard.][1]

Testing often begins with what we hope happens. In my [previous article][2], I demonstrated how to virtualize a service you depend on when processing the "happy path" scenario (that is, testing the outcome of a successful login attempt). But we all know that software fails in spectacular and unexpected ways. Now's the time to take a closer look at how to process the "less happy paths": what happens when someone tries to log in with the wrong credentials?

In the first article linked above, I walked through building a user authentication module. (Now is a good time to review that code and get it up and running.) This module does not do all the heavy lifting; it mostly relies on another service to do those tougher tasks—enable user registration, store the user accounts, and authenticate the users. The module will only be sending HTTP POST requests to this additional service's endpoint; in this case, **/api/v1/users/login**.
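For illustration, here is a minimal Python sketch of the kind of login request the module sends. Nothing is sent on the wire here; the payload field names and base URL are assumptions, not the article's actual implementation:

```python
import json
from urllib.request import Request

def build_login_request(base_url, username, password):
    """Build (but do not send) a POST to the login endpoint named above."""
    # The JSON field names are illustrative assumptions.
    body = json.dumps({"username": username, "password": password}).encode()
    return Request(
        base_url + "/api/v1/users/login",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_login_request("http://localhost:3001", "mickey", "cheese")
print(req.method, req.full_url)  # POST http://localhost:3001/api/v1/users/login
```

Port 3001 matches the imposter port used later in this article.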
What do you do if the service you're dependent on hasn't been built yet? This scenario creates a blockage. In the previous post, I explored how to remove that blockage by using service virtualization enabled by [mountebank][3], a powerful test environment.

This article walks through the steps required to enable the processing of user authentication in cases when a user repeatedly attempts to log in. The third-party authentication service allows only three attempts to log in, after which it ceases to service the HTTP requests arriving from the offending domain.

### How to simulate repeat requests

Mountebank makes it very easy to simulate a service that listens on a network port, matches the method and the path defined in the request, then handles it by sending back an HTTP response. To follow along, be sure to get mountebank running as we [did in the previous article][2]. As I explained there, these values are declared as JSON documents that are posted to **<http://localhost:2525/imposters>**, mountebank's endpoint for processing authentication requests.

But the challenge now is how to simulate the scenario when the HTTP request keeps hitting the same endpoint from the same domain. This is necessary to simulate a user who submits invalid credentials (username and password), is informed they are invalid, tries different credentials, and is repeatedly rejected (or foolishly attempts to log in with the same credentials that failed on previous attempts). Eventually (in this case, after a third failed attempt), the user is barred from additional tries.

Writing executable code to simulate such a scenario would have to model very elaborate processing. However, when using mountebank, this type of simulated processing is extremely simple to accomplish. It is done by creating a rolling buffer of responses, and mountebank responds in the order the buffer was created. Here is an example of one way to simulate repeat requests in mountebank:
```
{
  "port": 3001,
  "protocol": "http",
  "name": "authentication imposter",
  "stubs": [
    {
      "predicates": [
        {
          "equals": {
            "method": "post",
            "path": "/api/v1/users/login"
          }
        }
      ],
      "responses": [
        {
          "is": {
            "statusCode": 200,
            "body": "Successfully logged in."
          }
        },
        {
          "is": {
            "statusCode": 400,
            "body": "Incorrect login. You have 2 more attempts left."
          }
        },
        {
          "is": {
            "statusCode": 400,
            "body": "Incorrect login. You have 1 more attempt left."
          }
        },
        {
          "is": {
            "statusCode": 400,
            "body": "Incorrect login. You have no more attempts left."
          }
        }
      ]
    }
  ]
}
```

The rolling buffer is simply an unlimited collection of JSON responses where each response is represented with two key-value pairs: **statusCode** and **body**. In this case, four responses are defined. The first response is the happy path (i.e., the user successfully logged in), and the remaining three responses represent failed use cases (i.e., wrong credentials result in status code 400 and corresponding error messages).
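The circular behavior of this buffer can be illustrated with a small Python toy model (this is a sketch of the concept, not mountebank code): every request gets the next canned response, wrapping around when the list is exhausted.

```python
from itertools import cycle

# Toy model of the imposter's rolling response buffer: each request gets
# the next (statusCode, body) pair, cycling back to the start when exhausted.
RESPONSES = cycle([
    (200, "Successfully logged in."),
    (400, "Incorrect login. You have 2 more attempts left."),
    (400, "Incorrect login. You have 1 more attempt left."),
    (400, "Incorrect login. You have no more attempts left."),
])

def login(_credentials):
    """Ignore the credentials and return the next canned response in order."""
    return next(RESPONSES)

print(login("valid credentials"))  # (200, 'Successfully logged in.')
print(login("invalid credentials"))
```

Note that, like the imposter, this model pays no attention to what the client actually sends; only the order of arrival matters.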

### How to test repeat requests

Modify the tests as follows:

```
using System;
using Xunit;
using app;

namespace tests
{
    public class UnitTest1
    {
        Authenticate auth = new Authenticate();

        [Fact]
        public void SuccessfulLogin()
        {
            var given = "valid credentials";
            var expected = "Successfully logged in.";
            var actual = auth.Login(given);
            Assert.Equal(expected, actual);
        }

        [Fact]
        public void FirstFailedLogin()
        {
            var given = "invalid credentials";
            var expected = "Incorrect login. You have 2 more attempts left.";
            var actual = auth.Login(given);
            Assert.Equal(expected, actual);
        }

        [Fact]
        public void SecondFailedLogin()
        {
            var given = "invalid credentials";
            var expected = "Incorrect login. You have 1 more attempt left.";
            var actual = auth.Login(given);
            Assert.Equal(expected, actual);
        }

        [Fact]
        public void ThirdFailedLogin()
        {
            var given = "invalid credentials";
            var expected = "Incorrect login. You have no more attempts left.";
            var actual = auth.Login(given);
            Assert.Equal(expected, actual);
        }
    }
}
```

Now, run the tests to confirm that your code still works:

![Failed test][5]

Whoa! The tests now all fail. Why?

If you take a closer look, you'll see a revealing pattern:

![Reason for failed test][6]

Notice that ThirdFailedLogin is executed first, followed by SuccessfulLogin, followed by FirstFailedLogin, followed by SecondFailedLogin. What's going on here? Why is the third test running before the first test?

The testing framework ([xUnit][7]) is executing all tests in parallel, and the sequence of execution is unpredictable. You need tests to run in order, which means you cannot test these scenarios using the vanilla xUnit toolkit.
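The failure mode can be reproduced outside xUnit. This Python toy (mirroring the rolling buffer above, with assumed message strings) shows why execution order matters when the tests share sequential state in the virtualized service:

```python
from itertools import cycle

# The virtualized service hands out responses in a fixed order, so each
# "test" only passes if it runs in its intended position.
expected_in_order = [
    "Successfully logged in.",
    "Incorrect login. You have 2 more attempts left.",
    "Incorrect login. You have 1 more attempt left.",
    "Incorrect login. You have no more attempts left.",
]
responses = cycle(expected_in_order)

# Run the four checks in the intended order: all pass.
assert all(next(responses) == e for e in expected_in_order)

# Re-run them with the third expectation first: the comparisons break,
# even though the service itself behaved exactly as configured.
shuffled = [expected_in_order[2]] + expected_in_order[:2] + [expected_in_order[3]]
results = [next(responses) == e for e in shuffled]
print(results)  # [False, False, False, True]
```

This is exactly the pattern in the screenshot: the responses are correct, but they are checked against the wrong expectations.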
### How to run tests in the right sequence

To force your tests to run in a certain sequence that you define (instead of running in an unpredictable order), you need to extend the vanilla xUnit toolkit with the [Xunit.Extensions.Ordering][8] NuGet package. Install the package on the command line with:

```
$ dotnet add package Xunit.Extensions.Ordering --version 1.4.5
```

or add it to your **tests.csproj** config file:

```
<PackageReference Include="Xunit.Extensions.Ordering" Version="1.4.5" />
```

Once that's taken care of, make some modifications to your **./tests/UnitTests1.cs** file. Add these four lines at the beginning of your **UnitTests1.cs** file:

```
using Xunit.Extensions.Ordering;

[assembly: CollectionBehavior(DisableTestParallelization = true)]
[assembly: TestCaseOrderer("Xunit.Extensions.Ordering.TestCaseOrderer", "Xunit.Extensions.Ordering")]
[assembly: TestCollectionOrderer("Xunit.Extensions.Ordering.CollectionOrderer", "Xunit.Extensions.Ordering")]
```

Now you can specify the order you want your tests to run in. Initially, simulate the happy path (i.e., **SuccessfulLogin()**) by annotating the test with:

```
[Fact, Order(1)]
public void SuccessfulLogin() {
```

After you test a successful login, test the first failed login:

```
[Fact, Order(2)]
public void FirstFailedLogin()
```

And so on. You set the order of the test runs by simply adding the **Order(x)** annotation (where **x** denotes the order in which you want the test to run) to your Fact.

This annotation guarantees that your tests will run in the exact order you want them to run, and now you can (finally!) completely test your integration scenario.

The final version of your test is:
```
using System;
using Xunit;
using app;
using Xunit.Extensions.Ordering;

[assembly: CollectionBehavior(DisableTestParallelization = true)]
[assembly: TestCaseOrderer("Xunit.Extensions.Ordering.TestCaseOrderer", "Xunit.Extensions.Ordering")]
[assembly: TestCollectionOrderer("Xunit.Extensions.Ordering.CollectionOrderer", "Xunit.Extensions.Ordering")]

namespace tests
{
    public class UnitTest1
    {
        Authenticate auth = new Authenticate();

        [Fact, Order(1)]
        public void SuccessfulLogin()
        {
            var given = "elon_musk@tesla.com";
            var expected = "Successfully logged in.";
            var actual = auth.Login(given);
            Assert.Equal(expected, actual);
        }

        [Fact, Order(2)]
        public void FirstFailedLogin()
        {
            var given = "mickey@tesla.com";
            var expected = "Incorrect login. You have 2 more attempts left.";
            var actual = auth.Login(given);
            Assert.Equal(expected, actual);
        }

        [Fact, Order(3)]
        public void SecondFailedLogin()
        {
            var given = "mickey@tesla.com";
            var expected = "Incorrect login. You have 1 more attempt left.";
            var actual = auth.Login(given);
            Assert.Equal(expected, actual);
        }

        [Fact, Order(4)]
        public void ThirdFailedLogin()
        {
            var given = "mickey@tesla.com";
            var expected = "Incorrect login. You have no more attempts left.";
            var actual = auth.Login(given);
            Assert.Equal(expected, actual);
        }
    }
}
```

Run the test again—everything passes!

![Passing test][11]

### What are you testing exactly?

This article has focused on test-driven development (TDD), but let's review it from another methodology, Extreme Programming (XP). XP defines two types of tests:

  1. Programmer tests
  2. Customer tests

So far, in this series of articles on TDD, I have focused on the first type of tests (i.e., programmer tests). In this and the previous article, I switched my lens to examine the most efficient ways of doing customer tests.

The important point is that programmer (or producer) tests are focused on precision work. We often refer to these precision tests as "micro tests," while others may call them "unit tests." Customer tests, on the other hand, are more focused on the bigger picture; we sometimes refer to them as "approximation tests" or "end-to-end tests."

### Conclusion

This article demonstrated how to write a suite of approximation tests that integrate several discrete steps and ensure that the code can handle all edge cases, including simulating the customer experience when repeatedly attempting to log in and failing to obtain the necessary clearance. This combination of TDD and tools like xUnit and mountebank can lead to well-tested and thus more reliable application development.

In future articles, I'll look into other usages of mountebank for writing customer (or approximation) tests.
--------------------------------------------------------------------------------

via: https://opensource.com/article/20/3/failed-authentication-attempts-tdd

作者:[Alex Bunardzic][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/alex-bunardzic
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming_keyboard_coding.png?itok=E0Vvam7A (Programming keyboard.)
[2]: https://opensource.com/article/20/3/service-virtualization-test-driven-development
[3]: http://www.mbtest.org/
[4]: http://www.google.com/search?q=new+msdn.microsoft.com
[5]: https://opensource.com/sites/default/files/uploads/testfails_0.png (Failed test)
[6]: https://opensource.com/sites/default/files/uploads/failurepattern.png (Reason for failed test)
[7]: https://xunit.net/
[8]: https://www.nuget.org/packages/Xunit.Extensions.Ordering/#
[9]: mailto:elon_musk@tesla.com
[10]: mailto:mickey@tesla.com
[11]: https://opensource.com/sites/default/files/uploads/testpasses.png (Passing test)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to upload an OpenStack disk image to Glance)
[#]: via: (https://opensource.com/article/20/3/glance)
[#]: author: (Jair Patete https://opensource.com/users/jpatete)

How to upload an OpenStack disk image to Glance
======
Make images available to your private cloud, and more.
![blank background that says your image here][1]

[Glance][2] is an image service that allows you to discover, provide, register, or even delete disk and/or server images. It is a fundamental part of managing images on [OpenStack][3] and [TripleO][4] (which stands for "OpenStack-On-OpenStack").

If you have used a recent version of the OpenStack platform, you may already have launched your first Overcloud using TripleO; you interact with Glance when uploading the Overcloud disk images inside the Undercloud's OpenStack (i.e., the node inside your cloud that is used to install the Overcloud, add/delete nodes, and do some other handy things).

In this article, I'll explain how to upload an image to Glance. Uploading an image to the service makes it available to the instances in your private cloud. Also, when you're deploying an Overcloud, it makes the image(s) available so the bare-metal nodes can be deployed using them.

In an Undercloud, execute the following command:
```
$ openstack overcloud image upload --image-path /home/stack/images/
```

This uploads the following Overcloud images to Glance:

  1. overcloud-full
  2. overcloud-full-initrd
  3. overcloud-full-vmlinuz

After some seconds, the images will upload successfully. Check the result by running:

```
(undercloud) [stack@undercloud ~]$ openstack image list
+--------------------------------------+------------------------+--------+
| ID                                   | Name                   | Status |
+--------------------------------------+------------------------+--------+
| 09ca88ea-2771-459d-94a2-9f87c9c393f0 | overcloud-full         | active |
| 806b6c35-2dd5-478d-a384-217173a6e032 | overcloud-full-initrd  | active |
| b2c96922-161a-4171-829f-be73482549d5 | overcloud-full-vmlinuz | active |
+--------------------------------------+------------------------+--------+
```

This is a mandatory and easy step in the process of deploying an Overcloud, and it happens within seconds, which makes it hard to see what's under the hood. But what if you want to know what is going on?

One thing to keep in mind: Glance works using client-server communication carried over REST APIs. Therefore, you can see what is going on by using [tcpdump][5] to capture some TCP packets.
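As a preview of what such a REST call looks like, here is a Python sketch that builds (but does not send) the Glance v2 image-list request you will see later in the capture. The internal endpoint and the token value are placeholders:

```python
from urllib.request import Request

# Build, without sending, the image-list call made against Glance's v2 API.
# The endpoint matches the internal Glance URL used later in this article;
# the token is a placeholder for the value obtained from Keystone.
req = Request(
    "http://172.16.0.19:9292/v2/images?limit=20",
    headers={"X-Auth-Token": "<token-from-keystone>"},
    method="GET",
)
print(req.method, req.full_url)  # GET http://172.16.0.19:9292/v2/images?limit=20
```

In a real client, this request would be sent over the wire and would show up in the tcpdump output below.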
Another thing that is important: There is a database (there's always a database, right?) that is shared among all the OpenStack platform components, and it contains all the information that Glance (and other components) needs to operate. (In my case, MariaDB is the backend.) I won't get into how to access the SQL database, as I don't recommend playing around with it, but I will show what the database looks like during the upload process. (This is an entirely-for-test OpenStack installation, so there's no need to play with the database in this example.)

### The database

The basic flow of this example exercise is:

_Image Created -> Image Queued -> Image Saved -> Image Active_
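As a mental model, the flow can be written down as a tiny state machine. This is only a sketch of the lifecycle above, not Glance's actual implementation:

```python
# Toy state machine for the image-upload flow described above.
FLOW = ["created", "queued", "saved", "active"]

def next_status(current):
    """Return the state that follows `current`; 'active' is terminal."""
    i = FLOW.index(current)
    return FLOW[i + 1] if i + 1 < len(FLOW) else "active"

status = "created"
while status != "active":
    status = next_status(status)
    print(status)  # prints queued, then saved, then active
```

Each transition below corresponds to API calls and database updates that we can observe in the capture.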
You need permission to go through this flow, so first, you must ask OpenStack's identity service, [Keystone][6], for authorization. My Keystone catalog entry looks like this; as I'm in the Undercloud, I'll hit the public endpoint:

```
| keystone | identity | regionOne                           |
|          |          |   public: https://172.16.0.20:13000 |
|          |          | regionOne                           |
|          |          |   internal: http://172.16.0.19:5000 |
|          |          | regionOne                           |
|          |          |   admin: http://172.16.0.19:35357   |
```

And for Glance:

```
| glance   | image    | regionOne                           |
|          |          |   public: https://172.16.0.20:13292 |
|          |          | regionOne                           |
|          |          |   internal: http://172.16.0.19:9292 |
|          |          | regionOne                           |
|          |          |   admin: http://172.16.0.19:9292    |
```

I'll hit those ports and TCP port 3306 in the capture; the latter is so I can capture what's going on with the SQL database. To capture the packets, use the tcpdump command:
```
$ tcpdump -nvs0 -i ens3 host 172.16.0.20 and port 13000 or port 3306 or port 13292
```

Under the hood, this looks like:

Authentication:

**Initial request (discovery of API version information):**

```
https://172.16.0.20:13000 "GET / HTTP/1.1"
```

**Response:**

```
HTTP/1.1 200 OK
Content-Length: 268
Content-Type: application/json
Date: Tue, 18 Feb 2020 04:49:55 GMT
Location: https://172.16.0.20:13000/v3/
Server: Apache
Vary: X-Auth-Token
x-openstack-request-id: req-6edc6642-3945-4fd0-a0f7-125744fb23ec

{
  "versions":{
    "values":[
      {
        "id":"v3.13",
        "status":"stable",
        "updated":"2019-07-19T00:00:00Z",
        "links":[
          {
            "rel":"self",
            "href":"https://172.16.0.20:13000/v3/"
          }
        ],
        "media-types":[
          {
            "base":"application/json",
            "type":"application/vnd.openstack.identity-v3+json"
          }
        ]
      }
    ]
  }
}
```

**Authentication request:**

```
https://172.16.0.20:13000 "POST /v3/auth/tokens HTTP/1.1"
```

After this step, a token is assigned for the admin user to use the services. (The token cannot be displayed for security reasons.) The token tells the other services something like: "I've already logged in with the proper credentials against Keystone; please let me go straight to the service and ask no more questions about who I am."

At this point, the command:

```
$ openstack overcloud image upload --image-path /home/stack/images/
```

executes, and it is authorized to upload the image to the Glance service.

The current status is:

_**Image Created**_ _-> Image Queued -> Image Saved -> Image Active_

The service checks whether this image already exists:

```
https://172.16.0.20:13292 "GET /v2/images/overcloud-full-vmlinuz HTTP/1.1"
```

From the client's point of view, the request looks like:

```
curl -g -i -X GET -H 'Content-Type: application/octet-stream' -H 'X-Auth-Token: gAAAAABeS2zzWzAZBqF-whE7SmJt_Atx7tiLZhcL8mf6wJPrO3RBdv4SdnWImxbeSQSqEQdZJnwBT79SWhrtt7QDn-2o6dsAtpUb1Rb7w6xe7Qg_AHQfD5P1rU7tXXtKu2DyYFhtPg2TRQS5viV128FyItyt49Yn_ho3lWfIXaR3TuZzyIz38NU' -H 'User-Agent: python-glanceclient' -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' -H 'Connection: keep-alive' --cacert /etc/pki/ca-trust/source/anchors/cm-local-ca.pem https://172.16.0.20:13292/v2/images/overcloud-full-vmlinuz
```

Here, you can see the fernet token, the user-agent indicating that the Glance client is speaking, and the TLS certificate; this is why you don't see anything in your tcpdump.

Since the image does not exist, it is OK to get a 404 ERROR for this request.

Next, the current images are consulted:

```
https://172.16.0.20:13292 "GET /v2/images?limit=20 HTTP/1.1" 200 78
```

and retrieved from the service:
|
||||
|
||||
|
||||
```
|
||||
HTTP/1.1 200 OK
|
||||
Content-Length: 78
|
||||
Content-Type: application/json
|
||||
X-Openstack-Request-Id: req-0f117984-f427-4d35-bec3-956432865dd1
|
||||
Date: Tue, 18 Feb 2020 04:49:55 GMT
|
||||
|
||||
{
|
||||
"images":[
|
||||
|
||||
],
|
||||
"first":"/v2/images?limit=20",
|
||||
"schema":"/v2/schemas/images"
|
||||
}
|
||||
```

Yes, it is still empty.

Meanwhile, the same check was run against the database, where a huge query was triggered with the same result. (To line the two up on timestamps, I waited in the tcpdump until the connection and queries had finished, and then compared them with the timestamps of the API calls.)

To identify where the Glance-DB calls start, I did a full-packet search for the word "glance" inside the tcpdump file. This saves a lot of time compared with wading through all the other database calls, so it is my starting point for checking each database call.

![Searching "glance" inside tcpdump][7]
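The same full-packet search can be sketched in a few lines of Python: scan the raw capture file for a byte pattern and record the offsets where it appears. (This is just an illustration of the idea; the function name and file path are mine, and Wireshark's "Find Packet → Packet bytes" does the equivalent interactively.)

```python
# Minimal sketch: find the byte offsets of a pattern (e.g. b"glance")
# inside a raw capture file, similar to Wireshark's packet-bytes search.
def find_pattern_offsets(path, pattern):
    with open(path, "rb") as f:
        data = f.read()
    offsets = []
    start = 0
    # Walk the buffer, restarting the search one byte after each hit.
    while (idx := data.find(pattern, start)) != -1:
        offsets.append(idx)
        start = idx + 1
    return offsets

# Usage (hypothetical file name):
# find_pattern_offsets("image-upload.pcap", b"glance")
```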
The first query returns nothing, as the image does not exist yet:

```
SELECT images.created_at AS images_created_at, images.updated_at AS images_updated_at, images.deleted_at AS images_deleted_at, images.deleted AS images_deleted, images.id AS images_id, images.name AS images_name, images.disk_format AS images_disk_format, images.container_format AS images_container_format, images.size AS images_size, images.virtual_size AS images_virtual_size, images.status AS images_status, images.visibility AS images_visibility, images.checksum AS images_checksum, images.os_hash_algo AS images_os_hash_algo, images.os_hash_value AS images_os_hash_value, images.min_disk AS images_min_disk, images.min_ram AS images_min_ram, images.owner AS images_owner, images.protected AS images_protected, images.os_hidden AS images_os_hidden, image_properties_1.created_at AS image_properties_1_created_at, image_properties_1.updated_at AS image_properties_1_updated_at, image_properties_1.deleted_at AS image_properties_1_deleted_at, image_properties_1.deleted AS image_properties_1_deleted, image_properties_1.id AS image_properties_1_id, image_properties_1.image_id AS image_properties_1_image_id, image_properties_1.name AS image_properties_1_name, image_properties_1.value AS image_properties_1_value, image_locations_1.created_at AS image_locations_1_created_at, image_locations_1.updated_at AS image_locations_1_updated_at, image_locations_1.deleted_at AS image_locations_1_deleted_at, image_locations_1.deleted AS image_locations_1_deleted, image_locations_1.id AS image_locations_1_id, image_locations_1.image_id AS image_locations_1_image_id, image_locations_1.value AS image_locations_1_value, image_locations_1.meta_data AS image_locations_1_meta_data, image_locations_1.status AS image_locations_1_status
FROM images LEFT OUTER JOIN image_properties AS image_properties_1 ON images.id = image_properties_1.image_id LEFT OUTER JOIN image_locations AS image_locations_1 ON images.id = image_locations_1.image_id
WHERE images.id = 'overcloud-full-vmlinuz'
```

Next, the image starts uploading, so an API call and a write to the database are expected.

On the API side, the image schema is retrieved by consulting the service:

```
https://172.16.0.20:13292 "GET /v2/schemas/image HTTP/1.1"
```

Then some of the fields are populated with the image information. This is what the schema looks like:

```
{
  "name": "image",
  "properties": {
    "id": {"type": "string", "description": "An identifier for the image", "pattern": "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$"},
    "name": {"type": ["null", "string"], "description": "Descriptive name for the image", "maxLength": 255},
    "status": {"type": "string", "readOnly": true, "description": "Status of the image", "enum": ["queued", "saving", "active", "killed", "deleted", "uploading", "importing", "pending_delete", "deactivated"]},
    "visibility": {"type": "string", "description": "Scope of image accessibility", "enum": ["community", "public", "private", "shared"]},
    "protected": {"type": "boolean", "description": "If true, image will not be deletable."},
    "os_hidden": {"type": "boolean", "description": "If true, image will not appear in default image list response."},
    "checksum": {"type": ["null", "string"], "readOnly": true, "description": "md5 hash of image contents.", "maxLength": 32},
    "os_hash_algo": {"type": ["null", "string"], "readOnly": true, "description": "Algorithm to calculate the os_hash_value", "maxLength": 64},
    "os_hash_value": {"type": ["null", "string"], "readOnly": true, "description": "Hexdigest of the image contents using the algorithm specified by the os_hash_algo", "maxLength": 128},
    "owner": {"type": ["null", "string"], "description": "Owner of the image", "maxLength": 255},
    "size": {"type": ["null", "integer"], "readOnly": true, "description": "Size of image file in bytes"},
    "virtual_size": {"type": ["null", "integer"], "readOnly": true, "description": "Virtual size of image in bytes"},
    "container_format": {"type": ["null", "string"], "description": "Format of the container", "enum": [null, "ami", "ari", "aki", "bare", "ovf", "ova", "docker", "compressed"]},
    "disk_format": {"type": ["null", "string"], "description": "Format of the disk", "enum": [null, "ami", "ari", "aki", "vhd", "vhdx", "vmdk", "raw", "qcow2", "vdi", "iso", "ploop"]},
    "created_at": {"type": "string", "readOnly": true, "description": "Date and time of image registration"},
    "updated_at": {"type": "string", "readOnly": true, "description": "Date and time of the last image modification"},
    "tags": {"type": "array", "description": "List of strings related to the image", "items": {"type": "string", "maxLength": 255}},
    "direct_url": {"type": "string", "readOnly": true, "description": "URL to access the image file kept in external store"},
    "min_ram": {"type": "integer", "description": "Amount of ram (in MB) required to boot image."},
    "min_disk": {"type": "integer", "description": "Amount of disk space (in GB) required to boot image."},
    "self": {"type": "string", "readOnly": true, "description": "An image self url"},
    "file": {"type": "string", "readOnly": true, "description": "An image file url"},
    "stores": {"type": "string", "readOnly": true, "description": "Store in which image data resides. Only present when the operator has enabled multiple stores. May be a comma-separated list of store identifiers."},
    "schema": {"type": "string", "readOnly": true, "description": "An image schema url"},
    "locations": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "url": {"type": "string", "maxLength": 255},
          "metadata": {"type": "object"},
          "validation_data": {
            "description": "Values to be used to populate the corresponding image properties. If the image status is not 'queued', values must exactly match those already contained in the image properties.",
            "type": "object",
            "writeOnly": true,
            "additionalProperties": false,
            "properties": {
              "checksum": {"type": "string", "minLength": 32, "maxLength": 32},
              "os_hash_algo": {"type": "string", "maxLength": 64},
              "os_hash_value": {"type": "string", "maxLength": 128}
            },
            "required": ["os_hash_algo", "os_hash_value"]
          }
        },
        "required": ["url", "metadata"]
      },
      "description": "A set of URLs to access the image file kept in external store"
    },
    "kernel_id": {"type": ["null", "string"], "pattern": "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$", "description": "ID of image stored in Glance that should be used as the kernel when booting an AMI-style image.", "is_base": false},
    "ramdisk_id": {"type": ["null", "string"], "pattern": "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$", "description": "ID of image stored in Glance that should be used as the ramdisk when booting an AMI-style image.", "is_base": false},
    "instance_uuid": {"type": "string", "description": "Metadata which can be used to record which instance this image is associated with. (Informational only, does not create an instance snapshot.)", "is_base": false},
    "architecture": {"description": "Operating system architecture as specified in https://docs.openstack.org/python-glanceclient/latest/cli/property-keys.html", "type": "string", "is_base": false},
    "os_distro": {"description": "Common name of operating system distribution as specified in https://docs.openstack.org/python-glanceclient/latest/cli/property-keys.html", "type": "string", "is_base": false},
    "os_version": {"description": "Operating system version as specified by the distributor.", "type": "string", "is_base": false},
    "description": {"description": "A human-readable string describing this image.", "type": "string", "is_base": false},
    "cinder_encryption_key_id": {"description": "Identifier in the OpenStack Key Management Service for the encryption key for the Block Storage Service to use when mounting a volume created from this image", "type": "string", "is_base": false},
    "cinder_encryption_key_deletion_policy": {"description": "States the condition under which the Image Service will delete the object associated with the 'cinder_encryption_key_id' image property. If this property is missing, the Image Service will take no action", "type": "string", "enum": ["on_image_deletion", "do_not_delete"], "is_base": false}
  },
  "additionalProperties": {"type": "string"},
  "links": [
    {"rel": "self", "href": "{self}"},
    {"rel": "enclosure", "href": "{file}"},
    {"rel": "describedby", "href": "{schema}"}
  ]
}
```
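Two of the constraints in this schema are easy to check client-side before uploading: the UUID `pattern` on `id` and the `status` enum. A minimal sketch (the helper name is mine, not part of glanceclient):

```python
import re

# Constraints copied from the image schema above.
ID_PATTERN = re.compile(
    r"^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}"
    r"-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$")
STATUS_ENUM = {"queued", "saving", "active", "killed", "deleted",
               "uploading", "importing", "pending_delete", "deactivated"}

def matches_schema_basics(image):
    """Check the two schema constraints used in this walkthrough:
    a UUID-shaped id and a status from the allowed enum."""
    return (bool(ID_PATTERN.match(image.get("id", "")))
            and image.get("status") in STATUS_ENUM)
```

Note that the name-style identifier used in the very first GET (`overcloud-full-vmlinuz`) does not match this pattern, while the UUID Glance assigns later does.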

That's a long schema!

Here is the API call that starts uploading the image information; the image now moves to the "queued" state:

```
curl -g -i -X POST -H 'b'Content-Type': b'application/json'' -H 'b'X-Auth-Token': b'gAAAAABeS2zzWzAZBqF-whE7SmJt_Atx7tiLZhcL8mf6wJPrO3RBdv4SdnWImxbeSQSqEQdZJnwBT79SWhrtt7QDn-2o6dsAtpUb1Rb7w6xe7Qg_AHQfD5P1rU7tXXtKu2DyYFhtPg2TRQS5viV128FyItyt49Yn_ho3lWfIXaR3TuZzyIz38NU'' -H 'User-Agent: python-glanceclient' -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' -H 'Connection: keep-alive' --cacert /etc/pki/ca-trust/source/anchors/cm-local-ca.pem --cert None --key None -d '{"name": "overcloud-full-vmlinuz", "disk_format": "aki", "visibility": "public", "container_format": "bare"}' https://172.16.0.20:13292/v2/images
```

Here is the API response:

```
HTTP/1.1 201 Created
Content-Length: 629
Content-Type: application/json
Location: https://172.16.0.20:13292/v2/images/13892850-6add-4c28-87cd-6da62e6f8a3c
Openstack-Image-Import-Methods: web-download
X-Openstack-Request-Id: req-bd5194f0-b1c2-40d3-a646-8a24ed0a1b1b
Date: Tue, 18 Feb 2020 04:49:56 GMT

{
  "name": "overcloud-full-vmlinuz",
  "disk_format": "aki",
  "container_format": "bare",
  "visibility": "public",
  "size": null,
  "virtual_size": null,
  "status": "queued",
  "checksum": null,
  "protected": false,
  "min_ram": 0,
  "min_disk": 0,
  "owner": "c0a46a106d3341649a25b10f2770aff8",
  "os_hidden": false,
  "os_hash_algo": null,
  "os_hash_value": null,
  "id": "13892850-6add-4c28-87cd-6da62e6f8a3c",
  "created_at": "2020-02-18T04:49:55Z",
  "updated_at": "2020-02-18T04:49:55Z",
  "tags": [],
  "self": "/v2/images/13892850-6add-4c28-87cd-6da62e6f8a3c",
  "file": "/v2/images/13892850-6add-4c28-87cd-6da62e6f8a3c/file",
  "schema": "/v2/schemas/image"
}
```

And here is the SQL call that stores the information in the Glance DB:

```
INSERT INTO images (created_at, updated_at, deleted_at, deleted, id, name, disk_format, container_format, SIZE, virtual_size, STATUS, visibility, checksum, os_hash_algo, os_hash_value, min_disk, min_ram, owner, protected, os_hidden) VALUES ('2020-02-18 04:49:55.993652', '2020-02-18 04:49:55.993652', NULL, 0, '13892850-6add-4c28-87cd-6da62e6f8a3c', 'overcloud-full-vmlinuz', 'aki', 'bare', NULL, NULL, 'queued', 'public', NULL, NULL, NULL, 0, 0, 'c0a46a106d3341649a25b10f2770aff8', 0, 0)
```

Current status:

_Image Created ->_ _**Image Queued**_ _-> Image Saved -> Image Active_

In the Glance architecture, images are "physically" stored in the configured backend (Swift in this case), so traffic also hits the Swift endpoint on port 8080. Capturing that traffic makes the .pcap file as large as the images being uploaded (2GB in my case).[*][8]

![Glance architecture][9]

The image's properties are then looked up in the database:

```
SELECT image_properties.created_at AS image_properties_created_at, image_properties.updated_at AS image_properties_updated_at, image_properties.deleted_at AS image_properties_deleted_at, image_properties.deleted AS image_properties_deleted, image_properties.id AS image_properties_id, image_properties.image_id AS image_properties_image_id, image_properties.name AS image_properties_name, image_properties.value AS image_properties_value
FROM image_properties
WHERE '13892850-6add-4c28-87cd-6da62e6f8a3c' = image_properties.image_id
```

You can see some validation happening within the database. At this point, the flow status is "queued" (as shown above), and you can check it here:

![Checking the Glance image status][10]

You can also check it with the following queries, where the **updated_at** field and the flow status are modified accordingly (i.e., from "queued" to "saving"):

Current status:

_Image Created -> Image Queued ->_ _**Image Saved**_ _-> Image Active_

```
SELECT images.id AS images_id
FROM images
WHERE images.id = '13892850-6add-4c28-87cd-6da62e6f8a3c' AND images.status = 'queued'

UPDATE images SET updated_at='2020-02-18 04:49:56.046542', id='13892850-6add-4c28-87cd-6da62e6f8a3c', name='overcloud-full-vmlinuz', disk_format='aki', container_format='bare', SIZE=NULL, virtual_size=NULL, STATUS='saving', visibility='public', checksum=NULL, os_hash_algo=NULL, os_hash_value=NULL, min_disk=0, min_ram=0, owner='c0a46a106d3341649a25b10f2770aff8', protected=0, os_hidden=0 WHERE images.id = '13892850-6add-4c28-87cd-6da62e6f8a3c' AND images.status = 'queued'
```
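Notice that the UPDATE only flips the status while the row is still 'queued' (the `AND images.status = 'queued'` guard), a compare-and-swap pattern that prevents two workers from racing through the same transition. A minimal sketch of the idea with SQLite, trimming the table to the two relevant columns:

```python
import sqlite3

# Toy images table with just the columns involved in the transition.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE images (id TEXT PRIMARY KEY, status TEXT)")
conn.execute("INSERT INTO images VALUES (?, ?)",
             ("13892850-6add-4c28-87cd-6da62e6f8a3c", "queued"))

def transition(conn, image_id, old, new):
    # Mirrors the guarded UPDATE: only succeeds if the status is still `old`.
    cur = conn.execute(
        "UPDATE images SET status = ? WHERE id = ? AND status = ?",
        (new, image_id, old))
    return cur.rowcount == 1
```

A second attempt at the same transition affects zero rows and reports failure, which is how the race is detected.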

This is validated during the process with the following query:

```
SELECT images.created_at AS images_created_at, images.updated_at AS images_updated_at, images.deleted_at AS images_deleted_at, images.deleted AS images_deleted, images.id AS images_id, images.name AS images_name, images.disk_format AS images_disk_format, images.container_format AS images_container_format, images.size AS images_size, images.virtual_size AS images_virtual_size, images.status AS images_status, images.visibility AS images_visibility, images.checksum AS images_checksum, images.os_hash_algo AS images_os_hash_algo, images.os_hash_value AS images_os_hash_value, images.min_disk AS images_min_disk, images.min_ram AS images_min_ram, images.owner AS images_owner, images.protected AS images_protected, images.os_hidden AS images_os_hidden, image_properties_1.created_at AS image_properties_1_created_at, image_properties_1.updated_at AS image_properties_1_updated_at, image_properties_1.deleted_at AS image_properties_1_deleted_at, image_properties_1.deleted AS image_properties_1_deleted, image_properties_1.id AS image_properties_1_id, image_properties_1.image_id AS image_properties_1_image_id, image_properties_1.name AS image_properties_1_name, image_properties_1.value AS image_properties_1_value, image_locations_1.created_at AS image_locations_1_created_at, image_locations_1.updated_at AS image_locations_1_updated_at, image_locations_1.deleted_at AS image_locations_1_deleted_at, image_locations_1.deleted AS image_locations_1_deleted, image_locations_1.id AS image_locations_1_id, image_locations_1.image_id AS image_locations_1_image_id, image_locations_1.value AS image_locations_1_value, image_locations_1.meta_data AS image_locations_1_meta_data, image_locations_1.status AS image_locations_1_status
FROM images LEFT OUTER JOIN image_properties AS image_properties_1 ON images.id = image_properties_1.image_id LEFT OUTER JOIN image_locations AS image_locations_1 ON images.id = image_locations_1.image_id
WHERE images.id = '13892850-6add-4c28-87cd-6da62e6f8a3c'
```

And you can see its response in the Wireshark capture:

![Wireshark capture][11]

After the image is completely uploaded, its status changes to "active," which means the image is available in the service and ready to use:

```
https://172.16.0.20:13292 "GET /v2/images/13892850-6add-4c28-87cd-6da62e6f8a3c HTTP/1.1" 200

{
  "name": "overcloud-full-vmlinuz",
  "disk_format": "aki",
  "container_format": "bare",
  "visibility": "public",
  "size": 8106848,
  "virtual_size": null,
  "status": "active",
  "checksum": "5d31ee013d06b83d02c106ea07f20265",
  "protected": false,
  "min_ram": 0,
  "min_disk": 0,
  "owner": "c0a46a106d3341649a25b10f2770aff8",
  "os_hidden": false,
  "os_hash_algo": "sha512",
  "os_hash_value": "9f59d36dec7b30f69b696003e7e3726bbbb27a36211a0b31278318c2af0b969ffb279b0991474c18c9faef8b9e96cf372ce4087ca13f5f05338a36f57c281499",
  "id": "13892850-6add-4c28-87cd-6da62e6f8a3c",
  "created_at": "2020-02-18T04:49:55Z",
  "updated_at": "2020-02-18T04:49:56Z",
  "direct_url": "swift+config://ref1/glance/13892850-6add-4c28-87cd-6da62e6f8a3c",
  "tags": [],
  "self": "/v2/images/13892850-6add-4c28-87cd-6da62e6f8a3c",
  "file": "/v2/images/13892850-6add-4c28-87cd-6da62e6f8a3c/file",
  "schema": "/v2/schemas/image"
}
```
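Per the schema shown earlier, `checksum` is an md5 hexdigest of the image contents, while `os_hash_value` is computed with the algorithm named in `os_hash_algo` (sha512 in this deployment). A small sketch of how those fields can be reproduced from the image bytes (the function name is mine, not Glance's):

```python
import hashlib

def glance_style_digests(data: bytes) -> dict:
    # checksum: legacy md5 hexdigest (32 hex chars).
    # os_hash_value: hexdigest of the algorithm in os_hash_algo (128 hex
    # chars for sha512), as seen in the "active" image response above.
    return {
        "checksum": hashlib.md5(data).hexdigest(),
        "os_hash_algo": "sha512",
        "os_hash_value": hashlib.sha512(data).hexdigest(),
    }
```

Recomputing these over a downloaded image and comparing with the API response is a quick integrity check.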

You can also see the database call that updates the current status:

```
UPDATE images SET updated_at='2020-02-18 04:49:56.571879', id='13892850-6add-4c28-87cd-6da62e6f8a3c', name='overcloud-full-vmlinuz', disk_format='aki', container_format='bare', SIZE=8106848, virtual_size=NULL, STATUS='active', visibility='public', checksum='5d31ee013d06b83d02c106ea07f20265', os_hash_algo='sha512', os_hash_value='9f59d36dec7b30f69b696003e7e3726bbbb27a36211a0b31278318c2af0b969ffb279b0991474c18c9faef8b9e96cf372ce4087ca13f5f05338a36f57c281499', min_disk=0, min_ram=0, owner='c0a46a106d3341649a25b10f2770aff8', protected=0, os_hidden=0 WHERE images.id = '13892850-6add-4c28-87cd-6da62e6f8a3c' AND images.status = 'saving'
```

Current status:

_Image Created -> Image Queued -> Image Saved ->_ _**Image Active**_

One interesting thing: after the image is uploaded, a property is added to it with a PATCH request. The property is **hw_architecture**, and it is set to **x86_64**:

```
https://172.16.0.20:13292 "PATCH /v2/images/13892850-6add-4c28-87cd-6da62e6f8a3c HTTP/1.1"

curl -g -i -X PATCH -H 'b'Content-Type': b'application/openstack-images-v2.1-json-patch'' -H 'b'X-Auth-Token': b'gAAAAABeS2zzWzAZBqF-whE7SmJt_Atx7tiLZhcL8mf6wJPrO3RBdv4SdnWImxbeSQSqEQdZJnwBT79SWhrtt7QDn-2o6dsAtpUb1Rb7w6xe7Qg_AHQfD5P1rU7tXXtKu2DyYFhtPg2TRQS5viV128FyItyt49Yn_ho3lWfIXaR3TuZzyIz38NU'' -H 'User-Agent: python-glanceclient' -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' -H 'Connection: keep-alive' --cacert /etc/pki/ca-trust/source/anchors/cm-local-ca.pem --cert None --key None -d '[{"op": "add", "path": "/hw_architecture", "value": "x86_64"}]' https://172.16.0.20:13292/v2/images/13892850-6add-4c28-87cd-6da62e6f8a3c

Response:

{
  "hw_architecture": "x86_64",
  "name": "overcloud-full-vmlinuz",
  "disk_format": "aki",
  "container_format": "bare",
  "visibility": "public",
  "size": 8106848,
  "virtual_size": null,
  "status": "active",
  "checksum": "5d31ee013d06b83d02c106ea07f20265",
  "protected": false,
  "min_ram": 0,
  "min_disk": 0,
  "owner": "c0a46a106d3341649a25b10f2770aff8",
  "os_hidden": false,
  "os_hash_algo": "sha512",
  "os_hash_value": "9f59d36dec7b30f69b696003e7e3726bbbb27a36211a0b31278318c2af0b969ffb279b0991474c18c9faef8b9e96cf372ce4087ca13f5f05338a36f57c281499",
  "id": "13892850-6add-4c28-87cd-6da62e6f8a3c",
  "created_at": "2020-02-18T04:49:55Z",
  "updated_at": "2020-02-18T04:49:56Z",
  "direct_url": "swift+config://ref1/glance/13892850-6add-4c28-87cd-6da62e6f8a3c",
  "tags": [],
  "self": "/v2/images/13892850-6add-4c28-87cd-6da62e6f8a3c",
  "file": "/v2/images/13892850-6add-4c28-87cd-6da62e6f8a3c/file",
  "schema": "/v2/schemas/image"
}
```
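The `-d` body of that PATCH is a JSON Patch-style list of operations (Glance's `application/openstack-images-v2.1-json-patch` media type follows the same op/path/value shape). A tiny sketch of building such a body (helper name is mine):

```python
import json

def add_property_patch(name, value):
    """Build a JSON Patch-style body that adds one image property,
    matching the shape sent by glanceclient above."""
    return json.dumps([{"op": "add", "path": f"/{name}", "value": value}])

# Usage:
# add_property_patch("hw_architecture", "x86_64")
# -> '[{"op": "add", "path": "/hw_architecture", "value": "x86_64"}]'
```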

This is also updated in the MySQL database:

```
INSERT INTO image_properties (created_at, updated_at, deleted_at, deleted, image_id, name, VALUE) VALUES ('2020-02-18 04:49:56.655780', '2020-02-18 04:49:56.655783', NULL, 0, '13892850-6add-4c28-87cd-6da62e6f8a3c', 'hw_architecture', 'x86_64')
```

This is pretty much what happens when you upload an image to Glance. Here's what it looks like in the database:

```
MariaDB [glance]> SELECT images.created_at AS images_created_at, images.updated_at AS images_updated_at, images.deleted_at AS images_deleted_at, images.deleted AS images_deleted, images.id AS images_id, images.name AS images_name, images.disk_format AS images_disk_format, images.container_format AS images_container_format, images.size AS images_size, images.virtual_size AS images_virtual_size, images.status AS images_status, images.visibility AS images_visibility, images.checksum AS images_checksum, images.os_hash_algo AS images_os_hash_algo, images.os_hash_value AS images_os_hash_value, images.min_disk AS images_min_disk, images.min_ram AS images_min_ram, images.owner AS images_owner, images.protected AS images_protected, images.os_hidden AS images_os_hidden, image_properties_1.created_at AS image_properties_1_created_at, image_properties_1.updated_at AS image_properties_1_updated_at, image_properties_1.deleted_at AS image_properties_1_deleted_at, image_properties_1.deleted AS image_properties_1_deleted, image_properties_1.id AS image_properties_1_id, image_properties_1.image_id AS image_properties_1_image_id, image_properties_1.name AS image_properties_1_name, image_properties_1.value AS image_properties_1_value, image_locations_1.created_at AS image_locations_1_created_at, image_locations_1.updated_at AS image_locations_1_updated_at, image_locations_1.deleted_at AS image_locations_1_deleted_at, image_locations_1.deleted AS image_locations_1_deleted, image_locations_1.id AS image_locations_1_id, image_locations_1.image_id AS image_locations_1_image_id, image_locations_1.value AS image_locations_1_value, image_locations_1.meta_data AS image_locations_1_meta_data, image_locations_1.status AS image_locations_1_status FROM images LEFT OUTER JOIN image_properties AS image_properties_1 ON images.id = image_properties_1.image_id LEFT OUTER JOIN image_locations AS image_locations_1 ON images.id = image_locations_1.image_id WHERE images.id = '13892850-6add-4c28-87cd-6da62e6f8a3c'\G;
*************************** 1. row ***************************
            images_created_at: 2020-02-18 04:49:55
            images_updated_at: 2020-02-18 04:49:56
            images_deleted_at: NULL
               images_deleted: 0
                    images_id: 13892850-6add-4c28-87cd-6da62e6f8a3c
                  images_name: overcloud-full-vmlinuz
           images_disk_format: aki
      images_container_format: bare
                  images_size: 8106848
          images_virtual_size: NULL
                images_status: active
            images_visibility: public
              images_checksum: 5d31ee013d06b83d02c106ea07f20265
          images_os_hash_algo: sha512
         images_os_hash_value: 9f59d36dec7b30f69b696003e7e3726bbbb27a36211a0b31278318c2af0b969ffb279b0991474c18c9faef8b9e96cf372ce4087ca13f5f05338a36f57c281499
              images_min_disk: 0
               images_min_ram: 0
                 images_owner: c0a46a106d3341649a25b10f2770aff8
             images_protected: 0
             images_os_hidden: 0
image_properties_1_created_at: 2020-02-18 04:49:56
image_properties_1_updated_at: 2020-02-18 04:49:56
image_properties_1_deleted_at: NULL
   image_properties_1_deleted: 0
        image_properties_1_id: 11
  image_properties_1_image_id: 13892850-6add-4c28-87cd-6da62e6f8a3c
      image_properties_1_name: hw_architecture
     image_properties_1_value: x86_64
 image_locations_1_created_at: 2020-02-18 04:49:56
 image_locations_1_updated_at: 2020-02-18 04:49:56
 image_locations_1_deleted_at: NULL
    image_locations_1_deleted: 0
         image_locations_1_id: 7
   image_locations_1_image_id: 13892850-6add-4c28-87cd-6da62e6f8a3c
      image_locations_1_value: swift+config://ref1/glance/13892850-6add-4c28-87cd-6da62e6f8a3c
  image_locations_1_meta_data: {}
     image_locations_1_status: active
1 row in set (0.00 sec)
```

The final result is:

```
(undercloud) [stack@undercloud ~]$ openstack image list
+--------------------------------------+------------------------+--------+
| ID                                   | Name                   | Status |
+--------------------------------------+------------------------+--------+
| 9a26b9da-3783-4223-bdd7-c553aa194e30 | overcloud-full         | active |
| a2914297-c70f-4021-bc3e-8ec2123f6ea6 | overcloud-full-initrd  | active |
| 13892850-6add-4c28-87cd-6da62e6f8a3c | overcloud-full-vmlinuz | active |
+--------------------------------------+------------------------+--------+
(undercloud) [stack@undercloud ~]$
```

Some other minor things happen during this process, but overall, this is how it looks.

### Conclusion

Understanding the flow of the most common actions in the OpenStack platform will improve your troubleshooting skills when you face issues at work. You can check the status of an image in Glance; know whether an image is in the "queued," "saving," or "active" state; and take captures in your environment to see what is going on at the endpoints you need to check.

I enjoy debugging. I consider it an important skill for any role, whether you work in support, consulting, development (of course!), or architecture. I hope this article gave you some basic guidelines for starting to debug things.

* * *

* In case you're wondering how to open a 2GB .pcap file without problems, here is one way to do it:

```
editcap -c 5000 image-upload.pcap upload-overcloud-image.pcap
```

This splits your huge capture into smaller captures of 5,000 packets each.

* * *

_This article was [originally posted][12] on LinkedIn and is reprinted with permission._

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/3/glance

作者:[Jair Patete][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jpatete
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yourimagehere_520x292.png?itok=V-xhX7KL (blank background that says your image here)
[2]: https://www.openstack.org/software/releases/ocata/components/glance
[3]: https://www.openstack.org/
[4]: https://wiki.openstack.org/wiki/TripleO
[5]: https://www.tcpdump.org/
[6]: https://docs.openstack.org/keystone/latest/
[7]: https://opensource.com/sites/default/files/uploads/glance-db-calls.png (Searching "glance" inside tcpdump)
[8]: tmp.qBKg0ttLIJ#*
[9]: https://opensource.com/sites/default/files/uploads/glance-architecture.png (Glance architecture)
[10]: https://opensource.com/sites/default/files/uploads/check-flow-status.png (Checking the Glance image status)
[11]: https://opensource.com/sites/default/files/uploads/wireshark-capture.png (Wireshark capture)
[12]: https://www.linkedin.com/pulse/what-happens-behind-doors-when-we-upload-image-glance-patete-garc%25C3%25ADa/?trackingId=czWiFC4dRfOsSZJ%2BXdzQfg%3D%3D
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Create Stunning Pixel Art With Free and Open Source Editor Pixelorama)
[#]: via: (https://itsfoss.com/pixelorama/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

Create Stunning Pixel Art With Free and Open Source Editor Pixelorama
======

_**Brief: Pixelorama is a cross-platform, free and open source 2D sprite editor. It provides all the necessary tools to create pixel art in a neat user interface.**_

### Pixelorama: open source sprite editor

[Pixelorama][1] is a tool created by young game developers at [Orama Interactive][2]. They have developed a few 2D games, a couple of which use pixel art.

While Orama is primarily a game development studio, its developers also create utility tools that help them (and others) make those games.

The free and open source sprite editor Pixelorama is one such tool. It's built on top of the [Godot Engine][3] and is perfect for creating pixel art.

![Pixelorama screenshot][4]

See the pixel art in the screenshot above? It was created with Pixelorama. This video shows a timelapse of creating that image.

### Features of Pixelorama

Here are the main features Pixelorama provides:

* Multiple tools like pencil, eraser, fill bucket, color picker, etc.
* A multiple-layer system that lets you add, remove, move up and down, clone, and merge as many layers as you like
* Support for spritesheets
* Import images and edit them inside Pixelorama
* Animation timeline with [Onion Skinning][5]
* Custom brushes
* Save and open your projects in Pixelorama's custom file format, .pxo
* Horizontal and vertical mirrored drawing
* Tile mode for pattern creation
* Split-screen mode and mini canvas preview
* Zoom with the mouse scroll wheel
* Unlimited undo and redo
* Scale, crop, flip, rotate, color-invert, and desaturate your images
* Keyboard shortcuts
* Available in several languages
* Supports Linux, Windows, and macOS

### Installing Pixelorama on Linux
|
||||
|
||||
Pixelorama is available as a Snap application and if you are using Ubuntu, you can find it in the software center itself.
|
||||
|
||||
![Pixelorama is available in Ubuntu Software Center][6]
|
||||
|
||||
Alternatively, if you have [Snap support enabled on your Linux distribution][7], you can install it using this command:
|
||||
|
||||
```
|
||||
sudo snap install pixelorama
|
||||
```
|
||||
|
||||
If you don’t want to use Snap, no worries. You can download the latest release of Pixelorama from [their GitHub repository][8], [extract the zip file][9] and you’ll see an executable file. Give this file execute permission and double click on it to run the application.
|
||||
|
||||
[Download Pixelorama][10]
|
||||
|
||||
**Conclusion**
|
||||
|
||||
![Pixelorama Welcome Screen][11]
|
||||
|
||||
In the Pixeloaram features, it says that you can import images and edit them. I guess that’s only true for certain kind of files because when I tried to import PNG or JPEG files, the application crashed.
|
||||
|
||||
However, I could easily doodle like a 3 year old and make random pixel art. I am not that into arts but I think this is a [useful tool for digital artists on Linux][12].
|
||||
|
||||
I liked the idea that despite being game developers, they are creating tools that could help other game developers and artists. That’s the spirit of open source.
|
||||
|
||||
If you like the project and will be using it, consider supporting them by a donation. [It’s FOSS has made a humble donation][13] of $25 to thank their effort.
|
||||
|
||||
[Donate to Pixelorama (personal Paypal account of the lead developer)][14]
|
||||
|
||||
Do you like Pixelorama? Do you use some other open source sprite editor? Feel free to share your views in the comment section.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/pixelorama/
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/abhishek/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.orama-interactive.com/pixelorama
|
||||
[2]: https://www.orama-interactive.com/
|
||||
[3]: https://godotengine.org/
|
||||
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/03/pixelorama-v6.jpg?ssl=1
|
||||
[5]: https://en.wikipedia.org/wiki/Onion_skinning
|
||||
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/03/pixelorama-ubuntu-software-center.jpg?ssl=1
|
||||
[7]: https://itsfoss.com/install-snap-linux/
|
||||
[8]: https://github.com/Orama-Interactive/Pixelorama
|
||||
[9]: https://itsfoss.com/unzip-linux/
|
||||
[10]: https://github.com/Orama-Interactive/Pixelorama/releases
|
||||
[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/03/pixelorama.jpg?ssl=1
|
||||
[12]: https://itsfoss.com/best-linux-graphic-design-software/
|
||||
[13]: https://itsfoss.com/donations-foss/
|
||||
[14]: https://www.paypal.me/erevos
|
@ -0,0 +1,663 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Share data between C and Python with this messaging library)
[#]: via: (https://opensource.com/article/20/3/zeromq-c-python)
[#]: author: (Cristiano L. Fontana https://opensource.com/users/cristianofontana)

Share data between C and Python with this messaging library
======
ZeroMQ makes for a fast and resilient messaging library to gather data
and share between multiple languages.
![Chat via email][1]

I've had moments as a software engineer when I'm asked to do a task that sends shivers down my spine. One such moment was when I had to write an interface between some new hardware infrastructure that requires C and a cloud infrastructure, which is primarily Python.

One strategy could be to [write an extension in C][2], which Python supports by design. A quick glance at the documentation shows this would mean writing a good amount of C. That can be good in some cases, but it's not what I prefer to do. Another strategy is to put the two tasks in separate processes and exchange messages between the two with the [ZeroMQ messaging library][3].

When I experienced this type of scenario before discovering ZeroMQ, I went down the extension-writing path. It was not that bad, but it is very time-consuming and convoluted. Nowadays, to avoid that, I subdivide a system into independent processes that exchange information through messages sent over [communication sockets][4]. With this approach, several programming languages can coexist, and each process is simpler and thus easier to debug.

ZeroMQ provides an even easier process:

  1. Write a small shim in C that reads data from the hardware and sends whatever it finds as a message.
  2. Write a Python interface between the new and existing infrastructure.

One of the founders of the ZeroMQ project is [Pieter Hintjens][5], a remarkable person with [interesting views and writings][6].

### Prerequisites

For this tutorial, you will need:

  * A C compiler (e.g., [GCC][7] or [Clang][8])
  * The [**libzmq** library][9]
  * [Python 3][10]
  * [ZeroMQ bindings][11] for Python

Install them on Fedora with:

```
$ dnf install clang zeromq zeromq-devel python3 python3-zmq
```

For Debian or Ubuntu:

```
$ apt-get install clang libzmq5 libzmq3-dev python3 python3-zmq
```

If you run into any issues, refer to each project's installation instructions (which are linked above).

### Writing the hardware-interfacing library

Since this is a hypothetical scenario, this tutorial will write a fictitious library with two functions:

  * **fancyhw_init()** to initiate the (hypothetical) hardware
  * **fancyhw_read_val()** to return a value read from the hardware

Save the library's full source code to a file named **libfancyhw.h**:

```
#ifndef LIBFANCYHW_H
#define LIBFANCYHW_H

#include <stdlib.h>
#include <stdint.h>

// This is the fictitious hardware interfacing library

void fancyhw_init(unsigned int init_param)
{
    srand(init_param);
}

int16_t fancyhw_read_val(void)
{
    return (int16_t)rand();
}

#endif
```

This library can simulate the data you want to pass between languages, thanks to the random number generator.

### Designing a C interface

The following will go step by step through writing the C interface, from including the libraries to managing the data transfer.

#### Libraries

Begin by loading the necessary libraries (the purpose of each library is in a comment in the code):

```
// For printf()
#include <stdio.h>
// For EXIT_*
#include <stdlib.h>
// For memcpy()
#include <string.h>
// For sleep()
#include <unistd.h>

#include <zmq.h>

#include "libfancyhw.h"
```

#### Significant parameters

Define the **main** function and the significant parameters needed for the rest of the program:

```
int main(void)
{
    const unsigned int INIT_PARAM = 12345;
    const unsigned int REPETITIONS = 10;
    const unsigned int PACKET_SIZE = 16;
    const char *TOPIC = "fancyhw_data";

    ...
```

#### Initialization

Both libraries need some initialization. The fictitious one needs just one parameter:

```
fancyhw_init(INIT_PARAM);
```

The ZeroMQ library needs some real initialization. First, define a **context**, an object that manages all the sockets:

```
void *context = zmq_ctx_new();

if (!context)
{
    printf("ERROR: ZeroMQ error occurred during zmq_ctx_new(): %s\n", zmq_strerror(errno));

    return EXIT_FAILURE;
}
```

Then define the socket used to deliver data. ZeroMQ supports several types of sockets, each with its own application. Use a **publish** socket (also known as a **PUB** socket), which can deliver copies of a message to multiple receivers. This approach enables you to attach several receivers that will all get the same messages. If there are no receivers, the messages will be discarded (i.e., they will not be queued). Do this with:

```
void *data_socket = zmq_socket(context, ZMQ_PUB);
```

The socket must be bound to an address so that the clients know where to connect. In this case, use the [TCP transport layer][15] (there are [other options][16], but TCP is a good default choice):

```
const int rb = zmq_bind(data_socket, "tcp://*:5555");

if (rb != 0)
{
    printf("ERROR: ZeroMQ error occurred during zmq_bind(): %s\n", zmq_strerror(errno));

    return EXIT_FAILURE;
}
```

Next, calculate some useful values that you will need later. Note **TOPIC** in the code below; **PUB** sockets need a topic to be associated with the messages they send. Topics can be used by the receivers to filter messages:

```
const size_t topic_size = strlen(TOPIC);
const size_t envelope_size = topic_size + 1 + PACKET_SIZE * sizeof(int16_t);

printf("Topic: %s; topic size: %zu; Envelope size: %zu\n", TOPIC, topic_size, envelope_size);
```
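

As a sanity check on that arithmetic, the envelope for the parameters above works out to 45 bytes: 12 bytes of topic, 1 separator byte, and 32 bytes of payload. A quick standalone sketch in Python mirrors the C expression:

```python
# Mirror of the C expression:
# envelope_size = topic_size + 1 + PACKET_SIZE * sizeof(int16_t)
TOPIC = "fancyhw_data"
PACKET_SIZE = 16
SIZEOF_INT16 = 2  # bytes, same as sizeof(int16_t) in C

topic_size = len(TOPIC)
envelope_size = topic_size + 1 + PACKET_SIZE * SIZEOF_INT16

print(topic_size)     # 12
print(envelope_size)  # 45
```

These two numbers match the "topic size: 12; Envelope size: 45" line printed when the interface runs later in this tutorial.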

#### Sending messages

Start a loop that sends **REPETITIONS** messages:

```
for (unsigned int i = 0; i < REPETITIONS; i++)
{
    ...
```

Before sending a message, fill a buffer of **PACKET_SIZE** values. The library provides signed integers of 16 bits. Since the dimension of an **int** in C is not defined, use an **int** with a specific width:

```
    int16_t buffer[PACKET_SIZE];

    for (unsigned int j = 0; j < PACKET_SIZE; j++)
    {
        buffer[j] = fancyhw_read_val();
    }

    printf("Read %u data values\n", PACKET_SIZE);
```

The first step in message preparation and delivery is creating a ZeroMQ message and allocating the memory necessary for your message. This empty message is an envelope to store the data you will ship:

```
    zmq_msg_t envelope;

    const int rmi = zmq_msg_init_size(&envelope, envelope_size);
    if (rmi != 0)
    {
        printf("ERROR: ZeroMQ error occurred during zmq_msg_init_size(): %s\n", zmq_strerror(errno));

        zmq_msg_close(&envelope);

        break;
    }
```

Now that the memory is allocated, store the data in the ZeroMQ message "envelope." The **zmq_msg_data()** function returns a pointer to the beginning of the buffer in the envelope. The first part is the topic, followed by a space, then the binary data. Add whitespace as a separator between the topic and the data. To move along the buffer, you have to play with casts and [pointer arithmetic][18]. (Thank you, C, for making things straightforward.) Do this with:

```
    memcpy(zmq_msg_data(&envelope), TOPIC, topic_size);
    memcpy((void*)((char*)zmq_msg_data(&envelope) + topic_size), " ", 1);
    memcpy((void*)((char*)zmq_msg_data(&envelope) + 1 + topic_size), buffer, PACKET_SIZE * sizeof(int16_t));
```

Send the message through the **data_socket**. Note that **zmq_msg_send()** returns an **int**: the number of bytes sent, or -1 on error:

```
    const int rs = zmq_msg_send(&envelope, data_socket, 0);
    if (rs != (int)envelope_size)
    {
        printf("ERROR: ZeroMQ error occurred during zmq_msg_send(): %s\n", zmq_strerror(errno));

        zmq_msg_close(&envelope);

        break;
    }
```

Make sure to dispose of the envelope after you use it. In the full program below, the loop also pauses for one second (`sleep(1)`) between iterations, so the output is easy to follow:

```
    zmq_msg_close(&envelope);

    printf("Message sent; i: %u, topic: %s\n", i, TOPIC);
```

#### Clean it up

Because C does not provide [garbage collection][20], you have to tidy up. After you are done sending your messages, close the program with the clean-up needed to release the used memory:

```
const int rc = zmq_close(data_socket);

if (rc != 0)
{
    printf("ERROR: ZeroMQ error occurred during zmq_close(): %s\n", zmq_strerror(errno));

    return EXIT_FAILURE;
}

const int rd = zmq_ctx_destroy(context);

if (rd != 0)
{
    printf("ERROR: ZeroMQ error occurred during zmq_ctx_destroy(): %s\n", zmq_strerror(errno));

    return EXIT_FAILURE;
}

return EXIT_SUCCESS;
```

#### The entire C program

Save the full interface program below to a local file called **hw_interface.c**:

```
// For printf()
#include <stdio.h>
// For EXIT_*
#include <stdlib.h>
// For memcpy()
#include <string.h>
// For sleep()
#include <unistd.h>

#include <zmq.h>

#include "libfancyhw.h"

int main(void)
{
    const unsigned int INIT_PARAM = 12345;
    const unsigned int REPETITIONS = 10;
    const unsigned int PACKET_SIZE = 16;
    const char *TOPIC = "fancyhw_data";

    fancyhw_init(INIT_PARAM);

    void *context = zmq_ctx_new();

    if (!context)
    {
        printf("ERROR: ZeroMQ error occurred during zmq_ctx_new(): %s\n", zmq_strerror(errno));

        return EXIT_FAILURE;
    }

    void *data_socket = zmq_socket(context, ZMQ_PUB);

    const int rb = zmq_bind(data_socket, "tcp://*:5555");

    if (rb != 0)
    {
        printf("ERROR: ZeroMQ error occurred during zmq_bind(): %s\n", zmq_strerror(errno));

        return EXIT_FAILURE;
    }

    const size_t topic_size = strlen(TOPIC);
    const size_t envelope_size = topic_size + 1 + PACKET_SIZE * sizeof(int16_t);

    printf("Topic: %s; topic size: %zu; Envelope size: %zu\n", TOPIC, topic_size, envelope_size);

    for (unsigned int i = 0; i < REPETITIONS; i++)
    {
        int16_t buffer[PACKET_SIZE];

        for (unsigned int j = 0; j < PACKET_SIZE; j++)
        {
            buffer[j] = fancyhw_read_val();
        }

        printf("Read %u data values\n", PACKET_SIZE);

        zmq_msg_t envelope;

        const int rmi = zmq_msg_init_size(&envelope, envelope_size);
        if (rmi != 0)
        {
            printf("ERROR: ZeroMQ error occurred during zmq_msg_init_size(): %s\n", zmq_strerror(errno));

            zmq_msg_close(&envelope);

            break;
        }

        memcpy(zmq_msg_data(&envelope), TOPIC, topic_size);

        memcpy((void*)((char*)zmq_msg_data(&envelope) + topic_size), " ", 1);

        memcpy((void*)((char*)zmq_msg_data(&envelope) + 1 + topic_size), buffer, PACKET_SIZE * sizeof(int16_t));

        const int rs = zmq_msg_send(&envelope, data_socket, 0);
        if (rs != (int)envelope_size)
        {
            printf("ERROR: ZeroMQ error occurred during zmq_msg_send(): %s\n", zmq_strerror(errno));

            zmq_msg_close(&envelope);

            break;
        }

        zmq_msg_close(&envelope);

        printf("Message sent; i: %u, topic: %s\n", i, TOPIC);

        sleep(1);
    }

    const int rc = zmq_close(data_socket);

    if (rc != 0)
    {
        printf("ERROR: ZeroMQ error occurred during zmq_close(): %s\n", zmq_strerror(errno));

        return EXIT_FAILURE;
    }

    const int rd = zmq_ctx_destroy(context);

    if (rd != 0)
    {
        printf("ERROR: ZeroMQ error occurred during zmq_ctx_destroy(): %s\n", zmq_strerror(errno));

        return EXIT_FAILURE;
    }

    return EXIT_SUCCESS;
}
```

Compile using the command:

```
$ clang -std=c99 -I. hw_interface.c -lzmq -o hw_interface
```

If there are no compilation errors, you can run the interface. What's great is that ZeroMQ **PUB** sockets can run without any applications sending or retrieving data. That reduces complexity because there is no obligation about which process needs to start first.

Run the interface:

```
$ ./hw_interface
Topic: fancyhw_data; topic size: 12; Envelope size: 45
Read 16 data values
Message sent; i: 0, topic: fancyhw_data
Read 16 data values
Message sent; i: 1, topic: fancyhw_data
Read 16 data values
...
...
```

The output shows the data being sent through ZeroMQ. Now you need an application to read the data.

### Write a Python data processor

You are now ready to pass the data from C to a Python application.

#### Libraries

You need two libraries to help transfer data. First, you need the ZeroMQ bindings for Python:

```
$ python3 -m pip install pyzmq
```

The other is the [**struct** library][21], which decodes binary data. It's part of the Python standard library, so there's no need to **pip install** it.

The first part of the Python program imports both of these libraries:

```
import zmq
import struct
```

#### Significant parameters

To use ZeroMQ, you must subscribe to the same topic used in the constant **TOPIC** above:

```
topic = "fancyhw_data".encode('ascii')

print("Reading messages with topic: {}".format(topic))
```

#### Initialization

Next, initialize the context and the socket. Use a **subscribe** socket (also known as a **SUB** socket), which is the natural partner of the **PUB** socket. The socket also needs to subscribe to the right topic:

```
with zmq.Context() as context:
    socket = context.socket(zmq.SUB)

    socket.connect("tcp://127.0.0.1:5555")
    socket.setsockopt(zmq.SUBSCRIBE, topic)

    i = 0

    ...
```

#### Receiving messages

Start an infinite loop that waits for new messages to be delivered to the SUB socket. The loop will be closed if you press **Ctrl+C** or if an error occurs:

```
    try:
        while True:

            ...  # we will fill this in next

    except KeyboardInterrupt:
        socket.close()
    except Exception as error:
        print("ERROR: {}".format(error))
        socket.close()
```

The loop waits for new messages to arrive with the **recv()** method. Then it splits whatever is received at the first space to separate the topic from the content:

```
            binary_topic, data_buffer = socket.recv().split(b' ', 1)
```

#### Decoding messages

Python does not yet know that the topic is a string, so decode it using the standard ASCII encoding:

```
            topic = binary_topic.decode(encoding = 'ascii')

            print("Message {:d}:".format(i))
            print("\ttopic: '{}'".format(topic))
```

The next step is to read the binary data using the **struct** library, which can convert shapeless binary blobs to significant values. First, calculate the number of values stored in the packet. This example uses 16-bit signed integers that correspond to an "h" in the **struct** [format][22]:

```
            packet_size = len(data_buffer) // struct.calcsize("h")

            print("\tpacket size: {:d}".format(packet_size))
```

By knowing how many values are in the packet, you can define the format by preparing a string with the number of values and their types (e.g., "**16h**"):

```
            struct_format = "{:d}h".format(packet_size)
```

Convert that binary blob to a series of numbers that you can immediately print:

```
            data = struct.unpack(struct_format, data_buffer)

            print("\tdata: {}".format(data))
```
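

To see the framing and the "h" format in action without any sockets, the parsing steps above can be exercised on a hand-built envelope. This is a standalone sketch, independent of ZeroMQ:

```python
import struct

# Build a payload the same way the C shim does:
# 16 signed 16-bit integers packed into 32 bytes.
values = tuple(range(-8, 8))
payload = struct.pack("16h", *values)

# The envelope is the topic, a single space, then the binary payload.
envelope = b"fancyhw_data " + payload

# Parse it exactly like the subscriber: split on the first space...
binary_topic, data_buffer = envelope.split(b' ', 1)

# ...compute how many 16-bit values the payload holds...
packet_size = len(data_buffer) // struct.calcsize("h")

# ...and unpack them back into a tuple of Python integers.
data = struct.unpack("{:d}h".format(packet_size), data_buffer)

print(binary_topic.decode('ascii'))  # fancyhw_data
print(packet_size)                   # 16
print(data == values)                # True
```

The round trip recovers the original tuple, which is why the subscriber can trust **len(data_buffer)** alone to tell it how many values arrived.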

#### The full Python program

Here is the complete data receiver in Python:

```
#! /usr/bin/env python3

import zmq
import struct

topic = "fancyhw_data".encode('ascii')

print("Reading messages with topic: {}".format(topic))

with zmq.Context() as context:
    socket = context.socket(zmq.SUB)

    socket.connect("tcp://127.0.0.1:5555")
    socket.setsockopt(zmq.SUBSCRIBE, topic)

    i = 0

    try:
        while True:
            binary_topic, data_buffer = socket.recv().split(b' ', 1)

            topic = binary_topic.decode(encoding = 'ascii')

            print("Message {:d}:".format(i))
            print("\ttopic: '{}'".format(topic))

            packet_size = len(data_buffer) // struct.calcsize("h")

            print("\tpacket size: {:d}".format(packet_size))

            struct_format = "{:d}h".format(packet_size)

            data = struct.unpack(struct_format, data_buffer)

            print("\tdata: {}".format(data))

            i += 1

    except KeyboardInterrupt:
        socket.close()
    except Exception as error:
        print("ERROR: {}".format(error))
        socket.close()
```

Save it to a file called **online_analysis.py**. Python does not need to be compiled, so you can run the program immediately.

Here is the output:

```
$ ./online_analysis.py
Reading messages with topic: b'fancyhw_data'
Message 0:
        topic: 'fancyhw_data'
        packet size: 16
        data: (20946, -23616, 9865, 31416, -15911, -10845, -5332, 25662, 10955, -32501, -18717, -24490, -16511, -28861, 24205, 26568)
Message 1:
        topic: 'fancyhw_data'
        packet size: 16
        data: (12505, 31355, 14083, -19654, -9141, 14532, -25591, 31203, 10428, -25564, -732, -7979, 9529, -27982, 29610, 30475)
...
...
```

### Conclusion

This tutorial describes an alternative way of gathering data from C-based hardware interfaces and providing it to Python-based infrastructures. You can take this data and analyze it or pass it off in any number of directions. It employs a messaging library to deliver data between a "gatherer" and an "analyzer" instead of having a monolithic piece of software that does everything.

This tutorial also increases what I call "software granularity." In other words, it subdivides the software into smaller units. One of the benefits of this strategy is the possibility of using different programming languages at the same time, with minimal interfaces acting as shims between them.

In practice, this design allows software engineers to work both more collaboratively and more independently. Different teams may work on different steps of the analysis, choosing the tools they prefer. Another benefit is the parallelism that comes for free, since all the processes can run in parallel. The [ZeroMQ messaging library][3] is a remarkable piece of software that makes all of this much easier.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/3/zeromq-c-python

作者:[Cristiano L. Fontana][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/cristianofontana
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_chat_communication_message.png?itok=LKjiLnQu (Chat via email)
[2]: https://docs.python.org/3/extending/extending.html
[3]: https://zeromq.org/
[4]: https://en.wikipedia.org/wiki/Network_socket
[5]: https://en.wikipedia.org/wiki/Pieter_Hintjens
[6]: http://hintjens.com/
[7]: https://gcc.gnu.org/
[8]: https://clang.llvm.org/
[9]: https://github.com/zeromq/libzmq#installation-of-binary-packages-
[10]: https://www.python.org/downloads/
[11]: https://zeromq.org/languages/python/
[12]: http://www.opengroup.org/onlinepubs/009695399/functions/srand.html
[13]: http://www.opengroup.org/onlinepubs/009695399/functions/rand.html
[14]: http://www.opengroup.org/onlinepubs/009695399/functions/printf.html
[15]: https://en.wikipedia.org/wiki/Transmission_Control_Protocol
[16]: http://zguide.zeromq.org/page:all#Plugging-Sockets-into-the-Topology
[17]: http://www.opengroup.org/onlinepubs/009695399/functions/strlen.html
[18]: https://en.wikipedia.org/wiki/Pointer_%28computer_programming%29%23C_and_C++
[19]: http://www.opengroup.org/onlinepubs/009695399/functions/memcpy.html
[20]: https://en.wikipedia.org/wiki/Garbage_collection_(computer_science)
[21]: https://docs.python.org/3/library/struct.html
[22]: https://docs.python.org/3/library/struct.html#format-characters
@ -0,0 +1,207 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (7 open hardware projects working to solve COVID-19)
|
||||
[#]: via: (https://opensource.com/article/20/3/open-hardware-covid19)
|
||||
[#]: author: (Harris Kenny https://opensource.com/users/harriskenny)
|
||||
|
||||
7 open hardware projects working to solve COVID-19
|
||||
======
|
||||
Open hardware solutions can prevent the spread and suffering of the
|
||||
novel coronavirus.
|
||||
![open on blue background with heartbeat symbol][1]
|
||||
|
||||
The open source [hardware][2] movement has long championed the importance of the right to repair, fully own the technology you buy, and be able to remix and reproduce gadgets, just like you can with music. And so, during this challenging time, open hardware is providing some answers to some of the problems created by the coronavirus pandemic.
|
||||
|
||||
### An overview of what's happening
|
||||
|
||||
For one, hardware developers around the world are working to resolve supply chain weaknesses using open source, the same philosophy that has driven a proliferation of new software technologies over the last 30 years. The hardware movement's past successes include the [RepRap Project][3], [Open Source Ecology][4], and [Open Source Beehives][5], proving this can be done.
|
||||
|
||||
There has been increasing interest in creators using 3D printing and other technologies to create replacement parts for and manufacturing of safety equipment on demand. For example, the Polytechnic University lab in Hong Kong [3D printed face shields][6] for hospital workers. And Italian startup Isinnova partnered with the FabLab in Milan to [3D-print replacement valves][7] for reanimation devices in hard-hit Northern Italy. Companies are also releasing designs to adapt our physical interactions, like this [3D printed hands-free door opener][8] from Materialise. These examples of replacing parts and solving problems are an excellent start and appear to be saving lives.
|
||||
|
||||
Another traditional hardware technique is picking up steam: sewing. The AFP reports that there is an acute need for face masks around the world and guidance from the World Health Organization about their importance. With single-use, disposable masks being prioritized for healthcare workers, in the Czech Republic people are [taking to sewing to make their own masks][9]. (Repeat-use masks do introduce sterility concerns.) The Facebook group "Czechia sews face masks" started to address this problem in their country, with tens of thousands of members using their at-home sewing machines.
|
||||
|
||||
Open source hardware equipment and machinery projects are also gaining traction. First, there is testing equipment that is sophisticated and highly capable. Next, there is medical equipment that can be categorized as field-grade (at best) for scenarios with no other option. These projects are outlined in detail below.
|
||||
|
||||
To learn more, I spoke with Jason Huggins, founder and CEO of Chicago-based [Tapster Robotics][10]. Tapster Robotics designs and manufactures desktop robots using 3D printing, computer numerical control (CNC) machining, and open electronics like [Arduino][11]. He has both the technical know-how and the industrial capacity to make an impact. And he wants to commit his company's resources to help in this fight.
|
||||
|
||||
"Basically, we're in a World War II mobilization moment right now. Even though I'm not a doctor, we should still all follow the Hippocratic Oath. Whatever I do, I don't want to make the problem worse," Huggins explains. "As a counterpoint, there is WHO executive director Dr. Michael Ryan's comment: 'Speed trumps perfection,'" Huggins argues.
|
||||
|
||||
> Wow.
|
||||
>
|
||||
> This man is the global authority on the spread of disease. If you are a leader (in any capacity) watch this. If you are not, watch it too. [pic.twitter.com/bFogaekehM][12]
|
||||
>
|
||||
> — Jim Richards Sh🎙wgram (@JIMrichards1010) [March 15, 2020][13]
|
||||
|
||||
Huggins has extensive experience with delivering during times of need. His efforts were instrumental in helping [Healthcare.gov][14] scale after its challenging initial launch. He also created the software industry-standard testing frameworks Selenium and Appium. With this experience, his advice is well worth considering.
|
||||
|
||||
I also spoke with Seattle-based attorney Mark Tyson of [Tyson Law][15], who works with startups and small businesses. He has direct experience working with nimble companies in rapidly evolving industries. In framing the overall question, Tyson begins:
|
||||
|
||||
> Good Samaritan laws protect volunteers—i.e., “Good Samaritans”—from being held liable as a result of their decision to give aid during an emergency. While the specifics of these laws vary by state, they share a common public policy rationale: namely, encouraging bystanders to help others facing an emergency. Conceivably, this rationale could justify application of these types of laws in less traditional settings than, say, pulling the victim of a car accident out of harm’s way.
|
||||
|
||||
Applying this specific situation, Tyson notes:
|
||||
|
||||
> "Before taking action, creators would be wise to speak with an attorney to conduct a state-specific risk assessment. It would also be prudent to ask larger institutions, like hospitals or insurers, to accept potential liability exposure via contract—for instance, through the use of indemnification agreements, whereby the hospital or its insurer agrees to indemnify the creator for liability."
|
||||
|
||||
Tyson understands the urgency and gravity of the situation. This option to use contracts is not meant to be a roadblock; instead, it may be a way to help adoption happen at scale to make a bigger difference faster. It is up to you or your organization to make this determination.
|
||||
|
||||
With all that said, let's explore the projects that are in use or in development (and may be available for deployment soon).
|
||||
|
||||
### 7 open hardware projects fighting COVID-19
|
||||
|
||||
#### Opentrons
|
||||
|
||||
[Opentrons][16]' open source lab automation platform comprises a suite of open source hardware, verified labware, consumables, reagents, and workstations. Opentrons says its products can help dramatically [scale up COVID-19 testing][17] with systems that can "automate up to 2,400 tests per day within days of an order being placed." It plans to ramp up to 1 million tested samples by July 1.
|
||||
|
||||
![Opentrons roadmap graphic][18]
|
||||
|
||||
From the Opentrons [website][17], Copyright
|
||||
|
||||
The company is already working with federal and local government agencies to determine if its systems can be used for clinical diagnosis under an [emergency use authorization][19]. Opentrons is shared under an [Apache 2.0 license][20]. I first learned of it from biologist Kristin Ellis, who is affiliated with the project.
|
||||
|
||||
#### Chai Open qPCR
|
||||
|
||||
Chai's [Open qPCR][21] device uses [polymerase chain reaction][22] (PCR) to rapidly test swabs from surfaces (e.g., door handles and elevator buttons) to see if the novel coronavirus is present. This open source hardware shared under an [Apache 2.0 license][23] uses a [BeagleBone][24] low-power Linux computer. Data from the Chai Open qPCR can enable public health, civic, and business leaders to make more informed decisions about cleaning, mitigation, facility closures, contact tracing, and testing.
|
||||
|
||||
#### OpenPCR
|
||||
|
||||
[OpenPCR][25] is a PCR testing device kit from Josh Perfetto and Jessie Ho, the creators behind the Chai Open qPCR. This is more of a DIY open source device than their previous project, but it has the same use case: using environmental testing to identify the coronavirus in the field. As the project page states, "traditional real-time PCR machines capable of detecting these pathogens typically cost upwards of $30,000 US dollars and are not suitable for field usage." Because OpenPCR is a kit that users build themselves and is shared under a [GPLv3.0 license][26], the device aims to democratize access to molecular diagnostics.
|
||||
|
||||
![OpenPCR][27]
|
||||
|
||||
From the OpenPCR [website][25], Copyright
|
||||
|
||||
And, like any good open source project, there is a derivative! [WildOpenPCR][28] by [GaudiLabs][29] in Switzerland is also shared under a [GPLv3.0 license][30].
|
||||
|
||||
#### PocketPCR
|
||||
|
||||
Gaudi Labs' [PocketPCR][31] thermocycler is used to activate biological reactions by raising and lowering the temperature of a liquid in small test tubes. It can be powered with a simple USB power adapter, either tethered to a device or on its own, with preset parameters that don't require a computer or smartphone.
|
||||
|
||||
![PocketPCR][32]
|
||||
|
||||
From the PocketPCR [website][31], Copyright
|
||||
|
||||
Like the other PCR options described in this article, this device may facilitate environmental testing for coronavirus, although its project page does not explicitly state so. PocketPCR is shared under a [GPLv3.0 license][33].
|
||||
|
||||
#### Open Lung Low Resource Ventilator
|
||||
|
||||
The [Open Lung Low Resource Ventilator][34] is a quick-deployment ventilator that utilizes a [bag valve mask][35] (BVM), also known as an Ambu-bag, as a core component. Ambu-bags are mass-produced, certified, small, mechanically simple, and adaptable to both invasive tubing and masks. The Open Lung ventilator will use micro-electronics to sense and control air pressure and flow, with the goal of enabling semi-autonomous operation.
|
||||
|
||||
![Open Lung ventilator][36]
|
||||
|
||||
Open Lung [on GitLab][37]
|
||||
|
||||
This early-stage project boasts a large team with hundreds of contributors, led by: Colin Keogh, David Pollard, Connall Laverty, and Gui Calavanti. It is shared under a [GPLv3.0 license][38].
|
||||
|
||||
#### Pandemic Ventilator
|
||||
|
||||
The [Pandemic Ventilator][39] is a DIY ventilator prototype. Like the RepRap project, it uses commonly available hardware components in its design. The project was uploaded by user Panvent to Instructables more than 10 years ago, and there are six major steps to producing it. The project is shared under a [CC BY-NC-SA license][39]. This video shows the system in action:
|
||||
|
||||
#### Folding at Home
|
||||
|
||||
[Folding at Home][40] is a distributed computing project for simulating protein dynamics, including the process of protein folding and the movements of proteins implicated in a variety of diseases. It is a call-to-action for citizen scientists, researchers, and volunteers to use their computers at home to help run simulations, similar to the decommissioned [SETI@Home project][41]. If you're a technologist with capable computer hardware, Folding at Home is for you.
|
||||
|
||||
![Markov state model][42]
|
||||
|
||||
Vincent Voelz, CC BY-SA 3.0
|
||||
|
||||
Folding at Home uses Markov state models (shown above) to model the possible shapes and folding pathways a protein can take in order to look for new therapeutic opportunities. You can find out more about the project in Washington University biophysicist Greg Bowman's post on [how it works and how you can help][43].
|
||||
|
||||
The project involves a consortium of academic laboratories, contributors, and corporate sponsors from many countries, including Hong Kong, Croatia, Sweden, and the United States. Folding at Home is shared under a [mix of GPL and proprietary licenses][44] on [GitHub][45] and is multi-platform for Windows, macOS, and GNU/Linux (e.g., Debian, Ubuntu, Mint, RHEL, CentOS, Fedora).
|
||||
|
||||
### Many other interesting projects
|
||||
|
||||
These projects are just a fraction of the activity happening in the open hardware space to solve or treat COVID-19. In researching this article, I discovered other projects worth exploring, such as:
|
||||
|
||||
* [Open source ventilators, oxygen concentrators, etc.][46] by Coronavirus Tech Handbook
|
||||
* [Helpful engineering][47] by ProjectOpenAir
|
||||
* [Open source ventilator hackathon][48] on Hackaday
|
||||
* [Specifications for simple open source mechanical ventilator][49] by Johns Hopkins emergency medicine resident Julian Botta
|
||||
* [Coronavirus-related phishing, malware, and ransomware on the rise][50] by Shannon Morse
|
||||
* [Converting a low-cost CPAP blower into a rudimentary ventilator][51] by jcl5m1
|
||||
* [Forum A.I.R.E. discussion on open respirators and fans][52] (Spanish/español)
|
||||
* [Special Issue on Open-Source COVID19 Medical Hardware][53] by Elsevier HardwareX
|
||||
|
||||
|
||||
|
||||
These projects are based all over the world, and this type of global cooperation is exactly what we need, as the virus ignores borders. The novel coronavirus pandemic affects countries at different times and in different ways, so we need a distributed approach.
|
||||
|
||||
As my colleague Steven Abadie and I write in the [OSHdata 2020 Report][54], the open source hardware movement is a global movement. Participating individuals and organizations with certified projects are located in over 35 countries around the world and in every hemisphere.
|
||||
|
||||
![Open source hardware map][55]
|
||||
|
||||
OSHdata, CC BY-SA 4.0 International
|
||||
|
||||
If you are interested in joining this conversation with open source hardware developers around the world, join the [Open Hardware Summit Discord][56] server with a dedicated channel for conversations about COVID-19. You can find roboticists, designers, artists, firmware and mechanical engineers, students, researchers, and others who are fighting this war together. We hope to see you there.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/3/open-hardware-covid19
|
||||
|
||||
作者:[Harris Kenny][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/harriskenny
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/health_heartbeat.png?itok=P-GXea-p (open on blue background with heartbeat symbol)
|
||||
[2]: https://opensource.com/resources/what-open-hardware
|
||||
[3]: https://reprap.org/wiki/RepRap
|
||||
[4]: https://www.opensourceecology.org/
|
||||
[5]: https://www.osbeehives.com/
|
||||
[6]: https://www.scmp.com/news/hong-kong/health-environment/article/3052135/polytechnic-university-lab-3d-printing-face
|
||||
[7]: https://www.3dprintingmedia.network/covid-19-3d-printed-valve-for-reanimation-device/
|
||||
[8]: https://www.3dprintingmedia.network/materialise-shows-3d-printed-door-opener-for-coronavirus-containment-efforts/
|
||||
[9]: https://news.yahoo.com/stitch-time-czechs-sew-combat-virus-mask-shortage-205213804.html
|
||||
[10]: http://tapster.io/
|
||||
[11]: https://opensource.com/life/15/5/arduino-or-raspberry-pi
|
||||
[12]: https://t.co/bFogaekehM
|
||||
[13]: https://twitter.com/JIMrichards1010/status/1239140710558969857?ref_src=twsrc%5Etfw
|
||||
[14]: http://Healthcare.gov
|
||||
[15]: https://www.marktysonlaw.com/
|
||||
[16]: https://opentrons.com/
|
||||
[17]: https://blog.opentrons.com/testing-for-covid-19-with-opentrons/
|
||||
[18]: https://opensource.com/sites/default/files/uploads/opentrons.png (Opentrons roadmap graphic)
|
||||
[19]: https://www.fda.gov/regulatory-information/search-fda-guidance-documents/policy-diagnostics-testing-laboratories-certified-perform-high-complexity-testing-under-clia-prior
|
||||
[20]: https://github.com/Opentrons/opentrons/blob/edge/LICENSE
|
||||
[21]: https://www.chaibio.com/openqpcr
|
||||
[22]: https://en.wikipedia.org/wiki/Polymerase_chain_reaction
|
||||
[23]: https://github.com/chaibio/chaipcr
|
||||
[24]: https://beagleboard.org/bone
|
||||
[25]: https://openpcr.org/
|
||||
[26]: https://github.com/jperfetto/OpenPCR/blob/master/license.txt
|
||||
[27]: https://opensource.com/sites/default/files/uploads/openpcr.png (OpenPCR)
|
||||
[28]: https://github.com/GenericLab/WildOpenPCR
|
||||
[29]: http://www.gaudi.ch/GaudiLabs/?page_id=328
|
||||
[30]: https://github.com/GenericLab/WildOpenPCR/blob/master/license.txt
|
||||
[31]: http://gaudi.ch/PocketPCR/
|
||||
[32]: https://opensource.com/sites/default/files/uploads/pocketpcr.png (PocketPCR)
|
||||
[33]: https://github.com/GaudiLabs/PocketPCR/blob/master/LICENSE
|
||||
[34]: https://gitlab.com/TrevorSmale/low-resource-ambu-bag-ventilor
|
||||
[35]: https://en.wikipedia.org/wiki/Bag_valve_mask
|
||||
[36]: https://opensource.com/sites/default/files/uploads/open-lung.png (Open Lung ventilator)
|
||||
[37]: https://gitlab.com/TrevorSmale/low-resource-ambu-bag-ventilor/-/blob/master/images/CONCEPT_1_MECH.png
|
||||
[38]: https://gitlab.com/TrevorSmale/low-resource-ambu-bag-ventilor/-/blob/master/LICENSE
|
||||
[39]: https://www.instructables.com/id/The-Pandemic-Ventilator/
|
||||
[40]: https://foldingathome.org/
|
||||
[41]: https://setiathome.ssl.berkeley.edu/
|
||||
[42]: https://opensource.com/sites/default/files/uploads/foldingathome.png (Markov state model)
|
||||
[43]: https://foldingathome.org/2020/03/15/coronavirus-what-were-doing-and-how-you-can-help-in-simple-terms/
|
||||
[44]: https://en.wikipedia.org/wiki/Folding@home
|
||||
[45]: https://github.com/FoldingAtHome
|
||||
[46]: https://coronavirustechhandbook.com/hardware
|
||||
[47]: https://app.jogl.io/project/121#about
|
||||
[48]: https://hackaday.com/2020/03/12/ultimate-medical-hackathon-how-fast-can-we-design-and-deploy-an-open-source-ventilator/
|
||||
[49]: https://docs.google.com/document/d/1FNPwrQjB1qW1330s5-S_-VB0vDHajMWKieJRjINCNeE/edit?fbclid=IwAR3ugu1SGMsacwKi6ycAKJFOMduInSO4WVM8rgmC4CgMJY6cKaGBNR14mpM
|
||||
[50]: https://www.youtube.com/watch?v=dmQ1twpPpXA
|
||||
[51]: https://github.com/jcl5m1/ventilator
|
||||
[52]: https://foro.coronavirusmakers.org/
|
||||
[53]: https://www.journals.elsevier.com/hardwarex/call-for-papers/special-issue-on-open-source-covid19-medical-hardware
|
||||
[54]: https://oshdata.com/2020-report
|
||||
[55]: https://opensource.com/sites/default/files/uploads/oshdata-country.png (Open source hardware map)
|
||||
[56]: https://discord.gg/duAtG5h
|
@ -0,0 +1,373 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Build a private social network with a Raspberry Pi)
|
||||
[#]: via: (https://opensource.com/article/20/3/raspberry-pi-open-source-social)
|
||||
[#]: author: (Giuseppe Cassibba https://opensource.com/users/peppe8o)
|
||||
|
||||
Build a private social network with a Raspberry Pi
|
||||
======
|
||||
Step-by-step instructions on how to create your own social network with
|
||||
low-cost hardware and simple setup.
|
||||
![Team of people around the world][1]
|
||||
|
||||
Social networks have revolutionized people's lives in the last several years. People use social channels every day to stay connected with friends and family. But a common question remains regarding privacy and data security. Even though social networks have created complex privacy policies to protect users, keeping your data on your own server is always the best option if you don't want to make it available to the public.
|
||||
|
||||
Once again, a Raspberry Pi running Raspbian Lite is a versatile way to put a number of useful home services in place (see also my [Raspberry Pi projects][2] article). Some compelling features can be achieved by searching for open source software and testing it with this fantastic device. An interesting example to try is installing OpenSource Social Network on your Raspberry Pi.
|
||||
|
||||
### What Is OpenSource Social Network?
|
||||
|
||||
[OpenSource Social Network][3] (OSSN) is rapid-development social networking software written in PHP that essentially allows you to build a social networking website. OSSN can be used to build different types of social apps, such as:
|
||||
|
||||
* Private Intranets
|
||||
* Public/Open Networks
|
||||
* Community
|
||||
|
||||
|
||||
|
||||
OSSN supports features like:
|
||||
|
||||
* Photos
|
||||
* Profile
|
||||
* Friends
|
||||
* Smileys
|
||||
* Search
|
||||
* Chat
|
||||
|
||||
|
||||
|
||||
OSSN runs on a LAMP server. It has very modest hardware requirements but an amazing user interface, which is also mobile-friendly.
|
||||
|
||||
### What we need
|
||||
|
||||
This project is very simple and, because we're only installing web services, we need just a few cheap parts. I'm going to use a Raspberry Pi 3 model B+, but it should also work with a Raspberry Pi 3 model A+ or newer boards.
|
||||
|
||||
Hardware:
|
||||
|
||||
* Raspberry Pi 3 model B+ with its power supply
* a micro SD card (preferably a high-performance card, at least 16GB)
* a desktop PC with SFTP software (for example, the free [Filezilla][4]) to transfer installation packages to your Raspberry Pi.
|
||||
|
||||
|
||||
|
||||
### Step-by-step procedure
|
||||
|
||||
We'll start by setting up a classic LAMP server. We'll then set up database users and install OpenSource Social Network.
|
||||
|
||||
#### 1\. Install Raspbian Buster Lite OS
|
||||
|
||||
For this step, you can simply follow my [Install Raspbian Buster Lite in your Raspberry Pi][5] article.
|
||||
|
||||
Make sure that your system is up to date. Connect via an SSH terminal and type the following commands:
|
||||
|
||||
|
||||
```
sudo apt-get update
sudo apt-get upgrade
```
|
||||
|
||||
#### 2\. Install LAMP server

LAMP (Linux–Apache–MySQL–PHP) servers usually come with the MySQL database. In our project, we'll use MariaDB instead, because it is lighter and works well on the Raspberry Pi.
|
||||
|
||||
#### 3\. Install Apache server:
|
||||
|
||||
|
||||
```
sudo apt-get install apache2 -y
```
|
||||
|
||||
You should now be able to check that the Apache installation succeeded by browsing to http://<<YourRpiIPAddress>>:
|
||||
|
||||
![][6]
|
||||
|
||||
#### 4\. Install PHP:
|
||||
|
||||
|
||||
```
sudo apt-get install php -y
```
|
||||
|
||||
#### 5\. Install MariaDB server and PHP connector:
|
||||
|
||||
|
||||
```
sudo apt-get install mariadb-server php-mysql -y
```
|
||||
|
||||
#### 6\. Install PhpMyAdmin:
|
||||
|
||||
PhpMyAdmin is not mandatory in OpenSource Social Network, but I suggest that you install it because it simplifies database management.
|
||||
|
||||
|
||||
```
sudo apt-get install phpmyadmin
```
|
||||
|
||||
In the phpMyAdmin setup screen, take the following steps:
|
||||
|
||||
* Select apache2 (mandatory) with the spacebar and press OK.
|
||||
* Select Yes to configure the database for phpMyAdmin with dbconfig-common.
|
||||
* Enter your favorite phpMyAdmin password and press OK.
|
||||
* Enter your phpMyAdmin password again to confirm and press OK.
|
||||
|
||||
|
||||
|
||||
#### 7\. Grant phpMyAdmin user DB privileges to manage DBs:
|
||||
|
||||
We'll connect to MariaDB as the root user (the default password is empty) to grant permissions. Remember to use a semicolon at the end of each command row, as shown below:
|
||||
|
||||
|
||||
```
sudo mysql -uroot -p
grant all privileges on *.* to 'phpmyadmin'@'localhost';
flush privileges;
quit
```
|
||||
|
||||
#### 8\. Finally, restart the Apache service:
|
||||
|
||||
|
||||
```
sudo systemctl restart apache2.service
```
|
||||
|
||||
Then check that phpMyAdmin is working by browsing to http://<<YourRpiIPAddress>>/phpmyadmin/.
|
||||
|
||||
![][7]
|
||||
|
||||
Default phpMyAdmin login credentials are:
|
||||
|
||||
* user: phpmyadmin
|
||||
* password: the one you set up in the phpMyAdmin installation step
|
||||
|
||||
|
||||
|
||||
### Installing other OSSN-required packages and setting up PHP
|
||||
|
||||
We need to prepare our system for OpenSource Social Network's first setup wizard. Required packages are:
|
||||
|
||||
* PHP version 5.6, 7.0, or 7.1
* MySQL 5 or greater
|
||||
* APACHE
|
||||
* MOD_REWRITE
|
||||
* PHP Extensions cURL & Mcrypt should be enabled
|
||||
* PHP GD Extension
|
||||
* PHP ZIP Extension
|
||||
* PHP settings allow_url_fopen enabled
|
||||
* PHP JSON Support
|
||||
* PHP XML Support
|
||||
* PHP OpenSSL
|
||||
|
||||
|
||||
|
||||
So we'll install them with the following terminal command:
|
||||
|
||||
|
||||
```
sudo apt-get install php7.3-curl php7.3-gd php7.3-zip php7.3-json php7.3-xml
```
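Before running the setup wizard, it can save a round trip to confirm that the required PHP extensions are actually loaded. The sketch below is only illustrative: the `check_php_modules` helper is not part of OSSN, and in real use you would feed it the output of `php -m`.

```shell
# check_php_modules: report which OSSN-required PHP extensions are missing
# from a newline-separated module list (normally the output of `php -m`).
check_php_modules() {
  modules="$1"
  missing=""
  for ext in curl gd zip json xml openssl; do
    printf '%s\n' "$modules" | grep -qix "$ext" || missing="$missing $ext"
  done
  if [ -n "$missing" ]; then
    echo "Missing PHP extensions:$missing"
  else
    echo "All required PHP extensions are loaded"
  fi
}

# On the Pi you would call: check_php_modules "$(php -m)"
# Here, a sample module list stands in for real output:
sample='curl
gd
json
openssl
xml
zip'
result=$(check_php_modules "$sample")
echo "$result"
```

If anything is reported missing, install the corresponding `php7.3-*` package before continuing.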
|
||||
|
||||
#### 1\. Enable MOD_REWRITE:
|
||||
|
||||
|
||||
```
sudo a2enmod rewrite
```
|
||||
|
||||
#### 2\. Edit the default Apache config to use mod_rewrite:
|
||||
|
||||
|
||||
```
sudo nano /etc/apache2/sites-available/000-default.conf
```
|
||||
|
||||
#### 3\. Add the section so that your **000-default.conf** file appears like the following (excluding comments):
|
||||
|
||||
|
||||
```
<VirtualHost *:80>
        ServerAdmin webmaster@localhost
        DocumentRoot /var/www/html
        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined
        # SECTION TO ADD --------------------------------
        <Directory /var/www/html>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Require all granted
        </Directory>
        # END SECTION TO ADD ----------------------------
</VirtualHost>
```
|
||||
|
||||
#### 4\. Install Mcrypt:
|
||||
|
||||
|
||||
```
sudo apt install php-dev libmcrypt-dev php-pear
sudo pecl channel-update pecl.php.net
sudo pecl install mcrypt-1.0.2
```
|
||||
|
||||
#### 5\. Enable the Mcrypt module by adding (or uncommenting) "extension=mcrypt.so" in /etc/php/7.3/apache2/php.ini:
|
||||
|
||||
|
||||
```
sudo nano /etc/php/7.3/apache2/php.ini
```
|
||||
|
||||
**allow_url_fopen** should already be enabled in "/etc/php/7.3/apache2/php.ini". OpenSSL should already be installed in PHP 7.3.
|
||||
|
||||
#### 6\. Another suggested setting is raising the PHP max upload file size to 16MB:
|
||||
|
||||
|
||||
```
sudo nano /etc/php/7.3/apache2/php.ini
```
|
||||
|
||||
#### 7\. Look for the row with the **upload_max_filesize** parameter and set it as follows:
|
||||
|
||||
|
||||
```
upload_max_filesize = 16M
```
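If you'd rather not open an editor, the same change can be made non-interactively with `sed`. The sketch below rehearses the edit on a scratch copy first; only the commented-out last line would touch the real php.ini.

```shell
# Rehearse the php.ini edit on a scratch copy
tmpini=$(mktemp)
printf 'post_max_size = 8M\nupload_max_filesize = 2M\n' > "$tmpini"

# Rewrite the upload_max_filesize line in place
sed -i 's/^upload_max_filesize = .*/upload_max_filesize = 16M/' "$tmpini"

result=$(grep '^upload_max_filesize' "$tmpini")
echo "$result"    # upload_max_filesize = 16M
rm -f "$tmpini"

# The real thing (remember to restart Apache afterwards):
# sudo sed -i 's/^upload_max_filesize = .*/upload_max_filesize = 16M/' /etc/php/7.3/apache2/php.ini
```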
|
||||
|
||||
#### 8\. Save and exit. Restart Apache:
|
||||
|
||||
|
||||
```
sudo systemctl restart apache2.service
```
|
||||
|
||||
### Install OSSN
|
||||
|
||||
#### 1\. Create DB and set up user:
|
||||
|
||||
Go back to the phpMyAdmin web page (browse to "http://<<YourRpiIPAddress>>/phpmyadmin/") and log in:
|
||||
|
||||
User: phpmyadmin
|
||||
|
||||
Password: the one set up in phpmyadmin installation step
|
||||
|
||||
Click on the Databases tab:
|
||||
|
||||
![][8]
|
||||
|
||||
Create a database and take note of the database name, as you will be required to enter it later in the installation process.
|
||||
|
||||
![][9]
|
||||
|
||||
It's time to create a database user for OSSN. In this example, I'll use the following credentials:
|
||||
|
||||
User: ossn_db_user
|
||||
|
||||
Password: ossn_db_password
|
||||
|
||||
So, the terminal commands will be (the root password is still empty unless you changed it earlier):
|
||||
|
||||
|
||||
```
sudo mysql -uroot -p
CREATE USER 'ossn_db_user'@'localhost' IDENTIFIED BY 'ossn_db_password';
GRANT ALL PRIVILEGES ON ossn_db.* TO 'ossn_db_user'@'localhost';
flush privileges;
quit
```
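The same user setup can be scripted without the interactive MySQL prompt by piping a heredoc into `mysql`. This sketch only builds and prints the statements (using the example credentials above); the commented line shows how it would actually be executed.

```shell
# Example credentials from the article -- substitute your own
DB_NAME=ossn_db
DB_USER=ossn_db_user
DB_PASS=ossn_db_password

# Assemble the SQL that creates the OSSN database user and grants privileges
sql=$(cat <<EOF
CREATE USER '$DB_USER'@'localhost' IDENTIFIED BY '$DB_PASS';
GRANT ALL PRIVILEGES ON $DB_NAME.* TO '$DB_USER'@'localhost';
FLUSH PRIVILEGES;
EOF
)
printf '%s\n' "$sql"

# To actually apply it on the Pi:
# printf '%s\n' "$sql" | sudo mysql -uroot -p
```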
|
||||
|
||||
#### 2\. Install OSSN software:
|
||||
|
||||
Download the OSSN installation zip file from the [OSSN download page][10] on your local PC. At the time of this writing, this file is named "ossn-v5.2-1577836800.zip."
|
||||
|
||||
Using your favorite SFTP software, transfer the entire zip file via SFTP to a new folder in the path "/home/pi/download" on your Raspberry Pi. Common (default) SFTP connection parameters are:
|
||||
|
||||
* Host: your Raspberry Pi IP address
|
||||
* User: pi
|
||||
* Password: raspberry (if you didn't change the pi default password)
|
||||
* Port: 22
|
||||
|
||||
|
||||
|
||||
Back to terminal:
|
||||
|
||||
|
||||
```
cd /home/pi/download/ #Enter the directory where OSSN installation files were transferred
unzip ossn-v5.2-1577836800.zip #Extract all files from the zip
cd /var/www/html/ #Enter the Apache web directory
sudo rm index.html #Remove the Apache default page - we'll use OSSN's
sudo cp -R /home/pi/download/ossn-v5.2-1577836800/* ./ #Copy installation files to the web directory
sudo chown -R www-data:www-data ./
```
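Note that the version number in the filename changes between releases. Rather than hard-coding it, you can derive the extracted folder name from whatever zip you downloaded; a small sketch:

```shell
# Derive the source folder name from the downloaded zip's filename
zipfile=ossn-v5.2-1577836800.zip   # whichever version you downloaded
srcdir=${zipfile%.zip}             # strip the .zip suffix
echo "$srcdir"                     # ossn-v5.2-1577836800

# Then the copy step becomes version-independent:
# sudo cp -R "/home/pi/download/$srcdir/"* /var/www/html/
```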
|
||||
|
||||
#### 3\. Create a data folder:

OSSN requires a folder to store data. For security reasons, OSSN suggests creating this folder outside of the published document root. So, we'll create this folder under /opt and grant the web server permissions:
|
||||
|
||||
|
||||
```
sudo mkdir /opt/ossn_data
sudo chown -R www-data:www-data /opt/ossn_data/
```
|
||||
|
||||
Browse http://<<YourRpiIPAddress>> to start the installation wizard:
|
||||
|
||||
![][11]
|
||||
|
||||
All checks should be fine. Click the Next button at the end of the page.
|
||||
|
||||
![][12]
|
||||
|
||||
Read the license validation and click the Next button at the end of the page to accept.
|
||||
|
||||
![][13]
|
||||
|
||||
Enter the database user, password, and the DB name you chose. Remember also to enter the OSSN data folder. Press Install.
|
||||
|
||||
![][14]
|
||||
|
||||
Enter your admin account information and press the Create button.
|
||||
|
||||
![][15]
|
||||
|
||||
Everything should be fine now. Press Finish to access the administration dashboard.
|
||||
|
||||
![][16]
|
||||
|
||||
So, the administration panel can be reached at "http://<<YourRpiIPAddress>>/administrator", while the user link is "http://<<YourRpiIPAddress>>".
|
||||
|
||||
![][17]
|
||||
|
||||
_This article was originally published at [peppe8o.com][18]. Reposted with permission._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/3/raspberry-pi-open-source-social
|
||||
|
||||
作者:[Giuseppe Cassibba][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/peppe8o
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/team_global_people_gis_location.png?itok=Rl2IKo12 (Team of people around the world)
|
||||
[2]: https://peppe8o.com/2019/04/best-raspberry-pi-projects-with-open-source-software/
|
||||
[3]: https://www.opensource-socialnetwork.org/
|
||||
[4]: https://filezilla-project.org/
|
||||
[5]: https://peppe8o.com/2019/07/install-raspbian-buster-lite-in-your-raspberry-pi/
|
||||
[6]: https://opensource.com/sites/default/files/uploads/ossn_1_0.jpg
|
||||
[7]: https://opensource.com/sites/default/files/uploads/ossn_2.jpg
|
||||
[8]: https://opensource.com/sites/default/files/uploads/ossn_3.jpg
|
||||
[9]: https://opensource.com/sites/default/files/uploads/ossn_4.jpg
|
||||
[10]: https://www.opensource-socialnetwork.org/download
|
||||
[11]: https://opensource.com/sites/default/files/uploads/ossn_5.jpg
|
||||
[12]: https://opensource.com/sites/default/files/uploads/ossn_6.jpg
|
||||
[13]: https://opensource.com/sites/default/files/uploads/ossn_7.jpg
|
||||
[14]: https://opensource.com/sites/default/files/uploads/ossn_8.jpg
|
||||
[15]: https://opensource.com/sites/default/files/uploads/ossn_9.jpg
|
||||
[16]: https://opensource.com/sites/default/files/uploads/ossn_10.jpg
|
||||
[17]: https://opensource.com/sites/default/files/uploads/ossn_11.jpg
|
||||
[18]: https://peppe8o.com/private-social-network-with-raspberry-pi-and-opensource-social-network/
|
@ -0,0 +1,138 @@
[#]: collector: (lujun9972)
[#]: translator: (tinyeyeser )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Control the firewall at the command line)
[#]: via: (https://fedoramagazine.org/control-the-firewall-at-the-command-line/)
[#]: author: (Paul W. Frields https://fedoramagazine.org/author/pfrields/)
Control the firewall at the command line
======
![][1]
A network _firewall_ is more or less what it sounds like: a protective barrier that prevents unwanted network transmissions. Firewalls are most frequently used to prevent outsiders from contacting or using network services on a system. For instance, if you’re running a laptop at school or in a coffee shop, you probably don’t want strangers poking around on it.

Every Fedora system has a firewall built in. It’s part of the network functions in the Linux kernel. This article shows you how to change its settings using _firewall-cmd_.

### Network basics

This article can’t teach you [everything][2] about computer networks. But a few basics suffice to get you started.

Any computer on a network has an _IP address_. Think of this just like a mailing address that allows correct routing of data. Each computer also has a set of _ports_, numbered 0-65535. These are not physical ports; instead, you can think of them as a set of connection points at the address.

In many cases, the port is a [standard number][3] or range depending on the application expected to answer. For instance, a web server typically reserves port 80 for non-secure HTTP communications, and/or 443 for secure HTTPS. The port numbers under 1024 are reserved for system and well-known purposes, ports 1024-49151 are registered, and ports 49152 and above are usually ephemeral (used only for a short time).

Each of the two most common protocols for Internet data transfer, [TCP][4] and [UDP][5], has this set of ports. TCP is used when it’s important that all data be received and, if it arrives out of order, reassembled in the right order. UDP is used for more time-sensitive services that can withstand losing some data.

An application running on the system, such as a web server, reserves one or more ports (as seen above, 80 and 443 for example). Then during network communication, a host establishes a connection between a source address and port, and the destination address and port.

A network firewall can block or permit transmissions of network data based on rules like address, port, or other criteria. The _firewall-cmd_ utility lets you interact with the rule set to view or change how the firewall works.

### Firewall zones

To verify the firewall is running, use this command with [sudo][6]. (In fairness, you can run _firewall-cmd_ without the _sudo_ command in environments where [PolicyKit][7] is running.)
```
$ sudo firewall-cmd --state
running
```
The firewalld service supports any number of _zones_. Each zone can have its own settings and rules for protection. In addition, each network interface can be placed in any zone individually. The default zone for an external-facing interface (like the wifi or wired network card) on a Fedora Workstation is the _FedoraWorkstation_ zone.

To see what zones are active, use the _\--get-active-zones_ flag. On this system, there are two network interfaces, a wireless card _wlp2s0_ and a virtualization (libvirt) bridge interface _virbr0_:
```
$ sudo firewall-cmd --get-active-zones
FedoraWorkstation
  interfaces: wlp2s0
libvirt
  interfaces: virbr0
```
To see the default zone, or all the defined zones:
```
$ sudo firewall-cmd --get-default-zone
FedoraWorkstation
$ sudo firewall-cmd --get-zones
FedoraServer FedoraWorkstation block dmz drop external home internal libvirt public trusted work
```
To see the services the firewall is allowing other systems to access in the default zone, use the _\--list-services_ flag. Here is an example from a customized system; you may see something different.
```
$ sudo firewall-cmd --list-services
dhcpv6-client mdns samba-client ssh
```
This system has four services exposed. Each of these has a well-known port number. The firewall recognizes them by name. For instance, the _ssh_ service is associated with port 22.
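Firewalld keeps its own service definitions, but the traditional name-to-port mapping for well-known services can also be queried from the system services database with `getent`, which is a quick way to sanity-check a name like _ssh_ (exact column spacing varies by system):

```shell
# Look up the well-known port for a service name in the services database
getent services ssh

ssh_port=$(getent services ssh | awk '{print $2}')
echo "The ssh service maps to port $ssh_port"
```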
To see other port settings for the firewall in the current zone, use the _\--list-ports_ flag. By the way, you can always declare the zone you want to check:
```
|
||||
$ sudo firewall-cmd --list-ports --zone=FedoraWorkstation
|
||||
1025-65535/udp 1025-65535/tcp
|
||||
```
|
||||
|
||||
This shows that ports 1025 and above (both UDP and TCP) are open by default.

### Changing zones, ports, and services
The above setting is a design decision.* It ensures novice users can use network facing applications they install. If you know what you’re doing and want a more protective default, you can move the interface to the _FedoraServer_ zone, which prohibits any ports not explicitly allowed. _(**Warning:** if you’re using the host via the network, you may break your connection — meaning you’ll have to go to that box physically to make further changes!)_

```
$ sudo firewall-cmd --change-interface=<ifname> --zone=FedoraServer
success
```

* _This article is not the place to discuss that decision, which went through many rounds of review and debate in the Fedora community. You are welcome to change settings as needed._

If you want to open a well-known port that belongs to a service, you can add that service to the default zone (or use _--zone_ to adjust a different zone). You can add more than one at once. This example opens up the well-known ports for your web server for both HTTP and HTTPS traffic, on ports 80 and 443:

```
$ sudo firewall-cmd --add-service=http --add-service=https
success
```

Not all services are defined, but many are. To see the whole list, use the _--get-services_ flag.

If you want to add specific ports, you can do that by number and protocol as well. (You can also combine _--add-service_ and _--add-port_ flags, as many as necessary.) This example opens up the UDP port for a network boot service:

```
$ sudo firewall-cmd --add-port=67/udp
success
```

**Important:** If you want your changes to be effective after you reboot your system or restart the firewalld service, you **must** add the _--permanent_ flag to your commands. The examples here only change the firewall until one of those events next happens.

These are just some of the many functions of the _firewall-cmd_ utility and the firewalld service. There is much more information on firewalld at the project’s [home page][8] that’s worth reading and trying out.

* * *

_Photo by [Jakob Braun][9] on [Unsplash][10]._

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/control-the-firewall-at-the-command-line/

作者:[Paul W. Frields][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/pfrields/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/03/firewall-cmd-816x345.jpg
[2]: https://en.wikipedia.org/wiki/Portal:Internet
[3]: https://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers
[4]: https://en.wikipedia.org/wiki/Transmission_Control_Protocol
[5]: https://en.wikipedia.org/wiki/User_Datagram_Protocol
[6]: https://fedoramagazine.org/howto-use-sudo/
[7]: https://en.wikipedia.org/wiki/Polkit
[8]: https://firewalld.org/
[9]: https://unsplash.com/@jakobustrop?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[10]: https://unsplash.com/s/photos/brick-wall?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
@ -0,0 +1,123 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Check Password Expiration Date for All Users on Linux)
[#]: via: (https://www.2daygeek.com/linux-check-user-password-expiration-date/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)

How to Check Password Expiration Date for All Users on Linux
======

If you enable a **[password policy on Linux][1]**, the password must be changed before it expires, and you will be notified of this when you log in to the system.

If you rarely use your account, it may get locked due to password expiration.

This often happens to service accounts with a **[password-less login][2]**, because nobody monitors them.

That, in turn, will stop the **[cronjobs/crontab][3]** configured on the server.

If so, how do you mitigate this situation?

You can write a **[shell script][4]** to get a notification about it, for which we wrote an article some time ago.

* **[Bash Script to Send eMail With a List of User Accounts Expiring in “X” Days][5]**

That script gives you the number of days; this article aims to give you the actual date on your terminal.

This can be achieved with the chage command.

### What is the chage Command?

chage stands for “change age”. It changes user password expiration information.

The chage command changes the number of days between password changes and the date of the last password change.

This information is used by the system to determine when a user should change his/her password.

It also allows the user to perform other functions, such as setting the account expiration date, setting the password inactive after expiration, displaying account aging information, and setting the minimum, maximum, and warning days before a password change.

### 1) How to Check the Password Expiration Date for a Specific User on Linux

If you want to check the password expiration date for a specific user on Linux, use the following command.

```
# chage -l daygeek

Last password change                               : Feb 13, 2020
Password expires                                   : May 13, 2020
Password inactive                                  : never
Account expires                                    : never
Minimum number of days between password change     : 7
Maximum number of days between password change     : 90
Number of days of warning before password expires  : 7
```
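The “Password expires” date is just the last password change plus the maximum password age (90 days in the output above). As a sanity check of that arithmetic (my addition, not part of the chage output), GNU date can reproduce it:

```shell
# Last change was Feb 13, 2020; maximum age is 90 days,
# so the password expires 90 days later.
date -d "2020-02-13 + 90 days" "+%b %d, %Y"
# -> May 13, 2020
```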

### 2) How To Check Password Expiration Date For All Users On Linux

The chage command works well for a single user, but it does not accept a list of users directly. To check everyone, you need a small shell script. The one-liner below lists all users added to your system, including system users.

```
# for user in $(cat /etc/passwd | cut -d: -f1); do echo $user; chage -l $user | grep "Password expires"; done | paste -d " " - - | sed 's/Password expires//g'
```

You will get an output like the one below, but the usernames may differ.

```
root : never
bin : never
daemon : never
adm : never
lp : never
sync : never
shutdown : never
u1 : Nov 12, 2018
u2 : Jun 17, 2019
u3 : Jun 17, 2019
u4 : Jun 17, 2019
u5 : Jun 17, 2019
```
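The one-liner above leans on two tricks: `cut -d: -f1` pulls the first colon-separated field (the username) out of `/etc/passwd`, and `paste -d " " - -` joins every pair of output lines onto one line. You can see both at work on a throwaway sample (the file and usernames here are invented for illustration):

```shell
# Build a tiny /etc/passwd-style sample file.
printf 'root:x:0:0::/root:/bin/bash\nu1:x:1001:1001::/home/u1:/bin/bash\n' > sample.passwd

# Extract the first field (the username) from each line.
cut -d: -f1 sample.passwd
# root
# u1

# Fold pairs of lines into one, as the one-liner does with
# its "username" + "Password expires : DATE" line pairs.
printf 'u1\nNov 12, 2018\n' | paste -d " " - -
# u1 Nov 12, 2018
```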

### 3) How To Check Password Expiration Date For All Users Except System Users On Linux

The shell script below will display only the users that have an expiry date.

```
# for user in $(cat /etc/passwd | cut -d: -f1); do echo $user; chage -l $user | grep "Password expires"; done | paste -d " " - - | sed 's/Password expires//g' | grep -v "never"
```

You will get an output like the one below, but the usernames may differ.

```
u1 : Nov 12, 2018
u2 : Jun 17, 2019
u3 : Jun 17, 2019
u4 : Jun 17, 2019
u5 : Jun 17, 2019
```

--------------------------------------------------------------------------------

via: https://www.2daygeek.com/linux-check-user-password-expiration-date/

作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/how-to-set-password-complexity-policy-on-linux/
[2]: https://www.2daygeek.com/configure-setup-passwordless-ssh-key-based-authentication-linux/
[3]: https://www.2daygeek.com/linux-crontab-cron-job-to-schedule-jobs-task/
[4]: https://www.2daygeek.com/category/shell-script/
[5]: https://www.2daygeek.com/bash-script-to-check-user-account-password-expiry-linux/
@ -0,0 +1,772 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Run a command on binary files with this script)
[#]: via: (https://opensource.com/article/20/3/run-binaries-script)
[#]: author: (Nick Clifton https://opensource.com/users/nickclifton)

Run a command on binary files with this script
======
Try this simple script to easily run a command on binary files
regardless of their packaging.
![Binary code on a computer screen][1]

Examining files from the command line is generally an easy thing to do. You just run the command you want, followed by a list of files to be examined. Dealing with binary files, however, is more complicated. These files are often packaged up into archives, tarballs, or other packaging formats. The run-on-binaries script provides a convenient way to run a command on a collection of files, regardless of how they are packaged.

The invocation of the script is quite simple:

```
run-on-binaries <file(s)>
```

So, for example:

```
run-on-binaries /usr/bin/ls foo.rpm
```

will list all of the files inside the **foo.rpm** file, while:

```
run-on-binaries /usr/bin/readelf -a libc.a
```

will run the **readelf** program, with the **-a** command-line option, on all of the object files inside the **libc.a** library.

If necessary, the script can be passed a file containing a list of other files to be processed, rather than specifying them on the command line, like this:

```
run-on-binaries --files-from=foo.lst /usr/bin/ps2ascii
```

This will run the **ps2ascii** script on all of the files listed in **foo.lst**. (The files just need to be separated by white space. There can be multiple files on a single line if desired.)
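For a plain command that needs no archive unpacking, the same list-driven behaviour can be approximated with `xargs`, which also treats any whitespace in the list file as a separator. This is a sketch of the list format only; the file names below are invented for the demo:

```shell
# A list file: entries may share a line or sit on their own line.
printf 'a.txt b.txt\nc.txt\n' > foo.lst
printf 'one\n' > a.txt; printf 'two\ntwo\n' > b.txt; printf 'x\n' > c.txt

# Run "wc -l" once per listed file, much as run-on-binaries would
# run its target program once per file.
xargs -n1 wc -l < foo.lst
# 1 a.txt
# 2 b.txt
# 1 c.txt
```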

Also, a skip list can be provided to stop the script from processing specified files:

```
run-on-binaries --skip-list=skip.lst /usr/bin/wc *
```

This will run the **wc** program on all of the files in the current directory, except for those specified in **skip.lst**.

The script does not recurse into directories, but this can be handled by combining it with the **find** command, like this:

```
find . -type f -exec run-on-binaries {} \;
```

or

```
find . -type d -exec run-on-binaries {}/* \;
```

The only difference between these two invocations is that the second one only runs the target program once per directory, but gives it a long command line of all of the files in the directory.
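The per-file versus batched behaviour of `find -exec` is easy to observe directly: `\;` runs the command once per match, while `+` appends as many matches as fit onto a single command line. A quick demonstration on throwaway files (made up for the example):

```shell
mkdir -p demo && touch demo/a demo/b demo/c

# One invocation per file: three separate "echo" runs, three lines.
find demo -type f -exec echo run {} \; | wc -l
# -> 3

# One batched invocation: all files on a single command line, one line.
find demo -type f -exec echo run {} + | wc -l
# -> 1
```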
Though convenient, the script is lacking in several areas. Right now, it does not examine the PATH environment variable to find the command that it is asked to run, so a full path must be provided. Also, the script ought to be able to handle recursion on its own, without needing help from the find command.
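The PATH limitation is easy to work around in the shell itself: `command -v` resolves a bare name to a full path, which you could then hand to the script (shown here resolving `ls`; passing the result on to run-on-binaries is an assumption about your setup, not something this demo runs):

```shell
# Resolve a bare command name against $PATH.
full=$(command -v ls)
echo "$full"
# e.g. /usr/bin/ls

# The resolved absolute path can then be passed on:
#   run-on-binaries "$full" foo.rpm
```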

The run-on-binaries script is part of the annobin package, which is available on Fedora. The sources for annobin can also be obtained from the git repository at <git://sourceware.org/git/annobin.git>.

### The script

```
#!/bin/bash

# Script to run another script/program on the executables inside a given file.
#
# Created by Nick Clifton.  <[nickc@redhat.com][2]>
# Copyright (c) 2018 Red Hat.
#
# This is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published
# by the Free Software Foundation; either version 3, or (at your
# option) any later version.

# It is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# Usage:
#   run-on-binaries-in [options] program [options-for-the-program] file(s)
#
# This script does not handle directories.  This is deliberate.
# It is intended that if recursion is needed then it will be
# invoked from find, like this:
#
#   find . -name "*.rpm" -exec run-on-binaries-in <script-to-run> {} \;

version=1.0

help ()
{
    # The following exec goop is so that we don't have to manually
    # redirect every message to stderr in this function.
    exec 4>&1    # save stdout fd to fd #4
    exec 1>&2    # redirect stdout to stderr

    cat <<__EOM__

This is a shell script to run another script/program on one or more binary
files.  If the file(s) specified are archives of some kind (including rpms)
then the script/program is run on the binary executables inside the archive.

Usage: $prog {options} program {options-for-the-program} files(s)

{options} are:
  -h        --help              Display this information and then exit.
  -v        --version           Report the version number of this script.
  -V        --verbose           Report on progress.
  -q        --quiet             Do not include the script name in the output.
  -i        --ignore            Silently ignore files that are not executables or archives.
  -p=<TEXT> --prefix=<TEXT>     Prefix normal output with this string.
  -t=<DIR>  --tmpdir=<DIR>      Temporary directory to use when opening archives.
  -f=<FILE> --files-from=<FILE> Process files listed in <FILE>.
  -s=<FILE> --skip-list=<FILE>  Skip any file listed in <FILE>.
  --                            Stop accumulating options.

Examples:

  $prog hardened foo.rpm
                        Runs the hardened script on the executable
                        files inside foo.rpm.

  $prog check-abi -v fred.tar.xz
                        Runs the check-abi script on the decompressed
                        contents of the fred.tar.xz archive, passing the
                        -v option to check-abi as it does so.

  $prog -V -f=list.txt readelf -a
                        Runs the readelf program, with the -a option on
                        every file listed in the list.txt.  Describes
                        what is being done as it works.

  $prog -v -- -fred -a jim -b bert -- -c harry
                        Runs the script "-fred" on the files jim, bert,
                        "-c" and harry.  Passes the options "-a" and
                        "-b" to the script (even when run on jim).
                        Reports the version of this script as well.

__EOM__
    exec 1>&4   # Copy stdout fd back from temporary save fd, #4
}

main ()
{
    init

    parse_args ${1+"$@"}

    if [ $failed -eq 0 ];
    then
        run_script_on_files
    fi

    if [ $failed -ne 0 ];
    then
        exit 1
    else
        exit 0
    fi
}

report ()
{
    if [ $quiet -eq 0 ];
    then
        echo -n $prog": "
    fi

    echo ${1+"$@"}
}

ice ()
{
    report "Internal error: " ${1+"$@"}
    exit 1
}

fail ()
{
    report "Failure:" ${1+"$@"}
    failed=1
}

verbose ()
{
    if [ $verbose -ne 0 ]
    then
        report ${1+"$@"}
    fi
}

# Initialise global variables.
init ()
{
    files[0]="";
    # num_files is the number of files to be scanned.
    # files[0] is the script to run on the files.
    num_files=0;

    script=""
    script_opts="";

    prog_opts="-i"

    tmpdir=/dev/shm
    prefix=""
    files_from=""
    skip_list=""

    failed=0
    verbose=0
    ignore=0
    quiet=0
}

# Parse our command line
|
||||
parse_args ()
|
||||
{
|
||||
abs_prog=$0;
|
||||
prog=`basename $abs_prog`;
|
||||
|
||||
# Locate any additional command line switches
|
||||
# Likewise accumulate non-switches to the files list.
|
||||
while [ $# -gt 0 ]
|
||||
do
|
||||
optname="`echo $1 | sed 's,=.*,,'`"
|
||||
optarg="`echo $1 | sed 's,^[^=]*=,,'`"
|
||||
case "$optname" in
|
||||
-v | --version)
|
||||
report "version: $version"
|
||||
;;
|
||||
-h | --help)
|
||||
help
|
||||
exit 0
|
||||
;;
|
||||
-q | --quiet)
|
||||
quiet=1;
|
||||
prog_opts="$prog_opts -q"
|
||||
;;
|
||||
-V | --verbose)
|
||||
if [ $verbose -eq 1 ];
|
||||
then
|
||||
# This has the effect of cancelling out the prog_opts="-i"
|
||||
# in the init function, so that recursive invocations of this
|
||||
# script will complain about unrecognised file types.
|
||||
if [ $quiet -eq 0 ];
|
||||
then
|
||||
prog_opts="-V -V"
|
||||
else
|
||||
prog_opts="-V -V -q"
|
||||
fi
|
||||
else
|
||||
verbose=1;
|
||||
prog_opts="$prog_opts -V"
|
||||
fi
|
||||
;;
|
||||
-i | --ignore)
|
||||
ignore=1
|
||||
;;
|
||||
-t | --tmpdir)
|
||||
if test "x$optarg" = "x$optname" ;
|
||||
then
|
||||
shift
|
||||
if [ $# -eq 0 ]
|
||||
then
|
||||
fail "$optname needs a directory name"
|
||||
else
|
||||
tmpdir=$1
|
||||
fi
|
||||
else
|
||||
tmpdir="$optarg"
|
||||
fi
|
||||
;;
|
||||
-p | --prefix)
|
||||
if test "x$optarg" = "x$optname" ;
|
||||
then
|
||||
shift
|
||||
if [ $# -eq 0 ]
|
||||
then
|
||||
fail "$optname needs a string argument"
|
||||
else
|
||||
prefix=$1
|
||||
fi
|
||||
else
|
||||
prefix="$optarg"
|
||||
fi
|
||||
;;
|
||||
-f | --files_from)
|
||||
if test "x$optarg" = "x$optname" ;
|
||||
then
|
||||
shift
|
||||
if [ $# -eq 0 ]
|
||||
then
|
||||
fail "$optname needs a file name"
|
||||
else
|
||||
files_from=$1
|
||||
fi
|
||||
else
|
||||
files_from="$optarg"
|
||||
fi
|
||||
;;
|
||||
|
||||
-s | --skip-list)
|
||||
if test "x$optarg" = "x$optname" ;
|
||||
then
|
||||
shift
|
||||
if [ $# -eq 0 ]
|
||||
then
|
||||
fail "$optname needs a file name"
|
||||
else
|
||||
skip_list=$1
|
||||
fi
|
||||
else
|
||||
skip_list="$optarg"
|
||||
fi
|
||||
;;
|
||||
|
||||
--)
|
||||
shift
|
||||
break;
|
||||
;;
|
||||
--*)
|
||||
fail "unrecognised option: $1"
|
||||
help
|
||||
;;
|
||||
*)
|
||||
script="$1";
|
||||
if ! [ -a "$script" ]
|
||||
then
|
||||
fail "$script: program/script not found"
|
||||
elif ! [ -x "$script" ]
|
||||
then
|
||||
fail "$script: program/script not executable"
|
||||
fi
|
||||
# After we have seen the first non-option we stop
|
||||
# accumulating options for this script and instead
|
||||
# start accumulating options for the script to be
|
||||
# run.
|
||||
shift
|
||||
break;
|
||||
;;
|
||||
esac
|
||||
shift
|
||||
done
|
||||
|
||||
# Read in the contents of the --file-from list, if specified.
|
||||
if test "x$files_from" != "x" ;
|
||||
then
|
||||
if ! [ -a "$files_from" ]
|
||||
then
|
||||
fail "$files_from: file not found"
|
||||
elif ! [ -r "$files_from" ]
|
||||
then
|
||||
fail "$files_from: file not readable"
|
||||
else
|
||||
eval 'files=($(cat $files_from))'
|
||||
num_files=${#files[*]}
|
||||
fi
|
||||
fi
|
||||
skip_files[foo]=bar
|
||||
|
||||
# Check that the skip list exists, if specified.
|
||||
if test "x$skip_list" != "x" ;
|
||||
then
|
||||
if ! [ -a "$skip_list" ]
|
||||
then
|
||||
fail "$skip_list: file not found"
|
||||
elif ! [ -r "$skip_list" ]
|
||||
then
|
||||
fail "$files_from: file not readable"
|
||||
fi
|
||||
fi
|
||||
|
||||
# Accumulate any remaining arguments separating out the arguments
|
||||
# for the script from the names of the files to scan.
|
||||
while [ $# -gt 0 ]
|
||||
do
|
||||
optname="`echo $1 | sed 's,=.*,,'`"
|
||||
optarg="`echo $1 | sed 's,^[^=]*=,,'`"
|
||||
case "$optname" in
|
||||
--)
|
||||
shift
|
||||
break;
|
||||
;;
|
||||
-*)
|
||||
script_opts="$script_opts $1"
|
||||
;;
|
||||
*)
|
||||
files[$num_files]="$1";
|
||||
let "num_files++"
|
||||
;;
|
||||
esac
|
||||
shift
|
||||
done
|
||||
|
||||
# Accumulate any remaining arguments without processing them.
|
||||
while [ $# -gt 0 ]
|
||||
do
|
||||
files[$num_files]="$1";
|
||||
let "num_files++";
|
||||
shift
|
||||
done
|
||||
|
||||
if [ $num_files -gt 0 ];
|
||||
then
|
||||
# Remember that we are counting from zero not one.
|
||||
let "num_files--"
|
||||
else
|
||||
fail "Must specify a program/script and at least one file to scan."
|
||||
fi
|
||||
}
|
||||
|
||||
run_script_on_files ()
|
||||
{
|
||||
local i
|
||||
|
||||
i=0;
|
||||
while [ $i -le $num_files ]
|
||||
do
|
||||
run_on_file i
|
||||
let "i++"
|
||||
done
|
||||
}
|
||||
|
||||
# syntax: run <command> [<args>]
|
||||
# If being verbose report the command being run, and
|
||||
# the directory in which it is run.
|
||||
run ()
|
||||
{
|
||||
local where
|
||||
|
||||
if test "x$1" = "x" ;
|
||||
then
|
||||
fail "run() called without an argument."
|
||||
fi
|
||||
|
||||
verbose " Running: ${1+$@}"
|
||||
|
||||
${1+$@}
|
||||
}
|
||||
|
||||
decompress ()
|
||||
{
|
||||
local abs_file decompressor decomp_args orig_file base_file
|
||||
|
||||
# Paranoia checks - the user should never encounter these.
|
||||
if test "x$4" = "x" ;
|
||||
then
|
||||
ice "decompress called with too few arguments"
|
||||
fi
|
||||
if test "x$5" != "x" ;
|
||||
then
|
||||
ice "decompress called with too many arguments"
|
||||
fi
|
||||
|
||||
abs_file=$1
|
||||
decompressor=$2
|
||||
decomp_args=$3
|
||||
orig_file=$4
|
||||
|
||||
base_file=`basename $abs_file`
|
||||
|
||||
run cp $abs_file $base_file
|
||||
run $decompressor $decomp_args $base_file
|
||||
if [ $? != 0 ];
|
||||
then
|
||||
fail "$orig_file: Unable to decompress"
|
||||
fi
|
||||
|
||||
rm -f $base_file
|
||||
}
|
||||
|
||||
run_on_file ()
|
||||
{
|
||||
local file
|
||||
|
||||
# Paranoia checks - the user should never encounter these.
|
||||
if test "x$1" = "x" ;
|
||||
then
|
||||
ice "scan_file called without an argument"
|
||||
fi
|
||||
if test "x$2" != "x" ;
|
||||
then
|
||||
ice "scan_file called with too many arguments"
|
||||
fi
|
||||
|
||||
# Use quotes when accessing files in order to preserve
|
||||
# any spaces that might be in the directory name.
|
||||
file="${files[$1]}";
|
||||
|
||||
# Catch names that start with a dash - they might confuse readelf
|
||||
if test "x${file:0:1}" = "x-" ;
|
||||
then
|
||||
file="./$file"
|
||||
fi
|
||||
|
||||
# See if we should skip this file.
|
||||
if test "x$skip_list" != "x" ;
|
||||
then
|
||||
# This regexp looks for $file being the first text on a line, either
|
||||
# on its own, or with additional text separated from it by at least
|
||||
# one space character. So searching for "fred" in the following gives:
|
||||
# fr <\- no match
|
||||
# fred <\- match
|
||||
# fredjim <\- no match
|
||||
# fred bert <\- match
|
||||
regexp="^$file[^[:graph:]]*"
|
||||
grep --silent --regexp="$regexp" $skip_list
|
||||
if [ $? = 0 ];
|
||||
then
|
||||
verbose "$file: skipping"
|
||||
return
|
||||
fi
|
||||
fi
|
||||
|
||||
# Check the file.
|
||||
if ! [ -a "$file" ]
|
||||
then
|
||||
fail "$file: file not found"
|
||||
return
|
||||
elif ! [ -r "$file" ]
|
||||
then
|
||||
if [ $ignore -eq 0 ];
|
||||
then
|
||||
fail "$file: not readable"
|
||||
fi
|
||||
return
|
||||
elif [ -d "$file" ]
|
||||
then
|
||||
if [ $ignore -eq 0 ];
|
||||
then
|
||||
if [ $num_files -gt 1 ];
|
||||
then
|
||||
verbose "$file: skipping - it is a directory"
|
||||
else
|
||||
report "$file: skipping - it is a directory"
|
||||
fi
|
||||
fi
|
||||
return
|
||||
elif ! [ -f "$file" ]
|
||||
then
|
||||
if [ $ignore -eq 0 ];
|
||||
then
|
||||
fail "$file: not an ordinary file"
|
||||
fi
|
||||
return
|
||||
fi
|
||||
|
||||
file_type=`file -b $file`
|
||||
case "$file_type" in
|
||||
*"ELF "*)
|
||||
verbose "$file: ELF format - running script/program"
|
||||
if test "x$prefix" != "x" ;
|
||||
then
|
||||
report "$prefix:"
|
||||
fi
|
||||
run $script $script_opts $file
|
||||
return
|
||||
;;
|
||||
"RPM "*)
|
||||
verbose "$file: RPM format."
|
||||
;;
|
||||
*" cpio "*)
|
||||
verbose "$file: CPIO format."
|
||||
;;
|
||||
*"tar "*)
|
||||
verbose "$file: TAR archive."
|
||||
;;
|
||||
*"Zip archive"*)
|
||||
verbose "$file: ZIP archive."
|
||||
;;
|
||||
*"ar archive"*)
|
||||
verbose "$file: AR archive."
|
||||
;;
|
||||
*"bzip2 compressed data"*)
|
||||
verbose "$file: contains bzip2 compressed data"
|
||||
;;
|
||||
*"gzip compressed data"*)
|
||||
verbose "$file: contains gzip compressed data"
|
||||
;;
|
||||
*"lzip compressed data"*)
|
||||
verbose "$file: contains lzip compressed data"
|
||||
;;
|
||||
*"XZ compressed data"*)
|
||||
verbose "$file: contains xz compressed data"
|
||||
;;
|
||||
*"shell script"* | *"ASCII text"*)
|
||||
if [ $ignore -eq 0 ];
|
||||
then
|
||||
fail "$file: test/scripts cannot be scanned."
|
||||
fi
|
||||
return
|
||||
;;
|
||||
*"symbolic link"*)
|
||||
if [ $ignore -eq 0 ];
|
||||
then
|
||||
# FIXME: We ought to be able to follow symbolic links
|
||||
fail "$file: symbolic links are not followed."
|
||||
fi
|
||||
return
|
||||
;;
|
||||
*)
|
||||
if [ $ignore -eq 0 ];
|
||||
then
|
||||
fail "$file: Unsupported file type: $file_type"
|
||||
fi
|
||||
return
|
||||
;;
|
||||
esac
|
||||
|
||||
# We now know that we will need a temporary directory
|
||||
# so create one, and create paths to the file and scripts.
|
||||
if test "x${file:0:1}" = "x/" ;
|
||||
then
|
||||
abs_file=$file
|
||||
else
|
||||
abs_file="$PWD/$file"
|
||||
fi
|
||||
|
||||
if test "x${abs_prog:0:1}" != "x/" ;
|
||||
then
|
||||
abs_prog="$PWD/$abs_prog"
|
||||
fi
|
||||
|
||||
if test "x${script:0:1}" = "x/" ;
|
||||
then
|
||||
abs_script=$script
|
||||
else
|
||||
abs_script="$PWD/$script"
|
||||
fi
|
||||
|
||||
tmp_root=$tmpdir/delme.run.on.binary
|
||||
run mkdir -p "$tmp_root/$file"
|
||||
|
||||
verbose " Changing to directory: $tmp_root/$file"
|
||||
pushd "$tmp_root/$file" > /dev/null
|
||||
if [ $? != 0 ];
|
||||
then
|
||||
fail "Unable to change to temporary directory: $tmp_root/$file"
|
||||
return
|
||||
fi
|
||||
|
||||
# Run the file type switch again, although this time we do not need to
|
||||
# check for unrecognised types. (But we do, just in case...)
|
||||
# Note since are transforming the file we re-invoke the run-on-binaries
|
||||
# script on the decoded contents. This allows for archives that contain
|
||||
# other archives, and so on. We normally pass the -i option to the
|
||||
# invoked script so that it will not complain about unrecognised files in
|
||||
# the decoded archive, although we do not do this when running in very
|
||||
# verbose mode. We also pass an extended -t option to ensure that any
|
||||
# sub-archives are extracted into a unique directory tree.
|
||||
|
||||
case "$file_type" in
|
||||
"RPM "*)
|
||||
# The output redirect confuses the run function...
|
||||
verbose " Running: rpm2cpio $abs_file > delme.cpio"
|
||||
rpm2cpio $abs_file > delme.cpio
|
||||
if [ $? != 0 ];
|
||||
then
|
||||
fail "$file: Unable to extract from rpm archive"
|
||||
else
|
||||
# Save time - run cpio now.
|
||||
run cpio --quiet --extract --make-directories --file delme.cpio
|
||||
if [ $? != 0 ];
|
||||
then
|
||||
fail "$file: Unable to extract files from cpio archive"
|
||||
fi
|
||||
run rm -f delme.cpio
|
||||
fi
|
||||
;;
|
||||
|
||||
*" cpio "*)
|
||||
run cpio --quiet --extract --make-directories --file=$abs_file
|
||||
if [ $? != 0 ];
|
||||
then
|
||||
fail "$file: Unable to extract files from cpio archive"
|
||||
fi
|
||||
;;
|
||||
|
||||
*"tar "*)
|
||||
run tar --extract --file=$abs_file
|
||||
if [ $? != 0 ];
|
||||
then
|
||||
fail "$file: Unable to extract files from tarball"
|
||||
fi
|
||||
;;
|
||||
|
||||
*"ar archive"*)
|
||||
run ar x $abs_file
|
||||
if [ $? != 0 ];
|
||||
then
|
||||
fail "$file: Unable to extract files from ar archive"
|
||||
fi
|
||||
;;
|
||||
|
||||
*"Zip archive"*)
|
||||
decompress $abs_file unzip "-q" $file
|
||||
;;
|
||||
*"bzip2 compressed data"*)
|
||||
decompress $abs_file bzip2 "--quiet --decompress" $file
|
||||
;;
|
||||
*"gzip compressed data"*)
|
||||
decompress $abs_file gzip "--quiet --decompress" $file
|
||||
;;
|
||||
*"lzip compressed data"*)
|
||||
decompress $abs_file lzip "--quiet --decompress" $file
|
||||
;;
|
||||
*"XZ compressed data"*)
|
||||
decompress $abs_file xz "--quiet --decompress" $file
|
||||
;;
|
||||
*)
|
||||
ice "unhandled file type: $file_type"
|
||||
;;
|
||||
esac
|
||||
|
||||
if [ $failed -eq 0 ];
|
||||
then
|
||||
# Now scan the file(s) created in the previous step.
|
||||
run find . -type f -execdir $abs_prog $prog_opts -t=$tmp_root/$file -p=$file $abs_script $script_opts {} +
|
||||
fi
|
||||
|
||||
verbose " Deleting temporary directory: $tmp_root"
|
||||
rm -fr $tmp_root
|
||||
|
||||
verbose " Return to previous directory"
|
||||
popd > /dev/null
|
||||
}
|
||||
|
||||
# Invoke main
|
||||
main ${1+"$@"}
|
||||
```
--------------------------------------------------------------------------------

via: https://opensource.com/article/20/3/run-binaries-script

作者:[Nick Clifton][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/nickclifton
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/binary_code_computer_screen.png?itok=7IzHK1nn (Binary code on a computer screen)
[2]: mailto:nickc@redhat.com
@ -0,0 +1,120 @@
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Meet DebianDog – Puppy sized Debian Linux)
[#]: via: (https://itsfoss.com/debiandog/)
[#]: author: (John Paul https://itsfoss.com/author/john/)

Meet DebianDog – Puppy sized Debian Linux
|
||||
======
|
||||
|
||||
Recently I stumbled upon an intriguing Linux project. This project aims to create small live CDs for Debian and Debian-based systems, similar to the [Puppy Linux project][1]. Let’s take a look at DebianDog.
### What is DebianDog?

As it says on the tin, [DebianDog][2] “is a small Debian Live CD shaped to look like Puppy and act like Puppy. Debian structure and Debian behaviour are untouched and Debian documentation is 100% valid for DebianDog. You have access to all Debian repositories using apt-get or synaptic.”

![DebianDog Jessie][3]

For those of you who are not familiar with [Puppy Linux][1], the project is “a collection of multiple Linux distributions, built on the same shared principles”. Those principles are to be fast, small (300 MB or less), and easy to use. There are versions of Puppy Linux built to support Ubuntu, Slackware, and Raspbian packages.

The major difference between DebianDog and Puppy Linux is that Puppy Linux has its own package manager (the [Puppy Package Manager][4]). As stated above, DebianDog uses the Debian package manager and packages. Even the DebianDog website tries to make that clear: “It is not Puppy Linux and it has nothing to do with Puppy based on Debian.”
### Why should anyone use DebianDog?

The main reason to install DebianDog (or any of its derivatives) would be to restore an older system to operability. Every entry on DebianDog has a 32-bit option. They also have lighter desktop environments/window managers, such as [Openbox][5] or the [Trinity Desktop][6] environment. Most of those also have an alternative to systemd. They also come with lighter applications installed, such as [PCManFM][7].

### What versions of DebianDog are available?

Though DebianDog was the first in the series, the project is called ‘Dog Linux’ and provides various ‘Dog variants’ on popular distributions based on Debian and Ubuntu.

#### DebianDog Jessie

The first (and original) version of DebianDog is DebianDog Jessie. There are two [32-bit versions][8] of it. One uses [Joe’s Window Manager (JWM)][9] as default and the other uses XFCE. Both systemd and sysvinit are available. There is also a [64-bit version][10]. DebianDog Jessie is based on Debian 8.0 (codename Jessie). Support for Debian 8.0 ends on June 30th, 2020, so install with caution.

![TrinityDog][11]
#### StretchDog

[StretchDog][12] is based on Debian 9.0 (codename Stretch). It is available in 32 and 64-bit versions. Openbox is the default window manager, but you can also switch to JWM. Support for Debian 9.0 ends on June 30th, 2022.

#### BusterDog

[BusterDog][13] is interesting. It is based on [Debian 10][14] (codename Buster). It does not use systemd; instead, it uses [elogind][15], just like [AntiX][16]. Support for Debian 10.0 ends in June 2024.

#### MintPup

[MintPup][17] is based on [Linux Mint][18] 17.1. This LiveCD is 32-bit only. You can also access all of the “Ubuntu/Mint repositories using apt-get or synaptic”. Considering that Mint 17 has reached end of life, this version should be avoided.
#### XenialDog

There are both [32-bit][19] and [64-bit versions][20] of this spin based on Ubuntu 16.04 LTS. Both versions come with Openbox as default with JWM as an option. Support for Ubuntu 16.04 LTS ends in April of 2021, so install with caution.

#### TrinityDog

There are two versions of the [TrinityDog][21] spin. One is based on Debian 8 and the other is based on Debian 9. Both are 32-bit and both use the [Trinity Desktop Environment][6], thus the name.

![BionicDog][22]

#### BionicDog

As you can probably guess from the name, [BionicDog][23] is based on [Ubuntu 18.04 LTS][24]. The main version of this spin has both 32 and 64-bit options with Openbox as the default window manager. There is also a version that uses the [Cinnamon desktop][25] and is only 64-bit.
### Final Thoughts

I like any [Linux project that wants to make older systems usable][26]. However, most of the operating systems available through DebianDog are no longer supported or are nearing the end of their life span. This makes it less than useful in the long run.

**I wouldn’t really advise using it on your main computer.** Try it from a live USB or on a spare system. Also, [you can create][27] your own LiveCD spin if you want to take advantage of a newer base system.

Somehow I keep on stumbling across obscure Linux distributions like [FatDog64][28], [4M Linux][29] and [Viperr Linux][30]. Even though I may not always recommend using them, it’s still good to know that such projects exist.

What are your thoughts on DebianDog? What is your favorite Puppy-style OS? Please let us know in the comments below.

If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][31].
--------------------------------------------------------------------------------

via: https://itsfoss.com/debiandog/

作者:[John Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: http://puppylinux.com/
[2]: https://debiandog.github.io/doglinux/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/03/DebianDog-Jessie.jpg?fit=800%2C600&ssl=1
[4]: http://wikka.puppylinux.com/PPM?redirect=no
[5]: http://openbox.org/wiki/Main_Page
[6]: https://www.trinitydesktop.org/
[7]: https://wiki.lxde.org/en/PCManFM
[8]: https://debiandog.github.io/doglinux/zz01debiandogjessie.html
[9]: https://en.wikipedia.org/wiki/JWM
[10]: https://debiandog.github.io/doglinux/zz02debiandog64.html
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/03/TrinityDog.jpg?ssl=1
[12]: https://debiandog.github.io/doglinux/zz02stretchdog.html
[13]: https://debiandog.github.io/doglinux/zz03busterdog.html
[14]: https://itsfoss.com/debian-10-buster/
[15]: https://github.com/elogind/elogind
[16]: https://antixlinux.com/
[17]: https://debiandog.github.io/doglinux/zz04mintpup.html
[18]: https://linuxmint.com/
[19]: https://debiandog.github.io/doglinux/zz05xenialdog.html
[20]: https://debiandog.github.io/doglinux/zz05zxenialdog.html
[21]: https://debiandog.github.io/doglinux/zz06-trinitydog.html
[22]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/03/BionicDog.jpg?ssl=1
[23]: https://debiandog.github.io/doglinux/zz06-zbionicdog.html
[24]: https://itsfoss.com/ubuntu-18-04-released/
[25]: https://en.wikipedia.org/wiki/Cinnamon_(desktop_environment)
[26]: https://itsfoss.com/lightweight-linux-beginners/
[27]: https://github.com/DebianDog/MakeLive
[28]: https://itsfoss.com/fatdog64-linux-review/
[29]: https://itsfoss.com/4mlinux-review/
[30]: https://itsfoss.com/viperr-linux-review/
[31]: https://reddit.com/r/linuxusersgroup
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 Python scripts for automating basic community management tasks)
[#]: via: (https://opensource.com/article/20/3/automating-community-management-python)
[#]: author: (Rich Bowen https://opensource.com/users/rbowen)

5 Python scripts for automating basic community management tasks
======
If you have to do something three times, try to automate it.
![shapes of people symbols][1]
I've [written before about what a community manager does][2], and if you ask ten community managers, you'll get 12 different answers. Mostly, though, you do what the community needs you to do at any given moment, and a lot of that can be repetitive.

Back when I was a sysadmin, I had a rule: if I had to do something three times, I'd try to automate it. And, of course, these days, with awesome tools like Ansible, there's a whole science to that.

Some of what I do on a daily or weekly basis involves looking something up in a few places and then generating a digest or report of that information to publish elsewhere. A task like that is a perfect candidate for automation. None of this is [rocket surgery][3], but when I've shared some of these scripts with colleagues, invariably, at least one of them turns out to be useful.

[On GitHub][4], I have several scripts that I use every week. None of them are complicated, but they save me a few minutes every time. Some of them are in Perl because I'm almost 50. Some of them are in Python because a few years ago, I decided I needed to learn Python. Here's an overview:
### **[tshirts.py][5]**

This simple script takes the number of T-shirts you're going to order for an event and tells you what the size distribution should be. It spreads them on a normal curve (also called a bell curve), and, in my experience, this coincides pretty well with what you'll actually need for a normal conference audience. You might want to adjust the sizes slightly larger if you're using it in the USA, slightly smaller if you're using it in Europe. YMMV.

Usage:

```
[rbowen@sasha:community-tools/scripts]$ ./tshirts.py
How many shirts? 300
For a total of 300 shirts, order:

30.0 small
72.0 medium
96.0 large
72.0 xl
30.0 2xl
```
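The script itself lives in the linked repo; the core idea fits in a few lines. Here is a minimal sketch of it, where the size weights are an assumption chosen to reproduce the sample output above, not necessarily what the real script uses:

```python
# NOT the real tshirts.py (that lives in the linked repo) -- just a sketch of
# the idea: spread an order across sizes on a bell curve. The weights below
# are an assumption picked to match the sample output.
SIZE_WEIGHTS = {"small": 0.10, "medium": 0.24, "large": 0.32, "xl": 0.24, "2xl": 0.10}

def shirt_order(total):
    """Return a size -> count mapping for `total` shirts."""
    return {size: round(total * weight, 1) for size, weight in SIZE_WEIGHTS.items()}

for size, count in shirt_order(300).items():
    print(count, size)
```

The weights sum to 1.0 and peak at the middle size, which is what "spread on a bell curve" amounts to in practice.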
### **[followers.py][6]**

This script provides me with the follower count for Twitter handles I care about.

This script is only 14 lines long and isn't exciting, but it saves me perhaps ten minutes of loading web pages and looking for a number.

You'll need to edit the feeds array to add the accounts you care about:

```
feeds = [
'centosproject',
'centos'
];
```

NB: It probably won't work if you're running it outside of English-speaking countries, because it's just a simple screen-scraping script that reads HTML and looks for particular information buried within it. So when the output is in a different language, the regular expressions won't match.

Usage:

```
[rbowen@sasha:community-tools/scripts]$ ./followers.py
centosproject: 11,479 Followers
centos: 18,155 Followers
```
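The screen-scraping approach described above boils down to one regular expression. A sketch of the idea, with a hypothetical HTML snippet and pattern (they do not reflect Twitter's real markup, and the page fetch is omitted):

```python
import re

# Illustrative sketch of what a scraper like followers.py does: fetch each
# profile page (omitted here) and pull the follower count out of the HTML.
# The snippet and pattern are hypothetical, not Twitter's actual markup.
def follower_count(html):
    match = re.search(r"([\d,]+)\s+Followers", html)
    return match.group(1) if match else None

page = "<span>18,155 Followers</span>"
print(follower_count(page))  # 18,155
```

This also shows why the locale caveat matters: if the page says "Abonnés" instead of "Followers", the regex returns nothing.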
### **[get_meetups][7]**

This script fits into another category—API scripts. This particular script uses the [meetup.com][8] API to look for meetups on a particular topic in a particular area and time range so that I can report them to my community. Many of the services you rely on provide an API so that your scripts can look up information without having to manually look through web pages. Learning how to use those APIs can be frustrating and time-consuming, but you'll end up with skills that will save you a LOT of time.
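The general shape of such an API script is always the same: build a query URL, fetch it, and keep only the response fields worth reporting. A sketch under stated assumptions — the endpoint, parameters, and JSON fields below are placeholders, not the real Meetup API:

```python
import json
from urllib import parse

# Placeholder endpoint -- NOT the real Meetup API, which changed in 2019.
API = "https://api.example.com/find/upcoming_events"

def build_url(topic, lat, lon, radius_miles):
    """Assemble the query URL for a topic search around a location."""
    query = parse.urlencode({"topic": topic, "lat": lat, "lon": lon, "radius": radius_miles})
    return API + "?" + query

def parse_events(payload):
    """Keep only the fields worth reporting to the community."""
    return [(e["name"], e["local_date"]) for e in json.loads(payload)["events"]]

# The real script would fetch build_url(...) with urllib.request; here we
# parse a canned response instead of hitting the network.
sample = '{"events": [{"name": "CentOS Dojo", "local_date": "2020-04-01"}]}'
print(parse_events(sample))  # [('CentOS Dojo', '2020-04-01')]
```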
_Disclaimer: [meetup.com][8] changed their API in August of 2019, and I have not yet updated this script to the new API, so it doesn't actually work right now. Watch this repo for a fixed version in the coming weeks._

### **[centos-announcements.pl][9]**

This script is considerably more complicated and extremely specific to my use case, but you probably have a similar situation. This script looks at a mailing list archive—in this case, the centos-announce mailing list—and finds messages that are in a particular format, then builds a report of those messages. Reports come in a couple of different formats—one for my monthly newsletter and one for scheduling messages (via Hootsuite) for Twitter.

I use Hootsuite to schedule content for Twitter, and they have a convenient CSV (comma-separated value) format that lets you bulk-schedule a whole week of tweets in one go. Auto-generating that CSV from various data sources (i.e., mailing lists, blogs, other web pages) can save you a lot of time. Do note, however, that this should probably only be used for a first draft, which you then examine and edit yourself so that you don't end up auto-tweeting something you didn't intend to.
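The bulk-scheduling step is just writing rows with the csv module. A minimal sketch, assuming a three-column layout (date, time, message) — check Hootsuite's current CSV template before uploading anything, and review the generated draft by hand:

```python
import csv
import io

# Harvested announcements to schedule. The column layout (date, time, message)
# is an assumption for illustration, not Hootsuite's documented template.
announcements = [
    ("2020-03-23", "09:00", "New packages announced on centos-announce"),
    ("2020-03-24", "09:00", "CentOS SIG quarterly reports are out"),
]

buffer = io.StringIO()
csv.writer(buffer).writerows(announcements)
print(buffer.getvalue())
```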
### **[reporting.pl][10]**

This script is also fairly specific to my particular needs, but the concept itself is universal. I send out a monthly mailing to the [CentOS SIGs][11] (Special Interest Groups) that are scheduled to report in a given month. This script simply tells me which SIGs those are this month and writes the email that needs to go to them.

It does not actually send that email, however, for a couple of reasons. One, I may wish to edit those messages before they go out. Two, while scripts sending email worked great in the old days, these days, they're likely to get spam-filtered.
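The "which SIGs are due this month" part reduces to simple modular arithmetic if, say, each SIG reports quarterly. A hypothetical sketch — the SIG names and cycle-start months are made up for illustration, and the real script's schedule may differ:

```python
# Hypothetical version of the reporting idea: each SIG reports quarterly,
# so a SIG is due whenever the month falls on its three-month cycle.
# These names and starting months are invented for illustration.
SIG_CYCLE_START = {"Virtualization": 1, "Storage": 2, "Cloud": 3}

def sigs_due(month):
    """Return the SIGs scheduled to report in the given month (1-12)."""
    return [sig for sig, start in SIG_CYCLE_START.items() if month % 3 == start % 3]

print(sigs_due(4))  # a SIG starting in January is due in months 1, 4, 7, 10
```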
### In conclusion

There are some other scripts in that repo that are more or less specific to my particular needs, but I hope at least one of them is useful to you, and that the variety of what's there inspires you to automate something of your own. I'd love to see your handy automation script repos, too; link to them in the comments!

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/3/automating-community-management-python

作者:[Rich Bowen][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/rbowen
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/Open%20Pharma.png?itok=GP7zqNZE (shapes of people symbols)
[2]: http://drbacchus.com/what-does-a-community-manager-do/
[3]: https://6dollarshirts.com/rocket-surgery
[4]: https://github.com/rbowen/centos-community-tools/tree/master/scripts
[5]: https://github.com/rbowen/centos-community-tools/blob/master/scripts/tshirts.py
[6]: https://github.com/rbowen/centos-community-tools/blob/master/scripts/followers.py
[7]: https://github.com/rbowen/centos-community-tools/blob/master/scripts/get_meetups
[8]: http://meetup.com
[9]: https://github.com/rbowen/centos-community-tools/blob/master/scripts/centos-announcements.pl
[10]: https://github.com/rbowen/centos-community-tools/blob/master/scripts/sig_reporting/reporting.pl
[11]: https://wiki.centos.org/SpecialInterestGroup
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Don't love diff? Use Meld instead)
[#]: via: (https://opensource.com/article/20/3/meld)
[#]: author: (Ben Nuttall https://opensource.com/users/bennuttall)

Don't love diff? Use Meld instead
======
Meld is a visual diff tool that makes it easier to compare and merge changes in files, directories, Git repos, and more.
![Person drinking a hat drink at the computer][1]

Meld is one of my essential tools for working with code and data files. It's a graphical diff tool, so if you've ever used the **diff** command and struggled to make sense of the output, [Meld][2] is here to help.
Here is a brilliant description from the project's website:

> "Meld is a visual diff and merge tool targeted at developers. Meld helps you compare files, directories, and version controlled projects. It provides two- and three-way comparison of both files and directories, and has support for many popular version control systems.
>
> "Meld helps you review code changes and understand patches. It might even help you to figure out what is going on in that merge you keep avoiding."

You can install Meld on Debian/Ubuntu systems (including Raspbian) with:

```
$ sudo apt install meld
```

On Fedora or similar, it's:

```
$ sudo dnf install meld
```

Meld is cross-platform—there's a [Windows install][3] using the [Chocolatey][4] package manager. While it's not officially supported on macOS, there are [builds available for Mac][5], and you can install it with Homebrew:

```
$ brew cask install meld
```

See Meld's homepage for [additional options][2].
### Meld vs. the diff command

If you have two similar files (perhaps one is a modified version of the other) and want to see the changes between them, you could run the **diff** command to see their differences in the terminal:

![diff output][6]

This example shows the differences between **conway1.py** and **conway2.py**. It's showing that I:

* Removed the [shebang][7] and second line
* Removed **(object)** from the class declaration
* Added a docstring to the class
* Swapped the order of **alive** and **neighbours == 2** in a method

Here's the same example using the **meld** command. You can run the same comparison from the command line with:

```
$ meld conway1.py conway2.py
```

![Meld output][8]

Much clearer!

You can easily see and merge changes between files by clicking the arrows (they work both ways). You can even edit the files live (Meld doubles up as a simple text editor with live comparisons as you type)—just be sure to save before you close the window.

You can even compare and edit three different files:

![Comparing three files in Meld][9]
### Meld's Git-awareness

Hopefully, you're using a version control system like [Git][10]. If so, your comparison isn't between two different files; it's between the current working file and the one Git knows about. Meld understands this, so if you run **meld conway.py**, where **conway.py** is known to Git, it'll show you any changes made since the last Git commit:

![Comparing Git files in Meld][11]

You can see the changes made in the current version (on the right) against the repository version (on the left). You can see I deleted a method and added a parameter and a loop since the last commit.

If you run **meld .**, you'll see all the changes in the current directory (or the whole repository, if you're in its root):

![Meld . output][12]

You can see a single file is modified, another file is unversioned (meaning it's new to Git, so I need to **git add** the file before comparing it), and lots of other unmodified files. Various display options are provided by the icons along the top.

You can also compare two directories, which is sometimes handy:

![Comparing directories in Meld][13]

### Conclusion

Even regular users can find comparisons with diff difficult to decipher. I find the visualizations Meld provides make a big difference in troubleshooting what's changed between files. On top of that, Meld comes with some helpful awareness of version control and helps you compare across Git commits without thinking much about it. Give Meld a go, and make troubleshooting a little easier on the eyes.

* * *

_This was originally published on Ben Nuttall's [Tooling blog][14] and is reused with permission._
--------------------------------------------------------------------------------

via: https://opensource.com/article/20/3/meld

作者:[Ben Nuttall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/bennuttall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_tea_laptop_computer_work_desk.png?itok=D5yMx_Dr (Person drinking a hat drink at the computer)
[2]: https://meldmerge.org/
[3]: https://chocolatey.org/packages/meld
[4]: https://opensource.com/article/20/3/chocolatey
[5]: https://yousseb.github.io/meld/
[6]: https://opensource.com/sites/default/files/uploads/diff-output.png (diff output)
[7]: https://en.wikipedia.org/wiki/Shebang_(Unix)
[8]: https://opensource.com/sites/default/files/uploads/meld-output.png (Meld output)
[9]: https://opensource.com/sites/default/files/uploads/meld-3-files.png (Comparing three files in Meld)
[10]: https://opensource.com/resources/what-is-git
[11]: https://opensource.com/sites/default/files/uploads/meld-git.png (Comparing Git files in Meld)
[12]: https://opensource.com/sites/default/files/uploads/meld-directory-changes.png (Meld . output)
[13]: https://opensource.com/sites/default/files/uploads/meld-directory-compare.png (Comparing directories in Meld)
[14]: https://tooling.bennuttall.com/meld/
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to create a personal file server with SSH on Linux)
[#]: via: (https://opensource.com/article/20/3/personal-file-server-ssh)
[#]: author: (Jim Hall https://opensource.com/users/jim-hall)

How to create a personal file server with SSH on Linux
======
Connecting to a remote Linux system over SSH is just plain easy. Here's how to do it.
![Hand putting a Linux file folder into a drawer][1]

The Raspberry Pi makes for a useful and inexpensive home server for lots of things. I most often use the [Raspberry Pi as a print server][2] to share a laser printer with other devices in our home or as a personal file server to store copies of projects and other data.

I use this file server in various ways. Let's say I'm working on a project, such as a new book, and I want to make a snapshot copy of my work and all my associated files. In that case, I simply copy my **BookProject** folder to a **BookBackup** folder on the file server.

Or if I'm cleaning up my local files, and I discover some files that I don't really need but I'm not yet ready to delete, I'll copy them to a **KeepForLater** folder on the file server. That's a convenient way to remove clutter from my everyday Linux system and offload infrequently used files to my personal file server.

Setting up a Raspberry Pi—or any Linux system—as a personal file server doesn't require configuring Network File System (NFS) or Common Internet File System (CIFS) or tinkering with other file-sharing systems such as WebDAV. You can easily set up a remote file server using SSH. And here's how.
### Set up SSHD on the remote system

Your Linux system probably has the SSH daemon (sshd) installed. It may even be running by default. If not, you can easily set up SSH through whatever control panel you prefer on your Linux distribution. I run [Fedora ARM][3] on my Raspberry Pi, and I can access the control panel remotely by pointing my Pi's web browser to port 9090. (On my home network, the Raspberry Pi's IP address is **10.0.0.11**, so I connect to **10.0.0.11:9090**.) If the SSH daemon isn't running by default, you can set it to start automatically in Services in the control panel.

![sshd in the list of system services][4]

You can find sshd in the list of system services.

![slider to activate sshd][5]

Click the slider to activate **sshd** if it isn't already.

### Do you have an account?

Make sure you have an account on the remote system. It might be the same as the username you use on your local system, or it could be something different.

On the popular Raspbian distribution, the default account username is **pi**. But other Linux distributions may require you to set up a unique new user when you install them. If you don't know your username, you can use your distribution's control panel to create one. On my Raspberry Pi, I set up a **jhall** account that matches the username on my everyday Linux desktop machine.

![Set up a new account on Fedora Server][6]

If you use Fedora Server, click the **Create New Account** button to set up a new account.

![Set password or SSH key][7]

Don't forget to set a password or add a public SSH key.
### Optional: Share your SSH public key

If you exchange your public SSH key with the remote Linux system, you can log in without having to enter a password. This step is optional; you can use a password if you prefer.

You can learn more about SSH keys in these Opensource.com articles:

* [Tools for SSH key management][8]
* [Graphically manage SSH keys with Seahorse][9]
* [How to manage multiple SSH keys][10]
* [How to enable SSH access using a GPG key for authentication][11]
### Make a file manager shortcut

Since you've started the SSH daemon on the remote system and set up your account username and password, all that's left is to map a shortcut to the other Linux system from your file manager. I use GNOME as my desktop, but the steps are basically the same for any Linux desktop.

#### Make the initial connection

In the GNOME file manager, look for the **+Other Locations** button in the left-hand navigation. Click that to open a **Connect to Server** prompt. Enter the address of the remote Linux server here, starting with the SSH connection protocol.

![Creating a shortcut in GNOME file manager][12]

The GNOME file manager supports a variety of connection protocols. To make a connection over SSH, start your server address with **sftp://** or **ssh://**.

If your username is the same on your local Linux system and your remote Linux system, you can just enter the server's address and the folder location. To make my connection to the **/home/jhall** directory on my Raspberry Pi, I use:

```
sftp://10.0.0.11/home/jhall
```

![GNOME file manager Connect to Server][13]

If your username is different, you can specify your remote system's username with an **@** sign before the remote system's address. To connect to a Raspbian system on the other end, you might use:

```
sftp://pi@10.0.0.11/home/pi
```

![GNOME file manager Connect to Server][14]

If you didn't share your public SSH key, you may need to enter a password. Otherwise, the GNOME file manager should automatically open the folder on the remote system and let you navigate.

![GNOME file manager connection][15]
#### Create a shortcut so you can easily connect to the server later

This is easy in the GNOME file manager. Right-click on the remote system's name in the navigation list, and select **Add Bookmark**. This creates a shortcut to the remote location.

![GNOME file manager - adding bookmark][16]

If you want to give the bookmark a more memorable name, you can right-click on the shortcut and choose **Rename**.

### That's it!

Connecting to a remote Linux system over SSH is just plain easy. And you can use the same method to connect to systems other than home file servers. I also have a shortcut that allows me to instantly access files on my provider's web server and another that lets me open a folder on my project server. SSH makes it a secure connection; all of my traffic is encrypted. Once I've opened the remote system over SSH, I can use the GNOME file manager to manage my remote files as easily as I'd manage my local folders.
--------------------------------------------------------------------------------

via: https://opensource.com/article/20/3/personal-file-server-ssh

作者:[Jim Hall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC (Hand putting a Linux file folder into a drawer)
[2]: https://opensource.com/article/18/3/print-server-raspberry-pi
[3]: https://arm.fedoraproject.org/
[4]: https://opensource.com/sites/default/files/uploads/fedora-server-control-panel-sshd.png (sshd in the list of system services)
[5]: https://opensource.com/sites/default/files/uploads/fedora-server-control-panel-sshd-service.png (slider to activate sshd)
[6]: https://opensource.com/sites/default/files/uploads/fedora-server-control-panel-accounts_create-user.png (Set up a new account on Fedora Server)
[7]: https://opensource.com/sites/default/files/uploads/fedora-server-control-panel-accounts.png (Set password or SSH key)
[8]: https://opensource.com/article/20/2/ssh-tools
[9]: https://opensource.com/article/19/4/ssh-keys-seahorse
[10]: https://opensource.com/article/19/4/gpg-subkeys-ssh-manage
[11]: https://opensource.com/article/19/4/gpg-subkeys-ssh
[12]: https://opensource.com/sites/default/files/uploads/gnome-file-manager-other-locations.png (Creating a shortcut in GNOME file manager)
[13]: https://opensource.com/sites/default/files/uploads/gnome-file-manager-other-sftp.png (GNOME file manager Connect to Server)
[14]: https://opensource.com/sites/default/files/uploads/gnome-file-manager-other-sftp-username.png (GNOME file manager Connect to Server)
[15]: https://opensource.com/sites/default/files/uploads/gnome-file-manager-remote-jhall.png (GNOME file manager connection)
[16]: https://opensource.com/sites/default/files/uploads/gnome-file-manager-remote-jhall-add-bookmark.png (GNOME file manager - adding bookmark)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Audacious 4.0 Released With Qt 5: Here’s How to Install it on Ubuntu)
[#]: via: (https://itsfoss.com/audacious-4-release/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

Audacious 4.0 Released With Qt 5: Here’s How to Install it on Ubuntu
======

[Audacious][1] is an open-source audio player available for multiple platforms, including Linux. Almost 2 years after its last major release, Audacious 4.0 has arrived with some big changes.

The latest release, Audacious 4.0, comes with a [Qt 5][2] UI by default. You can still build the old GTK2 UI from source – however, new features will be added to the Qt UI only.

Let’s take a look at what has changed and how to install the latest Audacious on your Linux system.

### Audacious 4.0 Key Changes & Features

![Audacious 4 Release][3]

Of course, the major change is the use of the Qt 5 UI as the default. In addition to that, there are a lot of improvements and feature additions mentioned in the [official announcement post][4]; here they are:

* Clicking on playlist column headers sorts the playlist
* Dragging playlist column headers changes the column order
* Application-wide settings for volume and time step sizes
* New option to hide playlist tabs
* Sorting playlist by path now sorts folders after files
* Implemented additional MPRIS calls for compatibility with KDE 5.16+
* New OpenMPT-based tracker module plugin
* New VU Meter visualization plugin
* Added option to use a SOCKS network proxy
* The Song Change plugin now works on Windows
* New “Next Album” and “Previous Album” commands
* The tag editor in Qt UI can now edit multiple files at once
* Implemented equalizer presets window for Qt UI
* Lyrics plugin gained the ability to save and load lyrics locally
* Blur Scope and Spectrum Analyzer visualizations ported to Qt
* MIDI plugin SoundFont selection ported to Qt
* JACK output plugin gained some new options
* Added option to endlessly loop PSF files

If you didn’t know about it previously, you can easily get it installed and use the equalizer coupled with [LADSP][5] effects to tweak your music experience.

![Audacious Winamp Classic Interface][6]

### How to Install Audacious 4.0 on Ubuntu

It is worth noting that the [unofficial PPA][7] is made available by [UbuntuHandbook][8]. You can simply follow the instructions below to install it on Ubuntu 16.04, 18.04, 19.10, and 20.04.
|
||||
|
||||
1\. First, you have to add the PPA to your system by typing in the following command in the terminal:
|
||||
|
||||
```
|
||||
sudo add-apt-repository ppa:ubuntuhandbook1/apps
|
||||
```
|
||||
|
||||
2\. Next, you need to update/refresh the package information from the repositories/sources you have and proceed to install the app. Here’s how to do that:
|
||||
|
||||
```
|
||||
sudo apt update
|
||||
sudo apt install audacious audacious-plugins
|
||||
```
|
||||
|
||||
That’s it. You don’t have to do anything else. In either case, if you want to [remove the PPA and the software][9], just type in the following commands in order:
|
||||
|
||||
```
|
||||
sudo add-apt-repository --remove ppa:ubuntuhandbook1/apps
|
||||
sudo apt remove --autoremove audacious audacious-plugins
|
||||
```
|
||||
|
||||
You can also check out their GitHub page for more information on the source and potentially install it on other Linux distros as well, if that’s what you’re looking for.
|
||||
|
||||
[Audacious Source Code][10]
|
||||
|
||||
### Wrapping Up
|
||||
|
||||
The new features and the switch to the Qt 5 UI should improve both the user experience and the functionality of the audio player. If you’re a fan of the classic Winamp interface, it works just fine as well – but it’s missing a few features, as mentioned in the announcement post.
|
||||
|
||||
You can try it out and let me know your thoughts in the comments below!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/audacious-4-release/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://audacious-media-player.org
|
||||
[2]: https://doc.qt.io/qt-5/qt5-intro.html
|
||||
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/03/audacious-4-release.jpg?ssl=1
|
||||
[4]: https://audacious-media-player.org/news/45-audacious-4-0-released
|
||||
[5]: https://www.ladspa.org/
|
||||
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/03/audacious-winamp.jpg?ssl=1
|
||||
[7]: https://itsfoss.com/ppa-guide/
|
||||
[8]: http://ubuntuhandbook.org/index.php/2020/03/audacious-4-0-released-qt5-ui/
|
||||
[9]: https://itsfoss.com/how-to-remove-or-delete-ppas-quick-tip/
|
||||
[10]: https://github.com/audacious-media-player/audacious
|
@ -1,83 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (sndnvaps)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Linux is our love language)
|
||||
[#]: via: (https://opensource.com/article/20/2/linux-love-language)
|
||||
[#]: author: (Christopher Cherry https://opensource.com/users/chcherry)
|
||||
|
||||
Linux 是我们最喜爱的语言
|
||||
======
|
||||
当妻子教丈夫一些新技能时,他们学到的会比预期的更多。
|
||||
![红心 "你不是孤单的"][1]
|
||||
|
||||
2019 年是我们樱桃(Cherry)一家学习的一年。我是一名喜欢学习新技术的高级软件工程师,并把学到的内容教给我的丈夫 Chris。通过教他一些我学到的东西,并让他跟着完成我的技术演练文章,我帮助 Chris 学到了新技术,使他能够将自己的职业生涯更深入地转向技术领域;我自己也学到了新的方法,让我的演练和培训材料更易于读者理解。
|
||||
|
||||
在这篇文章中,我们来聊聊彼此各自学到了什么,以及这些收获对我们的未来有何影响。
|
||||
|
||||
### 对于学生的问题
|
||||
|
||||
**Jess:** Chris,是什么让你想深入学习我所在领域的技能呢?
|
||||
|
||||
**Chris:** 主要是为了让我的事业更进一步。做网络工程师的经历告诉我,如今只当一个网络专家已经不像以前那样有价值了,我必须掌握更多的知识。由于网络经常被当作如今程序中断或出错的"罪魁祸首",我想从开发人员的角度更多地了解应用程序是如何编写的,以便理解它们如何依赖网络资源。
|
||||
|
||||
**Jess:** 你想让我先教你什么内容呢,你想从中学到什么东西?
|
||||
|
||||
**Chris:** 首先是学习怎样安装 Linux 系统,之后再安装 [Ansible][2]。只要硬件兼容,每一个 Linux 发行版都很容易安装,但偶尔也会出现不兼容的情况,这就需要我学习如何解决系统安装头 5 分钟出现的问题了(这个我最喜欢了)。Ansible 给了我一个使用软件包管理器安装程序的理由。程序安装完成后,我很快了解到包管理器是如何处理依赖项的:通过查看 yum 安装了哪些东西,我发现 Ansible 是用 Python 写的,所以能在我的系统上运行。自此之后,我都通过 Ansible 来安装各种各样的程序。
|
||||
|
||||
**Jessica:** 你喜欢我这种教学方式吗?
|
||||
|
||||
**Chris:** 我们一开始有过争执,直到弄清楚了我喜欢的学习方式,以及你应该怎样教才最适合我。一开始,我很难跟上你讲的内容。例如,当你说"一个 Docker 容器"的时候,我完全不知道你在讲什么;早期我得到的回答往往是"这就是一个容器",而这对我来说毫无意义。我更喜欢你对这些内容进行更深入的讲解,这让学习有趣多了。
|
||||
|
||||
**Jess:** 老实说,这对我来说也是重要的一课。在你之前,我从来没有教过技术领域知识比我少的人,是你让我意识到讲解时需要交代更多的细节,我也得说声谢谢。
|
||||
|
||||
在按这些步骤学习的过程中,你觉得测试我写的文章这件事怎么样?
|
||||
|
||||
|
||||
**Chris:** 就个人而言,我本以为这会很容易,但我错了。在我学习的主要内容中,比如你[介绍的 Vagrant][3],它在不同 Linux 发行版之间的差异比我想象的要大。操作系统会改变设置方式、运行要求和具体命令,这比我用过的网络设备之间的差异还要大。这让我花费了更多精力去分辨指令是对应我的系统还是其它系统(有时这很难分辨),这一路上我确实碰到了很多坎坷。
|
||||
|
||||
**Jess:** 我每天都会遇到各种各样的问题,用不同的方法处理不同的问题,这就是我的日常。
|
||||
|
||||
### 对于老师的问题
|
||||
|
||||
**Chris:** Jess, 你现在教我的方式有什么改变呢?
|
||||
|
||||
**Jess:** 我会让你多读些书,我自己也是这样通过阅读书籍来学习新技术的。每天起床后的一小时和睡觉前的一小时我都会看书,一周左右就能读完一到两本;我还会制定为期两周的任务计划,来实践从书中学到的技能。这还不算我每天早晨喝咖啡的第一个小时里读的那些科技文章。在考虑你的职业成长目标时,我认为除了优秀的博客帖子和我们讨论的文章之外,书籍也是重要的一环。我觉得大量的阅读提升了我的理解速度,如果你也这么做,很快就能赶上我。
|
||||
|
||||
**Chris:** 那么学生有没有教过老师呢?
|
||||
|
||||
**Jess:** 我从你身上学到了耐心。举个例子,当你装完 Ansible 之后,我问你接下来想做什么,你直接回答"不知道",这并不是我想要的教学效果。所以我改变了策略:我们先讨论你想实现什么目标,再决定安装什么程序。后来写 Vagrant 文章时,我们就是围绕着一个共同的目标一起演示操作的,最后我们都有所收获。
|
||||
|
||||
这实际上也极大地改变了我在工作中的培训方式。现在,我会在大家学习的过程中提出更多问题,并手把手地进行讲解,比以前做得更多。我更愿意坐下来和对方一起仔细检查,确保他明白我在说什么、我们在做什么,这是我以前从来没有做过的。
|
||||
|
||||
### 我们一直在学习
|
||||
|
||||
作为一对夫妇,在这一年的技术合作中,我们俩的技术都有所增长。
|
||||
|
||||
**Chris:** 我对自己学到的东西感到震惊。在这一年里,我认识了新的操作系统,学会了使用 API、用 Ansible 部署网络应用,以及用 Vagrant 启动虚拟机。我还学到了好的文档会让生活变得更轻松,所以我也会尝试去写。然而,在这个领域,软件的行为并不总是有文档记录,所以我学会了做好解决棘手问题的准备,并记录下解决它们的过程。
|
||||
|
||||
**Jess:** 除了在教你的过程中学到的东西外,我还专注于学习 Kubernetes 在云环境中的应用知识,包括部署方式、Kubernetes API 的复杂性、构建自己的容器,以及对环境进行加固。我还抽出时间学习了一些有趣的东西:研究无服务器(serverless)代码、人工智能模型、Python,以及用图形方式显示热成像。对我来说,这一年也很充实。
|
||||
|
||||
我们的下一个目标是什么?现在还不知道,但我可以向你保证,我们将会在 Opensource.com 上分享它。
|
||||
|
||||
**你在 2019 年辅导了谁,或者打算在 2020 年辅导谁?请在评论中告诉我们。**
|
||||
|
||||
当我六岁的侄女舒淇探索发现新事物时,我能从她身上看到好奇的光芒……
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/2/linux-love-language
|
||||
|
||||
作者:[Christopher Cherry][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[sndnvaps](https://github.com/sndnvaps)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/chcherry
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/red-love-heart-alone-stone-path.jpg?itok=O3q1nEVz (红心 "你不是孤单的")
|
||||
[2]: https://opensource.com/resources/what-ansible
|
||||
[3]: https://opensource.com/resources/vagrant
|
@ -1,71 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (Morisun029)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (7 tips for writing an effective technical resume)
|
||||
[#]: via: (https://opensource.com/article/20/2/technical-resume-writing)
|
||||
[#]: author: (Emily Brand https://opensource.com/users/emily-brand)
|
||||
|
||||
撰写有效技术简历的7个技巧
|
||||
======
|
||||
遵循以下这些要点,把自己最好的一面呈现给潜在雇主。
|
||||
![Two hands holding a resume with computer, clock, and desk chair ][1]
|
||||
|
||||
如果你是一名软件工程师或技术领域的经理,那么创建或更新简历可能是一项艰巨的任务。 要考虑的重点是什么? 应该怎么处理格式,内容以及目标或摘要? 哪些工作经验相关? 如何确保自动招聘工具不会过滤掉你的简历?
|
||||
|
||||
在过去的七年中,作为一名招聘经理,我看到了各种各样的简历; 尽管有些令人印象深刻,但还有很多人写的很糟糕。
|
||||
|
||||
在编写或更新简历时,请遵循以下七个简单原则。
|
||||
|
||||
### 1\. 概述
|
||||
|
||||
简历顶部的简短段落应简洁明了,目的明确,避免过多使用形容词和副词。 诸如“令人印象深刻”,“广泛”和“优秀”之类的词,这些词不会增加你的招聘机会; 相反,它们看起来和感觉上像是过度使用的填充词。 关于你的目标,问自己一个重要的问题: **它是否告诉招聘经理我正在寻找什么样的工作以及如何为他们提供价值?** 如果不是,请加强并简化它以回答该问题,或者将其完全排除在外。
|
||||
|
||||
### 2\. 工作经验
|
||||
|
||||
数字,数字,数字。 用事实传达观点远比一般的陈述,例如“帮助构建,管理,交付许多对客户利润有贡献的项目” 更能对你有帮助。 你的表达中应包括统计数据,例如“直接影响了五个顶级银行的项目,这些项目将其上市时间缩短了40%”,你提交了多少行代码或管理了几个团队。 数据比修饰语更能有效地展示你的能力和价值。
|
||||
|
||||
如果你经验不足,没有什么工作经验可展示,那无关的经验,如暑期兼职工作,就不要写了。 相反,将相关经验的细节以及你所学到的知识的详细信息写进简历,这些可以使你成为一个更好的候选人。
|
||||
|
||||
### 3\. 搜索术语和行话
|
||||
|
||||
随着技术在招聘过程中发挥如此巨大的作用,确保简历被标记为正确的职位非常重要,但不要在简历上过分吹嘘自己。 如果你提到敏捷技能但不知道看板是什么,请三思。 如果你提到自己精通Java,但是已经有五年都没有使用过Java了,请小心。 如果存在你熟悉但不一定是最新的语言和框架,请创建其他类别或将你的经验分为“精通”和“熟悉”。
|
||||
|
||||
### 4\. 教育
|
||||
|
||||
如果你不是应届大学毕业生,那就没必要再写你的GPA或你参加过的俱乐部或兄弟会,除非你计划将它们用作谈话要点以在面试中赢得信任。 确保你发表的或获取过专利的东西包括在内,即使与你的工作无关。 如果你没有大学学位,请添加一个证书部分代替教育背景部分。 如果你是军人,请包括现役和预备时间。
|
||||
|
||||
### 5\. 资质证书
|
||||
|
||||
除非你想重新进入之前离开的领域,否则不要写过期的证书,例如,如果你曾经是一名人事经理,而现在正寻求动手编程的工作。 如果你拥有与该领域不再相关的认证,就不要写这些认证,因为这些可能会分散招聘者的注意力,使你的简历失去吸引力。 利用你的 LinkedIn 个人资料为简历添加更多色彩,因为大多数人在面试之前都会阅读你的简历和 LinkedIn 个人资料。
|
||||
|
||||
### 6\. 拼写和语法
|
||||
|
||||
|
||||
让其他人帮你校对简历。很多时候,我在简历中看到拼写错误的单词,或者像"他们的(their)""他们是(they're)""那里(there)"这样的误用。这些本可以避免和修复的错误会产生负面影响。理想情况下,你的简历应使用主动语态;如果这让你感到别扭,那就用过去时书写,最重要的是要始终保持一致。不正确的拼写和语法传递出的信息是:你要么不太在乎所申请的工作,要么不够注意细节。
|
||||
|
||||
### 7\. 格式
|
||||
|
||||
确保你的简历是最新的并且富有吸引力,这是留下良好第一印象的简便方法。保持格式一致,例如相同的页边距、相同的间距、大小写和颜色(配色保持简约),是简历写作中最基本的部分,但也有必要借此表明你对这份工作感到自豪,并重视自己的价值和未来的雇主。在适当的地方使用表格,以视觉上有吸引力的方式组织信息。如果可以,请同时以 .pdf 和 .docx 格式上传简历;用 Google Docs 可以导出为 .odt 格式,以便在 LibreOffice 中轻松打开。[这里][2]是我推荐的一个简单的 Google 文档简历模板。你还可以花少量费用(不到 10 美元)从一些设计公司购买模板。
|
||||
|
||||
### 定期更新
|
||||
|
||||
如果你被要求(或希望)申请一份工作,定期更新简历可以最大程度地减少压力,也可以帮助你创建和维护更准确的简历版本。 撰写简历时,要有远见,确保至少让其他三个人对你的简历内容,拼写和语法进行检查。 即使你是由公司招募或其他人推荐给公司的,面试官也可能仅通过简历认识你,因此请确保它为你带来良好的第一印象。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/2/technical-resume-writing
|
||||
|
||||
作者:[Emily Brand][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[Morisun029](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/emily-brand
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/resume_career_document_general.png?itok=JEaFL2XI (Two hands holding a resume with computer, clock, and desk chair )
|
||||
[2]: https://docs.google.com/document/d/1ARVyybC5qQEiCzUOLElwAdPpKOK0Qf88srr682eHdCQ/edit
|
@ -1,63 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (hkurj)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (2020 Will Be a Year of Hindsight for SD-WAN)
|
||||
[#]: via: (https://www.networkworld.com/article/3531315/2020-will-be-a-year-of-hindsight-for-sd-wan.html)
|
||||
[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)
|
||||
|
||||
2020 年将是回顾 SD-WAN 的一年
|
||||
======
|
||||
|
||||
对于软件定义的广域网(SD-WAN),“过去看起来困难的选择,知道了这些选择的结果后,现在看起来就很清晰了” 这一说法再合适不过了。总结过去的几年:云计算和数字化转型促使公司重新评估传统的WAN技术,该技术不再能够满足其不断增长的业务需求。从那时起,SD-WAN成为一种有前途的新技术。
|
||||
|
||||
SD-WAN旨在解决物理设备的流量管理问题,并支持从云进行基于软件的配置。替换昂贵的多协议标签交换(MPLS)的愿望推动了许多最初的SD-WAN部署。公司希望它可以神奇地解决他们所有的网络问题。但是在实践中,基本的SD-WAN解决方案远没有实现这一愿景。
|
||||
|
||||
快速发展到现在,围绕SD-WAN的炒作已经尘埃落定,并且早期的实施工作已经过去。现在是时候回顾一下我们在2019年学到的东西以及在2020年要改进的地方。所以,让我们开始吧。
|
||||
|
||||
|
||||
### **1\. 这与节省成本无关**
|
||||
|
||||
大多数公司选择 SD-WAN 作为 MPLS 的替代品,是因为它可以降低 WAN 成本。但是,[节省的成本][1]会因 SD-WAN 方案的不同而异,因此不应将其作为部署该技术的主要驱动力。公司应该专注于提高网络敏捷性来满足自身需求,例如实现更快的站点部署和减少配置时间。SD-WAN 的主要驱动力是使网络更高效,如果做到了这一点,成本自然会随之降低。
|
||||
|
||||
|
||||
### **2\. WAN优化是必要的**
|
||||
|
||||
说到效率,[WAN 优化][2]可以提高应用程序和数据流量的性能。通过协议加速、重复数据删除、压缩和缓存等技术,WAN 优化可以增加带宽、减少延迟并减轻数据包丢失。最初人们以为 SD-WAN 可以取代 WAN 优化,但我们现在知道某些应用程序需要额外的性能提升。这两种技术是互补而非互相替代的关系,它们应该被用来解决不同的问题。
|
||||
|
||||
|
||||
### **3\. 安全性不应该是事后考虑**
|
||||
|
||||
SD-WAN 具有许多优点,其中之一就是使用宽带互联网快速传送企业应用程序流量。但这种方法也带来了安全风险,因为它将用户及其本地网络暴露在不受信任的公共互联网中。安全性应该从一开始就成为 SD-WAN 实施的一部分,而不是事后补救。公司可以通过使用[安全的云托管][3]之类的服务,将安全能力部署在分支机构附近,从而兼顾应用程序性能和防护。
|
||||
|
||||
|
||||
### **4\. 可见性对于SD-WAN成功至关重要**
|
||||
|
||||
对应用程序和数据流量具有[可见性][4],使网络管理不再需要靠猜测。最好的起点是部署前阶段:公司可以在实施 SD-WAN 之前评估其现有能力和欠缺的能力。部署之后,以日常监控和警报形式存在的可见性将继续发挥重要作用。了解网络中正在发生什么的公司,能更好地应对性能问题,并可以利用这些知识来避免将来出现问题。
|
||||
|
||||
### **5\. 无线广域网尚未准备就绪**
|
||||
|
||||
SD-WAN 可通过任何传输方式将用户连接到应用程序,包括宽带和 4G/LTE 无线网络,这就是[移动互联][5]越来越多地被集成到 SD-WAN 解决方案中的原因。尽管公司渴望将 4G 用作潜在的传输替代方案(尤其是在偏远地区),但按使用量付费的 4G 服务成本很高。此外,由于延迟和带宽限制,4G 可能会出现问题。更好的做法是等待服务提供商以更优的价格部署 5G。今年将是我们看到 5G 推出并更加关注无线 SD-WAN 的一年。
|
||||
|
||||
请务必观看以下 SD-WAN 视频系列:[你应该知道的关于 SD-WAN 的一切][6]。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3531315/2020-will-be-a-year-of-hindsight-for-sd-wan.html
|
||||
|
||||
作者:[Zeus Kerravala][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/hkurj)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Zeus-Kerravala/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://blog.silver-peak.com/to-maximize-the-value-of-sd-wan-look-past-hardware-savings
|
||||
[2]: https://blog.silver-peak.com/sd-wan-vs-wan-optimization
|
||||
[3]: https://blog.silver-peak.com/sd-wans-enable-scalable-local-internet-breakout-but-pose-security-risk
|
||||
[4]: https://blog.silver-peak.com/know-the-true-business-drivers-for-sd-wan
|
||||
[5]: https://blog.silver-peak.com/mobility-and-sd-wan-part-1-sd-wan-with-4g-lte-is-a-reality
|
||||
[6]: https://www.silver-peak.com/everything-you-need-to-know-about-sd-wan
|
@ -7,35 +7,33 @@
|
||||
[#]: via: (https://nicolasparada.netlify.com/posts/go-messenger-conversations/)
|
||||
[#]: author: (Nicolás Parada https://nicolasparada.netlify.com/)
|
||||
|
||||
构建一个即时消息应用(三):对话
|
||||
======
|
||||
|
||||
本文是该系列的第三篇。
|
||||
|
||||
* [第一篇:模式][1]
|
||||
* [第二篇:OAuth][2]
|
||||
|
||||
在我们的即时消息应用中,消息按两个参与者之间的对话进行组织。你提供想要交谈的用户来发起一场对话;如果该对话尚不存在,则会先创建它,然后你就可以向该对话发送消息了。
|
||||
|
||||
就前端而言,我们想要显示一份近期对话列表,并在其中显示每个对话的最后一条消息,以及另一个参与者的姓名和头像。
|
||||
|
||||
在这篇帖子中,我们将会编写一些端点(endpoints)来完成像「创建对话」、「获取对话列表」以及「找到单个对话」这样的任务。
|
||||
|
||||
首先,要在主函数 `main()` 中添加下面的路由。
|
||||
|
||||
|
||||
|
||||
```go
|
||||
router.HandleFunc("POST", "/api/conversations", requireJSON(guard(createConversation)))
|
||||
router.HandleFunc("GET", "/api/conversations", guard(getConversations))
|
||||
router.HandleFunc("GET", "/api/conversations/:conversationID", guard(getConversation))
|
||||
```
|
||||
|
||||
这三个端点都需要进行身份验证,所以我们将会使用 `guard()` 中间件。我们也会构建一个新的中间件,用于检查请求内容是否为 JSON 格式。
|
||||
|
||||
### JSON 请求检查中间件
|
||||
|
||||
```go
|
||||
func requireJSON(handler http.HandlerFunc) http.HandlerFunc {
|
||||
return func(w http.ResponseWriter, r *http.Request) {
|
||||
if ct := r.Header.Get("Content-Type"); !strings.HasPrefix(ct, "application/json") {
|
||||
@ -47,11 +45,11 @@ func requireJSON(handler http.HandlerFunc) http.HandlerFunc {
|
||||
}
|
||||
```
|
||||
|
||||
如果请求(request)不是 JSON 格式,那么它会返回 `415 Unsupported Media Type`(不支持的媒体类型)错误。
|
||||
|
||||
### 创建对话
|
||||
|
||||
```go
|
||||
type Conversation struct {
|
||||
ID string `json:"id"`
|
||||
OtherParticipant *User `json:"otherParticipant"`
|
||||
@ -60,9 +58,9 @@ type Conversation struct {
|
||||
}
|
||||
```
|
||||
|
||||
就像上面的代码那样,对话中保持对另一个参与者和最后一条消息的引用,还有一个 bool 类型的字段,用来告知是否有未读消息。
|
||||
|
||||
```go
|
||||
type Message struct {
|
||||
ID string `json:"id"`
|
||||
Content string `json:"content"`
|
||||
@ -74,11 +72,11 @@ type Message struct {
|
||||
}
|
||||
```
|
||||
|
||||
我们会在下一篇文章介绍与消息相关的内容,但由于我们这里也需要用到它,所以先定义了 `Message` 结构体。其中大多数字段与数据库表一致。我们需要使用 `Mine` 来断定消息是否属于当前已验证用户所有。一旦加入实时功能,`ReceiverID` 可以帮助我们过滤消息。
|
||||
|
||||
接下来让我们编写 HTTP 处理程序。尽管它有些长,但也没什么好怕的。
|
||||
|
||||
```go
|
||||
func createConversation(w http.ResponseWriter, r *http.Request) {
|
||||
var input struct {
|
||||
Username string `json:"username"`
|
||||
@ -170,19 +168,19 @@ func createConversation(w http.ResponseWriter, r *http.Request) {
|
||||
}
|
||||
```
|
||||
|
||||
在此端点,你会向 `/api/conversations` 发送 POST 请求,请求的 JSON 主体中包含要对话的用户的用户名。
|
||||
|
||||
因此,首先需要将请求主体解析成包含用户名的结构。然后,校验用户名不能为空。
|
||||
|
||||
```go
|
||||
type Errors struct {
|
||||
Errors map[string]string `json:"errors"`
|
||||
}
|
||||
```
|
||||
|
||||
这是错误消息的结构体 `Errors`,它仅仅是一个映射。如果输入空用户名,你就会得到一段带有 `422 Unprocessable Entity`(无法处理的实体)错误消息的 JSON 。
|
||||
|
||||
```json
|
||||
{
|
||||
"errors": {
|
||||
"username": "Username required"
|
||||
@ -190,17 +188,17 @@ This is the `Errors` struct. It’s just a map. If you enter an empty username y
|
||||
}
|
||||
```
|
||||
|
||||
然后,我们开始执行 SQL 事务。收到的仅仅是用户名,但事实上,我们需要知道实际的用户 ID 。因此,事务的第一项内容是查询另一个参与者的 ID 和头像。如果找不到该用户,我们将会返回 `404 Not Found`(未找到) 错误。另外,如果找到的用户恰好和「当前已验证用户」相同,我们应该返回 `403 Forbidden`(拒绝处理)错误。这是由于对话只应当在两个不同的用户之间发起,而不能是同一个。
|
||||
|
||||
然后,我们试图找到这两个用户所共有的对话,所以需要使用 `INTERSECT` 语句。如果存在,只需要通过 `/api/conversations/{conversationID}` 重定向到该对话并将其返回。
|
||||
|
||||
如果未找到共有的对话,我们需要创建一个新的对话并添加指定的两个参与者。最后,我们 `COMMIT` 该事务并使用新创建的对话进行响应。
|
||||
|
||||
### 获取对话列表
|
||||
|
||||
端点 `/api/conversations` 将获取当前已验证用户的所有对话。
|
||||
|
||||
```go
|
||||
func getConversations(w http.ResponseWriter, r *http.Request) {
|
||||
ctx := r.Context()
|
||||
authUserID := ctx.Value(keyAuthUserID).(string)
|
||||
@ -267,17 +265,17 @@ func getConversations(w http.ResponseWriter, r *http.Request) {
|
||||
}
|
||||
```
|
||||
|
||||
该处理程序仅对数据库进行查询。它通过一些联接来查询对话表……首先,从消息表中获取最后一条消息。然后依据「ID 与当前已验证用户不同」的条件,从参与者表找到对话的另一个参与者。然后联接到用户表以获取该用户的用户名和头像。最后,再次联接参与者表,并以相反的条件从该表中找出参与对话的另一个用户,其实就是当前已验证用户。我们会对比消息中的 `messages_read_at` 和 `created_at` 两个字段,以确定对话中是否存在未读消息。然后,我们通过 `user_id` 字段来判定该消息是否属于「我」(指当前已验证用户)。
|
||||
|
||||
注意,此查询过程假定对话中只有两个参与者,它也仅仅适用于这种情况。另外,该设计也不太适合需要显示未读消息数量的场景。如果需要显示未读消息的数量,我认为可以在 `participants` 表上添加一个 `unread_messages_count` `INT` 字段,在每次创建新消息的时候递增它,并在用户读完消息后将其重置。
|
||||
|
||||
接下来需要遍历每一条记录,扫描出每一个对话来构建一个对话切片(a slice of conversations),并在最后将其作为响应返回。
|
||||
|
||||
### 找到单个对话
|
||||
|
||||
端点 `/api/conversations/{conversationID}` 会根据 ID 对单个对话进行响应。
|
||||
|
||||
```go
|
||||
func getConversation(w http.ResponseWriter, r *http.Request) {
|
||||
ctx := r.Context()
|
||||
authUserID := ctx.Value(keyAuthUserID).(string)
|
||||
@ -321,15 +319,15 @@ func getConversation(w http.ResponseWriter, r *http.Request) {
|
||||
}
|
||||
```
|
||||
|
||||
这里的查询与之前的有些类似。尽管我们并不关心最后一条消息的显示问题,并因此忽略了与之相关的一些字段,但我们仍需要根据这条消息来判断对话中是否存在未读消息。这次我们使用 `LEFT JOIN` 来代替 `INNER JOIN`,因为 `last_message_id` 字段是 `NULLABLE`(可以为空)的,否则我们将无法得到任何记录;基于同样的理由,我们在 `has_unread_messages` 的比较中使用了 `IFNULL` 语句。最后,我们按 ID 进行过滤。
|
||||
|
||||
如果查询没有返回任何记录,我们的响应会返回 `404 Not Found` 错误,否则响应将会返回 `200 OK` 以及找到的对话。
|
||||
|
||||
* * *
|
||||
|
||||
至此,对话相关的端点就完成了。
|
||||
|
||||
在下一篇帖子中,我们将会看到如何创建并列出消息。
|
||||
|
||||
[Source Code][3]
|
||||
|
||||
@ -346,6 +344,6 @@ via: https://nicolasparada.netlify.com/posts/go-messenger-conversations/
|
||||
|
||||
[a]: https://nicolasparada.netlify.com/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://linux.cn/article-11396-1.html
|
||||
[2]: https://linux.cn/article-11510-1.html
|
||||
[3]: https://github.com/nicolasparada/go-messenger-demo
|
@ -1,226 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (mengxinayan)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to structure a multi-file C program: Part 2)
|
||||
[#]: via: (https://opensource.com/article/19/7/structure-multi-file-c-part-2)
|
||||
[#]: author: (Erik O'Shaughnessy https://opensource.com/users/jnyjny)
|
||||
|
||||
如何组织构建多文件 C 语言程序(二)
|
||||
======
|
||||
我将在本系列的第二篇中深入研究由多个文件组成的C程序的结构。
|
||||
![4 个马尼拉文件夹:黄色、绿色、紫色、蓝色][1]
|
||||
|
||||
在[第一篇][2]中,我设计了一个名为 [MeowMeow][3] 的多文件 C 程序,该程序实现了一个玩具[编解码器][4]。我还提到了程序设计中的 Unix 哲学,即一开始就创建多个空文件,并建立一个良好的结构。最后,我创建了一个 Makefile 并阐述了它的作用。本文将从另一个方向展开:现在我将介绍这个简单但具有指导性的 MeowMeow 编/解码器的实现。
|
||||
|
||||
当你读过我的《[如何写一个好的 C 语言 main 函数][5]》之后,对 `main.c` 文件中 `meow` 和 `unmeow` 的结构就不会陌生了,其主体结构如下:
|
||||
|
||||
```
|
||||
/* main.c - MeowMeow 流编码器和解码器 */
|
||||
|
||||
/* 00 system includes */
|
||||
/* 01 project includes */
|
||||
/* 02 externs */
|
||||
/* 03 defines */
|
||||
/* 04 typedefs */
|
||||
/* 05 globals (but don't)*/
|
||||
/* 06 ancillary function prototypes if any */
|
||||
|
||||
int main(int argc, char *argv[])
|
||||
{
|
||||
/* 07 variable declarations */
|
||||
/* 08 check argv[0] to see how the program was invoked */
|
||||
/* 09 process the command line options from the user */
|
||||
/* 10 do the needful */
|
||||
}
|
||||
|
||||
/* 11 ancillary functions if any */
|
||||
```
|
||||
|
||||
### 包含项目头文件
|
||||
|
||||
位于第二部分中的 `/* 01 project includes */` 的源代码如下:
|
||||
|
||||
|
||||
```
|
||||
/* main.c - MeowMeow 流编码器和解码器 */
|
||||
...
|
||||
/* 01 project includes */
|
||||
#include "main.h"
|
||||
#include "mmencode.h"
|
||||
#include "mmdecode.h"
|
||||
```
|
||||
|
||||
`#include` 是 C 语言的预处理命令,它会将其后面的文件内容拷贝到当前文件中。如果程序员在头文件名称周围使用双引号,编译器将会在当前目录寻找该文件。如果文件被尖括号包围,编译器将在一组预定义的目录中查找该文件。
|
||||
|
||||
[main.h][6] 文件中包含了 [main.c][7] 文件中用到的定义和别名。我喜欢把尽可能多的声明放在头文件里,以便在程序的其他位置使用这些定义。
|
||||
|
||||
头文件 [mmencode.h][8] 和 [mmdecode.h][9] 几乎相同,因此我以 `mmencode.h` 为例来分析。
|
||||
|
||||
|
||||
```
|
||||
/* mmencode.h - MeowMeow 流编码器和解码器 */
|
||||
|
||||
#ifndef _MMENCODE_H
|
||||
#define _MMENCODE_H
|
||||
|
||||
#include <stdio.h>
|
||||
|
||||
int mm_encode(FILE *src, FILE *dst);
|
||||
|
||||
#endif /* _MMENCODE_H */
|
||||
```
|
||||
|
||||
`#ifndef`、`#define`、`#endif` 这组指令统称为"防护"指令,可以防止 C 编译器在一个文件中多次包含同一个头文件。如果编译器发现同一个定义/原型/声明出现了多次,就会产生警告,因此这些防护措施是必要的。
|
||||
|
||||
在防护内部,它只做两件事:`#include` 指令和函数原型声明。我将包含 `stdio.h` 头文件,以便于能在函数原型中使用 `FILE` 流。函数原型也可以被包含在其他 C 文件中,以便于在文件的命名空间中创建它。你可以将每个文件视为一个命名空间,其中的变量和函数不能被另一个文件中的函数或者变量使用。
|
||||
|
||||
编写头文件很复杂,并且在大型项目中很难管理它。不要忘记使用防护。
|
||||
|
||||
### MeowMeow 编码的实现
|
||||
|
||||
该程序的功能是逐字节地进行 `MeowMeow` 字符串的编解码,事实上这是该项目中最简单的部分。到目前为止,我所做的一切都是为了能在恰当的位置调用这个函数:解析命令行、确定要执行的操作,以及打开要操作的文件。下面的循环完成了编码过程:
|
||||
|
||||
|
||||
```
|
||||
/* mmencode.c - MeowMeow 流编码器 */
|
||||
...
|
||||
while (!feof(src)) {

    if (!fgets(buf, sizeof(buf), src))
        break;

    for (i = 0; i < strlen(buf); i++) {
        lo = (buf[i] & 0x000f);
        hi = (buf[i] & 0x00f0) >> 4;
        fputs(tbl[hi], dst);
        fputs(tbl[lo], dst);
    }
}
|
||||
```
|
||||
|
||||
简单地说,上面的代码借助 `feof(3)` 和 `fgets(3)` 函数循环读取文件,每次读取一块内容,直到文件结束。然后将读入的每个字节拆分成 `hi` 和 `lo` 两个半字节。半字节是 4 个比特,其奥妙之处在于 4 个比特恰好可以编码 16 个值。我将 `hi` 和 `lo` 用作 16 个字符串的查找表 `tbl` 的索引,表中包含了对每个半字节编码后的 `MeowMeow` 字符串。这些字符串通过 `fputs(3)` 函数写入目标 `FILE` 流,然后继续处理缓冲区的下一个字节。
|
||||
|
||||
该表使用 [table.h][14] 中的宏定义进行初始化,在没有特殊原因(比如:要展示包含了另一个项目的本地头文件)时,我喜欢使用宏来进行初始化。我将在未来的文章中进一步探讨原因。
|
||||
|
||||
### MeowMeow 解码的实现
|
||||
|
||||
我承认,我花了一些时间才写到这里。解码的循环与编码类似:把 `MeowMeow` 字符串读入缓冲区,再将其从字符串解码回字节。
|
||||
|
||||
|
||||
```
|
||||
/* mmdecode.c - MeowMeow 流解码器 */
|
||||
...
|
||||
int mm_decode(FILE *src, FILE *dst)
{
    if (!src || !dst) {
        errno = EINVAL;
        return -1;
    }
    return stupid_decode(src, dst);
}
|
||||
```
|
||||
|
||||
是不是和你预想的不太一样?
|
||||
|
||||
在这里,我通过外部可见的 `mm_decode()` 函数隐藏了 `stupid_decode()` 的实现细节。我上面所说的"外部"是指在这个文件之外:因为 `stupid_decode()` 不在头文件中,所以无法在其他文件中调用它。
|
||||
|
||||
当我们想发布一个稳定的公共接口、而内部实现还没有完全定型时,有时会这样做。在本例中,我编写的是一个 I/O 密集型函数,每次从源中读取 8 个字节,解码出 1 个字节写入目标流。更好的实现是一次处理多于 8 个字节的缓冲区,还可以对输出字节做缓冲,从而减少对目标流的单字节写入次数。
|
||||
|
||||
|
||||
```
|
||||
/* mmdecode.c - MeowMeow 流解码器 */
|
||||
...
|
||||
int stupid_decode(FILE *src, FILE *dst)
{
    char buf[9];
    decoded_byte_t byte;
    int i;

    while (!feof(src)) {
        if (!fgets(buf, sizeof(buf), src))
            break;
        byte.field.f0 = isupper(buf[0]);
        byte.field.f1 = isupper(buf[1]);
        byte.field.f2 = isupper(buf[2]);
        byte.field.f3 = isupper(buf[3]);
        byte.field.f4 = isupper(buf[4]);
        byte.field.f5 = isupper(buf[5]);
        byte.field.f6 = isupper(buf[6]);
        byte.field.f7 = isupper(buf[7]);

        fputc(byte.value, dst);
    }
    return 0;
}
|
||||
```
|
||||
|
||||
我并没有使用编码器中使用的位移方法,而是创建了一个名为 `decoded_byte_t` 的自定义数据结构。
|
||||
|
||||
```
|
||||
/* mmdecode.c - MeowMeow 流解码器 */
|
||||
...
|
||||
|
||||
typedef struct {
    unsigned char f7:1;
    unsigned char f6:1;
    unsigned char f5:1;
    unsigned char f4:1;
    unsigned char f3:1;
    unsigned char f2:1;
    unsigned char f1:1;
    unsigned char f0:1;
} fields_t;

typedef union {
    fields_t      field;
    unsigned char value;
} decoded_byte_t;
|
||||
```
|
||||
|
||||
初次看到代码时可能会感到有点儿复杂,但不要放弃。`decoded_byte_t` 被定义为 `fields_t` 和 `unsigned char` 的 **联合体** 。可以将联合中的命名成员看作同一内存区域的别名。在这种情况下,`value` 和 `field` 指向相同的 8 比特内存区域。将 `field.f0` 设置为 1 也将设置 `value` 中的最低有效位。
|
||||
|
||||
虽然 `unsigned char` 并不神秘,但 `fields_t` 的类型定义(`typedef`)也许看起来有些陌生。现代 C 编译器允许程序员在结构体中指定单比特宽度的字段:在成员标识符后紧跟一个冒号和一个整数,该整数指定了比特字段的宽度。
|
||||
|
||||
这种数据结构使得按 `field` 名称访问每个比特变得简单。我们依赖编译器生成正确的移位指令来访问 `field`,这可以在调试时为你节省不少时间。
|
||||
|
||||
最后,`stupid_decode()` 函数一次仅从源 `FILE` 流中读取 8 个字节,所以它的效率并不高。通常我们会尽量减少读写次数,以提高性能并降低系统调用的开销。请记住:以大块为单位进行少量读写,要比以小块为单位进行大量读写好得多。
|
||||
|
||||
### 总结
|
||||
|
||||
用 C 语言编写一个多文件程序需要程序员做更多计划,而不仅仅是一个 `main.c`。但是当你添加功能或者重构时,只需要多花费一点儿努力便可以节省大量时间以及避免让你头痛的问题。
|
||||
|
||||
回顾一下,我更喜欢这样做:多个文件,每个文件仅有简单功能;通过头文件公开那些文件中的小部分功能;把数字常量和字符串常量保存在头文件中;使用 `Makefile` 而不是 Bash 脚本来自动化处理事务;使用 `main()` 函数来处理和解析命令行参数并作为程序主要功能的框架。
|
||||
|
||||
我知道我只是涉及了这个简单程序中发生的事情,并且我很高兴知道哪些事情对您有所帮助以及哪些主题需要详细的解释。在评论中分享您的想法,让我知道。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/7/structure-multi-file-c-part-2
|
||||
|
||||
作者:[Erik O'Shaughnessy][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[萌新阿岩](https://github.com/mengxinayan)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/jnyjny
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/file_system.jpg?itok=pzCrX1Kc (4 manilla folders, yellow, green, purple, blue)
|
||||
[2]: https://opensource.com/article/19/7/how-structure-multi-file-c-program-part-1
|
||||
[3]: https://github.com/jnyjny/MeowMeow.git
|
||||
[4]: https://en.wikipedia.org/wiki/Codec
|
||||
[5]: https://opensource.com/article/19/5/how-write-good-c-main-function
|
||||
[6]: https://github.com/JnyJny/meowmeow/blob/master/main.h
|
||||
[7]: https://github.com/JnyJny/meowmeow/blob/master/main.c
|
||||
[8]: https://github.com/JnyJny/meowmeow/blob/master/mmencode.h
|
||||
[9]: https://github.com/JnyJny/meowmeow/blob/master/mmdecode.h
|
||||
[10]: http://www.opengroup.org/onlinepubs/009695399/functions/feof.html
|
||||
[11]: http://www.opengroup.org/onlinepubs/009695399/functions/fgets.html
|
||||
[12]: http://www.opengroup.org/onlinepubs/009695399/functions/strlen.html
|
||||
[13]: http://www.opengroup.org/onlinepubs/009695399/functions/fputs.html
|
||||
[14]: https://github.com/JnyJny/meowmeow/blob/master/table.h
|
||||
[15]: http://www.opengroup.org/onlinepubs/009695399/functions/isupper.html
|
||||
[16]: http://www.opengroup.org/onlinepubs/009695399/functions/fputc.html
|
@ -1,102 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (lujun9972)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (6 things you should be doing with Emacs)
|
||||
[#]: via: (https://opensource.com/article/20/1/emacs-cheat-sheet)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
|
||||
6 件你应该用 Emacs 做的事
|
||||
======
|
||||
下面六件事情你可能都没有意识到可以在 Emacs 下完成。此外,使用我们的新备忘单来充分利用 Emacs 的功能吧。
|
||||
![浏览器上给蓝色编辑器 ][1]
|
||||
|
||||
想象一下使用 Python 的 IDLE 界面来编辑文本。你可以将文件加载到内存中,编辑它们,并保存更改。但是你执行的每个操作都由 Python 函数定义。例如,调用 **upper()** 来让一个单词全部大写,调用 **open** 打开文件,等等。文本文档中的所有内容都是 Python 对象,可以进行相应的操作。从用户的角度来看,这与其他文本编辑器的体验一致。对于 Python 开发人员来说,这是一个丰富的 Python 环境,只需在配置文件中添加几个自定义函数就可以对其进行更改和开发。
|
||||
|
||||
这就是 [Emacs][2] 使用 1958 年的编程语言 [Lisp][3] 所做的事情。在 Emacs 中,运行应用程序的 Lisp 引擎与输入文本之间无缝结合。对 Emacs 来说,一切都是 Lisp 数据,因此一切都可以通过编程进行分析和操作。
|
||||
|
||||
这就形成了一个强大的用户界面(UI)。但是,即使你是 Emacs 的日常用户,也可能对它的这些能力知之甚少。下面是你可能没有意识到 Emacs 可以做的六件事。
|
||||
|
||||
## 使用 Tramp mode 进行云端编辑
|
||||
|
||||
Emacs 早在网络流行之前就实现了透明的网络编辑能力,而且时至今日,它仍然提供了最流畅的远程编辑体验。Emacs 中的 [Tramp mode][4](以前称为 RPC mode)是 “Transparent Remote (file) Access, Multiple Protocol”(透明的远程(文件)访问,多协议)的缩写,这准确描述了它提供的功能:通过最流行的网络协议轻松访问你希望编辑的远程文件。目前最流行、最安全的远程编辑协议是 [OpenSSH][5],因此 Tramp 默认使用它。
|
||||
|
||||
在 Emacs 22.1 或更高版本中已经包含了 Tramp,因此要使用 Tramp,只需使用 Tramp 语法打开一个文件。在 Emacs 的 **File** 菜单中,选择 **Open File**。当在 Emacs 窗口底部的小缓冲区中出现提示时,使用以下语法输入文件名:
|
||||
|
||||
```
|
||||
`/ssh:user@example.com:/path/to/file`
|
||||
```
|
||||
|
||||
如果需要交互式登录,Tramp 会提示输入密码。但是,Tramp 直接使用 OpenSSH,所以为了避免交互提示,你可以将主机名、用户名和 SSH 密钥路径添加到您的 `~/.ssh/config` 文件。与 Git 一样,Emacs 首先使用 SSH 配置,只有在出现错误时才会停下来询问更多信息。
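作为示意,这样的一个 `~/.ssh/config` 条目可能如下(主机名、用户名和密钥路径均为假设的占位值):

```
Host example.com
    User user
    IdentityFile ~/.ssh/id_ed25519
```

配置好之后,打开 `/ssh:example.com:/path/to/file` 这样的路径时,Tramp 调用的 OpenSSH 就会自动套用这些设置。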
|
||||
|
||||
Tramp 非常适合编辑计算机上不存在的文件,它的用户体验与编辑本地文件没有明显的区别。下次,当你 SSH 到服务器启动 Vim 或 Emacs 会话时,请尝试使用 Tramp。
|
||||
|
||||
## 日历
|
||||
|
||||
如果你喜欢文本多过图形界面,那么你一定会很高兴地知道,可以使用 Emacs 以纯文本的方式安排你的日程(或生活)。而且你依然可以在移动设备上使用开放源码的 [Org mode][6] 查看器来获得华丽的通知。
|
||||
|
||||
这个过程需要一些配置来创建一个方便的方式来与移动设备同步你的日程(我使用 Git,但你可以调用蓝牙,KDE Connect,Nextcloud,或其他文件同步工具),此外你必须安装一个 Org mode 查看器(如 [Orgzly][7]) 以及移动设备上的 Git 客户程序。但是,一旦你搭建好了这些基础,该流程就会与您常用的(或正在完善的,如果您是新用户 )Emacs 工作流完美地集成在一起。你可以在 Emacs 中方便地查阅日程,更新日程,并专注于任务上。议程上的变化将会反映在移动设备上,因此即使在 Emacs 不可用的时候,你也可以保持条理性。
|
||||
|
||||
![][8]
|
||||
|
||||
感兴趣了?阅读我的关于[使用 Org mode 和 Git 进行日程安排 ][9] 的逐步指南。
|
||||
|
||||
## 访问终端
|
||||
|
||||
有[许多终端模拟器 ][10] 可用。尽管 Emacs 中的 Elisp 终端仿真器不是最强大的通用仿真器,但是它有两个显著的优点。
|
||||
|
||||
1. **在 Emacs 缓冲区中打开:** 我使用 Emacs 的 Elisp shell,因为它可以方便地在 Emacs 窗口中打开,而我经常全屏运行该窗口。这是一个小而重要的优势:只需要输入 `Ctrl+x o`(用 Emacs 符号表示就是 `C-x o`)就能切换到终端;它还有一个特别好的地方,就是在运行漫长的作业时能随时瞥一眼它的状态。
|
||||
2. **在没有系统剪贴板的情况下复制和粘贴特别方便:** 无论是因为懒得把手从键盘移到鼠标,还是因为在远程控制台运行 Emacs 而无法使用鼠标,在 Emacs 中运行终端有时意味着可以很快地在 Emacs 缓冲区和 Bash 之间传输数据。
|
||||
|
||||
|
||||
|
||||
要尝试 Emacs 终端,输入 `Alt+x`(用 Emacs 符号表示就是 `M-x`),然后输入 **shell**,再按 **Return**。
|
||||
|
||||
## 使用 Racket mode
|
||||
|
||||
[Racket][11] 是一种令人激动的新兴 Lisp 方言,拥有动态编程环境、GUI 工具包和热情的社区。学习 Racket 的默认编辑器是 DrRacket,它的顶部是定义面板,底部是交互面板。使用该设置,用户可以编写影响 Racket 运行时的定义。就像旧式的 [Logo Turtle][12] 程序,只不过交互的是一个终端,而不只是一只海龟。
|
||||
|
||||
![Racket-mode][13]
|
||||
|
||||
由 PLT 提供的 LGPL 示例代码
|
||||
|
||||
基于 Lisp 的 Emacs 为资深 Racket 编程人员提供了一个很好的集成开发环境 (IDE)。它还没有自带 [Racket mode][14],但你可以使用 Emacs 包安装程序安装 Racket 模式和辅助扩展。
|
||||
要安装它,按下 `Alt+x`(用 Emacs 符号表示就是 **M-x**),键入 **package-install**,然后按 **Return**。接着输入要安装的包(**racket-mode**),再按 **Return**。
|
||||
|
||||
使用 **M-x racket-mode** 进入 Racket mode。如果你是 Racket 新手,但对 Lisp 或 Emacs 比较熟悉,可以从优秀的[图解 Racket][15] 入手。
|
||||
|
||||
## 脚本
|
||||
|
||||
您可能知道,Bash 脚本在自动化和增强 Linux 或 Unix 体验方面很流行。你可能听说过 Python 在这方面也做得很好。但是你知道 Lisp 脚本可以用同样的方式运行吗?有时人们会对 Lisp 到底有多有用感到困惑,因为许多人是通过 Emacs 来了解 Lisp 的,因此有一种潜在的印象,即在 21 世纪运行 Lisp 的惟一方法是在 Emacs 中运行。幸运的是,事实并非如此,Emacs 是一个很好的 IDE,它支持将 Lisp 脚本作为一般的系统可执行文件来运行。
|
||||
|
||||
除了 Elisp 之外,还有两种流行的现代 lisp 可以很容易地用来作为独立脚本运行。
|
||||
|
||||
1. **Racket:** 你可以通过在系统上安装 Racket 来提供运行 Racket 脚本所需的运行时支持,或者使用 **raco exe** 生成一个可执行文件。**raco exe** 命令将代码和运行时支持文件打包在一起,以创建可执行文件;然后,**raco distribution** 命令将可执行文件打包成可以在其他机器上工作的发行版。Emacs 有许多 Racket 工具,因此在 Emacs 中创建 Racket 文件既简单又有效。
|
||||
|
||||
2. **GNU Guile:** [GNU Guile][16](“GNU Ubiquitous Intelligent Language for Extensions”,即 GNU 通用智能语言扩展)是 [Scheme][17] 编程语言的一个实现,可用于为桌面、互联网、终端等创建应用程序和游戏。Emacs 中的 Scheme 扩展众多,使用其中任何一个来编写 Scheme 都很容易。例如,这里有一个用 Guile 编写的 “Hello world” 脚本:
|
||||
```
|
||||
#!/usr/bin/guile -s
|
||||
|
||||
(display "hello world")
|
||||
(newline)
```

使用 **guile** 命令编译并运行它:

```
$ guile ./hello.scheme
|
||||
;;; compiling /home/seth/./hello.scheme
|
||||
;;; compiled [...]/hello.scheme.go
|
||||
hello world
|
||||
$ guile ./hello.scheme
|
||||
hello world
|
||||
```
|
||||
## 不用 Emacs 运行 Elisp
|
||||
Emacs 可以作为 Elisp 的运行环境,但是你无需按照传统印象中的必须打开 Emacs 来运行 Elisp。`--script` 选项可以让你使用 Emacs 作为引擎来执行 Elisp 脚本而无需运行 Emacs 图形界面(甚至也无需使用终端界面)。下面这个例子中,`-Q` 选项让 Emacs 忽略 `.emacs` 文件从而避免由于执行 Elisp 脚本时产生延迟(若你的脚本依赖于 Emacs 配置中的内容那么请忽略该选项)。
|
||||
|
||||
```
|
||||
emacs -Q --script ~/path/to/script.el
|
||||
```
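作为示意,下面是一个可以用这种方式运行的最小 Elisp 脚本(文件名 `hello.el` 为假设):

```
;; hello.el:通过 emacs --script 运行的最小脚本
(princ "hello from elisp\n")
```

保存后执行 `emacs -Q --script hello.el`,输出会打印到标准输出。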
|
||||
## 下载 Emacs 备忘录
|
||||
Emacs 许多重要功能都不是 Emacs 所独有的:Org mode 既是 Emacs 扩展,也是一种格式标准;流行的 Lisp 方言大多不依赖于具体实现;我们甚至可以在没有可见或可交互的 Emacs 实例的情况下编写和运行 Elisp。因此,若你好奇为什么模糊代码和数据之间的界限能够引发创新和效率,那么 Emacs 是一个很棒的探索工具。
|
||||
|
||||
幸运的是,现在是 21 世纪,Emacs 有了带有传统菜单的图形界面以及大量的文档,因此学习曲线不再像以前那样。然而,要最大化 Emacs 对你的好处,你需要学习它的快捷键。由于 Emacs 支持的每个任务都是一个 Elisp 函数,Emacs 中的任何功能都可以对应一个快捷键,因此要描述所有这些快捷键是不可能完成的任务。你只要学习使用频率 10 倍于不常用功能的那些快捷键即可。
|
||||
|
||||
我们汇聚了最常用的 Emacs 快捷键成为一份 Emacs 备忘录以便你查询。将它挂在屏幕附近或办公室墙上,把它作为鼠标垫也行。让它触手可及经常翻阅一下。每次翻两下可以让你获得十倍的学习效率。而且一旦开始编写自己的函数,你一定不会后悔获取了这个免费的备忘录副本的!
|
||||
|
||||
[这里下载 Emacs 备忘录 ](https://opensource.com/downloads/emacs-cheat-sheet)
|
@ -1,181 +0,0 @@
|
||||
[#]: collector: "lujun9972"
|
||||
[#]: translator: "messon007"
|
||||
[#]: reviewer: " "
|
||||
[#]: publisher: " "
|
||||
[#]: url: " "
|
||||
[#]: subject: "DevOps vs Agile: What's the difference?"
|
||||
[#]: via: "https://opensource.com/article/20/2/devops-vs-agile"
|
||||
[#]: author: "Taz Brown https://opensource.com/users/heronthecli"
|
||||
|
||||
DevOps和敏捷: 究竟有什么区别?
|
||||
======
|
||||
|
||||
两者之间的区别在于开发完毕之后发生的事情。
|
||||
|
||||
![Pair programming][1]
|
||||
|
||||
|
||||
早期, 软件开发并没有特定的管理流程。随后出现了[瀑布开发流程][2], 它提出软件开发活动可以用开发和构建应用所耗费的时间来定义。
|
||||
|
||||
那时候, 由于在开发流程中没有审查环节和权衡考虑, 常常需要花费很长的时间来开发, 测试和部署软件。交付的软件也是带有缺陷和Bug的质量较差的软件, 而且交付时间也不满足要求。那时候软件项目管理的重点是长期计划。
|
||||
|
||||
瀑布流程与[三重约束模型][3]相关, 三重约束模型也称为项目管理三角形。三角形的每一个边代表项目管理三要素的一个要素: **范围, 时间和成本**. 正如[Angelo Baretta写到][4], 三重约束模型认为成本是时间和范围的函数, 这三个约束以一种确定的, 可预测的方式相互作用。如果我们想缩短时间, 就必须增加成本。如果我们想增加范围, 就必须增加成本或时间。
|
||||
|
||||
### 从瀑布流程过渡到敏捷开发
|
||||
|
||||
瀑布流程来源于生产和工程领域, 这些领域适合线性化的流程: 正如房屋封顶之前需要先盖好支撑墙。 相似地, 软件开发问题被认为可以通过提前做好计划来解决。从头到尾,开发流程被路线图清晰地定义,顺从路线图就可以得到最终交付的产品。
|
||||
|
||||
最终, 瀑布模型被认为对软件开发是不利的而且违反人的直觉, 因为直到开发流程的最后才能体现出其价值, 这导致许多项目最终都以失败告终。而且, 在项目结束前客户看不到任何可以工作的软件。
|
||||
|
||||
敏捷采用了一种不同的方法, 它抛弃了规划整个项目, 承诺估计的时间点, 简单的遵循计划。与瀑布流程相反, 它假设和拥抱不确定性。它的理念是以响应变化代替讨论过去, 它认为变更是客户需求的一部分。
|
||||
|
||||
### 敏捷价值观
|
||||
|
||||
敏捷由敏捷宣言代言, 敏捷宣言定义了[12条原则][5]:
|
||||
|
||||
1. 我们最重要的目标,是通过持续不断地及早交付有价值的软件使客户满意。
|
||||
|
||||
2. 欣然面对需求变化,即使在开发后期也一样。
|
||||
|
||||
3. 经常交付可工作的软件,相隔几星期或一两个月,倾向于采取较短的周期。
|
||||
|
||||
4. 业务人员和开发人员必须相互合作,项目中的每一天都不例外。
|
||||
|
||||
5. 激发个体的斗志,以他们为核心搭建项目。提供所需的环境和支援,辅以信任,从而达成目标。
|
||||
|
||||
6. 面对面沟通是传递信息的最佳的也是效率最高的方法。
|
||||
|
||||
7. 可工作的软件是进度的首要度量标准。
|
||||
|
||||
8. 敏捷流程倡导可持续的开发,责任人、开发人员和用户要能够共同维持其步调稳定延续。
|
||||
|
||||
9. 坚持不懈地追求技术卓越和良好设计,敏捷能力由此增强。
|
||||
|
||||
10. 以简洁为本,它是极力减少不必要工作量的艺术。
|
||||
|
||||
11. 最好的架构、需求和设计出自自组织团队。
|
||||
|
||||
12. 团队定期地反思如何能提高成效,并依此调整自身的举止表现。
|
||||
|
||||
|
||||
|
||||
敏捷的四个[核心价值观][6]是:
|
||||
|
||||
* **个体和互动** 高于 流程和工具
|
||||
* **工作的软件** 高于 详尽的文档
|
||||
* **客户合作** 高于 合同谈判
|
||||
* **响应变化** 高于 遵循计划
|
||||
|
||||
|
||||
这与瀑布流程死板的计划风格相反。在敏捷流程中,客户是开发团队的一员,而不仅仅是在项目开始时参与项目需求的定义, 在项目结束时验收最终的产品。客户帮忙团队完成[准入标准][7],保持参与整个流程。另外,敏捷需要整个组织的变化和持续的改进。开发团队和其他团队一起合作, 包括项目管理团队和测试团队。做什么和计划什么时候做由指定的角色领导, 并由整个团队同意。
|
||||
|
||||
### 敏捷软件开发
|
||||
|
||||
敏捷软件开发需要自适应的规划,演进式的开发和交付。许多软件开发方法,框架和实践遵从敏捷的理念,包括:
|
||||
|
||||
* Scrum
|
||||
* Kanban (可视化工作流)
|
||||
* XP(极限编程)
|
||||
* 精益
|
||||
* DevOps
|
||||
* 特性驱动开发(FDD)
|
||||
* 测试驱动开发(TDD)
|
||||
* 水晶方法
|
||||
* 动态系统开发方法(DSDM)
|
||||
* 自适应软件开发(ASD)
|
||||
|
||||
|
||||
|
||||
所有这些已经被单独或组合地用于开发和部署软件。最常用的是 [scrum][8]、kanban(或 scrumban)和 DevOps。
|
||||
|
||||
[Scrum][9]是一个框架, 采用该框架的团队通常由一个scrum教练, 产品经理和开发人员组成, 该功能团队采用自主的工作方式, 能够加快软件交付速度从而给客户带来巨大的商业价值。其关注点是[较小增量][10]的快速迭代。
|
||||
|
||||
[Kanban][11]是一个敏捷框架,有时也叫工作流管理系统,它能帮助团队可视化他们的工作从而最大化效率。Kanban通常由数字或物理展示板来呈现。团队的工作移到展示板上,例如未启动,进行中, 测试中, 已完成。Kanban使得每个团队成员可以随时看到所有工作的状态。
|
||||
|
||||
### DevOps价值观
|
||||
|
||||
DevOps是一种文化, 思维状态, 一种软件开发的方式或者基础设施开发的方式,一种软件和应用被构建和部署的方式。它假设开发和运维之间没有隔阂, 他们一起合作,没有矛盾。
|
||||
|
||||
DevOps基于两个其他实践: 精益和敏捷。DevOps不是一个公司内的岗位或角色;它是一个组织或团队对持续交付,持续部署和持续集成的坚持不懈的追求。[Gene Kim][12](Phoenix项目和Unicorn项目的作者)认为,有三种方式定义DevOps的理念:
|
||||
|
||||
* 第一种: 自动化流程
|
||||
* 第二种: 快速反馈
|
||||
* 第三种: 持续学习
|
||||
|
||||
|
||||
|
||||
### DevOps软件开发
|
||||
|
||||
DevOps不会凭空产生;它是一种灵活的实践,它的本质是一种关于软件开发和IT基础设施实施的共享文化和思想。
|
||||
|
||||
当你想到自动化,云,微服务时,你会想到DevOps。在一次[访谈][13]中, "*加速构建和扩张高性能技术组织*"的作者Nicol Forsgren, Jez Humble和Gene Kim这样解释到:
|
||||
|
||||
> * 软件交付能力关系到甚至极大地影响到组织结果例如利润,市场份额,质量,客户满意度以及达成组织的战略目标。
|
||||
> * 优秀的团队能达到很高的交付量,稳定性和质量;他们并没有为了获得这些属性而进行取舍。
|
||||
> * 你可以通过实施精益,敏捷和DevOps中的实践来提升能力。
|
||||
> * 实施这些实践和能力也会影响你的组织文化,并且会进一步对你的软件交付能力和组织能力产生有益的提升。
|
||||
> * 懂得怎样改进能力需要做很多工作。
|
||||
>
|
||||
|
||||
|
||||
### DevOps和敏捷的对比
|
||||
|
||||
DevOps和敏捷有相似性,但是他们不完全相同,一些人认为DevOps比敏捷更好。为了避免造成混淆,深入地了解他们是很重要的。
|
||||
|
||||
#### 相似之处
|
||||
|
||||
* 毫无疑问,两者都是软件开发技术;
|
||||
* 敏捷已经存在了20多年,DevOps是最近才出现的。
|
||||
* 两者都追求软件的快速开发,它们的理念都基于怎样在不伤害客户或运维利益的情况下快速开发出软件。
|
||||
|
||||
|
||||
|
||||
#### 不同之处
|
||||
|
||||
* **两者的差异** 在于软件开发完成后发生的事情。
|
||||
* 在DevOps和敏捷中,软件开发,测试和部署的阶段都相同。然而,敏捷流程在这三个阶段之后会终止。相反,DevOps包括后续持续的运维。因此,DevOps会持续的监控软件运行情况和进行持续的开发。
|
||||
* 敏捷中,不同的人负责软件的开发,测试和部署。而DevOps的工程角色负责所有活动,开发即运维,运维即开发。
|
||||
* DevOps更关注于削减成本,而敏捷则是精益和减少浪费的代名词,侧重于像敏捷项目会计和最小可行产品的概念。
|
||||
* **敏捷专注于并体现了经验主义(适应,透明和检查),而不是预测性措施。**
|
||||
|
||||
敏捷 | DevOps
---|---
从客户得到反馈 | 从自己得到反馈
较小的发布周期 | 较小的发布周期,立即反馈
聚焦于速度 | 聚焦于速度和自动化
对业务不是最好 | 对业务最好
|
||||
|
||||
### 总结
|
||||
|
||||
敏捷和DevOps是截然不同的,尽管它们的相似之处使人们认为它们是相同的。 这对敏捷和DevOps都是一种伤害。
|
||||
|
||||
根据我作为一名敏捷专家的经验,我发现对于组织和团队从高层次上了解敏捷和DevOps是什么,以及它们如何帮助团队更高效地工作,更快地交付高质量产品从而提高客户满意度非常有价值。
|
||||
|
||||
敏捷和DevOps绝不是对抗性的(或至少没有这个意图)。在敏捷革命中,他们更像是盟友而不是敌人。敏捷和DevOps可以相互协作一致对外,因此可以在相同的场合共存。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/2/devops-vs-agile
|
||||
|
||||
作者:[Taz Brown][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[messon007](https://github.com/messon007)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/heronthecli
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/collab-team-pair-programming-code-keyboard.png?itok=kBeRTFL1 "Pair programming"
|
||||
[2]: http://www.agilenutshell.com/agile_vs_waterfall
|
||||
[3]: https://en.wikipedia.org/wiki/Project_management_triangle
|
||||
[4]: https://www.pmi.org/learning/library/triple-constraint-erroneous-useless-value-8024
|
||||
[5]: https://agilemanifesto.org/principles.html
|
||||
[6]: https://agilemanifesto.org/
|
||||
[7]: https://www.productplan.com/glossary/acceptance-criteria/
|
||||
[8]: https://opensource.com/article/19/8/scrum-vs-kanban
|
||||
[9]: https://www.scrum.org/
|
||||
[10]: https://www.scrum.org/resources/what-is-an-increment
|
||||
[11]: https://www.atlassian.com/agile/kanban
|
||||
[12]: https://itrevolution.com/the-unicorn-project/
|
||||
[13]: https://www.infoq.com/articles/book-review-accelerate/
|
@ -0,0 +1,106 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (HankChow)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to write effective documentation for your open source project)
|
||||
[#]: via: (https://opensource.com/article/20/3/documentation)
|
||||
[#]: author: (Kevin Xu https://opensource.com/users/kevin-xu)
|
||||
|
||||
如何为你的开源项目编写实用的文档
|
||||
======
|
||||
一份优质的文档可以让很多用户对你的项目路人转粉。
|
||||
|
||||
![A pink typewriter][1]
|
||||
|
||||
好的代码很多时候并不代表一切。或许你能用最精巧的代码解决了世界上最迫切需要解决的问题,但如果你作为一个开源开发者,没能用准确的语言将你的作品公之于世,你的代码也只能成为沧海遗珠。因此,技术写作和文档编写是很重要的技能。
|
||||
|
||||
一般来说,项目中的文档是最受人关注的部分,很多用户会通过文档来决定自己是否应该对某个项目开始学习或研究。所以,我们不能忽视技术写作和文档编写的工作,尤其要重点关注其中的“入门”部分,这会对你项目的发展起到关键性的作用。
|
||||
|
||||
对于很多人来说,写作是一件令人厌烦甚至恐惧的事情。我们这些工程师出身的人,学习“写代码”比学习“为代码写文档”明显更多。不少人会把英语作为自己的第二语言或者第三语言,他们可能会对英语写作感到不安全甚至害怕(我的母语是汉语,英语是作为我的第二语言学习的,所以我也能感受到这种痛苦)。
|
||||
|
||||
但如果你希望自己的项目能在全球范围内产生一定的影响力,英语就是你必须使用的语言,这是一个无法避免的现实。但不必害怕,我在写这篇文章的时候就考虑到了这些可能带来的挑战,并给出了我的一些建议。
|
||||
|
||||
### 五条有用的写作建议
|
||||
|
||||
这五条建议你马上就可以用起来,尽管看起来似乎有些浅显,但在技术写作时却经常被忽视。
|
||||
|
||||
1. 使用[主动语态][2]:感受一下主动语态下的“你可以这样更改配置(You can change these configurations by…)”和被动语态下的“配置可以这样更改(These configurations can be changed by…)”有什么不同之处。
|
||||
2. 使用简洁明了的句子:可以借助 [Hemingway App][3] 或者 [Grammarly][4] 这样的工具,尽管它们并不开源。
|
||||
3. 保持条理性:你可以在文档中通过写标题、划重点、引链接等方式,把各类信息划分为不同的部分,避免将所有内容都杂糅在一大段冗长的文字当中。
|
||||
4. 提高可读性:除了单纯的文字之外,运用图表也是从多种角度表达的手段之一。
|
||||
5. 注意拼写和语法:必须记得检查文档中是否有拼写错误或者语法错误。
|
||||
|
||||
只要在文档的写作和编辑过程中应用到这些技巧,你就能够和读者建立起沟通和信任。
|
||||
|
||||
* 高效沟通:对于工程师们来说,阅读长篇大论的冗长文字,还不如去看小说。在阅读技术文档时,他们总是希望能够从中快速准确地获取到有用的信息。因此,技术文档的最佳风格应该是精简而有效的。不过这并不代表文档中不能出现幽默、emoji 甚至段子,这些元素可以让你的文档更有个性、更使人印象深刻。当然,具体的实现方式就因人而异了。
|
||||
* 建立信任:你需要取得文档读者们的信任,这在一个项目的前期尤为重要。读者对你的信任除了来源于你代码的质量,还跟你文档编写的质量有关。所以你不仅要打磨代码,还要润色好相关的文档,这也是上面第 5 点建议拼写和语法检查的原因。
|
||||
|
||||
### 如何开始编写文档
|
||||
|
||||
现在,最需要花费功夫的应该就是“入门”部分了,这是一篇技术文档最重要的部分,[二八定律][5]在这里得到了充分体现:访问一个项目的大部分流量都会落在项目文档上,而访问项目文档的大部分流量则会落在文档的“入门”部分中。因此,如果文档的“入门”部分写得足够好,项目就会吸引到很多用户,反之,用户会对你的项目敬而远之。
|
||||
|
||||
那么如何写好“入门”部分呢?我建议按照以下三步走:
|
||||
|
||||
1. 任务化:入门指南应该以任务为导向。这里的任务指的是对于开发者来说可以完成的离散的小项目,而不应该包含太多涉及到体系结构、核心概念等的抽象信息,因此在“入门”部分只需要提供一个简单明了的概述就可以了。也不要在“入门”部分大谈这个项目如何优秀地解决了问题,这个话题可以放在文档中别的部分进行说明。总而言之,“入门”部分最好是给出一些主要的操作步骤,这样显得开门见山。
|
||||
2. 30 分钟内能够完成:这一点的核心是耗时尽可能短,不宜超过 30 分钟,这个时间上限是考虑到用户可能对你的项目并不了解。这一点很重要,大部分愿意浏览文档的人都是有技术基础的,但对你的项目也仅仅是一知半解。首先让这些读者尝试进行一些相关操作,在收到一定效果后,他们才会愿意花更多时间深入研究整个项目。因此,你可以从耗时这个角度来评估你的文档“入门”部分有没有需要改进之处。
|
||||
3. 有意义的任务:这里“有意义”的含义取决于你的开源项目。最重要的是认真思考并将“入门”部分严格定义为一项任务,然后交给你的读者去完成。这个项目的价值应该在这项有意义的任务中有所体现,不然读者可能会感觉这是一个浪费时间的行为。
|
||||
|
||||
提示:假如你的项目是一个分布式数据库,那么达到“整个集群在某些节点故障的情况下可以不中断地保持可用”的目标就可以认为是“有意义”的;假如你的项目是一个数据分析工具或者商业智能工具,“有意义”的目标也可以是“加载数据后能快速生成多种可视化效果的仪表板”。总之,无论你的项目以什么目标为“有意义”,都应该能在笔记本电脑上本地快速实现。
|
||||
|
||||
[Linkerd 入门][6]就是一个很好的例子。Linkerd 是 Kubernetes 的开源<ruby>服务网格<rt>Service Mesh</rt></ruby>,当时我对 Kubernetes 了解并不多,也不熟悉服务网格。但我在自己的笔记本电脑上很轻松地就完成了其中的任务,同时也加深了对服务网格的理解。
|
||||
|
||||
上面提到的三步过程是一个很有用的框架,对一篇文档“入门”部分的设计和量化评估很有帮助。今后你如果想将你的[开源项目产品化][7],这个框架还可能对<ruby>实现价值的时间<rt>time-to-value</rt></ruby>产生影响。
|
||||
|
||||
### 其它核心部分
|
||||
|
||||
认真写好“入门”部分之后,你的文档中还需要有这五个部分:架构设计、生产环境使用指导、使用案例、参考资料以及未来展望,这五个部分在一份完整的文档中是必不可少的。
|
||||
|
||||
* 架构设计:这一部分需要深入探讨整个项目架构设计的依据,“入门”部分中一笔带过的那些关键细节就应该在这里体现。在产品化过程中,这个部分将会是[产品推广计划][8]的核心,因此通常会包含一些可视化呈现的内容,期望的效果是让更多用户长期参与到项目中来。
|
||||
* 生产环境使用指导:对于同一个项目,在生产环境中部署比在笔记本电脑上部署要复杂得多。因此,指导用户认真使用就尤为重要。同时,有些用户可能对项目很感兴趣,但对生产环境下的使用有所顾虑,而指导和展示的过程则正好能够吸引到这类潜在的用户。
|
||||
* 使用案例:<ruby>社会认同<rt>social proof</rt></ruby>的力量是有目共睹的,所以很有必要列出正在生产环境使用这个项目的其他用户,并把这些信息摆放在显眼的位置。这个部分的浏览量甚至仅次于“入门”部分。
|
||||
* 参考资料:这个部分是对项目的一些详细说明,让用户得以进行详细的研究以及查阅相关信息。一些开源作者会在这个部分事无巨细地列出项目中的每一个细节和<ruby>边缘情况<rt>edge case</rt></ruby>,这种做法可以理解,但不推荐在项目初期就在这个部分花费过多的时间。你可以采取更折中的方式,在质量和效率之间取得平衡,例如提供一些相关社区的链接、Stack Overflow 上的标签或单独的 FAQ 页面。
|
||||
* 未来展望:你需要制定一个简略的时间表,规划这个项目的未来发展方向,这会让用户长期保持兴趣。尽管项目在当下可能并不完美,但要让用户知道你仍然有完善这个项目的计划。这个部分也能让整个社区构建一个强大的生态,因此还要向用户提供表达他们对未来展望的看法的交流区。
|
||||
|
||||
以上这几个部分或许还没有在你的文档中出现,甚至可能会在后期才能出现,尤其是“使用案例”部分。尽管如此,还是应该在文档中逐渐加入这些部分。如果用户对“入门”部分已经感觉良好,那以上这几个部分将会提起用户更大的兴趣。
|
||||
|
||||
最后,请在“入门”部分、README 文件或其它显眼的位置注明整个项目所使用的许可证。这个细节会让你的项目更容易通过终端用户的审核。
|
||||
|
||||
### 花 20% 的时间写作
|
||||
|
||||
一般情况下,我建议把整个项目 10% 到 20% 的时间用在文档写作上。也就是说,如果你是全职进行某一个项目的,文档写作需要在其中占半天到一天。
|
||||
|
||||
再细致一点,应该将写作纳入到常规的工作流程中,这样它就不再是一件孤立的琐事,而是日常的事务。文档写作应该随着工作进度同步进行,切忌将所有写作任务都堆积起来最后完成,这样才可以帮助你的项目达到最终目标:吸引用户、获得信任。
|
||||
|
||||
* * *
|
||||
|
||||
_特别鸣谢云原生计算基金会的布道师 [Luc Perkins][9] 给出的宝贵意见。_
|
||||
|
||||
_本文首发于_ _[COSS Media][10]_ _并经许可发布。_
|
||||
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/3/documentation
|
||||
|
||||
作者:[Kevin Xu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[HankChow](https://github.com/HankChow)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/kevin-xu
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-docdish-typewriter-pink.png?itok=OXJBtyYf (A pink typewriter)
|
||||
[2]: https://www.grammar-monster.com/glossary/active_voice.htm
|
||||
[3]: http://www.hemingwayapp.com/
|
||||
[4]: https://www.grammarly.com/
|
||||
[5]: https://en.wikipedia.org/wiki/Pareto_principle
|
||||
[6]: https://linkerd.io/2/getting-started/
|
||||
[7]: https://opensource.com/article/19/11/products-open-source-projects
|
||||
[8]: https://opensource.com/article/20/2/product-marketing-open-source-project
|
||||
[9]: https://twitter.com/lucperkins
|
||||
[10]: https://coss.media/open-source-documentation-technical-writing-101/
|
333
translated/tech/20200312 Make SSL certs easy with k3s.md
Normal file
@ -0,0 +1,333 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Make SSL certs easy with k3s)
|
||||
[#]: via: (https://opensource.com/article/20/3/ssl-letsencrypt-k3s)
|
||||
[#]: author: (Lee Carpenter https://opensource.com/users/carpie)
|
||||
|
||||
用 k3s 轻松管理 SSL 证书
|
||||
======
|
||||
|
||||
> 如何在树莓派上使用 k3s 和 Let's Encrypt 来加密你的网站。
|
||||
|
||||
![Files in a folder][1]
|
||||
|
||||
在[上一篇文章][2]中,我们在 k3s 集群上部署了几个简单的网站。那些是未加密的网站。很好,它们可以工作,但是未加密的网站有点太过时了!如今,大多数网站都是加密的。在本文中,我们将安装 [cert-manager][3] 并将其用于在集群上以部署采用 TLS 加密的网站。这些网站不仅会被加密,而且还会使用有效的公共证书,这些证书会从 [Let's Encrypt][4] 自动获取和更新!让我们开始吧!
|
||||
|
||||
### 所需材料
|
||||
|
||||
要继续阅读本文,你将需要我们在上一篇文章中构建的 [k3s 树莓派集群][5]。另外,你需要拥有一个公用静态 IP 地址,并有一个可以为其创建 DNS 记录的域名。如果你有一个动态 DNS 提供程序为你提供域名,可能也行。但是,在本文中,我们将使用静态 IP 和 [CloudFlare][6] 来手动创建 DNS 的 A 记录。
|
||||
|
||||
我们在本文中创建配置文件时,如果你不想键入它们,则可以在[此处][7]进行下载。
|
||||
|
||||
### 我们为什么使用 cert-manager?
|
||||
|
||||
Traefik(预先捆绑了 k3s)实际上具有内置的 Let's Encrypt 支持,因此你可能想知道为什么我们要安装第三方软件包来做同样的事情。在撰写本文时,Traefik 中的 Let's Encrypt 支持检索证书并将其存储在文件中。cert-manager 会检索证书并将其存储在 Kubernetes 的 “<ruby>机密信息<rt>secrets</rt></ruby>” 中。我认为,“机密信息”可以简单地按名称引用,因此更易于使用。这就是我们在本文中使用 cert-manager 的主要原因。
|
||||
|
||||
### 安装 cert-manager
|
||||
|
||||
通常,我们只是遵循 cert-manager 的[文档][8]在 Kubernetes 上进行安装。但是,由于我们使用的是 ARM 体系结构,因此我们需要进行一些更改,以便我们可以完成这个操作。
|
||||
|
||||
第一步是创建 cert-manager 命名空间。命名空间有助于将 cert-manager 的<ruby>吊舱<rt>Pod</rt></ruby>排除在我们的默认命名空间之外,因此当我们使用自己的“吊舱”执行 `kubectl get pods` 之类的操作时,我们不必看到它们。创建名称空间很简单:
|
||||
|
||||
```
|
||||
kubectl create namespace cert-manager
|
||||
```
|
||||
|
||||
这份安装说明会告诉你下载 cert-manager 的 YAML 配置文件并将其一步全部应用到你的集群。我们需要将其分为两个步骤,以便为基于 ARM 的树莓派修改文件。我们将下载文件并一步一步进行转换:
|
||||
|
||||
```
|
||||
curl -sL \
|
||||
https://github.com/jetstack/cert-manager/releases/download/v0.11.0/cert-manager.yaml |\
|
||||
sed -r 's/(image:.*):(v.*)$/\1-arm:\2/g' > cert-manager-arm.yaml
|
||||
```
|
||||
|
||||
这会下载配置文件,并将所有包含的 docker 镜像更新为 ARM 版本。来检查一下它做了什么:
|
||||
|
||||
|
||||
```
|
||||
$ grep image: cert-manager-arm.yaml
|
||||
image: "quay.io/jetstack/cert-manager-cainjector-arm:v0.11.0"
|
||||
image: "quay.io/jetstack/cert-manager-controller-arm:v0.11.0"
|
||||
image: "quay.io/jetstack/cert-manager-webhook-arm:v0.11.0"
|
||||
```
|
||||
|
||||
如我们所见,三个镜像现在在镜像名称上添加了 `-arm`。现在我们有了正确的文件,我们只需将其应用于集群:
|
||||
|
||||
```
|
||||
kubectl apply -f cert-manager-arm.yaml
|
||||
```
|
||||
|
||||
这将安装所有的 cert-manager。我们可以通过 `kubectl --namespace cert-manager get pods` 来检查安装何时完成,直到所有“吊舱”都处于 `Running` 状态。
|
||||
|
||||
这实际上就完成了 cert-manager 的安装!
|
||||
|
||||
### Let's Encrypt 概述
|
||||
|
||||
Let's Encrypt 的好处是,它免费为我们提供经过公共验证的 TLS 证书!这意味着像家庭实验室或业余爱好项目这类不挣钱的活动,也可以拥有完全有效、任何人都可以访问的 TLS 加密网站,而无需自己掏腰包购买 TLS 证书!而且,当通过 cert-manager 使用 Let's Encrypt 的证书时,获取证书的整个过程是自动化的,证书的续订也是自动的!
|
||||
|
||||
但它是如何工作的?下面是该过程的简化说明。我们(或代表我们的 cert-manager)向 Let's Encrypt 发出我们拥有的域名的证书请求。Let's Encrypt 通过使用 ACME DNS 或 HTTP 验证机制来验证我们是否拥有该域。如果验证成功,则 Let's Encrypt 将向我们提供证书,这些证书将由 cert-manager 安装在我们的网站(或其他 TLS 加密的终结点)中。在需要重复此过程之前,这些证书可以使用 90 天。但是,cert-manager 会自动为我们更新证书。
|
||||
|
||||
在本文中,我们将使用 HTTP 验证方法,因为它更易于设置并且适用于大多数情况。以下是幕后将发生的基本过程。cert-manager 将向 Let's Encrypt 发出证书请求。作为回应,Let's Encrypt 将发出所有权验证的<ruby>质询<rt>challenges</rt></ruby>。这个质询是将一个 HTTP 资源放在请求证书的域名下的一个特定 URL 上。从理论上讲,如果我们可以将该资源放在该 URL 上,并且让 Let's Encrypt 可以远程获取它,那么我们实际上必须是该域的所有者。否则,要么我们无法将资源放置在正确的位置,要么我们无法操纵 DNS 以使 Let's Encrypt 访问它。在这种情况下,cert-manager 会将资源放在正确的位置,并自动创建一个临时的 `Ingress` 记录,以将流量路由到正确的位置。如果 Let's Encrypt 可以读到该质询要求的资源并正确无误,它将把证书发回给 cert-manager。然后,cert-manager 将证书存储为“机密信息”,然后我们的网站(或其他任何网站)将使用这些证书通过 TLS 保护我们的流量。
|
||||
|
||||
### 为该质询设置网络
|
||||
|
||||
我假设你要在家庭网络上进行设置,并拥有一个以某种方式连接到更广泛的互联网的路由器/接入点。如果不是这种情况,则可能不需要以下过程。
|
||||
|
||||
为了使质询过程正常运行,我们需要一个我们要申请证书的域名,以将其路由到端口 80 上的 k3s 集群。为此,我们需要告诉世界上的 DNS 系统它的位置。因此,我们需要将域名映射到我们的公共 IP 地址。如果你不知道你的公共 IP 地址是什么,可以访问 [WhatsMyIP][9] 之类的地方,它会告诉你。接下来,我们需要输入 DNS 的 A 记录,该记录将我们的域名映射到我们的公共 IP 地址。为了使此功能可靠地工作,你需要一个静态的公共 IP 地址,或者你可以使用动态 DNS 提供商。一些动态 DNS 提供商会向你颁发一个域名,你可以按照以下说明使用它。我没有尝试过,所以不能肯定地说它适用于所有提供商。
|
||||
|
||||
对于本文,我们将假设有一个静态公共 IP 并使用 CloudFlare 来设置 DNS 的 A 记录。如果愿意,可以使用自己的 DNS 提供程序。重要的是你可以设置 A 记录。
|
||||
|
||||
在本文的其余部分中,我将使用 [k3s.carpie.net][10] 作为示例域,因为这是我拥有的域。你显然会用自己拥有的任何域替换它。
|
||||
|
||||
为示例起见,假设我们的公共 IP 地址是 198.51.100.42。我们将转到我们的 DNS 提供商的 DNS 记录部分,并添加一个名为 [k3s.carpie.net][10] 的类型为 `A` 的记录(CloudFlare 已经假定了域的部分,因此我们只需输入 `k3s`),然后输入 `198.51.100.42` 作为 IPv4 地址。
|
||||
|
||||
![][11]
|
||||
|
||||
请注意,有时 DNS 更新要传播一段时间。你可能需要几个小时才能解析该名称。在继续之前该名称必须可以解析。否则,我们所有的证书请求都将失败。
|
||||
|
||||
我们可以使用 `dig` 命令检查名称是否解析:
|
||||
|
||||
```
|
||||
$ dig +short k3s.carpie.net
|
||||
198.51.100.42
|
||||
```
|
||||
|
||||
继续运行以上命令,直到可以返回 IP 才行。关于 CloudFlare 有个小注释:ClouldFlare 提供了通过代理流量来隐藏你的实际 IP 的服务。在这种情况下,我们取回的是 CloudFlare 的 IP,而不是我们的 IP。 但对于我们的目的,这应该可以正常工作。
|
||||
|
||||
网络配置的最后一步是配置路由器,将端口 80 和 443 上的传入流量路由到我们的 k3s 集群。可惜的是,各家路由器的配置页面差异很大,因此我无法确切说明你的页面长什么样子。大多数时候,我们需要的管理页面位于“端口转发”或类似名称之下。我甚至见过它被列在“游戏”之下(显然端口转发主要被用在游戏上)!让我们看看我的路由器是如何配置的。
|
||||
|
||||
![][12]
|
||||
|
||||
如果你和我的设置一样,则访问 192.168.0.1 登录路由器管理应用。对于这台路由器,该设置位于 “NAT/QoS” -> “端口转发”。在这里,我们将端口 80/TCP 设置为转发到 192.168.0.50(主节点 `kmaster` 的 IP)的端口 80。我们还将端口 443 也映射到 `kmaster`。从技术上讲,这对于质询来说并不是必需的,但是在本文的结尾,我们将部署一个启用 TLS 的网站,需要映射 443 才能访问,因此现在一并映射很方便。保存并应用更改,应该就大功告成了!
|
||||
|
||||
### 配置 cert-manager 来使用 Let's Encrypt(暂存环境)
|
||||
|
||||
现在,我们需要配置 cert-manager 来通过 Let's Encrypt 颁发证书。Let's Encrypt 为我们提供了一个暂存(即测试用)环境,以便审视我们的配置,它对错误和请求频率的容忍度更高。如果我们在生产环境做了错误的操作,很快就会发现自己被暂时禁止访问了!因此,我们将使用暂存环境手动测试请求。
|
||||
|
||||
创建一个文件 `letsencrypt-issuer-staging.yaml`,内容如下:
|
||||
|
||||
```
|
||||
apiVersion: cert-manager.io/v1alpha2
|
||||
kind: ClusterIssuer
|
||||
metadata:
|
||||
name: letsencrypt-staging
|
||||
spec:
|
||||
acme:
|
||||
# The ACME server URL
|
||||
server: https://acme-staging-v02.api.letsencrypt.org/directory
|
||||
# Email address used for ACME registration
|
||||
email: <your_email>@example.com
|
||||
# Name of a secret used to store the ACME account private key
|
||||
privateKeySecretRef:
|
||||
name: letsencrypt-staging
|
||||
# Enable the HTTP-01 challenge provider
|
||||
solvers:
|
||||
- http01:
|
||||
ingress:
|
||||
class: traefik
|
||||
```
|
||||
|
||||
请确保将电子邮件地址更新为你的地址。如果出现问题或我们弄坏了一些东西,这就是 Let's Encrypt 与我们联系的方式!
|
||||
|
||||
现在,我们使用以下方法创建发行者:
|
||||
|
||||
```
|
||||
kubectl apply -f letsencrypt-issuer-staging.yaml
|
||||
```
|
||||
|
||||
我们可以使用以下方法检查发行者是否已成功创建:
|
||||
|
||||
```
|
||||
kubectl get clusterissuers
|
||||
```
|
||||
|
||||
`clusterissuers` 是由 cert-manager 创建的一种新的 Kubernetes 资源类型。
|
||||
|
||||
现在让我们手动请求一个测试证书。对于我们的网站,我们不需要这样做;我们只是在测试这个过程,以确保我们的配置正确。
|
||||
|
||||
创建一个包含以下内容的证书请求文件 `le-test-certificate.yaml`:
|
||||
|
||||
|
||||
```
|
||||
apiVersion: cert-manager.io/v1alpha2
|
||||
kind: Certificate
|
||||
metadata:
|
||||
name: k3s-carpie-net
|
||||
namespace: default
|
||||
spec:
|
||||
secretName: k3s-carpie-net-tls
|
||||
issuerRef:
|
||||
name: letsencrypt-staging
|
||||
kind: ClusterIssuer
|
||||
commonName: k3s.carpie.net
|
||||
dnsNames:
|
||||
- k3s.carpie.net
|
||||
```
|
||||
|
||||
该记录仅表示我们要使用名为 `letsencrypt-staging`(我们在上一步中创建的)的 `ClusterIssuer` 来为域 [k3s.carpie.net][10] 申请证书,并将证书存储在名为 `k3s-carpie-net-tls` 的 Kubernetes 机密信息中。
|
||||
|
||||
像平常一样应用它:
|
||||
|
||||
```
|
||||
kubectl apply -f le-test-certificate.yaml
|
||||
```
|
||||
|
||||
我们可以通过以下方式查看状态:
|
||||
|
||||
```
|
||||
kubectl get certificates
|
||||
```
|
||||
|
||||
如果我们看到类似以下内容:
|
||||
|
||||
```
|
||||
NAME READY SECRET AGE
|
||||
k3s-carpie-net True k3s-carpie-net-tls 30s
|
||||
```
|
||||
|
||||
一切顺利!(这里的关键是 `READY` 为 `True`)。
|
||||
|
||||
### 解决证书颁发问题
|
||||
|
||||
以上是一切顺利的情形。如果 `READY` 为 `False`,可以稍等片刻再检查一次状态。如果它一直是 `False`,那么就有一个需要解决的问题了。此时,我们可以沿着 Kubernetes 资源链逐级排查,直到找到一条指出问题所在的状态消息。
|
||||
|
||||
假设我们执行了上面的请求,而 `READY` 为 `False`。我们可以从以下方面开始故障排除:
|
||||
|
||||
```
|
||||
kubectl describe certificates k3s-carpie-net
|
||||
```
|
||||
|
||||
这将返回很多信息。通常,有用的内容位于 `Events:` 部分,一般在输出的底部。假设最后一个事件是 `Created new CertificateRequest resource "k3s-carpie-net-1256631848"`,接下来我们<ruby>描述<rt>describe</rt></ruby>一下该请求:
|
||||
|
||||
```
|
||||
kubectl describe certificaterequest k3s-carpie-net-1256631848
|
||||
```
|
||||
|
||||
现在比如说最后一个事件是 `Waiting on certificate issuance from order default/k3s-carpie-net-1256631848-2342473830`。
|
||||
|
||||
那么,我们可以描述该<ruby>订单<rt>order</rt></ruby>:
|
||||
|
||||
```
|
||||
kubectl describe orders default/k3s-carpie-net-1256631848-2342473830
|
||||
```
|
||||
|
||||
假设有一个事件,事件为 `Created Challenge resource "k3s-carpie-net-1256631848-2342473830-1892150396" for domain "k3s.carpie.net"`。让我们描述一下该质询:
|
||||
|
||||
```
|
||||
kubectl describe challenges k3s-carpie-net-1256631848-2342473830-1892150396
|
||||
```
|
||||
|
||||
从这里返回的最后一个事件是 `Presented challenge using http-01 challenge mechanism`。看起来没问题,因此我们浏览一下描述的输出,并看到一条消息 `Waiting for http-01 challenge propagation: failed to perform self check GET request … no such host`。终于!我们发现了问题!在这种情况下,`no such host` 意味着 DNS 查找失败,因此我们需要回头手动检查 DNS 设置,确认域名能被正确解析,并做出所需的更改。
|
||||
|
||||
### 清理我们的测试证书
|
||||
|
||||
我们实际上想要使用的是域名的真实证书,所以让我们继续清理证书和我们刚刚创建的机密信息:
|
||||
|
||||
```
|
||||
kubectl delete certificates k3s-carpie-net
|
||||
kubectl delete secrets k3s-carpie-net-tls
|
||||
```
|
||||
|
||||
### 配置 cert-manager 以使用 Let's Encrypt(生产环境)
|
||||
|
||||
现在我们已经有了测试证书,是时候移动到生产环境了。就像我们在 Let's Encrypt 暂存环境中配置 cert-manager 一样,我们现在也需要对生产环境进行同样的操作。创建一个名为 `letsencrypt-issuer-production.yaml` 的文件(如果需要,可以复制和修改暂存环境的文件),其内容如下:
|
||||
|
||||
```
|
||||
apiVersion: cert-manager.io/v1alpha2
|
||||
kind: ClusterIssuer
|
||||
metadata:
|
||||
name: letsencrypt-prod
|
||||
spec:
|
||||
acme:
|
||||
# The ACME server URL
|
||||
server: https://acme-v02.api.letsencrypt.org/directory
|
||||
# Email address used for ACME registration
|
||||
email: <your_email>@example.com
|
||||
# Name of a secret used to store the ACME account private key
|
||||
privateKeySecretRef:
|
||||
name: letsencrypt-prod
|
||||
# Enable the HTTP-01 challenge provider
|
||||
solvers:
|
||||
- http01:
|
||||
ingress:
|
||||
class: traefik
|
||||
```
|
||||
|
||||
(如果要从暂存环境进行复制,则唯一的更改是 `server:` URL。也请不要忘记修改电子邮件!)
|
||||
|
||||
应用它:
|
||||
|
||||
```
|
||||
kubectl apply -f letsencrypt-issuer-production.yaml
|
||||
```
|
||||
|
||||
### 申请我们网站的证书
|
||||
|
||||
重要的是,我们到目前为止完成的所有步骤都是一次性设置!将来再申请其他证书时,都可以直接从这一步开始!
|
||||
|
||||
让我们部署在[上一篇文章][13]中部署的同样站点。(如果仍然可用,则可以修改 YAML 文件。如果没有,则可能需要重新创建并重新部署它)。
|
||||
|
||||
我们只需要将 `mysite.yaml` 的 `Ingress` 部分修改为:
|
||||
|
||||
```
|
||||
---
|
||||
apiVersion: networking.k8s.io/v1beta1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: mysite-nginx-ingress
|
||||
annotations:
|
||||
kubernetes.io/ingress.class: "traefik"
|
||||
cert-manager.io/cluster-issuer: letsencrypt-prod
|
||||
spec:
|
||||
rules:
|
||||
- host: k3s.carpie.net
|
||||
http:
|
||||
paths:
|
||||
- path: /
|
||||
backend:
|
||||
serviceName: mysite-nginx-service
|
||||
servicePort: 80
|
||||
tls:
|
||||
- hosts:
|
||||
- k3s.carpie.net
|
||||
secretName: k3s-carpie-net-tls
|
||||
```
|
||||
|
||||
请注意,上面仅显示了 `mysite.yaml` 的 `Ingress` 部分。所做的更改是添加了注释 `cert-manager.io/cluster-issuer: letsencrypt-prod`。这告诉 traefik 创建证书时使用哪个发行者。 唯一的其他增加是 `tls:` 块。这告诉 traefik 我们希望在主机 [k3s.carpie.net][10] 上具有 TLS 功能,并且我们希望 TLS 证书文件存储在机密信息 `k3s-carpie-net-tls` 中。
|
||||
|
||||
请记住,我们没有创建这些证书!(好吧,我们创建了名称相似的测试证书,但我们删除了这些证书。)Traefik 将读取这些配置并继续寻找机密信息。当找不到时,它会看到注释说我们想使用 `letsencrypt-prod` 发行者来获取它。由此,它将提出请求并为我们安装证书到机密信息之中!
|
||||
|
||||
大功告成!让我们尝试一下。

它现在具有了 TLS 加密的所有优点!恭喜你!
|
||||
|
||||
--------------------------------------------------------------------------------

via: https://opensource.com/article/20/3/ssl-letsencrypt-k3s

作者:[Lee Carpenter][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/carpie
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/files_documents_paper_folder.png?itok=eIJWac15 (Files in a folder)
[2]: https://carpie.net/articles/ingressing-with-k3s
[3]: https://cert-manager.io/
[4]: https://letsencrypt.org/
[5]: https://opensource.com/article/20/3/kubernetes-raspberry-pi-k3s
[6]: https://cloudflare.com/
[7]: https://gitlab.com/carpie/k3s_using_certmanager/-/archive/master/k3s_using_certmanager-master.zip
[8]: https://cert-manager.io/docs/installation/kubernetes/
[9]: https://whatsmyip.org/
[10]: http://k3s.carpie.net
[11]: https://opensource.com/sites/default/files/uploads/ep011_dns_example.png
[12]: https://opensource.com/sites/default/files/uploads/ep011_router.png
[13]: https://carpie.net/articles/ingressing-with-k3s#deploying-a-simple-website
[14]: http://cert-manager.io/cluster-issuer
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Connect your Google Drive to Fedora Workstation)
|
||||
[#]: via: (https://fedoramagazine.org/connect-your-google-drive-to-fedora-workstation/)
|
||||
[#]: author: (Paul W. Frields https://fedoramagazine.org/author/pfrields/)
|
||||
|
||||
将你的 Google Drive 连接到 Fedora Workstation
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
有大量的云服务可用于存储重要文档。Google Drive 无疑是最受欢迎的之一。它提供了一组相应的应用程序,例如文档,表格和幻灯片来创建内容。但是,你也可以在 Google Drive 中存储任意内容。本文向你展示如何将其连接到 [Fedora Workstation][2]。
### 添加帐户

Fedora Workstation 可以在安装后首次启动时添加帐户,也可以在之后的任何时候添加。要在首次启动期间添加帐户,请按照提示进行操作,其中包括选择添加一个帐户:

![][3]

选择 Google,然后会出现一个登录提示,请使用你的 Google 帐户信息登录。

![][4]

请注意,此信息仅传输给 Google,而不会传输给 GNOME 项目。下一个页面要求你授予访问权限,这是必需的,以便系统桌面可以与 Google 进行交互。向下滚动查看访问请求,然后选择 _Allow_ 继续。

你会在移动设备和 Gmail 中收到有关新设备(你的系统)访问 Google 帐户的通知,这是正常现象。

![][5]

如果你在初次启动时没有执行此操作,或者需要重新添加帐户,请打开 _Settings_,然后选择 _Online Accounts_ 来添加帐户。“设置”可以通过顶部栏右侧的下拉菜单(“齿轮”图标)打开,也可以打开“概览”并输入 _settings_ 来打开。接下来的步骤和上面一样。

### 在 Google Drive 中使用“文件”应用

打开 _文件_ 应用(以前称为 _nautilus_)。在左侧栏的列表中可以找到你的 Google 帐户。

当你选择该帐户后,“文件”应用会显示你的 Google Drive 的内容。你可以使用 Fedora Workstation 的本地应用打开某些文件,例如声音文件或 [LibreOffice][6] 兼容文件(包括 Microsoft Office 文档)。其他文件(例如 Google 文档、表格和幻灯片等 Google 应用文件)将使用浏览器和相应的应用打开。

请记住,如果文件很大,需要一些时间通过网络传输,之后才能打开。

你还可以在 Google Drive 与连接到 Fedora Workstation 的其他存储之间复制粘贴文件,或者反之。你还可以使用内置功能来重命名文件、创建文件夹并组织它们。对于共享和其他高级选项,请和平常一样在浏览器中使用 Google Drive。

请注意,“文件”应用不会实时刷新内容。如果你从其他连接了 Google 的设备(例如手机或平板电脑)添加或删除了文件,那么可能需要按 **Ctrl+R** 刷新“文件”应用。

* * *

_照片由 [Beatriz Pérez Moya][7] 发表在 [Unsplash][8] 上。_

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/connect-your-google-drive-to-fedora-workstation/

作者:[Paul W. Frields][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/pfrields/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/03/gdrive-workstation-816x345.jpg
[2]: https://getfedora.org/workstation
[3]: https://fedoramagazine.org/wp-content/uploads/2020/03/firstboot-choices.jpg
[4]: https://fedoramagazine.org/wp-content/uploads/2020/03/firstboot-signin.jpg
[5]: https://fedoramagazine.org/wp-content/uploads/2020/03/firstboot-grantaccess.jpg
[6]: https://fedoramagazine.org/discover-hidden-gems-libreoffice/
[7]: https://unsplash.com/@beatriz_perez?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[8]: https://unsplash.com/s/photos/office-files?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Viewing and configuring password aging on Linux)
[#]: via: (https://www.networkworld.com/article/3532815/viewing-and-configuring-password-aging-on-linux.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

在 Linux 上查看和配置密码时效
======

使用正确的设置,可以强制 Linux 用户定期更改密码。以下是查看密码时效设置以及修改这些设置的方法。

可以将 Linux 系统上的用户密码配置为永久有效,也可以设置过期时间,让用户必须定期重置密码。出于安全原因,通常认为定期更改密码是一种好习惯,但默认并未配置。

要查看和修改密码时效,你需要熟悉几个重要的命令:**chage** 命令及其 **-l** 选项,以及 **passwd** 命令及其 **-S** 选项。本文会介绍这些命令,以及其他一些用于配置密码时效的 **chage** 选项。

### 查看密码时效设置

确定某个特定帐户是否已设置密码时效的方法是使用如下 **chage** 命令。请注意,查看除你自己以外的任何帐户都需要 root 权限。请注意下面的密码到期日期。

```
$ sudo chage -l dory
Last password change : Mar 15, 2020
Password expires : Jun 13, 2020 <==
Password inactive : never
Account expires : never
Minimum number of days between password change : 10
Maximum number of days between password change : 90
Number of days of warning before password expires : 14
```
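
顺便可以验证一下上面输出中的到期日期:上次修改日期(Mar 15, 2020)加上 90 天的最长时效,正好是 Jun 13, 2020。在装有 GNU coreutils 的系统上,用 `date` 就能直接算出来(下面只是一个演示):

```shell
# 2020-03-15 加上 90 天的最长时效,即为密码到期日(2020-06-13)
date -d "2020-03-15 + 90 days" +%Y-%m-%d
```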

如果未应用密码时效,那么帐户信息将如下所示:

```
$ sudo chage -l nemo
Last password change : Jan 14, 2019
Password expires : never <==
Password inactive : never
Account expires : Mar 26, 2706989
Minimum number of days between password change : 0
Maximum number of days between password change : 99999
Number of days of warning before password expires : 7
```

你也可以使用 **passwd -S** 命令查看某些信息,但是你需要知道输出中每个字段代表什么:

```
dory$ passwd -S
dory P 03/15/2020 10 90 14 -1
```

这里的七个字段代表:

  * 1 – 用户名
  * 2 – 帐户状态(L=锁定,NP=无密码,P=可用密码)
  * 3 – 上次密码更改的日期
  * 4 – 最短时效(在这么多天之内不能更改密码)
  * 5 – 最长时效(这么多天之后,必须更改密码)
  * 6 – 密码过期前提前警告的天数
  * 7 – 密码过期后到帐户被设为无效之前的天数
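
这一行输出可以用 awk 轻松拆成带标签的字段,便于阅读。下面是一个最小的示意(使用的是上文的示例数据,不访问真实帐户):

```shell
# 假设的 passwd -S 输出行(示例数据)
line="dory P 03/15/2020 10 90 14 -1"
# 按空白拆分并打印前五个字段的含义
echo "$line" | awk '{
  printf "用户名: %s\n帐户状态: %s\n上次修改: %s\n最短时效: %s 天\n最长时效: %s 天\n", $1, $2, $3, $4, $5
}'
```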

需要注意的一点是,**chage** 命令不会显示帐户是否被锁定,它仅显示密码时效设置。另一方面,**passwd -S** 命令则会显示密码已被锁定。在此例中,请注意帐户状态为 “L”:

```
$ sudo passwd -S dorothy
dorothy L 07/09/2019 0 99999 7 10
```

这种锁定是在 **/etc/shadow** 文件中实现的:存放密码哈希的字段会被替换为 “!”。

```
$ sudo grep dorothy /etc/shadow
dorothy:!:18086:0:99999:7:10:: <==
```

帐户被锁定的事实在 **chage** 输出中并不明显:

```
$ sudo chage -l dorothy
Last password change : Jul 09, 2019
Password expires : never
Password inactive : never
Account expires : never
Minimum number of days between password change : 0
Maximum number of days between password change : 99999
Number of days of warning before password expires : 7
```

### 密码时效的一些选项

最常用的设置是最短和最长时效,它们经常结合使用。例如,你可以把密码的最长使用期限配置为 90 天(最长时效),再加上一周或 10 天的最短时效。这样可以确保用户不会在被要求更改密码后,马上又改回以前的密码。

```
$ sudo chage -M 90 -m 10 shark
$ sudo chage -l shark
Last password change : Mar 16, 2020
Password expires : Jun 14, 2020
Password inactive : never
Account expires : never
Minimum number of days between password change : 10 <==
Maximum number of days between password change : 90 <==
Number of days of warning before password expires : 7
```

你还可以使用 **-E** 选项为帐户设置特定的到期日期:

```
$ sudo chage -E 2020-11-11 tadpole
$ sudo chage -l tadpole
Last password change : Oct 15, 2019
Password expires : never
Password inactive : never
Account expires : Nov 11, 2020 <==
Minimum number of days between password change : 0
Maximum number of days between password change : 99999
Number of days of warning before password expires : 7
```

只要不至于促使用户使用过于简单的密码、或者以不安全的方式把密码写下来,密码时效就可以是一个重要的安全选项。有关控制密码字符构成(例如,大小写字母、数字等的组合)的更多信息,请参考这篇关于[密码复杂度][3]的文章。

加入 [Facebook][4] 和 [LinkedIn][5] 上的 Network World 社区,评论热门主题。

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3532815/viewing-and-configuring-password-aging-on-linux.html

作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/newsletters/signup.html
[2]: https://www.networkworld.com/blog/itaas-and-the-corporate-storage-technology/?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE22140&utm_content=sidebar (ITAAS and Corporate Storage Strategy)
[3]: https://www.networkworld.com/article/2726217/how-to-enforce-password-complexity-on-linux.html
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (4 Markdown tools for the Linux command line)
[#]: via: (https://opensource.com/article/20/3/markdown-apps-linux-command-line)
[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt)

4 个 Linux 命令行下的 Markdown 工具
======

命令行 Markdown 工具快速、强大、灵活。以下是 4 个值得一试的工具。

![A person working.][1]

在处理 [Markdown][2] 格式的文件时,命令行工具会占据主导地位。它们轻巧、快速、强大、灵活,而且大多遵循 Unix 哲学:只做好一件事。

来看一下这四个程序,它们可以帮助你在命令行中更有效地处理 Markdown 文件。

### mdless

如果你使用过一段时间的 Linux 命令行,那么你可能对名为 [less][3] 的文本查看器很熟悉。当然,你可以使用 less 查看 Markdown 文件,但结果有点枯燥。如何在终端中更好地查看 Markdown 文件?来试试 [mdless][4]。

![mdless][5]

你可以使用键盘上的箭头键四处移动,并且 mdless 提供了很好的搜索功能。

mdless 不仅会显示文本,还会渲染标题、粗体和斜体等格式。它还可以显示表格,并对代码块进行语法高亮。你还可以创建一个或多个主题文件来[定制][6] mdless 的外观。

### Markdown lint 工具

你在快速输入时会犯错误。如果你在使用 Markdown(或其他任何标记语言)时弄丢了一些格式,那么在将文件转换为另一种格式时就可能出问题。

程序员通常使用名为 _linter_ 的工具来检查语法是否正确。你可以使用 [Markdown lint 工具][7]对 Markdown 执行相同的操作。

在你对 Markdown 文件运行该工具时,它会根据[规则集][8]检查格式。这些规则约束着文档的结构,包括标题级别的顺序、不正确的缩进和间距、代码块问题、文件中混入 HTML 等等。

![Markdown lint tool][9]

这些规则可能有点严格。但是,在将文件转换为其他格式之前先运行 Markdown lint 工具,可以避免由于格式错误或不一致引起的麻烦。

### mdmerge

合并任何类型的文件都可能很痛苦。例如,我正在整理一本电子书,它是一本文章集,这些文章最初发布在我的[每周邮件][10]中。这些文章都放在单独的文件里,而我这个受虐狂,一度以凌乱的手动方式把它们拼在一起。

我真希望在开始这个项目之前就知道 [mdmerge][11],那样可以节省很多时间和精力。

mdmerge,你可能已经从名称中猜到了它的作用:将两个或多个 Markdown 文件合并为一个文件。你无需在命令行中逐一输入文件名,而是将它们添加到名为 book.txt 的文件中,并将其用作 mdmerge 的输入文件。
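
下面是一个最小的示意,展示这样一个输入文件的样子(`ch1.md` 等文件名只是假设的示例):

```shell
# 生成 book.txt,每行列出一个要合并的 Markdown 文件
cat > book.txt <<'EOF'
ch1.md
ch2.md
ch3.md
EOF
cat book.txt
```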

这并不是 mdmerge 的全部能力。你还可以添加对另一个文档的引用(Markdown 格式的引用或者一段源代码),然后将其放入主文档中。这样一来,你就可以创建针对特定受众定制的[主文档][12]。

mdmerge 不会是你一直使用的程序之一,但当你需要它时,你会很高兴硬盘上有它。

### bashblog

[bashblog][13] 严格来说并不是一个 Markdown 工具。它获取 Markdown 文件,并用它们来构建一个简单的博客或网站。你可以将 bashblog 视为一个[静态站点生成器][14],但它没有那些脆弱的依赖,几乎一切都装在一个不到 50KB 的 shell 脚本中。

要使用 bashblog,只需在计算机上安装一个 Markdown 处理器即可。接下来,编辑这个 shell 脚本,添加有关博客的信息,例如标题、你的名字、社交媒体链接等,然后运行脚本。之后会在默认文本编辑器中新建一篇文章,开始写作即可。

保存文章后,你可以发布它或将其另存为草稿。如果你选择发布,bashblog 会把你的博客(文章和所有其他内容)生成为一组 HTML 文件,你可以将它们上传到 Web 服务器。

开箱即用的博客外观平淡无奇,但完全可用。你可以按自己的喜好编辑站点的 CSS 文件来改变外观。

![bashblog][15]

### 那 Pandoc 呢?

当然,Pandoc 是一个非常强大的工具,可以将 Markdown 文件转换为其他标记语言。但在命令行上处理 Markdown,可用的工具远不止 Pandoc 一个。

如果你需要 Pandoc 相关的内容,请查看我们在 Opensource.com 上发布的这些文章:

  * [使用 Pandoc 在命令行中转换文件][16]
  * [使用 Pandoc 将你的书变成网站和 ePub][17]
  * [如何使用 Pandoc 生成论文][18]
  * [使用 Pandoc 将 Markdown 文件转换为 Word 文档][19]

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/3/markdown-apps-linux-command-line

作者:[Scott Nesbitt][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/scottnesbitt
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003784_02_os.comcareers_os_rh2x.png?itok=jbRfXinl (A person working.)
[2]: https://opensource.com/article/19/9/introduction-markdown
[3]: https://opensource.com/article/18/4/using-less-view-text-files-command-line
[4]: https://github.com/ttscoff/mdless
[5]: https://opensource.com/sites/default/files/uploads/mdless.png (mdless)
[6]: https://github.com/ttscoff/mdless#customization
[7]: https://github.com/markdownlint/markdownlint
[8]: https://github.com/markdownlint/markdownlint/blob/master/docs/RULES.md
[9]: https://opensource.com/sites/default/files/uploads/mdl.png (Markdown lint tool)
[10]: https://buttondown.email/weeklymusings
[11]: https://github.com/JeNeSuisPasDave/MarkdownTools
[12]: https://help.libreoffice.org/6.2/en-US/text/swriter/guide/globaldoc.html
[13]: https://github.com/cfenollosa/bashblog
[14]: https://en.wikipedia.org/wiki/Web_template_system#Static_site_generators
[15]: https://opensource.com/sites/default/files/uploads/bashblog.png (bashblog)
[16]: https://opensource.com/article/18/9/intro-pandoc
[17]: https://opensource.com/article/18/10/book-to-website-epub-using-pandoc
[18]: https://opensource.com/article/18/9/pandoc-research-paper
[19]: https://opensource.com/article/19/5/convert-markdown-to-word-pandoc
[#]: collector: (lujun9972)
[#]: translator: (HankChow)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Manually rotating log files on Linux)
[#]: via: (https://www.networkworld.com/article/3531969/manually-rotating-log-files-on-linux.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

在 Linux 系统中手动滚动日志
======

[deovolenti][1] [(CC BY 2.0)][2]

<ruby>日志滚动<rt>log rotation</rt></ruby>在 Linux 系统上是再常见不过的一个功能了,它为系统监控和故障排查保留必要的日志内容,同时又防止过多的日志堆积在单个日志文件当中。

日志滚动的过程是这样的:在一组日志文件之中,编号最大的一个日志文件会被删除,其余的日志文件编号则依次增大并取代较旧的日志文件。这个过程很容易实现自动化,在细节上还能按需微调。
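
上面描述的重命名链条,可以用几条普通的 shell 命令在临时目录里模拟出来(下面只是一个演示用的草图,不依赖 logrotate 本身,文件名均为示例):

```shell
# 在临时目录中模拟一次保留 3 份旧日志的滚动
dir=$(mktemp -d)
echo "active" > "$dir/log"
echo "old 1"  > "$dir/log.1"
echo "old 2"  > "$dir/log.2"
echo "old 3"  > "$dir/log.3"

rm -f "$dir/log.3"            # 编号最大(最旧)的日志被删除
mv "$dir/log.2" "$dir/log.3"  # 其余日志编号依次增大
mv "$dir/log.1" "$dir/log.2"
mv "$dir/log"   "$dir/log.1"  # 原来的活动日志变成 log.1
: > "$dir/log"                # 新建一个空的活动日志

ls "$dir"
```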

使用 `logrotate` 命令可以手动执行日志滚动的操作。本文将要介绍的就是手动进行日志滚动的方法,以及预期产生的结果。

文中出现的示例适用于 Ubuntu 等 Linux 系统。对于其它类型的系统,日志文件和配置文件可能会有所不同,但日志滚动的过程是大同小异的。

### 为什么需要日志滚动

一般情况下,Linux 系统会每隔一天(或间隔更长的时间)自动进行一次日志滚动,因此需要手动执行日志滚动的场景并不多,除非有些日志的体积确实比较大。如果你需要释放存储空间,又或者想把某一部分日志从活动日志中分割出来,适时的手动日志滚动就是很方便的解决办法。

### 一点背景介绍

在 Linux 系统安装完成后,就已经有很多日志文件被纳入日志滚动的范围了;另外,一些应用程序在安装时也会为自己产生的日志文件设置滚动规则。一般来说,日志滚动的配置文件放置在 `/etc/logrotate.d` 中。如果你想了解日志滚动的详细实现,可以参考[这篇以前的文章][4]。

在日志滚动的过程中,活动日志会换用一个新名称,例如 `log.1`,之前被命名为 `log.1` 的文件则会被重命名为 `log.2`,以此类推。在这一组文件中,最旧的日志文件(假如名为 `log.7`)会从系统中删除。日志滚动时文件的命名方式、保留日志文件的数量等参数是由 `/etc/logrotate.d` 目录中的配置文件决定的,因此你可能会看到有些日志文件只保留少数几份滚动副本,而有些日志文件的滚动副本远多于 7 份。

例如 `syslog` 在经过日志滚动之后可能会如下所示(注意,行尾的注释部分只是说明滚动过程是如何对文件名产生影响的):

```
$ ls -l /var/log/syslog*
-rw-r----- 1 syslog adm 128674 Mar 10 08:00 /var/log/syslog <== 新文件
-rw-r----- 1 syslog adm 2405968 Mar 9 16:09 /var/log/syslog.1 <== 之前的 syslog
-rw-r----- 1 syslog adm 206451 Mar 9 00:00 /var/log/syslog.2.gz <== 之前的 syslog.1
-rw-r----- 1 syslog adm 216852 Mar 8 00:00 /var/log/syslog.3.gz <== 之前的 syslog.2.gz
-rw-r----- 1 syslog adm 212889 Mar 7 00:00 /var/log/syslog.4.gz <== 之前的 syslog.3.gz
-rw-r----- 1 syslog adm 219106 Mar 6 00:00 /var/log/syslog.5.gz <== 之前的 syslog.4.gz
-rw-r----- 1 syslog adm 218596 Mar 5 00:00 /var/log/syslog.6.gz <== 之前的 syslog.5.gz
-rw-r----- 1 syslog adm 211074 Mar 4 00:00 /var/log/syslog.7.gz <== 之前的 syslog.6.gz
```

你可能会发现,除了活动日志和最新一次滚动出来的日志文件之外,其余的文件都被压缩了,以节省存储空间。这样设计的原因是:大部分系统管理员只需要查阅最新的日志文件,其余的日志文件压缩保存,需要时再解压查阅,这是一个很好的折中方案。
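
“压缩保存、需要时直接查看”的做法可以用 gzip/zcat 简单演示一下(在临时目录中操作,文件内容为示例数据):

```shell
# 生成一份示例“旧日志”并压缩
logdir=$(mktemp -d)
seq 1 1000 > "$logdir/syslog.2"
gzip "$logdir/syslog.2"            # 得到 syslog.2.gz
ls "$logdir"
# 无需解压到磁盘,直接查看压缩日志的最后一行
zcat "$logdir/syslog.2.gz" | tail -n 1
```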

### 手动日志滚动

你可以这样执行 `logrotate` 命令进行手动日志滚动:

```
$ sudo logrotate -f /etc/logrotate.d/rsyslog
```

值得一提的是,`logrotate` 命令使用 `/etc/logrotate.d/rsyslog` 这个配置文件,并通过 `-f` 参数实行“强制滚动”。因此,整个过程将会是:删除 `syslog.7.gz`,将原来的 `syslog.6.gz` 重命名为 `syslog.7.gz`,将原来的 `syslog.5.gz` 重命名为 `syslog.6.gz`,以此类推,直到原来的 `syslog.1` 被压缩并重命名为 `syslog.2.gz`;但新的 `syslog` 文件不一定会自动创建。你可以按照下面的几条命令执行操作,以确保新文件的属主和权限正确:

```
$ sudo touch /var/log/syslog
$ sudo chown syslog:adm /var/log/syslog
$ sudo chmod 640 /var/log/syslog
```

你也可以把以下这一行内容添加到 `/etc/logrotate.d/rsyslog` 当中,由 `logrotate` 来帮你完成上面三条命令的操作:

```
create 0640 syslog adm
```

整个配置文件的内容是这样的:

```
/var/log/syslog
{
        rotate 7
        daily
        missingok
        notifempty
        create 0640 syslog adm <==
        delaycompress
        compress
        postrotate
                /usr/lib/rsyslog/rsyslog-rotate
        endscript
}
```

下面是用户登录日志文件 `wtmp` 手动滚动的示例。由于 `/etc/logrotate.d/wtmp` 中有 `rotate 2` 的配置,因此系统中只保留两份 `wtmp` 日志文件。

滚动前:

```
$ ls -l wtmp*
-rw-r----- 1 root utmp 1152 Mar 12 11:49 wtmp
-rw-r----- 1 root utmp 768 Mar 11 17:04 wtmp.1
```

执行滚动命令:

```
$ sudo logrotate -f /etc/logrotate.d/wtmp
```

滚动后:

```
$ ls -l /var/log/wtmp*
-rw-r----- 1 root utmp 0 Mar 12 11:52 /var/log/wtmp
-rw-r----- 1 root utmp 1152 Mar 12 11:49 /var/log/wtmp.1
-rw-r----- 1 root adm 99726 Feb 21 07:46 /var/log/wtmp.report
```

需要知道的是,无论日志滚动是自动触发还是手动执行,最近一次的滚动时间都会记录在 `logrotate` 的状态文件中:

```
$ grep wtmp /var/lib/logrotate/status
"/var/log/wtmp" 2020-3-12-11:52:57
```

欢迎加入 [Facebook][6] 和 [LinkedIn][7] 上的 Network World 社区参与话题评论。

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3531969/manually-rotating-log-files-on-linux.html

作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[HankChow](https://github.com/HankChow)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.flickr.com/photos/klif/4244284159/in/photolist-7t44P6-oPFpsr-a8c5W-gWNZ6-32EEo4-cjdxqy-diHaq9-8DYZWf-gWNWM-bgLApc-hBt94C-cj71kY-PMESV-dZBcCU-pSqgNM-51eKHq-EecbfS-osGNau-KMUx-nFaWEL-cj71PE-HFVXn-gWNWs-85HueR-8QpDh8-kV1dEc-76qYSV-5YnxuS-gWNXr-dYoQ5w-dzj1j3-3AJyd-mHbaWF-q2fTri-e9bFa6-nJyvfR-4PnMyH-gWNZr-8VUtGS-gWNWZ-ajzUd4-2hAjMk-gWW3g-gWP11-dwYbH5-4XMew-cj71B1-ica9kJ-5RonM6-8z5tGL
[2]: https://creativecommons.org/licenses/by/2.0/legalcode
[3]: https://www.networkworld.com/newsletters/signup.html
[4]: https://www.networkworld.com/article/3218728/how-log-rotation-works-with-logrotate.html
[5]: https://www.networkworld.com/blog/itaas-and-the-corporate-storage-technology/?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE22140&utm_content=sidebar (ITAAS and Corporate Storage Strategy)
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world
[#]: collector: "lujun9972"
[#]: translator: "wxy"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: subject: "Purism Librem Mini: A Privacy-First Linux-Based Mini PC"
[#]: via: "https://itsfoss.com/purism-librem-mini/"
[#]: author: "Ankush Das https://itsfoss.com/author/ankush/"

Purism Librem Mini:隐私为重的基于 Linux 的微型个人电脑
======

> Purism 推出了一款外形小巧的微型个人电脑 “Librem Mini”,旨在提供隐私和安全性。让我们来看看它的细节。

[Purism][1] 通常以专注于增强用户数字隐私和安全性的服务或产品而闻名。

Purism 自诩为“<ruby>[社会目的公司][2]<rt>Social Purpose Company</rt></ruby>”,旨在为社会造福,并在这方面提供了多种服务和产品。

你可能听说过它的 Librem 系列 [Linux 笔记本电脑][3]、[Librem One][4](加密服务)、[PureOS Linux][5] 和 [Librem 5 Linux 智能手机][6]。现在,他们针对想要掌控自己隐私和安全的用户,推出了一款小尺寸的微型个人电脑。

### Librem Mini:Purism 的微型个人电脑

![Librem Mini PC][7]

[Purism][1] 的 [Librem Mini][8] 旨在成为一台小型、轻便且功能强大的微型个人电脑。

当然,市面上已经有很多[基于 Linux 的微型个人电脑][9]了,但 Librem Mini 专门关注用户的隐私和安全。它随附 [PureOS][5],并支持 [Pureboot][10] 和 [Librem Key][11]。

基本配置将以 699 美元的价格提供,这比大多数其他微型个人电脑都要贵。但与大多数同类产品不同,Librem Mini 并不是又一个 [Intel NUC][12]。那么,它提供了什么呢?

### Librem Mini 的规格

![][13]

这是它的规格表:

  * Intel Core i7-8565U(Whiskey Lake),主动(风扇)冷却,4 核 8 线程,最高频率 4.6GHz
  * Intel UHD Graphics 620
  * 内存:最多 64GB DDR4 2400MHz(2 个 SO-DIMM 插槽)
  * 1 个 SATA III 6Gb/s SSD/HDD 位(7mm)
  * 1 个 M.2 SSD 插槽(SATA III/NVMe x4)
  * 1 个 HDMI 2.0,4K @ 60Hz
  * 1 个 DisplayPort 1.2,4K @ 60Hz
  * 4 个 USB 3.0
  * 2 个 USB 2.0
  * 1 个 Type-C 3.1
  * 3.5mm 音频插孔(麦克风输入和耳机输出合一)
  * 1 个 RJ45 千兆以太网口
  * WiFi 802.11n(2.4/5.0 GHz),可选 Atheros ATH9k 模块
  * 蓝牙 4.0(可选,包含在 WiFi 模块中)
  * 重量:1 公斤(2.2 磅)
  * 尺寸:12.8 厘米(5.0 英寸)× 12.8 厘米(5.0 英寸)× 3.8 厘米(1.5 英寸)

我不知道他们为什么决定采用 Intel 的第 8 代处理器,毕竟市场上已经出现了第 10 代处理器。也许是因为 Whiskey Lake 仍是第 8 代处理器中的最新产品。

不过,他们已经禁用并消除了 Intel 的管理引擎,就冲这一点,这个产品仍然值得考虑。

除此之外,你还应该记住,这款微型个人电脑在提供全盘加密的同时,还具有检测硬件和软件篡改的功能。

而且,当然,它运行的是 Linux。

### 价格和供应

![Librem Mini from the back][14]

8GB 内存**和** 256GB SSD 的基本配置售价 699 美元。如果你想要最强的配置,价格很容易就会达到 3000 美元。

他们的预订销售额目标是 50,000 美元,并计划在达到目标后的一个月内开始发货。

因此,如果你现在就[预订][15],不要指望很快发货。我建议你关注 [Librem Mini 产品页面][8]上的预订进度。

### 总结

如果你正在寻找一台微型个人电脑(不一定是专为隐私和安全设计的),可以看看我们的[基于 Linux 的最佳微型个人电脑][9]列表,以获取更多建议。

对于普通消费者而言,Librem Mini 听起来绝对昂贵;但对于注重隐私的用户来说,它仍然是一个不错的选择。

你怎么看?欢迎告诉我你的想法!

--------------------------------------------------------------------------------

via: https://itsfoss.com/purism-librem-mini/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://puri.sm/
[2]: https://puri.sm/about/social-purpose/
[3]: https://itsfoss.com/get-linux-laptops/
[4]: https://itsfoss.com/librem-one/
[5]: https://itsfoss.com/pureos-convergence/
[6]: https://itsfoss.com/librem-linux-phone/
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/03/librem-mini-pc.png?ssl=1
[8]: https://puri.sm/products/librem-mini/
[9]: https://itsfoss.com/linux-based-mini-pc/
[10]: https://docs.puri.sm/PureBoot.html
[11]: https://puri.sm/products/librem-key/
[12]: https://itsfoss.com/intel-nuc-essential-accessories/
[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/03/librem-mini-pc-1.png?ssl=1
[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/03/librem-mini-back.png?ssl=1
[15]: https://shop.puri.sm/shop/librem-mini/