Merge pull request #14 from LCTT/master

Update translation branch
This commit is contained in:
萌新阿岩 2020-12-12 13:32:53 +08:00 committed by GitHub
commit 85312488cf
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
125 changed files with 8874 additions and 2769 deletions


@ -0,0 +1,523 @@
[#]: collector: (lujun9972)
[#]: translator: (chenmu-kk)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12883-1.html)
[#]: subject: (Linux Desktop Setup · HookRace Blog)
[#]: via: (https://hookrace.net/blog/linux-desktop-setup/)
[#]: author: (Dennis Felsing http://felsin9.de/nnis/)
十年 Linux 桌面生存指南
======
![](https://img.linux.net.cn/data/attachment/album/202012/03/223817smrej5qwsbqjb3vs.jpg)
从 2006 年开始转向 Linux 系统后,经过最初几年的摸索,我的软件设置在过去十年里出人意料地稳定。再过十年回头看看发生了什么,也许会非常有趣。在写这篇文章时,我快速浏览了一下当前正在运行的东西:
![htop overview][2]
### 动机
我对软件的要求如下(排序不分先后):
* 程序应该运行在本地系统中以便我可以控制它,这其中并不包括云解决方案。
* 程序应在终端中运行,以便于在任何地方连贯地使用它们,包括性能稍差的电脑或手机。
* 由于使用的是终端软件,焦点自然落在键盘上。只有在少数确实有意义的场合我才更愿意用鼠标,因为打字期间不断伸手去拿鼠标感觉像在浪费时间。有时过了一个小时我才发现鼠标甚至还没插上。
* 最好使用快速高效的软件,我不喜欢听到风扇的声音和感到房间在变热。我还可以继续长久地运行旧硬件,已经使用了 10 年的 Thinkpad x200s 还能很好地支持我所使用的软件。
* 可组合性:我不想手动执行每个步骤,而是在合适的时候实现更多自动化,这自然更偏向于 shell。
### 操作系统
十二年前移除 Windows 系统后,我在 Linux 系统上经历了一个艰难的开始,当时我手上只有 [Gentoo Linux][3] 系统的安装光盘和一本打印的说明书,要用它们来实现一个可运行的 Linux 系统。虽然花费了几天的时间去编译和修整,但最终还是觉得自己受益颇多。
自此我再也没有转回 Windows 系统,但在持续的编译把风扇都折腾坏了之后,我将我的电脑切换到了 [Arch Linux][4],之后我的其他电脑和私人服务器也都换成了 Arch Linux。作为一个滚动发布的发行版你可以随时升级软件包而其中最主要的破坏性变更会在 [Arch Linux News][5] 上报告。
不过令人烦恼的是,一旦你升级了内核,Arch Linux 就会移除旧内核的模块。我通常是在插入 USB 闪存盘、内核无法加载相应模块时才注意到这一点。按照设计,每次内核升级后都应该重启系统。有一些 [方法][6] 可以解决这个问题,但我还没有实际用过。
其他程序也有类似的情况,通常 Firefox、cron 或者 Samba 在升级后都需要重启,但恼人的是它们并不会警告你。我在工作中使用的 [SUSE][7] 就能很好地提醒这类情况。
对于 [DDNet][8] 的生产服务器,相较于 Arch Linux我更倾向于 [Debian][9],这样每次升级时出故障的几率更低。我的防火墙和路由器使用 [OpenBSD][10],因为它拥有干净的系统、完善的文档和强大的 [pf 防火墙][11],有了它我就不再需要一个单独的路由器了。
### 窗口管理器
从我开始使用 Gentoo 后,我很快注意到 KDE 的编译时间非常长,这让我没办法继续使用它。我四处寻找更简单的解决方案,最初使用了 [Openbox][12] 和 [Fluxbox][13]。某次,为了能更多进行纯键盘操作,我开始尝试转入平铺窗口管理器,并在研究其初始版本的时候学习了 [dwm][14] 和 [awesome][15]。
由于 [xmonad][16] 的灵活性、可扩展性,以及它完全是用 [Haskell][17](一种出色的函数式编程语言)编写和配置的,我最终选择了它。举个例子:我在家里用一块 40 英寸的 4K 屏幕,但经常把它分成四个虚拟屏幕,每个虚拟屏幕显示一个工作区,工作区里的窗口会被自动排列。当然xmonad 有一个对应的 [模块][18]。
[dzen][19] 和 [conky][20] 组成了我的非常简单的状态栏。我的整个 conky 配置看起来是这样的:
```
out_to_console yes
update_interval 1
total_run_times 0
TEXT
${downspeed eth0} ${upspeed eth0} | $cpu% ${loadavg 1} ${loadavg 2} ${loadavg 3} $mem/$memmax | ${time %F %T}
```
conky 的输出直接通过管道传给 dzen2
```
conky | dzen2 -fn '-xos4-terminus-medium-r-normal-*-12-*-*-*-*-*-*-*' -bg '#000000' -fg '#ffffff' -p -e '' -x 1000 -w 920 -xs 1 -ta r
```
对我而言,一项重要功能是在任务完成后让终端发出蜂鸣声。这只需要在 zsh 的 `PR_TITLEBAR` 变量中加一个 `\a` 字符即可,该变量会在每次任务完成时显示。当然,我用下面的命令把 `pcspkr` 内核模块列入黑名单,禁用了实际的蜂鸣声:
```
echo "blacklist pcspkr" > /etc/modprobe.d/nobeep.conf
```
这样urxvt 的 `URxvt.urgentOnBell: true` 设置会把蜂鸣转换成紧急urgency标志。xmonad 有一个 urgency 钩子可以捕捉这类信号,我可以通过组合键自动跳转到当前发出紧急信号的窗口。在 dzen 中,紧急窗口会以漂亮又醒目的 `#ff0000` 颜色显示出来。
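下面是一个假设性的极简 zsh 片段,示意这种“任务一结束就蜂鸣”的做法(`precmd` 钩子与其中的转义序列只是常见写法;作者实际是把 `\a` 放在他自己提示符配置里的 `PR_TITLEBAR` 变量中):
```
# 假设性示例:每次命令结束、提示符重绘时设置终端标题并输出一个 \a
precmd() {
  print -Pn '\e]0;%n@%m: %~\a'   # 设置 urxvt 的标题栏
  print -n '\a'                  # 发出“蜂鸣”;禁用 pcspkr 后由 urxvt 转为 urgency 信号
}
```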
在我笔记本上所得到的最终成品是:
![Laptop screenshot][22]
我听说前几年 [i3][23] 变得非常流行,但它要求更多的手动窗口对齐而不是自动对齐。
我意识到也有像 [tmux][24] 那样的终端多路复用器,但我仍想要一些图形化应用程序,因此最终我没有有效地使用它们。
### 终端连续性
为了让终端会话保持存活,我使用了 [dtach][25],它只实现了 screen 的会话分离detach功能。为了让计算机上的每个终端都可以连接和分离我写了一个小的封装脚本。这意味着即使必须重启 X 服务器,我也可以让所有终端保持运行,无论本地还是远程。
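作者的封装脚本并没有在文中给出,下面是一个假设性的极简示意(套接字路径和默认会话名均为虚构),展示用 `dtach -A` 按名称附加或创建会话的思路:
```
#!/bin/sh
# 假设性示例:按名称附加到一个 dtach 会话,不存在则创建
# 用法:term-session [会话名]
name=${1:-main}
exec dtach -A "/tmp/dtach-$USER-$name.socket" -z "${SHELL:-zsh}"
```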
### Shell & 编程
对于 shell我使用 [zsh][28] 而不是 [bash][27],因为它有众多的功能。
作为终端模拟器,我发现 [urxvt][29] 足够轻巧,支持 Unicode 和 256 色,性能出色。另一个重要的功能是可以分别运行 urxvt 的守护进程和客户端,这样即使开很多终端,也几乎不占用额外内存(回滚缓冲区除外)。
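这种守护进程/客户端的用法大致如下(选项是 urxvt 的标准参数,并非出自作者的配置):
```
urxvtd -q -f -o   # 启动守护进程:-q 安静,-f 转入后台,-o X 会话结束时自动退出
urxvtc            # 之后每个新终端只是一个轻量客户端,连接到同一个守护进程
```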
对我而言,只有一种字体看起来绝对干净和完美: [Terminus][30]。 由于它是位图字体,因此所有内容都是完美像素,渲染速度极快且 CPU 使用率低。为了能使用 `CTRL-WIN-[1-7]` 在每个终端按需切换字体,我的 `~/.Xdefaults` 包含:
```
URxvt.font: -xos4-terminus-medium-r-normal-*-14-*-*-*-*-*-*-*
dzen2.font: -xos4-terminus-medium-r-normal-*-14-*-*-*-*-*-*-*
URxvt.keysym.C-M-1: command:\033]50;-xos4-terminus-medium-r-normal-*-12-*-*-*-*-*-*-*\007
URxvt.keysym.C-M-2: command:\033]50;-xos4-terminus-medium-r-normal-*-14-*-*-*-*-*-*-*\007
URxvt.keysym.C-M-3: command:\033]50;-xos4-terminus-medium-r-normal-*-18-*-*-*-*-*-*-*\007
URxvt.keysym.C-M-4: command:\033]50;-xos4-terminus-medium-r-normal-*-22-*-*-*-*-*-*-*\007
URxvt.keysym.C-M-5: command:\033]50;-xos4-terminus-medium-r-normal-*-24-*-*-*-*-*-*-*\007
URxvt.keysym.C-M-6: command:\033]50;-xos4-terminus-medium-r-normal-*-28-*-*-*-*-*-*-*\007
URxvt.keysym.C-M-7: command:\033]50;-xos4-terminus-medium-r-normal-*-32-*-*-*-*-*-*-*\007
URxvt.keysym.C-M-n: command:\033]10;#ffffff\007\033]11;#000000\007\033]12;#ffffff\007\033]706;#00ffff\007\033]707;#ffff00\007
URxvt.keysym.C-M-b: command:\033]10;#000000\007\033]11;#ffffff\007\033]12;#000000\007\033]706;#0000ff\007\033]707;#ff0000\007
```
对于编程和写作,我使用 [Vim][31](带语法高亮)和 [ctags][32] 建立索引,再加上几个跑着 `grep`、`sed` 等常用工具的终端窗口来做搜索和文本处理。这可能不像 IDE 那样舒适,但可以实现更多的自动化。
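这种工作流的一个通用示例如下(目录名和搜索词只是演示,并非作者的实际配置):
```
ctags -R .                   # 递归为当前项目生成 tags 索引,供 Vim 跳转到定义
grep -rn 'TODO' src/ | less  # 在 src/ 中递归搜索并显示行号,结果交给分页器
```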
Vim 的一个问题是你已经习惯了它的键映射,因此希望在任何地方都使用它们。
在 shell 功能不够强大时,[Python][33] 和 [Nim][34] 作为脚本语言也不错。
### 系统监控
[htop][35] 非常适合快速了解软件的当前运行状态(本站的背景就是托管它的那台服务器的 htop 实时视图)。[lm_sensors][36] 可以监控硬件温度、风扇和电压。[powertop][37] 是 Intel 出品的一款优秀的省电小工具。[ncdu][38] 可以交互式地分析磁盘使用情况。
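这些工具的典型用法如下(均为常规命令行用法,并非原文内容):
```
ncdu /home      # 交互式分析 /home 的磁盘占用
sensors         # 显示 lm_sensors 检测到的温度、风扇转速和电压
sudo powertop   # 查看耗电情况powertop 通常需要 root 权限)
```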
[nmap][39]、 iptraf-ng、 [tcpdump][40] 和 [Wireshark][41] 都是分析网络问题的基本工具。
当然还有很多更优秀的工具。
### 邮件 & 同步
在我的家庭服务器上,我为自己所有的邮箱账号运行了 [fetchmail][42] 守护进程。fetchmail 只负责取回收到的邮件并调用 [procmail][43]
```
#!/bin/sh
for i in /home/deen/.fetchmail/*; do
FETCHMAILHOME=$i /usr/bin/fetchmail -m 'procmail -d %T' -d 60
done
```
配置非常简单,然后等待服务器通知我们有新的邮件:
```
poll imap.1und1.de protocol imap timeout 120 user "dennis@felsin9.de" password "XXX" folders INBOX keep ssl idle
```
我的 `.procmailrc` 配置包含一些备份全部邮件的规则,并将邮件整理在对应的目录下面。例如,基于邮件列表名或者邮件标题:
```
MAILDIR=/home/deen/shared/Maildir
LOGFILE=$HOME/.procmaillog
LOGABSTRACT=no
VERBOSE=off
FORMAIL=/usr/bin/formail
NL="
"
:0wc
* ! ? test -d /media/mailarchive/`date +%Y`
| mkdir -p /media/mailarchive/`date +%Y`
# Make backups of all mail received in format YYYY/YYYY-MM
:0c
/media/mailarchive/`date +%Y`/`date +%Y-%m`
:0
* ^From: .*(.*@.*.kit.edu|.*@.*.uka.de|.*@.*.uni-karlsruhe.de)
$MAILDIR/.uni/
:0
* ^list-Id:.*lists.kit.edu
$MAILDIR/.uni-ml/
[...]
```
我使用 [msmtp][44] 来发送邮件,它也很好配置:
```
account default
host smtp.1und1.de
tls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
auth on
from dennis@felsin9.de
user dennis@felsin9.de
password XXX
[...]
```
不过到目前为止,邮件还在服务器上。我的文档全部存放在同一个目录里,并用 [Unison][45] 在所有计算机之间同步。Unison 可以看作是双向的交互式 [rsync][46],我的邮件也是这个文件目录的一部分,因此最终会同步到我的电脑上。
这也意味着,尽管邮件会立即到达我的邮箱,但我只是按需拿取,而不是邮件一到达时就立即收到通知。
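下面是一个假设性的 Unison 调用示例(目录路径和主机名均为虚构),示意这种双向同步:
```
# 在本地目录和家庭服务器上的副本之间做双向同步
# -batch 表示无冲突时不逐一询问,-times 保留文件修改时间
unison ~/shared ssh://homeserver//home/deen/shared -batch -times
```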
我使用 [mutt][47] 阅读邮件,同样用侧边栏显示我的邮件目录。`/etc/mailcap` 文件对于显示非纯文本邮件HTML、Word 或者 PDF必不可少
```
text/html;w3m -I %{charset} -T text/html; copiousoutput
application/msword; antiword %s; copiousoutput
application/pdf; pdftotext -layout /dev/stdin -; copiousoutput
```
### 新闻 & 通讯
[Newsboat][48] 是一个非常棒的终端 RSS/Atom 阅读器。我在服务器上的一个 `tach` 会话中运行它,订阅了约 150 个提要。也可以在本地过滤提要,例如:
```
ignore-article "https://forum.ddnet.tw/feed.php" "title =~ \"Map Testing •\" or title =~ \"Old maps •\" or title =~ \"Map Bugs •\" or title =~ \"Archive •\" or title =~ \"Waiting for mapper •\" or title =~ \"Other mods •\" or title =~ \"Fixes •\""
```
我以同样的方式使用 [Irssi][49] 进行 IRC 通讯。
### 日历
[remind][50] 是一个可以在命令行中使用的日历工具。通过编辑它的 `rem` 文件可以设置新的提醒:
```
# One time events
REM 2019-01-20 +90 Flight to China %b
# Recurring Holidays
REM 1 May +90 Holiday "Tag der Arbeit" %b
REM [trigger(easterdate(year(today()))-2)] +90 Holiday "Karfreitag" %b
# Time Change
REM Nov Sunday 1 --7 +90 Time Change (03:00 -> 02:00) %b
REM Apr Sunday 1 --7 +90 Time Change (02:00 -> 03:00) %b
# Birthdays
FSET birthday(x) "'s " + ord(year(trigdate())-x) + " birthday is %b"
REM 16 Apr +90 MSG Andreas[birthday(1994)]
# Sun
SET $LatDeg 49
SET $LatMin 19
SET $LatSec 49
SET $LongDeg -8
SET $LongMin -40
SET $LongSec -24
MSG Sun from [sunrise(trigdate())] to [sunset(trigdate())]
[...]
```
遗憾的是,目前 remind 中还没有中国农历的提醒功能,因此中国的节日不易计算。
我为 remind 设置了两个别名:
```
rem -m -b1 -q -g
```
这一条用来按时间顺序查看即将到来的提醒事项清单,而
```
rem -m -b1 -q -cuc12 -w$(($(tput cols)+1)) | sed -e "s/\f//g" | less
```
这一条则显示一个适应终端宽度的日历:
![remcal][51]
### 字典
[rdictcc][52] 是一个鲜为人知的词典工具,它可以下载 [dict.cc][53] 上很棒的词典,并把它们保存到本地数据库中:
```
$ rdictcc rasch
====================[ A => B ]====================
rasch:
- apace
- brisk [speedy]
- cursory
- in a timely manner
- quick
- quickly
- rapid
- rapidly
- sharpish [Br.] [coll.]
- speedily
- speedy
- swift
- swiftly
rasch [gehen]:
- smartly [quickly]
Rasch {n} [Zittergras-Segge]:
- Alpine grass [Carex brizoides]
- quaking grass sedge [Carex brizoides]
Rasch {m} [regional] [Putzrasch]:
- scouring pad
====================[ B => A ]====================
Rasch model:
- Rasch-Modell {n}
```
### 记录和阅读
我有一个简单记录任务的备忘录,在 Vim 会话中基本上一直处于打开状态。我也使用备忘录作为工作中“已完成”工作的记录,这样就可以检查自己每天完成了哪些任务。
对于写文档、信件和演示文稿,我使用 [LaTeX][54] 进行高质量的排版。例如,一封简单的德式信件可以这样写:
```
\documentclass[paper = a4, fromalign = right]{scrlttr2}
\usepackage{german}
\usepackage{eurosym}
\usepackage[utf8]{inputenc}
\setlength{\parskip}{6pt}
\setlength{\parindent}{0pt}
\setkomavar{fromname}{Dennis Felsing}
\setkomavar{fromaddress}{Meine Str. 1\\69181 Leimen}
\setkomavar{subject}{Titel}
\setkomavar*{enclseparator}{Anlagen}
\makeatletter
\@setplength{refvpos}{89mm}
\makeatother
\begin{document}
\begin{letter} {Herr Soundso\\Deine Str. 2\\69121 Heidelberg}
\opening{Sehr geehrter Herr Soundso,}
Sie haben bei mir seit dem Bla Bla Bla.
Ich fordere Sie hiermit zu Bla Bla Bla auf.
\closing{Mit freundlichen Grüßen}
\end{letter}
\end{document}
```
在 [我的私人网站][55] 上可以找到更多的示例文档和演示文稿。
[Zathura][56] 打开 PDF 文件速度很快,支持类 Vim 的按键操作,还支持两种不同的 PDF 后端Poppler 和 MuPDF。偶尔遇到 Zathura 打不开的文件时,更全能一些的 [Evince][57] 可以救场。
### 图片编辑
照片编辑和交互式矢量图形方面,[GIMP][58] 和 [Inkscape][59] 分别是现成的简单选择。
有时 [Imagemagick][60] 就足够了,它可以直接在命令行中使用,从而自动化图片编辑。同样,[Graphviz][61] 和 [TikZ][62] 可以用来绘制图形和各类图表。
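这种命令行图片处理的通用示例如下(文件名和尺寸仅作演示,并非原文内容):
```
# 单张图片:缩小到 50% 并转为 JPEG
convert photo.png -resize 50% -quality 85 photo.jpg
# 批量处理:把当前目录下所有 PNG 缩放到不超过 1920x1080 并转为 JPEG
for f in *.png; do convert "$f" -resize '1920x1080>' "${f%.png}.jpg"; done
```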
### Web 浏览器
对于 Web 浏览器,我一直在使用 [Firefox][63]。相较于 Chrome它的可扩展性更好资源使用率更低。
不幸的是,在 Firefox 完全转向 Chrome 风格的扩展之后,[Pentadactyl][64] 扩展的开发就停止了,所以我的浏览器里再也没有令人满意的类 Vim 操作方式了。
### 媒体播放器
通过设置 `vo=gpu` 和 `hwdec=vaapi` 启用硬件解码后,[mpv][65] 在播放时的 CPU 占用率保持在 5%。与 PulseAudio 的默认处理相比mpv 中的 `audio-channels=2` 似乎能为我的立体声扬声器/耳机提供更清晰的向下混音。一个很棒的小功能是用 `Shift-Q` 退出(而不是只按 `Q`),这样可以保存回放位置。当你和母语不同的人一起看视频时,可以用 `--secondary-sid=` 同时显示两种字幕,主字幕位于屏幕底部,次字幕位于屏幕顶部。
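这些选项既可以写进 `~/.config/mpv/mpv.conf`,也可以临时在命令行里使用,例如(文件名与字幕轨编号仅为演示):
```
# 硬件解码 + 双声道向下混音播放一个视频文件
mpv --vo=gpu --hwdec=vaapi --audio-channels=2 video.mkv
# 同时显示两条字幕轨:主字幕在底部,次字幕在顶部
mpv --sid=1 --secondary-sid=2 video.mkv
```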
我的无线鼠标可以简单地通过一个小的配置文件( `~/.config/mpv/input.conf` )实现远程控制 mpv
```
MOUSE_BTN5 run "mixer" "pcm" "-2"
MOUSE_BTN6 run "mixer" "pcm" "+2"
MOUSE_BTN1 cycle sub-visibility
MOUSE_BTN7 add chapter -1
MOUSE_BTN8 add chapter 1
```
[youtube-dl][66] 非常适合观看在线托管的视频,使用 `-f bestvideo+bestaudio/best --all-subs --embed-subs` 命令可获得最高质量的视频。
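完整的命令行大致如下URL 为占位符):
```
youtube-dl -f 'bestvideo+bestaudio/best' --all-subs --embed-subs 'https://example.com/watch?v=xxxx'
```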
作为音乐播放器, [MOC][67] 不再活跃开发,但它仍是一个简易的播放器,可以播放各种可能的格式,包括最不常用的 Chiptune 格式。在 AUR 中有一个 [补丁][68] 增加了 PulseAudio 支持。即使在 CPU 时钟频率降到 800 MHz MOC 也只使用了单核 CPU 的 1-2% 。
![moc][69]
我的音乐收藏保存在家庭服务器上,因此我可以在任何地方访问它。它通过 [SSHFS][70] 挂载,并在 `/etc/fstab` 中配置了自动挂载:
```
root@server:/media/media /mnt/media fuse.sshfs noauto,x-systemd.automount,idmap=user,IdentityFile=/root/.ssh/id_rsa,allow_other,reconnect 0 0
```
### 跨平台构建
Linux 的一大优点是,可以在它上面为任何主流操作系统构建软件包!一开始,我使用 [QEMU][71] 分别运行旧版 Debian、Windows 和 Mac OS X 的虚拟机来为这些平台构建。
现在我改为在旧版 Debian 的 chroot 中构建(以获得最大的 Linux 兼容性),用 [MinGW][72] 交叉编译 Windows 版本,用 [OSXCross][73] 交叉编译 Mac OS X 版本。
用于 [构建 DDNet][74] 的脚本以及 [更新库构建的说明][75] 都基于这些工具。
### 备份
我们常常会忘记备份。即使放在最后一节,备份也不应该只是事后才想起的事情。
十年前我写了 [rrb][76](反向 rsync 备份),它是对 `rsync` 的一个封装,这样我只需要把备份服务器的 root SSH 权限授予被备份的计算机。令人惊讶的是,尽管我一直在使用 rrb它在过去十年里没有任何改动。
备份文件直接存储在文件系统中。使用硬链接实现增量备份(`--link-dest`)。一个简单的 [配置][77] 定义了备份保存时间,默认为:
```
KEEP_RULES=( \
7 7 \ # One backup a day for the last 7 days
31 8 \ # 8 more backups for the last month
365 11 \ # 11 more backups for the last year
1825 4 \ # 4 more backups for the last 5 years
)
```
我的一些计算机没有静态 IP/DNS但仍想用 rrb 备份,于是我使用了反向 SSH 隧道(作为一个 systemd 服务):
```
[Unit]
Description=Reverse SSH Tunnel
After=network.target
[Service]
ExecStart=/usr/bin/ssh -N -R 27276:localhost:22 -o "ExitOnForwardFailure yes" server
KillMode=process
Restart=always
[Install]
WantedBy=multi-user.target
```
现在,只要隧道在运行,备份服务器就可以通过 `ssh -p 27276 localhost` 访问这台计算机来执行备份,或者在 `.ssh/config` 中这样配置:
```
Host cr-remote
HostName localhost
Port 27276
```
既然谈到了 SSH 技巧:有时由于路由中断,某台服务器会很难访问。这时你可以让 SSH 连接借道另一台服务器,以获得更好的路由。例如,我的中国服务器从德国直连并不可靠,问题有时会持续数周,这时就可以通过美国服务器中转:
```
Host chn.ddnet.tw
ProxyCommand ssh -q usa.ddnet.tw nc -q0 chn.ddnet.tw 22
Port 22
```
### 结语
感谢你阅读这份我所用工具的清单。其中我大概还遗漏了许多日常中习以为常、不假思索的步骤。让我们看看我的软件设置在接下来的几年里还能保持多稳定。如果你有任何问题,请随时联系我:[dennis@felsin9.de][78]。
在 [Hacker News][79] 下评论吧。
--------------------------------------------------------------------------------
via: https://hookrace.net/blog/linux-desktop-setup/
作者:[Dennis Felsing][a]
选题:[lujun9972][b]
译者:[chenmu-kk](https://github.com/chenmu-kk)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://felsin9.de/nnis/
[b]: https://github.com/lujun9972
[1]: https://hookrace.net/public/linux-desktop/htop_small.png
[2]: https://hookrace.net/public/linux-desktop/htop.png
[3]: https://gentoo.org/
[4]: https://www.archlinux.org/
[5]: https://www.archlinux.org/news/
[6]: https://www.reddit.com/r/archlinux/comments/4zrsc3/keep_your_system_fully_functional_after_a_kernel/
[7]: https://www.suse.com/
[8]: https://ddnet.tw/
[9]: https://www.debian.org/
[10]: https://www.openbsd.org/
[11]: https://www.openbsd.org/faq/pf/
[12]: http://openbox.org/wiki/Main_Page
[13]: http://fluxbox.org/
[14]: https://dwm.suckless.org/
[15]: https://awesomewm.org/
[16]: https://xmonad.org/
[17]: https://www.haskell.org/
[18]: http://hackage.haskell.org/package/xmonad-contrib-0.15/docs/XMonad-Layout-LayoutScreens.html
[19]: http://robm.github.io/dzen/
[20]: https://github.com/brndnmtthws/conky
[21]: https://hookrace.net/public/linux-desktop/laptop_small.png
[22]: https://hookrace.net/public/linux-desktop/laptop.png
[23]: https://i3wm.org/
[24]: https://github.com/tmux/tmux/wiki
[25]: http://dtach.sourceforge.net/
[26]: https://github.com/def-/tach/blob/master/tach
[27]: https://www.gnu.org/software/bash/
[28]: http://www.zsh.org/
[29]: http://software.schmorp.de/pkg/rxvt-unicode.html
[30]: http://terminus-font.sourceforge.net/
[31]: https://www.vim.org/
[32]: http://ctags.sourceforge.net/
[33]: https://www.python.org/
[34]: https://nim-lang.org/
[35]: https://hisham.hm/htop/
[36]: http://lm-sensors.org/
[37]: https://01.org/powertop/
[38]: https://dev.yorhel.nl/ncdu
[39]: https://nmap.org/
[40]: https://www.tcpdump.org/
[41]: https://www.wireshark.org/
[42]: http://www.fetchmail.info/
[43]: http://www.procmail.org/
[44]: https://marlam.de/msmtp/
[45]: https://www.cis.upenn.edu/~bcpierce/unison/
[46]: https://rsync.samba.org/
[47]: http://www.mutt.org/
[48]: https://newsboat.org/
[49]: https://irssi.org/
[50]: https://www.roaringpenguin.com/products/remind
[51]: https://hookrace.net/public/linux-desktop/remcal.png
[52]: https://github.com/tsdh/rdictcc
[53]: https://www.dict.cc/
[54]: https://www.latex-project.org/
[55]: http://felsin9.de/nnis/research/
[56]: https://pwmt.org/projects/zathura/
[57]: https://wiki.gnome.org/Apps/Evince
[58]: https://www.gimp.org/
[59]: https://inkscape.org/
[60]: https://imagemagick.org/Usage/
[61]: https://www.graphviz.org/
[62]: https://sourceforge.net/projects/pgf/
[63]: https://www.mozilla.org/en-US/firefox/new/
[64]: https://github.com/5digits/dactyl
[65]: https://mpv.io/
[66]: https://rg3.github.io/youtube-dl/
[67]: http://moc.daper.net/
[68]: https://aur.archlinux.org/packages/moc-pulse/
[69]: https://hookrace.net/public/linux-desktop/moc.png
[70]: https://github.com/libfuse/sshfs
[71]: https://www.qemu.org/
[72]: http://www.mingw.org/
[73]: https://github.com/tpoechtrager/osxcross
[74]: https://github.com/ddnet/ddnet-scripts/blob/master/ddnet-release.sh
[75]: https://github.com/ddnet/ddnet-scripts/blob/master/ddnet-lib-update.sh
[76]: https://github.com/def-/rrb/blob/master/rrb
[77]: https://github.com/def-/rrb/blob/master/config.example
[78]: mailto:dennis@felsin9.de
[79]: https://news.ycombinator.com/item?id=18979731


@ -0,0 +1,103 @@
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12891-1.html)
[#]: subject: (Dual booting Windows and Linux using UEFI)
[#]: via: (https://opensource.com/article/19/5/dual-booting-windows-linux-uefi)
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss/users/ckrzen)
使用 UEFI 双启动 Windows 和 Linux
======
> 这是一份关于如何在同一台机器上使用统一可扩展固件接口UEFI设置 Linux 和 Windows 双重启动的速成说明。
![](https://img.linux.net.cn/data/attachment/album/202012/06/101431eb02wvkk0nzkk5sw.jpg)
我将强调一些重要点,而不是一步一步地指导你如何配置系统来实现双重启动。作为示例,我将提到我几个月前新买的笔记本计算机。我先是把 [Ubuntu Linux][2] 安装到了整块硬盘上,这就抹掉了预装的 [Windows 10][3] 环境。几个月后,我决定换装另一个 Linux 发行版 [Fedora Linux][4],并决定在双重启动配置中把 Windows 10 一起重新装回来。我会强调一些极其重要的实际细节。让我们开始吧!
### 固件
双重启动不仅仅是软件问题。或者说它算是软件的问题,因为它需要更改你的固件,以告诉你的机器如何开始启动程序。这里有一些和固件相关的重要事项要铭记于心。
#### UEFI vs BIOS
在尝试安装前,请确保你的固件配置是合适的。现在出售的大多数计算机使用一种名为<ruby>[统一可扩展固件接口][5]<rt>Unified Extensible Firmware Interface</rt></ruby>UEFI的新型固件它基本上取代了另一种名为<ruby>[基本输入输出系统][6]<rt>Basic Input Output System</rt></ruby>BIOS的固件包括很多固件供应商在内许多人通常把 BIOS 称为<ruby>传统启动模式<rt>Legacy Boot</rt></ruby>。
我不需要 BIOS所以我选择了 UEFI 模式。
#### 安全启动
另一个重要的设置是<ruby>安全启动<rt>Secure Boot</rt></ruby>。这个功能将检测启动路径是否被篡改,并阻止未经批准的操作系统的启动。现在,我禁用这个选项来确保我能够安装 Fedora Linux 。依据 Fedora 项目维基“[功能/安全启动][7]”部分的介绍可知Fedora Linux 在安全启动选项启用的时候也可以工作。这对其它的 Linux 发行版来说可能有所不同 — 我打算今后重新研究这项设置。
简而言之,如果你发现在这项设置启用时你不能安装你的 Linux 操作系统,那么禁用安全启动并再次重新尝试安装。
### 对启动驱动器进行分区
如果你选择双重启动并且两个操作系统都在同一个驱动器上,那么你必须将它分成多个分区。即使你使用两个不同的驱动器进行双重启动,出于各种各样的原因,大多数 Linux 环境也最好分成几个基本的分区。这里有一些选项值得考虑。
#### GPT vs MBR
如果你决定手动分区你的启动驱动器,在动手前,我建议使用<ruby>[GUID 分区表][8]<rt>GUID Partition Table</rt></ruby>GPT而不是使用旧的<ruby>[主启动记录][9]<rt>Master Boot Record</rt></ruby>MBR 。这种更改的原因之一是MBR 有两个特定的限制,而 GPT 却没有:
* MBR 可以最多拥有 15 个分区,而 GPT 可以最多拥有 128 个分区。
* MBR 最多仅支持 2 TB 磁盘,而 GPT 使用 64 位地址,这使得它最多支持 800 万 TB 的磁盘。
如果你最近购买过硬盘,那么你可能会知道现代的很多硬盘都超过了 2 TB 的限制。
#### EFI 系统分区
如果你是全新安装或使用一块新的驱动器,那么上面可能还没有任何分区。在这种情况下,操作系统安装程序会先创建一个分区,即<ruby>[EFI 系统分区][10]<rt>EFI System Partition</rt></ruby>ESP。如果你选择用 [gdisk][11] 之类的工具手动给驱动器分区,则需要用一些参数来创建这个分区。参照已有的 ESP我把它的大小设为约 500 MB并把分区类型设为 `ef00`EFI 系统分区。按照 UEFI 规范,它需要格式化为 FAT32/msdos很可能是因为这种格式被大量操作系统支持。
![分区][12]
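下面是用 `gdisk` 创建这样一个 ESP 的假设性示意(设备名 `/dev/sda` 和分区号仅为演示,实际操作前务必确认目标磁盘):
```
gdisk /dev/sda
#  在 gdisk 的交互界面中依次输入:
#  n      -> 新建分区(分区号和起始扇区用默认值)
#  +500M  -> 分区大小
#  ef00   -> 分区类型EFI System
#  w      -> 写入分区表并退出
mkfs.fat -F32 /dev/sda1   # 按 UEFI 规范格式化为 FAT32
```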
### 操作系统安装
在你完成先前的两个任务后,你就可以安装你的操作系统了。虽然我在这里关注的是 Windows 10 和 Fedora Linux ,当安装其它组合时的过程也是非常相似。
#### Windows 10
我开始 Windows 10 的安装,并创建了一个 20 GB 的 Windows 分区。因为我先前在我的笔记本计算机上安装了 Linux ,所以驱动器已经有了一个 ESP ,我选择保留它。我删除所有的现有 Linux 和交换分区来开始一次全新安装,然后开始我的 Windows 安装。Windows 安装程序自动创建另一个 16 MB 的小分区,称为 <ruby>[微软保留分区][13]<rt>Microsoft Reserved Partition</rt></ruby>MSR。在这完成后在 512 GB 启动驱动器上仍然有大约 400 GB 的未分配空间。
接下来,我继续完成了 Windows 10 安装过程。随后我重新启动到 Windows 来确保它是工作的,在操作系统第一次启动时,创建我的用户账号,设置 Wi-Fi ,并完成其它必须的任务。
#### Fedora Linux
接下来,我转而安装 Linux。在安装进行到磁盘配置这一步时我确保不去更改 Windows 的 NTFS 和 MSR 分区。我也没有更改 ESP只是把它的挂载点设置为 `/boot/efi`。然后我创建了常用的 ext4 分区:`/`(根分区)、`/boot` 和 `/home`。我创建的最后一个分区是 Linux 的交换分区swap。
像安装 Windows 一样,我继续完成了 Linux 安装,随后重新启动。令我高兴的是,在启动时<ruby>[大一统启动加载程序][14]<rt>GRand Unified Boot Loader</rt></ruby>GRUB菜单提供选择 Windows 或 Linux 的选项,这意味着我不需要再做任何额外的配置。我选择 Linux 并完成了诸如创建我的用户账号等常规步骤。
### 总结
总体而言,这个过程是不难的,在过去的几年里,从 BIOS 过渡到 UEFI 有一些困难需要解决,加上诸如安全启动等功能的引入。我相信我们现在已经克服了这些障碍,可以可靠地设置多重启动系统。
我不再怀念 [Linux LOader][15]LILO
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/5/dual-booting-windows-linux-uefi
作者:[Alan Formy-Duval][a]
选题:[lujun9972][b]
译者:[robsean](https://github.com/robsean)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alanfdoss/users/ckrzen
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_keyboard_desktop.png?itok=I2nGw78_ (Linux keys on the keyboard for a desktop computer)
[2]: https://www.ubuntu.com
[3]: https://www.microsoft.com/en-us/windows
[4]: https://getfedora.org
[5]: https://en.wikipedia.org/wiki/Unified_Extensible_Firmware_Interface
[6]: https://en.wikipedia.org/wiki/BIOS
[7]: https://fedoraproject.org/wiki/Features/SecureBoot
[8]: https://en.wikipedia.org/wiki/GUID_Partition_Table
[9]: https://en.wikipedia.org/wiki/Master_boot_record
[10]: https://en.wikipedia.org/wiki/EFI_system_partition
[11]: https://sourceforge.net/projects/gptfdisk/
[12]: /sites/default/files/u216961/gdisk_screenshot_s.png
[13]: https://en.wikipedia.org/wiki/Microsoft_Reserved_Partition
[14]: https://en.wikipedia.org/wiki/GNU_GRUB
[15]: https://en.wikipedia.org/wiki/LILO_(boot_loader)


@ -0,0 +1,164 @@
[#]: collector: (lujun9972)
[#]: translator: (alim0x)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12878-1.html)
[#]: subject: (Creating a Chat Bot with Recast.AI)
[#]: via: (https://opensourceforu.com/2019/11/creating-a-chat-bot-with-recast-ai/)
[#]: author: (Athira Lekshmi C.V https://opensourceforu.com/author/athira-lekshmi/)
用 Recast.AI 创建一个聊天机器人
======
[![][1]][2]
> 据 Gartner 2018 年 2 月的报告,“到 2020 年25% 的客户服务和支持业务将在参与渠道中整合虚拟客户助理VCA或聊天机器人技术而 2017 年只有不到 2%。”鉴于此,读者会发现本教程对理解开源的 Recast.AI 机器人创建平台的工作原理很有帮助。
聊天机器人,包括基于语音的以及其他技术的,已经实际使用了有一阵子了。从让用户参与谋杀解密游戏,到帮助完成房地产交易和医疗诊断,聊天机器人已经跨越了多个领域。
有许多平台可以让用户创建和部署机器人,在被 SAP 收购之后现在叫 SAP Conversational AI 的 Recast.AI 是其中的先行者。
酷炫的界面、协作性以及它所提供的分析工具,让它成为流行的选择。
正如 Recast 官方网站说的,“它是一个创建、训练、部署和监控智能机器人的终极协作平台。”
### 创建一个基础的机器人
让我们来看看如何在 Recast 创建一个基础的机器人。
1. 在 https://cai.tools.sap 创建一个账户。注册可以使用电子邮箱或者 Github 账户。
2. 在你登录之后,你会进入仪表板。点击右上角 “+” 新建机器人图标新建一个机器人。
3. 在下一个界面,你会看到一系列可选的预定义技能。暂时选择“<ruby>问候<rt>Greetings</rt></ruby>”(图 1。这个机器人已经经过训练能够理解基本的问候。
![图 1: 设置机器人属性][3]
4. 给机器人提供一个名字。目前来说,你可以让机器人讲一些笑话,所以我们将它命名为 Joke Bot选择英语作为默认语言。
5. 因为你不会处理任何敏感信息,所以在数据策略下选择非个人数据。然后选择公共机器人选项并点击创建一个机器人。
所以这就是你在 Recast 平台创建的机器人。
### 开发一个机器人的五个阶段
用 Recast 官方博客的话说,在机器人的生命中有五个阶段。
* 训练——教授机器人需要理解的内容
* 构建——使用机器人构建工具创建你的对话流
* 编写代码——将机器人连接到外部 API 或数据库
* 连接——将机器人发布到一个或多个消息平台
* 监控——训练机器人让它更敏锐,并且了解其使用情况
### 通过意图训练机器人
你可以在仪表板上看到搜索、分叉或创建一个<ruby>意图<rt>intent</rt></ruby>的选项。“‘意图’是一系列含义相同但构造不同的表达。‘意图’是你的机器人理解能力的核心。每个‘意图’代表了机器人可以理解的一种想法。”(摘自 Recast.AI 网站)
![图 2: 机器人面板][4]
就像先前决定的,你需要一个讲笑话的机器人。所以最基本的一点是,这个机器人要能理解用户是在要求它讲笑话——它不应该在用户只是说了一句“Hi”时就回复一个笑话这可不妙。把用户可能说的话分组整理比如
```
Tell me a joke.(给我讲个笑话。)
Tell me a funny fact.(告诉我一个有趣的事实。)
Can you crack a joke?(你可以讲个笑话吗?)
What's funny today?(今天有什么有趣的?)
```
……
在继续从头开始创建意图之前,让我们来看看搜索/分叉选项。在搜索框输入 “Joke”图 3。系统给出了全球的 Recast 用户创建的公开的意图清单,这就是为什么说 Recast 天然就是协作性质的。所以其实没有必要从头开始创建所有的意图,可以在已经创建的基础上进行构建。这就降低了训练具有常见意图的机器人所需的投入。
![图 3: 搜索一个意图][5]
* 选择列表中的第一个意图并将其分叉到机器人上。
* 点击<ruby>分叉<rt>Fork</rt></ruby>按钮,这个意图就添加到了机器人中(图 4。
![图 4: @joke 意图][6]
* 点击意图 @joke会显示出这个意图中已有的<ruby>表达<rt>expression</rt></ruby>列表(图 5。
![图 5: 预定义表达][7]
* 向其中添加更多的表达(图 6。
![图 6: 建议的表达][8]
添加了一些表达之后,机器人会给出一些建议,像图 7 展示的那样。选择几个将它们添加到意图中。你还可以根据机器人的上下文,标记你自己的自定义实体来检测关键词。
![图 7: 建议的表达][9]
### 技能
<ruby>技能<rt>skill</rt></ruby>是一块有明确目的的对话,机器人可以据此运行并达到目标。它可以像打招呼那么简单,也可以更复杂,比如基于用户提供的信息提供电影建议。
技能需要的不能只是一对问答,它需要多次交互。比如考虑一个帮你学习汇率的机器人。它一开始会问原货币,然后是目标货币,最后给出准确回应。结合技能可以创建复杂的对话流。
下面是如何给笑话机器人创建技能:
* 进入“构建”Build页面点击 “+” 图标创建技能。
* 将技能命名为 “Joke”图 8。
![图 8: 技能面板][10]
* 创建之后,点击这个技能。你会看到四个标签页:<ruby>读我<rt>Read me</rt></ruby>、<ruby>触发器<rt>Triggers</rt></ruby>、<ruby>需求<rt>Requirements</rt></ruby>和<ruby>动作<rt>Actions</rt></ruby>。
* 切换到需求页面。只有在笑话意图存在的时候,你才应该存储信息。所以,像图 9 那样添加一个需求。
![图 9: 添加一个触发器][11]
由于这个简单的使用范例,你不需要在需求选项卡中考虑任何特定的需求,但可以考虑只有当某些关键字或实体出现时才需要触发响应的情况——在这种情况下你需要需求。
需求是某个技能执行动作之前需要检索的意图或实体。需求是对话中机器人可以使用的重要信息。例如用户的姓名或位置。一旦一个需求完成,相关的值就会存储在机器人的内存中,供整个对话使用。
现在让我们转到动作页面来设置<ruby>回应<rt>response</rt></ruby>(参见图 10。
![图 10: 添加动作][12]
点击添加<ruby>新消息组<rt>new message group</rt></ruby>。然后选择<ruby>发送消息<rt>Send message</rt></ruby>并添加一条文本消息,在这个例子中可以是任何笑话。当然,你肯定不想让你的机器人每次都说一样的笑话,你可以添加多条消息,每次从中随机选择一条。
![图 11: 添加文本消息][13]
### 频道集成
一个成功的机器人还依赖于它的易得性。Recast 有不少的内置消息频道集成,如 Skype for Business、Kik Messenger、Telegram、Line、Facebook Messenger、Slack、Alexa 等等。除此之外Recast 还提供了 SDK 用于开发自定义的频道。
此外Recast 还提供了一个开箱即用的网页聊天(在连接页面中)。你可以自定义颜色主题、标题、机器人头像等。它会给你一个可以添加到页面中的脚本标签。你的聊天界面现在就可以使用了(图 12。
![图 12: 设置网络聊天][14]
网页聊天的代码是开源的,开发者可以更方便地定制外观、标准回应类型等等。面板提供了如何将机器人部署到各种频道的逐步过程说明。这个笑话机器人部署在 Telegram 和网页聊天上,就像图 13 展示的那样。
![图 13: 网页聊天部署][15]
![图 14: Telegram 中开发的机器人][16]
### 还有更多
Recast 支持多语言,创建机器人的时候选择一个语言作为基础,但之后你有选项可以添加更多你想要的语言。
![图 15: 多语言机器人][17]
这里的例子是一个简单的静态笑话机器人实际使用中可能需要更多的和不同系统的交互。Recast 有 Web 钩子功能,用户可以连接到不同的系统来获取回应。同时它还有详细的 API 文档来帮助使用平台的每个独立功能。
至于分析Recast 有一个监控面板,帮助你了解机器人的准确度以及更加深入地训练机器人。
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/11/creating-a-chat-bot-with-recast-ai/
作者:[Athira Lekshmi C.V][a]
选题:[lujun9972][b]
译者:[alim0x](https://github.com/alim0x)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/athira-lekshmi/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/04/Build-ChatBoat.jpg?resize=696%2C442&ssl=1 (Build ChatBoat)
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/04/Build-ChatBoat.jpg?fit=900%2C572&ssl=1
[3]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-1-Setting-the-bot-properties.jpg
[4]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-2-Setting-the-bot-properties.jpg
[5]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-3-Searching-an-intent.jpg
[6]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-4-@joke-intent.jpg
[7]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-5-Predefined-expressions.jpg
[8]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-6-Suggested-expressions.jpg
[9]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-7-Suggested-expressions.jpg
[10]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-8-Skills-dashboard.jpg
[11]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-9-Adding-a-trigger.jpg
[12]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-10-Adding-actions.jpg
[13]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-11-Adding-text-messages.jpg
[14]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-12-Setting-up-webchat.jpg
[15]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-13-Webchat-deployed.jpg
[16]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-14-Bot-deployed-in-Telegram.jpg
[17]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-15-Multi-language-bot.jpg
[18]: https://secure.gravatar.com/avatar/d24503a2a0bb8bd9eefe502587d67323?s=100&r=g
[19]: https://opensourceforu.com/author/athira-lekshmi/
[20]: https://opensourceforu.com/wp-content/uploads/2019/11/assoc.png
[21]: https://feedburner.google.com/fb/a/mailverify?uri=LinuxForYou&loc=en_US


@ -0,0 +1,78 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12886-1.html)
[#]: subject: (scanimage: scan from the command line!)
[#]: via: (https://jvns.ca/blog/2020/07/11/scanimage--scan-from-the-command-line/)
[#]: author: (Julia Evans https://jvns.ca/)
scanimage从命令行扫描!
======
![](https://img.linux.net.cn/data/attachment/album/202012/05/105822m30t6x66hz3hx6x3.jpg)
这又是一篇关于我很喜欢的一个命令行工具的文章。
昨晚,出于官僚原因,我需要扫描一些文档。我以前从来没有在 Linux 上使用过扫描仪,我担心会花上好几个小时才弄明白。我从使用 `gscan2pdf` 开始,但在用户界面上遇到了麻烦。我想同时扫描两面(我知道我们的扫描仪支持),但无法使它工作。
### 遇到 scanimage
`scanimage` 是一个命令行工具,在 `sane-utils` Debian 软件包中。我想所有的 Linux 扫描工具都使用 `sane` “scanner access now easy” 库,所以我猜测它和其他扫描软件有类似的能力。在这里,我不需要 OCR所以我不打算谈论 OCR。
### 用 scanimage -L 得到你的扫描仪的名字
`scanimage -L` 列出了你所有的扫描设备。
一开始我不能让它工作,我有点沮丧,但事实证明,我把扫描仪连接到了我的电脑上,但没有插上电源。
插上后,它马上就能工作了。显然我们的扫描仪叫 `fujitsu:ScanSnap S1500:2314`。万岁!
### 用 --help 列出你的扫描仪选项
显然每个扫描仪有不同的选项(有道理!),所以我运行这个命令来获取我的扫描仪的选项:
```
scanimage --help -d 'fujitsu:ScanSnap S1500:2314'
```
我发现我的扫描仪支持 `--source` 选项(可以用它来启用双面扫描)和 `--resolution` 选项(我把它改为 150以减小文件大小、加快扫描速度。
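把这些选项组合起来,一次双面、150 DPI 的扫描大致是这样(`-d` 和 `--source` 的取值因扫描仪而异,需要先用上面的 `--help` 命令确认,这里仅作示意):
```
scanimage -b --format png -d 'fujitsu:ScanSnap S1500:2314' \
  --source 'ADF Duplex' --resolution 150
```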
### scanimage 不支持输出 PDF 文件(但你可以写一个小脚本)
唯一的缺点是:我想要一个 PDF 格式的扫描文件,而 scanimage 似乎不支持 PDF 输出。
所以我写了这个 5 行的 shell 脚本在一个临时文件夹中扫描一堆 PNG 文件,并将结果保存到 PDF 中。
```
#!/bin/bash
set -e
DIR=`mktemp -d`      # 临时目录,存放扫描出来的 PNG
CUR=$PWD
cd $DIR
scanimage -b --format png -d 'fujitsu:ScanSnap S1500:2314' --source 'ADF Front' --resolution 150
convert *.png $CUR/$1   # 用 ImageMagick 把所有 PNG 合并成目标 PDF
```
我像这样运行脚本:`scan-single-sided output-file-to-save.pdf`
你可能需要为你的扫描仪设置不同的 `-d``-source`
### 这真是太简单了!
我一直以为在 Linux 上使用打印机/扫描仪是一个噩梦,我真的很惊讶 `scanimage` 可以工作。我可以直接运行我的脚本 `scan-single-sided receipts.pdf`,它将扫描文档并将其保存到 `receipts.pdf`
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2020/07/11/scanimage--scan-from-the-command-line/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972


@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: (wxy)
[#]: publisher: ( )
[#]: url: ( )
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12877-1.html)
[#]: subject: (Add sound to your Python game)
[#]: via: (https://opensource.com/article/20/9/add-sound-python-game)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
@ -12,7 +12,7 @@
> 通过添加声音到你的游戏中,听听当你的英雄战斗、跳跃、收集战利品时会发生什么。学习如何在这个 Pygame 系列的第十三篇文章中,创建一个声音平台类精灵。
![彩色声波图][1]
![](https://img.linux.net.cn/data/attachment/album/202012/02/092244du74f14837zmo7fz.jpg)
在 [Python 3][2] 中使用 [Pygame][3] 模块来创建电脑游戏的系列文章仍在进行中,这是第十三部分。先前的文章是:
@ -27,7 +27,7 @@
9. [使你的 Python 游戏玩家能够向前和向后跑][12]
10. [在你的 Python 平台类游戏中放一些奖励][13]
11. [添加计分到你的 Python 游戏][14]
12. [在你的 Python 游戏中加入投掷技巧][15]
12. [在你的 Python 游戏中加入投掷机制][15]
Pygame 提供了一种简单的方法来集成声音到你的 Python 电脑游戏中。Pygame 的 [mixer 模块][16] 可以依据命令播放一个或多个声音,并且你也可以将这些声音混合在一起,例如,你能够在听到你的英雄收集奖励或跳过敌人声音的同时播放背景音乐。
@ -144,7 +144,7 @@ via: https://opensource.com/article/20/9/add-sound-python-game
[12]: https://linux.cn/article-11819-1.html
[13]: https://linux.cn/article-11828-1.html
[14]: https://linux.cn/article-11839-1.html
[15]: https://opensource.com/article/20/9/add-throwing-python-game
[15]: https://linux.cn/article-12872-1.html
[16]: https://www.pygame.org/docs/ref/mixer.html
[17]: https://opensource.com/article/20/1/what-creative-commons
[18]: https://freesound.org


@ -0,0 +1,189 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12894-1.html)
[#]: subject: (7 Git tricks that changed my life)
[#]: via: (https://opensource.com/article/20/10/advanced-git-tips)
[#]: author: (Rajeev Bera https://opensource.com/users/acompiler)
改变我使用 Git 工作方式的七个技巧
======
> 这些有用的技巧将改变你使用这个流行的版本控制系统的工作方式。
![](https://img.linux.net.cn/data/attachment/album/202012/07/092803d67fa7bttuuj98fb.jpg)
Git 是目前最常见的版本控制系统之一,无论是私有系统还是公开托管的网站,都在使用它进行各种开发工作。但无论我对 Git 的使用有多熟练,似乎总有一些功能还没有被发现,下面是改变我使用 Git 工作方式的七个技巧。
### 1、Git 中的自动更正
我们有时都会打错字,但如果启用了 Git 的自动更正功能,就可以让 Git 自动修正打错的子命令。
假设你想用 `git status` 检查状态,却不小心输入了 `git stats`。正常情况下Git 会告诉你 `stats` 不是一条有效的命令:
```
$ git stats
git: 'stats' is not a git command. See 'git --help'.
The most similar command is
status
```
为了避免类似的情况发生,请在 Git 配置中启用 Git 自动更正功能:
```
$ git config --global help.autocorrect 1
```
如果你希望这个命令只适用于你当前的版本库,请省略 `--global` 选项。
这条命令启用了自动更正功能。更深入的教程可以在 [Git Docs][2] 中找到,但尝试一下和上面一样的错误命令,就能很好地了解这个配置的作用:
```
$ git stats
git: 'stats' is not a git command. See 'git --help'.
On branch master
Your branch is up to date with 'origin/master'.
nothing to commit, working tree clean
```
Git 现在不会建议使用其他子命令,而是直接运行最上面的建议,在本例中是 `git status`。
### 2、计算你的提交量
你需要计算提交数量可能有很多原因。例如,许多开发者通过计算提交数量来判断何时该增加构建版本号,或者只是想了解项目的进展情况。
要计算提交数量其实很简单直接,下面是 Git 的命令:
```
$ git rev-list --count branch-name
```
在上面的命令中,`branch-name` 应该是当前版本库中有效的分支名称:
```
$ git rev-list --count master
32
$ git rev-list --count dev
34
```
### 3、优化你的仓库
你的代码仓库不仅对你有价值,对你的组织也有价值。你可以通过一些简单的做法来保持你的版本库的清洁和更新。其中一个最好的做法是 [使用 .gitignore 文件][3]。使用这个文件,就是告诉 Git 不要存储许多不需要的文件,比如二进制文件、临时文件等等。
为了进一步优化你的版本库,你可以使用 Git 的垃圾收集功能:
```
$ git gc --prune=now --aggressive
```
当你或你的团队大量使用 `pull``push` 命令时,这条命令就会起到帮助作用。
这个命令是一个内部工具,可以清理仓库中无法访问或 “孤儿” Git 对象。
### 4、备份未被跟踪的文件
大多数时候,删除所有未被跟踪的文件是安全的。不过很多时候,你不仅要删除,还要为你的未跟踪文件创建一个备份,以备以后需要。
通过 Git 和一些 Bash 命令管道,可以很容易地为你的未被跟踪的文件创建一个压缩包:
```
$ git ls-files --others --exclude-standard -z |\
xargs -0 tar rvf ~/backup-untracked.zip
```
上面的命令制作了一个名为 `backup-untracked.zip` 的存档(并排除了 `.gitignore` 中列出的文件)。
### 5、了解你的 .git 文件夹
每个版本库都有一个 `.git` 文件夹。它是一个特殊的隐藏文件夹。
```
$ ls -a
. … .git
```
Git 的工作主要依赖于两个部分:
1. 工作树(你当前签出的文件状态)。
2. 你的 Git 仓库的路径(即你的 `.git` 文件夹的位置,其中包含版本信息)。
这个文件夹存储了所有的引用和其他重要的细节比如配置、仓库数据、HEAD 状态、日志等等。
如果你删除这个文件夹,你的源代码的当前状态不会被删除,但你的远程信息,如你的项目历史记录,会被删除。删除这个文件夹意味着你的项目(至少是本地副本)不再处于版本控制之下。这意味着你不能跟踪你的变化;你不能从远程拉取或推送。
一般来说,不需要在 `.git` 文件夹里做什么,也没有什么应该做的。它是由 Git 管理的,基本上被认为是个禁区。然而,这个目录里有一些有趣的工件,包括 HEAD 的当前状态。
```
$ cat .git/HEAD
ref: refs/heads/master
```
它还可能包含对你的存储库的描述:
```
$ cat .git/description
```
这是一个未命名的仓库,编辑这个 `description` 文件可以命名这个仓库。
Git 钩子文件夹(`hooks`)也在这里,里面有一些钩子示例文件。你可以阅读这些示例来了解通过 Git 钩子可以实现什么,你也可以 [阅读 Seth Kenlon 的 Git 钩子介绍][4]。
### 6、查看另一个分支的文件
有时你想查看另一个分支的文件的内容。用一个简单的 Git 命令就可以实现,而且不需要切换分支。
假设你有一个名为 [README.md][5] 的文件,它在 `main` 分支中,而你正在 `dev` 分支上工作。
使用下面的 Git 命令,你可以在终端上完成:
```
$ git show main:README.md
```
一旦你执行了这个命令,你就可以在你的终端上查看文件的内容。
### 7、在 Git 中搜索
只需一个简单的命令,你就可以像专业人士一样在 Git 中搜索。更棒的是,即使你不确定是在哪个提交或分支上做的修改,也可以在 Git 中搜索。
```
$ git rev-list --all | xargs git grep -F 'string'
```
例如,假设你想在你的版本库中搜索 `font-size: 52 px;` 这个字符串:
```
$ git rev-list --all | xargs git grep -F 'font-size: 52 px;'
F3022…9e12:HtmlTemplate/style.css: font-size: 52 px;
E9211…8244:RR.Web/Content/style/style.css: font-size: 52 px;
```
### 试试这些技巧
希望这些高级技巧对你有用,提高你的工作效率,为你节省很多时间。
你有喜欢的 [Git 小技巧][6] 吗?在评论中分享吧。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/10/advanced-git-tips
作者:[Rajeev Bera][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/acompiler
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_screen_windows_files.png?itok=kLTeQUbY (Computer screen with files or windows open)
[2]: https://git-scm.com/book/en/v2/Customizing-Git-Git-Configuration#_code_help_autocorrect_code
[3]: https://linux.cn/article-12524-1.html
[4]: https://opensource.com/life/16/8/how-construct-your-own-git-server-part-6
[5]: http://README.md
[6]: https://acompiler.com/git-tips/


@ -0,0 +1,93 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12880-1.html)
[#]: subject: (How to rebase to Fedora 33 on Silverblue)
[#]: via: (https://fedoramagazine.org/how-to-rebase-to-fedora-33-on-silverblue/)
[#]: author: (Michal Konečný https://fedoramagazine.org/author/zlopez/)
如何在 Silverblue 上变基到 Fedora 33
======
![](https://img.linux.net.cn/data/attachment/album/202012/02/232440exewdbwdhde4mqhv.jpg)
Silverblue 是[一个建立在 Fedora 之上的桌面操作系统][2]。它非常适合日常使用、开发和基于容器的工作流程。它提供了[众多的优势][3],例如在出现任何问题时能够回滚。如果你想在你的 Silverblue 系统上更新到 Fedora 33这篇文章会告诉你如何做。它不仅告诉你该怎么做还告诉你如果发生了不可预见的事情时该如何回退。
在实际做变基到 Fedora 33 之前,你应该应用任何挂起的更新。在终端中输入以下内容:
```
$ rpm-ostree update
```
或通过 GNOME 软件中心安装更新并重启。
### 使用 GNOME 软件中心变基
GNOME 软件中心会在更新界面显示有新版本的 Fedora 可用。
![Fedora 33 is available][4]
首先你需要做的是下载新镜像,点击 “Download” 按钮。这将需要一些时间,完成后你会看到更新已经准备好安装了。
![Fedora 33 is ready for installation][5]
点击 “Install” 按钮。这一步只需要几分钟,然后会提示你重启电脑。
![Restart is needed to rebase to Fedora 33 Silverblue][6]
点击 “Restart” 按钮就可以了。重启后,你将进入新的 Fedora 33 版本。很简单,不是吗?
### 使用终端变基
如果你喜欢在终端上做所有的事情,那么接下来的指南就适合你。
使用终端变基到 Fedora 33 很简单。首先,检查 33 版本分支是否可用:
```
$ ostree remote refs fedora
```
你应该在输出中看到以下内容:
```
fedora:fedora/33/x86_64/silverblue
```
接下来,将你的系统变基到 Fedora 33 分支。
```
$ rpm-ostree rebase fedora:fedora/33/x86_64/silverblue
```
最后要做的是重启你的电脑并启动到 Fedora 33。
### 如何回滚
如果发生了什么不好的事情,例如你无法启动到 Fedora 33那也很容易回滚。只需在启动时的 GRUB 菜单中选择前一个条目,系统就会以切换到 Fedora 33 之前的状态启动。要使这一改变永久化,请使用以下命令:
```
$ rpm-ostree rollback
```
就是这样了。现在你知道如何将 Silverblue 变基为 Fedora 33 并回滚了。那为什么不在今天就做呢?
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/how-to-rebase-to-fedora-33-on-silverblue/
作者:[Michal Konečný][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/zlopez/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/02/fedora-silverblue-logo.png
[2]: https://docs.fedoraproject.org/en-US/fedora-silverblue/
[3]: https://fedoramagazine.org/give-fedora-silverblue-a-test-drive/
[4]: https://fedoramagazine.org/wp-content/uploads/2020/10/Screenshot-from-2020-10-29-12-53-37-1024x725.png
[5]: https://fedoramagazine.org/wp-content/uploads/2020/10/Screenshot-from-2020-10-29-13-00-15-1024x722.png
[6]: https://fedoramagazine.org/wp-content/uploads/2020/10/Screenshot-from-2020-10-29-13-01-32-1024x727.png


@ -0,0 +1,177 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12868-1.html)
[#]: subject: (Getting started with Emacs)
[#]: via: (https://opensource.com/article/20/3/getting-started-emacs)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
10 个让你进入 Emacs 世界的技巧
======
> 10 个技巧,让你深入这个有用的开源文本编辑器的世界。
![](https://img.linux.net.cn/data/attachment/album/202011/29/103757lccy9ljxiowooyzy.jpg)
很多人都说想学 [Emacs][2],但很多人在短暂的接触后就退缩了。这并不是因为 Emacs 不好,也不是 Emacs 复杂。我相信,问题在于人们其实并不想“学习” Emacs而是他们想习惯 Emacs 的传统。他们想了解那些神秘的键盘快捷键和不熟悉的术语。他们想按照他们认为的“使用目的”来使用 Emacs。
我很能体会这一点,因为我对 Emacs 的感觉就是这样。我以为真正的 Emacs 用户都只在终端里运行它,从来不用方向键和菜单,更不会用鼠标。这是个让自己迟迟无法开始使用 Emacs 的好办法。世上有足够多独一无二的 `.emacs` 配置文件可以证明:如果说 Emacs 用户之间有什么是一致的,那就是每个人使用 Emacs 的方式都不一样。
学习 Emacs 很容易。爱上 Emacs 才是最难的。要爱上 Emacs你必须发现它所拥有的功能而这些功能是你一直在寻找的有时你并不知道你已经错过了它们。这需要经验。
获得这种经验的唯一方法就是从一开始就积极使用 Emacs。这里有十个小提示可以帮助你找出最适合你的方法。
### 从 GUI 开始
Emacs以及它的友好竞争者 [Vim][3])最伟大的事情之一是它可以在终端中运行,这在你 SSH 进入服务器时很有用,但在过去 15 年来制造的计算机上意义不大。Emacs 的 GUI 版本可以在极度[低功耗的设备][4]上运行,它有很多实用的功能,无论是新手还是有经验的用户都可以使用它。
例如,如果你不知道如何在 Emacs 中只用键盘快捷键复制一个单词,编辑菜单的复制、剪切和粘贴选择提供了最轻松的路径。没有理由因为选择了 Emacs 而惩罚自己。使用它的菜单,用鼠标选择区域,点击缓冲区内的按钮,不要让陌生感阻碍你的工作效率。
![Emacs slackware][5]
这些功能被内置到 Emacs 中,是因为用户在使用它们。你应该在你需要的时候使用它们,而当你最终在 VT100 终端上通过 SSH 使用 Emacs没有 `Alt` 或方向键的时候,你才应该使用这些晦涩的命令。
### 习惯术语
Emacs 的 UI 元素有着特殊的术语。个人计算的发展并不是建立在相同的术语上,所以很多术语对现代计算机用户来说比较陌生,还有一些术语虽然相同,但含义不同。下面是一些最常见的术语。
* <ruby>框架<rt>Frame</rt></ruby>:在 Emacs 中,“框架”就是现代计算机所说的“窗口”。
* <ruby>缓冲区<rt>Buffer</rt></ruby>:“缓冲区”是 Emacs 的一个通信通道。它可以作为 Emacs 进程的命令行,也可以作为 shell或者只是一个文件的内容。
* <ruby>窗口<rt>Window</rt></ruby>:“窗口”是你观察一个缓冲区的视角。
* <ruby>迷你缓冲区<rt>Mini-buffer</rt></ruby>Emacs 主要的命令行,位于 Emacs 窗口的底部。
![Emacs tutorial map][6]
### 让 Emacs 的修饰键变得更有意义
在 Emacs 的文档里PC 键盘上的 `Ctrl` 键被记作 `C``Alt` 键被记作 `M`。这并不是指键盘上的 `C` 键和 `M` 键,而且由于它们总是与相应的字母或符号键成对出现,所以在文档中很容易识别。
例如,`C-x` 在现代键盘符号中的意思是 `Ctrl+X``M-x` 是 `Alt+X`。就像你从任何应用程序中剪切文本时一样,同时按下这两个键。
不过,还有另一个层次的键盘快捷键,与现代电脑上的任何东西都完全不同。有时,键盘快捷键并不只是一个键组合,而是由一系列的按键组成。
例如,`C-x C-f` 的意思是像往常一样按 `Ctrl+X`,然后再按 `Ctrl+F`。
有时,一个键盘快捷键有混合的键型。组合键 `C-x 3` 意味着像往常一样按 `Ctrl+X`,然后按数字 `3` 键。
Emacs 之所以能做到这些花哨的强力组合,是因为某些键会让 Emacs 进入一种特殊的命令模式。如果你按 `C-x`(也就是 `Ctrl+X`),就是告诉 Emacs 进入等待状态,等待第二个键或键盘快捷键。
Emacs 的文档,无论是官方的还是非官方的,都有很多键盘快捷键。在心里练习把 `C` 键翻译成 `Ctrl` 键,`M` 键翻译成 `Alt` 键,那么这些文档对你来说都会变得更有意义。
### 剪切、复制和粘贴的备用快捷方式
从规范上,复制文本是通过一系列的键盘快捷键进行的,这些快捷键取决于你想要复制或剪切的方式。
例如,你可以用 `M-d``Alt+D` 的 Emacs 行话)剪切一整个单词,用 `C-k``Ctrl+K`)剪切一整行,或者用 `C-w``Ctrl+W`)剪切一个高亮区域。如果你愿意,可以习惯这种方式;但如果你更喜欢 `Ctrl+C`、`Ctrl+X` 和 `Ctrl+V`,也可以改用它们。
启用现代的“剪切-复制-粘贴”需要激活一个名为 CUA<ruby>通用用户访问<rt>Common User Access</rt></ruby>)的功能。要激活 CUA请单击“选项”菜单并选择“使用 CUA 键”。启用后,`C-c` 复制高亮显示的文本,`C-x` 剪切高亮显示的文本,`C-v` 粘贴文本。这个模式只有在你选择了文本之后才会实际激活,所以你仍然可以学习 Emacs 通常使用的 `C-x``C-c` 绑定。
### 用哪个都好
Emacs 是一个应用程序,它不会意识到你对它的感情或忠诚。如果你只想用 Emacs 来完成那些“感觉”适合 Emacs 的任务,而用别的编辑器(比如 Vim来完成其他任务这完全可以。
你与一个应用程序的交互会影响你的工作方式,所以如果 Emacs 中所需要的按键模式与特定任务不一致,那么就不要强迫自己使用 Emacs 来完成该任务。Emacs 只是众多可供你使用的开源工具之一,没有理由让自己只限于一种工具。
### 探索新函数
Emacs 所做的大部分工作都是一个 elisp 函数它可以从菜单选择和键盘快捷键调用或者在某些情况下从特定事件中调用。所有的函数都可以从迷你缓冲区Emacs 框架底部的命令行)执行。理论上,你甚至可以通过键入 `forward-word``backward-word` 以及 `next-line``previous-line` 等函数来导航光标。这肯定是无比低效的但这就是一种直接访问你运行的代码的方式。在某种程度上Emacs 就是自己的 API。
你可以通过在社区博客上阅读有关 Emacs 的资料来了解新函数,或者你可以采取更直接的方法,使用描述函数(`describe-function`)。要获得任何函数的帮助,按 `M-x`(也就是 `Alt+X`),然后输入 `describe-function`,然后按回车键。系统会提示你输入一个函数名称,然后显示该函数的描述。
你可以通过按 `M-x`(即 `Alt+X`),然后键入 `?` 来获得所有可用函数的列表。
你也可以在输入函数时,通过按 `M-x` 键,然后输入 `auto-complete-mode`,再按回车键,获得弹出的函数描述。激活该模式后,当你在文档中键入任何 Emacs 函数时,都会向你提供自动补完选项,以及函数的描述。
![Emacs function][7]
当你找到一个有用的函数并使用它时Emacs 会告诉你它的键盘绑定(如果有的话)。如果没有,你可以自己分配一个:打开你的 `$HOME/.emacs` 配置文件并加入键盘快捷键。语法是 `global-set-key`,后面是你要使用的键盘快捷键,然后是你要调用的函数。
例如,要将 `screenwriter-slugline` 函数分配一个键盘绑定:
```
(global-set-key (kbd "C-c s") 'screenwriter-slugline)
```
重新加载配置文件,键盘快捷键就可以使用了:
```
M-x load-file ~/.emacs
```
### 紧急按钮
当你使用 Emacs 并尝试新的函数时你一定会开始调用一些你并不想调用的东西。Emacs 中通用的紧急按钮是 `C-g`(就是 `Ctrl+G`)。
我通过将 G 与 GNU 联系起来来记住这一点,我想我是在呼吁 GNU 将我从一个错误的决定中拯救出来,但请随意编造你自己的记忆符号。
如果你按几下 `C-g`Emacs 的迷你缓冲区就会回到潜伏状态,弹出窗口被隐藏,你又回到了一个普通的、无聊的文本编辑器的安全状态。
### 忽略键盘快捷键
潜在的键盘快捷键太多在这里无法一一总结更不希望你能记住。这是设计好的。Emacs 的目的是为了定制,当人们为 Emacs 编写插件时,他们可以定义自己的特殊键盘快捷键。
我们的想法不是要马上记住所有的快捷键。相反,你的目标是让你在使用 Emacs 时感到舒适。你在 Emacs 中变得越舒适,你就越会厌倦总是求助于菜单栏,你就会开始记住对你重要的组合键。
根据自己在 Emacs 中通常做的事情,每个人都有自己喜欢的快捷方式。一个整天用 Emacs 写代码的人可能知道运行调试器或启动特定语言模式的所有键盘快捷键,但对 Org 模式或 Artist 模式一无所知。这很自然,也很好。
### 使用 Bash 时练习 Emacs
了解 Emacs 键盘快捷键的一个好处是,其中许多快捷键也适用于 Bash。
* `C-a`:到行首
* `C-e`:到行尾
* `C-k`:剪切整行
* `M-f`:向前一个字
* `M-b`:向后一个字
* `M-d`:剪切一个字
* `C-y`:贴回(粘贴)最近剪切的内容
* `M-Shift-U`:大写一个词
* `C-t`:交换两个字符(例如,`sl` 变成 `ls`
还有更多的例子,它能让你与 Bash 终端的交互速度超乎你的想象。
### 包
Emacs 有一个内置的包管理器来帮助你发现新的插件。它的包管理器包含了帮助你编辑特定类型文本的模式(例如,如果你经常编辑 JSON 文件,你可以尝试使用 ejson 模式、嵌入的应用程序、主题、拼写检查选项、linter 等。这就是 Emacs 有可能成为你日常计算的关键所在;一旦你找到一个优秀的 Emacs 包,你可能离不开它了。
![Emacs emoji][8]
你可以按 `M-x`(就是 `Alt+X`)键,然后输入 `package-list-packages` 命令再按回车键来浏览包。软件包管理器在每次启动时都会更新缓存所以第一次使用时要耐心等待它下载可用软件包的列表。一旦加载完毕你可以用键盘或鼠标进行导航记住Emacs 是一个 GUI 应用程序)。每一个软件包的名称都是一个按钮,所以你可以将光标移到它上面,然后按回车键,或者直接用鼠标点击它。你可以在 Emacs 框架中出现的新窗口中阅读有关软件包的信息,然后用安装按钮来安装它。
有些软件包需要特殊的配置,有时会在它的描述中列出,但有时需要你访问软件包的主页来阅读更多的信息。例如,自动完成包 `ac-emoji` 很容易安装,但需要你定义一个符号字体。无论哪种方式都可以使用,但你只有在安装了字体的情况下才能看到相应的表情符号,除非你访问它的主页,否则你可能不会知道。
### 俄罗斯方块
Emacs 有游戏,信不信由你。有数独、拼图、扫雷、一个好玩的心理治疗师,甚至还有俄罗斯方块。这些并不是特别有用,但在任何层面上与 Emacs 进行交互都是很好的练习,游戏是让你在 Emacs 中花费时间的好方法。
![Emacs tetris][9]
俄罗斯方块也是我最初接触 Emacs 的方式所以在该游戏的所有版本中Emacs 版本才是我真正的最爱。
### 使用 Emacs
GNU Emacs 之所以受欢迎,是因为它的灵活性和高度可扩展性。人们习惯了 Emacs 的键盘快捷键,以至于他们习惯性地尝试在其他所有的应用程序中使用这些快捷键,他们将应用程序构建到 Emacs 中,所以他们永远不需要离开。如果你想让 Emacs 在你的计算生活中扮演重要角色,最终的关键是拥抱未知,开始使用 Emacs。磕磕绊绊地直到你发现如何让它为你工作然后安下心来享受 40 年的舒适生活。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/3/getting-started-emacs
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/keyboaord_enter_writing_documentation.jpg?itok=kKrnXc5h (Computer keyboard typing)
[2]: https://opensource.com/downloads/emacs-cheat-sheet
[3]: https://opensource.com/downloads/cheat-sheet-vim
[4]: https://opensource.com/article/17/2/pocketchip-or-pi
[5]: https://opensource.com/sites/default/files/uploads/emacs-slackware.jpg (Emacs slackware)
[6]: https://opensource.com/sites/default/files/uploads/emacs-tutorial-map.png (Emacs tutorial map)
[7]: https://opensource.com/sites/default/files/uploads/emacs-function.jpg (Emacs function)
[8]: https://opensource.com/sites/default/files/uploads/emacs-emoji_0.jpg (Emacs emoji)
[9]: https://opensource.com/sites/default/files/uploads/emacs-tetris.jpg (Emacs tetris)


@ -1,28 +1,31 @@
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12871-1.html)
[#]: subject: (An introduction to mutation testing in Python)
[#]: via: (https://opensource.com/article/20/7/mutmut-python)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
Python 突变测试
Python 突变测试介
======
通过突变测试来修复未知的 bug。
![Searching for code][1]
> 通过突变测试来修复未知的 bug。
![](https://img.linux.net.cn/data/attachment/album/202011/29/230106ie9xc89dj3jx1yj9.jpg)
你一定对所有内容都进行了测试,也许你甚至在项目仓库中有一个徽章,标明有 100% 的测试覆盖率,但是这些测试真的帮到你了吗?你怎么知道的?
开发人员很清楚单元测试的 _成本_,必须编写测试。有时它们无法按照预期工作:存在假告警或者抖动测试。在不更改任何代码的情况下有时成功,有时失败。通过单元测试发现的小问题很有价值,但是通常它们悄无声息的出现在开发人员的机器上,并且在提交到版本控制之前就已得到修复。但真正令人担忧的问题大多是不可见的。最糟糕的是_丢失的告警_ 是完全不可见的:你看不到没能捕获的错误,直到出现在用户手上--有时甚至连用户都看不到。
开发人员很清楚单元测试的*成本*。测试必须要编写。有时它们无法按照预期工作:存在假告警或者抖动测试。在不更改任何代码的情况下有时成功,有时失败。通过单元测试发现的小问题很有价值,但是通常它们悄无声息的出现在开发人员的机器上,并且在提交到版本控制之前就已得到修复。但真正令人担忧的问题大多是看不见的。最糟糕的是,*丢失的告警*是完全不可见的:你看不到没能捕获的错误,直到出现在用户手上 —— 有时甚至连用户都看不到。
有一种测试可以使不可见的错误变为可见:[突变测试][2]。
有一种测试可以使不可见的错误变为可见:<ruby>[突变测试][2]<rt>mutation testing</rt></ruby>
变异测试通过算法修改源代码,并检查每次测试是否都有“变异体”存活。任何在单元测试中幸存下来的变异体都是问题:这意味着对代码的修改(可能会引入错误)没有被标准测试套件捕获。
[Python][3] 中用于突变测试的一个框架是 `mutmut`
假设你需要编写代码来计算钟表中时针和分针之间的角度,直到最接近的度数,代码可能会这样写:
```
def hours_hand(hour, minutes):
    base = (hour % 12 ) * (360 // 12)
@ -37,6 +40,7 @@ def between(hour, minutes):
```
首先,写一个简单的单元测试:
```
import angle
@ -45,6 +49,7 @@ def test_twelve():
```
足够了吗?代码没有 `if` 语句,所以如果你查看覆盖率:
```
$ coverage run `which pytest`
============================= test session starts ==============================
@ -58,71 +63,74 @@ tests/test_angle.py .                                      
```
完美!测试通过,覆盖率为 100%,你真的是一个测试专家。但是,当你使用突变测试时,覆盖率会变成多少?
```
$ mutmut run --paths-to-mutate angle.py
&lt;snip&gt;
<snip>
Legend for output:
🎉 Killed mutants.   The goal is for everything to end up in this bucket.
⏰ Timeout.          Test suite took 10 times as long as the baseline so were killed.
🤔 Suspicious.       Tests took a long time, but not long enough to be fatal.
🙁 Survived.         This means your tests needs to be expanded.
🔇 Skipped.          Skipped.
&lt;snip&gt;
⠋ 21/21  🎉 5  ⏰ 0  🤔 0  🙁 16  🔇 0
🎉 Killed mutants. The goal is for everything to end up in this bucket.
⏰ Timeout. Test suite took 10 times as long as the baseline so were killed.
🤔 Suspicious. Tests took a long time, but not long enough to be fatal.
🙁 Survived. This means your tests needs to be expanded.
🔇 Skipped. Skipped.
<snip>
⠋ 21/21 🎉 5 ⏰ 0 🤔 0 🙁 16 🔇 0
```
天啊,在 21 个突变体中,有 16 个存活。只有 5 个通过了突变测试,但是,这意味着什么呢?
天啊,在 21 个突变体中,有 16 个存活。只有 5 个通过了突变测试,但是,这意味着什么呢?
对于每个突变测试,`mutmut` 会修改部分源代码,以模拟潜在的错误,修改的一个例子是将 `>` 比较更改为 `>=`,查看会发生什么。如果没有针对这个边界条件的单元测试,那么这个突变将会“存活”:这是一个没有任何测试用例能够检测到的潜在错误。
是时候编写更好的单元测试了。很容易检查使用 `results` 所做的更改:
```
$ mutmut results
&lt;snip&gt;
<snip>
Survived 🙁 (16)
\---- angle.py (16) ----
---- angle.py (16) ----
4-7, 9-14, 16-21
$ mutmut apply 4
$ git diff
diff --git a/angle.py b/angle.py
index b5dca41..3939353 100644
\--- a/angle.py
--- a/angle.py
+++ b/angle.py
@@ -1,6 +1,6 @@
 def hours_hand(hour, minutes):
     hour = hour % 12
\-    base = hour * (360 // 12)
\+    base = hour / (360 // 12)
     correction = int((minutes / 60) * (360 // 12))
     return base + correction
def hours_hand(hour, minutes):
hour = hour % 12
- base = hour * (360 // 12)
+ base = hour / (360 // 12)
correction = int((minutes / 60) * (360 // 12))
return base + correction
```
这是 `mutmut` 执行突变的一个典型例子,它会分析源代码并将运算符更改为不同的运算符:减法变加法。在本例中由乘法变为除法。一般来说,单元测试应该在操作符更换时捕获错误。否则,它们将无法有效地测试行为。按照这种逻辑,`mutmut` 会遍历源代码仔细检查你的测试。
你可以使用 `mutmut apply` 来应用失败的突变体。事实证明你几乎没有检查过 `hour` 参数是否被正确使用。修复它:
```
$ git diff
diff --git a/tests/test_angle.py b/tests/test_angle.py
index f51d43a..1a2e4df 100644
\--- a/tests/test_angle.py
--- a/tests/test_angle.py
+++ b/tests/test_angle.py
@@ -2,3 +2,6 @@ import angle
 
 def test_twelve():
     assert angle.between(12, 00) == 0
def test_twelve():
assert angle.between(12, 00) == 0
+
+def test_three():
\+    assert angle.between(3, 00) == 90
+ assert angle.between(3, 00) == 90
```
以前,你只测试了 12 点钟,现在增加一个 3 点钟的测试就足够了吗?
```
$ mutmut run --paths-to-mutate angle.py
&lt;snip&gt;
⠋ 21/21  🎉 7  ⏰ 0  🤔 0  🙁 14  🔇 0
<snip>
⠋ 21/21 🎉 7 ⏰ 0 🤔 0 🙁 14 🔇 0
```
这项新测试成功杀死了两个突变体,比以前更好,当然还有很长的路要走。我不会一一解决剩下的 14 个测试用例,因为我认为模式已经很明确了。(你能将它们降低到零吗?)
@ -136,7 +144,7 @@ via: https://opensource.com/article/20/7/mutmut-python
作者:[Moshe Zadka][a]
选题:[lujun9972][b]
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,665 @@
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12872-1.html)
[#]: subject: (Add throwing mechanics to your Python game)
[#]: via: (https://opensource.com/article/20/9/add-throwing-python-game)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
在你的 Python 游戏中添加投掷机制
======
> 四处奔跑躲避敌人是一回事,反击敌人则是另一回事。在这个用 Pygame 创建平台类游戏系列的第十二篇文章中,学习如何反击。
![](https://img.linux.net.cn/data/attachment/album/202011/30/124457xcj9mztw9kx9c7zj.jpg)
这是仍在进行中的关于使用 [Pygame][3] 模块在 [Python 3][2] 中创建电脑游戏的第十二部分。先前的文章是:
1. [通过构建一个简单的掷骰子游戏去学习怎么用 Python 编程][4]
2. [使用 Python 和 Pygame 模块构建一个游戏框架][5]
3. [如何在你的 Python 游戏中添加一个玩家][6]
4. [用 Pygame 使你的游戏角色移动起来][7]
5. [如何向你的 Python 游戏中添加一个敌人][8]
6. [在 Pygame 游戏中放置平台][9]
7. [在你的 Python 游戏中模拟引力][10]
8. [为你的 Python 平台类游戏添加跳跃功能][11]
9. [使你的 Python 游戏玩家能够向前和向后跑][12]
10. [在你的 Python 平台类游戏中放一些奖励][13]
11. [添加计分到你的 Python 游戏][14]
我的上一篇文章本来是这一系列文章的最后一篇,它鼓励你为这个游戏编写自己的附加程序。你们很多人都这么做了!我收到了一些电子邮件,要求帮助我还没有涵盖的常用机制:战斗。毕竟,跳起来躲避坏人是一回事,但是有时候让他们走开是一件非常令人满意的事。在电脑游戏中向你的敌人投掷一些物品是很常见的,不管是一个火球、一支箭、一道闪电,还是其它适合游戏的东西。
与你迄今为止在这个系列中为平台游戏编写的其他东西不同,可投掷的物品有一个*存活时间*。你把一个物品投掷出去后,它会在移动一段距离后消失。如果它是一支箭之类的东西,它可能在越过屏幕边缘时消失;如果它是一个火球或一道闪电,它可能在一段时间后熄灭。
这意味着每次生成一个可投掷的物品时,也必须生成一个独特的衡量其生存时间的标准。为了介绍这个概念,这篇文章演示如何一次只投掷一个物品。(换句话说,每次仅存在一个投掷物品)。 一方面,这是一个游戏的限制条件,但另一方面,它却是游戏本身的运行机制。你的玩家不能每次同时投掷 50 个火球,因为每次仅允许一个投掷物品,所以当你的玩家释放一个火球来尝试击中一名敌人就成为了一项挑战。而在幕后,这也使你的代码保持简单。
如果你想启用每次投掷多个项目,在完成这篇教程后,通过学习这篇教程所获取的知识来挑战你自己。
### 创建 Throwable 类
如果你一直跟着这个系列学习,那么你应该已经熟悉用于在屏幕上生成新对象的基础 `__init__` 函数。它和你用来生成 [玩家][6] 和 [敌人][8] 的函数是一样的。下面是生成一个 `throwable` 对象的 `__init__` 函数:
```
class Throwable(pygame.sprite.Sprite):
    """
    生成一个 throwable 对象
    """
    def __init__(self, x, y, img, throw):
        pygame.sprite.Sprite.__init__(self)
        self.image = pygame.image.load(os.path.join('images', img))
        self.image.convert_alpha()
        self.image.set_colorkey(ALPHA)
        self.rect = self.image.get_rect()
        self.rect.x = x
        self.rect.y = y
        self.firing = throw
```
同你的 `Player` 类或 `Enemy` 类的 `__init__` 函数相比,这个函数的主要区别是它有一个 `self.firing` 变量。这个变量用来跟踪当前屏幕上是否有一个投掷物处于活动状态,因此当一个 `throwable` 对象被创建时,把该变量设置为 `1` 是合乎情理的。
### 判断存活时间
接下来,就像 `Player` 和 `Enemy` 一样,你需要一个 `update` 函数,这样投掷物在被掷向敌人后会自己在空中移动。
测定一个投掷的物品存活时间的最简单方法是侦测它何时离开屏幕。你需要监视的屏幕边缘取决于你投掷的物品的物理特性。
* 如果你的玩家投掷的物品是沿水平轴快速移动的,比如弩箭、箭矢或一股非常快的魔法力量,那么你需要监视游戏屏幕水平轴的极限。这可以通过 `worldx` 定义。
* 如果你的玩家投掷的物品沿垂直方向移动,或者同时沿水平和垂直方向移动,那么你必须监视游戏屏幕垂直轴的极限。这可以通过 `worldy` 定义。
这个示例假设你投掷的物品向前移动一点并最终落到地面上。不过,投掷的物品不会从地面上反弹起来,而是继续掉落出屏幕。你可以尝试不同的设置来看看什么最适合你的游戏:
```
    def update(self, worldy):
        '''
        投掷物理学
        '''
        if self.rect.y < worldy:    # 垂直轴
            self.rect.x += 15       # 它向前移动的速度有多快
            self.rect.y += 5        # 它掉落的速度有多快
        else:
            self.kill()             # 移除投掷对象
            self.firing = 0         # 解除火力发射
```
要让你的投掷物移动得更快,就增大 `self.rect` 的移动量。
如果投掷物已经不在屏幕上,那么该对象就会被销毁,释放它所占用的资源。另外,`self.firing` 会被设置回 `0`,以允许你的玩家进行下一次射击。
### 设置你的投掷对象
就像玩家和敌人一样,你必须在代码的设置部分创建一个精灵组来存放投掷对象。
此外,你还必须创建一个非活动的投掷对象供游戏开始时使用。如果游戏开始时没有任何投掷对象,玩家第一次尝试投掷武器时就会失败。
这个示例假设你的玩家使用一个火球作为开始的武器,因此,每一个投掷实例都是由 `fire` 变量指派的。在后面的关卡中,当玩家获取新的技能时,你可以使用相同的 `Throwable` 类来引入一个新的变量以使用一张不同的图像。
在这代码块中,前两行已经在你的代码中,因此不要重新键入它们:
```
player_list = pygame.sprite.Group() #上下文
player_list.add(player) #上下文
fire = Throwable(player.rect.x,player.rect.y,'fire.png',0)
firepower = pygame.sprite.Group()
```
注意,每个投掷对象的起始位置都和玩家所在的位置相同,这使它看起来像是从玩家身上发出的。第一个火球生成时用 `0` 来表示 `self.firing` 当前是可用的。
### 在主循环中获取投掷行为
没有在主循环中出现的代码不会在游戏中使用,因此你需要在你的主循环中添加一些东西,以便能在你的游戏世界中获取投掷对象。
首先,添加玩家控制。当前,你没有火力触发器。在键盘上的按键是有两种状态的:释放的按键,按下的按键。为了移动,你要使用这两种状态:按下按键来启动玩家移动,释放按键来停止玩家移动。开火仅需要一个信号。你使用哪个按键事件(按键按下或按键释放)来触发你的投掷对象取决于你的品味。
在这个代码语句块中,前两行是用于上下文的:
```
if event.key == pygame.K_UP or event.key == ord('w'):
    player.jump(platform_list)
if event.key == pygame.K_SPACE:
    if not fire.firing:
        fire = Throwable(player.rect.x, player.rect.y, 'fire.png', 1)
        firepower.add(fire)
```
与你在设置部分创建的火球不同,这里用 `1` 来把 `self.firing` 标记为不可用(已有投掷物在飞行)。
最后,你必须更新并绘制你的投掷物。顺序很重要,所以把这段代码放在现有的 `enemy.move()` 和 `player_list.draw()` 两行之间:
```
enemy.move()  # 上下文
if fire.firing:
    fire.update(worldy)
    firepower.draw(world)
player_list.draw(screen)  # 上下文
enemy_list.draw(screen)   # 上下文
```
注意,这些更新仅在 `self.firing` 变量为 `1` 时才执行。如果它是 `0``fire.firing` 就不为真,更新就会被跳过。如果不加判断就执行这些更新,游戏会崩溃,因为那时并没有可供更新或绘制的 `fire` 对象。
启动你的游戏,尝试挑战你的武器。
### 检测碰撞
如果你玩使用了新投掷技巧的游戏,你可能会注意到,你可以投掷对象,但是它却不会对你的敌人有任何影响。
原因是你的敌人没有做碰撞检测。敌人可能被你投掷的物品击中了,但它自己永远不会知道。
你已经在你的 `Player` 类中完成了碰撞检测,这非常类似。在你的 `Enemy` 类中,添加一个新的 `update` 函数:
```
    def update(self, firepower, enemy_list):
        """
        检测火力碰撞
        """
        fire_hit_list = pygame.sprite.spritecollide(self, firepower, False)
        for fire in fire_hit_list:
            enemy_list.remove(self)
```
代码很简单:每个敌人对象都会检查自己是否被 `firepower` 精灵组中的成员击中。如果被击中,敌人就会从敌人组中移除并消失。
要把这个功能集成到你的游戏中,请在主循环中新建的开火语句块里调用这个函数:
```
if fire.firing:                               # 上下文
    fire.update(worldy)                       # 上下文
    firepower.draw(screen)                    # 上下文
    enemy_list.update(firepower, enemy_list)  # 更新敌人
```
你现在可以尝试一下你的游戏了,大多数的事情都如预期般的那样工作。不过,这里仍然有一个问题,那就是投掷的方向。
### 更改投掷机制的方向
当前,你英雄的火球只会向右移动。这是因为 `Throwable` 类的 `update` 函数将像素添加到火球的位置,在 Pygame 中,在 X 轴上一个较大的数字意味着向屏幕的右侧移动。当你的英雄转向另一个方向时,你可能希望它投掷的火球也抛向左侧。
到目前为止,你已经知道如何实现这一点,至少在技术上是这样的。然而,最简单的解决方案却是使用一个变量,在一定程度上对你来说可能是一种新的方法。一般来说,你可以“设置一个标记”(有时也被称为“翻转一个位”)来标明你的英雄所面向的方向。在你做完后,你就可以检查这个变量来得知火球是向左移动还是向右移动。
首先,在你的 `Player` 类中创建一个新变量来表示你的英雄面向的方向。因为我的英雄天生面向右侧,所以就把右侧作为默认值:
```
        self.score = 0
        self.facing_right = True  # 添加这行
        self.is_jumping = True
```
当这个变量为 `True` 时,英雄精灵面向右侧。每当玩家改变英雄的方向时,这个变量也必须随之更新,因此要在主循环中相应的 `keyup` 事件里处理:
```
if event.type == pygame.KEYUP:
    if event.key == pygame.K_LEFT or event.key == ord('a'):
        player.control(steps, 0)
        player.facing_right = False  # 添加这行
    if event.key == pygame.K_RIGHT or event.key == ord('d'):
        player.control(-steps, 0)
        player.facing_right = True  # 添加这行
```
最后,修改你的 `Throwable` 类的 `update` 函数,让它检测英雄面向哪个方向,并相应地增加或减少火球位置的像素值:
```
        if self.rect.y < worldy:
            if player.facing_right:
                self.rect.x += 15
            else:
                self.rect.x -= 15
            self.rect.y += 5
```
再次尝试你的游戏,清除掉你游戏世界中的一些坏人。
![Python 平台类使用投掷能力][15]
作为一项额外的挑战,当彻底打败敌人时,尝试增加你玩家的得分。
### 完整的代码
```
#!/usr/bin/env python3
# 作者: Seth Kenlon
# GPLv3
# This program is free software: you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <[http://www.gnu.org/licenses/>][17].
import pygame
import pygame.freetype
import sys
import os
'''
变量
'''
worldx = 960
worldy = 720
fps = 40
ani = 4
world = pygame.display.set_mode([worldx, worldy])
forwardx = 600
backwardx = 120
BLUE = (80, 80, 155)
BLACK = (23, 23, 23)
WHITE = (254, 254, 254)
ALPHA = (0, 255, 0)
tx = 64
ty = 64
font_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), "fonts", "amazdoom.ttf")
font_size = tx
pygame.freetype.init()
myfont = pygame.freetype.Font(font_path, font_size)
'''
对象
'''
def stats(score, health):
myfont.render_to(world, (4, 4), "Score:"+str(score), BLUE, None, size=64)
myfont.render_to(world, (4, 72), "Health:"+str(health), BLUE, None, size=64)
class Throwable(pygame.sprite.Sprite):
"""
生成一个投掷的对象
"""
def __init__(self, x, y, img, throw):
pygame.sprite.Sprite.__init__(self)
self.image = pygame.image.load(os.path.join('images', img))
self.image.convert_alpha()
self.image.set_colorkey(ALPHA)
self.rect = self.image.get_rect()
self.rect.x = x
self.rect.y = y
self.firing = throw
def update(self, worldy):
'''
投掷物理学
'''
if self.rect.y < worldy:
if player.facing_right:
self.rect.x += 15
else:
self.rect.x -= 15
self.rect.y += 5
else:
self.kill()
self.firing = 0
# x 位置, y 位置, img 宽度, img 高度, img 文件
class Platform(pygame.sprite.Sprite):
def __init__(self, xloc, yloc, imgw, imgh, img):
pygame.sprite.Sprite.__init__(self)
self.image = pygame.image.load(os.path.join('images', img)).convert()
self.image.convert_alpha()
self.image.set_colorkey(ALPHA)
self.rect = self.image.get_rect()
self.rect.y = yloc
self.rect.x = xloc
class Player(pygame.sprite.Sprite):
"""
生成一名玩家
"""
def __init__(self):
pygame.sprite.Sprite.__init__(self)
self.movex = 0
self.movey = 0
self.frame = 0
self.health = 10
self.damage = 0
self.score = 0
self.facing_right = True
self.is_jumping = True
self.is_falling = True
self.images = []
for i in range(1, 5):
img = pygame.image.load(os.path.join('images', 'walk' + str(i) + '.png')).convert()
img.convert_alpha()
img.set_colorkey(ALPHA)
self.images.append(img)
self.image = self.images[0]
self.rect = self.image.get_rect()
def gravity(self):
if self.is_jumping:
self.movey += 3.2
def control(self, x, y):
"""
控制玩家移动
"""
self.movex += x
def jump(self):
if self.is_jumping is False:
self.is_falling = False
self.is_jumping = True
def update(self):
"""
更新精灵位置
"""
# 向左移动
if self.movex < 0:
self.is_jumping = True
self.frame += 1
if self.frame > 3 * ani:
self.frame = 0
self.image = pygame.transform.flip(self.images[self.frame // ani], True, False)
# 向右移动
if self.movex > 0:
self.is_jumping = True
self.frame += 1
if self.frame > 3 * ani:
self.frame = 0
self.image = self.images[self.frame // ani]
# 碰撞
enemy_hit_list = pygame.sprite.spritecollide(self, enemy_list, False)
if self.damage == 0:
for enemy in enemy_hit_list:
if not self.rect.contains(enemy):
self.damage = self.rect.colliderect(enemy)
if self.damage == 1:
idx = self.rect.collidelist(enemy_hit_list)
if idx == -1:
self.damage = 0 # 设置伤害回 0
self.health -= 1 # 减去 1 单位健康度
ground_hit_list = pygame.sprite.spritecollide(self, ground_list, False)
for g in ground_hit_list:
self.movey = 0
self.rect.bottom = g.rect.top
self.is_jumping = False # 停止跳跃
# 掉落世界
if self.rect.y > worldy:
self.health -=1
print(self.health)
self.rect.x = tx
self.rect.y = ty
plat_hit_list = pygame.sprite.spritecollide(self, plat_list, False)
for p in plat_hit_list:
self.is_jumping = False # 停止跳跃
self.movey = 0
if self.rect.bottom <= p.rect.bottom:
self.rect.bottom = p.rect.top
else:
self.movey += 3.2
if self.is_jumping and self.is_falling is False:
self.is_falling = True
self.movey -= 33 # 跳跃多高
loot_hit_list = pygame.sprite.spritecollide(self, loot_list, False)
for loot in loot_hit_list:
loot_list.remove(loot)
self.score += 1
print(self.score)
plat_hit_list = pygame.sprite.spritecollide(self, plat_list, False)
self.rect.x += self.movex
self.rect.y += self.movey
class Enemy(pygame.sprite.Sprite):
"""
生成一名敌人
"""
def __init__(self, x, y, img):
pygame.sprite.Sprite.__init__(self)
self.image = pygame.image.load(os.path.join('images', img))
self.image.convert_alpha()
self.image.set_colorkey(ALPHA)
self.rect = self.image.get_rect()
self.rect.x = x
self.rect.y = y
self.counter = 0
def move(self):
"""
敌人移动
"""
distance = 80
speed = 8
if self.counter >= 0 and self.counter <= distance:
self.rect.x += speed
elif self.counter >= distance and self.counter <= distance * 2:
self.rect.x -= speed
else:
self.counter = 0
self.counter += 1
def update(self, firepower, enemy_list):
"""
检测火力碰撞
"""
fire_hit_list = pygame.sprite.spritecollide(self, firepower, False)
for fire in fire_hit_list:
enemy_list.remove(self)
class Level:
def ground(lvl, gloc, tx, ty):
ground_list = pygame.sprite.Group()
i = 0
if lvl == 1:
while i < len(gloc):
ground = Platform(gloc[i], worldy - ty, tx, ty, 'tile-ground.png')
ground_list.add(ground)
i = i + 1
if lvl == 2:
print("Level " + str(lvl))
return ground_list
def bad(lvl, eloc):
if lvl == 1:
enemy = Enemy(eloc[0], eloc[1], 'enemy.png')
enemy_list = pygame.sprite.Group()
enemy_list.add(enemy)
if lvl == 2:
print("Level " + str(lvl))
return enemy_list
# x 位置, y 位置, img 宽度, img 高度, img 文件
def platform(lvl, tx, ty):
plat_list = pygame.sprite.Group()
ploc = []
i = 0
if lvl == 1:
ploc.append((200, worldy - ty - 128, 3))
ploc.append((300, worldy - ty - 256, 3))
ploc.append((550, worldy - ty - 128, 4))
            while i < len(ploc):
j = 0
while j <= ploc[i][2]:
plat = Platform((ploc[i][0] + (j * tx)), ploc[i][1], tx, ty, 'tile.png')
plat_list.add(plat)
j = j + 1
print('run' + str(i) + str(ploc[i]))
i = i + 1
if lvl == 2:
print("Level " + str(lvl))
return plat_list
def loot(lvl):
if lvl == 1:
loot_list = pygame.sprite.Group()
loot = Platform(tx*5, ty*5, tx, ty, 'loot_1.png')
loot_list.add(loot)
if lvl == 2:
print(lvl)
return loot_list
'''
Setup 部分
'''
backdrop = pygame.image.load(os.path.join('images', 'stage.png'))
clock = pygame.time.Clock()
pygame.init()
backdropbox = world.get_rect()
main = True
player = Player() # 生成玩家
player.rect.x = 0 # 转到 x
player.rect.y = 30 # 转到 y
player_list = pygame.sprite.Group()
player_list.add(player)
steps = 10
fire = Throwable(player.rect.x, player.rect.y, 'fire.png', 0)
firepower = pygame.sprite.Group()
eloc = []
eloc = [300, worldy-ty-80]
enemy_list = Level.bad(1, eloc)
gloc = []
i = 0
while i <= (worldx / tx) + tx:
gloc.append(i * tx)
i = i + 1
ground_list = Level.ground(1, gloc, tx, ty)
plat_list = Level.platform(1, tx, ty)
enemy_list = Level.bad( 1, eloc )
loot_list = Level.loot(1)
'''
主循环
'''
while main:
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
try:
sys.exit()
finally:
main = False
if event.type == pygame.KEYDOWN:
if event.key == ord('q'):
pygame.quit()
try:
sys.exit()
finally:
main = False
if event.key == pygame.K_LEFT or event.key == ord('a'):
player.control(-steps, 0)
if event.key == pygame.K_RIGHT or event.key == ord('d'):
player.control(steps, 0)
if event.key == pygame.K_UP or event.key == ord('w'):
player.jump()
if event.type == pygame.KEYUP:
if event.key == pygame.K_LEFT or event.key == ord('a'):
player.control(steps, 0)
player.facing_right = False
if event.key == pygame.K_RIGHT or event.key == ord('d'):
player.control(-steps, 0)
player.facing_right = True
if event.key == pygame.K_SPACE:
if not fire.firing:
fire = Throwable(player.rect.x, player.rect.y, 'fire.png', 1)
firepower.add(fire)
    # 向前滚动世界
if player.rect.x >= forwardx:
scroll = player.rect.x - forwardx
player.rect.x = forwardx
for p in plat_list:
p.rect.x -= scroll
for e in enemy_list:
e.rect.x -= scroll
for l in loot_list:
l.rect.x -= scroll
# 向后滚动世界
if player.rect.x <= backwardx:
scroll = backwardx - player.rect.x
player.rect.x = backwardx
for p in plat_list:
p.rect.x += scroll
for e in enemy_list:
e.rect.x += scroll
for l in loot_list:
l.rect.x += scroll
world.blit(backdrop, backdropbox)
player.update()
player.gravity()
player_list.draw(world)
if fire.firing:
fire.update(worldy)
firepower.draw(world)
enemy_list.draw(world)
enemy_list.update(firepower, enemy_list)
loot_list.draw(world)
ground_list.draw(world)
plat_list.draw(world)
for e in enemy_list:
e.move()
stats(player.score, player.health)
pygame.display.flip()
clock.tick(fps)
```
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/add-throwing-python-game
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[robsean](https://github.com/robsean)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/game_pawn_grid_linux.png?itok=4gERzRkg (Gaming on a grid with penguin pawns)
[2]: https://www.python.org/
[3]: https://www.pygame.org/news
[4]: https://linux.cn/article-9071-1.html
[5]: https://linux.cn/article-10850-1.html
[6]: https://linux.cn/article-10858-1.html
[7]: https://linux.cn/article-10874-1.html
[8]: https://linux.cn/article-10883-1.html
[9]: https://linux.cn/article-10902-1.html
[10]: https://linux.cn/article-11780-1.html
[11]: https://linux.cn/article-11790-1.html
[12]: https://linux.cn/article-11819-1.html
[13]: https://linux.cn/article-11828-1.html
[14]: https://linux.cn/article-11839-1.html
[15]: https://opensource.com/sites/default/files/uploads/pygame-throw.jpg (Python platformer with throwing capability)
[16]: https://creativecommons.org/licenses/by-sa/4.0/
[17]: http://www.gnu.org/licenses/
@ -1,69 +1,61 @@
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12874-1.html)
[#]: subject: (Easily set image transparency using GIMP)
[#]: via: (https://opensource.com/article/20/9/chroma-key-gimp)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
使用 GIMP 轻松地设置图片透明度
======
使用 chroma 按键或 "绿屏" 技巧来设置你电脑游戏中图片的透明度。
![企鹅兵游戏][1]
不管你是否正在使用 [Python][2] 或 [Lua][3] 编程一个游戏或一个 APP你都有可能在你的游戏资产中使用 PNG 图像。PNG 格式图像的一个优点是能够存储一个 **alpha 通道**,这在一个 JPEG 格式的图像中是不可能获得的。Alpha 在本质上是不可见的或透明的 "颜色" 。Alpha 是你图像 _不可见_ 的一部分。例如,你要绘制一个甜甜圈,甜甜圈的空洞将使用 alpha 填充,你就可以看到它后面的任何东西
> 使用色键(绿屏)技巧来设置你电脑游戏中图片的透明度。
一个常见的问题是如何找到一幅图像的 alpha 部分。有时你的编程框架,不管它是 [Python Arcade][4][Pygame][5]LÖVE或者其它的任何东西侦察出 alpha 通道,(在适当地调用函数后) 将其作为透明处理。这意味着它将不会在 alpha 部分来渲染新的像素而留下甜甜圈的空洞。100% 是透明的0% 是不透明的,在功能上起 "不可见" 的作用。
![](https://img.linux.net.cn/data/attachment/album/202011/30/223815rdmrgx1109ngng0g.jpg)
有些时候,你的框架与你的图像资源在 alpha 通道的位置上是不一致的 (或者说,一个 alpha 通道根本就不存在),你可以在你想要透明度的地方得到像素。
不管你是否正在使用 [Python][2] 或 [Lua][3] 编程一个游戏或一个 APP你都有可能在你的游戏资源中使用 PNG 图像。PNG 格式图像的一个优点是能够存储一个 **alpha 通道**,这在一个 JPEG 格式的图像中是不可能获得的。alpha 在本质上是不可见的或透明的“颜色”。alpha 是你图像 _不可见_ 的一部分。例如,你要绘制一个甜甜圈,甜甜圈的空洞将使用 alpha 填充,你就可以看到它后面的任何东西。
一个常见的问题是如何找到一幅图像的 alpha 部分。有时你的编程框架,不管它是 [Python Arcade][4]、[Pygame][5]、LÖVE或者其它的任何东西都会检测出 alpha 通道,(在适当地调用函数后)将其作为透明处理。这意味着它将不会在 alpha 部分来渲染新的像素而留下甜甜圈的空洞。100% 是透明的0% 是不透明的,在功能上起到“不可见”的作用。
有些时候,你的框架与你的图像资源在 alpha 通道的位置上是不一致的或者alpha 通道根本就不存在),你在你想要透明度的地方却得到像素。
这篇文章描述了我所知道的最可靠的方法来解决透明度的问题。
### Chroma key
### 色键
在计算机图形学中,有一些有助于确定每一个像素是如何渲染的值。**Chrominance** ,或者 **chroma** ,描述一个像素的饱和度或强度。**chroma key** 技术 (也称为 **绿屏**) 最初是作为一种化学工艺而开发的,在复印一张底片时,使用一种特定的 **无光泽** 的颜色 (最初是蓝色,后来的绿色) 来故意遮掩,以允许使用另一幅图像来取代曾经有蓝色或绿色屏幕的地方。这是一种简化的解释,但是它说明了计算机图形学中被称为 alpha 通道的起源。
在计算机图形学中,有一些有助于确定每一个像素是如何渲染的值。<ruby>色度<rt>Chrominance</rt></ruby>(或者 chroma描述一个像素的饱和度或强度。<ruby>色键<rt>chroma key</rt></ruby>技术(也称为<ruby>绿屏<rt>green screening</rt></ruby>)最初是作为一种化学工艺而发展起来的,在复印一张底片时,使用一种特定的 **无光泽** 的颜色(最初是蓝色,后来是绿色)来故意遮掩,以允许使用另一幅图像来取代曾经有蓝色或绿色屏幕的地方。这是一种简化的解释,但是它说明了计算机图形学中被称为 alpha 通道的起源。
alpha 通道是保存在一幅图像中的信息用以标识意欲透明的像素。例如RGB 图像有红绿蓝通道。RGBA 图像包含红,绿,蓝通道,以及 alpha 通道。alpha 值的范围可以从 0 到 1 ,使用小数是也有效的。
alpha 通道是保存在图像中的信息用以标识要透明的像素。例如RGB 图像有红、绿、蓝通道。RGBA 图像包含红、绿、蓝通道,以及 alpha 通道。alpha 值的范围可以从 0 到 1使用小数也是有效的。
因为一个 alpha 通道可以用几种不同的方法表达,依赖于一个嵌入的 alpha 通道可能是有问题的。作为替代方案,你可以在你的游戏框架中选择一种颜色并将其转化为一个 alpha 值。要做到这一点,你必需知道在你图像中的颜色值。
因为一个 alpha 通道可以用几种不同的方法表达,因此依赖于嵌入的 alpha 通道可能是有问题的。作为替代方案,你可以在你的游戏框架中选择一种颜色并将其转化为一个 0 的 alpha 值。要做到这一点,你必须知道在你图像中的颜色值。
### 准备你的图片
为一次绿屏操作来准备一张清晰的使用独占保留颜色的图片,在你最喜欢的图片编辑器中打开图片。我建议使用 [GIMP][6] 或 [Glimpse][7],但是 [mtPaint][8] 或 [Pinta][9],甚至 [Inkscape][10] 也能很好地工作,这取决于你的图像的性质,以及你将这些操作指南转换到一种不同图片编辑器工具的能力。
要准备一个专门为色度键保留明确颜色的图形,在你最喜欢的图片编辑器中打开图片。我建议使用 [GIMP][6] 或 [Glimpse][7],但是 [mtPaint][8] 或 [Pinta][9],甚至 [Inkscape][10] 也能很好地工作,这取决于你的图像的性质,以及你将这些操作指南转换到一种不同图片编辑器工具的能力。
首先打开这幅 Tux 企鹅的图像:
![Tux 企鹅][11]
(Seth Kenlon, [CC BY-SA 4.0][12])
### 选择图片
在图片打开后,转到 **窗口** 菜单,选择 **可停靠对话框** ,接下来选择 **图层**。在 **图层** 面板中 Tux 图层上右击。从弹出菜单中,选择 **Alpha 到选区** 。如果你的图像没有内置的 alpha 通道,那么你必手动创建你自己的选区。
在图片打开后,转到 **窗口** 菜单,选择 **可停靠对话框** ,接下来选择 **图层**。在 **图层** 面板中的 Tux 图层上右击。从弹出菜单中,选择 **Alpha 到选区** 。如果你的图像没有内置的 alpha 通道,那么你必须手动创建你自己的选区。
![Alpha 到选区][13]
(Seth Kenlon, [CC BY-SA 4.0][12])
为手动创建一个选区,单击来自工具箱的 **路径** 工具。
![GIMP 的路径工具][14]
(Seth Kenlon, [CC BY-SA 4.0][12])
使用 **路径** 工具,在图像周围移动鼠标,在其轮廓的每个主要交叉点处都单击和释放 (不要拖动) 。 不要担心下面的曲线;只需要找到主要的交叉点和拐角。这将在每个点处创建一个节点,并在节点中间绘制一条条线段。你不需要关闭你的路径,因此当你最后到达你开始时的交叉点时,你就完成了。
使用 **路径** 工具,在图像周围移动鼠标,在其轮廓的每个主要交叉点处都单击和释放(不要拖动)。 不要担心沿着曲线走;只需要找到主要的交叉点和拐角。这将在每个点处创建一个节点,并在节点中间绘制一条条线段。你不需要闭合你的路径,因此当你最后到达你开始时的交叉点时,你就完成了。
![在 GIMP 中创建一个路径][15]
(Seth Kenlon, [CC BY-SA 4.0][12])
在你创建你的轮廓路径后,转到 **窗口** 菜单,选择 **可停靠对话框** ,接下来选择 **工具选项** 。在 **工具选项** 面板中,选择 **编辑 (Ctrl)** 。随着这项操作的激活,你可以编辑你刚刚通过单击线或单击节点绘制的路径,并通过调整它们来更好地适应你的图像。你甚至能够将直线弯曲。
![编辑路径][16]
(Seth Kenlon, [CC BY-SA 4.0][12])
现在从 **窗口 > 可停靠对话框** 菜单中选择 **路径** 面板。在 **路径** 面板中,单击 **路径到选区** 按钮。你的绘图现在已经被选中了。
### 扩大选区
@ -74,24 +66,20 @@ alpha 通道是保存在一幅图像中的信息,用以标识意欲透明的
### 反转选区
你已经选择了你的图形,但是你真正想选择的东西却 _不包括_ 你所选择的图像。这是因为你正在创建的在你图像中定义的一些东西的一个 alpha 蒙版来将被其它的一些东西所替换。换句话说,你需要标记将被转变为不可视的像素。
你已经选择了你的图形,但是你真正想选择的东西却 _不包括_ 你所选择的图像。这是因为你要创建一个 alpha 蒙版,用来定义图像中将被其它一些内容所替换的部分。换句话说,你需要标记那些将被转变为不可见的像素。
为了反转选择区,单击 **选择** 菜单,选择 **反转** 。现在除你的图像以外的一切东西都是被选择的。
### 使用 alpha 填充
随着除你的图像以外的一切东西被选择,选择你想使用的颜色来指定你的 alpha 蒙版。最常用的颜色的绿色(正如你可能从 术语 "绿屏" 中所猜到的一样)。绿色或者绿色的特殊阴影并没有什么神奇的。使用 "绿屏" 是因为人们经常处理不包含绿色色素的图像,这样人们能够很容易分离出绿色,而不会意外地分离出图像中重要的部分。当然,如果你的图像是一位绿色的外星人或一枚绿宝石或一些 _确实_ 包含绿色的东西,那么你应该使用一种不同的颜色。只要你所选择的颜色是始终如一的单色,那么你就可以使用你所希望的任意颜色。如果你正在处理很多图像,那么你选择颜色时应该考虑所有的图像颜色
在选中了你的图像以外的一切东西之后,选择你想使用的颜色来指定你的 alpha 蒙版。最常用的颜色是绿色(正如你可能从术语“绿屏”中所猜到的一样)。绿色不是什么神奇的颜色,甚至也不是特定的绿色色调。之所以使用它,是因为人们经常处理不包含绿色色素的图像,这样人们能够很容易分离出绿色,而不会意外地分离出图像中重要的部分。当然,如果你的图像是一位绿色的外星人或一枚绿宝石或一些 _确实_ 包含绿色的东西,那么你应该使用一种不同的颜色。只要你所选择的颜色是统一的单色,那么你就可以使用你所希望的任意颜色。如果你正在处理很多图像,你的选择应该在所有图像中保持一致。
![在工具箱中的前景色][17]
(Seth Kenlon, [CC BY-SA 4.0][12])
使用你选择的颜色值来设置你的前景色。为确保你的选择是精确是,使用 [HTML][18] 或 [HSV][19] 描述的颜色。例如,如果你正在使用纯绿色,它可以在 GIMP ( 以及大多数的开放源码图像应用程序 ) 中表示为 `00ff00` ( 00 是红色FF 是绿色00 是蓝色F 是最大值 ) 。
使用你选择的颜色值来设置你的前景色。为确保你的选择是精确的,使用 [HTML][18] 或 [HSV][19] 表示的颜色。例如,如果你正在使用纯绿色,它可以在 GIMP以及大多数的开放源码图像应用程序中表示为 `00ff00``00` 是红色,`FF` 是绿色,`00` 是蓝色,`F` 是最大值)。
![设置颜色值][20]
(Seth Kenlon, [CC BY-SA 4.0][12])
不管你选择什么颜色,务必记录下 HTML 或 HSV 的值,以便你可以为每一张图像使用完全相同的颜色。
为填充你的 alpha 蒙版,单击 **编辑** 菜单,选择 **使用前景色填充**
@ -100,19 +88,17 @@ alpha 通道是保存在一幅图像中的信息,用以标识意欲透明的
如果你在你的图像周围留下边框,设置背景颜色来着色你想使用的边界笔刷。这通常是黑色或白色,但是它也可以是任何适宜你游戏审美观的颜色。
在你设置背景颜色后,单击 **图像** 菜单,选择 **平整图像**。不管你是否添加了边,这样做都是安全的。这个过程将从图像中移除 alpha 通道,并使用背景色填充任何 "透明的" 像素。
在你设置背景颜色后,单击 **图像** 菜单,选择 **平整图像**。不管你是否添加了边,这样做都是安全的。这个过程将从图像中移除 alpha 通道,并使用背景色填充任何“透明的”像素。
![平整图像][21]
(Seth Kenlon, [CC BY-SA 4.0][12])
你现在已经为你的游戏引擎准备好了一张图像。导出图像到你的游戏引擎喜欢的任何格式,接下来使用游戏引擎所需要的每一个函数来将图像导入的你的游戏中。在的代码中,设置 alpha 值为 `00ff00` ( 或你使用的任何颜色 ),接下来使用游戏引擎的图像转换器来讲该颜色作为 alpha 通道处理。
你现在已经为你的游戏引擎准备好了一张图像。把图像导出为你的游戏引擎喜欢的任何格式,接下来使用游戏引擎所需要的函数来将图像导入到你的游戏中。在你的代码中,设置 alpha 值为 `00ff00`(或你使用的任何颜色),接下来使用游戏引擎的图像转换器来将该颜色作为 alpha 通道处理。
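例如,如果你的游戏引擎恰好是 Pygame下面是一个小的示例草案文件名 `tux.png`、窗口尺寸和纯绿色 `00ff00` 都只是示意),展示如何用 `set_colorkey()` 把这种颜色当作透明处理:
```
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))

# 加载按上文方法导出的图片(文件名仅为示意)
sprite = pygame.image.load('tux.png').convert()

# 把纯绿色 00ff00即 (0, 255, 0))声明为透明色colorkey
# 之后绘制该图像时,所有这种颜色的像素都不会被渲染
sprite.set_colorkey((0, 255, 0))

screen.blit(sprite, (100, 100))
pygame.display.flip()
```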
### 其它的方法
这不是唯一能在你游戏图像中获取透明度的方法。查看你游戏引擎的文档来找出它是如何默认尝试处理 alpha 通道的,在你不确定的时候,在你开始编辑图像前,尝试让你的游戏引擎来自动侦测图像中透明度。有时,你游戏引擎的预期值和你图像的预设值恰巧匹配,那么你就可以直接获取透明度,而不需要做任何额外的工作。
这不是唯一能在你游戏图像中获取透明度的方法。查看你游戏引擎的文档来找出它是如何默认尝试处理 alpha 通道的,在你不确定的时候,尝试让你的游戏引擎来自动侦测图像中透明度,然后再去编辑它。有时,你游戏引擎的预期值和你图像的预设值恰巧匹配,那么你就可以直接获取透明度,而不需要做任何额外的工作。
不过,当这些尝试都失败时,尝试一下 chroma key。它为电影业工作了将近 100 年,它也可以为你工作。
不过,当这些尝试都失败时,尝试一下色键。它为电影业工作了将近 100 年,它也可以为你工作。
--------------------------------------------------------------------------------
@ -121,7 +107,7 @@ via: https://opensource.com/article/20/9/chroma-key-gimp
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[robsean](https://github.com/robsean)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -1,20 +1,20 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12865-1.html)
[#]: subject: (5 new sudo features you need to know in 2020)
[#]: via: (https://opensource.com/article/20/10/sudo-19)
[#]: author: (Peter Czanik https://opensource.com/users/czanik)
2020 年 5 个 新 sudo 功能
2020 年 5 个新 sudo 功能
======
> 从通过 chroot 支持集中会话录制到 Python APIsudo 1.9 提供了许多新功能。
> 从集中会话记录、chroot 支持到 Python APIsudo 1.9 提供了许多新功能。
![Wratchet set tools][1] 。
![](https://img.linux.net.cn/data/attachment/album/202011/28/143544x5cdcxzf9dcujdng.jpg)
当你想在 [POSIX 系统][2]上执行一个操作时,最安全的方法之一就是使用 `sudo` 命令。与以 root 用户身份登录并执行命令可能是个危险的操作不同,`sudo` 授予任何被系统管理员[指定为 “sudoer”][3]的用户临时权限来执行通常受限制的活动。
当你想在 [POSIX 系统][2]上执行一个操作时,最安全的方法之一就是使用 `sudo` 命令。与以 root 用户身份登录并执行命令可能是个危险的操作不同,`sudo` 授予任何被系统管理员[指定为 “sudoer”][3]的用户临时权限来执行通常受限制的活动。
几十年来,这个系统帮助 Linux、Unix 和 macOS 系统免受愚蠢的错误和恶意攻击,它是当今所有主要 Linux 发行版的默认管理机制。
@ -32,11 +32,11 @@
### 哪里可以得到 sudo 1.9
大多数的 Linux 发行版仍然封装了上一代的 `sudo`1.8 版本并且在长期支持LTS的发行版中会保持这种方式数年。据我所知,提供了最完整的 sudo 1.9 包的 Linux 发行版是 openSUSE[Tumbleweed][5],它是一个滚动发行版,而且该 `sudo` 包的子包中有 Python 支持。最近的 [Fedora][6] 版本包含了 sudo 1.9,但没有 Python。[FreeBSD Ports][7] 有最新的 `sudo` 版本,如果你自己编译 `sudo` 而不是使用软件包,你可以启用 Python 支持。
大多数的 Linux 发行版仍然封装了上一代的 `sudo`1.8 版本并且在长期支持LTS的发行版中会保持这个版本数年。据我所知,提供了最完整的 sudo 1.9 包的 Linux 发行版是 openSUSE[Tumbleweed][5],它是一个滚动发行版,而且该 `sudo` 包的子包中有 Python 支持。最近的 [Fedora][6] 版本包含了 sudo 1.9,但没有 Python 支持。[FreeBSD Ports][7] 有最新的 `sudo` 版本,如果你自己编译 `sudo` 而不是使用软件包,你可以启用 Python 支持。
如果你喜欢的 Linux 发行版还没有包含 sudo 1.9,请查看 [sudo 二进制页面][8]来查看是否有现成的包可以用于你的系统。这个页面还提供了一些商用 Unix 变种的软件包。
像往常一样,在你开始试验 `sudo` 设置之前,*确保你知道 root 密码*。是的,即使在 Ubuntu 上也是如此。有一个临时的“后门”是很重要的;如果没有这个后门,如果出了问题,你就必须得黑掉自己的系统。记住:语法正确的配置并不意味着任何人都可以在该系统上通过 `sudo` 做任何事情!
像往常一样,在你开始试验 `sudo` 设置之前,*确保你知道 root 密码*。是的,即使在 Ubuntu 上也是如此。有一个临时的“后门”是很重要的;如果没有这个后门,如果出了问题,你就必须得黑掉自己的系统。记住:语法正确的配置并不意味着每个人都可以在该系统上通过 `sudo` 做任何事情!
### 记录服务
@ -46,11 +46,11 @@
* 即使在发送机器停机的情况下也可以进行记录
* 本地用户若想掩盖其轨迹,不能删除记录
为了快速测试,你可以通过非加密连接向记录服务发送会话。我的博客中包含了[说明][9],可以在几分钟内完成设置。对于生产环境,我建议使用加密连接。有很多可能性,所以阅读最适合你的环境的[文档][10]。
为了快速测试,你可以通过非加密连接向记录服务发送会话。我的博客中包含了[说明][9],可以在几分钟内完成设置。对于生产环境,我建议使用加密连接。有很多可能性,所以阅读最适合你的环境的[文档][10]。
### 审计插件 API
新的审计插件 API 不是一个用户可见的功能。换句话说,你不能从 `sudoers` 文件中配置它。它是一个 API意味着你可以从插件中访问审计信息包括用 Python 编写的插件。你可以用很多不同的方式来使用它,比如当一些有趣的事情发生时,从 `sudo` 直接发送事件到 Elasticsearch 或日志即服务LaaS。你也可以用它来进行调试并以任何你喜欢的格式将其他难以访问的信息打印到屏幕上。
新的审计插件 API 不是一个用户可见的功能。换句话说,你不能从 `sudoers` 文件中配置它。它是一个 API意味着你可以从插件中访问审计信息包括用 Python 编写的插件。你可以用很多不同的方式来使用它,比如当一些有趣的事情发生时,从 `sudo` 直接发送事件到 Elasticsearch 或日志即服务LaaS。你也可以用它来进行调试,并以任何你喜欢的格式将其他难以访问的信息打印到屏幕上。
根据你使用它的方式,你可以在 `sudo` 插件手册页(针对 C 语言)和 `sudo` Python 插件手册中找到它的文档。在 `sudo` 源代码中可以找到 [Python 代码示例][11],在我的博客上也有一个[简化的例子][12]。
@ -58,7 +58,7 @@
审批插件 API 可以在命令执行之前加入额外的限制。这些限制只有在策略插件成功后才会运行,因此你可以有效地添加额外的策略层,而无需更换策略插件,进而无需更换 `sudoers`。可以定义多个审批插件,而且所有插件都必须成功,命令才能执行。
与审计插件 API 一样,你可以从 C 和 Python 中使用它。我博客上记录的[示例 Python 代码][13]是对该 API 的一个很好的介绍。一旦你理解了它是如何工作的,你就可以扩展它来连接 `sudo` 到工单系统,并且只批准有相关开放工单的会话。你也可以连接到人力资源数据库,这样只有当班的工程师才能获得管理权限。
与审计插件 API 一样,你可以从 C 和 Python 中使用它。我博客上记录的[示例 Python 代码][13]是对该 API 的一个很好的介绍。一旦你理解了它是如何工作的,你就可以扩展它来将 `sudo` 连接到工单系统,并且只批准有相关开放工单的会话。你也可以连接到人力资源数据库,这样只有当班的工程师才能获得管理权限。
### Python 对插件的支持
@ -70,7 +70,7 @@
除了审计和审批插件 API 之外,还有一些其他的 API你可以用它们做一些非常有趣的事情。
通过使用策略插件 API你可以取代 `sudo` 策略引擎。请注意,你将失去大部分的 `sudo` 功能,而且没有更多基于 `sudoers` 的配置。这在小众情况下还是很有用的,但大多数时候,最好还是继续使用 `sudoers`,并使用审批插件 API 创建额外的策略。如果你想尝试一下,我的 [Python 插件介绍][14]提供了一个非常简单的策略:只允许使用 `id` 命令。再次确认你知道 root 密码,因为一旦启用这个策略,它就会阻止任何实际使用 `sudo` 的行为。
通过使用策略插件 API你可以取代 `sudo` 策略引擎。请注意,你将失去大部分的 `sudo` 功能,而且没有基于 `sudoers` 的配置。这在小众情况下还是很有用的,但大多数时候,最好还是继续使用 `sudoers`,并使用审批插件 API 创建额外的策略。如果你想尝试一下,我的 [Python 插件介绍][14]提供了一个非常简单的策略:只允许使用 `id` 命令。再次确认你知道 root 密码,因为一旦启用这个策略,它就会阻止任何实际使用 `sudo` 的行为。
使用 I/O 日志 API你可以访问用户会话的输入和输出。这意味着你可以分析会话中发生了什么甚至在发现可疑情况时终止会话。这个 API 有很多可能的用途,比如防止数据泄露。你可以监控屏幕上的关键字,如果数据流中出现任何关键字,你可以在关键字出现在用户的屏幕上之前中断连接。另一种可能是检查用户正在输入的内容,并使用这些数据来重建用户正在输入的命令行。例如,如果用户输入 `rm -fr /`,你可以在按下回车键之前就断开用户的连接。
@ -78,11 +78,11 @@
### chroot 和 CWD 支持
`sudo` 的最新功能是支持 chroot 和改变工作目录CWD这两个选项都不是默认启用的你需要在 `sudoers` 文件中明确启用它们。当它们被启用时,你可以调目标目录或允许用户指定使用哪个目录。日志反映了这些设置何时被使用。
`sudo` 的最新功能是支持 chroot 和改变工作目录CWD这两个选项都不是默认启用的你需要在 `sudoers` 文件中明确启用它们。当它们被启用时,你可以固定目标目录,或允许用户指定使用哪个目录。日志会反映这些设置何时被使用。
在大多数系统中chroot 只对 root 用户开放。如果你的某个用户需要 chroot你需要给他们 root 权限,这比仅仅给他们 chroot 权限要大得多。另外,你可以通过 `sudo` 允许访问 chroot 命令,但它仍然允许漏洞,他们可以获得完全的权限。当你使用 `sudo` 内置的 chroot 支持时,你可以轻松地限制对单个目录的访问。你也可以让用户灵活地指定根目录。当然,这可能会导致灾难(例如,`sudo --chroot / -s`),但至少事件会被记录下来。
当你通过 `sudo` 运行一个命令时,它会将工作目录设置为当前目录。这是预期的行为,但可能有一些情况下,命令需要在不同的目录下运行。例如,我记得使用一个应用程序,它通过检查我的工作目录是否是 `/root` 来检查我的权限。
当你通过 `sudo` 运行一个命令时,它会将工作目录设置为当前目录。这是预期的行为,但可能有一些情况下,命令需要在不同的目录下运行。例如,我记得使用一个应用程序,它通过检查我的工作目录是否是 `/root` 来检查我的权限。
### 尝试新功能
@ -101,7 +101,7 @@ via: https://opensource.com/article/20/10/sudo-19
作者:[Peter Czanik][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12869-1.html)
[#]: subject: (How to Increase Disk Size of Your Existing Virtual Machines in VirtualBox)
[#]: via: (https://itsfoss.com/increase-disk-size-virtualbox/)
[#]: author: (Dimitrios Savvopoulos https://itsfoss.com/author/dimitrios/)
@ -30,11 +30,11 @@
VirtualBox 6 增加了一个调整虚拟磁盘大小的图形化选项。你可以在 VirtualBox 主页的文件选项卡中找到它。
进入 _**File-&gt;Virtual Media Manager**_
进入 “File -> Virtual Media Manager”
![][4]
在列表中选择一个虚拟机,然后使用 ”Size“ 滑块或输入你需要的大小值。完成后点击 ”Apply“
在列表中选择一个虚拟机,然后使用 “Size” 滑块或输入你需要的大小值。完成后点击 “Apply”
![][5]
@ -42,7 +42,7 @@ VirtualBox 6 增加了一个调整虚拟磁盘大小的图形化选项。你可
#### 方法 2使用 Linux 命令行增加 VirtualBox 磁盘空间
如果你使用 Linux 操作系统作为宿主机,打开终端并输入以下命令来调整 VDI 的大小:
如果你使用 Linux 操作系统作为宿主机,在宿主机中打开终端并输入以下命令来调整 VDI 的大小:
```
VBoxManage modifymedium "/path_to_vdi_file" --resize <megabytes>
@ -50,17 +50,17 @@ VBoxManage modifymedium "/path_to_vdi_file" --resize <megabytes>
在你按下回车执行命令后,调整大小的过程应该马上结束。
注意事项
VirtualBox 早期版本命令中的 **modifyvdi****modifyhd** 命令也支持,并在内部映射到 **modifymedium** 命令。
> 注意事项
>
> VirtualBox 早期版本中的 `modifyvdi` 和 `modifyhd` 命令也仍受支持,并在内部映射到 `modifymedium` 命令。
![][6]
如果你不确定虚拟机的保存位置,可以在 VirtualBox 主页面点击 **Files -&gt; Preferences** 或使用键盘快捷键 **Ctrl+G** 找到默认位置。
如果你不确定虚拟机的保存位置,可以在 VirtualBox 主页面点击 “Files -> Preferences” 或使用键盘快捷键 `Ctrl+G` 找到默认位置。
![][7]
#### 总结
### 总结
就我个人而言,我更喜欢在每个 Linux 发行版上使用终端来扩展磁盘,图形化选项是最新的 VirtualBox 版本的一个非常方便的补充。
@ -73,7 +73,7 @@ via: https://itsfoss.com/increase-disk-size-virtualbox/
作者:[Dimitrios Savvopoulos][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12866-1.html)
[#]: subject: (Compare Files and Folders Graphically in Linux With Meld)
[#]: via: (https://itsfoss.com/meld-gui-diff/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
@ -10,19 +10,21 @@
使用 Meld 在 Linux 中以图形方式比较文件和文件夹
======
![](https://img.linux.net.cn/data/attachment/album/202011/28/145914mr5zcl2wnns8rj5j.jpg)
如何比较两个相似的文件来检查差异?答案显而易见,就是[使用 Linux 中的 diff 命令][1]。
问题是,并不是每个人都能自如地在 Linux 终端中比较文件,而且 diff 命令的输出可能会让一些人感到困惑。
问题是,并不是每个人都能自如地在 Linux 终端中比较文件,而且 `diff` 命令的输出可能会让一些人感到困惑。
以这个 diff 命令的输出为例:
以这个 `diff` 命令的输出为例:
![][2]
这里肯定涉及到一个学习曲线。然而,如果你使用的是桌面 Linux你可以使用 [GUI][3] 应用来轻松比较两个文件是否有任何差异。
有几个 Linux 中的 GUI diff 工具。我将在本周的 Linux 应用亮点中重点介绍我最喜欢的工具 Meld。
Linux 中有好几个 GUI 差异比较工具。我将在本周的 Linux 应用亮点中重点介绍我最喜欢的工具 Meld。
### MeldLinux (及 Windows 下的可视化比较和合并工具
### MeldLinux及 Windows下的可视化比较和合并工具
通过 [Meld][4],你可以将两个文件并排比较。不仅如此,你还可以对文件进行相应的修改。这是你在大多数情况下想做的事情,对吗?
@ -45,19 +47,17 @@ Meld 还能够比较目录,并显示哪些文件是不同的。它还会显示
开源的 Meld 工具具有以下主要功能:
* 进行双向和三向差异比较
* 就地编辑文件,差异比较立即更新
* 就地编辑文件,差异比较立即更新
* 在差异和冲突之间进行导航
* 通过插入、更改和冲突相应地标示出全局和局部差异,使其可视化
* 使用正则文本过滤来忽略某些差异
* 使用正则文本过滤来忽略某些差异
* 语法高亮显示
* 比较两个或三个目录,看是否有新增加、缺失和更改的文件
* 比较两个或三个目录,看是否有新增加、缺失和更改的文件
* 将一些文件排除在比较之外
* 支持流行的版本控制系统,如 Git、Mercurial、Bazaar 和 SVN
* 支持流行的版本控制系统,如 Git、Mercurial、Bazaar 和 SVN
* 支持多种国际语言
* 开源 GPL v2 许可证
* 既可用于 Linux也可用于 Windows。
* 既可用于 Linux也可用于 Windows
### 在 Linux 上安装 Meld
@ -77,7 +77,7 @@ sudo apt install meld
[Meld Source Code][14]
### 它值得吗?
### 它值得使用吗?
我知道[大多数现代开源编辑器][15]都有这个功能但有时你只是想要一个简单的界面而不需要安装额外的附加软件来比较文件。Meld 就为你提供了这样的功能。
@ -90,7 +90,7 @@ via: https://itsfoss.com/meld-gui-diff/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: ( )
[#]: url: ( )
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12884-1.html)
[#]: subject: (Day 1: a confusing Rails error message)
[#]: via: (https://jvns.ca/blog/2020/11/09/day-1--a-little-rails-/)
[#]: author: (Julia Evans https://jvns.ca/)
@ -10,7 +10,9 @@
Rails 之旅第 1 天:一个令人困惑的 Rails 错误信息
======
今天,我开始了一个 Recurse Center 的批次学习!我认识了一些人,并开始了一个小小的有趣的 Rails 项目。我想我今天不会谈太多关于这个项目的实际内容,但这里有一些关于 Rails 一天的快速笔记。
![](https://img.linux.net.cn/data/attachment/album/202012/04/080957f0p4piqz52bypqb5.jpg)
今天,我开始了一个 Recurse Center 的班次学习!我认识了一些人,并开始了一个小小的有趣的 Rails 项目。我想我今天不会谈太多关于这个项目的实际内容,但这里有一些关于 Rails 一天的快速笔记。
### 一些关于开始的笔记
@ -0,0 +1,117 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12890-1.html)
[#]: subject: (Day 2: Rails associations & dragging divs around)
[#]: via: (https://jvns.ca/blog/2020/11/10/day-2--rails-associations---dragging-divs-around/)
[#]: author: (Julia Evans https://jvns.ca/)
Rails 之旅第 2 天Rails 关联和拖动 div
======
![](https://img.linux.net.cn/data/attachment/album/202012/05/212345zz8jajhaj0hh8h2f.jpg)
大家好!今天是我搭建这个玩具项目的第 2 天。下面再来记录一下关于 Rails 的一些有趣的事情吧!
### 目标:做一个冰箱诗歌论坛
我想做一种无聊的标准网站来学习 Rails并且其他人可以与之互动就像一个论坛一样! 但如果人们真的可以在网站上打字,那就会产生各种各样的问题(如果他们是垃圾邮件发送者怎么办?又或者只是言语刻薄?)。
我想到的第一个想法是,可以让人们与网站互动,但实际上却不能在网站上打字,那就是一个“冰箱诗歌论坛”,只给你一组固定的字,你就可以随意组合。
所以,这就是我们的计划!
我这个项目的目标是想知道我是否能用 Rails 来做其他的小型网络项目(而不是像我通常做的那样,使用一些更基本的东西,比如 Flask或者放弃后端用 Javascript 来写所有东西)。
### 怎么把字拖来拖去呢jQuery 的可拖放 UI
我想让大家能够把文字拖动起来,但我又不想写很多 Javascript。结果发现这超级简单 —— 有一个 jQuery 库可以做到,它叫做 `draggable`!一开始,拖动并不成功。
一开始拖动在手机上是不行的,但是有一个技巧可以让 jQuery UI 在手机上工作,叫做 [jQuery UI touch punch][1]。下面是它的样子(有兴趣看工作原理的可以查看源码,代码很少)。
> `banana` `forest` `cake` `is`
### 一个有趣的 Rails 功能:关联
我以前从来没有使用过关系型 ORM对于 Rails我很兴奋的一件事就是想看看使用 Active Record 是什么样子的!今天我了解了 Rails 的 ORM 功能之一:[关联][2]。如果你像我一样对 ORM 完全不了解的话,那就来看看是怎么回事吧。
在我的论坛中,我有:
* 用户
* 话题(我本来想把它叫做“线索”,但显然这在 Rails 中是一个保留词,所以现在叫做“话题”)。
* 帖子
当显示一个帖子时,我需要显示创建该帖子的用户的用户名。所以我想我可能需要写一些代码来加载帖子,并为每个帖子加载用户,就像这样(在 Rails 中,`Post.where` 和 `User.find` 将会运行 SQL 语句,并将结果转化为 Ruby 对象):
```
@posts = Post.where(topic_id: id)
@posts.each do |post|
user = User.find(post.user_id)
post.user = user
end
```
这还不够好,它要为每个帖子做一次单独的 SQL 查询!我知道有一个更好的方法,我发现它叫做[关联][2]。这个链接是来自 <https://guides.rubyonrails.org> 的指南,到目前为止,它对我很有帮助。
基本上我需要做的就是:
  1. 在 `User` 模型中添加一行 `has_many :posts`。
2. 在 `Post` 模型中添加一行 `belongs_to :user`
3. Rails 现在知道如何将这两个表连接起来,尽管我没有告诉它要连接到什么列上!我认为这是因为我按照它所期望的惯例命名了 `posts` 表中的 `user_id` 列。
4. 对 `User``Topic` 做完全相同的事情(一个主题也有很多帖子:`has_many :posts`)。
然后我加载每一个帖子和它的关联用户的代码就变成了只有一行! 就是这一行:
```
@posts = @topic.posts.order(created_at: :asc).preload(:user)
```
比起只有一行更重要的是,它不是单独做一个查询来获取每个帖子的用户,而是在 1 个查询中获取所有用户。显然,在 Rails 中,有一堆[不同的方法][3]来做类似的事情(预加载、急切加载、联接和包含?),我还不知道这些都是什么,但也许我以后会知道的。
### 一个有趣的 Rails 功能:脚手架!
Rails 有一个叫 `rails` 的命令行工具,它可以生成很多代码。例如,我想添加一个 `Topic` 模型/控制器。我不用去想在哪里添加所有的代码,可以直接运行
```
rails generate scaffold Topic title:text
```
并生成了一堆代码,这样我已经有了基本的端点来创建/编辑/删除主题(`Topic`)。例如,这是我的[现在的主题控制器][4],其中大部分我没有写(我只写了高亮的 3 行)。我可能会删除很多内容,但是有一个起点,让我可以扩展我想要的部分,删除我不想要的部分,感觉还不错。
### 数据库迁移!
`rails` 工具还可以生成数据库迁移! 例如,我决定要删除帖子中的 `title` 字段。
下面是我要做的:
```
rails generate migration RemoveTitleFromPosts title:string
rails db:migrate
```
就是这样 —— 只要运行几个命令行咒语就可以了! 我运行了几个这样的迁移,因为我改变了对我的数据库模式的设想。它是相当直接的,到目前为止 —— 感觉很神奇。
当我试图在一列中的某些字段为空的地方添加一个“不为空”(`not null`)约束时,情况就变得有点有趣了 —— 迁移失败。但我可以修复违例的记录,并轻松地重新运行迁移。
### 今天就到这里吧!
明天,如果我有更多的进展,也许我会把它放在互联网上。
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2020/11/10/day-2--rails-associations---dragging-divs-around/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://github.com/furf/jquery-ui-touch-punch
[2]: https://guides.rubyonrails.org/association_basics.html
[3]: https://blog.bigbinary.com/2013/07/01/preload-vs-eager-load-vs-joins-vs-includes.html
[4]: https://github.com/jvns/refrigerator-forum/blob/776b3227cfd7004cb1efb00ec7e3f82a511cbdc4/app/controllers/topics_controller.rb#L13-L15
@ -0,0 +1,242 @@
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12900-1.html)
[#]: subject: (How to use Serializers in the Django Python web framework)
[#]: via: (https://opensource.com/article/20/11/django-rest-framework-serializers)
[#]: author: (Renato Oliveira https://opensource.com/users/renato-oliveira)
如何在 Python Web 框架 Django 中使用序列化器
======
> 序列化用于将数据转换为方便存储或传输的格式然后将其重新构建以供使用。DRF 是最知名的序列化器。
![](https://img.linux.net.cn/data/attachment/album/202012/08/220845q5tz7cfftze5oem5.jpg)
序列化是将数据转换为可以存储或传输的格式,然后对其进行重新构建的过程。在开发应用程序或将数据存储在数据库、内存或将其转换为文件时,一直会用到它。
我最近帮助 [Labcodes][2] 的两名初级开发人员理解序列化器,我想也可以与诸位读者分享一下我的方法。
假设你正在编写一个电子商务网站,你有一个订单,该订单记录了某人在某个日期以某种价格购买了一个产品:
```
class Order:
    def __init__(self, product, customer, price, date):
        self.product = product
        self.customer = customer
        self.price = price
        self.date = date
```
现在,假设你想从一个键值数据库中存储和检索订单数据。幸运的是,它的接口可以接受和返回字典,因此你需要将对象转换成字典:
```
def serialize_order(order):
    return {
        'product': order.product,
        'customer': order.customer,
        'price': order.price,
        'date': order.date
    }
```
如果你想从数据库中获取一些数据,你可以获取字典数据并将其转换为订单对象(`Order`
```
def deserialize_order(order_data):
    return Order(
        product=order_data['product'],
        customer=order_data['customer'],
        price=order_data['price'],
        date=order_data['date'],
    )
```
这对于简单的数据非常直截了当,但是当你需要处理一些由复杂属性构成的复杂对象时,这种方法就无法很好地扩展。你还需要处理不同类型字段的验证,这需要手工完成大量工作。
此时,框架提供的序列化器就可以很方便地派上用场。它们使你可以用很少的模板代码创建序列化器,并且也适用于复杂的情况。
[Django][3] 提供了一个序列化模块,允许你将模型“转换”为其它格式:
```
from django.core import serializers
serializers.serialize('json', Order.objects.all())
```
它涵盖了 Web 应用程序最常用的种类,例如 JSON、YAML 和 XML。但是你也可以使用第三方序列化器或创建自己的序列化器。你只需要在 `settings.py` 文件中注册它:
```
# settings.py
SERIALIZATION_MODULES = {
    'my_format': appname.serializers.MyFormatSerializer,
}
```
要创建自己的 `MyFormatSerializer`,你需要实现 `.serialize()` 方法并接受一个查询集和其它选项作为参数:
```
class MyFormatSerializer:
    def serialize(self, queryset, **options):
        ...  # serious serialization happening此处进行真正的序列化
```
现在,你可以将查询集序列化为新格式:
```
from django.core import serializers
serializers.serialize('my_format', Order.objects.all())
```
你可以使用选项参数来定义序列化程序的行为。例如,如果要定义在处理 `ForeignKeys` 时要使用嵌套序列化,或者只希望数据返回其主键,你可以传递一个 `flat=True` 参数作为选项,并在方法中处理:
```
class MyFormatSerializer:
    def serialize(self, queryset, **options):
        if options.get('flat', False):
            ...  # don't recursively serialize relationships不递归序列化关联关系
        else:
            ...  # recursively serialize relationships递归序列化关联关系
```
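作为参考,下面是把上面的骨架填充完整的一个极简草案(仅作示意:假设模型只包含简单字段,`flat` 选项用主键代替关联对象):
```
import json

class MyFormatSerializer:
    def serialize(self, queryset, **options):
        flat = options.get('flat', False)
        rows = []
        for obj in queryset:
            row = {}
            for field in obj._meta.fields:
                value = getattr(obj, field.name)
                if flat and field.is_relation and value is not None:
                    # flat=True 时只保留关联对象的主键,不做嵌套序列化
                    value = value.pk
                row[field.name] = str(value)
            rows.append(row)
        return json.dumps(rows)
```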
使用 Django 序列化的一种方法是使用 `loaddata``dumpdata` 管理命令。
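如果想在 Python 代码里调用这两个管理命令,可以使用 `call_command`(下面的应用标签 `app.Order` 和文件名只是示意):
```
from django.core.management import call_command

# 把 Order 模型的数据导出为 JSON应用标签仅为示意
with open('orders.json', 'w') as f:
    call_command('dumpdata', 'app.Order', indent=2, stdout=f)

# 再把导出的数据载入数据库
call_command('loaddata', 'orders.json')
```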
### DRF 序列化器
在 Django 社区中,[Django REST 框架][4]DRF提供了最著名的序列化器。尽管你可以使用 Django 的序列化器来构建将在 API 中响应的 JSON但 REST 框架中的序列化器提供了更出色的功能,可以帮助你处理并轻松验证复杂的数据。
在订单的例子中,你可以像这样创建一个序列化器:
```
from restframework import serializers
class OrderSerializer(serializers.Serializer):
    product = serializers.CharField(max_length=255)
    customer = serializers.CharField(max_length=255)
    price = serializers.DecimalField(max_digits=5, decimal_places=2)
    date = serializers.DateField()
```
轻松序列化其数据:
```
order = Order('pen', 'renato', 10.50, date.today())
serializer = OrderSerializer(order)
serializer.data
# {'product': 'pen', 'customer': 'renato', 'price': '10.50', 'date': '2020-08-16'}
```
为了能够从数据返回实例,你需要实现两个方法:`create` 和 `update`
```
from rest_framework import serializers
class OrderSerializer(serializers.Serializer):
    product = serializers.CharField(max_length=255)
    customer = serializers.CharField(max_length=255)
    price = serializers.DecimalField(max_digits=5, decimal_places=2)
    date = serializers.DateField()
    def create(self, validated_data):
        # 执行订单创建
        return order
    def update(self, instance, validated_data):
       # 执行实例更新
       return instance
```
之后,你可以通过调用 `is_valid()` 来验证数据,并通过调用 `save()` 来创建或更新实例:
```
serializer = OrderSerializer(data=data)
## 若要验证数据,在调用 save 之前必须执行
serializer.is_valid()
serializer.save()
```
### 模型序列化器
序列化数据时,通常需要从数据库(即你创建的模型)进行数据处理。`ModelSerializer` 与 `ModelForm` 一样,提供了一个 API用于从模型创建序列化器。假设你有一个订单模型
```
from django.db import models
class Order(models.Model):
    product = models.CharField(max_length=255)
    customer = models.CharField(max_length=255)
    price = models.DecimalField(max_digits=5, decimal_places=2)
    date = models.DateField()    
```
你可以像这样为它创建一个序列化器:
```
from rest_framework import serializers
class OrderSerializer(serializers.ModelSerializer):
    class Meta:
        model = Order
        fields = '__all__'
```
Django 会自动在序列化器中包含所有模型字段,并创建 `create` 和 `update` 方法。
### 在基于类的视图CBV中使用序列化器
像 Django CBV 中的 `Forms` 一样,序列化器可以很好地与 DRF 集成。你可以设置 `serializer_class` 属性,方便序列化器用于视图:
```
from rest_framework import generics
class OrderListCreateAPIView(generics.ListCreateAPIView):
    queryset = Order.objects.all()
    serializer_class = OrderSerializer
```
你也可以定义 `get_serializer_class()` 方法:
```
from rest_framework import generics
class OrderListCreateAPIView(generics.ListCreateAPIView):
    queryset = Order.objects.all()
   
    def get_serializer_class(self):
        if is_free_order():
            return FreeOrderSerializer
        return OrderSerializer
```
在 CBV 中还有其它与序列化器交互的方法。例如,[get_serializer()][5] 返回一个已经实例化的序列化器,[get_serializer_context()][6] 返回创建实例时传递给序列化器的参数。对于创建或更新数据的视图,有 `create``update`,它们使用 `is_valid` 方法验证数据,还有 [perform_create][7] 和 [perform_update][8] 调用序列化器的 `save` 方法。
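例如,下面是一个覆写 `perform_create` 的小草案(仅作示意:假设沿用上文的 `Order` 模型和 `OrderSerializer`,并在保存时把当前登录用户写入 `customer` 字段):
```
from rest_framework import generics

class OrderListCreateAPIView(generics.ListCreateAPIView):
    queryset = Order.objects.all()
    serializer_class = OrderSerializer

    def perform_create(self, serializer):
        # 在调用序列化器的 save() 时附加额外字段(字段名仅为示意)
        serializer.save(customer=self.request.user.username)
```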
### 了解更多
要了解更多资源,参考我朋友 André Ericson 的[经典 Django REST 框架][9]网站。它是一个[基于类的经典视图][10]的 REST 框架版本,可让你深入查看组成 DRF 的类。当然,官方[文档][11]也是一个很棒的资源。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/11/django-rest-framework-serializers
作者:[Renato Oliveira][a]
选题:[lujun9972][b]
译者:[MjSeven](https://github.com/MjSeven)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/renato-oliveira
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_analytics_cloud.png?itok=eE4uIoaB (Net catching 1s and 0s or data in the clouds)
[2]: http://www.labcodes.com.br
[3]: https://www.djangoproject.com/
[4]: https://www.django-rest-framework.org/
[5]: http://www.cdrf.co/3.9/rest_framework.generics/CreateAPIView.html#get_serializer
[6]: http://www.cdrf.co/3.9/rest_framework.generics/CreateAPIView.html#get_serializer_context
[7]: http://www.cdrf.co/3.9/rest_framework.generics/CreateAPIView.html#perform_create
[8]: http://www.cdrf.co/3.9/rest_framework.generics/RetrieveUpdateAPIView.html#perform_update
[9]: http://www.cdrf.co/
[10]: https://ccbv.co.uk/
[11]: https://www.django-rest-framework.org/api-guide/serializers/#serializers
@ -0,0 +1,227 @@
[#]: collector: "lujun9972"
[#]: translator: "lxbwolf"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-12875-1.html"
[#]: subject: "Get started with Fossil, an alternative to Git"
[#]: via: "https://opensource.com/article/20/11/fossil"
[#]: author: "Klaatu https://opensource.com/users/klaatu"
了解一下 Fossil一个 Git 的替代品
======
> Fossil 是一个集版本控制系统、bug 追踪、维基、论坛以及文档解决方案于一体的系统。
![](https://img.linux.net.cn/data/attachment/album/202012/01/173057hfhyzyw921zll219.jpg)
每个开发者都知道,追踪代码的修改是至关重要的。有时候,出于好奇或者教育的目的,你需要展示你的项目开始和演化的历史。有时候你想让其他的开发者参与到你的项目中,因此你需要一种值得信赖的、能合并不同代码分支的方法。更极端一点,有时候你为了解决一个问题而修改了代码,却导致已有的功能不能正常使用。
[Fossil][2] 源码管理系统是由著名的 [SQLite][3] 数据库的作者开发的一个集版本控制系统、bug 追踪、维基、论坛以及文档解决方案于一体的系统。
### 安装 Fossil
Fossil 是一个独立的 C 程序,因此你可以从它的网站上[下载][4]后放在环境变量 [PATH][5] 中的任意位置。例如,假定 `/usr/local/bin` 已经在你的环境变量中(默认情况下是在的):
```
$ wget https://fossil-scm.org/home/uv/fossil-linux-x64-X.Y.tar.gz
$ sudo tar xvf fossil-linux-x64-X.Y.tar.gz --directory /usr/local/bin
```
你也可以通过包管理器从软件仓库中找到 Fossil或者直接从源码编译。
### 创建一个 Fossil 仓库
如果你已经有一个代码项目,想用 Fossil 来追踪,那么第一步就是创建一个 Fossil 仓库:
```
$ fossil init myproject.fossil
project-id: 010836ac6112fefb0b015702152d447c8c1d8604
server-id:  54d837e9dc938ba1caa56d31b99c35a4c9627f44
admin-user: klaatu (initial password is "14b605")
```
创建 Fossil 仓库的过程中会返回三行信息:一个唯一的项目 ID、一个唯一的服务器 ID 以及管理员 ID 和密码。项目 ID 和服务器 ID 是版本数字。管理员凭证表明你对这个仓库的所有权,当你把 Fossil 作为服务器让其他用户来访问时可以使用管理员权限。
### Fossil 仓库工作流
在你使用 Fossil 仓库之前,你需要先为它的数据创建一个工作路径。你可以把这个过程类比为使用 Python 时创建一个虚拟环境或者解压一个只用来备份的 ZIP 文件。
创建一个工作目录并进入:
```
$ mkdir myprojectdir
$ cd myprojectdir
```
把你的 Fossil 打开到刚刚创建的目录:
```
$ fossil open ../myproject
project-name: <unnamed>
repository: /home/klaatu/myprojectdir/../myproject
local-root: /home/klaatu/myprojectdir/
config-db: /home/klaatu/.fossil
project-code: 010836ac6112fefb0b015702152d447c8c1d8604
checkout: 9e6cd96dd675544c58a246520ad58cdd460d1559 2020-11-09 04:09:35 UTC
tags: trunk
comment: initial empty check-in (user: klaatu)
check-ins: 1
```
你可能注意到了Fossil 在你的家目录下创建了一个名为 `.fossil` 的隐藏文件,用来追踪你的全局 Fossil 配置。这个配置不是只适用于你的一个项目的;这个文件只会在你第一次使用 Fossil 时生成。
#### 添加文件
使用 `add``commit` 子命令来向你的仓库添加文件。例如,创建一个简单的 `README` 文件,把它添加到仓库:
```
$ echo "My first Fossil project" > README
$ fossil add README
ADDED  README
$ fossil commit -m 'My first commit'
New_Version: 2472a43acd11c93d08314e852dedfc6a476403695e44f47061607e4e90ad01aa
```
#### 使用分支
Fossil 仓库开始时默认使用的主分支名为 `trunk`。当你想修改代码而又不影响主干代码时,你可以从 trunk 分支切走。创建新分支需要使用 `branch` 子命令,这个命令需要两个参数:一个新分支的名字,一个新分支的基分支名字。在本例中,只有一个分支 `trunk`,因此尝试创建一个名为 `dev` 的新分支:
```
$ fossil branch --help
Usage: fossil branch new BRANCH-NAME BASIS ?OPTIONS?
$ fossil branch new dev trunk
New branch: cb90e9c6f23a9c98e0c3656d7e18d320fa52e666700b12b5ebbc4674a0703695
```
你已经创建了一个新分支,但是你当前所在的分支仍然是 `trunk`
```
$ fossil branch current
trunk
```
使用 `checkout` 命令切换到你的新分支 `dev`
```
$ fossil checkout dev
dev
```
#### 合并修改
假设你在 `dev` 分支中添加了一个新文件,完成了测试,现在想把它合并到 `trunk`。这个过程叫做*合并*。
首先,切回目标分支(本例中目标分支为 `trunk`
```
$ fossil checkout trunk
trunk
$ ls
README
```
这个分支中没有你的新文件(或者你对其他文件的修改),而那些内容是合并的过程需要的信息:
```
$ fossil merge dev
 "fossil undo" is available to undo changes to the working checkout.
$ ls
myfile.lua  README
```
### 查看 Fossil 时间线
使用 `timeline` 选项来查看仓库的历史。这个命令列出了你的仓库的所有活动的详细信息,包括用来表示每次修改的哈希值、每次提交时填写的信息以及提交者:
```
$ fossil timeline
=== 2020-11-09 ===
06:24:16 [5ef06e668b] added exciting new file (user: klaatu tags: dev)
06:11:19 [cb90e9c6f2] Create new branch named "dev" (user: klaatu tags: dev)
06:08:09 [a2bb73e4a3] *CURRENT* some additions were made (user: klaatu tags: trunk)
06:00:47 [2472a43acd] This is my first commit. (user: klaatu tags: trunk)
04:09:35 [9e6cd96dd6] initial empty check-in (user: klaatu tags: trunk)
+++ no more data (5) +++
```
![Fossil UI][6]
### 公开你的 Fossil 仓库
因为 Fossil 有个内置的 web 界面,所以 Fossil 不像 GitLab 和 Gitea 那样需要主机服务。Fossil 就是它自己的主机服务,只要你把它放在一台机器上就行了。在你公开你的 Fossil 仓库之前,你还需要通过 web 用户界面UI来配置一些信息
使用 `ui` 子命令启动一个本地的实例:
```
$ pwd
/home/klaatu/myprojectdir/
$ fossil ui
```
“Users” 和 “Settings” 是安全相关的“Configuration” 是项目属性相关的包括一个合适的标题。web 界面不仅仅是一个方便的功能,它可以在生产环境中使用,并作为 Fossil 项目的宿主来使用。它还有一些其他的高级选项比如用户管理或者叫自我管理、在同一个服务器上与其他的 Fossil 仓库进行单点登录SSO。
当配置完成后,关掉 web 界面并按下 `Ctrl+C` 来停止 UI 引擎。像提交代码一样提交你的 web 修改。
```
$ fossil commit -m 'web ui updates'
New_Version: 11fe7f2855a3246c303df00ec725d0fca526fa0b83fa67c95db92283e8273c60
```
现在你可以配置你的 Fossil 服务器了。
1. 把你的 Fossil 仓库(本例中是 `myproject.fossil`)复制到服务器,你只需要这一个文件。
2. 如果你的服务器没有安装 Fossil就在你的服务器上安装 Fossil。在服务器上安装的过程跟在本地一样。
3. 在你的 `cgi-bin` 目录下(或它对应的目录,这取决于你的 HTTP 守护进程)创建一个名为 `repo_myproject.cgi` 的文件:
```
#!/usr/local/bin/fossil
repository: /home/klaatu/public_html/myproject.fossil
```
添加可执行权限:
```
$ chmod +x repo_myproject.cgi
```
你需要做的都已经做完了。现在可以通过互联网访问你的项目了。
你可以通过 CGI 脚本来访问 web UI例如 `https://example.com/cgi-bin/repo_myproject.cgi`
你也可以通过命令行来进行交互:
```
$ fossil clone https://klaatu@example.com/cgi-bin/repo_myproject.cgi
```
在本地的克隆仓库中工作时,你需要使用 `push` 子命令把本地的修改推送到远程的仓库,使用 `pull` 子命令把远程的修改拉取到本地仓库:
```
$ fossil push https://klaatu@example.com/cgi-bin/repo_myproject.cgi
```
### 使用 Fossil 作为独立的托管
Fossil 将大量的权力交到了你的手中(以及你的合作者的手中),让你不再依赖托管服务。本文只是简单的介绍了基本概念。你的代码项目还会用到很多有用的 Fossil 功能。尝试一下 Fossil。它不仅会改变你对版本控制的理解它会让你不再考虑其他的版本控制系统。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/11/fossil
作者:[Klaatu][a]
选题:[lujun9972][b]
译者:[lxbwolf](https://github.com/lxbwolf)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/klaatu
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_dinosaur_sunset.png?itok=lbzpbW5p "Dinosaurs on land at sunset"
[2]: https://fossil-scm.org/home/doc/trunk/www/index.wiki
[3]: https://www.sqlite.org/index.html
[4]: https://fossil-scm.org/home/uv/download.html
[5]: https://opensource.com/article/17/6/set-path-linux
[6]: https://opensource.com/sites/default/files/uploads/fossil-ui.jpg "Fossil UI"
[7]: https://creativecommons.org/licenses/by-sa/4.0/
@ -0,0 +1,94 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12911-1.html)
[#]: subject: (Keep track of multiple Git remote repositories)
[#]: via: (https://opensource.com/article/20/11/multiple-git-repositories)
[#]: author: (Peter Portante https://opensource.com/users/portante)
跟踪多个 Git 远程仓库
======
> 拥有一致的命名标准是保持本地和上游 Git 仓库保持一致的关键。
![](https://img.linux.net.cn/data/attachment/album/202012/11/220828tjt9qlpmg1opvibq.jpg)
当本地 Git 仓库的命名与远程仓库不一致时,与远程仓库协作就会变得很混乱。
解决此问题的一个方法是标准化两个词的使用和含义:`origin` 指的是你个人的 `example.com/<USER>/*` 仓库,而 `upstream` 指的是你的 `origin` 仓库所<ruby>复刻<rt>fork</rt></ruby>自的那个 `example.com` 仓库。换句话说,`upstream` 指的是公开提交工作的上游仓库,而 `origin` 指的是你对上游仓库的本地复刻,例如,你从这里生成<ruby>拉取请求<rt>pull request</rt></ruby>PR
以 [pbench][2] 仓库为例,下面是一个逐步建立新的本地克隆的方法,其中 `origin``upstream` 的定义是一致的。
1、在大多数 Git 托管服务上,当你想在上面工作时,必须对它进行复刻。当你运行自己的 Git 服务器时,这并不是必要的,但对于一个公开的代码库来说,这是一个在贡献者之间传输差异的简单方法。
创建一个 Git 仓库的复刻。在这个例子中,假设你的复刻位于 `example.com/<USER>/pbench`
2、接下来你必须获得一个统一资源标识符 [URI][3]),以便通过 SSH 进行<ruby>克隆<rt>cloning</rt></ruby>。在大多数 Git 托管服务上,比如 GitLab 或 GitHub它在一个标有 “Clone” 或 “Clone over SSH” 的按钮或面板上,可以将克隆 URI 复制到剪贴板中。
3、在你的开发系统中使用你复制的 URI 克隆仓库:
```
$ git clone git@example.com:<USER>/pbench.git
```
这将以默认名称 `origin` 来克隆 Git 仓库,作为你的 `pbench` 仓库复刻副本。
4、切换到刚才克隆的目录
```
$ cd ~/pbench
```
5、下一步获取源仓库的 SSH URI你最初复刻的那个。这可能和上面的方法一样。找到 “Clone” 按钮或面板,复制克隆地址。在软件开发中,这通常被称为“上游”,因为(理论上)这是大多数提交发生的地方,而你打算让这些提交流向下游的仓库。
6、将 URI 添加到你的本地仓库中。是的,将有*两个不同*的远程仓库分配给你的本地仓库副本:
```
$ git remote add upstream git@example.com:bigproject/pbench.git
```
7、现在你有两个命名远程仓库`origin` 和 `upstream`。 你可以用 `remote` 子命令查看你的远程仓库:
```
$ git remote -v
```
现在,你的本地 `master` 分支正在跟踪 `origin``master`,这不一定是你想要的。你可能想跟踪这个分支的 `upstream` 版本,因为大多数开发都在上游进行。这个想法是,你要在从上游获得的内容的基础上添加更改。
8、将你的本地的 `master` 分支改成跟踪 `upstream/master`
```
$ git fetch upstream
$ git branch --set-upstream-to=upstream/master master
```
你可以对任何你想要的分支这样做,而不仅仅是 `master`。例如,有些项目使用 `dev` 分支来处理所有不稳定的变化,而将 `master` 保留给已批准发布的代码。
9、一旦你设置了你的跟踪分支一定要变基`rebase`)你的 `master` 分支,使它与上游仓库的任何新变化保持一致:
```
$ git remote update
$ git checkout master
$ git rebase
```
这是一个保持 Git 仓库在不同复刻之间同步的好方法。如果你想自动完成这项工作,请阅读 Seth Kenlon 关于[使用 Ansible 托管 Git 仓库][4]的文章。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/11/multiple-git-repositories
作者:[Peter Portante][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/portante
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolseriesk12_rh_021x_0.png?itok=fvorN0e- (Digital hand surrounding by objects, bike, light bulb, graphs)
[2]: https://github.com/distributed-system-analysis/pbench
[3]: https://en.wikipedia.org/wiki/Uniform_Resource_Identifier
[4]: https://opensource.com/article/19/11/how-host-github-gitlab-ansible
@ -0,0 +1,222 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12912-1.html)
[#]: subject: (Getting started with Fedora CoreOS)
[#]: via: (https://fedoramagazine.org/getting-started-with-fedora-coreos/)
[#]: author: (Clément Verna https://fedoramagazine.org/author/cverna/)
Fedora CoreOS 入门
======
![Fedora CoreOS入门][1]
现在被称为 DevOps 时代,操作系统的关注度似乎比工具要低一些。然而,这并不意味着操作系统没有创新。(编辑注:基于 Linux 内核的众多发行版所提供的多样化产品就是一个很好的例子)。[Fedora CoreOS][4] 就对这个 DevOps 时代的操作系统应该是什么样有着独特的理念。
### Fedora CoreOS 的理念
Fedora CoreOSFCOS是由 CoreOS Container Linux 和 Fedora Atomic Host 合并而来。它是一个专注于运行容器化应用程序的精简的单体操作系统。安全性是首要重点FCOS 提供了自动更新,并带有 SELinux 强化。
为了使自动更新能够很好地工作,它们需要非常健壮,目标是运行 FCOS 的服务器在更新后不会崩溃。这是通过使用不同的发布流stable、testing 和 next来实现的。每个流每 2 周发布一次更新内容会从一个流推广到下一个流next -> testing -> stable。这样落地在 stable 流中的更新就有机会经过长时间的测试。
### 入门
对于这个例子,让我们使用 stable 流和一个 QEMU 基础镜像,我们可以作为一个虚拟机运行。你可以使用 [coreos-installer][5] 来下载该镜像。
在你的Workstation终端上更新镜像的链接后运行以下命令编辑注在 Silverblue 上,基于容器的 coreos 工具是最简单的方法,可以尝试一下。说明可以在 <https://docs.fedoraproject.org/en-US/fedora-coreos/tutorial-setup/> 中找到,特别是 “Setup with Podman or Docker” 一节。):
```
$ sudo dnf install coreos-installer
$ coreos-installer download --image-url https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/32.20200907.3.0/x86_64/fedora-coreos-32.20200907.3.0-qemu.x86_64.qcow2.xz
$ xz -d fedora-coreos-32.20200907.3.0-qemu.x86_64.qcow2.xz
$ ls
fedora-coreos-32.20200907.3.0-qemu.x86_64.qcow2
```
#### 创建一个配置
要定制一个 FCOS 系统,你需要提供一个配置文件,[Ignition][6] 将使用这个文件来配置系统。你可以用这个文件来配置诸如创建用户、添加受信任的 SSH 密钥、启用 systemd 服务等等。
以下配置创建了一个 `core` 用户,并在 `authorized_keys` 文件中添加了一个 SSH 密钥。它还创建了一个 systemd 服务,使用 [podman][7] 来运行一个简单的 “hello world” 容器:
```
version: "1.0.0"
variant: fcos
passwd:
users:
- name: core
ssh_authorized_keys:
- ssh-ed25519 my_public_ssh_key_hash fcos_key
systemd:
units:
-
contents: |
[Unit]
Description=Run a hello world web service
After=network-online.target
Wants=network-online.target
[Service]
ExecStart=/bin/podman run --pull=always --name=hello --net=host -p 8080:8080 quay.io/cverna/hello
ExecStop=/bin/podman rm -f hello
[Install]
WantedBy=multi-user.target
enabled: true
name: hello.service
```
在配置中加入你的 SSH 密钥后,将其保存为 `config.yaml`。接下来使用 Fedora CoreOS Config Transpiler`fcct`)工具将这个 YAML 配置转换成有效的 Ignition 配置JSON 格式)。
直接从 Fedora 的资源库中安装 `fcct`,或者从 [GitHub][8] 中获取二进制文件:
```
$ sudo dnf install fcct
$ fcct -output config.ign config.yaml
```
#### 安装并运行 Fedora CoreOS
要运行镜像,你可以使用 libvirt 堆栈。要在 Fedora 系统上使用 `dnf` 软件包管理器安装它:
```
$ sudo dnf install @virtualization
```
现在让我们创建并运行一个 Fedora CoreOS 虚拟机:
```
$ chcon --verbose unconfined_u:object_r:svirt_home_t:s0 config.ign
$ virt-install --name=fcos \
--vcpus=2 \
--ram=2048 \
--import \
--network=bridge=virbr0 \
--graphics=none \
--qemu-commandline="-fw_cfg name=opt/com.coreos/config,file=${PWD}/config.ign" \
--disk=size=20,backing_store=${PWD}/fedora-coreos-32.20200907.3.0-qemu.x86_64.qcow2
```
安装成功后,会显示一些信息并提供登录提示符:
```
Fedora CoreOS 32.20200907.3.0
Kernel 5.8.10-200.fc32.x86_64 on an x86_64 (ttyS0)
SSH host key: SHA256:BJYN7AQZrwKZ7ZF8fWSI9YRhI++KMyeJeDVOE6rQ27U (ED25519)
SSH host key: SHA256:W3wfZp7EGkLuM3z4cy1ZJSMFLntYyW1kqAqKkxyuZrE (ECDSA)
SSH host key: SHA256:gb7/4Qo5aYhEjgoDZbrm8t1D0msgGYsQ0xhW5BAuZz0 (RSA)
ens2: 192.168.122.237 fe80::5054:ff:fef7:1a73
Ignition: user provided config was applied
Ignition: wrote ssh authorized keys file for user: core
```
Ignition 配置文件没有为 `core` 用户提供任何密码,因此无法通过控制台直接登录。(不过,也可以通过 Ignition 配置为用户配置密码。)
使用 `Ctrl + ]` 组合键退出虚拟机的控制台。然后检查 `hello.service` 是否在运行:
```
$ curl http://192.168.122.237:8080
Hello from Fedora CoreOS!
```
使用预先配置的 SSH 密钥,你还可以访问虚拟机并检查其上运行的服务:
```
$ ssh core@192.168.122.237
$ systemctl status hello
● hello.service - Run a hello world web service
Loaded: loaded (/etc/systemd/system/hello.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2020-10-28 10:10:26 UTC; 42s ago
```
#### zincati、rpm-ostree 和自动更新
zincati 服务使用自动更新驱动 rpm-ostreed。
检查虚拟机上当前运行的 Fedora CoreOS 版本,并检查 zincati 是否找到了更新:
```
$ ssh core@192.168.122.237
$ rpm-ostree status
State: idle
Deployments:
● ostree://fedora:fedora/x86_64/coreos/stable
Version: 32.20200907.3.0 (2020-09-23T08:16:31Z)
Commit: b53de8b03134c5e6b683b5ea471888e9e1b193781794f01b9ed5865b57f35d57
GPGSignature: Valid signature by 97A1AE57C3A2372CCA3A4ABA6C13026D12C944D0
$ systemctl status zincati
● zincati.service - Zincati Update Agent
Loaded: loaded (/usr/lib/systemd/system/zincati.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2020-10-28 13:36:23 UTC; 7s ago
Oct 28 13:36:24 cosa-devsh zincati[1013]: [INFO ] initialization complete, auto-updates logic enabled
Oct 28 13:36:25 cosa-devsh zincati[1013]: [INFO ] target release '32.20201004.3.0' selected, proceeding to stage it
... zincati reboot ...
```
重启后,我们再远程登录一次,检查新版的 Fedora CoreOS
```
$ ssh core@192.168.122.237
$ rpm-ostree status
State: idle
Deployments:
● ostree://fedora:fedora/x86_64/coreos/stable
Version: 32.20201004.3.0 (2020-10-19T17:12:33Z)
Commit: 64bb377ae7e6949c26cfe819f3f0bd517596d461e437f2f6e9f1f3c24376fd30
GPGSignature: Valid signature by 97A1AE57C3A2372CCA3A4ABA6C13026D12C944D0
ostree://fedora:fedora/x86_64/coreos/stable
Version: 32.20200907.3.0 (2020-09-23T08:16:31Z)
Commit: b53de8b03134c5e6b683b5ea471888e9e1b193781794f01b9ed5865b57f35d57
GPGSignature: Valid signature by 97A1AE57C3A2372CCA3A4ABA6C13026D12C944D0
```
`rpm-ostree status` 现在显示了两个版本的 Fedora CoreOS一个是 QEMU 镜像中的版本,一个是更新后的最新版本。有了这两个版本,就可以使用 `rpm-ostree rollback` 命令回滚到之前的版本。
最后,你可以确保 hello 服务仍在运行并提供内容:
```
$ curl http://192.168.122.237:8080
Hello from Fedora CoreOS!
```
更多信息参见:[Fedora CoreOS 更新][9]。
#### 删除虚拟机
要进行事后清理,使用以下命令删除虚拟机和相关存储:
```
$ virsh destroy fcos
$ virsh undefine --remove-all-storage fcos
```
### 结论
Fedora CoreOS 为在容器中运行应用程序提供了一个坚实而安全的操作系统。它在推荐主机使用声明式配置文件进行配置的 DevOps 环境中表现出色。自动更新和回滚到以前版本的操作系统的能力,可以在服务的运行过程中带来安心的感觉。
通过关注项目[文档][10]中的教程,了解更多关于 Fedora CoreOS 的信息。
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/getting-started-with-fedora-coreos/
作者:[Clément Verna][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/cverna/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/11/fcos-gettingstarted-1-816x345.jpg
[2]: https://unsplash.com/@pawel_czerwinski?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]: https://unsplash.com/s/photos/core?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[4]: https://getfedora.org/coreos/
[5]: https://github.com/coreos/coreos-installer/releases
[6]: https://github.com/coreos/ignition
[7]: https://podman.io/
[8]: https://github.com/coreos/fcct/releases
[9]: https://docs.fedoraproject.org/en-US/fedora-coreos/auto-updates/
[10]: https://docs.fedoraproject.org/en-US/fedora-coreos/tutorials/
@ -0,0 +1,82 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12893-1.html)
[#]: subject: (How to Go Full Dark Mode With LibreOffice)
[#]: via: (https://itsfoss.com/libreoffice-dark-mode/)
[#]: author: (Dimitrios Savvopoulos https://itsfoss.com/author/dimitrios/)
如何在 LibreOffice 中完全启用深色模式
======
![](https://img.linux.net.cn/data/attachment/album/202012/07/083812n0zgss9qt175pm9z.jpg)
[LibreOffice][1] 是一款自由开源的跨平台办公生产力软件。如果你没有充分利用它,那么必须看下 [LibreOffice 小技巧][2]。
甚至在非编程人员中,深色主题也越来越受欢迎。它减轻了眼睛的压力,特别适合长时间使用屏幕。有人认为,这使文本看起来清晰明了,有助于提高生产率。
如今,某些 Linux 发行版例如 [Ubuntu 带有深色模式][3],使你的系统具有更暗的色彩。当你打开<ruby>深色模式<rt>dark mode</rt></ruby>时,某些应用将自动切换到深色模式。
LibreOffice 也会这样,但你编辑的主区域除外:
![LibreOffice semi dark mode matching with the system theme][4]
你可以更改它。如果要让 LibreOffice 进入完全深色模式,只需更改一些设置。让我告诉你如何做。
### 如何在 LibreOffice 中完全启用深色模式
如前所述,你需要先启用系统范围的深色模式。这样可以确保窗口颜色(或标题栏)与应用内深色完全融合。
接下来,打开套件中的**任意** LibreOffice 应用,例如 **Writer**。然后从菜单中,依次点击 **Tools -> Options -> Application Colors**,然后选择 **Document background 和 Application background****Black****Automatic**(任意适合你的方式)。
![][5]
如果图标不是深色,那么可以从菜单(如下图所示)中更改它们,**Tools -> Options -> View** ,我在 MX Linux 上的个人选择是 Ubuntu 的 [Yaru][6] 图标样式(如果你使用的图标包为深色版本,请选择它) 。
![][7]
当然,你也可以尝试其他 Linux 发行版的 [icon 主题][8]。
最终结果应如下所示:
![][9]
#### LibreOffice flatpak 软件包的其他技巧
如果你使用的是 LibreOffice 套件的 [Flatpak 软件包][10],那么 LibreOffice 的标题区域(或菜单区域)可能看起来是白色的。在这种情况下,你可以尝试进入 **Tools -> Options -> Personalization**,然后选择 “**灰色主题**”,如下截图所示。
![][11]
它并不完全是黑色的,但应该可以使外观看起来更好。希望可以帮助你切换到深色主题的 LibreOffice 体验!
#### 总结
深色主题逐渐开始在我们的台式机中占主导地位,它具有现代品味并减少了眼睛疲劳,尤其是在弱光条件下。
LibreOffice 使你可以自由地将工作环境切换为深色主题或保留浅色主题元素。实际上,你将有大量的自定义选项来调整你喜欢的内容。你是否已在 LibreOffice 上切换为深色主题?你首选哪种颜色组合?在下面的评论中让我们知道!
--------------------------------------------------------------------------------
via: https://itsfoss.com/libreoffice-dark-mode/
作者:[Dimitrios Savvopoulos][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/dimitrios/
[b]: https://github.com/lujun9972
[1]: https://www.libreoffice.org
[2]: https://itsfoss.com/libreoffice-tips/
[3]: https://itsfoss.com/dark-mode-ubuntu/
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/11/libreOffice-dark-mode.png?resize=799%2C450&ssl=1
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/11/1-libreoffice-application-colours.png?resize=800%2C551&ssl=1
[6]: https://extensions.libreoffice.org/en/extensions/show/yaru-icon-theme
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/11/2-libreoffice-iconstyle-1.png?resize=800%2C531&ssl=1
[8]: https://itsfoss.com/best-icon-themes-ubuntu-16-04/
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/11/3-libreoffice-dark.png?resize=800%2C612&ssl=1
[10]: https://itsfoss.com/what-is-flatpak/
[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/11/libre-office-personalization.png?resize=800%2C636&ssl=1
@ -0,0 +1,256 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12904-1.html)
[#]: subject: (8 Git aliases that make me more efficient)
[#]: via: (https://opensource.com/article/20/11/git-aliases)
[#]: author: (Ricardo Gerardi https://opensource.com/users/rgerardi)
8 个让我更有效率的 Git 别名
======
> 使用别名为你最常用或复杂的 Git 命令创建快捷方式。
![](https://img.linux.net.cn/data/attachment/album/202012/09/202245q50ss5kncqc241sf.jpg)
这篇出色的文章《[改变我使用 Git 工作方式的七个技巧][2]》启发了我写下另一个对我在命令行上使用 Git 的经验有重大影响的 Git 特性:别名。
定义 Git 的别名来替代命令有两大好处。
* 它简化了有许多选项的长命令,使它们更短,更容易记住。
* 缩短了经常使用的命令,使你的工作更有效率。
### 如何定义和使用别名
要定义 Git 的别名,请使用 `git config` 命令,加上别名和要替换的命令。例如,要为 `git push` 创建别名 `p`
```
$ git config --global alias.p 'push'
```
你可以通过将别名作为 `git` 的参数来使用别名,就像其他命令一样:
```
$ git p
```
要查看所有的别名,用 `git config` 列出你的配置:
```
$ git config --global -l
user.name=ricardo
user.email=ricardo@example.com
alias.p=push
```
你也可以用你喜欢的 shell 来定义别名,比如 Bash 或 Zsh。不过用 Git 定义别名有几个功能是用 shell 无法实现的。首先,它允许你在不同的 shell 中使用别名,而无需额外配置。此外,它还集成了 Git 的自动更正功能所以当你输入错误的命令时Git 可以建议你正确的别名。最后Git 还会将别名保存在用户配置文件中,你可以通过复制一个文件将别名转移到其他机器上。
无论使用哪种方法,定义别名都能改善你使用 Git 的整体体验。更多关于定义 Git 别名的信息,请看《[Git Book][4]》。
### 8 个有用的 Git 别名
现在你知道如何创建和使用别名了,来看看一些有用的别名。
#### 1、Git 状态
Git 命令行用户经常使用 `status` 命令来查看已更改或未跟踪的文件。默认情况下,这个命令提供了很多行的冗长输出,你可能不想要或不需要。你可以使用一个别名来处理这两个组件。定义别名 `st` 来缩短命令,并使用选项 `-sb` 来输出一个不那么啰嗦的状态和分支信息。
```
$ git config --global alias.st 'status -sb'
```
如果你在一个干净的分支上使用这个别名,你的输出就像这样:
```
$  git st
## master
```
在一个带有已更改和未跟踪文件的分支上使用它,会产生这样的输出:
```
$ git st
## master
 M test2
?? test3
```
#### 2、Git 单行日志
创建一个别名,以单行方式显示你的提交,使输出更紧凑:
```
$ git config --global alias.ll 'log --oneline'
```
使用这个别名可以提供所有提交的简短列表:
```
$ git ll
33559c5 (HEAD -> master) Another commit
17646c1 test1
```
#### 3、Git 的最近一次提交
这将显示你最近一次提交的详细信息。这是扩展了《Git Book》中 [别名][4] 一章的例子:
```
$ git config --global alias.last 'log -1 HEAD --stat'
```
用它来查看最后的提交:
```
$ git last
commit f3dddcbaabb928f84f45131ea5be88dcf0692783 (HEAD -> branch1)
Author: ricardo <ricardo@example.com>
Date:   Tue Nov 3 00:19:52 2020 +0000
    Commit to branch1
 test2 | 1 +
 test3 | 0
 2 files changed, 1 insertion(+)
```
#### 4、Git 提交
当你对 Git 仓库进行修改时,你会经常使用 `git commit`。使用 `cm` 别名使 `git commit -m` 命令更有效率:
```
$ git config --global alias.cm 'commit -m'
```
因为 Git 别名扩展了命令,所以你可以在执行过程中提供额外的参数:
```
$ git cm "A nice commit message"
[branch1 0baa729] A nice commit message
 1 file changed, 2 insertions(+)
```
#### 5、Git 远程仓库
`git remote -v` 命令列出了所有配置的远程仓库。用别名 `rv` 将其缩短:
```
$ git config --global alias.rv 'remote -v'
```
#### 6、Git 差异
`git diff` 命令可以显示不同提交的文件之间的差异,或者提交和工作树之间的差异。用 `d` 别名来简化它:
```
$ git config --global alias.d 'diff'
```
标准的 `git diff` 命令对小的改动很好用,但对于比较复杂的改动,外部工具如 `vimdiff` 就更有用。创建别名 `dv` 来使用 `vimdiff` 显示差异,并使用 `y` 参数跳过确认提示:
```
$ git config --global alias.dv 'difftool -t vimdiff -y'
```
使用这个别名来显示两个提交之间的 `file1` 差异:
```
$ git dv 33559c5 ca1494d file1
```
![vim-diff results][5]
#### 7、Git 配置列表
`gl` 别名可以更方便地列出所有用户配置:
```
$ git config --global alias.gl 'config --global -l'
```
现在你可以看到所有定义的别名(和其他配置选项):
```
$ git gl
user.name=ricardo
user.email=ricardo@example.com
alias.p=push
alias.st=status -sb
alias.ll=log --oneline
alias.last=log -1 HEAD --stat
alias.cm=commit -m
alias.rv=remote -v
alias.d=diff
alias.dv=difftool -t vimdiff -y
alias.gl=config --global -l
alias.se=!git rev-list --all | xargs git grep -F
```
#### 8、搜索提交
你还可以定义更复杂的别名,比如在别名前加上 `!` 字符,来执行外部 shell 命令。你可以用它来执行自定义脚本或更复杂的命令,包括 shell 管道。
例如,定义 `se` 别名来搜索你的提交:
```
$ git config --global alias.se '!git rev-list --all | xargs git grep -F'
```
使用这个别名来搜索提交中的特定字符串:
```
$ git se test2
0baa729c1d683201d0500b0e2f9c408df8f9a366:file1:test2
ca1494dd06633f08519ec43b57e25c30b1c78b32:file1:test2
```
### 自动更正你的别名
使用 Git 别名的一个很酷的好处是它与自动更正功能的原生集成。如果你犯了错误,默认情况下,Git 会建议使用与你输入的命令相似的命令,包括别名。例如,如果你想输入 `st`(`status` 的别名),却误打成了 `ts`,Git 会推荐正确的别名:
```
$ git ts
git: 'ts' is not a git command. See 'git --help'.
The most similar command is
        st
```
如果你启用了自动更正功能Git 会自动执行正确的命令:
```
$ git config --global help.autocorrect 20
$ git ts
WARNING: You called a Git command named 'ts', which does not exist.
Continuing in 2.0 seconds, assuming that you meant 'st'.
## branch1
?? test4
```
### 优化 Git 命令
Git 别名是一个很有用的功能它可以优化常见的重复性命令的执行从而提高你的效率。Git 允许你定义任意数量的别名,有些用户会定义很多别名。我更喜欢只为最常用的命令定义别名 —— 定义太多别名会让人难以记忆,而且可能需要查找才能使用。
更多关于别名的内容,包括其他有用的内容,请参见 [Git 维基的别名页面][7]。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/11/git-aliases
作者:[Ricardo Gerardi][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/rgerardi
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/terminal_command_linux_desktop_code.jpg?itok=p5sQ6ODE (Terminal command prompt on orange background)
[2]: https://linux.cn/article-12894-1.html
[3]: mailto:ricardo@example.com
[4]: https://git-scm.com/book/en/v2/Git-Basics-Git-Aliases
[5]: https://opensource.com/sites/default/files/uploads/vimdiff.png (vim-diff results)
[6]: https://creativecommons.org/licenses/by-sa/4.0/
[7]: https://git.wiki.kernel.org/index.php/Aliases

View File

@ -0,0 +1,151 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12887-1.html)
[#]: subject: (Journal five minutes a day with Jupyter)
[#]: via: (https://opensource.com/article/20/11/daily-journal-jupyter)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
每天用 Jupyter 写 5 分钟的日记
======
> 用 Jupyter 和 Python 在你的日常写作背后实现一些自动化。
![](https://img.linux.net.cn/data/attachment/album/202012/05/131314woxpksatp2toe7tz.jpg)
有些人会遵循传统,制定一年的计划。不过,一年的时间很长,所以我以季节性的主题或轨迹来规划。每个季度,我都会坐下来,看看即将到来的三个月的季节,并决定在这段时间里我将努力做什么。
对于我最新的主题,我决定要每天写一篇日记。我喜欢有明确的承诺,所以我承诺每天写 5 分钟。我也喜欢有可观察的承诺,哪怕只是对我而言,所以我把我的记录放在 Git 里。
我决定在写日记的过程中实现一些自动化,于是我使用了我最喜欢的自动化工具:[Jupyter][2]。Jupyter 有一个有趣的功能 [ipywidgets][3],这是一套用于 Jupyter Notebooks、JupyterLab 和 IPython 内核的交互式 HTML 组件。
如果你想跟着本文的代码走,请注意,让你的 JupyterLab 实例支持组件可能有点复杂,请按照[这些说明][4]来进行设置。
### 导入 ipywidgets 模块
首先,你需要导入一堆东西,比如 ipywidgets 和 [Twisted][5]。Twisted 模块可以用来创建一个异步时间计数器:
```
import twisted.internet.asyncioreactor
twisted.internet.asyncioreactor.install()
from twisted.internet import reactor, task
import ipywidgets, datetime, subprocess, functools, os
```
### 设置定时条目
用 Twisted 实现时间计数器是利用了 `task.LoopingCall`。然而,结束循环调用的唯一方法是用一个异常。倒计时时钟总会停止,所以你需要一个自定义的异常来指示“一切正常;计数器结束”:
```
class DoneError(Exception):
    pass
```
现在你已经写好了异常,你可以写定时器了。第一步是创建一个 `ipywidgets.Label` 的文本标签组件。循环使用 `divmod` 计算出分和秒,然后设置标签的文本值:
```
def time_out_counter(reactor):
    label = ipywidgets.Label("Time left: 5:00")
    current_seconds = datetime.timedelta(minutes=5).total_seconds()
    def decrement(count):
        nonlocal current_seconds
        current_seconds -= count
        time_left = datetime.timedelta(seconds=max(current_seconds, 0))
        minutes, left = divmod(time_left, minute)
        seconds = int(left.total_seconds())
        label.value = f"Time left: {minutes}:{seconds:02}"
        if current_seconds < 0:
            raise DoneError("finished")
    minute = datetime.timedelta(minutes=1)
    call = task.LoopingCall.withCount(decrement)
    call.reactor = reactor
    d = call.start(1)
    d.addErrback(lambda f: f.trap(DoneError))
    return d, label
```
### 从 Jupyter 组件中保存文本
下一步是写一些东西,将你输入的文字保存到一个文件中,并提交到 Git。另外由于你要写 5 分钟的日记,你需要一个能给你提供写字区域的组件(滚动肯定是可以的,但一次能看到更多的文字就更好了)。
这就用到了组件 `Textarea`,这是一个你可以书写的文本字段,而 `Output` 则是用来给出反馈的。这一点很重要,因为 `git push` 可能会花点时间或失败,这取决于网络。如果备份失败,用反馈提醒用户很重要:
```
def editor(fname):
    textarea = ipywidgets.Textarea(continuous_update=False)
    textarea.rows = 20
    output = ipywidgets.Output()
    runner = functools.partial(subprocess.run, capture_output=True, text=True, check=True)
    def save(_ignored):
        with output:
            with open(fname, "w") as fpout:
                fpout.write(textarea.value)
            print("Sending...", end='')
            try:
                runner(["git", "add", fname])
                runner(["git", "commit", "-m", f"updated {fname}"])
                runner(["git", "push"])
            except subprocess.CalledProcessError as exc:
                print("Could not send")
                print(exc.stdout)
                print(exc.stderr)
            else:
                 print("Done")
    textarea.observe(save, names="value")
    return textarea, output, save
```
`continuous_update=False` 是为了避免每个字符都保存一遍并发送至 Git。相反只要脱离输入焦点它就会保存。这个函数也返回 `save` 函数,所以可以明确地调用它。
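下面是一个假设性的小示例(文件名仅作演示),展示如何在 Notebook 单元格中显式调用返回的 `save` 函数:

```
# 创建编辑器组件;save 的参数会被忽略,传入 None 即可
textarea, output, save = editor("2020-12-09.txt")
save(None)  # 将当前 textarea 的内容写入文件并提交到 Git
```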
### 创建一个布局
最后,你可以使用 `ipywidgets.VBox` 把这些东西放在一起。这是一个包含一些组件并垂直显示的东西。还有一些其他的方法来排列组件,但这足够简单:
```
def journal():
    date = str(datetime.date.today())
    title = f"Log: Startdate {date}"
    filename = os.path.join(f"{date}.txt")
    d, clock = time_out_counter(reactor)
    textarea, output, save = editor(filename)
    box = ipywidgets.VBox([
        ipywidgets.Label(title),
        textarea,
        clock,
        output
    ])
    d.addCallback(save)
    return box
```
biu你已经定义了一个写日记的函数了所以是时候试试了。
```
journal()
```
![Jupyter journal][6]
你现在可以写 5 分钟了!
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/11/daily-journal-jupyter
作者:[Moshe Zadka][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tea-cup-mug-flowers-book-window.jpg?itok=JqThhl51 (Ceramic mug of tea or coffee with flowers and a book in front of a window)
[2]: https://jupyter.org/
[3]: https://ipywidgets.readthedocs.io/en/latest/
[4]: https://ipywidgets.readthedocs.io/en/latest/user_install.html
[5]: https://twistedmatrix.com/trac/
[6]: https://opensource.com/sites/default/files/uploads/journaling_output_13_0.png (Jupyter journal)
[7]: https://creativecommons.org/licenses/by-sa/4.0/

View File

@ -0,0 +1,69 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12897-1.html)
[#]: subject: (How this open source security tool halted significant DDoS attacks)
[#]: via: (https://opensource.com/article/20/12/open-source-vs-ddos-attacks)
[#]: author: (Philippe Humeau https://opensource.com/users/philippe-humeau)
如何在 1 分钟内阻止 7000 台机器的僵尸网络
======
> 对 CrowdSec 的配置更改,在不到一分钟的时间内阻止了一个 7000 台机器的僵尸网络的攻击。
![](https://img.linux.net.cn/data/attachment/album/202012/07/220444x6kaedeu6ko0e7uo.jpg)
2020 年,我们的生活和工作方式在短短几天内被彻底颠覆。随着 COVID-19 开始在全球范围内蔓延,我们将工作带回家,与同事、朋友和家人保持在线联系成为关键的必需品。这为黑客造成破坏打开了大门。例如,根据 Neustar 的数据,今年上半年全球的分布式拒绝服务(DDoS)攻击[增长了 151%][2]。
[CrowdSec][3] 是一个开源的安全引擎,它可以分析访问者的行为,并提供适应各种攻击的响应。它能解析来自各种来源的日志,并应用启发式方案来识别攻击性行为,并防范大多数攻击类别。并且,它与其它安装的 CrowdSec 系统共享该情报。每次 IP 地址被阻止时,它都会通知整个用户社区。这就创建了一个[实时、协作的 IP 信誉数据库][4],利用人群的力量使互联网更加安全。
### CrowdSec 如何工作:案例研究
Sorf Networks 是一家总部位于土耳其的技术公司,为客户提供高配置的托管服务器和 DDoS 防护解决方案,它提供了一个 CrowdSec 工作的例子。Sorf 的一个客户每天都会遇到来自 1 万多台机器僵尸网络的 DDoS 攻击,并努力寻找一种能够满足技术要求的解决方案来及时处理这些攻击。
虽然客户采取了一般的预防措施来缓解这些攻击,比如引入 JavaScript(JS)<ruby>挑战<rt>challenges</rt></ruby>、限速等,但这些措施在整个攻击面并不可行。一些 URL 需要被非常基本的软件使用,而这些软件不支持 JS 挑战。黑客就是黑客,这正是他们每天的目标:链条上最薄弱的环节。
Sorf Networks 首先使用 [Fail2ban][5](这启发了 CrowdSec为其客户建立了一个 DDoS 缓解策略。它在一定程度上帮助了客户,但它太慢了。它需要 50 分钟来处理日志和处理 7000 到 10000 台机器的 DDoS 攻击。这使得它在这种情况下没有效果。另外,因为它没有禁止 IP日志会持续堆积它需要每秒处理几千条日志这是不可能的。
在使用租用的僵尸网络进行的 DDoS 测试中,一次攻击可以高达每秒 6700 个左右的请求,这些请求来自 8600 个独立 IP。这是对一台服务器流量的捕捉
![Server traffic][6]
虽然 CrowdSec 技术可以应对巨大的攻击,但其默认设置每秒只能处理约 1000 个端点。Sorf 需要一个量身定做的配置来处理单台机器上这么多的流量。
Sorf 的团队对 CrowdSec 的配置进行了修改,以显著提高其吞吐量来处理日志。首先,它去掉了高消耗且非关键的<ruby>富集<rt>enrichment</rt></ruby>解析器,例如 [GeoIP 富集][7]。它还将允许的 goroutine 的默认数量从一个增加到五个。之后,团队又用 8000 到 9000 台主机做了一次实测,平均每秒 6000 到 7000 个请求。这个方案是有代价的,因为 CrowdSec 在运行过程中吃掉了 600% 的 CPU但其内存消耗却保持在 270MB 左右。
然而,结果却显示出明显的成功:
* 在一分钟内CrowdSec 能够处理所有的日志
* 95% 的僵尸网络被禁止,攻击得到有效缓解
* 15 个域现在受到保护,不受 DDoS 攻击
根据 Sorf Networks 的总监 Cagdas Aydogdu 的说法CrowdSec 的平台使团队“能够在令人难以置信的短时间内提供一个世界级的高效防御系统”。
* * *
本文改编自[如何用 CrowdSec 在 1 分钟内阻止 7000 台机器的僵尸网络][8],原载于 CrowdSec 网站。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/12/open-source-vs-ddos-attacks
作者:[Philippe Humeau][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/philippe-humeau
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security_password_chaos_engineer_monster.png?itok=J31aRccu (Security monster)
[2]: https://www.businesswire.com/news/home/20200916005046/en/DDoS-Attacks-Increase-by-151-in-First-Half-Of-2020
[3]: https://crowdsec.net/
[4]: https://opensource.com/article/20/10/crowdsec
[5]: https://www.fail2ban.org
[6]: https://opensource.com/sites/default/files/uploads/crowdsec_servertraffic.png (Server traffic)
[7]: https://hub.crowdsec.net/author/crowdsecurity/configurations/geoip-enrich
[8]: https://crowdsec.net/2020/10/21/how-to-stop-a-botnet-with-crowdsec/

View File

@ -0,0 +1,115 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12901-1.html)
[#]: subject: (Try Jed as your Linux terminal text editor)
[#]: via: (https://opensource.com/article/20/12/jed)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
尝试将 Jed 作为你的 Linux 终端文本编辑器
======
> Jed 方便的下拉菜单,让新用户可以轻松地使用终端文本编辑器。
![](https://img.linux.net.cn/data/attachment/album/202012/09/085456f7fmt74eu6eekpmt.jpg)
你可能听说过 Emacs、Vim 和 Nano 这些典型的 Linux 文本编辑器,但 Linux 有大量的开源文本编辑器,我的目标是在 12 月份对其中的 31 个文本编辑器进行一次公平的测试。
在这篇文章中,我将介绍 [Jed][2],它是一个基于终端的编辑器,其特点是有一个方便的下拉菜单。这让刚刚接触终端编辑器的用户,以及不喜欢记住每个功能组合键的用户,用起来都特别容易。
### 安装 Jed
在 Linux 上,你的发行版的软件仓库中可能已经收录了 Jed,可以直接通过软件包管理器安装:
```
$ sudo dnf install jed
```
并不是所有发行版都是如此,但它是一个很容易从源码编译的应用。首先,下载 [S 语言][3](Jed 的编写语言)并安装(其中 `x.y.z` 请替换为对应的版本号):
```
$ wget https://www.jedsoft.org/releases/slang/slang-x.y.z.tar.bz2
$ tar xvf slang*bz2
$ cd slang-x.y.z
$ ./configure ; make
$ sudo make install
```
安装好后,对 [Jed 源码][4]也同样操作(其中 `x.y.z` 请替换为对应的版本号):
```
$ wget https://www.jedsoft.org/releases/jed/jed-x.y.z.tar.bz2
$ tar xvf jed*bz2
$ cd jed-x.y.z
$ ./configure ; make
$ sudo make install
```
### 启动 Jed
Jed 在终端中运行,所以要启动它,只需打开终端,输入 `jed`
```
F10 key ==> File Edit Search Buffers Windows System Help
This is a scratch buffer. It is NOT saved when you exit.
To access the menus, press F10 or ESC-m and the use the arrow
keys to navigate.
Latest version information is available on the web from
<http://www.jedsoft.org/jed/>. Other sources of JED
information include the usenet newsgroups comp.editors and
alt.lang.s-lang. To subscribe to the jed-users mailing list, see
<http://www.jedsoft.org/jed/mailinglists.html>.
Copyright (C) 1994, 2000-2009 John E. Davis
Email comments or suggestions to <jed@jedsoft.org>.
[ (Jed 0.99.19U) Emacs: *scratch* () 1/16 8:49am ]
```
### 如何使用 Jed
Jed 自动加载的说明很清晰且很有帮助。你可以按 `F10` 键,或者先按 `Esc` 键再按字母 `M`,来进入顶部菜单。这将使你的光标进入 Jed 顶部的菜单栏,但它不会打开菜单。要打开菜单,请按键盘上的回车键。使用方向键来浏览每个菜单。
屏幕上的菜单不仅对初次使用的用户很有帮助,对有经验的用户来说,它还提供了很好的键盘快捷键提醒。例如,你大概能猜到如何保存正在处理的文件。进入 **File** 菜单,选择 **Save**。如果你想加快这个过程,你可以记住 `Ctrl+X`,然后 `Ctrl+S` 的组合键(是的,这是连续的两个组合键)。
### 探索 Jed 的功能
对于一个简单的编辑器来说,Jed 拥有一系列令人惊讶的实用功能。它有一个内置的多路复用器,允许你同时打开多个文件,这些文件会一个“叠”在另一个之上,你可以在它们之间切换。你可以分割你的 Jed 窗口,让多个文件同时出现在屏幕上,改变你的颜色主题,或者打开一个 shell。
对于任何有 Emacs 使用经验的人来说,Jed 的许多“没有宣传”的功能(例如用于导航和控制的组合键)都是一目了然的。然而,当一个组合键与你所期望的大相径庭时,就会有一个轻微的学习曲线(或者更准确地说,是“忘掉旧习惯”的曲线)。例如,GNU Emacs 中的 `Alt+B` 可以将光标向后移动一个字,但在 Jed 中,默认情况下,它是 **Buffers** 菜单的快捷键。这让我措手不及,写这篇文章时差不多每句话都要遇到一次。
![Jed][8]
Jed 也有**模式**,允许你加载模块或插件来帮助你编写特定种类的文本。例如,我使用默认的 text 模式写了这篇文章,但当我在编写 [Lua][9] 时,我能够切换到 lua 模式。这些模式提供语法高亮,并帮助匹配括号和其他分隔符。你可以在 `/usr/share/jed/lib` 中查看 Jed 捆绑了哪些模式,而且因为它们是用 S 语言编写的,你可以浏览代码,并可能学习一种新的语言。
### 尝试 Jed
Jed 是一个令人愉快且清新的 Linux 终端文本编辑器。它轻量级,易于使用,设计相对简单。作为 Vi 的替代方案,你可以在你的 `~/.bashrc` 文件中(如果你是 root 用户,在 root 用户的 `~/.bashrc` 文件中)将 Jed 设置为 `EDITOR``VISUAL` 变量。今天就试试 Jed 吧。
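下面是一个可以追加到 `~/.bashrc` 末尾的最小示例(假设 `jed` 已经安装并位于你的 `PATH` 中):

```
# 将 Jed 设为命令行下的默认编辑器
export EDITOR=jed
export VISUAL=jed
```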
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/12/jed
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/laptop_screen_desk_work_chat_text.png?itok=UXqIDRDD (Person using a laptop)
[2]: https://www.jedsoft.org/jed
[3]: https://www.jedsoft.org/releases/slang/
[4]: https://www.jedsoft.org/releases/jed
[5]: http://www.jedsoft.org/jed/
[6]: http://www.jedsoft.org/jed/mailinglists.html
[7]: mailto:jed@jedsoft.org
[8]: https://opensource.com/sites/default/files/jed.png (Jed)
[9]: https://opensource.com/article/20/2/lua-cheat-sheet

View File

@ -0,0 +1,109 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12905-1.html)
[#]: subject: (Zotero: An Open Source App to Help You Collect & Share Research)
[#]: via: (https://itsfoss.com/zotero/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Zotero一款帮助你收集和分享研究成果的开源应用
======
> Zotero 是一款令人印象深刻的自由开源的应用,它让你可以收集、组织、引用和共享研究成果。你还可以使用 Zotero 为你的文档即时创建参考文献和书目。
![](https://img.linux.net.cn/data/attachment/album/202012/09/213010i6o78b6y5gifyjtf.jpg)
通常,你可以[使用 Linux 上任何一款笔记应用][1]来收集和分享你的想法。但是,在这里,我想分享一些专门为你量身定做的东西,来帮助你收集、整理和分享你的研究成果,即 [Zotero][2]。
### Zotero收集、整理和分享研究成果
![][3]
Zotero 是一个完全开源的项目,你可以在 [GitHub][4] 上找到它。它的目的是帮助你轻松地收集、整理、添加笔记和分享你的研究成果。
而且,这一切都不需要基于云端的服务,它是完全离线的。所以,你的研究成果是属于你的。当然,除非你出于协作目的想将其同步,为此你可能需要参考[该文档][5]。
作为一个好的起点,你可以选择 [WebDAV 存储][6],或者直接创建一个 Zotero 帐户来轻松同步和分享你的研究成果。
例如,我创建了一个名为 `ankush9` 的 Zotero 账户,你可以在 <https://www.zotero.org/ankush9> 找到我的研究合集(我添加到我的出版物中)。
![][7]
这使得分享你整理的研究成果变得很容易,你可以选择将哪些部分共享到出版物中。
让我着重介绍一下 Zotero 的主要功能,来帮助你决定是否需要尝试一下。
### Zotero 的功能
![][8]
* 能够使用浏览器插件从网页直接添加信息
* 为每份资料添加说明
* 支持添加标签
* 支持添加语音记录
* 添加视频作为附件
* 添加软件作为附件
* 将电子邮件作为附件
* 将播客作为附件
* 添加博客文章
* 添加一个文件链接
* 根据项目建立书目
* 离线快照存储(无需连接互联网即可访问保存的网页)
* 可以复制项目
* 整理库中的项目
* 提供了一个垃圾箱,可以删除你的项目,并在需要时轻松恢复。
* 支持同步
* 支持数据导出
* 可整合 LibreOffice 插件
* 使用你的 Zotero 个人资料链接轻松分享你的研究笔记。
* 跨平台支持
如果你只是想快速创建书目,你可以尝试他们的另一个工具,[ZoteroBib][9]。
### 在 Linux 上安装 Zotero
![][12]
它适用于 Windows、macOS 和 Linux。对于 Linux如果你使用的是基于 Ubuntu 的发行版(或 Ubuntu 本身),你可以下载一个 deb 文件(由第三方维护)并安装它。
安装 [deb 文件][13]很简单,它在 Pop OS 20.04 上工作得很好。如果你使用的是其他 Linux 发行版,你可以[解压 tar 包][14]并进行安装。
你可以按照[官方安装说明][15]来找到合适的方法。
- [下载 Zotero][2]
### 总结
它有大量的功能,你可以组织、分享、引用和收集资源,以供你进行研究。由于支持音频、视频、文本和链接,它应该适合几乎所有的东西。
当然,我会将它推荐给有经验的用户,让它发挥最大的作用。而且,如果你之前使用过树状图(思维导图)类的笔记工具,你就知道该怎么做了。
你觉得 Zotero 怎么样?如果它不适合你,你有更好的替代品建议么?请在下面的评论中告诉我你的想法。
--------------------------------------------------------------------------------
via: https://itsfoss.com/zotero/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/note-taking-apps-linux/
[2]: https://www.zotero.org/
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/11/zotero-app.png?resize=800%2C481&ssl=1
[4]: https://github.com/zotero/zotero
[5]: https://www.zotero.org/support/
[6]: https://en.wikipedia.org/wiki/WebDAV
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/11/zotero-online-publication.jpg?resize=800%2C600&ssl=1
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/11/zotero-extension.jpg?resize=800%2C414&ssl=1
[9]: https://zbib.org/
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/11/zotero-preferences.png?resize=800%2C489&ssl=1
[13]: https://itsfoss.com/install-deb-files-ubuntu/
[14]: https://en.wikipedia.org/wiki/Tarball
[15]: https://www.zotero.org/support/installation

View File

@ -0,0 +1,118 @@
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12898-1.html)
[#]: subject: (How to Add Third-Party Repositories in Fedora and Get Access to a Huge Number of Additional Software)
[#]: via: (https://itsfoss.com/fedora-third-party-repos/)
[#]: author: (John Paul https://itsfoss.com/author/john/)
如何在 Fedora 中添加第三方存储库以访问大量附加软件
======
![](https://img.linux.net.cn/data/attachment/album/202012/08/074323tkjpr2499rtjnjq0.jpg)
在你安装 Fedora 后,你可能会发现你想要安装和使用的一些软件在软件商店中找不到。出于一些原因,这些软件包不能出现在 Fedora 存储库中。
不用担心,我将告诉你如何为 Fedora 添加第三方存储库来使这些软件包可使用。
### 在 Fedora 中的第三方存储库是什么?
操作系统开发人员通常会决定哪些软件包可以出现在其存储库中,哪些不可以。Fedora 也是如此。依据 [Fedora 文档][1],第三方存储库包含有 “拥有更为宽松的许可政策,并提供 Fedora 因各种原因所排除软件包” 的软件包。
Fedora 强制执行下面的 [准则][2] ,当它打包软件包时:
* 如果它是专有的,它就不能包含在 Fedora 中
* 如果它在法律上被限制,它就不能包含在 Fedora 中
* 如果它违反美国法律(特别是联邦政府或适用于州政府的法律),它就不能包含在 Fedora 中
因此,有一些可以由用户自行添加的存储库。这使得用户能够访问附加的软件包。
### 在 Fedora 中启用 RPM Fusion 存储库
[RPM Fusion][3] 是 Fedora 的第三方应用程序的主要来源。RPM Fusion 是由三个项目(Dribble、Freshrpms 和 Livna)合并而成的。RPM Fusion 提供两种不同的软件存储库。
* free 存储库:包含开源软件。
* nonfree 存储库:包含没有开源协议的软件,但是它们的源文件代码却是可以自由使用的。
这里有两种方法来启用 RPM Fusion 存储库:从终端启用,或通过点击几个按钮来启用。我们将逐一查看。
#### 方法 1命令行方法
这是启用 RPM Fusion 存储库的最简单的方法。只需要输入下面的命令即可启用两个存储库:
```
sudo dnf install https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
```
系统会要求你输入密码,并确认你是否想要安装这些存储库。在你确认后,安装过程将在几秒钟或几分钟内完成。
![通过命令行安装 RPM Fusion][4]
#### 方法 2图形用户界面方法
使用这个方法来启用 RPM Fusion 存储库,你需要访问 [RPM Fusion 网站][5] 。你将看到针对不同 Fedora 版本的两个存储库的链接。
RPM Fusion 建议先安装 free 存储库。因此,单击针对你 Fedora 版本的 free 存储库的链接。然后会打开一个窗口来询问你是否想安装该存储库。单击安装。
![通过图形用户界面安装 RPM Fusion][6]
在安装过程完成后,返回并使用相同的步骤安装 nonfree 存储库。
### 启用 Fedora 的第三方存储库
Fedora 最近开始提供它自己的 [第三方应用程序存储库][7] 。在这个存储库中 [可使用的应用程序的数量][8] 是非常少的。你可以 [使用它来在 Fedora 上安装 Chrome 浏览器][9] 。除 Chrome 外,它也包含 Adobe Brackets、Atom、Steam、Vivaldi、Opera 等应用程序。
就像 RPM Fusion 一样,你可以通过终端或图形用户界面的方法来启用这个存储库。
#### 方法 1命令行方法
为启用 Fedora 的第三方存储库,输入下面的命令到你的终端中:
```
sudo dnf install fedora-workstation-repositories
```
当被提示时,确保输入你的密码并输入 `Y` 来确认安装。
#### 方法 2图形用户界面方法
如果你不习惯使用终端,你可以使用图形用户界面方法。
首先,你需要打开 Gnome “软件”。接下来,你需要单击右上角的汉堡菜单,并从菜单中选择“软件存储库”。
![Gnome 软件的菜单][10]
在软件存储库窗口中,你将在其顶部看到写着 “第三方存储库” 字样的部分。单击“安装”按钮。当你被提示时,输入你的密码。
![Fedora 第三方存储库安装][11]
随着这些附加存储库的启用,你可以安装软件到你的系统当中。你可以从软件中心或使用 DNF 软件包管理器来轻松地安装它们。
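举个例子,下面是一个假设性的命令行操作流程(仓库名 `google-chrome` 和包名 `google-chrome-stable` 以 Fedora 官方第三方仓库的常见命名为参考,请以你系统上的实际名称为准):

```
# 列出所有仓库,确认第三方仓库是否已经启用
dnf repolist --all | grep google-chrome

# 如有需要,先启用对应的仓库,再安装软件包
sudo dnf config-manager --set-enabled google-chrome
sudo dnf install google-chrome-stable
```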
如果你发现这篇文章很有趣,请花费一些时间来在社交媒体上分享它。
--------------------------------------------------------------------------------
via: https://itsfoss.com/fedora-third-party-repos/
作者:[John Paul][a]
选题:[lujun9972][b]
译者:[robsean](https://github.com/robsean)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://docs.fedoraproject.org/en-US/quick-docs/setup_rpmfusion/#third-party-repositories
[2]: https://fedoraproject.org/wiki/Forbidden_items
[3]: https://rpmfusion.org/RPM%20Fusion
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/11/install-rpmfusion-cli.png?resize=800%2C604&ssl=1
[5]: https://rpmfusion.org/Configuration
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/11/install-rpmfusion-gui.png?resize=800%2C582&ssl=1
[7]: https://fedoraproject.org/wiki/Workstation/Third_Party_Software_Repositories
[8]: https://fedoraproject.org/wiki/Workstation/Third_party_software_list
[9]: https://itsfoss.com/install-google-chrome-fedora/
[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/11/software-meni.png?resize=800%2C672&ssl=1
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/11/fedora-third-party-repo-gui.png?resize=746%2C800&ssl=1

View File

@ -0,0 +1,73 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12889-1.html)
[#]: subject: (Download the 2020 Linux Foundation Annual Report)
[#]: via: (https://www.linux.com/news/download-the-2020-linux-foundation-annual-report/)
[#]: author: (The Linux Foundation https://www.linuxfoundation.org/blog/2020/12/download-the-2020-linux-foundation-annual-report/)
下载 2020 年度 Linux 基金会年度报告
======
![][1]
2020 年对于 <ruby>Linux 基金会<rt>Linux Foundation</rt></ruby>LF和我们托管的社区来说是充满挑战的一年。在这次大流行期间我们都看到我们和世界各地许多同事、朋友和家人的日常生活完全改变了。在我们的社区中有太多的人也为失去家人和朋友而悲痛。
看到 LF 的成员加入到对抗 COVID-19 的斗争中,令人振奋。我们在世界各地的成员为科研人员贡献了技术资源,为挣扎中的家庭和个人提供了援助,为国家和国际努力做出了贡献,有些人甚至一起在 [LF 公共卫生][3] 下创建了开源项目,以帮助各国应对这场大流行病。
今年,我们的项目社区在继续发展,在许多开放技术领域、开放标准、开放数据和开放硬件方面都有新的举措。今年,我们迎接了 150 多个新社区加入 LF包括 [FINOS 基金会][4],它是开源金融服务项目的保护项目。
我们的 [活动团队][5] 不得不进行了重大转型,在几周内,从面对面的活动转变为虚拟活动,参与者从不足 100 人到数万人不等。这些虚拟聚会帮助我们社区中的许多人在这个困难时期建立了联系。我们也学到了很多关于未来可能通过提供虚拟体验的<ruby>混合亲身活动<rt>hybrid in-person events</rt></ruby>来提供更具包容性的体验。今年,我们很想念在我们的社区中见到的许多朋友,并期待着在安全的情况下再次见到你们。
我们的 [培训和认证][6] 团队能够帮助 170 多万名报名参加我们免费培训课程的人。我要祝贺今年获得 LF 认证的 4 万多人。
《[LF 的 2020 年度工作报告][7]》显示,尽管商业环境充满挑战,但经过培训和认证的开源专业人员仍有大有需求,并能轻松地展示其价值。
作为我们正在进行的多元化努力的一部分,在加入反对不平等的斗争中,我们的社区专注于该如何在项目中使用语言,并寻找导师来指导下一代的贡献者。我们的社区,如 Linux 内核团队和在北美 KubeCon 上发起的 <ruby>[包容性命名倡议][8]<rt>Inclusive Naming Initiative</rt></ruby>,在加强我们的互动方式上取得了进展。
今年是我们<ruby>联合发展基金会<rt>Joint Development Foundation</rt></ruby>JDF<ruby>开放标准社区<rt>open standards communities</rt></ruby>的突破性一年。我们迎来了六个建立开放标准的新项目。[JDF 还被批准为 ISO/IEC JTC 1 公开发布规范PAS提交者][9]。今年还标志着我们的第一个开放标准社区 OpenChain 通过 PAS 程序被正式认可为国际标准。今天Linux 基金会可以把我们的社区,从开源仓库带到一个公认的全球标准。
今年,我们生态系统中的许多人已经站出来帮助安全工作。一个新的社区 <ruby>[开源安全基金会][10]<rt>Open Source Security Foundation</rt></ruby>OpenSSF启动了以协调专注于提高开源软件安全性的努力。
当我们继续在美国与挑战作斗争时,[我们也重申 LF 是全球社区的一部分][11]。
我们的成员必须得应对国际贸易政策变化的一年,并了解到开放源码在政治上的蓬勃发展。来自世界各地的我们的成员社区参与了<ruby>开放合作<rt>open collaboration</rt></ruby>,因为它是开放、中立和透明的。这些参与者显然希望继续与全球同行合作,以应对大大小小的挑战。
在这困难的一年结束时,所有这些都让我们确信,开放合作是解决世界上最复杂挑战的模式。没有任何一个人、组织或政府能够单独创造出我们解决最紧迫问题所需的技术。我们代表整个 Linux 基金会团队,期待着帮助您和我们的社区应对接下来的任何挑战。
![][12]
Jim ZemlinLinux 基金会执行总监
- [下载 Linux 基金会 2020 年度报告][2]
这篇文章首先发布于 [Linux 基金会][14]。
--------------------------------------------------------------------------------
via: https://www.linux.com/news/download-the-2020-linux-foundation-annual-report/
作者:[The Linux Foundation][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxfoundation.org/blog/2020/12/download-the-2020-linux-foundation-annual-report/
[b]: https://github.com/lujun9972
[1]: https://www.linuxfoundation.org/wp-content/uploads/2020/12/2020-Linux-Foundation-Annual-Report_blogheader.png
[2]: http://linuxfoundation.org/2020-annual-report
[3]: https://www.lfph.io/
[4]: https://www.finos.org/
[5]: https://events.linuxfoundation.org/
[6]: https://training.linuxfoundation.org/
[7]: https://training.linuxfoundation.org/resources/2020-open-source-jobs-report/
[8]: https://inclusivenaming.org/
[9]: https://www.linuxfoundation.org/blog/2020/05/joint-development-foundation-recognized-as-an-iso-iec-jtc-1-pas-submitter-and-submits-openchain-for-international-review/
[10]: https://openssf.org/
[11]: https://www.linuxfoundation.org/blog/2020/08/open-source-collaboration-is-a-global-endeavor/
[12]: https://www.linuxfoundation.org/wp-content/uploads/2020/12/JimZemlin_Sig-150x150.png
[13]: https://www.linuxfoundation.org/blog/2020/12/download-the-2020-linux-foundation-annual-report/
[14]: https://www.linuxfoundation.org/

View File

@ -1,83 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (When to use 5G, when to use Wi-Fi 6)
[#]: via: (https://www.networkworld.com/article/3402316/when-to-use-5g-when-to-use-wi-fi-6.html)
[#]: author: (Lee Doyle )
When to use 5G, when to use Wi-Fi 6
======
5G is a cellular service, and Wi-Fi 6 is a short-range wireless access technology, and each has attributes that make them useful in specific enterprise roles.
![Thinkstock][1]
We have seen hype about whether [5G][2] cellular or [Wi-Fi 6][3] will win in the enterprise, but the reality is that the two are largely complementary with an overlap for some use cases, which will make for an interesting competitive environment through the early 2020s.
### The potential for 5G in enterprises
The promise of 5G for enterprise users is higher speed connectivity with lower latency. Cellular technology uses licensed spectrum which largely eliminates potential interference that may occur with unlicensed Wi-Fi spectrum. Like current 4G LTE technologies, 5G can be supplied by cellular wireless carriers or built as a private network .
The architecture for 5G requires many more radio access points and can suffer from poor or no connectivity indoors. So, the typical organization needs to assess its [current 4G and potential 5G service][4] for its PCs, routers and other devices. Deploying indoor microcells, repeaters and distributed antennas can help solve indoor 5G service issues. As with 4G, the best enterprise 5G use case is for truly mobile connectivity such as public safety vehicles and in non-carpeted environments like mining, oil and gas extraction, transportation, farming and some manufacturing.
In addition to broad mobility, 5G offers advantages in terms of authentication while roaming and speed of deployment as might be needed to provide WAN connectivity to a pop-up office or retail site. 5G will have the capacity to offload traffic in cases of data congestion such as live video. As 5G standards mature, the technology will improve its options for low-power IoT connectivity.
5G will gradually roll out over the next four to five years starting in large cities and specific geographies; 4G technology will remain prevalent for a number of years. Enterprise users will need new devices, dongles and routers to connect to 5G services. For example, Apple iPhones are not expected to support 5G until 2020, and IoT devices will need specific cellular compatibility to connect to 5G.
Doyle Research expects the 1Gbps and higher bandwidth promised by 5G will have a significant impact on the SD-WAN market. 4G LTE already enables cellular services to become a primary WAN link. 5G is likely to be cost competitive or cheaper than many wired WAN options such as MPLS or the internet. 5G gives enterprise WAN managers more options to provide increased bandwidth to their branch sites and remote users potentially displacing MPLS over time.
### The potential for Wi-Fi 6 in enterprises
Wi-Fi is nearly ubiquitous for connecting mobile laptops, tablets and other devices to enterprise networks. Wi-Fi 6 (802.11ax) is the latest version of Wi-Fi and brings the promise of increased speed, low latency, improved aggregate bandwidth and advanced traffic management. While it has some similarities with 5G (both are based on orthogonal frequency division multiple access), Wi-Fi 6 is less prone to interference, requires less power (which prolongs device battery life) and has improved spectral efficiency.
**[[Take this mobile device management course from PluralSight and learn how to secure devices in your company without degrading the user experience.][5] ]**
As is typical for Wi-Fi, early [vendor-specific versions of Wi-Fi 6][6] are currently available from many manufacturers. The Wi-Fi alliance plans for certification of Wi-Fi 6-standard gear in 2020. Most enterprises will upgrade to Wi-Fi 6 along standard access-point life cycles of three years or so unless they have specific performance/latency requirements that prompt an upgrade sooner.
Wi-Fi access points continue to be subject to interference, and it can be challenging to design and site APs to provide appropriate coverage. Enterprise LAN managers will continue to need vendor-supplied tools and partners to configure optimal Wi-Fi coverage for their organizations. Wi-Fi 6 solutions must be integrated with wired campus infrastructure. Wi-Fi suppliers need to do a better job at providing unified network management across wireless and wired solutions in the enterprise.
### Need for wired backhaul
For both technologies, wireless is combined with wired-network infrastructure to deliver high-speed communications end-to-end. In the enterprise, Wi-Fi is typically paired with wired Ethernet switches for campus and larger branches. Some devices are connected via cable to the switch, others via Wi-Fi and laptops may use both methods. Wi-Fi access points are connected via Ethernet inside the enterprise and to the WAN or internet by fiber connections.
The architecture for 5G makes extensive use of fiber optics to connect the distributed radio access network back to the core of the 5G network. Fiber is typically required to provide the high bandwidth needed to connect 5G endpoints to SaaS-based applications, and to provide live video and high-speed internet access. Private 5G networks will also have to meet high-speed wired-connectivity requirements.
### Handoff issues
Enterprise IT managers need to be concerned with handoff challenges as phones switch between 5G and Wi-Fi 6. These issues can affect performance and user satisfaction. Several groups are working towards standards to promote better interoperability between Wi-Fi 6 and 5G. As the architectures of Wi-Fi 6 align with 5G, the experience of moving between cellular and Wi-Fi networks should become more seamless.
### 5G vs Wi-Fi 6 depends on locations, applications and devices
Wi-Fi 6 and 5G are competitive with each other for specific situations in the enterprise environment that depend on location, application and device type. IT managers should carefully evaluate their current and emerging connectivity requirements. Wi-Fi will continue to dominate indoor environments and cellular wins for broad outdoor coverage.
Some of the overlap cases occur in stadiums, hospitality and other large event spaces with many users competing for bandwidth. Government applications, including aspect of smart cities, can be applicable to both Wi-Fi and cellular. Health care facilities have many distributed medical devices and users that need connectivity. Large distributed manufacturing environments share similar characteristics. The emerging IoT deployments are perhaps the most interesting “competitive” environment with many overlapping use cases.
### Recommendations for IT Leaders
While the wireless technologies enabling them are converging, Wi-Fi 6 and 5G are fundamentally distinct networks both of which have their role in enterprise connectivity. Enterprise IT leaders should focus on how Wi-Fi and cellular can complement each other, with Wi-Fi continuing as the in-building technology to connect PCs and laptops, offload phone and tablet data, and for some IoT connectivity.
4G LTE moving to 5G will remain the truly mobile technology for phone and tablet connectivity, an option (via dongle) for PC connections, and increasingly popular for connecting some IoT devices. 5G WAN links will increasingly become standard as a backup for improved SD-WAN reliability and as primary links for remote offices.
Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3402316/when-to-use-5g-when-to-use-wi-fi-6.html
作者:[Lee Doyle][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2017/07/wi-fi_wireless_communication_network_abstract_thinkstock_610127984_1200x800-100730107-large.jpg
[2]: https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html
[3]: https://www.networkworld.com/article/3215907/why-80211ax-is-the-next-big-thing-in-wi-fi.html
[4]: https://www.networkworld.com/article/3330603/5g-versus-4g-how-speed-latency-and-application-support-differ.html
[5]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fmobile-device-management-big-picture
[6]: https://www.networkworld.com/article/3309439/80211ax-preview-access-points-and-routers-that-support-the-wi-fi-6-protocol-on-tap.html
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,85 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (4 questions about AI ethics and how open source can help)
[#]: via: (https://opensource.com/article/20/11/ai-ethics)
[#]: author: (Sahana Sreeram https://opensource.com/users/sahanasreeram01gmailcom)
4 questions about AI ethics and how open source can help
======
Open source resources could provide effective solutions to some key
ethical considerations for artificial intelligence.
![A robot arm illustration][1]
As a high school student, I've become very interested in artificial intelligence (AI), which is emerging as one of the most impactful innovations of recent times. This past summer, I was selected for the [AI4ALL][2] program, where we learned how to develop AI systems using Python.
For my final project, I created an object-detection program and integrated it with a virtual drone simulation. Throughout the project, I was able to use open source frameworks, including [TensorFlow][3], [Keras][4], [Scikit-learn][5], and [PyTorch][6], to aid in developing the object-detection machine learning (ML) algorithm process.
By doing this project and using other open source frameworks like [Linux][7], [Apache Kafka][8], and [ElasticSearch][9], I realized the impact of open source technologies on AI system development. Access to these powerful, flexible technologies enables people to experiment with and develop AI systems.
I discovered another important aspect of AI when I attended a seminar at [William &amp; Mary][10] about AI and ethics. As we discussed some of the ethical concerns surrounding AI systems, I started wondering: How do I make sure that the systems I'm developing are ethical?
To answer my question, I interviewed three experts: [Alon Kaufman][11], PhD, CEO of Duality Technologies; [Iria Giuffrida][12], PhD, a law professor at William &amp; Mary; and an executive in the fintech industry (name kept confidential upon request). I am very grateful to these experts for giving me their valuable time, fueling my curiosity, and inspiring me with their journeys in this field.
### How important of a role does ethics currently play in AI systems?
As more advances are made with AI, society is starting to appreciate the importance of ethical implications. However, the discussions about ethical values in AI system development are still an academic exercise, Dr. Kaufman says. Today, much of AI, and especially ML models, remains a "black box" because advanced algorithms and complex AI technologies (such as deep learning) produce models that are not self-explainable. Without having explainable AI/ML models, it's hard to introduce ethical considerations into the conversation. 
For Dr. Kaufman, the industry's focus today is on data security and protection. As more smart technology is developed, it's getting easier to acquire data. The large amounts of data handled in AI systems make them vulnerable to cyber breaches.
While it is certainly vital to build data protection measures for AI systems, it is becoming critical to shift the focus to other ethical implications, such as privacy and machine bias.
### Is ethical AI development a good goal for the future, or is it wishful thinking?
The short answer is that it is a reasonable goal to create ethical AI systems, but it will take time.
"Ethics" is a difficult concept to define. One approach is to look at ethics through a three-pronged methodology: security, privacy, and fairness. Most of the industry is now focused on security, and a practical next step could be to address the issues of privacy and ensuring data anonymity. "Privacy-enhancing technologies, such as secure computing and [synthetic data][13] could offer assistance in this process," explains Dr. Kaufman. Only after finding solutions to the security and privacy areas of ethics is it reasonable to venture into machine fairness and bias.
There are many factors to consider in the development of unbiased AI. "The first step is the deployment of active-learning models into society, followed by the actual development of [fair systems][14]," says the fintech industry expert. He explains that when ML equipped with active learning is deployed, society's own biases are often imposed on the model, even unknowingly, creating very flawed systems. When developers create AI systems, their preconceived notions must never be injected into the system code, says Dr. Kaufman. These are some of the key areas that must be addressed in developing unbiased models.
### How early should students be exposed to ethical considerations in AI/ML?
It's increasingly important to learn not only about computer science principles but also about how to build ethical models. Professor Giuffrida says computer scientists and AI developers have not explicitly been trained to think about the ethics behind their systems. This is why the development of bias-proof systems cannot be a linear process, she says; they must go through multiple review levels to minimize injecting bias in the machine process.
Professor Giuffrida emphasizes that AI development should be treated as an interdisciplinary study, meaning that it is not just developers who are in charge of creating accurate and ethical systems—that's an enormous responsibility to put on one group. But introducing ethical concepts to aspiring AI engineers at a foundational level could accelerate building sustainable, functional, and ethical systems.
### Will ethics come at the expense of innovation?
This was the main question I kept coming back to throughout the interviews. During the AI4ALL course, we explored the recent [Genderify fiasco][15], which was AI software intended to identify a user's gender based on several data points, including name, email address, and username. When launched for public use, the results were highly inaccurate—for example, the name "Meghan Smith" was assigned female, but "Dr. Meghan Smith" was assigned male. These biases led to the service's failure. More comprehensive ethical reviews and tests conducted before launch could make this type of product successful, at least initially.
All three experts voiced the idea that ethical boundaries are becoming necessary. According to the fintech executive, "Reasonable ethical constraints will not limit innovation—companies must eventually figure out how to innovate within these boundaries." In this fashion, ethical considerations could increase sustainability and success for a product—rather than putting a complete stop to AI development. Venture capitalist Rob Toews offered an interesting perspective on [regulating AI][16], and I recommend this article if you want to learn more about this topic.
Ultimately, introducing ethics into pure AI is a daunting but necessary step to secure AI's viability in our world. Open source resources could provide effective solutions to some key ethical considerations, including comprehensive system testing. For example, the [AI Fairness 360][17] open source toolkit by IBM assesses discrimination and biases in ML models. Open source communities have the ability to develop tools like this to provide greater coverage and support for ethical AI systems. In the meantime, I believe it is my generation's responsibility to build bridges between creating revolutionary AI technologies and considering ethical implications that will stand the test of time.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/11/ai-ethics
作者:[Sahana Sreeram][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/sahanasreeram01gmailcom
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/robot_arm_artificial_ai.png?itok=8CUU3U_7 (A robot arm illustration)
[2]: https://ai-4-all.org/
[3]: https://www.tensorflow.org/
[4]: https://keras.io/
[5]: https://scikit-learn.org/stable/
[6]: https://pytorch.org/
[7]: https://www.linux.org/
[8]: https://kafka.apache.org/
[9]: https://www.elastic.co/
[10]: https://www.wm.edu/
[11]: https://www.linkedin.com/in/alon-kaufman-24067b154/
[12]: https://law2.wm.edu/faculty/bios/fulltime/igiuffrida.php
[13]: https://en.wikipedia.org/wiki/Synthetic_data
[14]: https://en.wikipedia.org/wiki/Fairness_(machine_learning)
[15]: https://www.theverge.com/2020/7/29/21346310/ai-service-gender-verification-identification-genderify
[16]: https://www.forbes.com/sites/robtoews/2020/06/28/here-is-how-the-united-states-should-regulate-artificial-intelligence/?sh=1c242ebc7821
[17]: https://aif360.mybluemix.net/

View File

@ -0,0 +1,87 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Comparing the similarities and differences between inner source and open source)
[#]: via: (https://opensource.com/article/20/11/inner-source)
[#]: author: (Nithya Ruff https://opensource.com/users/nithyaruff)
Comparing the similarities and differences between inner source and open source
======
Open source principles can be applied within a company to access the
benefits of collaborative development. Here's how.
![two women kanban brainstorming and brainmapping with post-it notes on a whiteboard ][1]
Open source software (OSS) has been around since the 1990s and has thrived, quickly growing to become mainstream. It is now more well understood around the world than it has ever been before. Some refer to it as FOSS to highlight the Freedom part of open source (Free and Open Source Software). And in 2014, at OSCON, the term "inner source" was debuted, and people started talking about how to use the principles of open source, but inside of a company. It raised several questions for those unfamiliar with the term, which I hope to answer with this article. For example, what is similar about the two, what is different, the company roles involved in the two, is inner source taking the energy away from open source, etc. These are all fair questions, and as my organization practices both and is involved in both movements, I want to take some time to share insight with this audience as a developer, as a company, and as an open source enthusiast.
There are two principal elements that make open source successful. The first is that it is licensed under OSS licenses that promote and support free distribution of the source with the binaries for people to use any way they want. When GPL (General Public License) was introduced in the 1990s, it was revolutionary—no charge, source included, and no restriction on use. That helped proliferate open source projects like Linux and allowed rapid innovation.
The other particularly essential element is the way open source software grows and develops. It happens through a community of people who are most often not in the same organization or geographic location. They come together because they believe in the solution or mission of the project. Because these people came from different development cultures but needed to work on common projects, they evolved norms for working together, such as how they communicate with each other, how decisions are made, and how to review new submissions. This codification of how disparate teams work without being in the same organization is unique and powerful. This type of development is known as "collaborative development."
Not all communities are effective, but the ones who figure out how to work together often have some common elements:
* Governance: They work out how the project will govern itself—decisions, roles, reviews, meetings, mission, budgets, organization structures, etc.
* Communication: They figure out how to work with people across time zones, across cultures, and organizations. For example, documenting decisions on a mailing list vs. live discussions. Providing people with 24 to 48 hours to review and respond to a question or decision is another good practice established to allow people in different zones and different work situations to collaborate.
* Common tools: Review and communication are imperative to development. Tools such as GitLab and GitHub build in collaborative practices to enable working across teams.
* Architecture: Projects are often architected for maximum engagement into components, API (Application Programming Interface), sub-systems so that not everyone needs to know everything, or they dont change the whole project when they make changes.
Both in response to digital challenges as well as changing customer expectations, enterprises outside of the tech vendor space, in particular, have been transforming how they build products and services as well as how they engage with their customers. And with the 2020 COVID-19 pandemic, many companies and organizations have gone virtual and fast-tracked their digital transformation plans to meet changing customer and market needs. And some accelerated their plans to capture new opportunities and support changes in how customers consumed their services. Teams that relied on physical proximity, hallway conversations, and other elements of development were faced with a need to change how they worked. Many of them realized that they could turn to the collaborative development model of open source for examples of how to work remotely and effectively.
Even before the COVID-19 pandemic, because of early digital transformation plans, companies had already begun adopting and working with open source and had realized the benefits of open innovation. The need to go faster, reduce costs, breakdown silos across the company, and reuse vs. reinvent has created a spur to do more collaborative development or open source-style community development inside the company walls.
The benefits of collaborative development can be applied in many ways:
* Collaboration among remote and geographically distributed organizations
* Creating a more open and collaborative culture inside a company
* Creating communities around common interests and common problems
The traditional way companies develop software inside companies is quite different from the open source collaborative model. Because of how budgets, organization structures, and incentives work, teams work in silos, are top-driven, and have no incentive to share and collaborate across the organization. Teams often report to the same organization, resulting in low documentation of decisions and low leverage of experts across the company.
Many organizations have been experimenting with collaborative development for some time. At Comcast, for example, teams called it enterprise source. But then the term "inner source" was created by Tim OReilly, founder, CEO, and publisher at OReilly publishing in 2001. (This [interview][2] with Tim is a remarkable story of how he came to coin this term). He talks about how the modularity of code and the access to source was a huge driver for collaboration, even more than the license itself. He calls it the "Architecture of Participation." Soon after, in 2015, Danese Cooper, who was running the OSPO (Open Source Program Office) at PayPal at that time, [introduced this term][3] at OSCON. She also shared what PayPal had been doing inside the company to break down silos and to create collaboration. The [InnerSource Commons][4] non-profit foundation was also established to allow people to collaborate on their experiences, patterns, and best practices. Here is a more formal definition of inner source:
> "Inner source is an adaptation of open source practices to code that remains proprietary and can be seen only within a single organization, or a small set of collaborating organizations. Thus, inner source is a historical development drawing on the open source movement along with other trends in software."
>
> —From [_Adopting InnerSource_][5], OReilly Publications
The biggest similarities between open source and inner source involve the collaborative development model. Working across teams inside an organization on common source code is a culture change for a traditional enterprise. Both versions of it need sponsorship and support from the top to thrive inside an organization.
The biggest differences are what motivates one to start an inner source project over an open source project. The often-repeated wisdom of open source communities is that people get into it because they have an "itch to scratch." Often, trying to solve a technical issue is the reason that they start a new project. With inner source, there may be more of an organization-wide reason to start opening their code and collaborating with other teams. One main reason is to enable another team to work more effectively with the platform or to reduce a duplicate effort. Some organizations have chosen a default approach of opening all repositories for anyone in the company to see and use or collaborate on. One huge driver has been to stage it as an inner source project before fully open sourcing it. This allows the team to hone a good collaboration strategy and practice before open sourcing.
I like the model that PayPal used—having inner source and open source be a part of the same organization. There are shared practices, skills, and tools that can be used across the two. At Comcast, both practices live in the Open Source Program Office. The inner source team has built a guild with members of architecture, security, and leadership to develop the practice and governance of our inner source projects. You can listen to Comcast Open Source Program Manager and InnerSource leader Brittany Istenes [talk][6] to hear more about what we did with the guild and scaling inner source back from the InnerSource Commons Fall 2020 Summit.
While some people criticize the inner source movement and say that it has distracted from the open source movement, it has actually accelerated open source collaborative practices inside organizations. While most companies use open source, few were deeply engaged in communities or understood how open collaboration works. The fact that the collaborative development model can be used inside a company to do product or organizational work has led to a deeper understanding and better development practices at Comcast. I see a natural growth from this to more contributions and more engagement with open source communities as teams feel more confident in engaging through their inner source work.
No matter what you call it—inner source, enterprise source, collaborative development—more collaboration inside the company is a particularly good thing. Better development practices, documentation, onboarding, and mentoring are often direct results. If you are interested in more information, check out [innersourcecommons.org][7] for great talks from several companies practicing inner source, including Comcast. I believe that inner source is the next phase of open source development.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/11/inner-source
作者:[Nithya Ruff][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/nithyaruff
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/whiteboard-brainstorming-brainmapping-design-thinking-postits-kanban.png?itok=Is2Tg1Jk (Brainstorming with post-it notes on a whiteboard)
[2]: https://www.youtube.com/watch?v=2HR0D8_tKoA&t=573s
[3]: https://www.youtube.com/watch?v=r4QU1WJn9f8
[4]: https://innersourcecommons.org/
[5]: https://www.oreilly.com/library/view/adopting-innersource/9781492041863/ch01.html
[6]: https://www.youtube.com/watch?v=RenQ8B7aX84
[7]: http://innersourcecommons.org

View File

@ -0,0 +1,90 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why failure should be normalized and how to do it)
[#]: via: (https://opensource.com/article/20/11/normalize-failure)
[#]: author: (Lisa Seelye https://opensource.com/users/lisa)
Why failure should be normalized and how to do it
======
"Everybody is perfect, except you." This toxic thought can creep in and
ruin your confidence. Here's how to normalize failure during the
learning process and remember that everyone makes mistakes.
![failure sign at a party, celebrating failure][1]
All of your heroes have failures under their belts—from minor mistakes to major disasters. Nobody knows how to do everything automatically, and the process of learning is usually a messy one. So why is the perception that everyone but you knows what theyre doing so common? Why do we externalize our successes but internalize our failures?
How does it make you feel when you struggle to learn something new, then see another person take their Jira card away and return at the end of the sprint with something fully fleshed out and working, gushing about it at the demo? Sure, you closed your card too, but it was really hard! There was a new algorithm, a new programming language, a new system all to be learned. How did she make it look so effortless?
The truth is, she might have struggled with the same issues you did and wondered how you made it look so effortless!
### Failure is normal, healthy, and invisible
Whether we call them mistakes, bad assumptions, or some other euphemism, it's hard not to judge ourselves or expect better of ourselves, especially compared to other people.
My background is largely in Linux systems administration, and in this line of work, it is often a matter of "when" and not "if" we will have a production service disruption. Those service disruptions can happen for various reasons—sometimes it's because a person made a mistake. Setting aside the discussion around controls to mitigate human error, we can see plain as day that I, Lisa Seelye, made a mistake that directly caused a production problem.
Whenever a group of sysadmins gets together, we usually end up talking about our work, and inevitably, we get around to stories of production service disruptions that weve been a part of (or caused). Its cathartic to hear how badly other people have messed up and then look around and see that we are all human and making mistakes is part of that.
I feel that this kind of sharing is vital to the success of people in the information technology sector.
### Why should we share?
In addition to sharing our mistakes in order to normalize them, I also believe that it is equally important to share our learning processes—this is both to drive home the idea that we all start somewhere and that learning is often filled with failures and misconceptions.
As an individual, I need to remind myself that it may only appear that my peers return from a week working on a card with a fully fleshed-out solution. Reality may, in fact, be that they dont understand the requirements, the codebase, the language, the algorithms needed, etc. Either way, its a logical fallacy to believe they do not face these challenges because of the appearance of the final product.
### But why should we share?
We should share our learning experiences because we all benefit from hearing about the challenges other people face and how they overcome them. If the Jira card wasnt clear, then we can do better. If the algorithm wasnt clear, then maybe education can be done around it.
Most importantly, we need to normalize that its okay not to know everything, that its okay to still be learning, and to ask for help. Setting an example for new or more junior engineers is important. In our industry, we deal with extremely complex systems that can interact with one another in strange or unexpected ways. In many cases, it is simply not possible for one person to know everything. Being open about our learning processes and our mistakes can lead to tighter bonding.
Do new engineers on your team have the set expectation that its okay to interrupt and ask questions? Saying it on day one is easy, but practicing the value is another thing. How are approachability and openness demonstrated in your team?
### My learning opportunities
It could be very easy to title this section "my mistakes" and then rattle off all the times Ive made mistakes, but that doesnt quite illustrate the point. I recognize these mistakes, but theyre also events that expanded the understanding of my craft. While I didnt set out to intentionally do any of these things, I certainly learned from them.
I have accidentally dropped (deleted) a customers database. It was lucky for everyone that it was a beta-phase database and no further harm was done. I learned a valuable lesson that day: be very watchful of what code is doing, and be careful about what environment you are working in.
One day, while performing routine maintenance with an odd DNS setup, I accidentally broke the ability for customers to provide credit card information to the secure site. We had two "payments" DNS records that served to override a wildcard DNS record, and I assumed that the second "payments" record was still present. It wasnt. And then the wildcard record took over, and the DNS started behaving like "payments" wasnt special at all anymore. Of course, I had no idea this was happening at all—it wasnt until my maintenance was over that I learned of the folly.
Customers weren't able to provide payment information for almost two hours! I learned my lesson, though: when there is something special about a particular configuration, make sure it stays special throughout its lifetime. When DNS gets involved, all kinds of things can break.
Before I started speaking at conferences, I was an attendee, and I watched talks online. Pivoting to speaking myself, I was worried that Id say too many "umms" and "uhs" and that my jokes would fall flat. The speakers I enjoyed over the years seemed to not have those problems at all, while I was unpolished.
But once I got up on stage, I found that my perception had changed. I had practiced my talk with an audience and listened to their feedback—turns out I had a little polish. In front of the audience, I did misspeak and not say something exactly how I wanted, but it didnt matter. What I didnt realize from my vantage point in the audience is that the audience wants the speaker to succeed, and the speaker can shift directions in their talk without the audience knowing.
I certainly have made and continue to make those sounds, and I have even had to correct major factual information the night before a talk, but the audience never knows. The audience sees what I show them and, because they want me to succeed, they forgive my "umms" and "uhs."
I admit that I am not perfect. I ship bugs, and I try to learn from them.
### How to share
Sharing the difficulties we've encountered along the learning process or in our day-to-day work is important, though just as important is how they are shared. I share the things I've learned (the hard way) with frankness and no self-judgment. It is in this spirit that I think we should all share. I'm not a bad person because I've made mistakes, and neither are you.
How, then, should we share? Who is the audience?
In the midst of a production service disruption, like with the DNS payment situation above, theres no room for coyness or hiding anything. The most important thing is to make key stakeholders aware and then rally to fix the situation. The audience is first internal—your team, business leaders, the support team. Next, the audience is outward-facing—the customers. It is wise to involve communications experts when crafting that outward message.
When sharing with a junior engineer, we need to normalize the learning process. People arent born knowing how pointers in C work, so we all need to learn what the pitfalls of pointers are. Its okay to need to learn new skills at any skill level, and its also okay to need to reinforce those skills. The message we send should be free of judgment.
So write a blog post, make a Twitter thread, share frustrations in Slack channels, ask for help. Together we can dismiss the myth that everybody's perfect, except you.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/11/normalize-failure
作者:[Lisa Seelye][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/lisa
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_failure_celebrate.png?itok=LbvDAEZF (failure sign at a party, celebrating failure)

View File

@ -0,0 +1,106 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Openness is the key to innovation, history shows)
[#]: via: (https://opensource.com/open-organization/20/12/open-innovation-history)
[#]: author: (Ron McFarland https://opensource.com/users/ron-mcfarland)
Openness is the key to innovation, history shows
======
History demonstrates the kinds of organizational structures that foster
innovation: open, communal, and humble. Here are just two examples.
![Team checklist][1]
In the [first part of this article series][2]—an extended review of the book [How Innovation Works][3] by [Matt Ridley][4]—I examined Ridley's characterization of innovation: it's gradual, incremental, and collective, and involves extensive collaboration between parties. This, I argued, is why [open organization principles][5] are so important and play a major role in fostering innovation.
In [part two of the series][6], I reviewed Ridley's assessment of the environments where innovation and discovery thrive, and I demonstrated some essential characteristics of the innovation process.
Now, in this concluding part of my review, I bring these ideas alive by recounting case studies of innovative discoveries throughout history.
### Innovation throughout history
In _How Innovation Works_, Ridley mentions that innovation is mostly a "bottom-up phenomenon." For example, throughout the 19th century, as Britain and Europe developed new railways, the steel industry, new applications for electricity, the textile industry, and many other breakthroughs, governments weren't involved. Only after the fact did governments play the role of regulator and standards creator. Sometimes, governments even played the role of the customer or user.
The same was true in America in the early 20th century. There were few public subsidies for research and development before 1940. And after World War II, the Japanese miracle was a function of private corporations backed by a vast ecosystem of small enterprises. Most efforts resembled open organizations—a modest group of dedicated hands-on, ground floor members. That's where most innovation originated, on the ground floor according to open organization principles.
By contrast, the Soviet Union was a very clear case of a public governmental entrepreneurial state; it funded a great deal of research centrally, and allowed virtually no involvement from private enterprise. Innovation in transportation, food production, health care, and consumer products suffered as a result. (In this environment open organization principles were not in use.)
Innovations come from humble places, Ridley argues, and large, bureaucratic corporations were not particularly good at developing innovative products. Instead, small, loosely assembled communities (open organizations with front-line teams) have been more innovative throughout history. They have been far more capable of exploring new concepts, particularly if they have a wide base of contributions to work with.
Let me review two historical examples of this, drawn from Ridley's work (one brief, one lengthier).
### Lighting every house (the light bulb)
Who invented the light bulb? Easy answer: Thomas Edison.
But at least a dozen other people were working on similar technologies around the same time, even before Edison, as Ridley documents in detail in _How Innovation Works_. Edison, however, [gets recognition for the invention][2].
Through some 50,000 trial-and-error experiments, a full complement of staff developed a usable, affordable light bulb for interior and room lighting (Edison's team tested roughly 6,000 plant materials until it found the right kind of bamboo for the bulb's filament). The invention made both oil lamps and gas lamps things of the past. Perhaps better than anyone else at the time, Edison knew that innovation was largely a team effort. He hired groups of skilled craftsmen and scientists (totalling 200 people in all) and worked them incredibly hard. Their collaboration, iterative and adaptive testing, and sharing eventually led to the registration of more than 400 patents in six years. Edison supported his innovative teams extensively, stuffing his workshops with every kind of material, tool, and book. His goal was not simply to inspire invention; it was to turn ideas into practical, reliable, and affordable items—the result of extensive application of open organization principles.
### Two approaches to flight (the powered aircraft)
Now let me offer a more detailed example, one focused on the innovation of powered aircraft. To better illustrate the advantage of an open organizational approach, I'll juxtapose two approaches to this innovation.
#### Samuel Langley's approach
In 1903, the East Coast of the United States saw the first airplane flight by Samuel Langley. It impressed the US government, which expressed a desire to invest in the development of powered flight (powered flight, of course, would allow people to travel longer distances).
Innovations come from humble places.
Initially, the American government supported Langley's experiments with an investment of $50,000. Later, it offered another US$20,000. Langley had convinced the government he could build a powered plane that would stay in the air longer than a simple glider (Langley was a professor from New England, an astronomer, and a well-connected head of the Smithsonian Institution in Washington, DC, so his reputation would have seemed to support his claims).
Because of his confidence, he decided to _keep all his research and experiments confidential_. He could not have been more closed in his approach to research and experimentation.
This went on for seven years, until a test plane was finally ready for presentation to the government. Langley was not a hands-on experimenter and was not willing to pilot his prototype plane himself (he commissioned Charles Manly). The plane crashed after flying just a short distance, to everyone's disappointment. Langley's reputation never recovered. The fiasco led to the US government pulling all funding for the project, and it gave up on powered flight after a decade of wasting money.
From the perspective of someone thinking with open organization principles, Langley would seem to have done everything wrong. He spent lots of money, depended on a single authority (the government), consulted with very few outside people, and built the full aircraft from scratch instead of building and testing components in an incremental way, by experimenting with different designs, shapes, sizes, and weights. He built _no community_ to work on the project. He _included_ only the minimum necessary outsiders. He was _not transparent_ about his activities or concerns. He did not establish step-by-step milestones in which he could make _adaptations_ if need be. And he _collaborated_ with as few people as possible.
#### The Wright brothers' approach
But just south of Langley's failure, in an area called [Kitty Hawk][7], with almost nobody watching, two brothers from Ohio were busy achieving innovation in powered flight, and by spending only a fraction of what Langley did. They were Orville and Wilbur Wright.
The Wright brothers' initial test flight lasted 12 seconds and traveled 40 yards (37 meters). Their second flight (later that day) lasted almost one minute and covered more than 800 feet (244 meters). Just five people were present to observe these successful powered flights.
Thinking again with open organization principles, we might say the Wright brothers did everything right.
Innovation happens when someone creates a community while working on a problem. Through concurrent community collaboration, that group might make discoveries that lead to an innovation.
These two brothers were not well-known researchers (or even engineers for that matter). They simply were experienced bicycle builders and had a small shop. They were diligent craftsmen and loved building things. They simply wanted to address the challenges and problems of powered flight through building an aircraft. Fueled by those passions, they started building a development team (_community_ building), which included others in the project to bring in valued experience. They _included_ key outside people. They were very _transparent_ regarding their activities and concerns (more on this in a moment). They established step-by-step milestones in which they could make _adaptations_ if need be.
Moreover, they _collaborated_ with as many people as they needed to, gathering expertise where they could find it. First they wrote to Otto Lilienthal in Frankfurt, Germany, a famous designer of gliders. They also approached a rather eccentric French-American in Chicago by the name of Octave Chanute, who studied the powered flight problems to overcome and published papers on them. These are examples of inclusivity at its best—approaching people far different from themselves to achieve something. Thorough documentation of the Wright brothers' collaboration with Chanute (and his extensive network of like-minded thinkers) helped solidify their basic design for a powered aircraft. And just by luck, someone the Wright brothers hired to work in the bicycle shop was an extremely good mechanic and loved designing machines. His name was Charlie Taylor. The internal combustion engine was just being developed at that time, and the Wright brothers knew that the motors on the market were too heavy for an aircraft, so Charlie built one from scratch, using aluminium (a much lighter material).
This more open approach to innovating in the field of powered aircraft is the one that eventually gave rise to the inventions (and industry) we enjoy today and brought the Wright brothers fame.
### History lessons
Think about both those examples, two case studies of innovation through open organization principles. Both indicate that, for _any_ innovation, quite a few people are required (not lone individuals). In both cases, inventors needed to "cross-pollinate" ideas in two ways:
1. concurrently, working together directly, and
2. learning sequentially over time.
Our lessons from history illustrate these approaches to innovation. Innovation happens when someone creates a _community_ while working on a problem. Through concurrent community collaboration, that group might make discoveries that lead to an innovation. Sometimes, however, those communities fail. But because they _document_ their work as they collaborate, another person (and community) can replicate what they've done and improve upon it (sequential innovation)—perhaps succeeding where the first group did not.
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/20/12/open-innovation-history
作者:[Ron McFarland][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ron-mcfarland
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_todo_clock_time_team.png?itok=1z528Q0y (Team checklist)
[2]: https://opensource.com/open-organization/20/10/best-ideas-recognition-innovation
[3]: https://www.goodreads.com/en/book/show/45449488-how-innovation-works
[4]: https://en.wikipedia.org/wiki/Matt_Ridley
[5]: https://theopenorganization.org/definition/
[6]: https://opensource.com/open-organization/20/11/environments-for-innovation
[7]: https://www.google.com/search?rlz=1CAKMJF_enJP874&sxsrf=ALeKk02B7m1kL1nXrRT2cH4-zyr8VV_DDw:1597632027500&ei=u-05X5byJ8Ko-QaRt6HoDQ&q=kitty%20hawk%20museum&oq=kitty+hawk+mus&gs_lcp=CgZwc3ktYWIQARgAMgIIADICCAAyAggAMgIIADICCAAyAggAMgIIADoECAAQRzoHCC4QJxCTAjoHCAAQFBCHAjoICC4QxwEQrwE6BAgAEENQ04sBWIqeAWCvtAFoAXABeACAAY4BiAHEBZIBAzEuNZgBAKABAaoBB2d3cy13aXrAAQE&sclient=psy-ab&npsic=0&rflfq=1&rlha=0&rllag=36111118,-75775286,9090&tbm=lcl&rldimm=17550871878646241173&lqi=ChFraXR0eSBoYXdrIG11c2V1bVobCgZtdXNldW0iEWtpdHR5IGhhd2sgbXVzZXVt&phdesc=B2b85nIlVwQ&ved=2ahUKEwjz-uuLm6HrAhVLFogKHXjGD7YQvS4wAHoECAoQLQ&rldoc=1&tbs=lrf:!1m4!1u3!2m2!3m1!1e1!1m4!1u2!2m2!2m1!1e1!1m4!1u16!2m2!16m1!1e1!1m4!1u16!2m2!16m1!1e2!1m5!1u15!2m2!15m1!1shas_1wheelchair_1accessible_1entrance!4e2!2m1!1e2!2m1!1e16!2m1!1e3!3sIAE,lf:1,lf_ui:1&rlst=f#rlfi=hd:;si:17259280498191889741,l,ChFraXR0eSBoYXdrIG11c2V1bVobCgZtdXNldW0iEWtpdHR5IGhhd2sgbXVzZXVt,y,tBWxg7TmYyk;mv:%5B%5B41.993476711913615,-73.13628440817928%5D,%5B33.753275807154516,-89.35208728632068%5D,null,%5B37.98892655631248,-81.24418584724998%5D,6%5D

View File

@ -1,514 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (chenmu-kk)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Linux Desktop Setup · HookRace Blog)
[#]: via: (https://hookrace.net/blog/linux-desktop-setup/)
[#]: author: (Dennis Felsing http://felsin9.de/nnis/)
Linux Desktop Setup
======
My software setup has been surprisingly constant over the last decade, after a few years of experimentation since I initially switched to Linux in 2006. It might be interesting to look back in another 10 years and see what changed. A quick overview of whats running as Im writing this post:
[![htop overview][1]][2]
### Motivation
My software priorities are, in no specific order:
  * Programs should run on my local system so that I'm in control of them; this excludes cloud solutions.
* Programs should run in the terminal, so that they can be used consistently from anywhere, including weak computers or a phone.
  * Keyboard focus is nearly automatic when using terminal software. I prefer to use the mouse only where it makes sense; reaching for the mouse all the time while typing feels like a waste of time. Occasionally it took me an hour to notice that the mouse wasn't even plugged in.
  * Ideally I use fast and efficient software; I don't like hearing the fan and feeling the room heat up. I can also keep running older hardware for much longer: my 10-year-old Thinkpad x200s is still fine for all the software I use.
  * Be composable. I don't want to do every step manually; instead I automate more when it makes sense. This naturally favors the shell.
### Operating Systems
I had a hard start with Linux 12 years ago by removing Windows, armed with just the [Gentoo Linux][3] installation CD and a printed manual to get a functioning Linux system. It took me a few days of compiling and tinkering, but in the end I felt like I had learnt a lot.
I havent looked back to Windows since then, but I switched to [Arch Linux][4] on my laptop after having the fan fail from the constant compilation stress. Later I also switched all my other computers and private servers to Arch Linux. As a rolling release distribution you get package upgrades all the time, but the most important breakages are nicely reported in the [Arch Linux News][5].
One annoyance though is that Arch Linux removes the old kernel modules once you upgrade the kernel. I usually notice this when I try plugging in a USB flash drive and the kernel fails to load the relevant module. Instead you're supposed to reboot after each kernel upgrade. There are a few [hacks][6] to get around the problem, but I haven't been bothered enough to actually use them.
Similar problems happen with other programs, commonly Firefox, cron or Samba requiring a restart after an upgrade, but annoyingly not warning you that thats the case. [SUSE][7], which I use at work, nicely warns about such cases.
For the [DDNet][8] production servers I prefer [Debian][9] over Arch Linux, so that I have a lower chance of breakage on each upgrade. For my firewall and router I used [OpenBSD][10] for its clean system, documentation and great [pf firewall][11], but right now I dont have a need for a separate router anymore.
### Window Manager
Since I started out with Gentoo I quickly noticed the huge compile time of KDE, which made it a no-go for me. I looked around for more minimal solutions, and used [Openbox][12] and [Fluxbox][13] initially. At some point I jumped on the tiling window manager train in order to be more keyboard-focused and picked up [dwm][14] and [awesome][15] close to their initial releases.
In the end I settled on [xmonad][16] thanks to its flexibility, extendability and being written and configured in pure [Haskell][17], a great functional programming language. One example of this is that at home I run a single 40” 4K screen, but often split it up into four virtual screens, each displaying a workspace on which my windows are automatically arranged. Of course xmonad has a [module][18] for that.
[dzen][19] and [conky][20] function as a simple enough status bar for me. My entire conky config looks like this:
```
out_to_console yes
update_interval 1
total_run_times 0
TEXT
${downspeed eth0} ${upspeed eth0} | $cpu% ${loadavg 1} ${loadavg 2} ${loadavg 3} $mem/$memmax | ${time %F %T}
```
And gets piped straight into dzen2 with `conky | dzen2 -fn '-xos4-terminus-medium-r-normal-*-12-*-*-*-*-*-*-*' -bg '#000000' -fg '#ffffff' -p -e '' -x 1000 -w 920 -xs 1 -ta r`.
One important feature for me is to make the terminal emit a beep sound once a job is done. This is done simply by adding a `\a` character to the `PR_TITLEBAR` variable in zsh, which is shown whenever a job is done. Of course I disable the actual beep sound by blacklisting the `pcspkr` kernel module with `echo "blacklist pcspkr" > /etc/modprobe.d/nobeep.conf`. Instead the sound gets turned into an urgency by urxvts `URxvt.urgentOnBell: true` setting. Then xmonad has an urgency hook to capture this and I can automatically focus the currently urgent window with a key combination. In dzen I get the urgent windowspaces displayed with a nice and bright `#ff0000`.
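The full prompt setup isn't reproduced in the post, but a minimal zsh sketch of the same idea (assuming urxvt with the `URxvt.urgentOnBell` setting mentioned above) could look like this:
```
# Minimal sketch, not the author's exact config: set the terminal title from a
# zsh precmd hook and ring the bell so urxvt raises its urgency hint.
precmd() {
    print -Pn '\e]0;%n@%m: %~\a'   # OSC title sequence, terminated by BEL
    print -n '\a'                  # extra BEL triggers URxvt.urgentOnBell
}
```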
The final result in all its glory on my Laptop:
[![Laptop screenshot][21]][22]
I hear that [i3][23] has become quite popular in the last years, but it requires more manual window alignment instead of specifying automated methods to do it.
I realize that there are also terminal multiplexers like [tmux][24], but I still require a few graphical applications, so in the end I never used them productively.
### Terminal Persistency
In order to keep terminals alive I use [dtach][25], which is just the detach feature of screen. In order to make every terminal on my computer detachable I wrote a [small wrapper script][26]. This means that even if I had to restart my X server I could keep all my terminals running just fine, both local and remote.
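The linked wrapper is only a few lines; a hypothetical minimal version of the same idea might look like this (the session directory and default name are made up for illustration):
```
#!/bin/sh
# Attach to (or create) a named dtach session so the shell survives X restarts.
name=${1:-main}
mkdir -p "$HOME/.dtach"
exec dtach -A "$HOME/.dtach/$name" -z "${SHELL:-/bin/sh}"
```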
### Shell & Programming
Instead of [bash][27] I use [zsh][28] as my shell for its huge number of features.
As a terminal emulator I found [urxvt][29] to be simple enough, to support Unicode and 256 colors, and to have great performance. Another great feature is being able to run the urxvt client and daemon separately, so that even a large number of terminals barely takes up any memory (except for the scrollback buffer).
There is only one font that looks absolutely clean and perfect to me: [Terminus][30]. Since it is a bitmap font, everything is pixel perfect and renders extremely fast at low CPU usage. In order to switch fonts on demand in each terminal with `CTRL-WIN-[1-7]`, my `~/.Xdefaults` contains:
```
URxvt.font: -xos4-terminus-medium-r-normal-*-14-*-*-*-*-*-*-*
dzen2.font: -xos4-terminus-medium-r-normal-*-14-*-*-*-*-*-*-*
URxvt.keysym.C-M-1: command:\033]50;-xos4-terminus-medium-r-normal-*-12-*-*-*-*-*-*-*\007
URxvt.keysym.C-M-2: command:\033]50;-xos4-terminus-medium-r-normal-*-14-*-*-*-*-*-*-*\007
URxvt.keysym.C-M-3: command:\033]50;-xos4-terminus-medium-r-normal-*-18-*-*-*-*-*-*-*\007
URxvt.keysym.C-M-4: command:\033]50;-xos4-terminus-medium-r-normal-*-22-*-*-*-*-*-*-*\007
URxvt.keysym.C-M-5: command:\033]50;-xos4-terminus-medium-r-normal-*-24-*-*-*-*-*-*-*\007
URxvt.keysym.C-M-6: command:\033]50;-xos4-terminus-medium-r-normal-*-28-*-*-*-*-*-*-*\007
URxvt.keysym.C-M-7: command:\033]50;-xos4-terminus-medium-r-normal-*-32-*-*-*-*-*-*-*\007
URxvt.keysym.C-M-n: command:\033]10;#ffffff\007\033]11;#000000\007\033]12;#ffffff\007\033]706;#00ffff\007\033]707;#ffff00\007
URxvt.keysym.C-M-b: command:\033]10;#000000\007\033]11;#ffffff\007\033]12;#000000\007\033]706;#0000ff\007\033]707;#ff0000\007
```
For programming and writing I use [Vim][31] with syntax highlighting and [ctags][32] for indexing, as well as a few terminal windows with grep, sed and the other usual suspects for search and manipulation. This is probably not at the same level of comfort as an IDE, but allows me more automation.
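As a generic example (not taken from the post), indexing a project and jumping around it from Vim looks roughly like this:
```
ctags -R .      # build a tags file for the whole project
vim -t main     # open Vim directly at the definition of "main"
# inside Vim: Ctrl-] jumps to the definition under the cursor, Ctrl-T jumps back
```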
One problem with Vim is that you get so used to its key mappings that youll want to use them everywhere.
[Python][33] and [Nim][34] do well as scripting languages where the shell is not powerful enough.
### System Monitoring
[htop][35] (look at the background of that site, its a live view of the server thats hosting it) works great for getting a quick overview of what the software is currently doing. [lm_sensors][36] allows monitoring the hardware temperatures, fans and voltages. [powertop][37] is a great little tool by Intel to find power savings. [ncdu][38] lets you analyze disk usage interactively.
[nmap][39], iptraf-ng, [tcpdump][40] and [Wireshark][41] are essential tools for analyzing network problems.
There are of course many more great tools.
### Mails & Synchronization
On my home server I have a [fetchmail][42] daemon running for each email account that I have. Fetchmail just retrieves the incoming emails and invokes [procmail][43]:
```
#!/bin/sh
for i in /home/deen/.fetchmail/*; do
FETCHMAILHOME=$i /usr/bin/fetchmail -m 'procmail -d %T' -d 60
done
```
The configuration is as simple as it could be and waits for the server to inform us of fresh emails:
```
poll imap.1und1.de protocol imap timeout 120 user "dennis@felsin9.de" password "XXX" folders INBOX keep ssl idle
```
My `.procmailrc` config contains a few rules to backup all mails and sort them into the correct directories, for example based on the mailing list id or from field in the mail header:
```
MAILDIR=/home/deen/shared/Maildir
LOGFILE=$HOME/.procmaillog
LOGABSTRACT=no
VERBOSE=off
FORMAIL=/usr/bin/formail
NL="
"
:0wc
* ! ? test -d /media/mailarchive/`date +%Y`
| mkdir -p /media/mailarchive/`date +%Y`
# Make backups of all mail received in format YYYY/YYYY-MM
:0c
/media/mailarchive/`date +%Y`/`date +%Y-%m`
:0
* ^From: .*(.*@.*.kit.edu|.*@.*.uka.de|.*@.*.uni-karlsruhe.de)
$MAILDIR/.uni/
:0
* ^list-Id:.*lists.kit.edu
$MAILDIR/.uni-ml/
[...]
```
To send emails I use [msmtp][44], which is also great to configure:
```
account default
host smtp.1und1.de
tls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
auth on
from dennis@felsin9.de
user dennis@felsin9.de
password XXX
[...]
```
But so far the emails are still on the server. My documents are all stored in a directory that I synchronize between all computers using [Unison][45]. Think of Unison as a bidirectional interactive [rsync][46]. My emails are part of this documents directory and thus they end up on my desktop computers.
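A Unison profile for such a two-way sync could look roughly like this (the profile name and paths are placeholders, not the author's actual setup):
```
# ~/.unison/documents.prf
root = /home/user/documents
root = ssh://homeserver//home/user/documents
batch = true     # apply non-conflicting changes without asking
```
Running `unison documents` then reconciles the two roots in both directions.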
This also means that while the emails reach my server immediately, I only fetch them on demand instead of getting instant notifications when an email comes in.
From there I read the mails with [mutt][47], using the sidebar plugin to display my mail directories. The `/etc/mailcap` file is essential to display non-plaintext mails containing HTML, Word or PDF:
```
text/html;w3m -I %{charset} -T text/html; copiousoutput
application/msword; antiword %s; copiousoutput
application/pdf; pdftotext -layout /dev/stdin -; copiousoutput
```
### News & Communication
[Newsboat][48] is a nice little RSS/Atom feed reader in the terminal. I have it running on the server in a `tach` session with about 150 feeds. Filtering feeds locally is also possible, for example:
```
ignore-article "https://forum.ddnet.tw/feed.php" "title =~ \"Map Testing •\" or title =~ \"Old maps •\" or title =~ \"Map Bugs •\" or title =~ \"Archive •\" or title =~ \"Waiting for mapper •\" or title =~ \"Other mods •\" or title =~ \"Fixes •\""
```
I use [Irssi][49] the same way for communication via IRC.
### Calendar
[remind][50] is a calendar that can be used from the command line. Setting new reminders is done by editing the `rem` files:
```
# One time events
REM 2019-01-20 +90 Flight to China %b
# Recurring Holidays
REM 1 May +90 Holiday "Tag der Arbeit" %b
REM [trigger(easterdate(year(today()))-2)] +90 Holiday "Karfreitag" %b
# Time Change
REM Nov Sunday 1 --7 +90 Time Change (03:00 -> 02:00) %b
REM Apr Sunday 1 --7 +90 Time Change (02:00 -> 03:00) %b
# Birthdays
FSET birthday(x) "'s " + ord(year(trigdate())-x) + " birthday is %b"
REM 16 Apr +90 MSG Andreas[birthday(1994)]
# Sun
SET $LatDeg 49
SET $LatMin 19
SET $LatSec 49
SET $LongDeg -8
SET $LongMin -40
SET $LongSec -24
MSG Sun from [sunrise(trigdate())] to [sunset(trigdate())]
[...]
```
Unfortunately there is no Chinese Lunar calendar function in remind yet, so Chinese holidays cant be calculated easily.
I use two aliases for remind:
```
rem -m -b1 -q -g
```
to see a list of the next events in chronological order and
```
rem -m -b1 -q -cuc12 -w$(($(tput cols)+1)) | sed -e "s/\f//g" | less
```
to show a calendar fitting just the width of my terminal:
![remcal][51]
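In zsh these two invocations can simply be wrapped in aliases, for example (the alias names here are made up):
```
alias agenda='rem -m -b1 -q -g'
alias remcal='rem -m -b1 -q -cuc12 -w$(($(tput cols)+1)) | sed -e "s/\f//g" | less'
```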
### Dictionary
[rdictcc][52] is a little known dictionary tool that uses the excellent dictionary files from [dict.cc][53] and turns them into a local database:
```
$ rdictcc rasch
====================[ A => B ]====================
rasch:
- apace
- brisk [speedy]
- cursory
- in a timely manner
- quick
- quickly
- rapid
- rapidly
- sharpish [Br.] [coll.]
- speedily
- speedy
- swift
- swiftly
rasch [gehen]:
- smartly [quickly]
Rasch {n} [Zittergras-Segge]:
- Alpine grass [Carex brizoides]
- quaking grass sedge [Carex brizoides]
Rasch {m} [regional] [Putzrasch]:
- scouring pad
====================[ B => A ]====================
Rasch model:
- Rasch-Modell {n}
```
### Writing and Reading
I have a simple todo file containing my tasks, that is basically always sitting open in a Vim session. For work I also use the todo file as a “done” file so that I can later check what tasks I finished on each day.
For writing documents, letters and presentations I use [LaTeX][54] for its superior typesetting. A simple letter in German format can be set like this for example:
```
\documentclass[paper = a4, fromalign = right]{scrlttr2}
\usepackage{german}
\usepackage{eurosym}
\usepackage[utf8]{inputenc}
\setlength{\parskip}{6pt}
\setlength{\parindent}{0pt}
\setkomavar{fromname}{Dennis Felsing}
\setkomavar{fromaddress}{Meine Str. 1\\69181 Leimen}
\setkomavar{subject}{Titel}
\setkomavar*{enclseparator}{Anlagen}
\makeatletter
\@setplength{refvpos}{89mm}
\makeatother
\begin{document}
\begin{letter} {Herr Soundso\\Deine Str. 2\\69121 Heidelberg}
\opening{Sehr geehrter Herr Soundso,}
Sie haben bei mir seit dem Bla Bla Bla.
Ich fordere Sie hiermit zu Bla Bla Bla auf.
\closing{Mit freundlichen Grüßen}
\end{letter}
\end{document}
```
Further example documents and presentations can be found over at [my private site][55].
To read PDFs [Zathura][56] is fast, has Vim-like controls and even supports two different PDF backends: Poppler and MuPDF. [Evince][57] on the other hand is more full-featured for the cases where I encounter documents that Zathura doesnt like.
### Graphical Editing
[GIMP][58] and [Inkscape][59] are easy choices for photo editing and interactive vector graphics respectively.
In some cases [Imagemagick][60] is good enough though and can be used straight from the command line and thus automated to edit images. Similarly [Graphviz][61] and [TikZ][62] can be used to draw graphs and other diagrams.
### Web Browsing
As a web browser Ive always used [Firefox][63] for its extensibility and low resource usage compared to Chrome.
Unfortunately the [Pentadactyl][64] extension development stopped after Firefox switched to Chrome-style extensions entirely, so I dont have satisfying Vim-like controls in my browser anymore.
### Media Players
[mpv][65] with hardware decoding allows watching videos at 5% CPU load using the `vo=gpu` and `hwdec=vaapi` config settings. `audio-channels=2` in mpv seems to give me clearer downmixing to my stereo speakers / headphones than what PulseAudio does by default. A great little feature is exiting with `Shift-Q` instead of just `Q` to save the playback location. When watching with someone with another native tongue you can use `--secondary-sid=` to show two subtitles at once: the primary at the bottom, the secondary at the top of the screen.
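Put into a config file, the settings mentioned above amount to something like this `~/.config/mpv/mpv.conf`:
```
# Hardware-accelerated video output and decoding, stereo downmix
vo=gpu
hwdec=vaapi
audio-channels=2
```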
My wireless mouse can easily be made into a remote control for mpv with a small `~/.config/mpv/input.conf`:
```
MOUSE_BTN5 run "mixer" "pcm" "-2"
MOUSE_BTN6 run "mixer" "pcm" "+2"
MOUSE_BTN1 cycle sub-visibility
MOUSE_BTN7 add chapter -1
MOUSE_BTN8 add chapter 1
```
[youtube-dl][66] works great for watching videos hosted online; the best quality can be achieved with `-f bestvideo+bestaudio/best --all-subs --embed-subs`.
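A full invocation with those flags would look like this (the URL is just a placeholder):
```
youtube-dl -f bestvideo+bestaudio/best --all-subs --embed-subs 'https://example.com/some-video'
```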
As a music player [MOC][67] hasnt been actively developed for a while, but its still a simple player that plays every format conceivable, including the strangest Chiptune formats. In the AUR there is a [patch][68] adding PulseAudio support as well. Even with the CPU clocked down to 800 MHz MOC barely uses 1-2% of a single CPU core.
![moc][69]
My music collection sits on my home server so that I can access it from anywhere. It is mounted using [SSHFS][70] and an automount entry in `/etc/fstab`:
```
root@server:/media/media /mnt/media fuse.sshfs noauto,x-systemd.automount,idmap=user,IdentityFile=/root/.ssh/id_rsa,allow_other,reconnect 0 0
```
### Cross-Platform Building
Linux is great for building packages for any major operating system except Linux itself! In the beginning I used [QEMU][71] with an old Debian, Windows and Mac OS X VM to build for these platforms.
Nowadays I have switched to using a chroot for the old Debian distribution (for maximum Linux compatibility), [MinGW][72] to cross-compile for Windows and [OSXCross][73] to cross-compile for Mac OS X.
The script used to [build DDNet][74] as well as the [instructions for updating library builds][75] are based on this.
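The build script itself is linked above; as a generic illustration of the MinGW part, cross-compiling a single C file for 64-bit Windows from a Linux host looks like this:
```
# Produces a Windows executable on a Linux host (mingw-w64 toolchain)
x86_64-w64-mingw32-gcc -O2 -o hello.exe hello.c
```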
### Backups
As usual, we nearly forgot about backups. Even if this is the last chapter, it should not be an afterthought.
I wrote [rrb][76] (reverse rsync backup) 10 years ago to wrap rsync so that I only need to give the backup server root SSH rights to the computers that it is backing up. Surprisingly rrb needed 0 changes in the last 10 years, even though I kept using it the entire time.
The backups are stored straight on the filesystem. Incremental backups are implemented using hard links (`--link-dest`). A simple [config][77] defines how long backups are kept, which defaults to:
```
KEEP_RULES=( \
7 7 \ # One backup a day for the last 7 days
31 8 \ # 8 more backups for the last month
365 11 \ # 11 more backups for the last year
1825 4 \ # 4 more backups for the last 5 years
)
```
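The hard-link trick behind those incremental backups can be sketched with plain rsync like this (paths and host names are placeholders, not rrb's actual invocation):
```
# Unchanged files become hard links into yesterday's snapshot,
# so each new snapshot only stores what changed.
rsync -a --delete \
    --link-dest=/backup/client/2020-12-11 \
    root@client:/home/ \
    /backup/client/2020-12-12/
```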
Since some of my computers don't have a static IP / DNS entry and I still want to back them up using rrb, I use a reverse SSH tunnel (as a systemd service) for them:
```
[Unit]
Description=Reverse SSH Tunnel
After=network.target
[Service]
ExecStart=/usr/bin/ssh -N -R 27276:localhost:22 -o "ExitOnForwardFailure yes" server
KillMode=process
Restart=always
[Install]
WantedBy=multi-user.target
```
Now the server can reach the client through `ssh -p 27276 localhost` while the tunnel is running to perform the backup, or in `.ssh/config` format:
```
Host cr-remote
HostName localhost
Port 27276
```
While talking about SSH hacks, sometimes a server is not easily reachable thanks to some bad routing. In that case you can route the SSH connection through another server to get better routing, in this case going through the USA to reach my Chinese server which had not been reliably reachable from Germany for a few weeks:
```
Host chn.ddnet.tw
ProxyCommand ssh -q usa.ddnet.tw nc -q0 chn.ddnet.tw 22
Port 22
```
### Final Remarks
Thanks for reading my random collection of tools. I probably forgot many programs that I use so naturally every day that I dont even think about them anymore. Lets see how stable my software setup stays in the next years. If you have any questions, feel free to get in touch with me at [dennis@felsin9.de][78].
Comments on [Hacker News][79].
--------------------------------------------------------------------------------
via: https://hookrace.net/blog/linux-desktop-setup/
作者:[Dennis Felsing][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://felsin9.de/nnis/
[b]: https://github.com/lujun9972
[1]: https://hookrace.net/public/linux-desktop/htop_small.png
[2]: https://hookrace.net/public/linux-desktop/htop.png
[3]: https://gentoo.org/
[4]: https://www.archlinux.org/
[5]: https://www.archlinux.org/news/
[6]: https://www.reddit.com/r/archlinux/comments/4zrsc3/keep_your_system_fully_functional_after_a_kernel/
[7]: https://www.suse.com/
[8]: https://ddnet.tw/
[9]: https://www.debian.org/
[10]: https://www.openbsd.org/
[11]: https://www.openbsd.org/faq/pf/
[12]: http://openbox.org/wiki/Main_Page
[13]: http://fluxbox.org/
[14]: https://dwm.suckless.org/
[15]: https://awesomewm.org/
[16]: https://xmonad.org/
[17]: https://www.haskell.org/
[18]: http://hackage.haskell.org/package/xmonad-contrib-0.15/docs/XMonad-Layout-LayoutScreens.html
[19]: http://robm.github.io/dzen/
[20]: https://github.com/brndnmtthws/conky
[21]: https://hookrace.net/public/linux-desktop/laptop_small.png
[22]: https://hookrace.net/public/linux-desktop/laptop.png
[23]: https://i3wm.org/
[24]: https://github.com/tmux/tmux/wiki
[25]: http://dtach.sourceforge.net/
[26]: https://github.com/def-/tach/blob/master/tach
[27]: https://www.gnu.org/software/bash/
[28]: http://www.zsh.org/
[29]: http://software.schmorp.de/pkg/rxvt-unicode.html
[30]: http://terminus-font.sourceforge.net/
[31]: https://www.vim.org/
[32]: http://ctags.sourceforge.net/
[33]: https://www.python.org/
[34]: https://nim-lang.org/
[35]: https://hisham.hm/htop/
[36]: http://lm-sensors.org/
[37]: https://01.org/powertop/
[38]: https://dev.yorhel.nl/ncdu
[39]: https://nmap.org/
[40]: https://www.tcpdump.org/
[41]: https://www.wireshark.org/
[42]: http://www.fetchmail.info/
[43]: http://www.procmail.org/
[44]: https://marlam.de/msmtp/
[45]: https://www.cis.upenn.edu/~bcpierce/unison/
[46]: https://rsync.samba.org/
[47]: http://www.mutt.org/
[48]: https://newsboat.org/
[49]: https://irssi.org/
[50]: https://www.roaringpenguin.com/products/remind
[51]: https://hookrace.net/public/linux-desktop/remcal.png
[52]: https://github.com/tsdh/rdictcc
[53]: https://www.dict.cc/
[54]: https://www.latex-project.org/
[55]: http://felsin9.de/nnis/research/
[56]: https://pwmt.org/projects/zathura/
[57]: https://wiki.gnome.org/Apps/Evince
[58]: https://www.gimp.org/
[59]: https://inkscape.org/
[60]: https://imagemagick.org/Usage/
[61]: https://www.graphviz.org/
[62]: https://sourceforge.net/projects/pgf/
[63]: https://www.mozilla.org/en-US/firefox/new/
[64]: https://github.com/5digits/dactyl
[65]: https://mpv.io/
[66]: https://rg3.github.io/youtube-dl/
[67]: http://moc.daper.net/
[68]: https://aur.archlinux.org/packages/moc-pulse/
[69]: https://hookrace.net/public/linux-desktop/moc.png
[70]: https://github.com/libfuse/sshfs
[71]: https://www.qemu.org/
[72]: http://www.mingw.org/
[73]: https://github.com/tpoechtrager/osxcross
[74]: https://github.com/ddnet/ddnet-scripts/blob/master/ddnet-release.sh
[75]: https://github.com/ddnet/ddnet-scripts/blob/master/ddnet-lib-update.sh
[76]: https://github.com/def-/rrb/blob/master/rrb
[77]: https://github.com/def-/rrb/blob/master/config.example
[78]: mailto:dennis@felsin9.de
[79]: https://news.ycombinator.com/item?id=18979731

View File

@ -1,104 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Dual booting Windows and Linux using UEFI)
[#]: via: (https://opensource.com/article/19/5/dual-booting-windows-linux-uefi)
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss/users/ckrzen)
Dual booting Windows and Linux using UEFI
======
A quick rundown of setting up Linux and Windows to dual boot on the same
machine, using the Unified Extensible Firmware Interface (UEFI).
![Linux keys on the keyboard for a desktop computer][1]
Rather than doing a step-by-step how-to guide to configuring your system to dual boot, Ill highlight the important points. As an example, I will refer to my new laptop that I purchased a few months ago. I first installed [Ubuntu Linux][2] onto the entire hard drive, which destroyed the pre-installed [Windows 10][3] installation. After a few months, I decided to install a different Linux distribution, and so also decided to re-install Windows 10 alongside [Fedora Linux][4] in a dual boot configuration. Ill highlight some essential facts to get started.
### Firmware
Dual booting is not just a matter of software. Or, it is, but it involves changing your firmware, which among other things tells your machine how to begin the boot process. Here are some firmware-related issues to keep in mind.
#### UEFI vs. BIOS
Before attempting to install, make sure your firmware configuration is optimal. Most computers sold today have a new type of firmware known as [Unified Extensible Firmware Interface (UEFI)][5], which has pretty much replaced the other firmware known as [Basic Input Output System (BIOS)][6], which is often included through the mode many providers call Legacy Boot.
I had no need for BIOS, so I chose UEFI mode.
#### Secure Boot
One other important setting is Secure Boot. This feature detects whether the boot path has been tampered with, and stops unapproved operating systems from booting. For now, I disabled this option to ensure that I could install Fedora Linux. According to the Fedora Project wiki page [Features/SecureBoot][7], Fedora Linux will work with it enabled. This may be different for other Linux distributions, so I plan to revisit this setting in the future.
In short, if you find that you cannot install your Linux OS with this setting active, disable Secure Boot and try again.
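Once a Linux system is up, one quick way to check the current Secure Boot state is mokutil (a generic example, not part of the original walkthrough):
```
# Query the firmware's Secure Boot state (requires the mokutil package)
mokutil --sb-state
# prints "SecureBoot enabled" or "SecureBoot disabled"
```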
### Partitioning the boot drive
If you choose to dual boot and have both operating systems on the same drive, you have to break it into partitions. Even if you dual boot using two different drives, most Linux installations are best broken into a few basic partitions for a variety of reasons. Here are some options to consider.
#### GPT vs MBR
If you decide to manually partition your boot drive in advance, I recommend using the [GUID Partition Table (GPT)][8] rather than the older [Master Boot Record (MBR)][9]. Among the reasons for this change, there are two specific limitations of MBR that GPT doesnt have:
* MBR can hold up to 15 partitions, while GPT can hold up to 128.
  * MBR only supports disks up to 2 terabytes, while GPT uses 64-bit addresses, which allows it to support disks of up to 8 ZiB (roughly 9.4 billion terabytes), assuming 512-byte sectors.
If you have shopped for hard drives recently, then you know that many of todays drives exceed the 2 terabyte limit.
#### The EFI system partition
If you are doing a fresh installation or using a new drive, there are probably no partitions to begin with. In this case, the OS installer will create the first one, which is the [EFI System Partition (ESP)][10]. If you choose to manually partition your drive using a tool such as [gdisk][11], you will need to create this partition with several parameters. Based on the existing ESP, I set the size to around 500MB and assigned it the ef00 (EFI System) partition type. The UEFI specification requires the format to be FAT32/msdos, most likely because it is supportable by a wide range of operating systems.
![Partitions][12]
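On a blank disk, the same partition could also be created non-interactively with sgdisk (the scriptable companion to gdisk) and then formatted, roughly like this (the device name is only an example):
```
# Create a ~500 MB EFI System Partition (type ef00) and format it as FAT32
sgdisk --new=1:0:+500M --typecode=1:ef00 --change-name=1:"EFI system partition" /dev/sda
mkfs.fat -F32 /dev/sda1
```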
### Operating System Installation
Once you accomplish the first two tasks, you can install your operating systems. While I focus on Windows 10 and Fedora Linux here, the process is fairly similar when installing other combinations as well.
#### Windows 10
I started the Windows 10 installation and created a 20 Gigabyte Windows partition. Since I had previously installed Linux on my laptop, the drive had an ESP, which I chose to keep. I deleted all existing Linux and swap partitions to start fresh, and then started my Windows installation. The Windows installer automatically created another small partition—16 Megabytes—called the [Microsoft Reserved Partition (MSR)][13]. Roughly 400 Gigabytes of unallocated space remained on the 512GB boot drive once this was finished.
I then proceeded with and completed the Windows 10 installation process. I then rebooted into Windows to make sure it was working, created my user account, set up wi-fi, and completed other tasks that need to be done on a first-time OS installation.
#### Fedora Linux
I next moved to install Linux. I started the process, and when it reached the disk configuration steps, I made sure not to change the Windows NTFS and MSR partitions. I also did not change the ESP, but I did set its mount point to **/boot/efi**. I then created the usual ext4-formatted partitions, **/** (root), **/boot**, and **/home**. The last partition I created was Linux **swap**.
As with Windows, I continued and completed the Linux installation, and then rebooted. To my delight, at boot time the [GRand Unified Boot Loader (GRUB)][14] menu provided the choice to select either Windows or Linux, which meant I did not have to do any additional configuration. I selected Linux and completed the usual steps such as creating my user account.
### Conclusion
Overall, the process was painless. In past years, there has been some difficulty navigating the changes from UEFI to BIOS, plus the introduction of features such as Secure Boot. I believe that we have now made it past these hurdles and can reliably set up multi-boot systems.
I dont miss the [Linux LOader (LILO)][15] anymore!
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/5/dual-booting-windows-linux-uefi
作者:[Alan Formy-Duval][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alanfdoss/users/ckrzen
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_keyboard_desktop.png?itok=I2nGw78_ (Linux keys on the keyboard for a desktop computer)
[2]: https://www.ubuntu.com
[3]: https://www.microsoft.com/en-us/windows
[4]: https://getfedora.org
[5]: https://en.wikipedia.org/wiki/Unified_Extensible_Firmware_Interface
[6]: https://en.wikipedia.org/wiki/BIOS
[7]: https://fedoraproject.org/wiki/Features/SecureBoot
[8]: https://en.wikipedia.org/wiki/GUID_Partition_Table
[9]: https://en.wikipedia.org/wiki/Master_boot_record
[10]: https://en.wikipedia.org/wiki/EFI_system_partition
[11]: https://sourceforge.net/projects/gptfdisk/
[12]: /sites/default/files/u216961/gdisk_screenshot_s.png
[13]: https://en.wikipedia.org/wiki/Microsoft_Reserved_Partition
[14]: https://en.wikipedia.org/wiki/GNU_GRUB
[15]: https://en.wikipedia.org/wiki/LILO_(boot_loader)

View File

@ -1,181 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Creating a Chat Bot with Recast.AI)
[#]: via: (https://opensourceforu.com/2019/11/creating-a-chat-bot-with-recast-ai/)
[#]: author: (Athira Lekshmi C.V https://opensourceforu.com/author/athira-lekshmi/)
Creating a Chat Bot with Recast.AI
======
[![][1]][2]
_According to a Gartner report from February 2018, "25 per cent of customer service and support operations will integrate virtual customer assistant (VCA) or chatbot technology across engagement channels by 2020, up from less than 2 per cent in 2017." In the light of this, readers will find this tutorial on how the open source Recast.AI bot-creating platform works helpful._
Chat bots, both voice based and others, have been in use for quite a while now. From chatbots that engage the user in a murder mystery game to bots which help in real estate deals and medical diagnosis, chatbots have traversed across domains.
There are many platforms which enable users to create and deploy bots. Recast.AI (now known as SAP Conversational AI after its acquisition by SAP) is a forerunner amongst these.
The cool interface, its collaborative nature and the analytics tools it provides, make it a popular choice.
As the Recast official site says, “It is an ultimate collaborative platform to build, train, deploy and monitor intelligent bots.”
![Figure 1: Setting the bot properties][3]
![Figure 2: Bot dashboard][4]
![Figure 3: Searching an intent][5]
**Building a basic bot in Recast**
Let us look at how to build a basic bot in Recast.
1. Create an account in _<https://cai.tools.sap>_. Signing up can be done either with an email ID or with a GitHub account.
2. Once you log in, you will land on the dashboard. Click on the + New Bot icon on the top right-hand side to create a new bot.
3. On the next screen, you will see that there is a set of predefined skills you can select. Select Greetings for the time being (Figure 1). This bot is already trained to understand basic greetings.
4. Provide a name for your bot. For now, since this is a very basic bot, you can have the bot crack some jokes, so let us name it Joke Bot and select the default language as English.
5. Select Non-personal data under the data policy since you wont be dealing with any sensitive information; then select the Public bot option and click on Create a bot.
So thats your bot created on the Recast platform.
![Figure 4: @joke intent][6]
![Figure 5: Predefined expressions][7]
**The five stages of developing a bot**
To use the words from the official Recast blog, there are five stages in a bots life.
  * Training: Teaching your bot what it needs to understand
  * Building: Creating your conversational flow with the Bot Builder tool
  * Coding: Connecting your bot with external APIs or a database
  * Connecting: Shipping your bot to one or several messaging platforms
  * Monitoring: Training your bot to make it sharper and get insights on its usage
**Training a bot through intents**
You will be able to see the options to either search, fork or create an intent in the dashboard.
“An intent is a box of expressions that mean the same thing but which are constructed in different ways. Intents are the heart of your bots understanding. Each one of your intents represents an idea your bot is able to understand.” (from the _Recast.AI_ website)
As decided earlier, you need the bot to be able to crack jokes. So the baseline is that the bot should be able to understand that the user is asking it to tell a joke; it shouldn't respond with a joke when the user just says, "Hi"; that would not be good.
So group the utterances that the user might make, like:
```
Tell me a joke.
Tell me a funny fact.
Can you crack a joke?
Whats funny today?
```
…………………
Before going on to create the intent from scratch, let us explore the Search/fork option. Type _Joke_ in the search field (Figure 3). This gives a list of public intents created by users of Recast around the globe, which is why Recast is said to be collaborative in nature. So there's no need to create all intents from scratch; one can build upon intents already created. This brings down the effort needed to train the bot with common intents.
* Select the first intent in the list and fork it into the bot.
* Click on the Fork button. The intent is now added to the bot (Figure 4).
* Click on the intent @joke, and a list of expressions which already exist in the intent will be displayed (Figure 5).
* Add a few more expressions to it (Figure 6).
![Figure 6: Suggested expressions][8]
![Figure 7: Suggested expressions][9]
Once a few expressions are added, the bot gives suggestions as shown in Figure 7. Select a few and add them to the intent (Figure 7).
You can also tag your own custom entities to detect keywords, depending on your bots context.
**Skills**
A skill is a block of conversation that has a clear purpose and that your bot can execute to achieve a goal. It can be as simple as the ability to greet someone, but it can also be more complex, like giving movie suggestions based on information provided by the user.
A skill need not be just a single query-answer pair; rather, it can run through multiple exchanges. For example, consider a bot which helps you learn about currency exchange rates. It starts by asking the source currency, then the target currency, before giving the exact response. Skills can be combined to create complex conversational flows.
Heres how you create a skill for the joke bot:
* Go to the _Build_ tab. Click on the + icon to create a skill.
* Name the skill _Joke_ (Figure 8)
  * Once created, click on the skill. You will see four tabs: _Read me, Triggers, Requirements and Actions_.
* Navigate to the Requirements tab. You should store the information only if the intent joke is present. So, add a requirement as shown in Figure 9.
![Figure 8: Skills dashboard][10]
![Figure 9: Adding a trigger][11]
Since this is a simple use case, you needn't consider any specific requirements in the Requirements tab. But consider a case in which a response needs to be triggered only if certain keywords or entities are present; in such a case you will need requirements.
Requirements are either intents or entities that your skill needs to retrieve before executing actions. Requirements are pieces of information that are important in the conversation and that your bot can use; for example, the users name or a location. Once a requirement is completed, the associated value is stored in the bots memory for the entire conversation.
Now let us move to the Action tab to set the responses (see Figure 10).
Click on Add _new message group_. Then select _Send message_ and add a text message, which can be any joke in this case. Also, since you dont want your bot to crack the same joke each time, you can add multiple messages which will be randomly picked each time.
![Figure 10: Adding actions][12]
![Figure 11: Adding text messages][13]
![Figure 12: Setting up webchat][14]
**Channel integrations**
Well, the success of a bot also depends upon how easily accessible it is. Recast has built-in integrations with many messaging channels such as Skype for Business, Kik Messenger, Telegram, Line, Facebook Messenger, Slack, Alexa, etc. In addition to that, Recast also provides SDKs to develop custom channels.
Also, there is a ready-to-use Web chat provided by Recast (in the Connect tab). You can customise the colour schemes, headers, bot pictures, etc. It provides you with a script tag to be injected into the page. Your interface is now up (Figure 12).
The Web chat code base is open sourced, which makes it easier for developers to play around with the look and feel, the standard response types and much more.
The dashboard provides step-by-step procedures on how to deploy the bot on various channels. The joke bot was deployed in Telegram and in Web chat, as shown in Figure 13.
![Figure 13: Webchat deployed][15]
![Figure 14: Bot deployed in Telegram][16]
![Figure 15: Multi-language bot][17]
**And there is more**
Recast supports multiple languages. Select one language as the base while creating the bot; you then have the option to add as many more languages as you want.
The example considered here is a simple static joke bot, but actual use cases will need interaction with various systems. Recast has a Web hook feature which allows users to connect with various systems to get responses. Also, there is detailed API documentation to help leverage each independent feature of the platform.
As for analytics, Recast has a monitoring dashboard which helps you understand the accuracy of the bot and train it further.
![Avatar][18]
[Athira Lekshmi C.V][19]
The author is an open-source enthusiast.
[![][20]][21]
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/11/creating-a-chat-bot-with-recast-ai/
作者:[Athira Lekshmi C.V][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/athira-lekshmi/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/04/Build-ChatBoat.jpg?resize=696%2C442&ssl=1 (Build ChatBoat)
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/04/Build-ChatBoat.jpg?fit=900%2C572&ssl=1
[3]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-1-Setting-the-bot-properties.jpg?resize=350%2C201&ssl=1
[4]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-2-Setting-the-bot-properties.jpg?resize=350%2C217&ssl=1
[5]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-3-Searching-an-intent.jpg?resize=350%2C271&ssl=1
[6]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-4-@joke-intent.jpg?resize=350%2C214&ssl=1
[7]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-5-Predefined-expressions-350x227.jpg?resize=350%2C227&ssl=1
[8]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-6-Suggested-expressions-350x197.jpg?resize=350%2C197&ssl=1
[9]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-7-Suggested-expressions-350x248.jpg?resize=350%2C248&ssl=1
[10]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-8-Skills-dashboard.jpg?resize=350%2C187&ssl=1
[11]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-9-Adding-a-trigger.jpg?resize=350%2C197&ssl=1
[12]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-10-Adding-actions.jpg?resize=350%2C175&ssl=1
[13]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-11-Adding-text-messages.jpg?resize=350%2C255&ssl=1
[14]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-12-Setting-up-webchat.jpg?resize=350%2C326&ssl=1
[15]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-13-Webchat-deployed.jpg?resize=350%2C425&ssl=1
[16]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-14-Bot-deployed-in-Telegram.jpg?resize=350%2C269&ssl=1
[17]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-15-Multi-language-bot.jpg?resize=350%2C419&ssl=1
[18]: https://secure.gravatar.com/avatar/d24503a2a0bb8bd9eefe502587d67323?s=100&r=g
[19]: https://opensourceforu.com/author/athira-lekshmi/
[20]: https://opensourceforu.com/wp-content/uploads/2019/11/assoc.png
[21]: https://feedburner.google.com/fb/a/mailverify?uri=LinuxForYou&loc=en_US

View File

@@ -1,181 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Getting started with Emacs)
[#]: via: (https://opensource.com/article/20/3/getting-started-emacs)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Getting started with Emacs
======
10 tips for diving into the world of this useful group of open source
text editors.
![Computer keyboard typing][1]
Many people say they want to learn [Emacs][2], but many of them shy away after the briefest encounter. It's not because Emacs is bad or even that complex. The problem, I believe, is that people don't actually want to learn Emacs; they want to be comfortable with Emacs traditions. They want to understand the arcane keyboard shortcuts and unfamiliar terminology. They want to use Emacs as they believe it's "meant to be used."
I sympathize with this because that's how I felt about Emacs. I thought that all true Emacs users only ran it inside a terminal, and never used arrow keys or menus, much less a mouse. That's a great way to discourage yourself from getting started with Emacs. There are enough unique .emacs config files out there to prove that if there's one common strain among Emacs users, it's that everyone uses Emacs differently.
Learning Emacs is easy. It's loving Emacs that's hard. To love Emacs, you have to discover the features it has that you've been looking for, sometimes without knowing you've been missing them. And that takes experience.
The only way to get that experience is to start at the beginning—by actively using Emacs. Here are ten tips to help you figure out what works best for you.
### Start with the GUI
One of the greatest things about Emacs (and its friendly competitor, [Vim][3], too) is that it can be run in a terminal, which is useful when you SSH into a server, but not very significant on computers built within the last 15 years or so. The GUI version of Emacs can be run on extremely [low-powered devices][4], and it has lots of pragmatic features that both novice and experienced users can appreciate.
For example, if you don't know how to copy a word with just keyboard shortcuts in Emacs, the Edit menu's Copy, Cut, and Paste selections provide the path of least resistance. There's no reason to punish yourself for choosing Emacs. Use its menus, use your mouse to select regions and click in-buffer buttons, and don't let unfamiliarity stand in your way of being productive.
![Emacs slackware][5]
These features are built into Emacs because users use them. You should use them when you need them and work your way up to the obscure commands you think you might need when you eventually use Emacs over SSH on a VT100 terminal with no Alt or arrow keys.
### Get used to the terminology
Emacs has special terms for its UI elements. The evolution of personal computing didn't build upon the same terms, so many are relatively foreign to the modern computer user, and others are the same but have different meanings. Here are some of the most common terms:
* Frame: In Emacs, the frame is what modern computing calls a "window."
* Buffer: A buffer is a communication channel for Emacs. It may serve as a command-line for an Emacs process, or a shell, or just the contents of a file.
* Window: A window is your view into a buffer.
* Mini-buffer: The primary command-line, located at the bottom of the Emacs frame.
![Emacs tutorial map][6]
### Making sense of Emacs modifier keys
On a PC keyboard, the Ctrl key is referred to as C, and the Alt key is referred to as M. These aren't the C and M keys, and because they're always paired with an accompanying letter or symbol key, they're easy to recognize in the documentation.
For example, C-x means Ctrl+X in modern keyboard notation, and M-x is Alt+X. You press the keys together just as you do when you're cutting text from any application.
There's another level of keyboard shortcuts, though, quite unlike anything on modern computers. Sometimes, a keyboard shortcut isn't just one key combo and instead consists of a series of key presses.
For instance, C-x C-f means to press Ctrl+X as usual, and then Ctrl+F as usual.
Sometimes, a keyboard shortcut has a mixture of key types. The combo C-x 3 means to press Ctrl+X as usual, and then the number 3 key.
The way Emacs can do these fancy power combos is that certain keys put Emacs into a special command mode. If you press C-x (that's Ctrl+X), you're telling Emacs to enter an idle state, waiting for a second key or keyboard shortcut.
Emacs documentation, both official and unofficial, is full of keyboard shortcuts. Practice mentally translating C to Ctrl and M to Alt, and all those docs will make a lot more sense to you.
### Alternate shortcuts for cut, copy, and paste
Canonically, copying text is performed with a series of keyboard shortcuts that depend on how you want to copy or cut.
For instance, you can cut a whole word with M-d (Emacs lingo for Alt+d), or a whole line with C-k (Ctrl+K), or a highlighted region with C-w (Ctrl+W). You can get used to that if you want, but if you like Ctrl+C and Ctrl+X and Ctrl+V, then you can use those instead.
Enabling modern cut-copy-paste requires activating a feature called CUA (Common User Access). To activate CUA, click on the Options menu and select Use CUA Keys. With this enabled, C-c copies highlighted text, C-x cuts highlighted text, and C-v pastes text. This mode is only actually active when you've selected text, so you can still learn the usual C-x and C-c bindings that Emacs normally uses.
### It's OK to share
Emacs is an application and has no awareness of your feelings or loyalty towards it. If you want to use Emacs only for tasks that "feel" right for Emacs, and a different editor (like Vim) for other tasks, you can do that.
Your interactions with an application influence how you work, so if the pattern of keypresses required in Emacs doesn't agree with a specific task, then don't force yourself to use Emacs for that task. Emacs is just one of many open source tools available to you, and there's no reason to limit yourself to just one tool.
### Explore new functions
Most of what Emacs does is an elisp function that can be invoked from a menu selection, keyboard shortcut, or in some cases, a specific event. All functions can be executed from the mini-buffer (the command-line at the bottom of the Emacs frame). You could, in theory, even navigate your cursor by typing in functions like **forward-word** and **backward-word** and **next-line** and **previous-line**, and so on. It would be unbearably inefficient, but that's the kind of direct access you have to the code you're running. In a way, Emacs is its own API.
You can find out about new functions by reading about Emacs on community blogs, or you can take a more direct approach and use the describe-function function. To get help on any function, press M-x (that's Alt+X) and then type describe-function, and then press Return or Enter. You are prompted for a function name, and then shown a description of that function.
You can get a list of all available functions by typing M-x (Alt+X), followed by ?.
You can also get pop-up descriptions of functions as you type them by pressing M-x and then typing **auto-complete-mode**, and then pressing Return or Enter. With this mode active, as you type any Emacs function into your document, you're offered auto-completion options, along with a description of the function.
![Emacs function][7]
When you find a useful function and use it, Emacs tells you the keyboard binding for it, if one is set. If one doesn't exist, you can assign one yourself by opening your $HOME/.emacs configuration file and entering a keyboard shortcut assignment. The syntax is global-set-key, followed by the keyboard shortcut you want to use, followed by the function you want to invoke.
For example, to assign the screenwriter-slugline function to a keyboard binding:
```
(global-set-key (kbd "C-c s") 'screenwriter-slugline)
```
Reload your configuration file, and the keyboard shortcut is available to you:
```
M-x load-file ~/.emacs
```
### Panic button
As you use Emacs and try new functions, you're bound to start invoking something you didn't mean to invoke. The universal panic button in Emacs is C-g (that's Ctrl+G).
I remember this by associating G with GNU, and I imagine I'm calling upon GNU to rescue me from a poor decision, but feel free to make up your own mnemonic.
If you press C-g a few times, the Emacs mini-buffer returns to a latent state, pop-up windows are hidden, and you're back to the safety of a plain old boring text editor.
### Ignore the keyboard shortcuts
There are too many potential keyboard shortcuts to summarize them all here, much less for you ever to hope to memorize. That's by design. Emacs is meant to be customized, and when people write plugins for Emacs, they get to define their own special keyboard shortcuts.
The idea isn't to memorize all shortcuts right away. Your goal, instead, is to get comfortable using Emacs. The more comfortable you become in Emacs, the more you'll get tired of resorting to the menu bar all the time, and you'll start to memorize the combos important to you.
Everyone has their own favorite shortcuts based on what they typically do in Emacs. Someone who spends all day writing code in Emacs may know all the keyboard shortcuts for running a debugger or for launching language-specific modes, but know nothing about Org-mode or Artist-mode. It's natural, and it's fine.
### Practice Emacs when you use Bash
One advantage to knowing Emacs keyboard shortcuts is that many of them also apply in Bash:
* C-a—go to the beginning of a line
* C-e—go to the end of a line
* C-k—cut the whole line
* M-f—go forward a word
* M-b—go back a word
* M-d—cut a word
* C-y—yank back (paste) the most recently cut content
* M-Shift-U—capitalize a word
* C-t—swap two characters (for example, sl becomes ls)
There are many more examples, and they can make your interactions with your Bash terminal faster than you ever imagined.
### Packages
Emacs has a built-in package manager to help you discover new plugins. Its package manager contains modes to help you edit specific types of text (for instance, if you edit JSON files frequently, you might try ejson-mode), embedded applications, themes, spellchecking options, linters, and more. This is where Emacs has the potential to become crucial to your daily computing; once you find a great Emacs package, you may not be able to live without it.
![Emacs emoji][8]
You can browse packages by pressing M-x (that's Alt+X), typing **package-list-packages**, and then pressing Return or Enter. The package manager updates its cache each time you launch it, so be patient the first time you use it while it downloads a list of available packages. Once loaded, you can navigate with your keyboard or mouse (remember, Emacs is a GUI application). Each package name is a button, so either move your cursor over it and press Return or just click it with your mouse. You can read about the package in the new window that appears in your Emacs frame, and then install it with the Install button.
Some packages need special configuration, which is sometimes listed in its description, but other times require you to visit the package's home page to read more about it. For example, the auto-complete package ac-emoji installs easily but requires you to have a symbol font defined. It works either way, but you only see the corresponding emoji if you have the font installed, and you might not know that unless you visit its homepage.
### Tetris
Emacs has games, believe it or not. There's Sudoku, quizzes, minesweeper, a just-for-fun psychotherapist, and even Tetris. These aren't particularly useful, but interacting with Emacs on any level is great practice, and games are a great way to maximize your time spent in Emacs.
![Emacs tetris][9]
Tetris is also how I was introduced to Emacs in the first place, so of all versions of the game, it's the Emacs version that's truly my favorite.
### Using Emacs
GNU Emacs is popular because it's impossibly flexible and highly extensible. People get so accustomed to Emacs keyboard shortcuts that they habitually try to use them in all other applications, and they build applications into Emacs, so they never have to leave in the first place. If you want Emacs to play an important role in your computing life, the ultimate key is to embrace the unknown and start using Emacs. Stumble through it until you've discovered how to make it work for you, and then settle in for 40 years of comfort.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/3/getting-started-emacs
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/keyboaord_enter_writing_documentation.jpg?itok=kKrnXc5h (Computer keyboard typing)
[2]: https://opensource.com/downloads/emacs-cheat-sheet
[3]: https://opensource.com/downloads/cheat-sheet-vim
[4]: https://opensource.com/article/17/2/pocketchip-or-pi
[5]: https://opensource.com/sites/default/files/uploads/emacs-slackware.jpg (Emacs slackware)
[6]: https://opensource.com/sites/default/files/uploads/emacs-tutorial-map.png (Emacs tutorial map)
[7]: https://opensource.com/sites/default/files/uploads/emacs-function.jpg (Emacs function)
[8]: https://opensource.com/sites/default/files/uploads/emacs-emoji_0.jpg (Emacs emoji)
[9]: https://opensource.com/sites/default/files/uploads/emacs-tetris.jpg (Emacs tetris)

View File

@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (zhangxiangping)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@@ -156,7 +156,7 @@ via: https://opensource.com/article/20/4/python-map-covid-19
作者:[AnuragGupta][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[zhangxiangping](https://github.com/zxp93)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@@ -1,75 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (scanimage: scan from the command line!)
[#]: via: (https://jvns.ca/blog/2020/07/11/scanimage--scan-from-the-command-line/)
[#]: author: (Julia Evans https://jvns.ca/)
scanimage: scan from the command line!
======
Here's another quick post about a command line tool I was delighted by.
Last night, I needed to scan some documents for some bureaucratic reasons. I'd never used a scanner on Linux before and I was worried it would take hours to figure out. I started by using `gscan2pdf` and had trouble figuring out the user interface: I wanted to scan both sides of the page at the same time (which I knew our scanner supported) but couldn't get it to work.
### enter scanimage!
`scanimage` is a command line tool, in the `sane-utils` Debian package. I think all Linux scanning tools use the `sane` libraries (“scanner access now easy”) so my guess is that it has similar abilities to any other scanning software. I didn't need OCR in this case so we're not going to talk about OCR.
### get your scanners name with `scanimage -L`
`scanimage -L` lists all scanning devices you have.
At first I couldn't get this to work and I was a bit frustrated but it turned out that I'd connected the scanner to my computer, but not plugged it into the wall. Oops.
Once everything was plugged in it worked right away. Apparently our scanner is called `fujitsu:ScanSnap S1500:2314`. Hooray!
### list options for your scanner with `--help`
Apparently each scanner has different options (makes sense!) so I ran this command to get the options for my scanner:
```
scanimage --help -d 'fujitsu:ScanSnap S1500:2314'
```
I found out that my scanner supported a `--source` option (which I could use to enable duplex scanning) and a `--resolution` option (which I changed to 150 to decrease the file sizes and make scanning faster).
### scanimage doesn't output PDFs (but you can write a tiny script)
The only downside was I wanted a PDF of my scanned document, and scanimage doesn't seem to support PDF output.
So I wrote this 5-line shell script to scan a bunch of PNGs into a temp directory and convert the resulting PNGs to a PDF.
```
#!/bin/bash
set -e
DIR=`mktemp -d`
CUR=$PWD
cd $DIR
scanimage -b --format png -d 'fujitsu:ScanSnap S1500:2314' --source 'ADF Front' --resolution 150
convert *.png $CUR/$1
```
I ran the script like this: `scan-single-sided output-file-to-save.pdf`
You'll probably need a different `-d` and `--source` for your scanner.
### it was so easy!
I always expect using printers/scanners on Linux to be a nightmare and I was really surprised how `scanimage` Just Worked: I could just run my script with `scan-single-sided receipts.pdf` and it would scan a document and save it to `receipts.pdf`!
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2020/07/11/scanimage--scan-from-the-command-line/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972

View File

@@ -1,682 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Add throwing mechanics to your Python game)
[#]: via: (https://opensource.com/article/20/9/add-throwing-python-game)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Add throwing mechanics to your Python game
======
Running around avoiding enemies is one thing. Fighting back is another.
Learn how in the 12th article in this series on creating a platformer in
Pygame.
![Gaming on a grid with penguin pawns][1]
This is part 12 in an ongoing series about creating video games in [Python 3][2] using the [Pygame][3] module. Previous articles are:
1. [Learn how to program in Python by building a simple dice game][4]
2. [Build a game framework with Python using the Pygame module][5]
3. [How to add a player to your Python game][6]
4. [Using Pygame to move your game character around][7]
5. [What's a hero without a villain? How to add one to your Python game][8]
6. [Put platforms in a Python game with Pygame][9]
7. [Simulate gravity in your Python game][10]
8. [Add jumping to your Python platformer game][11]
9. [Enable your Python game player to run forward and backward][12]
10. [Put some loot in your Python platformer game][13]
11. [Add scorekeeping to your Python game][14]
My previous article was meant to be the final article in this series, and it encouraged you to go program your own additions to this game. Many of you did! I got emails asking for help with a common mechanic that I hadn't yet covered: combat. After all, jumping to avoid baddies is one thing, but sometimes it's awfully satisfying to just make them go away. It's common in video games to throw something at your enemies, whether it's a ball of fire, an arrow, a bolt of lightning, or whatever else fits the game.
Unlike anything you have programmed for your platformer game in this series so far, throwable items have a _time to live_. Once you throw an object, it's expected to travel some distance and then disappear. If it's an arrow or something like that, it may disappear when it passes the edge of the screen. If it's a fireball or a bolt of lightning, it might fizzle out after some amount of time.
That means each time a throwable item is spawned, a unique measure of its lifespan must also be spawned. To introduce this concept, this article demonstrates how to throw only one item at a time. (In other words, only one throwable item may exist at a time.) On the one hand, this is a game limitation, but on the other hand, it is a game mechanic in itself. Your player won't be able to throw 50 fireballs at once, since you only allow one at a time, so it becomes a challenge for your player to time when they release a fireball to try to hit an enemy. And behind the scenes, this also keeps your code simple.
If you want to enable more throwable items at once, challenge yourself after you finish this tutorial by building on the knowledge you gain.
### Create the throwable class
If you followed along with the other articles in this series, you should be familiar with the basic `__init__` function when spawning a new object on the screen. It's the same function you used for spawning your [player][6] and your [enemies][8]. Here's an `__init__` function to spawn a throwable object:
```
class Throwable(pygame.sprite.Sprite):
    """
    Spawn a throwable object
    """
    def __init__(self, x, y, img, throw):
        pygame.sprite.Sprite.__init__(self)
        self.image = pygame.image.load(os.path.join('images',img))
        self.image.convert_alpha()
        self.image.set_colorkey(ALPHA)
        self.rect   = self.image.get_rect()
        self.rect.x = x
        self.rect.y = y
        self.firing = throw
```
The primary difference in this function compared to your `Player` class or `Enemy` class `__init__` function is that it has a `self.firing` variable. This variable keeps track of whether or not a throwable object is currently alive on screen, so it stands to reason that when a throwable object is created, the variable is set to `1`.
### Measure time to live
Next, just as with `Player` and `Enemy`, you need an `update` function so that the throwable object moves on its own once it's thrown into the air toward an enemy.
The easiest way to determine the lifespan of a throwable object is to detect when it goes off-screen. Which screen edge you need to monitor depends on the physics of your throwable object.
* If your player is throwing something that travels quickly along the horizontal axis, like a crossbow bolt or arrow or a very fast magical force, then you want to monitor the horizontal limit of your game screen. This is defined by `worldx`.
* If your player is throwing something that travels vertically or both horizontally and vertically, then you must monitor the vertical limit of your game screen. This is defined by `worldy`.
This example assumes your throwable object goes a little forward and eventually falls to the ground. The object does not bounce off the ground, though, and continues to fall off the screen. You can try different settings to see what fits your game best:
```
    def update(self,worldy):
        '''
        throw physics
        '''
        if self.rect.y < worldy: #vertical axis
            self.rect.x  += 15 #how fast it moves forward
            self.rect.y  += 5  #how fast it falls
        else:
            self.kill()     #remove throwable object
            self.firing = 0 #free up firing slot
```
To make your throwable object move faster, increase the momentum of the `self.rect` values.
If the throwable object is off-screen, then the object is destroyed, freeing up the RAM that it had occupied. In addition, `self.firing` is set back to `0` to allow your player to take another shot.
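If you would rather have the weapon fizzle out after a fixed amount of time, as suggested earlier for fireballs and lightning bolts, one possible variation (a sketch that is not part of this tutorial's code; the `life` attribute and the value `40` are arbitrary choices to tune for your game) is to count down frames instead of waiting for the object to leave the screen:
```
    def update(self, worldy):
        '''
        throw physics with a frame-based lifespan (hypothetical variant)
        '''
        # At 40 frames per second, a life of 40 is roughly one second on screen.
        self.life = getattr(self, 'life', 40) - 1
        if self.life > 0 and self.rect.y < worldy:
            self.rect.x += 15  # how fast it moves forward
            self.rect.y += 5   # how fast it falls
        else:
            self.kill()      # remove throwable object
            self.firing = 0  # free up firing slot
```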
### Set up your throwable object
Just like with your player and enemies, you must create a sprite group in your setup section to hold the throwable object.
Additionally, you must create an inactive throwable object to start the game with. If there isn't a throwable object when the game starts, the first time a player attempts to throw a weapon, it will fail.
This example assumes your player starts with a fireball as a weapon, so each instance of a throwable object is designated by the `fire` variable. In later levels, as the player acquires new skills, you could introduce a new variable using a different image but leveraging the same `Throwable` class.
In this block of code, the first two lines are already in your code, so don't retype them:
```
player_list = pygame.sprite.Group() #context
player_list.add(player)             #context
fire = Throwable(player.rect.x,player.rect.y,'fire.png',0)
firepower = pygame.sprite.Group()
```
Notice that a throwable item starts at the same location as the player. That makes it look like the throwable item is coming from the player. The first time the fireball is generated, a `0` is used so that `self.firing` shows as available.
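If you later introduce the second weapon type hinted at above, a minimal sketch (assuming a hypothetical `ice.png` sprite in your `images` directory) reuses the same class with its own variable and sprite group:
```
# Hypothetical second weapon for a later level; 'ice.png' is an assumed asset.
ice = Throwable(player.rect.x, player.rect.y, 'ice.png', 0)
icepower = pygame.sprite.Group()
```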
### Get throwing in the main loop
Code that doesn't appear in the main loop will not be used in the game, so you need to add a few things in your main loop to get your throwable object into your game world.
First, add player controls. Currently, you have no firepower trigger. There are two states for a key on a keyboard: the key can be down, or the key can be up. For movement, you use both: pressing down starts the player moving, and releasing the key (the key is up) stops the player. Firing needs only one signal. It's a matter of taste as to which key event (a key press or a key release) you use to trigger your throwable object.
In this code block, the first two lines are for context:
```
            if event.key == pygame.K_UP or event.key == ord('w'):
                player.jump(platform_list)
            if event.key == pygame.K_SPACE:
                if not fire.firing:
                    fire = Throwable(player.rect.x,player.rect.y,'fire.png',1)
                    firepower.add(fire)
```
Unlike the fireball you created in your setup section, you use a `1` to set `self.firing` as unavailable.
Finally, you must update and draw your throwable object. The order of this matters, so put this code between your existing `enemy.move` and `player_list.draw` lines:
```
    enemy.move()  # context
    if fire.firing:
        fire.update(worldy)
        firepower.draw(world)
    player_list.draw(screen)  # context
    enemy_list.draw(screen)   # context
```
Notice that these updates are performed only if the `self.firing` variable is set to 1. If it is set to 0, then `fire.firing` is not true, and the updates are skipped. If you tried to do these updates, no matter what, your game would crash because there wouldn't be a `fire` object to update or draw.
Launch your game and try to throw your weapon.
### Detect collisions
If you played your game with the new throwing mechanic, you probably noticed that you can throw objects, but it doesn't have any effect on your foes.
The reason is that your enemies do not check for a collision. An enemy can be hit by your throwable object and never know about it.
You've already done collision detection in your `Player` class, and this is very similar. In your `Enemy` class, add a new `update` function:
```
    def update(self,firepower, enemy_list):
        """
        detect firepower collision
        """
        fire_hit_list = pygame.sprite.spritecollide(self,firepower,False)
        for fire in fire_hit_list:
            enemy_list.remove(self)
```
The code is simple. Each enemy object checks to see if it has been hit by the `firepower` sprite group. If it has, then the enemy is removed from the enemy group and disappears.
To integrate that function into your game, call the function in your new firing block in the main loop:
```
    if fire.firing:                             # context
        fire.update(worldy)                    # context
        firepower.draw(screen)                  # context
        enemy_list.update(firepower,enemy_list) # update enemy
```
You can try your game now, and most everything works as expected. There's still one problem, though, and that's the direction of the throw.
### Change the throw mechanic direction
Currently, your hero's fireball moves only to the right. This is because the `update` function of the `Throwable` class adds pixels to the position of the fireball, and in Pygame, a larger number on the X-axis means movement toward the right of the screen. When your hero turns the other way, you probably want it to throw its fireball to the left.
By this point, you know how to implement this, at least technically. However, the easiest solution uses a variable in what may be a new way for you. Generically, you can "set a flag" (sometimes also termed "flip a bit") to indicate the direction your hero is facing. Once you do that, you can check that variable to learn whether the fireball needs to move left or right.
First, create a new variable in your `Player` class to represent which direction your hero is facing. Because my hero faces right naturally, I treat that as the default:
```
        self.score = 0
        self.facing_right = True  # add this
        self.is_jumping = True
```
When this variable is `True`, your hero sprite is facing right. It must be set anew every time the player changes the hero's direction, so do that in your main loop on the relevant `keyup` events:
```
        if event.type == pygame.KEYUP:
            if event.key == pygame.K_LEFT or event.key == ord('a'):
                player.control(steps, 0)
                player.facing_right = False  # add this line
            if event.key == pygame.K_RIGHT or event.key == ord('d'):
                player.control(-steps, 0)
                player.facing_right = True  # add this line
```
Finally, change the `update` function of your `Throwable` class to check whether the hero is facing right or not and to add or subtract pixels from the fireball's position as appropriate:
```
        if self.rect.y < worldy:
            if player.facing_right:
                self.rect.x += 15
            else:
                self.rect.x -= 15
            self.rect.y += 5
```
Try your game again and clear your world of some baddies.
![Python platformer with throwing capability][15]
(Seth Kenlon, [CC BY-SA 4.0][16])
As a bonus challenge, try incrementing your player's score whenever an enemy is vanquished.
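One possible way to do that (a sketch, not part of the complete code below) is to bump the score in the same loop that removes the enemy, using the global `player` object created in the setup section:
```
    def update(self, firepower, enemy_list):
        """
        detect firepower collision and reward the player (hypothetical variant)
        """
        fire_hit_list = pygame.sprite.spritecollide(self, firepower, False)
        for fire in fire_hit_list:
            enemy_list.remove(self)
            player.score += 1  # credit the kill to the player
```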
### The complete code
```
#!/usr/bin/env python3
# by Seth Kenlon
# GPLv3
# This program is free software: you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.
import pygame
import pygame.freetype
import sys
import os
'''
Variables
'''
worldx = 960
worldy = 720
fps = 40
ani = 4
world = pygame.display.set_mode([worldx, worldy])
forwardx  = 600
backwardx = 120
BLUE = (80, 80, 155)
BLACK = (23, 23, 23)
WHITE = (254, 254, 254)
ALPHA = (0, 255, 0)
tx = 64
ty = 64
font_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), "fonts", "amazdoom.ttf")
font_size = tx
pygame.freetype.init()
myfont = pygame.freetype.Font(font_path, font_size)
'''
Objects
'''
def stats(score, health):
    myfont.render_to(world, (4, 4), "Score:"+str(score), BLUE, None, size=64)
    myfont.render_to(world, (4, 72), "Health:"+str(health), BLUE, None, size=64)
class Throwable(pygame.sprite.Sprite):
    """
    Spawn a throwable object
    """
    def __init__(self, x, y, img, throw):
        pygame.sprite.Sprite.__init__(self)
        self.image = pygame.image.load(os.path.join('images', img))
        self.image.convert_alpha()
        self.image.set_colorkey(ALPHA)
        self.rect = self.image.get_rect()
        self.rect.x = x
        self.rect.y = y
        self.firing = throw
    def update(self, worldy):
        '''
        throw physics
        '''
        if self.rect.y < worldy:
            if player.facing_right:
                self.rect.x += 15
            else:
                self.rect.x -= 15
            self.rect.y += 5
        else:
            self.kill()
            self.firing = 0
# x location, y location, img width, img height, img file
class Platform(pygame.sprite.Sprite):
    def __init__(self, xloc, yloc, imgw, imgh, img):
        pygame.sprite.Sprite.__init__(self)
        self.image = pygame.image.load(os.path.join('images', img)).convert()
        self.image.convert_alpha()
        self.image.set_colorkey(ALPHA)
        self.rect = self.image.get_rect()
        self.rect.y = yloc
        self.rect.x = xloc
class Player(pygame.sprite.Sprite):
    """
    Spawn a player
    """
    def __init__(self):
        pygame.sprite.Sprite.__init__(self)
        self.movex = 0
        self.movey = 0
        self.frame = 0
        self.health = 10
        self.damage = 0
        self.score = 0
        self.facing_right = True
        self.is_jumping = True
        self.is_falling = True
        self.images = []
        for i in range(1, 5):
            img = pygame.image.load(os.path.join('images', 'walk' + str(i) + '.png')).convert()
            img.convert_alpha()
            img.set_colorkey(ALPHA)
            self.images.append(img)
            self.image = self.images[0]
            self.rect = self.image.get_rect()
    def gravity(self):
        if self.is_jumping:
            self.movey += 3.2
    def control(self, x, y):
        """
        control player movement
        """
        self.movex += x
    def jump(self):
        if self.is_jumping is False:
            self.is_falling = False
            self.is_jumping = True
    def update(self):
        """
        Update sprite position
        """
        # moving left
        if self.movex < 0:
            self.is_jumping = True
            self.frame += 1
            if self.frame > 3 * ani:
                self.frame = 0
            self.image = pygame.transform.flip(self.images[self.frame // ani], True, False)
        # moving right
        if self.movex > 0:
            self.is_jumping = True
            self.frame += 1
            if self.frame > 3 * ani:
                self.frame = 0
            self.image = self.images[self.frame // ani]
        # collisions
        enemy_hit_list = pygame.sprite.spritecollide(self, enemy_list, False)
        if self.damage == 0:
            for enemy in enemy_hit_list:
                if not self.rect.contains(enemy):
                    self.damage = self.rect.colliderect(enemy)
        if self.damage == 1:
            idx = self.rect.collidelist(enemy_hit_list)
            if idx == -1:
                self.damage = 0   # set damage back to 0
                self.health -= 1  # subtract 1 hp
        ground_hit_list = pygame.sprite.spritecollide(self, ground_list, False)
        for g in ground_hit_list:
            self.movey = 0
            self.rect.bottom = g.rect.top
            self.is_jumping = False  # stop jumping
        # fall off the world
        if self.rect.y > worldy:
            self.health -=1
            print(self.health)
            self.rect.x = tx
            self.rect.y = ty
        plat_hit_list = pygame.sprite.spritecollide(self, plat_list, False)
        for p in plat_hit_list:
            self.is_jumping = False  # stop jumping
            self.movey = 0
            if self.rect.bottom <= p.rect.bottom:
               self.rect.bottom = p.rect.top
            else:
               self.movey += 3.2
        if self.is_jumping and self.is_falling is False:
            self.is_falling = True
            self.movey -= 33  # how high to jump
        loot_hit_list = pygame.sprite.spritecollide(self, loot_list, False)
        for loot in loot_hit_list:
            loot_list.remove(loot)
            self.score += 1
            print(self.score)
        plat_hit_list = pygame.sprite.spritecollide(self, plat_list, False)
        self.rect.x += self.movex
        self.rect.y += self.movey
class Enemy(pygame.sprite.Sprite):
    """
    Spawn an enemy
    """
    def __init__(self, x, y, img):
        pygame.sprite.Sprite.__init__(self)
        self.image = pygame.image.load(os.path.join('images', img))
        self.image.convert_alpha()
        self.image.set_colorkey(ALPHA)
        self.rect = self.image.get_rect()
        self.rect.x = x
        self.rect.y = y
        self.counter = 0
    def move(self):
        """
        enemy movement
        """
        distance = 80
        speed = 8
        if self.counter >= 0 and self.counter <= distance:
            self.rect.x += speed
        elif self.counter >= distance and self.counter <= distance * 2:
            self.rect.x -= speed
        else:
            self.counter = 0
        self.counter += 1
    def update(self, firepower, enemy_list):
        """
        detect firepower collision
        """
        fire_hit_list = pygame.sprite.spritecollide(self, firepower, False)
        for fire in fire_hit_list:
            enemy_list.remove(self)
class Level:
    def ground(lvl, gloc, tx, ty):
        ground_list = pygame.sprite.Group()
        i = 0
        if lvl == 1:
            while i < len(gloc):
                ground = Platform(gloc[i], worldy - ty, tx, ty, 'tile-ground.png')
                ground_list.add(ground)
                i = i + 1
        if lvl == 2:
            print("Level " + str(lvl))
        return ground_list
    def bad(lvl, eloc):
        if lvl == 1:
            enemy = Enemy(eloc[0], eloc[1], 'enemy.png')
            enemy_list = pygame.sprite.Group()
            enemy_list.add(enemy)
        if lvl == 2:
            print("Level " + str(lvl))
        return enemy_list
    # x location, y location, img width, img height, img file
    def platform(lvl, tx, ty):
        plat_list = pygame.sprite.Group()
        ploc = []
        i = 0
        if lvl == 1:
            ploc.append((200, worldy - ty - 128, 3))
            ploc.append((300, worldy - ty - 256, 3))
            ploc.append((550, worldy - ty - 128, 4))
            while i < len(ploc):
                j = 0
                while j <= ploc[i][2]:
                    plat = Platform((ploc[i][0] + (j * tx)), ploc[i][1], tx, ty, 'tile.png')
                    plat_list.add(plat)
                    j = j + 1
                print('run' + str(i) + str(ploc[i]))
                i = i + 1
        if lvl == 2:
            print("Level " + str(lvl))
        return plat_list
    def loot(lvl):
        if lvl == 1:
            loot_list = pygame.sprite.Group()
            loot = Platform(tx*5, ty*5, tx, ty, 'loot_1.png')
            loot_list.add(loot)
        if lvl == 2:
            print(lvl)
        return loot_list
'''
Setup
'''
backdrop = pygame.image.load(os.path.join('images', 'stage.png'))
clock = pygame.time.Clock()
pygame.init()
backdropbox = world.get_rect()
main = True
player = Player()  # spawn player
player.rect.x = 0  # go to x
player.rect.y = 30  # go to y
player_list = pygame.sprite.Group()
player_list.add(player)
steps = 10
fire = Throwable(player.rect.x, player.rect.y, 'fire.png', 0)
firepower = pygame.sprite.Group()
eloc = []
eloc = [300, worldy-ty-80]
enemy_list = Level.bad(1, eloc)
gloc = []
i = 0
while i <= (worldx / tx) + tx:
    gloc.append(i * tx)
    i = i + 1
ground_list = Level.ground(1, gloc, tx, ty)
plat_list = Level.platform(1, tx, ty)
enemy_list = Level.bad( 1, eloc )
loot_list = Level.loot(1)
'''
Main Loop
'''
while main:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.quit()
            try:
                sys.exit()
            finally:
                main = False
        if event.type == pygame.KEYDOWN:
            if event.key == ord('q'):
                pygame.quit()
                try:
                    sys.exit()
                finally:
                    main = False
            if event.key == pygame.K_LEFT or event.key == ord('a'):
                player.control(-steps, 0)
            if event.key == pygame.K_RIGHT or event.key == ord('d'):
                player.control(steps, 0)
            if event.key == pygame.K_UP or event.key == ord('w'):
                player.jump()
        if event.type == pygame.KEYUP:
            if event.key == pygame.K_LEFT or event.key == ord('a'):
                player.control(steps, 0)
                player.facing_right = False
            if event.key == pygame.K_RIGHT or event.key == ord('d'):
                player.control(-steps, 0)
                player.facing_right = True
            if event.key == pygame.K_SPACE:
                if not fire.firing:
                    fire = Throwable(player.rect.x, player.rect.y, 'fire.png', 1)
                    firepower.add(fire)
    # scroll the world forward
    if player.rect.x >= forwardx:
        scroll = player.rect.x - forwardx
        player.rect.x = forwardx
        for p in plat_list:
            p.rect.x -= scroll
        for e in enemy_list:
            e.rect.x -= scroll
        for l in loot_list:
            l.rect.x -= scroll
    # scroll the world backward
    if player.rect.x <= backwardx:
        scroll = backwardx - player.rect.x
        player.rect.x = backwardx
        for p in plat_list:
            p.rect.x += scroll
        for e in enemy_list:
            e.rect.x += scroll
        for l in loot_list:
            l.rect.x += scroll
    world.blit(backdrop, backdropbox)
    player.update()
    player.gravity()
    player_list.draw(world)
    if fire.firing:
        fire.update(worldy)
        firepower.draw(world)
    enemy_list.draw(world)
    enemy_list.update(firepower, enemy_list)
    loot_list.draw(world)
    ground_list.draw(world)
    plat_list.draw(world)
    for e in enemy_list:
        e.move()
    stats(player.score, player.health)
    pygame.display.flip()
    clock.tick(fps)
```
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/add-throwing-python-game
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/game_pawn_grid_linux.png?itok=4gERzRkg (Gaming on a grid with penguin pawns)
[2]: https://www.python.org/
[3]: https://www.pygame.org/news
[4]: https://opensource.com/article/17/10/python-101
[5]: https://opensource.com/article/17/12/game-framework-python
[6]: https://opensource.com/article/17/12/game-python-add-a-player
[7]: https://opensource.com/article/17/12/game-python-moving-player
[8]: https://opensource.com/article/18/5/pygame-enemy
[9]: https://opensource.com/article/18/7/put-platforms-python-game
[10]: https://opensource.com/article/19/11/simulate-gravity-python
[11]: https://opensource.com/article/19/12/jumping-python-platformer-game
[12]: https://opensource.com/article/19/12/python-platformer-game-run
[13]: https://opensource.com/article/19/12/loot-python-platformer-game
[14]: https://opensource.com/article/20/1/add-scorekeeping-your-python-game
[15]: https://opensource.com/sites/default/files/uploads/pygame-throw.jpg (Python platformer with throwing capability)
[16]: https://creativecommons.org/licenses/by-sa/4.0/

View File

@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: (HankChow)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@@ -1,93 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to rebase to Fedora 33 on Silverblue)
[#]: via: (https://fedoramagazine.org/how-to-rebase-to-fedora-33-on-silverblue/)
[#]: author: (Michal Konečný https://fedoramagazine.org/author/zlopez/)
How to rebase to Fedora 33 on Silverblue
======
![][1]
Silverblue is [an operating system for your desktop built on Fedora][2]. It's excellent for daily use, development, and container-based workflows. It offers [numerous advantages][3] such as being able to roll back in case of any problems. If you want to update to Fedora 33 on your Silverblue system, this article tells you how. It not only shows you what to do, but also how to revert things if something unforeseen happens.
Prior to actually doing the rebase to Fedora 33, you should apply any pending updates. Enter the following in the terminal:
```
$ rpm-ostree update
```
or install updates through GNOME Software and reboot.
### Rebasing using GNOME Software
GNOME Software shows you that there is a new version of Fedora available on the Updates screen.
![Fedora 33 is available][4]
The first thing you need to do is download the new image, so click on the _Download_ button. This will take some time, and after it's done you will see that the update is ready to install.
![Fedora 33 is ready for installation][5]
Click on the _Install_ button. This step will take only a few moments and then you will be prompted to restart your computer.
![Restart is needed to rebase to Fedora 33 Silverblue][6]
Click on the _Restart_ button and you are done. After the restart you will end up in the new and shiny release of Fedora 33. Easy, isn't it?
### Rebasing using terminal
If you prefer to do everything in a terminal, then this next guide is for you.
Rebasing to Fedora 33 using the terminal is easy. First, check if the 33 branch is available:
```
$ ostree remote refs fedora
```
You should see the following in the output:
```
fedora:fedora/33/x86_64/silverblue
```
Next, rebase your system to the Fedora 33 branch.
```
$ rpm-ostree rebase fedora:fedora/33/x86_64/silverblue
```
Finally, the last thing to do is restart your computer and boot to Fedora 33.
### How to roll back
If anything bad happens—for instance, if you can't boot to Fedora 33 at all—it's easy to go back. Pick the previous entry in the GRUB menu at boot, and your system will start in its previous state before switching to Fedora 33. To make this change permanent, use the following command:
```
$ rpm-ostree rollback
```
That's it. Now you know how to rebase Silverblue to Fedora 33 and roll back. So why not do it today?
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/how-to-rebase-to-fedora-33-on-silverblue/
作者:[Michal Konečný][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/zlopez/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/02/fedora-silverblue-logo.png
[2]: https://docs.fedoraproject.org/en-US/fedora-silverblue/
[3]: https://fedoramagazine.org/give-fedora-silverblue-a-test-drive/
[4]: https://fedoramagazine.org/wp-content/uploads/2020/10/Screenshot-from-2020-10-29-12-53-37-1024x725.png
[5]: https://fedoramagazine.org/wp-content/uploads/2020/10/Screenshot-from-2020-10-29-13-00-15-1024x722.png
[6]: https://fedoramagazine.org/wp-content/uploads/2020/10/Screenshot-from-2020-10-29-13-01-32-1024x727.png

View File

@@ -1,119 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Day 2: Rails associations & dragging divs around)
[#]: via: (https://jvns.ca/blog/2020/11/10/day-2--rails-associations---dragging-divs-around/)
[#]: author: (Julia Evans https://jvns.ca/)
Day 2: Rails associations & dragging divs around
======
Hello! Today was day 2 of building my toy project. Here are a few more notes on things that have been fun about Rails!
### the goal: make a refrigerator poetry forum
I wanted to make kind of a boring standard website to learn Rails, and that other people could interact with. Like a forum! But of course if people can actually type on a website that creates all kinds of problems (what if they're spammers? or just mean?).
The first idea I came up with that would let people interact with the website but not actually be able to type things into it was a refrigerator poetry forum where you can write poems given only a fixed set of words.
So, thats the plan!
My goal with this project is to find out if I want to use Rails for other small web projects (instead of what I usually do, which is use something more basic like Flask, or give up on having a backend at all and write everything in Javascript).
### how do you drag the words around? jQuery UI draggable!
I wanted people to be able to drag the words around, but I didn't feel like writing a lot of Javascript. It turns out that this is SUPER easy: there's a jQuery library to do it called “draggable”!
At first the dragging didn't work on mobile, but there's a hack to make jQuery UI work on mobile called [jQuery UI touch punch][1]. Here's what it looks like (you can view source if you're interested in seeing how it works, there's very little code).
### a fun Rails feature: “associations”
I've never used a relational ORM before, and one thing I was excited about with Rails was to see what using Active Record is like! Today I learned about one of Rails' ORM features: [associations][2]. Here's what that's about if, like me, you know absolutely nothing about ORMs.
In my forum, I have:
* users
* topics (I was going to call this “threads” but apparently that's a reserved word in Rails so they're called “topics” for now)
* posts
When displaying a post, I need to show the username of the user who created the post. So I thought I might need to write some code to load the posts and load the user for each post like this: (in Rails, `Post.where` and `User.find` will run SQL statements and turn the results into Ruby objects)
```
@posts = Post.where(topic_id: id)
@posts.each do |post|
user = User.find(post.user_id)
post.user = user
end
```
This is no good though: it's doing a separate SQL query for every post! I knew there was a better way, and I found out that it's called [Associations][2]. That link is to the guide from <https://guides.rubyonrails.org>, which has treated me well so far.
Basically all I needed to do was:
1. Add a `has_many :posts` line to the User model
2. Add a `belongs_to :user` line to the Post model
3. Rails now knows how to join these two tables even though I didn't tell it what columns to join on! I think this is because I named the `user_id` column in the `posts` table according to the convention it expects.
4. Do the exact same thing for `User` and `Topic` (a topic also `has_many :posts`)
And then my code to load every post along with its associated user becomes just one line! Here's the line:
```
@posts = @topic.posts.order(created_at: :asc).preload(:user)
```
More importantly than it being just one line, instead of doing a separate query to get the user for each post, it gets all the users in 1 query. Apparently there are a bunch of [different ways][3] to do similar things in Rails (preload, eager load, joins, and includes?). I don't know what all those are yet but maybe I'll learn that later.
### a fun Rails feature: scaffolding!
Rails has this command line tool called `rails` and it does a lot of code generation. For example, I wanted to add a Topic model / controller. Instead of having to go figure out where to add all the code, I could just run:
```
rails generate scaffold Topic title:text
```
and it generated a bunch of code, so that I already had basic endpoints to create / edit / delete Topics. For example, here's my [topic controller right now][4], most of which I did not write (I only wrote the highlighted 3 lines). I'll probably delete a lot of it, but it feels kinda nice to have a starting point where I can expand on the parts I want and delete the parts I don't want.
### database migrations!
The `rails` tool can also generate database migrations! For example, I decided I wanted to remove the `title` field from posts.
Here's what I had to do:
```
rails generate migration RemoveTitleFromPosts title:string
rails db:migrate
```
That's it: just run a couple of command line incantations! I ran a few of these migrations as I changed my mind about what I wanted my database schema to be, and it's been pretty straightforward so far; it feels pretty magical.
It got a tiny bit more interesting when I tried to add a `not null` constraint to a column where some of the fields in that column were null: the migration failed. But I could just fix the offending records and easily rerun the migration.
### that's all for today!
tomorrow maybe I'll put it on the internet if I make more progress.
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2020/11/10/day-2--rails-associations---dragging-divs-around/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://github.com/furf/jquery-ui-touch-punch
[2]: https://guides.rubyonrails.org/association_basics.html
[3]: https://blog.bigbinary.com/2013/07/01/preload-vs-eager-load-vs-joins-vs-includes.html
[4]: https://github.com/jvns/refrigerator-forum/blob/776b3227cfd7004cb1efb00ec7e3f82a511cbdc4/app/controllers/topics_controller.rb#L13-L15

View File

@ -1,258 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to use Serializers in the Django Python web framework)
[#]: via: (https://opensource.com/article/20/11/django-rest-framework-serializers)
[#]: author: (Renato Oliveira https://opensource.com/users/renato-oliveira)
How to use Serializers in the Django Python web framework
======
Serialization transforms data into a format that can be stored or
transmitted and then reconstructs it for use. DRF has the best-known
serializers.
![Net catching 1s and 0s or data in the clouds][1]
Serialization is the process of transforming data into a format that can be stored or transmitted and then reconstructing it. It's used all the time when developing applications or storing data in databases, in memory, or converting it into files.
I recently helped two junior developers at [Labcodes][2] understand serializers, and I thought it would be good to share my approach with Opensource.com readers.
Suppose you're creating software for an e-commerce site and you have an Order that registers the purchase of a single product, by someone, at a price, on a date:
```
class Order:
    def __init__(self, product, customer, price, date):
        self.product = product
        self.customer = customer
        self.price = price
        self.date = date
```
Now, imagine you want to store and retrieve order data from a key-value database. Luckily, its interface accepts and returns dictionaries, so you need to convert your object into a dictionary:
```
def serialize_order(order):
    return {
        'product': order.product,
        'customer': order.customer,
        'price': order.price,
        'date': order.date
    }
```
And if you want to get some data from that database, you can get the dict data and turn that into your Order object:
```
def deserialize_order(order_data):
    return Order(
        product=order_data['product'],
        customer=order_data['customer'],
        price=order_data['price'],
        date=order_data['date'],
    )
```
This is pretty straightforward to do with simple data, but when you need to deal with complex objects made of complex attributes, this approach doesn't scale well. You also need to handle the validation of different types of fields, and that's a lot of work to do manually.
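For instance, even for this small `Order` class, a hand-rolled version that validates just two typed fields already looks something like this (a rough sketch, not tied to any framework):
```
from datetime import date
from decimal import Decimal, InvalidOperation

def deserialize_order(order_data):
    # every typed field needs its own hand-written check and conversion
    try:
        price = Decimal(order_data['price'])
    except (KeyError, InvalidOperation):
        raise ValueError("'price' must be a decimal number")
    try:
        order_date = date.fromisoformat(order_data['date'])
    except (KeyError, ValueError):
        raise ValueError("'date' must be an ISO-formatted date string")
    return Order(
        product=order_data['product'],
        customer=order_data['customer'],
        price=price,
        date=order_date,
    )
```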
That's where frameworks' serializers are handy. They allow you to create serializers with little boilerplate that will work for your complex cases.
[Django][3] comes with a serialization module that allows you to "translate" Models into other formats:
```
from django.core import serializers
serializers.serialize('json', Order.objects.all())
```
It covers the most-used cases for web applications, such as JSON, YAML, and XML. But you can also use third-party serializers or create your own. You just need to register it in your settings.py file:
```
# settings.py
SERIALIZATION_MODULES = {
    'my_format': appname.serializers.MyFormatSerializer,
}
```
To create your own `MyFormatSerializer`, you need to implement the `.serialize()` method and accept a queryset and extra options as params:
```
class MyFormatSerializer:
    def serialize(self, queryset, **options):
        # serious serialization happening
```
Now you can serialize your queryset to your new format:
```
from django.core import serializers
serializers.serialize('my_format', Order.objects.all())
```
You can use the options parameters to define the behavior of your serializer. For example, if you want to define that you're going to work with nested serialization when dealing with `ForeignKeys` or you just want that data to return its primary keys, you can pass a `flat=True` param as an option and deal with that within your method:
```
class MyFormatSerializer:
    def serialize(self, queryset, **options):
        if options.get('flat', False):
            ...  # don't recursively serialize relationships
        else:
            ...  # recursively serialize relationships
```
One way to use Django serialization is with the `loaddata` and `dumpdata` management commands.
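The serialization module can also go the other way. Here is a small sketch (assuming `data` holds a JSON string produced by `serializers.serialize('json', ...)` above) of turning that data back into saved model instances:
```
from django.core import serializers

# 'data' is assumed to be a JSON string created earlier with serializers.serialize('json', ...)
for deserialized in serializers.deserialize('json', data):
    # each item wraps a model instance (plus any m2m data); save() persists it
    deserialized.save()
```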
### DRF serializers
In the Django community, the [Django REST framework][4] (DRF) offers the best-known serializers. Although you can use Django's serializers to build the JSON you'll respond to in your API, the one from the REST framework comes with nice features that help you deal with and easily validate complex data.
In the Order example, you could create a serializer like this:
```
from rest_framework import serializers
class OrderSerializer(serializers.Serializer):
    product = serializers.CharField(max_length=255)
    customer = serializers.CharField(max_length=255)
    price = serializers.DecimalField(max_digits=5, decimal_places=2)
    date = serializers.DateField()
```
And easily serialize its data:
```
order = Order('pen', 'renato', 10.50, date.today())
serializer = OrderSerializer(order)
serializer.data
# {'product': 'pen', 'customer': 'renato', 'price': '10.50', 'date': '2020-08-16'}
```
To be able to return an instance from data, you need to implement two methods—create and update:
```
from rest_framework import serializers
class OrderSerializer(serializers.Serializer):
    product = serializers.CharField(max_length=255)
    customer = serializers.CharField(max_length=255)
    price = serializers.DecimalField(max_digits=5, decimal_places=2)
    date = serializers.DateField()
    def create(self, validated_data):
        # perform order creation
        return order
    def update(self, instance, validated_data):
       # perform instance update
       return instance
```
After that, you can create or update instances by calling `is_valid()` to validate the data and `save()` to create or update an instance:
```
serializer = OrderSerializer(data=data)
## to validate data, mandatory before calling save
serializer.is_valid()
serializer.save()
```
### Model serializers
When serializing data, you often need to do it from a database and, therefore, from your models. A ModelSerializer, like a ModelForm, provides an API to create serializers from your models. Suppose you have an Order model:
```
from django.db import models
class Order(models.Model):
    product = models.CharField(max_length=255)
    customer = models.CharField(max_length=255)
    price = models.DecimalField(max_digits=5, decimal_places=2)
    date = models.DateField()    
```
You can create a serializer for it like this:
```
from rest_framework import serializers
class OrderSerializer(serializers.ModelSerializer):
    class Meta:
        model = Order
        fields = '__all__'
```
The `ModelSerializer` automatically includes all model fields in the serializer and provides default `create` and `update` implementations.
### Using serializers in class-based views (CBVs)
Like Forms with Django's CBVs, serializers integrate well with DRF's CBVs. You can set the `serializer_class` attribute so that the serializer will be available to the view:
```
from rest_framework import generics
class OrderListCreateAPIView(generics.ListCreateAPIView):
    queryset = Order.objects.all()
    serializer_class = OrderSerializer
```
You can also define the `get_serializer_class()` method:
```
from rest_framework import generics
class OrderListCreateAPIView(generics.ListCreateAPIView):
    queryset = Order.objects.all()
   
    def get_serializer_class(self):
        if is_free_order():
            return FreeOrderSerializer
        return OrderSerializer
```
There are other methods in CBVs that interact with serializers. For example, [get_serializer()][5] returns an already-instantiated serializer, while [get_serializer_context()][6] returns the arguments you'll pass to the serializer when creating its instance. For views that create or update data, there are `create` and `update` methods that validate the data with `is_valid` before saving, and [perform_create][7] and [perform_update][8], which call the serializer's `save` method.
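As a small illustration of that last point, here is a sketch of overriding `perform_create()` on the `OrderListCreateAPIView` from above (passing the requesting user's name as the `customer` field is just an invented example):
```
from rest_framework import generics

class OrderListCreateAPIView(generics.ListCreateAPIView):
    queryset = Order.objects.all()
    serializer_class = OrderSerializer

    def perform_create(self, serializer):
        # runs after is_valid(); extra keyword arguments are merged into the
        # validated data before the serializer's save() creates the instance
        serializer.save(customer=self.request.user.username)
```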
### Learn more
For other resources, see my friend André Ericson's [Classy Django REST Framework][9] website. It is a [Classy Class-Based Views][10] REST Framework version that gives you an in-depth inspection of the classes that compose DRF. Of course, the official [documentation][11] is an awesome resource.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/11/django-rest-framework-serializers
作者:[Renato Oliveira][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/renato-oliveira
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_analytics_cloud.png?itok=eE4uIoaB (Net catching 1s and 0s or data in the clouds)
[2]: http://www.labcodes.com.br
[3]: https://www.djangoproject.com/
[4]: https://www.django-rest-framework.org/
[5]: http://www.cdrf.co/3.9/rest_framework.generics/CreateAPIView.html#get_serializer
[6]: http://www.cdrf.co/3.9/rest_framework.generics/CreateAPIView.html#get_serializer_context
[7]: http://www.cdrf.co/3.9/rest_framework.generics/CreateAPIView.html#perform_create
[8]: http://www.cdrf.co/3.9/rest_framework.generics/RetrieveUpdateAPIView.html#perform_update
[9]: http://www.cdrf.co/
[10]: https://ccbv.co.uk/
[11]: https://www.django-rest-framework.org/api-guide/serializers/#serializers

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (robsean)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (Mjseven)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,247 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Get started with Fossil, an alternative to Git)
[#]: via: (https://opensource.com/article/20/11/fossil)
[#]: author: (Klaatu https://opensource.com/users/klaatu)
Get started with Fossil, an alternative to Git
======
Fossil is an all-in-one version control system, bug tracker, wiki,
forum, and documentation solution.
![Dinosaurs on land at sunset][1]
As any programmer knows, there are many reasons it's vital to keep track of code changes. Sometimes you just want a history of how your project started and evolved, as a matter of curiosity or education. Other times, you want to enable other coders to contribute to your project, and you need a reliable way to merge disparate parts. And more critically, sometimes an adjustment you make to fix one problem breaks something else that was working.
The [Fossil][2] source code management system is an all-in-one version control system, bug tracker, wiki, forum, and documentation solution from the creator of the famous [SQLite][3] database.
### Install Fossil
Fossil is a single, self-contained C program, so you can probably just [download Fossil][4] from its website and place it anywhere in your system [PATH][5]. For example, assuming `/usr/local/bin` is in your path, as it usually is by default:
```
$ wget https://fossil-scm.org/home/uv/fossil-linux-x64-X.Y.tar.gz
$ sudo tar xvf fossil-linux-x64-X.Y.tar.gz \
  --directory /usr/local/bin
```
You might also find Fossil in your software repository through your package manager, or you can compile it from source code.
### Create a Fossil repository
If you have a coding project you want to track with Fossil, the first step is to create a Fossil repository:
```
$ fossil init myproject.fossil
project-id: 010836ac6112fefb0b015702152d447c8c1d8604
server-id:  54d837e9dc938ba1caa56d31b99c35a4c9627f44
admin-user: klaatu (initial password is "14b605")
```
Creating a Fossil repo returns three items: a unique project ID, a unique server ID, and an admin ID and password. The project and server IDs are version numbers. The admin credentials establish your ownership of this repository and can be used if you decide to run Fossil as a server for other users to access.
### Work in a Fossil repository
To start working in a Fossil repo, you must create a working location for its data. You might think of this process as creating a virtual environment in Python or unzipping a ZIP file that you intend to zip back up again later.
Create a working directory and change into it:
```
$ mkdir myprojectdir
$ cd myprojectdir
```
Open your Fossil repository into the directory you created:
```
$ fossil open ../myproject.fossil
project-name: <unnamed>
repository:   /home/klaatu/myprojectdir/../myproject.fossil
local-root:   /home/klaatu/myprojectdir/
config-db:    /home/klaatu/.fossil
project-code: 010836ac6112fefb0b015702152d447c8c1d8604
checkout:     9e6cd96dd675544c58a246520ad58cdd460d1559 2020-11-09 04:09:35 UTC
tags:         trunk
comment:      initial empty check-in (user: klaatu)
check-ins:    1
```
You might notice that Fossil created a hidden file called `.fossil` in your home directory to track your global Fossil preferences. This is not specific to your project; it's just an artifact of the first time you use Fossil.
#### Add files
To add files to your repository, use the `add` and `commit` subcommands. For example, create a simple README file and add it to the repository:
```
$ echo "My first Fossil project" > README
$ fossil add README
ADDED  README
$ fossil commit -m 'My first commit'
New_Version: 2472a43acd11c93d08314e852dedfc6a476403695e44f47061607e4e90ad01aa
```
#### Use branches
By default, a Fossil repository starts with a main branch called `trunk`. You can branch off the trunk when you want to work on code without affecting your primary codebase. Creating a new branch requires the `branch` subcommand, along with a new branch name and the branch you want to use as the basis for your new one. In this example, the only branch is `trunk`, so try creating a new branch called `dev`:
```
$ fossil branch --help
Usage: fossil branch new BRANCH-NAME BASIS ?OPTIONS?
$ fossil branch new dev trunk
New branch: cb90e9c6f23a9c98e0c3656d7e18d320fa52e666700b12b5ebbc4674a0703695
```
You've created a new branch, but your current branch is still `trunk`:
```
$ fossil branch current
trunk
```
To make your new `dev` branch active, use the `checkout` command:
```
$ fossil checkout dev
dev
```
#### Merge changes
Suppose you add an exciting new file to your `dev` branch, and having tested it, you're satisfied that it's ready to take its place in `trunk`. This is called a _merge_.
First, change your branch back to your target branch (in this example, that's `trunk`): 
```
$ fossil checkout trunk
trunk
$ ls
README
```
Your new file (or any changes you made to an existing file) doesn't exist there yet, but that's what performing the merge will take care of:
```
$ fossil merge dev
 "fossil undo" is available to undo changes to the working checkout.
$ ls
myfile.lua  README
```
### View the Fossil timeline
To see the history of your repository, use the `timeline` option. This shows a detailed list of all activity in your repository, including a hash representing the change, the commit message you provided when committing code, and who made the change:
```
$ fossil timeline
=== 2020-11-09 ===
06:24:16 [5ef06e668b] added exciting new file (user: klaatu tags: dev)
06:11:19 [cb90e9c6f2] Create new branch named "dev" (user: klaatu tags: dev)
06:08:09 [a2bb73e4a3] *CURRENT* some additions were made (user: klaatu tags: trunk)
06:00:47 [2472a43acd] This is my first commit. (user: klaatu tags: trunk)
04:09:35 [9e6cd96dd6] initial empty check-in (user: klaatu tags: trunk)
+++ no more data (5) +++
```
![Fossil UI][6]
(Klaatu, [CC BY-SA 4.0][7])
### Make your Fossil repository public
Because Fossil features a built-in web interface, Fossil doesn't need a hosting service the way GitLab or Gitea does. Fossil is its own hosting service, as long as you have a server to put it on. Before making your Fossil project public, though, you must configure some attributes through the web user interface (UI).
Launch a local instance with the `ui` subcommand:
```
$ pwd
/home/klaatu/myprojectdir/
$ fossil ui
```
Specifically, look at **Users** and **Settings** for security, and **Configuration** for project properties (including a proper title). The web interface isn't just a convenience function. It's intended for actual use and is indeed used as the host for the Fossil project. There are several surprisingly advanced options, from user management (or self-management, if you please) to single-sign-on (SSO) with other Fossil repositories on the same server.
Once you're satisfied with your changes, close the web interface and press **Ctrl+C** to stop the UI engine from running. Commit your web changes just as you would any other update:
```
$ fossil commit -m 'web ui updates'
New_Version: 11fe7f2855a3246c303df00ec725d0fca526fa0b83fa67c95db92283e8273c60
```
Now you're ready to set up your Fossil server.
1. Copy your Fossil repository (in this example, `myproject.fossil`) to your server. You only need the single file.
2. Install Fossil on your server, if it's not already installed. The process for installing Fossil to your server is the same as it was for your local computer.
3. In your `cgi-bin` directory (or the equivalent of that directory, depending upon which HTTP daemon you're running), create a file called `repo_myproject.cgi`:
```
#!/usr/local/bin/fossil
repository: /home/klaatu/public_html/myproject.fossil
```
Make the script executable:
```
$ chmod +x repo_myproject.cgi
```
That's all there is to it. Your project is now live on the internet.
You can visit the web UI by navigating to your CGI script, such as `https://example.com/cgi-bin/repo_myproject.cgi`.
Or you can interact with your repository from a terminal through the same URL:
```
$ fossil clone https://klaatu@example.com/cgi-bin/repo_myproject.cgi
```
Working with a local clone requires you to use the `push` subcommand to send local changes back to your remote repository and the `pull` subcommand to get remotely made changes into your local copy:
```
$ fossil push https://klaatu@example.com/cgi-bin/repo_myproject.cgi
```
### Use Fossil for independent hosting
Fossil places a lot of power into your hands (and the hands of your collaborators) and makes you independent of hosting services. This article only hints at the basics. There's so much more to Fossil that can help you and your teams in your code projects. Give Fossil a try. It won't just change the way you think about version control; it'll help you stop thinking about version control altogether.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/11/fossil
作者:[Klaatu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/klaatu
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_dinosaur_sunset.png?itok=lbzpbW5p (Dinosaurs on land at sunset)
[2]: https://fossil-scm.org/home/doc/trunk/www/index.wiki
[3]: https://www.sqlite.org/index.html
[4]: https://fossil-scm.org/home/uv/download.html
[5]: https://opensource.com/article/17/6/set-path-linux
[6]: https://opensource.com/sites/default/files/uploads/fossil-ui.jpg (Fossil UI)
[7]: https://creativecommons.org/licenses/by-sa/4.0/

View File

@ -1,102 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Keep track of multiple Git remote repositories)
[#]: via: (https://opensource.com/article/20/11/multiple-git-repositories)
[#]: author: (Peter Portante https://opensource.com/users/portante)
Keep track of multiple Git remote repositories
======
Having consistent naming standards is key to keeping local and upstream
Git repos straight.
![Digital hand surrounding by objects, bike, light bulb, graphs][1]
Working with remote repositories gets confusing when the names of the remote repositories in your local Git repo are inconsistent.
One approach to solving this issue is to standardize the use and meaning of two words: `origin`, referring to your personal `example.com/<USER>/*` repos, and `upstream`, referring to the `example.com` repo from which you forked the `origin` repo. In other words, `upstream` refers to the upstream repo where work is publicly submitted, while `origin` refers to your local fork of the upstream repo from which you generate pull requests (PRs), for example.
Using the [pbench][2] repo as an example, here is a step-by-step approach to set up a new local clone with `origin` and `upstream` defined consistently.
1. On most Git hosting services, you must fork a project when you want to work on it. When you run your own Git server, that's not necessary, but for a codebase that's open to the public, it's an easy way to transfer diffs among contributors.
Create a fork of a Git repository. For this example, assume your fork is located at `example.com/<USER>/pbench`.
2. Next, you must obtain a Uniform Resource Identifier ([URI][3]) for cloning over SSH. On most Git hosting services, such as GitLab or GitHub, it's in a button or panel labeled **Clone** or **Clone over SSH**. Copy the clone URI to your clipboard.
3. On your development system, clone the repo using the text you copied:
```
$ git clone git@example.com:<USER>/pbench.git
```
This clones the Git repository with the default name `origin` for your forked copy of the pbench repo.
4. Change directory to the repo you just cloned:
```
$ cd ~/pbench
```
5. Next, obtain the SSH URI of the source repo (the one you originally forked). This is probably done the same way as above: Find the **Clone** button or panel and copy the clone address. In software development, this is typically referred to as "upstream" because (in theory) this is where most commits happen, and you intend to let those commits flow downstream into your copy of the repository.
6. Add the URI to your local copy of the repository. Yes, there will be _two different_ remotes assigned to your local copy of the repository:
```
$ git remote add upstream git@example.com:bigproject/pbench.git
```
7. You now have two named remote repos: `origin` and `upstream`. You can see your remote repos with the remote subcommand:
```
$ git remote -v
```
Right now, your local `master` branch is tracking the `origin` master, which is not necessarily what you want. You probably want to track the `upstream` version of this branch because upstream is where most development takes place. The idea is that you are adding your changes on top of whatever you get from upstream.
8. Change your local master branch to track `upstream/master`:
```
$ git fetch upstream
$ git branch --set-upstream-to=upstream/master master
```
You can do this for any branch you want, not just `master`. For instance, some projects use a `dev` branch for all unstable changes, reserving `master` for code approved for release.
9. Once you've set your tracking branches, be sure to `rebase` your master to bring it up to date to any new changes made to the upstream repo:
```
$ git remote update
$ git checkout master
$ git rebase
```
This is a great way to keep your Git repositories synchronized between forks. If you want to automate this, read Seth Kenlon's article on [cohosting Git repositories with Ansible][4] to learn how.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/11/multiple-git-repositories
作者:[Peter Portante][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/portante
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolseriesk12_rh_021x_0.png?itok=fvorN0e- (Digital hand surrounding by objects, bike, light bulb, graphs)
[2]: https://github.com/distributed-system-analysis/pbench
[3]: https://en.wikipedia.org/wiki/Uniform_Resource_Identifier
[4]: https://opensource.com/article/19/11/how-host-github-gitlab-ansible

View File

@ -0,0 +1,146 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to choose a wireless protocol for home automation)
[#]: via: (https://opensource.com/article/20/11/wireless-protocol-home-automation)
[#]: author: (Steve Ovens https://opensource.com/users/stratusss)
How to choose a wireless protocol for home automation
======
Which of the three dominant wireless protocols used in home
automation—WiFi, Z-Wave, and Zigbee—is right for you? Consider the
options in part three of this series.
![Digital images of a computer desktop][1]
In the second article in this series, I talked about [local control vs. cloud connectivity][2] and some things to consider for your home automation setup.
In this third article, I will discuss the underlying technology for connecting devices to [Home Assistant][3], including the dominant protocols that smart devices use to communicate and some things to think about before purchasing smart devices.
### Connecting devices to Home Assistant
Many different devices work with Home Assistant. Some connect through a cloud service, and others work by communicating with a central unit, such as a [SmartThings Hub][4], that Home Assistant communicates with. And still others have a facility to communicate over your local network.
For a device to be truly useful, one of its key features must be wireless connectivity. There are currently three dominant wireless protocols that smart devices use: WiFi, Z-Wave, and Zigbee. I'll do a quick breakdown of each including their pros and cons.
**A note about wireless spectra:** Spectra are measured in hertz (Hz). A gigahertz (GHz) is 1 billion Hz. In general, the larger the number of Hz, the more data can be transmitted and the faster the connection. However, higher frequencies are more susceptible to interference and do not travel very well through solid objects. Lower frequencies can travel further and pass through solid objects more readily, but the trade-off is they cannot send much data.
### WiFi
[WiFi][5] is the most widely known of the three standards. These devices are the easiest to get up and running if you are starting from scratch. This is because almost everyone interested in home automation already has a WiFi router or an access point. In fact, in most countries in the western world, WiFi is considered almost on the same level as running water; if you go to a hotel, you expect a clean, temperature-controlled room with a WiFi password provided at check-in.
Therefore, Internet of Things (IoT) devices that use the WiFi protocol require no additional hardware to get started. Plug in the new device, launch a vendor-provided application or a web browser, enter your credentials, and you're done.
It's important to note that almost all moderate- to low-priced IoT devices use the 2.4GHz wireless spectrum. Why does this matter? Well, 2.4GHz has been around so long that virtually all devices—from cordless phones to smart bulbs—use this spectrum. In most countries, there are generally only about a dozen channels that off-the-shelf devices can broadcast and receive on. Like overloading a cell tower when too many users attempt to make phone calls during an emergency, channels can become overcrowded and susceptible to outside interference.
While well-behaving smart devices use little-to-no bandwidth, if they struggle to send/receive messages due to overcrowding on the spectrum, your automation will have mixed results. A WiFi access point can only communicate with one client at a time. That means the more devices you have on WiFi, the greater the chance that someone on the network will have to wait their turn to communicate.
**Pros:**
* Ubiquitous
* Tend to be inexpensive
* Easy to set up
* Easy to extend the range
* Uses existing network
* Requires no hub
**Cons:**
* Can suffer from interference from neighboring devices or adjacent networks
* Uses the most populated 2.4GHz spectrum
* Your router limits the number of devices
* Uses more power, which means less or no battery-powered devices
* Has the potential to impact latency-sensitive activities like gaming over WiFi
* Most off-the-shelf products require an internet connection
### Z-Wave
[Z-Wave][6] is a closed wireless protocol controlled and maintained by a company named Zensys. Because it is controlled by a single entity, all devices are guaranteed to work together. There is one standard and one implementation. This means that you never have to worry about which device you buy from which manufacturer; they will always work.
Z-Wave operates in the 0.9GHz spectrum, which means it has the largest range of the popular protocols. A central hub is required to coordinate all the devices on a Z-Wave ecosystem. Z-Wave operates on a [mesh network][7] topology, which means that every device acts as a potential repeater for other devices. In theory, this allows a much greater coverage area. Z-Wave limits the number of "hops" to 4. That means that, in order for a signal to get from a device to a hub, it can only travel through four devices. This could be a positive or a negative, depending on your perspective. 
On the one hand, it reduces the ecosystem's maximum latency by preventing packets from traveling through a significant number of devices before reaching the destination. The more devices a signal must go through, the longer it can take for devices to become responsive.
On the other hand, it means that you need to be more strategic about providing a good path from your network's extremities back to the hub. Remember, the lower frequency that enables greater distance also limits the speed and amount of data that can be transferred. This is currently not an issue, but no one knows what size messages future smart devices will want to send.
**Pros:**
* Z-Wave compatibility guaranteed
* Form mesh network 
* Low powered and can be battery powered
* Mesh networks become more reliable with more devices
* Uses 0.9GHz and can transmit up to 100 meters
* Least likely of the three to have signal interference from solid objects or external sources
**Cons:**
* Closed protocol
* Costs the most
* Maximum of four hops in the mesh
* Can support up to 230 devices per network
* Uses 0.9GHz, which is the slowest of all protocols
### Zigbee
Unlike Z-Wave, [Zigbee][8] is an open standard. This can be a pro or a con, depending on your perspective. Because it is an open standard, manufacturers are free to alter the implementation to suit their products. To borrow an analogy from one of my favorite YouTube channels, [The Hook Up][9], Zigbee is like going through a restaurant drive-through. Having the same standard means you will always be able to speak to the restaurant and they will be able to hear you. However, if you speak a different language than the drive-through employee, you won't be able to understand each other. Both of you can speak and hear each other, but the meaning will be lost.
Similarly, the Zigbee standard allows all devices on a Zigbee network to "hear" each other, but different implementations mean they may not "understand" each other. Fortunately, more often than not, your Zigbee devices should be able to interoperate. However, there is a non-trivial chance that your devices will not be able to understand each other. When this happens, you may end up with multiple networks that could interfere with each other.
Like Z-Wave, Zigbee employs a mesh network topology but has no limit to the number of "hops" devices can use to communicate with the hub. This, combined with some tweaks to the standard, means that Zigbee theoretically can support more than 65,000 devices on a single network.
**Pros:**
* Open standard
* Form mesh network
* Low-powered and can be battery powered
* Can support over 65,000 devices
* Can communicate faster than Z-Wave
**Cons:**
* No guaranteed compatibility
* Can form separate mesh networks that interfere with each other
* Uses the oversaturated 2.4GHz spectrum
* Transmits only 10 to 30 meters
### Pick your protocol
Perhaps you already have some smart devices. Or maybe you are just starting to investigate your options. There is a lot to consider when you're buying devices. Rather than focusing on the lights, sensors, smart plugs, thermometers, and the like, it's perhaps more important to know which protocol (WiFi, Z-Wave, or Zigbee) you want to use.
Whew! I am finally done laying home automation groundwork. In the next article, I will show you how to start the initial installation and configuration of a Home Assistant virtual machine.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/11/wireless-protocol-home-automation
作者:[Steve Ovens][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/stratusss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_browser_web_desktop.png?itok=Bw8ykZMA (Digital images of a computer desktop)
[2]: https://opensource.com/article/20/11/cloud-vs-local-home-automation
[3]: https://opensource.com/article/20/11/home-assistant
[4]: https://www.smartthings.com/
[5]: https://en.wikipedia.org/wiki/Wi-Fi
[6]: https://www.z-wave.com/
[7]: https://en.wikipedia.org/wiki/Mesh_networking
[8]: https://zigbeealliance.org/
[9]: https://www.youtube.com/channel/UC2gyzKcHbYfqoXA5xbyGXtQ

View File

@ -0,0 +1,182 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Getting started with Stratis up and running)
[#]: via: (https://fedoramagazine.org/getting-started-with-stratis-up-and-running/)
[#]: author: (Gordon Keegan https://fedoramagazine.org/author/gmkeegan/)
Getting started with Stratis up and running
======
![][1]
[Photo][2] by [Jeremy Lapak][3] on [Unsplash][4]
When adding storage to a Linux server, system administrators often use commands like _pvcreate_, _vgcreate_, _lvcreate_, and _mkfs_ to integrate the new storage into the system. [Stratis][5] is a command-line tool designed to make managing storage much simpler. It creates, modifies, and destroys pools of storage. It also allocates and deallocates filesystems from the storage pools.
Instead of an entirely in-kernel approach like ZFS or Btrfs, Stratis uses a hybrid approach with components in both user space and kernel land. It builds on existing block device managers like device mapper and existing filesystems like XFS. Monitoring and control is performed by a user space daemon.
Stratis tries to avoid some ZFS characteristics like restrictions on adding new hard drives or replacing existing drives with bigger ones. One of its main design goals is to achieve a positive command-line experience.
### Install Stratis
Begin by installing the required packages. Several Python-related dependencies will be automatically pulled in. The _stratisd_ package provides the _stratisd_ daemon which creates, manages, and monitors local storage pools. The _stratis-cli_ package provides the _stratis_ command along with several Python libraries.
```
# yum install -y stratisd stratis-cli
```
Next, enable the _stratisd_ service.
```
# systemctl enable --now stratisd
```
Note that the “enable now” syntax shown above both permanently enables and immediately starts the service.
After determining what disks/block devices are present and available, the three basic steps to using Stratis are:
1. Create a pool of the desired disks.
2. Create a filesystem in the pool.
3. Mount the filesystem.
In the following example, four virtual disks are available in a virtual machine. Be sure not to use the root/system disk (/dev/vda in this example)!
```
# sfdisk -s
/dev/vda: 31457280
/dev/vdb: 5242880
/dev/vdc: 5242880
/dev/vdd: 5242880
/dev/vde: 5242880
total: 52428800 blocks
```
### Create a storage pool using Stratis
```
# stratis pool create testpool /dev/vdb /dev/vdc
# stratis pool list
Name Total Physical Size Total Physical Used
testpool 10 GiB 56 MiB
```
After creating the pool, check the status of its block devices:
```
# stratis blockdev list
Pool Name Device Node Physical Size State Tier
testpool /dev/vdb 5 GiB In-use Data
testpool /dev/vdc 5 GiB In-use Data
```
### Create a filesystem using Stratis
Next, create a filesystem. As mentioned earlier, Stratis uses the existing DM (device mapper) and XFS filesystem technologies to create thinly-provisioned filesystems. By building on these existing technologies, large filesystems can be created and it is possible to add physical storage as storage needs grow.
```
# stratis fs create testpool testfs
# stratis fs list
Pool Name Name Used Created Device UUID
testpool testfs 546 MiB Apr 18 2020 09:15 /stratis/testpool/testfs 095fb4891a5743d0a589217071ff71dc
```
Note that “fs” in the example above can optionally be written out as “filesystem”.
### Mount the filesystem
Next, create a mount point and mount the filesystem.
```
# mkdir /testdir
# mount /stratis/testpool/testfs /testdir
# df -h | egrep 'stratis|Filesystem'
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/stratis-1-3e8e[truncated]71dc 1.0T 7.2G 1017G 1% /testdir
```
The actual space used by a filesystem is shown using the _stratis fs list_ command demonstrated previously. Notice how the _testdir_ filesystem has a virtual size of **1.0T**. If the data in a filesystem approaches its virtual size, and there is available space in the storage pool, Stratis will automatically grow the filesystem. Note that beginning with Fedora 34, the form of device path will be _/dev/stratis/&lt;pool-name&gt;/&lt;filesystem-name&gt;_.
### Add the filesystem to fstab
To configure automatic mounting of the filesystem at boot time, run following commands:
```
# UUID=`lsblk -n -o uuid /stratis/testpool/testfs`
# echo "UUID=${UUID} /testdir xfs defaults 0 0" >> /etc/fstab
```
After updating fstab, verify that the entry is correct by unmounting and mounting the filesystem:
```
# umount /testdir
# mount /testdir
# df -h | egrep 'stratis|Filesystem'
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/stratis-1-3e8e[truncated]71dc 1.0T 7.2G 1017G 1% /testdir
```
### Adding cache devices with Stratis
Suppose _/dev/vdd_ is an available SSD (solid state disk). To configure it as a cache device and check its status, use the following commands:
```
# stratis pool add-cache testpool /dev/vdd
# stratis blockdev
Pool Name Device Node Physical Size State Tier
testpool /dev/vdb 5 GiB In-use Data
testpool /dev/vdc 5 GiB In-use Data
testpool /dev/vdd 5 GiB In-use Cache
```
### Growing the storage pool
Suppose the _testfs_ filesystem is close to using all the storage capacity of _testpool_. You could add an additional disk/block device to the pool with commands similar to the following:
```
# stratis pool add-data testpool /dev/vde
# stratis blockdev
Pool Name Device Node Physical Size State Tier
testpool /dev/vdb 5 GiB In-use Data
testpool /dev/vdc 5 GiB In-use Data
testpool /dev/vdd 5 GiB In-use Cache
testpool /dev/vde 5 GiB In-use Data
```
After adding the device, verify that the pool shows the added capacity:
```
# stratis pool
Name Total Physical Size Total Physical Used
testpool 15 GiB 606 MiB
```
### Conclusion
Stratis is a tool designed to make managing storage much simpler. Creating a filesystem with enterprise functionalities like thin-provisioning, snapshots, volume management, and caching can be accomplished quickly and easily with just a few basic commands.
See also [Getting Started with Stratis Encryption][6].
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/getting-started-with-stratis-up-and-running/
作者:[Gordon Keegan][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/gmkeegan/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/11/stratis-up-and-running-816x345.jpg
[2]: https://unsplash.com/photos/CVvFVQ_-oUg
[3]: https://unsplash.com/@jeremy_justin?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[4]: https://unsplash.com/s/photos/runner?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[5]: https://stratis-storage.github.io/
[6]: https://fedoramagazine.org/getting-started-with-stratis-encryption/

View File

@ -0,0 +1,157 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Create universal blockchain smart contracts)
[#]: via: (https://opensource.com/article/20/12/blockchain-smart-contracts)
[#]: author: (Gage Mondok https://opensource.com/users/matt-coolidge)
Create universal blockchain smart contracts
======
Chainlink connects blockchain data with external, "real-world" data
using decentralized oracles.
![cubes coming together to create a larger cube][1]
Blockchain [smart contracts][2] have the ability to access off-chain data by integrating [decentralized oracles][3]. Before diving into how to use them, it's important to understand why smart contracts matter in the big picture and why they need oracles for data access.
Transactions happen every day, as they have for tens of thousands of years. They're generally governed by an agreement or contract. This may be driven by a vendor's terms of service, regulatory frameworks, or some combination of both. Parameters for these agreements are not always clear or transparent, and they are ultimately governed by a brand (whether that's a person or a company) and its willingness to act upon terms agreed upon in advance.
Contracts, like the rest of the world, are going digital. The rise of blockchain technology has introduced smart contracts, a more tamper-proof, transparent, and fair system for governing such agreements. Smart contracts are governed by math, not brands. They automatically enforce the parameters of a contract once they're executed, creating a more equitable structure for all parties.
The challenge with smart contracts is that they generally depend on their ability to bridge real-world data with blockchains (or data from one blockchain to another) so that the smart contract can recognize quality, assess reliable data, and trigger agreed-upon outcomes once terms are met. Traditionally, this has been an overly complex and difficult process, which limited broader adoption.
### About Chainlink
[Chainlink][4] is an open source abstraction layer that provides a framework to easily connect any blockchain with any external (or separate blockchain) API. You can think of Chainlink as the blockchain equivalent of the transport layer in TCP/IP, ensuring data is reliably transmitted in and out. Chainlink was designed to be the standard data layer for smart contracts, unlocking their true capability to affect the external world, and turning them into externally aware, universal smart contracts.
Smart contracts have the power to revolutionize how trust and automation are handled in business, but their restriction in scope to events on the blockchain has severely limited their potential. A majority of what developers want to interact with exists in the "real world," such as pricing data, shipping events, world events, etc. To create universal smart contracts, which are externally aware and thus can handle a wide, universal set of jobs with the world's data at its fingertips, the Chainlink network gives [Solidity][5] and other blockchain developers a framework of decentralized oracles to build with.
You can use these oracles to retrieve data for your decentralized application (dApp) in real-time on the Ethereum mainnet.
#### Chainlink adapters
[Adapters][6] are the default data manipulation functions that every Chainlink node supports by default. The nodes are the decentralized oracles in this case. They fulfill the data requests, and the Chainlink network is composed of an ever-growing number of them. Nodes are run by a multitude of independent operators. Through adapters, all developers have a standard interface for making data requests, and node operators have a standard for serving that data. These adapters include functionality such as HTTP GET, HTTP POST, Compare, Copy, etc. Adapters are a dApp's connection to the external world's data.
For example, here are the parameters for the [HttpGet][7] adapter:
* **get**: Takes a string containing the API URL to make a GET request to
* **headers**: Takes an object containing keys as strings and values as arrays of strings
* **queryParams**: Takes a string or array of strings for the URL's query parameters
* **extPath**: Takes a slash-delimited string or array of strings to be appended to the job's URL
#### Chainlink requests
For a universal smart contract to interact with these adapters, you need another functionality, requests. All contracts that inherit from [ChainlinkClient][8] can create a Chainlink.Request struct that allows developers to form a request to a Chainlink decentralized oracle. This request should add the desired adapter parameters to the struct according to the request you want to make. Submitting this request requires some basic fields, such as the address of the node you want to use as your oracle, the jobId, and the agreed-upon fee. In addition to those default fields, you can add your desired adapter parameters to the request struct:
```
// Set the URL to perform the GET request on
request.add("get", "https://min-api.cryptocompare.com/data/price?fsym=ETH&tsyms=USD");
```
With this struct, requests are flexible and can be formulated to fit various situations involving getting, posting, and manipulating data from any API because the requests can contain any of the adapter functions. What makes this system decentralized is that Chainlink's oracle network consists of many of these nodes, and developers are free to choose which and how many they want to request from based on their needs. This enables redundant failover and error checking via multiple sources, as high-reliability dApps often require.
For more information on constructing a request and the functions needed to submit it and receive a response within a ChainlinkClient contract, see Chainlink's full [HTTP GET request example][10].
For common requests, a node operator may already have an existing oracle job preconfigured, and in this case, the request is much simpler. Rather than building a custom request struct and adding the necessary adapters, the default request struct is all you need to create. No additional adapter parameters are needed; the set of decentralized oracles you choose will know how to respond based on the jobId provided when creating the request struct.
This example comes from the full [CoinGecko Consumer API][11]:
```
Chainlink.Request memory req = buildChainlinkRequest(jobId, address(this), this.fulfillEthereumPrice.selector);
sendChainlinkRequestTo(oracle, req, fee);
```
You can use a decentralized oracle data service, such as [Chainlink Market][13], to search through existing oracles and the jobs they support in order to find the jobId you require.
### External adapters
But what if you have a complex use case for your smart contract that isn't covered by the default adapter functions? What if you need to perform some advanced data manipulation? Maybe it's not raw data you want to submit to your contract but rather metadata generated by statistical analysis of multiple data points. Maybe you can manipulate the data on-chain with the default adapters but want to reduce gas costs. Perhaps you don't want your API request on-chain due to using a credentialed source, and you don't want to specify those credentials on-chain or in the oracle job spec. This is where [external adapters][14] come in.
![Chainlink External Adapter for IoT Devices][15]
(Chainlink, ©2020)
External adapters are the "whatever data you need; we can handle it" of Chainlink. When we say universal smart contracts, we really mean _universal_. Since external adapters are pieces of code that exist off-chain with the Chainlink oracle node, they can be written in any language of your choice and perform whatever functionality you can think up—so long as the data input and output adhere to the adapter's JSON specification. External adapters act as the interface between the Chainlink decentralized oracle network and external data, letting the node operators know how to request and receive the JSON response that is then consumed on-chain.
Defining this interface specification off-chain through an external adapter opens up vast possibilities: You can now store your API credentials off-chain per your personal security standards, data can be programmed in any way in the language of your choice, and all of this happens without using any Ethereum gas fees to fund an on-chain transaction. In a sense, external adapters are like another layer of a decentralized oracle, packaging up data outside the blockchain with speed and at low cost and putting it into one tidy JSON format to be verifiably committed on-chain by the Chainlink oracle node.
External adapters are a large part of what makes Chainlink such a versatile decentralized oracle network. Contract developers are free to implement these adapters as needed, or they can choose from [existing adapters][16] on the Chainlink Market. If you are a smart contract developer looking to create an external adapter, Chainlink merely requires you to specify the JSON interfaces for the data request and the return data; between those two interfaces is where developers are free to create and manipulate the data to fit their use case. As an oracle node operator, to support the external adapter and handle the additional requests, you must [create a bridge][17] for it in your node user interface and add the adapter's bridge name to your supported tasks.
![Create a new bridge in Chainlink][18]
(ChainLink, ©2020)
```
{
  "initiators": [
    { "type": "runLog" }
  ],
  "tasks": [
    { "type": "randomNumber" },
    { "type": "copy",
      "params": {"copyPath": ["details", "current"]}},
    { "type": "multiply",
      "params": {"times": 100 }},
    { "type": "ethuint256" },
    { "type": "ethtx" }
  ]
}
```
You can access a full example of creating an external adapter on Chainlink's [building external adapters][19] page.
Chainlink is striving to give blockchain and smart contract developers the tools to empower universal smart contracts with real-world data, exactly how they need it. Chainlink's design, incorporating direct calls to any API through default adapters and extensible external adapters, gives developers a flexible platform to create as they see fit, with any data they might need. This opens up smart contracts to a literal world of data and the new use cases this empowers.
### Start building with Chainlink
If you're a smart contract developer looking to increase your smart contracts' utility with external data, try out this Chainlink [example walkthrough][20] to deploy a universal smart contract that interacts with off-chain data.
Chainlink is open source under the [MIT License][21], so if you're developing a product that could benefit from Chainlink decentralized oracles or would like to assist in developing the Chainlink Network, visit the [developer documentation][22] or join the technical discussion on [Discord][23]. You can also learn more on Chainlink's [website][4], [Twitter][24], [Reddit][25], [YouTube][26], [Telegram][27], and [GitHub][28].
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/12/blockchain-smart-contracts
作者:[Gage Mondok][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/matt-coolidge
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cube_innovation_process_block_container.png?itok=vkPYmSRQ (cubes coming together to create a larger cube)
[2]: https://blog.chain.link/what-is-a-smart-contract-and-why-it-is-a-superior-form-of-digital-agreement/
[3]: https://blog.chain.link/what-is-the-blockchain-oracle-problem/
[4]: https://chain.link/
[5]: https://github.com/ethereum/solidity
[6]: https://docs.chain.link/docs/adapters
[7]: https://docs.chain.link/docs/adapters#httpget
[8]: https://github.com/smartcontractkit/chainlink/blob/develop/evm-contracts/src/v0.6/ChainlinkClient.sol
[9]: https://min-api.cryptocompare.com/data/price?fsym=ETH&tsyms=USD
[10]: https://docs.chain.link/docs/make-a-http-get-request
[11]: https://docs.chain.link/docs/existing-job-request
[12]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+request
[13]: https://market.link/
[14]: https://docs.chain.link/docs/external-adapters
[15]: https://opensource.com/sites/default/files/chainlink-external-adapter.png (Chainlink External Adapters enable smart contracts to easily integrate with specialized APIs)
[16]: https://market.link/search/adapters
[17]: https://docs.chain.link/docs/node-operators#config
[18]: https://opensource.com/sites/default/files/uploads/chainlink_newbridge.png (Create a new bridge in Chainlink)
[19]: https://docs.chain.link/docs/developers
[20]: https://docs.chain.link/docs/example-walkthrough
[21]: https://github.com/smartcontractkit/chainlink/blob/develop/LICENSE
[22]: https://docs.chain.link/
[23]: https://discordapp.com/invite/aSK4zew
[24]: https://twitter.com/chainlink
[25]: https://www.reddit.com/r/Chainlink/
[26]: https://www.youtube.com/channel/UCnjkrlqaWEBSnKZQ71gdyFA
[27]: https://t.me/chainlinkofficial
[28]: https://github.com/smartcontractkit/chainlink

View File

@ -0,0 +1,125 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 collaboration tips for using an open source alternative to Google Docs)
[#]: via: (https://opensource.com/article/20/12/onlyoffice-docs)
[#]: author: (Nadya Knyazeva https://opensource.com/users/hellonadya)
5 collaboration tips for using an open source alternative to Google Docs
======
Collaborative writing and editing is a breeze when you put these
ONLYOFFICE features to work.
![Filing cabinet for organization][1]
ONLYOFFICE Docs is a self-hosted open source alternative to Microsoft Office and Google Docs for collaborating on documents, spreadsheets, and presentations in real time.
The following are the five most important ways [ONLYOFFICE Docs][2] helps organize my collaborative work.
### 1. Integrate with document storage
ONLYOFFICE Docs is highly flexible in how you can store documents. By default, you can use ONLYOFFICE Docs within an ONLYOFFICE Workspace. This provides a productivity solution for managing documents and projects. It's the clear way to use ONLYOFFICE Docs because it's included; when you install one, you get the other.
However, the full ONLYOFFICE suite can be integrated with ownCloud, Nextcloud, and other popular sync and share platforms. Helpful [connectors][3] are available in your sharing platform's official app store or on GitHub.
Finally, since ONLYOFFICE is open source, web app developers are free to integrate ONLYOFFICE Docs into their applications using the [ONLYOFFICE API][4].
### 2. Manage document permissions
In ONLYOFFICE Docs, you can differentiate what your teammates can do when they open shared documents. You can grant them permission to view, edit, or share files or perform specific actions—leave comments, suggest changes in review mode, fill in determined fields, etc. Differentiating document permissions can help structure and secure your collaboration.
![ONLYOFFICE sharing options][5]
(Nadya Knyazeva, [CC BY-SA 4.0][6])
The permissions you have available depend on your document management system. In ONLYOFFICE Workspace and ownCloud, you can share files with all the permissions listed above, plus you'll get an additional permission for spreadsheets (Custom Filter in ONLYOFFICE or Modify Filter in ownCloud). The filtering permission allows you to decide whether filters applied by one user should affect only that person or everyone. If you're integrating with Nextcloud or, for example, Seafile, you get fewer permission options.
If you are integrating the suite and want to add more permissions, your app must allow registering new sharing attributes (such as the ability to restrict downloading, printing, or copying document content to the clipboard), as described in [the API documentation][7].
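As a rough idea of what a permissions block can look like inside the editor config, here is a hedged sketch. The key names follow the permissions object described in the API documentation, but the exact set supported, and which restrictions your host app actually enforces, depends on your integration.
```
# Hedged sketch: a "review-only" permission set for one collaborator.
# Whether download/print/copy restrictions are honored depends on the host
# application registering those sharing attributes, as noted above.
reviewer_permissions = {
    "edit": False,      # may not change the text directly
    "review": True,     # may suggest tracked changes
    "comment": True,    # may leave comments
    "download": False,
    "print": False,
    "copy": False,
}

# attached to the "document" section of the editor config before rendering
document_section = {
    "title": "report.docx",
    "permissions": reviewer_permissions,
}
```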
### 3\. True collaboration
The collaborative work toolset is basically the same for all environments. You have comments to add notes, suggestions, or questions for people working on a document together. ONLYOFFICE has this, of course, but it strives to provide a few extra features whenever possible. For instance, in ONLYOFFICE Workspace, you can quickly add mentions by typing + or @ followed by a user's name to draw a specific person's attention to your comment.
![ONLYOFFICE comments][8]
(Nadya Knyazeva, [CC BY-SA 4.0][6])
There's also a chat feature to quickly discuss something with teammates without switching to a messaging app (be aware that the chat history clears when you close a document).
Track changes enables reviewing documents by suggesting changes. All the changes made in this mode are highlighted, and the owner and users with full editing access can accept or reject them or preview the document with all the changes accepted or rejected.
What's important about collaborative work in ONLYOFFICE Docs is that users working simultaneously on the same docs can set individual preferences (e.g., enable track changes or spell checking, display non-printing characters, zoom the doc in and out, and so on) without disturbing each other.
### 4\. Version control
Versioning is so important that entire industries have developed around the process. For developers, writing without Git-style revision control can be unsettling. For content creators, emailing revisions back and forth to one another gets messy and confusing.
ONLYOFFICE Docs allows viewing a document's version history in the editor. Changes and the author who made them are highlighted in different colors. This feature's availability is determined by the doc management system you use; version history is available for ONLYOFFICE Workspace, Nextcloud, and ownCloud integration.
![ONLYOFFICE version history in Nextcloud][9]
(Nadya Knyazeva, [CC BY-SA 4.0][6])
### 5\. Change real-time co-editing mode
There are two ways to co-edit a document in real-time in ONLYOFFICE Docs. They are called Fast and Strict modes, and they're available regardless of how you integrate ONLYOFFICE into your toolchain.
Fast mode allows you to see your co-authors' changes as they are typing. Your changes are also shown to others immediately.
In Strict mode, you lock the document you are working on, and no one can see what you are typing until you click Save. You can see what parts of the document are locked by co-authors, but you can not see what they are doing until they save.
When collaborating on a document in one of these modes, the Ctrl+Z (undo) command affects only your work, so your co-authors' actions are unaffected.
### Bonus: Security options
Depending on your environment, you'll find different options to protect collaboration on documents.
ONLYOFFICE Workspace offers the standard security toolset, with HTTPS, backups, two-factor authentication, secure sign-on, and an option to encrypt data at rest. One feature that, according to ONLYOFFICE, has no counterpart is called _Private Rooms_.
A Private Room is a folder that can be accessed only through the desktop app. Each office file created and stored there is encrypted using the AES-256 algorithm. Everything you type—every letter, every number, every symbol—is encrypted immediately, even if you're collaborating in real time.
![ONLYOFFICE Private Rooms][10]
(Nadya Knyazeva, [CC BY-SA 4.0][6])
ONLYOFFICE Docs also uses JSON Web Tokens (JWT) for security. The editors request an encrypted signature to check who can access the document and what they can do with it. Currently, JWT is implemented for ONLYOFFICE Workspace and for Nextcloud, ownCloud, Alfresco, Confluence, HumHub, and Nuxeo integrations, in addition to their built-in security tools.
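To make that mechanism concrete, here is a small sketch of how an integration might sign its editor config with the shared secret using the PyJWT library. The secret value and helper name are placeholders, and the exact claims your setup expects are defined by your ONLYOFFICE configuration.
```
# Hedged sketch: sign the editor config with the shared JWT secret so the
# Document Server can verify who may open the file and with what rights.
# PyJWT's jwt.encode() is the real library call; the secret is a placeholder.
import jwt  # pip install PyJWT

JWT_SECRET = "replace-with-the-document-server-secret"

def sign_editor_config(config: dict) -> dict:
    token = jwt.encode(config, JWT_SECRET, algorithm="HS256")
    signed = dict(config)
    signed["token"] = token   # the editors check this signature on open
    return signed
```
The same secret has to be configured on the Document Server side; without it, the signature check fails and the document will not open.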
In a Nextcloud integration, you can also insert watermarks to protect sensitive docs. Watermarks are enabled by an admin and cannot be removed from a document.
![ONLYOFFICE Watermark in Nextcloud][11]
(Nadya Knyazeva, [CC BY-SA 4.0][6])
### So many features
There are many more features in ONLYOFFICE than will fit into one article. If you're looking for an open source alternative to Microsoft or Google collaboration tools, ONLYOFFICE is the most powerful option I know of. Give it a try and let me know in the comments what you think of ONLYOFFICE Docs as a collaboration tool.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/12/onlyoffice-docs
作者:[Nadya Knyazeva][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/hellonadya
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/files_documents_organize_letter.png?itok=GTtiiabr (Filing cabinet for organization)
[2]: https://www.onlyoffice.com/office-suite.aspx
[3]: https://www.onlyoffice.com/all-connectors.aspx
[4]: https://api.onlyoffice.com/editors/basic
[5]: https://opensource.com/sites/default/files/uploads/1._sharing_window.png (ONLYOFFICE sharing options)
[6]: https://creativecommons.org/licenses/by-sa/4.0/
[7]: https://api.onlyoffice.com/editors/config/document/permissions
[8]: https://opensource.com/sites/default/files/uploads/2._comments.png (ONLYOFFICE comments)
[9]: https://opensource.com/sites/default/files/uploads/3._version_history_in_nextcloud.png (ONLYOFFICE version history in Nextcloud)
[10]: https://opensource.com/sites/default/files/uploads/4_privateroom.png (ONLYOFFICE Private Rooms)
[11]: https://opensource.com/sites/default/files/uploads/5._watermark.png (ONLYOFFICE Watermark in Nextcloud)

View File

@@ -0,0 +1,246 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Set up OpenStack on a Raspberry Pi cluster)
[#]: via: (https://opensource.com/article/20/12/openstack-raspberry-pi)
[#]: author: (AJ Canlas https://opensource.com/users/ajscanlas)
Set up OpenStack on a Raspberry Pi cluster
======
Since Arm processors are "first-class citizens" in OpenStack, and
Raspberry Pis are built on Arm, why not combine them?
![Raspberries with pi symbol overlay][1]
In the year since the Raspberry Pi 4 was released, I've seen many tutorials (like [this][2] and [this][3]) and [articles][4] on how well the 4GB model works with container platforms such as Kubernetes (K8s), Lightweight Kubernetes (K3s), and Docker Swarm. As I was doing research, I read that Arm processors are "first-class citizens" in OpenStack. Since Raspberry Pi is built on Arm, I decided to test this theory by installing OpenStack on a Raspberry Pi cluster.
### Prerequisites
There are a few things I need to consider for this project:
1. Whether to use Ubuntu 64-bit or CentOS 64-bit for Raspberry Pi to boot headless; [Raspberry Pi OS][5] will not suffice, even as a Debian derivative, because there are no OpenStack packages for it.
2. I need the latest version of OpenStack that will run in my distribution because I don't think the latest versions have an AArch64 image.
3. I doubt that there is any documentation for automating a ground-up installation, so I will use a step-by-step process.
Materials used:
1. Four Raspberry Pi 4Bs, 4GB model (8GB recommended)
2. Four 32GB MicroSD cards
3. Four Raspberry Pi cases with fans and heatsinks (very important)
  4. Four official Raspberry Pi power supplies
### Configure the base operating system
I used the CentOS AArch64 image; as I suspected, there is not a CentOS 8 image available for Raspberry Pi, so I used CentOS 7 AArch64 with this [prebuilt image][6]. It didn't work when I tried installing it with `dd`, but it worked like a charm when I used balenaEtcher.
![balenaEtcher][7]
(Aaron John Canlas, [CC BY-SA 4.0][8])
After burning the images on the SD cards, I plugged the Raspberry Pis into my router to check their IP addresses so I could remotely access them using SSH. I configured their WiFi and hostnames using `nmtui` to access them without any cables attached (except for their power source, of course). The default user is `root`, and the default password is `centos`.
![SSH console in CentOS 7 AArch64][9]
(Aaron John Canlas, [CC BY-SA 4.0][8])
I updated the operating system:
```
[root@rpi4b4-0 ~]# yum update -y
```
I repeated this process on all my Raspberry Pis then rebooted them.
### Install OpenStack
The latest OpenStack releases (Ussuri and Victoria) require CentOS 8, so I used Train, as it's the most recent version that uses CentOS 7.
![Confirming OpenStack Train's availability][10]
(Aaron John Canlas, [CC BY-SA 4.0][8])
I used the OpenStack Foundation's installation steps, but I encountered some issues with them. To make it easier for others to install OpenStack on CentOS 7, I compiled the following links and tips.
#### Prerequisites
1. Network Time Protocol (NTP)
1. [Controller node installation][11]
2. [Compute node installation][12]
3. [Verify operation][13]
2. OpenStack packages: [Run the Train version of the installation][14] (this link includes all versions)
3. SQL database: [Controller node installation][15]
4. Message queue: [Controller node installation][16]
5. Memcached: [Controller node installation][17]
6. Etcd: [Controller node installation][17]
#### OpenStack services
1. Identity service (Keystone)
1. [Controller node installation][18]
2. [Verify operation][19]
![Keystone verification][20]
(Aaron John Canlas, [CC BY-SA 4.0][8])
2. Imaging service (Glance)
1. [Controller node installation][21]
2. [Verify operation][22]
TIP: Instead of downloading the CirrOS image mentioned in the documents, make sure to use the [AArch64 CirrOS image][23] for Raspberry Pis.
![Glance verification][24]
(Aaron John Canlas, [CC BY-SA 4.0][8])
3. Placement service (Placement)
1. [Controller node installation][25]
2. [Verify operation][26]
![Placement verification][27]
(Aaron John Canlas, [CC BY-SA 4.0][8])
4. Compute service (Nova)
1. [Controller node installation][28]
2. [Compute node installation][29]
3. [Verify operation][30]
TIP: In the last part of the verify operation, `nova-status upgrade check` fails due to a packaging error. To fix it, edit the following file in the controller:
```
[root@rpi4b4-0 ~]# vim /etc/httpd/conf.d/00-placement-api.conf
(…)
<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>
```
If you run `nova-status upgrade check` now, it will work.
![Nova verification][31]
(Aaron John Canlas, [CC BY-SA 4.0][8])
5. Networking service (Neutron)
1. [Controller node installation][32]
2. [Controller for Option 2: Self-service network installation][33]
To enable the `br_netfilter` module:
```
[root@rpi4b4-0 ~]# modprobe br_netfilter
[root@rpi4b4-0 ~]# echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
[root@rpi4b4-0 ~]# lsmod | grep br_netfilter
```
3. [Compute node installation][34]
4. [Compute for Option 2: Self-service network installation][35]
To enable the `br_netfilter` module:
```
[root@rpi4b4-X ~]# modprobe br_netfilter
[root@rpi4b4-X ~]# echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
[root@rpi4b4-X ~]# lsmod | grep br_netfilter
```
5. [Verify operation][36]
6. [Verify operation for Option 2: Self-service network installation][37]
![Neutron verification][38]
(Aaron John Canlas, [CC BY-SA 4.0][8])
6. Dashboarding service (Horizon)
1. [Controller node installation][39]
TIP: Upon restarting the httpd service, you will get a 404 error. To resolve it, add this line to the end of `/etc/openstack-dashboard/local_settings`:
```
WEBROOT = '/dashboard'
```
Then restart httpd as usual:
```
[root@rpi4b4-X ~]# systemctl restart httpd.service
```
2. [Verify operation][40]
![Horizon verification][41]
(Aaron John Canlas, [CC BY-SA 4.0][8])
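Beyond the per-service verification links above, a quick end-to-end sanity check is to ask the APIs for their inventories from a workstation. The sketch below uses the openstacksdk Python library and assumes a `clouds.yaml` entry named `rpi-train` pointing at the controller; both the cloud name and the idea of scripting the check are additions for illustration, not part of the original walkthrough.
```
# Hedged sketch: list a few resources through openstacksdk to confirm the
# Keystone, Glance, Neutron, and Nova APIs on the Raspberry Pi controller
# all answer. The cloud name "rpi-train" is an assumed clouds.yaml entry.
import openstack

conn = openstack.connect(cloud="rpi-train")

print("images:  ", [image.name for image in conn.image.images()])
print("networks:", [net.name for net in conn.network.networks()])
print("servers: ", [server.name for server in conn.compute.servers()])
```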
### How well does it work?
This was a fun and successful experiment. In terms of performance, it's quite slow considering that my controller only has four cores and 4GB RAM, but it's useful for managing multiple computers in one dashboard. My next step will be to try this with a [TripleO][42] deployment and a [Ceph][43] storage cluster to enable live migration. I might try using the Raspberry Pi's Ethernet if I have a larger cluster and workload in mind; for now, [Grafana][44] is working fine for internet monitoring.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/12/openstack-raspberry-pi
作者:[AJ Canlas][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ajscanlas
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life-raspberrypi_0.png?itok=Kczz87J2 (Raspberries with pi symbol overlay)
[2]: https://opensource.com/article/20/6/kubernetes-raspberry-pi
[3]: https://opensource.com/article/20/3/kubernetes-raspberry-pi-k3s
[4]: https://opensource.com/article/20/8/kubernetes-raspberry-pi
[5]: https://www.raspberrypi.org/software/
[6]: http://mirror.centos.org/altarch/
[7]: https://opensource.com/sites/default/files/uploads/balenaetcher.png (balenaEtcher)
[8]: https://creativecommons.org/licenses/by-sa/4.0/
[9]: https://opensource.com/sites/default/files/uploads/ssh_console.png (SSH console in CentOS 7 AArch64)
[10]: https://opensource.com/sites/default/files/uploads/openstack_train.png (Confirming OpenStack Train's availability)
[11]: https://docs.openstack.org/install-guide/environment-ntp-controller.html
[12]: https://docs.openstack.org/install-guide/environment-ntp-other.html
[13]: https://docs.openstack.org/install-guide/environment-ntp-verify.html
[14]: https://docs.openstack.org/install-guide/environment-packages-rdo.html
[15]: https://docs.openstack.org/install-guide/environment-sql-database-rdo.html#install-and-configure-components
[16]: https://docs.openstack.org/install-guide/environment-messaging-rdo.html
[17]: https://docs.openstack.org/install-guide/environment-memcached-rdo.html
[18]: https://docs.openstack.org/keystone/train/install/keystone-install-rdo.html
[19]: https://docs.openstack.org/keystone/train/install/keystone-verify-rdo.html
[20]: https://opensource.com/sites/default/files/uploads/keystone_verification.png (Keystone verification)
[21]: https://docs.openstack.org/glance/train/install/install-rdo.html
[22]: https://docs.openstack.org/glance/train/install/verify.html
[23]: http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-aarch64-disk.img
[24]: https://opensource.com/sites/default/files/uploads/glance_verification.png (Glance verification)
[25]: https://docs.openstack.org/placement/train/install/install-rdo.html
[26]: https://docs.openstack.org/placement/train/install/verify.html
[27]: https://opensource.com/sites/default/files/uploads/placement_verification.png (Placement verification)
[28]: https://docs.openstack.org/nova/train/install/controller-install-rdo.html
[29]: https://docs.openstack.org/nova/train/install/compute-install-rdo.html
[30]: https://docs.openstack.org/nova/train/install/verify.html
[31]: https://opensource.com/sites/default/files/uploads/nova_verification.png (Nova verification)
[32]: https://docs.openstack.org/neutron/train/install/controller-install-rdo.html
[33]: https://docs.openstack.org/neutron/train/install/controller-install-option2-rdo.html
[34]: https://docs.openstack.org/neutron/train/install/compute-install-rdo.html
[35]: https://docs.openstack.org/neutron/train/install/compute-install-option2-rdo.html
[36]: https://docs.openstack.org/neutron/train/install/verify.html
[37]: https://docs.openstack.org/neutron/train/install/verify-option2.html
[38]: https://opensource.com/sites/default/files/uploads/neutron_verification.png (Neutron verification)
[39]: https://docs.openstack.org/horizon/train/install/install-rdo.html
[40]: https://docs.openstack.org/horizon/train/install/verify-rdo.html
[41]: https://opensource.com/sites/default/files/uploads/horizon_verification.png (Horizon verification)
[42]: https://docs.openstack.org/tripleo-docs/latest/index.html
[43]: https://en.wikipedia.org/wiki/Ceph_(software)
[44]: https://grafana.com/

View File

@@ -0,0 +1,372 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Vagrant beyond the basics)
[#]: via: (https://fedoramagazine.org/vagrant-beyond-basics/)
[#]: author: (Andy Mott https://fedoramagazine.org/author/amott/)
Vagrant beyond the basics
======
![][1]
Photo by [Kelli McClintock][2] on [Unsplash][3]
There are, as with most things in the Unix/Linux world, many ways of doing things with Vagrant, but here are some examples to grow your Vagrantfile portfolio and increase your knowledge and use of it.
If you have not yet installed Vagrant, you can follow the first part of this series:
> [Installing and running Vagrant using qemu-kvm][4]
### Some Vagrantfile basics
All Vagrantfiles start with “**Vagrant.configure(“2”) do |config|**” and finish with a corresponding “**end**”:
```
Vagrant.configure("2") do |config|
  ...
  ...
end
```
The “2” represents the version of the configuration object, and is currently either 1 or 2. Unless you need to use the older version, simply stick with the latest.
The config structure is broken down into namespaces:
**config.vm** modify the configuration of the machine(s) that Vagrant manages.
**config.ssh** for configuring how Vagrant will access your machine over SSH.
**config.winrm** configuring how Vagrant will access your Windows guest over WinRM.
**config.winssh** the WinSSH communicator is built specifically for the Windows native port of OpenSSH.
**config.vagrant** modify the behavior of Vagrant itself.
Each line in a namespace begins with the word config:
**config.vm.box = “fedora/32-cloud-base”
config.vm.network “private_network”**
There are many options here, and a read of the documentation pages is strongly recommended. They can be found at <https://www.vagrantup.com/docs/vagrantfile>
Also in this section you can configure provider-specific options. In this case the provider is libvirt, and the specific config looks like this:
```
config.vm.provider :libvirt do |libvirt|
  libvirt.cpus = 1
  libvirt.memory = 512
end
```
In the example above, all libvirt VMs will be created with a single CPU and 512MB of memory unless specifically overridden.
The VM namespace is where you define all machines you want this Vagrantfile to build. Notice that this is still a part of the config section, and lines should therefore begin with config. All sections or parts of sections have an end statement to close them off.
### Creating multiple machines at once
Depending on what you need to achieve, this can be a simple loop or multiple machine definitions. To create any number of machines in a series, with the same settings but perhaps different names and/or IP addresses, you can just provide a range as shown here:
```
(1..5).each do |i|
  config.vm.define "server#{i}" do |server|
    server.vm.hostname = "server#{i}.example.com"
  end
end
```
This will create 5 servers, named server1, server2, server3 etc.
Of note, using the Ruby-style “**for i in 1..3 do**” doesn't work despite Vagrantfile syntax actually being Ruby, so use the method from the example above.
If you need servers with different hostnames, different hardware, etc., then you'll need to specify them individually, or at least in groups if the situation lends itself to that. Let's say you need to create a typical web/db/load balancer infrastructure, with two web servers, a single database server, and a load balancer for the web traffic. Ignoring the specific software setup, to simply create the virtual machines ready for provisioning, you could use something like this:
```
# Load Balancer
config.vm.define "loadbal", primary: true do |loadbal|
  loadbal.vm.hostname = "loadbal"
end
# Database
config.vm.define "db", primary: true do |db|
  db.vm.hostname = "db"
end
# Web Servers x2
(1..2).each do |i|
  config.vm.define "web#{i}" do |web|
    web.vm.hostname = "web#{i}"
  end
end
```
This uses a combination of multiple machine calls and a small loop to build 4 VMs with a single vagrant up command.
### Networking
Vagrant generally creates its own network for VM access, and you use this with vagrant ssh. If you create more than one VM, then you must use the VM name to identify which one you wish to connect to: **vagrant ssh** _**vmname**_.
There are a number of configuration options available which allow you to interact with your VMs in various ways.
The vagrant-libvirt plugin creates a network for the guests to use. This is automated and will always be present even if you define your own networks. The network is named “vagrant-libvirt” and can be seen either in the Virtual Networks tab of virt-manager's connection details or by issuing a **sudo virsh net-list** command.
If you use dhcp for your guests, you can find the individual IP addresses with the virsh net-dhcp-list command: **sudo virsh net-dhcp-leases vagrant-libvirt**
#### Port Forwarding
The simplest change to default networking is port forwarding. This uses a simple format like most Vagrant config: **config.vm.network “forwarded_port”, guest: 80, host: 8080**
This listens to port 8080 on your local machine and forwards connections to port 80 on the Vagrant machine. If you need to use a UDP port, simply add **, protocol: “udp”** to the end of that line (notice that comma which should come immediately after the second port number).
Obviously for more complex configurations this might not be ideal, as you need to specify every single port you want to forward. If you then add multiple machines the complexity can really become too much.
In addition to this, anyone on your network can access these ports if they know your IP address, so that's something you should be aware of.
#### Public Network
This creates a network card for the Vagrant VM which connects to your host network, and will therefore be visible to all machines on that network. As Vagrant is not designed to be secure, you should be aware of any vulnerabilities and take steps to protect against them.
To configure a public network, add **config.vm.network “public_network”** to your Vagrantfile. This will use DHCP to obtain a network address.
If you wish to assign a static IP address, you can add one to the end of the network declaration: **config.vm.network “public_network”, ip: “192.168.0.1”**
If you're creating multiple guests, you can put the network configuration in the vm namespace, and even allocate IPs based on iteration too:
```
Vagrant.configure("2") do |config|
  config.vm.box = "centos/8"
  config.vm.provider :libvirt do |libvirt|
    libvirt.qemu_use_session = false
  end
  # Servers x2
  (1..2).each do |i|
    config.vm.define "server#{i}" do |server|
      server.vm.hostname = "server#{i}"
      server.vm.network "public_network", ip: "192.168.122.20#{i}"
    end
  end
end
```
#### Private Network
This works very much like the Public Network option, except the network is available only to the host machine and the Vagrant guests. The syntax is almost identical too: **config.vm.network “private_network”, type: “dhcp”**
To use a static IP address, simply add it:
```
config.vm.network "private_network", ip: "192.168.50.4"
```
This will create a new network in libvirt, usually named something like “vagrant-private-dhcp”; you can see this with the command **sudo virsh net-list** while the VM is running. This network is created and destroyed along with the vagrant guests.
Again, the network config can be specified for all guests, or per guest as shown in the public network example above.
### Provisioning
Once you have your VMs defined, you can obviously then do whatever you want with them, but as soon as you issue a vagrant destroy command any changes will be lost. This is where automated provisioning comes in.
You can use several methods to provision your machines, from simple file copies to shell scripts, Ansible, Chef, and Puppet. Many of the main methods can be used, but I'll cover the simple ones here; if you need to use something else, please read the documentation, as it's all covered.
#### File uploads
To copy a file to the Vagrant guest, add a line to the Vagrantfile like this:
```
config.vm.provision "file", source: "~/myfile", destination: "myfile"
```
You can copy directories too:
```
config.vm.provision "file", source: "~/path/to/host/folder", destination: "$HOME/remote/newfolder"
```
The directory structure should already exist on the Vagrant host, and will be copied in its entirety, including subdirectories and files.
Note: If you add a trailing slash to the destination path, the source path will be placed under it, so make sure you only do this if you want that outcome. For example, if the above destination were **“$HOME/remote/newfolder/”**, then the result would be “$HOME/remote/newfolder/folder” created with the contents of the source placed there.
#### Shell commands
You can include individual commands, inline scripts or external scripts to perform provisioning tasks.
A single command would take this form, and any valid command line command can be used here: **config.vm.provision “shell”, inline: “sudo dnf update -y”**
An inline script is less common; it is declared at the top of the Vagrantfile and then called during provisioning:
```
$script = <<-SCRIPT
echo I am provisioning...
date > /etc/vagrant_provisioned_at
SCRIPT
Vagrant.configure("2") do |config|
  config.vm.provision "shell", inline: $script
end
```
More common is the external shell script, which gives more flexibility and makes code more modular. Vagrant uploads the file to the guest and then executes it. Simply call the script in the provisioning line:
**config.vm.provision “shell”, path: “script.sh”**
The file need not be local to the Vagrant host either:
**config.vm.provision “shell”, path: “https://example.com/provisioner.sh”**
#### Ansible
To use Ansible to provision your VMs you must have it installed on the Vagrant host; see <https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#installing-ansible-on-rhel-centos-or-fedora>.
You specify an Ansible playbook to provision your VM in the following way:
```
config.vm.provision "ansible" do |ansible|
  ansible.playbook = "playbook.yml"
end
```
This then calls the playbook, which will run as any externally-run ansible playbook would.
If you're building multiple VMs with your Vagrantfile, then it's likely you want different configurations for some of them, and in this case you should provision within the definition of each VM, as shown here:
```
# Web Servers x2
(1..2).each do |i|
  config.vm.define "web#{i}" do |web|
    web.vm.hostname = "web#{i}"
    web.vm.provision "ansible" do |ansible|
      ansible.playbook = "web.yml"
    end
  end
end
```
Ansible provisioners come in two formats: ansible and ansible_local. The ansible provisioner requires that Ansible is installed on the Vagrant host, and it will connect remotely to your guest VMs to provision them. This means all necessary ssh authentication must be in place for it to work. The ansible_local provisioner executes playbooks directly on the guest VMs, which therefore requires Ansible to be installed on each of the guests you want to provision. Vagrant will try to install Ansible on the guests in order to do this (this can be controlled with the **install** option, which is enabled by default). On RHEL-style systems like Fedora, Ansible is installed from the EPEL repository. Simply use either ansible or ansible_local in the **config.vm.provision** command to choose the style you need.
### Synced Folders
Vagrant allows you to sync folders between your Vagrant host and your guests, allowing access to configuration files, data etc. By default, the folder containing the Vagrant file is shared and mounted under /vagrant on each guest.
To configure additional synced folders, use the **config.vm.synced_folder** command:
```
config.vm.synced_folder "src/", "/srv/website"
```
The two parameters are the source folder on the Vagrant host and the mount directory on the guest. The destination folder will be created if it does not exist, recursively if necessary.
Options for synced folders allow you to configure them better, including the option to disable them completely. Other options allow you to specify a group owner of the folder (group), the folder owner (owner), plus mount options. There are others but these are the main ones.
You can disable the default share with the following command:
```
config.vm.synced_folder ".", "/vagrant", disabled: true
```
Other options are configured as follows:
```
config.vm.synced_folder "src/", "/srv/website",
  owner: "apache", group: "apache"
```
#### NFS synced folders
When using Vagrant on a Linux host, synced folders use NFS (with the exception of the default share, which uses rsync; see below), so you must have NFS installed on the Vagrant host, and the guests also need NFS support installed. To use NFS with non-Linux hosts, simply specify the folder type as nfs:
```
config.vm.synced_folder ".", "/vagrant", type: "nfs"
```
#### RSync synced folders
These are the easiest to use as they usually work without any intervention on a Linux host. This is a one-way sync from host to guest performed at startup (**vagrant up**) or after a **vagrant reload** command is issued. The default share of the Vagrant project directory is done with rsync. To configure a synced folder with rsync, specify the type as rsync:
```
config.vm.synced_folder ".", "/vagrant", type: "rsync"
```
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/vagrant-beyond-basics/
作者:[Andy Mott][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/amott/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/11/vagrant_beyond_basics-816x345.png
[2]: https://unsplash.com/@kelli_mcclintock?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]: https://unsplash.com/s/photos/box?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[4]: https://fedoramagazine.org/vagrant-qemukvm-fedora-devops-sysadmin/

View File

@@ -0,0 +1,95 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why I love Emacs)
[#]: via: (https://opensource.com/article/20/12/emacs)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Why I love Emacs
======
Emacs isn't a mere text editor; it places you in control and allows you
to solve nearly any problem you encounter.
![Emoji keyboard][1]
I'm a habitual [Emacs][2] user. I didn't choose Emacs as much as it chose me. Back when I was first learning about Unix, I stumbled upon a little-known feature in a strange application called Emacs, which was apparently hidden away on my computer. Legend had it (and was proven true) that if you typed `emacs` into a terminal, pressed **Alt**+**X**, and typed `tetris`, you could play a falling-blocks game.
![Tetris in Emacs][3]
That was my introduction to GNU Emacs. While it was frivolous, it was also an accurate indication of what Emacs is all about—the idea that users can reprogram their (virtual) worlds and do _whatever_ they want with an application. Playing Tetris in your text editor is probably not your primary goal on an everyday basis, but it goes to show that Emacs is, proudly, a programming platform. In fact, you might think of it as a kind of precursor to [Jupyter][4], combining a powerful programming language (called `elisp`, to be exact) with its own live environment. As a consequence, Emacs is flexible as a text editor, customizable, and powerful.
Elisp (and Common Lisp, by extension) aren't necessarily the easiest languages to start out on, if you're used to Bash or Python or similar languages. But LISP dialects are powerful, and because Emacs is a LISP interpreter, you can build applications, whether they're Emacs plugins or prototypes of something you want to develop into a stand-alone project. The wildly popular [org-mode project][5] is just one example: it's an Emacs plugin as well as a markdown syntax with mobile apps to interpret and extend its capabilities. There are many examples of similarly useful applications-within-Emacs, including an email client, a PDF viewer, a web browser, a shell, and a file manager.
### Two interfaces
GNU Emacs has at least two user interfaces: a graphical user interface (GUI) and a terminal user interface (TUI). This sometimes surprises people because Emacs is often pitted against Vi, which runs in a terminal (although gVim provides a GUI for a modern Vi implementation). If you want to run GNU Emacs as a terminal application, you can launch it with the `-nw` option:
```
$ emacs -nw
```
With a GUI, you can just launch Emacs from your application menu or a terminal.
You might think that a GUI renders Emacs less effective, as if "real text editors run in a terminal," but a GUI can make Emacs easier to learn because its GUI follows some typical conventions (a menu bar, adjustable widgets, mouse interaction, and so on).
In fact, if you run Emacs as a GUI application, you can probably get through the day without noticing you're in Emacs at all. Most of the usual conventions apply, as long as you use the GUI. For instance, you can select text with your mouse, navigate to the **Edit** menu, select **Copy**, and then place your cursor elsewhere and select **Paste**. To save a document, you can go to **File** and **Save** or **Save As**. You can press **Ctrl** and scroll up to make your screen font larger, you can use the scroll bar to navigate through your document, and so on.
Getting to know Emacs in its GUI form is a great way to flatten the learning curve.
### Emacs keyboard shortcuts
GNU Emacs is infamous for complex keyboard combinations. They're not only unfamiliar (**Alt**+**W** to copy? **Ctrl**+**Y** to paste?), they're also notated with arcane terminology ("Alt" is called "Meta"), and sometimes they come in pairs (**Ctrl**+**X** followed by **Ctrl**+**S** to save) and other times alone (**Ctrl**+**S** to search). Why would anyone willfully choose to use this?
Well, some don't. But those who do are fans of how these combinations easily flow into the rhythm of everyday typing (and often have the **Caps Lock** key serve as a **Ctrl** key). Those who prefer something different, however, have several options.
* The `evil` mode lets you use Vim keybindings in Emacs. It's that simple: You get to keep the key combinations you've committed to muscle memory, and you inherit the most powerful text editor available.
* Common User Access (CUA) keys keep all of the usual Emacs key combinations but the most jarring ones—copy, cut, paste, and undo—are all mapped to their modern bindings (**Ctrl**+**C**, **Ctrl**+**X**, **Ctrl**+**V**, and **Ctrl**+**Z**, respectively).
  * The `global-set-key` function, part of the programming side of Emacs, allows you to define your own keyboard shortcuts. Traditionally, user-defined shortcuts start with **Ctrl**+**C**, but nothing is stopping you from inventing your own scheme. Emacs isn't precious about its own identity. You're welcome to bend it to your will.
### Learn Emacs
It takes time to get very good with Emacs. For me, that meant printing out a [cheat sheet][6] and keeping it next to my keyboard all day, every day. When I forgot a key combo, I looked it up on my cheat sheet. If it wasn't on my cheat sheet, I learned the keyboard combo, either by executing the function and noting how Emacs told me I could access it quicker or by using `describe-function`:
```
M-x describe-function: save-buffer
save-buffer is an interactive compiled Lisp function in files.el.
It is bound to C-x C-s, <menu-bar> <file> <save-buffer>.
[...]
```
As you use it, you learn it. And the more you learn about it, the more empowered you become to improve it and make it your own.
### Try Emacs
It's a common joke to say that Emacs is an operating system with a text editor included. Maybe that's meant to insinuate Emacs is bloated and overly complex, and there's certainly an argument that a text editor shouldn't require `libpoppler` according to its default configuration (you can compile Emacs without it).
But there's a greater truth lurking behind this joke, and it reveals a lot about what makes Emacs so fun. It doesn't make sense to compare Emacs to other text editors, like Vim, Nano, or even [VSCodium][7], because the really important part of Emacs isn't the idea that you can type stuff into a window and save it. That's basic functionality that even Bash provides. The true significance of Emacs is how it places you in control and how, through Emacs Lisp ([Elisp][8]), nearly any problem can be solved.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/12/emacs
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/emoji-keyboard.jpg?itok=JplrSZ9c (Emoji keyboard)
[2]: https://en.wikipedia.org/wiki/Emacs
[3]: https://opensource.com/sites/default/files/tetris.png (Tetris in Emacs)
[4]: https://opensource.com/article/20/11/surprising-jupyter
[5]: https://opensource.com/article/19/1/productivity-tool-org-mode
[6]: https://opensource.com/downloads/emacs-cheat-sheet
[7]: https://opensource.com/article/20/6/open-source-alternatives-vs-code
[8]: https://www.gnu.org/software/emacs/manual/html_node/elisp/

View File

@@ -0,0 +1,98 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Get the most out of the Vi text editor)
[#]: via: (https://opensource.com/article/20/12/vi-text-editor)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Get the most out of the Vi text editor
======
Vi is the quintessential Unix text editor. Get to know it—or any of its
incarnations, Vim, Neovim, gVim, nvi, or Elvis, for Linux, macOS,
Windows, or BSD.
![Business woman on laptop sitting in front of window][1]
Whether you know it as Vim, Neovim, gVim, nvi, or even Elvis, the quintessential Unix editor is easily Vi. Included in probably every Linux and BSD distribution, Vi is a lightweight and minimalist text editor that many users love for its simple and succinct keyboard shortcuts and dual-mode design.
The original Vi editor was an application written by Bill Joy, creator of the [C shell][2]. Modern incarnations of Vi have [added many features][3], including multiple levels of undo, better navigation while in insert mode, line folding, syntax highlighting, plugin support, and much more. Vim is regarded as the most popular modern implementation, and most people actually mean Vim when they refer to Vi.
All incarnations hearken back to the same goal, though, so this article looks at Vi in a generic sense. The implementation on your computer may differ slightly, but you can still benefit from editing text the Vi way.
### Install Vi
If you're running Linux, macOS, or BSD, then you already have the `vi` command installed. If you're on Windows, you can [download Vim and gVim][4].
![gVim][5]
(Seth Kenlon, [CC BY-SA 4.0][6])
On [NetBSD][7], nvi is a common alternative to Vi, while Slackware provides [Elvis][8] (and Vim), and the popular [Neovim][9] fork aims to help users extend Vim with [Lua][10].
### Launch Vi
Start Vi or Vim with the `vi` command in a terminal. If a `.vimrc` file is not found on your system, then Vim starts in Vi-compatibility mode (this can also be forced with the `-C` option). If you want to use gVim to have a graphical user interface (GUI), you can start it from your desktop's application menu.
If you're a new user just learning Vi, using a graphical user interface can be a nice way to provide yourself a buffer between how you might _expect_ a text editor to behave and how Vi was designed to behave. The GUI version has a menu bar, some mouse integration, a toolbar, and other features to help you find the basic functions you probably take for granted in a typical text editor but don't know how to do in Vi yet.
### How to use Vi
Probably the easiest way to learn Vi is with `vimtutor`, an interactive tutorial packaged with Vim. To start the tutorial, launch `vimtutor` and read through the instructions, trying each exercise. As the tutorial says, getting good with Vi is less about memorizing what key does what and more about establishing muscle memory to invoke common actions as you type.
#### Escape
One of the first things you learn about Vi is the importance of the **Esc** key. **Esc** is what activates _command mode_, and it doesn't take long to learn that whenever you're in doubt in Vi, just press **Esc**. Any key you press while in command mode is not entered into the text document you're working on; instead, it is interpreted by Vi as a command. For instance, to move your cursor left, you press the **H** key on your keyboard. If you're in _insert_ mode, then pressing **H** types the letter H, just as you'd expect. But in _command_ mode, pressing **H** moves left, **L** moves right, **J** moves down, and **K** moves up.
The separation between command mode and insert mode is a sharp contrast to the way any other text editor works, and for that reason, it's probably Vi's most significant differentiator. Interestingly, though, it's theoretically not so different from the way you probably already work. After all, when you take your hands off the keyboard to select text with a mouse, you're essentially placing yourself into a kind of command mode. With Vi, instead of moving your hands off the keyboard to move the mouse and press function keys or Ctrl, you put the _editor_ into a special mode of operation, such that it reassigns your key presses to commands instead of text input.
#### Extend Vi
Before Vim version 8.0, Vi was very much "just" a text editor. There were plugins for it, but installing them was a manual process that many users never thought to do. Luckily, Vim version 8 and higher offer support for plugin management, making it trivial to install and load plugins.
Installing plugins for Vim can be done with the `vim-plug` plugin manager. For instance, to install the Vi file browser [NERDTree][11]:
```
:PlugInstall NERDTree
```
You can also update plugins:
```
:PlugUpdate NERDTree
```
For more information on installing plugins and themes, both with `vim-plug` and manually, read my article [_How to install Vim plugins_][12].
### Vi as default
Vi isn't just popular; it's a [POSIX][13] standard. It's an application every sysadmin should know how to use, even if they don't intend to use it on an everyday basis. It's also a fast and simple editor, so once you get good at it, it may be the editor you've long been searching for.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/12/vi-text-editor
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-concentration-focus-windows-office.png?itok=-8E2ihcF (Woman using laptop concentrating)
[2]: https://opensource.com/article/20/8/tcsh
[3]: https://vimhelp.org/vi_diff.txt.html#vi-differences
[4]: https://www.vim.org/download.php
[5]: https://opensource.com/sites/default/files/uploads/gvim.jpg (gVim)
[6]: https://creativecommons.org/licenses/by-sa/4.0/
[7]: https://opensource.com/article/19/3/netbsd-raspberry-pi
[8]: https://github.com/mbert/elvis
[9]: http://neovim.io
[10]: https://opensource.com/article/20/2/lua-cheat-sheet
[11]: https://www.vim.org/scripts/script.php?script_id=1658
[12]: https://opensource.com/article/20/2/how-install-vim-plugins
[13]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains

Some files were not shown because too many files have changed in this diff Show More