Mirror of https://github.com/LCTT/TranslateProject.git, synced 2025-02-03 23:40:14 +08:00, commit 7a150e1adf

published/20180622 Use LVM to Upgrade Fedora.md

Use LVM to Upgrade Fedora
======

![](https://fedoramagazine.org/wp-content/uploads/2018/06/lvm-upgrade-816x345.jpg)

Most users find the standard process for [upgrading from one Fedora release to the next][1] simple. Inevitably, though, Fedora upgrades run into many special cases. This article describes one way of upgrading using DNF along with Logical Volume Management (LVM) to keep a bootable backup in case of problems. The example upgrades a Fedora 26 system to Fedora 28.

The process shown here is more complex than the standard upgrade process. You should have a firm grasp of how LVM works before you use it. Without proper skill and care, you could **lose data and/or be forced to reinstall your system**. If you don't know what you're doing, it is **highly recommended** you stick to the supported upgrade methods only.

### Preparing the system

Before you start, ensure your existing system is fully updated.

```
$ sudo dnf update
$ sudo systemctl reboot # or GUI equivalent
```

Check that your root filesystem is mounted via LVM.

```
$ df /
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg_sdg-f26 20511312 14879816 4566536 77% /

$ sudo lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
f22 vg_sdg -wi-ao---- 15.00g
f24_64 vg_sdg -wi-ao---- 20.00g
f26 vg_sdg -wi-ao---- 20.00g
home vg_sdg -wi-ao---- 100.00g
mockcache vg_sdg -wi-ao---- 10.00g
swap vg_sdg -wi-ao---- 4.00g
test vg_sdg -wi-a----- 1.00g
vg_vm vg_sdg -wi-ao---- 20.00g
```

If you used the defaults when installing Fedora, you may find the root filesystem mounted on a logical volume (LV) named `root`. The name of your volume group (VG) will likely be different. Look at the total size of the root volume. In this example, the root filesystem is named `f26` and is 20G in size.

Next, ensure you have enough free space in LVM.

```
$ sudo vgs
VG #PV #LV #SN Attr VSize VFree
vg_sdg 1 8 0 wz--n- 232.39g 42.39g
```

This system has enough free space to allocate a 20G logical volume for the upgraded Fedora 28 root. If you used the default install, there will be no free space in LVM. General management of LVM is beyond the scope of this article, but here are some possibilities in a few scenarios:

1. `/home` is on its own LV, and there is lots of free space in `/home`.

You can log out of the GUI and switch to a text console, logging in as `root`. Then you can unmount `/home`, and use `lvreduce -r` to resize and reallocate the `/home` LV. You could also boot from a live image (so that `/home` is not in use) and use the gparted GUI utility to do the resizing.

2. Most of the LVM space is allocated to the root LV, and there is lots of free space in that filesystem.

You can boot from a live image and use the gparted GUI utility to reduce the root volume. Consider moving `/home` to its own filesystem at this point as well, but that is beyond the scope of this article.

3. Most filesystems are full, but you have an LV you no longer need.

You can remove the unneeded LV, freeing space in the volume group for this operation.
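As a hedged sketch of the first scenario, the shrink-and-reuse steps might look like the following. This is an illustration only: the `vg_sdg/home` names come from the example system, the 25G figure is arbitrary, and you should not run anything like this without a verified backup.

```shell
# Scenario 1 sketch: shrink an ext4 /home LV to free space in the VG.
# Run as root from a text console, with nothing using /home.
umount /home
lvreduce -r -L -25G /dev/vg_sdg/home   # -r shrinks the filesystem along with the LV
mount /home
vgs                                    # verify the increased VFree
```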
### Creating the backup

First, allocate a new LV for the upgraded system. Make sure to use the correct name for your system's volume group (VG). In this example it's `vg_sdg`.

```
$ sudo lvcreate -L20G -n f28 vg_sdg
Logical volume "f28" created.
```

Next, create a snapshot of your current root filesystem. This example creates a snapshot volume named `f26_s`.

```
$ sync
$ sudo lvcreate -s -L1G -n f26_s vg_sdg/f26
Using default stripesize 64.00 KiB.
Logical volume "f26_s" created.
```

The snapshot can now be copied to the new LV. **Make sure the destination is correct** when you substitute your own volume names. If you are not careful, data will be deleted irrevocably. Also, make sure you are copying **from** the snapshot of your root volume, **not** your live root volume.

```
$ sudo dd if=/dev/vg_sdg/f26_s of=/dev/vg_sdg/f28 bs=256k
81920+0 records in
81920+0 records out
21474836480 bytes (21 GB, 20 GiB) copied, 149.179 s, 144 MB/s
```

Give the new filesystem copy a unique UUID. This is not strictly necessary, but UUIDs are supposed to be unique, so this avoids future confusion. Here is how for an ext4 root filesystem:

```
$ sudo e2fsck -f /dev/vg_sdg/f28
$ sudo tune2fs -U random /dev/vg_sdg/f28
```

Then remove the snapshot volume, which is no longer needed:

```
$ sudo lvremove vg_sdg/f26_s
Do you really want to remove active logical volume vg_sdg/f26_s? [y/n]: y
Logical volume "f26_s" successfully removed
```

If you mounted `/home` separately, you may wish to make a snapshot of `/home` at this point. Sometimes upgraded applications make changes that are incompatible with a much older Fedora release. If needed, edit the `/etc/fstab` file on the **old** root filesystem to mount the snapshot on `/home`. Remember that the snapshot will vanish when it fills up! You may also wish to make a normal backup of `/home` for good measure.

### Configuring to use the new root

First, mount the new LV and back up your existing GRUB setup:

```
$ sudo mkdir /mnt/f28
$ sudo mount /dev/vg_sdg/f28 /mnt/f28
$ sudo mkdir /mnt/f28/f26
$ cd /boot/grub2
$ sudo cp -p grub.cfg grub.cfg.old
```

Edit `grub.cfg` and add this before the first `menuentry`, unless you already have one:

```
menuentry 'Old boot menu' {
        configfile /grub2/grub.cfg.old
}
```

Edit `grub.cfg` again and change the default menu entry to activate and mount the new root filesystem. Change this line:

```
linux16 /vmlinuz-4.16.11-100.fc26.x86_64 root=/dev/mapper/vg_sdg-f26 ro rd.lvm.lv=vg_sdg/f26 rd.lvm.lv=vg_sdg/swap rhgb quiet LANG=en_US.UTF-8
```

So that it reads like this. Remember to use the correct VG and LV entry names for your system!

```
linux16 /vmlinuz-4.16.11-100.fc26.x86_64 root=/dev/mapper/vg_sdg-f28 ro rd.lvm.lv=vg_sdg/f28 rd.lvm.lv=vg_sdg/swap rhgb quiet LANG=en_US.UTF-8
```

Edit `/mnt/f28/etc/default/grub` and change the default root volume activated at boot:

```
GRUB_CMDLINE_LINUX="rd.lvm.lv=vg_sdg/f28 rd.lvm.lv=vg_sdg/swap rhgb quiet"
```

Edit `/mnt/f28/etc/fstab` to change the root filesystem mount from the old logical volume:

```
/dev/mapper/vg_sdg-f26 / ext4 defaults 1 1
```

to the new one:

```
/dev/mapper/vg_sdg-f28 / ext4 defaults 1 1
```

Then, add a read-only mount of the old root volume for reference purposes:

```
/dev/mapper/vg_sdg-f26 /f26 ext4 ro,nodev,noexec 0 0
```

If your root filesystem is mounted by UUID, you will need to change this approach. Here is how if your root is an ext4 filesystem:

```
$ sudo e2label /dev/vg_sdg/f28 F28
```

Now edit `/mnt/f28/etc/fstab` to use the label instead. Change the mount line for the root filesystem so it reads like this:

```
LABEL=F28 / ext4 defaults 1 1
```

### Rebooting and upgrading

Reboot, and your system will be using the new root filesystem. It is still Fedora 26, but a copy under a new LV name, ready for a `dnf` system upgrade! If anything goes wrong, use the old boot menu to boot back into your working system; this procedure leaves it untouched.

```
$ sudo systemctl reboot # or GUI equivalent
...
$ df / /f26
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg_sdg-f28 20511312 14903196 4543156 77% /
/dev/mapper/vg_sdg-f26 20511312 14866412 4579940 77% /f26
```

You may wish to verify that using the old boot menu really does bring you back to a root mounted on the old root filesystem.

Now follow the instructions on [this wiki page][2]. If anything goes wrong with the system upgrade, you still have a working system to reboot back into.

### Further considerations

The steps of creating a new LV and copying a snapshot of root to it could be automated with a generic script. It needs only the name of the new LV, since the size and device of the existing root are easy to determine. For example, one might type:

```
$ sudo copyfs / f28
```

Supplying the mount point to copy makes it clearer what is going on, and copying other mount points, such as `/home`, could be useful as well.
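A hedged sketch of such a `copyfs` helper follows. This is a hypothetical, untested illustration of the steps described above (snapshot, copy, remove snapshot), not a script from the article; the VG and device names are derived from the mount point at run time.

```shell
#!/bin/sh
# copyfs MOUNTPOINT NEWLV — sketch only: snapshot the filesystem mounted at
# $1 and copy it into a new LV named $2 in the same volume group.
set -e
MNT=$1; NEWLV=$2
DEV=$(df --output=source "$MNT" | tail -1)        # e.g. /dev/mapper/vg_sdg-f26
VG=$(lvs --noheadings -o vg_name "$DEV" | tr -d ' ')
SIZE=$(lvs --noheadings -o lv_size --units b --nosuffix "$DEV" | tr -d ' ')
sync
lvcreate -L "${SIZE}b" -n "$NEWLV" "$VG"          # new LV, same size as root
lvcreate -s -L1G -n "${NEWLV}_snap" "$DEV"        # snapshot of the live root
dd if="/dev/$VG/${NEWLV}_snap" of="/dev/$VG/$NEWLV" bs=256k
lvremove -y "/dev/$VG/${NEWLV}_snap"              # snapshot no longer needed
```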
--------------------------------------------------------------------------------

via: https://fedoramagazine.org/use-lvm-upgrade-fedora/

Author: [Stuart D Gathman][a]
Selected by: [lujun9972](https://github.com/lujun9972)
Translated by: [wxy](https://github.com/wxy)
Proofread by: [wxy](https://github.com/wxy)

This article was compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux.cn](https://linux.cn/)

[a]:https://fedoramagazine.org/author/sdgathman/
[1]:https://fedoramagazine.org/upgrading-fedora-27-fedora-28/
[2]:https://fedoraproject.org/wiki/DNF_system_upgrade
Exploring the Linux kernel: the secrets of Kconfig/kbuild
======

> Dive into understanding how the Linux config/build system works.

![](https://img.linux.net.cn/data/attachment/album/201908/15/093935dvyk5znoaooaooba.jpg)

The Linux kernel config/build system, also known as Kconfig/kbuild, has been around for a long time — ever since the kernel code migrated to Git. As supporting infrastructure, however, it is seldom in the spotlight; even the kernel developers who use it in their daily work never really think about it.

To explore how the Linux kernel is compiled, this article will dive into the Kconfig/kbuild internal process, explain how the `.config` file and the `vmlinux`/`bzImage` files are produced, and introduce a smart trick for dependency tracking.

### Kconfig

The first step in building a kernel is always configuration. Kconfig helps make the Linux kernel highly modular and customizable. Kconfig offers the user many config targets:

| Config target | Explanation |
| ---------------- | --------------------------------------------------------- |
| `config` | Update current config utilizing a line-oriented program |
| `nconfig` | Update current config utilizing a ncurses menu-based program |
| `menuconfig` | Update current config utilizing a menu-based program |
| `xconfig` | Update current config utilizing a Qt-based frontend |
| `gconfig` | Update current config utilizing a GTK+ based frontend |
| `oldconfig` | Update current config utilizing a provided `.config` as base |
| `localmodconfig` | Update current config disabling modules not loaded |
| `localyesconfig` | Update current config converting local mods to core |
| `defconfig` | New config with default from arch-supplied `defconfig` |
| `savedefconfig` | Save current config as `./defconfig` (minimal config) |
| `allnoconfig` | New config where all options are answered with `no` |
| `allyesconfig` | New config where all options are answered with `yes` |
| `allmodconfig` | New config selecting modules when possible |
| `alldefconfig` | New config with all symbols (options) set to default |
| `randconfig` | New config with a random answer to all options |
| `listnewconfig` | List new options |
| `olddefconfig` | Same as `oldconfig`, but sets new symbols (options) to their default value without prompting |
| `kvmconfig` | Enable additional options for KVM guest kernel support |
| `xenconfig` | Enable additional options for xen dom0 and guest kernel support |
| `tinyconfig` | Configure the tiniest possible kernel |

I think `menuconfig` is the most popular of these targets. The targets are processed by different host programs, which are provided by the kernel and built during kernel building. Some targets have a GUI (for the user's convenience) while most don't. Kconfig-related tools and source code reside mainly under `scripts/kconfig/` in the kernel source. As we can see from `scripts/kconfig/Makefile`, there are several host programs, including `conf`, `mconf`, and `nconf`. Except for `conf`, each of them is responsible for one of the GUI-based config targets, so `conf` deals with most of the targets.

Logically, Kconfig's infrastructure has two parts: one implements a [new language][1] to define the configuration items (see the Kconfig files under the kernel source), and the other parses the Kconfig language and deals with configuration actions.

Most of the config targets have roughly the same internal process (shown below):

![](https://opensource.com/sites/default/files/uploads/kconfig_process.png)

Note that all configuration items have a default value.

The first step reads the Kconfig file under the source root to construct an initial configuration database; then it updates the initial database by reading an existing configuration file according to this priority:

1. `.config`
2. `/lib/modules/$(shell,uname -r)/.config`
3. `/etc/kernel-config`
4. `/boot/config-$(shell,uname -r)`
5. `ARCH_DEFCONFIG`
6. `arch/$(ARCH)/defconfig`

If you are doing GUI-based configuration via `menuconfig` or command-line-based configuration via `oldconfig`, the database is updated according to your customization. Finally, the configuration database is dumped into the `.config` file.

But the `.config` file is not the final fodder for kernel building; this is why the `syncconfig` target exists. `syncconfig` used to be a config target called `silentoldconfig`, but it doesn't do what the old name says, so it was renamed. Also, because it is for internal use (not for users), it was dropped from the list above.

Here is an illustration of what `syncconfig` does:

![](https://opensource.com/sites/default/files/uploads/syncconfig.png)

`syncconfig` takes `.config` as input and outputs many other files, which fall into three categories:

* `auto.conf` & `tristate.conf` are used for makefile text processing. For example, you may see a statement like this in a component's makefile: `obj-$(CONFIG_GENERIC_CALIBRATE_DELAY) += calibrate.o`.
* `autoconf.h` is used in C-language source files.
* Empty header files under `include/config/` are used for configuration-dependency tracking during kbuild, which is explained below.
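To make the first two categories concrete, here is a hedged sketch of what the generated files might contain for a single enabled option (illustrative fragments, not copied from a real build):

```
# A line in include/config/auto.conf — make syntax, read by kbuild makefiles:
CONFIG_GENERIC_CALIBRATE_DELAY=y

/* The matching line in include/generated/autoconf.h — C syntax: */
#define CONFIG_GENERIC_CALIBRATE_DELAY 1
```

The same option would also get an empty tracking header under `include/config/`, used only for dependency tracking.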
After the configuration is done, we will know which files and code pieces are not compiled.

### kbuild

Component-wise building, called *recursive make*, is a common way for GNU `make` to manage a large project. kbuild is a good example of recursive make. By dividing source files into different modules/components, each component is managed by its own makefile. When you start building, a top makefile invokes each component's makefile in the proper order, builds the components, and collects them into the final executive.

kbuild refers to different kinds of makefiles:

* `Makefile` is the top makefile located in the source root.
* `.config` is the kernel configuration file.
* `arch/$(ARCH)/Makefile` is the arch makefile, which is a supplement to the top makefile.
* `scripts/Makefile.*` describes common rules for all kbuild makefiles.
* Finally, there are about 500 kbuild makefiles.

The top makefile includes the arch makefile, reads the `.config` file, descends into subdirectories, invokes `make` on each component's makefile with the help of routines defined in `scripts/Makefile.*`, builds up each intermediate object, and links all the intermediate objects into `vmlinux`. The kernel document [Documentation/kbuild/makefiles.txt][2] describes all aspects of these makefiles.

As an example, let's look at how `vmlinux` is produced on x86-64:

![vmlinux overview][4]

(This illustration is based on Richard Y. Steven's [blog][5]. It was updated and is used with the author's permission.)

All the `.o` files that go into `vmlinux` first go into their own `built-in.a`, which is indicated via the variables `KBUILD_VMLINUX_INIT`, `KBUILD_VMLINUX_MAIN`, and `KBUILD_VMLINUX_LIBS`, then are collected into the `vmlinux` file.

Take a look at how recursive make is implemented in the Linux kernel, with the help of this simplified makefile code:

```
# In top Makefile
vmlinux: scripts/link-vmlinux.sh $(vmlinux-deps)
        +$(call if_changed,link-vmlinux)

# Variable assignments
vmlinux-deps := $(KBUILD_LDS) $(KBUILD_VMLINUX_INIT) $(KBUILD_VMLINUX_MAIN) $(KBUILD_VMLINUX_LIBS)

export KBUILD_VMLINUX_INIT := $(head-y) $(init-y)
export KBUILD_VMLINUX_MAIN := $(core-y) $(libs-y2) $(drivers-y) $(net-y) $(virt-y)
export KBUILD_VMLINUX_LIBS := $(libs-y1)
export KBUILD_LDS := arch/$(SRCARCH)/kernel/vmlinux.lds

init-y := init/
drivers-y := drivers/ sound/ firmware/
net-y := net/
libs-y := lib/
core-y := usr/
virt-y := virt/

# Transform to corresponding built-in.a
init-y := $(patsubst %/, %/built-in.a, $(init-y))
core-y := $(patsubst %/, %/built-in.a, $(core-y))
drivers-y := $(patsubst %/, %/built-in.a, $(drivers-y))
net-y := $(patsubst %/, %/built-in.a, $(net-y))
libs-y1 := $(patsubst %/, %/lib.a, $(libs-y))
libs-y2 := $(patsubst %/, %/built-in.a, $(filter-out %.a, $(libs-y)))
virt-y := $(patsubst %/, %/built-in.a, $(virt-y))

# Setup the dependency. vmlinux-deps are all intermediate objects, vmlinux-dirs
# are phony targets, so every time comes to this rule, the recipe of vmlinux-dirs
# will be executed. Refer "4.6 Phony Targets" of `info make`
$(sort $(vmlinux-deps)): $(vmlinux-dirs) ;

# Variable vmlinux-dirs is the directory part of each built-in.a
vmlinux-dirs := $(patsubst %/,%,$(filter %/, $(init-y) $(init-m) \
                $(core-y) $(core-m) $(drivers-y) $(drivers-m) \
                $(net-y) $(net-m) $(libs-y) $(libs-m) $(virt-y)))

# The entry of recursive make
$(vmlinux-dirs):
        $(Q)$(MAKE) $(build)=$@ need-builtin=1
```

The recipe of recursive make expands to something like this:

```
make -f scripts/Makefile.build obj=init need-builtin=1
```

This means `make` will go into `scripts/Makefile.build` to continue the work of building each `built-in.a`. With the help of `scripts/link-vmlinux.sh`, the `vmlinux` file finally lands under the source root.

#### vmlinux vs. bzImage

Many Linux kernel developers may not be clear about the relationship between `vmlinux` and `bzImage`. For example, here is their relationship in x86-64:

![](https://opensource.com/sites/default/files/uploads/vmlinux-bzimage.png)

The source-root `vmlinux` is stripped, compressed, put into `piggy.S`, then linked with other peer objects into `arch/x86/boot/compressed/vmlinux`. Meanwhile, a file called `setup.bin` is produced under `arch/x86/boot`. There may be an optional third file with relocation info, depending on the configuration of `CONFIG_X86_NEED_RELOCS`.

A host program called `build`, provided by the kernel, builds these two (or three) parts into the final `bzImage` file.

#### Dependency tracking

kbuild tracks three kinds of dependencies:

1. All prerequisite files (both `*.c` and `*.h`)
2. `CONFIG_` options used in all prerequisite files
3. Command-line dependencies used to compile the target

The first one is easy to understand, but what about the second and third? Kernel developers often see code pieces like this:

```
#ifdef CONFIG_SMP
__boot_cpu_id = cpu;
#endif
```

When `CONFIG_SMP` changes, this piece of code should be recompiled. The command line used to compile a source file also matters, because different command lines may result in different object files.

When a `.c` file uses a header file via a `#include` directive, you need to write a rule like this:

```
main.o: defs.h
        recipe...
```

When managing a large project, you need a lot of these kinds of rules; writing them all would be tedious and boring. Fortunately, most modern C compilers can write these rules for you by looking at the `#include` lines in the source file. For the GNU Compiler Collection (GCC), it is just a matter of adding a command-line parameter: `-MD depfile`

```
# In scripts/Makefile.lib
c_flags = -Wp,-MD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE) \
          -include $(srctree)/include/linux/compiler_types.h \
          $(__c_flags) $(modkern_cflags) \
          $(basename_flags) $(modname_flags)
```

This would generate a `.d` file with content like:

```
init_task.o: init/init_task.c include/linux/kconfig.h \
 include/generated/autoconf.h include/linux/init_task.h \
 include/linux/rcupdate.h include/linux/types.h \
 ...
```

Then the host program [fixdep][6] takes care of the other two dependencies by taking the depfile and command line as input, then outputting a `.<target>.cmd` file in makefile syntax, which records the command line and all the prerequisites (including the configuration) for the target. It looks like this:

```
# The command line used to compile the target
cmd_init/init_task.o := gcc -Wp,-MD,init/.init_task.o.d -nostdinc ...
...
# The dependency files
deps_init/init_task.o := \
$(wildcard include/config/posix/timers.h) \
$(wildcard include/config/arch/task/struct/on/stack.h) \
$(wildcard include/config/thread/info/in/task.h) \
...
  include/uapi/linux/types.h \
  arch/x86/include/uapi/asm/types.h \
  include/uapi/asm-generic/types.h \
...
```

A `.<target>.cmd` file will be included during recursive make, providing all the dependency info and helping to decide whether to rebuild a target or not.

The secret behind this is that `fixdep` will parse the depfile (the `.d` file), then parse all the dependency files inside it, search the text for all the `CONFIG_` strings, convert them to the corresponding empty header files, and add them to the target's prerequisites. Every time a configuration changes, the corresponding empty header file is updated too, so kbuild can detect that change and rebuild the targets that depend on it. Because the command line is also recorded, it is easy to compare the last and current compiling parameters.
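The option-to-header name mapping can be sketched in a few lines. This is a toy model of the transformation, not fixdep's actual C code; the mapping it implements matches the `deps_init/init_task.o` listing above (for example, `CONFIG_THREAD_INFO_IN_TASK` maps to `include/config/thread/info/in/task.h`).

```python
def config_to_tracking_header(option: str) -> str:
    """Map a CONFIG_ option name to the empty header kbuild touches for it:
    strip the CONFIG_ prefix, lowercase, and turn underscores into slashes."""
    name = option[len("CONFIG_"):]
    return "include/config/" + name.lower().replace("_", "/") + ".h"

print(config_to_tracking_header("CONFIG_THREAD_INFO_IN_TASK"))
# include/config/thread/info/in/task.h
```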
### Looking ahead

Kconfig/kbuild remained static for a long time, until the new maintainer, Masahiro Yamada, joined at the beginning of 2017; now kbuild is under active development again. Don't be surprised if you soon see something different from what's in this article.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/10/kbuild-and-kconfig

Author: [Cao Jin][a]
Selected by: [lujun9972][b]
Translated by: [wxy](https://github.com/wxy)
Proofread by: [wxy](https://github.com/wxy)

This article was compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux.cn](https://linux.cn/)

[a]: https://opensource.com/users/pinocchio
[b]: https://github.com/lujun9972
[1]: https://github.com/torvalds/linux/blob/master/Documentation/kbuild/kconfig-language.txt
[2]: https://www.mjmwired.net/kernel/Documentation/kbuild/makefiles.txt
[3]: https://opensource.com/file/411516
[4]: https://opensource.com/sites/default/files/uploads/vmlinux_generation_process.png (vmlinux overview)
[5]: https://blog.csdn.net/richardysteven/article/details/52502734
[6]: https://github.com/torvalds/linux/blob/master/scripts/basic/fixdep.c
Create SVG animations with MacSVG
======

> Open source SVG: the writing is on the wall.

![](https://img.linux.net.cn/data/attachment/album/201908/18/000809mzl1wb1ww754z455.jpg)

The Neo-Babylonian regent [Belshazzar][1] did not heed [the writing on the wall][2] that magically appeared during his great feast. However, if he had had a laptop and a good internet connection in 539 BC, he might have staved off those pesky Persians by reading the SVG in a browser.

Animated text and objects appearing on web pages are a great way to build user interest and engagement. There are several ways to achieve this, such as a video embed, an animated GIF, or a slideshow — but you can also use [Scalable Vector Graphics (SVG)][3].

An SVG image is different from a JPG because it is scalable without losing its resolution. A vector image is created by points, not pixels, so no matter how large it gets, it will not lose resolution or pixelate. An example of making good use of scalable, static images would be a website's logo.

### Moving it, moving it

You can create SVG images with several drawing programs, including the open source [Inkscape][4] and Adobe Illustrator. Getting your images to "do something" requires a bit more effort. Fortunately, there are open source solutions that would get even Belshazzar's attention.

[MacSVG][5] is one tool that will get your images moving. You can find the source code on [GitHub][6].

According to its [website][5], MacSVG, developed by Douglas Ward of Conway, Arkansas, is an "open source Mac OS app for designing HTML5 SVG art and animation."

I wanted to use MacSVG to create an animated signature. I admit I found the process a bit confusing, and I failed at my first attempts to create an actual animated SVG image.

![](https://opensource.com/sites/default/files/uploads/macsvg-screen.png)

The important thing to understand first is what actually makes the displayed calligraphy write itself out.

The attribute behind the animated writing is [stroke-dasharray][7]. Breaking the term into three words helps explain what is happening: "stroke" refers to the line or stroke made with a pen (whether physical or digital); "dash" means breaking the stroke down into a series of dashes; "array" means producing the whole thing as an array. That's a simple overview, but it helped me understand what was supposed to happen and why.
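Before looking at a full signature, a minimal hand-written sketch shows the idea. This fragment is illustrative only: it assumes a straight 200-unit path, and it animates the dash pattern from "nothing drawn, 200-unit gap" to "200 units drawn, no gap", which looks like the line being written over two seconds.

```
<svg xmlns="http://www.w3.org/2000/svg" width="220" height="40">
  <path d="M10,20 H210" stroke="black" stroke-width="3" fill="none">
    <animate attributeName="stroke-dasharray" values="0,200;200,0"
             dur="2s" fill="freeze"/>
  </path>
</svg>
```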
With MacSVG, you can import a graphic (.PNG) and use the pen tool to trace the path of the writing. I used a cursive representation of my name. Then it was just a matter of applying the attributes to animate the writing, increase and decrease the thickness of the stroke, change its color, and so on. Once completed, the animated writing was exported as an .SVG file and was ready for use on the web. In addition to handwriting, MacSVG can be used for many different types of SVG animation.

### The writing in WordPress

I was ready to upload and share my SVG example on my [WordPress][8] site, but I discovered that WordPress does not allow SVG media imports. Fortunately, I found a handy plugin: Benbodhi's [SVG Support][9] allowed a quick, easy import of my SVG the same way I would import a JPG to my media library. I was able to show my [writing on the wall][10] to Babylonians everywhere.

I opened the source code of the SVG in [Brackets][11], and here are the results:

```
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:cc="http://web.resource.org/cc/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd" xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape" height="360px" style="zoom: 1;" cursor="default" id="svg_document" width="480px" baseProfile="full" version="1.1" preserveAspectRatio="xMidYMid meet" viewBox="0 0 480 360"><title id="svg_document_title">Path animation with stroke-dasharray</title><desc id="desc1">This example demonstrates the use of a path element, an animate element, and the stroke-dasharray attribute to simulate drawing.</desc><defs id="svg_document_defs"></defs><g id="main_group"></g><path stroke="#004d40" id="path2" stroke-width="9px" d="M86,75 C86,75 75,72 72,61 C69,50 66,37 71,34 C76,31 86,21 92,35 C98,49 95,73 94,82 C93,91 87,105 83,110 C79,115 70,124 71,113 C72,102 67,105 75,97 C83,89 111,74 111,74 C111,74 119,64 119,63 C119,62 110,57 109,58 C108,59 102,65 102,66 C102,67 101,75 107,79 C113,83 118,85 122,81 C126,77 133,78 136,64 C139,50 147,45 146,33 C145,21 136,15 132,24 C128,33 123,40 123,49 C123,58 135,87 135,96 C135,105 139,117 133,120 C127,123 116,127 120,116 C124,105 144,82 144,81 C144,80 158,66 159,58 C160,50 159,48 161,43 C163,38 172,23 166,22 C160,21 155,12 153,23 C151,34 161,68 160,78 C159,88 164,108 163,113 C162,118 165,126 157,128 C149,130 152,109 152,109 C152,109 185,64 185,64 " fill="none" transform=""><animate values="0,1739;1739,0;" attributeType="XML" begin="0; animate1.end+5s" id="animateSig1" repeatCount="indefinite" attributeName="stroke-dasharray" fill="freeze" dur="2"></animate></path></svg>
```

What would you create with MacSVG?

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/10/macsvg-open-source-tool-animation

Author: [Jeff Macharyas][a]
Selected by: [lujun9972][b]
Translated by: [wxy](https://github.com/wxy)
Proofread by: [wxy](https://github.com/wxy)

This article was compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux.cn](https://linux.cn/)

[a]: https://opensource.com/users/rikki-endsley
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Belshazzar
[2]: https://en.wikipedia.org/wiki/Belshazzar%27s_feast
[3]: https://en.wikipedia.org/wiki/Scalable_Vector_Graphics
[4]: https://inkscape.org/
[5]: https://macsvg.org/
[6]: https://github.com/dsward2/macSVG
[7]: https://gist.github.com/mbostock/5649592
[8]: https://macharyas.com/
[9]: https://wordpress.org/plugins/svg-support/
[10]: https://macharyas.com/index.php/2018/10/14/open-source-svg/
[11]: http://brackets.io/
* `PgUp` / `PgDn` - move up or down the list a page at a time,
* `F3` / `F4` - jump straight to the top or bottom of the list,
* `F5` - refresh,
* `F6` - mark/unmark files (marked files are indicated with a `*` in front of them),
* `F7` / `F8` - mark/unmark all files at once,
* `F9` - sort the list in the following order - date and time, name, size.

published/20181220 Getting started with Prometheus.md
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11234-1.html)
[#]: subject: (Getting started with Prometheus)
[#]: via: (https://opensource.com/article/18/12/introduction-prometheus)
[#]: author: (Michael Zamot https://opensource.com/users/mzamot)

Getting started with Prometheus
======

> Learn to install and write queries for the Prometheus monitoring and alerting system.

![](https://img.linux.net.cn/data/attachment/album/201908/16/113724zqe12khkdye2mesy.jpg)

[Prometheus][1] is an open source monitoring and alerting system that pulls metrics directly from agents running on target hosts and stores the collected samples centrally on its server. Metrics can also be pushed using plugins like `collectd_exporter` — although this is not Prometheus's default behavior, it may be useful in environments where hosts are behind a firewall or where security policies forbid opening ports.

Prometheus is a project of the [Cloud Native Computing Foundation (CNCF)][2]. It scales up using a federation model, which enables one Prometheus server to scrape another Prometheus server. This allows the creation of a hierarchical topology, where a central system or a higher-level Prometheus server can scrape aggregated data already collected from subordinate instances.

Besides the Prometheus server, its most common components are its [Alertmanager][3] and its exporters.

Alerting rules can be created within Prometheus and configured to send custom alerts to Alertmanager. Alertmanager then processes and handles these alerts, including sending notifications through different mechanisms, such as email or third-party services like [PagerDuty][4].

Prometheus's exporters can be libraries, processes, devices, or anything else that exposes metrics for Prometheus to scrape. The metrics are available at the endpoint `/metrics`, which allows Prometheus to scrape them directly without needing an agent. The tutorial in this article uses `node_exporter` to expose the target hosts' hardware and operating system metrics. Exporters' output is plaintext and highly readable, which is one of Prometheus's strengths.

In addition, you can configure [Grafana][5] to use Prometheus as a backend to provide data visualization and dashboarding functions.

### Making sense of Prometheus's configuration file

The number of seconds between scrapes of `/metrics` controls the granularity of the time-series database. This is defined in the configuration file as the `scrape_interval` parameter, which by default is set to 60 seconds.

Targets are set for each scrape job in the `scrape_configs` section. Each job has its own name and a set of labels that can help you filter, categorize, and more easily identify the target. One job can have many targets.

### Installing Prometheus

In this tutorial, for simplicity, we will install the Prometheus server and `node_exporter` with Docker, which should already be properly installed and configured on your system. For a more in-depth, automated approach, I recommend Steve Ovens's article [How to use Ansible to set up system monitoring with Prometheus][6].

Before starting, create the Prometheus configuration file `prometheus.yml` in your working directory as follows:

```
global:
  scrape_interval:      15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: 'prometheus'

    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'webservers'

    static_configs:
      - targets: ['<node exporter node IP>:9100']
```

Start Prometheus with Docker by running the following command:

```
$ sudo docker run -d -p 9090:9090 \
    -v /path/to/prometheus.yml:/etc/prometheus/prometheus.yml \
    prom/prometheus
```

By default, the Prometheus server will use port 9090. If this port is already in use, you can change it by adding the parameter `--web.listen-address="<IP of machine>:<port>"` at the end of the previous command.

On the machine you want to monitor, download and run the `node_exporter` container with this command:

```
$ sudo docker run -d -v "/proc:/host/proc" -v "/sys:/host/sys" \
    -v "/:/rootfs" --net="host" prom/node-exporter \
    --path.procfs /host/proc --path.sysfs /host/sys \
    --collector.filesystem.ignored-mount-points "^/(sys|proc|dev|host|etc)($|/)"
```

For the purposes of this learning exercise, you can install `node_exporter` and Prometheus on the same machine. Please note that it's not wise to run `node_exporter` under Docker in production — this is for testing purposes only.

To verify that `node_exporter` is running, open your browser and navigate to `http://<IP of Node exporter host>:9100/metrics`. All the collected metrics will be displayed; these are the same metrics Prometheus will scrape.

![](https://opensource.com/sites/default/files/uploads/check-node_exporter.png)

To confirm that the Prometheus server installation was successful, open your browser and navigate to: <http://localhost:9090>.

You should see the Prometheus interface. Click on "Status" and then "Targets". Under "Status", you should see your machine listed as "UP".

![](https://opensource.com/sites/default/files/uploads/targets-up.png)

### Using Prometheus queries

It's time to get familiar with [PromQL][7], Prometheus's query syntax, and its graphing web interface. Go to `http://localhost:9090/graph` on your Prometheus server. You will see a query editor and two tabs: "Graph" and "Console".

Prometheus stores all data as time series, identifying each one with a metric name. For example, the metric `node_filesystem_avail_bytes` shows the available filesystem space. The metric's name can be used in the expression box to select all of the time series with that name, producing an instant vector. If desired, these time series can be filtered using selectors and labels — sets of key-value pairs — for example:

```
node_filesystem_avail_bytes{fstype="ext4"}
```

When filtering, you can match "exactly equal" (`=`), "not equal" (`!=`), "regex-match" (`=~`), and "do not regex-match" (`!~`). The following examples illustrate this:

To filter `node_filesystem_avail_bytes` to show both ext4 and XFS filesystems:

```
node_filesystem_avail_bytes{fstype=~"ext4|xfs"}
```

To exclude a match:

```
node_filesystem_avail_bytes{fstype!="xfs"}
```

You can also get a range of samples back in time by using square brackets. You can use `s` to represent seconds, `m` for minutes, `h` for hours, `d` for days, `w` for weeks, and `y` for years. When using time ranges, the vector returned is a range vector.

For example, the following produces the samples from five minutes ago to the present:

```
node_memory_MemAvailable_bytes[5m]
```

Prometheus also includes functions that enable advanced queries, such as this:

```
100 * (1 - avg by(instance)(irate(node_cpu_seconds_total{job='webservers',mode='idle'}[5m])))
```

Notice how the labels are used to filter the job and the mode. The metric `node_cpu_seconds_total` returns a counter, and the `irate()` function calculates the per-second rate of change based on the last two data points of the range interval (which means the range can be smaller than five minutes). To calculate the overall CPU usage, you can use the `idle` mode of the `node_cpu_seconds_total` metric. The idle proportion of a processor is the opposite of the busy proportion, so the `irate` value is subtracted from 1. To make it a percentage, multiply it by 100.
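The arithmetic behind that expression is easy to check by hand. Here is a small sketch of the "100 × (1 − average idle rate)" calculation with hypothetical idle fractions (not real samples from a server):

```python
def cpu_usage_pct(idle_rates):
    """100 * (1 - average idle rate), mirroring the PromQL expression above.
    idle_rates holds the per-CPU fraction of recent seconds spent idle."""
    avg_idle = sum(idle_rates) / len(idle_rates)
    return 100 * (1 - avg_idle)

# An instance whose two CPUs were idle 90% and 70% of the time is 20% busy.
print(round(cpu_usage_pct([0.9, 0.7]), 6))
```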
![](https://opensource.com/sites/default/files/uploads/cpu-usage.png)

### Learn more

Prometheus is a powerful, scalable, lightweight monitoring tool that is easy to use and deploy — indispensable for every system administrator and developer. For these and other reasons, many companies are implementing Prometheus as part of their infrastructure.

To learn more about Prometheus and its functions, I recommend the following resources:

+ About [PromQL][8]
+ What [node_exporter collectors][9] are
+ [Prometheus functions][10]
+ [4 open source monitoring tools][11]
+ [Now available: The open source guide to DevOps monitoring tools][12]

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/12/introduction-prometheus

Author: [Michael Zamot][a]
Selected by: [lujun9972][b]
Translated by: [wxy](https://github.com/wxy)
Proofread by: [wxy](https://github.com/wxy)

This article was compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux.cn](https://linux.cn/)

[a]: https://opensource.com/users/mzamot
[b]: https://github.com/lujun9972
[1]: https://prometheus.io/
[2]: https://www.cncf.io/
[3]: https://prometheus.io/docs/alerting/alertmanager/
[4]: https://en.wikipedia.org/wiki/PagerDuty
[5]: https://grafana.com/
[6]: https://opensource.com/article/18/3/how-use-ansible-set-system-monitoring-prometheus
[7]: https://prometheus.io/docs/prometheus/latest/querying/basics/
[8]: https://prometheus.io/docs/prometheus/latest/querying/basics/
[9]: https://github.com/prometheus/node_exporter#collectors
[10]: https://prometheus.io/docs/prometheus/latest/querying/functions/
[11]: https://opensource.com/article/18/8/open-source-monitoring-tools
[12]: https://opensource.com/article/18/8/now-available-open-source-guide-devops-monitoring-tools
150
published/20190318 Let-s try dwm - dynamic window manager.md
Normal file
150
published/20190318 Let-s try dwm - dynamic window manager.md
Normal file
@ -0,0 +1,150 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11235-1.html)
|
||||
[#]: subject: (Let’s try dwm — dynamic window manager)
|
||||
[#]: via: (https://fedoramagazine.org/lets-try-dwm-dynamic-window-manger/)
|
||||
[#]: author: (Adam Šamalík https://fedoramagazine.org/author/asamalik/)
|
||||
|
||||
试试动态窗口管理器 dwm 吧
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
如果你崇尚效率和极简主义,并且正在为你的 Linux 桌面寻找新的窗口管理器,那么你应该尝试一下<ruby>动态窗口管理器<rt>dynamic window manager</rt></ruby> dwm。以不到 2000 标准行的代码写就的 dwm,是一个速度极快而功能强大,且可高度定制的窗口管理器。
|
||||
|
||||
你可以在平铺、单片和浮动布局之间动态选择,使用标签将窗口组织到多个工作区,并使用键盘快捷键快速导航。本文将帮助你开始使用 dwm。
|
||||
|
||||
### 安装
|
||||
|
||||
要在 Fedora 上安装 dwm,运行:
|
||||
|
||||
```
|
||||
$ sudo dnf install dwm dwm-user
|
||||
```
|
||||
|
||||
`dwm` 包会安装窗口管理器本身,`dwm-user` 包显著简化了配置,本文稍后将对此进行说明。
|
||||
|
||||
此外,为了能够在需要时锁定屏幕,我们还将安装 `slock`,这是一个简单的 X 显示锁屏。
|
||||
|
||||
```
|
||||
$ sudo dnf install slock
|
||||
```
|
||||
|
||||
当然,你可以根据你的个人喜好使用其它的锁屏。
|
||||
|
||||
### 快速入门
|
||||
|
||||
要启动 dwm,在登录屏选择 “dwm-user” 选项。
|
||||
|
||||
![][2]
|
||||
|
||||
登录后,你将看到一个非常简单的桌面。事实上,顶部唯一的一个面板列出了代表工作空间的 9 个标签和一个代表窗户布局的 `[]=` 符号。
|
||||
|
||||
#### 启动应用
|
||||
|
||||
在查看布局之前,首先启动一些应用程序,以便你可以随时使用布局。可以通过按 `Alt+p` 并键入应用程序的名称,然后回车来启动应用程序。还有一个快捷键 `Alt+Shift+Enter` 用于打开终端。
|
||||
|
||||
现在有一些应用程序正在运行了,请查看布局。
|
||||
|
||||
#### 布局
|
||||
|
||||
默认情况下有三种布局:平铺布局,单片布局和浮动布局。
|
||||
|
||||
平铺布局由条形图上的 `[]=` 表示,它将窗口组织为两个主要区域:左侧为主区域,右侧为堆叠区。你可以按 `Alt+t` 激活平铺布局。
|
||||
|
||||
![][3]
|
||||
|
||||
平铺布局背后的想法是,主窗口放在主区域中,同时仍然可以看到堆叠区中的其他窗口。你可以根据需要在它们之间快速切换。
|
||||
|
||||
要在两个区域之间交换窗口,请将鼠标悬停在堆叠区中的一个窗口上,然后按 `Alt+Enter` 将其与主区域中的窗口交换。
|
||||
|
||||
![][4]
|
||||
|
||||
单片布局由顶部栏上的 `[N]` 表示,可以使你的主窗口占据整个屏幕。你可以按 `Alt+m` 切换到它。
|
||||
|
||||
最后,浮动布局可让你自由移动和调整窗口大小。它的快捷方式是 `Alt+f`,顶栏上的符号是 `><>`。
|
||||
|
||||
#### 工作区和标签
|
||||
|
||||
每个窗口都分配了一个顶部栏中列出的标签(1-9)。要查看特定标签,请使用鼠标单击其编号或按 `Alt+1..9`。你甚至可以使用鼠标右键单击其编号,一次查看多个标签。
|
||||
|
||||
通过使用鼠标突出显示后,并按 `Alt+Shift+1..9`,窗口可以在不同标签之间移动。
|
||||
|
||||
### 配置

为了使 dwm 尽可能简约,它不使用典型的配置文件,而是需要你修改代表配置的 C 语言头文件,并重新编译它。但是不要担心,在 Fedora 中你只需要简单地编辑主目录中的一个文件,而其他一切都会在后台发生,这要归功于 Fedora 的维护者提供的 `dwm-user` 包。

首先,你需要使用类似于以下的命令将文件复制到主目录中:

```
$ mkdir ~/.dwm
$ cp /usr/src/dwm-VERSION-RELEASE/config.def.h ~/.dwm/config.h
```

你可以通过运行 `man dwm-start` 来获取确切的路径。

其次,只需编辑 `~/.dwm/config.h` 文件。例如,让我们配置一个新的快捷方式:通过按 `Alt+Shift+L` 来锁定屏幕。

考虑到我们已经安装了本文前面提到的 `slock` 包,我们需要在文件中添加以下两行以使其工作:

在 `/* commands */` 注释下,添加:

```
static const char *slockcmd[] = { "slock", NULL };
```

添加下列行到 `static Key keys[]` 中:

```
{ MODKEY|ShiftMask,             XK_l,      spawn,          {.v = slockcmd } },
```

最终,它应该看起来如下:

```
...
/* commands */
static char dmenumon[2] = "0"; /* component of dmenucmd, manipulated in spawn() */
static const char *dmenucmd[] = { "dmenu_run", "-m", dmenumon, "-fn", dmenufont, "-nb", normbgcolor, "-nf", normfgcolor, "-sb", selbgcolor, "-sf", selfgcolor, NULL };
static const char *termcmd[]  = { "st", NULL };
static const char *slockcmd[] = { "slock", NULL };

static Key keys[] = {
	/* modifier                     key        function        argument */
	{ MODKEY|ShiftMask,             XK_l,      spawn,          {.v = slockcmd } },
	{ MODKEY,                       XK_p,      spawn,          {.v = dmenucmd } },
	{ MODKEY|ShiftMask,             XK_Return, spawn,          {.v = termcmd } },
...
```

保存文件。
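顺便一提(这是原文之外的一个假设性示例),在重新编译之前,可以先确认 `config.h` 中通过 `spawn` 调用的外部命令(如 `slock`、`st`、`dmenu_run`)都已安装在 PATH 中,否则对应的快捷键不会有任何效果。下面是一个简单的 Python 小脚本示意:

```python
import shutil

def missing_commands(commands):
    """返回 PATH 中找不到的命令名列表。"""
    return [cmd for cmd in commands if shutil.which(cmd) is None]

# config.h 中通过 spawn 调用的外部程序(示例,按你的配置替换):
print(missing_commands(["slock", "st", "dmenu_run"]))
```

若输出的列表非空,说明相应的软件包还没有安装,需要先用 `dnf` 装好再重新登录。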

最后,按 `Alt+Shift+q` 注销,然后重新登录。`dwm-user` 包提供的脚本将识别你已更改主目录中的 `config.h` 文件,并会在登录时重新编译 dwm。因为 dwm 非常小,它快到你甚至都不会注意到它重新编译了。

你现在可以尝试按 `Alt+Shift+L` 锁定屏幕,然后输入密码并按回车键再次登录。

### 总结

如果你崇尚极简主义并想要一个非常快速而功能强大的窗口管理器,dwm 可能正是你一直在寻找的。但是,它可能不适合初学者,你可能需要做许多额外的配置才能把它调整成你喜欢的样子。

要了解有关 dwm 的更多信息,请参阅该项目的主页:<https://dwm.suckless.org/>。

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/lets-try-dwm-dynamic-window-manger/

作者:[Adam Šamalík][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/asamalik/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/03/dwm-magazine-image-816x345.png
[2]: https://fedoramagazine.org/wp-content/uploads/2019/03/choosing-dwm-1024x469.png
[3]: https://fedoramagazine.org/wp-content/uploads/2019/03/dwm-desktop-1024x593.png
[4]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-2019-03-15-at-11.12.32-1024x592.png
@ -0,0 +1,78 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11245-1.html)
[#]: subject: (How To Turn On And Shutdown The Raspberry Pi [Absolute Beginner Tip])
[#]: via: (https://itsfoss.com/turn-on-raspberry-pi/)
[#]: author: (Chinmay https://itsfoss.com/author/chinmay/)

如何打开和关闭树莓派(绝对新手)
======

> 这篇短文教你如何打开树莓派以及如何在之后正确关闭它。

![](https://img.linux.net.cn/data/attachment/album/201908/19/192825rlrjy3sj77j7j79y.jpg)

[树莓派][1]是[最流行的 SBC(单板计算机)][2]之一。如果你对这个话题感兴趣,我相信你已经有了一个树莓派。我还建议你使用[其他树莓派配件][3]来开始使用你的设备。

你已经准备好打开并开始使用它了。与桌面和笔记本电脑等传统电脑相比,它既有相似之处,也有不同之处。

今天,让我们继续学习如何打开和关闭树莓派,因为它并没有真正的“电源按钮”。

在本文中,我使用的是树莓派 3B+,但本文内容对所有树莓派变体都适用。

### 如何打开树莓派

![Micro USB port for Power][7]

micro USB 口为树莓派供电,打开它的方式是将电源线插入 micro USB 口。但是开始之前,你应该确保做了以下事情。

  * 根据官方[指南][8]准备带有 Raspbian 的 micro SD 卡并插入 micro SD 卡插槽。
  * 插入 HDMI 线、USB 键盘和鼠标。
  * 插入以太网线(可选)。

完成上述操作后,请插入电源线。这会打开树莓派,显示屏将亮起并加载操作系统。

如果你将其关闭并且想要再次打开它,则必须从电源插座(首选)或从电路板的电源端口拔下电源线,然后再插上。它没有电源按钮。
### 关闭树莓派

关闭树莓派非常简单,单击菜单按钮并选择关闭。

![Turn off Raspberry Pi graphically][9]

或者你可以在终端使用 [shutdown 命令][10]:

```
sudo shutdown now
```

`shutdown` 执行后**等待**它完成,接着你可以关闭电源。

再说一次,树莓派关闭后,没有真正的办法可以在不关闭再打开电源的情况下打开树莓派。你可以使用 GPIO 打开树莓派,但这需要额外的改装。

*注意:micro USB 口往往是脆弱的,因此请通过电源插座来开关电源,而不是经常拔插 micro USB 口。*

好吧,这就是关于打开和关闭树莓派的所有内容,你打算用它做什么?请在评论中告诉我。
--------------------------------------------------------------------------------

via: https://itsfoss.com/turn-on-raspberry-pi/

作者:[Chinmay][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/chinmay/
[b]: https://github.com/lujun9972
[1]: https://www.raspberrypi.org/
[2]: https://linux.cn/article-10823-1.html
[3]: https://itsfoss.com/things-you-need-to-get-your-raspberry-pi-working/
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/raspberry-pi-3-microusb.png?fit=800%2C532&ssl=1
[8]: https://www.raspberrypi.org/documentation/installation/installing-images/README.md
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/Raspbian-ui-menu.jpg?fit=800%2C492&ssl=1
[10]: https://linuxhandbook.com/linux-shutdown-command/

57
published/20190611 What is a Linux user.md
Normal file
@ -0,0 +1,57 @@
[#]: collector: "lujun9972"
[#]: translator: "qfzy1233 "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-11231-1.html"
[#]: subject: "What is a Linux user?"
[#]: via: "https://opensource.com/article/19/6/what-linux-user"
[#]: author: "Anderson Silva https://opensource.com/users/ansilva/users/petercheer/users/ansilva/users/greg-p/users/ansilva/users/ansilva/users/bcotton/users/ansilva/users/seth/users/ansilva/users/don-watkins/users/ansilva/users/seth"

何谓 Linux 用户?
======

> “Linux 用户”这一定义已经拓展到了更大的范围,同时也发生了巨大的改变。

![](https://img.linux.net.cn/data/attachment/album/201908/15/211706trkbzp8juhbzifia.jpg)

*编者按:本文更新于 2019 年 6 月 11 日下午 1:15:19,以更准确地反映作者对 Linux 社区中开放和包容的社区性的看法。*

再有不到两年,Linux 内核就要迎来它 30 岁的生日了。让我们回想一下!1991 年的时候你在哪里?你出生了吗?那年我 13 岁!在 1991 到 1993 年间只推出了少数几款 Linux 发行版,它们中至少有三个:Slackware、Debian 和 Red Hat,为 Linux 运动的发展提供了[支柱][2]。

当年获得 Linux 发行版的副本,并在笔记本或服务器上进行安装和配置,和今天相比是很不一样的。当时是十分艰难的!也是令人沮丧的!如果你能让它运行起来,就是一个了不起的成就!我们不得不与不兼容的硬件、设备上的配置跳线、BIOS 问题以及许多其他问题作斗争。甚至即使硬件是兼容的,很多时候,你仍然需要编译内核、模块和驱动程序才能让它们在你的系统上工作。

如果你当时经历过那些,你可能会表示赞同。有些读者甚至称它们为“美好的过往”,因为选择使用 Linux 意味着仅仅是为了让操作系统继续运行,你就必须学习操作系统、计算机体系架构、系统管理、网络,甚至编程。但我并不赞同他们的说法,窃以为:Linux 在 IT 行业带给我们的最让人惊讶的改变就是,它成为了我们每个人技术能力的基础组成部分!

将近 30 年过去了,无论是桌面还是服务器领域,Linux 系统都有了脱胎换骨的变化。你可以在汽车上、飞机上、家用电器上、智能手机上……几乎任何地方发现 Linux 的影子!你甚至可以购买预装 Linux 的笔记本电脑、台式机和服务器。如果你考虑云计算,企业甚至个人都可以一键部署 Linux 虚拟机,由此可见 Linux 的应用已经变得多么普遍了。

考虑到这些,我想问你的问题是:**这个时代如何定义“Linux 用户”?**

如果你从 System76 或 Dell 为你的父母或祖父母购买一台 Linux 笔记本电脑,为其登录好他们的社交媒体和电子邮件,并告诉他们经常单击“系统升级”,那么他们现在就是 Linux 用户了。如果你是在 Windows 或 MacOS 机器上进行以上操作,那么他们就是 Windows 或 MacOS 用户。令人难以置信的是,与 90 年代不同,现在的 Linux 任何人都可以轻易上手。

由于种种原因,这也归因于 web 浏览器成为了桌面计算机上的“杀手级应用程序”。现在,许多用户并不关心他们使用的是什么操作系统,只要他们能够访问到他们的应用程序或服务。

你知道有多少人经常使用他们的电话、桌面或笔记本电脑,但不会管理他们系统上的文件、目录和驱动程序?又有多少人不会安装“应用程序商店”没有收录的二进制文件程序?更不要提从头编译应用程序,对我来说,几乎全是这样的。这正是成熟的开源软件和相应的生态对于易用性的改进的动人之处。

今天的 Linux 用户不需要像上世纪 90 年代或 21 世纪初的 Linux 用户那样了解、学习甚至查询信息,这并不是一件坏事。过去那种认为 Linux 只适合工科男使用的想法已经一去不复返了。

对于那些对计算机、操作系统以及在自由软件上创建、使用和协作的想法感兴趣、好奇、着迷的 Linux 用户来说,Linux 依旧有研究的空间。如今在 Windows 和 MacOS 上也有同样多的空间留给创造性的开源贡献者。今天,成为 Linux 用户就是成为一名与 Linux 系统同行的人。这是一件很棒的事情。

### Linux 用户定义的转变

当我开始使用 Linux 时,作为一个 Linux 用户意味着知道操作系统如何以各种方式、形态和形式运行。Linux 在某种程度上已经成熟,这使得“Linux 用户”的定义可以包含更广泛的领域及那些领域里的人们。这可能是显而易见的一点,但重要的还是要说清楚:任何 Linux 用户皆“生”而平等。
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/what-linux-user

作者:[Anderson Silva][a]
选题:[lujun9972][b]
译者:[qfzy1233](https://github.com/qfzy1233)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/ansilva/users/petercheer/users/ansilva/users/greg-p/users/ansilva/users/ansilva/users/bcotton/users/ansilva/users/seth/users/ansilva/users/don-watkins/users/ansilva/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22
[2]: https://en.wikipedia.org/wiki/Linux_distribution#/media/File:Linux_Distribution_Timeline.svg
@ -1,38 +1,38 @@
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11243-1.html)
[#]: subject: (How To Check Linux Package Version Before Installing It)
[#]: via: (https://www.ostechnix.com/how-to-check-linux-package-version-before-installing-it/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)

How To Check Linux Package Version Before Installing It
如何在安装之前检查 Linux 软件包的版本?
======

![Check Linux Package Version][1]

Most of you will know how to [**find the version of an installed package**][2] in Linux. But, what would you do to find the packages’ version which are not installed in the first place? No problem! This guide describes how to check Linux package version before installing it in Debian and its derivatives like Ubuntu. This small tip might be helpful for those wondering what version they would get before installing a package.
大多数人都知道如何在 Linux 中[查找已安装软件包的版本][2],但是,你会如何查找那些还没有安装的软件包的版本呢?很简单!本文将介绍在 Debian 及其衍生品(如 Ubuntu)中,如何在软件包安装之前检查它的版本。对于那些想在安装之前知道软件包版本的人来说,这个小技巧可能会有所帮助。

### Check Linux Package Version Before Installing It
### 在安装之前检查 Linux 软件包版本

There are many ways to find a package’s version even if it is not installed already in DEB-based systems. Here I have given a few methods.
在基于 DEB 的系统中,即使软件包还没有安装,也有很多方法可以查看它的版本。接下来,我将一一介绍。

##### Method 1 – Using Apt
#### 方法 1 – 使用 Apt

The quick and dirty way to check a package version, simply run:
检查软件包的版本的懒人方法:

```
$ apt show <package-name>
```

**Example:**
**示例:**

```
$ apt show vim
```

**Sample output:**
**示例输出:**

```
Package: vim
@ -67,23 +67,21 @@ Description: Vi IMproved - enhanced vi editor
N: There is 1 additional record. Please use the '-a' switch to see it
```

As you can see in the above output, “apt show” command displays, many important details of the package such as,
正如你在上面的输出中看到的,`apt show` 命令显示了软件包许多重要的细节,例如:

  1. package name,
  2. version,
  3. origin (from where the vim comes from),
  4. maintainer,
  5. home page of the package,
  6. dependencies,
  7. download size,
  8. description,
  9. and many.
  1. 包名称,
  2. 版本,
  3. 来源(vim 来自哪里),
  4. 维护者,
  5. 包的主页,
  6. 依赖,
  7. 下载大小,
  8. 简介,
  9. 其他。

因此,Ubuntu 仓库中可用的 Vim 版本是 **8.0.1453**。如果我把它安装到我的 Ubuntu 系统上,就会得到这个版本。

So, the available version of Vim package in the Ubuntu repositories is **8.0.1453**. This is the version I get if I install it on my Ubuntu system.

Alternatively, use **“apt policy”** command if you prefer short output:
或者,如果你不想看那么多的内容,那么可以使用 `apt policy` 这个命令:

```
$ apt policy vim
@ -98,7 +96,7 @@ vim:
     500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages
```

Or even shorter:
甚至更短:

```
$ apt list vim
@ -107,17 +105,17 @@ vim/bionic-updates,bionic-security 2:8.0.1453-1ubuntu1.1 amd64
N: There is 1 additional version. Please use the '-a' switch to see it
```

**Apt** is the default package manager in recent Ubuntu versions. So, this command is just enough to find the detailed information of a package. It doesn’t matter whether given package is installed or not. This command will simply list the given package’s version along with all other details.
`apt` 是 Ubuntu 最新版本的默认包管理器。因此,这个命令足以找到一个软件包的详细信息,给定的软件包是否安装并不重要。这个命令将简单地列出给定包的版本以及其他详细信息。

##### Method 2 – Using Apt-get
#### 方法 2 – 使用 Apt-get

To find a package version without installing it, we can use **apt-get** command with **-s** option.
要查看软件包的版本而不安装它,我们可以使用 `apt-get` 命令和 `-s` 选项。

```
$ apt-get -s install vim
```

**Sample output:**
**示例输出:**

```
NOTE: This is only a simulation!
@ -136,19 +134,19 @@ Inst vim (2:8.0.1453-1ubuntu1.1 Ubuntu:18.04/bionic-updates, Ubuntu:18.04/bionic
Conf vim (2:8.0.1453-1ubuntu1.1 Ubuntu:18.04/bionic-updates, Ubuntu:18.04/bionic-security [amd64])
```

Here, -s option indicates **simulation**. As you can see in the output, It performs no action. Instead, It simply performs a simulation to let you know what is going to happen when you install the Vim package.
这里,`-s` 选项代表**模拟**。正如你在输出中看到的,它不执行任何操作。相反,它只是模拟执行,好让你知道在安装 Vim 时会发生什么。

You can substitute “install” option with “upgrade” option to see what will happen when you upgrade a package.
你可以将 `install` 选项替换为 `upgrade`,以查看升级包时会发生什么。

```
$ apt-get -s upgrade vim
```

##### Method 3 – Using Aptitude
#### 方法 3 – 使用 Aptitude

**Aptitude** is an ncurses and commandline-based front-end to APT package manger in Debian and its derivatives.
在 Debian 及其衍生品中,`aptitude` 是一个基于 ncurses(LCTT 译注:ncurses 是终端基于文本的字符处理的库)和命令行的前端 APT 包管理器。

To find the package version with Aptitude, simply run:
使用 aptitude 来查看软件包的版本,只需运行:

```
$ aptitude versions vim
@ -156,7 +154,7 @@ p 2:8.0.1453-1ubuntu1
p 2:8.0.1453-1ubuntu1.1 bionic-security,bionic-updates 500
```

You can also use simulation option ( **-s** ) to see what would happen if you install or upgrade package.
你还可以使用模拟选项(`-s`)来查看安装或升级包时会发生什么。

```
$ aptitude -V -s install vim
@ -167,33 +165,29 @@ Need to get 1,152 kB of archives. After unpacking 2,852 kB will be used.
Would download/install/remove packages.
```

Here, **-V** flag is used to display detailed information of the package version.

Similarly, just substitute “install” with “upgrade” option to see what would happen if you upgrade a package.
这里,`-V` 标志用于显示软件包的详细信息。

```
$ aptitude -V -s upgrade vim
```

Another way to find the non-installed package’s version using Aptitude command is:
类似的,只需将 `install` 替换为 `upgrade` 选项,即可查看升级包会发生什么。

```
$ aptitude search vim -F "%c %p %d %V"
```

Here,
这里,

  * **-F** is used to specify which format should be used to display the output,
  * **%c** – status of the given package (installed or not installed),
  * **%p** – name of the package,
  * **%d** – description of the package,
  * **%V** – version of the package.
  * `-F` 用于指定应使用哪种格式来显示输出,
  * `%c` – 包的状态(已安装或未安装),
  * `%p` – 包的名称,
  * `%d` – 包的简介,
  * `%V` – 包的版本。

当你不知道完整的软件包名称时,这非常有用。这个命令将列出包含给定字符串(即 vim)的所有软件包。

This is helpful when you don’t know the full package name. This command will list all packages that contains the given string (i.e vim).

Here is the sample output of the above command:
以下是上述命令的示例输出:

```
[...]
@ -207,17 +201,17 @@ p vim-voom Vim two-pane out
p vim-youcompleteme fast, as-you-type, fuzzy-search code completion engine for Vim 0+20161219+git
```

##### Method 4 – Using Apt-cache
#### 方法 4 – 使用 Apt-cache

**Apt-cache** command is used to query APT cache in Debian-based systems. It is useful for performing many operations on APT’s package cache. One fine example is we can [**list installed applications from a certain repository/ppa**][3].
`apt-cache` 命令用于查询基于 Debian 的系统中的 APT 缓存。对于要在 APT 的包缓存上执行很多操作时,它很有用。一个很好的例子是我们可以从[某个仓库或 ppa 中列出已安装的应用程序][3]。

Not just installed applications, we can also find the version of a package even if it is not installed. For instance, the following command will find the version of Vim package:
不仅是已安装的应用程序,我们还可以找到软件包的版本,即使它没有被安装。例如,以下命令将找到 Vim 的版本:

```
$ apt-cache policy vim
```

Sample output:
示例输出:

```
vim:
@ -231,19 +225,19 @@ vim:
     500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages
```

As you can see in the above output, Vim is not installed. If you wanted to install it, you would get version **8.0.1453**. It also displays from which repository the vim package is coming from.
正如你在上面的输出中所看到的,Vim 并没有安装。如果你想安装它,你会知道它的版本是 **8.0.1453**。它还显示 vim 包来自哪个仓库。
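作为补充(这是原文之外的一个假设性示例),如果想在脚本中提取候选版本号,可以直接解析 `apt-cache policy` 的输出中的 `Candidate:` 字段。下面的 Python 示意使用硬编码的示例输出作为输入;实际使用时,可以用 `subprocess` 运行 `apt-cache policy <package>` 获取真实输出:

```python
def candidate_version(policy_output):
    """从 apt-cache policy 的输出中提取 Candidate(候选版本)字段。"""
    for line in policy_output.splitlines():
        line = line.strip()
        if line.startswith("Candidate:"):
            return line.split(":", 1)[1].strip()
    return None

# 硬编码的示例输出(节选自上文 apt-cache policy vim 的结果):
sample = """vim:
  Installed: (none)
  Candidate: 2:8.0.1453-1ubuntu1.1
  Version table:
     2:8.0.1453-1ubuntu1.1 500
"""
print(candidate_version(sample))  # 输出:2:8.0.1453-1ubuntu1.1
```

如果输出中没有 `Candidate:` 行(例如包名不存在),函数返回 `None`,便于在脚本中做判断。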
##### Method 5 – Using Apt-show-versions
#### 方法 5 – 使用 Apt-show-versions

**Apt-show-versions** command is used to list installed and available package versions in Debian and Debian-based systems. It also displays the list of all upgradeable packages. It is quite handy if you have a mixed stable/testing environment. For instance, if you have enabled both stable and testing repositories, you can easily find the list of applications from testing and also you can upgrade all packages in testing.
在 Debian 和基于 Debian 的系统中,`apt-show-versions` 命令用于列出已安装和可用软件包的版本。它还显示所有可升级软件包的列表。如果你有一个混合的稳定或测试环境,这是非常方便的。例如,如果你同时启用了稳定和测试仓库,那么你可以轻松地从测试仓库找到应用程序列表,还可以升级测试仓库中的所有软件包。

Apt-show-versions is not installed by default. You need to install it using command:
默认情况下系统没有安装 `apt-show-versions`,你需要使用以下命令来安装它:

```
$ sudo apt-get install apt-show-versions
```

Once installed, run the following command to find the version of a package, for example Vim:
安装后,运行以下命令查找软件包的版本,例如 Vim:

```
$ apt-show-versions -a vim
@ -253,15 +247,15 @@ vim:amd64 2:8.0.1453-1ubuntu1.1 bionic-updates archive.ubuntu.com
vim:amd64 not installed
```

Here, **-a** switch prints all available versions of the given package.
这里,`-a` 选项打印给定软件包的所有可用版本。

If the given package is already installed, you need not to use **-a** option. In that case, simply run:
如果已经安装了给定的软件包,那么就不需要使用 `-a` 选项。在这种情况下,只需运行:

```
$ apt-show-versions vim
```

And, that’s all. If you know any other methods, please share them in the comment section below. I will check and update this guide.
差不多完了。如果你还了解其他方法,请在下面的评论中分享,我将检查并更新本指南。
--------------------------------------------------------------------------------

@ -269,8 +263,8 @@ via: https://www.ostechnix.com/how-to-check-linux-package-version-before-install

作者:[sk][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
译者:[MjSeven](https://github.com/MjSeven)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -1,113 +1,116 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11221-1.html)
[#]: subject: (How to install Elasticsearch and Kibana on Linux)
[#]: via: (https://opensource.com/article/19/7/install-elasticsearch-and-kibana-linux)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

如何在 Linux 上安装 Elasticsearch 和 Kibana
======
获取我们关于安装两者的简化说明。

> 获取我们关于安装两者的简化说明。

![5 pengiuns floating on iceburg][1]

如果你热衷学习基于开源 Lucene 库的著名开源搜索引擎 Elasticsearch,那么没有比在本地安装它更好的方法了。这个过程在 [Elasticsearch 网站][2]中有详细介绍,但如果你是初学者,官方说明就比必要的信息多得多。本文采用一种简化的方法。
如果你渴望学习基于开源 Lucene 库的著名开源搜索引擎 Elasticsearch,那么没有比在本地安装它更好的方法了。这个过程在 [Elasticsearch 网站][2]中有详细介绍,但如果你是初学者,官方说明就比必要的信息多得多。本文采用一种简化的方法。

### 添加 Elasticsearch 仓库

首先,将 Elasticsearch 仓库添加到你的系统,以便你可以根据需要安装它并接收更新。如何做取决于你的发行版。在基于 RPM 的系统上,例如 [Fedora][3]、[CentOS][4]、[Red Hat Enterprise Linux(RHEL)][5]或 [openSUSE][6],(本文任何地方引用 Fedora 或 RHEL 的也适用于 CentOS 和 openSUSE)在 **/etc/yum.repos.d/** 中创建一个名为 **elasticsearch.repo** 的仓库描述文件:

首先,将 Elasticsearch 仓库添加到你的系统,以便你可以根据需要安装它并接收更新。如何做取决于你的发行版。在基于 RPM 的系统上,例如 [Fedora][3]、[CentOS][4]、[Red Hat Enterprise Linux(RHEL)][5] 或 [openSUSE][6],(本文任何地方引用 Fedora 或 RHEL 的也适用于 CentOS 和 openSUSE)在 `/etc/yum.repos.d/` 中创建一个名为 `elasticsearch.repo` 的仓库描述文件:

```
$ cat << EOF | sudo tee /etc/yum.repos.d/elasticsearch.repo
$ cat << EOF | sudo tee /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=<https://artifacts.elastic.co/packages/oss-7.x/yum>
baseurl=https://artifacts.elastic.co/packages/oss-7.x/yum
gpgcheck=1
gpgkey=<https://artifacts.elastic.co/GPG-KEY-elasticsearch>
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF
```

在 Ubuntu 或 Debian 上,不要使用 **add-apt-repository** 工具。由于它自身默认的和 Elasticsearch 仓库提供的不匹配而导致错误。相反,设置这个:

在 Ubuntu 或 Debian 上,不要使用 `add-apt-repository` 工具。由于它自身默认的和 Elasticsearch 仓库提供的不匹配而导致错误。相反,设置这个:

```
$ echo "deb <https://artifacts.elastic.co/packages/oss-7.x/apt> stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
$ echo "deb https://artifacts.elastic.co/packages/oss-7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
```
此存储库仅包含 Elasticsearch 的开源功能,在 [Apache 许可证][7]下,订阅没有提供额外功能。如果你需要仅限订阅的功能(这些功能是_并不_开源),那么 **baseurl** 必须设置为:

在你从该仓库安装之前,导入 GPG 公钥,然后更新:

```
`baseurl=https://artifacts.elastic.co/packages/7.x/yum`
$ sudo apt-key adv --keyserver \
hkp://keyserver.ubuntu.com:80 \
--recv D27D666CD88E42B4
$ sudo apt update
```

此存储库仅包含 Elasticsearch 的开源功能,在 [Apache 许可证][7]下发布,没有提供订阅版本的额外功能。如果你需要仅限订阅的功能(这些功能**并不**开源),那么 `baseurl` 必须设置为:

```
baseurl=https://artifacts.elastic.co/packages/7.x/yum
```

### 安装 Elasticsearch

你需要安装的软件包的名称取决于你使用的是开源版本还是订阅版本。本文使用开源版本,包名最后有 **-oss** 后缀。如果包名后没有 **-oss**,那么表示你请求的是仅限订阅版本。
你需要安装的软件包的名称取决于你使用的是开源版本还是订阅版本。本文使用开源版本,包名最后有 `-oss` 后缀。如果包名后没有 `-oss`,那么表示你请求的是仅限订阅版本。

如果你创建了订阅版本的仓库却尝试安装开源版本,那么就会收到“非指定”的错误。如果你创建了一个开源版本仓库却没有将 **-oss** 添加到包名后,那么你也会收到错误。
如果你创建了订阅版本的仓库却尝试安装开源版本,那么就会收到“非指定”的错误。如果你创建了一个开源版本仓库却没有将 `-oss` 添加到包名后,那么你也会收到错误。

使用包管理器安装 Elasticsearch。例如,在 Fedora、CentOS 或 RHEL 上运行以下命令:

```
$ sudo dnf install elasticsearch-oss
```

在 Ubuntu 或 Debian 上,运行:

```
$ sudo apt install elasticsearch-oss
```

如果你在安装 Elasticsearch 时遇到错误,那么你可能安装的是错误的软件包。如果你想如本文这样使用开源包,那么请确保在 Yum 配置中使用正确的 **apt** 仓库或 baseurl。
如果你在安装 Elasticsearch 时遇到错误,那么你可能安装的是错误的软件包。如果你想如本文这样使用开源包,那么请确保使用正确的 apt 仓库,或在 Yum 配置中设置正确的 `baseurl`。
### 启动并启用 Elasticsearch

安装 Elasticsearch 后,你必须启动并启用它:

```
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now elasticsearch.service
```

要确认 Elasticsearch 在其默认端口 9200 上运行,请在 Web 浏览器中打开 **localhost:9200**。你可以使用 GUI 浏览器,也可以在终端中执行此操作:
要确认 Elasticsearch 在其默认端口 9200 上运行,请在 Web 浏览器中打开 `localhost:9200`。你可以使用 GUI 浏览器,也可以在终端中执行此操作:

```
$ curl localhost:9200
{

"name" : "fedora30",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "OqSbb16NQB2M0ysynnX1hA",
"version" : {
"number" : "7.2.0",
"build_flavor" : "oss",
"build_type" : "rpm",
"build_hash" : "508c38a",
"build_date" : "2019-06-20T15:54:18.811730Z",
"build_snapshot" : false,
"lucene_version" : "8.0.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
  "name" : "fedora30",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "OqSbb16NQB2M0ysynnX1hA",
  "version" : {
    "number" : "7.2.0",
    "build_flavor" : "oss",
    "build_type" : "rpm",
    "build_hash" : "508c38a",
    "build_date" : "2019-06-20T15:54:18.811730Z",
    "build_snapshot" : false,
    "lucene_version" : "8.0.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
```
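作为补充(这是原文之外的一个假设性示例),在脚本里做健康检查时,可以直接解析上面 `curl localhost:9200` 返回的 JSON。下面的 Python 示意以文中示例响应的节选作为硬编码输入;实际使用时,可以用 `urllib.request` 请求 `http://localhost:9200` 获取真实响应:

```python
import json

def summarize(resp_text):
    """从 Elasticsearch 根端点的 JSON 响应中提取节点名与版本号。"""
    info = json.loads(resp_text)
    return info["name"], info["version"]["number"]

# 硬编码的示例响应(节选自上文的 curl 输出):
sample = '''{
  "name" : "fedora30",
  "version" : { "number" : "7.2.0", "build_flavor" : "oss" },
  "tagline" : "You Know, for Search"
}'''
name, version = summarize(sample)
print(name, version)  # 输出:fedora30 7.2.0
```

如果这个请求失败或 JSON 解析出错,通常说明服务没有启动成功,或者端口被防火墙挡住了(见下文的故障排查部分)。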
### 安装 Kibana

Kibana 是 Elasticsearch 数据可视化的图形界面。它包含在 Elasticsearch 仓库,因此你可以使用包管理器进行安装。与 Elasticsearch 本身一样,如果你使用的是 Elasticsearch 的开源版本,那么必须将 **-oss** 放到包名最后,订阅版本则不用(两者安装需要匹配):
Kibana 是 Elasticsearch 数据可视化的图形界面。它包含在 Elasticsearch 仓库,因此你可以使用包管理器进行安装。与 Elasticsearch 本身一样,如果你使用的是 Elasticsearch 的开源版本,那么必须将 `-oss` 放到包名最后,订阅版本则不用(两者安装需要匹配):

```
@ -116,12 +119,11 @@ $ sudo dnf install kibana-oss

在 Ubuntu 或 Debian 上:

```
$ sudo apt install kibana-oss
```

Kibana 在端口 5601 上运行,因此打开图形化 Web 浏览器并进入 **localhost:5601** 来开始使用 Kibana,如下所示:
Kibana 在端口 5601 上运行,因此打开图形化 Web 浏览器并进入 `localhost:5601` 来开始使用 Kibana,如下所示:

![Kibana running in Firefox.][8]
@ -129,30 +131,26 @@ Kibana 在端口 5601 上运行,因此打开图形化 Web 浏览器并进入 *

如果在安装 Elasticsearch 时出现错误,请尝试手动安装 Java 环境。在 Fedora、CentOS 和 RHEL 上:

```
$ sudo dnf install java-openjdk-devel java-openjdk
```

在 Ubuntu 上:

```
`$ sudo apt install default-jdk`
$ sudo apt install default-jdk
```

如果所有其他方法都失败,请尝试直接从 Elasticsearch 服务器安装 Elasticsearch RPM:

```
$ wget <https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-oss-7.2.0-x86\_64.rpm{,.sha512}>
$ shasum -a 512 -c elasticsearch-oss-7.2.0-x86_64.rpm.sha512 && sudo rpm --install elasticsearch-oss-7.2.0-x86_64.rpm
$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-oss-7.2.0-x86_64.rpm{,.sha512}
$ shasum -a 512 -c elasticsearch-oss-7.2.0-x86_64.rpm.sha512 && sudo rpm --install elasticsearch-oss-7.2.0-x86_64.rpm
```

在 Ubuntu 或 Debian 上,请使用 DEB 包。

如果你无法使用 Web 浏览器访问 Elasticsearch 或 Kibana,那么可能是你的防火墙阻止了这些端口。你可以通过调整防火墙设置来允许这些端口上的流量。例如,如果你运行的是 **firewalld**(Fedora 和 RHEL 上的默认值,并且可以在 Debian 和 Ubuntu 上安装),那么你可以使用 **firewall-cmd**:
如果你无法使用 Web 浏览器访问 Elasticsearch 或 Kibana,那么可能是你的防火墙阻止了这些端口。你可以通过调整防火墙设置来允许这些端口上的流量。例如,如果你运行的是 `firewalld`(Fedora 和 RHEL 上的默认防火墙,并且可以在 Debian 和 Ubuntu 上安装),那么你可以使用 `firewall-cmd`:

```
$ sudo firewall-cmd --add-port=9200/tcp --permanent
@ -169,7 +167,7 @@ via: https://opensource.com/article/19/7/install-elasticsearch-and-kibana-linux

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -69,17 +69,17 @@ package org.opensource.demo.singleton;

public class OpensourceSingleton {

private static OpensourceSingleton uniqueInstance;
    private static OpensourceSingleton uniqueInstance;

private OpensourceSingleton() {
}
    private OpensourceSingleton() {
    }

public static OpensourceSingleton getInstance() {
if (uniqueInstance == null) {
uniqueInstance = new OpensourceSingleton();
}
return uniqueInstance;
}
    public static OpensourceSingleton getInstance() {
        if (uniqueInstance == null) {
            uniqueInstance = new OpensourceSingleton();
        }
        return uniqueInstance;
    }

}
```
@ -102,20 +102,20 @@ package org.opensource.demo.singleton;

public class ImprovedOpensourceSingleton {

private volatile static ImprovedOpensourceSingleton uniqueInstance;
    private volatile static ImprovedOpensourceSingleton uniqueInstance;

private ImprovedOpensourceSingleton() {}
    private ImprovedOpensourceSingleton() {}

public static ImprovedOpensourceSingleton getInstance() {
if (uniqueInstance == null) {
synchronized (ImprovedOpensourceSingleton.class) {
if (uniqueInstance == null) {
uniqueInstance = new ImprovedOpensourceSingleton();
}
}
}
return uniqueInstance;
}
    public static ImprovedOpensourceSingleton getInstance() {
        if (uniqueInstance == null) {
            synchronized (ImprovedOpensourceSingleton.class) {
                if (uniqueInstance == null) {
                    uniqueInstance = new ImprovedOpensourceSingleton();
                }
            }
        }
        return uniqueInstance;
    }

}
```
@ -141,20 +141,20 @@ package org.opensource.demo.factory;

public class OpensourceFactory {

public OpensourceJVMServers getServerByVendor([String][18] name) {
if(name.equals("Apache")) {
return new Tomcat();
}
else if(name.equals("Eclipse")) {
return new Jetty();
}
else if (name.equals("RedHat")) {
return new WildFly();
}
else {
return null;
}
}
    public OpensourceJVMServers getServerByVendor(String name) {
        if(name.equals("Apache")) {
            return new Tomcat();
        }
        else if(name.equals("Eclipse")) {
            return new Jetty();
        }
        else if (name.equals("RedHat")) {
            return new WildFly();
        }
        else {
            return null;
        }
    }
}
```
@ -164,9 +164,9 @@
package org.opensource.demo.factory;

public interface OpensourceJVMServers {
public void startServer();
public void stopServer();
public [String][18] getName();
    public void startServer();
    public void stopServer();
    public String getName();
}
```
@ -176,17 +176,17 @@
package org.opensource.demo.factory;

public class WildFly implements OpensourceJVMServers {
public void startServer() {
[System][19].out.println("Starting WildFly Server...");
}
    public void startServer() {
        System.out.println("Starting WildFly Server...");
    }

public void stopServer() {
[System][19].out.println("Shutting Down WildFly Server...");
}
    public void stopServer() {
        System.out.println("Shutting Down WildFly Server...");
    }

public [String][18] getName() {
return "WildFly";
}
    public String getName() {
        return "WildFly";
    }
}
```
@ -209,9 +209,9 @@ package org.opensource.demo.observer;

public interface Topic {

public void addObserver([Observer][22] observer);
public void deleteObserver([Observer][22] observer);
public void notifyObservers();
    public void addObserver(Observer observer);
    public void deleteObserver(Observer observer);
    public void notifyObservers();
}
```
@ -226,39 +226,39 @@ import java.util.List;
import java.util.ArrayList;

public class Conference implements Topic {
private List<Observer> listObservers;
private int totalAttendees;
private int totalSpeakers;
private [String][18] nameEvent;
    private List<Observer> listObservers;
    private int totalAttendees;
    private int totalSpeakers;
    private String nameEvent;

public Conference() {
listObservers = new ArrayList<Observer>();
}
    public Conference() {
        listObservers = new ArrayList<Observer>();
    }

public void addObserver([Observer][22] observer) {
listObservers.add(observer);
}
    public void addObserver(Observer observer) {
        listObservers.add(observer);
    }

public void deleteObserver([Observer][22] observer) {
int i = listObservers.indexOf(observer);
if (i >= 0) {
listObservers.remove(i);
}
}
    public void deleteObserver(Observer observer) {
        int i = listObservers.indexOf(observer);
        if (i >= 0) {
            listObservers.remove(i);
        }
    }

public void notifyObservers() {
for (int i=0, nObservers = listObservers.size(); i < nObservers; ++ i) {
[Observer][22] observer = listObservers.get(i);
observer.update(totalAttendees,totalSpeakers,nameEvent);
}
}
    public void notifyObservers() {
        for (int i=0, nObservers = listObservers.size(); i < nObservers; ++ i) {
for (int i=0, nObservers = listObservers.size(); i < nObservers; ++ i) {
|
||||
Observer observer = listObservers.get(i);
|
||||
observer.update(totalAttendees,totalSpeakers,nameEvent);
|
||||
}
|
||||
}
|
||||
|
||||
public void setConferenceDetails(int totalAttendees, int totalSpeakers, [String][18] nameEvent) {
|
||||
this.totalAttendees = totalAttendees;
|
||||
this.totalSpeakers = totalSpeakers;
|
||||
this.nameEvent = nameEvent;
|
||||
notifyObservers();
|
||||
}
|
||||
public void setConferenceDetails(int totalAttendees, int totalSpeakers, String nameEvent) {
|
||||
this.totalAttendees = totalAttendees;
|
||||
this.totalSpeakers = totalSpeakers;
|
||||
this.nameEvent = nameEvent;
|
||||
notifyObservers();
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
@ -269,8 +269,8 @@ public class Conference implements Topic {
|
||||
```
|
||||
package org.opensource.demo.observer;
|
||||
|
||||
public interface [Observer][22] {
|
||||
public void update(int totalAttendees, int totalSpeakers, [String][18] nameEvent);
|
||||
public interface Observer {
|
||||
public void update(int totalAttendees, int totalSpeakers, String nameEvent);
|
||||
}
|
||||
```
|
||||
|
||||
@ -281,27 +281,27 @@ public interface [Observer][22] {
|
||||
```
|
||||
package org.opensource.demo.observer;
|
||||
|
||||
public class MonitorConferenceAttendees implements [Observer][22] {
|
||||
private int totalAttendees;
|
||||
private int totalSpeakers;
|
||||
private [String][18] nameEvent;
|
||||
private Topic topic;
|
||||
public class MonitorConferenceAttendees implements Observer {
|
||||
private int totalAttendees;
|
||||
private int totalSpeakers;
|
||||
private String nameEvent;
|
||||
private Topic topic;
|
||||
|
||||
public MonitorConferenceAttendees(Topic topic) {
|
||||
this.topic = topic;
|
||||
topic.addObserver(this);
|
||||
}
|
||||
public MonitorConferenceAttendees(Topic topic) {
|
||||
this.topic = topic;
|
||||
topic.addObserver(this);
|
||||
}
|
||||
|
||||
public void update(int totalAttendees, int totalSpeakers, [String][18] nameEvent) {
|
||||
this.totalAttendees = totalAttendees;
|
||||
this.totalSpeakers = totalSpeakers;
|
||||
this.nameEvent = nameEvent;
|
||||
printConferenceInfo();
|
||||
}
|
||||
public void update(int totalAttendees, int totalSpeakers, String nameEvent) {
|
||||
this.totalAttendees = totalAttendees;
|
||||
this.totalSpeakers = totalSpeakers;
|
||||
this.nameEvent = nameEvent;
|
||||
printConferenceInfo();
|
||||
}
|
||||
|
||||
public void printConferenceInfo() {
|
||||
[System][19].out.println(this.nameEvent + " has " + totalSpeakers + " speakers and " + totalAttendees + " attendees");
|
||||
}
|
||||
public void printConferenceInfo() {
|
||||
System.out.println(this.nameEvent + " has " + totalSpeakers + " speakers and " + totalAttendees + " attendees");
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
|
190
published/20190715 What is POSIX- Richard Stallman explains.md
Normal file
190
published/20190715 What is POSIX- Richard Stallman explains.md
Normal file
@ -0,0 +1,190 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (martin2011qi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11222-1.html)
|
||||
[#]: subject: (What is POSIX? Richard Stallman explains)
|
||||
[#]: via: (https://opensource.com/article/19/7/what-posix-richard-stallman-explains)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
|
||||
POSIX 是什么?让我们听听 Richard Stallman 的诠释
|
||||
======
|
||||
|
||||
> 从计算机自由先驱的口中探寻操作系统兼容性标准背后的本质。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/201908/13/231737robbwoss7p3p7jwo.jpg)
|
||||
|
||||
[POSIX][2] 是什么?为什么如此重要?你可能在很多的技术类文章中看到这个术语,但往往会在探寻其本质时迷失在<ruby>技术初始主义<rt>techno-initialisms</rt></ruby>的海洋或是<ruby>以 X 结尾的行话<rt>jargon-that-ends-in-X</rt></ruby>中。我给 [Richard Stallman][3] 博士(在黑客圈里面常称之为 RMS)发了邮件以探寻这个术语的起源及其背后的概念。
|
||||
|
||||
Richard Stallman 认为用 “开源” 和 “闭源” 来归类软件是一种错误的方法。Stallman 将程序分类为 <ruby>尊重自由的<rt>freedom-respecting</rt></ruby>(“<ruby>自由<rt>free</rt></ruby>” 或 “<ruby>自由(西语)<rt>libre</rt></ruby>”)和 <ruby>践踏自由的<rt>freedom-trampling</rt></ruby>(“<ruby>非自由<rt>non-free</rt></ruby>” 或 “<ruby>专有<rt>proprietary</rt></ruby>”)。开源讨论通常会为了(用户)实际得到的<ruby>优势/便利<rt>advantages</rt></ruby>考虑去鼓励某些做法,而非作为道德层面上的约束。
|
||||
|
||||
Stallman 在由其本人于 1984 年发起的<ruby>自由软件运动<rt>The free software movement</rt></ruby>中表明,受到威胁的不仅仅是这些<ruby>优势/便利<rt>advantages</rt></ruby>。计算机的用户<ruby>理应得到<rt>deserve</rt></ruby>计算机的控制权,因此拒绝接受用户控制的程序即是<ruby>非正义<rt>injustice</rt></ruby>,理应被<ruby>拒绝<rt>rejected</rt></ruby>和<ruby>排斥<rt>eliminated</rt></ruby>。为保障用户的控制权,程序应当给予用户 [四项基本自由][4]:
|
||||
|
||||
* 自由度 0:无论用户出于何种目的,必须可以按照用户意愿,自由地运行该软件。
|
||||
* 自由度 1:用户可以自由地学习并修改该软件,以便按照自己的意愿进行计算。作为前提,用户必须可以访问到该软件的源代码。
|
||||
* 自由度 2:用户可以自由地分发该软件的副本,以便可以帮助他人。
|
||||
* 自由度 3:用户可以自由地分发该软件修改后的副本。借此,你可以让整个社区受益于你的改进。作为前提,用户必须可以访问到该软件的源代码。
|
||||
|
||||
### 关于 POSIX
|
||||
|
||||
**Seth:** POSIX 标准是由 [IEEE][5] 发布,用于描述 “<ruby>可移植操作系统<rt>portable operating system</rt></ruby>” 的文档。只要开发人员编写符合此描述的程序,他们生产的便是符合 POSIX 的程序。在科技行业,我们称之为 “<ruby>规范<rt>specification</rt></ruby>” 或将其简写为 “spec”。就技术用语而言,这是可以理解的,但我们不禁要问是什么使操作系统 “可移植”?
|
||||
|
||||
**RMS:** 我认为是<ruby>接口<rt>interface</rt></ruby>应该(在不同系统之间)是可移植的,而非任何一种*系统*。实际上,内部构造不同的各种系统都支持部分的 POSIX 接口规范。
|
||||
|
||||
**Seth:** 因此,如果两个系统都带有符合 POSIX 的程序,那么它们便可以对彼此做出合理假设,从而知道如何相互 “交谈”。我了解到 “POSIX” 这个简称是你想出来的。那你是怎么想出来的呢?它又是如何被 IEEE 采纳的呢?
|
||||
|
||||
**RMS:** IEEE 已经完成了规范的开发,但还没为其想好简练的名称。标题类似是 “可移植操作系统接口”,虽然我已记不清确切的单词。委员会倾向于将 “IEEEIX” 作为简称。而我认为那不太好。发音有点怪 - 听起来像恐怖的尖叫,“Ayeee!” - 所以我觉得人们反而会倾向于称之为 “Unix”。
|
||||
|
||||
但是,由于 <ruby>[GNU 并不是 Unix][6]<rt>GNU's Not Unix</rt></ruby>,并且它打算取代之,我不希望人们将 GNU 称为 “Unix 系统”。因此,我提出了人们可能会实际使用的简称。那个时候也没有什么灵感,我就用了一个并不是非常聪明的方式创造了这个简称:我使用了 “<ruby>可移植操作系统<rt>portable operating system</rt></ruby>” 的首字母缩写,并在末尾添加了 “ix” 作为简称。IEEE 也欣然接受了。
|
||||
|
||||
**Seth:** POSIX 缩写中的 “操作系统” 是仅涉及 Unix 和类 Unix 的系统(如 GNU)呢?还是意图包含所有操作系统?
|
||||
|
||||
**RMS:** 术语 “操作系统” 抽象地说,涵盖了完全不像 Unix 的系统、完全和 POSIX 规范无关的系统。但是,POSIX 规范适用于大量类 Unix 系统;也只有这样的系统才适合 POSIX 规范。
|
||||
|
||||
**Seth:** 你是否参与审核或更新当前版本的 POSIX 标准?
|
||||
|
||||
**RMS:** 现在不了。
|
||||
|
||||
**Seth:** GNU Autotools 工具链可以使应用程序更容易移植,至少在构建和安装时如此。所以可以认为 Autotools 是构建可移植基础设施的重要一环吗?
|
||||
|
||||
**RMS:** 是的,因为即使在遵循 POSIX 的系统中,也存在着诸多差异。而 Autotools 可以使程序更容易适应这些差异。顺带一提,如果有人想助力 Autotools 的开发,可以发邮件联系我。
|
||||
|
||||
**Seth:** 我想,当 GNU 刚刚开始让人们意识到一个非 Unix 的系统可以从专有的技术中解放出来的时候,关于自由软件如何协作方面,这其间一定存在一些空白区域吧。
|
||||
|
||||
**RMS:** 我不认为有任何空白或不确定性。我只是照着 BSD 的接口写而已。
|
||||
|
||||
**Seth:** 一些 GNU 应用程序符合 POSIX 标准,而另一些 GNU 应用程序的 GNU 特定的功能,要么不在 POSIX 规范中,要么缺少该规范要求的功能。对于 GNU 应用程序 POSIX 合规性有多重要?
|
||||
|
||||
**RMS:** 遵循标准的重要性,取决于它能在多大程度上让用户受益。我们不将标准视为权威,而是将其作为可能有用的指南来遵循。因此,我们谈论的是<ruby>遵循<rt>following</rt></ruby>标准而不是 “<ruby>遵守<rt>complying</rt></ruby>”。可以参考 <ruby>GNU 编码标准<rt>GNU Coding Standards</rt></ruby>中的 [非 GNU 标准][7] 段落。
|
||||
|
||||
我们努力在大多数问题上与标准兼容,因为在大多数的问题上这最有利于用户。但也偶有例外。
|
||||
|
||||
例如,POSIX 指定某些实用程序以 512 字节为单位测量磁盘空间。我要求委员会将其改为 1K,但被拒绝了,说是有个<ruby>官僚主义的规则<rt>bureaucratic rule</rt></ruby>强迫选用 512。我不记得有多少人试图争辩说,用户会对这个决定感到满意的。
|
||||
|
||||
由于 GNU 在用户的<ruby>自由<rt>freedom</rt></ruby>之后的第二优先级,是用户的<ruby>便利<rt>convenience</rt></ruby>,我们使 GNU 程序以默认 1K 为单位按块测量磁盘空间。
|
||||
|
||||
然而,为了防止竞争对手利用这点给 GNU 安上 “<ruby>不合规<rt>noncompliant</rt></ruby>” 的骂名,我们实现了遵循 POSIX 和 ISO C 的可选模式,这种妥协着实可笑。想要遵循 POSIX,只需设置环境变量 `POSIXLY_CORRECT`,即可使程序符合 POSIX 以 512 字节为单位列出磁盘空间。如果有人知道实际使用 `POSIXLY_CORRECT` 或者 GCC 中对应的 `--pedantic` 会为某些用户提供什么实际好处的话,请务必告诉我。
|
||||
|
||||
**Seth:** 符合 POSIX 标准的自由软件项目是否更容易移植到其他类 Unix 系统?
|
||||
|
||||
**RMS:** 我认为是这样,但自上世纪 80 年代开始,我决定不再把时间浪费在将软件移植到 GNU 以外的系统上。我开始专注于推进 GNU 系统,使其不必使用任何非自由软件。至于将 GNU 程序移植到非类 GNU 系统就留给想在其他系统上运行它们的人们了。
|
||||
|
||||
**Seth:** POSIX 对于软件的自由很重要吗?
|
||||
|
||||
**RMS:** 本质上说,(遵不遵循 POSIX)其实没有任何区别。但是,POSIX 和 ISO C 的标准化确实使 GNU 系统更容易迁移,这有助于我们更快地实现从非自由软件中解放用户的目标。这个目标于上世纪 90 年代早期达成,当时 Linux 成为了自由软件,同时也填补了 GNU 中内核的空白。
|
||||
|
||||
### POSIX 采纳 GNU 的创新
|
||||
|
||||
我还问过 Stallman 博士,是否有任何 GNU 特定的创新或惯例后来被采纳为 POSIX 标准。他无法回想起具体的例子,但友好地代我向几位开发者发了邮件。
|
||||
|
||||
开发者 Giacomo Catenazzi,James Youngman,Eric Blake,Arnold Robbins 和 Joshua Judson Rosen 对以前的 POSIX 迭代以及仍在进行中的 POSIX 迭代做出了回应。POSIX 是一个 “<ruby>活的<rt>living</rt></ruby>” 标准,因此会不断被行业专业人士更新和评审,许多从事 GNU 项目的开发人员提出了对 GNU 特性的包含。
|
||||
|
||||
为了回顾这些有趣的历史,接下来会罗列一些已经融入 POSIX 的流行的 GNU 特性。
|
||||
|
||||
#### Make
|
||||
|
||||
一些 GNU **Make** 的特性已经被 POSIX 的 `make` 定义所采用。相关的 [规范][8] 提供了从现有实现中借来的特性的详细归因。
|
||||
|
||||
#### Diff 和 patch
|
||||
|
||||
[diff][9] 和 [patch][10] 命令都直接从这些工具的 GNU 版本中引进了 `-u` 和 `-U` 选项。
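下面是一个 `-u`(<ruby>统一格式<rt>unified</rt></ruby>)选项的最小演示,文件名均为示例假设:

```shell
printf 'a\nb\n' > old.txt
printf 'a\nc\n' > new.txt
# -u 生成统一格式补丁;文件有差异时 diff 的退出码为 1,属正常现象
diff -u old.txt new.txt > example.patch || true
cat example.patch
# 统一格式补丁可以直接交给 patch 应用,把 old.txt 变成 new.txt 的内容
patch old.txt < example.patch
```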
|
||||
|
||||
#### C 库
|
||||
|
||||
POSIX 采用了 GNU C 库 **glibc** 的许多特性。<ruby>血统<rt>Lineage</rt></ruby>一时已难以追溯,但 James Youngman 如是写道:
|
||||
|
||||
> “我非常确定 GCC 首创了许多 ISO C 的特性。例如,**_Noreturn** 是 C11 中的新特性,但 GCC-1.35 便具有此功能(使用 `volatile` 作为声明函数的修饰符)。另外尽管我不确定,GCC-1.35 支持的可变长度数组似乎与现代 C 中的(<ruby>柔性数组<rt>conformant array</rt></ruby>)非常相似。”
|
||||
|
||||
Giacomo Catenazzi 援引 Open Group 的 [strftime][11] 文章,并指出其归因:“这是基于某版本 GNU libc 的 `strftime()` 的特性。”
|
||||
|
||||
Eric Blake 指出,对于 `getline()` 和各种基于语言环境的 `*_l()` 函数,GNU 绝对是这方面的先驱。
|
||||
|
||||
Joshua Judson Rosen 补充道,他清楚地记得,在全然不同的操作系统的代码中奇怪地目睹了熟悉的 GNU 式的行为后,对 `getline()` 函数的采用给他留下了深刻的印象。
|
||||
|
||||
“等等……那不是 GNU 特有的吗?哦,显然已经不再是了。”
|
||||
|
||||
Rosen 向我指出了 [getline 手册页][12] 中写道:
|
||||
|
||||
> `getline()` 和 `getdelim()` 最初都是 GNU 扩展。在 POSIX.1-2008 中被标准化。
|
||||
|
||||
Eric Blake 向我发送了一份其他扩展的列表,这些扩展可能会在下一个 POSIX 修订版中添加(代号为 Issue 8,大约在 2021 年前后):
|
||||
|
||||
* [ppoll][13]
|
||||
* [pthread_cond_clockwait et al.][14]
|
||||
* [posix_spawn_file_actions_addchdir][15]
|
||||
* [getlocalename_1][16]
|
||||
* [reallocarray][17]
|
||||
|
||||
### 关于用户空间的扩展
|
||||
|
||||
POSIX 不仅为开发人员定义了函数和特性,还为用户空间定义了标准行为。
|
||||
|
||||
#### ls
|
||||
|
||||
`-A` 选项会排除来自 `ls` 命令结果中的符号 `.`(代表当前位置)和 `..`(代表上一级目录)。它被 POSIX 2008 采纳。
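它的效果可以这样演示(目录和文件名均为示例):

```shell
mkdir -p demo && touch demo/.hidden demo/visible
ls -a demo   # 列出全部条目,包含 . 和 ..
ls -A demo   # 仍会列出 .hidden 等隐藏文件,但不再包含 . 和 ..
```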
|
||||
|
||||
#### find
|
||||
|
||||
`find` 命令是一个<ruby>特别的<rt>ad hoc</rt></ruby> [for 循环][18] 工具,也是开启 [<ruby>并行<rt>parallel</rt></ruby>][19] 处理的入口。
|
||||
|
||||
一些从 GNU 引入到 POSIX 的<ruby>便捷操作<rt>conveniences</rt></ruby>,包括 `-path` 和 `-perm` 选项。
|
||||
|
||||
`-path` 选项帮你过滤与文件系统路径模式匹配的搜索结果,并且从 1996 年(根据 `findutil` 的 Git 仓库中最早的记录)GNU 版本的 `find` 便可使用此选项。James Youngman 指出 [HP-UX][20] 也很早就有这个选项,所以究竟是 GNU 还是 HP-UX 做出的这一创新(抑或两者兼而有之)无法考证。
|
||||
|
||||
`-perm` 选项帮你按文件权限过滤搜索结果。这在 1996 年 GNU 版本的 `find` 中便已存在,随后被纳入 POSIX 标准 “IEEE Std 1003.1,2004 Edition” 中。
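把这两个选项结合起来的一个小示例(目录结构与权限仅为演示假设):

```shell
mkdir -p project/src
touch project/src/main.c project/README
chmod 644 project/src/main.c
# 查找路径匹配 */src/* 且权限恰为 644 的普通文件
find project -type f -path '*/src/*' -perm 644
```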
|
||||
|
||||
`xargs` 命令是 `findutils` 软件包的一部分,1996 年的时候就有一个 `-p` 选项会将 `xargs` 置于交互模式(用户将被提示是否继续),随后被纳入 POSIX 标准 “IEEE Std 1003.1, 2004 Edition” 中。
|
||||
|
||||
#### Awk
|
||||
|
||||
GNU awk(即 `/usr/bin` 目录中的 `gawk` 命令,可能也是符号链接 `awk` 的目标地址)的维护者 Arnold Robbins 说道,`gawk` 和 `mawk`(另一个 GPL 的 `awk` 实现)允许 `RS`(记录分隔符)是一个正则表达式,也就是说 `RS` 的长度可以大于 1。这一特性还不是 POSIX 的特性,但有 [迹象表明它即将会是][21]:
|
||||
|
||||
> NUL 在扩展正则表达式中产生的未定义行为允许 GNU `gawk` 程序未来可以扩展以处理二进制数据。
|
||||
>
|
||||
> 使用多字符 RS 值的未指定行为是为了未来可能的扩展,它是基于用于记录分隔符(RS)的扩展正则表达式的。目前的历史实现为采用该字符串的第一个字符而忽略其他字符。
|
||||
|
||||
这是一个重大的增强,因为 `RS` 符号定义了记录之间的分隔符。可能是逗号、分号、短划线、或者是任何此类字符,但如果它是字符*序列*,则只会使用第一个字符,除非你使用的是 `gawk` 或 `mawk`。想象一下这种情况,使用省略号(连续的三个点)作为解析 IP 地址文档的分隔记录,只是想获取在每个 IP 地址的每个点处解析的结果。
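这一行为可以用 `gawk` 或 `mawk` 直接演示(示例数据为假设;仅实现 POSIX 行为的 awk 只会取分隔符字符串的第一个字符):

```shell
# 以连续三个点作为记录分隔符拆分 IP 列表;RS 为正则表达式需要 gawk 或 mawk
printf '10.0.0.1...10.0.0.2...10.0.0.3' | awk 'BEGIN { RS = "\\.\\.\\." } { print NR, $0 }'
```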
|
||||
|
||||
[mawk][22] 最先支持了这个功能,但后来几年没有维护者,这支火把便由 `gawk` 接了过来。(如今 `mawk` 已有了新的维护者,可以说大家正薪火相传,共同把这一特性推向一致的预期行为。)
|
||||
|
||||
### POSIX 规范
|
||||
|
||||
总的来说,Giacomo Catenazzi 指出,“……因为 GNU 的实用程序使用广泛,而且许多其他的选项和行为又对标规范。在 shell 的每次更改中,Bash 都会(作为一等公民)被用作比较。” 当某些东西被纳入 POSIX 规范时,无需提及 GNU 或任何其他影响,你可以简单地认为 POSIX 规范会受到许多方面的影响,GNU 只是其中之一。
|
||||
|
||||
共识正是 POSIX 存在的意义。一群技术人员为制定共同的规范而协作,再将其分享给数以百计的各式开发人员;经由规范的赋能,软件得以彼此独立,开发人员和用户也得以自由。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[martin2011qi](https://github.com/martin2011qi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/document_free_access_cut_security.png?itok=ocvCv8G2 (Scissors cutting open access to files)
|
||||
[2]: https://pubs.opengroup.org/onlinepubs/9699919799.2018edition/
|
||||
[3]: https://stallman.org/
|
||||
[4]: https://www.gnu.org/philosophy/free-sw.en.html
|
||||
[5]: https://www.ieee.org/
|
||||
[6]: http://gnu.org
|
||||
[7]: https://www.gnu.org/prep/standards/html_node/Non_002dGNU-Standards.html
|
||||
[8]: https://pubs.opengroup.org/onlinepubs/9699919799/utilities/make.html
|
||||
[9]: https://pubs.opengroup.org/onlinepubs/9699919799/utilities/diff.html
|
||||
[10]: https://pubs.opengroup.org/onlinepubs/9699919799/utilities/patch.html
|
||||
[11]: https://pubs.opengroup.org/onlinepubs/9699919799/functions/strftime.html
|
||||
[12]: http://man7.org/linux/man-pages/man3/getline.3.html
|
||||
[13]: http://austingroupbugs.net/view.php?id=1263
|
||||
[14]: http://austingroupbugs.net/view.php?id=1216
|
||||
[15]: http://austingroupbugs.net/view.php?id=1208
|
||||
[16]: http://austingroupbugs.net/view.php?id=1220
|
||||
[17]: http://austingroupbugs.net/view.php?id=1218
|
||||
[18]: https://opensource.com/article/19/6/how-write-loop-bash
|
||||
[19]: https://opensource.com/article/18/5/gnu-parallel
|
||||
[20]: https://www.hpe.com/us/en/servers/hp-ux.html
|
||||
[21]: https://pubs.opengroup.org/onlinepubs/9699919799/utilities/awk.html
|
||||
[22]: https://invisible-island.net/mawk/
|
@ -0,0 +1,88 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (scvoet)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11229-1.html)
|
||||
[#]: subject: (Linux Smartphone Librem 5 is Available for Preorder)
|
||||
[#]: via: (https://itsfoss.com/librem-5-available/)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
|
||||
|
||||
基于 Linux 的智能手机 Librem 5 开启预售
|
||||
======
|
||||
|
||||
Purism 近期[宣布][1]了 [Librem 5 智能手机][2]的最终规格。它不是基于 Android 或 iOS 的,而是基于 [Android 的开源替代品][4]:[PureOS][3]。
|
||||
|
||||
随着这一消息的宣布,Librem 5 也正式[以 649 美元的价格开启预售][5](这是 7 月 31 日前的早鸟价),在那以后价格将会上涨 50 美元,产品将会于 2019 年第三季度发货。
|
||||
|
||||
![][6]
|
||||
|
||||
以下是 Purism 博客文章中关于 Librem 5 的信息:
|
||||
|
||||
> 我们认为手机不应该跟踪你,也不应该利用你的数字生活。
|
||||
|
||||
> Librem 5 意味着你有机会通过自由开源软件、开放式治理和透明度来收回和保护你的私人信息和数字生活。Librem 5 是一个**基于 [PureOS][3] 的手机**,这是一个完全免费、符合道德的**不基于 Android 或 iOS** 的开源操作系统(了解更多关于[为什么这很重要][7]的信息)。
|
||||
|
||||
> 我们已成功超额完成了众筹计划,我们将会一一去实现我们的承诺。Librem 5 的硬件和软件开发正在[稳步前进][8],它计划在 2019 年的第三季度发行初始版本。你可以用 649 美元的价格预购直到产品发货或正式价格生效。现在附赠外接显示器、键盘和鼠标的套餐也可以预购了。
|
||||
|
||||
### Librem 5 的配置
|
||||
|
||||
从它的预览来看,Librem 5 旨在提供更好的隐私保护和安全性。除此之外,它试图避免使用 Google 或 Apple 的服务。
|
||||
|
||||
虽然这个想法很好,但它如何能做成一款价格低于 700 美元的商用智能手机呢?
|
||||
|
||||
![Librem 5 智能手机][9]
|
||||
|
||||
让我们来看一下它的配置:
|
||||
|
||||
![Librem 5][10]
|
||||
|
||||
从数据上讲它的配置已经足够高了。不是很好,也不是很差。但是,性能呢?用户体验呢?
|
||||
|
||||
在实际用过它之前,我们无法确切地了解这些。所以,如果你打算预购,应该考虑到这一点。
|
||||
|
||||
### Librem 5 提供终身软件更新支持
|
||||
|
||||
当然,和同价位的智能手机相比,它的这些配置并不是很优秀。
|
||||
|
||||
然而,随着他们做出终身软件更新支持的承诺后,它看起来确实像被开源爱好者所钟情的一个好产品。
|
||||
|
||||
### 其他关键特性
|
||||
|
||||
Purism 还强调 Librem 5 将成为有史以来第一款由 [Matrix][12] 提供支持的智能手机。这意味着它的短信和通话将支持端到端加密的分布式通讯。
|
||||
|
||||
除此之外,耳机接口和可自行更换的电池使它成为一个可靠的产品。
|
||||
|
||||
### 总结
|
||||
|
||||
即使它很难与 Android 或 iOS 智能手机竞争,但多一种选择方式总是好的。Librem 5 不可能成为每个用户都喜欢的智能手机,但如果你是一个开源爱好者,而且正在寻找一款尊重隐私和安全,不使用 Google 和 Apple 服务的简单智能手机,那么这就很适合你。
|
||||
|
||||
另外,它提供终身的软件更新支持,这让它成为了一个优秀的智能手机。
|
||||
|
||||
你如何看待 Librem 5?有在考虑预购吗?请在下方的评论中将你的想法告诉我们。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/librem-5-available/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[Scvoet][c]
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[c]: https://github.com/scvoet
|
||||
[1]: https://puri.sm/posts/librem-5-smartphone-final-specs-announced/
|
||||
[2]: https://itsfoss.com/librem-linux-phone/
|
||||
[3]: https://pureos.net/
|
||||
[4]: https://itsfoss.com/open-source-alternatives-android/
|
||||
[5]: https://shop.puri.sm/shop/librem-5/
|
||||
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/librem-5-linux-smartphone.jpg?resize=800%2C450&ssl=1
|
||||
[7]: https://puri.sm/products/librem-5/pureos-mobile/
|
||||
[8]: https://puri.sm/posts/tag/phones
|
||||
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/07/librem-5-smartphone.jpg?ssl=1
|
||||
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/07/librem-5-specs.png?ssl=1
|
||||
[11]: https://itsfoss.com/linux-games-performance-boost-amd-gpu/
|
||||
[12]: http://matrix.org
|
@ -0,0 +1,173 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (MjSeven)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11224-1.html)
|
||||
[#]: subject: (Use Postfix to get email from your Fedora system)
|
||||
[#]: via: (https://fedoramagazine.org/use-postfix-to-get-email-from-your-fedora-system/)
|
||||
[#]: author: (Gregory Bartholomew https://fedoramagazine.org/author/glb/)
|
||||
|
||||
使用 Postfix 从 Fedora 系统中获取电子邮件
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
交流是非常重要的。你的电脑可能正试图告诉你一些重要的事情。但是,如果你没有正确配置邮件传输代理([MTA][2]),那么你可能不会收到通知。Postfix 是一个[易于配置且以强大的安全记录而闻名][3]的 MTA。遵循以下步骤,以确保从本地服务发送的电子邮件通知将通过 Postfix MTA 路由到你的互联网电子邮件账户中。
|
||||
|
||||
### 安装软件包
|
||||
|
||||
使用 `dnf` 来安装一些必需的软件包([你应该配置了 sudo,对吧?][4]):
|
||||
|
||||
```
|
||||
$ sudo -i
|
||||
# dnf install postfix mailx
|
||||
```
|
||||
|
||||
如果以前配置了不同的 MTA,那么你可能需要将 Postfix 设置为系统默认。使用 `alternatives` 命令设置系统默认 MTA:
|
||||
|
||||
```
|
||||
$ sudo alternatives --config mta
|
||||
There are 2 programs which provide 'mta'.
|
||||
Selection Command
|
||||
*+ 1 /usr/sbin/sendmail.sendmail
|
||||
2 /usr/sbin/sendmail.postfix
|
||||
Enter to keep the current selection[+], or type selection number: 2
|
||||
```
|
||||
|
||||
### 创建一个 password_maps 文件
|
||||
|
||||
你需要创建一个 Postfix 查询表条目,其中包含你要用于发送电子邮件账户的地址和密码:
|
||||
|
||||
```
|
||||
# MY_EMAIL_ADDRESS=glb@gmail.com
|
||||
# MY_EMAIL_PASSWORD=abcdefghijklmnop
|
||||
# MY_SMTP_SERVER=smtp.gmail.com
|
||||
# MY_SMTP_SERVER_PORT=587
|
||||
# echo "[$MY_SMTP_SERVER]:$MY_SMTP_SERVER_PORT $MY_EMAIL_ADDRESS:$MY_EMAIL_PASSWORD" >> /etc/postfix/password_maps
|
||||
# chmod 600 /etc/postfix/password_maps
|
||||
# unset MY_EMAIL_PASSWORD
|
||||
# history -c
|
||||
```
|
||||
|
||||
如果你使用的是 Gmail 账户,那么你需要为 Postfix 配置一个“应用程序密码”而不是使用你的 Gmail 密码。有关配置应用程序密码的说明,参阅“[使用应用程序密码登录][5]”。
|
||||
|
||||
接下来,你必须对 Postfix 查询表运行 `postmap` 命令,以创建或更新 Postfix 实际使用的文件的散列版本:
|
||||
|
||||
```
|
||||
# postmap /etc/postfix/password_maps
|
||||
```
|
||||
|
||||
散列后的版本将具有相同的文件名,但后缀为 `.db`。
|
||||
|
||||
### 更新 main.cf 文件
|
||||
|
||||
更新 Postfix 的 `main.cf` 配置文件,以引用刚刚创建的 Postfix 查询表。编辑该文件并添加以下行:
|
||||
|
||||
```
|
||||
relayhost = smtp.gmail.com:587
|
||||
smtp_tls_security_level = verify
|
||||
smtp_tls_mandatory_ciphers = high
|
||||
smtp_tls_verify_cert_match = hostname
|
||||
smtp_sasl_auth_enable = yes
|
||||
smtp_sasl_security_options = noanonymous
|
||||
smtp_sasl_password_maps = hash:/etc/postfix/password_maps
|
||||
```
|
||||
|
||||
这里假设你使用 Gmail 作为 `relayhost` 设置,但是你可以用正确的主机名和端口替换系统应该将邮件发送到的邮件主机。
|
||||
|
||||
有关上述配置选项的最新详细信息,参考 man 帮助:
|
||||
|
||||
```
|
||||
$ man postconf.5
|
||||
```
|
||||
|
||||
### 启用、启动和测试 Postfix
|
||||
|
||||
更新 `main.cf` 文件后,启用并启动 Postfix 服务:
|
||||
|
||||
```
|
||||
# systemctl enable --now postfix.service
|
||||
```
|
||||
|
||||
然后,你可以使用 `exit` 命令或 `Ctrl+D` 以 root 身份退出 `sudo` 会话。你现在应该能够使用 `mail` 命令测试你的配置:
|
||||
|
||||
```
|
||||
$ echo 'It worked!' | mail -s "Test: $(date)" glb@gmail.com
|
||||
```
|
||||
|
||||
### 更新服务
|
||||
|
||||
如果你安装了像 [logwatch][6]、[mdadm][7]、[fail2ban][8]、[apcupsd][9] 或 [certwatch][10] 这样的服务,你现在可以更新它们的配置,以便它们的电子邮件通知转到你的 Internet 电子邮件地址。
|
||||
|
||||
另外,你可能希望将发送到本地系统 `root` 账户的所有电子邮件都转发到你的互联网电子邮件地址。为此,将以下行添加到系统的 `/etc/aliases` 文件中(你需要使用 `sudo` 编辑此文件,或首先切换到 `root` 账户):
|
||||
|
||||
```
|
||||
root: glb+root@gmail.com
|
||||
```
|
||||
|
||||
现在运行此命令重新读取别名:
|
||||
|
||||
```
|
||||
# newaliases
|
||||
```
|
||||
|
||||
* 提示: 如果你使用的是 Gmail,那么你可以在用户名和 `@` 符号之间[添加字母数字标记][11],如上所示,以便更轻松地识别和过滤从计算机收到的电子邮件。
|
||||
|
||||
### 常用命令
|
||||
|
||||
**查看邮件队列:**
|
||||
|
||||
```
|
||||
$ mailq
|
||||
```
|
||||
|
||||
**清除队列中的所有电子邮件:**
|
||||
|
||||
```
|
||||
# postsuper -d ALL
|
||||
```
|
||||
|
||||
**过滤设置,以获得感兴趣的值:**
|
||||
|
||||
```
|
||||
$ postconf | grep "^relayhost\|^smtp_"
|
||||
```
|
||||
|
||||
**查看 `postfix/smtp` 日志:**
|
||||
|
||||
```
|
||||
$ journalctl --no-pager -t postfix/smtp
|
||||
```
|
||||
|
||||
**进行配置更改后重新加载 postfix:**
|
||||
|
||||
```
|
||||
# systemctl reload postfix
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/use-postfix-to-get-email-from-your-fedora-system/
|
||||
|
||||
作者:[Gregory Bartholomew][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[MjSeven](https://github.com/MjSeven)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/glb/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fedoramagazine.org/wp-content/uploads/2019/07/postfix-816x345.jpg
|
||||
[2]: https://en.wikipedia.org/wiki/Message_transfer_agent
|
||||
[3]: https://en.wikipedia.org/wiki/Postfix_(software)
|
||||
[4]: https://fedoramagazine.org/howto-use-sudo/
|
||||
[5]: https://support.google.com/accounts/answer/185833
|
||||
[6]: https://src.fedoraproject.org/rpms/logwatch
|
||||
[7]: https://fedoramagazine.org/mirror-your-system-drive-using-software-raid/
|
||||
[8]: https://fedoraproject.org/wiki/Fail2ban_with_FirewallD
|
||||
[9]: https://src.fedoraproject.org/rpms/apcupsd
|
||||
[10]: https://www.linux.com/learn/automated-certificate-expiration-checks-centos
|
||||
[11]: https://gmail.googleblog.com/2008/03/2-hidden-ways-to-get-more-from-your.html
|
||||
[12]: https://unsplash.com/@sharonmccutcheon?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
|
||||
[13]: https://unsplash.com/search/photos/envelopes?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
|
@ -0,0 +1,109 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (scvoet)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11232-1.html)
|
||||
[#]: subject: (How To Add ‘New Document’ Option In Right Click Context Menu In Ubuntu 18.04)
|
||||
[#]: via: (https://www.ostechnix.com/how-to-add-new-document-option-in-right-click-context-menu-in-ubuntu-18-04/)
|
||||
[#]: author: (sk https://www.ostechnix.com/author/sk/)
|
||||
|
||||
如何在 Ubuntu 18.04 的右键单击菜单中添加“新建文档”按钮
|
||||
======
|
||||
|
||||
![Add 'New Document' Option In Right Click Context Menu In Ubuntu 18.04 GNOME desktop][1]
|
||||
|
||||
前几天,我在各种在线资源站点上收集关于 [Linux 包管理器][2] 的参考资料。正当我想创建一个用于保存笔记的文件时,突然发现我的 Ubuntu 18.04 LTS 桌面上已经没有“新建文档”按钮了,它好像离奇失踪了。在谷歌上搜索一番后,我发现原来“新建文档”按钮不再集成于 Ubuntu 的 GNOME 版本中了。庆幸的是,我找到了一个在 Ubuntu 18.04 LTS 桌面的右键单击菜单中重新添加“新建文档”按钮的简易解决方案。
|
||||
|
||||
就像你在下方截图中看到的一样,Nautilus 文件管理器的右键单击菜单中并没有“新建文档”按钮。
|
||||
|
||||
![][3]
|
||||
|
||||
*Ubuntu 18.04 移除了右键单击菜单中的“新建文档”选项。*
|
||||
|
||||
如果你想添加此按钮,请按照以下步骤进行操作。
|
||||
|
||||
### 在 Ubuntu 的右键单击菜单中添加“新建文档”按钮
|
||||
|
||||
首先,你需要确保你的系统中有 `~/Templates` 文件夹。如果没有的话,可以按照下面的命令进行创建。
|
||||
|
||||
```
|
||||
$ mkdir ~/Templates
|
||||
```
|
||||
|
||||
然后打开终端应用并使用 `cd` 命令进入 `~/Templates` 文件夹:
|
||||
|
||||
```
|
||||
$ cd ~/Templates
|
||||
```
|
||||
|
||||
创建一个空文件:
|
||||
|
||||
```
|
||||
$ touch Empty\ Document
|
||||
```
|
||||
|
||||
或
|
||||
|
||||
```
|
||||
$ touch "Empty Document"
|
||||
```
|
||||
|
||||
![][4]
|
||||
|
||||
新打开一个 Nautilus 文件管理器,然后检查一下右键单击菜单中是否成功添加了“新建文档”按钮。
|
||||
|
||||
![][5]
|
||||
|
||||
*在 Ubuntu 18.04 的右键单击菜单中添加“新建文档”按钮*
|
||||
|
||||
如上图所示,我们重新启用了“新建文档”按钮。
|
||||
|
||||
你还可以为不同文件类型添加按钮。
|
||||
|
||||
```
|
||||
$ cd ~/Templates
|
||||
|
||||
$ touch New\ Word\ Document.docx
|
||||
$ touch New\ PDF\ Document.pdf
|
||||
$ touch New\ Text\ Document.txt
|
||||
$ touch New\ PyScript.py
|
||||
```
|
||||
|
||||
![][6]
|
||||
|
||||
*在“新建文档”子菜单中给不同的文件类型添加按钮*
|
||||
|
||||
注意,所有文件都应该创建在 `~/Templates` 文件夹下。
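如果想一次性创建多种模板,也可以用一个简单的循环(文件名均为示例):

```shell
mkdir -p ~/Templates
for name in 'New Text Document.txt' 'New Markdown Document.md' 'New PyScript.py'; do
    touch ~/Templates/"$name"
done
ls ~/Templates
```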
|
||||
|
||||
现在,打开 Nautilus 并检查“新建文档”菜单中是否有相应的文件类型按钮。
|
||||
|
||||
![][7]
|
||||
|
||||
如果你要从子菜单中删除任一文件类型,只需在 Templates 目录中移除相应的文件即可。
|
||||
|
||||
```
|
||||
$ rm ~/Templates/New\ Word\ Document.docx
|
||||
```
|
||||
|
||||
我十分好奇为什么最新的 Ubuntu GNOME 版本将这个常用选项移除了。不过,重新启用这个按钮也十分简单,只需要几分钟。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/how-to-add-new-document-option-in-right-click-context-menu-in-ubuntu-18-04/
|
||||
|
||||
作者:[sk][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[scvoet](https://github.com/scvoet)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux 中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.ostechnix.com/wp-content/uploads/2019/07/Add-New-Document-Option-In-Right-Click-Context-Menu-1-720x340.png
|
||||
[2]: https://www.ostechnix.com/linux-package-managers-compared-appimage-vs-snap-vs-flatpak/
|
||||
[3]: https://www.ostechnix.com/wp-content/uploads/2019/07/new-document-option-missing.png
|
||||
[4]: https://www.ostechnix.com/wp-content/uploads/2019/07/Create-empty-document-in-Templates-directory.png
|
||||
[5]: https://www.ostechnix.com/wp-content/uploads/2019/07/Add-New-Document-Option-In-Right-Click-Context-Menu-In-Ubuntu.png
|
||||
[6]: https://www.ostechnix.com/wp-content/uploads/2019/07/Add-options-for-different-files-types.png
|
||||
[7]: https://www.ostechnix.com/wp-content/uploads/2019/07/Add-New-Document-Option-In-Right-Click-Context-Menu.png
|
@ -1,8 +1,8 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11220-1.html)
|
||||
[#]: subject: (How To Set Up Time Synchronization On Ubuntu)
|
||||
[#]: via: (https://www.ostechnix.com/how-to-set-up-time-synchronization-on-ubuntu/)
|
||||
[#]: author: (sk https://www.ostechnix.com/author/sk/)
|
||||
@ -10,16 +10,15 @@
|
||||
如何在 Ubuntu 上设置时间同步
|
||||
======
|
||||
|
||||
![Set Up Time Synchronization On Ubuntu][1]
|
||||
|
||||
你可能设置过 [**cron 任务**][2] 来在特定时间备份重要文件或执行系统相关任务。也许你配置了一个[**日志服务**][3]在特定时间间隔轮转日志。如果你的时钟不同步,这些任务将无法按时执行。这就是要在 Linux 系统上设置正确的时区并保持时钟与 Internet 同步的原因。本指南介绍如何在 Ubuntu Linux 上设置时间同步。下面的步骤已经在 Ubuntu 18.04 上进行了测试,但是对于使用 systemd 的 **timesyncd** 服务的其他基于 Ubuntu 的系统它们是相同的。
|
||||
![](https://img.linux.net.cn/data/attachment/album/201908/13/135423xnk7zib00nn2aebv.jpg)
|
||||
|
||||
你可能设置过 [cron 任务][2] 来在特定时间备份重要文件或执行系统相关任务。也许你配置了一个日志服务器在特定时间间隔[轮转日志][3]。但如果你的时钟不同步,这些任务将无法按时执行。这就是要在 Linux 系统上设置正确的时区并保持时钟与互联网同步的原因。本指南介绍如何在 Ubuntu Linux 上设置时间同步。下面的步骤已经在 Ubuntu 18.04 上进行了测试,但是对于使用 systemd 的 `timesyncd` 服务的其他基于 Ubuntu 的系统它们是相同的。
|
||||
|
||||
### 在 Ubuntu 上设置时间同步
|
||||
|
||||
通常,我们在安装时设置时区。但是,你可以根据需要更改或设置不同的时区。
|
||||
|
||||
首先,让我们使用 “date” 命令查看 Ubuntu 系统中的当前时区:
|
||||
首先,让我们使用 `date` 命令查看 Ubuntu 系统中的当前时区:
|
||||
|
||||
```
|
||||
$ date
|
||||
@ -31,16 +30,16 @@ $ date
|
||||
Tue Jul 30 11:47:39 UTC 2019
|
||||
```
|
||||
|
||||
如上所见,“date” 命令显示实际日期和当前时间。这里,我当前的时区是 **UTC**,代表**协调世界时**。
|
||||
如上所见,`date` 命令显示实际日期和当前时间。这里,我当前的时区是 **UTC**,代表**协调世界时**。
|
||||
|
||||
或者,你可以在 **/etc/timezone** 文件中查找当前时区。
|
||||
或者,你可以在 `/etc/timezone` 文件中查找当前时区。
|
||||
|
||||
```
|
||||
$ cat /etc/timezone
|
||||
UTC
|
||||
```
|
||||
|
||||
现在,让我们看看时钟是否与 Internet 同步。只需运行:
|
||||
现在,让我们看看时钟是否与互联网同步。只需运行:
|
||||
|
||||
```
|
||||
$ timedatectl
|
||||
@ -58,23 +57,23 @@ systemd-timesyncd.service active: yes
|
||||
RTC in local TZ: no
|
||||
```
|
||||
|
||||
如你所见,“timedatectl” 命令显示本地时间、世界时、时区以及系统时钟是否与 Internet 服务器同步,以及 **systemd-timesyncd.service** 是处于活动状态还是非活动状态。就我而言,系统时钟已与 Internet 时间服务器同步。
|
||||
如你所见,`timedatectl` 命令显示本地时间、世界时、时区以及系统时钟是否与互联网服务器同步,以及 `systemd-timesyncd.service` 是处于活动状态还是非活动状态。就我而言,系统时钟已与互联网时间服务器同步。
|
||||
|
||||
如果时钟不同步,你会看到下面截图中显示的 **“System clock synchronized: no”**。
|
||||
如果时钟不同步,你会看到下面截图中显示的 `System clock synchronized: no`。
|
||||
|
||||
![][4]
|
||||
|
||||
时间同步已禁用。
|
||||
*时间同步已禁用。*
|
||||
|
||||
注意:上面的截图是旧截图。这就是你看到不同日期的原因。
|
||||
|
||||
如果你看到 **“System clock synchronized:** 值设置为 **no**,那么 timeyncd 服务可能处于非活动状态。因此,只需重启服务并看下是否正常。
|
||||
如果你看到 `System clock synchronized:` 值设置为 `no`,那么 `timesyncd` 服务可能处于非活动状态。因此,只需重启服务并看下是否正常。
|
||||
|
||||
```
|
||||
$ sudo systemctl restart systemd-timesyncd.service
|
||||
```
|
||||
|
||||
现在检查 timesyncd 服务状态:
|
||||
现在检查 `timesyncd` 服务状态:
|
||||
|
||||
```
|
||||
$ sudo systemctl status systemd-timesyncd.service
|
||||
@ -100,7 +99,7 @@ Jul 30 10:50:35 ubuntuserver systemd-timesyncd[498]: Network configuration chang
|
||||
Jul 30 10:51:06 ubuntuserver systemd-timesyncd[498]: Synchronized to time server [2001:67c:1560:800
|
||||
```
|
||||
|
||||
如果此服务已启用并处于活动状态,那么系统时钟应与 Internet 时间服务器同步。
|
||||
如果此服务已启用并处于活动状态,那么系统时钟应与互联网时间服务器同步。
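`timesyncd` 使用的 NTP 服务器可以在 `/etc/systemd/timesyncd.conf` 中配置,下面是一个示例片段(服务器地址为假设,修改后需重启 `systemd-timesyncd` 服务才会生效):

```ini
[Time]
NTP=ntp.aliyun.com
FallbackNTP=ntp.ubuntu.com
```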
|
||||
|
||||
你可以使用命令验证是否启用了时间同步:
|
||||
|
||||
@ -114,9 +113,9 @@ $ timedatectl
|
||||
$ sudo timedatectl set-ntp true
|
||||
```
|
||||
|
||||
现在,你的系统时钟将与 Internet 时间服务器同步。
|
||||
现在,你的系统时钟将与互联网时间服务器同步。
|
||||
|
||||
##### 使用 timedatectl 命令更改时区
|
||||
#### 使用 timedatectl 命令更改时区
|
||||
|
||||
如果我想使用 UTC 以外的其他时区怎么办?这很容易!
|
||||
|
||||
@ -130,33 +129,37 @@ $ timedatectl list-timezones
|
||||
|
||||
![][5]
|
||||
|
||||
使用 timedatectl 命令列出时区
|
||||
*使用 timedatectl 命令列出时区*
|
||||
|
||||
你可以使用以下命令设置所需的时区(例如,Asia/Kolkata):
|
||||
你可以使用以下命令设置所需的时区(例如,Asia/Shanghai):
|
||||
|
||||
(LCTT 译注:本文原文使用印度时区作为示例,这里为了便于使用,换为中国标准时区 CST。另外,在时区设置中,要注意 CST 这个缩写会代表四个不同的时区,因此建议使用城市和 UTC+8 来说设置。)
|
||||
|
||||
```
|
||||
$ sudo timedatectl set-timezone Asia/Kolkata
|
||||
$ sudo timedatectl set-timezone Asia/Shanghai
|
||||
```
|
||||
|
||||
使用 “date” 命令再次检查时区是否已真正更改:
|
||||
使用 `date` 命令再次检查时区是否已真正更改:
|
||||
|
||||
**$ date**
|
||||
Tue Jul 30 17:52:33 **IST** 2019
|
||||
```
|
||||
$ date
|
||||
Tue Jul 30 20:22:33 CST 2019
|
||||
```
|
||||
|
||||
或者,如果需要详细输出,请使用 timedatectl 命令:
|
||||
或者,如果需要详细输出,请使用 `timedatectl` 命令:
|
||||
|
||||
```
|
||||
$ timedatectl
|
||||
Local time: Tue 2019-07-30 17:52:35 IST
|
||||
Local time: Tue 2019-07-30 20:22:35 CST
|
||||
Universal time: Tue 2019-07-30 12:22:35 UTC
|
||||
RTC time: Tue 2019-07-30 12:22:36
|
||||
Time zone: Asia/Kolkata (IST, +0530)
|
||||
Time zone: Asia/Shanghai (CST, +0800)
|
||||
System clock synchronized: yes
|
||||
systemd-timesyncd.service active: yes
|
||||
RTC in local TZ: no
|
||||
```
|
||||
|
||||
如你所见,我已将时区从 UTC 更改为 IST(印度标准时间)。
|
||||
如你所见,我已将时区从 UTC 更改为 CST(中国标准时间)。
|
||||
|
||||
要切换回 UTC 时区,只需运行:
|
||||
|
||||
@ -164,9 +167,9 @@ RTC in local TZ: no
|
||||
$ sudo timedatectl set-timezone UTC
|
||||
```
|
||||
|
||||
##### 使用 tzdata 更改时区
|
||||
#### 使用 tzdata 更改时区
|
||||
|
||||
在较旧的 Ubuntu 版本中,没有 timedatectl 命令。这种情况下,你可以使用 **tzdata**(Time zone data)来设置时间同步。
|
||||
在较旧的 Ubuntu 版本中,没有 `timedatectl` 命令。这种情况下,你可以使用 `tzdata`(Time zone data)来设置时间同步。
|
||||
|
||||
```
|
||||
$ sudo dpkg-reconfigure tzdata
|
||||
@ -176,41 +179,41 @@ $ sudo dpkg-reconfigure tzdata
|
||||
|
||||
![][6]
|
||||
|
||||
接下来,选择与你的时区对应的城市或地区。这里,我选择了 **Kolkata**。
|
||||
接下来,选择与你的时区对应的城市或地区。这里,我选择了 **Kolkata**(LCTT 译注:中国用户请相应使用 Shanghai 等城市)。
|
||||
|
||||
![][7]
|
||||
|
||||
最后,你将在终端中看到类似下面的输出。
|
||||
|
||||
```
|
||||
Current default time zone: 'Asia/Kolkata'
|
||||
Local time is now: Tue Jul 30 19:29:25 IST 2019.
|
||||
Current default time zone: 'Asia/Shanghai'
|
||||
Local time is now: Tue Jul 30 21:59:25 CST 2019.
|
||||
Universal Time is now: Tue Jul 30 13:59:25 UTC 2019.
|
||||
```
|
||||
|
||||
##### 在图形模式下配置时区
|
||||
#### 在图形模式下配置时区
|
||||
|
||||
有些用户可能对命令行方式不太满意。如果你是其中之一,那么你可以轻松地在图形模式的系统设置面板中进行设置。
|
||||
|
||||
点击**超级键**(Windows 键),在Ubuntu dash 中输入 **settings**,然后点击 **Settings** 图标。
|
||||
点击 Super 键(Windows 键),在 Ubuntu dash 中输入 **settings**,然后点击设置图标。
|
||||
|
||||
![][8]
|
||||
|
||||
从 Ubuntu dash 启动系统的设置
|
||||
*从 Ubuntu dash 启动系统的设置*
|
||||
|
||||
或者,单击位于 Ubuntu 桌面右上角的向下箭头,然后单击左上角的 “Settings” 图标。
|
||||
或者,单击位于 Ubuntu 桌面右上角的向下箭头,然后单击左上角的“设置”图标。
|
||||
|
||||
![][9]
|
||||
|
||||
从顶部面板启动系统的设置
|
||||
*从顶部面板启动系统的设置*
|
||||
|
||||
在下一个窗口中,选择 **Details**,然后单击 **Date & Time** 选项。打开 **Automatic Date & Time** 和 **Automatic Time Zone**。
|
||||
在下一个窗口中,选择“细节”,然后单击“日期与时间”选项。打开“自动的日期与时间”和“自动的时区”。
|
||||
|
||||
![][10]
|
||||
|
||||
在 Ubuntu 中设置自动时区
|
||||
*在 Ubuntu 中设置自动时区*
|
||||
|
||||
关闭设置窗口就行了!你的系统始终应该与 Internet 时间服务器同步了。
|
||||
关闭设置窗口就行了!你的系统始终应该与互联网时间服务器同步了。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -218,8 +221,8 @@ via: https://www.ostechnix.com/how-to-set-up-time-synchronization-on-ubuntu/
|
||||
|
||||
作者:[sk][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
99
published/20190806 Unboxing the Raspberry Pi 4.md
Normal file
@@ -0,0 +1,99 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11248-1.html)
[#]: subject: (Unboxing the Raspberry Pi 4)
[#]: via: (https://opensource.com/article/19/8/unboxing-raspberry-pi-4)
[#]: author: (Anderson Silva https://opensource.com/users/ansilvahttps://opensource.com/users/bennuttall)

树莓派 4 开箱记
======

> 树莓派 4 与其前代产品相比具有令人印象深刻的性能提升,而入门套件使其易于快速启动和运行。

![](https://img.linux.net.cn/data/attachment/album/201908/20/091730rl99q2ahycd4sz9h.jpg)

当树莓派 4 [在 6 月底宣布发布][2]时,我没有迟疑,在发布的同一天就从 [CanaKit][3] 订购了两套树莓派 4 入门套件。1GB RAM 版本有现货,但 4GB 版本要在 7 月 19 日才能发货。由于我想两个都试试,就两个都订购了,让它们一起发货。

![CanaKit's Raspberry Pi 4 Starter Kit and official accessories][4]

这是我开箱我的树莓派 4 后所看到的。

### 电源

树莓派 4 使用 USB-C 连接器供电。虽然 USB-C 电缆现在非常普遍,但你的树莓派 4 [可能不喜欢你的 USB-C 线][5](至少对于树莓派 4 的第一版如此)。因此,除非你确切知道自己在做什么,否则我建议你订购含有官方树莓派充电器的入门套件。如果你想尝试手头的充电设备,那么该设备的输入是 100-240V ~ 50/60Hz 0.5A,输出为 5.1V - 3.0A。

![Raspberry Pi USB-C charger][6]

### 键盘和鼠标

官方的键盘和鼠标是和入门套件[分开出售][7]的,总价 25 美元,并不很便宜,因为你的这台树莓派电脑也才只有 35 到 55 美元。但树莓派徽标印在这个键盘上(而不是 Windows 徽标),并且外观相宜。键盘也是 USB 集线器,因此它允许你插入更多设备。我插入了我的 [YubiKey][8] 安全密钥,它运行得非常好。我会把键盘和鼠标归类为“值得拥有”而不是“必须拥有”。你的常规键盘和鼠标应该也可以正常工作。

![Official Raspberry Pi keyboard \(with YubiKey plugged in\) and mouse.][9]

![Raspberry Pi logo on the keyboard][10]

### Micro-HDMI 电缆

可能让一些人惊讶的是,与带有 Mini-HDMI 端口的树莓派 Zero 不同,树莓派 4 配备了 Micro-HDMI。它们不是同一个东西!因此,即使你手头有合适的 USB-C 线缆/电源适配器、鼠标和键盘,也很有可能需要使用 Micro-HDMI 转 HDMI 的线缆(或适配器)来将你的新树莓派接到显示器上。

### 外壳

树莓派的外壳已经有了很多年,这可能是树莓派基金会销售的第一批“官方”外围设备之一。有些人喜欢它们,而有些人不喜欢。我认为将一个树莓派放在一个盒子里可以更容易携带它,可以避免静电和针脚弯曲。

另一方面,把你的树莓派装在盒子里会使电路板过热。这款 CanaKit 入门套件还配备了处理器散热器,这可能有所帮助,因为较新的树莓派已经[以运行相当热而闻名][11]了。

![Raspberry Pi 4 case][12]

### Raspbian 和 NOOBS

入门套件附带的另一个东西是 microSD 卡,其中预装了适用于树莓派 4 的 [NOOBS][13] 操作系统的正确版本。(我拿到的是 3.19 版,发布于 2019 年 6 月 24 日)。如果你是第一次使用树莓派并且不确定从哪里开始,这可以为你节省大量时间。入门套件中的 microSD 卡容量为 32GB。

插入 microSD 卡并连接所有电缆后,只需启动树莓派,引导进入 NOOBS,选择 Raspbian 发行版,然后等待安装。

![Raspberry Pi 4 with 4GB of RAM][14]

我注意到在安装最新的 Raspbian 时有一些改进。(如果它们已经出现了一段时间,请原谅我 —— 自从树莓派 3 出现以来我没有对树莓派进行过全新安装。)其中一个是 Raspbian 会要求你在安装后的首次启动时为你的帐户设置一个密码,另一个是它将运行软件更新(假设你有网络连接)。这些都是很大的改进,有助于保持你的树莓派更安全。我很希望能有一天在安装时看到加密 microSD 卡的选项。

![Running Raspbian updates at first boot][15]

![Raspberry Pi 4 setup][16]

运行非常顺畅!

### 结语

虽然 CanaKit 不是美国唯一授权的树莓派零售商,但我发现它的入门套件的价格物超所值。

到目前为止,我对树莓派 4 的性能提升印象深刻。我打算尝试用一整个工作日将它作为我唯一的计算机,我很快就会写一篇关于我探索了多远的后续文章。敬请关注!

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/unboxing-raspberry-pi-4

作者:[Anderson Silva][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/ansilvahttps://opensource.com/users/bennuttall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberrypi4_board_hardware.jpg?itok=KnFU7NvR (Raspberry Pi 4 board, posterized filter)
[2]: https://opensource.com/article/19/6/raspberry-pi-4
[3]: https://www.canakit.com/raspberry-pi-4-starter-kit.html
[4]: https://opensource.com/sites/default/files/uploads/raspberrypi4_canakit.jpg (CanaKit's Raspberry Pi 4 Starter Kit and official accessories)
[5]: https://www.techrepublic.com/article/your-new-raspberry-pi-4-wont-power-on-usb-c-cable-problem-now-officially-confirmed/
[6]: https://opensource.com/sites/default/files/uploads/raspberrypi_usb-c_charger.jpg (Raspberry Pi USB-C charger)
[7]: https://www.canakit.com/official-raspberry-pi-keyboard-mouse.html?defpid=4476
[8]: https://www.yubico.com/products/yubikey-hardware/
[9]: https://opensource.com/sites/default/files/uploads/raspberrypi_keyboardmouse.jpg (Official Raspberry Pi keyboard (with YubiKey plugged in) and mouse.)
[10]: https://opensource.com/sites/default/files/uploads/raspberrypi_keyboardlogo.jpg (Raspberry Pi logo on the keyboard)
[11]: https://www.theregister.co.uk/2019/07/22/raspberry_pi_4_too_hot_to_handle/
[12]: https://opensource.com/sites/default/files/uploads/raspberrypi4_case.jpg (Raspberry Pi 4 case)
[13]: https://www.raspberrypi.org/downloads/noobs/
[14]: https://opensource.com/sites/default/files/uploads/raspberrypi4_ram.jpg (Raspberry Pi 4 with 4GB of RAM)
[15]: https://opensource.com/sites/default/files/uploads/raspberrypi4_rasbpianupdate.jpg (Running Raspbian updates at first boot)
[16]: https://opensource.com/sites/default/files/uploads/raspberrypi_setup.jpg (Raspberry Pi 4 setup)
@@ -0,0 +1,113 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11238-1.html)
[#]: subject: (Find Out How Long Does it Take To Boot Your Linux System)
[#]: via: (https://itsfoss.com/check-boot-time-linux/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

你的 Linux 系统开机时间已经击败了 99% 的电脑
======

当你打开系统电源时,你会等待制造商的徽标出现,屏幕上可能会显示一些消息(以非安全模式启动),然后是 [Grub][1] 屏幕、操作系统加载屏幕以及最后的登录屏。

你检查过这花费了多长时间么?也许没有。除非你真的需要知道,否则你不会在意开机时间。

但是如果你很想知道你的 Linux 系统需要多长时间才能启动完成呢?使用秒表是一种方法,但在 Linux 中,你有一种更好、更轻松地了解系统启动时间的方法。

### 在 Linux 中使用 systemd-analyze 检查启动时间

![](https://img.linux.net.cn/data/attachment/album/201908/17/104358s1ho8ug868hso1y8.jpg)

无论你是否喜欢,[systemd][3] 运行在大多数流行的 Linux 发行版中。systemd 有许多管理 Linux 系统的工具。其中一个就是 `systemd-analyze`。

`systemd-analyze` 命令为你提供最近一次启动时运行的服务数量以及消耗时间的详细信息。

如果在终端中运行以下命令:

```
systemd-analyze
```

你将获得总启动时间以及固件、引导加载程序、内核和用户空间所消耗的时间:

```
Startup finished in 7.275s (firmware) + 13.136s (loader) + 2.803s (kernel) + 12.488s (userspace) = 35.704s

graphical.target reached after 12.408s in userspace
```
正如你在上面的输出中所看到的,我的系统花了大约 35 秒才进入可以输入密码的页面。我正在使用戴尔 XPS Ubuntu。它使用 SSD 存储,尽管如此,它还需要很长时间才能启动。
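顺便一提,总时间就是各阶段耗时的简单相加,可以用 awk 快速验证一下(数值取自上面的示例输出):

```shell
# 将固件、引导加载程序、内核和用户空间的耗时相加(数值来自上面的示例)
awk 'BEGIN { printf "%.3f\n", 7.275 + 13.136 + 2.803 + 12.488 }'
# 输出 35.702
```

它与 systemd-analyze 报告的 35.704s 有毫秒级的差异,这是因为各项数值本身是四舍五入过的。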
不是那么令人印象深刻,是吗?为什么不分享你们系统的启动时间?我们来比较吧。

你可以使用以下命令将启动时间进一步细分为每个单元:

```
systemd-analyze blame
```

这将生成大量输出,所有服务按所用时间的降序列出。

```
7.347s plymouth-quit-wait.service
6.198s NetworkManager-wait-online.service
3.602s plymouth-start.service
3.271s plymouth-read-write.service
2.120s apparmor.service
1.503s user@1000.service
1.213s motd-news.service
908ms snapd.service
861ms keyboard-setup.service
739ms fwupd.service
702ms bolt.service
672ms dev-nvme0n1p3.device
608ms systemd-backlight@backlight:intel_backlight.service
539ms snap-core-7270.mount
504ms snap-midori-451.mount
463ms snap-screencloud-1.mount
446ms snapd.seeded.service
440ms snap-gtk\x2dcommon\x2dthemes-1313.mount
420ms snap-core18-1066.mount
416ms snap-scrcpy-133.mount
412ms snap-gnome\x2dcharacters-296.mount
```
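如果只想关注耗时较长的条目,可以用 awk 对 blame 的输出做个简单过滤。下面是一个示意性的小例子,为了便于演示,这里用上面输出中的两行示例数据代替真实的命令输出:

```shell
# 只保留第一列以 "s" 结尾(即耗时达到秒级)的条目,并打印服务名
printf '7.347s plymouth-quit-wait.service\n908ms snapd.service\n' |
  awk '$1 ~ /^[0-9.]+s$/ { print $2 }'
# 输出 plymouth-quit-wait.service
```

实际使用时,可以把 `printf` 部分换成 `systemd-analyze blame`。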
#### 额外提示:改善启动时间

如果查看此输出,你可以看到网络管理器和 [plymouth][4] 都消耗了大量的启动时间。

Plymouth 负责你在 Ubuntu 和其他发行版中在登录页面出现之前的引导页面。网络管理器负责互联网连接,可以关闭它来加快启动时间。不要担心,在你登录后,你可以正常使用 wifi。

```
sudo systemctl disable NetworkManager-wait-online.service
```

如果要还原更改,可以使用以下命令:

```
sudo systemctl enable NetworkManager-wait-online.service
```

请不要在不知道用途的情况下自行禁用各种服务。这可能会产生危险的后果。

现在你知道了如何检查 Linux 系统的启动时间,为什么不在评论栏分享你的系统的启动时间?

--------------------------------------------------------------------------------

via: https://itsfoss.com/check-boot-time-linux/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://www.gnu.org/software/grub/
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/linux-boot-time.jpg?resize=800%2C450&ssl=1
[3]: https://en.wikipedia.org/wiki/Systemd
[4]: https://wiki.archlinux.org/index.php/Plymouth
87
published/20190808 How to manipulate PDFs on Linux.md
Normal file

@@ -0,0 +1,87 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11230-1.html)
[#]: subject: (How to manipulate PDFs on Linux)
[#]: via: (https://www.networkworld.com/article/3430781/how-to-manipulate-pdfs-on-linux.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

如何在 Linux 命令行操作 PDF
======

> pdftk 命令提供了许多处理 PDF 的命令行操作,包括合并页面、加密文件、添加水印、压缩文件,甚至还有修复 PDF。

![](https://img.linux.net.cn/data/attachment/album/201908/15/110119x6sjnjs6s22srnje.jpg)

虽然 PDF 通常被认为是相当稳定的文件,但在 Linux 和其他系统上你仍可以对它们做很多处理,包括合并、拆分、旋转、拆分成单页、加密和解密、添加水印、压缩和解压缩,甚至还有修复。`pdftk` 命令能执行所有这些操作,甚至更多。

`pdftk` 代表“PDF 工具包”(PDF tool kit),这个命令非常易于使用,并且可以很好地操作 PDF。例如,要将独立的文件合并成一个文件,你可以使用以下命令:

```
$ pdftk pg1.pdf pg2.pdf pg3.pdf pg4.pdf pg5.pdf cat output OneDoc.pdf
```

`OneDoc.pdf` 将包含上面显示的所有五个文档,命令将在几秒钟内运行完毕。请注意,`cat` 选项表示将文件连接在一起,`output` 选项指定新文件的名称。

你还可以从 PDF 中提取选定页面来创建单独的 PDF 文件。例如,如果要创建仅包含上面创建的文档的第 1、2、3 和 5 页的新 PDF,那么可以执行以下操作:

```
$ pdftk OneDoc.pdf cat 1-3 5 output 4pgs.pdf
```

另外,如果你想要第 1、3、4 和 5 页(总计 4 页),我们可以使用以下命令:

```
$ pdftk OneDoc.pdf cat 1 3-end output 4pgs.pdf
```

你可以选择单独页面或者页面范围,如上例所示。

下一个命令将从一个包含奇数页(1、3 等)的文件和一个包含偶数页(2、4 等)的文件创建一个整合文档:

```
$ pdftk A=odd.pdf B=even.pdf shuffle A B output collated.pdf
```

请注意,`shuffle` 选项使得能够完成整合,并指示文档的使用顺序。另请注意:虽然上面建议用的是奇数/偶数页,但你不限于仅使用两个文件。

如果要创建只能由知道密码的收件人打开的加密 PDF,可以使用如下命令:

```
$ pdftk prep.pdf output report.pdf user_pw AsK4n0thingGeTn0thing
```

该命令提供 40 位(`encrypt_40bit`)和 128 位(`encrypt_128bit`)加密选项。默认情况下使用 128 位加密。

你还可以使用 `burst` 选项将 PDF 文件分成单个页面:

```
$ pdftk allpgs.pdf burst
$ ls -ltr *.pdf | tail -5
-rw-rw-r-- 1 shs shs 22933 Aug 8 08:18 pg_0001.pdf
-rw-rw-r-- 1 shs shs 23773 Aug 8 08:18 pg_0002.pdf
-rw-rw-r-- 1 shs shs 23260 Aug 8 08:18 pg_0003.pdf
-rw-rw-r-- 1 shs shs 23435 Aug 8 08:18 pg_0004.pdf
-rw-rw-r-- 1 shs shs 23136 Aug 8 08:18 pg_0005.pdf
```

`pdftk` 命令使得合并、拆分、重建、加密 PDF 文件非常容易。要了解更多选项,请查看 [PDF 实验室][3]中的示例页面。

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3430781/how-to-manipulate-pdfs-on-linux.html

作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/08/book-pages-100807709-large.jpg
[3]: https://www.pdflabs.com/docs/pdftk-cli-examples/
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world
@@ -0,0 +1,164 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11244-1.html)
[#]: subject: (How to measure the health of an open source community)
[#]: via: (https://opensource.com/article/19/8/measure-project)
[#]: author: (Jon Lawrence https://opensource.com/users/the3rdlaw)

如何衡量一个开源社区的健康度
======

> 这比较复杂。

![](https://img.linux.net.cn/data/attachment/album/201908/19/184719nz3xuazppzu3vwcg.jpg)

作为一个经常管理软件开发团队的人,多年来我一直关注度量指标。一次次,我发现自己领导团队使用一个又一个的项目平台(例如 Jira、GitLab 和 Rally)生成了大量可测量的数据。从那时起,我已经及时投入了大量时间从记录平台中提取了有用的指标,并采用了一种我们可以理解的格式,然后使用这些指标对开发的许多方面做出更好的选择。

今年早些时候,我有幸在 [Linux 基金会][2]遇到了一个名为<ruby>[开源软件社区健康分析][3]<rt>Community Health Analytics for Open Source Software</rt></ruby>(CHAOSS)的项目。该项目侧重于从各种来源收集和丰富指标,以便开源社区的利益相关者可以衡量他们项目的健康状况。

### CHAOSS 介绍

随着我对该项目的基本指标和目标越来越熟悉,一个问题在我的脑海中不断翻滚。什么是“健康”的开源项目,由谁来定义?

特定角色的人认为健康的东西可能另一个角色的人就不会这样认为。似乎可以用 CHAOSS 收集的细粒度数据进行市场细分实验,重点关注对特定角色可能最有意义的背景问题,以及 CHAOSS 收集哪些指标可能有助于回答这些问题。

CHAOSS 项目创建并维护了一套开源应用程序和度量标准定义,使得这个实验具有可能性,这包括:

* 许多基于服务器的应用程序,用于收集、聚合和丰富度量标准(例如 Augur 和 GrimoireLab)。
* ElasticSearch、Kibana 和 Logstash(ELK)的开源版本。
* 身份服务、数据分析服务和各种集成库。

在我过去的一个程序中,有六个团队从事于不同复杂程度的项目,我们找到了一个简洁的工具,它允许我们从简单(或复杂)的 JQL 语句中创建我们想要的任何类型的指标,然后针对这些指标开发计算。不知不觉间,我们仅从 Jira 中就提取了 400 多个指标,而且还有更多指标来自手动的来源。

在项目结束时,我们认定这 400 个指标中,大多数指标在*以我们的角色*做出决策时并不重要。最终,只有三个对我们非常重要:“缺陷去除效率”、“已完成的条目与承诺的条目”,以及“每个开发人员的工作进度”。这三个指标最重要,因为它们是我们对自己、客户和团队成员所做出的承诺,因此是最有意义的。

带着这些通过经验得到的教训和对什么是健康的开源项目的问题,我跳进了 CHAOSS 社区,开始建立一套角色,以提供一种建设性的方法,从基于角色的角度回答这个问题。

CHAOSS 是一个开源项目,我们尝试使用民主共识来运作。因此,我决定使用<ruby>组成分子<rt>constituent</rt></ruby>这个词而不是利益相关者,因为它更符合我们作为开源贡献者的责任,以创建更具共生性的价值链。

虽然创建此组成模型的过程采用了特定的“目标-问题-度量”方法,但有许多方法可以进行细分。CHAOSS 贡献者已经开发了很好的模型,可以按照矢量进行细分,例如项目属性(例如,个人、公司或联盟)和“失败容忍度”。在为 CHAOSS 开发度量定义时,每个模型都会提供建设性的影响。

基于这一切,我开始构建一个谁可能关心 CHAOSS 指标的模型,以及每个组成分子在 CHAOSS 的四个重点领域中最关心的问题:

* [多样性和包容性][4]
* [演化][5]
* [风险][6]
* [价值][7]

在我们深入研究之前,重要的是要注意 CHAOSS 项目明确地将背景判断留给了实施指标的团队。什么是“有意义的”和“什么是健康的?”的答案预计会因团队和项目而异。CHAOSS 软件的现成仪表板尽可能地关注客观指标。在本文中,我们关注项目创始人、项目维护者和贡献者。

### 项目组成分子

虽然这绝不是这些组成分子可能认为重要的问题的详尽清单,但这些选择感觉是一个好的起点。以下每个“目标-问题-度量”标准部分与 CHAOSS 项目正在收集和汇总的指标直接相关。

现在,进入分析的第 1 部分!

#### 项目创始人

作为**项目创始人**,我**最**关心:

* 我的项目**对其他人有用吗?**通过以下测量:
  * 随着时间推移有多少复刻?
    * **指标:**存储库复刻数。
  * 随着时间的推移有多少贡献者?
    * **指标:**贡献者数量。
  * 贡献净质量。
    * **指标:**随着时间的推移提交的错误。
    * **指标:**随着时间推移的回归数。
  * 项目的财务状况。
    * **指标:**随着时间的推移的捐赠/收入。
    * **指标:**随着时间的推移的费用。
* 我的项目对其它人的**可见**程度?
  * 有谁知道我的项目?别人认为它很整洁吗?
    * **指标:**社交媒体上的提及、分享、喜欢和订阅的数量。
  * 有影响力的人是否了解我的项目?
    * **指标:**贡献者的社会影响力。
  * 人们在公共场所对项目有何评价?是正面还是负面?
    * **指标:**跨社交媒体渠道的情感(关键字或 NLP)分析。
* 我的项目**可行性**程度?
  * 我们有足够的维护者吗?该数字是随着时间的推移而上升还是下降?
    * **指标:**维护者数量。
  * 改变速度如何随时间变化?
    * **指标:**代码随时间的变化百分比。
    * **指标:**拉取请求、代码审查和合并之间的时间。
* 我的项目的[多样化 & 包容性][4]如何?
  * 我们是否拥有有效的公开行为准则(CoC)?
    * **指标:**检查存储库中的 CoC 文件。
  * 与我的项目相关的活动是否积极包容?
    * **指标:**关于活动的票务政策和活动包容性行为的手动报告。
  * 我们的项目在可访问性上做得好不好?
    * **指标:**验证发布的文字会议纪要。
    * **指标:**验证会议期间使用的隐藏式字幕。
    * **指标:**验证在演示文稿和项目前端设计中色盲可访问的素材。
* 我的项目代表了多少[价值][7]?
  * 我如何帮助组织了解使用我们的项目将节省多少时间和金钱(劳动力投资)?
    * **指标:**仓库的议题、提交、拉取请求的数量和估计人工费率。
  * 我如何理解项目创建的下游价值的数量,以及维护我的项目对更广泛的社区的重要性(或不重要)?
    * **指标:**依赖我的项目的其他项目数。
  * 为我的项目做出贡献的人有多少机会使用他们学到的东西来找到合适的工作岗位,以及在哪些组织(即生活工资)?
    * **指标:**使用或贡献此库的组织数量。
    * **指标:**使用此类项目的开发人员的平均工资。
    * **指标:**与该项目匹配的关键字的职位发布计数。

#### 项目维护者

作为**项目维护者**,我**最**关心:

* 我是**高效的**维护者吗?
  * **指标:**拉取请求在代码审查之前等待的时间。
  * **指标:**代码审查和后续拉取请求之间的时间。
  * **指标:**我的代码审核中有多少被批准?
  * **指标:**我的代码评论中有多少被拒绝或返工?
  * **指标:**代码审查的评论的情感分析。
* 我如何让**更多人**帮助我维护这件事?
  * **指标:**项目贡献者的社交覆盖面数量。
* 我们的**代码质量**随着时间的推移变得越来越好吗?
  * **指标:**计算随着时间的推移引入的回归数量。
  * **指标:**计算随着时间推移引入的错误数量。
  * **指标:**错误归档、拉取请求、代码审查、合并和发布之间的时间。

#### 项目开发者和贡献者

作为**项目开发者或贡献者**,我**最**关心:

* 我可以从为这个项目做出贡献中获得哪些有价值的东西,以及实现这个价值需要多长时间?
  * **指标:**下游价值。
  * **指标:**提交、代码审查和合并之间的时间。
* 通过使用我在贡献中学到的东西来增加工作机会,是否有良好的前景?
  * **指标:**生活工资。
* 这个项目有多受欢迎?
  * **指标:**社交媒体帖子、分享和收藏的数量。
* 社区有影响力的人知道我的项目吗?
  * **指标:**创始人、维护者和贡献者的社交范围。

通过创建这个列表,我们开始让 CHAOSS 更加丰满了;随着该项目在今年夏天首次发布这些指标,我迫不及待地想看看广泛的开源社区可能有什么其他伟大的想法,以及我们还可以从这些贡献中学到什么(并衡量!)。

### 其它角色

接下来,还需要为其他角色(例如基金会、企业开源计划办公室、业务风险和法律团队、人力资源等)以及最终用户建立类似的“目标-问题-度量”集。他们关心开源的不同事物。

如果你是开源贡献者或组成分子,我们邀请你[来看看这个项目][8]并参与社区活动!

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/measure-project

作者:[Jon Lawrence][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/the3rdlaw
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_data_dashboard_system_computer_analytics.png?itok=oxAeIEI- (metrics and data shown on a computer screen)
[2]: https://www.linuxfoundation.org/
[3]: https://chaoss.community/
[4]: https://github.com/chaoss/wg-diversity-inclusion
[5]: https://github.com/chaoss/wg-evolution
[6]: https://github.com/chaoss/wg-risk
[7]: https://github.com/chaoss/wg-value
[8]: https://github.com/chaoss/
@@ -0,0 +1,86 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11236-1.html)
[#]: subject: (How to Get Linux Kernel 5.0 in Ubuntu 18.04 LTS)
[#]: via: (https://itsfoss.com/ubuntu-hwe-kernel/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

如何在 Ubuntu 18.04 LTS 中获取 Linux 5.0 内核
======

> 最近发布的 Ubuntu 18.04.3 包括 Linux 5.0 内核中的几个新功能和改进,但默认情况下没有安装。本教程演示了如何在 Ubuntu 18.04 LTS 中获取 Linux 5.0 内核。

![](https://img.linux.net.cn/data/attachment/album/201908/17/101052xday1jyrszbddsfc.jpg)

[Ubuntu 18.04 的第三个“小数点版本”已经发布][2],它带来了新的稳定版本的 GNOME 组件、livepatch 桌面集成和 5.0 内核。

可是等等!什么是“<ruby>小数点版本<rt>point release</rt></ruby>”?让我先解释一下。

### Ubuntu LTS 小数点版本

Ubuntu 18.04 于 2018 年 4 月发布,由于它是一个长期支持(LTS)版本,它将一直支持到 2023 年。从那时起,已经有许多 bug 修复、安全更新和软件升级。如果你今天下载 Ubuntu 18.04,你需要[在安装 Ubuntu 后首先安装这些更新][3]。

当然,这不是一种理想情况。这就是 Ubuntu 提供这些“小数点版本”的原因。小数点版本包含所有功能和安全更新以及自 LTS 版本首次发布以来添加的 bug 修复。如果你今天下载 Ubuntu,你会得到 Ubuntu 18.04.3 而不是 Ubuntu 18.04。这节省了在新安装的 Ubuntu 系统上下载和安装数百个更新的麻烦。

好了!现在你知道“小数点版本”的概念了。你如何升级到这些小数点版本?答案很简单。只需要像平时一样[更新你的 Ubuntu 系统][4],这样你就处于最新的小数点版本上了。

你可以[查看 Ubuntu 版本][5]来了解正在使用的版本。我检查了一下,因为我用的是 Ubuntu 18.04.3,我以为我的内核会是 5.0。当我[查看 Linux 内核版本][6]时,它仍然是基本内核 4.15。

![Ubuntu Version And Linux Kernel Version Check][7]

这是为什么?如果 Ubuntu 18.04.3 有 Linux 5.0 内核,为什么它仍然使用 Linux 4.15 内核?这是因为你必须通过选择 LTS <ruby>支持栈<rt>Enablement Stack</rt></ruby>(通常称为 HWE)手动请求在 Ubuntu LTS 中安装新内核。

### 使用 HWE 在 Ubuntu 18.04 中获取 Linux 5.0 内核

默认情况下,Ubuntu LTS 将保持在最初发布的 Linux 内核上。<ruby>[硬件支持栈][9]<rt>hardware enablement stack</rt></ruby>(HWE)为现有的 Ubuntu LTS 版本提供了更新的内核和 xorg 支持。

最近发生了一些变化。如果你下载了 Ubuntu 18.04.2 或更新的桌面版本,那么就会为你启用 HWE,默认情况下你将获得新内核以及常规更新。

对于服务器版本以及下载了 18.04 和 18.04.1 的人员,你需要安装 HWE 内核。完成后,你将获得 Ubuntu 提供的更新的 LTS 版本内核。

要在 Ubuntu 桌面上安装 HWE 内核以及更新的 xorg,你可以在终端中使用此命令:

```
sudo apt install --install-recommends linux-generic-hwe-18.04 xserver-xorg-hwe-18.04
```

如果你使用的是 Ubuntu 服务器版,那么就不会有 xorg 选项。所以只需在 Ubuntu 服务器版中安装 HWE 内核:

```
sudo apt-get install --install-recommends linux-generic-hwe-18.04
```
完成 HWE 内核的安装后,重启系统。现在你应该拥有更新的 Linux 内核了。
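重启后,可以用 `uname` 确认正在运行的内核版本(具体版本号因系统和所装内核而异):

```shell
# 查看当前正在运行的内核版本
uname -r
```

在启用了 HWE 的 Ubuntu 18.04.3 上,它应该输出一个 5.0 系列的版本号,例如 5.0.0-25-generic(具体数字视更新情况而定)。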
### 你在 Ubuntu 18.04 中获取 5.0 内核了么?

请注意,下载并安装了 Ubuntu 18.04.2 的用户已经启用了 HWE。所以这些用户将能轻松获取 5.0 内核。

你会不会在 Ubuntu 中启用 HWE 内核呢?这完全取决于你。[Linux 5.0 内核][10]有几项性能改进和更好的硬件支持,你将从新内核获益。

你怎么看?你会安装 5.0 内核还是宁愿留在 4.15 内核上?

--------------------------------------------------------------------------------

via: https://itsfoss.com/ubuntu-hwe-kernel/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://www.youtube.com/channel/UCEU9D6KIShdLeTRyH3IdSvw
[2]: https://ubuntu.com/blog/enhanced-livepatch-desktop-integration-available-with-ubuntu-18-04-3-lts
[3]: https://itsfoss.com/things-to-do-after-installing-ubuntu-18-04/
[4]: https://itsfoss.com/update-ubuntu/
[5]: https://itsfoss.com/how-to-know-ubuntu-unity-version/
[6]: https://itsfoss.com/find-which-kernel-version-is-running-in-ubuntu/
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/ubuntu-version-and-kernel-version-check.png?resize=800%2C300&ssl=1
[9]: https://wiki.ubuntu.com/Kernel/LTSEnablementStack
[10]: https://itsfoss.com/linux-kernel-5/
205
published/20190815 SSLH - Share A Same Port For HTTPS And SSH.md
Normal file

@@ -0,0 +1,205 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11247-1.html)
[#]: subject: (SSLH – Share A Same Port For HTTPS And SSH)
[#]: via: (https://www.ostechnix.com/sslh-share-port-https-ssh/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)

SSLH:让 HTTPS 和 SSH 共享同一个端口
======

![SSLH - Share A Same Port For HTTPS And SSH][1]

一些 ISP 和公司为了加强安全性,可能已经阻止了大多数端口,只允许少数特定端口(如端口 80 和 443)访问。在这种情况下,我们别无选择,只能让多个程序共用同一个端口,比如很少被阻止的 HTTPS 端口 443。在 SSL/SSH 多路复用器 SSLH 的帮助下,我们可以让它侦听端口 443 上的传入连接。更简单地说,SSLH 允许我们在 Linux 系统上的端口 443 上运行多个程序/服务。因此,你可以同时通过同一个端口使用 SSL 和 SSH。如果你遇到大多数端口被防火墙阻止的情况,你可以使用 SSLH 访问远程服务器。这个简短的教程描述了如何在类 Unix 操作系统中使用 SSLH 让 HTTPS、SSH 共享相同的端口。

### SSLH:让 HTTPS、SSH 共享端口

#### 安装 SSLH

大多数 Linux 发行版都有 SSLH 软件包,因此你可以使用默认包管理器进行安装。

在 Debian、Ubuntu 及其衍生版上运行:

```
$ sudo apt-get install sslh
```

安装 SSLH 时,将提示你是要将 sslh 作为从 inetd 运行的服务,还是作为独立服务器运行。每种选择都有其自身的优点。如果每天只有少量连接,最好从 inetd 运行 sslh 以节省资源。另一方面,如果有很多连接,sslh 应作为独立服务器运行,以避免为每个传入连接生成新进程。

![][2]

*安装 sslh*

在 Arch Linux 和 Antergos、Manjaro Linux 等衍生版上,使用 Pacman 进行安装,如下所示:

```
$ sudo pacman -S sslh
```

在 RHEL、CentOS 上,你需要添加 EPEL 存储库,然后安装 SSLH,如下所示:

```
$ sudo yum install epel-release
$ sudo yum install sslh
```

在 Fedora 上:

```
$ sudo dnf install sslh
```

如果它在默认存储库中不可用,你可以如[这里][3]所述手动编译和安装 SSLH。

#### 配置 Apache 或 Nginx Web 服务器

如你所知,Apache 和 Nginx Web 服务器默认会监听所有网络接口(即 `0.0.0.0:443`)。我们需要更改此设置以告知 Web 服务器仅侦听 `localhost` 接口(即 `127.0.0.1:443` 或 `localhost:443`)。

为此,请编辑 Web 服务器(nginx 或 apache)配置文件并找到以下行:

```
listen 443 ssl;
```

将其修改为:

```
listen 127.0.0.1:443 ssl;
```

如果你在 Apache 中使用虚拟主机,请确保你也修改了它。

```
VirtualHost 127.0.0.1:443
```

保存并关闭配置文件。不要重新启动该服务。我们还没有完成。

#### 配置 SSLH

使 Web 服务器仅在本地接口上侦听后,编辑 SSLH 配置文件:

```
$ sudo vi /etc/default/sslh
```

找到下列行:

```
Run=no
```

将其修改为:

```
Run=yes
```

然后,向下滚动一点并修改以下行以允许 SSLH 在所有可用接口上侦听端口 443(例如 `0.0.0.0:443`)。

```
DAEMON_OPTS="--user sslh --listen 0.0.0.0:443 --ssh 127.0.0.1:22 --ssl 127.0.0.1:443 --pidfile /var/run/sslh/sslh.pid"
```

这里:

* `--user sslh`:要求以这个特定的用户身份运行。
* `--listen 0.0.0.0:443`:SSLH 监听于所有可用接口的 443 端口。
* `--ssh 127.0.0.1:22`:将 SSH 流量路由到本地的 22 端口。
* `--ssl 127.0.0.1:443`:将 HTTPS/SSL 流量路由到本地的 443 端口。

保存并关闭文件。

最后,启用并启动 `sslh` 服务以应用更改。

```
$ sudo systemctl enable sslh
$ sudo systemctl start sslh
```

#### 测试

检查 SSLH 守护程序是否正在监听 443 端口。

```
$ ps -ef | grep sslh
sslh 2746 1 0 15:51 ? 00:00:00 /usr/sbin/sslh --foreground --user sslh --listen 0.0.0.0 443 --ssh 127.0.0.1 22 --ssl 127.0.0.1 443 --pidfile /var/run/sslh/sslh.pid
sslh 2747 2746 0 15:51 ? 00:00:00 /usr/sbin/sslh --foreground --user sslh --listen 0.0.0.0 443 --ssh 127.0.0.1 22 --ssl 127.0.0.1 443 --pidfile /var/run/sslh/sslh.pid
sk 2754 1432 0 15:51 pts/0 00:00:00 grep --color=auto sslh
```

现在,你可以使用端口 443 通过 SSH 访问远程服务器:

```
$ ssh -p 443 sk@192.168.225.50
```

示例输出:

```
sk@192.168.225.50's password:
Welcome to Ubuntu 18.04.2 LTS (GNU/Linux 4.15.0-55-generic x86_64)

* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage

System information as of Wed Aug 14 13:11:04 IST 2019

System load: 0.23 Processes: 101
Usage of /: 53.5% of 19.56GB Users logged in: 0
Memory usage: 9% IP address for enp0s3: 192.168.225.50
Swap usage: 0% IP address for enp0s8: 192.168.225.51

* Keen to learn Istio? It's included in the single-package MicroK8s.

https://snapcraft.io/microk8s

61 packages can be updated.
22 updates are security updates.


Last login: Wed Aug 14 13:10:33 2019 from 127.0.0.1
```

![][4]

*通过 SSH 使用 443 端口访问远程系统*
看见了吗?即使默认的 SSH 端口 22 被阻止,我现在也可以通过 SSH 访问远程服务器。正如你在上面的示例中所看到的,我使用 https 端口 443 进行 SSH 连接。
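为了省去每次都输入 `-p 443`,可以在 `~/.ssh/config` 中为这台服务器加一条配置。下面是一个示例片段,其中主机别名 `work443`、IP 地址和用户名都只是示例,请换成你自己的:

```
Host work443
    HostName 192.168.225.50
    Port 443
    User sk
```

之后直接运行 `ssh work443` 即可通过 443 端口连接。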
我在我的 Ubuntu 18.04 LTS 服务器上测试了 SSLH,它如上所述工作得很好。我是在受保护的局域网中测试的 SSLH,所以我不知道是否有安全问题。如果你在生产环境中使用它,请在下面的评论部分中告诉我们使用 SSLH 的优缺点。

有关更多详细信息,请查看下面给出的官方 GitHub 页面。

资源:

* [SSLH GitHub 仓库][12]

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/sslh-share-port-https-ssh/

作者:[sk][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2017/08/SSLH-Share-A-Same-Port-For-HTTPS-And-SSH-1-720x340.jpg
[2]: https://www.ostechnix.com/wp-content/uploads/2017/08/install-sslh.png
[3]: https://github.com/yrutschle/sslh/blob/master/doc/INSTALL.md
[4]: https://www.ostechnix.com/wp-content/uploads/2017/08/Access-remote-systems-via-SSH-using-port-443.png
[5]: https://www.ostechnix.com/how-to-ssh-into-a-particular-directory-on-linux/
[6]: https://www.ostechnix.com/how-to-create-ssh-alias-in-linux/
[7]: https://www.ostechnix.com/configure-ssh-key-based-authentication-linux/
[8]: https://www.ostechnix.com/how-to-stop-ssh-session-from-disconnecting-in-linux/
[9]: https://www.ostechnix.com/allow-deny-ssh-access-particular-user-group-linux/
[10]: https://www.ostechnix.com/4-ways-keep-command-running-log-ssh-session/
[11]: https://www.ostechnix.com/scanssh-fast-ssh-server-open-proxy-scanner/
[12]: https://github.com/yrutschle/sslh
@ -0,0 +1,81 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11241-1.html)
|
||||
[#]: subject: (GNOME and KDE team up on the Linux desktop, docs for Nvidia GPUs open up, a powerful new way to scan for firmware vulnerabilities, and more news)
|
||||
[#]: via: (https://opensource.com/article/19/8/news-august-17)
|
||||
[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt)
|
||||
|
||||
开源新闻综述:GNOME 和 KDE 达成合作、Nvidia 开源 GPU 文档
|
||||
======
|
||||
|
||||
> 不要错过两周以来最大的开源头条新闻。
|
||||
|
||||
![Weekly news roundup with TV][1]
|
||||
|
||||
在本期开源新闻综述中,我们将介绍两种新的强大数据可视化工具、Nvidia 开源其 GPU 文档、激动人心的新工具、确保自动驾驶汽车的固件安全等等!
|
||||
|
||||
### GNOME 和 KDE 在 Linux 桌面上达成合作
|
||||
|
||||
Linux 在桌面计算机上一直处于分裂状态。在最近的一篇[公告][2]中称,“两个主要的 Linux 桌面竞争对手,[GNOME 基金会][3] 和 [KDE][4] 已经同意合作。”
|
||||
|
||||
这两个组织将成为今年 11 月在巴塞罗那举办的 [Linux App Summit(LAS)2019][5] 的赞助商。这一举措在某种程度上似乎是对桌面计算不再是争夺支配地位的最佳场所的回应。无论是什么原因,Linux 桌面的粉丝们都有新的理由希望未来出现一个标准化的 GUI 环境。
|
||||
|
||||
### 新的开源数据可视化工具
|
||||
|
||||
这个世界上很少有不由数据驱动的事物。但除非数据以人们可以与之交互的形式呈现,否则它并没有多大用处。最近开源的两个数据可视化项目正在尝试使数据更有用。
|
||||
|
||||
第一个工具名为 **Neuroglancer**,由 [Google 的研究团队][6]创建。它“使神经科医生能够在交互式可视化中建立大脑神经通路的 3D 模型。”Neuroglancer 通过使用神经网络追踪大脑中的神经元路径并构建完整的可视化来实现这一点。科学家已经使用了 Neuroglancer(你可以[从 GitHub 取得][7])通过扫描果蝇的大脑来建立一个交互式地图。
|
||||
|
||||
第二个工具来自一个不太能想到的来源:澳大利亚信号理事会。这是该国家类似 NSA 的机构,它“开源了[内部数据可视化和分析工具][8]之一。”这个被称为 **[Constellation][9]** 的工具可以“识别复杂数据集中的趋势和模式,并且能够扩展到‘数十亿输入’。”该机构总干事迈克•伯吉斯表示,他希望“这一工具将有助于产生有利于所有澳大利亚人的科学和其他方面的突破。”鉴于它是开源的,它可以使整个世界受益。
|
||||
|
||||
### Nvidia 开始发布 GPU 文档
|
||||
|
||||
多年来,图形处理单元(GPU)制造商 Nvidia 并没有做出什么让开源项目轻松开发其产品的驱动程序的努力。现在,该公司通过[发布 GPU 硬件文档][10]向这些项目迈出了一大步。
|
||||
|
||||
该公司根据 MIT 许可证发布的文档[可在 GitHub 上获取][11]。它涵盖了几个关键领域,如设备初始化、内存时钟/调整和电源状态。据硬件新闻网站 Phoronix 称,开发了 Nvidia GPU 的开源驱动程序的 Nouveau 项目将是率先使用该文档来推动其开发工作的项目之一。
|
||||
|
||||
### 用于保护固件的新工具
|
||||
|
||||
似乎每周都有的消息称,移动设备或连接互联网的小设备中出现新漏洞。通常,这些漏洞存在于控制设备的固件中。自动驾驶汽车服务 Cruise [发布了一个开源工具][12],用于在这些漏洞成为问题之前捕获这些漏洞。
|
||||
|
||||
该工具被称为 [FwAnalyzer][13]。它检查固件代码中是否存在许多潜在问题,包括“识别潜在危险的可执行文件”,并查明“任何错误遗留的调试代码”。Cruise 的工程师 Collin Mulliner 曾帮助开发该工具,他说通过在代码上运行 FwAnalyzer,固件开发人员“现在能够检测并防止各种安全问题。”
|
||||
|
||||
### 其它新闻
|
||||
|
||||
* [为什么洛杉矶决定将未来寄予开源][14]
|
||||
* [麻省理工学院出版社发布了关于开源出版软件的综合报告][15]
|
||||
* [华为推出鸿蒙操作系统,不会放弃 Android 智能手机][16]
|
||||
|
||||
*一如既往地感谢 Opensource.com 的工作人员和主持人本周的帮助。*
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/8/news-august-17
|
||||
|
||||
作者:[Scott Nesbitt][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/scottnesbitt
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/weekly_news_roundup_tv.png?itok=B6PM4S1i (Weekly news roundup with TV)
|
||||
[2]: https://www.zdnet.com/article/gnome-and-kde-work-together-on-the-linux-desktop/
|
||||
[3]: https://www.gnome.org/
|
||||
[4]: https://kde.org/
|
||||
[5]: https://linuxappsummit.org/
|
||||
[6]: https://www.cbronline.com/news/brain-mapping-google-ai
|
||||
[7]: https://github.com/google/neuroglancer
|
||||
[8]: https://www.computerworld.com.au/article/665286/australian-signals-directorate-open-sources-data-analysis-tool/
|
||||
[9]: https://www.constellation-app.com/
|
||||
[10]: https://www.phoronix.com/scan.php?page=news_item&px=NVIDIA-Open-GPU-Docs
|
||||
[11]: https://github.com/nvidia/open-gpu-doc
|
||||
[12]: https://arstechnica.com/information-technology/2019/08/self-driving-car-service-open-sources-new-tool-for-securing-firmware/
|
||||
[13]: https://github.com/cruise-automation/fwanalyzer
|
||||
[14]: https://www.techrepublic.com/article/why-la-decided-to-open-source-its-future/
|
||||
[15]: https://news.mit.edu/2019/mit-press-report-open-source-publishing-software-0808
|
||||
[16]: https://www.itnews.com.au/news/huawei-unveils-harmony-operating-system-wont-ditch-android-for-smartphones-529432
|
@ -1,91 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Linux Smartphone Librem 5 is Available for Preorder)
|
||||
[#]: via: (https://itsfoss.com/librem-5-available/)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
|
||||
|
||||
Linux Smartphone Librem 5 is Available for Preorder
|
||||
======
|
||||
|
||||
Purism recently [announced][1] the final specs for its [Librem 5 smartphone][2]. This is not based on Android or iOS – but built on [PureOS][3], which is an [open-source alternative to Android][4].
|
||||
|
||||
Along with the announcement, the Librem 5 is also available for [pre-orders for $649][5] (as an early bird offer till 31st July) and it will go up by $50 following the date. It will start shipping from Q3 of 2019.
|
||||
|
||||
![][6]
|
||||
|
||||
Here’s what Purism mentioned about Librem 5 in its blog post:
|
||||
|
||||
_**We believe phones should not track you nor exploit your digital life.**_
|
||||
|
||||
_The Librem 5 represents the opportunity for you to take back control and protect your private information, your digital life through free and open source software, open governance, and transparency. The Librem 5 is_ **_a phone built on_ [_PureOS_][3]**_, a fully free, ethical and open-source operating system that is_ _**not based on Android or iOS**_ _(learn more about [why this is important][7])._
|
||||
|
||||
_We have successfully crossed our crowdfunding goals and will be delivering on our promise. The Librem 5’s hardware and software development is advancing [at a steady pace][8], and is scheduled for an initial release in Q3 2019. You can preorder the phone at $649 until shipping begins and regular pricing comes into effect. Kits with an external monitor, keyboard and mouse, are also available for preorder._
|
||||
|
||||
### Librem 5 Specifications
|
||||
|
||||
From what it looks like, Librem 5 definitely aims to provide better privacy and security. In addition to this, it tries to avoid using anything from Google or Apple.
|
||||
|
||||
While the idea is good enough – how does it hold up as a commercial smartphone under $700?
|
||||
|
||||
![Librem 5 Smartphone][9]
|
||||
|
||||
Let us take a look at the tech specs:
|
||||
|
||||
![Librem 5][10]
|
||||
|
||||
On paper, the tech specs seem good enough. Not too great – not too bad. But, what about the performance? The user experience?
|
||||
|
||||
Well, we can’t be too sure about it – unless we use it. So, if you are pre-ordering it – take that into consideration.
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
### Lifetime software updates for Librem 5
|
||||
|
||||
Of course, the specs aren’t very pretty when compared to the smartphones available at this price range.
|
||||
|
||||
However, with the promise of lifetime software updates – it does look like a decent offering for open source enthusiasts.
|
||||
|
||||
### Other Key Features
|
||||
|
||||
Purism also highlights the fact that Librem 5 will be the first-ever [Matrix][12]-powered smartphone. This means that it will support end-to-end decentralized encrypted communications for messaging and calling.
|
||||
|
||||
In addition to all these, the presence of headphone jack and a user-replaceable battery makes it a pretty solid deal.
|
||||
|
||||
### Wrapping Up
|
||||
|
||||
Even though it is tough to compete with the likes of Android/iOS smartphones, having an alternative is always good. Librem 5 may not prove to be an ideal smartphone for every user – but if you are an open-source enthusiast and looking for a simple smartphone that respects privacy and security without utilizing Google/Apple services, this is for you.
|
||||
|
||||
Also the fact that it will receive lifetime software updates – makes it an interesting smartphone.
|
||||
|
||||
What do you think about Librem 5? Are you thinking of pre-ordering it? Let us know your thoughts in the comments below.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/librem-5-available/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://puri.sm/posts/librem-5-smartphone-final-specs-announced/
|
||||
[2]: https://itsfoss.com/librem-linux-phone/
|
||||
[3]: https://pureos.net/
|
||||
[4]: https://itsfoss.com/open-source-alternatives-android/
|
||||
[5]: https://shop.puri.sm/shop/librem-5/
|
||||
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/librem-5-linux-smartphone.jpg?resize=800%2C450&ssl=1
|
||||
[7]: https://puri.sm/products/librem-5/pureos-mobile/
|
||||
[8]: https://puri.sm/posts/tag/phones
|
||||
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/07/librem-5-smartphone.jpg?ssl=1
|
||||
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/07/librem-5-specs.png?ssl=1
|
||||
[11]: https://itsfoss.com/linux-games-performance-boost-amd-gpu/
|
||||
[12]: http://matrix.org
|
@ -0,0 +1,107 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (scvoet)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (LiVES Video Editor 3.0 is Here With Significant Improvements)
|
||||
[#]: via: (https://itsfoss.com/lives-video-editor/)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
|
||||
|
||||
LiVES 视频编辑器 3.0 有了显著的改善
|
||||
======
|
||||
|
||||
我们最近列出了[最好的开源视频编辑器][1]清单。LiVES 就是这些开源视频编辑器中可以免费使用的一个。
|
||||
|
||||
即使许多用户还在等待 Windows 版本的发行,但在刚刚发行的 LiVES 视频编辑器 Linux 版本中(最新版本 v3.0.1)进行了一个重大更新,更新内容中包括了一些新的功能和改进。
|
||||
|
||||
在这篇文章里,我将会列出新版本中的重要改进,并且我将会提到在 Linux 上安装的步骤。
|
||||
|
||||
### LiVES 视频编辑器 3.0:新的改进
|
||||
|
||||
![Zorin OS 中正在加载的 LiVES 视频编辑器][2]
|
||||
|
||||
总的来说,这次重大更新旨在让 LiVES 视频编辑器拥有更加流畅的回放、防止意外崩溃、优化视频录制,并让在线视频下载更加实用。
|
||||
|
||||
下面列出了修改:
|
||||
|
||||
* 如果需要加载的话,可以静默加载直到视频播放完毕。
|
||||
* 改进回放插件为 openGL,提供更加丝滑的回放。
|
||||
* 重新启用了 openGL 回放插件的高级选项。
|
||||
* 在 VJ/预解码 中允许“充足”的所有帧。
|
||||
* 重构了在播放时基础计算的代码(有了更好的 a/v 同步)。
|
||||
* 彻底修复了外部音频和音频,提高了准确性并减少了 CPU 周期。
|
||||
* 进入多音轨模式时自动切换至内部音频。
|
||||
* 重新显示效果映射器窗口时,将会正常展示效果状态(on/off)。
|
||||
* 解决了音频和视频线程之间的冲突。
|
||||
* 在线视频下载器现在可以修改剪辑的大小和格式,并添加了更新选项。
|
||||
* 对实时效果实行了引用计数。
|
||||
* 大范围重写了主界面,清理代码并改进多视觉。
|
||||
* 优化了视频播放器运行时的录制功能。
|
||||
* 改进了 projectM 过滤器,包括支持了 SDL2。
|
||||
* 添加了一个选项来逆转多轨合成器中的 Z-order(后层现在可以覆盖上层了)。
|
||||
* 增加了对 musl libc 的支持。
|
||||
* 更新了乌克兰语的翻译。
|
||||
|
||||
|
||||
如果您不是一位高级视频编辑师,也许会对上面列出的重要更新提不起太大的兴趣。但正是因为这些更新,才使得“LiVES 视频编辑器”成为了最好的开源视频编辑软件之一。
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
### 在 Linux 上安装 LiVES 视频编辑器
|
||||
|
||||
LiVES 几乎可以在所有主要 Linux 发行版中使用。但是,您可能并不能在软件中心找到它的最新版本。所以,如果你想通过这种方式安装,那你就不得不耐心等待了。
|
||||
|
||||
如果你想要手动安装,可以从它的下载页面获取 Fedora/openSUSE 的 RPM 安装包。它也适用于其他 Linux 发行版。
|
||||
|
||||
[下载 LiVES 视频编辑器][4]
|
||||
|
||||
如果您使用的是 Ubuntu(或其他基于 Ubuntu 的发行版),您可以安装由 [Ubuntuhandbook][6] 进行维护的[非官方 PPA][5]。
|
||||
|
||||
下面由我来告诉你,你该做些什么:
|
||||
|
||||
**1.** 启动终端后输入以下命令:
|
||||
|
||||
```
|
||||
sudo add-apt-repository ppa:ubuntuhandbook1/lives
|
||||
```
|
||||
|
||||
系统将提示您输入密码用于确认添加 PPA。
|
||||
|
||||
**2.** 完成后,您就可以更新软件包列表并安装 LiVES 视频编辑器。以下是需要您输入的命令:
|
||||
|
||||
```
|
||||
sudo apt update
|
||||
sudo apt install lives lives-plugins
|
||||
```
|
||||
|
||||
**3.** 现在,它开始下载并安装视频编辑器,等待大约一分钟即可完成。
|
||||
|
||||
### 总结
|
||||
|
||||
Linux 上有许多[视频编辑器][7]。但它们通常被认为不能进行专业的编辑。而我并不是一名专业人士,所以像 LiVES 这样免费的视频编辑器就足以进行简单的编辑了。
|
||||
|
||||
您认为怎么样呢?您在 Linux 上使用 LiVES 或其他视频编辑器的体验还好吗?在下面的评论中告诉我们你的感觉吧。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/lives-video-editor/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[Scvoet][c]
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[c]: https://github.com/scvoet
|
||||
[1]: https://itsfoss.com/open-source-video-editors/
|
||||
[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/lives-video-editor-loading.jpg?ssl=1
|
||||
[3]: https://itsfoss.com/vidcutter-video-editor-linux/
|
||||
[4]: http://lives-video.com/index.php?do=downloads#binaries
|
||||
[5]: https://itsfoss.com/ppa-guide/
|
||||
[6]: http://ubuntuhandbook.org/index.php/2019/08/lives-video-editor-3-0-released-install-ubuntu/
|
||||
[7]: https://itsfoss.com/best-video-editing-software-linux/
|
@ -1,3 +1,12 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (cycoe)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (A brief history of text-based games and open source)
|
||||
[#]: via: (https://opensource.com/article/18/7/interactive-fiction-tools)
|
||||
[#]: author: (Jason Mclntosh https://opensource.com/users/jmac)
|
||||
|
||||
A brief history of text-based games and open source
|
||||
======
|
||||
|
||||
|
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: translator: (hopefully2333)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
@ -0,0 +1,103 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (WAN Transformation: It’s More Than SD-WAN)
|
||||
[#]: via: (https://www.networkworld.com/article/3430638/wan-transformation-it-s-more-than-sd-wan.html)
|
||||
[#]: author: (Cato Networks https://www.networkworld.com/author/Matt-Conran/)
|
||||
|
||||
WAN Transformation: It’s More Than SD-WAN
|
||||
======
|
||||
Tomorrow’s networking challenges will extend beyond the capabilities of SD-WAN alone. Here’s why and how you can prepare your network.
|
||||
![metamorworks][1]
|
||||
|
||||
As an IT leader, you’re expected to be the technology vanguard of your organization. It is you who must deflate technology hype and devise the technology plan to keep the organization competitive.
|
||||
|
||||
Addressing the WAN is, of course, essential to that plan. The high costs and limited agility of legacy MPLS-based networks are well known. What’s less clear is how to transform the enterprise network in a way that will remain agile and efficient for decades to come.
|
||||
|
||||
Many mistakenly assume [SD-WAN][2] to be that transformation. After all, SD-WAN brings agility, scalability, and cost efficiencies lacking in telco-managed MPLS services. But while a critical step, SD-WAN alone is insufficient to address the networking challenges you’re likely to face today — and tomorrow. Here’s why.
|
||||
|
||||
### SD-WAN Cannot Fix the Unpredictability of the Global Internet
|
||||
|
||||
Enterprise networks are nothing but predictable. Yet to realize their benefits, SD-WANs must rely on the unpredictable public Internet, a crapshoot, meeting enterprise requirements one day, wildly amiss the next. There’s simply no way to anticipate [exceptional Internet events][3]. And with global Internet connections, you don’t even need to wait for unusual. At Cato, we routinely see [how latency across our private backbone can halve the latency of similar Internet routes][4], a fact confirmed by numerous third-party sources.
|
||||
|
||||
![][5]
|
||||
|
||||
### SD-WAN Lacks Security, Yet Security is Required _Everywhere_

Is it any wonder [SD-WAN vendors][6] partner with legacy telcos? But telcos too often come with a last-mile agenda, locking you into specific providers. Cost and support models are also designed for the legacy business, not the digital one.
|
||||
|
||||
It’s no secret that with Internet access you need the advanced security protection of a next-generation firewall, IPS, and the rest of today’s security stack. It's also no secret SD-WAN lacks advanced security. How, then, will you provide branch locations with [secure Direct Internet Access (DIA)?][7]
|
||||
|
||||
Deploying branch security appliances will complicate the network, running counter to your goal of creating a leaner, more agile infrastructure. Appliances, whether physical or virtual ([VNFs in an NFV architecture][8]), must be maintained. New software patches must be tested, staged, and deployed. As traffic loads grow or compute-intensive features are enabled, such as TLS inspection, the security appliance’s compute requirements increase, ultimately forcing unplanned hardware upgrades.
|
||||
|
||||
Cloud security services can avoid those problems. But too often they only inspect Internet traffic, not site-to-site traffic, forcing IT to maintain and coordinate separate security policies, complicating troubleshooting and deployment.
|
||||
|
||||
### SD-WAN Does Not Extend Well to the Cloud, Mobile Users, or the Tools of Tomorrow
|
||||
|
||||
Then there’s the problem of the new tenants of the enterprise. SD-WAN is an MPLS replacement; it doesn’t extend naturally to the cloud, today’s destination for most enterprise traffic. And mobile users are completely beyond SD-WAN’s scope, requiring separate connectivity and security infrastructure that too often disrupts the mobile experience and fragments visibility, complicating troubleshooting and management.
|
||||
|
||||
Just over the horizon are IoT devices, not to mention the developments we can’t even foresee. In many cases, installing appliances won’t be possible. How will your SD-WAN accommodate these developments without compromising on the operational agility and efficiencies demanded by the digital business?
|
||||
|
||||
### It’s Time to Evolve the Network Architecture
|
||||
|
||||
Continuing to solve network challenges in parts —MPLS service here, remote access VPN there, and a sprinkling of cloud access solutions, routers, firewalls, WAN optimizers, and sensors — only persists in complicating the enterprise network, ultimately restricting how much cost can be saved or operational efficiency gained. SD-WAN-only solutions are symptomatic of this segmented thinking, solving only a small part of the enterprise’s far bigger networking challenge.
|
||||
|
||||
What’s needed isn’t another point appliance or another network. What’s needed is **one network** that connects **and** secures **all** company resources worldwide without compromising on cost or performance. This is an architectural issue, one that can’t be solved by repackaging multiple appliances as a network service. Such approaches lead to inconsistent services, poor manageability, and high latency — a fact that Gartner notes in its recent [Hype Cycle for Enterprise Networking][9].
|
||||
|
||||
### Picture the Network of the Future
|
||||
|
||||
What might this architecture look like? At its basis, think of collapsing MPLS, VPN, and all other networks into **one** global, private, managed backbone available from anywhere to anywhere. Such a network would connect all edges — sites, cloud resources, and mobile devices — with far better performance than the Internet at far lower cost than MPLS services.
|
||||
|
||||
Such a vision is possible today, in fact, due to two trends — the massive investment in global IP capacity and advancements in high-performance, commercial off-the-shelf (COTS) hardware.
|
||||
|
||||
#### Connect
|
||||
|
||||
The Points of Presence (PoPs) comprising such a backbone would interconnect using SLA-backed IP connections across multiple provider networks. By connecting PoPs across multiple networks, the backbone would offer better performance and resiliency than any one underlying network. It would, in effect, bring the power of SD-WAN to the backbone core.
|
||||
|
||||
The cloud-native software would execute all major networking and security functions normally running in edge appliances. WAN optimization, dynamic path selection, policy-based routing, and more would move to the cloud. The PoPs would also monitor the real-time conditions of the underlying networks, routing traffic, including cloud traffic, along the optimum path to the PoP closest to the destination.
|
||||
|
||||
With most processing done by the PoP, connecting any type of “edge” — site, cloud resources, mobile devices, IoT devices, and more — would become simple. All that’s needed is a small client, primarily to establish an encrypted tunnel across an Internet connection to the nearest PoP. By colocating PoP and cloud IXPs in the same physical data centers, cloud resources would implicitly become part of your new optimized, corporate network. All without deploying additional software or hardware.
|
||||
|
||||
#### Secure
|
||||
|
||||
To ensure security, all traffic would only reach an “edge” after security inspection that runs as part of the cloud-native software. Security services would include next-generation firewall, secure web gateway, and advanced threat protection. By running in the PoPs, security services benefit from the scalability and elasticity of the cloud, which were never available with appliances.
|
||||
|
||||
With the provider running the security stack, IT would be freed from its security burden. Security services would always be current without the operational overhead of appliances or investing in specialized security skills. Inspecting all enterprise traffic by one platform means IT needs only one set of security policies to protect all users. Overall, security is made simpler, and mobile users and cloud resources no longer need to remain second-class citizens.
|
||||
|
||||
#### Run
|
||||
|
||||
Without deploying tons of specialized appliances, the network would be much easier to run and manage. A single pane of glass would give the IT manager end-to-end visibility across networking and security domains without the myriads of sensors, agents, normalization tools, and more needed today for that kind of capability.
|
||||
|
||||
### One Architecture, Many Benefits
|
||||
|
||||
Such an approach addresses the gamut of networking challenges facing today’s enterprises. Connectivity costs would be slashed. Latency would rival those of global MPLS but with far better throughput thanks to built-in network optimization, which would be available inside — and outside — sites. Security would be pervasive, easy to maintain, and effective.
|
||||
|
||||
This architecture isn’t just a pipe dream. Hundreds of companies across the globe today realize these benefits every day by relying on such a platform from [Cato Networks][10]. It’s a secure, global, managed SD-WAN service powered by the scalability, self-service, and agility of the cloud.
|
||||
|
||||
### The Time is Now
|
||||
|
||||
WAN transformation is a rare opportunity for IT leaders to profoundly impact the business’ ability to do business better tomorrow and for decades to come. SD-WAN is a piece of that vision, but only a piece. Addressing the entire network challenge (not just a part of it) to accommodate the needs you can anticipate — and the ones you can’t -- will go a long way towards measuring the effectiveness of your IT leadership.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3430638/wan-transformation-it-s-more-than-sd-wan.html
|
||||
|
||||
作者:[Cato Networks][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Matt-Conran/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2019/08/istock-1127447341-100807620-large.jpg
|
||||
[2]: https://www.catonetworks.com/glossary-use-cases/sd-wan?utm_source=IDG&utm_campaign=IDG
|
||||
[3]: https://arstechnica.com/information-technology/2019/07/facebook-cloudflare-microsoft-and-twitter-suffer-outages/
|
||||
[4]: https://www.catonetworks.com/blog/the-internet-is-broken-heres-why?utm_source=IDG&utm_campaign=IDG
|
||||
[5]: https://images.idgesg.net/images/article/2019/08/capture-100807619-large.jpg
|
||||
[6]: https://www.topsdwanvendors.com?utm_source=IDG&utm_campaign=IDG
|
||||
[7]: https://www.catonetworks.com/glossary-use-cases/secure-direct-internet-access?utm_source=IDG&utm_campaign=IDG
|
||||
[8]: https://www.catonetworks.com/blog/the-pains-and-problems-of-nfv?utm_source=IDG&utm_campaign=IDG
|
||||
[9]: https://www.gartner.com/en/documents/3947237/hype-cycle-for-enterprise-networking-2019
|
||||
[10]: https://www.catonetworks.com?utm_source=IDG&utm_campaign=IDG
|
@ -0,0 +1,64 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Extreme's acquisitions have prepped it to better battle Cisco, Arista, HPE, others)
|
||||
[#]: via: (https://www.networkworld.com/article/3432173/extremes-acquisitions-have-prepped-it-to-better-battle-cisco-arista-hpe-others.html)
|
||||
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
|
||||
|
||||
Extreme's acquisitions have prepped it to better battle Cisco, Arista, HPE, others
|
||||
======
|
||||
Extreme has bought cloud, SD-WAN and data center technologies that make it more prepared to take on its toughest competitors.
|
||||
Extreme Networks has in recent months restyled the company with data-center networking technology acquisitions and upgrades, but now comes the hard part – executing with enterprise customers and effectively competing with the likes of Cisco, VMware, Arista, Juniper, HPE and others.
|
||||
|
||||
The company’s latest and perhaps most significant long-term move was closing the [acquisition of wireless-networking vendor Aerohive][1] for about $210 million. The deal brings Extreme Aerohive’s wireless-networking technology – including its WiFi 6 gear, SD-WAN software and cloud-management services.
|
||||
|
||||
**More about edge networking**
|
||||
|
||||
* [How edge networking and IoT will reshape data centers][2]
|
||||
* [Edge computing best practices][3]
|
||||
* [How edge computing can help secure the IoT][4]
|
||||
|
||||
|
||||
|
||||
With the Aerohive technology, Extreme says customers and partners will be able to mix and match a broader array of software, hardware, and services to create networks that support their unique needs, and that can be managed and automated from the enterprise edge to the cloud.
|
||||
|
||||
The Aerohive buy is just the latest in a string of acquisitions that have reshaped the company. In the past few years the company has acquired networking and data-center technology from Avaya and Brocade, and it bought wireless player Zebra Technologies in 2016 for $55 million.
|
||||
|
||||
While it has been a battle to integrate and get solid sales footing for those acquisitions – particularly Brocade and Avaya, the company says those challenges are behind it and that the Aerohive integration will be much smoother.
|
||||
|
||||
“After scaling Extreme’s business to $1B in revenue [for FY 2019, which ended in June] and expanding our portfolio to include end-to-end enterprise networking solutions, we are now taking the next step to transform our business to add sustainable, subscription-oriented cloud-based solutions that will enable us to drive recurring revenue and improved cash-flow generation,” said Extreme CEO Ed Meyercord at the firm’s [FY 19 financial analysts][5] call.
|
||||
|
||||
The strategy to move more toward a software-oriented, cloud-based revenue generation and technology development is brand new for Extreme. The company says it expects to generate as much as 30 percent of revenues from recurring charges in the near future. The tactic was enabled in large part by the Aerohive buy, which doubled Extreme’s customer based to 60,000 and its sales partners to 11,000 and whose revenues are recurring and cloud-based. The acquisition also created the number-three enterprise Wireless LAN company behind Cisco and HPE/Aruba.
|
||||
|
||||
“We are going to take this Aerohive system and expand across our entire portfolio and use it to deliver common, simplified software with feature packages for on-premises or in-cloud based on customers' use case,” added Norman Rice, Extreme’s Chief Marketing, Development and Product Operations Officer. “We have never really been in any cloud conversations before so for us this will be a major add.”
|
||||
|
||||
Indeed, the Aerohive move is key for the company’s future, analysts say.
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3432173/extremes-acquisitions-have-prepped-it-to-better-battle-cisco-arista-hpe-others.html
|
||||
|
||||
作者:[Michael Cooney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Michael-Cooney/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.networkworld.com/article/3405440/extreme-targets-cloud-services-sd-wan-wifi-6-with-210m-aerohive-grab.html
|
||||
[2]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
|
||||
[3]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
|
||||
[4]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
|
||||
[5]: https://seekingalpha.com/article/4279527-extreme-networks-inc-extr-ceo-ed-meyercord-q4-2019-results-earnings-call-transcript
|
||||
[6]: javascript://
|
||||
[7]: https://www.networkworld.com/learn-about-insider/
|
@ -0,0 +1,71 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Nvidia rises to the need for natural language processing)
[#]: via: (https://www.networkworld.com/article/3432203/nvidia-rises-to-the-need-for-natural-language-processing.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Nvidia rises to the need for natural language processing
======
As the demand for natural language processing grows for chatbots and AI-powered interactions, more companies will need systems that can provide it. Nvidia says its platform can handle it.
![andy.brandon50 \(CC BY-SA 2.0\)][1]

Nvidia is boasting of a breakthrough in conversational natural language processing (NLP) training and inference, enabling more complex interchanges between customers and chatbots with immediate responses.

The need for such technology is expected to grow: digital voice assistants alone are expected to climb from 2.5 billion to 8 billion within the next four years, according to Juniper Research, while Gartner predicts that by 2021, 15% of all customer service interactions will be handled completely by AI, an increase of 400% from 2017.

The company said its DGX-2 AI platform trained the BERT-Large AI language model in less than an hour and performed AI inference in just over 2 milliseconds, making it possible “for developers to use state-of-the-art language understanding for large-scale applications.”

**[ Also read: [What is quantum computing (and why enterprises should care)][2] ]**

BERT, or Bidirectional Encoder Representations from Transformers, is a Google-developed AI language model that many developers say beats human accuracy in some performance evaluations. It’s all discussed [here][3].

### Nvidia sets natural language processing records

All told, Nvidia is claiming three NLP records:

**1\. Training:** Running the largest version of the BERT language model, an Nvidia DGX SuperPOD with 92 Nvidia DGX-2H systems running 1,472 V100 GPUs cut training from several days to 53 minutes. A single DGX-2 system, which is about the size of a tower PC, trained BERT-Large in 2.8 days.

“The quicker we can train a model, the more models we can train, the more we learn about the problem, and the better the results get,” said Bryan Catanzaro, vice president of applied deep learning research, in a statement.

**2\. Inference:** Using Nvidia T4 GPUs on its TensorRT deep learning inference platform, Nvidia performed inference on the BERT-Base SQuAD dataset in 2.2 milliseconds, well under the 10-millisecond processing threshold for many real-time applications, and far ahead of the 40 milliseconds measured with highly optimized CPU code.

**3\. Model:** Nvidia said its new custom model, called Megatron, has 8.3 billion parameters, making it 24 times larger than BERT-Large and the world's largest language model based on Transformers, the building block used for BERT and other natural language AI models.

In a move sure to make FOSS advocates happy, Nvidia is also making a ton of source code available via [GitHub][4]:

  * NVIDIA GitHub BERT training code with PyTorch
  * NGC model scripts and check-points for TensorFlow
  * TensorRT optimized BERT Sample on GitHub
  * Faster Transformer: C++ API, TensorRT plugin, and TensorFlow OP
  * MXNet Gluon-NLP with AMP support for BERT (training and inference)
  * TensorRT optimized BERT Jupyter notebook on AI Hub
  * Megatron-LM: PyTorch code for training massive Transformer models

Not that any of this is easily consumed. We’re talking very advanced AI code; very few people will be able to make heads or tails of it. But the gesture is a positive one.

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3432203/nvidia-rises-to-the-need-for-natural-language-processing.html

作者:[Andy Patrizio][a]

选题:[lujun9972][b]

译者:[译者ID](https://github.com/译者ID)

校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/alphabetic_letters_characters_language_by_andybrandon50_cc_by-sa_2-0_1500x1000-100794409-large.jpg
[2]: https://www.networkworld.com/article/3275367/what-s-quantum-computing-and-why-enterprises-need-to-care.html
[3]: https://medium.com/ai-network/state-of-the-art-ai-solutions-1-google-bert-an-ai-model-that-understands-language-better-than-92c74bb64c
[4]: https://github.com/NVIDIA/TensorRT/tree/release/5.1/demo/BERT/
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world
@ -0,0 +1,73 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Get ready for the convergence of IT and OT networking and security)
[#]: via: (https://www.networkworld.com/article/3432132/get-ready-for-the-convergence-of-it-and-ot-networking-and-security.html)
[#]: author: (Linda Musthaler https://www.networkworld.com/author/Linda-Musthaler/)

Get ready for the convergence of IT and OT networking and security
======
Collecting telemetry data from operational networks and passing it to information networks for analysis has its benefits. But this convergence presents big cultural and technology challenges.
![Thinkstock][1]

Most IT networking professionals are so busy with their day-to-day responsibilities that they don’t have time to consider taking on more work. But for companies with an industrial component, there’s an elephant in the room that is clamoring for attention. I’m talking about the increasingly common convergence of IT and operational technology (OT) networking and security.

Traditionally, IT and OT have had very separate roles in an organization. IT is typically tasked with moving data between computers and humans, whereas OT is tasked with moving data between “things,” such as sensors, actuators, smart machines, and other devices to enhance manufacturing and industrial processes. Not only were the roles for IT and OT completely separate, but their technologies and networks were, too.

That’s changing, however, as companies want to collect telemetry data from the OT side to drive analytics and business processes on the IT side. The lines between the two sides are blurring, and this has big implications for IT networking and security teams.

“This convergence of IT and OT systems is absolutely on the increase, and it's especially affecting the industries that are in the business of producing things, whatever those things happen to be,” according to Jeff Hussey, CEO of [Tempered Networks][2], which is working to help bridge the gap between the two. “There are devices on the OT side that are increasingly networked but without any security to those networks. Their operators historically relied on an air gap between the networks of devices, but those gaps no longer exist. The complexity of the environment and the expansive attack surface that is created as a result of connecting all of these devices to corporate networks massively increases the tasks needed to secure even the traditional networks, much less the expanded converged networks.”

**[ Also read: [Is your enterprise software committing security malpractice?][3] | Get regularly scheduled insights: [Sign up for Network World newsletters][4] ]**

Hussey is well versed on the cultural and technology issues in this arena. When asked if IT and OT people are working together to integrate their networks, he says, “That would be ideal, but it’s not really what we see in the marketplace. Typically, we see some acrimony between these two groups.”

Hussey explains that the groups move at different paces.

“The OT groups think in terms of 10-plus-year cycles, whereas the IT groups think in terms of three-plus-year cycles,” he says. “There's a lot more change and iteration in IT environments than there is in OT environments, which are traditionally extremely static. But now companies want to bring telemetry data that is produced by OT devices back to some workload in a data center or in a cloud. That forces a requirement for secure connectivity because of corporate governance or regulatory requirements, and this is when we most often see the two groups clash.”

**[ [Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][5] ]**

Based on the situations Hussey has observed so far, the onus to connect and secure the disparate networks falls to the IT side of the house. This is a big challenge because the tools that have traditionally been used for security in IT environments aren’t necessarily appropriate or applicable in OT environments. IT and OT systems have very different protocols and operating systems. It’s not practical to try to create network segmentation using firewall rules, access control lists, VLANs, or VPNs because those things can’t scale to the workloads presented in OT environments.

### OT practices create IT security concerns

Steve Fey, CEO of [Totem Building Cybersecurity][6], concurs with Hussey and points out another significant issue in trying to integrate the networking and security aspects of IT and OT systems. In the OT world, it’s often the device vendors or their local contractors who manage and maintain all aspects of the device, typically through remote access. These vendors even install the remote access capabilities and set up the users. “This is completely opposite to how it should be done from a cybersecurity policy perspective,” says Fey. And yet, it’s common today in many industrial environments.

Fey’s company is in the building controls industry, which automates control of everything from elevators and HVAC systems to lighting and life safety systems in commercial buildings.

“The building controls industry, in particular, is one that's characterized by a completely different buying and decision-making culture than in enterprise IT. Everything from how the systems are engineered, purchased, installed, and supported is very different than the equivalent world of enterprise IT. Even the suppliers are largely different,” says Fey. “This is another aspect of the cultural challenge between IT and OT teams. They are two worlds that are having to figure each other out because of the cyber threats that pose a risk to these control systems.”

Fey says major corporate entities are just waking up to the reality of this massive threat surface, whether it’s in their buildings or their manufacturing processes.

“There’s a dire need to overcome decades of installed OT systems that have been incorrectly configured and incorrectly operated without the security policies and safeguards that are normal to enterprise IT. But the toolsets for these environments are incompatible, and the cultural differences are great,” he says.

Totem’s goal is to bridge this gap with a specific focus on cyber and to provide a toolset that is recognizable to the enterprise IT world.

Both Hussey and Fey say it’s likely that IT groups will be charged with leading the convergence of IT and OT networks, but they must include their OT counterparts in the efforts. There are big cultural and technical gaps to bridge to deliver the results that industrial companies are hoping to achieve.

Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3432132/get-ready-for-the-convergence-of-it-and-ot-networking-and-security.html

作者:[Linda Musthaler][a]

选题:[lujun9972][b]

译者:[译者ID](https://github.com/译者ID)

校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Linda-Musthaler/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/02/abstract_networks_thinkstock_881604844-100749945-large.jpg
[2]: https://www.temperednetworks.com/
[3]: https://www.networkworld.com/article/3429559/is-your-enterprise-software-committing-security-malpractice.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[6]: https://totembuildings.com/
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world
@ -0,0 +1,73 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Powering edge data centers: Blue energy might be the perfect solution)
[#]: via: (https://www.networkworld.com/article/3432116/powering-edge-data-centers-blue-energy-might-be-the-perfect-solution.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)

Powering edge data centers: Blue energy might be the perfect solution
======
Blue energy, created by mixing seawater and fresh water, could be the perfect solution for providing cheap and eco-friendly power to edge data centers.
![Benkrut / Getty Images][1]

About a cubic yard of freshwater mixed with seawater provides almost two-thirds of a kilowatt-hour of energy. And scientists say a revolutionary new battery chemistry based on that theme could power edge data centers.

The idea is to harness power from wastewater treatment plants located along coasts, which happen to be ideal locations for edge data centers and are heavy electricity users.

“Places where salty ocean water and freshwater mingle could provide a massive source of renewable power,” [writes Rob Jordan in a Stanford University article][2].

**[ Read also: [Data centers may soon recycle heat into electricity][3] | Get regularly scheduled insights: [Sign up for Network World newsletters][4] ]**

The chemical process harnesses a mix of sodium and chloride ions. They’re squirted from battery electrodes into a solution and cause current to flow. That initial infusion is then followed by seawater being exchanged with wastewater effluent. It reverses the current flow and creates the energy, the researchers explain.

“Energy is recovered during both the freshwater and seawater flushes, with no upfront energy investment and no need for charging,” the article says.

In other words, the battery is continually recharging and discharging with no added input—such as electricity from the grid. The Stanford researchers say the technology could be ideal for making coastal wastewater plants energy independent.

### Coastal edge data centers

Edge data centers, which also take up locations on the coasts, could benefit as well. Those data centers are already exploring kinetic wave energy to harvest power, as well as using seawater to cool data centers.

I’ve written about [Ocean Energy’s offshore power platform using kinetic wave energy][5]. That 125-foot-long wave converter not only uses ocean water for power generation, but its sea-environment implementation also means the same body of water can be used for cooling.

“Ocean cooling and ocean energy in the one device” is a seductive solution, the head of that company said at the time.

[Microsoft, too, has an underwater data center][6] that proffers the same kinds of power benefits.

Locating data centers on coasts or in the sea rather than inland doesn’t just provide virtually free power and cooling, plus the associated emissions advantages. The coasts tend to be where the populace is, and locating data center operations near where the actual calculations, data stores, and other activities need to take place fits neatly into the concept of low-latency edge computing.

Other advantages to placing a data center actually in the ocean, although close to land, include the fact that there’s no rent in open waters. And in international waters, one could imagine regulatory advantages, since no country’s officials are hovering around.

However, by placing the installation on terra firma (as the freshwater-seawater mix power solution would be designed for) but close to water at a coast, one can use the necessary seawater and gain an advantage of ease of access to the real estate, connections, and so on.

The Stanford University engineers, in their seawater/wastewater mix tests, flushed a battery prototype 180 times with wastewater from the Palo Alto Regional Water Quality Control Plant and seawater from nearby Half Moon Bay. The group says it obtained 97% “capturing [of] the salinity gradient energy,” or the blue energy, as it’s sometimes called.

“Surplus power production could even be diverted to nearby industrial operations,” the article continues.

“Tapping blue energy at the global scale: rivers running into the ocean” is yet to be solved. “But it is a good starting point,” says Stanford scholar Kristian Dubrawski in the article.

Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3432116/powering-edge-data-centers-blue-energy-might-be-the-perfect-solution.html

作者:[Patrick Nelson][a]

选题:[lujun9972][b]

译者:[译者ID](https://github.com/译者ID)

校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/08/uk_united_kingdom_northern_ireland_belfast_river_lagan_waterfront_architecture_by_benkrut_gettyimages-530205844_2400x1600-100807934-large.jpg
[2]: https://news.stanford.edu/2019/07/29/generating-energy-wastewater/
[3]: https://www.networkworld.com/article/3410578/data-centers-may-soon-recycle-heat-into-electricity.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://www.networkworld.com/article/3314597/wave-energy-to-power-undersea-data-centers.html
[6]: https://www.networkworld.com/article/3283332/microsoft-launches-undersea-free-cooling-data-center.html
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world
@ -1,4 +1,3 @@

leemeans translating
7 deadly sins of documentation
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead-cat-writing-king-typewriter-doc.png?itok=afaEoOqc)
@ -1,5 +1,3 @@

bestony is translating.

Create a Clean-Code App with Kotlin Coroutines and Android Architecture Components
============================================================
@ -1,215 +0,0 @@

Use LVM to Upgrade Fedora
======

![](https://fedoramagazine.org/wp-content/uploads/2018/06/lvm-upgrade-816x345.jpg)

Most users find it simple to upgrade [from one Fedora release to the next][1] with the standard process. However, there are inevitably many special cases that Fedora can also handle. This article shows one way to upgrade using DNF along with Logical Volume Management (LVM) to keep a bootable backup in case of problems. This example upgrades a Fedora 26 system to Fedora 28.

The process shown here is more complex than the standard upgrade process. You should have a strong grasp of how LVM works before you use this process. Without proper skill and care, you could **lose data and/or be forced to reinstall your system!** If you don’t know what you’re doing, it is **highly recommended** you stick to the supported upgrade methods only.

### Preparing the system

Before you start, ensure your existing system is fully updated.
```
$ sudo dnf update
$ sudo systemctl reboot # or GUI equivalent
```

Check that your root filesystem is mounted via LVM.
```
$ df /
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg_sdg-f26 20511312 14879816 4566536 77% /

$ sudo lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
f22 vg_sdg -wi-ao---- 15.00g
f24_64 vg_sdg -wi-ao---- 20.00g
f26 vg_sdg -wi-ao---- 20.00g
home vg_sdg -wi-ao---- 100.00g
mockcache vg_sdg -wi-ao---- 10.00g
swap vg_sdg -wi-ao---- 4.00g
test vg_sdg -wi-a----- 1.00g
vg_vm vg_sdg -wi-ao---- 20.00g
```

If you used the defaults when you installed Fedora, you may find the root filesystem is mounted on an LV named root. The name of your volume group will likely be different. Look at the total size of the root volume. In the example, the root filesystem is named f26 and is 20G in size.

Next, ensure you have enough free space in LVM.
```
$ sudo vgs
VG #PV #LV #SN Attr VSize VFree
vg_sdg 1 8 0 wz--n- 232.39g 42.39g
```

This system has enough free space to allocate a 20G logical volume for the upgraded Fedora 28 root. If you used the default install, there will be no free space in LVM. Managing LVM in general is beyond the scope of this article, but here are some possibilities:

1\. /home on its own LV, and lots of free space in /home

You can log out of the GUI and switch to a text console, logging in as root. Then you can unmount /home and use lvreduce -r to resize and reallocate the /home LV. You might also boot from a Live image (so as not to use /home) and use the gparted GUI utility.

2\. Most of the LVM space allocated to the root LV, with lots of free space in the filesystem

You can boot from a Live image and use the gparted GUI utility to reduce the root LV. Consider moving /home to its own filesystem at this point, but that is beyond the scope of this article.

3\. Most of the file systems are full, but you have LVs you no longer need

You can delete the unneeded LVs, freeing space in the volume group for this operation.
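
To make scenario 1 concrete, here is a hedged sketch of the commands involved. The volume group name vg_sdg, the home LV, and the 20G figure are assumptions taken from the running example; the snippet only prints the commands so you can review them before running anything as root:

```shell
# Dry-run sketch for scenario 1 (assumed names: VG vg_sdg, LV home, 20G to free).
# Commands are echoed, not executed -- run them yourself once you are sure.
run() { echo "+ $*"; }

run umount /home                     # from a text console, logged in as root
run lvreduce -r -L -20G vg_sdg/home  # -r shrinks the filesystem and the LV together
run mount /home
```

Substitute your own VG and LV names before using any of these for real; lvreduce with the wrong size or target can destroy the filesystem.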

### Creating a backup

First, allocate a new LV for the upgraded system. Make sure to use the correct name for your system’s volume group (VG). In this example it’s vg_sdg.
```
$ sudo lvcreate -L20G -n f28 vg_sdg
Logical volume "f28" created.
```

Next, make a snapshot of your current root filesystem. This example creates a snapshot volume named f26_s.
```
$ sync
$ sudo lvcreate -s -L1G -n f26_s vg_sdg/f26
Using default stripesize 64.00 KiB.
Logical volume "f26_s" created.
```

The snapshot can now be copied to the new LV. **Make sure you have the destination correct** when you substitute your own volume names. If you are not careful you could delete data irrevocably. Also, make sure you are copying from the snapshot of your root, **not** your live root.
```
$ sudo dd if=/dev/vg_sdg/f26_s of=/dev/vg_sdg/f28 bs=256k
81920+0 records in
81920+0 records out
21474836480 bytes (21 GB, 20 GiB) copied, 149.179 s, 144 MB/s
```

Give the new filesystem copy a unique UUID. This is not strictly necessary, but UUIDs are supposed to be unique, so this avoids future confusion. Here is how for an ext4 root filesystem:
```
$ sudo e2fsck -f /dev/vg_sdg/f28
$ sudo tune2fs -U random /dev/vg_sdg/f28
```

Then remove the snapshot volume, which is no longer needed:
```
$ sudo lvremove vg_sdg/f26_s
Do you really want to remove active logical volume vg_sdg/f26_s? [y/n]: y
Logical volume "f26_s" successfully removed
```

You may wish to make a snapshot of /home at this point if you have it mounted separately. Sometimes, upgraded applications make changes that are incompatible with a much older Fedora version. If needed, edit the /etc/fstab file on the **old** root filesystem to mount the snapshot on /home. Remember that when the snapshot is full, it will disappear! You may also wish to make a normal backup of /home for good measure.

### Configuring to use the new root

First, mount the new LV and back up your existing GRUB settings:
```
$ sudo mkdir /mnt/f28
$ sudo mount /dev/vg_sdg/f28 /mnt/f28
$ sudo mkdir /mnt/f28/f26
$ cd /boot/grub2
$ sudo cp -p grub.cfg grub.cfg.old
```

Edit grub.cfg and add this before the first menuentry, unless you already have it:
```
menuentry 'Old boot menu' {
        configfile /grub2/grub.cfg.old
}
```

Edit grub.cfg and change the default menuentry to activate and mount the new root filesystem. Change this line:
```
linux16 /vmlinuz-4.16.11-100.fc26.x86_64 root=/dev/mapper/vg_sdg-f26 ro rd.lvm.lv=vg_sdg/f26 rd.lvm.lv=vg_sdg/swap rhgb quiet LANG=en_US.UTF-8
```

So that it reads like this. Remember to use the correct names for your system’s VG and LV entries!
```
linux16 /vmlinuz-4.16.11-100.fc26.x86_64 root=/dev/mapper/vg_sdg-f28 ro rd.lvm.lv=vg_sdg/f28 rd.lvm.lv=vg_sdg/swap rhgb quiet LANG=en_US.UTF-8
```

Edit /mnt/f28/etc/default/grub and change the default root LV activated at boot:
```
GRUB_CMDLINE_LINUX="rd.lvm.lv=vg_sdg/f28 rd.lvm.lv=vg_sdg/swap rhgb quiet"
```

Edit /mnt/f28/etc/fstab to change the mounting of the root filesystem from the old volume:
```
/dev/mapper/vg_sdg-f26 / ext4 defaults 1 1
```

to the new one:
```
/dev/mapper/vg_sdg-f28 / ext4 defaults 1 1
```

Then, add a read-only mount of the old system for reference purposes:
```
/dev/mapper/vg_sdg-f26 /f26 ext4 ro,nodev,noexec 0 0
```

If your root filesystem is mounted by UUID, you will need to change this, since the copy was given a new UUID above. Here is how if your root is an ext4 filesystem:
```
$ sudo e2label /dev/vg_sdg/f28 F28
```

Now edit /mnt/f28/etc/fstab to use the label. Change the mount line for the root filesystem so it reads like this:
```
LABEL=F28 / ext4 defaults 1 1
```

### Rebooting and upgrading

Reboot, and your system will be using the new root filesystem. It is still Fedora 26, but a copy with a new LV name, ready for dnf system-upgrade! If anything goes wrong, use the Old boot menu to boot back into your working system, which this procedure avoids touching.
```
$ sudo systemctl reboot # or GUI equivalent
...
$ df / /f26
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg_sdg-f28 20511312 14903196 4543156 77% /
/dev/mapper/vg_sdg-f26 20511312 14866412 4579940 77% /f26
```

You may wish to verify that using the Old boot menu does indeed get you back to having root mounted on the old root filesystem.

Now follow the instructions at [this wiki page][2]. If anything goes wrong with the system upgrade, you have a working system to boot back into.

### Future ideas

The steps to create a new LV and copy a snapshot of root to it could be automated with a generic script. It needs only the name of the new LV, since the size and device of the existing root are easy to determine. For example, one would be able to type this command:
```
$ sudo copyfs / f28
```

Supplying the mount-point to copy makes it clearer what is happening, and copying other mount-points like /home could be useful.
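
As a rough illustration of that idea, here is a hedged sketch of what such a hypothetical copyfs helper might look like. The name copyfs comes from the article's example; the naive parsing of /dev/mapper names (it breaks on VG or LV names containing hyphens) and the fixed 1G snapshot size are assumptions. It only prints the commands it would run:

```shell
#!/bin/sh
# Hypothetical "copyfs" sketch: print the LVM commands that would copy the
# filesystem mounted at $1 to a new LV named $2. Nothing is executed.
copyfs() {
    src=$(df --output=source "$1" | tail -n1)        # e.g. /dev/mapper/vg_sdg-f26
    size=$(df --output=size -BG "$1" | tail -n1 | tr -d ' ')
    vg=${src#/dev/mapper/}; vg=${vg%%-*}             # naive: breaks on '-' in names
    lv=${src##*-}
    echo "lvcreate -L$size -n $2 $vg"                # new LV, same size as source
    echo "lvcreate -s -L1G -n ${lv}_s $vg/$lv"       # snapshot of the live source
    echo "dd if=/dev/$vg/${lv}_s of=/dev/$vg/$2 bs=256k"
    echo "lvremove $vg/${lv}_s"                      # drop the snapshot afterwards
}
```

Invoked as `copyfs / f28` on the example system, the printed commands correspond to the backup steps shown earlier; a real script would execute them and add error checking.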

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/use-lvm-upgrade-fedora/

作者:[Stuart D Gathman][a]

选题:[lujun9972](https://github.com/lujun9972)

译者:[译者ID](https://github.com/译者ID)

校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://fedoramagazine.org/author/sdgathman/
[1]:https://fedoramagazine.org/upgrading-fedora-27-fedora-28/
[2]:https://fedoraproject.org/wiki/DNF_system_upgrade
@ -1,257 +0,0 @@

Exploring the Linux kernel: The secrets of Kconfig/kbuild
======
Dive into understanding how the Linux config/build system works.

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/compass_map_explore_adventure.jpg?itok=ecCoVTrZ)

The Linux kernel config/build system, also known as Kconfig/kbuild, has been around for a long time, ever since the Linux kernel code migrated to Git. As supporting infrastructure, however, it is seldom in the spotlight; even kernel developers who use it in their daily work never really think about it.

To explore how the Linux kernel is compiled, this article will dive into the Kconfig/kbuild internal process, explain how the .config file and the vmlinux/bzImage files are produced, and introduce a smart trick for dependency tracking.

### Kconfig

The first step in building a kernel is always configuration. Kconfig helps make the Linux kernel highly modular and customizable. Kconfig offers the user many config targets:

| Target | Description |
| --- | --- |
| config | Update current config utilizing a line-oriented program |
| nconfig | Update current config utilizing a ncurses menu-based program |
| menuconfig | Update current config utilizing a menu-based program |
| xconfig | Update current config utilizing a Qt-based frontend |
| gconfig | Update current config utilizing a GTK+ based frontend |
| oldconfig | Update current config utilizing a provided .config as base |
| localmodconfig | Update current config disabling modules not loaded |
| localyesconfig | Update current config converting local mods to core |
| defconfig | New config with default from Arch-supplied defconfig |
| savedefconfig | Save current config as ./defconfig (minimal config) |
| allnoconfig | New config where all options are answered with 'no' |
| allyesconfig | New config where all options are accepted with 'yes' |
| allmodconfig | New config selecting modules when possible |
| alldefconfig | New config with all symbols set to default |
| randconfig | New config with a random answer to all options |
| listnewconfig | List new options |
| olddefconfig | Same as oldconfig but sets new symbols to their default value without prompting |
| kvmconfig | Enable additional options for KVM guest kernel support |
| xenconfig | Enable additional options for xen dom0 and guest kernel support |
| tinyconfig | Configure the tiniest possible kernel |

I think **menuconfig** is the most popular of these targets. The targets are processed by different host programs, which are provided by the kernel and built during kernel building. Some targets have a GUI (for the user's convenience) while most don't. Kconfig-related tools and source code reside mainly under **scripts/kconfig/** in the kernel source. As we can see from **scripts/kconfig/Makefile**, there are several host programs, including **conf**, **mconf**, and **nconf**. Except for **conf**, each of them is responsible for one of the GUI-based config targets, so **conf** deals with most of them.

Logically, Kconfig's infrastructure has two parts: one implements a [new language][1] to define the configuration items (see the Kconfig files under the kernel source), and the other parses the Kconfig language and deals with configuration actions.
|
||||
|
||||
Most of the config targets have roughly the same internal process (shown below):
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/kconfig_process.png)
|
||||
|
||||
Note that all configuration items have a default value.
|
||||
|
||||
The first step reads the Kconfig file under source root to construct an initial configuration database; then it updates the initial database by reading an existing configuration file according to this priority:
|
||||
|
||||
> .config
|
||||
> /lib/modules/$(shell,uname -r)/.config
|
||||
> /etc/kernel-config
|
||||
> /boot/config-$(shell,uname -r)
|
||||
> ARCH_DEFCONFIG
|
||||
> arch/$(ARCH)/defconfig
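That priority search amounts to taking the first existing file from the list. Here is a hypothetical Python sketch for illustration only (the kernel implements this in its **conf** host program, and **ARCH_DEFCONFIG** is resolved from the arch makefile):

```python
import os

def find_base_config(kernel_release, arch, exists=os.path.isfile):
    """Return the first existing config file in Kconfig's priority order."""
    candidates = [
        ".config",
        f"/lib/modules/{kernel_release}/.config",
        "/etc/kernel-config",
        f"/boot/config-{kernel_release}",
        # ARCH_DEFCONFIG would be expanded here from the arch makefile
        f"arch/{arch}/defconfig",
    ]
    for path in candidates:
        if exists(path):
            return path
    return None  # no base config found; fall back to built-in defaults
```

Passing an `exists` predicate keeps the lookup testable without touching the real filesystem.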
|
||||
|
||||
If you are doing GUI-based configuration via **menuconfig** or command-line-based configuration via **oldconfig** , the database is updated according to your customization. Finally, the configuration database is dumped into the .config file.
|
||||
|
||||
But the .config file is not the final fodder for kernel building; this is why the **syncconfig** target exists. **syncconfig** used to be a config target called **silentoldconfig** , but it doesn't do what the old name says, so it was renamed. Also, because it is for internal use (not for users), it was dropped from the list.
|
||||
|
||||
Here is an illustration of what **syncconfig** does:
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/syncconfig.png)
|
||||
|
||||
**syncconfig** takes .config as input and outputs many other files, which fall into three categories:
|
||||
|
||||
* **auto.conf & tristate.conf** are used for makefile text processing. For example, you may see statements like this in a component's makefile:
|
||||
|
||||
```
|
||||
obj-$(CONFIG_GENERIC_CALIBRATE_DELAY) += calibrate.o
|
||||
```
|
||||
|
||||
* **autoconf.h** is used in C-language source files.
|
||||
|
||||
* Empty header files under **include/config/** are used for configuration-dependency tracking during kbuild, which is explained below.
|
||||
|
||||
|
||||
|
||||
|
||||
After configuration, we will know which files and code pieces are not compiled.
|
||||
|
||||
### kbuild
|
||||
|
||||
Component-wise building, called _recursive make_, is a common way for GNU `make` to manage a large project. Kbuild is a good example of recursive make. Source files are divided into different modules/components, each managed by its own makefile. When you start building, a top makefile invokes each component's makefile in the proper order, builds the components, and collects them into the final executable.
|
||||
|
||||
Kbuild refers to different kinds of makefiles:
|
||||
|
||||
* **Makefile** is the top makefile located in source root.
|
||||
* **.config** is the kernel configuration file.
|
||||
* **arch/$(ARCH)/Makefile** is the arch makefile, which is the supplement to the top makefile.
|
||||
* **scripts/Makefile.*** describes common rules for all kbuild makefiles.
|
||||
* Finally, there are about 500 **kbuild makefiles**.
|
||||
|
||||
|
||||
|
||||
The top makefile includes the arch makefile, reads the .config file, descends into subdirectories, invokes **make** on each component's makefile with the help of routines defined in **scripts/Makefile.*** , builds up each intermediate object, and links all the intermediate objects into vmlinux. Kernel document [Documentation/kbuild/makefiles.txt][2] describes all aspects of these makefiles.
|
||||
|
||||
As an example, let's look at how vmlinux is produced on x86-64:
|
||||
|
||||
![vmlinux overview][4]
|
||||
|
||||
(The illustration is based on Richard Y. Steven's [blog][5]. It was updated and is used with the author's permission.)
|
||||
|
||||
All the **.o** files that go into vmlinux first go into their own **built-in.a** , which is indicated via variables **KBUILD_VMLINUX_INIT** , **KBUILD_VMLINUX_MAIN** , **KBUILD_VMLINUX_LIBS** , then are collected into the vmlinux file.
|
||||
|
||||
Take a look at how recursive make is implemented in the Linux kernel, with the help of simplified makefile code:
|
||||
|
||||
```
|
||||
# In top Makefile
|
||||
vmlinux: scripts/link-vmlinux.sh $(vmlinux-deps)
|
||||
+$(call if_changed,link-vmlinux)
|
||||
|
||||
# Variable assignments
|
||||
vmlinux-deps := $(KBUILD_LDS) $(KBUILD_VMLINUX_INIT) $(KBUILD_VMLINUX_MAIN) $(KBUILD_VMLINUX_LIBS)
|
||||
|
||||
export KBUILD_VMLINUX_INIT := $(head-y) $(init-y)
|
||||
export KBUILD_VMLINUX_MAIN := $(core-y) $(libs-y2) $(drivers-y) $(net-y) $(virt-y)
|
||||
export KBUILD_VMLINUX_LIBS := $(libs-y1)
|
||||
export KBUILD_LDS := arch/$(SRCARCH)/kernel/vmlinux.lds
|
||||
|
||||
init-y := init/
|
||||
drivers-y := drivers/ sound/ firmware/
|
||||
net-y := net/
|
||||
libs-y := lib/
|
||||
core-y := usr/
|
||||
virt-y := virt/
|
||||
|
||||
# Transform to corresponding built-in.a
|
||||
init-y := $(patsubst %/, %/built-in.a, $(init-y))
|
||||
core-y := $(patsubst %/, %/built-in.a, $(core-y))
|
||||
drivers-y := $(patsubst %/, %/built-in.a, $(drivers-y))
|
||||
net-y := $(patsubst %/, %/built-in.a, $(net-y))
|
||||
libs-y1 := $(patsubst %/, %/lib.a, $(libs-y))
|
||||
libs-y2 := $(patsubst %/, %/built-in.a, $(filter-out %.a, $(libs-y)))
|
||||
virt-y := $(patsubst %/, %/built-in.a, $(virt-y))
|
||||
|
||||
# Setup the dependency. vmlinux-deps are all intermediate objects, vmlinux-dirs
|
||||
# are phony targets, so every time comes to this rule, the recipe of vmlinux-dirs
|
||||
# will be executed. Refer "4.6 Phony Targets" of `info make`
|
||||
$(sort $(vmlinux-deps)): $(vmlinux-dirs) ;
|
||||
|
||||
# Variable vmlinux-dirs is the directory part of each built-in.a
|
||||
vmlinux-dirs := $(patsubst %/,%,$(filter %/, $(init-y) $(init-m) \
|
||||
$(core-y) $(core-m) $(drivers-y) $(drivers-m) \
|
||||
$(net-y) $(net-m) $(libs-y) $(libs-m) $(virt-y)))
|
||||
|
||||
# The entry of recursive make
|
||||
$(vmlinux-dirs):
|
||||
$(Q)$(MAKE) $(build)=$@ need-builtin=1
|
||||
```
|
||||
|
||||
The recursive make recipe is expanded, for example:
|
||||
|
||||
```
|
||||
make -f scripts/Makefile.build obj=init need-builtin=1
|
||||
```
|
||||
|
||||
This means **make** will go into **scripts/Makefile.build** to continue the work of building each **built-in.a**. With the help of **scripts/link-vmlinux.sh** , the vmlinux file is finally under source root.
|
||||
|
||||
#### Understanding vmlinux vs. bzImage
|
||||
|
||||
Many Linux kernel developers may not be clear about the relationship between vmlinux and bzImage. For example, here is their relationship in x86-64:
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/vmlinux-bzimage.png)
|
||||
|
||||
The source root vmlinux is stripped, compressed, put into **piggy.S** , then linked with other peer objects into **arch/x86/boot/compressed/vmlinux**. Meanwhile, a file called setup.bin is produced under **arch/x86/boot**. There may be an optional third file that has relocation info, depending on the configuration of **CONFIG_X86_NEED_RELOCS**.
|
||||
|
||||
A host program called **build** , provided by the kernel, builds these two (or three) parts into the final bzImage file.
|
||||
|
||||
#### Dependency tracking
|
||||
|
||||
Kbuild tracks three kinds of dependencies:
|
||||
|
||||
  1. All prerequisite files (both **.c** and **.h**)
|
||||
2. **CONFIG_** options used in all prerequisite files
|
||||
3. Command-line dependencies used to compile the target
|
||||
|
||||
|
||||
|
||||
The first one is easy to understand, but what about the second and third? Kernel developers often see code pieces like this:
|
||||
|
||||
```
|
||||
#ifdef CONFIG_SMP
|
||||
__boot_cpu_id = cpu;
|
||||
#endif
|
||||
```
|
||||
|
||||
When **CONFIG_SMP** changes, this piece of code should be recompiled. The command line for compiling a source file also matters, because different command lines may result in different object files.
|
||||
|
||||
When a **.c** file uses a header file via a **#include** directive, you need to write a rule like this:
|
||||
|
||||
```
|
||||
main.o: defs.h
|
||||
recipe...
|
||||
```
|
||||
|
||||
When managing a large project, you need a lot of these kinds of rules; writing them all would be tedious and boring. Fortunately, most modern C compilers can write these rules for you by looking at the **#include** lines in the source file. For the GNU Compiler Collection (GCC), it is just a matter of adding a command-line parameter: **-MD depfile**
|
||||
|
||||
```
|
||||
# In scripts/Makefile.lib
|
||||
c_flags = -Wp,-MD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE) \
|
||||
-include $(srctree)/include/linux/compiler_types.h \
|
||||
$(__c_flags) $(modkern_cflags) \
|
||||
$(basename_flags) $(modname_flags)
|
||||
```
|
||||
|
||||
This would generate a **.d** file with content like:
|
||||
|
||||
```
|
||||
init_task.o: init/init_task.c include/linux/kconfig.h \
|
||||
include/generated/autoconf.h include/linux/init_task.h \
|
||||
include/linux/rcupdate.h include/linux/types.h \
|
||||
...
|
||||
```
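A **.d** file like this is ordinary makefile syntax: one target, then its prerequisites, joined by backslash line continuations. A small, hypothetical parser (not code from the kernel tree) shows the structure **fixdep** has to work with:

```python
def parse_depfile(text):
    """Split a gcc -MD depfile ('target: prereq prereq \\ ...')
    into its target and prerequisite list."""
    joined = text.replace("\\\n", " ")       # join continued lines
    target, _, deps = joined.partition(":")  # split at the first colon
    return target.strip(), deps.split()

sample = ("init_task.o: init/init_task.c include/linux/kconfig.h \\\n"
          " include/generated/autoconf.h include/linux/init_task.h")
target, prereqs = parse_depfile(sample)
```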
|
||||
|
||||
Then the host program **[fixdep][6]** takes care of the other two dependencies by taking the **depfile** and command line as input, then outputting a **.&lt;target&gt;.cmd** file in makefile syntax, which records the command line and all the prerequisites (including the configuration) for a target. It looks like this:
|
||||
|
||||
```
|
||||
# The command line used to compile the target
|
||||
cmd_init/init_task.o := gcc -Wp,-MD,init/.init_task.o.d -nostdinc ...
|
||||
...
|
||||
# The dependency files
|
||||
deps_init/init_task.o := \
|
||||
$(wildcard include/config/posix/timers.h) \
|
||||
$(wildcard include/config/arch/task/struct/on/stack.h) \
|
||||
$(wildcard include/config/thread/info/in/task.h) \
|
||||
...
|
||||
include/uapi/linux/types.h \
|
||||
arch/x86/include/uapi/asm/types.h \
|
||||
include/uapi/asm-generic/types.h \
|
||||
...
|
||||
```
|
||||
|
||||
A **.&lt;target&gt;.cmd** file will be included during recursive make, providing all the dependency info and helping to decide whether to rebuild a target or not.
|
||||
|
||||
The secret behind this is that **fixdep** will parse the **depfile** ( **.d** file), then parse all the dependency files inside, search the text for all the **CONFIG_** strings, convert them to the corresponding empty header file, and add them to the target's prerequisites. Every time the configuration changes, the corresponding empty header file will be updated, too, so kbuild can detect that change and rebuild the target that depends on it. Because the command line is also recorded, it is easy to compare the last and current compiling parameters.
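The name mangling itself is simple: strip the **CONFIG_** prefix, lowercase the rest, and turn underscores into path separators, giving the empty header under **include/config/**. Here is a hypothetical sketch of that step (the real logic lives in **scripts/basic/fixdep.c**):

```python
import re

def config_header(option):
    """Map a CONFIG_ symbol to its empty tracking header path."""
    name = option[len("CONFIG_"):].lower().replace("_", "/")
    return f"include/config/{name}.h"

def config_headers_in(source):
    """Tracking headers for every CONFIG_ string found in a text."""
    return [config_header(m) for m in re.findall(r"CONFIG_[A-Za-z0-9_]+", source)]
```

Applied to the **CONFIG_SMP** snippet earlier, this yields **include/config/smp.h**, which is exactly the prerequisite kbuild touches when that option changes.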
|
||||
|
||||
### Looking ahead
|
||||
|
||||
Kconfig/kbuild remained the same for a long time until the new maintainer, Masahiro Yamada, joined in early 2017, and now kbuild is under active development again. Don't be surprised if you soon see something different from what's in this article.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/10/kbuild-and-kconfig
|
||||
|
||||
作者:[Cao Jin][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/pinocchio
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://github.com/torvalds/linux/blob/master/Documentation/kbuild/kconfig-language.txt
|
||||
[2]: https://www.mjmwired.net/kernel/Documentation/kbuild/makefiles.txt
|
||||
[3]: https://opensource.com/file/411516
|
||||
[4]: https://opensource.com/sites/default/files/uploads/vmlinux_generation_process.png (vmlinux overview)
|
||||
[5]: https://blog.csdn.net/richardysteven/article/details/52502734
|
||||
[6]: https://github.com/torvalds/linux/blob/master/scripts/basic/fixdep.c
|
@ -1,69 +0,0 @@
|
||||
Create animated, scalable vector graphic images with MacSVG
|
||||
======
|
||||
|
||||
Open source SVG: The writing is on the wall
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open_design_paper_plane_2_0.jpg?itok=xKdP-GWE)
|
||||
|
||||
The Neo-Babylonian regent [Belshazzar][1] did not heed [the writing on the wall][2] that magically appeared during his great feast. However, if he had had a laptop and a good internet connection in 539 BC, he might have staved off those pesky Persians by reading the SVG in the browser.
|
||||
|
||||
Animating text and objects on web pages is a great way to build user interest and engagement. There are several ways to achieve this, such as a video embed, an animated GIF, or a slideshow—but you can also use [scalable vector graphics][3] (SVG).
|
||||
|
||||
An SVG image is different from, say, a JPG, because it is scalable without losing its resolution. A vector image is created by points, not dots, so no matter how large it gets, it will not lose its resolution or pixelate. An example of a good use of scalable, static images would be logos on websites.
|
||||
|
||||
### Move it, move it
|
||||
|
||||
You can create SVG images with several drawing programs, including open source [Inkscape][4] and Adobe Illustrator. Getting your images to “do something” requires a bit more effort. Fortunately, there are open source solutions that would get even Belshazzar’s attention.
|
||||
|
||||
[MacSVG][5] is one tool that will get your images moving. You can find the source code on [GitHub][6].
|
||||
|
||||
Developed by Douglas Ward of Conway, Arkansas, MacSVG is an “open source Mac OS app for designing HTML5 SVG art and animation,” according to its [website][5].
|
||||
|
||||
I was interested in using MacSVG to create an animated signature. I admit that I found the process a bit confusing and failed at my first attempts to create an actual animated SVG image.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/macsvg-screen.png)
|
||||
|
||||
It is important to first learn what makes “the writing on the wall” actually write.
|
||||
|
||||
The attribute behind the animated writing is [stroke-dasharray][7]. Breaking the term into three words helps explain what is happening: Stroke refers to the line or stroke you would make with a pen, whether physical or digital. Dash means breaking the stroke down into a series of dashes. Array means producing the whole thing into an array. That’s a simple overview, but it helped me understand what was supposed to happen and why.
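Concretely, if the traced path has total length L, the animation runs stroke-dasharray from "0,L" (one gap covering the whole path, so nothing is visible) to "L,0" (one dash covering it, so it is fully drawn). A hypothetical helper for building the `values` attribute of the `<animate>` element:

```python
def dasharray_animation(path_length):
    """Build the <animate> `values` attribute that draws a path of the
    given total length (in SVG user units) from start to finish."""
    L = round(path_length)
    return f"0,{L};{L},0;"
```

For a path of length 1739, this produces `0,1739;1739,0;`, the kind of value pair that drives the handwriting effect.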
|
||||
|
||||
With MacSVG, you can import a graphic (.PNG) and use the pen tool to trace the path of the writing. I used a cursive representation of my first name. Then it was just a matter of applying the attributes to animate the writing, increase and decrease the thickness of the stroke, change its color, and so on. Once completed, the animated writing was exported as an .SVG file and was ready for use on the web. MacSVG can be used for many different types of SVG animation in addition to handwriting.
|
||||
|
||||
### The writing is on the WordPress
|
||||
|
||||
I was ready to upload and share my SVG example on my [WordPress][8] site, but I discovered that WordPress does not allow for SVG media imports. Fortunately, I found a handy plugin: Benbodhi’s [SVG Support][9] allowed a quick, easy import of my SVG the same way I would import a JPG to my Media Library. I was able to showcase my [writing on the wall][10] to Babylonians everywhere.
|
||||
|
||||
I opened the source code of my SVG in [Brackets][11], and here are the results:
|
||||
|
||||
```
|
||||
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
|
||||
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
|
||||
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:cc="http://web.resource.org/cc/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd" xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape" height="360px" style="zoom: 1;" cursor="default" id="svg_document" width="480px" baseProfile="full" version="1.1" preserveAspectRatio="xMidYMid meet" viewBox="0 0 480 360"><title id="svg_document_title">Path animation with stroke-dasharray</title><desc id="desc1">This example demonstrates the use of a path element, an animate element, and the stroke-dasharray attribute to simulate drawing.</desc><defs id="svg_document_defs"></defs><g id="main_group"></g><path stroke="#004d40" id="path2" stroke-width="9px" d="M86,75 C86,75 75,72 72,61 C69,50 66,37 71,34 C76,31 86,21 92,35 C98,49 95,73 94,82 C93,91 87,105 83,110 C79,115 70,124 71,113 C72,102 67,105 75,97 C83,89 111,74 111,74 C111,74 119,64 119,63 C119,62 110,57 109,58 C108,59 102,65 102,66 C102,67 101,75 107,79 C113,83 118,85 122,81 C126,77 133,78 136,64 C139,50 147,45 146,33 C145,21 136,15 132,24 C128,33 123,40 123,49 C123,58 135,87 135,96 C135,105 139,117 133,120 C127,123 116,127 120,116 C124,105 144,82 144,81 C144,80 158,66 159,58 C160,50 159,48 161,43 C163,38 172,23 166,22 C160,21 155,12 153,23 C151,34 161,68 160,78 C159,88 164,108 163,113 C162,118 165,126 157,128 C149,130 152,109 152,109 C152,109 185,64 185,64 " fill="none" transform=""><animate values="0,1739;1739,0;" attributeType="XML" begin="0; animate1.end+5s" id="animateSig1" repeatCount="indefinite" attributeName="stroke-dasharray" fill="freeze" dur="2"></animate></path></svg>
|
||||
```
|
||||
|
||||
What would you use MacSVG for?
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/10/macsvg-open-source-tool-animation
|
||||
|
||||
作者:[Jeff Macharyas][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/rikki-endsley
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://en.wikipedia.org/wiki/Belshazzar
|
||||
[2]: https://en.wikipedia.org/wiki/Belshazzar%27s_feast
|
||||
[3]: https://en.wikipedia.org/wiki/Scalable_Vector_Graphics
|
||||
[4]: https://inkscape.org/
|
||||
[5]: https://macsvg.org/
|
||||
[6]: https://github.com/dsward2/macSVG
|
||||
[7]: https://gist.github.com/mbostock/5649592
|
||||
[8]: https://macharyas.com/
|
||||
[9]: https://wordpress.org/plugins/svg-support/
|
||||
[10]: https://macharyas.com/index.php/2018/10/14/open-source-svg/
|
||||
[11]: http://brackets.io/
|
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: translator: (robsean)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: subject: (How To Customize The GNOME 3 Desktop?)
|
||||
|
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
@ -1,166 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Getting started with Prometheus)
|
||||
[#]: via: (https://opensource.com/article/18/12/introduction-prometheus)
|
||||
[#]: author: (Michael Zamot https://opensource.com/users/mzamot)
|
||||
|
||||
Getting started with Prometheus
|
||||
======
|
||||
Learn to install and write queries for the Prometheus monitoring and alerting system.
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_sysadmin_cloud.png?itok=sUciG0Cn)
|
||||
|
||||
[Prometheus][1] is an open source monitoring and alerting system that directly scrapes metrics from agents running on the target hosts and stores the collected samples centrally on its server. Metrics can also be pushed using plugins like **collectd_exporter** —although this is not Prometheus' default behavior, it may be useful in some environments where hosts are behind a firewall or prohibited from opening ports by security policy.
|
||||
|
||||
Prometheus, a project of the [Cloud Native Computing Foundation][2], scales up using a federation model, which enables one Prometheus server to scrape another Prometheus server. This allows creation of a hierarchical topology, where a central system or higher-level Prometheus server can scrape aggregated data already collected from subordinate instances.
|
||||
|
||||
Besides the Prometheus server, its most common components are its [Alertmanager][3] and its exporters.
|
||||
|
||||
Alerting rules can be created within Prometheus and configured to send custom alerts to Alertmanager. Alertmanager then processes and handles these alerts, including sending notifications through different mechanisms like email or third-party services like [PagerDuty][4].
|
||||
|
||||
Prometheus' exporters can be libraries, processes, devices, or anything else that exposes the metrics that will be scraped by Prometheus. The metrics are available at the endpoint **/metrics** , which allows Prometheus to scrape them directly without needing an agent. The tutorial in this article uses **node_exporter** to expose the target hosts' hardware and operating system metrics. Exporters' outputs are plaintext and highly readable, which is one of Prometheus' strengths.
|
||||
|
||||
In addition, you can configure [Grafana][5] to use Prometheus as a backend to provide data visualization and dashboarding functions.
|
||||
|
||||
### Making sense of Prometheus' configuration file
|
||||
|
||||
The interval between scrapes of **/metrics** controls the granularity of the time-series database. This is defined in the configuration file as the **scrape_interval** parameter, which by default is set to 60 seconds.
|
||||
|
||||
Targets are set for each scrape job in the **scrape_configs** section. Each job has its own name and a set of labels that can help filter, categorize, and make it easier to identify the target. One job can have many targets.
|
||||
|
||||
### Installing Prometheus
|
||||
|
||||
In this tutorial, for simplicity, we will install a Prometheus server and **node_exporter** with docker. Docker should already be installed and configured properly on your system. For a more in-depth, automated method, I recommend Steve Ovens' article [How to use Ansible to set up system monitoring with Prometheus][6].
|
||||
|
||||
Before starting, create the Prometheus configuration file **prometheus.yml** in your work directory as follows:
|
||||
|
||||
```
|
||||
global:
|
||||
scrape_interval: 15s
|
||||
evaluation_interval: 15s
|
||||
|
||||
scrape_configs:
|
||||
- job_name: 'prometheus'
|
||||
|
||||
static_configs:
|
||||
- targets: ['localhost:9090']
|
||||
|
||||
- job_name: 'webservers'
|
||||
|
||||
static_configs:
|
||||
- targets: ['<node exporter node IP>:9100']
|
||||
```
|
||||
|
||||
Start Prometheus with Docker by running the following command:
|
||||
|
||||
```
|
||||
$ sudo docker run -d -p 9090:9090 -v
|
||||
/path/to/prometheus.yml:/etc/prometheus/prometheus.yml
|
||||
prom/prometheus
|
||||
```
|
||||
|
||||
By default, the Prometheus server will use port 9090. If this port is already in use, you can change it by adding the parameter **\--web.listen-address="&lt;IP of machine&gt;:&lt;port&gt;"** at the end of the previous command.
|
||||
|
||||
In the machine you want to monitor, download and run the **node_exporter** container by using the following command:
|
||||
|
||||
```
|
||||
$ sudo docker run -d -v "/proc:/host/proc" -v "/sys:/host/sys" -v
|
||||
"/:/rootfs" --net="host" prom/node-exporter --path.procfs
|
||||
/host/proc --path.sysfs /host/sys --collector.filesystem.ignored-
|
||||
mount-points "^/(sys|proc|dev|host|etc)($|/)"
|
||||
```
|
||||
|
||||
For the purposes of this learning exercise, you can install **node_exporter** and Prometheus on the same machine. Please note that it's not wise to run **node_exporter** under Docker in production—this is for testing purposes only.
|
||||
|
||||
To verify that **node_exporter** is running, open your browser and navigate to **http://&lt;IP of node_exporter host&gt;:9100/metrics**. All the metrics collected will be displayed; these are the same metrics Prometheus will scrape.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/check-node_exporter.png)
|
||||
|
||||
To verify the Prometheus server installation, open your browser and navigate to <http://localhost:9090>.
|
||||
|
||||
You should see the Prometheus interface. Click on **Status** and then **Targets**. Under State, you should see your machines listed as **UP**.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/targets-up.png)
|
||||
|
||||
### Using Prometheus queries
|
||||
|
||||
It's time to get familiar with [PromQL][7], Prometheus' query syntax, and its graphing web interface. Go to **<http://localhost:9090/graph>** on your Prometheus server. You will see a query editor and two tabs: Graph and Console.
|
||||
|
||||
Prometheus stores all data as time series, identifying each one with a metric name. For example, the metric **node_filesystem_avail_bytes** shows the available filesystem space. The metric's name can be used in the expression box to select all of the time series with this name and produce an instant vector. If desired, these time series can be filtered using selectors and labels—a set of key-value pairs—for example:
|
||||
|
||||
```
|
||||
node_filesystem_avail_bytes{fstype="ext4"}
|
||||
```
|
||||
|
||||
When filtering, you can match "exactly equal" ( **=** ), "not equal" ( **!=** ), "regex-match" ( **=~** ), and "do not regex-match" ( **!~** ). The following examples illustrate this:
|
||||
|
||||
To filter **node_filesystem_avail_bytes** to show both ext4 and XFS filesystems:
|
||||
|
||||
```
|
||||
node_filesystem_avail_bytes{fstype=~"ext4|xfs"}
|
||||
```
|
||||
|
||||
To exclude a match:
|
||||
|
||||
```
|
||||
node_filesystem_avail_bytes{fstype!="xfs"}
|
||||
```
|
||||
|
||||
You can also get a range of samples back from the current time by using square brackets. You can use **s** to represent seconds, **m** for minutes, **h** for hours, **d** for days, **w** for weeks, and **y** for years. When using time ranges, the vector returned will be a range vector.
|
||||
|
||||
For example, the following command produces the samples from five minutes to the present:
|
||||
|
||||
```
|
||||
node_memory_MemAvailable_bytes[5m]
|
||||
```
|
||||
|
||||
Prometheus also includes functions to allow advanced queries, such as this:
|
||||
|
||||
```
|
||||
100 * (1 - avg by(instance)(irate(node_cpu_seconds_total{job='webservers',mode='idle'}[5m])))
|
||||
```
|
||||
|
||||
Notice how the labels are used to filter the job and the mode. The metric **node_cpu_seconds_total** returns a counter, and the **irate()** function calculates the per-second rate of change based on the last two data points of the range interval (meaning the range can be smaller than five minutes). To calculate the overall CPU usage, you can use the idle mode of the **node_cpu_seconds_total** metric. The idle percent of a processor is the opposite of a busy processor, so the **irate** value is subtracted from 1. To make it a percentage, multiply it by 100.
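The same arithmetic can be checked by hand with two made-up counter samples (hypothetical values, not real scrape data):

```python
def irate(samples):
    """Per-second rate from the last two (timestamp, counter) samples,
    mirroring what PromQL's irate() does for a counter."""
    (t1, v1), (t2, v2) = samples[-2:]
    return (v2 - v1) / (t2 - t1)

def cpu_usage_percent(idle_samples):
    """Busy-CPU percentage from idle-mode node_cpu_seconds_total samples."""
    return 100 * (1 - irate(idle_samples))

# Samples 60 s apart; the idle counter grew by 45 s, so the CPU was
# 75% idle and therefore 25% busy.
usage = cpu_usage_percent([(0, 100.0), (60, 145.0)])
```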
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/cpu-usage.png)
|
||||
|
||||
### Learn more
|
||||
|
||||
Prometheus is a powerful, scalable, lightweight, and easy to use and deploy monitoring tool that is indispensable for every system administrator and developer. For these and other reasons, many companies are implementing Prometheus as part of their infrastructure.
|
||||
|
||||
To learn more about Prometheus and its functions, I recommend the following resources:
|
||||
|
||||
+ About [PromQL][8]
|
||||
+ What [node_exporters collects][9]
|
||||
+ [Prometheus functions][10]
|
||||
+ [4 open source monitoring tools][11]
|
||||
+ [Now available: The open source guide to DevOps monitoring tools][12]
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/12/introduction-prometheus
|
||||
|
||||
作者:[Michael Zamot][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/mzamot
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://prometheus.io/
|
||||
[2]: https://www.cncf.io/
|
||||
[3]: https://prometheus.io/docs/alerting/alertmanager/
|
||||
[4]: https://en.wikipedia.org/wiki/PagerDuty
|
||||
[5]: https://grafana.com/
|
||||
[6]: https://opensource.com/article/18/3/how-use-ansible-set-system-monitoring-prometheus
|
||||
[7]: https://prometheus.io/docs/prometheus/latest/querying/basics/
|
||||
[8]: https://prometheus.io/docs/prometheus/latest/querying/basics/
|
||||
[9]: https://github.com/prometheus/node_exporter#collectors
|
||||
[10]: https://prometheus.io/docs/prometheus/latest/querying/functions/
|
||||
[11]: https://opensource.com/article/18/8/open-source-monitoring-tools
|
||||
[12]: https://opensource.com/article/18/8/now-available-open-source-guide-devops-monitoring-tools
|
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (zianglei)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
@ -1,150 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Let’s try dwm — dynamic window manager)
|
||||
[#]: via: (https://fedoramagazine.org/lets-try-dwm-dynamic-window-manger/)
|
||||
[#]: author: (Adam Šamalík https://fedoramagazine.org/author/asamalik/)
|
||||
|
||||
Let’s try dwm — dynamic window manager
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
If you like efficiency and minimalism, and are looking for a new window manager for your Linux desktop, you should try _dwm_ — dynamic window manager. Written in under 2000 standard lines of code, dwm is an extremely fast yet powerful and highly customizable window manager.
|
||||
|
||||
You can dynamically choose between tiling, monocle and floating layouts, organize your windows into multiple workspaces using tags, and quickly navigate through using keyboard shortcuts. This article helps you get started using dwm.
|
||||
|
||||
## **Installation**
|
||||
|
||||
To install dwm on Fedora, run:
|
||||
|
||||
```
|
||||
$ sudo dnf install dwm dwm-user
|
||||
```
|
||||
|
||||
The _dwm_ package installs the window manager itself, and the _dwm-user_ package significantly simplifies configuration, which will be explained later in this article.
|
||||
|
||||
Additionally, to be able to lock the screen when needed, we’ll also install _slock_ — a simple X display locker.
|
||||
|
||||
```
|
||||
$ sudo dnf install slock
|
||||
```
|
||||
|
||||
However, you can use a different one based on your personal preference.
|
||||
|
||||
## **Quick start**
|
||||
|
||||
To start dwm, choose the _dwm-user_ option on the login screen.
|
||||
|
||||
![][2]
|
||||
|
||||
After you log in, you’ll see a very simple desktop. In fact, the only thing there will be is a bar at the top listing the nine tags that represent workspaces and a _[]=_ symbol that represents the layout of your windows.
|
||||
|
||||
### Launching applications
|
||||
|
||||
Before looking into the layouts, first launch some applications so you can play with the layouts as you go. Apps can be started by pressing _Alt+p_ and typing the name of the app followed by _Enter_. There’s also a shortcut _Alt+Shift+Enter_ for opening a terminal.
|
||||
|
||||
Now that some apps are running, have a look at the layouts.
|
||||
|
||||
### Layouts
|
||||
|
||||
There are three layouts available by default: the tiling layout, the monocle layout, and the floating layout.
|
||||
|
||||
The tiling layout, represented by _[]=_ on the bar, organizes windows into two main areas: master on the left, and stack on the right. You can activate the tiling layout by pressing _Alt+t._
|
||||
|
||||
![][3]
|
||||
|
||||
The idea behind the tiling layout is that you have your primary window in the master area while still seeing the other ones in the stack. You can quickly switch between them as needed.
|
||||
|
||||
To swap windows between the two areas, hover your mouse over one in the stack area and press _Alt+Enter_ to swap it with the one in the master area.
|
||||
|
||||
![][4]
|
||||
|
||||
The monocle layout, represented by _[N]_ on the top bar, makes your primary window take the whole screen. You can switch to it by pressing _Alt+m_.
|
||||
|
||||
Finally, the floating layout lets you move and resize your windows freely. The shortcut for it is _Alt+f_ and the symbol on the top bar is _><>_.
|
||||
|
||||
### Workspaces and tags
|
||||
|
||||
Each window is assigned to a tag (1-9) listed at the top bar. To view a specific tag, either click on its number using your mouse or press _Alt+1..9._ You can even view multiple tags at once by clicking on their number using the secondary mouse button.
|
||||
|
||||
Windows can be moved between different tags by highlighting them using your mouse, and pressing _Alt+Shift+1..9._
|
||||
|
||||
## **Configuration**
|
||||
|
||||
To make dwm as minimalistic as possible, it doesn’t use typical configuration files. Instead, you modify a C header file representing the configuration, and recompile it. But don’t worry, in Fedora it’s as simple as just editing one file in your home directory and everything else happens in the background thanks to the _dwm-user_ package provided by the maintainer in Fedora.
|
||||
|
||||
First, you need to copy the file into your home directory using a command similar to the following:
|
||||
|
||||
```
|
||||
$ mkdir ~/.dwm
|
||||
$ cp /usr/src/dwm-VERSION-RELEASE/config.def.h ~/.dwm/config.h
|
||||
```
|
||||
|
||||
You can get the exact path by running _man dwm-start._
|
||||
|
||||
Second, just edit the _~/.dwm/config.h_ file. As an example, let’s configure a new shortcut to lock the screen by pressing _Alt+Shift+L_.
|
||||
|
||||
Considering we’ve installed the _slock_ package mentioned earlier in this post, we need to add the following two lines into the file to make it work:
|
||||
|
||||
Under the _/* commands */_ comment, add:
|
||||
|
||||
```
|
||||
static const char *slockcmd[] = { "slock", NULL };
|
||||
```
|
||||
|
||||
And the following line into _static Key keys[]_ :
|
||||
|
||||
```
|
||||
{ MODKEY|ShiftMask, XK_l, spawn, {.v = slockcmd } },
|
||||
```
|
||||
|
||||
In the end, it should look as follows (added lines are highlighted):
|
||||
|
||||
```
|
||||
...
|
||||
/* commands */
|
||||
static char dmenumon[2] = "0"; /* component of dmenucmd, manipulated in spawn() */
|
||||
static const char *dmenucmd[] = { "dmenu_run", "-m", dmenumon, "-fn", dmenufont, "-nb", normbgcolor, "-nf", normfgcolor, "-sb", selbgcolor, "-sf", selfgcolor, NULL };
|
||||
static const char *termcmd[] = { "st", NULL };
|
||||
static const char *slockcmd[] = { "slock", NULL };
|
||||
|
||||
static Key keys[] = {
|
||||
/* modifier key function argument */
|
||||
{ MODKEY|ShiftMask, XK_l, spawn, {.v = slockcmd } },
|
||||
{ MODKEY, XK_p, spawn, {.v = dmenucmd } },
|
||||
{ MODKEY|ShiftMask, XK_Return, spawn, {.v = termcmd } },
|
||||
...
|
||||
```
|
||||
|
||||
Save the file.
|
||||
|
||||
Finally, just log out by pressing _Alt+Shift+q_ and log in again. The scripts provided by the _dwm-user_ package will recognize that you have changed the _config.h_ file in your home directory and recompile dwm on login. And because dwm is so tiny, recompiling is fast enough that you won’t even notice it.
|
||||
|
||||
You can try locking your screen now by pressing _Alt+Shift+L_, and then logging back in by typing your password and pressing Enter.
|
||||
|
||||
## **Conclusion**
|
||||
|
||||
If you like minimalism and want a very fast yet powerful window manager, dwm might be just what you’ve been looking for. However, it probably isn’t for beginners. There might be a lot of additional configuration you’ll need to do in order to make it just as you like it.
|
||||
|
||||
To learn more about dwm, see the project’s homepage at <https://dwm.suckless.org/>.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/lets-try-dwm-dynamic-window-manger/
|
||||
|
||||
作者:[Adam Šamalík][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/asamalik/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fedoramagazine.org/wp-content/uploads/2019/03/dwm-magazine-image-816x345.png
|
||||
[2]: https://fedoramagazine.org/wp-content/uploads/2019/03/choosing-dwm-1024x469.png
|
||||
[3]: https://fedoramagazine.org/wp-content/uploads/2019/03/dwm-desktop-1024x593.png
|
||||
[4]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-2019-03-15-at-11.12.32-1024x592.png
|
@ -1,111 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How To Turn On And Shutdown The Raspberry Pi [Absolute Beginner Tip])
|
||||
[#]: via: (https://itsfoss.com/turn-on-raspberry-pi/)
|
||||
[#]: author: (Chinmay https://itsfoss.com/author/chinmay/)
|
||||
|
||||
How To Turn On And Shutdown The Raspberry Pi [Absolute Beginner Tip]
|
||||
======
|
||||
|
||||
_**Brief: This quick tip teaches you how to turn on Raspberry Pi and how to shut it down properly afterwards.**_
|
||||
|
||||
The [Raspberry Pi][1] is one of the [most popular SBC (Single-Board-Computer)][2]. If you are interested in this topic, I believe that you’ve finally got a Pi device. I also advise to get all the [additional Raspberry Pi accessories][3] to get started with your device.
|
||||
|
||||
You’re ready to turn it on and start to tinker around with it. It has its own similarities and differences compared to traditional computers like desktops and laptops.
|
||||
|
||||
Today, let’s go ahead and learn how to turn on and shut down a Raspberry Pi, as it doesn’t really feature a ‘power button’ of any sort.
|
||||
|
||||
For this article I’m using a Raspberry Pi 3B+, but it’s the same for all the Raspberry Pi variants.
|
||||
|
||||
### Turn on Raspberry Pi
|
||||
|
||||
![Micro USB port for Power][7]
|
||||
|
||||
The micro USB port powers the Raspberry Pi; you turn it on by plugging the power cable into the micro USB port. But before you do that, you should make sure that you have done the following things.
|
||||
|
||||
* Preparing the micro SD card with Raspbian according to the official [guide][8] and inserting it into the micro SD card slot.
|
||||
* Plugging in the HDMI cable, USB keyboard and a Mouse.
|
||||
* Plugging in the Ethernet cable (optional).
|
||||
|
||||
|
||||
|
||||
Once you have done the above, plug in the power cable. This turns on the Raspberry Pi; the display will light up and the operating system will load.
|
||||
|
||||
### Shutting Down the Pi
|
||||
|
||||
Shutting down the Pi is pretty straightforward: click the menu button and choose shutdown.
|
||||
|
||||
![Turn off Raspberry Pi graphically][9]
|
||||
|
||||
Alternatively, you can use the [shutdown command][10] in the terminal:
|
||||
|
||||
```
|
||||
sudo shutdown now
|
||||
```
|
||||
|
||||
Once the shutdown process has started, **wait** till it completely finishes and then you can cut the power to it. Once the Pi shuts down, there is no real way to turn the Pi back on without cycling the power. You could use the GPIOs to turn on the Pi from the shutdown state, but it’ll require additional modding.
|
||||
|
||||
_Note: Micro USB ports tend to be fragile, so turn the power off and on at the source instead of frequently unplugging and replugging the micro USB cable._
|
||||
|
||||
Well, that’s about all you should know about turning on and shutting down the Pi, what do you plan to use it for? Let me know in the comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/turn-on-raspberry-pi/
|
||||
|
||||
作者:[Chinmay][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/chinmay/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.raspberrypi.org/
|
||||
[2]: https://itsfoss.com/raspberry-pi-alternatives/
|
||||
[3]: https://itsfoss.com/things-you-need-to-get-your-raspberry-pi-working/
|
||||
[4]: https://www.amazon.com/CanaKit-Raspberry-Starter-Premium-Black/dp/B07BCC8PK7?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B07BCC8PK7&keywords=raspberry%20pi%20kit (CanaKit Raspberry Pi 3 B+ (B Plus) Starter Kit (32 GB EVO+ Edition, Premium Black Case))
|
||||
[5]: https://www.amazon.com/gp/prime/?tag=chmod7mediate-20 (Amazon Prime)
|
||||
[6]: https://www.amazon.com/CanaKit-Raspberry-Premium-Clear-Supply/dp/B07BC7BMHY?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B07BC7BMHY&keywords=raspberry%20pi%20kit (CanaKit Raspberry Pi 3 B+ (B Plus) with Premium Clear Case and 2.5A Power Supply)
|
||||
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/raspberry-pi-3-microusb.png?fit=800%2C532&ssl=1
|
||||
[8]: https://www.raspberrypi.org/documentation/installation/installing-images/README.md
|
||||
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/Raspbian-ui-menu.jpg?fit=800%2C492&ssl=1
|
||||
[10]: https://linuxhandbook.com/linux-shutdown-command/
|
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (Flowsnow)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (zmaster-zhang)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
@ -1,179 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (MjSeven)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Use Postfix to get email from your Fedora system)
|
||||
[#]: via: (https://fedoramagazine.org/use-postfix-to-get-email-from-your-fedora-system/)
|
||||
[#]: author: (Gregory Bartholomew https://fedoramagazine.org/author/glb/)
|
||||
|
||||
Use Postfix to get email from your Fedora system
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
Communication is key. Your computer might be trying to tell you something important. But if your mail transport agent ([MTA][2]) isn’t properly configured, you might not be getting the notifications. Postfix is an MTA [that’s easy to configure and known for a strong security record][3]. Follow these steps to ensure that email notifications sent from local services will get routed to your internet email account through the Postfix MTA.
|
||||
|
||||
### Install packages
|
||||
|
||||
Use _dnf_ to install the required packages ([you configured][4] _[sudo][4]_[, right?][4]):
|
||||
|
||||
```
|
||||
$ sudo -i
|
||||
# dnf install postfix mailx
|
||||
```
|
||||
|
||||
If you previously had a different MTA configured, you may need to set Postfix to be the system default. Use the _alternatives_ command to set your system default MTA:
|
||||
|
||||
```
|
||||
$ sudo alternatives --config mta
|
||||
There are 2 programs which provide 'mta'.
|
||||
Selection Command
|
||||
*+ 1 /usr/sbin/sendmail.sendmail
|
||||
2 /usr/sbin/sendmail.postfix
|
||||
Enter to keep the current selection[+], or type selection number: 2
|
||||
```
|
||||
|
||||
### Create a _password_maps_ file
|
||||
|
||||
You will need to create a Postfix lookup table entry containing the email address and password of the account that you want to use for sending email:
|
||||
|
||||
```
|
||||
# MY_EMAIL_ADDRESS=glb@gmail.com
|
||||
# MY_EMAIL_PASSWORD=abcdefghijklmnop
|
||||
# MY_SMTP_SERVER=smtp.gmail.com
|
||||
# MY_SMTP_SERVER_PORT=587
|
||||
# echo "[$MY_SMTP_SERVER]:$MY_SMTP_SERVER_PORT $MY_EMAIL_ADDRESS:$MY_EMAIL_PASSWORD" >> /etc/postfix/password_maps
|
||||
# chmod 600 /etc/postfix/password_maps
|
||||
# unset MY_EMAIL_PASSWORD
|
||||
# history -c
|
||||
```
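
If you would rather not leave the secret in your shell history at all, the same entry can be built with _printf_. This is a minimal sketch with placeholder values — the account, password, and the `MAP_FILE` path are assumptions for illustration; point `MAP_FILE` at `/etc/postfix/password_maps` (as root) for real use:

```shell
# Sketch only: placeholder credentials, and MAP_FILE defaults to a scratch
# path so you can inspect the result before touching /etc/postfix.
MAP_FILE="${MAP_FILE:-/tmp/password_maps.demo}"
MY_EMAIL_ADDRESS="user@example.com"     # placeholder, not a real account
MY_EMAIL_PASSWORD="app-password-here"   # placeholder app password
MY_SMTP_SERVER="smtp.gmail.com"
MY_SMTP_SERVER_PORT=587
printf '[%s]:%s %s:%s\n' "$MY_SMTP_SERVER" "$MY_SMTP_SERVER_PORT" \
    "$MY_EMAIL_ADDRESS" "$MY_EMAIL_PASSWORD" > "$MAP_FILE"
chmod 600 "$MAP_FILE"
```

Either way, keep the file readable only by root, since it holds a plaintext password.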
|
||||
|
||||
If you are using a Gmail account, you’ll need to configure an “app password” for Postfix, rather than using your Gmail password. See “[Sign in using App Passwords][5]” for instructions on configuring an app password.
|
||||
|
||||
Next, you must run the _postmap_ command against the Postfix lookup table to create or update the hashed version of the file that Postfix actually uses:
|
||||
|
||||
```
|
||||
# postmap /etc/postfix/password_maps
|
||||
```
|
||||
|
||||
The hashed version will have the same file name but it will be suffixed with _.db_.
|
||||
|
||||
### Update the _main.cf_ file
|
||||
|
||||
Update Postfix’s _main.cf_ configuration file to reference the Postfix lookup table you just created. Edit the file and add these lines.
|
||||
|
||||
```
|
||||
relayhost = smtp.gmail.com:587
|
||||
smtp_tls_security_level = verify
|
||||
smtp_tls_mandatory_ciphers = high
|
||||
smtp_tls_verify_cert_match = hostname
|
||||
smtp_sasl_auth_enable = yes
|
||||
smtp_sasl_security_options = noanonymous
|
||||
smtp_sasl_password_maps = hash:/etc/postfix/password_maps
|
||||
```
|
||||
|
||||
The example assumes you’re using Gmail for the _relayhost_ setting, but you can substitute the correct hostname and port for the mail host to which your system should hand off mail for sending.
|
||||
|
||||
For the most up-to-date details about the above configuration options, see the man page:
|
||||
|
||||
```
|
||||
$ man postconf.5
|
||||
```
|
||||
|
||||
### Enable, start, and test Postfix
|
||||
|
||||
After you have updated the main.cf file, enable and start the Postfix service:
|
||||
|
||||
```
|
||||
# systemctl enable --now postfix.service
|
||||
```
|
||||
|
||||
You can then exit your _sudo_ session as root using the _exit_ command or **Ctrl+D**. You should now be able to test your configuration with the _mail_ command:
|
||||
|
||||
```
|
||||
$ echo 'It worked!' | mail -s "Test: $(date)" glb@gmail.com
|
||||
```
|
||||
|
||||
### Update services
|
||||
|
||||
If you have services like [logwatch][6], [mdadm][7], [fail2ban][8], [apcupsd][9] or [certwatch][10] installed, you can now update their configurations so that their email notifications will go to your internet email address.
|
||||
|
||||
Optionally, you may want to configure all email that is sent to your local system’s root account to go to your internet email address. Add this line to the _/etc/aliases_ file on your system (you’ll need to use _sudo_ to edit this file, or switch to the _root_ account first):
|
||||
|
||||
```
|
||||
root: glb+root@gmail.com
|
||||
```
|
||||
|
||||
Now run this command to re-read the aliases:
|
||||
|
||||
```
|
||||
# newaliases
|
||||
```
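
If you script this step, a small guard keeps the alias from being appended twice on re-runs. A sketch, with `ALIASES` defaulting to a scratch copy rather than the real `/etc/aliases` (an assumption for safe experimentation):

```shell
# Sketch only: ALIASES points at a scratch file; set ALIASES=/etc/aliases
# (as root) for real use, then run newaliases to rebuild the database.
ALIASES="${ALIASES:-/tmp/aliases.demo}"
LINE='root: glb+root@gmail.com'
touch "$ALIASES"
grep -qxF "$LINE" "$ALIASES" || echo "$LINE" >> "$ALIASES"
```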
|
||||
|
||||
* TIP: If you are using Gmail, you can [add an alpha-numeric mark][11] between your username and the **@** symbol as demonstrated above to make it easier to identify and filter the email that you will receive from your computer(s).
|
||||
|
||||
|
||||
|
||||
### Troubleshooting
|
||||
|
||||
**View the mail queue:**
|
||||
|
||||
```
|
||||
$ mailq
|
||||
```
|
||||
|
||||
**Clear all email from the queues:**
|
||||
|
||||
```
|
||||
# postsuper -d ALL
|
||||
```
|
||||
|
||||
**Filter the configuration settings for interesting values:**
|
||||
|
||||
```
|
||||
$ postconf | grep "^relayhost\|^smtp_"
|
||||
```
|
||||
|
||||
**View the _postfix/smtp_ logs:**
|
||||
|
||||
```
|
||||
$ journalctl --no-pager -t postfix/smtp
|
||||
```
|
||||
|
||||
**Reload _postfix_ after making configuration changes:**
|
||||
|
||||
```
|
||||
$ systemctl reload postfix
|
||||
```
|
||||
|
||||
* * *
|
||||
|
||||
_Photo by _[_Sharon McCutcheon_][12]_ on [Unsplash][13]_.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/use-postfix-to-get-email-from-your-fedora-system/
|
||||
|
||||
作者:[Gregory Bartholomew][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[MjSeven](https://github.com/MjSeven)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/glb/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fedoramagazine.org/wp-content/uploads/2019/07/postfix-816x345.jpg
|
||||
[2]: https://en.wikipedia.org/wiki/Message_transfer_agent
|
||||
[3]: https://en.wikipedia.org/wiki/Postfix_(software)
|
||||
[4]: https://fedoramagazine.org/howto-use-sudo/
|
||||
[5]: https://support.google.com/accounts/answer/185833
|
||||
[6]: https://src.fedoraproject.org/rpms/logwatch
|
||||
[7]: https://fedoramagazine.org/mirror-your-system-drive-using-software-raid/
|
||||
[8]: https://fedoraproject.org/wiki/Fail2ban_with_FirewallD
|
||||
[9]: https://src.fedoraproject.org/rpms/apcupsd
|
||||
[10]: https://www.linux.com/learn/automated-certificate-expiration-checks-centos
|
||||
[11]: https://gmail.googleblog.com/2008/03/2-hidden-ways-to-get-more-from-your.html
|
||||
[12]: https://unsplash.com/@sharonmccutcheon?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
|
||||
[13]: https://unsplash.com/search/photos/envelopes?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
|
@ -1,109 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How To Add ‘New Document’ Option In Right Click Context Menu In Ubuntu 18.04)
|
||||
[#]: via: (https://www.ostechnix.com/how-to-add-new-document-option-in-right-click-context-menu-in-ubuntu-18-04/)
|
||||
[#]: author: (sk https://www.ostechnix.com/author/sk/)
|
||||
|
||||
How To Add ‘New Document’ Option In Right Click Context Menu In Ubuntu 18.04
|
||||
======
|
||||
|
||||
![Add 'New Document' Option In Right Click Context Menu In Ubuntu 18.04 GNOME desktop][1]
|
||||
|
||||
The other day, I was collecting reference notes for [**Linux package managers**][2] from various online sources. When I tried to create a text file to save those notes, I noticed that the ‘New Document’ option was missing on my Ubuntu 18.04 LTS desktop. I thought the option was somehow gone from my system. After googling a bit, it turned out that the “New Document” option is not included in Ubuntu GNOME editions. Luckily, I found an easy solution to add the ‘New Document’ option to the right-click context menu in Ubuntu 18.04 LTS.
|
||||
|
||||
As you can see in the following screenshot, the “New Document” option is missing in the right-click context menu of the Nautilus file manager.
|
||||
|
||||
![][3]
|
||||
|
||||
The ‘New Document’ option is missing in the right-click context menu in Ubuntu 18.04
|
||||
|
||||
If you want to add this option, just follow the steps given below.
|
||||
|
||||
### Add ‘New Document’ Option In Right Click Context Menu In Ubuntu
|
||||
|
||||
First, make sure you have a **~/Templates** directory on your system. If it is not available, create one like below.
|
||||
|
||||
```
|
||||
$ mkdir ~/Templates
|
||||
```
|
||||
|
||||
Next, open the Terminal application and cd into the **~/Templates** folder using the command:
|
||||
|
||||
```
|
||||
$ cd ~/Templates
|
||||
```
|
||||
|
||||
Create an empty file:
|
||||
|
||||
```
|
||||
$ touch Empty\ Document
|
||||
```
|
||||
|
||||
Or,
|
||||
|
||||
```
|
||||
$ touch "Empty Document"
|
||||
```
|
||||
|
||||
![][4]
|
||||
|
||||
Now open your Nautilus file manager and check if the “New Document” option has been added to the context menu.
|
||||
|
||||
![][5]
|
||||
|
||||
Add ‘New Document’ Option In Right Click Context Menu In Ubuntu 18.04
|
||||
|
||||
As you can see in the above screenshot, the “New Document” option is back again.
|
||||
|
||||
You can also add options for different file types like below.
|
||||
|
||||
```
|
||||
$ cd ~/Templates
|
||||
|
||||
$ touch New\ Word\ Document.docx
|
||||
$ touch New\ PDF\ Document.pdf
|
||||
$ touch New\ Text\ Document.txt
|
||||
$ touch New\ PyScript.py
|
||||
```
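
If you maintain a longer list of templates, a loop keeps things tidy. A small sketch of the same idea (the file names here are just examples):

```shell
# Create several template files in one pass; every file placed in
# ~/Templates shows up in the "New Document" sub-menu.
mkdir -p "$HOME/Templates"
for name in "New Word Document.docx" "New Text Document.txt" "New PyScript.py"; do
    touch "$HOME/Templates/$name"
done
ls "$HOME/Templates"
```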
|
||||
|
||||
![][6]
|
||||
|
||||
Add options for different file types in the New Document sub-menu
|
||||
|
||||
Please note that all files should be created inside the **~/Templates** directory.
|
||||
|
||||
Now, open Nautilus and check if the newly created file types are present in the “New Document” sub-menu.
|
||||
|
||||
![][7]
|
||||
|
||||
If you want to remove any file type from the sub-menu, simply remove the appropriate file from the Templates directory.
|
||||
|
||||
```
|
||||
$ rm ~/Templates/New\ Word\ Document.docx
|
||||
```
|
||||
|
||||
I am wondering why this option has been removed in recent Ubuntu GNOME editions. I use it frequently. However, it is easy to re-enable this option in a couple of minutes.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/how-to-add-new-document-option-in-right-click-context-menu-in-ubuntu-18-04/
|
||||
|
||||
作者:[sk][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.ostechnix.com/wp-content/uploads/2019/07/Add-New-Document-Option-In-Right-Click-Context-Menu-1-720x340.png
|
||||
[2]: https://www.ostechnix.com/linux-package-managers-compared-appimage-vs-snap-vs-flatpak/
|
||||
[3]: https://www.ostechnix.com/wp-content/uploads/2019/07/new-document-option-missing.png
|
||||
[4]: https://www.ostechnix.com/wp-content/uploads/2019/07/Create-empty-document-in-Templates-directory.png
|
||||
[5]: https://www.ostechnix.com/wp-content/uploads/2019/07/Add-New-Document-Option-In-Right-Click-Context-Menu-In-Ubuntu.png
|
||||
[6]: https://www.ostechnix.com/wp-content/uploads/2019/07/Add-options-for-different-files-types.png
|
||||
[7]: https://www.ostechnix.com/wp-content/uploads/2019/07/Add-New-Document-Option-In-Right-Click-Context-Menu.png
|
@ -1,99 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Unboxing the Raspberry Pi 4)
|
||||
[#]: via: (https://opensource.com/article/19/8/unboxing-raspberry-pi-4)
|
||||
[#]: author: (Anderson Silva https://opensource.com/users/ansilvahttps://opensource.com/users/bennuttall)
|
||||
|
||||
Unboxing the Raspberry Pi 4
|
||||
======
|
||||
The Raspberry Pi 4 delivers impressive performance gains over its
|
||||
predecessors, and the Starter Kit makes it easy to get it up and running
|
||||
quickly.
|
||||
![Raspberry Pi 4 board, posterized filter][1]
|
||||
|
||||
When the Raspberry Pi 4 was [announced at the end of June][2], I wasted no time. I ordered two Raspberry Pi 4 Starter Kits the same day from [CanaKit][3]. The 1GB RAM version was available right away, but the 4GB version wouldn't ship until July 19th. Since I wanted to try both, I ordered them to be shipped together.
|
||||
|
||||
![CanaKit's Raspberry Pi 4 Starter Kit and official accessories][4]
|
||||
|
||||
Here's what I found when I unboxed my Raspberry Pi 4.
|
||||
|
||||
### Power supply
|
||||
|
||||
The Raspberry Pi 4 uses a USB-C connector for its power supply. Even though USB-C cables are very common now, your Pi 4 [may not like your USB-C cable][5] (at least with these first editions of the Raspberry Pi 4). So, unless you know exactly what you are doing, I recommend ordering the Starter Kit, which comes with an official Raspberry Pi charger. In case you would rather try whatever you have on hand, the device's input reads 100-240V ~ 50/60Hz 0.5A, and the output says 5.1V --- 3.0A.
|
||||
|
||||
![Raspberry Pi USB-C charger][6]
|
||||
|
||||
### Keyboard and mouse
|
||||
|
||||
The official keyboard and mouse are [sold separately][7] from the Starter Kit, and at $25 total, they aren't really cheap, given you're paying only $35 to $55 for a proper computer. But the Raspberry Pi logo is printed on this keyboard (instead of the Windows logo), and there is something compelling about having an appropriate appearance. The keyboard is also a USB hub, so it allows you to plug in even more devices. I plugged in my [YubiKey][8] security key, and it works very nicely. I would classify the keyboard and mouse as a "nice to have" versus a "must-have." Your regular keyboard and mouse should work fine.
|
||||
|
||||
![Official Raspberry Pi keyboard \(with YubiKey plugged in\) and mouse.][9]
|
||||
|
||||
![Raspberry Pi logo on the keyboard][10]
|
||||
|
||||
### Micro-HDMI cable
|
||||
|
||||
Something that may have caught some folks by surprise is that, unlike the Raspberry Pi Zero that comes with a Mini-HDMI port, the Raspberry Pi 4 comes with a Micro-HDMI. They are not the same thing! So, even though you may have a suitable USB-C cable/power adaptor, mouse, and keyboard on hand, there is a pretty good chance you will need a Micro-HDMI-to-HDMI cable (or an adapter) to plug your new Raspberry Pi into a display.
|
||||
|
||||
### The case
|
||||
|
||||
Cases for the Raspberry Pi have been around for years and are probably one of the very first "official" peripherals the Raspberry Pi Foundation sold. Some people like them; others don't. I think putting a Pi in a case makes it easier to carry it around and avoid static electricity and bent pins.
|
||||
|
||||
On the other hand, keeping your Pi covered can overheat the board. This CanaKit Starter Kit also comes with a heatsink for the processor, which might help, as the newer Pis are already [known for running pretty hot][11].
|
||||
|
||||
![Raspberry Pi 4 case][12]
|
||||
|
||||
### Raspbian and NOOBS
|
||||
|
||||
The other item that comes with the Starter Kit is a microSD card with the correct version of the [NOOBS][13] operating system for the Raspberry Pi 4 pre-installed. (I got version 3.1.1, released June 24, 2019). If you're using a Raspberry Pi for the first time and are not sure where to start, this could save you a lot of time. The microSD card in the Starter Kit is 32GB.
|
||||
|
||||
After you insert the microSD card and connect all the cables, just start up the Pi, boot into NOOBS, pick the Raspbian distribution, and wait while it installs.
|
||||
|
||||
![Raspberry Pi 4 with 4GB of RAM][14]
|
||||
|
||||
I noticed a couple of improvements while installing the latest Raspbian. (Forgive me if they've been around for a while—I haven't done a fresh install on a Pi since the 3 came out.) One is that Raspbian will ask you to set up a password for your account at first boot after installation, and the other is that it will run a software update (assuming you have network connectivity). These are great improvements to help keep your Raspberry Pi a little more secure. I would love to see the option to encrypt the microSD card at installation … maybe someday?

![Running Raspbian updates at first boot][15]

![Raspberry Pi 4 setup][16]

It runs very smoothly!

### Wrapping up

Although CanaKit isn't the only authorized Raspberry Pi retailer in the US, I found its Starter Kit to provide great value for the price.

So far, I am very impressed with the performance gains in the Raspberry Pi 4. I'm planning to try spending an entire workday using it as my only computer, and I'll write a follow-up article soon about how far I can go. Stay tuned!

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/unboxing-raspberry-pi-4

作者:[Anderson Silva][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/ansilva
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberrypi4_board_hardware.jpg?itok=KnFU7NvR (Raspberry Pi 4 board, posterized filter)
[2]: https://opensource.com/article/19/6/raspberry-pi-4
[3]: https://www.canakit.com/raspberry-pi-4-starter-kit.html
[4]: https://opensource.com/sites/default/files/uploads/raspberrypi4_canakit.jpg (CanaKit's Raspberry Pi 4 Starter Kit and official accessories)
[5]: https://www.techrepublic.com/article/your-new-raspberry-pi-4-wont-power-on-usb-c-cable-problem-now-officially-confirmed/
[6]: https://opensource.com/sites/default/files/uploads/raspberrypi_usb-c_charger.jpg (Raspberry Pi USB-C charger)
[7]: https://www.canakit.com/official-raspberry-pi-keyboard-mouse.html?defpid=4476
[8]: https://www.yubico.com/products/yubikey-hardware/
[9]: https://opensource.com/sites/default/files/uploads/raspberrypi_keyboardmouse.jpg (Official Raspberry Pi keyboard (with YubiKey plugged in) and mouse.)
[10]: https://opensource.com/sites/default/files/uploads/raspberrypi_keyboardlogo.jpg (Raspberry Pi logo on the keyboard)
[11]: https://www.theregister.co.uk/2019/07/22/raspberry_pi_4_too_hot_to_handle/
[12]: https://opensource.com/sites/default/files/uploads/raspberrypi4_case.jpg (Raspberry Pi 4 case)
[13]: https://www.raspberrypi.org/downloads/noobs/
[14]: https://opensource.com/sites/default/files/uploads/raspberrypi4_ram.jpg (Raspberry Pi 4 with 4GB of RAM)
[15]: https://opensource.com/sites/default/files/uploads/raspberrypi4_rasbpianupdate.jpg (Running Raspbian updates at first boot)
[16]: https://opensource.com/sites/default/files/uploads/raspberrypi_setup.jpg (Raspberry Pi 4 setup)
@ -1,118 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Find Out How Long Does it Take To Boot Your Linux System)
[#]: via: (https://itsfoss.com/check-boot-time-linux/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

Find Out How Long It Takes to Boot Your Linux System
======

When you power on your system, you wait for the manufacturer's logo to come up, perhaps a few messages on the screen (booting in insecure mode), the [Grub][1] screen, the operating system loading screen and finally the login screen.

Did you check how long it took? Perhaps not. Unless you really need to know, you won't bother with the boot time details.

But what if you are curious to know how long your Linux system takes to boot? Running a stopwatch is one way to find out, but in Linux there are better and easier ways to measure your system's startup time.

### Checking boot time in Linux with systemd-analyze

![][2]

Like it or not, [systemd][3] is running on most of the popular Linux distributions. systemd comes with a number of utilities to manage your Linux system. One of those utilities is systemd-analyze.

The systemd-analyze command gives you details on how many services ran at the last startup and how long they took.

If you run the following command in the terminal:

```
systemd-analyze
```

You'll get the total boot time along with the time taken by the firmware, boot loader, kernel and userspace:

```
Startup finished in 7.275s (firmware) + 13.136s (loader) + 2.803s (kernel) + 12.488s (userspace) = 35.704s

graphical.target reached after 12.408s in userspace
```

As you can see in the output above, it took about 35 seconds for my system to reach the screen where I could enter my password. I am using the Dell XPS Ubuntu edition. It uses SSD storage, and despite that it takes this much time to start.
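
If all you want is the grand total from that summary, a short pipeline can pull it out. This is only a sketch: the summary line is hard-coded below (taken from the output above) so it runs even on systems without systemd.

```shell
# A sketch only: extract the grand total from a systemd-analyze summary line.
# The sample line is hard-coded here so the snippet runs without systemd.
summary="Startup finished in 7.275s (firmware) + 13.136s (loader) + 2.803s (kernel) + 12.488s (userspace) = 35.704s"
total=$(echo "$summary" | awk -F'= ' '{print $2}')
echo "Total boot time: $total"
```

On a real system you would feed it live data instead, e.g. `systemd-analyze | head -1 | awk -F'= ' '{print $2}'`.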

Not that impressive, is it? Why don't you share your system's boot time? Let's compare.

You can further break down the boot time per unit with the following command:

```
systemd-analyze blame
```

This will produce a long output, with all the services listed in descending order of the time they took.

```
7.347s plymouth-quit-wait.service
6.198s NetworkManager-wait-online.service
3.602s plymouth-start.service
3.271s plymouth-read-write.service
2.120s apparmor.service
1.503s [email protected]
1.213s motd-news.service
908ms snapd.service
861ms keyboard-setup.service
739ms fwupd.service
702ms bolt.service
672ms dev-nvme0n1p3.device
608ms [email protected]:intel_backlight.service
539ms snap-core-7270.mount
504ms snap-midori-451.mount
463ms snap-screencloud-1.mount
446ms snapd.seeded.service
440ms snap-gtk\x2dcommon\x2dthemes-1313.mount
420ms snap-core18-1066.mount
416ms snap-scrcpy-133.mount
412ms snap-gnome\x2dcharacters-296.mount
```
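
Since blame mixes seconds and milliseconds, comparing entries numerically takes a small normalization step. The following is a hedged sketch, not part of systemd itself, with one sample line hard-coded so it runs anywhere:

```shell
# A sketch only: normalize one blame entry to milliseconds so "s" and "ms"
# values can be compared. The sample line is hard-coded for portability.
line="6.198s NetworkManager-wait-online.service"
ms=$(echo "$line" | awk '{ t = $1
    if (t ~ /ms$/) { sub(/ms$/, "", t); print t + 0 }
    else           { sub(/s$/,  "", t); print t * 1000 } }')
echo "$ms ms: ${line#* }"
```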

#### Bonus Tip: Improving boot time

If you look at this output, you can see that both Network Manager and [plymouth][4] account for a big chunk of the boot time.

Plymouth is responsible for the boot splash screen you see before the login screen in Ubuntu and other distributions. Network Manager is responsible for the internet connection, and its wait-online service may be turned off to speed up boot time. Don't worry: once you log in, you'll have WiFi working normally.

```
sudo systemctl disable NetworkManager-wait-online.service
```

If you want to revert the change, you can use this command:

```
sudo systemctl enable NetworkManager-wait-online.service
```

Now, please don't go disabling various services on your own without knowing what they are used for. It may have dangerous consequences.

_**Now that you know how to check the boot time of your Linux system, why not share your system's boot time in the comment section?**_

--------------------------------------------------------------------------------

via: https://itsfoss.com/check-boot-time-linux/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://www.gnu.org/software/grub/
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/linux-boot-time.jpg?resize=800%2C450&ssl=1
[3]: https://en.wikipedia.org/wiki/Systemd
[4]: https://wiki.archlinux.org/index.php/Plymouth
[5]: https://itsfoss.com/how-to-fix-no-unity-no-launcher-no-dash-in-ubuntu-12-10-quick-tip/
@ -1,93 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to manipulate PDFs on Linux)
[#]: via: (https://www.networkworld.com/article/3430781/how-to-manipulate-pdfs-on-linux.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

How to manipulate PDFs on Linux
======
The pdftk command provides many options for working with PDFs, including merging pages, encrypting files, applying watermarks, compressing files, and even repairing PDFs -- easily and on the command line.
![Toshiyuki IMAI \(CC BY-SA 2.0\)][1]

While PDFs are generally regarded as fairly stable files, there's a lot you can do with them on both Linux and other systems. This includes merging, splitting, rotating, breaking into single pages, encrypting and decrypting, applying watermarks, compressing and uncompressing, and even repairing. The **pdftk** command does all this and more.

The name "pdftk" stands for "PDF tool kit," and the command is surprisingly easy to use and does a good job of manipulating PDFs. For example, to pull separate files into a single PDF file, you would use a command like this:

```
$ pdftk pg1.pdf pg2.pdf pg3.pdf pg4.pdf pg5.pdf cat output OneDoc.pdf
```

That OneDoc.pdf file will contain all five of the documents shown, and the command will run in a matter of seconds. Note that the **cat** option directs the files to be joined together and the **output** option specifies the name of the new file.

**[ Two-Minute Linux Tips: [Learn how to master a host of Linux commands in these 2-minute video tutorials][2] ]**

You can also pull selected pages from a PDF to create a separate PDF file. For example, if you wanted to create a new PDF with only pages 1, 2, 3, and 5 of the document created above, you could do this:

```
$ pdftk OneDoc.pdf cat 1-3 5 output 4pgs.pdf
```

If, on the other hand, you wanted pages 1, 3, 4, and 5, you might use this syntax instead:

```
$ pdftk OneDoc.pdf cat 1 3-end output 4pgs.pdf
```

You have the option of specifying all individual pages or using page ranges, as shown in the examples above.
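
To make the range syntax concrete, here is a small shell sketch (not pdftk itself) that expands a page spec such as `1-3 5` into the individual pages it covers:

```shell
# Hypothetical helper: expand a pdftk-style page spec into individual pages.
# Handles plain numbers and N-M ranges; the "end" keyword is not covered here.
spec="1-3 5"
pages=""
for part in $spec; do
    case "$part" in
        *-*) first=${part%-*}; last=${part#*-}
             pages="$pages $(seq "$first" "$last" | tr '\n' ' ')" ;;
        *)   pages="$pages $part" ;;
    esac
done
echo "Pages:" $pages
```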

This next command will create a collated document from one that contains the odd pages (1, 3, etc.) and one that contains the even pages (2, 4, etc.):

```
$ pdftk A=odd.pdf B=even.pdf shuffle A B output collated.pdf
```

Notice that the **shuffle** option makes this collation possible and dictates the order in which the documents are used. Note also: while the odd/even pages example might suggest otherwise, you are not restricted to using only two input files.

If you want to create an encrypted PDF that can only be opened by a recipient who knows the password, you could use a command like this one:

```
$ pdftk prep.pdf output report.pdf user_pw AsK4n0thingGeTn0thing
```

The options provide for 40-bit (**encrypt_40bit**) and 128-bit (**encrypt_128bit**) encryption. 128-bit encryption is used by default.

You can also break a PDF file into individual pages using the **burst** option:

```
$ pdftk allpgs.pdf burst
$ ls -ltr *.pdf | tail -5
-rw-rw-r-- 1 shs shs 22933 Aug 8 08:18 pg_0001.pdf
-rw-rw-r-- 1 shs shs 23773 Aug 8 08:18 pg_0002.pdf
-rw-rw-r-- 1 shs shs 23260 Aug 8 08:18 pg_0003.pdf
-rw-rw-r-- 1 shs shs 23435 Aug 8 08:18 pg_0004.pdf
-rw-rw-r-- 1 shs shs 23136 Aug 8 08:18 pg_0005.pdf
```
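
The fixed `pg_000N.pdf` names can then be post-processed however you like. As a hedged sketch, the loop below renames burst output to a friendlier scheme; dummy files stand in for real burst output so it can be tried without pdftk installed:

```shell
# A sketch only: rename pdftk burst output (pg_0001.pdf ...) to report-pageN.pdf.
# Dummy files are created here so the loop runs without pdftk.
mkdir -p burst_demo && cd burst_demo
touch pg_0001.pdf pg_0002.pdf pg_0003.pdf
for f in pg_*.pdf; do
    n=${f#pg_}; n=${n%.pdf}           # e.g. 0001
    n=$(echo "$n" | sed 's/^0*//')    # strip leading zeros (pages start at 1)
    mv "$f" "report-page$n.pdf"
done
ls
```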

The **pdftk** command makes pulling together, tearing apart, rebuilding and encrypting PDF files surprisingly easy. To learn more about its many options, check out the examples page from [PDF Labs][3].

**[ Also see: [Invaluable tips and tricks for troubleshooting Linux][4] ]**

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3430781/how-to-manipulate-pdfs-on-linux.html

作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/08/book-pages-100807709-large.jpg
[2]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
[3]: https://www.pdflabs.com/docs/pdftk-cli-examples/
[4]: https://www.networkworld.com/article/3242170/linux/invaluable-tips-and-tricks-for-troubleshooting-linux.html
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world
@ -1,202 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to measure the health of an open source community)
[#]: via: (https://opensource.com/article/19/8/measure-project)
[#]: author: (Jon Lawrence https://opensource.com/users/the3rdlaw)

How to measure the health of an open source community
======
It's complicated.
![metrics and data shown on a computer screen][1]

As a person who normally manages software development teams, over the years I've come to care about metrics quite a bit. Time after time, I've found myself leading teams using one project platform or another (Jira, GitLab, and Rally, for example) generating an awful lot of measurable data. From there, I've promptly invested significant amounts of time to pull useful metrics out of the platform-of-record and into a format where we could make sense of them, and then used those metrics to make better choices about many aspects of development.

Earlier this year, I had the good fortune of coming across a project at the [Linux Foundation][2] called [Community Health Analytics for Open Source Software][3], or CHAOSS. This project focuses on collecting and enriching metrics from a wide range of sources so that stakeholders in open source communities can measure the health of their projects.

### What is CHAOSS?

As I grew familiar with the project's underlying metrics and objectives, one question kept turning over in my head: What is a "healthy" open source project, and by whose definition?

What's considered healthy by someone in a particular role may not be viewed that way by someone in another role. It seemed there was an opportunity to step back from the granular data that CHAOSS collects and do a market segmentation exercise, focusing on what might be the most meaningful contextual questions for a given role, and which of the metrics CHAOSS collects might help answer those questions.

This exercise was made possible by the fact that the CHAOSS project creates and maintains a suite of open source applications and metric definitions, including:

  * A number of server-based applications for gathering, aggregating, and enriching metrics (such as Augur and GrimoireLab).
  * The open source versions of ElasticSearch, Kibana, and Logstash (ELK).
  * Identity services, data analysis services, and a wide range of integration libraries.

In one of my past programs, where half a dozen teams were working on projects of varying complexity, we found a neat tool which allowed us to create any kind of metric we wanted from a simple (or complex) JQL statement, and then develop calculations against and between those metrics. Before we knew it, we were pulling over 400 metrics from Jira alone, and more from manual sources.

By the end of the project, we decided that out of the 400-ish metrics, most of them didn't really matter when it came to making decisions _in our roles_. At the end of the day, there were only three that really mattered to us: "Defect Removal Efficiency," "Points completed vs. Points committed," and "Work-in-Progress per Developer." Those three metrics mattered most because they were promises we made to ourselves, to our clients, and to our team members, and were, therefore, the most meaningful.

Drawing on the lessons learned through that experience, and on the question "What is a healthy open source project?", I jumped into the CHAOSS community and started building a set of personas to offer a constructive approach to answering that question through a role-based lens.

CHAOSS is an open source project and we try to operate using democratic consensus. So, I decided that instead of stakeholders, I'd use the word _constituent_, because it aligns better with the responsibility we have as open source contributors to create a more symbiotic value chain.

While the exercise of creating this constituent model takes a particular goal-question-metric approach, there are many ways to segment. CHAOSS contributors have developed great models that segment by vectors, like project profiles (for example, individual, corporate, or coalition) and "Tolerance to Failure." Every model provides constructive influence when developing metric definitions for CHAOSS.

Based on all of this, I set out to build a model of who might care about CHAOSS metrics, and what questions each constituent might care about most in each of CHAOSS' four focus areas:

  * [Diversity and Inclusion][4]
  * [Evolution][5]
  * [Risk][6]
  * [Value][7]

Before we dive in, it's important to note that the CHAOSS project expressly leaves contextual judgments to teams implementing the metrics. What's "meaningful" and the answer to "What is healthy?" is expected to vary by team and by project. The CHAOSS software's ready-made dashboards focus on objective metrics as much as possible. In this article, we focus on project founders, project maintainers, and contributors.

### Project constituents

While this is by no means an exhaustive list of questions these constituents might feel are important, these choices felt like a good place to start. Each of the Goal-Question-Metric segments below is directly tied to metrics that the CHAOSS project is collecting and aggregating.

Now, on to Part 1 of the analysis!

#### Project founders

As a **project founder**, I care **most** about:

  * Is my project **useful to others?** Measured as a function of:

    * How many forks over time?
**Metric:** Repository forks.

    * How many contributors over time?
**Metric:** Contributor count.

    * Net quality of contributions.
**Metric:** Bugs filed over time.
**Metric:** Regressions over time.

    * Financial health of my project.
**Metric:** Donations/Revenue over time.
**Metric:** Expenses over time.

  * How **visible** is my project to others?

    * Does anyone know about my project? Do others think it's neat?
**Metric:** Social media mentions, shares, likes, and subscriptions.

    * Does anyone with influence know about my project?
**Metric:** Social reach of contributors.

    * What are people saying about the project in public spaces? Is it positive or negative?
**Metric:** Sentiment (keyword or NLP) analysis across social media channels.

  * How **viable** is my project?

    * Do we have enough maintainers? Is the number rising or falling over time?
**Metric:** Number of maintainers.

    * Do we have enough contributors? Is the number rising or falling over time?
**Metric:** Number of contributors.

    * How is velocity changing over time?
**Metric:** Percent change of code over time.
**Metric:** Time between pull request, code review, and merge.

  * How [**diverse & inclusive**][4] is my project?

    * Do we have a valid, public Code of Conduct (CoC)?
**Metric:** CoC repository file check.

    * Are events associated with my project actively inclusive?
**Metric:** Manual reporting on event ticketing policies and event inclusion activities.

    * Does our project do a good job of being accessible?
**Metric:** Validation of typed meeting minutes being posted.
**Metric:** Validation of closed captioning used during meetings.
**Metric:** Validation of color-blind-accessible materials in presentations and in project front-end designs.

  * How much [**value**][7] does my project represent?

    * How can I help organizations understand how much time and money using our project would save them (labor investment)?
**Metric:** Repo count of issues, commits, pull requests, and the estimated labor rate.

    * How can I understand the amount of downstream value my project creates and how vital (or not) it is to the wider community to maintain my project?
**Metric:** Repo count of how many other projects rely on my project.

    * How much opportunity is there for those contributing to my project to use what they learn working on it to land good jobs, and at what organizations (aka living wage)?
**Metric:** Count of organizations using or contributing to this library.
**Metric:** Averages of salaries for developers working with this kind of project.
**Metric:** Count of job postings with keywords that match this project.

### Project maintainers

As a **project maintainer,** I care **most** about:

  * Am I an **efficient** maintainer?
**Metric:** Time PRs wait before a code review.
**Metric:** Time between code review and subsequent PRs.
**Metric:** How many of my code reviews are approvals?
**Metric:** How many of my code reviews are rejections/rework requests?
**Metric:** Sentiment analysis of code review comments.

  * How do I get **more people** to help me maintain this thing?
**Metric:** Count of social reach of project contributors.

  * Is our **code quality** getting better or worse over time?
**Metric:** Count of how many regressions are being introduced over time.
**Metric:** Count of how many bugs are being introduced over time.
**Metric:** Time between bug filing, pull request, review, merge, and release.
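
Several of the metrics above are simple elapsed-time calculations. As a hedged sketch (the timestamps are invented sample data, and GNU `date -d` is assumed), the bug-to-merge time could be computed like this:

```shell
# Hypothetical example: hours between a bug being filed and the fix merging.
# Timestamps are made-up sample data; requires GNU date for the -d option.
filed="2019-08-01T09:00:00Z"
merged="2019-08-03T15:30:00Z"
t1=$(date -u -d "$filed" +%s)
t2=$(date -u -d "$merged" +%s)
hours=$(( (t2 - t1) / 3600 ))
echo "Bug-to-merge time: $hours hours"
```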

### Project developers and contributors

As a **project developer or contributor**, I care most about:

  * What things of value can I gain from contributing to this project, and how long might it take to realize that value?
**Metric:** Downstream value.
**Metric:** Time between commits, code reviews, and merges.

  * Are there good prospects for using what I learn by contributing to increase my job opportunities?
**Metric:** Living wage.

  * How popular is this project?
**Metric:** Counts of social media posts, shares, and favorites.

  * Do community influencers know about my project?
**Metric:** Social reach of founders, maintainers, and contributors.

By creating this list, we've just begun to put meat on the contextual bones of CHAOSS, and with the first release of metrics in the project this summer, I can't wait to see what other great ideas the broader open source community may have to contribute and what else we can all learn (and measure!) from those contributions.

### Other roles

There is much more to learn about goal-question-metric sets for other roles (such as foundations, corporate open source program offices, business risk and legal teams, human resources, and others) as well as end users, who have a distinctly different set of things they care about when it comes to open source.

If you're an open source contributor or constituent, we invite you to [come check out the project][8] and get engaged in the community!

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/measure-project

作者:[Jon Lawrence][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/the3rdlaw
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_data_dashboard_system_computer_analytics.png?itok=oxAeIEI- (metrics and data shown on a computer screen)
[2]: https://www.linuxfoundation.org/
[3]: https://chaoss.community/
[4]: https://github.com/chaoss/wg-diversity-inclusion
[5]: https://github.com/chaoss/wg-evolution
[6]: https://github.com/chaoss/wg-risk
[7]: https://github.com/chaoss/wg-value
[8]: https://github.com/chaoss/
@ -1,5 +1,5 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (0x996)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -1,91 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Get Linux Kernel 5.0 in Ubuntu 18.04 LTS)
[#]: via: (https://itsfoss.com/ubuntu-hwe-kernel/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

How to Get Linux Kernel 5.0 in Ubuntu 18.04 LTS
======

_**The recently released Ubuntu 18.04.3 includes Linux kernel 5.0 among several new features and improvements, but you won't get it by default. This tutorial demonstrates how to get Linux kernel 5.0 in Ubuntu 18.04 LTS.**_

[Subscribe to It's FOSS YouTube Channel for More Videos][1]

The [third point release of Ubuntu 18.04 is here][2] and it brings new stable versions of GNOME components, livepatch desktop integration and kernel 5.0.

But wait! What is a point release? Let me explain it to you first.

### Ubuntu LTS point releases

Ubuntu 18.04 was released in April 2018 and since it's a long term support (LTS) release, it will be supported till 2023. There have been a number of bug fixes, security updates and software upgrades since then. If you downloaded Ubuntu 18.04 today, you'd have to install all those updates as one of the first [things to do after installing Ubuntu 18.04][3].

That, of course, is not an ideal situation. This is why Ubuntu provides these "point releases". A point release consists of all the feature and security updates, along with the bug fixes, that have been added since the initial release of the LTS version. If you download Ubuntu today, you'll get Ubuntu 18.04.3 instead of Ubuntu 18.04. This saves the trouble of downloading and installing hundreds of updates on a newly installed Ubuntu system.

Okay! So now you know the concept of a point release. How do you upgrade to these point releases? The answer is simple. Just [update your Ubuntu system][4] like you normally do and you'll already be on the latest point release.

You can [check your Ubuntu version][5] to see which point release you are using. I did a check, and since I was on Ubuntu 18.04.3, I assumed that I would have gotten Linux kernel 5.0 as well. But when I [checked the Linux kernel version][6], it was still the base kernel 4.15.

![Ubuntu Version And Linux Kernel Version Check][7]

Why is that? If Ubuntu 18.04.3 has Linux kernel 5.0, then why does it still have Linux kernel 4.15? It's because you have to manually opt in to the newer kernel in Ubuntu LTS by installing the LTS Enablement Stack, popularly known as HWE.

### Get Linux Kernel 5.0 in Ubuntu 18.04 with the Hardware Enablement Stack

By default, Ubuntu LTS releases stay on the same Linux kernel they were released with. The [hardware enablement stack][9] (HWE) provides newer kernel and xorg support for existing Ubuntu LTS releases.

Things have changed recently. If you downloaded the Ubuntu 18.04.2 or newer desktop version, HWE is enabled for you and you'll get the new kernel along with the regular updates by default.

For server versions, and for people who downloaded 18.04 or 18.04.1, you'll have to install the HWE kernel. Once you do that, you'll get the newer kernel releases provided by Ubuntu for the LTS version.

To install the HWE kernel in Ubuntu desktop, along with newer xorg, you can use this command in the terminal:

```
sudo apt install --install-recommends linux-generic-hwe-18.04 xserver-xorg-hwe-18.04
```

If you are using the Ubuntu Server edition, you won't have the xorg option. So just install the HWE kernel on Ubuntu Server:

```
sudo apt-get install --install-recommends linux-generic-hwe-18.04
```

Once you finish installing the HWE kernel, restart your system. Now you should have the newer Linux kernel.
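
To confirm the reboot actually landed you on the new kernel, check the running release. This is a small sketch; the `5.*` pattern assumes the HWE kernel shipped with 18.04.3 is from the 5.0 series:

```shell
# Sanity check after rebooting into the HWE kernel.
running=$(uname -r)
echo "Running kernel: $running"
case "$running" in
    5.*) echo "A 5.x kernel is active." ;;
    *)   echo "Still on another kernel series." ;;
esac
```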

**Are you getting kernel 5.0 in Ubuntu 18.04?**

Do note that HWE is enabled for people who downloaded and installed Ubuntu 18.04.2 or later. So these users will get kernel 5.0 without any trouble.

Should you go to the trouble of enabling the HWE kernel in Ubuntu? It's entirely up to you. [Linux Kernel 5.0][10] has several performance improvements and better hardware support, so you'll get the benefit of the new kernel.

What do you think? Will you install kernel 5.0 or would you rather stay on kernel 4.15?

--------------------------------------------------------------------------------

via: https://itsfoss.com/ubuntu-hwe-kernel/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://www.youtube.com/channel/UCEU9D6KIShdLeTRyH3IdSvw
[2]: https://ubuntu.com/blog/enhanced-livepatch-desktop-integration-available-with-ubuntu-18-04-3-lts
[3]: https://itsfoss.com/things-to-do-after-installing-ubuntu-18-04/
[4]: https://itsfoss.com/update-ubuntu/
[5]: https://itsfoss.com/how-to-know-ubuntu-unity-version/
[6]: https://itsfoss.com/find-which-kernel-version-is-running-in-ubuntu/
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/ubuntu-version-and-kernel-version-check.png?resize=800%2C300&ssl=1
[8]: https://itsfoss.com/canonical-announces-ubuntu-edge/
[9]: https://wiki.ubuntu.com/Kernel/LTSEnablementStack
[10]: https://itsfoss.com/linux-kernel-5/
@ -1,5 +1,5 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (LazyWolfLin)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -390,7 +390,7 @@ via: https://theartofmachinery.com/2019/08/12/c_const_isnt_for_performance.html

作者:[Simon Arneaud][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[LazyWolfLin](https://github.com/LazyWolfLin)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

@ -0,0 +1,143 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A comprehensive guide to agile project management)
[#]: via: (https://opensource.com/article/19/8/guide-agile-project-management)
[#]: author: (Matt ShealyDaniel OhLeigh Griffin https://opensource.com/users/mshealyhttps://opensource.com/users/jkriegerhttps://opensource.com/users/daniel-ohhttps://opensource.com/users/mtakanehttps://opensource.com/users/ahmadnassrihttps://opensource.com/users/agagancarczykhttps://opensource.com/users/lgriffin)

A comprehensive guide to agile project management
======
Agile project management's 12 guiding principles can help your team move faster together.
![A diagram of a branching process][1]

With a focus on continuous improvement, agile project management upends the traditional linear way of developing products and services. Increasingly, organizations are adopting agile project management because it utilizes a series of shorter development cycles to deliver features and improve continually. This management style allows for rapid development, continuous integration (CI), and continuous delivery (CD).

Agile project management allows cross-functional teams to work on chunks of projects, solving problems and moving projects forward in shorter phases. This enables them to iterate more quickly and deliver more frequent updates.

The agile methodology delivers quality improvements incrementally instead of waiting to distribute finished projects. And agile project management works. For example, PwC reports that [agile projects are 28% more successful][2] than projects run with traditional methodologies.

### Adopting agile methodology

When the agile methodology was introduced, it met skepticism and resistance. With today's rapid pace of innovation, it has become an accepted standard. The Project Management Institute's annual Pulse of the Profession survey finds that [71% of organizations report using an agile approach][3] in project management, whether as a fully agile project or a hybrid model.

"We cannot afford anymore to have projects taking two to five years to deliver," says Michelin's Phillippe Husser in the Pulse of the Profession report. "During this time, the initial requirements have changed."

### The 12 agile principles

While agile project management is very different from traditional project management, it doesn't have to be daunting to make the switch. Agile project management relies on [12 guiding principles][4] that can help your team move faster together.

#### 1\. Customer-first
|
||||
|
||||
One of the first principles for groups using agile management is that the "highest priority is to [satisfy the customer through early and continuous delivery][5]." This means that above all else, the team works to solve problems for the customer, not to build features and tools that are cool but hard to use. This strategy encourages all product decisions to be data-driven from a customer's perspective. It may mean that many team members regularly interact with end users (including with interviews) or have access to data that shows usage.
|
||||
|
||||
The agile methodology drastically reduces the time from project initiation to customer feedback. As customers' needs or how they interact with the product change, the team is flexible in responding to these needs to build customer-focused technology. This process creates a feedback loop for continuous improvement.
|
||||
|
||||
#### 2\. The only thing constant is change
|
||||
|
||||
While it may seem radical to change requirements, the agile methodology allows for changing requirements, even late into development.
|
||||
|
||||
This principle is closely tied to the first. If the end goal of the team is to serve the end user best, the team must be flexible and able to make changes based on customers' behaviors and needs. Flexibility also allows an organization to capitalize on an emerging technology or new trends and gain competitive advantage.
|
||||
|
||||
#### 3\. Deliver faster
|
||||
|
||||
Instead of annual or semi-annual product updates and patches, agile encourages regular updates when a need is identified or to improve operations. Waiting to do significant releases can bloat the technology and create unforeseen issues, no matter how much it has been tested.
|
||||
|
||||
Agile encourages the team to deliver working software frequently within a short time frame. Smaller, more frequent releases allow for regular updates to the technology without huge risk. If something goes out and doesn't work, it requires a slight pullback. The agile methodology also encourages automation to help push out updates continuously.
|
||||
|
||||
#### 4\. Build cross-functional teams
|
||||
|
||||
Agile methodology believes that the most well-thought-out, usable, and sellable technologies require cross-functional teams working towards a shared goal. DevOps (development and operations) and DevSecOps (development, security, and operations) teams work in concert instead of in a linear progression. This allows the business team, the developers, QA, and other essential teams to work together from start to finish.
|
||||
|
||||
This change in perspective means all teams have skin in the game and makes it harder to push errors or low-quality tech onto the next team. Rather than making excuses, everyone works together on the same goals.
|
||||
|
||||
For cross-functional teams to work, it takes involvement from the top. A third of projects [fail because of a lack of participation from senior management][6].
|
||||
|
||||
#### 5\. Encourage independent work
|
||||
|
||||
Another tenet of agile management is that individuals can stretch their job and learn new skills while working on projects. Because the teams are cross-functional, individuals are exposed to different abilities, roles, and styles. This exposure creates better-rounded workers who can attack problems from different perspectives.
|
||||
|
||||
Agile teams are typically self-directed. It takes the right team with a focused goal.
|
||||
|
||||
Agile allows managers to (per the Agile Manifesto) "build projects around motivated individuals. Give them the environment and support they need and trust them to get the job done."
|
||||
|
||||
#### 6\. Meet in person
|
||||
|
||||
While this principle may seem strange in the era of increased remote workers, agile management does encourage in-person meetings. This is because many managers believe the most efficient and effective method of conveying information is a face-to-face conversation.
|
||||
|
||||
For non-remote teams, this can mean having different team members sitting close together or even creating war rooms of different groups to communicate more effectively. Co-location means faster interactions. Instead of waiting for an email or call to be returned, talk to each other.
|
||||
|
||||
This goal can still be accomplished for remote teams. By using tools like Slack or Zoom, you can simulate in-person meetings and find the right answers quickly.
|
||||
|
||||
#### 7\. Go live
|
||||
|
||||
Organizations may have several ways to document the plan and measure success against goals. However, one of the best ways to measure a team's success in agile is via working software. Agile teams don't look at future forecasts to see how they are doing. Instead, live code is the primary measure of progress.
|
||||
|
||||
Planning and documentation are great, but without software that does the job, everything else is irrelevant.
|
||||
|
||||
#### 8\. Sustainable development
|
||||
|
||||
While agile development encourages fast releases, it is still vital that the team makes sustainable and scalable code. Because the first principle is to serve the customer, the team must think about creating technology and tools that can be used for the long haul.
|
||||
|
||||
The team should also be managed in a way that supports individuals. While long hours may be required for a short time, maintaining overall work-life balance is essential to avoid burnout.
|
||||
|
||||
#### 9\. Technical excellence
|
||||
|
||||
Agile methodology also believes that every member of the team is responsible for continuous attention to technical excellence. Even those without technical ability should QA the work and ensure it is being built in a simple and accessible way. While bells and whistles may be nice, an agile methodology believes that good design enhances agility.
|
||||
|
||||
Additionally, code should improve with each iteration. Everyone is responsible for providing clear code or instructions throughout the process—not just at the end.
|
||||
|
||||
#### 10\. Simplify
|
||||
|
||||
Agile teams believe that simplicity is essential. There's a saying in agile circles: "maximize the amount of work not done." Eliminate and automate anything you can, and build tools that are straightforward for the end user.
|
||||
|
||||
#### 11\. Let teams self-organize
|
||||
|
||||
"The best architectures, requirements, and designs emerge from self-organizing teams," says the Agile Manifesto. While management is needed for oversight, the best agile teams figure out what needs to be done—and how it gets done—themselves.
|
||||
|
||||
#### 12\. Take time to reflect
|
||||
|
||||
At regular intervals, the best teams reflect on how to become more effective, then adjust accordingly.
|
||||
|
||||
Agile teams are introspective and evaluate their efficiency. When they discover a better way, they evolve.
|
||||
|
||||
### Agile evolves with automation
|
||||
|
||||
Agile management has many benefits for the team and the end user. While the basic principles (like the nature of agile) have been established, the strategy is always evolving. Agile project management is evolving now by leveraging different types of automation.
|
||||
|
||||
For example, IT teams are leveraging [IT process automation to manage repetitive tasks][7] that used to take significant human resources. This allows teams to work more efficiently and focus on the bigger picture rather than monitoring, managing, and maintaining the software, hardware, infrastructure, and cloud services.
|
||||
|
||||
The more tasks that can be handled efficiently by your process automation rules, the quicker you will be able to iterate, test, and improve.
|
||||
|
||||
### Getting started
|
||||
|
||||
Overall, agile project management presents many benefits. It provides a faster way for teams to deliver a better product with fewer bugs. It can encourage diverse teams to work together and learn from each other. It fosters better team communication, both in-person and remotely. And it can ultimately create a better experience for the end user.
|
||||
|
||||
Agile, however, has some drawbacks. If a team is still exploring what technology or solutions to build or doesn't have a firm grasp of the target customer, agile may not be the best methodology. Agile may also have too many requirements for very small teams and may be too flexible for extremely large teams.
|
||||
|
||||
Always complete due diligence to identify which management style is best for your team, and consider combining agile with other methodologies to create the best structure for you.
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/8/guide-agile-project-management
|
||||
|
||||
作者:[Matt ShealyDaniel OhLeigh Griffin][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/mshealyhttps://opensource.com/users/jkriegerhttps://opensource.com/users/daniel-ohhttps://opensource.com/users/mtakanehttps://opensource.com/users/ahmadnassrihttps://opensource.com/users/agagancarczykhttps://opensource.com/users/lgriffin
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/freesoftwareway_law3.png?itok=wyze_0fV (A diagram of a branching process)
|
||||
[2]: https://www.pwc.com/gx/en/actuarial-insurance-services/assets/agile-project-delivery-confidence.pdf
|
||||
[3]: https://www.pmi.org/-/media/pmi/documents/public/pdf/learning/thought-leadership/pulse/pulse-of-the-profession-2017.pdf
|
||||
[4]: http://agilemanifesto.org/principles.html
|
||||
[5]: https://heleo.com/ericries-5-reasons-continuously-update-product/5110/
|
||||
[6]: https://www.business2community.com/strategy/project-management-statistics-45-stats-you-cant-ignore-02168819
|
||||
[7]: https://www.atera.com/blog/it-automation/
|
@ -0,0 +1,245 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Apache Hive vs. Apache HBase: Which is the query performance champion?)
|
||||
[#]: via: (https://opensource.com/article/19/8/apache-hive-vs-apache-hbase)
|
||||
[#]: author: (Alex Bekker https://opensource.com/users/egor14https://opensource.com/users/sachinpb)
|
||||
|
||||
Apache Hive vs. Apache HBase: Which is the query performance champion?
|
||||
======
|
||||
Let's look closely at Apache Hive and Apache HBase to understand
|
||||
which one can cope better with query performance.
|
||||
![computer screen ][1]
|
||||
|
||||
It's super easy to get lost in the world of big data technologies. There are so many of them that it seems a day never passes without the advent of a new one. Still, such fast development is only half the trouble. The real problem is that it's difficult to understand the functionality and the intended use of the existing technologies.
|
||||
|
||||
To find out what technology suits their needs, IT managers often contrast them. We've also conducted an academic study to make a clear distinction between Apache Hive and Apache HBase—two important technologies that are frequently used in [Hadoop implementation projects][2].
|
||||
|
||||
### Data model comparison
|
||||
|
||||
#### Apache Hive's data model
|
||||
|
||||
To understand Apache Hive's data model, you should get familiar with its three main components: a table, a partition, and a bucket.
|
||||
|
||||
Hive's **table** doesn't differ a lot from a relational database table (the main difference is that there are no relations between the tables). Hive's tables can be managed or external. To understand the difference between these two types, let's look at the _load data_ and _drop a table_ operations. When you load data into a **managed table**, you actually move the data from Hadoop Distributed File System's (HDFS) inner data structures into the Hive directory (which is also in HDFS). And when you drop such a table, you delete the data it contains from the directory. In the case of **external tables**, Hive doesn't load the data into the Hive directory but creates a "ghost-table" that indicates where actual data is physically stored in HDFS. So, when you drop an external table, the data is not affected.
|
||||
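The difference in drop semantics can be sketched with a toy Python model (this is illustrative only, not Hive's API; the table names and helper are hypothetical): dropping a managed table deletes its data, while dropping an external table removes only the metadata pointer.

```python
# Toy model of managed vs. external table drops (not Hive's actual API).
warehouse = {"sales": ["row1", "row2"]}       # data owned by the Hive directory
external_data = {"logs": ["lineA", "lineB"]}  # data that lives elsewhere in HDFS
metadata = {"sales": "managed", "logs": "external"}

def drop_table(name):
    kind = metadata.pop(name)
    if kind == "managed":
        # The data is deleted along with the managed table.
        warehouse.pop(name)
    # For an external table, only the "ghost-table" metadata is removed;
    # the underlying HDFS data is untouched.

drop_table("sales")   # managed: data gone
drop_table("logs")    # external: data survives
```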
|
||||
Both managed and external tables can be further broken down to **partitions**. A partition represents the rows of the table grouped together based on a **partition key**. Each partition is stored as a separate folder in the Hive directory. For instance, the table below can be partitioned based on a country, and the rows for each country will be stored together. Of course, this example is simplified. In real life, you'll deal with more than three partitions and much more than four rows in each, and partitioning will help you significantly reduce your partition key query execution time.
|
||||
|
||||
**Customer ID** | **Country** | **State/Province** | **City** | **Gender** | **Family status** | …
---|---|---|---|---|---|---
00001 | US | Nebraska | Beatrice | F | Single | …
00002 | Canada | Ontario | Toronto | F | Married | …
00003 | Brasil | Para | Belem | M | Married | …
00004 | Canada | Ontario | Toronto | M | Married | …
00005 | US | Nebraska | Aurora | M | Single | …
00006 | US | Arizona | Phoenix | F | Single | …
00007 | US | Texas | Austin | F | Married | …
… | … | … | … | … | … | …
|
||||
|
||||
You can break your data further into **buckets**, which are even easier to manage and enable faster query execution. Let's take the partition with the US data from our previous example and cluster it into buckets based on the Customer ID column. When you specify the number of buckets, Hive applies a hash function to the chosen column, which assigns a hash value to each row in the partition and then "packs" the rows into a certain number of buckets. So, if we have 10 million Customer IDs in the partition and specify the number of buckets as 50, each bucket will contain about 200,000 rows. As a result, if you need to find the data about a particular customer, Hive will directly go to the relevant bucket to find the info.
|
||||
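The bucketing arithmetic can be sketched in a few lines of Python (illustrative only — `zlib.crc32` stands in for Hive's internal hash function, and the IDs are made up):

```python
import zlib

def bucket_for(customer_id, num_buckets):
    """Map a bucketing-column value to one of `num_buckets` buckets."""
    # A stable hash of the column value, taken modulo the bucket count,
    # mirrors what Hive does with a CLUSTERED BY column.
    return zlib.crc32(customer_id.encode()) % num_buckets

buckets = {}
for cid in ("00001", "00002", "00003", "00004", "00005", "00006"):
    buckets.setdefault(bucket_for(cid, 4), []).append(cid)

# A point lookup only has to scan the one bucket the hash points to,
# not the whole partition.
target = bucket_for("00003", 4)
assert "00003" in buckets[target]
```

With 10 million IDs and 50 buckets, this same modulo spreads roughly 200,000 rows into each bucket.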
|
||||
#### Apache HBase's data model
|
||||
|
||||
HBase also stores data in **tables**. The cells in an HBase table are organized by **row keys** and **column families**. Each column family has a set of storage properties (for example, row key encryption and data compression rules). In addition, there are **column qualifiers** to ease data management. Neither row keys nor column qualifiers have a data type assigned (they are always treated as bytes).
|
||||
|
||||
**Row key: Customer ID** | **Geography: Country** | **Geography: State** | **Geography: City** | **Demographics: Gender** | **Demographics: Family status** | …
---|---|---|---|---|---|---
00001 | US | Texas | Austin | F | Single | …
00002 | Canada | Ontario | Toronto | F | Married | …
00003 | Brasil | Para | Belem | M | Married | …
00004 | Canada | Ontario | Toronto | M | Married | …
00005 | US | Arizona | Phoenix | M | Single | …
00006 | US | Nebraska | Aurora | F | Single | …
00007 | US | Nebraska | Beatrice | F | Married | …
… | … | … | … | … | … | …
|
||||
|
||||
Every **cell** has a timestamp or, in other words, bears the mark of when it was created. This info is crucial during read operations, as it allows identifying the most recent (and therefore most up-to-date) data versions. You can specify a timestamp during a write operation; otherwise, HBase gives the cell the current timestamp automatically.
|
||||
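A minimal Python sketch of versioned cells makes the read rule concrete (this is a toy model, not HBase's API):

```python
import time

class Cell:
    """Toy model of an HBase cell: every write keeps a (timestamp, value) version."""
    def __init__(self):
        self.versions = []  # list of (timestamp, value) pairs

    def put(self, value, timestamp=None):
        # Like HBase, fall back to the current time if the writer
        # doesn't supply a timestamp.
        ts = timestamp if timestamp is not None else time.time()
        self.versions.append((ts, value))

    def get(self):
        # Reads return the version with the highest timestamp.
        return max(self.versions)[1]

cell = Cell()
cell.put("Toronto", timestamp=1)
cell.put("Ottawa", timestamp=2)  # newer write wins on read
```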
|
||||
Data in a table is **lexicographically sorted based on row keys**, and to store closely related data together, a developer needs to design a good algorithm for row key composition.
|
||||
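Because the sort is over raw bytes, row key design matters. A small sketch shows why zero-padding numeric IDs (as in the Customer ID keys above) keeps related rows adjacent:

```python
# HBase sorts rows by the raw bytes of their keys, not numerically.
raw_keys = [b"9", b"10", b"2"]
padded_keys = [b"00009", b"00010", b"00002"]

# Unpadded numeric keys end up in lexicographic, not numeric, order:
assert sorted(raw_keys) == [b"10", b"2", b"9"]

# Fixed-width, zero-padded keys sort in numeric order, so a scan over
# a Customer ID range touches one contiguous run of rows:
assert sorted(padded_keys) == [b"00002", b"00009", b"00010"]
```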
|
||||
As for **partitioning**, HBase does it automatically based on the row keys. Still, you can manage the process by changing the start and end row keys for each partition.
|
||||
|
||||
#### Key takeaways on data models
|
||||
|
||||
1. Both Hive and HBase are capable of organizing data in a way to enable quick access to the required data and reduce query execution time (though their approach to partitioning is different).
|
||||
2. Both Hive and HBase act as data management agents. When somebody says that Hive or HBase stores data, it really means the data is stored in a data store (usually in HDFS). This means the success of your Hadoop endeavor goes beyond either/or technology choices and strongly depends on other [important factors][3], such as calculating the required cluster size correctly and integrating all the architectural components seamlessly.
|
||||
|
||||
|
||||
|
||||
### Query performance
|
||||
|
||||
#### Hive as an analytical query engine
|
||||
|
||||
Hive is specifically designed to enable data analytics. To successfully perform this task, it uses its dedicated **Hive Query Language** (HiveQL), which is very similar to analytics-tuned SQL.
|
||||
|
||||
Initially, Hive converted HiveQL queries into Hadoop MapReduce jobs, simplifying the lives of developers who could bypass more complicated MapReduce code. Running queries in Hive usually took some time, since Hive scanned all the available data sets, if not specified otherwise. It was possible to limit the volume of scanned data by specifying the partitions and buckets that Hive had to address. Anyway, that was batch processing. Nowadays, Apache Hive is also able to convert queries into Apache Tez or Apache Spark jobs.
|
||||
|
||||
The earliest versions of Hive did not provide **record-level updates, inserts, and deletes**, which was one of the most serious limitations in Hive. This functionality appeared only in version 0.14.0 (though with some [constraints][4]: for example, your table's file format should be [ORC][5]).
|
||||
|
||||
#### HBase as a data manager that supports queries
|
||||
|
||||
Being a data manager, HBase alone is not intended for analytical queries. It doesn't have a dedicated query language. To run CRUD (create, read, update, and delete) and search queries, it has a JRuby-based shell, which offers **simple data manipulation possibilities**, such as Get, Put, and Scan. For the first two operations, you should specify the row key, while scans run over a whole range of rows.
|
||||
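The semantics of these three operations can be sketched with a toy Python model (the `put`/`get`/`scan` helpers here are hypothetical stand-ins, not the HBase shell or client API):

```python
# Toy model of HBase-style operations: Get and Put address a single
# row key; Scan walks a sorted key range.
table = {}

def put(row_key, column, value):
    table.setdefault(row_key, {})[column] = value

def get(row_key):
    return table.get(row_key, {})

def scan(start_key, stop_key):
    # Rows come back in lexicographic key order, with the stop key
    # excluded, mirroring an HBase scan.
    return [(k, table[k]) for k in sorted(table) if start_key <= k < stop_key]

put("00001", "Geography:Country", "US")
put("00002", "Geography:Country", "Canada")
put("00003", "Geography:Country", "Brasil")
```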
|
||||
HBase's primary purpose is to offer a random data input/output for HDFS. At the same time, one can surely say that HBase contributes to fast analytics by enabling consistent reads. This is possible due to the fact that HBase writes data to only one server, which doesn't require comparing multiple data versions from different nodes. Besides, HBase **handles append operations very well.** It also enables **updates and deletes**, but copes with these two not so perfectly.
|
||||
|
||||
#### Indexing
|
||||
|
||||
In Hive 3.0.0, indexing was removed. Prior to that, it was possible to create indexes on columns, though the advantage of faster queries had to be weighed against the cost of indexing during write operations and the extra space for storing the indexes. Anyway, Hive's data model, with its ability to group data into buckets (which can be created for any column, not only for the keyed one), offers an approach similar to the one that indexing provides.
|
||||
|
||||
HBase enables multi-layered indexing. But again, you have to think about the trade-off between gaining read query response vs. slower writes and the costs associated with storing indexes.
|
||||
|
||||
#### Key takeaways on query performance
|
||||
|
||||
1. Running analytical queries is exactly the task for Hive. HBase's initial task is to ingest data as well as run CRUD and search queries.
|
||||
2. While HBase handles row-level updates, deletes, and inserts well, the Hive community is working to eliminate this stumbling block.
|
||||
|
||||
|
||||
|
||||
### To sum it up
|
||||
|
||||
There are many similarities between Hive and HBase. Both are data management agents, and both are strongly interconnected with HDFS. The main difference between these two is that HBase is tailored to perform CRUD and search queries while Hive does analytical ones. These two technologies complement each other and are frequently used together in Hadoop consulting projects so businesses can make the most of both applications' strengths.
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/8/apache-hive-vs-apache-hbase
|
||||
|
||||
作者:[Alex Bekker][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/egor14https://opensource.com/users/sachinpb
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/features_solutions_command_data.png?itok=4_VQN3RK (computer screen )
|
||||
[2]: https://www.scnsoft.com/services/big-data/hadoop
|
||||
[3]: https://www.scnsoft.com/blog/hadoop-implementation-milestones
|
||||
[4]: http://community.cloudera.com/t5/Batch-SQL-Apache-Hive/Update-and-Delete-are-not-working-in-Hive/td-p/57358/page/3
|
||||
[5]: https://orc.apache.org/
|
@ -0,0 +1,238 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Building a non-breaking breakpoint for Python debugging)
|
||||
[#]: via: (https://opensource.com/article/19/8/debug-python)
|
||||
[#]: author: (Liran Haimovitch https://opensource.com/users/liranhaimovitch)
|
||||
|
||||
Building a non-breaking breakpoint for Python debugging
|
||||
======
|
||||
Have you ever wondered how to speed up a debugger? Here are some lessons
|
||||
learned while building one for Python.
|
||||
![Real python in the graphic jungle][1]
|
||||
|
||||
This is the story of how our team at [Rookout][2] built non-breaking breakpoints for Python and some of the lessons we learned along the way. I'll be presenting all about the nuts and bolts of debugging in Python at [PyBay 2019][3] in San Francisco this month. Let's dig in.
|
||||
|
||||
### The heart of Python debugging: sys.settrace
|
||||
|
||||
There are many Python debuggers out there. Some of the more popular include:
|
||||
|
||||
* **pdb**, part of the Python standard library
|
||||
* **PyDev**, the debugger behind the Eclipse and PyCharm IDEs
|
||||
* **ipdb**, the IPython debugger
|
||||
|
||||
|
||||
|
||||
Despite the range of choices, almost every Python debugger is based on just one function: **sys.settrace**. And let me tell you, **[sys.settrace][4]** might just be the most complex function in the Python standard library.
|
||||
|
||||
![set_trace Python 2 docs page][5]
|
||||
|
||||
In simpler terms, **settrace** registers a trace function for the interpreter, which may be called in any of the following cases:
|
||||
|
||||
* Function call
|
||||
* Line execution
|
||||
* Function return
|
||||
* Exception raised
|
||||
|
||||
|
||||
|
||||
A simple trace function might look like this:
|
||||
|
||||
|
||||
```
|
||||
def simple_tracer(frame, event, arg):
    co = frame.f_code
    func_name = co.co_name
    line_no = frame.f_lineno
    print("{e} {f} {l}".format(
        e=event, f=func_name, l=line_no))
    return simple_tracer
|
||||
```
|
||||
|
||||
When looking at this function, the first things that come to mind are its arguments and return values. The trace function arguments are:
|
||||
|
||||
* **frame** object, which is the full state of the interpreter at the point of the function's execution
|
||||
* **event** string, which can be **call**, **line**, **return**, or **exception**
|
||||
* **arg** object, which is optional and depends on the event type
|
||||
|
||||
|
||||
|
||||
The trace function returns itself because the interpreter keeps track of two kinds of trace functions:
|
||||
|
||||
* **Global trace function (per thread):** This trace function is set for the current thread by **sys.settrace** and is invoked whenever a new **frame** is created by the interpreter (essentially on every function call). While there's no documented way to set the trace function for a different thread, you can call **threading.settrace** to set the trace function for all newly created **threading** module threads.
|
||||
* **Local trace function (per frame):** This trace function is set by the interpreter to the value returned by the global trace function upon frame creation. There's no documented way to set the local trace function once the frame has been created.
|
||||
|
||||
|
||||
|
||||
This mechanism is designed to allow the debugger to have more granular control over which frames are traced to reduce performance impact.
|
||||
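This selectivity is the whole point: a global trace function that returns **None** for uninteresting frames avoids paying the line-event cost everywhere. A small runnable sketch (the function names here are made up for illustration):

```python
import sys

events = []

def global_tracer(frame, event, arg):
    # Invoked on every new frame (a "call" event). Request local,
    # line-level tracing only for the one function we care about;
    # returning None for everything else keeps the overhead down.
    if frame.f_code.co_name != "interesting":
        return None
    return local_tracer

def local_tracer(frame, event, arg):
    # Invoked per line/return/exception, but only inside "interesting".
    events.append((event, frame.f_lineno))
    return local_tracer

def interesting():
    x = 1
    return x

def boring():
    return 42

sys.settrace(global_tracer)
interesting()   # traced line by line
boring()        # skipped entirely by the global tracer
sys.settrace(None)
```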
|
||||
### Building our debugger in three easy steps (or so we thought)
|
||||
|
||||
With all that background, writing your own debugger using a custom trace function looks like a daunting task. Luckily, **pdb**, the standard Python debugger, is built on top of **Bdb**, a base class for building debuggers.
|
||||
|
||||
A naive breakpoints debugger based on **Bdb** might look like this:
|
||||
|
||||
|
||||
```
|
||||
import bdb
import inspect

class Debugger(bdb.Bdb):
    def __init__(self):
        bdb.Bdb.__init__(self)
        self.breakpoints = dict()
        self.set_trace()

    def set_breakpoint(self, filename, lineno, method):
        self.set_break(filename, lineno)
        try:
            self.breakpoints[(filename, lineno)].append(method)
        except KeyError:
            self.breakpoints[(filename, lineno)] = [method]

    def user_line(self, frame):
        if not self.break_here(frame):
            return

        # Get filename and lineno from frame
        (filename, lineno, _, _, _) = inspect.getframeinfo(frame)

        methods = self.breakpoints[(filename, lineno)]
        for method in methods:
            method(frame)
|
||||
```
|
||||
|
||||
All this does is:
|
||||
|
||||
1. Inherits from **Bdb** and writes a simple constructor initializing the base class and starting tracing.
|
||||
2. Adds a **set_breakpoint** method that uses **Bdb** to set the breakpoint and keeps track of our breakpoints.
|
||||
3. Overrides the **user_line** method that is called by **Bdb** on certain user lines. The function makes sure it is being called for a breakpoint, gets the source location, and invokes the registered breakpoints.
|
||||
|
||||
|
||||
|
||||
### How well did the simple Bdb debugger work?
|
||||
|
||||
Rookout is about bringing a debugger-like user experience to production-grade performance and use cases. So, how well did our naive breakpoint debugger perform?
|
||||
|
||||
To test it and measure the global performance overhead, we wrote two simple test methods and executed each of them 16 million times under multiple scenarios. Keep in mind that no breakpoint was executed in any of the cases.
|
||||
|
||||
|
||||
```
|
||||
def empty_method():
    pass

def simple_method():
    a = 1
    b = 2
    c = 3
    d = 4
    e = 5
    f = 6
    g = 7
    h = 8
    i = 9
    j = 10
|
||||
```
|
||||
|
||||
With the debugger attached, the tests take a shocking amount of time to complete. The bad results make it clear that our naive **Bdb** debugger is not yet production-ready.
|
||||
|
||||
![First Bdb debugger results][6]
|
||||
|
||||
### Optimizing the debugger
|
||||
|
||||
There are three main ways to reduce debugger overhead:
|
||||
|
||||
1. **Limit local tracing as much as possible:** Local tracing is very costly compared to global tracing due to the much larger number of events per line of code.
|
||||
2. **Optimize "call" events and return control to the interpreter faster:** The main work in **call** events is deciding whether or not to trace.
|
||||
3. **Optimize "line" events and return control to the interpreter faster:** The main work in **line** events is deciding whether or not we hit a breakpoint.
|
||||
|
||||
|
||||
|
||||
So we forked **Bdb**, reduced the feature set, simplified the code, optimized for hot code paths, and got impressive results. However, we were still not satisfied. So, we took another stab at it, migrated and optimized our code to **.pyx**, and compiled it using [Cython][7]. The final results (as you can see below) were still not good enough. So, we ended up diving into CPython's source code and realizing we could not make tracing fast enough for production use.

![Second Bdb debugger results][8]

### Rejecting Bdb in favor of bytecode manipulation

After our initial disappointment from the trial-and-error cycles of standard debugging methods, we decided to look into a less obvious option: bytecode manipulation.

The Python interpreter works in two main stages:

1. **Compiling Python source code into Python bytecode:** This unreadable (for humans) format is optimized for efficient execution and is often cached in those **.pyc** files we have all come to love.
2. **Iterating through the bytecode in the _interpreter loop_:** This executes one instruction at a time.
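Both stages are visible from ordinary Python code; a minimal illustration:

```python
# Stage 1: compile source text into a code object containing bytecode.
code = compile("x = 1 + 2", "<example>", "exec")

# Stage 2: hand the code object to the interpreter loop for execution.
namespace = {}
exec(code, namespace)

print(namespace["x"])
```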

This is the pattern we chose: use **bytecode manipulation** to set **non-breaking breakpoints** with no global overhead. This is done by finding the bytecode in memory that represents the source line we are interested in and inserting a function call just before the relevant instruction. This way, the interpreter does not have to do any extra work to support our breakpoints.

This approach is not magic. Here's a quick example.

We start with a very simple function:

```
def multiply(a, b):
    result = a * b
    return result
```

In documentation hidden in the **[inspect][9]** module (which has several useful utilities), we learn we can get the function's bytecode by accessing **multiply.func_code.co_code**:

```
'|\x00\x00|\x01\x00\x14}\x02\x00|\x02\x00S'
```

This unreadable string can be improved using the **[dis][10]** module in the Python standard library. By calling **dis.dis(multiply.func_code.co_code)**, we get:
```
  4           0 LOAD_FAST                0 (a)
              3 LOAD_FAST                1 (b)
              6 BINARY_MULTIPLY
              7 STORE_FAST               2 (result)

  5          10 LOAD_FAST                2 (result)
             13 RETURN_VALUE
```
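The listings above use Python 2 spellings (`func_code`, two-byte opcode arguments). If you follow along on Python 3, the code object is reached through `__code__`, and `dis` accepts the function directly. Note that the exact opcodes vary by interpreter version; newer releases fold `BINARY_MULTIPLY` into a generic `BINARY_OP`, for example.

```python
import dis

def multiply(a, b):
    result = a * b
    return result

raw = multiply.__code__.co_code  # Python 3 equivalent of func_code.co_code
print(type(raw))                 # the raw bytecode is a bytes object

dis.dis(multiply)                # human-readable disassembly

# dis.Bytecode gives programmatic access to the same instructions.
opnames = [instr.opname for instr in dis.Bytecode(multiply)]
```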
This gets us closer to understanding what happens behind the scenes of debugging but not to a straightforward solution. Unfortunately, Python does not offer a method for changing a function's bytecode from within the interpreter. You can overwrite the function object, but that's not good enough for the majority of real-world debugging scenarios. You have to go about it in a roundabout way using a native extension.
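The "overwrite the function object" workaround can be demonstrated in pure Python. It is a toy sketch only: it swaps the whole function's code rather than patching a single bytecode offset, which is exactly why it falls short for real non-breaking breakpoints.

```python
hits = []

def multiply(a, b):
    result = a * b
    return result

def traced_multiply(a, b):
    hits.append((a, b))  # stand-in for reporting a breakpoint hit
    result = a * b
    return result

# Swap the entire code object: every existing reference to multiply now
# runs the traced version. This is function-level patching, not the
# per-instruction insertion that non-breaking breakpoints require.
multiply.__code__ = traced_multiply.__code__

print(multiply(6, 7))
```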

### Conclusion

When building a new tool, you invariably end up learning a lot about how stuff works. It also makes you think out of the box and keep your mind open to unexpected solutions.

Working on non-breaking breakpoints for Rookout has taught me a lot about compilers, debuggers, server frameworks, concurrency models, and much, much more. If you are interested in learning more about bytecode manipulation, Google's open source **[cloud-debug-python][11]** has tools for editing bytecode.

* * *

_Liran Haimovitch will present "[Understanding Python’s Debugging Internals][12]" at [PyBay][3], which will be held August 17-18 in San Francisco. Use code [OpenSource35][13] for a discount when you purchase your ticket to let them know you found out about the event from our community._
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/debug-python

作者:[Liran Haimovitch][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/liranhaimovitch
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/python_jungle_lead.jpeg?itok=pFKKEvT- (Real python in the graphic jungle)
[2]: https://rookout.com/
[3]: https://pybay.com/
[4]: https://docs.python.org/3/library/sys.html#sys.settrace
[5]: https://opensource.com/sites/default/files/uploads/python2docs.png (set_trace Python 2 docs page)
[6]: https://opensource.com/sites/default/files/uploads/debuggerresults1.png (First Bdb debugger results)
[7]: https://cython.org/
[8]: https://opensource.com/sites/default/files/uploads/debuggerresults2.png (Second Bdb debugger results)
[9]: https://docs.python.org/2/library/inspect.html
[10]: https://docs.python.org/2/library/dis.html
[11]: https://github.com/GoogleCloudPlatform/cloud-debug-python
[12]: https://pybay.com/speaker/liran-haimovitch/
[13]: https://ti.to/sf-python/pybay2019/discount/OpenSource35
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (To equip tomorrow's cybersecurity experts, we'll need an open approach)
[#]: via: (https://opensource.com/open-organization/19/8/open-cybersecurity-education)
[#]: author: (rjoyce https://opensource.com/users/rjoycehttps://opensource.com/users/hypercoyotehttps://opensource.com/users/debbryant)

To equip tomorrow's cybersecurity experts, we'll need an open approach
======

An open approach to training the next generation of cybersecurity experts can fully equip them to combat a constantly shifting threat landscape.

![Locks and a fingerprint][1]

Today's world—marked by an increase of Internet-connected devices, digital assets, and information systems infrastructure—demands more cybersecurity professionals. Cybersecurity is the practice of defending these devices, assets, and systems against malicious cyberattacks from both internal and external entities. Often these cyberattacks are linked to cybercrimes, or crimes committed using a computer to generate profit or to affect the integrity, availability, and confidentiality of the data or system. In 2016, cybercrimes [cost the global economy more than $450 billion][2].

Developing a robust cybersecurity workforce is therefore essential for mitigating the effects of cybercrime on the global economy. The United States Bureau of Labor Statistics has predicted [a shortage of 1.8 million cybersecurity professionals by the year 2022][3]. The United States has already developed a working group, the National Initiative for Cybersecurity Education (NICE), to promote cybersecurity education. Educators play a critical role in helping promote cybersecurity as early as possible in academic organizations. And they should take an open approach to doing it.

It's critical for students to not only become acquainted with the advantages of open source software but also to develop strong skills working openly, since open source software is not only common in the IT industry in general, but is specifically necessary in the field of cybersecurity. With this approach, students can learn within the safety and guidance of the classroom while also naturally acquiring research and troubleshooting skills by facing challenges that are presented or arise during exercises.

In this article, we'll explain how experiencing these challenges in the classroom environment is imperative for preparing students for the industry and equipping them to face the unforgiving challenges that await them in the IT industry—especially in the rapidly evolving cybersecurity field.

### Developing an open approach to cybersecurity education

Open source software, open source communities, and open source principles have been pivotal in the adoption of computer automation that is so common today. For instance, most smart devices are running a version of the Linux kernel. In the cybersecurity field, it's common to find Linux at the heart of most operating systems that are running on security appliances. But going beyond the operating system, Ansible has taken the management scene by storm, allowing for simplified automation of management tasks that even professionals without programming or scripting experience can quickly grasp and begin to implement. In addition to the benefits of automation, a variety of open source applications provide seemingly limitless capabilities for computer users—such as the ability to create video, music, games, or graphic designs on par with proprietary software. Open source software has often been the creative spark that has enabled countless individuals to pursue goals that would have otherwise been unobtainable.

Open source has had the same democratizing effect for cybersecurity professionals. Like other open source projects, open source cybersecurity tools receive extensive community support, so they're often some of the most-used security tools in existence today. Such tools include Nmap, OpenVAS, OSSEC, Metasploit Framework, Wireshark, and the Kali Linux distribution, to name a few. These open source tools are an invaluable asset for educators, as they provide an opportunity for students to use the same cybersecurity tools currently being used in industry—but within a safe learning environment, a factor that is critical for student growth in the field.

Open source software has often been the creative spark that has enabled countless individuals to pursue goals that would have otherwise been unobtainable.

In Murray State University's Telecommunications Systems Management (TSM) program, we're developing curricula and resources aimed at getting students excited about cybersecurity and motivated to pursue it. But students often enter the program with little or no understanding of open source principles or software, so bringing participants up to speed has been one of our biggest challenges. That's why we've partnered with [Red Hat Academy][4] to supplement our materials and instill fundamental Linux skills and knowledge into our students. This foundation not only prepares students to use the open source security tools that are based on Linux operating systems but also equips them to experiment with a wider variety of Linux-based open source cybersecurity tools, giving them valuable, hands-on experience. And since these tools are freely available, they can continue practicing their skills outside the classroom.

### Equipping students for a collaborative industry

As we've said, open source software's ubiquity and ample community support makes it critical to the field of cybersecurity. In the TSM program, our courses incorporate open tools and open practices to simulate the environments students should expect to find if they choose to enter the cybersecurity industry. By creating this type of learning experience in the classroom—a place where instructors can offer immediate guidance and the stakes are low—we're able to help students gain the critical thinking skills needed for the variety of challenges they'll encounter in the field.

Chief among these, for example, are the skills associated with seeking, assessing, and understanding resources from cybersecurity communities. In our courses, we emphasize the process of researching community forums and reading software documentation. Because no one could ever hope to prepare students for every situation they might encounter in the field, we help students _train themselves_ how to use the tools at their disposal to resolve different situations that may arise. Because open source cybersecurity tools often give rise to engaged and supportive communities, students have the opportunity to develop troubleshooting skills when they encounter challenges by discovering solutions in conversation with people outside the classroom. Developing the ability to quickly and efficiently research problems and solutions is critical for a cybersecurity student, since technology (and the threat landscape) is always evolving.

### A more authentic operating system experience

Because no one could ever hope to prepare students for every situation they might encounter in the field, we help students train themselves how to use the tools at their disposal to resolve different situations that may arise.

Most operating systems courses take a narrow approach focused on proprietary software, which is an injustice to students as it denies them access to the diversity of the operating systems found in the IT industry. For instance, as companies are moving their services to the cloud, they are increasingly running on open source, Linux-based operating systems. Additionally, since open source software enables developers to repackage the software and customize distributions, many are adopting these varying distributions of Linux simply because they are a better fit for a particular application. Still others are moving their servers from proprietary platforms to Linux due to the attraction of the accountability that comes with open source software—especially in light of frustrations that occur when proprietary vendors push updates that cause major issues in their infrastructure.

In the TSM courses, our students gain a strong understanding of foundational Linux concepts. In particular, the curricula from Red Hat Academy gives students granular experience with many of the foundational commands, and it allows them to gain an understanding of a popular open source system design. Linux has a well-developed community of other users, developers, and tinkerers that provide an excellent forum for students to engage other open source users for help. Having students develop a strong foundational knowledge in Linux is critical as they progress through the TSM program. As students work through their courses, they naturally develop their knowledge and skills, and by obtaining this hands-on experience they also gain a foundation that prepares them for a variety of careers—becoming traditional security analysts, for example, or pursuing careers in penetration testing using Kali Linux. No matter their path, having a strong Linux background is essential for students.

### Embracing community-driven development

One of the major frustrations in the IT field is being forced to use tools that simply do not work or quickly become unusable. Often, software purchased to accomplish some particular task will quickly become obsolete as the vendor offers "upgrades" and "add-ons" to accommodate the changing needs of their customer—at a price. This experience isn't limited to IT experts; end users also experience this frustration. Driving this practice is, naturally, a desire to maintain long-term profits, as companies must continue to sell software to survive or must lock their users into subscription models.

The fact that much of the open source software in use today is provided free of charge is enough to draw industry experts to use it. However, open source software is more than just freeware. Because the users of those tools have formed such large communities, they receive proportional support from their communities as well. It's not unusual to see small projects grow into full software suites as users submit feedback to community-driven development. This type of feedback often creates products that are superior to their paid counterparts, which do not have such a direct line into the community they seek to serve. This is absolutely true in the case of cybersecurity tools, where the majority of the most popular tools are all open source, community-driven projects. In the TSM program, students are well-versed in tools such as these, thanks to the availability and free distribution model that open source software affords. The result is that through hands-on use, students gain a firm understanding of how to utilize these types of tools.

### Future proofing

Open source software provides students, who come from a variety of socio-economic backgrounds, with the opportunity to expand their experience without needing to be employed in a particular field.

Staying relevant in the IT industry is a constant battle, especially when dealing with the many products and solutions that are always seeking to gain market share. This battle extends as well to the "soldiers on the ground," who may find keeping a diversified toolset difficult when many of the solutions are kept out of their hands due to a price ceiling.

Open source software provides students, who come from a variety of socio-economic backgrounds, with the opportunity to expand their experience without needing to be employed in a particular field, as the software is readily available to them through open source distribution channels. Similarly, graduates who find jobs in one particular segment of the market still have the opportunity to train their skills in _other_ areas in which they may be interested, thanks to the breadth of open source software commonly used in the IT industry.

As we train these students how to train themselves, expose them to the variety of tools at their disposal, and educate them on how widely used these tools are, the students are not only equipped to enter the workforce, but are also empowered to stay ahead of the game as well.

_(This article is part of the_ [Open Organization Guide for Educators][5] _project.)_
--------------------------------------------------------------------------------

via: https://opensource.com/open-organization/19/8/open-cybersecurity-education

作者:[rjoyce][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/rjoycehttps://opensource.com/users/hypercoyotehttps://opensource.com/users/debbryant
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security_privacy_lock.png?itok=ZWjrpFzx (Locks and a fingerprint)
[2]: http://www.hiscox.com/cyber-readiness-report.pdf
[3]: https://iamcybersafe.org/wpcontent/uploads/2017/06/Europe-GISWS-Report.pdf
[4]: https://www.redhat.com/en/services/training/red-hat-academy
[5]: https://github.com/open-organization-ambassadors/open-org-educators-guide
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (4 misconceptions about ethics and bias in AI)
[#]: via: (https://opensource.com/article/19/8/4-misconceptions-ethics-and-bias-ai)
[#]: author: (Rachel Thomas https://opensource.com/users/rachel-thomas)

4 misconceptions about ethics and bias in AI
======

As artificial intelligence increasingly affects our lives, we must consider how algorithms affect real people. Join us at PyBay 2019 to continue the conversation.

![A brain design in a head][1]

At [PyBay 2019][2] in August, I will continue a conversation I started at PyBay 2018 about the importance of ethics in the artificial intelligence (AI) we're developing, especially as it gains more and more influence in our everyday lives. In last year's keynote, I dug into how we're overlooking the essential role humans play in AI's future.

Ethical discussions around technology are more and more common, and I come to them from my first love, math. Math usually gives us a sense of certainty, but I have found that the more challenging, human parts of my work offer me the greatest potential to improve the world. If you're curious about the more technical side, here's a list of resources I put together:

> Deep Learning: <https://t.co/MhwV37J54I>
> NLP: <https://t.co/zC31JstaF1>
> Comp Linear Algebra: <https://t.co/CY7Gu90yLz>
> Bias, Ethics, & AI: <https://t.co/ThSz3bnZ4k>
> Debunk Pipeline Myth: <https://t.co/qIW64edWiQ>
> AI Needs You: <https://t.co/xUAv2eIatU>
> 67 Questions: <https://t.co/8m7JK57Aaq>
>
> — Rachel Thomas (@math_rachel) [July 9, 2019][3]

Misconceptions about the impact of all types and parts of technology have been common for a long time, but they are hitting home ever harder as AI systems gain increasing popularity and influence over our everyday lives. In this article, I'll walk through some common misconceptions about AI ethics, then offer some healthy principles we can use to make AI work with us toward a better future.

### 1\. Misconception: Engineers are only responsible for the code

There is an idea that engineers are only responsible for their code, not how the code is used nor the quality of the outcomes it produces. The problem is that in complicated, real-world systems, which involve a mixture of software and various administrative processes, often nobody feels responsible for the outcome. For example, a software program that had bugs decreased essential healthcare services to people with serious disabilities, including cerebral palsy and diabetes, as reported in [The Verge][4]. In this case, the creator of the algorithm blamed state officials for their process, and state officials could blame the team that implemented the software, and so on, with nobody taking responsibility.

Systems where nobody feels responsible and there is no accountability do not lead to good outcomes. I don't bring up responsibility in order to point fingers, but because I want to help ensure good outcomes. Our code often interacts with very messy, real-world systems and can accidentally amplify those problems in an undesirable way.

### 2\. Misconception: Humans and computers are interchangeable

People often talk about human and computer decision makers as though they are plug-and-play interchangeable, or have the mindset of building machines to replicate exactly what humans do. However, humans and computers are typically used in different ways in practice.

One powerful example pertains to AI's value proposition—the idea that companies could scale services with AI that would be unaffordable if humans did all the work. Whether it's faster health insurance signups or recommending items on consumer sites, AI is meant to make life simpler for us and cheaper for service providers. The Trojan horse hiding here is that algorithms may be implemented in such a way that the outcome is a dead end with no appeals process and no way to catch or address mistakes. This can be incredibly harmful if a person is [fired from a job][5] or [denied needed healthcare][4] based on an algorithm without explanation or recourse.

People remain at risk even when we add humans back into the equation. Studies show that when given an option to override a harmful AI conclusion, people are likely to assume the code is objective or error-free and are reluctant to override "the system." In many cases, AI is being used because it is cheap, not because it is more accurate or leads to better outcomes. As [Cathy O'Neil][6] puts it, we are creating a world where "the privileged are processed by people; the poor are processed by algorithms."

> The privileged are processed by people; the poor are processed by algorithms. - [@mathbabedotorg][7] [pic.twitter.com/ZMEDTEPOvK][8]
>
> — Rachel Thomas (@math_rachel) [December 16, 2016][9]

Another angle posits that humans and computers are at odds with one another. That's fun in a story like competing in chess or Go, but the more productive question is how machines can augment and complement human goals. Ultimately, algorithms are designed by human beings with human ends in mind.

### 3\. Misconception: We can't regulate the tech industry

I regularly hear that the tech industry is too hard to regulate and regulation won't be effective. It reminds me of a [99% Invisible podcast episode][10] about the early days of the automobile. When cars came out, there were no speed limits, licenses, or drunk driving laws, and they were made with a lot of sharp metal and shatterable glass. At the time, the idea of making cars safer was a tough conversation, and car companies were strongly resistant to anyone discussing safety. People argued that cars were inherently dangerous because the people driving them were dangerous, and that the danger had nothing to do with the vehicle. Consumer safety advocates worked for decades to change the mindset and laws around car safety, addressing many of these previous issues.

Consider a case study on what is effective at spurring action: people warned executives of a large social media company for years (beginning as early as 2013) of how their platform was being used to incite ethnic violence in Myanmar, and executives took little action. After the UN determined in 2018 that the site had played a "determining role" in the Myanmar genocide, the company said they would hire "dozens" of additional moderators. Contrast this to when Germany passed a hate speech law with significant financial penalties, and that same social media site hired 1,200 moderators in under a year to avoid being fined. The different orders of magnitude in response to [a potential fine vs. a genocide][11] may provide insight into the potential effectiveness of regulation.

### 4\. Misconception: Tech is only about optimizing metrics

It can be easy to think of our job in tech as optimizing metrics and responding to consumer demand.

> _"Recommendation systems and collaborative filtering are never neutral; they are always ranking one video, pin, or group against another when they're deciding what to show you."_
> –Renee Diresta, _[Wired][12]_

Metrics are just a proxy for the things we truly care about, and over-emphasizing metrics can lead to unintended consequences. When optimizing for viewing time, a popular video site was found to be pushing the most controversial, conspiracy-centric videos because they were the ones people on the site watched for the longest time. That metrics-only perspective resulted, for example, in people interested in lawnmower reviews being recommended extremist, white supremacist conspiracy theories.

We can choose to not just optimize for metrics, but also to consider desired outcomes. [Evan Estola][13] discussed what that looked like for his team at [Meetup.com][14], in his 2016 Machine Learning Conference presentation [When Recommendations Systems Go Bad][15]. Meetup's data showed that fewer women than men were going to technology-focused meetups. There was a risk that they could create an algorithm that recommended fewer tech meetups to women, which would cause fewer women to find out about tech meetups, decreasing attendance further, and then recommending even fewer tech meetups to women. That feedback loop would result in even fewer women going to tech events. Meetup decided to short-circuit that feedback loop before it was created.
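The feedback loop described above can be made concrete with a toy simulation (all numbers invented for illustration): if exposure to recommendations is proportional to a group's current share of attendance, and future attendance is proportional to exposure, the minority share shrinks on every iteration.

```python
women_share = 0.30  # hypothetical starting share of attendees

history = [women_share]
for _ in range(5):
    # Exposure is proportional to share and attendance to exposure, so
    # each step effectively squares both shares and renormalizes them.
    w = women_share ** 2
    m = (1 - women_share) ** 2
    women_share = w / (w + m)
    history.append(women_share)

print(history)  # strictly decreasing toward zero
```

Short-circuiting the loop means breaking the link between historical share and exposure, which is what Meetup chose to do.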
Technology impacts the world and exposes us to new ideas. We need to think more about the values we stand for and the broader systems we want to build rather than solely optimizing for metrics.

### Better principles for AI

I share these misconceptions so we can move past them and make the world a better place. We can improve our world through the ethical use of AI. Keep the following ideas in mind to create a better future with AI:

  * We have a responsibility to think about the whole system.
  * We need to work with domain experts and with those impacted by AI.
  * We have to find ways to leverage the strengths of computers and humans and bring them together for the best outcomes.
  * We must acknowledge that regulation is possible and has been impactful in the past.
  * We can't be afraid of hard and messy problems.
  * We can choose to optimize for impact on the world, not just for metrics.

By internalizing these concepts in our work and our daily lives, we can make the future a better place for everyone.

* * *

_Rachel Thomas will present [Getting Specific About Algorithmic Bias][16] at [PyBay 2019][17] August 17–18 in San Francisco. Use the [OpenSource35][18] discount code when purchasing tickets._

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/4-misconceptions-ethics-and-bias-ai

作者:[Rachel Thomas][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/rachel-thomas
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW_patents4abstract_B.png?itok=6RHeRaYh (A brain design in a head)
[2]: http://pybay.com/
[3]: https://twitter.com/math_rachel/status/1148385754982363136?ref_src=twsrc%5Etfw
[4]: https://www.theverge.com/2018/3/21/17144260/healthcare-medicaid-algorithm-arkansas-cerebral-palsy
[5]: https://www.washingtonpost.com/local/education/creative--motivating-and-fired/2012/02/04/gIQAwzZpvR_story.html
[6]: https://twitter.com/math_rachel/status/809810694065385472
[7]: https://twitter.com/mathbabedotorg?ref_src=twsrc%5Etfw
[8]: https://t.co/ZMEDTEPOvK
[9]: https://twitter.com/math_rachel/status/809810694065385472?ref_src=twsrc%5Etfw
[10]: https://99percentinvisible.org/episode/nut-behind-wheel/
[11]: https://www.fast.ai/2018/04/19/facebook/#myanmar
[12]: https://www.wired.com/story/creating-ethical-recommendation-engines/
[13]: https://mlconf.com/speakers/evan-estola/
[14]: http://meetup.com/
[15]: https://mlconf.com/sessions/when-recommendations-systems-go-bad-machine-learn/
[16]: https://pybay.com/speaker/rachel-thomas/
[17]: https://pybay.com/
[18]: https://ti.to/sf-python/pybay2019/discount/OpenSource35
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (9 open source cloud native projects to consider)
[#]: via: (https://opensource.com/article/19/8/cloud-native-projects)
[#]: author: (Bryant Son https://opensource.com/users/brsonhttps://opensource.com/users/marcobravo)

9 open source cloud native projects to consider
======

Work with containers? Get familiar with these projects from the Cloud Native Computing Foundation.

![clouds in the sky with blue pattern][1]

As the practice of developing applications with containers is getting more popular, [cloud-native applications][2] are also on the rise. By [definition][3]:

> "Cloud-native technologies are used to develop applications built with services packaged in containers, deployed as microservices, and managed on elastic infrastructure through agile DevOps processes and continuous delivery workflows."

This description includes four elements that are integral to cloud-native applications:

1. Container
2. Microservice
3. DevOps
4. Continuous integration and continuous delivery (CI/CD)

Although these technologies have very distinct histories, they complement each other well and have led to surprisingly exponential growth of cloud-native applications and toolsets in a short time. This [Cloud Native Computing Foundation][4] (CNCF) infographic shows the size and breadth of the cloud-native application ecosystem today.
|
||||
|
||||
![Cloud-Native Computing Foundation applications ecosystem][5]
|
||||
|
||||
Cloud-Native Computing Foundation projects
|
||||
|
||||
I mean, just look at that! And this is just a start. Just as NodeJS’s creation sparked the explosion of endless JavaScript tools, the popularity of container technology started the exponential growth of cloud-native applications.
|
||||
|
||||
The good news is that there are several organizations that oversee and connect these dots together. One is the [**Open Containers Initiative (OCI)**][6], which is a lightweight, open governance structure (or project), "formed under the auspices of the Linux Foundation for the express purpose of creating open industry standards around container formats and runtime." The other is the **CNCF**, "an open source software foundation dedicated to making cloud native computing universal and sustainable."
|
||||
|
||||
In addition to building a community around cloud-native applications generally, CNCF also helps projects set up structured governance around their cloud-native applications. CNCF created the concept of maturity levels—Sandbox, Incubating, or Graduated—which correspond to the Innovators, Early Adopters, and Early Majority tiers on the diagram below.
|
||||
|
||||
![CNCF project maturity levels][7]
|
||||
|
||||
CNCF project maturity levels
|
||||
|
||||
The CNCF has detailed [criteria][8] for each maturity level (included below for readers’ convenience). A two-thirds supermajority of the Technical Oversight Committee (TOC) is required for a project to be Incubating or Graduated.
|
||||
|
||||
### Sandbox stage
|
||||
|
||||
> To be accepted in the sandbox, a project must have at least two TOC sponsors. See the CNCF Sandbox Guidelines v1.0 for the detailed process.
|
||||
|
||||
### Incubating stage
|
||||
|
||||
> Note: The incubation level is the point at which we expect to perform full due diligence on projects.
|
||||
>
|
||||
> To be accepted to incubating stage, a project must meet the sandbox stage requirements plus:
|
||||
>
|
||||
> * Document that it is being used successfully in production by at least three independent end users which, in the TOC’s judgement, are of adequate quality and scope.
|
||||
> * Have a healthy number of committers. A committer is defined as someone with the commit bit; i.e., someone who can accept contributions to some or all of the project.
|
||||
> * Demonstrate a substantial ongoing flow of commits and merged contributions.
|
||||
> * Since these metrics can vary significantly depending on the type, scope, and size of a project, the TOC has final judgement over the level of activity that is adequate to meet these criteria.
|
||||
>
|
||||
|
||||
|
||||
### Graduated stage
|
||||
|
||||
> To graduate from sandbox or incubating status, or for a new project to join as a graduated project, a project must meet the incubating stage criteria plus:
|
||||
>
|
||||
> * Have committers from at least two organizations.
|
||||
> * Have achieved and maintained a Core Infrastructure Initiative Best Practices Badge.
|
||||
> * Have completed an independent and third party security audit with results published of similar scope and quality as the following example (including critical vulnerabilities addressed): <https://github.com/envoyproxy/envoy#security-audit> and all critical vulnerabilities need to be addressed before graduation.
|
||||
> * Adopt the CNCF Code of Conduct.
|
||||
> * Explicitly define a project governance and committer process. This preferably is laid out in a GOVERNANCE.md file and references an OWNERS.md file showing the current and emeritus committers.
|
||||
> * Have a public list of project adopters for at least the primary repo (e.g., ADOPTERS.md or logos on the project website).
|
||||
> * Receive a supermajority vote from the TOC to move to graduation stage. Projects can attempt to move directly from sandbox to graduation, if they can demonstrate sufficient maturity. Projects can remain in an incubating state indefinitely, but they are normally expected to graduate within two years.
|
||||
>
|
||||
|
||||
|
||||
## 9 projects to consider
|
||||
|
||||
While it’s impossible to cover all of the CNCF projects in this article, I’ll describe nine of the most interesting Graduated and Incubating open source projects.
|
||||
|
||||
Name | License | What It Is
|
||||
---|---|---
|
||||
[Kubernetes][9] | Apache 2.0 | Orchestration platform for containers
|
||||
[Prometheus][10] | Apache 2.0 | Systems and service monitoring tool
|
||||
[Envoy][11] | Apache 2.0 | Edge and service proxy
|
||||
[rkt][12] | Apache 2.0 | Pod-native container engine
|
||||
[Jaeger][13] | Apache 2.0 | Distributed tracing system
|
||||
[Linkerd][14] | Apache 2.0 | Transparent service mesh
|
||||
[Helm][15] | Apache 2.0 | Kubernetes package manager
|
||||
[Etcd][16] | Apache 2.0 | Distributed key-value store
|
||||
[CRI-O][17] | Apache 2.0 | Lightweight runtime for Kubernetes
|
||||
|
||||
I also created this video tutorial to walk through these projects.
|
||||
|
||||
## Graduated projects
|
||||
|
||||
Graduated projects are considered mature—adopted by many organizations—and must adhere to the CNCF’s guidelines. Following are three of the most popular open source CNCF Graduated projects. (Note that some of these descriptions are adapted and reused from the projects' websites.)
|
||||
|
||||
### Kubernetes
|
||||
|
||||
Ah, Kubernetes. How can we talk about cloud-native applications without mentioning Kubernetes? Invented by Google, Kubernetes is undoubtedly the most famous container-orchestration platform for container-based applications, and it is also an open source tool.
|
||||
|
||||
What is a container orchestration platform? Basically, a container engine on its own may be fine for managing a few containers. However, when you are talking about thousands of containers and hundreds of services, managing those containers becomes super complicated. This is where a container-orchestration engine comes in: it helps scale containers by automating their deployment, management, networking, and availability.
|
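To make that concrete, here is a toy, purely illustrative Python sketch of a single orchestration concern: spreading containers across the least-loaded nodes. All names are hypothetical, and a real orchestrator like Kubernetes also handles health checks, networking, restarts, and much more.

```python
from collections import defaultdict

def schedule(containers, nodes):
    """Assign each container to the currently least-loaded node."""
    load = defaultdict(int)   # node -> number of containers placed on it
    placement = {}
    for c in containers:
        # Pick the node with the fewest containers so far
        # (ties resolve to the first node in the list).
        node = min(nodes, key=lambda n: load[n])
        placement[c] = node
        load[node] += 1
    return placement

print(schedule(["web-1", "web-2", "db-1"], ["node-a", "node-b"]))
# {'web-1': 'node-a', 'web-2': 'node-b', 'db-1': 'node-a'}
```

Scaling this idea to thousands of containers, while also recovering from node failures, is exactly why a dedicated orchestration engine becomes necessary.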
||||
|
||||
Docker Swarm and Mesosphere Marathon are other container-orchestration engines, but it is safe to say that Kubernetes has won the race (at least for now). Kubernetes also gave birth to Container-as-a-Service (CaaS) platforms like [OKD][18], the Origin community distribution of Kubernetes that powers [Red Hat OpenShift][19].
|
||||
|
||||
To get started, visit the [Kubernetes GitHub repository][9], and access its documentation and learning resources from the [Kubernetes documentation][20] page.
|
||||
|
||||
### Prometheus
|
||||
|
||||
Prometheus is an open source system monitoring and alerting toolkit built at SoundCloud in 2012. Since then, many companies and organizations have adopted Prometheus, and the project has a very active developer and user community. It is now a standalone open source project that is maintained independently of the company.
|
||||
|
||||
![Prometheus’ architecture][21]
|
||||
|
||||
Prometheus’ architecture
|
||||
|
||||
The easiest way to think about Prometheus is to visualize a production system that needs to be up 24 hours a day and 365 days a year. No system is perfect, and there are techniques to reduce failures (called fault-tolerant systems). However, if an issue occurs, the most important thing is to identify it as soon as possible. That is where a monitoring tool like Prometheus comes in handy. Prometheus is more than a container-monitoring tool, but it is most popular among cloud-native application companies. In addition, other open source monitoring tools, including [Grafana][22], leverage Prometheus.
|
||||
|
||||
The best way to get started with Prometheus is to check out its [GitHub repo][10]. Running Prometheus locally is easy, but you need to have a container engine installed. You can access detailed documentation on [Prometheus’ website][23].
|
||||
|
||||
### Envoy
|
||||
|
||||
Envoy (or Envoy Proxy) is an open source edge and service proxy designed for cloud-native applications. Created at Lyft, Envoy is a high-performance, C++, distributed proxy designed for single services and applications, as well as a communications bus and a universal data plane designed for large microservice service mesh architectures. Built on the learnings of solutions such as Nginx, HAProxy, hardware load balancers, and cloud load balancers, Envoy runs alongside every application and abstracts the network by providing common features in a platform-agnostic manner.
|
||||
|
||||
When all service traffic in an infrastructure flows through an Envoy mesh, it becomes easy to visualize problem areas via consistent observability, tune overall performance, and add substrate features in a single place. Basically, Envoy Proxy is a service mesh tool that helps organizations build a fault-tolerant system for production environments.
|
||||
|
||||
There are numerous alternatives for service mesh applications, such as Buoyant’s [Linkerd][24] (discussed below) and [Istio][25]. Istio extends Envoy Proxy by deploying it as a [Sidecar][26] and leveraging the [Mixer][27] configuration model. Notable Envoy features are:
|
||||
|
||||
* All the "table stakes" features (when paired with a control plane, like Istio) are included
|
||||
* Low, 99th percentile latencies at scale when running under load
|
||||
* Acts as an L3/L4 filter at its core with many L7 filters provided out of the box
|
||||
* Support for gRPC and HTTP/2 (upstream/downstream)
|
||||
* It’s API-driven and supports dynamic configuration and hot reloads
|
||||
* Has a strong focus on metric collection, tracing, and overall observability
|
||||
|
||||
|
||||
|
||||
Understanding Envoy, proving its capabilities, and realizing its full benefits require extensive experience with running production-level environments. You can learn more in its [detailed documentation][28] and by accessing its [GitHub][11] repository.
|
||||
|
||||
## Incubating projects
|
||||
|
||||
Following are six of the most popular open source CNCF Incubating projects.
|
||||
|
||||
### rkt
|
||||
|
||||
rkt, pronounced "rocket," is a pod-native container engine. It has a command-line interface (CLI) for running containers on Linux. In a sense, it is similar to other container engines, like [Podman][29], Docker, and CRI-O.
|
||||
|
||||
rkt was originally developed by CoreOS (later acquired by Red Hat), and you can find detailed [documentation][30] on its website and access the source code on [GitHub][12].
|
||||
|
||||
### Jaeger
|
||||
|
||||
Jaeger is an open source, end-to-end distributed tracing system for cloud-native applications. In one way, it is a monitoring solution like Prometheus. Yet it is different because its use cases extend into:
|
||||
|
||||
* Distributed transaction monitoring
|
||||
* Performance and latency optimization
|
||||
* Root-cause analysis
|
||||
* Service dependency analysis
|
||||
* Distributed context propagation
|
||||
|
||||
|
||||
|
||||
Jaeger is an open source technology built by Uber. You can find [detailed documentation][31] on its website and its [source code][13] on GitHub.
|
||||
|
||||
### Linkerd
|
||||
|
||||
Like Lyft with Envoy Proxy, Buoyant developed Linkerd as an open source solution to keep its services running reliably in production. In some ways, Linkerd is just like Envoy, as both are service mesh tools designed to give platform-wide observability, reliability, and security without requiring configuration or code changes.
|
||||
|
||||
However, there are some subtle differences between the two. While Envoy and Linkerd both function as proxies and can report on the services connected to them, Envoy isn’t designed to be a Kubernetes Ingress controller, as Linkerd is. Notable features of Linkerd include:
|
||||
|
||||
* Support for multiple platforms (Docker, Kubernetes, DC/OS, Amazon ECS, or any stand-alone machine)
|
||||
* Built-in service discovery abstractions to unite multiple systems
|
||||
* Support for gRPC, HTTP/2, and HTTP/1.x requests plus all TCP traffic
|
||||
|
||||
|
||||
|
||||
You can read more about it on [Linkerd’s website][32] and access its source code on [GitHub][14].
|
||||
|
||||
### Helm
|
||||
|
||||
Helm is basically the package manager for Kubernetes. If you’ve used Apache Maven, Maven Nexus, or a similar service, you will understand Helm’s purpose. Helm helps you manage your Kubernetes application. It uses "Helm Charts" to define, install, and upgrade even the most complex Kubernetes applications. Helm isn’t the only method for this; another concept becoming popular is [Kubernetes Operators][33], which are used by Red Hat OpenShift 4.
|
||||
|
||||
You can try Helm by following the [quickstart guide][34] in its documentation or its [GitHub guide][15].
|
||||
|
||||
### Etcd
|
||||
|
||||
Etcd is a distributed, reliable key-value store for the most critical data in a distributed system. Its key features are:
|
||||
|
||||
* Well-defined, user-facing API (gRPC)
|
||||
* Automatic TLS with optional client certificate authentication
|
||||
* Speed (benchmarked at 10,000 writes per second)
|
||||
* Reliability (distributed using Raft)
|
||||
|
||||
|
||||
|
||||
Etcd is used as the built-in default data store for Kubernetes and many other technologies. That said, it is rarely run independently or as a separate service; instead, most deployments use the instance integrated into Kubernetes, OKD/OpenShift, or another service. There is also an [etcd Operator][35] to manage its lifecycle and unlock its API management capabilities.
|
||||
|
||||
You can learn more in [etcd’s documentation][36] and access its [source code][16] on GitHub.
|
||||
|
||||
### CRI-O
|
||||
|
||||
CRI-O is an Open Container Initiative (OCI)-compliant implementation of the Kubernetes Container Runtime Interface (CRI). It is used for various functions, including:
|
||||
|
||||
* Runtime using runc (or any OCI runtime-spec implementation) and OCI runtime tools
|
||||
* Image management using containers/image
|
||||
* Storage and management of image layers using containers/storage
|
||||
* Networking support through the Container Network Interface (CNI)
|
||||
|
||||
|
||||
|
||||
CRI-O provides plenty of [documentation][37], including guides, tutorials, articles, and even podcasts, and you can also access its [GitHub page][17].
|
||||
|
||||
* * *
|
||||
|
||||
Did I miss an interesting open source cloud-native project? Please let me know in the comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/8/cloud-native-projects
|
||||
|
||||
作者:[Bryant Son][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/brsonhttps://opensource.com/users/marcobravo
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003601_05_mech_osyearbook2016_cloud_cc.png?itok=XSV7yR9e (clouds in the sky with blue pattern)
|
||||
[2]: https://opensource.com/article/18/7/what-are-cloud-native-apps
|
||||
[3]: https://thenewstack.io/10-key-attributes-of-cloud-native-applications/
|
||||
[4]: https://www.cncf.io
|
||||
[5]: https://opensource.com/sites/default/files/uploads/cncf_1.jpg (Cloud-Native Computing Foundation applications ecosystem)
|
||||
[6]: https://www.opencontainers.org
|
||||
[7]: https://opensource.com/sites/default/files/uploads/cncf_2.jpg (CNCF project maturity levels)
|
||||
[8]: https://github.com/cncf/toc/blob/master/process/graduation_criteria.adoc
|
||||
[9]: https://github.com/kubernetes/kubernetes
|
||||
[10]: https://github.com/prometheus/prometheus
|
||||
[11]: https://github.com/envoyproxy/envoy
|
||||
[12]: https://github.com/rkt/rkt
|
||||
[13]: https://github.com/jaegertracing/jaeger
|
||||
[14]: https://github.com/linkerd/linkerd
|
||||
[15]: https://github.com/helm/helm
|
||||
[16]: https://github.com/etcd-io/etcd
|
||||
[17]: https://github.com/cri-o/cri-o
|
||||
[18]: https://www.okd.io/
|
||||
[19]: https://www.openshift.com
|
||||
[20]: https://kubernetes.io/docs/home
|
||||
[21]: https://opensource.com/sites/default/files/uploads/cncf_3.jpg (Prometheus’ architecture)
|
||||
[22]: https://grafana.com
|
||||
[23]: https://prometheus.io/docs/introduction/overview
|
||||
[24]: https://linkerd.io/
|
||||
[25]: https://istio.io/
|
||||
[26]: https://istio.io/docs/reference/config/networking/v1alpha3/sidecar
|
||||
[27]: https://istio.io/docs/reference/config/policy-and-telemetry
|
||||
[28]: https://www.envoyproxy.io/docs/envoy/latest
|
||||
[29]: https://podman.io
|
||||
[30]: https://coreos.com/rkt/docs/latest
|
||||
[31]: https://www.jaegertracing.io/docs/1.13
|
||||
[32]: https://linkerd.io/2/overview
|
||||
[33]: https://coreos.com/operators
|
||||
[34]: https://helm.sh/docs
|
||||
[35]: https://github.com/coreos/etcd-operator
|
||||
[36]: https://etcd.io/docs/v3.3.12
|
||||
[37]: https://github.com/cri-o/cri-o/blob/master/awesome.md
|
@ -0,0 +1,105 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to Reinstall Ubuntu in Dual Boot or Single Boot Mode)
|
||||
[#]: via: (https://itsfoss.com/reinstall-ubuntu/)
|
||||
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
|
||||
|
||||
How to Reinstall Ubuntu in Dual Boot or Single Boot Mode
|
||||
======
|
||||
|
||||
If you have messed up your Ubuntu system and after trying numerous ways to fix it, you finally give up and take the easy way out: you reinstall Ubuntu.
|
||||
|
||||
We have all been in a situation when reinstalling Linux seems a better idea than trying to troubleshoot and fix the issue for good. Troubleshooting a Linux system teaches you a lot, but you cannot always afford to spend more time fixing a broken system.
|
||||
|
||||
There is no Windows-like recovery drive system in Ubuntu as far as I know. So, the question arises: how do you reinstall Ubuntu? Let me show you.
|
||||
|
||||
Warning!
|
||||
|
||||
Playing with disk partitions is always a risky task. I strongly recommend making a backup of your data on an external disk.
|
||||
|
||||
### How to reinstall Ubuntu Linux
|
||||
|
||||
![][1]
|
||||
|
||||
Here are the steps to follow for reinstalling Ubuntu.
|
||||
|
||||
#### Step 1: Create a live USB
|
||||
|
||||
First, download Ubuntu from its website. You can download [whichever Ubuntu version][2] you want to use.
|
||||
|
||||
[Download Ubuntu][3]
|
||||
|
||||
Once you have got the ISO image, it’s time to create a live USB from it. If your Ubuntu system is still accessible, you can create a live disk using the startup disk creator tool provided by Ubuntu.
|
||||
|
||||
If you cannot access your Ubuntu system, you’ll have to use another system. You can refer to this article to learn [how to create live USB of Ubuntu in Windows][4].
|
||||
|
||||
#### Step 2: Reinstall Ubuntu
|
||||
|
||||
Once you have the live USB of Ubuntu, plug it in and reboot your system. At boot time, press the F2/F10/F12 key to enter the BIOS settings and make sure the Boot from Removable Devices/USB option is at the top. Save and exit the BIOS. This will allow you to boot into the live USB.
|
||||
|
||||
Once you are in the live USB, choose to install Ubuntu. You’ll get the usual options for choosing your language and keyboard layout, as well as the option to download updates.
|
||||
|
||||
![Go ahead with regular installation option][5]
|
||||
|
||||
The important step comes now. You should see an “Installation Type” screen. What you see here depends heavily on how Ubuntu sees the disk partitioning and installed operating systems on your system.
|
||||
|
||||
|
||||
|
||||
Be very careful reading the options and their details at this step. Pay attention to what each option says. The screen options may look different on different systems.
|
||||
|
||||
![Reinstall Ubuntu option in dual boot mode][7]
|
||||
|
||||
In my case, it finds that I have Ubuntu 18.04.2 and Windows installed on my system and it gives me a few options.
|
||||
|
||||
The first option here is to erase Ubuntu 18.04.2 and reinstall it. It tells me that it will delete my personal data, but it says nothing about deleting the other operating systems (i.e., Windows).
|
||||
|
||||
If you are super lucky, or in single boot mode, you may see a “Reinstall Ubuntu” option. This option keeps your existing data and even tries to keep the installed software. If you see this option, you should go for it.
|
||||
|
||||
Attention for Dual Boot System
|
||||
|
||||
If you are dual booting Ubuntu and Windows and, during the reinstall, your Ubuntu system doesn’t see Windows, you must go for the “Something else” option and install Ubuntu from there. I have described the [process of reinstalling Linux in dual boot in this tutorial][8].
|
||||
|
||||
For me, there was no “reinstall and keep the data” option, so I went for the “Erase Ubuntu and reinstall” option. This will install Ubuntu afresh, even if it is in dual boot mode with Windows.
|
||||
|
||||
The reinstalling part is why I recommend using separate partitions for root and home. With that, you can keep your data in home partition safe even if you reinstall Linux. I have already demonstrated it in this video:
|
||||
|
||||
Once you have chosen the reinstall Ubuntu option, the rest of the process is just clicking next. Select your location and when asked, create your user account.
|
||||
|
||||
![Just go on with the installation options][9]
|
||||
|
||||
Once the procedure finishes, you’ll have your Ubuntu reinstalled afresh.
|
||||
|
||||
In this tutorial, I have assumed that you know the basics because you have already installed Ubuntu before. If you need clarification at any step, please feel free to ask in the comment section.
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/reinstall-ubuntu/
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/abhishek/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/Reinstall-Ubuntu.png?resize=800%2C450&ssl=1
|
||||
[2]: https://itsfoss.com/which-ubuntu-install/
|
||||
[3]: https://ubuntu.com/download/desktop
|
||||
[4]: https://itsfoss.com/create-live-usb-of-ubuntu-in-windows/
|
||||
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/reinstall-ubuntu-1.jpg?resize=800%2C473&ssl=1
|
||||
[6]: https://itsfoss.com/update-ubuntu/
|
||||
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/reinstall-ubuntu-dual-boot.jpg?ssl=1
|
||||
[8]: https://itsfoss.com/replace-linux-from-dual-boot/
|
||||
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/reinstall-ubuntu-3.jpg?ssl=1
|
||||
[10]: https://itsfoss.com/fix-no-wireless-network-ubuntu/
|
203
sources/tech/20190814 How to install Python on Windows.md
Normal file
@ -0,0 +1,203 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to install Python on Windows)
|
||||
[#]: via: (https://opensource.com/article/19/8/how-install-python-windows)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/sethhttps://opensource.com/users/greg-p)
|
||||
|
||||
How to install Python on Windows
|
||||
======
|
||||
Install Python, run an IDE, and start coding right from your Microsoft
|
||||
Windows desktop.
|
||||
![Hands programming][1]
|
||||
|
||||
So you want to learn to program? One of the most common languages to start with is [Python][2], popular for its unique blend of [object-oriented][3] structure and simple syntax. Python is also an _interpreted language_, meaning you don’t need to compile your code into machine language yourself: Python does that for you, so you can often test your programs instantly, even as you write them.
|
||||
|
||||
Just because Python is easy to learn doesn't mean you should underestimate its potential power. Python is used by [movie][4] [studios][5], financial institutions, IT houses, video game studios, makers, hobbyists, [artists][6], teachers, and many others.
|
||||
|
||||
On the other hand, Python is also a serious programming language, and learning it takes dedication and practice. Then again, you don't have to commit to anything just yet. You can install and try Python on nearly any computing platform, so if you're on Windows, this article is for you.
|
||||
|
||||
If you want to try Python on a completely open source operating system, you can [install Linux][7] and then [try Python][8].
|
||||
|
||||
### Get Python
|
||||
|
||||
Python is available from its website, [Python.org][9]. Once there, hover your mouse over the **Downloads** menu, then over the **Windows** option, and then click the button to download the latest release.
|
||||
|
||||
![Downloading Python on Windows][10]
|
||||
|
||||
Alternatively, you can click the **Downloads** menu button and select a specific version from the downloads page.
|
||||
|
||||
### Install Python
|
||||
|
||||
Once the package is downloaded, open it to start the installer.
|
||||
|
||||
It is safe to accept the default install location, and it's vital to add Python to PATH. If you don't add Python to your PATH, then Python applications won't know where to find Python (which they require in order to run). This is _not_ selected by default, so activate it at the bottom of the install window before continuing!
|
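As a quick sanity check after installing, you can ask Python itself where it lives and whether a launcher is visible on PATH; a minimal sketch using only the standard library:

```python
import shutil
import sys

# Where is the interpreter that is currently running?
print("Running:", sys.executable, "version", sys.version.split()[0])

# Is a "python" or "py" command visible on PATH?
found = shutil.which("python") or shutil.which("py")
print("On PATH:", found if found else "not found (re-run the installer and check 'Add Python to PATH')")
```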
||||
|
||||
![Select "Add Python 3 to PATH"][11]
|
||||
|
||||
Before Windows allows you to install an application from a publisher other than Microsoft, you must give your approval. Click the **Yes** button when prompted by the **User Account Control** system.
|
||||
|
||||
![Windows UAC][12]
|
||||
|
||||
Wait patiently for Windows to distribute the files from the Python package into the appropriate locations, and when it's finished, you're done installing Python.
|
||||
|
||||
Time to play.
|
||||
|
||||
### Install an IDE
|
||||
|
||||
To write programs in Python, all you really need is a text editor, but it's convenient to have an integrated development environment (IDE). An IDE integrates a text editor with some friendly and helpful Python features. IDLE 3 and NINJA-IDE are two options to consider.
|
||||
|
||||
#### IDLE 3
|
||||
|
||||
Python comes with an IDE called IDLE. You can write code in any text editor, but using an IDE provides you with keyword highlighting to help detect typos, a **Run** button to test code quickly and easily, and other code-specific features that a plain text editor like [Notepad++][13] normally doesn't have.
|
||||
|
||||
To start IDLE, click the **Start** (or **Window**) menu and type **python** for matches. You may find a few matches, since Python provides more than one interface, so make sure you launch IDLE.
|
||||
|
||||
![IDLE 3 IDE][14]
|
||||
|
||||
If you don't see Python in the Start menu, launch the Windows command prompt by typing **cmd** in the Start menu, then type:
|
||||
|
||||
|
||||
```
|
||||
C:\Windows\py.exe
|
||||
```
|
||||
|
||||
If that doesn't work, try reinstalling Python. Be sure to select **Add Python to PATH** in the install wizard. Refer to the [Python docs][15] for detailed instructions.
|
||||
|
||||
#### Ninja-IDE
|
||||
|
||||
If you already have some coding experience and IDLE seems too simple for you, try [Ninja-IDE][16]. Ninja-IDE is an excellent Python IDE. It has keyword highlighting to help detect typos, quotation and parenthesis completion to avoid syntax errors, line numbers (helpful when debugging), indentation markers, and a **Run** button to test code quickly and easily.
|
||||
|
||||
![Ninja-IDE][17]
|
||||
|
||||
To install it, visit the Ninja-IDE website and [download the Windows installer][18]. The process is the same as with Python: start the installer, allow Windows to install a non-Microsoft application, and wait for the installer to finish.
|
||||
|
||||
Once Ninja-IDE is installed, double-click the Ninja-IDE icon on your desktop or select it from the Start menu.
|
||||
|
||||
### Tell Python what to do
|
||||
|
||||
Keywords tell Python what you want it to do. In either IDLE or Ninja-IDE, go to the File menu and create a new file.
|
||||
|
||||
Ninja users: Do not create a new project, just a new file.
|
||||
|
||||
In your new, empty file, type this into IDLE or Ninja-IDE:
|
||||
|
||||
|
||||
```
|
||||
print("Hello world.")
|
||||
```
|
||||
|
||||
* If you are using IDLE, go to the Run menu and select the Run Module option.
|
||||
* If you are using Ninja, click the Run File button in the left button bar.
|
||||
|
||||
|
||||
|
||||
![Running code in Ninja-IDE][19]
|
||||
|
||||
Any time you run code, your IDE prompts you to save the file you're working on. Do that before continuing.
|
||||
|
||||
The keyword **print** tells Python to print out whatever text you give it in parentheses and quotes.
|
||||
|
||||
That's not very exciting, though. At its core, Python has access to only basic keywords like **print** and **help**, basic math functions, and so on.
|
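Even those basics go a little further than plain text, though; **print** happily shows the result of any expression:

```python
print("Hello world.")   # text
print(2 + 3 * 4)        # basic math, respecting precedence: 14
print(min(3, 1, 2))     # a built-in helper function: 1
```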
||||
|
||||
Use the **import** keyword to load more keywords. Start a new file in IDLE or Ninja and name it **pen.py**.
|
||||
|
||||
**Warning**: Do not call your file **turtle.py**, because **turtle.py** is the name of the standard module you are about to import. Naming your file **turtle.py** confuses Python because it will import your own file instead.
|
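The reason for this warning is Python's module search order: the directory containing your script is searched before the standard library, so a local **turtle.py** wins. You can see the search path yourself:

```python
import sys

# sys.path lists the directories Python searches for modules, in order.
# The first entry is your script's directory (or '' for the current
# directory), which is why a local turtle.py shadows the standard
# library's turtle module.
for entry in sys.path[:3]:
    print(entry or "<current directory>")
```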
||||
|
||||
Type this code into your file and run it:
|
||||
|
||||
|
||||
```
|
||||
import turtle
|
||||
```
|
||||
|
||||
[Turtle][20] is a fun module to use. Add this code to your file:
|
||||
|
||||
|
||||
```
|
||||
turtle.begin_fill()
|
||||
turtle.forward(100)
|
||||
turtle.left(90)
|
||||
turtle.forward(100)
|
||||
turtle.left(90)
|
||||
turtle.forward(100)
|
||||
turtle.left(90)
|
||||
turtle.forward(100)
|
||||
turtle.end_fill()
|
||||
```
|
||||
|
||||
See what shapes you can draw with the turtle module.
|
||||
|
||||
To clear your turtle drawing area, use **turtle.clear()**. What do you think **turtle.color("blue")** does?
|
||||
|
||||
Try more complex code:
|
||||
|
||||
|
||||
```
|
||||
import turtle as t
|
||||
import time
|
||||
|
||||
t.color("blue")
|
||||
t.begin_fill()
|
||||
|
||||
counter = 0
|
||||
|
||||
while counter < 4:
|
||||
t.forward(100)
|
||||
t.left(90)
|
||||
counter = counter+1
|
||||
|
||||
t.end_fill()
|
||||
time.sleep(2)
|
||||
```
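The counting pattern in that loop can be checked without any graphics. Here is a minimal plain-Python sketch (no turtle window required) that tracks the heading the turtle would face after each 90-degree left turn:

```python
# A minimal sketch (no graphics): track the heading the turtle
# would face after each 90-degree left turn in the loop above.
heading = 0
counter = 0

while counter < 4:
    heading = (heading + 90) % 360
    print("turn", counter + 1, "-> heading", heading)
    counter = counter + 1
```

After four left turns the heading wraps back around to 0 degrees, which is why four turns of the loop draw a closed square.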
As a challenge, try changing your script to get this result:

![Example Python turtle output][21]

Once you complete that script, you're ready to move on to more exciting modules. A good place to start is this [introductory dice game][22].

### Stay Pythonic

Python is a fun language with modules for practically anything you can think to do with it. As you can see, it's easy to get started with Python, and as long as you're patient with yourself, you may find yourself understanding and writing Python code with the same fluidity as you write your native language. Work through some [Python articles][23] here on Opensource.com, try scripting some small tasks for yourself, and see where Python takes you. To really integrate Python with your daily workflow, you might even try Linux, which is natively scriptable in ways no other operating system is. You might find yourself, given enough time, using the applications you create!

Good luck, and stay Pythonic.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/how-install-python-windows

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/sethhttps://opensource.com/users/greg-p
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop.png?itok=pGfEfu2S (Hands programming)
[2]: https://www.python.org/
[3]: https://opensource.com/article/19/7/get-modular-python-classes
[4]: https://github.com/edniemeyer/weta_python_db
[5]: https://www.python.org/about/success/ilm/
[6]: https://opensource.com/article/19/7/rgb-cube-python-scribus
[7]: https://opensource.com/article/19/7/ways-get-started-linux
[8]: https://opensource.com/article/17/10/python-101
[9]: https://www.python.org/downloads/
[10]: https://opensource.com/sites/default/files/uploads/win-python-install.jpg (Downloading Python on Windows)
[11]: https://opensource.com/sites/default/files/uploads/win-python-path.jpg (Select "Add Python 3 to PATH")
[12]: https://opensource.com/sites/default/files/uploads/win-python-publisher.jpg (Windows UAC)
[13]: https://notepad-plus-plus.org/
[14]: https://opensource.com/sites/default/files/uploads/idle3.png (IDLE 3 IDE)
[15]: http://docs.python.org/3/using/windows.html
[16]: http://ninja-ide.org/
[17]: https://opensource.com/sites/default/files/uploads/win-python-ninja.jpg (Ninja-IDE)
[18]: http://ninja-ide.org/downloads/
[19]: https://opensource.com/sites/default/files/uploads/ninja_run.png (Running code in Ninja-IDE)
[20]: https://opensource.com/life/15/8/python-turtle-graphics
[21]: https://opensource.com/sites/default/files/uploads/win-python-idle-turtle.jpg (Example Python turtle output)
[22]: https://opensource.com/article/17/10/python-101#python-101-dice-game
[23]: https://opensource.com/sitewide-search?search_api_views_fulltext=Python
80
sources/tech/20190814 Taz Brown- How Do You Fedora.md
Normal file
@ -0,0 +1,80 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Taz Brown: How Do You Fedora?)
[#]: via: (https://fedoramagazine.org/taz-brown-how-do-you-fedora/)
[#]: author: (Charles Profitt https://fedoramagazine.org/author/cprofitt/)

Taz Brown: How Do You Fedora?
======

![Taz Brown: How Do You Fedora?][1]

We recently interviewed Taz Brown on how she uses Fedora. This is part of a series on the Fedora Magazine. The series profiles Fedora users and how they use Fedora to get things done. Contact us on the [feedback][2] form to express your interest in becoming an interviewee.

Taz Brown is a seasoned IT professional with over 15 years of experience. “I have worked as a systems administrator, senior Linux administrator, DevOps engineer and I now work as a senior Ansible automation consultant at Red Hat with the Automation Practice Team.” Originally Taz started using Ubuntu, but she moved to CentOS, Red Hat Enterprise Linux and Fedora as a Linux administrator in the IT industry.

Taz is relatively new to contributing to open source, but she found that code was not the only way to contribute. “I prefer to contribute through documentation as I am not a software developer or engineer. I found that there was more than one way to contribute to open source than just through code.”

### All about Taz

Her childhood hero is Wonder Woman. Her favorite movie is Hackers. “My favorite scene is the beginning of the movie,” Taz tells the Magazine. “The movie starts with a group of special agents breaking into a house to catch the infamous hacker, Zero Cool. We soon discover that Zero Cool is actually 11-year-old Dade Murphy, who managed to crash 1,507 computer systems in one day. He is charged for his crimes and his family is fined $45,000. Additionally, he is banned from using computers or touch-tone telephones until he is 18.”

Her favorite character in the movie is Paul Cook. “Paul Cook, Lord Nikon, played by Laurence Mason was my favorite character. One of the main reasons is that I never really saw a hacker movie that had characters that looked like me so I was fascinated by his portrayal. He was enigmatic. It was refreshing to see and it made me real proud that I was passionate about IT and that I was a geek of sorts.”

Taz is an amateur photographer and uses a Nikon D3500. “I definitely like vintage things so I am looking to add a new one to my collection soon.” She also enjoys 3D printing and drawing. “I use open source tools in my hobbies such as Wekan, which is an open-source kanban utility.”

![][3]

### The Fedora community

Taz first started using Linux about 8 years ago. “I started using Ubuntu and then graduated to Fedora and its community and I was hooked. I have been using Fedora now for about 5 years.”

When she became a Linux Administrator, Linux turned into a passion. “I was trying to find my way in terms of contributing to open source. I didn’t know where to go so I wondered if I could truly be an open source enthusiast and influencer because the community is so vast, but once I found a few people who embraced my interests and could show me the way, I was able to open up and ask questions and learn from the community.”

Taz first became involved with the Fedora community through her work as a Linux systems engineer while working at Mastercard. “My first impressions of the Fedora community was one of true collaboration, respect and sharing.”

When Brown talked about the Fedora Project she gave an excellent analogy. “America is a melting pot and that’s how I see open source projects like the Fedora Project. There is plenty of room for diverse contributions to the Fedora Project. There are so many ways in which to get and stay involved and there is also room for new ideas.”

When we asked Brown about what she would like to see improved in the Fedora community, she commented on making others more aware of the opportunities. “I wish those who are typically underrepresented in tech were more aware of the amazing commitment that the Fedora Project has to diversity and inclusion in open source and in the Fedora community.”

Next Taz had some advice for people looking to join the Fedora Community. “It’s a great decision and one that you likely will not regret joining. Fedora is a project with a very large supportive community and if you’re new to open source, it’s definitely a great place to start. There is a lot of cool stuff in Fedora. I believe there are limitless opportunities for The Fedora Project.”

### What hardware?

Taz uses a Lenovo ThinkServer TS140 with 64 GB of RAM, four 1 TB SSDs and a 1 TB HD for data storage. The server is currently running Fedora 30. She also has a Synology NAS with 164 TB of storage using a RAID 5 configuration. Taz also has a Logitech MX Master and MX Master 2S. “For my keyboard, I use a Kinesis Advantage 2.” She also uses two 38 inch LG ultrawide curved monitors and a single 34 inch LG ultrawide monitor.

She owns a System76 laptop. “I use the 16.1-inch Oryx Pro by System76 with IPS Display with i7 processor with 6 cores and 12 threads.” It has 6 GB GDDR6 RTX 2060 w/ 1920 CUDA Cores and also 64 GB of DDR4 RAM and a total of 4 TB of SSD storage. “I love the way Fedora handles my peripherals and like my mouse and keyboard. Everything works seamlessly. Plug and play works as it should and performance never suffers.”

![][4]

### What software?

Brown is currently running Fedora 30. She has a variety of software in her everyday workflow. “I use Wekan, which is an open-source kanban, to manage my engagements and projects. My favorite editor is [Atom][5], though I used to use Sublime at one point in time.”

And as for terminals? “I use Terminator as my go-to terminal because of its grid arrangement as well as its many keyboard shortcuts and its tab formation.” Taz continues, “I love using neofetch which comes up with a nifty fedora logo and system information every time I log in to the terminal. I also have my terminal pimped out using [powerline][6] and powerlevel9k and vim-powerline as well.”

![][7]

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/taz-brown-how-do-you-fedora/

作者:[Charles Profitt][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/cprofitt/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/07/header-use-this-816x345.png
[2]: https://fedoramag.wpengine.com/submit-an-idea-or-tip/
[3]: https://fedoramagazine.org/wp-content/uploads/2019/08/Image-1.png
[4]: https://fedoramagazine.org/wp-content/uploads/2019/08/20190517_171809-1-1024x768.jpg
[5]: https://fedoramagazine.org/install-atom-fedora/
[6]: https://fedoramagazine.org/add-power-terminal-powerline/
[7]: https://fedoramagazine.org/wp-content/uploads/2019/08/screens-1024x421.jpg
130
sources/tech/20190815 12 extensions for your GNOME desktop.md
Normal file
@ -0,0 +1,130 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (12 extensions for your GNOME desktop)
[#]: via: (https://opensource.com/article/19/8/extensions-gnome-desktop)
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdosshttps://opensource.com/users/erezhttps://opensource.com/users/alanfdosshttps://opensource.com/users/patrickhttps://opensource.com/users/liamnairn)

12 extensions for your GNOME desktop
======
Add functionality and features to your Linux desktop with these add-ons.
![A person working.][1]

The GNOME desktop is the default graphical user interface for most of the popular Linux distributions and some of the BSD and Solaris operating systems. Currently at version 3, GNOME provides a sleek user experience, and extensions are available for additional functionality.

We've covered [GNOME extensions][2] at Opensource.com before, but to celebrate GNOME's 22nd anniversary, I decided to revisit the topic. Some of these extensions may already be installed, depending on your Linux distribution; if not, check your package manager.

### How to add extensions from the package manager

To install extensions that aren't in your distro, open the package manager and click **Add-ons**. Then click **Shell Extensions** at the top-right of the Add-ons screen, and you will see a button for **Extension Settings** and a list of available extensions.

![Package Manager Add-ons Extensions view][3]

Use the Extension Settings button to enable, disable, or configure the extensions you have installed.

Now that you know how to add and enable extensions, here are some good ones to try.

## 1\. GNOME Clocks

[GNOME Clocks][4] is an application that includes a world clock, alarm, stopwatch, and timer. You can configure clocks for different geographic locations. For example, if you regularly work with colleagues in another time zone, you can set up a clock for their location. You can access the World Clocks section in the top panel's drop-down menu by clicking the system clock. It shows your configured world clocks (not including your local time), so you can quickly check the time in other parts of the world.

## 2\. GNOME Weather

[GNOME Weather][5] displays the weather conditions and forecast for your current location. You can access local weather conditions from the top panel's drop-down menu. You can also check the weather in other geographic locations using Weather's Places menu.

GNOME Clocks and Weather are small applications that have extension-like functionality. Both are installed by default on Fedora 30 (which is what I'm using). If you're using another distribution and don't see them, check the package manager.

You can see both extensions in action in the image below.

![Clocks and Weather shown in the drop-down][6]

## 3\. Applications Menu

I think the GNOME 3 interface is perfectly enjoyable in its stock form, but you may prefer a traditional application menu. In Fedora 30, the [Applications Menu][7] extension was installed by default but not enabled. To enable it, click the Extensions Settings button in the Add-ons section of the package manager and enable the Applications Menu extension.

![Extension Settings][8]

Now you can see the Applications Menu in the top-left corner of the top panel.

![Applications Menu][9]

## 4\. More columns in applications view

The Applications view is set by default to six columns of icons, probably because GNOME needs to accommodate a wide array of displays. If you're using a wide-screen display, you can use the [More columns in applications menu][10] extension to increase the columns. I find that setting it to eight makes better use of my screen by eliminating the empty columns on either side of the icons when I launch the Applications view.

## Add system info to the top panel

The next three extensions provide basic system information to the top panel.

* 5. [Harddisk LED][11] shows a small hard drive icon with input/output (I/O) activity.
* 6. [Load Average][12] indicates Linux load averages taken over three time intervals.
* 7. [Uptime Indicator][13] shows system uptime; when it's clicked, it shows the date and time the system was started.

## 8\. Sound Input and Output Device Chooser

Your system may have more than one audio device for input and output. For example, my laptop has internal speakers and sometimes I use a wireless Bluetooth speaker. The [Sound Input and Output Device Chooser][14] extension adds a list of your sound devices to the System Menu so you can quickly select which one you want to use.

## 9\. Drop Down Terminal

Fellow Opensource.com writer [Scott Nesbitt][15] recommended the next two extensions. The first, [Drop Down Terminal][16], enables a terminal window to drop down from the top panel by pressing a certain key. The default is the key above Tab; on my keyboard, that's the tilde (~) character. Drop Down Terminal has a settings menu for customizing transparency, height, the activation keystroke, and other configurations.

## 10\. Todo.txt

[Todo.txt][17] adds a menu to the top panel for maintaining a file for Todo.txt task tracking. You can add or delete a task from the menu or mark it as completed.

![Drop-down menu for Todo.txt][18]

## 11\. Removable Drive Menu

Opensource.com editor [Seth Kenlon][19] suggested [Removable Drive Menu][20]. It provides a drop-down menu for managing removable media, such as USB thumb drives. From the extension's menu, you can access a drive's files and eject it. The menu only appears when removable media is inserted.

![Removable Drive Menu][21]

## 12\. GNOME Internet Radio

I enjoy listening to internet radio streams with the [GNOME Internet Radio][22] extension, which I wrote about in [How to Stream Music with GNOME Internet Radio][23].

* * *

What are your favorite GNOME extensions? Please share them in the comments.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/extensions-gnome-desktop

作者:[Alan Formy-Duval][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/alanfdosshttps://opensource.com/users/erezhttps://opensource.com/users/alanfdosshttps://opensource.com/users/patrickhttps://opensource.com/users/liamnairn
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003784_02_os.comcareers_os_rh2x.png?itok=jbRfXinl (A person working.)
[2]: https://opensource.com/article/17/2/top-gnome-shell-extensions
[3]: https://opensource.com/sites/default/files/uploads/add-onsextensions_6.png (Package Manager Add-ons Extensions view)
[4]: https://wiki.gnome.org/Apps/Clocks
[5]: https://wiki.gnome.org/Apps/Weather
[6]: https://opensource.com/sites/default/files/uploads/clocksweatherdropdown_6.png (Clocks and Weather shown in the drop-down)
[7]: https://extensions.gnome.org/extension/6/applications-menu/
[8]: https://opensource.com/sites/default/files/uploads/add-onsextensionsettings_6.png (Extension Settings)
[9]: https://opensource.com/sites/default/files/uploads/applicationsmenuextension_5.png (Applications Menu)
[10]: https://extensions.gnome.org/extension/1305/more-columns-in-applications-view/
[11]: https://extensions.gnome.org/extension/988/harddisk-led/
[12]: https://extensions.gnome.org/extension/1381/load-average/
[13]: https://extensions.gnome.org/extension/508/uptime-indicator/
[14]: https://extensions.gnome.org/extension/906/sound-output-device-chooser/
[15]: https://opensource.com/users/scottnesbitt
[16]: https://extensions.gnome.org/extension/442/drop-down-terminal/
[17]: https://extensions.gnome.org/extension/570/todotxt/
[18]: https://opensource.com/sites/default/files/uploads/todo.txtmenu_3.png (Drop-down menu for Todo.txt)
[19]: https://opensource.com/users/seth
[20]: https://extensions.gnome.org/extension/7/removable-drive-menu/
[21]: https://opensource.com/sites/default/files/uploads/removabledrivemenu_3.png (Removable Drive Menu)
[22]: https://extensions.gnome.org/extension/836/internet-radio/
[23]: https://opensource.com/article/19/6/gnome-internet-radio
@ -0,0 +1,140 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Clementine Music Player for All Your Audio Needs)
[#]: via: (https://itsfoss.com/clementine-music-player/)
[#]: author: (John Paul https://itsfoss.com/author/john/)

Clementine Music Player for All Your Audio Needs
======

[VLC][1] is a mainstay for most fans of FOSS technology and most Linux distros. It’s a great little player, don’t get me wrong, but if you have a large library of audio files, sometimes you need something more powerful.

The [Clementine Music Player][2] is a full-service audio player with all the tools you need to keep track of your audio library. According to the [project’s website][3], Clementine is “inspired by Amarok 1.4, focusing on a fast and easy-to-use interface for searching and playing your music.”

![Clementine Music Player][4]

Clementine contains the following features:

* Search and play your local music library
* Listen to internet radio from Spotify, Grooveshark, SomaFM, Magnatune, Jamendo, SKY.fm, Digitally Imported, JAZZRADIO.com, Soundcloud, Icecast and Subsonic servers
* Search and play songs you’ve uploaded to Box, Dropbox, Google Drive, and OneDrive
* Create smart playlists and dynamic playlists
* Tabbed playlists, import and export M3U, XSPF, PLS and ASX
* CUE sheet support
* Play audio CDs
* Visualizations from projectM
* Lyrics and artist biographies and photos
* Transcode music into MP3, Ogg Vorbis, Ogg Speex, FLAC or AAC
* Edit tags on MP3 and OGG files, organize your music
* Fetch missing tags from MusicBrainz
* Discover and download Podcasts
* Download missing album cover art from Last.fm and Amazon
* Cross-platform – works on Windows, Mac OS X and Linux
* Native desktop notifications on Linux (libnotify) and Mac OS X (Growl)
* Remote control using an [Android device][5], a Wii Remote, MPRIS or the command-line
* Copy music to your iPod, iPhone, MTP or mass-storage USB player
* Queue manager

Clementine is released under the GPL v3. The most recent version of Clementine (1.3.1) was released in April of 2016.

### Installing Clementine music player in Linux

Now let’s take a look at how you can install Clementine on your system. Clementine is a popular application and is available in almost all major Linux distributions.

You can search for it on your distribution’s software center:

![Clementine in Ubuntu Software Center][6]

If you are feeling a little geeky, you can always hop on the terminal train and use your distribution’s package manager to install Clementine.

In Ubuntu, Clementine is available in the [Universe repository][8]. You can [use the apt command][9] in this fashion:

```
sudo apt install clementine
```

In Fedora, you can use this command:

```
sudo dnf install clementine
```

In Manjaro or other Arch-based distributions, you can use:

```
sudo pacman -S clementine
```

You can find a list of Linux distros that have Clementine in their repos [here][10]. You can also download packages directly from the [Clementine site][11].

If you have a Windows system, you can download a .exe from the [project’s download page][11].

### Experience with Clementine music player

I have installed Clementine on multiple Linux and Windows systems. I have a large collection of Old Time Radio shows, audio drama, and some music. Clementine does a good job of keeping it all organized.

![Clementine Library][12]

I also used it to download and listen to podcasts. It worked well for that. However, I did not use that part very much.

I didn’t use the cloud options because I don’t trust the cloud with my audio stuff.

![Clementine Cloud Settings][13]

### Final Thoughts on Clementine

I like Clementine. I installed it on multiple systems. I only used a small portion of the wide range of tools it offers and they worked great.

The only thing that concerns me is that the developers haven’t updated the project in three years. The [project’s GitHub page][14] has almost 2,000 open issues and 40 pending pull requests. I can understand that the developers might think that the project is stable and mature, but it looks like some users are having issues that are not being addressed.

I guess this is the reason why Clementine has been forked into [Strawberry music player][15].

If you are looking for an audio management program that is updated regularly, you may check out [Sayonara music player][16].

Have you ever used Clementine? What is your favorite music player/manager? Please let us know in the comments below.

If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][17].

--------------------------------------------------------------------------------

via: https://itsfoss.com/clementine-music-player/

作者:[John Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/install-latest-vlc/
[2]: https://www.clementine-player.org/
[3]: https://www.clementine-player.org/about
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/clementine-music-player.jpg?ssl=1
[5]: https://play.google.com/store/apps/details?id=de.qspool.clementineremote
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/clementine-ubuntu-software-center.jpg?resize=800%2C391&ssl=1
[7]: https://itsfoss.com/clementine-version-1-3-released/
[8]: https://itsfoss.com/ubuntu-repositories/
[9]: https://itsfoss.com/apt-command-guide/
[10]: https://repology.org/project/clementine-player/versions
[11]: https://www.clementine-player.org/downloads
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/07/clementine-library.png?resize=800%2C579&ssl=1
[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/07/clementine-cloud-settings.png?resize=800%2C488&ssl=1
[14]: https://github.com/clementine-player/Clementine
[15]: https://itsfoss.com/strawberry-music-player/
[16]: https://itsfoss.com/sayonara-music-player/
[17]: http://reddit.com/r/linuxusersgroup
[18]: https://itsfoss.com/uberwriter/
@ -0,0 +1,143 @@

[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Change Linux Console Font Type And Size)
[#]: via: (https://www.ostechnix.com/how-to-change-linux-console-font-type-and-size/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)

How To Change Linux Console Font Type And Size
======

It is quite easy to change the text font type and its size if you have a graphical desktop environment. How would you do that in an Ubuntu headless server that doesn’t have a graphical environment? No worries! This brief guide describes how to change the Linux console font and size. This can be useful for those who don’t like the default font type/size or who prefer different fonts in general.

### Change Linux Console Font Type And Size

Just in case you don’t know yet, this is what a headless Ubuntu Linux server console looks like.

![][2]

Ubuntu Linux console

As far as I know, we can [**list the installed fonts**][3], but there is no option to change the font type or its size from the Linux console as we do in the Terminal emulators in GUI desktop.

But that doesn’t mean that we can’t change it. We still can change the console fonts.

If you’re using Debian, Ubuntu and other DEB-based systems, you can use the **“console-setup”** configuration file for **setupcon**, which is used to configure font and keyboard layout for the console. The standard location of the console-setup configuration file is **/etc/default/console-setup**.

Now, run the following command to set up the font for your Linux console.

```
$ sudo dpkg-reconfigure console-setup
```

Choose the encoding to use on your Linux console. Just leave the default values, choose OK and hit ENTER to continue.

![][4]

Choose encoding to set on the console in Ubuntu

Next choose the character set that should be supported by the console font from the list. By default, it was the last option, i.e. **Guess optimal character set**, in my system. Just leave it as default and hit ENTER key.

![][5]

Choose character set in Ubuntu

Next choose the font for your console and hit ENTER key. Here, I am choosing “TerminusBold”.

![][6]

Choose font for your Linux console

In this step, we choose the desired font size for our Linux console.

![][7]

Choose font size for your Linux console

After a few seconds, the selected font and size will be applied to your Linux console.

This is how console fonts looked in my Ubuntu 18.04 LTS server before changing the font type and size.

![][8]

This is after changing the font type and size.

![][9]

As you can see, the text size is much bigger and better, and the font type is different from the default one.

You can also directly edit the **/etc/default/console-setup** file and set the font type and size as you wish. As per the following example, my Linux console font type is “Terminus Bold” and font size is 32.

```
ACTIVE_CONSOLES="/dev/tty[1-6]"
CHARMAP="UTF-8"
CODESET="guess"
FONTFACE="TerminusBold"
FONTSIZE="16x32"
```
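That manual edit can also be scripted. Below is a minimal sketch that works on a throwaway copy of the file at **/tmp/console-setup** (an assumption for illustration); on a real system you would apply the same `sed` substitutions to **/etc/default/console-setup** and then run `sudo setupcon` to activate the change.

```shell
# Hedged sketch: set FONTFACE/FONTSIZE non-interactively.
# Working on a copy in /tmp here; on a real system, target
# /etc/default/console-setup and run "sudo setupcon" afterwards.
cat > /tmp/console-setup <<'EOF'
ACTIVE_CONSOLES="/dev/tty[1-6]"
CHARMAP="UTF-8"
CODESET="guess"
FONTFACE="Fixed"
FONTSIZE="8x16"
EOF

# Replace whatever values are present with the desired font settings.
sed -i 's/^FONTFACE=.*/FONTFACE="TerminusBold"/' /tmp/console-setup
sed -i 's/^FONTSIZE=.*/FONTSIZE="16x32"/' /tmp/console-setup

grep '^FONT' /tmp/console-setup
```

Anchoring the patterns with `^FONTFACE=`/`^FONTSIZE=` means the substitution works no matter what the previous values were.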

* * *

**Suggested read:**

* [**How To Switch Between TTYs Without Using Function Keys In Linux**][10]

* * *

##### Display Console fonts

To show your console font, simply type:

```
$ showconsolefont
```

This command will show a table of glyphs or letters of a font.

![][11]

Show console fonts

If your Linux distribution does not have “console-setup”, you can get it from [**here**][12].

On Linux distributions that use **systemd**, you can change the console font by editing the **/etc/vconsole.conf** file.

Here is an example configuration for a German keyboard.

```
$ vi /etc/vconsole.conf

KEYMAP=de-latin1
FONT=Lat2-Terminus16
```

Hope you find this useful.

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/how-to-change-linux-console-font-type-and-size/

作者:[sk][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[2]: https://www.ostechnix.com/wp-content/uploads/2019/08/Ubuntu-Linux-console.png
[3]: https://www.ostechnix.com/find-installed-fonts-commandline-linux/
[4]: https://www.ostechnix.com/wp-content/uploads/2019/08/Choose-encoding-to-set-on-the-console.png
[5]: https://www.ostechnix.com/wp-content/uploads/2019/08/Choose-character-set-in-Ubuntu.png
[6]: https://www.ostechnix.com/wp-content/uploads/2019/08/Choose-font-for-Linux-console.png
[7]: https://www.ostechnix.com/wp-content/uploads/2019/08/Choose-font-size-for-Linux-console.png
[8]: https://www.ostechnix.com/wp-content/uploads/2019/08/Linux-console-tty-ubuntu-1.png
[9]: https://www.ostechnix.com/wp-content/uploads/2019/08/Ubuntu-Linux-TTY-console.png
[10]: https://www.ostechnix.com/how-to-switch-between-ttys-without-using-function-keys-in-linux/
[11]: https://www.ostechnix.com/wp-content/uploads/2019/08/show-console-fonts.png
[12]: https://software.opensuse.org/package/console-setup
@ -0,0 +1,103 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Fix "Kernel driver not installed (rc=-1908)" VirtualBox Error In Ubuntu)
[#]: via: (https://www.ostechnix.com/how-to-fix-kernel-driver-not-installed-rc-1908-virtualbox-error-in-ubuntu/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)

How To Fix "Kernel driver not installed (rc=-1908)" VirtualBox Error In Ubuntu
======

I use Oracle VirtualBox to test various Linux and Unix distributions. I've tested hundreds of virtual machines in VirtualBox so far. Today, I started an Ubuntu 18.04 server VM on my Ubuntu 18.04 desktop and got the following error.

```
Kernel driver not installed (rc=-1908)

The VirtualBox Linux kernel driver (vboxdrv) is either not loaded or there is a permission problem with /dev/vboxdrv. Please reinstall virtualbox-dkms package and load the kernel module by executing

'modprobe vboxdrv'

as root.

where: suplibOsInit what: 3 VERR_VM_DRIVER_NOT_INSTALLED (-1908) - The support driver is not installed. On linux, open returned ENOENT.
```

![][2]

"Kernel driver not installed (rc=-1908)" Error in Ubuntu

I clicked OK to close the message box and saw another one in the background.

```
Failed to open a session for the virtual machine Ubuntu 18.04 LTS Server.

The virtual machine 'Ubuntu 18.04 LTS Server' has terminated unexpectedly during startup with exit code 1 (0x1).

Result Code:
NS_ERROR_FAILURE (0x80004005)
Component:
MachineWrap
Interface:
IMachine {85cd948e-a71f-4289-281e-0ca7ad48cd89}
```

![][3]

The virtual machine has terminated unexpectedly during startup with exit code 1 (0x1)

I didn't know what to do first, so I ran the following command to see if it would help.

```
$ sudo modprobe vboxdrv
```

And I got this error.

```
modprobe: FATAL: Module vboxdrv not found in directory /lib/modules/5.0.0-23-generic
```

After carefully reading both error messages, I realized that the vboxdrv module had not been built for the currently running kernel, so I should update the VirtualBox kernel driver package.

If you ever run into this error on Ubuntu or its variants like Linux Mint, all you have to do is reinstall or update the **"virtualbox-dkms"** package using the command:

```
$ sudo apt install virtualbox-dkms
```

Or, better yet, update the whole system:

```
$ sudo apt update
$ sudo apt upgrade
```

Now the error is gone, and I can start VMs in VirtualBox without any issues.
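
After reinstalling, you can confirm that the module was rebuilt for the running kernel and can be loaded, using the standard module utilities:

```shell
# Load the freshly built module, then check that it is present
$ sudo modprobe vboxdrv
$ lsmod | grep -w vboxdrv
```

Non-empty output from `lsmod` means the driver is loaded and VirtualBox should be able to start VMs again.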

* * *

**Related read:**

  * [**Solve "Result Code: NS_ERROR_FAILURE (0x80004005)" VirtualBox Error In Arch Linux**][4]

* * *

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/how-to-fix-kernel-driver-not-installed-rc-1908-virtualbox-error-in-ubuntu/

作者:[sk][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[2]: https://www.ostechnix.com/wp-content/uploads/2019/08/Kernel-driver-not-installed-virtualbox-ubuntu.png
[3]: https://www.ostechnix.com/wp-content/uploads/2019/08/The-virtual-machine-has-terminated-unexpectedly-during-startup-with-exit-code-1-0x1.png
[4]: https://www.ostechnix.com/solve-result-code-ns_error_failure-0x80004005-virtualbox-error-arch-linux/
@ -0,0 +1,168 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Set up Automatic Security Update (Unattended Upgrades) on Debian/Ubuntu?)
[#]: via: (https://www.2daygeek.com/automatic-security-update-unattended-upgrades-ubuntu-debian/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)

How To Set up Automatic Security Update (Unattended Upgrades) on Debian/Ubuntu?
======

One of the important tasks for Linux admins is keeping the system up to date.

It keeps your system stable and helps you avoid unwanted access and attacks.

Installing a package in Linux is a piece of cake.

In a similar way, we can apply security patches as well.

This is a simple tutorial that shows you how to configure your system to receive automatic security updates.

There are some security risks involved in running automatic package upgrades without inspection, but there are also benefits.

If you don't want to miss security patches and would like to stay up to date with the latest patches, you should set up automatic security updates with the help of the unattended-upgrades utility.

You can **[manually install security updates on Debian & Ubuntu systems][1]** if you don't want to go for automatic updates.

There are many ways to automate this. However, we will go with the official method, and we will cover other ways later.

### How to Install the unattended-upgrades Package in Debian/Ubuntu?

By default, the unattended-upgrades package should be installed on your system. But in case it's not, use the following command to install it.

Use the **[APT-GET Command][2]** or **[APT Command][3]** to install the unattended-upgrades package.

```
$ sudo apt-get install unattended-upgrades
```

The following two files allow you to customize this utility.

```
/etc/apt/apt.conf.d/50unattended-upgrades
/etc/apt/apt.conf.d/20auto-upgrades
```

### Make the Necessary Changes in the 50unattended-upgrades File

By default, only the minimal options required for security updates are enabled. You are not limited to that; you can configure many options in this file to make the utility more useful.

For clarity, I have trimmed the file to show only the enabled lines.

```
# vi /etc/apt/apt.conf.d/50unattended-upgrades

Unattended-Upgrade::Allowed-Origins {
        "${distro_id}:${distro_codename}";
        "${distro_id}:${distro_codename}-security";
        "${distro_id}ESM:${distro_codename}";
};
Unattended-Upgrade::DevRelease "false";
```

Three origins are enabled; the details are below.

  * **`${distro_id}:${distro_codename}`:** It is necessary because security updates may pull in new dependencies from non-security sources.
  * **`${distro_id}:${distro_codename}-security`:** It is used to get security updates from the security sources.
  * **`${distro_id}ESM:${distro_codename}`:** It is used to get security updates for ESM (Extended Security Maintenance) users.

**Enable Email Notification:** If you would like to receive email notifications after every security update, modify the following line (uncomment it and add your email address).

From:

```
//Unattended-Upgrade::Mail "root";
```

To:

```
Unattended-Upgrade::Mail "your-email@example.com";
```

**Auto Remove Unused Dependencies:** You may need to run the "sudo apt autoremove" command after every update to remove unused dependencies from the system.

We can automate this task by changing the following line (uncomment it and change "false" to "true").

From:

```
//Unattended-Upgrade::Remove-Unused-Dependencies "false";
```

To:

```
Unattended-Upgrade::Remove-Unused-Dependencies "true";
```

**Enable Automatic Reboot:** You may need to reboot your system when a security update installs a new kernel. To do so, change the following line.

From:

```
//Unattended-Upgrade::Automatic-Reboot "false";
```

To: uncomment it and change "false" to "true" to enable automatic reboot.

```
Unattended-Upgrade::Automatic-Reboot "true";
```

**Enable Automatic Reboot at a Specific Time:** If automatic reboot is enabled and you would like to perform the reboot at a specific time, make the following change.

From:

```
//Unattended-Upgrade::Automatic-Reboot-Time "02:00";
```

To: uncomment it and change the time as per your requirement. I set it to reboot at 5 AM.

```
Unattended-Upgrade::Automatic-Reboot-Time "05:00";
```

### How to Enable Automatic Security Updates?

Now that we have configured the necessary options, open the following file and verify that both values are set correctly. Neither should be zero (1=enabled, 0=disabled).

```
# vi /etc/apt/apt.conf.d/20auto-upgrades

APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

**Details:**

  * The first line makes apt run "apt-get update" automatically every day.
  * The second line makes apt install security updates automatically every day.
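
Once configured, you don't have to wait a day to find out whether it works. The package installs an `unattended-upgrade` script (note that the executable name is singular) that supports a dry-run mode:

```shell
# Show which packages would be upgraded, without installing anything
$ sudo unattended-upgrade --dry-run --debug
```

The debug output also reports which configured origins matched, which is handy for verifying the Allowed-Origins list above.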

--------------------------------------------------------------------------------

via: https://www.2daygeek.com/automatic-security-update-unattended-upgrades-ubuntu-debian/

作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/manually-install-security-updates-ubuntu-debian/
[2]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
[3]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
@ -0,0 +1,184 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Setup Multilingual Input Method On Ubuntu)
[#]: via: (https://www.ostechnix.com/how-to-setup-multilingual-input-method-on-ubuntu/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)

How To Setup Multilingual Input Method On Ubuntu
======

For those who don't know, there are hundreds of spoken languages in India, and 22 of them are listed as official languages in the Indian constitution. I am not a native English speaker, so I often used **Google Translate** whenever I needed to type and/or translate something from English to my native language, which is Tamil. Well, I guess I don't need to rely on Google Translate anymore. I just found a way to type in Indian languages on Ubuntu. This guide explains how to set up a multilingual input method. It has been written for Ubuntu 18.04 LTS; however, it might also work on other Ubuntu variants like Linux Mint and elementary OS.

### Setup Multilingual Input Method On Ubuntu Linux

With the help of **IBus**, we can easily set up a multilingual input method on Ubuntu and its derivatives. IBus, short for **I**ntelligent Input **Bus**, is an input method framework for multilingual input in Unix-like operating systems. It allows us to type in our native language in most GUI applications, for example LibreOffice.

##### Install IBus On Ubuntu

To install the IBus package on Ubuntu, run:

```
$ sudo apt install ibus-m17n
```

The ibus-m17n package provides input methods for a lot of Indian and other languages, including Amharic, Arabic, Armenian, Assamese, Athapascan languages, Belarusian, Bengali, Burmese, Central Khmer, Chamic languages, Chinese, Cree, Croatian, Czech, Danish, Divehi (Dhivehi, Maldivian), Esperanto, French, Georgian, ancient and modern Greek, Gujarati, Hebrew, Hindi, Inuktitut, Japanese, Kannada, Kashmiri, Kazakh, Korean, Lao, Malayalam, Marathi, Nepali, Ojibwa, Oriya, Panjabi (Punjabi), Persian, Pushto (Pashto), Russian, Sanskrit, Serbian, Sichuan Yi (Nuosu), Siksika, Sindhi, Sinhala (Sinhalese), Slovak, Swedish, Tai languages, Tamil, Telugu, Thai, Tibetan, Uighur (Uyghur), Urdu, Uzbek, Vietnamese, as well as Yiddish.

##### Add input languages

We can add languages in the System **Settings** section. Click the drop-down arrow at the top right corner of your Ubuntu desktop and choose the Settings icon at the bottom left corner.

![][2]

Launch System settings from the top panel

From the Settings section, click on the **Region & Language** option in the left pane. Then click the **+** (plus) sign button on the right side under the **Input Sources** tab.

![][3]

Region & Language section in Settings

In the next window, click on the **three vertical dots** button.

![][4]

Add input source in Ubuntu

Search for and choose the input language you'd like to add from the list.

![][5]

Add input language

For the purpose of this guide, I am going to add the **Tamil** language. After choosing the language, click the **Add** button.

![][6]

Add Input Source

Now you will see that the selected input source has been added. You will find it in the Region & Language section under the Input Sources tab.

![][7]

Input sources section in Ubuntu

Click the "Manage Installed Languages" button under the Input Sources tab.

![][8]

Manage Installed Languages In Ubuntu

Next, you will be asked whether you want to install translation packs for the chosen language. You can install them if you want. Or, simply choose the "Remind Me Later" button; you will be notified the next time you open this dialog.

![][9]

The language support is not installed completely

Once the translation packs are installed, click the **Install / Remove Languages** button. Also make sure IBus is selected as the keyboard input method system.

![][10]

Install / Remove Languages In Ubuntu

Choose your desired language from the list and click the Apply button.

![][11]

Choose input language

That's it. We have successfully set up a multilingual input method on the Ubuntu 18.04 desktop. Similarly, add as many input languages as you want.

After adding all language sources, log out and log back in.

##### Type In Indian languages and/or your preferred languages

Once you have added all your languages, you will see them in the drop-down on the top bar of your Ubuntu desktop.

![][12]

Choose input language from top bar in Ubuntu desktop

Alternatively, you can use the **SUPER+SPACE** keys to switch between input languages.

![][13]

Choose input language using Super+Space keys in Ubuntu

Open any GUI text editor/app and start typing!

![][14]

Type in Indian languages in Ubuntu

##### Add IBus to startup applications

We need to let IBus start automatically at every reboot, so you don't have to start it manually whenever you want to type in your preferred language.

To do so, simply type "startup applications" in the dash and click on the Startup Applications option.

![][15]

Launch startup applications in Ubuntu

In the next window, click Add, type "IBus" in the Name field and "ibus-daemon" in the Command field, and then click the Add button.
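
The same command can also be run by hand in a terminal to start the input bus for the current session; the flags below are ibus-daemon's standard options for daemonizing, replacing any already-running instance, and serving legacy XIM applications:

```shell
# Start IBus in the background for the current session
$ ibus-daemon --daemonize --replace --xim
```

Adding `--daemonize` to the startup-application command is optional, since Startup Applications keeps the process in the background anyway.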

![][16]

Add IBus to startup applications on Ubuntu

From now on, IBus will start automatically at system startup.

* * *

**Suggested read:**

  * [**How To Use Google Translate From Commandline In Linux**][17]
  * [**How To Type Indian Rupee Sign (₹) In Linux**][18]
  * [**How To Setup Japanese Language Environment In Arch Linux**][19]

* * *

So, it is your turn now. What application/tool are you using to type in local Indian languages? Let us know in the comment section below.

**Reference:**

  * [**IBus – Ubuntu Community Wiki**][20]

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/how-to-setup-multilingual-input-method-on-ubuntu/

作者:[sk][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[2]: https://www.ostechnix.com/wp-content/uploads/2019/07/Ubuntu-system-settings.png
[3]: https://www.ostechnix.com/wp-content/uploads/2019/08/Region-language-in-Settings-ubuntu.png
[4]: https://www.ostechnix.com/wp-content/uploads/2019/08/Add-input-source-in-Ubuntu.png
[5]: https://www.ostechnix.com/wp-content/uploads/2019/08/Add-input-language.png
[6]: https://www.ostechnix.com/wp-content/uploads/2019/08/Add-Input-Source-Ubuntu.png
[7]: https://www.ostechnix.com/wp-content/uploads/2019/08/Input-sources-Ubuntu.png
[8]: https://www.ostechnix.com/wp-content/uploads/2019/08/Manage-Installed-Languages.png
[9]: https://www.ostechnix.com/wp-content/uploads/2019/08/The-language-support-is-not-installed-completely.png
[10]: https://www.ostechnix.com/wp-content/uploads/2019/08/Install-Remove-languages.png
[11]: https://www.ostechnix.com/wp-content/uploads/2019/08/Choose-language.png
[12]: https://www.ostechnix.com/wp-content/uploads/2019/08/Choose-input-language-from-top-bar-in-Ubuntu.png
[13]: https://www.ostechnix.com/wp-content/uploads/2019/08/Choose-input-language-using-SuperSpace-keys.png
[14]: https://www.ostechnix.com/wp-content/uploads/2019/08/Setup-Multilingual-Input-Method-On-Ubuntu.png
[15]: https://www.ostechnix.com/wp-content/uploads/2019/08/Launch-startup-applications-in-ubuntu.png
[16]: https://www.ostechnix.com/wp-content/uploads/2019/08/Add-Ibus-to-startup-applications-on-Ubuntu.png
[17]: https://www.ostechnix.com/use-google-translate-commandline-linux/
[18]: https://www.ostechnix.com/type-indian-rupee-sign-%e2%82%b9-linux/
[19]: https://www.ostechnix.com/setup-japanese-language-environment-arch-linux/
[20]: https://help.ubuntu.com/community/ibus
@ -0,0 +1,143 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Type Indian Rupee Sign (₹) In Linux)
[#]: via: (https://www.ostechnix.com/type-indian-rupee-sign-%e2%82%b9-linux/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)

How To Type Indian Rupee Sign (₹) In Linux
======

This brief guide explains how to type the Indian Rupee sign in Unix-like operating systems. The other day, I wanted to type the **Indian Rupee sign (₹)**. My keyboard has the rupee symbol on it, but I didn't know how to type it. After a few Google searches, I found a way to do it. If you have ever wondered how to type the rupee symbol in Linux, follow the steps given below.

### Type Indian Rupee Sign In Linux

The default keyboard layout in most GNU/Linux and other operating systems is **English (US)**. To type the Indian rupee symbol, you need to change the keyboard layout to **English (India, with rupee)**. I have given the steps to do this in three desktop environments: **GNOME**, **MATE**, and **KDE Plasma**. However, the steps are similar for other DEs and other operating systems as well. Just find the keyboard layout settings and change the layout to English (India, with rupee sign).

##### In GNOME Desktop Environment:

I tested this on the Ubuntu 18.04 LTS desktop. It may work on other Linux distros with the GNOME DE.

Click the drop-down arrow at the top right corner of your Ubuntu desktop and choose the **Settings** icon at the bottom left corner.

![][2]

Launch System settings from the top panel

From the Settings section, click on the **Region & Language** option in the left pane. Then click the **+** (plus) sign button on the right side under the **Input Sources** tab.

![][3]

Region & Language section in Settings

In the next window, click on the **three vertical dots** button and choose the input language you'd like to add from the list.

![][4]

Add input source in Ubuntu

Scroll down a bit and search for **English (India)**. Click on it, then select **English (India, with rupee)** from the list, and finally click the Add button.

![][5]

Choose the "English (India, with rupee)" option

You will see it under the Input Sources tab. If you want to make it the default, select it and click the "Up" arrow button.

Close the Settings window, then log out and log back in once.

Now choose "English (India, with rupee)" from the language drop-down box on the top bar of your Ubuntu desktop.

![][6]

Choose the "English (India, with rupee)" option

Alternatively, you can use the **SUPER+SPACE** keys to choose it.

![][7]

Choose the "English (India, with rupee)" option using Super+Space keys

Now you can type the Indian rupee symbol by pressing **Right ALT+4**.

If your keyboard has an **AltGr key** on it, press **AltGr+4** to type the Indian rupee symbol.

Alternatively, you can use the key combination **CTRL+SHIFT+u, then 20b9** to type the rupee symbol (hold the CTRL+SHIFT keys, type **u20b9**, and release the keys). This works in most applications.

##### In MATE Desktop Environment:

If you use the MATE DE, go to **System -> Preferences -> Hardware -> Keyboard** from the menu. Then, click on the **Layouts** tab and click the **Add** button to add an Indian keyboard layout.

![][8]

Add keyboard layout in Keyboard preferences

Choose **India** from the Country drop-down box and **Indian English (India, with rupee)** from the Variants drop-down box. Click Add to add the chosen layout.

![][9]

Choose the Indian English (India, with rupee) option

The Indian layout will be added to the keyboard layout section. Select it and click "Move Up" to make it the default. Then, click the "Options" button to choose the keyboard layout options.

![][10]

From the Keyboard Layout Options window, click on "Key to choose the 3rd level" and choose a key of your choice to use for the rupee symbol. I chose the **"Any Alt"** key (so I can use either the left or right ALT key).

![][11]

Close the Keyboard Preferences window.

Now you can type the Indian rupee symbol by pressing **ALT+4**. Alternatively, you can use the key combination **CTRL+SHIFT+u, then 20b9** to type the rupee symbol (hold the CTRL+SHIFT keys, type **u20b9**, and release the keys). This works in most applications.

Please note that this will work only if your keyboard layout puts the **₹** symbol on the **4** key. If your keyboard doesn't have this symbol, or is very old, it won't work.

##### On KDE Plasma desktop environment:

If you use KDE Plasma, go to **Application Launcher -> System Settings -> Hardware -> Input Devices -> Keyboard -> Layouts**.

Check the "Configure Layouts" box and click "Add".

![][12]

Choose "Indian" from the Layout drop-down box and "English (India, rupee sign)" from the Variant drop-down box.

![][13]

The chosen layout will be added to the Layouts section. Select it and click "Move Up" to make it the default keyboard layout.

Then, click **Advanced**, click **"Key to choose the 3rd level"**, and choose a key of your choice to use for the rupee symbol. I chose the **"Any Alt"** key. Finally, click **"Apply"**.

![][14]

Now you can type the Indian rupee symbol by pressing **ALT+4**. The procedure is the same for all other DEs. All you have to do is find the keyboard layout settings and change the layout to **English (India, rupee sign)**.

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/type-indian-rupee-sign-%e2%82%b9-linux/

作者:[sk][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[2]: https://www.ostechnix.com/wp-content/uploads/2019/07/Ubuntu-system-settings.png
[3]: https://www.ostechnix.com/wp-content/uploads/2019/08/Region-language-in-Settings-ubuntu.png
[4]: https://www.ostechnix.com/wp-content/uploads/2019/08/Add-input-source-in-Ubuntu.png
[5]: https://www.ostechnix.com/wp-content/uploads/2017/08/English-India-with-rupee-option.png
[6]: https://www.ostechnix.com/wp-content/uploads/2017/08/Choose-English-India-with-rupee-option-from-language-bar.png
[7]: https://www.ostechnix.com/wp-content/uploads/2017/08/Choose-English-India-with-rupee-option-using-superspace-keys.png
[8]: https://www.ostechnix.com/wp-content/uploads/2017/08/Keyboard-Preferences_001.png
[9]: https://www.ostechnix.com/wp-content/uploads/2017/08/Choose-a-Layout_002.png
[10]: https://www.ostechnix.com/wp-content/uploads/2017/08/Keyboard-Preferences_003-1.png
[11]: https://www.ostechnix.com/wp-content/uploads/2017/08/Keyboard-Layout-Options_004.png
[12]: https://www.ostechnix.com/wp-content/uploads/2017/08/Keyboard-%E2%80%94-System-Settings_001.png
[13]: https://www.ostechnix.com/wp-content/uploads/2017/08/Add-Layout-%E2%80%94-System-Settings_002.png
[14]: https://www.ostechnix.com/wp-content/uploads/2017/08/Keyboard-layout-settings.png
@ -0,0 +1,282 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to create a vanity Tor .onion web address)
|
||||
[#]: via: (https://opensource.com/article/19/8/how-create-vanity-tor-onion-address)
|
||||
[#]: author: (Kc Nwaezuoke https://opensource.com/users/areahintshttps://opensource.com/users/sethhttps://opensource.com/users/bexelbiehttps://opensource.com/users/bcotton)
|
||||
|
||||
How to create a vanity Tor .onion web address
|
||||
======
|
||||
Generate a vanity .onion website to protect your anonymity—and your
|
||||
visitors' privacy, too.
|
||||
![Password security with a mask][1]
|
||||
|
||||
[Tor][2] is a powerful, open source network that enables anonymous and non-trackable (or difficult to track) browsing of the internet. It's able to achieve this because of users running Tor nodes, which serve as intentional detours between two otherwise direct paths. For instance, if you are in New Zealand and visit python.nz, instead of being routed next door to the data center running python.nz, your traffic might be routed to Pittsburgh and then Berlin and then Vanuatu and finally to python.nz. The Tor network, being built upon opt-in participant nodes, has an ever-changing structure. Only within this dynamic network space can there exist an exciting, transient top-level domain identifier: the .onion address.
|
||||
|
||||
If you own or are looking to create a website, you can generate a vanity .onion site to protect your and your visitors' anonymity.
|
||||
|
||||
### What are onion addresses?
|
||||
|
||||
Because Tor is dynamic and intentionally re-routes traffic in unpredictable ways, an onion address makes both the information provider (you) and the person accessing the information (your traffic) difficult to trace by one another, by intermediate network hosts, or by an outsider. Generally, an onion address is unattractive, with 16-character names like 8zd335ae47dp89pd.onion. Not memorable, and difficult to identify when spoofed, but a few projects that culminated with Shallot (forked as eschalot) provides "vanity" onion addresses to solve those issues.
|
||||
|
||||
Creating a vanity onion URL on your own is possible but computationally expensive. Getting the exact 16 characters you want could take a single computer billions of years to achieve.
|
||||
|
||||
Here's a rough example (courtesy of [Shallot][3]) of how much time it takes to generate certain lengths of characters on a 1.5GHz processor:
|
||||
|
||||
Characters | Time
---|---
1 | Less than 1 second
2 | Less than 1 second
3 | Less than 1 second
4 | 2 seconds
5 | 1 minute
6 | 30 minutes
7 | 1 day
8 | 25 days
9 | 2.5 years
10 | 40 years
11 | 640 years
12 | 10 millennia
13 | 160 millennia
14 | 2.6 million years
|
||||
|
||||
I love how this table goes from 25 days to 2.5 years. If you wanted to generate 56 characters, it would take 10^78 years.
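For a rough sense of where those numbers come from: onion names draw on a 32-character base32 alphabet, so a fixed prefix of length L matches one candidate in 32^L tries on average. Here is a back-of-envelope sketch; the key rate is an assumed figure for illustration, not a benchmark of any real machine:

```
#!/bin/bash
# Estimate the expected work for a vanity prefix of length L.
# KEYS_PER_SEC is an assumption; measure your own hardware to refine it.
L=7
KEYS_PER_SEC=100000
TRIES=$((32 ** L))                     # average candidates to test
echo "expected tries for a ${L}-char prefix: $TRIES"
echo "expected seconds at ${KEYS_PER_SEC}/s: $((TRIES / KEYS_PER_SEC))"
```

Doubling the prefix length multiplies the work by 32 for each extra character, which is why the table grows so steeply.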
|
||||
|
||||
An onion address with 16 characters is referred to as a version 2 onion address, and one with 56 characters is a version 3 onion address. If you're using the Tor browser, you can check out this [v2 address][4] or this [v3 address][5].
|
||||
|
||||
A v3 address has several advantages over v2:
|
||||
|
||||
* Better crypto (v3 replaced SHA1/DH/RSA1024 with SHA3/ed25519/curve25519)
|
||||
* Improved directory protocol that leaks much less information to directory servers
|
||||
* Improved directory protocol with a smaller surface for targeted attacks
|
||||
* Better onion address security against impersonation
|
||||
|
||||
|
||||
|
||||
However, the downside (supposedly) of v3 is the marketing effort you might need to get netizens to type that marathon-length URL in their browser.
|
||||
|
||||
You can [learn more about v3][6] in the Tor docs.
|
||||
|
||||
### Why you might need an onion address
|
||||
|
||||
A .onion domain has a few key advantages. Its key feature is that it can be accessed only with a Tor browser. Many people don't even know Tor exists, so you shouldn't expect massive traffic on your .onion site. However, the Tor browser provides numerous layers of anonymity not available on more popular browsers. If you want to ensure near-total anonymity for both you and your visitors, onion addresses are built for it.
|
||||
|
||||
With Tor, you do not need to register with ICANN to create your own domain. You don't need to hide your details from Whois searches, and your ICANN account won't be vulnerable to malicious takeovers. You are completely in control of your privacy and your domain.
|
||||
|
||||
An onion address is also an effective way to bypass censorship restrictions imposed by a government or regime. Its privacy helps protect you if your site may be viewed as a threat to the interests of the political class. Sites like Wikileaks are the best examples.
|
||||
|
||||
### What you need to generate a vanity URL
|
||||
|
||||
To configure a vanity onion address, you need to generate a new private key to match a custom hostname.
|
||||
|
||||
Two applications that you can use for generating .onion addresses are [eschalot][7] for v2 addresses and [mkp224o][8] for v3 addresses.
|
||||
|
||||
Eschalot is a Tor hidden service name generator. It allows you to produce a (partially) customized vanity .onion address using a brute-force method. Eschalot is distributed in source form under the BSD license and should compile on any Unix or Linux system.
|
||||
|
||||
mkp224o is a vanity address generator for ed25519 .onion services that's available on GitHub with the CC0 1.0 Universal license. It generates vanity 56-character onion addresses.
|
||||
|
||||
Here's a simple explanation of how these applications work. (This assumes you are comfortable with Git.)
|
||||
|
||||
#### Eschalot
|
||||
|
||||
Eschalot requires [OpenSSL][9] 0.9.7 or later libraries with source headers. Confirm your version with this command:
|
||||
|
||||
|
||||
```
|
||||
$ openssl version
|
||||
OpenSSL 1.1.1c FIPS 28 May 2019
|
||||
```
|
||||
|
||||
You also need a [Make][10] utility (either BSD or GNU Make will do) and a C compiler (GCC, PCC, or LLVM/Clang).
|
||||
|
||||
Clone the eschalot repo to your system, and then compile:
|
||||
|
||||
|
||||
```
|
||||
$ git clone https://github.com/ReclaimYourPrivacy/eschalot.git
|
||||
$ cd eschalot-1.2.0
|
||||
$ make
|
||||
```
|
||||
|
||||
If you're not using GCC, you must set the **CC** environment variable. For example, to use PCC instead:
|
||||
|
||||
|
||||
```
|
||||
$ make clean
|
||||
$ env CC=pcc make
|
||||
```
|
||||
|
||||
##### Using eschalot
|
||||
|
||||
To see eschalot's help page, type **./eschalot** in the terminal:
|
||||
|
||||
|
||||
```
|
||||
$ ./eschalot
|
||||
Version: 1.2.0
|
||||
|
||||
usage:
|
||||
eschalot [-c] [-v] [-t count] ([-n] [-l min-max] -f filename) | (-r regex) | (-p prefix)
|
||||
-v : verbose mode - print extra information to STDERR
|
||||
-c : continue searching after the hash is found
|
||||
-t count : number of threads to spawn (default is one)
|
||||
-l min-max : look for prefixes that are from 'min' to 'max' characters long
|
||||
-n : Allow digits to be part of the prefix (affects wordlist mode only)
|
||||
-f filename: name of the text file with a list of prefixes
|
||||
-p prefix : single prefix to look for (1-16 characters long)
|
||||
-r regex : search for a POSIX-style regular expression
|
||||
|
||||
Examples:
|
||||
eschalot -cvt4 -l8-12 -f wordlist.txt >> results.txt
|
||||
eschalot -v -r '^test|^exam'
|
||||
eschalot -ct5 -p test
|
||||
|
||||
base32 alphabet allows letters [a-z] and digits [2-7]
|
||||
Regex pattern examples:
|
||||
xxx must contain 'xxx'
|
||||
^foo must begin with 'foo'
|
||||
bar$ must end with 'bar'
|
||||
b[aoeiu]r must have a vowel between 'b' and 'r'
|
||||
'^ab|^cd' must begin with 'ab' or 'cd'
|
||||
[a-z]{16} must contain letters only, no digits
|
||||
^dusk.*dawn$ must begin with 'dusk' and end with 'dawn'
|
||||
[a-z2-7]{16} any name - will succeed after one iteration
|
||||
```
|
||||
|
||||
You can use eschalot to generate an address using the prefix **-p** for _privacy_. Assuming your system has multiple CPU cores, use _multi-threading_ (**-t**) to speed up the URL generation. To _get verbose output_, use the **-v** option. Write the results of your calculation to a file named **newonion.txt**:
|
||||
|
||||
|
||||
```
|
||||
./eschalot -v -t4 -p privacy >> newonion.txt
|
||||
```
|
||||
|
||||
The script executes until it finds a suitable match:
|
||||
|
||||
|
||||
```
|
||||
$ ./eschalot -v -t4 -p privacy >> newonion.txt
|
||||
Verbose, single result, no digits, 4 threads, prefixes 7-7 characters long.
|
||||
Thread #1 started.
|
||||
Thread #2 started.
|
||||
Thread #3 started.
|
||||
Thread #4 started.
|
||||
Running, collecting performance data...
|
||||
Found a key for privacy (7) - privacyzofgsihx2.onion
|
||||
```
|
||||
|
||||
To access the public and private keys eschalot generates, locate **newonion.txt** in the eschalot folder.
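If you just want the address itself, you can pull it back out of the results file with a quick pattern match. This assumes the .onion name appears verbatim in **newonion.txt**, as in the output shown above:

```
# Extract the first 16-character v2 onion name from eschalot's output file.
# v2 names use only the base32 alphabet: letters a-z and digits 2-7.
grep -o '[a-z2-7]\{16\}\.onion' newonion.txt | head -1
```

With the run above, this prints `privacyzofgsihx2.onion`.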
|
||||
|
||||
#### mkp224o
|
||||
|
||||
mkp224o requires a C99 compatible compiler, Libsodium, GNU Make, GNU Autoconf, and a Unix-like platform. It has been tested on Linux and OpenBSD.
|
||||
|
||||
To get started, clone the mkp224o repo onto your system, generate the required [Autotools infrastructure][11], configure, and compile:
|
||||
|
||||
|
||||
```
|
||||
$ git clone https://github.com/cathugger/mkp224o.git
|
||||
$ cd mkp224o
|
||||
$ ./autogen.sh
|
||||
$ ./configure
|
||||
$ make
|
||||
```
|
||||
|
||||
##### Using mkp224o
|
||||
|
||||
Type **./mkp224o -h** to view Help:
|
||||
|
||||
|
||||
```
|
||||
$ ./mkp224o -h
|
||||
Usage: ./mkp224o filter [filter...] [options]
|
||||
./mkp224o -f filterfile [options]
|
||||
Options:
|
||||
-h - print help to stdout and quit
|
||||
-f - specify filter file which contains filters separated by newlines
|
||||
-D - deduplicate filters
|
||||
-q - do not print diagnostic output to stderr
|
||||
-x - do not print onion names
|
||||
-v - print more diagnostic data
|
||||
-o filename - output onion names to specified file (append)
|
||||
-O filename - output onion names to specified file (overwrite)
|
||||
-F - include directory names in onion names output
|
||||
-d dirname - output directory
|
||||
-t numthreads - specify number of threads to utilise (default - CPU core count or 1)
|
||||
-j numthreads - same as -t
|
||||
-n numkeys - specify number of keys (default - 0 - unlimited)
|
||||
-N numwords - specify number of words per key (default - 1)
|
||||
-z - use faster key generation method; this is now default
|
||||
-Z - use slower key generation method
|
||||
-B - use batching key generation method (>10x faster than -z, experimental)
|
||||
-s - print statistics each 10 seconds
|
||||
-S t - print statistics every specified ammount of seconds
|
||||
-T - do not reset statistics counters when printing
|
||||
-y - output generated keys in YAML format instead of dumping them to filesystem
|
||||
-Y [filename [host.onion]] - parse YAML encoded input and extract key(s) to filesystem
|
||||
```
|
||||
|
||||
One or more filters are required for mkp224o to work. When executed, mkp224o creates a directory with secret and public keys, plus a hostname for each discovered service. By default, **root** is the current directory, but that can be overridden with the **-d** switch.
|
||||
|
||||
Use the **-t numthreads** option to define how many threads you want to use during processing, and **-v** to see verbose output. Use the **fast** filter, and generate four keys by setting the **-n** option:
|
||||
|
||||
|
||||
```
|
||||
$ ./mkp224o filter fast -t 4 -v -n 4 -d ~/Extracts
|
||||
set workdir: /home/areahints/Extracts/
|
||||
sorting filters... done.
|
||||
filters:
|
||||
fast
|
||||
filter
|
||||
in total, 2 filters
|
||||
using 4 threads
|
||||
fastrcl5totos3vekjbqcmgpnias5qytxnaj7gpxtxhubdcnfrkapqad.onion
|
||||
fastz7zvpzic6dp6pvwpmrlc43b45usm2itkn4bssrklcjj5ax74kaad.onion
|
||||
fastqfj44b66mqffbdfsl46tg3c3xcccbg5lfuhr73k7odfmw44uhdqd.onion
|
||||
fast4xwqdhuphvglwic5dfcxoysz2kvblluinr4ubak5pluunduy7qqd.onion
|
||||
waiting for threads to finish... done.
|
||||
```
|
||||
|
||||
In the directory path set with **-d**, mkp224o creates a folder with the v3 address name it has generated, and within it you see your hostname, secret, and public files.
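Once you have a key directory you like, the usual next step is to hand it to Tor. Here is a hedged sketch; the paths, the service name, and the Tor account name are illustrative and vary by distribution, while **HiddenServiceDir** and **HiddenServicePort** are standard torrc options:

```
#!/bin/bash
# Install one generated key directory as a Tor v3 onion service.
# SRC is one of the directories mkp224o created; DST is illustrative.
SRC=~/Extracts/fastrcl5totos3vekjbqcmgpnias5qytxnaj7gpxtxhubdcnfrkapqad.onion
DST=/var/lib/tor/hidden_service

sudo mkdir -p "$DST"
sudo cp "$SRC"/hs_ed25519_secret_key "$SRC"/hs_ed25519_public_key "$SRC"/hostname "$DST"/
sudo chown -R toranon:toranon "$DST"   # the Tor user name varies by distro
sudo chmod 700 "$DST"                  # Tor refuses world-readable service dirs
```

Then point your torrc at the directory with `HiddenServiceDir /var/lib/tor/hidden_service/` and `HiddenServicePort 80 127.0.0.1:8080`, and restart Tor.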
|
||||
|
||||
Use the **-s** switch to enable printing statistics, which may be useful when benchmarking different ed25519 implementations on your machine. Also, read the **OPTIMISATION.txt** file in mkp224o for performance-related tips.
|
||||
|
||||
### Notes about security
|
||||
|
||||
If you're wondering about the security of v2 generated keys, [Shallot][3] provides an interesting take:
|
||||
|
||||
> It is sometimes claimed that private keys generated by Shallot are less secure than those generated by Tor. This is false. Although Shallot generates a keypair with an unusually large public exponent **e**, it performs all of the sanity checks specified by PKCS #1 v2.1 (directly in **sane_key**), and then performs all of the sanity checks that Tor does when it generates an RSA keypair (by calling the OpenSSL function **RSA_check_key**).
|
||||
|
||||
"[Zooko's Triangle][12]" (which is discussed in [Stiegler's Petname Systems][13]) argues that names cannot be global, secure, and memorable at the same time. This means while .onion names are unique and secure, they have the disadvantage that they cannot be meaningful to humans.
|
||||
|
||||
Imagine that an attacker creates an .onion name that looks similar to the .onion of a different onion service and replaces its hyperlink on the onion wiki. How long would it take for someone to recognize it?
|
||||
|
||||
The onion address system has trade-offs, but vanity addresses may be a reasonable balance among them.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/8/how-create-vanity-tor-onion-address
|
||||
|
||||
作者:[Kc Nwaezuoke][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/areahintshttps://opensource.com/users/sethhttps://opensource.com/users/bexelbiehttps://opensource.com/users/bcotton
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security_password_mask_secret.png?itok=EjqwosxY (Password security with a mask)
|
||||
[2]: https://www.torproject.org/
|
||||
[3]: https://github.com/katmagic/Shallot
|
||||
[4]: http://6zdgh5a5e6zpchdz.onion/
|
||||
[5]: http://vww6ybal4bd7szmgncyruucpgfkqahzddi37ktceo3ah7ngmcopnpyyd.onion/
|
||||
[6]: https://www.torproject.org/docs/tor-onion-service.html.en#four
|
||||
[7]: https://github.com/ReclaimYourPrivacy/eschalot
|
||||
[8]: https://github.com/cathugger/mkp224o
|
||||
[9]: https://www.openssl.org/
|
||||
[10]: https://en.wikipedia.org/wiki/Make_(software)
|
||||
[11]: https://opensource.com/article/19/7/introduction-gnu-autotools
|
||||
[12]: https://en.wikipedia.org/wiki/Zooko%27s_triangle
|
||||
[13]: http://www.skyhunter.com/marcs/petnames/IntroPetNames.html
|
@ -0,0 +1,195 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Keeping track of Linux users: When do they log in and for how long?)
|
||||
[#]: via: (https://www.networkworld.com/article/3431864/keeping-track-of-linux-users-when-do-they-log-in-and-for-how-long.html)
|
||||
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
|
||||
|
||||
Keeping track of Linux users: When do they log in and for how long?
|
||||
======
|
||||
Getting an idea how often your users are logging in and how much time they spend on a Linux server is pretty easy with a couple commands and maybe a script or two.
|
||||
![Adikos \(CC BY 2.0\)][1]
|
||||
|
||||
The Linux command line provides some excellent tools for determining how frequently users log in and how much time they spend on a system. Pulling information from the **/var/log/wtmp** file that maintains details on user logins can be time-consuming, but with a couple easy commands, you can extract a lot of useful information on user logins.
|
||||
|
||||
One of the commands that helps with this is the **last** command. It provides a list of user logins that can go quite far back. The output looks like this:
|
||||
|
||||
```
|
||||
$ last | head -5 | tr -s " "
|
||||
shs pts/0 192.168.0.14 Wed Aug 14 09:44 still logged in
|
||||
shs pts/0 192.168.0.14 Wed Aug 14 09:41 - 09:41 (00:00)
|
||||
shs pts/0 192.168.0.14 Wed Aug 14 09:40 - 09:41 (00:00)
|
||||
nemo pts/1 192.168.0.18 Wed Aug 14 09:38 still logged in
|
||||
shs pts/0 192.168.0.14 Tue Aug 13 06:15 - 18:18 (00:24)
|
||||
```
|
||||
|
||||
Note that the **tr -s " "** portion of the command above reduces strings of blanks to single blanks, and in this case, it keeps the output shown from being so wide that it would be wrapped around on this web page. Without the **tr** command, that output would look like this:
|
||||
|
||||
```
|
||||
$ last | head -5
|
||||
shs pts/0 192.168.0.14 Wed Aug 14 09:44 still logged in
|
||||
shs pts/0 192.168.0.14 Wed Aug 14 09:41 - 09:41 (00:00)
|
||||
shs pts/0 192.168.0.14 Wed Aug 14 09:40 - 09:41 (00:00)
|
||||
nemo pts/1 192.168.0.18 Wed Aug 14 09:38 still logged in
|
||||
shs pts/0 192.168.0.14 Wed Aug 14 09:15 - 09:40 (00:24)
|
||||
```
|
||||
|
||||
**[ Two-Minute Linux Tips: [Learn how to master a host of Linux commands in these 2-minute video tutorials][2] ]**
|
||||
|
||||
While it’s easy to generate and review login activity records like these for all users with the **last** command or for some particular user with a **last username** command, without the pipe to **head**, these commands will generally result in a _lot_ of data. In this case, a listing for all users would have 908 lines.
|
||||
|
||||
```
|
||||
$ last | wc -l
|
||||
908
|
||||
```
|
||||
|
||||
### Counting logins with last
|
||||
|
||||
If you don't need all of the login detail, you can view user login sessions as a simple count of logins for all users on the system with a command like this:
|
||||
|
||||
```
|
||||
$ for user in `ls /home`; do echo -ne "$user\t"; last $user | wc -l; done
|
||||
dorothy 21
|
||||
dory 13
|
||||
eel 29
|
||||
jadep 124
|
||||
jdoe 27
|
||||
jimp 42
|
||||
nemo 9
|
||||
shark 17
|
||||
shs 423
|
||||
test 2
|
||||
waynek 201
|
||||
```
|
||||
|
||||
The list above shows how many times each user has logged in since the current **/var/log/wtmp** file was initiated. Notice, however, that the command to generate it depends on user accounts being set up in the default /home directory.
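If your accounts don't all live under /home, a variant that enumerates regular users from the passwd database avoids that assumption. The UID >= 1000 cutoff is a common Fedora/RHEL convention rather than a universal rule, and counting with **grep -c** also skips the "wtmp begins" footer line that **wc -l** would include:

```
# Count logins per user without assuming home directories live in /home.
# Only lines that begin with the username are counted, so last's
# trailing "wtmp begins" line doesn't inflate the totals.
getent passwd | awk -F: '$3 >= 1000 && $1 != "nobody" {print $1}' |
while read -r user; do
    printf '%s\t%s\n' "$user" "$(last "$user" | grep -c "^$user ")"
done
```

The counts may therefore come out slightly lower than the **wc -l** figures shown above.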
|
||||
|
||||
Depending on how much data has been accumulated in your current **wtmp** file, you may see a lot of logins or relatively few. To get a little more insight into how relevant the number of logins is, you could turn this command into a script, adding a command that shows when the first login in the current file occurred, to provide a little perspective.
|
||||
|
||||
```
|
||||
#!/bin/bash
|
||||
|
||||
echo -n "Logins since "
|
||||
who /var/log/wtmp | head -1 | awk '{print $3}'
|
||||
echo "======================="
|
||||
|
||||
for user in `ls /home`
|
||||
do
|
||||
echo -ne "$user\t"
|
||||
last $user | wc -l
|
||||
done
|
||||
```
|
||||
|
||||
When you run the script, the "Logins since" line will let you know how to interpret the stats shown.
|
||||
|
||||
```
|
||||
$ ./show_user_logins
|
||||
Logins since 2018-10-05
|
||||
=======================
|
||||
dorothy 21
|
||||
dory 13
|
||||
eel 29
|
||||
jadep 124
|
||||
jdoe 27
|
||||
jimp 42
|
||||
nemo 9
|
||||
shark 17
|
||||
shs 423
|
||||
test 2
|
||||
waynek 201
|
||||
```
|
||||
|
||||
### Looking at accumulated login time with **ac**
|
||||
|
||||
The **ac** command provides a report on user login time, that is, hours spent logged in. As with the **last** command, **ac** reports on activity since the last rollover of the **wtmp** file, because **ac**, like **last**, gets its details from **/var/log/wtmp**. The **ac** command, however, provides a much different view of user activity than a count of logins. For a single user, we might use a command like this one:
|
||||
|
||||
```
|
||||
$ ac nemo
|
||||
total 31.61
|
||||
```
|
||||
|
||||
This tells us that nemo has spent nearly 32 hours logged in. To use the command to generate a listing of the login times for all users, you might use a command like this:
|
||||
|
||||
```
|
||||
$ for user in `ls /home`; do ac $user | sed "s/total/$user\t/" ; done
|
||||
dorothy 9.12
|
||||
dory 1.67
|
||||
eel 4.32
|
||||
…
|
||||
```
|
||||
|
||||
In this command, we are replacing the word “total” in each line with the relevant username. And, as long as usernames are fewer than 8 characters, the output will line up nicely. To left justify the output, you can modify that command to this:
|
||||
|
||||
```
|
||||
$ for user in `ls /home`; do ac $user | sed "s/^\t//" | sed "s/total/$user\t/" ; done
|
||||
dorothy 9.12
|
||||
dory 1.67
|
||||
eel 4.32
|
||||
...
|
||||
```
|
||||
|
||||
The first use of **sed** in that string of commands strips off the initial tabs.
|
||||
|
||||
To turn this command into a script and display the initial date for the **wtmp** file to add more relevance to the hour counts, you could use a script like this:
|
||||
|
||||
```
|
||||
#!/bin/bash
|
||||
|
||||
echo -n "hours online since "
|
||||
who /var/log/wtmp | head -1 | awk '{print $3}'
|
||||
echo "============================="
|
||||
|
||||
for user in `ls /home`
|
||||
do
|
||||
ac $user | sed "s/^\t//" | sed "s/total/$user\t/"
|
||||
done
|
||||
```
|
||||
|
||||
If you run the script, you'll see the hours spent by each user over the lifespan of the **wtmp** file:
|
||||
|
||||
```
|
||||
$ ./show_user_hours
|
||||
hours online since 2018-10-05
|
||||
=============================
|
||||
dorothy 70.34
|
||||
dory 4.67
|
||||
eel 17.05
|
||||
jadep 186.04
|
||||
jdoe 28.20
|
||||
jimp 11.49
|
||||
nemo 11.61
|
||||
shark 13.04
|
||||
shs 3563.60
|
||||
test 1.00
|
||||
waynek 312.00
|
||||
```
|
||||
|
||||
The difference between the user activity levels in this example is pretty obvious with one user spending only one hour on the system since October and another dominating the system.
|
||||
|
||||
### Wrap-up
|
||||
|
||||
Reviewing how often users log into a system and how many hours they spend online can give you an overview of how a system is being used and who its heaviest users likely are. Of course, login time does not necessarily correspond to how much work each user is getting done, but it's likely close, and commands such as **last** and **ac** can help you identify the most active users.
|
||||
|
||||
### More Linux advice: Sandra Henry-Stocker explains how to use the rev command in this 2-Minute Linux Tip video
|
||||
|
||||
Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3431864/keeping-track-of-linux-users-when-do-they-log-in-and-for-how-long.html
|
||||
|
||||
作者:[Sandra Henry-Stocker][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2019/08/keyboard-adikos-100808324-large.jpg
|
||||
[2]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
|
||||
[3]: https://www.facebook.com/NetworkWorld/
|
||||
[4]: https://www.linkedin.com/company/network-world
|
@ -0,0 +1,169 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Cockpit and the evolution of the Web User Interface)
|
||||
[#]: via: (https://fedoramagazine.org/cockpit-and-the-evolution-of-the-web-user-interface/)
|
||||
[#]: author: (Shaun Assam https://fedoramagazine.org/author/sassam/)
|
||||
|
||||
Cockpit and the evolution of the Web User Interface
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
Over 3 years ago the Fedora Magazine published an article entitled [Cockpit: an overview][2]. Since then, the interface has seen some eye-catching changes. Today’s Cockpit is cleaner, and the larger fonts make better use of screen real estate.
|
||||
|
||||
This article will go over some of the changes made to the UI. It will also explore some of the general tools available in the web interface to simplify those monotonous sysadmin tasks.
|
||||
|
||||
### Cockpit installation
|
||||
|
||||
Cockpit can be installed using the **dnf install cockpit** command. This gives you a minimal setup with the basic tools required to use the interface.
|
||||
|
||||
Another option is to install the Headless Management group. This will install additional packages used to extend the usability of Cockpit. It includes extensions for NetworkManager, software packages, disk, and SELinux management.
|
||||
|
||||
Run the following commands to enable the web service on boot and open the firewall port:
|
||||
|
||||
```
|
||||
$ sudo systemctl enable --now cockpit.socket
|
||||
Created symlink /etc/systemd/system/sockets.target.wants/cockpit.socket -> /usr/lib/systemd/system/cockpit.socket
|
||||
|
||||
$ sudo firewall-cmd --permanent --add-service cockpit
|
||||
success
|
||||
$ sudo firewall-cmd --reload
|
||||
success
|
||||
```
|
||||
|
||||
### Logging into the web interface
|
||||
|
||||
To access the web interface, open your favourite browser and enter the server’s domain name or IP in the address bar followed by the service port (9090). Because Cockpit uses HTTPS, the installation will create a self-signed certificate to encrypt passwords and other sensitive data. You can safely accept this certificate, or request a CA certificate from your sysadmin or a trusted source.
|
||||
|
||||
Once the certificate is accepted, the new and improved login screen will appear. Long-time users will notice the username and password fields have been moved to the top. In addition, the white background behind the credential fields immediately grabs the user’s attention.
|
||||
|
||||
![][3]
|
||||
|
||||
A feature added to the login screen since the previous article is logging in with **sudo** privileges — if your account is a member of the wheel group. Check the box beside _Reuse my password for privileged tasks_ to elevate your rights.
|
||||
|
||||
Another addition to the login screen is the option to connect to remote servers that are also running the Cockpit web service. Click _Other Options_ and enter the host name or IP address of the remote machine to manage it from your local browser.
|
||||
|
||||
### Home view
|
||||
|
||||
Right off the bat we get a basic overview of common system information. This includes the make and model of the machine, the operating system, if the system is up-to-date, and more.
|
||||
|
||||
![][4]
|
||||
|
||||
Clicking the make/model of the system displays hardware information such as the BIOS/Firmware. It also includes details about the components as seen with **lspci**.
|
||||
|
||||
![][5]
|
||||
|
||||
Clicking on any of the options to the right will display the details of that device. For example, the _% of CPU cores_ option reveals details on how much is used by the user and the kernel. In addition, the _Memory & Swap_ graph displays how much of the system’s memory is used, how much is cached, and how much of the swap partition is active. The _Disk I/O_ and _Network Traffic_ graphs are linked to the Storage and Networking sections of Cockpit. These topics will be revisited in an upcoming article that explores the system tools in detail.
|
||||
|
||||
#### Secure Shell Keys and authentication
|
||||
|
||||
Because security is a key factor for sysadmins, Cockpit now has the option to view the machine’s MD5 and SHA256 key fingerprints. Clicking the **Show fingerprints** option reveals the server’s ECDSA, ED25519, and RSA fingerprint keys.
|
||||
|
||||
![][6]
|
||||
|
||||
You can also add your own keys by clicking on your username in the top-right corner and selecting **Authentication**. Click on **Add keys** to validate the machine on other systems. You can also revoke your privileges in the Cockpit web service by clicking on the **X** button to the right.
|
||||
|
||||
![][7]
|
||||
|
||||
#### Changing the host name and joining a domain
|
||||
|
||||
Changing the host name is a one-click solution from the home page. Click the host name currently displayed, and enter the new name in the _Change Host Name_ box. One of the latest features is the option to provide a _Pretty name_.
|
||||
|
||||
Another feature added to Cockpit is the ability to connect to a directory server. Click _Join a domain_ and a pop-up will appear requesting the domain address or name, organization unit (optional), and the domain admin’s credentials. The Domain Membership group provides all the packages required to join an LDAP server including FreeIPA, and the popular Active Directory.
|
||||
|
||||
To opt-out, click on the domain name followed by _Leave Domain_. A warning will appear explaining the changes that will occur once the system is no longer on the domain. To confirm click the red _Leave Domain_ button.
|
||||
|
||||
![][8]
|
||||
|
||||
#### Configuring NTP and system date and time
|
||||
|
||||
Using the command-line and editing config files definitely takes the cake when it comes to maximum tweaking. However, there are times when something more straightforward would suffice. With Cockpit, you have the option to set the system’s date and time manually or automatically using NTP. Once synchronized, the information icon on the right turns from red to blue. The icon will disappear if you manually set the date and time.
|
||||
|
||||
To change the timezone, type the continent and a list of cities will populate beneath.
|
||||
|
||||
![][9]
|
||||
|
||||
#### Shutting down and restarting
|
||||
|
||||
You can easily shut down and restart the server right from the home screen in Cockpit. You can also delay the shutdown/reboot and send a message to warn users.
|
||||
|
||||
![][10]
|
||||
|
||||
#### Configuring the performance profile
|
||||
|
||||
If the _tuned_ and _tuned-utils_ packages are installed, performance profiles can be changed from the main screen. By default it is set to a recommended profile. However, if the purpose of the server requires more performance, we can change the profile from Cockpit to suit those needs.
|
||||
|
||||
![][11]
|
||||
|
||||
### Terminal web console
|
||||
|
||||
A Linux sysadmin’s toolbox would be useless without access to a terminal. This allows admins to fine-tune the server beyond what’s available in Cockpit. With the addition of themes, admins can quickly adjust the text and background colours to suit their preference.
|
||||
|
||||
Also, if you type **exit** by mistake, click the _Reset_ button in the top-right corner. This will provide a fresh screen with a flashing cursor.
|
||||
|
||||
![][12]
|
||||
|
||||
### Adding a remote server and the Dashboard overlay
|
||||
|
||||
The Headless Management group includes the Dashboard module (**cockpit-dashboard**). This provides an overview of the CPU, memory, network, and disk performance in a real-time graph. Remote servers can also be added and managed through the same interface.
|
||||
|
||||
For example, to add a remote computer in Dashboard, click the **+** button. Enter the name or IP address of the server and select the colour of your choice. This helps to differentiate the stats of the servers in the graph. To switch between servers, click on the host name (as seen in the screen-cast below). To remove a server from the list, click the check-mark icon, then click the red trash icon. The example below demonstrates how Cockpit manages a remote machine named _server02.local.lan_.
|
||||
|
||||
![][13]
|
||||
|
||||
### Documentation and finding help
|
||||
|
||||
As always, the _man_ pages are a great place to find documentation. A simple search on the command line turns up pages pertaining to different aspects of using and configuring the web service.
|
||||
|
||||
```
|
||||
$ man -k cockpit
|
||||
cockpit (1) - Cockpit
|
||||
cockpit-bridge (1) - Cockpit Host Bridge
|
||||
cockpit-desktop (1) - Cockpit Desktop integration
|
||||
cockpit-ws (8) - Cockpit web service
|
||||
cockpit.conf (5) - Cockpit configuration file
|
||||
```
|
||||
|
||||
The Fedora repository also has a package called **cockpit-doc**. The package’s description explains it best:
|
||||
|
||||
> The Cockpit Deployment and Developer Guide shows sysadmins how to deploy Cockpit on their machines as well as helps developers who want to embed or extend Cockpit.
|
||||
|
||||
For more documentation visit <https://cockpit-project.org/external/source/HACKING>
|
||||
|
||||
### Conclusion
|
||||
|
||||
This article only touches upon some of the main functions available in Cockpit. Managing storage devices, networking, user accounts, and software control will be covered in an upcoming article, along with optional extensions such as the 389 directory service and the _cockpit-ostree_ module used to handle packages in Fedora Silverblue.
|
||||
|
||||
The options continue to grow as more users adopt Cockpit. The interface is ideal for admins who want a light-weight interface to control their server(s).
|
||||
|
||||
What do you think about Cockpit? Share your experience and ideas in the comments below.
|
||||
|
||||
--------------------------------------------------------------------------------

via: https://fedoramagazine.org/cockpit-and-the-evolution-of-the-web-user-interface/

作者:[Shaun Assam][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/sassam/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/08/cockpit-816x345.jpg
[2]: https://fedoramagazine.org/cockpit-overview/
[3]: https://fedoramagazine.org/wp-content/uploads/2019/08/cockpit-login-screen.png
[4]: https://fedoramagazine.org/wp-content/uploads/2019/08/cockpit-home-screen.png
[5]: https://fedoramagazine.org/wp-content/uploads/2019/08/cockpit-system-info.gif
[6]: https://fedoramagazine.org/wp-content/uploads/2019/08/cockpit-ssh-key-fingerprints.png
[7]: https://fedoramagazine.org/wp-content/uploads/2019/08/cockpit-authentication.png
[8]: https://fedoramagazine.org/wp-content/uploads/2019/08/cockpit-hostname-domain.gif
[9]: https://fedoramagazine.org/wp-content/uploads/2019/08/cockpit-date-time.png
[10]: https://fedoramagazine.org/wp-content/uploads/2019/08/cockpit-power-options.gif
[11]: https://fedoramagazine.org/wp-content/uploads/2019/08/cockpit-tuned.gif
[12]: https://fedoramagazine.org/wp-content/uploads/2019/08/cockpit-terminal.gif
[13]: https://fedoramagazine.org/wp-content/uploads/2019/08/cockpit-add-remote-servers.gif
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Designing open audio hardware as DIY kits)
[#]: via: (https://opensource.com/article/19/8/open-audio-kit-developer)
[#]: author: (Chris Hermansen https://opensource.com/users/clhermansen)

Designing open audio hardware as DIY kits
======

Did you know you can build your own speaker systems? Muffsy creator shares how he got into making open audio hardware and why he started selling his designs to other DIYers.

![Colorful sound wave graph][1]
Previously in this series about people who are developing audio technology in the open, I interviewed [Juan Rios, developer and maintainer of Guayadeque][2] and [Sander Jansen, developer and maintainer of Goggles Music Manager][3]. These conversations have broadened my thinking and helped me enjoy their software even more than before.

For this article, I contacted Håvard Skrödahl, founder of [Muffsy][4]. His hobby is designing open source audio hardware, and he offers his designs as kits for those of us who can't wait to wind up the soldering iron for another adventure.

I've built two of Håvard's kits: a [moving coil (MC) cartridge preamp][5] and a [moving magnet (MM) cartridge phono preamp][6]. Both were a lot of fun to build and sound great. They were also a bit of a stroll down memory lane for me. In my 20s, I built some other audio kits, including a [Hafler DH-200 power amplifier][7] and a [DH-110 preamplifier][8]. Before that, I built a power amplifier using a Motorola circuit design; both the design and the amplifier were lost along the way, but they were a lot of fun!

### Meet Håvard Skrödahl, open audio hardware maker

**Q: How did you get started designing music playback hardware?**

**A:** I was a teenager in the mid-'80s, and records and cassettes were the only options we had. Vinyl was, of course, the best quality, while cassettes were more portable. About five years ago, I was getting back into vinyl and found myself lacking the equipment I needed. So, I decided to make my own phono stage (also called a phono preamp). The first iteration was bulky and had a relatively bad [RIAA filter][9], but I improved it during the first few months.

The first version was completely homemade. I was messing about with toner transfer and chemicals and constantly breaking drill bits to create this board.

![Phono stage board top][10]

Top of the phono stage board

![Bottom of the phono stage board][11]

Bottom of the phono stage board

I was over the moon with this phono stage. It worked perfectly, even though the RIAA curve was out by quite a bit. It also had variable input impedance (greatly inspired by the [Audiokarma CNC phono stage][12]).

When I moved on to getting boards professionally made, I found that the impedance settings could be improved quite a bit. My setup needed adjustable gain, so I added it. The RIAA filter was completely redesigned, and it is (to my knowledge) the only accurate RIAA filter circuit that uses [standard E24 component values][13].

![Muffsy audio hardware boards][14]

Various iterations of boards in development.

**Q: How did you decide to offer your work as kits? And how did you go from kits to open source?**
**A:** The component values being E24 came from a lack of decent component providers in my area (or so I thought; it turned out I have a great provider nearby), so I had to go for standard values. This meant my circuit was well suited for DIY, and I started selling blank printed circuit boards on [Tindie][15].

What really made the phono stage suitable as a kit was a power supply that didn't require messing about with [mains electricity][16]. It's basically an AC adapter, a voltage doubler, and a couple of three-pin voltage regulators.

So there I was; I had a phono stage, a power supply, and the right (and relatively easy to source) components. The boards fit straight into the enclosure I'd selected, and I made a suitable back panel. I could now sell a DIY kit that turns into a working device once it is assembled. This is pretty unique; you won't see many kit suppliers provide everything that's needed to make a functional product.

![Phono stage kit with the power supply and back panel][17]

The assembled current phono stage kit with the power supply and back panel.

As a bonus, since this is just my hobby, I'm not even aiming for profit. This is also partly why my designs are open source. I won't reveal who is using the designs, but you'll find them in more than one professional vinyl mastering studio, in governmental digitization projects, and even at a vinyl player manufacturer that you will have heard of.

**Q: Tell us a bit about your educational background. Are you an electrical engineer? Where did your interest in circuits come from?**

**A:** I went to a military school of electrical engineering (EE). My career has been pretty void of EE though, apart from a few years as a telephony switch technician. The music interest has stayed with me, though, and I've been dreaming of making something all my life. I would rather avoid mains electricity though, so signal level and below is where I'm happy.

**Q: In your day job, do you do hardware stuff as well? Or software? What about open source—does it matter to your day job?**

**A:** My profession is IT management, system architecture, and security. So I'm not doing any hardware designs at work. I wouldn't be where I am today without open source, so that is the other part of the reason why my designs are openly available.

**Q: Can you tell us a bit about what it takes to go from a design, to a circuit board, to producing a kit?**

**A:** I am motivated by my own needs when it comes to starting a new project. Or if I get inspired, like I was when I made a constant current [LED][18] tester. The LED tester required a very specific sub-milliampere meter, and it was kind of hard to find an enclosure for it. So the LED tester wasn't suited for a kit.

![LED tester][19]

LED tester

I made a [notch filter][20] that requires matched precision capacitors, and the potentiometers are quite hard to fine-tune. Besides, I don't see people lining up to buy this product, so it's not suited to be a kit.

![Notch filter][21]

Notch filter

I made an inverse RIAA filter using only surface-mount device [(SMD) components][22]—not something I would offer as a kit. I built an SMD vacuum pickup tool for this project, so it wasn't all for nothing.

![SMD vacuum pickup tool][23]

SMD vacuum pickup tool

I've made various PSU/transformer breakout boards, which are not suitable as kits because they require mains electricity.

![PSU/transformer breakout board][24]

PSU/transformer breakout boards

I designed and built the [MC Head Amp][25] without even owning an [MC cartridge][26]. I even built the [O2 headphone amp][27] without owning a pair of headphones, although people much smarter than me suspect it was a clever excuse for buying a pair of Sennheisers.

Kits need to be something I think people need; they must be easy to assemble (or rather, difficult to assemble incorrectly), must not be too expensive or require exotic components, and can't weigh too much because of the very expensive postage from Sweden.

Most importantly, I need to have time and space for another kit. This picture shows pretty much all the space I have available for my kits, two boxes deep, IKEA all the way.
![A shelf filled with boxes of Muffsy kits][28]

**Q: Are you a musician or audiophile? What kind of equipment do you have?**

**A:** I'm not a musician, and I am not an audiophile in the way most people would define such a person. I do know, from education and experience, that good sound doesn't have to cost a whole lot. It can be quite cheap, actually. A lot of the cost is in transformers, enclosures, and licenses (the stickers on the front of your gear). Stay away from those, and you're back at signal level audio that can be really affordable.

Don't get me wrong; there are a lot of gifted designers who spend an awful lot of time and creativity making the next great piece of equipment. They deserve to get paid for their work. What I mean is that the components that go into this equipment can be bought for not much money at all.

My equipment is a simple [op-amp][29]-based preamp with a rotational input-switch and a sub-$25 class-D amp based on the TPA3116 chip (which I will be upgrading to an IcePower 125ASX2). I'm using both the Muffsy Phono Preamp and the Muffsy MC Head Amp. Then I've got some really nice Dynaco A25 loudspeakers that I managed to refurbish through nothing more than good old dumb luck. I went for the cheapest Pro-Ject turntable that's still a "real" turntable. That's it. No surround, no remote control (yet), unless you count the Chromecast Audio that's connected to the back of my amp.

![Håvard Skrödahl's A/V setup][30]

Håvard's A/V setup

I'll happily shell out for quality components, good connectors, and shielded signal cables. But, to be diplomatic, I'd rather use the correct component for the job instead of the most expensive one. I do get questions about specific resistors and expensive "boutique" components now and then. I keep my answer short and remind people that my own builds are identical to what I sell on Tindie.

My preamp uses my MC Head Amp as a building block.

![Preamp internals][31]

Preamp internals

**Q: What kind of software do you use for hardware design?**

**A:** I've been using [Eagle][32] for many years. Getting into a different workflow takes a lot of time and requires a whole lot of mistakes, so no [KiCad][33] yet.

**Q: Can you tell us a bit about where your kits are going? Is there a new head amplifier? A new phono amplifier? What about a line-level pre-amp or power amp?**

**A:** If I were to do a power amp, I wouldn't dream of selling it because of what I said about mains electricity. Chip amps and [Class-D][34] seem to have taken over the DIY segment anyway, and I'm really happy with Class-D.

My latest kit is an input selector. It's something that's a cross between hardware and software, as it uses an [ESP32][35] system-on-a-chip microcontroller. And it's something that I want for myself.

The kit includes everything you need. It's got a rotational encoder, an infrared receiver, and I'm even adding a remote control to the kit. The software and hardware are available on GitHub, also under a permissive open source license, and will soon include Alexa voice support and [MQTT][36] for app or command line remote control.

![Input selector][37]

Input selector kit

My lineup now consists of preamps for MC and MM cartridges, a power supply and a back panel for them, and the input selector. I'm even selling bare circuit boards for a tube preamp and accompanying power supply.

These components make up pretty much all the internals of a complete preamplifier, which has become one of my main motivational factors.

I have nothing new or significantly better to provide in terms of an ordinary preamplifier, so I'm using a modified version of a well-known circuit. I cannot, and would not, sell this circuit, as it's proprietary.

Anyhow, here's my personal goal. It's still a work in progress, using an S3207 enclosure and a front panel made at Frontpanel Express.

![Muffsy preamp][38]

New Muffsy preamp prototype
* * *

Thanks, Håvard, that looks pretty great! I'd be happy to have something like that sitting on my Hi-Fi shelf.

I hope there are people out there just waiting to try their hand at audio kit building or even board layout from proven open source schematics, and they find Håvard's story motivating. As for me, I think my next project could be an [active crossover][39].
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/open-audio-kit-developer

作者:[Chris Hermansen][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/clhermansen
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/colorful_sound_wave.png?itok=jlUJG0bM (Colorful sound wave graph)
[2]: https://opensource.com/article/19/6/creator-guayadeque-music-player
[3]: https://opensource.com/article/19/6/gogglesmm-developer-sander-jansen
[4]: https://www.muffsy.com/
[5]: https://opensource.com/article/18/5/building-muffsy-phono-head-amplifier-kit
[6]: https://opensource.com/article/18/7/diy-amplifier-vinyl-records
[7]: https://kenrockwell.com/audio/hafler/dh-200.htm
[8]: https://www.hifiengine.com/manual_library/hafler/dh-110.shtml
[9]: https://en.wikipedia.org/wiki/RIAA_equalization
[10]: https://opensource.com/sites/default/files/uploads/phono-stagetop.png (Phono stage board top)
[11]: https://opensource.com/sites/default/files/uploads/phono-stagebottom.png (Bottom of the phono stage board)
[12]: https://forum.psaudio.com/t/the-cnc-phono-stage-diy/3613
[13]: https://en.wikipedia.org/wiki/E_series_of_preferred_numbers
[14]: https://opensource.com/sites/default/files/uploads/boards.png (Muffsy audio hardware boards)
[15]: https://www.tindie.com/stores/skrodahl/
[16]: https://en.wikipedia.org/wiki/Mains_electricity
[17]: https://opensource.com/sites/default/files/uploads/phonostage-kit.png (Phono stage kit with the power supply and back panel)
[18]: https://en.wikipedia.org/wiki/Light-emitting_diode
[19]: https://opensource.com/sites/default/files/uploads/led-tester.png (LED tester)
[20]: https://en.wikipedia.org/wiki/Band-stop_filter
[21]: https://opensource.com/sites/default/files/uploads/notch-filter.png (Notch filter)
[22]: https://en.wikipedia.org/wiki/Surface-mount_technology
[23]: https://opensource.com/sites/default/files/uploads/smd-vacuum-pick-tool.png (SMD vacuum pickup tool)
[24]: https://opensource.com/sites/default/files/uploads/psu-transformer-breakout-board.png (PSU/transformer breakout board)
[25]: https://leachlegacy.ece.gatech.edu/headamp/
[26]: https://blog.audio-technica.com/audio-solutions-question-week-differences-moving-magnet-moving-coil-phono-cartridges/
[27]: http://nwavguy.blogspot.com/2011/07/o2-headphone-amp.html
[28]: https://opensource.com/sites/default/files/uploads/kit-shelves.png (Muffsy kits on shelves)
[29]: https://en.wikipedia.org/wiki/Operational_amplifier
[30]: https://opensource.com/sites/default/files/uploads/av-setup.png (Håvard Skrödahl's A/V setup)
[31]: https://opensource.com/sites/default/files/uploads/preamp-internals.png (Preamp internals)
[32]: https://en.wikipedia.org/wiki/EAGLE_(program)
[33]: https://en.wikipedia.org/wiki/KiCad
[34]: https://en.wikipedia.org/wiki/Class-D_amplifier
[35]: https://en.wikipedia.org/wiki/ESP32
[36]: http://mqtt.org/
[37]: https://opensource.com/sites/default/files/uploads/input-selector.png (Input selector)
[38]: https://opensource.com/sites/default/files/uploads/muffsy-preamp.png (Muffsy preamp)
[39]: https://www.youtube.com/watch?v=7u9OKPL1ezA&feature=youtu.be
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to encrypt files with gocryptfs on Linux)
[#]: via: (https://opensource.com/article/19/8/how-encrypt-files-gocryptfs)
[#]: author: (Brian "bex" Exelbierd https://opensource.com/users/bexelbie)

How to encrypt files with gocryptfs on Linux
======

Gocryptfs encrypts at the file level, so synchronization operations can work efficiently on each file.

![Filing papers and documents][1]
[Gocryptfs][2] is a Filesystem in Userspace (FUSE)-mounted file-level encryption program. FUSE-mounted means that the encrypted files are stored in a single directory tree that is mounted, like a USB key, using the [FUSE][3] interface. This allows any user to do the mount—you don't need to be root. Because gocryptfs encrypts at the file level, synchronization operations that copy your files can work efficiently on each file. This contrasts with disk-level encryption, where the whole disk is encrypted as a single, large binary blob.

When you use gocryptfs in its normal mode, your files are stored on your disk in an encrypted format. However, when you mount the encrypted files, you get unencrypted access to your files, just like any other file on your computer. This means all your regular tools and programs can use your unencrypted files. Changes, new files, and deletions are reflected in real-time in the encrypted version of the files stored on your disk.

### Install gocryptfs

Installing gocryptfs is easy on [Fedora][4] because it is packaged for Fedora 30 and Rawhide. Therefore, **sudo dnf install gocryptfs** does all the required installation work. If you're not using Fedora, you can find details on installing from source, on Debian, or via Homebrew in the [Quickstart][5].

### Initialize your encrypted filesystem

To get started, you need to decide where you want to store your encrypted files. This example will keep the files in **~/.sekrit_files** so that they don't show up when doing a normal **ls**.
Start by initializing the filesystem. This will require you to choose a password. You are strongly encouraged to use a unique password you've never used anywhere else, as this is your key to unlocking your files. The project's authors recommend a password with between 64 and 128 bits of entropy. Assuming you use upper and lower case letters and numbers, this means your password should be between [11 and 22 characters long][6]. If you're using a password manager, this should be easy to accomplish with a generated password.
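As a rough sanity check of those numbers (this snippet is not part of the gocryptfs workflow, just arithmetic), a random password drawn from the 62 upper- and lowercase letters plus digits carries log2(62), roughly 5.95, bits of entropy per character, which is where the 11-to-22-character range comes from:

```shell
# Characters needed for a target entropy, assuming each character is
# chosen uniformly from a 62-symbol alphabet (a-z, A-Z, 0-9).
awk 'BEGIN {
  per_char = log(62) / log(2)   # ~5.95 bits of entropy per character
  for (bits = 64; bits <= 128; bits += 64) {
    chars = bits / per_char
    # round up to a whole character
    printf "%3d bits -> %d characters\n", bits, int(chars) + (chars > int(chars) ? 1 : 0)
  }
}'
# prints:
#  64 bits -> 11 characters
# 128 bits -> 22 characters
```

A password manager's generator with the length set to at least these values satisfies the recommendation without any mental math.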
When you initialize the filesystem, you will see a unique key. Store this key somewhere securely, as it will allow you to access your files if you need to recover your files but have forgotten your password. The key works without your password, so keep it private!

The initialization routine looks like this:

```
$ mkdir ~/.sekrit_files
$ gocryptfs -init ~/.sekrit_files
Choose a password for protecting your files.
Password:
Repeat:

Your master key is:

XXXXXXXX-XXXXXXXX-XXXXXXXX-XXXXXXXX-
XXXXXXXX-XXXXXXXX-XXXXXXXX-XXXXXXXX

If the gocryptfs.conf file becomes corrupted or you ever forget your password,
there is only one hope for recovery: The master key. Print it to a piece of
paper and store it in a drawer. This message is only printed once.
The gocryptfs filesystem has been created successfully.
You can now mount it using: gocryptfs .sekrit_files MOUNTPOINT
```
If you look in the **~/.sekrit_files** directory, you will see two files: a configuration file and a unique directory-level initialization vector. You will not need to edit these two files by hand. Make sure you do not delete these files.

### Use your encrypted filesystem

To use your encrypted filesystem, you need to mount it. This requires an empty directory where you can mount the filesystem. For example, use the **~/my_files** directory. As you can see from the initialization, mounting is easy:

```
$ gocryptfs ~/.sekrit_files ~/my_files
Password:
Decrypting master key
Filesystem mounted and ready.
```

If you check out the **~/my_files** directory, you'll see it is empty. The configuration and initialization vector files aren't data, so they don't show up. Let's put some data in the filesystem and see what happens:

```
$ cp /usr/share/dict/words ~/my_files/
$ ls -la ~/my_files/ ~/.sekrit_files/
~/my_files/:
.rw-r--r-- 5.0M bexelbie 19 Jul 17:48 words

~/.sekrit_files/:
.r--------@ 402 bexelbie 19 Jul 17:39 gocryptfs.conf
.r--------@ 16 bexelbie 19 Jul 17:39 gocryptfs.diriv
.rw-r--r--@ 5.0M bexelbie 19 Jul 17:48 xAQrtlyYSFeCN5w7O3-9zg
```

Notice that there is a new file in the **~/.sekrit_files** directory. This is the encrypted copy of the dictionary you copied in (the file name will vary). Feel free to use **cat** and other tools to examine these files and experiment with adding, deleting, and modifying files. Make sure to test with a few applications, such as LibreOffice.

Remember, this is a filesystem mount, so the contents of **~/my_files** aren't saved to disk. You can verify this by running **mount | grep my_files** and observing the output. Only the encrypted files are written to your disk. The FUSE interface is doing real-time encryption and decryption of the files and presenting them to your applications and shell as a filesystem.

### Unmount the filesystem

When you're done with your files, you can unmount them. This causes the unencrypted filesystem to no longer be available. The encrypted files in **~/.sekrit_files** are unaffected. Unmount the filesystem using the FUSE mounter program with **fusermount -u ~/my_files**.
### Back up your data

One of the cool benefits of gocryptfs using file-level encryption is that it makes backing up your encrypted data easier. The files are safe to store on a synchronizing system, such as OwnCloud or Dropbox. The standard disclaimer about not modifying the same file at the same time applies. However, the files can be backed up even if they are mounted. You can also save your data any other way you would typically back up files. You don't need anything special.
When you do backups, make sure to include the **gocryptfs.diriv** file. This file is not a secret and can be saved with the backup. However, your **gocryptfs.conf** is a secret. When you control the entirety of the backup chain, such as with tape, you can back it up with the rest of the files. However, when the files are backed up to the cloud or publicly, you may wish to omit this file. In theory, if someone gets this file, the only thing protecting your files is the strength of your password. If you have chosen a [strong password][6], that may be enough; however, you need to consider your situation carefully. More details are in this gocryptfs [upstream issue][7].
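As a sketch of the public-backup case (the directory names here are stand-ins, and plain POSIX shell is used; `rsync -a --exclude=gocryptfs.conf` would accomplish the same thing), copying everything except the secret config file might look like:

```shell
# Stand-in for the encrypted directory (normally something like ~/.sekrit_files)
src=/tmp/sekrit_files_demo
dst=/tmp/backup_demo
mkdir -p "$src" "$dst"
touch "$src/gocryptfs.conf"             # secret: omit from public backups
touch "$src/gocryptfs.diriv"            # not secret: safe to include
touch "$src/xAQrtlyYSFeCN5w7O3-9zg"     # an encrypted data file

# Copy everything except the secret config file
for f in "$src"/*; do
  if [ "$(basename "$f")" != "gocryptfs.conf" ]; then
    cp "$f" "$dst/"
  fi
done

ls "$dst"   # gocryptfs.diriv and the encrypted file, but no gocryptfs.conf
```

Keep a separate, trusted copy of **gocryptfs.conf** (and the master key), since the backup alone is not decryptable without them plus your password.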
### Bonus: Reverse mode

A neat feature of gocryptfs is the reverse mode function. In reverse mode, point gocryptfs at your unencrypted data, and it will create a mount point with an encrypted view of this data. This is useful for things such as creating encrypted backups. This is easy to do:

```
$ gocryptfs -reverse -init my_files
Choose a password for protecting your files.
Password:
Repeat:

Your master key is:

XXXXXXXX-XXXXXXXX-XXXXXXXX-XXXXXXXX-
XXXXXXXX-XXXXXXXX-XXXXXXXX-XXXXXXXX

If the gocryptfs.conf file becomes corrupted or you ever forget your password,
there is only one hope for recovery: The master key. Print it to a piece of
paper and store it in a drawer. This message is only printed once.
The gocryptfs-reverse filesystem has been created successfully.
You can now mount it using: gocryptfs -reverse my_files MOUNTPOINT

$ gocryptfs -reverse my_files sekrit_files
Password:
Decrypting master key
Filesystem mounted and ready.
```

Now **sekrit_files** contains an encrypted view of your unencrypted data from **my_files**. This can be backed up, shared, or handled as needed. The directory is read-only, as there is nothing useful you can do with those files except back them up.

A new file, **.gocryptfs.reverse.conf**, has been added to **my_files** to provide a stable encrypted view. This configuration file will ensure that each reverse mount will use the same encryption key. This way you could, for example, back up only changed files.

Gocryptfs is a flexible file encryption tool that allows you to store your data in an encrypted manner without changing your workflow or processes significantly. The design has undergone a security audit, and the developers have experience with other systems, such as **encfs**. I encourage you to add gocryptfs to your system today and start protecting your data.
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/how-encrypt-files-gocryptfs

作者:[Brian "bex" Exelbierd][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/bexelbie
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/documents_papers_file_storage_work.png?itok=YlXpAqAJ (Filing papers and documents)
[2]: https://nuetzlich.net/gocryptfs/
[3]: https://en.wikipedia.org/wiki/Filesystem_in_Userspace
[4]: https://getfedora.org
[5]: https://nuetzlich.net/gocryptfs/quickstart/
[6]: https://github.com/rfjakob/gocryptfs/wiki/Password-Strength
[7]: https://github.com/rfjakob/gocryptfs/issues/50

sources/tech/20190816 How to plan your next IT career move.md
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to plan your next IT career move)
[#]: via: (https://opensource.com/article/19/8/plan-next-IT-career-move)
[#]: author: (Matthew Broberg https://opensource.com/users/mbbroberg)

How to plan your next IT career move
======

Ask yourself these essential career questions about cloud, DevOps, coding, and where you're going next in IT.

![Two hands holding a resume with computer, clock, and desk chair ][1]
Being part of technology-oriented communities has been an essential part of my career development. The first community that made a difference for me was focused on virtualization. Less than a year into my first career-related job, I met a group of friends who were significant contributors to this "vCommunity," and I found their enthusiasm to be contagious. That began our daily "nerd herd," where a handful of us met nearly every day for coffee before our shifts in tech support. We often discussed the latest software releases or hardware specs of a related storage array, and other times we strategized about how we could help each other grow in our careers.
|
||||
|
||||
> Any community worth being a part of will lift you up as you lift others up in the process.
|
||||
|
||||
In those years, I learned a foundational truth that's as true for me today as it was then: Any community worth being a part of will lift you up as you lift others up in the process.
|
||||
|
||||
![me with friends at New England VTUG meeting][2]
|
||||
|
||||
Matthew with friends [Jonathan][3] and [Paul][4] (L-R) at a New England VTUG meeting.
|
||||
|
||||
We began going to conferences together, with the first major effort by being the volunteer social team for a user group out of New England. We set up a Twitter account, sending play-by-plays as the event happened, but we also were there, in person, to welcome new members into our community of practice. While it wasn't my intention, finding that intersection of community and technology taught me the skills that led to my next job offer. And my story wasn't unique; many of us supported each other, and many of us advanced in our careers along the way.
|
||||
|
||||
While I remained connected to the vCommunity, I haven't kept up with the (mostly proprietary) technology stack we used to talk about.
|
||||
|
||||
My preferred technology shifted direction drastically when I fell in love with open source. It's been about five years since I knew virtualization deeply and two years since I spoke at an event centered on the topic. So it was a surprise and honor to be invited to give the opening keynote to the last edition of the [New England Virtualization Technology User Group][5]'s (VTUG) Summer Slam in July. Here's what I spoke about.

### Technology and, more importantly, employment

When I heard the user group was hosting its last-ever event, I said I'd love to be part of it. The challenge was that I didn't know how I would be. While there is plenty of open source virtualization technology, I had shifted further up the stack toward applications and programming languages of late, so my technical angle wouldn't make for a good talk. The organizer said, "Good, that's what people need to hear."

Being further away from the vCommunity meant I had missed some of the context of the last few years. A noticeable portion of the community was facing unemployment, and when they went to apply for new jobs, they found new titles like [DevOps Engineer][6] and [SRE][7]. Not only that, I was told that a deep focus on a single vendor of proprietary virtualization technology is no longer enough. Virtualization and storage administration (my first areas of expertise) appear to be the hardest hit by this shift. One story I heard at the event was that over 50% of a local user group's attendees were looking for work, and there was a gap in awareness of how to move forward.

So while I enjoy having lighthearted conversations with people learning to contribute to open source, this talk was different. It had more to do with people's lives than usual. The stakes were higher.

### 3 trends worth exploring

There are a thousand ways to summarize the huge waves of change taking place in the tech industry. In my talk, I offered the idea that cloud, DevOps, and coding are three distinct trends making some of those waves, and worth considering as you plan the next steps in your IT-oriented career.

* Cloud, including the new operational model of IT that is often Kubernetes-based
* DevOps, which rejects the silos, ticket systems, and blame of legacy IT departments
* Coding, including the practice of infrastructure as code (accompanied by a ton of YAML)

I imagine them as literal waves, crashing down upon the ships of the old way to make way for the new.

![Adaptation of The Great Wave Off Kanagawa][8]

Adaptation of [The Great Wave Off Kanagawa][9]

We have two mutually exclusive options as we consider how to respond to these shifts. We can paddle hard, feeling like we're going against the current, or we can stay put and be crushed by the wave. One is uncomfortable in the short term, while the other is more comfortable for now. Only one option is survivable. It sounds scary, and I'm okay with that. The stakes are real.

![Adaptation of The Great Wave Off Kanagawa][10]

Adaptation of [The Great Wave Off Kanagawa][9]

Cloud, DevOps, and coding are each massive topics with a ton of nuance to unpack. But if you want to retool your skills for the future, I'm confident that focusing on **any** of them will set you up for a successful next step.

### Finding the right adoption timeline

One of the most challenging aspects of this is the sheer onslaught of information. It's reasonable to ask what you should learn, specifically, and when. I'm reminded of the work of [Nassim Taleb][11] who, among his deeply thoughtful insights into risk, makes note of a powerful concept:

> "The longer a technology lives, the longer it can be expected to live."
> – Nassim Taleb, [_Antifragile_][12] (2012)

This sticking power of technology may offer insight into the right time to jump on a wave of newness. It doesn't have to be right away, given that early adopters may find their efforts don't have enough stick-to-it-ness to linger beyond a passing trend. That aligns well with my style: I'm rarely an early adopter, and I'm comfortable with that fact. I leave a lot of the early testing and debugging of new projects to those who are excited by the uncertainty of it all, and I'll be around for the phase when the brilliant idea needs to be refined (or, as [Simon Wardley][13] puts it, I prefer the [Settler phase over the Pioneer one][14]). That also aligns well with the perspective of most admin-centric professionals I know. They're wary of new things because they know saying yes to something in production is easier than supporting it after it gets there.

![One theory on when to adopt technology as it's being displaced][15]

What I also love about Taleb's words is that they offer a reasonable equation to make sure you're not the last to adopt a technology. Why not be last? Because you'll be so far behind that no one will want to hire you.

So what does that equation look like? I think it's taking Taleb's theory, more broadly called the [Lindy effect][16], and doing the math: you can expect that any technology will be in existence at least as long as it was before a competitor came into play. So if technology X existed for 30 years before Y threatened its dominance, you can expect X to be around for another 30 years (even if Y is way more popular, powerful, and trendy). It takes a long time for technology to "die."

My observation is more of a half-life version of that concept: you can expect broad adoption of technology Y about halfway through that curve. By that point, companies that are hiring will start to signal that they want this knowledge on their teams, and it's reasonable for even the most skeptical of sysadmins to learn said technology. In practice, that may look like this, where ETOA is the estimated time of mass adoption:

![IP4 to IP6 estimated time of mass adoption][17]

Many would love for IPv6 to be widely adopted sooner than 2027, and this theory offers a potential reason why it takes so long. Change is coming, but the pace is more aligned with the Lindy effect than with those people's expectations.
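
The half-life arithmetic above is easy to sketch. Below is a back-of-the-envelope calculation of the Lindy estimate and its half-life variant; the years are purely illustrative assumptions, not figures taken from the chart:

```shell
#!/usr/bin/env bash
# Hypothetical years: incumbent technology X appears, then competitor Y arrives.
x_start=1983    # assumed year incumbent X appeared
y_start=1998    # assumed year competitor Y appeared

age=$(( y_start - x_start ))        # X's age when Y came into play
lindy_end=$(( y_start + age ))      # Lindy: expect X to persist at least that long again
etoa=$(( y_start + age / 2 ))       # half-life variant: Y's estimated mass adoption

echo "X expected to persist until ~$lindy_end; Y's ETOA ~$etoa"
```

With these made-up dates, the script estimates X persisting until about 2013 and an ETOA for Y around 2005; substitute real dates for the technologies you are actually weighing.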

To paraphrase statistician [George Box][18], "all models are wrong, but some are more useful than others." Taleb's adaptation of the Lindy effect helps me think about how I need to prioritize my learning relative to the larger waves of change happening in the industry.

### Asking the right questions

The thing I cannot say often enough is that _people who have IT admin skills have a ton of options when they look at the industry_.

Every person who has been an admin has had to learn new technology. They excel at it. And while mastery takes time, a decent amount of familiarity and a willingness to learn are very hirable skills. Learning a new technology and inspecting its operational model to anticipate its failures in production are hugely valuable to any project. I look at open source software projects on GitHub and GitLab regularly, and many are looking for feedback on how to get their projects ready for production environments. Admins are experts at operational improvement.

All that said, it can still be paralyzing to decide what to learn. When people are feeling stuck, I recommend asking yourself these questions to jumpstart your thinking:

1. What technology do you want to know?
2. What's your next career?

The first question is full of important reminders. First off, many of us in IT have the privilege of choosing to study the things that bring us joy to learn. It's a wonderful feeling to be excited by our work, and I love when I see that in someone I'm mentoring.

Another favorite takeaway is that no one is born knowing any specific technology. All technology skills are learned skills. So, your back-of-the-mind thought that "I don't understand that" is often masking the worry that "I can't learn it." Investigate further, and you'll find that your nagging sense of impossibility is easily disproven by everything you've accomplished until this point. I find a gentle and regular reminder that all technology is learned is a great place to start. You've done this before; you will do so again.

Here's one more piece of advice: stick with one thing and go deep. Skills—and the stories we tell about them to potential employers—are more interesting the deeper we can go. Therefore, when you're learning to code in your language of choice, find a way to build something that you can talk about at length in a job interview. Maybe it's an Ansible module in Python or a Go module for Terraform. Either one is a lot more powerful than saying you can code Hello World in seven languages.

There is also power in asking yourself what your next career will be. It's a reminder that you have one and, to survive and advance, you have to continue learning. What got you here will not get you where you're going next.

It's freeing to find that your next career can be an evolution of what you know now or a doubling-down on something much larger. I advocate for evolutionary, not revolutionary. There is a lot of foundational knowledge in everything we know, and it can be powerful to us and the story we tell others when we stay connected to our past.

### Community is key

All careers evolve and skills develop. Many of us are drawn to IT because it requires continual learning. Know that you can do it, and stick with your community to help you along the way.

If you're looking for a way to apply your background in IT administration and you have an open source story to tell, we would love to help you share it. Read our [information for writers][19] to learn how.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/plan-next-IT-career-move

作者:[Matthew Broberg][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/mbbroberg

[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/resume_career_document_general.png?itok=JEaFL2XI (Two hands holding a resume with computer, clock, and desk chair)
[2]: https://opensource.com/sites/default/files/uploads/vtug.png (Matthew with friends at New England VTUG meeting)
[3]: https://twitter.com/jfrappier
[4]: https://twitter.com/paulbraren
[5]: https://vtug.com/
[6]: https://opensource.com/article/19/7/how-transition-career-devops-engineer
[7]: https://opensource.com/article/19/7/sysadmins-vs-sres
[8]: https://opensource.com/sites/default/files/uploads/greatwave.png (Adaptation of The Great Wave Off Kanagawa)
[9]: https://en.wikipedia.org/wiki/The_Great_Wave_off_Kanagawa
[10]: https://opensource.com/sites/default/files/uploads/greatwave2.png (Adaptation of The Great Wave Off Kanagawa)
[11]: https://en.wikipedia.org/wiki/Nassim_Nicholas_Taleb
[12]: https://en.wikipedia.org/wiki/Antifragile
[13]: https://twitter.com/swardley
[14]: https://blog.gardeviance.org/2015/03/on-pioneers-settlers-town-planners-and.html
[15]: https://opensource.com/sites/default/files/articles/displacing-technology-lindy-effect-opensource.com_.png (One theory on when to adopt technology as it's being displaced)
[16]: https://en.wikipedia.org/wiki/Lindy_effect
[17]: https://opensource.com/sites/default/files/uploads/ip4-to-etoa.png (IP4 to IP6 estimated time of mass adoption)
[18]: https://en.wikipedia.org/wiki/George_E._P._Box
[19]: https://opensource.com/how-submit-article
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Share Your Keyboard and Mouse Between Linux and Raspberry Pi)
[#]: via: (https://itsfoss.com/keyboard-mouse-sharing-between-computers/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

Share Your Keyboard and Mouse Between Linux and Raspberry Pi
======

_**This DIY tutorial teaches you how to share a mouse and keyboard between multiple computers using the open source software Barrier.**_

I have a multi-monitor setup where my [Dell XPS running Ubuntu][1] is connected to two external monitors. I recently got a [Raspberry Pi 4][2], which is capable of doubling up as a desktop. I bought a new screen so that I could set it up to monitor the performance of my cloud servers.

Now the problem is that I have four screens and one keyboard-and-mouse pair. I could use a new keyboard-mouse pair, but my desk doesn't have enough free space, and it's not very convenient to switch between keyboards and mice all the time.

One way to tackle this problem would be to buy a KVM switch. This is a handy gadget that allows you to use the same display, keyboard, and mouse with several computers running various operating systems. You can easily find one for around $30 on Amazon.
But I didn't go for the hardware solution. I opted for a software-based approach to share the keyboard and mouse between computers.

I used [Barrier][5], an open source fork of the now-proprietary software [Synergy][6]. Synergy Core is still open source, but you can't get the encryption option in its GUI. Even with its limitations, Barrier works fine for me.

Let's see how you can use Barrier to share a mouse and keyboard among multiple computers. Did I mention that you can even share the clipboard, and thus copy and paste text between computers?

### Set up Barrier to share keyboard and mouse between Linux and Raspberry Pi or other devices

![][7]

I have prepared this tutorial with Ubuntu 18.04.3 and Raspbian 10. Some installation instructions may differ based on your distribution and version, but you'll get the idea of what you need to do here.

#### Step 1: Install Barrier

The first step is obvious. You need to install Barrier on your computer.

Barrier is available in the universe repository starting with Ubuntu 19.04, so you can easily install it using the apt command.

You'll have to use the snap version of Barrier in Ubuntu 18.04. Open the Software Center and search for Barrier. I recommend using barrier-maxiberta.

![Install this Barrier version][8]

On other distributions, you should [enable Snap][9] first and then use this command:

```
sudo snap install barrier-maxiberta
```

Barrier is available in the Debian 10 repositories, so installing Barrier on Raspbian was easy with the [apt command][10]:

```
sudo apt install barrier
```

Once you have installed the software, it's time to configure it.
#### Step 2: Configure the Barrier server

Barrier works on a server-client model. You should configure your main computer as the server and the secondary computer as the client.

In my case, Ubuntu 18.04 is my main system, so I set it up as the server. Search for Barrier in the menu and start it.

![Setup Barrier as server][12]

You should see an IP address and an SSL fingerprint. You're not entirely done yet, because you have to configure the server a little. Click on the Configure Server option.

![Configure the Barrier server][13]

Here, you should see your own system in the center. Now drag and drop the computer icon from the top right to a suitable position. The position is important because that's how your mouse pointer will move between screens.

![Setup Barrier server with client screens][14]

Do note that you should provide the [hostname][15] of the client computer. In my case, it was raspberrypi. It won't work if the hostname is not correct. Don't know the client's hostname? Don't worry, you can get it from the client system.
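
If you're unsure of the client's hostname, you can read it off the client itself; these are standard commands on Raspbian and most other distributions:

```shell
# Run these on the client (e.g., the Raspberry Pi):
hostname             # prints the short hostname, e.g. raspberrypi
hostnamectl status   # on systemd-based systems, look for the "Static hostname" line
```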

#### Step 3: Set up the Barrier client

On the second computer, start Barrier and choose to use it as a client.

![Setup Barrier Client on Raspberry Pi][16]

You need to provide the IP address of the Barrier server. You can find this IP address in the Barrier application running on the main system (see the screenshots in the previous section).

![Setup Barrier Client on Raspberry Pi][17]

If you see an option to accept a secure connection from another computer, accept it.

You should now be able to move your mouse pointer between the screens connected to two different computers running two different operating systems. How cool is that!

### Optional: Autostart Barrier [Intermediate to Advanced Users]

Now that you have set up Barrier and are enjoying the same mouse and keyboard across more than one computer, what happens when you reboot a system? You need to start Barrier on both systems again, right? This means you would need to connect a keyboard and mouse to the second computer as well.
Since I use a wireless mouse and keyboard, this is still fairly easy: all I need to do is take the adapter from my laptop and plug it into the Raspberry Pi. It works, but I don't want that extra step. This is why I set Barrier to run at startup on both systems, so that I can use the same mouse and keyboard without any additional steps.

There is no autostart option in the Barrier application, but it's easy to [add an application to autostart in Ubuntu][19]. Just open the Startup Applications program and add the command _**barrier-maxiberta.barrier**_ there.

![Adding Barrier To Startup applications in Ubuntu][20]
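
If you prefer the command line over the Startup Applications GUI, the equivalent is to drop a .desktop file into ~/.config/autostart. This is a sketch, assuming the snap command name barrier-maxiberta.barrier used above:

```shell
# Create an autostart entry for the Barrier snap (command name assumed from the snap package)
mkdir -p ~/.config/autostart
cat > ~/.config/autostart/barrier.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=Barrier
Exec=barrier-maxiberta.barrier
X-GNOME-Autostart-enabled=true
EOF
```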

That was the easy part. It's not the same on the Raspberry Pi, though. Since Raspbian uses systemd, you can use it to create a new service that will run at boot time.

Open a terminal and create a new file named barrier.service in the /etc/systemd/system directory. If this directory doesn't exist, create it. You can use your favorite command line text editor for this task; I used Vim here.

```
sudo vim /etc/systemd/system/barrier.service
```

Now add lines like these to your file. _**You must replace 192.168.0.109 with your Barrier server's IP address.**_

```
[Unit]
Description=Barrier Client mouse/keyboard share
Requires=display-manager.service
After=display-manager.service
StartLimitIntervalSec=0

[Service]
Type=forking
ExecStart=/usr/bin/barrierc --no-restart --name raspberrypi --enable-crypto 192.168.0.109
Restart=always
RestartSec=10
User=pi

[Install]
WantedBy=multi-user.target
```

Save your file. I would advise running the command from the ExecStart line manually to see whether it works. This will save you some headache later.

Reload the systemd daemon:

```
sudo systemctl daemon-reload
```

Now start the new service:

```
sudo systemctl start barrier.service
```

Check its status to see if it's running fine:

```
systemctl status barrier.service
```

If it works, enable it as a startup service:

```
sudo systemctl enable barrier.service
```

This should take care of things for you. Now you should be able to control the Raspberry Pi (or any other second computer) with a single keyboard-mouse pair.

I know this DIY stuff may not work straightforwardly for everyone, so if you face issues, let me know in the comments and I'll try to help you out.

If it worked for you, or if you use some other solution to share a mouse and keyboard between computers, do mention it in the comments.

--------------------------------------------------------------------------------

via: https://itsfoss.com/keyboard-mouse-sharing-between-computers/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/dell-xps-13-ubuntu-review/
[2]: https://itsfoss.com/raspberry-pi-4/
[3]: https://www.amazon.com/UGREEN-Selector-Computers-Peripheral-One-Button/dp/B01MXXQKGM?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B01MXXQKGM&keywords=kvm%20switch (UGREEN USB Switch Selector 2 Computers Sharing 4 USB Devices USB 2.0 Peripheral Switcher Box Hub for Mouse, Keyboard, Scanner, Printer, PCs with One-Button Swapping and 2 Pack USB A to A Cable)
[4]: https://www.amazon.com/gp/prime/?tag=chmod7mediate-20 (Amazon Prime)
[5]: https://github.com/debauchee/barrier
[6]: https://symless.com/synergy
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/Share-Keyboard-and-Mouse.jpg?resize=800%2C450&ssl=1
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/barrier-ubuntu-snap.jpg?ssl=1
[9]: https://itsfoss.com/install-snap-linux/
[10]: https://itsfoss.com/apt-command-guide/
[11]: https://itsfoss.com/fix-application-installation-issues-elementary-os-loki/
[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/barrier-2.jpg?resize=800%2C512&ssl=1
[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/barrier-server-ubuntu.png?resize=800%2C450&ssl=1
[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/barrier-server-configuration.png?ssl=1
[15]: https://itsfoss.com/change-hostname-ubuntu/
[16]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/setup-barrier-client.jpg?resize=800%2C400&ssl=1
[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/setup-barrier-client-2.jpg?resize=800%2C400&ssl=1
[18]: https://itsfoss.com/fix-windows-updates-stuck-0/
[19]: https://itsfoss.com/manage-startup-applications-ubuntu/
[20]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/adding-barrier-to-startup-apps-ubuntu.jpg?ssl=1
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Create Availability Zones in OpenStack from Command Line)
[#]: via: (https://www.linuxtechi.com/create-availability-zones-openstack-command-line/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)

How to Create Availability Zones in OpenStack from Command Line
======

In **OpenStack** terminology, an **Availability Zone (AZ)** is defined as a logical partition of the compute (Nova), block storage (Cinder), and network (Neutron) services. Availability zones are needed to segregate the workloads of environments like production and non-production; let me elaborate on this statement.

[![Availability-Zones-OpenStack-Command-Line][1]][2]

Let's suppose we have a tenant in OpenStack who wants to deploy their VMs in production and non-production environments. To create this type of setup in OpenStack, we first have to identify which compute nodes will be considered production and which non-production, then create host aggregate groups, add the compute nodes to those groups, and finally map each host aggregate group to an availability zone.

In this tutorial, we will demonstrate how to create and use compute availability zones in OpenStack via the command line.

### Creating compute availability zones

Whenever OpenStack is deployed, nova is the default availability zone (AZ), created automatically, and all compute nodes belong to it. Run the openstack command below from the controller node to list availability zones:

```
~# source openrc
~# openstack availability zone list
+-----------+-------------+
| Zone Name | Zone Status |
+-----------+-------------+
| internal | available |
| nova | available |
| nova | available |
| nova | available |
+-----------+-------------+
~#
```

To list only compute availability zones, run the openstack command below:

```
~# openstack availability zone list --compute
+-----------+-------------+
| Zone Name | Zone Status |
+-----------+-------------+
| internal | available |
| nova | available |
+-----------+-------------+
~#
```

To list all compute hosts mapped to the nova availability zone, execute the command below:

```
~# openstack host list | grep -E "Zone|nova"
| Host Name | Service | Zone |
| compute-0-1 | compute | nova |
| compute-0-2 | compute | nova |
| compute-0-4 | compute | nova |
| compute-0-3 | compute | nova |
| compute-0-8 | compute | nova |
| compute-0-6 | compute | nova |
| compute-0-9 | compute | nova |
| compute-0-5 | compute | nova |
| compute-0-7 | compute | nova |
~#
```

Let's create two host aggregate groups named **production** and **non-production**; we'll add compute-0-4, 5 & 6 to the production host aggregate group and compute-0-7, 8 & 9 to the non-production host aggregate group.

Create the production and non-production host aggregates using the following OpenStack commands:

```
~# openstack aggregate create production
+-------------------+----------------------------+
| Field | Value |
+-------------------+----------------------------+
| availability_zone | None |
| created_at | 2019-08-17T03:02:41.561259 |
| deleted | False |
| deleted_at | None |
| id | 7 |
| name | production |
| updated_at | None |
+-------------------+----------------------------+

~# openstack aggregate create non-production
+-------------------+----------------------------+
| Field | Value |
+-------------------+----------------------------+
| availability_zone | None |
| created_at | 2019-08-17T03:02:53.806713 |
| deleted | False |
| deleted_at | None |
| id | 10 |
| name | non-production |
| updated_at | None |
+-------------------+----------------------------+
~#
```

Now create the availability zones and associate them with their respective host aggregate groups.

**Syntax:**

# openstack aggregate set --zone <az_name> <host_aggregate_name>

```
~# openstack aggregate set --zone production-az production
~# openstack aggregate set --zone non-production-az non-production
```

Finally, add each compute host to its host aggregate group.

**Syntax:**

# openstack aggregate add host <host_aggregate_name> <compute_host>

```
~# openstack aggregate add host production compute-0-4
~# openstack aggregate add host production compute-0-5
~# openstack aggregate add host production compute-0-6
```

Similarly, add the compute hosts to the non-production host aggregate group:

```
~# openstack aggregate add host non-production compute-0-7
~# openstack aggregate add host non-production compute-0-8
~# openstack aggregate add host non-production compute-0-9
```
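
Adding hosts one at a time works fine, but the repeated calls can also be scripted. A minimal sketch using the host names from this example:

```shell
# Add several compute hosts to an aggregate in one loop
aggregate="non-production"
for host in compute-0-7 compute-0-8 compute-0-9; do
  openstack aggregate add host "$aggregate" "$host"
done
```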

Execute the openstack command below to verify the host aggregate groups and their availability zones:

```
~# openstack aggregate list
+----+----------------+-------------------+
| ID | Name | Availability Zone |
+----+----------------+-------------------+
| 7 | production | production-az |
| 10 | non-production | non-production-az |
+----+----------------+-------------------+
~#
```

Run the commands below to list the compute nodes associated with each AZ and host aggregate group:

```
~# openstack aggregate show production
+-------------------+--------------------------------------------+
| Field | Value |
+-------------------+--------------------------------------------+
| availability_zone | production-az |
| created_at | 2019-08-17T03:02:42.000000 |
| deleted | False |
| deleted_at | None |
| hosts | [u'compute-0-4', u'compute-0-5', u'compute-0-6'] |
| id | 7 |
| name | production |
| properties | |
| updated_at | None |
+-------------------+--------------------------------------------+

~# openstack aggregate show non-production
+-------------------+---------------------------------------------+
| Field | Value |
+-------------------+---------------------------------------------+
| availability_zone | non-production-az |
| created_at | 2019-08-17T03:02:54.000000 |
| deleted | False |
| deleted_at | None |
| hosts | [u'compute-0-7', u'compute-0-8', u'compute-0-9'] |
| id | 10 |
| name | non-production |
| properties | |
| updated_at | None |
+-------------------+---------------------------------------------+
~#
```

The output of the commands above confirms that we have successfully created the host aggregate groups and availability zones.

### Launch Virtual Machines in Availability Zones

Now let's create a couple of virtual machines in these two availability zones. To create a VM in a particular availability zone, use the command below.

**Syntax:**

# openstack server create --flavor <flavor-name> --image <Image-Name-Or-Image-ID> --nic net-id=<Network-ID> --security-group <Security-Group-ID> --key-name <Keypair-Name> --availability-zone <AZ-Name> <VM-Name>

An example is shown below:

```
~# openstack server create --flavor m1.small --image Cirros --nic net-id=37b9ab9a-f198-4db1-a5d6-5789b05bfb4c --security-group f8dda7c3-f7c3-423b-923a-2b21fe0bbf3c --key-name mykey --availability-zone production-az test-vm-prod-az
```
|
||||
|
||||
Run below command to verify virtual machine details:
|
||||
|
||||
```
|
||||
~# openstack server show test-vm-prod-az
|
||||
```
|
||||
|
||||
![Openstack-Server-AZ-command][1]
|
||||
|
||||
To create a virtual machine on a specific compute node within an availability zone, use the following command:

**Syntax:**

# openstack server create --flavor &lt;flavor-name&gt; --image &lt;Image-Name-Or-Image-ID&gt; --nic net-id=&lt;Network-ID&gt; --security-group &lt;Security-Group-ID&gt; --key-name &lt;Keypair-Name&gt; --availability-zone &lt;AZ-Name&gt;:&lt;Compute-Host&gt; &lt;VM-Name&gt;

Suppose we want to spin up a VM in the production AZ on a specific compute node (compute-0-6). To accomplish this, run the command below:

```
~# openstack server create --flavor m1.small --image Cirros --nic net-id=37b9ab9a-f198-4db1-a5d6-5789b05bfb4c --security-group f8dda7c3-f7c3-423b-923a-2b21fe0bbf3c --key-name mykey --availability-zone production-az:compute-0-6 test-vm-prod-az-host
```

Execute the following command to verify the VM's details:

```
~# openstack server show test-vm-prod-az-host
```

The output of the above command will look something like this:

![OpenStack-VM-AZ-Specific-Host][1]

Similarly, we can create virtual machines in the non-production AZ. An example is shown below:

```
~# openstack server create --flavor m1.small --image Cirros --nic net-id=37b9ab9a-f198-4db1-a5d6-5789b05bfb4c --security-group f8dda7c3-f7c3-423b-923a-2b21fe0bbf3c --key-name mykey --availability-zone non-production-az vm-nonprod-az
```

Use the following command to verify the VM's details:

```
~# openstack server show vm-nonprod-az
```

The output of the above command will look something like this:

![OpenStack-Non-Production-AZ-VM][1]

### Removing Host Aggregate Groups and Availability Zones

Suppose we want to remove the host aggregate groups and availability zones created above. To do that, we must first remove the hosts from the host aggregate group. Use the following command to see which hosts belong to it:

```
~# openstack aggregate show production
```

The above command lists the compute hosts that were added to the production host aggregate group.

Use the following command to remove a host from the host aggregate group:

**Syntax:**

# openstack aggregate remove host &lt;host-aggregate-name&gt; &lt;compute-name&gt;

```
~# openstack aggregate remove host production compute-0-4
~# openstack aggregate remove host production compute-0-5
~# openstack aggregate remove host production compute-0-6
```

Once you have removed all hosts from the group, re-run the command below:

```
~# openstack aggregate show production
+-------------------+----------------------------+
| Field             | Value                      |
+-------------------+----------------------------+
| availability_zone | production-az              |
| created_at        | 2019-08-17T03:02:42.000000 |
| deleted           | False                      |
| deleted_at        | None                       |
| hosts             | []                         |
| id                | 7                          |
| name              | production                 |
| properties        |                            |
| updated_at        | None                       |
+-------------------+----------------------------+
~#
```

As we can see in the output above, no compute hosts are associated with the production host aggregate group anymore, so we are ready to remove the group.

Use the following command to delete the host aggregate group and its associated availability zone:

```
~# openstack aggregate delete production
```

Run the following command to check whether the availability zone has been removed:

```
~# openstack availability zone list | grep -i production-az
~#
```

Similarly, you can follow the steps above to remove the non-production host aggregate group and its availability zone.
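
The teardown steps can also be collected into a small script. Here is a minimal sketch for the non-production group (the host names compute-0-7 to compute-0-9 are taken from the earlier `aggregate show` output); the commands are only printed as a dry run — drop the `echo` prefix to actually execute them against your cloud:

```shell
#!/usr/bin/env bash
# Dry-run sketch: print the commands that tear down the non-production
# host aggregate group and its availability zone. Remove "echo" to run them.
set -euo pipefail

AGG=non-production
# Host list as shown by "openstack aggregate show non-production"
HOSTS=(compute-0-7 compute-0-8 compute-0-9)

# A host aggregate can only be deleted once it is empty,
# so remove every host first, then delete the group itself.
for host in "${HOSTS[@]}"; do
  echo openstack aggregate remove host "$AGG" "$host"
done
echo openstack aggregate delete "$AGG"
```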

That's all for this tutorial. If the content above helped you understand OpenStack host aggregates and availability zones, please share your feedback in the comments.

**Read Also: [Top 30 OpenStack Interview Questions and Answers][3]**

--------------------------------------------------------------------------------

via: https://www.linuxtechi.com/create-availability-zones-openstack-command-line/

作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/08/Availability-Zones-OpenStack-Command-Line.jpg
[3]: https://www.linuxtechi.com/openstack-interview-questions-answers/
@ -0,0 +1,133 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (lujun9972)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Command Line Heroes: Season 1: OS Wars)
|
||||
[#]: via: (https://www.redhat.com/en/command-line-heroes/season-1/os-wars-part-1)
|
||||
[#]: author: (redhat https://www.redhat.com)
|
||||
|
||||
《代码英雄》第一季(1):操作系统战争(上)
|
||||
======
|
||||
|
||||
> 代码英雄讲述了开发人员、程序员、黑客、极客和开源反叛者如何彻底改变技术前景的真实史诗故事。
|
||||
|
||||
本文是《[代码英雄](https://www.redhat.com/en/command-line-heroes)》系列播客[第一季第一集:操作系统战争(上)](https://www.redhat.com/en/command-line-heroes/season-1/os-wars-part-1) 的[音频](https://dts.podtrac.com/redirect.mp3/audio.simplecast.com/f7670e99.mp3)脚本。
|
||||
|
||||
Saron Yitbarek:有些故事如史诗般,惊险万分,在我脑海中似乎出现了星球大战电影开头的爬行文字。你知道的,就像——
|
||||
|
||||
配音:“第一集,操作系统大战”
|
||||
|
||||
Saron Yitbarek:是的,就像那样子。
|
||||
|
||||
配音:这是一个局势紧张加剧的时期。<ruby>比尔·盖茨<rt>Bill Gates</rt></ruby>与<ruby>史蒂夫·乔布斯<rt>Steve Jobs</rt></ruby>的帝国发起了一场无可避免的专有软件之战。[00:00:30] 盖茨与 IBM 结成了强大的联盟,而乔布斯则拒绝了对它的硬件和操作系统开放授权。他们争夺统治地位的争斗在一场操作系统战争中席卷了整个银河系。与此同时,这些帝王们所不知道的偏远之地,开源的反叛者们开始集聚。
|
||||
|
||||
Saron Yitbarek:好吧。这也许有点戏剧性,但当我们谈论上世纪八九十年代和 2000 年代的操作系统之争时,这也不算言过其实。[00:01:00] 确实曾经发生过一场史诗级的统治之战。史蒂夫·乔布斯和比尔·盖茨确实掌握着数十亿人的命运。掌控了操作系统,你就掌握了绝大多数人使用计算机的方式、互相通讯的方式、获取信息的方式。我可以一直罗列下去,不过你知道我的意思。掌握了操作系统,你就是帝王。
|
||||
|
||||
我是 Saron Yitbarek,你现在收听的是代码英雄,一款红帽公司原创的博客节目。[00:01:30] 你问,什么是<ruby>代码英雄<rt>Command Line Hero</rt></ruby>?嗯,如果你愿意创造而不仅仅是使用,如果你相信开发者拥有构建美好未来的能力,如果你希望拥有一个大家都有权利表达科技如何塑造生活的世界,那么你,我的朋友,就是一位代码英雄。在本系列节目中,我们将为你带来那些“白码起家”(LCTT 译注:原文是 “from the command line up”,应该是演绎自 “from the ground up”——白手起家)改变技术的程序员故事。[00:02:00] 那么我是谁,凭什么指导你踏上这段艰苦的旅程?Saron Yitbarek 是哪根葱?嗯,事实上我觉得我跟你差不多。我是一名面向初学者的开发人员,我做的任何事都依赖于开源软件,我的世界就是如此。通过在博客中讲故事,我可以跳出无聊的日常工作,鸟瞰全景,希望这对你也一样有用。
|
||||
|
||||
我迫不及待地想知道,开源技术从何而来?我的意思是,我对<ruby>林纳斯·托瓦兹<rt>Linus Torvalds</rt></ruby>和 Linux^® 的荣耀有一些了解,[00:02:30] 我相信你也一样,但是说真的,开源并不是一开始就有的,对吗?如果我想发自内心的感激这些最新、最棒的技术,比如 DevOps 和容器之类的,我感觉我对那些早期的开发者缺乏了解,我有必要了解这些东西来自何处。所以,让我们暂时先不用担心内存泄露和缓冲溢出。我们的旅程将从操作系统之战开始,这是一场波澜壮阔的桌面控制之战。[00:03:00] 这场战争亘古未有,因为:首先,在计算机时代,大公司拥有指数级的规模优势;其次,从未有过这么一场控制争夺战是如此变化多端。比尔·盖茨和史蒂夫·乔布斯? 他们也不知道结果会如何,但是到目前为止,这个故事进行到一半的时候,他们所争夺的所有东西都将发生改变、进化,最终上升到云端。
|
||||
|
||||
[00:03:30] 好的,让我们回到 1983 年的秋季。还有六年我才出生。那时候的总统还是<ruby>罗纳德·里根<rt>Ronald Reagan</rt></ruby>,美国和苏联扬言要把地球拖入核战争之中。在檀香山(火奴鲁鲁)的市政中心正在举办一年一度的苹果公司销售会议。一群苹果公司的员工正在等待史蒂夫·乔布斯上台。他 28 岁,热情洋溢,看起来非常自信。乔布斯很严肃地对着麦克风说他邀请了三个行业专家来就软件进行了一次小组讨论。[00:04:00] 然而随后发生的事情你肯定想不到。超级俗气的 80 年代音乐响彻整个房间。一堆多彩灯管照亮了舞台,然后一个播音员的声音响起-
|
||||
|
||||
配音:女士们,先生们,现在是麦金塔软件的约会游戏时间。
|
||||
|
||||
Saron Yitbarek:乔布斯的脸上露出一个大大的笑容,台上有三个 CEO 都需要轮流向他示好。这基本上就是 80 年代钻石王老五,不过是科技界的。[00:04:30] 两个软件大佬讲完话后,然后就轮到第三个人讲话了。仅此而已不是吗?是的。新面孔比尔·盖茨带着一个大大的遮住了半张脸的方框眼镜。他宣称在 1984 年,微软的一半收入将来自于麦金塔软件。他的这番话引来了观众热情的掌声。[00:05:00] 但是他们不知道的是,在一个月后,比尔·盖茨将会宣布发布 Windows 1.0 的计划。你永远也猜不到乔布斯正在跟苹果未来最大的敌人打情骂俏。但微软和苹果即将经历科技史上最糟糕的婚礼。他们会彼此背叛、相互毁灭,但又深深地、痛苦地捆绑在一起。
|
||||
|
||||
James Allworth:[00:05:30] 我猜从哲学上来讲,一个更理想化、注重用户体验高于一切,是一个一体化的组织,而微软则更务实,更模块化 ——
|
||||
|
||||
Saron Yitbarek:这位是 James Allworth。他是一位多产的科技作家,曾在苹果零售的企业团队工作。注意他给出的苹果的定义,一个一体化的组织,那种只对自己负责的公司,一个不想依赖别人的公司,这是关键。
|
||||
|
||||
James Allworth:[00:06:00] 苹果是一家一体化的公司,它希望专注于令人愉悦的用户体验,这意味着它希望控制整个技术栈以及交付的一切内容:从硬件到操作系统,甚至运行在操作系统上的应用程序。当新的创新、重要的创新刚刚进入市场,而你需要横跨软硬件,并且能够根据自己意愿和软件的革新来改变硬件时,这是一个优势。例如 ——,
|
||||
|
||||
Saron Yitbarek:[00:06:30] 很多人喜欢这种一体化的模式,并因此成为了苹果的铁杆粉丝。还有很多人则选择了微软。让我们回到檀香山的销售会议上,在同一场活动中,乔布斯向观众展示了他即将发布的超级碗广告。你可能已经亲眼见过这则广告了。想想<ruby>乔治·奥威尔<rt>George Orwell</rt></ruby>的 《一九八四》。在这个冰冷、灰暗的世界里,无意识的机器人正在独裁者的投射凝视下徘徊。[00:07:00] 这些机器人就像是 IBM 的用户们。然后,代表苹果公司的漂亮而健美的<ruby>安娅·梅杰<rt>Anya Major</rt></ruby>穿着鲜艳的衣服跑过大厅。她向着大佬们的屏幕猛地投出大锤,将它砸成了碎片。老大哥的咒语解除了,一个低沉的声音响起,苹果公司要开始介绍麦金塔了。
|
||||
|
||||
配音:这就是为什么现实中的 1984 跟小说《一九八四》不一样了。
|
||||
|
||||
Saron Yitbarek:是的,现在回顾那则广告,认为苹果是一个致力于解放大众的自由斗士的想法有点过分了。但这件事触动了我的神经。[00:07:30] Ken Segal 曾在为苹果制作这则广告的广告公司工作过。在早期,他为史蒂夫·乔布斯做了十多年的广告。
|
||||
|
||||
Ken Segal:1984 这则广告的风险很大。事实上,它的风险太大,以至于苹果公司在看到它的时候都不想播出它。你可能听说了史蒂夫喜欢它,但苹果公司董事会的人并不喜欢它。事实上,他们很愤怒,这么多钱被花在这么一件事情上,以至于他们想解雇广告代理商。[00:08:00] 史蒂夫则为我们公司辩护。
|
||||
|
||||
Saron Yitbarek:乔布斯,一如既往地,慧眼识英雄。
|
||||
|
||||
Ken Segal:这则广告在公司内、在业界内都引起了共鸣,成为了苹果产品的代表。无论人们那天是否有在购买电脑,它都带来了一种持续了一年又一年的影响,并有助于定义这家公司的品质:我们是叛军,我们是拿着大锤的人。
|
||||
|
||||
Saron Yitbarek:[00:08:30] 因此,在争夺数十亿潜在消费者心智的过程中,苹果公司和微软公司的帝王们正在学着把自己塑造成救世主、非凡的英雄、一种对生活方式的选择。但比尔·盖茨知道一些苹果难以理解的事情。那就是在一个相互连接的世界里,没有人,即便是帝王,也不能独自完成任务。
|
||||
|
||||
[00:09:00] 1985 年 6 月 25 日。盖茨给当时的苹果 CEO John Scully 发了一份备忘录。那是一个迷失的年代。乔布斯刚刚被逐出公司,直到 1996 年才回到苹果。也许正是因为乔布斯离开了,盖茨才敢写这份东西。在备忘录中,他鼓励苹果授权制造商分发他们的操作系统。我想读一下备忘录的最后部分,让你们知道这份备忘录是多么的有洞察力。[00:09:30] 盖茨写道:“如果没有其他个人电脑制造商的支持,苹果现在不可能让他们的创新技术成为标准。苹果必须开放麦金塔的架构,以获得快速发展和建立标准所需的支持。”换句话说,你们不要再自己玩自己的了。你们必须有与他人合作的意愿。你们必须与开发者合作。
|
||||
|
||||
[00:10:00]多年后你依然可以看到这条哲学思想,当微软首席执行官<ruby>史蒂夫·鲍尔默<rt>Steve Ballmer</rt></ruby>上台做主题演讲时,他开始大喊:“开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者。”你懂我的意思了吧。微软喜欢开发人员。虽然目前(LCTT 译注:本播客发布于 2018 年初)他们不打算与这些开发人员共享源代码,但是他们确实想建立起整个合作伙伴生态系统。[00:10:30] 而当比尔·盖茨建议苹果公司也这么做时,如你可能已经猜到的,这个想法就被苹果公司抛到了九霄云外。他们的关系产生了间隙,五个月后,微软发布了 Windows 1.0。战争开始了。
|
||||
|
||||
> 开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者。
|
||||
|
||||
[00:11:00] 你正在收听的是来自红帽公司的原创播客《代码英雄》。本集是第一集,我们将回到过去,重温操作系统战争的史诗故事,我们将会发现,科技巨头之间的战争是如何为我们今天所生活的开源世界扫清道路的
|
||||
|
||||
好的,让我们先来个背景故事吧。如果你已经听过了,那么请原谅我,但它很经典。当时是 1979 年,史蒂夫·乔布斯开车去<ruby>帕洛阿尔托<rt>Palo Alto</rt></ruby>的<ruby>施乐公园研究中心<rt>Xerox Park research center</rt></ruby>。[00:11:30] 那里的工程师一直在为他们所谓的图形用户界面开发一系列的元素。也许你听说过。它们有菜单、滚动条、按钮、文件夹和重叠的窗口。这是对计算机界面的一个美丽的新设想。这是前所未有的。作家兼记者 Steve Levy 会谈到它的潜力。
|
||||
|
||||
Steven Levy:[00:12:00] 对于这个新界面来说,有很多令人激动的地方,它比以前的交互界面更友好,以前用的所谓的命令行 —— 你和电脑之间的交互方式跟现实生活中的交互方式完全不同。鼠标和电脑上的图像让你可以做到像现实生活中的交互一样,你可以像指向现实生活中的东西一样指向电脑上的东西。这让事情变得简单多了。你无需要记住所有那些命令。
|
||||
|
||||
Saron Yitbarek:[00:12:30] 不过,施乐的高管们并没有意识到他们正坐在金矿上。一如既往地,工程师比主管们更清楚它的价值。因此那些工程师,当被要求向乔布斯展示所有这些东西是如何工作时,有点紧张。然而这是毕竟是高管的命令。乔布斯觉得,用他的话来说,“这个产品天才本来能够让施乐公司垄断整个行业,可是它最终会被公司的经营者毁掉,因为他们对产品的好坏没有概念。”[00:13:00] 这话有些苛刻,但是,乔布斯带着一卡车施乐高管错过的想法离开了会议。这几乎包含了他需要革新桌面计算体验的所有东西。1983 年,苹果发布了 Lisa 电脑,1984 年又发布了 Mac 电脑。这些设备的创意是抄袭自施乐公司的。
|
||||
|
||||
让我感兴趣的是,乔布斯对控诉他偷了图形用户界面的反应。他对此很冷静。他引用毕加索的话:“好的艺术家抄袭,伟大的艺术家偷窃。”他告诉一位记者,“我们总是无耻地窃取伟大的创意。”[00:13:30] 伟大的艺术家偷窃,好吧,我的意思是,我们说的并不是严格意义上的“偷窃”。没人拿到了专有的源代码并公然将其集成到他们自己的操作系统中去。这要更温和些,更像是创意的借用。这就难控制的多了,就像乔布斯自己即将学到的那样。传奇的软件奇才、真正的代码英雄 Andy Hertzfeld 就是麦金塔开发团队的最初成员。
|
||||
|
||||
Andy Hertzfeld:[00:14:00] 是的,微软是我们的第一个麦金塔电脑软件合作伙伴。当时,我们并没有把他们当成是竞争对手。他们是苹果之外,我们第一家交付麦金塔电脑原型的公司。我通常每周都会和微软的技术主管聊一次。他们是第一个尝试我们所编写软件的外部团队。[00:14:30] 他们给了我们非常重要的反馈,总的来说,我认为我们的关系非常好。但我也注意到,在我与技术主管的交谈中,他开始问一些系统实现方面的问题,而他本无需知道这些,我觉得,他们想要复制麦金塔电脑。我很早以前就向史蒂夫·乔布斯反馈过这件事,但在 1983 年秋天,这件事达到了高潮。[00:15:00] 我们发现,他们在 1983 年 11 月的 COMDEX 上发布了 Windows,但却没有提前告诉我们。对此史蒂夫·乔布斯勃然大怒。他认为那是一种背叛。
|
||||
|
||||
Saron Yitbarek:随着新版 Windows 的发布,很明显,微软从苹果那里学到了苹果从施乐那里学来的所有想法。乔布斯很易怒。他的关于伟大艺术家如何偷窃的毕加索名言被别人学去了,而且恐怕盖茨也正是这么做的。[00:15:30] 据报道,当乔布斯怒斥盖茨偷了他们的东西时,盖茨回应道:“史蒂夫,我觉得这更像是我们都有一个叫施乐的富有邻居,我闯进他家偷电视机,却发现你已经偷过了”。苹果最终以窃取 GUI 的外观和风格为名起诉了微软。这个案子持续了好几年,但是在 1993 年,第 9 巡回上诉法院的一名法官最终站在了微软一边。[00:16:00] Vaughn Walker 法官宣布外观和风格不受版权保护。这是非常重要的。这一决定让苹果在无法垄断桌面计算的界面。很快,苹果短暂的领先优势消失了。以下是 Steven Levy 的观点。
|
||||
|
||||
Steven Levy:他们之所以失去领先地位,不是因为微软方面窃取了知识产权,而是因为他们无法巩固自己在上世纪 80 年代拥有的更好的操作系统的优势。坦率地说,他们的电脑索价过高。[00:16:30] 因此微软从 20 世纪 80 年代中期开始开发 Windows 系统,但直到 1990 年开发出的 Windows 3,我想,它才真正算是一个为黄金时期做好准备的版本,才真正可供大众使用。[00:17:00] 从此以后,微软能够将数以亿计的用户迁移到图形界面,而这是苹果无法做到的。虽然苹果公司有一个非常好的操作系统,但是那已经是 1984 年的产品了。
|
||||
|
||||
Saron Yitbarek:现在微软主导着操作系统的战场。他们占据了 90% 的市场份额,并且针对各种各样的个人电脑进行了标准化。操作系统的未来看起来会由微软掌控。此后发生了什么?[00:17:30] 1997 年,波士顿 Macworld 博览会上,你看到了一个几近破产的苹果,一个谦逊的多的史蒂夫·乔布斯走上舞台,开始谈论伙伴关系的重要性,特别是他们与微软的新型合作伙伴关系。史蒂夫·乔布斯呼吁双方缓和关系,停止火拼。微软将拥有巨大的市场份额。从表面看,我们可能会认为世界和平了。但当利益如此巨大时,事情就没那么简单了。[00:18:00] 就在苹果和微软在数十年的争斗中伤痕累累、最终败退到死角之际,一名 21 岁的芬兰计算机科学专业学生出现了。几乎是偶然地,他彻底改变了一切。
|
||||
|
||||
我是 Saron Yitbarek,这里是代码英雄。
|
||||
|
||||
正当某些科技巨头正忙着就专有软件相互攻击时,自由软件和开源软件的新领军者如雨后春笋般涌现。[00:18:30] 其中一位优胜者就是<ruby>理查德·斯托尔曼<rt>Richard Stallman</rt></ruby>。你也许对他的工作很熟悉。他想要有自由软件和自由社会。这就像言论自由一样的<ruby>自由<rt>free</rt></ruby>,而不是像免费啤酒一样的<ruby>免费<rt>free</rt></ruby>。早在 80 年代,斯托尔曼就发现,除了昂贵的专有操作系统(如 UNIX)外,就没有其他可行的替代品。因此他决定自己做一个。斯托尔曼的<ruby>自由软件基金会<rt>Free Software Foundation</rt></ruby>开发了 GNU,当然,它的意思是 “GNU's not UNIX”。它将是一个像 UNIX 一样的操作系统,但不包含所有的 UNIX 代码,而且用户可以自由共享。
|
||||
|
||||
[00:19:00] 为了让你体会到上世纪 80 年代自由软件概念的重要性,从不同角度来说拥有 UNIX 代码的两家公司,<ruby>AT&T 贝尔实验室<rt>AT&T Bell Laboratories</rt></ruby>以及<ruby>UNIX 系统实验室<rt>UNIX System Laboratories</rt></ruby>威胁将会起诉任何看过 UNIX 源代码后又创建自己操作系统的人。这些人是次级专利所属。[00:19:30] 用这两家公司的话来说,所有这些程序员都在“精神上受到了污染”,因为他们都见过 UNIX 代码。在 UNIX 系统实验室和<ruby>伯克利软件设计公司<rt>Berkeley Software Design</rt></ruby>之间的一个著名的法庭案例中,有人认为任何功能类似的系统,即使它本身没有使用 UNIX 代码,也侵犯版权。Paul Jones 当时是一名开发人员。他现在是数字图书馆 ibiblio.org 的主管。
|
||||
|
||||
Paul Jones:[00:20:00] 任何看过代码的人都受到了精神污染是他们的观点。因此几乎所有在安装有与 UNIX 相关操作系统的电脑上工作过的人以及任何在计算机科学部门工作的人都受到精神上的污染。因此,在 USENIX 的一年里,我们都得到了一写带有红色字母的白色小别针,上面写着“精神受到了污染”。我们很喜欢带着这些别针到处走,以表达我们跟着贝尔实验室混,因为我们的精神受到了污染。
|
||||
|
||||
Saron Yitbarek:[00:20:30] 整个世界都被精神污染了。想要保持纯粹、保持事物的美好和专有的旧哲学正变得越来越不现实。正是在这被污染的现实中,历史上最伟大的代码英雄之一诞生了,他是一个芬兰男孩,名叫<ruby>林纳斯·托瓦兹<rt>Linus Torvalds</rt></ruby>。如果这是《星球大战》,那么林纳斯·托瓦兹就是我们的<ruby>卢克·天行者<rt>Luke Skywalker</rt></ruby>。他是赫尔辛基大学一名温文尔雅的研究生。[00:21:00] 有才华,但缺乏大志。典型的被逼上梁山的英雄。和其他年轻的英雄一样,他也感到沮丧。他想把 386 处理器整合到他的新电脑中。他对自己的 IBM 兼容电脑上运行的 MS-DOS 操作系统并不感冒,也负担不起 UNIX 软件 5000 美元的价格,而只有 UNIX 才能让他自由地编程。解决方案是托瓦兹在 1991 年春天基于 MINIX 开发了一个名为 Linux 的操作系统内核。他自己的操作系统内核。
|
||||
|
||||
Steven Vaughan-Nichols:[00:21:30] 林纳斯·托瓦兹真的只是想找点乐子而已。
|
||||
|
||||
Saron Yitbarek:Steven Vaughan-Nichols 是 ZDNet.com 的特约编辑,而且他从科技行业出现以来就一直在写科技行业相关的内容。
|
||||
|
||||
Steven Vaughan-Nichols:当时有几个类似的操作系统。他最关注的是一个名叫 MINIX 的操作系统,MINIX 旨在让学生学习如何构建操作系统。林纳斯看到这些,觉得很有趣,但他想建立自己的操作系统。[00:22:00] 所以,它实际上始于赫尔辛基的一个 DIY 项目。一切就这样开始了,基本上就是一个大孩子在玩耍,学习如何做些什么。[00:22:30] 但不同之处在于,他足够聪明、足够执着,也足够友好,让所有其他人都参与进来,然后他开始把这个项目进行到底。27 年后,这个项目变得比他想象的要大得多。
|
||||
|
||||
Saron Yitbarek:到 1991 年秋季,托瓦兹发布了 10000 行代码,世界各地的人们开始评头论足,然后进行优化、添加和修改代码。[00:23:00] 对于今天的开发人员来说,这似乎很正常,但请记住,在那个时候,像这样的开放协作是对微软、苹果和 IBM 已经做的很好的整个专有系统的道德侮辱。随后这种开放性被奉若神明。托瓦兹将 Linux 置于 GNU 通用公共许可证(GPL)之下。曾经保障斯托尔曼的 GNU 系统自由的许可证现在也将保障 Linux 的自由。Vaughan-Nichols 解释道,这种融入到 GPL 的重要性怎么强调都不过分,它基本上能永远保证软件的自由和开放性。
|
||||
|
||||
Steven Vaughan-Nichols:[00:23:30] 事实上,根据 Linux 所遵循的许可协议,即 GPL 第 2 版,如果你想贩卖 Linux 或者向全世界展示它,你必须与他人共享代码,所以如果你对其做了一些改进,仅仅给别人使用是不够的。事实上你必须和他们分享所有这些变化的具体细节。然后,如果这些改进足够好,就会被 Linux 所吸收。
|
||||
|
||||
Saron Yitbarek:[00:24:00] 事实证明,这种公开的方式极具吸引力。<ruby>埃里克·雷蒙德<rt>Eric Raymond</rt></ruby> 是这场运动的早期传道者之一,他在他那篇著名的文章中写道:“微软和苹果这样的公司一直在试图建造软件大教堂,而 Linux 及类似的软件则提供了一个由不同议程和方法组成的巨大集市,集市比大教堂有趣多了。”
|
||||
|
||||
Stormy Peters:我认为在那个时候,真正吸引人的是人们终于可以把控自己的世界了。
|
||||
|
||||
Saron Yitbarek:Stormy Peters 是一位行业分析师,也是自由和开源软件的倡导者。
|
||||
|
||||
Stormy Peters:[00:24:30] 当开源软件第一次出现的时候,所有的操作系统都是专有的。如果不使用专有软件,你甚至不能添加打印机,你不能添加耳机,你不能自己开发一个小型硬件设备,然后让它在你的笔记本电脑上运行。你甚至不能放入 DVD 并复制它,因为你不能改变软件,即使你拥有这张 DVD,你也无法复制它。[00:25:00] 你无法控制你购买的硬件/软件系统。你不能从中创造出任何新的、更大的、更好的东西。这就是为什么开源操作系统在一开始就如此重要的原因。我们需要一个开源协作环境,在那里我们可以构建更大更好的东西。
|
||||
|
||||
Saron Yitbarek:请注意,Linux 并不是一个纯粹的平等主义乌托邦。林纳斯·托瓦兹不会批准对内核的所有修改,而是主导了内核的变更。他安排了十几个人来管理内核的不同部分。[00:25:30] 这些人也会信任自己下面的人,以此类推,形成信任金字塔。变化可能来自任何地方,但它们都是经过判断和策划的。
|
||||
|
||||
然而,考虑到到林纳斯的 DIY 项目一开始是多么的简陋和随意,这项成就令人十分惊讶。他完全不知道自己就是这一切中的卢克·天行者。当时他只有 21 岁,一半的时间都在编程。但是当魔盒第一次被打开,人们开始给他反馈。[00:26:00] 几十个,然后几百个,成千上万的贡献者。有了这样的众包基础,Linux 很快就开始成长。真的成长得很快。甚至最终引起了微软的注意。他们的首席执行官<ruby>史蒂夫·鲍尔默<rt>Steve Ballmer</rt></ruby>将 Linux 称为是“一种癌症,从知识产权得角度来看,它传染了接触到得任何东西 ”。Steven Levy 将会描述 Ballmer 的由来。
|
||||
|
||||
Steven Levy:[00:26:30] 一旦微软真正巩固了它的垄断地位,而且它也确实被联邦法院判定为垄断,他们将会对任何可能对其构成威胁的事情做出强烈反应。因此,既然他们对软件收费,很自然得,他们将自由软件得出现看成是一种癌症。他们试图提出一个知识产权理论,来解释为什么这对消费者不利。
|
||||
|
||||
Saron Yitbarek:[00:27:00] Linux 在不断传播,微软也开始担心起来。到了 2006 年,Linux 成为仅次于 Windows 的第二大常用操作系统,全球约有 5000 名开发人员在使用它。5000 名开发者。还记得比尔·盖茨给苹果公司的备忘录吗?在那份备忘录中,他向苹果公司的员工们论述了与他人合作的重要性。事实证明,开源将把伙伴关系的概念提升到一个全新的水平,这是比尔·盖茨从未预见到的。
|
||||
|
||||
[00:27:30] 我们一直在谈论操作系统之间的大战,但是到目前为止,并没有怎么提到无名英雄和开发者们。在下次的代码英雄中,情况就不同了。第二集讲的是操作系统大战的第二部分,是关于 Linux 崛起的。业界醒悟过来,认识到了开发人员的重要性。这些开源反叛者变得越来越强大,战场从桌面转移到了服务器领域。[00:28:00] 这里有商业间谍活动、新的英雄人物,还有科技史上最不可思议的改变。这一切都在操作系统大战的后半集内达到了高潮。
|
||||
|
||||
要想免费自动获得新一集的代码英雄,请点击订阅苹果播客、Spotify、谷歌 Play,或其他应用获取该播客。在这一季剩下的时间里,我们将参观最新的战场,相互争斗的版图,这里是下一代的代码英雄留下印记的地方。[00:28:30] 更多信息,请访问 https://redhat.com/commandlineheroes 。我是 Saron Yitbarek。下次之前,继续编码。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.redhat.com/en/command-line-heroes/season-1/os-wars-part-1
|
||||
|
||||
作者:[redhat][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.redhat.com
|
||||
[b]: https://github.com/lujun9972
|
@ -1,62 +0,0 @@
|
||||
[#]: collector: "lujun9972"
|
||||
[#]: translator: "qfzy1233 "
|
||||
[#]: reviewer: " "
|
||||
[#]: publisher: " "
|
||||
[#]: url: " "
|
||||
[#]: subject: "What is a Linux user?"
|
||||
[#]: via: "https://opensource.com/article/19/6/what-linux-user"
|
||||
[#]: author: "Anderson Silva https://opensource.com/users/ansilva/users/petercheer/users/ansilva/users/greg-p/users/ansilva/users/ansilva/users/bcotton/users/ansilva/users/seth/users/ansilva/users/don-watkins/users/ansilva/users/seth"
|
||||
|
||||
何谓 Linux 用户?
|
||||
======
|
||||
|
||||
“Linux 用户”这一定义已经拓展到了更大的范围,同时也发生了巨大的改变。
|
||||
![][1]
|
||||
|
||||
> _编者按:本文于 2019 年 6 月 11 日下午 1:15:19 更新,以更准确地反映作者对 Linux 社区开放和包容特性的看法。_
|
||||
|
||||
|
||||
|
||||
再过两年,Linux 内核就要迎来它 30 岁的生日了。让我们回想一下!1991 年的时候你在哪里?你出生了吗?那年我 13 岁!在 1991 到 1993 年间只推出了极少数的 Linux 发行版,其中至少有三款——Slackware、Debian 和 Red Hat——构成了 Linux 运动赖以发展的[中坚力量][2]。
|
||||
|
||||
当年获得Linux发行版的副本,并在笔记本或服务器上进行安装和配置,和今天相比是很不一样的。当时是十分艰难的!也是令人沮丧的!如果你让能让它运行起来,就是一个了不起的成就!我们不得不与不兼容的硬件、设备上的配置跳线、BIOS 问题以及许多其他问题作斗争。即使硬件是兼容的,很多时候,你仍然需要编译内核、模块和驱动程序才能让它们在你的系统上工作。
|
||||
|
||||
如果你当时在场,你可能会点头。有些读者甚至称它们为“美好的过往”,因为选择使用 Linux 意味着仅仅是为了让操作系统继续运行,你就必须学习操作系统、计算机体系架构、系统管理、网络,甚至编程。但我并不赞同他们的说法,窃以为: Linux 在 IT 行业带给我们的最让人惊讶的改变就是,它成为了我们每个人技术能力的基础组成部分!
|
||||
|
||||
将近30年过去了,无论是桌面和服务器领域 Linux 系统都有了脱胎换骨的变换。你可以在汽车上,在飞机上,家用电器上,智能手机上……几乎任何地方发现 Linux 的影子!你甚至可以购买预装 Linux 的笔记本电脑、台式机和服务器。如果你考虑云计算,企业甚至个人都可以一键部署 Linux 虚拟机,由此可见 Linux 的应用已经变得多么普遍了。
|
||||
|
||||
考虑到这些,我想问你的问题是: **这个时代如何定义“Linux用户”**
|
||||
|
||||
如果你从 System76 或 Dell 为你的父母或祖父母购买一台 Linux 笔记本电脑,为其登录好他们的社交媒体和电子邮件,并告诉他们经常单击“系统升级”,那么他们现在就是 Linux 用户了。如果你是在 Windows 或MacOS 机器上进行以上操作,那么他们就是 Windows 或 MacOS 用户。令人难以置信的是,与90年代不同,现在的 Linux 任何人都可以轻易上手。
|
||||
|
||||
由于种种原因,这也归因于web浏览器成为了桌面计算机上的“杀手级应用程序”。现在,许多用户并不关心他们使用的是什么操作系统,只要他们能够访问到他们的应用程序或服务。
|
||||
|
||||
你知道有多少人经常使用他们的电话、桌面或笔记本电脑,但无法管理他们系统上的文件、目录和驱动程序?又有多少人不会通过二进制文件安装“应用程序商店”没有收录的程序?更不要提从头编译应用程序,对我来说,几乎没有人。这正是成熟的开源软件和相应的生态对于易用性的改进的动人之处。
|
||||
|
||||
今天的 Linux 用户不需要像90年代或21世纪初的 Linux 用户那样了解、学习甚至查询信息,这并不是一件坏事。过去那种认为Linux只适合工科男使用的想法已经一去不复返了。
|
||||
|
||||
对于那些对计算机、操作系统以及在自由软件上创建、使用和协作的想法感兴趣、好奇、着迷的Linux用户来说,Liunx 依旧有研究的空间。如今在 Windows 和 MacOS 上也有同样多的空间留给创造性的开源贡献者。今天,成为Linux用户就是成为一名与 Linux 系统同行的人。这是一件很棒的事情。
|
||||
|
||||
|
||||
|
||||
### Linux 用户定义的转变
|
||||
|
||||
当我开始使用Linux时,作为一个 Linux 用户意味着知道操作系统如何以各种方式、形态和形式运行。Linux 在某种程度上已经成熟,这使得“Linux用户”的定义可以包含更广泛的领域及那些领域里的人们。这可能是显而易见的一点,但重要的还是要说清楚:任何Linux 用户皆“生”而平等。
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/6/what-linux-user
|
||||
|
||||
作者:[Anderson Silva][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[qfzy1233](https://github.com/qfzy1233)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ansilva/users/petercheer/users/ansilva/users/greg-p/users/ansilva/users/ansilva/users/bcotton/users/ansilva/users/seth/users/ansilva/users/don-watkins/users/ansilva/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22
|
||||
[2]: https://en.wikipedia.org/wiki/Linux_distribution#/media/File:Linux_Distribution_Timeline.svg
|
@ -1,195 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (martin2011qi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (What is POSIX? Richard Stallman explains)
|
||||
[#]: via: (https://opensource.com/article/19/7/what-posix-richard-stallman-explains)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
|
||||
POSIX 是什么?让我们听听 Richard Stallman 的诠释
|
||||
======
|
||||
从计算机自由先驱口中探寻操作系统兼容性标准背后的本质。
|
||||
|
||||
![Scissors cutting open access to files][1]
|
||||
|
||||
[POSIX][2] 是什么?为什么如此重要?你可能在很多的技术类文章中看到这个术语,但往往会在探寻其本质时迷失在<ruby>技术初始主义<rt>techno-initialisms</rt></ruby>的海洋中或是<ruby>以 X 结尾的行话<rt>jargon-that-ends-in-X</rt></ruby>中。
|
||||
|
||||
Richard Stallman 认为用 “开源” 和 “闭源” 来归类软件是一种错误的方法。Stallman 将程序分类为 _<ruby>尊重自由的<rt>freedom-respecting</rt></ruby>_(“<ruby>自由<rt>free</rt></ruby>” 或 “<ruby>自由(西语)<rt>libre</rt></ruby>”)和 _<ruby>践踏自由的<rt>freedom-trampling</rt></ruby>_(“<ruby>非自由<rt>non-free</rt></ruby>” 或 “<ruby>专有<rt>proprietary</rt></ruby>”)。开源话语通常会为了(用户)实际得到的<ruby>优势/便利<rt>advantages</rt></ruby>考虑,而去鼓励某些做法,而非作为道德层面上的约束。
|
||||
|
||||
Stallman 在由其本人于 1984 年发起的自由软件运动中表明,不仅仅是这些 _<ruby>优势/便利<rt>advantages</rt></ruby>_ 受到了威胁。计算机的用户 _<ruby>理应得到<rt>deserve</rt></ruby>_ 计算机的控制权,因此拒绝被用户控制的程序即是 <ruby>非正义<rt>injustice</rt></ruby>,理应得到<ruby>拒绝<rt>rejected</rt></ruby>和<ruby>排斥<rt>eliminated</rt></ruby>。对于用户的控制权,程序应当给予用户 [四项基本自由][4]:
|
||||
|
||||
* 自由度0:无论用户出于何种目的,必须可以按照用户意愿,自由地运行该软件。
|
||||
* 自由度1:用户可以自由地学习并修改该软件,以此来帮助用户完成用户自己的计算。作为前提,用户必须可以访问到该软件的源代码。
|
||||
* 自由度2:用户可以自由地分发该软件的拷贝,这样就可以助人。
|
||||
* 自由度3:用户可以自由地分发该软件修改后的拷贝。借此,用户可以把改进后的软件分享给整个社区令他人也从中受益。作为前提,用户必须可以访问到该软件的源代码。
|
||||
|
||||
### 关于 POSIX
|
||||
|
||||
**Seth:** POSIX 标准是由 [IEEE][5] 发布,用于描述 “可移植操作系统” 的文档。只要开发人员编写符合此描述的程序,他们生产的便是符合 POSIX 的程序。在科技行业,我们称之为 “<ruby>规范<rt>specification</rt></ruby>” 或将其简写为 “spec”。 就技术用语而言,这是可以理解的,但我们不禁要问是什么使操作系统 “可移植”?
|
||||
|
||||
**RMS:** 我认为完成这项任务应该是 _接口_, 接口就应该做到可移植(在不同系统中皆然),而不应只支持任何一种 _系统_。实际上,各种各样的系统内部构造虽然不同,但仍会支持一部分 POSIX 接口规范。
|
||||
|
||||
**Seth:** 因此,如果两个系统皆具有 POSIX 兼容程序,那么他们便可以相互对对方做出假设,也就使他们知道如何相互 “交谈”。我了解到 “POSIX” 这个简称是你想出来的。那你是怎么想出来的呢?它是如何就被 IEEE 采纳了呢?
|
||||
|
||||
**RMS:** IEEE 已经完成了规范的开发,但还没为其想好简练的名称。标题上写的是 “便携式操作系统接口”,虽然我已记不清确切的单词。委员倾向于将 “IEEEIX” 作为简称。而我认为那不太好。发音有点怪 - 听起来像恐怖的尖叫,“Ayeee!” - 所以我觉得人们反而会倾向于称之为 “Unix”。
|
||||
|
||||
但是,由于 [GNU 并不是 Unix(GNU's Not Unix)][6],并且它打算取代之,所以我不希望人们将 GNU 称为 “Unix 系统”。因此,我提出了人们在实际使用中会比较好用的简称。那个时候也没有什么灵感,我就用了一个并不是非常聪明的方式创造了这个简称:我使用了 “<ruby>便携式操作系统<rt>portable operating system</rt></ruby>” 的首字母缩写,并在末尾添加了 “ix” 作为简称。IEEE 也欣然接受了。
|
||||
|
||||
**Seth:** POSIX 中的 “操作系统” 是仅涉及 Unix 和类 Unix 的系统(如GNU)呢?还是意图包含所有操作系统?
|
||||
|
||||
**RMS:** 缩写中的 “操作系统” 一词涵盖的系统并不完全指的是类 Unix 系统,也不是完全要符合 POSIX 规范。但是,该规范适用于大量类 Unix 的系统;也只有这样的系统才会去遵守 POSIX 规范。
|
||||
|
||||
**Seth:** 你是否参与审核或更新当前版本的 POSIX 标准?
|
||||
|
||||
**RMS:** 现在不了。
|
||||
|
||||
**Seth:** GNU Autotools 工具链能使应用程序更容易移植,至少在构建和安装的时间上是这样的。所以可以认为 Autotools 是构建便携式基础设施的重要一环吗?
|
||||
|
||||
**RMS:** 是的,因为即使在遵循 POSIX 的系统中,也存在着诸多差异。而 Autotools 可以使程序更容易适应这些差异。顺带一提,如果有人想助力 Autotools 的开发,可以发邮件联系我。
|
||||
|
||||
**Seth:** 我认为,是 GNU 首先开始让人们意识到这种可能:一个非 Unix 的系统也可以从专有技术中挣脱出来。不过在自由软件如何协作方面,实现的途中应该还存在一些不明确的地方吧!
|
||||
|
||||
**RMS:** 我认为没有任何空白或不确定性。我只是简单地照着 BSD 的接口写而已。
|
||||
|
||||
**Seth:** 一些 GNU 应用程序符合 POSIX 标准,而另一些 GNU 应用程序则具有不在 POSIX 规范中的或者是缺少规范要求的 GNU 特定功能。对于 GNU 应用程序 POSIX 合规性有多重要?
|
||||
|
||||
**RMS:** 遵循标准对于为用户服务的程度很重要。我们不仅将标准视为权威,而且将其作为可能有用的指南来遵循。因此,我们谈论的是<ruby>遵循<rt>following</rt></ruby>标准而不是“<ruby>遵守<rt>complying</rt></ruby>”。可以参考 GNU 编码标准中的 [非 GNU 标准][7] 段落。
|
||||
|
||||
我们努力在大多数问题上与标准兼容,因为这样做在大多数的问题上能为用户提供最好的服务。但也偶有例外。
|
||||
|
||||
例如,POSIX 指定某些实用程序以 512 字节为单位测量磁盘空间。我要求委员会将其改为 1K,但被拒绝了,说是有个<ruby>官僚主义的规则<rt>bureaucratic rule</rt></ruby>强迫选用 512。我不记得有多少人试图争辩说,用户会对这个决定感到满意的。
|
||||
|
||||
由于 GNU 的第二优先级,在用户的<ruby>自由<rt>freedom</rt></ruby>之后即是用户的<ruby>便利<rt>convenience</rt></ruby>,我们使 GNU 程序以默认 1K 为单位按块测量磁盘空间。
|
||||
|
||||
然而,为了防止竞争对手利用这点给 GNU 安上 “<ruby>不合规<rt>noncompliant</rt></ruby>” 的骂名,我们实现了遵循 POSIX 和 ISO C 的可选模式,这种妥协着实可笑。想要遵循 POSIX,只需设置环境变量 POSIXLY_CORRECT,即可使程序符合以 512 字节为单位的 POSIX 列表磁盘空间。如果有人知道实际使用 POSIXLY_CORRECT 或者 GCC 中的 **\--pedantic** 会为某些用户提供什么实际利益的话,请务必告诉我。
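
作为一个可以在本地验证的小例子(假设系统使用 GNU coreutils 的 `du`),下面的脚本对比了默认的 1K 块单位与设置 POSIXLY_CORRECT 后的 512 字节块单位——对同一个文件,后者报告的数值恰好是前者的两倍:

```shell
# 演示 POSIXLY_CORRECT 对 GNU du 计量单位的影响(假设为 GNU coreutils 环境)
d=$(mktemp -d)
dd if=/dev/zero of="$d/f" bs=1024 count=8 status=none   # 写入 8 KiB 数据

du -s "$d/f" | cut -f1                     # 默认:以 1K 块为单位
POSIXLY_CORRECT=1 du -s "$d/f" | cut -f1   # POSIX 模式:以 512 字节块为单位

rm -rf "$d"
```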
|
||||
|
||||
**Seth:** 符合 POSIX 标准的自由软件项目是否更容易移植到其他类 Unix 系统?
|
||||
|
||||
**RMS:** 我认为是这样,但自 80 年代开始,我决定不再把时间浪费在将软件移植到 GNU 以外的系统上。我开始专注于推进 GNU 系统,使其不必使用任何非自由软件。至于将 GNU 程序移植到非类 GNU 系统就留给想在其他系统上运行它们的人们了。
|
||||
|
||||
**Seth:** POSIX 对于软件的自由很重要吗?
|
||||
|
||||
**RMS:** 本质上说,(遵不遵循 POSIX)其实没有任何区别。但是,POSIX 和 ISO C 的标准化确实使 GNU 系统更容易迁移,这有助于我们更快地实现从非自由软件中解放用户的目标。在 90 年代早期,使用自由软件成功地打造出 Linux,同时也填补了 GNU 中对于内核的空白。
|
||||
|
||||
### POSIX 采纳 GNU 的创新
|
||||
|
||||
我还讯问过 Dr. Stallman,是否有任何 GNU 特定的创新或惯例后来被采纳为 POSIX 标准。他无法回想起具体的例子,但友好地代我向几位开发人员发了邮件。
|
||||
|
||||
开发人员 Giacomo Catenazzi,James Youngman,Eric Blake,Arnold Robbins 和 Joshua Judson Rosen 都根据当前版本的 POSIX 以及诸多前回版本做出了回应。POSIX 是一个 “<ruby>活的<rt>living</rt></ruby>” 标准,因此会不断被行业专业人士更新和评审,许多从事 GNU 项目的开发人员会提出对于 GNU 特性的包容。
|
||||
|
||||
为了回顾这些有趣的历史,接下来会罗列一些脍炙人口的已经融入 POSIX 的 GNU 特性。
|
||||
|
||||
#### Make
|
||||
|
||||
一些 GNU **Make** 的特性已经被 POSIX 定义的 **make** 所采用。相关的 [规范][8] 提供了从现有实现中借来特性的详细归属。
|
||||
|
||||
#### Diff 和 patch
|
||||
|
||||
**[diff][9]** 和 **[patch][10]** 命令都直接从这些工具的 GNU 版本中引进了 **-u** 和 **-U** 选项。
|
||||
|
||||
#### C 库
|
||||
|
||||
POSIX 采用了 GNU C 库 **glibc** 的许多特性。<ruby>血统<rt>Lineage</rt></ruby>一时已难以追溯,但 James Youngman 如是写道:
|
||||
|
||||
>“我非常确定 GCC 首创了许多 ISO C 的特性。例如,**_Noreturn** 是 C11 中的新特性,但 GCC-1.35 便具有此功能(使用 **volatile** 作为声明函数的修饰符)。另外,尽管我不太确定,GCC-1.35 支持的可变长度数组似乎与现代 C 中的<ruby>柔性数组<rt>conformant array</rt></ruby>非常相似。”
|
||||
|
||||
Giacomo Catenazzi 援引 Open Group 的 [文章 **strftime**][11],并指出其归因:“这是基于某版本 GNU libc 的 **strftime()** 的某特性。”
|
||||
|
||||
Eric Blake 指出,对于 **getline()** 和各种基于本地环境的 ***_l()** 函数,GNU 绝对是这方面的先驱。
|
||||
|
||||
Joshua Judson Rosen 补充道,他对采纳 **getline** 函数的印象深刻,特别是在目睹了全然不同操作系统的代码中和类 GNU 代码相似却又怪异的做法后。
|
||||
|
||||
“等等……那是 GNU 特有的……不是吗?显然已经不再是了。”
|
||||
|
||||
Rosen 指出的 [**getline** 手册页][12] 中写道:
|
||||
|
||||
> **getline()** 和 **getdelim()** 最初都是 GNU 扩展。在 POSIX.1-2008 中被标准化。
|
||||
|
||||
Eric Blake 向我发送了一份其他扩展的列表,这些扩展可能会在下一个 POSIX 修订版中添加(代号为 Issue 8,大约在 2021 年前后):
|
||||
|
||||
* [ppoll][13]
|
||||
* [pthread_cond_clockwait et al.][14]
|
||||
* [posix_spawn_file_actions_addchdir][15]
|
||||
* [getlocalename_1][16]
|
||||
* [reallocarray][17]
|
||||
|
||||
|
||||
|
||||
### 关于用户空间的扩展
|
||||
|
||||
POSIX 不仅为开发人员定义函数和特性,还为用户空间定义了标准行为。
|
||||
|
||||
#### ls
|
||||
|
||||
**-A** 选项会排除符号 **.**(代表当前位置)和 **..**(代表上一级目录)这些来自 **ls** 命令的结果。被 POSIX 2008 采纳。
|
||||
|
||||
#### find
|
||||
|
||||
**find** 命令是编写<ruby>临时<rt>ad hoc</rt></ruby> [**for** 循环][18]的利器,也是通往[<ruby>并行<rt>parallel</rt></ruby>][19]处理的大门。
|
||||
|
||||
一些从 GNU 引入到 POSIX 的<ruby>便捷操作<rt>conveniences</rt></ruby>,包括 **-path** 和 **-perm** 选项。
|
||||
|
||||
**-path** 选项帮你过滤与文件系统路径模式匹配的搜索结果,并且 1996 年前 GNU 版本的 **find** (根据 **findutil** 在 Git 仓库中最早的记录)便可使用此选项。James Youngman 指出 [HP-UX][20] 也很早就有这个选项,所以究竟是 GNU 还是 HP-UX 做出的这一创新(抑或两者兼而有之)无法考证。
|
||||
|
||||
**-perm** 选项帮你按文件权限过滤搜索结果。1996 年 GNU 版本的 **find** 中便已存在,随后被纳入 POSIX 标准 “IEEE Std 1003.1,2004 Edition” 中。
|
||||
|
||||
**xargs** 命令是 **findutils** 软件包的一部分,1996 年的时候就有一个 **-p** 选项会将 **xargs** 置于交互模式(用户将被提示是否继续),随后被纳入 POSIX 标准 "IEEE Std 1003.1, 2004 Edition." 中。
|
||||
|
||||
#### Awk
|
||||
|
||||
GNU **awk**(即 **/usr/bin** 目录中的 **gawk** 命令,可能也是符号链接 **awk** 的目标地址)的维护者 Arnold Robbins 说道,**gawk** 和 **mawk**(另一个GPL 的 **awk** 实现)允许使用 **RS** 作为正则表达式,即 **RS** 的长度大于 1 时的情况。这一特性还不是 POSIX 的特性,但有 [迹象表明它即将会是][21]:
|
||||
|
||||
> _在扩展正则表达式中未定义的行为将导致 NUL 结果,以支持 GNU gawk 程序未来对于处理二进制数据的扩展。_
|
||||
>
|
||||
> _使用多字符 RS 值的未指定行为是为了支持未来可能基于扩展正则表达式使用记录分隔符的特性。历史实现为采用字符串的第一个字符而忽略其他字符。_
|
||||
|
||||
这是一个重大的增强,因为 **RS** 符号定义了记录之间的分隔符。可能是逗号、分号、短划线、或者是任何此类字符,但如果它是字符 _序列_,则只会使用第一个字符,除非你使用的是 **gawk** 或 **mawk**。想象一下这种情况,使用省略号(连续的三个点)作为解析 IP 地址文档的分隔记录,只是想获取在每个 IP 地址的每个点处解析的结果。
|
||||
|
||||
**[Mawk][22]** 首先支持这个功能,但是几年来没有维护者,留下来的火把由 **gawk** 接过。(**Mawk** 已然获得了一个新的维护者,可以说是大家薪火传承地将这一特性推向共同的预期值。)
|
||||
|
||||
### POSIX 规范
|
||||
|
||||
总的来说,Giacomo Catenzzi 指出,“……因为 GNU 的实用程序使用广泛,而且许多其他的选项和行为又对标规范。在 shell 的每次更改中,Bash 都会被用作比较(作为一等公民)。” 当某些东西被纳入 POSIX 规范时,无需提及 GNU 或任何其他影响,你可以简单地认为 POSIX 规范会受到许多方面的影响,GNU 只是其中之一。
|
||||
|
||||
共识是 POSIX 存在的意义所在。一群技术人员共同努力为了实现普世的规范,再分享给数以百计独特的开发人员,经由他们的赋能,从而实现软件的独立性,以及开发人员和用户的自由。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/document_free_access_cut_security.png?itok=ocvCv8G2 (Scissors cutting open access to files)
|
||||
[2]: https://pubs.opengroup.org/onlinepubs/9699919799.2018edition/
|
||||
[3]: https://stallman.org/
|
||||
[4]: https://www.gnu.org/philosophy/free-sw.en.html
|
||||
[5]: https://www.ieee.org/
|
||||
[6]: http://gnu.org
|
||||
[7]: https://www.gnu.org/prep/standards/html_node/Non_002dGNU-Standards.html
|
||||
[8]: https://pubs.opengroup.org/onlinepubs/9699919799/utilities/make.html
|
||||
[9]: https://pubs.opengroup.org/onlinepubs/9699919799/utilities/diff.html
|
||||
[10]: https://pubs.opengroup.org/onlinepubs/9699919799/utilities/patch.html
|
||||
[11]: https://pubs.opengroup.org/onlinepubs/9699919799/functions/strftime.html
|
||||
[12]: http://man7.org/linux/man-pages/man3/getline.3.html
|
||||
[13]: http://austingroupbugs.net/view.php?id=1263
|
||||
[14]: http://austingroupbugs.net/view.php?id=1216
|
||||
[15]: http://austingroupbugs.net/view.php?id=1208
|
||||
[16]: http://austingroupbugs.net/view.php?id=1220
|
||||
[17]: http://austingroupbugs.net/view.php?id=1218
|
||||
[18]: https://opensource.com/article/19/6/how-write-loop-bash
|
||||
[19]: https://opensource.com/article/18/5/gnu-parallel
|
||||
[20]: https://www.hpe.com/us/en/servers/hp-ux.html
|
||||
[21]: https://pubs.opengroup.org/onlinepubs/9699919799/utilities/awk.html
|
||||
[22]: https://invisible-island.net/mawk/
|
@ -0,0 +1,74 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Fix ‘E: The package cache file is corrupted, it has the wrong hash’ Error In Ubuntu)
|
||||
[#]: via: (https://www.ostechnix.com/fix-e-the-package-cache-file-is-corrupted-it-has-the-wrong-hash-error-in-ubuntu/)
|
||||
[#]: author: (sk https://www.ostechnix.com/author/sk/)
|
||||
|
||||
修复 Ubuntu 中 “E: The package cache file is corrupted, it has the wrong hash” 错误
|
||||
======
|
||||
|
||||
Today, I tried to update the repository lists on my Ubuntu 18.04 LTS system, but got the error message "**E: The package cache file is corrupted, it has the wrong hash**". Here is the command I ran in the terminal, along with its output:

```
$ sudo apt update
```

Sample output:

```
Hit:1 http://it-mirrors.evowise.com/ubuntu bionic InRelease
Hit:2 http://it-mirrors.evowise.com/ubuntu bionic-updates InRelease
Hit:3 http://it-mirrors.evowise.com/ubuntu bionic-backports InRelease
Hit:4 http://it-mirrors.evowise.com/ubuntu bionic-security InRelease
Hit:5 http://ppa.launchpad.net/alessandro-strada/ppa/ubuntu bionic InRelease
Hit:7 http://ppa.launchpad.net/leaeasy/dde/ubuntu bionic InRelease
Hit:8 http://ppa.launchpad.net/rvm/smplayer/ubuntu bionic InRelease
Ign:6 https://dl.bintray.com/etcher/debian stable InRelease
Get:9 https://dl.bintray.com/etcher/debian stable Release [3,674 B]
Fetched 3,674 B in 3s (1,196 B/s)
Reading package lists... Done
E: The package cache file is corrupted, it has the wrong hash
```

![][2]

The "The package cache file is corrupted, it has the wrong hash" error in Ubuntu

After some searching around, I found the way to fix this error.

If you ever run into it, don't panic. Just run the command below to fix it.

**Before running the command, double-check once more that it ends with `*`.** Including the trailing `*` matters: if you leave it out, the command deletes the **/var/lib/apt/lists/** directory itself, and there is no way to recover it. You have been warned!
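Since a mistyped glob here is destructive, you can check first what it actually matches. The `preview_glob` helper below is my own illustration (not part of the original article): it only prints the shell's expansion of the glob, without deleting anything.

```shell
# Hypothetical helper (not from the article): print the paths a glob
# would match, without removing anything. Defaults to apt's lists dir.
preview_glob() {
    target="${1:-/var/lib/apt/lists}"
    # echo merely prints the expansion; rm would receive exactly
    # these same paths.
    echo "$target"/*
}

preview_glob    # inspect what /var/lib/apt/lists/* expands to
```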
```
$ sudo rm -rf /var/lib/apt/lists/*
```

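If deleting the lists outright feels risky, a gentler variant (my own sketch — the function name and backup scheme are made up, the article itself only uses `rm`) is to move the directory aside and recreate it empty, so the old lists survive as a backup:

```shell
# Sketch of a reversible alternative to `rm -rf /var/lib/apt/lists/*`:
# move the directory to a timestamped backup and recreate it empty.
# The directory is a parameter so the logic is easy to try anywhere;
# on a real system you would run this as root on /var/lib/apt/lists
# (note: mkdir -p does not restore the original owner/permissions).
refresh_lists_dir() {
    lists_dir="${1:-/var/lib/apt/lists}"
    backup_dir="${lists_dir}.bak.$(date +%s)"
    mv "$lists_dir" "$backup_dir" || return 1
    mkdir -p "$lists_dir"
    echo "old lists kept in: $backup_dir"
}
```

Once `sudo apt update` succeeds afterwards, the backup directory can simply be deleted; if anything goes wrong, move it back.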
Then I updated the system again with:

```
$ sudo apt update
```

![][3]

All good now! Hope this helps.
--------------------------------------------------------------------------------

via: https://www.ostechnix.com/fix-e-the-package-cache-file-is-corrupted-it-has-the-wrong-hash-error-in-ubuntu/

Author: [sk][a]
Topic selection: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[2]: https://www.ostechnix.com/wp-content/uploads/2019/08/The-package-cache-file-is-corrupted.png
[3]: https://www.ostechnix.com/wp-content/uploads/2019/08/apt-update-command-output-in-Ubuntu.png