commit 5bb5badbc7
@@ -0,0 +1,191 @@
如何使用 GParted 实用工具缩放根分区
======

今天,我们将讨论磁盘分区。这是 Linux 中的一个热门话题,因为它允许用户重新调整处于活动状态的根分区的大小。

在这篇文章中,我们将教你如何使用 GParted 在 Linux 上缩放活动的根分区。

比如说,我们安装 Ubuntu 操作系统时并没有恰当地配置,系统只有一个 30 GB 的磁盘分区。我们需要安装另一个操作系统,因此想在其中划分出第二个分区。

虽然不建议重新调整活动分区,但我们还是要执行这个操作,因为没有其它方法来释放系统分区上的空间。

> 注意:在执行这个动作前,确保你备份了重要的数据,因为如果一些东西出错(例如,电源故障或你的系统重启),你还可以保留你的数据。

### GParted 是什么

[GParted][1] 是一个自由的分区管理器,它使你能够缩放、复制和移动分区,而不丢失数据。通过使用 GParted 的 Live 可启动镜像,我们可以使用 GParted 应用程序的所有功能。GParted Live 使你能够在 GNU/Linux 以及其它的操作系统上使用 GParted,例如 Windows 或 Mac OS X。

#### 1) 使用 df 命令检查磁盘空间利用率

我只是想使用 `df` 命令向你展示我的分区。`df` 命令的输出清楚地表明我仅有一个分区。
```
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 30G 3.4G 26.2G 16% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 487M 4.0K 487M 1% /dev
tmpfs 100M 844K 99M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 498M 152K 497M 1% /run/shm
none 100M 52K 100M 1% /run/user
```

#### 2) 使用 fdisk 命令检查磁盘分区

我将使用 `fdisk` 命令验证这一点。

```
$ sudo fdisk -l
[sudo] password for daygeek:

Disk /dev/sda: 33.1 GB, 33129218048 bytes
255 heads, 63 sectors/track, 4027 cylinders, total 64705504 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000473a3

Device Boot Start End Blocks Id System
/dev/sda1 * 2048 62609407 31303680 83 Linux
/dev/sda2 62611454 64704511 1046529 5 Extended
/dev/sda5 62611456 64704511 1046528 82 Linux swap / Solaris
```

#### 3) 下载 GParted live ISO 镜像

使用下面的命令下载 GParted Live 的 ISO 镜像。

```
$ wget https://downloads.sourceforge.net/gparted/gparted-live-0.31.0-1-amd64.iso
```
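
下载完成后,最好先校验一下镜像的完整性再使用。官方校验值可以在 GParted 的下载页面找到,这里的文件名只是沿用上面的示例:

```
$ sha256sum gparted-live-0.31.0-1-amd64.iso
```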

#### 4) 使用 GParted Live 安装介质启动你的系统

使用 GParted Live 安装介质(如烧录的 CD/DVD、USB 或 ISO 镜像)启动你的系统。你将获得类似于下面屏幕的输出。在这里选择 “GParted Live (Default settings)”,并敲击回车按键。

![][3]

#### 5) 键盘选择

默认情况下,它选中第二个选项,按下回车即可。

![][4]

#### 6) 语言选择

默认情况下,它选中 “33”(美国英语),按下回车即可。

![][5]

#### 7) 模式选择(图形用户界面或命令行)

默认情况下,它选中 “0”(图形用户界面模式),按下回车即可。

![][6]

#### 8) 加载 GParted Live 屏幕

现在,GParted Live 屏幕已经加载,它显示了我以前创建的分区列表。

![][7]

#### 9) 如何重新调整根分区大小

选择你想重新调整大小的根分区。在这里仅有一个分区,所以我将编辑这个分区,以便于安装另一个操作系统。

![][8]

为做到这一点,按下 “Resize/Move” 按钮来重新调整分区大小。

![][9]

现在,在第一个框中输入你想从这个分区中取出的大小。我想取出 10GB,所以输入了 “10240MB”,该对话框的其余部分保持默认值,然后点击 “Resize/Move” 按钮。

![][10]

它将再次要求你确认重新调整分区的大小,因为你正在编辑活动的系统分区,确认后点击 “Ok”。

![][11]

分区已经成功地从 30GB 缩小到 20GB,同时显示出 10GB 的未分配磁盘空间。

![][12]

最后点击 “Apply” 按钮来执行剩余的操作。

![][13]

`e2fsck` 是一个文件系统检查实用程序,可以自动修复文件系统中与硬盘相关的坏扇区、I/O 错误等问题。

![][14]

`resize2fs` 程序将重新调整 ext2、ext3 或 ext4 文件系统的大小。它可以被用于扩大或缩小一个位于设备上的未挂载的文件系统。

![][15]
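
如果你好奇 GParted 在这一步背后大致做了什么,下面是一个在 GParted Live 环境中手动缩小未挂载 ext4 文件系统的简单示意(假设分区为 `/dev/sda1`、目标大小为 20G,设备名和大小请按你的实际情况调整):

```
# 先强制检查文件系统,确保其处于一致状态
$ sudo e2fsck -f /dev/sda1
# 再将文件系统缩小到 20G(只能对未挂载的文件系统执行)
$ sudo resize2fs /dev/sda1 20G
```

实际操作中,GParted 还会随后调整分区表,使分区边界与缩小后的文件系统一致。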

`e2image` 程序将位于设备上的关键的 ext2、ext3 或 ext4 文件系统元数据保存到一个指定文件中。

![][16]

所有的操作完成后,关闭对话框。

![][17]

现在,我们可以看到 10GB 的未分配磁盘空间。

![][18]

重启系统来检查这一结果。

![][19]

#### 10) 检查剩余空间

重新登录系统,并使用 `parted` 命令来查看分区中可用的空间。是的,我可以看到这个分区上有 10GB 的未分配磁盘空间。

```
$ sudo parted /dev/sda print free
[sudo] password for daygeek:
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sda: 32.2GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number Start End Size Type File system Flags
 32.3kB 10.7GB 10.7GB Free Space
 1 10.7GB 32.2GB 21.5GB primary ext4 boot
```
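
如果接下来想在这块未分配空间上创建新分区,也可以直接用 `parted` 完成。下面是一个示意,起止位置请按照上面 `print free` 的实际输出调整:

```
$ sudo parted /dev/sda mkpart primary ext4 32.3kB 10.7GB
```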

--------------------------------------------------------------------------------

via: https://www.2daygeek.com/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility/

作者:[Magesh Maruthamuthu][a]
译者:[robsean](https://github.com/robsean)
校对:[wxy](https://github.com/wxy)
选题:[lujun9972](https://github.com/lujun9972)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.2daygeek.com/author/magesh/
[1]:https://gparted.org/
[3]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-1.png
[4]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-2.png
[5]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-3.png
[6]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-4.png
[7]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-5.png
[8]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-6.png
[9]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-7.png
[10]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-8.png
[11]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-9.png
[12]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-10.png
[13]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-11.png
[14]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-12.png
[15]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-13.png
[16]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-14.png
[17]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-15.png
[18]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-16.png
[19]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-17.png
@@ -0,0 +1,172 @@
BootISO:从 ISO 文件中创建一个可启动的 USB 设备
======

为了安装操作系统,我们中的大多数人(包括我)经常从 ISO 文件中创建一个可启动的 USB 设备。为达到这个目的,在 Linux 中有很多自由可用的应用程序。甚至在过去我们写了几篇介绍这种实用程序的文章。

每个人使用不同的应用程序,每个应用程序有它们自己的特色和功能。在这些应用程序中,一些应用程序属于 CLI 程序,一些应用程序则是 GUI 的。

今天,我们将讨论一个名为 BootISO 的类似实用工具。它是一个简单的 bash 脚本,允许用户从 ISO 文件中创建一个可启动的 USB 设备。

很多 Linux 管理员使用 `dd` 命令来创建可启动的 USB。这是一个著名的原生方法,但与此同时,它也是一个非常危险的命令。因此,当你用 `dd` 命令执行一些操作时要特别小心。
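
作为对比,下面是用 `dd` 直接写入 ISO 的传统做法的一个示意(假设 USB 设备是 `/dev/sdd`;请务必先用 `lsblk` 确认设备名,写错设备会毁掉上面的数据):

```
$ sudo dd if=/path/to/file.iso of=/dev/sdd bs=4M status=progress oflag=sync
```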

建议阅读:

- [Etcher:从一个 ISO 镜像中创建一个可启动的 USB 驱动器 & SD 卡的简单方法][1]
- [在 Linux 上使用 dd 命令来从一个 ISO 镜像中创建一个可启动的 USB 驱动器][2]

### BootISO 是什么

[BootISO][3] 是一个用 bash 编写的简单脚本,允许用户安全地从一个 ISO 文件中创建一个可启动的 USB 设备。

它不提供任何图形用户界面,但是提供了大量的选项,可以让初学者顺利地在 Linux 上创建一个可启动的 USB 设备。因为它是一个智能工具,能自动地选择连接到系统上的 USB 设备。

当系统连接有多个 USB 设备时,它将打印出列表。当你手动选择了另一个硬盘而不是 USB 时,它将安全地退出,而不会在硬盘上写入任何东西。

这个脚本也会检查依赖关系,并提示用户安装。它可以与所有的软件包管理器一起工作,例如 apt-get、yum、dnf、pacman 和 zypper。

### BootISO 的功能

* 它检查选择的 ISO 是否是正确的 mime 类型。如果不是,那么退出。
* 如果你选择除 USB 设备以外的任何其它的磁盘(本地硬盘),BootISO 将自动地退出。
* 当你有多个驱动器时,BootISO 允许用户选择想要使用的 USB 驱动器。
* 在擦除和分区 USB 设备前,BootISO 会提示用户确认。
* BootISO 将正确地处理来自任何命令的错误,并退出。
* BootISO 在遇到问题退出时将调用一个清理例行程序。

### 如何在 Linux 中安装 BootISO

在 Linux 中安装 BootISO 有几个可用的方法,但是,我建议用户使用下面的方法安装。

```
$ curl -L https://git.io/bootiso -O
$ chmod +x bootiso
$ sudo mv bootiso /usr/local/bin/
```
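
安装完成后,可以先确认脚本能正常运行(`-v`/`--version` 选项在下文的帮助信息中有说明):

```
$ bootiso --version
```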

一旦 BootISO 已经安装,运行下面的命令来列出可用的 USB 设备。

```
$ bootiso -l

Listing USB drives available in your system:
NAME HOTPLUG SIZE STATE TYPE
sdd 1 32G running disk
```

如果你仅有一个 USB 设备,那么简单地运行下面的命令来从一个 ISO 文件中创建一个可启动的 USB 设备。

```
$ bootiso /path/to/iso-file
```

```
$ bootiso /opt/iso_images/archlinux-2018.05.01-x86_64.iso
Granting root privileges for bootiso.
Listing USB drives available in your system:
NAME HOTPLUG SIZE STATE TYPE
sdd 1 32G running disk
Autoselecting `sdd' (only USB device candidate)
The selected device `/dev/sdd' is connected through USB.
Created ISO mount point at `/tmp/iso.vXo'
`bootiso' is about to wipe out the content of device `/dev/sdd'.
Are you sure you want to proceed? (y/n)>y
Erasing contents of /dev/sdd...
Creating FAT32 partition on `/dev/sdd1'...
Created USB device mount point at `/tmp/usb.0j5'
Copying files from ISO to USB device with `rsync'
Synchronizing writes on device `/dev/sdd'
`bootiso' took 250 seconds to write ISO to USB device with `rsync' method.
ISO succesfully unmounted.
USB device succesfully unmounted.
USB device succesfully ejected.
You can safely remove it !
```

当你有多个 USB 设备时,可以使用 `--device` 选项指明你的设备名称。

```
$ bootiso -d /dev/sde /opt/iso_images/archlinux-2018.05.01-x86_64.iso
```

默认情况下,BootISO 使用 `rsync` 命令来执行所有的动作,如果你想使用 `dd` 命令代替它,使用下面的格式。

```
$ bootiso --dd -d /dev/sde /opt/iso_images/archlinux-2018.05.01-x86_64.iso
```

如果你想跳过 mime 类型检查,BootISO 实用程序带有下面的选项。

```
$ bootiso --no-mime-check -d /dev/sde /opt/iso_images/archlinux-2018.05.01-x86_64.iso
```

为 BootISO 添加下面的选项来跳过在擦除和分区 USB 设备前的用户确认。

```
$ bootiso -y -d /dev/sde /opt/iso_images/archlinux-2018.05.01-x86_64.iso
```

将 `-a` 选项与 `-y` 选项一起使用,可以启用自动选择 USB 设备。

```
$ bootiso -y -a /opt/iso_images/archlinux-2018.05.01-x86_64.iso
```

要了解更多的 BootISO 选项,运行下面的命令。

```
$ bootiso -h
Create a bootable USB from any ISO securely.
Usage: bootiso [...]

Options

-h, --help, help Display this help message and exit.
-v, --version Display version and exit.
-d, --device Select block file as USB device.
 If is not connected through USB, `bootiso' will fail and exit.
 Device block files are usually situated in /dev/sXX or /dev/hXX.
 You will be prompted to select a device if you don't use this option.
-b, --bootloader Install a bootloader with syslinux (safe mode) for non-hybrid ISOs. Does not work with `--dd' option.
-y, --assume-yes `bootiso' won't prompt the user for confirmation before erasing and partitioning USB device.
 Use at your own risks.
-a, --autoselect Enable autoselecting USB devices in conjunction with -y option.
 Autoselect will automatically select a USB drive device if there is exactly one connected to the system.
 Enabled by default when neither -d nor --no-usb-check options are given.
-J, --no-eject Do not eject device after unmounting.
-l, --list-usb-drives List available USB drives.
-M, --no-mime-check `bootiso' won't assert that selected ISO file has the right mime-type.
-s, --strict-mime-check Disallow loose application/octet-stream mime type in ISO file.
-- POSIX end of options.
--dd Use `dd' utility instead of mounting + `rsync'.
 Does not allow bootloader installation with syslinux.
--no-usb-check `bootiso' won't assert that selected device is a USB (connected through USB bus).
 Use at your own risks.

Readme

Bootiso v2.5.2.
Author: Jules Samuel Randolph
Bugs and new features: https://github.com/jsamr/bootiso/issues
If you like bootiso, please help the community by making it visible:
* star the project at https://github.com/jsamr/bootiso
* upvote those SE post: https://goo.gl/BNRmvm https://goo.gl/YDBvFe
```

--------------------------------------------------------------------------------

via: https://www.2daygeek.com/bootiso-a-simple-bash-script-to-securely-create-a-bootable-usb-device-in-linux-from-iso-file/

作者:[Prakash Subramanian][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[robsean](https://github.com/robsean)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.2daygeek.com/author/prakash/
[1]:https://www.2daygeek.com/etcher-easy-way-to-create-a-bootable-usb-drive-sd-card-from-an-iso-image-on-linux/
[2]:https://www.2daygeek.com/create-a-bootable-usb-drive-from-an-iso-image-using-dd-command-on-linux/
[3]:https://github.com/jsamr/bootiso
@@ -1,8 +1,8 @@
 [#]: collector: (lujun9972)
 [#]: translator: (lujun9972)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10977-1.html)
 [#]: subject: (Get desktop notifications from Emacs shell commands ·)
 [#]: via: (https://blog.hoetzel.info/post/eshell-notifications/)
 [#]: author: (Jürgen Hötzel https://blog.hoetzel.info)
@@ -10,11 +10,11 @@
 让 Emacs shell 命令发送桌面通知
 ======

-我总是使用 [Eshell][1] 来与操作系统进行交互,因为它与 Emacs 无缝整合、支持处理 (远程) [TRAMP][2] 文件 而且在 Windows 上也能工作得很好。
+我总是使用 [Eshell][1] 来与操作系统进行交互,因为它与 Emacs 无缝整合、支持处理 (远程) [TRAMP][2] 文件,而且在 Windows 上也能工作得很好。

-启动 shell 命令后 (比如耗时严重的构建任务) 我经常会由于切换 buffer 而忘了追踪任务的运行状态。
+启动 shell 命令后 (比如耗时严重的构建任务) 我经常会由于切换缓冲区而忘了追踪任务的运行状态。

-多亏了 Emacs 的 [hooks][3] 机制,你可以配置 Emacs 在某个外部命令完成后调用一个 elisp 函数。
+多亏了 Emacs 的 [钩子][3] 机制,你可以配置 Emacs 在某个外部命令完成后调用一个 elisp 函数。

 我使用 [John Wiegleys][4] 所编写的超棒的 [alert][5] 包来发送桌面通知:

@@ -33,7 +33,7 @@
 (add-hook 'eshell-kill-hook #'eshell-command-alert)
 ```

-[alert][5] 的规则可以用程序来设置。就我这个情况来看,我只需要当对应的 buffer 不可见时被通知:
+[alert][5] 的规则可以用程序来设置。就我这个情况来看,我只需要当对应的缓冲区不可见时得到通知:

 ```
 (alert-add-rule :status '(buried) ;only send alert when buffer not visible
@@ -44,7 +44,7 @@

 这甚至对于 [TRAMP][2] 也一样生效。下面这个截屏展示了失败的 `make` 命令产生的 Gnome 桌面通知。

-![。./。./img/eshell.png][6]
+![../../img/eshell.png][6]

@@ -53,7 +53,7 @@ via: https://blog.hoetzel.info/post/eshell-notifications/
 作者:[Jürgen Hötzel][a]
 选题:[lujun9972][b]
 译者:[lujun9972](https://github.com/lujun9972)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)

 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

@@ -64,4 +64,4 @@ via: https://blog.hoetzel.info/post/eshell-notifications/
 [3]: https://www.gnu.org/software/emacs/manual/html_node/emacs/Hooks.html (hooks)
 [4]: https://github.com/jwiegley (John Wiegleys)
 [5]: https://github.com/jwiegley/alert (alert)
-[6]: https://blog.hoetzel.info/img/eshell.png (../../img/eshell.png)
+[6]: https://blog.hoetzel.info/img/eshell.png
@@ -0,0 +1,64 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10984-1.html)
[#]: subject: (Running LEDs in reverse could cool computers)
[#]: via: (https://www.networkworld.com/article/3386876/running-leds-in-reverse-could-cool-computers.html#tk.rss_all)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)

反向运行 LED 能够冷却计算机
======

> 电子产品的小型化正在触及其极限,部分原因在于热量管理。许多人现在都在积极地尝试解决这个问题。其中一种正在探索的途径是反向运行的 LED。

![monsitj / Getty Images][1]

寻找更有效的冷却计算机的方法,几乎与渴望发现更好的电池化学成分一样,在科学家的研究日程中处于重要位置。

更多的冷却手段对于降低成本至关重要。冷却技术也使得在较小的空间中可以进行更强大的处理,其有限的处理能力应该用于进行计算而不是浪费在发热上。冷却技术可以阻止热量引起的故障,从而延长部件的使用寿命,并且可以促进环保的数据中心 —— 更少的热量意味着对环境的影响更小。

如何从微处理器中消除热量是科学家们一直在探索的一个方向,他们认为他们已经提出了一个简单而不寻常、且反直觉的解决方案。他们说可以运行一个发光二极管(LED)的变体,将其电极反转,迫使该元件表现得像处于异常低温下工作一样。如果将其置于较热的电子设备旁边,然后引入纳米级间隙,就可以使 LED 吸收热量。

“一旦 LED 反向偏置,它就会像一个非常低温的物体一样,吸收光子,”密歇根大学机械工程教授埃德加·梅霍夫在宣布了这一突破的[新闻稿][4]中说。“与此同时,该间隙可防止热量返回,从而产生冷却效果。”

研究人员表示,LED 和相邻的电子设备(在这种情况下是热量计,通常用于测量热能)必须非常接近。他们说他们已经能够证明达到了每平方米 6 瓦的冷却功率。他们解释说,这差不多是地球表面所接受到的阳光的能量。

物联网(IoT)设备和智能手机可能是最终将受益于这种 LED 改造的电子产品。这两种设备都需要在更小的空间中容纳更多的计算能力。

“可以从微处理器中移除的热量,开始限制给定空间内所能容纳的计算能力,”密歇根大学的公告说。

### 材料科学和冷却计算机

[我之前写过关于新形式的计算机冷却的文章][5]。源自材料科学的新奇材料是正在探索的想法之一。美国能源部劳伦斯伯克利国家实验室表示,钠铋(Na3Bi)可用于晶体管设计。这种新物质带电荷,而且重要的是具有可调节性;但是,它不需要像超导体那样进行冷却。

事实上,这正是超导体的一个问题。不幸的是,它们比大多数电子设备需要更多的冷却 —— 需要通过极端冷却来消除电阻。

另外,[康斯坦茨大学的德国研究人员][6]表示他们很快将拥有没有废热的超导体驱动的计算机。他们计划使用电子自旋 —— 一种新的电子物理维度,可以提高效率。该大学去年在一份新闻稿中表示,这种方法“显著降低了计算中心的能耗”。

另一种减少热量的方法可能是用嵌入在微处理器上的[螺旋和回路来取代传统的散热器][7]。宾汉姆顿大学的科学家们表示,印在芯片上的微小通道可以为冷却剂提供单独的通道。

康斯坦茨大学说:“半导体技术的小型化正在接近其物理极限。”热管理现在被科学家提上了议事日程。这是“小型化的一大挑战”。

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3386876/running-leds-in-reverse-could-cool-computers.html#tk.rss_all

作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/big_data_center_server_racks_storage_binary_analytics_by_monsitj_gettyimages-944444446_3x2-100787357-large.jpg
[2]: https://www.networkworld.com/article/3242807/data-center/top-10-data-center-predictions-idc.html#nww-fsb
[3]: https://www.networkworld.com/newsletters/signup.html#nww-fsb
[4]: https://news.umich.edu/running-an-led-in-reverse-could-cool-future-computers/
[5]: https://www.networkworld.com/article/3326831/computers-could-soon-run-cold-no-heat-generated.html
[6]: https://www.uni-konstanz.de/en/university/news-and-media/current-announcements/news/news-in-detail/Supercomputer-ohne-Abwaerme/
[7]: https://www.networkworld.com/article/3322956/chip-cooling-breakthrough-will-reduce-data-center-power-costs.html
[8]: https://www.facebook.com/NetworkWorld/
[9]: https://www.linkedin.com/company/network-world
@@ -1,8 +1,8 @@
 [#]: collector: (lujun9972)
 [#]: translator: (chen-ni)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10982-1.html)
 [#]: subject: (10 Places Where You Can Buy Linux Computers)
 [#]: via: (https://itsfoss.com/get-linux-laptops/)
 [#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
@@ -10,7 +10,7 @@
 可以买到 Linux 电脑的 10 个地方
 ======

-_**你在找 Linux 笔记本吗? 这里列出一些出售 Linux 电脑或者是专注于 Linux 系统的电商。**_
+> 你在找 Linux 笔记本吗? 这里列出一些出售 Linux 电脑或者是专注于 Linux 系统的电商。

 如今市面上几乎所有的电脑(苹果除外)都预装了 Windows 系统。Linux 使用者的惯常做法就是买一台这样的电脑,然后要么删除 Windows 系统并安装 Linux,要么[安装 Linux 和 Windows 的双系统][1]。

@@ -30,13 +30,13 @@

 在揭晓这个提供预装 Linux 电脑的商家的清单之前,需要先声明一下。

-请根据你的独立决策购买。我在这里只是简单地列出一些售卖 Linux 电脑的商家,并不保证他们的产品质量,售后服务等等这些事情。
+请根据你的独立决策购买。我在这里只是简单地列出一些售卖 Linux 电脑的商家,并不保证他们的产品质量、售后服务等等这些事情。

 这也并不是一个排行榜。清单并不是按照某个特定次序排列的,每一项前面的数字只是为了方便计数,而并不代表名次。

 让我们看看你可以在哪儿买到预装 Linux 的台式机或者笔记本吧。

-#### 1\. 戴尔
+#### 1、戴尔

 ![戴尔 XPS Ubuntu | 图片所有权: Lifehacker][3]

@@ -50,15 +50,15 @@

 所以,去戴尔的官网上搜索关键字 “Ubuntu” 来获取预装 Ubuntu 的产品的信息吧。

-**支持范围** : 世界上大部分地区。
+**支持范围**:世界上大部分地区。

-[戴尔][5]
+- [戴尔][5]

-#### 2\. System76
+#### 2、System76

 [System76][6] 是 Linux 计算机世界里的一个响亮的名字。这家总部设在美国的企业专注于运行 Linux 的高端技术设备。他们的目标用户群体是软件开发者。

-最初,System76 在自己的机器上提供的是 Ubuntu 系统。2017 年,他们发布了属于自己的 Linux 发行版,基于 Ubuntu 的 [Pop!_OS][7]。从此以后,Pop!_OS 就是他们机器上的默认操作系统了,但是仍然保留了 Ubuntu 这个选择。
+最初,System76 在自己的机器上提供的是 Ubuntu 系统。2017 年,他们发布了属于自己的 Linux 发行版,基于 Ubuntu 的 [Pop!\_OS][7]。从此以后,Pop!\_OS 就是他们机器上的默认操作系统了,但是仍然保留了 Ubuntu 这个选择。

 除了性能之外,System76 还格外重视设计。他们的 [Thelio 系列台式机][8] 采用纯手工木制设计。

@@ -68,99 +68,95 @@

 值得一提的是,System76 在美国制造他们的电脑,而没有使用中国大陆或者台湾这种常规的选择。也许是出于这个原因,他们产品的售价较为高昂。

-**支持范围** : 美国以及其它 60 个国家。在美国境外可能会有额外的关税。更多信息见[这里][13].
+**支持范围**:美国以及其它 60 个国家。在美国境外可能会有额外的关税。更多信息见[这里][13].

-[System76][6]
+- [System76][6]

-#### 3\. Purism
+#### 3、Purism

 Purism 是一个总部设在美国的企业,以提供确保数据安全和隐私的产品和服务为荣。这就是为什么 Purism 称自己为 “效力社会的公司”。

 [][14]

 Purism 是从一个众筹项目开始的,该项目旨在创造一个几乎没有任何专有软件的高端开源笔记本。2015年,从这个 [成功的 25 万美元的众筹项目][15] 中诞生了 [Librem 15][16] 笔记本。

 ![Purism Librem 13][17]

 后来 Purism 发布了一个 13 英寸的版本 [Librem 13][18]。Purism 还开发了一个自己的 Linux 发行版 [Pure OS][19],该发行版非常注重隐私和安全问题。

-[Pure OS 在台式设备和移动设备上都可以运行][20],并且是 Librem 笔记本和[Librem 5 Linux 手机] 的默认操纵系统。
+[Pure OS 在台式设备和移动设备上都可以运行][20],并且是 Librem 笔记本和[Librem 5 Linux 手机][21] 的默认操纵系统。

-Purism 的零部件来自中国大陆、台湾、日本以及美国,并在美国完成组装。他们的所有设备都有可以用来关闭麦克风、摄像头、无线连接或者是蓝牙的硬件开关。
+Purism 的零部件来自中国大陆、台湾、日本以及美国,并在美国完成组装。他们的所有设备都有可以直接关闭的硬件开关,用来关闭麦克风、摄像头、无线连接或者是蓝牙。

-**支持范围** : 全世界范围国际免邮。可能需要支付额外的关税。
+**支持范围**:全世界范围国际免邮。可能需要支付额外的关税。

-[Purism][22]
+- [Purism][22]

-#### 4\. Slimbook
+#### 4、Slimbook

-Slimbook 是一个总部设在西班牙的 Linux 电脑销售商. Slimbook 在发行了 [第一款 KDE 品牌笔记本][23]之后成为了人们关注的焦点。
+Slimbook 是一个总部设在西班牙的 Linux 电脑销售商。Slimbook 在发行了 [第一款 KDE 品牌笔记本][23]之后成为了人们关注的焦点。

-他们的产品不仅限于KDE Neon。他们还提供 Ubuntu,Kubuntu,Ubuntu MATE,Linux Mint 以及包括 [Lliurex][24] 和 [Max][25]在内的西班牙发行版。您也可以选择 Windows(需要额外付费)或者不预装任何操作系统。
+他们的产品不仅限于 KDE Neon。他们还提供 Ubuntu、Kubuntu、Ubuntu MATE、Linux Mint 以及包括 [Lliurex][24] 和 [Max][25] 这样的西班牙发行版。你也可以选择 Windows(需要额外付费)或者不预装任何操作系统。

-Slimbook 有众多 Linux 笔记本,台式机和迷你电脑可供选择。他们另外一个非常不错的产品是一个类似于 iMac 的 24 英寸 [拥有内置 CPU 的曲面显示屏][26]。
+Slimbook 有众多 Linux 笔记本、台式机和迷你电脑可供选择。他们另外一个非常不错的产品是一个类似于 iMac 的 24 英寸 [拥有内置 CPU 的曲面显示屏][26]。

 ![Slimbook Kymera Aqua 水冷 Linux 电脑][27]

 想要一台水冷 Linux 电脑吗?Slimbook 的 [Kymera Aqua][28] 是合适之选。

-**支持范围** : 全世界范围,不过在邮费和关税上都可能产生额外费用。
+**支持范围**:全世界范围,不过在邮费和关税上都可能产生额外费用。

-[Slimbook][29]
+- [Slimbook][29]

-#### 5\. TUXEDO
+#### 5、TUXEDO

 作为这个 Linux 电脑销售商清单里的另一个欧洲成员,[TUXEDO][30] 总部设在德国,主要服务德国用户,其次是欧洲用户。

-TUXEDO 只使用 Linux 系统,产品都是“德国制造”,并且提供 5 年保修和终身售后支持。
+TUXEDO 只使用 Linux 系统,产品都是“德国制造”,并且提供 5 年保修和终生售后支持。

 TUXEDO 在 Linux 系统的硬件适配上下了很大功夫。并且如果你遇到了麻烦或者是想从头开始,可以通过系统恢复选项,自动恢复出厂设置。

 ![Tuxedo 电脑支持众多发行版][31]

-TUXEDO 有许多 Linux 笔记本、台式机和迷你电脑产品可供选择。他们还同时拥有 Intel 和 AMD 的处理器。 除了电脑,TUXEDO 还提供一系列 Linux 支持的附件,比如扩展坞、DVD和蓝光刻录机、移动电源以及其它外围设备。
+TUXEDO 有许多 Linux 笔记本、台式机和迷你电脑产品可供选择。他们还同时支持 Intel 和 AMD 的处理器。除了电脑,TUXEDO 还提供一系列 Linux 支持的附件,比如扩展坞、DVD 和蓝光刻录机、移动电源以及其它外围设备。

-**支持范围** : 150 欧元以上的订单在德国和欧洲范围内免邮。欧洲外国家会有额外的运费和关税。更多信息见 [这里][32].
+**支持范围**:150 欧元以上的订单在德国和欧洲范围内免邮。欧洲外国家会有额外的运费和关税。更多信息见 [这里][32].

-[TUXEDO][33]
+- [TUXEDO][33]

-#### 6\. Vikings
+#### 6、Vikings

 [Vikings][34] 的总部设在德国(而不是斯堪的纳维亚半岛,哈哈)。Vikings 拥有[自由软件基金会][35]的认证,专注于自由友好的硬件。

 ![Vikings 的产品经过了自由软件基金会认证][36]

-Vikings 的 Linux 笔记本和台式机使用的是 [coreboot][37] 或者 [Libreboot][38],而不是像 BIOS 和 UEFI 这样的专有启动系统。你还可以购买 [服务器硬件][39],这款产品不运行任何专有软件。
+Vikings 的 Linux 笔记本和台式机使用的是 [coreboot][37] 或者 [Libreboot][38],而不是像 BIOS 和 UEFI 这样的专有启动系统。你还可以购买不运行任何专有软件的 [硬件服务器][39]。

 Vikings 还有包括路由器、扩展坞等在内的其它配件。他们的产品都是在德国组装完成的。

-**支持范围** : 全世界(除了朝鲜)。非欧洲国家可能会有额外关税费用。更多信息见[这里][40].
+**支持范围**:全世界(除了朝鲜)。非欧洲国家可能会有额外关税费用。更多信息见[这里][40]。

-[Vikings][41]
+- [Vikings][41]

-#### 7\. Ubuntushop.be
+#### 7、Ubuntushop.be

-不!尽管名字里有Ubuntu,但这不是官方的 Ubuntu 商店。Ubuntushop 总部位于比利时,最初是销售安装了 Ubuntu 的电脑。
+不不!尽管名字里有 Ubuntu,但这不是官方的 Ubuntu 商店。Ubuntushop 总部位于比利时,最初是销售安装了 Ubuntu 的电脑。

-如今,你可以买到预装了包括 Mint、Manjaro、 elementrayOS 在内的 Linux 发行版的笔记本电脑。你还可以要求所购买的设备上安装你所选择的发行版。
+如今,你可以买到预装了包括 Mint、Manjaro、elementrayOS 在内的 Linux 发行版的笔记本电脑。你还可以要求所购买的设备上安装你所选择的发行版。

 ![][42]

 Ubuntushop 的一个独特之处在于,它的所有电脑都带有默认的 Tails OS live 选项。即使你安装了某个其它的 Linux 发行版作为日常使用的系统,也随时可以选择启动到 Tails OS(不需要使用 live USB)。[Tails OS][43] 是一个基于 Debian 的发行版,它在用户注销后会删除所有使用痕迹,并且在默认情况下使用 Tor 网络。

 [][44]

 和此列表中的许多其他重要玩家不同,我觉得 Ubuntushop 所提供的更像是一种“家庭工艺”。商家手动组装一个电脑,安装 Linux 然后卖给你。不过他们也在一些可选项上下了功夫,比如说轻松的重装系统,拥有自己的云服务器等等。

-你可以找一台旧电脑快递给他们,同时在他们的网站上买一台新的 Linux 电脑,他们就会在你的旧电脑上安装 [轻量级 Linux][45] 系统然后快递回来,这样你这台旧电脑就可以重新投入使用了。
+你可以找一台旧电脑快递给他们,就可以变成一台新安装 Linux 的电脑,他们就会在你的旧电脑上安装 [轻量级 Linux][45] 系统然后快递回来,这样你这台旧电脑就可以重新投入使用了。

-**支持范围** : 比利时以及欧洲的其它地区。
+**支持范围**:比利时以及欧洲的其它地区。

-[Ubuntushop.be][46]
+- [Ubuntushop.be][46]

-#### 8\. Minifree
+#### 8、Minifree

-[Minifree][47],自由部门的缩写,是一家注册在英格兰的公司。
+[Minifree][47],是<ruby>自由部门<rt>Ministry of Freedom</rt></ruby>的缩写,他们是一家注册在英格兰的公司。

 你可以猜到 Minifree 非常注重自由。他们提供安全以及注重隐私的电脑,预装 [Libreboot][38] 而不是 BIOS 或者 UEFI。

@@ -170,15 +166,15 @@

 和这个清单中许多其它 Linux 笔记本销售商不同,Minifree 的电脑并不是特别贵。花 200 欧元就可以买到一台预装了 Libreboot 和 [Trisquel GNU/Linux][50] 的 Linux 电脑。

-除了笔记本以外,Minifree 还有一系列的配件,比如 Libre 路由器、平板电、扩展坞、电池、键盘、鼠标等等。
+除了笔记本以外,Minifree 还有一系列的配件,比如 Libre 路由器、平板电脑、扩展坞、电池、键盘、鼠标等等。

 如果你和 [Richard Stallman][51] 一样,希望只运行 100% 自由的软件的话,Minifree 就再适合不过了。

-**支持范围** : 全世界。运费信息见 [这里][52]。
+**支持范围**:全世界。运费信息见 [这里][52]。

-[Minifree][47]
+- [Minifree][47]

-#### 9\. Entroware
+#### 9、Entroware

 [Entroware][53] 是另一个总部设在英国的销售商,专注基于 Linux 系统的笔记本、台式机和服务器。

@@ -190,9 +186,9 @@

 支持范围: 英国、爱尔兰、法国、德国、意大利、西班牙。

-[Entroware][58]
+- [Entroware][58]

-#### 10\. Juno
+#### 10、Juno

 这是我们清单上的一个新的 Linux 笔记本销售商。Juno 的总部同样设在英国,提供预装 Linux 的电脑。可选择的 Linux 发行版包括 elementary OS、Ubuntu 和 Solus OS。

@@ -204,7 +200,7 @@

 支持范围:英国、美国、加拿大、墨西哥、南美和欧洲的大部分地区、新西兰、亚洲和非洲的某些地区。更多信息见 [这里][62]。

-[Juno Computers][63]
+- [Juno Computers][63]

 #### 荣誉奖

@@ -216,10 +212,9 @@

 * [Linux Certified][67]
 * [Think Penguin][68]

-
 包括宏碁和联想在内的其它主流电脑生产商可能也有基于 Linux 系统的产品,所以你不妨也查看一下他们的产品目录吧。

-你有没有买过一台 Linux 电脑?在哪儿买的?使用体验怎么样?Linux 笔记本值不值得买?分享一下你的想法吧。
+你有没有买过 Linux 电脑?在哪儿买的?使用体验怎么样?Linux 笔记本值不值得买?分享一下你的想法吧。

@@ -227,8 +222,8 @@ via: https://itsfoss.com/get-linux-laptops/

 作者:[Abhishek Prakash][a]
 选题:[lujun9972][b]
-译者:[译者ID](https://github.com/chen-ni)
-校对:[校对者ID](https://github.com/校对者ID)
+译者:[chen-ni](https://github.com/chen-ni)
+校对:[wxy](https://github.com/wxy)

 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
published/20190606 How Linux can help with your spelling.md
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (Modrisco)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10986-1.html)
|
||||
[#]: subject: (How Linux can help with your spelling)
|
||||
[#]: via: (https://www.networkworld.com/article/3400942/how-linux-can-help-with-your-spelling.html)
|
||||
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
|
||||
|
||||
如何用 Linux 帮助你拼写
|
||||
======
|
||||
|
||||
> 无论你是纠结一个难以理解的单词,还是在将报告发给老板之前再检查一遍,Linux 都可以帮助你解决拼写问题。
|
||||
|
||||

|
||||
|
||||
Linux 为数据分析和自动化提供了各种工具,它也帮助我们解决了一个一直都在纠结的问题 —— 拼写!无论在写每周报告时努力拼出一个单词,还是在提交商业计划书之前想要借助计算机的“眼睛”来找出你的拼写错误。现在我们来看一下它是如何帮助你的。
|
||||
|
||||
### look
|
||||
|
||||
`look` 是其中一款工具。如果你知道一个单词的开头,你就可以用这个命令来获取以这些字母开头的单词列表。除非提供了替代词源,否则 `look` 将使用 `/usr/share/dict/words` 中的内容来为你标识单词。这个文件有数十万个单词,可以满足我们日常使用的大多数英语单词的需要,但是它可能不包含我们计算机领域中的一些人倾向于使用的更加生僻的单词,如 zettabyte。
|
||||
|
||||
`look` 命令的语法非常简单。输入 `look word` ,它将遍历单词文件中的所有单词并找到匹配项。
|
||||
|
||||
```
|
||||
$ look amelio
|
||||
ameliorable
|
||||
ameliorableness
|
||||
ameliorant
|
||||
ameliorate
|
||||
ameliorated
|
||||
ameliorates
|
||||
ameliorating
|
||||
amelioration
|
||||
ameliorations
|
||||
ameliorativ
|
||||
ameliorative
|
||||
amelioratively
|
||||
ameliorator
|
||||
amelioratory
|
||||
```
|
||||
|
||||
如果你遇到系统中单词列表中未包含的单词,将无法获得任何输出。
|
||||
|
||||
```
|
||||
$ look zetta
|
||||
$
|
||||
```
|
||||
|
||||
如果你没有看到你所希望出现的单词,也不要绝望。你可以在你的单词文件中添加单词,甚至引用一个完全不同的单词列表,在网上找一个或者干脆自己创建一个。你甚至不必将添加的单词放在按字母顺序排列的正确位置;只需将其添加到文件的末尾即可。但是,你必须以 root 用户身份执行此操作。例如(要注意 `>>`!):
|
||||
|
||||
```
|
||||
# echo “zettabyte” >> /usr/share/dict/words
|
||||
```
|
||||
|
||||
当使用不同的单词列表时,例如这个例子中的 “jargon” ,你只需要添加文件的名称。如果不采用默认文件时,请使用完整路径。
|
||||
|
||||
```
|
||||
$ look nybble /usr/share/dict/jargon
|
||||
nybble
|
||||
nybbles
|
||||
```
|
||||
|
||||
`look` 命令大小写不敏感,因此你不必关心要查找的单词是否应该大写。
|
||||
|
||||
```
|
||||
$ look zet
|
||||
ZETA
|
||||
Zeta
|
||||
zeta
|
||||
zetacism
|
||||
Zetana
|
||||
zetas
|
||||
Zetes
|
||||
zetetic
|
||||
Zethar
|
||||
Zethus
|
||||
Zetland
|
||||
Zetta
|
||||
```
|
||||
|
||||
当然,不是所有的单词列表都是一样的。一些 Linux 发行版在单词文件中提供了*多得多*的内容。你的文件中可能有十万或者更多倍的单词。
|
||||
|
||||
在我的一个 Linux 系统中:
|
||||
|
||||
```
|
||||
$ wc -l /usr/share/dict/words
|
||||
102402 /usr/share/dict/words
|
||||
```
|
||||
|
||||
在另一个系统中:
|
||||
|
||||
```
|
||||
$ wc -l /usr/share/dict/words
|
||||
479828 /usr/share/dict/words
|
||||
```
|
||||
|
||||
请记住,`look` 命令只适用于通过单词开头查找,但如果你不想从单词的开头查找,还可以使用其他选项。
|
||||
|
||||
### grep
|
||||
|
||||
我们深爱的 `grep` 命令像其他工具一样可以从一个单词文件中选出单词。如果你正在找以某些字母开头或结尾的单词,使用 `grep` 命令是自然而然的事情。它可以通过单词的开头、结尾或中间部分来匹配单词。系统中的单词文件可以像使用 `look` 命令时在 `grep` 命令中轻松使用。不过唯一的缺点是你需要指定文件,这一点与 `look` 不尽相同。
|
||||
|
||||
在单词的开头前加上 `^`:
|
||||
|
||||
```
|
||||
$ grep ^terra /usr/share/dict/words
|
||||
terrace
|
||||
terrace's
|
||||
terraced
|
||||
terraces
|
||||
terracing
|
||||
terrain
|
||||
terrain's
|
||||
terrains
|
||||
terrapin
|
||||
terrapin's
|
||||
terrapins
|
||||
terraria
|
||||
terrarium
|
||||
terrarium's
|
||||
terrariums
|
||||
```
|
||||
|
||||
在单词的结尾后加上 `$`:
|
||||
|
||||
```
|
||||
$ grep bytes$ /usr/share/dict/words
|
||||
bytes
|
||||
gigabytes
|
||||
kilobytes
|
||||
megabytes
|
||||
terabytes
|
||||
```
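
上文提到 `grep` 也可以匹配单词的中间部分,作为补充,这里给出一个简单的示例(输出内容取决于你的单词文件,这里假设前面已把 zettabyte 加入了文件):

```
$ grep zett /usr/share/dict/words
zettabyte
```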

使用 `grep` 时,你需要考虑大小写,不过 `grep` 命令也提供了一些选项。

```
$ grep ^[Zz]et /usr/share/dict/words
Zeta
zeta
zetacism
Zetana
zetas
Zetes
zetetic
Zethar
Zethus
Zetland
Zetta
zettabyte
```

为单词文件添加软链接能使这种搜索方式更加便捷:

```
$ ln -s /usr/share/dict/words words
$ grep ^[Zz]et words
Zeta
zeta
zetacism
Zetana
zetas
Zetes
zetetic
Zethar
Zethus
Zetland
Zetta
zettabyte
```

### aspell

`aspell` 命令提供了一种不同的方式。它提供了一种方法来检查你提供给它的任何文件或文本的拼写。你可以通过管道将文本传递给它,然后它会告诉你哪些单词看起来有拼写错误。如果所有单词都拼写正确,则不会有任何输出。

```
$ echo Did I mispell that? | aspell list
mispell
$ echo I can hardly wait to try out aspell | aspell list
aspell
$ echo Did I misspell anything? | aspell list
$
```

`list` 参数告诉 `aspell` 列出标准输入中拼写错误的单词。

你还可以使用 `aspell` 来定位和更正文本文件中的单词。如果它发现一个拼写错误的单词,它将为你提供一个相似(但拼写正确的)单词列表来替换这个单词,你也可以将该单词加入个人词库(`~/.aspell.en.pws`)并忽略拼写错误,或者完全中止进程(使文件保持处理前的状态)。

```
$ aspell -c mytext
```

一旦 `aspell` 发现一个单词出现了拼写错误,它将会为不正确的 “mispell” 提供一个选项列表:

```
1) mi spell              6) misplay
2) mi-spell              7) spell
3) misspell              8) misapply
4) Ispell                9) Aspell
5) misspells             0) dispel
i) Ignore                I) Ignore all
r) Replace               R) Replace all
a) Add                   l) Add Lower
b) Abort                 x) Exit
```

请注意,备选单词和拼写是用数字编号的,而其他选项是由字母表示的。你可以选择备选拼写中的一项或者自己输入替换项。“Abort” 选项将使文件保持不变,即使你已经为某些单词选择了替换。你选择添加的单词将被插入到本地单词文件中(例如 `~/.aspell.en.pws`)。
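
顺便一提,个人词库就是一个普通的文本文件,你可以直接查看(或小心地手动编辑)其中累积的单词。下面的输出只是一个示例,内容取决于你添加过哪些单词:

```
$ cat ~/.aspell.en.pws
personal_ws-1.1 en 1
zettabyte
```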

#### 其他单词列表

厌倦了英语?`aspell` 命令也可以在其他语言中使用,只要你添加了相关语言的单词列表。例如,在 Debian 系统中添加法语的词库,你可以这样做:

```
$ sudo apt install aspell-fr
```

这个新的词库文件会被安装为 `/usr/share/dict/French`。为了使用它,你只需要简单地告诉 `aspell` 你想要使用替换的单词列表:

```
$ aspell --lang=fr -c mytext
```

这种情况下,当 `aspell` 读到单词 “one” 时,你可能会看到下面的情况:

```
1) once                  6) orné
2) onde                  7) ne
3) ondé                  8) né
4) onze                  9) on
5) orne                  0) cône
i) Ignore                I) Ignore all
r) Replace               R) Replace all
a) Add                   l) Add Lower
b) Abort                 x) Exit
```

你也可以从 [GNU 官网][3]获取其他语言的词库。

### 总结

即使你是全国拼字比赛的冠军,你可能偶尔也会需要一点拼写方面的帮助,哪怕只是为了找出你手滑打错的单词。`aspell` 工具,加上 `look` 和 `grep` 命令,已经准备好来助你一臂之力了。

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3400942/how-linux-can-help-with-your-spelling.html

作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[Modrisco](https://github.com/Modrisco)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/06/linux-spelling-100798596-large.jpg
[2]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
[3]: ftp://ftp.gnu.org/gnu/aspell/dict/0index.html
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world
@@ -0,0 +1,146 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10983-1.html)
[#]: subject: (Expand And Unexpand Commands Tutorial With Examples)
[#]: via: (https://www.ostechnix.com/expand-and-unexpand-commands-tutorial-with-examples/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)

expand 与 unexpand 命令实例教程
======

![Expand And Unexpand Commands Explained][1]

本指南通过实际的例子解释两个 Linux 命令,即 `expand` 和 `unexpand`。对于好奇的人:`expand` 和 `unexpand` 命令用于将文件中的 `TAB` 字符替换为空格,反之亦然。在 MS-DOS 中也有一个名为 `expand` 的命令,它用于解压压缩文件,但 Linux 的 `expand` 命令只是将 `TAB` 转换为空格。这两个命令都是 GNU coreutils 包的一部分,由 David MacKenzie 编写。

为了演示,我将在本文使用名为 `ostechnix.txt` 的文本文件。下面给出的所有命令都在 Arch Linux 中进行过测试。
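
如果你想照着本文动手操作,可以先构造一个包含 `TAB` 的示例文件(文件名和内容只是示例):

```
$ printf 'one\ttwo\tthree\nfour\tfive\tsix\n' > ostechnix.txt
$ cat -A ostechnix.txt    # ^I 表示 TAB,$ 表示行尾
one^Itwo^Ithree$
four^Ifive^Isix$
```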

### expand 命令示例

与我之前提到的一样,`expand` 命令使用空格替换文件中的 `TAB` 字符。

现在,让我们将 `ostechnix.txt` 中的 `TAB` 转换为空格,并将结果写入标准输出:

```
$ expand ostechnix.txt
```

如果你不想在标准输出中显示结果,只需将其写入另一个文件,如下所示。

```
$ expand ostechnix.txt > output.txt
```

我们还可以将标准输入中的 `TAB` 转换为空格。为此,只需运行 `expand` 命令而不带文件名:

```
$ expand
```

只需输入文本并按回车键就能将 `TAB` 转换为空格。按 `CTRL+C` 退出。
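
在管道中使用 `expand` 也很常见,下面是一个小示意(`-t 4` 把制表位设为每 4 列一个):

```
$ printf 'a\tb\tc\n' | expand -t 4
a   b   c
```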

如果你不想转换非空白字符后的 `TAB`,请使用 `-i` 标记,如下所示。

```
$ expand -i ostechnix.txt
```

我们还可以将每个 `TAB` 设置为指定的宽度,而不是默认的 `8`。

```
$ expand -t 5 ostechnix.txt
```

我们甚至可以使用逗号分隔指定多个 `TAB` 位置,如下所示。

```
$ expand -t 5,10,15 ostechnix.txt
```

或者,

```
$ expand -t "5 10 15" ostechnix.txt
```

有关更多详细信息,请参阅手册页。

```
$ man expand
```

### unexpand 命令示例

正如你可能已经猜到的那样,`unexpand` 命令将执行与 `expand` 命令相反的操作。即它会将空格转换为 `TAB`。让我向你展示一些例子,以了解如何使用 `unexpand` 命令。

要将文件中的空白(当然是空格)转换为 `TAB` 并将输出写入标准输出,请执行以下操作:

```
$ unexpand ostechnix.txt
```

如果要将输出写入文件而不是仅将其显示到标准输出,请使用以下命令:

```
$ unexpand ostechnix.txt > output.txt
```

从标准输入读取内容,将空格转换为制表符:

```
$ unexpand
```

默认情况下,`unexpand` 命令仅转换行首的空格。如果你想转换所有空格而不只是行首的空格,请使用 `-a` 标志:

```
$ unexpand -a ostechnix.txt
```

仅转换行首的空格(请注意它会覆盖 `-a`):

```
$ unexpand --first-only ostechnix.txt
```

指定多少个空格替换成一个 `TAB`,而不是默认的 `8` 个(该选项会隐含启用 `-a`):

```
$ unexpand -t 5 ostechnix.txt
```

相似地,我们可以使用逗号分隔指定多个 `TAB` 的位置。

```
$ unexpand -t 5,10,15 ostechnix.txt
```

或者,

```
$ unexpand -t "5 10 15" ostechnix.txt
```

有关更多详细信息,请参阅手册页。

```
$ man unexpand
```

在处理大量文件时,`expand` 和 `unexpand` 命令对于将不需要的 `TAB` 替换为空格(或者反过来)非常有用。

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/expand-and-unexpand-commands-tutorial-with-examples/

作者:[sk][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/05/Expand-And-Unexpand-Commands-720x340.png
@@ -0,0 +1,85 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10988-1.html)
[#]: subject: (Graviton: A Minimalist Open Source Code Editor)
[#]: via: (https://itsfoss.com/graviton-code-editor/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

Graviton:极简的开源代码编辑器
======

[Graviton][1] 是一款开发中的自由开源的跨平台代码编辑器。它的开发者,16 岁的 Marc Espin 强调说,它是一个“极简”的代码编辑器。我不确定这点,但它确实有一个清爽的用户界面,就像其他的[现代代码编辑器,如 Atom][2]。

![Graviton Code Editor Interface][3]

开发者还将其称为轻量级代码编辑器,尽管 Graviton 基于 [Electron][4]。

Graviton 拥有你在任何标准代码编辑器中所期望的功能,如语法高亮、自动补全等。由于 Graviton 仍处于测试阶段,因此未来版本中将添加更多功能。

![Graviton Code Editor with Syntax Highlighting][5]

### Graviton 代码编辑器的特性

Graviton 一些值得一提的特性有:

  * 使用 [CodeMirrorJS][6] 为多种编程语言提供语法高亮
  * 自动补全
  * 支持插件和主题
  * 提供英语、西班牙语和一些其他欧洲语言版本
  * 适用于 Linux、Windows 和 macOS

我快速地试用了一下 Graviton,它可能不像 [VS Code][7] 或 [Brackets][8] 那样功能丰富,但对于一些简单的代码编辑来说,它还算是个不错的工具。

### 下载并安装 Graviton

![Graviton Code Editor][9]

如上所述,Graviton 是一个可用于 Linux、Windows 和 macOS 的跨平台代码编辑器。它仍处于测试阶段,这意味着将来会添加更多功能,并且你可能会遇到一些 bug。

你可以在其发布页面上找到最新版本的 Graviton。Debian 和 [Ubuntu 用户可以使用 .deb 安装][10]。它也提供了 [AppImage][11],以便可以在其他发行版中使用它。DMG 和 EXE 文件也分别可用于 macOS 和 Windows。

- [下载 Graviton][12]
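
以 AppImage 方式试用通常只需要加上可执行权限再运行即可(文件名随版本不同而变化,这里仅作示意):

```
$ chmod +x Graviton-*.AppImage
$ ./Graviton-*.AppImage
```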

如果你有兴趣,可以在 GitHub 仓库中找到 Graviton 的源代码:

- [GitHub 中 Graviton 的源码][13]

如果你决定使用 Graviton 并发现了一些问题,请在[此处][14]写一份错误报告。如果你使用 GitHub,你可能想为 Graviton 项目加星。这可以提高开发者的士气,因为他知道有更多的用户欣赏他的努力。

如果你读到了这里,我相信你已经知道[如何从源码安装软件][16]了。

### 写在最后

有时,简单本身就成了一个特性,而 Graviton 对极简的专注可以帮助它在已经拥挤的代码编辑器世界中获取一席之地。

--------------------------------------------------------------------------------

via: https://itsfoss.com/graviton-code-editor/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://graviton.ml/
[2]: https://itsfoss.com/best-modern-open-source-code-editors-for-linux/
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/06/graviton-code-editor-interface.jpg?resize=800%2C571&ssl=1
[4]: https://electronjs.org/
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/06/graviton-code-editor-interface-2.jpg?resize=800%2C522&ssl=1
[6]: https://codemirror.net/
[7]: https://itsfoss.com/install-visual-studio-code-ubuntu/
[8]: https://itsfoss.com/install-brackets-ubuntu/
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/06/graviton-code-editor-800x473.jpg?resize=800%2C473&ssl=1
[10]: https://itsfoss.com/install-deb-files-ubuntu/
[11]: https://itsfoss.com/use-appimage-linux/
[12]: https://github.com/Graviton-Code-Editor/Graviton-App/releases
[13]: https://github.com/Graviton-Code-Editor/Graviton-App
[14]: https://github.com/Graviton-Code-Editor/Graviton-App/issues
[16]: https://itsfoss.com/install-software-from-source-code/
[17]: https://itsfoss.com/contact-us/
published/20190610 Try a new game on Free RPG Day.md

@@ -0,0 +1,84 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10976-1.html)
[#]: subject: (Try a new game on Free RPG Day)
[#]: via: (https://opensource.com/article/19/5/free-rpg-day)
[#]: author: (Seth Kenlon https://opensource.com/users/seth/users/erez/users/seth)

在免费 RPG 日试玩一下新游戏
======

> 6 月 15 日,你可以在当地的游戏商家庆祝桌面角色扮演游戏并获得免费的 RPG 资料。

(LCTT 译注:“<ruby>免费 RPG 日<rt>Free RPG Day</rt></ruby>”是受“<ruby>免费漫画书日<rt>Free Comic Book Day</rt></ruby>”启发而发起的庆祝活动,从 2007 年开始已经举办多次。这里的 RPG 游戏并非我们通常所指的电脑 RPG 游戏,而是指使用纸和笔的桌面游戏,是一种西方传统游戏形式。)

你有没有想过尝试一下《<ruby>龙与地下城<rt>Dungeons & Dragons</rt></ruby>》,但不知道如何开始?你是否在年轻时玩过《<ruby>开拓者<rt>Pathfinder</rt></ruby>》并一直在考虑重返快乐时光?你是否对角色扮演游戏(RPG)感到好奇,但不确定你是否想玩一个?你是否对桌面游戏的概念完全陌生,直到现在才听说过这种 RPG 游戏?无论是哪一种情况都不重要,因为[免费 RPG 日][2]适合所有人!

第一个免费 RPG 日活动发生在 2007 年,是由世界各地的桌面游戏商家举办的。这个想法是以 0 美元的价格为新手和有经验的游戏玩家带来新的、独家的 RPG 快速入门规则和冒险体验。在这样的一天里,你可以走进当地的桌面游戏商家,得到一本小册子,其中包含桌面 RPG 的简单的初学者规则,你可以在商家里与那里的人或者回家与朋友一起玩。这本小册子是送给你的,可以一直留着。

这一活动如此受欢迎,此后该传统一直延续至今。今年,免费 RPG 日定于 6 月 15 日星期六举行。

### 有什么收获?

显然,免费 RPG 日背后的想法是让你沉迷于桌面 RPG 游戏。但在你的犬儒主义本能发作之前,请想一想:爱上一个鼓励你阅读规则和背景知识的游戏并不算太糟,这样你和你的家人、朋友就有了共度时光的借口了。桌面 RPG 是一个功能强大、富有想象力和有趣的媒介,而免费 RPG 日则是对这种游戏很好的介绍。

![FreeRPG Day logo][3]

### 开源游戏

像许多其他行业一样,开源现象也影响了桌面游戏。回到世纪之交,《<ruby>万智牌<rt>Magic: The Gathering</rt></ruby>》和《<ruby>龙与地下城<rt>Dungeons & Dragons</rt></ruby>》(D&D)的发行商<ruby>[威世智公司][4]<rt>Wizards of the Coast</rt></ruby>决定通过开发<ruby>[开源游戏许可证][5]<rt>Open Game License</rt></ruby>(OGL)来采用开源方法。他们将此许可证用于世界上第一个 RPG(《龙与地下城》)的版本 3 和 3.5。几年后,当他们在第四版上(对开源)产生了动摇时,《<ruby>龙<rt>Dragon</rt></ruby>》杂志的出版商复刻了 D&D 3.5 的“代码”,将其混制版本发布为《<ruby>开拓者<rt>Pathfinder</rt></ruby>》RPG,从而保持了创新和整个第三方游戏开发者产业的健康发展。最近,威世智公司在 D&D 5e 版本中才又重回了 OGL。

OGL 允许开发人员至少可以在他们自己的产品中使用该游戏的机制。虽然你不一定可以使用自定义怪物、武器、王国或流行角色的名称,但你可以随时使用 OGL 游戏的规则和数学计算。事实上,OGL 游戏的规则通常作为[系统参考文档][6](SRD)免费发布,因此,无论你是否购买了规则书的副本,你都可以了解游戏的玩法。

如果你之前从未玩过桌面 RPG,那么使用笔和纸玩的游戏也可以拥有游戏引擎似乎很奇怪,但计算就是计算,不管是数字的还是模拟的。作为一个简单的例子:假设游戏引擎规定玩家角色有一个代表其力量的数字。当那个玩家角色与一个有其两倍力量的巨人战斗时,在玩家掷骰子以增加她的角色的力量攻击时,真的会感到紧张。如果没有掷出一个很好的点数的话,她的力量将无法与巨人相匹敌。知道了这一点,第三方或独立开发者就可以为这个游戏引擎设计一个怪物,同时了解掷骰可能对玩家的能力得分产生的影响。这意味着他们可以根据游戏引擎的优先级进行数学计算。他们可以设计一系列供玩家击杀的怪物,在游戏引擎的环境中它们具有有意义的能力和技能,并且他们可以宣称与该引擎的兼容性。

此外,OGL 允许出版商为其材料定义产品标识。产品标识可以是出版物的商业外观(图形元素和布局)、徽标、术语、传说、专有名称等。未经出版商同意,任何定义为产品标识的内容都可能**无法**重复使用。例如,假设一个出版商发行了一本武器手册,其中包括一个名为 Sigint 的魔法砍刀,它对所有针对僵尸的攻击都给予 +2 魔法附加攻击值。这个特性来自一个故事,该砍刀是一个具有潜伏的僵尸基因的科学家锻造的。但是,该出版物在 OGL 第 1e 节中列出的所有武器的正确名称都被保留为产品标识。这意味着你可以在自己的出版物中使用该数字(武器的持久性、它所造成的伤害,+2 魔法奖励等等)以及与该武器相关的传说(它由一个潜伏的僵尸锻造),但是你不能使用该武器的名称(Sigint)。

OGL 是一个非常灵活的许可证,因此开发人员必须仔细阅读其第 1e 节。一些出版商只保留出版物本身的布局,而其他出版商保留除数字和最通用术语之外的所有内容。

当这个卓越的 RPG 系列拥抱开源时,它给整个行业掀起的波澜至今仍能感受到。第三方开发人员可以为 5e 和《开拓者》系统创建内容。由威世智公司创建的整个 [DungeonMastersGuild.com][7] 网站为 D&D 5e 制作了独立内容,旨在促进独立出版。[Starfinder][8]、[OpenD6][9]、[战士,盗贼和法师][10]、[剑与巫师][11] 等很多其它游戏都采用了 OGL。其他系统,如 Brent Newhall 的《[Dungeon Delvers][12]》、《[Fate][13]》、《[Dungeon World][14]》等等,都是根据[知识共享许可][15]授权的。

### 获取你的 RPG

在免费 RPG 日,你可以前往当地游戏商铺,玩 RPG 以及获取与朋友将来一起玩的 RPG 游戏材料。就像 <ruby>[Linux 安装节][16]<rt>Linux installfest</rt></ruby> 或 <ruby>[软件自由日][17]<rt>Software Freedom Day</rt></ruby>一样,该活动的定义很松散。每个商家举办的免费 RPG 日都有所不同,每个商家都可以玩他们选择的任何游戏。但是,游戏发行商捐赠的免费内容每年都是相同的。显然,免费的东西视情况而定,但是当你参加免费 RPG 日活动时,请注意有多少游戏采用了开源许可证(如果是 OGL 游戏,OGL 会打印在书背面)。《开拓者》、《Starfinder》和 D&D 的任何内容肯定都会带有 OGL 的一些优势。许多其他系统的内容使用知识共享许可。有些则像 90 年代复活的 [Dead Earth][18] RPG 一样,使用 [GNU 自由文档许可证][19]。

有大量的游戏资源是通过开源许可证开发的。你可能需要也可能不需要关心游戏的许可证;毕竟,许可证与你是否可以与朋友一起玩无关。但是如果你喜欢支持[自由文化][20]而不仅仅是你运行的软件,那么试试一些 OGL 或知识共享游戏吧。如果你不熟悉游戏,请在免费 RPG 日在当地游戏商家试玩桌面 RPG 游戏!

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/5/free-rpg-day

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth/users/erez/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/team-game-play-inclusive-diversity-collaboration.png?itok=8sUXV7W1 (plastic game pieces on a board)
[2]: https://www.freerpgday.com/
[3]: https://opensource.com/sites/default/files/uploads/freerpgday-logoblank.jpg (FreeRPG Day logo)
[4]: https://company.wizards.com/
[5]: http://www.opengamingfoundation.org/licenses.html
[6]: https://www.d20pfsrd.com/
[7]: https://www.dmsguild.com/
[8]: https://paizo.com/starfinder
[9]: https://ogc.rpglibrary.org/index.php?title=OpenD6
[10]: http://www.stargazergames.eu/games/warrior-rogue-mage/
[11]: https://froggodgames.com/frogs/product/swords-wizardry-complete-rulebook/
[12]: http://brentnewhall.com/games/doku.php?id=games:dungeon_delvers
[13]: http://www.faterpg.com/licensing/licensing-fate-cc-by/
[14]: http://dungeon-world.com/
[15]: https://creativecommons.org/
[16]: https://www.tldp.org/HOWTO/Installfest-HOWTO/introduction.html
[17]: https://www.softwarefreedomday.org/
[18]: https://mixedsignals.ml/games/blog/blog_dead-earth
[19]: https://www.gnu.org/licenses/fdl-1.3.en.html
[20]: https://opensource.com/article/18/1/creative-commons-real-world
published/20190610 Welcoming Blockchain 3.0.md

@@ -0,0 +1,105 @@
[#]: collector: (lujun9972)
[#]: translator: (murphyzhao)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10987-1.html)
[#]: subject: (Welcoming Blockchain 3.0)
[#]: via: (https://www.ostechnix.com/welcoming-blockchain-3-0/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)

迎接区块链 3.0
======

![欢迎区块链 3.0][1]

“[区块链 2.0][2]”系列文章讨论了自 2008 年比特币等加密货币问世以来区块链技术的发展。本文将探讨区块链的未来发展。**区块链 3.0** 这一新的 DLT(<ruby>分布式分类帐本技术<rt>Distributed Ledger Technology</rt></ruby>)演进浪潮将回答当前区块链所面临的问题(这里对每一个问题都进行了总结)。下一版本的技术标准也将带来全新的应用和使用案例。在本文的最后,我们也会看一些当前使用这些原则的案例。

以下是现有区块链平台的几个缺点,并针对这些缺点给出了建议的解决方案。

### 问题 1:可扩展性

这个问题 [^1] 被视为普遍采用该技术的第一个主要障碍。正如之前所讨论的,很多因素限制了区块链同时处理大量交易的能力。诸如 [以太坊][3] 之类的现有网络每秒能够进行 10-15 次交易(TPS),而像 Visa 所使用的主流网络每秒能够进行超过 2000 次交易。**可扩展性**是困扰所有现代数据库系统的问题。正如我们在这里看到的那样,改进的共识算法和更好的区块链架构设计正在改进它。

#### 解决可扩展性

已经有人提出了更精简、更有效的一致性算法来解决可扩展性问题,并且不会影响区块链的主要结构。虽然大多数加密货币和区块链平台使用资源密集型的 PoW 算法(例如,比特币和以太坊)来生成区块,但是存在更新的 DPoS 和 PoET 算法来解决这个问题。DPoS 和 PoET 算法(还有一些正在开发中)需要更少的资源来维持区块链,并且已显示具有高达 1000 TPS 的吞吐量,可与流行的非区块链系统相媲美。

可扩展性问题的第二个解决方案是完全改变区块链结构和功能。我们不会详细介绍这一点,但已经提出了诸如<ruby>有向无环图<rt>Directed Acyclic Graph</rt></ruby>(DAG)之类的替代架构来处理这个问题。从本质上讲,这项工作假设并非所有网络节点都需要整个区块链的副本才能使区块链正常工作,或者并非所有的参与者需要从 DLT 系统获得好处。系统不要求所有参与者验证交易,只需要交易发生在共同的参考框架中并相互链接。

在比特币系统中使用<ruby>[闪电网络][11]<rt>Lightning network</rt></ruby>来实现 DAG,而以太坊使用他们的<ruby>[切片][12]<rt>Sharding</rt></ruby> 协议来实现 DAG。本质上,从技术上来看 DAG 实现并不是区块链。它更像是一个错综复杂的迷宫,只是仍然保留了区块链的点对点和分布式数据库属性。稍后我们将在另一篇文章中探讨 DAG 和 Tangle 网络。

### 问题 2:互通性

**互通性**[^4] [^5] 被称为跨链访问,基本上就是指不同区块链之间彼此相互通信以交换指标和信息。由于目前有数不清的众多平台,不同公司为各种应用提供了各种专有系统,平台之间的互操作性就至关重要。例如,目前在一个平台上拥有数字身份的人无法利用其他平台提供的功能,因为各个区块链彼此之间互不了解、不能沟通。这是由于缺乏可靠的验证、令牌交换等有关的问题仍然存在。如果平台之间不能够相互通信,面向全球推出[智能合约][4]也是不可行的。

#### 解决互通性

有一些协议和平台专为实现互操作性而设计。这些平台实现了原子交换协议,并向不同的区块链系统提供开放场景,以便在它们之间进行通信和交换信息。**“0x (ZRX)”** 就是其中的一个例子,稍后将对其进行描述。

### 问题 3:治理

公有链中的治理 [^6] 本身不是限制,而是需要像社区道德指南针一样,在区块链的运作中考虑每个人的意见。结合规模来看,可以预见这样一个问题:要么协议更改太频繁,要么协议被拥有最多令牌的“中央”权威一时冲动下修改。不过这不是大多数公共区块链目前正在努力避免的问题,因为其运营规模和运营性质不需要更严格的监管。

#### 解决治理问题

上面提到的复杂的框架或 DAG 几乎可以消除对全球(平台范围)治理法规的需要和使用。相反,程序可以自动监督事务和用户类型,并决定需要执行的法律。

### 问题 4:可持续性

可持续性再次建立在可扩展性问题的基础上。当前的区块链和加密货币因不可长期持续而倍遭批评,这是由于仍然需要大量的监督,并且需要大量资源保持系统运行。如果你读过最近“挖掘加密货币已经不再那么有利可图”的相关报道,你就会明白挖矿的现状:保持现有平台运行所需的资源量,在全球范围和主流使用方面根本不实用。

#### 解决不可持续性问题

从资源或经济角度来看,可持续性的答案与可扩展性的答案类似。但是,要在全球范围内实施这一制度,法律和法规必须予以认可。然而,这取决于世界各国政府。来自美国和欧洲政府的有利举措重新燃起了对这方面的希望。

### 问题 5:用户采用

目前,阻止消费者广泛采用 [^7] 基于区块链的应用程序的一个障碍是消费者对平台及其底层的技术不熟悉。事实上,大多数应用程序都需要某种技术和计算背景来弄清楚它们是如何工作的,这在这方面也没有帮助。区块链开发的第三次浪潮旨在缩小消费者知识与平台可用性之间的差距。

#### 解决用户采用问题

互联网花了很长的时间才发展成现在的样子。多年来,人们在开发标准化互联网技术栈方面做了大量的工作,使 Web 能够像现在这样运作。开发人员正在开发面向用户的前端分布式应用程序,这些应用程序应作为现有 Web 3.0 技术之上的一层,同时由下面的区块链和开放协议提供支持。这样的[分布式应用][5]将使用户更熟悉底层技术,从而增加主流采用。

### 在当前场景中的应用

我们已经从理论上讨论了上述问题的解决方法,现在我们将继续展示这些方法在当前场景中的应用。

- [0x][6] – 是一种去中心化的令牌交换,不同平台的用户可以在不需要中央权威机构审查的情况下交换令牌。他们的突破在于,他们如何设计系统使得仅在交易结算后才记录和审查数据块,而不是通常的在交易之间进行(为了验证上下文,通常也会验证交易之前的数据块)。这使在线数字资产交换更快速。
- [Cardano][7] – 由以太坊的联合创始人之一创建,Cardano 自诩为一个真正“科学”的平台,并采用了严格的协议,对开发的代码和算法进行了多次审查。Cardano 的所有内容都在数学上尽可能地进行了优化。他们的共识算法叫做 **Ouroboros**,是一种改进的<ruby>权益证明<rt>Proof of Stake</rt></ruby>(PoS)算法。Cardano 是用 [**haskell**][8] 开发的,智能合约引擎使用 haskell 的衍生工具 **plutus** 进行操作。这两者都是函数式编程语言,可以保证安全交易而不会影响效率。
- EOS – 我们已经在 [这篇文章][9] 中描述了 EOS。
- [COTI][10] – 一个鲜为人知的架构,COTI 不需要挖矿,而且在运行过程中趋近于零功耗。它还将资产存储在本地用户设备上的离线钱包中,而不是存储在纯粹的对等网络上。它们也遵循基于 DAG 的架构,并声称处理吞吐量高达 10000 TPS。他们的平台允许企业在不利用区块链的情况下建立自己的加密货币和数字化货币钱包。

[^1]: A. P. Paper, K. Croman, C. Decker, I. Eyal, A. E. Gencer, and A. Juels, “On Scaling Decentralized Blockchains | SpringerLink,” 2018.
[^4]: [Why is blockchain interoperability important][13]
[^5]: [The Importance of Blockchain Interoperability][14]
[^6]: R. Beck, C. Müller-Bloch, and J. L. King, “Governance in the Blockchain Economy: A Framework and Research Agenda,” J. Assoc. Inf. Syst., pp. 1020–1034, 2018.
[^7]: J. M. Woodside, F. K. A. Jr, W. Giberson, F. K. J. Augustine, and W. Giberson, “Blockchain Technology Adoption Status and Strategies,” J. Int. Technol. Inf. Manag., vol. 26, no. 2, pp. 65–93, 2017.

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/welcoming-blockchain-3-0/

作者:[sk][a]
选题:[lujun9972][b]
译者:[murphyzhao](https://github.com/murphyzhao)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/06/blockchain-720x340.jpg
[2]: https://linux.cn/article-10650-1.html
[3]: https://www.ostechnix.com/blockchain-2-0-what-is-ethereum/
[4]: https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/
[5]: https://www.ostechnix.com/blockchain-2-0-explaining-distributed-computing-and-distributed-applications/
[6]: https://0x.org/
[7]: https://www.cardano.org/en/home/
[8]: https://www.ostechnix.com/getting-started-haskell-programming-language/
[9]: https://www.ostechnix.com/blockchain-2-0-eos-io-is-building-infrastructure-for-developing-dapps/
[10]: https://coti.io/
[11]: https://cryptoslate.com/beyond-blockchain-directed-acylic-graphs-dag/
[12]: https://github.com/ethereum/wiki/wiki/Sharding-FAQ#introduction
[13]: https://www.capgemini.com/2019/02/can-the-interoperability-of-blockchains-change-the-world/
[14]: https://medium.com/wanchain-foundation/the-importance-of-blockchain-interoperability-b6a0bbd06d11
@@ -1,5 +1,5 @@
 [#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (hopefully2333)
 [#]: reviewer: ( )
 [#]: publisher: ( )
 [#]: url: ( )
@ -0,0 +1,89 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (6 ways to make enterprise IoT cost effective)
|
||||
[#]: via: (https://www.networkworld.com/article/3401082/6-ways-to-make-enterprise-iot-cost-effective.html)
|
||||
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
|
||||
|
||||
6 ways to make enterprise IoT cost effective
|
||||
======
|
||||
Rob Mesirow, a principal at PwC’s Connected Solutions unit, offers tips for successfully implementing internet of things (IoT) projects without breaking the bank.
|
||||
![DavidLeshem / Getty][1]
|
||||
|
||||
There’s little question that the internet of things (IoT) holds enormous potential for the enterprise, in everything from asset tracking to compliance.
|
||||
|
||||
But enterprise uses of IoT technology are still evolving, and it’s not yet entirely clear which use cases and practices currently make economic and business sense. So, I was thrilled to trade emails recently with [Rob Mesirow][2], a principal at [PwC’s Connected Solutions][3] unit, about how to make enterprise IoT implementations as cost effective as possible.
|
||||
|
||||
“The IoT isn’t just about technology (hardware, sensors, software, networks, communications, the cloud, analytics, APIs),” Mesirow said, “though tech is obviously essential. It also includes ensuring cybersecurity, managing data governance, upskilling the workforce and creating a receptive workplace culture, building trust in the IoT, developing interoperability, and creating business partnerships and ecosystems—all part of a foundation that’s vital to a successful IoT implementation.”
|
||||
|
||||
**[ Also read:[Enterprise IoT: Companies want solutions in these 4 areas][4] ]**
|
||||
|
||||
Yes, that sounds complicated—and a lot of work for a still-hard-to-quantify return. Fortunately, though, Mesirow offered up some tips on how companies can make their IoT implementations as cost effective as possible.
|
||||
|
||||
### 1\. Don’t wait for better technology
|
||||
|
||||
Mesirow advised against waiting to implement IoT projects until you can deploy emerging technology such as [5G networks][5]. That makes sense, as long as your implementation doesn’t specifically require capabilities available only in the new technology.
|
||||
|
||||
### 2\. Start with the basics, and scale up as needed
|
||||
|
||||
“Companies need to start with the basics—building one app/task at a time—instead of jumping ahead with enterprise-wide implementations and ecosystems,” Mesirow said.
|
||||
|
||||
“There’s no need to start an IoT initiative by tackling a huge, expensive ecosystem. Instead, begin with one manageable use case, and build up and out from there. The IoT can inexpensively automate many everyday tasks to increase effectiveness, employee productivity, and revenue.”
|
||||
|
||||
After you pick the low-hanging fruit, it’s time to become more ambitious.
|
||||
|
||||
“After getting a few successful pilots established, businesses can then scale up as needed, building on the established foundation of business processes, people experience, and technology,” Mesirow said.
|
||||
|
||||
### 3\. Make dumb things smart
|
||||
|
||||
Of course, identifying the ripest low-hanging fruit isn’t always easy.
|
||||
|
||||
“Companies need to focus on making dumb things smart, deploying infrastructure that’s not going to break the bank, and providing enterprise customers the opportunity to experience what data intelligence can do for their business,” Mesirow said. “Once they do that, things will take off.”
|
||||
|
||||
### 4\. Leverage lower-cost networks
|
||||
|
||||
“One key to building an IoT inexpensively is to use low-power, low-cost networks (Low-Power Wide-Area Networks, or LPWAN) to provide IoT services, which reduces costs significantly,” Mesirow said.
|
||||
|
||||
Naturally, he mentioned that PwC has three separate platforms with some 80 products that hang off those platforms, which he said cost “a fraction of traditional IoT offerings, with security and privacy built in.”
|
||||
|
||||
Despite the product pitch, though, Mesirow is right to call out the efficiencies involved in using low-cost, low-power networks instead of more expensive existing cellular.
|
||||
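To see why the network choice moves the needle, here is a minimal back-of-the-envelope sketch; the per-device monthly rates are purely illustrative placeholders, not actual carrier or LPWAN pricing:

```
# Back-of-the-envelope connectivity cost comparison
# (all rates are illustrative placeholders, not real pricing).
DEVICES = 5000
MONTHS = 36

cellular_per_device_month = 2.50   # hypothetical cellular data plan
lpwan_per_device_month = 0.30      # hypothetical LPWAN subscription

cellular_total = DEVICES * MONTHS * cellular_per_device_month
lpwan_total = DEVICES * MONTHS * lpwan_per_device_month

print(f"Cellular: ${cellular_total:,.0f}")
print(f"LPWAN:    ${lpwan_total:,.0f}")
print(f"Savings:  ${cellular_total - lpwan_total:,.0f}")
```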
|
||||
### 5\. Balance security vs. cost
|
||||
|
||||
Companies need to plan their IoT network with costs vs. security in mind, Mesirow said. “Open-source networks will be less expensive, but there may be security concerns,” he said.
|
||||
|
||||
That’s true, of course, but there may be security concerns in _any_ network, not just open-source solutions. Still, Mesirow’s overall point remains valid: Enterprises need to carefully consider all the trade-offs they’re making in their IoT efforts.
|
||||
|
||||
### 6\. Account for _all_ the value IoT provides
|
||||
|
||||
Finally, Mesirow pointed out that “much of the cost-effectiveness comes from the _value_ the IoT provides,” and it’s important to consider the return, not just the investment.
|
||||
|
||||
“For example,” Mesirow said, the IoT “increases productivity by enabling the remote monitoring and control of business operations. It saves on energy costs by automatically turning off lights and HVAC when spaces are vacant, and predictive maintenance alerts lead to fewer machine repairs. And geolocation can lead to personalized marketing to customer smartphones, which can increase sales to nearby stores.”
|
||||
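One rough way to account for that return is a simple cost-benefit tally, as in the sketch below; every figure is a hypothetical placeholder to be replaced with your own estimates:

```
# Toy IoT cost-benefit tally (all numbers are hypothetical placeholders).
investment = 250_000  # sensors, gateways, integration

annual_value = {
    "energy savings (lights/HVAC off when vacant)": 40_000,
    "fewer repairs via predictive maintenance": 60_000,
    "added sales from geolocated marketing": 35_000,
}

total_value = sum(annual_value.values())
print(f"Annual value: ${total_value:,}")
print(f"Simple payback: {investment / total_value:.1f} years")
```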
|
||||
**[ Now read this:[5 reasons the IoT needs its own networks][6] ]**
|
||||
|
||||
Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3401082/6-ways-to-make-enterprise-iot-cost-effective.html
|
||||
|
||||
作者:[Fredric Paul][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Fredric-Paul/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2019/02/money_financial_salary_growth_currency_by-davidleshem-100787975-large.jpg
|
||||
[2]: https://twitter.com/robmesirow
|
||||
[3]: https://digital.pwc.com/content/pwc-digital/en/products/connected-solutions.html
|
||||
[4]: https://www.networkworld.com/article/3396128/the-state-of-enterprise-iot-companies-want-solutions-for-these-4-areas.html
|
||||
[5]: https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html
|
||||
[6]: https://www.networkworld.com/article/3284506/5-reasons-the-iot-needs-its-own-networks.html
|
||||
[7]: https://www.facebook.com/NetworkWorld/
|
||||
[8]: https://www.linkedin.com/company/network-world
|
@ -0,0 +1,66 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Cisco launches a developer-community cert program)
|
||||
[#]: via: (https://www.networkworld.com/article/3401524/cisco-launches-a-developer-community-cert-program.html)
|
||||
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
|
||||
|
||||
Cisco launches a developer-community cert program
|
||||
======
|
||||
Cisco has revamped some of its most critical certification and career-development programs in an effort to address the emerging software-oriented-network environment.
|
||||
![Getty Images][1]
|
||||
|
||||
SAN DIEGO – Cisco revamped some of its most critical certification and career-development tools in an effort to address the emerging software-oriented network environment.
|
||||
|
||||
Perhaps one of the biggest additions – rolled out here at the company’s Cisco Live customer event – is the new set of professional certifications for developers utilizing Cisco’s growing DevNet developer community.
|
||||
|
||||
**[ Also see[4 job skills that can boost networking salaries][2] and [20 hot jobs ambitious IT pros should shoot for][3].]**
|
||||
|
||||
The Cisco Certified DevNet Associate, Specialist and Professional certifications will cover software development for applications, automation, DevOps, cloud and IoT. They will also target software developers and network engineers who are building software proficiency to create applications and automated workflows for operational networks and infrastructure.
|
||||
|
||||
“This certification evolution is the next step to reflect the critical skills network engineers must have to be at the leading edge of networked-enabled business disruption and delivering customer excellence,” said Mike Adams, vice president and general manager of Learning@Cisco. “To perform effectively in this new world, every IT professional needs skills that are broader, deeper and more agile than ever before. And they have to be comfortable working as a multidisciplinary team including infrastructure network engineers, DevOps and automation specialists, and software professionals.”
|
||||
|
||||
Other Cisco Certifications changes include:
|
||||
|
||||
* Streamlined certifications to validate engineering professionals with Cisco Certified Network Associate (CCNA) and Cisco Specialist certifications as well as Cisco Certified Network Professional (CCNP) and Cisco Certified Internetwork Expert (CCIE) certifications in enterprise, data center, service provider, security and collaboration.
|
||||
* For more senior professionals, the CCNP will give learners a choice of five tracks, covering enterprise technologies including infrastructure and wireless, service provider, data center, security and collaboration. Candidates will be able to further specialize in a particular focus area within those technologies.
|
||||
* Cisco says it will eliminate pre-requisites for certifications, meaning engineers can change career options without having to take a defined path.
|
||||
* Expansion of Cisco Networking Academy offerings to train entry level network professionals and software developers. Courses prepare students to earn CCNA and Certified DevNet Associate certifications, equipping them for high-demand jobs in IT.
|
||||
|
||||
|
||||
|
||||
New network technologies such as intent-based networking, multi-domain networking, and programmability fundamentally change the capabilities of the network, giving network engineers the opportunity to architect solutions that utilize the programmable network in new and exciting ways, wrote Susie Wee, senior vice president and chief technology officer of DevNet.
|
||||
|
||||
“DevOps practices can be applied to the network, making the network more agile and enabling automation at scale. The new network provides more than just connectivity, it can now use policy and intent to securely connect applications, users, devices and data across multiple environments – from the data center and cloud, to the campus and branch, to the edge, and to the device,” Wee wrote.
|
||||
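As a concrete, if minimal, illustration of the kind of network automation Wee describes, here is a sketch that reads interface data from a RESTCONF-capable device over the standard RFC 8040 API. The device address and credentials are hypothetical placeholders, and this is not an official Cisco or DevNet sample:

```
# Minimal RESTCONF read (RFC 8040); device address and credentials
# are hypothetical placeholders, not a real endpoint.
import requests

DEVICE = "https://192.0.2.10"  # TEST-NET address; replace with your device
AUTH = ("admin", "password")   # placeholder credentials

resp = requests.get(
    f"{DEVICE}/restconf/data/ietf-interfaces:interfaces",
    headers={"Accept": "application/yang-data+json"},
    auth=AUTH,
    verify=False,  # lab only; use proper certificates in production
)
resp.raise_for_status()

for intf in resp.json()["ietf-interfaces:interfaces"]["interface"]:
    print(intf["name"], intf.get("enabled"))
```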
|
||||
**[[Looking to upgrade your career in tech? This comprehensive online course teaches you how.][4] ]**
|
||||
|
||||
She also announced the DevNet Automation Exchange, a community that will offer shared code, best practices and technology tools for users, developers or channel partners interested in developing automation apps.
|
||||
|
||||
Wee said Cisco seeded the Automation Exchange with over 50 shared code repositories.
|
||||
|
||||
“It is becoming increasingly clear that network ops can be handled much more efficiently with automation, and offering the tools to develop better applications is crucial going forward,” said Zeus Kerravala, founder and principal analyst with ZK Research.
|
||||
|
||||
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3401524/cisco-launches-a-developer-community-cert-program.html
|
||||
|
||||
作者:[Michael Cooney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Michael-Cooney/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2019/01/run_digital-vanguard_business-executive-with-briefcase_career-growth-100786736-large.jpg
|
||||
[2]: https://www.networkworld.com/article/3227832/lan-wan/4-job-skills-that-can-boost-networking-salaries.html
|
||||
[3]: https://www.networkworld.com/article/3276025/careers/20-hot-jobs-ambitious-it-pros-should-shoot-for.html
|
||||
[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fupgrading-your-technology-career
|
||||
[5]: https://www.facebook.com/NetworkWorld/
|
||||
[6]: https://www.linkedin.com/company/network-world
|
@ -0,0 +1,68 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (The carbon footprints of IT shops that train AI models are huge)
|
||||
[#]: via: (https://www.networkworld.com/article/3401919/the-carbon-footprints-of-it-shops-that-train-ai-models-are-huge.html)
|
||||
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
|
||||
|
||||
The carbon footprints of IT shops that train AI models are huge
|
||||
======
|
||||
Artificial intelligence (AI) model training can generate five times more carbon dioxide than a car does in a lifetime, researchers at the University of Massachusetts, Amherst find.
|
||||
![ipopba / Getty Images][1]
|
||||
|
||||
A new research paper from the University of Massachusetts, Amherst looked at the carbon dioxide (CO2) generated over the course of training several common large artificial intelligence (AI) models and found that the process can generate nearly five times the amount emitted by an average American car over its lifetime, including the manufacture of the car itself.
|
||||
|
||||
The [paper][2] specifically examined the model training process for natural-language processing (NLP), which is how AI handles natural language interactions. The study found that during the training process, more than 626,000 pounds of carbon dioxide is generated.
|
||||
|
||||
This is significant, since AI training is one IT process that has remained firmly on-premises and not moved to the cloud. Very expensive equipment is needed, as are large volumes of data, so the cloud isn’t the right fit for most AI training, and the report notes this. Plus, IT shops want to keep that kind of IP in house. So, if you are experimenting with AI, that power bill is going to go up.
|
||||
|
||||
**[ Read also:[How to plan a software-defined data-center network][3] ]**
|
||||
|
||||
While the report used carbon dioxide as a measure, that’s still the product of electricity generation. Training involves the use of the most powerful processors, typically Nvidia GPUs, and they are not known for being low-power draws. And as the paper notes, “model training also incurs a substantial cost to the environment due to the energy required to power this hardware for weeks or months at a time.”
|
||||
|
||||
Training is the most processor-intensive portion of AI. It can take days, weeks, or even months to “learn” what the model needs to know: in this case, how to handle and process natural-language questions rather than the broken keyword phrases of a typical Google search. That means power-hungry Nvidia GPUs running at full utilization for the entire time.
|
||||
|
||||
The report said training one model with a neural architecture generated 626,155 pounds of CO2. By contrast, one passenger flying round trip between New York and San Francisco would generate 1,984 pounds of CO2, an average American would generate 11,023 pounds in one year, and a car would generate 126,000 pounds over the course of its lifetime.
|
||||
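Those comparisons are easier to grasp as ratios. The quick sketch below simply restates the report’s figures:

```
# Figures from the UMass Amherst report, in pounds of CO2.
training = 626_155
flight_ny_sf_round_trip = 1_984
american_per_year = 11_023
car_lifetime = 126_000

print(f"{training / flight_ny_sf_round_trip:.0f} NY-SF round trips")  # ~316
print(f"{training / american_per_year:.0f} person-years")             # ~57
print(f"{training / car_lifetime:.1f} car lifetimes")                 # ~5.0
```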
|
||||
### How the researchers calculated the CO2 amounts
|
||||
|
||||
The researchers used four models in the NLP field that have been responsible for the biggest leaps in performance. They are Transformer, ELMo, BERT, and GPT-2. They trained all of the models on a single Nvidia Titan X GPU, with the exception of ELMo which was trained on three Nvidia GTX 1080 Ti GPUs. Each model was trained for a maximum of one day.
|
||||
|
||||
**[[Learn Java from beginning concepts to advanced design patterns in this comprehensive 12-part course!][4] ]**
|
||||
|
||||
They then used the number of training hours listed in the model’s original papers to calculate the total energy consumed over the complete training process. That number was converted into pounds of carbon dioxide equivalent based on the average energy mix in the U.S.
|
||||
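In outline, that conversion is straightforward. In the sketch below, the wattage, training hours, and grid-emission factor are illustrative assumptions rather than the paper’s exact inputs:

```
# Rough energy-to-CO2 conversion (illustrative inputs, not the paper's data).
avg_power_draw_w = 1_500       # hypothetical total draw of GPUs plus host
training_hours = 10_000        # hypothetical total training time
lbs_co2_per_kwh = 0.954        # assumed U.S. average grid emission factor

energy_kwh = avg_power_draw_w / 1_000 * training_hours
co2_lbs = energy_kwh * lbs_co2_per_kwh
print(f"{energy_kwh:,.0f} kWh -> {co2_lbs:,.0f} lbs CO2")
```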
|
||||
The big takeaway is that computational costs start out relatively low, but they mushroom when additional tuning steps are used to increase the model’s final accuracy. A tuning process known as neural architecture search ([NAS][5]) is the worst offender because it does so much processing. NAS is an algorithm that searches for the best neural network architecture. It is seriously advanced AI and requires the most processing time and power.
|
||||
|
||||
The researchers suggest it would be beneficial to directly compare different models to perform a cost-benefit (accuracy) analysis.
|
||||
|
||||
“To address this, when proposing a model that is meant to be re-trained for downstream use, such as re-training on a new domain or fine-tuning on a new task, authors should report training time and computational resources required, as well as model sensitivity to hyperparameters. This will enable direct comparison across models, allowing subsequent consumers of these models to accurately assess whether the required computational resources,” the authors wrote.
|
||||
|
||||
They also say researchers who are cost-constrained should pool resources and avoid the cloud, as cloud compute time is more expensive. In one example, they said a GPU server with eight Nvidia 1080 Ti GPUs and supporting hardware is available for approximately $20,000. To develop the sample models used in their study, that hardware would cost $145,000, plus electricity to run the models, about half the estimated cost to use on-demand cloud GPUs.
|
||||
|
||||
“Unlike money spent on cloud compute, however, that invested in centralized resources would continue to pay off as resources are shared across many projects. A government-funded academic compute cloud would provide equitable access to all researchers,” they wrote.
|
||||
|
||||
Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3401919/the-carbon-footprints-of-it-shops-that-train-ai-models-are-huge.html
|
||||
|
||||
作者:[Andy Patrizio][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Andy-Patrizio/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2019/05/ai-vendor-relationship-management_artificial-intelligence_hand-on-virtual-screen-100795246-large.jpg
|
||||
[2]: https://arxiv.org/abs/1906.02243
|
||||
[3]: https://www.networkworld.com/article/3284352/data-center/how-to-plan-a-software-defined-data-center-network.html
|
||||
[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fjava
|
||||
[5]: https://www.oreilly.com/ideas/what-is-neural-architecture-search
|
||||
[6]: https://www.facebook.com/NetworkWorld/
|
||||
[7]: https://www.linkedin.com/company/network-world
|
@ -0,0 +1,95 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Cisco offers cloud-based security for SD-WAN resources)
|
||||
[#]: via: (https://www.networkworld.com/article/3402079/cisco-offers-cloud-based-security-for-sd-wan-resources.html)
|
||||
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
|
||||
|
||||
Cisco offers cloud-based security for SD-WAN resources
|
||||
======
|
||||
Cisco adds support for its cloud-based security gateway Umbrella to SD-WAN software
|
||||
![Thinkstock][1]
|
||||
|
||||
SAN DIEGO— As many companies look to [SD-WAN][2] technology to reduce costs, improve connectivity and streamline branch office access, one of the key requirements will be solid security technologies to protect corporate resources.
|
||||
|
||||
At its Cisco Live customer event here this week, the company took aim at that need by telling customers it added support for its cloud-based security gateway – known as Umbrella – to its SD-WAN software offerings.
|
||||
|
||||
**More about SD-WAN**
|
||||
|
||||
* [How to buy SD-WAN technology: Key questions to consider when selecting a supplier][3]
|
||||
* [How to pick an off-site data-backup method][4]
|
||||
* [SD-Branch: What it is and why you’ll need it][5]
|
||||
* [What are the options for security SD-WAN?][6]
|
||||
|
||||
|
||||
|
||||
At its most basic, SD-WAN lets companies aggregate a variety of network connections – including MPLS, 4G LTE and DSL – into a branch or network-edge location and provides a management software that can turn up new sites, prioritize traffic and set security policies. SD-WAN's driving principle is to simplify the way big companies turn up new links to branch offices, better manage the way those links are utilized – for data, voice or video – and potentially save money in the process.
|
||||
|
||||
According to Cisco, Umbrella can provide the first line of defense against threats on the internet. By analyzing and learning from internet activity patterns, Umbrella automatically uncovers attacker infrastructure and proactively blocks requests to malicious destinations before a connection is even established — without adding latency for users. With Umbrella, customers can stop phishing and malware infections earlier, identify already infected devices faster and prevent data exfiltration, Cisco says.
|
||||
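Mechanically, DNS-layer filtering of this kind comes down to checking every lookup against threat intelligence before a connection is established. The toy sketch below shows the idea with a hard-coded blocklist; it is in no way Umbrella’s actual implementation:

```
# Toy DNS-layer filter (hard-coded blocklist; illustrative only).
import socket

BLOCKLIST = {"malicious.example", "phish.example"}  # stand-in threat intel

def resolve_if_safe(hostname):
    """Refuse to resolve destinations known to be malicious."""
    if hostname in BLOCKLIST:
        raise PermissionError(f"blocked: {hostname}")
    return socket.gethostbyname(hostname)

print(resolve_if_safe("example.com"))    # resolves normally
# resolve_if_safe("malicious.example")   # would raise PermissionError
```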
|
||||
Branch offices and roaming users are more vulnerable to attacks, and attackers are looking to exploit them, said Gee Rittenhouse, senior vice president and general manager of Cisco's Security Business Group. He pointed to Enterprise Strategy Group research that says 68 percent of branch offices and roaming users were the source of compromise in recent attacks. And as organizations move to more direct internet access, this becomes an even greater risk, Rittenhouse said.
|
||||
|
||||
“Scaling security at every location often means more appliances to ship and manage, more policies to separately maintain, which translates into more money and resources needed – but Umbrella offers an alternative to all that,” Rittenhouse said. “Umbrella provides simple deployment and management, and in a single cloud platform, it unifies multiple layers of security, including DNS, secure web gateway, firewall and cloud-access security.”
|
||||
|
||||
“It also acts as your secure onramp to the internet by offering secure internet access and controlled SaaS usage across all locations and roaming users.”
|
||||
|
||||
Basically, users can set up Umbrella support via the SD-WAN dashboard vManage, and the system automatically creates a secure tunnel to the cloud. Once the SD-WAN traffic is pointed at the cloud, firewall and other security policies can be set. Customers can then see traffic and collect information about patterns or set policies and respond to anomalies, Rittenhouse said.
|
||||
|
||||
Analysts said the Umbrella offering is another important security option offered by Cisco for SD-WAN customers.
|
||||
|
||||
“Since it is cloud-based, using Umbrella is a great option for customers with lots of branch or SD-WAN locations who don’t want or need to have a security gateway on premises,” said Rohit Mehra, vice president of Network Infrastructure at IDC. “One of the largest requirements for large customers going forward will be the need for all manner of security technologies for the SD-WAN environment, and Cisco has a big menu of offerings that can address those requirements.”
|
||||
|
||||
IDC says the SD-WAN infrastructure market will hit $4.5 billion by 2022, growing at a more than 40 percent yearly clip between now and then.
|
||||
|
||||
The Umbrella announcement is on top of other recent SD-WAN security enhancements the company has made. In May [Cisco added support for Advanced Malware Protection (AMP) to its million-plus ISR/ASR edge routers][7] in an effort to reinforce branch- and core-network malware protection across the SD-WAN.
|
||||
|
||||
“Together with Cisco Talos [Cisco’s security-intelligence arm], AMP imbues your SD-WAN branch, core and campuses locations with threat intelligence from millions of worldwide users, honeypots, sandboxes and extensive industry partnerships,” Cisco said.
|
||||
|
||||
In total, AMP identifies more than 1.1 million unique malware samples a day and when AMP in Cisco SD-WAN platform spots malicious behavior it automatically blocks it, Cisco said.
|
||||
|
||||
Last year Cisco added its [Viptela SD-WAN technology to the IOS XE][8] version 16.9.1 software that runs its core ISR/ASR routers such as the ISR models 1000, 4000 and ASR 1000, in use by organizations worldwide. Cisco bought Viptela in 2017.
|
||||
|
||||
The release of Cisco IOS XE offered an instant upgrade path for creating cloud-controlled SD-WAN fabrics to connect distributed offices, people, devices and applications operating on the installed base, Cisco said. At the time Cisco said that Cisco SD-WAN on edge routers builds a secure virtual IP fabric by combining routing, segmentation, security, policy and orchestration.
|
||||
|
||||
With the recent release of IOS-XE SD-WAN 16.11, Cisco has brought AMP and other enhancements to its SD-WAN.
|
||||
|
||||
AMP support is added to a menu of security features already included in Cisco's SD-WAN software including support for URL filtering, Snort Intrusion Prevention, the ability to segment users across the WAN and embedded platform security, including the Cisco Trust Anchor module.
|
||||
|
||||
The software also supports SD-WAN Cloud onRamp for CoLocation, which lets customers tie distributed multicloud applications back to a local branch office or local private data center. That way a cloud-to-branch link would be shorter, faster and possibly more secure than tying cloud-based applications directly to the data center.
|
||||
|
||||
Also in May [Cisco and Teridion][9] said they would team to deliver faster enterprise software-defined WAN services. The integration links Cisco Meraki MX Security/SD-WAN appliances and its Auto VPN technology which lets users quickly bring up and configure secure sessions between branches and data centers with Teridion’s cloud-based WAN service. Teridion’s service promises customers better performance and control over traffic running from remote offices over the public internet to the data center.
|
||||
|
||||
Teridion said the Meraki integration creates an IPSec connection from the Cisco Meraki MX to the Teridion edge. Customers create locations in the Teridion portal and apply the preconfigured Meraki template to them, or just upload a csv file if they have a lot of locations. Then, from each Meraki MX, they can create a third-party IPSec tunnel to the Teridion edge IP addresses that are generated as part of the Teridion configuration, the company stated.
|
||||
|
||||
The combined Cisco Meraki and Teridion offering brings SD-WAN and security capabilities at the WAN edge that are tightly integrated with a WAN service delivered over cost-effective broadband or dedicated Internet access. Meraki’s MX family supports everything from SD-WAN and [Wi-Fi][10] features to next-generation [firewall][11] and intrusion prevention in a single package.
|
||||
|
||||
Join the Network World communities on [Facebook][12] and [LinkedIn][13] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3402079/cisco-offers-cloud-based-security-for-sd-wan-resources.html
|
||||
|
||||
作者:[Michael Cooney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Michael-Cooney/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.techhive.com/images/article/2015/10/cloud-security-ts-100622309-large.jpg
|
||||
[2]: https://www.networkworld.com/article/3209131/what-sdn-is-and-where-its-going.html
|
||||
[3]: https://www.networkworld.com/article/3323407/sd-wan/how-to-buy-sd-wan-technology-key-questions-to-consider-when-selecting-a-supplier.html
|
||||
[4]: https://www.networkworld.com/article/3328488/backup-systems-and-services/how-to-pick-an-off-site-data-backup-method.html
|
||||
[5]: https://www.networkworld.com/article/3250664/lan-wan/sd-branch-what-it-is-and-why-youll-need-it.html
|
||||
[6]: https://www.networkworld.com/article/3285728/sd-wan/what-are-the-options-for-securing-sd-wan.html?nsdr=true
|
||||
[7]: https://www.networkworld.com/article/3394597/cisco-adds-amp-to-sd-wan-for-israsr-routers.html
|
||||
[8]: https://www.networkworld.com/article/3296007/cisco-upgrade-enables-sd-wan-in-1m-israsr-routers.html
|
||||
[9]: https://www.networkworld.com/article/3396628/cisco-ties-its-securitysd-wan-gear-with-teridions-cloud-wan-service.html
|
||||
[10]: https://www.networkworld.com/article/3318119/what-to-expect-from-wi-fi-6-in-2019.html
|
||||
[11]: https://www.networkworld.com/article/3230457/what-is-a-firewall-perimeter-stateful-inspection-next-generation.html
|
||||
[12]: https://www.facebook.com/NetworkWorld/
|
||||
[13]: https://www.linkedin.com/company/network-world
|
@ -0,0 +1,72 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Dell and Cisco extend VxBlock integration with new features)
|
||||
[#]: via: (https://www.networkworld.com/article/3402036/dell-and-cisco-extend-vxblock-integration-with-new-features.html)
|
||||
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
|
||||
|
||||
Dell and Cisco extend VxBlock integration with new features
|
||||
======
|
||||
Dell EMC and Cisco took another step in their alliance, announcing plans to expand VxBlock 1000 integration across servers, networking, storage, and data protection.
|
||||
![Dell EMC][1]
|
||||
|
||||
Just two months ago [Dell EMC and Cisco renewed their converged infrastructure][2] vows, and now the two have taken another step in the alliance. At this year’s [Cisco Live][3] event, taking place in San Diego, the two announced plans to expand VxBlock 1000 integration across servers, networking, storage, and data protection.
|
||||
|
||||
This is done through support of NVMe over Fabrics (NVMe-oF), which allows enterprise SSDs to talk to each other directly through a high-speed fabric. NVMe is an important advance because SATA and PCI Express SSDs could not talk directly to other drives until NVMe came along.
|
||||
|
||||
To leverage NVMe-oF to its fullest extent, Dell EMC has unveiled new integrated Cisco compute (UCS) and storage (MDS) 32G options, extending PowerMax capabilities to deliver NVMe performance across the VxBlock stack.
|
||||
|
||||
**More news from Cisco Live 2019:**
|
||||
|
||||
* [Cisco offers cloud-based security for SD-WAN resources][4]
|
||||
* [Cisco software to make networks smarter, safer, more manageable][5]
|
||||
* [Cisco launches a developer-community cert program][6]
|
||||
|
||||
|
||||
|
||||
Dell EMC said this will enhance the architecture, high-performance consistency, availability, and scalability of VxBlock, enabling its customers to run high-performance, end-to-end mission-critical workloads with microsecond responses.
|
||||
|
||||
These new compute and storage options will be available to order sometime later this month.
|
||||
|
||||
### Other VxBlock news from Dell EMC
|
||||
|
||||
Dell EMC also announced it is extending its factory-integrated on-premises protection solutions for VxBlock to hybrid and multi-cloud environments, such as Amazon Web Services (AWS). This update will help protect VMware workloads and data via the company’s Data Domain Virtual Edition and Cloud Disaster Recovery software options. It will be available in July.
|
||||
|
||||
The company also plans to release VxBlock Central 2.0 software next month. VxBlock Central is designed to help customers simplify CI administration through converged awareness, automation, and analytics.
|
||||
|
||||
New to version 2.0 is modular licensing that matches workflow automation, advanced analytics, and life-cycle management/upgrade options to your needs.
|
||||
|
||||
VxBlock Central 2.0 has a variety of license features, including the following:
|
||||
|
||||
* **Base** – Free with purchase of a VxBlock, the base license allows you to manage your system and improve compliance with inventory reporting and alerting.
* **Workflow Automation** – Provision infrastructure on-demand using engineered workflows through vRealize Orchestrator. New workflows available with this package include Cisco UCS server expansion with Unity and XtremIO storage arrays.
* **Advanced Analytics** – View capacity and KPIs to discover deeper actionable insights through vRealize Operations.
* **Lifecycle Management** (new, available later in 2019) – Apply “guided path” software upgrades to optimize system performance.
|
||||
|
||||
* Lifecycle Management includes a new multi-tenant, cloud-based database based on Cloud IQ that will collect and store the CI component inventory structured by the customer.
* This extends the value and ease of use of the cloud-based analytics monitoring that Cloud IQ already provides for individual Dell EMC storage arrays.
|
||||
|
||||
|
||||
|
||||
Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3402036/dell-and-cisco-extend-vxblock-integration-with-new-features.html
|
||||
|
||||
作者:[Andy Patrizio][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Andy-Patrizio/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2019/04/dell-emc-vxblock-1000-100794721-large.jpg
|
||||
[2]: https://www.networkworld.com/article/3391071/dell-emc-and-cisco-renew-converged-infrastructure-alliance.html
|
||||
[3]: https://www.ciscolive.com/global/
|
||||
[4]: https://www.networkworld.com/article/3402079/cisco-offers-cloud-based-security-for-sd-wan-resources.html
|
||||
[5]: https://www.networkworld.com/article/3401523/cisco-software-to-make-networks-smarter-safer-more-manageable.html
|
||||
[6]: https://www.networkworld.com/article/3401524/cisco-launches-a-developer-community-cert-program.html
|
||||
[7]: https://www.facebook.com/NetworkWorld/
|
||||
[8]: https://www.linkedin.com/company/network-world
|
@ -0,0 +1,95 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (IoT security vs. privacy: Which is a bigger issue?)
|
||||
[#]: via: (https://www.networkworld.com/article/3401522/iot-security-vs-privacy-which-is-a-bigger-issue.html)
|
||||
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
|
||||
|
||||
IoT security vs. privacy: Which is a bigger issue?
|
||||
======
|
||||
When it comes to the internet of things (IoT), security has long been a key concern. But privacy issues could be an even bigger threat.
|
||||
![Ring][1]
|
||||
|
||||
If you follow the news surrounding the internet of things (IoT), you know that security issues have long been a key concern for IoT consumers, enterprises, and vendors. Those issues are very real, but I’m becoming increasingly convinced that related but fundamentally different _privacy_ vulnerabilities may well be an even bigger threat to the success of the IoT.
|
||||
|
||||
In June alone, we’ve seen a flood of IoT privacy issues inundate the news cycle, and observers are increasingly sounding the alarm that IoT users should be paying attention to what happens to the data collected by IoT devices.
|
||||
|
||||
**[ Also read:[It’s time for the IoT to 'optimize for trust'][2] and [A corporate guide to addressing IoT security][2] ]**
|
||||
|
||||
Predictably, most of the teeth-gnashing has come on the consumer side, but that doesn’t mean enterprise users are immune to the issue. On the one hand, just like consumers, companies are vulnerable to having their proprietary information improperly shared and misused. More immediately, companies may face backlash from their own customers if they are seen as not properly guarding the data they collect via the IoT. Too often, in fact, enterprises shoot themselves in the foot on privacy issues, with practices that range from tone-deaf to exploitative to downright illegal, leading almost [two-thirds (63%) of consumers to describe IoT data collection as “creepy,”][3] while more than half (53%) “distrust connected devices to protect their privacy and handle information in a responsible manner.”
|
||||
|
||||
### Ring becoming the poster child for IoT privacy issues
|
||||
|
||||
As a case in point, let’s look at the case of [Ring, the IoT doorbell company now owned by Amazon][4]. Ring is [reportedly working with police departments to build a video surveillance network in residential neighborhoods][5]. Police in more than 50 cities and towns across the country are apparently offering free or discounted Ring doorbells, and sometimes requiring the recipients to share footage for use in investigations. (While [Ring touts the security benefits][6] of working with law enforcement, it has asked police departments to end the practice of _requiring_ users to hand over footage, as it appears to violate the devices’ terms of service.)
|
||||
|
||||
Many privacy advocates are troubled by this degree of cooperation between police and Ring, but that’s only part of the problem. Last year, for example, [Ring workers in Ukraine reportedly watched customer feeds][7]. Amazingly, though, even that only scratches the surface of the privacy flaps surrounding Ring.
|
||||
|
||||
### Guilty by video?
|
||||
|
||||
According to [Motherboard][8], “Ring is using video captured by its doorbell cameras in Facebook advertisements that ask users to identify and call the cops on a woman whom local police say is a suspected thief.” While the police are apparently appreciative of the “additional eyes that may see this woman and recognize her,” the ad calls the woman a thief even though she has not been charged with a crime, much less convicted!
|
||||
|
||||
Ring may be today’s poster child for IoT privacy issues, but IoT privacy complaints are widespread. In many cases, it comes down to what IoT users—or others nearby—are getting in return for giving up their privacy. According to the [Guardian][9], for example, Google’s Sidewalk Labs smart city project is little more than “surveillance capitalism.” And while car owners may get a discount on auto insurance in return for sharing their driving data, that relationship is hardly set in stone. It may not be long before drivers have to give up their data just to get insurance at all.
|
||||
|
||||
**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][10] ]**
|
||||
|
||||
And as the recent [data breach at the U.S. Customs and Border Protection][11] once again demonstrates, private data is “[a genie without a bottle][12].” No matter what legal or technical protections are put in place, the data may always be revealed or used in unforeseen ways. Heck, when you put it all together, it’s enough to make you wonder [whether doorbells really need to be smart][13] at all.
|
||||
|
||||
**Read more about IoT:**
|
||||
|
||||
* [Google’s biggest, craziest ‘moonshot’ yet][14]
|
||||
* [What is the IoT? How the internet of things works][15]
|
||||
* [What is edge computing and how it’s changing the network][16]
|
||||
* [Most powerful internet of things companies][17]
|
||||
* [10 Hot IoT startups to watch][18]
|
||||
* [The 6 ways to make money in IoT][19]
|
||||
* [What is digital twin technology? [and why it matters]][20]
|
||||
* [Blockchain, service-centric networking key to IoT success][21]
|
||||
* [Getting grounded in IoT networking and security][22]
|
||||
* [Building IoT-ready networks must become a priority][23]
|
||||
* [What is the Industrial IoT? [And why the stakes are so high]][24]
|
||||
|
||||
|
||||
|
||||
Join the Network World communities on [Facebook][25] and [LinkedIn][26] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3401522/iot-security-vs-privacy-which-is-a-bigger-issue.html
|
||||
|
||||
作者:[Fredric Paul][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Fredric-Paul/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2019/04/ringvideodoorbellpro-100794084-large.jpg
|
||||
[2]: https://www.networkworld.com/article/3269165/internet-of-things/a-corporate-guide-to-addressing-iot-security-concerns.html
|
||||
[3]: https://www.cpomagazine.com/data-privacy/consumers-still-concerned-about-iot-security-and-privacy-issues/
|
||||
[4]: https://www.cnbc.com/2018/02/27/amazon-buys-ring-a-former-shark-tank-reject.html
|
||||
[5]: https://www.cnet.com/features/amazons-helping-police-build-a-surveillance-network-with-ring-doorbells/
|
||||
[6]: https://blog.ring.com/2019/02/14/how-rings-neighbors-creates-safer-more-connected-communities/
|
||||
[7]: https://www.theinformation.com/go/b7668a689a
|
||||
[8]: https://www.vice.com/en_us/article/pajm5z/amazon-home-surveillance-company-ring-law-enforcement-advertisements
|
||||
[9]: https://www.theguardian.com/cities/2019/jun/06/toronto-smart-city-google-project-privacy-concerns
|
||||
[10]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
|
||||
[11]: https://www.washingtonpost.com/technology/2019/06/10/us-customs-border-protection-says-photos-travelers-into-out-country-were-recently-taken-data-breach/?utm_term=.0f3a38aa40ca
|
||||
[12]: https://smartbear.com/blog/test-and-monitor/data-scientists-are-sexy-and-7-more-surprises-from/
|
||||
[13]: https://slate.com/tag/should-this-thing-be-smart
|
||||
[14]: https://www.networkworld.com/article/3058036/google-s-biggest-craziest-moonshot-yet.html
|
||||
[15]: https://www.networkworld.com/article/3207535/internet-of-things/what-is-the-iot-how-the-internet-of-things-works.html
|
||||
[16]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
|
||||
[17]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
|
||||
[18]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
|
||||
[19]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
|
||||
[20]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
|
||||
[21]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
|
||||
[22]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
|
||||
[23]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
|
||||
[24]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
|
||||
[25]: https://www.facebook.com/NetworkWorld/
|
||||
[26]: https://www.linkedin.com/company/network-world
|
@ -0,0 +1,121 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Software Defined Perimeter (SDP): Creating a new network perimeter)
|
||||
[#]: via: (https://www.networkworld.com/article/3402258/software-defined-perimeter-sdp-creating-a-new-network-perimeter.html)
|
||||
[#]: author: (Matt Conran https://www.networkworld.com/author/Matt-Conran/)
|
||||
|
||||
Software Defined Perimeter (SDP): Creating a new network perimeter
|
||||
======
|
||||
Considering the way networks work today and the changes in traffic patterns, both internal and to the cloud, the fixed perimeter’s effectiveness is limited.
|
||||
![monsitj / Getty Images][1]
|
||||
|
||||
Networks were initially designed to create internal segments that were separated from the external world by using a fixed perimeter. The internal network was deemed trustworthy, whereas the external was considered hostile. However, this is still the foundation for most networking professionals even though a lot has changed since the inception of the design.
|
||||
|
||||
More often than not, the fixed perimeter consists of a number of network and security appliances, creating a service-chained stack and resulting in appliance sprawl. Typically, the appliances that a user may need to pass to get to the internal LAN may vary. But generally, the stack would consist of global load balancers, external firewall, DDoS appliance, VPN concentrator, internal firewall and eventually LAN segments.
|
||||
|
||||
The perimeter approach based its design on visibility and accessibility. If an entity external to the network can’t see an internal resource, then access cannot be gained. As a result, external entities were blocked from coming in, yet internal entities were permitted to passage out. However, it worked only to a certain degree. Realistically, the fixed network perimeter will always be breachable; it's just a matter of time. Someone with enough skill will eventually get through.
|
||||
|
||||
**[ Related:[MPLS explained – What you need to know about multi-protocol label switching][2]**
|
||||
|
||||
### Environmental changes – the cloud and mobile workforce
|
||||
|
||||
Considering the way networks work today and the changes in traffic patterns, both internal and to the cloud, the fixed perimeter’s effectiveness is limited. Nowadays, we have a very fluid network perimeter with many points of entry.
|
||||
|
||||
Imagine a castle with a portcullis that was used to gain access. Entering through the portcullis was easy: you just needed to get past one guard. There was only one way in and one way out. But today, in this digital world, we have so many small doors and ways to enter, all of which need to be individually protected.
|
||||
|
||||
This boils down to the introduction of cloud-based application services and changing the location of the perimeter. Therefore, the existing networking equipment used for the perimeter is topologically ill-located. Nowadays, everything that is important is outside the perimeter, such as, remote access workers, SaaS, IaaS and PaaS-based applications.
|
||||
|
||||
Users require access to the resources in various cloud services regardless of where the resources are located, resulting in complex-to-control multi-cloud environments. Objectively, the users do not and should not care where the applications are located. They just require access to the application. Also, the increased use of mobile workforce that demands anytime and anywhere access from a variety of devices has challenged the enterprises to support this dynamic workforce.
|
||||
|
||||
There is also an increasing number of devices, such as, BYOD, on-site contractors, and partners that will continue to grow internal to the network. This ultimately leads to a world of over-connected networks.
|
||||
|
||||
### Over-connected networks
|
||||
|
||||
Over-connected networks result in complex configurations of network appliances, which in turn produce large and complex policies without any context.
|
||||
|
||||
They provide a level of coarse-grained access to a variety of services where the IP address does not correspond to the actual user. Traditional appliances that use static configurations to limit the incoming and outgoing traffic are commonly based on information in the IP packet and the port number.
|
||||
|
||||
Essentially, there is no notion of policy and explanation of why a given source IP address is on the list. This approach fails to take into consideration any notion of trust and dynamically adjust access in relation to the device, users and application request events.
|
||||
|
||||
### Problems with IP addresses
|
||||
|
||||
Back in the early 1990s, RFC 1597 declared three IP ranges reserved for private use: 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16. If an end host was configured with one of these addresses, it was considered more secure. However, this assumption of trust was shattered with the passage of time and it still haunts us today.
|
||||
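Those reserved ranges (carried forward by RFC 1918) are easy to check programmatically; here is a quick sketch using Python’s standard ipaddress module:

```
# Check whether addresses fall in the reserved private ranges.
import ipaddress

for addr in ["10.1.2.3", "172.16.0.7", "192.168.1.1", "8.8.8.8"]:
    ip = ipaddress.ip_address(addr)
    print(addr, "private" if ip.is_private else "public")
```

The check is trivial, which is rather the point: a private address says something about routability, not about trustworthiness.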
|
||||
Network Address Translation (NAT) also changed things to a great extent. NAT allowed internal trusted hosts to communicate directly with the external untrusted hosts. However, since Transmission Control Protocol (TCP) is bidirectional, it allows the data to be injected by the external hosts while connecting back to the internal hosts.
|
||||
|
||||
Also, there is no contextual information regarding the IP addresses as the sole purpose revolved around connectivity. If you have the IP address of someone, you can connect to them. The authentication was handled higher up in the stack.
|
||||
|
||||
Not only do user’s IP addresses change regularly, but there’s also not a one-to-one correspondence between the users and IP addresses. Anyone can communicate from any IP address they please and also insert themselves between you and the trusted resource.
|
||||
|
||||
Have you ever heard of the 20-year-old computer that responds to an internet control message protocol (ICMP) request, yet no one knows where it is? That would not exist on a zero trust network, as the network is dark until the administrator turns the lights on with a whitelist policy rule set. This is contrary to the legacy blacklist policy rule set. You can find more information on zero trust in my course: [Zero Trust Networking: The Big Picture][3].
|
||||
|
||||
Therefore, we can’t just rely on IP addresses and expect them to do much more than connect. As a result, we have to move away from IP addresses and network location as the proxy for access trust. The network location can no longer be the driver of network access levels. It is not fully equipped to decide the trust of a device, user or application.
|
||||
|
||||
### Visibility – a major gap
|
||||
|
||||
When we analyze networking and its flaws, visibility is a major gap in today’s hybrid environments. By and large, enterprise networks are complex beasts, and more often than not networking pros do not have accurate data or insights into who or what is accessing the network resource.
|
||||
|
||||
IT does not have the visibility in place to detect, for example, insecure devices, unauthorized users and potentially harmful connections that could propagate malware or perform data exfiltration.
|
||||
|
||||
Also, once you know how network elements connect, how do you ensure that they don’t reconnect through a broader definition of connectivity? For this, you need contextual visibility. You need full visibility into the network to see who, what, when, and how they are connecting with the device.
|
||||
|
||||
### What’s the workaround?
|
||||
|
||||
A new approach is needed that enables the application owners to protect the infrastructure located in a public or private cloud and on-premise data center. This new network architecture is known as [software-defined perimeter][4] (SDP). Back in 2013, Cloud Security Alliance (CSA) launched the SDP initiative, a project designed to develop the architecture for creating more robust networks.
|
||||
|
||||
The principles behind SDPs are not entirely new. Organizations within the DoD and Intelligence Communities (IC) have implemented a similar network architecture that is based on authentication and authorization prior to network access.
|
||||
|
||||
Typically, every internal resource is hidden behind an appliance. And a user must authenticate before visibility of the authorized services is made available and access is granted.
|
||||
|
||||
### Applying the zero trust framework
|
||||
|
||||
SDP is an extension to [zero trust][5] which removes the implicit trust from the network. The concept of SDP started with Google’s BeyondCorp, which is the general direction that the industry is heading to right now.
|
||||
|
||||
Google’s BeyondCorp puts forward the idea that the corporate network does not have any meaning. The trust regarding accessing an application is set by a static network perimeter containing a central appliance. This appliance permits the inbound and outbound access based on a very coarse policy.
|
||||
|
||||
However, access to the application should be based on other parameters such as who the user is, the judgment of the security stance of the device, followed by some continuous assessment of the session. Rationally, only then should access be permitted.
|
||||
|
||||
Let’s face it, the assumption that internal traffic can be trusted is flawed. Zero trust assumes that all hosts internal to the network are internet-facing, and thereby hostile.
|
||||
|
||||
### What is software-defined perimeter (SDP)?
|
||||
|
||||
The SDP aims to deploy perimeter functionality for dynamically provisioned perimeters meant for clouds, hybrid environments, and on-premise data center infrastructures. There is often a dynamic tunnel that automatically gets created during the session. That is a one-to-one mapping between the requesting entity and the trusted resource. The important point to note here is that perimeters are formed not solely to obey a fixed location already designed by the network team.
|
||||
|
||||
SDP relies on two major pillars and these are the authentication and authorization stages. SDPs require endpoints to authenticate and be authorized first before obtaining network access to the protected entities. Then, encrypted connections are created in real-time between the requesting systems and application infrastructure.
|
||||
|
||||
Authenticating and authorizing the users and their devices before even allowing a single packet to reach the target service, enforces what's known as least privilege at the network layer. Essentially, the concept of least privilege is for an entity to be granted only the minimum privileges that it needs to get its work done. Within a zero trust network, privilege is more dynamic than it would be in traditional networks since it uses many different attributes of activity to determine the trust score.
|
||||
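To make that flow concrete, here is a skeletal sketch of an authenticate-then-authorize gate, with a toy trust score standing in for the “many different attributes of activity”. The attribute names, weights, and threshold are all invented for illustration; a real SDP controller is far more involved:

```
# Skeletal SDP-style gate: authenticate, score trust, then authorize.
# Attribute names, weights, and threshold are invented for illustration.

TRUST_THRESHOLD = 0.7

def trust_score(device):
    """Sum the weights of the posture checks the device passes."""
    checks = {
        "credentials_valid": 0.4,
        "disk_encrypted": 0.2,
        "os_patched": 0.2,
        "expected_location": 0.2,
    }
    return sum(w for attr, w in checks.items() if device.get(attr))

def request_access(device, resource):
    if not device.get("credentials_valid"):
        return "denied: authentication failed"   # packet never reaches service
    if trust_score(device) < TRUST_THRESHOLD:
        return "denied: insufficient trust score"
    return f"encrypted tunnel established to {resource}"  # one-to-one mapping

laptop = {"credentials_valid": True, "disk_encrypted": True,
          "os_patched": True, "expected_location": False}
print(request_access(laptop, "payroll-app"))  # score 0.8 -> tunnel established
```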
|
||||
### The dark network
|
||||
|
||||
Connectivity is based on a need-to-know model. Under this model, no DNS information, internal IP addresses or visible ports of internal network infrastructure are transmitted. This is the reason why SDP assets are considered as “dark”. As a result, SDP isolates any concerns about the network and application. The applications and users are considered abstract, be it on-premise or in the cloud, which becomes irrelevant to the assigned policy.
|
||||
|
||||
Access is granted directly between the users and their devices to the application and resource, regardless of the underlying network infrastructure. There simply is no concept of inside and outside of the network. This ultimately removes the network location point as a position of advantage and also eliminates the excessive implicit trust that IP addresses offer.

**This article is published as part of the IDG Contributor Network.[Want to Join?][6]**

Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3402258/software-defined-perimeter-sdp-creating-a-new-network-perimeter.html

作者:[Matt Conran][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Matt-Conran/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/03/sdn_software-defined-network_architecture-100791938-large.jpg
[2]: https://www.networkworld.com/article/2297171/sd-wan/network-security-mpls-explained.html
[3]: http://pluralsight.com/courses/zero-trust-networking-big-picture
[4]: https://network-insight.net/2018/09/software-defined-perimeter-zero-trust/
[5]: https://network-insight.net/2018/10/zero-trust-networking-ztn-want-ghosted/
[6]: /contributor-network/signup.html
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world
sources/talk/20190612 When to use 5G, when to use Wi-Fi 6.md
@ -0,0 +1,83 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (When to use 5G, when to use Wi-Fi 6)
[#]: via: (https://www.networkworld.com/article/3402316/when-to-use-5g-when-to-use-wi-fi-6.html)
[#]: author: (Lee Doyle)

When to use 5G, when to use Wi-Fi 6
======

5G is a cellular service, and Wi-Fi 6 is a short-range wireless access technology; each has attributes that make it useful in specific enterprise roles.

![Thinkstock][1]

We have seen hype about whether [5G][2] cellular or [Wi-Fi 6][3] will win in the enterprise, but the reality is that the two are largely complementary, with an overlap for some use cases, which will make for an interesting competitive environment through the early 2020s.

### The potential for 5G in enterprises

The promise of 5G for enterprise users is higher-speed connectivity with lower latency. Cellular technology uses licensed spectrum, which largely eliminates the potential interference that may occur with unlicensed Wi-Fi spectrum. Like current 4G LTE technologies, 5G can be supplied by cellular wireless carriers or built as a private network.

The architecture for 5G requires many more radio access points and can suffer from poor or no connectivity indoors. So, the typical organization needs to assess its [current 4G and potential 5G service][4] for its PCs, routers and other devices. Deploying indoor microcells, repeaters and distributed antennas can help solve indoor 5G service issues. As with 4G, the best enterprise 5G use case is for truly mobile connectivity, such as public safety vehicles and non-carpeted environments like mining, oil and gas extraction, transportation, farming and some manufacturing.

In addition to broad mobility, 5G offers advantages in terms of authentication while roaming and speed of deployment, as might be needed to provide WAN connectivity to a pop-up office or retail site. 5G will have the capacity to offload traffic in cases of data congestion, such as live video. As 5G standards mature, the technology will improve its options for low-power IoT connectivity.

5G will gradually roll out over the next four to five years, starting in large cities and specific geographies; 4G technology will remain prevalent for a number of years. Enterprise users will need new devices, dongles and routers to connect to 5G services. For example, Apple iPhones are not expected to support 5G until 2020, and IoT devices will need specific cellular compatibility to connect to 5G.

Doyle Research expects the 1Gbps and higher bandwidth promised by 5G to have a significant impact on the SD-WAN market. 4G LTE already enables cellular services to become a primary WAN link. 5G is likely to be cost-competitive with or cheaper than many wired WAN options, such as MPLS or the internet. 5G gives enterprise WAN managers more options to provide increased bandwidth to their branch sites and remote users – potentially displacing MPLS over time.

### The potential for Wi-Fi 6 in enterprises

Wi-Fi is nearly ubiquitous for connecting mobile laptops, tablets and other devices to enterprise networks. Wi-Fi 6 (802.11ax) is the latest version of Wi-Fi and brings the promise of increased speed, low latency, improved aggregate bandwidth and advanced traffic management. While it has some similarities with 5G (both are based on orthogonal frequency division multiple access), Wi-Fi 6 is less prone to interference, requires less power (which prolongs device battery life) and has improved spectral efficiency.

**[[Take this mobile device management course from PluralSight and learn how to secure devices in your company without degrading the user experience.][5] ]**

As is typical for Wi-Fi, early [vendor-specific versions of Wi-Fi 6][6] are currently available from many manufacturers. The Wi-Fi Alliance plans to certify Wi-Fi 6-standard gear in 2020. Most enterprises will upgrade to Wi-Fi 6 along standard access-point life cycles of three years or so, unless they have specific performance/latency requirements that prompt an earlier upgrade.

Wi-Fi access points continue to be subject to interference, and it can be challenging to design and site APs to provide appropriate coverage. Enterprise LAN managers will continue to need vendor-supplied tools and partners to configure optimal Wi-Fi coverage for their organizations. Wi-Fi 6 solutions must be integrated with wired campus infrastructure. Wi-Fi suppliers need to do a better job of providing unified network management across wireless and wired solutions in the enterprise.

### Need for wired backhaul

For both technologies, wireless is combined with wired-network infrastructure to deliver high-speed communications end-to-end. In the enterprise, Wi-Fi is typically paired with wired Ethernet switches for campus and larger branches. Some devices are connected via cable to the switch, others via Wi-Fi – and laptops may use both methods. Wi-Fi access points are connected via Ethernet inside the enterprise and to the WAN or internet by fiber connections.

The architecture for 5G makes extensive use of fiber optics to connect the distributed radio access network back to the core of the 5G network. Fiber is typically required to provide the high bandwidth needed to connect 5G endpoints to SaaS-based applications, and to provide live video and high-speed internet access. Private 5G networks will also have to meet high-speed wired-connectivity requirements.

### Handoff issues

Enterprise IT managers need to be concerned with handoff challenges as phones switch between 5G and Wi-Fi 6. These issues can affect performance and user satisfaction. Several groups are working toward standards to promote better interoperability between Wi-Fi 6 and 5G. As the architectures of Wi-Fi 6 align with 5G, the experience of moving between cellular and Wi-Fi networks should become more seamless.

### 5G vs Wi-Fi 6 depends on locations, applications and devices

Wi-Fi 6 and 5G are competitive with each other in specific enterprise situations that depend on location, application and device type. IT managers should carefully evaluate their current and emerging connectivity requirements. Wi-Fi will continue to dominate indoor environments, and cellular wins for broad outdoor coverage.

Some of the overlap cases occur in stadiums, hospitality and other large event spaces with many users competing for bandwidth. Government applications, including aspects of smart cities, can be applicable to both Wi-Fi and cellular. Health care facilities have many distributed medical devices and users that need connectivity. Large distributed manufacturing environments share similar characteristics. The emerging IoT deployments are perhaps the most interesting “competitive” environment, with many overlapping use cases.

### Recommendations for IT Leaders

While the wireless technologies enabling them are converging, Wi-Fi 6 and 5G are fundamentally distinct networks, both of which have their role in enterprise connectivity. Enterprise IT leaders should focus on how Wi-Fi and cellular can complement each other, with Wi-Fi continuing as the in-building technology to connect PCs and laptops, offload phone and tablet data, and handle some IoT connectivity.

4G LTE moving to 5G will remain the truly mobile technology for phone and tablet connectivity, an option (via dongle) for PC connections, and increasingly popular for connecting some IoT devices. 5G WAN links will increasingly become standard as a backup for improved SD-WAN reliability and as primary links for remote offices.

Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3402316/when-to-use-5g-when-to-use-wi-fi-6.html

作者:[Lee Doyle][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2017/07/wi-fi_wireless_communication_network_abstract_thinkstock_610127984_1200x800-100730107-large.jpg
[2]: https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html
[3]: https://www.networkworld.com/article/3215907/why-80211ax-is-the-next-big-thing-in-wi-fi.html
[4]: https://www.networkworld.com/article/3330603/5g-versus-4g-how-speed-latency-and-application-support-differ.html
[5]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fmobile-device-management-big-picture
[6]: https://www.networkworld.com/article/3309439/80211ax-preview-access-points-and-routers-that-support-the-wi-fi-6-protocol-on-tap.html
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world
@ -0,0 +1,59 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Data centers should sell spare UPS capacity to the grid)
[#]: via: (https://www.networkworld.com/article/3402039/data-centers-should-sell-spare-ups-capacity-to-the-grid.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)

Data centers should sell spare UPS capacity to the grid
======

Distributed Energy is gaining traction, providing an opportunity for data centers to sell the excess power stored in their UPS batteries back to the grid.

![Getty Images][1]

The energy storage capacity in uninterruptible power supply (UPS) batteries, often languishing dormant in data centers, could provide new revenue streams for those data centers, says Eaton, a major electrical power management company.

Excess grid-generated power, created during times of low demand, should be stored on the now-proliferating lithium backup power systems strewn worldwide in data centers, Eaton says. Then, using an algorithm tied to grid demand, electricity should be withdrawn as necessary for grid use. It would then be slid back onto the backup batteries when not needed.

**[ Read also:[How server disaggregation can boost data center efficiency][2] | Get regularly scheduled insights: [Sign up for Network World newsletters][3] ]**

The concept is called Distributed Energy and has been gaining traction in part because electrical generation is changing—emerging green power, such as wind and solar, now used at the grid level has considerations that differ from the now-retiring, fossil-fuel power generation. You can generate solar only in daylight, yet much demand takes place on dark evenings, for example.

Coal, gas, and oil deliveries have always been, to a great extent, pre-planned, just-in-time, and used for electrical generation in real time. Nowadays, though, fluctuations between supply, storage, and demand are kicking in. Electricity storage on the grid is required.

Eaton says that by piggybacking on existing power banks, electricity distribution could be evened out better. The utilities would deliver power more efficiently, despite the peaks and troughs in demand—with the data center UPS, in effect, acting like a quasi-grid-power storage battery bank, or virtual power plant.

The objective of this UPS use case, called EnergyAware, is to regulate frequency in the grid. That’s related to the tolerances needed to make the grid work—the cycles per second, or hertz, inherent in electrical current can’t deviate too much. Abnormalities happen if there’s a sudden spike in demand but no power on hand to supply the surge.

### How the Distributed Energy concept works

The distributed energy resource (DER), which can be added to any existing lithium-ion battery bank in any building, allows energy to be consumed or distributed based on a frequency-regulation grid-demand algorithm. It charges or discharges the backup battery, connected to the grid, thus balancing the grid frequency.
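
A toy model helps make that charge/discharge decision concrete. The sketch below is illustrative only; the nominal frequency, dead-band, and burst size are assumptions, not Eaton's actual EnergyAware parameters.

```
# Illustrative frequency-regulation loop for a grid-tied UPS battery.
# Assumed values: 60 Hz nominal grid (US), a +/-0.05 Hz dead-band, and
# small "micro-burst" power steps; a real system tunes all of these.

NOMINAL_HZ = 60.0      # assumed US grid nominal frequency
DEAD_BAND_HZ = 0.05    # assumed tolerance before the UPS reacts

def regulation_action(grid_hz, battery_pct):
    """Decide whether the UPS battery should inject or absorb power."""
    if grid_hz < NOMINAL_HZ - DEAD_BAND_HZ and battery_pct > 20.0:
        return "discharge"   # demand outpaces supply: support the grid
    if grid_hz > NOMINAL_HZ + DEAD_BAND_HZ and battery_pct < 95.0:
        return "charge"      # surplus generation: soak it up
    return "hold"            # in tolerance, or battery reserved for backup

for hz, pct in [(59.93, 80.0), (60.07, 60.0), (60.01, 60.0), (59.90, 15.0)]:
    print(f"{hz} Hz, {pct}% charged -> {regulation_action(hz, pct)}")
```

The guard on the battery's state of charge matters: the UPS's first job is still ride-through for the data center, so only the headroom above that reserve is traded with the grid.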

Often, not much power will need to be removed, just “micro-bursts of energy,” explains Sean James, director of Energy Research at Microsoft, in an Eaton-published promotional video. The Microsoft Innovation Center in Virginia has been working with Eaton on the project. Those bursts are enough to get the frequency tolerances back on track, but the UPS still functions as designed.

Eaton says data centers should start participating in energy markets. That could mean bidding, as a producer of power, to those who need to buy it—the electricity market, also known as the grid. Data centers could conceivably even switch on generators to operate the data halls if the price for their battery-stored power were particularly lucrative at certain times.

“A data center in the future wouldn’t just be a huge load on the grid,” James says. “In the future, you don’t have a data center or a power plant. It’s something in the middle. A data plant,” he says on the Eaton [website][4].

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3402039/data-centers-should-sell-spare-ups-capacity-to-the-grid.html

作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/10/business_continuity_server-100777720-large.jpg
[2]: https://www.networkworld.com/article/3266624/how-server-disaggregation-could-make-cloud-datacenters-more-efficient.html
[3]: https://www.networkworld.com/newsletters/signup.html
[4]: https://www.eaton.com/us/en-us/products/backup-power-ups-surge-it-power-distribution/backup-power-ups/dual-purpose-ups-technology.html
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world
@ -0,0 +1,68 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Oracle updates Exadata at long last with AI and machine learning abilities)
[#]: via: (https://www.networkworld.com/article/3402559/oracle-updates-exadata-at-long-last-with-ai-and-machine-learning-abilities.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Oracle updates Exadata at long last with AI and machine learning abilities
======

Oracle updates the Oracle Exadata Database Machine X8 server line to include artificial intelligence (AI) and machine learning capabilities, plus support for hybrid cloud.

![Magdalena Petrova][1]

After a rather [long period of silence][2], Oracle announced an update to its server line, the Oracle Exadata Database Machine X8, which features hardware and software enhancements that include artificial intelligence (AI) and machine learning capabilities, as well as support for hybrid cloud.

Oracle acquired a hardware business nine years ago with the purchase of Sun Microsystems. It steadily whittled down the offerings, getting out of the commodity hardware business in favor of high-end, mission-critical hardware. Whereas the Exalogic line is more of a general-purpose appliance running Oracle’s own version of Linux, Exadata is a purpose-built database server, and Oracle really made some upgrades here.

The Exadata X8 comes with the latest Intel Xeon Scalable processors and PCIe NVMe flash technology to drive performance improvements: Oracle promises a 60% increase in I/O throughput for all-flash storage and a 25% increase in IOPS per storage server compared to the Exadata X7. The X8 offers a 60% performance improvement over the previous generation for analytics, with up to 560GB per second of throughput. It can scan a 1TB table in under two seconds.

**[ Also read:[What is quantum computing (and why enterprises should care)][3] ]**

The company also enhanced the storage server to offload Oracle Database processing, and the X8 features 60% more cores and 40% higher-capacity disk drives than the X7.

But the real enhancements come on the software side. With Exadata X8, Oracle introduces new machine-learning capabilities, such as Automatic Indexing, which continuously learns and tunes the database as usage patterns change. The indexing technology originated with the Oracle Autonomous Database, the cloud-based software designed to automate management of Oracle databases.
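
To get an intuition for what "continuously learns and tunes" means, here is a deliberately tiny sketch of usage-driven index automation. This is not Oracle's algorithm; the class name and threshold are invented for illustration.

```
# Conceptual toy of learning index automation: watch which columns
# queries filter on, and propose an index once a pattern gets hot.

from collections import Counter

class AutoIndexAdvisor:
    def __init__(self, threshold: int = 100):
        self.threshold = threshold       # assumed promotion threshold
        self.filter_counts = Counter()   # (table, column) -> times filtered
        self.indexes = set()

    def observe(self, table: str, column: str) -> None:
        """Record one query that filtered on table.column."""
        self.filter_counts[(table, column)] += 1

    def tune(self) -> list:
        """Create candidate indexes for hot columns."""
        created = []
        for key, hits in self.filter_counts.items():
            if hits >= self.threshold and key not in self.indexes:
                self.indexes.add(key)
                created.append(key)
        return created

advisor = AutoIndexAdvisor(threshold=3)
for _ in range(3):
    advisor.observe("orders", "customer_id")
print(advisor.tune())   # [('orders', 'customer_id')]
```

A production feature would additionally validate each candidate against real workloads before exposing it to the optimizer, and retire indexes that no longer pay for themselves.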

And no, MySQL is not included in the stack. This is for Oracle databases only.

“We’re taking code from Autonomous Database and making it available on prem for our customers,” said Steve Zivanic, vice president for converged infrastructure at Oracle’s Cloud Business Group. “That enables companies rather than doing manual indexing for various Oracle databases to automate it with machine learning.”

In one test, Oracle took a 15-year-old NetSuite database with over 9,000 indexes built up over the lifespan of the database, and in 24 hours its AI indexer rebuilt the indexes with just 6,000, reducing storage space and greatly increasing performance of the database, since the number of indexes to search was smaller.

### Performance improvements with Exadata

Zivanic cited several examples of server consolidation done with Exadata but would not identify the companies by name. He told of a large healthcare company that achieved a 10-fold performance improvement over IBM Power servers and consolidated 600 Power servers onto 50 Exadata systems.

A financial services company replaced 4,000 Dell servers running Red Hat Linux and VMware with 100 Exadata systems running 6,000 production Oracle databases. Not only did it reduce its power footprint, but patching was down 99%. An unnamed retailer with 28 racks of hardware from five vendors went from installing 1,400 patches per year to 16 patches on four Exadata racks.

Because Oracle owns the entire stack, from hardware to OS to middleware and database, Exadata can roll all of its patch components – 640 in all – into a single bundle.

“The trend we’ve noticed is you see these [IT hardware] companies who try to maintain an erector set mentality,” said Zivanic. “And you have people saying why are we trying to build pods? Why don’t we buy finished goods and focus on our core competency rather than build erector sets?”

### Oracle Zero Data Loss Recovery Appliance X8 now available

Oracle also announced the availability of the Oracle Zero Data Loss Recovery Appliance X8, its database backup appliance, which offers up to 10 times faster data recovery of an Oracle Database than conventional data deduplication appliances while providing sub-second recoverability of all transactions.

The new Oracle Recovery Appliance X8 now features 30% greater capacity, nearly a petabyte in a single rack, for the same price, Oracle says.

Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3402559/oracle-updates-exadata-at-long-last-with-ai-and-machine-learning-abilities.html

作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.techhive.com/images/article/2017/03/vid-still-79-of-82-100714308-large.jpg
[2]: https://www.networkworld.com/article/3317564/is-oracles-silence-on-its-on-premises-servers-cause-for-concern.html
[3]: https://www.networkworld.com/article/3275367/what-s-quantum-computing-and-why-enterprises-need-to-care.html
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world
@ -0,0 +1,71 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Report: Mirai tries to hook its tentacles into SD-WAN)
[#]: via: (https://www.networkworld.com/article/3403016/report-mirai-tries-to-hook-its-tentacles-into-sd-wan.html)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)

Report: Mirai tries to hook its tentacles into SD-WAN
======

Mirai – the software that has hijacked hundreds of thousands of internet-connected devices to launch massive DDoS attacks – now goes beyond recruiting just IoT products; it also includes code that seeks to exploit a vulnerability in corporate SD-WAN gear.

That specific equipment – VMware’s SDX line of SD-WAN appliances – now has an updated software version that fixes the vulnerability, but by targeting it Mirai’s authors show that they now look beyond enlisting security cameras and set-top boxes and seek out any vulnerable connected devices, including enterprise networking gear.

**More about SD-WAN**

* [How to buy SD-WAN technology: Key questions to consider when selecting a supplier][1]
* [How to pick an off-site data-backup method][2]
* [SD-Branch: What it is and why you’ll need it][3]
* [What are the options for security SD-WAN?][4]

“I assume we’re going to see Mirai just collecting as many devices as it can,” said Jen Miller-Osborn, deputy director of threat research at Palo Alto Networks’ Unit 42, which recently issued [a report][5] about Mirai.

### Exploiting SD-WAN gear is new

While the exploit against the SD-WAN appliances was a departure for Mirai, it doesn’t represent a sea change in the way its authors are approaching their work, according to Miller-Osborn.

The idea, she said, is simply to add any devices to the botnet, regardless of what they are. The fact that SD-WAN devices were targeted is more about those particular devices having a vulnerability than anything to do with their SD-WAN capabilities.

### Responsible disclosure headed off execution of exploits

[The vulnerability][6] itself was discovered last year by independent researchers who responsibly disclosed it to VMware, which then fixed it in a later software version. But the means to exploit the weakness is nevertheless included in a recently discovered new variant of Mirai, according to the Unit 42 report.

The authors behind Mirai periodically update the software to add new targets to the list, according to Unit 42, and the botherders’ original tactic of simply targeting devices running default credentials has given way to a strategy that also exploits vulnerabilities in a wide range of different devices. The updated variant of the malicious software includes a total of eight new-to-Mirai exploits.

**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][7] ]**

The remediated version of the VMware SD-WAN software is SD-WAN Edge 3.1.2. The vulnerability still affects SD-WAN Edge 3.1.1 and earlier, [according to a VMware security advisory][8]. After the Unit 42 report came out, VMware posted [a blog][9] saying it is conducting its own investigation into the matter.

Detecting whether a given SD-WAN implementation has been compromised depends heavily on the degree of monitoring in place on the network. Any products that give IT staff the ability to notice unusual traffic to or from an affected appliance could flag that activity. Otherwise, it could be difficult to tell if anything’s wrong, Miller-Osborn said. “You honestly might not notice it unless you start seeing a hit in performance or an outside actor notifies you about it.”
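
In that spirit, a monitoring pipeline can reduce "unusual traffic" to a simple statistical check. The sketch below flags an appliance whose current outbound volume deviates sharply from its baseline; the three-sigma threshold and sample values are assumptions, not figures from the Unit 42 report.

```
# Illustrative detector for anomalous egress from a network appliance.
# A bot participating in a DDoS tends to produce an outbound spike far
# outside the appliance's historical norm.

from statistics import mean, stdev

def traffic_is_anomalous(baseline, current_mbps, sigmas=3.0):
    """Flag outbound volume that deviates sharply from the norm."""
    if len(baseline) < 2:
        return False                    # not enough history to judge
    mu = mean(baseline)
    sd = max(stdev(baseline), 1e-9)     # avoid a zero-width baseline
    return current_mbps > mu + sigmas * sd

baseline_mbps = [12.0, 9.5, 11.2, 10.8, 10.1]     # normal egress samples
print(traffic_is_anomalous(baseline_mbps, 10.6))  # False: looks routine
print(traffic_is_anomalous(baseline_mbps, 95.0))  # True: worth investigating
```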

Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3403016/report-mirai-tries-to-hook-its-tentacles-into-sd-wan.html

作者:[Jon Gold][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3323407/sd-wan/how-to-buy-sd-wan-technology-key-questions-to-consider-when-selecting-a-supplier.html
[2]: https://www.networkworld.com/article/3328488/backup-systems-and-services/how-to-pick-an-off-site-data-backup-method.html
[3]: https://www.networkworld.com/article/3250664/lan-wan/sd-branch-what-it-is-and-why-youll-need-it.html
[4]: https://www.networkworld.com/article/3285728/sd-wan/what-are-the-options-for-securing-sd-wan.html?nsdr=true
[5]: https://unit42.paloaltonetworks.com/new-mirai-variant-adds-8-new-exploits-targets-additional-iot-devices/
[6]: https://www.exploit-db.com/exploits/44959
[7]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[8]: https://www.vmware.com/security/advisories/VMSA-2018-0011.html
[9]: https://blogs.vmware.com/security/2019/06/vmsa-2018-0011-revisited.html
[10]: https://www.facebook.com/NetworkWorld/
[11]: https://www.linkedin.com/company/network-world
@ -0,0 +1,60 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Western Digital launches open-source zettabyte storage initiative)
[#]: via: (https://www.networkworld.com/article/3402318/western-digital-launches-open-source-zettabyte-storage-initiative.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Western Digital launches open-source zettabyte storage initiative
======

Western Digital's Zoned Storage initiative leverages new technology to create more efficient zettabyte-scale data storage for data centers by improving how data is organized when it is stored.

![monsitj / Getty Images][1]

Western Digital has announced a project called the Zoned Storage initiative, which leverages new technology to create more efficient zettabyte-scale data storage for data centers by improving how data is organized when it is stored.

As part of this, the company also launched a [developer site][2] that will host open-source, standards-based tools and other resources.

The Zoned Storage architecture is designed for Western Digital hardware and its shingled magnetic recording (SMR) HDDs, which hold up to 15TB of data, as well as the emerging zoned namespaces (ZNS) standard for NVMe SSDs, designed to deliver better endurance and predictability.

**[ Now read:[What is quantum computing (and why enterprises should care)][3] ]**

The initiative is not being retrofitted for non-SMR drives or non-NVMe SSDs. Western Digital estimates that by 2023, half of all its HDD shipments will be SMR. And that will be needed, because IDC predicts data will be generated at a rate of 103 zettabytes a year by 2023.

With this project, Western Digital is targeting cloud and hyperscale providers and anyone building a large data center who has to manage a large amount of data, according to Eddie Ramirez, senior director of product marketing for Western Digital.

Western Digital is changing how data is written and stored, from the traditional random 4K block writes to large blocks of sequential data, such as Big Data workloads and video streams, which are rapidly growing in size and use in the digital age.

“We are now looking at a one-size-fits-all architecture that leaves a lot of TCO [total cost of ownership] benefits on the table if you design for a single architecture,” Ramirez said. “We are looking at workloads that don’t rely on small block randomization of data but large block sequential write in nature.”

Because drives use 4K write blocks, that leads to overprovisioning of storage, especially on SSDs. This is true of consumer and enterprise SSDs alike. My 1TB SSD drive has only 930GB available. And that loss scales: an 8TB SSD has only 6.4TB available, according to Ramirez. SSDs also have to be built with DRAM for caching of small block random writes. You need about 1GB of DRAM per 1TB of NAND to act as a buffer, according to Ramirez.
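
The core rule behind both SMR zones and ZNS namespaces can be captured in a few lines: within a zone, writes may only land at the write pointer, and space is reclaimed by resetting the whole zone. The sketch below is a toy model; the zone size and method names are invented for illustration, not part of the ZNS specification.

```
# Minimal model of a storage zone with a write pointer, the core rule
# behind SMR HDD zones and NVMe ZNS. Sizes and names are illustrative.

class Zone:
    def __init__(self, size_blocks: int = 65536):
        self.size = size_blocks
        self.write_pointer = 0           # next block that may be written

    def write(self, lba: int, nblocks: int) -> None:
        """Accept only writes that start exactly at the write pointer."""
        if lba != self.write_pointer:
            raise IOError("zone requires sequential writes")
        if self.write_pointer + nblocks > self.size:
            raise IOError("write exceeds zone capacity")
        self.write_pointer += nblocks    # host appends; no random updates

    def reset(self) -> None:
        """Free the zone; live data must be rewritten elsewhere first."""
        self.write_pointer = 0

z = Zone()
z.write(0, 128)        # OK: starts at the write pointer
z.write(128, 256)      # OK: strictly appending
try:
    z.write(64, 8)     # random overwrite: rejected by the zone model
except IOError as e:
    print(e)
```

Because the host sequences the data itself, the device no longer needs spare flash and large DRAM buffers to absorb random 4K writes, which is where the capacity and DRAM savings Ramirez describes come from.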

### The benefits of Zoned Storage

Zoned Storage allows for 15-20% more storage on an HDD than the traditional storage mechanism. It eliminates the overprovisioning of SSDs, so you get all the NAND flash the drive has, and you need far fewer DRAM chips on an SSD. Additionally, Western Digital promises you will need up to one-eighth as much DRAM to act as a cache in future SSD drives, lowering the cost.

Ramirez also said quality of service will improve: not necessarily that peak performance will be better, but latency outliers will be managed better.

Western Digital has not disclosed what, if any, pricing is associated with the project. It plans to work with the open-source community, customers, and industry players to help accelerate application development around Zoned Storage through its website.

Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3402318/western-digital-launches-open-source-zettabyte-storage-initiative.html

作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/big_data_center_server_racks_storage_binary_analytics_by_monsitj_gettyimages-951389152_3x2-100787358-large.jpg
[2]: http://ZonedStorage.io
[3]: https://www.networkworld.com/article/3275367/what-s-quantum-computing-and-why-enterprises-need-to-care.html
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: (ninifly)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -1,182 +0,0 @@
Translating by Robsean

How To Resize Active/Primary root Partition Using GParted Utility
======

Today we are going to discuss disk partitioning. It's one of the best topics in Linux, and it allows users to resize the active root partition in Linux.

In this article we will teach you how to resize the active root partition on Linux using the GParted utility.

Just imagine: our system has a 30GB disk, and we didn't partition it properly while installing the Ubuntu operating system.

We need to install another OS on it, so we want to make a second partition.

It's not advisable to resize an active partition. However, we are going to perform this operation because there is no other way to free up space on the system.

Make sure you take a backup of important data before performing this action, because if something goes wrong (for example, a power failure or an unexpected reboot), you can still recover your data.

### What Is GParted

[GParted][1] is a free partition manager that enables you to resize, copy, and move partitions without data loss. We can use all of the features of the GParted application by using the GParted Live bootable image. GParted Live enables you to use GParted on GNU/Linux as well as other operating systems, such as Windows or Mac OS X.

### 1) Check Disk Space Usage Using df Command

I just want to show you my partitions using the df command. The df command output clearly shows that I have only one partition.

```
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 30G 3.4G 26.2G 16% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 487M 4.0K 487M 1% /dev
tmpfs 100M 844K 99M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 498M 152K 497M 1% /run/shm
none 100M 52K 100M 1% /run/user
```

### 2) Check Disk Partition Using fdisk Command

I'm going to verify this using the fdisk command.

```
$ sudo fdisk -l
[sudo] password for daygeek:

Disk /dev/sda: 33.1 GB, 33129218048 bytes
255 heads, 63 sectors/track, 4027 cylinders, total 64705504 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000473a3

Device Boot Start End Blocks Id System
/dev/sda1 * 2048 62609407 31303680 83 Linux
/dev/sda2 62611454 64704511 1046529 5 Extended
/dev/sda5 62611456 64704511 1046528 82 Linux swap / Solaris
```

### 3) Download GParted live ISO Image

Use the command below to download the GParted Live ISO:

```
$ wget https://downloads.sourceforge.net/gparted/gparted-live-0.31.0-1-amd64.iso
```

### 4) Boot Your System With GParted Live Installation Media

Boot your system with the GParted Live installation media (such as a burned CD/DVD, a USB drive, or the ISO image). You will get output similar to the screen below. Choose **GParted Live (Default settings)** and hit **Enter**.

![][3]

### 5) Keyboard Selection

By default it chooses the second option; just hit **Enter**.

![][4]

### 6) Language Selection

By default it chooses **33** for US English; just hit **Enter**.

![][5]

### 7) Mode Selection (GUI or Command-Line)

By default it chooses **0** for GUI mode; just hit **Enter**.

![][6]

### 8) Loaded GParted Live Screen

Now the GParted Live screen is loaded, and it shows the list of partitions created earlier.

![][7]

### 9) How To Resize The root Partition

Choose the root partition you want to resize. Only one partition is available here, so I'm going to edit it to make room for another OS.

![][8]

To do so, press the **Resize/Move** button to resize the partition.

![][9]

Here, enter the size you want to take out of this partition in the first box. I'm going to claim **10GB**, so I added **10240MB**, left the rest of the boxes at their defaults, and hit the **Resize/Move** button.

![][10]

It will ask you once again to confirm the resize because you are editing a live system partition; then hit **Ok**.

![][11]

The partition has been successfully shrunk from 30GB to 20GB, and 10GB of unallocated disk space is now shown.

![][12]

Finally, click the `Apply` button to perform the remaining operations below.

![][13]

**`e2fsck`** e2fsck is a file system check utility that automatically repairs the file system, fixing bad sectors and HDD-related I/O errors.

![][14]

**`resize2fs`** The resize2fs program will resize ext2, ext3, or ext4 file systems. It can be used to enlarge or shrink an unmounted file system located on a device.

![][15]

**`e2image`** The e2image program will save critical ext2, ext3, or ext4 file system metadata located on a device to a file specified by image-file.

![][16]

Once all the operations have completed, close the dialog box.

![][17]

Now I can see **10GB** of unallocated disk space.

![][18]

Reboot the system to check this.

![][19]

### 10) Check Free Space

Log back in to the system and use the parted command to see the available space in the partition. Yes, I can see **10GB** of unallocated disk space on this partition.

```
$ sudo parted /dev/sda print free
[sudo] password for daygeek:
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sda: 32.2GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number Start End Size Type File system Flags
 32.3kB 10.7GB 10.7GB Free Space
 1 10.7GB 32.2GB 21.5GB primary ext4 boot
```

--------------------------------------------------------------------------------

via: https://www.2daygeek.com/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility/

作者:[Magesh Maruthamuthu][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.2daygeek.com/author/magesh/
[1]:https://gparted.org/
[3]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-1.png
[4]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-2.png
[5]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-3.png
[6]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-4.png
[7]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-5.png
[8]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-6.png
[9]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-7.png
[10]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-8.png
[11]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-9.png
[12]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-10.png
[13]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-11.png
[14]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-12.png
[15]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-13.png
[16]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-14.png
[17]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-15.png
[18]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-16.png
[19]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-17.png
@ -1,57 +0,0 @@
A day in the life of a log message
======

Navigating a modern distributed system from the perspective of a log message.



Chaotic systems tend to be unpredictable. This is especially evident when architecting something as complex as a distributed system. Left unchecked, this unpredictability can waste boundless amounts of time. This is why every single component of a distributed system, no matter how small, must be designed to fit together in a streamlined way.

[Kubernetes][1] provides a promising model for abstracting compute resources—but even it must be reconciled with other distributed platforms such as [Apache Kafka][2] to ensure reliable data delivery. If someone were to integrate these two platforms, how would it work? Furthermore, if you were to trace something as simple as a log message through such a system, what would it look like? This article will focus on how a log message from an application running inside [OKD][3], the Origin Community Distribution of Kubernetes that powers Red Hat OpenShift, gets to a data warehouse through Kafka.

### OKD-defined environment

Such a journey begins in OKD, since the container platform completely overlays the hardware it abstracts. This means that the log message waits to be written to **stdout** or **stderr** streams by an application residing in a container. From there, the log message is redirected onto the node's filesystem by a container engine such as [CRI-O][4].



Within OpenShift, one or more containers are encapsulated within virtual compute nodes known as pods. In fact, all applications running within OKD are abstracted as pods. This allows the applications to be manipulated in a uniform way. This also greatly simplifies communication between distributed components, since pods are systematically addressable through IP addresses and [load-balanced services][5]. So when the log message is taken from the node's filesystem by a log-collector application, it can easily be delivered to another pod running within OpenShift.

### Two peas in a pod

To ensure ubiquitous dispersal of the log message throughout the distributed system, the log collector needs to deliver the log message into a Kafka cluster data hub running within OpenShift. Through Kafka, the log message can be delivered to the consuming applications in a reliable and fault-tolerant way with low latency. However, in order to reap the benefits of Kafka within an OKD-defined environment, Kafka needs to be fully integrated into OKD.

Running a [Strimzi operator][6] will instantiate all Kafka components as pods and integrate them to run within an OKD environment. This includes Kafka brokers for queuing log messages, Kafka connectors for reading and writing from Kafka brokers, and Zookeeper nodes for managing the Kafka cluster state. Strimzi can also instantiate the log collector to double as a Kafka connector, allowing the log collector to feed the log messages directly into a Kafka broker pod running within OKD.

### Kafka inside OKD

When the log-collector pod delivers the log message to a Kafka broker, the collector writes to a single broker partition, appending the message to the end of the partition. One of the advantages of using Kafka is that it decouples the log collector from the log's final destination. Thanks to the decoupling, the log collector doesn't care whether the logs end up in [Elasticsearch][7], Hadoop, Amazon S3, or all of them at the same time. Kafka is well-connected to all infrastructure, so the Kafka connectors can take the log message wherever it needs to go.
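
To make the handoff concrete, here is a minimal sketch of a collector-side producer appending one log message to a Kafka topic, using the kafka-python client. The bootstrap address and topic name are placeholders; a Strimzi-managed cluster inside OKD exposes its own service endpoints.

```
# Minimal sketch: publish one log line to a Kafka topic partition.
# Requires the kafka-python package; names below are placeholders.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="my-cluster-kafka-bootstrap:9092",  # hypothetical service
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    acks="all",               # wait until replicas have the message
)

log_message = {"pod": "webapp-1", "stream": "stdout", "msg": "request served"}
producer.send("app-logs", value=log_message)  # appended to a partition's end
producer.flush()                              # block until the broker confirms
```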

Once written to a Kafka broker's partition, the log message is replicated across the broker partitions within the Kafka cluster. This is a very powerful concept on its own; combined with the self-healing features of the platform, it creates a very resilient distributed system. For example, when a node becomes unavailable, the applications running on the node are almost instantaneously spawned on healthy node(s). So even if a node with a Kafka broker is lost or damaged, the log message is guaranteed to survive as many failures as the number of times it was replicated, and a new Kafka broker will quickly take the original's place.

### Off to storage

After it is committed to a Kafka topic, the log message waits to be consumed by a Kafka connector sink, which relays the log message to either an analytics engine or a logging warehouse. Upon delivery to its final destination, the log message could be studied for anomaly detection, queried for immediate root-cause analysis, or used for other purposes. Either way, the log message is delivered by Kafka to its destination in a safe and reliable manner.

OKD and Kafka are powerful distributed platforms that are evolving rapidly. It is vital to create systems that can abstract the complicated nature of distributed computing without compromising performance. After all, how can we boast of systemwide efficiency if we cannot simplify the journey of a single log message?

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/9/life-log-message

作者:[Josef Karásek][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jkarasek
[1]: https://kubernetes.io/
[2]: https://kafka.apache.org/
[3]: https://www.okd.io/
[4]: http://cri-o.io/
[5]: https://kubernetes.io/docs/concepts/services-networking/service/
[6]: http://strimzi.io/
[7]: https://www.elastic.co/
@ -1,131 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (luuming)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 Good Open Source Speech Recognition/Speech-to-Text Systems)
[#]: via: (https://fosspost.org/lists/open-source-speech-recognition-speech-to-text)
[#]: author: (Simon James https://fosspost.org/author/simonjames)

5 Good Open Source Speech Recognition/Speech-to-Text Systems
======



A speech-to-text (STT) system is just what its name implies: a way of transforming spoken words, via sound, into text files that can be used later for any purpose.

Speech-to-text technology is extremely useful. It can be used for a lot of applications, such as automating transcription, writing books/texts using only your own voice, and enabling complicated analyses of information using the generated text files, among many other things.

In the past, speech-to-text technology was dominated by proprietary software and libraries; open source alternatives either didn't exist or came with extreme limitations and no community around them. This is changing: today there are a lot of open source speech-to-text tools and libraries that you can use right now.

Here we list 5 of them.

### Open Source Speech Recognition Libraries

#### Project DeepSpeech

![5 Good Open Source Speech Recognition/Speech-to-Text Systems 15 open source speech recognition][1]

This project is made by Mozilla, the organization behind the Firefox browser. It's a 100% free and open source speech-to-text library that also applies machine learning, using the TensorFlow framework, to fulfill its mission.

In other words, you can use it to build training models yourself to enhance the underlying speech-to-text technology and get better results, or even to bring it to other languages if you want. You can also easily integrate it into your other machine learning projects on TensorFlow. Sadly, it sounds like the project currently supports only English by default.

It's also available in many languages, such as Python (3.6), which allows you to have it working in seconds:

```
pip3 install deepspeech
deepspeech --model models/output_graph.pbmm --alphabet models/alphabet.txt --lm models/lm.binary --trie models/trie --audio my_audio_file.wav
```

You can also install it using npm:

```
npm install deepspeech
```

For more information, refer to the [project’s homepage][2].
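
If you prefer calling it from Python rather than the CLI, the sketch below shows the general shape of a transcription call. Treat it as an assumption-laden sketch: the `Model` constructor signature has changed across DeepSpeech releases (older versions also required the alphabet and language-model paths shown in the CLI example above), and the file names are placeholders.

```
import wave
import numpy as np
import deepspeech

# Assumes a newer 0.x release where Model() takes just the model path;
# older releases needed extra arguments (alphabet, lm, trie).
model = deepspeech.Model("models/output_graph.pbmm")

# DeepSpeech expects 16-bit, 16 kHz, mono PCM audio.
with wave.open("my_audio_file.wav", "rb") as wav:
    frames = wav.readframes(wav.getnframes())
audio = np.frombuffer(frames, dtype=np.int16)

print(model.stt(audio))  # prints the recognized text
```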

#### Kaldi

![5 Good Open Source Speech Recognition/Speech-to-Text Systems 17 open source speech recognition][3]

Kaldi is an open source speech recognition software written in C++ and released under the Apache public license. It works on Windows, macOS, and Linux. Its development started back in 2009.

Kaldi's main advantage over some other speech recognition software is that it's extendable and modular; the community provides tons of third-party modules that you can use for your tasks. Kaldi also supports deep neural networks and offers [excellent documentation on its website][4].

While the code is mainly written in C++, it's "wrapped" by Bash and Python scripts. So if you are looking just for the basic usage of converting speech to text, you'll find it easy to accomplish via either Python or Bash.

[Project's homepage][5].

#### Julius

![5 Good Open Source Speech Recognition/Speech-to-Text Systems 19 open source speech recognition][6]

Julius is probably one of the oldest speech recognition software packages ever; its development started in 1991 at the University of Kyoto, and its ownership was transferred to an independent project team in 2005.

Julius's main features include the ability to perform real-time STT processing, low memory usage (less than 64MB for 20,000 words), the ability to produce N-best/word-graph output, the ability to work as a server unit, and a lot more. The software was mainly built for academic and research purposes. It is written in C and works on Linux, Windows, macOS, and even Android (on smartphones).

Currently it supports only English and Japanese. The software is probably available to install easily from your Linux distribution's repository; just search for the julius package in your package manager. The latest version was [released][7] around one and a half months ago.

[Project's homepage][8].

#### Wav2Letter++

![5 Good Open Source Speech Recognition/Speech-to-Text Systems 21 open source speech recognition][9]

If you are looking for something modern, then this one is for you. Wav2Letter++ is an open source speech recognition software that was released by Facebook's AI Research Team just 2 months ago. The code is released under the BSD license.

Facebook [describes][10] its library as "the fastest state-of-the-art speech recognition system available". The concepts on which this tool is built make it optimized for performance by default; Facebook's also-new machine learning library [FlashLight][11] is used as the underlying core of Wav2Letter++.

Wav2Letter++ requires you to first build a training model for the language you desire in order to train the algorithms on it. No pre-built support for any language (including English) is available; it's just a machine-learning-driven tool to convert speech to text. It was written in C++, hence the name (Wav2Letter++).

[Project's homepage][12].

#### DeepSpeech2

![5 Good Open Source Speech Recognition/Speech-to-Text Systems 23 open source speech recognition][13]

Researchers at the Chinese giant Baidu are also working on their own speech-to-text engine, called DeepSpeech2. It's an end-to-end open source engine that uses the "PaddlePaddle" deep learning framework for converting both English and Mandarin Chinese speech into text. The code is released under the BSD license.

The engine can be trained on any model and for any language you desire. The models are not released with the code; you'll have to build them yourself, just like with the other software. DeepSpeech2's source code is written in Python, so it should be easy to get familiar with if that's the language you use.

[Project's homepage][14].

### Conclusion

The speech recognition category is still mainly dominated by proprietary software giants like Google and IBM (which provide their own closed-source commercial services for this), but the open source alternatives are promising. These 5 open source speech recognition engines should get you going in building your application; all of them are still under heavy development. In a few years, we expect open source to become the norm for these technologies, just as in other industries.

If you have any other recommendations for this list, or comments in general, we'd love to hear them below!
|
||||
|
||||
--------------------------------------------------------------------------------

via: https://fosspost.org/lists/open-source-speech-recognition-speech-to-text

作者:[Simon James][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fosspost.org/author/simonjames
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/fosspost.org/wp-content/uploads/2019/02/hero_speech-machine-learning2.png?resize=820%2C280&ssl=1 (5 Good Open Source Speech Recognition/Speech-to-Text Systems 16 open source speech recognition)
[2]: https://github.com/mozilla/DeepSpeech
[3]: https://i0.wp.com/fosspost.org/wp-content/uploads/2019/02/Screenshot-at-2019-02-19-1134.png?resize=591%2C138&ssl=1 (5 Good Open Source Speech Recognition/Speech-to-Text Systems 18 open source speech recognition)
[4]: http://kaldi-asr.org/doc/index.html
[5]: http://kaldi-asr.org
[6]: https://i2.wp.com/fosspost.org/wp-content/uploads/2019/02/mic_web.png?resize=385%2C100&ssl=1 (5 Good Open Source Speech Recognition/Speech-to-Text Systems 20 open source speech recognition)
[7]: https://github.com/julius-speech/julius/releases
[8]: https://github.com/julius-speech/julius
[9]: https://i2.wp.com/fosspost.org/wp-content/uploads/2019/02/fully_convolutional_ASR.png?resize=850%2C177&ssl=1 (5 Good Open Source Speech Recognition/Speech-to-Text Systems 22 open source speech recognition)
[10]: https://code.fb.com/ai-research/wav2letter/
[11]: https://github.com/facebookresearch/flashlight
[12]: https://github.com/facebookresearch/wav2letter
[13]: https://i2.wp.com/fosspost.org/wp-content/uploads/2019/02/ds2.png?resize=850%2C313&ssl=1 (5 Good Open Source Speech Recognition/Speech-to-Text Systems 24 open source speech recognition)
[14]: https://github.com/PaddlePaddle/DeepSpeech
@ -1,68 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Running LEDs in reverse could cool computers)
[#]: via: (https://www.networkworld.com/article/3386876/running-leds-in-reverse-could-cool-computers.html#tk.rss_all)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)

Running LEDs in reverse could cool computers
======
### The miniaturization of electronics is reaching its limits in part because of heat management. Many are now aggressively trying to solve the problem. A kind of reverse-running LED is one avenue being explored.

![monsitj / Getty Images][1]

The quest to find more efficient methods for cooling computers is almost as high on scientists' agendas as the desire to discover better battery chemistries.

More cooling is crucial for reducing costs. It would also allow more powerful processing to take place in smaller spaces, where the available power should go to crunching numbers rather than making wasteful heat. It would stop heat-caused breakdowns, thereby extending the longevity of components, and it would promote eco-friendly data centers — less heat means less impact on the environment.

Removing heat from microprocessors is one angle scientists have been exploring, and they think they have come up with a simple but unusual and counter-intuitive solution. They say that running a variant of a light-emitting diode (LED) with its electrodes reversed forces the component to act as if it were at an unusually low temperature. Placing it next to warmer electronics, with a nanoscale gap introduced, then causes the LED to suck out the heat.

**[ Read also: [IDC's top 10 data center predictions][2] | Get regularly scheduled insights: [Sign up for Network World newsletters][3] ]**

"Once the LED is reverse biased, it began acting as a very low temperature object, absorbing photons," says Edgar Meyhofer, professor of mechanical engineering at the University of Michigan, in a [press release][4] announcing the breakthrough. "At the same time, the gap prevents heat from traveling back, resulting in a cooling effect."

The researchers say the LED and the adjacent electrical device (in this case a calorimeter, usually used for measuring heat energy) have to be extremely close. They say they've been able to demonstrate cooling of six watts per square meter, and hope eventually to approach 1,000 watts per square meter, roughly the power of sunshine on the earth's surface.

Internet of things (IoT) devices and smartphones could be among the electronics that would ultimately benefit from the LED modification. Both kinds of devices require increasing computing power to be squashed into smaller spaces.

"Removing the heat from the microprocessor is beginning to limit how much power can be squeezed into a given space," the University of Michigan announcement says.

### Materials science and cooling computers

[I've written before about new forms of computer cooling][5]. Exotic materials, derived from materials science, are among the ideas being explored. Sodium bismuthide (Na3Bi) could be used in transistor design, the U.S. Department of Energy's Lawrence Berkeley National Laboratory says. The new substance carries a charge and, importantly, is tunable; however, it doesn't need to be chilled as superconductors currently do.

In fact, that's a problem with superconductors: they unfortunately need more cooling than most electronics, since the technology's zero electrical resistance is achieved only through extreme cooling.

Separately, [researchers in Germany at the University of Konstanz][6] say they will soon have superconductor-driven computers without waste heat. They plan to use electron spin, a physical property of electrons that could yield efficiency gains. The method "significantly reduces the energy consumption of computing centers," the university said in a press release last year.

Another way to reduce heat could be [to replace traditional heatsinks with spirals and mazes][7] embedded on microprocessors. Minuscule channels printed on the chip itself could provide paths for coolant to travel, scientists from Binghamton University say.

"The miniaturization of the semiconductor technology is approaching its physical limits," the University of Konstanz says. Heat management is very much on scientists' agenda now. It's "one of the big challenges in miniaturization."

Join the Network World communities on [Facebook][8] and [LinkedIn][9] to comment on topics that are top of mind.
--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3386876/running-leds-in-reverse-could-cool-computers.html#tk.rss_all

作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/big_data_center_server_racks_storage_binary_analytics_by_monsitj_gettyimages-944444446_3x2-100787357-large.jpg
[2]: https://www.networkworld.com/article/3242807/data-center/top-10-data-center-predictions-idc.html#nww-fsb
[3]: https://www.networkworld.com/newsletters/signup.html#nww-fsb
[4]: https://news.umich.edu/running-an-led-in-reverse-could-cool-future-computers/
[5]: https://www.networkworld.com/article/3326831/computers-could-soon-run-cold-no-heat-generated.html
[6]: https://www.uni-konstanz.de/en/university/news-and-media/current-announcements/news/news-in-detail/Supercomputer-ohne-Abwaerme/
[7]: https://www.networkworld.com/article/3322956/chip-cooling-breakthrough-will-reduce-data-center-power-costs.html
[8]: https://www.facebook.com/NetworkWorld/
[9]: https://www.linkedin.com/company/network-world
@ -1,120 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Blockchain 2.0 – Ongoing Projects (The State Of Smart Contracts Now) [Part 6])
[#]: via: (https://www.ostechnix.com/blockchain-2-0-ongoing-projects-the-state-of-smart-contracts-now/)
[#]: author: (editor https://www.ostechnix.com/author/editor/)

Blockchain 2.0 – Ongoing Projects (The State Of Smart Contracts Now) [Part 6]
======

![The State Of Smart Contracts Now][1]
Continuing from our [**earlier post on smart contracts**][2], this post aims to discuss the state of smart contracts and highlight some projects and companies currently undertaking developments in the area. Smart contracts, as discussed in the previous article of the series, are programs that exist and execute themselves on a blockchain network. We explored how smart contracts work and why they are superior to traditional digital platforms. The companies described here operate in a wide variety of industries; however, most of them deal with identity management systems, financial services, crowdfunding systems, etc., as these are the areas thought to be most suitable for switching to blockchain-based database systems.

### Open platforms

Platforms such as **Counterparty**[1] and **Solidity (Ethereum)** are fully public building blocks for developers to create their own smart contracts. Widespread developer participation in such projects has allowed these to become de facto standards for developing smart contracts, designing your own cryptocurrency token systems, and creating protocols for blockchains to function. Many commendable projects have derived from them. **Quorum**, by JP Morgan, derived from Ethereum, is one example; **Ripple** is another.

### Managing financial transactions

Transferring cryptocurrencies over the internet is touted to become the norm in the coming years. Its shortfalls are:

* Identities and wallet addresses are anonymous. The payer has no recourse if the receiver does not honor the transaction.
* Erroneous transactions, if any, cannot be traced.
* Cryptographically generated hash keys are difficult for humans to work with, and human error is a prime concern.

Having someone else momentarily take in the transaction and settle it with the receiver after due diligence is preferred in such cases.

**EscrowMyEther**[3] and **PAYFAIR**[4] are two such escrow platforms. Basically, the escrow company takes the agreed-upon amount and sends a token to the receiver. Once the receiver delivers what the payer wants via the same escrow platform, both confirm and the final payment is released. These are used extensively by freelancers and hobbyist collectors online.

### Financial services

Developments in micro-financing and micro-insurance projects will improve the banking infrastructure for much of the world's poor and unbanked. Involving the poorer "unbanked" sections of society is estimated to increase revenues for the banks and institutions involved by **$380 billion**[5]. This amount exceeds the savings in operational expenses that banks can expect from switching to blockchain DLT.

**BankQu Inc.**, based in the Midwestern United States, goes by the slogan "Dignity through identity". Their platform allows individuals to set up their own digital identity record, where all their transactions are vetted and processed in real time on the blockchain. Over time, the underlying code records and builds a unique online identity for its users, allowing for ultra-quick transactions and settlements. BankQu case studies exploring how they're helping individuals and companies this way are available [here][3].

**Stratumn** is helping insurance companies offer better insurance services by automating tasks that were earlier micromanaged by humans. Through automation, end-to-end traceability, and efficient data privacy methods, they've radically changed how insurance claims are settled. Improved customer experience along with significant cost reductions presents a win-win situation for clients as well as the firms involved[6].

A similar endeavor is currently being run on a trial basis by the French insurance firm **AXA**. Its product _**"fizzy"**_ allows users to subscribe to the service for a small fee and enter their flight details. If the flight gets delayed or runs into some other issue, the program automatically scours online databases, checks the insurance terms, and credits the insurance amount to the user's account. This eliminates the need for the user to manually check the terms and file a claim, and, in the long run once such systems become mainstream, it will increase accountability from airlines[7][8].

### Keeping track of ownership rights

It is theoretically possible to track media from creation to end-user consumption utilizing timestamped blocks of data in a DLT. The companies **Peertracks** and **Mycelia** are currently helping musicians publish content without worrying about their content being stolen or misused. They help artists sell directly to fans and clients while getting paid for their work, without having to go through rights bodies and record labels[9].

### Identity management platforms

Blockchain-based identity management platforms store your identity on a distributed ledger. Once an account is set up, it is securely encrypted and sent to all the participating nodes. However, as the owner of the data block, only the user has access to the data. Once your identity is established on the network and you begin transactions, an automated program within the network will verify all previous transactions associated with your account, send it for regulatory filings after checking requirements, and execute the settlement automatically, provided the program deems the transaction legitimate. The upside here is that, since the data on the blockchain is tamper-proof and the smart contract checks the input with zero bias (or subjectivity), the transaction doesn't, as previously mentioned, require oversight or approval from anyone and is taken care of instantaneously.

Start-ups like **ShoCard**, **Credits**, and **OneName** are currently rolling out similar services and are in talks with governments and social institutions to integrate them into mainstream use.

Independent projects by developers such as **Chris Ellis** and **David Duccini** have developed or proposed alternative identity management systems, namely **"[World Citizenship][4]"** and **[IDCoin][5]**, respectively. Mr. Ellis even demonstrated the capabilities of his work by creating passports on a blockchain network[10][11][12].

### Resource sharing

**Share & Charge (Slock.It)** is a European blockchain start-up. Their mobile app allows homeowners and other individuals who've invested in setting up a charging station to share their resource with other individuals looking for a quick charge. This not only allows owners to recoup some of their investment, but also gives EV drivers access to significantly more charging points in their nearby geographical area, allowing suppliers to meet demand in a convenient manner. Once a "customer" finishes charging their vehicle, the associated hardware creates a secure, timestamped block of the data, and a smart contract working on the platform automatically credits the corresponding amount of money to the owner's account. All such transactions are recorded and proper security verifications are kept in place. Interested readers can take a look [here][6] to learn the technical angle behind their product[13][14]. The company's platforms will gradually enable users to share other products and services with individuals in need and earn a passive income from them.

The companies we've looked at here comprise a very short list of ongoing projects that make use of smart contracts and blockchain database systems. Platforms such as these help in building a secure "box" full of information to be accessed only by the users themselves and the overlying code or smart contract. The information is vetted in real time based on a trigger, examined, and the algorithm is executed by the system. Such platforms run with minimal human oversight, a much-needed step in the right direction with respect to secure digital automation, something which has never been attempted at this scale before.

The next post will shed some light on the **different types of blockchains**. Click the following link to know more about this topic.

* [**Blockchain 2.0 – Public Vs Private Blockchain Comparison**][7]
**References:**

* **[1] [About | Counterparty][8]**
* **[2] [Quorum | J.P. Morgan][9]**
* **[3] [Escrow My Ether][10]**
* **[4] [PayFair][11]**
* **[5] B. Pani, "Blockchain Powered Financial Inclusion," 2016.**
* **[6] [STRATUMN | Insurance Claim Automation Across Europe][12]**
* **[7] [fizzy][13]**
* **[8] [AXA goes blockchain with fizzy | AXA][14]**
* **[9] M. Gates, "Blockchain: Ultimate guide to understanding blockchain, bitcoin, cryptocurrencies, smart-contracts and the future of money," 2017.**
* **[10] [ShoCard Is A Digital Identity Card On The Blockchain | TechCrunch][15]**
* **[11] [J. Biggs, "Your Next Passport Could Be On The Blockchain" | TechCrunch][16]**
* **[12] [OneName – Namecoin Wiki][17]**
* **[13] [Share&Charge launches its app, on-boards over 1,000 charging stations on the blockchain][18]**
* **[14] [slock.it – Landing][19]**
--------------------------------------------------------------------------------

via: https://www.ostechnix.com/blockchain-2-0-ongoing-projects-the-state-of-smart-contracts-now/

作者:[editor][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.ostechnix.com/author/editor/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/State-Of-Smart-Contracts-720x340.png
[2]: https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/
[3]: https://banqu.co/case-study/
[4]: https://github.com/MrChrisJ/World-Citizenship
[5]: https://github.com/IDCoin/IDCoin
[6]: https://blog.slock.it/share-charge-smart-contracts-the-technical-angle-58b93ce80f15
[7]: https://www.ostechnix.com/blockchain-2-0-public-vs-private-blockchain-comparison/
[8]: https://counterparty.io/platform/
[9]: https://www.jpmorgan.com/global/Quorum
[10]: http://escrowmyether.com/
[11]: https://payfair.io/
[12]: https://stratumn.com/business-case/insurance-claim-automation-across-europe/
[13]: https://fizzy.axa/en-gb/
[14]: https://group.axa.com/en/newsroom/news/axa-goes-blockchain-with-fizzy
[15]: https://techcrunch.com/2015/05/05/shocard-is-a-digital-identity-card-on-the-blockchain/
[16]: https://techcrunch.com/2014/10/31/your-next-passport-could-be-on-the-blockchain/
[17]: https://wiki.namecoin.org/index.php?title=OneName
[18]: https://blog.slock.it/share-charge-launches-its-app-on-boards-over-1-000-charging-stations-on-the-blockchain-ba8275390309
[19]: https://slock.it/
@ -1,5 +1,5 @@

[#]: collector: (lujun9972)
[#]: translator: (warmfrog)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

@ -1,5 +1,5 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (LazyWolfLin)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -1,85 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: (arrowfeng)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 essential values for the DevOps mindset)
[#]: via: (https://opensource.com/article/19/5/values-devops-mindset)
[#]: author: (Brent Aaron Reed https://opensource.com/users/brentaaronreed/users/wpschaub/users/wpschaub/users/wpschaub/users/cobiacomm/users/marcobravo/users/brentaaronreed)

5 essential values for the DevOps mindset
======
People and process take more time but are more important than any technology "silver bullet" in solving business problems.
![human head, brain outlined with computer hardware background][1]
Many IT professionals today struggle with adapting to change and disruption. Are you struggling with just trying to keep the lights on, so to speak? Do you feel overwhelmed? This is not uncommon. Today, the status quo is not enough, so IT constantly tries to re-invent itself.

With over 30 years of combined IT experience, we have witnessed how important people and relationships are to IT's ability to be effective and help the business thrive. However, most of the time, our conversations about IT solutions start with technology rather than people and process. The propensity to look for a "silver bullet" to address business and IT challenges is far too common. But you can't just buy innovation, DevOps, or effective teams and ways of working; they need to be nurtured, supported, and guided.

With disruption so prevalent and such a critical demand for speed of change, we need both discipline and guardrails. The five essential values for the DevOps mindset, described below, will support the practices that get us there. These values are not new ideas; they are refactored from what we've learned through experience. Some of the values may be interchangeable; they are flexible, and they guide the overall principles that support (like a pillar) these five values.

![5 essential values for the DevOps mindset][2]

### 1\. Feedback from stakeholders is essential

How do we know if we are creating more value for ourselves than for our stakeholders? We need persistent, quality data to analyze, inform, and drive better decisions. Relevant information from trusted sources is vital for any business to thrive. We need to listen to and understand what our stakeholders are saying—and not saying—and we need to implement changes in a way that enables us to adjust our thinking—and our processes and technologies—and adapt them as needed to delight our stakeholders. Too often, we see little change, or lots of change for the wrong reasons, because of incorrect information (data). Therefore, aligning change to our stakeholders' feedback is an essential value and helps us focus on what is most important to making our company successful.

> Focus on our stakeholders and their feedback rather than simply changing for the sake of change.

### 2\. Improve beyond the limits of today's processes

We want our products and services to continuously delight our customers—our most important stakeholders—therefore, we need to improve continually. This is not only about quality; it could also mean costs, availability, relevance, and many other goals and factors. Creating repeatable processes or utilizing a common framework is great—they can improve governance and a host of other issues—however, that should not be our end goal. As we look for ways to improve, we must adjust our processes, complemented by the right tech and tools. There may be reasons to throw out a "so-called" framework because not doing so could add waste—or worse, encourage "cargo culting" (doing something of no value or purpose).

> Strive to always innovate and improve beyond repeatable processes and frameworks.

### 3\. No new silos to break down silos

Silos and DevOps are incompatible. We see this all the time: an IT director brings in so-called "experts" to implement agile and DevOps, and what do they do? These "experts" create a new problem on top of the existing problem: another silo added to an IT department and a business already riddled with silos. Creating "DevOps" titles goes against the very principles of agile and DevOps, which are based on the concept of breaking down silos. In both agile and DevOps, teamwork is essential, and if you don't work in a self-organizing team, you're doing neither of them.

> Inspire and share collaboratively instead of becoming a hero or creating a silo.

### 4\. Knowing your customer means cross-organization collaboration

No part of the business is an independent entity, because they all have stakeholders, and the primary stakeholder is always the customer. "The customer is always right" (or the king, as I like to say). The point is, without the customer there really is no business, and to stay in business today, we need to "differentiate" from our competitors. We also need to know how our customers feel about us and what they want from us. Knowing what the customer wants is imperative and requires timely feedback to ensure the business addresses these primary stakeholders' needs and concerns quickly and responsibly.

![Minimize time spent with build-measure-learn process][3]

Whether it comes from an idea, a concept, an assumption, or direct stakeholder feedback, we need to identify and measure the feature or service our product delivers by using the explore, build, test, deliver lifecycle. Fundamentally, this means that we need to be "plugged in" across the organization. There are no borders in continuous innovation, learning, and DevOps. Thus, when we measure across the enterprise, we can understand the whole and take actionable, meaningful steps to improve.

> Measure performance across the organization, not just in a line of business.

### 5\. Inspire adoption through enthusiasm

Not everyone is driven to learn, adapt, and change; however, just as smiles can be infectious, so can learning and wanting to be part of a culture of change. Adapting and evolving within a culture of learning provides a natural mechanism for a group of people to learn and pass on information (i.e., cultural transmission). Learning styles, attitudes, methods, and processes continually evolve so we can improve upon them. The next step is to apply what was learned and improved and share the information with colleagues. Learning does not happen automatically; it takes effort, evaluation, discipline, awareness, and especially communication; unfortunately, these are things that tools and automation alone will not provide. Review your processes, automation, tool strategies, and implementation work, make it transparent, and collaborate with your colleagues on reuse and improvement.

> Promote a culture of learning through lean quality deliverables, not just tools and automation.

### Summary

![Continuous goals of DevOps mindset][4]

As our companies adopt DevOps, we continue to champion these five values over any book, website, or automation software. It takes time to adopt this mindset, and it is very different from what we used to do as sysadmins. It's a wholly new way of working that will take many years to mature. Do these principles align with your own? Share them in the comments or on our website, [Agents of chaos][5].

* * *

Can you really do DevOps without sharing scripts or code? DevOps manifesto proponents value cross-...
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/5/values-devops-mindset

作者:[Brent Aaron Reed][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/brentaaronreed/users/wpschaub/users/wpschaub/users/wpschaub/users/cobiacomm/users/marcobravo/users/brentaaronreed
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brain_data.png?itok=RH6NA32X (human head, brain outlined with computer hardware background)
[2]: https://opensource.com/sites/default/files/uploads/devops_mindset_values.png (5 essential values for the DevOps mindset)
[3]: https://opensource.com/sites/default/files/uploads/devops_mindset_minimze-time.jpg (Minimize time spent with build-measure-learn process)
[4]: https://opensource.com/sites/default/files/uploads/devops_mindset_continuous.png (Continuous goals of DevOps mindset)
[5]: http://agents-of-chaos.org
@ -1,5 +1,5 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (luuming)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -1,263 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How Linux can help with your spelling)
[#]: via: (https://www.networkworld.com/article/3400942/how-linux-can-help-with-your-spelling.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

How Linux can help with your spelling
======
Whether you're struggling with one elusive word or checking a report before you send it off to your boss, Linux can help with your spelling.
![Sandra Henry-Stocker][1]
Linux provides all sorts of tools for data analysis and automation, but it also helps with an issue that we all struggle with from time to time — spelling! Whether you're grappling with the spelling of a single word while writing your weekly report, or you want a set of computerized "eyes" to find your typos before you submit a business proposal, maybe it's time to check out how it can help.

### look

One tool is **look**. If you know how a word begins, you can ask the look command to provide a list of words that start with those letters. Unless an alternate word source is provided, look uses **/usr/share/dict/words** to identify the words for you. This file, with its hundreds of thousands of words, will suffice for most of the English words that we routinely use, but it might not have some of the more obscure words that some of us in the computing field tend to use — such as zettabyte.

**[ Two-Minute Linux Tips: [Learn how to master a host of Linux commands in these 2-minute video tutorials][2] ]**

The look command's syntax is as easy as can be. Type "look word" and it will run through all the words in that words file and find matches for you.
```
$ look amelio
ameliorable
ameliorableness
ameliorant
ameliorate
ameliorated
ameliorates
ameliorating
amelioration
ameliorations
ameliorativ
ameliorative
amelioratively
ameliorator
amelioratory
```

If you happen upon a word that isn't included in the word list on the system, you'll simply get no output.

```
$ look zetta
$
```
Don't despair if you're not seeing what you were hoping for. You can add words to your words file or even reference an altogether different word list — either finding one online or creating one yourself. You don't even have to place an added word in the proper alphabetical location; just add it to the end of the file. You do need to do this as root, however. For example (and be careful with that **>>**!):

```
# echo "zettabyte" >> /usr/share/dict/words
```

Using a different list of words ("jargon" in this case) just requires adding the name of the file. Use a full path if the file is not the default.

```
$ look nybble /usr/share/dict/jargon
nybble
nybbles
```
The look command is also case-insensitive, so you don't have to concern yourself with whether the word you're looking for should be capitalized or not.

```
$ look zet
ZETA
Zeta
zeta
zetacism
Zetana
zetas
Zetes
zetetic
Zethar
Zethus
Zetland
Zetta
```
Of course, not all word lists are created equal. Some Linux distributions provide a _lot_ more words than others in their word files. Yours might have 100,000 words or many times that number.

On one of my Linux systems:

```
$ wc -l /usr/share/dict/words
102402 /usr/share/dict/words
```

On another:

```
$ wc -l /usr/share/dict/words
479828 /usr/share/dict/words
```

Remember that the look command works only with the beginnings of words, but there are other options if you don't want to start there.
### grep

Our dearly beloved **grep** command can pluck words from a word file as well as any tool. If you're looking for words that start or end with particular letters, grep is a natural. It can match words using beginnings, endings, or middle portions of words. Your system's word file will work with grep as easily as it does with look. The only drawback is that, unlike with look, you have to specify the file.

Using word beginnings with ^:

```
$ grep ^terra /usr/share/dict/words
terrace
terrace's
terraced
terraces
terracing
terrain
terrain's
terrains
terrapin
terrapin's
terrapins
terraria
terrarium
terrarium's
terrariums
```
Using word endings with $:

```
$ grep bytes$ /usr/share/dict/words
bytes
gigabytes
kilobytes
megabytes
terabytes
```
With grep, you do need to concern yourself with capitalization, but the command provides some options for that.

```
$ grep ^[Zz]et /usr/share/dict/words
Zeta
zeta
zetacism
Zetana
zetas
Zetes
zetetic
Zethar
Zethus
Zetland
Zetta
zettabyte
```
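If you'd rather not spell out bracket expressions, grep's **-i** option ignores case entirely. A quick sketch; the exact output will vary with the word list on your system:

```
$ grep -i ^zett /usr/share/dict/words
Zetta
zettabyte
```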
Setting up a symbolic link to the words file makes this kind of word search a little easier:

```
$ ln -s /usr/share/dict/words words
$ grep ^[Zz]et words
Zeta
zeta
zetacism
Zetana
zetas
Zetes
zetetic
Zethar
Zethus
Zetland
Zetta
zettabyte
```
### aspell

The aspell command takes a different approach. It provides a way to check the spelling in whatever file or text you provide to it. You can pipe text to it and have it tell you which words appear to be misspelled. If you're spelling all the words correctly, you'll see no output.

```
$ echo Did I mispell that? | aspell list
mispell
$ echo I can hardly wait to try out aspell | aspell list
aspell
$ echo Did I misspell anything? | aspell list
$
```

The "list" argument tells aspell to provide a list of misspelled words in the words that are sent through standard input.
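The same works for whole files via redirection. A small sketch; the file name and the flagged words shown are just illustrative:

```
$ aspell list < weekly_report.txt
mispell
recieve
```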
You can also use aspell to locate and correct words in a text file. If it finds a misspelled word, it will offer you an opportunity to replace it from a list of similar (but correctly spelled) words, to accept the word and add it to your personal word list (~/.aspell.en.pws), to ignore the misspelling, or to abort the process altogether (leaving the file as it was before you started).

```
$ aspell -c mytext
```

Once aspell finds a word that's misspelled, it offers a list of choices like these for the incorrect "mispell":

```
1) mi spell 6) misplay
2) mi-spell 7) spell
3) misspell 8) misapply
4) Ispell 9) Aspell
5) misspells 0) dispel
i) Ignore I) Ignore all
r) Replace R) Replace all
a) Add l) Add Lower
b) Abort x) Exit
```
Note that the alternate words and spellings are numbered, while other options are represented by letter choices. You can choose one of the suggested spellings or opt to type a replacement. The "Abort" choice will leave the file intact even if you've already chosen replacements for some words. Words you elect to add will be inserted into a local file (e.g., ~/.aspell.en.pws).
### Alternate word lists

Tired of English? The aspell command can work with other languages if you add a word file for them. To add a dictionary for French on Debian systems, for example, you could do this:

```
$ sudo apt install aspell-fr
```

This new dictionary file would be installed as /usr/share/dict/French. To use it, you would simply need to tell aspell that you want to use the alternate word list:

```
$ aspell --lang=fr -c mytext
```

When using it, you might see something like this if aspell looks at the word "one":

```
1) once 6) orné
2) onde 7) ne
3) ondé 8) né
4) onze 9) on
5) orne 0) cône
i) Ignore I) Ignore all
r) Replace R) Replace all
a) Add l) Add Lower
b) Abort x) Exit
```

You can also get other language word lists from [GNU][3].
### Wrap-up

Even if you're a national spelling bee winner, you probably need a little help with your spelling every now and then — if only to spot your typos. The aspell tool, along with look and grep, is ready to come to your rescue.

Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3400942/how-linux-can-help-with-your-spelling.html

作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/06/linux-spelling-100798596-large.jpg
[2]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
[3]: ftp://ftp.gnu.org/gnu/aspell/dict/0index.html
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world
@ -1,5 +1,5 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (robsean)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -1,94 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Graviton: A Minimalist Open Source Code Editor)
[#]: via: (https://itsfoss.com/graviton-code-editor/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

Graviton: A Minimalist Open Source Code Editor
======
[Graviton][1] is a free and open source cross-platform code editor in development. Its sixteen-year-old developer, Marc Espin, emphasizes that it is a 'minimalist' code editor. I am not sure about that, but it does have a clean user interface like other [modern code editors such as Atom][2].

![Graviton Code Editor Interface][3]

The developer also calls it a lightweight code editor, despite the fact that Graviton is based on [Electron][4].

Graviton comes with features you expect in any standard code editor, like syntax highlighting, auto-completion, etc. Since Graviton is still in the beta phase of development, more features will be added in future releases.

![Graviton Code Editor with Syntax Highlighting][5]

### Features of Graviton code editor

Some of the main highlights of Graviton's features are:

* Syntax highlighting for a number of programming languages using [CodeMirrorJS][6]
* Autocomplete
* Support for plugins and themes
* Available in English, Spanish and a few other European languages
* Available for Linux, Windows and macOS

I had a quick look at Graviton, and it might not be as feature-rich as [VS Code][7] or [Brackets][8], but for some simple code editing, it's not a bad tool.
### Download and install Graviton

![Graviton Code Editor][9]

As mentioned earlier, Graviton is a cross-platform code editor available for Linux, Windows and macOS. It is still in the beta stage, which means that more features will be added in the future and you may encounter some bugs.

You can find the latest version of Graviton on its release page. Debian and [Ubuntu users can install it from the .deb file][10]. An [AppImage][11] has been provided so that it can be used on other distributions. DMG and EXE files are also available for macOS and Windows, respectively.

[Download Graviton][12]

If you are interested, you can find the source code of Graviton in its GitHub repository:

[Graviton Source Code on GitHub][13]

If you decide to use Graviton and find some issues, please open a bug report [here][14]. If you use GitHub, you may want to star the Graviton project. This boosts the morale of the developer, as he would know that more users are appreciating his efforts.

I believe you know [how to install software from source code][16] if you are taking that path.
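If you do take the source route, the steps below are a plausible sketch for an Electron app that uses standard npm scripts; the actual commands are whatever the repository's README specifies, so treat these as assumptions rather than verified instructions:

```
# Typical Electron-app build flow (assumed, not verified against Graviton's README)
git clone https://github.com/Graviton-Code-Editor/Graviton-App.git
cd Graviton-App
npm install   # fetch dependencies
npm start     # launch the editor in development mode
```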
**In the end…**

Sometimes, simplicity itself becomes a feature, and Graviton's focus on being minimalist could help it carve out a niche for itself in the already crowded segment of code editors.

At It's FOSS, we try to highlight open source software. If you know some interesting open source software that you would like more people to know about, [do send us a note][17].
--------------------------------------------------------------------------------

via: https://itsfoss.com/graviton-code-editor/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://graviton.ml/
[2]: https://itsfoss.com/best-modern-open-source-code-editors-for-linux/
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/06/graviton-code-editor-interface.jpg?resize=800%2C571&ssl=1
[4]: https://electronjs.org/
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/06/graviton-code-editor-interface-2.jpg?resize=800%2C522&ssl=1
[6]: https://codemirror.net/
[7]: https://itsfoss.com/install-visual-studio-code-ubuntu/
[8]: https://itsfoss.com/install-brackets-ubuntu/
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/06/graviton-code-editor-800x473.jpg?resize=800%2C473&ssl=1
[10]: https://itsfoss.com/install-deb-files-ubuntu/
[11]: https://itsfoss.com/use-appimage-linux/
[12]: https://github.com/Graviton-Code-Editor/Graviton-App/releases
[13]: https://github.com/Graviton-Code-Editor/Graviton-App
[14]: https://github.com/Graviton-Code-Editor/Graviton-App/issues
[15]: https://itsfoss.com/indicator-stickynotes-windows-like-sticky-note-app-for-ubuntu/
[16]: https://itsfoss.com/install-software-from-source-code/
[17]: https://itsfoss.com/contact-us/
@ -0,0 +1,229 @@

[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Neofetch – Display Linux system Information In Terminal)
[#]: via: (https://www.ostechnix.com/neofetch-display-linux-systems-information/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)

Neofetch – Display Linux system Information In Terminal
======

![Display Linux system information using Neofetch][1]
**Neofetch** is a simple yet useful command-line system information utility written in **Bash**. It gathers information about your system's software and hardware and displays the result in the terminal. By default, the system information is displayed alongside your operating system's logo. However, you can further customize it to use an **ascii image** or any image of your choice instead of the OS logo. You can also configure Neofetch to control which information is displayed, and where and when it is displayed. Neofetch is mainly developed to be used in screenshots of your system information. It supports Linux, BSD, Mac OS X, iOS, and Windows operating systems. In this brief tutorial, let us see how to display Linux system information using Neofetch.

### Install Neofetch

Neofetch is available in the default repositories of most Linux distributions.

On Arch Linux and its variants, install it using the command:
```
$ sudo pacman -S neofetch
```
On Debian (Stretch / Sid):

```
$ sudo apt-get install neofetch
```

On Fedora 27:

```
$ sudo dnf install neofetch
```

On RHEL and CentOS:

Enable the EPEL repository:

```
# yum install epel-release
```

Fetch the neofetch repository file:

```
# curl -o /etc/yum.repos.d/konimex-neofetch-epel-7.repo https://copr.fedorainfracloud.org/coprs/konimex/neofetch/repo/epel-7/konimex-neofetch-epel-7.repo
```

Then, install Neofetch:

```
# yum install neofetch
```
On Ubuntu 17.10 and newer versions:

```
$ sudo apt-get install neofetch
```

On Ubuntu 16.10 and lower versions:

```
$ sudo add-apt-repository ppa:dawidd0811/neofetch
$ sudo apt update
$ sudo apt install neofetch
```

On NixOS:

```
$ nix-env -i neofetch
```
### Display Linux System Information Using Neofetch

Neofetch is pretty easy and straightforward. Let us see some examples.

Open up your terminal, and run the following command:

```
$ neofetch
```

**Sample output:**

![][2]

Display Linux system information using Neofetch

As you can see in the above output, Neofetch displays the following details of my Arch Linux system:

* Name of the installed operating system
* Laptop model
* Kernel details
* System uptime
* Number of packages installed by the default and other package managers
* Default shell
* Screen resolution
* Desktop environment
* Window manager
* Window manager's theme
* System theme
* System icons
* Default terminal
* CPU type
* GPU type
* Installed memory
Neofetch has plenty of other options too. We will see some of them.

##### How to use custom images in Neofetch output

By default, Neofetch will display your OS logo along with the system information. You can, of course, change the image as you wish.

In order to display images, your Linux system should have the following dependencies installed:

1. **w3m-img** (required to display images; w3m-img is sometimes bundled together with the **w3m** package),
2. **ImageMagick** (required for thumbnail creation),
3. A terminal that supports **\033[14t** or **xdotool** or **xwininfo + xprop** or **xwininfo + xdpyinfo**.

The w3m-img and ImageMagick packages are available in the default repositories of most Linux distributions, so you can install them using your distribution's default package manager.
For instance, run the following command to install w3m-img and ImageMagick on Debian, Ubuntu, or Linux Mint:

```
$ sudo apt install w3m-img imagemagick
```

Here is the list of terminal emulators with **w3m-img** support:

1. Gnome-terminal
2. Konsole
3. st
4. Terminator
5. Termite
6. URxvt
7. Xfce4-Terminal
8. Xterm

If you have the **kitty**, **Terminology** or **iTerm** terminal emulators on your system, you don't need to install w3m-img.
Now, run the following command to display your system's information with a custom image:

```
$ neofetch --w3m /home/sk/Pictures/image.png
```

Or,

```
$ neofetch --w3m --source /home/sk/Pictures/image.png
```

Sample output:

![][3]

Neofetch output with a custom logo

Replace the image path in the above command with your own.

Alternatively, you can point to a directory that contains the images, like below:

```
$ neofetch --w3m <path-to-directory>
```
##### Configure Neofetch

When you run Neofetch for the first time, it will create a per-user configuration file at **$HOME/.config/neofetch/config.conf** by default. You can tweak this file to tell Neofetch which details should be displayed, removed, and/or modified.

You can also keep this configuration file between versions. Meaning: just customize it once to your liking and keep using the same settings after upgrading to a newer version. You can even share this file with your friends and colleagues so they get the same settings as yours.
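Most options in the config file can also be toggled from the command line for one-off runs. A small sketch, assuming your Neofetch build supports the `--disable` flag (recent versions do):

```
# Hide a few fields for a cleaner screenshot
$ neofetch --disable theme icons resolution
```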
To view Neofetch's help section, run:

```
$ neofetch --help
* * *

**Related read:**

* [**How to find Linux System details using inxi**][4]

* * *

**Resource:**

* [**Neofetch on GitHub**][5]
--------------------------------------------------------------------------------

via: https://www.ostechnix.com/neofetch-display-linux-systems-information/

作者:[sk][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2016/06/neofetch-1-720x340.png
[2]: http://www.ostechnix.com/wp-content/uploads/2016/06/Neofetch-1.png
[3]: http://www.ostechnix.com/wp-content/uploads/2016/06/Neofetch-with-custom-logo.png
[4]: https://www.ostechnix.com/how-to-find-your-system-details-using-inxi/
[5]: https://github.com/dylanaraps/neofetch
@ -1,91 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Search Linux Applications On AppImage, Flathub And Snapcraft Platforms)
[#]: via: (https://www.ostechnix.com/search-linux-applications-on-appimage-flathub-and-snapcraft-platforms/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)

Search Linux Applications On AppImage, Flathub And Snapcraft Platforms
======

![Search Linux Applications On AppImage, Flathub And Snapcraft][1]
Linux is evolving day by day. In the past, developers had to build applications separately for different Linux distributions. Since several Linux variants exist, building apps for all of them became a tedious and quite time-consuming task. Then some developers invented package converters and builders such as [**Checkinstall**][2], [**Debtap**][3] and [**Fpm**][4]. But they didn't completely solve the problem. All of these tools simply convert one package format to another; we still need to find and install the dependencies the app needs to run.

Well, times have changed. We now have universal Linux apps. Meaning: we can install these applications on most Linux distributions. Be it Arch Linux, Debian, CentOS, Red Hat, Ubuntu or any popular Linux distribution, the universal apps will work just fine out of the box. These applications are packaged with all the necessary libraries and dependencies in a single bundle. All we have to do is download and run them on any Linux distribution of our choice. The popular universal app formats are **AppImages**, [**Flatpaks**][5] and [**Snaps**][6].

AppImages are created and maintained by **Simon Peter**. Many popular applications, like GIMP, Firefox, Krita and a lot more, are available in this format directly on their download pages. Just download them, make them executable and run them in no time. You don't even need root permissions to run AppImages.
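The whole AppImage workflow fits in three shell commands; the download URL below is a placeholder, not a real release:

```
# Download an AppImage (hypothetical URL), make it executable, and run it
$ wget https://example.com/downloads/SomeApp.AppImage
$ chmod +x SomeApp.AppImage
$ ./SomeApp.AppImage
```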
The developer of Flatpak is **Alexander Larsson** (a Red Hat employee). Flatpak apps are hosted in a central repository (store) called **Flathub**. If you're a developer, you are encouraged to build your apps in Flatpak format and distribute them to users via Flathub.
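With Flatpak itself set up, installing an app from Flathub is a single command; GIMP's Flathub ID is used here as an example:

```
$ flatpak install flathub org.gimp.GIMP
```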
The **Snaps** are created mainly for Ubuntu, by **Canonical**. However, the developers of other Linux distributions are started to contribute to Snap packing format. So, Snaps will work on other Linux distributions as well. The Snaps can be downloaded either directly from application’s download page or from **Snapcraft** store.
|
||||
|
||||
Many popular companies and developers have released their applications in AppImage, Flatpak and Snap formats. If you’re ever looking for an app, just head over to the respective store, grab the application of your choice and run it, regardless of the Linux distribution you use.
|
||||
|
||||
There is also a command-line universal app search tool called **“Chob”** that makes it easy to search for Linux applications on the AppImage, Flathub and Snapcraft platforms. This tool only searches for the given application and opens its official link in your default browser; it won’t install anything. This guide explains how to install Chob and use it to search for AppImages, Flatpaks and Snaps on Linux.
|
||||
|
||||
### Search Linux Applications On AppImage, Flathub And Snapcraft Platforms Using Chob
|
||||
|
||||
Download the latest Chob binary file from the [**releases page**][7]. As of writing this guide, the latest version was **0.3.5**.
|
||||
|
||||
```
|
||||
$ wget https://github.com/MuhammedKpln/chob/releases/download/0.3.5/chob-linux
|
||||
```
|
||||
|
||||
Make it executable:
|
||||
|
||||
```
|
||||
$ chmod +x chob-linux
|
||||
```
|
||||
|
||||
Finally, search for the applications you want. For example, I am going to search for applications related to **Vim**.
|
||||
|
||||
```
|
||||
$ ./chob-linux vim
|
||||
```
|
||||
|
||||
Chob will search for the given application (and related ones) on the AppImage, Flathub and Snapcraft platforms and display the results.
|
||||
|
||||
![][8]
|
||||
|
||||
Search Linux applications Using Chob
|
||||
|
||||
Just type the appropriate number to choose the application you want; the official link of the selected app opens in your default web browser, where you can read its details.
|
||||
|
||||
![][9]
|
||||
|
||||
View Linux application’s Details In Browser
|
||||
|
||||
For more details, have a look at the Chob official GitHub page given below.
|
||||
|
||||
**Resource:**
|
||||
|
||||
* [**Chob GitHub Repository**][10]
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/search-linux-applications-on-appimage-flathub-and-snapcraft-platforms/
|
||||
|
||||
作者:[sk][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.ostechnix.com/wp-content/uploads/2019/05/chob-720x340.png
|
||||
[2]: https://www.ostechnix.com/build-packages-source-using-checkinstall/
|
||||
[3]: https://www.ostechnix.com/convert-deb-packages-arch-linux-packages/
|
||||
[4]: https://www.ostechnix.com/build-linux-packages-multiple-platforms-easily/
|
||||
[5]: https://www.ostechnix.com/flatpak-new-framework-desktop-applications-linux/
|
||||
[6]: https://www.ostechnix.com/introduction-ubuntus-snap-packages/
|
||||
[7]: https://github.com/MuhammedKpln/chob/releases
|
||||
[8]: http://www.ostechnix.com/wp-content/uploads/2019/05/Search-Linux-applications-Using-Chob.png
|
||||
[9]: http://www.ostechnix.com/wp-content/uploads/2019/05/View-Linux-applications-Details.png
|
||||
[10]: https://github.com/MuhammedKpln/chob
|
@ -1,80 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Try a new game on Free RPG Day)
|
||||
[#]: via: (https://opensource.com/article/19/5/free-rpg-day)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth/users/erez/users/seth)
|
||||
|
||||
Try a new game on Free RPG Day
|
||||
======
|
||||
Celebrate tabletop role-playing games and get free RPG materials at your
|
||||
local game shop on June 15.
|
||||
![plastic game pieces on a board][1]
|
||||
|
||||
Have you ever thought about trying Dungeons & Dragons but didn't know how to start? Did you play Traveller in your youth and have been thinking about returning to the hobby? Are you curious about role-playing games (RPGs) but not sure whether you want to play one? Are you completely new to the concept of tabletop gaming and have never heard of RPGs until now? It doesn't matter which of these profiles suits you, because [Free RPG Day][2] is for everyone!
|
||||
|
||||
The first Free RPG Day event happened in 2007 at hobby game stores all over the world. The idea was to bring new and exclusive RPG quickstart rules and adventures to both new and experienced gamers for $0. For one day, you could walk into your local game store and get a booklet containing simple, beginner-level rules for a tabletop RPG, which you could play with people there in the store or with friends back home. The booklet was yours to keep forever.
|
||||
|
||||
The event was such a smash hit that the tradition has continued ever since. This year, Free RPG Day is scheduled for Saturday, June 15.
|
||||
|
||||
### What's the catch?
|
||||
|
||||
Obviously, the idea behind Free RPG Day is to get you addicted to tabletop RPG gaming. Before you let instinctual cynicism kick in, consider that as addictions go, it's not too bad to fall in love with a game that encourages you to read books of rules and lore so you and your family and friends have an excuse to spend time together. Tabletop RPGs are a powerful, imaginative, and fun medium, and Free RPG Day is a great introduction.
|
||||
|
||||
![FreeRPG Day logo][3]
|
||||
|
||||
### Open gaming
|
||||
|
||||
Like so many other industries, the open source phenomenon has influenced tabletop gaming. Way back at the turn of the century, [Wizards of the Coast][4], purveyors of Magic: The Gathering and Dungeons & Dragons, decided to adopt open source methodology by developing the [Open Game License][5] (OGL). They used this license for editions 3 and 3.5 of the world's first RPG (Dungeons & Dragons). When they faltered years later for the 4th Edition, the publisher of _Dragon_ magazine forked the "code" of D&D 3.5, publishing its remix as the Pathfinder RPG, keeping innovation and a whole cottage industry of third-party game developers healthy. Recently, Wizards of the Coast returned to the OGL for D&D 5e.
|
||||
|
||||
The OGL allows developers to use, at the very least, a game's mechanics in a product of their own. You may or may not be allowed to use the names of custom monsters, weapons, kingdoms, or popular characters, but you can always use the rules and maths of an OGL game. In fact, the rules of an OGL game are often published for free as a [system reference document][6] (SRD) so, whether you purchase a copy of the rule book or not, you can learn how a game is played.
|
||||
|
||||
If you've never played a tabletop RPG before, it may seem strange that a game played with pen and paper can have a game engine, but computation is computation whether it's digital or analog. As a simplified example: suppose a game engine dictates that a player character has a number to represent its strength. When that player character fights a giant twice its strength, there's real tension when a player rolls dice to add to her character's strength-based attack. Without a very good roll, her strength won't match the giant's. Knowing this, a third-party or independent developer can design a monster for this game engine with an understanding of the effects that dice rolls can have on a player's ability score. This means they can base their math on the game engine's precedence. They can design a slew of monsters to slay, with meaningful abilities and skills in the context of the game engine, and they can advertise compatibility with that engine.
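As a toy illustration of that idea (a made-up strength check, not the mechanics of any published game), the core computation fits in a few lines of shell:

```
#!/usr/bin/env bash
# Hypothetical strength check: roll a d20, add the hero's strength,
# and compare the total against a giant twice as strong.
strength=12
giant=$(( strength * 2 ))
roll=$(( RANDOM % 20 + 1 ))   # RANDOM is a Bash built-in
total=$(( roll + strength ))
if [ "$total" -ge "$giant" ]; then
  echo "Rolled $roll: $total beats the giant ($giant). Hit!"
else
  echo "Rolled $roll: $total falls short of the giant ($giant). Miss!"
fi
```

A third-party developer designing for such an engine only needs these numbers and rules, which is exactly the sort of thing an SRD publishes.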
|
||||
|
||||
Furthermore, the OGL allows a publisher to define _product identity_ for their material. Product identity can be anything from the trade dress of the publication (graphical elements and layout), logos, terminology, lore, proper names, and so on. Anything defined as product identity may _not_ be reused without publisher consent. For example, suppose a publisher releases a book of weapons including a magical machete called Sigint, granting a +2 magical bonus to all of its attacks against zombies. This trait is explained by a story about how the machete was forged by a scientist with a latent zombie gene. However, the publication lists in section 1e of the OGL that all proper names of weapons are reserved as product identity. This means you can use the numbers (durability of the weapon, the damage it deals, the +2 magical bonus, and so on) and the lore associated with the machete (it was forged by a scientist with a latent zombie gene) in your own publication, but you cannot use the name of the weapon (Sigint).
|
||||
|
||||
The OGL is an extremely flexible license, so developers must read section 1e carefully. Some publishers reserve nothing but the layout of the publication itself, while others reserve everything but the numbers and the most generic of terms.
|
||||
|
||||
When the preeminent RPG franchise embraced open source, it sent waves through the industry that are still felt today. Third-party developers can create content for the 5e and Pathfinder systems. A whole website, [DungeonMastersGuild.com][7], featuring independent content for D&D 5e was created by Wizards of the Coast to promote independent publishing. Games like [Starfinder][8], [OpenD6][9], [Warrior, Rogue & Mage][10], [Swords & Wizardry][11], and many others have adopted the OGL. Other systems, like Brent Newhall's [Dungeon Delvers][12], [Fate][13], [Dungeon World][14], and many more are licensed under a [Creative Commons license][15].
|
||||
|
||||
### Get your RPG
|
||||
|
||||
Free RPG Day is a day for you to go to your local gaming store, play an RPG, and get materials for future RPG games you play with friends. Like a [Linux installfest][16] or [Software Freedom Day][17], the event is loosely defined. Each retailer may do Free RPG Day a little differently, each one running whatever game they choose. However, the free content donated by game publishers is the same each year. Obviously, the free stuff is subject to availability, but when you go to a Free RPG Day event, notice how many games are offered with an open license (if it's an OGL game, the OGL is printed in the back matter of the book). Any content for Pathfinder, Starfinder, and D&D is sure to have taken some advantage of the OGL. Content for many other systems uses Creative Commons licenses. Some, like the resurrected [Dead Earth][18] RPG from the '90s, use the [GNU Free Documentation License][19].
|
||||
|
||||
There are plenty of gaming resources out there that are developed with open licenses. You may or may not need to care about the license of a game; after all, the license has no bearing upon whether you can play it with friends or not. But if you enjoy supporting [free culture][20] in more ways than just the software you run, try out a few OGL or Creative Commons games. If you're new to gaming entirely, try out a tabletop RPG at your local game store on Free RPG Day!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/5/free-rpg-day
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth/users/erez/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/team-game-play-inclusive-diversity-collaboration.png?itok=8sUXV7W1 (plastic game pieces on a board)
|
||||
[2]: https://www.freerpgday.com/
|
||||
[3]: https://opensource.com/sites/default/files/uploads/freerpgday-logoblank.jpg (FreeRPG Day logo)
|
||||
[4]: https://company.wizards.com/
|
||||
[5]: http://www.opengamingfoundation.org/licenses.html
|
||||
[6]: https://www.d20pfsrd.com/
|
||||
[7]: https://www.dmsguild.com/
|
||||
[8]: https://paizo.com/starfinder
|
||||
[9]: https://ogc.rpglibrary.org/index.php?title=OpenD6
|
||||
[10]: http://www.stargazergames.eu/games/warrior-rogue-mage/
|
||||
[11]: https://froggodgames.com/frogs/product/swords-wizardry-complete-rulebook/
|
||||
[12]: http://brentnewhall.com/games/doku.php?id=games:dungeon_delvers
|
||||
[13]: http://www.faterpg.com/licensing/licensing-fate-cc-by/
|
||||
[14]: http://dungeon-world.com/
|
||||
[15]: https://creativecommons.org/
|
||||
[16]: https://www.tldp.org/HOWTO/Installfest-HOWTO/introduction.html
|
||||
[17]: https://www.softwarefreedomday.org/
|
||||
[18]: https://mixedsignals.ml/games/blog/blog_dead-earth
|
||||
[19]: https://www.gnu.org/licenses/fdl-1.3.en.html
|
||||
[20]: https://opensource.com/article/18/1/creative-commons-real-world
|
@ -1,113 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (murphyzhao)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Welcoming Blockchain 3.0)
|
||||
[#]: via: (https://www.ostechnix.com/welcoming-blockchain-3-0/)
|
||||
[#]: author: (sk https://www.ostechnix.com/author/sk/)
|
||||
|
||||
Welcoming Blockchain 3.0
|
||||
======
|
||||
|
||||
![Welcoming blockchain 3.0][1]
|
||||
|
||||
Image credit: <https://pixabay.com/illustrations/blockchain-network-business-3448502/>
|
||||
|
||||
The series of posts [**“Blockchain 2.0”**][2] discussed the evolution of blockchain technology since the advent of cryptocurrencies, beginning with Bitcoin in 2008. This post explores the future of blockchains. Fondly called **blockchain 3.0**, this new wave of DLT evolution aims to answer the issues blockchains currently face (each of which is summarized here). The next version of the tech standard will also bring new applications and use cases. At the end of the post, we will look at a few examples of these principles applied today.
|
||||
|
||||
A few of the shortcomings of existing blockchain platforms are listed below, each followed by a proposed solution.
|
||||
|
||||
### Problem 1: Scalability[1]
|
||||
|
||||
This is seen as the first major hurdle to mainstream adoption. As previously discussed, many limiting factors contribute to a blockchain’s inability to process a large number of transactions at the same time. Existing networks such as [**Ethereum**][3] are capable of a measly 10-15 transactions per second (TPS), whereas mainstream networks such as those employed by Visa are capable of more than 2000 TPS. **Scalability** is an issue that plagues all modern database systems; improved consensus algorithms and better blockchain architecture designs are improving it, though, as we see here.
|
||||
|
||||
**Solving scalability**
|
||||
|
||||
Implementing leaner and more efficient consensus algorithms has been proposed to solve scalability issues without disturbing the primary structure of the blockchain. While most cryptocurrencies and blockchain platforms (for instance, Bitcoin and Ethereum) use resource-intensive PoW algorithms to generate blocks, newer DPoS and PoET algorithms exist to solve this issue. DPoS and PoET algorithms (with more in development) require fewer resources to maintain the blockchain and have been shown to reach throughputs of up to thousands of TPS, rivalling those of popular non-blockchain systems.
|
||||
|
||||
The second solution to scalability is altering the blockchain structure[1] and functionality altogether. We won’t get into the finer details of this, but alternative architectures such as the **Directed Acyclic Graph** (**DAG**) have been proposed to handle the issue. Essentially, the assumption for this to work is that not all network nodes need to hold a copy of the entire blockchain for the blockchain to work or for the participants to reap the benefits of a DLT system. The system does not require transactions to be validated by all of the participants; it simply requires the transactions to happen in a common frame of reference and be linked to each other.
|
||||
|
||||
The DAG[2] approach is implemented in the Bitcoin system via an implementation called the **Lightning Network**, and Ethereum implements the same idea using its **Sharding**[3] protocol. At its heart, a DAG implementation is not technically a blockchain; it’s more like a tangled maze, but it still retains the peer-to-peer and distributed-database properties of a blockchain. We will explore DAG and Tangle networks in a separate post later.
|
||||
|
||||
### Problem 2: Interoperability[4][5]
|
||||
|
||||
**Interoperability**, also called cross-chain interaction, is basically different blockchains being able to talk to each other to exchange metrics and information. With more platforms than anyone can keep count of at the moment, and different companies coming up with proprietary systems for a myriad of applications, interoperability between platforms is key. For instance, at the moment, someone who owns digital identities on one platform cannot exploit features presented by other platforms, because the individual blockchains do not understand or know each other. Problems pertaining to lack of credible verification, token exchange, etc. still persist. A global rollout of [**smart contracts**][4] is also not viable without platforms being able to communicate with each other.
|
||||
|
||||
**Solving Interoperability**
|
||||
|
||||
There are protocols and platforms designed just for enabling interoperability at the moment. Such platforms implement atomic swap protocols and provide open stages for different blockchain systems to communicate and exchange information. An example is **“0x (ZRX)”**, which is described later on.
|
||||
|
||||
### Problem 3: Governance[6]
|
||||
|
||||
Not a limitation in its own right, **governance** in a public blockchain needs to act as a community moral compass, where everyone’s opinion on the operation of the blockchain is taken into account. Combined with scale, this presents a problem wherein either the protocols change far too frequently, or the protocols are changed at the whims of a “central” authority that holds the most tokens. This is not an issue most public blockchains are working to avoid right now, since the scale they operate at and the nature of their operations don’t require stricter supervision.
|
||||
|
||||
**Solving Governance issues**
|
||||
|
||||
The Tangle framework or the DAG mentioned above would almost eliminate the need for global (platform-wide) governance laws. Instead, a program can automatically oversee the transaction and user type and decide on the rules that need to be applied.
|
||||
|
||||
### Problem 4: Sustainability
|
||||
|
||||
**Sustainability** builds on the scalability issue again. Current blockchains and cryptocurrencies are notorious for being unsustainable in the long run, owing to the significant oversight still required and the amount of resources needed to keep the systems running. If you’ve read reports about how “mining cryptocurrencies” hasn’t been so profitable lately, this is why. The amount of resources required to keep existing platforms running is simply not practical at a global scale with mainstream use.
|
||||
|
||||
**Solving non-sustainability**
|
||||
|
||||
From a resource or economic point of view, the answer to sustainability would be similar to the one for scalability. However, for the system to be implemented on a global scale, laws and regulations need to endorse it, and that depends on the governments of the world. Favourable moves from the American and European governments have renewed hopes in this regard.
|
||||
|
||||
### Problem 5: User adoption[7]
|
||||
|
||||
Currently, a hindrance to widespread consumer adoption of blockchain-based applications is consumers’ unfamiliarity with the platforms and the tech underneath them. The fact that most applications require some sort of tech and computing background to figure out how they work does not help either. The third wave of blockchain development seeks to close the gap between consumer knowledge and platform usability.
|
||||
|
||||
**Solving the user adoption issue**
|
||||
|
||||
The internet took a long time to become what it is today; a lot of work has gone into developing a standardized internet technology stack over the years that allows the web to function the way it does now. Developers are working on user-facing, front-end distributed applications that should act as a layer on top of existing web 3.0 technology while being supported by blockchains and open protocols underneath. Such [**distributed applications**][5] will make the underlying technology more familiar to users, hence increasing mainstream adoption.
|
||||
|
||||
We’ve discussed the solutions to the above issues in theory; now we proceed to show them being applied in the present scenario.
|
||||
|
||||
**[0x][6]** – a decentralized token exchange where users from different platforms can exchange tokens without the need for a central authority to vet them. Their breakthrough lies in how they’ve designed the system to record and vet blocks only after transactions are settled, not in between (normally, blocks preceding the transaction order are also verified for context). This allows for a more liquid, faster exchange of digitized assets online.
|
||||
|
||||
**[Cardano][7]** – founded by one of the co-founders of Ethereum, Cardano boasts of being a truly “scientific” platform, with multiple reviews and strict protocols for the code and algorithms it develops. Everything out of Cardano is assumed to be as mathematically optimized as possible. Its consensus algorithm, called **Ouroboros**, is a modified proof-of-stake algorithm. Cardano is developed in [**Haskell**][8], and the smart contract engine uses a derivative of Haskell called **Plutus**. Both are functional programming languages, which guarantee secure transactions without compromising efficiency.
|
||||
|
||||
**EOS** – We’ve already described EOS here in [**this post**][9].
|
||||
|
||||
**[COTI][10]** – a rather obscure architecture, COTI entails no mining and next to zero power consumption in operation. It also stores assets in offline wallets localized on users’ devices rather than on a pure peer-to-peer network. COTI also follows a DAG-based architecture and claims processing throughputs of up to 10000 TPS. Its platform allows enterprises to build their own cryptocurrencies and digitized currency wallets without exploiting a blockchain.
|
||||
|
||||
**References:**
|
||||
|
||||
* [1] **A. P. Paper, K. Croman, C. Decker, I. Eyal, A. E. Gencer, and A. Juels, “On Scaling Decentralized Blockchains | SpringerLink,” 2018.**
|
||||
* [2] [**Going Beyond Blockchain with Directed Acyclic Graphs (DAG)**][11]
|
||||
* [3] [**Ethereum/wiki – On sharding blockchains**][12]
|
||||
* [4] [**Why is blockchain interoperability important**][13]
|
||||
* [5] [**The Importance of Blockchain Interoperability**][14]
|
||||
* [6] **R. Beck, C. Müller-Bloch, and J. L. King, “Governance in the Blockchain Economy: A Framework and Research Agenda,” J. Assoc. Inf. Syst., pp. 1020–1034, 2018.**
|
||||
* [7] **J. M. Woodside, F. K. A. Jr, W. Giberson, F. K. J. Augustine, and W. Giberson, “Blockchain Technology Adoption Status and Strategies,” J. Int. Technol. Inf. Manag., vol. 26, no. 2, pp. 65–93, 2017.**
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/welcoming-blockchain-3-0/
|
||||
|
||||
作者:[sk][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.ostechnix.com/wp-content/uploads/2019/06/blockchain-720x340.jpg
|
||||
[2]: https://www.ostechnix.com/blockchain-2-0-an-introduction/
|
||||
[3]: https://www.ostechnix.com/blockchain-2-0-what-is-ethereum/
|
||||
[4]: https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/
|
||||
[5]: https://www.ostechnix.com/blockchain-2-0-explaining-distributed-computing-and-distributed-applications/
|
||||
[6]: https://0x.org/
|
||||
[7]: https://www.cardano.org/en/home/
|
||||
[8]: https://www.ostechnix.com/getting-started-haskell-programming-language/
|
||||
[9]: https://www.ostechnix.com/blockchain-2-0-eos-io-is-building-infrastructure-for-developing-dapps/
|
||||
[10]: https://coti.io/
|
||||
[11]: https://cryptoslate.com/beyond-blockchain-directed-acylic-graphs-dag/
|
||||
[12]: https://github.com/ethereum/wiki/wiki/Sharding-FAQ#introduction
|
||||
[13]: https://www.capgemini.com/2019/02/can-the-interoperability-of-blockchains-change-the-world/
|
||||
[14]: https://medium.com/wanchain-foundation/the-importance-of-blockchain-interoperability-b6a0bbd06d11
|
@ -0,0 +1,104 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Cisco software to make networks smarter, safer, more manageable)
|
||||
[#]: via: (https://www.networkworld.com/article/3401523/cisco-software-to-make-networks-smarter-safer-more-manageable.html)
|
||||
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
|
||||
|
||||
Cisco software to make networks smarter, safer, more manageable
|
||||
======
|
||||
Cisco software announced at Cisco Live embraces AI to help customers set consistent network and security policies across their domains and improve intent-based networking.
|
||||
![bigstock][1]
|
||||
|
||||
SAN DIEGO—Cisco injected a number of new technologies into its key networking control-point software that makes it easier to stretch networking from the data center to the cloud while making the whole environment smarter and easier to manage.
|
||||
|
||||
At the company’s annual Cisco Live customer event here it rolled out software that lets customers more easily meld typically siloed domains across the enterprise and cloud to the wide area network. The software enables what Cisco calls multidomain integration that lets customers set policies to apply uniform access controls to users, devices and applications regardless of where they connect to the network, the company said.
|
||||
|
||||
**More about SD-WAN**
|
||||
|
||||
* [How to buy SD-WAN technology: Key questions to consider when selecting a supplier][2]
|
||||
* [How to pick an off-site data-backup method][3]
|
||||
* [SD-Branch: What it is and why you’ll need it][4]
|
||||
* [What are the options for security SD-WAN?][5]
|
||||
|
||||
|
||||
|
||||
The company also unveiled Cisco AI Network Analytics, a software package that uses [AI and machine learning techniques][6] to learn network traffic and security patterns that can help customers spot and fix problems proactively across the enterprise.
|
||||
|
||||
All of the new software runs on Cisco’s DNA Center platform, which is rapidly becoming an ever-more crucial component of the company’s intent-based networking plans. DNA Center has been important since its introduction two years ago, as it features automation capabilities, assurance settings, fabric provisioning and policy-based segmentation for enterprise networks.
|
||||
|
||||
Beyond device management and configuration, Cisco DNA Center gives IT teams the ability to control access through policies using Software-Defined Access (SD-Access), automatically provision through Cisco DNA Automation, virtualize devices through Cisco Network Functions Virtualization (NFV), and lower security risks through segmentation and Encrypted Traffic Analysis. But experts say these software enhancements take it to a new level.
|
||||
|
||||
“You can call it the rise of DNA Center and it’s important because it lets customers manage and control their entire network from one place – similar to what VMware does with its vCenter,” said Zeus Kerravala, founder and principal analyst with ZK Research. vCenter is VMware’s centralized platform for controlling its vSphere virtualized environments.
|
||||
|
||||
“Cisco will likely roll more and more functionality into DNA Center in the future making it stronger,” Kerravala said.
|
||||
|
||||
|
||||
|
||||
Together the new software and DNA Center will help customers set consistent policies across their domains and collaborate with others for the benefit of the entire network. Customers can define a policy once, apply it everywhere, and monitor it systematically to ensure it is realizing its business intent, said Prashanth Shenoy, Cisco vice president of marketing for Enterprise Network and Mobility. It will help customers segment their networks to reduce congestion, improve security and compliance and contain network problems, he said.
|
||||
|
||||
“In the campus, Cisco’s SD-Access solution uses this technology to group users and devices within the segments it creates according to their access privileges. Similarly, Cisco ACI creates groups of similar applications in the data center,” Shenoy said. “When integrated, SD-Access and ACI exchange their groupings and provide each other an awareness into their access policies. With this knowledge, each of the domains can map user groups with applications, jointly enforce policies, and block unauthorized access to applications.”
|
||||
|
||||
In the Cisco world, it basically means its central domain network controllers can now be unified so they work together, letting customers drive policies across domains.
|
||||
|
||||
Cisco also said that security capabilities can be spread across domains.
|
||||
|
||||
Cisco Advanced Malware Protection (AMP) prevents breaches, monitors malicious behavior and detects and removes malware. Security constructs built into Cisco SD-WAN, and the recently announced SD-WAN onRamp for CoLocation, provide a full security stack that applies protection consistently from user to branch to clouds. Cisco Stealthwatch and Stealthwatch Cloud detect threats across the private network, public clouds, and in encrypted traffic.
|
||||
|
||||
Analysts said Cisco’s latest efforts are an attempt to simplify what are fast becoming complex networks with tons of new devices and applications to support.
|
||||
|
||||
Cisco’s initial efforts were product-specific, but its latest announcements cross products and domains, said Lee Doyle, principal analyst with Doyle Research. “Cisco is making a strong push to make its networks easier to use, manage and program.”
|
||||
|
||||
That same strategy is behind the new AI Analytics program.
|
||||
|
||||
“Trying to manually analyze and troubleshoot the traffic flowing through thousands of APs, switches and routers is a near impossible task, even for the most sophisticated NetOps team. In a wireless environment, onboarding and interference errors can crop up randomly and intermittently, making it even more difficult to determine probable causes,” said Anand Oswal, senior vice president, engineering for Cisco’s Enterprise Networking Business.
|
||||
|
||||
Cisco has been integrating AI/ML into many operational and security components, with Cisco DNA Center the focal point for insights and actions, Oswal wrote in a [blog][8] about the AI announcement. AI Network Analytics collects massive amounts of network data from Cisco DNA Centers at participating customer sites, encrypts and anonymizes the data to ensure privacy, and collates all of it into the Cisco Worldwide Data Platform. In this cloud, the aggregated data is analyzed with deep machine learning to reveal patterns and anomalies such as:
|
||||
|
||||
* Highly personalized network baselines with multiple levels of granularity that define “normal” for a given network, site, building and SSID.
|
||||
|
||||
  * Sudden changes in onboarding times for Wi-Fi devices, by individual APs, floor, building, campus and branch.
|
||||
|
||||
* Simultaneous connectivity failures with numerous clients at a specific location
|
||||
|
||||
* Changes in SaaS and Cloud application performance via SD-WAN direct internet connections or [Cloud OnRamps][9].
|
||||
|
||||
* Pattern-matching capabilities of ML will be used to spot anomalies in network behavior that might otherwise be missed.
|
||||
|
||||
|
||||
|
||||
|
||||
“The intelligence of its large base of customers can help Cisco to derive important insights about how users can better manage their networks and solve problems, and the power of ML/AI technology will continue to improve over time,” Doyle said.
|
||||
|
||||
Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3401523/cisco-software-to-make-networks-smarter-safer-more-manageable.html
|
||||
|
||||
作者:[Michael Cooney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Michael-Cooney/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2018/11/intelligentnetwork-100780636-large.jpg
|
||||
[2]: https://www.networkworld.com/article/3323407/sd-wan/how-to-buy-sd-wan-technology-key-questions-to-consider-when-selecting-a-supplier.html
|
||||
[3]: https://www.networkworld.com/article/3328488/backup-systems-and-services/how-to-pick-an-off-site-data-backup-method.html
|
||||
[4]: https://www.networkworld.com/article/3250664/lan-wan/sd-branch-what-it-is-and-why-youll-need-it.html
|
||||
[5]: https://www.networkworld.com/article/3285728/sd-wan/what-are-the-options-for-securing-sd-wan.html?nsdr=true
|
||||
[6]: https://www.networkworld.com/article/3400382/cisco-will-use-aiml-to-boost-intent-based-networking.html
|
||||
[7]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fadministering-office-365-quick-start
|
||||
[8]: https://blogs.cisco.com/analytics-automation/cisco-ai-network-analytics-making-networks-smarter-simpler-and-more-secure
|
||||
[9]: https://www.networkworld.com/article/3393232/cisco-boosts-sd-wan-with-multicloud-to-branch-access-system.html
|
||||
[10]: https://www.facebook.com/NetworkWorld/
|
||||
[11]: https://www.linkedin.com/company/network-world
|
56
sources/tech/20190611 What is a Linux user.md
Normal file
@ -0,0 +1,56 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (What is a Linux user?)
|
||||
[#]: via: (https://opensource.com/article/19/6/what-linux-user)
|
||||
[#]: author: (Anderson Silva https://opensource.com/users/ansilva/users/petercheer/users/ansilva/users/greg-p/users/ansilva/users/ansilva/users/bcotton/users/ansilva/users/seth/users/ansilva/users/don-watkins/users/ansilva/users/seth)
|
||||
|
||||
What is a Linux user?
|
||||
======
|
||||
The definition of who is a "Linux user" has grown to be a bigger tent,
|
||||
and it's a great change.
|
||||
![][1]
|
||||
|
||||
> _Editor's note: this article was updated on Jun 11, 2019, at 1:15:19 PM to more accurately reflect the author's perspective on an open and inclusive community of practice in the Linux community._
|
||||
|
||||
In only two years, the Linux kernel will be 30 years old. Think about that! Where were you in 1991? Were you even born? I was 13! Between 1991 and 1993 a few Linux distributions were created, and at least three of them—Slackware, Debian, and Red Hat–provided the [backbone][2] the Linux movement was built on.
|
||||
|
||||
Getting a copy of a Linux distribution and installing and configuring it on a desktop or server was very different back then than today. It was hard! It was frustrating! It was an accomplishment if you got it running! We had to fight with incompatible hardware, configuration jumpers on devices, BIOS issues, and many other things. Even if the hardware was compatible, many times, you still had to compile the kernel, modules, and drivers to get them to work on your system.
|
||||
|
||||
If you were around during those days, you are probably nodding your head. Some readers might even call them the "good old days," because choosing to use Linux meant you had to learn about operating systems, computer architecture, system administration, networking, and even programming, just to keep the OS functioning. I am not one of them though: Linux being a regular part of everyone's technology experience is one of the most amazing changes in our industry!
|
||||
|
||||
Almost 30 years later, Linux has gone far beyond the desktop and server. You will find Linux in automobiles, airplanes, appliances, smartphones… virtually everywhere! You can even purchase laptops, desktops, and servers with Linux preinstalled. If you consider cloud computing, where corporations and even individuals can deploy Linux virtual machines with the click of a button, it's clear how widespread the availability of Linux has become.
|
||||
|
||||
With all that in mind, my question for you is: **How do you define a "Linux user" today?**
|
||||
|
||||
If you buy your parent or grandparent a Linux laptop from System76 or Dell, log them into their social media and email, and tell them to click "update system" every so often, they are now a Linux user. If you did the same with a Windows or MacOS machine, they would be Windows or MacOS users. It's incredible to me that, unlike the '90s, Linux is now a place for anyone and everyone to compute.
|
||||
|
||||
In many ways, this is due to the web browser becoming the "killer app" on the desktop computer. Now, many users don't care what operating system they are using as long as they can get to their app or service.
|
||||
|
||||
How many people do you know who use their phone, desktop, or laptop regularly but can't manage files, directories, and drivers on their systems? How many can't install a binary that isn't attached to an "app store" of some sort? How about compiling an application from scratch?! For me, it's almost no one. That's the beauty of open source software maturing along with an ecosystem that cares about accessibility.
|
||||
|
||||
Today's Linux user is not required to know, study, or even look up information as the Linux user of the '90s or early 2000s did, and that's not a bad thing. The old imagery of Linux being exclusively for bearded men is long gone, and I say good riddance.
|
||||
|
||||
There will always be room for a Linux user who is interested, curious, _fascinated_ about computers, operating systems, and the idea of creating, using, and collaborating on free software. There is just as much room for creative open source contributors on Windows and MacOS these days as well. Today, being a Linux user is being anyone with a Linux system. And that's a wonderful thing.
|
||||
|
||||
### The change to what it means to be a Linux user
|
||||
|
||||
When I started with Linux, being a user meant knowing how the operating system functioned in every way, shape, and form. Linux has matured in a way that allows the definition of "Linux users" to encompass a much broader world of possibility and the people who inhabit it. It may be obvious to say, but it is important to say clearly: anyone who uses Linux is an equal Linux user.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/6/what-linux-user
|
||||
|
||||
作者:[Anderson Silva][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ansilva/users/petercheer/users/ansilva/users/greg-p/users/ansilva/users/ansilva/users/bcotton/users/ansilva/users/seth/users/ansilva/users/don-watkins/users/ansilva/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22
|
||||
[2]: https://en.wikipedia.org/wiki/Linux_distribution#/media/File:Linux_Distribution_Timeline.svg
|
282
sources/tech/20190612 How to write a loop in Bash.md
Normal file
@ -0,0 +1,282 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to write a loop in Bash)
|
||||
[#]: via: (https://opensource.com/article/19/6/how-write-loop-bash)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth/users/goncasousa/users/howtopamm/users/howtopamm/users/seth/users/wavesailor/users/seth)
|
||||
|
||||
How to write a loop in Bash
|
||||
======
|
||||
Automatically perform a set of actions on multiple files with for loops
|
||||
and find commands.
|
||||
![bash logo on green background][1]
|
||||
|
||||
A common reason people want to learn the Unix shell is to unlock the power of batch processing. If you want to perform some set of actions on many files, one of the ways to do that is by constructing a command that iterates over those files. In programming terminology, this is called _execution control,_ and one of the most common examples of it is the **for** loop.
|
||||
|
||||
A **for** loop is a recipe detailing what actions you want your computer to take _for_ each data object (such as a file) you specify.
|
||||
|
||||
### The classic for loop
|
||||
|
||||
An easy loop to try is one that analyzes a collection of files. This probably isn't a useful loop on its own, but it's a safe way to prove to yourself that you have the ability to handle each file in a directory individually. First, create a simple test environment by creating a directory and placing some copies of some files into it. Any file will do initially, but later examples require graphic files (such as JPEG, PNG, or similar). You can create the folder and copy files into it using a file manager or in the terminal:
|
||||
|
||||
|
||||
```
|
||||
$ mkdir example
|
||||
$ cp ~/Pictures/vacation/*.{png,jpg} example
|
||||
```
|
||||
|
||||
Change directory to your new folder, then list the files in it to confirm that your test environment is what you expect:
|
||||
|
||||
|
||||
```
|
||||
$ cd example
|
||||
$ ls -1
|
||||
cat.jpg
|
||||
design_maori.png
|
||||
otago.jpg
|
||||
waterfall.png
|
||||
```
|
||||
|
||||
The syntax to loop through each file individually in a loop is: create a variable ( **f** for file, for example). Then define the data set you want the variable to cycle through. In this case, cycle through all files in the current directory using the ***** wildcard character (the ***** wildcard matches _everything_ ). Then terminate this introductory clause with a semicolon ( **;** ).
|
||||
|
||||
|
||||
```
|
||||
`$ for f in * ;`
|
||||
```
|
||||
|
||||
Depending on your preference, you can choose to press **Return** here. The shell won't try to execute the loop until it is syntactically complete.
|
||||
|
||||
Next, define what you want to happen with each iteration of the loop. For simplicity, use the **file** command to get a little bit of data about each file, represented by the **f** variable (but prepended with a **$** to tell the shell to swap out the value of the variable for whatever the variable currently contains):
|
||||
|
||||
|
||||
```
|
||||
`do file $f ;`
|
||||
```
|
||||
|
||||
Terminate the clause with another semicolon and close the loop:
|
||||
|
||||
|
||||
```
|
||||
`done`
|
||||
```
|
||||
|
||||
Press **Return** to start the shell cycling through _everything_ in the current directory. The **for** loop assigns each file, one by one, to the variable **f** and runs your command:
|
||||
|
||||
|
||||
```
|
||||
$ for f in * ; do
|
||||
> file $f ;
|
||||
> done
|
||||
cat.jpg: JPEG image data, EXIF standard 2.2
|
||||
design_maori.png: PNG image data, 4608 x 2592, 8-bit/color RGB, non-interlaced
|
||||
otago.jpg: JPEG image data, EXIF standard 2.2
|
||||
waterfall.png: PNG image data, 4608 x 2592, 8-bit/color RGB, non-interlaced
|
||||
```
|
||||
|
||||
You can also write it this way:
|
||||
|
||||
|
||||
```
|
||||
$ for f in *; do file $f; done
|
||||
cat.jpg: JPEG image data, EXIF standard 2.2
|
||||
design_maori.png: PNG image data, 4608 x 2592, 8-bit/color RGB, non-interlaced
|
||||
otago.jpg: JPEG image data, EXIF standard 2.2
|
||||
waterfall.png: PNG image data, 4608 x 2592, 8-bit/color RGB, non-interlaced
|
||||
```
|
||||
|
||||
Both the multi-line and single-line formats are the same to your shell and produce the exact same results.
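One caution the examples above gloss over: if a file name contains spaces, an unquoted **$f** gets split into separate words. Quoting the variable, as in this minimal sketch, keeps each name intact:

```
$ for f in * ; do file "$f" ; done
```

The quotes change nothing for the simple names used here, but they prevent the loop from breaking on a name like `my photo.jpg`.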
|
||||
|
||||
### A practical example
|
||||
|
||||
Here's a practical example of how a loop can be useful for everyday computing. Assume you have a collection of vacation photos you want to send to friends. Your photo files are huge, making them too large to email and inconvenient to upload to your [photo-sharing service][2]. You want to create smaller web-versions of your photos, but you have 100 photos and don't want to spend the time reducing each photo, one by one.
|
||||
|
||||
First, install the **ImageMagick** command using your package manager on Linux, BSD, or Mac. For instance, on Fedora and RHEL:
|
||||
|
||||
|
||||
```
|
||||
`$ sudo dnf install ImageMagick`
|
||||
```
|
||||
|
||||
On Ubuntu or Debian:
|
||||
|
||||
|
||||
```
|
||||
`$ sudo apt install ImageMagick`
|
||||
```
|
||||
|
||||
On BSD, use **ports** or [pkgsrc][3]. On Mac, use [Homebrew][4] or [MacPorts][5].
|
||||
|
||||
Once you install ImageMagick, you have a set of new commands to operate on photos.
|
||||
|
||||
Create a destination directory for the files you're about to create:
|
||||
|
||||
|
||||
```
|
||||
`$ mkdir tmp`
|
||||
```
|
||||
|
||||
To reduce each photo to 33% of its original size, try this loop:
|
||||
|
||||
|
||||
```
|
||||
`$ for f in * ; do convert $f -scale 33% tmp/$f ; done`
|
||||
```
|
||||
|
||||
Then look in the **tmp** folder to see your scaled photos.
|
||||
|
||||
You can use any number of commands within a loop, so if you need to perform complex actions on a batch of files, you can place your whole workflow between the **do** and **done** statements of a **for** loop. For example, suppose you want to copy each processed photo straight to a shared photo directory on your web host and remove the photo file from your local system:
|
||||
|
||||
|
||||
```
|
||||
$ for f in * ; do
|
||||
convert $f -scale 33% tmp/$f
|
||||
scp -i seth_web tmp/$f [seth@example.com][6]:~/public_html
|
||||
trash tmp/$f ;
|
||||
done
|
||||
```
|
||||
|
||||
For each file processed by the **for** loop, your computer automatically runs three commands. This means if you process just 10 photos this way, you save yourself 30 commands and probably at least as many minutes.
|
||||
|
||||
### Limiting your loop
|
||||
|
||||
A loop doesn't always have to look at every file. You might want to process only the JPEG files in your example directory:
|
||||
|
||||
|
||||
```
|
||||
$ for f in *.jpg ; do convert $f -scale 33% tmp/$f ; done
|
||||
$ ls -m tmp
|
||||
cat.jpg, otago.jpg
|
||||
```
|
||||
|
||||
Or, instead of processing files, you may need to repeat an action a specific number of times. A **for** loop's variable is defined by whatever data you provide it, so you can create a loop that iterates over numbers instead of files:
|
||||
|
||||
|
||||
```
|
||||
$ for n in {0..4}; do echo $n ; done
|
||||
0
|
||||
1
|
||||
2
|
||||
3
|
||||
4
|
||||
```
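Brace expansion like **{0..4}** won't expand a variable, so if the upper bound is computed at runtime, one common workaround (a sketch using the standard **seq** utility) is:

```
$ max=4
$ for n in $(seq 0 $max); do echo $n ; done
```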
|
||||
|
||||
### More looping
|
||||
|
||||
You now know enough to create your own loops. Until you're comfortable with looping, use them on _copies_ of the files you want to process and, as often as possible, use commands with built-in safeguards to prevent you from clobbering your data and making irreparable mistakes, like accidentally renaming an entire directory of files to the same name, each overwriting the other.
|
||||
|
||||
For advanced **for** loop topics, read on.
|
||||
|
||||
### Not all shells are Bash
|
||||
|
||||
The **for** keyword is built into the Bash shell. Many similar shells use the same keyword and syntax, but some shells, like [tcsh][7], use a different keyword, like **foreach** , instead.
|
||||
|
||||
In tcsh, the syntax is similar in spirit but more strict than Bash. In the following code sample, do not type the string **foreach?** in lines 2 and 3. It is a secondary prompt alerting you that you are still in the process of building your loop.
|
||||
|
||||
|
||||
```
|
||||
$ foreach f (*)
|
||||
foreach? file $f
|
||||
foreach? end
|
||||
cat.jpg: JPEG image data, EXIF standard 2.2
|
||||
design_maori.png: PNG image data, 4608 x 2592, 8-bit/color RGB, non-interlaced
|
||||
otago.jpg: JPEG image data, EXIF standard 2.2
|
||||
waterfall.png: PNG image data, 4608 x 2592, 8-bit/color RGB, non-interlaced
|
||||
```
|
||||
|
||||
In tcsh, both **foreach** and **end** must appear alone on separate lines, so you cannot create a **for** loop on one line as you can with Bash and similar shells.
|
||||
|
||||
### For loops with the find command
|
||||
|
||||
In theory, you could find a shell that doesn't provide a **for** loop function, or you may just prefer to use a different command with added features.
|
||||
|
||||
The **find** command is another way to implement the functionality of a **for** loop, as it offers several ways to define the scope of which files to include in your loop as well as options for [Parallel][8] processing.
|
||||
|
||||
The **find** command is meant to help you find files on your hard drives. Its syntax is simple: you provide the path of the location you want to search, and **find** finds all files and directories:
|
||||
|
||||
|
||||
```
|
||||
$ find .
|
||||
.
|
||||
./cat.jpg
|
||||
./design_maori.png
|
||||
./otago.jpg
|
||||
./waterfall.png
|
||||
```
|
||||
|
||||
You can filter the search results by adding some portion of the name:
|
||||
|
||||
|
||||
```
|
||||
$ find . -name "*jpg"
|
||||
./cat.jpg
|
||||
./otago.jpg
|
||||
```
|
||||
|
||||
The great thing about **find** is that each file it finds can be fed into a loop using the **-exec** flag. For instance, to scale down only the PNG photos in your example directory:
|
||||
|
||||
|
||||
```
|
||||
$ find . -name "*png" -exec convert {} -scale 33% tmp/{} \;
|
||||
$ ls -m tmp
|
||||
design_maori.png, waterfall.png
|
||||
```
|
||||
|
||||
In the **-exec** clause, the bracket characters **{}** stand in for whatever item **find** is processing (in other words, any file ending in PNG that has been located, one at a time). The **-exec** clause must be terminated with a semicolon, but Bash usually tries to use the semicolon for itself. You "escape" the semicolon with a backslash ( **\;** ) so that **find** knows to treat that semicolon as its terminating character.
|
||||
|
||||
The **find** command is very good at what it does, and it can be too good sometimes. For instance, if you reuse it to find PNG files for another photo process, you will get a few errors:
|
||||
|
||||
|
||||
```
|
||||
$ find . -name "*png" -exec convert {} -flip -flop tmp/{} \;
|
||||
convert: unable to open image `tmp/./tmp/design_maori.png':
|
||||
No such file or directory @ error/blob.c/OpenBlob/2643.
|
||||
...
|
||||
```
|
||||
|
||||
It seems that **find** has located all the PNG files—not only the ones in your current directory ( **.** ) but also those that you processed before and placed in your **tmp** subdirectory. In some cases, you may want **find** to search the current directory plus all other directories within it (and all directories in _those_ ). It can be a powerful recursive processing tool, especially in complex file structures (like directories of music artists containing directories of albums filled with music files), but you can limit this with the **-maxdepth** option.
|
||||
|
||||
To find only PNG files in the current directory (excluding subdirectories):
|
||||
|
||||
|
||||
```
|
||||
`$ find . -maxdepth 1 -name "*png"`
|
||||
```
|
||||
|
||||
To find and process files in the current directory plus an additional level of subdirectories, increment the maximum depth by 1:
|
||||
|
||||
|
||||
```
|
||||
`$ find . -maxdepth 2 -name "*png"`
|
||||
```
|
||||
|
||||
Its default is to descend into all subdirectories.
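As for the parallel processing mentioned earlier, one rough sketch is to pipe **find** 's results into [GNU Parallel][8] (assuming it is installed), which by default runs one job per CPU core:

```
$ find . -maxdepth 1 -name "*png" | parallel convert {} -scale 33% tmp/{}
```

In Parallel's syntax, **{}** again stands in for each incoming file name, so this mirrors the **-exec** version above while converting several photos at once.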
|
||||
|
||||
### Looping for fun and profit
|
||||
|
||||
The more you use loops, the more time and effort you save, and the bigger the tasks you can tackle. You're just one user, but with a well-thought-out loop, you can make your computer do the hard work.
|
||||
|
||||
You can and should treat looping like any other command, keeping it close at hand for when you need to repeat a single action or two on several files. However, it's also a legitimate gateway to serious programming, so if you have to accomplish a complex task on any number of files, take a moment out of your day to plan out your workflow. If you can achieve your goal on one file, then wrapping that repeatable process in a **for** loop is relatively simple, and the only "programming" required is an understanding of how variables work and enough organization to separate unprocessed from processed files. With a little practice, you can move from a Linux user to a Linux user who knows how to write a loop, so get out there and make your computer work for you!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/6/how-write-loop-bash
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth/users/goncasousa/users/howtopamm/users/howtopamm/users/seth/users/wavesailor/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background)
|
||||
[2]: http://nextcloud.com
|
||||
[3]: http://pkgsrc.org
|
||||
[4]: http://brew.sh
|
||||
[5]: https://www.macports.org
|
||||
[6]: mailto:seth@example.com
|
||||
[7]: https://en.wikipedia.org/wiki/Tcsh
|
||||
[8]: https://opensource.com/article/18/5/gnu-parallel
|
284
sources/tech/20190612 The bits and bytes of PKI.md
Normal file
@ -0,0 +1,284 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The bits and bytes of PKI)
[#]: via: (https://opensource.com/article/19/6/bits-and-bytes-pki)
[#]: author: (Alex Wood https://opensource.com/users/awood)

The bits and bytes of PKI
======
Take a look under the public key infrastructure's hood to get a better
understanding of its format.
![Computer keyboard typing][1]

In two previous articles— _[An introduction to cryptography and public key infrastructure][2]_ and _[How do private keys work in PKI and cryptography?][3]_ —I discussed cryptography and public key infrastructure (PKI) in a general way. I talked about how digital bundles called _certificates_ store public keys and identifying information. These bundles contain a lot of complexity, and it's useful to have a basic understanding of the format for when you need to look under the hood.

### Abstract art

Keys, certificate signing requests, certificates, and other PKI artifacts define themselves in a data description language called [Abstract Syntax Notation One][4] (ASN.1). ASN.1 defines a series of simple data types (integers, strings, dates, etc.) along with some structured types (sequences, sets). By using those types as building blocks, we can create surprisingly complex data formats.

ASN.1 contains plenty of pitfalls for the unwary, however. For example, it has two different ways of representing dates: GeneralizedTime ([ISO 8601][5] format) and UTCTime (which uses a two-digit year). Strings introduce even more confusion. We have IA5String for ASCII strings and UTF8String for Unicode strings. ASN.1 also defines several other string types, from the exotic [T61String][6] and [TeletexString][7] to the more innocuous sounding—but probably not what you wanted—PrintableString (only a small subset of ASCII) and UniversalString (encoded in [UTF-32][8]). If you're writing or reading ASN.1 data, I recommend referencing the [specification][9].

ASN.1 has another data type worth special mention: the object identifier (OID). OIDs are a series of integers, commonly shown with periods delimiting them. Each integer represents a node in what is basically a "tree of things." For example, [1.3.6.1.4.1.2312][10] is the OID for my employer, Red Hat, where "1" is the node for the International Organization for Standardization (ISO), "3" is for ISO-identified organizations, "6" is for the US Department of Defense (which, for historical reasons, is the parent to the next node), "1" is for the internet, "4" is for private organizations, "1" is for enterprises, and finally "2312," which is Red Hat's own.

More commonly, OIDs are used to identify specific algorithms in PKI objects. If you have a digital signature, it's not much use if you don't know what type of signature it is. The signature algorithm "sha256WithRSAEncryption" has the OID "1.2.840.113549.1.1.11," for example.
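
To spot one of these algorithm OIDs in the wild, you can generate a throwaway self-signed certificate and let OpenSSL translate the OID into its friendly name (a quick sketch; the subject name is just a placeholder):

```
% openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example" \
    -keyout /dev/null 2> /dev/null | \
    openssl x509 -noout -text | grep -m 1 "Signature Algorithm"
        Signature Algorithm: sha256WithRSAEncryption
```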

### ASN.1 at work

Suppose we own a factory that produces flying brooms, and we need to store some data about every broom. Our brooms have a model name, a serial number, and a series of inspections that have been made to ensure flight-worthiness. We could store this information using ASN.1 like so:

```
BroomInfo ::= SEQUENCE {
    model UTF8String,
    serialNumber INTEGER,
    inspections SEQUENCE OF InspectionInfo
}

InspectionInfo ::= SEQUENCE {
    inspectorName UTF8String,
    inspectionDate GeneralizedTime
}
```

The example above defines the model name as a UTF8-encoded string, the serial number as an integer, and our inspections as a series of InspectionInfo items. Then we see that each InspectionInfo item comprises two pieces of data: the inspector's name and the time of the inspection.

An actual instance of BroomInfo data would look something like this in ASN.1's value assignment syntax:

```
broom BroomInfo ::= {
    model "Nimbus 2000",
    serialNumber 1066,
    inspections {
        {
            inspectorName "Harry",
            inspectionDate "201901011200Z"
        },
        {
            inspectorName "Hagrid",
            inspectionDate "201902011200Z"
        }
    }
}
```

Don't worry too much about the particulars of the syntax; for the average developer, having a basic grasp of how the pieces fit together is sufficient.

Now let's look at a real example from [RFC 8017][11] that I have abbreviated somewhat for clarity:

```
RSAPrivateKey ::= SEQUENCE {
    version           Version,
    modulus           INTEGER,  -- n
    publicExponent    INTEGER,  -- e
    privateExponent   INTEGER,  -- d
    prime1            INTEGER,  -- p
    prime2            INTEGER,  -- q
    exponent1         INTEGER,  -- d mod (p-1)
    exponent2         INTEGER,  -- d mod (q-1)
    coefficient       INTEGER,  -- (inverse of q) mod p
    otherPrimeInfos   OtherPrimeInfos OPTIONAL
}

Version ::= INTEGER { two-prime(0), multi(1) }
    (CONSTRAINED BY
    {-- version must be multi if otherPrimeInfos present --})

OtherPrimeInfos ::= SEQUENCE SIZE(1..MAX) OF OtherPrimeInfo

OtherPrimeInfo ::= SEQUENCE {
    prime             INTEGER,  -- ri
    exponent          INTEGER,  -- di
    coefficient       INTEGER   -- ti
}
```

The ASN.1 above defines the PKCS #1 format used to store RSA keys. Looking at this, we can see the RSAPrivateKey sequence starts with a version type (either 0 or 1) followed by a bunch of integers and then an optional type called OtherPrimeInfos. The OtherPrimeInfos sequence contains one or more pieces of OtherPrimeInfo. And each OtherPrimeInfo is just a sequence of integers.

Let's look at an actual instance by asking OpenSSL to generate an RSA key and then pipe it into [asn1parse][12], which will print it out in a more human-friendly format. (By the way, the **genrsa** command I'm using here has been superseded by **genpkey**; we'll see why a little later.)

```
% openssl genrsa 4096 2> /dev/null | openssl asn1parse
    0:d=0  hl=4 l=2344 cons: SEQUENCE
    4:d=1  hl=2 l=   1 prim: INTEGER           :00
    7:d=1  hl=4 l= 513 prim: INTEGER           :B80B0C2443...
  524:d=1  hl=2 l=   3 prim: INTEGER           :010001
  529:d=1  hl=4 l= 512 prim: INTEGER           :59C609C626...
 1045:d=1  hl=4 l= 257 prim: INTEGER           :E8FC43002D...
 1306:d=1  hl=4 l= 257 prim: INTEGER           :CA39222DD2...
 1567:d=1  hl=4 l= 256 prim: INTEGER           :25F6CD181F...
 1827:d=1  hl=4 l= 256 prim: INTEGER           :38CCE374CB...
 2087:d=1  hl=4 l= 257 prim: INTEGER           :C80430E810...
```

Recall that RSA uses a modulus, _n_; a public exponent, _e_; and a private exponent, _d_. Now let's look at the sequence. First, we see the version set to 0 for a two-prime RSA key (what **genrsa** generates), an integer for the modulus, _n_, and then 0x010001 for the public exponent, _e_. If we convert to decimal, we'll see our public exponent is 65537, a number [commonly][13] used as an RSA public exponent. Following the public exponent, we see the integer for the private exponent, _d_, and then some other integers that are used to speed up decryption and signing. Explaining how this optimization works is beyond the scope of this article, but if you like math, there's a [good video on the subject][14].

What about that other stuff on the left side of the output? What do "hl=4" and "l=513" mean? We'll cover that shortly.

### DERangement

We've seen the "abstract" part of Abstract Syntax Notation One, but how does this data get encoded and stored? For that, we turn to a binary format called Distinguished Encoding Rules (DER) defined in the [X.690][15] specification. DER is a stricter version of its parent, Basic Encoding Rules (BER), in that for any given data, there is only one way to encode it. If we're going to be digitally signing data, it makes things a lot easier if there is only one possible encoding that needs to be signed instead of dozens of functionally equivalent representations.

DER uses a [tag-length-value][16] (TLV) structure. The encoding of a piece of data begins with an identifier octet defining the data's type. ("Octet" is used rather than "byte" since the standard is very old and some early architectures didn't use 8 bits for a byte.) Next are the octets that encode the length of the data, and finally, there is the data. The data can be another TLV series. The left side of the **asn1parse** output makes a little more sense now. The first number indicates the absolute offset from the beginning. The "d=" tells us the depth of that item in the structure. The first line is a sequence, which we descend into on the next line (the depth _d_ goes from 0 to 1) whereupon **asn1parse** begins enumerating all the elements in that sequence. The "hl=" is the header length (the sum of the identifier and length octets), and the "l=" tells us the length of that particular piece of data.

How is header length determined? It's the sum of the identifier byte and the bytes encoding the length. In our example, the top sequence is 2344 octets long. If it were less than 128 octets, the length would be encoded in a single octet in the "short form": bit 8 would be a zero and bits 7 to 1 would hold the length value (**2^7 - 1 = 127**). A value of 2344 needs more space, so the "long" form is used. The first octet has bit 8 set to one, and bits 7 to 1 contain the length of the length. In our case, a value of 2344 can be encoded in two octets (0x0928). Combined with the first "length of the length" octet, we have three octets total. Add the one identifier octet, and that gives us our total header length of four.
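
You can look at those header octets directly by converting a fresh key to DER and dumping the first few bytes (a quick sketch; the exact length octets will differ from key to key):

```
% openssl genpkey -algorithm RSA 2> /dev/null | openssl pkey -outform DER | xxd -l 4
```

The first byte should be 0x30 (a constructed SEQUENCE), and for a key this large, the next octet will be 0x82, signaling that the following two octets hold the length: the long form described above.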

As a side exercise, let's consider the largest value we could possibly encode. We've seen that we have up to 127 octets to encode a length. At 8 bits per octet, we have a total of 1008 bits to use, so we can hold a number equal to **2^1008 - 1**. That would equate to a content length of **2.743062×10^279** yottabytes, staggeringly more than the estimated **10^80** atoms in the observable universe. If you're interested in all the details, I recommend reading "[A Layman's Guide to a Subset of ASN.1, BER, and DER][17]."

What about "cons" and "prim"? Those indicate whether the value is encoded with "constructed" or "primitive" encoding. Primitive encoding is used for simple types like "INTEGER" or "BOOLEAN," while constructed encoding is used for structured types like "SEQUENCE" or "SET." The actual difference between the two encoding methods is whether bit 6 in the identifier octet is a zero or one. If it's a one, the parser knows that the content octets are also DER-encoded and it can descend.

### PEM pals

While useful in a lot of cases, a binary format won't pass muster if we need to display the data as text. Before the [MIME][18] standard existed, attachment support was spotty. Commonly, if you wanted to attach data, you put it in the body of the email, and since SMTP only supported ASCII, that meant converting your binary data (like the DER of your public key, for example) into ASCII characters.

Thus, the PEM format emerged. PEM stands for "Privacy-Enhanced Email" and was an early standard for transmitting and storing PKI data. The standard never caught on, but the format it defined for storage did. PEM-encoded objects are just DER objects that are [base64][19]-encoded and wrapped at 64 characters per line. To describe the type of object, a header and footer surround the base64 string. You'll see **-----BEGIN CERTIFICATE-----** or **-----BEGIN PRIVATE KEY-----**, for example.
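
To make that concrete, here is a sketch that builds a PEM by hand from a DER-encoded key (**openssl base64** wraps its output at 64 characters, matching the PEM convention) and then confirms OpenSSL can parse the result:

```
% openssl genpkey -algorithm RSA -outform DER -out key.der 2> /dev/null
% { echo "-----BEGIN PRIVATE KEY-----"; \
    openssl base64 -in key.der; \
    echo "-----END PRIVATE KEY-----"; } > key.pem
% openssl pkey -in key.pem -noout && echo "parsed OK"
parsed OK
```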

Often you'll see files with the ".pem" extension. I don't find this suffix useful. The file could contain a certificate, a key, a certificate signing request, or several other possibilities. Imagine going to a sushi restaurant and seeing a menu that described every item as "fish and rice"! Instead, I prefer more informative extensions like ".crt", ".key", and ".csr".

### The PKCS zoo

Earlier, I showed an example of a PKCS #1-formatted RSA key. As you might expect, formats for storing certificates and signing requests also exist in various IETF RFCs. For example, PKCS #8 can be used to store private keys for many different algorithms (including RSA!). Here's some of the ASN.1 from [RFC 5208][20] for PKCS #8. (RFC 5208 has been obsoleted by RFC 5958, but I feel that the ASN.1 in RFC 5208 is easier to understand.)

```
PrivateKeyInfo ::= SEQUENCE {
    version Version,
    privateKeyAlgorithm PrivateKeyAlgorithmIdentifier,
    privateKey PrivateKey,
    attributes [0] IMPLICIT Attributes OPTIONAL }

Version ::= INTEGER

PrivateKeyAlgorithmIdentifier ::= AlgorithmIdentifier

PrivateKey ::= OCTET STRING

Attributes ::= SET OF Attribute
```

If you store your RSA private key in a PKCS #8, the PrivateKey element will actually be a DER-encoded PKCS #1! Let's prove it. Remember earlier when I used **genrsa** to generate a PKCS #1? OpenSSL can generate a PKCS #8 with the **genpkey** command, and you can specify RSA as the algorithm to use.

```
% openssl genpkey -algorithm RSA | openssl asn1parse
    0:d=0  hl=4 l= 629 cons: SEQUENCE
    4:d=1  hl=2 l=   1 prim: INTEGER           :00
    7:d=1  hl=2 l=  13 cons: SEQUENCE
    9:d=2  hl=2 l=   9 prim: OBJECT            :rsaEncryption
   20:d=2  hl=2 l=   0 prim: NULL
   22:d=1  hl=4 l= 607 prim: OCTET STRING      [HEX DUMP]:3082025B...
```

You may have spotted the "OBJECT" in the output and guessed that was related to OIDs. You'd be correct. The OID "1.2.840.113549.1.1.1" is assigned to RSA encryption. OpenSSL has a built-in list of common OIDs and translates them into a human-readable form for you.

```
% openssl genpkey -algorithm RSA | openssl asn1parse -strparse 22
    0:d=0  hl=4 l= 604 cons: SEQUENCE
    4:d=1  hl=2 l=   1 prim: INTEGER           :00
    7:d=1  hl=3 l= 129 prim: INTEGER           :CA6720E706...
  139:d=1  hl=2 l=   3 prim: INTEGER           :010001
  144:d=1  hl=3 l= 128 prim: INTEGER           :05D0BEBE44...
  275:d=1  hl=2 l=  65 prim: INTEGER           :F215DC6B77...
  342:d=1  hl=2 l=  65 prim: INTEGER           :D6095CED7E...
  409:d=1  hl=2 l=  64 prim: INTEGER           :402C7562F3...
  475:d=1  hl=2 l=  64 prim: INTEGER           :06D0097B2D...
  541:d=1  hl=2 l=  65 prim: INTEGER           :AB266E8E51...
```

In the second command, I've told **asn1parse** via the **-strparse** argument to move to octet 22 and begin parsing the content's octets there as an ASN.1 object. We can clearly see that the PKCS #8's PrivateKey looks just like the PKCS #1 that we examined earlier.

You should favor using the **genpkey** command. PKCS #8 has some features that PKCS #1 does not: PKCS #8 can store private keys for multiple different algorithms (PKCS #1 is RSA-specific), and it provides a mechanism to encrypt the private key using a passphrase and a symmetric cipher.

Encrypted PKCS #8 objects use a different ASN.1 syntax that I'm not going to dive into, but let's take a look at an actual example and see if anything stands out. Encrypting a private key with **genpkey** requires that you specify the symmetric encryption algorithm to use. I'll use AES-256-CBC for this example and a password of "hello" (the "pass:" prefix is the way of telling OpenSSL that the password is coming in from the command line).

```
% openssl genpkey -algorithm RSA -aes-256-cbc -pass pass:hello | openssl asn1parse
    0:d=0  hl=4 l= 733 cons: SEQUENCE
    4:d=1  hl=2 l=  87 cons: SEQUENCE
    6:d=2  hl=2 l=   9 prim: OBJECT            :PBES2
   17:d=2  hl=2 l=  74 cons: SEQUENCE
   19:d=3  hl=2 l=  41 cons: SEQUENCE
   21:d=4  hl=2 l=   9 prim: OBJECT            :PBKDF2
   32:d=4  hl=2 l=  28 cons: SEQUENCE
   34:d=5  hl=2 l=   8 prim: OCTET STRING      [HEX DUMP]:17E6FE554E85810A
   44:d=5  hl=2 l=   2 prim: INTEGER           :0800
   48:d=5  hl=2 l=  12 cons: SEQUENCE
   50:d=6  hl=2 l=   8 prim: OBJECT            :hmacWithSHA256
   60:d=6  hl=2 l=   0 prim: NULL
   62:d=3  hl=2 l=  29 cons: SEQUENCE
   64:d=4  hl=2 l=   9 prim: OBJECT            :aes-256-cbc
   75:d=4  hl=2 l=  16 prim: OCTET STRING      [HEX DUMP]:91E9536C39...
   93:d=1  hl=4 l= 640 prim: OCTET STRING      [HEX DUMP]:98007B264F...

% openssl genpkey -algorithm RSA -aes-256-cbc -pass pass:hello | head -n 1
-----BEGIN ENCRYPTED PRIVATE KEY-----
```

There are a couple of interesting items here. We see our encryption algorithm is recorded with an OID starting at octet 64. There's an OID for "PBES2" (Password-Based Encryption Scheme 2), which defines a standard process for encryption and decryption, and an OID for "PBKDF2" (Password-Based Key Derivation Function 2), which defines a standard process for creating encryption keys from passwords. Helpfully, OpenSSL uses the header "ENCRYPTED PRIVATE KEY" in the PEM output.

OpenSSL will let you encrypt a PKCS #1, but it's done in a non-standard way via a series of headers inserted into the PEM:

```
% openssl genrsa -aes256 -passout pass:hello 4096
-----BEGIN RSA PRIVATE KEY-----
Proc-Type: 4,ENCRYPTED
DEK-Info: AES-256-CBC,5B2C64DC05B7C0471A278C76562FD776
...
```

### In conclusion

There's a final PKCS format you need to know about: [PKCS #12][21]. The PKCS #12 format allows for storing multiple objects all in one file. If you have a certificate and its corresponding key or a chain of certificates, you can store them together in one PKCS #12 file. Individual entries in the file can be protected with password-based encryption.
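
As a quick sketch of the format in action (assuming you already have a key in **key.pem** and its certificate in **cert.pem**; the bundle name and password are arbitrary):

```
% openssl pkcs12 -export -inkey key.pem -in cert.pem \
    -passout pass:hello -out bundle.p12
% openssl pkcs12 -in bundle.p12 -passin pass:hello -info -noout
```

The second command summarizes the MAC and encryption settings used for each part of the bundle.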

Beyond the PKCS formats, there are other storage methods, such as the Java-specific JKS format and the NSS library from Mozilla, which uses file-based databases (SQLite or Berkeley DB, depending on the version). Luckily, the PKCS formats are a lingua franca that can serve as a start or reference if you need to deal with other formats.

If this all seems confusing, that's because it is. Unfortunately, the PKI ecosystem has a lot of sharp edges between tools that generate enigmatic error messages (looking at you, OpenSSL) and standards that have grown and evolved over the past 35 years. Having a basic understanding of how PKI objects are stored is critical if you're doing any application development that will be accessed over SSL/TLS.

I hope this article has shed a little light on the subject and might save you from spending fruitless hours in the PKI wilderness.

* * *

_The author would like to thank Hubert Kario for providing a technical review._

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/bits-and-bytes-pki

作者:[Alex Wood][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/awood
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/keyboaord_enter_writing_documentation.jpg?itok=kKrnXc5h (Computer keyboard typing)
[2]: https://opensource.com/article/18/5/cryptography-pki
[3]: https://opensource.com/article/18/7/private-keys
[4]: https://en.wikipedia.org/wiki/Abstract_Syntax_Notation_One
[5]: https://en.wikipedia.org/wiki/ISO_8601
[6]: https://en.wikipedia.org/wiki/ITU_T.61
[7]: https://en.wikipedia.org/wiki/Teletex
[8]: https://en.wikipedia.org/wiki/UTF-32
[9]: https://www.itu.int/itu-t/recommendations/rec.aspx?rec=X.680
[10]: https://www.alvestrand.no/objectid/1.3.6.1.4.1.2312.html
[11]: https://tools.ietf.org/html/rfc8017
[12]: https://linux.die.net/man/1/asn1parse
[13]: https://www.johndcook.com/blog/2018/12/12/rsa-exponent/
[14]: https://www.youtube.com/watch?v=NcPdiPrY_g8
[15]: https://en.wikipedia.org/wiki/X.690
[16]: https://en.wikipedia.org/wiki/Type-length-value
[17]: http://luca.ntop.org/Teaching/Appunti/asn1.html
[18]: https://www.theguardian.com/technology/2012/mar/26/ather-of-the-email-attachment
[19]: https://en.wikipedia.org/wiki/Base64
[20]: https://tools.ietf.org/html/rfc5208
[21]: https://tools.ietf.org/html/rfc7292

sources/tech/20190612 Why use GraphQL.md
@ -0,0 +1,97 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why use GraphQL?)
[#]: via: (https://opensource.com/article/19/6/why-use-graphql)
[#]: author: (Zach Lendon https://opensource.com/users/zachlendon/users/goncasousa/users/patrickhousley)

Why use GraphQL?
======
Here's why GraphQL is gaining ground on standard REST API technology.
![][1]

[GraphQL][2], as I wrote [previously][3], is a next-generation API technology that is transforming both how client applications communicate with backend systems and how backend systems are designed.

As a result of the support that began with the organization that founded it, Facebook, and continues with the backing of other technology giants such as GitHub, Twitter, and Airbnb, GraphQL's place as a linchpin technology for application systems seems secure, both now and long into the future.

### GraphQL's ascent

The rise in importance of mobile application performance and organizational agility has provided booster rockets for GraphQL's ascent to the top of modern enterprise architectures.

Given that [REST][4] is a wildly popular architectural style that already allows mechanisms for data interaction, what advantages does this new technology provide over [REST][4]? The ‘QL’ in GraphQL stands for query language, and that is a great place to start.

The ease with which different client applications within an organization can query only the data they need with GraphQL usurps alternative REST approaches and delivers real-world application performance boosts. With traditional [REST][4] API endpoints, client applications interrogate a server resource and receive a response containing all the data that matches the request. If a successful response from a [REST][4] API endpoint returns 35 fields, the client application receives all 35 fields, even if it needs only a few of them.

### Fetching problems

[REST][4] APIs traditionally provide no clean way for client applications to retrieve or update only the data they care about. This is often described as the “over-fetching” problem. With the prevalence of mobile applications in people’s day-to-day lives, the over-fetching problem has real-world consequences. Every request a mobile application needs to make, every byte it has to send and receive, has an increasingly negative performance impact for end users. Users with slower data connections are particularly affected by suboptimal API design choices. Customers who experience poor performance using mobile applications are less likely to purchase products and use services. Inefficient API designs cost companies money.

“Over-fetching” isn’t alone; it has a partner in crime: “under-fetching.” Endpoints that, by default, return only a portion of the data a client actually needs force clients to make additional HTTP requests to satisfy their data needs. Because of the over- and under-fetching problems and their impact on client application performance, an API technology that facilitates efficient fetching has a chance to catch fire in the marketplace, and GraphQL has boldly jumped in and filled that void.
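
To make the two problems concrete, here is a sketch of the REST round trips involved (the endpoints here are hypothetical):

```
$ curl -s https://api.example.com/users/42          # over-fetching: every field of the user comes back
$ curl -s https://api.example.com/users/42/orders   # under-fetching: a second request for related data
```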

### REST's response

[REST][4] API designers, not willing to go down without a fight, have attempted to counter the mobile application performance problem through a mix of:

  * “include” and “exclude” query parameters, allowing client applications to specify which fields they want through a potentially long query format.
  * “Composite” services, which combine multiple endpoints in a way that allows client applications to be more efficient in the number of requests they make and the data they receive.

While these patterns are a valiant attempt by the [REST][4] API community to address challenges mobile clients face, they fall short in a few key regards, namely:

  * Include and exclude query key/value pairs quickly get messy, in particular for deeper object graphs that require a nested dot notation syntax (or similar) to target data to include and exclude. Additionally, debugging issues with the query string in this model often requires manually breaking up a URL.
  * Server implementations for include and exclude queries are often custom, as there is no standard way for server-based applications to handle the use of include and exclude queries, just as there is no standard way for include and exclude queries to be defined.
  * The rise of composite services creates more tightly coupled back-end and front-end systems, requiring increasing coordination to deliver projects and turning once-agile projects back into waterfall. This coordination and coupling has the painful side effect of slowing organizational agility. Additionally, composite services are, by definition, not RESTful.

### GraphQL's genesis

For Facebook, GraphQL’s genesis was a response to pain felt and experiences learned from an HTML5-based version of its flagship mobile application back in 2011-2012. Understanding that improved performance was paramount, Facebook engineers realized that they needed a new API design to ensure peak performance. Considering the [REST][4] limitations above, along with the need to support the differing needs of a number of API clients, one can begin to understand the early seeds of what led co-creators Lee Byron and Dan Schafer, Facebook employees at the time, to create what has become known as GraphQL.

With what is often a single GraphQL endpoint, through the GraphQL query language, client applications are able to reduce, often significantly, the number of network calls they need to make, and ensure that they are only retrieving the data they need. In many ways, this harkens back to earlier models of web programming, where client application code would directly query back-end systems - some might remember writing SQL queries with JSTL on JSPs 10-15 years ago, for example!
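
For a flavor of what that looks like from the client side, here is a sketch of a field-selective query sent to a single GraphQL endpoint with curl (the URL and schema are hypothetical):

```
$ curl -s -X POST https://api.example.com/graphql \
    -H "Content-Type: application/json" \
    -d '{"query": "{ user(id: 42) { name email } }"}'
```

However many fields the user type defines, the response contains only the two requested, in a single round trip.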

The biggest difference now is that with GraphQL, we have a specification that is implemented across a variety of client and server languages and libraries. And with GraphQL being an API technology, we have decoupled the back-end and front-end application systems by introducing an intermediary GraphQL application layer that provides a mechanism to access organizational data in a manner that aligns with an organization’s business domain(s).

Beyond solving technical challenges experienced by software engineering teams, GraphQL has also been a boost to organizational agility, in particular in the enterprise. GraphQL-enabled organizational agility increases are commonly attributable to the following:

  * Rather than creating new endpoints when one or more new fields are needed by clients, GraphQL API designers and developers are able to include those fields in existing graph implementations, exposing new capabilities in a fashion that requires less development effort and less change across application systems.
  * By encouraging API design teams to focus more on defining their object graph and less on what client applications are delivering, front-end and back-end software teams can deliver solutions for customers with increasing independence from each other.

### Considerations before adoption

Despite its compelling benefits, GraphQL is not without its implementation challenges. A few examples include:

  * Caching mechanisms around [REST][4] APIs are much more mature.
  * The patterns used to build APIs using [REST][4] are much more well established.
  * While engineers may be more attracted to newer technologies like GraphQL, the talent pool in the marketplace is much broader for building [REST][4]-based solutions than for GraphQL.

### Conclusion

By providing both a boost to performance and organizational agility, GraphQL's adoption by companies has skyrocketed in the past few years. It does, however, have some maturing to do in comparison to the RESTful ecosystem of API design.

One of the great benefits of GraphQL is that it’s not designed as a wholesale replacement for alternative API solutions. Instead, GraphQL can be implemented to complement or enhance existing APIs. As a result, companies are encouraged to explore incrementally adopting GraphQL where it makes the most sense for them - where they find it has the greatest positive impact on application performance and organizational agility.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/why-use-graphql

作者:[Zach Lendon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/zachlendon/users/goncasousa/users/patrickhousley
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_graph_stats_blue.png?itok=OKCc_60D
[2]: https://graphql.org/
[3]: https://opensource.com/article/19/6/what-is-graphql
[4]: https://en.wikipedia.org/wiki/Representational_state_transfer
@ -0,0 +1,98 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Continuous integration testing for the Linux kernel)
[#]: via: (https://opensource.com/article/19/6/continuous-kernel-integration-linux)
[#]: author: (Major Hayden https://opensource.com/users/mhayden)

Continuous integration testing for the Linux kernel
======
How this team works to prevent bugs from being merged into the Linux
kernel.
![Linux kernel source code \(C\) in Visual Studio Code][1]

With 14,000 changesets per release from over 1,700 different developers, it's clear that the Linux kernel moves quickly and brings plenty of complexity. Kernel bugs range from small annoyances to larger problems, such as system crashes and data loss.

As the call for continuous integration (CI) grows for more and more projects, the [Continuous Kernel Integration (CKI)][2] team forges ahead with a single mission: prevent bugs from being merged into the kernel.

### Linux testing problems

Many Linux distributions test the Linux kernel only when needed. This testing often occurs around release time, or when users find a bug.

Unrelated issues sometimes appear, and maintainers scramble to find which patch in a changeset full of tens of thousands of patches caused the new bug. Diagnosing the bug may require specialized hardware, a series of triggers, and specialized knowledge of that portion of the kernel.

#### CI and Linux

Most modern software repositories have some sort of automated CI testing that tests commits before they find their way into the repository. This automated testing allows the maintainers to find software quality issues, along with most bugs, by reviewing the CI report. Simpler projects, such as a Python library, come with tons of tools to make this process easier.

Linux must be configured and compiled prior to any testing. Doing so takes time and compute resources. In addition, that kernel must boot in a virtual machine or on a bare metal machine for testing. Getting access to certain system architectures requires additional expense or very slow emulation. From there, someone must identify a set of tests which trigger the bug or verify the fix.
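
To get a sense of that cost, here is roughly what one configure-and-compile cycle looks like (a minimal sketch assuming an x86_64 build host and the default configuration; the CKI pipelines repeat this for several architectures on every change):

```
$ git clone --depth 1 https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
$ cd linux
$ make defconfig                      # generate a default kernel configuration
$ make -j$(nproc) bzImage modules     # build the kernel image and modules
```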

#### How the CKI team works

The CKI team at Red Hat currently follows changes from several internal kernels, as well as upstream kernels such as the [stable kernel tree][3]. We watch for two critical events in each repository:

  1. When maintainers merge pull requests or patches, and the resulting commits in the repository change.
  2. When developers propose changes for merging via patchwork or the stable patch queue.

As these events occur, automation springs into action and [GitLab CI pipelines][4] begin the testing process. Once the pipeline runs [linting][5] scripts, merges any patches, and compiles the kernel for multiple architectures, the real testing begins. We compile kernels in under six minutes for four architectures and submit feedback to the stable mailing list, usually in two hours or less. Over 100,000 kernel tests run each month, and over 11,000 GitLab pipelines have completed (since January 2019).

Each kernel is booted on its native architecture, which includes:

  * [aarch64][6]: 64-bit [ARM][7], such as the [Cavium (now Marvell) ThunderX][8].
  * [ppc64/ppc64le][9]: Big- and little-endian [IBM POWER][10] systems.
  * [s390x][11]: [IBM Zseries][12] mainframes.
  * [x86_64][13]: [Intel][14] and [AMD][15] workstations, laptops, and servers.

Multiple tests run on these kernels, including the [Linux Test Project (LTP)][16], which contains a myriad of tests using a common test harness. The CKI team has open-sourced over 44 tests, with more on the way.

### Get involved

The upstream kernel testing effort grows day by day. Many companies provide test output for various kernels, including [Google][17], Intel, [Linaro][18], and [Sony][19]. Each effort is focused on bringing value to the upstream kernel as well as to each company’s customer base.

If you or your company want to join the effort, please come to the [Linux Plumbers Conference 2019][20] in Lisbon, Portugal. Join us at the Kernel CI hackfest during the two days after the conference, and drive the future of rapid kernel testing.

For more details, [review the slides][21] from my Texas Linux Fest 2019 talk.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/continuous-kernel-integration-linux

作者:[Major Hayden][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/mhayden
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_kernel_clang_vscode.jpg?itok=fozZ4zrr (Linux kernel source code (C) in Visual Studio Code)
[2]: https://cki-project.org/
[3]: https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
[4]: https://docs.gitlab.com/ee/ci/pipelines.html
[5]: https://en.wikipedia.org/wiki/Lint_(software)
[6]: https://en.wikipedia.org/wiki/ARM_architecture
[7]: https://www.arm.com/
[8]: https://www.marvell.com/server-processors/thunderx-arm-processors/
[9]: https://en.wikipedia.org/wiki/Ppc64
[10]: https://www.ibm.com/it-infrastructure/power
[11]: https://en.wikipedia.org/wiki/Linux_on_z_Systems
[12]: https://www.ibm.com/it-infrastructure/z
[13]: https://en.wikipedia.org/wiki/X86-64
[14]: https://www.intel.com/
[15]: https://www.amd.com/
[16]: https://github.com/linux-test-project/ltp
[17]: https://www.google.com/
[18]: https://www.linaro.org/
[19]: https://www.sony.com/
[20]: https://www.linuxplumbersconf.org/
[21]: https://docs.google.com/presentation/d/1T0JaRA0wtDU0aTWTyASwwy_ugtzjUcw_ZDmC5KFzw-A/edit?usp=sharing
@ -0,0 +1,94 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (IPython is still the heart of Jupyter Notebooks for Python developers)
[#]: via: (https://opensource.com/article/19/6/ipython-still-heart-jupyterlab)
[#]: author: (Matthew Broberg https://opensource.com/users/mbbroberg/users/marcobravo)

IPython is still the heart of Jupyter Notebooks for Python developers
======
Project Jupyter's origin in IPython remains significant for the magical
development experience it provides.
![I love Free Software FSFE celebration][1]

I recently wrote about how I find Jupyter projects, especially JupyterLab, to be a [magical Python development experience][2]. In researching how the various projects are related to each other, I recapped how Jupyter began as a fork from IPython. As Project Jupyter's [The Big Split™ announcement][3] explained:

> "If anyone has been confused by what Jupyter is[1], it's the exact same code that lived in IPython, developed by the same people, just in a new home under a new name."

That [1] links to a footnote that further clarifies:

> "I saw 'Jupyter is like IPython, but language agnostic' immediately after the announcement, which is a great illustration of why the project needs to not have Python in the name anymore, since it was already language agnostic at the time."

The fact that Jupyter Notebook and IPython forked from the same source code made sense to me, but I got lost in the current state of the IPython project. Was it no longer needed after The Big Split™, or is it living on in a different way?

I was surprised to learn that IPython continues to add value for Pythonistas and that it is an essential part of the Jupyter experience. Here's a portion of the Jupyter FAQ:

> **Are any languages pre-installed?**
>
> Yes, installing the Jupyter Notebook will also install the IPython kernel. This allows working on notebooks using the Python programming language.

I now understand that writing Python in JupyterLab (and Jupyter Notebook) relies on the continued development of IPython as its kernel. Not only that, IPython is the powerhouse default kernel, and it can act as a communication bus for other language kernels, according to [the documentation][4], saving a lot of time and development effort.

The question remains: what can I do with just IPython?

### What IPython does today

IPython provides both a powerful, interactive Python shell and a Jupyter kernel. After installing it, I can run **ipython** from any command line on its own and use it as a (much prettier than the default) Python shell:

```
$ ipython
Python 3.7.3 (default, Mar 27 2019, 09:23:15)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.4.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: import numpy as np
In [2]: example = np.array([5, 20, 3, 4, 0, 2, 12])
In [3]: average = np.average(example)
In [4]: print(average)
6.571428571428571
```

That brings us to the more significant issue: IPython's functionality gives JupyterLab the ability to execute the code in every project, and it also provides support for a whole bunch of functionality that's playfully called _magic_ (thank you, Nicholas Reith, for mentioning this in a comment on my previous article).

### Getting magical, thanks to IPython

JupyterLab and other frontends using the IPython kernel can feel like your favorite IDE or terminal emulator environment. I'm a huge fan of how [dotfiles][5] give me the power to use shortcuts, and magic has some dotfile-like behavior as well. For example, check out **[%bookmark][6]**. I've mapped my default development folder, **~/Develop**, to a shortcut I can run at any time and hop right into it.

![Screenshot of commands from JupyterLab][7]

The use of **%bookmark** and **%cd**, alongside the **!** operator (which I introduced in the previous article), are powered by IPython. As the [documentation][8] states:

> To Jupyter users: Magics are specific to and provided by the IPython kernel. Whether Magics are available on a kernel is a decision that is made by the kernel developer on a per-kernel basis.
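
As a small sketch of that dotfile-like behavior (the bookmark name and paths are just examples, and the output is abbreviated), the full round trip looks like this in an IPython session:

```
In [1]: %bookmark develop ~/Develop

In [2]: %cd -b develop
/home/you/Develop

In [3]: %bookmark -l
Current bookmarks:
develop -> ~/Develop
```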

### Wrapping up

I, as a curious novice, was not quite sure if IPython remained relevant to the Jupyter ecosystem. Now that I realize it's the source of JupyterLab's powerful user experience, I have a new appreciation for IPython's continuing development. Its talented contributors are also part of cutting-edge research, so be sure to cite them if you use Jupyter projects in your academic papers. They make it easy with this [ready-made citation entry][9].

Be sure to keep IPython in mind when you're thinking about open source projects to contribute to, and check out the [latest release notes][10] for a full list of magical features.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/ipython-still-heart-jupyterlab

作者:[Matthew Broberg][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/mbbroberg/users/marcobravo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ilovefs_free_sticker_fsfe_heart.jpg?itok=gLJtaieq (I love Free Software FSFE celebration)
[2]: https://opensource.com/article/19/5/jupyterlab-python-developers-magic
[3]: https://blog.jupyter.org/the-big-split-9d7b88a031a7
[4]: https://jupyter-client.readthedocs.io/en/latest/kernels.html
[5]: https://en.wikipedia.org/wiki/Hidden_file_and_hidden_directory#Unix_and_Unix-like_environments
[6]: https://ipython.readthedocs.io/en/stable/interactive/magics.html?highlight=magic#magic-bookmark
[7]: https://opensource.com/sites/default/files/uploads/jupyterlab-commands-ipython.png (Screenshot of commands from JupyterLab)
[8]: https://ipython.readthedocs.io/en/stable/interactive/magics.html
[9]: https://ipython.org/citing.html
[10]: https://ipython.readthedocs.io/en/stable/whatsnew/index.html
@ -0,0 +1,93 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Open hardware for musicians and music lovers: Headphone, amps, and more)
[#]: via: (https://opensource.com/article/19/6/hardware-music)
[#]: author: (Michael Weinberg https://opensource.com/users/mweinberg)

Open hardware for musicians and music lovers: Headphone, amps, and more
======
From 3D-printed instruments to devices that pull sound out of the air,
there are plenty of ways to create music with open hardware projects.
![][1]

The world is full of great [open source music players][2], but why stop at using open source just to _play_ music? You can also use open source hardware to make music. All of the instruments described in this article are [certified by the Open Source Hardware Association][3] (OSHWA). That means you are free to build upon them, remix them, or do anything else with them.

### Open source instruments

Instruments are always a good place to start when you want to make music. If your instrument choices lean towards the more traditional, the [F-F-Fiddle][4] may be the one for you.

![F-f-fiddle][5]

The F-F-Fiddle is a full-sized electric violin that you can make with a standard desktop 3D printer ([fused filament fabrication][6]—get it?). If you need to see it to believe it, here is a video of the F-F-Fiddle in action:

Mastered the fiddle and interested in something a bit more exotic? How about the [Open Theremin][7]?

![Open Theremin][8]

Like all theremins, Open Theremin lets you play music without touching the instrument. It is, of course, especially good at making [creepy space sounds][9] for your next sci-fi video or space-themed party.

The [Waft][10] operates similarly by allowing you to control sounds remotely. It uses [Lidar][11] to measure the distance of your hand from the sensor. Check it out:

Is the Waft a theremin? I'm not sure—theremin pedants should weigh in below in the comments.

If theremins are too well-known for you, [SIGNUM][12] may be just what you are looking for. In the words of its developers, SIGNUM "uncovers the encrypted codes of information and the language of man/machine communication" by turning invisible wireless communications into audible signals.

![SIGNUM][13]

Here it is in action:

### Inputs

Regardless of what instrument you use, you will need to plug it into something. If you want that something to be a Raspberry Pi, try the [AudioSense-Pi][14], which allows you to connect multiple inputs and outputs to your Pi at once.

![AudioSense-Pi][15]

### Synths

What about synthesizers? SparkFun's [SparkPunk Sound Kit][16] is a simple synth that gives you lots of room to play.

![SparkFun SparkPunk Sound Kit][17]

### Headphones

Making all this music is great, but you also need to think about how you will listen to it. Fortunately, [EQ-1 headphones][18] are open source and 3D-printable.

![EQ-1 headphones][19]

Are you making music with open source hardware? Let us know in the comments!

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/hardware-music

作者:[Michael Weinberg][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/mweinberg
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LIFE_musicinfinity.png?itok=7LkfjcS9
[2]: https://opensource.com/article/19/2/audio-players-linux
[3]: https://certification.oshwa.org/
[4]: https://certification.oshwa.org/us000010.html
[5]: https://opensource.com/sites/default/files/uploads/f-f-fiddle.png (F-f-fiddle)
[6]: https://en.wikipedia.org/wiki/Fused_filament_fabrication
[7]: https://certification.oshwa.org/ch000001.html
[8]: https://opensource.com/sites/default/files/uploads/open-theremin.png (Open Theremin)
[9]: https://youtu.be/p05ZSHRYXVA?t=771
[10]: https://certification.oshwa.org/uk000005.html
[11]: https://en.wikipedia.org/wiki/Lidar
[12]: https://certification.oshwa.org/es000003.html
[13]: https://opensource.com/sites/default/files/uploads/signum.png (SIGNUM)
[14]: https://certification.oshwa.org/in000007.html
[15]: https://opensource.com/sites/default/files/uploads/audiosense-pi.png (AudioSense-Pi)
[16]: https://certification.oshwa.org/us000016.html
[17]: https://opensource.com/sites/default/files/uploads/sparkpunksoundkit.png (SparkFun SparkPunk Sound Kit)
[18]: https://certification.oshwa.org/us000038.html
[19]: https://opensource.com/sites/default/files/uploads/eq-1-headphones.png (EQ-1 headphones)
@ -0,0 +1,125 @@
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Ubuntu Kylin: The Official Chinese Version of Ubuntu)
[#]: via: (https://itsfoss.com/ubuntu-kylin/)
[#]: author: (Avimanyu Bandyopadhyay https://itsfoss.com/author/avimanyu/)

Ubuntu Kylin: The Official Chinese Version of Ubuntu
======

[_**Ubuntu has several official flavors**_][1] _**and Kylin is one of them. In this article, you’ll learn about Ubuntu Kylin: what it is, why it was created, and what features it offers.**_

Kylin was originally developed in 2001 by academicians at the [National University of Defense Technology][2] in the People’s Republic of China. The name is derived from [Qilin][3], a beast from Chinese mythology.

The first versions of Kylin were based on [FreeBSD][4] and were intended for use by the Chinese military and other government organizations. Kylin 3.0 was purely based on the Linux kernel, and a version called [NeoKylin][5] was announced in December 2010.

In 2013, [Canonical][6] (parent company of Ubuntu) reached an agreement with the [Ministry of Industry and Information Technology][7] of the People’s Republic of China to co-create and release an Ubuntu-based OS with features targeted at the Chinese market.

![Ubuntu Kylin][8]

### What is Ubuntu Kylin?

Following the 2013 agreement mentioned above, Ubuntu Kylin is now the official Chinese version of Ubuntu. It is much more than just language localization. In fact, it is determined to serve the Chinese market the same way Ubuntu serves the global market.

The first version of [Ubuntu Kylin][9] came with Ubuntu 13.04. Like Ubuntu, Kylin has both LTS (long-term support) and non-LTS versions.

Currently, Ubuntu Kylin 19.04 implements the [UKUI][10] desktop environment with a revised boot-up animation, log-in/screen-lock program, and OS theme. To offer a friendlier experience for users, it fixes bugs and integrates a file preview function, a timed logout, the latest [WPS office suite][11], and the [Sogou][12] input method.

Kylin 4.0.2 is a community edition based on Ubuntu Kylin 16.04 LTS. It includes several third-party applications with long-term and stable support. It is suitable for both server and desktop usage for daily office work and is available to [download][13]. The Kylin forums are active, providing a place to give feedback and to troubleshoot problems.

#### UKUI: The desktop environment by Ubuntu Kylin

![Ubuntu Kylin 19.04 with UKUI Desktop][15]

[UKUI][16] is designed and developed by the Ubuntu Kylin team and has some great features and provisions:

  * Windows-like interactive functions to bring a friendlier user experience. The Setup Wizard is user-friendly, so users can get started with Ubuntu Kylin quickly.
  * The Control Center has new settings for themes and windows. Components such as the Start Menu, taskbar, notification bar, file manager, and window manager have been updated.
  * Available separately on both Ubuntu and Debian repositories to provide a new desktop environment for users of Debian/Ubuntu distributions and derivatives worldwide.
  * New login and lock programs, which are more stable and offer many functions.
  * Includes a feedback program convenient for submitting feedback and questions.

#### Kylin Software Center

![Kylin Software Center][17]

Kylin has a software center, similar to the Ubuntu Software Center, called the Ubuntu Kylin Software Center. It is part of the Ubuntu Kylin Software Store, which also includes the Ubuntu Kylin Developer Platform and the Ubuntu Kylin Repository, with a simple interface and powerful functions. It supports both the Ubuntu and Ubuntu Kylin repositories and is especially convenient for quick installation of the China-specific software developed by the Ubuntu Kylin team!

#### Youker: A series of tools

Ubuntu Kylin also has a series of tools named Youker. Typing “Youker” in the Kylin start menu will bring up the Kylin assistant. If you press the “Windows” key on the keyboard, you get a response exactly like you would on Windows: it fires up the Kylin start menu.

![Kylin Assistant][18]

Other Kylin-branded applications include Kylin Video (player), Kylin Burner, Youker Weather, and Youker Fcitx, which support better office work and personal entertainment.

![Kylin Video][19]

#### Special focus on Chinese characters

In cooperation with Kingsoft, Ubuntu Kylin developers also work on Sogou Pinyin for Linux, Kuaipan for Linux, and Kingsoft WPS for Ubuntu Kylin, and they address issues with smart pinyin, cloud storage services, and office applications. [Pinyin][20] is a romanization system for Chinese characters. With it, users type on an English keyboard, but Chinese characters are displayed on the screen.

#### Fun Fact: Ubuntu Kylin runs on Chinese supercomputers

![Tianhe-2 Supercomputer. Photo by O01326 – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=45399546][22]

It’s already public knowledge that the [world’s top 500 fastest supercomputers run Linux][23]. The Chinese supercomputers [Tianhe-1][24] and [Tianhe-2][25] both use the 64-bit version of Kylin Linux, dedicated to high-performance [parallel computing][26] optimization, power management, and high-performance [virtual computing][27].

#### Summary

I hope you liked this introduction to the world of Ubuntu Kylin. You can get either Ubuntu Kylin 19.04 or the community edition based on Ubuntu Kylin 16.04 from its [official website][28].

--------------------------------------------------------------------------------

via: https://itsfoss.com/ubuntu-kylin/

作者:[Avimanyu Bandyopadhyay][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/avimanyu/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/which-ubuntu-install/
[2]: https://english.nudt.edu.cn
[3]: https://www.thoughtco.com/what-is-a-qilin-195005
[4]: https://itsfoss.com/freebsd-12-release/
[5]: https://thehackernews.com/2015/09/neokylin-china-linux-os.html
[6]: https://www.canonical.com/
[7]: http://english.gov.cn/state_council/2014/08/23/content_281474983035940.htm
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/06/Ubuntu-Kylin.jpeg?resize=800%2C450&ssl=1
[9]: http://www.ubuntukylin.com/
[10]: http://ukui.org
[11]: https://www.wps.com/
[12]: https://en.wikipedia.org/wiki/Sogou_Pinyin
[13]: http://www.ubuntukylin.com/downloads/show.php?lang=en&id=122
[14]: https://itsfoss.com/solve-ubuntu-error-failed-to-download-repository-information-check-your-internet-connection/
[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/06/ubuntu-Kylin-19-04-desktop.jpg?resize=800%2C450&ssl=1
[16]: http://www.ukui.org/
[17]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/06/kylin-software-center.jpg?resize=800%2C496&ssl=1
[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/06/kylin-assistant.jpg?resize=800%2C535&ssl=1
[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/06/kylin-video.jpg?resize=800%2C533&ssl=1
[20]: https://en.wikipedia.org/wiki/Pinyin
[21]: https://itsfoss.com/remove-old-kernels-ubuntu/
[22]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/06/tianhe-2.jpg?resize=800%2C600&ssl=1
[23]: https://itsfoss.com/linux-runs-top-supercomputers/
[24]: https://en.wikipedia.org/wiki/Tianhe-1
[25]: https://en.wikipedia.org/wiki/Tianhe-2
[26]: https://en.wikipedia.org/wiki/Parallel_computing
[27]: https://computer.howstuffworks.com/how-virtual-computing-works.htm
[28]: http://www.ubuntukylin.com
@ -0,0 +1,141 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A data-centric approach to patching systems with Ansible)
[#]: via: (https://opensource.com/article/19/6/patching-systems-ansible)
[#]: author: (Mark Phillips https://opensource.com/users/markp/users/markp)

A data-centric approach to patching systems with Ansible
======

Use data and variables in Ansible to control selective patching.

![metrics and data shown on a computer screen][1]

When you're patching Linux machines these days, I could forgive you for asking, "How hard can it be?" Sure, a **yum update -y** will sort it for you in a flash.

![Animation of updating Linux][2]

But for those of us working with more than a handful of machines, it's not that simple. Sometimes an update can create unintended consequences across many machines, and you're left wondering how to put things back the way they were. Or you might think, "Should I have applied the critical patch on its own and saved myself a lot of pain?"

Facing these sorts of challenges in the past led me to build a way to cherry-pick the updates needed and automate their application.
### A flexible idea

Here's an overview of the process:

![Overview of the Ansible patch process][3]

This system doesn't permit machines to have direct access to vendor patches. Instead, they're selectively subscribed to repositories. Repositories contain only the patches that are required, although I'd encourage you to give this careful consideration so you don't end up with a proliferation of them (another management overhead you'll not thank yourself for creating).

Now patching a machine comes down to 1) the repositories it's subscribed to and 2) getting the "thumbs up" to patch. By using variables to control both subscription and permission to patch, we don't need to tamper with the logic (the plays); we only need to alter the data.
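As a minimal sketch of that data (the variable names follow the role shown below; the group, repository label, and URL are illustrative assumptions, as is the idea that the role's **template.repo.j2** consumes a **baseurl** field), a database group's settings could live in a group_vars file:

```
# group_vars/db.yml -- illustrative values only
patchme: false                # flip to true when this group's patch window opens
patching_repos:
  - label: db-updates-2019-06                            # hypothetical repo label
    baseurl: http://repo.example.com/db-updates-2019-06  # hypothetical repo URL
```

Changing which repositories a group sees, or whether it may patch, then becomes a pure data edit; the plays stay untouched.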
Here is an [example Ansible role][4] that fulfills both requirements. It manages repository subscriptions and has a simple variable that controls running the patch command.
```
---
# tasks file for patching

- name: Include OS version specific differences
  include_vars: "{{ ansible_distribution }}-{{ ansible_distribution_major_version }}.yml"

- name: Ensure Yum repositories are configured
  template:
    src: template.repo.j2
    dest: "/etc/yum.repos.d/{{ item.label }}.repo"
    owner: root
    group: root
    mode: 0644
  when: patching_repos is defined
  loop: "{{ patching_repos }}"
  notify: patching-clean-metadata

- meta: flush_handlers

- name: Ensure OS shipped yum repo configs are absent
  file:
    path: "/etc/yum.repos.d/{{ patching_default_repo_def }}"
    state: absent

# add flexibility of repos here
- name: Patch this host
  shell: 'yum update -y'
  args:
    warn: false
  when: patchme|bool
  register: result
  changed_when: "'No packages marked for update' not in result.stdout"
```
### Scenarios

In our fictitious, large, globally dispersed environment (of four hosts), we have:

  * Two web servers
  * Two database servers
  * An application comprising one of each server type

OK, so this number of machines isn't "enterprise-scale," but remove the counts and imagine the environment as multiple, tiered, geographically dispersed applications. We want to patch elements of the stack across server types, application stacks, geographies, or the whole estate.

![Example patch groups][5]

Using only changes to variables, can we achieve that flexibility? Sort of. Ansible's [default behavior][6] for hashes is to overwrite. In our example, the **patching_repos** variable for the **db1** and **web1** hosts would be overwritten because of their later occurrence in our inventory. Hmm, a bit of a pickle. There are two ways to manage this:
  1. Multiple inventory files
  2. [Change the variable behavior][7]

I chose number one because it maintains clarity. Once you start merging variables, it's hard to find where a hash appears and how it's put together. Sticking with the default behavior keeps things clear, and it's the method I'd encourage you to adopt for your own sanity.
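To make that concrete, here's an illustrative sketch of a separate inventory for the application stack (the file name is an assumption; the group and host names come from the scenario above, though not necessarily from the linked repository):

```
# inventory-app1 -- hypothetical file name
[app1:children]
db
web

[db]
db1

[web]
web1
```

Each overlapping grouping (application stack, geography) gets its own file like this, so no hash is ever merged or overwritten.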
### Get on with it then

Let's run the play, focusing only on the database servers.
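As a sketch, the invocation might look like this (the playbook and inventory file names are my assumptions, not taken from the article):

```
# patchme is not set, so the patch step will be skipped
$ ansible-playbook -i hosts site.yml --limit db
```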
Did you notice the final step, **Patch this host**, says **skipping**? That's because we didn't set [the controlling variable][8] to do the patching. What we have done is set up the repository subscriptions so they're ready.
So let's run the play again, limiting it to the web servers, and tell it to do the patching. I ran this example with verbose output so you can see the yum updates happening.
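Again as a sketch with the same assumed file names, permission to patch is granted by flipping the variable on the command line:

```
# -e sets the controlling variable; -v shows the yum output
$ ansible-playbook -i hosts site.yml --limit web -e patchme=true -v
```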
Patching an application stack requires another inventory file, as mentioned above. Let's rerun the play.

Patching hosts in the European geography is the same scenario as the application stack, so another inventory file is required.

Now that all the repository subscriptions are configured, let's just patch the whole estate. Note the **app1** and **emea** groups don't need the inventory here; they were only being used to separate the repository definition and setup. Now, **yum update -y** patches everything. If you didn't want to capture those repositories, they could be configured with **enabled=0**.
### Conclusion

The flexibility comes from how we group our hosts. Because of the default hash behavior, we need to think about overlaps; the easiest way, to my mind at least, is with separate inventories.

With regard to repository setup, I'm sure you've already said to yourself, "Ah, but the cherry-picking isn't that simple!" There is additional overhead in this model to download patches, test that they work together, and bundle them with dependencies in a repository. With complementary tools, you could automate the process, and in a large-scale environment, you'd have to.

Part of me is drawn to just applying full patch sets as a simpler and easier way to go: skip the cherry-picking part and apply a full set of patches to a "standard build." I've seen this approach applied to both Unix and Windows estates with enforced quarterly updates.

I'd be interested in hearing about your experiences with patching regimes, and with the approach proposed here, in the comments below or [via Twitter][9].
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/patching-systems-ansible

作者:[Mark Phillips][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/markp/users/markp
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_data_dashboard_system_computer_analytics.png?itok=oxAeIEI- (metrics and data shown on a computer screen)
[2]: https://opensource.com/sites/default/files/uploads/quick_update.gif (Animation of updating Linux)
[3]: https://opensource.com/sites/default/files/uploads/patch_process.png (Overview of the Ansible patch process)
[4]: https://github.com/phips/ansible-patching/blob/master/roles/patching/tasks/main.yml
[5]: https://opensource.com/sites/default/files/uploads/patch_groups.png (Example patch groups)
[6]: https://docs.ansible.com/ansible/2.3/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable
[7]: https://docs.ansible.com/ansible/2.3/intro_configuration.html#sts=hash_behaviour
[8]: https://github.com/phips/ansible-patching/blob/master/roles/patching/defaults/main.yml#L4
[9]: https://twitter.com/thismarkp
@ -0,0 +1,171 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to send email from the Linux command line)
[#]: via: (https://www.networkworld.com/article/3402027/how-to-send-email-from-the-linux-command-line.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

How to send email from the Linux command line
======

Linux offers several commands that allow you to send email from the command line. Here's a look at some that offer interesting options.

![Molnia/iStock][1]

There are several ways to send email from the Linux command line. Some are very simple and others more complicated, but they offer some very useful features. The choice depends on what you want to do, whether you want to get a quick message off to a co-worker or send a more complicated message with an attachment to a large group of people. Here's a look at some of the options:
### mail

The easiest way to send a simple message from the Linux command line is to use the **mail** command. Maybe you need to remind your boss that you're leaving a little early that day. You could use a command like this one:

```
$ echo "Reminder: Leaving at 4 PM today" | mail -s "early departure" myboss
```

**[ Two-Minute Linux Tips: [Learn how to master a host of Linux commands in these 2-minute video tutorials][2] ]**

Another option is to grab your message text from a file that contains the content you want to send:

```
$ mail -s "Reminder: Leaving early" myboss < reason4leaving
```

In both cases, the -s option allows you to provide a subject line for your message.
### sendmail

Using **sendmail**, you can send a quick message (with no subject) using a command like this (replacing "recip" with your intended recipient):

```
$ echo "leaving now" | sendmail recip
```

You can send just a subject line (with no message content) with a command like this:

```
$ echo "Subject: leaving now" | sendmail recip
```

You can also use sendmail on the command line to send a message complete with a subject line. However, when using this approach, you would add your subject line to the file you intend to send, as in this example file:

```
Subject: Requested lyrics
I would just like to say that, in my opinion, longer hair and other flamboyant
affectations of appearance are nothing more ...
```

Then you would send the file like this (where the lyrics file contains your subject line and text):

```
$ sendmail recip < lyrics
```

Sendmail can be quite verbose in its output. If you're desperately curious and want to see the interchange between the sending and receiving systems, add the -v (verbose) option:

```
$ sendmail -v recip@emailsite.com < lyrics
```
### mutt

An especially nice tool for command-line emailing is the **mutt** command, though you will likely have to install it first. Mutt has a convenient advantage in that it can include attachments.

To use mutt to send a quick message:

```
$ echo "Please check last night's backups" | mutt -s "backup check" recip
```

To get content from a file:

```
$ mutt -s "Agenda" recip < agenda
```

To add an attachment with mutt, use the -a option. You can even add more than one, as shown in this command:

```
$ mutt -s "Agenda" recip -a agenda -a speakers < msg
```

In the command above, the "msg" file includes content for the email. If you don't have any additional content to provide, you can do this instead:

```
$ echo "" | mutt -s "Agenda" recip -a agenda -a speakers
```

Mutt also provides a way to send carbon copies (using the -c option) and blind carbon copies (using the -b option).
```
$ mutt -s "Minutes from last meeting" recip@somesite.com -c myboss < mins
```
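A blind copy works the same way; in this sketch, the second address is purely illustrative:

```
$ mutt -s "Minutes from last meeting" recip@somesite.com -b auditor@somesite.com < mins
```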
### telnet

If you want to get deep into the details of sending email, you can use **telnet** to carry on the email exchange operation, but you'll need to, as they say, "learn the lingo." Mail servers expect a sequence of commands that includes things like introducing yourself (the **EHLO** command), providing the email sender (**MAIL FROM**), specifying the email recipient (**RCPT TO**), and then adding the message (**DATA**) and ending it with a "." as the only character on the line. Not every email server will respond to these requests. This approach is generally used only for troubleshooting.

```
$ telnet emailsite.org 25
Trying 192.168.0.12...
Connected to emailsite.
Escape character is '^]'.
220 localhost ESMTP Sendmail 8.15.2/8.15.2/Debian-12; Wed, 12 Jun 2019 16:32:13 -0400; (No UCE/UBE) logging access from: mysite(OK)-mysite [192.168.0.12]
EHLO mysite.org            <== introduce yourself
250-localhost Hello mysite [127.0.0.1], pleased to meet you
250-ENHANCEDSTATUSCODES
250-PIPELINING
250-EXPN
250-VERB
250-8BITMIME
250-SIZE
250-DSN
250-ETRN
250-AUTH DIGEST-MD5 CRAM-MD5
250-DELIVERBY
250 HELP
MAIL FROM: me@mysite.org   <== specify sender
250 2.1.0 shs@mysite.org... Sender ok
RCPT TO: recip             <== specify recipient
250 2.1.5 recip... Recipient ok
DATA                       <== start message
354 Enter mail, end with "." on a line by itself
This is a test message. Please deliver it for me.
.                          <== end message
250 2.0.0 x5CKWDds029287 Message accepted for delivery
quit                       <== end exchange
```
### Sending email to multiple recipients

If you want to send email from the Linux command line to a large group of recipients, you can always use a loop to make the job easier, as in this example using mutt:

```
$ for recip in $(cat recips)
  do
    mutt -s "Minutes from May meeting" "$recip" < May_minutes
  done
```
### Wrap-up

There are quite a few ways to send email from the Linux command line, and some of the tools offer quite a few options.

Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3402027/how-to-send-email-from-the-linux-command-line.html

作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2017/08/email_image_blue-100732096-large.jpg
[2]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
[3]: https://www.facebook.com/NetworkWorld/
[4]: https://www.linkedin.com/company/network-world
@ -0,0 +1,75 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Learning by teaching, and speaking, in open source)
[#]: via: (https://opensource.com/article/19/6/conference-proposal-tips)
[#]: author: (Lauren Maffeo https://opensource.com/users/lmaffeo)

Learning by teaching, and speaking, in open source
======

Want to speak at an open source conference? Here are a few tips to get started.

![photo of microphone][1]

_"Everything good, everything magical happens between the months of June and August."_

When Jenny Han wrote these words, I doubt she had the open source community in mind. Yet, for our group of dispersed nomads, the summer brings a wave of conferences that allow us to connect in person.

From [OSCON][2] in Portland to [Drupal GovCon][3] in Bethesda, and [Open Source Summit North America][4] in San Diego, there’s no shortage of ways to match faces with Twitter avatars. After months of working on open source projects via Slack and Google Hangouts, the face time that these summer conferences offer is invaluable.

The knowledge attendees gain at open source conferences serves as the spark for new contributions. And speaking from experience, the best way to gain value from these conferences is for you to _speak_ at them.

But does the thought of speaking give you chills? Hear me out before closing your browser.

Last August, I arrived at the Vancouver Convention Centre to give a lightning talk and speak on a panel at [Open Source Summit North America 2018][5]. It’s no exaggeration to say that this conference, and applying to speak at it, transformed my career. Nine months later, I’ve:

  * Become a Community Moderator for Opensource.com
  * Spoken at two additional open source conferences ([All Things Open][6] and [DrupalCon North America][7])
  * Made my first GitHub pull request
  * Taken "Intro to Python" and written my first lines of code in [React][8]
  * Taken the first steps towards writing a book proposal

I don’t discount how much time, effort, and money are [involved in conference speaking][9]. Regardless, I can say with certainty that nothing else has grown my career so drastically. In the process, I met strangers who quickly became friends and unofficial mentors. Their feedback, advice, and connections have helped me grow in ways that I hadn’t envisioned this time last year.

Had I not boarded that flight to Canada, I would not be where I am today.

So, have I convinced you to take the first step? It’s easier than you think. If you want to [apply to speak at an open source conference][10] but are stuck on what to discuss, ask yourself this question: **What do I want to learn?**

You don’t have to be an expert on the topics that you pitch. You don’t have to know everything about JavaScript, [ML][11], or Linux to [write conference proposals][12] on these topics.

Here’s what you _do_ need: a willingness to do the work of teaching yourself these topics. And like any self-directed task, you’ll be most willing to do this work if you're invested in the subject.

As summer conference season draws closer, soak up all the knowledge you can. Then ask yourself what you want to learn more about, and apply to speak about those subjects at fall and winter open source events.

After all, one of the most effective ways to learn is by [teaching a topic to someone else][13]. So, what will the open source community learn from you?
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/conference-proposal-tips

作者:[Lauren Maffeo][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/lmaffeo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/microphone_speak.png?itok=wW6elbl5 (photo of microphone)
[2]: https://conferences.oreilly.com/oscon/oscon-or
[3]: https://www.drupalgovcon.org
[4]: https://events.linuxfoundation.org/events/open-source-summit-north-america-2019/
[5]: https://events.linuxfoundation.org/events/open-source-summit-north-america-2018/
[6]: https://allthingsopen.org
[7]: https://lab.getapp.com/bias-in-ai-drupalcon-debrief/
[8]: https://reactjs.org
[9]: https://twitter.com/venikunche/status/1130868572098572291
[10]: https://opensource.com/article/19/1/public-speaking-resolutions
[11]: https://en.wikipedia.org/wiki/ML_(programming_language)
[12]: https://dev.to/aspittel/public-speaking-as-a-developer-2ihj
[13]: https://opensource.com/article/19/5/learn-python-teaching
sources/tech/20190614 What is a Java constructor.md

@ -0,0 +1,158 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What is a Java constructor?)
[#]: via: (https://opensource.com/article/19/6/what-java-constructor)
[#]: author: (Seth Kenlon https://opensource.com/users/seth/users/ashleykoree)

What is a Java constructor?
======

Constructors are powerful components of programming. Use them to unlock the full potential of Java.

![][1]

Java is (disputably) the undisputed heavyweight in open source, cross-platform programming. While there are many [great][2] [cross-platform][2] [frameworks][3], few are as unified and direct as [Java][4].

Of course, Java is also a pretty complex language with subtleties and conventions all its own. One of the most common questions about Java relates to **constructors**: What are they, and what are they used for?

Put succinctly: a constructor is an action performed upon the creation of a new **object** in Java. When your Java application creates an instance of a class you have written, it checks for a constructor. If a constructor exists, Java runs the code in the constructor while creating the instance. That's a lot of technical terms crammed into a few sentences, but it becomes clearer when you see it in action, so make sure you have [Java installed][5] and get ready for a demo.

### Life without constructors

If you're writing Java code, you're already using constructors, even though you may not know it. All classes in Java have a constructor, because even if you haven't created one, Java does it for you when the code is compiled. For the sake of demonstration, though, ignore the hidden constructor that Java provides (because a default constructor adds no extra features), and take a look at life without an explicit constructor.
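To make that hidden constructor concrete, here is a minimal sketch (the **Droid** class and its field are illustrative choices, not part of this article's project). Writing out the empty constructor explicitly changes nothing about how the class behaves:

```
public class Droid {
    private String designation = "unassigned";

    // Equivalent to the constructor Java silently generates when a class
    // declares none: public, no arguments, no extra behavior.
    public Droid() {
    }

    public static void main(String[] args) {
        Droid d = new Droid(); // invokes the (now explicit) default constructor
        System.out.println("Created a droid: " + d.designation);
    }
}
```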
Suppose you're writing a simple Java dice-roller application because you want to produce a pseudo-random number for a game.

First, you might create your dice class to represent a physical die. Knowing that you play a lot of [Dungeons and Dragons][6], you decide to create a 20-sided die. In this sample code, the variable **dice** is the integer 20, representing the maximum possible die roll (a 20-sided die cannot roll more than 20). The variable **roll** is a placeholder for what will eventually be a random number, and **rand** serves as the random seed.

```
import java.util.Random;

public class DiceRoller {
    private int dice = 20;
    private int roll;
    private Random rand = new Random();
```

Next, create a function in the **DiceRoller** class to execute the steps the computer must take to emulate a die roll: take an integer from **rand** and assign it to the **roll** variable, add 1 to account for the fact that Java starts counting at 0 but a 20-sided die has no 0 value, then print the results.
```
public void Roller() {
    roll = rand.nextInt(dice);
    roll += 1;
    System.out.println(roll);
}
```

Finally, spawn an instance of the **DiceRoller** class and invoke its primary function, **Roller**:

```
// main loop
public static void main (String[] args) {
    System.out.printf("You rolled a ");

    DiceRoller App = new DiceRoller();
    App.Roller();
}
}
```
As long as you have a Java development environment installed (such as [OpenJDK][10]), you can run your application from a terminal:

```
$ java dice.java
You rolled a 12
```

In this example, there is no explicit constructor. It's a perfectly valid and legal Java application, but it's a little limited. For instance, if you set your game of Dungeons and Dragons aside for the evening to play some Yahtzee, you would need 6-sided dice. In this simple example, it wouldn't be that much trouble to change the code, but that's not a realistic option in complex code. One way you could solve this problem is with a constructor.
### Constructors in action

The **DiceRoller** class in this example project represents a virtual dice factory: when it's called, it creates a virtual die that is then "rolled." However, by writing a custom constructor, you can make your dice roller application ask what kind of die you'd like to emulate.

Most of the code is the same, with the exception of a constructor accepting some number of sides. This number doesn't exist yet, but it will be created later.

```
import java.util.Random;

public class DiceRoller {
    private int dice;
    private int roll;
    private Random rand = new Random();

    // constructor
    public DiceRoller(int sides) {
        dice = sides;
    }
```
The function emulating a roll remains unchanged:

```
public void Roller() {
    roll = rand.nextInt(dice);
    roll += 1;
    System.out.println(roll);
}
```

The main block of code feeds whatever arguments you provide when running the application. Were this a complex application, you would parse the arguments carefully and check for unexpected results, but for this sample, the only precaution taken is converting the argument string to an integer type:

```
public static void main (String[] args) {
    System.out.printf("You rolled a ");
    DiceRoller App = new DiceRoller( Integer.parseInt(args[0]) );
    App.Roller();
}
}
```
Launch the application and provide the number of sides you want your die to have:

```
$ java dice.java 20
You rolled a 10
$ java dice.java 6
You rolled a 2
$ java dice.java 100
You rolled a 44
```

The constructor has accepted your input, so when the class instance is created, it is created with the **sides** variable set to whatever number the user dictates.

Constructors are powerful components of programming. Practice using them to unlock the full potential of Java.
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/what-java-constructor

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth/users/ashleykoree
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/build_structure_tech_program_code_construction.png?itok=nVsiLuag
[2]: https://opensource.com/resources/python
[3]: https://opensource.com/article/17/4/pyqt-versus-wxpython
[4]: https://opensource.com/resources/java
[5]: https://openjdk.java.net/install/index.html
[6]: https://opensource.com/article/19/5/free-rpg-day
[7]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+random
[8]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+system
[9]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
[10]: https://openjdk.java.net/
[11]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+integer
@ -0,0 +1,281 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Check Linux Package Version Before Installing It)
[#]: via: (https://www.ostechnix.com/how-to-check-linux-package-version-before-installing-it/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)

How To Check Linux Package Version Before Installing It
======

![Check Linux Package Version][1]

Most of you will know how to [**find the version of an installed package**][2] in Linux. But how would you find the version of a package that is not installed in the first place? No problem! This guide describes how to check a Linux package's version before installing it on Debian and its derivatives, like Ubuntu. This small tip might be helpful for those wondering what version they would get before installing a package.

### Check Linux Package Version Before Installing It

There are many ways to find a package's version even if it is not already installed on DEB-based systems. Here are a few of them.

##### Method 1 – Using Apt

For a quick and dirty way to check a package version, simply run:
```
$ apt show <package-name>
```

**Example:**

```
$ apt show vim
```

**Sample output:**

```
Package: vim
Version: 2:8.0.1453-1ubuntu1.1
Priority: optional
Section: editors
Origin: Ubuntu
Maintainer: Ubuntu Developers <[email protected]>
Original-Maintainer: Debian Vim Maintainers <[email protected]>
Bugs: https://bugs.launchpad.net/ubuntu/+filebug
Installed-Size: 2,852 kB
Provides: editor
Depends: vim-common (= 2:8.0.1453-1ubuntu1.1), vim-runtime (= 2:8.0.1453-1ubuntu1.1), libacl1 (>= 2.2.51-8), libc6 (>= 2.15), libgpm2 (>= 1.20.7), libpython3.6 (>= 3.6.5), libselinux1 (>= 1.32), libtinfo5 (>= 6)
Suggests: ctags, vim-doc, vim-scripts
Homepage: https://vim.sourceforge.io/
Task: cloud-image, server
Supported: 5y
Download-Size: 1,152 kB
APT-Sources: http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages
Description: Vi IMproved - enhanced vi editor
 Vim is an almost compatible version of the UNIX editor Vi.
 .
 Many new features have been added: multi level undo, syntax
 highlighting, command line history, on-line help, filename
 completion, block operations, folding, Unicode support, etc.
 .
 This package contains a version of vim compiled with a rather
 standard set of features. This package does not provide a GUI
 version of Vim. See the other vim-* packages if you need more
 (or less).

N: There is 1 additional record. Please use the '-a' switch to see it
```

As you can see in the above output, the "apt show" command displays many important details of the package, such as:
  1. package name,
  2. version,
  3. origin (where vim comes from),
  4. maintainer,
  5. home page of the package,
  6. dependencies,
  7. download size,
  8. description,
  9. and more.

So, the available version of the Vim package in the Ubuntu repositories is **8.0.1453**. This is the version I would get if I installed it on my Ubuntu system.

Alternatively, use the **"apt policy"** command if you prefer short output:
```
$ apt policy vim
vim:
  Installed: (none)
  Candidate: 2:8.0.1453-1ubuntu1.1
  Version table:
     2:8.0.1453-1ubuntu1.1 500
        500 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages
        500 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages
     2:8.0.1453-1ubuntu1 500
        500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages
```

Or even shorter:

```
$ apt list vim
Listing... Done
vim/bionic-updates,bionic-security 2:8.0.1453-1ubuntu1.1 amd64
N: There is 1 additional version. Please use the '-a' switch to see it
```

**Apt** is the default package manager in recent Ubuntu versions, so this command alone is enough to find detailed information about a package. It doesn't matter whether the given package is installed or not; this command will simply list the given package's version along with all the other details.
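As the note at the end of the listing suggests, adding the **-a** switch reveals the additional version as well:

```
$ apt list -a vim
```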
##### Method 2 – Using Apt-get

To find a package version without installing it, we can use the **apt-get** command with the **-s** option.

```
$ apt-get -s install vim
```

**Sample output:**

```
NOTE: This is only a simulation!
      apt-get needs root privileges for real execution.
      Keep also in mind that locking is deactivated,
      so don't depend on the relevance to the real current situation!
Reading package lists... Done
Building dependency tree
Reading state information... Done
Suggested packages:
  ctags vim-doc vim-scripts
The following NEW packages will be installed:
  vim
0 upgraded, 1 newly installed, 0 to remove and 45 not upgraded.
Inst vim (2:8.0.1453-1ubuntu1.1 Ubuntu:18.04/bionic-updates, Ubuntu:18.04/bionic-security [amd64])
Conf vim (2:8.0.1453-1ubuntu1.1 Ubuntu:18.04/bionic-updates, Ubuntu:18.04/bionic-security [amd64])
```

Here, the -s option indicates **simulation**. As you can see in the output, it performs no action. Instead, it simply runs a simulation to let you know what is going to happen when you install the Vim package.

You can substitute the "install" option with the "upgrade" option to see what will happen when you upgrade a package.
```
$ apt-get -s upgrade vim
```

##### Method 3 – Using Aptitude

**Aptitude** is an ncurses- and command-line-based front-end to the APT package manager in Debian and its derivatives.

To find the package version with Aptitude, simply run:

```
$ aptitude versions vim
p   2:8.0.1453-1ubuntu1                                 bionic                 500
p   2:8.0.1453-1ubuntu1.1                               bionic-security,bionic-updates 500
```

You can also use the simulation option (**-s**) to see what would happen if you installed or upgraded a package.

```
$ aptitude -V -s install vim
The following NEW packages will be installed:
  vim [2:8.0.1453-1ubuntu1.1]
0 packages upgraded, 1 newly installed, 0 to remove and 45 not upgraded.
Need to get 1,152 kB of archives. After unpacking 2,852 kB will be used.
Would download/install/remove packages.
```

Here, the **-V** flag is used to display detailed information about the package version.

Similarly, just substitute "install" with "upgrade" to see what would happen if you upgraded a package.
```
$ aptitude -V -s upgrade vim
```

Another way to find a non-installed package's version using the Aptitude command is:

```
$ aptitude search vim -F "%c %p %d %V"
```

Here,

  * **-F** is used to specify which format should be used to display the output,
  * **%c** – status of the given package (installed or not installed),
  * **%p** – name of the package,
  * **%d** – description of the package,
  * **%V** – version of the package.

This is helpful when you don't know the full package name. This command will list all packages that contain the given string (i.e., vim).

Here is some sample output of the above command:
```
[...]
p   vim                 Vi IMproved - enhanced vi editor                                  2:8.0.1453-1ub
p   vim-tlib            Some vim utility functions                                        1.23-1
p   vim-ultisnips       snippet solution for Vim                                          3.1-3
p   vim-vimerl          Erlang plugin for Vim                                             1.4.1+git20120
p   vim-vimerl-syntax   Erlang syntax for Vim                                             1.4.1+git20120
p   vim-vimoutliner     script for building an outline editor on top of Vim              0.3.4+pristine
p   vim-voom            Vim two-pane outliner                                             5.2-1
p   vim-youcompleteme   fast, as-you-type, fuzzy-search code completion engine for Vim    0+20161219+git
```

##### Method 4 – Using Apt-cache

The **apt-cache** command is used to query the APT cache on Debian-based systems. It is useful for performing many operations on APT's package cache. One fine example: we can [**list installed applications from a certain repository/PPA**][3].

Not just installed applications; we can also find the version of a package even if it is not installed. For instance, the following command will find the version of the Vim package:
```
$ apt-cache policy vim
```

Sample output:

```
vim:
  Installed: (none)
  Candidate: 2:8.0.1453-1ubuntu1.1
  Version table:
     2:8.0.1453-1ubuntu1.1 500
        500 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages
        500 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages
     2:8.0.1453-1ubuntu1 500
        500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages
```

As you can see in the above output, Vim is not installed. If you wanted to install it, you would get version **8.0.1453**. The output also shows which repository the vim package comes from.
##### Method 5 – Using Apt-show-versions

The **apt-show-versions** command is used to list installed and available package versions on Debian and Debian-based systems. It also displays the list of all upgradeable packages. It is quite handy if you have a mixed stable/testing environment. For instance, if you have enabled both stable and testing repositories, you can easily find the list of applications from testing, and you can also upgrade all packages in testing.

Apt-show-versions is not installed by default. You need to install it using the command:

```
$ sudo apt-get install apt-show-versions
```

Once installed, run the following command to find the version of a package, for example Vim:

```
$ apt-show-versions -a vim
vim:amd64 2:8.0.1453-1ubuntu1 bionic archive.ubuntu.com
vim:amd64 2:8.0.1453-1ubuntu1.1 bionic-security security.ubuntu.com
vim:amd64 2:8.0.1453-1ubuntu1.1 bionic-updates archive.ubuntu.com
vim:amd64 not installed
```

Here, the **-a** switch prints all available versions of the given package.

If the given package is already installed, you need not use the **-a** option. In that case, simply run:

```
$ apt-show-versions vim
```

And, that's all. If you know any other methods, please share them in the comment section below. I will check and update this guide.
--------------------------------------------------------------------------------

via: https://www.ostechnix.com/how-to-check-linux-package-version-before-installing-it/

作者:[sk][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/06/Check-Linux-Package-Version-720x340.png
[2]: https://www.ostechnix.com/find-package-version-linux/
[3]: https://www.ostechnix.com/list-installed-packages-certain-repository-linux/
@ -0,0 +1,608 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Find Linux System Details Using inxi)
[#]: via: (https://www.ostechnix.com/how-to-find-your-system-details-using-inxi/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)

How To Find Linux System Details Using inxi
======

![find Linux system details using inxi][1]

**Inxi** is a free, open source, full-featured command-line system information tool. It shows system hardware, CPU, drivers, Xorg, desktop, kernel, GCC version(s), processes, RAM usage, and a wide variety of other useful information. Be it a hard disk or CPU, the motherboard or the complete details of the entire system, inxi will display it accurately in seconds. Since it is a CLI tool, you can use it on both desktop and server editions. Inxi is available in the default repositories of most Linux distributions and some BSD systems.

### Install inxi

**On Arch Linux and derivatives:**

To install inxi on Arch Linux or its derivatives like Antergos and Manjaro Linux, run:

```
$ sudo pacman -S inxi
```

In case inxi is not available in the default repositories, try to install it from the AUR using any AUR helper program.
Using [**Yay**][2]:

```
$ yay -S inxi
```

**On Debian / Ubuntu and derivatives:**

```
$ sudo apt-get install inxi
```

**On Fedora / RHEL / CentOS / Scientific Linux:**

inxi is available in the Fedora default repositories. So, just run the following command to install it straight away.

```
$ sudo dnf install inxi
```

On RHEL and its clones like CentOS and Scientific Linux, you need to add the EPEL repository and then install inxi.

To install the EPEL repository, just run:

```
$ sudo yum install epel-release
```

After installing the EPEL repository, install inxi using the command:

```
$ sudo yum install inxi
```

**On SUSE/openSUSE:**

```
$ sudo zypper install inxi
```
### Find Linux System Details Using inxi

inxi requires some additional programs to operate properly. They will be installed along with inxi. However, in case they are not installed automatically, you need to find and install them.

To list all required programs, run:

```
$ inxi --recommends
```

If you see any missing programs, install them before you start using inxi.

Now, let us see how to use it to reveal the Linux system details. inxi usage is pretty straightforward.

Open up your terminal and run the following command to print a short summary of CPU, memory, hard drive, and kernel information:

```
$ inxi
```
**Sample output:**

```
CPU: Dual Core Intel Core i3-2350M (-MT MCP-) speed/min/max: 798/800/2300 MHz
Kernel: 5.1.2-arch1-1-ARCH x86_64 Up: 1h 31m Mem: 2800.5/7884.2 MiB (35.5%)
Storage: 465.76 GiB (80.8% used) Procs: 163 Shell: bash 5.0.7 inxi: 3.0.34
```

[![Find Linux System Details Using inxi][1]][3]

Find Linux System Details Using inxi

As you can see, inxi displays the following details of my Arch Linux desktop:

  1. CPU type,
  2. CPU speed,
  3. kernel details,
  4. uptime,
  5. memory details (total and used memory),
  6. hard disk size along with current usage,
  7. process count,
  8. default shell details,
  9. inxi version.

To display a full summary, use the **-F** switch as shown below.
```
$ inxi -F
```

**Sample output:**

```
System:    Host: sk Kernel: 5.1.2-arch1-1-ARCH x86_64 bits: 64 Desktop: Deepin 15.10.1 Distro: Arch Linux
Machine:   Type: Portable System: Dell product: Inspiron N5050 v: N/A serial: <root required>
           Mobo: Dell model: 01HXXJ v: A05 serial: <root required> BIOS: Dell v: A05 date: 08/03/2012
Battery:   ID-1: BAT0 charge: 39.0 Wh condition: 39.0/48.8 Wh (80%)
CPU:       Topology: Dual Core model: Intel Core i3-2350M bits: 64 type: MT MCP L2 cache: 3072 KiB
           Speed: 798 MHz min/max: 800/2300 MHz Core speeds (MHz): 1: 798 2: 798 3: 798 4: 798
Graphics:  Device-1: Intel 2nd Generation Core Processor Family Integrated Graphics driver: i915 v: kernel
           Display: x11 server: X.Org 1.20.4 driver: modesetting unloaded: vesa resolution: 1366x768~60Hz
           Message: Unable to show advanced data. Required tool glxinfo missing.
Audio:     Device-1: Intel 6 Series/C200 Series Family High Definition Audio driver: snd_hda_intel
           Sound Server: ALSA v: k5.1.2-arch1-1-ARCH
Network:   Device-1: Realtek RTL810xE PCI Express Fast Ethernet driver: r8169
           IF: enp5s0 state: down mac: 45:c8:gh:89:b6:45
           Device-2: Qualcomm Atheros AR9285 Wireless Network Adapter driver: ath9k
           IF: wlp9s0 state: up mac: c3:11:96:22:87:3g
           Device-3: Qualcomm Atheros AR3011 Bluetooth type: USB driver: btusb
Drives:    Local Storage: total: 465.76 GiB used: 376.31 GiB (80.8%)
           ID-1: /dev/sda vendor: Seagate model: ST9500325AS size: 465.76 GiB
Partition: ID-1: / size: 456.26 GiB used: 376.25 GiB (82.5%) fs: ext4 dev: /dev/sda2
           ID-2: /boot size: 92.8 MiB used: 62.9 MiB (67.7%) fs: ext4 dev: /dev/sda1
           ID-3: swap-1 size: 2.00 GiB used: 0 KiB (0.0%) fs: swap dev: /dev/sda3
Sensors:   System Temperatures: cpu: 58.0 C mobo: N/A
           Fan Speeds (RPM): cpu: 3445
Info:      Processes: 169 Uptime: 1h 38m Memory: 7.70 GiB used: 2.94 GiB (38.2%) Shell: bash inxi: 3.0.34
```
When used on IRC, inxi automatically filters out your network device MAC addresses, WAN and LAN IPs, your /home username directory in partitions, and a few other items in order to maintain basic privacy and security. You can also trigger this filtering with the **-z** option, like below.

```
$ inxi -Fz
```

To override the IRC filter, use the **-Z** option.

```
$ inxi -FZ
```

This can be useful for debugging network connection issues online in a private chat, for example. Please be very careful while using the -Z option: it will display your MAC addresses. You shouldn't share output obtained with the -Z option on public forums.

##### Displaying device-specific details

When running inxi without any options, you will get basic details of your system, such as CPU, memory, kernel, uptime, hard disk, etc.

You can, of course, narrow down the result to show specific device details using various options. Inxi has numerous options (both uppercase and lowercase).

First, we will see example commands for all uppercase options in alphabetical order. Some commands may require root/sudo privileges to get actual data.

###### Uppercase options
**1\. Display audio/sound card details**

To show your audio/sound card information, along with the sound card driver, use the **-A** option.

```
$ inxi -A
Audio: Device-1: Intel 6 Series/C200 Series Family High Definition Audio driver: snd_hda_intel
       Sound Server: ALSA v: k5.1.2-arch1-1-ARCH
```

**2\. Display battery details**

To show battery details of your system, with current charge and condition, use the **-B** option.

```
$ inxi -B
Battery: ID-1: BAT0 charge: 39.0 Wh condition: 39.0/48.8 Wh (80%)
```

**3\. Display CPU details**

To show complete CPU details, including the number of cores, CPU model, CPU cache, CPU clock speed, CPU min/max speed, etc., use the **-C** option.

```
$ inxi -C
CPU: Topology: Dual Core model: Intel Core i3-2350M bits: 64 type: MT MCP L2 cache: 3072 KiB
     Speed: 798 MHz min/max: 800/2300 MHz Core speeds (MHz): 1: 798 2: 798 3: 798 4: 798
```

**4\. Display hard disk details**

To show information about your hard drive, such as disk type, vendor, device ID, model, disk size, total disk space, used percentage, etc., use the **-D** option.

```
$ inxi -D
Drives: Local Storage: total: 465.76 GiB used: 376.31 GiB (80.8%)
        ID-1: /dev/sda vendor: Seagate model: ST9500325AS size: 465.76 GiB
```

**5\. Display graphics details**

To show details about the graphics card, including the card itself, driver, vendor, display server, resolution, etc., use the **-G** option.

```
$ inxi -G
Graphics: Device-1: Intel 2nd Generation Core Processor Family Integrated Graphics driver: i915 v: kernel
          Display: x11 server: X.Org 1.20.4 driver: modesetting unloaded: vesa resolution: 1366x768~60Hz
          Message: Unable to show advanced data. Required tool glxinfo missing.
```

**6\. Display details about processes, uptime, memory, inxi version**

To show information about the number of processes, total uptime, total and used memory, shell details, inxi version, etc., use the **-I** option.

```
$ inxi -I
Info: Processes: 170 Uptime: 5h 47m Memory: 7.70 GiB used: 3.27 GiB (42.4%) Shell: bash inxi: 3.0.34
```

**7\. Display motherboard details**

To show information about your machine, manufacturer, motherboard, and BIOS, use the **-M** option.

```
$ inxi -M
Machine: Type: Portable System: Dell product: Inspiron N5050 v: N/A serial: <root required>
         Mobo: Dell model: 034ygt v: A018 serial: <root required> BIOS: Dell v: A001 date: 09/04/2015
```

**8\. Display network card details**

To show information about your network cards, including vendor, card driver, and number of network interfaces, use the **-N** option.

```
$ inxi -N
Network: Device-1: Realtek RTL810xE PCI Express Fast Ethernet driver: r8169
         Device-2: Qualcomm Atheros AR9285 Wireless Network Adapter driver: ath9k
         Device-3: Qualcomm Atheros AR3011 Bluetooth type: USB driver: btusb
```

If you want to show advanced details of the network cards, such as MAC addresses and the speed and state of each NIC, use the **-n** option.

```
$ inxi -n
```

Please be careful sharing these details on public forums.
**9\. Display partition details**

To display basic partition information, use the **-P** option.

```
$ inxi -P
Partition: ID-1: / size: 456.26 GiB used: 376.25 GiB (82.5%) fs: ext4 dev: /dev/sda2
           ID-2: /boot size: 92.8 MiB used: 62.9 MiB (67.7%) fs: ext4 dev: /dev/sda1
           ID-3: swap-1 size: 2.00 GiB used: 0 KiB (0.0%) fs: swap dev: /dev/sda3
```

To show full partition information, including mount points, use the **-p** option.

```
$ inxi -p
```

**10\. Display RAID details**

To show RAID info, use the **-R** option.

```
$ inxi -R
```
**11\. Display system details**

To show Linux system information such as hostname, kernel, DE, OS version, etc., use the **-S** option.

```
$ inxi -S
System: Host: sk Kernel: 5.1.2-arch1-1-ARCH x86_64 bits: 64 Desktop: Deepin 15.10.1 Distro: Arch Linux
```

**12\. Display weather details**

inxi is not just for finding hardware details; it is useful for other things too.

For example, you can display the weather details of a given location. To do so, run inxi with the **-W** option, like below.

```
$ inxi -W 95623,us
Weather: Temperature: 21.1 C (70 F) Conditions: Scattered clouds Current Time: Tue 11 Jun 2019 04:34:35 AM PDT
         Source: WeatherBit.io
```

Please note that you should use only ASCII letters in city/state/country names to get valid results.
###### Lowercase options

**1\. Display basic system details**

To show only a basic summary of your system details, use the **-b** option.

```
$ inxi -b
```

Alternatively, you can use this command; both serve the same purpose:

```
$ inxi -v 2
```
**2\. Set color scheme**

We can set different color schemes for inxi output using the **-c** option. You can set the color scheme number from **0** to **42**. If no scheme number is supplied, **0** is assumed.

Here is inxi output with and without the **-c** option.

[![inxi output without color scheme][1]][4]

inxi output without color scheme

As you can see, when we run inxi with the bare -c option (scheme 0), the color scheme is disabled. The -c option is useful for turning off colored output when redirecting clean output, without escape codes, to a text file.

Similarly, we can use other color scheme values.

```
$ inxi -c10

$ inxi -c42
```
**3\. Display optical drive details**

We can show optical drive details along with the local hard drive details using the **-d** option.

```
$ inxi -d
Drives: Local Storage: total: 465.76 GiB used: 376.31 GiB (80.8%)
        ID-1: /dev/sda vendor: Seagate model: ST9500325AS size: 465.76 GiB
        Optical-1: /dev/sr0 vendor: PLDS model: DVD+-RW DS-8A8SH dev-links: cdrom
        Features: speed: 24 multisession: yes audio: yes dvd: yes rw: cd-r,cd-rw,dvd-r,dvd-ram
```

**4\. Display all CPU flags**

To show all CPU flags used, run:

```
$ inxi -f
```

**5\. Display IP details**

To show the WAN and local IP addresses, along with network card details such as device vendor, driver, MAC, and state, use the **-i** option.

```
$ inxi -i
```
**6\. Display partition labels**

If you have set labels for your partitions, you can view them using the **-l** option.

```
$ inxi -l
```

You can also view the labels of all partitions, along with mount points, using the command:

```
$ inxi -pl
```

**7\. Display memory details**

We can display memory details, such as the total size of installed RAM, how much memory is used, the number of available DIMM slots, the total size of RAM supported, and how much RAM is currently installed in each slot, using the **-m** option.

```
$ sudo inxi -m
[sudo] password for sk:
Memory: RAM: total: 7.70 GiB used: 2.26 GiB (29.3%)
        Array-1: capacity: 16 GiB slots: 2 EC: None
        Device-1: DIMM_A size: 4 GiB speed: 1067 MT/s
        Device-2: DIMM_B size: 4 GiB speed: 1067 MT/s
```
|
||||
|
||||
**8\. Display unmounted partition details**
|
||||
|
||||
To show unmounted partition details, use **-o** option.
|
||||
|
||||
```
|
||||
$ inxi -o
|
||||
```
|
||||
|
||||
If there were no unmounted partitions in your system, you will see an output something like below.
|
||||
|
||||
```
|
||||
Unmounted: Message: No unmounted partitions found.
|
||||
```
|
||||
|
||||
**9\. Display list of repositories**
|
||||
|
||||
To display the the list of repositories in your system, use **-r** option.
|
||||
|
||||
```
|
||||
$ inxi -r
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
|
||||
```
|
||||
Repos: Active apt sources in file: /etc/apt/sources.list
|
||||
deb http://in.archive.ubuntu.com/ubuntu/ xenial main restricted
|
||||
deb http://in.archive.ubuntu.com/ubuntu/ xenial-updates main restricted
|
||||
deb http://in.archive.ubuntu.com/ubuntu/ xenial universe
|
||||
deb http://in.archive.ubuntu.com/ubuntu/ xenial-updates universe
|
||||
deb http://in.archive.ubuntu.com/ubuntu/ xenial multiverse
|
||||
deb http://in.archive.ubuntu.com/ubuntu/ xenial-updates multiverse
|
||||
deb http://in.archive.ubuntu.com/ubuntu/ xenial-backports main restricted universe multiverse
|
||||
deb http://security.ubuntu.com/ubuntu xenial-security main restricted
|
||||
deb http://security.ubuntu.com/ubuntu xenial-security universe
|
||||
deb http://security.ubuntu.com/ubuntu xenial-security multiverse
|
||||
```
|
||||
|
||||
* * *
|
||||
|
||||
**Suggested read:**
|
||||
|
||||
* [**How To Find The List Of Installed Repositories From Commandline In Linux**][5]
|
||||
|
||||
|
||||
|
||||
* * *
|
||||
|
||||
**10\. Show system temperature, fan speed details**
|
||||
|
||||
Inxi is capable to find motherboard/CPU/GPU temperatures and fan speed.
|
||||
|
||||
```
|
||||
$ inxi -s
|
||||
Sensors: System Temperatures: cpu: 60.0 C mobo: N/A
|
||||
Fan Speeds (RPM): cpu: 3456
|
||||
```
|
||||
|
||||
Please note that Inxi requires sensors to find the system temperature. Make sure **lm_sensors** is installed and correctly configured in your system. For more details about lm_sensors, check the following guide.
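As a quick sanity check, the standard lm_sensors workflow is to detect the sensor chips once (an interactive step) and then read them:

```
$ sudo sensors-detect
$ sensors
```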
* [**How To View CPU Temperature On Linux**][6]

**11\. Display details about processes**

To show the top 5 processes consuming the most CPU and memory, simply run:

```
$ inxi -t
Processes: CPU top: 5
1: cpu: 14.3% command: firefox pid: 15989
2: cpu: 10.5% command: firefox pid: 13487
3: cpu: 7.1% command: firefox pid: 15062
4: cpu: 3.1% command: xorg pid: 13493
5: cpu: 3.0% command: firefox pid: 14954
System RAM: total: 7.70 GiB used: 2.99 GiB (38.8%)
Memory top: 5
1: mem: 1115.8 MiB (14.1%) command: firefox pid: 15989
2: mem: 606.6 MiB (7.6%) command: firefox pid: 13487
3: mem: 339.3 MiB (4.3%) command: firefox pid: 13630
4: mem: 303.1 MiB (3.8%) command: firefox pid: 18617
5: mem: 260.1 MiB (3.2%) command: firefox pid: 15062
```

We can also sort this output by either CPU usage or memory usage.

For instance, to find which top 5 processes are consuming the most memory, use the following command:

```
$ inxi -t m
Processes: System RAM: total: 7.70 GiB used: 2.73 GiB (35.4%)
Memory top: 5
1: mem: 966.1 MiB (12.2%) command: firefox pid: 15989
2: mem: 468.2 MiB (5.9%) command: firefox pid: 13487
3: mem: 347.9 MiB (4.4%) command: firefox pid: 13708
4: mem: 306.7 MiB (3.8%) command: firefox pid: 13630
5: mem: 247.2 MiB (3.1%) command: firefox pid: 15062
```

To sort the top 5 processes based on CPU usage, run:

```
$ inxi -t c
Processes: CPU top: 5
1: cpu: 14.9% command: firefox pid: 15989
2: cpu: 10.6% command: firefox pid: 13487
3: cpu: 7.0% command: firefox pid: 15062
4: cpu: 3.1% command: xorg pid: 13493
5: cpu: 2.9% command: firefox pid: 14954
```

By default, Inxi will display the top 5 processes. You can change the number of processes, for example to 10, like below.

```
$ inxi -t cm10
Processes: CPU top: 10
1: cpu: 14.9% command: firefox pid: 15989
2: cpu: 10.6% command: firefox pid: 13487
3: cpu: 7.0% command: firefox pid: 15062
4: cpu: 3.1% command: xorg pid: 13493
5: cpu: 2.9% command: firefox pid: 14954
6: cpu: 2.8% command: firefox pid: 13630
7: cpu: 1.8% command: firefox pid: 18325
8: cpu: 1.4% command: firefox pid: 18617
9: cpu: 1.3% command: firefox pid: 13708
10: cpu: 0.8% command: firefox pid: 14427
System RAM: total: 7.70 GiB used: 2.92 GiB (37.9%)
Memory top: 10
1: mem: 1160.9 MiB (14.7%) command: firefox pid: 15989
2: mem: 475.1 MiB (6.0%) command: firefox pid: 13487
3: mem: 353.4 MiB (4.4%) command: firefox pid: 13708
4: mem: 308.0 MiB (3.9%) command: firefox pid: 13630
5: mem: 269.6 MiB (3.4%) command: firefox pid: 15062
6: mem: 249.3 MiB (3.1%) command: firefox pid: 14427
7: mem: 238.5 MiB (3.0%) command: firefox pid: 14954
8: mem: 208.2 MiB (2.6%) command: firefox pid: 18325
9: mem: 194.0 MiB (2.4%) command: firefox pid: 18617
10: mem: 143.6 MiB (1.8%) command: firefox pid: 23960
```

The above command will display the top 10 processes that consume the most CPU and memory.

To display only the top 10 based on memory usage, run:

```
$ inxi -t m10
```

**12\. Display partition UUID details**

To show partition UUIDs (**U**niversally **U**nique **Id**entifiers), use the **-u** option.

```
$ inxi -u
```

There are many more options yet to be covered, but these are enough to get almost all the details of your Linux box.

For more details and options, refer to the man page.

```
$ man inxi
```

* * *

**Related read:**

* [**Neofetch – Display your Linux system’s information**][7]

* * *

The primary purpose of the Inxi tool is for use in IRC or forum support. If you are looking for help on a forum or website where someone asks for your system's specifications, just run this command and copy/paste the output.
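For that use case, a commonly recommended combination is full output with extra details and personally identifying data filtered out; confirm the exact flags against your inxi version:

```
$ inxi -Fxz
```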
**Resources:**

* [**Inxi GitHub Repository**][8]
* [**Inxi home page**][9]

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/how-to-find-your-system-details-using-inxi/

Author: [sk][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).

[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
[3]: http://www.ostechnix.com/wp-content/uploads/2016/08/Find-Linux-System-Details-Using-inxi.png
[4]: http://www.ostechnix.com/wp-content/uploads/2016/08/inxi-output-without-color-scheme.png
[5]: https://www.ostechnix.com/find-list-installed-repositories-commandline-linux/
[6]: https://www.ostechnix.com/view-cpu-temperature-linux/
[7]: http://www.ostechnix.com/neofetch-display-linux-systems-information/
[8]: https://github.com/smxi/inxi
[9]: http://smxi.org/docs/inxi.htm
@ -0,0 +1,100 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Blockchain 2.0 – Ongoing Projects (The State Of Smart Contracts Now) [Part 6])
[#]: via: (https://www.ostechnix.com/blockchain-2-0-ongoing-projects-the-state-of-smart-contracts-now/)
[#]: author: (editor https://www.ostechnix.com/author/editor/)

Blockchain 2.0: Ongoing Projects (The State of Smart Contracts Now) [Part 6]
======

![The State Of Smart Contracts Now][1]

Continuing from our [previous article on smart contracts][2], this post aims to survey the smart contract landscape, highlighting some of the projects and companies currently developing in this space. As discussed in the previous article in this series, smart contracts are programs that exist and execute on a blockchain network. We explored how smart contracts work and why they are superior to traditional digital platforms. The companies described here operate in a wide variety of industries, but most involve identity management systems, financial services, crowdfunding systems and the like, since these are the areas considered most suitable for switching to blockchain-based database systems.

### Open platforms

Platforms such as [Counterparty][8] and Solidity (Ethereum) are fully public building blocks with which developers can create their own smart contracts. The large number of developers participating in such projects has made them the de facto standards for developing smart contracts, designing custom cryptocurrency token systems, and creating protocols that bring functionality to blockchains. Many commendable projects have derived from them. [Quorum][9], JPMorgan's fork of Ethereum, is one example; Ripple is another.

### Managing financial transactions

Transferring cryptocurrency over the internet is touted as becoming the norm in the coming years. The shortcomings associated with this are:

* Identities and wallet addresses are anonymous. The payer has no first recourse if the recipient does not honor the transaction.
* Erroneous transactions cannot be traced back.
* Cryptographically generated hash keys are hard for humans to work with, so human error is a chief concern.

In such cases, it helps to have someone else temporarily hold the funds and settle with the recipient once due diligence is done.

[EscrowMyEther][10] and [PAYFAIR][11] are two such escrow platforms. Basically, the escrow company takes the agreed amount and sends a token to the recipient. Once the recipient delivers what the payer wants via the same escrow platform, both confirm and the final payment is released. These services are widely used online by freelancers and hobbyist collectors.

### Financial services

The development of microfinance and microinsurance projects will improve banking services for most of the world's poor and unbanked. It is estimated that the poorer, "unbanked" segment of society could add an estimated USD 380 billion in revenue for banks and institutions [^5]. This amount supersedes the operational cost savings that banks are expected to realize by switching to blockchain DLT.

BanQu Inc., based in the American Midwest, has the motto "Dignity through identity". Their platform lets individuals set up their own digital identity record, where all transactions are vetted and processed in real time on the blockchain. Records accumulate on the underlying code and build a unique online identity for their users, enabling ultra-fast transactions and settlements. Case studies of how BanQu has helped individuals and companies this way can be seen [here][3].

[Stratumn][12] is helping insurance companies offer better insurance services by automating tasks that were earlier micromanaged by humans. Through automation, end-to-end traceability, and efficient data-privacy methods, they have revolutionized how insurance claims are settled. Improved customer experience along with significant cost reductions makes for a win-win situation for clients and the companies involved.

The French insurer [AXA][14] is currently piloting a similar effort. Its product, [fizzy][13], allows users to subscribe to its service for a small fee and enter their flight details. If the flight is delayed or runs into other trouble, the program automatically searches online databases, checks the insurance terms, and credits the insured amount to the user's account. This removes the need for the user or customer to file a claim after manually checking the terms; once such systems become mainstream, long-drawn claims become unnecessary and airlines' accountability increases.

### Tracking ownership

It is theoretically possible to use the time-stamped blocks of data in a DLT to track media from creation to end-user consumption. The companies Peertracks and Mycelia are currently helping musicians publish content without worrying about their work being stolen or misused. They help artists sell directly to fans and clients while getting paid for their work, without having to go through rights agencies and record labels [^9].

### Identity management platforms

Blockchain-based identity management platforms store your identity on a distributed ledger. Once an account is set up, it is securely encrypted and sent to all participating nodes. However, as the owner of the data block, only that user has access to the data. Once you establish an identity on the network and begin transacting, automated programs within the network verify all prior transactions associated with your account, send them for regulatory filing after checking the requirements, and execute settlements automatically if the program deems the transaction legitimate. The advantage here is that, since the data on the blockchain is tamper-proof and the smart contract checks the inputs with zero bias (or subjectivity), as previously mentioned, the transaction does not need anyone's oversight or approval, and takes effect instantly.

Startups such as [ShoCard][15], [Credits][16], and [OneName][17] are currently rolling out similar services and are in talks with governments and social institutions to integrate them into mainstream use.

Other independent projects by developers such as Chris Ellis and David Duccini have respectively developed or proposed alternative identity management systems, namely "[World Citizenship][4]" and [IDCoin][5]. Mr. Ellis has even demonstrated the capability of his work by creating passports on a blockchain network.

### Resource sharing

[Share & Charge][18] ([Slock.It][19]) is a European blockchain startup. Their mobile app allows homeowners and other individuals who have invested in a charging station to share their resource with others looking for a quick charge. This not only lets owners recoup some of their investment, but also gives EV drivers access to more charging points in their vicinity, allowing suppliers to meet demand in a convenient way. Once a "customer" finishes charging their vehicle, the hardware involved creates a secure, time-stamped block of data, and a smart contract working on the platform automatically credits the corresponding amount to the owner's account. A trail of all such transactions is recorded and proper security verifications are kept. Interested readers can take a look [here][6] at the technical angle behind their product. The company's platform will gradually enable users to share other products and services with individuals in need, and earn a passive income from them.

The companies we have seen here, along with this very short list of ongoing projects, exploit smart contracts and blockchain database systems. Platforms such as these help build a secure "box" containing information accessed only by the users themselves and the overlying code or smart contract. The information is vetted and checked in real time based on triggers, and the algorithms are executed by the system. Such platforms involve minimal human oversight, a much-needed step in the right direction for secure digital automation, something never contemplated at this scale before.

The next article will shed light on the different types of blockchains. Click the following link to learn more about this topic.

* [Blockchain 2.0: Public vs. Private Blockchain Comparison][7]

[^5]: B. Pani, “Blockchain Powered Financial Inclusion,” 2016.
[^9]: M. Gates, “Blockchain. Ultimate guide to understanding blockchain bitcoin cryptocurrencies smart-contracts and the future of money.pdf.” 2017.

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/blockchain-2-0-ongoing-projects-the-state-of-smart-contracts-now/

Author: [editor][a]
Topic selection: [lujun9972][b]
Translator: [wxy](https://github.com/wxy)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).

[a]: https://www.ostechnix.com/author/editor/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/State-Of-Smart-Contracts-720x340.png
[2]: https://linux.cn/article-10956-1.html
[3]: https://banqu.co/case-study/
[4]: https://github.com/MrChrisJ/World-Citizenship
[5]: https://github.com/IDCoin/IDCoin
[6]: https://blog.slock.it/share-charge-smart-contracts-the-technical-angle-58b93ce80f15
[7]: https://www.ostechnix.com/blockchain-2-0-public-vs-private-blockchain-comparison/
[8]: https://counterparty.io/platform/
[9]: https://www.jpmorgan.com/global/Quorum
[10]: http://escrowmyether.com/
[11]: https://payfair.io/
[12]: https://stratumn.com/business-case/insurance-claim-automation-across-europe/
[13]: https://fizzy.axa/en-gb/
[14]: https://group.axa.com/en/newsroom/news/axa-goes-blockchain-with-fizzy
[15]: https://techcrunch.com/2015/05/05/shocard-is-a-digital-identity-card-on-the-blockchain/
[16]: https://techcrunch.com/2014/10/31/your-next-passport-could-be-on-the-blockchain/
[17]: https://wiki.namecoin.org/index.php?title=OneName
[18]: https://blog.slock.it/share-charge-launches-its-app-on-boards-over-1-000-charging-stations-on-the-blockchain-ba8275390309
[19]: https://slock.it/
@ -1,173 +0,0 @@

BootISO – A Simple Bash Script To Securely Create A Bootable USB Device From An ISO File
======

For OS installation, most of us (including me) quite often create a bootable USB device from an ISO file.

Many freely available applications exist in Linux for this purpose. We have even written about several such utilities in the past.

Everyone uses a different application, and each application has its own features and functionality.

Among these applications, some belong to the CLI and some are GUI-based.

Today, we will discuss a utility of the same kind called BootISO. It is a simple bash script that allows users to create a bootable USB device from an ISO file.

Many Linux administrators use the dd command to create bootable media. It is a native and well-known method, but at the same time it is a very dangerous command. So be careful whenever you perform an action with the dd command.
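For comparison, a typical dd invocation looks like the sketch below. The device name /dev/sdX is a placeholder, and writing to the wrong device will irrecoverably overwrite it:

```
# /dev/sdX is a placeholder; double-check the target device first with lsblk
$ sudo dd if=/path/to/image.iso of=/dev/sdX bs=4M status=progress && sync
```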
**Suggested read:**
**(#)** [Etcher – An Easy Way To Create A Bootable USB Drive & SD Card From An ISO Image][1]

**(#)** [Create A Bootable USB Drive From An ISO Image Using The dd Command On Linux][2]

### What is BootISO

[BootISO][3] is a simple bash script that allows users to securely create a bootable USB device from an ISO file. It is written in bash.

It doesn't offer any GUI, but at the same time it has a large number of options that let beginners create a bootable USB device on Linux without any issues. Since it is an intelligent tool, it automatically detects whether any USB device is connected to the system.

When the system has multiple USB devices connected, it prints a list. If you manually select a hard disk instead of a USB device, it exits safely without writing anything to the hard disk.

The script also checks for dependencies and prompts the user to install them. It works with all package managers, such as apt-get, yum, dnf, pacman and zypper.

### BootISO features

* It checks whether the selected ISO has the correct mime type. If not, it exits.
* If you select any disk other than a USB device (such as a local hard disk), BootISO exits automatically.
* When you have multiple drives, BootISO lets you select the desired USB drive.
* BootISO prompts the user for confirmation before erasing and partitioning the USB device.
* BootISO handles any failure from a command properly and exits.
* BootISO calls a cleanup routine on exit traps.

### How to install BootISO in Linux

There are several methods available to install BootISO in Linux, but I advise users to install it using the following method.

```
$ curl -L https://git.io/bootiso -O
$ chmod +x bootiso
$ sudo mv bootiso /usr/local/bin/
```

Once BootISO has been installed, run the following command to list the available USB devices.

```
$ bootiso -l

Listing USB drives available in your system:
NAME HOTPLUG SIZE STATE TYPE
sdd 1 32G running disk
```

If you have only one USB device, simply run the following command to create a bootable USB device from an ISO file.

```
$ bootiso /path/to/iso file

$ bootiso /opt/iso_images/archlinux-2018.05.01-x86_64.iso
Granting root privileges for bootiso.
Listing USB drives available in your system:
NAME HOTPLUG SIZE STATE TYPE
sdd 1 32G running disk
Autoselecting `sdd' (only USB device candidate)
The selected device `/dev/sdd' is connected through USB.
Created ISO mount point at `/tmp/iso.vXo'
`bootiso' is about to wipe out the content of device `/dev/sdd'.
Are you sure you want to proceed? (y/n)>y
Erasing contents of /dev/sdd...
Creating FAT32 partition on `/dev/sdd1'...
Created USB device mount point at `/tmp/usb.0j5'
Copying files from ISO to USB device with `rsync'
Synchronizing writes on device `/dev/sdd'
`bootiso' took 250 seconds to write ISO to USB device with `rsync' method.
ISO succesfully unmounted.
USB device succesfully unmounted.
USB device succesfully ejected.
You can safely remove it !
```

When you have multiple USB devices, specify your device name using the `--device` option.

```
$ bootiso -d /dev/sde /opt/iso_images/archlinux-2018.05.01-x86_64.iso
```

By default, BootISO uses the `rsync` command to perform all actions. If you want to use the `dd` command instead, use the following format.

```
$ bootiso --dd -d /dev/sde /opt/iso_images/archlinux-2018.05.01-x86_64.iso
```

If you want to skip the `mime-type` check, the BootISO utility provides the following option.

```
$ bootiso --no-mime-check -d /dev/sde /opt/iso_images/archlinux-2018.05.01-x86_64.iso
```

Add the following option to BootISO to skip the user confirmation before erasing and partitioning the USB device.

```
$ bootiso -y -d /dev/sde /opt/iso_images/archlinux-2018.05.01-x86_64.iso
```

Enable autoselection of USB devices in conjunction with the -y option.

```
$ bootiso -y -a /opt/iso_images/archlinux-2018.05.01-x86_64.iso
```

To know all the available BootISO options, run the following command.

```
$ bootiso -h
Create a bootable USB from any ISO securely.
Usage: bootiso [...]

Options

-h, --help, help Display this help message and exit.
-v, --version Display version and exit.
-d, --device Select block file as USB device.
If is not connected through USB, `bootiso' will fail and exit.
Device block files are usually situated in /dev/sXX or /dev/hXX.
You will be prompted to select a device if you don't use this option.
-b, --bootloader Install a bootloader with syslinux (safe mode) for non-hybrid ISOs. Does not work with `--dd' option.
-y, --assume-yes `bootiso' won't prompt the user for confirmation before erasing and partitioning USB device.
Use at your own risks.
-a, --autoselect Enable autoselecting USB devices in conjunction with -y option.
Autoselect will automatically select a USB drive device if there is exactly one connected to the system.
Enabled by default when neither -d nor --no-usb-check options are given.
-J, --no-eject Do not eject device after unmounting.
-l, --list-usb-drives List available USB drives.
-M, --no-mime-check `bootiso' won't assert that selected ISO file has the right mime-type.
-s, --strict-mime-check Disallow loose application/octet-stream mime type in ISO file.
-- POSIX end of options.
--dd Use `dd' utility instead of mounting + `rsync'.
Does not allow bootloader installation with syslinux.
--no-usb-check `bootiso' won't assert that selected device is a USB (connected through USB bus).
Use at your own risks.

Readme

Bootiso v2.5.2.
Author: Jules Samuel Randolph
Bugs and new features: https://github.com/jsamr/bootiso/issues
If you like bootiso, please help the community by making it visible:
* star the project at https://github.com/jsamr/bootiso
* upvote those SE post: https://goo.gl/BNRmvm https://goo.gl/YDBvFe
```

--------------------------------------------------------------------------------

via: https://www.2daygeek.com/bootiso-a-simple-bash-script-to-securely-create-a-bootable-usb-device-in-linux-from-iso-file/

Author: [Prakash Subramanian][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [robsean](https://github.com/robsean)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).

[a]:https://www.2daygeek.com/author/prakash/
[1]:https://www.2daygeek.com/etcher-easy-way-to-create-a-bootable-usb-drive-sd-card-from-an-iso-image-on-linux/
[2]:https://www.2daygeek.com/create-a-bootable-usb-drive-from-an-iso-image-using-dd-command-on-linux/
[3]:https://github.com/jsamr/bootiso
@ -0,0 +1,56 @@

A day in the life of a log message
======

> A tour of a modern distributed system from the perspective of a single log message.

Chaotic systems tend to be unpredictable. This is especially evident when architecting something as complex as a distributed system. Left unchecked, this unpredictability can waste boundless amounts of time. Every single component of a distributed system, no matter how small, must therefore be designed to fit together in a streamlined way.

[Kubernetes][1] provides a promising model for abstracting compute resources, but even it must be reconciled with other distributed platforms, such as [Apache Kafka][2], to ensure reliable data delivery. If someone were to integrate these two platforms, how would it work? Furthermore, if you traced something as simple as a log message through such a system, what would it look like? This article focuses on how log messages from an application running inside [OKD][3], the original community distribution of Kubernetes that powers Red Hat OpenShift, get to a data warehouse through Kafka.

### The OKD-defined environment

Such a journey begins in OKD, since the container platform fully overlays the hardware it abstracts. This means that a log message waits to be written to the stdout or stderr stream by an application residing in a container. From there, the log message is redirected onto the node's filesystem by a container engine such as [CRI-O][4].
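Those redirected streams are what the platform CLI surfaces. For instance, assuming access to the cluster with the oc client, you can tail a pod's stdout/stderr like this (the pod name is illustrative):

```
$ oc logs -f my-app-pod-1a2b3c   # -f follows the stream, like tail -f
```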
In OpenShift, one or more containers are encapsulated within virtual compute nodes known as pods. In fact, all applications running within OKD are abstracted as pods. This allows the applications to be manipulated in a uniform way. It also greatly simplifies communication between distributed components, since pods are systematically addressable through IP addresses and [load-balanced services][5]. So when the log message is taken from the node's filesystem by a log-collector application, it can easily be delivered to another pod running within OpenShift.

### Two peas in a pod

To ensure that the log message can be spread ubiquitously throughout the distributed system, the log collector needs to deliver it into a Kafka cluster data hub running within OpenShift. Through Kafka, the log message can be delivered to consuming applications in a reliable, fault-tolerant way with low latency. However, in order to reap the benefits of Kafka within an OKD-defined environment, Kafka needs to be fully integrated into OKD.

Running the [Strimzi operator][6] instantiates all Kafka components as pods and integrates them to run within the OKD environment. This includes Kafka brokers for queuing log messages, Kafka connectors for reading from and writing to the brokers, and Zookeeper nodes for managing the Kafka cluster state. Strimzi can also instantiate the log collector to double as a Kafka connector, allowing the log collector to feed log messages directly into a Kafka broker pod running within OKD.

### Kafka inside OKD

When the log-collector pod delivers the log message to a Kafka broker, the collector writes to a single broker partition, appending the message to the end of that partition. One of the advantages of using Kafka is that it decouples the log collector from the log's final destination. Thanks to this decoupling, the log collector doesn't care whether the log ends up in [Elasticsearch][7], Hadoop, Amazon S3, or all of them. Kafka is well connected to all of the infrastructure, so the Kafka connectors can take the log message wherever it needs to go.

Once written to a Kafka broker's partition, the log message is replicated across broker partitions within the Kafka cluster. This is a very powerful concept on its own; combined with the self-healing features of the platform, it creates a very resilient distributed system. For example, when a node becomes unavailable, the applications running on it are almost instantaneously spawned on healthy nodes. So even if a node holding a Kafka broker is lost or damaged, the log message is guaranteed to survive on as many nodes as necessary, and a new Kafka broker will quickly take its place in situ.

### Stored away

After the log message has been committed to a Kafka topic, it waits to be consumed by a Kafka connector sink, which relays it to an analytics engine or a logging warehouse. Upon delivery to its final destination, the log message can be analyzed for anomaly detection, queried for immediate root-cause analysis, or used for other purposes. Either way, Kafka delivers the log message to its destination in a safe and reliable manner.

OKD and Kafka are powerful distributed platforms that are evolving rapidly. It is vital to create systems that can abstract away the complicated nature of distributed computing without compromising performance. After all, how can we boast of systemwide efficiency if we cannot simplify the journey of a single log message?

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/9/life-log-message

Author: [Josef Karásek][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [wxy](https://github.com/wxy)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).

[a]: https://opensource.com/users/jkarasek
[1]: https://kubernetes.io/
[2]: https://kafka.apache.org/
[3]: https://www.okd.io/
[4]: http://cri-o.io/
[5]: https://kubernetes.io/docs/concepts/services-networking/service/
[6]: http://strimzi.io/
[7]: https://www.elastic.co/
@ -0,0 +1,127 @@

[#]: collector: (lujun9972)
[#]: translator: (luuming)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 Good Open Source Speech Recognition/Speech-to-Text Systems)
[#]: via: (https://fosspost.org/lists/open-source-speech-recognition-speech-to-text)
[#]: author: (Simon James https://fosspost.org/author/simonjames)

5 Good Open Source Speech Recognition/Speech-to-Text Systems
======

A speech-to-text (STT) system is, as its name implies, a way of transforming spoken words into text files for later use.

Speech-to-text technology is extremely useful. It can be used in many applications, such as automated transcription, writing books or texts with your own voice, and doing complex analysis with the generated text files and other tools.

In the past, speech-to-text technology was dominated by proprietary software and libraries; open source alternatives either didn't exist or were strictly limited and had no community around them. This is changing: today there are many open source speech-to-text tools and libraries that you can start using right now.

Here I list 5 of them.

### Open source speech recognition libraries

#### Project DeepSpeech

![5 Good Open Source Speech Recognition/Speech-to-Text Systems 15 open source speech recognition][1]

This project is developed by the team at Mozilla, the organization behind the Firefox browser. It is 100% free and implemented using the TensorFlow machine learning framework.

In other words, you can use it to train your own models to get better results, and even use it to convert other languages. You can also easily integrate it into your own TensorFlow machine learning projects. Sadly, the project currently supports English only by default.

It also supports many programming languages, such as Python (3.6), which lets you get going in seconds:

```
pip3 install deepspeech
deepspeech --model models/output_graph.pbmm --alphabet models/alphabet.txt --lm models/lm.binary --trie models/trie --audio my_audio_file.wav
```

You can also install it via npm:

```
npm install deepspeech
```

For more information, refer to the [project's homepage][2].

#### Kaldi

![5 Good Open Source Speech Recognition/Speech-to-Text Systems 17 open source speech recognition][3]

Kaldi is open source speech recognition software written in C++ and released under the Apache License. It runs on Windows, macOS and Linux. Its development started in 2009.

Kaldi's main feature over other speech recognition software is that it is extensible and modular. The community provides a wealth of third-party modules you can use for your tasks. Kaldi also supports deep neural networks and offers [excellent documentation][4] on its website.

While the code is mainly written in C++, it is wrapped by Bash and Python scripts. So if you just want basic speech-to-text conversion, you will find it easy to accomplish via either Python or Bash.

[Project's homepage][5].

#### Julius

![5 Good Open Source Speech Recognition/Speech-to-Text Systems 19 open source speech recognition][6]

Probably one of the oldest speech recognition programs ever, its development started in 1991 at Kyoto University, and ownership was transferred to an independent project team in 2005.

Julius's main features include the ability to perform real-time STT, a low memory footprint (less than 64 MB for 20,000 words), the ability to output N-best word/word-graph results, the ability to run as a server unit, and a lot more. The software was mainly designed for academic and research purposes. It is written in C and runs on Linux, Windows, macOS and even Android (on smartphones).

It currently supports English and Japanese only. The software may be easy to install from your Linux distribution's repositories; just search for julius in your package manager. The latest version was [released][7] about a month and a half ago.
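For example, on a Debian- or Ubuntu-based system, the install would typically look like this (assuming your distribution actually packages julius):

```
$ sudo apt install julius   # package availability varies by distribution
```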
[Project's homepage][8].

#### Wav2Letter++

![5 Good Open Source Speech Recognition/Speech-to-Text Systems 21 open source speech recognition][9]

If you are looking for something more modern, then this one is for you. Wav2Letter++ is open source speech recognition software released by Facebook's AI research team about 2 months ago. The code is released under the BSD license.

Facebook describes its library as "the fastest state-of-the-art speech recognition system". The ideas behind building it aim to make it optimized for performance by default. Facebook's newest machine learning library, [FlashLight][11], is used as the underlying core of Wav2Letter++.

Wav2Letter++ requires you to first build a model for the language of your choice in order to train the algorithms. There are no pre-trained models for any language, including English; it is just a machine-learning-driven speech-to-text tool written in C++, hence the name Wav2Letter++.

[Project's homepage][12].

#### DeepSpeech2

![5 Good Open Source Speech Recognition/Speech-to-Text Systems 23 open source speech recognition][13]

Researchers at the Chinese giant Baidu are also working on their own speech-to-text engine, called DeepSpeech2. It is an end-to-end open source engine that uses the "PaddlePaddle" deep learning framework for English or Mandarin speech-to-text conversion. The code is released under the BSD license.

The engine can be trained on any model and for any language you want. The models are not released with the code; you will have to build them yourself, just as with the other software. DeepSpeech2's source code is written in Python, so it will be easy to get familiar with if you have used Python before.
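To explore the source, a reasonable first step is simply to clone the repository (the same URL as in the resources list below):

```
$ git clone https://github.com/PaddlePaddle/DeepSpeech
```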
[Project's homepage][14].

### Conclusion

The speech recognition field is still dominated mainly by proprietary giants such as Google and IBM, which offer closed-source commercial services for it, but the open source alternatives are promising. These 5 open source speech recognition engines should help you build your applications, and they will keep evolving over time. In a few years, we hope open source will become the norm for these technologies, as in other industries.

If you have any other recommendations or comments about the list, we'd love to hear them below.

--------------------------------------------------------------------------------

via: https://fosspost.org/lists/open-source-speech-recognition-speech-to-text

Author: [Simon James][a]
Topic selection: [lujun9972][b]
Translator: [LuuMing](https://github.com/LuuMing)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).

[a]: https://fosspost.org/author/simonjames
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/fosspost.org/wp-content/uploads/2019/02/hero_speech-machine-learning2.png?resize=820%2C280&ssl=1 (5 Good Open Source Speech Recognition/Speech-to-Text Systems 16 open source speech recognition)
[2]: https://github.com/mozilla/DeepSpeech
[3]: https://i0.wp.com/fosspost.org/wp-content/uploads/2019/02/Screenshot-at-2019-02-19-1134.png?resize=591%2C138&ssl=1 (5 Good Open Source Speech Recognition/Speech-to-Text Systems 18 open source speech recognition)
[4]: http://kaldi-asr.org/doc/index.html
[5]: http://kaldi-asr.org
[6]: https://i2.wp.com/fosspost.org/wp-content/uploads/2019/02/mic_web.png?resize=385%2C100&ssl=1 (5 Good Open Source Speech Recognition/Speech-to-Text Systems 20 open source speech recognition)
[7]: https://github.com/julius-speech/julius/releases
[8]: https://github.com/julius-speech/julius
[9]: https://i2.wp.com/fosspost.org/wp-content/uploads/2019/02/fully_convolutional_ASR.png?resize=850%2C177&ssl=1 (5 Good Open Source Speech Recognition/Speech-to-Text Systems 22 open source speech recognition)
[10]: https://code.fb.com/ai-research/wav2letter/
[11]: https://github.com/facebookresearch/flashlight
[12]: https://github.com/facebookresearch/wav2letter
[13]: https://i2.wp.com/fosspost.org/wp-content/uploads/2019/02/ds2.png?resize=850%2C313&ssl=1 (5 Good Open Source Speech Recognition/Speech-to-Text Systems 24 open source speech recognition)
[14]: https://github.com/PaddlePaddle/DeepSpeech
@ -0,0 +1,83 @@

[#]: collector: (lujun9972)
[#]: translator: (arrowfeng)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 essential values for the DevOps mindset)
[#]: via: (https://opensource.com/article/19/5/values-devops-mindset)
[#]: author: (Brent Aaron Reed https://opensource.com/users/brentaaronreed/users/wpschaub/users/wpschaub/users/wpschaub/users/cobiacomm/users/marcobravo/users/brentaaronreed)

5 essential values for the DevOps mindset
======

People and processes matter more, and take more time, than any technological "silver bullet" for the business problems you are solving.

![human head, brain outlined with computer hardware background][1]

Many IT professionals today struggle to adapt to change and disruption. Are you struggling to adapt, so to speak? Do you feel overwhelmed? This is not uncommon. Today, the status quo in IT is not good enough, so it keeps trying to reinvent itself.

With over 30 years of combined IT experience, we have witnessed how important people and relationships are to IT's ability to be effective and to help the business thrive. However, in most cases, our conversations about IT solutions start with technology rather than people and processes. The propensity to look for a "silver bullet" to address business and IT challenges is far too common. But you can't just buy innovation, DevOps, or effective teams and ways of working; they need to be nurtured, supported, and guided.

With disruption so prevalent and such a critical demand for speed of change, we need discipline and guardrails. The five essential values for the DevOps mindset described below will support the practices that get us there. These values are not new ideas; they are refactored from what we have learned through experience. Some of the values are interchangeable, they are flexible, and they guide the overall principles that support these five values like pillars.

![5 essential values for the DevOps mindset][2]

### 1\. Feedback from stakeholders is essential

How do we know whether we are creating more value for ourselves than for our stakeholders? We need persistent, quality data to analyze, inform, and drive better decisions. Relevant information from reliable sources is vital for any business to thrive. We need to listen to and understand what our stakeholders are saying, not what we want them to say, and implement change in a way that enables us to adjust our thinking, our processes, and our technologies, adapting them as needed to satisfy our stakeholders. Too often we see little change, or lots of change for the wrong reasons, because the information (data) was wrong. Therefore, aligning change to stakeholder feedback is an essential value, and it helps us focus on what matters most to making the company successful.

> Focus on our stakeholders and their feedback rather than simply changing for the sake of change.

### 2\. Improve beyond the limits of today's processes

We want our products and services to continuously delight our customers, our most important stakeholders. Therefore, we need to improve continually. This is not only about quality; it can also mean costs, availability, relevance, and many other goals and factors. Creating repeatable processes or utilizing a common framework is great; they can improve governance and a host of other issues. However, that should not be our end goal. As we look for ways to improve, we must adjust our processes, complemented by the right technology and tools. There may be reasons to throw out a "so-called" framework, because not doing so could add waste or, worse, lead to mere "cargo culting" (doing something with no value or purpose).

> Strive to always innovate and improve beyond repeatable processes and frameworks.

### 3\. No new silos to break down old silos

Silos and DevOps are incompatible. We see this all the time: an IT director brings in so-called "experts" to implement agile and DevOps, and what do they do? These "experts" create a new problem on top of the existing problem: another silo added to the IT department and to an enterprise already riddled with silos. Creating "DevOps" titles goes against the very principles of agile and DevOps, which are based on the concept of breaking down silos. In both agile and DevOps, teamwork is essential, and if you don't work in a self-organizing team, you're doing neither of them.

> Inspire and share collaboratively instead of becoming a hero or creating a silo.

### 4\. Knowing your customer means cross-organization collaboration

No part of the business is an independent entity, because they all have stakeholders, and the primary stakeholder is always the customer. "The customer is always right" (or the king, as I like to say). The point is, without the customer there really is no business, and to stay in business today we need to "differentiate" from our competitors. We also need to know how our customers feel about us and what they expect from us. Knowing what the customer wants is imperative and requires timely feedback to ensure the business addresses these primary stakeholders' needs and concerns quickly and responsibly.

![Minimize time spent with build-measure-learn process][3]

Whether it comes from an idea, a concept, an assumption, or direct stakeholder feedback, we need to identify and measure the features or services our product delivers using the explore, build, test, and deliver lifecycle. Fundamentally, this means we need to be "plugged in" across our organization. There are no borders in continuous innovation, learning, and DevOps. Thus, when we measure across the enterprise, we can understand the whole and take actionable, meaningful steps to improve.

> Measure performance across the organization, not just in a line of business.

### 5\. Inspire adoption through enthusiasm

Not everyone is driven to learn, adapt, and change; however, just as smiles can be infectious, so can learning and the willingness to be part of a culture of change. Adapting and evolving within a learning culture provides a natural mechanism for a group of people to learn and pass on information (i.e., cultural transmission). Learning styles, attitudes, methods, and processes continually evolve, so we can improve upon them. The next step is to apply what was learned and improved, and to share the information with colleagues. Learning does not happen automatically; it takes effort, evaluation, discipline, awareness, and especially communication; unfortunately, these are things that tools and automation alone cannot provide. Review your processes, automation, tool strategies, and implementation work, make them transparent, and collaborate with your colleagues on reuse and improvement.

> Promote a culture of learning through lean, quality deliverables, not just tools and automation.

### Summary

![Continuous goals of DevOps mindset][4]

As our company has adopted DevOps, we continue to emphasize these five values over any book, website, or automation software. It takes time to adopt this mindset, and it is very different from what we used to do as sysadmins. It is a wholly new way of working that will take many years to mature. Do these principles align with your own? Share them in the comments or on our website, [Agents of Chaos][5].

* * *

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/5/values-devops-mindset

Author: [Brent Aaron Reed][a]
Topic selection: [lujun9972][b]
Translator: [arrowfeng](https://github.com/arrowfeng)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).

[a]: https://opensource.com/users/brentaaronreed/users/wpschaub/users/wpschaub/users/wpschaub/users/cobiacomm/users/marcobravo/users/brentaaronreed
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brain_data.png?itok=RH6NA32X (human head, brain outlined with computer hardware background)
[2]: https://opensource.com/sites/default/files/uploads/devops_mindset_values.png (5 essential values for the DevOps mindset)
[3]: https://opensource.com/sites/default/files/uploads/devops_mindset_minimze-time.jpg (Minimize time spent with build-measure-learn process)
[4]: https://opensource.com/sites/default/files/uploads/devops_mindset_continuous.png (Continuous goals of DevOps mindset)
[5]: http://agents-of-chaos.org
@ -1,148 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Expand And Unexpand Commands Tutorial With Examples)
[#]: via: (https://www.ostechnix.com/expand-and-unexpand-commands-tutorial-with-examples/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)

Expand And Unexpand Commands Tutorial With Examples
======

![Expand And Unexpand Commands Explained][1]

This guide explains two Linux commands, namely **expand** and **unexpand**, with practical examples. For the curious, the expand and unexpand commands are used to replace TAB characters in files with spaces, and vice versa. There is also a command named "expand" in MS-DOS, which is used to decompress compressed files, but the Linux expand command simply converts tabs to spaces. Both commands are part of **GNU coreutils** and were written by **David MacKenzie**.

For demonstration purposes, I will use a text file named "ostechnix.txt" throughout this article. All the commands below were tested on Arch Linux.

### expand command examples

As I mentioned earlier, the expand command replaces the TAB characters in a file with spaces.

Now, let us convert the tabs in ostechnix.txt to spaces and write the result to standard output:

```
$ expand ostechnix.txt
```

If you don't want to display the result on standard output, just write it to another file like below.

```
$ expand ostechnix.txt>output.txt
```

We can also convert tabs read from standard input to spaces. To do so, simply run the "expand" command without a file name:

```
$ expand
```

Just type the text and hit ENTER to convert the tabs to spaces. Press **CTRL+C** to quit.
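To see the conversion explicitly, you can make whitespace visible with cat -A (tabs appear as ^I, line ends as $). This quick check is illustrative only:

```
$ printf 'a\tb\n' | expand | cat -A
a       b$
```

The tab after "a" has become seven spaces, padding the line out to the default tab stop at column 8.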
If you don't want tabs after non-blank characters to be converted, use the **-i** flag like below.

```
$ expand -i ostechnix.txt
```

We can also set each tab to a specified width, instead of the default value of 8.

```
$ expand -t 5 ostechnix.txt
```

We can even specify multiple tab positions, separated by commas, like below.

```
$ expand -t 5,10,15 ostechnix.txt
```

Or,

```
$ expand -t "5 10 15" ostechnix.txt
```

For more details, refer to the man page.

```
$ man expand
```

### unexpand command examples

As you may have already guessed, the **unexpand** command does the opposite of the expand command, i.e. it converts spaces to TAB characters. Let me show you a few examples to learn how to use the unexpand command.

To convert the blanks (spaces, of course) in a file to tabs and write the output to standard output, do:

```
$ unexpand ostechnix.txt
```

If you want to write the output to a file instead of just displaying it on standard output, use this command:

```
$ unexpand ostechnix.txt>output.txt
```

Convert blanks read from standard input to tabs:

```
$ unexpand
```

By default, the unexpand command converts only the initial blanks. If you want to convert all blanks instead of just the initial ones, use the **-a** flag:

```
$ unexpand -a ostechnix.txt
```
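Again, cat -A makes the result visible. Here, seven spaces filling out to the default tab stop at column 8 collapse into a single tab (^I); this is an illustrative check only:

```
$ printf 'a       b\n' | unexpand -a | cat -A
a^Ib$
```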
To convert only the leading blanks (note that this overrides **-a**):

```
$ unexpand --first-only ostechnix.txt
```

To have tabs a specified number of characters apart, instead of **8** (this enables **-a**):

```
$ unexpand -t 5 ostechnix.txt
```

Similarly, we can specify multiple tab positions, separated by commas.

```
$ unexpand -t 5,10,15 ostechnix.txt
```

Or,

```
$ unexpand -t "5 10 15" ostechnix.txt
```

For more details, refer to the man page.

```
$ man unexpand
```

When working on a large number of files, the expand and unexpand commands can be quite useful for replacing unwanted TAB characters with spaces, and vice versa.

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/expand-and-unexpand-commands-tutorial-with-examples/

Author: [sk][a]
Topic selection: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).

[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/05/Expand-And-Unexpand-Commands-720x340.png
@ -0,0 +1,91 @@

[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Search Linux Applications On AppImage, Flathub And Snapcraft Platforms)
[#]: via: (https://www.ostechnix.com/search-linux-applications-on-appimage-flathub-and-snapcraft-platforms/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)

Search Linux Applications On AppImage, Flathub And Snapcraft Platforms
======

![Search Linux Applications On AppImage, Flathub And Snapcraft][1]

Linux keeps evolving. In the past, developers had to build applications separately for different Linux distributions. Since multiple Linux variants exist, building apps for all of them became tedious and very time-consuming. Then some developers invented package converters and builders such as [**Checkinstall**][2], [**Debtap**][3] and [**Fpm**][4]. But they didn't fully solve the problem either: all of these tools simply convert one package format to another, and we still have to find and install the dependencies the application needs to run.

Well, times have changed. We now have universal Linux applications, which means we can install them on most Linux distributions. Whether it is Arch Linux, Debian, CentOS, Red Hat, Ubuntu or any other popular Linux distribution, universal apps work just fine. These applications are packaged with all the necessary libraries and dependencies in a single bundle. All we have to do is download and run them on whichever Linux distribution we use. The popular universal app formats are **AppImages**, [**Flatpaks**][5] and [**Snaps**][6].

AppImages are created and maintained by **Simon Peter**. Many popular applications, such as GIMP, Firefox, Krita and more, are available in this format directly from their download pages. Just download them, make them executable and run them right away. You don't even need root permissions to run AppImages.
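Making an AppImage runnable boils down to two commands; the file name below is a made-up placeholder:

```
$ chmod +x Some_App-x86_64.AppImage   # placeholder file name
$ ./Some_App-x86_64.AppImage
```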
Flatpak was developed by **Alexander Larsson** (a Red Hat employee). Flatpak applications are hosted in a central repository (store) called **"Flathub"**. If you are a developer, you are encouraged to build your applications in Flatpak format and distribute them to users via Flathub.

**Snaps** were built by **Canonical**, primarily for Ubuntu. However, developers from other Linux distributions have started contributing to the Snap packaging format, so Snaps now work on other Linux distributions as well. Snaps can be downloaded directly from an application's download page or from the **Snapcraft** store.

Many popular companies and developers have released their applications in AppImage, Flatpak and Snap formats. If you are looking for an app, just go to the respective store, grab the application of your choice and run it, regardless of the Linux distribution you use.

There is also a command-line universal application search tool called **"Chob"** that makes it easy to search for Linux applications on the AppImage, Flathub and Snapcraft platforms. This tool only searches for the given application and displays the official link in your default browser; it does not install anything. This guide explains how to install Chob and use it to search for AppImages, Flatpaks and Snaps on Linux.

### Search Linux applications on AppImage, Flathub and Snapcraft platforms using Chob

Download the latest Chob binary from the [**releases page**][7]. As of writing this guide, the latest version was **0.3.5**.

```
$ wget https://github.com/MuhammedKpln/chob/releases/download/0.3.5/chob-linux
```

Make it executable:

```
$ chmod +x chob-linux
```

Finally, search for the application you want. For example, I will search for applications related to **Vim**.

```
$ ./chob-linux vim
```

Chob will search the AppImage, Flathub and Snapcraft platforms for the given application (and related applications) and display the results.

![][8]

Searching for Linux applications using Chob

Just type the number in front of the application you want to open its official link in your default browser, where you can read the details of the application.

![][9]

Viewing the details of a Linux application in the browser

For more details, take a look at the official Chob GitHub page below.

**Resources:**

* [**Chob GitHub Repository**][10]

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/search-linux-applications-on-appimage-flathub-and-snapcraft-platforms/

Author: [sk][a]
Topic selection: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).

[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/05/chob-720x340.png
[2]: https://www.ostechnix.com/build-packages-source-using-checkinstall/
[3]: https://www.ostechnix.com/convert-deb-packages-arch-linux-packages/
[4]: https://www.ostechnix.com/build-linux-packages-multiple-platforms-easily/
[5]: https://www.ostechnix.com/flatpak-new-framework-desktop-applications-linux/
[6]: https://www.ostechnix.com/introduction-ubuntus-snap-packages/
[7]: https://github.com/MuhammedKpln/chob/releases
[8]: http://www.ostechnix.com/wp-content/uploads/2019/05/Search-Linux-applications-Using-Chob.png
[9]: http://www.ostechnix.com/wp-content/uploads/2019/05/View-Linux-applications-Details.png
[10]: https://github.com/MuhammedKpln/chob