Merge pull request #2 from LCTT/master

update
This commit is contained in:
amwps290 2018-06-16 20:35:48 +08:00 committed by GitHub
commit 0f9d98514e
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
16 changed files with 1636 additions and 531 deletions


@ -1,7 +1,11 @@
机器人学影响 CIO 角色的 3 种方式
======
> 机器人流程自动化将如何影响 CIO?来看看这些可能性。
![配图](https://enterprisersproject.com/sites/default/files/styles/620x350/public/cio_ai.png?itok=toMIgELj)
随着 2017 年的结束,许多 CIO 们的 2018 年目标也将确定。或许你们将参与到机器人流程自动化(RPA)中。多年以来,RPA 对许多公司来说只是一个可望而不可及的概念。但是随着组织被迫变得越来越敏捷、高效,RPA 所具有的潜在优势开始受到重视。
根据 Redwood Software 和 Sapio Research 的最新[研究报告][1],IT 决策者们相信,未来 5 年有 59% 的业务可以被自动化处理,从而产生新的速度和效率,并削减相应岗位上重复性的人工工作量。但是,目前在相应岗位上没有实施 RPA 的公司中,有 20% 的公司员工超过 1000 人。
@ -43,7 +47,7 @@ via: https://enterprisersproject.com/article/2017/11/3-ways-robotics-affects-cio
作者:[Dennis Walsh][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -160,7 +160,7 @@ func main() {
router.Handle("GET", "/api/auth_user", authRequired(getAuthUser))
addr := fmt.Sprintf(":%d", config.port)
log.Printf("starting server at %s \n", config.appURL)
log.Fatalf("could not start server: %v\n", http.ListenAndServe(addr, router))
}
@ -220,7 +220,7 @@ go build
```
我们在目录中有了一个 “passwordless-demo”但是你的目录中可能与示例不一样`go build` 将创建一个同名的可执行文件。如果你没有关闭前面的 cockroach 节点,并且你正确配置了 `SMTP_USERNAME` 和 `SMTP_PASSWORD` 变量,你将看到命令 `starting server at http://localhost/ ` 没有错误输出。
#### 请求 JSON 的中间件
@ -764,7 +764,7 @@ func fetchUser(ctx context.Context, id string) (User, error) {
如果你在 mailtrap 上点击之后出现有关 `脚本运行被拦截,因为文档的框架是沙箱化的,并且没有设置 'allow-scripts' 权限` 的问题,你可以尝试右键点击 “在新标签中打开链接“。这样做是安全的,因为邮件内容是 [沙箱化的][10]。我在 `localhost` 上有时也会出现这个问题,但是我认为你一旦以 `https://` 方式部署到服务器上应该不会出现这个问题了。
如果有任何问题,请在我的 [GitHub repo][11] 留言或者提交 PRs
之后,我为这个 API 写了一个客户端,作为这篇文章的[第二部分][13]。


@ -0,0 +1,163 @@
如何装载/卸载 Linux 内核模块
===============
> 找到并装载内核模块以解决外设问题。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_development_programming.png?itok=4OM29-82)
本文来自 Manning 出版的 [Linux in Action][1] 的第 15 章。
Linux 使用内核模块管理硬件外设。 我们来看看它是如何工作的。
运行中的 Linux 内核是您不希望被破坏的东西之一。毕竟,内核是驱动计算机所做的一切工作的软件。考虑到在一个运行的系统上必须同时管理诸多细节,最好能让内核尽可能的减少分心,专心的完成它的工作。但是,如果对计算环境进行任何微小的更改都需要重启整个系统,那么插入一个新的网络摄像头或打印机就可能会严重影响您的工作流程。每次添加设备时都必须重新启动,以使系统识别它,这效率很低。
为了在稳定性和可用性之间达成有效的平衡Linux 将内核隔离,但是允许您通过可加载内核模块 (LKM) 实时添加特定的功能。如下图所示,您可以将模块视为软件的一部分,它告诉内核在哪里找到一个设备以及如何使用它。反过来,内核使设备对用户和进程可用,并监视其操作。
![Kernel modules][3]
*内核模块充当设备和 Linux 内核之间的转换器。*
虽然你可以自己编写模块来完全按照你喜欢的方式来支持一个设备但是为什么要这样做呢Linux 模块库已经非常强大,通常不需要自己去实现一个模块。 而绝大多数时候Linux 会自动加载新设备的模块,而您甚至不知道它。
不过,有时候,出于某种原因,它本身并不会自动进行。(你肯定不想让那个招聘经理不耐烦地一直等待你的笑脸加入视频面试。)为了帮助你解决问题,你需要更多地了解内核模块,特别是,如何找到运行你的外设的实际模块,然后如何手动激活它。
### 查找内核模块
按照公认的约定,内核模块是位于 `/lib/modules/` 目录下、以 .ko(内核对象)为扩展名的文件。 然而,在你找到这些文件之前,你还需要做个选择。因为在引导时你需要从版本列表中选择其一加载,所以支持你所选内核的特定软件(包括内核模块)必须存在于某处,而 `/lib/modules/` 就是这个地方。 你会发现该目录里放着每个可用的 Linux 内核版本的模块;例如:
```
$ ls /lib/modules
4.4.0-101-generic
4.4.0-103-generic
4.4.0-104-generic
```
在我的电脑上运行的内核是版本号最高的版本(4.4.0-104-generic),但不能保证你的情况也一样(内核经常更新)。 如果您将要在一个运行的系统上使用模块完成一些工作的话,你需要确保您找到正确的目录树。
好消息:有一个可靠的窍门。相对于通过名称来识别目录并希望能够找到正确的目录,你可以使用始终指向当前所用内核名称的系统变量。 您可以使用 `uname -r`(`-r` 表示只显示系统信息中的内核版本号)来调用该变量:
```
$ uname -r
4.4.0-104-generic
```
通过这些信息,您可以使用称为命令替换的过程将 `uname` 并入您的文件系统引用中。 例如,要导航到正确的目录,您需要将其添加到 `/lib/modules` 。 要告诉 Linux “uname” 不是一个文件系统中的位置,请将 `uname` 部分用反引号括起来,如下所示:
```
$ ls /lib/modules/`uname -r`
build modules.alias modules.dep modules.softdep
initrd modules.alias.bin modules.dep.bin modules.symbols
kernel modules.builtin modules.devname modules.symbols.bin
misc modules.builtin.bin modules.order vdso
```
你可以在 `kernel/` 目录下的子目录中找到大部分模块。 花几分钟时间浏览这些目录,了解事物的排列方式和可用内容。 这些文件名通常会让你知道它们是什么。
```
$ ls /lib/modules/`uname -r`/kernel
arch crypto drivers fs kernel lib mm
net sound ubuntu virt zfs
```
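上面的反引号写法也可以换成更现代的 `$(...)` 命令替换形式,二者等价,而且 `$(...)` 支持嵌套;本文后面查找 ath9k 模块时就会用到这种写法:

```
# 反引号与 $() 等价;$() 的优点是可以嵌套使用
moddir="/lib/modules/$(uname -r)"
echo "$moddir"
```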
这是查找内核模块的一种方法;实际上,这是一种快速的方式。 但这不是唯一的方法。 如果你想获得完整的集合,你可以使用 `lsmod` 列出所有当前加载的模块以及一些基本信息。 这个截断输出(全部列出的话在这里就太多了)的第一列是模块名称,后面是文件大小和被使用的次数,最后是正在使用该模块的其他模块的名称:
```
$ lsmod
[...]
vboxdrv 454656 3 vboxnetadp,vboxnetflt,vboxpci
rt2x00usb 24576 1 rt2800usb
rt2800lib 94208 1 rt2800usb
[...]
```
到底有多少?好吧,我们再运行一次 `lsmod`,但是这一次将输出通过管道传给 `wc -l` 来统计行数:
```
$ lsmod | wc -l
113
```
这是已加载的模块数量。 那么总共有多少个可用模块呢? 运行 `modprobe -c` 并统计输出行数就能得到答案:
```
$ modprobe -c | wc -l
33350
```
有 33,350 个可用模块!? 看起来好像有人多年来一直在努力为我们提供软件来驱动我们的物理设备。
注意:在某些系统中,您可能会遇到自定义的模块,这些模块要么在 `/etc/modules` 文件中使用独特的条目进行引用,要么在 `/etc/modules-load.d/` 下的配置文件中。这些模块很可能是本地开发项目的产物,可能涉及前沿实验。不管怎样,知道你看到的是什么总是好的。
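可以用下面这组命令快速查看这些配置文件(文件是否存在因系统而异,所以命令对缺失的情况做了容错处理):

```
# 列出通过配置文件声明加载的自定义模块;文件可能不存在,缺失时打印提示
cat /etc/modules 2>/dev/null || echo "没有 /etc/modules"
ls /etc/modules-load.d/ 2>/dev/null || echo "没有 /etc/modules-load.d/"
```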
这就是如何找到模块的方法。 如果出于某种原因,它不会自行加载,您的下一个工作就是弄清楚如何手动加载未激活的模块。
### 手动加载内核模块
在加载内核模块之前,逻辑上您必须确认它存在。在这之前,你需要知道它叫什么。要做到这一点,有时需要兼有魔法和运气以及在线文档作者的辛勤工作的帮助。
我将通过描述一段时间前遇到的问题来说明这个过程。在一个晴朗的日子里,出于某种原因,笔记本电脑上的 WiFi 接口停止工作了。就这样。也许是软件升级把它搞砸了。谁知道呢?我运行了 `lshw -c network` ,得到了这个非常奇怪的信息:
```
network UNCLAIMED
    AR9485 Wireless Network Adapter
```
Linux 识别到了接口Atheros AR9485但将其列为未声明。 那么,正如他们所说的那样,“当情况变得严峻时,就会在互联网上进行艰难的搜索。” 我搜索了一下 atheros ar9 linux 模块,在浏览了一页又一页五年前甚至是十年前的页面后,它们建议我自己写个模块或者放弃吧,然后我终于发现(最起码 Ubuntu 16.04)有一个可以工作的模块。 它的名字是 ath9k 。
是的! 这场战斗胜券在握!向内核添加模块比听起来容易得多。 要仔细检查它是否可用,可以针对模块的目录树运行 `find`,指定 `-type f` 来告诉 Linux 您正在查找文件,然后将字符串 `ath9k` 和星号一起添加以包含所有以你的字符串打头的文件:
```
$ find /lib/modules/$(uname -r) -type f -name ath9k*
/lib/modules/4.4.0-97-generic/kernel/drivers/net/wireless/ath/ath9k/ath9k_common.ko
/lib/modules/4.4.0-97-generic/kernel/drivers/net/wireless/ath/ath9k/ath9k.ko
/lib/modules/4.4.0-97-generic/kernel/drivers/net/wireless/ath/ath9k/ath9k_htc.ko
/lib/modules/4.4.0-97-generic/kernel/drivers/net/wireless/ath/ath9k/ath9k_hw.ko
```
再一步,加载模块:
```
# modprobe ath9k
```
就是这样。无需重启,毫无麻烦。
这里还有一个示例,向您展示如何使用已经崩溃的运行模块。曾经有一段时间,我使用罗技网络摄像头和一个特定的软件会使摄像头在下次系统启动前无法被任何其他程序访问。有时我需要在不同的应用程序中打开相机,但没有时间关机重新启动。(我运行了很多应用程序,在引导之后将它们全部准备好需要一些时间。)
由于这个模块可能是运行的,所以使用 `lsmod` 来搜索 video 这个词应该给我一个关于相关模块名称的提示。 实际上,它比提示更好:用 video 这个词描述的唯一模块是 uvcvideo如下所示
```
$ lsmod | grep video
uvcvideo 90112 0
videobuf2_vmalloc 16384 1 uvcvideo
videobuf2_v4l2 28672 1 uvcvideo
videobuf2_core 36864 2 uvcvideo,videobuf2_v4l2
videodev 176128 4 uvcvideo,v4l2_common,videobuf2_core,videobuf2_v4l2
media 24576 2 uvcvideo,videodev
```
有可能是我自己的操作导致了崩溃,我想我可以深挖一下,看看能否从根源上解决问题。但你知道是怎么回事:有时你不关心理论,只想让设备恢复工作。 所以我用 `rmmod` 卸载了 `uvcvideo` 模块,然后用 `modprobe` 重新加载它,一切恢复正常:
```
# rmmod uvcvideo
# modprobe uvcvideo
```
再一次:不重新启动。没有其他的后续影响。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/5/how-load-or-unload-linux-kernel-module
作者:[David Clinton][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[amwps290](https://github.com/amwps290)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/dbclinton
[1]:https://www.manning.com/books/linux-in-action?a_aid=bootstrap-it&a_bid=4ca15fc9&chan=opensource
[2]:/file/397906
[3]:https://opensource.com/sites/default/files/uploads/kernels.png "Kernel modules"


@ -0,0 +1,116 @@
Vim-plug极简 Vim 插件管理器
======
![](https://www.ostechnix.com/wp-content/uploads/2018/06/vim-plug-720x340.png)
当没有插件管理器时,Vim 用户必须手动下载 tarball 包形式的插件,并将它们解压到 `~/.vim` 目录中。插件数量少时这还可行,但当安装的插件越来越多时,就会变得一团糟。所有插件文件混在同一个目录中,用户无法分辨哪个文件属于哪个插件,也无法知道卸载某个插件时应该删除哪些文件。这时 Vim 插件管理器就可以派上用场。插件管理器将安装的插件文件保存在各自独立的目录中,因此管理所有插件变得非常容易。我们几个月前已经写了关于 [Vundle][1] 的文章。今天,我们将看到又一个名为 “Vim-plug” 的 Vim 插件管理器。
Vim-plug 是一个自由、开源、速度非常快的、极简的 vim 插件管理器。它可以并行地安装或更新插件。你还可以回滚更新。它创建<ruby>浅层克隆<rt>shallow clone</rt></ruby>,以最小化磁盘空间使用和下载时间。它支持按需加载插件以加快启动时间。其他值得注意的特性是支持分支/标签/提交、post-update 钩子、支持外部管理的插件等。
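例如,前面提到的按需加载可以在声明插件时通过 `on` 或 `for` 选项实现(下面的插件名仅作示例):

```
" 仅在首次执行 :NERDTreeToggle 命令时才加载该插件
Plug 'scrooloose/nerdtree', { 'on': 'NERDTreeToggle' }
" 仅在打开 clojure 文件时才加载该插件
Plug 'tpope/vim-fireplace', { 'for': 'clojure' }
```

这些声明同样要放在后文介绍的 `plug#begin()` 与 `plug#end()` 之间。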
### 安装
安装和使用起来非常容易。你只需打开终端并运行以下命令:
```
$ curl -fLo ~/.vim/autoload/plug.vim --create-dirs https://raw.githubusercontent.com/junegunn/vim-plug/master/plug.vim
```
Neovim 用户可以使用以下命令安装 Vim-plug
```
$ curl -fLo ~/.config/nvim/autoload/plug.vim --create-dirs https://raw.githubusercontent.com/junegunn/vim-plug/master/plug.vim
```
### 用法
#### 安装插件
要安装插件,你必须如下所示首先在 Vim 配置文件中声明它们。一般 Vim 的配置文件是 `~/.vimrc`Neovim 的配置文件是 `~/.config/nvim/init.vim`。请记住,当你在配置文件中声明插件时,列表应该以 `call plug#begin(PLUGIN_DIRECTORY)` 开始,并以 `plug#end()` 结束。
例如,我们安装 “lightline.vim” 插件。为此,请在 `~/.vimrc` 的顶部添加以下行。
```
call plug#begin('~/.vim/plugged')
Plug 'itchyny/lightline.vim'
call plug#end()
```
在 vim 配置文件中添加上面的行后,通过输入以下命令重新加载:
```
:source ~/.vimrc
```
或者,只需重新加载 Vim 编辑器。
现在,打开 vim 编辑器:
```
$ vim
```
使用以下命令检查状态:
```
:PlugStatus
```
然后输入下面的命令并按回车键,安装之前在配置文件中声明的插件:
```
:PlugInstall
```
#### 更新插件
要更新插件,请运行:
```
:PlugUpdate
```
更新插件后,按下 `d` 查看更改。或者,你可以之后输入 `:PlugDiff`
#### 审查插件
有时,更新后的插件可能有新的 bug 或无法正常工作。要解决这个问题,你可以简单地回滚有问题的插件。输入 `:PlugDiff` 命令并按回车键,查看上次 `:PlugUpdate` 的更改,然后在相应的段落上按 `X`,即可将该插件回滚到更新前的状态。
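除了事后回滚,也可以在声明插件时直接用 `tag`、`branch` 或 `commit` 选项把插件锁定到某个版本,避免 `:PlugUpdate` 将其更新到有问题的版本(下面的插件名与版本号仅作示例):

```
" 锁定到指定标签
Plug 'junegunn/goyo.vim', { 'tag': '1.6.0' }
" 跟随指定分支
Plug 'neoclide/coc.nvim', { 'branch': 'release' }
```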
#### 删除插件
要删除一个插件,请删除或注释掉之前在 vim 配置文件中添加的对应 `Plug` 行。然后,运行 `:source ~/.vimrc` 或重启 Vim 编辑器。最后,运行以下命令卸载插件:
```
:PlugClean
```
该命令将删除 vim 配置文件中所有未声明的插件。
#### 升级 Vim-plug
要升级 Vim-plug 本身,请输入:
```
:PlugUpgrade
```
如你所见,使用 Vim-plug 管理插件并不难。它简化了插件管理。现在去找出你最喜欢的插件并使用 Vim-plug 来安装它们。
就是这些了。我将很快在这里发布另一个有趣的话题。在此之前,请继续关注我们。
干杯!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/vim-plug-a-minimalist-vim-plugin-manager/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://linux.cn/article-9416-1.html


@ -1,205 +0,0 @@
Translating by MjSeven
How to Install Docker CE on Your Desktop
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/containers-volumes_0.jpg?itok=gv0_MXiZ)
[In the previous article,][1] we learned some of the basic terminologies of the container world. That background information will come in handy when we run commands and use some of those terms in follow-up articles, including this one. This article will cover the installation of Docker on desktop Linux, macOS, and Windows, and it is intended for beginners who want to get started with Docker containers. The only prerequisite is that you are comfortable with the command-line interface.
### Why do I need Docker CE on my local machine?
As a new user, you may wonder why you need containers on your local systems. Aren't they meant to run in the cloud and on servers as microservices? While containers have been part of the Linux world for a very long time, it was Docker that made them really consumable with its tools and technologies.
The greatest thing about Docker containers is that you can use your local machine for development and testing. The container images that you create on your local system can then run “anywhere.” There is no conflict between developers and operators about apps running fine on development systems but not in production.
The point is that in order to create containerized applications, you must be able to run and create containers on your local systems.
You can use any of the three platforms -- desktop Linux, Windows, or macOS -- as the development platform for containers. Once Docker is successfully running on these systems, you will be using the same commands across platforms, so it really doesn't matter which OS you are running underneath.
That's the beauty of Docker.
### Let's get started
There are two editions of Docker: Docker Enterprise Edition (EE) and Docker Community Edition (CE). We will be using Docker Community Edition, which is a free version of Docker intended for developers and enthusiasts who want to get started with Docker.
There are two channels of Docker CE: stable and edge. As the name implies, the stable version gives you well-tested quarterly updates, whereas the edge version offers new updates every month. After further testing, these edge features are added to the stable release. I recommend the stable version for new users.
Docker CE is supported on macOS; Windows 10; Ubuntu 14.04, 16.04, 17.04, and 17.10; Debian 7.7, 8, 9, and 10; Fedora 25, 26, and 27; and CentOS. While you can download Docker CE binaries and install them on your desktop Linux systems, I recommend adding the repositories so you continue to receive patches and updates.
### Install Docker CE on Desktop Linux
You don't need a full-blown desktop Linux to run Docker; you can install it on a bare minimal Linux server as well, which you can run in a VM. In this tutorial, I am running it on Fedora 27 and Ubuntu 17.04 on my main systems.
### Ubuntu Installation
First things first. Run a system update so your Ubuntu packages are fully updated:
```
$ sudo apt-get update
```
Now run system upgrade:
```
$ sudo apt-get dist-upgrade
```
Then add Docker's official PGP key and repository:
```
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
```
Update the repository info again:
```
$ sudo apt-get update
```
Now install Docker CE:
```
$ sudo apt-get install docker-ce
```
Once it's installed, Docker CE runs automatically on Ubuntu-based systems. Let's check if it's running:
```
$ sudo systemctl status docker
```
You should get the following output:
```
docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2017-12-28 15:06:35 EST; 19min ago
Docs: https://docs.docker.com
Main PID: 30539 (dockerd)
```
Since Docker is installed on your system, you can now use the Docker CLI (command-line interface) to run Docker commands. Living up to the tradition, let's run the Hello World command:
```
$ sudo docker run hello-world
```
![YMChR_7xglpYBT91rtXnqQc6R1Hx9qMX_iO99vL8][2]
Congrats! You have Docker running on your Ubuntu system.
### Installing Docker CE on Fedora
Things are a bit different on Fedora 27. On Fedora, you first need to install the dnf-plugins-core package, which will allow you to manage your DNF packages from the CLI:
```
$ sudo dnf -y install dnf-plugins-core
```
Now install the Docker repo on your system:
```
$ sudo dnf config-manager \
    --add-repo \
    https://download.docker.com/linux/fedora/docker-ce.repo
```
It's time to install Docker CE:
```
$ sudo dnf install docker-ce
```
Unlike Ubuntu, Docker doesn't start automatically on Fedora. So let's start it:
```
$ sudo systemctl start docker
```
You will have to start Docker manually after each reboot, so let's configure it to start automatically after reboots:
```
$ sudo systemctl enable docker
```
Well, it's time to run the Hello World command:
```
$ sudo docker run hello-world
```
Congrats, Docker is running on your Fedora 27 system.
### Cutting your roots
You may have noticed that you have to use sudo to run Docker commands. That's because the Docker daemon binds to a UNIX socket instead of a TCP port, and that socket is owned by the root user. So, you need sudo privileges to run the `docker` command. You can add a system user to the docker group so it won't require sudo:
```
$ sudo groupadd docker
```
In most cases, the docker user group is automatically created when you install Docker CE, so all you need to do is add your user to that group:
```
$ sudo usermod -aG docker $USER
```
To test if the group has been added successfully, run the groups command against the name of the user:
```
$ groups swapnil
```
(Here, swapnil is the user.)
This is the output on my system:
```
swapnil : swapnil adm cdrom sudo dip plugdev lpadmin sambashare docker
```
You can see that the user also belongs to the docker group. Log out of your system, so that group changes take effect. Once you log back in, try the Hello World command without sudo:
```
$ docker run hello-world
```
You can check system-wide info about the installed version of Docker and more by running this command:
```
$ docker info
```
### Install Docker CE on macOS and Windows
You can easily install Docker CE (and EE) on macOS and Windows. Download the official Docker for Mac and install it the way you install applications on macOS, by simply dragging it into the Applications directory. Once the file is copied, open Docker from Spotlight to start the installation process. Once installed, Docker will start automatically and you can see it in the top bar of macOS.
![IEX23j65zYlF8mZ1c-T_vFw_i1B1T1hibw_AuhEA][3]
macOS is UNIX, so you can simply open the terminal app and start using Docker commands natively. Test the Hello World app:
```
$ docker run hello-world
```
Congrats, you have Docker running on your macOS.
### Docker on Windows 10
You need the latest version of Windows 10 Pro or Server in order to run/install Docker on it. If you are not fully updated, Windows won't install Docker. I got an error on my Windows 10 system and had to run system updates. My version was still behind, and I hit [this][4] bug. So, if you fail to install Docker on Windows, just know you are not alone. Keep an eye on that bug to find a solution.
Once you install Docker on Windows, you can either use bash shell via WSL or use PowerShell to run docker commands. Let's test the “Hello World” command in PowerShell:
```
PS C:\Users\swapnil> docker run hello-world
```
Congrats, you have Docker running on Windows.
In the next article, we will talk about pulling images from DockerHub and running containers on our systems. We will also talk about pushing our own containers to Docker Hub.
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/learn/intro-to-linux/how-install-docker-ce-your-desktop
作者:[SWAPNIL BHARTIYA][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/arnieswap
[1]:https://www.linux.com/blog/intro-to-linux/2017/12/container-basics-terms-you-need-know
[2]:https://lh5.googleusercontent.com/YMChR_7xglpYBT91rtXnqQc6R1Hx9qMX_iO99vL8Z8C0-BlynDcL5B5pG-zzH0fKU0Qvnzd89v0KDEbZiO0gTfGNGfDtO-FkTt0bmzIQ-TKbNmv18S9RXdkSeXqgKDFRewnaHPj2
[3]:https://lh3.googleusercontent.com/IEX23j65zYlF8mZ1c-T_vFw_i1B1T1hibw_AuhEAfwv9oFpMfcAqkgEk7K5o58iDAAfGozSpIvY_qEsTOHRlSbesMKwTnG9rRkWba1KPSmnuH1LyoccDGNO3Clbz8du0gSByZxNj
[4]:https://github.com/docker/for-win/issues/1263


@ -0,0 +1,338 @@
Passwordless Auth: Client
======
Time to continue with the [passwordless auth][1] posts. Previously, we wrote an HTTP service in Go that provided a passwordless authentication API. Now, we are going to code a JavaScript client for it.
We'll go with a single page application (SPA) using the technique I showed [here][2]. Read it first if you haven't yet.
For the root URL (`/`) we'll show two different pages depending on the auth state: a page with an access form or a page greeting the authenticated user. Another page is for the auth callback redirect.
### Serving
I'll serve the client with the same Go server, so let's add some routes to the previous `main.go`:
```
router.Handle("GET", "/js/", http.FileServer(http.Dir("static")))
router.HandleFunc("GET", "/...", serveFile("static/index.html"))
```
This serves files under `static/js`, and `static/index.html` is served for everything else.
You can use your own separate server instead, but you'll have to enable [CORS][3] on the server.
### HTML
Let's see that `static/index.html`.
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Passwordless Demo</title>
<link rel="shortcut icon" href="data:,">
<script src="/js/main.js" type="module"></script>
</head>
<body></body>
</html>
```
A single page application leaves all the rendering to JavaScript, so we have an empty body and a `main.js` file.
I'll use the Router from the [last post][2].
### Rendering
Now, create a `static/js/main.js` file with the following content:
```
import Router from 'https://unpkg.com/@nicolasparada/router'
import { isAuthenticated } from './auth.js'
const router = new Router()
router.handle('/', guard(view('home')))
router.handle('/callback', view('callback'))
router.handle(/^\//, view('not-found'))
router.install(async resultPromise => {
document.body.innerHTML = ''
document.body.appendChild(await resultPromise)
})
function view(name) {
return (...args) => import(`/js/pages/${name}-page.js`)
.then(m => m.default(...args))
}
function guard(fn1, fn2 = view('welcome')) {
return (...args) => isAuthenticated()
? fn1(...args)
: fn2(...args)
}
```
Differing from the last post, we implement an `isAuthenticated()` function and a `guard()` function that uses it to render one page or another. So when a user visits `/`, it will show either the home page or the welcome page, depending on whether the user is authenticated.
### Auth
Now, let's write that `isAuthenticated()` function. Create a `static/js/auth.js` file with the following content:
```
export function getAuthUser() {
const authUserItem = localStorage.getItem('auth_user')
const expiresAtItem = localStorage.getItem('expires_at')
if (authUserItem !== null && expiresAtItem !== null) {
const expiresAt = new Date(expiresAtItem)
if (!isNaN(expiresAt.valueOf()) && expiresAt > new Date()) {
try {
return JSON.parse(authUserItem)
} catch (_) { }
}
}
return null
}
export function isAuthenticated() {
return localStorage.getItem('jwt') !== null && getAuthUser() !== null
}
```
When someone logs in, we save the JSON web token, its expiration date, and the current authenticated user in `localStorage`. This module uses that.
* `getAuthUser()` gets the authenticated user from `localStorage`, making sure the JSON Web Token hasn't expired yet.
* `isAuthenticated()` makes use of the previous function to check whether it doesn't return `null`.
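The expiry check can be exercised in isolation. Here is a minimal sketch that swaps the browser's `localStorage` for an in-memory stand-in (the stand-in and the sample data are for illustration only):

```
// In-memory stand-in for the browser's localStorage, for illustration only.
const storage = new Map()
const localStorage = {
  getItem: k => (storage.has(k) ? storage.get(k) : null),
  setItem: (k, v) => storage.set(k, String(v)),
}

// Same expiry logic as the getAuthUser() above.
function getAuthUser() {
  const authUserItem = localStorage.getItem('auth_user')
  const expiresAtItem = localStorage.getItem('expires_at')
  if (authUserItem !== null && expiresAtItem !== null) {
    const expiresAt = new Date(expiresAtItem)
    if (!isNaN(expiresAt.valueOf()) && expiresAt > new Date()) {
      try {
        return JSON.parse(authUserItem)
      } catch (_) { }
    }
  }
  return null
}

localStorage.setItem('auth_user', JSON.stringify({ username: 'john' }))

// An already-expired token yields no user:
localStorage.setItem('expires_at', new Date(Date.now() - 1000).toISOString())
console.log(getAuthUser()) // null

// A future expiry yields the stored user:
localStorage.setItem('expires_at', new Date(Date.now() + 60000).toISOString())
console.log(getAuthUser()) // { username: 'john' }
```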
### Fetch
Before continuing with the pages, I'll code some HTTP utilities to work with the server API.
Let's create a `static/js/http.js` file with the following content:
```
import { isAuthenticated } from './auth.js'
function get(url, headers) {
return fetch(url, {
headers: Object.assign(getAuthHeader(), headers),
}).then(handleResponse)
}
function post(url, body, headers) {
return fetch(url, {
method: 'POST',
headers: Object.assign(getAuthHeader(), { 'content-type': 'application/json' }, headers),
body: JSON.stringify(body),
}).then(handleResponse)
}
function getAuthHeader() {
return isAuthenticated()
? { authorization: `Bearer ${localStorage.getItem('jwt')}` }
: {}
}
export async function handleResponse(res) {
const body = await res.clone().json().catch(() => res.text())
const response = {
url: res.url,
statusCode: res.status,
statusText: res.statusText,
headers: res.headers,
body,
}
if (!res.ok) throw Object.assign(
new Error(body.message || body || res.statusText),
response
)
return response
}
export default {
get,
post,
}
```
This module exports `get()` and `post()` functions. They are wrappers around the `fetch` API. Both functions inject an `Authorization: Bearer <token_here>` header to the request when the user is authenticated; that way the server can authenticate us.
### Welcome Page
Let's move to the welcome page. Create a `static/js/pages/welcome-page.js` file with the following content:
```
const template = document.createElement('template')
template.innerHTML = `
<h1>Passwordless Demo</h1>
<h2>Access</h2>
<form id="access-form">
<input type="email" placeholder="Email" autofocus required>
<button type="submit">Send Magic Link</button>
</form>
`
export default function welcomePage() {
const page = template.content.cloneNode(true)
page.getElementById('access-form')
.addEventListener('submit', onAccessFormSubmit)
return page
}
```
This page uses an `HTMLTemplateElement` for the view. It is just a simple form to enter the user's email.
To keep things simple, I'll skip error handling and just log errors to the console.
Now, let's code that `onAccessFormSubmit()` function.
```
import http from '../http.js'
function onAccessFormSubmit(ev) {
ev.preventDefault()
const form = ev.currentTarget
const input = form.querySelector('input')
const email = input.value
sendMagicLink(email).catch(err => {
console.error(err)
if (err.statusCode === 404 && wantToCreateAccount()) {
runCreateUserProgram(email)
}
})
}
function sendMagicLink(email) {
return http.post('/api/passwordless/start', {
email,
redirectUri: location.origin + '/callback',
}).then(() => {
alert('Magic link sent. Go check your email inbox.')
})
}
function wantToCreateAccount() {
return prompt('No user found. Do you want to create an account?')
}
```
It does a `POST` request to `/api/passwordless/start` with the email and redirectUri in the body. In case it returns a `404 Not Found` status code, we'll create a user.
```
function runCreateUserProgram(email) {
const username = prompt("Enter username")
if (username === null) return
http.post('/api/users', { email, username })
.then(res => res.body)
.then(user => sendMagicLink(user.email))
.catch(console.error)
}
```
The user creation program first asks for a username and does a `POST` request to `/api/users` with the email and username in the body. On success, it sends a magic link to the newly created user.
### Callback Page
That was all the functionality for the access form; let's move to the callback page. Create a `static/js/pages/callback-page.js` file with the following content:
```
import http from '../http.js'
const template = document.createElement('template')
template.innerHTML = `
<h1>Authenticating you 👀</h1>
`
export default function callbackPage() {
const page = template.content.cloneNode(true)
const hash = location.hash.substr(1)
const fragment = new URLSearchParams(hash)
for (const [k, v] of fragment.entries()) {
fragment.set(decodeURIComponent(k), decodeURIComponent(v))
}
const jwt = fragment.get('jwt')
const expiresAt = fragment.get('expires_at')
http.get('/api/auth_user', { authorization: `Bearer ${jwt}` })
.then(res => res.body)
.then(authUser => {
localStorage.setItem('jwt', jwt)
localStorage.setItem('auth_user', JSON.stringify(authUser))
localStorage.setItem('expires_at', expiresAt)
location.replace('/')
})
.catch(console.error)
return page
}
```
To recap: when clicking the magic link, we go to `/api/passwordless/verify_redirect`, which redirects us to the redirect URI we passed (`/callback`) with the JWT and expiration date in the URL hash.
The callback page decodes the hash from the URL to extract those parameters, does a `GET` request to `/api/auth_user` with the JWT, and saves all the data to `localStorage`. Finally, it just redirects to home.
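The hash parsing step can be tried standalone; the token and date below are made-up values for illustration:

```
// Parse a hash fragment like the one the magic-link redirect appends.
// Note that URLSearchParams percent-decodes keys and values by itself.
const hash = 'jwt=abc123&expires_at=2030-01-01T00%3A00%3A00Z'
const fragment = new URLSearchParams(hash)
console.log(fragment.get('jwt'))        // abc123
console.log(fragment.get('expires_at')) // 2030-01-01T00:00:00Z
```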
### Home Page
Create a `static/js/pages/home-page.js` file with the following content:
```
import { getAuthUser } from '../auth.js'
export default function homePage() {
const authUser = getAuthUser()
const template = document.createElement('template')
template.innerHTML = `
<h1>Passwordless Demo</h1>
<p>Welcome back, ${authUser.username} 👋</p>
<button id="logout-button">Logout</button>
`
const page = template.content
page.getElementById('logout-button')
.addEventListener('click', logout)
return page
}
function logout() {
localStorage.clear()
location.reload()
}
```
This page greets the authenticated user and also has a logout button. The `logout()` function just clears `localStorage` and reloads the page.
There it is. I bet you already saw the [demo][4] before. Also, the source code is in the same [repository][5].
👋👋👋
--------------------------------------------------------------------------------
via: https://nicolasparada.netlify.com/posts/passwordless-auth-client/
作者:[Nicolás Parada][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://nicolasparada.netlify.com/
[1]:https://nicolasparada.netlify.com/posts/passwordless-auth-server/
[2]:https://nicolasparada.netlify.com/posts/javascript-client-router/
[3]:https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS
[4]:https://go-passwordless-demo.herokuapp.com/
[5]:https://github.com/nicolasparada/go-passwordless-demo


@ -0,0 +1,156 @@
translating by lujun9972
How to Read Outlook Emails by Python
======
![](https://process.filestackapi.com/cache=expiry:max/resize=width:700/compress/OVArLzhmRzOEQZsvGavF)
When you start email marketing, you need an opt-in email address list. Suppose you have such a list and you use an email client: if you can export the addresses from your email client, you will have a good list.
Here I will explain my code, which writes all the email addresses from your Outlook profile into a text file.
First you should import `win32com.client`; for that, you need to install pywin32:
```
pip install pywin32
```
We should connect to Outlook via MAPI:
```
outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
```
Then we should get all the accounts in your Outlook profile:
```
accounts= win32com.client.Dispatch("Outlook.Application").Session.Accounts;
```
Then you need a function that reads the emails from a given folder; here it is named `emailleri_al`:
```
def emailleri_al(folder):
    messages = folder.Items
    a = len(messages)
    if a > 0:
        for message2 in messages:
            try:
                sender = message2.SenderEmailAddress
                if sender != "":
                    print(sender, file=f)
            except:
                print("Ben hatayım")
                print(account.DeliveryStore.DisplayName)
                pass
            try:
                message2.Save()
                message2.Close(0)
            except:
                pass
```
Then you should go through every account, get its inbox folder, and collect the emails:
```
for account in accounts:
    global inbox
    inbox = outlook.Folders(account.DeliveryStore.DisplayName)
    print("****Account Name**********************************", file=f)
    print(account.DisplayName, file=f)
    print(account.DisplayName)
    print("***************************************************", file=f)
    folders = inbox.Folders
    for folder in folders:
        print("****Folder Name**********************************", file=f)
        print(folder, file=f)
        print("*************************************************", file=f)
        emailleri_al(folder)
        a = len(folder.folders)
        if a > 0:
            global z
            z = outlook.Folders(account.DeliveryStore.DisplayName).Folders(folder.name)
            x = z.Folders
            for y in x:
                emailleri_al(y)
                print("****Folder Name**********************************", file=f)
                print("..." + y.name, file=f)
                print("*************************************************", file=f)
```
The full code is as follows:
```
import win32com.client
import win32com
import os
import sys

f = open("testfile.txt", "w+")

outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
accounts = win32com.client.Dispatch("Outlook.Application").Session.Accounts

def emailleri_al(folder):
    messages = folder.Items
    a = len(messages)
    if a > 0:
        for message2 in messages:
            try:
                sender = message2.SenderEmailAddress
                if sender != "":
                    print(sender, file=f)
            except:
                print("Error")
                print(account.DeliveryStore.DisplayName)
                pass
            try:
                message2.Save()
                message2.Close(0)
            except:
                pass

for account in accounts:
    global inbox
    inbox = outlook.Folders(account.DeliveryStore.DisplayName)
    print("****Account Name**********************************", file=f)
    print(account.DisplayName, file=f)
    print(account.DisplayName)
    print("***************************************************", file=f)
    folders = inbox.Folders
    for folder in folders:
        print("****Folder Name**********************************", file=f)
        print(folder, file=f)
        print("*************************************************", file=f)
        emailleri_al(folder)
        a = len(folder.folders)
        if a > 0:
            global z
            z = outlook.Folders(account.DeliveryStore.DisplayName).Folders(folder.name)
            x = z.Folders
            for y in x:
                emailleri_al(y)
                print("****Folder Name**********************************", file=f)
                print("..." + y.name, file=f)
                print("*************************************************", file=f)

print("Finished Successfully")
```
--------------------------------------------------------------------------------
via: https://www.codementor.io/aliacetrefli/how-to-read-outlook-emails-by-python-jkp2ksk95
作者:[A.A. Cetrefli][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[lujun9972](https://github.com/lujun9972)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.codementor.io/aliacetrefli

View File

@ -0,0 +1,70 @@
Python Debugging Tips
======
When it comes to debugging, there are a lot of choices you can make. It is hard to give generic advice that always works (other than “Have you tried turning it off and back on?”).
Here are a few of my favorite Python Debugging tips.
### Make a branch
Trust me on this. Even if you never intend to commit the changes back upstream, you will be glad your experiments are contained within their own branch.
If nothing else, it makes cleanup a lot easier!
### Install pdb++
Seriously. It makes your life easier if you work on the command line.
All that pdb++ does is replace the standard pdb module with 100% PURE AWESOMENESS. Here's what you get when you `pip install pdbpp`:
* A Colorized prompt!
* tab completion! (perfect for poking around!)
* It slices! It dices!
Ok, maybe the last one is a little bit much… But in all seriousness, installing pdb++ is well worth your time.
### Poke around
Sometimes the best approach is to just mess around and see what happens. Put a break point in an “obvious” spot and make sure it gets hit. Pepper the code with `print()` and/or `logging.debug()` statements and see where the code execution goes.
Examine the arguments being passed into your functions. Check the versions of the libraries (if things are getting really desperate).
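A minimal sketch of the "pepper the code with `logging.debug()`" approach — `parse_price` is a hypothetical function under investigation, invented here for illustration:

```python
import logging

logging.basicConfig(level=logging.DEBUG,
                    format="%(levelname)s %(funcName)s: %(message)s")
log = logging.getLogger(__name__)

def parse_price(raw):
    # Log the input and each intermediate value so you can see
    # exactly where execution goes and where the data goes wrong.
    log.debug("raw input: %r", raw)
    cleaned = raw.strip().lstrip("$")
    log.debug("after cleanup: %r", cleaned)
    value = float(cleaned)
    log.debug("parsed value: %r", value)
    return value

print(parse_price(" $19.99 "))
```

Unlike scattered `print()` calls, these statements can stay in the code: drop the level back to `logging.WARNING` and they go silent.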
### Only change one thing at a time
Once you are poking around a bit you are going to get ideas on things you could do. But before you start slinging code, take a step back and think about what you could change, and then only change 1 thing.
Once you've made the change, test and see if you are closer to resolving the issue. If not, change the thing back, and try something else.
Changing only one thing allows you to know what does and doesn't work. Plus, once you do get it working, your new commit will be much smaller (because there will be fewer changes).
This is pretty much what one does in the Scientific Process: only change one variable at a time. By allowing yourself to see and measure the results of one change you will save your sanity and arrive at a working solution faster.
### Assume nothing, ask questions
Occasionally a developer (not you of course!) will be in a hurry and whip out some questionable code. When you go through to debug this code you need to stop and make sure you understand what it is trying to accomplish.
Make no assumptions. Just because the code is in the `model.py` file doesn't mean it won't try to render some HTML.
Likewise, double check all of your external connections before you do anything destructive! Going to delete some configuration data? MAKE SURE YOU ARE NOT CONNECTED TO YOUR PRODUCTION SYSTEM.
### Be clever, but not too clever
Sometimes we write code that is so amazingly awesome it is not obvious how it does what it does.
While we might feel smart when we publish that code, more often than not we will wind up feeling dumb later on when the code breaks and we have to remember how it works to figure out why it isn't working.
Keep an eye out for any sections of code that look either overly complicated and long, or extremely short. These could be places where complexity is hiding and causing your bugs.
--------------------------------------------------------------------------------
via: https://pythondebugging.com/articles/python-debugging-tips
作者:[PythonDebugging.com][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://pythondebugging.com

View File

@ -0,0 +1,84 @@
3 open source alternatives to Adobe Lightroom
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/camera-photography-film.jpg?itok=oe2ixyu6)
You wouldn't be wrong to wonder whether the smartphone, that modern jack-of-all-trades, is taking over photography. While that might be valid in the point-and-shoot camera market, there are a sizeable number of photography professionals and hobbyists who recognize that a camera that fits in your pocket can never replace a high-end DSLR camera and the depth, clarity, and realism of its photos.
All of that power comes with a small price in terms of convenience; like negatives from traditional film cameras, the [raw image][1] files produced by DSLRs must be processed before they can be edited or printed. For this, a digital image processing application is indispensable, and the go-to application has been Adobe Lightroom. But for many reasons—including its expensive, subscription-based pricing model and its proprietary license—there's a lot of interest in open source and other alternatives.
Lightroom has two main functions: processing raw image files and digital asset management (DAM)—organizing images with tags, ratings, and other metadata to make it easier to keep track of them.
In this article, we'll look at three open source image processing applications: Darktable, LightZone, and RawTherapee. All of them have DAM capabilities, but none has Lightroom's machine learning-based image categorization and tagging features. If you're looking for more information about open source DAM software, check out Terry Hancock's article "[Digital asset management for an open movie project][2]," where he shares his research on software to organize multimedia files for his [_Lunatics!_][3] open movie project.
### Darktable
![Darktable][4]
Like the other applications on our list, [darktable][5] processes raw images into usable file formats—it exports into JPEG, PNG, TIFF, PPM, PFM, and EXR, and it also supports Google and Facebook web albums, Flickr uploads, email attachments, and web gallery creation.
Its 61 image operation modules allow you to adjust contrast, tone, exposure, color, noise, etc.; add watermarks; crop and rotate; and much more. As with the other applications described in this article, those edits are "non-destructive"—that is, your original raw image is preserved no matter how many tweaks and modifications you make.
Darktable imports raw images from more than 400 cameras plus JPEG, CR2, DNG, OpenEXR, and PFM; images are managed in a database so you can filter and search using metadata including tags, ratings, and color. It's also available in 21 languages and is supported on Linux, MacOS, BSD, Solaris 11/GNOME, and Windows. (The [Windows port][6] is new, and darktable warns it may have "rough edges or missing functionality" compared to other versions.)
Darktable is licensed under [GPLv3][7]; you can learn more by perusing its [features][8], viewing the [user manual][9], or accessing its [source code][10] on GitHub.
### LightZone
![LightZone's tool stack][11]
As a non-destructive raw image processing tool, [LightZone][12] is similar to the other two applications on this list: it's cross-platform, operating on Windows, MacOS, and Linux, and it supports JPG and TIFF images in addition to raw. But it's also unique in several ways.
For one thing, it started out in 2005 as a proprietary image processing tool and later became an open source project under a BSD license. Also, before you can download the application, you must register for a free account; this is so the LightZone development community can track downloads and build the community. (Approval is quick and automated, so it's not a large barrier.)
Another difference is that image modifications are done using stackable tools, rather than filters (like most image-editing applications); tool stacks can be rearranged or removed, as well as saved and copied to a batch of images. You can also edit certain parts of an image using a vector-based tool or by selecting pixels based on color or brightness.
You can get more information on LightZone by searching its [forums][13] or accessing its [source code][14] on GitHub.
### RawTherapee
![RawTherapee][15]
[RawTherapee][16] is another popular open source ([GPL][17]) raw image processor worth your attention. Like darktable and LightZone, it is cross-platform (Windows, MacOS, and Linux) and implements edits in a non-destructive fashion, so you maintain access to your original raw image file no matter what filters or changes you make.
RawTherapee uses a panel-based interface, including a history panel to keep track of your changes and revert to a previous point; a snapshot panel that allows you to work with multiple versions of a photo; and scrollable tool panels to easily select a tool without worrying about accidentally using the wrong one. Its tools offer a wide variety of exposure, color, detail, transformation, and demosaicing features.
The application imports raw files from most cameras and is localized to more than 25 languages, making it widely usable. Features like batch processing and [SSE][18] optimizations improve speed and CPU performance.
RawTherapee offers many other [features][19]; check out its [documentation][20] and [source code][21] for details.
Do you use another open source raw image processing tool in your photography? Do you have any related tips or suggestions for other photographers? If so, please share your recommendations in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/alternatives/adobe-lightroom
作者:[Opensource.com][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com
[1]:https://en.wikipedia.org/wiki/Raw_image_format
[2]:https://opensource.com/article/18/3/movie-open-source-software
[3]:http://lunatics.tv/
[4]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/raw-image-processors_darkroom1.jpg?itok=0fjk37tC (Darktable)
[5]:http://www.darktable.org/
[6]:https://www.darktable.org/about/faq/#faq-windows
[7]:https://github.com/darktable-org/darktable/blob/master/LICENSE
[8]:https://www.darktable.org/about/features/
[9]:https://www.darktable.org/resources/
[10]:https://github.com/darktable-org/darktable
[11]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/raw-image-processors_lightzone1tookstack.jpg?itok=1e3s85CZ (LightZone's tool stack)
[12]:http://www.lightzoneproject.org/
[13]:http://www.lightzoneproject.org/Forum
[14]:https://github.com/ktgw0316/LightZone
[15]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/raw-image-processors_rawtherapee.jpg?itok=meiuLxPw (RawTherapee)
[16]:http://rawtherapee.com/
[17]:https://github.com/Beep6581/RawTherapee/blob/dev/LICENSE.txt
[18]:https://en.wikipedia.org/wiki/Streaming_SIMD_Extensions
[19]:http://rawpedia.rawtherapee.com/Features
[20]:http://rawpedia.rawtherapee.com/Main_Page
[21]:https://github.com/Beep6581/RawTherapee

View File

@ -0,0 +1,185 @@
How to partition a disk in Linux
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus-storage.png?itok=95-zvHYl)
Creating and deleting partitions in Linux is a regular practice because storage devices (such as hard drives and USB drives) must be structured in some way before they can be used. In most cases, large storage devices are divided into separate sections called partitions. Partitioning also allows you to divide your hard drive into isolated sections, where each section behaves as its own hard drive. Partitioning is particularly useful if you run multiple operating systems.
There are lots of powerful tools for creating, removing, and otherwise manipulating disk partitions in Linux. In this article, I'll explain how to use the `parted` command, which is particularly useful with large disk devices and many disk partitions. Differences between `parted` and the more common `fdisk` and `cfdisk` commands include:
  * **GPT format:** The `parted` command can create a GUID Partition Table ([GPT][1]), while `fdisk` and `cfdisk` are limited to DOS partition tables.
* **Larger disks:** A DOS partition table can format up to 2TB of disk space, although up to 16TB is possible in some cases. However, a GPT partition table can address up to 8ZiB of space.
* **More partitions:** Using primary and extended partitions, DOS partition tables allow only 16 partitions. With GPT, you get up to 128 partitions by default and can choose to have many more.
* **Reliability:** Only one copy of the partition table is stored in a DOS partition. GPT keeps two copies of the partition table (at the beginning and the end of the disk). The GPT also uses a [CRC][2] checksum to check the partition table integrity, which is not done with DOS partitions.
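The 2TB and 8ZiB figures above follow directly from the address widths: a DOS/MBR partition entry stores 32-bit sector counts, while GPT uses 64-bit LBAs. A quick back-of-the-envelope check, assuming the common 512-byte logical sector size:

```python
SECTOR = 512                    # bytes: the common logical sector size

dos_max = 2**32 * SECTOR        # 32-bit sector addresses in a DOS/MBR table
gpt_max = 2**64 * SECTOR        # 64-bit LBAs in a GPT

print(dos_max // 2**40, "TiB")  # 2 TiB
print(gpt_max // 2**70, "ZiB")  # 8 ZiB
```

Drives with 4K-native sectors push both limits up by a factor of eight, which is why "up to 16TB is possible in some cases" for DOS tables.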
With today's larger disks and the need for more flexibility in working with them, using `parted` to work with disk partitions is recommended. Most of the time, disk partition tables are created as part of the operating system installation process. Direct use of the `parted` command is most useful when adding a storage device to an existing system.
### Give 'parted' a try
The following explains the process of partitioning a storage device with the `parted` command. To try these steps, I strongly recommend using a brand new storage device or one where you don't mind wiping out the contents.
**1\. List the partitions:** Use `parted -l` to identify the storage device you want to partition. Typically, the first hard disk (`/dev/sda` or `/dev/vda`) will contain the operating system, so look for another disk to find the one you want (e.g., `/dev/sdb`, `/dev/sdc`, `/dev/vdb`, `/dev/vdc`, etc.).
```
$ sudo parted -l
[sudo] password for daniel:
Model: ATA RevuAhn_850X1TU5 (scsi)
Disk /dev/vdc: 512GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number  Start   End    Size   Type     File system  Flags
 1      1049kB  525MB  524MB  primary  ext4         boot
 2      525MB   512GB  512GB  primary               lvm
```
**2\. Open the storage device:** Use `parted` to begin working with the selected storage device. In this example, the device is the third disk on a virtual system (`/dev/vdc`). It is important to indicate the specific device you want to use. If you just type `parted` with no device name, it will randomly select a storage device to modify.
```
$ sudo parted /dev/vdc
GNU Parted 3.2
Using /dev/vdc
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted)
```
**3\. Set the partition table:** Set the partition table type to GPT, then type "Yes" to accept it.
```
(parted) mklabel gpt
Warning: the existing disk label on /dev/vdc will be destroyed
and all data on this disk will be lost. Do you want to continue?
Yes/No? Yes
```
The `mklabel` and `mktable` commands serve the same purpose (making a partition table on a storage device). The supported partition tables are: aix, amiga, bsd, dvh, gpt, mac, ms-dos, pc98, sun, and loop. Remember that `mklabel` does not make a partition; rather, it makes a partition table.
**4\. Review the partition table:** Show information about the storage device.
```
(parted) print
Model: Virtio Block Device (virtblk)
Disk /dev/vdc: 1396MB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
```
**5\. Get help:** To find out how to make a new partition, type: `(parted) help mkpart`.
```
(parted) help mkpart
  mkpart PART-TYPE [FS-TYPE] START END     make a partition
        PART-TYPE is one of: primary, logical, extended
        FS-TYPE is one of: btrfs, nilfs2, ext4, ext3, ext2, fat32, fat16, hfsx, hfs+, hfs, jfs, swsusp,
        linux-swap(v1), linux-swap(v0), ntfs, reiserfs, hp-ufs, sun-ufs, xfs, apfs2, apfs1, asfs, amufs5,
        amufs4, amufs3, amufs2, amufs1, amufs0, amufs, affs7, affs6, affs5, affs4, affs3, affs2, affs1,
        affs0, linux-swap, linux-swap(new), linux-swap(old)
        START and END are disk locations, such as 4GB or 10%.  Negative values count from the end of the
        disk.  For example, -1s specifies exactly the last sector.
       
        'mkpart' makes a partition without creating a new file system on the partition.  FS-TYPE may be
        specified to set an appropriate partition ID.
```
**6\. Make a partition:** To make a new partition (in this example, 1,396MB on partition 0), type the following:
```
(parted) mkpart primary 0 1396MB
Warning: The resulting partition is not properly aligned for best performance
Ignore/Cancel? I
(parted) print
Model: Virtio Block Device (virtblk)
Disk /dev/vdc: 1396MB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start   End     Size    File system Name Flags
1      17.4kB  1396MB  1396MB  primary
```
Note that specifying a filesystem type (fstype) here will not create an ext4 filesystem on `/dev/vdc1`; as the help text above explains, FS-TYPE only sets an appropriate partition ID. A DOS partition table's partition types are primary, logical, and extended. In a GPT partition table, the partition type is used as the partition name. Providing a partition name under GPT is a must; in the above example, "primary" is the name, not the partition type.
**7\. Save and quit:** Changes are automatically saved when you quit `parted`. To quit, type the following:
```
(parted) quit
Information: You may need to update /etc/fstab.
$
```
### Words to the wise
Make sure to identify the correct disk before you begin changing its partition table when you add a new storage device. If you mistakenly change the disk partition that contains your computer's operating system, you could make your system unbootable.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/6/how-partition-disk-linux
作者:[Daniel Oh][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/daniel-oh
[1]:https://en.wikipedia.org/wiki/GUID_Partition_Table
[2]:https://en.wikipedia.org/wiki/Cyclic_redundancy_check

View File

@ -0,0 +1,133 @@
Turn Your Raspberry Pi into a Tor Relay Node
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/tor-onion-router.jpg?itok=6WUl0ElH)
If you're anything like me, you probably got yourself a first- or second-generation Raspberry Pi board when they first came out, played with it for a while, but then shelved it and mostly forgot about it. After all, unless you're a robotics enthusiast, you probably don't have that much use for a computer with a pretty slow processor and 256 megabytes of RAM. This is not to say that there aren't cool things you can do with one of these, but between work and other commitments, I just never seem to find the right time for some good old nerding out.
However, if you would like to put it to good use without sacrificing too much of your time or resources, you can turn your old Raspberry Pi into a perfectly functioning Tor relay node.
### What is a Tor Relay node
You have probably heard about the [Tor project][1] before, but just in case you haven't, here's a very quick summary. The name “Tor” stands for “The Onion Router” and it is a technology created to combat online tracking and other privacy violations.
Everything you do on the Internet leaves a set of digital footprints in every piece of equipment that your IP packets traverse: all of the switches, routers, load balancers and destination websites log the IP address from which your session originated and the IP address of the internet resource you are accessing (and often its hostname, [even when using HTTPS][2]). If you're browsing from home, then your IP can be directly mapped to your household. If you're using a VPN service ([as you should be][3]), then your IP can be mapped to your VPN provider, and then they are the ones who can map it to your household. In any case, odds are that someone somewhere is assembling an online profile on you based on the sites you visit and how much time you spend on each of them. Such profiles are then sold, aggregated with matching profiles collected from other services, and then monetized by ad networks. At least, that's the optimist's view of how that data is used -- I'm sure you can think of many examples of how your online usage profiles can be used against you in much more nefarious ways.
The Tor project attempts to provide a solution to this problem by making it impossible (or, at least, unreasonably difficult) to trace the endpoints of your IP session. Tor achieves this by bouncing your connection through a chain of anonymizing relays, consisting of an entry node, relay node, and exit node:
1. The **entry node** only knows your IP address, and the IP address of the relay node, but not the final destination of the request;
2. The **relay node** only knows the IP address of the entry node and the IP address of the exit node, and neither the origin nor the final destination
  3. The **exit node** only knows the IP address of the relay node and the final destination of the request; it is also the only node that can decrypt the traffic before sending it over to its final destination
Relay nodes play a crucial role in this exchange because they create a cryptographic barrier between the source of the request and the destination. Even if exit nodes are controlled by adversaries intent on stealing your data, they will not be able to know the source of the request without controlling the entire Tor relay chain.
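The layered-knowledge property described above can be illustrated with a toy model. The sketch below is not the Tor protocol — base64 nesting stands in for real layered encryption — but it shows how each hop peels exactly one layer and learns only the next hop:

```python
import base64

def wrap(payload: bytes, route: list) -> bytes:
    # Build the "onion" from the inside out: each layer records only
    # the name of the hop that should receive it next.
    onion = payload
    for next_hop in reversed(route):
        onion = base64.b64encode(next_hop.encode() + b"|" + onion)
    return onion

def peel(onion: bytes):
    # A hop peels exactly one layer: it learns the next hop,
    # while the rest of the route stays wrapped.
    decoded = base64.b64decode(onion)
    next_hop, rest = decoded.split(b"|", 1)
    return next_hop.decode(), rest

onion = wrap(b"GET /", ["entry", "relay", "exit", "destination"])
hop, onion = peel(onion)  # the sender learns only its first hop: "entry"
hop, onion = peel(onion)  # the entry node learns only the next hop: "relay"
```

With real encryption in place of base64, a relay that peels its layer cannot read the remaining layers, which is exactly why controlling a single node reveals neither origin nor destination.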
As long as there are plenty of relay nodes, your privacy when using the Tor network remains protected -- which is why I heartily recommend that you set up and run a relay node if you have some home bandwidth to spare.
#### Things to keep in mind regarding Tor relays
A Tor relay node only receives encrypted traffic and sends encrypted traffic -- it never accesses any other sites or resources online, so you do not need to worry that someone will browse any worrisome sites directly from your home IP address. Having said that, if you reside in a jurisdiction where offering anonymity-enhancing services is against the law, then, obviously, do not operate your own Tor relay. You may also want to check if operating a Tor relay is against the terms and conditions of your internet access provider.
### What you will need
* A Raspberry Pi (any model/generation) with some kind of enclosure
* An SD card with [Raspbian Stretch Lite][4]
* An ethernet cable
* A micro-USB cable for power
* A keyboard and an HDMI-capable monitor (to use during the setup)
This guide will assume that you are setting this up on your home connection behind a generic cable or ADSL modem router that performs NAT translation (and it almost certainly does). Most of them have a USB port you can use to power up your Raspberry Pi, and if you're only using the wifi functionality of the router, then it should have a free ethernet port for you to plug into. However, before we get to the point where we can set-and-forget your Raspberry Pi, we'll need to set it up as a Tor relay node, for which you'll need a keyboard and a monitor.
### The bootstrap script
I've adapted a popular Tor relay node bootstrap script for use with Raspbian Stretch -- you can find it in my GitHub repository here: <https://github.com/mricon/tor-relay-bootstrap-rpi>. Once you have booted up your Raspberry Pi and logged in with the default “pi” user, do the following:
```
sudo apt-get install -y git
git clone https://github.com/mricon/tor-relay-bootstrap-rpi
cd tor-relay-bootstrap-rpi
sudo ./bootstrap.sh
```
Here is what the script will do:
1. Install the latest OS updates to make sure your Pi is fully patched
2. Configure your system for automated unattended updates, so you automatically receive security patches when they become available
3. Install Tor software
  4. Tell your NAT router to forward the necessary ports to reach your relay (the ports we'll use are 443 and 8080, since they are least likely to be filtered by your internet provider)
Once the script is done, you'll need to configure the torrc file -- but first, decide how much bandwidth you'll want to donate to Tor traffic. First, type “[Speed Test][5]” into Google and click the “Run Speed Test” button. You can disregard the “Download speed” result, as your Tor relay can only operate as fast as your maximum upload bandwidth.
Therefore, take the “Mbps upload” number, divide by 8 and multiply by 1024 to find out the bandwidth speed in Kilobytes per second. E.g. if you got 21.5 Mbps for your upload speed, then that number is:
```
21.5 Mbps / 8 * 1024 = 2752 KBytes per second
```
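That unit conversion is easy to get wrong, so here is the same arithmetic as a tiny helper. The one-half and three-quarters ratios below follow the rate/burst guideline used in this article, and `relay_limits` is just an illustrative name:

```python
def relay_limits(upload_mbps: float):
    # Megabits/s -> kilobytes/s: divide by 8 bits per byte,
    # multiply by 1024 to go from megabytes to kilobytes.
    kbytes = upload_mbps / 8 * 1024
    rate = int(kbytes * 0.5)    # donate about half your upload
    burst = int(kbytes * 0.75)  # allow bursts to about three-quarters
    return kbytes, rate, burst

print(relay_limits(21.5))  # (2752.0, 1376, 2064)
```

Feed the `rate` and `burst` results into the `RelayBandwidthRate` and `RelayBandwidthBurst` settings shown next.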
You'll want to limit your relay bandwidth to about half that amount, and allow bursting to about three-quarters of it. Once decided, open /etc/tor/torrc using your favourite editor and tweak the bandwidth settings.
```
RelayBandwidthRate 1300 KBytes
RelayBandwidthBurst 2400 KBytes
```
Of course, if you're feeling more generous, then feel free to put in higher numbers, though you don't want to max out your outgoing bandwidth -- it will noticeably impact your day-to-day usage if these numbers are set too high.
While you have that file open, you should set two more things. First, the Nickname -- just for your own recordkeeping, and second the ContactInfo line, which should list a single email address. Since your relay will be running unattended, you should use an email address that you regularly check -- you will receive an alert from the “Tor Weather” service if your relay goes offline for longer than 48 hours.
```
Nickname myrpirelay
ContactInfo you@example.com
```
Save the file and reboot the system to start the Tor relay.
### Testing to make sure Tor traffic is flowing
If you would like to make sure that the relay is functioning, you can run the “arm” tool:
```
sudo -u debian-tor arm
```
It will take a while to start, especially on older-generation boards, but eventually it will show you a bar chart of incoming and outgoing traffic (or error messages that will help you troubleshoot your setup).
Once you are convinced that everything is functioning, you can unplug the keyboard and the monitor and relocate the Raspberry Pi into the basement where it will quietly sit and shuffle encrypted bits around. Congratulations, you've helped improve privacy and combat malicious tracking online!
Learn more about Linux through the free ["Introduction to Linux" ][6] course from The Linux Foundation and edX.
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/intro-to-linux/2018/6/turn-your-raspberry-pi-tor-relay-node
作者:[Konstantin Ryabitsev][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/mricon
[1]:https://www.torproject.org/
[2]:https://en.wikipedia.org/wiki/Server_Name_Indication#Security_implications
[3]:https://www.linux.com/blog/2017/10/tips-secure-your-network-wake-krack
[4]:https://www.raspberrypi.org/downloads/raspbian/
[5]:https://www.google.com/search?q=speed+test
[6]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

View File

@ -0,0 +1,73 @@
7 open source tools to make literature reviews easy
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_EDU_DigitalLiteracy_520x292.png?itok=ktHMrse6)
A good literature review is critical for academic research in any field, whether it is for a research article, a critical review for coursework, or a dissertation. In a recent article, I presented detailed steps for doing [a literature review using open source software][1].
The following is a brief summary of seven free and open source software tools described in that article that will make your next literature review much easier.
### 1\. GNU Linux
Most literature reviews are accomplished by graduate students working in research labs in universities. For absurd reasons, graduate students often have the worst computers on campus. They are often old, slow, and clunky Windows machines that have been discarded and recycled from the undergraduate computer labs. Installing a [flavor of GNU Linux][2] will breathe new life into these outdated PCs. There are more than [100 distributions][3], all of which can be downloaded and installed for free on computers. Most popular Linux distributions come with a "try-before-you-buy" feature. For example, with Ubuntu you can make a [bootable USB stick][4] that allows you to test-run the Ubuntu desktop experience without interfering in any way with your PC configuration. If you like the experience, you can use the stick to install Ubuntu on your machine permanently.
### 2\. Firefox
Linux distributions generally come with a free web browser, and the most popular is [Firefox][5]. Two Firefox plugins that are particularly useful for literature reviews are Unpaywall and Zotero. Keep reading to learn why.
### 3\. Unpaywall
Often one of the hardest parts of a literature review is gaining access to the papers you want to read for your review. The unintended consequence of copyright restrictions and paywalls is it has narrowed access to the peer-reviewed literature to the point that even [Harvard University is challenged][6] to pay for it. Fortunately, there are a lot of open access articles—about a third of the literature is free (and the percentage is growing). [Unpaywall][7] is a Firefox plugin that enables researchers to click a green tab on the side of the browser and skip the paywall on millions of peer-reviewed journal articles. This makes finding accessible copies of articles much faster than searching each database individually. Unpaywall is fast, free, and legal, as it accesses many of the open access sites that I covered in my paper on using [open source in lit reviews][8].
### 4\. Zotero
Formatting references is the most tedious of academic tasks. [Zotero][9] can save you from ever doing it again. It operates as an Android app, desktop program, and a Firefox plugin (which I recommend). It is a free, easy-to-use tool to help you collect, organize, cite, and share research. It replaces the functionality of proprietary packages such as RefWorks, Endnote, and Papers for zero cost. Zotero can auto-add bibliographic information directly from websites. In addition, it can scrape bibliographic data from PDF files. Notes can be easily added on each reference. Finally, and most importantly, it can import and export the bibliography databases in all publishers' various formats. With this feature, you can export bibliographic information to paste into a document editor for a paper or thesis—or even to a wiki for dynamic collaborative literature reviews (see tool #7 for more on the value of wikis in lit reviews).
### 5\. LibreOffice
Your thesis or academic article can be written conventionally with the free office suite [LibreOffice][10], which operates similarly to Microsoft's Office products but respects your freedom. Zotero has a word processor plugin to integrate directly with LibreOffice. LibreOffice is more than adequate for the vast majority of academic paper writing.
### 6\. LaTeX
If LibreOffice is not enough for your layout needs, you can take your paper writing one step further with [LaTeX][11], a high-quality typesetting system specifically designed for producing technical and scientific documentation. LaTeX is particularly useful if your writing has a lot of equations in it. Also, Zotero libraries can be directly exported to BibTeX files for use with LaTeX.
### 7\. MediaWiki
If you want to leverage the open source way to get help with your literature review, you can facilitate a [dynamic collaborative literature review][12]. A wiki is a website that allows anyone to add, delete, or revise content directly using a web browser. [MediaWiki][13] is free software that enables you to set up your own wikis.
Researchers can (in decreasing order of complexity): 1) set up their own research group wiki with MediaWiki, 2) utilize wikis already established at their universities (e.g., [Aalto University][14]), or 3) use wikis dedicated to areas that they research. For example, several university research groups that focus on sustainability (including [mine][15]) use [Appropedia][16], which is set up for collaborative solutions on sustainability, appropriate technology, poverty reduction, and permaculture.
Using a wiki makes it easy for anyone in the group to keep track of the status of and update literature reviews (both current and older or from other researchers). It also enables multiple members of the group to easily collaborate on a literature review asynchronously. Most importantly, it enables people outside the research group to help make a literature review more complete, accurate, and up-to-date.
### Wrapping up
Free and open source software can cover the entire lit review toolchain, meaning there's no need for anyone to use proprietary solutions. Do you use other libre tools for making literature reviews or other academic work easier? Please let us know your favorites in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/6/open-source-literature-review-tools
作者:[Joshua Pearce][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/jmpearce
[1]:http://pareonline.net/getvn.asp?v=23&n=8
[2]:https://opensource.com/article/18/1/new-linux-computers-classroom
[3]:https://distrowatch.com/
[4]:https://tutorials.ubuntu.com/tutorial/tutorial-create-a-usb-stick-on-windows#0
[5]:https://www.mozilla.org/en-US/firefox/new/
[6]:https://www.theguardian.com/science/2012/apr/24/harvard-university-journal-publishers-prices
[7]:https://unpaywall.org/
[8]:http://www.academia.edu/36709736/How_to_Perform_a_Literature_Review_with_Free_and_Open_Source_Software
[9]:https://www.zotero.org/
[10]:https://www.libreoffice.org/
[11]:https://www.latex-project.org/
[12]:https://www.academia.edu/1861756/Open_Source_Research_in_Sustainability
[13]:https://www.mediawiki.org/wiki/MediaWiki
[14]:http://wiki.aalto.fi
[15]:http://www.appropedia.org/Category:MOST
[16]:http://www.appropedia.org/Welcome_to_Appropedia


@ -0,0 +1,108 @@
What version of Linux am I running?
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC)
The question "what version of Linux" can mean two different things. Strictly speaking, Linux is the kernel, so the question can refer specifically to the kernel's version number, or "Linux" can be used more colloquially to refer to the entire distribution, as in Fedora Linux or Ubuntu Linux.
Both are important, and you may need to know one or both answers to fix a problem with a system. For example, knowing the installed kernel version might help diagnose an issue with proprietary drivers, and identifying what distribution is running will help you quickly figure out if you should be using `apt`, `dnf`, `yum`, or some other command to install packages.
The following will help you find out what version of the Linux kernel and/or what Linux distribution is running on a system.
### How to find the Linux kernel version
To find out what version of the Linux kernel is running, run the following command:
```
uname -srm
```
Alternatively, the command can be run using the longer, more descriptive versions of the various flags:
```
uname --kernel-name --kernel-release --machine
```
Either way, the output should look similar to the following:
```
Linux 4.16.10-300.fc28.x86_64 x86_64
```
This gives you (in order): the kernel name, the version of the kernel, and the type of hardware the kernel is running on. In this case, the kernel is Linux version 4.16.10-300.fc28.x86_64 running on an x86_64 system.
More information about the `uname` command can be found by running `man uname`.
### How to find the Linux distribution
There are several ways to figure out what distribution is running on a system, but the quickest way is to check the contents of the `/etc/os-release` file. This file provides information about a distribution including, but not limited to, the name of the distribution and its version number. The os-release file in some distributions contains more details than in others, but any distribution that includes an os-release file should provide a distribution's name and version.
To view the contents of the os-release file, run the following command:
```
cat /etc/os-release
```
On Fedora 28, the output looks like this:
```
NAME=Fedora
VERSION="28 (Workstation Edition)"
ID=fedora
VERSION_ID=28
PLATFORM_ID="platform:f28"
PRETTY_NAME="Fedora 28 (Workstation Edition)"
ANSI_COLOR="0;34"
CPE_NAME="cpe:/o:fedoraproject:fedora:28"
HOME_URL="https://fedoraproject.org/"
SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=28
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=28
PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy"
VARIANT="Workstation Edition"
VARIANT_ID=workstation
```
As the example above shows, Fedora's os-release file provides the name of the distribution and the version, but it also identifies the installed variant (the "Workstation Edition"). If we ran the same command on Fedora 28 Server Edition, the contents of the os-release file would reflect that on the `VARIANT` and `VARIANT_ID` lines.
Sometimes it is useful to know if a distribution is like another, so the os-release file can contain an `ID_LIKE` line that identifies distributions the running distribution is based on or is similar to. For example, Red Hat Enterprise Linux's os-release file includes an `ID_LIKE` line stating that RHEL is like Fedora, and CentOS's os-release file states that CentOS is like RHEL and Fedora. The `ID_LIKE` line is very helpful if you are working with a distribution that is based on another distribution and need to find instructions to solve a problem.
CentOS's os-release file makes it clear that it is like RHEL, so documentation, questions, and answers in various forums about RHEL should (in most cases) apply to CentOS. CentOS is designed to be a near clone of RHEL, so it is more compatible with its "like" than some other entries that might be found in an `ID_LIKE` field, but checking for answers about a "like" distribution is always a good idea if you cannot find the information you are seeking for the running distribution.
More information about the os-release file can be found by running `man os-release`.
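The two answers can also be combined in a script. The following is a minimal sketch, assuming `/etc/os-release` exists (true on any modern systemd-era distribution); the output format is just an example:

```shell
#!/bin/sh
# Print a one-line summary of the running distribution and kernel.
# os-release is a shell-compatible key=value file, so it can be sourced.
. /etc/os-release
printf 'Running %s %s on kernel %s\n' "$NAME" "${VERSION_ID:-unknown}" "$(uname -r)"
```

On the Fedora 28 system shown above, this would print something like `Running Fedora 28 on kernel 4.16.10-300.fc28.x86_64`.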
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/6/linux-version
作者:[Joshua Allen Holm][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/holmja


@ -0,0 +1,201 @@
怎样在桌面上安装 Docker CE
=====
[在上一篇文章中][1],我们学习了容器世界的一些基本术语。当我们运行命令并在后续文章(包括本文)中使用其中一些术语时,这些背景信息将会派上用场。本文将介绍如何在桌面 Linux、macOS 和 Windows 上安装 Docker CE,它适用于想要开始使用 Docker 容器的初学者。唯一的先决条件是你能熟练使用命令行界面。
### 为什么我在本地机器上需要 Docker CE
作为一个新用户,你很可能想知道为什么你需要在本地系统上使用容器。难道它们不是作为微服务运行在云端和服务器中吗?尽管容器长期以来一直是 Linux 世界的一部分,但正是 Docker 通过其工具和技术让容器变得真正易于使用。
Docker 容器最大的优点是可以使用本地机器进行开发和测试。你在本地系统上创建的容器映像可以在“任何位置”运行。就应用程序在开发系统上运行良好但生产环境中出现问题这一点,开发人员和操作人员之间不会起冲突。
关键是,为了创建容器化的应用程序,你必须能够在本地系统上运行和创建容器。
你可以使用以下三个平台中的任何一个作为容器的开发平台:桌面 Linux、Windows 或 macOS。一旦 Docker 在这些系统上成功运行,你就可以在不同的平台上使用相同的命令。因此,你运行的是哪种操作系统并不重要。
这就是 Docker 之美。
### 让我们开始吧
现在有两个版本的 DockerDocker 企业版EE和 Docker 社区版CE。我们将使用 Docker 社区版,这是一个免费的 Docker 版本,面向想要开始使用 Docker 的开发人员和爱好者。
Docker CE 有两个版本:stable 和 edge。顾名思义,stable(稳定)版本会为你提供经过充分测试的季度更新,而 edge 版本每个月都会提供新的更新。经过进一步测试之后,edge 版本中的特性会被并入稳定版本。我建议新用户使用 stable 版本。
Docker CE 支持 macOS、Windows 10、Ubuntu 14.04/16.04/17.04/17.10、Debian 7.7/8/9/10、Fedora 25/26/27 以及 CentOS。虽然你可以下载 Docker CE 二进制文件并安装到桌面 Linux 上,但我建议添加软件仓库源,以便后续持续获得修补程序和更新。
### 在桌面 Linux 上安装 Docker CE
你并不需要一个完整的桌面 Linux 来运行 Docker,你也可以将它安装在最小化的 Linux 服务器上,比如运行在一个虚拟机中。在本教程中,我将在我的主力系统 Fedora 27 和 Ubuntu 17.04 上运行它。
### 在 Ubuntu 上安装
首先,运行系统更新,以便你的 Ubuntu 软件包完全更新:
```
$ sudo apt-get update
```
现在运行系统升级:
```
$ sudo apt-get dist-upgrade
```
然后安装 Docker 的 PGP 密钥:
```
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
```
添加 Docker 软件仓库:
```
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
```
再次更新仓库信息:
```
$ sudo apt-get update
```
现在安装 Docker CE
```
$ sudo apt-get install docker-ce
```
一旦安装Docker CE 就会在基于 Ubuntu 的系统上自动运行,让我们来检查它是否在运行:
```
$ sudo systemctl status docker
```
你应该得到以下输出:
```
docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2017-12-28 15:06:35 EST; 19min ago
Docs: https://docs.docker.com
Main PID: 30539 (dockerd)
```
由于 Docker 安装在你的系统上,你现在可以使用 Docker CLI命令行界面运行 Docker 命令。像往常一样,我们运行 Hello World 命令:
```
$ sudo docker run hello-world
```
![YMChR_7xglpYBT91rtXnqQc6R1Hx9qMX_iO99vL8][2]
恭喜!在你的 Ubuntu 系统上正在运行着 Docker。
### 在 Fedora 上安装 Docker CE
Fedora 27 上的情况有些不同。在 Fedora 上,你首先需要安装 `dnf-plugins-core` 包,这将允许你从命令行管理 DNF 软件仓库。
```
$ sudo dnf -y install dnf-plugins-core
```
现在在你的系统上安装 Docker 仓库:
```
$ sudo dnf config-manager \
    --add-repo \
    https://download.docker.com/linux/fedora/docker-ce.repo
```
现在安装 Docker CE:
```
$ sudo dnf install docker-ce
```
与 Ubuntu 不同Docker 不会在 Fedora 上自动启动。那么让我们启动它:
```
$ sudo systemctl start docker
```
否则,你必须在每次重新启动后手动启动 Docker。要将其配置为开机自动启动,运行 `sudo systemctl enable docker` 即可。现在该运行 Hello World 命令了:
```
$ sudo docker run hello-world
```
恭喜,在你的 Fedora 27 系统上正在运行着 Docker。
### 解除 root
你可能已经注意到你必须使用 sudo 来运行 Docker 命令。这是因为 Docker 守护进程与 UNIX 套接字绑定,而不是 TCP 端口,套接字由 root 用户拥有。所以,你需要 sudo 权限才能运行 docker 命令。你可以将系统用户添加到 docker 组,这样它就不需要 sudo 了:
```
$ sudo groupadd docker
```
在大多数情况下,在安装 Docker CE 时会自动创建 Docker 用户组,因此你只需将用户添加到该组中即可:
```
$ sudo usermod -aG docker $USER
```
为了测试组是否已经成功添加,根据用户名运行 groups 命令:
```
$ groups swapnil
```
这里swapnil 是用户名。)
这是在我系统上的输出:
```
swapnil : swapnil adm cdrom sudo dip plugdev lpadmin sambashare docker
```
你可以看到该用户也属于 docker 组。注销系统,这样组就会生效。一旦你再次登录,在不使用 sudo 的情况下试试 Hello World 命令:
```
$ docker run hello-world
```
你可以通过运行以下命令来查看关于 Docker 的安装版本以及更多系统信息:
```
$ docker info
```
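上文「解除 root」一节的检查步骤也可以合并成一个小脚本。下面是一个示意(假设 POSIX shell 环境;该检查本身并不需要已安装 Docker):

```shell
#!/bin/sh
# 检查当前用户是否已加入 docker 组(示意脚本)
user="${USER:-$(id -un)}"
# id -nG 列出用户所属的全部组,逐行比对是否存在 docker
if id -nG "$user" | tr ' ' '\n' | grep -qx docker; then
    echo "user '$user' is in the docker group"
else
    echo "run: sudo usermod -aG docker $user (then log out and back in)"
fi
```

如果输出的是第二行提示,说明还需要按前文步骤将用户加入 docker 组并重新登录。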
### 在 macOS 和 Windows 上安装 Docker CE
你可以在 macOS 和 Windows 上很轻松地安装 Docker CE(和 EE)。下载 Docker 官方提供的 macOS 安装包;在 macOS 上安装应用程序只需将它们拖到 Applications 目录即可。文件复制完成后,从 Spotlight(译者注:mac 下的搜索)中打开 Docker 开始安装。安装完成后,Docker 将自动启动,你可以在 macOS 顶部的状态栏中看到它。
![IEX23j65zYlF8mZ1c-T_vFw_i1B1T1hibw_AuhEA][3]
macOS 是类 UNIX所以你可以简单地打开终端应用程序并开始使用 Docker 命令。测试 hello world 应用:
```
$ docker run hello-world
```
恭喜,你已经在你的 macOS 上运行了 Docker。
### 在 Windows 10 上安装 Docker
你需要最新版本的 Windows 10 Pro 或 Server 才能在其上安装和运行 Docker。如果你的系统没有完全更新,Windows 将无法安装 Docker。我在 Windows 10 系统上就遇到了错误,必须运行系统更新。我的版本有些落后,于是碰到了[这个][4] bug。所以,如果你无法在 Windows 上安装 Docker,要知道遇到此问题的并不是只有你一个人。查看该 bug 以找到解决方案。
一旦你在 Windows 上安装 Docker 后,你可以通过 WSL 使用 bash shell或者使用 PowerShell 来运行 Docker 命令。让我们在 PowerShell 中测试 “Hello World” 命令:
```
PS C:\Users\swapnil> docker run hello-world
```
恭喜,你已经在 Windows 上运行了 Docker。
在下一篇文章中,我们将讨论如何从 Docker Hub 中拉取镜像并在我们的系统上运行容器,还会讨论如何将我们自己的容器推送到 Docker Hub。
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/learn/intro-to-linux/how-install-docker-ce-your-desktop
作者:[SWAPNIL BHARTIYA][a]
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/arnieswap
[1]:https://www.linux.com/blog/intro-to-linux/2017/12/container-basics-terms-you-need-know
[2]:https://lh5.googleusercontent.com/YMChR_7xglpYBT91rtXnqQc6R1Hx9qMX_iO99vL8Z8C0-BlynDcL5B5pG-zzH0fKU0Qvnzd89v0KDEbZiO0gTfGNGfDtO-FkTt0bmzIQ-TKbNmv18S9RXdkSeXqgKDFRewnaHPj2
[3]:https://lh3.googleusercontent.com/IEX23j65zYlF8mZ1c-T_vFw_i1B1T1hibw_AuhEAfwv9oFpMfcAqkgEk7K5o58iDAAfGozSpIvY_qEsTOHRlSbesMKwTnG9rRkWba1KPSmnuH1LyoccDGNO3Clbz8du0gSByZxNj
[4]:https://github.com/docker/for-win/issues/1263


@ -1,201 +0,0 @@
# 装载/卸载 Linux 内核模块
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_development_programming.png?itok=4OM29-82)
本文来自 Manning 出版的 [Linux in Action][1] 的第 15 章。
Linux 使用内核模块管理硬件外设。 我们来看看它是如何工作的。
运行中的 Linux 内核是您不希望破坏的东西之一。毕竟,内核是驱动计算机所做的一切的软件。考虑到在一个运行的系统上必须同时管理诸多细节,最好能让内核尽可能的减少分心,专心的完成它的工作。但是,如果在不重新启动整个系统的情况下,对计算环境进行任何微小的更改都是不可能的,那么插入一个新的网络摄像头或打印机可能会对您的工作流程造成严重的破坏。每次添加设备时都必须重新启动,以使系统识别它,这效率很低。
为了在稳定性和可用性之间建立一个有效的平衡Linux 将内核隔离,但是允许您通过可加载内核模块 (LKMs) 实时添加特定的功能。如下图所示,您可以将模块视为软件的一部分,它告诉内核在哪里找到一个设备以及如何使用它。反过来,内核使设备对用户和进程可用,并监视其操作。
![Kernel modules][3]
内核模块充当设备和 Linux 内核之间的转换器。
没有什么能够阻止你编写自己的模块,完全按照你喜欢的方式来支持某个设备,但何必呢?Linux 模块库已经非常强大,通常不需要自己去实现一个模块。而绝大多数时候,Linux 会自动为新设备加载模块,你甚至察觉不到。
不过,有时候,出于某种原因,它并不会自动生效。(你总不想因为迟迟无法加入视频面试会议,而让招聘经理不耐烦地等待你的笑脸吧。)为了帮助你解决问题,你需要更多地了解内核模块,特别是:如何找到驱动你的外设的实际模块,以及如何手动激活它。
### 查找内核模块
按照公认的约定,模块是位于 `/lib/modules/` 目录下、具有 .ko(内核对象)扩展名的文件。然而,在导航到这些文件之前,你可能得先做出选择。因为在引导时系统允许你从一个内核版本列表中选择一个来加载,所以支持你所选内核正常工作所需的特定软件(包括内核模块)必须存在于某处,而 `/lib/modules/` 就是其中之一。你会发现该目录里放满了每个可用 Linux 内核版本的模块,例如:
```
$ ls /lib/modules
4.4.0-101-generic
4.4.0-103-generic
4.4.0-104-generic
```
在我的电脑上运行的内核是版本号最高的版本4.4.0-104-generic但不能保证这对你来说是一样的内核经常更新。 如果您将要在一个运行的系统上对你想要使用的模块做一些工作的话,则需要确保您拥有正确的目录树。
好消息:有一个可靠的窍门。与其通过名称来识别目录并寄希望于找对,不如使用始终指向当前运行内核名称的系统变量。你可以通过 `uname -r` 来调用该变量(`-r` 指定系统信息中通常显示的内核版本号):
```
$ uname -r
4.4.0-104-generic
```
通过这些信息,你可以使用称为“命令替换”的方法将 `uname` 嵌入到文件系统引用中。例如,要导航到正确的目录,你需要将它附加到 `/lib/modules` 之后。要告诉 Linux `uname` 不是文件系统路径的一部分,请将 `uname -r` 部分用反引号括起来,如下所示:
```
$ ls /lib/modules/`uname -r`
build   modules.alias        modules.dep      modules.softdep
initrd  modules.alias.bin    modules.dep.bin  modules.symbols
kernel  modules.builtin      modules.devname  modules.symbols.bin
misc    modules.builtin.bin  modules.order    vdso
```
你可以在 `kernel/` 目录下的子目录中找到大部分模块。 花几分钟时间浏览这些目录,了解事物的排列方式和可用内容。 这些文件名通常会让你知道你在看什么。
```
$ ls /lib/modules/`uname -r`/kernel
arch  crypto  drivers  fs  kernel  lib  mm
net  sound  ubuntu  virt  zfs
```
这是查找内核模块的一种方法; 实际上,这是一种快速的方式。 但这不是唯一的方法。 如果你想获得完整的集合,你可以使用 `lsmod` 列出所有当前加载的模块以及一些基本信息。 这个截断输出的第一列(在这里列出的太多了)是模块名称,后面是文件大小和数量,然后是每个模块的名称:
```
$ lsmod
[...]
vboxdrv          454656  3 vboxnetadp,vboxnetflt,vboxpci
rt2x00usb        24576  1 rt2800usb
rt2800lib        94208  1 rt2800usb
[...]
```
到底有多少?好吧,我们再运行一次 `lsmod`,但这一次将输出通过管道传给 `wc -l` 来统计行数:
```
$ lsmod | wc -l
113
```
那些只是已加载的模块。总共有多少个可用模块?运行 `modprobe -c` 并统计输出的行数就能得到这个数字:
```
$ modprobe -c | wc -l
33350
```
有 33,350 个可用模块!看起来好像有人多年来一直在努力为我们提供驱动物理设备的软件。
注意:在某些系统中,您可能会遇到自定义的模块,这些模块在 `/etc/modules` 文件中使用其唯一条目进行引用,也可以作为保存到 `/etc/modules-load.d/` 的配置文件。这些模块很可能是本地开发项目的产物,可能涉及前沿实验。不管怎样,知道你在看什么是好事。
这就是你如何找到模块。 如果出于某种原因,它不会自行运行,您的下一个工作就是弄清楚如何手动加载非活动模块。
### 手动加载内核模块
在加载内核模块之前,逻辑上您必须确认它的存在。在这之前,你需要知道它叫什么。要做到这一点,有时需要同样的魔法和运气以及在线文档作者的辛勤工作的帮助。
我将通过描述一段时间前遇到的问题来说明这个过程。在一个晴朗的日子里,出于某种原因,笔记本电脑上的 WiFi 接口停止工作了。就这样。也许是软件升级把它搞砸了。谁知道呢?我运行了 `lshw -c network` ,得到了这个非常奇怪的信息:
```
network UNCLAIMED
    AR9485 Wireless Network Adapter
```
Linux 识别到了接口(Atheros AR9485),但将其列为“未声明”(UNCLAIMED)。那么,正如他们所说,“当情况变得严峻时,强者就会上网搜索。”我搜索了一下 atheros ar9 linux 模块,在翻了五六页建议我要么自己写模块、要么干脆放弃的、甚至有十年之久的帖子之后,我终于发现(至少在 Ubuntu 16.04 上)存在一个可用的模块。它的名字是 ath9k。
是的! 这场战斗胜券在握! 向内核添加模块比听起来容易得多。 要仔细检查它是否可用,可以针对模块的目录树运行 `find`,指定 `-type f` 来告诉 Linux 您正在查找文件,然后将字符串 `ath9k` 和星号一起添加以包含所有以你的字符串打头的文件:
```
$ find /lib/modules/$(uname -r) -type f -name ath9k*
/lib/modules/4.4.0-97-generic/kernel/drivers/net/wireless/ath/ath9k/ath9k_common.ko
/lib/modules/4.4.0-97-generic/kernel/drivers/net/wireless/ath/ath9k/ath9k.ko
/lib/modules/4.4.0-97-generic/kernel/drivers/net/wireless/ath/ath9k/ath9k_htc.ko
/lib/modules/4.4.0-97-generic/kernel/drivers/net/wireless/ath/ath9k/ath9k_hw.ko
```
再一步,加载模块:
```
# modprobe ath9k
```
就是这样。没有重新启动。没有大惊小怪。
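顺带一提,上面用 `find` 搜索模块文件的做法也可以包装成一个小脚本,方便重复查询。下面是一个示意(脚本名与默认参数均为假设):

```shell
#!/bin/sh
# findmod.sh:统计名称匹配指定片段的内核模块文件数量(示意脚本)
pattern="${1:-ath9k}"
dir="/lib/modules/$(uname -r)"
# 目录不存在时 find 会报错,这里将错误丢弃,只输出匹配到的文件数量
find "$dir" -type f -name "${pattern}*" 2>/dev/null | wc -l
```

输出大于 0 就说明当前内核的模块树中确实带有该模块,可以放心 `modprobe`。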
这里还有一个示例,向您展示如何使用已经崩溃的运行模块。曾经有一段时间,我使用罗技网络摄像头和一个特定的软件会使摄像头在下次系统启动前无法被任何其他程序访问。有时我需要在不同的应用程序中打开相机,但没有时间关机重新启动。(我运行了很多应用程序,在引导之后将它们全部准备好需要一些时间。)
由于这个模块可能是运行的,所以使用 `lsmod` 来搜索视频这个词应该给我一个关于相关模块名称的提示。 实际上,它比提示更好:用 video 这个词描述的唯一模块是 uvcvideo如下所示
```
$ lsmod | grep video
uvcvideo               90112  0
videobuf2_vmalloc      16384  1 uvcvideo
videobuf2_v4l2         28672  1 uvcvideo
videobuf2_core         36864  2 uvcvideo,videobuf2_v4l2
videodev              176128  4 uvcvideo,v4l2_common,videobuf2_core,videobuf2_v4l2
media                  24576  2 uvcvideo,videodev
```
有可能是我自己的操作导致了崩溃,我想我可以挖掘更深一点,看看我能否以正确的方式解决问题。 但你知道它是如何的; 有时你不关心理论,只想让设备工作。 所以我用 `rmmod` 杀死了 `uvcvideo` 模块,然后用 `modprobe` 重新启动它,一切都好:
```
# rmmod uvcvideo
# modprobe uvcvideo
```
再一次:不重新启动。没有其他的后续影响。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/5/how-load-or-unload-linux-kernel-module
作者:[David Clinton][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[amwps290](https://github.com/amwps290)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/dbclinton
[1]:https://www.manning.com/books/linux-in-action?a_aid=bootstrap-it&amp;a_bid=4ca15fc9&amp;chan=opensource
[2]:/file/397906
[3]:https://opensource.com/sites/default/files/uploads/kernels.png "Kernel modules"


@ -1,120 +0,0 @@
Vim-plug极简 Vim 插件管理器
======
![](https://www.ostechnix.com/wp-content/uploads/2018/06/vim-plug-720x340.png)
当没有插件管理器时,Vim 用户必须手动下载插件的 tarball 包并将它们解压到 **~/.vim** 目录中。插件数量少时这还可行,但当安装的插件越来越多时,就会变得一团糟:所有插件文件分散在同一个目录中,用户无法分辨哪个文件属于哪个插件,也不知道应该删除哪些文件来卸载某个插件。这时,Vim 插件管理器就可以派上用场。插件管理器将安装的插件文件保存在各自独立的目录中,因此管理所有插件变得非常容易。我们几个月前已经写过关于 [**Vundle**][1] 的文章。今天,我们将了解又一个名为 **“Vim-plug”** 的 Vim 插件管理器。
Vim-plug 是一个免费、开源、速度非常快的、极简的 vim 插件管理器。它可以并行安装或更新插件。你还可以回滚更新。它创建浅层克隆以最小化磁盘空间使用和下载时间。它支持按需加载插件以加快启动时间。其他值得注意的特性是分支/标签/提交支持、post-update hook、支持外部管理的插件等。
### Vim-plug一个极简的 Vim 插件管理器
#### **安装**
安装和使用起来非常容易。你只需打开终端并运行以下命令:
```
$ curl -fLo ~/.vim/autoload/plug.vim --create-dirs https://raw.githubusercontent.com/junegunn/vim-plug/master/plug.vim
```
Neovim 用户可以使用以下命令安装 Vim-plug
```
$ curl -fLo ~/.config/nvim/autoload/plug.vim --create-dirs https://raw.githubusercontent.com/junegunn/vim-plug/master/plug.vim
```
#### 用法
**安装插件**
要安装插件,你必须如下所示首先在 Vim 配置文件中声明它们。一般 Vim 的配置文件是 **~/.vimrc**Neovim 的配置文件是 **~/.config/nvim/init.vim**。请记住,当你在配置文件中声明插件时,列表应该以 **call plug#begin(PLUGIN_DIRECTORY)** 开始,并以 **plug#end()** 结束。
例如,我们安装 “lightline.vim” 插件。为此,请在 **~/.vimrc** 的顶部添加以下行。
```
call plug#begin('~/.vim/plugged')
Plug 'itchyny/lightline.vim'
call plug#end()
```
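如果要一次声明多个插件,或使用前文提到的按需加载、分支支持等特性,写法类似。下面是一个配置片段示意(这里列出的插件名与选项仅作示例,并非必须安装这些插件):

```
call plug#begin('~/.vim/plugged')
" 状态栏插件
Plug 'itchyny/lightline.vim'
" 按需加载:仅在第一次执行 :NERDTreeToggle 命令时才加载
Plug 'preservim/nerdtree', { 'on': 'NERDTreeToggle' }
" 使用指定分支
Plug 'rdnetto/YCM-Generator', { 'branch': 'stable' }
call plug#end()
```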
在 vim 配置文件中添加上面的行后,通过输入以下命令重新加载:
```
:source ~/.vimrc
```
或者,只需重新加载 Vim 编辑器。
现在,打开 vim 编辑器:
```
$ vim
```
使用以下命令检查状态:
```
:PlugStatus
```
然后输入下面的命令,然后按 ENTER 键安装之前在配置文件中声明的插件。
```
:PlugInstall
```
**更新插件**
要更新插件,请运行:
```
:PlugUpdate
```
更新插件后,按下 **d** 查看更改。或者,你可以之后输入 **:PlugDiff**。
**审查插件**
有时,更新的插件可能有新的 bug 或无法正常工作。要解决这个问题,你可以简单地回滚有问题的插件。输入 **:PlugDiff** 命令,然后按 ENTER 键查看上次 **:PlugUpdate**的更改,并在每个段落上按 **X** 将每个插件回滚到更新前的前一个状态。
**删除插件**
要删除一个插件,请先在你的 vim 配置文件中删除或注释掉之前添加的对应 **Plug** 命令。然后,运行 **:source ~/.vimrc** 或重启 Vim 编辑器。最后,运行以下命令卸载插件:
```
:PlugClean
```
该命令将删除 vim 配置文件中所有未声明的插件。
**升级 Vim-plug**
要升级 Vim-plug 本身,请输入:
```
:PlugUpgrade
```
如你所见,使用 Vim-plug 管理插件并不难。它简化了插件管理。现在去找出你最喜欢的插件并使用 Vim-plug 来安装它们。
就是这些了。我将很快在这里发布另一个有趣的话题。在此之前,请继续关注 OSTechNix。
干杯!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/vim-plug-a-minimalist-vim-plugin-manager/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.ostechnix.com/manage-vim-plugins-using-vundle-linux/