Merge pull request #32 from LCTT/master

update
This commit is contained in:
MjSeven 2018-06-15 22:21:09 +08:00 committed by GitHub
commit de7df7b55b
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
15 changed files with 1520 additions and 370 deletions

View File

@ -0,0 +1,107 @@
A history of low-level Linux container runtimes
======
> "Container runtime" is an overloaded term.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/running-containers-two-ship-container-beach.png?itok=wr4zJC6p)
At Red Hat we like to say, "Containers are Linux, Linux is Containers." Here is what that means. Traditional containers are processes on a system that usually have the following three characteristics:
1. Resource constraints
When you run lots of containers on a system, you do not want any one container to monopolize the machine, so we use resource constraints to control CPU, memory, network bandwidth, and other resources. The Linux kernel provides the cgroup feature, which can be configured to control the resources a container process uses.
2. Security constraints
Usually, you do not want your containers to be able to attack each other or to attack the host system. We take advantage of several Linux kernel features to set up security separation, including SELinux, seccomp, and capabilities.
(LCTT translator's note: since version 2.2, the Linux kernel has split privileges away from the superuser into a set of distinct capabilities that can be enabled or disabled individually.)
3. Virtual separation
Any process outside of a container should be invisible to it. A container should be on its own network, and processes in different containers should each be able to bind to port 80. Each container's image and rootfs (root filesystem) should be independent of the others. In Linux, we use the kernel's namespace feature to provide this virtual separation.
So, any process that has security settings and runs inside cgroups and namespaces can be called a container. Looking at PID 1, systemd, on a Red Hat Enterprise Linux 7 system, you can see that systemd runs inside a cgroup:
```
# tail -1 /proc/1/cgroup
1:name=systemd:/
```
The `ps` command shows that the systemd process has an SELinux label:
```
# ps -eZ | grep systemd
system_u:system_r:init_t:s0             1 ?     00:00:48 systemd
```
and capabilities:
```
# grep Cap /proc/1/status
...
CapEff: 0000001fffffffff
CapBnd: 0000001fffffffff
```
Finally, look at the `/proc/1/ns` subdirectory to see the namespaces that systemd runs in:
```
ls -l /proc/1/ns
lrwxrwxrwx. 1 root root 0 Jan 11 11:46 mnt -> mnt:[4026531840]
lrwxrwxrwx. 1 root root 0 Jan 11 11:46 net -> net:[4026532009]
lrwxrwxrwx. 1 root root 0 Jan 11 11:46 pid -> pid:[4026531836]
...
```
If PID 1 (and, really, every other process on the system) has resource constraints, security settings, and namespaces, then I would argue that every process on the system is in a container.
Container runtime tools merely set up those resource constraints, security settings, and namespaces, and then the Linux kernel launches the process. Once the container starts, the container runtime can monitor PID 1 inside the container, or the container's stdin/stdout, and manage the life cycle of the container process.
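To make that concrete, here is a minimal sketch in Go (the language libcontainer and runc are written in) of the primitive a low-level runtime builds on: clone a child process into new UTS, PID, and mount namespaces and hand it a shell. This only illustrates the kernel feature, not how any particular runtime is implemented; it needs root, works only on Linux, and skips cgroups, security labels, and rootfs setup entirely.
```
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Run a shell in fresh UTS, PID, and mount namespaces (Linux only, needs root).
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```
Inside the new shell, `echo $$` prints 1, because the shell is PID 1 of its own PID namespace, which is exactly the virtual separation described above.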
### Container runtimes
You might be thinking to yourself, well, systemd sounds pretty similar to a container runtime. After several email discussions about why container runtimes do not use `systemd-nspawn` as a tool for launching containers, I decided it was worth talking about container runtimes and giving some historical context.
[Docker][1] is often called a container runtime, but "container runtime" is an overloaded term. When people say "container runtime", they really mean the higher-level tools that provide convenient features for developers, including Docker, [CRI-O][2], and [RKT][3]. These tools are API driven and cover operations such as pulling container images from registries, configuring storage, and launching containers. Launching a container usually involves a specialized tool that configures how the kernel will run the container; that tool is also called a "container runtime", and below I will refer to it as a "low-level container runtime" to keep the two apart. Daemons like Docker and CRI-O, and command-line tools such as [Podman][4] and [Buildah][5], would probably better be called "container managers".
Early versions of Docker used the `lxc` toolset to launch containers; `lxc` predates `systemd-nspawn`. Red Hat originally tried to integrate [libvirt][6] (`libvirt-lxc`) into Docker as a replacement for `lxc`, since `lxc` was not supported in RHEL. `libvirt-lxc` did not use `systemd-nspawn` either; at the time, the systemd team considered `systemd-nspawn` only a testing tool, not something for production.
At the same time, the upstream Docker developers, including some members of my Red Hat team, felt that containers should be launched in a golang-native way rather than by calling out to an external application. Their work led to libcontainer, a golang-native library for launching containers. Red Hat engineering decided that this library had the better future and dropped `libvirt-lxc`.
Part of the reason the [Open Container Initiative][7] (OCI) was later founded was that people wanted other ways to launch containers. Traditional namespace-separated containers were well known, but people also wanted virtual machine-level isolation. Intel and [Hyper.sh][8] were working on KVM-isolated containers, and Microsoft was working on Windows-based containers. The OCI wanted a standard specification defining what a container is, which led to the [OCI Runtime Specification][9].
The OCI Runtime Specification defines a JSON file format that describes what binary should be run, how it should be contained, and the location of the container's rootfs. Some tools generate this standards-compliant JSON file, and other tools parse it and run a container on that rootfs. Part of Docker's code was split out into the libcontainer project, which was donated to the OCI. The upstream Docker engineers, together with our own engineers, created a new frontend tool that parses a JSON file conforming to the OCI Runtime Specification and then talks to libcontainer to launch the container. That frontend is [runc][10], which was also donated to the OCI. Although `runc` can parse an OCI JSON file, users are left to generate those files themselves. `runc` has since become the most popular low-level container runtime; almost every container management tool supports it, including CRI-O, Docker, Buildah, Podman, and [Cloud Foundry Garden][11]. Since then, other tools have implemented the OCI Runtime Specification as well, so they can run OCI-compliant containers.
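As a rough illustration only, the sketch below marshals a tiny Go struct into the kind of JSON such a file contains; the field names follow the published runtime-spec, but a real `config.json` (for example, the one `runc spec` generates) carries many more sections, such as mounts, namespaces, and capabilities.
```
package main

import (
	"encoding/json"
	"fmt"
)

// ociConfig holds only a few illustrative fields from the OCI runtime spec.
type ociConfig struct {
	OCIVersion string `json:"ociVersion"`
	Process    struct {
		Args []string `json:"args"`
		Cwd  string   `json:"cwd"`
	} `json:"process"`
	Root struct {
		Path string `json:"path"`
	} `json:"root"`
}

func main() {
	var c ociConfig
	c.OCIVersion = "1.0.0"
	c.Process.Args = []string{"/bin/sh"}
	c.Process.Cwd = "/"
	c.Root.Path = "rootfs"

	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out)) // roughly the shape of a minimal config.json
}
```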
Both [Clear Containers][12] and Hyper.sh's `runV` run KVM-based containers against the OCI Runtime Specification, and the two efforts have since merged into a new project called [Kata][12]. Last year, Oracle created a demonstration OCI runtime tool written in Rust, called [RailCar][13], but the GitHub project has not been updated in two months, so it is hard to tell whether it is still being developed. A couple of years ago, Vincent Batts tried to create a tool called [nspawn-oci][14] that interpreted an OCI Runtime Specification file and launched `systemd-nspawn`, but it did not get much attention, and it was not a native implementation.
If some developer wanted to implement a native `systemd-nspawn --oci OCI-SPEC.json` and get it accepted and supported by the systemd team, then container managers such as CRI-O, Docker, and Podman could use it as another low-level container runtime, the same way they use `runc` or Clear Containers/runV ([Kata][15]). (No one on my team is working on this right now.)
To sum up: three or four years ago the upstream developers wanted to write a low-level golang tool for launching containers, and that tool eventually became `runc`. They had a C-based tool, `lxc`, that they were using at the time, and they moved to `runc` soon after it was developed. I am pretty certain that when they decided to build libcontainer, they were not interested in `systemd-nspawn`, or in any other non-native (that is, not golang) way of running namespace-separated containers.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/1/history-low-level-container-runtimes
Author: [Daniel Walsh][a]
Translator: [pinewall](https://github.com/pinewall)
Proofreader: [wxy](https://github.com/wxy)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://opensource.com/users/rhatdan
[1]:https://github.com/docker
[2]:https://github.com/kubernetes-incubator/cri-o
[3]:https://github.com/rkt/rkt
[4]:https://github.com/projectatomic/libpod/tree/master/cmd/podman
[5]:https://github.com/projectatomic/buildah
[6]:https://libvirt.org/
[7]:https://www.opencontainers.org/
[8]:https://www.hyper.sh/
[9]:https://github.com/opencontainers/runtime-spec
[10]:https://github.com/opencontainers/runc
[11]:https://github.com/cloudfoundry/garden
[12]:https://clearlinux.org/containers
[13]:https://github.com/oracle/railcar
[14]:https://github.com/vbatts/nspawn-oci
[15]:https://github.com/kata-containers

View File

@ -5,27 +5,22 @@
Below, I will show you how to write an HTTP API in [Go][6] that provides this service.
### Business
### Flow
* The user enters their email address.
* The server creates a temporary one-time-use code associated with that user (like a temporary password) and mails the user a "magic link".
* The user clicks the magic link.
* The server extracts the code from the magic link, fetches the associated user, and redirects back to the client with a new JWT.
* On every subsequent request, the client uses the JWT to authenticate the user.
### Requirements
* Database: we will use a SQL database called [CockroachDB][1] for this service. It is a lot like postgres, but written in Go.
* SMTP server: we will use a third-party mail server to send the mails. For development we will use [mailtrap][2]. Mailtrap sends all mail to its own inbox, so you do not have to create several fake mail accounts while testing.
* SMTP server: we will use a third-party mail server to send the mails. For development we will use [mailtrap][2]. Mailtrap sends all mail to its own inbox, so you do not have to create several fake mail accounts to test them.
Install it from [the Go homepage][7], then check that it works with the `go version` command (1.10.1 atm).
Install Go from [its homepage][7], then check that it works with the `go version` command (1.10.1 atm).
Download CockroachDB from [its homepage][8], extract it, and add it to your `PATH`. Check that it works with the `cockroach version` command (2.0 atm).
Download it from [the CockroachDB homepage][8], extract it, and add it to your `PATH`. Check that it works with the `cockroach version` command (2.0 atm).
### Database schema
@ -33,7 +28,6 @@
```
cockroach start --insecure --host 127.0.0.1
```
It will print some output; find the SQL address line, which shows something like `postgresql://root@127.0.0.1:26257?sslmode=disable`. We will use it later to connect to the database.
@ -62,7 +56,7 @@ INSERT INTO users (email, username) VALUES
```
This script creates a database named `passwordless_demo`, two tables named `users` and `verification_codes`, and inserts some fake users to test with later. Each verification code is associated with a user and saves the code creation data, used to check whether the code has expired.
This script creates a database named `passwordless_demo`, two tables named `users` and `verification_codes`, and inserts some fake users to test with later. Each verification code is associated with a user and stores its creation time, used to check whether the verification code has expired.
Use the `cockroach sql` command in another terminal to run this script:
@ -80,9 +74,7 @@ cat schema.sql | cockroach sql --insecure
We need the following Go packages:
* [github.com/lib/pq][3]: the postgres driver that CockroachDB uses
* [github.com/matryer/way][4]: the router
* [github.com/dgrijalva/jwt-go][5]: the JWT implementation
```
@ -94,7 +86,7 @@ go get -u github.com/dgrijalva/jwt-go
### Code
### Init function
#### Init function
Create `main.go`, and start by getting some configuration from environment variables inside an `init` function.
@ -137,22 +129,16 @@ func env(key, fallbackValue string) string {
```
* `appURL` will let us build the "magic link".
* `port` is the port the HTTP server will listen on.
* `databaseURL` is the CockroachDB address; I added `/passwordless_demo` to the database address above to indicate the database name.
* `jwtKey` is used to sign JWTs.
* `jwtKey` is used to sign the JWT.
* `smtpAddr` is the combination of `SMTP_HOST` + `SMTP_PORT`; we will use it to send the mails.
* `smtpUsername` and `smtpPassword` are the two required variables.
* `smtpAuth` is also used to send the mails.
The `env` function lets us read an environment variable, returning a fallback value if it does not exist.
The `env` function lets us read an environment variable, returning a fallback value if it does not exist.
### Main function
#### Main function
```
var db *sql.DB
@ -189,7 +175,7 @@ import (
```
Then we create the router and define some endpoints. For the passwordless business flow we use two endpoints: `/api/passwordless/start` sends the magic link, and `/api/passwordless/verify_redirect` responds with the JWT.
Then we create the router and define some endpoints. For the passwordless flow we use two endpoints: `/api/passwordless/start` sends the magic link, and `/api/passwordless/verify_redirect` responds with the JWT.
Finally, we start the server.
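The routing code itself is elided from this diff. Based on the handler and middleware names used throughout this article, the wiring presumably looks roughly like the sketch below; the listen address is a stand-in, since the actual code uses the port read during `init`.
```
router := way.NewRouter()
router.HandleFunc("POST", "/api/users", jsonRequired(createUser))
router.HandleFunc("POST", "/api/passwordless/start", jsonRequired(passwordlessStart))
router.HandleFunc("GET", "/api/passwordless/verify_redirect", passwordlessVerifyRedirect)
router.HandleFunc("GET", "/api/auth_user", guard(getAuthUser))

// ":3000" is a placeholder; the real server listens on the configured port.
log.Fatal(http.ListenAndServe(":3000", router))
```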
@ -236,7 +222,7 @@ go build
Our directory is named "passwordless-demo", but yours may be different; `go build` will create an executable with that name. If you did not shut down the cockroach node from before, and you configured the `SMTP_USERNAME` and `SMTP_PASSWORD` variables correctly, you will see `starting server at http://localhost/ 🚀` with no errors.
### JSON required middleware
#### Request JSON middleware
The endpoints need to decode JSON from the request body, so we have to make sure the requests are of type `application/json`. Because this is a common concern, I decoupled it into middleware.
@ -257,7 +243,7 @@ func jsonRequired(next http.HandlerFunc) http.HandlerFunc {
The implementation is easy. First it gets the content type from the request header, then it checks whether it starts with "application/json"; if not, it returns early with `415 Unsupported Media Type`.
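The middleware body is elided from this diff; given the description above and the signature shown in the hunk header, a sketch of it might look like this (assuming the usual `net/http` and `strings` imports):
```
func jsonRequired(next http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		// Reject anything whose content type does not start with "application/json".
		ct := r.Header.Get("Content-Type")
		if !strings.HasPrefix(ct, "application/json") {
			respondJSON(w, "JSON body required", http.StatusUnsupportedMediaType)
			return
		}
		next(w, r)
	}
}
```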
### Respond JSON function
#### Respond JSON function
Responding with JSON is also very common, so I extracted it into a function.
@ -285,7 +271,7 @@ func respondJSON(w http.ResponseWriter, payload interface{}, code int) {
First, it does a type switch on primitive types to wrap them in a `map`. Then it marshals the payload to JSON, sets the response content type and status code, and writes the JSON. If the marshaling fails, it responds with an internal error.
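The function body is likewise elided here; a sketch consistent with that description (and with the signature in the hunk header) could be:
```
func respondJSON(w http.ResponseWriter, payload interface{}, code int) {
	// Wrap primitive values in a map so the response is always a JSON object.
	switch value := payload.(type) {
	case string, int, bool:
		payload = map[string]interface{}{"message": value}
	}
	b, err := json.Marshal(payload)
	if err != nil {
		respondInternalError(w, fmt.Errorf("could not marshal response payload: %v", err))
		return
	}
	w.Header().Set("Content-Type", "application/json; charset=utf-8")
	w.WriteHeader(code)
	w.Write(b)
}
```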
### Respond internal error function
#### Respond internal error function
`respondInternalError` is a function that responds with `500 Internal Server Error`, but also logs the error to the console.
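A sketch of what that function probably looks like, reusing `respondJSON` from above (the actual logging in the repository may differ):
```
func respondInternalError(w http.ResponseWriter, err error) {
	// Log the real error for the operator, return a generic 500 to the client.
	log.Println(err)
	respondJSON(w, http.StatusText(http.StatusInternalServerError), http.StatusInternalServerError)
}
```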
@ -299,7 +285,7 @@ func respondInternalError(w http.ResponseWriter, err error) {
```
### Create user handler
#### Create user handler
Let's start with the `createUser` handler, because it is very easy and RESTish.
@ -391,7 +377,7 @@ respondJSON(w, user, http.StatusCreated)
Finally, we respond with the created user.
### Passwordless start handler
#### Passwordless start handler
```
type PasswordlessStartRequest struct {
@ -401,7 +387,7 @@ type PasswordlessStartRequest struct {
```
This struct holds the request body of `passwordlessStart`. The email of the user who wants to log in. The redirect URI coming from the client; the app that will use our API, e.g. `https://frontend.app/callback`.
This struct holds the request body of `passwordlessStart`: the email of the user who wants to log in, and the redirect URI coming from the client (the app that will use our API), e.g. `https://frontend.app/callback`.
```
var magicLinkTmpl = template.Must(template.ParseFiles("templates/magic-link.html"))
@ -429,7 +415,7 @@ var magicLinkTmpl = template.Must(template.ParseFiles("templates/magic-link.html
This is the email template we use to send the magic link to the user. Feel free to style it however you want.
Now, go **inside** the `passwordlessStart` function:
Now, go inside the `passwordlessStart` function:
```
var input PasswordlessStartRequest
@ -525,7 +511,7 @@ w.WriteHeader(http.StatusNoContent)
Finally, we set the response status code to `204 No Content`. The client does not need more data than a successful status code.
### Send mail function
#### Send mail function
```
func sendMail(to mail.Address, subject, body string) error {
@ -558,16 +544,16 @@ func sendMail(to mail.Address, subject, body string) error {
This function composes a basic HTML mail structure and sends it through the SMTP server. You can customize the content of the mail however you want; I prefer to keep it simple.
### Passwordless verify redirect handler
#### Passwordless verify redirect handler
```
var rxUUID = regexp.MustCompile("^[0-9a-f]{8}-[0-9a-f]{4}-4[0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$")
```
First, this regular expression validates a UUID (the verification code)
First, this regular expression validates a UUID (the verification code).
Now, go **inside** the `passwordlessVerifyRedirect` function:
Now, go inside the `passwordlessVerifyRedirect` function:
```
q := r.URL.Query()
@ -661,23 +647,19 @@ http.Redirect(w, r, callback.String(), http.StatusFound)
* * *
The passwordless workflow is now complete. Now we need to write the `getAuthUser` endpoint, which returns the currently authenticated user's info. You will remember that this endpoint uses the `authRequired` middleware.
The passwordless flow is now complete. Now we need to write the `getAuthUser` endpoint, which returns the currently authenticated user's info. You will remember that this endpoint uses the `guard` middleware.
### With Auth middleware
#### With Auth middleware
Before writing the `authRequired` middleware, I will write one branch that does not require authentication. That is, if no JWT is passed, it simply does not authenticate the user.
Before writing the `guard` middleware, I will write one branch that does not require authentication. That is, if no JWT is passed, it simply does not authenticate the user.
```
type ContextKey int
const (
keyAuthUserID ContextKey = iota
)
func jwtKeyFunc(*jwt.Token) (interface{}, error) {
return config.jwtKey, nil
type ContextKey struct {
Name string
}
var keyAuthUserID = ContextKey{"auth_user_id"}
func withAuth(next http.HandlerFunc) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
a := r.Header.Get("Authorization")
@ -689,7 +671,11 @@ func withAuth(next http.HandlerFunc) http.HandlerFunc {
tokenString := a[7:]
p := jwt.Parser{ValidMethods: []string{jwt.SigningMethodHS256.Name}}
token, err := p.ParseWithClaims(tokenString, &jwt.StandardClaims{}, jwtKeyFunc)
token, err := p.ParseWithClaims(
tokenString,
&jwt.StandardClaims{},
func (*jwt.Token) (interface{}, error) { return config.jwtKey, nil },
)
if err != nil {
respondJSON(w, http.StatusText(http.StatusUnauthorized), http.StatusUnauthorized)
return
@ -707,19 +693,18 @@ func withAuth(next http.HandlerFunc) http.HandlerFunc {
next(w, r.WithContext(ctx))
}
}
```
The JWT comes in every request inside the "Authorization" header, in the form "Bearer <token_here>". So, if no token is provided, we simply pass through to the next middleware.
The JWT comes in every request inside the `Authorization` header, in the form `Bearer <token_here>`. So, if no token is provided, we simply pass through to the next middleware.
We create a parser to parse the token. If it fails to parse, we return `401 Unauthorized`.
Then we extract the claims from the JWT and add the `Subject` (which is the user ID) where it is needed.
### Auth required middleware
#### Guard middleware
```
func authRequired(next http.HandlerFunc) http.HandlerFunc {
func guard(next http.HandlerFunc) http.HandlerFunc {
return withAuth(func(w http.ResponseWriter, r *http.Request) {
_, ok := r.Context().Value(keyAuthUserID).(string)
if !ok {
@ -729,15 +714,13 @@ func authRequired(next http.HandlerFunc) http.HandlerFunc {
next(w, r)
})
}
```
Now, `authRequired` uses `withAuth` and extracts the authenticated user's ID from the request context. If that fails, it returns `401 Unauthorized`; otherwise, it continues to the next handler.
Now, `guard` uses `withAuth` and extracts the authenticated user's ID from the request context. If that fails, it returns `401 Unauthorized`; otherwise, it continues to the next handler.
### Get auth user
#### Get auth user
**Inside** the `getAuthUser` handler:
Inside the `getAuthUser` handler:
```
ctx := r.Context()
@ -756,9 +739,9 @@ respondJSON(w, user, http.StatusOK)
```
First, we extract the authenticated user's ID from the request context and use that ID to fetch the user. If nothing is found, we send a `418 I'm a teapot`, or an internal error otherwise. Finally, we respond with that user 😊
First, we extract the authenticated user's ID from the request context and use that ID to fetch the user. If nothing is found, we send a `418 I'm a teapot`, or an internal error otherwise. Finally, we respond with that user.
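Only the first and last lines of that handler appear in this diff; a sketch of the full body matching the description above (the exact error handling in the repository may differ) is:
```
ctx := r.Context()
authUserID, _ := ctx.Value(keyAuthUserID).(string)

user, err := fetchUser(ctx, authUserID)
if err == sql.ErrNoRows {
	// No user for that ID: answer with the teapot status mentioned above.
	respondJSON(w, http.StatusText(http.StatusTeapot), http.StatusTeapot)
	return
} else if err != nil {
	respondInternalError(w, fmt.Errorf("could not query auth user: %v", err))
	return
}

respondJSON(w, user, http.StatusOK)
```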
### Fetch user function
#### Fetch user function
Below is the `fetchUser` function.
@ -783,15 +766,15 @@ func fetchUser(ctx context.Context, id string) (User, error) {
If you have any questions, leave a comment on my [GitHub repo][11] or submit PRs 👍
Later, I will write a client for this API as the second part of this post.
Later, I will write a client for this API as the [second part][13] of this post.
--------------------------------------------------------------------------------
via: https://nicolasparada.netlify.com/posts/passwordless-auth-server/
Author: [Nicolás Parada][a]
Author: [Nicolás Parada][a]
Translator: [qhwdw](https://github.com/qhwdw)
Proofreader: [校对者ID](https://github.com/校对者ID)
Proofreader: [wxy](https://github.com/wxy)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
@ -808,3 +791,4 @@ via: https://nicolasparada.netlify.com/posts/passwordless-auth-server/
[10]:https://developer.mozilla.org/en-US/docs/Web/HTML/Element/iframe#attr-sandbox
[11]:https://github.com/nicolasparada/go-passwordless-demo
[12]:https://twitter.com/intent/retweet?tweet_id=986602458716803074
[13]:https://nicolasparada.netlify.com/posts/passwordless-auth-client/

View File

@ -1,3 +1,5 @@
translating---geekpi
How To Run A Command For A Specific Time In Linux
======
![](https://www.ostechnix.com/wp-content/uploads/2018/02/Run-A-Command-For-A-Specific-Time-In-Linux-1-720x340.png)

View File

@ -0,0 +1,338 @@
Passwordless Auth: Client
======
Time to continue with the [passwordless auth][1] posts. Previously, we wrote an HTTP service in Go that provided a passwordless authentication API. Now, we are gonna code a JavaScript client for it.
We'll go with a single page application (SPA) using the technique I showed [here][2]. Read it first if you haven't yet.
For the root URL (`/`) we'll show two different pages depending on the auth state: a page with an access form or a page greeting the authenticated user. Another page is for the auth callback redirect.
### Serving
Ill serve the client with the same Go server, so lets add some routes to the previous `main.go`:
```
router.Handle("GET", "/js/", http.FileServer(http.Dir("static")))
router.HandleFunc("GET", "/...", serveFile("static/index.html"))
```
This serves files under `static/js`, and `static/index.html` is served for everything else.
You can use your own server apart, but youll have to enable [CORS][3] on the server.
### HTML
Lets see that `static/index.html`.
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Passwordless Demo</title>
<link rel="shortcut icon" href="data:,">
<script src="/js/main.js" type="module"></script>
</head>
<body></body>
</html>
```
A single page application leaves all the rendering to JavaScript, so we have an empty body and a `main.js` file.
I'll use the Router from the [last post][2].
### Rendering
Now, create a `static/js/main.js` file with the following content:
```
import Router from 'https://unpkg.com/@nicolasparada/router'
import { isAuthenticated } from './auth.js'
const router = new Router()
router.handle('/', guard(view('home')))
router.handle('/callback', view('callback'))
router.handle(/^\//, view('not-found'))
router.install(async resultPromise => {
document.body.innerHTML = ''
document.body.appendChild(await resultPromise)
})
function view(name) {
return (...args) => import(`/js/pages/${name}-page.js`)
.then(m => m.default(...args))
}
function guard(fn1, fn2 = view('welcome')) {
return (...args) => isAuthenticated()
? fn1(...args)
: fn2(...args)
}
```
Differing from the last post, we implement an `isAuthenticated()` function and a `guard()` function that uses it to render one page or another. So when a user visits `/`, it will show either the home page or the welcome page, depending on whether the user is authenticated.
### Auth
Now, lets write that `isAuthenticated()` function. Create a `static/js/auth.js` file with the following content:
```
export function getAuthUser() {
const authUserItem = localStorage.getItem('auth_user')
const expiresAtItem = localStorage.getItem('expires_at')
if (authUserItem !== null && expiresAtItem !== null) {
const expiresAt = new Date(expiresAtItem)
if (!isNaN(expiresAt.valueOf()) && expiresAt > new Date()) {
try {
return JSON.parse(authUserItem)
} catch (_) { }
}
}
return null
}
export function isAuthenticated() {
return localStorage.getItem('jwt') !== null && getAuthUser() !== null
}
```
When someone logs in, we save the JSON web token, its expiration date, and the current authenticated user in `localStorage`. This module makes use of that.
* `getAuthUser()` gets the authenticated user from `localStorage`, making sure the JSON Web Token hasn't expired yet.
* `isAuthenticated()` makes use of the previous function to check that it doesn't return `null`.
### Fetch
Before continuing with the pages, Ill code some HTTP utilities to work with the server API.
Lets create a `static/js/http.js` file with the following content:
```
import { isAuthenticated } from './auth.js'
function get(url, headers) {
return fetch(url, {
headers: Object.assign(getAuthHeader(), headers),
}).then(handleResponse)
}
function post(url, body, headers) {
return fetch(url, {
method: 'POST',
headers: Object.assign(getAuthHeader(), { 'content-type': 'application/json' }, headers),
body: JSON.stringify(body),
}).then(handleResponse)
}
function getAuthHeader() {
return isAuthenticated()
? { authorization: `Bearer ${localStorage.getItem('jwt')}` }
: {}
}
export async function handleResponse(res) {
const body = await res.clone().json().catch(() => res.text())
const response = {
url: res.url,
statusCode: res.status,
statusText: res.statusText,
headers: res.headers,
body,
}
if (!res.ok) throw Object.assign(
new Error(body.message || body || res.statusText),
response
)
return response
}
export default {
get,
post,
}
```
This module exports `get()` and `post()` functions. They are wrappers around the `fetch` API. Both functions inject an `Authorization: Bearer <token_here>` header to the request when the user is authenticated; that way the server can authenticate us.
### Welcome Page
Lets move to the welcome page. Create a `static/js/pages/welcome-page.js` file with the following content:
```
const template = document.createElement('template')
template.innerHTML = `
<h1>Passwordless Demo</h1>
<h2>Access</h2>
<form id="access-form">
<input type="email" placeholder="Email" autofocus required>
<button type="submit">Send Magic Link</button>
</form>
`
export default function welcomePage() {
const page = template.content.cloneNode(true)
page.getElementById('access-form')
.addEventListener('submit', onAccessFormSubmit)
return page
}
```
This page uses an `HTMLTemplateElement` for the view. It is just a simple form to enter the user's email.
To keep things from getting boring, I'll skip error handling and just log errors to the console.
Now, lets code that `onAccessFormSubmit()` function.
```
import http from '../http.js'
function onAccessFormSubmit(ev) {
ev.preventDefault()
const form = ev.currentTarget
const input = form.querySelector('input')
const email = input.value
sendMagicLink(email).catch(err => {
console.error(err)
if (err.statusCode === 404 && wantToCreateAccount()) {
runCreateUserProgram(email)
}
})
}
function sendMagicLink(email) {
return http.post('/api/passwordless/start', {
email,
redirectUri: location.origin + '/callback',
}).then(() => {
alert('Magic link sent. Go check your email inbox.')
})
}
function wantToCreateAccount() {
return prompt('No user found. Do you want to create an account?')
}
```
It does a `POST` request to `/api/passwordless/start` with the email and redirectUri in the body. If it comes back with a `404 Not Found` status code, we'll create a user.
```
function runCreateUserProgram(email) {
const username = prompt("Enter username")
if (username === null) return
http.post('/api/users', { email, username })
.then(res => res.body)
.then(user => sendMagicLink(user.email))
.catch(console.error)
}
```
The user creation program first asks for a username and does a `POST` request to `/api/users` with the email and username in the body. On success, it sends a magic link to the user just created.
### Callback Page
That was all the functionality for the access form, lets move to the callback page. Create a `static/js/pages/callback-page.js` file with the following content:
```
import http from '../http.js'
const template = document.createElement('template')
template.innerHTML = `
<h1>Authenticating you 👀</h1>
`
export default function callbackPage() {
const page = template.content.cloneNode(true)
const hash = location.hash.substr(1)
const fragment = new URLSearchParams(hash)
for (const [k, v] of fragment.entries()) {
fragment.set(decodeURIComponent(k), decodeURIComponent(v))
}
const jwt = fragment.get('jwt')
const expiresAt = fragment.get('expires_at')
http.get('/api/auth_user', { authorization: `Bearer ${jwt}` })
.then(res => res.body)
.then(authUser => {
localStorage.setItem('jwt', jwt)
localStorage.setItem('auth_user', JSON.stringify(authUser))
localStorage.setItem('expires_at', expiresAt)
location.replace('/')
})
.catch(console.error)
return page
}
```
To recap: when clicking the magic link, we go to `/api/passwordless/verify_redirect`, which redirects us to the redirect URI we passed (`/callback`) with the JWT and expiration date in the URL hash.
The callback page decodes the hash from the URL to extract those parameters, does a `GET` request to `/api/auth_user` with the JWT, and saves all the data to `localStorage`. Finally, it just redirects to the home page.
### Home Page
Create a `static/js/pages/home-page.js` file with the following content:
```
import { getAuthUser } from '../auth.js'
export default function homePage() {
const authUser = getAuthUser()
const template = document.createElement('template')
template.innerHTML = `
<h1>Passwordless Demo</h1>
<p>Welcome back, ${authUser.username} 👋</p>
<button id="logout-button">Logout</button>
`
const page = template.content
page.getElementById('logout-button')
.addEventListener('click', logout)
return page
}
function logout() {
localStorage.clear()
location.reload()
}
```
This page greets the authenticated user and also has a logout button. The `logout()` function just clears `localStorage` and reloads the page.
There it is. I bet you already saw the [demo][4] before. Also, the source code is in the same [repository][5].
👋👋👋
--------------------------------------------------------------------------------
via: https://nicolasparada.netlify.com/posts/passwordless-auth-client/
Author: [Nicolás Parada][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://nicolasparada.netlify.com/
[1]:https://nicolasparada.netlify.com/posts/passwordless-auth-server/
[2]:https://nicolasparada.netlify.com/posts/javascript-client-router/
[3]:https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS
[4]:https://go-passwordless-demo.herokuapp.com/
[5]:https://github.com/nicolasparada/go-passwordless-demo

View File

@ -0,0 +1,156 @@
translating by lujun9972
How to Read Outlook Emails by Python
======
![](https://process.filestackapi.com/cache=expiry:max/resize=width:700/compress/OVArLzhmRzOEQZsvGavF)
When you start email marketing, you need an opt-in email address list. If you are using email client software and can export your contacts from it, you will have a good list.
Here I will explain my code, which writes all the email addresses from your Outlook profile into a text file.
First you should import win32com.client; for that, you need to install pywin32:
```
pip install pywin32
```
We should connect to Outlook by MAPI
```
outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
```
Then we should get all accounts in your outlook profile.
```
accounts= win32com.client.Dispatch("Outlook.Application").Session.Accounts;
```
Then you need a function that gets the emails from a folder; here it is named `emailleri_al`.
```
def emailleri_al(folder):
messages = folder.Items
a=len(messages)
if a>0:
for message2 in messages:
try:
sender = message2.SenderEmailAddress
if sender != "":
print(sender, file=f)
except:
print("Ben hatayım")
print(account.DeliveryStore.DisplayName)
pass
try:
message2.Save
message2.Close(0)
except:
pass
```
You should go through every account, get its inbox folder, and get the emails:
```
for account in accounts:
global inbox
inbox = outlook.Folders(account.DeliveryStore.DisplayName)
print("****Account Name**********************************",file=f)
print(account.DisplayName,file=f)
print(account.DisplayName)
print("***************************************************",file=f)
folders = inbox.Folders
for folder in folders:
print("****Folder Name**********************************", file=f)
print(folder, file=f)
print("*************************************************", file=f)
emailleri_al(folder)
a = len(folder.folders)
if a>0 :
global z
z = outlook.Folders(account.DeliveryStore.DisplayName).Folders(folder.name)
x = z.Folders
for y in x:
emailleri_al(y)
print("****Folder Name**********************************", file=f)
print("..."+y.name,file=f)
print("*************************************************", file=
```
The complete code is as follows:
```
import win32com.client
import win32com
import os
import sys
f = open("testfile.txt","w+")
outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
accounts= win32com.client.Dispatch("Outlook.Application").Session.Accounts;
def emailleri_al(folder):
messages = folder.Items
a=len(messages)
if a>0:
for message2 in messages:
try:
sender = message2.SenderEmailAddress
if sender != "":
print(sender, file=f)
except:
print("Error")
print(account.DeliveryStore.DisplayName)
pass
try:
message2.Save
message2.Close(0)
except:
pass
for account in accounts:
global inbox
inbox = outlook.Folders(account.DeliveryStore.DisplayName)
print("****Account Name**********************************",file=f)
print(account.DisplayName,file=f)
print(account.DisplayName)
print("***************************************************",file=f)
folders = inbox.Folders
for folder in folders:
print("****Folder Name**********************************", file=f)
print(folder, file=f)
print("*************************************************", file=f)
emailleri_al(folder)
a = len(folder.folders)
if a>0 :
global z
z = outlook.Folders(account.DeliveryStore.DisplayName).Folders(folder.name)
x = z.Folders
for y in x:
emailleri_al(y)
print("****Folder Name**********************************", file=f)
print("..."+y.name,file=f)
print("*************************************************", file=f)
print("Finished Succesfully")
```
--------------------------------------------------------------------------------
via: https://www.codementor.io/aliacetrefli/how-to-read-outlook-emails-by-python-jkp2ksk95
Author: [A.A. Cetrefli][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [lujun9972](https://github.com/lujun9972)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://www.codementor.io/aliacetrefli

View File

@ -0,0 +1,70 @@
Python Debugging Tips
======
When it comes to debugging, there are a lot of choices that you can make. It is hard to give generic advice that always works (other than "Have you tried turning it off and back on?").
Here are a few of my favorite Python Debugging tips.
### Make a branch
Trust me on this. Even if you never intend to commit the changes back upstream, you will be glad your experiments are contained within their own branch.
If nothing else, it makes cleanup a lot easier!
### Install pdb++
Seriously. It makes your life easier if you are on the command line.
All that pdb++ does is replace the standard pdb module with 100% PURE AWESOMENESS. Here's what you get when you `pip install pdbpp`:
* A Colorized prompt!
* tab completion! (perfect for poking around!)
* It slices! It dices!
Ok, maybe the last one is a little bit much… But in all seriousness, installing pdb++ is well worth your time.
### Poke around
Sometimes the best approach is to just mess around and see what happens. Put a break point in an “obvious” spot and make sure it gets hit. Pepper the code with `print()` and/or `logging.debug()` statements and see where the code execution goes.
Examine the arguments being passed into your functions. Check the versions of the libraries (if things are getting really desperate).
### Only change one thing at a time
Once you are poking around a bit you are going to get ideas on things you could do. But before you start slinging code, take a step back and think about what you could change, and then only change 1 thing.
Once you've made the change, test and see if you are closer to resolving the issue. If not, change the thing back, and try something else.
Changing only one thing allows you to know what does and doesn't work. Plus, once you do get it working, your new commit is going to be much smaller (because there will be fewer changes).
This is pretty much what one does in the Scientific Process: only change one variable at a time. By allowing yourself to see and measure the results of one change you will save your sanity and arrive at a working solution faster.
### Assume nothing, ask questions
Occasionally a developer (not you of course!) will be in a hurry and whip out some questionable code. When you go through to debug this code you need to stop and make sure you understand what it is trying to accomplish.
Make no assumptions. Just because the code is in the `model.py` file doesn't mean it won't try to render some HTML.
Likewise, double check all of your external connections before you do anything destructive! Going to delete some configuration data? MAKE SURE YOU ARE NOT CONNECTED TO YOUR PRODUCTION SYSTEM.
### Be clever, but not too clever
Sometimes we write code that is so amazingly awesome it is not obvious how it does what it does.
While we might feel smart when we publish that code, more often than not we will wind up feeling dumb later on when the code breaks and we have to remember how it works to figure out why it isn't working.
Keep an eye out for any sections of code that look either overly complicated and long, or extremely short. These could be places where complexity is hiding and causing your bugs.
--------------------------------------------------------------------------------
via: https://pythondebugging.com/articles/python-debugging-tips
Author: [PythonDebugging.com][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://pythondebugging.com

View File

@ -1,190 +0,0 @@
How to load or unload a Linux kernel module
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_development_programming.png?itok=4OM29-82)
This article is excerpted from chapter 15 of [Linux in Action][1], published by Manning.
Linux manages hardware peripherals using kernel modules. Here's how that works.
A running Linux kernel is one of those things you don't want to upset. After all, the kernel is the software that drives everything your computer does. Considering how many details have to be simultaneously managed on a live system, it's better to leave the kernel to do its job with as few distractions as possible. But if it's impossible to make even small changes to the compute environment without rebooting the whole system, then plugging in a new webcam or printer could cause a painful disruption to your workflow. Having to reboot each time you add a device to get the system to recognize it is hardly efficient.
To create an effective balance between the opposing virtues of stability and usability, Linux isolates the kernel, but lets you add specific functionality on the fly through loadable kernel modules (LKMs). As shown in the figure below, you can think of a module as a piece of software that tells the kernel where to find a device and what to do with it. In turn, the kernel makes the device available to users and processes and oversees its operation.
![Kernel modules][3]
Kernel modules act as translators between devices and the Linux kernel.
There's nothing stopping you from writing your own module to support a device exactly the way you'd like it, but why bother? The Linux module library is already so robust that there's usually no need to roll your own. And the vast majority of the time, Linux will automatically load a new device's module without you even knowing it.
Still, there are times when, for some reason, it doesn't happen by itself. (You don't want to leave that hiring manager impatiently waiting for your smiling face to join the video conference job interview for too long.) To help things along, you'll want to understand a bit more about kernel modules and, in particular, how to find the actual module that will run your peripheral and then how to manually activate it.
### Finding kernel modules
By accepted convention, modules are files with a .ko (kernel object) extension that live beneath the `/lib/modules/` directory. Before you navigate all the way down to those files, however, you'll probably have to make a choice. Because you're given the option at boot time of loading one from a list of releases, the specific software needed to support your choice (including the kernel modules) has to exist somewhere. Well, `/lib/modules`/ is one of those somewheres. And that's where you'll find directories filled with the modules for each available Linux kernel release; for example:
```
$ ls /lib/modules
4.4.0-101-generic
4.4.0-103-generic
4.4.0-104-generic
```
In my case, the active kernel is the version with the highest release number (4.4.0-104-generic), but there's no guarantee that that'll be the same for you (kernels are frequently updated). If you're going to be doing some work with modules that you'd like to use on a live system, you need to be sure you've got the right directory tree.
Good news: there's a reliable trick. Rather than identifying the directory by name and hoping you'll get the right one, use the system variable that always points to the name of the active kernel. You can invoke that variable using `uname -r` (the `-r` specifies the kernel release number from within the system information that would normally be displayed):
```
$ uname -r
4.4.0-104-generic
```
With that information, you can incorporate `uname` into your filesystem references using a process known as command substitution. To navigate to the right directory, for instance, you'd add it to `/lib/modules`. To tell Linux that "uname" isn't a filesystem location, enclose the `uname` part in backticks, like this:
```
$ ls /lib/modules/`uname -r`
build   modules.alias        modules.dep      modules.softdep
initrd  modules.alias.bin    modules.dep.bin  modules.symbols
kernel  modules.builtin      modules.devname  modules.symbols.bin
misc    modules.builtin.bin  modules.order    vdso
```
You'll find most of the modules organized within their subdirectories beneath the `kernel/` directory. Take a few minutes to browse through those directories to get an idea of how things are arranged and what's available. The filenames usually give you a good idea of what you're looking at.
```
$ ls /lib/modules/`uname -r`/kernel
arch  crypto  drivers  fs  kernel  lib  mm
net  sound  ubuntu  virt  zfs
```
That's one way to locate kernel modules; actually, it's the quick and dirty way to go about it. But it's not the only way. If you want to get the complete set, you can list all currently loaded modules, along with some basic information, by using `lsmod`. The first column of this truncated output (there would be far too many to list here) is the module name, followed by the file size and number, and then the names of other modules on which each is dependent:
```
$ lsmod
[...]
vboxdrv          454656  3 vboxnetadp,vboxnetflt,vboxpci
rt2x00usb        24576  1 rt2800usb
rt2800lib        94208  1 rt2800usb
[...]
```
How many are far too many? Well, let's run `lsmod` once again, but this time piping the output to `wc -l` to get a count of the lines:
```
$ lsmod | wc -l
113
```
Those are the loaded modules. How many are available in total? Running `modprobe -c` and counting the lines will give us that number:
```
$ modprobe -c | wc -l
33350
```
There are 33,350 available modules!?! It looks like someone's been working hard over the years to provide us with the software to run our physical devices.
Note: On some systems, you might encounter customized modules that are referenced either with their unique entries in the `/etc/modules` file or as a configuration file saved to `/etc/modules-load.d/`. The odds are that such modules are the product of local development projects, perhaps involving cutting-edge experiments. Either way, it's good to have some idea of what it is you're looking at.
That's how you find modules. Your next job is to figure out how to manually load an inactive module if, for some reason, it didn't happen on its own.
### Manually loading kernel modules
Before you can load a kernel module, logic dictates that you'll have to confirm it exists. And before you can do that, you'll need to know what it's called. Getting that part sometimes requires equal parts magic and luck and some help from the hard work of online documentation authors.
I'll illustrate the process by describing a problem I ran into some time back. One fine day, for a reason that still escapes me, the WiFi interface on a laptop stopped working. Just like that. Perhaps a software update knocked it out. Who knows? I ran `lshw -c network` and was treated to this very strange information:
```
network UNCLAIMED
    AR9485 Wireless Network Adapter
```
Linux recognized the interface (the Atheros AR9485) but listed it as unclaimed. Well, as they say, "When the going gets tough, the tough search the internet." I ran a search for atheros ar9 linux module and, after sifting through pages and pages of five- and even 10-year-old results advising me to either write my own module or just give up, I finally discovered that (with Ubuntu 16.04, at least) a working module existed. Its name is ath9k.
Yes! The battle's as good as won! Adding a module to the kernel is a lot easier than it sounds. To double check that it's available, you can run `find` against the module's directory tree, specify `-type f` to tell Linux you're looking for a file, and then add the string `ath9k` along with a glob asterisk to include all filenames that start with your string:
```
$ find /lib/modules/$(uname -r) -type f -name ath9k*
/lib/modules/4.4.0-97-generic/kernel/drivers/net/wireless/ath/ath9k/ath9k_common.ko
/lib/modules/4.4.0-97-generic/kernel/drivers/net/wireless/ath/ath9k/ath9k.ko
/lib/modules/4.4.0-97-generic/kernel/drivers/net/wireless/ath/ath9k/ath9k_htc.ko
/lib/modules/4.4.0-97-generic/kernel/drivers/net/wireless/ath/ath9k/ath9k_hw.ko
```
Just one more step, load the module:
```
# modprobe ath9k
```
That's it. No reboots. No fuss.
Here's one more example to show you how to work with active modules that have become corrupted. There was a time when using my Logitech webcam with a particular piece of software would make the camera inaccessible to any other programs until the next system boot. Sometimes I needed to open the camera in a different application but didn't have the time to shut down and start up again. (I run a lot of applications, and getting them all in place after booting takes some time.)
Because this module is presumably active, using `lsmod` to search for the word video should give me a hint about the name of the relevant module. In fact, it's better than a hint: The only module described with the word video is uvcvideo (as you can see in the following):
```
$ lsmod | grep video
uvcvideo               90112  0
videobuf2_vmalloc      16384  1 uvcvideo
videobuf2_v4l2         28672  1 uvcvideo
videobuf2_core         36864  2 uvcvideo,videobuf2_v4l2
videodev              176128  4 uvcvideo,v4l2_common,videobuf2_core,videobuf2_v4l2
media                  24576  2 uvcvideo,videodev
```
There was probably something I could have controlled for that was causing the crash, and I guess I could have dug a bit deeper to see if I could fix things the right way. But you know how it is; sometimes you don't care about the theory and just want your device working. So I used `rmmod` to kill the uvcvideo module and `modprobe` to start it up again all nice and fresh:
```
# rmmod uvcvideo
# modprobe uvcvideo
```
Again: no reboots. No stubborn blood stains.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/5/how-load-or-unload-linux-kernel-module
Author: [David Clinton][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://opensource.com/users/dbclinton
[1]:https://www.manning.com/books/linux-in-action?a_aid=bootstrap-it&a_bid=4ca15fc9&chan=opensource
[2]:/file/397906
[3]:https://opensource.com/sites/default/files/uploads/kernels.png (Kernel modules)

View File

@ -0,0 +1,84 @@
3 open source alternatives to Adobe Lightroom
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/camera-photography-film.jpg?itok=oe2ixyu6)
You wouldn't be wrong to wonder whether the smartphone, that modern jack-of-all-trades, is taking over photography. While that might be valid in the point-and-shoot camera market, there are a sizeable number of photography professionals and hobbyists who recognize that a camera that fits in your pocket can never replace a high-end DSLR camera and the depth, clarity, and realism of its photos.
All of that power comes with a small price in terms of convenience; like negatives from traditional film cameras, the [raw image][1] files produced by DSLRs must be processed before they can be edited or printed. For this, a digital image processing application is indispensable, and the go-to application has been Adobe Lightroom. But for many reasons—including its expensive, subscription-based pricing model and its proprietary license—there's a lot of interest in open source and other alternatives.
Lightroom has two main functions: processing raw image files and digital asset management (DAM)—organizing images with tags, ratings, and other metadata to make it easier to keep track of them.
In this article, we'll look at three open source image processing applications: Darktable, LightZone, and RawTherapee. All of them have DAM capabilities, but none has Lightroom's machine learning-based image categorization and tagging features. If you're looking for more information about open source DAM software, check out Terry Hancock's article "[Digital asset management for an open movie project][2]," where he shares his research on software to organize multimedia files for his [_Lunatics!_][3] open movie project.
### Darktable
![Darktable][4]
Like the other applications on our list, [darktable][5] processes raw images into usable file formats—it exports into JPEG, PNG, TIFF, PPM, PFM, and EXR, and it also supports Google and Facebook web albums, Flickr uploads, email attachments, and web gallery creation.
Its 61 image operation modules allow you to adjust contrast, tone, exposure, color, noise, etc.; add watermarks; crop and rotate; and much more. As with the other applications described in this article, those edits are "non-destructive"—that is, your original raw image is preserved no matter how many tweaks and modifications you make.
Darktable imports raw images from more than 400 cameras plus JPEG, CR2, DNG, OpenEXR, and PFM; images are managed in a database so you can filter and search using metadata including tags, ratings, and color. It's also available in 21 languages and is supported on Linux, MacOS, BSD, Solaris 11/GNOME, and Windows. (The [Windows port][6] is new, and darktable warns it may have "rough edges or missing functionality" compared to other versions.)
Darktable is licensed under [GPLv3][7]; you can learn more by perusing its [features][8], viewing the [user manual][9], or accessing its [source code][10] on GitHub.
### LightZone
![LightZone's tool stack][11]
As a non-destructive raw image processing tool, [LightZone][12] is similar to the other two applications on this list: it's cross-platform, operating on Windows, MacOS, and Linux, and it supports JPG and TIFF images in addition to raw. But it's also unique in several ways.
For one thing, it started out in 2005 as a proprietary image processing tool and later became an open source project under a BSD license. Also, before you can download the application, you must register for a free account; this is so the LightZone development community can track downloads and build the community. (Approval is quick and automated, so it's not a large barrier.)
Another difference is that image modifications are done using stackable tools, rather than filters (like most image-editing applications); tool stacks can be rearranged or removed, as well as saved and copied to a batch of images. You can also edit certain parts of an image using a vector-based tool or by selecting pixels based on color or brightness.
You can get more information on LightZone by searching its [forums][13] or accessing its [source code][14] on GitHub.
### RawTherapee
![RawTherapee][15]
[RawTherapee][16] is another popular open source ([GPL][17]) raw image processor worth your attention. Like darktable and LightZone, it is cross-platform (Windows, MacOS, and Linux) and implements edits in a non-destructive fashion, so you maintain access to your original raw image file no matter what filters or changes you make.
RawTherapee uses a panel-based interface, including a history panel to keep track of your changes and revert to a previous point; a snapshot panel that allows you to work with multiple versions of a photo; and scrollable tool panels to easily select a tool without worrying about accidentally using the wrong one. Its tools offer a wide variety of exposure, color, detail, transformation, and demosaicing features.
The application imports raw files from most cameras and is localized to more than 25 languages, making it widely usable. Features like batch processing and [SSE][18] optimizations improve speed and CPU performance.
RawTherapee offers many other [features][19]; check out its [documentation][20] and [source code][21] for details.
Do you use another open source raw image processing tool in your photography? Do you have any related tips or suggestions for other photographers? If so, please share your recommendations in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/alternatives/adobe-lightroom
Author: [Opensource.com][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://opensource.com
[1]:https://en.wikipedia.org/wiki/Raw_image_format
[2]:https://opensource.com/article/18/3/movie-open-source-software
[3]:http://lunatics.tv/
[4]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/raw-image-processors_darkroom1.jpg?itok=0fjk37tC (Darktable)
[5]:http://www.darktable.org/
[6]:https://www.darktable.org/about/faq/#faq-windows
[7]:https://github.com/darktable-org/darktable/blob/master/LICENSE
[8]:https://www.darktable.org/about/features/
[9]:https://www.darktable.org/resources/
[10]:https://github.com/darktable-org/darktable
[11]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/raw-image-processors_lightzone1tookstack.jpg?itok=1e3s85CZ (LightZone's tool stack)
[12]:http://www.lightzoneproject.org/
[13]:http://www.lightzoneproject.org/Forum
[14]:https://github.com/ktgw0316/LightZone
[15]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/raw-image-processors_rawtherapee.jpg?itok=meiuLxPw (RawTherapee)
[16]:http://rawtherapee.com/
[17]:https://github.com/Beep6581/RawTherapee/blob/dev/LICENSE.txt
[18]:https://en.wikipedia.org/wiki/Streaming_SIMD_Extensions
[19]:http://rawpedia.rawtherapee.com/Features
[20]:http://rawpedia.rawtherapee.com/Main_Page
[21]:https://github.com/Beep6581/RawTherapee

View File

@ -0,0 +1,185 @@
How to partition a disk in Linux
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus-storage.png?itok=95-zvHYl)
Creating and deleting partitions in Linux is a regular practice because storage devices (such as hard drives and USB drives) must be structured in some way before they can be used. In most cases, large storage devices are divided into separate sections called partitions. Partitioning also allows you to divide your hard drive into isolated sections, where each section behaves as its own hard drive. Partitioning is particularly useful if you run multiple operating systems.
There are lots of powerful tools for creating, removing, and otherwise manipulating disk partitions in Linux. In this article, I'll explain how to use the `parted` command, which is particularly useful with large disk devices and many disk partitions. Differences between `parted` and the more common `fdisk` and `cfdisk` commands include:
* **GPT format:** The `parted` command can create a Globally Unique Identifiers Partition Table ([GPT][1]), while `fdisk` and `cfdisk` are limited to DOS partition tables.
* **Larger disks:** A DOS partition table can format up to 2TB of disk space, although up to 16TB is possible in some cases. However, a GPT partition table can address up to 8ZiB of space.
* **More partitions:** Using primary and extended partitions, DOS partition tables allow only 16 partitions. With GPT, you get up to 128 partitions by default and can choose to have many more.
* **Reliability:** Only one copy of the partition table is stored in a DOS partition. GPT keeps two copies of the partition table (at the beginning and the end of the disk). The GPT also uses a [CRC][2] checksum to check the partition table integrity, which is not done with DOS partitions.
With today's larger disks and the need for more flexibility in working with them, using `parted` to work with disk partitions is recommended. Most of the time, disk partition tables are created as part of the operating system installation process. Direct use of the `parted` command is most useful when adding a storage device to an existing system.
### Give 'parted' a try
The following explains the process of partitioning a storage device with the `parted` command. To try these steps, I strongly recommend using a brand new storage device or one where you don't mind wiping out the contents.
**1\. List the partitions:** Use `parted -l` to identify the storage device you want to partition. Typically, the first hard disk (`/dev/sda` or `/dev/vda`) will contain the operating system, so look for another disk to find the one you want (e.g., `/dev/sdb`, `/dev/sdc`, `/dev/vdb`, `/dev/vdc`, etc.).
```
$ sudo parted -l
[sudo] password for daniel:
Model: ATA RevuAhn_850X1TU5 (scsi)
Disk /dev/vdc: 512GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number  Start   End    Size   Type     File system  Flags
 1      1049kB  525MB  524MB  primary  ext4         boot
 2      525MB   512GB  512GB  primary               lvm
```
**2\. Open the storage device:** Use `parted` to begin working with the selected storage device. In this example, the device is the third disk on a virtual system (`/dev/vdc`). It is important to indicate the specific device you want to use. If you just type `parted` with no device name, it will randomly select a storage device to modify.
```
$ sudo parted /dev/vdc
GNU Parted 3.2
Using /dev/vdc
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted)
```
**3\. Set the partition table:** Set the partition table type to GPT, then type "Yes" to accept it.
```
(parted) mklabel gpt
Warning: the existing disk label on /dev/vdc will be destroyed
and all data on this disk will be lost. Do you want to continue?
Yes/No? Yes
```
The `mklabel` and `mktable` commands are used for the same purpose (making a partition table on a storage device). The supported partition tables are: aix, amiga, bsd, dvh, gpt, mac, ms-dos, pc98, sun, and loop. Remember `mklabel` will not make a partition, rather it will make a partition table.
**4\. Review the partition table:** Show information about the storage device.
```
(parted) print
Model: Virtio Block Device (virtblk)
Disk /dev/vdc: 1396MB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
```
**5\. Get help:** To find out how to make a new partition, type: `(parted) help mkpart`.
```
(parted) help mkpart
  mkpart PART-TYPE [FS-TYPE] START END     make a partition
        PART-TYPE is one of: primary, logical, extended
        FS-TYPE is one of: btrfs, nilfs2, ext4, ext3, ext2, fat32, fat16, hfsx, hfs+, hfs, jfs, swsusp,
        linux-swap(v1), linux-swap(v0), ntfs, reiserfs, hp-ufs, sun-ufs, xfs, apfs2, apfs1, asfs, amufs5,
        amufs4, amufs3, amufs2, amufs1, amufs0, amufs, affs7, affs6, affs5, affs4, affs3, affs2, affs1,
        affs0, linux-swap, linux-swap(new), linux-swap(old)
        START and END are disk locations, such as 4GB or 10%.  Negative values count from the end of the
        disk.  For example, -1s specifies exactly the last sector.
       
        'mkpart' makes a partition without creating a new file system on the partition.  FS-TYPE may be
        specified to set an appropriate partition ID.
```
**6\. Make a partition:** To make a new partition (in this example, 1,396MB on partition 0), type the following:
```
(parted) mkpart primary 0 1396MB
Warning: The resulting partition is not properly aligned for best performance
Ignore/Cancel? I
(parted) print
Model: Virtio Block Device (virtblk)
Disk /dev/vdc: 1396MB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start   End     Size    File system Name Flags
1      17.4kB  1396MB  1396MB  primary
```
Note that providing a filesystem type (fstype) here will not create an ext4 filesystem on `/dev/vdc1`; it only sets an appropriate partition ID. A DOS partition table's partition types are primary, logical, and extended. In a GPT partition table, the partition type is used as the partition name. Providing a partition name under GPT is a must; in the above example, primary is the name, not the partition type.
**7\. Save and quit:** Changes are automatically saved when you quit `parted`. To quit, type the following:
```
(parted) quit
Information: You may need to update /etc/fstab.
$
```
### Words to the wise
Make sure to identify the correct disk before you begin changing its partition table when you add a new storage device. If you mistakenly change the disk partition that contains your computer's operating system, you could make your system unbootable.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/6/how-partition-disk-linux
Author: [Daniel Oh][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://opensource.com/users/daniel-oh
[1]:https://en.wikipedia.org/wiki/GUID_Partition_Table
[2]:https://en.wikipedia.org/wiki/Cyclic_redundancy_check

View File

@ -0,0 +1,133 @@
Turn Your Raspberry Pi into a Tor Relay Node
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/tor-onion-router.jpg?itok=6WUl0ElH)
If you're anything like me, you probably got yourself a first- or second-generation Raspberry Pi board when they first came out, played with it for a while, but then shelved it and mostly forgot about it. After all, unless you're a robotics enthusiast, you probably don't have that much use for a computer with a pretty slow processor and 256 megabytes of RAM. This is not to say that there aren't cool things you can do with one of these, but between work and other commitments, I just never seem to find the right time for some good old nerding out.
However, if you would like to put it to good use without sacrificing too much of your time or resources, you can turn your old Raspberry Pi into a perfectly functioning Tor relay node.
### What is a Tor Relay node
You have probably heard about the [Tor project][1] before, but just in case you haven't, here's a very quick summary. The name “Tor” stands for “The Onion Router” and it is a technology created to combat online tracking and other privacy violations.
Everything you do on the Internet leaves a set of digital footprints in every piece of equipment that your IP packets traverse: all of the switches, routers, load balancers and destination websites log the IP address from which your session originated and the IP address of the internet resource you are accessing (and often its hostname, [even when using HTTPS][2]). If you're browsing from home, then your IP can be directly mapped to your household. If you're using a VPN service ([as you should be][3]), then your IP can be mapped to your VPN provider, and then they are the ones who can map it to your household. In any case, odds are that someone somewhere is assembling an online profile on you based on the sites you visit and how much time you spend on each of them. Such profiles are then sold, aggregated with matching profiles collected from other services, and then monetized by ad networks. At least, that's the optimist's view of how that data is used -- I'm sure you can think of many examples of how your online usage profiles can be used against you in much more nefarious ways.
The Tor project attempts to provide a solution to this problem by making it impossible (or, at least, unreasonably difficult) to trace the endpoints of your IP session. Tor achieves this by bouncing your connection through a chain of anonymizing relays, consisting of an entry node, relay node, and exit node:
1. The **entry node** only knows your IP address and the IP address of the relay node, but not the final destination of the request;
2. The **relay node** only knows the IP address of the entry node and the IP address of the exit node, and neither the origin nor the final destination of the request;
3. The **exit node** only knows the IP address of the relay node and the final destination of the request; it is also the only node that can decrypt the traffic before sending it over to its final destination.
Relay nodes play a crucial role in this exchange because they create a cryptographic barrier between the source of the request and the destination. Even if exit nodes are controlled by adversaries intent on stealing your data, they will not be able to know the source of the request without controlling the entire Tor relay chain.
As long as there are plenty of relay nodes, your privacy when using the Tor network remains protected -- which is why I heartily recommend that you set up and run a relay node if you have some home bandwidth to spare.
#### Things to keep in mind regarding Tor relays
A Tor relay node only receives encrypted traffic and sends encrypted traffic -- it never accesses any other sites or resources online, so you do not need to worry that someone will browse any worrisome sites directly from your home IP address. Having said that, if you reside in a jurisdiction where offering anonymity-enhancing services is against the law, then, obviously, do not operate your own Tor relay. You may also want to check if operating a Tor relay is against the terms and conditions of your internet access provider.
### What you will need
* A Raspberry Pi (any model/generation) with some kind of enclosure
* An SD card with [Raspbian Stretch Lite][4]
* An ethernet cable
* A micro-USB cable for power
* A keyboard and an HDMI-capable monitor (to use during the setup)
This guide assumes that you are setting this up on your home connection, behind a generic cable or ADSL modem router that performs NAT (and it almost certainly does). Most such routers have a USB port you can use to power up your Raspberry Pi, and if you're only using the wifi functionality of the router, it should have a free ethernet port for you to plug into. However, before we get to the point where we can set-and-forget your Raspberry Pi, we'll need to set it up as a Tor relay node, for which you'll need a keyboard and a monitor.
### The bootstrap script
I've adapted a popular Tor relay node bootstrap script for use with Raspbian Stretch -- you can find it in my GitHub repository here: <https://github.com/mricon/tor-relay-bootstrap-rpi>. Once you have booted up your Raspberry Pi and logged in with the default “pi” user, do the following:
```
sudo apt-get install -y git
git clone https://github.com/mricon/tor-relay-bootstrap-rpi
cd tor-relay-bootstrap-rpi
sudo ./bootstrap.sh
```
Here is what the script will do:
1. Install the latest OS updates to make sure your Pi is fully patched
2. Configure your system for automated unattended updates, so you automatically receive security patches when they become available
3. Install Tor software
4. Tell your NAT router to forward the necessary ports to reach your relay (the ports well use are 443 and 8080, since they are least likely to be filtered by your internet provider)
Once the script is done, you'll need to configure the torrc file -- but first, decide how much bandwidth you'll want to donate to Tor traffic. To find out, type “[Speed Test][5]” into Google and click the “Run Speed Test” button. You can disregard the “Download speed” result, as your Tor relay can only operate as fast as your maximum upload bandwidth.
Therefore, take the “Mbps upload” number, divide by 8 and multiply by 1024 to find out the bandwidth speed in Kilobytes per second. E.g. if you got 21.5 Mbps for your upload speed, then that number is:
```
21.5 Mbps / 8 * 1024 = 2752 KBytes per second
```
You'll want to limit your relay bandwidth to about half that amount, and allow bursting to about three-quarters of it. Once decided, open /etc/tor/torrc using your favourite editor and tweak the bandwidth settings.
```
RelayBandwidthRate 1300 KBytes
RelayBandwidthBurst 2400 KBytes
```
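If you would rather let the shell do the arithmetic, here is a quick sketch. It uses the example 21.5 Mbps figure from above and yields slightly different numbers than the hand-rounded values in the sample torrc:
```
UPLOAD_MBPS=21.5
# convert Mbps to KBytes/s, then take ~50% for the rate and ~75% for the burst
awk -v up="$UPLOAD_MBPS" 'BEGIN {
    kbytes = up / 8 * 1024
    printf "RelayBandwidthRate %d KBytes\n",  kbytes * 0.5
    printf "RelayBandwidthBurst %d KBytes\n", kbytes * 0.75
}'
```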
Of course, if you're feeling more generous, then feel free to put in higher numbers, though you don't want to max out your outgoing bandwidth -- it will noticeably impact your day-to-day usage if these numbers are set too high.
While you have that file open, you should set two more things. First, the Nickname -- just for your own recordkeeping, and second the ContactInfo line, which should list a single email address. Since your relay will be running unattended, you should use an email address that you regularly check -- you will receive an alert from the “Tor Weather” service if your relay goes offline for longer than 48 hours.
```
Nickname myrpirelay
ContactInfo you@example.com
```
Save the file and reboot the system to start the Tor relay.
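If you would rather not reboot, restarting the Tor service should have the same effect; the service name below is the one used by the standard Debian/Raspbian tor package, which is an assumption about how the bootstrap script installed it:
```
sudo systemctl restart tor   # assumes the Debian-packaged tor.service
```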
### Testing to make sure Tor traffic is flowing
If you would like to make sure that the relay is functioning, you can run the “arm” tool:
```
sudo -u debian-tor arm
```
It will take a while to start, especially on older-generation boards, but eventually it will show you a bar chart of incoming and outgoing traffic (or error messages that will help you troubleshoot your setup).
Once you are convinced that everything is functioning, you can unplug the keyboard and the monitor and relocate the Raspberry Pi into the basement where it will quietly sit and shuffle encrypted bits around. Congratulations, you've helped improve privacy and combat malicious tracking online!
Learn more about Linux through the free ["Introduction to Linux" ][6] course from The Linux Foundation and edX.
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/intro-to-linux/2018/6/turn-your-raspberry-pi-tor-relay-node
作者:[Konstantin Ryabitsev][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/mricon
[1]:https://www.torproject.org/
[2]:https://en.wikipedia.org/wiki/Server_Name_Indication#Security_implications
[3]:https://www.linux.com/blog/2017/10/tips-secure-your-network-wake-krack
[4]:https://www.raspberrypi.org/downloads/raspbian/
[5]:https://www.google.com/search?q=speed+test
[6]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

View File

@ -0,0 +1,73 @@
7 open source tools to make literature reviews easy
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_EDU_DigitalLiteracy_520x292.png?itok=ktHMrse6)
A good literature review is critical for academic research in any field, whether it is for a research article, a critical review for coursework, or a dissertation. In a recent article, I presented detailed steps for doing [a literature review using open source software][1].
The following is a brief summary of seven free and open source software tools described in that article that will make your next literature review much easier.
### 1\. GNU Linux
Most literature reviews are accomplished by graduate students working in research labs in universities. For absurd reasons, graduate students often have the worst computers on campus. They are often old, slow, and clunky Windows machines that have been discarded and recycled from the undergraduate computer labs. Installing a [flavor of GNU Linux][2] will breathe new life into these outdated PCs. There are more than [100 distributions][3], all of which can be downloaded and installed for free on computers. Most popular Linux distributions come with a "try-before-you-buy" feature. For example, with Ubuntu you can make a [bootable USB stick][4] that allows you to test-run the Ubuntu desktop experience without interfering in any way with your PC configuration. If you like the experience, you can use the stick to install Ubuntu on your machine permanently.
### 2\. Firefox
Linux distributions generally come with a free web browser, and the most popular is [Firefox][5]. Two Firefox plugins that are particularly useful for literature reviews are Unpaywall and Zotero. Keep reading to learn why.
### 3\. Unpaywall
Often one of the hardest parts of a literature review is gaining access to the papers you want to read for your review. The unintended consequence of copyright restrictions and paywalls is that they have narrowed access to the peer-reviewed literature to the point that even [Harvard University is challenged][6] to pay for it. Fortunately, there are a lot of open access articles: about a third of the literature is free (and the percentage is growing). [Unpaywall][7] is a Firefox plugin that enables researchers to click a green tab on the side of the browser and skip the paywall on millions of peer-reviewed journal articles. This makes finding accessible copies of articles much faster than searching each database individually. Unpaywall is fast, free, and legal, as it accesses many of the open access sites that I covered in my paper on using [open source in lit reviews][8].
### 4\. Zotero
Formatting references is the most tedious of academic tasks. [Zotero][9] can save you from ever doing it again. It operates as an Android app, desktop program, and a Firefox plugin (which I recommend). It is a free, easy-to-use tool to help you collect, organize, cite, and share research. It replaces the functionality of proprietary packages such as RefWorks, Endnote, and Papers for zero cost. Zotero can auto-add bibliographic information directly from websites. In addition, it can scrape bibliographic data from PDF files. Notes can be easily added on each reference. Finally, and most importantly, it can import and export the bibliography databases in all publishers' various formats. With this feature, you can export bibliographic information to paste into a document editor for a paper or thesis—or even to a wiki for dynamic collaborative literature reviews (see tool #7 for more on the value of wikis in lit reviews).
### 5\. LibreOffice
Your thesis or academic article can be written conventionally with the free office suite [LibreOffice][10], which operates similarly to Microsoft's Office products but respects your freedom. Zotero has a word processor plugin to integrate directly with LibreOffice. LibreOffice is more than adequate for the vast majority of academic paper writing.
### 6\. LaTeX
If LibreOffice is not enough for your layout needs, you can take your paper writing one step further with [LaTeX][11], a high-quality typesetting system specifically designed for producing technical and scientific documentation. LaTeX is particularly useful if your writing has a lot of equations in it. Also, Zotero libraries can be directly exported to BibTeX files for use with LaTeX.
### 7\. MediaWiki
If you want to leverage the open source way to get help with your literature review, you can facilitate a [dynamic collaborative literature review][12]. A wiki is a website that allows anyone to add, delete, or revise content directly using a web browser. [MediaWiki][13] is free software that enables you to set up your own wikis.
Researchers can (in decreasing order of complexity): 1) set up their own research group wiki with MediaWiki, 2) utilize wikis already established at their universities (e.g., [Aalto University][14]), or 3) use wikis dedicated to areas that they research. For example, several university research groups that focus on sustainability (including [mine][15]) use [Appropedia][16], which is set up for collaborative solutions on sustainability, appropriate technology, poverty reduction, and permaculture.
Using a wiki makes it easy for anyone in the group to keep track of the status of and update literature reviews (both current and older or from other researchers). It also enables multiple members of the group to easily collaborate on a literature review asynchronously. Most importantly, it enables people outside the research group to help make a literature review more complete, accurate, and up-to-date.
### Wrapping up
Free and open source software can cover the entire lit review toolchain, meaning there's no need for anyone to use proprietary solutions. Do you use other libre tools for making literature reviews or other academic work easier? Please let us know your favorites in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/6/open-source-literature-review-tools
作者:[Joshua Pearce][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/jmpearce
[1]:http://pareonline.net/getvn.asp?v=23&n=8
[2]:https://opensource.com/article/18/1/new-linux-computers-classroom
[3]:https://distrowatch.com/
[4]:https://tutorials.ubuntu.com/tutorial/tutorial-create-a-usb-stick-on-windows#0
[5]:https://www.mozilla.org/en-US/firefox/new/
[6]:https://www.theguardian.com/science/2012/apr/24/harvard-university-journal-publishers-prices
[7]:https://unpaywall.org/
[8]:http://www.academia.edu/36709736/How_to_Perform_a_Literature_Review_with_Free_and_Open_Source_Software
[9]:https://www.zotero.org/
[10]:https://www.libreoffice.org/
[11]:https://www.latex-project.org/
[12]:https://www.academia.edu/1861756/Open_Source_Research_in_Sustainability
[13]:https://www.mediawiki.org/wiki/MediaWiki
[14]:http://wiki.aalto.fi
[15]:http://www.appropedia.org/Category:MOST
[16]:http://www.appropedia.org/Welcome_to_Appropedia

View File

@ -0,0 +1,108 @@
What version of Linux am I running?
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC)
The question "what version of Linux" can mean two different things. Strictly speaking, Linux is the kernel, so the question can refer specifically to the kernel's version number, or "Linux" can be used more colloquially to refer to the entire distribution, as in Fedora Linux or Ubuntu Linux.
Both are important, and you may need to know one or both answers to fix a problem with a system. For example, knowing the installed kernel version might help diagnose an issue with proprietary drivers, and identifying what distribution is running will help you quickly figure out if you should be using `apt`, `dnf`, `yum`, or some other command to install packages.
The following will help you find out what version of the Linux kernel and/or what Linux distribution is running on a system.
### How to find the Linux kernel version
To find out what version of the Linux kernel is running, run the following command:
```
uname -srm
```
Alternatively, the command can be run using the longer, more descriptive versions of the various flags:
```
uname --kernel-name --kernel-release --machine
```
Either way, the output should look similar to the following:
```
Linux 4.16.10-300.fc28.x86_64 x86_64
```
This gives you (in order): the kernel name, the version of the kernel, and the type of hardware the kernel is running on. In this case, the kernel is Linux version 4.16.10-300.fc28.x86_64 running on an x86_64 system.
More information about the `uname` command can be found by running `man uname`.
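If you only need the release string in a script, for example to build a path under `/lib/modules`, command substitution keeps it to a single line (the variable name here is just for illustration):
```
MODULES_DIR="/lib/modules/$(uname -r)"
ls "$MODULES_DIR"
```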
### How to find the Linux distribution
There are several ways to figure out what distribution is running on a system, but the quickest way is to check the contents of the `/etc/os-release` file. This file provides information about a distribution, including, but not limited to, the name of the distribution and its version number. The os-release file in some distributions contains more details than in others, but any distribution that includes an os-release file should provide the distribution's name and version.
To view the contents of the os-release file, run the following command:
```
cat /etc/os-release
```
On Fedora 28, the output looks like this:
```
NAME=Fedora
VERSION="28 (Workstation Edition)"
ID=fedora
VERSION_ID=28
PLATFORM_ID="platform:f28"
PRETTY_NAME="Fedora 28 (Workstation Edition)"
ANSI_COLOR="0;34"
CPE_NAME="cpe:/o:fedoraproject:fedora:28"
HOME_URL="https://fedoraproject.org/"
SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=28
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=28
PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy"
VARIANT="Workstation Edition"
VARIANT_ID=workstation
```
As the example above shows, Fedora's os-release file provides the name of the distribution and the version, but it also identifies the installed variant (the "Workstation Edition"). If we ran the same command on Fedora 28 Server Edition, the contents of the os-release file would reflect that on the `VARIANT` and `VARIANT_ID` lines.
Sometimes it is useful to know if a distribution is like another, so the os-release file can contain an `ID_LIKE` line that identifies distributions the running distribution is based on or is similar to. For example, Red Hat Enterprise Linux's os-release file includes an `ID_LIKE` line stating that RHEL is like Fedora, and CentOS's os-release file states that CentOS is like RHEL and Fedora. The `ID_LIKE` line is very helpful if you are working with a distribution that is based on another distribution and need to find instructions to solve a problem.
CentOS's os-release file makes it clear that it is like RHEL, so documentation and questions and answers in various forums about RHEL should (in most cases) apply to CentOS. CentOS is designed to be a near clone of RHEL, so it is more closely compatible with its "like" distributions than some of the looser relationships you might find listed in an `ID_LIKE` field, but checking for answers about a "like" distribution is always a good idea if you cannot find the information you are seeking for the running distribution.
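If you only want the fields discussed here rather than the whole file, a quick `grep` does the job. The sample output below is roughly what a CentOS 7 machine reports; your values will differ:
```
$ grep -E '^(ID|ID_LIKE|VERSION_ID)=' /etc/os-release
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
```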
More information about the os-release file can be found by running `man os-release`.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/6/linux-version
作者:[Joshua Allen Holm][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/holmja

View File

@ -1,100 +0,0 @@
底层 Linux 容器运行时的发展史
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/running-containers-two-ship-container-beach.png?itok=wr4zJC6p)
在 Red Hat我们乐意这么说“容器就是 LinuxLinux 就是容器”。下面解释一下这种说法。传统的容器是操作系统中的进程,通常具有如下 3 个特性:
### 1\. 资源限制
当你在系统中运行多个容器时,你肯定不希望某个容器独占系统资源,所以我们需要使用资源约束来控制 CPU、内存和网络带宽等资源。Linux 内核提供了 cgroups 特性,可以通过配置控制容器进程的资源使用。
### 2\. 安全性配置
一般而言,你不希望你的容器可以攻击其它容器或甚至攻击的你的主机系统。我们使用了 Linux 内核的若干特性建立安全隔离,相关特性包括 SELinux、seccomp 和 capabilities。
LCTT 译注:从 2.2 版本内核开始Linux 将特权从超级用户中分离,产生了一系列可以单独启用或关闭的 capabilities
### 3\. 虚拟隔离
容器外的任何进程对于容器而言都应该不可见。容器应该使用独立的网络。不同的容器对应的进程都应该可以绑定 80 端口。每个容器的<ruby>内核映像<rt>image</rt></ruby><ruby>根文件系统<rt>rootfs</rt>都应该相互独立。在 Linux 中,我们使用内核 namespaces 特性提供<ruby>虚拟隔离<rt>virtual separation</rt></ruby>
那么,具有安全性配置并且在 cgroup 和 namespace 下运行的进程都可以称为容器。查看一下 Red Hat Enterprise Linux 7 操作系统中的 PID 1 的进程 systemd你会发现 systemd 运行在一个 cgroup 下。
```
# tail -1 /proc/1/cgroup
1:name=systemd:/
```
`ps` 命令让我们看到 systemd 进程具有 SELinux 标签:
```
# ps -eZ | grep systemd
system_u:system_r:init_t:s0             1 ?     00:00:48 systemd
```
以及 capabilities
```
# grep Cap /proc/1/status
...
CapEff: 0000001fffffffff
CapBnd: 0000001fffffffff
CapBnd:    0000003fffffffff
```
最后,查看 `/proc/1/ns` 子目录,你会发现 systemd 运行所在的 namespace。
```
ls -l /proc/1/ns
lrwxrwxrwx. 1 root root 0 Jan 11 11:46 mnt -> mnt:[4026531840]
lrwxrwxrwx. 1 root root 0 Jan 11 11:46 net -> net:[4026532009]
lrwxrwxrwx. 1 root root 0 Jan 11 11:46 pid -> pid:[4026531836]
...
```
如果 PID 1 进程(实际上每个系统进程)具有资源约束、安全性配置和 namespace那么我想说系统上的每一个进程都运行在容器中。
容器运行时工具也不过是修改了资源约束、安全性配置和 namespace然后 Linux 内核运行起进程。容器启动后,容器运行时可以在容器内监控 PID 1 进程,也可以监控容器的标准输入输出,从而进行容器进程的生命周期管理。
### 容器运行时
你可能自言自语道“哦systemd 看起来很像一个容器运行时”。经过若干次关于“为何容器运行时不使用 `systemd-nspawn` 工具启动容器”的邮件讨论后,我认为值得讨论一下容器运行时及其发展史。
[Docker][1] 通常被称为容器运行时,但“容器运行时”是一个被过度使用的词语。当用户提到“容器运行时”,他们其实提到的是为开发人员提供便利的<ruby>上层<rt>high-level</rt></ruby>工具,包括 Docker[CRI-O][2] 和 [RKT][3]。这些工具都是基于 API 的,涉及操作包括从容器仓库拉取容器镜像、配置存储和启动容器等。启动容器通常涉及一个特殊工具,用于配置内核如何运行容器,这类工具也被称为“容器运行时”,下文中我将称其为“底层容器运行时”以作区分。像 Docker、CRI-O 这样的守护进程及形如 [Podman][4]、[Buildah][5] 的命令行工具,似乎更应该被称为“容器管理器”。
早期版本的 Docker 使用 `lxc` 工具集启动容器,该工具出现在 `systemd-nspawn` 之前。Red Hat 最初试图将 `[libvirt][6]` (`libvirt-lxc`) 集成到 Docker 中替代 `lxc` 工具,因为 RHEL 并不支持 `lxc`。`libvirt-lxc` 也没有使用 `systemd-nspawn`,在那时 systemd 团队仅将 `systemd-nspawn` 视为测试工具,不适用于生产环境。
与此同时,包括我 Red Hat 团队部分成员在内的<ruby>上游<rt>upstream</rt></ruby> Docker 开发者,认为应该采用 golang 原生的方式启动容器,而不是调用外部应用。他们的工作促成了 libcontainer 这个 golang 原生库用于启动容器。Red Hat 工程师更看好该库的发展前景,放弃了 `libvirt-lxc`
后来成立 [<ruby>开放容器组织<rt>Open Container Initiative</rt></ruby>][7] (OCI) 的部分原因就是人们希望用其它方式启动容器。传统的基于 namespaces 隔离的容器已经家喻户晓,但人们也有<ruby>虚拟机级别隔离<rt>virtual machine-level isolation</rt></ruby>的需求。Intel 和 [Hyper.sh][8] 正致力于开发基于 KVM 隔离的容器Microsoft 致力于开发基于 Windows 的容器。OCI 希望有一份定义容器的标准规范,因而产生了 [OCI <ruby>运行时规范<rt>Runtime Specification</rt></ruby>][9]。
OCI 运行时规范定义了一个 JSON 文件格式,用于描述要运行的二进制,如何容器化以及容器根文件系统的位置。一些工具用于生成符合标准规范的 JSON 文件,另外的工具用于解析 JSON 文件并在根文件系统上运行容器。Docker 的部分代码被抽取出来构成了 libcontainer 项目,该项目被贡献给 OCI。上游 Docker 工程师及我们自己的工程师创建了一个新的前端工具,用于解析符合 OCI 运行时规范的 JSON 文件,然后与 libcontainer 交互以便启动容器。这个前端工具就是 `[runc][10]`,也被贡献给 OCI。虽然 `runc` 可以解析 OCI JSON 文件,但用户需要自行生成这些文件。此后,`runc` 也成为了最流行的底层容器运行时,基本所有的容器管理工具都支持 `runc`,包括 CRI-ODockerBuildahPodman 和 [Cloud Foundry Garden][11] 等。此后,其它工具的实现也参照 OCI 运行时规范,以便可以运行 OCI 兼容的容器。
[Clear Containers][12] 和 Hyper.sh 的 `runV` 工具都是参照 OCI 运行时规范运行基于 KVM 的容器,二者将其各自工作合并到一个名为 [Kata][12] 的新项目中。在去年Oracle 创建了一个示例版本的 OCI 运行时工具,名为 [RailCar][13],使用 Rust 语言编写。但该 GitHub 项目已经两个月没有更新了故无法判断是否仍在开发。几年前Vincent Batts 试图创建一个名为 `[nspawn-oci][14]` 的工具,用于解析 OCI 运行时规范文件并启动 `systemd-nspawn`;但似乎没有引起大家的注意,而且也不是原生的实现。
如果有开发者希望实现一个原生的 `systemd-nspawn --oci OCI-SPEC.json` 并让 systemd 团队认可和提供支持那么CRI-ODocker 和 Podman 等容器管理工具将可以像使用 `runc` 和 Clear Container/runV ([Kata][15]) 那样使用这个新的底层运行时。(目前我的团队没有人参与这方面的工作。)
总结如下,在 3-4 年前,上游开发者打算编写一个底层的 golong 工具用于启动容器,最终这个工具就是 `runc`。那时开发者使用 C 编写的 `lxc` 工具,在 `runc` 开发后,他们很快转向 `runc`。我很确信,当决定构建 libcontainer 时,他们对 `systemd-nspawn` 或其它非原生(即不使用 golong的运行 namespaces 隔离的容器的方式都不感兴趣。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/1/history-low-level-container-runtimes
作者:[Daniel Walsh][a]
译者:[pinewall](https://github.com/pinewall)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/rhatdan
[1]:https://github.com/docker
[2]:https://github.com/kubernetes-incubator/cri-o
[3]:https://github.com/rkt/rkt
[4]:https://github.com/projectatomic/libpod/tree/master/cmd/podman
[5]:https://github.com/projectatomic/buildah
[6]:https://libvirt.org/
[7]:https://www.opencontainers.org/
[8]:https://www.hyper.sh/
[9]:https://github.com/opencontainers/runtime-spec
[10]:https://github.com/opencontainers/runc
[11]:https://github.com/cloudfoundry/garden
[12]:https://clearlinux.org/containers
[13]:https://github.com/oracle/railcar
[14]:https://github.com/vbatts/nspawn-oci
[15]:https://github.com/kata-containers

View File

@ -0,0 +1,201 @@
# Load/unload a Linux kernel module
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_development_programming.png?itok=4OM29-82)
This article is excerpted from chapter 15 of [Linux in Action][1], published by Manning.
Linux manages hardware peripherals using kernel modules. Here is how that works.
A running Linux kernel is one of those things you do not want to upset. After all, the kernel is the software that drives everything your computer does. Considering how many details have to be managed at once on a live system, it is better to leave the kernel to do its job with as few distractions as possible. But if it were impossible to make even small changes to the computing environment without rebooting the whole system, plugging in a new webcam or printer could cause a painful disruption to your workflow. Having to reboot every time you add a device just so the system will recognize it is terribly inefficient.
To create an effective balance between stability and usability, Linux isolates the kernel but lets you add specific functionality on the fly through loadable kernel modules (LKMs). As the figure below shows, you can think of a module as a piece of software that tells the kernel where to find a device and what to do with it. In turn, the kernel makes the device available to users and processes and oversees its operation.
![Kernel modules][3]
A kernel module acts as a translator between a device and the Linux kernel.
There is nothing stopping you from writing your own module to support a device exactly the way you would like, but why bother? The Linux module library is already so robust that there is usually no need to roll your own; most of the time, Linux will load a new device's module automatically without you even knowing it.
Still, there are times when, for whatever reason, that does not happen on its own. (You do not want to leave that hiring manager waiting impatiently for your smiling face to join the video-conference job interview.) To help things along, you will want to know a bit more about kernel modules; in particular, how to find the actual module that drives your peripheral, and then how to activate it manually.
### Finding kernel modules
By accepted convention, modules are files with a .ko (kernel object) extension that live beneath the `/lib/modules/` directory. Before you navigate all the way down to those files, though, you will probably have to make a choice. Because you are given the option at boot time of loading one release from a list of available kernel releases, the software needed to support your choice (including the kernel modules) has to exist somewhere, and `/lib/modules/` is one of those places. There you will find directories filled with the modules for each available Linux kernel release; for example:
```
$ ls /lib/modules
4.4.0-101-generic
4.4.0-103-generic
4.4.0-104-generic
```
The kernel release running on my machine is the one with the highest release number (4.4.0-104-generic), but there is no guarantee the same will be true for you (kernels are updated frequently). If you are going to work with modules on a running system, you need to make sure you are in the right directory tree.
Good news: there is a reliable trick. Rather than identifying the directory by name and hoping you get the right one, use the system variable that always points to the name of the active kernel. You can invoke that variable with `uname -r` (the `-r` specifies the kernel release number from within the system information that would normally be displayed):
```
$ uname -r
4.4.0-104-generic
```
With that information in hand, you can incorporate `uname` into your filesystem references through a process known as command substitution. For example, to navigate to the right directory, you add it to `/lib/modules`. To tell Linux that `uname` is not a filesystem location, enclose the `uname` part in backticks, like this:
```
$ ls /lib/modules/`uname -r`
build   modules.alias        modules.dep      modules.softdep
initrd  modules.alias.bin    modules.dep.bin  modules.symbols
kernel  modules.builtin      modules.devname  modules.symbols.bin
misc    modules.builtin.bin  modules.order    vdso
```
You will find most of the modules organized in subdirectories under the `kernel/` directory. Take a few minutes to browse through these directories to get a sense of how things are arranged and what is available; the filenames usually give you a good idea of what you are looking at:
```
$ ls /lib/modules/`uname -r`/kernel
arch  crypto  drivers  fs  kernel  lib  mm
net  sound  ubuntu  virt  zfs
```
That is one way to locate kernel modules; in fact, it is the quick way. But it is not the only way. If you want the complete set, you can list all currently loaded modules, along with some basic information, with `lsmod`. The first column of this (truncated) output (there are far too many to list here) is the module name, followed by its size and use count, and then the names of the other modules that use it:
```
$ lsmod
[...]
vboxdrv          454656  3 vboxnetadp,vboxnetflt,vboxpci
rt2x00usb        24576  1 rt2800usb
rt2800lib        94208  1 rt2800usb
[...]
```
Just how many are loaded? Well, let's run `lsmod` once more, but this time piping the output through `wc -l` to count the lines:
```
$ lsmod | wc -l
113
```
Those are the loaded modules. How many are available in total? Running `modprobe -c` and counting the lines gives us that number:
```
$ modprobe -c | wc -l
33350
```
There are 33,350 available modules! It looks as though someone has been working hard over the years to supply us with the software to drive our physical devices.
Note: on some systems you may come across customized modules, referenced either with their own entries in the `/etc/modules` file or as configuration files saved to `/etc/modules-load.d/`. Such modules are most likely the product of local development projects, perhaps involving cutting-edge experiments. Either way, it is good to know what you are looking at.
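For illustration, such a drop-in file is nothing more than a list of module names, one per line. The filename and the module chosen here are made up for the example:
```
# /etc/modules-load.d/example.conf
# modules listed here are loaded at boot by systemd-modules-load
ath9k
```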
That is how you find modules. Your next job, if a module will not start up on its own for some reason, is to figure out how to load the inactive module manually.
### Manually loading kernel modules
Before you can load a kernel module, logic dictates that you must confirm it exists. And before you can do that, you need to know what it is called. Getting that part sometimes takes equal measures of magic and luck, plus some help from the hard work of online documentation authors.
I will illustrate the process by describing a problem I ran into a while back. One fine day, for reasons unknown, the WiFi interface on a laptop stopped working. Just like that. Perhaps a software upgrade knocked it out; who knows? I ran `lshw -c network` and was treated to this very strange information:
```
network UNCLAIMED
    AR9485 Wireless Network Adapter
```
Linux recognized the interface (the Atheros AR9485) but listed it as unclaimed. Well, as they say, "When the going gets tough, the tough search the internet." I searched for "atheros ar9 linux module" and, after wading through pages of results that were five or even ten years old and suggested I either write my own module or give up, I finally discovered that (on Ubuntu 16.04, at least) a working module existed. Its name is ath9k.
Yes! The battle is as good as won! Adding a module to the kernel is a lot easier than it sounds. To double-check that the module is available, you can run `find` against the module's directory tree, specify `-type f` to tell Linux you are looking for files, and then add the string `ath9k` along with an asterisk to include all filenames that start with that string:
```
$ find /lib/modules/$(uname -r) -type f -name ath9k*
/lib/modules/4.4.0-97-generic/kernel/drivers/net/wireless/ath/ath9k/ath9k_common.ko
/lib/modules/4.4.0-97-generic/kernel/drivers/net/wireless/ath/ath9k/ath9k.ko
/lib/modules/4.4.0-97-generic/kernel/drivers/net/wireless/ath/ath9k/ath9k_htc.ko
/lib/modules/4.4.0-97-generic/kernel/drivers/net/wireless/ath/ath9k/ath9k_hw.ko
```
Just one more step -- loading the module:
```
# modprobe ath9k
```
And that is it. No reboots. No fuss.
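If you want to confirm that the module really is active, you can filter the `lsmod` output for it:
```
$ lsmod | grep ath9k
```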
Here is one more example that shows you how to work with an active module that has gone off the rails. There was a time when using my Logitech webcam with one particular piece of software would leave the camera inaccessible to any other program until the next system boot. Sometimes I needed to open the camera in a different application but did not have the time to shut down and restart. (I run a lot of applications, and getting them all back in place after a boot takes a while.)
Since the module was presumably active, searching the `lsmod` output for the word "video" should give me a hint about the relevant module's name. In fact, it was better than a hint: the only module described with the word video was uvcvideo, as you can see here:
```
$ lsmod | grep video
uvcvideo               90112  0
videobuf2_vmalloc      16384  1 uvcvideo
videobuf2_v4l2         28672  1 uvcvideo
videobuf2_core         36864  2 uvcvideo,videobuf2_v4l2
videodev              176128  4 uvcvideo,v4l2_common,videobuf2_core,videobuf2_v4l2
media                  24576  2 uvcvideo,videodev
```
It was probably something I did myself that caused the crash, and I suppose I could have dug a little deeper to see whether I could fix the problem properly. But you know how it is; sometimes you do not care about the theory and just want your device to work. So I used `rmmod` to kill the `uvcvideo` module and then `modprobe` to start it back up, and all was well again:
```
# rmmod uvcvideo
# modprobe uvcvideo
```
Once again: no reboots, and no nasty side effects.
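A related command worth knowing is `modinfo`, which prints a module's description, license, dependencies, and parameters; it is handy when you are deciding whether the module you found is the one you actually need:
```
$ modinfo uvcvideo
```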
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/5/how-load-or-unload-linux-kernel-module
作者:[David Clinton][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[amwps290](https://github.com/amwps290)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/dbclinton
[1]:https://www.manning.com/books/linux-in-action?a_aid=bootstrap-it&a_bid=4ca15fc9&chan=opensource
[2]:/file/397906
[3]:https://opensource.com/sites/default/files/uploads/kernels.png "Kernel modules"

View File

@ -1,21 +1,20 @@
translating---geekpi
How To Test A Package Without Installing It In Linux
如何在 Linux 中不安装软件就测试一个软件包
======
![](https://www.ostechnix.com/wp-content/uploads/2018/06/nix-720x340.png)
For some reason, you might want to test a package before installing it in your Linux system. If so, youre lucky! Today, I will show you how to do it in Linux using **Nix** package manager. One of the notable feature of Nix package manager is it allows the users to test the packages without having to install them first. This can be helpful when you want to use a particular application temporarily.
出于某种原因,你可能需要在将软件包安装到你的 Linux 系统之前对其进行测试。如果是这样,你很幸运!今天,我将向你展示如何在 Linux 中使用 **Nix** 包管理器来实现。Nix 包管理器的一个显著特性是它允许用户测试软件包而无需先安装它们。当你想要临时使用特定的程序时,这会很有帮助。
### Test A Package Without Installing It In Linux
Make sure you have installed Nix package manager first. If you havent installed it yet, refer the following guide.
### 测试一个软件包而不在 Linux 中安装它
For instance, let us say you want to test your C++ code. You dont have to install GCC. Just run the following command:
确保你先安装了 Nix 包管理器。如果尚未安装,请参阅以下指南。
例如,假设你想测试你的 C++ 代码。你不必安装 GCC。只需运行以下命令
```
$ nix-shell -p gcc
```
This command builds or downloads gcc package and its dependencies, then drops you into a Bash shell where the **gcc** command is present, all without affecting your normal environment.
该命令会构建或下载 gcc 软件包及其依赖项,然后将其放入一个存在 **gcc** 命令的 Bash shell 中,所有这些都不会影响正常环境。
```
LANGUAGE = (unset),
LC_ALL = (unset),
@ -55,7 +54,7 @@ Dload Upload Total Spent Left Speed
```
Check the GCC version:
检查 GCC 版本:
```
[nix-shell:~]$ gcc -v
Using built-in specs.
@ -68,46 +67,46 @@ gcc version 5.4.0 (GCC)
```
Now, go ahead and test the code. Once you are done, type **exit** to return back to your console.
现在,继续并测试代码。完成后,输入 **exit** 返回到控制台。
```
[nix-shell:~]$ exit
exit
```
Once you are exit from the nix-shell, you cant use GCC.
一旦你从 nix-shell 中退出,你就不能使用 GCC。
Here is another example.
这是另一个例子。
```
$ nix-shell -p hello
```
This builds or downloads GNU Hello and its dependencies, then drops you into a Bash shell where the **hello** command is present, all without affecting your normal environment:
这会构建或下载 GNU Hello 和它的依赖关系,然后将其放入 **hello** 命令所在的 Bash shell 中,所有这些都不会影响你的正常环境:
```
[nix-shell:~]$ hello
Hello, world!
```
Type exit to return back to the console.
输入 exit 返回到控制台。
```
[nix-shell:~]$ exit
```
Now test if hello program is available or not.
现在测试你的 hello 程序是否可用。
```
$ hello
hello: command not found
```
For more details about Nix package manager, refer the following guide.
有关 Nix 包管理器的更多详细信息,请参阅以下指南。
Hope this helps! More good stuffs to come. Stay tuned!!
希望本篇对你有帮助!还会有更好的东西。敬请关注!!
Cheers!
干杯!
@ -117,7 +116,7 @@ via: https://www.ostechnix.com/how-to-test-a-package-without-installing-it-in-li
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出